Application opens: March 1, 2014
Application closes: May 15, 2014
Notification of acceptance: May 20, 2014
Deadline for payment of registration: June 1, 2014
Arrival of participants: July 6, 2014
Start of event: July 7, 2014
End of event: July 11, 2014
Dr. G. Andrienko has coordinated work packages in many international research projects on geovisualization, information visualization, visual analytics, and data mining. Since 2007 he has chaired the ICA Commission on GeoVisualization. He co-organized several workshops and conferences, including workshops on Visual Analytics at the GIScience 2006, 2008 and 2012 and AGILE 2010 conferences, two International Conferences on Coordinated & Multiple Views in Exploratory Visualization (2006-2007), and a workshop on Mining Spatio-temporal Data at PKDD 2006. He co-edited ten journal special issues, including: Int J of Geographical Information Science 2007 and 2010, Information Visualization 2008, J of Intelligent Information Systems 2006, J of Visual Languages and Computing 2011, CaGIS 2010, and Cartographica 2011. Gennady Andrienko is an associate editor of two journals, Information Visualization and IEEE Transactions on Visualization and Computer Graphics (since 2012), and an editorial board member of Cartography and Geographic Information Science (since 2013) and the ISPRS Journal of Photogrammetry and Remote Sensing.
Lecture: Visual Analytics of Movement
The mission of Visual Analytics is to find ways to fundamentally improve the division of labour between humans and machines so that computational power can amplify human perceptual and cognitive capabilities. The term “Visual Analytics” stresses the key role of visual representations as the most effective means to convey information to the human mind and to prompt human cognition and reasoning. Visual Analytics is defined as the science of analytical reasoning facilitated by interactive visual interfaces. It combines automated analysis techniques with interactive visualizations to support the synergistic work of humans and computers. In many areas of human life and activity it is important to understand the movement behaviours and mobility patterns of people, animals, vehicles, or other objects. Thanks to the recent advent of inexpensive positioning technologies, data about the movement of various mobile objects or agents are collected in rapidly growing amounts. There is a pressing need for adequate methods to analyse these data and extract relevant information. Movement data are inherently complex as they involve (geographical) space and time. In addition to their own intrinsic complexities, these components are interdependent, which multiplies the overall complexity. As a result, movement data cannot be adequately modelled (at least at present) for fully automatic analysis. At the same time, movement data, which are mostly acquired by automatic position tracking, are usually semantically very poor: the records basically consist of time stamps and coordinates. Semantic interpretations must emerge from exploration and analysis in which a human analyst plays the key role. Appropriate visual representations of movement data and of the outcomes of automated analysis procedures are paramount for this process. The presentation gives an overview of how to analyse such data. For selected tasks, we propose scalable visual analytics methods.
The work of the methods is illustrated using several examples of real-world datasets that differ significantly in their properties. We analyse to what extent these and other existing methods cover the space of movement data types and possible analysis tasks, identify the remaining gaps, and outline directions for future research.
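Since such records consist only of time stamps and coordinates, analysis typically begins by deriving attributes such as speed from consecutive position fixes. The sketch below is a minimal illustration of this step (not the lecturers' actual toolkit; the function names and the sample track are invented):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def enrich_with_speed(track):
    """Derive speed (m/s) for each segment of a track of (t_seconds, lat, lon) records."""
    segments = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        dist = haversine_m(la0, lo0, la1, lo1)
        dt = t1 - t0
        # Guard against duplicate time stamps, which occur in real tracking data.
        segments.append((t0, t1, dist / dt if dt > 0 else 0.0))
    return segments
```

Derived attributes like these are what visual displays (e.g. colouring trajectory segments by speed) are built on.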
Dr. Frithjof Dau received a diploma in mathematics from the University of Hannover, Germany, in 1994. He then worked for half a year as a C++ developer for a start-up engineering office, and for two years as a Lotus Notes consultant for a mid-sized consulting company. In 1997, he joined the research group of Prof. Rudolf Wille, founder of Formal Concept Analysis, at TU Darmstadt, Germany. In 2002, he received his PhD for his thesis "The Logic System of Concept Graphs (And its Relationship to Predicate Logic)" (published by Springer in the series "Lecture Notes in Artificial Intelligence"). After this, he worked as a lecturer and assistant professor at TU Darmstadt, FH Darmstadt, TU Dresden (all Germany) and the University of Wollongong, Australia. In April 2008, he joined SAP Research in Dresden as a researcher; he is now a senior researcher there. From 2008 to 2010, he was responsible for semantic technologies in the publicly funded project Aletheia - Semantische Föderation umfassender Produktinformationen ("Semantic Federation of Comprehensive Product Information", http://www.aletheia-projekt.de). From 2010 to 2013, he was project lead of the publicly funded project CUBIST - Combining and Uniting Business Intelligence with Semantic Technologies (http://www.cubist-project.eu/). He is currently working on an SAP-internal project based on SAP HANA.
Lecture: CUBIST - Combining and Uniting Business Intelligence with Semantic Technologies
CUBIST – Combining and Uniting Business Intelligence and Semantic Technologies – was an EU-funded research project (a so-called STREP) which ran from Oct. 2010 until Sept. 2013. This lecture summarizes the key technologies, achievements and results of CUBIST. The CUBIST project developed methodologies and a platform that combine essential features of Semantic Technologies and BI. The most prominent deviations from traditional BI platforms are: (1) the data persistency layer of the CUBIST prototype is based on a BI-enabled triple store, so CUBIST enables a user to perform BI operations over semantic data; (2) in addition to traditional charts such as bar or pie charts, CUBIST provides novel and uncommon graph-based visualizations for analysing the data. Formal Concept Analysis is used as the mathematical foundation for meaningfully clustering the data. The lecture first presents the technical background on Semantic Technologies and Formal Concept Analysis needed to understand the remainder of the lecture. Both with respect to the backend and the visual analytics, CUBIST deviates strongly from traditional BI systems. The talk will address the up- and downsides of using Semantic Technologies for BI and of FCA-based visual analytics. Within the project, a prototype was developed and tailored to three different use cases, provided by the three use case partners in CUBIST. The use cases and their specific BI-related information needs will be discussed in the lecture. The prototype and the overall CUBIST approach were evaluated at the end of the project; major results from the evaluation will be presented.
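For readers unfamiliar with Formal Concept Analysis: a formal concept of a binary object-attribute context is a pair (extent, intent) where the extent is exactly the set of objects sharing all attributes of the intent, and the intent is exactly the set of attributes common to all objects of the extent. The toy context and naive enumeration below are purely illustrative (not CUBIST code; the animals example is invented):

```python
from itertools import combinations

# Toy formal context: objects -> attributes (an invented example).
context = {
    "duck":    {"flies", "swims", "lays_eggs"},
    "eagle":   {"flies", "lays_eggs"},
    "penguin": {"swims", "lays_eggs"},
    "cat":     set(),
}

def common_attributes(objs):
    """Attributes shared by every object in objs (the derivation operator on object sets)."""
    sets = [context[o] for o in objs]
    if not sets:  # by convention, the empty object set yields all attributes
        return {a for s in context.values() for a in s}
    return set.intersection(*sets)

def objects_having(attrs):
    """Objects possessing every attribute in attrs (the derivation operator on attribute sets)."""
    return {o for o, s in context.items() if attrs <= s}

def formal_concepts():
    """Enumerate all (extent, intent) pairs by closing every subset of objects.
    Exponential in the number of objects -- fine for a toy context only."""
    concepts = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            intent = common_attributes(list(combo))
            extent = objects_having(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts
```

Real FCA tools use far more efficient algorithms, but the closure idea above is what makes the resulting concept lattice a meaningful clustering of the data.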
Asterios Katsifodimos is a Postdoctoral Researcher co-leading the Stratosphere Research Project (stratosphere.eu) in the Database Systems and Information Management (DIMA) Group at the Technische Universität Berlin (TUB). He received his PhD in 2013 from INRIA Saclay and Université Paris-Sud under the supervision of Ioana Manolescu. His PhD thesis focused on materialized view-based techniques for the management of web data. He was a member of the High Performance Computing Lab at the University of Cyprus, where he obtained his B.Sc. and M.Sc. degrees. His research interests include query optimization, large-scale distributed data management, and big data analytics.
Lecture: Big Data looks tiny from Stratosphere
In this talk, I will present Stratosphere (stratosphere.eu), a European-developed open source software stack for complex big data analytics, currently available for download. Stratosphere covers a wide variety of big data use cases, including data warehousing, information extraction/integration, data cleansing, graph analysis, and statistical analysis. Its unique set of features enables easy, efficient, and expressive programming that is well suited to the development of scalable data analytics. Stratosphere's features include “in situ” data processing, a declarative query language, treatment of user-defined functions as first-class citizens, automatic program parallelization and optimization, support for iterative programs, and an efficient, scalable execution engine.
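One listed feature, native support for iterative programs, can be illustrated conceptually. The pure-Python sketch below is not the Stratosphere API (names and data are invented); it runs a label-propagation loop to a fixpoint, the kind of iteration a dataflow engine like Stratosphere can execute natively instead of re-submitting one job per round:

```python
def connected_components(vertices, edges):
    """Find connected components by iteratively propagating the minimum
    vertex label along edges until a fixpoint is reached."""
    label = {v: v for v in vertices}
    changed = True
    while changed:          # the "iterate until convergence" step that a
        changed = False     # dataflow engine would run as a native iteration
        for u, v in edges:
            low = min(label[u], label[v])
            if label[u] != low or label[v] != low:
                label[u] = label[v] = low
                changed = True
    return label
```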
As a research engineer at Thomson (1981-1988) in the field of computer languages and formal grammars, Gabriel Kepeklian - a 1981 graduate of INSA Lyon (National Institute of Applied Sciences of Lyon) - developed compiler compilers and created supercomputer control languages. He was later an architect of new information systems at the financial institution Altus Finance (1989-1993), a product manager at Promind (1994-1997), and the founder of Salazie, a company specialized in the treatment of logistic and cartographic data (1995-2001). Having joined the Atos group in 1997, he is now responsible for R&D. In his laboratory, projects focus on Web of Data and Linked Data technologies. Gabriel Kepeklian is currently Chairman of the DataLift association, which promotes the development of the Web of Data through research, innovation and any activity that furthers its uses and technology. His publications, courses and conferences pursue essentially the same aims.
Lecture: The Web of Data, understanding the technological keys and publishing Linked Data
Data, open or not, always raise problems of heterogeneity: many data formats, many metadata schema descriptions, and the complexity of mixing them. Using Web of Data technologies, these problems may be overcome. Semantic Web technologies allow moving from raw data to linked data through elevation and interlinking phases. However, lifting raw data to Linked Data is far from straightforward. We will see the whole process and the results of a concrete implementation: a) the basic principles of the Web of Data: from raw and heterogeneous data to the lingua franca of the Web of Data; b) the RDF semantics and its main syntaxes; c) handling and querying data (triple store, endpoint, SPARQL); d) understanding the basics of knowledge modelling (ontology, OWL); e) lifting and linking data in order to publish 5-star data (DataLift).
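The triple model underlying RDF and SPARQL can be sketched in a few lines. The toy in-memory store and pattern matcher below are illustrative only (the example triples are invented; a real deployment would use an RDF library against a SPARQL endpoint):

```python
# Toy triple store: a list of (subject, predicate, object) statements.
TRIPLES = [
    ("ex:Paris",  "rdf:type",     "ex:City"),
    ("ex:Paris",  "ex:capitalOf", "ex:France"),
    ("ex:Berlin", "rdf:type",     "ex:City"),
    ("ex:Berlin", "ex:capitalOf", "ex:Germany"),
]

def match(pattern, triples=TRIPLES):
    """Match one SPARQL-like triple pattern; terms starting with '?' are variables.
    Returns one dict of variable bindings per matching triple."""
    results = []
    for triple in triples:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if binding.get(term, value) != value:
                    break  # same variable bound to two different values
                binding[term] = value
            elif term != value:
                break      # constant term does not match
        else:
            results.append(binding)
    return results
```

For example, `match(("?city", "rdf:type", "ex:City"))` plays the role of the SPARQL pattern `?city rdf:type ex:City`. A real query engine additionally joins several such patterns.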
Vincent Lemaire obtained his undergraduate degree in signal processing from the University of Paris 12 and during the same period taught electronics. He obtained a PhD in Computer Science from the University of Paris 6 in 1999. He thereafter joined the R&D Division of France Télécom, where he became a senior expert in data mining. His research interests are the application of machine learning in various areas for telecommunication companies, with a current main application in data mining for business intelligence. He has developed exploratory data analysis and classification interpretation tools. Incremental learning and clustering are now his main research interests. He obtained his Research Accreditation (HDR) in Computer Science from the University of Paris-Sud 11 (Orsay) in 2008.
Lecture: Data Stream Processing and Analytics
Data-stream processing is a recent research domain, complementary to Big Data. Algorithms of this kind analyse data on the fly and could be described as designed to treat “Fast Data”. This talk aims at providing an overview of data-stream processing approaches and consists of three parts: i) querying, ii) unsupervised learning, iii) supervised learning.
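A classic example of the on-the-fly constraint, one pass and bounded memory, is reservoir sampling, which keeps a uniform random sample of a stream of unknown length. The sketch below (Vitter's Algorithm R, illustrative and not taken from the speaker's material) shows the idea:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain a uniform random sample of k items from a stream of unknown
    length, using a single pass and O(k) memory (Vitter's Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)   # fill the reservoir first
        else:
            j = rng.randrange(i + 1)  # item replaces a slot with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir
```

The same one-pass discipline underlies stream querying and incremental learning: each element is seen once, summarized, and discarded.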
Ruth Raventós received her degree in Informatics Engineering and her Ph.D. in Informatics from the UPC. She also received an M.Sc. in Information Systems from DePaul University (Chicago, USA) and has been a visiting researcher at the University of Manchester, UK. She worked for almost ten years as a software developer and analyst at several companies (ICT, COOB'92, RACC and Mutual Cyclops) and has been Vice-dean of Relations with Corporations at the Barcelona School of Informatics of the UPC. She has taught at the Information Systems Department of ESADE Business School (Barcelona) and at the Barcelona School of Informatics of the UPC. Her main interests are conceptual modeling, requirements engineering and information systems design, and she has authored articles and papers on these subjects in national and international conferences and journals.
Lecture: Requirements Engineering for Business Intelligence
Requirements engineering is a complex process that has been studied in depth over the last two decades and usually consists of at least three main phases: requirements elicitation, requirements specification and requirements validation. Requirements engineering for Business Intelligence (BI) is an emerging topic because the techniques developed so far (use case modelling, RUP, etc.) are clearly useful for the requirements engineering of traditional information systems, but not for BI projects: systems whose main goal is to collect and analyse internal and external data to generate knowledge and value, providing decision support at the strategic, tactical and operational levels. When starting a BI project, how do you gather the business requirements? Which are the best tools to specify those requirements? How do you ensure that the product will actually meet the users' requirements? For users, it is particularly difficult to anticipate future requirements for the decision-making process. Regarding the specification of requirements, BI projects focus on visual communication using dashboards and a combination of interactive tools and navigational functions. Moreover, BI systems are developed incrementally, are seen as long-term projects and are linked to many other information systems. They usually cannot be developed from a comprehensive set of requirements specifications that could be validated at the end of the project. In this talk, we will first present an introduction to the core activities and techniques involved in requirements engineering. Then, we will describe the main differences between requirements engineering for traditional systems and for BI systems. Next, we will propose a framework of requirements engineering for BI systems and show an example of a real project as an application of this framework.
Kurt Sandkuhl is Full Professor (with tenure) of “Wirtschaftsinformatik” (Business Information Systems) at the University of Rostock (Germany) and affiliated Professor of “Information Engineering” at School of Engineering, Jönköping University (Sweden). He received a diploma (Dipl.-Inform.) and a PhD (Dr.-Ing.) in computer science from Berlin University of Technology. Furthermore, he received the Swedish degree as “Docent” (postdoctoral lecturing qualification) from Linköping University, Institute of Technology, in 2005. Between 1988 and 1994, Sandkuhl was associated with the Computer Science Faculty of Berlin University of Technology, Germany. As a scientific employee, he was engaged in education in the field of Business Information Systems and in managing R&D projects. In 1993 he received the innovation award “Dr.-Ing.-Rudolf-Hell-Innovationspreis” from Linotype-Hell, Germany. Between 1994 and 2002, Sandkuhl was associated with Fraunhofer-Institute for Software Engineering and Systems Engineering ISST, Berlin and Dortmund, Germany. In 1996, he became head of the department Internet/Intranet-Technology. From 2001, Sandkuhl was head of the Berlin Branch of Fraunhofer ISST. His work area included the management of a research institute with more than 100 scientists, strategic acquisition of contract research from industry and public authorities, and supervision of scientific work in 5 departments. In 2002, Sandkuhl joined School of Engineering at Jönköping University and was responsible for the research group in Information Engineering from 2002-2010. From 2003-2010, Sandkuhl was head of Fraunhofer ISST's project group in information engineering at Jönköping University. In 2010, Sandkuhl was appointed professor of business information systems at the University of Rostock (Germany). Sandkuhl is responsible for the BSc and MSc programs in Business Information Systems at Rostock University. 
His current research interests include the fields of information logistics, enterprise modeling, ontology engineering, and model-based software engineering. He has published four books in the field of enterprise modeling and electronic publishing, and more than 200 peer-reviewed papers in information logistics, enterprise knowledge management, CSCW, information services, and software architectures.
Lecture: Knowledge Reuse
The importance of managing organizational knowledge for enterprises has been recognized for decades. The expectation is that systematic development and reuse of knowledge will help to improve the competitiveness of the enterprise under consideration. Enterprise knowledge modelling contributes to this purpose by offering methods, tools and approaches for capturing knowledge about processes and products in formalized models in order to support the entire lifecycle of organizational knowledge management. The seminar focuses on a specific aspect of knowledge management: knowledge prepared for reuse using different techniques, such as patterns, reference models or templates. The seminar investigates different approaches to knowledge reuse from computer science and business information systems. Starting from a discussion of the fundamentals of knowledge reuse, different ways of reuse and their characteristics are introduced and compared.
Roel Wieringa is Chair of Information Systems at the University of Twente, the Netherlands. His research interests include modelling and design of e-business networks, requirements engineering, and research methodology for software engineering, information systems and the design sciences. He has written two books, Requirements Engineering: Frameworks for Understanding (Wiley, 1996) and Design Methods for Reactive Systems: Yourdon, Statemate and the UML (Morgan Kaufmann, 2003). His book Design Science Methodology for Information Systems and Software Engineering will appear in 2014 with Springer.
Lecture: Design Science Methodology for Business Intelligence
Design science is the design and investigation of artifacts in context. The artifacts can be methods, techniques, notations, business processes, organization structures, or any other system designed for a useful purpose. The context of an artifact consists of users and other stakeholders, software, hardware, organizations, or any other part of the world with which the artifact needs to interact. The methodology of design science consists of structuring and solving design problems on the one hand, and of framing and answering knowledge questions on the other. In this tutorial I first discuss the top-level structure of design science as a pair of mutually nested cycles, namely the design cycle and the empirical research cycle. The structure of the tasks in each of these cycles is reviewed, and in a discussion the audience will be invited to relate these structures to their own research. Next, I will zoom in on different ways of structuring the empirical cycle, namely as experimental versus observational research, and as case-based versus sample-based research. These two distinctions yield four kinds of empirical research, each of which has a characteristic way of modeling and reasoning about its object of study. For example, if conducted properly, sample-based experimental research can support reasoning about nondeterministic causality, and observational case-based research can support reasoning about architectures and mechanisms in the object of study. I will review these patterns of valid reasoning, using examples from the published literature and from the audience's own experience.