| Important Dates | |
|---|---|
| Application opens | February 15, 2013 |
| Application closes | April 2, 2013 |
| Notification of acceptance | April 10, 2013 |
| Deadline for payment of registration | May 1, 2013 |
| Arrival of participants | July 7, 2013 |
| Start of event | July 8, 2013 |
| End of event | July 12, 2013 |
Wil van der Aalst is a full professor of Information Systems at the Technische Universiteit Eindhoven (TU/e). Currently he is also an adjunct professor at Queensland University of Technology (QUT) working within the BPM group there. His research interests include workflow management, process mining, Petri nets, business process management, process modeling, and process analysis. Wil van der Aalst has published more than 150 journal papers, 17 books (as author or editor), 300 refereed conference/workshop publications, and 50 book chapters. Many of his papers are highly cited (he has an H-index of more than 94 according to Google Scholar, making him the European computer scientist with the highest H-index) and his ideas have influenced researchers, software developers, and standardization committees working on process support. He has been a co-chair of many conferences including the Business Process Management conference, the International Conference on Cooperative Information Systems, the International conference on the Application and Theory of Petri Nets, and the IEEE International Conference on Services Computing. He is also editor/member of the editorial board of several journals, including the Distributed and Parallel Databases, the International Journal of Business Process Integration and Management, the International Journal on Enterprise Modelling and Information Systems Architectures, Computers in Industry, Business & Information Systems Engineering, IEEE Transactions on Services Computing, Lecture Notes in Business Information Processing, and Transactions on Petri Nets and Other Models of Concurrency. In 2012, he received the degree of doctor honoris causa from Hasselt University. He is also a member of the Royal Holland Society of Sciences and Humanities (Koninklijke Hollandsche Maatschappij der Wetenschappen) and the Academy of Europe (Academia Europaea). For more information about his work visit: www.workflowpatterns.com, www.workflowcourse.com, www.processmining.org, www.yawl-system.com, www.wvdaalst.com.
Email: w.m.p.v.d.aalst@tue.nl
Web: http://www.vdaalst.com
Lecture: Process Mining
Over the last decade, process mining emerged as a new research field that focuses on the analysis of processes using event data. Classical data mining techniques such as classification, clustering, regression, association rule learning, and sequence/episode mining do not focus on business process models and are often only used to analyze a specific step in the overall process. Process mining focuses on end-to-end processes and is possible because of the growing availability of event data and new process discovery and conformance checking techniques. Process models are used for analysis (e.g., simulation and verification) and enactment by BPM/WFM systems. Previously, process models were typically made by hand without using event data. However, activities executed by people, machines, and software leave trails in so-called event logs. Process mining techniques use such logs to discover, analyze, and improve business processes. The practical relevance of process mining and the interesting scientific challenges make process mining one of the "hot" topics in Business Process Management (BPM). In his lecture, Wil van der Aalst (the "godfather" of process mining) introduces process mining as a new research field, presents examples of algorithms and practical applications, and highlights challenges that Business Intelligence research should focus on.
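To make the idea of discovering processes from event logs concrete, the following is a minimal sketch (illustrative only, not material from the lecture) that derives a directly-follows relation from a toy event log; such relations are the starting point of many discovery algorithms, e.g. the alpha algorithm or heuristic mining.

```python
from collections import defaultdict

# Toy event log: one list of activity names per case (trace).
event_log = [
    ["register", "check", "decide", "pay"],
    ["register", "check", "decide", "reject"],
    ["register", "decide", "pay"],
]

# Count how often activity a is directly followed by activity b.
directly_follows = defaultdict(int)
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        directly_follows[(a, b)] += 1

for (a, b), count in sorted(directly_follows.items()):
    print(f"{a} -> {b}: {count}")
```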
Toon Calders graduated in 1999 from the University of Antwerp with a diploma in Mathematics. He received his PhD in Computer Science from the same university in May 2003, in the database research group ADReM, and continued working in the ADReM group as a postdoc until 2006. From 2006 until 2012 he was an assistant professor in the Information Systems group at Eindhoven University of Technology. In 2012 he joined the CoDE group at the ULB as a "Chargé de Cours" (associate professor). His main research interests include data mining and machine learning. Toon Calders has published over 60 conference and journal papers in this research area and received several scientific awards for his work, including the recent "10 Year most influential paper" award for papers published in ECMLPKDD 2002. Toon Calders regularly serves on the program committees of important data mining conferences, including ACM SIGKDD, IEEE ICDM, ECMLPKDD, and SIAM DM; he was conference chair of the BNAIC 2009 and EDM 2011 conferences, and is a member of the editorial board of the Springer Data Mining journal and an Area Editor of the Information Systems journal.
Email: tcalders@ulb.ac.be
Web: http://cs.ulb.ac.be/members/tcalders
Lecture: Data mining techniques for discovering local patterns
Pattern Mining is one of the most researched topics in the data mining
community. In pattern mining the goal is to discover local patterns in a
given dataset, describing surprising observations that can be made from the
data. Examples of such patterns include sets of products purchased together
more often than expected in a supermarket, finding regularities in
seemingly random sequences of events, observing common substructures in
databases of graphs, etc. Literally hundreds of algorithms for efficiently
enumerating all frequent patterns have been proposed. In my presentation I
will give an overview of the main algorithmic techniques for efficiently
uncovering such patterns. These mostly exhaustive algorithms, however, all
suffer from the pattern explosion problem. Depending on how thresholds are
set, even for moderately sized databases, millions of patterns may be
generated. Although this problem is by now well recognized in the pattern
mining community, it has not yet been solved satisfactorily. In my talk,
attention will also be paid to current approaches that have been
proposed to alleviate this problem.
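As a rough illustration of the kind of enumeration discussed above, here is a minimal, level-wise (Apriori-style) frequent-itemset sketch over a hypothetical transaction database; it is a toy example, not one of the specific algorithms covered in the lecture.

```python
# Toy transaction database: each transaction is a set of purchased items.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"beer", "butter", "milk"},
    {"bread", "butter", "milk"},
]
MIN_SUPPORT = 3  # absolute support threshold

def support(itemset):
    """Number of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions)

# Level-wise (Apriori-style) search: only frequent itemsets are extended.
items = sorted({i for t in transactions for i in t})
level = [frozenset([i]) for i in items if support({i}) >= MIN_SUPPORT]
frequent = list(level)
while level:
    # Candidate k+1-itemsets are unions of frequent k-itemsets differing in one item.
    candidates = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
    level = [c for c in candidates if support(c) >= MIN_SUPPORT]
    frequent.extend(level)

for itemset in frequent:
    print(sorted(itemset), support(itemset))
```

Even on this tiny example the output grows quickly as the threshold is lowered, which is the pattern explosion problem mentioned above.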
Dr. Ralf-Detlef Kutsche holds a position as Academic Director in the "Database and Information Management (DIMA)" group at TU Berlin. His hot topics in teaching and research are model building and modeling methodology, focusing on "model-based semantic integration of heterogeneous information systems". Dr. Kutsche has many publications on the foundations and applications of this area, as well as comprehensive experience in national and international project leadership. In the past years, he was (co-)chair of several international conferences/workshops and an invited speaker at conferences, in international cooperations, and in industry, with a special emphasis on model-based software and data integration. He studied Mathematics at FU Berlin, focusing on Numerical Mathematics, with Computer Science as a minor at TU Berlin. After his diploma degree he worked as a scientist in Theoretical Computer Science at TU Berlin, and then on applications of formal methods to clinical information systems at the German Heart Institute Berlin (DHZB), completing his Ph.D. in this area. As a senior scientist from 1994 at the TU Berlin chair 'Computation and Information Structures (CIS)', he established the focus area 'Heterogeneous Distributed Information Systems'. At the same time he was scientific coordinator, project leader, and member of the board of leaders at the Fraunhofer Institute for Software and Systems Engineering (ISST). For an intermediate period of more than 30 months from 2005, Dr. Kutsche acted as provisional head of the CIS group. He has advised many diploma theses at TU Berlin and (co-)advised several Ph.D. dissertations within the Graduate School Distributed Information Systems, at TU Berlin, at Fraunhofer ISST, and at other universities. Presently, he is Science Chair of the BIZWARE initiative, which, after BIZYCLE, is the second large-scale project of TU Berlin and industry (SMEs) in the area of model-based software and data integration across several business domains, funded by the German BMBF. Besides his activities at TU Berlin, he is again engaged in coordinating 'Modeling' research at Fraunhofer FIRST, now part of the Fraunhofer FOKUS institute.
Email: ralf-detlef.kutsche@tu-berlin.de
Web: http://www.dima.tu-berlin.de/menue/staff/ralf-detlef_kutsche/
Lecture: Model-based software and data integration for business applications
Integration of heterogeneous distributed IT-systems is one of the major problems and cost-driving factors in the software industry today.
One of the main challenges in software and data integration is to overcome
interface incompatibilities. Integration specialists are confronted with different kinds
of semantic, structural, behavioral, communication and property mismatches.
In this lecture, we shall define the basics of an MDA-based theoretical
framework to propose solutions for all these aspects
by comprehensive software and data modeling approaches towards integration
at the CIM, PSM and PIM levels in order to achieve (semi-)automatic problem
and conflict analysis, conflict resolution, connector generation and integration.
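As a toy illustration (hypothetical field names, not part of the BIZYCLE/BIZWARE tooling) of the structural and semantic mismatches a generated connector has to bridge, consider two interfaces that name and encode the same information differently:

```python
# Hypothetical source record with structural and semantic mismatches
# relative to a target interface: different field names and different units.
source_record = {"custName": "ACME GmbH", "turnover_kEUR": 1250}  # thousands of EUR

def connector(record):
    """Toy 'connector' bridging the mismatches between the two interfaces."""
    return {
        "customer_name": record["custName"],             # structural: field renaming
        "turnover_eur": record["turnover_kEUR"] * 1000,  # semantic: unit conversion
    }

print(connector(source_record))
```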
However, small and medium enterprises (SMEs) frequently have problems applying
such methodologies due to the high complexity of general purpose modeling languages
like UML and their tools, and of ontology editors in the context of semantic enhancement
of MDA-based tools. The methodology and development of the BIZYCLE project (2007-2010),
including the MBIF tool set (Model-Based Integration Framework), addresses the
needs and requirements of SMEs, whose available staff work under daily business pressure.
Learning from these experiences, and in conjunction with a new trend in model-based
software engineering, namely 'domain-specific languages' (DSLs), another large
project, named BIZWARE, was set up among two academic and eight industrial partners.
BIZWARE now investigates the potential of DSLs and model-driven engineering for SMEs.
A systematic and standardized process of DSL-based software construction, as well as
deployment, runtime and lifecycle aspects, and operation of such domain software,
is being developed. Participative modeling between software professionals and domain experts
shall be enabled by dedicated (graphical and textual) languages in the given domains,
designed under the 'small is beautiful' philosophy.
The lecture includes the main conceptual approaches, methodologies and developments
of the previous BIZYCLE project and the running BIZWARE project, underpinned
by several applications from the business domains of our industrial partners:
healthcare, manufacturing/production, finance/insurance, publishing and facility management.
Prof. Dr.-Ing. Wolfgang Lehner (1969) is a full professor and head of the database technology group at the Dresden University of Technology (Technische Universität Dresden), Germany. He received his Master's degree (1995), Ph.D. degree (Dr.-Ing., 1998), and habilitation (venia legendi, 2001) in Computer Science from the University of Erlangen-Nuremberg. He was a PostDoc in the Business Intelligence (BI) group at the IBM Almaden Research Center in San Jose (CA). Wolfgang Lehner was also a visiting scientist at Microsoft Research in Redmond (2004, 2006), at GfK Nuremberg (2005), at UBS Zurich (2007), and at SAP Walldorf (2008) and SAP Palo Alto (2012). Up to now, Wolfgang Lehner has published five textbooks on multidimensional database systems (1999), subscription systems (2001), database technology for data warehouse systems (2003), the XQuery database language (2004), and most recently on "Data Management in the Cloud". He is also an editor of a large variety of journals, books, and proceedings. Additionally, he has published more than 150 reviewed research papers in conference proceedings and international journals. Wolfgang Lehner conducts a variety of research projects with his team members, ranging from designing data-warehouse infrastructures from a modeling perspective, supporting data-intensive applications and processes in large distributed information systems, and adding novel database functionality to relational database engines to support data mining/forecast algorithms, to investigating techniques of approximate query processing (e.g., sampling) to speed up execution times over very large data sets, and exploiting the power of main-memory-centric database architectures with an emphasis on modern hardware capabilities. Apart from basic and mostly theoretical research questions (supported by Technische Universität Dresden and the German Science Foundation, DFG), Prof. Lehner puts a strong emphasis on practical research work in combination with industrial cooperations, on an international, national, and regional level.
Email: wolfgang.lehner@tu-dresden.de
Web: http://wwwdb.inf.tu-dresden.de/team/head/prof-dr-ing-wolfgang-lehner/
Lecture: Forecasting and Data Imputation Strategies in Database Systems
Data Analytics has undergone a tremendous shift over the last few years. On the one hand, databases have grown to gigantic data volumes and data is captured in almost real time. On the other hand, sophisticated statistical algorithms have been developed to extract higher-level information out of the vast data sets. With respect to database systems, this trend created (at least) two major challenges. First, database systems have to cope with read-heavy workloads on very large databases; sophisticated parallel query processing and fault tolerance are required to cope with these challenges. Secondly, compute-intensive algorithms have to be seamlessly compiled into the data-intensive architectures of classical data management platforms. Within this lecture, I will give a comprehensive overview of challenges and some solutions to deploy complex statistical algorithms on large data sets. In the first part, I will outline challenges and some domain-specific solutions of forecasting and data imputation techniques in order to extract business insights out of incomplete or streaming data. In the second part, I will dive into some technical aspects of extending database systems with statistical algorithms as part of the query compilation and execution process. I will give insights into research as well as commercially available solutions.
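As a minimal illustration of the two techniques mentioned for incomplete data (a toy numeric series and simple textbook methods, not the lecture's actual solutions), the sketch below imputes missing values by linear interpolation and produces a one-step-ahead forecast with simple exponential smoothing.

```python
series = [120.0, 124.0, None, None, 133.0, 138.0, 141.0]  # None = missing value

def impute_linear(values):
    """Fill runs of missing values by linear interpolation between known neighbours."""
    filled = list(values)
    known = [i for i, v in enumerate(filled) if v is not None]
    for left, right in zip(known, known[1:]):
        step = (filled[right] - filled[left]) / (right - left)
        for i in range(left + 1, right):
            filled[i] = filled[left] + step * (i - left)
    return filled

def forecast_ses(values, alpha=0.5):
    """One-step-ahead forecast with simple exponential smoothing."""
    level = values[0]
    for v in values[1:]:
        level = alpha * v + (1 - alpha) * level
    return level

clean = impute_linear(series)
print("imputed:", clean)
print("next value forecast:", round(forecast_ses(clean), 2))
```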
As CTO for TARGIT, Morten Middelfart focuses on the ways company managers and leaders make decisions - in particular the extent to which computers can assist in the decision-making process. His research into the ways people make intelligent and intuitive decisions has catapulted TARGIT's position within Business Intelligence and Analytics. Today, TARGIT is recognized as one of the top 15 international vendors by industry analysts and has over 4,500 customers with more than 307,000 named users. In addition to guiding TARGIT's technological direction, he is also involved in educational activities that advocate research and experiences within Business Intelligence and Analytics. Through his research he has found new methods for companies to react faster and more efficiently to the real-world challenges that organizations face - allowing them to stay competitive in the future.
Email: morton@targit.com
Web: http://targit.com/research
Lecture: Attain BI Synergy by Teaming Human Capacity with Computing
Join Dr. Morten Middelfart, creator of TARGIT BI Suite, for an exploration of his latest research on Computer-Aided Leadership & Management (CALM). Learn the fundamentals of the CALM philosophy and discover ways your company's Business Intelligence (BI) solution should be used to accelerate decision making, increase operational awareness, and improve performance across the organization.
Attendees at this session will learn the benefits of a BI solution that can help your organization achieve advanced goals.
Oscar Romero has an MSc and a PhD in computing from the Universitat Politècnica de Catalunya. Currently, he holds a tenure-track lecturer position at the Barcelona School of Informatics. He is also a member of the MPI research group at the same university, specializing in software engineering, databases, and information systems, where he has participated in 4 different research projects. His research interests mainly fall into the business intelligence field, namely data warehousing, OLAP, and service-oriented BI; semantic-aware formalisms such as the Semantic Web, ontologies, and description logics that can be used to overcome the semantic gap in BI environments; NOSQL and alternative storage techniques for large BI data sets; and query recommendation, among others. He has authored articles and papers in national and international conferences and journals on these subjects.
Email: oromero@essi.upc.edu
Web: http://www.essi.upc.edu/~oromero/
Lecture: On the Feasibility and Need of Semantic Aware Business Intelligence
This talk tackles the convergence of two of the most influential technologies of the last decade, namely business intelligence (BI) and the Semantic Web (SW). BI is used by almost any enterprise to derive important business-critical knowledge from both internal and (increasingly) external data. When using external data, most often found on the Web, the most important issue is knowing the precise semantics of the data. Without this, the results cannot be trusted. Here, SW technologies come to the rescue, as they allow semantics ranging from very simple to very complex to be specified for any web-available resource. SW technologies not only support capturing the "passive" semantics, but also support active inference and reasoning on the data.
In this talk, we will present a characterization of BI environments in terms of what they require, followed by an introduction to the relevant SW foundation concepts. Then, we will survey the use of SW technologies for data integration, including semantic data annotation and semantics-aware extract, transform, and load (ETL) processes. Next, we will describe the relationship of multidimensional (MD) models and SW technologies, including the relationship between MD models and SW formalisms, and the use of advanced SW reasoning functionality on MD models. Finally, all the findings are discussed together and a number of directions for future research are posed.
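The following toy sketch (hypothetical data and class names, not material from the talk) hints at how lightweight SW-style reasoning over a subClassOf hierarchy can make instance data answerable at a more general business level:

```python
# Tiny in-memory triple store with RDFS-style subClassOf reasoning.
triples = {
    ("OnlineSale", "subClassOf", "Sale"),
    ("InStoreSale", "subClassOf", "Sale"),
    ("sale42", "type", "OnlineSale"),
    ("sale43", "type", "InStoreSale"),
}

def infer_types(facts):
    """Materialise inferred 'type' triples via the subClassOf hierarchy (fixpoint)."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        new = {(s, "type", c2)
               for (s, p, c1) in inferred if p == "type"
               for (c1b, p2, c2) in inferred if p2 == "subClassOf" and c1b == c1}
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

all_facts = infer_types(triples)
sales = [s for (s, p, o) in all_facts if p == "type" and o == "Sale"]
print(sorted(sales))  # both sale42 and sale43 are now classified as Sale
```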
Michael Schrefl received his Dipl.-Ing. degree and his Doctorate from Vienna University of Technology, Vienna, Austria, in 1983 and 1988, respectively. During 1983-1984 he studied at Vanderbilt University, USA, as a Fulbright scholar. From 1985 to 1992 he was with Vienna University of Technology. During 1987-1988, he was on leave at GMD IPSI, Darmstadt, where he worked on the integration of heterogeneous databases. He was appointed Professor of Business Informatics at Johannes Kepler University of Linz, Austria, in 1992, and Professor in Computer and Information Science at University of South Australia in 1998. He currently leads the Department of Business Informatics - Data and Knowledge Engineering at University of Linz, with projects in business intelligence, semantic systems, and web engineering.
Email: schrefl -at- dke.uni-linz.ac.at
Web: http://www.dke.jku.at/index.html?/staff/mschrefl.html
Lecture: Ontology-Driven Business Intelligence for Comparative Data Analysis
On-line analytical processing (OLAP) is used in three different ways in Business Intelligence: (i) for "is-reporting" in business monitoring, (ii) for "is-to-target" comparison in performance measurement, and (iii) for "is-to-is" comparison in comparative data analysis. This seminar addresses comparative data analysis as employed by business organisations such as health insurers to gain insight by detecting striking differences when comparing different but similar sets of facts (referred to as the group of interest and the group of comparison) using OLAP. Comparative data analysis in this context starts out with a vague analysis question. The groups to compare and the specific measures to use are initially unknown. They crystallize over time by varying parameters to identify group members and adjust measures. Once meaningful comparison groups and measures have been identified, the proper comparative analysis can be repeated for similar situations, e.g. for another year or another state. OLAP-based comparative data analysis is not a replacement for data mining but a precursor, identifying concrete questions to pose to statisticians.
The seminar introduces an ontology-driven business intelligence approach for comparative data analysis that has been developed in the collaborative research project SemCockpit - supported by the Austrian Ministry of Transport, Innovation, and Technology - by researchers from academia and the BI industry as well as prospective users from public health insurers. SemCockpit builds on and utilises modelling and reasoning techniques from knowledge-based systems, ontology engineering, and data warehousing to intelligently support the business analyst in her or his analysis task. It allows dimension and fact data to be complemented by concept definitions capturing relevant business terms, which can then be used in the definition of measures and scores; domain ontologies to be used as semantic dimensions; analysis processes to be represented by BI analysis graphs; and previous insights to be captured and shared through judgment rules. The approach is presented along a simplified case study.
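A minimal sketch of the underlying idea of comparing a measure between a group of interest and a group of comparison (hypothetical data and a simple ratio score, not the SemCockpit implementation):

```python
# Toy fact table of health-insurance claims.
claims = [
    {"state": "Tyrol",  "year": 2012, "drug_cost": 180.0},
    {"state": "Tyrol",  "year": 2012, "drug_cost": 220.0},
    {"state": "Vienna", "year": 2012, "drug_cost": 120.0},
    {"state": "Vienna", "year": 2012, "drug_cost": 140.0},
]

def avg_cost(facts, **selection):
    """Average drug cost over the facts matching the selection predicates."""
    group = [f for f in facts if all(f[k] == v for k, v in selection.items())]
    return sum(f["drug_cost"] for f in group) / len(group)

interest = avg_cost(claims, state="Tyrol", year=2012)     # group of interest
comparison = avg_cost(claims, state="Vienna", year=2012)  # group of comparison
print(f"score (ratio of interest to comparison): {interest / comparison:.2f}")
```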
Robert Wrembel (assoc. prof.) works in the Institute of Computing Science at the Poznan University of Technology (PUT), Poland. In 2008 he received the post-doctoral degree (habilitation) in computer science, specializing in database systems and data warehouses. He was elected deputy dean of the Faculty of Computing at the Poznań University of Technology for the terms 2008-2012 and 2012-2016. He has been actively involved in 7 research projects on databases/data warehouses, 4 industrial projects in the field of information technologies, and 3 EU-funded educational projects. Currently he leads a 4-year educational project under the EU Human Capital Programme and is the PUT coordinator of the Erasmus Mundus Joint Doctorate on Information Technologies for Business Intelligence (IT4BI-DC). Robert Wrembel has paid a number of visits to research and education centers, including INRIA Paris-Rocquencourt (France), Université Paris Dauphine (France), Klagenfurt University (Austria), Loyola University (USA), Université Lyon 2 (France), Ecole Nationale Supérieure de Mécanique et d'Aérotechnique (France), University of Maribor (Slovenia), and Universidad de Costa Rica (Costa Rica). In 2012 he finished a postgraduate course on "Innovation and Entrepreneurship" at Stanford University, within the framework of the "Top500 Innovators" scholarship received from the Polish Ministry of Research and Higher Education. Robert Wrembel is a PC member of numerous international conferences in the field of computer science and a reviewer for multiple international journals. In 2010 he received the IBM Faculty Award for highly competitive research, and in 2011 he was awarded the Medal of the Polish National Education Committee. He has also received 5 awards from the Rector of the Poznan University of Technology. The main research interests of Robert Wrembel encompass temporal and multiversion data warehouse technologies (views, data structures, compression, ETL), sequential OLAP, and object-oriented systems (views, data access optimization, method and view materialization).
Email: Robert.Wrembel@cs.put.poznan.pl
Web: http://www.cs.put.poznan.pl/rwrembel/
Lecture: Data Warehouse Physical Design
A data warehouse (DW) is a database designed to integrate and store large amounts of data coming from multiple heterogeneous and distributed sources. The data are the subject of advanced analysis by so-called Business Intelligence (BI) applications. These applications analyze large amounts of data by means of either complex SQL/MDX queries or data mining algorithms, and therefore their response times may reach hours. For this reason, an acceptable (or good) DW performance is one of the important features that must be guaranteed for DW users. Good DW performance can be achieved in multiple components of a DW architecture, starting from hardware (e.g., parallel processing on multiple nodes, fast disks, huge main memory, fast multi-core processors), through physical storage schemes (e.g., row storage, column storage, multidimensional storage, data and index compression algorithms) and state-of-the-art techniques of query optimization (e.g., cost models and size estimation techniques, parallel query optimization and execution, join algorithms), to additional data structures (e.g., materialized views, clusters, partitions).
This lecture will focus on some of the aforementioned technologies. First, three types of data structures, namely indexes (bitmap, join, and bitmap join), materialized views, and partitioned tables, will be overviewed and their functionality will be shown in the three major DW management systems, namely Oracle, IBM DB2, and SQL Server. The process of query optimization based on these data structures will be overviewed. Second, some recent research developments in the area of indexing DW data and query optimization will be outlined. Finally, open research and technological issues in DW physical design will be presented.
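To illustrate one of the index types mentioned above, here is a minimal sketch of a bitmap index (toy data, independent of any specific DW engine): one bit vector per distinct column value, with conjunctive predicates answered by bitwise AND.

```python
# Toy fact table with two low-cardinality columns.
rows = [
    {"region": "EU", "status": "paid"},
    {"region": "US", "status": "open"},
    {"region": "EU", "status": "open"},
    {"region": "EU", "status": "paid"},
]

def build_bitmap_index(table, column):
    """One bit vector (stored as a Python int) per distinct value of the column."""
    index = {}
    for row_id, row in enumerate(table):
        value = row[column]
        index[value] = index.get(value, 0) | (1 << row_id)
    return index

region_idx = build_bitmap_index(rows, "region")
status_idx = build_bitmap_index(rows, "status")

# WHERE region = 'EU' AND status = 'paid'  ->  AND the two bitmaps together.
result_bitmap = region_idx["EU"] & status_idx["paid"]
matching_rows = [i for i in range(len(rows)) if (result_bitmap >> i) & 1]
print(matching_rows)  # row ids 0 and 3
```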