INFO-H-419: Data Warehouses

Lecturer

  • Esteban Zimányi

Volume

  • Theory 24 h - Exercises 24 h - Project 12 h
  • 5 ECTS

Study Programme

  • Master in Computer Science and Engineering [MA-IRIF]
  • Master in Computer Sciences [INFO]
  • Erasmus Mundus Master in Big Data Management and Analytics (BDMA)

Schedule

The course is given during the first semester.

  • Lectures on Tuesdays from 2 pm to 4 pm in room S.UA4.218
  • Exercises on Fridays from 2 pm to 4 pm in room S.UB4.130

Grading

  • Group project (30%)
  • Written exam (70%)
    • The exam is open book: notes and books may be used; laptops and other electronic devices are not allowed.

Course Summary

Relational and object-oriented databases are mainly suited for operational settings, in which many small transactions query and write to the database, and consistency of the database (in the presence of potentially conflicting transactions) is of utmost importance. The situation is quite different in analytical processing, where historical data is analyzed and aggregated in many different ways. Such queries differ significantly from the typical transactional queries in the relational model:

  • Analytical queries typically touch a larger part of the database and run longer than transactional queries;
  • Analytical queries involve aggregations (min, max, avg, …) over large subgroups of the data, as the sketch below illustrates;
  • When analyzing data, it is convenient to view it as multidimensional.
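
For instance, an analytical query of the kind described above might look as follows. This is a minimal sketch assuming a hypothetical star schema with a Sales fact table and Date and Store dimension tables; the names are illustrative only, and the corresponding tables are sketched further below.

  -- Total and average sales per year and region: an aggregation over
  -- large subgroups of the data, touching most of the fact table.
  SELECT d.year,
         s.region,
         SUM(f.amount) AS total_sales,
         AVG(f.amount) AS avg_sales
  FROM Sales f
       JOIN DateDim d ON f.date_key = d.date_key
       JOIN StoreDim s ON f.store_key = s.store_key
  GROUP BY d.year, s.region;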


For these reasons, data to be analyzed is typically collected into a data warehouse with Online Analytical Processing (OLAP) support. Online here refers to the fact that the answers to the queries should not take too long to compute. Collecting the data is often referred to as Extract-Transform-Load (ETL). The data in the data warehouse needs to be organized in a way that enables the analytical queries to be executed efficiently. For the relational model, star and snowflake schemas are popular designs. Next to OLAP on top of a relational database (ROLAP), native OLAP solutions based on multidimensional structures (MOLAP) also exist. To further improve query answering efficiency, some query results can be materialized in the database ahead of time, and new indexing techniques have been developed.
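
To make the star schema design concrete, the hypothetical Sales example used above could be declared as follows. This is a sketch only; the table and column names are assumptions for illustration, not part of the course material.

  -- A star schema: one fact table referencing two dimension tables.
  -- In a snowflake schema the dimensions would be further normalized
  -- (e.g., region split off into a table of its own).
  CREATE TABLE DateDim (
    date_key INT PRIMARY KEY,
    day      INT,
    month    INT,
    year     INT
  );

  CREATE TABLE StoreDim (
    store_key  INT PRIMARY KEY,
    store_name VARCHAR(100),
    region     VARCHAR(50)
  );

  CREATE TABLE Sales (
    date_key  INT NOT NULL REFERENCES DateDim (date_key),
    store_key INT NOT NULL REFERENCES StoreDim (store_key),
    amount    DECIMAL(10,2)
  );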

In the course, the main concepts of multidimensional databases will be covered and illustrated using the SQL Server tools. To complement the course, IBM and Teradata will give invited lectures.

Books

Extra books

The following materials have been used in preparing the course, but are not required reading:

Prerequisites

  • Database System Concepts (6th ed.) by Abraham Silberschatz, Henry F. Korth, and S. Sudarshan. McGraw-Hill, 2011.
    • ER modeling: Chapter 7
    • Keys and functional dependencies: Section 8.3.1
    • BCNF: Section 8.3.2

Course Slides

Software

All software used in the course is available in the computer labs. Students who wish to install the software on their own computers can obtain free copies. Succinct instructions for acquiring the software are included below; in case additional help is required, you can contact the sysadmin of the department: Arthur Lesuisse alesuiss@ulb.ac.be

  • MS SQL Server Tools: can be downloaded for free from http://www.academicshop.be/msdnaa/ . Register on this page with your ULB email address and 'order' the free MSDNAA subscription. After verification you will receive login credentials to download quite a few software packages for free. Select the SQL Server 2014 Enterprise edition.
  • Indyco Builder can be downloaded from http://www.indyco.com/ . License keys for all students will be added soon.

Exercises

Group Project

TPC is a non-profit corporation that defines transaction processing and database benchmarks and disseminates objective, verifiable TPC performance data to the industry. Regarding data warehouses, two TPC benchmarks are relevant:

  • TPC-DS, the Decision Support Benchmark, which models the decision support functions of a retail product supplier.
  • TPC-DI, the Data Integration Benchmark, which models a typical ETL process that loads a data warehouse.

The project of the course consists of two parts:

  • Part I: Implement the TPC-DS benchmark (deadline 1/11/2018)
  • Part II: Implement the TPC-DI benchmark (deadline 20/12/2018)

You are free to choose the tools with which the two benchmarks will be implemented. For example, the TPC-DS benchmark could be implemented on SQL Server Analysis Services, Pentaho Analysis Services (aka Mondrian), etc. Similarly, the TPC-DI benchmark could be implemented with SQL Server Integration Services, Pentaho Data Integration, Talend Open Studio, plain SQL scripts, etc., which then load the data warehouse into a DBMS such as SQL Server, Oracle, PostgreSQL, etc.
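
To illustrate the plain-SQL option for TPC-DI, a single loading step might look like the following sketch; StagingCustomer and DimCustomer are hypothetical table names used for illustration, not part of the TPC-DI schema.

  -- One ETL step in plain SQL: move rows from a staging table into
  -- a dimension table, skipping customers that are already loaded.
  INSERT INTO DimCustomer (customer_id, name, city)
  SELECT s.customer_id, s.name, s.city
  FROM StagingCustomer s
  WHERE NOT EXISTS (SELECT 1
                    FROM DimCustomer d
                    WHERE d.customer_id = s.customer_id);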

Furthermore, both benchmarks can be implemented with several scale factors, which determine the size of the resulting data warehouse. For the purposes of this project you can use the smallest scale factor.

The project is carried out in groups of 3 to 4 students, which remain the same for the two parts. Before you can submit part I of the project, you will have to register your group. For this, please send an email to the lecturer with the information about your group by 1/10/2018 at the latest. The submission deadlines for parts I and II are strict.

The deliverables expected for each part of the project are the following:

  • A report in PDF explaining the essential aspects of your implementation, and
  • A zip file containing the code of your implementation, with all the instructions necessary for the lecturer to replicate your implementation on standard computing infrastructure.

The project evaluation counts for 30% of your total grade. This may seem low; however, the effort you put into the project will help you achieve a better understanding of the course material, which in turn will result in a better score in the written exam that accounts for the remaining 70% of the grade.

Groups of the current year

  • SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), SQL Server: Hung Nguyen, Valdemar Hernández Siles, Julio Candela Caceres, Ariston Harianto Lim
  • Pentaho Analysis Services, Pentaho Data Integration, PostgreSQL: Dimitrios Tsesmelis, Andrea Armani, Hridaya Subedi, Uchechukwu Fortune Njoku
  • Apache Kylin, Talend Open Studio, SQL Server: Ricardo Holthausen, Alp Albay, Jesus Huete
  • MySQL, Apache Airflow, cube.js: Ali Arous, Fabrício Ferreira, Ishaan Rachit Dwivedi, Ledia Isaj
  • Big Query, Cloud Data Fusion: Nithish Sankaranarayanan, Gayane Vardanyan, Yu-Hsuan Chen, Anant Gupta
  • Microsoft Azure and <TBD>: Rodaina Mohamed, Karim Maatouk, Yi Chiau Li, Haftamu Hailu Tefera
  • Apache Hive and <TBD>: Yalei Li, Haonan Jin, Akash Malhotra
  • <TBD> and Jaspersoft: Haroon Rashid, Mahmudul Hasan, Emir Nurmatbekov
  • Spark SQL and <TBD>: Iva Mihajlovska, Tamara Bojanic, Đorđije Krivokapic, Iryna Nazarchuk
  • Oracle and <TBD>: Samia Azzouzi, Piotr Rochala, Paul Moua

Examinations from Previous Years

 