MFE 2019-2020 : Web and Information Systems


The primary area of research in the Web and Information Systems laboratory of the Department of Computer & Decision Engineering concerns information systems (both traditional and on the web). Broadly speaking, we can identify the following major themes in the laboratory's research. The MFE subjects presented below cover these themes.

  • Business Intelligence and Data Warehouses A data warehouse, an evolution of traditional databases, has become the basis of new-generation enterprise information systems. It contains an aggregated, historical view of the operational information of the enterprise. The laboratory's research focuses on the design of data warehouses using conceptual modeling techniques, and on their implementation on current operational platforms.
  • The Semantic Web and Web Data Management The Semantic Web, also known as the Web of Linked Data, aims at enabling people to share structured information on the Web. In the same way as one uses HTML and hyperlinks to publish and connect information on the Web of Documents, one uses the RDF data model and RDF links to publish and connect structured information on the Web of Linked Data. This has the potential to turn the Web into one huge database with structured querying capabilities that vastly exceed the limited keyword search queries so common on the Web of Documents today. Unfortunately, this potential still remains to be realized. In this respect, our work revolves around several issues: (1) the management of ontologies, especially their contextualisation and modularization, and the formalization of their spatial and temporal aspects; (2) the design of suitable query languages for the web; and (3) the design of efficient evaluation strategies for these query languages.
  • Spatio-temporal databases Today, the management of data located in space is a necessity both for organizations and individuals. The application domains are numerous: cartography, land management, network utility management (electricity, water, transportation, etc.), environment, geomarketing, location-based services. In addition, the spatial dimension is often related to a temporal or historical dimension, which means that the systems must keep track of the evolution in time of the data contained in the database. Our research consists in defining conceptual models that allow the spatial and temporal aspects of applications to be expressed, together with the mechanisms that translate these specifications into operational systems.

Please note that this list of subjects is not exhaustive. Interested students are invited to propose original subjects.

Master Thesis in Collaboration with Euranova

Our laboratory performs collaborative research with Euranova R&D. The list of subjects proposed for this year by Euranova can be found here.

These subjects include topics on distributed graph processing, processing big data using Map/Reduce, cloud computing, and social networks.

Dynamic Query Processing in Modern Big Data Architectures

Dynamic query processing refers to the activity of processing queries under constant data updates (this is also known as continuous querying). It is a core problem in modern analytic workloads.
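As a toy illustration of the idea (this is not the lab's DYN algorithm), the answer to a query can be maintained incrementally under a stream of updates, so that each update costs O(1) instead of a full recomputation:

```python
# Toy illustration of dynamic (continuous) query processing: maintain the
# answer of "SELECT sum(v), count(v)" incrementally under insertions and
# deletions, instead of recomputing it from scratch after every update.
class IncrementalSumCount:
    def __init__(self):
        self.total = 0
        self.count = 0

    def insert(self, v):
        # each update is handled in O(1) time
        self.total += v
        self.count += 1

    def delete(self, v):
        self.total -= v
        self.count -= 1

    def answer(self):
        return self.total, self.count

q = IncrementalSumCount()
for v in [3, 5, 7]:
    q.insert(v)
q.delete(5)
print(q.answer())  # -> (10, 2)
```

Real continuous queries (joins, aggregates over windows) require far more sophisticated delta processing, but the contract is the same: updates in, maintained answers out.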

Modern big data compute architectures such as Apache Spark, Apache Flink, and Apache Storm support certain forms of dynamic query processing.

In addition, our lab has recently proposed DYN, a new Dynamic Query Processing algorithm that has strong optimality guarantees, but works in a centralised setting.

The objective of this master thesis is to propose extensions to our algorithm that make it suitable for distributed implementation on one of the above-mentioned platforms, and to compare its execution efficiency against the state-of-the-art solutions provided by Spark, Flink, and Storm. In order to make this comparison meaningful, the student is expected to research, survey, and summarize the principles underlying the current state-of-the-art approaches.

Deliverables of the master thesis project

  • An overview of the continuous query processing models of Flink, Spark, and Storm
  • A qualitative comparison of the algorithms used
  • A proposal for generalizing DYN to the distributed setting
  • An implementation of this generalization by means of a compiler that outputs a continuous query processing plan
  • A benchmark set of continuous queries and associated data sets for the experimental validation
  • An experimental validation of the extension against the state of the art

Interested? Contact : Stijn Vansummeren

Status: taken

Graph Indexing for Fast Subgraph Isomorphism Testing

There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, …). A crucial operation when searching in this data is subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D on which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by decomposing them into graphs for which subgraph isomorphism *is* tractable. An associated algorithm that filters out graphs that certainly cannot match P can then be formulated, based on ideas from information retrieval.
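The filter-and-verify principle can be sketched with a much simpler feature than the decomposition described above (which is the subject of the thesis). The sketch below uses vertex-label counts: if P is a subgraph of G, every label of P must occur in G at least as often, so graphs failing this cheap test can be filtered out before any expensive isomorphism check. The graphs and labels are hypothetical.

```python
from collections import Counter

# Filter step of filter-and-verify subgraph search. A graph is represented
# here simply as {vertex: label}; edges play no role in this crude feature.
def label_features(graph):
    return Counter(graph.values())

def candidates(pattern, collection):
    # keep only graphs whose label counts dominate those of the pattern;
    # the surviving candidates would then go to a full isomorphism test
    pf = label_features(pattern)
    return [name for name, g in collection.items()
            if all(label_features(g)[lab] >= n for lab, n in pf.items())]

D = {
    "g1": {1: "C", 2: "C", 3: "O"},        # e.g. atoms of a molecule
    "g2": {1: "C", 2: "N"},
    "g3": {1: "C", 2: "C", 3: "O", 4: "N"},
}
P = {1: "C", 2: "O"}
print(candidates(P, D))  # -> ['g1', 'g3']
```

The index studied in the thesis replaces the label counts by decompositions into tractable graphs, which filter far more aggressively; the surrounding filter-then-verify pipeline stays the same.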

In this master thesis project, the student will empirically validate on real-world datasets the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.

Interested? Contact : Stijn Vansummeren

Status: taken

Extending SPARQL for Spatio-temporal Data Support

SPARQL is the W3C standard language for querying RDF data on the Semantic Web. Although syntactically similar to SQL, SPARQL is based on graph matching. Moreover, SPARQL is essentially aimed at querying alphanumerical data. Therefore, a proposal to extend SPARQL to support spatial data, called GeoSPARQL, has been presented to the Open Geospatial Consortium.

In this thesis we propose to (1) perform an analysis of the current GeoSPARQL proposal; (2) study current SPARQL implementations that support spatial data; and (3) implement simple spatial extensions to SPARQL and use them in real-world use cases.
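As an illustration of the kind of query such an extension should support, a GeoSPARQL query selecting the features inside a query polygon might look as follows (the prefixes are those of the OGC GeoSPARQL standard; the data vocabulary and coordinates are hypothetical):

```sparql
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>

# Find all features whose geometry lies within a given polygon
SELECT ?feature ?wkt
WHERE {
  ?feature geo:hasGeometry ?geom .
  ?geom    geo:asWKT       ?wkt .
  FILTER ( geof:sfWithin(?wkt,
    "POLYGON((4.28 50.79, 4.46 50.79, 4.46 50.91, 4.28 50.91, 4.28 50.79))"^^geo:wktLiteral) )
}
```

The spatial FILTER function (here the simple-features predicate geof:sfWithin) is exactly the part that plain SPARQL lacks and that the implementations surveyed in the thesis must supply.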

MFE 2019-2020 : Spatiotemporal Databases

Moving object databases (MOD) are database systems that can store and manage moving object data. A moving object is a value that changes over time. It can be spatial (e.g., a car driving on the road network) or non-spatial (e.g., the temperature in Brussels). Using a variety of sensors, the changing values of moving objects can be recorded in digital form. A MOD then helps storing and querying such data. A couple of prototypes have been proposed, some of which are still active in terms of new releases. Yet a mainstream system is still missing: existing systems are merely research prototypes. By mainstream we mean that the development builds on widely accepted tools that are actively maintained and developed. A mainstream system would exploit the functionality of these tools and maximize the reuse of their ecosystems. As a result, it would be closer to end users and more easily adopted by industry.

In our group, we are building MobilityDB, a mainstream MOD. It builds on PostGIS, the spatial database extension of PostgreSQL. MobilityDB extends the type system of PostgreSQL and PostGIS with ADTs for representing moving object data. It defines, for instance, the tfloat for representing a time-dependent float, and the tgeompoint for representing a time-dependent geometry point. MobilityDB types are well integrated into the platform, to achieve maximal reusability, hence a mainstream development. For instance, the tfloat builds on the PostgreSQL double precision type, and the tgeompoint builds on the PostGIS geometry(point) type. Similarly, MobilityDB builds on the existing operations, indexing, and optimization frameworks.

This is all made accessible via the SQL query interface. Currently MobilityDB is quite rich in terms of types and functions, and can answer sophisticated queries in SQL. The first beta version was released as open source in April 2019.
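As a small illustration of this SQL interface (the table and values are hypothetical; the temporal literal syntax and the length/speed functions follow the MobilityDB documentation):

```sql
-- A hypothetical table of vehicle trajectories stored as temporal points
CREATE TABLE trips (
  vehicle int,
  trip    tgeompoint   -- a time-dependent geometry point
);

INSERT INTO trips VALUES (1, tgeompoint
  '[Point(0 0)@2020-01-01 08:00:00, Point(0 10)@2020-01-01 08:10:00]');

-- Distance travelled, and the speed as a temporal float (a tfloat)
SELECT vehicle, length(trip), speed(trip)
FROM trips;
```

Note how the query stays ordinary SQL: the moving-object semantics live entirely in the temporal types and the functions defined on them.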

The following thesis ideas contribute to different parts of MobilityDB. They all constitute innovative development, mixing research and development, and will hence help the student develop skills in:

  • Understanding the theory and the implementation of moving object databases.
  • Understanding the architecture of extensible databases, in this case PostgreSQL.
  • Writing open source software.

JDBC driver for Trajectories

An important, and still missing, piece of MobilityDB is a Java JDBC driver, which will allow Java programs to establish connections with MobilityDB, and to store and retrieve data. This thesis is about developing such a driver. As for all other components of PostgreSQL, its JDBC driver is also extensible. The PostgreSQL JDBC documentation gives a good explanation of the driver and the way it can be extended. It is also helpful to look at the driver extension for PostGIS.

As MobilityDB builds on top of PostGIS, the Java driver will need to do the same and build on top of the PostGIS driver. Mainly, the driver will need to provide Java classes that represent all the types of MobilityDB and give access to their basic properties.
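A very simplified sketch of what one such value class could look like, for a single instant of a tgeompoint (the class name and the literal format are assumptions for illustration; a real driver class would extend org.postgresql.util.PGobject and override setValue()/getValue(), as the PostGIS driver classes do):

```java
// Simplified sketch of a driver-side value class for one instant of a
// tgeompoint, e.g. "POINT(4.35 50.85)@2020-01-01 08:00:00".
// A real MobilityDB JDBC class would extend org.postgresql.util.PGobject
// and be registered with the connection, as the PostGIS driver does.
class TGeomPointInst {
    final double x, y;
    final String timestamp;

    private TGeomPointInst(double x, double y, String ts) {
        this.x = x;
        this.y = y;
        this.timestamp = ts;
    }

    static TGeomPointInst parse(String value) {
        // split "POINT(x y)@timestamp" at the last '@'
        int at = value.lastIndexOf('@');
        String point = value.substring(0, at).trim();   // "POINT(4.35 50.85)"
        String ts = value.substring(at + 1).trim();
        String coords =
            point.substring(point.indexOf('(') + 1, point.indexOf(')'));
        String[] xy = coords.trim().split("\\s+");
        return new TGeomPointInst(Double.parseDouble(xy[0]),
                                  Double.parseDouble(xy[1]), ts);
    }
}
```

The real driver must of course also handle the sequence and set forms of the temporal types, not just single instants, and delegate the geometry parsing to the PostGIS driver.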


Status: taken

Mobility data exchange standards

Data exchange standards allow different software systems to integrate with one another. Such standards are essential in the domain of mobility. Consider for example the case of public transportation. Different vehicles (tram, metro, bus) come from different vendors, and are hence equipped with different location tracking sensors. The tracking software behind these vehicles uses different data formats. These software systems need to push real-time information to different apps. To support the passengers, for example, there must be a mobile or web app to check the vehicle schedules and to calculate routes. This information shall also be open to other transport service providers and to routing apps. This is how Google Maps, for instance, is able to provide end-to-end route plans that span different means of transport.

The goal of this thesis is to survey the available mobility data exchange standards, and to implement in MobilityDB import/export functions for the relevant ones. Examples of such standards are:


Status: available

Visualizing spatiotemporal data

Data visualization is essential for understanding and presenting data. Consider the temporal point, which is the database representation of a moving point object. Typically, it is visualized in a movie style, as a point that moves over a background map. The numerical attributes of this temporal point, such as its speed, are temporal floats. These can be visualized as function curves from the time t to the value v.
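The computation underlying both visualizations is sampling: a temporal value stored as a list of instants is interpolated to obtain the value at any intermediate time, which yields the frames of the movie or the points of the t → v curve. A minimal sketch, with a hypothetical list-of-pairs representation:

```python
# Sketch: sample a temporal float, stored as a list of (t, v) instants with
# linear interpolation between them, to obtain points of a t -> v curve.
def sample(instants, t):
    # instants: [(t0, v0), (t1, v1), ...] sorted by time; here t is a plain
    # number, e.g. seconds since the start of the observation
    for (t0, v0), (t1, v1) in zip(instants, instants[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return v0 + f * (v1 - v0)
    raise ValueError("t outside the temporal extent")

speed = [(0, 0.0), (60, 30.0), (120, 10.0)]  # a vehicle's speed over 2 minutes
print(sample(speed, 30))   # -> 15.0
print(sample(speed, 90))   # -> 20.0
```

A visualization tool repeats this at regular time steps; for a temporal point the same interpolation is applied to the x and y coordinates to place the moving dot on the map.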

The goal of this thesis is to develop a visualization tool for the MobilityDB temporal types. The architecture of this tool should be innovative, so that it will be easy to extend it with more temporal types in the future. The tool should be integrated as an extension of a mainstream visualization software; a good candidate is QGIS. The choice is however left open as part of the survey.


Status: available

Scalable Map-Matching

GPS trajectories originate as a series of absolute lat/lon coordinates. Map-matching is the method of locating the GPS observations on a road network. It transforms the lat/lon pairs into pairs of a road identifier and a fraction representing the relative position on the road. This preprocessing is essential to trajectory data analysis. It contributes to cleaning the data, as well as preparing it for network-related analysis. There are two modes of map-matching: (1) offline, where all the observations of the trajectory exist before the map-matching starts, and (2) online, where the observations arrive at the map-matcher one by one in a streaming fashion. Map-matching is known to be an expensive preprocessing step in terms of processing time. The growing amount of trajectory data (e.g., from autonomous cars) calls for map-matching methods that can scale out. This thesis is about proposing such a solution. It shall survey the existing algorithms, benchmark them, and propose a scale-out architecture.
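The core geometric step shared by map-matching algorithms can be sketched as follows (real matchers, e.g. HMM-based ones, also exploit the sequence of observations and the network topology rather than snapping points independently; the road data here is hypothetical and uses planar coordinates for simplicity):

```python
import math

# Snap one observation to the nearest road segment and return
# (road_id, fraction along that road) -- the output format described above.
def snap(p, roads):
    # p = (x, y); roads = {road_id: ((x1, y1), (x2, y2))}
    best = None
    for rid, ((x1, y1), (x2, y2)) in roads.items():
        dx, dy = x2 - x1, y2 - y1
        # orthogonal projection of p onto the segment's supporting line
        f = ((p[0] - x1) * dx + (p[1] - y1) * dy) / (dx * dx + dy * dy)
        f = max(0.0, min(1.0, f))                  # clamp to the segment
        qx, qy = x1 + f * dx, y1 + f * dy          # closest point on segment
        d = math.hypot(p[0] - qx, p[1] - qy)
        if best is None or d < best[0]:
            best = (d, rid, f)
    return best[1], best[2]

roads = {"r1": ((0, 0), (10, 0)), "r2": ((0, 5), (10, 5))}
print(snap((4, 1), roads))  # -> ('r1', 0.4)
```

Scaling this out is non-trivial precisely because the naive version scans every road per observation; a distributed solution must partition the network and the trajectory stream, which is what the surveyed architectures differ on.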

MobilityDB has types for lat/lon trajectories, as well as for map-matched trajectories. The implementation of this thesis shall be integrated with MobilityDB.


Status: available

teaching/mfe/is.txt · Last modified: 2020/09/29 17:03 by mahmsakr