MFE 2019-2020 : Web and Information Systems


The primary area of research in the Web and Information Systems laboratory of the Department of Computer & Decision Engineering concerns information systems (both traditional and web-based). Broadly speaking, we can identify the following major themes in the laboratory's research. The MFE subjects presented below cover these themes.

  • Business Intelligence and Data Warehouses A data warehouse, an evolution of traditional databases, has become the basis of new-generation enterprise information systems. It contains an aggregated and historical view of the operational information of the enterprise. The laboratory's research focuses on the design of data warehouses using conceptual modeling techniques, and on their implementation on current operational platforms.
  • The Semantic Web and Web Data Management The Semantic Web, also known as the Web of Linked Data, aims at enabling people to share structured information on the Web. In the same way as one uses HTML and hyperlinks to publish and connect information on the Web of Documents, one uses the RDF data model and RDF links to publish and connect structured information on the Web of Linked Data. This has the potential to turn the Web into one huge database with structured querying capabilities that vastly exceed the limited keyword search queries so common on the Web of Documents today. Unfortunately, this potential still remains to be realized. In this respect, our work revolves around several issues: (1) the management of ontologies, especially their contextualization, modularization, and the formalization of their spatial and temporal aspects; (2) the design of suitable query languages for the web; and (3) the design of efficient evaluation strategies for these query languages.
  • Spatio-temporal databases Today, the management of data located in space is a necessity for both organizations and individuals. The application domains are numerous: cartography, land management, network utility management (electricity, water, transportation, etc.), environment, geomarketing, location-based services. In addition, the spatial dimension is often related to a temporal or historical dimension, which means that the systems must keep track of the evolution in time of the data contained in the database. Our research consists of defining conceptual models that allow the spatial and temporal aspects of applications to be expressed, as well as the mechanisms allowing the translation of these specifications into operational systems.

Please note that this list of subjects is not exhaustive. Interested students are invited to propose original subjects.

Master Thesis in Collaboration with Euranova

Our laboratory performs collaborative research with Euranova R&D. The list of subjects proposed for this year by Euranova can be found here.

These subjects include topics on distributed graph processing, processing big data using Map/Reduce, cloud computing, and social networks.

Dynamic Query Processing on GPU Accelerators

This master thesis is put forward in the context of the DFAQ Research Project: “Dynamic Processing of Frequently Asked Queries”, funded by the Wiener-Anspach foundation.

Within this project, our lab is developing novel ways of processing “fast Big Data”, i.e., evaluating analytical queries over data that is constantly being updated. The analytics problems envisioned cover wide areas of computer science and include database aggregate queries, probabilistic inference, matrix chain computation, and the building of statistical models.
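To make the contrast with from-scratch recomputation concrete, here is a deliberately small Python sketch (our own illustration, not the project's actual Scala code; the class name is made up) of maintaining a grouped SUM aggregate incrementally, so that each update costs O(1) instead of a full rescan of the data:

```python
# Illustrative sketch of dynamic query processing: a materialized grouped SUM
# aggregate is maintained under single-tuple inserts and deletes, rather than
# recomputed from scratch after every update.

class DynamicGroupedSum:
    """Maintain SELECT grp, SUM(val) ... GROUP BY grp incrementally."""

    def __init__(self):
        self.sums = {}  # group -> current aggregate value

    def insert(self, group, value):
        # An insertion only touches the affected group's entry.
        self.sums[group] = self.sums.get(group, 0) + value

    def delete(self, group, value):
        # A deletion subtracts the tuple's contribution from its group.
        self.sums[group] = self.sums.get(group, 0) - value

    def result(self, group):
        # The query answer is always available without recomputation.
        return self.sums.get(group, 0)
```

Each update touches a single entry independently of all others, which is precisely the kind of access pattern that could be mapped onto the massive parallelism of a GPU, with many groups updated concurrently.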

The objective of this master thesis is to build upon the novel dynamic processing algorithms being developed in the lab, and complement these algorithms by proposing dynamic evaluation algorithms that execute on modern GPU architectures, thereby exploiting their massive parallel processing capabilities.

Since our current development is done in the Scala programming language, prospective students should either know Scala or be willing to learn it within the context of the master thesis.

Validation of the approach

The validation of the master thesis' work should be done on two levels:

  • a theoretical level; by proposing and discussing alternative ways to do incremental computation on GPU architectures, and comparing these from a theoretical complexity viewpoint
  • an experimental level; by proposing a benchmark collection of complex event processing (CEP) queries that can be used to test the obtained versions of the interpreter/compiler, and reporting on the experimentally observed performance on this benchmark.

Deliverables of the master thesis project

  • An overview of query processing on GPUs
  • A definition of the analytics queries under consideration
  • A description of different possible dynamic evaluation algorithms for the analytical queries on GPU architectures.
  • A theoretical comparison of these possibilities
  • The implementation of the evaluation algorithm(s) (as an interpreter/compiler)
  • A benchmark set of queries and associated data sets for the experimental validation
  • An experimental validation of the compiler, and analysis of the results.

Interested? Contact : Stijn Vansummeren

Status: available

Multi-query Optimization in Spark

Distributed computing platforms such as Hadoop and Spark address the following challenges in large systems: (1) latency, (2) scalability, and (3) fault tolerance. Dedicating computing resources to each application executed by Spark can lead to a waste of resources. Unified distributed file systems such as Alluxio provide a platform for sharing computed results among simultaneously running applications. However, it is up to the developers to decide what to share.

The objective of this master thesis is to optimize the execution plans of multiple applications running on a Spark platform by autonomously finding sharing opportunities, namely identifying the RDDs that can be shared among these applications, and computing these shared plans once instead of once per query.
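The kind of sharing meant here can be illustrated with a toy Python sketch (plain lists stand in for RDDs, and all names are ours, not Spark APIs): a sub-plan common to several queries is evaluated once, and each query applies only its remaining operators to the shared result.

```python
# Toy illustration of multi-query sharing (not Spark code): the shared
# sub-plan runs once instead of once per query.

calls = {"shared": 0}  # counts how often the shared sub-plan is evaluated

def expensive_filter(xs):
    # Stand-in for a costly shared transformation.
    calls["shared"] += 1
    return [x for x in xs if x % 2 == 0]

def evaluate_with_sharing(base, shared_ops, per_query_ops):
    shared = base
    for op in shared_ops:
        shared = op(shared)  # compute the common sub-plan once
                             # (in Spark terms: cache the shared RDD)
    # Each query then applies only its own remaining operator(s).
    return [op(shared) for op in per_query_ops]
```

For example, `evaluate_with_sharing(list(range(10)), [expensive_filter], [sum, max])` runs the filter a single time even though both the `sum` and the `max` query depend on it; without sharing, it would run once per query.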

Deliverables of the master thesis project

  • An overview of the Apache Spark architecture.
  • A performance model for queries executed by Spark.
  • An implementation that optimizes queries executed by Spark by identifying sharing opportunities.
  • An experimental validation of the developed system.

Interested? Contact : Stijn Vansummeren

Status: available

Graph Indexing for Fast Subgraph Isomorphism Testing

There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, …). A crucial operation when searching this data is subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D on which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by decomposing them into graphs for which subgraph isomorphism *is* tractable. An associated algorithm that filters out graphs that certainly cannot match P can then be formulated based on ideas from information retrieval.
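To give the flavour of such a filtering step, here is a deliberately simplified Python sketch of one classical necessary-condition filter, based on vertex-label multisets (this is our illustration; the lab's actual decomposition-based index is more sophisticated, and all names below are ours):

```python
# Feature-based candidate filtering for subgraph search: G can contain the
# pattern P as a subgraph only if G's vertex labels cover P's vertex labels
# with at least the same multiplicities. Graphs failing this cheap necessary
# condition are discarded before the expensive isomorphism test.

from collections import Counter

def label_signature(vertex_labels):
    # The "index entry" of a graph: a multiset of its vertex labels.
    return Counter(vertex_labels)

def may_contain(g_sig, p_sig):
    # Necessary (not sufficient) condition for P being a subgraph of G.
    return all(g_sig[label] >= count for label, count in p_sig.items())

def filter_candidates(collection, pattern_labels):
    # collection: list of (graph_name, vertex_labels) pairs.
    p_sig = label_signature(pattern_labels)
    return [name for name, labels in collection
            if may_contain(label_signature(labels), p_sig)]
```

Only the graphs that survive this filter would be handed to the (intractable in general) subgraph isomorphism test, so the filter's selectivity directly determines the overall search cost.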

In this master thesis project, the student will empirically validate on real-world datasets the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.

Interested? Contact : Stijn Vansummeren

Status: available

Extending SPARQL for Spatio-temporal Data Support

SPARQL is the W3C standard language for querying RDF data on the Semantic Web. Although syntactically similar to SQL, SPARQL is based on graph matching. In addition, SPARQL is aimed primarily at querying alphanumerical data. Therefore, a proposal to extend SPARQL to support spatial data, called GeoSPARQL, has been presented to the Open Geospatial Consortium.

In this thesis we propose to (1) perform an analysis of the current GeoSPARQL proposal; (2) study current SPARQL implementations that support spatial data; (3) implement simple spatial extensions to SPARQL, and use these extensions in real-world use cases.

A Generic Similarity Measure For Symbolic Trajectories

Moving object databases (MOD) are database systems that can store and manage moving object data. A moving object is a value that changes over time. It can be spatial (e.g., a car driving on the road network) or non-spatial (e.g., the temperature in Brussels). Using a variety of sensors, the changing values of moving objects can be recorded in digital formats. A MOD, then, helps store and query such data. There are two types of MOD. The first is the trajectory database, which manages the history of movement. The second type, in contrast, manages the stream of current movement and the prediction of the near future. This thesis belongs to the first type (trajectory databases). Research in this area mainly revolves around proposing data persistency models and query operations for trajectory data.

A sub-topic of MOD is the study of semantic trajectories. It is motivated by the fact that the semantics of the movement are lost during the observation process. Your GPS logger, for instance, would record a sequence of (lon, lat, time) triples that describe your trajectory. It won't, however, store the purpose of your trip (work, leisure, …), the transportation mode (car, bus, on foot, …), or other semantics of your trip. Research has accordingly emerged to extract semantics from raw trajectory data, and to provide database persistency for semantic trajectories.

Recently, Ralf Güting et al. published a model called “symbolic trajectories”, which can be viewed as a representation of semantic trajectories: Ralf Hartmut Güting, Fabio Valdés, and Maria Luisa Damiani. 2015. Symbolic Trajectories. ACM Trans. Spatial Algorithms Syst. 1, 2, Article 7 (July 2015), 51 pages. A symbolic trajectory is a very simple structure composed of a sequence of (time interval, label) pairs. It is thus a time-dependent label, where every label can tell something about the semantics of the moving object during its associated time interval. We think this model is promising because of its simplicity and generality.

The goal of this thesis is to implement a similarity operator for symbolic trajectories. There are three dimensions of similarity in symbolic trajectories: temporal similarity, value similarity, and semantic similarity. Such an operator should be flexible enough to express arbitrary combinations of them. It should accept a pair of symbolic trajectories and return a numerical value that can be used for clustering or ranking objects based on their similarity. Symbolic trajectories are similar to time series, except that labels are annotated with time intervals rather than time points. We think that techniques for time series similarity can be adapted to symbolic trajectories. This thesis should assess that, and implement a similarity measure based on time series similarity. The implementation is required to be done as an extension to PostGIS. We have already implemented some temporal types and operations on top of PostGIS, which you can use as a starting point.
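As a very rough starting point, the following Python sketch (our own illustration, not Güting et al.'s operator and not PostGIS code) combines the temporal and value dimensions into one score: the fraction of the covered time span during which both trajectories carry the same label.

```python
# Toy similarity for symbolic trajectories, represented as lists of
# (start, end, label) triples with start < end and non-overlapping
# intervals within each trajectory (an assumption of this sketch).

def same_label_duration(t1, t2):
    # Total time during which the two trajectories overlap with equal labels.
    total = 0.0
    for s1, e1, l1 in t1:
        for s2, e2, l2 in t2:
            overlap = min(e1, e2) - max(s1, s2)
            if overlap > 0 and l1 == l2:
                total += overlap
    return total

def similarity(t1, t2):
    # Normalize by the combined time span, yielding a value in [0, 1].
    span = max(e for _, e, _ in t1 + t2) - min(s for s, _, _ in t1 + t2)
    return same_label_duration(t1, t2) / span if span > 0 else 0.0
```

For example, for `t1 = [(0, 4, "walk"), (4, 8, "bus")]` and `t2 = [(0, 2, "walk"), (2, 8, "bus")]`, the labels agree for 6 of the 8 time units, giving a similarity of 0.75. A real operator would additionally weight semantic similarity between distinct labels, which this sketch treats as all-or-nothing.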

Deliverables of the master thesis project

  • Reporting on the state of the art in semantic trajectory similarity measures.
  • Reporting on the state of the art in time series similarity measures.
  • Assessing the application of time series similarity measures to symbolic trajectories.
  • Implementing symbolic trajectories on top of PostGIS.
  • Implementing and evaluating the proposed symbolic trajectory similarity operator.


Status: available

teaching/mfe/is.txt · Last modified: 2019/02/18 15:39 by svsummer