The primary area of research in the Web and Information Systems laboratory of the Department of Computer & Decision Engineering concerns information systems (both traditional and on the web). Broadly speaking, we can identify the following major themes in the laboratory's research. The MFE subjects presented below cover these themes.
Our laboratory performs collaborative research with Euranova R&D (http://euranova.eu/). The list of subjects proposed for this year by Euranova can be found here.
These subjects include topics on distributed graph processing, processing big data using Map/Reduce, cloud computing, and social networks.
This master thesis is proposed in the context of the SPICES (“Scalable Processing and mIning of Complex Events for Security-analytics”) research project, funded by Innoviris.
Within this project, our lab is developing a declarative language for Complex Event Processing (CEP for short). The goal in Complex Event Processing is to detect pre-defined patterns in a stream of raw events. Raw events are typically sensor readings (such as “password incorrect for user X trying to log in on machine Y” or “file transfer from machine X to machine Y”). The goal of CEP is then to correlate these events into complex events. For example, repeated failed login attempts by X to Y should trigger a complex event “password cracking warning” that refers to all failed login attempts.
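The password-cracking example above can be sketched as a small window-based correlation, as a toy illustration only: the event names, the window size, and the threshold below are illustrative assumptions, not part of the SPICES language.

```python
from collections import defaultdict, deque

WINDOW = 60      # seconds: how far back raw events are correlated (illustrative)
THRESHOLD = 3    # failed attempts that trigger a warning (illustrative)

def detect_cracking(events):
    """events: iterable of (timestamp, user, machine) failed-login tuples.
    Yields complex events that refer to the raw events that caused them."""
    recent = defaultdict(deque)              # (user, machine) -> recent failures
    for ts, user, machine in events:
        q = recent[(user, machine)]
        q.append((ts, user, machine))
        while q and ts - q[0][0] > WINDOW:   # drop raw events outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            yield ("password cracking warning", list(q))

warnings = list(detect_cracking([
    (0, "X", "Y"), (10, "X", "Y"), (20, "X", "Y"),
]))
```

A declarative CEP language would let the analyst state such a pattern (count, window, grouping key) directly, with the interpreter/compiler producing the equivalent of this loop on Spark or Storm.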
The objective of this master thesis is to build an interpreter/compiler for this declarative CEP language that targets the distributed computing frameworks Apache Spark and/or Apache Storm as backends. Getting acquainted with these technologies is part of the master thesis objective.
Validation of the approach
Validation of the proposed interpreter/compiler should be done on two levels:
Deliverables of the master thesis project
Interested?
Status: available
There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, …). A crucial operation when searching in this data is subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D on which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by decomposing them into graphs for which subgraph isomorphism *is* tractable. An associated algorithm that filters out graphs that certainly cannot match P can then be formulated based on ideas from information retrieval.
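The filtering idea can be illustrated with a deliberately simplified sketch (this is not the decomposition-based index of the ongoing research; here each graph is summarized merely by its set of edge labels): any graph in D that lacks an edge label occurring in P cannot contain P as a subgraph, so the expensive isomorphism test is skipped for it.

```python
# Toy filter-based candidate pruning. A graph is a list of
# (node, edge_label, node) triples; labels are illustrative.

def edge_labels(graph):
    """Set of edge labels occurring in the graph."""
    return {label for _, label, _ in graph}

def candidates(pattern, collection):
    """Graphs in `collection` that survive the cheap label filter."""
    needed = edge_labels(pattern)
    return [g for g in collection if needed <= edge_labels(g)]

P = [("a", "C-C", "b")]                       # pattern: one carbon-carbon bond
D = [
    [("1", "C-C", "2"), ("2", "C-O", "3")],   # survives: has a C-C edge
    [("1", "C-O", "2")],                      # pruned: no C-C edge at all
]
survivors = candidates(P, D)
```

The research index replaces the crude label summary with tractable decomposed subgraphs, but the filter-then-verify pipeline is the same.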
In this master thesis project, the student will empirically validate, on real-world datasets, the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.
Interested? Contact: Stijn Vansummeren
Status: available
Datalog is a fundamental query language in data management based on logic programming. It essentially extends select-from-where SQL queries with recursion. There is a recent trend in data management research to use Datalog to specify distributed applications, most notably on the web, as well as to perform inference on the semantic web. The goal of this thesis is to engineer a basic distributed Datalog system, i.e., a system that is capable of compiling & running distributed Datalog queries. The system should be implemented in the Scala programming language. Learning Scala is part of the master thesis project.
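To make the recursion concrete, here is a minimal sketch of naive bottom-up Datalog evaluation (iterate all rules to a fixpoint). The thesis system itself is to be written in Scala and distributed; Python is used here purely to illustrate the semantics, on the classic reachability program reach(X,Y) :- edge(X,Y). reach(X,Y) :- edge(X,Z), reach(Z,Y).

```python
def is_var(t):
    """Convention for this sketch: uppercase strings are variables."""
    return isinstance(t, str) and t[:1].isupper()

def match(terms, fact_terms, subst):
    """Extend substitution `subst` so that `terms` matches `fact_terms`, or None."""
    s = dict(subst)
    for t, v in zip(terms, fact_terms):
        if is_var(t):
            if t in s and s[t] != v:
                return None
            s[t] = v
        elif t != v:
            return None
    return s

def evaluate(rules, facts):
    """Naive evaluation: apply every rule until no new facts appear."""
    db = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            substs = [{}]
            for pred, terms in body:          # join body atoms left to right
                substs = [s2 for s in substs
                          for p, ft in db if p == pred
                          for s2 in [match(terms, ft, s)] if s2 is not None]
            for s in substs:
                fact = (head[0], tuple(s.get(t, t) for t in head[1]))
                if fact not in db:
                    db.add(fact)
                    changed = True
    return db

edges = {("edge", ("a", "b")), ("edge", ("b", "c"))}
rules = [
    (("reach", ("X", "Y")), [("edge", ("X", "Y"))]),
    (("reach", ("X", "Y")), [("edge", ("X", "Z")), ("reach", ("Z", "Y"))]),
]
db = evaluate(rules, edges)
```

A real system would use semi-naive evaluation (only join against newly derived facts) and partition the fact base across workers; that engineering is exactly the subject of the thesis.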
The system should:
Validation of the approach
The thesis should propose a benchmark collection of Datalog queries and associated data workloads that can be used to test the obtained system, and measure key performance characteristics (elasticity of the system, memory footprint, overall running time, …).
Required reading:
Deliverables:
Interested?
Status: available
RDF is the W3C proposed framework for representing information on the Web. Basically, information in RDF is represented as a set of triples of the form (subject, predicate, object). RDF syntax is based on directed labeled graphs, where URIs are used as node labels and edge labels. The Linked Open Data (LOD) initiative aims at extending the Web by publishing various open datasets as RDF and setting RDF links between data items from different data sources. Many companies and government agencies are moving towards publishing data following the LOD initiative. In order to do this, the original data must be transformed into Linked Open Data. Although most of these data are alphanumeric, they often contain a spatial or spatio-temporal component that must also be transformed. This can be exploited by application providers, who can build attractive and useful applications, in particular for devices like mobile phones, tablets, etc.
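As a small illustration of what such a transformation produces, the sketch below represents one entity with both an alphanumeric and a spatial component as RDF triples. The example URIs, name, and coordinates are made up; geo:asWKT is the GeoSPARQL property for attaching a geometry serialized as a WKT literal.

```python
EX = "http://example.org/"                                 # hypothetical namespace
GEO_ASWKT = "http://www.opengis.net/ont/geosparql#asWKT"   # GeoSPARQL geometry property

# A set of (subject, predicate, object) triples: an alphanumeric fact
# plus its spatial component as a WKT point (longitude latitude).
triples = [
    (EX + "museum/42", EX + "name", "Example Museum"),
    (EX + "museum/42", GEO_ASWKT, "POINT(4.36 50.85)"),
]

def objects(store, subject, predicate):
    """Look up the objects of all triples with this subject and predicate."""
    return [o for s, p, o in store if s == subject and p == predicate]
```

Linking the subject URI to entities in other datasets (e.g., the Linked Geo Data project) is what turns such isolated triples into Linked Open Data.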
The goals of this thesis are: (1) study the existing proposals for mapping spatio-temporal data into LOD; (2) apply this mapping to a real-world case study (as was the case for the Open Semantic Cloud for Brussels project); (3) based on the produced mapping, and using existing applications like the Linked Geo Data project, build applications that make use of LOD, for example to find out which cultural events are taking place at a given time at a given location.
SPARQL is the W3C standard language for querying RDF data on the semantic web. Although syntactically similar to SQL, SPARQL is based on graph matching. In addition, SPARQL is aimed primarily at querying alphanumeric data. Therefore, a proposal to extend SPARQL to support spatial data, called GeoSPARQL, has been presented to the Open Geospatial Consortium.
In this thesis we propose to (1) analyze the current GeoSPARQL proposal; (2) study current implementations of SPARQL that support spatial data; (3) implement simple spatial extensions for SPARQL, and use this language in real-world use cases.
The problem of (sub-)structure similarity search over graph data has recently drawn significant research interest due to its importance in many application areas such as bio-informatics, chem-informatics, social networks, software engineering, the World Wide Web, pattern recognition, etc. Consider, for example, the area of drug design: efficient techniques are required to query and analyze huge data sets of chemical molecules, thus shortening the discovery cycle in drug design and other scientific activities.
Graph edit distance is widely accepted as a similarity measure of labeled graphs due to its ability to cope with any kind of graph structures and labeling schemes. Today, graph edit similarity plays a significant role in managing graph data, and is employed in a variety of analysis tasks such as graph classification and clustering, object recognition in computer vision, etc.
In this master thesis project, since computing the graph edit distance is known to be NP-hard, the student will investigate current approaches that deal with this complexity while searching for similar (sub-)structures. In the end, the student should be able to empirically analyze and contrast some of the most interesting approaches.
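One family of approaches the student will encounter copes with the NP-hardness by filtering with cheap lower bounds on the edit distance: if a bound already exceeds the similarity threshold, the exact (expensive) distance need not be computed. Below is a sketch of the simple node-label multiset bound (an illustrative choice for this example; the literature offers much tighter bounds): every node insertion, deletion, or relabeling fixes at most one mismatched label occurrence per side, so the multiset difference of node labels bounds the true distance from below.

```python
from collections import Counter

def label_lower_bound(labels1, labels2):
    """Lower bound on graph edit distance from the node-label multisets.
    labels1, labels2: lists of node labels of the two graphs."""
    c1, c2 = Counter(labels1), Counter(labels2)
    only1 = sum((c1 - c2).values())   # label occurrences missing from graph 2
    only2 = sum((c2 - c1).values())   # label occurrences missing from graph 1
    return max(only1, only2)

def filter_candidates(query_labels, collection, tau):
    """Keep only graphs whose lower bound does not exceed threshold tau;
    the exact edit distance is then computed only for the survivors."""
    return [g for g in collection
            if label_lower_bound(query_labels, g) <= tau]

kept = filter_candidates(["C", "C", "O"],
                         [["C", "C", "O"], ["N", "N", "N", "N"]],
                         tau=1)
```

The filter is sound (it never discards a true answer, since the bound never exceeds the real distance) but not complete on its own: survivors still require an exact verification step.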