Differences

This shows you the differences between two versions of the page.

teaching:mfe:is [2016/04/15 15:51]
svsummer
teaching:mfe:is [2019/04/30 15:38]
ezimanyi [JDBC driver for Trajectories]
Line 1: Line 1:
-====== MFE 2016-2017 : Web and Information Systems ======
+====== MFE 2019-2020 : Web and Information Systems ======
  
 ===== Introduction =====
Line 15: Line 15:
  
 <note>Please note that this list of subjects is **not exhaustive. Interested students are invited to propose original subjects.**</note>
-{{:teaching:mfe:euranova_thesis_2016.pdf|}}
+
 ===== Master Thesis in Collaboration with Euranova =====
  
-Our laboratory performs collaborative research with Euranova R&D (http://euranova.eu/). The list of subjects proposed for this year by Euranova can be found {{:teaching:mfe:euranova_thesis_2016.pdf|here}}.
+Our laboratory performs collaborative research with Euranova R&D (http://euranova.eu/). The list of subjects proposed for this year by Euranova can be found [[https://research.euranova.eu/wp-content/uploads/proposals-thesis-2019.pdf|here]].
  
  
Line 25: Line 25:
   * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]
  
-===== Complex Event Processing in Apache Spark and Apache Storm =====
-
-The master thesis is put forward in the context of the SPICES "Scalable Processing and mIning of Complex Events for Security-analytics" research project, funded by Innoviris.
-
-Within this project, our lab is developing a declarative language for Complex Event Processing (CEP for short). The goal in Complex Event Processing is to detect pre-defined patterns in a stream of raw events. Raw events are typically sensor readings (such as "password incorrect for user X trying to log in on machine Y" or "file transfer from machine X to machine Y"). The goal of CEP is then to correlate these events into complex events. For example, repeated failed login attempts by X to Y should trigger a complex event "password cracking warning" that refers to all failed login attempts.
-
-The objective of this master thesis is to build an interpreter/compiler for this declarative CEP language that targets the distributed computing frameworks Apache Spark and/or Apache Storm as backends. Getting acquainted with these technologies is part of the master thesis objective.
-
-**Validation of the approach** Validation of the proposed interpreter/compiler should be done on two levels:
-  * a theoretical level, by comparing the generated Spark/Storm processors to a processor based on incremental computation that is being developed at the lab
-  * an experimental level, by proposing a benchmark collection of CEP queries that can be used to test the obtained interpreter/compiler, and reporting on the experimentally observed performance on this benchmark
-
-**Deliverables** of the master thesis project:
-  * An overview of the processing models of Spark and Storm
-  * A definition of the declarative CEP language under consideration
-  * A description of the interpretation/compilation algorithm
-  * A theoretical comparison of this algorithm with respect to an incremental evaluation algorithm
-  * The interpreter/compiler itself (software artifact)
-  * A benchmark set of CEP queries and associated data sets for the experimental validation
-  * An experimental validation of the compiler, and analysis of the results
-
-**Interested?**
-  * Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
-
-**Status**: available
 +===== Dynamic Query Processing on GPU Accelerators =====
 +
 +This master thesis is put forward in the context of the DFAQ Research Project: "Dynamic Processing of Frequently Asked Queries", funded by the Wiener-Anspach Foundation.
 +
 +Within this project, our lab is developing novel ways for processing "fast Big Data", i.e., analytical queries where the underlying data is constantly being updated. The analytics problems envisioned cover wide areas of computer science and include database aggregate queries, probabilistic inference, matrix chain computation, and building statistical models.
 +
 +The objective of this master thesis is to build upon the novel dynamic processing algorithms being developed in the lab, and to complement these algorithms by proposing dynamic evaluation algorithms that execute on modern GPU architectures, thereby exploiting their massive parallel processing capabilities.
 +
 +Since our current development is done in the Scala programming language, prospective students should either know Scala or be willing to learn it within the context of the master thesis.
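To make the notion of dynamic (incremental) query evaluation concrete, here is a minimal Scala sketch assuming a simple grouped SUM aggregate. It only illustrates the general idea, is not the lab's actual algorithm, and contains no GPU code.

<code scala>
// Illustrative sketch only (not the lab's algorithm): maintain the answer of
// "SELECT grp, SUM(value) ... GROUP BY grp" incrementally under insertions and
// deletions, instead of recomputing the aggregate from scratch after each update.
object IncrementalSum {
  private val state = scala.collection.mutable.Map.empty[String, Double]

  def insert(grp: String, value: Double): Unit =
    state.update(grp, state.getOrElse(grp, 0.0) + value)

  def delete(grp: String, value: Double): Unit =
    state.update(grp, state.getOrElse(grp, 0.0) - value)

  // Current answer of the aggregate query over all updates seen so far.
  def result: Map[String, Double] = state.toMap
}
</code>

A GPU-oriented variant would batch many such deltas and apply them to all groups in parallel; designing and evaluating such data-parallel formulations is precisely the subject of this thesis.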
  
 +**Validation of the approach** Validation of the master thesis' work should be done on two levels:
 +  * a theoretical level, by proposing and discussing alternative ways to do incremental computation on GPU architectures, and comparing these from a theoretical complexity viewpoint
 +  * an experimental level, by proposing a benchmark collection of analytical queries that can be used to test the obtained versions of the interpreter/compiler, and reporting on the experimentally observed performance on this benchmark
  
-===== Graph Indexing for Fast Subgraph Isomorphism Testing =====
-
-There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, ...). A crucial operation when searching in this data is subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D on which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by decomposing them into graphs for which subgraph isomorphism *is* tractable. An associated algorithm that filters graphs that certainly cannot match P can then be formulated based on ideas from information retrieval.
-
-In this master thesis project, the student will empirically validate on real-world datasets the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.
 +**Deliverables** of the master thesis project:
 +  * An overview of query processing on GPUs
 +  * A definition of the analytical queries under consideration
 +  * A description of different possible dynamic evaluation algorithms for the analytical queries on GPU architectures
 +  * A theoretical comparison of these possibilities
 +  * The implementation of the evaluation algorithm(s) (as an interpreter/compiler)
 +  * A benchmark set of queries and associated data sets for the experimental validation
 +  * An experimental validation of the compiler, and analysis of the results
  
-**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
 +**Interested?** Contact : [[svsummer@ulb.ac.be|Stijn Vansummeren]]
  
 **Status**: available
  
-===== A Scala-based runtime and compiler for Distributed Datalog =====
 +===== Multi-query Optimization in Spark =====
  
-Datalog is a fundamental query language in data management based on logic programming. It essentially extends select-from-where SQL queries with recursion. There is a recent trend in data management research to use Datalog to specify distributed applications, most notably on the web, as well as to do inference on the semantic web. The goal of this thesis is to engineer a basic **distributed Datalog system**, i.e., a system that is capable of compiling and running distributed Datalog queries. The system should be implemented in the Scala programming language. Learning Scala is part of the master thesis project.
 +Distributed computing platforms such as Hadoop and Spark focus on addressing the following challenges in large systems: (1) latency, (2) scalability, and (3) fault tolerance. Dedicating computing resources for each application executed by Spark can lead to a waste of resources. Unified distributed file systems such as Alluxio provide a platform for sharing computed results among simultaneously running applications. However, it is up to the developers to decide on what to share.
  
-The system should:
-  * incorporate recently proposed worst-case optimal join algorithms (i.e., the [[http://arxiv.org/abs/1210.0481|leapfrog trie join]])
-  * employ known local Datalog optimizations (such as magic sets and QSQ)
 +The objective of this master thesis is to optimize the execution plans of various applications running on a Spark platform by autonomously finding sharing opportunities, namely identifying the RDDs that can be shared among these applications, and computing these shared plans once instead of multiple times for each query.
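As an illustration of the kind of sharing opportunity meant here, the following Scala sketch (the file path and filter predicates are hypothetical) shows two analyses that reuse one cached RDD instead of recomputing the common sub-plan. In the sketch the sharing is coded by hand; the thesis aims at detecting and exploiting such common sub-plans automatically, across applications.

<code scala>
import org.apache.spark.sql.SparkSession

// Hypothetical example: two queries share the "errors" RDD, which is cached
// and therefore computed only once instead of once per query.
object SharedPlanExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("shared-plan").getOrCreate()
    val sc = spark.sparkContext

    // Common sub-plan: read a log file and keep only the error lines.
    val errors = sc.textFile("hdfs:///logs/app.log")   // placeholder path
      .filter(_.contains("ERROR"))
      .cache()                                         // shared intermediate result

    val errorCount   = errors.count()                                // query 1
    val timeoutCount = errors.filter(_.contains("timeout")).count()  // query 2

    println(s"errors=$errorCount, timeouts=$timeoutCount")
    spark.stop()
  }
}
</code>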
  
-**Validation of the approach** The thesis should propose a benchmark collection of Datalog queries and associated data workloads that can be used to test the obtained system, and measure key performance characteristics (elasticity of the system; memory footprint; overall running time, ...).
 +**Deliverables** of the master thesis project:
 +  * An overview of the Apache Spark architecture.
 +  * A performance model for queries executed by Spark.
 +  * An implementation that optimizes queries executed by Spark and identifies sharing opportunities.
 +  * An experimental validation of the developed system.
  
-**Required reading**:
-  * Datalog and Recursive Query Processing (Foundations and Trends in Databases)
-  * LogicBlox, Platform and Language: A Tutorial (Todd J. Green, Molham Aref, and Grigoris Karvounarakis)
-  * Dedalus: Datalog in Time and Space (Peter Alvaro, William R. Marczak, Neil Conway, Joseph M. Hellerstein, David Maier, and Russell Sears)
-  * Declarative Networking (Loo et al.), for the distributed evaluation strategy
-  * Parallel processing of recursive queries in distributed architectures (VLDB 1989)
-  * Evaluating recursive queries in distributed databases (IEEE Transactions on Knowledge and Data Engineering, 1993)
 +**Interested?** Contact : [[svsummer@ulb.ac.be|Stijn Vansummeren]]
  
-**Deliverables**:
-  * Semantics of Datalog; overview of known optimization strategies (document)
-  * Description of the leapfrog trie join (document)
-  * Datalog system (software artifact)
-  * Experimental analysis of the developed system on a number of use cases (document)
 +**Status**: available
  
-**Interested?**
-  * Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
 +===== Graph Indexing for Fast Subgraph Isomorphism Testing =====
  
-**Status**: available
 +There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, ...). A crucial operation when searching in this data is subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D on which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by decomposing them into graphs for which subgraph isomorphism *is* tractable. An associated algorithm that filters graphs that certainly cannot match P can then be formulated based on ideas from information retrieval.
  
-===== Development of an Information Management System for a Screening and Follow-up Network for Precancerous and Cancerous Lesions of the Cervix in the Cochabamba Region of Bolivia =====
 +In this master thesis project, the student will empirically validate on real-world datasets the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.
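To illustrate the generic filter-then-verify principle behind such an index (the actual decomposition-based filter is the lab's ongoing research and is not shown here), a minimal Scala sketch using edge labels as features could look as follows: any graph whose feature set does not contain all features of the pattern can be pruned before running the expensive isomorphism test.

<code scala>
// Illustration of filter-then-verify (not the actual decomposition-based index):
// summarise every graph by a set of features -- here simply its edge labels --
// and prune the graphs that are missing a feature of the pattern P.
case class Edge(src: Int, dst: Int, label: String)

case class Graph(id: String, edges: Set[Edge]) {
  def features: Set[String] = edges.map(_.label)
}

object SubgraphFilter {
  // Graphs surviving this filter are the only candidates for the exact
  // (computationally hard) subgraph isomorphism test.
  def candidates(pattern: Graph, collection: Seq[Graph]): Seq[Graph] =
    collection.filter(g => pattern.features.subsetOf(g.features))
}
</code>

Only the surviving candidates are handed to the exact subgraph isomorphism test; the experiments in this thesis measure how much such pruning helps on real-world datasets.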
  
-Full description available here: {{:teaching:mfe:mfe_u_bio-mechatronics_codepo_01.docx|}}
 +**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
-
-**Interested?**
-  * Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
  
 **Status**: available
  
-
-=====Publishing and Using Spatio-temporal Data on the Semantic Web=====
-
-[[http://www.w3c.org/|RDF]] is the [[http://www.w3c.org/|W3C]] proposed framework for representing information in the Web. Basically, information in RDF is represented as a set of triples of the form (subject, predicate, object). RDF syntax is based on directed labeled graphs, where URIs are used as node labels and edge labels. The [[http://linkeddata.org/|Linked Open Data]] (LOD) initiative is aimed at extending the Web by publishing various open datasets as RDF and setting RDF links between data items from different data sources. Many companies and government agencies are moving towards publishing data following the LOD initiative. In order to do this, the original data must be transformed into Linked Open Data. Although most of these data are alphanumerical, most of the time they contain a spatial or spatio-temporal component, which must also be transformed. This can be exploited by application providers, who can build attractive and useful applications, in particular for devices like mobile phones, tablets, etc.
-
-The goals of this thesis are: (1) study the existing proposals for mapping spatio-temporal data into LOD; (2) apply this mapping to a real-world case study (as was the case for the [[http://www.oscb.be/|Open Semantic Cloud for Brussels]] project); (3) based on the produced mapping, and using existing applications like the [[http://linkedgeodata.org/|Linked Geo Data project]], build applications that make use of LOD, for example, to find out which cultural events are taking place at a given time at a given location.
-
-    * Contact: [[ezimanyi@ulb.ac.be|Esteban Zimányi]]
  
 =====Extending SPARQL for Spatio-temporal Data Support=====
Line 125: Line 95:
   * Contact: [[ezimanyi@ulb.ac.be|Esteban Zimányi]]
  
-=====Efficient Management of (Sub-)structure Similarity Search Over Large Graph Databases=====
-
-The problem of (sub-)structure similarity search over graph data has recently drawn significant research interest due to its importance in many application areas such as bio-informatics, chem-informatics, social networks, software engineering, the World Wide Web, pattern recognition, etc. Consider, for example, the area of drug design: efficient techniques are required to query and analyze huge data sets of chemical molecules, thus shortening the discovery cycle in drug design and other scientific activities.
-
-Graph edit distance is widely accepted as a similarity measure of labeled graphs due to its ability to cope with any kind of graph structure and labeling scheme. Today, graph edit similarity plays a significant role in managing graph data, and is employed in a variety of analysis tasks such as graph classification and clustering, object recognition in computer vision, etc.
-
-In this master thesis project, due to the hardness of graph edit distance (computing graph edit distance is known to be an NP-hard problem), the student will investigate the current approaches that deal with this complexity while searching for similar (sub-)structures. At the end, the student should be able to empirically analyze and contrast some of the interesting approaches.
  
 =====A Generic Similarity Measure For Symbolic Trajectories=====
Line 158: Line 121:
 **Status**: available
  
-=====Assessing Existing Communication Protocols In The Context Of DaaS=====
-
-Data-as-a-Service (DaaS) is an emerging cloud model. The main offering of DaaS is to allow data producers/owners to publish data services on the cloud. The idea of publishing data via a service interface is not new; SOA protocols have enabled this long ago. Yet, these protocols were not developed with the cloud and big data in mind. This is probably why the term DaaS has emerged. It marks the need for protocols and tools that enable big data exchange.
-
-DaaS services need to exchange large amounts of data. Large here refers to large message size, large message count, or a combination of both. RESTful services, for instance, communicate over HTTP, which is not a good choice for communicating large messages/files. SOAP services are not bound to HTTP, but they introduce another overhead by requiring messages to be strictly formatted in XML. This is why researchers started to reconsider older protocols like BitTorrent, and to suggest extensions to existing protocols like SOAP with Attachments.
 +=====JDBC driver for Trajectories=====
 +The research in moving object databases (MOD) has been active since the early 2000s. Many individual works have been proposed to deal with the different aspects of data modeling, indexing, operations, etc. A couple of prototypes have also been proposed, some of which are still active in terms of new releases. Yet, a mainstream system is by far still missing; existing prototypes are merely research prototypes. By mainstream we mean that the development builds on widely accepted tools that are actively being maintained and developed. A mainstream system would exploit the functionality of these tools, and would maximize the reuse of their ecosystems. As a result, it becomes closer to end users, and is more easily adopted in industry.
  
-The topic of this thesis is to perform a comprehensive survey of the protocols for data exchange, and to assess their suitability for DaaS. A quantitative comparison of protocols needs to be done, considering at least these two dimensions: (1) the protocol: SOAP, REST, BitTorrent, etc., and (2) the message: short inline, long inline, file. The assessment should be in terms of reliability, performance, and security.
 +In our group, we are in the course of building MobilityDB, a mainstream MOD. It builds on PostGIS, which is a spatial database extension of PostgreSQL. MobilityDB extends the type system of PostgreSQL and PostGIS with ADTs for representing moving object data. It defines, for instance, the tfloat type for representing a time-dependent float, and the tgeompoint type for representing a time-dependent geometry point. MobilityDB types are well integrated into the platform, to achieve maximal reusability and hence a mainstream development. For instance, the tfloat builds on the PostgreSQL double precision type, and the tgeompoint builds on the PostGIS geometry(Point) type. Similarly, MobilityDB builds on the existing operations, indexing, and optimization framework.
  
-**Deliverables** of the master thesis project:
-  * A report that reviews the state-of-the-art communication protocols.
-  * A tool for DaaS developers to choose the best protocol(s) based on their application needs. Such a tool might also provide means of automatically switching between protocols on certain thresholds.
-  * Experiments to assess the suitability of protocols for DaaS, and to compare between them. These experiments need to be repeatable, so that others can use them on their own datasets and configurations.
 +This is all made accessible via the SQL query interface. Currently MobilityDB is quite rich in terms of types and functions, and it can answer sophisticated queries in SQL.
  
-**Interested?**
-  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]
 +An important, and still missing, piece of MobilityDB is a Java JDBC driver that will allow Java programs to establish connections with MobilityDB, and to store and retrieve data. This thesis is about developing such a driver. As with all other components of PostgreSQL, its JDBC driver is also extensible. This documentation gives a good explanation of the driver and the way it can be extended:
 +https://jdbc.postgresql.org/documentation/head/index.html
 +
 +It is also helpful to look at the driver extension for PostGIS:
 +https://github.com/postgis/postgis-java
 +
 +As MobilityDB builds on top of PostGIS, the Java driver will need to do the same, and build on top of the PostGIS driver. Mainly, the driver will need to provide Java classes to represent all the types of MobilityDB, and to access their basic properties.
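The extension mechanism of the PostgreSQL JDBC driver is based on subclassing org.postgresql.util.PGobject and registering the subclass with PGConnection.addDataType. The sketch below (written in Scala for consistency with the other examples on this page, whereas the thesis targets Java; the class name, connection parameters, and the "trips" table are hypothetical) shows roughly what a wrapper for the tgeompoint type could look like.

<code scala>
import java.sql.DriverManager
import org.postgresql.PGConnection
import org.postgresql.util.PGobject

// Hypothetical wrapper for MobilityDB's tgeompoint type, following the
// PGobject extension mechanism of the PostgreSQL JDBC driver. Parsing the
// textual representation into timestamps and points is part of the thesis.
class TGeomPoint extends PGobject {
  setType("tgeompoint")
}

object MobilityDbJdbcDemo {
  def main(args: Array[String]): Unit = {
    // Connection parameters and the queried table are placeholders.
    val conn = DriverManager.getConnection(
      "jdbc:postgresql://localhost:5432/mobilitydb", "user", "password")

    // Register the custom type so the driver materialises it as TGeomPoint.
    conn.unwrap(classOf[PGConnection]).addDataType("tgeompoint", classOf[TGeomPoint])

    val rs = conn.createStatement().executeQuery("SELECT trip FROM trips LIMIT 1")
    while (rs.next()) {
      val trip = rs.getObject("trip").asInstanceOf[TGeomPoint]
      println(trip.getType + ": " + trip.getValue)  // raw textual value for now
    }
    conn.close()
  }
}
</code>

The real driver would additionally expose typed accessors (timestamps, coordinates, periods) instead of the raw textual value, much as postgis-java does for the PostGIS geometry type.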
 + 
 +This thesis project will hence help the student develop skills in:
 +  * Understanding the theory and the implementation of moving object databases.
 +  * Understanding the architecture of extensible databases, in this case PostgreSQL.
 +  * Writing open source software.
  
-**Status**: available 
 