====== MFE 2019-2020 : Web and Information Systems ======
  
===== Introduction =====
===== Master Thesis in Collaboration with Euranova =====
  
Our laboratory performs collaborative research with Euranova R&D (http://euranova.eu/). The list of subjects proposed for this year by Euranova can be found [[https://research.euranova.eu/wp-content/uploads/proposals-thesis-2019.pdf|here]].
  
  
  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]
  
  
  
===== Dynamic Query Processing on GPU Accelerators =====
  
This master thesis is put forward in the context of the DFAQ Research Project: "Dynamic Processing of Frequently Asked Queries", funded by the Wiener-Anspach foundation.
  
Within this project, our lab is developing novel ways of processing "fast Big Data", i.e., evaluating analytical queries over data that is constantly being updated. The analytics problems envisioned cover wide areas of computer science and include database aggregate queries, probabilistic inference, matrix chain computation, and building statistical models.
  
The objective of this master thesis is to build upon the novel dynamic processing algorithms being developed in the lab, and to complement these algorithms with dynamic evaluation algorithms that execute on modern GPU architectures, thereby exploiting their massive parallel processing capabilities.
  
Since our current development is done in the Scala programming language, prospective students should either know Scala, or be willing to learn it within the context of the master thesis.
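To make the notion of dynamic (incremental) query evaluation concrete, below is a minimal Scala sketch, purely illustrative and not the lab's algorithm, that maintains a grouped SUM aggregate under insertions and deletions so that each update costs constant time instead of a full recomputation. A GPU-oriented version would process large batches of such updates in parallel.

<code scala>
import scala.collection.mutable

// Illustrative only: incremental maintenance of SELECT key, SUM(value) ... GROUP BY key.
object DynamicGroupBySum {
  private val sums = mutable.Map.empty[String, Double].withDefaultValue(0.0)

  // Each insertion or deletion touches a single group: O(1) per update.
  def insert(key: String, value: Double): Unit = sums(key) += value
  def delete(key: String, value: Double): Unit = sums(key) -= value
  def result: Map[String, Double] = sums.toMap

  def main(args: Array[String]): Unit = {
    insert("brussels", 3.0); insert("brussels", 4.5); insert("ghent", 1.0)
    delete("brussels", 3.0)
    println(result) // e.g. Map(brussels -> 4.5, ghent -> 1.0)
  }
}
</code>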
  
  
**Validation of the approach** Validation of the master thesis' work should be done on two levels:
  * a theoretical level; by proposing and discussing alternative ways to do incremental computation on GPU architectures, and comparing these from a theoretical complexity viewpoint
  * an experimental level; by proposing a benchmark collection of analytical queries that can be used to test the obtained versions of the interpreter/compiler, and reporting on the experimentally observed performance on this benchmark.
  
  
**Deliverables** of the master thesis project
  * An overview of query processing on GPUs
  * A definition of the analytical queries under consideration
  * A description of different possible dynamic evaluation algorithms for these queries on GPU architectures
  * A theoretical comparison of these possibilities
  * The implementation of the evaluation algorithm(s) (as an interpreter/compiler)
  * A benchmark set of queries and associated data sets for the experimental validation
  * An experimental validation of the compiler, and an analysis of the results
  
  
**Interested?** Contact : [[svsummer@ulb.ac.be|Stijn Vansummeren]]
  
**Status**: available
  
===== Multi-query Optimization in Spark =====
  
Distributed computing platforms such as Hadoop and Spark focus on addressing the following challenges in large systems: (1) latency, (2) scalability, and (3) fault tolerance. Dedicating computing resources to each application executed by Spark can lead to a waste of resources. Unified distributed file systems such as Alluxio have provided a platform for sharing computed results among simultaneously running applications. However, it is up to the developers to decide what to share.
  
The objective of this master thesis is to optimize the execution plans of the various applications running on a Spark platform by autonomously finding sharing opportunities, namely identifying the RDDs that can be shared among these applications, and computing these shared plans once instead of once per query.
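As an illustration of the kind of sharing opportunity targeted here, the following Scala sketch (the input file and its fields are hypothetical) runs two aggregation "queries" over the same input; caching the common RDD lets Spark perform the shared scan and parsing work only once.

<code scala>
import org.apache.spark.sql.SparkSession

// Illustrative sketch: two "queries" sharing a common RDD.
object SharedRddExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("shared-rdd").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Shared sub-plan: scan and parse the (hypothetical) event log only once.
    val events = sc.textFile("events.csv").map(_.split(",")).cache()

    // Query 1: number of events per user (field 0).
    val perUser = events.map(f => (f(0), 1L)).reduceByKey(_ + _)
    // Query 2: number of events per day (field 1).
    val perDay = events.map(f => (f(1), 1L)).reduceByKey(_ + _)

    perUser.collect().foreach(println)
    perDay.collect().foreach(println)
    spark.stop()
  }
}
</code>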
  
**Deliverables** of the master thesis project
  * An overview of the Apache Spark architecture.
  * A performance model for queries executed by Spark.
  * An implementation that optimizes queries executed by Spark and identifies sharing opportunities.
  * An experimental validation of the developed system.
  
**Interested?** Contact : [[svsummer@ulb.ac.be|Stijn Vansummeren]]
  
**Status**: available
  
===== Graph Indexing for Fast Subgraph Isomorphism Testing =====
  
There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, ...). A crucial operation when searching in this data is that of subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D on which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by decomposing them into graphs for which subgraph isomorphism *is* tractable. An associated algorithm that filters out graphs that certainly cannot match P can then be formulated based on ideas from information retrieval.
  
In this master thesis project, the student will empirically validate on real-world datasets the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.
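The filtering step can be illustrated by the following Scala sketch. It is only a sketch: the decomposition is abstracted away as pre-computed sets of canonical feature codes, and all names are illustrative. A data graph is a candidate only if its feature set contains every feature of the pattern; only the surviving candidates are passed to the expensive subgraph isomorphism test.

<code scala>
// Illustrative filter-then-verify sketch; the string features stand in for
// the tractable subgraphs obtained by decomposition.
object FilterThenVerify {
  type GraphId = String
  type Features = Set[String]

  // Necessary condition: every pattern feature must occur in the data graph.
  def candidates(pattern: Features, index: Map[GraphId, Features]): Seq[GraphId] =
    index.collect { case (id, feats) if pattern.subsetOf(feats) => id }.toSeq

  def main(args: Array[String]): Unit = {
    val index = Map(
      "mol1" -> Set("C-C", "C-O", "ring6"),
      "mol2" -> Set("C-C", "C-N"))
    // Only "mol1" survives; it alone needs the costly isomorphism check.
    println(candidates(Set("C-C", "C-O"), index))
  }
}
</code>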
  
**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: available
  
  
=====Extending SPARQL for Spatio-temporal Data Support=====
  * Contact: [[ezimanyi@ulb.ac.be|Esteban Zimányi]]
  
  
====== MFE 2019-2020 : Spatiotemporal Databases ======
Moving object databases (MOD) are database systems that can store and manage moving object data. A moving object is a value that changes over time. It can be spatial (e.g., a car driving on the road network), or non-spatial (e.g., the temperature in Brussels). Using a variety of sensors, the changing values of moving objects can be recorded in digital formats. A MOD, then, helps store and query such data. A couple of prototypes have been proposed, some of which are still active in terms of new releases. Yet, a mainstream system is by far still missing; the existing prototypes are merely research prototypes. By mainstream we mean that the development builds on widely accepted tools that are actively being maintained and developed. A mainstream system would exploit the functionality of these tools and maximize the reuse of their ecosystems. As a result, it becomes closer to end users and more easily adopted in industry.
  
-Graph edit distance ​is widely accepted as similarity measure ​of labeled graphs due to its ability to cope with any kind of graph structures ​and labeling schemes Todaygraph edit similarity plays significant role in managing graph data , and is employed in variety of analysis tasks such as graph classification ​and clusteringobject recognition in computer visionetc+In our group, we are building MobilityDB, a mainstream MOD. It builds on PostGIS, which is a spatial database extension ​of PostgreSQL. MobilityDB extends the type system ​of PostgreSQL ​and PostGIS with ADTs for representing moving object dataIt defines, for instancethe tfloat for representing ​time dependant float, and the tgeompoint for representing ​time dependant geometry point. MobilityDB types are well integrated into the platform, to achieve maximal reusability,​ hence a mainstream development. For instance, the tfloat builds on the PostgreSQL double precision type, and the tgeompoint build on the PostGIS geometry(point) type. Similarly MobilityDB builds on existing operationsindexingand optimization framework.
  
-In this master thesis project, ​ due to the hardness ​of graph edit distance ​(computing graph edit distance is known to be NP-hard problem), the student ​ will investigate the current approaches that deals with problem complexity while searching for similar (sub-)structures. ​ At the end, the student should be able to empirically analyze and contrast some of the interesting approaches +This is all made accessible via the SQL query interface. Currently MobilityDB is quite rich in terms of types and functions. It can answer sophisticated queries in SQL. The first beta version has been released as open source April 2019 (https://​github.com/​ULB-CoDE-WIT/​MobilityDB).
  
The following thesis ideas contribute to different parts of MobilityDB. They all constitute innovative development, mixing research and development, and will hence help the student develop skills in:
  * Understanding the theory and the implementation of moving object databases.
  * Understanding the architecture of extensible databases, in this case PostgreSQL.
  * Writing open source software.
  
  
=====JDBC driver for Trajectories=====
An important, and still missing, piece of MobilityDB is a Java JDBC driver that will allow Java programs to establish connections with MobilityDB, and to store and retrieve data. This thesis is about developing such a driver. As with all other components of PostgreSQL, its JDBC driver is also extensible. This documentation gives a good explanation of the driver and the way it can be extended:
https://jdbc.postgresql.org/documentation/head/index.html
It is also helpful to look at the driver extension for PostGIS:
https://github.com/postgis/postgis-java
  
As MobilityDB builds on top of PostGIS, the Java driver will need to do the same, and build on top of the PostGIS driver. Mainly, the driver will need to provide Java classes to represent all the types of MobilityDB and to access their basic properties.
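As a rough indication of what such an extension could look like, here is a minimal sketch written in Scala for brevity (the same JVM/JDBC APIs apply from Java). The class name, the parsing stub, the connection parameters, and the table queried are all assumptions for illustration; a real driver would mirror how the PostGIS driver's PGgeometry extends org.postgresql.util.PGobject.

<code scala>
import java.sql.DriverManager
import org.postgresql.PGConnection
import org.postgresql.util.PGobject

// Hypothetical wrapper for MobilityDB's tgeompoint type.
class PGtgeompoint extends PGobject {
  setType("tgeompoint")
  override def setValue(value: String): Unit = {
    // A real driver would parse the textual temporal point representation here.
    super.setValue(value)
  }
}

object MobilityDbJdbcSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder connection parameters.
    val conn = DriverManager.getConnection(
      "jdbc:postgresql://localhost:5432/mobilitydb", "user", "password")
    // Register the custom type with the PostgreSQL JDBC driver.
    conn.unwrap(classOf[PGConnection]).addDataType("tgeompoint", classOf[PGtgeompoint])

    // Table and column names are hypothetical.
    val rs = conn.createStatement().executeQuery("SELECT trip FROM trips LIMIT 1")
    while (rs.next()) println(rs.getObject("trip"))
    conn.close()
  }
}
</code>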
**Interested?**
  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]

**Status**: available
=====Mobility data exchange standards=====
Data exchange standards allow different software systems to work together. Such standards are essential in the domain of mobility. Consider for example the case of public transportation. Different vehicles (tram, metro, bus) come from different vendors, and are hence equipped with different location tracking sensors. The tracking software behind these vehicles uses different data formats. These software systems need to push real-time information to different apps. To support the passengers, for example, there must be a mobile or a Web app to check the vehicle schedules and to calculate routes. This information shall also be open to other transport service providers and to routing apps. This is how Google Maps, for instance, is able to provide end-to-end route plans that span different means of transport.
The goal of this thesis is to survey the available mobility data exchange standards, and to implement in MobilityDB import/export functions for the relevant ones; a small parsing sketch for GTFS static follows the list below. Examples of these standards are:
  * GTFS static, https://developers.google.com/transit/gtfs/
  * GTFS realtime, https://developers.google.com/transit/gtfs-realtime/
  * NeTEx static, http://netex-cen.eu/
  * SIRI, http://www.transmodel-cen.eu/standards/siri/
  * More standards can be found on http://www.transmodel-cen.eu/category/standards/
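To give a feel for what importing one of these feeds involves, here is a small Scala sketch that reads the stop_times.txt file of a locally unzipped GTFS static feed (file path hypothetical, CSV parsing kept naive for brevity) and groups the ordered stop sequence per trip. Combined with the coordinates in stops.txt, an import function could turn such sequences into MobilityDB temporal points.

<code scala>
import scala.io.Source

// Illustrative GTFS static reader: collect the ordered stop sequence of each trip.
object GtfsStopTimes {
  case class StopTime(tripId: String, departure: String, stopId: String, seq: Int)

  def main(args: Array[String]): Unit = {
    val src = Source.fromFile("gtfs/stop_times.txt")
    try {
      val lines = src.getLines().toList
      val header = lines.head.split(",").map(_.trim).zipWithIndex.toMap
      val rows = lines.tail.map { line =>
        val f = line.split(",", -1)
        StopTime(f(header("trip_id")), f(header("departure_time")),
                 f(header("stop_id")), f(header("stop_sequence")).toInt)
      }
      // Ordered stop sequence per trip; joining with stops.txt would add coordinates.
      val byTrip = rows.groupBy(_.tripId).map { case (t, sts) => t -> sts.sortBy(_.seq) }
      byTrip.take(3).foreach { case (t, sts) =>
        println(s"$t: ${sts.map(_.stopId).mkString(" -> ")}")
      }
    } finally src.close()
  }
}
</code>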
  
  
  
**Status**: available
  
=====Visualizing spatiotemporal data=====
Data visualization is essential for understanding and presenting spatiotemporal data. Consider first the temporal point, which is the database representation of a moving point object. Typically, it is visualized in a movie style, as a point that moves over a background map. The numerical attributes of this temporal point, such as the speed, are temporal floats. These can be visualized as function curves from time t to value v.
  
The goal of this thesis is to develop a visualization tool for the MobilityDB temporal types. The architecture of this tool should be innovative, so that it will be easy to extend it with more temporal types in the future. This tool should be integrated as an extension of a mainstream visualization software. A good candidate is QGIS (https://www.qgis.org/en/site/). The choice is, however, left open as part of the survey.
**Interested?**
  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]

**Status**: available
 