Differences

This shows you the differences between two versions of the page.

teaching:mfe:is [2015/04/13 14:44]
svsummer [An implementation of the SCULPT schema language for tabular data on the Web]
teaching:mfe:is [2016/04/15 15:51]
svsummer
Line 1: Line 1:
-====== MFE 2015-2016 : Web and Information Systems ======
+====== MFE 2016-2017 : Web and Information Systems ======
  
===== Introduction =====
Line 15: Line 15:
  
<note>Please note that this list of subjects is **not exhaustive. Interested students are invited to propose original subjects.**</note>
+{{:teaching:mfe:euranova_thesis_2016.pdf|}}
===== Master Thesis in Collaboration with Euranova =====
  
-Our laboratory performs collaborative research with Euranova R&D (http://euranova.eu/). The list of subjects proposed for this year by Euranova can be found
-{{:teaching:mfe:mt2014_euranova.pdf|here}}
+Our laboratory performs collaborative research with Euranova R&D (http://euranova.eu/). The list of subjects proposed for this year by Euranova can be found {{:teaching:mfe:euranova_thesis_2016.pdf|here}}.
  
These subjects include topics on distributed graph processing, processing big data using Map/Reduce, cloud computing, and social networks.
Line 25: Line 25:
  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]
  
-===== Compiling SPARQL queries into machine code =====
+===== Complex Event Processing in Apache Spark and Apache Storm =====
  
-Due to the increasing availability of larger and larger cheap RAM memories, the working set of modern database management systems becomes more and more main-memory resident. This implies that, in contrast to traditional database management systems, slow disk accesses are rare, and hence the in-memory processing speed of databases becomes an important factor. As recently observed by a number of researchers (e.g., [[http://sites.computer.org/debull/A14mar/p3.pdf|Neumann and Leis]]), one very attractive approach for fast query processing in this context is the just-in-time compilation of incoming queries into machine code. This compilation avoids the overhead of the traditional interpretation of query plans, and can aid in minimizing memory traffic for boosting performance.
+The master thesis is put forward in the context of the SPICES "Scalable Processing and mIning of Complex Events for Security-analytics" research project, funded by Innoviris.
  
-A number of recent research prototypes exist that compile SQL queries into machine code in this sense: HyPer, a Hybrid OLTP&OLAP High-Performance DBMS (http://hyper-db.de/), and LegoBase (https://github.com/epfldata/NewLegoBase and http://data.epfl.ch/legobase).
+Within this project, our lab is developing a declarative language for Complex Event Processing (CEP for short). The goal in Complex Event Processing is to detect pre-defined patterns in a stream of raw events. Raw events are typically sensor readings (such as "password incorrect for user X trying to log in on machine Y" or "file transfer from machine X to machine Y"). The goal of CEP is then to correlate these events into complex events. For example, repeated failed login attempts by X to Y should trigger a complex event "password cracking warning" that refers to all failed login attempts.
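To make the intended correlation concrete, here is a minimal Python sketch of the failed-login pattern described above (the event fields, the five-minute window, and the three-attempt threshold are illustrative assumptions, not part of the SPICES language):

<code python>
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # assumed correlation window
THRESHOLD = 3                   # assumed number of failures that triggers a warning

recent_failures = defaultdict(deque)  # (user, machine) -> times of failed logins

def on_event(event):
    """Consume one raw event; emit a complex event when the pattern matches."""
    if event["type"] != "login_failed":
        return None
    attempts = recent_failures[(event["user"], event["machine"])]
    attempts.append(event["time"])
    # Evict failures that fall outside the sliding window.
    while attempts and event["time"] - attempts[0] > WINDOW:
        attempts.popleft()
    if len(attempts) >= THRESHOLD:
        # The complex event refers to all failed login attempts, as in the text.
        return {"type": "password_cracking_warning", "user": event["user"],
                "machine": event["machine"], "evidence": list(attempts)}
    return None

base = datetime(2016, 4, 15, 10, 0)
for k in range(3):
    alert = on_event({"type": "login_failed", "user": "X", "machine": "Y",
                      "time": base + timedelta(seconds=30 * k)})
print(alert)  # warning carrying the three attempts as evidence
</code>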
  
-The objective of this master thesis is to apply the same methodology to engineer a compiler that translates (fragments of) SPARQL (the standard query language for querying RDF data on the semantic web) into machine code. The overall methodology should follow the methodology used by HyPer and LegoBase:
-  * Use of a high-level language to construct the compiler (Scala, http://scala-lang.org/)
-  * Use of Lightweight Modular Staging (LMS for short) for generating low-level portable assembly code at runtime (http://scala-lms.github.io/)
-  * Use of LLVM (http://llvm.org/) as a portable assembly code and corresponding translator to machine code.
+The objective of this master thesis is to build an interpreter/compiler for this declarative CEP language that targets the distributed computing frameworks Apache Spark and/or Apache Storm as backends. Getting acquainted with these technologies is part of the master thesis objective.
  
-Getting acquainted with these technologies is part of the master thesis objective.
+**Validation of the approach** Validation of the proposed interpreter/compiler should be done on two levels:
+  * on a theoretical level, by comparing the generated Spark/Storm processors to a processor based on "Incremental computation" that is being developed at the lab;
+  * on an experimental level, by proposing a benchmark collection of CEP queries that can be used to test the obtained interpreter/compiler, and reporting on the experimentally observed performance on this benchmark.
  
-**Validation of the approach** The thesis should propose a benchmark collection of SPARQL queries that can be used to test the obtained SPARQL-to-machine-code compiler and compare its performance against a reference, interpreter-based SPARQL compiler.
+**Deliverables** of the master thesis project:
+  * An overview of the processing models of Spark and Storm
+  * A definition of the declarative CEP language under consideration
+  * A description of the interpretation/compilation algorithm
+  * A theoretical comparison of this algorithm with respect to an incremental evaluation algorithm
+  * The interpreter/compiler itself (software artifact)
+  * A benchmark set of CEP queries and associated data sets for the experimental validation
+  * An experimental validation of the compiler, and analysis of the results
  
-**Deliverables** of the master thesis project:
-  - An overview of the state of the art in query-to-machine-code compilation.
-  - A description of Lightweight Modular Staging and how it can be used to construct machine-code compilers.
-  - The SPARQL compiler (software artifact)
-  - A benchmark set of SPARQL queries and associated data sets for the experimental validation
-  - An experimental validation of the compiler, comparing efficiency of compiled queries against a reference compiler based on query plan interpretation.

-**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
+**Interested?**
+  * Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
  
**Status**: available
  
-===== An implementation of the SCULPT schema language for tabular data on the Web =====

-Despite the availability of numerous standardized formats for semi-structured and semantic web data such as XML, RDF, and JSON, a very large percentage of data and open data published on the web remains tabular in nature. (Jeni Tennison, one of the two co-chairs of the W3C CSV on the Web working group, claims that "over 90% of the data published on data.gov.uk is tabular data".) Tabular data is most commonly published in the form of comma separated values (CSV) files because such files are open and therefore processable by numerous tools, and tailored for all sizes of files ranging from a number of KBs to several TBs. Despite these advantages, working with CSV files is often cumbersome because they are typically not accompanied by a //schema// that describes the file's structure (i.e., "the second column is of integer datatype", "columns are delimited by tabs", etc.) and captures its intended meaning. Such a description is nevertheless vital for any user trying to interpret the file and execute queries or make changes to it.
+===== Graph Indexing for Fast Subgraph Isomorphism Testing =====

-In other data models, the presence of a schema is also important for query optimization (required for scalable query execution if the file is large), as well as other static analysis tasks. Finally, schemas are a prerequisite for unlocking huge amounts of tabular data to the Semantic Web.
+There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, ...). A crucial operation when searching in this data is that of subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D on which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by means of decomposing them into graphs for which subgraph isomorphism *is* tractable. An associated algorithm that filters out graphs that certainly cannot match P can then be formulated based on ideas from information retrieval.

-In recognition of this problem, the CSV on the Web Working Group of the World Wide Web Consortium argues for the introduction of a schema language for tabular data to ensure higher interoperability when working with datasets using the CSV or similar formats.
+In this master thesis project, the student will empirically validate on real-world datasets the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.
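A minimal Python sketch of the filtering step (the graph encoding and the use of vertex-label and edge features are illustrative assumptions; the actual research decomposes graphs into classes for which subgraph isomorphism is tractable):

<code python>
from collections import Counter

def features(graph):
    """Cheap features of a graph given as (labels: node -> label, edges: set of
    node pairs): the multiset of vertex labels plus labeled undirected edges."""
    labels, edges = graph
    feat = Counter(labels.values())
    for u, v in edges:
        a, b = sorted((labels[u], labels[v]))  # undirected labeled edge
        feat[(a, b)] += 1
    return feat

def may_contain(candidate, pattern):
    """Necessary condition: every feature of the pattern P must occur at least
    as often in the candidate; if not, no subgraph isomorphism can exist."""
    cand, pat = features(candidate), features(pattern)
    return all(cand[f] >= n for f, n in pat.items())

# Only the candidates in D that pass may_contain(g, P) are handed to the
# expensive subgraph isomorphism test.
</code>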
- +
-The objective of this master thesis is to implement a recent proposal for such a schema language named SCULPT (http://arxiv.org/abs/1411.2351). Concretely, this entails:
-  * proposing an elegant concrete syntax for SCULPT schemas
-  * implementing both the in-memory and streaming validation algorithms of SCULPT proposed in http://arxiv.org/abs/1411.2351
-  * extending the SCULPT proposal, by investigating how SCULPT can be combined with complementary features recently proposed by the W3C CSV on the Web Working Group (http://www.w3.org/2013/csvw/wiki/Main_Page)
-  * in particular, extending SCULPT with features that allow tabular files to be converted into RDF
-  * creating associated tooling for SCULPT (i.e., parser and serializer generators, in the spirit of data description tools)
- +
-\\ +
-**Deliverables** of this master thesis project:
-  - detailed description of the SCULPT proposal (document)
-  - overview of the state of the art; in particular, other proposals for schema languages for tabular data (document)
-  - concrete syntax for SCULPT (design document + formal grammar)
-  - implementation of SCULPT validation algorithms (software artifact)
-  - extension of SCULPT with features for converting into RDF (document + software)
- +
- +
-**Interested?** Contact: [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
-
-**Status**: available
- +
-===== Engineering a runtime system and compiler for AQL ===== +
- +
-Automatically extracting structured information from text is a task that has been pursued for decades. As a discipline, //Information Extraction// (IE) had its start with the [[http://acl.ldc.upenn.edu/C/C96/C96-1079.pdf|DARPA Message Understanding Conference in 1987]]. While early work in the area focused largely on military applications, recent changes have made information extraction increasingly important to an increasingly broad audience. Trends such as the rise of social media have produced huge amounts of text data, while analytics platforms like Hadoop have at the same time made the analysis of this data more accessible to a broad range of users. Since most analytics over text involves information extraction as a first step, IE is a very important part of data analysis in the enterprise today.
- +
-In 2005, researchers at the IBM Almaden Research Center developed a new system specifically geared for practical information extraction in the enterprise. This effort led to SystemT, a rule-based IE system with an SQL-like declarative language named AQL (Annotation Query Language). The declarative nature of AQL enables new kinds of tools for extractor development, and draws upon known techniques from query processing in relational database management systems to offer a cost-based optimizer that ensures high-throughput performance. Recent research into the foundations of AQL (http://researcher.watson.ibm.com/researcher/files/us-fagin/jacm15.pdf) has shown that, as an alternative, it is also possible to build a runtime system for AQL based on special kinds of finite state automata. A potential benefit of this alternative runtime system is that text files need only be processed once (instead of multiple times in the cost-based optimizer backend) and it may hence provide greater throughput. On the other hand, the alternative system can sometimes have larger memory requirements than the cost-based optimizer backend.
- +
-The objective of this master thesis is to design and engineer a runtime system and compiler for (a fragment of) AQL based on finite state automata. Ideally, to obtain the best performance, these automata should be compiled into machine code when executed. For this compilation, the following technologies should be used:
-  * Use of a high-level language to construct the compiler (Scala, http://scala-lang.org/)
-  * Use of Lightweight Modular Staging (LMS for short) for generating low-level portable assembly from the automata at runtime (http://scala-lms.github.io/)
-  * Use of LLVM (http://llvm.org/) as a portable assembly code and corresponding translator to machine code.
- +
-Getting acquainted with these technologies is part of the master thesis objective.
- +
-**Validation of the approach** The thesis should propose a benchmark collection of AQL queries and associated input text files that can be used to test the obtained automaton-based AQL compiler and compare its performance against the reference, cost-based optimizer of SystemT. +
- +
-**Deliverables** of the master thesis project: +
-  - An overview of AQL, SystemT, and its cost-based optimizer and evaluation engine. (document) +
-  - A description of how AQL can be evaluated by means of so-called vset finite state automata. (document) +
-  - A detailed description of the state of the art in evaluating finite state automata. (document)
-  - Identification of the AQL syntax that is to be supported. (specification)
-  - The AQL compiler (software artifact) +
-  - A benchmark set of AQL queries and associated data sets for the experimental validation +
-  - An experimental validation ​of the compiler, comparing efficiency of compiled queries against the cost-based reference compiler. +
- +
-\\ +
-**References about SystemT**:​ +
-  * [[http://​almaden.ibm.com/​cs/​projects/​avatar/​icde2008.pdf|An Algebraic Approach to Rule-Based Information Extraction]]  +
-  * [[http://​www.sigmod.org/​publications/​sigmod-record/​0812/​p007.special.krishnamurthy.pdf|SystemT:​ A System for Declarative Information Extraction]] +
- +
-\\ +
-**References about finite state automata evaluation**:​ +
-  * Regular expression pattern matching can be simple and fast. http://​swtch.com/​~rsc/​regexp/​regexp1.html +
-  * Regular Expression Matching: the Virtual Machine Approach http://​swtch.com/​~rsc/​regexp/​regexp2.html +
-  * Regular Expression Matching ​in the Wild http://​swtch.com/​~rsc/​regexp/​regexp3.html +
-  * [[http://​www.diku.dk/​kmc/​documents/​AiPL-CrashCourse.pdf|A Crash-Course in Regular Expression Parsing and Regular Expressions as Types.]] +
- +
-\\ +
-**Interested?​** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]] +
- +
-\\ +
-**Status**: available +
- +
- +
-===== Structural compression ​of relational databases ===== +
- +
-Recent research in database management systems at ULB has shown how to theoretically construct succinct (compressed) representations for relational databases and semantic web databases. The advantage of these succinct representations is that they allow querying directly **on the succinct representation**,​ without needing to consult the underlying database. +
- +
-The goal of this thesis is to study scalable algorithms for constructing the actual succinct representations. Some in-memory algorithms are already known, but given the large size of typical databases, distributed and out-of-core alternatives need to be found.
- +
-**Deliverables**:​ +
-  * Overview of the state of the art in main-memory,​ and distributed (bi)simulation-based compression algorithms (document) +
-  * Description of the simulation-based compression algorithm to implement (document) +
-  * Selection of the distribution framework (Actors, Pregel, ...) (document) +
-  * Simulation algorithm (software artifact) +
-  * Experimental analysis of the distributed algorithm on a number of datasets (document)
  
**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
Line 149: Line 71:
  
**Validation of the approach** The thesis should propose a benchmark collection of datalog queries and associated data workloads that can be used to test the obtained system, and measure key performance characteristics (elasticity of the system; memory footprint; overall running time; ...)
 +
 +**Required reading**:
 +  * Datalog and Recursive Query Processing - Foundations and trends in query processing.
 +  * LogicBlox, Platform and Language: A Tutorial (Todd J. Green, Molham Aref, and Grigoris Karvounarakis)
 +  * Dedalus: Datalog in Time and Space (Peter Alvaro, William R. Marczak, Neil Conway, Joseph M. Hellerstein,​ David Maier, and Russell Sears)
 +  * Declarative Networking (Loo et al). For the distributed evaluation strategy.
 +  * Parallel processing of recursive queries in distributed architectures (VLDB 1989)
+  * Evaluating recursive queries in distributed databases (IEEE Transactions on Knowledge and Data Engineering, 1993)
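For orientation, a minimal Python sketch of semi-naive evaluation, the textbook optimization treated in the readings above, on the transitive-closure program path(X,Y) :- edge(X,Y) and path(X,Y) :- path(X,Z), edge(Z,Y) (the edge relation is an illustrative assumption):

<code python>
def transitive_closure(edges):
    """Semi-naive evaluation: each round joins only the newly derived facts
    (the delta) with the base relation, instead of re-deriving everything."""
    path = set(edges)    # path(X,Y) :- edge(X,Y).
    delta = set(edges)
    while delta:
        # path(X,Y) :- delta_path(X,Z), edge(Z,Y).
        derived = {(x, w) for (x, z) in delta for (z2, w) in edges if z == z2}
        delta = derived - path
        path |= delta
    return path

print(transitive_closure({(1, 2), (2, 3), (3, 4)}))
# adds the derived facts (1, 3), (2, 4) and (1, 4)
</code>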
  
**Deliverables**:
- 
  * Semantics of datalog; overview of known optimization strategies (document)
  * Description of the leapfrog trie join (document)
Line 157: Line 86:
  * Experimental analysis of the developed system on a number of use cases (document)
  
-**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
+**Interested?**
+  * Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
  
**Status**: available
  
-===== Design and Implementation of a Curriculum Revision Tool =====
+===== Development of an Information Management System for a Network for Screening and Monitoring of Precancerous and Cancerous Lesions of the Cervix in the Cochabamba Region of Bolivia =====
  
-Stijn Vansummeren (WIT), Frédéric Robert (BEAMS)
+Full description available here: {{:teaching:mfe:mfe_u_bio-mechatronics_codepo_01.docx|}}
  
-This master thesis project concerns the analysis, design, and implementation of a software system that can assist in the revision of teaching curricula (also known as teaching programs).
+**Interested?**
+  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]
  
-The primary targeted functionalities of the software system are as follows:
+**Status**: available
  
-  * It should allow to make different versions of the teaching programs, much in the same way as version control systems like GIT and subversion offer the possibility to make different "​development branches"​ of a program'​s source code. 
-  * It should allow an extensible means to check the modified program for inconsistencies. (For example, if course X has course Y as prerequisite, then course Y should not be scheduled in the 2nd semester and X in the 1st semester. Moreover, the total number of ECTS of all courses should be at most 60 ECTS.)
-  * It should allow to analyze the modifications proposed in the teaching programs, and summarize the impact that these changes could have on other programs. (For example, if a course is removed from the computer science curriculum, it should be flagged that it should also be removed from all curricula that included the course.) 
-  * It should load data from (and preferably, save data to) the ULB central administration database. 
-  * It should give suggestions concerning the impact of the modifications on the course schedules. 
  
-A proof-of-concept implementation of a revision tool that supports the first two requirements above is currently being developed in the context of a PROJH402 project. The MFE student that selects this topic is expected to:
-  * Develop this prototype to a production-ready implementation. 
-  * Implement the communication with the central ULB database. 
-  * Implement the impact analysis concerning the course schedules. 
-  * Interact with the administration of the Ecole Polytechnique to fine-tune the above requirements;​ test the implementation;​ and integrate remarks after testing 
  
-**Interested?** Contact : Stijn Vansummeren (stijn.vansummeren@ulb.ac.be), Frédéric Robert <frrobert@ulb.ac.be>
+=====Publishing and Using Spatio-temporal Data on the Semantic Web=====
  
  
-===== Automatic detection of name variations ===== +[[http://​www.w3c.org/​|RDF]] is the [[http://​www.w3c.org/​|W3C]] proposed framework for representing information 
-Toon Calders ​(WIT)+in the Web. Basically, information in RDF is represented as a set of triples of the form (subject,​predicate,​object).  RDF syntax is based on directed labeled graphs, where URIs are used as node labels and edge labels. The [[http://​linkeddata.org/​|Linked Open Data]] (LOD) initiative is aimed at extending the Web  by means of publishing various open datasets as RDF,  setting RDF links between data items from different data sources. ​ Many companies ​ and government agencies are moving towards publishing data following the LOD initiative. 
 +In order to do this, the original data must be transformed into Linked Open Data. Although most of these data are alphanumerical,​ most of the time they contained ​ a spatial or spatio-temporal component, that must also be transformed. This can be exploited  
 +by application providers, that can build attractive and useful applications,​ in particular, for devices like mobile phones, tablets, etc. 
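For illustration, a minimal Python sketch of the triple model using the rdflib library (the namespace and the event resource are invented for the example):

<code python>
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
event = URIRef(EX["event/jazz-night"])

# Each statement is a (subject, predicate, object) triple.
g.add((event, RDF.type, EX.CulturalEvent))
g.add((event, RDFS.label, Literal("Jazz Night")))
g.add((event, EX.takesPlaceAt, EX["venue/flagey"]))
g.add((event, EX.startsOn, Literal("2016-05-21")))

print(g.serialize(format="turtle"))  # the graph rendered as Turtle triples
</code>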
  
-For this project a large data collection consisting of historical birth, death, and marriage certificates of the province of North-Brabant in the Netherlands is available. This collection contains certificates for about 3 million people, from 1580 until 1955. This collection of paper documents has been indexed by volunteers. For many of the certificates (unfortunately the index is not complete yet), the names of the people involved in it, and their role, have been recorded in a database. Consider for instance the following example of an index entry for a death certificate:
+The goals of this thesis are: (1) study the existing proposals for mapping spatio-temporal data into LOD; (2) apply this mapping to a real-world case study (as was the case for the [[http://www.oscb.be/|Open Semantic Cloud for Brussels]] project); (3) based on the produced mapping, and using existing applications like the [[http://linkedgeodata.org/|Linked Geo Data project]], build applications that make use of LOD, for example to find out which cultural events are taking place at a given time at a given location.
 + 
  
-^ Death certificate ^^ +    * Contact: [[ezimanyi@ulb.ac.be|Esteban Zimányi]]
-|Deceased |Johanna Louise Fredrika Frans | +
-|Relation of the deceased |Gerard Cornelius Reincke de Sitter | +
-|Father of the deceased |Carl Ludwig Frans | +
-|Mother of the deceased |Alida Philippina Zehender | +
-|Type of deed |death certificate | +
-|Number of deed |5 | +
-|Place |Beers | +
-|Date of decease |26-02-1825 | +
-|Period |1825 | +
-|Contains |Overlijdensregister 1825 | +
-|Number of inventory |50 | +
-|Record number |456 ​|+
  
-There are, however, several problems with the data recorded by the volunteers:
+=====Extending SPARQL for Spatio-temporal Data Support=====
-  - Volunteers made mistakes when recording the names.
-  - Natural name variations occur; for instance, during the Napoleonic era, Willem preferred to be called Guillaume. After the French left the Netherlands, Willem became Willem again. Other, less spectacular variations: Fredrika versus Frederika.
-  - Another source of variation is the granularity at which locations are reported. Sometimes locations have been reported at suburb or even neighborhood level, whereas in other records only the city is reported.
-  - Also the original data contained errors. For instance, the order of names may have been swapped.
  
-The goal of this graduation project is to automatically detect name variations for location and person names, using statistical and data mining methods. Because of the large size of the database it is very likely that most name variations occur frequently. In a pilot study, it was shown that name variations could be detected by finding pairs of full names sharing most surnames, but not all. The differences often were name variations. Your task will be to extend this approach to also include locations, and exploit additional background knowledge such as: for most birth certificates there is a matching death certificate, no one has more than one birth and death certificate, etc.
+[[http://www.w3.org/TR/rdf-sparql-query/|SPARQL]] is the W3C standard language to query RDF data over the semantic web. Although syntactically similar to SQL, SPARQL is based on graph matching. In addition, SPARQL is aimed, basically, at querying alphanumerical data.
-This project has a large research component, so your creative input will be required as well. For this project it is absolutely not necessary to speak or understand Dutch.
+Therefore, a proposal to extend SPARQL to support spatial data, called [[http://www.opengeospatial.org/projects/groups/geosparqlswg/|GeoSPARQL]], has been presented to the Open Geospatial Consortium.
 +  
+In this thesis we propose to (1) perform an analysis of the current proposal for GeoSPARQL; (2) study current implementations of SPARQL that support spatial data; (3) implement simple extensions for SPARQL to support spatial data, and use these languages in real-world use cases.
 + 
  
-Interested? Contact [[toon.calders@ulb.ac.be|Toon Calders]]
+   * Contact: [[ezimanyi@ulb.ac.be|Esteban Zimányi]]

-===== Analyzing state-of-the-art technology for handwritten text recognition in a practical case study =====
-Toon Calders (WIT) and Olivier Debeir (LISA)
+=====Efficient Management of (Sub-)structure Similarity Search Over Large Graph Databases=====
  
-The goal of this project is to study the applicability of current state-of-the-art text recognition tools in the following practical application. Consider the following two exemplary documents:
+The problem of (sub-)structure similarity search over graph data has recently drawn significant research interest due to its importance in many application areas such as Bio-informatics, Chem-informatics, Social Networks, Software Engineering, the World Wide Web, Pattern Recognition, etc. Consider, for example, the area of drug design: efficient techniques are required to query and analyze huge data sets of chemical molecules, thus shortening the discovery cycle in drug design and other scientific activities.
  
-[[https://dl.dropbox.com/u/5119252/MFE/069-50-3165-1813-00009.jpg]] \\
-[[https://dl.dropbox.com/u/5119252/MFE/069-50-3165-1815-00003.jpg]]
+Graph edit distance is widely accepted as a similarity measure of labeled graphs due to its ability to cope with any kind of graph structure and labeling scheme. Today, graph edit similarity plays a significant role in managing graph data, and is employed in a variety of analysis tasks such as graph classification and clustering, object recognition in computer vision, etc.
  
-These two documents are scans of birth certificates (actually both are 2 birth certificates) from the Dutch city Grave. We have a huge collection of such paper documents; about 3 million, of which several tens of thousands have been scanned. Furthermore, we have an index on these documents, created by volunteers. This index contains, for the birth certificates, the name of the child, the name of the father and mother, and the witnesses. As you can see in the documents, however, much more information is available. Your task is to answer the following question: is it realistic, given the current state of the art, to do automatic recognition of hand-written texts such as these certificates? Most of the documents are very structured, with a limited number of possible values (age of a person, profession), and there is a huge amount of training data; the names of all people have been indexed, and usually the handwriting is consistent throughout a whole book of certificates. This graduation project includes a thorough literature study and experimentation with (original combinations of) state-of-the-art image recognition techniques adapted to our specific case. The project will be carried out in collaboration with the research labs WIT and LISA.
+In this master thesis project, due to the hardness of graph edit distance (computing the graph edit distance is known to be an NP-hard problem), the student will investigate the current approaches that deal with the problem complexity while searching for similar (sub-)structures. At the end, the student should be able to empirically analyze and contrast some of the interesting approaches.
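One simple representative of such approaches is filtering by a cheap lower bound: if an easily computed bound on the edit distance between the query and a database graph already exceeds the similarity threshold, the NP-hard exact computation can be skipped. A minimal Python sketch, under the (illustrative) assumption that each graph is summarized by the multiset of its vertex labels:

<code python>
from collections import Counter

def label_lower_bound(labels1, labels2):
    """Lower bound on the graph edit distance from vertex labels alone: in any
    alignment, at most `matches` vertices map onto an equally labeled vertex
    for free; every remaining vertex costs at least one edit operation."""
    c1, c2 = Counter(labels1), Counter(labels2)
    matches = sum((c1 & c2).values())
    return max(sum(c1.values()), sum(c2.values())) - matches

def candidates(query_labels, database, tau):
    """Keep only graphs whose bound does not exceed the threshold tau; only
    these survivors are handed to the exact edit distance computation."""
    return [g for g in database if label_lower_bound(query_labels, g) <= tau]

print(label_lower_bound(["C", "C", "O"], ["C", "O", "N"]))  # 1
</code>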
  
-Interested? Contact [[toon.calders@ulb.ac.be|Toon Calders]]
+=====A Generic Similarity Measure For Symbolic Trajectories=====
+Moving object databases (MOD) are database systems that can store and manage moving object data. A moving object is a value that changes over time. It can be spatial (e.g., a car driving on the road network) or non-spatial (e.g., the temperature in Brussels). Using a variety of sensors, the changing values of moving objects can be recorded in digital formats. A MOD, then, helps storing and querying such data. There are two types of MOD. The first is the trajectory database, which manages the history of movement. The second type, in contrast, manages the stream of current movement and the prediction of the near future. This thesis belongs to the first type (trajectory databases). The research in this area mainly revolves around proposing data persistency models and query operations for trajectory data.
  
-===== Process Mining on Company Data for Detecting Security Breaches =====
-Toon Calders (WIT)
+A sub-topic of MOD is the study of semantic trajectories. It is motivated by the fact that the semantics of the movement are lost during the observation process. Your GPS logger, for instance, would record a sequence of (lon, lat, time) points that describes your trajectory. It won't, however, store the purpose of your trip (work, leisure, ...), the transportation mode (car, bus, on foot, ...), and other semantics of your trip. Research works have accordingly emerged to extract semantics from the trajectory raw data, and to provide database persistency to semantic trajectories.
  
-According to a recent report of PricewaterhouseCoopers, the most common source of security incidents are current employees, followed at a distance by former employees and only after that truly external threats such as hacktivists. [http://www.pwc.com/gx/en/consulting-services/information-security-survey/giss.jhtml?region=&industry=] This observation leads to the conclusion that an intelligent security event management system should also concentrate on internal threats to security.
+Recently, Ralf Güting et al. published a model called "symbolic trajectories", which can be viewed as a representation of semantic trajectories:
-The goal of the thesis is to analyze the possibility of using process mining to help in the detection of silent attacks. We will concentrate on company-specific data. From this data typical behavior will be detected and modeled as a process or workflow. We consider three aspects of a workflow: the actor(s), the resources, and the activities. By modeling the normal behavior in the system we are able to detect deviating cases. Based on historical data, the goal is to build models of typical behavior, including the use of resources such as patient records. Such a system would be able to detect for instance if a certain patient record is consulted much more often than usual, or by more people, or outside of the normal workflow (e.g., only reading information, but not writing). Such a pattern could indicate unjustified access to, for instance, the patient record of a famous patient.
+Ralf Hartmut Güting, Fabio Valdés, and Maria Luisa Damiani. 2015. Symbolic Trajectories. ACM Trans. Spatial Algorithms Syst. 1, 2, Article 7 (July 2015), 51 pages.
-For modeling the workflows, we propose the use of process mining (Van der Aalst, 2011). Process mining is a state-of-the-art technology concerned with the automatic extraction of process models from event logs. Consider, e.g., a hospital registering all activities that are carried out for the treatment of patients, ranging from the admission, various measurements being taken from the patient, medicine administered, surgical procedures, to the resignation of the patient. Process mining could be used to extrapolate from these examples a common model of how the hospital deals with a patient. There are several applications of process mining; first, it can be used to improve the processes by standardizing them; many companies and organizations may only have informal procedures. By process mining, the process logs are used to extract a general model of the actual business processes. Such a model can guide the automation process.
+A symbolic trajectory is a very simple structure composed of a sequence of pairs (time interval, label). So, it is a time-dependent label, where every label can tell something about the semantics of the moving object during its associated time interval. We think this model is promising because of its simplicity and genericness.
-In this thesis the goal is to analyze how process mining could be used for anomaly detection; how can the discovered models be used to detect abnormal behavior in a company network? Much like in credit card fraud detection, the approach is to first model normal behavior, in this case using process mining, in order to detect diverging behavior that could indicate security breaches in the network.+
  
-Van der Aalst, W. M. (2011). Process Mining: Discovery, Conformance and Enhancement of Business Processes. Springer.
+The goal of this thesis is to implement a similarity operator for symbolic trajectories. There are three dimensions of similarity in symbolic trajectories: temporal similarity, value similarity, and semantic similarity. Such an operator should be flexible enough to express arbitrary combinations of them. It should accept a pair of symbolic trajectories and return a numerical value that can be used for clustering or ranking objects based on their similarity. Symbolic trajectories are similar to time series, except that labels are annotated with time intervals, rather than time points. We think that the techniques of time series similarity can be adopted for symbolic trajectories. This thesis should assess that, and implement a similarity measure based on time series similarity. The implementation is required to be done as an extension to PostGIS. We have already implemented some temporal types and operations on top of PostGIS, which you can start from.
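As an illustration of adopting a time-series technique, here is a minimal Python sketch of a dynamic-time-warping style measure over symbolic trajectories (the (start, end, label) encoding, the duration weighting, and the 0/1 label cost are illustrative assumptions; the thesis implementation would live inside PostGIS and combine the three similarity dimensions):

<code python>
def label_cost(a, b):
    """Cost of matching two labels: 0 if equal, 1 otherwise. A semantic
    similarity (e.g., 'bus' vs 'tram') could be plugged in here."""
    return 0.0 if a == b else 1.0

def dtw_symbolic(t1, t2):
    """Dynamic time warping over symbolic trajectories given as lists of
    (start, end, label) units ordered by time. Durations weight the label
    cost, so long mismatching units count more than short ones."""
    n, m = len(t1), len(t2)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        s1, e1, l1 = t1[i - 1]
        for j in range(1, m + 1):
            s2, e2, l2 = t2[j - 1]
            cost = label_cost(l1, l2) * ((e1 - s1) + (e2 - s2))
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

home_work = [(0, 10, "walk"), (10, 40, "bus"), (40, 45, "walk")]
home_gym = [(0, 12, "walk"), (12, 35, "bus"), (35, 50, "run")]
print(dtw_symbolic(home_work, home_gym))  # 20.0: only the last units mismatch
</code>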
  
 + 
+**Deliverables** of the master thesis project:
+  * Report on the state of the art of semantic trajectory similarity measures.
+  * Report on the state of the art in time series similarity measures.
+  * Assessment of the application of time series similarity to symbolic trajectories.
+  * Implementation of symbolic trajectories on top of PostGIS.
+  * Implementation and evaluation of the proposed symbolic trajectory similarity operator.
  
-Interested? Contact [[toon.calders@ulb.ac.be|Toon Calders]] 
  
-===== Mining patterns for compression =====
-Toon Calders (WIT)
+**Interested?**
+  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]
  
-Data mining is the research discipline that studies the extraction of information from large amounts of data. One of the typical data mining tasks is pattern mining, where we try to find regularities that occur frequently in a dataset. The prototypical example is that of a supermarket storing, for every customer visiting the supermarket, the transaction; that is, the set of items that were bought by that customer. The frequent itemset mining problem now is to detect which combinations of products were more often sold together than a given threshold. One of the major problems of pattern mining algorithms, however, is the enormous amount of redundant patterns they generate; for instance, very popular items, such as toilet paper, tend to appear in many frequent combinations purely due to chance. In order to deal with this problem, techniques based upon compression and minimum description length were proposed to reduce the number of patterns. The rationale behind the minimum description length principle is that a set of patterns that describes well what is happening in the dataset should allow for a good compression. For a collection of patterns, the quality is measured as the description length of the patterns plus the size of the data compressed with these patterns. For instance, if the pattern {bread, milk, butter} has a high frequency, we could opt to replace every occurrence of this pattern by a special code, effectively reducing the encoding length of the data. Surprisingly, however, the MDL principle was until now only used to rule out redundant patterns, and it has not been researched yet how well the discovered patterns actually do compress the data as compared to compression algorithms such as Lempel-Ziv-Welch.
-Hence, in this highly research-oriented graduation project, two research questions are central: (1) How well do non-redundant pattern sets based on MDL allow compressing data, and (2) Can we extract useful patterns from existing compression algorithms?
+**Status**: available
  
-Interested? Contact [[toon.calders@ulb.ac.be|Toon Calders]]
+=====Assessing Existing Communication Protocols In The Context Of DaaS=====
+Data-as-a-Service (DaaS) is an emerging cloud model. The main offering of DaaS is to allow data producers/owners to publish data services on the cloud. The idea of publishing data via a service interface is not new. SOA protocols have enabled this long ago. Yet, these protocols were not developed with the cloud and big data in mind. This is probably why the term DaaS has emerged. It marks the need for protocols and tools that enable big data exchange.
  
-===== Pattern Mining for Object Tracking =====
-Toon Calders (WIT)
+DaaS services need to exchange large amounts of data. Large here refers to large message size, large message count, or a combination of both. RESTful services, for instance, communicate over HTTP, which is not a good choice for communicating large messages/files. SOAP services are not bound to HTTP, but they introduce another overhead of requiring messages to be strictly formatted in XML. This is why researchers started to reconsider older protocols like BitTorrent, and to suggest extensions to existing protocols like SOAP with Attachments.
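The XML overhead mentioned above is easy to make concrete. A minimal Python sketch comparing the wire size of the same record as a simplified SOAP-style envelope (illustrative, not a compliant SOAP message) and as plain JSON:

<code python>
import json

record = {"user": "alice", "machine": "srv-042", "size_bytes": 1048576}

# Simplified SOAP-style envelope, hand-written for the comparison.
xml = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
    "<soap:Body><transfer>"
    "<user>alice</user><machine>srv-042</machine>"
    "<size_bytes>1048576</size_bytes>"
    "</transfer></soap:Body></soap:Envelope>"
)

print(len(xml), "bytes as XML envelope")          # roughly four times larger
print(len(json.dumps(record)), "bytes as JSON")   # than the JSON encoding
</code>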
  
-Pattern mining techniques are more and more often used in computer vision to obtain features that are more discriminative than those extracted using computer vision algorithms. This is true for example in content-based image/video retrieval, indexing, classification, tracking, etc. However, the main drawback of using traditional pattern mining techniques is their inefficiency when dealing with huge sets of data (for example provided by Google Images or YouTube for videos) or when trying to tackle real-time analysis problems. The data mining community has been working on the "Big Data" problem for many years, coming up with promising solutions such as stream mining. The aim of this project is to explore the possibility of using pattern mining in data streams for the (real-time) analysis of videos and, in particular, for object tracking.
+The topic of this thesis is to perform a comprehensive survey of data exchange protocols, and to assess their suitability for DaaS. A quantitative comparison of the protocols needs to be done, considering at least these two dimensions: (1) the protocol: SOAP, REST, BitTorrent, etc.; and (2) the message: short inline, long inline, file. The assessment should be in terms of reliability, performance, and security.
  
-For more extensive information regarding the context and problem setting, see the following paper:
+**Deliverables** of the master thesis project:
+  * A report that reviews the state-of-the-art communication protocols.
+  * A tool for DaaS developers to choose the best protocol(s) based on their application needs. Such a tool might also provide means of automatically switching between protocols on certain thresholds.
+  * Experiments to assess the suitability of the protocols for DaaS, and to compare between them. These experiments need to be repeatable, so that others can use them on their own datasets and configurations.
  
-Toon Calders, Elisa Fromont, Baptiste Jeudy and Hoang Thanh Lam.
-[[http://labh-curien.univ-st-etienne.fr/~fromont/|Analysis of Videos using Tile Mining.]]\\
-In: //ECML/PKDD Workshop on Real-World Challenges for Data Stream Mining//, Prague, 2013
-Interested? Contact [[toon.calders@ulb.ac.be|Toon Calders]]
+**Interested?**
+  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]

+**Status**: available
- +
- +
-===== Design and Implementation of a Curriculum Revision Tool =====
- +
-Stijn Vansummeren (WIT), Frédéric Robert (BEAMS) +
- +
-This MFE concerns the analysis, design, and implementation of a software system that can assist in the revision of teaching curricula (also known as teaching programs).
- +
-The primary targeted functionalities of the software system are as follows:
-  * It should allow to make different versions of the teaching programs, much in the same way as version control systems like GIT and subversion offer the possibility to make different "development branches" of a program's source code.
-  * It should allow an extensible means to check the modified program for inconsistencies. (For example, if course X has course Y as prerequisite, then course Y should not be scheduled in the 2nd semester and X in the 1st semester. Moreover, the total number of ECTS of all courses should be at most 60 ECTS.)
-  * It should allow to analyze the modifications proposed in the teaching programs, and summarize the impact that these changes could have on other programs. (For example, if a course is removed from the computer science curriculum, it should be flagged that it should also be removed from all curricula that included the course.)
-  * It should load data from (and preferably, save data to) the ULB central administration database.  +
-  * It should give suggestions concerning the impact of the modifications on the course schedules. +
- +
-A proof-of-concept implementation of a revision tool that supports the first two requirements above is currently being developed in the context of a PROJH402 project. The MFE student that selects this topic is expected to: +
- +
-  * Develop this prototype to a production-ready implementation. +
-  * Implement the communication with the central ULB database. +
-  * Implement the impact analysis concerning the course schedules. +
-  * Interact with the administration of the Ecole Polytechnique to fine-tune the above requirements;​ test the implementation;​ and integrate remarks after testing +
- +
-Contact : Stijn Vansummeren <​stijn.vansummeren@ulb.ac.be>,​ Frédéric Robert <​frrobert@ulb.ac.be>​ +
- +
- +
-=====Publishing and Using Spatio-temporal Data on the Semantic Web===== +
- +
- +
-[[http://​www.w3c.org/​|RDF]] is the [[http://​www.w3c.org/​|W3C]] proposed framework for representing information +
-in the Web. Basically, information in RDF is represented as a set of triples of the form (subject,​predicate,​object). ​ RDF syntax is based on directed labeled graphs, where URIs are used as node labels and edge labels. The [[http://​linkeddata.org/​|Linked Open Data]] (LOD) initiative is aimed at extending the Web  by means of publishing various open datasets as RDF,  setting RDF links between data items from different data sources. ​ Many companies ​ and government agencies are moving towards publishing data following the LOD initiative. +
-In order to do this, the original data must be transformed into Linked Open Data. Although most of these data are alphanumerical, most of the time they contain a spatial or spatio-temporal component that must also be transformed. This can be exploited
-by application providers, who can build attractive and useful applications, in particular for devices like mobile phones, tablets, etc.
- +
-The goals of this thesis are: (1) study the existing proposals for mapping spatio-temporal data into LOD; (2) apply this mapping to a real-world case study (as was the case for the [[http://www.oscb.be/|Open Semantic Cloud for Brussels]] project); (3) based on the produced mapping, and using existing applications like the [[http://linkedgeodata.org/|Linked Geo Data project]], build applications that make use of LOD, for example to find out which cultural events are taking place at a given time at a given location.
-  +
- +
-    * Contact: [[ezimanyi@ulb.ac.be|Esteban Zimányi]] +
- +
-=====Extending SPARQL for Spatio-temporal Data Support===== +
- +
-[[http://www.w3.org/TR/rdf-sparql-query/|SPARQL]] is the W3C standard language to query RDF data over the semantic web. Although syntactically similar to SQL, SPARQL is based on graph matching. In addition, SPARQL is aimed, basically, at querying alphanumerical data.
-Therefore, a proposal to extend SPARQL to support spatial data, called ​ [[http://​www.opengeospatial.org/​projects/​groups/​geosparqlswg/​|GeoSPARQL]],​ has been presented to the Open Geospatial Consortium. ​  +
-  +
-In this thesis we propose to (1) perform an analysis of the current proposal for GeoSPARQL; (2) study current implementations of SPARQL that support spatial data; (3) implement simple extensions for SPARQL to support spatial data, and use these languages in real-world use cases.
-  +
- +
-   Contact[[ezimanyi@ulb.ac.be|Esteban Zimányi]] +
- +
 