Our laboratory performs collaborative research with Euranova R&D (http://euranova.eu/). The list of subjects proposed for this year by Euranova can be found {{:teaching:mfe:master_thesis_euranova_2015.pdf|here}}.
  
These subjects include topics on distributed graph processing, processing big data using Map/Reduce, cloud computing, and social networks.
  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]
  
===== Graph Indexing for Fast Subgraph Isomorphism Testing =====

There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, ...). A crucial operation when searching in this data is that of subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D against which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by decomposing them into graphs for which subgraph isomorphism *is* tractable. An associated algorithm that filters out graphs that certainly cannot match P can then be formulated based on ideas from information retrieval.

In this master thesis project, the student will empirically validate on real-world datasets the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.
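
To make the filter-then-verify idea concrete, here is a minimal, hypothetical sketch (in Scala): data graphs are summarised by small features for which containment is cheap to test, an inverted index maps features to graphs, and only graphs containing all features of the pattern are passed on to the expensive subgraph isomorphism test. The feature choice and all names are illustrative assumptions, not the decomposition studied in our research.

<code scala>
// Toy filter-then-verify index: features are edge labels and label paths of length 2.
object FilterThenVerify {

  // A tiny directed, edge-labelled graph representation (hypothetical).
  case class Edge(src: Int, label: String, dst: Int)
  case class Graph(id: Int, edges: List[Edge])

  // Features: single edge labels and consecutive label pairs (paths of length 2).
  def features(g: Graph): Set[String] = {
    val single = g.edges.map(_.label).toSet
    val pairs = for {
      e1 <- g.edges; e2 <- g.edges if e1.dst == e2.src
    } yield s"${e1.label}/${e2.label}"
    single ++ pairs.toSet
  }

  // Inverted index: feature -> ids of graphs containing it.
  def buildIndex(db: List[Graph]): Map[String, Set[Int]] =
    db.flatMap(g => features(g).map(f => f -> g.id))
      .groupMapReduce(_._1)(p => Set(p._2))(_ ++ _)

  // Candidate graphs = those containing *every* feature of the pattern.
  def candidates(pattern: Graph, db: List[Graph], idx: Map[String, Set[Int]]): Set[Int] = {
    val fs = features(pattern)
    if (fs.isEmpty) db.map(_.id).toSet
    else fs.map(f => idx.getOrElse(f, Set.empty[Int])).reduce(_ intersect _)
  }

  def main(args: Array[String]): Unit = {
    val db = List(
      Graph(1, List(Edge(0, "C-C", 1), Edge(1, "C-O", 2))),
      Graph(2, List(Edge(0, "C-N", 1))))
    val pattern = Graph(0, List(Edge(0, "C-C", 1), Edge(1, "C-O", 2)))
    val idx = buildIndex(db)
    // Only graph 1 survives filtering; a full isomorphism test runs on it alone.
    println(candidates(pattern, db, idx)) // Set(1)
  }
}
</code>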

**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: available
  
===== Complex Event Processing for Security Analytics =====

As noted by [[http://home.deib.polimi.it/cugola/Papers/cep_survey.pdf|Cugola and Margara]], "an increasing number of distributed applications requires processing continuously flowing data ("events") from geographically distributed sources at unpredictable rates to obtain timely responses to complex queries. Examples of such applications come from the most disparate fields, from fraud detection to network intrusion detection systems, from wireless sensor networks to financial tickers, from traffic management to click-stream inspection."

These requirements have led to the development of a number of systems specifically designed to process information as a flow (or set of flows) of continuous data "events" according to a set of pre-deployed processing rules. Despite having a common goal, these systems differ in a wide range of aspects, including architecture, data models, rule and pattern languages, and processing mechanisms. In part, this is due to the fact that they were the result of the research efforts of different communities, each one bringing its own view of the problem and its background to the definition of a solution.
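
As a toy illustration of what such a pre-deployed rule might look like in the security setting (event shape, threshold, and window are illustrative assumptions, not tied to any particular CEP engine):

<code scala>
// Flag an IP address that produces three or more failed logins within a 60-second window.
object CepRuleSketch {

  case class Event(timestamp: Long, ip: String, kind: String) // kind: "login_ok" | "login_failed"

  // Scan the ordered stream and report (ip, window end) for suspicious bursts.
  def failedLoginBursts(events: Seq[Event], window: Long = 60, threshold: Int = 3): Seq[(String, Long)] =
    events.filter(_.kind == "login_failed")
      .groupBy(_.ip)
      .toSeq
      .flatMap { case (ip, es) =>
        val times = es.map(_.timestamp).sorted
        times.indices.collect {
          case i if i + threshold - 1 < times.length &&
                    times(i + threshold - 1) - times(i) <= window =>
            (ip, times(i + threshold - 1))
        }
      }

  def main(args: Array[String]): Unit = {
    val stream = Seq(
      Event(0, "10.0.0.7", "login_failed"),
      Event(20, "10.0.0.7", "login_failed"),
      Event(45, "10.0.0.7", "login_failed"),
      Event(50, "10.0.0.9", "login_ok"))
    failedLoginBursts(stream).foreach(println) // prints (10.0.0.7,45)
  }
}
</code>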
  
This master thesis is put forward in the context of the SPICES "Scalable Processing and mIning of Complex Events for Security-analytics" research project, funded by Innoviris. The objective of the thesis is to survey the existing systems and compare their strengths and weaknesses when they are applied specifically to the context of detecting security breaches (network intrusion, fraud detection, ...), and to help, as part of the research project, in the design & implementation of a new system that overcomes these weaknesses.

**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: already taken.
  
===== Compiling SPARQL queries into machine code =====
  
-Interested? Contact ​[[toon.calders@ulb.ac.be|Toon Calders]]+Due to the increasing availability of larger and larger cheap RAM memories, the working set of modern database management systems becomes more and more main memory resident. This implies that, in contrast to traditional database management systems, slow disk accesses are rare, and that hence, the in-memory processing speed of databases becomes an important factor. As recently observed by a number of researchers,​ (e.g., ​[[http://​sites.computer.org/​debull/​A14mar/​p3.pdf|Neumann and Leis]]), one very attractive approach for fast query processing in this context is the just-in-time compilation of incoming queries into machine code. This compilation avoids the overhead of the traditional interpretation of query plans, and can aid in minimzing memory traffic for boosting performance.
  
-===== Mining patterns for compression ===== +A number of recent research prototypes exist that compile SQL queries into machine code in this sense: ​ HyPer A Hybrid OLTP&​OLAP High Performance DBMS (http://​hyper-db.de/​and Legobase (https://​github.com/​epfldata/​NewLegoBase and http://​data.epfl.ch/​legobase).
-Toon Calders ​(WIT)+
  
-Data mining is the research discipline that studies the extraction ​of information from large amounts of data. One of the typical data mining tasks is pattern mining where we try to find regularities that occur frequently in a dataset. The prototypical example is that of a supermarket storing for every customer visiting ​the supermarket,​ the transaction;​ that is, the set of items that were bought by that customer. The frequent itemset mining problem now is to detect which combinations of products were more often sold together than given threshold. One of the major problems of pattern mining algorithms, however, is the enormous amount of redundant patterns they generate; ​for instance, very popular items, such as toilet paper, tend to appear in many frequent combinations purely due to chance. In order to deal with this problem, techniques based upon compression and minimum description length were proposed to reduce ​the number of patterns. The rationale behind the minimal description length principle is that a set of patterns that describes well what is happening in the dataset ​should ​allow for a good compression. For a collection of patterns, ​the quality is measured as the description length ​of the patterns plus the size of the data compressed with these patterns. For instance, if the pattern {bread, milk, butter} has a high frequency, we could opt to replace every occurrence of this pattern by a special code, effectively reducing ​the encoding length of the data. Surprisinglyhowever, the MDL principle was until now only used to rule out redundant patterns, and it has not been researched yet how well the discovered patterns actually do compress the data as compared to compression algorithms such as Lempel–Ziv–Welch.  +The objective ​of this master thesis ​is to apply the same methodology ​to engineer ​compiler that translates (fragments ​of) SPARQL (the standard query language ​for querying RDF data on the semantic web) into machine code. The overall methodology ​should ​follow ​the methodology used by HyPer and Legobase: 
-Hence, in this highly research oriented graduation project, two research questions are central: ​(1How good do non-redundant pattern sets based on MDL allow compressing data, and (2Can we extract useful patterns from existing compression algorithms?+  * Use of a high-level language ​to construct ​the compiler (Scalahttp://​scala-lang.org/) 
 +  * Use of Latent Modular Staging ​(LMS for shortfor generating low-level portable assembly code at runtime ​(http://​scala-lms.github.io/​) 
 +  * Use of LLVM (http://​llvm.org/​) as a portable assembly code and corresponding translator to machine code.
  
Getting acquainted with these technologies is part of the master thesis objective.
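
As a rough illustration of what compilation buys over interpretation, here is a toy sketch: it pre-builds a Scala closure for a single triple pattern instead of emitting actual machine code, and all names are hypothetical; real systems like HyPer and LegoBase go much further.

<code scala>
// Interpreting a query plan vs. "compiling" it once before the scan.
object QueryCompilationSketch {

  // An RDF triple and a single SPARQL triple pattern with optional constants.
  case class Triple(s: String, p: String, o: String)
  case class TriplePattern(s: Option[String], p: Option[String], o: Option[String])

  // Interpreted evaluation: the pattern is inspected again for every triple.
  def interpret(pat: TriplePattern, data: Seq[Triple]): Seq[Triple] =
    data.filter { t =>
      pat.s.forall(_ == t.s) && pat.p.forall(_ == t.p) && pat.o.forall(_ == t.o)
    }

  // "Compiled" evaluation: the pattern is translated once into a predicate;
  // the scan then runs the specialized predicate with no plan interpretation.
  def compile(pat: TriplePattern): Triple => Boolean = {
    val checks: List[Triple => Boolean] =
      pat.s.map(v => (t: Triple) => t.s == v).toList :::
      pat.p.map(v => (t: Triple) => t.p == v).toList :::
      pat.o.map(v => (t: Triple) => t.o == v).toList
    t => checks.forall(_(t))
  }

  def main(args: Array[String]): Unit = {
    val data = Seq(
      Triple("alice", "knows", "bob"),
      Triple("bob", "knows", "carol"))
    val pat = TriplePattern(None, Some("knows"), Some("carol")) // ?x knows carol
    val compiled = compile(pat)
    assert(interpret(pat, data) == data.filter(compiled))
    println(data.filter(compiled)) // List(Triple(bob,knows,carol))
  }
}
</code>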
  
**Validation of the approach** The thesis should propose a benchmark collection of SPARQL queries that can be used to test the obtained SPARQL-to-machine-code compiler and compare its performance against a reference, interpreter-based SPARQL engine.

**Deliverables** of the master thesis project:
  - An overview of the state of the art in query-to-machine-code compilation.
  - A description of Lightweight Modular Staging and how it can be used to construct machine-code compilers.
  - The SPARQL compiler (software artifact)
  - A benchmark set of SPARQL queries and associated data sets for the experimental validation
  - An experimental validation of the compiler, comparing the efficiency of compiled queries against a reference compiler based on query plan interpretation.
  
**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: available
  
===== An implementation of the SCULPT schema language for tabular data on the Web =====

Despite the availability of numerous standardized formats for semi-structured and semantic web data such as XML, RDF, and JSON, a very large percentage of data and open data published on the web remains tabular in nature. (Jeni Tennison, one of the two co-chairs of the W3C CSV on the Web working group, claims that "over 90% of the data published on data.gov.uk is tabular data".) Tabular data is most commonly published in the form of comma separated values (CSV) files because such files are open and therefore processable by numerous tools, and tailored for all sizes of files ranging from a number of KBs to several TBs. Despite these advantages, working with CSV files is often cumbersome because they are typically not accompanied by a //schema// that describes the file's structure (i.e., "the second column is of integer datatype", "columns are delimited by tabs", etc.) and captures its intended meaning. Such a description is nevertheless vital for any user trying to interpret the file and execute queries or make changes to it.

In other data models, the presence of a schema is also important for query optimization (required for scalable query execution if the file is large), as well as for other static analysis tasks. Finally, schemas are a prerequisite for unlocking huge amounts of tabular data to the Semantic Web.

In recognition of this problem, the CSV on the Web Working Group of the World Wide Web Consortium argues for the introduction of a schema language for tabular data to ensure higher interoperability when working with datasets using the CSV or similar formats.

The objective of this master thesis is to implement a recent proposal for such a schema language named SCULPT (http://arxiv.org/abs/1411.2351). Concretely, this entails:
  * proposing an elegant concrete syntax for SCULPT schemas
  * implementing both the in-memory and streaming validation algorithms of SCULPT proposed in http://arxiv.org/abs/1411.2351 (a toy illustration of row-by-row validation is sketched below)
  * extending the SCULPT proposal by investigating how SCULPT can be combined with complementary features recently proposed by the W3C CSV on the Web Working Group (http://www.w3.org/2013/csvw/wiki/Main_Page)
  * in particular, extending SCULPT with features that allow tabular files to be converted into RDF
  * creating associated tooling for SCULPT (i.e., parser and serializer generators, in the spirit of data description tools)
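
The following toy sketch shows the flavour of such validation against a simple tabular schema; the schema representation and all names are hypothetical and far simpler than the actual SCULPT proposal:

<code scala>
// A toy, in-memory validator for tabular data against a SCULPT-like column schema.
object CsvValidationSketch {

  sealed trait ColType
  case object IntType extends ColType
  case object StringType extends ColType

  // One rule per column: the column's name and its expected datatype.
  case class ColumnRule(name: String, tpe: ColType)
  case class TableSchema(delimiter: Char, columns: List[ColumnRule])

  def cellOk(value: String, tpe: ColType): Boolean = tpe match {
    case IntType    => value.trim.matches("-?\\d+")
    case StringType => true
  }

  // Returns a list of human-readable violations (empty list = valid).
  def validate(schema: TableSchema, rows: Iterator[String]): List[String] =
    rows.zipWithIndex.flatMap { case (line, i) =>
      val cells = line.split(schema.delimiter).toList
      if (cells.length != schema.columns.length)
        List(s"row $i: expected ${schema.columns.length} columns, got ${cells.length}")
      else
        cells.zip(schema.columns).collect {
          case (v, rule) if !cellOk(v, rule.tpe) =>
            s"row $i, column '${rule.name}': '$v' is not of type ${rule.tpe}"
        }
    }.toList

  def main(args: Array[String]): Unit = {
    val schema = TableSchema(',', List(ColumnRule("city", StringType), ColumnRule("population", IntType)))
    val data = Iterator("Brussels,1175000", "Liège,not-a-number")
    validate(schema, data).foreach(println)
    // row 1, column 'population': 'not-a-number' is not of type IntType
  }
}
</code>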

**Deliverables** of this master thesis project:
  - detailed description of the SCULPT proposal (document)
  - overview of the state of the art; in particular other proposals for schema languages for tabular data (document)
  - concrete syntax for SCULPT (design document + formal grammar)
  - implementation of SCULPT validation algorithms (software artifact)
  - extension of SCULPT with features for converting into RDF (document + software)

**Interested?** Contact: [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: available

===== Engineering a runtime system and compiler for AQL =====

Automatically extracting structured information from text is a task that has been pursued for decades. Since most analytics over text involves information extraction as a first step, IE is a very important part of data analysis in the enterprise today.

In 2005, researchers at the IBM Almaden Research Center developed a new system specifically geared for practical information extraction in the enterprise. This effort led to SystemT, a rule-based IE system with an SQL-like declarative language named AQL (Annotation Query Language). The declarative nature of AQL enables new kinds of tools for extractor development, and draws upon known techniques from query processing in relational database management systems to offer a cost-based optimizer that ensures high-throughput performance. Recent research into the foundations of AQL (http://researcher.watson.ibm.com/researcher/files/us-fagin/jacm15.pdf) has shown that, as an alternative, it is also possible to build a runtime system for AQL based on special kinds of finite state automata. A potential benefit of this alternative runtime system is that text files need only be processed once (instead of multiple times in the cost-based optimizer backend), and it may hence provide greater throughput. On the other hand, the alternate system can sometimes have larger memory requirements than the cost-based optimizer backend.

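The following toy sketch illustrates the single-pass idea on a drastically simplified rule language (dictionary rules only, with hypothetical names); the vset automata used in the cited work are considerably more expressive:

<code scala>
// Several extraction "rules" are matched simultaneously while the text is scanned once.
object OnePassExtraction {

  // A rule simply names the entities found by a word-level dictionary.
  case class Rule(name: String, dictionary: Set[String])
  case class Span(rule: String, start: Int, end: Int, text: String)

  // Scan the tokenised document once; every token is checked against all rules.
  def extract(doc: String, rules: List[Rule]): List[Span] = {
    val token = "\\S+".r
    token.findAllMatchIn(doc).flatMap { m =>
      val word = m.matched.toLowerCase
      rules.collect {
        case r if r.dictionary.contains(word) => Span(r.name, m.start, m.end, m.matched)
      }
    }.toList
  }

  def main(args: Array[String]): Unit = {
    val rules = List(
      Rule("City", Set("brussels", "paris")),
      Rule("FirstName", Set("alice", "bob")))
    extract("Alice moved from Paris to Brussels", rules).foreach(println)
    // Span(FirstName,0,5,Alice), Span(City,17,22,Paris), Span(City,26,34,Brussels)
  }
}
</code>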

The objective of this master thesis is to design and engineer a runtime system and compiler for (a fragment of) AQL based on finite state automata. Ideally, to obtain the best performance, these automata should be compiled into machine code when executed. For this compilation, the following technologies should be used:
  * Use of a high-level language to construct the compiler (Scala, http://scala-lang.org/)
  * Use of Lightweight Modular Staging (LMS for short) for generating low-level portable assembly from the automata at runtime (http://scala-lms.github.io/)
  * Use of LLVM (http://llvm.org/) as a portable assembly code and corresponding translator to machine code.

Getting acquainted with these technologies is part of the master thesis objective.

**Validation of the approach** The thesis should propose a benchmark collection of AQL queries and associated input text files that can be used to test the obtained automaton-based AQL compiler and compare its performance against the reference, cost-based optimizer of SystemT.

**Deliverables** of the master thesis project:
  - An overview of AQL, SystemT, and its cost-based optimizer and evaluation engine. (document)
  - A description of how AQL can be evaluated by means of so-called vset finite state automata. (document)
  - A detailed description of the state of the art in evaluating finite state automata. (document)
  - Identification of the AQL syntax that is to be supported. (specification)
  - The AQL compiler (software artifact)
  - A benchmark set of AQL queries and associated data sets for the experimental validation
  - An experimental validation of the compiler, comparing the efficiency of compiled queries against the cost-based reference compiler.

**References about SystemT**:
  * [[http://almaden.ibm.com/cs/projects/avatar/icde2008.pdf|An Algebraic Approach to Rule-Based Information Extraction]]
  * [[http://www.sigmod.org/publications/sigmod-record/0812/p007.special.krishnamurthy.pdf|SystemT: A System for Declarative Information Extraction]]

**References about finite state automata evaluation**:
  * [[http://swtch.com/~rsc/regexp/regexp1.html|Regular expression pattern matching can be simple and fast]]
  * [[http://swtch.com/~rsc/regexp/regexp2.html|Regular Expression Matching: the Virtual Machine Approach]]
  * [[http://swtch.com/~rsc/regexp/regexp3.html|Regular Expression Matching in the Wild]]
  * [[http://www.diku.dk/kmc/documents/AiPL-CrashCourse.pdf|A Crash-Course in Regular Expression Parsing and Regular Expressions as Types]]

**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: available

===== Structural compression of relational databases =====

Recent research in database management systems at ULB has shown how to theoretically construct succinct (compressed) representations for relational databases and semantic web databases. The advantage of these succinct representations is that they allow querying directly **on the succinct representation**, without needing to consult the underlying database.

The goal of this thesis is to study scalable algorithms for constructing the actual succinct representations. Some in-memory algorithms are already known, but given the large size of typical databases, distributed and out-of-core alternatives need to be found.

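For intuition, the sketch below computes a bisimulation-based quotient of a small labelled graph with a simple iterative refinement scheme; it is an in-memory toy with hypothetical names, not the scalable distributed algorithm this thesis targets:

<code scala>
// Blocks are refined until bisimilar nodes (same label, same successor blocks) coincide.
object BisimulationSketch {

  case class Graph(labels: Map[Int, String], edges: Map[Int, Set[Int]])

  // Returns a map node -> block id such that bisimilar nodes share a block.
  def bisimulationBlocks(g: Graph): Map[Int, Int] = {
    val nodes = g.labels.keySet
    // Start from the label partition, then refine by successor-block signatures.
    var block: Map[Int, Int] = {
      val byLabel = g.labels.values.toList.distinct.zipWithIndex.toMap
      nodes.map(n => n -> byLabel(g.labels(n))).toMap
    }
    var changed = true
    while (changed) {
      val signature: Map[Int, (Int, Set[Int])] =
        nodes.map(n => n -> (block(n), g.edges.getOrElse(n, Set.empty).map(block))).toMap
      val ids = signature.values.toList.distinct.zipWithIndex.toMap
      val next = nodes.map(n => n -> ids(signature(n))).toMap
      // The partition only ever refines, so it is stable once the block count stops growing.
      changed = next.values.toSet.size != block.values.toSet.size
      block = next
    }
    block
  }

  def main(args: Array[String]): Unit = {
    // Two "paper" nodes citing the same "author" node are bisimilar and collapse.
    val g = Graph(
      labels = Map(1 -> "paper", 2 -> "paper", 3 -> "author"),
      edges = Map(1 -> Set(3), 2 -> Set(3)))
    println(bisimulationBlocks(g)) // nodes 1 and 2 get the same block id
  }
}
</code>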

**Deliverables**:
  * Overview of the state of the art in main-memory and distributed (bi)simulation-based compression algorithms (document)
  * Description of the simulation-based compression algorithm to implement (document)
  * Selection of the distribution framework (Actors, Pregel, ...) (document)
  * Simulation algorithm (software artifact)
  * Experimental analysis of the distributed algorithm on a number of datasets (document)

**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: available

===== A Scala-based runtime and compiler for Distributed Datalog =====

Datalog is a fundamental query language in data management, based on logic programming. It essentially extends select-from-where SQL queries with recursion. There is a recent trend in data management research to use Datalog to specify distributed applications, most notably on the web, as well as to do inference on the semantic web. The goal of this thesis is to engineer a basic **distributed datalog system**, i.e., a system that is capable of compiling & running distributed datalog queries. The system should be implemented in the Scala programming language. Learning Scala is part of the master thesis project.

The system should incorporate recently proposed worst-case optimal join algorithms (i.e., the [[http://arxiv.org/abs/1210.0481|leapfrog trie join]]) and employ known local datalog optimizations (such as magic sets and QSQ).

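For intuition, the following sketch shows semi-naive bottom-up evaluation of the classic transitive-closure program; a real system would parse arbitrary Datalog rules and use worst-case optimal joins, and all names here are illustrative:

<code scala>
// Semi-naive evaluation of: path(X,Y) :- edge(X,Y).  path(X,Z) :- edge(X,Y), path(Y,Z).
object SemiNaiveDatalog {

  type Fact = (String, String)

  def transitiveClosure(edge: Set[Fact]): Set[Fact] = {
    var path: Set[Fact] = edge          // path(X,Y) :- edge(X,Y).
    var delta: Set[Fact] = edge         // facts derived in the previous round
    while (delta.nonEmpty) {
      // Join only the *new* path facts with edge: edge(X,Y), delta(Y,Z) -> path(X,Z).
      val derived = for {
        (x, y) <- edge
        (y2, z) <- delta if y2 == y
      } yield (x, z)
      delta = derived.diff(path)        // keep only genuinely new facts
      path = path.union(delta)
    }
    path
  }

  def main(args: Array[String]): Unit = {
    val edge = Set("a" -> "b", "b" -> "c", "c" -> "d")
    println(transitiveClosure(edge))
    // all 6 reachable pairs: (a,b), (b,c), (c,d), (a,c), (b,d), (a,d)
  }
}
</code>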

**Validation of the approach** The thesis should propose a benchmark collection of datalog queries and associated data workloads that can be used to test the obtained system, and measure key performance characteristics (elasticity of the system; memory footprint; overall running time, ...).

**Deliverables**:
  * Semantics of datalog; overview of known optimization strategies (document)
  * Description of the leapfrog trie join (document)
  * Datalog system (software artifact)
  * Experimental analysis of the developed system on a number of use cases (document)

**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: available

===== Design and Implementation of a Curriculum Revision Tool =====
  
Stijn Vansummeren (WIT), Frédéric Robert (BEAMS)
  
This master thesis project concerns the analysis, design, and implementation of a software system that can assist in the revision of teaching curricula (also known as teaching programs).

The primary targeted functionalities of the software system are as follows:
  * It should allow users to make different versions of the teaching programs, much in the same way as version control systems like GIT and subversion offer the possibility to make different "development branches" of a program's source code.
  * It should allow an extensible means to check the modified program for inconsistencies. (For example, if course X has course Y as prerequisite, then course Y should not be scheduled in 2nd semester and X in 1st semester. Moreover, the total number of ECTS of all courses should be at most 60 ECTS. A minimal sketch of such checks follows below.)
  * It should allow users to analyze the modifications proposed in the teaching programs, and summarize the impact that these changes could have on other programs. (For example, if a course is removed from the computer science curriculum, it should be flagged that it should also be removed from all curricula that included the course.)
  * It should load data from (and preferably, save data to) the ULB central administration database.
  * It should give suggestions concerning the impact of the modifications on the course schedules.
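
The following is a minimal sketch, assuming a hypothetical course/program representation (course codes and ECTS values below are made up), of how such extensible consistency checks could be expressed:

<code scala>
// Each check inspects a program and reports zero or more human-readable violations.
object CurriculumChecks {

  case class Course(code: String, ects: Int, semester: Int, prerequisites: Set[String])
  case class Program(name: String, courses: List[Course])

  type Check = Program => List[String]

  // The total ECTS load of the program should not exceed 60.
  val ectsLimit: Check = p => {
    val total = p.courses.map(_.ects).sum
    if (total > 60) List(s"${p.name}: total ECTS is $total, exceeding 60") else Nil
  }

  // A prerequisite should not be scheduled after the course that requires it.
  val prerequisiteOrder: Check = p => {
    val byCode = p.courses.map(c => c.code -> c).toMap
    for {
      c   <- p.courses
      pre <- c.prerequisites.toList
      q   <- byCode.get(pre).toList
      if q.semester > c.semester
    } yield s"${p.name}: ${c.code} (semester ${c.semester}) requires ${q.code} scheduled in semester ${q.semester}"
  }

  def runChecks(p: Program, checks: List[Check]): List[String] = checks.flatMap(_(p))

  def main(args: Array[String]): Unit = {
    val program = Program("MA1-CS", List(
      Course("INFO-H-415", 5, 1, Set.empty),
      Course("INFO-H-417", 5, 1, Set("INFO-H-419")),
      Course("INFO-H-419", 5, 2, Set.empty),
      Course("MEMO-H-501", 55, 2, Set.empty)))
    runChecks(program, List(ectsLimit, prerequisiteOrder)).foreach(println)
  }
}
</code>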
  
A proof-of-concept implementation of a revision tool that supports the first two requirements above is currently being developed in the context of a PROJH402 project. The MFE student that selects this topic is expected to:
  * Develop this prototype to a production-ready implementation.
  * Implement the communication with the central ULB database.
  * Interact with the administration of the Ecole Polytechnique to fine-tune the above requirements; test the implementation; and integrate remarks after testing.
  
**Interested?** Contact : Stijn Vansummeren (stijn.vansummeren@ulb.ac.be), Frédéric Robert (frrobert@ulb.ac.be)

===== Semi-Supervised Entity Resolution =====
Toon Calders (WIT)

In the big data era, large collections of data have become available for analysis. These data, however, often come from different data sources and may contain errors. Consider for instance a company that wants to combine data from marketing and sales in order to see to what extent the targeted marketing campaign has been successful in attracting new customers. A key operation in this analysis is the identification of which records from marketing and sales refer to the same person. In this way it can be determined which targeted potential customers were already clients, and of the contacted non-clients, which ones reacted to the marketing campaign. Furthermore, most likely the records of marketing are far less reliable and formatted differently than those of sales. For instance, the marketing records won't usually contain a client number. The process of linking these sources together and identifying which records refer to the same person is known as entity resolution. Most existing approaches for entity resolution use either a fixed set of pre-determined rules, which may be sub-optimal for the problem at hand, or are based on learning classifiers, which requires large amounts of labelled data.

In this thesis you will study the possibility of entity resolution in the absence of large collections of labelled data, by exploiting redundancies in the features with which records can be compared, in combination with an active learning approach in which volunteers can be asked to label some examples on the fly.
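
As a toy illustration of feature-based record comparison (the starting point on top of which active learning would be layered), the following sketch matches records by token overlap; field names, the similarity measure, and the threshold are illustrative assumptions:

<code scala>
// Propose candidate matches between two sources using Jaccard similarity of name/city tokens.
object EntityResolutionSketch {

  case class Record(source: String, id: Int, name: String, city: String)

  def tokens(r: Record): Set[String] =
    s"${r.name} ${r.city}".toLowerCase.split("\\W+").filter(_.nonEmpty).toSet

  def jaccard(a: Set[String], b: Set[String]): Double =
    if (a.isEmpty && b.isEmpty) 1.0
    else a.intersect(b).size.toDouble / a.union(b).size

  def candidateMatches(xs: List[Record], ys: List[Record], threshold: Double): List[(Record, Record, Double)] =
    for {
      x <- xs
      y <- ys
      sim = jaccard(tokens(x), tokens(y))
      if sim >= threshold
    } yield (x, y, sim)

  def main(args: Array[String]): Unit = {
    val marketing = List(Record("marketing", 1, "J. Janssens", "Bruxelles"))
    val sales = List(
      Record("sales", 10, "Jan Janssens", "Bruxelles"),
      Record("sales", 11, "Marie Dupont", "Namur"))
    // Only the (J. Janssens, Jan Janssens) pair exceeds the threshold.
    candidateMatches(marketing, sales, 0.4).foreach(println)
  }
}
</code>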

**Interested?** Contact [[toon.calders@ulb.ac.be|Toon Calders]]

===== Using Non-Redundant Sequential Pattern Mining for Process Discovery =====
Toon Calders (WIT)

Process mining is the act of deriving a process model, such as for instance a Petri net or a BPMN model, from an event log. An example of such a log could be all events that an insurance company undertakes for pricing a car insurance based on a request from a client. Events could be looking up whether the client has been blacklisted, checking his or her history w.r.t. car accidents, estimating the risk based on car type, age and gender of the requester, making a proposal, soliciting the agreement of the client, and, in case of disagreement, contacting a manager to approve a special offer, etc. Several such traces, recorded for different clients, may allow the automatic reconstruction of a process model. There exist several approaches for process mining, including footprint-based algorithms such as Alpha and Alpha+, heuristic algorithms including the heuristics miner, genetic algorithms, region-based methods, etc. The goal of this thesis is to explore the possibility of using current state-of-the-art data mining algorithms for sequence and episode mining as a basis for a new and improved version of the alpha algorithm.
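
For intuition, the sketch below computes the direct-succession relation that footprint-based algorithms such as Alpha start from; the log format is an illustrative assumption:

<code scala>
// First step of footprint-based discovery: which activity directly follows which.
object FootprintSketch {

  type Trace = List[String]

  // All pairs (a, b) such that b directly follows a in some trace.
  def directlyFollows(log: List[Trace]): Set[(String, String)] =
    log.flatMap(trace => trace.zip(trace.tail)).toSet

  def main(args: Array[String]): Unit = {
    val log = List(
      List("register", "check blacklist", "estimate risk", "make proposal"),
      List("register", "estimate risk", "check blacklist", "make proposal"))
    directlyFollows(log).foreach(println)
    // "check blacklist" and "estimate risk" follow each other in both orders,
    // which the alpha algorithm would interpret as parallelism.
  }
}
</code>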

Van der Aalst, W. M. (2011). Process Mining: Discovery, Conformance and Enhancement of Business Processes. Springer.

Interested? Contact [[toon.calders@ulb.ac.be|Toon Calders]]

===== Mining patterns for compression =====
Toon Calders (WIT)

Data mining is the research discipline that studies the extraction of information from large amounts of data. One of the typical data mining tasks is pattern mining, where we try to find regularities that occur frequently in a dataset. The prototypical example is that of a supermarket storing, for every customer visiting the supermarket, the transaction; that is, the set of items that were bought by that customer. The frequent itemset mining problem is to detect which combinations of products were sold together more often than a given threshold. One of the major problems of pattern mining algorithms, however, is the enormous amount of redundant patterns they generate; for instance, very popular items, such as toilet paper, tend to appear in many frequent combinations purely due to chance. In order to deal with this problem, techniques based upon compression and minimum description length (MDL) were proposed to reduce the number of patterns. The rationale behind the minimal description length principle is that a set of patterns that describes well what is happening in the dataset should allow for a good compression. For a collection of patterns, the quality is measured as the description length of the patterns plus the size of the data compressed with these patterns. For instance, if the pattern {bread, milk, butter} has a high frequency, we could opt to replace every occurrence of this pattern by a special code, effectively reducing the encoding length of the data. Surprisingly, however, the MDL principle has until now only been used to rule out redundant patterns, and it has not been researched yet how well the discovered patterns actually compress the data as compared to compression algorithms such as Lempel–Ziv–Welch.

Hence, in this highly research-oriented graduation project, two research questions are central: (1) How well do non-redundant pattern sets based on MDL compress the data, and (2) Can we extract useful patterns from existing compression algorithms?
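
As a back-of-the-envelope illustration of the MDL trade-off (model cost plus data cost), the following toy sketch charges one unit per pattern or item occurrence; real MDL-based methods such as Krimp use proper code lengths, and all names here are illustrative:

<code scala>
// Total cost = cost of describing the patterns + cost of the data re-encoded with them.
object MdlSketch {

  type Itemset = Set[String]

  // Encode each transaction greedily: use a pattern wherever it is contained,
  // then encode the remaining items individually.
  def encodedLength(transactions: List[Itemset], patterns: List[Itemset]): Int = {
    val modelCost = patterns.map(_.size).sum          // describing the patterns
    val dataCost = transactions.map { t =>
      val (covered, used) = patterns.foldLeft((Set.empty[String], 0)) {
        case ((cov, n), p) =>
          if (p.subsetOf(t) && (p intersect cov).isEmpty) (cov ++ p, n + 1) else (cov, n)
      }
      used + (t.size - covered.size)                  // pattern codes + leftover singletons
    }.sum
    modelCost + dataCost
  }

  def main(args: Array[String]): Unit = {
    val db = List.fill(4)(Set("bread", "milk", "butter")) :+ Set("bread", "beer")
    println(encodedLength(db, Nil))                                   // 14: no patterns, all singletons
    println(encodedLength(db, List(Set("bread", "milk", "butter"))))  // 3 + 4*1 + 2 = 9
  }
}
</code>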

Interested? Contact [[toon.calders@ulb.ac.be|Toon Calders]]
  
  
 