Our laboratory performs collaborative research with Euranova R&D (http://euranova.eu/). The list of subjects proposed for this year by Euranova can be found {{:teaching:mfe:master_thesis_euranova_2015.pdf|here}}.
  
These subjects include topics on distributed graph processing, processing big data using Map/Reduce, cloud computing, and social networks.
  
  * Contact : [[ezimanyi@ulb.ac.be|Esteban Zimanyi]]

===== Complex Event Processing for Security Analytics =====

As noted by [[http://home.deib.polimi.it/cugola/Papers/cep_survey.pdf|Cugola and Margara]], "an increasing number of distributed applications requires processing continuously flowing data ("events") from geographically distributed sources at unpredictable rates to obtain timely responses to complex queries. Examples of such applications come from the most disparate fields: from fraud detection to network intrusion detection systems, from wireless sensor networks to financial tickers, from traffic management to click-stream inspection."

These requirements have led to the development of a number of systems specifically designed to process information as a flow (or a set of flows) of continuous data ("events") according to a set of pre-deployed processing rules. Despite having a common goal, these systems differ in a wide range of aspects, including architecture, data models, rule and pattern languages, and processing mechanisms. In part, this is due to the fact that they were the result of the research efforts of different communities, each one bringing its own view of the problem and its background to the definition of a solution.

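To make the notion of a pre-deployed processing rule concrete, here is a minimal sketch in Scala of a windowed rule that raises an alert when a single host produces five failed logins within one minute. The ''Event'' type, the rule parameters, and the alert format are illustrative assumptions, not the API of any of the surveyed systems.

<code scala>
// Minimal sketch of a windowed CEP-style rule (hypothetical types and
// thresholds; not the API of any existing CEP engine).
case class Event(host: String, kind: String, timestampMs: Long)

object FailedLoginRule {
  val WindowMs  = 60 * 1000L // one-minute sliding window
  val Threshold = 5          // failed logins needed to raise an alert

  // Per-host timestamps of recent failed logins.
  private val recent = scala.collection.mutable.Map.empty[String, List[Long]]

  /** Process one incoming event; return an alert if the rule fires. */
  def onEvent(e: Event): Option[String] =
    if (e.kind != "login_failed") None
    else {
      val kept = (e.timestampMs :: recent.getOrElse(e.host, Nil))
        .filter(_ > e.timestampMs - WindowMs)
      recent(e.host) = kept
      if (kept.size >= Threshold)
        Some(s"ALERT: ${kept.size} failed logins from ${e.host} within 60s")
      else None
    }
}
</code>
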
The master thesis is put forward in the context of the SPICES "Scalable Processing and mIning of Complex Events for Security-analytics" research project, funded by Innoviris. The objective of this master thesis is to survey the existing systems and compare their strengths and weaknesses when they are applied specifically to the context of detecting security breaches (network intrusion, fraud detection, ...), and to help, as part of the research project, in the design & implementation of a new system that overcomes these weaknesses.

**Interested?** Contact : [[svsummer@ulb.ac.be|Stijn Vansummeren]]

**Status**: available
  
===== Compiling SPARQL queries into machine code =====
  * create associated tooling for SCULPT (i.e., parser and serializer generator, in the spirit of data description tools)
  
\\
**Deliverables** of this master thesis project:
  - detailed description of the SCULPT proposal (document)
===== Engineering a runtime system and compiler for AQL =====
  
Automatically extracting structured information from text, a task known as //Information Extraction// (IE), has been pursued for decades. Since most analytics over text involves information extraction as a first step, IE is a very important part of data analysis in the enterprise today.
  
In 2005, researchers at the IBM Almaden Research Center developed a new system specifically geared for practical information extraction in the enterprise. This effort led to SystemT, a rule-based IE system with an SQL-like declarative language named AQL (Annotation Query Language). The declarative nature of AQL enables new kinds of tools for extractor development, and draws upon known techniques from query processing in relational database management systems to offer a cost-based optimizer that ensures high-throughput performance. Recent research into the foundations of AQL (http://researcher.watson.ibm.com/researcher/files/us-fagin/jacm15.pdf) has shown that, as an alternative, it is also possible to build a runtime system for AQL based on special kinds of finite state automata. A potential benefit of this alternate runtime system is that text files need only be processed once (instead of multiple times in the cost-based optimizer backend) and may hence provide greater throughput. On the other hand, the alternate system can sometimes have larger memory requirements than the cost-based optimizer backend.
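
As a minimal illustration of the single-pass evaluation idea, the following Scala sketch runs one compiled extraction pattern over a document in a single left-to-right scan. The phone-number rule and the ''Span'' type are invented for illustration; this is neither AQL syntax nor the actual machinery of SystemT or the automaton-based runtime.

<code scala>
// Minimal sketch: a regex-based extractor evaluated in a single pass,
// in the spirit of an automaton-based runtime. Illustrative only.
case class Span(begin: Int, end: Int, text: String)

object PhoneExtractor {
  // Compiled once; the matcher then drives a single scan of the document.
  private val phone = """\b\d{2}[ /.-]\d{3}[ /.-]\d{2}[ /.-]\d{2}\b""".r

  /** One pass over the document text, emitting all matching spans. */
  def extract(doc: String): List[Span] =
    phone.findAllMatchIn(doc)
      .map(m => Span(m.start, m.end, m.matched))
      .toList
}

// Example: PhoneExtractor.extract("Call 02/650.30.86 for info.")
// yields List(Span(5, 17, "02/650.30.86")).
</code>
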
  * [[http://www.diku.dk/kmc/documents/AiPL-CrashCourse.pdf|A Crash-Course in Regular Expression Parsing and Regular Expressions as Types.]]
  
\\
**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
  
\\
**Status**: available
  
Datalog is a fundamental query language in data management based on logic programming. It essentially extends select-from-where SQL queries with recursion. There is a recent trend in data management research to use datalog to specify distributed applications, most notably on the web, as well as to do inference on the semantic web. The goal of this thesis is to engineer a basic **distributed datalog system**, i.e., a system that is capable of compiling & running distributed datalog queries. The system should be implemented in the Scala programming language. Learning Scala is part of the master thesis project.
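
As a concrete example of the recursion that datalog adds on top of select-from-where queries, the sketch below evaluates the classic transitive-closure program with a semi-naive fixpoint in plain Scala. It is an in-memory toy, assuming edge facts are given as pairs; distributing the evaluation and plugging in better join algorithms is exactly what the thesis adds on top.

<code scala>
// Semi-naive evaluation of:
//   reach(x, y) <- edge(x, y).
//   reach(x, z) <- reach(x, y), edge(y, z).
object TransitiveClosure {
  type Fact = (String, String)

  def reach(edges: Set[Fact]): Set[Fact] = {
    var total = edges // all reach-facts derived so far
    var delta = edges // facts that are new since the last round
    while (delta.nonEmpty) {
      // Join only the new facts against edge (semi-naive evaluation).
      val derived = for {
        (x, y)  <- delta
        (y2, z) <- edges if y2 == y
      } yield (x, z)
      delta = derived -- total
      total = total ++ delta
    }
    total
  }
}

// Example: reach(Set("a" -> "b", "b" -> "c"))
// yields Set("a" -> "b", "b" -> "c", "a" -> "c").
</code>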
  
The system should incorporate recently proposed worst-case optimal join algorithms (i.e., the [[http://arxiv.org/abs/1210.0481|leapfrog trie join]]) and employ known local datalog optimizations (such as magic sets and QSQ).
  
**Validation of the approach** The thesis should propose a benchmark collection of datalog queries and associated data workloads that can be used to test the obtained system, and measure key performance characteristics (elasticity of the system; memory footprint; overall running time, ...)
  
**Deliverables**:
  * Semantics of datalog; overview of known optimization strategies (document)
  * Description of the leapfrog trie join (document)
  * Experimental analysis of developed system on a number of use cases (document)
  
\\
**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]
  
  * Interact with the administration of the Ecole Polytechnique to fine-tune the above requirements; test the implementation; and integrate remarks after testing
  
\\
**Interested?** Contact : Stijn Vansummeren (stijn.vansummeren@ulb.ac.be), Frédéric Robert <frrobert@ulb.ac.be>
  
  
===== Semi-Supervised Entity Resolution =====
Toon Calders (WIT)
  
In the big data era large collections of data have become available for analysis. These data, however, often come from different data sources and may contain errors. Consider for instance a company that wants to combine data from marketing and sales in order to see to what extent the targeted marketing campaign has been successful in attracting new customers. A key operation in this analysis is the identification of which records from marketing and sales refer to the same person. In this way it can be determined which targeted potential customers were already clients, and, of the contacted non-clients, which ones reacted to the marketing campaign. Furthermore, the records of marketing are most likely far less reliable and formatted differently than those of sales; for instance, the marketing records won't usually contain a client number. The process of linking these sources together and identifying which records refer to the same person is known as entity resolution. Most existing approaches for entity resolution either use a fixed set of pre-determined rules, which may be sub-optimal for the problem at hand, or are based on learning classifiers, which requires large amounts of labelled data.
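
As a small illustration of what a fixed-rule matcher looks like, the Scala sketch below compares two records using a single hand-crafted similarity feature. The ''Record'' fields, the Jaccard feature, and the 0.5 threshold are illustrative assumptions; replacing such rigid rules by semi-supervised, actively trained decisions is the point of the thesis.

<code scala>
// Minimal sketch of rule-based record matching for entity resolution.
// Fields, feature, and threshold are illustrative assumptions.
case class Record(name: String, city: String)

object Matcher {
  private def tokens(s: String): Set[String] =
    s.toLowerCase.split("""\W+""").filter(_.nonEmpty).toSet

  /** Jaccard similarity between the token sets of two strings. */
  def jaccard(a: String, b: String): Double = {
    val (ta, tb) = (tokens(a), tokens(b))
    if (ta.isEmpty && tb.isEmpty) 1.0
    else (ta intersect tb).size.toDouble / (ta union tb).size
  }

  /** Fixed rule: same city and sufficiently similar names. */
  def sameEntity(r1: Record, r2: Record): Boolean =
    r1.city.equalsIgnoreCase(r2.city) && jaccard(r1.name, r2.name) > 0.5
}
</code>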
  
In this thesis you will study the possibility of entity resolution in the absence of large collections of labelled data, by exploiting redundancies in the features with which records can be compared, in combination with an active learning approach in which volunteers can be asked to label some examples on the fly.
\\
**Interested?** Contact [[toon.calders@ulb.ac.be|Toon Calders]]
  
===== Using Non-Redundant Sequential Pattern Mining for Process Discovery =====
Toon Calders (WIT)
  
Process mining is the act of deriving a process model, such as for instance a Petri-net or a BPMN model, based on an event log. An example of such a log could be all events that an insurance company undertakes for pricing a car insurance based on a request from a client. Events could be looking up if the client has been blacklisted, checking his or her history w.r.t. car accidents, estimating the risk based on car type, age and gender of the requester, making a proposal, soliciting the agreement of the client, and, in case of disagreement, contacting a manager to approve a special offer. Several such traces for different clients may allow the automatic reconstruction of a process model. There exist several approaches for process mining, including footprint-based algorithms such as Alpha and Alpha+, heuristic algorithms such as the heuristics miner, genetic algorithms, region-based methods, etc. The goal of this thesis is to explore the possibility of using current state-of-the-art data mining algorithms for sequence and episode mining as the basis of a new and improved version of the alpha-algorithm.
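
As a concrete starting point, the Scala sketch below computes the direct-succession and causality relations that underlie the alpha algorithm's footprint matrix. Representing a trace as a list of activity names is an assumption made for illustration.

<code scala>
// Direct succession (a > b) and causality (a -> b) from an event log,
// the raw material of the alpha algorithm's footprint matrix.
object Footprint {
  type Trace = List[String] // a trace is a sequence of activity names

  /** All pairs (a, b) such that b directly follows a in some trace. */
  def directlyFollows(log: List[Trace]): Set[(String, String)] =
    log.flatMap(trace => trace.zip(trace.tail)).toSet

  /** Causality a -> b: a > b holds but b > a does not. */
  def causality(log: List[Trace]): Set[(String, String)] = {
    val df = directlyFollows(log)
    df.filter { case (a, b) => !df.contains((b, a)) }
  }
}

// Example: Footprint.causality(List(List("register", "check", "decide")))
// yields Set(("register", "check"), ("check", "decide")).
</code>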
  
Van der Aalst, W. M. (2011). Process Mining: Discovery, Conformance and Enhancement of Business Processes. Springer.
Interested? Contact [[toon.calders@ulb.ac.be|Toon Calders]]
  
  
  
 