  
===== Project proposals =====

==== Fast loading of semantic web datasets into native RDF stores ====

The next generation of the Web, the so-called Semantic Web, stores
extremely large knowledge bases in the RDF data model. In this data
model, all knowledge is represented by means of triples of the form
(subject, property, object), where the subject, property, and object can
be URLs, among other things.

In order to efficiently query such knowledge bases, the RDF data is
typically loaded into a so-called native RDF store. To ensure that the
knowledge is encoded for fast retrieval, the RDF store first encodes
all variable-length URLs in the dataset as fixed-width integers, among
other things. Each RDF triple is then represented by the corresponding
integer triple (integer_of_subject, integer_of_property,
integer_of_object).
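
The following is a minimal, illustrative sketch (in Python, with made-up
example URLs) of this dictionary encoding: every distinct term is assigned
the next free integer ID, and each triple is rewritten as a triple of IDs.
It essentially corresponds to the first, hashmap-based algorithm listed
below.

<code python>
# Dictionary encoding of RDF triples (illustrative sketch only).
def encode_triples(triples):
    dictionary = {}          # term -> integer ID (kept entirely in memory)
    encoded = []
    for s, p, o in triples:
        ids = []
        for term in (s, p, o):
            if term not in dictionary:
                dictionary[term] = len(dictionary)   # next free ID
            ids.append(dictionary[term])
        encoded.append(tuple(ids))
    return encoded, dictionary

# Hypothetical example data
triples = [
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows",
     "http://example.org/bob"),
    ("http://example.org/bob", "http://xmlns.com/foaf/0.1/knows",
     "http://example.org/alice"),
]
print(encode_triples(triples))   # ([(0, 1, 2), (2, 1, 0)], {...})
</code>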

The purpose of this project is to implement and experimentally compare
a number of algorithms that can perform this encoding:

  * The trivial algorithm that simply maintains a hashmap mapping URLs to their integer codes. When processing a triple (s,p,o), it looks up s, p, and o in the hashmap to see whether they have already been assigned an integer ID. If so, that ID is used for the encoding; otherwise they are inserted into the hashmap with new, unique IDs. The downside of this approach is that, while simple, it requires that all URLs fit in working memory.

  * The slightly smarter algorithm that works in multiple stages: the ID is computed by a pre-fixed hash function. For each URL, the URL and its ID are written to an output file. This file is later sorted on ID to check for possible hash collisions between distinct URLs. (A simplified sketch of this two-pass approach is given after this list.)

  * Algorithms that use the best known state-of-the-art data structures for compactly representing sets of strings, such as the HAT-trie (Nikolas Askitis and Ranjan Sinha, "Engineering scalable, cache and space efficient tries for strings", The VLDB Journal, Volume 19, Issue 5, pp. 633-660, October 2010; and "HAT-Trie: A Cache-Conscious Trie-Based Data Structure For Strings", Proceedings of the 30th Australasian Computer Science Conference (ACSC), Volume 62, pp. 97-105, 2007).

  * Variations of the above algorithms, fine-tuned for semantic web datasets.
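
The following sketch illustrates the second, multi-stage approach under
simplifying assumptions: the pre-fixed hash function is taken to be the
first 8 bytes of SHA-1, and the (ID, URL) pairs are kept and sorted in
memory, whereas a real implementation would write them to a file and use
external sorting.

<code python>
import hashlib

def url_id(url):
    # Pre-fixed hash function; here the first 8 bytes of SHA-1 (an assumption).
    return int.from_bytes(hashlib.sha1(url.encode("utf-8")).digest()[:8], "big")

def encode_and_check(triples):
    pairs = []      # (ID, URL) pairs; written to a file in a real implementation
    encoded = []
    for s, p, o in triples:
        encoded.append(tuple(url_id(t) for t in (s, p, o)))
        pairs.extend((url_id(t), t) for t in (s, p, o))

    # "Sort on ID" step; then detect collisions between distinct URLs.
    pairs.sort()
    for (id1, u1), (id2, u2) in zip(pairs, pairs[1:]):
        if id1 == id2 and u1 != u2:
            raise ValueError("hash collision between %s and %s" % (u1, u2))
    return encoded
</code>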

**Contact** : Stijn Vansummeren (stijn.vansummeren@ulb.ac.be)

**Status**: available

==== Graph Indexing for Fast Subgraph Isomorphism Testing ====

There is an increasing amount of scientific data, mostly from the bio-medical sciences, that can be represented as collections of graphs (chemical molecules, gene interaction networks, ...). A crucial operation when searching in this data is subgraph isomorphism testing: given a pattern P that one is interested in (also a graph) and a collection D of graphs (e.g., chemical molecules), find all graphs in D that have P as a subgraph. Unfortunately, the subgraph isomorphism problem is computationally intractable. In ongoing research, to enable tractable processing of this problem, we aim to reduce the number of candidate graphs in D on which a subgraph isomorphism test needs to be executed. Specifically, we index the graphs in the collection D by decomposing them into graphs for which subgraph isomorphism *is* tractable. Based on ideas from information retrieval, an associated algorithm can then be formulated that filters out graphs that certainly cannot match P.
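
To illustrate the general filter-and-verify idea (not the decomposition-based
index studied in this project), the sketch below summarises each data graph
by a simple feature set (its vertex labels, an arbitrary choice for
illustration): graphs whose features do not cover the pattern's features can
never contain the pattern and are discarded before the expensive subgraph
isomorphism test, here performed with the networkx library.

<code python>
from networkx.algorithms import isomorphism

def features(g):
    # Cheap summary of a graph; a real index would use richer features
    # (e.g., the tractable subgraphs obtained by decomposition).
    return {data["label"] for _, data in g.nodes(data=True)}

def search(pattern, collection):
    needed = features(pattern)
    matches = []
    for g in collection:
        if not needed <= features(g):        # filtering step: cheap set containment
            continue
        gm = isomorphism.GraphMatcher(
            g, pattern, node_match=lambda a, b: a["label"] == b["label"])
        if gm.subgraph_is_isomorphic():      # verification: exact but expensive
            matches.append(g)
    return matches
</code>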

In this project, the student will empirically validate on real-world datasets the extent to which graphs can be decomposed into graphs for which subgraph isomorphism is tractable, and run experiments to validate the effectiveness of the proposed method in terms of filtering power.

**Interested?** Contact : [[stijn.vansummeren@ulb.ac.be|Stijn Vansummeren]]

**Status**: available
  
==== Principles of Database Management Architectures in Managed Virtual Environments ====
  
  
==== Development of a Personal Scientific Digital Library Management System ====

In this project, the student is asked to construct a software system to help manage large collections of scientific papers in digital form. Specifically, the system must be able to:
  - Scan a given filesystem location for given filetypes (PDFs, EPUB, ...) containing scientific articles (a minimal sketch of this scanning step is given below).
  - Extract the metadata from each identified file. Here, the metadata includes the title of the article, its authors, the publishing venue, the publisher, the year of publication, the article's abstract, ... The development of an intelligent way to retrieve this metadata is required. This could be done, for example, by a combination of parsing the file and contacting the internet repositories of known publishers (ACM, Springer, Elsevier, etc.) to retrieve the data.
  - Offer search capabilities, in order to allow a user to find all indexed articles matching certain criteria (title, author, ...).
  - Offer archiving capabilities.
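
As a starting point, the scanning step could look as follows; the metadata
extraction is left as a stub, since the intelligent extraction (parsing the
files, querying publisher repositories) is the core of the project. The file
types and library location used here are assumptions for illustration.

<code python>
from pathlib import Path

FILETYPES = {".pdf", ".epub"}            # file types to index (assumption)

def scan(root):
    """Recursively find candidate article files below the given directory."""
    root = Path(root).expanduser()
    return [p for p in root.rglob("*") if p.suffix.lower() in FILETYPES]

def extract_metadata(path):
    """Stub: parse the file and/or query publisher repositories to fill in
    title, authors, venue, publisher, year, abstract, ..."""
    return {"file": str(path), "title": None, "authors": [], "year": None}

if __name__ == "__main__":
    for f in scan("~/papers"):           # hypothetical library location
        print(extract_metadata(f))
</code>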

Use of semantic web technologies (RDF, SPARQL, ...) to store and search the metadata is encouraged.

**Contact** : Stijn Vansummeren (stijn.vansummeren@ulb.ac.be)

**Status**: taken
  
 