MSc. Project: Linking Maritime Datasets to the Dutch Ships and Sailors Cloud – Case studies on Archangelvaart and Elbing

[This post was written by Jeroen Entjes and describes his MSc. thesis research]

Dutch maritime supremacy during the Dutch Golden Age has had a profound influence on the modern Netherlands and possibly on other places around the globe. As such, much historical research has been done on the matter, facilitated by the thorough documentation that many ports kept of their shipping. As more and more of these records are digitized, new ways of exploring this data are created.

Screenshot showing an entry from the Elbing website

This master project uses one such way. Building on the Dutch Ships and Sailors project, digitized maritime datasets have been converted to RDF and published as Linked Data. Linked Data refers to structured data on the web that is published and interlinked according to a set of standards. The conversion was based on requirements for this data, drawn up with historians from the Huygens ING Institute, which provided the datasets. The datasets chosen were those of Archangel and Elbing, as these offer information on the Dutch Baltic trade, the cradle of the Dutch merchant navy that sailed the world during the Dutch Golden Age.

Along with requirements for the data, the historians were also interviewed to gather research questions that combined datasets could help solve. The goal of this research was to see if additional datasets could be linked to the existing Dutch Ships and Sailors cloud and if such a conversion could help solve the research questions the historians were interested in.
Data visualization showing shipping volume of different datasets.

As part of this research, the datasets have been converted to RDF and published as Linked Data as an addition to the Dutch Ships and Sailors cloud, and a set of interactive data visualizations has been made to answer the historians' research questions. Based on the conversion, a set of recommendations is made on how to convert new datasets and add them to the Dutch Ships and Sailors cloud. All data representations and conversions have been evaluated by historians to assess their effectiveness.
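At its core, a shipping-volume visualization like the one above boils down to counting voyages per dataset per year. As a minimal sketch of that aggregation step (the record fields here are illustrative, not the actual RDF property names used in the conversion):

```python
from collections import Counter

def voyages_per_year(records):
    """Count voyages per (dataset, year) pair: the series behind a
    shipping-volume chart. Field names are illustrative."""
    counts = Counter()
    for r in records:
        counts[(r["dataset"], r["year"])] += 1
    return counts

# Toy records standing in for query results over the Linked Data.
records = [
    {"dataset": "Elbing", "year": 1600},
    {"dataset": "Elbing", "year": 1600},
    {"dataset": "Archangel", "year": 1600},
]
volume = voyages_per_year(records)
```

In the real visualizations these counts would be obtained by querying the published Linked Data rather than a Python list.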

The data visualizations can be found at http://www.entjes.nl/jeroen/thesis/. Jeroen's thesis can be found here: MSc. Thesis Jeroen Entjes

Share This:

MSc. Project Roy Hoeymans: Effective Recommendation in Knowledge Portals – the SKYbrary case study

[This post was written by Roy Hoeymans. It describes his MSc. project ]

In this master project, which I have done externally at DNV-GL, I have built a recommender system for knowledge portals. Recommender systems are pieces of software that provide suggestions for related items to a user. My research focuses on the application of a recommender system in knowledge portals. A knowledge portal is an online single point of access to information or knowledge on a specific subject. Examples of knowledge portals are SKYbrary (www.skybrary.aero) and Navipedia (www.navipedia.org).

Part of this project was a case study on SKYbrary, a knowledge portal on the subject of aviation safety. In this project I looked at the types of data that are typically available to knowledge portals. I used user navigation pattern data, which I retrieved via the Google Analytics API, and the text of the articles to create a user-navigation-based and a content-based algorithm. The user-navigation-based algorithm uses an item association formula and the content-based algorithm uses a tf-idf weighting scheme to calculate content similarity between articles. Because both types of algorithm have their own disadvantages, I also developed a hybrid algorithm that combines the two.
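To make the three components concrete, here is a hedged sketch (not the thesis code) of tf-idf content similarity, an item-association score computed from navigation sessions, and a simple linear hybrid of the two; the article names, tokenization, and the weighting parameter `alpha` are all illustrative:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """tf-idf vectors for a dict of article_id -> token list."""
    n = len(docs)
    df = Counter()
    for tokens in docs.values():
        df.update(set(tokens))
    vecs = {}
    for doc_id, tokens in docs.items():
        tf = Counter(tokens)
        vecs[doc_id] = {t: (c / len(tokens)) * math.log(n / df[t])
                        for t, c in tf.items()}
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse tf-idf vectors."""
    num = sum(w * b[t] for t, w in a.items() if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def association(sessions, x, y):
    """Item association: fraction of sessions containing x that also contain y."""
    with_x = [s for s in sessions if x in s]
    return sum(1 for s in with_x if y in s) / len(with_x) if with_x else 0.0

def hybrid(vecs, sessions, x, y, alpha=0.5):
    """Linear combination of content and navigation evidence."""
    return alpha * cosine(vecs[x], vecs[y]) + (1 - alpha) * association(sessions, x, y)

articles = {  # toy article texts, already tokenized
    "runway_safety": ["runway", "safety", "runway"],
    "runway_incursion": ["runway", "incursion"],
    "engine_failure": ["engine", "failure"],
}
sessions = [["runway_safety", "runway_incursion"],
            ["runway_safety", "runway_incursion"],
            ["runway_safety", "engine_failure"]]
vecs = tfidf_vectors(articles)
score = hybrid(vecs, sessions, "runway_safety", "runway_incursion")
```

With this toy data, the runway articles score higher than the unrelated pair because both evidence sources agree.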

Screenshot of the demo application

To see which type of algorithm was the most effective, I conducted a survey among the content editors of SKYbrary, who are domain experts on the subject. Each question in the survey showed an article along with recommendations for that article. The respondent was then asked to rate each recommended article on a scale from 1 (completely irrelevant) to 5 (very relevant). The results of the survey showed that the hybrid algorithm performs better than the user-navigation-based algorithm, with a statistically significant difference. No significant difference between the hybrid algorithm and the content-based algorithm was found, however. Future work might include a more extensive or different type of evaluation.

In addition to the research I have done on the algorithms, I have also developed a demo application that the content editors of SKYbrary can use to show recommendations for a selected article and algorithm.

For more information, view Roy Hoeymans' Thesis Presentation [pdf] or read the thesis [Academia].

Share This:

Two TPDL papers accepted!

Today, the TPDL (International Conference on Theory and Practice of Digital Libraries) results came in and both papers on which I am a co-author got accepted. Today is a good day 🙂 In the first paper, we present work done during my stay at the Netherlands Institute for Sound and Vision on automatic term extraction from subtitles. The interesting thing about this paper is that it mainly examines how these algorithms function in a 'real' context, that is, within a larger media ecosystem. The paper was co-authored with Roeland Ordelman and Josefien Schuurman.

Screenshot of the QHP tool
In the second paper, "Supporting Exploration of Historical Perspectives across Collections", of which I am one of the co-authors, we present an exploratory search application that highlights different perspectives on World War II across collections (including Verrijkt Koninkrijk). The project is funded by the Amsterdam Data Science seed project and carried out with Daan Odijk, research assistants Cristina Gârbacea and Thomas Schoegje, VU/CWI colleagues Laura Hollink and Jacco van Ossenbruggen, and historian Kees Ribbens (NIOD). You can read more about it on Daan's blog.

Share This:

A Sugar Activity for Subsistence Farmers

[reblogged from http://worldwidesemanticweb.org/2015/03/06/a-sugar-activity-for-subsistence-farmers/ This post is written by Tom Jansen]

Screenshot of the Sugar activity (Tom Jansen)

Subsistence farming or agriculture is a form of farming where farmers mainly focus on growing enough food to be self-sufficient. This type of farming is very common, especially in African countries, where people are heavily dependent on home-grown food. Subsistence farming in these countries, however, has much to gain and great potential. Improving the farming skills of the farmers could make a significant contribution to the reduction of hunger. Unfortunately, farmers often have not had enough agricultural education to grow their own food optimally. To help these farmers, I developed an activity that will improve their farming skills. The application helps farmers identify diseases of their crops and animals and presents ways to manage those diseases and prevent them in the future. Giving them an opportunity to manage diseases of their crops and livestock means giving them an opportunity to improve their harvest. A bigger harvest could contribute substantially to a better way of living for farmers in West Africa, among other places.

The activity is Sugar-based and is therefore perfectly suitable for the XO laptops that are commonly used in West Africa. The activity revolves around a database with a lot of information about diseases of crops and livestock. When the farmer opens the activity, he is led through two menus of options. When the right crop or livestock is selected, a list of diseases is shown, with identification details for each particular disease. When the farmer notices that one description closely matches what is happening to his crops or livestock, he clicks on that disease. Another window then pops up showing the information the farmer needs to manage and prevent the disease.
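The navigation flow described above amounts to lookups in a nested structure. A minimal sketch, with made-up entries purely for illustration (the real database and its advice differ):

```python
# Hypothetical fragment of the disease database; entries are illustrative.
DISEASES = {
    "maize": {
        "maize streak": {
            "identification": "Narrow yellow streaks along the leaf veins.",
            "management": "Remove infected plants and plant resistant varieties.",
        },
    },
    "cattle": {
        "foot rot": {
            "identification": "Lameness and swelling between the claws.",
            "management": "Keep hooves dry and clean; isolate affected animals.",
        },
    },
}

def diseases_for(category):
    """Second menu: the known diseases for the selected crop or livestock."""
    return sorted(DISEASES.get(category, {}))

def disease_info(category, disease):
    """Final window: identification and management advice for one disease."""
    return DISEASES[category][disease]
```

Because the data is a plain nested dictionary, extending the activity to let farmers add or change entries (as suggested below) would mainly require a writable storage layer around this structure.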

Right now it is only possible to access the database and read the information inside it. The activity could be improved by letting farmers not only read, but also change and add information in the database. This way the information, and thus the quality of the activity, could be improved without any outside help.

The activity can be found on the following page (containing all the code): https://github.com/WorldWideSemanticWeb/farming-activity

Read the full report here: Helping Subsistence Farmers in West Africa

Share This:

Linked Data for International Aid Transparency Initiative

In August 2013, VU MSc. student Kasper Brandt finished his thesis on developing, implementing and testing a Linked Data model for the International Aid Transparency Initiative (IATI). Now, more than a year later, that work has been accepted for publication in the Journal on Data Semantics. We are very happy with this excellent result.

Model fragment

IATI is a multi-stakeholder initiative that seeks to improve the transparency of development aid and to that end has developed an open standard for the publication of aid information. Hundreds of NGOs and governments have registered with the IATI registry by publishing their aid activities in this XML standard. Taking the IATI model as input, we have created a Linked Data model based on requirements elicited from qualitative interviews, using an iterative requirements engineering methodology. We have converted the IATI open data from the central registry to Linked Data and linked it to various other datasets such as World Bank indicators and DBpedia. This dataset is made available for re-use at http://semanticweb.cs.vu.nl/iati .
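The conversion step takes records from the XML standard and re-expresses them as RDF triples. As a flavour of what that looks like, here is a tiny sketch that serializes one aid activity as Turtle; the `iati:` vocabulary, URI scheme, and activity identifier below are placeholders (example.org), not the actual namespaces of the published model:

```python
def activity_to_turtle(act):
    """Serialize one aid activity as Turtle. The iati: vocabulary and URI
    scheme are illustrative placeholders, not the model's real namespaces."""
    uri = "<http://example.org/iati/activity/%s>" % act["id"]
    return "\n".join([
        "@prefix iati: <http://example.org/iati/vocab#> .",
        "@prefix dbpedia: <http://dbpedia.org/resource/> .",
        "",
        uri + " a iati:Activity ;",
        '    iati:title "%s" ;' % act["title"],
        "    iati:recipientCountry dbpedia:%s ." % act["country"],
    ])

turtle = activity_to_turtle(
    {"id": "XX-1-123", "title": "Water project", "country": "Burundi"})
```

Linking the recipient country to a DBpedia resource is what makes cross-dataset queries (e.g. combining aid activities with World Bank indicators) possible.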

Screenshot of an application bringing together information from multiple datasets

To demonstrate the added value of this Linked Data approach, we have created several applications that combine the information from the IATI dataset and the datasets it was linked to. As a result, we have shown that creating Linked Data for the IATI dataset and linking it to other datasets gives valuable new insights into aid transparency. Based on actual information needs of IATI users, we were able to show that linking IATI data adds significant value to the data and can fulfill those needs.

A draft of the paper can be found here.

Share This:

Master Project Esra Atesçelik: Cluster Analysis Applied to Europeana

[This post was written by Esra Atesçelik. It describes her MSc. project supervised  by Antoine Isaac and myself]

Digital libraries and aggregators such as Europeana provide access to millions of Cultural Heritage Objects (CHOs). Europeana is one of the libraries that does not maintain collection-level metadata. However, Europeana can cluster objects that have information in common, and use the resulting collection-level information to organize results and help users.

Karola Torkos - Cluster earrings (click to view on Flickr)
In this project we want to show how we can cluster objects from Europeana datasets. We also aim at finding the best way of clustering Europeana metadata and the best parameter settings for clustering. We apply various clustering methods to Europeana metadata and aim at proposing a clustering technique that is most appropriate for grouping Europeana CHOs. In the experiments we evaluated the cluster results manually, at a qualitative and quantitative level.
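As a flavour of the kind of algorithm compared in the experiments, here is a plain k-means sketch on dense feature vectors; in the actual experiments the features would be derived from Europeana metadata fields, and the thesis compares several methods and parameter settings rather than this one toy implementation:

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(points):
    """Component-wise mean of a non-empty list of vectors."""
    return [sum(xs) / len(points) for xs in zip(*points)]

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute centroids, for a fixed number of iterations."""
    centroids = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Toy 2D feature vectors; two obvious groups of two.
points = [[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]]
clusters = kmeans(points, k=2)
```

Choosing `k` and the feature representation is exactly the "best parametric setting" question the thesis investigates.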

The results of the experiments showed that it is difficult to determine the best parameter setting and the best clustering method based on a limited number of experiments alone. However, we have shown a way to cluster Europeana objects that may be useful for Europeana.

View Esra’s presentation [pdf] and her thesis [pdf]

 

Share This:

Master Project Andrea Bravo Balado: Linking Historical Ship Records to Newspaper Archives

[This post was written by Andrea Bravo Balado and is cross-posted at her own blog. It describes her MSc. project supervised  by myself]

Linking historical datasets and making them available on the Web has increasingly become a subject of research in the field of digital humanities. In the Netherlands, history is intimately related to maritime activity, because it has been essential in the development of economic, social and cultural aspects of Dutch society. Being such an important sector, it has been well documented by shipping companies, governments, newspapers and other institutions.

janwillemsen: foto Rotterdam historische schepen (click to view on flickr)
In this master project we assume that, given the importance of maritime activity in everyday life in the 19th and 20th centuries, announcements of the departures and arrivals of ships, or mentions of accidents and other events, can be found in newspapers.

We have taken a two-stage approach: first, a heuristic-based method for record linkage, and then machine-learning algorithms for article classification, to be used for filtering in combination with domain features. Evaluation of the linking method has shown that certain domain features were indicative of mentions of ships in newspapers. Moreover, the classifier methods scored near-perfect precision in predicting ship-related articles.
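To give a feel for the first, heuristic stage, here is a hedged sketch of record linkage between a ship record and a newspaper mention, using fuzzy name matching plus date proximity; the thresholds, field names and example records are illustrative, not those used in the thesis:

```python
from datetime import date
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Fuzzy match on ship names, tolerant of spelling variation."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_link(ship, article, name_threshold=0.8, max_days=30):
    """Propose a link between a ship record and a newspaper article when
    the ship name (nearly) matches and the dates are close together.
    Thresholds are illustrative, not those used in the thesis."""
    if name_similarity(ship["name"], article["mention"]) < name_threshold:
        return False
    return abs((ship["date"] - article["date"]).days) <= max_days

ship = {"name": "De Hoop", "date": date(1860, 5, 1)}
match = {"mention": "de Hoop", "date": date(1860, 5, 10)}
other = {"mention": "Vrouw Maria", "date": date(1860, 5, 10)}
```

The second stage, the article classifier, then filters these candidate links by predicting whether the article is actually ship-related.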

Enriching historical ship records with links to newspaper archives is significant for the digital history community, since it connects two datasets that would otherwise have required extensive annotation work and man-hours to align. Our work is part of the Dutch Ships and Sailors Linked Data Cloud project. Check out Andrea's thesis [pdf].

[googleapps domain=”docs” dir=”presentation/d/1HSzQIWc5SX4AGjOsOlja6gF-n44OwGJRxixklUSQ6Gs/embed” query=”start=false&loop=false&delayms=30000″ width=”680″ height=”411″ /]

Share This:

Master project Rianne Nieland: Talking to Linked Data

[This post was written by Rianne Nieland. It describes her MSc. project supervised  by myself]

People in developing countries often cannot access information on the Web, because they have no Internet access and are often low-literate. A solution could be to provide voice-based access to data on the Web via the GSM network.

In my master project I have investigated how to make general-purpose datasets efficiently available through voice interfaces over GSM. To achieve this, I have developed two voice interfaces, one for Wikipedia and one for DBpedia. The two interfaces use different kinds of input data sources, namely normal web data and Linked Data, so that the two can be compared.

To develop the two voice interfaces, I first elicited requirements from the literature and developed a user interface and conversion algorithms for Wikipedia and DBpedia concepts. In user tests, users evaluated the two voice interfaces, comparing them on speed, error rate and usability.

[Rianne’s thesis presentation slides can be found on slideshare and is embedded below. Her thesis is attached here: Eindversie-Paper-Rianne-Nieland-2057069]

 

[slideshare id=37310122&w=476&h=400&sc=no]

Share This:

W4RA student mini-workshop

Group photo of the First Web alliance for Regreening in Africa Student mini-symposium

Today, we held an informal workshop for students involved in MSc. projects related to the VOICES project and other activities associated with "the Web for Warm Countries". The goal of this meeting was for the students to inform each other about the current status of their research projects and to sketch the bigger picture. I gave a short talk describing the various running projects (VOICES, Furoba Blon, IDSWrapper, SemanticXO) as well as possible future projects (ICONS, the ICT4D course).

Six students presented us with their updates:

  • Henk Kroon told us a bit about his efforts to create a client application that uses the Linked Data based on RadioMarché.
  • Rokhsareh Nakhaei presented extensive models for her design of a serious game that will be used to gather voice fragments in different languages.
  • Albert Chifura is talking to many stakeholders to identify sustainable business models for the M-Event use case of VOICES.
  • Binyam Tesfa is also developing a crowdsourcing application, for digitizing pluvial data from the Sahel. He targets a specific niche (the African 'diaspora') to do this.
  • Deepak Chetri is doing literature research into the design of voice-based interfaces for low-literate users in developing countries.
  • Gavarni Winter is the newest addition to the W4RA family; he is still contemplating his specific research questions.

Also present were Pieter De Leenheer, supervisor for a number of the projects, and Wendelien Tuyp from CIS, who could answer a number of questions about the African context. From my point of view, the meeting was a success and we agreed to organize a second installment later this year.

Share This: