I am an assistant professor (UD) in the User-Centric Data Science group at the Computer Science department of the Vrije Universiteit Amsterdam (VU). I am also a senior research fellow at the Netherlands Institute for Sound and Vision. In my research, I combine (Semantic) Web technologies with Human-Computer Interaction, Knowledge Representation and Information Extraction to tackle research challenges in various domains. These include Cultural Heritage, Digital Humanities and ICT for Development (ICT4D). More information on these projects can be found on this site or through my CV.
Last Friday, the students of the class of 2018/2019 of the course Digital Humanities and Social Analytics in Practice presented the results of their capstone internship project. This course and project are the final element of the Digital Humanities and Social Analytics minor programme, in which students from very different backgrounds gain skills and knowledge about this interdisciplinary topic.
The course took the form of a 4-week internship at an organization working with humanities or social science data and challenges, and student groups were asked to use these skills and knowledge to address a research challenge. Projects ranged from cleaning, indexing, visualizing and analyzing humanities data sets to searching for bias in news coverage of political topics. The students showed their competences not only in their research work but also in communicating this research through great posters.
The complete list of student projects and collaborating institutions is below:
- “An eventful 80 years’ war” at the Rijksmuseum, identifying and mapping historical events from various sources.
- An investigation into the use of structured vocabularies also at the Rijksmuseum
- “Collecting and Modelling Event WW2 from Wikipedia and Wikidata” in collaboration with Netwerk Oorlogsbronnen (see poster image below)
- A project in which a search index for Development documents governed by the NICC foundation was built.
- “EviDENce: Ego Documents Events modelliNg – how individuals recall mass violence” – in collaboration with KNAW Humanities Cluster (HUC)
- “Historical Ecology” – where students searched for mentions of animals in historical newspapers – also with KNAW-HUC
- Project MIGRANT: Mobilities and connection project in collaboration with KNAW-HUC and Huygens ING
- Capturing Bias with media data analysis – an internal project at VU looking at identifying media bias
- Locating the CTA Archive Amsterdam, where a geolocation service and search tool was built
- Linking Knowledge Graphs of Symbolic Music with the Web – also an internal project at VU working with Albert Merono
In the context of our ArchiMediaL project on Digital Architectural History, a number of student projects explored opportunities and challenges around enriching the colonialarchitecture.eu dataset. This dataset lists buildings and sites in countries outside of Europe that at the time were ruled by Europeans (1850-1970).
Patrick Brouwer wrote his IMM bachelor thesis “Crowdsourcing architectural knowledge: Experts versus non-experts” about the differences in annotation styles between architecture historical experts and non-expert crowd annotators. The data suggests that crowdsourcing is a viable option for annotating this type of content, although expert annotations were of higher quality than those of non-experts. The image below shows a screenshot of the user study survey.
Rouel de Romas also looked at crowdsourcing, but focused more on the user interaction and the interface involved. In his thesis “Enriching the metadata of European colonial maps with crowdsourcing” he, like Patrick, used the Accurator platform, developed by Chris Dijkshoorn. A screenshot is seen below. The results corroborate the previous study: in most cases the annotations provided by the participants meet the requirements set by the architectural historian; crowdsourcing is thus an effective method to enrich the metadata of European colonial maps.
[this post is based on Frank Walraven‘s Master thesis]
Who uses DBPedia anyway? This was the question that started a research project for Frank Walraven. The question came up during one of the meetings of the Dutch DBPedia chapter, of which VUA is a member. If usage and users are better understood, this can lead to better servicing of those users, for example by prioritizing the enrichment or improvement of specific sections of DBPedia. Characterizing use(r)s of a Linked Open Data set is an inherently challenging task: in an open Web world, it is difficult to know who is accessing your digital resources. For his MSc project research, which he conducted at the Dutch National Library supervised by Enno Meijers, Frank used a hybrid approach combining a data-driven method based on user log analysis with a short survey of known users of the dataset. As a scope, Frank selected just the Dutch DBPedia dataset.
For the data-driven part of the method, Frank used a complete user log of HTTP requests on the Dutch DBPedia. This log file (see link below) consisted of over 4.5 million entries and logged both URI lookups and SPARQL endpoint requests. For this research, only a subset of the URI lookups was considered.
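The first step of such a log analysis is separating URI lookups from SPARQL endpoint requests. The sketch below illustrates this, assuming an Apache-style access-log format and the usual DBpedia path prefixes (`/sparql`, `/resource/`, `/page/`); the actual log format used in the thesis may differ.

```python
import re
from collections import Counter

# Assumed Apache-style access-log line: "IP ident user [timestamp] "METHOD PATH ...""
LOG_LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)')

def classify_request(line):
    """Label a log line as a SPARQL endpoint request, a URI lookup, or other."""
    m = LOG_LINE.match(line)
    if m is None:
        return None
    path = m.group("path")
    if path.startswith("/sparql"):
        return "sparql"
    if path.startswith("/resource/") or path.startswith("/page/"):
        return "uri_lookup"
    return "other"

def summarize(lines):
    """Count request types over an iterable of log lines."""
    return Counter(c for c in map(classify_request, lines) if c is not None)
```

On a log of millions of lines this would be run as a streaming pass over the file, keeping only the counters in memory.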
As a first analysis step, the origin IP addresses of the requests were categorized. Five classes could be identified (A-E), with the vast majority of IP addresses falling in class “A”: very large networks and bots. Most of the IP addresses in these lists could be traced back to search engine indexing bots such as those from Yahoo or Google. In the remaining classes, Frank manually traced the top 30 most encountered IP addresses, concluding that even there 60% of the requests came from bots, 10% definitely not from bots, with the remaining 30% unclear.
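A rough illustration of this kind of bot tracing (not Frank's actual procedure) is to flag an IP as a likely bot when its reverse-DNS name or an associated user-agent string contains a known crawler marker; the marker list below is an assumption, not taken from the thesis.

```python
# Common crawler markers found in user-agent strings and reverse-DNS names.
# This list is illustrative, not exhaustive.
BOT_MARKERS = ("googlebot", "bingbot", "yahoo", "slurp", "crawler", "spider")

def looks_like_bot(user_agent, rdns_name=""):
    """Return True when the request metadata suggests a search-engine bot."""
    haystack = f"{user_agent} {rdns_name}".lower()
    return any(marker in haystack for marker in BOT_MARKERS)
```

Heuristics like this leave a residue of unclear cases, which matches the 30% "unclear" share reported above.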
The second analysis step in the data-driven method consisted of identifying which types of pages were most requested. To cluster the thousands of DBPedia URI requests, Frank retrieved the ‘categories’ of the pages. These categories are extracted from Wikipedia category links. An example is the “Android_TV” resource, which has two categories: “Google” and “Android_(operating_system)”. Following skos:broader links, a ‘level 2 category’ could also be found to aggregate to an even higher level of abstraction. As not all resources have such categories, this does not give a complete picture, but it does provide some idea of the most popular categories of requested items. After normalizing for categories with large numbers of incoming links, for example the category “non-endangered animal”, the most popular categories were 1. Domestic & International movies, 2. Music, 3. Sports, 4. Dutch & International municipality information and 5. Books.
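The aggregation and normalization step can be sketched as follows; the resource-to-category mapping in the test is invented for illustration. Each category's raw request count is divided by the number of distinct resources linking into it, so that umbrella categories with many incoming links do not dominate purely by size.

```python
from collections import Counter

def popular_categories(requested_resources, categories_of):
    """Rank categories by request count normalized by category size.

    requested_resources: list of requested resource names (one per request)
    categories_of: mapping from resource name to a set of its categories
    """
    raw = Counter()                       # requests per category
    for resource in requested_resources:
        for cat in categories_of.get(resource, ()):
            raw[cat] += 1
    size = Counter()                      # distinct resources per category
    for cats in categories_of.values():
        for cat in cats:
            size[cat] += 1
    return {cat: raw[cat] / size[cat] for cat in raw}
```

This is only a first-order correction; other normalizations (e.g. per level-2 category after following skos:broader) are possible.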
Frank also set up a user survey to corroborate this evidence. The survey contained questions about the how and why of the respondents’ Dutch DBPedia use, including the categories they were most interested in. The survey was distributed through the Dutch DBPedia website and via Twitter, but attracted only five respondents. This illustrates the difficulty of the problem: users of the DBPedia resource are not necessarily easily reachable through communication channels. The five respondents were all quite closely related to the chapter, but the results were interesting nonetheless. Most of the users used the DBPedia SPARQL endpoint. The full results of the survey can be found through Frank’s thesis, but in terms of corroboration the survey revealed that four of the five categories found in the data-driven method were also identified in the top five resulting from the survey. The fifth one identified in the survey was ‘geography’, which could be matched to the fifth from the data-driven method. Frank’s research shows that although it remains a challenging problem, using a combination of data-driven and user-driven methods it is indeed possible to get an indication of the most-used categories on DBPedia. Within the Dutch DBPedia chapter, we are currently considering follow-up research questions based on Frank’s research.
[This post describes the Master Project work of Information Science students Tim de Bruyn and John Brooks and is based on their theses]
Audiovisual archives adopt structured vocabularies for their metadata management. With Semantic Web and Linked Data now becoming stable and commonplace technologies, organizations are looking at linking these vocabularies to external sources, for example Wikidata, DBPedia or GeoNames.
However, the benefits of such endeavors to the organizations are generally underexplored. For their master project research, done in the form of an internship at the Netherlands Institute for Sound and Vision (NISV), Tim de Bruyn and John Brooks conducted a case study into the benefits of linking the “Common Thesaurus for Audiovisual Archives” (or GTAA) and the general-purpose dataset Wikidata. In their approach, they identified various use cases for user groups that are both internal (Tim) as well as external (John) to the organization. Not only were use cases identified and matched to a partial alignment of GTAA and Wikidata, but several proof of concept prototypes that address these use cases were developed.
For the internal users, three cases were elaborated, including a calendar service through which personnel receive notifications when an author of a work passed away 70 years ago, thereby changing the copyright status of the work. This information is retrieved from the Wikidata page of the author, aligned with the GTAA entry (see fig 1 above).
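The rule behind this calendar service can be sketched minimally as below. In the actual prototype the author's death date comes from the Wikidata entity (property P570, date of death) aligned with the GTAA person entry; here a plain year is passed in instead.

```python
from datetime import date

TERM_YEARS = 70  # EU copyright term: 70 full calendar years after the author's death

def public_domain_year(death_year):
    """Year from whose 1 January onward the author's works are public domain."""
    # The term runs until the end of the 70th calendar year after death,
    # so works enter the public domain on 1 January of the following year.
    return death_year + TERM_YEARS + 1

def is_public_domain(death_year, today=None):
    """Check whether works of an author who died in death_year are public domain."""
    today = today or date.today()
    return today.year >= public_domain_year(death_year)
```

A notification service would simply scan the aligned authors each January for those whose `public_domain_year` equals the current year.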
A second internal case involves the new ‘story platform’ of NISV. Here Tim implemented a prototype end-user application to find stories related to the one currently shown to the user, based on persons occurring in that story (fig 2).
The external cases centered around the users of the CLARIAH Media Suite. For this extension, several humanities researchers were interviewed to identify worthwhile extensions with Wikidata information. Based on the outcomes of these interviews, John Brooks developed the Wikidata retrieval service (fig 3).
The research presented in the two theses is a good example of User-Centric Data Science, where affordances provided by data linkages are aligned with various user needs. The various tools were evaluated with end users to ensure they match their actual needs. The research was reported in a research paper which will be presented at the MTSR2018 conference: (Victor de Boer, Tim de Bruyn, John Brooks, Jesse de Vos. The Benefits of Linking Metadata for Internal and External users of an Audiovisual Archive. To appear in Proceedings of MTSR 2018 [Draft PDF])
Find out more:
See my slides for the MTSR presentation below
[This post describes the Information Sciences Master Project of Hameedat Omoine and is based on her thesis.]
In the quest to improve the lives of farmers and agricultural productivity in rural Burkina Faso, meteorological data has been identified as one of the key information needs for local farmers. Various online weather information services are available, but many are not tailored specifically to this target user group. In a research case study, Hameedat Omoine designed a weather information system that collects not only weather but also related agricultural information and provides farmers with this information, allowing them to improve agricultural productivity and the livelihood of the people of rural Burkina Faso.
The research and design of the system was conducted at and in collaboration with 2CoolMonkeys, a Utrecht-based Open data and App-development company with expertise in ICT for Development (ICT4D).
Following the design science research methodology, Hameedat investigated the requirements for a weather information system and the possible options for ensuring the sustainability of the system. Using a structured approach, she developed the application and evaluated it in the field with potential Burkinabe end users. The mobile interface of the application featured weather information and crop advice (seen in the images above). A demonstration video is shown below.
Hameedat developed multiple alternative models to investigate the sustainability of the application. For this she used the e3value approach and language. The image below shows a model for the case where a local radio station is involved.
At the DHBenelux 2018 conference, students from the VU minor “Digital Humanities and Social Analytics” presented their final DH in Practice work. In this video, the students talk about their experience in the minor and the internship projects. We also meet other participants of the conference talking about the need for interdisciplinary research.
All good things come to an end, and that also holds for our great Horizon2020 project “Big Data Europe“, in which we collaborated with a broad range of technical and domain partners to develop (Semantic) Big Data infrastructure for a variety of domains. VU was involved as work package leader in the Pilot and Evaluation work package and co-developed methods to test and apply the BDE stack in Health, Traffic, Security and other domains.
You can read more about the end of the project in this blog post at the BDE website.
On 19 June, André Baart was awarded the High Potential Award at the Amsterdam Science & Innovation en Impact Awards for his and W4RA‘s work on the Kasadaka platform.
Kasadaka (“talking box”) is an ICT for Development (ICT4D) platform for developing voice-based technologies for those who are not connected to the Internet, cannot read or write, and speak under-resourced languages.
As part of a longer-term project, the Kasadaka Voice platform and software development kit (VSDK) has been developed by André Baart as part of his BSc and MSc research at VU. In that context it has been extensively tested in the field, for example by Adama Tessougué, journalist and founder of radio Sikidolo in Konobougou, a small village in rural Mali. It was also evaluated in the context of the ICT4D course at VU by 46 master students from Computer Science, Information Science and Artificial Intelligence. The Kasadaka is now in Sarawak, Malaysia, where it will soon be deployed in a kampong by Dr. Cheah Waishiang, ICT4D researcher at the University of Malaysia Sarawak (UNIMAS), and students from VU and UNIMAS.
André is currently pursuing his PhD in ICT4D at the Universiteit van Amsterdam and is still a member of the W4RA team.
The ICT4D project CARPA, funded by NWO-WOTRO, had its first stakeholder workshop today at the Amsterdam Business School of UvA. From our project proposal: The context for CARPA (Crowdsourcing App for Responsible Production in Africa) lies in sustainable and responsible business. Firms are under increasing pressure to ensure sustainable, responsible production in their supply chains. Lack of transparency about labour abuses and environmental damage has led some firms to cease purchases from the region.
The first stakeholder workshop at #UvA of #CAPRA project on developing an #ict4d crowdsourcing app for responsible production in #Africa #NWO–#WOTRO @AndreBaart @marcelworring pic.twitter.com/sgfTb2P2XE
— Victor de Boer (@victordeboer) May 15, 2018
With an interdisciplinary partnership of local NGOs and universities in DRC, Mali, and South Africa, this project aims to generate new evidence-based knowledge to improve transparency about business impacts on responsible production.
Co-creating a smartphone application, we will use crowdsourcing methods to obtain reports of negative social and environmental business impacts in these regions, and follow them over time to understand access to justice and whether and how remediation of such impacts occurs. Data integration and visualization methods will identify patterns in order to provide context and clarity about business impacts on sustainability over time. A website will be developed to provide ongoing public access to this data, including a mapping function pinpointing impact locations.
The project will be led by Michelle Westermann-Behaylo from UvA, with the research work on the ground being executed by UvA’s Francois Lenfant and Andre Baart. Marcel Worring and myself are involved in supervisory roles.
Two weeks ago, ICT.Open2018 was held in Amersfoort. This event brings together Computer Science researchers from all over the Netherlands and our research group was present with many posters and presentations.
We even won a prize! (Well, a 2nd place prize, but awesome nonetheless.) Xander Wilcke presented work on using Knowledge Graphs for Machine Learning. He was awarded the runner-up prize for best poster presentation at ICT.Open2018. Congrats!
— Victor de Boer (@victordeboer) March 19, 2018
Ronald Siebes presented work in the ArchiMediaL project on reconstructing 4D street views from historical images.
— Victor de Boer (@victordeboer) March 20, 2018
Oana Inel presented her work on Named Entity Recognition and Gold Standard critiquing. She also demonstrated the Clariah MediaSuite.
— Victor de Boer (@victordeboer) March 19, 2018
Anca Dumitrache talked about using crowdsourcing as part of the Machine Learning life cycle.
— Victor de Boer (@victordeboer) March 19, 2018
Cristina Bucur introduced Linkflows: enabling a web of linked semantic publishing workflows.
I myself talked a bit about current work in the ABC-Kb Network Institute project.
@victordeboer presenting "UX Challenges of information organization: the assessment of language impairment in bilingual children" @ #ictopen2018 @networkinstvu @UserCentricDS @VU_Science pic.twitter.com/2CY4esa4vy
— Oana Inel (@oana_inel) March 20, 2018
All in all, this was quite a nice edition of the yearly event for our group. See you next year in Amersfoort!