InTaVia project started

From November 1, 2020, we are collaborating on connecting tangible and intangible heritage through knowledge graphs in the new Horizon 2020 project “InTaVia”.

To facilitate access to rich repositories of tangible and intangible assets, new technologies are needed that enable their analysis, curation and communication for a variety of target groups without computational and technological expertise. Faced with many large, heterogeneous, and unconnected heritage collections, we aim to develop supporting technologies to better access and manage in/tangible CH data and topics, to better study and analyze them, to curate, enrich and interlink existing collections, and to better communicate and promote their inventories.

Tangible and intangible heritage (image from the project proposal)

Our group will contribute to the shared research infrastructure and will be responsible for developing a generic solution for connecting linked heritage data to various visualization tools. We will work on various user-facing services, develop an application shell and front-end for this connection, and be responsible for evaluating the usability of the integrated InTaVia platform for specific user groups. This project will allow for novel user-centric research on topics in Digital Humanities, Human-Computer Interaction and Linked Data service design.

Screenshot of the virtual kickoff meeting


Hearing (Knowledge) Graphs

[This post is based on Enya Nieland‘s MSc thesis “Generating Earcons from Knowledge Graphs”]

Three earcons with varying pitch, rhythm, or both pitch and rhythm

Knowledge graphs are becoming enormously popular, which means that the users interacting with such complex networks are diversifying. This calls for new and innovative ways of interacting with them. Several methods for visualizing, summarizing or exploring knowledge graphs have been proposed and developed. In this student project we investigated the potential for interacting with knowledge graphs through a different modality: sound.

The research focused on the question of how to generate meaningful sound or music from (knowledge) graphs. The generated sounds should give users some insight into the properties of the network. Enya framed this challenge with the idea of “earcons”: the auditory counterpart of icons.

Enya eventually developed a method that automatically produces these earcons for arbitrary knowledge graphs. Each earcon consists of three notes that differ in pitch and duration. As an example, listen to the three earcons shown in the figure on the left.

Earcon where pitch varies
Earcon where note duration varies
Earcon where both pitch and rhythm vary

The earcon parameters are derived from network metrics such as the minimum, maximum and average indegree or outdegree. A tool with a graphical user interface lets users design earcons based on these metrics.

The pipeline for creating earcons
The GUI
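
To make this mapping concrete, here is a minimal sketch of how degree statistics could be turned into three-note parameters. It assumes networkx is available; the scaling (one octave above middle C, durations between 0.2 and 1.0 seconds) is an illustrative assumption, not the thesis’ actual parameterization.

    # Hypothetical sketch: derive three-note earcon parameters from the
    # in-degree statistics of a directed graph.
    import networkx as nx

    def earcon_parameters(graph: nx.DiGraph):
        """Map min/avg/max in-degree to the pitch and duration of three notes."""
        indegrees = [d for _, d in graph.in_degree()]
        lo = min(indegrees)
        avg = sum(indegrees) / len(indegrees)
        hi = max(indegrees)

        def to_pitch(value, limit):
            # Scale a metric onto one MIDI octave starting at middle C (60).
            return 60 + round(12 * value / limit) if limit else 60

        def to_duration(value, limit):
            # Scale a metric onto note durations between 0.2 s and 1.0 s.
            return 0.2 + 0.8 * value / limit if limit else 0.2

        return [(to_pitch(v, hi), to_duration(v, hi)) for v in (lo, avg, hi)]

    if __name__ == "__main__":
        g = nx.gnp_random_graph(30, 0.1, directed=True)
        for pitch, duration in earcon_parameters(g):
            print(f"note: MIDI pitch {pitch}, duration {duration:.2f} s")

The three resulting (pitch, duration) pairs can then be fed to any MIDI or audio back-end to render the earcon.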

The different variants were evaluated in a user test with 30 respondents to find out which variants were the most informative. The results show that the individual elements of earcons can indeed provide insight into these metrics, but that combining them confuses the listener. In this case, simpler is better.

This tool could complement a service such as the LOD Laundromat by providing instant insight into the complexity of knowledge graphs. It could additionally benefit people who are visually impaired and want to get an insight into the complexity of a knowledge graph.


Multitasking Behaviour and Gaze-Following Technology for Workplace Video-Conferencing

[This post was written by Eveline van Everdingen and describes her M.Sc. project]

Working with multiple monitors is very common at the workplace nowadays. A second monitor can increase efficiency and provide more structure and a better overview of one’s work. Even in business video-conferencing, dual monitors are used. Although the purpose of dual-monitor use might be clear to the multitasker, this behaviour is not always perceived as positive by their video-conferencing partners.

Gaze direction of the multitasker with the focus on the primary monitor (left), on the dual monitor (middle) or in between two monitors when switching (right).

Results show that multitasking on a dual screen or mobile device is rated as less polite and acceptable than doing something else on the same screen. Although the multitasker might be involved with the meeting, he or she seems less engaged with it, resulting in negative perceptions.

Effect of technology on politeness of multitasking

Improving the sense of eye contact might result in a better video-conferencing experience with the multitasker; therefore a gaze-following tool with two webcams was designed (code available at https://github.com/een450/MasterProject ). When the multitasker switches to the dual screen, the second webcam catches the frontal view of the multitasker. Indeed, participants rated the multitasking behaviour as more polite and acceptable with this dynamic view of the multitasker. The sense of eye contact, however, was not rated significantly more positively with this experimental design.
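
The core switching idea can be sketched in a few lines. The sketch below assumes OpenCV and two webcams on device indices 0 and 1, and approximates “facing this camera” by the size of the detected frontal face; the actual tool is in the repository linked above and may work quite differently.

    # Minimal sketch: transmit the webcam feed in which the speaker
    # appears most frontal.
    import cv2

    # Haar cascade for frontal faces, shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # Two webcams: one facing the primary monitor, one facing the dual monitor.
    cameras = [cv2.VideoCapture(0), cv2.VideoCapture(1)]

    def frontal_face_area(frame):
        """Area of the largest detected frontal face, or 0 if none is found."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return max((w * h for (x, y, w, h) in faces), default=0)

    while True:
        frames = [frame for ok, frame in (cam.read() for cam in cameras) if ok]
        if not frames:
            break
        # Show (or transmit) the feed with the largest frontal face.
        cv2.imshow("outgoing video", max(frames, key=frontal_face_area))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    for cam in cameras:
        cam.release()
    cv2.destroyAllWindows()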

These results show that gaze-following webcam technology can be successful in improving collaboration in dual-monitor multitasking.

For more information, read Eveline’s thesis [pdf] or visit the project’s figshare page.

Example of a video presented to the experiment participants.


A Sugar Activity for Subsistence Farmers

[Reblogged from http://worldwidesemanticweb.org/2015/03/06/a-sugar-activity-for-subsistence-farmers/ ; this post was written by Tom Jansen]

Screenshot of the Sugar activity (Tom Jansen)

Subsistence farming, or subsistence agriculture, is a form of farming in which farmers mainly focus on growing enough food to be self-sufficient. This type of farming is especially common in African countries, where people are highly dependent on the food they grow themselves. Yet subsistence farming in these countries has much to gain: improving farmers’ skills could contribute significantly to the reduction of hunger. Unfortunately, farmers often have not had enough agricultural education to grow their food optimally. To help these farmers, I developed an activity to improve their farming skills. The application helps farmers identify diseases of their crops and animals and presents ways to manage those diseases and prevent them in the future. Giving farmers the means to manage diseases of their crops and livestock gives them a chance to improve their harvest, and a bigger harvest can be a substantial contribution to a better way of living for farmers in West Africa and elsewhere.

The activity is Sugar-based and is therefore perfectly suited for the XO laptops that are commonly used in West Africa. The activity revolves around a database with information about diseases of crops and livestock. When the farmer opens the activity, he is led through two selection menus. Once the right crop or livestock is selected, a list of diseases is shown, each with characteristics to identify it. When the farmer notices that one description closely matches what is happening to his crops or livestock, he clicks on that disease, and another window pops up showing the information the farmer needs to manage and prevent it.
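
Conceptually, this flow comes down to a nested lookup from category to disease to management advice. A minimal sketch of that structure is shown below; the entries are made-up placeholders, not the activity’s real database contents, which live in the repository linked at the end of this post.

    # Illustrative sketch of the activity's lookup flow; the disease names,
    # symptoms and advice below are placeholders, not the real data.
    DISEASES = {
        "maize": {
            "maize streak": {
                "identify": "pale streaks running along the leaf veins",
                "manage": "remove infected plants and control leafhoppers",
            },
        },
        "goats": {
            "foot rot": {
                "identify": "lameness and swelling between the hooves",
                "manage": "trim hooves and keep the animals on dry ground",
            },
        },
    }

    def disease_info(category, disease):
        """Mimics the final window: management advice for the chosen disease."""
        entry = DISEASES[category][disease]
        return f"{disease}: {entry['identify']}. Advice: {entry['manage']}."

    # First menu: "maize"; disease list: "maize streak".
    print(disease_info("maize", "maize streak"))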

Right now it is only possible to access the database and read the information in it. The activity could be improved by letting farmers not only read, but also change and add information in the database. This way the information, and thus the quality of the activity, could be improved without any help from outside.

The activity, including all the code, can be found at: https://github.com/WorldWideSemanticWeb/farming-activity

Read the full report here: Helping Subsistence Farmers in West Africa
