Information Extraction and Knowledge Graph Creation from Handwritten Historical Documents

[This post is based on the Bachelor Project AI of Annriya Binoy]

In her bachelor thesis “Evaluating Methodologies for Information Extraction and Knowledge Graph Creation from Handwritten Historical Documents”, Annriya Binoy provides a systematic evaluation of various methodologies for extracting and structuring information from historical handwritten documents, with the goal of identifying the most effective strategies.

As a case study, the research investigates several methods on scanned pages from the National Archive of the Netherlands, specifically late-18th- and early-19th-century service records and pension registers of the Koninklijk Nederlands Indisch Leger (KNIL); see the example below. The task was defined as extracting birth events.


Four approaches are analyzed:

  1. Handwritten Text Recognition (HTR) using the Transkribus tool,
  2. a combination of Large Language Models (LLM) and Regular Expressions (Regex),
  3. Regex alone, and
  4. Fuzzy Search.

HTR and the LLM-Regex combination show strong performance and adaptability, with F1 scores of 0.88. While Regex alone delivers high accuracy, it lacks comprehensiveness. Fuzzy Search proves effective at handling the transcription errors common in historical documents, offering a balance between accuracy and robustness. This research offers initial but practical solutions for the digitization and semantic enrichment of historical archives, and it also addresses the challenges of preserving contextual integrity when constructing knowledge graphs from extracted data.
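To make the Regex and Fuzzy Search approaches concrete, here is a minimal, purely illustrative sketch in Python; the Dutch phrasing, pattern and similarity threshold are assumptions, not the rules used in the thesis.

```python
import re
from difflib import SequenceMatcher

# Hypothetical transcription line from an HTR output; the real records differ.
line = "geboren te Amsterdam den 3 Maart 1787"

# A simple "born at <place> on <date>" pattern (illustrative only).
birth_pattern = re.compile(
    r"geboren\s+te\s+(?P<place>[A-Z][\w'-]+)\s+den\s+(?P<day>\d{1,2})\s+(?P<month>\w+)\s+(?P<year>\d{4})"
)

match = birth_pattern.search(line)
if match:
    print(match.groupdict())  # {'place': 'Amsterdam', 'day': '3', 'month': 'Maart', 'year': '1787'}

# Fuzzy matching tolerates HTR errors such as "geboreu" for "geboren".
def fuzzy_contains(text, keyword, threshold=0.8):
    """Return True if any word in `text` is similar enough to `keyword`."""
    return any(
        SequenceMatcher(None, word.lower(), keyword).ratio() >= threshold
        for word in text.split()
    )

print(fuzzy_contains("geboreu te Haarlem den 12 Mei 1790", "geboren"))  # True
```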

More details can be found in Annriya’s thesis below.


Exploring Culinary Links with NLP and Knowledge Graphs

[This post is based on Nour al Assali‘s bachelor AI thesis]

Nour’s research explores the use of Natural Language Processing (NLP) and Knowledge Graphs to investigate the historical connections and cultural exchanges within global cuisines. The thesis “Flavours of History: Exploring Historical and Cultural Connections Through Ingredient Analysis Using NLP and Knowledge Graphs” describes a method for analyzing ingredient usage patterns across various cuisines by processing a dataset of recipes. The goal is to trace the diffusion and integration of ingredients into different culinary traditions, and to establish a digital framework for addressing questions about culinary history and cultural interactions.

The methodology applies NLP to preprocess the recipe data, focusing on extracting and normalizing ingredient names. The pipeline contains steps for stop word removal, tokenization, lemmatization, character replacements, etc.
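As an illustration of this kind of normalization (not Nour’s exact pipeline), a minimal sketch with spaCy might look as follows; the model choice and replacement rules are assumptions.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def normalize_ingredient(raw: str) -> str:
    """Lowercase, apply a simple character replacement, drop stop words and punctuation, lemmatize."""
    cleaned = raw.lower().replace("&", " and ")
    doc = nlp(cleaned)
    lemmas = [tok.lemma_ for tok in doc if not tok.is_stop and not tok.is_punct]
    return " ".join(lemmas)

print(normalize_ingredient("Chopped Tomatoes & fresh Basil leaves"))
# -> something like "chop tomato fresh basil leaf"
```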

From the normalized data, a Knowledge Graph is constructed that maps the relationships between ingredients, recipes, and cuisines. The approach also includes visualizing these connections, with an interactive map and other tools designed to provide insight into the data and answer the key research questions. The figure below shows a visualisation of the top ingredients per cuisine.

Case studies on ingredients such as pistachios, tomatoes, basil, olives, and cardamom illustrate distinct usage patterns and origins. The findings reveal that certain ingredients—like pistachios, basil, and tomatoes—associated with specific regions have gained widespread international popularity, while others, such as olives and cardamom, maintain strong ties to their places of origin. This research underscores the influence of historical trade routes and cultural exchanges on contemporary culinary practices and offers a digital foundation for future investigations into culinary history and food culture.

The code and dataset used in this research are available on GitHub: https://github.com/Nour-alasali/BPAI. The complete thesis can be found below.


Generating Synthetic Time-Series Data For Smart-Building Knowledge Graphs Using Generative Adversarial Networks

[This blog post is based on Jesse van Haaster‘s bachelor Artificial Intelligence thesis at VU]

Knowledge Graphs (KGs) represent data as triples connecting related data points. This form of representation is widely used for various applications, such as querying information and drawing inferences from data. For developing and fine-tuning such applications, actual KGs are needed. However, in certain domains, such as medical records or smart home devices, creating large-scale public knowledge graphs is challenging due to privacy concerns. To address this, generating synthetic knowledge graph data that mimics the original while preserving privacy is highly beneficial.

Jesse’s thesis explores the feasibility of generating meaningful synthetic time-series data for knowledge graphs. He does this specifically in the smart building / IoT domain, building on our previous work on IoT knowledge graphs, including OfficeGraph.

To this end, two existing generative adversarial networks (GANs), CTGAN and TimeGAN, are evaluated for their ability to produce synthetic data that retains key characteristics of the original OfficeGraph dataset. Jesse compared, among other things, the distributions of values for key features such as humidity, temperature and CO2 levels, seen below.

Key value distributions for CTGAN-generated data vs original data
Key value distributions for TimeGAN-generated data vs original data
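As an illustration of how such a comparison could be set up with CTGAN (using toy stand-in data rather than features extracted from OfficeGraph, which is more involved), a minimal sketch might look like this; the exact API can vary between ctgan versions.

```python
# pip install ctgan pandas numpy
import numpy as np
import pandas as pd
from ctgan import CTGAN

# Toy stand-in for flattened sensor readings (not the real OfficeGraph data).
rng = np.random.default_rng(42)
n = 500
real = pd.DataFrame({
    "temperature": rng.normal(21.0, 1.5, n),
    "humidity": rng.normal(42.0, 5.0, n),
    "co2": rng.normal(550, 80, n),
    "room": rng.choice(["A", "B", "C"], n),
})

model = CTGAN(epochs=10)                       # tiny run, purely illustrative
model.fit(real, discrete_columns=["room"])
synthetic = model.sample(500)

# Compare the marginal distributions of a key feature, real vs synthetic.
print(real["co2"].describe())
print(synthetic["co2"].describe())
```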

The experiment results indicate that while both models capture some important features, neither is able to replicate all of the original data’s properties. Further research is needed to develop a solution that fully meets the requirements for generating meaningful synthetic knowledge graph data.

More details can be found in Jesse’s thesis (found below) and his GitHub repository: https://github.com/JaManJesse/SyntheticKnowledgeGraphGeneration


Simulating creativity in GANs with IoT

[This blog post is based on the Artificial Intelligence MSc thesis project from Fay Beening, supervised by myself and Joost de Boo, more information can be found on Fay’s website]

Recently, generative art has been one of the fields where AI, and especially deep learning, has caught the public eye. Algorithms and online tools such as DALL-E are able to produce astounding results based on large artistic datasets. One class of algorithms at the root of this success is the Generative Adversarial Network (GAN), frequently used in online art-generating tools because of its ability to produce realistic artefacts.

But is this “real” art? Is this “real” creativity?

To address this, Fay investigated current theories on art and art education and found that these imply that true human creativity can be split into three types: 1) combinational, 2) explorative and 3) transformative creativity, but that it also requires real-world experiences and interactions with people and the environment. In her thesis, Fay therefore proposes to combine a GAN with an Internet of Things (IoT) setup to make it behave more creatively.

Arduino-based prototype (image from Fay’s thesis)

She then designed a system that extends the original GAN with an interactive IoT system (implemented in an Arduino-based prototype) to simulate a more creative process. The prototype of the design showed a successful implementation of creative behaviour that can react to the environment and gradually change the direction of the generated images.
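The exact integration is described in Fay’s thesis; as a loose, hypothetical sketch of the general idea, sensor readings could be mapped to a direction in the GAN’s latent space so that the generated images drift with the environment.

```python
import numpy as np

# Hypothetical sketch only: nudge a GAN's latent vector with live sensor readings
# so the generated images gradually change direction. Fay's Arduino/GAN setup differs.

LATENT_DIM = 100

def sensor_to_direction(light: float, sound: float, latent_dim: int = LATENT_DIM) -> np.ndarray:
    """Map normalized sensor readings (0..1) to a fixed direction in latent space."""
    rng = np.random.default_rng(0)             # fixed seed: stable per-sensor directions
    light_dir = rng.standard_normal(latent_dim)
    sound_dir = rng.standard_normal(latent_dim)
    return light * light_dir + sound * sound_dir

z = np.random.standard_normal(LATENT_DIM)      # base latent vector
z_adjusted = z + 0.3 * sensor_to_direction(light=0.8, sound=0.2)
# z_adjusted would then be fed to the generator, e.g. generator(z_adjusted)
```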

Images shown to the participant during the level of creativity task. Images 2 and 6 are creative GAN generated images. Images 1 and 5 are human-made art. Images 3 and 4 are online GAN generated art.

The generated art was evaluated on its creativity through task-based interviews with domain experts. The results show that the level to which the generated images are considered creative depends heavily on the participant’s view of creativity.


Thesis writing guidelines

As supervisor of many MSc and BSc theses, I find myself giving writing tips and guidelines quite often. Inspired by Jan van Gemert’s guidelines, I compiled my own document with tips and guidelines for writing a CS/AI/IS bachelor or master thesis. These are things that I personally care about; other lecturers might have different ideas. This is by no means a complete list and I will treat it as a living document. You can find it here: https://tinyurl.com/victorthesiswriting


Student-supported project in the news

It was great to see that one of this year’s Digital Humanities in Practice projects led to a conversation between the students in that project, Helene Ayar and Edith Brooks, their external supervisors, Willemien Sanders (UU) and Mari Wigham (NISV), and an advisor for another project, André Krouwel (VU). That conversation resulted in original research and the CLARIAH MediaSuite data story “‘Who’s speaking?’ - Politicians and parties in the media during the Dutch election campaign 2021”, in which the content of news programmes was analysed for politicians’ names, gender and party affiliation.

The results are very interesting and subsequently appeared on the Dutch news site NOS.nl, showing that right-wing politicians are more represented on radio and TV: “Onderzoek: Rechts domineert de verkiezingscampagne op radio en tv“ (“Research: the right dominates the election campaign on radio and TV”). Well done and congratulations!


Historical Toponym Disambiguation

[This blog post is based on the Master thesis Information Sciences of Bram Schmidt, conducted at the KNAW Humanities cluster and IISG. It reuses text from his thesis]

Place names (toponyms) are very ambiguous and may change over time. This makes it hard to link mentions of places to their corresponding modern entity and coordinates, especially in a historical context. We focus on a historical Toponym Disambiguation approach: entity linking based on identified context toponyms.

The thesis specifically looks at the American Gazetteer. Each entry in these texts contains fundamental information about major places in the vicinity of the described location. By identifying and exploiting these context toponyms, we aim to estimate the most likely position of the historical entry and link it to its contemporary counterpart.

Example of a toponym in the Gazetteer

In this case study, Bram Schmidt examined the toponym recognition performance of the state-of-the-art Named Entity Recognition (NER) tools spaCy and Stanza on historical texts, and tested two new heuristics to facilitate efficient entity linking to the geographical database GeoNames.

Experiments with different geo-distance heuristics show that these can indeed be used to disambiguate place names.

The method was tested against a subset of manually annotated gazetteer records. The results show that both NER tools perform insufficiently at automatically identifying relevant toponyms in the free text of a historical lemma. However, exploiting correctly identified context toponyms by calculating the minimal distance among them proves successful, and combining the approaches into one algorithm improves recall.
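As an illustration of the geo-distance idea (a simplified variant that uses the average distance to the context toponyms; the thesis’ exact heuristics differ), consider the following sketch with invented candidates and coordinates.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical GeoNames candidates for the ambiguous entry "Springfield",
# plus coordinates of context toponyms identified in the same lemma.
candidates = {
    "Springfield, MA": (42.10, -72.59),
    "Springfield, IL": (39.80, -89.64),
}
context = [(42.36, -71.06), (41.76, -72.67)]   # e.g. Boston, Hartford

# Pick the candidate with the smallest average distance to the context toponyms.
best = min(
    candidates,
    key=lambda name: sum(haversine_km(candidates[name], c) for c in context) / len(context),
)
print(best)   # Springfield, MA
```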

Bram’s thesis was co-supervised by Marieke van Erp and Romke Stapel. His thesis can be found here [pdf]


Automating Authorship Attribution

[This blog post was written by Nizar Hirzalla and describes his VU Master AI project conducted at the Koninklijke Bibliotheek (KB), co-supervised by Sara Veldhoen]

Authorship attribution is the process of correctly attributing a publication to its corresponding author, which is often done manually in real-life settings. This task becomes inefficient when there are many options to choose from because authors share the same name. Authors can be characterized by features found in their associated publications, which means that machine learning can potentially automate this process. However, authorship attribution introduces a typical class imbalance problem, due to the vast number of possible labels in a supervised machine learning setting. To complicate matters further, the input data mimics the kind of data available to many institutions: heterogeneous and sparse in nature.

Inside the KB (photo S. ter Burg)

The thesis investigates how authorship attribution can be automated given these known problems and this type of input data, and whether automation is possible in the first place. It considers children’s literature and publications that can have between 5 and 20 candidate authors (who share the exact same name). Different machine learning methodologies are implemented for this task. In addition, all available types of data (as provided by the National Library of the Netherlands) are considered, as well as the integration of contextual information.

Furthermore, different computational representations of the textual input (such as the title of the publication) are compared, in order to find the most effective representation of sparse text that can serve as input for a machine learning model. These experiments are preceded by a pipeline that consists of data pre-processing, feature engineering and selection, conversion of the data to other vector space representations, and integration of linked data. This pipeline actively improves performance when used with the heterogeneous data inputs.
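As a much-simplified illustration of such a pipeline (not the thesis’ neural architectures), the sketch below trains a TFIDF-based classifier on a few invented title-plus-metadata strings; all names and values are hypothetical.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy records: title plus some publication metadata, labelled with a
# disambiguated author identifier. The real KB data is far larger and sparser.
records = [
    "Het geheim van de oude molen | publisher: Uitgeverij A | year: 1958",
    "De oude molen en het verdwenen schilderij | publisher: Uitgeverij A | year: 1960",
    "Sterrenkunde voor iedereen | publisher: Uitgeverij B | year: 1959",
    "Moderne natuurkunde in vogelvlucht | publisher: Uitgeverij B | year: 1961",
]
authors = ["jansen_1", "jansen_1", "jansen_2", "jansen_2"]

# TFIDF over words and word pairs, followed by a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(records, authors)

# Expected to lean towards 'jansen_1' given the overlapping title and publisher words.
print(model.predict(["De oude molen in de winter | publisher: Uitgeverij A | year: 1962"]))
```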

Implemented neural network architectures for TFIDF (left) and Word2Vec (right) based text classification

Ultimately the thesis shows that automation can be achieved in up to 90% of the cases and can significantly reduce the costs and time of authorship attribution in a real-world setting, thus facilitating more efficient work procedures. Along the way, the thesis also arrives at the following key findings:

  1. Two machine learning methodologies are compared: author classification and similarity learning. Author classification gives the best raw performance (F1 0.92), but similarity learning provides more robust predictions and better explainability (F1 0.88). For a real-life setting with end users, the latter is recommended, as it integrates more naturally with the work of cataloguers, at only a small cost in performance.
  2. Adding contextual information increases performance, but the effect depends on the type of information that is included. Publication metadata and biographical author information are considered; publication metadata (predominantly the publisher and year of publication) works best, whereas biographical author information actually hurts performance.
  3. BERT, word embeddings (Word2Vec and fastText) and TFIDF are considered as representations of the textual input (see the sketch below). BERT ultimately gives the best performance, with up to a 200% increase compared to word embeddings. As a transformer-based language model, BERT captures a richer semantic representation of the text, which helps to identify the associated authors.
  4. Surveys and interviews show that end users mostly rely on author-related information when performing manual authorship attribution, whereas the machine learning models primarily base their predictions on publication metadata features. This difference in how information is used need not lead to negative experiences, as several options exist for harmonizing the two.
Summary of the final performances of the best performing models from the different implemented methodologies
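To give a feel for the difference between these text representations, here is a minimal, hypothetical sketch that encodes a few invented titles with TFIDF and with a BERT-style sentence encoder via the sentence-transformers library; the models, fine-tuning and data used in the thesis differ.

```python
# pip install scikit-learn sentence-transformers
from sklearn.feature_extraction.text import TfidfVectorizer
from sentence_transformers import SentenceTransformer

# Invented example titles (not from the KB data).
titles = [
    "Het geheim van de oude molen",
    "De oude molen en het verdwenen schilderij",
]

# TFIDF: sparse vectors based purely on lexical overlap.
tfidf_matrix = TfidfVectorizer().fit_transform(titles)
print(tfidf_matrix.shape)

# BERT-style sentence embeddings: dense vectors with more semantic content.
# The model choice is an assumption; any multilingual sentence encoder would do here.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = encoder.encode(titles)
print(embeddings.shape)  # e.g. (2, 384)
```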


Hearing (Knowledge) Graphs

[This post is based on Enya Nieland‘s Msc Thesis “Generating Earcons from Knowledge Graphs” ]

Three earcons with varying pitch, rhythm, and both pitch and rhythm

Knowledge Graphs are becoming enormously popular, which means that users interacting with such complex networks are diversifying. This requires new and innovative ways of interacting. Several methods for visualizing, summarizing or exploring knowledge have been proposed and developed. In this student project we investigated the potential for interacting with knowledge graphs through a different modality: sound.

The research focused on the question of how to generate meaningful sound or music from (knowledge) graphs. The generated sounds should give users some insight into the properties of the network. Enya framed this challenge using the idea of “earcons”: the auditory counterpart of icons.

Enya eventually developed a method that automatically produces these types of earcons for arbitrary knowledge graphs. Each earcon consists of three notes that differ in pitch and duration. As an example, listen to the three earcons shown in the figure on the left.

Earcon where pitch varies
Earcon where note duration varies
Earcon where both pitch and rhythm vary

The earcon parameters are derived from network metrics such as the minimum, maximum and average indegree or outdegree. A tool with a user interface allowed users to design the earcons based on these metrics.

The pipeline for creating earcons
The GUI
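A rough sketch of how such network metrics could be turned into three-note earcon parameters is shown below; the metric-to-pitch/duration mapping here is invented, whereas Enya’s tool makes the mapping configurable.

```python
# pip install networkx
import networkx as nx

# Toy directed graph standing in for a knowledge graph (nodes = entities, edges = triples).
g = nx.DiGraph()
g.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("d", "c"), ("c", "e")])

indegrees = [deg for _, deg in g.in_degree()]
metrics = {
    "min_indegree": min(indegrees),
    "avg_indegree": sum(indegrees) / len(indegrees),
    "max_indegree": max(indegrees),
}

# Hypothetical mapping of a metric value onto a MIDI pitch (60 = middle C) and a duration.
def metric_to_note(value, low=0, high=5):
    pitch = 60 + round(12 * (value - low) / (high - low))     # one octave of range
    duration = 0.25 + 0.75 * (value - low) / (high - low)     # quarter beat up to one beat
    return pitch, duration

earcon = [metric_to_note(metrics[m]) for m in ("min_indegree", "avg_indegree", "max_indegree")]
print(earcon)   # three (pitch, duration) notes
```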

The different variants were evaluated in an extensive user test with 30 respondents to find out which were the most informative. The results show that the individual elements of earcons can indeed provide insight into these metrics, but that combining them confuses the listener. In this case, simpler is better.

This tool could complement a service such as LOD Laundromat by providing instant insight into the complexity of KGs. It could additionally benefit people who are visually impaired and want to get a sense of the complexity of a Knowledge Graph.


Linked Art Provenance

In the past year, together with Ingrid Vermeulen (VU Amsterdam) and Chris Dijkshoorn (Rijksmuseum Amsterdam), I had the pleasure to supervise two students from VU, Babette Claassen and Jeroen Borst, who participated in a Network Institute Academy Assistant project around art provenance and digital methods. The growing number of datasets and digital services around art-historical information presents new opportunities for conducting provenance research at scale. The Linked Art Provenance project investigated to what extent it is possible to trace provenance of art works using online data sources.

Caspar Netscher, the Lacemaker, 1662, oil on canvas. London: the Wallace Collection, P237

In this interdisciplinary project, Babette (Art Market Studies) and Jeroen (Artificial Intelligence) collaborated to create a workflow model, shown below, to integrate provenance information from various online sources such as the Getty Provenance Index. This included an investigation of the potential for automatically extracting structured information from these online sources.

The model was validated through a case study in which we investigated whether we could capture information from selected sources about an auction (1804) during which the paintings from the former collection of Pieter Cornelis van Leyden (1732-1788) were dispersed. An example work, the Lacemaker, is shown above. Interviews with several art historians further validated the produced workflow model.

The workflow model also provides a basic guideline for provenance research and, together with the Linked Open Data process, can help answer relevant research questions for studies in the history of collecting and the art market.

More information can be found in the Final report
