Exploring Automatic Recognition of Labanotation Dance Scores

[This post describes the research of Michelle de Böck and is based on her MSc Information Sciences thesis.]

Digitization of cultural heritage content allows for the digital archiving, analysis and other processing of that content. The practice of scanning and transcribing books, newspapers and images, 3D-scanning artworks or digitizing music has opened up this heritage for digital humanities research and even for creative computing. For the performing arts, however, including theater and more specifically dance, digitization remains a serious research challenge. Several dance notation schemes exist, the most established being Labanotation, developed in the 1920s by Rudolf von Laban. Labanotation uses a vertical staff notation to record human movement in time, with various symbols for limbs, head movement, and types and directions of movements.

Generated variations of movements used for training the recognizers

While good translations to digital formats exist for musical scores (e.g. MIDI), these are lacking for Labanotation. Structured formats do exist (LabanXML, MovementXML), but the majority of content still only exists either in non-digitized form (on paper) or in scanned images. The research challenge of Michelle de Böck's thesis therefore was to identify design features for a system capable of recognizing Labanotation from scanned images.

Examples of Labanotation files used in the evaluation of the system.

Michelle designed such a system and implemented it in MATLAB, focusing on a small set of movement symbols. Several approaches were developed and compared, including approaches using pre-trained neural networks for image recognition (AlexNet). The AlexNet-based approach outperformed the others, reaching a classification accuracy of 78.4%. While we are still far from a full-fledged OCR system for Labanotation, this exploration has provided valuable insights into the feasibility and requirements of such a tool.
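The thesis work itself was done in MATLAB, but the core transfer-learning idea behind the AlexNet approach can be sketched in a few lines: use a pre-trained network as a fixed feature extractor, then train a simple classifier on the resulting feature vectors. The Python sketch below is hypothetical and simulates the feature vectors with random data (the class names and dimensions are illustrative, not taken from the thesis); a real pipeline would replace the simulated vectors with CNN activations for each scanned symbol image.

```python
import numpy as np

# Hypothetical sketch: pretend a pre-trained CNN has already mapped each
# scanned symbol image to a fixed-length feature vector. We then train a
# simple nearest-centroid classifier on those vectors, illustrating the
# transfer-learning idea (the thesis used AlexNet in MATLAB instead).

rng = np.random.default_rng(0)

def make_class(center, n=20, noise=0.1):
    """Simulate n feature vectors scattered around one class center."""
    return center + noise * rng.standard_normal((n, center.size))

# Three illustrative symbol classes (names are made up for this sketch)
centers = {label: rng.standard_normal(8) for label in ("forward", "side", "hold")}
train = {label: make_class(c) for label, c in centers.items()}

# "Training": store the mean feature vector per class
centroids = {label: feats.mean(axis=0) for label, feats in train.items()}

def classify(vec):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda lab: np.linalg.norm(vec - centroids[lab]))

# A lightly perturbed sample near the "side" center should classify as "side"
sample = centers["side"] + 0.05 * rng.standard_normal(8)
print(classify(sample))
```

In a real recognizer the classifier would of course be evaluated on held-out scans rather than simulated vectors, which is where accuracy figures such as the 78.4% above come from.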


An Augmented Reality App to Annotate Art

[This post is based on the Bachelor project by Jurjen Braam and reuses content from his thesis]

The value of Augmented Reality applications has been shown for a number of different tasks. Most of these studies show that AR applications add to the immersiveness of an experience. For his Bachelor Project, VU student Jurjen Braam researched to what extent AR technology makes sense for the task of annotating artworks.

To this end, Jurjen built a mobile application which allows experts or laypeople to add textual annotations to artworks in three different modes. The first mode does not show the artwork but allows for textual input; the second mode shows the work in an image and allows for localised annotations; the last mode is the AR mode, which projects the artwork into the physical space using the device's camera and screen.

Three modes of the Application (Text, 2D, AR)

Jurjen evaluated the three modes through a small user study, which showed that immersion and enjoyment were highest in the AR mode, but that this mode was the least efficient. Participants also indicated that, for annotation tasks, larger screens would be preferable.

User evaluation in action

This research was a unique endeavour combining a proven technology (AR) with a well-known task (annotation), and it identified interesting possibilities for follow-up research.
