Musical Audio Analysis with Insufficient Training Data
By A. Lerch

A. Lerch talks to ADASP about his work on unsupervised and semi-supervised approaches for addressing the data challenge in music analysis

Abstract

Increasingly complex approaches to automated audio analysis require ever larger amounts of annotated training data. While music data is generally available, detailed annotations are often scarce because they are tedious and time-consuming to create. This creates a widening gap between the amount of data available and the data required for many tasks in Music Information Retrieval. Using several music analysis tasks as examples, this talk presents work on unsupervised and semi-supervised approaches and outlines future directions for addressing the data challenge in music analysis.

Bio

Alexander Lerch is Associate Professor at the Center for Music Technology, Georgia Institute of Technology. He received his Master's (EE) and his PhD (Audio Communications) from TU Berlin. For about two decades, Lerch has been working on intelligent audio algorithms that allow computers to listen to and comprehend music. His research positions him in the field of Music Information Retrieval (MIR), at the intersection of signal processing, machine learning, and music. He aims to create artificially intelligent software for music generation, production, and consumption. Lerch has authored numerous journal and conference papers, as well as the textbook "An Introduction to Audio Content Analysis" (IEEE/Wiley 2012). Before joining Georgia Tech, Lerch was Co-Founder and Head of Research at his company zplane.development, an industry leader in music technology licensing. The technologies he worked on at zplane include algorithms for time-stretching and automatic key detection. zplane technologies are now used by millions of musicians and producers worldwide.