Invited Speakers

Pattern induction and matching in music signals

Anssi Klapuri

Abstract: Pattern induction and matching play an important part in understanding the structure of a given music piece and in detecting similarities between two different music pieces. At all levels of music - harmony, melody, rhythm, and texture - the temporal sequence of events can be subdivided into shorter patterns that are sometimes repeated and transformed. This talk discusses methods for extracting such patterns from musical audio signals (feature extraction and pattern induction) and for using them to infer a meaningful structure for a given music piece. Furthermore, computationally feasible methods are discussed for retrieving similar patterns from a large database of songs (pattern matching). Finally, some application scenarios for the proposed pattern induction and matching methods are discussed.
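As a rough illustration of the pattern-matching step (a sketch of the general idea, not material from the talk), the snippet below slides a short feature pattern over a longer feature sequence and scores each position with a cosine distance. The chroma-like arrays, the window length, and the distance measure are all illustrative assumptions.

import numpy as np

def match_pattern(sequence, pattern):
    """Cosine distance between `pattern` and every same-length window of `sequence`.

    sequence: (T, D) array of feature frames (e.g. beat-synchronous chroma).
    pattern:  (L, D) array with L <= T.
    Returns an array of length T - L + 1; small values indicate close matches.
    """
    T, _ = sequence.shape
    L = pattern.shape[0]
    p = pattern / (np.linalg.norm(pattern) + 1e-9)
    dists = np.empty(T - L + 1)
    for t in range(T - L + 1):
        w = sequence[t:t + L]
        w = w / (np.linalg.norm(w) + 1e-9)
        dists[t] = 1.0 - float(np.sum(w * p))   # cosine distance over the whole window
    return dists

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq = rng.random((200, 12))        # stand-in for 200 twelve-dimensional chroma frames
    pat = seq[50:58].copy()            # a pattern known to occur at frame 50
    print("best match at frame", int(np.argmin(match_pattern(seq, pat))))

For a large song database, such an exhaustive scan would of course be replaced by indexing or approximate search; the sketch only shows the basic matching operation.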

Bio: Anssi Klapuri received his Ph.D. degree in Information Technology from Tampere University of Technology (TUT) in 2004. He was a visiting post-doctoral researcher at Ecole Centrale de Lille, France, and Cambridge University, UK, in 2005 and 2006, respectively. He worked as a professor and group leader at TUT until 2009. In 2009, he joined Queen Mary, University of London as a lecturer. His research interests include audio signal processing, auditory modeling, and machine learning. He has worked as a principal investigator in numerous industrial research projects. He received the IEEE Signal Processing Society 2005 Young Author Best Paper Award. He has co-edited one book and co-authored several scientific papers and book chapters.

Website: http://www.cs.tut.fi/~klap/

Songs2See and GlobalMusic2One - Two ongoing projects in music information retrieval research at Fraunhofer IDMT

Christian Dittmar

Abstract: At the Fraunhofer IDMT in Ilmenau, Germany, two current research projects are directed towards core problems of music information retrieval. This talk gives a brief introduction to both and highlights challenges as well as first achievements.

The recently started Songs2See project is supported by the Thuringian Ministry of Economy, Employment and Technology through funds granted by the European Fund for Regional Development. The target outcome of this project is a web-based application that assists music students with their instrumental exercises. Its unique advantage over existing e-learning solutions is the opportunity to create personalized exercise content from the student's favorite songs. This is made possible by state-of-the-art algorithms for automatic music transcription and audio source separation, combined with an intuitive user interface.

GlobalMusic2One is an ongoing German research project aimed at developing a new generation of hybrid music search engines. The target outcomes are new music information retrieval methods and Web 2.0 technologies that improve the quality of automated recommendation and online marketing of world music collections. Music recordings are automatically analysed with respect to rhythm, melody and other characteristics, which allows efficient filing of new content into existing collections. In addition, users can create new categories and classification concepts, allowing the system to adapt flexibly to new musical forms of expression and regional contexts.

Bio: Dipl.-Ing. (FH) Christian Dittmar studied electrical engineering with a specialization in digital media technology at the University of Applied Sciences in Jena, Germany, from 1998 to 2002. In his diploma thesis, he investigated Independent Subspace Analysis as a means of music signal analysis. After graduating, he joined the Fraunhofer IDMT in 2003. He has contributed to a number of scientific papers in the field of music information retrieval and automatic transcription. Since late 2006 he has been head of the Semantic Music Technologies research group and has managed several industrial and public R&D projects.

Progress in Music Modelling

Simon Dixon

Abstract: Music is a complex phenomenon. Human understanding of music is at best incomplete, and the computational models used in our research community fail to capture much of what is understood about music. Nevertheless, in the last decade we have seen remarkable progress in Music Information Retrieval research. This progress is particularly remarkable considering the naivety of the musical models used. For example, the bag-of-frames approach to music similarity and the periodicity pattern approach to rhythm analysis are both independent of the order of musical notes, whereas temporal order is an essential feature of melody, rhythm and harmonic progression. This talk will present recent work on modelling harmony and rhythm that extends previous approaches in order to come closer to modelling music as a musician might conceptualise it.
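To make the order-independence point concrete, the short sketch below (an illustration of the general idea, not material from the talk) summarises a frame sequence by the mean and covariance of its frames, a common bag-of-frames style summary; shuffling the frames leaves the summary unchanged. The MFCC-like random data is a stand-in assumption.

import numpy as np

def bag_of_frames(features):
    """Summarise a (T, D) frame sequence by the mean and covariance of its frames."""
    return np.mean(features, axis=0), np.cov(features, rowvar=False)

rng = np.random.default_rng(1)
frames = rng.random((500, 13))             # stand-in for 500 MFCC frames
shuffled = rng.permutation(frames)         # same frames, different temporal order

mean_a, cov_a = bag_of_frames(frames)
mean_b, cov_b = bag_of_frames(shuffled)
print(np.allclose(mean_a, mean_b) and np.allclose(cov_a, cov_b))   # True: temporal order is discarded

A melody or chord progression played backwards would yield exactly the same summary, which is precisely the limitation the talk addresses.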

Bio: Simon Dixon is a lecturer in the Centre for Digital Music at Queen Mary University of London. He obtained BSc and PhD degrees in computer science from the University of Sydney, and AMusA and LMusA diplomas in classical guitar. He was a lecturer at Flinders University of South Australia and a research scientist at the Austrian Research Institute for Artificial Intelligence. His research interests focus on the extraction and processing of musical (particularly rhythmic and harmonic) content in audio signals, including tasks such as tempo induction, beat tracking, onset detection, audio alignment, automatic transcription, and the measurement and visualisation of expression in music performance. He is the author of the beat tracking software BeatRoot (rated first in the MIREX 2006 evaluation of beat tracking systems) and the audio alignment software MATCH (best poster award, ISMIR 2005). He was programme co-chair for ISMIR 2007.

Ancient Instruments Sound/Timbre Reconstruction Application

Francesco De Mattia

Abstract: ASTRA (Ancient instruments Sound/Timbre Reconstruction Application) is a project coordinated at the University of Málaga which brings history to life. Ancient musical instruments can now be heard for the first time in hundreds of years, thanks to a successful synergy between the arts/humanities and science. The Epigonion, an instrument of the past, has been digitally recreated using advanced EGEE middleware (with the GENIUS interface) and the GÉANT and EUMEDCONNECT research networks to link high-capacity computers together, sharing information to enable the computer-intensive modelling of musical sounds.

Francesco will perform a demo concert using equipment from Seelake.

Bio: Francesco De Mattia is a pianist, harpsichordist and composer whose main focus is baroque music, as a continuo player and editor. His artistic career includes intense concert activity, which has led him to become a substitute maestro at the Teatro di S. Carlo di Napoli. He has vast experience in the publishing sector, where he continues to work with several publishing houses: BMG-Ricordi (for which he has prepared the critical revision of more than 60 operas, mainly from the little-known Neapolitan repertoire of the 18th century), Universal, McGraw-Hill, Prentice Hall, Pearson Education, Cambridge University Press, and Oxford University Press. He is the artistic coordinator of the ASTRA Project and of the unique Lost Sounds Orchestra. He has been Director of the Conservatory of Music "G. Martucci" of Salerno; currently he is professor of General Musical Culture at the Conservatory of Music "A. Boito" in Parma.