    The Sousta corpus: Beat-informed automatic transcription of traditional dance tunes

File: Accepted version (461.5 KB)
Pagination: 531 - 537 (7)
Publisher: ISMIR
Publisher URL: https://wp.nyu.edu/ismir2016/
    Abstract
In this paper, we present a new corpus for research in computational ethnomusicology and automatic music transcription, consisting of traditional dance tunes from Crete. This rich dataset includes audio recordings, scores transcribed by ethnomusicologists and aligned to the audio performances, and meter annotations. A second contribution of this paper is the creation of an automatic music transcription system able to support the detection of multiple pitches produced by the lyra (a bowed string instrument). Furthermore, the transcription system is able to cope with deviations from standard tuning, and provides temporally quantized notes by combining the output of the multi-pitch detection stage with a state-of-the-art meter tracking algorithm. Experiments carried out for note tracking using a 25 ms onset tolerance reach 41.1% using information from the multi-pitch detection stage only, 54.6% when integrating beat information, and 57.9% when also supporting tuning estimation. The produced meter-aligned transcriptions can be used to generate staff notation, which increases the value of the system for studies in ethnomusicology.
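The abstract describes combining multi-pitch detection with meter tracking to produce temporally quantized notes. As a rough illustration of the quantization step only, the sketch below snaps detected note onsets to a grid built from tracked beat times. All function names, the subdivision choice, and the snapping rule are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of beat-informed temporal quantization:
# snap detected note onsets (in seconds) to the nearest point on a
# grid obtained by subdividing the tracked inter-beat intervals.
# This is NOT the paper's method, only an illustration of the idea.

def quantize_onsets(onsets, beats, subdivisions=2):
    """Snap each onset time to the nearest grid point, where the grid
    divides every inter-beat interval into `subdivisions` parts."""
    # Build the quantization grid from consecutive beat times.
    grid = []
    for b0, b1 in zip(beats, beats[1:]):
        step = (b1 - b0) / subdivisions
        grid.extend(b0 + k * step for k in range(subdivisions))
    grid.append(beats[-1])  # include the final beat itself
    # Snap each onset to the closest grid point.
    return [min(grid, key=lambda g: abs(g - t)) for t in onsets]
```

For beats at 0.0, 0.5, and 1.0 s with two subdivisions, the grid is 0.0, 0.25, 0.5, 0.75, 1.0 s, so an onset detected at 0.26 s would be quantized to 0.25 s. The real system would additionally quantize note offsets and map grid positions to metrical positions for staff notation.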
Authors: Holzapfel, A.; Benetos, E.
Conference: 17th International Society for Music Information Retrieval Conference (ISMIR 2016)
URI: http://qmro.qmul.ac.uk/xmlui/handle/123456789/12636
Collections: Electronic Engineering and Computer Science [2319]
Licence information: https://wp.nyu.edu/ismir2016/