Show simple item record

dc.contributor.author: Zong, Y
dc.contributor.author: Garcia-Sihuay, N
dc.contributor.author: Reiss, J
dc.date.accessioned: 2024-07-22T13:43:06Z
dc.date.available: 2024-07-22T13:43:06Z
dc.date.issued: 2024-04-27
dc.identifier.citation: Zong, Yisu; Garcia-Sihuay, Nelly; Reiss, Joshua. Affiliation: Queen Mary University of London (all authors). AES 2024 International Audio for Games Conference (April 2024), Paper Number 2. Publication Date: April 27, 2024. Subjects: Procedural audio; Sound effects synthesis; Sound matching; Differentiable digital signal processing; Deep learning. en_US
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/98329
dc.description.abstract: Procedural audio models have great potential in sound effects production and design: they can be of very high quality and offer users a high degree of interactivity. However, they often have many free parameters that cannot be specified from an understanding of the phenomenon alone, making it difficult for users to create a desired sound. Moreover, their potential and generalization ability are rarely explored fully due to their complexity. To address these problems, this work introduces a hybrid machine learning method to evaluate the overall sound matching performance on a real sound dataset. First, we train a parameter estimation network using synthesized sound samples. Through a differentiable implementation of the sound synthesis model, we use both parameter loss and spectral loss in this self-supervised stage. Then, we perform adversarial training with spectral loss plus adversarial loss using real sound samples. We evaluate our approach on an example explosion sound synthesis model, experiment with different model designs, and conduct a subjective listening test. We demonstrate that this is an effective method for evaluating the overall performance of a sound synthesis model, and that it can speed up the sound model design process. en_US
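The two training stages described in the abstract can be sketched as follows. This is a minimal illustration only: the toy synthesizer, the L1 parameter and spectral losses, the non-saturating adversarial loss, and all function names are assumptions for demonstration, not taken from the paper (which uses neural networks and a differentiable explosion synthesis model).

```python
# Hypothetical sketch of the two-stage loss scheme from the abstract.
# The synthesizer, losses, and discriminator are toy stand-ins.
import numpy as np

def synth(params):
    """Toy differentiable synthesizer: maps parameters to a 'spectrum'."""
    freqs = np.arange(1, 5)
    return params[0] * np.exp(-params[1] * freqs)

def spectral_loss(spec_a, spec_b):
    """L1 distance between two magnitude spectra."""
    return float(np.abs(spec_a - spec_b).sum())

def param_loss(p_est, p_true):
    """L1 distance between estimated and ground-truth parameters."""
    return float(np.abs(p_est - p_true).sum())

def stage1_loss(p_est, p_true, w_param=1.0, w_spec=1.0):
    """Self-supervised stage: parameter loss plus spectral loss through the
    differentiable synthesizer. Ground-truth parameters are available here
    because the training samples were themselves synthesized."""
    return (w_param * param_loss(p_est, p_true)
            + w_spec * spectral_loss(synth(p_est), synth(p_true)))

def stage2_loss(p_est, real_spec, discriminator, w_spec=1.0, w_adv=1.0):
    """Adversarial stage on real recordings: spectral loss against the real
    spectrum plus an adversarial (generator) loss from a discriminator."""
    fake_spec = synth(p_est)
    adv = -np.log(discriminator(fake_spec) + 1e-12)  # non-saturating loss
    return w_spec * spectral_loss(fake_spec, real_spec) + w_adv * float(adv)

# A perfect parameter estimate gives zero stage-1 loss.
p_true = np.array([1.0, 0.5])
print(stage1_loss(p_true.copy(), p_true))  # → 0.0
```

In stage 2, only the spectral and adversarial terms remain because real recordings have no ground-truth parameters to compare against, which is why the differentiable synthesizer is essential: it lets the spectral loss backpropagate to the parameter estimator.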
dc.format.extent: 11-19
dc.publisher: Audio Engineering Society en_US
dc.title: A machine learning method to evaluate and improve sound effects synthesis model design en_US
dc.type: Conference Proceeding en_US
dc.rights.holder: © 2024 Audio Engineering Society
pubs.notes: Not known en_US
pubs.publication-status: Published en_US
pubs.publisher-url: https://secure.aes.org/forum/pubs/conferences/?elib=22417
rioxxterms.funder: Default funder en_US
rioxxterms.identifier.project: Default project en_US
rioxxterms.funder.project: b215eee3-195d-4c4f-a85d-169a4331c138 en_US

