dc.contributor.author | Zong, Y | |
dc.contributor.author | Garcia-Sihuay, N | |
dc.contributor.author | Reiss, J | |
dc.date.accessioned | 2024-07-22T13:43:06Z | |
dc.date.available | 2024-07-22T13:43:06Z | |
dc.date.issued | 2024-04-27 | |
dc.identifier.citation | Authors: Zong, Yisu; Garcia-Sihuay, Nelly; Reiss, Joshua. Affiliations: Queen Mary University of London (all authors). AES Conference: AES 2024 International Audio for Games Conference (April 2024). Paper Number: 2. Publication Date: April 27, 2024. Subjects: procedural audio; sound effects synthesis; sound matching; differentiable digital signal processing; deep learning | en_US |
dc.identifier.uri | https://qmro.qmul.ac.uk/xmlui/handle/123456789/98329 | |
dc.description.abstract | Procedural audio models have great potential in sound effects production and design: they can achieve very high quality and offer users a high degree of interactivity. However, they often have many free parameters that cannot be specified from an understanding of the phenomenon alone, making it difficult for users to create the desired sound. Moreover, their potential and generalization ability are rarely explored fully because of their complexity. To address these problems, this work introduces a hybrid machine learning method to evaluate the overall sound matching performance on a real sound dataset. First, we train a parameter estimation network on synthesized sound samples. Through a differentiable implementation of the sound synthesis model, we use both parameter loss and spectral loss in this self-supervised stage. Then, we perform adversarial training with spectral loss plus adversarial loss on real sound samples. We evaluate our approach on an explosion sound synthesis model as an example, experimenting with different model designs and conducting a subjective listening test. We demonstrate that this is an effective method to evaluate the overall performance of a sound synthesis model, and that it can speed up the sound model design process. | en_US |
dc.format.extent | 11 - 19 | |
dc.publisher | Audio Engineering Society | en_US |
dc.title | A machine learning method to evaluate and improve sound effects synthesis model design | en_US |
dc.type | Conference Proceeding | en_US |
dc.rights.holder | © 2024 Audio Engineering Society | |
pubs.notes | Not known | en_US |
pubs.publication-status | Published | en_US |
pubs.publisher-url | https://secure.aes.org/forum/pubs/conferences/?elib=22417 | |
rioxxterms.funder | Default funder | en_US |
rioxxterms.identifier.project | Default project | en_US |
rioxxterms.funder.project | b215eee3-195d-4c4f-a85d-169a4331c138 | en_US |