Show simple item record

dc.contributor.author: Galli, C
dc.contributor.author: Donos, N
dc.contributor.author: Calciolari, E
dc.date.accessioned: 2024-02-26T08:15:43Z
dc.date.issued: 2024-02-01
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/94880
dc.description.abstract: Systematic reviews are cumbersome yet essential to the epistemic process of medical science. Finding significant reports, however, is a daunting task because the sheer volume of published literature makes the manual screening of databases time-consuming. The use of Artificial Intelligence could make literature processing faster and more efficient. Sentence transformers are groundbreaking algorithms that can generate rich semantic representations of text documents and allow for semantic queries. In the present report, we compared four freely available pre-trained sentence transformer models (all-MiniLM-L6-v2, all-MiniLM-L12-v2, all-mpnet-base-v2, and all-distilroberta-v1) on a convenience sample of 6110 articles from a published systematic review. The authors of this review manually screened the dataset and identified 24 target articles that addressed the Focused Questions (FQ) of the review. We applied the four sentence transformers to the dataset and, using the FQ as a query, performed a semantic similarity search. The models identified similarities between the FQ and the target articles to a varying degree, and, when the dataset was sorted by semantic similarity using the best-performing model (all-mpnet-base-v2), all target articles fell within the top 700 of the 6110 papers. Our data indicate that the choice of an appropriate pre-trained model could markedly reduce the number of articles to screen and the time to completion for systematic reviews.
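The workflow the abstract describes (embed the Focused Question and every abstract, then rank articles by cosine similarity to the query) can be sketched as follows. This is a minimal illustration, not the authors' code: the ranking helper is generic NumPy, and the commented-out lines show how the `sentence-transformers` package and the `all-mpnet-base-v2` model named in the abstract would plug in (the variable names `focused_question` and `list_of_abstracts` are placeholders).

```python
import numpy as np

def rank_by_similarity(query_emb, doc_embs):
    """Rank documents by cosine similarity to a query embedding.

    query_emb: 1-D array (one embedding); doc_embs: 2-D array, one row per document.
    Returns (order, sims): indices sorted most-similar first, and the similarity scores.
    """
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = d @ q                     # cosine similarity of each document to the query
    order = np.argsort(-sims)        # descending: best match first
    return order, sims

# With a real model (requires `pip install sentence-transformers`;
# downloads the model weights on first use):
#
# from sentence_transformers import SentenceTransformer
# model = SentenceTransformer("all-mpnet-base-v2")
# query_emb = model.encode(focused_question)
# doc_embs = model.encode(list_of_abstracts)
# order, sims = rank_by_similarity(query_emb, doc_embs)
# screening_order = [list_of_abstracts[i] for i in order]
```

Screening the dataset in `order` rather than at random is what concentrates the target articles near the top of the list, which is the time saving the abstract reports.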
dc.relation.ispartof: Information (Switzerland)
dc.rights: This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.title: Performance of 4 Pre-Trained Sentence Transformer Models in the Semantic Query of a Systematic Review Dataset on Peri-Implantitis
dc.type: Article
dc.rights.holder: © 2024 by the authors. Licensee MDPI, Basel, Switzerland.
dc.identifier.doi: 10.3390/info15020068
pubs.issue: 2
pubs.notes: Not known
pubs.publication-status: Published
pubs.volume: 15
rioxxterms.funder: Default funder
rioxxterms.identifier.project: Default project

