Show simple item record

dc.contributor.author    Gan, Y    en_US
dc.date.accessioned    2022-11-09T14:43:38Z
dc.date.issued    2022
dc.identifier.uri    https://qmro.qmul.ac.uk/xmlui/handle/123456789/82330
dc.description.abstract    Text-to-SQL studies how to translate natural language descriptions into SQL queries. The key challenge is addressing the mismatch between natural language and SQL queries. To bridge this gap, we propose an SQL intermediate representation (IR) called Natural SQL (NatSQL), which makes inferring SQL easier for models and improves the performance of existing models. We also study the robustness of existing models in light of schema linking and compositional generalization. Specifically, NatSQL preserves the core functionalities of SQL while simplifying the queries as follows: (1) dispensing with operators and keywords such as GROUP BY, HAVING, FROM, and JOIN ON, for which counterparts are usually hard to find in the text descriptions; (2) removing the need for nested subqueries and set operators; and (3) making schema linking easier by reducing the required number of schema items. On Spider, a challenging text-to-SQL benchmark that contains complex and nested SQL queries, NatSQL outperforms other IRs and significantly improves the performance of several previous SOTA models. Furthermore, for existing models that do not support executable SQL generation, NatSQL easily enables them to generate executable SQL queries.

This thesis also discusses the robustness of text-to-SQL models. Recently, there has been significant progress in studying neural networks to translate text descriptions into SQL queries. Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on lexical matching between words in natural language (NL) questions and tokens in table schemas, which may render models vulnerable to attacks that break the schema linking mechanism. In particular, this thesis introduces Spider-Syn, a human-curated dataset based on the Spider benchmark for text-to-SQL translation. NL questions in Spider-Syn were modified from Spider by replacing their schema-related words with manually selected synonyms that reflect real-world question paraphrases. Experiments show that accuracy drops dramatically once this explicit correspondence between NL questions and table schemas is eliminated, even though the synonyms are not adversarially selected for worst-case attacks. We present two categories of approaches to improve model robustness. The first category utilizes additional synonym annotations for table schemas by modifying the model input, whereas the second is based on adversarial training. Experiments illustrate that both categories of approaches significantly outperform their counterparts without the defense, and that the approaches in the first category are more effective. Building on these results, we further examine Exact Match based Schema Linking (EMSL). EMSL has become standard in text-to-SQL: many state-of-the-art models employ EMSL, with performance dropping significantly when the EMSL component is removed. However, we show that EMSL reduces robustness, rendering models vulnerable to synonym substitution and typos. Instead of relying on EMSL to make up for deficiencies in question-schema encoding, we show that using a pre-trained language model as an encoder can improve performance without EMSL, yielding a more robust model. We also study the design choices of the schema linking module, finding that a suitable design benefits performance and interpretability. Our experiments show that a better understanding of the schema linking mechanism can improve model interpretability, robustness, and performance.

This thesis finally discusses the text-to-SQL compositional generalization challenge: neural networks struggle with compositional generalization when training and test distributions differ. We propose a clause-level compositional example generation method. We first split the sentences in the Spider text-to-SQL dataset into sub-sentences, annotating each sub-sentence with its corresponding SQL clause, resulting in a new dataset, Spider-SS. We then construct a further dataset, Spider-CG, by composing Spider-SS sub-sentences in different combinations, to test the ability of models to generalize compositionally. Experiments show that existing models suffer significant performance degradation when evaluated on Spider-CG, even though every sub-sentence is seen during training. To deal with this problem, we modify a number of state-of-the-art models to train on the segmented data of Spider-SS, and we show that this method improves generalization performance.    en_US
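The IR simplification described in the abstract can be illustrated with a contrast between a nested SQL query and a flattened clause sequence. The IR string below is a hypothetical illustration in the spirit of NatSQL, not actual NatSQL syntax; the schema names (`singer`, `concert`) are invented for the example.

```python
# Illustrative contrast: a nested SQL query vs. a hypothetical flattened IR
# that, like NatSQL, avoids nested subqueries and FROM/JOIN ON keywords,
# so the representation aligns more closely with the question wording.

sql = (
    "SELECT name FROM singer "
    "WHERE singer_id NOT IN (SELECT singer_id FROM concert)"
)

# Hypothetical flattened IR: a single clause sequence, no subquery.
ir = "select singer.name where @.singer_id not in concert.singer_id"

# The IR uses fewer keywords and mentions fewer schema items, which is
# the property that makes decoding and schema linking easier for a parser.
print(len(sql.split()), len(ir.split()))
```

The point of the sketch is only the structural difference: the subquery and the FROM/JOIN machinery disappear, while the comparison the question actually expresses is kept.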
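The brittleness of exact-match schema linking under synonym substitution can be sketched in a few lines. This is a minimal illustration of the general mechanism, not the thesis's code; the question, the schema columns, and the paraphrase are invented for the example.

```python
# Exact-match schema linking (EMSL), naively: link a question token to a
# schema column only when the column's words appear verbatim in the question.
# A Spider-Syn-style synonym substitution then breaks every link.

def exact_match_links(question, columns):
    """Return {column: bool}: whether each column's words all appear
    verbatim in the question."""
    tokens = set(question.lower().split())
    return {col: all(w in tokens for w in col.lower().split("_"))
            for col in columns}

columns = ["singer_name", "age"]

# Original question: column words appear verbatim, so both columns link.
print(exact_match_links("show the name and age of each singer", columns))

# Synonym-substituted paraphrase ("singer" -> "vocalist", "age" -> "how old"):
# the meaning is unchanged, but exact matching finds no links.
print(exact_match_links("show the name and how old each vocalist is", columns))
```

A model whose schema linking depends on this kind of literal overlap loses its linking signal on the second question, which is the failure mode Spider-Syn is built to expose.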
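The clause-level composition behind Spider-CG can likewise be sketched: sub-sentences annotated with their SQL clauses (as in Spider-SS) are recombined into examples whose parts were all seen in training but whose combination was not. The sub-sentence/clause pairs below are invented for illustration, not taken from the dataset.

```python
# Clause-level composition in the spirit of Spider-CG: each sub-sentence
# carries its own SQL clause; recombining them yields a novel full example.

sub_sentences = [
    ("Find the names of all singers", "SELECT name FROM singer"),
    ("who are older than 30",         "WHERE age > 30"),
    ("who come from France",          "WHERE country = 'France'"),
]

def compose(select_part, where_part):
    """Compose one SELECT sub-sentence with one WHERE sub-sentence
    into a new (question, SQL) pair."""
    question = f"{select_part[0]} {where_part[0]}"
    sql = f"{select_part[1]} {where_part[1]}"
    return question, sql

# Training data may contain each sub-sentence (e.g. with the other WHERE
# clause), but never this particular combination.
question, sql = compose(sub_sentences[0], sub_sentences[2])
print(question)
print(sql)
```

Evaluating on such recombined examples tests exactly the compositional gap: every fragment is in-distribution, but the whole is not.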
dc.language.iso    en    en_US
dc.title    Accurate and Robust Text-to-SQL Parsing using Intermediate Representation    en_US
pubs.notes    Not known    en_US
rioxxterms.funder    Default funder    en_US
rioxxterms.identifier.project    Default project    en_US
qmul.funder    EECS Studentships Award::Queen Mary, University of London    en_US


This item appears in the following Collection(s)

  • Theses [4150]
    Theses Awarded by Queen Mary University of London
