Attack the bots! Adversarial data augmentation for intent modeling
This project will have an external supervisor from Kindly, in addition to an internal supervisor from LTG.
Intent classification is one of the core components of conversational agents (also known as “chatbots”), where the system attempts to predict the intent of a human (e.g., “CHANGE PASSWORD”) on the basis of a chat message (e.g., “Hi there, I’m trying to change my password but can’t seem to find out how”, or “Why can’t I log in anymore??? I don’t have access to my profile!! Can you help me fix it? It’s urgent!!”). Platforms such as Kindly enable non-technical users to train models for intent classification by crafting training data for custom intents. While this chatbot programming paradigm allows a rapidly increasing number of companies to improve and optimize their customer support services, it often leads to relatively small data sets that are very susceptible to so-called adversarial attacks.
In the context of (neural) intent prediction, black-box adversarial attacks can be defined as follows: given a query Q (e.g., “I need to order a pizza”) that the system can classify successfully, fool the system into incorrectly classifying a perturbed version of the query (let’s call it Q’) with high confidence. Q’ can differ from Q along one or more dimensions: on the level of characters (“I wnat to odrer a pizza”), on the level of words (by swapping words for semantically adjacent ones, “I gotta order a pizza”), by adding spurious content (“Hi there, I need to order a pizza, ok?”), and so on.
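To make the three perturbation dimensions concrete, here is a minimal sketch of how such black-box perturbations could be generated. All function names and the small synonym lexicon are illustrative assumptions, not part of any particular attack library:

```python
import random

def swap_adjacent_chars(text, rng=random):
    """Character-level perturbation: swap two adjacent characters
    inside a randomly chosen word of length >= 4 (e.g. "order" -> "odrer")."""
    words = text.split()
    candidates = [i for i, w in enumerate(words) if len(w) >= 4]
    if not candidates:
        return text
    i = rng.choice(candidates)
    w = words[i]
    j = rng.randrange(len(w) - 1)
    words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

def substitute_words(text, synonyms):
    """Word-level perturbation: replace words with semantically
    adjacent ones from a (hypothetical) hand-crafted lexicon."""
    return " ".join(synonyms.get(w.lower(), w) for w in text.split())

def add_spurious_content(text, prefix="Hi there, ", suffix=", ok?"):
    """Content-level perturbation: wrap the query in spurious chit-chat."""
    return prefix + text + suffix
```

A real attack would apply such perturbations repeatedly, keeping only the variants that flip the classifier’s prediction while remaining understandable to a human reader.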
Recent years have seen a surge of interest in adversarial attacks on NLP systems (see for instance https://arxiv.org/abs/1901.06796, https://bibinlp.umiacs.umd.edu/, and https://www.aclweb.org/anthology/D17-1215/). This master’s project will explore the potential of (semi-)automatically generated adversarial examples for improving intent classification models for conversational agents. In particular, we will explore adversarial data augmentation, where adversarial examples are added to the “vanilla” training instances to improve model robustness. We will look at different strategies for adversary generation and measure their effectiveness across neural architectures and chatbot domains, focusing primarily on the Norwegian language.
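The augmentation loop itself is simple to sketch: each training utterance is paired with perturbed copies that keep the original gold intent label, and the mixed set is used for training. The function names and the toy character-swap perturbation below are illustrative assumptions:

```python
import random

def augment(train_data, perturb, n_adv_per_example=1, rng=random):
    """Adversarial data augmentation: for every (utterance, intent) pair,
    add perturbed copies that retain the original gold intent label."""
    augmented = list(train_data)
    for utterance, intent in train_data:
        for _ in range(n_adv_per_example):
            augmented.append((perturb(utterance, rng), intent))
    rng.shuffle(augmented)
    return augmented

def char_swap(text, rng):
    """Toy perturbation for illustration: swap one random adjacent
    character pair in the utterance."""
    chars = list(text)
    if len(chars) < 2:
        return text
    j = rng.randrange(len(chars) - 1)
    chars[j], chars[j + 1] = chars[j + 1], chars[j]
    return "".join(chars)
```

In the project, the `perturb` function would be replaced by the adversary-generation strategies under study, and the augmented set would feed the intent classifiers being compared.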
Kindly is a language technology company based in Oslo. Kindly was founded in 2016 and has since grown to a team of 40+ employees, several of whom have graduated from the Language Technology Group (LTG) at UiO with master’s and doctoral degrees. Natural language processing and machine learning are at the core of the Kindly chatbot platform, which powers the conversational agents for some of the leading enterprises in the Nordics, such as Norwegian Air Shuttle, Elkjøp, Kahoot!, Thon Hotels, and Finn. This master’s project will be carried out in close cooperation with Kindly.