NLI: Combining Logic and Neural Networks
There have been various approaches to natural language inference (NLI). Logical approaches, which underlie the so-called formal semantics of natural language, have focused on logical operators (like not, and, every) and on inferences such as the one from All humans are mortal and Socrates is a human to Socrates is mortal.
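As a minimal sketch (not part of the project description itself), the syllogism above can be viewed as rule application over a set of facts; the predicate names and representation here are illustrative choices, not an existing system:

```python
# Facts as (predicate, argument) pairs: "Socrates is a human".
facts = {("human", "socrates")}
# One rule: for all x, human(x) -> mortal(x), i.e. "All humans are mortal".
rules = [("human", "mortal")]

def forward_chain(facts, rules):
    """Repeatedly apply each rule premise -> conclusion
    to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

# The conclusion "Socrates is mortal" is derived:
print(("mortal", "socrates") in forward_chain(facts, rules))  # True
```

The point of the sketch is that such inferences follow from the logical form alone, independently of what the non-logical words mean.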
Logic has, however, had little to say about inferences that involve the meaning of individual words not considered logical, e.g. from Dickens was an author to Dickens wrote books. Such inferences have been central to the NLP task called Textual Entailment, and later Natural Language Inference. The best-performing systems for these tasks have been based on word models and machine learning (ML), and most recently on ML methods based on neural networks.
Currently, there is growing interest in combining logic and neural networks in various ways. Logical rules may be treated as an additional resource during learning, similar to the extra knowledge sources used in distantly supervised learning, or as an additional learning task. In this project we will explore some of these models for integration and see whether including more logic can improve the results.
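One common way to use rules as an extra learning signal is to add a soft logic-consistency penalty to the ordinary task loss. The sketch below is a hypothetical illustration of that idea with a transitivity rule for entailment (if A entails B and B entails C, then A should entail C); the function name, probabilities, and weighting are assumptions for the example, not a specific published method:

```python
def combined_loss(p_entail_ab, p_entail_bc, p_entail_ac,
                  task_loss, lam=0.5):
    """Task loss plus a soft penalty for violating entailment
    transitivity. In a real system the probabilities would come
    from the model's predictions; here they are plain numbers."""
    # Soft violation: how far the weaker premise probability
    # exceeds the conclusion probability (zero when consistent).
    violation = max(0.0, min(p_entail_ab, p_entail_bc) - p_entail_ac)
    return task_loss + lam * violation

# Consistent predictions add no penalty:
print(combined_loss(0.9, 0.8, 0.85, task_loss=0.3))  # 0.3
# Inconsistent predictions are penalized:
print(combined_loss(0.9, 0.8, 0.2, task_loss=0.3))   # 0.3 + 0.5 * 0.6 = 0.6
```

The weighting parameter lam controls how strongly the rule constrains the model relative to the supervised task, which is exactly the kind of trade-off a project like this would investigate.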
Some relevant references: