Date of Award
Doctor of Philosophy
Alvis C. Fong, Ph.D.
Ikhlas Abdel Qader, Ph.D.
Steven Carr, Ph.D.
Ajay Gupta, Ph.D.
Artificial intelligence, commonsense reasoning, deep neural networks, fairness in artificial intelligence, machine learning, natural language processing
Although Artificial Intelligence (AI) promises to deliver ever more user-friendly consumer applications, recent mishaps involving fake information and biased treatment serve as vivid reminders of its pitfalls. AI can harbor latent biases and flaws that cause harm in diverse and unexpected ways, so it is crucial to understand the reasons for, mechanisms behind, and circumstances under which AI can fail. A lack of commonsense reasoning, for instance, can lead Machine Learning (ML) systems to make biased or unfair decisions: a system trained on data that is biased or unrepresentative of the real world may make decisions that are unfair or discriminatory. Conversely, a system that has a good understanding of the concept of fairness and is able to apply this knowledge to its decision-making process may be less likely to make biased or unfair decisions.
As the amount of unstructured text data generated on the internet continues to grow at an exponential rate, there is an increasing need for intelligent approaches to process and extract valuable knowledge from this data. Natural Language Processing (NLP) enables computers to understand and analyze human language, allowing them to effectively handle unstructured data. While human-machine interaction may seem straightforward in theory, it can be highly complex and challenging in practice. Despite significant advancements in deep learning models for language processing, NLP systems still struggle with understanding basic commonsense knowledge. This demonstrates the continued difficulty of achieving reliable and effective communication between humans and machines.
In this work, we have three research objectives that each address a particular aspect of AI. The first objective of our research is to provide a comprehensive overview of AI-induced mishaps in consumer applications and to propose strategies for mitigating their negative impacts. This objective aims to not only raise awareness of current issues but also inspire other researchers in the consumer technology field to develop more reliable AI applications.
In our second research objective, we focus on developing learning models that can automatically learn representations of human language. We investigate commonsense inference, a task that requires both natural language understanding and commonsense reasoning and is considered one of the most challenging problems in the field of NLP. We introduce the Commonsense Validation and Explanation (ComVE) tasks and address them using state-of-the-art deep learning models. We propose a novel technique, Masked Language Modeling for short sentences, inspired by BERT, and fine-tune several language models. We also present a method inspired by question answering that treats the classification problem as a multiple-choice question answering task; this approach achieved an accuracy of 96.06%, significantly outperforming the baseline.
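To illustrate the masking idea underlying this technique, the sketch below applies BERT-style random masking to a short tokenized sentence. It is a minimal, hypothetical simplification for illustration only: the `mask_tokens` helper, the masking probability, and the example sentence are assumptions, not the dissertation's implementation.

```python
import random

MASK_TOKEN = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=None):
    """Randomly replace tokens with [MASK], BERT-style.

    Returns the masked sequence and a parallel list of labels:
    the original token at masked positions, None elsewhere
    (no loss would be computed at unmasked positions).
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK_TOKEN)
            labels.append(tok)   # model must recover this token
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

# Example short sentence in the spirit of the ComVE task
sentence = "he put an elephant into the fridge".split()
masked, labels = mask_tokens(sentence, mask_prob=0.3, seed=7)
```

During fine-tuning, the model would be trained to predict the original token at each `[MASK]` position; a short sentence typically warrants a higher masking probability than BERT's default 15% so that at least one position is masked.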
In the third research objective, we explore the issue of normalized discrimination in language. We develop NLP models to identify Patronizing and Condescending Language (PCL) and to raise awareness of unconscious attitudes that implicitly target vulnerable communities. This task is inherently different from sentiment analysis because positive or negative attitudes hidden in the context will not necessarily be considered positive or negative for PCL tasks. We devised various NLP data augmentation methods, such as Easy Data Augmentation, Back Translation, and PEGASUS paraphrasing, to mitigate the effects of the highly imbalanced dataset. We explored fine-tuned RoBERTa and a few-shot learning technique with two GPT-3 engines, Curie and Davinci. We achieved competitive results, ranking among the top 16 of 77 participating teams in SemEval-2022 (Semantic Evaluation), which is considered a top-tier workshop within the NAACL conference.
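For illustration, the following minimal sketch shows two of the Easy Data Augmentation operations (random swap and random deletion) applied to a tokenized sentence; the function names, parameters, and example text are hypothetical simplifications of the EDA recipe, not the code used in the dissertation.

```python
import random

def random_swap(tokens, n_swaps=1, seed=None):
    """Swap the positions of two randomly chosen tokens, n_swaps times."""
    rng = random.Random(seed)
    tokens = tokens[:]  # copy so the original sentence is untouched
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1, seed=None):
    """Drop each token independently with probability p.

    Keeps at least one token so the augmented sentence is never empty.
    """
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]

text = "this community deserves our support and attention".split()
aug_swap = random_swap(text, n_swaps=2, seed=3)
aug_del = random_deletion(text, p=0.2, seed=3)
```

Applied to the minority (PCL-positive) class, such label-preserving perturbations generate additional training examples and help counteract class imbalance.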
Saeedi, Sirwe, "Socially Aware Natural Language Processing with Commonsense Reasoning and Fairness in Intelligent Systems" (2023). Dissertations. 3952.