How NLP Models Perpetuate Human Biases

O'Reilly Media · Jan 21, 2021 · 4 min read

We have seen an explosion of chatbots in the market. AI has become ingrained in the daily fabric of our lives. Service industries have turned to AI-driven chatbots to manage customer interactions, increasing the speed and quality of resolution while decreasing cost. Millennials increasingly prefer to interact with chatbots rather than humans. As we embrace chatbots, it is paramount to evaluate the role they play in reinforcing and perpetuating societal biases and stereotypes. With the proliferation of chatbots creating a new paradigm of human-machine collaboration, an interesting ethical question emerges: should we hold chatbots to a higher ethical standard than we hold ourselves?

Underlying chatbots are natural language processing (NLP) models built on deep learning algorithms called neural networks. Deep learning models can accurately map complex relationships in messy data, in text as well as images. So what are popular NLP models like convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) doing in chatbots? They are mathematically defining relationships between words, as explicitly or implicitly encoded in the training corpus.
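To make that idea concrete, here is a minimal sketch of how word embeddings capture relationships between words. The words and 3-dimensional vectors below are toy values invented purely for illustration; real models learn embeddings with hundreds of dimensions from massive corpora.

```python
import numpy as np

# Toy 3-dimensional word embeddings. In practice these vectors are
# learned from a large training corpus; the values here are
# hand-picked purely for illustration.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# The model "knows" king relates to queen as man relates to woman
# because the vector offsets are similar: king - man + woman ≈ queen.
analogy = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(round(cosine(analogy, embeddings["queen"]), 3))  # ~0.995
```

Whatever associations the corpus contains, desirable or not, end up encoded in this same vector geometry.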

Examples of Chatbots Inheriting Human Biases

There have been several disconcerting NLP algorithm mishaps. Amazon’s secret AI recruiting tool showed bias against women. Microsoft’s now-infamous chatbot Tay, a machine learning experiment in social interaction, had to be decommissioned after it picked up a series of racial slurs.

These stories shock and outrage us. We are quick to blame the company or the AI developer. Yet it is human-generated data that these NLP models are trained on. NLP models are only exposing the existing human bias learned from the training data. NLP mishaps are a reflection of humanity’s dark side.

How Chatbots Perpetuate Human Biases

Just as a child’s code of ethics is shaped by its parents and environment, machine learning models learn from human-generated training data specified by their creators. Just as humans grow wiser with experience, machine learning models require large training corpora to learn robust and generalizable relationships. And just as children grow up and pass on their code of ethics, as well as their biases, machines too will perpetuate their code of ethics and their biases through interactions with future generations. The difference is that these machines are effectively immortal: they will persist across generations. Thus our chatbots must be held to the highest ethical standard and must correct for the biases in training data generated by fallible humans.

Ways to Correct Biases in Chatbots

In continuous pursuit of excellence, we must admit our fallibility and look to correct it in generations to come. From an NLP modeling perspective, three bias correction methods exist to help chatbots overcome human biases in their training data.

One option is to remove the biased concept from the NLP model entirely. For example, in preparation for an NLP model, words and phrases in a training corpus are mapped to vectors of real numbers called word embeddings. Mathematically, gender can be subtracted from these vectors. However, completely removing the concept of gender may not be practical in applications where it is a key predictor or segmentation variable. An alternative that keeps the concept of gender is to remove only the gender stereotypes we don’t want (e.g., receptionist) while keeping those we do (e.g., CEO). Finally, additional training data can be synthetically generated by flipping pronouns (e.g., swapping “he” and “she”) so that the model does not learn unintended bias from a lack of representation in the training data.
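As a concrete illustration of the first option, here is a minimal sketch of subtracting a gender direction from word embeddings. The 4-dimensional vectors are toy values constructed so that the first dimension carries the gender signal; real embeddings come from models such as word2vec or GloVe, and published debiasing methods are more involved than this sketch.

```python
import numpy as np

# Toy 4-dimensional embeddings; by construction, the first dimension
# carries most of the gender signal here. Real embeddings are learned
# from a corpus, and production debiasing is more involved.
embeddings = {
    "he":           np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":          np.array([-1.0, 0.2, 0.1, 0.0]),
    "receptionist": np.array([-0.7, 0.5, 0.3, 0.1]),
    "ceo":          np.array([ 0.6, 0.5, 0.3, 0.1]),
}

# Estimate the gender direction as the normalized difference between
# the "he" and "she" vectors.
g = embeddings["he"] - embeddings["she"]
g = g / np.linalg.norm(g)

def debias(vector, direction):
    """Subtract the component of `vector` along `direction`,
    i.e., project gender out of the embedding."""
    return vector - np.dot(vector, direction) * direction

for word in ("receptionist", "ceo"):
    before = np.dot(embeddings[word], g)
    after = np.dot(debias(embeddings[word], g), g)
    print(f"{word}: gender component {before:+.2f} -> {after:+.2f}")
# receptionist: gender component -0.70 -> +0.00
# ceo: gender component +0.60 -> +0.00
```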
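And a similarly minimal sketch of the third option, pronoun-flipping data augmentation. A production implementation would need part-of-speech tagging to disambiguate words like “her” (object vs. possessive) and to preserve capitalization; this toy version only swaps whitespace-separated tokens.

```python
# Mapping of gendered pronouns to their flipped counterparts.
# "her" is ambiguous (could be "him" or "his"); a real implementation
# would use part-of-speech tags to choose correctly.
PRONOUN_FLIPS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her",
    "her": "him",  # ambiguous: could also be "his"
    "himself": "herself", "herself": "himself",
}

def flip_pronouns(sentence: str) -> str:
    """Return a copy of `sentence` with gendered pronouns swapped."""
    return " ".join(
        PRONOUN_FLIPS.get(token.lower(), token)
        for token in sentence.split()
    )

corpus = ["he is a ceo", "she is a receptionist"]
# Train on the original sentences plus their flipped counterparts so
# that both genders appear in both roles.
augmented_corpus = corpus + [flip_pronouns(s) for s in corpus]
for sentence in augmented_corpus:
    print(sentence)
# he is a ceo
# she is a receptionist
# she is a ceo
# he is a receptionist
```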

Why Continuous Learning Is Required for Chatbots

Our code of ethics has continually evolved over thousands of years, with recent progress in civil rights, women’s rights, and the LGBT movement. Though the core principles of ethics have not fundamentally changed since the time of Socrates, the practical application of ethics is fluid, constantly evolving alongside society. If we hardcode today’s biases into immortal machines, we will pollute the minds of future generations with the biases of generations past, slowing the ethical evolution of the human race. Instead, we can leverage techniques to help chatbots overcome today’s human biases, so they can in turn make the human race better, ethically speaking!


Naomi Arcadia Kaduwela is an innovative, ethical AI creator. As the head of Kavi Labs, the innovation arm of Kavi Global, Naomi partners with clients across industries to cocreate business value leveraging AI, advanced analytics, and IoT. Prior to joining Kavi Global, Naomi was a founding member of GE Healthcare’s analytics team and graduated from GE’s Digital Technology Leadership Program. Naomi is an academic researcher and a conference speaker. Naomi holds an MS degree in analytics from Northwestern University and joint BS degrees in computer science and applied psychology from Ithaca College.
