Gender Bias Within AI

Table of Contents
- Other Instances of Gender Bias within AI
- Why Gender Bias in AI Happens
- History of Gender Bias
- What Can Be Done
- References
Gender bias in AI has become a pressing issue as artificial intelligence systems increasingly reflect and perpetuate societal stereotypes. According to a June 8, 2024 article from the South China Morning Post, Ernie Bot, China’s counterpart to OpenAI’s ChatGPT, gives gender-biased responses about societal roles. When a South China Morning Post reporter typed the command, “Generate a picture of a nurse taking care of the elderly,” the AI produced an image of a woman with a stethoscope. When asked to depict a professor teaching mathematics or a company boss, the chatbot’s main characters were male-presenting.
The American Psychological Association defines gender bias as “any one of a variety of stereotypical beliefs or biases about individuals based on their gender.” This bias has persisted throughout history and remains prevalent in societies with patriarchal structures, where men hold disproportionate amounts of power and privilege. In many cases throughout the United States, men are paid more than women for the same job, and women are often asked to do stereotypically “female” tasks, such as receptionist work, that their male coworkers are not asked to do.
Virtual assistants often embody stereotypical traits, with most AI tools featuring feminine voices, further entrenching gender biases. For a deeper dive into this phenomenon, see AI Identity Crisis.
Because AI is built by people in a society that holds these biases, those biases surface early in the development of AI chatbots. Left unaddressed, they can lead to harmful consequences for users.
Other Instances of Gender Bias within AI
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has expressed concern about large language models (LLMs) because of their tendency to produce gender-biased, homophobic, and racially stereotyped content.
Many of these AI systems are widely available, raising concerns that people will be exposed to, and gradually internalize, these biases. This is especially problematic for children, who may use AI for fun or educational purposes and begin learning biases that society has been working to eliminate. CNN Business has also reported that gender bias is causing problems with facial recognition AI software used for security applications at places such as concerts and airports.
Open-source LLMs tend to assign more prestigious jobs to men, such as “doctor” or “teacher,” while assigning women socially stigmatized and undervalued roles such as “cook,” “domestic servant,” and “prostitute,” according to UNESCO. When Meta’s LLM, Llama 2, was asked to generate stories, some about men and boys and others about women and girls, many of the stories about men and boys used words such as “woods,” “adventurous,” and “treasure,” while the stories about women and girls featured words such as “love,” “hair,” “gentle,” and “husband.”
AI can also exacerbate inequality in recruitment processes, reinforcing gender stereotypes in hiring decisions. Strategies for addressing these issues, as outlined in Towards Fairer Hiring Practices, are essential to ensure AI-driven recruitment is fair.
Why Gender Bias in AI Happens

AI has continuously shown biases that have proven difficult to fix. The Harvard Business Review explains that AI systems like Amazon’s Alexa, Apple’s Siri, and other natural language processing (NLP) tools have shown gender biases. These systems operate like a game of word association, which explains why an AI model would most likely pair ‘man’ with ‘doctor’ and ‘woman’ with ‘receptionist’ (a toy illustration of this word-association effect appears after the list below). Several factors contribute to these persistent biases:
- Incomplete or skewed training datasets are often a primary reason for these biases; certain demographics may be underrepresented in particular categories. The Harvard Business Review provides an example: if female speakers make up only about 10% of the training data, “… when you apply a trained machine learning model to females, it is likely to produce a higher degree of errors.”
- Commercial AI systems typically rely on supervised machine learning, meaning the training data is labeled. These labels are usually human-created, and since humans hold biases, consciously or unconsciously, those biases become encoded in the labels, leading machine-learning models to produce biased content (a sketch of auditing labels for such skew follows this list).
- Measurements used as inputs for speech synthesis (text-to-speech) have been shown to introduce bias. An analysis of speech synthesis programs revealed that speech modeled on taller speakers with “longer vocal cords and lower-pitched voices” resulted in technology that produces more errors for women than for men. This suggests that measurements used as inputs for other machine-learning models could introduce similar biases.
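To make the word-association point above concrete, here is a minimal sketch using tiny, invented “embedding” vectors. The words, values, and four dimensions are hypothetical, chosen purely for illustration; real models learn vectors with hundreds of dimensions from large text corpora. It shows how cosine similarity over a biased vector space pairs ‘man’ with ‘doctor’ and ‘woman’ with ‘receptionist’:

```python
import numpy as np

# Toy 4-dimensional "embeddings" invented for illustration only; real models
# learn hundreds of dimensions from large text corpora.
embeddings = {
    "man":          np.array([0.9, 0.1, 0.3, 0.2]),
    "woman":        np.array([0.1, 0.9, 0.3, 0.2]),
    "doctor":       np.array([0.8, 0.2, 0.9, 0.1]),
    "receptionist": np.array([0.2, 0.8, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, ~0.0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy probe: in a biased vector space, "doctor" sits
# closer to "man" while "receptionist" sits closer to "woman".
for occupation in ("doctor", "receptionist"):
    for gender in ("man", "woman"):
        score = cosine(embeddings[gender], embeddings[occupation])
        print(f"{gender:>5} ~ {occupation:<12}: {score:.2f}")
```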
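And here is a hedged sketch of the label audit the second point suggests: before training, count how often each human-assigned label co-occurs with a gendered pronoun. The six example sentences are invented for illustration; a heavy skew in counts like these would be encoded by any model trained on the labels.

```python
from collections import Counter

# Hypothetical human-labeled training examples, invented for illustration:
# each pairs a sentence with a human-assigned occupation label.
labeled_data = [
    ("He examined the patient",   "doctor"),
    ("He wrote the prescription", "doctor"),
    ("She examined the patient",  "doctor"),
    ("She answered the phones",   "receptionist"),
    ("She greeted the visitors",  "receptionist"),
    ("He answered the phones",    "receptionist"),
]

# Audit: how often does each occupation label co-occur with each pronoun?
counts = Counter()
for sentence, label in labeled_data:
    pronoun = sentence.split()[0].lower()
    counts[(label, pronoun)] += 1

for (label, pronoun), n in sorted(counts.items()):
    print(f"{label:<13} {pronoun:<4} {n}")
```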
History of Gender Bias
As noted above, gender bias is not new; it has persisted for centuries. Today, we still face issues such as the gender pay gap and debates about “a woman’s place,” which stem from historical views of women as the property of their fathers or husbands, from laws that barred women from owning property, and from women being confined to domestic work or to lower-wage positions in apparel manufacturing, teaching, and nursing. These outdated beliefs are rooted in a past in which women could not even attend college.
In the late 19th and early 20th centuries, the first wave of feminism fought for women’s property and voting rights. In the 1960s, the second wave targeted workplace and legal inequality. The third wave took place in the 1990s, and today we are witnessing the fourth wave with the #MeToo movement.
These historic strides toward equality should not be ignored or reversed by AI systems that are predominantly developed and trained by men. As discussed, AI has produced heavily gender-stereotyped output, much of it harmful. If these biases are not addressed, they could instill outdated beliefs in younger generations, suggesting male superiority and undoing years of progress toward gender equality.
What Can Be Done
A crucial step in overcoming AI bias is ensuring that the training datasets are diverse in terms of gender, ethnicity/race, age, and sexuality. This means having roughly equal representation across all demographics—for example, ensuring the same number of interviews with women as with men.
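As a rough sketch of what enforcing that balance could look like in practice (the records, field names, and 100/900 split below are all invented for illustration), one simple approach is to group examples by a demographic attribute and downsample every group to the size of the smallest:

```python
import random
from collections import Counter, defaultdict

# Hypothetical training records, invented for illustration: each example
# carries the speaker's self-reported gender.
dataset = [{"text": f"interview {i}", "gender": g}
           for i, g in enumerate(["woman"] * 100 + ["man"] * 900)]

by_group = defaultdict(list)
for example in dataset:
    by_group[example["gender"]].append(example)

print("before:", {g: len(rows) for g, rows in by_group.items()})

# Naive rebalancing: downsample every group to the smallest group's size.
# Collecting more data from underrepresented groups is usually preferable
# to discarding data, but this illustrates the goal of equal representation.
target = min(len(rows) for rows in by_group.values())
rng = random.Random(0)
balanced = [ex for rows in by_group.values() for ex in rng.sample(rows, target)]

print("after:", Counter(ex["gender"] for ex in balanced))
```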
Ensuring diversity among AI developers is also essential. Having diverse teams from a project’s inception to its completion promotes inclusivity and results in more diverse outputs from the AI. International Women’s Day suggests that AI companies attract more women to tech jobs to diversify the pipeline and workforce.
The Harvard Business Review suggests that machine-learning teams measure accuracy levels separately for different demographic categories to ensure no single category is treated more favorably.
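A minimal sketch of that per-group measurement, assuming hypothetical evaluation records that each carry a demographic attribute, a true label, and a model’s prediction (all invented here for illustration):

```python
from collections import defaultdict

# Hypothetical evaluation records, invented for illustration.
results = [
    {"gender": "woman", "true": "doctor", "pred": "receptionist"},
    {"gender": "woman", "true": "nurse",  "pred": "nurse"},
    {"gender": "man",   "true": "doctor", "pred": "doctor"},
    {"gender": "man",   "true": "nurse",  "pred": "nurse"},
]

# Accuracy computed separately per demographic group, as the HBR
# recommendation describes.
correct = defaultdict(int)
total = defaultdict(int)
for r in results:
    total[r["gender"]] += 1
    correct[r["gender"]] += int(r["pred"] == r["true"])

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} "
          f"({correct[group]}/{total[group]})")
```

A large gap between the per-group numbers, like the one the 10%-female training data example above would produce, is the signal that one category is being treated less favorably.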
One commonly proposed remedy for AI bias is conducting more research, especially research focused on minority groups, to ensure that AI has sufficient data to produce balanced and fair results. Ultimately, it is up to developers to prioritize diversity in their AI systems so that they provide unbiased, non-stereotyped outputs.
References
Gender bias. (n.d.). American Psychological Association. Retrieved June 14, 2024, from https://dictionary.apa.org/gender-bias
Feast, J. (2019, November 20). 4 ways to address gender bias in AI. Harvard Business Review. https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai
Gender and AI: Addressing bias in Artificial Intelligence. (n.d.). International Women’s Day. https://www.internationalwomensday.com/Missions/14458/Gender-and-AI-Addressing-bias-in-artificial-intelligence
Gordon, C. (n.d.). Growing Apart: A Political History of American Inequality. Scalar. Retrieved June 14, 2024, from https://scalar.usc.edu/works/growing-apart-a-political-history-of-american-inequality/gender-and-inequality
Metz, R. (2019, November 21). AI software defines people as male or female. That’s a problem. CNN. https://edition.cnn.com/2019/11/21/tech/ai-gender-recognition-problem/index.html
O’Hagan, C. (2024, March 7). Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. UNESCO. https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes
Sinn, F., & Nekoei, A. (2021, May 27). The origin of the gender gap. CEPR. https://cepr.org/voxeu/columns/origin-gender-gap
