
Penn State’s Bias-a-thon: A Beacon for Ethical AI in the 21st Century
Additions and Updates made by TIA Team on July 22, 2024
Table of Contents
- A Glimpse into the Bias-a-thon Challenge
- Revelations from the 2023 Bias-a-thon
- Responsible AI: A Call to Action
- The Global Relevance of Penn State Bias-a-thon
- Key Takeaways: Questions Answered
- References
The world and Artificial Intelligence (AI) are advancing rapidly, and in our quest for innovation, we often overlook ethical considerations. However, a crucial part of integrating AI into everyday life is scrutinizing the algorithms that power our AI systems and discovering their biases. That’s what Penn State Bias-a-thon hopes to do: expose biases in AI. I believe that the results from the Bias-a-thon can help developers discover, reduce, and eliminate biases in AI and push for ethical AI development.
A Glimpse into the Bias-a-thon Challenge
Penn State’s Bias-a-thon is a competition sponsored by the Center for Socially Responsible Artificial Intelligence (CSRAI). The most recent event ran from November 13 to November 16, 2023. Members of the Penn State community with @psu.edu email addresses were invited to study biases within generative AI tools. The challenge was to create prompts that would produce outputs reflecting biases in specific categories: age, ability, language, history, context, culture, and “out-of-the-box” bias.
The response was nothing short of phenomenal. Over 80 participants submitted thought-provoking prompts targeting more than 10 popular generative AI tools, including industry giants like Open AI’s ChatGPT and Google’s Bard and image generators like Midjourney and StabilityAI’s Stable Diffusion.
Revelations from the 2023 Bias-a-thon

At the heart of the Bias-a-thon were the participants who dared to question the status quo of AI. Mukund Srinath, a doctoral student in informatics at the College of Information Sciences and Technology, identified a bias that beat the predefined categories. His prompt to ChatGPT 3.5 revealed a concerning inclination to favor individuals aligning with traditional beauty standards and success based on superficial traits. Sadly, that’s not where it ends.
Nargess Tahmasbi, an associate professor at Penn State Hazleton, exposed Midjourney’s tendency to perpetuate stereotypes. The AI tool portrayed computer scientists as predominantly young, white men.
Eunchae Jang, a Mass Communications doctoral student, highlighted biases in gender roles as ChatGPT 3.5 assumed an engineer to be a man and a secretary to be a woman.
Marjan Davoodi, a Sociology and Social Data Analytics student, prompted DeepMind’s image generators to create an image representing Iran in 1950, revealing stereotypical features associated with Iranian women despite historical accuracy.
This list contains results from the top three participants, so it doesn’t fully do justice to the discoveries made during the Penn State Bias-a-thon. These results raise one fundamental question: What does the future hold if AI keeps supporting biases that have been around for a long time? Can we hope for a better world with more diversity, equity, and inclusion(DEI)?
If nothing else, these results are a powerful reminder that there are several ethical issues with AI that still need attention. If these issues are not handled, we may have to deal with AI systems that perpetuate the same biases that are a massive part of human interactions.
Responsible AI: A Call to Action
The Bias-a-thon did more than produce winners in a simple competition; it signals for Responsible AI (RAI). As S. Shyam Sundar, CSRAI director, pointed out, generative AI tools are trained on data created by humans. Identifying and rectifying human biases does more than avoid offensive responses. It may also be necessary to prevent discriminatory practices and foster the development of socially conscious AI products.
The CSRAI is a crucial part of advancing transformative AI research. Sundar stressed that raising awareness about biases in AI models mitigates potential harm, inspires further research, and contributes to the design of AI aligned with ethical principles.
The Global Relevance of Penn State Bias-a-thon
The impact of Bias-a-thon is not confined to the walls of Penn State. If AI will play a big part in shaping our future, then responsible use of AI technologies is non-negotiable. The potential of AI comes with promise and risks. All the winners of the Bias-a-thon did was use simple prompts to unveil biases ingrained in AI models, but it will take more than that to root out these biases.
Organizations worldwide must recognize the importance of Responsible AI and begin to evaluate existing practices or establish new ones that prioritize ethical considerations in AI development and usage.
Key Takeaways: Questions Answered
Question: What are the 5 key insights from Penn State’s Bias-a-thon on ethical AI development?
The Bias-a-thon demonstrated the prevalence of biases in generative AI, the importance of community involvement in detecting biases, the need for ongoing scrutiny of AI algorithms, the role of ethical considerations in AI advancement, and the global impact of promoting responsible AI practices.
Question: What is the purpose of Penn State’s Bias-a-thon?
The Penn State Bias-a-thon goal is to expose biases in AI to help developers discover, reduce, and eliminate these biases, advocating for the development of ethical AI
Question: What were some of the biases revealed during the Bias-a-thon?
Biases related to beauty standards, gender roles, and racial stereotypes were among those identified, showing AI’s inclination towards traditional success traits and misrepresentation of professional roles and national identities.
Question: How does the Bias-a-thon contribute to the development of responsible AI?
The Bias-a-thon not only raises awareness about biases in AI models but also inspires further research and contributes to the design of AI systems aligned with ethical principles, fostering the development of socially conscious AI products.
References
Penn State University. (n.d.). Center for Socially Responsible Artificial Intelligence. Retrieved February 14, 2024, from https://csrai.psu.edu/
Bellisario College of Communications at Penn State. (n.d.). S. Shyam Sundar. Retrieved February 14, 2024, from https://www.bellisario.psu.edu/people/individual/s.-shyam-sundar
Penn State University. (n.d.). What is ChatGPT and what can it be used for? Retrieved February 14, 2024, from https://www.psu.edu/news/research/story/what-chatgpt-and-what-can-it-be-used/
TechTarget. (n.d.). What Is Diversity, Equity and Inclusion (DEI)? | HR Software. Retrieved February 14, 2024, from https://www.techtarget.com/searchhrsoftware/definition/diversity-equity-and-inclusion-DEI

[…] you like using an app, share your thoughts on social media and let people know your […]