Bridging the Gap: Transforming Theoretical Commitments into Inclusive AI Practices

Artificial Intelligence

A term with which we have all become very familiar since the advent of ChatGPT. 

From aiding hiring decisions to crafting winning cover letters for that coveted job, AI is the helper many of us rely on but are reluctant to admit using. Yet while the AI community talks a great deal about inclusivity and responsibility, a vast chasm separates theoretical AI principles from real-world practice.

Consider this: the Gender Shades study, led by MIT researcher Joy Buolamwini, found that commercial facial analysis systems misclassified darker-skinned women at error rates of up to 35%, compared with less than 1% for lighter-skinned men. This discrepancy stems from a pervasive lack of diversity in AI teams and training data, which leads to biased outcomes.
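Detecting this kind of disparity requires nothing exotic: it means evaluating a model's error rate separately for each demographic subgroup rather than only in aggregate. Below is a minimal Python sketch of such a disaggregated audit; the group labels and records are hypothetical placeholders.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group error rates from (group, predicted, actual) triples.

    A single aggregate accuracy number can hide large disparities; breaking
    the error rate out by subgroup, as the Gender Shades audit did, makes
    them visible.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records: (subgroup, model prediction, ground truth).
records = [
    ("darker-skinned women", "male", "female"),
    ("darker-skinned women", "female", "female"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
]

for group, rate in error_rates_by_group(records).items():
    print(f"{group}: {rate:.0%} error rate")
```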

Further exacerbating the issue is the tendency of many organizations to treat AI ethics as an afterthought rather than an integral part of design. The challenge, therefore, lies in acknowledging the importance of inclusivity in AI and in actualizing these principles into equitable AI design, development, and deployment. 

Renee Cummings, an AI ethicist and the founder of Urban AI, says:

“So much of AI and data science is about civil rights. And when we think about Black History Month, we think about legacy, and the American legacy that changed the world. As we think about AI, it’s that an algorithm can create a legacy.”

This article delves deeper into these challenges and explores potential solutions to bridge this gap. 

Join us as we navigate the path toward truly inclusive AI practices.

[Image: two translucent digital screens displaying overlapping lines of code, set against a soft purple background. Photo by Google DeepMind on Pexels.]

What is Inclusive AI?

The term inclusive AI has no single, settled definition. Broadly, though, it describes AI designed to account for diverse needs and to benefit society as a whole, including minority, marginalized, and underrepresented groups. Inclusive AI aims to diminish bias and discrimination within AI systems and their outcomes, and to reduce inequality both in access to these systems and in the digital literacy needed to use them. It must be non-discriminatory in its production, unbiased in its consequences, and accessible to all.

Therefore, an AI project can only be deemed ‘inclusive’ if it is conceived, designed, and built by teams representing society.

The Need for Responsible and Inclusive AI

The responsible and inclusive development of AI systems is critical yet complex. AI’s unprecedented capabilities create opportunities to expand access and empower marginalized groups across healthcare, education, and more. In May 2023, for instance, a 40-year-old Dutch man who had been paralyzed for 12 years after a cycling accident regained the ability to walk. The breakthrough relied on implanted technology that uses AI to decode his movement intentions from brain signals and relay them to his nervous system.

However, neglecting ethical considerations poses grave dangers of embedded biases, exclusion, and discrimination that could profoundly harm vulnerable populations. While frameworks exist, most remain voluntary, and concrete accountability remains elusive. 

A case in point is the Cambridge Analytica scandal, in which data harvested from millions of Facebook users fueled algorithmic voter targeting in political campaigns, including the 2016 US presidential race and the Brexit referendum.

Sustained effort is required to ensure AI’s development mirrors humanity’s diversity, preventing harm while spreading benefits equitably.

On the other hand, thoughtful oversight and inclusion of diverse perspectives could enable AI to democratize opportunity. Companies are increasingly recognizing responsible AI as vital for trust and adoption. 

Google and Microsoft, for instance, have been making strides in leveraging responsible AI, ensuring their AI systems are explainable, transparent, and accountable. Still, binding governance and ongoing collaboration among stakeholders are essential to upholding ethical principles in practice, not just in theory. 

Exploring the Limitations of Theoretical Frameworks

Theoretical frameworks for inclusive AI, though crucial, face several limitations. The field grapples with the challenge of translating broad principles into actionable practices, and the prevailing focus on speed, efficiency, and profit often conflicts with the time and resources that ethical evaluation requires.

Most existing ethical frameworks are principled approaches, yet challenges arise in making them actionable. We need more than just principles to guarantee ethical AI and provide clear guidance on ensuring transparency in AI systems. As AI technology evolves and new use cases emerge, principles that were applicable in the past may not suffice to address contemporary ethical and social issues.
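One widely adopted way to turn the transparency principle into something actionable is to publish a model card alongside each model: structured documentation of its intended use, training data, and known limitations. The sketch below is minimal and illustrative; the model name and field values are hypothetical.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    """A minimal model card: structured transparency documentation."""
    model_name: str
    intended_use: str
    training_data: str
    evaluated_groups: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

# Hypothetical example for a resume-screening model.
card = ModelCard(
    model_name="resume-screener-v2",
    intended_use="Rank resumes for recruiter review; not for automated rejection.",
    training_data="Anonymized job applications, 2018-2023.",
    evaluated_groups=["gender", "age bracket", "disability status"],
    known_limitations=["Under-tested on non-English resumes."],
)
print(json.dumps(asdict(card), indent=2))
```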

Several high-profile incidents highlight the significance of managing AI-related risks and the necessity for robust AI governance: shortly after launch, Microsoft’s Bing chatbot went off the rails with hostile, erratic responses, and Italy’s privacy regulator temporarily banned ChatGPT over data-protection concerns.

These instances underscore the urgent need for practical solutions and implementations. The limitations of existing theoretical frameworks for Inclusive AI emphasize that the journey towards truly inclusive AI is not just about creating principles but bringing them to life. As we continue to innovate, we must ensure that our technological advancements align with our commitment to inclusivity and responsibility.

To make AI inclusive, developers should involve diverse stakeholders throughout the development process. This ensures that AI systems are designed with a broad range of experiences and perspectives in mind, which helps mitigate bias. Diversity and inclusivity matter because they make AI systems fairer, more equitable, and beneficial to all; considering them during development helps prevent the perpetuation of societal biases and ensures that AI’s benefits are equitably distributed.

AI and Society

AI holds the potential to revolutionize various sectors. For example, in healthcare, AI can aid in diagnosing diseases, personalizing treatment plans, and predicting patient outcomes. AI can offer personalized learning experiences in education, enabling students to learn at their own pace. However, the ascent of AI also poses significant challenges. 

One of the most prominent concerns is the potential for job displacement due to automation. Additionally, issues related to privacy and security have come to the forefront, as AI systems often rely on large amounts of data, raising concerns about data protection and misuse.

Inclusive AI plays a crucial role in mitigating these negative impacts and enhancing the positive ones. By ensuring that AI systems are designed and developed with diverse perspectives and experiences, we can help prevent biases in AI outputs, making these systems more fair and equitable. 

Inclusive AI also emphasizes making AI systems accessible and beneficial to all, regardless of demographic background. This helps ensure that the benefits of AI are distributed equitably, contributing to a more inclusive and just society.

The Current State of AI Practices

Building on the potential of AI to revolutionize various sectors, current AI practices are evolving to address challenges and enhance positive impacts.

Current AI Practices

  1. Generative AI: This form of AI, capable of generating new content, is becoming more prevalent. It is employed in various sectors, from education to business management.
  2. Foundation Models: Large-scale models trained on diverse internet text. They are designed to be fine-tuned for a wide range of tasks.
  3. Multimodal AI: This form of AI, which combines different forms of data input (such as text, audio, and images), is gaining prominence. It can assist in diagnosing diseases, predicting patient outcomes, and providing personalized learning experiences.
  4. Model Optimization: Techniques such as quantization, pruning, and distillation are making trained models smaller and cheaper to run, helping AI reach users without high-end hardware (see the sketch after this list).
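As a concrete illustration of the optimization point, post-training dynamic quantization shrinks a model’s linear layers to 8-bit integers, cutting memory use and often speeding up CPU inference. A minimal PyTorch sketch, with a toy model standing in for a real network:

```python
import torch
import torch.nn as nn

# A toy model standing in for a larger trained network.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Dynamic quantization stores Linear weights as 8-bit integers, reducing
# memory and often accelerating CPU inference -- one way model
# optimization widens access to AI on commodity hardware.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```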

Inclusivity in AI Practices

Inclusive AI plays a pivotal role in contemporary AI practices, emphasizing the importance of making AI systems accessible to all and ensuring they are designed and developed with diverse perspectives and experiences.

Here are some ways AI practices are becoming more inclusive:

  1. Human-Centered Design: AI systems prioritize the user experience, ensuring they are accessible and beneficial to all.
  2. Data Diversity: Ensuring that the data used to train AI models is diverse and representative of all potential use cases and end users, which helps prevent biased outputs (see the sketch after this list).
  3. Ethical Considerations: There’s a growing recognition that ethical considerations should precede the mainstream adoption of AI tools, contributing to a more inclusive and just society.
  4. Inclusive Design Practices: These practices help developers understand and address potential barriers that could unintentionally exclude people, making these systems more fair and equitable.
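The data-diversity point is checkable in practice: before training, audit how well each subgroup is represented in the training set. A minimal sketch, where the group labels and the 10% threshold are illustrative assumptions rather than a standard:

```python
from collections import Counter

def representation_report(group_labels, min_share=0.10):
    """Flag subgroups that fall below a minimum share of the training data.

    The 10% threshold is an illustrative policy choice, not a standard;
    a real audit would compare against the population the system serves.
    """
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {
        group: (count / total,
                "OK" if count / total >= min_share else "UNDERREPRESENTED")
        for group, count in counts.items()
    }

# Hypothetical demographic labels attached to training examples.
labels = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
for group, (share, status) in representation_report(labels).items():
    print(f"{group}: {share:.0%} {status}")
```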

Challenges in Implementing Inclusive AI

Implementing Inclusive AI is a complex process that involves translating theoretical commitments into practical decisions. This process is fraught with technological and societal challenges, which can hinder the inclusion of human values in AI.

Technologically, AI systems are intricate and demand deep expertise to develop and maintain. This complexity can pose a significant barrier for organizations that lack a solid technical background. For example, teachers may require assistance integrating AI tools into their teaching methods, which limits the effectiveness of these tools in enhancing learning outcomes. This highlights the problem of inclusion in AI systems: not all users can fully utilize and benefit from AI technologies.

Another significant challenge lies in the data used to train AI systems. AI systems are only as good as the data they’re trained on. The AI system may produce biased results if this data is not diverse and representative of all potential use cases and end users. This underscores the importance of ethics in data science and artificial intelligence, where ethical concerns with AI, such as data privacy and algorithmic bias, must be addressed to ensure responsible artificial intelligence.

Cultural differences can also pose a challenge. AI systems may not consider cultural nuances, leading to misunderstandings or misinterpretations. Despite these challenges, numerous examples of efforts to implement inclusive AI exist. For instance, a case study in Japan explored the use of AI in inclusive education and found that while AI could support diverse learners, there were challenges in terms of technological and pedagogical aspects, dataset limitations, and cultural differences.

The Power of Diverse Teams

The impact of diverse teams on AI development is profound and multifaceted. Diverse teams bring varied perspectives, experiences, and ideas to the table, leading to more innovative solutions and a more comprehensive understanding of complex problems. This diversity is essential in AI, a technology with the potential to affect people from all walks of life.

When it comes to AI development, a diverse team ensures that resulting AI systems are inclusive and equitable. Such a team is more likely to consider a broad range of use cases, resulting in AI systems that are universally applicable and less prone to harmful biases. This represents a crucial aspect of responsible artificial intelligence.

Diverse teams also address ethical concerns in AI. They help ensure that the data used to train AI systems represents all potential users, reducing the risk of bias, and they bring varied ethical perspectives to decisions about how AI systems should be designed and used.

Several real-world examples illustrate the power of diverse teams in AI development. For instance, research at MIT Lincoln Laboratory suggests that training an AI model with mathematically “diverse” teammates improves its ability to collaborate with other AI it has never worked with before.

Another example is research from Columbia University, where roughly 400 AI engineers built algorithms that made more than 8.2 million predictions about 20,000 people. The study found that prediction errors were correlated among engineers from similar demographic backgrounds, suggesting that demographically diverse teams produce less correlated errors and, in combination, less biased systems.

Diverse teams help to ensure that AI systems are inclusive, equitable, and ethically sound. This emphasizes the importance of using artificial intelligence to promote diversity and include human values in AI. It also highlights the need for ongoing efforts to maintain diversity in AI teams, addressing the problems of inclusion in AI systems and leveraging AI for diversity and inclusion.

Comparison of AI Laws in the EU and US

| Feature | AI laws in the EU | AI laws in the US |
| --- | --- | --- |
| Approach | Comprehensive, risk-based | Patchwork, sector-specific |
| Legal status | Regulation (law) | No overarching federal law; some state laws and agency guidelines |
| Focus | Safety, transparency, fairness, non-discrimination, environmental impact | Varies by agency/state; often focuses on specific risks or applications |
| Risk classification | High, medium, low, unacceptable | None |
| High-risk AI | Banned (e.g., social scoring) or subject to strict compliance | Varies by agency/state; some address high-risk areas (e.g., facial recognition) |
| Transparency | Requirements for explainability and user information | Limited requirements; varies by agency/state |
| Accountability | Clear liability framework; human oversight required | Unclear and fragmented; varies by agency/state |
| Inclusivity focus | Explicitly prohibits discrimination based on protected characteristics | Some laws address specific forms of discrimination (e.g., in employment); no comprehensive framework |
| Data bias mitigation | Requires measures to address data bias and fairness | Limited requirements; focus on preventing specific discriminatory outcomes |
| Enforcement | Independent oversight body; significant fines | Varies by agency/state; often limited enforcement power |

While the EU’s AI Act takes a promising step towards inclusivity through its transparency, non-discrimination, and data bias mitigation requirements, its effectiveness relies on enforcement and interpretation. 

The US, lacking a comprehensive framework, addresses inclusivity incrementally through scattered laws, primarily tackling specific forms of discrimination, leaving broader issues such as algorithmic bias unaddressed. Both approaches face challenges in guaranteeing meaningful inclusivity in AI development and deployment.

Successful Inclusive AI Practices

Several examples of successful inclusive AI practices have made a significant impact. These practices have demonstrated AI’s capability for diversity and inclusion, offering valuable lessons for future AI development.

One such example is the work carried out by Microsoft’s Inclusive Design team. They partnered with research, engineering, and legal groups across the company to bridge the gap between high-level principles and everyday practice in AI development. 

Their approach to inclusive AI involved a complete shift in mindset throughout the development process, taking into account every crucial decision in the build process. 

Another illustration is the strategy adopted by Appen, a company that provides human-annotated data for machine learning and AI. They stress the significance of data diversity and continuous monitoring and retraining in the AI life cycle. 

Their approach to achieving responsible artificial intelligence involves considering all potential use cases and end users, ensuring their AI model performs equitably for each user group.

These examples highlight two important lessons for AI-inclusive practices:

  1. Redefine bias as a spectrum: Rather than focusing solely on extreme cases, teams should treat bias as a spectrum that can appear subtly in everyday experiences. This lets teams catch bias issues earlier and design accordingly (see the sketch after this list).
  2. Enlist customers to correct bias: Training is vital for building more inclusive AI, yet AI development often happens behind closed doors, restricted to input from teams that may not adequately represent the diverse customers they design for. Enlisting customers can help correct bias and ensure that AI systems are genuinely inclusive.
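Treating bias as a spectrum has a direct computational analogue: report a continuous disparity score and track it over time, rather than raising a flag only in extreme cases. A minimal sketch using the demographic parity difference; the outcome data are hypothetical:

```python
def demographic_parity_difference(outcomes_by_group):
    """Return the gap between the highest and lowest positive-outcome rates.

    0.0 means every group receives positive outcomes at the same rate;
    the larger the value, the further along the bias spectrum the
    system sits.
    """
    rates = {
        group: sum(outcomes) / len(outcomes)
        for group, outcomes in outcomes_by_group.items()
    }
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval outcomes (1 = approved) per group.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}
gap, rates = demographic_parity_difference(outcomes)
print(rates)                                        # {'group_a': 0.75, 'group_b': 0.375}
print(f"demographic parity difference: {gap:.2f}")  # 0.38
```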

These successful practices demonstrate the use of AI for diversity and inclusion, offering a benchmark for responsible AI. They underscore the importance of a comprehensive and inclusive approach to AI development, taking every decision in the build process into account, and incorporating diverse perspectives.

Recommendations to Ensure Responsible AI

In pursuing Inclusive AI, bridging the gap between theory and practice is essential. 

Here are some suggestions that can help make AI more accessible to all:

  1. Interdisciplinary Collaboration: Encourage collaboration among AI developers, ethicists, sociologists, and representatives from diverse user groups. This ensures a holistic approach to AI development that weighs technical feasibility, social impact, and ethical implications.
  2. Public-Private Partnerships: Encourage governments, academic institutions, and private organizations to work together to advance the development and use of responsible AI. This can help set industry standards, spread best practices, and ensure accountability.
  3. Education and Training: Invest in education and training to develop a workforce skilled in AI and aware of its ethical implications. This includes technical training as well as education about the social and ethical aspects of AI.
  4. User Empowerment: Develop tools and platforms enabling users to customize and control how AI systems interact with them, making AI systems more user-friendly and inclusive (see the sketch after this list).
  5. Policy and Regulation: Advocate for policies and regulations that promote AI transparency, accountability, and fairness. This can help address ethical concerns and ensure that AI systems are developed and used responsibly.
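The user-empowerment recommendation can be made concrete with per-user controls that AI features must consult before acting. The sketch below is hypothetical, not any real platform’s API; the defaults illustrate an opt-in rather than opt-out posture:

```python
from dataclasses import dataclass

@dataclass
class AIPreferences:
    """Hypothetical per-user controls an AI feature must respect."""
    allow_personalization: bool = False  # opt-in, not opt-out
    allow_data_retention: bool = False
    explanations_required: bool = True   # show why a result was chosen

def recommend(prefs: AIPreferences, generic_items, personalized_items):
    """Serve personalized results only if the user has opted in."""
    items = personalized_items if prefs.allow_personalization else generic_items
    if prefs.explanations_required:
        basis = "your history" if prefs.allow_personalization else "overall popularity"
        return [(item, f"recommended based on {basis}") for item in items]
    return [(item, "") for item in items]

prefs = AIPreferences()  # defaults: no personalization, explanations on
print(recommend(prefs, ["intro course"], ["advanced course"]))
```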

The Inclusive AI’s Two Cents

While there are significant challenges in implementing inclusive AI, they also present opportunities for innovation and improvement. This embodies inclusion in the age of AI: technologies designed and developed to be accessible and beneficial to all, regardless of demographic background.

However, it’s important to note that while strides are being made towards inclusivity in AI, there is still a long way to go. The field grapples with issues such as bias in AI algorithms and the need for more diverse representation in AI development teams. 

The path forward lies in actions, not in words.

Who are we?

This is where The Inclusive AI (TIA) comes in. As a pioneer in AI inclusivity, TIA is championing diversity and inclusion in the AI landscape. Our commitment extends to Policy Advocacy, Literacy Training, and Application Reviews, ensuring AI aligns with inclusivity. We are on a mission to pave the way for a more accessible and equitable AI future, driving positive change and making artificial intelligence a force for inclusivity globally.

At TIA, the goal is to be a global leader in shaping an AI landscape that prioritizes ethical practices, accessibility, and universal benefits for every individual and community. Using TIA’s services, organizations can reduce AI bias in their workplace and contribute to a more inclusive and just society.

References

McKinsey & Company. (2018, June 1). AI, automation, and the future of work: Ten things to solve for. https://www.mckinsey.com/featured-insights/future-of-work/ai-automation-and-the-future-of-work-ten-things-to-solve-for

Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A., Almohareb, S. N., Aldairem, A., Alrashed, M., Saleh, K. B., Badreldin, H. A., Yami, M. S. A., Harbi, S. A., & Albekairy, A. (2023). Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Medical Education, 23(1). https://doi.org/10.1186/s12909-023-04698-z

Buolamwini, J. (2019, February 7). Artificial intelligence has a problem with gender and racial bias. Here’s how to solve it. TIME. Retrieved February 11, 2024, from https://time.com/5520558/artificial-intelligence-racial-gender-bias/

Callahan, C. (2023, December 4). How AI regulation differs in the U.S. and EU. Digiday. https://digiday.com/marketing/how-ai-regulation-differs-in-the-u-s-and-eu/

Cowgill, B., Dell’Acqua, F., Deng, S., Hsu, D., Verma, N., & Chaintreau, A. (2020). Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics. In SSRN. Columbia University. Retrieved February 11, 2024, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3615404

De Vynck, G., Lerman, R., & Tiku, N. (2023, February 17). Microsoft’s AI chatbot is going off the rails. Washington Post. https://www.washingtonpost.com/technology/2023/02/16/microsoft-bing-ai-chatbot-sydney/

Engler, A. (2023, April 25). The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment. Brookings. Retrieved February 11, 2024, from https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/

Foy, K. (2022, May 25). Is diversity the key to collaboration? New AI research suggests so. MIT News | Massachusetts Institute of Technology. Retrieved February 11, 2024, from https://news.mit.edu/2022/is-diversity-key-to-collaboration-0525

Goujard, C. (2023, April 3). Italian privacy regulator bans ChatGPT. POLITICO. https://www.politico.eu/article/italian-privacy-regulator-bans-chatgpt/

Greene, R. T. (2023, April 24). The pros and cons of using AI in learning: Is ChatGPT helping or hindering learning outcomes? eLearning Industry. https://elearningindustry.com/pros-and-cons-of-using-ai-in-learning-chatgpt-helping-or-hindering-learning-outcomes

Haidar, A. (2023). An integrative theoretical framework for responsible artificial intelligence. International Journal of Digital Strategy, Governance, and Business Transformation, 13(1), 1–23. https://doi.org/10.4018/ijdsgbt.334844

Inclusive Design. (2019, August 22). In pursuit of inclusive AI – Microsoft Design. Medium. https://medium.com/microsoft-design/in-pursuit-of-inclusive-ai-eb73f62d17fc

Khan, S. (2022, November 8). How can AI support diversity, equity and inclusion? World Economic Forum. Retrieved February 11, 2024, from https://www.weforum.org/agenda/2022/03/ai-support-diversity-equity-inclusion/

Lawton, G. (2023, November 1). Generative AI ethics: 8 biggest concerns and risks. Enterprise AI. https://www.techtarget.com/searchenterpriseai/tip/Generative-AI-ethics-8-biggest-concerns

Lo Piano, S. (2020). Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward. Humanities and Social Sciences Communications, 7(1). https://doi.org/10.1057/s41599-020-0501-9

Luzniak, K. (2023, March 28). Responsible AI – What Is It? Examples from the Business World. Neoteric. https://neoteric.eu/blog/responsible-ai-what-is-it-business-examples/

Manyika, J., Silberg, J., & Presten, B. (2022, November 17). What do we do about the biases in AI? Harvard Business Review. Retrieved February 11, 2024, from https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

Moulakakis, V. (2023, April 1). Ethical AI: Addressing cultural differences and human rights challenges. https://www.linkedin.com/pulse/ethical-ai-addressing-cultural-differences-human-vassilios-moulakakis

Rouhiainen, L. (2019, October 14). How AI and data could personalize higher Education. Harvard Business Review. https://hbr.org/2019/10/how-ai-and-data-could-personalize-higher-education

Sharma, P. (2023, December 19). AI By All: Embracing diversity for ethical AI and sustainable digital transformation. IndiaTimes. Retrieved February 11, 2024, from https://www.indiatimes.com/technology/news/embracing-diversity-for-ethical-ai-623878.html

Toyokawa, Y., Horikoshi, I., Majumdar, R., & Ogata, H. (2023). Challenges and opportunities of AI in inclusive education: a case study of data-enhanced active reading in Japan. Smart Learning Environments, 10(1). https://doi.org/10.1186/s40561-023-00286-2

Transforming Data With Intelligence. (2022, August 24). Home. TDWI. https://tdwi.org/articles/2022/08/24/appen-state-of-ai-ml-report.aspx

Trotta, A., Ziosi, M., & Lomonaco, V. (2023). The future of ethics in AI: challenges and opportunities. AI & SOCIETY, 38(2), 439–441. https://doi.org/10.1007/s00146-023-01644-x

Turner Lee, N., Resnick, P., & Barton, G. (2019, May 22). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings. Retrieved February 11, 2024, from https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/

West, M., Kraut, R., & Ei Chew, H. (2019). I’d blush if I could: closing gender divides in digital skills through education. UNESCO. https://doi.org/10.54675/rapc9356

Women’s Forum for the Economy & Society (A Publicis Groupe company). (2021, October 29). What is Inclusive AI? A perspective from the Women’s Forum. https://www.linkedin.com/pulse/what-inclusive-ai-perspective-from-
