
Follow These 5 Principles to Make AI More Inclusive For All

Opinions expressed by Entrepreneur contributors are their own.

From generating just-for-fun images of the Pope to algorithms that help sort through job applications and ease the burden on hiring managers, artificial intelligence programs have taken the public consciousness and the business world by storm. However, it’s vital not to overlook the potentially deep-rooted ethical issues that come with them.

These breakthrough tech tools generate content by sourcing from existing data and other material, but if those sources are even partially the result of racial or gender bias, for example, AI will likely replicate that. For those of us who want to live in a world where diversity, equity and inclusion (DEI) are at the forefront of emerging technology, we should all be concerned with how AI systems are creating content and what impact their output has on society.

So, whether you’re a developer, the founder of an AI start-up or simply a concerned citizen like me, consider these principles, which can be integrated into AI apps and programs to ensure they create more ethical and equitable outputs.

Related: What Will It Take to Build a Truly Ethical AI? These 3 Tips Can Help

1. Create user-centric design

User-centric design ensures that the program you’re developing is inclusive of its users. This can include features like voice interactions and screen reader compatibility that help people with vision impairments. Speech recognition models, meanwhile, can be made more inclusive of different types of voices, such as women’s voices and accents from around the world.
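
To make that concrete, one simple practice is to evaluate a speech recognition model per speaker group rather than with a single global number. The sketch below is a minimal, illustrative Python example: the sample transcripts, group labels and simplified word error rate calculation are all assumptions for demonstration, not a real evaluation pipeline.

```python
# Minimal sketch: compare speech-to-text quality per speaker group.
# The sample data below is hypothetical; in practice you would load
# reference transcripts and model outputs from your own evaluation set.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length (simplified WER)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical evaluation samples: (speaker group, reference, model output).
samples = [
    ("US English, male",   "book a table for two",     "book a table for two"),
    ("US English, female", "book a table for two",     "book a cable for two"),
    ("Indian English",     "schedule my appointment",  "schedule my apartment"),
    ("Nigerian English",   "what is the weather",      "what is the water"),
]

by_group: dict[str, list[float]] = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))

# Report per-group error rates so gaps between groups are visible.
for group, scores in by_group.items():
    print(f"{group}: average WER = {sum(scores) / len(scores):.2f}")
```

If one group’s error rate is consistently higher than the others, that gap is a design problem to fix before shipping, not a footnote.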

Put simply, developers should pay close attention to whom their AI systems are addressing — make a point of thinking outside the group of engineers who created them. This is particularly vital if they and/or company entrepreneurs hope to scale products globally.

2. Build a diverse team of reviewers and decision-makers

The development team of an AI app or program is crucial, not just in its creation but from a review and decision-making perspective as well. A 2023 report published by the AI Now Institute of New York University described the lack of diversity at multiple levels of AI development. It included the remarkable statistics that at least 80% of AI professors are men and that fewer than 20% of AI researchers at the world’s top tech companies are women. Without the right checks, balances and representation in development, we run the serious risk of feeding AI programs dated and/or biased data that perpetuates unjust tropes about certain groups.

3. Audit data sets and create accountability structures

It’s not necessarily anyone’s direct fault if older data that perpetuates biases is present, but it is someone’s fault if that data isn’t regularly audited. To ensure AI produces the highest quality output with DEI in mind, developers need to carefully assess and analyze the information they’re using. They should be asking: How old is it? Where does it come from? What does it contain? Is it ethical and accurate in the current moment? Perhaps most importantly, data sets should help AI build a positive future for DEI rather than reproduce a biased past.
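
Those questions can be turned into a small, repeatable audit script. The sketch below is one possible starting point, assuming a tabular data set loaded with pandas; the column names (collected_at, source, gender) are hypothetical placeholders for whatever metadata your own data actually carries.

```python
# Minimal data-audit sketch using pandas (column names are hypothetical).
import pandas as pd

def audit(df: pd.DataFrame) -> None:
    # How old is it?
    if "collected_at" in df.columns:
        dates = pd.to_datetime(df["collected_at"], errors="coerce")
        print("Oldest record:", dates.min(), "| Newest record:", dates.max())

    # Where does it come from?
    if "source" in df.columns:
        print("Records per source:\n", df["source"].value_counts())

    # What does it contain, and who is represented?
    print("Missing values per column:\n", df.isna().sum())
    if "gender" in df.columns:
        print("Share of records by gender:\n",
              df["gender"].value_counts(normalize=True).round(2))

# Example with a tiny, made-up data set.
df = pd.DataFrame({
    "collected_at": ["2015-03-01", "2016-07-12", "2023-01-20"],
    "source": ["legacy_hr_system", "legacy_hr_system", "new_survey"],
    "gender": ["male", "male", "female"],
})
audit(df)
```

Running something like this on a schedule, and assigning a named owner to review the output, is what turns “audit the data” from an aspiration into an accountability structure.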

Related: These Entrepreneurs Are Taking on Bias in Artificial Intelligence

4. Collect and curate diverse data

If, after auditing the information an AI program is using, you notice inconsistencies, biases and/or prejudices, work to collect better material. This is easier said than done: Collecting data can take months, even years, but it’s well worth the effort.

To help fuel that process, if you’re an entrepreneur running an AI start-up and have the resources for research and development, launch projects in which team members generate new data representing diverse voices, faces and attributes. This will result in more suitable source material for apps and programs we can all benefit from, ultimately creating a brighter future that portrays people as multi-dimensional instead of one-sided or otherwise simplistic.
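
One way to make that collection effort measurable is to compare each group’s share of your data against a target distribution and flag the gaps new collection should close. The sketch below is purely illustrative: the groups, counts and target share are made-up assumptions, and the shortfall estimate is a rough guide only.

```python
# Minimal sketch: flag under-represented groups to prioritize new data collection.
# Counts and target shares below are made-up placeholders.
from collections import Counter

current_counts = Counter({
    "lighter-skinned male faces": 6000,
    "lighter-skinned female faces": 2500,
    "darker-skinned male faces": 1000,
    "darker-skinned female faces": 500,
})
target_share = 0.25  # aim for roughly equal representation across four groups

total = sum(current_counts.values())
for group, count in current_counts.items():
    share = count / total
    if share < target_share:
        # Shortfall relative to the current total; a rough planning figure.
        needed = int(target_share * total - count)
        print(f"{group}: {share:.0%} of data, consider collecting ~{needed} more samples")
    else:
        print(f"{group}: {share:.0%} of data, at or above target")
```

The output gives a concrete collection target per group, which is far easier to fund, schedule and track than a general commitment to “more diverse data.”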

Related: Artificial Intelligence Can Be Racist, Sexist and Creepy. Here Are 5 Ways You Can Counter This In Your Enterprise

5. Engage in AI ethics training on bias and inclusivity

As a DEI consultant and the proud creator of the LinkedIn course Navigating AI Through an Intersectional DEI Lens, I’ve learned the power of centering DEI in AI development and the positive ripple effects it has.

If you or your team are struggling to put together a list of associated to-dos for developers, reviewers and others, I recommend hosting corresponding ethics training, including online courses that can help you troubleshoot issues in real time.

Sometimes all you need is a trainer to walk you through the process and troubleshoot each issue one by one, creating a lasting result: more inclusive, diverse and ethical AI data and programs.

Related: The 6 Traits You Need To Succeed in the AI-Accelerated Workplace

Developers, entrepreneurs and others who care about reducing bias in AI should use their collective energy to train themselves to build teams of diverse reviewers who can check and audit data, and to focus on designs that make programs more inclusive and accessible. The result will be a landscape that represents a wider range of users, as well as better content.
