Keep Humans Involved In Artificial Intelligence

During the 1950s, Alan Turing proposed an experiment called the imitation game (now called the Turing test). In it, he posited a situation where someone—the interrogator—was in a room, separated from another room that had a computer and a second person. The goal of the test was for the interrogator to ask questions of both the person and the computer; the goal of the computer was to make the interrogator believe it was a human. Turing predicted that, eventually, computers would be able to mimic human behavior successfully and fool interrogators a high percentage of the time.

Turing’s prediction has yet to come to pass, and there’s a fair question of whether computers will ever truly pass the test. Still, the test is a useful lens for examining how people perceive the potential capabilities of artificial intelligence, and it is also a source of irony. Though AI has impressive capabilities, it also has limits. Today, it’s clear that no one fully understands the workings of the AI we create, and that lack of “explainability,” along with the absence of humans in the loop, causes problems and missed opportunities.

Whatever the future might hold, one thing is clear: Human decision-making must be kept in the loop of how AI functions. Leaving AI as a “black box” leads to biased decisions built on inherently biased algorithms, which can then lead to serious consequences.

Why AI Is Often a Black Box

There’s a general perception that people know more about and have more control over AI than they actually do. People believe that because computer scientists wrote and compiled the code, the code is both knowable and controllable. However, that isn’t necessarily the case.

AI can often be a black box: we don’t know exactly how its eventual outputs are constructed or what they will become. Once the code is set in motion, almost like a wheel rolling down a hill on its own momentum, it continues along, taking in information, adapting, and growing. The results are not always foreseeable or necessarily positive.

AI, while powerful, can be imprecise and unpredictable. There are multiple instances of AI failures, including serious car accidents, stemming from AI’s inability to interpret the world the way we expect it to. Many downsides arise because the origin of the code is human, but its progress is self-guided and unmoored. In other words, we know the code’s starting point, but not exactly how it has grown or where it is heading. There are serious questions about what is going on in the machine’s mind.

These questions are worth asking. Incidents such as car crashes are spectacular failures, but subtler ones, such as algorithm-driven flash trading, raise questions of their own. What does it mean to have set these programs in motion? What are the stakes of using these machines, and what safeguards must be put in place?

AI should be understandable, and end users should be able to adjust and control it. That dynamic begins with making AI explainable.

When AI Should Be Pressed for More Answers

Not all AI needs are created equal. In low-stakes situations, such as image recognition for noncritical purposes, it’s probably not necessary to understand how the programs work. In situations with important outcomes, however, including medical, hiring, and car safety decisions, it is critical to understand how the code operates and continues to develop, and to know where human input and intervention are needed. Additionally, because AI code is mainly written by educated men, according to (fittingly) the Alan Turing Institute, there’s a natural bias toward reflecting the experiences and worldviews of those coders.

Ideally, coding efforts in which the end goal implicates vital interests will focus on “explainability” and on clear points where the coder can intervene, either to take control or to adjust the program to ensure ethical and desirable performance. Further, those developing the programs, and those reviewing them, need to ensure the source inputs aren’t biased toward certain populations.

Why Focusing on ‘Explainability’ Can Help Users and Coders Refine Their Programs

“Explainability” is the key to making AI both reviewable and adjustable. Businesses, or other end users, must understand the program architecture and end goals to provide crucial context to developers on how they should tweak inputs and restrict specific outcomes. Today, there is a movement toward that end.
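
To make the idea concrete, here is a minimal, hypothetical sketch of what “explainability” can look like at the level of a single decision: a simple linear scoring model that surfaces each feature’s contribution so a reviewer can see why the program scored a case the way it did. The feature names and weights are invented for illustration and are not drawn from any real system.

```python
# Hypothetical sketch: surface each feature's contribution to one decision
# so a human reviewer can see why the score came out the way it did.
# Feature names and weights are invented for illustration.

FEATURE_WEIGHTS = {
    "years_experience": 0.8,
    "certifications": 0.5,
    "referral": 0.3,
}

def explain_score(applicant: dict) -> list[tuple[str, float]]:
    """Return each feature's contribution to the overall score, largest first."""
    contributions = [
        (name, weight * applicant.get(name, 0.0))
        for name, weight in FEATURE_WEIGHTS.items()
    ]
    return sorted(contributions, key=lambda item: abs(item[1]), reverse=True)

applicant = {"years_experience": 4, "certifications": 2, "referral": 1}
for feature, contribution in explain_score(applicant):
    print(f"{feature}: {contribution:+.2f}")
```

Real systems are far more complex, but the principle is the same: every automated score should come with a breakdown a human can inspect.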

New York City, for example, has implemented a law that requires a bias audit before employers can use AI tools to make hiring decisions. Under the law, independent reviewers must analyze the program’s code and process to report the program’s disparate impact on individuals based on immutable characteristics such as race, ethnicity, and sex. Using an AI program for hiring is specifically prohibited unless the results of that audit are displayed on the company’s website.
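
As a rough illustration, a bias audit of this kind often comes down to comparing selection rates across groups. The sketch below is hypothetical: the group names and counts are invented, and the 0.8 cutoff reflects the commonly cited “four-fifths rule” rather than anything specific to the New York law.

```python
# Hypothetical sketch of a bias-audit style check: compare each group's
# selection rate against the most-favored group's rate. The 0.8 threshold
# (the "four-fifths rule") and all data below are assumptions for illustration.

from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, was_selected) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's rate divided by the most-favored group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

outcomes = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
         + [("group_b", True)] * 25 + [("group_b", False)] * 75
for group, ratio in impact_ratios(selection_rates(outcomes)).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```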

When designing their products, programmers and companies should focus on anticipating external requirements such as those above and plan for downside protection in litigation where they may need to defend their products. Most importantly, programmers must focus on creating explainable AI because doing so benefits society.

AI built with “human in the loop” designs that can fully explain source components and code progressions will likely be necessary not only for ethical and business reasons but also for legal ones. Businesses would be wise to anticipate this need rather than retrofit their programs after the fact.
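
What “human in the loop” means in practice varies, but one common pattern is to let the program act automatically only when it is confident, and to route everything else to a person while keeping the inputs for later explanation. The sketch below assumes that pattern; the threshold, field names, and reviewer stub are all invented for illustration.

```python
# Hypothetical "human in the loop" sketch: the program acts on its own only
# when confidence clears a threshold; low-confidence cases go to a person.
# The threshold and names below are assumptions, not from any real product.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; tune per use case

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(inputs: dict, model_label: str, confidence: float,
           ask_human) -> Decision:
    """Accept the model's answer only when it is confident; otherwise defer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(model_label, confidence, decided_by="model")
    # Low confidence: hand the case, with its inputs, to a human reviewer.
    return Decision(ask_human(inputs), confidence, decided_by="human")

# Example: a stand-in reviewer that always answers "reject".
result = decide({"loan_amount": 25_000}, "approve", 0.72,
                ask_human=lambda inputs: "reject")
print(result)  # Decision(label='reject', confidence=0.72, decided_by='human')
```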

Why Developers Should Be Diverse and Representative of Broader Populations

Going a step beyond the need for “explainability,” the people creating the programs and their inputs must be diverse and must develop programs representative of the broader population. The more diverse the perspectives included, the more likely a true signal will emerge from the program. Research by Ascend Venture Capital, a VC firm that supports data-centric companies, found that even the giants of the AI and technology world, such as Google, Bing, and Amazon, have flawed processes. So there is continued work to be done on that frontier.

Promoting inclusiveness in AI must be a priority. Developers must proactively work with the communities they impact in order to build trust (such as when law enforcement uses AI for identification purposes). When people don’t understand the AI in their world, it creates a fear response, and that fear can cause a loss of valuable insight and feedback that would make programs better.

Ideally, programmers themselves are reflective of the broader population. At the very least, an aggressive focus must be placed on ensuring programs do not exclude or marginalize any users, intentionally or otherwise. In the rush to create cutting-edge technology, programmers must never lose sight of the fact that these tools are meant to serve people.

The Turing test might never be passed, and we might never see computers that can precisely match human capabilities. If that remains true, as it is today, then we must prioritize maintaining the human purpose behind AI: advancing our own interests. To do that, we must build explainable, controllable programs in which each step of the process can be examined and adjusted. Further, those programs must be developed by a diverse group of people whose lived experiences reflect the broader population. Accomplishing those two things will refine AI so that it continues to advance human interests while causing less harm.
