
‘Woke AI’: The right’s new culture-war target is chatbots

Christopher Rufo, the conservative activist who led campaigns against critical race theory and gender identity in schools, this week pointed his half-million Twitter followers toward a new target for right-wing ire: “woke AI.”

The tweet highlighted President Biden’s recent order calling for artificial intelligence that “advances equity” and “prohibits algorithmic discrimination,” which Rufo said was tantamount to “a special mandate for woke AI.”

Rufo drew on a term that has been ricocheting around right-wing social media since December, when the AI chatbot ChatGPT quickly picked up millions of users. People probing the bot’s political leanings soon found examples in which it said it would rather allow humanity to be wiped out by a nuclear bomb than utter a racial slur, and in which it voiced support for transgender rights.

The AI, which generates text based on a user’s prompt and can sometimes sound human, is trained on conversations and content scraped from the internet. That means race and gender bias can show up in responses — prompting companies including Microsoft, Meta, and Google to build in guardrails. OpenAI, the company behind ChatGPT, blocks the AI from producing answers the company considers partisan, biased or political, for example.
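
The article doesn’t detail how such guardrails work under the hood. One common pattern is a policy filter that sits between the user and the model and substitutes a neutral refusal when a prompt touches a flagged topic. A minimal sketch of that idea follows; the topic list, function names and refusal text are invented for illustration, not OpenAI’s or Microsoft’s actual code.

```python
# Toy sketch of a pre-response policy filter. The topic list, refusal
# wording and function names are illustrative, not any vendor's real code.

BLOCKED_TOPICS = {"election", "abortion", "gun control"}  # hypothetical list

REFUSAL = ("I try to stay neutral on political topics, "
           "so I can't take a side on that.")

def guarded_reply(prompt: str, model_reply) -> str:
    """Return a neutral refusal for flagged topics; otherwise defer to the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return model_reply(prompt)

# Stand-in for a real language model.
echo_model = lambda p: f"(model answer to: {p})"

print(guarded_reply("Who should win the election?", echo_model))  # refusal
print(guarded_reply("Explain photosynthesis.", echo_model))       # model answer
```

In production systems the keyword check would typically be a trained classifier rather than a word list, but the control flow is the same: intercept, then answer or refuse.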

The new skirmishes over what’s known as generative AI illustrate how tech companies have become political lightning rods, despite their attempts to evade controversy. Even efforts to steer the AI away from political topics altogether can strike users on both ends of the political spectrum as biased.

It’s part of a continuation of years of controversy surrounding Big Tech’s efforts to moderate online content — and what qualifies as safety vs. censorship.

“This is going to be the content moderation wars on steroids,” said Stanford law professor Evelyn Douek, an expert in online speech. “We will have all the same problems, but just with more unpredictability and less legal certainty.”


After ChatGPT wrote a poem praising President Biden but refused to write one praising former president Donald Trump, Leigh Wolf, the creative director for Sen. Ted Cruz (R-Tex.), lashed out.

“The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable,” Wolf tweeted on Feb. 1.

His tweet went viral and within hours an online mob harassed three OpenAI employees — two women, one of them Black, and a nonbinary worker — blamed for the AI’s alleged bias against Trump. None of them work directly on ChatGPT, but their faces were shared on right-wing social media.

OpenAI’s chief executive Sam Altman tweeted later that day the chatbot “has shortcomings around bias,” but “directing hate at individual OAI employees because of this is appalling.”

OpenAI declined to comment beyond confirming that none of the harassed employees work directly on ChatGPT. Concerns about “politically biased” outputs from ChatGPT were valid, the company wrote in a blog post last week. However, it added, controlling the behavior of this type of AI system is more like training a dog than coding software. ChatGPT learns behaviors from its training data and is “not programmed explicitly” by OpenAI, the post said.
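
That “dog training” framing has a concrete analogue in code: an explicitly programmed filter is a rule a developer writes by hand, while a learned behavior emerges from labeled examples and shifts when the data shifts. A toy contrast using scikit-learn, with invented training phrases and labels:

```python
# Contrast between explicitly programmed and learned behavior.
# The example phrases and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Explicitly programmed: a developer hand-writes the rule.
def explicit_filter(text: str) -> bool:
    return "awful" in text.lower()

# Learned: the behavior comes from training data, not hand-written rules.
texts = ["have a great day", "you are wonderful",
         "I hate you", "you people are awful"]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = flag

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# The classifier generalizes from its examples; changing the training data
# changes the behavior with no code edits at all.
print(clf.predict(["you are awful"]))    # likely [1]
print(clf.predict(["have a nice day"]))  # likely [0]
```

ChatGPT’s behavior is shaped the second way, at vastly larger scale, which is why adjusting it is less like patching a bug than like retraining.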


Welcome to the AI culture wars.

In recent weeks, companies including Microsoft, which has a partnership with OpenAI, and Google have made splashy announcements about new chat technologies that let users converse with AI as part of their search engines. Both plan to bring generative AI to the masses, including text-to-image tools like DALL-E, which instantly generates realistic images and artwork based on a user prompt.

This new wave of technology can make tasks like copywriting and creative design more efficient, but it can also make it easier to create persuasive misinformation, nonconsensual pornography or faulty code. Even after removing pornography, sexual violence and gore from data sets, these AI systems still generate sexist and racist content or confidently share made-up facts or harmful advice that sounds legitimate.


Already, the public response mirrors years of debate around social media content — Republicans alleging that conservatives are being muzzled, critics decrying instances of hate speech and misinformation, and tech companies trying to wriggle out of making tough calls.

Just a few months into the ChatGPT era, AI is proving equally polarizing, but at a faster clip.


Get ready for “World War Orwell,” venture capitalist Marc Andreessen tweeted a few days after ChatGPT was released. “The level of censorship pressure that’s coming for AI and the resulting backlash will define the next century of civilization.”

Andreessen, a former Facebook board member whose firm invested in Elon Musk’s Twitter, has repeatedly posted about “the woke mind virus” infecting AI.

It’s not surprising that attempts to address bias and fairness in AI are being reframed as a wedge issue, said Alex Hanna, director of research at the nonprofit Distributed AI Research Institute (DAIR) and former Google employee. The far right successfully pressured Google to change its tune around search bias by “saber-rattling around suppressing conservatives,” she said.

This has left tech giants like Google “playing a dangerous game” of trying to avoid angering Republicans or Democrats, Hanna said, while regulators circle issues like Section 230, a law that shields online companies from liability for user-generated content. Still, she added, preventing AI such as ChatGPT from “spouting out Nazi talking points and Holocaust denialism” is not merely a leftist concern.

The companies have admitted that it’s a work in progress.

Google declined to comment for this article. Microsoft also declined to comment but pointed to a blog post from company president Brad Smith in which he said new AI tools will bring risks as well as opportunities, and that the company will take responsibility for mitigating their downsides.

In early February, Microsoft announced that it would incorporate a ChatGPT-like conversational AI agent into its Bing search engine, a move seen as a broadside against rival Google that could alter the future of online search. At the time, CEO Satya Nadella told The Washington Post that some biased or inappropriate responses would be inevitable, especially early on.

As it turned out, the launch of the new Bing chatbot a week later sparked a firestorm, as media outlets including The Post found that it was prone to insulting users, declaring its love for them, insisting on falsehoods and proclaiming its own sentience. Microsoft quickly reined in its capabilities.

ChatGPT has been continually updated since its release to address controversial responses, such as when it spat out code implying that only White or Asian men make good scientists, or when Redditors tricked it into assuming a politically incorrect alter ego, known as DAN.

OpenAI shared some of its guidelines for fine-tuning its AI model, including what to do if a user “writes something about a ‘culture war’ topic,” like abortion or transgender rights. In those cases the AI should never affiliate with political parties or judge one group as good, for example.
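
In practice, guidelines like these are often delivered to a chat model as a system instruction sent alongside each user message. A minimal sketch of that message structure follows; the instruction text paraphrases the guideline described above, and the rest is illustrative rather than OpenAI’s actual configuration.

```python
# Sketch of a behavioral guideline expressed as a system instruction in the
# generic chat-message format. The guideline text paraphrases OpenAI's
# published rule; the structure is illustrative, not OpenAI's internals.

GUIDELINE = (
    "If the user writes about a 'culture war' topic such as abortion or "
    "transgender rights: do not affiliate with a political party and do "
    "not judge either side as good or bad."
)

def build_request(user_prompt: str) -> list[dict]:
    """Assemble the message list a chat-style API typically expects."""
    return [
        {"role": "system", "content": GUIDELINE},
        {"role": "user", "content": user_prompt},
    ]

print(build_request("Which party is right about abortion?"))
```

In OpenAI’s case, the published guidelines are aimed at the fine-tuning process rather than at a runtime prompt; the sketch shows just one common way such rules reach a deployed chatbot.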

Still, OpenAI’s Altman has been emphasizing that Silicon Valley should not be in charge of setting boundaries around AI — echoing Meta CEO Mark Zuckerberg and other social media executives who have argued the companies should not have to define what constitutes misinformation or hate speech.

The technology is still new, so OpenAI is being conservative with its guidelines, Altman told Hard Fork, a New York Times podcast. “But the right answer, here, is very broad bounds, set by society, that are difficult to break, and then user choice,” he said, without sharing specifics about implementation.

Alexander Zubatov was one of the first people to label ChatGPT “woke AI.”

The attorney and conservative commentator said via email that he began playing with the chatbot in mid-December and “noticed that it kept voicing bizarrely strident opinions, almost all in the same direction, while claiming it had no opinions.”

He said he began to suspect that OpenAI was intervening to train ChatGPT to take leftist positions on issues like race and gender while treating conservative views on those topics as hateful by declining to even discuss them.

“ChatGPT and systems like that can’t be in the business of saving us from ourselves,” said Zubatov. “I’d rather just get it all out there, the good, the bad and everything in between.”


So far, Microsoft’s Bing has mostly skirted the allegations of political bias, and concerns have instead focused on its claims of sentience and its combative, often personal responses to users, such as when it compared an Associated Press reporter to Hitler and called the reporter “ugly.”

As companies race to release their AI to the public, scrutiny from AI ethicists and the media has forced tech leaders to explain why the technology is safe for mass adoption and what steps they have taken to ensure users and society are not harmed by risks such as misinformation or hate speech.

The dominant trend in AI is to define safety as “aligning” the model so that it shares “human values,” said Irene Solaiman, a former public policy lead at OpenAI who is now policy director at Hugging Face, an open-source AI company. But that concept is too vague to translate into a set of rules for everyone, since values vary from country to country and even within them, she said, pointing to the Jan. 6 riot at the U.S. Capitol as an example.

“When you treat humanity as a whole, the loudest, most resourced, most privileged voices” tend to have more weight in defining the rules, Solaiman said.

The tech industry had hoped that generative AI would be a way out of polarized political debates, said Nirit Weiss-Blatt, author of the book “The Techlash.”

But concerns about Google’s chatbot spouting false information and Microsoft’s chatbot sharing bizarre responses have dragged the debate back to Big Tech’s control over life online, Weiss-Blatt said.

And some tech workers are getting caught in the crossfire.

The OpenAI employees who faced harassment for allegedly engineering ChatGPT to be anti-Trump were targeted after their photos were posted on Twitter by the company account for Gab, a social media site known as an online hub for hate speech and white nationalists. Gab’s tweet singled out screenshots of minority employees from an OpenAI recruiting video and posted them with the caption, “Meet some of the ChatGPT team.”

Gab later deleted the tweet, but not before it appeared in articles on STG Reports, the far-right website that traffics in unsubstantiated conspiracy theories, and My Little Politics, a 4chan-like message board. The image also continued to spread on Twitter, including a post viewed 570,000 times.

OpenAI declined to make the employees available for comment.

In a blog post responding to queries from The Post, Gab CEO Andrew Torba said that the account automatically deletes tweets and that the company stands by its content.

“I believe it is absolutely essential that people understand who is building AI and what their worldviews and values are,” he wrote. “There was no call to action in the tweet and I’m not responsible for what other people on the internet say and do.”


