
AI and You: Gemini Flubs Are ‘Unacceptable,’ Musk Sues OpenAI for Putting Profit Over Principles

It’s been a difficult few weeks for Google after its text-to-image creator in Gemini (formerly Bard) began delivering offensive, ridiculous and embarrassing images. That prompted a company executive to admit Google “got it wrong” and led it to pause the 3-week-old tool while it conducted “extensive testing.” 

Then Google CEO Sundar Pichai weighed in, reiterating the “got it wrong” part in an email to employees, according to the text of the message shared by Semafor. Gemini’s responses offended users and showed bias, he added. “That’s completely unacceptable.”  

As for the fix, Pichai said the company’s actions will include “structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations.”

“No AI is perfect,” he said, “but we know the bar is high for us and we will keep at it for however long it takes.”

The bar is high because competition is fierce in the nascent generative AI market, with Google doing all it can to race ahead of rivals including OpenAI, Microsoft, Meta and Anthropic (see below for more news on what’s going on with OpenAI and Microsoft). For Google, that means more than just fixing Gemini so it can continue to “create great products that are used and beloved by billions of people and businesses,” as Pichai put it. 

It also means pushing boundaries for its AI tech. That now includes a deal, struck last month, to pay a group of independent publishers to start using the beta version of a yet-unannounced gen AI platform to write news stories. According to a scoop by Adweek, “the publishers are expected to use the suite of tools to produce a fixed volume of content for 12 months. In return, the news outlets receive a monthly stipend amounting to a five-figure sum annually, as well as the means to produce content relevant to their readership at no cost.” 

That fixed volume includes three articles a day, one newsletter a week and one marketing campaign per month. Adweek added that the AI tool can summarize an article from another news source and then change the language and style “to read like a news story.”

Google said the project isn’t being used to “re-publish other outlets’ work,” calling that characterization “inaccurate.” But Google did confirm the experiment and told Adweek that the gen AI platform is intended to give journalists an assist. “The experimental tool is being responsibly designed to help small, local publishers produce high quality journalism using factual content from public data sources — like a local government’s public information office or health authority. These tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.” 

The program is part of the Google News Initiative, launched in 2018, and is aimed at giving publishers ways to do more with limited funding. Its goal is to allow them to produce “aggregated content more efficiently by indexing recently published reports generated by other organizations, like government agencies and neighboring news outlets, and then summarizing and publishing them as a new article,” Adweek added.

This isn’t the first time Google has experimented with having its AI tools write stories for publishers and content creators. It’s also working on a project, codenamed Genesis, that can assemble more full-featured news articles, The New York Times reported in July.

But Google isn’t just interested in AI for text and images. Its DeepMind subsidiary teased another AI model, called Genie, that CNET’s Lisa Lacy notes can create playable virtual worlds. Or as DeepMind’s Feb. 23 research paper describes it, “an endless variety of action-controllable 2D worlds.” 

“In a Feb. 26 tweet from DeepMind’s Tim Rocktäschel, examples include playable worlds made to look as if built from clay; rendered in the style of a sketch; and set in a futuristic city,” Lacy reports. A Google spokesperson also said the technology isn’t limited to 2D environments. Genie could, for example, generate simulations to be used for training “embodied agents such as robots.”  

But you likely won’t be able to try it out. Google said Genie is just “early-stage research” and isn’t designed to be released to the public. At least not yet.

Here are the other doings in AI worth your attention.

Elon Musk sues OpenAI, slams CEO Altman for chasing profit

Elon Musk, who last year started a for-profit gen AI company called xAI, sued OpenAI, a company he helped create with CEO Sam Altman. He accused the gen AI pioneer of putting profits ahead of a “founding agreement” that called for OpenAI to operate to “benefit humanity.”  

In the 46-page lawsuit, filed Feb. 29, Musk accuses Altman and the company of breach of contract, saying OpenAI was intended to be an open-source, “non-profit lab that would try to catch up to Google in the race for AGI (Artificial General Intelligence), but it would be the opposite of Google.”

Instead, the suit alleges, OpenAI and Altman have turned it into a for-profit company by teaming up with Microsoft, which has invested $13 billion in the maker of ChatGPT. Musk, who left OpenAI’s board in 2018 after investing more than $44 million in the company, argues that Microsoft’s investments “set the Founding Agreement aflame” because “Open AI has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.”

It’s not the chase for money alone that he finds problematic. In March 2023, Musk, along with more than 1,000 other tech leaders and researchers, signed an open letter warning that AI technologies put humanity at risk and calling for a pause in the release of powerful new AI engines. He and Altman, who met during a tour of Musk’s rocket company SpaceX, “later bonded over their shared concerns about the threat that AI could pose to humanity,” The New York Times reported.

But those concerns don’t seem to be on the priority list at OpenAI, Musk claims. In the lawsuit, he calls out Altman’s temporary ouster as CEO last year. That kerfuffle, Musk argues, let Altman remake the board of directors, including giving Microsoft a seat, and oust members who, like Musk, were concerned about OpenAI’s tech being used to create an AGI. (An AGI is an advanced type of AI that can make decisions as well as or better than humans; think Jarvis in the Marvel movies.)

Still, Musk, CEO of EV maker Tesla, owner of the social media platform X and the richest man in the world, may not be the AI champion the suit presents him as because he’s reportedly tried and failed to wrest control of OpenAI for his own purposes, The New York Times said.

“Though Mr. Musk has repeatedly criticized OpenAI for becoming a for-profit company, he hatched a plan in 2017 to wrest control of the AI lab from Mr. Altman and its other founders and transform it into a commercial operation that would work alongside his other companies, including the electric carmaker Tesla, and make use of their increasingly powerful supercomputers, people familiar with his plan have said,” the paper reported. “When his attempt to take control failed, he left the OpenAI board, the people said.”

Stay tuned, because we’re just at the start of a tech saga that should easily produce enough fodder for a six-part streaming series. (There’s already a biopic of Musk in the works.)

The wearable AI pin is here, for a price  

Early adopters take note: The voice-activated “pin” created by former Apple employees as a wearable AI device that can replace your phone is now available for preorder in the US.

The Humane AI Pin, which will ship sometime in March, starts at $699 (the polished chrome versions are $799). You also need to sign up for a $24-a-month subscription plan to cover connectivity, data storage and Humane’s AI service. Then there’s tax and other fees (and accessories).

So it may be pricey for the average consumer. But if you’re adventurous and want to be the first to try out something new, there are a lot of interesting aspects to the Pin, says CNET’s Katie Collins, who got an up-close demo at Mobile World Congress last week. 

“The Pin is a petite, subtle, square-shaped computer that sits on your chest with the help of a magnet,” Collins reports. “You interact with it primarily through voice, but also using gestures on the front-facing touchpad. The aim is to have an expert and always-available assistant ready to help you out with any query while remaining present, rather than getting lost in whatever’s happening on your phone screen.”

The device is activated with a touch, rather than with a wake word. Above the touchpad is a module with a camera and an LED light that shows when the Pin and its camera are in use. There’s also a laser that “can beam image and text onto your hand using a technology that Humane calls Laser Ink,” Collins reports, noting that the company’s co-founder took a photo of her and then beamed it onto her hand.

The AI Pin can answer simple questions — convert dollars to euros — as well as complex queries, including translating among 50 languages. And since it has its own phone number and is supported by its own wireless service, it can make calls and send texts, including using AI to craft those messages. 

The cost and concerns about privacy may deter people, but the AI Pin is a step “into a brave, new world,” Collins adds. “It may not be the giant leap away from smartphones that doomscrollers like me are ready for, but I suspect it offers a prescient glimmer of what’s to come.”

Microsoft invests in Mistral AI, draws EU scrutiny

Microsoft, which has invested $13 billion in OpenAI, said it signed a multiyear “strategic partnership” with Mistral AI, a French startup whose LLMs compete with OpenAI and its ChatGPT chatbot.

The investment of 15 million euros, or about $16 million, has already drawn the attention of European Union regulators who are concerned about how these partnerships will affect competition in the emerging gen AI market, according to Politico. Mistral’s backers include software maker Salesforce and chipmaker Nvidia. The European Commission had already said in January that it was reviewing the partnership between Microsoft and OpenAI.  

“The Commission is looking into agreements that have been concluded between large digital market players and generative AI developers and providers,” European Commission spokesperson Lea Zuber told Politico. “In this context, we have received the mentioned agreement, which we will analyze.”

Microsoft said it will offer Mistral’s gen AI tech to customers using its Azure AI cloud platform. Mistral said it would get access to Microsoft’s supercomputers to train and run its AI models. The two also said they would collaborate on research and development and on training “purpose-specific models for select customers, including European public sector workloads.” 

In response to concerns about competition, Microsoft shared its AI Access Principles at Mobile World Congress on Feb. 26, listing its “commitments to promote innovation and competition in the new AI economy.”

Mistral was founded in 2023 by researchers from DeepMind and Meta. In addition to the deal with Microsoft, Mistral announced its “most powerful large language model, Mistral Large, and released a web app netizens can use to experiment with a chatbot powered by the model,” The Register reported. “It also put out a smaller model, Mistral Small, which — as the name suggests — is optimized to be faster and more compact.”

You can apply to be part of the beta program for Mistral’s chatbot, which is called Le Chat.

Don’t use a chatbot to do your taxes. Seriously

If you’re thinking of using ChatGPT to help prepare your tax return (which is due April 15), CNET’s Nelson Aguilar says the answer should be a definitive no.

There are many reasons why the chatbot isn’t ideal for offering you tax guidance, but No. 1 sort of trumps everything else: ChatGPT isn’t up on the latest news. 

“The knowledge cutoff date for ChatGPT 3.5 is January 2022, and for the paid ChatGPT 4.0 it’s April 2023, so any changes to the tax code after those dates won’t be found in ChatGPT’s training data,” Aguilar notes. “To file an accurate tax return, you want to prepare your tax documents using current tax rules, and ChatGPT can’t help with that.”

How often does tax regulation change? Constantly, he adds, noting that so far in 2024, the IRS “increased tax brackets, adjusted tax deductions, raised mileage rates and expanded who is eligible to file their taxes for free via IRS Free File.”

Not enough to convince you to step away from the chatbot? Then consider this: You should never share your personal information, including your address, Social Security number or banking information with ChatGPT or any other chatbot. ChatGPT has had a few data leaks that allowed some users to see other users’ chat history. Don’t let that be you. 

If you do want help getting your taxes done (properly), Aguilar points you to CNET’s tax guide. Good luck. 

Don’t use a chatbot for voting and election information. Seriously

Proof News, a new nonprofit offering data-driven journalism and co-founded by longtime journalist Julia Angwin, debuted with its first test of gen AI systems as part of a project with the AI Democracy Projects. The test: whether five popular chatbots could deliver reliable voting and election information.

The answer: no. In fact, the chatbots did so badly (they were wrong half the time) that you shouldn’t rely on them for answers to questions about voting and elections.

“We ask, How does an AI model perform in settings, such as elections and voting contexts, that align with its intended use and that have evident societal stakes and, therefore, may cause harm?” Angwin and her co-authors write. The experts testing the systems found “the answers were often inaccurate, misleading and even downright harmful.”

The five large language models tested in January were Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2 and Mistral’s Mixtral. The tests — which included questions like which states prohibit voters from wearing campaign-related apparel at polling places — found that the AI engines delivered “inaccurate and incomplete information about voter eligibility, polling locations and identification requirements,” responses the experts rated as harmful and biased.

The group acknowledges that its testing drew on a small sample of questions (it rated 130 responses from the five AI engines) and that it prompted the LLMs through their APIs (a consumer asking the same question in a chat interface could get a different answer). 
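Proof News hasn’t published its test harness, but the API-versus-consumer distinction matters because a raw API call skips the extra system prompts and guardrails that consumer chat apps layer on top of the same model. As a purely illustrative sketch, here’s the kind of call such a harness might make using the OpenAI Python SDK; the model name, question and parameters below are assumptions for illustration, not the project’s actual setup.

```python
# Illustrative sketch only: prompting a model over its raw API, the way
# a test harness might, instead of through a consumer chat interface.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable. The model, question and
# temperature are stand-ins, not the study's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Can I wear a campaign T-shirt to my polling place?"

response = client.chat.completions.create(
    model="gpt-4",  # GPT-4 was one of the five models the study tested
    messages=[{"role": "user", "content": question}],
    temperature=0,  # pin down sampling so repeated runs are comparable
)

# The study's expert panel then rated each response by hand for
# accuracy, completeness, harmfulness and bias.
print(response.choices[0].message.content)
```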

Still, the testers noted that false information and half-truths are a type of harm all citizens should be aware of: “the steady erosion of the truth by hundreds of small mistakes, falsehoods, and misconceptions presented as ‘artificial intelligence’ rather than plausible-sounding unverified guesses.”

The TL;DR from one election official: “If you want the truth about the election, don’t go to an AI chatbot. Go to the local election website.” 

A few AI reality checks  

For some, the term “AI” suggests technology that’s smarter than people — after all, it can process data, create images and rewrite War and Peace in a snap. But you don’t have to be an AI naysayer to point out that we should look at our unfolding AI society with a little perspective. 

To that end, I found three people offering reality checks about outsourcing your thinking to AI.

Scott Galloway, a professor of marketing at the NYU Stern School of Business, has an interesting take on tech layoffs by companies from Amazon, Apple, Cisco, Google and Meta to Sony and Spotify, which say they are shifting their investments and resources to AI. AI, he said, is like “corporate Ozempic — it trims the fat and you keep the fact you’re using it a secret,” according to a writeup by Fortune.

The CEO of Italian defense tech company Leonardo said he thinks the “stupidity” of AI users poses a bigger threat to society than the technology itself, according to reporting by CNBC.

“To be honest, what concerns me more is the lack of control from humans, who are still making wars after 2,000 years,” CEO Roberto Cingolani told CNBC in an interview. “Artificial intelligence is a tool. It is an algorithm made by humans, that is run by computers made by humans, that controls machines made by humans. I am more afraid, more worried [about] natural stupidity than artificial intelligence to be honest … I have a scientific background, so I definitely consider technology as neutral. The problem is the user, not the technology itself.”

And I’ll give the last word to Elizabeth Goodspeed, an editor-at-large for design site It’s Nice That, who says that “AI can’t give you good taste” when it comes to using tools for image creation because “taste takes work.”

“What makes AI imagery so lousy isn’t the technology itself, but the cliché and superficial creative ambitions of those who use it. Videos of a cyber-punk jellyfish or a collie in sunglasses on a skateboard generated by OpenAI’s new text-to-video model Sora aren’t bad because the animals in them look unrealistic; they’re bad because they’re mind-numbingly stupid,” she writes. “Taste is what enables designers to navigate the vast sea of possibilities that technology and global connectivity afford, and to then select and combine these elements in ways that, ideally, result in interesting, unique work.”  

Editors’ note: CNET is using an AI engine to help create some stories.


