AI and You: OpenAI CEO Sam Altman Is Fired, the Rise of Synthetic Performers

The conventional wisdom in journalism is that when a company puts out a statement on a Friday afternoon, it’s generally not good news.

And so it was that OpenAI announced on Nov. 17 that it had ousted Sam Altman, its co-founder, CEO and chief advocate for the company’s mind-altering ChatGPT generative AI chatbot. The board asked Altman to exit because it no longer had confidence in his ability to lead the San Francisco-based company, according to a blog post in which OpenAI announced the leadership transition.

“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

Ouch. I guess he won’t be there to celebrate ChatGPT’s first birthday on Nov. 30.

The board members who fired Altman are OpenAI chief scientist Ilya Sutskever and OpenAI’s independent directors, Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley and Georgetown Center for Security and Emerging Technology’s Helen Toner. Mira Murati, the company’s chief technology officer, was named interim CEO as OpenAI conducts a search for Altman’s permanent replacement. Co-founder Greg Brockman, who served as chairman of the board, also quit the company, saying in a post that he was “shocked and saddened” by the board’s decision to oust Altman.

OpenAI told me it didn’t have any additional comment and referred me back to its blog post.

This is quite the big deal in the AI industry, given that Altman has been the poster child for generative AI chatbots. He has touted ChatGPT’s potential to assist advancements in human achievement and called on regulators to offer up legislation that would help companies keep innovating while also guarding against the security, privacy and humanity-ending threats genAI could pose in the hands of bad actors. ChatGPT is the most widely visited genAI tool, according to Similarweb, with over 1.5 billion visits in October.

The New York Times said this was a “stunning fall for Mr. Altman, 38, who over the last year had become one of the tech industry’s most prominent executives as well as one of its most fascinating characters.” CNN described Altman as an “overnight quasi-celebrity and the face of a new crop of AI tools that can generate images and text in response to simple user prompts.” The Guardian described Altman’s exit as a “major shakeup in the world of AI.” It noted that he was fired for “allegedly lying to the board of his company” but that “what Altman had allegedly hidden from his company’s board was not clear.”

“In Silicon Valley, Altman has long been known as a smart investor and supporter of smaller companies, but the rise of OpenAI catapulted him into the league of tech titans alongside Musk, Meta CEO Mark Zuckerberg and even the late Apple CEO Steve Jobs,” The Washington Post noted. “As recently as Thursday, Altman was acting the CEO part, speaking onstage at the Asia-Pacific Economic Cooperation summit in San Francisco.”

I asked ChatGPT what it could tell me about OpenAI’s board and its decision to fire CEO Sam Altman. It apologized for not being able to answer because it doesn’t have access to real-time news or specific information about recent events. (Its training goes up until September 2021.) It did describe Altman as an “American entrepreneur and investor.”
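For the curious, here’s roughly what that exchange looks like programmatically: a minimal sketch using OpenAI’s Python client. The model name and question are just illustrative, and you’d need your own API key.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; use whichever model you have access to
    messages=[{
        "role": "user",
        "content": "What can you tell me about OpenAI's board firing CEO Sam Altman?",
    }],
)

# A model whose training data ends before an event will say it has no
# knowledge of it rather than report the news.
print(response.choices[0].message.content)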

For his part, Altman, who stepped in as CEO in 2020 after helping to start OpenAI as a nonprofit in 2015 with backing from tech billionaires Elon Musk, Peter Thiel and Reid Hoffman, said in a post on X (also known as Twitter) that he “loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.”

He added that he “will have more to say about what’s next later.” OpenAI co-founder Brockman on Friday tweeted out a brief timeline of how the board’s decision unfolded, and added that he and Altman “are still trying to figure out exactly what happened.”

In journalistic terms, that means this is a developing story. Stay tuned.

Here are the other doings in AI worth your attention. 

Clones, digital duplicates and synthetic actors

One concern about generative AI is how the tech can be used to copy real people and fool you into thinking that person is saying or doing something they didn’t. That’s the issue with deepfakes, which as the name implies are intended to deceive or mislead. Actor Tom Hanks cautioned in October that an AI clone was touting dental plans in an unauthorized ad. While introducing his Executive Order with guardrails around AI development and use, President Joe Biden joked about a deepfake doppelganger copying his voice.

But beyond ads and misinformation campaigns, actors and performers in Hollywood are also concerned that studios and content creators could use genAI to make digital doubles or synthetic performers instead of using (and paying) humans. That remains an issue even now that the Hollywood actors’ strike has been resolved, with the deal including guardrails on the industry’s use of genAI that require producers to get an actor’s permission before creating and using a digital replica of them.

Justine Bateman, the actor who served as the union advisor for genAI negotiations with Hollywood, summed up the larger problem this way in a post on X (formerly known as Twitter) earlier this week.

“Winning an audition could become very difficult, because you will not just be competing with the available actors who are your type, but you will now compete with every actor, dead or alive, who has made their ‘digital double’ available for rent in a range of ages to suit the character,” Bateman wrote. “You also will be in competition with an infinite number of AI Objects that the studios/streamers can freely use. And a whole cast of AI Objects instead of human actors eliminates the need for a set or any crew at all.”

Just how easy is it to use AI to generate digital doubles and synthetic performers? Let me call out three interesting AI developments in the news this past week that underscore the issue. 

The first comes from Charlie Holtz, a “hacker in residence” at Replicate, a machine-learning startup, who created an AI clone of British broadcaster and natural historian Sir David Attenborough, Insider reported. In a post on X, Holtz showed how he was able to replicate the documentary filmmaker’s distinctive voice. The result: “Here’s what happens when David Attenborough narrates your life.”

Holtz freely shared the code for co-opting Attenborough’s voice. Attenborough hadn’t responded to Insider’s request for comment as of this writing, but Holtz’s experiment has had more than 3.5 million views. One commenter said they’re looking forward to having Attenborough “narrate videos of my baby learning how to eat broccoli.”
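Holtz published his own code, so what follows isn’t it: just a hedged sketch of the general shape of such a pipeline, using Replicate’s Python client. The model names are placeholders, and the two-step recipe (a vision model to write narration, a text-to-speech model to voice it) is an assumption about how you’d wire this up, not a description of Holtz’s exact approach.

import replicate

def narrate_frame(image_path):
    # Step 1: have a vision model describe the frame in the style of a
    # nature documentary. "example/captioning-model" is a placeholder,
    # not a real model id.
    narration = replicate.run(
        "example/captioning-model",
        input={
            "image": open(image_path, "rb"),
            "prompt": "Describe this scene as a nature-documentary narrator would.",
        },
    )
    # Step 2: feed that text to a text-to-speech model trained on, or
    # conditioned with, the target voice. Also a placeholder id.
    audio = replicate.run(
        "example/voice-cloning-tts",
        input={"text": narration},
    )
    return audio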

The second is an experimental music tool from YouTube called Dream Track that lets you create music tracks by cloning the voices of nine musicians — including John Legend, Demi Lovato and Sia — with their permission. Created in collaboration with Google’s DeepMind AI lab, Dream Track is being tested by a select group of US creators, who can make a soundtrack for their YouTube Shorts by typing their idea for the song into a prompt and then picking one of the nine artists. The tool will then create an original Shorts soundtrack featuring the AI-generated voice of that artist.

“Being a part of YouTube’s Dream Track experiment is an opportunity to help shape possibilities for the future,” Legend said in a testimonial posted on a YouTube blog. “As an artist, I am happy to have a seat at the table and I look forward to seeing what the creators dream up during this period.”  

Charli XCX seemed a little more guarded in her endorsement. “When I was first approached by YouTube I was cautious and still am, AI is going to transform the world and the music industry in ways we do not yet fully understand. This experiment will offer a small insight into the creative opportunities that could be possible and I’m interested to see what comes out of it.”

You can listen to an example featuring T-Pain that was generated from the prompt: “a sunny morning in Florida, R&B.” Another clones Charlie Puth and delivers “a ballad about how opposites attract, upbeat acoustic.”

The news about Dream Track came at the same time that YouTube announced its guidelines for “responsible AI innovation” on its platform. Video creators will need to select from content labels when they upload a video to disclose when it “contains realistic altered or synthetic material … This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials.”

The third set of genAI tech I’m highlighting comes from Meta: Emu Video and Emu Edit. A “simple” text-to-video generation tool, Emu Video lets you build a 4-second animated clip, at 16 frames per second, using text only, an image only, or both text and an image. Emu Edit offers an easy way to edit those images. You can see for yourself how it works.

Meta’s demo tool lets you choose from a set of images (a panda wearing sunglasses and a fawn Pembroke Welsh corgi, among them) and then select from the prompts provided to have your character appear in Central Park or underwater, walking in slow motion or skateboarding, in a photorealistic or anime manga style. I went for the cat dancing energetically, in Times Square, in paper cut craft illustration style.

[Image: Meta’s Emu Video lets you create a 4-second animated video by selecting from a set of images and text-based descriptions. I went for the cat dancing energetically, in Times Square, in paper cut craft illustration style. Credit: Meta’s Emu Video software]

You might think, “Oh, that’s an easy way to create a GIF.” But in the not-too-distant future, you may be able to place all kinds of characters in the tool and, with just a few words, create a short movie.

Have AI, will travel? Sort of

One of the more popular use cases for chatbots is helping with travel planning, the time-consuming and labor-intensive process of mapping out a detailed itinerary. And while there are many anecdotal reports about the success of having genAI do that work for you, CNET’s Katie Collins reminds us that mapping out an itinerary is about more than just creating a list of places to see and things to do.

“The best itineraries will string your day together in a way that makes sense geographically and thematically,” Collins wrote about mapping out a tour of her hometown of Edinburgh, Scotland, a place she says she knows well. She relied on tools including ChatGPT, GuideGeek, Roam Around, Wonderplan, Tripnotes and the Out of Office, or OOO, app.

“The journey between attraction A and attraction B will be part of the fun, taking you down a picturesque street or providing a surprising view you might not otherwise have seen. It will also be well paced, taking into account that by the third gallery of the day, even the most cultured among us will likely be struggling with museum fatigue,” she said. 

So while chatbots can generate lists of well-known and popular attractions, Collins said “very few of the itineraries I asked AI to create for Edinburgh fit this brief” and that “the fact that AI uses historical data makes it incredibly backward looking,” which may lead you to places that no longer exist.

So as is the case with most genAI, you’ll need to double-check, verify and cross-check what the AI is telling you before you head out. Cautioned Collins: “That goes for everything it tells you.”
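If you try this yourself, you can at least nudge a chatbot toward the geography and pacing Collins found lacking by spelling them out in the prompt. Here’s a hedged sketch in Python; the wording is mine, not from any of the apps she tested, and the output still needs the same fact-checking.

# A prompt template that asks for the two things Collins found most
# AI itineraries lacked: geographic ordering and sensible pacing.
ITINERARY_PROMPT = """Plan a one-day walking itinerary for {city}.
Requirements:
- Order the stops so each one is a short walk from the previous one.
- Include no more than {max_museums} museums or galleries, to head off museum fatigue.
- For each stop, name the street or route to the next stop.
- Only suggest places you are confident still exist, and flag anything uncertain.
"""

print(ITINERARY_PROMPT.format(city="Edinburgh", max_museums=2))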

Just how much hallucinating are we talking about? 

Collins’ story reminded me that the whole hallucination problem (that’s when chatbots deliver answers to your prompts that aren’t true but sound like they are) very much remains an issue for large language models such as ChatGPT and Google Bard.

Researchers at a startup called Vectara, founded by former Google employees, tried to quantify how much of a problem it is and found that “chatbots invent information at least 3% of the time — and as high as 27%,” the New York Times reported.

Vectara is now publishing a “Hallucination Leaderboard,” which evaluates how often an LLM hallucinates when summarizing a document. As of Nov. 1, it gave top marks to OpenAI’s GPT-4 (a 3% hallucination rate) and its lowest score to Google’s Palm 2 technology, which had a 27.2% hallucination rate. The leaderboard will be updated “regularly as our model and the LLMs get updated over time,” the company said.
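The metric itself is straightforward: judge each model-written summary against its source document and count how often the summary asserts something the source doesn’t support. Here’s a minimal sketch of that tally; the judging step, which Vectara does with a trained evaluation model of its own, is stubbed out here.

def judge(document, summary):
    # Return True if the summary asserts something the document doesn't
    # support. Stand-in for a trained factual-consistency model.
    raise NotImplementedError

def hallucination_rate(pairs):
    # pairs is a list of (document, summary) tuples.
    flagged = sum(judge(doc, summ) for doc, summ in pairs)
    return 100.0 * flagged / len(pairs)

# A model whose summaries get flagged 3 times out of 100 scores 3.0,
# in line with GPT-4's rate on the leaderboard.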

Microsoft unveils its own AI chip  

Microsoft introduced the first in a series of Maia accelerators for AI, saying it designed the chip to power its own cloud business and subscription software services and not to resell to other providers, according to reporting by CNBC, Reuters and ZDNET.

“The Maia chip was designed to run large language models, a type of AI software that underpins Microsoft’s Azure OpenAI service and is a product of Microsoft’s collaboration with ChatGPT creator OpenAI,” Reuters said. “Microsoft and other tech giants such as Alphabet (GOOGL.O) are grappling with the high cost of delivering AI services, which can be 10x greater than for traditional services such as search engines.”

CNBC, citing an interview with Microsoft corporate vice president Rani Borkar, noted that “Microsoft is testing how Maia 100 stands up to the needs of its Bing search engine’s AI chatbot (now called Copilot instead of Bing Chat), the GitHub Copilot coding assistant and GPT-3.5-Turbo, a large language model from Microsoft-backed OpenAI.”

The Maia 100 has 105 billion transistors, making it “one of the largest chips on 5-nanometer process technology,” ZDNET said, referring to the size of the smallest features of the chip: five billionths of a meter.

AI term of the week: Deep learning

When people talk about AI, you may hear about how it will (or won’t) mimic the human brain, which is why the term “deep learning” pops up. Here are two definitions, with the first a straightforward explainer from Coursera.

“Deep learning: A function of AI that imitates the human brain by learning from how it structures and processes information to make decisions. Instead of relying on an algorithm that can only perform one specific task, this subset of machine learning can learn from unstructured data without supervision.”

The second is from IBM, which also offers up an explainer about how deep learning works.

“Deep learning: A subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain — albeit far from matching its ability — allowing it to “learn” from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers can help to optimize and refine for accuracy.

Deep learning drives many artificial intelligence applications and services that improve automation, performing analytical and physical tasks without human intervention. Deep learning technology lies behind everyday products and services (such as digital assistants, voice-enabled TV remotes, and credit card fraud detection) as well as emerging technologies (such as self-driving cars).”
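To make IBM’s “three or more layers” concrete, here’s a minimal example in PyTorch: an input layer feeding two hidden layers and an output layer. The layer sizes are arbitrary and purely for illustration.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),  # input layer into the first hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),   # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer, e.g. 10 classes
)

x = torch.randn(1, 784)   # one fake input sample
print(model(x).shape)     # torch.Size([1, 10])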

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.


