AI and You: Hallucinations, Big Tech Talk on AI, and Jobs, Jobs, Jobs

Over the past few months, I’ve read through AI glossaries to get caught up on the vocabulary around the new world of generative AI. I recognize I’ve been doing deep dives into this topic and may know more than the average American about AI, but I still assumed that some of the key concepts associated with generative AI are widely known and understood. Talking with a journalism professor this week showed me that isn’t the case: As I explained how AI tools have a tendency to “hallucinate,” they stared blankly at me and said, “What does that mean?” 

“Hallucinate” is one of the first vocabulary words related to genAI that everyone should know. Simply put, it means that AI engines, like OpenAI’s ChatGPT, have a tendency to make up stuff that isn’t true but that sounds true. In fact, the US Federal Trade Commission earlier this month started investigating OpenAI about its chatbot potentially making false statements about people. Where does “hallucinate” come from in an AI context? Google DeepMind researchers came up with the quaint term in 2018, saying they found that neural machine translation systems, or NMTs, “are susceptible to producing highly pathological translations that are completely untethered from the source material.”  

Highly pathological translations untethered from the source material. I’m not an engineer, but even I know that’s a polite way of saying something is seriously wrong with those systems.

Most glossaries offer a more straightforward explanation of an AI “hallucination.” This definition comes courtesy of NBC News:

“Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that are not yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze, or make up facts about events that aren’t in its training data. It’s not fully understood why this happens, but can arise from sparse data, information gaps and misclassification.”
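Fabricated scholarly citations, at least, can be caught mechanically, because a citation can be looked up. As a rough illustration (my own sketch, not a method from NBC News or OpenAI), here are a few lines of Python that pull DOIs out of a model’s answer and ask Crossref’s public REST API whether each one actually exists; the regex and helper names are hypothetical:

```python
# Hypothetical sketch: flag possibly fabricated scholarly citations by
# checking whether each DOI in a model's answer actually resolves.
# Assumes the model's response is already captured as a string; the
# regex and helper names below are illustrative, not any standard tool.
import re
import requests

DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def check_dois(answer: str) -> dict[str, bool]:
    """Return {doi: True if Crossref knows it, False if it 404s}."""
    results = {}
    for doi in set(DOI_PATTERN.findall(answer)):
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        results[doi] = resp.status_code == 200  # 404 suggests a made-up DOI
    return results

answer = "See Smith et al. (2021), doi:10.1000/fake.citation.42, for details."
for doi, exists in check_dois(answer).items():
    print(doi, "is real" if exists else "was NOT found -- possible hallucination")
```

This only catches one flavor of hallucination, of course, but it shows why fabricated references are considered low-hanging fruit: they make a checkable claim.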

In the interests of fairness, I asked ChatGPT for comment by prompting it to tell me what “hallucinate” means as it refers to AI. Here’s its response: 

In the context of AI, the term “hallucinate” is used metaphorically to describe the generation of content or information that is not based on direct observation or real-world data. When an AI system “hallucinates,” it means that it produces outputs that may resemble or mimic certain patterns or features found in the training data, but are not actually grounded in reality.

The term “hallucinate” is not specific to AI and has been borrowed from its usage in the field of psychology. In psychology, hallucination refers to perceptions or sensory experiences that occur without any external stimuli. These perceptions are not based on reality but are perceived as real by the individual experiencing them.

In the context of AI, the term is used metaphorically to describe situations where the generated outputs deviate from reality or introduce elements that are not present in the original data. For example, in the domain of image generation, an AI system may produce images that contain surreal or fantastical elements, which are not based on actual objects or scenes observed in the training data.

The term “hallucinate” is used to highlight the potential for AI systems to generate outputs that go beyond what is directly observed or explicitly instructed. It reflects the ability of AI algorithms to extrapolate, combine, or create new patterns, sometimes resulting in unexpected or imaginative results.

Kind of a long-winded and convoluted way for a conversational AI to say it makes stuff up, in my opinion. If I didn’t know better, I’d also say ChatGPT was a bit defensive.
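If you’d rather put the same question to the model programmatically instead of through the ChatGPT web interface, a minimal sketch using OpenAI’s Python client looks like this. It assumes the openai package is installed and an OPENAI_API_KEY is set in your environment; the model name is just an example.

```python
# Hedged sketch: asking the model the same question through OpenAI's
# Python client instead of the ChatGPT web interface. Assumes the
# `openai` package is installed and OPENAI_API_KEY is set in your
# environment; the model name is just an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "What does 'hallucinate' mean as it refers to AI?"},
    ],
)
print(response.choices[0].message.content)
```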

Here are the other doings in AI worth your attention.

US workers aren’t afraid of AI, but job erosion is coming

The Pew Research Center and McKinsey each put out reports this week on how AI may affect workers and jobs, even as many open questions remain. Both reports are worth a read.

US workers “seem more hopeful than concerned about the impact of AI on their jobs,” according to the Pew study.

The study aimed in part to quantify which industries and workers are more exposed to AI. Pew characterized jobs as “more exposed to artificial intelligence if AI can either perform their most important activities entirely or help with them.”

“Many US workers in more exposed industries do not feel their jobs are at risk — they are more likely to say AI will help more than hurt them personally. For instance, 32% of workers in information and technology say AI will help more than hurt them personally, compared with 11% who say it will hurt more than it helps,” the study found.

As to whether AI will lead to job losses, Pew said the answer to that remains unclear “because AI could be used either to replace or complement what workers do.” And that decision, as we all know, will be made by humans — the managers running these businesses who get to decide if, how and when AI tools are used.

“Consider customer service agents,” Pew noted. “Evidence shows that AI could either replace them with more powerful chatbots or it could enhance their productivity. AI may also create new types of jobs for more skilled workers — much as the internet age generated new classes of jobs such as web developers. Another way AI-related developments might increase employment levels is by giving a boost to the economy by elevating productivity and creating more jobs overall.”

When it comes to jobs with the highest exposure to AI, the breakout isn’t all that surprising, given that some jobs — like firefighting — are more hands on, literally, than others. What is surprising is that women are more likely than men to have jobs with exposure to AI, Pew said, based on the kind of work they do.

Meanwhile, McKinsey offered up its report “Generative AI and the future of work in America.” The consultancy gave a blunt assessment on the impact of AI and work, saying that “by 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated — a trend accelerated by generative AI.”

But there’s a possible silver lining. “An additional 12 million occupational transitions may be needed by 2030. As people leave shrinking occupations, the economy could reweight toward higher-wage jobs. Workers in lower-wage jobs are up to 14 times more likely to need to change occupations than those in highest-wage positions, and most will need additional skills to do so successfully. Women are 1.5 times more likely to need to move into new occupations than men.”

All that depends, McKinsey adds, on US employers helping train workers to serve their evolving needs and turning to overlooked groups, like rural workers and people with disabilities, for their new talent.

What does all this mean for you right now? One thing is that employers are already using AI to help with recruitment. If you’re looking for tips on how to job hunt in a world with these AI recruiting tools, check out this useful guide on The New Age of Hiring by CNET’s Laura Michelle Davis.

Big Tech talks up AI during earnings calls  

Google/Alphabet, Microsoft and Meta (formerly known as Facebook) announced quarterly earnings this week. And what was interesting, but not surprising, was how often AI was mentioned in the opening remarks by CEOs and other executives, as well as in the questions asked by Wall Street analysts. 

Microsoft CEO Satya Nadella, whose company offers an AI-enhanced version of its Bing search engine, plus AI tools for business, mentioned artificial intelligence 27 times in his opening remarks. Google CEO Sundar Pichai, who talked up the power of Google’s Bard and other AI tools, mentioned AI 35 times. And Meta CEO Mark Zuckerberg called out AI 17 times. If you’re looking for a little less-than-light reading, I encourage you to scan the transcripts for yourself.   
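Those tallies are easy to reproduce, by the way. Here’s a hedged sketch of the counting, with my own regex and a placeholder transcript; your totals will depend on which transcript you feed in and what you count as a mention.

```python
# Minimal sketch of the tallying: count mentions of "AI" or
# "artificial intelligence" in an earnings-call transcript. The
# transcript below is a placeholder; real totals depend on which
# transcript you feed in and what you count as a mention.
import re

MENTION = re.compile(r"\b(?:AI|artificial intelligence)\b", re.IGNORECASE)

def count_ai_mentions(transcript: str) -> int:
    return len(MENTION.findall(transcript))

transcript = (
    "Our AI investments accelerated this quarter. "
    "Artificial intelligence is reshaping Search, and AI agents are next."
)
print(count_ai_mentions(transcript))  # -> 3
```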

From Zuckerberg, we heard that “AI-recommended content from accounts you don’t follow is now the fastest growing category of content on Facebook’s feed.” Also that “You can imagine lots of ways AI could help people connect and express themselves in our apps: creative tools that make it easier and more fun to share content, agents that act as assistants, coaches, or that can help you interact with businesses and creators, and more. These new products will improve everything that we do across both mobile apps and the metaverse — helping people create worlds and the avatars and objects that inhabit them as well.”

Nadella, in talking about Bing, said it’s “the default search experience for OpenAI’s ChatGPT, bringing timelier answers with links to our reputable sources to ChatGPT users. To date, Bing users have engaged in more than 1 billion chats and created more than 750 million images with Bing Image Creator.”  

And Pichai talked about how AI tech is transforming Google Search. “User feedback has been very positive so far,” he said. “It can better answer the queries people come to us with today while also unlocking entirely new types of questions that Search can answer. For example, we found that generative AI can connect the dots for people as they explore a topic or project, helping them weigh multiple factors and personal preferences before making a purchase or booking a trip. We see this new experience as another jumping-off point for exploring the web, enabling users to go deeper to learn about a topic.”

AI detection hits another snag

Last week, I shared a CNET story by science editor Jackson Ryan about how a group of researchers from Stanford University set out to test generative AI “detectors” to see if they could tell the difference between something written by an AI and something written by a human. The detectors did a poor job, with the researchers noting that the software is biased and easy to fool.

Which is why educators and others were heartened by news in January that OpenAI, the creator of ChatGPT, was working on a tool that would detect AI versus human content. Turns out that was an ambitious quest, because OpenAI “quietly unplugged” its AI detection tool, according to reporting by Decrypt.

OpenAI said that as of July 20 it was no longer making AI Classifier available, because of its “low rate of accuracy.” The company shared the news in a note appended to the blog post that first announced the tool, adding, “We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”
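None of this is surprising once you see how brittle threshold-based detection can be. Here is a toy illustration, emphatically not the Stanford methodology or OpenAI’s classifier, of why a cutoff on a single crude text statistic is both biased against plain, repetitive human prose and trivially fooled by a light paraphrase. The threshold and the metric are arbitrary choices made for this sketch.

```python
# Toy illustration (not the Stanford methodology, not OpenAI's
# classifier): a naive "detector" that flags text as AI-written when
# its vocabulary is too uniform. The 0.6 threshold and the unique-word
# metric are arbitrary choices made for this sketch.
def looks_ai_written(text: str, threshold: float = 0.6) -> bool:
    words = text.lower().split()
    diversity = len(set(words)) / len(words)  # unique-word ratio
    return diversity < threshold  # low diversity => flagged as "AI"

human = "the cat sat on the mat and the dog sat on the mat"
reworded = "a cat perched on one mat while a dog lounged on another mat"

print(looks_ai_written(human))     # True: plain human prose gets flagged
print(looks_ai_written(reworded))  # False: a light paraphrase slips through
```

Real detectors use richer statistics, such as perplexity under a language model, but the same threshold logic helps explain both the bias and the ease of evasion the researchers found.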

US government continues to discuss AI regulations

Senate Majority Leader Chuck Schumer continued holding sessions to brief the Senate on the opportunities and risks around AI, saying this week that there’s “real bipartisan interest” in putting together AI legislation that “encourages innovation but has the safeguards to prevent the liabilities that AI could present.”

The Senate expects to call in more experts to testify in coming months, Reuters reported, noting that earlier in the week senators on both sides expressed alarm about AI being used to create a “biological attack.” I know that’s already been the plot of a sci-fi movie, I just can’t remember which one.

Schumer’s complete remarks are here.

Hollywood interest in AI talent picks up as actors, writers strikes continue

Speaking of movies and AI plots, as the actors and writers strikes continue, entertainment companies — not interested in public relations optics, I guess — posted job openings for AI specialists as creatives walked the picket line out of concern that studios will “take their likenesses or voices, and reuse them over and over for little or no pay, and with little in the way of notice,” The Hollywood Reporter said.

“Nearly every studio owner seems to be thinking about AI, whether it’s for content, customer service, data analysis or other uses,” the Reporter said, noting that Disney is offering a base salary of $180,000, with bonuses and other compensation, for someone who has the “ambition to push the limits of what AI tools can create and understand the difference between the voice of data and the voice of a designer, writer or artist.” 

Netflix is seeking a $900,000-per-year AI product manager, The Intercept found, while the Reporter noted that Amazon is looking for a senior manager for Prime Video, with a base salary of up to $300,000, who will “help define the next big thing in localizing content, enhancing content, or making it accessible using state-of-the-art Generative AI and Computer Vision tech.”

As we all know, AI isn’t going anywhere and jobs will be affected. But the questions about how, when and why, and who gets compensated for what — from actors to writers — will depend on decisions made by humans. 

Actor Joseph Gordon-Levitt, who also created the online collaborative platform HitRecord and figured out a way to pay creatives for their contributions, wrote a worthwhile op-ed piece reminding everyone that AIs are trained on something — and that something is usually the work of others who should be acknowledged and paid for their work.

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.


