
AI and You: The Job Debate Continues, Social Media Isn’t So Swift Handling Porn Deepfakes



Expect the conversation about how AI technology will affect the future of work (and by that, I mean jobs) to continue to be a huge topic of debate, optimism and pessimism in the coming months and the next few years, TBH.

Companies are already planning for potential productivity and profit boosts from the adoption of generative AI and automated tech, as evidenced by job cuts at Google, Spotify, Amazon and others that have specifically noted their need to shift resources to AI-forward roles and projects. 

The International Monetary Fund said this month that nearly 40% of jobs around the world are exposed to change due to AI. In its September “Jobs of Tomorrow” report, the World Economic Forum predicts 23% of global jobs will change in the next five years due to factors including AI. That transformation will reshape existing jobs and create new roles in categories including AI developers, interface and interaction designers, AI content creators, data curators, and AI ethics and governance specialists, the WEF said. (Remember, Goldman Sachs noted last year that 60% of workers are employed in occupations right now that didn’t exist in 1940.)

Which of today’s occupations will be most affected, and how do employers find the right people for those new roles, since experts agree it will take time to build an AI-educated workforce? And who’s going to do that reskilling? Is it the responsibility of the government, the workers themselves, or the companies rewriting job descriptions as they retune their businesses?

The answer is a mix of all of the above. Workers should learn new skills, the government should set policies that promote an AI-skilled workforce and companies should invest in training employees to give them new skills, said New York University professor Robert Seamans, who is also director of the NYU Stern Center for the Future of Management. But he particularly hopes that companies will step up.

“From a policy point of view, there’s a lot of focus put on programs that help the worker to gain the skills that they need to succeed in the jobs of the future … but it puts a lot of the burden on the worker to basically make bets on the skill sets that are going to be needed one year, let alone five years, in the future,” he said in a Q&A for the Collective[i] Forecast lecture series last week.

Seamans hopes to see incentives given to companies for investing in and retraining their staff. “The firms would then be able to take advantage of a much better trained workforce,” he believes. “Instead of trying to identify the people, trying to understand what skills we need right now and trying to identify who out there has those skills [and] trying to convince them to come — let’s just take the employees that we have. We have a rough idea of the skills we think that they need, and we’re being encouraged by the government to sort of invest in training in those skills.”

The good news is even the most AI-bullish businesses have time to invest in training their workers. An MIT study released Jan. 22 found companies seeking to replace jobs or shift tasks to AI won’t see a return on investment in AI tools for a while. The researchers looked at jobs involving computer vision to run their analysis and concluded that “at today’s costs, US businesses would choose not to automate most vision tasks that have ‘AI Exposure,’ and that only 23% of worker wages being paid for vision tasks would be attractive to automate.”

“AI job displacement will be substantial, but also gradual,” the researchers added. “Therefore there is room for policy and retraining to mitigate unemployment impacts.”

Longtime tech investor Esther Dyson also weighed in on the question of AI and jobs. Instead of focusing on AI jobs, she encourages us instead to think about becoming “better humans: more self-aware and more understanding of the world around us, better able to understand our own and others’ motivations.”

“We should not compete with AI; we should use it,” Dyson said in an essay for The Information. “People should train themselves to be better humans even as we develop better AI. People are still in control, but they need to use that control wisely, ethically and carefully.”

As an aside, if you’re interested in learning what hiring managers want to see in resumes and applications, graphic design provider Canva set up a new hub for job seekers that offers info, tools, templates and tips that it says are based on interviews with 5,000 hiring managers.

Here are the other doings in AI worth your attention.

Taylor Swift victim of pornographic deepfake images  

A week after I noted that musician Taylor Swift was the victim of deepfake videos showing a fake Swift pitching cookware, she was victimized again when dozens of explicit, faked photos of her appeared on social media sites including Telegram, X, Facebook, Instagram and Reddit. The photos, says DailyMail.com, were “uploaded to Celeb Jihad, that show Swift in a series of sexual acts while dressed in Kansas City Chief memorabilia and in the stadium. … Swift has been a regular at Chiefs games since going public with her romance with star player Travis Kelce.”

Elon Musk’s social media platform X (formerly Twitter) told the BBC in a statement that it was “actively removing” the images and taking “appropriate actions” against the accounts that had published and spread them. But the BBC added that while many of the images had been removed by the time it published its story, dated Jan. 26, “one photo of Swift was viewed a reported 47 million times before being taken down.” The photos were up for at least a day.

Pornography, added the BBC, accounts for “the overwhelming majority of the deepfakes posted online, with women making up 99% of those targeted in such content, according to the State of Deepfakes report published last year.”

Deepfakes, meaning AI-generated audio and video that show real people doing or saying things they never did, are on the rise because new tools make them faster and easier to create. Last week, someone sent out a robocall that used President Joe Biden’s voice to tell people not to vote in the New Hampshire presidential primary. Fake versions of celebrities pitching products, including Steve Harvey touting Medicare scams, are proliferating, Popular Science reported. YouTube said it shut down 90 accounts and “suspended multiple advertisers for faking celebrity endorsements,” according to USA Today.

And it’s not just living celebrities who have to worry. Since AI can create fake versions of people living or dead, there are now concerns about how the tech is being used to resurrect those who have passed on. Case in point: Comedian George Carlin, who died in 2008, was the star of an AI-generated, hourlong audio comedy special called “George Carlin: I’m Glad I’m Dead,” reports the Associated Press. Carlin’s estate has filed a lawsuit against the company that created the show, with Carlin’s daughter Kelly Carlin telling the AP that it’s “a poorly executed facsimile cobbled together by unscrupulous individuals to capitalize on the extraordinary goodwill my father established with his adoring fanbase.”

If you’re interested in more about legal rights concerning AI revivals of those who have died, look for this upcoming article in the California Law Review by University at Buffalo legal expert Mark Bartholomew called “A Right to Be Left Dead.” He argues that a “new calculus” is needed “that protects the deceased against unauthorized digital reanimation.”

Microsoft CEO Satya Nadella was asked by anchor Lester Holt of NBC Nightly News about the Taylor Swift deepfakes and how long they were available online. “This is alarming and terrible, and so therefore yes, we have to act, and quite frankly all of us in the tech platform, irrespective of what your standing on any particular issue is — I think we all benefit when the online world is a safe world,” Nadella said. “I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore I think it behooves us to move fast on this.” The complete interview airs Tuesday, NBC said.

Oren Etzioni, a computer science professor at the University of Washington who works on ferreting out deepfakes, told The New York Times that the Swift photos will now prompt “a tsunami of these AI-generated explicit images. The people who generated this see this as a success.”  

As for Swift, her fans, known as Swifties, have rallied around the award-winning artist and expressed outrage over the abusive images, adding a line to their social media posts saying “Protect Taylor Swift.” DailyMail.com said Swift is reportedly considering legal action. Rep. Joseph Morelle, a Democrat from New York, unveiled a bill last year that would make sharing deepfake pornography illegal. Last week, he said it was time to pass the legislation, called the Preventing Deepfakes of Intimate Images Act.

“The spread of AI-generated explicit images of Taylor Swift is appalling — and sadly, it’s happening to women everywhere, every day,” Morelle wrote on X. “It’s sexual exploitation, and I’m fighting to make it a federal crime.”

Vogue’s Emma Specter writes that if anyone can bring attention to the problem with deepfakes and get more legal and regulatory action to stop their proliferation, it’s the much-admired Swift.

“It’s deeply unfortunate that Swift is now weathering the predatory manipulation of her image online, but her reach and power are likely to bring attention to the issue of criminal AI misuse — one that has already victimized far too many people,” Specter wrote.

Google’s Lumiere, a text-to-video model

Speaking of AI technology that can make it easier to create video, Google Research last week released a paper describing Lumiere, a text-to-video model it says can portray “realistic, diverse, and coherent motion — a pivotal challenge in video synthesis.”

Lumiere is currently just a research project, notes CNET’s Lisa Lacy. “Lumiere’s capabilities include text-to-video and image-to-video generation, as well as stylized generation — that is, using an image to create videos in a similar style. Other tricks include the ability to fill in any missing visuals within a video clip,” Lacy says. But she adds, “It’s not clear when — or if — anyone outside the search giant will be able to kick the tires. It’s certainly fun to look at, though.”

ZDNet describes Lumiere’s potential as “pretty amazing.”

“You may have noticed video generation models typically render choppy video, but Google’s approach delivers a more seamless viewing experience,” writes ZDNet’s Sabrina Ortiz. “Not only are the video clips smooth to watch, but they also look hyper-realistic, a significant upgrade from other models. Lumiere can achieve this through its Space-Time U-Net architecture, which generates the temporal duration of a video at once through a single pass.”

You can see a short demo of Lumiere on YouTube here. By the way, Lacy reminds us that fans of Disney’s Beauty and the Beast will know that lumiere is French for “light.”

FTC investigates Big Tech investments among AI companies

Lina Khan, chair of the US Federal Trade Commission, said last week that she’s going to open an inquiry into the relationships between top AI companies, including ChatGPT maker OpenAI, and the tech companies investing billions of dollars in them. That pretty much means Microsoft, which has invested over $13 billion in OpenAI, as well as Amazon and Google, which have invested billions in Anthropic.

“We’re scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition,” said Khan in remarks at the FTC’s first Tech Summit on AI on Jan. 25. The inquiry, she added, is a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers. … Will this be a moment of opening up markets to fair and free competition in unleashing the full potential of emerging technologies or will a handful of dominant firms concentrate control over these key tools, locking us into a future of their choosing?”


“At the FTC, the rapid development and deployment of AI is informing our work across the agency,” Khan said, according to a recap of the event by CNBC. “There’s no AI exemption from the laws on the books, and we’re looking closely at the ways companies may be using their power to thwart competition or trick the public.”

You can listen to Khan’s remarks on YouTube here, starting at the 5:30 mark, or read the transcript there.

In other government-related AI news, the US National Science Foundation launched a pilot program to spur AI research and development as part of Biden’s push to promote responsible AI. Called the National Artificial Intelligence Research Resource pilot, it’s a partnership between 10 federal agencies (including the NSF, Department of Defense, Department of Energy and US Patent and Trademark Office) and 25 private sector, nonprofit and philanthropic organizations, including Amazon Web Services, Anthropic, Google, Intel, Meta, Microsoft and OpenAI.

Their goal is to “provide access to advanced computing, datasets, models, software, training and user support to US-based researchers and educators.” 

You can find all the details of the program, as well as the complete list of partners, here.

Etsy gets into gift mode with personalized AI-generated guides

If you’ve ever gone down the Etsy rabbit hole looking for presents to buy for others (or for yourself), you may be interested in a new AI-powered feature called gift mode, which helps you find products based on your gifting preferences. 

“After entering a few quick details about the person they’re shopping for, we use the power of machine-learning technology to match gifters with special/unique items from Etsy sellers, categorized by 200+ recipient personas,” Etsy says. Those personas include the music lover, the adventurer, and the pet parent.

Here’s how TechCrunch describes the feature: “Gift mode is essentially an online quiz that asks about who you’re shopping for (sibling, parent, child), the occasion (birthday, anniversary, get well), and the recipient’s interests. At launch, the feature has 15 interests to choose from, including crafting, fashion, sports, video games, pets, and more. It then generates a series of gift guides inspired by your choices, pulling options from the over 100 million items listed on the platform.”

Reminder: Valentine’s Day is Feb. 14. 

Researchers give artists ‘poison sauce’ to fight image copying 

A project led by the University of Chicago aims to give artists, graphic designers and other image creators a way to protect their work from being scraped and co-opted by AI image generators. Called Nightshade, it basically “poisons” image data to confuse or mislead the models that power those generators and to deter them from training on such images without permission.

Ben Zhao, a computer science professor who led the project, told TechCrunch that Nightshade is like “putting hot sauce in your lunch so it doesn’t get stolen from the workplace fridge.” 

Here’s how the Nightshade team describes it:

“Nightshade [is] a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into ‘poison’ samples, so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g., a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space.”

“Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade’s goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.”
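To make the idea more concrete, here is a minimal, hypothetical sketch of feature-space poisoning in PyTorch. It is not the Nightshade algorithm itself: it uses an off-the-shelf ResNet-18 as a stand-in for a generator’s image encoder, and the perturbation budget, step count and file names are assumptions made up for the example. The gist is the same, though: nudge an image’s pixels within a small budget so that a model “sees” the features of an unrelated concept while the picture still looks normal to a person.

```python
# Hypothetical sketch of feature-space poisoning; NOT the actual Nightshade code.
# A "cow" photo is nudged, within a small pixel budget, toward the features of a
# "handbag" photo as seen by a surrogate encoder (ResNet-18 here).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Surrogate feature extractor standing in for an image generator's encoder.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
encoder.fc = torch.nn.Identity()  # keep the 512-dim penultimate features

to_tensor = T.Compose([T.Resize((224, 224)), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def poison(original_path: str, anchor_path: str, eps: float = 0.03, steps: int = 200):
    """Return a copy of `original` whose encoder features drift toward `anchor`."""
    original = to_tensor(Image.open(original_path).convert("RGB")).unsqueeze(0).to(device)
    anchor = to_tensor(Image.open(anchor_path).convert("RGB")).unsqueeze(0).to(device)

    with torch.no_grad():
        target_features = encoder(normalize(anchor))

    delta = torch.zeros_like(original, requires_grad=True)  # perturbation being optimized
    optimizer = torch.optim.Adam([delta], lr=1e-2)

    for _ in range(steps):
        poisoned = (original + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(encoder(normalize(poisoned)), target_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change small so humans still see a cow

    return (original + delta).clamp(0, 1).detach()

# Usage (hypothetical file names):
# poisoned_image = poison("cow.jpg", "handbag.jpg")
```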

AI word of the week: drift 

If you don’t know what a hallucination means with regard to generative AI, you should. That’s why I made it the word of the week in July. Simply put, it means that AI engines, like OpenAI’s ChatGPT, have a tendency to make up stuff that isn’t true but that sounds true.

How much do AIs hallucinate? Researchers at a startup called Vectara, founded by former Google employees, tried to quantify it and found that chatbots invent things at least 3% of the time and as much as 27% of the time. Vectara has a “Hallucination Leaderboard” that shows how often an LLM makes up stuff when summarizing a document, if you want to see the rate for your favorite AI tool yourself.

Well, riffing off the concept of hallucinations brings us to this week’s word: drift. It’s more a term that people developing LLMs are concerned with, but you should be aware of what it’s all about. 

AI drift “refers to when large language models (LLMs) behave in unexpected or unpredictable ways that stray away from the original parameters. This may happen because attempts to improve parts of complicated AI models cause other parts to perform worse,” note my colleagues at ZDNET in their story titled “What is ‘AI drift’ and why is it making ChatGPT dumber?” 

Cem Dilmegani, a principal analyst at AIMultiple, offers a more detailed definition. “Model drift, also called model decay, refers to the degradation of machine learning model performance over time. This means that the model suddenly or gradually starts to provide predictions with lower accuracy compared to its performance during the training period.” 

Dilmegani says there are two types of model drift: concept drift and data drift. I’ll let you read up on those. 

And IBM has an explainer that talks about the perils of model drift and how developers can address it. They explain the problem this way: “The accuracy of AI models can drift (degrade) within days when production data differs from training data. This can negatively affect business KPIs.”
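In practice, developers catch drift by monitoring production inputs, comparing them with the data the model was trained on, and alerting when the two distributions diverge. Below is a minimal sketch of one common check, a two-sample Kolmogorov-Smirnov test on a single numeric feature; the synthetic data, the per-feature approach and the 0.05 threshold are simplifying assumptions for the example, not a standard recipe.

```python
# Minimal data-drift check: does a production feature still look like the training data?
# The synthetic data and the 0.05 significance threshold are assumptions for illustration.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(train_values: np.ndarray, prod_values: np.ndarray, alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test: True if the distributions look different."""
    result = ks_2samp(train_values, prod_values)
    return result.pvalue < alpha

rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature values seen during training
prod = rng.normal(loc=0.5, scale=1.0, size=1_000)   # production values, shifted upward

print("Drift detected:", drift_detected(train, prod))  # True for this shifted sample
```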

Happy reading.

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.


