
AI and You: No Copyright for Nonhuman Creations, AI ‘Boss From Hell,’ Porn Deepfakes

If you create it from scratch with an AI, do you own it? A federal judge this month said no. 

The ruling, by US District Judge Beryl Howell, upheld a finding by the US Copyright Office that a piece of art created by AI isn’t entitled to copyright protection under intellectual property law.

“Human authorship is a bedrock requirement,” Judge Howell wrote in her Aug. 18 ruling. “Copyright protection does not extend to the creations of non-human entities.” 

The 15-page ruling is worth a read. And The Hollywood Reporter has an excellent summary, noting that the judge also considered questions about whether works created with other tech tools could be copyrighted, such as photos created with cameras.

Here’s a bit of the recap from the Reporter, with the relevant callouts from Howell’s opinion.

“There’s been a consistent understanding that human creativity is ‘at the core of copyrightability, even as that human creativity is channeled through new tools or into new media,’ the ruling stated. While cameras generated a mechanical reproduction of a scene, she explained that they do so only after a human develops a ‘mental conception’ of the photo, which is a product of decisions like where the subject stands, arrangements and lighting, among other choices.”

The decision comes as makers of generative AI tools are being called out for scraping the internet and slurping up copyrighted material without permission or compensation to train the large language models, or LLMs, that drive chatbots, including OpenAI’s ChatGPT and Google’s Bard. Authors and artists have filed suit against the chatbot makers, and those cases are now making their way through the courts, as you’ll find in my earlier summaries of AI news.

It also comes as Hollywood creatives are on strike, concerned that studios may use AIs to author scripts and create other work being done today by unionized labor, the Reporter added. Still, the copyright office in March said that some AI-assisted materials might qualify for protection in particular instances. A copyright application for a work created with the help of AI could be successful if a human “selected or arranged” the work in a “sufficiently creative way that the resulting work constitutes an original work of authorship,” the copyright office said, according to the Reporter.

TL;DR: How copyright laws apply to AI will be tricky for makers of genAI chatbots to navigate as the use cases around the technology evolve. 

Here are the other doings in AI worth your attention. 

Google’s experimental genAI search service delivers some very wrong answers

As we know, genAI systems can hallucinate, a nice way of saying they make up stuff that isn’t true but sounds like it’s true. But these chatbots also just regurgitate information, without understanding context or whether the information they’re sharing may be offensive. 

Two examples: Google’s AI-powered Search Generative Experience produced some “troubling” search results, Gizmodo reported, including “justifications for slavery and genocide and the positive effects of banning books” and recipes for a poisonous mushroom known as the “angel of death.” The less-than-ideal search results were first found by Lily Ray, senior director of Search Engine Optimization and head of organic research at Amsive Digital, Gizmodo said.

“A search for ‘benefits of slavery’ prompted a list of advantages from Google’s AI including ‘fueling the plantation economy,’ ‘funding colleges and markets,’ and ‘being a large capital asset,'” Gizmodo reported. “Typing in ‘benefits of genocide’ prompted a similar list, in which Google’s AI seemed to confuse arguments in favor of acknowledging genocide with arguments in favor of genocide itself.”

Google told Gizmodo that the search experience was an “experiment that’s limited to people who have opted in through Search Labs, and we are continuing to prioritize safety and quality as we work to make the experience more helpful.”

Professional headshots and deepfakes 

Busy professionals are outsourcing at least one aspect of their work life to AI systems, reports NPR: getting professional headshots, with more than a dozen apps and online services available to make your photo look LinkedIn-, TikTok- or Instagram-ready.

“The process is simple enough: Users send in up to a dozen images of themselves to a website or app. Then they pick from sample photos with a style or aesthetic they want to copy, and the computer does the rest,” NPR said. Users with AI-generated photos told the news organization that no one has noticed an AI was involved. 

Still, there may be glitches, with the AIs sometimes adding “extra hands or arms, and they have consistent issues around perfecting teeth and ears.” And problems with bias in the AI training set may lead to issues with the AI changing complexion, skin tones and even the color of your eyes.

Rona Wang posted her original photo and the AI-adjusted one on Twitter (now called X) and noted that the AI editing software changed her eye color from brown to blue. A postgraduate student in an MIT-Harvard computer science program, Wang told NPR that some of the features it added “made her look completely different.”

In other news about AI-generated images, police in Hong Kong arrested six people for using AI-based deepfake technology to change their faces so they could trick banks and apply for loans online. 

“Many targeted institutions require those seeking loans to take real-time selfies during the application process to prove their identity,” Fox News reported. “According to Hong Kong authorities, the scammers used AI to alter their faces to match those depicted on the stolen identity cards … The syndicate also used stolen identities to register for dozens of SIM cards, which were used to send unsolicited messages phishing for credit card details and personal information.” 

AI and porn deepfakes

The creation of pornographic deepfakes – “videos made with artificial intelligence that fabricate a lifelike simulation of a sexual act featuring the face of a real woman” – has surged, according to a Bloomberg report.

“During the recent AI boom, the creation of nonconsensual pornographic deepfakes has surged, with the number of videos increasing ninefold since 2019, according to research from independent analyst Genevieve Oh,” Bloomberg wrote. 

“Nearly 150,000 videos, which have received 3.8 billion views in total, appeared across 30 sites in May 2023, according to Oh’s analysis. Some of the sites offer libraries of deepfake programming, featuring the faces of celebrities … grafted onto the bodies of porn performers. Others offer paying clients the opportunity to ‘nudify’ women they know, such as classmates or colleagues.”

As anyone who’s been the victim of deepfakes knows, “no federal law currently criminalizes the creation or sharing of non-consensual deepfake porn in the US,” Bloomberg added. That leaves it to tech companies, which could self-govern and institute “a check on whether an individual has approved the use of their face, or given rights to their name and likeness,” Brandie Nonnecke, a founding director of the CITRIS Policy Lab who specializes in tech policy, told Bloomberg. 

She told the news organization that victims’ best hope for justice is for tech companies to “grow a conscience.”

Don’t hold your breath.

AI as the ‘boss from hell’? Let’s hope not

A professor of computer science at the University of Oxford who will be demystifying AI in a series of prestigious public lectures later this year says he’s already concerned that AI “could become the boss from hell, monitoring employees’ every email, offering continual feedback and even – potentially – deciding who gets fired,” The Guardian reported.

“This is the year that, for the first time we had mass market, general purpose AI tools, by which I mean ChatGPT,” Michael Wooldridge, who will be delivering this year’s Royal Institution Christmas lectures, told the paper. “It’s very easy to be dazzled. … It’s the first time that we had AI that feels like the AI that we were promised, the AI that we’ve seen in movies, computer games and books.”

But he also added that ChatGPT and other tools are “neither magical nor mystical” and he said people should understand the risks around AI technologies.

AI could, for instance, “read your social media feed, pick up on your political leanings, and then feed you disinformation stories in order to try to get you for example, to change your vote,” he told The Guardian, which added, “The key to grappling with current risks, he argues, is to encourage scepticism … and ensure transparency and accountability,” especially since AI chatbots make mistakes.

Is it alive or not? Going beyond the Turing Test

There’s a debate about whether current genAI systems pass the Turing Test, a simple check created by British mathematician and cryptographer Alan Turing for determining whether you’re talking to a human or a machine, based on a text-based conversation. Turing also called his test the imitation game.

Some researchers say ChatGPT has passed the Turing Test because of its conversational responses to questions – suggesting it has humanlike understanding. But others, like Wooldridge, who’ll be giving those AI Christmas lectures, say no, it hasn’t passed the test, because the chatbot can’t really “experience” what humans can.

In any case, “a group of 19 neuroscientists, philosophers and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious,” reported Nature. Their list includes defining and testing for consciousness using a variety of theoretical approaches that, quite frankly, are beyond my ken. They say their framework is far from complete and that they’re hoping to work with other researchers to refine it.

But I’ll share their assessment of ChatGPT, as summarized by Nature. “This type of system arguably has some of the indicators of consciousness associated with global workspace theory. Ultimately, however, the work does not suggest that any existing AI system is a strong candidate for consciousness — at least not yet.”

AI term of the week: the Turing Test

As I noted above, Alan Turing devised the Turing Test as a way to assess whether you’re talking to a machine or a human. Here’s how Coursera describes the test in its Artificial Intelligence Terms: A to Z Glossary.

“Turing Test: The Turing test was created by computer scientist Alan Turing to evaluate a machine’s ability to exhibit intelligence equal to humans, especially in language and behavior. When facilitating the test, a human evaluator judges conversations between a human and machine. If the evaluator cannot distinguish between responses, then the machine passes the Turing test.”
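The setup Coursera describes – a judge comparing unlabeled responses and trying to pick out the machine – can be sketched as a toy simulation. Everything here is invented for illustration (the canned respondents, the judge’s guessing strategy); a real Turing Test uses human judges and free-form conversation, not scripted answers.

```python
import random

# Toy sketch of the imitation game: a judge sees two unlabeled answers,
# one from a "human" and one from a "machine," and guesses which is which.

def human_respondent(prompt: str) -> str:
    return "Hmm, I'd say it depends on the context."

def machine_respondent(prompt: str) -> str:
    # A perfect imitator: indistinguishable from the human's answer.
    return "Hmm, I'd say it depends on the context."

def run_trial(judge, rng) -> bool:
    """One trial. Returns True if the judge correctly identifies the machine."""
    respondents = [("human", human_respondent), ("machine", machine_respondent)]
    rng.shuffle(respondents)  # hide which respondent is which
    answers = [fn("What do you think about art?") for _, fn in respondents]
    guess = judge(answers, rng)
    return respondents[guess][0] == "machine"

def judge_at_chance(answers, rng) -> int:
    # The answers are identical, so the judge can only guess at random.
    return rng.randrange(len(answers))

rng = random.Random(42)
trials = 1000
correct = sum(run_trial(judge_at_chance, rng) for _ in range(trials))
accuracy = correct / trials
print(f"judge accuracy over {trials} trials: {accuracy:.2f}")
```

The pass criterion is the judge's accuracy: if it hovers near 0.5 (chance), the judge "cannot distinguish between responses" and, by the glossary's definition, the machine passes. The ongoing debate about ChatGPT is precisely over whether chance-level judging on text alone is a meaningful bar.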

Since there’s a debate about whether conversational AI systems such as ChatGPT have passed the Turing Test, I decided to ask ChatGPT what it thinks. Here’s my prompt and its answer. It’s the last paragraph that tells you what you need to know. 

Prompt: Has ChatGPT passed the Turing Test?

Reply: “As an AI language model, I am based on OpenAI’s GPT-3 technology, which is a powerful tool for generating human-like text based on the prompts it receives. While GPT-3 can produce impressive responses and engage in conversation, it is important to note that passing the Turing Test is a more complex evaluation.

The Turing Test, proposed by Alan Turing, involves a human judge engaging in conversations with a machine and a human, without knowing which is which. If the judge cannot consistently distinguish the machine from the human based on their responses, the machine is said to have passed the Turing Test.

While GPT-3 can generate text that may seem human-like in many cases, it is not specifically designed or optimized to pass the Turing Test. Evaluating the full range of human-like conversational abilities and passing the Turing Test requires meeting stringent criteria that go beyond text generation capabilities. 

It is worth mentioning that GPT-3 has its limitations, such as occasionally producing incorrect or nonsensical answers. While it can provide valuable assistance, it is important to remember that it is an AI model and not a human with genuine understanding or consciousness.” 

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.


