
The year AI became eerily humanlike

Craving an image of a dachshund puppy in space in the style of painted glass?

Before, you might have needed to commission an artist. Now you can simply type that request into a text-to-image generator, and out pops an AI-generated image of such high quality that even AI doubters concede it's impressive — though they still note their many concerns.

This year saw an explosion of text-to-image generators.

DALL-E 2, created by OpenAI and named after the painter Salvador Dalí and Disney Pixar's WALL-E, shook the internet after launching in July. In August, the start-up Stability AI released Stable Diffusion, essentially an anti-DALL-E with fewer restrictions on how it could be used. The research lab Midjourney released its own generator over the summer, and one of its images sparked controversy in August when it won an art competition at the Colorado State Fair.

What these models do isn't new, but how they do it is, experts said, and that is what drove the sharp increase in image quality. They were trained with a novel method called diffusion, which essentially breaks down the images a model is trained on and then reverses that process to generate new ones, making the models faster, more flexible and better at photorealism.
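The break-down-then-reverse idea can be sketched numerically. Below is a minimal toy of the forward "noising" half of diffusion on a stand-in signal; the noise schedule, step count, and array sizes are illustrative assumptions, not any particular model's values, and a real system would learn a neural network to predict the noise rather than reuse it directly.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100                               # number of diffusion steps (assumption)
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule (assumption)
alpha_bars = np.cumprod(1.0 - betas)  # cumulative fraction of signal kept

x0 = rng.standard_normal(8)           # stand-in for a flattened image

def noised(x0, t, eps):
    """Sample x_t: shrink the original signal and mix in Gaussian noise."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

eps = rng.standard_normal(x0.shape)
x_mid = noised(x0, T // 2, eps)       # partly destroyed image
x_last = noised(x0, T - 1, eps)       # close to pure noise

# A trained model would predict eps from (x_t, t). With a perfect noise
# prediction, the original signal is recoverable by inverting the mix:
recovered = (x_last - np.sqrt(1.0 - alpha_bars[T - 1]) * eps) \
            / np.sqrt(alpha_bars[T - 1])
assert np.allclose(recovered, x0)
```

Generation runs this reversal step by step from random noise, with the network's noise prediction standing in for `eps` at each step — which is why the same procedure can start from nothing but static and end at a coherent image.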

Predictably, experts said, the surge in use came with problems. Artists objected that the models were trained on images they had created and posted online, without any of the profits flowing back to them. People quickly used the tools to create images of school shootings, war photos and even child sexual abuse material, according to posts in a Reddit group and a Discord channel.
