
A.I. Can Make Art That Feels Human. Whose Fault Is That?



This was the year — ask your stockbroker, or the disgraced management of Sports Illustrated — that artificial intelligence went from a dreamy projection to an ambient menace and perpetual sales pitch. Does it feel like the future to you, or has A.I. already taken on the staleness and scamminess of the now-worthless nonfungible token?

Artists have been deploying A.I. technologies for a while, after all: Ed Atkins, Martine Syms, Ian Cheng and Agnieszka Kurant have made use of neural networks and large language models for years, and orchestras were playing A.I.-produced Bach variations back in the 1990s. I suppose there was something nifty the first time I tried ChatGPT — a slightly more sophisticated grandchild of Eliza, the ’60s therapist chatbot — though I’ve barely used it since then; the hallucinatory falsehoods of ChatGPT make it worthless for journalists, and even its tone seems an insult to my humanity. (I asked: “Who was the better painter, Manet or Degas?” Response: “It is not appropriate to compare artists in terms of ‘better’ or ‘worse,’ as art is a highly subjective field.”)

Still, the explosive growth of text-to-image generators such as Midjourney, Stable Diffusion and Dall-E (the last is named after the corniest artist of the 20th century; that should have been a clue) provoked anxieties that A.I. was coming for culture — that certain capabilities once understood as uniquely human now faced computational rivals. Is this really the case?

Without specific prompting, these A.I. images default to some common aesthetic characteristics: highly symmetrical composition, extreme depth of field, and sparkly and radiant edges that pop on a backlit smartphone screen. Figures have the waxed-fruit skin and deeply set eyes of video game characters; they also often have more than 10 fingers, though let’s hold out for a software update. There is little I’d call human here, and any one of these A.I. pictures, on its own, is an aesthetic irrelevance. But collectively they do signal a hazard we are already facing: the devaluation and trivialization of culture into just one more flavor of data.

A.I. cannot innovate. All it can produce are prompt-driven approximations and reconstitutions of preexisting materials. If you believe that culture is an imaginative human endeavor, then there should be nothing to fear, except that — what do you know? — a lot of humans have not been imagining anything more substantial. When a TikTok user in April posted an A.I.-generated song in the style (and voices) of Drake and the Weeknd, critics and copyright lawyers bayed that nothing less than our species’s self-definition was under threat, and a simpler sort of listener was left to wonder: Was this a “real” song? (A soulless engine that strings together a bunch of random formulas can pass as Drake — hard to believe, I know….)

An apter question is: Why is the music of these two cocksure Canadians so algorithmic to begin with? And another: What can we learn about human art, human music, human writing, now that the good-enough approximations of A.I. have put their bareness and thinness on full display?

As early as 1738, as the musicologist Deirdre Loughridge writes in her engaging new book "Sounding Human: Music and Machines, 1740/2020," Parisian crowds were marveling at a musical automaton equipped with bellows and pipes, capable of playing the flute. They loved the robot, and happily accepted that the sounds it produced were "real" music. An android flutist was, on its own, no threat to human creativity — but it impelled philosophers to understand humans and machines as perpetually entangled, and artists to raise their game. To do the same in the 21st century will require us to take seriously not only what capabilities we share with machines, but also what differentiates us, or should.

I remain profoundly relaxed about machines passing themselves off as humans; they are terrible at it. Humans acting like machines — that is a much likelier peril, and one that culture, as the supposed guardian of (human?) virtues and values, has failed to combat these last few years.

Every year, our art and entertainment has resigned itself further to recommendation engines and ratings structures. Every year our museums and theaters and studios have further internalized the tech industry’s reduction of human consciousness into simple sequences of numbers. A score out of 100 for joy or fear. Love or pain, surprise or rage — all just so much metadata. Insofar as A.I. threatens culture, it’s not in the form of some cheesy HAL-meets-Robocop fantasy of out-of-control software and killer lasers. The threat is that we shrink ourselves to the scale of our machines’ limited capabilities; the threat is the sanding down of human thought and life to fit into ever more standardized data sets.

It sure seems that A.I. will accelerate or even automate the composition of elevator music, the production of color-popping, celebratory portraiture, the screenwriting of multiverse coming-of-age quests. If so, well, as Cher Horowitz’s father says in “Clueless,” I doubt anybody would miss you. These were already the outputs of “artificial” intelligences in every way that matters — and if what you write or paint has no more profundity or humanity than a server farm’s creations, then surely you deserve your obsolescence.

Rather than worry about whether bots can do what humans do, we would do much better to raise our cultural expectations of humans: to expect and demand that art — even and especially art made with the help of new technologies — testify to the full extent of human powers and human aspirations. The Ukrainian composer Heinali, whose album "Kyiv Eternal" I've held close to me throughout 2023, reconstructed the wartime capital through beautiful reconciliations of medieval plainsong and contemporary synthesizers. The sculptures of Nairy Baghramian, which I chased down this year in Mexico City, in Aspen, in the garden at MoMA and on the facade of the Met, deploy the most contemporary methods of fabrication for the most fragile and tender of forms. These artists are not afraid of technology. They are not replaceable by technology, either. Technologies are tools for human flourishing.

I spent a lot of this year thinking about stylistic exhaustion, and the pervading sense that, in digital times, culture is going nowhere fast. The worries that accompanied artificial intelligence in 2023 reaffirmed this fear: that we've lost something vital between our screens and our databases, that content has conquered form and novelty has had its day. If our culture has grown static, then might we call our dissembling chatbots and insta-kitsch image engines what they are: mirrors of our diminished expectations?

Seen that way, I might even allow myself to wonder if A.I. might be the best thing to happen to culture in years — that is, if these perpetual mediocrity machines, these supercharged engines of cliché, end up pressing us to revalue the things humans alone can do. Leaving behind "a narrow fixation on how humanly machines can perform," as Loughridge writes, now is the time to figure out "what it means to work with and exist in relation to them."

To make something count, you are going to have to do more than just rearrange precedent images and words, like any old robot. You are going to have to put your back into it, your back and maybe also your soul.
