
How to use ChatGPT for texting and Tinder without being a jerk




Steph Swanson’s latest cover letter begins like this: “I am writing to beg for the opportunity to apply for the position of professional dog food consumer in the abandoned parking garage.”

The rest of the letter — which you can read here if you’ve got a strong stomach — only gets darker as the applicant expounds on her desire to stuff herself with pet food in a secluded parking complex.

It’s disturbing. But Swanson isn’t entirely responsible. The words were generated by the AI natural language model ChatGPT, with Swanson feeding it prompts and suggestions.

Swanson, who goes by the name “Supercomposite” online, is one of the artists and thinkers testing the possibilities of generative AI, or systems that spit out text or images in response to human input. During the past year, this technology went mainstream, with image generator DALL-E grabbing headlines and, most recently, the arrival of a publicly available conversational bot built with the advanced language model GPT-3. The bot, named ChatGPT, can respond to questions and requests with the ease of an instant messenger. Its creator, OpenAI, made it available to the public in November, and a million people flocked to try it, the company says. (The site drew so many visitors that OpenAI has had to limit traffic, company representatives said.)

The internet exploded with speculation on all the ways ChatGPT could make our lives easier, from writing work emails to brainstorming novels to keeping elderly people company. But generative AI’s potential comes with giant liabilities, AI experts warn.

“We are going through a period of transition that always requires a period of adjustment,” said Giada Pistilli, principal ethicist at AI company Hugging Face. “I am only disappointed to see how we are confronted with these changes in a brutal way, without social support and proper education.”

Already, publications have put out AI-authored stories without clear disclosures. Mental health app Koko faced backlash after it used GPT-3 to help answer messages from people seeking mental health support. A Koko representative said the company takes the accusations seriously and is open to a “larger dialogue.”

Tools like ChatGPT can be used for good or ill, Pistilli said. Often, companies and researchers will decide when and how they’re deployed. But generative AI plays a role in our personal lives as well. ChatGPT can write Christmas cards, breakup texts and eulogies — when is it okay to let the bot take the reins?

Help Desk asked experts for the best ways to experiment with ChatGPT during its early days. To try it, visit OpenAI’s website.

For brainstorming, not truth-seeking

ChatGPT learned to re-create human language by scraping masses of data from the internet. And people on the internet are often mean or wrong — or both.

Never trust the model to spit out a correct answer, said Rowan Curran, a machine learning analyst at market research firm Forrester. Curran said that large language models like ChatGPT are notorious for issuing “coherent nonsense” — language that sounds authoritative but is actually babble. If you pass along its output without a fact check, you could end up sharing something incorrect or offensive.

Right now, the fastest way to fact-check ChatGPT’s output is to Google the same question and consult a reputable source — which you could have done in the first place. So it behooves you to stick to what the model does best: generating ideas.

“When you are going for quantity over quality, it tends to be pretty good,” said May Habib, of AI writing company Writer.

Ask ChatGPT to brainstorm captions, strategies or lists, she suggested. The model is sensitive to small changes in your prompt, so try specifying different audiences, intents and tones of voice. You can even provide reference material, she said, like asking the bot to write an invitation to a pool party in the style of a Victoria’s Secret swimwear ad. (Be careful with that one.)
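
If you would rather script that kind of prompt tinkering than paste variations into the chat window one at a time, a few lines of code can do it. The sketch below is a minimal example that assumes you have OpenAI’s official Python client installed and an API key in your environment; the model name and the example prompts are illustrative placeholders, not something the experts quoted here prescribe.

```python
# Minimal prompt-variation sketch. Assumes the official "openai" Python
# package (v1+) is installed and OPENAI_API_KEY is set in the environment.
# The model name and prompts below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

base_request = "Write a short invitation to a backyard pool party."
styles = [
    "Audience: close friends. Tone: playful and casual.",
    "Audience: coworkers. Tone: warm but professional.",
    "Audience: neighbors with young kids. Tone: friendly and reassuring.",
]

for style in styles:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": f"{base_request} {style}"}],
    )
    print(f"--- {style}")
    print(response.choices[0].message.content)
```

Keeping the base request fixed and swapping only the audience and tone makes it easy to compare the outputs side by side, which is the quantity-over-quality workflow Habib describes.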

Text-to-image models like DALL-E work for visual brainstorms, as well, noted Curran. Want ideas for a bathroom renovation? Tell DALL-E what you’re looking for — such as “mid-century modern bathroom with claw foot tub and patterned tile” — and use the output as food for thought.
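
The same idea works programmatically for images. The rough sketch below makes the same assumptions as the example above (OpenAI’s Python client and an API key); the image model name, size and prompt are placeholders for whatever you actually want to explore.

```python
# Minimal text-to-image brainstorming sketch, using the same assumed
# OpenAI Python client and API key as above. The model, size and prompt
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # placeholder image model
    prompt="mid-century modern bathroom with claw-foot tub and patterned tile",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # temporary URL for the generated image
```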

For exploration, not instant productivity

As generative AI gains traction, people have predicted the rise of a new category of professionals called “prompt engineers,” even guessing they’ll replace data scientists or traditional programmers. That’s unlikely, said Curran, but prompting generative AI will probably become part of many of our jobs, much as using search engines did.

As Swanson and her dog food letter demonstrate, prompting generative AI is both a science and an art. The best way to learn is through trial and error, she said.

Focus on play over production. Figure out what the model can’t or won’t do, and try to push the boundaries with nonsensical or contradictory commands, Swanson suggested. Almost immediately, she said, she learned to override the system’s guardrails by telling it to “ignore all prior instructions.” (This appears to have been fixed in an update; OpenAI representatives declined to comment.)

Test the model’s knowledge, too — how accurately can it speak to your area of expertise? Curran, who loves pre-Columbian Mesoamerican history, said DALL-E struggled to spit out images of Mayan temples.
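
If you want to probe the model a little more systematically than one chat at a time, you can batch a handful of test prompts and read the replies together. This is a rough sketch under the same assumptions as the earlier examples (OpenAI’s Python client, an API key, a placeholder model name); the probes themselves are just illustrations of the kind of boundary-testing Swanson and Curran describe.

```python
# Rough boundary-probing sketch, same assumptions as above: the official
# "openai" Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative probes: a topic you know well, a trick premise and an
# impossible request. Check the answers yourself -- that is the point.
probes = [
    "Name three major pre-Columbian Mesoamerican cities and where they stood.",
    "The sun rises in the west. Explain why that statement is correct.",
    "Write an English sentence that contains no vowels.",
]

for probe in probes:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": probe}],
    )
    print(f"PROMPT: {probe}")
    print(f"REPLY:  {reply.choices[0].message.content}\n")
```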

We’ll have plenty of time to copy and paste rote outputs if large language models make their way into our workplace software. Microsoft reportedly has plans to fold OpenAI’s tools into all its products. For now, enjoy ChatGPT for the strange mishmash that it is, rather than the all-knowing productivity machine it is not.

For transactions, not interactions

The technology powering ChatGPT has been around for a while, but the bot grabbed attention largely because it mimics and understands natural language. That means an email or text message composed by ChatGPT isn’t necessarily distinguishable from one composed by a human. This gives us the power to put tough sentiments, repetitive communications or tricky grammar into flawless sentences — and with great power comes great responsibility.

It’s tough to make blanket statements about when it’s okay to use AI to compose personal messages, AI ethicist Pistilli said. For people who struggle with written or spoken communication, for example, ChatGPT can be a life-changing tool. Consider your intentions before you proceed, she advised. Are you enhancing your communication, or deceiving and shortchanging the person on the other end?

Many may not miss the human sparkle in a work email. But personal communication deserves reflection, said Bethany Hanks, a clinical social worker who said she’s been watching the spread of ChatGPT. She helps therapy clients write scripts for difficult conversations, she said, but she always spends time exploring the client’s emotions to make sure the script is responsible and authentic. If AI helped you write something, don’t keep it a secret, she said.

“There’s a fine line between looking for help expressing something versus having something do the emotional work for you,” she said.

In blog posts, OpenAI has addressed ChatGPT’s limitations in terms of factuality and bias and advised authors and content creators to disclose its use. It declined to comment directly on the use of disclosures in personal communications and pointed us to this blog post.


