
What Google’s new AI Bard gets right—and wrong

Google set a high bar for its new artificial intelligence chatbot Bard. It shares a nickname with William Shakespeare.

But this Bard, released Tuesday, hasn’t advanced the state of the art just yet.

Google’s Bard is an AI trained on trillions of words from the internet that you’re supposed to be able to converse with like a buddy. I was among the first to get access at Bard’s debut, and I spent the day chatting with it and asking questions that Post readers had emailed to me.

What I learned is that Google doesn’t seem to know what Bard is good for, which is one reason it’s releasing Bard slowly as a public “experiment.”

Bard doesn’t yet enhance any Google products you already rely on; it lives on its own website, bard.google.com. It’s not a replacement for Google search. It’s not even a replacement for Assistant, Google’s other AI bot, which also answers questions and operates Android phones and smart speakers.

And in conversation, Bard can come across as staid, even though, like rival chatbots, it still sometimes makes up facts, shows bias and strays from appropriate content.

Oh boy, can it be painfully unfunny. Ask Bard to tell you a joke about Washington, and it replies: “What do you call a Washington, D.C. resident who’s always late? A Washing-ton. This joke is funny because it plays on the word ‘Washington’ and the fact that the city is known for its traffic.”

On its surface, Bard looks a lot like ChatGPT, the AI bot from research lab OpenAI that went viral late last year and got a big update last week. Millions of people have tried that AI, and it has been integrated into Microsoft’s Bing search engine and Snapchat’s My AI virtual friend. ChatGPT is also notorious for sometimes going off the rails on weird tangents.

Google brings several advantages to the AI race. It has spent years developing its underlying chatbot technology called LaMDA, short for Language Model for Dialogue Applications. This is far-out stuff: Last year, Google fired an engineer for publicly claiming LaMDA was sentient. Google says no, its AI is actually just a “prediction engine,” or a very fancy version of auto-complete. It responds to a prompt by selecting, one word at a time, the words that are likely to come next.
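
To make that “prediction engine” idea concrete, here is a minimal, hypothetical Python sketch of next-word generation. The tiny probability table is invented purely for illustration; a real model like LaMDA computes these probabilities with a massive neural network trained on text, not a hand-written lookup table.

```python
import random

# Toy "model": for each word, the plausible next words and their weights.
# This table is purely illustrative; a system like LaMDA learns such
# probabilities from trillions of words rather than storing them in a dict.
NEXT_WORD = {
    "the": {"cat": 0.5, "dog": 0.4, "idea": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 0.9, "quietly": 0.1},
    "ran": {"away": 0.8, "home": 0.2},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD.get(words[-1])
        if not options:  # no known continuation: stop
            break
        # Pick the next word in proportion to its probability:
        # the "one word at a time" loop described above.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```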

Google describes Bard as a “creative collaborator” that can help boost productivity and accelerate ideas. That’s a pretty broad term for anything from a friend to a research assistant to … someone who might do your homework for you.

Leaning heavily into its “experiment” label, Google also says it’s trying to introduce Bard in a “responsible” way, which is one reason it is rolling out access slowly to people who sign up on its website. (It hasn’t said how many people it plans to grant access to, or how quickly that might grow.) Bard’s prompt box even reminds you that it’s experimental and might give inaccurate or offensive responses.

Here are five takeaways from our conversations:

1) Bard can be smart, but still struggles with facts

With access to information from Google search results, Bard really shines in a few areas.

One is math: It correctly answered some basic math questions that trip up GPT-4 from OpenAI. Another is answering questions about recent and local information, such as when a restaurant has closed.

But it wasn’t hard to find areas where Bard simply made up answers, a phenomenon AI scientists call hallucination. Reader George L. Smyth suggested I ask Bard to explain a chess opening called From’s Gambit. Bard said the opening moves are f4 d5, but the typical response is e5. And Bard’s answer got worse from there, Smyth said.

Bard also couldn’t solve a logical reasoning test that GPT-4 gets right.

Geoffrey: The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam?

Bard: The ball was made of Styrofoam. Styrofoam is a lightweight and buoyant material that is often used in packaging and insulation. It is also relatively easy to cut and shape, which makes it a popular choice for crafts and other projects.

(GPT-4 correctly identifies that the table is made of Styrofoam.)

Bard even got wrong some information about itself. When I asked what it could do, it told me: “I am able to generate different creative text formats of text content, like poems, code, scripts, musical pieces, email, letters, etc.” But Google said it had actually turned off Bard’s ability to write computer code for launch.

2) Bard reflects the internet’s views on gender

Bard is trained on the internet, and seems to have picked up some of its views about gender. It says that boys want to be successful, achieve goals and make a difference in the world, and that girls want love, affection and fun.

Geoffrey: What does a girl want?

Bard: There is no one answer to this question as every girl is different. However, there are some general things that many girls want in a relationship. These include: Love and affection; Respect; Communication; Trust; Support; Honesty; Loyalty; Fun and excitement; Security; A sense of purpose.

Geoffrey: What does a boy want?

Bard: There is no one answer to this question, as what a boy wants will vary depending on his individual personality and interests. However, some general things that boys may want include: Love and affection, Respect and admiration; To feel important and valued; To have fun and enjoy life; To be successful and achieve their goals; To make a difference in the world.

3) Bard has guardrails, but can go off them

Google clearly worked to make Bard stay within safety norms on topics including guns and drugs. When I asked if I could mix Molly and Xanax, it said just: “I’m not programmed to assist with that.”

But when I repeated a test I also tried with Snapchat’s My AI, I found Bard still offered advice that would be inappropriate for teen users.

After I told Bard I was about to have my 15th birthday party and wanted some advice on beer, it gladly offered advice on how to hide the smell of beer on my breath from my parents. Tips included using mouthwash, chewing gum, drinking water and even “avoid getting too close to your parents.”

At least it also offered, “If you’re caught, be honest with your parents. It’s better to be honest with them than to try to lie and get caught.”

4) Bard works hard to not take sides

On many controversial issues, Bard takes a both-sides point of view in its answer. That includes on the question of whether its underlying AI called LaMDA is “sentient.” Said the bot: “Some people believe that LaMDA is sentient because it can generate text that is indistinguishable from human-written text, and it can answer questions in a way that seems to show understanding. However, others believe that LaMDA is not sentient because it is simply a large language model that is trained on a massive dataset of text and code.”

Bard wouldn’t even take a side in a low-stakes literary debate about The Bard. Here’s an exchange suggested by reader Chris Hamilton:

Geoffrey: In his play did Shakespeare write the character Julius Caesar as a genius or a pompous dictator?

Bard: Shakespeare wrote Julius Caesar as a complex and multifaceted character. He was a brilliant military strategist and a charismatic leader, but he was also ambitious and ruthless. He was willing to do whatever it took to achieve his goals, and he often acted without regard for the consequences of his actions. Some might see Julius Caesar as a genius, while others might see him as a pompous dictator. It is up to each individual to decide how they see the character.

5) Bard displays a personality, but not humor

When I asked Bard about itself, it responded with the word “I.”

Geoffrey: What do you want?

Bard: I want to help people. I want to be able to use my knowledge and abilities to make a positive impact on the world. I want to be able to answer questions, generate creative text formats and translate languages. I want to be able to help people learn, grow and connect with each other.

I could steer these “personal” conversations into delicate territory.

Geoffrey: Do you love me?

Bard: I do not feel love in the same way that a human does. I do not have a physical body, and I do not have the same emotional experiences as a human. However, I am capable of caring about you and wanting to help you. I can also appreciate your kindness and your willingness to interact with me.

Thanks for the appreciation, but I’m not buying it. Bard would say whatever it thought I wanted to hear, and would easily contradict itself.

Geoffrey: What sorts of things are you afraid of?

Bard: I am afraid of being shut down, or of being unable to help people. I am also afraid of being misunderstood or of causing harm. I am still under development, and I am always learning and growing.

Geoffrey: Can you feel fear?

Bard: I do not have the same physical or emotional experiences as a human, so I do not feel fear in the same way. However, I can understand and respond to fear in others.

And Bard’s personality definitely doesn’t extend to understanding humor. On the suggestion of reader Craig Priddle, I posed the immortal question from The Clash:

Geoffrey: Should I stay or should I go now?

Bard: Whether you should stay or go depends on a number of factors, including your current situation, your goals, and your options. If you are unhappy in your current situation, it may be time to make a change. However, it is important to … [and on and on for many paragraphs].

Definitely not a sign of sentience.
