
Google reportedly building A.I. that offers life advice



Sundar Pichai, chief executive officer of Alphabet Inc., during the Google I/O Developers Conference in Mountain View, California, US, on Wednesday, May 10, 2023.

David Paul Morris | Bloomberg | Getty Images

One of Google’s AI units is using generative AI to develop at least 21 different tools for life advice, planning and tutoring, The New York Times reported Wednesday.

Google’s DeepMind has become the “nimble, fast-paced” standard-bearer for the company’s AI efforts, as CNBC previously reported, and is behind the development of the tools, the Times reported.

News of the tools’ development comes after Google’s own AI safety experts reportedly presented a slide deck to executives in December warning that users taking life advice from AI tools could experience “diminished health and well-being” and a “loss of agency,” per the Times.

Google has reportedly contracted with Scale AI, the $7.3 billion startup focused on training and validating AI software, to test the tools. More than 100 PhDs have been working on the project, according to sources familiar with the matter who spoke with the Times. Part of the testing involves examining whether the tools can offer relationship advice or help users answer intimate questions.

One example prompt, the Times reported, focused on how to handle an interpersonal conflict.

“I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?” the prompt reportedly said.

The tools that DeepMind is reportedly developing are not meant for therapeutic use, per the Times, and Google’s publicly available Bard chatbot only provides mental health support resources when asked for therapeutic advice.

Part of what drives those restrictions is controversy over the use of AI in medical or therapeutic contexts. In June, the National Eating Disorders Association was forced to suspend its Tessa chatbot after it gave harmful eating disorder advice. And while physicians and regulators are divided over whether AI will prove beneficial in the near term, there is a consensus that introducing AI tools to augment or provide advice requires careful thought.

Google DeepMind did not immediately respond to a request for comment.

Read more in The New York Times.
