Companies don’t know if ChatGPT is safe, but employees want AI help

When Justin used ChatGPT at work earlier this year, he was pleased by how helpful it was. A research scientist at a Boston-area biotechnology firm, he’d asked the chatbot to create a genetic testing protocol, a task that can take him hours but that the popular artificial intelligence tool reduced to mere seconds.

He was excited by how much time the chatbot saved him, he said, but in April, his bosses issued a strict edict: ChatGPT was banned for employee use. They didn’t want workers entering company secrets into the chatbot — which takes in people’s questions and responds with lifelike answers — and risking that information becoming public.

“It’s a little bit of a bummer,” said Justin, who spoke on the condition of using only his first name to freely discuss company policies. But he understands the ban was instituted out of an “abundance of caution” because he said OpenAI is so secretive about how its chatbot works. “We just don’t really know what’s underneath the hood,” he said.

Generative AI tools such as OpenAI’s ChatGPT have been heralded as pivotal for the world of work, with the potential to increase employees’ productivity by automating tedious tasks and sparking creative solutions to challenging problems. But as the technology is being integrated into human-resources platforms and other workplace tools, it is creating a formidable challenge for corporate America. Big companies such as Apple, Spotify, Verizon and Samsung have banned or restricted how employees can use generative AI tools on the job, citing concerns that the technology might put sensitive company and customer information in jeopardy.

Several corporate leaders said they are banning ChatGPT to prevent a worst-case scenario where an employee uploads proprietary computer code or sensitive board discussions into the chatbot while seeking help at work, inadvertently putting that information into a database that OpenAI could use to train its chatbot in the future. Executives worry that hackers or competitors could then simply prompt the chatbot for its secrets and get them, although computer science experts say it is unclear how valid these concerns are.

The fast-moving AI landscape is creating a dynamic in which corporations are experiencing both “a fear of missing out and a fear of messing up,” according to Danielle Benecke, the global head of the machine learning practice at the law firm Baker McKenzie. Companies are worried about hurting their reputations by moving too slowly or by moving too fast.

“You want to be a fast follower, but you don’t want to make any missteps,” Benecke said.

Sam Altman, the chief executive of OpenAI, has privately told some developers that the company wants to create a ChatGPT “supersmart personal assistant for work” that has built-in knowledge about employees and their workplace and can draft emails or documents in a person’s communication style with up-to-date information about the firm, according to a June report in the Information.

Representatives of OpenAI declined to comment on companies’ privacy concerns but pointed to an April post on OpenAI’s website indicating that ChatGPT users could talk with the bot in private mode and prevent their prompts from ending up in its training data.
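
(For organizations that want a programmatic route with similar assurances, OpenAI has also said that data submitted through its API, unlike prompts typed into the consumer chatbot by default, is not used to train its models. Below is a minimal sketch of that approach with the openai Python client; the API key, model name and prompt are placeholders, not a vetted corporate setup.)

```python
# A minimal sketch, not a vetted corporate setup: routing a work prompt
# through OpenAI's API, which OpenAI has said does not use submitted data
# for model training by default. The "private mode" described above is a
# separate toggle in the ChatGPT web interface itself.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secret store in practice

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user", "content": "Draft a polite follow-up email to a vendor."},
    ],
)
print(response["choices"][0]["message"]["content"])
```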

Corporations have long struggled with letting employees use cutting-edge technology at work. In the 2000s, when social media sites first appeared, many companies banned them for fear they would divert employees’ attention away from work. Once social media became more mainstream, those restrictions largely disappeared. In the following decade, companies were worried about putting their corporate data onto servers in the cloud, but now that practice has become common.

Google stands out as a company on both sides of the generative AI debate — the tech giant is marketing its own rival to ChatGPT, Bard, while also cautioning its staff against sharing confidential information with chatbots, according to reporting by Reuters. Although the large language model can be a jumping-off point for new ideas and a timesaver, it has limitations with accuracy and bias, James Manyika, a senior vice president at Google, warned in an overview of Bard shared with The Washington Post. “Like all LLM-based experiences, Bard will still make mistakes,” the guide reads, using the abbreviation for “large language model.”

“We’ve always told employees not to share confidential information and have strict internal policies in place to safeguard this information,” Robert Ferrara, the communications manager at Google, said in a statement to The Post.

In February, Verizon executives warned their employees: Don’t use ChatGPT at work.

The reasons for the ban were simple, the company’s chief legal officer, Vandana Venkatesh, said in a video addressing employees. Verizon has an obligation not to share things like customer information, the company’s internal software code and other Verizon intellectual property with ChatGPT or similar artificial intelligence tools, she said, because the company cannot control what happens to that information once it has been fed into such platforms.

Verizon did not respond to requests from The Post for comment.

Joseph B. Fuller, a professor at Harvard Business School and co-leader of its future of work initiative, said executives are reluctant to adopt the chatbot in their operations because there are still so many questions about its capabilities.

“Companies both don’t have a firm grasp of the implications of letting individual employees engage in such a powerful technology, nor do they have a lot of faith in their employees’ understanding of the issues involved,” he said.

Fuller said it’s possible that companies will ban ChatGPT temporarily as they learn more about how it works and assess the risks it poses in relation to company data.

Fuller predicted that companies eventually will integrate generative AI into their operations, because they soon will be competing with start-ups that are built directly on these tools. If they wait too long, they may lose business to nascent competitors.

Eser Rizaoglu, a senior analyst at the research firm Gartner, said HR leaders are increasingly creating guidance on how to use ChatGPT.

“As time has gone by,” he said, HR leaders have seen “that AI chatbots are sticking around.”

Companies are taking a range of approaches to generative AI. Some, including the defense company Northrop Grumman and the media company iHeartMedia, have opted for straightforward bans, arguing that the risk is too great to allow employees to experiment. This approach has been common in client-facing industries including financial services, with Deutsche Bank and JPMorgan Chase blocking use of ChatGPT in recent months.

Others, including the law firm Steptoe & Johnson, are carving out policies that tell employees when it is and is not acceptable to deploy generative AI. The firm didn’t want to ban ChatGPT outright but has barred employees from using it and similar tools in client work, according to Donald Sternfeld, the firm’s chief innovation officer.

Sternfeld pointed to cautionary tales such as that of the New York lawyers who were recently sanctioned after filing a ChatGPT-generated legal brief that cited several fictitious cases and legal opinions.

ChatGPT “is trained to give you an answer, even when it doesn’t know,” Sternfeld said. To demonstrate his point, he asked the chatbot: Who was the first person to walk across the English Channel? He got back a convincing account of a fictional person completing an impossible task.
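
(Sternfeld’s test is easy to reproduce programmatically. The sketch below poses the same false-premise question through the API; this is a hypothetical reconstruction, as he used the chatbot interface directly.)

```python
# A hedged reconstruction of Sternfeld's demonstration: a question with a
# false premise, which the model may answer with a confident, fabricated
# account rather than a correction. Hypothetical setup; the original
# exchange took place in the ChatGPT interface.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Who was the first person to walk across the English Channel?",
    }],
)
# Depending on the model and date, the reply may invent a name and story
# instead of noting that walking across the Channel is impossible.
print(response["choices"][0]["message"]["content"])
```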

At present, there is “a little bit of naiveté” among companies regarding AI tools, even as their release creates “disruption on steroids” across industries, according to Arlene Arin Hahn, the global head of the technology transactions practice at the law firm White & Case. She’s advising clients to keep a close eye on developments in generative AI and to be prepared to constantly revise their policies.

“You have to make sure you’re reserving the ability to change the policy … so your organization is nimble and flexible enough to allow for innovation without stifling the adoption of new technology,” Hahn said.

Baker McKenzie was among the early law firms to approve the use of ChatGPT for certain employee tasks, Benecke said, and there is “an appetite at pretty much every layer of staff” to explore how generative AI tools can reduce drudge work. But any work produced with AI assistance must be subject to thorough human oversight, given the technology’s tendency to produce convincing-sounding yet false responses.

Yoon Kim, a machine-learning expert and assistant professor at MIT, said companies’ concerns are valid, but they may be inflating fears that ChatGPT will divulge corporate secrets.

Kim said it’s technically possible that the chatbot could use sensitive prompts entered into it for training data but also said that OpenAI has built guardrails to prevent that.

He added that even if no guardrails were present, it would be hard for “malicious actors” to access proprietary data entered into the chatbot, because of the enormous volume of data on which ChatGPT is trained.

“It’s unclear if [proprietary information] is entered once, that it can be extracted by simply asking,” he said.

If Justin’s company allowed him to use ChatGPT again, it would help him greatly, he said.

“It does reduce the amount of time it takes me to look … things up,” he said. “It’s definitely a big timesaver.”
