It’s the latest salvo in Microsoft’s battle with Google and other tech companies to dominate the fast-growing field of “generative” AI, though it’s still unclear whether the flurry of product launches, demos and proclamations from executives will change the tech industry as dramatically as leaders are predicting.
The tool adds to Microsoft’s growing roster of generative AI products, which use algorithms trained on trillions of words of text from the internet. The algorithms can summarize and write text and computer code, and hold relatively complex conversations with humans. The tool, called Microsoft Security Copilot, aggregates data from a company’s own computers and from outside databases of hacking threats, then lets workers ask simple questions about what kinds of attempted hacks are hitting their network.
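To make that workflow concrete, here is a minimal sketch in Python of the pattern described above: merge a company’s own alerts with an outside threat-intelligence feed, then hand an analyst’s plain-language question, along with the matching alerts, to a generative model. Every name here is hypothetical and the model call is a stub; this illustrates the pattern, not Microsoft’s actual implementation.

def gather_context(internal_alerts, threat_feed):
    # Flag internal alerts whose source address appears in the
    # outside database of known threat indicators.
    known_bad = {entry["indicator"] for entry in threat_feed}
    return [a for a in internal_alerts if a["source_ip"] in known_bad]

def ask(question, context):
    # Stand-in for a generative-model call: a real tool would send the
    # question plus the flagged alerts to an LLM and return its summary.
    events = ", ".join(a["event"] for a in context)
    return (f"Q: {question}\n"
            f"A: {len(context)} alert(s) match known threat indicators: {events}")

alerts = [
    {"source_ip": "203.0.113.7", "event": "failed RDP logins"},
    {"source_ip": "198.51.100.2", "event": "port scan"},
]
feed = [{"indicator": "203.0.113.7", "family": "brute-force botnet"}]

print(ask("What attempted hacks are hitting our network?",
          gather_context(alerts, feed)))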
Microsoft says the product will make the lives of cybersecurity professionals easier, and allow people with less specialized skills to work as security experts.
“Cybersecurity in general is the major challenge of our times,” Microsoft chief executive Satya Nadella said in an interview. “Can we give the analyst’s speed a 10X boost? Can we bring a novice analyst and change their learning curve?”
At the same time, hackers have already begun using generative AI chatbots to write code for use in attacks, according to cybersecurity research firm Check Point Research. Just as Microsoft’s tool makes it easier for people to defend their companies’ systems, similar tools are letting people build hacking software without technical skills of their own.
“There will always be innovation out there on the dark side,” said Charlie Bell, executive vice president of security at Microsoft. The company has engineers trying to figure out ways to hack into its own products, he said, “people who are trying to break it before somebody else tries to figure it out.”
Microsoft, Google, Salesforce and other big tech companies are rushing to incorporate generative AI into more of their products as regular people, Wall Street investors and the companies’ customers have grown intensely interested in the technology. Earlier this year, Microsoft announced a multibillion-dollar deal with OpenAI, a smaller company that kicked off the latest AI wave by publicly launching its DALL-E image generator and ChatGPT chatbot. Since then, Microsoft has integrated ChatGPT-based tech into its Bing search engine, productivity tools such as Word and Excel, and cloud software it sells to other businesses.
Microsoft’s archrival Google, which has long been the undisputed leader in AI tech, has been caught on the back foot and is rushing to launch competing products built on its own generative AI technology. OpenAI, though it is partnered with Microsoft, has also sold access to its tech directly to other companies, including major Microsoft competitors such as Salesforce and Zoom.
Cybersecurity is a nightmare for many companies. The rate of damaging hacks has grown in recent years, and ransomware attacks — where hackers lock a company out of its systems and demand a ransom payment to restore access — have shut down hospitals, businesses and entire city administrations. Companies and governments are desperate for more skilled cybersecurity professionals to help them monitor for and defend against attacks.
Microsoft is pitching its Copilot tool as a way to help employees narrow down which risks are most dangerous and then deal with them more efficiently. The program runs on both OpenAI’s ChatGPT tech and Microsoft’s own security-focused AI algorithms.
Using AI to defend against cyberattacks is not new. Other tools built by Microsoft and other companies use it to sift through data and pick out the most likely threats. Hackers also use AI to improve their chances of breaking into a targeted computer network.
But the company says using chatbots and generative AI is a major step up from what’s currently available to companies and governments, and will make cybersecurity jobs easier and accessible to more people. For example, a key skill for cybersecurity professionals is reverse-engineering malicious computer code to figure out what it does and where it came from. In a video demo, Microsoft showed a user pasting computer code into Copilot and receiving a flowchart of how the code works, readable by someone with no programming experience.
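To get a rough sense of what that demo is doing, the sketch below takes a snippet of suspicious code and prints a plain-language outline of its major steps. Microsoft’s tool produces this kind of summary with generative AI; this illustration instead walks the code’s syntax tree with Python’s standard ast module, and the malware-like snippet is invented, so treat it as a picture of the concept rather than of the product.

import ast

# A made-up snippet resembling malware that "beacons" to a remote server
# and runs whatever commands it receives. It is parsed, never executed.
SUSPICIOUS_SNIPPET = """
import socket

def beacon(host, port):
    s = socket.socket()
    s.connect((host, port))
    while True:
        cmd = s.recv(1024)
        if not cmd:
            break
        s.send(execute(cmd))
"""

def outline(source):
    # Parse the code and describe its major constructs in plain language.
    steps = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            steps.append(f"defines {node.name}({args})")
        elif isinstance(node, ast.While):
            steps.append("loops, reading commands until the connection closes")
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            steps.append(f"calls .{node.func.attr}()")
    return steps

for step in outline(SUSPICIOUS_SNIPPET):
    print("->", step)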
“There’s a lot of noise which comes towards a defender. It’s really hard to see where attackers are coming from,” said Vasu Jakkal, corporate vice president of Microsoft security. “Not only is it going to go back and say ‘hey this was a type of attack that happened and here is a similar attack,’ it’s going to help you find attacks that you haven’t seen before.”