SAN FRANCISCO–Trying to figure out where the field of AI is going and how attackers and defenders will use it is no one’s idea of a good time. AI usage is still in its very early stages, but some of the people working on and thinking about the safety and security of AI systems and LLMs are optimistic that the good will outweigh the bad, and say that many of the same principles used to design and build software securely can be applied to building LLMs and AI systems.
“It’s important to understand that when we talk about securing AI, we’re just talking about software. AI is just software. A lot of the techniques we’ve been developing over the last thirty years apply here as well. It’s not rocket science. We’ve been using machine learning inside systems for a very long time,” Heather Adkins, vice president of security engineering at Google, said during a panel discussion on AI safety at the RSA Conference here last week.
“When we talk about AI, it’s almost like AI is some unique monster. But it’s just a software problem.”
For the most part, when people talk about AI systems, they’re referring to generative AI tools such as ChatGPT that are built on top of large language models (LLMs). Those models are trained on massive data sets, many of which are proprietary and not generally open to public inspection. But despite this inherent opacity, AI systems comprise a set of software components, and humans have been building software for a long time. That process hasn’t always gone well, which is why the security industry exists. But people understand how the software development process should work, what the built-in risks are, and how adversaries tend to attack software systems. That knowledge can be applied to securing AI systems and ensuring that their output is fit for human consumption.
“If the platform they’re built on is built in a virtuous way and we get the right outcomes, then that’s what we’re looking for,” Adkins said.
“We know security is a data problem. What the bad guys do with AI, we don’t know yet. Anything the defense will do with it the bad guys will do with it.”
One of the concerns around the rise of AI tools is that attackers will use them to automate their campaigns and intrusion efforts. To some degree, adversaries have been doing this for many years, but experts believe the advantage lies with defenders at the moment.
"We have a lot of security problems and AI taking over the world is not high on my list."
“It will magnify power in both areas. In our industry we have a data problem and a human resources problem. We can apply these technologies in ways the attackers can’t,” said Bruce Schneier, a cryptographer, technologist, and lecturer at the Kennedy School at Harvard University.
“Right now we are not defending at those speeds because it often requires judgment to do those things. In the short term I think AI will help defense more than offense. Long term, I have no idea.”
Adkins agreed in principle, but also emphasized that because AI is such a young technology, it’s virtually impossible to say where it will go.
“We are very bad at predicting what technology will be used for. We are today looking at our sci-fi, a huge body of creative thinking, but the ways we’re deploying AI are very mundane. We are reducing toil in the SOC, automating report writing,” she said.
“It’s fairly difficult to predict what we’re going to do with it in two years, five years, or a hundred years. The important thing is constant watchfulness. There are things we’re going to be delighted by. I love that we’re being very dialog driven, but I think we all have to become AI experts now as a society.”
One thing the panelists are not worried about in the short term is AI replacing humans at a large scale.
“Our ideas and fears about AI come from science fiction. This is why we think about the Terminator. That’s not science, that’s not engineering. That’s not something I’m worried about. We have a lot of security problems and AI taking over the world is not high on my list,” Schneier said.