This is not exactly a good time for regulators. The prevailing mood is: Wait, did things just get worse faster than we expected?
Right now, UK regulators are scrambling to respond to what looks like an alarming leap in AI capability. A model built by Anthropic was reportedly able to discover a large number of software vulnerabilities, and that has people worried.
This is not science fiction. It’s real.
The model, still in early trials, was assessed internally, and that assessment prompted regulators to ask whether the system could pose risks to the UK. Reports that it could find thousands of weaknesses in a given environment are what set off the alarm.
UK regulators, including the Bank of England, have already responded, and the details of what happened and how they reacted have been documented in a published report.
Let’s step back for a moment, though, because here is the tricky part: this isn’t purely a “bad news” story. The ability to identify vulnerabilities is, after all, incredibly valuable.
The faster flaws are found, the faster patches can be applied, and the fewer open holes remain for attackers. That is a gift for cybersecurity professionals. The difficulty is that the same capability is a gift for anyone who would rather exploit those flaws than fix them.
That is the dual-use problem that has shadowed AI throughout its rapid evolution.
Any honest look at AI’s promise in cybersecurity has to reckon with the downside as well: some insiders are already whispering that we’re entering a phase where AI doesn’t just assist attackers, it might outpace human defenders entirely.
That is a scary thought, but is it true? We already know that some AI systems can identify, and even exploit, system vulnerabilities. It is likely only a matter of time before they can do so automatically, at scale.
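To make the dual-use point concrete, here is a minimal, purely illustrative sketch of what “automatic” could look like: a loop that feeds source files to a code-capable model and collects whatever flaws it flags. The `query_model` function below is a placeholder invented for illustration, not any vendor’s real API. Notice that nothing in the loop cares who runs it: the same structure serves a defender triaging patches or an attacker cataloguing targets.

```python
# A hypothetical sketch of an automated, model-driven vulnerability scan.
# `query_model` is a stand-in for any code-capable LLM API, stubbed out here
# so the shape of the loop is clear; it is NOT a real vendor call.
from pathlib import Path


def query_model(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would hit a model API here."""
    return ""  # stubbed out for illustration


def scan_repository(repo: Path) -> dict[str, str]:
    """Ask the model to review each source file and report suspected flaws."""
    findings: dict[str, str] = {}
    for source_file in repo.rglob("*.py"):
        code = source_file.read_text(errors="ignore")
        report = query_model(
            "Review the following code and list any security "
            f"vulnerabilities you can find:\n\n{code}"
        )
        if report.strip():
            findings[str(source_file)] = report
    return findings


if __name__ == "__main__":
    results = scan_repository(Path("."))
    print(f"Flagged {len(results)} files for review.")
```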
I’ve talked to a few developers over the past year, and there has been a quiet shift in tone. As one of them joked, “We built tools to help us… now we’re checking whether they need supervision, like interns who never sleep.”
I am sure we will hear more from policymakers as they grapple with the rapid, global advance of AI.
In parallel, companies such as Google and OpenAI continue their push towards increasingly capable systems in a rather quiet competition. It is not a race that makes headlines with every move; instead, each upgrade raises both the floor and the ceiling of what’s possible. And that prompts another question, one people tend to avoid.
Are we building faster than we can understand what we’ve built? If regulators are already scrambling to keep up, what happens six months from now?
Another paper, on the acceleration of AI and on why regulation is unable to keep pace, adds weight to this point.
There isn’t a tidy ending to all of this. The acceleration is real, the future is unclear, and the stakes are high for all of us.
AI isn’t just a tool anymore. It’s becoming an actor in systems we barely control. This is a moment of reckoning, and the answers will likely depend on which side of the firewall you’re standing on.