There was a palpable change in Silicon Valley this week.
Over 200 Google and OpenAI employees called on their employers to better define the limits of how AI can be used for military purposes. Explicitly. Loudly. In a push detailed by Axios, workers made it clear they are increasingly uneasy about how the AI tools they’re developing are being deployed.
And honestly? You can see why.
AI no longer just helps compose emails and produce graphics. It is now being discussed in relation to war logistics, surveillance, and autonomous weaponry on the battlefield. That’s serious. At least one person who participated in the effort wondered aloud whether these corporate checks are sufficient, or whether they are merely aspirational prose that can be bent when political exigencies demand it.
This feels like déjà vu because we’ve been here before. In 2018, Googlers revolted against the company’s work on Project Maven, a Pentagon project to analyze drone footage. Google responded with its AI principles, which promised the company would not build AI for use in weapons or in surveillance that violates internationally accepted norms. The trouble is, technology moves faster than principles, and things that seemed obviously out of bounds in 2018 might seem less clear-cut in 2023.
OpenAI also has publicly accessible usage policies that ban weapons work. On paper, that’s reassuring. But employees appear to be seeking answers to a more ambiguous question: What if AI tech is dual use? What if it helps doctors do research but can also be employed in weapons work? Where is the boundary?
Step back a little further and you see the geopolitical context: AI has been designated one of the Department of Defense’s top modernization priorities, and there’s a whole website for the Chief Digital and Artificial Intelligence Office. The office claims AI will enable faster decision-making, minimize loss of life, and deter threats. It’s all very “practical”.
But critics, including some within tech companies, worry that this is the thin edge of the wedge. AI in defense systems can blur accountability when something goes wrong. Autonomous systems, even non-lethal ones, are another step toward delegating choices that some believe should always remain in human hands.
The international argument is far from over, either. The UN has been debating lethal autonomous weapons for years and, as recent reports show, nations are still a long way from agreeing on what should happen next. Some want a ban. Others prefer loose guidelines. AI models, meanwhile, get better every month.
The really human part is that the people speaking out aren’t opposed to technology. Many of them are AI enthusiasts. They’ve seen their systems enable earlier detection of diseases, real-time translation of languages, and easier access to learning. They support the good stuff. That’s why this is such a charged situation. It’s not a rebellion for its own sake; it’s a disagreement over values.
There’s a generational element, too. Younger engineers aren’t so quick to shrug and say, “If we don’t do it, someone else will.” The old Silicon Valley standby no longer resonates. Instead, they’re asking: if we’re going to do it, shouldn’t we draw the boundaries, too?
Company leaders, of course, have a different perspective. Governments are big customers. Security concerns are a factor. And with an AI race under way (particularly between the U.S. and China), they don’t want to be left behind. Walking away isn’t simple. It’s strategy, it’s money, it’s politics, all of it.
But the internal pressure reveals something valuable. AI isn’t just algorithms. AI is values. AI is people sitting in front of monitors, starting to understand that what they build could one day weigh on questions of life and death.
Perhaps that’s the crux of the matter. This is a moral as much as a policy argument. Staff are being very clear: “We want guardrails.” Not because they’re opposed to progress — but precisely because they see its gravity.
What’s next? It’s unclear. Companies could tighten their pledges. Governments could write more defined policies. Or the friction could simply be papered over with PR announcements.
But one thing is clear: the debate over military AI is not just theoretical anymore. It is personal. And it is taking place in the rooms where the future is being created.