Speech by David van Weel, Minister of Foreign Affairs, at the AI Summit Brainport 2025, Eindhoven, 13 November 2025
How to trust AI on the battlefield
Good morning, everyone,
How to trust AI? That’s the key question we’re addressing today.
But perhaps you have another question first.
How to trust a Minister of Foreign Affairs on this topic?
I hope to earn that trust. But let me stress right from the start that AI is a key part of security and foreign policy.
AI technology affects our traditional security policy, as demonstrated recently by the incursion into Belgian airspace by AI-controlled drones.
But it also affects our democratic order, human rights and global geopolitical relations.
It presents us with challenges that the Netherlands cannot solve alone.
That’s why we need to shape policy at the international level, and I want to highlight the Netherlands’ commitment to this effort.
But first let me outline the opportunities and challenges AI brings when it comes to our security.
Let’s start with the opportunities.
The fact is: those who fail to make optimal use of new technology in the field of security will ultimately pay the price.
History has taught us some costly lessons in this regard.
A well-known example is the use of tanks.
It was the British and French who developed this new weapon during the First World War and deployed it in battle for the first time.
But both countries subsequently failed to think through its use or optimise it any further.
The Germans did.
And that enabled them to deploy tanks highly effectively in their Blitzkrieg operations at the start of the Second World War.
As this example shows, incorporating new technology like AI into our security policy is not a choice.
It’s a necessity.
In a country like Ukraine, where war is raging, they understand this all too well.
Last year, TIME Magazine aptly called Ukraine an ‘AI War Lab’: new technology is being tested and improved on the battlefield.
A few weeks ago I visited Ukraine, where I witnessed first-hand how technology is helping in its defence.
For example, AI software is crucial in rapidly analysing satellite images, processing drone images and identifying Russian positions.
It assists with cyber defence and the deployment of autonomous drones and robots.
But it’s also playing an essential role behind the scenes, helping to organise the armed forces more efficiently.
Using AI, Ukraine has managed to reduce procurement times for defence equipment to just a few days.
It used to take two years.
All these examples give us a good idea of where AI can be effective, including for our own security policy.
In addition to the military domain, there are also opportunities in the broader field of security.
Like addressing the root causes of conflicts worldwide.
Conflicts can be prevented through early warning of climate-related disasters and impending food insecurity.
AI can help save lives.
At the same time, it would be naive to ignore the risks and challenges that AI poses to our security.
The downside of drones and robots is that they render the traditional defence capabilities of our armed forces outdated in one fell swoop.
And while AI makes it easier for us to avert cyberattacks, it also makes those attacks easier to carry out.
We’re already experiencing this first-hand in the Netherlands, where there has been an exponential rise in cyberattacks and cybercrime from abroad.
Our intelligence services make no secret of the fact that current AI models increase all existing risks to the security of the Netherlands and Europe.
And that’s before we consider the new applications that may still lie ahead.
Like autonomous systems that take over critical infrastructure or produce biochemical weapons.
Such applications raise important ethical questions.
Will humans remain in control, or will machines take over?
Are we building something that we will later lose control of?
Who can then be held responsible for life-and-death decisions?
How to trust AI, indeed.
AI applications also threaten our security on a more subtle level.
For example, by interfering in election processes and creating a relentless stream of fake news and disinformation.
This has an impact on our democratic order.
And the Netherlands has learned the hard way how AI can affect human rights, such as the principle of equal treatment.
We saw that during the childcare benefit scandal, where an algorithm wrongfully profiled benefit recipients as fraudsters.
What’s more, AI is the subject of a fierce geopolitical power struggle.
Back in 2017, Vladimir Putin said that whoever leads in artificial intelligence will rule the world.
And in this case, he was right.
Countries like the US and China see AI as a path to global dominance, and are therefore investing heavily in technological advancement.
This also impacts our own position.
As European countries, we’re highly dependent on these powers at all levels of the AI value chain, and that has major implications for our own security.
In response to all these challenges, the Netherlands is opting for a nuanced yet proactive approach: we want to accelerate safely.
And as far as ‘safety’ is concerned, I’m proud that the Netherlands realised early on that when it comes to AI we need safety belts that can be adjusted at European and international level.
In 2023 we joined forces with South Korea to start a broad international debate on this topic through REAIM, which stands for Responsible AI in the Military Domain.
This partnership brings together the private sector, governments, civil society and knowledge institutions.
Together with South Korea, we’ve also initiated various UN resolutions on AI in the military domain, one of which was adopted last week.
We actively participate in several international expert groups.
And partly as a result of such efforts, we ranked number one in last year’s inaugural Global Index on Responsible AI.
At the same time, we’re firmly committed to AI’s continued development, both in the Netherlands and across Europe.
It’s extremely important to boost our open strategic autonomy in this way.
As a serious European player in the field of AI, the Netherlands has a solid point of departure in this regard.
And we owe this position in large part to the knowledge and innovative power of technological hubs like Brainport Eindhoven.
Thanks to ASML, Eindhoven holds the key to the AI transition: the smartest machines for making the chips that power AI applications.
Meanwhile, over the past two years great strides have been made at European level to strengthen the AI ecosystem.
A network of 20 AI factories and five gigafactories is being set up across Europe, and the Netherlands is part of this initiative.
In fact, one of those AI factories will be located in Groningen.
I believe it’s important to maintain the right balance between regulation and room for innovation.
Brussels sometimes has a tendency to act like a Silicon Valley of regulation, causing us to miss opportunities.
New regulations on responsible use must therefore be targeted enough not to hinder innovation, yet strict enough to keep that innovation ethically responsible.
An important condition, for example, is that there must always be interaction between humans and machines.
I’m going to wrap up now.
When the first atomic bomb exploded on the 16th of July 1945 in the New Mexico desert, everyone involved knew that a turning point had been reached.
Years later, J. Robert Oppenheimer, the bomb’s chief architect, recalled how the team supervising the test emerged from their shelter after it was over.
‘We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent.’
The impact of AI on our security policy has already been compared to the advent of the atomic bomb.
And I understand why: once again, we will have to fundamentally review our security policy.
An important difference, however, is that AI is not a more or less finished technology that culminates in a single decisive test, after which decisions can be made.
The impact of AI on our security is gradually becoming more apparent. But we don’t yet have a clear picture of the ultimate consequences.
This makes it all the more important for technology experts and government officials to keep consulting each other, so we can weigh up both the opportunities and the risks.
On our side, we’re working across Dutch central government to develop future-proof AI policy for the Netherlands.
I’m also working closely with other ministries to develop an international AI strategy.
The Netherlands is one of the first countries working on this.
This certainly requires diplomatic efforts: in Brussels, in multilateral institutions and in our contacts with like-minded partners.
But we also desperately need your expertise.
Because joining forces on tech diplomacy will make the Netherlands stronger.
AI has changed the world, and we can laugh or cry about each new development. But standing still is not an option.
There’s too much work to be done.
Thank you.