“An absence of regulations concerning AI is not an option”

We are at a crossroads in our society, says the Netherlands Scientific Council for Government Policy (WRR): artificial intelligence is now rooted in society and is here to stay. The government, the think tank believes, must do more to guide its development. But it is not easy to be well prepared for whatever may happen, say EAISI director Carlo van de Weijer and researcher Savio Sciancalepore. “The government has to create the framework without slowing down the process.”

Photo: metamorworks / Shutterstock

Self-driving cars, surgical robots, military drones. Artificial intelligence (AI) is about to become part and parcel of our lives. Its influence will be so extensive, according to the WRR, that AI can be compared to other world-changing technologies such as the steam engine and electricity.

Opgave AI. De nieuwe systeemtechnologie (Assignment AI: the new system technology; only available in Dutch) was recently published by the WRR, which was commissioned by the cabinet to produce an advisory report on AI and public values. That this technology will impact public values is taken as read, says Carlo van de Weijer, director of the Eindhoven Artificial Intelligence Systems Institute (EAISI).

“And actually that influence will be very positive,” says Van de Weijer. “Take safety, for example. We have the opportunity to make cars totally safe, in no small measure thanks to AI.” However, says the director, every technology has its downside. “You can compare it with fire: useful for heating your home and preparing your food, but you could also burn your fingers or reduce your house to ashes.” It's just that with AI the risks are that much greater. “Unlike fire, it gives us the capacity to harm the whole of humanity.”

Not using fire is no longer an option, but there are rules to ensure fire safety. Similarly, we can't turn back the clock and stop using AI, says Van de Weijer. “So it is also something we need rules for.”

Cyber security

This view is endorsed by Savio Sciancalepore, assistant professor in the Security Group at the department of Mathematics and Computer Science. “The only way to make AI safe is to ensure that it is handled responsibly by the people using it. The technology itself is very difficult to stop.”

When it comes to cyber security, AI is the very thing used to fend off threats. There are systems, for example, that rely on AI to detect hackers and to recognize irregularities that could point to cyber attacks, Sciancalepore explains. “You have to keep these systems up to date, because hackers can also use what we call ‘adversarial AI’. Criminals who know that you're using algorithms to keep your systems secure can write their own algorithms that will avoid detection by yours.”
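Sciancalepore's point can be illustrated with a toy example (a minimal sketch, not any real detection product): a statistical anomaly detector flags traffic that deviates too far from the historical baseline, and an attacker who knows roughly where the threshold lies can simply stay beneath it. All numbers, the z-score rule, and the function names here are illustrative assumptions.

```python
# Toy anomaly detector: flag a value that lies more than `threshold`
# standard deviations from the mean of past observations (a z-score test).
# Everything here is a simplified illustration, not a real IDS.
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Return True if `value` deviates from the mean of `history`
    by more than `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

# Normal baseline: roughly 100 requests per minute.
baseline = [98, 102, 101, 99, 100, 97, 103, 100, 99, 101]

print(is_anomalous(baseline, 500))  # blunt flood, far above baseline -> True
print(is_anomalous(baseline, 104))  # attacker staying just under the
                                    # detection threshold -> False
```

The second call is the adversarial scenario in miniature: an attacker who probes or reverse-engineers the detector can keep each action inside the "normal" band, which is why such systems must be continually updated rather than deployed once and forgotten.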

Keeping abreast of the latest technological developments in the field of AI is a necessity. “The attacker doesn't wait, you have to stay one step ahead,” says Sciancalepore. The government needs to be doing this too, he says. “I think they are reasonably on top of the situation; as far as I know they are involved in a lot of research projects.”

Nonetheless, the government must do more in order not to fall behind, says the WRR. But it is not easy to prepare for a technological development whose influence will be felt in so many ways, says Van de Weijer.

“The government must set the framework, preferably without slowing down the process too much. In China, everything is regulated by the government; the Americans let the big commercial players take the lead. Europe is more inclined to take a democratic approach and to focus on individual rights.”

That way, things may move more slowly than in China and the US, but it really is the only way to ensure that machines don't become more powerful than people, says the director. “By definition, regulation is an obstacle to innovation. But with AI, an absence of regulations isn't an option.”

And although the government should be doing more, don't forget there is already plenty going on in the Netherlands, he is keen to point out. “For example, knowledge institutes, companies, government bodies and civil society organizations have united in the Dutch AI Coalition. EAISI is also a member. In this context, we are already addressing the possible hazards of AI and its moral aspects.”

“The coalition has received 276 million euros from the National Growth Fund. The first tranche of money, 44 million euros, is destined for the ELSA labs, whose full name reveals their field: the Ethical, Legal, Societal aspects of AI.” Likewise at TU/e the ethical side of AI is a major interest, says Van de Weijer. He refers to the research group Human Technology Interaction, which focuses on the interaction between people and technology.

While Van de Weijer is convinced that AI can have a positive influence on society, he mentions a difficult balance. “If a car can drive more safely on its own, should a person still be able to intervene? And do you want to have your public transport system managed by a transport company or by Google? It isn't wise to regulate everything down to the last detail, but you certainly don't want to be leaving it all to market forces either.”
