The future of AI: “We ourselves will decide the role of systems”

Full professor Wijnand IJsselsteijn on the interaction between humans and AI models

Ever since ChatGPT hit the scene, all eyes have been fixed on the meteoric development of Artificial Intelligence. Experts around the world are expressing concerns and speculating about where these large language models may lead. In this series, Cursor and Wim Nuijten, scientific director of EAISI, talk to TU/e researchers about their perspectives on the future of AI. Today we present part four: Wijnand IJsselsteijn, full professor of Cognition and Affect in Human-Technology Interaction. He uses AI as a tool to make a difference for people with dementia, and he is also active in the areas of affective computing and the Metaverse.

In the future, how will we decide whether what we see and whom we are talking to are real? This will get increasingly difficult according to IJsselsteijn, who will be studying the psychology and ethics of the Metaverse in depth next year as part of his Distinguished NIAS Lorentz Fellowship. Artificial Intelligence and Virtual Reality coming together in the digital world makes for a powerful mix for positive immersive experiences, but it also involves obvious risks, the professor says. “With a medium like that, you make more of a psychological impact than we are already used to from social media, for example, and at the same time you also gather more personal data. That loop is the ideal marketing machine.” He thinks generative AI systems, such as ChatGPT, will play a major role in the Metaverse, because they can generate artificial personae that behave in a human way, can hold believable conversations, and can anticipate personal preferences and characteristic wishes based on behavior. “This isn’t science fiction in some distant future; I think this is about to happen.”

All of this raises the question of whether users of this medium can still protect themselves from being influenced, which, thanks to AI, happens in such a subtle way that you often don’t even realize you’re being influenced in the first place. “Familiarity influences what people experience as truth. If I read or see something often enough, it becomes familiar and I’ll be more likely to accept it as the truth. This is devastating for democratic processes, public debate and science. AI accelerates this process and makes it easier for fake or untrue things to appear real or very plausible.”


To solve this problem, IJsselsteijn thinks investments in various areas are necessary. “You have to make sure that certain things simply cannot be done with the technology, and come up with rules and regulations in this respect. The EU is working on this intensively at the moment. You also have to encourage large companies to develop models in a responsible manner, and enable users to develop the skills they need to tell the difference.” Although such training should start at a young age, according to IJsselsteijn, he does not think the burden of making these distinctions should generally rest with the user. “For example, there are voice generators that can imitate my voice to perfection. Those should have a kind of watermark or warning so it’s always clear it’s not real.”

There will always be bad actors. That’s why you have to make sure that the most powerful tools do not fall into the wrong hands

Be that as it may, rules and watermarks will not enable us to get rid of everything that’s misleading or fake, just as it’s impossible to foresee and prevent all dangers arising from AI, he emphasizes. That’s because people have a limited idea of how the technology will eventually be used in society. Also, there will always be people with bad intentions who exploit the vulnerabilities of others. There are still phishing emails, still fraudsters who knock on senior citizens’ doors. And there are people who don’t use fire to cook with, but to set a house ablaze. “There will always be bad actors. That’s why you have to make sure that the most powerful tools do not fall into the wrong hands. Take Putin, who invests in AI and sees it as a potential arms race. Intelligent weaponry is undesirable, especially when it has the autonomy to make decisions.”


Incidentally, IJsselsteijn is not so sure you should call this weaponry intelligent. He finds fault with the use of this word, because he sees its meaning as much more far-reaching than just the cognitively rational-logical form of intelligence. “I see intelligence as a broad range of all kinds of adaptive skills and qualities.” Unlike Carlos Zednik, who in a previous instalment of this series indicated that as far as he’s concerned intelligence doesn’t require a body of flesh and blood, IJsselsteijn does think the latter is crucial to human intelligence. “A big part of our thinking is supported and structured by our body, our environment and our relations to other people and to the cognitive tools we have available, like pen and paper, or a computer. When you talk, you gesture. You can’t see that as separate from what’s happening in your mind. It helps you to keep your own communication going. Another example: when I broke my ankle a while ago and therefore couldn’t walk around during lectures, it really affected the quality of my teaching. I missed the possibility to move around, which I use to structure my thoughts.” In this context IJsselsteijn also mentions extended cognition, which refers to the idea that our thinking is not played out solely within our skulls, but also makes use of everything around us. “This notion turns my office into a kind of cognitive nest that helps me think. How I leave things reflects the thoughts I had there and helps me pick up where I left off upon my return.”

IJsselsteijn thinks that in the current discussion about AI, humans and computers are mistakenly seen as equals. In fact, according to him it’s nothing short of ludicrous to suggest that an autonomous system has qualities similar to a person’s. “Systems such as ChatGPT are good at deceiving people. Even a Google engineer claimed that such a model was conscious. There’s no such thing. I don’t know what kind of kids they have, but they’d do well to compare them to the system. For now, we only have some kind of superintelligence in very specific areas.” And this won’t change anytime soon, he thinks, because when it comes to AI systems we’re still miles away from solving the common sense problem: weighing the possible consequences of intelligent behavior in the broadest sense of the word.


Consequently, IJsselsteijn is critical of the idea of a technological singularity – that AI would become self-reinforcing and superintelligent and would start to independently put aside the interests of mankind. He feels the debate about this takes too big a leap and remains ill-defined. “It’s a theoretical discussion – full of futuristic visions and utopian and dystopian claims, but almost completely disconnected from an in-depth and balanced scientific debate about what intelligence actually is, how many different forms and shapes it takes, and what roles we want to give AI in relation to humans.” Why would an AI system develop itself if we haven’t given it the explicit possibility to do so, he wonders. “We shouldn’t mythicize AI, shouldn’t pretend it’s happening to us and we’re powerless to stop it. AI is made by people. We make the system, we give it the buttons it can push and we decide what role the system gets. People are involved at every step of the process.”

We’re already putting too much trust in relatively dumb systems. Take the childcare benefits scandal

Besides, he thinks that before we get to the point of AI having a kind of self-reinforcing ability, we’ll have created much bigger problems by using AI in critical systems we depend on. “So I’d prefer the focus to be on what’s going wrong – or is threatening to go wrong – right now. For instance, we’re already putting too much trust in relatively dumb systems. Take the childcare benefits scandal. It concerned a system created out of a political need, because people were unlawfully syphoning off benefits. But through a system error, the interests of specific groups in society were badly harmed. You shouldn’t blindly trust systems in these kinds of processes; you have to keep running checks and maintain an overview of the situation. The human factor should remain central to the purpose of a system, even when the system is in some ways superior to what we humans can do.” Attention will be paid to this matter in the new Master’s Program AI & Society, which the professor aspires to set up together with his colleagues.

Open error culture

According to IJsselsteijn, systems should fit into a framework of our human values. But is that feasible, Nuijten wonders. Can we still rein in future systems, which may be millions of times faster than we are? IJsselsteijn likes to think so. “But this will involve trial and error, and painful lessons. We have to be careful with systems that haven’t been tested enough. For that we need firm AI policies, validated test protocols and review methods, and an open error culture.” He thinks we can learn a lot from aviation in this respect. “The American aviation authorities (FAA and NASA) keep a public error database containing accidents and near-accidents. Pilots aren’t punished, but praised when they file a report, which they can also do anonymously. The whole aviation organization learns a great deal from this. It’s no wonder flying is one of the safest forms of transport.”

His own ESDiT project now also includes an “AI post-mortem team”, which investigates why something went wrong with AI. “It’s always terribly complex, but necessary if you want to learn how to best use AI.” Because there’s no doubt in IJsselsteijn’s mind that we will – and will have to – use AI more and more in the future. “At some point there won’t be enough hands to take care of everyone. If we ignore technology, people will be on their own in the future. The hope is for us to develop something that allows people to remain humanely autonomous for longer.” This is also what he tries to do in his research, where he involves the target audience itself: people with dementia. In this context, the professor is hopeful about models such as ChatGPT, which he thinks can help people communicate with technology in a more natural way. “If a natural match is established between how we express ourselves and what a system can accept as input, we can make technology much more accessible for everyone.”

Human control

So does a future involving AI look bright or rather dark? Probably a bit of both, IJsselsteijn thinks. To make sure it doesn’t get too dark, he believes we should build in failsafes at the front and back ends of these systems. “I don’t necessarily see human control and automation as polar opposites. You can have highly automated systems with a large degree of human control – for example, a digital camera full of AI. You can use it to adjust things, but you still feel in control because the system does exactly what you want it to.”