The future of AI: “Make the system itself responsible”

Philosopher Carlos Zednik on the intelligence of AI systems



Since the introduction of ChatGPT, all eyes have been on the meteoric rise of Artificial Intelligence. Experts from across the world are voicing their concerns and speculating about the possible consequences of these Large Language Models. In this series, Cursor and Wim Nuijten, scientific director of EAISI, discuss the future of AI with TU/e researchers from multiple fields of expertise. In part two: Carlos Zednik, philosopher and assistant professor at Industrial Engineering and Innovation Sciences, whose research centers on natural and artificial cognitive systems.

How do you investigate AI systems whose process of producing results can’t be properly explained? Carlos Zednik addresses this question from a philosophical perspective, grounded in psychology, neuroscience and artificial intelligence. According to him, neural networks are capable of carrying out tasks in ways that are similar to how the human brain works. “These are high-dimensional complex systems that adapt to and learn from their environment, just like people. That’s why we could use research methods that are similar to the ones neuroscientists and psychologists use to explain human and animal behavior.”

Although Zednik realizes that the brain is significantly more complex than existing neural networks, he isn’t afraid to claim that certain current systems could already be considered truly intelligent. “I have to admit that I’m more liberal than many of my colleagues. I don’t set the bar for what we call intelligence quite so high.” One of the problems, Zednik says, is that there’s no proper definition of what intelligence really is, because it’s such a multidimensional concept. “These systems can learn, they can adapt to their environment and they can solve problems. Why not call this intelligence? Consciousness, however, is something else. Although, honestly, I don’t know what consciousness is. I also don’t know whether you are conscious or not.”

Meat sounds

Those who wish to compare AI systems to humans shouldn’t underestimate the capabilities of AI systems, nor overestimate what humans are capable of, Zednik says. “Let’s talk about the human case first. What do we mean when we say that we 'understand'? What we are doing is making meat sounds (talking, ed.) in response to meat sounds. The particular meat sound I make is influenced by states of my brain, which are obviously influenced by previous experiences, and so on. It’s not magic. I have no direct access to the meaning that you give to the meat sounds. I can only respond to your input.”

Zednik takes a similar view when it comes to AI. “We need to try to understand what ChatGPT actually does when, for example, it predicts words. In order to predict words in so many diverse situations, it presumably needs to have learned certain internal representations of the world. It needs to have learned certain rules and patterns to be able to behave like it does now.” According to Zednik, AI systems have a mathematical structure that can be analyzed, not unlike the abstract structure in the human brain. “At the end of the day, it’s just patterns. Although the ability to learn these patterns is impressive, it is not special, and probably not all that different from what happens in the human brain.”

What I thought was really impressive about ChatGPT is that all of a sudden, we had an AI system that seems to be able to talk about everything

Carlos Zednik
assistant professor, Industrial Engineering and Innovation Sciences

What’s certain is that Zednik sees many similarities between AI systems such as Large Language Models and the human brain. However, that doesn’t mean he expects artificial general intelligence (AGI) to emerge any time soon, although that too depends on how you define the concept of AGI. “AI systems are becoming more autonomous, but I think the key idea behind AGI is really this generality, not so much the autonomy. It’s more that it can perform many tasks and not just one task. Thus far, AI has always been a chess-playing computer or maybe a self-driving car. But it has never been a self-driving car that can also play chess.” However, if you were to apply the Turing test, which looks at whether a person can determine if they are dealing with a human or a computer, things become more complicated. Even if ChatGPT cannot actually play chess or drive a car, it can talk about both of these things and more, and in this sense pass the Turing test, Zednik says.

“What I thought was really impressive about ChatGPT is that all of a sudden, we had an AI system that seems to be able to talk about everything. If it can talk about everything, then it’s almost as if it knows everything. There is obviously still a gap between talking about chess and playing chess, and between talking about driving a car and actually driving a car, but a first step is being able to talk about all those things, and that is what we seem to have reached now with ChatGPT. The challenge now is to translate this ability to talk about things into actually acting.”

Just like kids

Zednik expects that in the future, AI will surpass humans in performing various tasks. For certain diagnoses, he already puts greater trust in the judgment of certain systems than in that of the average physician. But AI won’t become infinitely powerful, Zednik believes. At least, that’s not what he’s worried about. What worries him more is the harm humans can cause with AI. “But I choose not to be scared, so to speak. What I choose to do instead is to influence it as best I can, and to try and shape the kind of AI we will have. To promote good and responsible AI systems. I don’t have time to be scared, because I’m too busy working on that.”

One of the ways Zednik hopes to exert a positive influence on the future of AI is by teaching his students how to become responsible and ethical engineers. In addition, he’s involved in the development of ISO standards for AI, which address issues such as privacy, transparency and accountability. But what’s the use of accountability when a machine decides to plan its own course and humans are no longer able to halt that process, Nuijten wants to know. To answer that question, Zednik says he needs to put his philosopher’s hat on. “Maybe at some point, once it has become sufficiently autonomous, we should consider giving the responsibility and the autonomy to the system itself. In the same way we do with kids. You’re responsible up to a certain point, until they have their own autonomy and responsibility.”

Their own person

If AI systems become fully autonomous in the future and are no longer under the control of humans, perhaps society should accept that these systems are “their own person,” with rights and responsibilities, Zednik says, taking things a step further. “In general, I’m not opposed to this.” EAISI director Wim Nuijten seems to think differently and asks: “Suppose this non-human intelligence is superior to humans: are there any cases of two (animal) species coexisting, one more intelligent than the other, where the less intelligent species flourishes?”

Yes, there certainly are, Zednik says: “Cockroaches very successfully coexist with humans.” He believes that you should simply think of it as evolution. Once we decide to think of certain forms of artificial intelligence as “persons,” what’s the difference between creating a person in a biological way or in an artificial way? Perhaps we should just think of those future systems as our successors. “That doesn’t really scare me, it’s just the natural process.”
