“AI has the potential to save humanity, but also to wipe it out”

TU/e researchers Wim Nuijten and Daniel Kapitan are in favor of a development pause for large AI experiments

Read more

“AI has the potential to save humanity, but also to wipe it out”

How do you make sure that Artificial Intelligence technology continues to serve humanity while protecting the rights of individuals? This is the question MEPs are working on as part of the development of the AI Act, that TU/e is also provides input for. But this European act is not yet in place and in the meantime, systems like GPT-4 (the engine behind ChatGPT) continue to develop at a frantic pace. This is happening way too fast, reads an open letter signed by many prominent technologists, including several TU/e researchers.

by
photo Wongsakorn Napaeng / Shutterstock

The open letter, which was published online a month ago, calls for a development pause for large AI experiments because it argues that the developments are moving too fast and their consequences would be incalculable. More than 25 thousand people signed the letter, including many leaders in the world of Artificial Intelligence and a few TU/e researchers. Wim Nuijten, who is the scientific director of the Eindhoven Artificial Intelligence Systems Institute (EAISI), also signed the letter. He thinks that we should take the risks of the development development of an Artificial General Intelligence (AGI) seriously, because it such AI systems could potentially endanger the existence of humanity.

AI Act

One thing that could maybe help avoid such catastrophic consequences is the AI Act: MEPs are currently diligently working on this European Act to protect the rights of individuals with respect to AI. TU/e has taken the initiative to represent academic research in this process and based on that has proposed amendments. “The act sets a number of requirements for AI systems. For example, the developers must administer the risks, keep extensive technical documentation and ensure that human supervision is possible.” This latter requirement currently is impossible to meet for systems such as GPT-4, the language model behind ChatGPT and the system that is also referred to in the open letter. This is because the problem is that nobody knows exactly what happens inside that system, not even the developers themselves, says Nuijten.

“What you see is the input and the output. And generally speaking, we do understand the output of GPT-4: it’s sequences of words. But those sequences are the results of a network of millions and millions of real numbers with weights and functions,” says Nuijten. But those functions were built into it by the developers, so surely they understand what happens inside the system, right? “They do understand the architecture of it, but they don’t understand why the system arrives at certain answers; and that’s quite disconcerting regarding the capacities of systems like GPT-4 and even more advanced versions in the future.”

TU/e and the AI Act

TU/e has proposed amendments for the AI Act that will still safeguard the goals of the European act without interfering too much with university research. Many of the requirements being proposed protect the individual user but at the same time involve a huge administrative burden, says Nuijten. Therefore TU/e, together with MEPs and industrial partners, is looking at possibilities for a lighter administrative burden that still protects the consumer, to make sure that research is not being slowed down unnecessarily.

No AGI

In any case, according to Nuijten, TU/e is not doing research with the aim of developing systems that intent to create artificial general intelligence. “GPT-4 is an example of a system that falls under natural language processing and that’s not one of our focal points.” However, there are researchers at TU/e working on neural networks. “This involves applications like being able to tell from a picture of an eye whether someone is more likely to develop diabetes, or being able to make better diagnoses from an MRI scan.” If AI is used for these kinds of purposes, he believes it actually has the potential to improve life on earth. “For example, it can cure or prevent diseases and reduce poverty all over the world. Of that, I’m absolutely certain.”

Would it help if there was more transparency about how these systems work? Absolutely not, according to Nuijten. “I will explicitly say that OpenAI, the company that created GPT-4, should not make this architecture public. It’s not a good idea. Because if the system really is close to an artificial general intelligence, its coming versions might have the potential to end life on earth. I don’t think it’s the case yet for GPT-4, but no one is certain of where we stand now and where we are heading at which speed.” He draws a comparison to nuclear weapons, which have the same potential. The reasons why he believes things haven’t gone wrong yet in that regard is because they are difficult to make, you need the right materials and there is strict legislation in place. “But what if you’d make all of this much easier and nuclear weapons would fall into the hands of many small groups of people? Who do you think would vote for that?”

Stop button

The AI Act is supposed to help mitigate such high risks. But the act has not been passed yet, and GPT-4 already fails to meet several of its requirements, for example that human oversight must be possible and that people must fully understand how the system works. Also included in the act’s proposal is the requirement that a stop button must be built in by default. However, the problem with artificial general intelligence is that it may eventually become smart enough to realize that such a button exists and, for example, create backups of itself in other locations through a back door. In that case, that stop button would be useless.

Let’s hope that in 30 years, we’ll re-read the open letter and we’ll be laughing our heads off, thinking: fools, of course that was never going to happen.

If such an AGI were to pursue its own course, Nuijten believes it might well have very different plans than humanity does. “Take war, for example. We should stop that, the system thinks. It could shut down the infrastructure of arms factories. Or just look at how we treat cows and pigs: that’s unacceptable. Such a system would see this and take measures. These are all steps in which it may come to the conclusion that it is not very sensible to have people living on this earth.”

AI alignment

Nuijten believes that an important way to ensure that AI systems do not take such disastrous measures (for humanity) is to focus on AI alignment: making sure that systems are developed in such a way that they conform to human values. “However, little work has gone into that, and we’re not expected to solve it any time soon. Developments in technology, on the other hand, are advancing at an extremely rapid pace.” So whether we will be able to achieve that alignment in time remains to be seen, he says. Besides, moral values vary around the world, although Nuijten thinks we should at least be able to assume that the common denominator is that (almost) no one wants to end humanity.

Nuijten is aware that he is painting a rather catastrophic picture of a future (or lack thereof) with Artificial Intelligence. “I’ve thought carefully about what I’m saying. I could’ve focused on the positive potential of AI that can make life great, but decided not to. Let’s hope that in 30 years, we’ll re-read the open letter and we’ll be laughing our heads off, thinking: fools, of course that was never going to happen. But we can’t completely rule out the possibility that the letter is right or even underestimates the situation. Whereas I used to think we could achieve artificial general intelligence in forty or fifty years, I now think it may well be under twenty.”

Good intentions

So what’s important now, he says, is to take the risks seriously, focus on alignment and work on other things, like the AI Act, that can prevent catastrophic consequences. Nuijten is happy with how the talks in Brussels are proceeding and praises the openness and expertise of the MEPs and their staff. “They have the very best intentions. They work out of the public eye and are not ruled by talk shows or Twitter. That allows them to really think it through. There are a lot of differences between countries in Europe, but when you look at the AI Act, you see that we have a lot more in common in terms of norms and values than most people would probably think.”

"I remember saying: I’ll believe it when I see it.”

Daniel Kapitan, EAISI Fellow, is one of the signatories of the open letter. Among other things, he gives lectures at Jheronimus Academy of Data Science and works with startup myTomorrows, an organization that links patients (for whom no more treatment is possible) to relevant clinical trials and aims to automate that process. “We used InstructGPT (with motor GPT 3.5, ed.) for a new service that we launched yesterday and it performed really well.” He says that the development team at myTomorrows consists of young people who had already warned him some time ago that large language models would overtake them. “I remember saying: I’ll believe it when I see it.” And on January 11 of this year, it happened. His work became “redundant”.

True technology optimist

However, that is not the reason why he calls for a pause (temporary or not) in the development of these systems. He considers himself a true technology optimist and believes in the interaction between man and machine, but he does think that certain processes must be put in place and that humans have to be kept in the loop. That is not what is happening with GPT-4, for example, which Kapitan says is a literal black box. “We think the system understands the world in a similar way like people do, but we just don’t know if that’s really the case and how that actually works. We should do research in order to break open the black box but developers keep brandishing commercial interests and intellectual property issues as an argument not to. That’s simply unacceptable.”

Hence, unlike Wim Nuijten, Kapitan argues that the architecture of these systems should in fact be brought out into the open to allow for peer review. “I can see why you might wonder if that’s a good idea. Knowledge needed to make bombs is now also readily available on the dark web. But at the same time, the knowledge behind GPT-4 is simply not accessible now. The companies that develop such systems have complete power; you can’t trust them either.” He believes the free market has far too much control over the development of generative AI at present.

Splitting up big tech companies

In order to curtail that control a little, big tech companies should be “split up”, thinks Kapitan. “We should say: dear Amazon and Google, we want you to separate your cloud from your store and search engine. We’ll legislate it because it’s such an important public resource. If I said this in a room full of commercial parties, I’d be kicked straight out, by the way.” Why is this also important with regard to the rapid development of AI? “If we cut those companies loose, the sector will be more in balance, like what happened to oil and telecom companies when those sectors were liberalized. That way there’s also less money available to finance big AI projects. AI is largely financed by excess profits from, for example, Microsoft’s office automation operations.”

It is no secret that large AI systems like GPT-4 are costly to develop. Kapitan estimates that the costs amount to around 20-30 million euros. He thinks Europe should set aside that same amount of money to replicate such systems in its own way, using peer review. “We’d learn much more about it that way.  It’s unclear now what choices developers made with the design. By doing it more out in the open and in collaboration with different parties, we can make steps in the direction of AI Alignment.”

 

“Awareness in the industry has increased and that’s already a big win.”

 

He also says they should address what he calls the “shell” around language models. “Around the core of the model is a shell of people who manually apply reinforcement learning. They have to filter out the most awful things from the Internet. Yesterday I read that by using the right prompts, someone was able to pierce through that shell and thus see what ChatGPT would do if the shell didn’t work. It scared me out of my wits.” One example given in the article is that by having the AI system conduct a conversation between Tom and Jerry, it is suddenly able to explain how to steal a car or make methamphetamine, whereas it would refuse to do so with normal prompts.

Brute force techniques

Reinforcement learning should help in ensuring that systems have the same moral values as humans do: the alignment Wim Nuijten suggested as a way to avert dangers of AI. Kapitan agrees that AI alignment is important. “I’m only familiar with brute force techniques like manually filtering out things you don’t want in the system. You only do that when problems are encountered. But there’s also another way to solve these problems: by filtering the training data beforehand and excluding all the nonsense that people spout on the Internet.”

Back to the letter and the proposed pause: much to his regret, Kapitan thinks it’s not going to happen. “But awareness in the industry has increased and that’s already a big win.” If the pause does not go ahead, he says we should still look to countervailing powers like policymakers and the government. “If necessary, we should put a categorical ban on those types of systems. And afterwards, we might possibly bring them into the world step by step and in a controlled manner.” Because how much more disinformation do we have to put up with, he wonders. “I’ve seen deepfakes of Obama and Merkel on the beach. Hilarious, but you couldn’t tell that it wasn’t real. Imagine all the misinformation you could feed people and the turmoil it could cause in the world.”

Switch off

Still, he wants to put the dangers of AI in perspective, because while they are certainly there, Kapitan says that we have bigger problems to worry about. “The climate crisis is also destructive, but on a very different level. When it comes to AI, there are still plenty of opportunities for intervention; in the end, it’s all just code and infrastructure that you can switch off. But as for the climate: we can’t turn away from that.”

Share this article