The future of AI: “The ‘AI plane’ isn’t flying yet”

Computer scientist Jakub Tomczak on what generative AI models are capable of



Ever since ChatGPT hit the scene, all eyes have been fixed on the meteoric development of Artificial Intelligence. Experts around the world are expressing concerns and speculating about where these Large Language Models may lead. In this series, Cursor and EAISI scientific director Wim Nuijten talk to TU/e researchers about their perspectives on the future of AI. Today we present part one: Jakub Tomczak, computer scientist and associate professor at the Department of Mathematics and Computer Science. His research focuses on generative artificial intelligence (which includes models such as ChatGPT) and he specializes in machine learning.

ChatGPT and similar models are also referred to as black box models. That description is based on the fact that no one knows exactly how the system arrives at a particular answer. That does not mean that we don’t understand the mechanism behind the system, emphasizes Jakub Tomczak. “It’s essentially just matrix multiplication.” Neural networks learn to make correlations, he explains. “And then correlations between correlations between correlations, times a thousand, for example. That makes it an incredibly complicated network of potential concepts, which means it’s nearly impossible to predict the system’s output.”
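
To make the “just matrix multiplication” remark concrete, here is a minimal, purely illustrative sketch in Python (not code from Tomczak’s research): a toy network that stacks a few random weight matrices, so each layer combines, or “correlates”, the outputs of the previous one.

```python
import numpy as np

# Purely illustrative sketch: at its core, a neural network is repeated
# matrix multiplication with a simple nonlinearity in between. Each layer
# combines ("correlates") the features produced by the previous layer, so
# deeper layers capture correlations of correlations.

rng = np.random.default_rng(0)

def layer(x, in_dim, out_dim):
    W = rng.normal(size=(in_dim, out_dim))  # "learned" weights (random here)
    return np.maximum(0.0, x @ W)           # matrix multiply + ReLU

x = rng.normal(size=(1, 8))        # a toy input vector
h1 = layer(x, 8, 16)               # first-order correlations of the input features
h2 = layer(h1, 16, 16)             # correlations between those correlations
y = h2 @ rng.normal(size=(16, 1))  # final readout

print(y.shape)  # (1, 1): one output, produced by nothing but matrix algebra
```

Even in this toy form, the output depends on every weight in every layer, which is why predicting what a full-scale system will say by inspecting its parameters quickly becomes hopeless.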

As a result, neural networks are able to recognize larger structures in data (so-called long-range dependencies), says Tomczak. “Say you have a picture of a face. If I randomly remove a few pixels from that picture, my human brain can still recognize that it’s a face. Simple computer models cannot. But what’s fascinating about neural networks is that they can learn to recognize the face despite the missing pixels. They can analyze one of the pixels in the image and determine from some of its values that it’s part of a larger, complex structure.” You can apply the same principle to text, he continues. ChatGPT has to fully process the prompt that someone enters, the short piece of text that serves as the starting point for generating a response. Only by the end of the sentence does it have all the information it needs to complete the task.
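
The long-range dependencies Tomczak describes are what the attention mechanism in transformer models is built to capture. Below is a hedged, self-contained sketch of scaled dot-product self-attention over a toy prompt; the tokens, embeddings and weights are random placeholders, not anything taken from ChatGPT itself.

```python
import numpy as np

# Sketch of scaled dot-product self-attention: every position in a sequence
# can look at every other position. These connections across the whole input
# are the "long-range dependencies" mentioned above.

rng = np.random.default_rng(1)

tokens = ["explain", "why", "this", "meme", "is", "funny"]
d = 4
X = rng.normal(size=(len(tokens), d))        # toy embeddings, one row per token

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)                # how strongly each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
context = weights @ V                        # each token's new representation mixes in all the others

print(weights.shape)  # (6, 6): the final token can draw on the entire prompt
```

The point is only that every position can draw on every other position, which is how such a model can take the whole prompt, or the whole image, into account before producing an answer.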

Understanding

Both the fact that ChatGPT “understands” the prompt and the fact that it subsequently rolls out entire sentences that make perfect sense as results could give the impression that the system actually understands the meaning of concepts and words. Whether that is indeed the case is difficult to determine and depends on how you define the word “understand”, says Tomczak. “Some generative AI models do show signs of understanding certain concepts. The other day I saw an example of an AI-generated image from a text prompt of a raccoon wearing an astronaut helmet. That couldn’t have been in the training data because it was an unrealistic concept. Still, the generated image looked exactly as you would expect it to.”

Another example of GPT-4 seeming to have some kind of understanding of concepts is one that has been going around on the internet. The system is asked to explain what is humorous about a meme of a smartphone being charged with a VGA cable. GPT-4 was able to do that, and stated: ‘The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port.’ But does that mean that the system actually understands what a phone is, that you can use it to make calls and send messages, for example? “In a sense, yes”, says Tomczak. Transformer models such as ChatGPT create a sort of knowledge graph, which visualizes connections between words. “When people create such a graph, they assign names or labels to all the nodes. Neural networks do something similar. Except they don’t give those nodes a label that is recognizable to us, but rather an imaginary label. What matters is that they make similar kinds of connections.”
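
The “imaginary labels” Tomczak mentions are, in practice, embedding vectors. The sketch below uses small hand-made vectors (purely hypothetical numbers, not real model internals) to show how closeness between vectors plays the role of an edge in a knowledge graph.

```python
import numpy as np

# Illustrative sketch, not actual ChatGPT internals: a model's "nodes" are
# vectors without human-readable labels. Related concepts end up close
# together, which is the network's version of a knowledge-graph connection.

# Hypothetical hand-made embeddings, chosen only to make the idea visible.
vectors = {
    "phone":   np.array([0.9, 0.8, 0.1]),
    "charger": np.array([0.8, 0.9, 0.2]),
    "raccoon": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way, 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["phone"], vectors["charger"]))  # high: strongly connected concepts
print(cosine(vectors["phone"], vectors["raccoon"]))  # low: weakly connected concepts
```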

The chatbots appeared to have developed a language of their own

By correlating correlations, AI models may also develop new behaviors that are not always immediately understandable to humans. This is also what happened with bidding chatbots developed by Facebook in 2017, Tomczak says. “There were two bidding bots equipped with language models that were able to communicate with each other. After a while, those bots started sending each other very strange messages using some sort of codes that were not understandable to people. At first, the developers thought they were just malfunctioning, but later it turned out that they were still executing their duties as bidding chatbots. They appeared to have developed a language of their own.”

Sci-fi

That kind of development does not make Tomczak nervous, however, because it was an isolated system that did nothing more than optimize its own functionality. It is only when systems start to communicate with other systems that problems could arise, he says. “Imagine, for example, an AI system spreading viruses to take control of government systems. Or AI systems on smartphones that communicate with each other, thereby draining the battery and making phones unusable. Although that all still sounds a bit sci-fi.”

EAISI director Nuijten wants to know what we can do to prevent such situations. Tomczak: “I don’t think there’s much we can do; so many people are working on this that it would be impossible to stop all of them.” Fortunately, he says, too many capabilities are still missing for us to be anywhere near artificial general intelligence (AGI), which he describes as a fully autonomous, interactive system that imitates the intelligence of humans and animals. However, one development that he believes would make a big difference, and that would be cause for concern, is if the system no longer forgot the data it learns from. “The system would then be able to use billions and billions of images and other data to learn from in a sequential manner. It could keep learning continuously. That would truly be moving towards a form of intelligence.”

A cause for concern would be if the system no longer forgot the data it learns from

Given the way large companies in particular are going about it these days, caught up as they are in some kind of rat race, Tomczak doesn’t think such developments will be achieved any time soon. “They’re mostly concerned with scaling up, but what you really need is new functionality.” Maybe there is a very simple solution and someone just needs to come up with the right idea, he speculates. This reminds Nuijten of people’s desire to be able to fly: “The first attempts to achieve that were by imitating birds. But then we found a much simpler solution. Just go very fast and make sure you have the right aerodynamics,” he jokes.

Tomczak likes this analogy. “Engineers might say that a plane is extremely complicated, but to us, the principle of it is actually quite simple. The execution, that’s another story. You could say the same thing about neural networks, although they’re not ‘flying’ yet. We do have terrific electronics, plenty of seats, everything looks great, but we’re missing the engines. That is why I think scaling up is not the ultimate solution, because it doesn’t help you with the engine problem. The plane will just roll faster, but it won’t get off the ground.”

Other tools

Still, according to Tomczak, some AI systems are already dangerous right now: “Just look at what’s happening in Moscow, where they have cameras at the entrance to subway trains, making it easier to arrest people. And look at China, where people are being tracked everywhere with AI. And, honestly, America using AI in the military. These are scary things.” On the other hand, AI is also being used to do a lot of good. “Simple applications that can recognize skin cancer are already making a huge difference. And just last week I read about a generative model that had discovered a new kind of antibiotic.”

He also has high hopes for something called “no-code programming”, which as far as he knows can write code nearly flawlessly. “There was an experiment where they had a German software engineer with twenty years of experience perform the same task as the AI system. It took the engineer twice as long, and he only managed to complete 20 percent of the task, while the system completed 90 percent.” Tomczak says that this is the future and there is no need to be afraid of it. It’s just a matter of using different tools. “It happens all the time in computer science; that’s nothing new. If language models were able to assist us on a daily basis, that would only be a great help.”
