'ChatGPT is especially good at bluffing'

Everyone is talking about the AI chatbot ChatGPT, which seems capable of doing pretty much anything. It has the potential to trigger changes across science, literature and education. Yet the general feeling during a debate at the Royal Netherlands Academy of Arts and Sciences was that it still has much to learn.

"It's a bit like the invention of photography”, writer and poet Hannah van Binsbergen said on Monday at a meeting of the Royal Netherlands Academy of Arts and Sciences. "Artists either rejected it out of hand or embraced it, taking advantage of the new photographic perspectives it offered."

New skills

The new artificial intelligence program writes essays and literature and drafts research papers on command. With its apparent ability to do everything, the program raises all sorts of questions. Will the chatbot soon be able to write as well as we do? Can it think for itself and devise creative solutions? What does this mean for education and science?

That sense of wonder and those questions are understandable, says language technologist and cognitive scientist Jelle Zuidema. "The system behind the chatbot is the latest in a technology known as large language models." Scientists have been working on these models for decades, but in the last three years development has accelerated to the point that the programs have learned all kinds of new skills. They do so in dialogue with input from users and programmers.

According to Zuidema, these types of models are set to change the world. "Not because they are capable of independent, logical thought processes, but because they are designed to predict, in as many cases as possible, what is needed to produce a potential answer to a question."

Generic

It is important to make a number of qualifications, however. Author Hannah van Binsbergen highlighted the self-censorship imposed on the chatbot by its programmers, which, she says, makes the creation of solid literary texts impossible for the time being. Responses tend to be highly generic. She asked ChatGPT what the chatbot could do for writers, for example. Amongst its replies, it wrote that it can help by "broadening cultural horizons and expanding what the concept of literature can be". That sounds promising, but what it actually means in practical terms remains unclear.

And there are other problems, she says. "The program has no difficulty generating texts, but is barely capable of relating to style and form." To illustrate her point, she asked ChatGPT to write a poem in the style of Lucebert, and out came a semi-rhyming poem, which Van Binsbergen said had nothing to do with the poet.

Furthermore, the chatbot is trained with the help of underpaid Kenyan workers to give politically correct answers. "What's more", said Victor Gijsbers of the Institute for Philosophy of Leiden University, "the program is totally incapable of answering any politically sensitive or moral questions". And if you ask it why, "it replies that it is important for people to form their own opinions."

Truthfulness is not important

That brought Gijsbers to a sensitive issue. He says that the chatbot is not programmed for truthfulness. That contrasts with earlier artificial intelligence systems, which were programmed with that in mind, likely because OpenAI, the company behind ChatGPT, did not consider it important. By way of example, Gijsbers presented the chatbot with a problem that had already been solved by a program from the 1960s. Several times, the chatbot gave incorrect answers. The program also occasionally does strange things. "If you ask ChatGPT ten times what the capital of the Netherlands is, it will say Amsterdam nine times and Rotterdam once."

Producing coherent sections of text that are factually correct is also frequently beyond it, says Zuidema. The chatbot tries to rely on existing web texts in its responses, but when it fails to find them, it bluffs its way through and fills in nonsense, without citing any sources. A new online program from Microsoft's search engine Bing takes the opposite approach and looks for sources matching the response first, says Zuidema. "Although the coherence and selection of the information is not always good even then, it is at least possible to check where the information comes from."

Pen and paper

Students, at least, seem to be embracing the new technology en masse, using ChatGPT to write entire essays. The question is, are they still encouraged to think critically? “That really is something of a dilemma for lecturers”, jokes Gijsbers. “We evidently need to find another way of testing knowledge. No fancy equipment, just good old pen and paper in an examination hall.”

But there are also benefits for lecturers. “At the moment, it takes a lot of time and effort to explain to students how to write an essay. In the foreseeable future, programs will become available that I’m sure will be very helpful in this regard."

Just as good?

Responding to a question from the audience, the speakers said that they did not believe that ChatGPT could have prepared as good a talk as they had given during the symposium. According to philosopher Gijsbers, speakers have to tell a story that is true and has value for the audience. "I think there is something very different going on in the relationship between you and me than in the relationship between you and technology."
