An AI Too Dangerous for the Public?


I was browsing for technologies that may have significant ethical, economic, social, or environmental implications as part of a course I’m taking right now. I was stopped dead in my tracks by a headline that read ‘The AI That’s Too Dangerous to Release’. I thought to myself, “ok, here we go; technology is rising against us.”

OpenAI, the research lab co-founded by Elon Musk, recently unveiled a language model that can generate remarkably accurate, human-like text, including news reports based on absolutely nothing. The company created GPT-2 but released only a smaller version of the model, alongside a few fake news articles the model produced after being given a short prompt. The full model was withheld from the public because OpenAI deemed it far too dangerous, foreseeing its use for more malicious ends.
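To make the idea of prompt-based generation concrete: a language model repeatedly predicts the next token from the text so far. GPT-2 does this with a massive Transformer trained on web text; the toy sketch below does the same thing with a tiny bigram model over a handful of sentences, purely as an illustration of the mechanism (the corpus and function names here are invented for the example, not part of GPT-2).

```python
# Toy illustration (NOT GPT-2): a language model extends a prompt by
# repeatedly predicting the next word from the preceding context.
from collections import Counter, defaultdict

# A tiny stand-in "training corpus" (GPT-2 used millions of web pages).
corpus = (
    "the model writes the news . "
    "the model writes the story . "
    "the news is fake . "
).split()

# Count which word follows which (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt, length=6):
    """Greedily extend the prompt with the most likely next word."""
    words = prompt.split()
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break  # no continuation seen in training data
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 3))  # → "the model writes the"
```

The difference between this sketch and GPT-2 is scale and architecture, not the basic loop: GPT-2 conditions on the entire preceding context rather than one word, which is what lets it produce whole coherent articles from a prompt.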

So, an AI that can generate very convincing text ranging from fabricated news articles to poems, stories, research papers, and more? I don’t know about you, but it sure does sound threatening to me.

There is a common belief that somewhere down the evolutionary line of mankind, technology will take the top of the hierarchical pyramid, and you either get with it or get left behind. In fact, the very same Elon Musk of OpenAI set up another related company by the name of Neuralink, which looks into linking the human brain to a computer interface in order to eventually realise human symbiosis with AI, should AI ‘take over’. But GPT-2 is here now; it’s not in some distant, probable future. The implications of an AI writing news articles that are indistinguishable from human-made ones, and that can be based on little to no truth, are grave.

It is impossible to know the limits of what this AI is capable of, especially with a training corpus drawn from millions of pages across the web. Does this then have an impact on the future of publications and mass-media regulation? Of course, there may be far simpler and more innocent uses for this technology; however, where do we draw the line when it comes to technology ‘bettering’ human life?

I am very curious as to what comes next. If this is the reduced model, what is the full model capable of? And how does OpenAI then manage the ethical implications of publishing such a model? I can’t help but think that this is yet another publicity stunt to attract more attention and possibly raise the financial stakes. Is the age of indistinguishable human-like AI finally upon us? And, if OpenAI was founded not only to mimic the human brain but to surpass human intelligence and democratize artificial general intelligence (AGI), is GPT-2 actually the most threatening project the company is withholding from the public?
