Is artificial intelligence (AI) a threat to mankind? Could AI ever take over our computer systems? Could such a system influence our behaviour without our being able to resist it? Opinions are divided. Perhaps the real problems lie elsewhere.

Takeover
Dutch newspaper NRC Handelsblad devoted a whole series of articles to AI: on June 3, June 24, July 8 and August 26, 2023, and April 25, 2025. Het Parool followed suit on May 27 and September 23, 2023. What do we learn from this debate? It appears that our outlook on AI has changed. About ten years ago, we didn't regard computers as a threat to mankind. Nowadays we no longer make fools of ourselves if we confess to being afraid of AI becoming too smart and taking the lead. AI would become so powerful that we could no longer control it.
This notion follows an old sci-fi subculture, which proposes that the forces created by mankind will in the end become stronger than their creators. As a result, we would no longer be able to control developments: AI as a self-propelled natural force that will come to rule us. In line, by the way, with the self-image of many AI developers, who regard mankind and the world as entities that can ultimately be captured entirely in models. Models keep growing, and at a certain point they would be able to take control. Gradually, computers would develop a kind of superintelligence that we could not resist. Mankind as such would become extinct: a runaway dystopia, much more imaginative than 'real' problems like inequality, the climate and discrimination.
Stagnation
But the scenario doesn't hold. Fundamentally there has been no such progress, for years on end, in spite of all the predictions about uncontrollable computer systems, and even though computers have become much more powerful. Many AI developers nevertheless regard the world, including mankind, as something that can be fully captured in data. A level-headed inspection, however, would reveal that our problems are rather mundane: inequality, a runaway climate, discrimination. None of these are problems we can blame on the computer.
On the contrary, AI developers will maintain that we can get all these problems under control, if only we put enough money into AI's basket. But do we really need to? AI's concept of mankind is dominated by the idea that man is essentially a cold-blooded, rational being. Insofar as reality doesn't conform to that concept, mankind itself would have to be restructured to fit it. But that is exactly where the rationalistic image of mankind defeats itself: there is no reason whatsoever why we should restructure reality along rationalistic lines.
A storm in a teacup
According to Emily M. Bender, a professor at the University of Washington in Seattle, AI systems will not become more intelligent than mankind, let alone threaten us with extinction. After all, so-called artificial intelligence is nothing more than a kind of automation and text synthesis. The whole notion of this technology threatening mankind is nothing more than salesmanship.
Bender also opposes the use of tools like ChatGPT. This chatbot draws on a whole array of texts, true or untrue, without giving us any way to check where they come from. Bender speaks of pollution of the information ecosystem, in which non-information can spread unchecked. She would applaud it if OpenAI, the organization that developed ChatGPT, were held accountable for ChatGPT's output and its errors: 'For instance if it provides information that contains medical errors, or libel.'
Still a long way to go
But mankind is lazy. We often accept, without further ado, information that comes from the computer; yet the computer cannot think. What computers are good at is recognizing patterns in words and sentences: if words appear in each other's neighbourhood, the computer will connect them, often with a convincing result. For some years now, OpenAI has been collaborating with Microsoft: OpenAI intends to weed the mistakes out of the answers, and Microsoft intends to improve its language models. But the result may not even come close to human consciousness. As long as information-processing systems do not have a body that experiences hunger, pain, pleasure or the need for physical survival, they lack the biological anchors that lie at the foundation of our consciousness.
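To make that pattern matching concrete, here is a minimal sketch in Python (my own illustration, not taken from the article, with an invented toy corpus): a bigram model that connects words appearing in each other's neighbourhood and then produces plausible-looking text without any understanding of what it says.

import random
from collections import defaultdict

# Toy corpus; any text will do.
corpus = ("the computer recognizes patterns in words and sentences and "
          "the computer connects words that appear near each other").split()

# Record which words follow which: the 'neighbourhood' statistics.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# Generate text by repeatedly picking an observed neighbour.
word = "the"
output = [word]
for _ in range(10):
    candidates = follows.get(word)
    if not candidates:          # dead end: this word was never seen mid-text
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))         # often fluent, never 'thought'

The output can read surprisingly smoothly, which illustrates why we so readily mistake statistical fluency for thinking.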
In an article in the newspaper Het Parool, Bennie Mols sets out, based on a piece by Gary Marcus and Ernest Davis, the steps AI would need to take to imitate the human mind. For starters, such systems should develop a basic understanding of time, space and causality. In addition, they need to learn to reason on the basis of uncertain information. Of course, they also need to observe their surroundings and learn to use language. Building on those skills, machines should learn to understand AI itself, which is required for self-correction. Moreover, AI would have to become superintelligent and know how to acquire its own energy. And finally, it would have to become so smart that people could no longer switch it off. In short: an unlikely series of events.
Not innocent
But all this doesn't mean that AI is innocent. AI could lead to discrimination, copyright infringement and possibly criminal acts such as child pornography. An independent authority should be able to investigate AI: not an easy task in an industry that is developing fast. The European Union is trying to pave the way with a proposed law (the AI Act) specifically intended to contain AI's excesses. We need a clear explanation of the law's intentions, and transparency, so that AI stays within the boundaries of democratic values and governance. That would put an end to the unregulated space in which developers currently operate.
Interesting? Then also read:
Engineering life will require responsibility and control
Artificial intelligence (AI): a temporary moratorium?
ChatGPT: artificial intelligence penetrates deeper into written texts