Artificial intelligence (AI): a temporary moratorium?

Thousands of artificial intelligence researchers and developers recently called for a six-month moratorium on the development of advanced AI. ‘Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.’ The signatories propose using this half year to jointly develop shared safety protocols for the design of advanced AI.

Artificial Intelligence via www.vpnsrus.com. Photo: mikemacmarketing, Wikimedia Commons.

A call

Artificial intelligence is developing rapidly. Its opportunities are enormous: rewriting texts, generating and editing photos, even composing music. But, as the letter puts it, ‘advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.’ Unfortunately, the letter continues, this isn’t happening. ‘Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.’

But we are not entirely helpless vis-à-vis AI-generated texts. A tool like Originality.AI claims it can tell with 94% accuracy whether a text was created using ChatGPT. And the mere opportunity creates its own countervailing forces. The site change.inc devoted attention to it, in an article by Romy de Weert (in Dutch). And ‘AI godfather’ Geoffrey Hinton, who worked on AI systems at Google for many years, is one of the signatories of the declaration. He left Google in order to be able to warn openly about the threats of artificial intelligence, he told The New York Times. But so far, few companies seem to take this call to heart. For instance, ChatGPT maker OpenAI is not participating.

What are the dangers of artificial intelligence?

In their open letter, the researchers wrote: ‘contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?’

But of course, artificial intelligence doesn’t only have dubious consequences. On the contrary: precisely in order to safeguard the positive ones, we need a safety protocol, according to the signatories. Romy de Weert emphasises the assistance artificial intelligence can lend in tackling sustainability problems: AI can make very good climate predictions, help optimize energy use, and assist in urban design. As the letter puts it: ‘AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.’

Towards AI-safety

In parallel, according to the signatories, we should improve the robustness of AI systems together with policy makers. For instance, through ‘new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.’

If we do this, we will truly be able to reap the benefits of artificial intelligence, the letter concludes: by engineering these systems for the clear benefit of all, and by giving society a chance to adapt.

Interesting? Then also read:
ChatGPT: artificial intelligence penetrates deeper into written texts
The energy transition is a digital transition too
Engineering life will require responsibility and control
