Elon Musk joins other tech experts in call for AI pause over risks to humanity
March 29, 2023
By Onome Amuge
Over 1,000 artificial intelligence experts and researchers, including Elon Musk, co-founder of OpenAI, the research lab responsible for ChatGPT and GPT-4, have called on AI labs around the world to pause development of large-scale AI systems for at least six months, so that the capabilities and dangers such systems pose to humanity can be properly studied and mitigated.
The demand was made in an open letter published by the nonprofit Future of Life Institute, and its signatories include Emad Mostaque, founder of London-based Stability AI; Steve Wozniak, co-founder of Apple; and engineers from DeepMind, Microsoft, Meta, Google and Amazon, among other notable players in the tech industry.
The letter reads in part: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
According to the open letter, contemporary AI systems are now becoming human-competitive at general tasks, and decisions that risk the loss of human control over civilisation must not be delegated to unelected tech leaders.
The experts argued that powerful AI systems should be developed only once developers are confident that their effects will be positive and their risks manageable, and that this confidence must be well justified and increase with the magnitude of a system’s potential effects.
Thus, they requested that all AI labs immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause, they noted, should be public and verifiable, and include all key actors, adding that governments should intervene and institute a moratorium if the pause is not quickly enacted.
AI labs and independent experts were also enjoined to use the pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” they demanded.
The letter also tasked AI developers with working with policymakers to dramatically accelerate the development of robust AI governance systems, which should at a minimum include:
-New and capable regulatory authorities dedicated to AI;
-Oversight and tracking of highly capable AI systems and large pools of computational capability;
-Provenance and watermarking systems to help distinguish real from synthetic and to track model leaks;
-A robust auditing and certification ecosystem;
-Robust public funding for technical AI safety research; and
-Well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause.