2. Artificial Intelligence
Artificial Intelligence poses a danger to humankind. Agree or disagree?
INTRODUCTION
Key figures in artificial intelligence want the training of powerful AI systems to be suspended amid fears of a threat to humanity. Elon Musk and other prominent figures signed the letter, including Apple co-founder Steve Wozniak, former Democratic presidential candidate Andrew Yang, and Marc Rotenberg, president of the nonprofit Center for AI and Digital Policy. In all, the letter features more than 1,000 signatories, including professors, tech executives, and scientists.
Published on March 22, 2023, the open letter is a tech leaders' plea to stop dangerous AI: it warns of potential risks and says the race to develop AI systems is out of control.
The letter states that AI systems with human-competitive intelligence can pose profound risks to society and humanity, and should be planned for and managed with commensurate care and resources. Instead, the signatories argue, AI labs are locked in an out-of-control race to develop and deploy this technology — systems that no one, not even their creators, can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and as a society we must ask ourselves what risks we are willing to accept.
The leaders suggest that these decisions must not be left to unelected tech leaders; rather, powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
The letter references OpenAI's recent statement on artificial general intelligence: "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." The signatories agree, and say that point is now.
As such, they have called for an agreed, immediate pause of at least six months on the training of AI systems, to ensure those systems are safe beyond a reasonable doubt, and to refocus AI research and development on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
They want AI developers to work with policymakers to develop robust AI governance systems that:
• help distinguish real content from synthetic
• track model leaks
• create an auditing and certification ecosystem
• define liability for AI-caused harm
• develop public funding for technical AI safety research
• support well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause
Overall, they believe this will allow humanity to enjoy a flourishing future with AI: reaping the rewards, engineering these systems for the clear benefit of all, and giving society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects, and they believe we should do so here.
REFLECTIVE QUESTIONS
RESOURCES
Suggested Websites:
World Book article
Search the Shortis Library for digital resources & videos
References on this page:
https://forms.microsoft.com/r/ta29iZVYqV
https://forms.microsoft.com/r/ViqYWASDbw