Yr 7 Inquiry Project

Artificial Intelligence

Artificial Intelligence poses a danger to humankind. Agree or disagree?

INTRODUCTION

Key figures in artificial intelligence want the training of powerful AI systems to be suspended amid fears of a threat to humanity. Elon Musk signed the open letter along with other prominent figures, including Apple co-founder Steve Wozniak, former Democratic presidential candidate Andrew Yang, and Marc Rotenberg, president of the nonprofit Center for AI and Digital Policy. In all, the letter features more than 1,000 signatories, including professors, tech executives and scientists.

Published on March 22, 2023, the open letter is the tech leaders' plea to stop dangerous AI: it warns of potential risks and says the race to develop AI systems is out of control.

The letter states that AI systems with human-competitive intelligence can pose profound risks to society and humanity, and should be planned for and managed with care and resources. Instead, the signatories argue, developers are locked in an out-of-control race to develop and deploy this technology, which no one, not even its creators, can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and as a society we must ask ourselves:

  • Should we let machines flood our information channels with propaganda and untruth? 
  • Should we automate away all the jobs, including the fulfilling ones? 
  • Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? 
  • Should we risk loss of control of our civilization? 

The leaders argue that these decisions must not be left to unelected tech leaders; rather, powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
The letter referenced OpenAI's recent statement regarding artificial general intelligence, stating that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." They agree. That point is now.
 

As such, they have called for an agreed, immediate pause of at least six months on the training of AI systems, to ensure that these systems are safe beyond a reasonable doubt. This would allow AI research and development to be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
They want AI developers to work with policymakers to develop robust AI governance systems that:
•    help distinguish real from synthetic
•    track model leaks
•    create an auditing and certification ecosystem 
•    define liability for AI-caused harm 
•    develop public funding for technical AI safety research
•    support well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause

Overall, they believe this will allow humanity to enjoy a flourishing future with AI: reaping the rewards, engineering these systems for the clear benefit of all, and giving society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects, and they believe we should do so here.

REFLECTIVE QUESTIONS

  1. What is artificial intelligence?
  2. What are the risks of artificial intelligence?
  3. What are the benefits of artificial intelligence?
  4. How will artificial intelligence impact society?
    e.g. How will artificial intelligence shape the future of education?
    How will artificial intelligence change the workforce?
  5. How will artificial intelligence change what it means to be human?
  6. Should we use artificial intelligence to enhance human abilities?
  7. How can we make sure artificial intelligence technologies are ethical?
  8. In today's world, does humankind fear or embrace artificial intelligence in our daily lives?

References on this page:

  • SAS Institute. (n.d.). Artificial Intelligence (AI): What it is and why it matters. https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html
  • Thomas, M., & Urwin, A. (2022, January 25). 8 Risks and Dangers of Artificial Intelligence (AI). Built In. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
  • Marr, B. (2022, October 12). Is Artificial Intelligence (AI) A Threat To Humans? Forbes. https://www.forbes.com/sites/bernardmarr/2020/03/02/is-artificial-intelligence-ai-a-threat-to-humans/?sh=539bab98205d
  • Smith, J. (2023, March 30). Elon Musk calls for pause on developing 'dangerous' AI. Mail Online. https://www.dailymail.co.uk/news/article-11914149/Musk-experts-urge-pause-training-AI-systems-outperform-GPT-4.html
  • Vallance, C. (2023, March 30). Elon Musk among experts urging a halt to AI training. BBC News. https://www.bbc.com/news/technology-65110030
  • Doermann, D.S. (2024). Artificial intelligence. In World Book Student. https://www.worldbookonline.com/student-new/#/article/home/ar032470

Self-Evaluation - Microsoft Form

https://forms.microsoft.com/r/ta29iZVYqV

Reflection - Microsoft Form

https://forms.microsoft.com/r/ViqYWASDbw