My presentation was a continuation of the class discussion about artificial intelligence. Specifically, it focused on two concepts: the relationship between artificial super-intelligence (ASI) and the environment, and the exponential pace of technological evolution. I used Tim Urban’s two-part series, The AI Revolution, as the basis of the conversation. His essays explore the rise of ASI and the existential risk it poses to humanity.
A good portion of class discussion time was spent trying to comprehend the incomprehensible intelligence of ASI. It is hard to grasp that an ASI would not only be thousands or millions of times faster than a human mind, but also able to understand and think at levels we literally cannot imagine. This part of the discussion related most closely to the pace of technological evolution. The current pace of technological improvement is exponential, which makes it likely that an AGI (artificial general intelligence) will be invented within the next century. Once the first ASI is developed, it will quickly be able to improve the rate at which it improves itself, essentially undergoing expedited super-evolution. I argued in the discussion that this poses serious risks to humanity because we will be unable to understand or control an ASI. We must therefore be extraordinarily careful when creating ASIs, ensuring that their goals align with the complex goals of humanity.
Part of the class was a mostly theoretical discussion of the consequences a functional ASI would have for the environment. Urban’s example of Turry, a robot designed to write thank-you notes that achieves ASI, was pertinent here. Turry’s singular goal was to write notes, and as an ASI Turry was so effective at this job that it eventually took over the entire earth. We talked about the necessity of programming ASIs so that protecting the environment is part of their foundational goals.
One interesting debate near the end of class centered on whether the environment would remain relevant after the development of an ASI. The final consensus was that humans have a moral obligation to protect the environment, regardless of whether doing so is essential to the survival of humanity.
It was difficult to have a meaningful discussion of ASI and the future of the environment because the subject is complex and difficult to grasp without intimate knowledge of ethics, philosophy, and engineering. Still, the discussion seemed to serve as a foundation of interest from which to build and explore. Urban’s easy-to-understand prose and handy charts were a fantastic starting point for a conversation that has hopefully sparked both the curiosity and the worry of the students.
Tim Urban’s two-part series can be accessed here and here.
Last updated 9/19/16