Stanford Existential Risk Initiative tackles global threats – The Stanford Daily

Stanford launched the official Stanford Existential Risk Initiative (SERI) on Friday, which aims to foster engagement from students and professors to produce meaningful work toward preserving the future of humanity. Specifically, its goal is to prevent global catastrophic risks (GCRs): risks that threaten to destroy human civilization or drive the entire species extinct.

The organization's current plans include hosting a speaker series of prominent figures in the world of GCR-mitigation, such as "The Precipice" author Toby Ord, as well as offering $7,500 stipends for undergraduate summer projects. Funding for between 10 and 20 projects will be provided by Open Philanthropy. And during the academic year, SERI hopes to integrate its offerings into the brand-new Catastrophic Risks and Solutions (CRS) concentration in the Science, Technology and Society (STS) major.

SERI, hosted under the Freeman Spogli Institute for International Studies, is the product of an alliance between the STS program, the Center for International Security and Cooperation (CISAC), Stanford Global Health and the existing local effective altruism community. Its co-leaders are professor Stephen Luby in the School of Medicine and professor Paul Edwards, the director of the STS program.

Luby and Edwards met over shared interest in teaching a course on human extinction at Stanford. That course, THINK 65: Preventing Human Extinction, is now in its second year and has about 100 students.

Edwards told The Daily that Luby reached out to him with the original idea for the class.

"Right after I was hired at Stanford, before anything had happened and while I was still in Ann Arbor, Steve wrote me and said, 'Do you want to teach a course on preventing human extinction?'" Edwards said. "And I had never met him, I knew nothing about him. I saw this email and immediately said, 'Yes.' I have been working on climate change for the past 30 years, and before I worked on climate change I mainly worked on the role of computers in nuclear war."

Climate change and nuclear war are two of the four focuses of the class; the others are uncontrollable artificial intelligence and, of course, plague, whether natural or artificial.

Luby described the possibility of human-caused extinction as a new phenomenon in his lifetime.

"I don't think we got to the point where humans could destroy civilization until the advent of nuclear arsenals," he said.

But the course has not treated COVID-19 as a global catastrophic risk. Luby, an expert epidemiologist who spent time at the Centers for Disease Control and Prevention (CDC), said that the kinds of worst-case scenarios the course describes, short of extinction, are things that would cause us to lose all our scientific understanding and all our art, and that would reduce humanity to being largely illiterate: things with no hope of recovery. COVID-19, by contrast, is unlikely to alter the long-term trajectory of mankind.

The course discusses the novel coronavirus in the context of previous disasters of even larger scale: the bubonic plague, which shut down the Silk Road and emptied entire cities in Europe in the Late Middle Ages, or the Old World epidemics that ravaged New World civilizations and tribes post-contact.

"We taught that course last spring for the first time, and we were surprised that it generated so much interest," Luby said. "Then someone at Effective Altruism, maybe Kuhan, reached out to us, and Open Philanthropy reached out to me, and as part of that conversation we thought about how we could help build skills and capacities for students who are interested in taking global catastrophic risk seriously."

Effective altruism is a philosophy that focuses on doing good based on evidence rather than intuition or tradition. It also emphasizes diverting college students toward high-impact careers to fight challenges such as runaway climate change in the current century.

Kuhan Jeyapragasan '20 M.S. '20, the president of student group Stanford Effective Altruism, is a student collaborator in SERI who linked Luby and Edwards to Open Philanthropy funding. He described the issues of civilizational collapse and mankind's extinction as a natural extension of general effective altruism thought.

"A lot of effective altruists are focused on making sure the future goes well, and going along with that is the idea of preventing really bad catastrophes that would either cause extinction or the permanent downfall of civilization," Jeyapragasan said.

Jeyapragasan, who has led some work for the summer program, designed the project to immerse ambitious students in a full-time network of people and ideas. To apply, students had to find a mentor and compose a project proposal. Jeyapragasan described being "blown away" by scores of detailed applications and technical project ideas, focused on issues such as nuclear policy and bioengineering and, most of all, climate change.

Accepting all proposals would require millions of dollars, which is beyond SERI's reach. But interested students unable to secure summer funding can still participate in SERI through part-time projects or its upcoming speaker series, which will include mentors from the summer program as well as Stanford professors focused on issues such as climate change. SERI also hopes to develop relationships with similar groups at other universities, such as the Nuclear Threat Initiative at MIT, headed by former secretary of energy Ernest Moniz.

The question of nuclear destruction is the original spark for all modern extinction-prevention efforts, even as the number of ways humanity might destroy itself has ballooned.

SERI is Stanford's newest answer to the question of extinction. Future speaker events, virtual or in person, will be publicized through Stanford mailing lists and interest groups.

Jeyapragasan said that SERI is not an outlet for pessimistic prognosticators, but for hopeful optimists.

"If you look at how humanity has progressed in terms of alleviating poverty and on social issues, education, all these other fronts, we've done a pretty miraculous job," Jeyapragasan said. "And I think we will continue to find solutions and that the future will be really great. The problems we face today, like climate change, we can solve. And eventually we might eradicate existential risk, or at least anthropogenic existential risk."

Contact Cooper Veit at cveit at stanford.edu.
