Seeking the optimal philanthropic strategy: Global Warming or AI risk?

Over on Beware of the Train some people are discussing what the optimal philanthropic strategy is for people who want to do their bit to help the world as a whole, save people's lives, etc.

The two options under consideration are: (a) mitigating anthropogenic global warming, and (b) working to reduce the risk from artificial intelligence. To quote pozorvlak:

Last night I made a serious strategic error: I dared to suggest to some Less Wrongers that unFriendly transcendent AI was not the most pressing danger facing Humanity.

In particular, I made the following claims:

  • That runaway anthropogenic climate change, while unlikely to cause Humanity's extinction, was very likely (with a probability of the order of 70%) to cause tens of millions of deaths through war, famine, pestilence, etc. in my expected lifetime (so before about 2060).
  • That with a lower but still worryingly high probability (of the order of 10%) ACC could bring about the end of our current civilisation in the same time frame.
  • That should our current civilisation end, it would be hard-to-impossible to bootstrap a new one from its ashes.
  • That unFriendly AI, by contrast, has a much lower probability of causing catastrophe in that time frame. […]