James Hughes’ critique of a benevolent superintelligent singleton

Over on IEET, James Hughes wrote an odd article critiquing the FAI proposal. It is best summarized by Kaj Sotala:

"Interesting essay. Still, I found it a bit puzzling. I was expecting to find some sort of rebuttal of what you term "technocratic absolutist" arguments, but could see none. In the end you just state that "In response, we defenders of liberal democracy need to marshal our arguments for the virtuous circle of reinforcement between human technological enablement and self-governance", but don't actually provide any such arguments.

As far as I can see, you pretty much wrote an essay supporting exactly the view you're opposing. After all, you repeated many of the "technocratic" arguments, but provided no support at all for the "liberal democratic" side. Am I missing something here?"

Hughes responds:

"The idea of benevolent totalitarianism is in principle offensive to me, because I subscribe to those other values about the importance of individuals creating themselves and governing themselves through discussion."

- which seems an absurd statement. Democracy is a tool for getting outcomes that we want - but perhaps we have had to fight so hard against promises of totalitarian utopia in the past that many people have developed an unbreakable, irrational hatred of anything other than democracy?

This may be a case of rational irrationality: for agents of bounded intelligence who might be preyed upon by would-be dictators armed with oh-so-convincing arguments for their totalitarian utopia, the only winning cognitive algorithm may be to reject all such arguments out of hand, no matter how convincing they sound. Hughes would rather tolerate a highly suboptimal outcome - life without an FAI - than give in to his innate sense that totalitarianism is wrong. And given the information he has, this is probably the right thing to do. Alas.

One way around this impasse might be to require critics to demonstrate enough knowledge of the FAI proposal to switch off this drive to unconditionally reject certain tempting ideas. Perhaps critics would have to pass some certifying exam in Friendly AI theory before being allowed to critique it - after all, that is what other professional specialisms require. Critics would not be allowed to dismiss the LHC or laser fusion projects as a waste of money unless they held a degree in theoretical physics demonstrating that they actually understood what was going on.