Monday, March 19, 2012

Transhumanism and the Technocratic Urge

Transhumanists frequently deal with alien intelligence in their theorizing; ask whether they would support putting our economy in the hands of a powerful distributed AI, and they'll say yes. Point out that capitalism represents the earliest, and thus far most powerful, artificial (and alien) intelligence yet encountered, and you'll leave them sputtering. (See Less Wrong for transhumanist perspectives which aren't anti-capitalist; here I'm describing what I presume is the majority of those I have encountered.)

Transhumanists, like most liberals, are big on intent (Hayek wrote about this, and Kevin from Smallest Minority has also written good material on the subject). You can shut down arguments very quickly with a one-two-three punch. First, point out that capitalism achieves good ends. An honest individual will admit to this, but will add a "But," typically pointing to the externalities of the capitalist system. Second, point out that externalities don't go away under another system; even a hyperintelligence can only account for the information it has, and cannot predict information it doesn't have; there will always be externalities to any decision-making process.

Here, the twin definitions of "externality" cohere perfectly: it is defined both as a cost which is not reflected in the price, and as information which exists outside the perceived domain. A strong understanding of capitalist theory holds that these are one and the same, of course. Price -is- information; a cost which isn't included in the price is information the system lacks, and therefore an externality.
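
To make the point concrete, here's a minimal sketch in Python, with purely hypothetical numbers, of the claim that an externality is just information missing from the price signal: the price aggregates only the costs the seller perceives, while a cost borne by third parties never enters the signal at all.

    # The price the seller asks reflects only the cost components the seller perceives.
    perceived_costs = {"materials": 40.0, "labor": 35.0, "margin": 10.0}
    price = sum(perceived_costs.values())    # 85.0, the price signal the market sees

    # A cost borne by third parties, outside the seller's perceived domain.
    downstream_pollution_cost = 20.0         # never enters the price

    true_social_cost = price + downstream_pollution_cost
    externality = true_social_cost - price   # information the price signal doesn't carry

    print(f"price signal: {price}")          # 85.0
    print(f"externality:  {externality}")    # 20.0

The same gap appears whatever decision-making process you substitute for the market: the externality exists because the information is outside the deciding system's domain, not because the deciding system is a market.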

Third, and finally, while they're stumped on the second point, there's the simple fact that capitalism, as a hyperintelligence of distributed nodes, is tested and proven. Their resistance to it stems from the fact (getting back to intent) that they dislike that the positive ends feel like externalities to the system; it has no sense of intent, and they fear this. Their fears fly in the face of all the evidence; they are behaving like Luddites. Ask what kind of superintelligence they would put in charge, and why. Ask whether the unintended consequences they may face are superior to a situation which chafes only because it achieves an end without setting out towards it, and ask why they demand that their intelligence possess intent in order to be effective.

I've played this argument out before. It works, for a short while at least; people are good at rationalizing their positions, however, and rarely stray from firmly held beliefs.
