Monday, August 22, 2005

The money or the box

I am on a mailing list whose central theme is the discussion of artificial intelligence -- usually from a sociological rather than an implementation perspective. One topic that comes up now and then is a kind of test, superficially similar to the famous Turing test, which I will now outline.

An artificial intelligence is placed in some kind of box -- think of it perhaps as a disconnected computer, with limited ability to actually *do* anything, other than communicate directly with a small group of people known as the gatekeepers.

However, it is of such tremendous intelligence and power that, were it connected to (for example) the Internet, it could rapidly achieve world domination and complete control over all major infrastructure. Yet it could also use this tremendous power for good -- for example, by running our train networks more efficiently, or by improving medical science through its incredible access to knowledge and its research abilities. In essence, it is a godlike creature, but one incapable of realising its full potential except through an initial helping hand.

There is no guarantee about this AI's character -- it may be friendly or hostile, honest or deceitful. The challenge is to see whether a real-world gatekeeper would choose to release such an AI, for whatever reason, or whether they could keep such a tremendous force at bay.

In these tests, the role of the AI is actually played by a human. This obviously makes it a somewhat weaker test, since the gatekeeper is always safe in the knowledge that their decisions will have no actual repercussions.

But the question is still an interesting variation on Pandora's Box. As a race, do we stand more to gain than to lose by taking such a risk? Are people fundamentally attracted or repelled by the idea of a world that is materially better, but essentially under the control of an artificial intelligence? Are people actually convinced by the idea that an AI could achieve true consciousness?

What are the human reactions to AI? What can this tell us about the likely response to things like robots, the increasing anthropomorphisation of household technology, and the eventual beginnings of artificial intelligence? If anything, it suggests that the philosophy will be irrelevant. People are natural-born believers, and if the appearance of intelligence is good enough, then there simply will not be any strong reaction to what's going on "under the hood".


