Wednesday, May 26, 2010

The Singularity

I recently subscribed to the Philosophy Bites podcast available on iTunes. In it, two writers, Nigel Warburton and David Edmonds, interview top thinkers and philosophers from around the world in 15- to 30-minute segments (i.e. just long enough for me to get through one on my walk to or from work).

This morning I listened to their interview with David Chalmers, a philosopher at the Australian National University (I guess not all Australian philosophers are named Bruce after all), regarding the Singularity. The Singularity is a term most often attributed to Ray Kurzweil and refers to a point in the future when machine intelligence outstrips human intelligence. At that point, such machines will be capable of creating software and other machines that outstrip their own intelligence, and so on, in a rapidly accelerating intelligence explosion. A sort of AI tail recursion.
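Just for fun, here is roughly the shape I have in mind when I say "tail recursion." This is a toy sketch only; the growth factors and the cutoff are made-up numbers for illustration, not a model of anything real:

```python
# A toy "intelligence explosion" written as recursion (Python won't actually
# optimise the tail call, but the shape is the point). Every number here is
# an arbitrary assumption purely for illustration.

def explode(intelligence, human_level=1.0, generation=0):
    print(f"generation {generation}: intelligence = {intelligence:.2f}")
    if generation == 10:
        return intelligence  # cut the toy off somewhere
    if intelligence <= human_level:
        factor = 1.05  # below human level, progress still comes from human designers
    else:
        factor = 1.5   # past the crossover, each machine designs a smarter successor
    return explode(intelligence * factor, human_level, generation + 1)

explode(0.8)
```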

The premise is, naturally, based on the assumption that it will ever be possible to build a machine intelligence that matches human intelligence, including consciousness. There is wide debate on this topic, ranging from folks like John Searle, who argue it is impossible (at least in pure software), to philosophers like David Chalmers, who argue it is inevitable. The subject occupied a good portion of my undergrad days and still interests me. I must admit that I feel drawn to Chalmers's argument, wherein he imagines a single neuron in a conscious human's brain being replaced by a silicon chip. Keep replacing neurons and inspecting the subject ("Yes, I feel fine. This apple is tasty.") and you will hit one of three possible outcomes:
  1. At some point after replacing a neuron, the subject suddenly loses consciousness (the magic neuron).
  2. The subject gradually loses consciousness, transitioning through differing levels of reduced consciousness until a point is reached where we can no longer say she is conscious (the fadeout).
  3. All neurons are replaced and the subject is still conscious, despite having her brain completely replaced by silicon.
If we can assume an accurate replication of the function of each and every neuron, I can't help thinking that the third outcome is entirely plausible. Searle had previously addressed this type of thought experiment and, at the time, argued against the possibility of consciousness (or what he called Strong AI). I believe his more recent arguments are about consciousness arising from physical systems as opposed to software systems, so he may now agree with (or at least be willing to explore) the possibility that our silicon-brained subject is conscious. Still, I can imagine those silicon chips being replaced one by one by individual software processes that mimic their function perfectly. I would wager that our subject would remain just as conscious during the silicon-to-software transition as she did during the neuron-to-silicon transition.
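To make that "accurate replication of function" assumption concrete, here is a toy sketch of the replacement process. Everything in it is invented for illustration (real neurons are obviously not pure functions, and behavioural equivalence by itself says nothing about consciousness); it just shows the structure the argument leans on: replace one functionally identical part at a time and observe no change from the outside.

```python
# A toy version of the replacement thought experiment: swap each "neuron"
# for a stand-in with identical input/output behaviour, one at a time,
# and confirm the overall behaviour never changes. Entirely illustrative.

def biological_neuron(signal):
    return signal * 2            # some fixed input/output behaviour

def silicon_neuron(signal):
    return signal * 2            # same function, different substrate

def report(brain, stimulus=3):
    # The only thing an outside observer can check: behaviour.
    return sum(neuron(stimulus) for neuron in brain)

brain = [biological_neuron] * 100
baseline = report(brain)

for i in range(len(brain)):
    brain[i] = silicon_neuron              # replace one neuron at a time
    assert report(brain) == baseline       # indistinguishable at every step

print("fully silicon, behaviour unchanged:", report(brain) == baseline)
```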

If you're with me so far then you can easily see the doors to possibility that open up. The concept of "uploading" our consciousness into machines would become a reality (the ethical issues around the treatment of our software doppelgangers are beyond the scope of this blog post). How about making multiple duplicates of your consciousness to increase your productivity, or at least balance the workload? How about an upgrade? Immortality would then come not from biological science but rather from information science.

I really love this kind of subject in science fiction, which is why writers like Greg Egan grace much of my shelving. Does anyone else find this subject fascinating? Any counterarguments to Chalmers's neuron replacement thought experiment?

1 comment:

Kimota94 aka Matt aka AgileMan said...

Have you read Mindscan by Robert J. Sawyer? It's a bit lightweight for my liking, but it certainly deals with this subject in an entertaining fashion.