This morning I listened to their interview with David Chalmers, a philosopher at the Australian National University (I guess not all Australian philosophers are named Bruce after all), regarding the Singularity. The Singularity is a term most often attributed to Ray Kurzweil; it refers to a point in the future when machine intelligence outstrips human intelligence. At that point, such machines will be capable of creating software and other machines that outstrip their own intelligence, and so on, in a rapidly accelerating intelligence explosion. A sort of AI tail recursion.
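The "tail recursion" quip can be made literal with a toy sketch. This is purely illustrative, of course: the doubling factor, the starting "IQ", and the threshold are all made-up numbers, stand-ins for the idea that each generation of machine designs a smarter successor.

```python
def intelligence_explosion(iq, generation=0, threshold=1_000_000):
    """Toy model of the Singularity as tail recursion.

    Each generation of machine builds a successor twice as smart
    (an arbitrary assumption), until intelligence crosses some
    arbitrary threshold beyond which prediction breaks down.
    """
    if iq >= threshold:
        return generation  # the "Singularity": our model stops here
    return intelligence_explosion(iq * 2, generation + 1, threshold)

# Starting from a human-ish baseline of 100, the explosion is fast:
generations = intelligence_explosion(100)
print(generations)  # 14 doublings and we're past the threshold
```

The point of the toy, such as it is: with any growth factor above 1, the number of generations to cross any fixed threshold is tiny. The debate is over the premise, not the arithmetic.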
The premise rests, naturally, on the assumption that it will ever be possible to build a machine intelligence that matches human intelligence, including consciousness. There is wide debate on this topic, with folks like John Searle arguing it is impossible (at least in pure software) and philosophers like David Chalmers arguing it is inevitable. The subject has long interested me and occupied a good portion of my undergrad days. I must admit I feel drawn to Chalmers's argument, wherein he imagines a single neuron in a conscious human's brain being replaced by a silicon chip. Keep replacing neurons and inspecting the subject ("Yes, I feel fine. This apple is tasty.") and you will hit one of three possible outcomes:
- At some point after replacing a neuron, the subject suddenly loses consciousness (the magic neuron)
- The subject gradually loses consciousness, transitioning through diminishing levels of awareness until a point is reached where we can no longer say she is conscious (the fadeout)
- All neurons are replaced and the subject is still conscious, despite her brain having been completely replaced by silicon
If you're with me so far, you can easily see the possibilities that open up. "Uploading" our consciousness into machines would become a reality (the ethical issues around the treatment of our software doppelgangers are beyond the scope of this blog post). How about making multiple duplicates of your consciousness to increase your productivity, or at least balance the workload? How about an upgrade? Immortality would then come not from biological science but from information science.
I really love this kind of subject in science fiction, which is why writers like Greg Egan grace much of my shelving. Does anyone else find this subject fascinating? Any counterarguments to Chalmers's neuron-replacement thought experiment?