Thoughts on 'The Singularity'

The concept of ‘the Singularity’ has been in the air lately, probably due to Ray Kurzweil’s book, The Singularity Is Near (I have not read the book, but I did read his earlier book). Here’s a quick summary of Kurzweil’s position on computers, in his own words:

Just so that the record is straight, my view is that we will have the requisite hardware capability to emulate the human brain in $1,000 of computation (which won’t be organized in the rectangular forms we see today such as notebooks and palmtops, but rather embedded in our environment) by 2020. The software will take longer, to around 2030. The “singularity” has divergent definitions, but for our purposes here we can consider this to be a time when nonbiological forms of intelligence dominate purely biological forms, albeit being derivative of them. This takes us beyond 2030, to perhaps 2040 or 2050. [from this posting]

What do I think about this? Some things come to mind.

Clearly computing technology is increasing its power at exponential rates, and presumably this will continue. But brains are not computers, and we have a very limited understanding of what we even mean when we use terms like ‘smart’ and ‘intelligence’ and ‘consciousness’. I do believe that we will be able to create software capable of much better pattern recognition (for example), probably on par with the abilities of humans within the next few decades. This type of thing would be very helpful, for example, in medicine when making a diagnosis based on observed symptoms. Is this the ‘domination’ of non-biological intelligence over biological intelligence? Perhaps, but it seems to me that this will occur mainly in areas that are especially computational. Somehow I can’t see salesmen being replaced by ‘non-biological sales units’ that negotiate complex deals (though will they get help from quick access to all sorts of information? Sure). Certain areas of human experience are not very computational at heart: they’re about feel, judgment, creativity, and learning. Technology may assist in some of these areas, but it won’t ‘dominate’ them.

Another Kurzweil quote: “I will point out that once we have achieved complete models of human intelligence, machines will be capable of combining the flexible, subtle, human levels of pattern recognition with the natural advantages of machine intelligence.” I take exception to this, because I don’t believe we have a strong definition of what human intelligence is, and we will perhaps always be expanding what we mean by it. I think it’s overreaching to simply assume a ‘complete model’.

Kurzweil takes his position much further, into the realms of economics, medicine, etc. Read more here.

Update: Later thoughts on this issue from Oct 23, 2005.

