Kurzweil's argument for the success of AI

“Positive feedback” is one of those phrases that sounds like it ought to mean something good, when it actually means something that is almost always bad: a runaway process whose output forces its inputs further in the direction they are already headed. A chain reaction in an atomic bomb is a classic example: the production of neutrons is proportional to the number of atoms split, which is in turn proportional to the number of neutrons. Of course, a bomb is one of the few cases where the goal is to produce a runaway process. Even in this case, positive feedback has the property that it quickly destroys the conditions that made the positive feedback possible. This is a hallmark of the breed. Take any other example, such as the multiplication of rabbits, whose output (rabbits) is proportional to the number of rabbits. This simple model assumes that food is abundant; the exponentially increasing rabbit population soon violates that assumption.

In The Singularity Is Near, Ray Kurzweil argues that the evolution of humanity and technology is driven by a positive feedback, and that the resulting destruction of many of the truths we hold dear will produce a post-human world of wonders. It’s hard to argue with the claim that a positive feedback is going on, especially after Kurzweil has pounded the point home with graph after graph; and so it is hard to disagree that this feedback will destroy the conditions that make the feedback possible, as noted above. Such arguments have driven pessimists from Malthus to Gore to conclude that humanity is in for some stormy weather.

Kurzweil, however, believes that evolution has always managed to avoid the consequences of its manias by leapfrogging from one medium to another. Genetic evolution produced cultural evolution, and cultural evolution will now produce a new phase in which people gradually replace themselves with bio-mechanical hybrids, the seams between the carbon-based and silicon-based parts blurred by the nanobots crawling through them. At that point our new incarnations will begin to guide their own evolution, presumably toward ever more complex technological embodiments. Our future selves will be superintelligent, able to merge with others in ways that will cause our current conceptions of individuality to break down. Because all our assumptions about what happens past that point break down, he calls it the Singularity.

It’s hard not to picture the Borg from Star Trek when thinking about Kurzweil’s vision of post-human beings swimming through silicon. It was generally assumed in Star Trek that assimilation into the Borg was to be avoided, although none of the assimilated ever complained. Kurzweil can hardly wait. He is even watching his health to make sure he survives to the year 2045, which is roughly when the Singularity will be consummated. He acknowledges that there are perils in the technologies he envisions, AI especially, but he is confident humanity can conquer them.
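The feedback-and-collapse pattern sketched with the rabbits can be made concrete with a standard population-dynamics model (a textbook illustration, not a model taken from Kurzweil's book). If the population $N$ grows in proportion to itself,

\[ \frac{dN}{dt} = kN \quad\Longrightarrow\quad N(t) = N(0)\,e^{kt}, \]

the solution grows exponentially and without bound. Folding the finite food supply in as a carrying capacity $K$ gives the logistic equation,

\[ \frac{dN}{dt} = kN\left(1 - \frac{N}{K}\right), \]

whose early phase is indistinguishable from the pure exponential but whose growth stalls as $N$ approaches $K$. That is the point about positive feedback in miniature: the runaway phase consumes the very abundance that made the runaway possible, which is why extrapolating the early curve indefinitely, as Kurzweil's graphs invite one to do, requires an argument that the constraints will keep being escaped.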