We are all familiar with Moore’s law. At least all of us in IT. It’s the empirical rule that tells us that every two years, advances in technology should allow squeezing the same amount of computing power into half the space, at a similar cost.
We first heard of this law about 20 years ago. It was the beginning of the personal computer revolution. Writing code still required punching cards (at least in my case!). Yet the law sounded very aggressive to me and to my professors at the time. People were saying that there was no way the trend could continue for more than a few years. And here we are. The “law” is still alive and kicking.
So it is only natural that we now wonder, again, how long this law will hold true. Actually, a more enjoyable debate would be trying to understand what the world will look like 20 or 30 years from now, if the law does hold true.
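Just to put that compounding into perspective, here is a minimal back-of-the-envelope sketch (in Python, assuming the two-year doubling period above; the horizons are illustrative choices of my own):

```python
# Back-of-the-envelope Moore's law compounding:
# computing density roughly doubles every two years.
DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Growth factor after `years`, doubling every DOUBLING_PERIOD_YEARS."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (10, 20, 30):
    print(f"After {horizon} years: ~{growth_factor(horizon):,.0f}x")
# After 10 years: ~32x
# After 20 years: ~1,024x
# After 30 years: ~32,768x
```

A roughly thousandfold jump in two decades is what makes the next question more than idle speculation.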
A few weeks ago, we ran into an article in Time magazine that presented the work of some technologists who believe that we are approaching the “Singularity,” or “the moment when technological change becomes so rapid and profound, it represents a rupture in the fabric of human history.” Basically, this line of thought argues that if we follow Moore’s law, in a few decades we’ll be able to manufacture a machine that is more intelligent than any human being.
And from there on, it gets exponential. Machines build even smarter machines. You can picture the rest. Think of it as a friendlier version of Terminator. Of course, this is far from science, and many voices argue that the whole idea doesn’t make any sense. Nobody fully understands how the human brain works, and attempts to measure the brain’s capacity in units comparable to those used for computers have not been conclusive. Furthermore, it sounds very much like a Malthusian nightmare with a positive twist, and we know that Malthus’s predictions were significantly overstated.
The singularity concept is at least intriguing. Ray Kurzweil is one of the best-known supporters of the theory, and has been its main evangelist for many years. He now seems to be getting some traction for his ideas. It so happens that we continue to experience the acceleration of technology development (just consider how much shorter the life of a cell phone is these days).
And computers keep beating humans. First it was chess; last week it was Jeopardy (it was with a mix of admiration and panic that I watched Watson’s metallic voice beat its two human contenders on TV). We’re not sure whether we should call Kurzweil a technologist or a “science-fictionist,” but as soon as his upcoming movie debuts, we’ll be hitting the theater.
You never know, and we’d better be on good terms with the machines!
Comments? Contact us for more information; we’ll get back to you quickly.