Understanding Moore’s Law – Where Computers are Headed
Even a modest personal computer today has more processing power and storage space than the famous Cray-1 supercomputer had. In 1976, the Cray-1 was state-of-the-art: it could process 160 million floating-point operations per second (flops) and had 8 megabytes (MB) of memory.
Today, many personal computers can perform more than 10 times that number of floating-point operations in a second and have 100 times the amount of memory. Meanwhile, on the supercomputer front, the Cray XT5 Jaguar at the Oak Ridge National Laboratory performed at a sustained 1.4 petaflops in 2008 [source: Cray]. The prefix peta means 10 to the 15th power — in other words, one quadrillion. That means the Cray XT5 Jaguar could process 8.75 million times as many floating-point operations per second as the Cray-1. It took only a little over three decades to reach that milestone.
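You can check that 8.75-million figure yourself from the two performance numbers quoted above:

```python
# Compare the Cray-1's 1976 performance with the Cray XT5 Jaguar's
# sustained 2008 figure, using the numbers quoted in the article.
cray_1_flops = 160e6    # 160 megaflops
jaguar_flops = 1.4e15   # 1.4 petaflops (peta = 10^15, one quadrillion)

ratio = jaguar_flops / cray_1_flops
print(f"Jaguar outpaces the Cray-1 by {ratio:,.0f} times")  # 8,750,000
```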
If you were to chart the evolution of the computer in terms of processing power, you would see that progress has been exponential. The man who first made this famous observation was Gordon Moore, a co-founder of the microprocessor company Intel. Computer scientists, electrical engineers, manufacturers and journalists extrapolated Moore’s Law from his original observation. In general, most people interpret Moore’s Law to mean that the number of transistors on a 1-inch (2.5-centimeter) diameter silicon chip doubles every x number of months.
The number of months shifts as conditions in the microprocessor market change. Some people say it takes 18 months and others say 24. Some interpret the law to be about the doubling of processing power, not the number of transistors. And the law sometimes seems to be more of a self-fulfilling prophecy than an actual law, principle or observation. To understand why, it’s best to go back to the beginning.