This is “The Death of Moore’s Law?”, section 5.2 from the book Getting the Most Out of Information Systems (v. 1.3).
Moore simply observed that we’re getting better over time at squeezing more stuff into tinier spaces. Moore’s Law is possible because the distance between the pathways inside silicon chips gets smaller with each successive generation. While chip plants (semiconductor fabrication facilities, or fabs—the multibillion-dollar plants used to manufacture semiconductors) are incredibly expensive to build, each new generation of fabs can crank out more chips per silicon wafer (a thin, circular slice of material used to create semiconductor devices; hundreds of chips may be etched on a single wafer, then cut out for individual packaging). And since the pathways are closer together, electrons travel shorter distances. If electrons now travel half the distance to make a calculation, the chip is twice as fast.
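The compounding power of this observation can be sketched with a little arithmetic. The snippet below is a rough illustration only; the two-year doubling period is an assumption chosen for the example, not a figure from the text.

```python
# Rough illustration of the exponential growth described above: if each
# chip generation roughly doubles capability, improvement compounds fast.
# The two-year doubling period is an assumption for illustration only.

def relative_capability(years, doubling_period_years=2):
    """Capability relative to today after `years` of steady doublings."""
    return 2 ** (years / doubling_period_years)

print(relative_capability(2))   # 2.0  -- one generation, twice the chip
print(relative_capability(10))  # 32.0 -- five doublings in a decade
```

Run the model backward and the same math explains why a decade-old computer feels hopelessly slow.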
But the shrinking can’t go on forever, and we’re already starting to see three interrelated forces—size, heat, and power—threatening to slow down Moore’s Law’s advance. When you make processors smaller, the more tightly packed electrons will heat up a chip—so much so that unless today’s most powerful chips are cooled down, they will melt inside their packaging. To keep the fastest computers cool, most PCs, laptops, and video game consoles need fans, and most corporate data centers have elaborate and expensive air conditioning and venting systems to prevent a meltdown. A trip through the Facebook data center during its recent rise would show that the firm was a “hot” start-up in more ways than one. The firm’s servers ran so hot that the Plexiglas sides of the firm’s server racks were warped and melting! (E. McGirt, “Hacker, Dropout, C.E.O.,” Fast Company, May 2007.) The need to cool modern data centers draws a lot of power, and that power costs a lot of money.
The chief eco officer at Sun Microsystems has claimed that computers draw 4 to 5 percent of the world’s power. Google’s chief technology officer has said that the firm spends more to power its servers than it paid for the servers themselves. (D. Kirkpatrick, “The Greenest Computer Company under the Sun,” April 13, 2007.) Microsoft, Yahoo!, and Google have all built massive data centers in the Pacific Northwest, away from their corporate headquarters, specifically choosing these locations for access to cheap hydroelectric power. Google’s facility in The Dalles, Oregon, pays the local power provider just two cents per kilowatt-hour, less than one-fifth of the eleven-cent rate the firm pays in Silicon Valley. (S. Mehta, “Behold the Server Farm,” Fortune, August 1, 2006; also see Chapter 10 "Software in Flux: Partly Cloudy and Sometimes Free" in this book.) This difference means big savings for a firm that runs more than a million servers.
And while these powerful shrinking chips are getting hotter and more costly to cool, it’s also important to realize that chips can’t get smaller forever. At some point Moore’s Law will run into the unyielding laws of nature. While we’re not certain where these limits are, chip pathways certainly can’t be shorter than a single molecule, and the actual physical limit is likely larger than that. Get too small and a phenomenon known as quantum tunneling kicks in, and electrons start to slide off their paths. Yikes!
One way to overcome this problem is with multicore microprocessors, made by putting two or more lower-power processor cores (think of a core as the calculating part of a microprocessor) on a single piece of silicon. Philip Emma, IBM’s Manager of Systems Technology and Microarchitecture, offers an analogy. Think of the traditional fast, hot, single-core processor as a three-hundred-pound lineman, and a dual-core processor as two 160-pound guys. Says Emma, “A 300-pound lineman can generate a lot of power, but two 160-pound guys can do the same work with less overall effort.” (A. Ashton, “More Life for Moore’s Law,” BusinessWeek, June 20, 2005.) For many applications, multicore chips will outperform a single speedy chip, while running cooler and drawing less power. Multicore processors are now mainstream.
Today, most PCs and laptops sold have at least a two-core (dual-core) processor. The Microsoft Xbox 360 has three cores. The PlayStation 3 includes the so-called cell processor developed by Sony, IBM, and Toshiba that runs nine cores. By 2010, Intel began shipping PC processors with eight cores, while AMD introduced a twelve-core chip. Intel has even demonstrated chips with upwards of fifty cores.
Multicore processors can run older software written for single-brain chips. But they usually do this by using only one core at a time. To reuse the metaphor above, this is like having one of our 160-pound workers lift away, while the other one stands around watching. Multicore operating systems can help achieve some performance gains. Versions of Windows or the Mac OS that are aware of multicore processors can assign one program to run on one core, while a second application is assigned to the next core. But in order to take full advantage of multicore chips, applications need to be rewritten to split up tasks so that smaller portions of a problem are executed simultaneously inside each core.
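To make that rewrite concrete, here is a minimal sketch in Python. Everything in it is invented for illustration—the job (summing square roots) and the four-way split are not from any particular application—but it shows the basic move: slice one big task into pieces and hand each piece to a separate worker process, so the operating system can run the pieces on separate cores.

```python
import concurrent.futures
import math

# Illustrative sketch only: the task and the four-way split are invented
# for this example. Each slice of the range goes to a separate worker
# process, so the slices can execute on separate cores at the same time.

def partial_sum(bounds):
    lo, hi = bounds
    return sum(math.sqrt(i) for i in range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    slices = [(i * step, (i + 1) * step) for i in range(workers)]
    slices[-1] = (slices[-1][0], n)  # last slice absorbs any remainder
    with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, slices))

if __name__ == "__main__":
    # Same answer either way; the parallel version can use all the cores.
    print(math.isclose(partial_sum((0, 100_000)), parallel_sum(100_000)))
```

The hard part, as the next paragraph explains, is that real applications rarely split this cleanly: the pieces often depend on one another’s results.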
Writing code for this “divide and conquer” approach is not trivial. In fact, developing software for multicore systems is described by Shahrokh Daijavad, software lead for next-generation computing systems at IBM, as “one of the hardest things you learn in computer science.” (A. Ashton, “More Life for Moore’s Law,” BusinessWeek, June 20, 2005.) Microsoft’s chief research and strategy officer has called coding for these chips “the most conceptually different [change] in the history of modern computing.” (M. Copeland, “A Chip Too Far?” Fortune, September 1, 2008.) Despite this challenge, some of the most aggressive adopters of multicore chips have been video game console manufacturers. Video game applications are particularly well suited to multiple cores since, for example, one core might be used to render the background, another to draw objects, another for the “physics engine” that moves the objects around, and yet another to handle Internet communications for multiplayer games.
Another approach that’s breathing more life into Moore’s Law moves chips from being paper-flat devices to built-up 3-D affairs. By building up as well as out, firms are radically boosting the speed and efficiency of chips. Intel has flipped upward the basic component of chips—the transistor. Transistors are the supertiny on-off switches in a chip that work collectively to calculate or store things in memory (a high-end microprocessor might include over two billion transistors). While you won’t notice that chips are much thicker, Intel says that on the minuscule scale of modern chip manufacturing, the new designs will be 37 percent faster and half as power hungry as conventional chips. (K. Bourzac, “How Three-Dimensional Transistors Went from Lab to Fab,” Technology Review, May 6, 2011.)
Think about it—the triple threat of size, heat, and power means that Moore’s Law, perhaps the greatest economic gravy train in history, will likely come to a grinding halt in your lifetime. Multicore and 3-D transistors are here today, but what else is happening to help stave off the death of Moore’s Law?
Every once in a while a material breakthrough comes along that improves chip performance. A few years back researchers discovered that replacing a chip’s aluminum components with copper could increase speeds up to 30 percent, and Intel slipped exotic-sounding hafnium onto its silicon to improve power use. Now scientists are concentrating on improving the very semiconductor material that chips are made of. While the silicon used in chips is wonderfully abundant (it has pretty much the same chemistry found in sand), researchers are investigating other materials that might allow for chips with even tighter component densities. Researchers have demonstrated that chips made with supergeeky-sounding semiconductor materials such as indium gallium arsenide, germanium, and bismuth telluride can run faster and require less wattage than their silicon counterparts. (Y. L. Chen, J. G. Analytis, J.-H. Chu, Z. K. Liu, S.-K. Mo, X. L. Qi, H. J. Zhang, et al., “Experimental Realization of a Three-Dimensional Topological Insulator, Bi2Te3,” Science 325, no. 5937 (July 10, 2009): 178–81; K. Greene, “Intel Looks Beyond Silicon,” Technology Review, December 11, 2007; and A. Cane, “A Valley By Any Other Name…,” Financial Times, December 11, 2006.) Perhaps even more exotic (and downright bizarre), researchers at the University of Delaware have experimented with a faster-than-silicon material derived from chicken feathers! Hyperefficient chips of the future may also be made out of carbon nanotubes, once the technology to assemble the tiny structures becomes commercially viable.
Other designs move away from electricity over silicon. Optical computing, where signals are sent via light rather than electricity, promises to be faster than conventional chips, if lasers can be mass produced in miniature (silicon laser experiments show promise). Others are experimenting by crafting computing components using biological material (think a DNA-based storage device).
One yet-to-be-proven technology that could blow the lid off what’s possible today is quantum computing. Conventional computing stores data as a combination of bits, where a bit is either a one or a zero. Quantum computers, leveraging principles of quantum physics, employ qubits that can be both one and zero at the same time. Add a bit to a conventional computer’s memory and you double its capacity. Add a qubit to a quantum computer and its capacity increases exponentially. For comparison, consider that a computer model of serotonin, a molecule vital to regulating the human central nervous system, would require 10^94 bytes of information. Unfortunately there’s not enough matter in the universe to build a computer that big. But modeling a serotonin molecule using quantum computing would take just 424 qubits. (P. Kaihla, “Quantum Leap,” Business 2.0, August 1, 2004.)
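The doubling-versus-exponential contrast can be made concrete with a toy sketch. This is not a real quantum simulator—it is just bookkeeping that shows why describing n qubits classically takes 2^n numbers, which is the source of the exponential advantage described above.

```python
import math

# Toy bookkeeping, not a real quantum simulator: the general state of n
# qubits is described by a list of 2**n amplitudes. Adding one qubit
# doubles the list's length, so the description grows exponentially in n.

def add_qubit(state, qubit):
    """Tensor product: combine an existing state with one more qubit."""
    return [a * b for a in state for b in qubit]

plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # equal mix of zero and one

state = plus
for _ in range(3):            # grow the register from 1 qubit to 4
    state = add_qubit(state, plus)

print(len(state))                                 # 16 amplitudes for 4 qubits
print(round(sum(abs(a) ** 2 for a in state), 6))  # 1.0 (probabilities sum to 1)
```

By this accounting, a few hundred qubits already describe a state space far larger than any classical memory could hold—which is why the serotonin model above fits in 424 qubits.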
Some speculate that quantum computers could one day allow pharmaceutical companies to create hyperdetailed representations of the human body that reveal drug side effects before they’re even tested on humans. Quantum computing might also accurately predict the weather months in advance or offer unbreakable computer security. Ever have trouble placing a name with a face? A quantum computer linked to a camera (in your sunglasses, for example) could recognize the faces of anyone you’ve met and give you a heads-up to their name and background. (P. Schwartz, C. Taylor, and R. Koselka, “The Future of Computing: Quantum Leap,” Fortune, August 2, 2006.) Opportunities abound. Of course, before quantum computing can be commercialized, researchers need to harness the freaky properties of quantum physics wherein your answer may reside in another universe, or could disappear if observed (Einstein himself referred to certain behaviors in quantum physics as “spooky action at a distance”).
Pioneers in quantum computing include IBM, HP, NEC, and a Canadian start-up named D-Wave. If or when quantum computing becomes a reality is still unknown, but the promise exists that while Moore’s Law may run into limits imposed by Mother Nature, a new way of computing may blow past anything we can do with silicon, continuing to make possible the once impossible.