A 3D stack of silicon logic, resistive RAM, nanotube circuits, and sensors uses new architecture and devices to save energy
You’d think computers spend most of their time and energy doing, well, computation. But that’s not the case: about 90 percent of a computer’s execution time and electrical energy is spent transferring data between the processor and the memory banks, says Subhasish Mitra, a computer scientist at Stanford University. Even if Moore’s law continued indefinitely, computers would still be limited by this memory bottleneck.
In a paper published this week, Mitra and collaborators describe a new computer architecture they say addresses this problem—and that Mitra believes will improve both the energy efficiency and speed of computers by a factor of 1,000.
The new 3D architecture is based on novel devices including 2 million carbon nanotube transistors and over 1 million resistive RAM cells, all built on top of a layer of silicon using existing fabrication methods and connected by densely packed metal wiring between the layers. As a demonstration, the team built an electronic nose that can sense and identify several common vapors including lemon juice, rubbing alcohol, vodka, wine, and beer.
These novel nanodevices are interesting in themselves, says Mitra, but computing’s problems will not be solved by switching out existing devices with new ones that are slightly better. He says the important thing about the combination of technologies in their prototype is that it enabled them to develop a new, more efficient architecture that would not be possible to make in traditional CMOS.
Stacking circuits is a way to bring memory and processing closer together, but even 3D chips have a significant memory bottleneck. They are typically limited by the number and quality of connections between levels. It’s not possible to build conventional metal interconnects on top of one layer and then add another level of memory or processing because of the temperatures required—in excess of 1000 degrees Celsius to make silicon devices. Typically the layers are made separately, then bonded together and connected with relatively large, sparsely distributed connectors called through-silicon vias tens of micrometers apart, says Max Shulaker, a computer scientist at MIT. Shulaker worked with Mitra and Stanford electrical engineer H.-S. Philip Wong on the 3D nanosystem.
Carbon nanotube transistors and resistive RAM can both be fabricated at about 200 degrees Celsius. So they can be built on top of each other and connected with metal wiring, without researchers having to worry about vaporizing the metal. The interconnects in their prototype are more than a thousand times denser than through-silicon vias in conventional 3D chips.
The Stanford and MIT device has four layers. It’s built on a silicon wafer, and the first stratum is made up of silicon logic. This is topped with a layer of interconnects, then an array of carbon nanotube logic. Another layer of interconnects links the nanotubes up to a layer of resistive RAM. A final layer of interconnects is topped with another array of carbon nanotube logic and nanotube sensors that can pick up ambient gases. The order of the layers “reflects the data flow streaming vertically down through the chip,” says Shulaker.
This team has been working on carbon nanotube computing for several years, and Shulaker believes it is now ready for commercial applications. Shulaker says something like chemical sensing is a good first application for carbon nanotube systems because even if one sensor doesn’t work, redundant ones will, and the whole system won’t fail. He sees niche sensing applications of this sort as a good “on-ramp” to getting carbon nanotube computing commercialized. “The only reason we have 10 nanometer silicon today is that we’ve gone through decades of process learning,” he says. Niche applications will help carbon nanotube computing gain a foothold. The group has partnered with Analog Devices to work on products.
Mitra is thinking big. The computing bottleneck is a huge problem, and as applications like machine learning become more widespread, computing’s data intensity is only growing. He thinks this sort of architectural redesign is a promising route forward. “From supercomputers to cellphones, everyone has to use this technology,” says Mitra.
Shulaker started his lab at MIT last summer. But he started in the Stanford lab his freshman year of college, and even slept there during fabrication runs. During his grad school days, says Shulaker, no one really believed a demonstration of carbon nanotube computing on this scale would be possible. Today, says Shulaker, “it would be difficult to bet against these technologies.”
Source: IEEE Spectrum