The Chip Shortage, Giant Chips, and the Future of Moore’s Law – IEEE Spectrum

IEEE Spectrum’s biggest semiconductor headlines of 2021
Employees work on the production line at the Volkswagen Autoeuropa car factory in Palmela, Spain on May 13, 2020.
With COVID-19 shaking the global supply chain like an angry toddler with a box of jelly beans, the average person had to take a crash course in the semiconductor industry. And many of them didn’t like what they learned. Want a new car? Tough luck, not enough chips. A new gaming system? Same. But you are not the average person, dear reader. So, in addition to learning why there was a chip shortage in the first place, you also discovered that you can—with considerable effort—fit more than two trillion transistors on a single chip. You also found that the future of Moore’s Law depends as much on where you put the wires as on how small the transistors are, among many other things.
So to recap the semiconductor stories you read most this year, we’ve put together this set of highlights:

This year you learned the same thing that some car makers did: Even if you think you’ve hedged your bets by having a diverse set of suppliers, those suppliers—or the suppliers of those suppliers—might all be using the output of the same small set of semiconductor fabs.
To recap: Car makers panicked and cancelled orders at the outset of the pandemic. Then when it seemed people still wanted cars, they discovered that all of the display drivers, power management chips, and other low-margin stuff they needed had already been sucked up into the work/learn/live-from-home consumer frenzy. By the time they got back in line to buy chips, that line was nearly a year long, and it was time to panic again.
Chipmakers worked flat out to meet demand and have unleashed a blitz of expansion, though most of that is aimed at higher-margin chips than those that clogged the engine of the automotive sector. The latest figures from SEMI, the chip-manufacturing-equipment industry association, show equipment sales set to cross US $100 billion in 2021, a mark never before reached.
As for car makers, they may have learned their lesson. At a gathering of stakeholders in the automotive electronics supply chain this summer at GlobalFoundries Fab 8 in Malta, N.Y., there was enthusiastic agreement that car makers and chip makers needed to get cozy with each other. The result? GlobalFoundries has already inked agreements with both Ford and BMW.
You can make transistors as small as you want, but if you can’t connect them up to each other, there’s no point. So Arm and the Belgian research institute imec spent a few years finding room for those connections. The best scheme they found was to take the interconnects that carry power to logic circuits (as opposed to data) and bury them under the surface of the silicon, linking them to a power delivery network built on the backside of the chip. This research trend suddenly became news when Intel said what sounded like: “Oh yeah. We’re definitely doing that in 2025.”
What has 2.6 trillion transistors, consumes 20 kilowatts, and carries enough internal bandwidth to stream a billion Netflix movies? It’s generation 2 of the biggest chip ever made, of course! (And yes, I know that’s not how streaming works, but how else do you describe 220 petabits per second of bandwidth?) Last April, Cerebras Systems topped its original, history-making AI processor with a version built using a more advanced chip-making technology. The result was a more-than-doubling of the on-chip memory to an impressive 40 gigabytes, an increase in the number of processor cores from the previous 400,000 to a speech-stopping 850,000, and a mind-boggling boost of 1.4 trillion additional transistors.
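To see how the "billion Netflix movies" quip holds up, here's a quick sanity check. The 220-petabit-per-second figure comes from the article; the per-stream bitrate is an assumption on my part (roughly what a 4K stream consumes), just to put the number in scale:

```python
# Back-of-the-envelope check of the "billion Netflix streams" comparison.
# The per-stream bitrate below is an illustrative assumption, not from the article.
fabric_bandwidth_bps = 220e15   # 220 petabits per second of on-chip bandwidth
stream_bitrate_bps = 25e6       # ~25 Mb/s for a 4K stream (assumed)

streams = fabric_bandwidth_bps / stream_bitrate_bps
print(f"{streams:.1e} simultaneous streams")  # on the order of 10 billion
```

Even with a generous 4K bitrate, the chip's internal fabric could in principle carry several billion such streams, so "a billion Netflix movies" is, if anything, conservative.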
Gob-smacking as all that is, what you can do with it is really what’s important. And later in the year, Cerebras showed a way for the computer that houses its Wafer Scale Engine 2 to train neural networks with as many as 120 trillion parameters. For reference, the massive—and occasionally foul-mouthed—GPT-3 natural language processor has 175 billion. What’s more, you can now link up to 192 of these computers together.
Of course, Cerebras’ computers aren’t the only ones meant to tackle absolutely huge AI training jobs. SambaNova is after the same title, and clearly Google has its eye on some awfully big neural networks, too.
IBM claimed to have developed what it called a 2-nanometer node chip and expects to see it in production in 2024. To put that in context, leading chipmakers TSMC and Samsung are going full-bore on 5 nm, with a possible cautious start for 3 nm in 2022. As we reminded you last year, what you call a technology process node has absolutely no relation to the size of any part of the transistors it constructs. So whether IBM’s process is any better than its rivals’ will really come down to the combination of density, power consumption, and performance.
What really matters is that IBM’s process is another endorsement of nanosheet transistors as the future of silicon. While each big chipmaker is moving from today’s FinFET design to nanosheets at its own pace, nanosheets are inevitable.
The news hasn’t all been about transistors. Processor architecture is increasingly important. Your smartphone’s brain is probably based on an Arm architecture; your laptop and the servers it’s so attached to are likely based on the x86 architecture. But a fast-growing cadre of companies, particularly in Asia, is looking to an open-source chip architecture called RISC-V. The attraction: RISC-V lets startups design custom chips without paying costly licensing fees for proprietary architectures.
Even big companies like Nvidia are incorporating it, and Intel expects RISC-V to boost its foundry business. Chinese firms, seeing RISC-V as a possible path to independence in an increasingly polarized technology landscape, are particularly bullish on it. Only last month, Alibaba said it would make the source code for its RISC-V cores available.
Although certain types of optical computing are getting closer, the switch researchers in Russia and at IBM described in October is likely destined for a far-future computer. Relying on exotic stuff like exciton-polaritons and Bose-Einstein condensates, the device switched about 1 trillion times per second. That’s so fast that light would travel only about a third of a millimeter before the device switched again.
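That light-travel figure checks out with nothing more than the speed of light and the stated switching rate:

```python
# How far light travels between switching events of a ~1 THz switch.
c = 299_792_458          # speed of light in vacuum, m/s
switch_rate_hz = 1e12    # ~1 trillion switching events per second

distance_mm = (c / switch_rate_hz) * 1000  # meters per event, converted to mm
print(f"light covers {distance_mm:.2f} mm per switching event")
```

That works out to roughly 0.3 mm, matching the "about a third of a millimeter" in the article.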
One of AI’s big problems is that its data is so far away. Sure, that distance is measured in millimeters, but these days that’s a long way. (Somewhere there’s an Intel 4004 saying, “Back in my day, data had to go 30 centimeters, uphill, in a snowstorm.”) Engineers are coming up with lots of ways to shorten that distance. But this one really caught your attention:
Instead of building DRAM from a silicon transistor and a metal capacitor above it, use a second transistor as the capacitor and build both above the silicon from oxide semiconductors. Two research groups showed that these transistors could keep their data far longer than ordinary DRAM, and they could be stacked in layers above the silicon, giving a much shorter path between the processor and its precious data.
In August Intel unveiled what it called the company’s biggest processor architecture advances in a decade. They included two new x86 CPU core architectures—the straightforwardly named Performance-core (P-core) and Efficient-core (E-core). The cores are integrated into Alder Lake, a “performance hybrid” family of processors that includes new technology to help the upcoming Windows 11 OS schedule work on the CPU more efficiently.
“This is an awesome time to be a computer architect,” senior vice president and general manager Raja Koduri said at the time. The new architectures and SoCs Intel unveiled “demonstrate how architecture will satisfy the crushing demand for more compute performance as workloads from the desktop to the data center become larger, more complex, and more diverse than ever.”
If you want, you could translate that as: “In your face, process technology and device scaling! It’s all about the architecture now!” But I don’t think Koduri would take it that far.
A bit alarmed by just how geographically close China is to Taiwan and South Korea, the only two places capable of making the most advanced logic chips, U.S. lawmakers got the ball rolling on an effort to boost cutting-edge chipmaking in the United States. Some of that has already started, with TSMC, Samsung, and Intel making major fab investments. Of course, Taiwan and South Korea are also making major domestic investments, as are Europe and Japan.
It’s all part of a broader economic and technological nationalism playing out across the world, notes geopolitical futurist Abishur Prakash with the Center for Innovating the Future, in Toronto. Some see these “shifts in geopolitics as short term, as if they’re byproducts of the pandemic and that things on a certain timeline will calm down if not return to normal,” he told IEEE Spectrum in May. “That’s wrong. The direction that nations are moving now is the new permanent north star.”
Hey, remember all that brain-based processing stuff we’ve been banging on about for decades? Well it’s here now, in the form of a camera chip made by French startup Prophesee and major imager manufacturer Sony. Unlike a regular imager, this chip doesn’t capture frame after frame with each tick of the clock. Instead it only notes the changes in a scene. That means both much lower power—when there’s nothing happening, there’s nothing to see—and faster response times.
Boston Dynamics’ Stretch can move 800 heavy boxes per hour
Stretch can autonomously transfer boxes onto a roller conveyor fast enough to keep up with an experienced human worker.
As COVID-19 stresses global supply chains, the logistics industry is looking to automation to help keep workers safe and boost their efficiency. But there are many warehouse operations that don’t lend themselves to traditional automation—namely, tasks where the inputs and outputs of a process aren’t always well defined and can’t be completely controlled. A new generation of robots with the intelligence and flexibility to handle the kind of variation that people take in stride is entering warehouse environments. A prime example is Stretch, a new robot from Boston Dynamics that can move heavy boxes where they need to go just as fast as an experienced warehouse worker.
Stretch’s design is something of a departure from the humanoid and quadrupedal robots that Boston Dynamics is best known for, such as Atlas and Spot. With its single massive arm, a gripper packed with sensors and an array of suction cups, and an omnidirectional mobile base, Stretch can transfer boxes that weigh as much as 50 pounds (23 kilograms) from the back of a truck to a conveyor belt at a rate of 800 boxes per hour. An experienced human worker can move boxes at a similar rate, but not all day long, whereas Stretch can go for 16 hours before recharging. And this kind of work is punishing on the human body, especially when heavy boxes have to be moved from near a trailer’s ceiling or floor.
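Those two numbers, 800 boxes per hour and 16 hours per charge, imply a punishing pace when you do the unit conversions:

```python
# Converting Stretch's stated throughput into human-scale terms.
# Both input figures (800 boxes/hour, 16-hour runtime) come from the article.
boxes_per_hour = 800
hours_per_charge = 16

seconds_per_box = 3600 / boxes_per_hour            # pace per box
boxes_per_charge = boxes_per_hour * hours_per_charge

print(f"one box every {seconds_per_box:.1f} seconds")   # one box every 4.5 seconds
print(f"{boxes_per_charge} boxes per charge")           # 12800 boxes per charge
```

That's one 50-pound box every 4.5 seconds, nonstop, for nearly 13,000 boxes per charge, which is exactly the kind of sustained pace no human shift can match.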
“Truck unloading is one of the hardest jobs in a warehouse, and that’s one of the reasons we’re starting there with Stretch,” says Kevin Blankespoor, senior vice president of warehouse robotics at Boston Dynamics. Blankespoor explains that Stretch isn’t meant to replace people entirely; the idea is that multiple Stretch robots could make a human worker an order of magnitude more efficient. “Typically, you’ll have two people unloading each truck. Where we want to get with Stretch is to have one person unloading four or five trucks at the same time, using Stretches as tools.”
All Stretch needs is to be shown the back of a trailer packed with boxes, and it’ll autonomously go to work, placing each box on a conveyor belt one by one until the trailer is empty. People are still there to make sure that everything goes smoothly, and they can step in if Stretch runs into something that it can’t handle, but their full-time job becomes robot supervision instead of lifting heavy boxes all day.
“No one wants to do receiving.” —Matt Beane, UCSB
Achieving this level of reliable autonomy with Stretch has taken Boston Dynamics years of work, building on decades of experience developing robots that are strong, fast, and agile. Besides the challenge of building a high-performance robotic arm, the company also had to solve some problems that people find trivial but are difficult for robots, like looking at a wall of closely packed brown boxes and being able to tell where one stops and another begins.
Safety is also a focus, says Blankespoor, explaining that Stretch follows the standards for mobile industrial robots set by the American National Standards Institute and the Robotics Industry Association. That the robot operates inside a truck or trailer also helps to keep Stretch safely isolated from people working nearby, and at least for now, the trailer opening is fenced off while the robot is inside.
Stretch is optimized for moving boxes, a task that’s required throughout a warehouse. Boston Dynamics hopes that over the longer term the robot will be flexible enough to put its box-moving expertise to use wherever it’s needed. In addition to unloading trucks, Stretch has the potential to unload boxes from pallets, put boxes on shelves, build orders out of multiple boxes from different places in a warehouse, and ultimately load boxes onto trucks, a much more difficult problem than unloading due to the planning and precision required.
“Where we want to get with Stretch is to have one person unloading four or five trucks at the same time.” —Kevin Blankespoor, Boston Dynamics
In the short term, unloading a trailer (part of a warehouse job called “receiving”) is the best place for a robot like Stretch, agrees Matt Beane, who studies work involving robotics and AI at the University of California, Santa Barbara. “No one wants to do receiving,” he says. “It’s dangerous, tiring, and monotonous.”
But Beane, who for the last two years has led a team of field researchers in a nationwide study of automation in warehousing, points out that there may be important nuances to the job that a robot such as Stretch will probably miss, like interacting with the people who are working other parts of the receiving process. “There’s subtle, high-bandwidth information being exchanged about boxes that humans down the line use as key inputs to do their job effectively, and I will be singularly impressed if Stretch can match that.”
Boston Dynamics spent much of 2021 turning Stretch from a prototype, built largely from pieces designed for Atlas and Spot, into a production-ready system that will begin shipping to a select group of customers in 2022, with broader sales expected in 2023. For Blankespoor, that milestone will represent just the beginning. He feels that such robots are poised to have an enormous impact on the logistics industry. “Despite the success of automation in manufacturing, warehouses are still almost entirely manually operated—we’re just starting to see a new generation of robots that can handle the variation you see in a warehouse, and that’s what we’re excited about with Stretch.”