The MOSFET: The Microscopic Ruler of the AI Race
A road paved with imperfect competition
The MOSFET—metal-oxide-semiconductor field-effect transistor—is, by raw numbers, humanity’s most produced thing. We’ve manufactured more of these tiny electronic contraptions than there are grains of sand on Earth¹. They’re in your phone, your laptop, your tractor, your car key, your washing machine, your toothbrush, and the data center hosting this page. Trillions of them are etched into existence every single day². And yet, for something so omnipresent, the MOSFET remains one of the most jealously guarded industrial secrets in history.
How can that be? Everyone with a basic knowledge of electrical engineering knows what a MOSFET is, right? Interestingly, no one outside a handful of actors knows how the sausage is made. The MOSFET, in its textbook form, is rather trivial: a channel that opens or closes when a voltage is applied to a terminal. A glorified switch. You can simulate one on your laptop with SPICE, or even 3D-print a crappy mock-up in your garage to teach kids in primary school what its cross-section looks like. But real, commercial MOSFETs—especially at the cutting edge—are no longer *just* transistors; they are borderline quantum alchemy. Exotic alloys, esoteric quantum effects, self-aligned fins, atomic-layer deposition, and doping profiles crafted like fine sword steel.
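To make the “glorified switch” concrete, here is a minimal sketch of the textbook square-law model, roughly the first-order behavior a Level-1 SPICE model captures. The threshold voltage and transconductance values are illustrative, not taken from any real process.

```python
# Textbook (square-law) model of an n-channel MOSFET's drain current.
# Parameter values are illustrative; no real PDK was consulted for them.

def nmos_id(vgs: float, vds: float, vth: float = 0.7, k: float = 2e-4) -> float:
    """Drain current (A) of an idealized long-channel NMOS.

    vgs, vds : gate-source and drain-source voltages (V)
    vth      : threshold voltage (V)
    k        : transconductance parameter, mu_n * Cox * W / L (A/V^2)
    """
    if vgs <= vth:
        return 0.0                               # cut-off: the switch is open
    vov = vgs - vth                              # overdrive voltage
    if vds < vov:
        return k * (vov * vds - vds ** 2 / 2)    # triode (resistive) region
    return 0.5 * k * vov ** 2                    # saturation region

if __name__ == "__main__":
    for vgs in (0.0, 0.5, 1.0, 1.8):
        print(f"Vgs = {vgs:.1f} V -> Id = {nmos_id(vgs, vds=1.8) * 1e6:6.1f} uA")
```

A leading-edge FinFET or GAAFET obeys nothing this tidy, which is exactly the point: the tidy model is public, the real one is not.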
The foundries—the factories producing them—guard their process design kits (PDKs) like state secrets. The PDK—the set of transistor models and design rules a chip designer uses—is the only interface most companies will ever have to the black box beneath their silicon. And even that is an abstraction, a behavioral simulation of the transistor’s quirks, not the recipe itself. In fact, Apple designs its M-series chips without knowing the full physics of the 3nm FinFETs or GAAFETs it is using. AMD and Nvidia have no choice but to trust the foundries, paying obscene sums and hoping the magic works.
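As a purely hypothetical illustration of what that interface feels like from the designer’s side, here is a toy “PDK” reduced to fitted device parameters and a couple of design rules. Every name and number below is invented; the real ones are precisely what sits under NDA.

```python
# Toy sketch of what a PDK exposes: fitted behavioral parameters and pass/fail
# design rules, never the process recipe. All names and values are made up.

TOY_PDK = {
    "devices": {
        "nfet_lvt": {          # hypothetical low-Vt NFET
            "vth0_v": 0.42,    # fitted threshold voltage, not how it's achieved
            "min_l_nm": 150,   # smallest drawable gate length
        },
    },
    "rules": {
        "poly.min_width_nm": 150,
        "poly.min_space_nm": 210,
    },
}

def passes_min_width(layer: str, width_nm: float, pdk: dict = TOY_PDK) -> bool:
    """Check a drawn width against the PDK's minimum-width rule for a layer."""
    return width_nm >= pdk["rules"][f"{layer}.min_width_nm"]

print(passes_min_width("poly", 140))  # False: violates the toy rule deck
```

The shape is the message: numbers you must use and rules you must obey, with nothing about the physics that produced them.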
MOSFET secrecy spans the full life cycle, from conception to fabrication. An oligopoly of companies has built design environments so intricate, and so closed, that the result is the closest thing modern industry has to feudalism. A full commercial EDA stack—schematic capture, simulation, layout, parasitic extraction, verification, sign-off—can cost millions of dollars per year in licenses, with per-seat pricing that makes AWS look like a bargain. Each toolchain is, of course, tailored to specific foundry processes, binding designers into a labyrinth of inescapable vendor lock-in. Oligopolies aside, it is striking how disjointed the IC development process (still) is: the path from architecture to tapeout runs on unholy amounts of tribal knowledge and a jungle of tools with tacky names, most of them driven by Tcl scripts.
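For flavor, here is a deliberately abstract sketch of that disjointed path. The stage names are the ones from the paragraph above; the tool and script names are hypothetical stand-ins, not any vendor’s actual CLI.

```python
# Abstract sketch of a commercial IC flow: each stage is its own tool, its own
# license, and its own Tcl driver script. Tool and script names are invented.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str      # what the stage does
    tool: str      # hypothetical vendor tool
    driver: str    # the Tcl script that glues it to the rest of the flow

FLOW = [
    Stage("schematic capture",    "vendor_schem",   "capture.tcl"),
    Stage("simulation",           "vendor_sim",     "run_sims.tcl"),
    Stage("layout",               "vendor_layout",  "place_route.tcl"),
    Stage("parasitic extraction", "vendor_pex",     "extract.tcl"),
    Stage("verification",         "vendor_drc_lvs", "verify.tcl"),
    Stage("sign-off",             "vendor_sta",     "signoff.tcl"),
]

for i, stage in enumerate(FLOW, 1):
    print(f"{i}. {stage.name:<22} -> {stage.tool:<15} {stage.driver}")
```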
But wait, it gets worse: the oligopolies look almost benign next to the outright monopoly that sits at the top, where a single firm makes the only extreme ultraviolet lithography (EUV) tools capable of printing the minuscule patterns from which the chips powering the current-day AI razzmatazz are etched. Each of these machines costs over $200 million, takes a year to build, and is so complex that calling it a “printer” is a bit of an understatement³. Without ASML’s blessing, no one—neither TSMC, Samsung, nor Intel—can manufacture the transistors every current AI fad hangs on.
Is open source even an option here? There are efforts. SkyWater’s Sky130 PDK is a noble attempt at independence, but it is a legacy 130 nm process, unable to sustain the clock speeds today’s applications require, be it a data center or a 5G modem, mainly due to limited tool sophistication and sparse passive-component characterization, among other gaps. With Sky130 you can certainly design a MOSFET, digital logic, or even a RISC-V core. On TSMC’s latest node, by contrast, you’re wrestling with the laws of quantum mechanics, and in exchange you get multi-gigahertz clocks. And TSMC isn’t about to send you the keys to that kingdom.
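You can actually poke at the open model yourself. The sketch below writes a one-transistor netlist and hands it to ngspice in batch mode, assuming an open_pdks build of Sky130 is installed and that the PDK_ROOT path and “tt” corner below match your setup; those details are assumptions about a local install, nothing more.

```python
# Minimal sketch: DC-sweep a single Sky130 1.8 V NFET with ngspice.
# Assumes the open_pdks "sky130A" install and ngspice on PATH; the default
# PDK_ROOT below is an assumption -- point it at your own install.
import os
import pathlib
import subprocess

pdk_root = os.environ.get("PDK_ROOT", "/usr/local/share/pdk")
models = f"{pdk_root}/sky130A/libs.tech/ngspice/sky130.lib.spice"

netlist = f"""* Sky130 NFET DC sweep
.lib {models} tt
Vg g 0 1.8
Vd d 0 1.8
X1 d g 0 0 sky130_fd_pr__nfet_01v8 W=1 L=0.15
.dc Vg 0 1.8 0.05
.print dc i(Vd)
.end
"""

pathlib.Path("nfet_sweep.sp").write_text(netlist)
subprocess.run(["ngspice", "-b", "nfet_sweep.sp"], check=True)
```

That this works at all is the good news; the bad news, as above, is that nothing like it exists for a leading-edge node.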
The semiconductor industry thrives on the scarcity of knowledge as much as on the scarcity of fabs. It’s why the cost of designing a cutting-edge chip has ballooned into the billions, and most of that budget doesn’t go into silicon but into knowing how to talk to the silicon. The software that lets you design a transistor costs more than the transistor itself.
The irony is strong: the MOSFET is the most manufactured object ever, by far, and yet it is less accessible to a curious engineer than it was 40 years ago. Back then, Carver Mead and Lynn Conway’s “VLSI Design Revolution” promised to democratize chip design. Today, the barrier is higher, the gatekeepers richer, and the costs unbearable. A brilliant graduate student in 1975 could tinker with real transistors. In 2025, they need a Fortune 500 budget just to get a simulation license.
Every GPU, TPU, and AI accelerator is nothing but a fantastically complicated mosaic of MOSFETs. The entire AI hype train—and many NASDAQ tickers—hang on a technology so locked behind secrecy, capital, and geopolitical bottlenecks that it’s both the foundation and the weakest link of the whole stack. ∎
1. The math checks out: it is estimated that approximately 13 sextillion (1.3 × 10²²) MOSFETs have been manufactured since their invention in 1960, according to Wikipedia and other sources, while there are an estimated 7.5 sextillion (7.5 × 10²¹) grains of sand on Earth.
2. The Apple M1 Max processor alone packs 57 billion transistors.
3. Look at the size of that thing: https://newsroom.intel.com/press-kit/intel-high-na-euv