Back in May, a research team spread across the United States and Taiwan made a stunning announcement: they’d mastered the fabrication of transistors with elements just one nanometre in size.
Inhabiting the macroscale of centimetres and metres and kilometres, we find it difficult to visualise anything so small. The width of the very finest human hair, for example, comes in at around 17 micrometres – or 17,000 nanometres. A red blood cell is less than half that size – 7,000 times larger than a single nanometre. It isn’t until you get into the bits and pieces within our cells – organelles like ribosomes, tirelessly translating RNA strands into proteins – that you start getting close. But even a ribosome is still about 20 times bigger than this new transistor.
Perhaps the best comparison we can look to for something one nanometre in width comes from something else we have difficulty imagining: atoms. Semiconductors generally use some silicon in their construction, so we can look to this element to gain a sense of scale. While any individual atom’s width will be fuzzied by quantum uncertainty, a hypothetically “perfect” silicon atom has a width of approximately a fifth of a nanometre. Which means these new transistors have been composed of components just five atoms wide.
How many nanometres?
- One metre: 1,000,000,000 nm
- Tardigrade length: 500,000 nm
- Dust particle length: 1,000 – 100,000 nm
- Human hair width: 17,000 nm
- E. coli length: 1,000 – 2,000 nm
- Single wavelength of red light: 750 nm
- Single wavelength of blue light: 450 nm
- DNA diameter: 2.5 nm
- Silicon atom: 0.2 nm
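The ratios running through this article are easy to sanity-check with a few lines of arithmetic. A quick sketch, using the approximate figures quoted above:

```python
# Approximate sizes quoted in the article, all in nanometres.
hair_width_nm = 17_000
red_blood_cell_nm = 7_000
silicon_atom_nm = 0.2
transistor_nm = 1.0  # the one-nanometre transistor feature

# Five silicon atoms span the one-nanometre feature.
atoms_across = transistor_nm / silicon_atom_nm

# A red blood cell is 7,000 times larger than that feature...
cell_ratio = red_blood_cell_nm / transistor_nm

# ...and less than half the width of a fine human hair.
cell_vs_hair = red_blood_cell_nm / hair_width_nm

print(f"{atoms_across:.0f} atoms across; cell is {cell_ratio:,.0f}x larger, "
      f"and {cell_vs_hair:.2f} of a hair's width")
```

These are order-of-magnitude figures, of course – a real silicon atom's width is blurred by quantum uncertainty, as noted above.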
The almost unbelievably small scale of the achievement has rekindled our imaginings of a future world shaped by the wonders of “nanotechnology”.
Over 60 years ago, Nobel Prize-winning physicist Richard Feynman gave an address to the American Physical Society. In “There’s Plenty of Room at the Bottom”, he acknowledged that we’d done very well making things big: big buildings, big bombs, big rockets. But, Feynman theorised, perhaps we’d been looking in the wrong direction – or rather, the wrong dimension? In search of ever better and more powerful machines, perhaps we should be looking to the fundamental arrangement of matter? Could we “arrange atoms the way we want”, to construct any sort of machinery, at the atomic scale? Wouldn’t this allow us to create fantastically powerful computers and microscopes?
Far from inspiring his physicist peers, Feynman’s public thought experiment sank without a trace; his prescience was only widely recognised a generation later, by physicists who had grown up amid the incredible advances in miniaturisation of the semiconductor revolution – a revolution still in its infancy when Feynman gave his address. Those advances, obeying Moore’s Law, meant that by the early 1980s the nanoscale had finally come into view as a technical possibility that might plausibly be realised within another generation.
Massachusetts Institute of Technology (MIT), in particular, became a hotbed of research into the possibilities offered up by nanotechnology. Leading the charge was a graduate student named K. Eric Drexler, an engineer with a big brain and even bigger personality who inspired an almost cult-like following, influencing a range of ideas within the then-emerging “transhumanist” movement.
Drexler drafted a manifesto for a world shaped by nanotechnology, which worked its way into print as Engines of Creation (1986). As science harnessed the machinery of the atomic scale, he proclaimed, we would find ourselves graced by an endless abundance of wealth, health and wisdom. All of his followers (and I counted myself among them) eagerly awaited this future of atomic-scale prosperity.
Drexler finished his MIT PhD, publishing it in 1991 as Nanosystems: Molecular Machinery, Manufacturing and Computation. Taking an engineer’s approach to the nanoscale, he designed nano-equivalents of the gears, rotors, switches and other mechanical elements common to the macroscale. Nanosystems reduced nanotechnology to an engineering problem: craft a set of building blocks, like so many Lego bricks, that could be snapped together to yield an infinite array of atomic-scale machines. It made the problems of working at the nanoscale appear almost trivial.
Drexler’s blueprints directly influenced science fiction author Neal Stephenson’s Hugo Award-winning novel The Diamond Age (1995), in which a fully realised nanotechnology has largely relieved human want – with some plot-driven limitations.
But as researchers explored the nanoscale, they learned that the reliable physics of the macroscale – the rules that make our machines both possible and dependable – simply don’t work the same way down at the bottom. Surfaces that should glide smoothly turn sticky; components interact in unexpected ways. The clean, efficient designs of macroscale machines offer no real insight into designs at the nanoscale. The two realms operate by different rules.
From the latter half of the 1990s, progress in this engineering-driven nanotechnology ground to a halt – just as researchers realised they had another design portfolio at hand, laden with nanoscale machinery: nature. The wonders of information-coding in DNA, or the protein-assembling ribosome, offered an approach carefully conserved across billions of years of evolution. As computer scientist-cum-biologist Tom Knight once quipped, “Biology is the nanotechnology that works.”
The detailed study of natural nanotechnology led directly to “synthetic” biology – the creation of living systems from non-living components. Over the last decade, that research yielded the first synthetic cells, and the beginnings of a “toolbox” of natural design patterns that, in this decade, might help us build something akin to Drexler’s nanosystems.
All the way back to Feynman, through Drexler, and on to current work at the nanoscale, there has been a vision of a “nanomedical” system – a Fantastic Voyage bit of kit, a “doctor” small enough to travel through our bodies to where it can effect repairs on damaged tissues. To do that, you’d need nanocomputers, nanomanipulators and, most importantly, nanosensors – because you can’t operate on what you can’t see. Researchers at Columbia University revealed earlier this year that they’d fabricated a near-nanoscale sensor so small (less than 0.1 mm³) that it can only be seen with a microscope – and small enough to be safely injected into the body.
Yet even when you can fabricate them, nanoscale systems confront a familiar issue: how do you power something that small? It can’t carry its own power supply, nor have we mastered the art of stealing power from the body’s own stores. The Columbia researchers cleverly used ultrasound to power their nanosensor – safe for the body’s tissues, and conveniently providing an “off” switch should any nanosensor decide to go rogue. Study lead Ken Shepard says of his device that it “should be revolutionary for developing wireless, miniaturised, implantable medical devices that can sense different things, be used in clinical applications, and eventually approved for human use.”
At some point – perhaps this decade, possibly in the next – the techniques of atomic-scale chip-making and the fabrication of medical nanosensors will meet and blend into a new generation of medical equipment. Rather than popping you into a massive, expensive scanner when you show up in the emergency room, it could be that doctors will inject a host of nanosensors into your bloodstream, analysing the data streaming from those devices to get a more complete picture of your illness or injuries – and their remedy.
That’s the dream of a fully realised nanotechnology. While it feels tangibly closer than it did 40 years ago, we should expect that we still have plenty of sticky lessons to learn, down at the bottom.