Tag Archives: Nobel

C. N. Yang, Dead at 103

I don’t usually do obituaries here, but sometimes I have something worth saying.

Chen Ning Yang, a towering figure in particle physics, died last week.

Picture from 1957, when he received his Nobel

I never met him. By the time I started my PhD at Stony Brook, Yang was long-retired, and hadn’t visited the Yang Institute for Theoretical Physics in quite some time.

(Though there was still an office door, tucked behind the institute’s admin staff, that bore his name.)

The Nobel Prize doesn’t always honor the most important theoretical physicists. To get a Nobel Prize, you need to discover something that gets confirmed by experiment; generally, it has to be a very crisp, clear statement about reality. New calculation methods and broader new understandings are on shakier ground, and theorists who propose them tend to be left out, or at best bundled into shared prizes long after the fact.

Yang was lucky. With T. D. Lee, he had made that crisp, clear statement. He claimed that the laws of physics, counter to everyone’s expectations, are not the same when reflected in a mirror. By the beginning of 1957, Wu had confirmed the prediction, and Lee and Yang got the prize later that same year.

That’s a huge, fundamental discovery about the natural world. But as a theorist, I don’t think that was Yang’s greatest accomplishment.

Yang contributed to other fields. Practicing theorists have seen his name strewn across concepts, formalisms, and theorems. I didn’t have space to talk about him in my article on integrability for Quanta Magazine, but only just barely: another paragraph or two, and he would have been there.

But his most influential contribution is something even more fundamental. And long-time readers of this blog should already know what it is.

Yang, along with Robert Mills, proposed Yang-Mills Theory.

There isn’t a Nobel prize for Yang-Mills theory. In 1954, when Yang and Mills proposed the theory, it was obviously wrong, a theory that couldn’t explain anything in the natural world, mercilessly mocked by famous bullshit opponent Wolfgang Pauli. Not even an ambitious idea that seemed outlandish (like plate tectonics), it was a theory with such an obvious missing piece that, for someone who prioritized experiment like the Nobel committee does, it seemed pointless to consider.

All it had going for it was that it was a clear generalization, an obvious next step. If there are forces like electromagnetism, with one type of charge going from plus to minus, why not a theory with multiple, interacting types of charge?

Nothing about Yang-Mills theory was impossible, or contradictory. Mathematically, it was fine. It obeyed all the rules of quantum mechanics. It simply didn’t appear to match anything in the real world.

But, as theorists learn, nature doesn’t let a good idea go to waste.

Of the four fundamental forces of nature, as it would happen, half are Yang-Mills theories. Gravity is different; electromagnetism is simpler, and could be understood without Yang and Mills’ insights. But the weak nuclear force, that’s a Yang-Mills theory. It wasn’t obvious in 1954 because it wasn’t clear how the massless, photon-like particles in Yang-Mills theory could have mass, and it wouldn’t become clear until the work of Peter Higgs a decade later. And the strong nuclear force, that’s also a Yang-Mills theory, missed because of the ability of such a strong force to “confine” charges, hiding them away.

So Yang got a Nobel, not for understanding half of nature’s forces before anyone else had, but for a quirky question of symmetry.

In practice, Yang was known for all of this, and more. He was enormously influential. I’ve heard it claimed that he personally kept China from investing in a new particle collider, the strength of his reputation the most powerful force on that side of the debate, as he argued that a developing country like China should be investing in science with more short-term industrial impact, like condensed matter and atomic physics. I wonder if the debate will shift with his death, and what commitments the next Chinese five-year plan will make.

Ultimately, Yang is an example of what a theorist can be, a mix of solid work, counterintuitive realizations, and the thought-through generalizations that nature always seems to make use of in the end. If you’re not clear on what a theoretical physicist is, or what one can do, let Yang’s story be your guide.

Congratulations to John Clarke, Michel Devoret, and John Martinis!

The 2025 Physics Nobel Prize was announced this week, awarded to John Clarke, Michel Devoret, and John Martinis for building an electrical circuit that exhibited quantum effects like tunneling and energy quantization on a macroscopic scale.

Press coverage of this prize tends to focus on two aspects: the idea that these three “scaled up” quantum effects to medium-sized objects (the technical account quotes a description that calls it “big enough to get one’s grubby fingers on”), and that the work paved the way for some of the fundamental technologies people are exploring for quantum computing.

That’s a fine enough story, but it leaves out what made these folks’ work unique, why it differs from the work of other Nobel laureates on other quantum systems. It’s a somewhat technical story, but I don’t think it’s that technical. I’ll try to tell it here.

To start, have you heard of Bose-Einstein Condensates?

Bose-Einstein Condensates are macroscopic quantum states that have already won Nobel prizes. First theorized based on ideas developed by Einstein and Bose (the namesake of bosons), they involve a large number of particles moving together, each in the same state. While the first gas that obeyed Einstein’s equations for a Bose-Einstein Condensate was created in the 1990’s, after Clarke, Devoret, and Martinis’s work, other things based on essentially the same principles were created much earlier. A laser works on the same principles as a Bose-Einstein condensate, as do phenomena like superconductivity and superfluidity.

This means that lasers, superfluids, and superconductors had been showing off quantum mechanics on grubby finger scales well before Clarke, Devoret, and Martinis’s work. But the science rewarded by this year’s Nobel turns out to be something quite different.

Because the different photons in laser light are independently in identical quantum states, lasers are surprisingly robust. You can disrupt the state of one photon, and it won’t interfere with the others. You’ll have weakened the laser’s coherence a little bit, but the disruption won’t spread much, if at all.

That’s very different from the way quantum systems usually work. Schrodinger’s cat is the classic example. You have a box with a radioactive atom, and if that atom decays, it releases poison, killing the cat. You don’t know if the atom has decayed or not, and you don’t know if the cat is alive or not. We say the atom’s state is a superposition of decayed and not decayed, and the cat’s state is a superposition of alive and dead.

But unlike photons in a laser, the atom and the cat in Schrodinger’s cat are not independent: if the atom has decayed, the cat is dead, if the atom has not, the cat is alive. We say the states of atom and cat are entangled.

That makes these so-called “Schrodinger’s cat” states much more delicate. The state of the cat depends on the state of the atom, and those dependencies quickly “leak” to the outside world. If you haven’t sealed the box well, the smell of the room is now also entangled with the cat…which, if you have a sense of smell, means that you are entangled with the cat. That’s the same as saying that you have measured the cat, so you can’t treat it as quantum any more.

What Clarke, Devoret, and Martinis did was to build a circuit that could exhibit, not a state like a laser, but a “cat state”: delicately entangled, at risk of total collapse if measured.

That’s why they deserved a Nobel, even in a world where there are many other Nobels for different types of quantum states. Lasers, superconductors, even Bose-Einstein condensates were in a sense “easy mode”, robust quantum states that didn’t need all that much protection. This year’s physics laureates, in contrast, showed it was possible to make circuits that could make use of quantum mechanics’ most delicate properties.

That’s also why their circuits, in particular, are being heralded as a predecessor for modern attempts at quantum computers. Quantum computers do tricks with entanglement: they need “cat states”, not Bose-Einstein Condensates. And Clarke, Devoret, and Martinis’s work in the 1980’s was the first clear proof that this was a feasible thing to do.

Congratulations to John Hopfield and Geoffrey Hinton!

The 2024 Physics Nobel Prize was announced this week, awarded to John Hopfield and Geoffrey Hinton for using physics to propose foundational ideas in the artificial neural networks used for machine learning.

If the picture above looks off-center, it’s because this is the first time since 2015 that the Physics Nobel has been given to two, rather than three, people. Since several past prizes bundled together disparate ideas in order to make a full group of three, it’s noteworthy that this year the committee decided that each of these people deserved 1/2 the prize amount, without trying to find one more person to water it down further.

Hopfield was trained as a physicist, working in the broad area known as “condensed matter physics”. Condensed matter physicists use physics to describe materials, from semiconductors to crystals to glass. Over the years, Hopfield started using this training less for the traditional subject matter of the field and more to study the properties of living systems. He moved from a position in the physics department of Princeton to chemistry and biology at Caltech. While at Caltech he started studying neuroscience and proposed what are now known as Hopfield networks as a model for how neurons store memory. Hopfield networks have very similar properties to a more traditional condensed matter system called a “spin glass”, and from what he knew about those systems Hopfield could make predictions for how his networks would behave. Those networks would go on to be a major inspiration for the artificial neural networks used for machine learning today.
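Hopfield’s idea is concrete enough to sketch in a few lines. The following is a minimal, illustrative toy (my own, not Hopfield’s original formulation of it in code): patterns of ±1 “spins” are stored in the network’s weights by Hebbian learning, and a corrupted pattern is recovered by repeatedly flipping each neuron to agree with its inputs, rolling downhill in energy toward the stored memory.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: weights are sums of outer products of the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no neuron connects to itself
    return W

def recall(W, state, steps=10):
    """Update each neuron to align with its input until the state settles."""
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-neuron pattern of +1/-1 "spins", much like a spin glass...
memory = np.array([[1, 1, -1, -1, 1, -1, 1, -1]])
W = train(memory)

# ...then recover it from a version with two neurons flipped.
noisy = np.array([-1, 1, -1, -1, 1, -1, 1, 1])
print(recall(W, noisy))  # settles back onto the stored pattern
```

The energy landscape of this update rule is what links Hopfield networks to spin glasses: stored memories sit at the bottoms of energy valleys, and recall is just falling into the nearest valley.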

Hinton was not trained as a physicist, and in fact has said that he didn’t pursue physics in school because the math was too hard! Instead, he got a bachelor’s degree in psychology, and a PhD in the then-nascent field of artificial intelligence. In the 1980’s, shortly after Hopfield published his network, Hinton proposed a network inspired by a closely related area of physics, one that describes temperature in terms of the statistics of moving particles. His network, called a Boltzmann machine, would be modified and made more efficient over the years, eventually becoming a key part of how artificial neural networks are “trained”.

These people obviously did something impressive. Was it physics?

In 2014, the Nobel prize in physics was awarded to the people who developed blue LEDs. Some of these people were trained as physicists, some weren’t: Wikipedia describes them as engineers. At the time, I argued that this was fine, because these people were doing “something physicists are good at”, studying the properties of a physical system. Ultimately, the thing that ties together different areas of physics is training: physicists are the people who study under other physicists, and go on to collaborate with other physicists. That can evolve in unexpected directions, from more mathematical research to touching on biology and social science…but as long as the work benefits from being linked to physics departments and physics degrees, it makes sense to say it “counts as physics”.

By that logic, we can probably call Hopfield’s work physics. Hinton’s case is less clear: his work was inspired by a physical system, but so are other ideas in computer science, like simulated annealing. Other ideas, like genetic algorithms, are inspired by biological systems: does that mean they count as biology?

Then there’s the question of the Nobel itself. If you want to get a Nobel in physics, it usually isn’t enough to transform the field. Your idea has to actually be tested against nature. Theoretical physics is its own discipline, with several ideas that have had an enormous influence on how people investigate new theories, ideas which have never gotten Nobels because the ideas were not intended, by themselves, to describe the real world. Hopfield networks and Boltzmann machines, similarly, do not exist as physical systems in the real world. They exist as computer simulations, and it is those computer simulations that are useful. But one can simulate many ideas in physics, and that doesn’t tend to be enough by itself to get a Nobel.

Ultimately, though, I don’t think this way of thinking about things is helpful. The Nobel isn’t capable of being “fair”, there’s no objective standard for Nobel-worthiness, and not much reason for there to be. The Nobel doesn’t determine which new research gets funded, nor does it incentivize anyone (except maybe Brian Keating). Instead, I think the best way of thinking about the Nobel these days is a bit like Disney.

When Disney was young, its movies had to stand or fall on their own merits. Now, with so many iconic movies in its history, Disney movies are received in the context of that history. Movies like Frozen or Moana aren’t just trying to be a good movie by themselves, they’re trying to be a Disney movie, with all that entails.

Similarly, when the Nobel was young, it was just another award, trying to reward things that Alfred Nobel might have thought deserved rewarding. Now, though, each Nobel prize is expected to be “Nobel-like”, an analogy between each laureate and the laureates of the past. When new people are given Nobels the committee is on some level consciously telling a story, saying that these people fit into the prize’s history.

This year, the Nobel committee clearly wanted to say something about AI. There is no Nobel prize for computer science, or even a Nobel prize for mathematics. (Hinton already has the Turing award, the most prestigious award in computer science.) So to say something about AI, the Nobel committee gave awards in other fields. In addition to physics, this year’s chemistry award went in part to the people behind AlphaFold2, a machine learning tool to predict what shapes proteins fold into. For both prizes, the committee had a reasonable justification. AlphaFold2 genuinely is an amazing advance in the chemistry of proteins, a research tool like nothing that came before. And the work of Hopfield and Hinton did lead ideas in physics to have an enormous impact on the world, an impact that is worth recognizing. Ultimately, though, whether or not these people should have gotten the Nobel doesn’t depend on that justification. It’s an aesthetic decision, one that (unlike Disney’s baffling decision to make live-action remakes of their most famous movies) doesn’t even need to impress customers. It’s a question of whether the choice is “Nobel-ish” enough, according to the tastes of the Nobel committee. The Nobel is essentially expensive fanfiction of itself.

And honestly? That’s fine. I don’t think there’s anything else they could be doing at this point.

Congratulations to Pierre Agostini, Ferenc Krausz and Anne L’Huillier!

The 2023 Physics Nobel Prize was announced this week, awarded to Pierre Agostini, Ferenc Krausz and Anne L’Huillier for figuring out how to generate extremely fast (hundreds of attoseconds) pulses of light.

Some physicists try to figure out the laws of physics themselves, or the behavior of big photogenic physical systems like stars and galaxies. Those people tend to get a lot of press, but most physicists don’t do that kind of work. Instead, most physicists try to accomplish new things with old physical laws: taking light, electrons, and atoms and doing things nobody thought possible. While that may sound like engineering, the work these physicists do lies beyond the bounds of what engineers are comfortable with: there’s too much uncertainty, too little precedent, and the applications are still far away. The work is done with the goal of pushing our capabilities as far as we can, accomplishing new things and worrying later about what they’re good for.

(Somehow, they still tend to be good for something, often valuable things. Knowing things pays off!)

Anne L’Huillier began the story in 1987, shining infrared lasers through noble gases and seeing the gas emit unexpected new frequencies. As physicists built on that discovery, it went from an academic observation to a more and more useful tool, until in 2001 Pierre Agostini and Ferenc Krausz, with different techniques both based on the same knowledge, managed to produce pulses of light only a few hundred attoseconds long.

(“Atto” is one of the SI prefixes, which descend by factors of a thousand: milli, micro, nano, pico, femto, atto. Notice that “nano” sits three steps from each end of that ladder: an attosecond is as much smaller than a nanosecond as a nanosecond is smaller than an ordinary second.)
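The symmetry of that ladder is easy to check by counting exponents, since each prefix is exactly three powers of ten below the last:

```python
# SI prefixes below 1, each a factor of 1000 smaller than the previous one
prefixes = {"milli": -3, "micro": -6, "nano": -9, "pico": -12, "femto": -15, "atto": -18}

# An attosecond is as far below a nanosecond as a nanosecond is below a second:
print(prefixes["nano"] - prefixes["atto"])  # 9 orders of magnitude
print(0 - prefixes["nano"])                 # 9 again: the same gap
```

So a few hundred attoseconds is as fleeting, compared to a nanosecond, as a few hundred nanoseconds is compared to a full second.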

This is cool just from the point of view of “humans doing difficult things”, but it’s also useful. Electrons move on attosecond time-scales. If you can send pulses of light at attosecond speed, you’ve got a camera fast enough to capture how electrons move in real time. You can figure out how they traverse electronics, or how they slosh back and forth in biological molecules.

This year’s prize has an extra point of interest for me, as both Anne L’Huillier and Pierre Agostini did their prize-winning work at CEA Paris-Saclay, where I just started work last month. Their groups would eventually evolve into something called Attolab; I walk by their building every day on the way to lunch.

Congratulations to Alain Aspect, John F. Clauser and Anton Zeilinger!

The 2022 Physics Nobel Prize was announced this week, awarded to Alain Aspect, John F. Clauser, and Anton Zeilinger for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science.

I’ve complained in the past about the Nobel prize being awarded for “baskets” of loosely related topics. This year, though, the three Nobelists have a clear link: they were pioneers in investigating and using quantum entanglement.

You can think of a quantum particle like a coin frozen in mid-air. Once measured, the coin falls, and you read it as heads or tails, but before then the coin is neither, with equal chance to be one or the other. In this metaphor, quantum entanglement slices the coin in half. Slice a coin in half on a table, and its halves will either both show heads, or both tails. Slice our “frozen coin” in mid-air, and it keeps this property: the halves, both still “frozen”, can later be measured as both heads, or both tails. Even if you separate them, the outcomes never become independent: you will never find one half-coin to land on tails, and the other on heads.
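The perfect correlation in this metaphor can be sketched in a few lines of code. Note that this “shared outcome” picture is itself classical, exactly the kind of hidden-variable story Einstein preferred: it reproduces the never-disagree correlation, though not every quantum prediction.

```python
import random

def measure_entangled_pair():
    """Both halves of the 'frozen coin' land the same way:
    the outcome is random, but shared between the halves."""
    outcome = random.choice(["heads", "tails"])
    return outcome, outcome  # one result for each half, however far apart

# No matter how many pairs we measure, the halves never disagree:
results = [measure_entangled_pair() for _ in range(1000)]
print(all(a == b for a, b in results))  # True
```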

For those who read my old posts, I think this is a much better metaphor than the different coin-cut-in-half metaphor I used five years ago.

Einstein thought that this couldn’t be the whole story. He was bothered by the way that measuring a “frozen” coin seems to change its behavior faster than light, screwing up his theory of special relativity. Entanglement, with its ability to separate halves of a coin as far as you liked, just made the problem worse. He thought that there must be a deeper theory, one with “hidden variables” that determined whether the halves would be heads or tails before they were separated.

In 1964, a theoretical physicist named J.S. Bell found that Einstein’s idea had testable consequences. He wrote down a set of statistical equations, called Bell inequalities, that have to hold if there are hidden variables of the type Einstein imagined, then showed that quantum mechanics could violate those inequalities.
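Bell’s logic can be made concrete with a quick calculation. Below is a sketch of the CHSH form of the inequality, plugging in quantum mechanics’ predicted correlation for an entangled “singlet” pair; the angles are standard textbook choices, not the settings of any particular experiment.

```python
import math

def E(a, b):
    """Quantum prediction for the correlation between spin measurements
    at angles a and b on the two halves of an entangled singlet pair."""
    return -math.cos(a - b)

# The CHSH combination of correlations at four measurement angles.
# Any hidden-variable theory of the kind Einstein imagined obeys |S| <= 2.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, -math.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

print(abs(S))      # 2.828... = 2*sqrt(2), beating the hidden-variable bound
print(abs(S) > 2)  # True: quantum mechanics violates the inequality
```

This value, 2√2, is the largest violation quantum mechanics allows, and it is what experiments like Clauser’s and Aspect’s went looking for.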

Bell’s inequalities were just theory, though, until this year’s Nobelists arrived to test them. Clauser was first: in the 70’s, he proposed a variant of Bell’s inequalities, then tested them by measuring members of a pair of entangled photons in two different places. He found complete agreement with quantum mechanics.

Still, there was a loophole left for Einstein’s idea. If the settings on the two measurement devices could influence the pair of photons when they were first entangled, that would allow hidden variables to influence the outcome in a way that avoided Bell and Clauser’s calculations. It was Aspect, in the 80’s, who closed this loophole: by doing experiments fast enough to change the measurement settings after the photons were entangled, he could show that the settings could not possibly influence the forming of the entangled pair.

Aspect’s experiments, in many minds, were the end of the story. They were the ones emphasized in the textbooks when I studied quantum mechanics in school.

The remaining loopholes are trickier. Some hope for a way to correlate the behavior of particles and measurement devices that doesn’t run afoul of Aspect’s experiment. This idea, called superdeterminism, has recently had a few passionate advocates, but speaking personally I’m still confused as to how it’s supposed to work. Others want to jettison special relativity altogether. This would not only involve measurements influencing each other faster than light, but would also break a kind of symmetry present in the experiments, because it would declare one measurement or the other to have happened “first”, something special relativity forbids. The majority, uncomfortable with either approach, thinks that quantum mechanics is complete, with no deterministic theory that can replace it. They differ only on how to describe, or interpret, the theory, a debate more the domain of careful philosophy than of physics.

After all of these philosophical debates over the nature of reality, you may ask what quantum entanglement can do for you?

Suppose you want to make a computer out of quantum particles, one that uses the power of quantum mechanics to do things no ordinary computer can. A normal computer needs to copy data from place to place, from hard disk to RAM to your processor. Quantum particles, however, can’t be copied: a theorem says that you cannot make an identical, independent copy of a quantum particle. Moving quantum data then required a new method, pioneered by Anton Zeilinger in the late 90’s using quantum entanglement. The method destroys the original particle to make a new one elsewhere, which led to it being called quantum teleportation after the Star Trek devices that do the same with human beings. Quantum teleportation can’t move information faster than light (there’s a reason the inventor of Le Guin’s ansible despairs of the materialism of “Terran physics”), but it is still a crucial technology for quantum computers, one that will be more and more relevant as time goes on.

Congratulations to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi!

The 2021 Nobel Prize in Physics was announced this week, awarded to Syukuro Manabe and Klaus Hasselmann for climate modeling and Giorgio Parisi for understanding a variety of complex physical systems.

Before this year’s prize was announced, I remember a few “water cooler chats” about who might win. No guess came close, though. The Nobel committee seems to have settled into a strategy of prizes on a loosely linked “basket” of topics, with half the prize going to a prominent theorist and the other half going to two experimental, observational, or (in this case) computational physicists. It’s still unclear why they’re doing this, but regardless it makes it hard to predict what they’ll do next!

When I read the announcement, my first reaction was, “surely it’s not that Parisi?” Giorgio Parisi is known in my field for the Altarelli-Parisi equations (more properly known as the DGLAP equations, the longer acronym because, as is often the case in physics, the Soviets got there first). These equations are in some sense why the scattering amplitudes I study are ever useful at all. I calculate collisions of individual fundamental particles, like quarks and gluons, but a real particle collider like the LHC collides protons. Protons are messy, interacting combinations of quarks and gluons. When they collide you need not merely the equations describing colliding quarks and gluons, but those that describe their messy dynamics inside the proton, and in particular how those dynamics look different for experiments with different energies. The equation that describes that is the DGLAP equation.

As it turns out, Parisi is known for a lot more than the DGLAP equation. He is best known for his work on “spin glasses”, models of materials where quantum spins try to line up with each other, never quite settling down. He also worked on a variety of other complex systems, including flocks of birds!

I don’t know as much about Manabe and Hasselmann’s work. I’ve only seen a few talks on the details of climate modeling. I’ve seen plenty of talks on other types of computer modeling, though, from people who model stars, galaxies, or black holes. And from those, I can appreciate what Manabe and Hasselmann did. Based on those talks, I recognize the importance of those first one-dimensional models, a single column of air, especially back in the 60’s when computer power was limited. Even more, I recognize how impressive it is for someone to stay on the forefront of that kind of field, upgrading models for forty years to stay relevant into the 2000’s, as Manabe did. Those talks also taught me about the challenge of coupling different scales: how small effects in churning fluids can add up and affect the simulation, and how hard it is to model different scales at once. To use these effects to discover which models are reliable, as Hasselmann did, is a major accomplishment.

Congratulations to Roger Penrose, Reinhard Genzel, and Andrea Ghez!

The 2020 Physics Nobel Prize was announced last week, awarded to Roger Penrose for his theorems about black holes and Reinhard Genzel and Andrea Ghez for discovering the black hole at the center of our galaxy.

Of the three, I’m most familiar with Penrose’s work. People had studied black holes before Penrose, but only the simplest of situations, like an imaginary perfectly spherical star. Some wondered whether black holes in nature were limited in this way, if they could only exist under perfectly balanced conditions. Penrose showed that wasn’t true: he proved mathematically that black holes not only can form, they must form, in very general situations. He’s also worked on a wide variety of other things. He came up with “twistor space”, an idea intended for a new theory of quantum gravity that ended up as a useful tool for “amplitudeologists” like me to study particle physics. He discovered pairs of tiles such that if you tiled a floor with them the pattern would never repeat. And he has some controversial hypotheses about quantum gravity and consciousness.

I’m less familiar with Genzel and Ghez, but by now everyone should be familiar with what they found. Genzel and Ghez led two teams that peered into the center of our galaxy. By carefully measuring the way stars moved deep in the core, they figured out something we now teach children: that our beloved Milky Way has a dark and chewy center, an enormous black hole around which everything else revolves. These appear to be a common feature of galaxies, and many other galaxies have been shown to host central black holes as well.

Like last year, I find it a bit odd that the Nobel committee decided to lump these two prizes together. Both discoveries concern black holes, so they’re more related than last year’s laureates, but the contexts are quite different: it’s not as if Penrose predicted the black hole in the center of our galaxy. Usually the Nobel committee avoids mathematical work like Penrose’s, except when it’s tied to a particular experimental discovery. It doesn’t look like anyone has gotten a Nobel prize for discovering that black holes exist, so maybe that’s the intent of this one…but Genzel and Ghez were not the first people to find evidence of a black hole. So overall I’m confused. I’d say that Penrose deserved a Nobel Prize, and that Genzel and Ghez did as well, but I’m not sure why they needed to split one with each other.

Congratulations to James Peebles, Michel Mayor, and Didier Queloz!

The 2019 Physics Nobel Prize was announced this week, awarded to James Peebles for work in cosmology and to Michel Mayor and Didier Queloz for the first observation of an exoplanet.

Peebles introduced quantitative methods to cosmology. He figured out how to use the Cosmic Microwave Background (light left over from the Big Bang) to understand how matter is distributed in our universe, including the presence of still-mysterious dark matter and dark energy. Mayor and Queloz were the first team to observe a planet outside of our solar system (an “exoplanet”), in 1995. By careful measurement of the spectrum of light coming from a star they were able to find a slight wobble, caused by a Jupiter-esque planet in orbit around it. Their discovery opened the floodgates of observation. Astronomers found many more planets than expected, showing that, far from a rare occurrence, exoplanets are quite common.

It’s a bit strange that this Nobel was awarded to two very different types of research. This isn’t the first time the prize was divided between two different discoveries, but all of the cases I can remember involve discoveries in closely related topics. This one didn’t, and I’m curious about the Nobel committee’s logic. It might have been that neither discovery “merited a Nobel” on its own, but I don’t think we’re supposed to think of shared Nobels as “lesser” than non-shared ones. It would make sense if the Nobel committee thought they had a lot of important results to “get through” and grouped them together to get through them faster, but if anything I have the impression it’s the opposite: that at least in physics, it’s getting harder and harder to find genuinely important discoveries that haven’t been acknowledged. Overall, this seems like a very weird pairing, and the Nobel committee’s citation “for contributions to our understanding of the evolution of the universe and Earth’s place in the cosmos” is a pretty loose justification.

Congratulations to Arthur Ashkin, Gérard Mourou, and Donna Strickland!

The 2018 Physics Nobel Prize was announced this week, awarded to Arthur Ashkin, Gérard Mourou, and Donna Strickland for their work in laser physics.

Some Nobel prizes recognize discoveries of the fundamental nature of reality. Others recognize the tools that make those discoveries possible.

Ashkin developed techniques that use lasers to hold small objects in place, culminating in “optical tweezers” that can pick up and move individual bacteria. Mourou and Strickland developed chirped pulse amplification, the current state of the art in extremely high-power lasers. Strickland is only the third woman to win the Nobel prize in physics; Ashkin, at 96, is the oldest person ever to win the prize.

(As an aside, the phrase “optical tweezers” probably has you imagining two beams of laser light pinching a bacterium between them, like microscopic lightsabers. In fact, optical tweezers use a single beam, focused and bent so that if an object falls out of place it will gently roll back to the middle of the beam. Instead of tweezers, it’s really more like a tiny laser spoon.)

The Nobel announcement emphasizes practical applications, like eye surgery. It’s important to remember that these are research tools as well. I wouldn’t have recognized the names of Ashkin, Mourou, and Strickland, but I recognized atom trapping, optical tweezers, and ultrashort pulses. Hang around atomic physicists, or quantum computing experiments, and these words pop up again and again. These are essential tools that have given rise to whole subfields. LIGO won a Nobel based on the expectation that it would kick-start a vast new area of research. Ashkin, Mourou, and Strickland’s work already has.

When You Shouldn’t Listen to a Distinguished but Elderly Scientist

Of science fiction author Arthur C. Clarke’s sayings, the most famous is “Clarke’s third law”, that “Any sufficiently advanced technology is indistinguishable from magic.” Almost as famous, though, is his first law:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Recently Michael Atiyah, an extremely distinguished but also rather elderly mathematician, claimed that something was possible: specifically, he claimed it was possible that he had proved the Riemann hypothesis, one of the longest-standing and most difficult puzzles in mathematics. I won’t go into the details here, but people are, well, skeptical.

This post isn’t really about Atiyah. I’m not close enough to that situation to comment. Instead, it’s about a more general problem.

See, the public seems to mostly agree with Clarke’s law. They trust distinguished, elderly scientists, at least when they’re saying something optimistic. Other scientists know better. We know that scientists are human, that humans age…and that sometimes scientific minds don’t age gracefully.

Some of the time, that means Alzheimer’s, or another form of dementia. Other times, it’s nothing so extreme, just a mind slowing down with age, opinions calcifying and logic getting just a bit more fuzzy.

And the thing is, watching from the sidelines, you aren’t going to know the details. Other scientists in the field will, but this kind of thing is almost never discussed with the wider public. Even here, though specific physicists come to mind as I write this, I’m not going to name them. It feels rude, to point out that kind of all-too-human weakness in someone who accomplished so much. But I think it’s important for the public to keep in mind that these people exist. When an elderly Nobelist claims to have solved a problem that baffles mainstream science, the news won’t tell you they’re mentally ill. All you can do is keep your eyes open, and watch for warning signs:

Be wary of scientists who isolate themselves. Scientists who still actively collaborate and mentor almost never have this kind of problem. There’s a nasty feedback loop when those contacts start to diminish. Being regularly challenged is crucial to test scientific ideas, but it’s also important for mental health, especially in the elderly. As a scientist thinks less clearly, they won’t be able to keep up with their collaborators as much, worsening the situation.

Similarly, beware those famous enough to surround themselves with yes-men. With Nobel prizewinners in particular, many of the worst cases involve someone treated with so much reverence that they forget to question their own ideas. This is especially risky when commenting on an unfamiliar field: often, the Nobelist’s contacts in the new field have a vested interest in holding on to their big-name support, and ignoring signs of mental illness.

Finally, as always, bigger claims require better evidence. If everything someone works on is supposed to revolutionize science as we know it, then likely none of it will. The signs that indicate crackpots apply here as well: heavily invoking historical scientists, emphasis on notation over content, a lack of engagement with the existing literature. Be especially wary if the argument seems easy: deep problems are rarely so simple to solve.

Keep this in mind, and the next time a distinguished but elderly scientist states that something is possible, don’t trust them blindly. Ultimately, we’re still human beings. We don’t last forever.