The 2025 Physics Nobel Prize was announced this week, awarded to John Clarke, Michel Devoret, and John Martinis for building an electrical circuit that exhibited quantum effects like tunneling and energy quantization on a macroscopic scale.
Press coverage of this prize tends to focus on two aspects: the idea that these three “scaled up” quantum effects to medium-sized objects (the technical account quotes a description that calls it “big enough to get one’s grubby fingers on”), and that the work paved the way for some of the fundamental technologies people are exploring for quantum computing.
That’s a fine enough story, but it leaves out what made these folks’ work unique: how it differed from the work of other Nobel laureates on other quantum systems. It’s a bit more technical of a story, but I don’t think it’s that technical. I’ll try to tell it here.
To start, have you heard of Bose-Einstein Condensates?
Bose-Einstein Condensates are macroscopic quantum states that have already won Nobel prizes. First theorized based on ideas developed by Einstein and Bose (the namesake of bosons), they involve a large number of particles moving together, each in the same state. While the first gas obeying Einstein’s equations for a Bose-Einstein Condensate wasn’t created until the 1990s, after Clarke, Devoret, and Martinis’s work, other systems based on essentially the same principles came much earlier. A laser works on the same principles as a Bose-Einstein condensate, as do phenomena like superconductivity and superfluidity.
This means that lasers, superfluids, and superconductors had been showing off quantum mechanics on grubby finger scales well before Clarke, Devoret, and Martinis’s work. But the science rewarded by this year’s Nobel turns out to be something quite different.
Because the different photons in laser light are independently in identical quantum states, lasers are surprisingly robust. You can disrupt the state of one photon, and it won’t interfere with the others. You’ll have weakened the laser’s consistency a little bit, but the disruption won’t spread much, if at all.
That’s very different from the way quantum systems usually work. Schrodinger’s cat is the classic example. You have a box with a radioactive atom, and if that atom decays, it releases poison, killing the cat. You don’t know if the atom has decayed or not, and you don’t know if the cat is alive or not. We say the atom’s state is a superposition of decayed and not decayed, and the cat’s state is a superposition of alive and dead.
But unlike photons in a laser, the atom and the cat in Schrodinger’s cat are not independent: if the atom has decayed, the cat is dead, if the atom has not, the cat is alive. We say the states of atom and cat are entangled.
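The difference between independent states and entangled states can be made concrete with a little linear algebra. Here is a minimal toy sketch (my own illustration, not anything from the laureates’ experiment): a state of two independent qubits factors into separate “atom” and “cat” pieces, while the cat state does not, which you can check by counting the nonzero singular values (the Schmidt rank) of the state’s coefficient matrix.

```python
import numpy as np

# Two-level basis states: |0> ("not decayed" / "alive") and |1> ("decayed" / "dead").
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# A product state: atom and cat each independently in a 50/50 superposition.
# It factors as (|0>+|1>)/sqrt(2) tensor (|0>+|1>)/sqrt(2).
product = np.kron(zero + one, zero + one) / 2.0

# The Schrodinger's-cat state: (|not decayed, alive> + |decayed, dead>)/sqrt(2).
# This cannot be written as (atom state) tensor (cat state).
cat = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2.0)

def schmidt_rank(psi):
    """Count nonzero singular values of the 2x2 coefficient matrix.
    Rank 1 means the state factors (independent); rank > 1 means entangled."""
    coeffs = psi.reshape(2, 2)
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > 1e-12))

print(schmidt_rank(product))  # 1: the subsystems are independent
print(schmidt_rank(cat))      # 2: the subsystems are entangled
```

Disrupting one photon in a laser is like poking one factor of a product state: the rest is untouched. Poking one half of the cat state affects the whole thing, because there are no separate halves to point to.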
That makes these so-called “Schrodinger’s cat” states much more delicate. The state of the cat depends on the state of the atom, and those dependencies quickly “leak” to the outside world. If you haven’t sealed the box well, the smell of the room is now also entangled with the cat…which, if you have a sense of smell, means that you are entangled with the cat. That’s the same as saying that you have measured the cat, so you can’t treat it as quantum any more.
What Clarke, Devoret, and Martinis did was to build a circuit that could exhibit, not a state like a laser, but a “cat state”: delicately entangled, at risk of total collapse if measured.
That’s why they deserved a Nobel, even in a world where there are many other Nobels for different types of quantum states. Lasers, superconductors, even Bose-Einstein condensates were in a sense “easy mode”, robust quantum states that didn’t need all that much protection. This year’s physics laureates, in contrast, showed it was possible to make circuits that could make use of quantum mechanics’ most delicate properties.
That’s also why their circuits, in particular, are being heralded as a predecessor of modern attempts at quantum computers. Quantum computers do tricks with entanglement: they need “cat states”, not Bose-Einstein Condensates. And Clarke, Devoret, and Martinis’s work in the 1980s was the first clear proof that this was a feasible thing to do.


Thank you for the article. I had not heard about their work until now; I have just read some more about their discovery on the internet. So, Clarke, Devoret, and Martinis proved that quantum tunnelling of electrons is not limited to the microscopic realm. They also demonstrated that their superconducting circuit could only absorb and emit energy in discrete, or quantized, amounts.
The question is: Has a mechanism for this behaviour of electrons been found, or is a microscopic theory of this phenomenon still missing?
I ask this because from my point of view it could be an interesting task to develop a theory of this phenomenon in the framework of the submicroscopic approach in which space is considered as a tessellattice, which provides for collective behaviour of a macroscopic ensemble of particles. The usual quantum mechanical formalism cannot cope with such a problem of course.
Excuse me if I’m wrong, but if I’ve understood what you’ve written right, you said entanglement causes the wave function to collapse?
This leaves me thinking the measurement problem is answered, which can’t be right.
Sorry for wasting your time. My background is psychology, and I write SF, so I ask stupid questions.
Heh, it’s not an unreasonable question!
I think what often gets missed is that the measurement problem is much more of a philosophy problem than it is a physics problem. Physically, people have a pretty good understanding of what happens in a measurement, a picture which sharpened in the 1980s with Zurek’s work on decoherence. When a small isolated system gets entangled with a big system, you lose track of some of its degrees of freedom, in a way that makes the result look increasingly like classical physics. Matt Strassler has a nice series on this if you want someone who is really scrupulously (but pedagogically!) getting the details right; it starts here.
The unresolved part, the part that usually gets called the measurement problem, is philosophical. When you’ve entangled a small system with a big system, and that big system happens to be you…is the system still in quantum superposition, just in a way you can no longer observe? Are the other possibilities still out there in some metaphysical sense, many-worlds-style, or do they get removed by some “collapse event” that actually physically happens in the world (and if so, how the heck does that work with relativity?) Or should one throw out the whole metaphysical question and do something like Quantum Bayesianism, and say that from your perspective the only truth is what you personally (directly or indirectly) observe? And in any case, how do you get the frequentist “these events occur with these rates” results from the Bayesian “these are your estimated probabilities” output of quantum theory?
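For what it’s worth, the uncontroversial physics half of this, entanglement with an environment making a superposition look classical, can be sketched numerically. In this toy example (my own illustration, assuming an idealized two-qubit model), a system qubit in superposition gets entangled with an environment qubit; tracing out the environment wipes out the off-diagonal “coherence” terms of the system’s density matrix, leaving what looks like a classical 50/50 mixture.

```python
import numpy as np

# System qubit starts in the superposition (|0> + |1>)/sqrt(2), then gets
# entangled with an environment qubit (orthogonal environment states e0, e1):
# the joint state is (|0>|e0> + |1>|e1>)/sqrt(2).
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
psi = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2.0)

# Density matrix of the combined system + environment.
rho_full = np.outer(psi, psi.conj())

# Partial trace over the environment ("losing track" of its degrees of
# freedom): rho_sys[i, j] = sum over k of rho[(i, k), (j, k)].
rho_sys = np.einsum('ikjk->ij', rho_full.reshape(2, 2, 2, 2))

print(np.round(rho_sys, 3))
# rho_sys comes out as diag(0.5, 0.5): no off-diagonal coherence terms
# survive, so the system behaves like a classical coin flip.
```

Whether the discarded branch is “still out there” is exactly the philosophical residue discussed above; the arithmetic is silent on that.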
Thank you for taking time to reply to me. I shall go read the link, and see what I can extract from it. Cheers.