Author Archives: 4gravitons

Antimatter Isn’t Magic

You’ve heard of antimatter, right?

For each type of particle, there is a rare kind of evil twin with the opposite charge, called an anti-particle. When an anti-proton meets a proton, they annihilate each other in a giant blast of energy.

I see a lot of questions online about antimatter. One recurring theme is people asking a very general question: how does antimatter work?

If you’ve just heard the pop physics explanation, antimatter probably sounds like magic. What about antimatter lets it destroy normal matter? Does it need to touch? How long does it take? And what about neutral particles like neutrons?

You find surprisingly few good explanations of this online, but I can explain why. Physicists like me don’t expect antimatter to be confusing in this way, because to us, antimatter isn’t doing anything all that special. When a particle and an antiparticle annihilate, they’re doing the same thing that any other pair of particles do when they do…basically anything else.

Instead of matter and antimatter, let’s talk about one of the oldest pieces of evidence for quantum mechanics, the photoelectric effect. Scientists shone light at a metal, and found that if the wavelength of the light was short enough, electrons would spring free, causing an electric current. If the wavelength was too long, the metal wouldn’t emit any electrons, no matter how much light they shone. Einstein won his Nobel prize for the explanation: the light hitting the metal comes in particle-sized pieces, called photons, whose energy is determined by the wavelength of the light. If the individual photons don’t have enough energy to get an electron to leave the metal, then no electron will move, no matter how many photons you use.
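To make Einstein’s rule concrete, here’s a rough sketch in code (my own illustration, not from the original experiments; the work function value is just a ballpark number for a typical metal): a photon’s energy is fixed by its wavelength, so a single short-wavelength photon can free an electron, while any number of long-wavelength photons cannot.

```python
# Photoelectric threshold: photon energy E = h*c / wavelength. An electron
# escapes only if E exceeds the metal's work function W, no matter how
# many photons arrive.

H = 6.626e-34   # Planck's constant, in J*s
C = 2.998e8     # speed of light, in m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of this wavelength, in electron-volts."""
    return H * C / wavelength_m / EV

def ejects_electron(wavelength_m, work_function_ev):
    """True if one photon of this wavelength can free an electron."""
    return photon_energy_ev(wavelength_m) > work_function_ev

WORK_FUNCTION = 4.3  # rough ballpark for a metal, in eV (illustrative)
print(ejects_electron(250e-9, WORK_FUNCTION))  # short wavelength: True
print(ejects_electron(600e-9, WORK_FUNCTION))  # long wavelength: False
```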

What happens to the photons after they hit the metal?

They go away. We say they are absorbed: an electron absorbs a photon and speeds up, increasing its kinetic energy so it can escape.

But we could just as easily say the photon is annihilated, if we wanted to.

In the photoelectric effect, you start with one electron and one photon, they come together, and you end up with one electron and no photon. In proton-antiproton annihilation, you start with a proton and an antiproton, they come together, and you end up with no protons or antiprotons, but instead “energy”…which in practice, usually means two photons.

That’s all that happens, deep down at the root of things. The laws of physics are rules about inputs and outputs. Start with these particles, they come together, you end up with these other particles. Sometimes one of the particles stays the same. Sometimes particles seem to transform, and different kinds of particles show up. Sometimes some of the particles are photons, and you think of them as “just energy”, and easy to absorb. But particles are particles, and nothing is “just energy”. Absorption, decay, annihilation: each one is just another type of what we call an interaction.

What makes annihilation of matter and antimatter seem unique comes down to charges. Interactions have to obey the laws of physics: they conserve energy, they conserve momentum, and they conserve charge.

So why can an antiproton and a proton annihilate to pure photons, while two protons can’t? A proton and an antiproton have opposite charges, so together they add up to zero, the same total charge as a pair of photons. You could combine two protons to make something else, but whatever you made would have to have the same charge as two protons.

What about neutrons? A neutron has no electric charge, so you might think it wouldn’t need antimatter. But a neutron has another type of charge, called baryon number. In order to annihilate one, you’d need an anti-neutron, which would still have zero electric charge but would have the opposite baryon number. (By the way, physicists have been making anti-neutrons since 1956.)

On the other hand, photons really have no charge of any kind, and neither do Higgs bosons. So one Higgs boson can become two photons, without annihilating with anything else. Each of these particles can be called its own antiparticle: a photon is also an antiphoton, a Higgs is also an anti-Higgs.
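To see the bookkeeping in one place, here’s a toy sketch (my own illustration, ignoring energy, momentum, and everything else besides two kinds of charge): an interaction is only allowed if every conserved charge adds up the same way before and after.

```python
# Toy bookkeeping: an interaction is allowed only if the total electric
# charge and total baryon number are the same before and after.
# (Real interactions must also conserve energy, momentum, and more.)

PARTICLES = {
    # name: (electric charge, baryon number)
    "proton":      (+1, +1),
    "antiproton":  (-1, -1),
    "neutron":     ( 0, +1),
    "antineutron": ( 0, -1),
    "photon":      ( 0,  0),
    "higgs":       ( 0,  0),
}

def totals(names):
    charge = sum(PARTICLES[n][0] for n in names)
    baryon = sum(PARTICLES[n][1] for n in names)
    return charge, baryon

def allowed(inputs, outputs):
    """Charges must add up the same way on both sides."""
    return totals(inputs) == totals(outputs)

print(allowed(["proton", "antiproton"], ["photon", "photon"]))    # True
print(allowed(["proton", "proton"], ["photon", "photon"]))        # False
print(allowed(["neutron", "antineutron"], ["photon", "photon"]))  # True
print(allowed(["higgs"], ["photon", "photon"]))                   # True
```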

Because particle-antiparticle annihilation follows the same rules as other interactions between particles, it also takes place via the same forces. When a proton and an antiproton annihilate each other, they typically do this via the electromagnetic force. This is why you end up with light, which is an electromagnetic wave. Like everything in the quantum world, this annihilation isn’t certain. It has a chance to happen, proportional to the strength of the force involved.

What about neutrinos? They also appear to have a kind of charge, called lepton number. That might not really be a conserved charge, and neutrinos might be their own antiparticles, like photons. However, they are much less likely to be annihilated than protons and antiprotons, because they don’t have electric charge, and thus their interaction doesn’t depend on the electromagnetic force, but on the much weaker weak nuclear force. A weaker force means a less likely interaction.

Antimatter might seem like the stuff of science fiction. But it’s not really harder to understand than anything else in particle physics.

(I know, that’s a low bar!)

It’s just interactions. Particles go in, particles go out. If it follows the rules, it can happen; if it doesn’t, it can’t. Antimatter is no different.

I’ve Felt Like a Hallucinating LLM

ChatGPT and its kin work by using Large Language Models, or LLMs.

A climate model is a pile of mathematics and code, honed on data from the climate of the past. Tell it how the climate starts out, and it will give you a prediction for what happens next.

Similarly, a language model is a pile of mathematics and code, honed on data from the texts of the past. Tell it how a text starts, and it will give you a prediction for what happens next.

We have a rough idea of what a climate model can predict. The climate has to follow the laws of physics, for example. Similarly, a text should follow the laws of grammar, the order of verbs and nouns and so forth. The creators of the earliest, smallest language models figured out how to do that reasonably well.
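If you want a concrete picture of “predict what happens next”, here’s a deliberately tiny toy model (my own illustration, nothing like a real LLM in scale, but doing the same basic job): count which word follows which in past texts, then guess the most common continuation.

```python
# A toy "language model": learn which word tends to follow which from past
# texts, then predict the most common continuation. Real LLMs are enormously
# bigger and subtler, but the job is the same: given how a text starts,
# predict what typically comes next.

from collections import Counter, defaultdict

def train(texts):
    follows = defaultdict(Counter)
    for text in texts:
        words = text.split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    if word not in model:
        return None                        # never seen this word before
    return model[word].most_common(1)[0][0]

model = train([
    "once upon a time the hero set out to save the world",
    "once upon a time the queen fell from the tower",
])
print(predict_next(model, "once"))  # "upon"
print(predict_next(model, "the"))   # whichever word most often followed "the"
```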

Texts do more than just follow grammar, though. They can describe the world. And LLMs are both surprisingly good and surprisingly bad at that. They can do a lot when used right, answering test questions most humans would struggle with. But they also “hallucinate”, confidently saying things that have nothing to do with reality.

If you want to understand why large language models make both good predictions and bad, you shouldn’t just think about abstract “texts”. Instead, think about a specific type of text: a story.

Stories follow grammar, most of the time. But they also follow their own logic. The hero sets out, saves the world, and returns home again. The evil queen falls from the tower at the climax of the final battle. There are three princesses, and only the third can break the spell.

We aren’t usually taught this logic, like we’re taught physics or grammar. We learn it from experience, from reading stories and getting used to patterns. It’s the logic, not of how a story must go, but of how a story typically goes. And that question, of what typically comes next, is exactly the question LLMs are designed to answer.

It’s also a question we sometimes answer.

I was a theatre kid, and I loved improv in particular. Some of it was improv comedy, the games and skits you might have seen on “Whose Line is it Anyway?” But some of it was more…hippy stuff.

I’d meet up with a group on Saturdays. One year we made up a creation myth, half-rehearsed and half-improvised, a collection of gods and primordial beings. The next year we moved the story forward. Civilization had risen…and fallen again. We played a group of survivors gathered around a campfire, wary groups wondering what came next.

We plotted out characters ahead of time. I was the “villain”, or the closest we had to one. An enforcer of the just-fallen empire, the oppressor embodied. While the others carried clubs, staves, and farm implements, I was the only one with a real weapon: a sword.

(Plastic in reality, but the audience knew what to do.)

In the arguments and recriminations of the story, that sword set me apart, a constant threat that turned my character from contemptible to dangerous, that gave me a seat at the table even as I antagonized and stirred the pot.

But the story had another direction. The arguments pushed and pulled, and gradually the survivors realized that they would not survive if they did not put their grievances to rest, if they did not seek peace. So, one man stepped forward, and tossed his staff into the fire.

The others followed. One by one, clubs and sticks and menacing tools were cast aside. And soon, I was the only one armed.

If I was behaving logically, if I followed my character’s interests, I would have “won” there. I had gotten what I wanted, now there was no check on my power.

But that wasn’t what the story wanted. Improv is a game of fast decisions and fluid invention. We follow our instincts, and our instincts are shaped by experience. The stories of the past guide our choices, and must often be the only guide: we don’t have time to edit, or to second-guess.

And I felt the story, and what it wanted. It was a command that transcended will, that felt like it left no room for an individual actor making an individual decision.

I cast my sword into the fire.

The instinct that brought me to do that is the same instinct that guides authors when they say that their characters write themselves, when their story goes in an unexpected direction. It’s an instinct that can be tempered and counteracted, with time and effort, because it can easily lead to nonsense. It’s why every good book needs an editor, why improv can be as repetitive as it is magical.

And it’s been the best way I’ve found to understand LLMs.

An LLM telling a story tells a typical story, based on the data used to create it. In the same way, an LLM giving advice gives typical advice, to some extent in content but more importantly in form, advice that is confident and mentions things advice often mentions. An LLM writing a biography will write a typical biography, which may not be your biography, even if your biography was one of those used to create it, because it tries to predict how a biography should go based on all the other biographies. And all of these predictions and hallucinations are very much the kind of snap judgement that disarmed me.

These days, people are trying to build on top of LLMs and make technology that does more, that can edit and check its decisions. For the most part, they’re building these checks out of LLMs. Instead of telling one story, of someone giving advice on the internet, they tell two stories: the advisor and the editor, one giving the advice and one correcting it. They have to tell these stories many times, broken up into many parts, to approximate something other than the improv actor’s first instincts, and that’s why software that does this is substantially more expensive than more basic software that doesn’t.
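A rough sketch of that advisor-and-editor pattern looks something like this (my own pseudocode-style illustration; generate is a placeholder for a call to some language model, not any particular product’s API):

```python
# Advisor-and-editor sketch: one "story" drafts the advice, another
# critiques it, and the draft is revised. generate() is a placeholder
# for a call to some language model.

def generate(prompt: str) -> str:
    """Placeholder for a language-model call."""
    raise NotImplementedError("plug in a model of your choice here")

def advise_with_editor(question: str, rounds: int = 2) -> str:
    draft = generate(f"Give advice on the following question:\n{question}")
    for _ in range(rounds):
        critique = generate(
            "You are a careful editor. Point out anything wrong or "
            f"unsupported in this advice:\n{draft}"
        )
        draft = generate(
            "Revise the advice to address the critique.\n"
            f"Advice:\n{draft}\nCritique:\n{critique}"
        )
    return draft
```

Every round of editing means several extra model calls, which is where the extra cost comes from.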

I can’t say how far they’ll get. Models need data to work well, decisions need reliability to be good, computers need infrastructure to compute. But if you want to understand what’s at an LLM’s beating heart, think about the first instincts you have in writing or in theatre, in stories or in play. Then think about a machine that just does that.

Lambda-CDM Is Not Like the Standard Model

A statistician will tell you that all models are wrong, but some are useful.

Particle physicists have an enormously successful model called the Standard Model, which describes the world in terms of seventeen quantum fields, giving rise to particles from the familiar electron to the challenging-to-measure Higgs boson. The model has nineteen parameters, numbers that aren’t predicted by the model itself but must be found by doing experiments and finding the best statistical fit. With those numbers as input, the model is extremely accurate, aside from the occasional weird discrepancy.

Cosmologists have their own very successful standard model that they use to model the universe as a whole. Called ΛCDM, it describes the universe in terms of three things: dark energy, denoted with a capital lambda (Λ), cold dark matter (CDM), and ordinary matter, all interacting with each other via gravity. The model has six parameters, which must be found by observing the universe and finding the best statistical fit. When those numbers are input, the model is extremely accurate, though there have recently been some high-profile discrepancies.

These sound pretty similar. You model the world as a list of things, fix your parameters based on nature, and make predictions. Wikipedia has a nice graphic depicting the quantum fields of the Standard Model, and you could imagine a similar graphic for ΛCDM.

A graphic like that would be misleading, though.

ΛCDM doesn’t just propose a list of fields and let them interact freely. Instead, it tries to model the universe as a whole, which means it carries assumptions about how matter and energy are distributed, and how space-time is shaped. Some of this is controlled by its parameters, and by tweaking them one can model a universe that varies in different ways. But other assumptions are baked in. If the universe had a very different shape, caused by a very different distribution of matter and energy, then we would need a very different model to represent it. We couldn’t use ΛCDM.

The Standard Model isn’t like that. If you collide two protons together, you need a model of how quarks are distributed inside protons. But that model isn’t the Standard Model, it’s a separate model used for that particular type of experiment. The Standard Model is supposed to be the big picture, the stuff that exists and affects every experiment you can do.

That means the Standard Model is supported in a way that ΛCDM isn’t. The Standard Model describes many different experiments, and is supported by almost all of them. When an experiment disagrees, it has specific implications for part of the model only. For example, neutrinos have mass, which was not predicted in the Standard Model, but it proved easy for people to modify the model to fit. We know the Standard Model is not the full picture, but we also know that any deviations from it must be very small. Large deviations would contradict other experiments, or more basic principles like probabilities needing to be smaller than one.

In contrast, ΛCDM is really just supported by one experiment. We have one universe to observe. We can gather a lot of data, measuring it from its early history to the recent past. But we can’t run it over and over again under different conditions, and our many measurements are all measuring different aspects of the same thing. That’s why unlike in the Standard Model, we can’t separate out assumptions about the shape of the universe from assumptions about what it contains. Dark energy and dark matter are on the same footing as distribution of fluctuations and homogeneity and all those shape-related words, part of one model that gets fit together as a whole.

And so while both the Standard Model and ΛCDM are successful, that success means something different. It’s hard to imagine that we find new evidence and discover that electrons don’t exist, or quarks don’t exist. But we may well find out that dark energy doesn’t exist, or that the universe has a radically different shape. The statistical success of ΛCDM is impressive, and it means any alternative has a high bar to clear. But it doesn’t have to mean rethinking everything the way an alternative to the Standard Model would.

I Have a Theory

“I have a theory,” says the scientist in the book. But what does that mean? What does it mean to “have” a theory?

First, there’s the everyday sense. When you say “I have a theory”, you’re talking about an educated guess. You think you know why something happened, and you want to check your idea and get feedback. A pedant would tell you you don’t really have a theory, you have a hypothesis. It’s “your” hypothesis, “your theory”, because it’s what you think happened.

The pedant would insist that “theory” means something else. A theory isn’t a guess, even an educated guess. It’s an explanation with evidence, tested and refined in many different contexts in many different ways, a whole framework for understanding the world, the most solid knowledge science can provide. Despite the pedant’s insistence, that isn’t the only way scientists use the word “theory”. But it is a common one, and a central one. You don’t really “have” a theory like this, though, except in the sense that we all do. These are explanations with broad consensus, things you either know of or don’t, they don’t belong to one person or another.

Except, that is, if one person takes credit for them. We sometimes say “Darwin’s theory of evolution”, or “Einstein’s theory of relativity”. In that sense, we could say that Einstein had a theory, or that Darwin had a theory.

Sometimes, though, “theory” doesn’t mean this standard official definition, even when scientists say it. And that changes what it means to “have” a theory.

For some researchers, a theory is a lens with which to view the world. This happens sometimes in physics, where you’ll find experts who want to think about a situation in terms of thermodynamics, or in terms of a technique called Effective Field Theory. It happens in mathematics, where some choose to analyze an idea with category theory not to prove new things about it, but just to translate it into category theory lingo. It’s most common, though, in the humanities, where researchers often specialize in a particular “interpretive framework”.

For some, a theory is a hypothesis, but also a pet project. There are physicists who come up with an idea (maybe there’s a variant of gravity with mass! maybe dark energy is changing!) and then focus their work around that idea. That includes coming up with ways to test whether the idea is true, showing the idea is consistent, and understanding what variants of the idea could be proposed. These ideas are hypotheses, in that they’re something the scientist thinks could be true. But they’re also ideas with many moving parts that motivate work by themselves.

Taken to the extreme, this kind of “having” a theory can go from healthy science to political bickering. Instead of viewing an idea as a hypothesis you might or might not confirm, it can become a platform to fight for. Instead of investigating consistency and proposing tests, you focus on arguing against objections and disproving your rivals. This sometimes happens in science, especially in more embattled areas, but it happens much more often with crackpots, where people who have never really seen science done can decide it’s time for their idea, right or wrong.

Finally, sometimes someone “has” a theory that isn’t a hypothesis at all. In theoretical physics, a “theory” can refer to a complete framework, even if that framework isn’t actually supposed to describe the real world. Some people spend time focusing on a particular framework of this kind, understanding its properties in the hope of getting broader insights. By becoming an expert on one particular theory, they can be said to “have” that theory.

Bonus question: in what sense do string theorists “have” string theory?

You might imagine that string theory is an interpretive framework, like category theory, with string theorists coming up with the “string version” of things others understand in other ways. This, for the most part, doesn’t happen. Without knowing whether string theory is true, there isn’t much benefit in just translating other things to string theory terms, and people for the most part know this.

For some, string theory is a pet project hypothesis. There is a community of people who try to get predictions out of string theory, or who investigate whether string theory is consistent. It’s not a huge number of people, but it exists. A few of these people can get more combative, or make unwarranted assumptions based on dedication to string theory in particular: for example, you’ll see the occasional argument that because something is difficult in string theory it must be impossible in any theory of quantum gravity. You see a spectrum in the community, from people for whom string theory is a promising project to people for whom it is a position that needs to be defended and argued for.

For the rest, the question of whether string theory describes the real world takes a back seat. They’re people who “have” string theory in the sense that they’re experts, and they use the theory primarily as a mathematical laboratory to learn broader things about how physics works. If you ask them, they might still say that they hypothesize string theory is true. But for most of these people, that question isn’t central to their work.

This Week, at FirstPrinciples.org

I’ve got a piece out this week in a new venue: FirstPrinciples.org, where I’ve written a profile of a startup called Vaire Computing.

Vaire works on reversible computing, an idea that tries to leverage thermodynamics to make a computer that wastes as little heat as possible. While I learned a lot of fun things that didn’t make it into the piece…I’m not going to tell you about them this week! That’s because I’m working on another piece about reversible computing, focused on a different aspect of the field. When that piece is out I’ll have a big “bonus material post” talking about what I learned writing both pieces.

This week, instead, the bonus material is about FirstPrinciples.org itself, where you’ll be seeing me write more often in future. The First Principles Foundation was founded by Ildar Shar, a Canadian tech entrepreneur who thinks that physics is pretty cool. (Good taste that!) His foundation aims to support scientific progress, especially in addressing the big, fundamental questions. They give grants, analyze research trends, build scientific productivity tools…and most relevantly for me, publish science news on their website, in a section called the Hub.

The first time I glanced through the Hub, it was clear that FirstPrinciples and I have a lot in common. Like me, they’re interested both in scientific accomplishments and in the human infrastructure that makes them possible. They’ve interviewed figures in the open access movement, like the creators of arXiv and SciPost. On the science side, they mix coverage of the mainstream and reputable with outsiders challenging the status quo, and hot news topics with explainers of key concepts. They’re still new, and still figuring out what they want to be. But from what I’ve glimpsed so far, it looks like they’re going somewhere good.

Hot Things Are Less Useful

Did you know that particle colliders have to cool down their particle beams before they collide?

You might have learned in school that temperature is secretly energy. With a number called Boltzmann’s constant, you can convert a temperature of a gas in Kelvin to the average energy of a molecule in the gas. If that’s what you remember about temperature, it might seem weird that someone would cool down the particles in a particle collider. The whole point of a particle collider is to accelerate particles, giving them lots of energy, before colliding them together. Since those particles have a lot of energy, they must be very hot, right?

Well, no. Here’s the thing: temperature is not just the average energy. It’s the average random energy. It’s energy that might be used to make a particle move forward or backwards, up or down, a random different motion for each particle. It doesn’t include motion that’s the same for each particle, like the movement of a particle beam.

Cooling down a particle beam, then, doesn’t mean slowing it down. Rather, it means making it more consistent, getting the different particles moving in the same direction rather than randomly spreading apart. You want the particles to go somewhere specific, speeding up and slamming into the other beam. You don’t want them to move randomly, running into the walls and destroying your collider. So you can have something with high energy that is comparatively cool.
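Here’s a small numerical illustration of the difference (made-up numbers, nothing to do with any real accelerator): two beams with the same enormous average speed, one tightly collimated and one with a big random spread.

```python
# Temperature tracks the random spread of velocities about the average,
# not the average itself: a fast but well-collimated beam is "cold".
# All numbers here are made up for illustration.

import random

K_B = 1.380649e-23     # Boltzmann's constant, J/K
M_PROTON = 1.6726e-27  # proton mass, kg

def beam_temperature(velocities):
    """Temperature from the 1D spread about the mean (k*T ~ m * variance)."""
    mean = sum(velocities) / len(velocities)
    variance = sum((v - mean) ** 2 for v in velocities) / len(velocities)
    return M_PROTON * variance / K_B

BEAM_SPEED = 1e7  # the same huge average speed for both beams, in m/s
tight_beam = [BEAM_SPEED + random.gauss(0, 1e2) for _ in range(10_000)]
loose_beam = [BEAM_SPEED + random.gauss(0, 1e5) for _ in range(10_000)]

print(beam_temperature(tight_beam))  # around a kelvin: "cold" despite the energy
print(beam_temperature(loose_beam))  # about a million times hotter
```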

In general, the best way I’ve found to think about temperature and heat is in terms of usefulness and uselessness. Cool things are useful, they do what you expect and not much more. Hot things are less useful, they use energy to do random things you don’t want. Sometimes, by chance, this random energy will still do something useful, and if you have a cold thing to pair with the hot thing, you can take advantage of this in a consistent way. But hot things by themselves are less useful, and that’s why particle colliders try to cool down their beams.

AI Can’t Do Science…And Neither Can Other Humans

Seen on Twitter: a tweet from Prof. Cronin, quoted in part below.

I don’t know the context here, so I can’t speak to what Prof. Cronin meant. But it got me thinking.

Suppose you, like Prof. Cronin, were to insist that AI “cannot in principle” do science, because AI “is not autonomous” and “does not come up with its own problems to solve”. What might you mean?

You might just be saying that AI is bad at coming up with new problems to solve. That’s probably fair, at least at the moment. People have experimented with creating simple “AI researchers” that “study” computer programs, coming up with hypotheses about the programs’ performance and testing them. But it’s a long road from that to reproducing the much higher standards human scientists have to satisfy.

You probably don’t mean that, though. If you did, you wouldn’t have said “in principle”. You mean something stronger.

More likely, you might mean that AI cannot come up with its own problems, because AI is a tool. People come up with problems, and use AI to help solve them. In this perspective, not only is AI “not autonomous”, it cannot be autonomous.

On a practical level, this is clearly false. Yes, machine learning models, the core technology in current AI, are set up to answer questions. A user asks something, and receives the model’s prediction of the answer. That’s a tool, but for the more flexible models like GPT it’s trivial to turn it into something autonomous. Just add another program: a loop that asks the model what to do, does it, tells the model the result, and asks what to do next. Like taping a knife to a Roomba, you’ve made a very simple modification to make your technology much more dangerous.
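The loop really is that simple. As a sketch (my own illustration; ask_model and run are placeholders, not any real library’s API):

```python
# The simplest possible "autonomy" wrapper: ask the model what to do,
# do it, report back, repeat. ask_model() and run() are placeholders,
# not any real library's API.

def ask_model(history: list[str]) -> str:
    """Placeholder for a call to a flexible model like GPT."""
    raise NotImplementedError

def run(action: str) -> str:
    """Placeholder: actually perform the action, return what happened."""
    raise NotImplementedError

def autonomous_loop(goal: str, steps: int = 10) -> None:
    history = [f"Goal: {goal}. What should be done first?"]
    for _ in range(steps):
        action = ask_model(history)  # the model proposes a step
        result = run(action)         # the program carries it out
        history.append(f"Did: {action}. Result: {result}. What next?")
```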

You might object, though, that this simple modification of GPT is not really autonomous. After all, a human created it. That human had some goal, some problem they wanted to solve, and the AI is just solving the problem for them.

That may be a fair description of current AI, but insisting it’s true in principle has some awkward implications. If you make a “physics AI”, just tell it to do “good physics”, and it starts coming up with hypotheses you’d never thought of, is it really fair to say it’s just solving your problem?

What if the AI, instead, was a child? Picture a physicist encouraging a child to follow in their footsteps, filling their life with physics ideas and rhapsodizing about the hard problems of the field at the dinner table. Suppose the child becomes a physicist in turn, and finds success later in life. Were they really autonomous? Were they really a scientist?

What if the child, instead, was a scientific field, and the parent was the general public? The public votes for representatives, the representatives vote to hire agencies, and the agencies promise scientists they’ll give them money if they like the problems they come up with. Who is autonomous here?

(And what happens if someone takes a hammer to that process? I’m…still not talking about this! No-politics-rule still in effect, sorry! I do have a post planned, but it will have to wait until I can deal with the fallout.)

At this point, you’d probably stop insisting. You’d drop that “in principle”, and stick with the claim I started with, that current AI can’t be a scientist.

But you have another option.

You can accept the whole chain of awkward implications, bite all the proverbial bullets. Yes, you insist, AI is not autonomous. Neither is the physicist’s child in your story, and neither are the world’s scientists paid by government grants. Each is a tool, used by the one, true autonomous scientist: you.

You are stuck in your skull, a blob of curious matter trained on decades of experience in the world and pre-trained with a couple billion years of evolution. For whatever reason, you want to know more, so you come up with problems to solve. You’re probably pretty vague about those problems. You might want to see more pretty pictures of space, or wrap your head around the nature of time. So you turn the world into your tool. You vote and pay taxes, so your government funds science. You subscribe to magazines and newspapers, so you hear about it. You press out against the world, and along with the pressure that already exists it adds up, and causes change. Biological intelligences and artificial intelligences scurry at your command. From their perspective, they are proposing their own problems, much more detailed and complex than the problems you want to solve. But from yours, they’re your limbs beyond limbs, sight beyond sight, asking the fundamental questions you want answered.

Cool Asteroid News

Did you hear about the asteroid?

Which one?

You might have heard that an asteroid named 2024 YR4 is going to come unusually close to the Earth in 2032. When it first made the news, astronomers estimated a non-negligible chance of it hitting us: about three percent. That’s small enough that they didn’t expect it to happen, but large enough to plan around it: people invest in startups with a smaller chance of succeeding. Still, people were fairly calm about this one, and there are a few good reasons:

  • First, this isn’t a “kill the dinosaurs” asteroid, it’s much smaller. This is a “Tunguska Event” asteroid. Still pretty bad if it happens near a populated area, but not the end of life as we know it.
  • We know about it far in advance, and space agencies have successfully deflected an asteroid before, for a test. If it did pose a risk, it’s quite likely they’d be able to change its path so it misses the Earth instead.
  • It’s tempting to think of that 3% chance as like a roll of a hundred-sided die: the asteroid is on a random path, roll 1 to 3 and it will hit the Earth, roll higher and it won’t, and nothing we do will change that. In reality, though, that 3% was a measure of our ignorance. As astronomers measure the asteroid more thoroughly, they’ll know more and more about its path, and each time they figure something out, they’ll update the number.

And indeed, the number has been updated. In just the last few weeks, the estimated probability of impact has dropped from 3% to a few thousandths of a percent, as more precise observations clarified the asteroid’s path. There’s still a non-negligible chance it will hit the moon (about two percent at the moment), but it’s far too small to do more than make a big flashy crater.
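A toy Monte Carlo sketch (with entirely made-up numbers, my own illustration rather than anything astronomers actually run) shows why the quoted probability behaves this way: sample many trajectories consistent with the current measurement uncertainty, count the fraction that hit Earth, and watch the answer change as the uncertainty shrinks.

```python
# Sample many possible closest approaches consistent with the measurement
# uncertainty; the "impact probability" is just the fraction that hit.
# All numbers are invented for illustration.

import random

EARTH_RADIUS_KM = 6371

def impact_probability(predicted_miss_km, uncertainty_km, samples=100_000):
    hits = 0
    for _ in range(samples):
        # one possible trajectory, given what we currently know
        miss = random.gauss(predicted_miss_km, uncertainty_km)
        if abs(miss) < EARTH_RADIUS_KM:
            hits += 1
    return hits / samples

# Early, rough measurements: a percent or so of sampled paths hit.
print(impact_probability(predicted_miss_km=150_000, uncertainty_km=80_000))
# Later, sharper measurements: essentially none do.
print(impact_probability(predicted_miss_km=150_000, uncertainty_km=20_000))
```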

It’s kind of fun to think that there are people out there who systematically track these things, with a plan to deal with them. It feels like something out of a sci-fi novel.

But I find the other asteroid more fun.

In 2020, a probe sent by NASA visited an asteroid named Bennu, taking samples which it carefully packaged and brought back to Earth. Now, scientists have analyzed the samples, revealing several moderately complex chemicals that have an important role in life on Earth, like amino acids and the bases that make up RNA and DNA. Interestingly, while on Earth these molecules all have the same “handedness”, the molecules on Bennu are divided about 50/50. Something similar was seen on samples retrieved from another asteroid, so this reinforces the idea that amino acids and nucleotide bases in space do not have a preferred handedness.

I first got into physics for the big deep puzzles, the ones that figure into our collective creation story. Where did the universe come from? Why are its laws the way they are? Over the ten years since I got my PhD, it’s felt like the answers to these questions have gotten further and further away, with new results serving mostly to rule out possible explanations with greater and greater precision.

Biochemistry has its own deep puzzles figuring into our collective creation story, and the biggest one is abiogenesis: how life formed from non-life. What excites me about these observations from Bennu is that it represents real ongoing progress on that puzzle. By glimpsing a soup of ambidextrous molecules, Bennu tells us something about how our own molecules’ handedness could have developed, and rules out ways that it couldn’t have. In physics, if we could see an era of the universe when there were equal amounts of matter and antimatter, we’d be ecstatic: it would confirm that the imbalance between matter and antimatter is a real mystery, and show us where we need to look for the answer. I love that researchers on the origins of life have reason right now to be similarly excited.

Some FAQ for Microsoft’s Majorana 1 Chip

Recently, Microsoft announced a fancy new quantum computing chip called Majorana 1. I’ve noticed quite a bit of confusion about what they actually announced, and while there’s a great FAQ page about it on the quantum computing blog Shtetl-Optimized, the post there aims at a higher level, assuming you already know the basics. You can think of this post as a complement to that one, one that tries to cover some basic things Shtetl-Optimized took for granted.

Q: In the announcement, Microsoft said:

“It leverages the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.”

That sounds wild! Are they really using particles in a computer?

A: All computers use particles. Electrons are particles!

Q: You know what I mean!

A: You’re asking if these are “particle physics” particles, like the weird types they try to observe at the LHC?

No, they’re not.

Particle physicists use a mathematical framework called quantum field theory, where particles are ripples in things called quantum fields that describe properties of the universe. But they aren’t the only people to use that framework. Instead of studying properties of the universe you can study properties of materials, weird alloys and layers of metal and crystal that do weird and useful things. The properties of these materials can be approximately described with the same math, with quantum fields. Just as the properties of the universe ripple to produce particles, these properties of materials ripple to produce what are called quasiparticles. Ultimately, these quasiparticles come down to movements of ordinary matter, usually electrons in the original material. They’re just described with a kind of math that makes them look like their own particles.

Q: So, what are these Majorana particles supposed to be?

A: In quantum field theory, most particles come with an antimatter partner. Electrons, for example, have partners called positrons, with a positive electric charge instead of a negative one. These antimatter partners have to exist due to the math of quantum field theory, but there is a way out: some particles are their own antimatter partner, letting one particle cover both roles. This happens for some “particle physics particles”, but all the examples we’ve found are a type of particle called a “boson”, particles related to forces. In 1937, the physicist Ettore Majorana figured out the math you would need to make a particle like this that was a fermion instead, the other main type of particle that includes electrons and protons. So far, we haven’t found one of these Majorana fermions in nature, though some people think the elusive neutrino particles could be an example.

Others, though, have tried instead to find a material described by Majorana’s theory. This should in principle be easier: you can build a lot of different materials, after all. But it’s proven quite hard to do. Back in 2018, Microsoft claimed they’d managed this, but had to retract the claim. This time, they seem more confident, though the scientific community is still not convinced.

Q: And what’s this topoconductor they’re talking about?

A: Topoconductor is short for topological superconductor. Superconductors are materials that conduct electricity with no resistance at all, unlike ordinary metals.

Q: And, topological means? Something about donuts, right?

A: If you’ve heard anything about topology, you’ve heard that it’s a type of mathematics where donuts are equivalent to coffee cups. You might have seen an animation of a coffee cup being squished and mushed around until the ring of the handle becomes the ring of a donut.

This isn’t actually the important part of topology. The important part is that, in topology, a ball is not equivalent to a donut.

Topology is the study of which things can change smoothly into one another. If you want to change a donut into a ball, you have to slice through the donut’s ring or break the surface inside. You can’t smoothly change one to another. Topologists study shapes of different kinds of things, figuring out which ones can be changed into each other smoothly and which can’t.

Q: What does any of that have to do with quantum computers?

A: The shapes topologists study aren’t always as simple as donuts and coffee cups. They can also study the shape of quantum fields, figuring out which types of quantum fields can change smoothly into each other and which can’t.

The idea of topological quantum computation is to use those rules about what can change into each other to encode information. You can imagine a ball encoding zero, and a donut encoding one. A coffee cup would then also encode one, because it can change smoothly into a donut, while a box would encode zero because you can squash the corners to make it a ball. This helps, because it means that you don’t screw up your information by making smooth changes. If you accidentally drop your box that encodes zero and squish a corner, it will still encode zero.
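As a toy picture of that encoding (my own illustration, nothing like how real topological qubits are actually built): store the bit in a number that smooth deformations can’t change, like the number of holes.

```python
# Store a bit in a number that smooth deformations can't change: here,
# the number of holes. Squashing a box into a ball leaves the bit alone.

HOLES = {"ball": 0, "box": 0, "donut": 1, "coffee cup": 1}

# Deformations that only squash and stretch, never cut or glue.
SMOOTH_DEFORMATIONS = {"ball": "box", "box": "ball",
                       "donut": "coffee cup", "coffee cup": "donut"}

def encoded_bit(shape: str) -> int:
    return 0 if HOLES[shape] == 0 else 1

shape = "box"                       # encodes 0
print(encoded_bit(shape))           # 0
shape = SMOOTH_DEFORMATIONS[shape]  # accidentally squashed into a ball...
print(encoded_bit(shape))           # ...still 0
```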

This matters in quantum computing because it is very easy to screw up quantum information. Quantum computers are very delicate, and making them work reliably has been immensely challenging, requiring people to build much bigger quantum computers so they can do each calculation with many redundant backups. The hope is that topological superconductors would make this easier, by encoding information in a way that is hard to accidentally change.

Q: Cool. So does that mean Microsoft has the best quantum computer now?

A: The machine Microsoft just announced has only a single qubit, the quantum equivalent of just a single bit of computer memory. At this point, it can’t do any calculations. It can just be read, giving one or zero. The hope is that the power of the new method will let Microsoft catch up with companies that have computers with hundreds of qubits, and help them arrive faster at the millions of qubits that will be needed to do anything useful.

Q: Ah, ok. But it sounds like they accomplished some crazy Majorana stuff at least, right?

A: Umm…

Read the Shtetl-Optimized FAQ if you want more details. The short answer is that this is still controversial. So far, the evidence they’ve made public isn’t enough to show that they found these Majorana quasiparticles, or that they made a topological superconductor. They say they have more recent evidence that they haven’t published yet. We’ll see.

Bonus Material for “How Hans Bethe Stumbled Upon Perfect Quantum Theories”

I had an article last week in Quanta Magazine. It’s a piece about something called the Bethe ansatz, a method in mathematical physics that was discovered by Hans Bethe in the 1930’s, but which only really started being understood and appreciated around the 1960’s. Since then it’s become a key tool, used in theoretical investigations in areas from condensed matter to quantum gravity. In this post, I thought I’d say a bit about the story behind the piece and give some bonus material that didn’t fit.

When I first decided to do the piece I reached out to Jules Lamers. We were briefly office-mates when I worked in France, where he was giving a short course on the Bethe ansatz and the methods that sprung from it. It turned out he had also been thinking about writing a piece on the subject, and we considered co-writing for a bit, but that didn’t work for Quanta. He helped me a huge amount with understanding the history of the subject and tracking down the right sources. If you’re a physicist who wants to learn about these things, I recommend his lecture notes. And if you’re a non-physicist who wants to know more, I hope he gets a chance to write a longer popular-audience piece on the topic!

If you clicked through to Jules’s lecture notes, you’d see that “Bethe ansatz” doesn’t appear in the title. Instead, you’d see the phrase “quantum integrability”. In classical physics, an “integrable” system is one where you can calculate what will happen by doing an integral, essentially letting you “solve” any problem completely. Systems you can describe with the Bethe ansatz are solvable in a more complicated quantum sense, so they get called “quantum integrable”. There’s a whole research field that studies these quantum integrable systems.

My piece ended up rushing through the history of the field. After talking about Bethe’s original discovery, I jumped ahead to ice. The Bethe ansatz was first used to think about ice in the 1960’s, but the developments I mentioned leading up to it, where experimenters noticed extra variability and theorists explained it with the positions of hydrogen atoms, happened earlier, in the 1930’s. (Thanks to the commenter who pointed out that this was confusing!) Baxter gets a starring role in this section and had an important role in tying things together, but other people (Lieb and Sutherland) were involved earlier, showing that the Bethe ansatz indeed could be used with thin sheets of ice. This era had a bunch of other big names that I didn’t have space to talk about: C. N. Yang makes an appearance, and while Faddeev comes up later, I didn’t mention that he had a starring role in the 1970’s in understanding the connection to classical integrability and proposing a mathematical structure to understand what links all these different integrable theories together.

I vaguely gestured at black holes and quantum gravity, but didn’t have space for more than that. The connection there is to a topic you might have heard of before if you’ve read about string theory, called AdS/CFT, a connection between two kinds of world that are secretly the same: a toy model of gravity called Anti-de Sitter space (AdS) and a theory without gravity that looks the same at any scale (called a Conformal Field Theory, or CFT). It turns out that in the most prominent example of this, the theory without gravity is integrable! In fact, it’s a theory I spent a lot of time working with back in my research days, called N=4 super Yang-Mills. This theory is kind of like QCD, and in some sense it has integrability for similar reasons to those that Feynman hoped for and Korchemsky and Faddeev found. But it actually goes much farther, outside of the high-energy approximation where Korchemsky and Faddeev’s result works, and in principle seems to include everything you might want to know about the theory. Nowadays, people are using it to investigate the toy model of quantum gravity, hoping to get insights about quantum gravity in general.

One thing I didn’t get a chance to mention at all is the connection to quantum computing. People are trying to build a quantum computer with carefully-cooled atoms. It’s important to test whether the quantum computer functions well enough, or if the quantum states aren’t as perfect as they need to be. One way people have been testing this is with the Bethe ansatz: because it lets you calculate the behavior of special systems perfectly, you can set up your quantum computer to simulate one of those systems, and then check how close your results are to the prediction. You know that the theoretical result is complete, so any failure has to be due to an imperfection in your experiment.

I gave a quick teaser to a very active field, one that has fascinated a lot of prominent physicists and been applied in a wide variety of areas. I hope I’ve inspired you to learn more!