Monthly Archives: July 2024

At Quanta This Week, With a Piece on Vacuum Decay

I have a short piece at Quanta Magazine this week about vacuum decay, a physics-y end of the world as we know it.

For science-minded folks who want to learn a bit more: I have a sentence in the article mentioning other uncertainties. In case you’re curious what those uncertainties are:

Gamma (\gamma) is the decay rate; its inverse gives the time it takes for a cubic gigaparsec of space to experience vacuum decay. The three uncertainties are experimental: the uncertainties in our current knowledge of the Higgs mass, the top quark mass, and the strength of the strong force.

Occasionally, you see futurology types mention “uncertainties in the exponent” to argue that some prediction (say, how long it will take until we have human-level AI) is so uncertain that estimates barely even make sense: it might be 10 years, or 1000 years. I find it fun that for vacuum decay, because the result is quoted as a \log_{10} of the decay rate, there actually is uncertainty in the exponent! Vacuum decay might happen in as few as 10^{411} years or as many as 10^{1333} years, and that’s the result of an actual, reasonable calculation!
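To spell out what that means: the quantity the calculation produces is a logarithm of the decay rate, so the error bars are attached to that logarithm. Schematically (the central value x and error bars a, b below are placeholders, since the post only quotes the resulting endpoints):

\log_{10}\left(\tau/\text{years}\right) = x^{+a}_{-b} \quad\Rightarrow\quad 10^{x-b}\ \text{years} \leq \tau \leq 10^{x+a}\ \text{years}

where \tau is the expected time until vacuum decay. Plug in the numbers above and the error bars in the exponent span the range from 10^{411} to 10^{1333} years.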

For physicist readers, I should mention that I got a lot out of reading some slides from a 2016 talk by Matthew Schwartz. Not many details of the calculation made it into the piece, but the slides were helpful in dispelling a few misconceptions that could have crept in. There’s an instinct to think about the situation in terms of energy: how hard is it for quantum uncertainty to get you over the energy barrier to the next vacuum? There are methods that sort of look like that, if you squint, but that’s not really how you do the calculation, and there end up being a lot of interesting subtleties in the actual story. There were also a few numbers that it was tempting to put on the plots in the article, but that turn out to be gauge-dependent!

Another thing I learned from those slides was how far you can actually take the uncertainties mentioned above. The higher-energy Higgs vacuum is pretty dang high-energy, to the point where quantum gravity effects could potentially matter. And at that point, all bets are off. The calculation, with all those nice uncertainties, is a calculation within the framework of the Standard Model. All of the things we don’t yet know about high-energy physics, especially quantum gravity, could freely mess with this. The universe as we know it could still be long-lived, but it could be a lot shorter-lived as well. That in turn makes this calculation more of a practice ground for honing techniques than an estimate you can actually rely on.

Rube Goldberg Reality

Quantum mechanics is famously unintuitive, but the most intuitive way to think about it is probably the path integral. In the path integral formulation, to find the chance a particle goes from point A to point B, you look at every path you can draw from one place to another. For each path you calculate a complex number, a “weight” for that path. Most of these weights cancel out, leaving the path the particle would travel under classical physics with the biggest contribution. They don’t perfectly cancel out, though, so the other paths still matter. In the end, the way the particle behaves depends on all of these possible paths.
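To make that cancellation concrete, here’s a toy numerical sketch. It’s my own illustration, not anything from a real calculation: the discretization, the sampling scheme, and names like mean_weight are all just for demonstration. It pins a free particle’s endpoints, samples paths that wiggle around the classical straight line, and averages the complex weights e^{iS} (in units where \hbar = m = 1):

import numpy as np

# Toy 1D path-integral demo, a sketch of the idea rather than real physics.
# Paths run from x = 0 at t = 0 to x = 1 at t = 1. Each path gets the
# complex weight exp(i * S), with S the discretized free-particle action.
# Paths close to the classical straight line have nearly equal phases and
# add coherently; wilder paths have scattered phases and mostly cancel.

rng = np.random.default_rng(0)
N = 20                      # number of time steps
dt = 1.0 / N
t = np.linspace(0.0, 1.0, N + 1)
classical = t.copy()        # the classical path: a straight line

def action(path):
    """Discretized free-particle action: sum of (1/2) v^2 dt over each step."""
    v = np.diff(path) / dt
    return 0.5 * np.sum(v ** 2) * dt

def mean_weight(spread, samples=20_000):
    """Average exp(i*S) over paths wiggling around the classical one."""
    total = 0.0 + 0.0j
    for _ in range(samples):
        wiggle = spread * rng.standard_normal(N + 1)
        wiggle[0] = wiggle[-1] = 0.0     # endpoints A and B stay pinned
        total += np.exp(1j * action(classical + wiggle))
    return total / samples

for spread in (0.01, 0.1, 0.5):
    print(f"spread {spread:4.2f}: |mean weight| = {abs(mean_weight(spread)):.3f}")

Run it, and the magnitude of the average weight sits near 1 for barely-wiggling paths and collapses toward 0 as the wiggles grow: small wiggles share nearly the same phase, while wild ones point every which way around the complex circle and cancel.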

If you’ve heard this story, it might make you feel like you have some intuition for how quantum physics works. With each path getting less likely as it strays from the classical, you might have a picture of a nice orderly set of options, with physicists able to pick out the chance of any given thing happening based on the path.

In a world with just one particle swimming along, this might not be too hard. But our world doesn’t run on the quantum mechanics of individual particles. It runs on quantum field theory. And there, things stop being so intuitive.

First, the paths aren’t “paths”. For particles, you can imagine something in one place, traveling along. But particles are just ripples in quantum fields, which can grow, shrink, or change. For quantum fields instead of quantum particles, the path integral isn’t a sum over paths of a single particle, but a sum over paths traveled by fields. The fields start out in some configuration (which may look like a particle at point A) and then end up in a different configuration (which may look like a particle at point B). You have to add up weights, not for every path a single particle could travel, but for every different way the fields could have behaved in between configuration A and configuration B.
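In symbols, using standard schematic notation (nothing here is specific to any one theory): the particle path integral sums a weight over trajectories x(t) from A to B, while the field path integral sums the same kind of weight over entire field histories \phi(x,t) between the two configurations:

\mathcal{A}_{\text{particle}} = \int \mathcal{D}x(t)\, e^{iS[x]/\hbar} \qquad \mathcal{A}_{\text{field}} = \int \mathcal{D}\phi(x,t)\, e^{iS[\phi]/\hbar}

Here S is the action and the measure \mathcal{D} is shorthand for “add up every possibility”; the only thing that changes between the two is what counts as a possibility.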

More importantly, though, there is more than one field! Maybe you’ve heard about electric and magnetic fields shifting back and forth in a wave of light, one generating the other. Other fields interact like this, including the fields behind things you might think of as particles, like electrons. For any two fields that can affect each other, a disturbance in one can lead to a disturbance in the other. An electromagnetic field can disturb the electron field, which can disturb the Higgs field, and so on.

The path integral formulation tells you that all of these paths matter. Not just the path of one particle or one field chugging along by itself, but the path where the electromagnetic field kicks off a Higgs field disturbance down the line, only to become a disturbance in the electromagnetic field again. Reality is all of these paths at once, a Rube Goldberg machine of a universe.

In such a universe, intuition is a fool’s errand. Mathematics fares a bit better, but is still difficult. While physicists sometimes have shortcuts, most of the time these calculations have to be done piece by piece, breaking the paths down into simpler stories that approximate the true answer.
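For the curious, those “simpler stories” are, in textbook treatments, a perturbative expansion. Schematically again, for a generic field \phi with a small coupling g rather than any specific theory: split the action into a free piece and an interaction, and expand the weight in powers of the coupling,

\int \mathcal{D}\phi\, e^{i\left(S_{\text{free}}[\phi] + g S_{\text{int}}[\phi]\right)/\hbar} = \sum_{n=0}^{\infty} \frac{1}{n!}\left(\frac{ig}{\hbar}\right)^{n} \int \mathcal{D}\phi\, e^{iS_{\text{free}}[\phi]/\hbar} \left(S_{\text{int}}[\phi]\right)^{n}

with the n-th term describing a story in which the fields interact n times: a Feynman diagram.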

In the path integral formulation of quantum physics, everything happens at once. And “everything” may be quite a bit larger than you expect.

Musing on Application Fees

A loose rule of thumb: PhD candidates in the US are treated like students. In Europe, they’re treated like employees.

This does exaggerate things a bit. In both Europe and the US, PhD candidates get paid a salary (at least in STEM). In both places, PhD candidates count as university employees, if sometimes officially part-time ones, with at least some of the benefits that entails.

On the other hand, PhD candidates in both places take classes (albeit more classes in the US). Universities charge both groups tuition, which is almost always paid by their supervisor’s grants or their department, not by the candidates themselves. And both aim for a degree, capped off with a thesis defense.

But there is a difference. And it’s at its most obvious in how applications work.

In Europe, PhD applications are like job applications. You apply to a particular advisor who is advertising a particular kind of project. You submit things like a CV, a cover letter, and a publication list, as well as copies of your previous degrees.

In the US, PhD applications are like applications to a school. You apply to the school, perhaps mentioning an advisor or topic you are interested in. You submit things like essays, test scores, and transcripts. And typically, you have to pay an application fee.

I don’t think I quite appreciated, back when I applied for PhD programs, just how much those fees add up. With each school charging a fee in the $100 range, and students commonly advised to apply to ten or so schools, that’s easily $1,000 in fees alone, enough to make applying to PhD programs in the US unaffordable for many. Schools do offer fee waivers under certain conditions, but the standards vary from school to school, and most don’t seem to apply to non-Americans. So if you’re considering a US PhD from abroad, be aware that just applying can be an expensive thing to do.

Why the fee? I don’t really know. The existence of application fees, by itself, isn’t a US thing. If you want to get a Master’s degree from the University of Copenhagen and you’re coming from outside Europe, you have to pay an application fee of roughly the same size that US schools charge.

Based on that, I’d guess part of the difference is funding. It costs something for a university to process an application, and governments might be willing to cover it for locals (in the case of the Master’s in Copenhagen) or more specifically for locals in need (in the US PhD case). I don’t know whether it makes sense for that cost to be around $100, though.

It’s also an incentive, presumably. Schools don’t want too many applicants, so they attach a fee so only the most dedicated people apply.

Jobs don’t typically have an application fee, and I think it would piss a lot of people off if they did. Some jobs get a lot of applicants, enough that bigger and more well-known companies in some places use AI to filter applications. I have to wonder if US PhD schools are better off in this respect. Does charging a fee mean they have a reasonable number of applications to deal with? Or do they still have to filter through a huge pile, with nothing besides raw numbers to pare things down? (At least, because of the “school model” with test scores, they have some raw numbers to use.)

Overall, coming at this with a “theoretical physicist mentality”, I have to wonder if any of this is necessary. Surely there’s a way to make it easy for students to apply, and just filter them down to the few you want to accept? But the world is of course rarely that simple.

Clickbait or Koan

Last month, I had a post about a type of theory that is, in a certain sense, “immune to gravity”. These theories don’t allow you to build antigravity machines, and they aren’t totally independent of the overall structure of space-time. But they do ignore the core thing most people think of as gravity, the curvature of space that sends planets around the Sun and apples to the ground. And while that trait isn’t something we can use for new technology, it has led to extremely productive conversations between mathematicians and physicists.

After posting, I had some interesting discussions on Twitter. A few people felt that I was over-hyping things. Given all the technical caveats, does it really make sense to say that these theories defy gravity? Isn’t a title like “Gravity-Defying Theories” just clickbait?

Obviously, I don’t think so.

There’s a concept in education called inductive teaching. We remember facts better when they come in context, especially the context of us trying to solve a puzzle. If you try to figure something out, and then find an answer, you’re going to remember that answer better than if you were just told the answer from the beginning. There are some similarities here to the concept of a Zen koan: by asking questions like “what is the sound of one hand clapping?” a Zen master is supposed to get you to think about the world in a different way.

When I post with a counterintuitive title, I’m aiming for that kind of effect. I know that you’ll read the title and think “that can’t be right!” Then you’ll read the post, and hear the explanation. That explanation will stick with you better because you asked that question: “how can that be right?” set up a puzzle that, for that span of words, you actually cared about solving.

Clickbait is bad for two reasons. First, it sucks you into reading things that aren’t actually interesting. I write my blog posts because I think they’re interesting, so I hope I avoid that. Second, it can spread misunderstandings. I try to be careful about these, and I have some tips for how you can be too:

  1. Correct the misunderstanding early. If I’m worried a post might be misunderstood in a clickbaity way, I make sure that every time I post the link I include a sentence discouraging the misunderstanding. For example, for the post on Gravity-Defying Theories, before the link I wrote “No flying cars, but it is technically possible for something to be immune to gravity”. If I’m especially worried, I’ll also make sure that the first paragraph of the piece corrects the misunderstanding as well.
  2. Know your audience. This means both knowing the people who normally read your work, and how far something might go if it catches on. Your typical readers might be savvy enough to skip the misunderstanding, but if they latch on to the naive explanation immediately then the “koan” effect won’t happen. The wider your reach can be, the more careful you need to be about what you say. If you’re writing for a well-regarded science news outlet, don’t write a title saying that scientists have built a wormhole.
  3. Have enough of a conclusion to be “worth it”. This is obviously a bit subjective. If your post introduces a mystery and the answer is that you just made some poetic word choice, your audience is going to feel betrayed, like the puzzle they were considering didn’t have a puzzly answer after all. Whatever you’re teaching in your post, it needs to have enough “meat” that solving the puzzle feels like a real discovery, like the reader did some real work to get there.

I don’t think I always live up to these, but I do try. And I think trying is better than the conservative option, of never having catchy titles that make counterintuitive claims. One of the most fun aspects of science is that sometimes a counterintuitive fact is actually true, and that’s an experience I want to share.