
Why the Coupling Constants Aren’t Constant: Epistemology and Pragmatism

If you’ve heard a bit about physics, you might have heard that each of the fundamental forces (electromagnetism, the weak nuclear force, the strong nuclear force, and gravity) has a coupling constant, a number, handed down from nature itself, that determines how strong of a force it is. Maybe you’ve seen them in a table, like this:

[A table of the fundamental forces and their coupling constants, from HyperPhysics]

If you’ve heard a bit more about physics, though, you’ll have heard that those coupling constants aren’t actually constant! Instead, they vary with energy. Maybe you’ve seen them plotted like this:

[A plot of the coupling constants running with energy]

The usual way physicists explain this is in terms of quantum effects. We talk about “virtual particles”, and explain that any time particles and forces interact, these virtual particles can pop up, adding corrections that change with the energy of the interacting particles. The coupling constant includes all of these corrections, so it can’t be constant, it has to vary with energy.

[A diagram of an interaction vertex with virtual-particle corrections]

Maybe you’re happy with this explanation. But maybe you object:

“Isn’t there still a constant, though? If you ignore all the virtual particles, and drop all the corrections, isn’t there some constant number you’re correcting? Some sort of ‘bare coupling constant’ you could put into a nice table for me?”

There are two reasons I can’t do that. One is an epistemological reason, that comes from what we can and cannot know. The other is practical: even if I knew the bare coupling, most of the time I wouldn’t want to use it.

Let’s start with the epistemology:

The first thing to understand is that we can’t measure the bare coupling directly. When we measure the strength of forces, we’re always measuring the result of quantum corrections. We can’t “turn off” the virtual particles.

You could imagine measuring it indirectly, though. You’d measure the end result of all the corrections, then go back and calculate. That calculation would tell you how big the corrections were supposed to be, and you could subtract them off, solve the equation, and find the bare coupling.

And this would be a totally reasonable thing to do, except that when you go and try to calculate the quantum corrections, instead of something sensible, you get infinity.

We think that “infinity” is due to our ignorance: we know some of the quantum corrections, but not all of them, because we don’t have a final theory of nature. In order to calculate anything we need to hedge around that ignorance, with a trick called renormalization. I talk about that more in an older post. The key message to take away there is that in order to calculate anything we need to give up the hope of measuring certain bare constants, even “indirectly”. Once we fix a few constants that way, the rest of the theory gives reliable predictions.

So we can’t measure bare constants, and we can’t reason our way to them. We have to find the full coupling, with all the quantum corrections, and use that as our coupling constant.

Still, you might wonder, why does the coupling constant have to vary? Can’t I just pick one measurement, at one energy, and call that the constant?

This is where pragmatism comes in. You could fix your constant at some arbitrary energy, sure. But you’ll regret it.

In particle physics, we usually calculate in something called perturbation theory. Instead of calculating something exactly, we have to use approximations. We add up the approximations, order by order, expecting that each time the corrections will get smaller and smaller, so we get closer and closer to the truth.
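Schematically (and sweeping all the details under the rug), a perturbative prediction is a series in the coupling g, something like

A \approx A_0\left(1 + c_1\, g^2 + c_2\, g^4 + c_3\, g^6 + \dots\right)

where the coefficients come from Feynman diagrams with more and more loops. The exact powers and coefficients depend on the theory and on what you’re computing; this is just meant to show the shape of the thing.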

And this works reasonably well if your coupling constant is small enough, provided it’s at the right energy.

If your coupling constant is at the wrong energy, then your quantum corrections will notice the difference. They won’t just be small numbers anymore. Instead, they end up containing logarithms of the ratio of energies. The more difference between your arbitrary energy scale and the correct one, the bigger these logarithms get.
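In QED, for example, the one-loop correction to the measured coupling comes with exactly this kind of logarithm. Keeping only the leading behavior, with just an electron running around the loop, it looks something like

\alpha_{\rm eff}(Q) \approx \alpha(\mu)\left(1 + \frac{\alpha(\mu)}{3\pi}\ln\frac{Q^2}{\mu^2} + \dots\right)

so the farther the scale Q you care about is from the scale \mu where you fixed your constant, the less that logarithm behaves like a small correction.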

This doesn’t make your calculation wrong, exactly. It makes your error estimate wrong. It means that your assumption that the next order is “small enough” isn’t actually true. You’d need to go to higher and higher orders to get a “good enough” answer, if you can get there at all.

Because of that, you don’t want to think about the coupling constants as actually constant. If we knew the final theory then maybe we’d know the true numbers, the ultimate bare coupling constants. But we still would want to use coupling constants that vary with energy for practical calculations. We’d still prefer the plot, and not just the table.

The Physics Isn’t New, We Are

Last week, I mentioned the announcement from the IceCube, Fermi-LAT, and MAGIC collaborations of high-energy neutrinos and gamma rays detected from the same source, the blazar TXS 0506+056. Blazars are sources of gamma rays, thought to be enormous spinning black holes that act like particle colliders vastly more powerful than the LHC. This one, near Orion’s elbow, is “aimed” roughly at Earth, allowing us to detect the light and particles it emits. On September 22, a neutrino with energy around 300 TeV was detected by IceCube (a kilometer-wide block of Antarctic ice stuffed with detectors), coming from the direction of TXS 0506+056. Soon after, the satellite Fermi-LAT and ground-based telescope MAGIC were able to confirm that the blazar TXS 0506+056 was flaring at the time. The IceCube team then looked back, and found more neutrinos coming from the same source in earlier years. There are still lingering questions (Why didn’t they see this kind of behavior from other, closer blazars?) but it’s still a nice development in the emerging field of “multi-messenger” astronomy.

It also got me thinking about a conversation I had a while back, before one of Perimeter’s Public Lectures. An elderly fellow was worried about the LHC. He wondered if putting all of that energy in the same place, again and again, might do something unprecedented: weaken the fabric of space and time, perhaps, until it breaks? He acknowledged this didn’t make physical sense, but what if we’re wrong about the physics? Do we really want to take that risk?

At the time, I made the same point that gets made to counter fears of the LHC creating a black hole: that the energy of the LHC is less than the energy of cosmic rays, particles from space that collide with our atmosphere on a regular basis. If there was any danger, it would have already happened. Now, knowing about blazars, I can make a similar point: there are “galactic colliders” with energies so much higher than any machine we can build that there’s no chance we could screw things up on that kind of scale: if we could, they already would have.
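For a rough sense of scale (my own back-of-the-envelope numbers, not anything from the collaborations): the highest-energy cosmic rays ever observed carry around 10^{20} eV, and when one of those hits a proton in our atmosphere the energy available to the collision is roughly

E_{\rm cm} \approx \sqrt{2\, E_{\rm cosmic}\, m_p c^2} \approx \sqrt{2 \times 10^{20}\ {\rm eV} \times 10^{9}\ {\rm eV}} \approx 4\times 10^{14}\ {\rm eV} \approx 400\ {\rm TeV}

compared to the LHC’s 13 TeV, and a blazar’s particles should reach far beyond even that.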

This connects to a broader point, about how to frame particle physics. Each time we build an experiment, we’re replicating something that’s happened before. Our technology simply isn’t powerful enough to do something truly unprecedented in the universe: we’re not even close! Instead, the point of an experiment is to reproduce something where we can see it. It’s not the physics itself, but our involvement in it, our understanding of it, that’s genuinely new.

The IceCube experiment itself is a great example of this: throughout Antarctica, neutrinos collide with ice. The only difference is that in IceCube’s ice, we can see them do it. More broadly, I have to wonder how much this is behind the “unreasonable effectiveness of mathematics”: if mathematics is just the most precise way humans have to communicate with each other, then of course it will be effective in physics, since the goal of physics is to communicate the nature of the world to humans!

There may well come a day when we’re really able to do something truly unprecedented, that has never been done before in the history of the universe. Until then, we’re playing catch-up, taking laws the universe has tested extensively and making them legible, getting humanity that much closer to understanding physics that, somewhere out there, already exists.

Why a New Particle Matters

A while back, when the MiniBooNE experiment announced evidence for a sterile neutrino, I was excited. It’s still not clear whether they really found something; here’s an article laying out the current status. If they did, it would be a new particle beyond those predicted by the Standard Model, something like the neutrinos but one that doesn’t interact with any of the fundamental forces except gravity.

At the time, someone asked me why this was so exciting. Does it solve the mystery of dark matter, or any other long-standing problems?

The sterile neutrino MiniBooNE is suggesting isn’t, as far as I’m aware, a plausible candidate for dark matter. It doesn’t solve any long-standing problems (for example, it doesn’t explain why the other neutrinos are so much lighter than other particles). It would even introduce new problems of its own!

It still matters, though. One reason, which I’ve talked about before, is that each new type of particle implies a new law of nature, a basic truth about the universe that we didn’t know before. But there’s another reason why a new particle matters.

There’s a malaise in particle physics. For most of the twentieth century, theory and experiment were tightly linked. Unexpected experimental results would demand new theory, which would in turn suggest new experiments, driving knowledge forward. That mostly stopped with the Standard Model. There are a few lingering anomalies, like the phenomena we attribute to dark matter, that show the Standard Model can’t be the full story. But as long as every other experiment fits the Standard Model, we have no useful hints about where to go next. We’re just speculating, and too much of that warps the field.

Critics of the physics mainstream pick up on this, but I’m not optimistic about what I’ve seen of their solutions. Peter Woit has suggested that physics should emulate the culture of mathematics, caring more about rigor and being more careful to confirm things before speaking. The title of Sabine Hossenfelder’s “Lost in Math” might suggest the opposite, but I get the impression she’s arguing for something similar: that particle physicists have been using sloppy arguments and should clean up their act, taking foundational problems seriously and talking to philosophers to help clarify their ideas.

Rigor and clarity are worthwhile, but the problems they’ll solve aren’t the ones causing the malaise. If there are problems we can expect to solve just by thinking better, they’re problems that we found by thinking in the first place: quantum gravity theories that stop making sense at very high energies, paradoxical thought experiments with black holes. There, rigor and clarity can matter: to some extent they’re already there, but I can appreciate the argument that it’s not yet nearly enough.

What rigor and clarity won’t do is make physics feel (and function) like it did in the twentieth century. For that, we need new evidence: experiments that disobey the Standard Model, and do it in a clear enough way that we can’t just chalk it up to predictable errors. We need a new particle, or something like it. Without that, our theories are most likely underdetermined by the data, and anything we propose is going to be subjective. Our subjective judgements may get better, we may get rid of the worst-justified biases, but at the end of the day we still won’t have enough information to actually make durable progress.

That’s not a popular message, in part, because it’s not something we can control. There’s a degree of helplessness in realizing that if nature doesn’t throw us a bone then we’ll probably just keep going in circles forever. It’s not the kind of thing that lends itself to a pithy blog post.

If there’s something we can do, it’s to keep our eyes as open as possible, to make sure we don’t miss nature’s next hint. It’s why people are getting excited about low-energy experiments, about precision calculations, about LIGO. Even this seemingly clickbaity proposal that dark matter killed the dinosaurs is motivated by the same sort of logic: if the only evidence for dark matter we have is gravitational, what can gravitational evidence tell us about what it’s made of? In each case, we’re trying to widen our net, to see new phenomena we might have missed.

I suspect that’s why this reviewer was disappointed that Hossenfelder’s book lacked a vision for the future. It’s not that the book lacked any proposals whatsoever. But it lacked this kind of proposal, of a new place to look, where new evidence, and maybe a new particle, might be found. Without that we can still improve things, we can still make progress on deep fundamental mathematical questions, we can kill off the stupidest of the stupid arguments. But the malaise won’t lift, we won’t get back to the health of twentieth century physics. For that, we need to see something new.

Amplitudes 2018

This week, I’m at Amplitudes, my field’s big yearly conference. The conference is at SLAC National Accelerator Laboratory this year, a familiar and lovely place.


Welcome to the Guest House California

It’s been a packed conference, with a lot of interesting talks. Recordings and slides of most of them should be up at this point, for those following at home. I’ll comment on a few that caught my attention; I might do a more in-depth post later.

The first morning was dedicated to gravitational waves. At the QCD Meets Gravity conference last December I noted that amplitudes folks were very eager to do something relevant to LIGO, but that it was still a bit unclear how we could contribute (aside from Pierpaolo Mastrolia, who had already figured it out). The following six months appear to have cleared things up considerably, and Clifford Cheung and Donal O’Connell’s talks laid out quite concrete directions for this kind of research.

I’d seen Erik Panzer talk about the Hepp bound two weeks ago at Les Houches, but that was for a much more mathematically-inclined audience. It’s been interesting seeing people here start to see the implications: a simple method to classify and estimate (within 1%!) Feynman integrals could be a real game-changer.

Brenda Penante’s talk made me rethink a slogan I like to quote, that N=4 super Yang-Mills is the “most transcendental” part of QCD. While this is true in some cases, in many ways it’s actually least true for amplitudes, with quite a few counterexamples. For other quantities (like the form factors that were the subject of her talk) it’s true more often, and it’s still unclear when we should expect it to hold, or why.

Nima Arkani-Hamed has a reputation for talks that end up much longer than scheduled. Lately, it seems to be due to the sheer number of projects he’s working on. He had to rush at the end of his talk, which would have been about cosmological polytopes. I’ll have to ask his collaborator Paolo Benincasa for an update when I get back to Copenhagen.

Tuesday afternoon was a series of talks on the “NNLO frontier”, two-loop calculations that form the state of the art for realistic collider physics predictions. These talks brought home to me that the LHC really does need two-loop precision, and that the methods to get it are still pretty cumbersome. For those of us off in the airy land of six-loop N=4 super Yang-Mills, this is the challenge: can we make what these people do simpler?

Wednesday cleared up a few things for me, from what kinds of things you can write down in “fishnet theory” to how broad Ashoke Sen’s soft theorem is, to how fast John Joseph Carrasco could show his villanelle slide. It also gave me a clearer idea of just what simplifications are available for pushing to higher loops in supergravity.

Wednesday was also the poster session. It keeps being amazing how fast the field is growing, the sheer number of new faces was quite inspiring. One of those new faces pointed me to a paper I had missed, suggesting that elliptic integrals could end up trickier than most of us had thought.

Thursday featured two talks by people who work on the Conformal Bootstrap, one of our subfield’s closest relatives. (We’re both “bootstrappers” in some sense.) The talks were interesting, but there wasn’t a lot of engagement from the audience, so if the intent was to make a bridge between the subfields I’m not sure it panned out. Overall, I think we’re mostly just united by how we feel about Simon Caron-Huot, who David Simmons-Duffin described as “awesome and mysterious”. We also had an update on attempts to extend the Pentagon OPE to ABJM, a three-dimensional analogue of N=4 super Yang-Mills.

I’m looking forward to Friday’s talks, promising elliptic functions among other interesting problems.

A Paper About Ranking Papers

If you’ve ever heard someone list problems in academia, citation-counting is usually near the top. Hiring and tenure committees want easy numbers to judge applicants with: number of papers, number of citations, or related statistics like the h-index. Unfortunately, these metrics can be gamed, leading to a host of bad practices that get blamed for pretty much everything that goes wrong in science. In physics, it’s not even clear that these statistics tell us anything: papers in our field have been including more citations over time, and for thousand-person experimental collaborations the number of citations and papers don’t really reflect any one person’s contribution.

It’s pretty easy to find people complaining about this. It’s much rarer to find a proposed solution.

That’s why I quite enjoyed Alessandro Strumia and Riccardo Torre’s paper last week, on Biblioranking fundamental physics.

Some of their suggestions are quite straightforward. With papers citing more references than they used to, it makes sense to divide each citation by the number of references in the citing paper: it means more to get cited by a paper with ten references than by one with a hundred. Similarly, you could divide credit for a paper among its authors, rather than giving each author full credit.

Some are more elaborate. They suggest using a variant of Google’s PageRank algorithm to rank papers and authors. Essentially, the algorithm imagines someone wandering from paper to paper and tries to figure out which papers are more central to the network. This is apparently an old idea, but by combining it with their normalization by the number of references they eke out a bit more mileage from it. (I also found their treatment a bit clearer than the older papers they cite. There are a few more elaborate setups in the literature as well, but they seem to have a lot of free parameters so Strumia and Torre’s setup looks preferable on that front.)

One final problem they consider is that of self-citations, and citation cliques. In principle, you could boost your citation count by citing yourself. While that’s easy to correct for, you could also be one of a small number of authors who cite each other a lot. To keep the system from being gamed in this way, they propose a notion of a “CitationCoin” that counts (normalized) citations received minus (normalized) citations given. The idea is that, just as you can’t make your group of friends richer merely by passing money around among yourselves, a small community can’t earn “CitationCoins” without getting the wider field interested.
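To make those two ingredients concrete, here’s a minimal sketch in Python. It’s my own toy version, not Strumia and Torre’s actual procedure, and the four-paper “citation network” is completely made up: each citation gets weighted by one over the number of references in the citing paper, a PageRank-style random reader wanders the weighted network, and the “CitationCoin”-like score is just weighted citations received minus weighted citations given.

import numpy as np

# Toy citation network: refs[i] = papers that paper i cites (all made up).
refs = {0: [1, 2], 1: [2], 2: [], 3: [0, 1, 2]}
n = len(refs)

# Weighted citation matrix: a citation from paper i to paper j counts
# 1 / (number of references in paper i), so reference-heavy papers
# hand out less credit per citation.
W = np.zeros((n, n))
for i, cited in refs.items():
    for j in cited:
        W[j, i] = 1.0 / len(cited)

# PageRank-style ranking: a "reader" repeatedly follows references at
# random, occasionally jumping to a random paper (damping factor d).
d = 0.85
rank = np.ones(n) / n
for _ in range(200):
    rank = (1 - d) / n + d * W @ rank
    rank /= rank.sum()

# CitationCoin-like score: weighted citations received minus given.
received = W.sum(axis=1)
given = W.sum(axis=0)
coin = received - given

print("PageRank-style scores:", np.round(rank, 3))
print("Net citation scores:  ", np.round(coin, 3))

The net scores always sum to zero across the whole network, which is the point: a closed clique that only cites itself can shuffle credit around internally, but it can’t collectively mint any.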

There are still likely problems with these ideas. Dividing each paper by its number of authors seems like overkill: a thousand-person paper is not typically going to get a thousand times as many citations. I also don’t know whether there are ways to game this system: since the metrics are based in part on citations given, not just citations received, I worry there are situations where it would be to someone’s advantage to cite others less. I think they manage to avoid this by normalizing by number of citations given, and they emphasize that PageRank itself is estimating something we directly care about: how often people read a paper. Still, it would be good to see more rigorous work probing the system for weaknesses.

In addition to the proposed metrics, Strumia and Torre’s paper is full of interesting statistics about the arXiv and InSpire databases, both using more traditional metrics and their new ones. Whether or not the methods they propose work out, the paper is definitely worth a look.

Path Integrals and Loop Integrals: Different Things!

When talking science, we need to be careful with our words. It’s easy for people to see a familiar word and assume something totally different from what we intend. And if we use the same word twice, for two different things…

I’ve noticed this problem with the word “integral”. When physicists talk about particle physics, there are two kinds of integrals we mention: path integrals, and loop integrals. I’ve seen plenty of people get confused, and assume that these two are the same thing. They’re not, and it’s worth spending some time explaining the difference.

Let’s start with path integrals (also referred to as functional integrals, or Feynman integrals). Feynman promoted a picture of quantum mechanics in which a particle travels along many different paths, from point A to point B.

[A diagram of three paths from point A to point B]

You’ve probably seen a picture like this. Classically, a particle would just take one path, the shortest path, from A to B. In quantum mechanics, you have to add up all possible paths. Most longer paths cancel, so on average the short, classical path is the most important one, but the others do contribute, and have observable, quantum effects. The sum over all paths is what we call a path integral.
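If you want to see the “most paths cancel” part in action, here’s a little Python cartoon. It isn’t a real path integral calculation, just an illustration, with made-up units where the mass and Planck’s constant are 1: take the straight-line path from A to B, add random wiggles of increasing size, and average the phase exp(iS/ħ) over many wiggled paths. The bigger the wiggles, the more the phases point in random directions and cancel out.

import numpy as np

rng = np.random.default_rng(0)
m, hbar = 1.0, 1.0                 # arbitrary units
steps, T = 50, 1.0
dt = T / steps
x_A, x_B = 0.0, 1.0
classical = np.linspace(x_A, x_B, steps + 1)   # straight-line path

def action(path):
    # Discretized free-particle action: sum of (m/2) * velocity^2 * dt
    v = np.diff(path) / dt
    return np.sum(0.5 * m * v**2 * dt)

for wiggle in [0.0, 0.01, 0.03, 0.1]:
    phases = []
    for _ in range(2000):
        path = classical.copy()
        # wiggle the interior points, keeping the endpoints fixed at A and B
        path[1:-1] += wiggle * rng.standard_normal(steps - 1)
        phases.append(np.exp(1j * action(path) / hbar))
    print(f"wiggle size {wiggle:4.2f}: |average phase| = {abs(np.mean(phases)):.3f}")

A real path integral weights and sums all of these contributions consistently instead of just eyeballing their average phase, but the cancellation it shows is the same reason the classical path dominates.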

It’s easy enough to draw this picture for a single particle. When we do particle physics, though, we aren’t usually interested in just one particle: we want to look at a bunch of different quantum fields, and figure out how they will interact.

We still use a path integral to do that, but it doesn’t look like a bunch of lines from point A to B, and there isn’t a convenient image I can steal from Wikipedia for it. The quantum field theory path integral adds up, not all the paths a particle can travel, but all the ways a set of quantum fields can interact.

How do we actually calculate that?

One way is with Feynman diagrams, and (often, but not always) loop integrals.

[A two-loop Feynman diagram]

I’ve talked about Feynman diagrams before. Each one is a picture of one possible way that particles can travel, or that quantum fields can interact. In some (loose) sense, each one is a single path in the path integral.

Each diagram serves as instructions for a calculation. We take information about the particles, their momenta and energy, and end up with a number. To calculate a path integral exactly, we’d have to add up all the diagrams we could possibly draw, to get a sum over all possible paths.

(There are ways to avoid this in special cases, which I’m not going to go into here.)

Sometimes, getting a number out of a diagram is fairly simple. If the diagram has no closed loops in it (if it’s what we call a tree diagram) then knowing the properties of the in-coming and out-going particles is enough to know the rest. If there are loops, though, there’s uncertainty: you have to add up every possible momentum of the particles in the loops. You do that with a different integral, and that’s the one that we sometimes refer to as a loop integral. (Perhaps confusingly, these are also often called Feynman integrals: Feynman did a lot of stuff!)

\frac{i^{a+l(1-d/2)}\pi^{ld/2}}{\prod_i \Gamma(a_i)}\int_0^\infty...\int_0^\infty \prod_i\alpha_i^{a_i-1}U^{-d/2}e^{iF/U-i\sum m_i^2\alpha_i}d\alpha_1...d\alpha_n

Loop integrals can be pretty complicated, but at heart they’re the same sort of thing you might have seen in a calculus class. Mathematicians are pretty comfortable with them, and they give rise to numbers that mathematicians find very interesting.
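To give a flavor of that, here’s a sketch in Python of about the simplest loop integral I can think of that stays finite: the one-loop “bubble” with two equal-mass propagators, taken in two Euclidean dimensions rather than four. After introducing a Feynman parameter x (a simpler cousin of the general parametric formula above), and up to the factors of 2\pi that different conventions shuffle around, the whole thing collapses to an ordinary one-dimensional integral:

\pi\int_0^1 \frac{dx}{m^2 + x(1-x)\,p^2}

The mass and momentum plugged in below are arbitrary, just to get a number out.

import numpy as np
from scipy.integrate import quad

def bubble_2d(p2, m2):
    # One-loop, equal-mass bubble in two Euclidean dimensions,
    # written as an ordinary integral over a Feynman parameter x.
    integrand = lambda x: 1.0 / (m2 + x * (1.0 - x) * p2)
    value, err = quad(integrand, 0.0, 1.0)
    return np.pi * value, np.pi * err

value, err = bubble_2d(p2=3.0, m2=1.0)
print(f"bubble(p^2 = 3, m^2 = 1) = {value:.6f}  (numerical error ~ {err:.1e})")

Realistic four-dimensional loop integrals need regularization and carry many more parameters, but structurally they’re the same kind of object.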

Path integrals are very different. In some sense, they’re an “integral over integrals”, adding up every loop integral you could write down. Mathematicians can define path integrals in special cases, but it’s still not clear that the general case, the overall path integral picture we use, actually makes rigorous mathematical sense.

So if you see physicists talking about integrals, it’s worth taking a moment to figure out which one we mean. Path integrals and loop integrals are both important, but they’re very, very different things.

Why Your Idea Is Bad

By A. Physicist

 

Your idea is bad…

 

…because it disagrees with precision electroweak measurements

…………………………………..with bounds from ATLAS and CMS

…………………………………..with the power spectrum of the CMB

…………………………………..with Eötvös experiments

…because it isn’t gauge invariant

………………………….Lorentz invariant

………………………….diffeomorphism invariant

………………………….background-independent, whatever that means

…because it violates unitarity

…………………………………locality

…………………………………causality

…………………………………observer-independence

…………………………………technical naturalness

…………………………………international treaties

…………………………………cosmic censorship

…because you screwed up the calculation

…because you didn’t actually do the calculation

…because I don’t understand the calculation

…because you predict too many magnetic monopoles

……………………………………too many proton decays

……………………………………too many primordial black holes

…………………………………..remnants, at all

…because it’s fine-tuned

…because it’s suspiciously finely-tuned

…because it’s finely tuned to be always outside of experimental bounds

…because you’re misunderstanding quantum mechanics

…………………………………………………………..black holes

………………………………………………………….effective field theory

…………………………………………………………..thermodynamics

…………………………………………………………..the scientific method

…because Condensed Matter would contribute more to Chinese GDP

…because the approximation you’re making is unjustified

…………………………………………………………………………is not valid

…………………………………………………………………………is wildly overoptimistic

………………………………………………………………………….is just kind of lazy

…because there isn’t a plausible UV completion

…because you care too much about the UV

…because it only works in polynomial time

…………………………………………..exponential time

…………………………………………..factorial time

…because even if it’s fast it requires more memory than any computer on Earth

…because it requires more bits of memory than atoms in the visible universe

…because it has no meaningful advantages over current methods

…because it has meaningful advantages over my own methods

…because it can’t just be that easy

…because it’s not the kind of idea that usually works

…because it’s not the kind of idea that usually works in my field

…because it isn’t canonical

…because it’s ugly

…because it’s baroque

…because it ain’t baroque, and thus shouldn’t be fixed

…because only a few people work on it

…because far too many people work on it

…because clearly it will only work for the first case

……………………………………………………………….the first two cases

……………………………………………………………….the first seven cases

……………………………………………………………….the cases you’ve published and no more

…because I know you’re wrong

…because I strongly suspect you’re wrong

…because I strongly suspect you’re wrong, but saying I know you’re wrong looks better on a grant application

…….in a blog post

…because I’m just really pessimistic about something like that ever actually working

…because I’d rather work on my own thing, that I’m much more optimistic about

…because if I’m clear about my reasons

……and what I know

…….and what I don’t

……….then I’ll convince you you’re wrong.

 

……….or maybe you’ll convince me?

 

Unreasonably Big Physics

The Large Hadron Collider is big, eight and a half kilometers across. It’s expensive, with a cost to construct and operate in the billions. And with an energy of 6.5 TeV per proton, it’s the most powerful collider in the world, accelerating protons to 0.99999999 of the speed of light.
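That last number follows directly from the energy, if you want to check the arithmetic: a 6.5 TeV proton has a relativistic gamma factor of roughly

\gamma = \frac{E}{m_p c^2} \approx \frac{6.5\ {\rm TeV}}{0.94\ {\rm GeV}} \approx 6900, \qquad \frac{v}{c} \approx \sqrt{1 - \frac{1}{\gamma^2}} \approx 1 - \frac{1}{2\gamma^2} \approx 1 - 10^{-8}

which is where the string of nines comes from.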

The LHC is reasonable. After all, it was funded, and built. What does an unreasonable physics proposal look like?

It’s probably unfair to call the Superconducting Super Collider unreasonable; after all, it did almost get built. It would have been a 28 kilometer-wide circle in the Texas desert, accelerating protons to an energy of 20 TeV, three times the energy of the LHC. When it was cancelled in 1993, it was projected to cost twelve billion dollars, and two billion had already been spent digging the tunnel. The US hasn’t invested in a similarly sized project since.

A better example of an unreasonable proposal might be the Collider-in-the-Sea. (If that link is paywalled, this paper covers most of the same information.)


If you run out of room on land, why not build your collider underwater?

Ok, there are pretty obvious reasons why not. Surprisingly, the people proposing the Collider-in-the-Sea do a decent job of answering them. They plan to put it far enough out that it won’t disrupt shipping, and deep enough down that it won’t interfere with fish. Apparently at those depths even a hurricane barely ripples the water, and they argue that the technology exists to keep a floating ring stable under those conditions. All in all, they’re imagining a collider 600 kilometers in diameter, accelerating protons to 250 TeV, all for a cost they claim would be roughly comparable to the (substantially smaller) new colliders that China and Europe are considering.

I’m sure that there are reasons I’ve overlooked why this sort of project is impossible. (I mean, just look at the map!) Still, it’s impressive that they can marshal this much of an argument.

Besides, there are even more impossible projects, like this one, by Sugawara, Hagura, and Sanami. Their proposal for a 1000 TeV neutrino beam isn’t intended for research: rather, the idea is a beam powerful enough to send neutrinos through the Earth to destroy nuclear bombs. Such a beam could cause the bombs to detonate prematurely, “fizzling” with about 3% of the explosion they would have normally.

In this case, Sugawara and co. admit that their proposal is pure fantasy. With current technology they would need a ring larger than the Collider-in-the-Sea, and the project would cost hundreds of billions of dollars. It’s not even clear who would want to build such a machine, or who could get away with building it: the authors imagine a science fiction-esque world government to foot the bill.

There’s a spectrum of papers that scientists write, from whimsical speculation to serious work. The press doesn’t always make the difference clear, so it’s a useful skill to see the clues in the writing that show where a given proposal lands. In the case of the Sugawara and co. proposal, the paper is littered with caveats, explicitly making it clear that it’s just a rough estimate. Even the first line, dedicating the paper to another professor, should get you to look twice: while this sometimes happens on serious papers, often it means the paper was written as a fun gift for the professor in question. The Collider-in-the-Sea doesn’t have these kinds of warning signs, and it’s clear its authors take it a bit more seriously. Nonetheless, comparing the level of detail to other accelerator proposals, even those from the same people, should suggest that the Collider-in-the-Sea isn’t entirely on the same level. As wacky as it is to imagine, we probably won’t get a collider that takes up most of the Gulf of Mexico, or a massive neutrino beam capable of blowing up nukes around the world.

Tutoring at GGI

I’m still at the Galileo Galilei Institute this week, tutoring at the winter school.

At GGI’s winter school, each week features a pair of lecturers. This week, the lectures alternate between Lance Dixon, covering the basics of amplitudeology, and Csaba Csaki, discussing ways in which the Higgs could be a composite made up of new fundamental particles.

Most of the students at this school are phenomenologists, physicists who make predictions for particle physics. I’m an amplitudeologist: I study the calculation tools behind those predictions. You’d think these would be very close areas, but it’s been interesting seeing how different our approaches really are.

Some of the difference is apparent just from watching the board. In Csaki’s lectures, the equations that show up are short, a few terms long at most. When amplitudes show up, it’s for their general properties: how many factors of the coupling constant, or the multipliers that show up with loops. There aren’t any long technical calculations, and in general they aren’t needed: he’s arguing about the kinds of physics that can show up, not the specifics of how they give rise to precise numbers.

In contrast, Lance’s board filled up with longer calculations, each with many moving parts. Even things that seem simple from our perspective take a decent amount of board space to derive, and involve no small amount of technical symbol-shuffling. For most of the students, working out an amplitude this complicated was an unfamiliar experience. There are a few applications for which you need the kind of power that amplitudeology provides, and a few students were working on them. For the rest, it was a bit like learning about a foreign culture, an exercise in understanding what other people are doing rather than picking up a new skill themselves. Still, they made a strong go at it, and it was enlightening to see the pieces that ended up mattering to them, and to hear the kinds of questions they asked.

Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.


The Hunchback of Notre Science

It wasn’t always this way. Up until the nineteenth century, being a “scientist” was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a 1/r^2 force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, it was his responsibility to set up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he was capable of following up on the consequences of 1/r^2 he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke expectations have changed, and real original research is no longer something we have to fit in our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just number of papers and citations, if we manage all of that then yes we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.


A physicist lazing about unproductively under an apple tree

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

 

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend and focusing not on covering the “next big thing”, but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

 

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.