Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.
For a long time now, our field has centered on particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.
Plans are being discussed for an even larger collider, in Europe or China, but it’s not clear that either will be funded. Even if the case for discovering new physics at such a collider isn’t as strong, there are properties of the Higgs that the LHC won’t be able to measure, things it’s important to check with a more powerful machine.
That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.
By “the field”, here, I don’t just mean those focused on making predictions for collider physics. My work isn’t plugged particularly tightly into the real world, and the same is true of most string theorists. Naively, you’d think it wouldn’t matter to us if a new collider gets built.
The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.
Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.
It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.
If there isn’t a next big collider, our field won’t dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and shift their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working in a zombie field, shambling around in a subject that is already dead but can’t admit it.
My hope is that this won’t happen. Even if the new colliders don’t get approved and collider physics goes dormant, I’d like to think my colleagues are adaptable enough to stay useful as the world’s demands change. But I’m young in this field, I haven’t seen it face these kinds of challenges before. And so, I worry.
Maybe what’s needed is a new technology. Rather than spending so much on building a bigger smasher, fund research into other ways of studying the quantumly small. For example, I’ve read that LIGO operates in about the same size range that the LHC does, and I’ve heard about “table top” accelerators that use some sort of waves to get particles up to high speeds.
Or maybe it’s time to fund research into a whole new view of reality, seeing as we’ve spent an awful lot of time trying to reconcile QFT with GR… and still don’t have a clue. Maybe it’s time to throw it all away and start over.
People are already looking into the new technology side of the equation. LIGO isn’t generally able to look at the right sort of thing, and tabletop accelerators aren’t at comparable energies, but there are other tabletop experiments that might have interesting particle physics implications, and other “cheap” experiments that can at least tell us something useful.
Scientifically, there’s still a lot those kinds of experiments can’t cover. But more broadly, they don’t really require a wider community of the same scope, or calculations of the same difficulty. If there are no new colliders, there will be some shrinkage. And that’s not necessarily unreasonable; that’s the nature of progress: different things get different emphasis at different times. The main thing is, I don’t know if my field is going to handle that gracefully.
“If there are no new colliders there will be some shrinkage.”
I’m taking off my physicist hat and putting on my economist’s hat for a second.
What would you say the current number of people in your subfield is? How many are in academia? Are all of them in physics departments, or are there some in mathematics or engineering departments? Where in government or industry do the non-academics work?
How much attrition is there in an average year (even very stable organizations tend to have much higher losses to attrition than you would naively expect)? How many incoming pre-professionals are there at any given time? How many incoming pre-professionals are needed to keep the subfield viable?
What fields would an aspiring amplitudeologist most likely pursue if there were few job opportunities in the subfield? What fields would an experienced amplitudeologist pursue if research opportunities were declining in the sub-field?
How many dollars does it take to support a full-fledged amplitudeologist (i.e. including grad students, post-docs, all professional equipment, support staff, and overhead for a full-time tenured professor in his or her prime, or the equivalent outside academia) per year? I would imagine that you guys are cheap as physicists go, but maybe computational resource needs drive up that cost. Who funds you now? What prospects would there be for alternative funding, for example, from support for basic research from companies interested in quantum computing, defense contractors, nano-tech companies, or the like?
How much shrinkage would you anticipate? How delayed would it be from the collapse of collider physics?
To be clear, I’m not trying to insist on answers to all or any of these (although I am curious about the total size of the community), but those are the questions that leaders in the community plotting its future should be considering.
This reminds me of Sheldon abandoning string theory.
FWIW, I have repeatedly pitched amplitudeology, and the computational power necessary to make higher-loop calculations, as the best investment we can make in physics. For example, when it comes to determining the values of the fundamental constants pertinent to QCD in the Standard Model, we already have lots of measurements, such as the lighter hadron properties, that are far more precise than anything we can calculate, but we can’t convert those measurements into precise physical constant values due to imprecision in the calculations. With enough precision in calculation, we could determine the strong force coupling constant and the light quark masses to pretty much any level of precision achievable from existing precision measurements, which are diverse enough to allow plenty of cross-checking against experimental tests of hadron properties not used to derive the new, more precise physical constant values.
Also, once we get more precise values of the Standard Model physical constants from this method, it becomes possible to reanalyze a lot of existing experiments and data to look for BSM phenomena, or SM confirmation, because backgrounds and theoretical expectations can be known more precisely. And with greater precision in the physical constants, it becomes possible to look for relationships hidden within the Standard Model that could otherwise have been dismissed as chance.
Finally, amplitudeology is also one of the more promising areas from which real breakthroughs in quantum gravity seem possible. Renormalizable or not, somehow the universe manages to keep everything on course without blue screens of death popping up all over the place, so there has to be a workable way of calculating QG, and you guys are in the best position of anyone to figure out how.
Unlike new colliders, whose value is speculative, this is a process guaranteed to advance our knowledge of fundamental physics as long as we work at it hard enough. In the process, amplitudeology is almost certain to advance, as more computations point the way toward deeper relationships in the equations being calculated, generating new hypotheses. With those advances, we will get even further than we would with mere raw calculation, and may even discover new fundamental and qualitative aspects of physics.
One more thought on the virtues of this sub-field. Lots of complex processes (the calculation of all the possible decays of a top quark, for example) have components, such as the decay of heavy leptons or the calculation of a particular class of terms in the path integrals involved, that only have to be done once even though they are used repeatedly. Each new calculation put on the shelf joins the parts list that can be used to make other calculations more efficient, jumping from problem to solution without the intermediate calculation steps. It may be something of a brute-force effort, but given that there is art as well as science in solving those intermediate steps efficiently, the cumulative effort can add a lot of value and make real-time calculations of novel complex processes viable.
The tricky thing here is that this sort of re-branding on our side (from value for interpreting new data to value for re-interpreting old data) would have to be accompanied by a similar re-branding on the phenomenological side, by the people who actually do the interpreting. And maybe I’m talking to the wrong phenomenologists, but I’m not seeing that.
There are searches for BSM physics in old data that phenomenologists do get excited about, sure, like the recent reanalysis of ALEPH data. But they don’t seem especially tied to increases in precision on the theory side. The view that there could be a lot of interesting BSM physics hiding behind our uncertainty of the SM backgrounds looks plausible from the outside, but it’s not something I hear from phenomenologists themselves very often. The phenomenologists I’ve observed re-branding are mostly focusing on the intensity and cosmic frontiers, not talking about reanalyzing old energy frontier data.
The QG side is perhaps more promising, a lot of amplitudes research is basically non-mainstream QG research. So that chunk of the field is pretty well insulated at least.
There are two arguments here: the one raised by Matt is about the health of the overall field of particle physics, which I can follow. The second is about the relative health of a given sub-field within the larger field. For the latter, some of the amplitudes technology is in good shape. Since the direct path to discovery seems to be closed, precision physics at the LHC is the next best vector. However, the LHC is not a precision machine: the strong force is strong, while most of the physics is in the weak sector (just count parameters). Hence you need, as a minimum, N^(>1)LO computations in QCD, guaranteeing many more years of amplitudes. However, if your methods only apply to unregulated, planar N=4, you might not be able to draw on this…
So, this is definitely a point I hear people in our sub-field make pretty often. It’s usually in the intro of at least one talk at every amplitudes conference.
My main reservation is that it’s not a point I’ve heard often from outside our sub-field. I haven’t heard the same excitement about precision physics from BSM pheno folks.
Maybe I’m just not talking to the right people, though. You’ve likely talked to more phenomenologists than I have recently. Are there major things the BSM people think are hiding behind the QCD backgrounds, that wouldn’t have shown up in more direct searches?
intro of talk → check 🙂
At LO, many ‘BSM pheno’ people seem to be of the ambulance-chasing variety, aiming for easy models. At NLO, some will acknowledge the importance of disentangling new from known physics. At NNLO, some are trying to go there. However, this requires implicitly or explicitly acknowledging that the easy vectors are almost closed now (low-scale SUSY, anyone?). Some of the drive toward parametrizing higher-dimensional operators which can be added to the Standard Model, as a way of capturing ‘all’ of BSM physics, is a first step in this direction. Certainly, much of the electroweak sector remains to be explored further; the Higgs is involved in the majority of coupling constants in the Standard Model, while as a particle it is not even of kindergarten age. Of course, this requires amplitudeologists to study amplitudes with broken gauge symmetries instead of the more fun simple varieties (Parke-Taylor, anyone?) :).