If you imagine a particle physicist, you probably picture someone spending their whole day dreaming up new particles. They figure out how to test those particles in some big particle collider, and for a lucky few their particle gets discovered and they get a Nobel prize.
Occasionally, a wiseguy asks if we can’t just cut out the middleman. Instead of dreaming up particles to test, why don’t we just write down every possible particle and test for all of them? It would save the Nobel committee a lot of money at least!
It turns out, you can sort of do this, through something called Effective Field Theory. An Effective Field Theory is a type of particle physics theory that isn’t quite true: instead, it’s “effectively” true, meaning true as long as you don’t push it too far. If you test it at low energies and don’t “zoom in” too much then it’s fine. Crank up your collider energy high enough, though, and you expect the theory to “break down”, revealing new particles. An Effective Field Theory lets you “hide” unknown particles inside new interactions between the particles we already know.
To help you picture how this works, imagine that the pink and blue lines here represent familiar particles like electrons and quarks, while the dotted line is a new particle somebody dreamed up. (The picture is called a Feynman diagram; if you don’t know what that is, check out this post.)
In an Effective Field Theory, we “zoom out”, until the diagram looks like this:
Now we’ve “hidden” the new particle. Instead, we have a new type of interaction between the particles we already know.
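If you like seeing the “zoom out” in equations, here is a minimal schematic sketch (the coupling g and heavy mass M are just illustrative placeholders, not tied to any particular model). When the momenta involved are far below M, the hidden particle’s propagator collapses to a constant, and what remains looks like a direct contact interaction among the familiar particles, plus small corrections:

$$\frac{g^2}{p^2 - M^2} \;=\; -\frac{g^2}{M^2}\left(1 + \frac{p^2}{M^2} + \frac{p^4}{M^4} + \cdots\right) \qquad \text{for } p^2 \ll M^2.$$

The leading term is exactly the kind of new interaction in the “zoomed out” diagram: the heavy particle never appears explicitly, only the combined strength g²/M² does. (Fermi’s old theory of the weak interactions works this way, with the W boson hidden inside a four-particle contact interaction.)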
So instead of writing down every possible new particle we can imagine, we only have to write down every possible interaction between the particles we already know.
That’s not as hard as it sounds. In part, that’s because not every interaction actually makes sense. Some of the things you could write down break some important rules. They might screw up cause and effect, letting something happen before its cause instead of after. They might screw up probability, predicting a chance of something happening that comes out greater than 100%.
Using these rules you can play a kind of game. You start out with a space representing all of the interactions you can imagine. You begin chipping at it, carving away parts that don’t obey the rules, and you see what shape is left over. You end up with plots that look a bit like carving a ham.
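To give a flavour of one of the rules used in this carving, here is the standard “positivity bound” logic in schematic form (the coefficient names and the scale Λ are illustrative, not taken from any particular paper). Expand the amplitude for two particles to scatter straight ahead at low energy; demanding that cause precedes effect and that probabilities stay below 100% forces certain unknown coefficients to have a definite sign, instantly carving away half of the space:

$$A(s, t = 0) \;\approx\; c_0 \;+\; c_2\,\frac{s^2}{\Lambda^4} \;+\; \cdots \qquad \Longrightarrow \qquad c_2 \;\geq\; 0,$$

where s measures the collision energy and Λ is the scale where the effective theory is expected to break down. Constraints like this, and more refined cousins of them, are the chisels behind the ham-like plots.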


People in my subfield are getting good at this kind of game. It isn’t quite our standard fare: usually, we come up with tricks to make calculations with specific theories easier. Instead, many groups are starting to look at these general, effective theories. We’ve made friends with groups in related fields, building new collaborations. There still isn’t one clear best way to do this carving, so each group manages to find a way to chip a little farther. Out of the block of every theory we could imagine, we’re carving out a space of theories that make sense, theories that could conceivably be right. Theories that are worth testing.
What do you really think about the bootstrap program?
Ok, it’s not something entirely new as far as I know, but in recent years there has been a resurgence of that field. I understand that you are more sympathetic to the “operational” approach to physics than to more abstract approaches based on “first principles”, but maybe I’m wrong after all.
I don’t really think of them as conflicting, at least not framed in that way. I’m an “operationalist” in that I care about observables. That doesn’t mean I object to “first principles”, it just means first principles need to be operationalized in terms of those observables. Bootstrap approaches do that: the CFT bootstrap constrains correlation functions, the EFT constraints story I’m talking about here is often framed in terms of on-shell amplitudes, and my own perturbative bootstrap work also involves building amplitudes from first principles in a sense, just for specific theories rather than general spaces of theories.
I don’t know if what you have in mind is the kind of vague stuff I’ve said about QM interpretations and the like (there my main annoyance is with people who tend to either refuse to operationalize their “first principles”, or to mistake differences in how people operationalize different ideas for differences in how they view actual physics), or if it’s more a general outlook of pragmatism (I do care about whether the methods we develop can actually be useful to predict things in the real world, but I also think that bootstrap methods, my own included, are going to be a valuable contribution to doing that!).
Thanks for the link to that older post.
I won’t pretend that I fully comprehend the details of the work that you have done there, but the result was really impressive, to say the least! You’ve eliminated the need for extremely time-consuming calculations by using restrictions, finding a clever “shortcut” instead of doing things the naive, tedious way.
The operationalist / positivist approaches to physics are useful in the sense that they keep people down-to-earth; I don’t really disagree with that stance. My only objection has to do with the somewhat “extreme” philosophical meaning that some people (in the Quantum Foundations area) are attaching to these terms.
Personally, I’m a bit agnostic about interpretations and I don’t have very strong preferences (although I am more sympathetic to some of them, e.g. “consistent histories”, than to others).
I don’t really have enough information to decide which is the “correct” one.
QM is very well confirmed by all experiments and, for the time being, that’s enough.
Experimentalists will soon test other alternatives (like “physical collapse” theories), so in the not-so-distant future we will, perhaps, know more.
Since you’re a fan of consistent histories with a GR-ish background, maybe you can answer this. I saw a talk by Hertog where he “avoided” the eternal inflation multiverse by taking a measurable patch of the sky and saying that our history is a superposition of the possible spacetimes that can lead to that patch. Was that an application of consistent histories? I know Hertog has worked a lot with Hartle, who likes consistent histories, and I know that consistent histories is supposed to be useful for cosmology, but I wasn’t sure whether the specific thing he was doing there was related to that.
I think that you’re right about the work of Hartle, Gell-Mann, Unruh, Omnes (“Interpretation of QM”, 1994) and my preference for the consistent histories approach. Even my first introductory GR textbook was J. Hartle’s. There are also other interesting interpretations, like the relational approaches; I’m open-minded about this.
In recent years, after that ’17 paper by Turok (and collaborators) and the controversies that followed, the status of the No Boundary proposal has been a very complicated affair, as you probably already know. (Maybe I should say that the whole business of the various cosmological proposals has become increasingly complicated.)
The NBP has also been significantly transformed from its initial path-integral / consistent-histories origins. I haven’t seen any recent talk from Hertog about the subject and I’m not aware of the most recent developments.
Only a guess: Maybe it has to do with the swampland conjectures.
There was a paper by Matsui & Terada (related to a previous one by Matsui & Takahashi, arXiv:1807.11938) where, if I remember correctly, there was a claim that the No Boundary Proposal does not give high probabilities for expanding “classical” universes without presupposing the need for eternal inflation.
Now, the potential problem according to this paper is that eternal inflation (E.I.) is constrained by the swampland conjectures (there was also a connection with the Trans-Planckian censorship conjecture, again if I remember it right!) in a way that poses a potential issue for the NBP.
So, perhaps NB people found a way to circumvent this by avoiding the need for E.I.?
This is only a guess, perhaps I’m altogether wrong!
That’s plausibly part of the motivation, yeah.
I should clarify, my question was intended to be more along the lines of “is this how consistent histories works?” If I understood Hertog’s argument, the idea was that rather than eternal inflation giving a large number of different regions with different cosmological constants, you look at the data we can get in one region, and think about its past as a superposition dependent on how much data is available, and once the region you’re considering is over a certain size its past is a superposition of NBP pasts.
The thing is, that superficially seems like it avoids eternal inflation in a really cheaty way, by just not considering regions outside of our Hubble volume to begin with. I was curious whether that particular trick (“consider just what you can observe, and your past is a superposition of pasts that could lead to it”) was something one is allowed to do in the consistent histories interpretation. Because it both seemed consistent with the extremely vague things I’ve heard about consistent histories, and not consistent with the way you seem to be describing consistent histories as not operationalist (because “your past is determined by what evidence you have access to” is extremely operationalist!).
There is an arbitrariness problem in quantum or inflationary cosmology; I think almost everybody agrees on that (I’ve made a related comment on your previous blog post). Things are not so blatantly cheaty or circular, though. In the NB approach, they assume homogeneous and isotropic cosmologies (with a cosmological constant and a scalar field, etc.) and they use the observational data from our past light cone as constraints (what else could they do?) for the calculations. The NB predicts an early inflationary phase, OK, but which model (of the numerous that have been proposed) and with which characteristics?
Do these predictions favour models of inflation that lead to big, expanding, “classical” universes like ours? What about eternal inflation models, etc.?
I don’t think that they’re excluding eternal inflation or pre-Big-Bang phases or bouncing models in an arbitrary manner.
In some older papers the NB indeed predicted bouncing models with a minimum radius, or initial singularities that could be extended into the past, etc. (e.g. arXiv:0711.4630 [hep-th], a very brief 4-page paper from HHH; there is another related, more detailed and much longer paper, 0803.1663, again from Hartle/Hawking/Hertog). These papers are quite old and probably outdated in some aspects, but I think that they show how the whole thing works.
As for the most recent developments in the NBP, I have no updated information and I cannot comment; I wouldn’t do any justice to their work.
As an additional note, I have the impression that your previous comments have more to do with that “middle” philosophical stance (in between operationalism and realism) than with consistent histories. Perhaps I’m not really self-consistent in my preference for consistent histories 🙂!