Math and physics are different fields with different cultures. Some of those differences are obvious, others more subtle.
I recently remembered a subtle difference I noticed at the University of Waterloo. The math building there has “research rooms”, rooms intended for groups of mathematicians to collaborate. The idea is that you invite visitors to the department, reserve the room, and spend all day with them trying to iron out a proof or the like.
Theoretical physicists collaborate like this sometimes too, but in my experience physics institutes don’t typically have this kind of “research room”. Instead, they have “collaboration spaces”. Unlike a “research room”, you don’t reserve a “collaboration space”. Typically, they aren’t even rooms: they’re a set of blackboards in the coffee room, or a cluster of chairs in the corner between two hallways. They’re open spaces, designed so that passers-by can overhear the conversation and (potentially) join in.
That’s not to say physicists never shut themselves in a room for a day (or night) to work. But when they do, it’s not usually in a dedicated space. Instead, it’s in an office, or a commandeered conference room.
Waterloo’s “research rooms” and physics institutes’ “collaboration spaces” can be used for similar purposes. The difference is in what they encourage.
The point of a “collaboration space” is to start new collaborations. These spaces are open in order to take advantage of serendipity: if you’re getting coffee or walking down the hall, you might hear something interesting and spark something new, with people you hadn’t planned to collaborate with before. Institutes with “collaboration spaces” are trying to make new connections between researchers, to be the starting point for new ideas.
The point of a “research room” is to finish a collaboration. They’re for researchers who are already collaborating, who know they’re going to need a room and can reserve it in advance. They’re enclosed in order to shut out distractions, to make sure the collaborators can sit down and focus and get something done. Institutes with “research rooms” want to give their researchers space to complete projects when they might otherwise be too occupied with other things.
I’m curious if this difference is more widespread. Do math departments generally tend to have “research rooms” or “collaboration spaces”? Are there physics departments with “research rooms”? I suspect there is a real cultural difference here, in what each field thinks it needs to encourage.
There’s a picture of how science works that we learn in high school. It’s not the whole story, certainly: philosophers of science have much more sophisticated notions. But for practicing scientists, it’s a picture that often sits in the back of our minds, informing what we do. Because of that, it’s worth examining in detail.
In the high school picture, scientific theories make predictions. Importantly, postdictions don’t count: if you “predict” something that already happened, it’s too easy to cheat and adjust your prediction. Also, your predictions must be different from those of other theories. If all you can do is explain the same results with different words, you aren’t doing science; you’re doing “something else” (“metaphysics”, “religion”, “mathematics”…whatever the person you’re talking to wants to make fun of, but definitely not science).
Seems reasonable, right? Let’s try a thought experiment.
In the late 1950’s, the physics of protons and neutrons was still quite mysterious. They seemed to be part of a bewildering zoo of particles that no-one could properly explain. In the 60’s and 70’s the field started converging on the right explanation, from Gell-Mann’s eightfold way to the parton model to the full theory of quantum chromodynamics (QCD for short). Today we understand the theory well enough to package things into computer code: amplitudes programs like BlackHat for collisions of individual quarks, jet algorithms that describe how those quarks become signals in colliders, lattice QCD implemented on supercomputers for pretty much everything else.
Now imagine that you had a time machine, prodigious programming skills, and a grudge against 60’s-era physicists.
Suppose you wrote a computer program that combined the best of QCD in the modern world: BlackHat and more from the amplitudes side, the best jet algorithms and lattice QCD code, all bundled into a program that could reproduce any calculation in QCD that anyone can do today. Further, suppose you don’t care about silly things like making your code readable. Since I began the list above with BlackHat, we’ll call the combined box of different codes BlackBox.
Now suppose you went back in time, and told the bewildered scientists of the 50’s that nuclear physics was governed by a very complicated set of laws: the ones implemented in BlackBox.
Behold, your theory.
Your “BlackBox theory” passes the high school test. Not only would it match all previous observations, it could make predictions for any experiment the scientists of the 50’s could devise. Up until the present day, your theory would match observations as well as…well, as well as QCD does today.
(Let’s ignore for the moment that they didn’t have computers that could run this code in the 50’s. This is a thought experiment, we can fudge things a bit.)
Now suppose that one of those enterprising 60’s scientists, Gell-Mann or Feynman or the like, noticed a pattern. Maybe they got it from an experiment scattering electrons off of protons, maybe they saw it in BlackBox’s code. They notice that different parts of “BlackBox theory” run on related rules. Based on those rules, they suggest a deeper reality: protons are made of quarks!
But is this “quark theory” scientific?
“Quark theory” doesn’t make any new predictions. Anything you could predict with quarks, you could predict with BlackBox. According to the high school picture of science, for these 60’s scientists, quarks wouldn’t be scientific: they would be “something else”, metaphysics or religion or mathematics.
And in practice? I doubt that many scientists would care.
“Quark theory” makes the same predictions as BlackBox theory, but I think most of us understand that it’s a better theory. It actually explains what’s going on. It takes different parts of BlackBox and unifies them into a simpler whole. And even without new predictions, that would be enough for the scientists in our thought experiment to accept it as science.
Why am I thinking about this? For two reasons:
First, I want to think about what happens when we get to a final theory, a “Theory of Everything”. It’s probably ridiculously arrogant to think we’re anywhere close to that yet, but nonetheless the question is on physicists’ minds more than it has been for most of history.
Right now, the Standard Model has many free parameters, numbers we can’t predict and must fix based on experiments. Suppose there are two options for a final theory: one that has a free parameter, and one that doesn’t. Once that one free parameter is fixed, both theories will match every test you could ever devise (they’re theories of everything, after all).
If we come up with both theories before testing that final parameter, then all is well. The theory with no free parameters will predict the result of that final experiment, the other theory won’t, so the theory without the extra parameter wins the high school test.
What if we do the experiment first, though?
If we do, then we’re in a strange situation. Our “prediction” of the one free parameter is now a “postdiction”. We’ve matched numbers, sure, but by the high school picture we aren’t doing science. Our theory, the same theory that was scientific if history went the other way, is now relegated to metaphysics/religion/mathematics.
I don’t know about you, but I’m uncomfortable with the idea that what is or is not science depends on historical chance. I don’t like the idea that we could be stuck with a theory that doesn’t explain everything, simply because our experimentalists were able to work a bit faster.
My second reason focuses on the here and now. You might think we have nothing like BlackBox on offer, no time travelers taunting us with poorly commented code. But we’ve always had the option of our own Black Box theory: experiment itself.
The Standard Model fixes some of its parameters from experimental results. You do a few experiments, and you can predict the results of all the others. But why stop there? Why not fix all of our parameters with experiments? Why not fix everything with experiments?
That’s the Black Box Theory of Everything. Each individual experiment you could possibly do gets its own parameter, describing the result of that experiment. You do the experiment, fix that parameter, then move on to the next experiment. Your theory will never be falsified; you will never be proven wrong. Sure, you never predict anything either, but that’s just an extreme case of what we have now, where the Standard Model can’t predict the mass of the Higgs.
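To make the contrast concrete, here’s a minimal sketch in Python of what such a theory amounts to: a lookup table with one entry per experiment. Everything in it (the class, the method names, the experiment labels) is invented purely for illustration.

```python
# A minimal sketch of the "Black Box Theory of Everything":
# one free parameter per experiment, each fixed only after that
# experiment has been done. All names here are made up.

class BlackBoxTheory:
    def __init__(self):
        self.parameters = {}  # one recorded result per experiment performed

    def fix_parameter(self, experiment, measured_value):
        """Record an experiment's outcome as a new free parameter."""
        self.parameters[experiment] = measured_value

    def predict(self, experiment):
        """'Predict' an experiment: only possible once it has already been done."""
        if experiment in self.parameters:
            return self.parameters[experiment]  # trivially matches, never falsified
        raise ValueError("No prediction available: fix this parameter by experiment first.")


theory = BlackBoxTheory()
theory.fix_parameter("higgs_mass_GeV", 125.1)  # measure first, then record
print(theory.predict("higgs_mass_GeV"))        # agrees with experiment, trivially
# theory.predict("next_collider_result")       # would raise: nothing new is ever predicted
```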
What’s wrong with the Black Box Theory? (I trust we can all agree that it’s wrong.)
It’s not just that it can’t make predictions. You could make it a Black Box All But One Theory instead, one that predicts one experiment and takes every other experiment as input. You could even make a Black Box Except the Standard Model Theory, one that predicts everything we can predict now and just leaves out everything we’re still confused by.
The Black Box Theory is wrong because the high school picture of what counts as science is wrong. The high school picture is a useful guide, a good rule of thumb, but it’s not the ultimate definition of science. And especially now, when we’re starting to ask questions about final theories and ultimate parameters, we can’t cling to the high school picture. We have to be willing to actually think, to listen to the philosophers and consider our own motivations, to figure out what, in the end, we actually mean by science.
“Nonperturbative” means that most of the people at this conference don’t use the loop-by-loop approximation of Feynman diagrams. Instead, they try to calculate things that don’t require approximations, finding formulas that work even for theories where the forces involved are very strong. In practice this works best in what are called “conformal” theories: roughly speaking, theories that look the same at whatever “scale” you use. Sometimes these theories are “integrable”, meaning they can be “solved” exactly with no approximation. Sometimes they can be “bootstrapped”: starting with a guess and seeing how various principles of physics constrain it, mapping out a kind of “space of allowed theories”. Both approaches, integrability and bootstrap, are present at this conference.
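As a rough schematic of the distinction (my own shorthand, not anything from the talks): a perturbative calculation approximates a quantity as a power series in a small coupling g, adding one loop order per power, while nonperturbative methods aim at the full answer, which can contain pieces that vanish to every order in that expansion.

```latex
% Schematic only: the perturbative loop expansion versus a typical
% nonperturbative contribution (an instanton-like term).
\begin{align*}
  A(g) &\approx A_0 + A_1\,g^2 + A_2\,g^4 + \cdots
       && \text{(perturbative: one loop order per power of } g^2\text{)} \\
  A(g) &\ \supset\ c\,e^{-1/g^2}
       && \text{(nonperturbative: zero to every order in } g\text{)}
\end{align*}
```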
This isn’t quite my community, but there’s a fair bit of overlap. We care about many of the same theories, like N=4 super Yang-Mills. We care about tricks to do integrals better, or to constrain mathematical guesses better, and we can trade these kinds of tricks and give each other advice. And while my work is typically “perturbative”, I did have one nonperturbative result to talk about, one which turns out to be more closely related to the methods these folks use than I had appreciated.