Like every good truism, though, there is an exception. Some rare times, you will actually want to read someone else’s thesis. This usually isn’t because the material is new; rather, it’s because it’s well explained.
When we academics publish, we’re often in a hurry, and there isn’t time to write well. When we publish more slowly, often we have more collaborators, so the paper is a set of compromises written by committee. Either way, we rarely make a concept totally crystal-clear.
A thesis isn’t always crystal-clear either, but it can be. It’s written by just one person, and that person is learning. A grad student who just learned a topic can be in the best position to teach it: they know exactly what confused them when they started out. Thesis-writing is also a slower process, one that gives more time to hammer at a text until it’s right. Finally, a thesis is written for a committee, and that committee usually contains people from different fields. A thesis needs to be an accessible introduction, in a way that a published paper doesn’t.
There are topics that I never really understood until I looked up the thesis of the grad student who helped discover them. There are tricks that never made it to published papers, that I’ve learned because they were tucked into the thesis of someone who went on to do great things.
So if you’re finding a subject confusing, if you’ve read all the papers and none of them make any sense, look for the grad students. Sometimes the best explanation of a tricky topic isn’t in the published literature; it’s hidden away in someone’s thesis.
Growing up in the US, you hit a lot of age-based milestones. You can drive at 16, vote at 18, and drink at 21. Once you’re in academia, though, your actual age becomes much less relevant. Instead, academics are judged based on academic age, the time since you got your PhD.
More generally, when academics apply for jobs they are often weighed in terms of academic age. Compared to others, how long have you spent as a postdoc since your PhD? How many papers have you published since then, and how well cited were they? The longer you spend without finding a permanent position, the more likely employers are to wonder why, and the reasons they assume are rarely positive.
This creates some weird incentives. If you have a choice, it’s often better to graduate late than to graduate early. Employers don’t check how long you took to get your PhD, but they do pay attention to how many papers you published. If it’s an option, staying in school to finish one more project can actually be good for your career.
Biological age matters, but mostly for biological reasons: for example, if you plan to have children. Raising a family is harder if you have to move every few years, so those who find permanent positions by then have an easier time of it. That said, as academics have to take more temporary positions before settling down, fewer people have this advantage.
Beyond that, biological age only matters again at the end of your career, especially if you work somewhere with a mandatory retirement age. Even then, retirement for academics doesn’t mean the same thing as for normal people: retired professors often have emeritus status, meaning that while technically retired they keep a role at the university, maintaining an office and often still doing some teaching or research.
I was talking with some other physicists about my “Black Box Theory” thought experiment, where theorists have to compete with an impenetrable block of computer code. Even if the theorists come up with a “better” theory, that theory won’t predict anything that the code couldn’t already. If “predicting something new” is an essential part of science, then the theorists can no longer do science at all.
One of my colleagues made an interesting point: in the thought experiment, the theorists can’t predict new behaviors of reality. But they can predict new behaviors of the code.
Even when we have the right theory to describe the world, we can’t always calculate its consequences. Often we’re stuck in the same position as the theorists in the thought experiment, trying to understand the output of a theory that might as well be a black box. Increasingly, we are employing a kind of “experimental theoretical physics”. We try to predict the result of new calculations, just as experimentalists try to predict the result of new experiments.
This experimental approach seems to be a genuine cultural difference between physics and mathematics. There is such a thing as experimental mathematics, to be clear. And while mathematicians prefer proof, they’re not averse to working from a good conjecture. But when mathematicians calculate and conjecture, they still try to set a firm foundation. They’re precise about what they mean, and careful about what they imply.
“Experimental theoretical physics”, on the other hand, is much more like experimental physics itself. Physicists look for plausible patterns in the “data”, seeing if they make sense in some “physical” way. The conjectures aren’t always sharply posed, and the leaps of reasoning are often more reckless than the leaps of experimental mathematicians. We try to use intuition gleaned from a history of experiments on, and calculations about, the physical world.
At the same time, experimental theoretical physics has real power. Experience may be a bad guide to mathematics, but it’s a better guide to the mathematics that specifically shows up in physics. And in practice, our recklessness can accomplish great things, uncovering behaviors mathematicians would never have found by themselves.
The key is to always keep in mind that the two fields are different. “Experimental theoretical physics” isn’t mathematics, and it isn’t pretending to be, any more than experimental physics is pretending to be theoretical physics. We’re gathering data and advancing tentative explanations, but we’re fully aware that they may not hold up when examined with full rigor. We want to inspire, to raise questions and get people to think about the principles that govern the messy physical theories we use to describe our world. Experimental physics, theoretical physics, and mathematics are all part of a shared ecosystem, and each has its role to play.
Avengers: Endgame has been out for a while, so I don’t have to worry about spoilers right? Right?
Anyway, time travel. The spoiler is time travel. They bring back everyone who was eliminated in the previous movie, using time travel.
They also attempt to justify the time travel, using Ant Man-flavored quantum mechanics. This works about as plausibly as you’d expect for a superhero whose shrinking powers not only let him talk to ants, but also go to a “place” called “The Quantum Realm”. Along the way, they manage to throw in splintered references to a half-dozen almost-relevant scientific concepts. It’s the kind of thing that makes some physicists squirm.
And I enjoyed it.
Movies tend to treat time travel in one of two ways. The most reckless, and most common, lets characters rewrite history as they go, like Marty McFly almost erasing himself from existence in Back to the Future. This never makes much sense, and the characters in Avengers: Endgame make fun of it, listing a series of movies that do time travel this way (inexplicably including A Wrinkle in Time, which has no time travel at all).
In the other common model, time travel has to happen in self-consistent loops: you can’t change the past, but you can go back and be part of it. This is the model used, for example, in Harry Potter, where Potter is saved by a mysterious spell only to travel back in time and cast it himself. This at least makes logical sense; whether it’s possible physically is an open question.
Avengers: Endgame uses the model of self-consistent loops, but with a twist: if you don’t manage to make your loop self-consistent you instead spawn a parallel universe, doomed to suffer the consequences of your mistakes. This is a rarer setup, but not a unique one, though the only other example I can think of at the moment is Homestuck.
Is there any physics justification for the Avengers: Endgame model? Maybe not. But you can at least guess what they were thinking.
The key clue is a quote from Tony Stark, rattling off a stream of movie-grade scientific gibberish:
“Quantum fluctuation messes with the Planck scale, which then triggers the Deutsch Proposition. Can we agree on that?”
From this quote, one can guess not only what scientific results inspired the writers of Avengers: Endgame, but possibly also which Wikipedia entry. David Deutsch is a physicist, and an advocate for the many-worlds interpretation of quantum mechanics. In 1991 he wrote a paper discussing what happens to quantum mechanics in the environment of a wormhole. In it he pointed out that you can make a self-consistent time travel loop, not just in classical physics, but out of a quantum superposition. This offers a weird solution to the classic grandfather paradox of time travel: instead of causing a paradox, you can form a superposition. As Scott Aaronson explains here, “you’re born with probability 1/2, therefore you kill your grandfather with probability 1/2, therefore you’re born with probability 1/2, and so on—everything is consistent.”

If you believe in the many-worlds interpretation of quantum mechanics, a time traveler in this picture is traveling between two different branches of the wave-function of the universe: you start out in the branch where you were born, kill your grandfather, and end up in the branch where you weren’t born. This isn’t exactly how Avengers: Endgame handles time travel, but it’s close enough that it seems like a likely explanation.
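Aaronson’s “probability 1/2” answer is just the fixed point of a very simple map. Here’s a toy sketch of the classical-probability version (my own illustration, not code from Deutsch’s paper):

```python
# Toy version of Deutsch's fixed-point resolution of the grandfather
# paradox. Let p be the probability that you were born. One trip around
# the time loop flips it: born -> grandfather killed -> not born.
def grandfather_map(p):
    return 1.0 - p

# Deutsch's consistency condition demands a fixed point, p = 1 - p,
# whose only solution is p = 1/2: you're born with probability 1/2,
# you kill your grandfather with probability 1/2, and the loop closes
# without contradiction.
p = 0.5
assert grandfather_map(p) == p
```

The quantum version replaces the probability p with a density matrix and the flip with a quantum channel, but the logic is the same: demand that one trip around the loop maps the state to itself.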
David Deutsch’s argument uses a wormhole, but how do the Avengers make a wormhole in the first place? There we have less information, just vague references to quantum fluctuations at the Planck scale, the scale at which quantum gravity becomes important. There are a few things they could have had in mind, but one of them might have been physicists Leonard Susskind and Juan Maldacena’s conjecture that quantum entanglement is related to wormholes, a conjecture known as ER=EPR.
Long-time readers of the blog might remember I got annoyed a while back, when Caltech promoted ER=EPR using a different Disney franchise. The key difference here is that Avengers: Endgame isn’t pretending to be educational. Unlike Caltech’s ER=EPR piece, or even the movie Interstellar, Avengers: Endgame isn’t really about physics. It’s a superhero story, one that pairs the occasional scientific term with a character goofily bouncing around from childhood to old age while another character exclaims “you’re supposed to send him through time, not time through him!” The audience isn’t there to learn science, so they won’t come away with any incorrect assumptions.
A movie like Avengers: Endgame doesn’t teach science, or even advertise it. It does celebrate it, though.
That’s why, despite the silly half-correct science, I enjoyed Avengers: Endgame. It’s also why I don’t think it’s inappropriate, as some people do, to classify movies like Star Wars as science fiction. Star Wars and Avengers aren’t really about exploring the consequences of science or technology; they aren’t science fiction in that sense. But they do build off science’s role in the wider culture. They take our world and look at the advances on the horizon, robots and space travel and quantum speculations, and they let their optimism inform their storytelling. That’s not going to be scientifically accurate, and it doesn’t need to be, any more than the comic Abstruse Goose really believes Witten is from Mars. It’s about noticing we live in a scientific world, and having fun with it.
There’s a shorter-term problem, though, that gets much less press, despite arguably being a bigger part of the field right now. In amplitudes, we take theories and turn them into predictions, order by order and loop by loop. And when we want to compare those predictions to the real world, in most cases the best we can do is two loops and five particles.
Five particles here counts the particles coming in and going out: if two gluons collide and become three gluons, we count that as five particles, two in plus three out. Loops, meanwhile, measure the complexity of the calculation, the number of closed paths you can draw in a Feynman diagram. If you use more loops, you expect more precision: you’re approximating nature step by step.
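The loop number described above can be read straight off a diagram with Euler’s formula: for a connected Feynman diagram, the number of independent closed paths is L = I − V + 1, where I is the number of internal lines and V the number of vertices. A minimal sketch (my own illustration, not from the post):

```python
def loop_count(internal_lines, vertices):
    """Independent closed loops in a connected Feynman diagram,
    via Euler's formula: L = I - V + 1."""
    return internal_lines - vertices + 1

# A tree diagram: three vertices strung together by two internal lines.
assert loop_count(2, 3) == 0
# A one-loop "bubble": two vertices joined by two internal lines,
# forming a single closed path.
assert loop_count(2, 2) == 1
```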
As a field we’re pretty good at one-loop calculations, enough to do them for pretty much any number of particles. As we try for more loops though, things rapidly get harder. Already for two loops, in many cases, we start struggling. We can do better if we dial down the number of particles: there are three-particle and two-particle calculations that get up to three, four, or even five loops. For more particles though, we can’t do as much. Thus the current state of the art, the field’s short term goal: two loops, five particles.
When you hear people like me talk about crazier calculations, we’ve usually got a trick up our sleeve. Often we’re looking at a much simpler theory, one that doesn’t describe the real world. For example, I like working with a planar theory, with lots of supersymmetry. Remove even one of those simplifications, and suddenly our life becomes a lot harder. Instead of seven loops and six particles, we get genuinely excited about, well, two loops and five particles.
Luckily, two loops five particles is also about as good as the experiments can measure. As the Large Hadron Collider gathers more data, it measures physics to higher and higher precision. Currently for five-particle processes, its precision is just starting to be comparable with two-loop calculations. The result has been a flurry of activity, applying everything from powerful numerical techniques to algebraic geometry to the problem, getting results that genuinely apply to the real world.
“Two loops, five particles” isn’t as cool of a slogan as “space-time is doomed”. It doesn’t get much, or any, media attention. But, steadily and quietly, it’s become one of the hottest topics in the amplitudes field.
As a kid, I wanted to know everything. Eventually, I realized this was a little unrealistic. Doomed to know some things and not others, I picked physics as a kind of triage. Other fields I could learn as an outsider: not well enough to compete with the experts, but enough to at least appreciate what they were doing. After watching a few string theory documentaries, I realized this wasn’t the case for physics: if I was ever going to understand what those string theorists were up to, I would have to go to grad school in string theory.
Over time, this goal lost focus. I’ve become a very specialized creature, an “amplitudeologist”, and I haven’t had time or energy for my old questions. In an irony that will surprise no-one, a career as a physicist doesn’t leave much time for curiosity about physics.
One of the great things about this blog is how you guys remind me of those old questions, bringing me out of my overspecialized comfort zone. In that spirit, in this post I’m going to list a few things in physics that I really want to understand better. The idea is to make a public commitment: within a year, I want to understand one of these topics at least well enough to write a decent blog post on it.
Wilsonian Quantum Field Theory:
When you first learn quantum field theory as a physicist, you learn how unsightly infinite results get covered up via an ad-hoc-looking process called renormalization. Eventually you learn a more modern perspective, that these infinite results show up because we’re ignorant of the complete theory at high energies. You learn that you can think of theories at a particular scale, and characterize them by what happens when you “zoom” in and out, in an approach codified by the physicist Kenneth Wilson.
While I understand the basics of Wilson’s approach, the courses I took in grad school skipped the deeper implications. This includes the idea of theories that are defined at all energies, “flowing” from an otherwise scale-invariant theory perturbed with extra pieces. Other physicists are much more comfortable thinking in these terms, and the topic is important for quite a few deep questions, including what it means to properly define a theory and where laws of nature “live”. If I’m going to have an informed opinion on any of those topics, I’ll need to go back and learn the Wilsonian approach properly.
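The “zooming” Wilson formalized can be made concrete with a toy renormalization-group flow. As a sketch (a generic one-loop-style equation, not tied to any particular theory): a coupling g running with scale t = ln μ as dg/dt = b g² can be integrated numerically to watch the theory flow.

```python
# Toy Wilsonian flow: integrate dg/dt = b * g**2 with t = ln(mu),
# using a simple Euler step. The sign of b sets the direction of flow:
# b > 0 makes the coupling grow as you zoom to higher energies,
# b < 0 makes it shrink (the pattern behind asymptotic freedom).
def flow(g0, b, steps=1000, dt=0.001):
    g = g0
    for _ in range(steps):
        g += b * g * g * dt
    return g

assert flow(0.5, b=+1.0) > 0.5   # coupling grows along the flow
assert flow(0.5, b=-1.0) < 0.5   # coupling shrinks along the flow
```

This Euler step is the crudest possible integrator, but it captures the Wilsonian picture in miniature: a theory is characterized not by one fixed coupling but by how its couplings change as you slide the scale.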
If you’re a fan of science fiction, you probably know that wormholes are the most realistic option for faster-than-light travel, something that is at least allowed by the equations of general relativity. “Most realistic” isn’t the same as “realistic”, though. Opening a wormhole and keeping it stable requires some kind of “exotic matter”, and that matter needs to violate a set of restrictions, called “energy conditions”, that normal matter obeys. Some of these energy conditions are just conjectures, some we even know how to violate, while others are proven to hold for certain types of theories. Some energy conditions don’t rule out wormholes, but instead restrict their usefulness: you can have non-traversable wormholes (basically, two inescapable black holes that happen to meet in the middle), or traversable wormholes where the distance through the wormhole is always longer than the distance outside.
I’ve seen a few talks on this topic, but I’m still confused about the big picture: which conditions have been proven, what assumptions were needed, and what do they all imply? I haven’t found a publicly-accessible account that covers everything. I owe it to myself as a kid, not to mention everyone who’s a kid now, to get a satisfactory answer.
Quantum Foundations is a field that many physicists think is a waste of time. It deals with the questions that troubled Einstein and Bohr, questions about what quantum mechanics really means, or why the rules of quantum mechanics are the way they are. These tend to be quite philosophical questions, where it’s hard to tell if people are making progress or just arguing in circles.
I’m more optimistic about philosophy than most physicists, at least when it’s pursued with enough analytic rigor. I’d like to at least understand the leading arguments for different interpretations, what the constraints on interpretations are and the main loopholes. That way, if I end up concluding the field is a waste of time at least I’d be making an informed decision.
You might notice a change on the site this week: the ads are gone!
When I started this blog back in 2012, it was just a class project. I didn’t want to spend money on it, so I chose WordPress.com’s free hosting option. A consequence of that option is that WordPress got to post ads. These were pretty mild to begin with; I think most of the early posts didn’t even have ads. It seemed like a reasonable deal.
Over the years, WordPress has quietly been adding more ads, and worse ones. I mostly hadn’t noticed: I use an adblocker. For those who don’t, though, the blog began to look increasingly unprofessional, plastered with the kind of shitty, borderline-scam ads that fill certain parts of the internet. Thanks to everyone who let me know this was happening; I don’t think I would have noticed otherwise. To clarify, I never made any money from these ads; all of the revenue went to WordPress.
As of this week I’ve switched the site to a paid hosting plan. The move is long overdue: the plan is actually pretty cheap, and is the sort of thing I could have easily afforded by myself. As it happens I don’t have to afford it by myself: the grant that funds me, a Marie Curie Individual Fellowship, also funds outreach activities. I already had a message thanking them on my About page, but somehow I hadn’t considered actually using their funding here.
The site’s new plan also comes with a free domain, so you can now reach this site with a new, simpler address: 4gravitons.com. The old 4gravitons.wordpress.com address should still work as well; if there are any glitches, please let me know!