The Quantum Kids

I gave a pair of public talks at the Niels Bohr International Academy this week on “The Quest for Quantum Gravity” as part of their “News from the NBIA” lecture series. The content should be familiar to long-time readers of this blog: I talked about renormalization, and gravitons, and the whole story leading up to them.

(I wanted to title the talk “How I Learned to Stop Worrying and Love Quantum Gravity”, like my blog post, but was told Danes might not get the Dr. Strangelove reference.)

I also managed to work in some history, which made its way into the talk after Poul Damgaard, the director of the NBIA, told me I should ask the Niels Bohr Archive about Gamow’s Thought Experiment Device.

“What’s a Thought Experiment Device?”

This, apparently

If you’ve heard of George Gamow, you’ve probably heard of the Alpher-Bethe-Gamow paper, his work with grad student Ralph Alpher on the origin of atomic elements in the Big Bang, where he added Hans Bethe to the paper purely for an alpha-beta-gamma pun.

As I would learn, Gamow’s sense of humor was prominent quite early on. As a research fellow at the Niels Bohr Institute (essentially a postdoc) he played with Bohr’s kids, drew physics cartoons…and made Thought Experiment Devices. These devices were essentially toy experiments, apparatuses that couldn’t actually work but that symbolized some physical argument. The one I used in my talk, pictured above, commemorated Bohr’s triumph over one of Einstein’s objections to quantum theory.

Learning more about the history of the institute, I kept noticing the young researchers, the postdocs and grad students.

Lev Landau, George Gamow, Edward Teller. The kids are Aage and Ernest Bohr. Picture from the Niels Bohr Archive.

We don’t usually think about historical physicists as grad students. The only exception I can think of is Feynman, with his stories about picking locks at the Manhattan Project. But in some sense, Feynman was always a grad student.

This was different. This was Lev Landau, patriarch of Russian physics, crowning name in a dozen fields and author of a series of textbooks of legendary rigor…goofing off with Gamow. This was Edward Teller, father of the Hydrogen Bomb, skiing on the institute lawn.

These were the children of the quantum era. They came of age when the laws of physics were being rewritten, when everything was new. Starting there, they could do anything, from Gamow’s cosmology to Landau’s superconductivity, spinning off whole fields in the new reality.

On one level, I envy them. It’s possible they were the last generation to be on the ground floor of a change quite that vast, a shift that touched all of physics and gave each of them the chance to become gods of their own academic realms.

I’m glad to know about them too, though, to see them as rambunctious grad students. It’s all too easy to feel like there’s an unbridgeable gap between postdocs and professors, to worry that the only people who make it through seem to have always been professors at heart. Seeing Gamow and Landau and Teller as “quantum kids” dispels that: these are all-too-familiar grad students and postdocs, joking around in all-too-familiar ways, who somehow matured into some of the greatest physicists of their era.

Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

The Hunchback of Notre Science

It wasn’t always this way. Up until the nineteenth century, science was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a 1/r^2 force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, it was his responsibility to set up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he was capable of following up on the consequences of 1/r^2 he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke expectations have changed, and real original research is no longer something we have to fit in our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just number of papers and citations, if we manage all of that then yes we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.

A physicist lazing about unproductively under an apple tree

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.
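Tweaking of that sort is easy to simulate. The sketch below is purely illustrative (a fair coin, so there is no real effect to find, and a made-up schedule of “looks” at the data): checking for significance repeatedly and stopping at the first “positive” result inflates the false-positive rate well past the nominal five percent.

```python
import math
import random

def false_positive_rate(looks, trials=2000, seed=0):
    """Simulate a null effect (a fair coin) many times, testing for
    significance (|z| > 1.96, i.e. p < 0.05) at each sample size in
    `looks` and stopping at the first "significant" result."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        flips = [rng.random() < 0.5 for _ in range(max(looks))]
        for n in looks:
            heads = sum(flips[:n])
            z = (heads - n / 2) / math.sqrt(n / 4)
            if abs(z) > 1.96:
                hits += 1
                break
    return hits / trials

print(false_positive_rate([200]))               # one look: about 0.05
print(false_positive_rate(range(20, 201, 20)))  # ten looks: noticeably higher
```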

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.
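If a class of models really were a pure recipe, “writing it down” could be as simple as enumerating the parameter space and filtering by the current experimental bound. A deliberately silly sketch, where the two-parameter “model space” and the exclusion curve are both invented for illustration:

```python
def surviving_models(masses, couplings, excluded):
    """Enumerate a toy two-parameter model space and keep only the
    points the (hypothetical) current experiment hasn't ruled out."""
    return [(m, g) for m in masses for g in couplings
            if not excluded(m, g)]

# made-up bound: couplings above 1/mass are already excluded
survivors = surviving_models(
    masses=[1, 2, 5],
    couplings=[0.1, 0.5, 1.0],
    excluded=lambda m, g: g > 1 / m,
)
print(survivors)
```

Release something like this for a real model space and, by the rule of originality, every point it covers is off the table for future papers.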

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

 

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend, focusing not on covering the “next big thing” but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, and to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not as the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

 

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.

Amplitudes Papers I Haven’t Had Time to Read

Interesting amplitudes papers seem to come in groups. Several interesting papers went up this week, and I’ve been too busy to read any of them!

Well, that’s not quite true, I did manage to read this paper, by James Drummond, Jack Foster, and Omer Gurdogan. At six pages long, it wasn’t hard to fit in, and the result could be quite useful. The way my collaborators and I calculate amplitudes involves building up a mathematical object called a symbol, described in terms of a string of “letters”. What James and collaborators have found is a restriction on which “letters” can appear next to each other, based on the properties of a mathematical object called a cluster algebra. Oddly, the restriction seems to have the same effect as a more physics-based condition we’d been using earlier. This suggests that the abstract mathematical restriction and the physics-based restriction are somehow connected, but we don’t yet understand how. It also could be useful for letting us calculate amplitudes with more particles: previously we thought the number of “letters” we’d have to consider there was going to be infinite, but with James’s restriction we’d only need to consider a finite number.
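As a rough picture of what such a restriction looks like in practice, here is a toy adjacency check. To be clear, the alphabet and the set of allowed neighbours below are invented for illustration; they are not the actual cluster-algebra condition from the paper.

```python
def obeys_adjacency(word, allowed_pairs):
    """Check that every pair of neighbouring "letters" in a symbol
    word belongs to the allowed set."""
    return all(pair in allowed_pairs for pair in zip(word, word[1:]))

# invented alphabet and neighbour rules, purely for illustration
allowed = {("a", "b"), ("b", "a"), ("b", "c")}
print(obeys_adjacency(("a", "b", "c"), allowed))  # True
print(obeys_adjacency(("a", "c", "b"), allowed))  # False
```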

I didn’t get a chance to read David Dunbar, John Godwin, Guy Jehu, and Warren Perkins’s paper. They’re computing amplitudes in QCD (which unlike N=4 super Yang-Mills actually describes the real world!) and doing so for fairly complicated arrangements of particles. They claim to get remarkably simple expressions: since that sort of claim was what jump-started our investigations into N=4, I should probably read this if only to see if there’s something there in the real world amenable to our technique.

I also haven’t read Rutger Boels and Hui Luo’s paper yet. From the abstract, I’m still not clear which parts of what they’re describing are new, or how much it improves on existing methods. It will probably take a more thorough reading to find out.

I really ought to read Burkhard Eden, Yunfeng Jiang, Dennis le Plat, and Alessandro Sfondrini’s paper. They’re working on a method referred to as the Hexagon Operator Product Expansion, or HOPE. It’s related to an older method, the Pentagon Operator Product Expansion (POPE), but applicable to trickier cases. I’ve been keeping an eye on the HOPE in part because my collaborators have found the POPE very useful, and the HOPE might enable something similar. It will be interesting to find out how Eden et al.’s paper modifies the HOPE story.

Finally, I’ll probably find the time to read my former colleague Sebastian Mizera’s paper. He’s found a connection between the string-theory-like CHY picture of scattering amplitudes and some unusual mathematical structures. I’m not sure what to make of it until I get a better idea of what those structures are.

The Opposite of Witches

On Halloween I have a tradition of posts about spooky topics, whether traditional Halloween fare or things that spook physicists. This year it’s a little of both.

Mage: The Ascension is a role-playing game set in a world in which belief shapes reality. Players take the role of witches and warlocks, casting spells powered by their personal paradigms of belief. The game allows for pretty much any modern-day magic-user you could imagine, from Wiccans to martial artists.

Even stereotypical green witches, probably

Despite all the options, I was always more interested in the game’s villains, the witches’ opposites, the Technocracy.

The Technocracy answers an inevitable problem with any setting involving modern-day magic: why don’t people notice? If reality is powered by belief, why does no-one believe in magic?

In the Technocracy’s case, the answer is a vast conspiracy of mages with a scientific bent, manipulating public belief. Much like the witches and warlocks of Mage are a grab-bag of every occult belief system, the Technocracy combines every oppressive government conspiracy story you can imagine, all with the express purpose of suppressing the supernatural and maintaining scientific consensus.

This quote is from another game by the same publisher, but it captures the attitude of the Technocracy, and the magnitude of what is being claimed here:

Do not believe what the scientists tell you. The natural history we know is a lie, a falsehood sold to us by wicked old men who would make the world a dull gray prison and protect us from the dangers inherent to freedom. They would have you believe our planet to be a lonely starship, hurtling through the void of space, barren of magic and in need of a stern hand upon the rudder.

Close your mind to their deception. The time before our time was not a time of senseless natural struggle and reptilian rage, but a time of myth and sorcery. It was a time of legend, when heroes walked Creation and wielded the very power of the gods. It was a time before the world was bent, a time before the magic of Creation lessened, a time before the souls of men became the stunted, withered things they are today.

It can be a fun exercise to see how far doubt can take you, how much of the scientific consensus you can really be confident of and how much could be due to a conspiracy. Believing in the Technocracy would be the most extreme version of this, but Flat-Earthers come pretty close. Once you’re doubting whether the Earth is round, you have to imagine a truly absurd conspiracy to back it up.

On the other extreme, there are the kinds of conspiracies that barely take a conspiracy at all. Big experimental collaborations, like ATLAS and CMS at the LHC, keep a tight handle on what their members publish. (If you’re curious how tight, here’s a talk by a law professor about, among other things, the Constitution of CMS. Yes, it has one!) An actual conspiracy would still be outed in about five minutes, but you could imagine something subtler: the experiment sticking to “safe” explanations and refusing to publish results that look too unusual, on the basis that they’re “probably” wrong. Worries about that sort of thing can spook actual physicists.

There’s an important dividing line with doubt: too much and you risk invoking a conspiracy more fantastical than the science you’re doubting in the first place. The Technocracy doesn’t just straddle that line, it hops past it off into the distance. Science is too vast, and too unpredictable, to be controlled by some shadowy conspiracy.

Or maybe that’s just what we want you to think!

A LIGO in the Darkness

For the few of you who haven’t yet heard: LIGO has detected gravitational waves from a pair of colliding neutron stars, and that detection has been confirmed by observations of the light from those stars.

They also provide a handy fact sheet.

This is a big deal! On a basic level, it means that we now have confirmation from other instruments and sources that LIGO is really detecting gravitational waves.

The implications go quite a bit further than that, though. You wouldn’t think that just one observation could tell you very much, but this is an observation of an entirely new type, the first time an event has been seen in both gravitational waves and light.

That, it turns out, means that this one observation clears up a whole pile of mysteries in one blow. It shows that at least some gamma ray bursts are caused by colliding neutron stars, that neutron star collisions can give rise to the high-power “kilonovas” capable of forming heavy elements like gold…well, I’m not going to be able to do justice to the full implications in this post. Matt Strassler has a pair of quite detailed posts on the subject, and Quanta Magazine’s article has a really great account of the effort that went into the detection, including coordinating the network of telescopes that made it possible.

I’ll focus here on a few aspects that stood out to me.

One fun part of the story behind this detection was how helpful “failed” observations were. VIRGO (the European gravitational wave experiment) was running alongside LIGO at the time, but VIRGO didn’t see the event (or saw it so faintly it couldn’t be sure it saw it). This was actually useful, because VIRGO has a blind spot, and VIRGO’s non-observation told them the event had to have happened in that blind spot. That narrowed things down considerably, and allowed telescopes to close in on the actual merger. IceCube, the neutrino observatory that is literally a cubic kilometer chunk of Antarctica filled with sensors, also failed to detect the event, and this was also useful: along with evidence from other telescopes, it suggests that the “jet” of particles emitted by the merged neutron stars is tilted away from us.

One thing brought up at LIGO’s announcement was that seeing gravitational waves and electromagnetic light at roughly the same time puts limits on any difference between the speed of light and the speed of gravity. At the time I wondered if this was just a throwaway line, but it turns out a variety of proposed modifications of gravity predict that gravitational waves will travel slower than light. This event rules out many of those models, and tightly constrains others.
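The strength of that constraint comes from simple arithmetic: a delay of seconds after a journey of tens of megaparsecs. A back-of-the-envelope version, with deliberately rounded numbers:

```python
MPC_IN_M = 3.086e22          # metres per megaparsec
C = 2.998e8                  # speed of light, m/s

distance = 40 * MPC_IN_M     # rough distance to the merger
travel_time = distance / C   # roughly 130 million years, in seconds
delay = 1.7                  # seconds between GW and gamma-ray arrival

print(delay / travel_time)   # |v_gw - c|/c of order 1e-16
```

The real published bound is a bit looser, since no one knows exactly when during the merger the gamma rays were emitted, but the order of magnitude is what kills those modified-gravity models.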

The announcement from LIGO was screened at NBI, but they didn’t show the full press release. Instead, they cut to a discussion for local news featuring NBI researchers from the various telescope collaborations that observed the event. Some of this discussion was in Danish, so it was only later that I heard about the possibility of using the simultaneous measurement of gravitational waves and light to measure the expansion of the universe. While this event by itself didn’t result in a very precise measurement, as more collisions are observed the statistics will get better, which will hopefully clear up a discrepancy between two previous measures of the expansion rate.
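The idea behind that measurement is pleasantly direct: the gravitational-wave signal gives a distance, the host galaxy’s light gives a recession velocity, and Hubble’s law relates the two. A toy version with approximate numbers (the redshift and distance below are rough figures for this event, not the collaboration’s careful values):

```python
C_KM_S = 2.998e5    # speed of light in km/s
z = 0.0098          # approximate redshift of the host galaxy
d_mpc = 40          # approximate distance from the GW signal, in Mpc

H0 = C_KM_S * z / d_mpc   # Hubble's law: v = H0 * d, with v ~ c*z
print(H0)                 # lands in the low 70s, in km/s/Mpc
```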

A few news sources made it sound like observing the light from the kilonova has let scientists see directly which heavy elements were produced by the event. That isn’t quite true, as stressed by some of the folks I talked to at NBI. What is true is that the light was consistent with patterns observed in past kilonovas, which are estimated to be powerful enough to produce these heavy elements. However, actually pointing out the lines corresponding to these elements in the spectrum of the event hasn’t been done yet, though it may be possible with further analysis.

A few posts back, I mentioned a group at NBI who had been critical of LIGO’s data analysis and raised doubts of whether they detected gravitational waves at all. There’s not much I can say about this until they’ve commented publicly, but do keep an eye on the arXiv in the next week or two. Despite the optimistic stance I take in the rest of this post, the impression I get from folks here is that things are far from fully resolved.

One, Two, Infinity

Physicists and mathematicians count one, two, infinity.

We start with the simplest case, as a proof of principle. We take a stripped down toy model or simple calculation and show that our idea works. We count “one”, and we publish.

Next, we let things get a bit more complicated. In the next toy model, or the next calculation, new interactions can arise. We figure out how to deal with those new interactions, our count goes from “one” to “two”, and once again we publish.

By this point, hopefully, we understand the pattern. We know what happens in the simplest case, and we know what happens when the different pieces start to interact. If all goes well, that’s enough: we can extrapolate our knowledge to understand not just case “three”, but any case: any model, any calculation. We publish the general case, the general method. We’ve counted one, two, infinity.

Once we’ve counted “infinity”, we don’t have to do any more cases. And so “infinity” becomes the new “zero”, and the next type of calculation you don’t know how to do becomes “one”. It’s like going from addition to multiplication, from multiplication to exponentiation, from exponentials up into the wilds of up-arrow notation. Each time, once you understand the general rules you can jump ahead to an entirely new world with new capabilities…and repeat the same process again, on a new scale. You don’t need to count one, two, three, four, on and on and on.
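Up-arrow notation itself is a nice miniature of the “infinity becomes the new zero” pattern: one arrow is exponentiation, and each extra arrow just iterates the level below. A short recursive sketch:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a (n arrows) b: one arrow is exponentiation,
    and each additional arrow iterates the level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 1, 3))  # 2^3 = 8
print(up_arrow(2, 2, 3))  # 2^(2^2) = 16
print(up_arrow(2, 3, 3))  # three arrows: 65536
```

(Don’t try four arrows unless you have a very long time and a very large computer.)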

Of course, research doesn’t always work out this way. My last few papers counted three, four, five, with six on the way. (One and two were already known.) Unlike the ideal cases that go one, two, infinity, here “two” doesn’t give all the pieces you need to keep going. You need to go a few numbers more to get novel insights. That said, we are thinking about “infinity” now, so look forward to a future post that says something about that.

A lot of frustration in physics comes from situations when “infinity” remains stubbornly out of reach. When people complain about all the models for supersymmetry, or inflation, in some sense they’re complaining about fields that haven’t taken that “infinity” step. One or two models of inflation are nice, but by the time the count reaches ten you start hoping that someone will describe all possible models of inflation in one paper, and see if they can make any predictions from that.

(In particle physics, there’s an extent to which people can actually do this. There are methods to describe all possible modifications of the Standard Model in terms of what sort of effects they can have on observations of known particles. There’s a group at NBI who work on this sort of thing.)

The gold standard, though, is one, two, infinity. Our ability to step back, stop working case-by-case, and move on to the next level is not just a cute trick: it’s a foundation for exponential progress. If we can count one, two, infinity, then there’s nowhere we can’t reach.

Congratulations to Rainer Weiss, Barry Barish, and Kip Thorne!

The Nobel Prize in Physics was announced this week, awarded to Rainer Weiss, Kip Thorne, and Barry Barish for their work on LIGO, the gravitational wave detector.

Many expected the Nobel to go to LIGO last year, but the Nobel committee waited. At the time, it was expected the prize would be awarded to Rainer Weiss, Kip Thorne, and Ronald Drever, the three founders of the LIGO project, but there were advocates for Barry Barish as well. Traditionally, the Nobel is awarded to at most three people, so the argument got fairly heated, with opponents arguing Barish was “just an administrator” and advocates pointing out that he was “just the administrator without whom the project would have been cancelled in the 90’s”.

All of this ended up being irrelevant when Drever died last March. The Nobel isn’t awarded posthumously, so the list of obvious candidates (or at least obvious candidates who worked on LIGO) was down to three, which simplified things considerably for the committee.

LIGO’s work is impressive and clearly Nobel-worthy, but I would be remiss if I didn’t mention that there is some controversy around it. In June, several of my current colleagues at the Niels Bohr Institute uploaded a paper arguing that if you subtract the gravitational wave signal that LIGO claims to have found then the remaining data, the “noise”, is still correlated between LIGO’s two detectors, which it shouldn’t be if it were actually just noise. LIGO hasn’t released an official response yet, but a LIGO postdoc responded with a guest post on Sean Carroll’s blog, and the team at NBI had responses of their own.

I’d usually be fairly skeptical of this kind of argument: it’s easy for an outsider looking at the data from a big experiment like this to miss important technical details that make the collaboration’s analysis work. That said, having seen some conversations between these folks, I’m a bit more sympathetic. LIGO hadn’t been communicating very clearly initially, and it led to a lot of unnecessary confusion on both sides.

One thing that I don’t think has been emphasized enough is that there are two claims LIGO is making: that they detected gravitational waves, and that they detected gravitational waves from black holes of specific masses at a specific distance. The former claim could be supported by the existence of correlated events between the detectors, without many assumptions as to what the signals should look like. The team at NBI seem to have found a correlation of that sort, but I don’t know if they still think the argument in that paper holds given what they’ve said elsewhere.

The second claim, that the waves were from a collision of black holes with specific masses, requires more work. LIGO compares the signal to various models, or “templates”, of black hole events, trying to find one that matches well. This is what the group at NBI subtracts to get the noise contribution. There’s a lot of potential for error in this sort of template-matching. If two templates are quite similar, it may be that the experiment can’t tell the difference between them. At the same time, the individual template predictions have their own sources of uncertainty, coming from numerical simulations and “loops” in particle physics-style calculations. I haven’t yet found a clear explanation from LIGO of how they take these various sources of error into account. It could well be that even if they definitely saw gravitational waves, they don’t actually have clear evidence for the specific black hole masses they claim to have seen.

I’m sure we’ll hear more about this in the coming months, as both groups continue to talk through their disagreement. Hopefully we’ll get a clearer picture of what’s going on. In the meantime, though, Weiss, Barish, and Thorne have accomplished something impressive regardless, and should enjoy their Nobel.

Visiting Uppsala

I’ve been in Uppsala this week, visiting Henrik Johansson’s group.

IMG_20170927_095605609

The Ångström Laboratory here is substantially larger than an ångström, a clear example of false advertising.

As such, I haven’t had time to write a long post about the recent announcement by the LIGO and VIRGO collaborations. Luckily, Matt Strassler has written one of his currently all-too-rare posts on the subject, so if you’re curious you should check out what he has to say.

Looking at the map of black hole collisions in that post, I’m struck by how quickly things have improved. The four old detections are broad slashes across the sky, the newest is a small patch. Now that there are enough detectors to triangulate, all detections will be located that precisely, or better. A future map might be dotted with precise locations of black hole collisions, but it would still be marred by those four slashes: relics of the brief time when only two machines in the world could detect gravitational waves.
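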
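The geometry behind those slashes and patches is simple: a single pair of detectors only measures one arrival-time difference, which confines the source to a ring on the sky; a third detector adds a second ring, and the intersection shrinks to a patch. Here’s a rough sketch of the one-pair constraint (my own illustration, with an approximate Hanford–Livingston baseline and a made-up time delay):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def sky_ring_half_angle(dt, baseline):
    """Angle between the source direction and the detector-pair baseline,
    inferred from the arrival-time difference dt. One detector pair only
    pins the source to a ring (a cone of this half-angle) on the sky."""
    cos_theta = np.clip(C * dt / baseline, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Illustrative numbers: a ~3000 km baseline and a 7 ms time delay
theta = sky_ring_half_angle(7e-3, 3.0e6)  # roughly 45 degrees
```

With only the angle to one baseline known, every direction on that ring is equally consistent with the data, which is why two-detector localizations smear into long arcs.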

When It Rains It Amplitudes


The last few weeks have seen a rain of amplitudes papers on arXiv, including quite a few interesting ones.

rainydays

As well as a fair amount of actual rain in Copenhagen

Over the last year Nima Arkani-Hamed has been talking up four or five really interesting results, and not actually publishing any of them. This has understandably frustrated pretty much everybody. In the last week he published two of them, Cosmological Polytopes and the Wavefunction of the Universe with Paolo Benincasa and Alexander Postnikov and Scattering Amplitudes For All Masses and Spins with Tzu-Chen Huang and Yu-tin Huang. So while I’ll have to wait on the others (I’m particularly looking forward to seeing what he’s been working on with Ellis Yuan) this can at least tide me over.

Cosmological Polytopes and the Wavefunction of the Universe is Nima & co.’s attempt to get a geometrical picture for cosmological correlators, analogous to the Amplituhedron. Cosmological correlators ask questions about the overall behavior of the visible universe: how likely is one clump of matter to be some distance from another? What sorts of patterns might we see in the Cosmic Microwave Background? This is the sort of thing that can be used for “cosmological collider physics”, an idea I mention briefly here.

Paolo Benincasa was visiting Perimeter near the end of my time there, so I got a few chances to chat with him about this. One thing he mentioned, but that didn’t register fully at the time, was Postnikov’s involvement. I had expected that even if Nima and Paolo found something interesting, it wouldn’t lead to particularly deep mathematics. Unlike the N=4 super Yang-Mills theory that generates the Amplituhedron, the theories involved in these cosmological correlators aren’t particularly unique: they’re just a class of models cosmologists use that happen to work well with Nima’s methods. Given that, it’s really surprising that they found something rich enough to draw in Postnikov, a mathematician who was involved in the early days of the Amplituhedron’s predecessor, the Positive Grassmannian. If there’s something mathematically worthwhile in such a seemingly arbitrary theory, then perhaps some of the beauty of the Amplituhedron is much more general than I had thought.

Scattering Amplitudes For All Masses and Spins is on some level a byproduct of Nima and Yu-tin’s investigations of whether string theory is unique. Still, it’s a useful byproduct. Many of the tricks we use in scattering amplitudes are at their best for theories with massless particles. Once the particles have masses our notation gets a lot messier, and we often have to rely on older methods. What Nima, Yu-tin, and Tzu-Chen have done here is to build a notation similar to what we use for massless particles, but for massive ones.

The advantage of doing this isn’t just clean-looking papers: using this notation makes it a lot easier to see what kinds of theories make sense. There are a variety of old theorems that restrict what sorts of theories you can write down: photons can’t interact directly with each other, there can only be one “gravitational force”, particles with spins greater than two shouldn’t be massless, etc. The original theorems were often fairly involved, but for massless particles there were usually nice ways to prove them in modern amplitudes notation. Yu-tin in particular has a lot of experience finding these kinds of proofs. What the new notation does is make these nice simple proofs possible for massive particles as well. For example, you can try to use the new notation to write down an interaction between a massive particle with spin greater than two and gravity, and what you find is that any expression you write breaks down: it works fine at low energies, but once you’re looking at particles with energies much higher than their mass you start predicting probabilities greater than one. This suggests that particles with higher spins shouldn’t be “fundamental”, they should be explained in terms of other particles at higher energies. The only way around this turns out to be an infinite series of particles to cancel problems from the previous ones, the sort of structure that higher vibrations have in string theory. I often don’t appreciate papers that others claim are a pleasure to read, but this one really was a pleasure to read: there’s something viscerally satisfying about seeing so many important constraints manifest so cleanly.
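To get a feel for why massive higher spins cause trouble at high energies, here’s a schematic version of the standard power-counting argument (my own sketch, not the derivation from the paper). A massive spin-1 particle’s longitudinal polarization vector grows with energy:

```latex
\epsilon_L^\mu = \left(\frac{|\vec{p}\,|}{m},\, 0,\, 0,\, \frac{E}{m}\right)
\;\xrightarrow{\; E \gg m \;}\; \frac{p^\mu}{m} + \mathcal{O}\!\left(\frac{m}{E}\right)
```

A spin-$s$ polarization tensor is built out of $s$ such vectors, so each external leg can contribute a factor growing like $(E/m)^s$, and amplitudes pick up powers of $E/m$ that eventually push probabilities past one. Unitarity caps how fast an amplitude can grow, so something has to intervene before $E/m$ gets too large: in practice, that tower of extra canceling states the paper finds, of the kind string theory’s higher vibrations provide.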

I’ve talked before about the difference between planar and non-planar theories. Planar theories end up being simpler, and in the case of N=4 super Yang-Mills this results in powerful symmetries that let us do much more complicated calculations. Non-planar theories are more complicated, but necessary for understanding gravity. Dual Conformal Symmetry, Integration-by-Parts Reduction, Differential Equations and the Nonplanar Sector, a new paper by Zvi Bern, Michael Enciso, Harald Ita, and Mao Zeng, works on bridging the gap between these two worlds.

Most of the paper is concerned with using some of the symmetries of N=4 super Yang-Mills in other, more realistic (but still planar) theories. The idea is that even if those symmetries don’t hold one can still use techniques that respect those symmetries, and those techniques can often be a lot cleaner than techniques that don’t. This is probably the most practically useful part of the paper, but the part I was most curious about is in the last few sections, where they discuss non-planar theories. For a while now I’ve been interested in ways to treat a non-planar theory as if it were planar, to try to leverage the powerful symmetries we have in planar N=4 super Yang-Mills elsewhere. Their trick is surprisingly simple: they just cut the diagram open! Oddly enough, they really do end up with similar symmetries using this method. I still need to read this in more detail to understand its limitations, since deep down it feels like something this simple couldn’t possibly work. Still, if anything like the symmetries of planar N=4 holds in the non-planar case there’s a lot we could do with it.

There are a bunch of other interesting recent papers that I haven’t had time to read. Some look like they might relate to weird properties of N=4 super Yang-Mills, others say interesting things about the interconnected web of theories tied together by their behavior when a particle becomes “soft”. Another presents a method for dealing with elliptic functions, one of the main obstructions to applying my hexagon function technique to more situations. And of course I shouldn’t fail to mention a paper by my colleague Carlos Cardona, applying amplitudes techniques to AdS/CFT. Overall, a lot of interesting stuff in a short span of time. I should probably get back to reading it!

The Multiverse Can Only Kill Physics by Becoming Physics

I’m not a fan of the multiverse. I think it’s over-hyped, way beyond its current scientific support.

But I don’t think it’s going to kill physics.

By “the multiverse” I’m referring to a group of related ideas. There’s the idea that we live in a vast, varied universe, with different physical laws in different regions. Relatedly, there’s the idea that the properties of our region aren’t typical of the universe as a whole, just typical of places where life can exist. It may be that in most of the universe the cosmological constant is enormous, but if life can only exist in places where it is tiny then a tiny cosmological constant is what we’ll see. That sort of logic is called anthropic reasoning. If it seems strange, think about a smaller scale: there are many planets in the universe, but only a small number of them can support life. Still, we shouldn’t be surprised that we live on a planet that can support life: if it couldn’t, we wouldn’t live here!

If we really do live in a multiverse, though, some of what we think of as laws of physics are just due to random chance. Maybe the quarks have the masses they do not for some important reason, but just because they happened to end up that way in our patch of the universe.

This seems to have depressing implications. If the laws of physics are random, or just consequences of where life can exist, then what’s left to discover? Why do experiments at all?

Well, why not ask the geoscientists?

tectonic_plate_boundaries

These guys

We might live in one universe among many, but we definitely live on one planet among many. And somehow, this realization hasn’t killed geoscience.

That’s because knowing we live on a random planet doesn’t actually tell us very much.

Now, I’m not saying you can’t do anthropic reasoning about the Earth. For example, it looks like an active system of plate tectonics is a necessary ingredient for life. Even if plate tectonics is rare, we shouldn’t be surprised to live on a planet that has it.

Ok, so imagine it’s 1900, before Wegener proposed continental drift. Scientists believe there are many planets in the universe, that we live in a “multiplanet”. Could you predict plate tectonics?

Even knowing that we live on one of the few planets that can support life, you don’t know how it supports life. Even living in a “multiplanet”, geoscience isn’t dead. The specifics of our Earth are still going to teach you something important about how planets work.

Physical laws work the same way. I’ve said that the masses of the quarks could be random, but it’s not quite that simple. The underlying reasons why the masses of the quarks are what they are could be random: the specifics of how six extra dimensions happened to curl up in our region of the universe, for example. But there’s important physics in between: the physics of how those random curlings of space give rise to quark masses. There’s a mechanism there, and we can’t just pick one out of a hat or work backwards to it anthropically. We have to actually go out and discover the answer.

Similarly, we don’t know automatically which phenomena are “random”, which are “anthropic”, and which are required by some deep physical principle. Even in a multiverse, we can’t assume that everything comes down to chance, we only know that some things will, much as the geoscientists don’t know what’s unique to Earth and what’s true of every planet without actually going out and checking.

You can even find a notion of “naturalness” here, if you squint. In physics, we find phenomena like the mass of the Higgs “unnatural”: they’re “fine-tuned” in a way that cries out for an explanation. Normally, we think of this in terms of a hypothetical “theory of everything”: the more “fine-tuned” something appears, the harder it would be to explain it in a final theory. In a multiverse, it looks like we’d have to give up on this, because even the most unlikely-looking circumstance would happen somewhere, especially if it’s needed for life.

Once again, though, imagine you’re a geoscientist. Someone suggests a ridiculously fine-tuned explanation for something: perhaps volcanoes only work if they have exactly the right amount of moisture. Even though we live on one planet in a vast universe, you’re still going to look for simpler explanations before you move on to more complicated ones. It’s human nature, and by and large it’s the only way we’re capable of doing science. As physicists, we’ve papered this over with technical definitions of naturalness, but at the end of the day even in a multiverse we’ll still start with less fine-tuned-looking explanations and only accept the fine-tuned ones when the evidence forces us to. It’s just what people do.

The only way for anthropic reasoning to get around this, to really make physics pointless once and for all, is if it actually starts making predictions. If anthropic reasoning in physics can be made much stronger than anthropic reasoning in geoscience (which, as mentioned, didn’t predict tectonic plates until a century after their discovery) then maybe we can imagine getting to a point where it tells us what particles we should expect to discover, and what masses they should have.

At that point, though, anthropic reasoning won’t have made physics pointless: it will have become physics.

If anthropic reasoning is really good enough to make reliable, falsifiable predictions, then we should be ecstatic! I don’t think we’re anywhere near that point, though some people are earnestly trying to get there. But if it really works out, then we’d have a powerful new method to make predictions about the universe.


Ok, so with all of this said, there is one other worry.

Karl Popper criticized Marxism and Freudianism for being unfalsifiable. In both disciplines, there was a tendency to tell what were essentially “just-so stories”. They could “explain” any phenomenon by setting it in their framework and explaining how it came to be “just so”. These explanations didn’t make new predictions, and different people often ended up coming up with different explanations with no way to distinguish between them. They were stories, not scientific hypotheses. In more recent times, the same criticism has been made of evolutionary psychology. In each case the field is accused of being able to justify anything and everything in terms of its overly ambiguous principles, whether dialectical materialism, the unconscious mind, or the ancestral environment.

just_so_stories_kipling_1902

Or an elephant’s ‘satiable curtiosity

You’re probably worried that this could happen to physics. With anthropic reasoning and the multiverse, what’s to stop physicists from just proposing some “anthropic” just-so-story for any evidence we happen to find, no matter what it is? Surely anything could be “required for life” given a vague enough argument.

You’re also probably a bit annoyed that I saved this objection for last. I know that for many people, this is precisely what you mean when you say the multiverse will “kill physics”.

I’ve saved this for last for a reason though. It’s because I want to point out something important: this outcome, that our field degenerates into just-so-stories, isn’t required by the physics of the multiverse. Rather, it’s a matter of sociology.

If we hold anthropic reasoning to the same standards as the rest of physics, then there’s no problem: if an anthropic explanation doesn’t make falsifiable predictions then we ignore it. The problem comes if we start loosening our criteria, start letting people publish just-so-stories instead of real science.

This is a real risk! I don’t want to diminish that. It’s harder than it looks for a productive academic field to fall into bullshit, but just-so-stories are a proven way to get there.

What I want to emphasize is that we’re all together in this. We all want to make sure that physics remains scientific. We all need to be vigilant, to prevent a culture of just-so-stories from growing. Regardless of whether the multiverse is the right picture, and regardless of how many annoying TV specials they make about it in the meantime, that’s the key: keeping physics itself honest. If we can manage that, nothing we discover can kill our field.