Wait, How Do Academics Make Money?

I’ve been working on submitting one of my papers to a journal, which reminded me of the existence of publication fees. That in turn reminded me of a conversation I saw on tumblr a while back:

[Screenshot of a tumblr conversation involving the user beatonna]

“beatonna” here is Kate Beaton, of the history-themed webcomic Hark! a Vagrant. She’s about as academia-adjacent as a non-academic gets, but even she thought that the academic database JSTOR paid academics for their contributions, presumably on some kind of royalty system.

In fact, academics don’t get paid by databases, journals, or anyone else that publishes or hosts our work. In the case of journals, we’re often the ones who pay publication fees. Those who write textbooks get royalties, but that’s about it on that front.

Kate Beaton’s confusion here is part of a more general confusion: in my experience, most people don’t know how academics are paid.

The first assumption is usually that we’re paid to teach. I can’t count the number of times I’ve heard someone respond to someone studying physics or math with the question “Oh, so you’re going to teach?”

This one is at least sort of true. Most academics work at universities, and usually have teaching duties. Often, part of an academic’s salary is explicitly related to teaching.

Still, it’s a bit misleading to think of academics as paid to teach: at a big research university, teaching often doesn’t get much emphasis. The extent to which the quality of teaching determines a professor’s funding or career prospects is often quite minimal. Academics teach, but their job isn’t “teacher”.

From there, the next assumption is the one Kate Beaton made. If academics aren’t paid to teach, are they paid to write?

Academia is often described as publish-or-perish, and research doesn’t really “count” until it’s made it to a journal. It would be reasonable to assume that academics are like writers, paid when someone buys our content. As mentioned, though, that’s just not how it works: if anything, sometimes we are the ones who pay the publishers!

It’s probably more accurate (though still not the full story) to say that academics are paid to research.

Research universities expect professors not only to teach, but to do novel and interesting research. Publications are important not because we get paid to write them, but because they give universities an idea of how productive we are. Promotions and the like, at least at research universities, are mostly based on those sorts of metrics.

Professors get some of their money from their universities, for teaching and research. The rest comes from grants. Usually, these come from governments, though private donors are a longstanding and increasingly important group. In both cases, someone decides that a certain general sort of research ought to be done and solicits applications from people interested in doing it. Different people apply with specific proposals, which are assessed with a wide range of esoteric criteria (but yes publications are important), and some people get funding. That funding includes not just equipment, but contributions to salaries as well. Academics really are, in many cases, paid by grants.

This is really pretty dramatically different from any other job. There’s no “customer” in the normal sense, and even the people in charge of paying us are more concerned that a certain sort of work be done than that they have control over it. It’s completely understandable that the public rounds that off to “teaching” or “writing”. It’s certainly more familiar.

 

A Response from Nielsen, Guffanti and Sarkar

I have been corresponding with Subir Sarkar, one of the authors of the paper I mentioned a few weeks ago arguing that the evidence for cosmic acceleration was much weaker than previously thought. He believes that the criticisms of Rubin and Hayden (linked to in my post) are deeply flawed. Since he and his coauthors haven’t responded publicly to Rubin and Hayden yet, they graciously let me post a summary of their objections.

Dear Matt,

This concerns the discussion on your blog of our recent paper showing that the evidence for cosmic acceleration from supernovae is only 3 sigma. Your obviously annoyed response is in fact to inflated headlines in the media about our work – our paper does just what it says on the can: “Marginal evidence for cosmic acceleration from Type Ia supernovae”. Nevertheless you make a fair assessment of the actual result in our paper and we are grateful for that.

However we feel you are not justified in going on further to state: “In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence”. If you were as expert in cosmology as you evidently are concerning amplitudes you would know that much of the reasoning you allude to is circular. There are also other instances (which we are looking into) of using statistical methods that assume the answer to shore up the ‘standard model’ of cosmology. Does it not worry you that the evidence from supernovae – which is widely believed to be compelling – turns out to be less so when examined closely? There is a danger of confirmation bias in that cosmologists making poor measurements with large systematic uncertainties nevertheless keep finding the ‘right answer’. See e.g. Croft & Dailey (http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1112.3108) who noted “… of the 28 measurements of Omega_Lambda in our sample published since 2003, only 2 are more than 1 sigma from the WMAP results. Wider use of blind analyses in cosmology could help to avoid this”. Unfortunately the situation has not improved in subsequent years.

You are of course entitled to air your personal views on your blog. But please allow us to point out that you are being unfair to us by uncritically stating in the second part of  your sentence: “EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions” in which you link to the arXiv eprint by Rubin & Hayden.

These authors make a claim similar to Riess & Scolnic (https://blogs.scientificamerican.com/guest-blog/no-astronomers-haven-t-decided-dark-energy-is-nonexistent/) that we “assume that the mean properties of supernovae from each of the samples used to measure the expansion history are the same, even though they have been shown to be different and past analyses have accounted for these differences”. In fact we are using exactly the same dataset (called JLA) as Adam Riess and co. have done in their own analysis (Betoule et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1401.4064). They found stronger evidence for acceleration because of using a flawed statistical method (“constrained \chi^2”). The reason why we find weaker evidence is that we use the Maximum Likelihood Estimator – it is not because of making “dodgy assumptions”. We show our results in the same \Omega_\Lambda – \Omega_m plane simply for ease of comparison with the previous result – as seen in the attached plot, the contours move to the right … and now enclose the “no acceleration” line within 3 \sigma. Our analysis is not – as Brian Schmidt tweeted – “at best unorthodox” … even if this too has been uncritically propagated on social media.

In fact the result from our (frequentist) statistical procedure has been confirmed by an independent analysis using a ‘Bayesian Hierarchical Model’ (Shariff et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1510.05954). This is a more sophisticated approach because it does not adopt a Gaussian approximation as we did for the distribution of the light curve parameters (x_1 and c); however, their contours are more ragged because of numerical computation limitations.

[Plot: confidence contours in the \Omega_\Lambda – \Omega_m plane]

Rubin & Hayden do not mention this paper (although bizarrely they ascribe to us the ‘Bayesian Hierarchical Model’). Nevertheless they find more-or-less the same result as us, namely 3.1 sigma evidence for acceleration, using the same dataset as we did (left panel of their Fig.2). They argue however that there are selection effects in this dataset – which have not already been corrected for by the JLA collaboration (which incidentally included Adam Riess, Saul Perlmutter and most other supernova experts in the world). To address this Rubin & Hayden introduce a redshift-dependent prior on the x_1 and c distributions. This increases the significance to 4.2 sigma (right panel of their Fig.2). If such a procedure is indeed valid then it does mark progress in the field, but that does not mean that these authors have “demonstrated errors in (our) analysis” as they state in their Abstract. Their result also begs the question: why has the significance increased so little in going from the initial 50 supernovae, which yielded 3.9 sigma evidence for acceleration (Riess et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:astro-ph/9805201), to the 740 supernovae in JLA? Maybe this is news … at least to anyone interested in cosmology and fundamental physics!

Rubin & Hayden also make the usual criticism that we have ignored evidence from other observations e.g. of baryon acoustic oscillations and the cosmic microwave background. We are of course very aware of these observations but as we say in the paper the interpretation of such data is very model-dependent. For example dark energy has no direct influence on the cosmic microwave background. What is deduced from the data is the spatial curvature (adopting the value of the locally measured Hubble expansion rate H_0) and the fractional matter content of the universe (assuming the primordial fluctuation spectrum to be a close-to-scale-invariant power law). Dark energy is then *assumed* to make up the rest (using the sum rule: 1 = \Omega_m + \Omega_\Lambda for a spatially flat universe as suggested by the data). This need not be correct however if there are in fact other terms that should be added to this sum rule (corresponding to corrections to the Friedmann equation to account e.g. for averaging over inhomogeneities or for non-ideal gas behaviour of the matter content). It is important to emphasise that there is no convincing (i.e. >5 sigma) dynamical evidence for dark energy, e.g. from the late integrated Sachs-Wolfe effect, which induces subtle correlations between the CMB and large-scale structure. Rubin & Hayden even claim in their Abstract (v1) that “The combined analysis of modern cosmological experiments … indicate 75 sigma evidence for positive Omega_\Lambda” – which is surely a joke! Nevertheless this is being faithfully repeated on newsgroups, presumably by those somewhat challenged in their grasp of basic statistics.
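For readers following the argument, the sum rule invoked here is the standard closure relation obtained by dividing the Friedmann equation by H^2:

H^2 = \frac{8\pi G}{3}\rho_m + \frac{\Lambda c^2}{3} - \frac{kc^2}{a^2} \quad\Longrightarrow\quad 1 = \Omega_m + \Omega_\Lambda + \Omega_k,

with \Omega_m = 8\pi G\rho_m/3H^2, \Omega_\Lambda = \Lambda c^2/3H^2 and \Omega_k = -kc^2/a^2H^2. Setting \Omega_k = 0 for a spatially flat universe gives 1 = \Omega_m + \Omega_\Lambda; any extra term in the Friedmann equation would enter the same budget and change what gets attributed to dark energy.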

Apologies for the long post but we would like to explain that the technical criticism of our work by Rubin & Hayden and by Riess & Scolnic is rather disingenuous and it is easy to be misled if you are not an expert. You are entitled to rail against the standards of science journalism but please do not taint us by association.

As a last comment, surely we all want to make progress in cosmology but this will be hard if cosmologists are so keen to cling on to their ‘standard model’ instead of subjecting it to critical tests (as particle physicists continually do to their Standard Model). Moreover the fundamental assumptions of the cosmological model (homogeneity, ideal fluids) have not been tested rigorously (unlike the Standard Model which has been tested at the level of quantum corrections). This is all the more important in cosmology because there is simply no physical explanation for why \Lambda should be of order H_0^2.

Best regards,

 

Jeppe Trøst Nielsen, Alberto Guffanti and Subir Sarkar

 


 

On an unrelated note, Perimeter’s PSI program is now accepting applications for 2017. It’s something I wish I had known about as an undergrad: for those interested in theoretical physics, it can be an enormous jump-start to your career. Here’s their blurb:

Perimeter Scholars International (PSI) is now accepting applications for Perimeter Institute for Theoretical Physics’ unique 10-month Master’s program.

Features of the program include:

  • All student costs (tuition and living) are covered, removing financial and/or geographical barriers to entry
  • Students learn from world-leading theoretical physicists – resident Perimeter researchers and visiting scientists – within the inspiring environment of Perimeter Institute
  • Collaboration is valued over competition; deep understanding and creativity are valued over rote learning and examination
  • PSI recruits worldwide: 85 percent of students come from outside of Canada
  • PSI takes calculated risks, seeking extraordinary talent who may have non-traditional academic backgrounds but have demonstrated exceptional scientific aptitude

Apply online at http://perimeterinstitute.ca/apply.

Applications are due by February 1, 2017.

What If the Field Is Doomed?

Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.

For a long time now, our field has centered around particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.

Plans are being discussed for an even larger collider, in Europe or China, but it’s not clear that either will be funded. Even if the case for new physics at such a collider isn’t as strong, there are properties of the Higgs that the LHC won’t be able to measure, things that are important to check with a more powerful machine.

That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.

By “the field”, here, I don’t just mean those focused on making predictions for collider physics. My work isn’t plugged particularly tightly into the real world, and the same is true of most string theorists. Naively, you’d think it wouldn’t matter to us if a new collider gets built.

The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.

Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.

It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.

If there isn’t a next big collider, our field won’t dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and alter their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working in a zombie field, shambling around in a subject that is already dead but can’t admit it.

[Still from Z Nation: a field full of zombies]

Well I had to use some Halloween imagery

My hope is that this won’t happen. Even if the new colliders don’t get approved and collider physics goes dormant, I’d like to think my colleagues are adaptable enough to stay useful as the world’s demands change. But I’m young in this field, I haven’t seen it face these kinds of challenges before. And so, I worry.

“Maybe” Isn’t News

It’s been published several places, but you’ve probably seen this headline:

[Headline image about the universe’s expansion]

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover that the universe was accelerating back in the 90’s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.
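If you’re curious where numbers like these come from, they’re just the tails of a bell curve. Here’s a rough sketch in Python (using scipy and the one-sided convention; conventions vary a little, so treat the exact numbers as ballpark):

from scipy.stats import norm

# Tail probability for a given significance level, in sigmas:
# the chance of getting evidence at least this strong in a world
# where the effect isn't real.
for sigma in (3, 5):
    p = norm.sf(sigma)  # survival function, 1 - CDF
    print(f"{sigma} sigma: p = {p:.1e}, about 1 in {1/p:,.0f}")

# 5 sigma works out to roughly 1 in 3.5 million;
# 3 sigma to roughly 1 in 700, the same ballpark as "one in a thousand".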

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe was new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, nothing bad about that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance, they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.

Four Gravitons in China

I’m in China this week, at the School and Workshop on Amplitudes in Beijing 2016.

[Photo: dragon statues in Beijing]

It’s a little chilly this time of year, so the dragons have accessorized

A few years back, I mentioned that there didn’t seem to be many amplitudeologists in Asia. That’s changed quite a lot over just the last few years. Song He and Yu-tin Huang went from postdocs in the west to faculty positions in China and Taiwan, respectively, while Bo Feng’s group in China has expanded. As a consequence, there’s now a substantial community here. This is the third “Amplitudes in Asia” conference, with past years meeting in Hong Kong and Taipei.

The “school” part of the conference was last week. I wasn’t here, but the students here seem to have enjoyed it a lot. This week is the “workshop” part, and there have been talks on a variety of parts of amplitudes. Nima showed up on Wednesday and managed to talk for his usual impressively long amount of time, finishing with a public lecture about the future of physics. The talk was ostensibly about why China should build the next big collider, but for the most part it ended up as a more general talk about exciting open questions in high energy physics. The talks were recorded, so they should be online at some point.

Jury-Rigging: The Many Uses of Dropbox

I’ll be behind the Great Firewall of China next week, so I’ve been thinking about various sites I won’t be able to access. Prominent among them is Dropbox, a service that hosts files online.

[Image: the Dropbox logo]

A helpful box to drop things in

What do physicists do with Dropbox? Quite a lot.

For us, Dropbox is a great way to keep collaborations on the same page. By sharing a Dropbox folder, we can share research programs, mathematical expressions, and paper drafts. It makes it a lot easier to keep one consistent version of a document between different people, and it’s a lot simpler than emailing files back and forth.

All that said, Dropbox has its drawbacks. You still need to be careful not to have two people editing the same thing at the same time, lest one overwrite the other’s work. You’ve got the choice between editing in place, so that everyone else gets notifications whenever the files change, and editing in a separate folder, which you then have to be careful to keep coordinated with the shared one.

Programmers will know there are cleaner solutions to these problems. GitHub is designed to share code, and you can work together on a paper with ShareLaTeX. So why do we use Dropbox?

Sometimes, it’s more important for a tool to be easy and universal, even if it doesn’t do everything you want. GitHub and ShareLaTeX might solve some of the problems we have with Dropbox, but they introduce extra work too. Because no one disadvantage of Dropbox takes up too much time, it’s simpler to stick with it than to introduce a variety of new services to fill the same role.

This is the source of a lot of jury-rigging in science. Our projects aren’t often big enough to justify more professional approaches: usually, something hacked together out of what’s available really is the best choice.

For one, it’s why I use WordPress. WordPress.com is not a great platform for professional blogging: it doesn’t give you a lot of control without charging, and surprise updates can make using it confusing. However, it takes a lot less effort than switching to something more professional, and for the moment at least I’m not really in a position that justifies the extra work.

Congratulations to Thouless, Haldane, and Kosterlitz!

I’m traveling this week in sunny California, so I don’t have time for a long post, but I thought I should mention that the 2016 Nobel Prize in Physics has been announced. Instead of going to LIGO, as many had expected, it went to David Thouless, Duncan Haldane, and Michael Kosterlitz. LIGO will have to wait for next year.

Thouless, Haldane, and Kosterlitz are condensed matter theorists. While particle physics studies the world at the smallest scales and astrophysics at the largest, condensed matter physics lives in between, explaining the properties of materials on an everyday scale. This can involve inventing new materials or unusual states of matter, with superconductors probably the best known to the public. Condensed matter gets a lot less press than particle physics, but it’s a much bigger field: overall, the majority of physicists study something under the condensed matter umbrella.

This year’s Nobel isn’t for a single discovery. Rather, it’s for methods developed over the years that introduced topology into condensed matter physics.

Topology often gets described in terms of coffee cups and donuts. In topology, two shapes are the same if you can smoothly change one into another, so a coffee cup and a donut are really the same shape.

[Animation: a coffee mug smoothly morphing into a torus]

Most explanations stop there, which makes it hard to see how topology could be useful for physics. The missing part is that topology studies not just which shapes can smoothly change into each other, but which things, in general, can change smoothly into each other.

That’s important, because in physics most changes are smooth. If two things can’t change smoothly into each other, something special needs to happen to bridge the gap between them.

There are a lot of different sorts of implications this can have. Topology means that some materials can be described by a number that’s conserved no matter what (smooth) changes occur, leading to experiments that see specific “levels” rather than a continuous range of outcomes. It means that certain physical setups can’t change smoothly into other ones, which protects those setups from changing: an idea people are investigating in the quest to build a quantum computer, where extremely delicate quantum states can be disrupted by even the slightest change.
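For a toy picture of that kind of conserved number, think of the winding number of a closed loop around a point: it’s always a whole number, and no smooth deformation can change it unless you drag the loop through the point. Here’s a quick numerical sketch in Python (the loops are just made-up examples):

import numpy as np

def winding_number(xs, ys):
    # Count how many times a closed curve wraps around the origin,
    # by following its polar angle around the loop.
    angles = np.unwrap(np.arctan2(ys, xs))
    return round((angles[-1] - angles[0]) / (2 * np.pi))

t = np.linspace(0, 2 * np.pi, 1000)

# A circle around the origin, and a smoothly wobbled version of it:
print(winding_number(np.cos(t), np.sin(t)))          # 1
r = 1 + 0.3 * np.sin(5 * t)
print(winding_number(r * np.cos(t), r * np.sin(t)))  # still 1
# A loop that never encloses the origin:
print(winding_number(2 + np.cos(t), np.sin(t)))      # 0

However much you wobble the first loop, the answer stays pinned at 1: to change it, something drastic has to happen.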

Overall, topology has been enormously important in physics, and Thouless, Haldane, and Kosterlitz deserve a significant chunk of the credit for bringing it into the spotlight.

Ingredients of a Good Talk

It’s one of the hazards of physics that occasionally we have to attend talks about other people’s sub-fields.

Physics is a pretty heavily specialized field. It’s specialized enough that an otherwise perfectly reasonable talk can be totally incomprehensible to someone just a few sub-fields over.

I went to a talk this week on someone else’s sub-field, and was pleasantly surprised by how much I could follow. I thought I’d say a bit about what made it work.

In my experience, a good talk tells me why I should care, what was done, and what we know now.

Most talks start with a Motivation section, covering the why I should care part. If a talk doesn’t provide any motivation, it’s assuming that everyone finds the point of the research self-evident, and that’s a risky assumption.

Even for talks with a Motivation section, though, there’s a lot of variety. I’ve been to plenty of talks where the motivation presented is very sketchy: “this sort of thing is important in general, so we’re going to calculate one”. While that’s technically a motivation, all it does for an outsider is to tell them which sub-field you’re part of. Ideally, a motivation section does more: for a good talk, the motivation should not only say why you’re doing the work, but what question you’re asking and how your work can answer it.

The bulk of any talk covers what was done, but here there’s also varying quality. Bad talks often make it unclear how much was done by the presenter versus how much was done before. This is important not just to make sure the right people get credit, but because it can be hard to tell how much progress has been made. A good talk makes it clear not only what was done, but why it wasn’t done before. The whole point of a talk is to show off something new, so it should be clear what the new thing is.

If those two parts are done well, it becomes a lot easier to explain what we know now. If you’re clear on what question you were asking and what you did to answer it, then you’ve already framed things in those terms, and the rest is just summarizing. If not, you have to build it up from scratch, ending up with the important information packed into the last few minutes.

This isn’t everything you need for a good talk, but it’s important, and far too many people neglect it. I’ll be giving a few talks next week, and I plan to keep this structure in mind.

The Parable of the Entanglers and the Bootstrappers

There’s been some buzz around a recent Quanta article by K. C. Cole, The Strange Second Life of String Theory. I found it a somewhat simplistic take on the topic, so I thought I’d offer a different one.

String theory has been called the particle physicist’s approach to quantum gravity. Other approaches use the discovery of general relativity as a model: they’re looking for a big conceptual break from older theories. String theory, in contrast, starts out with a technical problem (naive quantum gravity calculations that give infinity), proposes physical objects that could solve the problem (strings, branes), and figures out which theories of these objects are consistent with existing data (originally the five superstring theories, now all understood as parts of M theory).

That approach worked. It didn’t work all the way, because regardless of whether there are indirect tests that can shed light on quantum gravity, particle physics-style tests are far beyond our capabilities. But in some sense, it went as far as it can: we’ve got a potential solution to the problem, and (apart from some controversy about the cosmological constant) it looks consistent with observations. Until actual evidence surfaces, that’s the end of that particular story.

When people talk about the failure of string theory, they’re usually talking about its aspirations as a “theory of everything”. String theory requires the world to have eleven dimensions, with seven curled up small enough that we can’t observe them. Different arrangements of those dimensions lead to different four-dimensional particles. For a time, it was thought that there would be only a few possible arrangements: few enough that people could find the one that describes the world and use it to predict undiscovered particles.

That particular dream didn’t work out. Instead, it became apparent that there were a truly vast number of different arrangements of dimensions, with no unique prediction likely to surface.

By the time I took my first string theory course in grad school, all of this was well established. I was entering a field shaped by these two facts: string theory’s success as a particle-physics style solution to quantum gravity, and its failure as a uniquely predictive theory of everything.

The quirky thing about science: sociologically, success and failure look pretty similar. Either way, it’s time to find a new project.

A colleague of mine recently said that we’re all either entanglers or bootstrappers. It was a joke, based on two massive grants from the Simons Foundation. But it’s also a good way to summarize two different ways string theory has moved on, from its success and from its failure.

The entanglers start from string theory’s success and say, what’s next?

As it turns out, a particle-physics style understanding of quantum gravity doesn’t tell you everything you need to know. Some of the big conceptual questions the more general relativity-esque approaches were interested in are still worth asking. Luckily, string theory provides tools to answer them.

Many of those answers come from AdS/CFT, the discovery that string theory in a particular warped space-time is dual (secretly the same theory) to a more particle-physics style theory on the edge of that space-time. With that discovery, people could start understanding properties of gravity in terms of properties of particle-physics style theories. They could use concepts like information, complexity, and quantum entanglement (hence “entanglers”) to ask deeper questions about the structure of space-time and the nature of black holes.

The bootstrappers, meanwhile, start from string theory’s failure and ask, what can we do with it?

Twisting up the dimensions of string theory yields a vast number of different arrangements of particles. Rather than viewing this as a problem, why not draw on it as a resource?

“Bootstrappers” explore this space of particle-physics style theories, using ones with interesting properties to find powerful calculation tricks. The name comes from the conformal bootstrap, a technique that finds conformal theories (roughly: theories that are the same at every scale) by “pulling itself up by its own bootstraps”, using nothing but a kind of self-consistency.

Many accounts, including Cole’s, trace people like the bootstrappers back to AdS/CFT as well, crediting it with inspiring string theorists to take a closer look at particle physics-style theories. That may be true in some cases, but I don’t think it’s the whole story: my subfield is bootstrappy, and while it has drawn on AdS/CFT, that wasn’t what got it started. Overall, I think it’s more the case that the tools of string theory’s “particle physics-esque approach”, like conformal theories and supersymmetry, ended up (perhaps unsurprisingly) useful for understanding particle physics-style theories.

Not everyone is a “bootstrapper” or an “entangler”, even in the broad sense I’m using the words. The two groups also sometimes overlap. Nevertheless, it’s a good way to think about what string theorists are doing these days. Both of these groups start out learning string theory: it’s the only way to learn about AdS/CFT, and it introduces the bootstrappers to a bunch of powerful particle physics tools all in one course. Where they go from there varies, and can be more or less “stringy”. But it’s research that wouldn’t have existed without string theory to get it started.

So You Want to Prove String Theory, Part II: How Can QCD Be a String Theory?

A couple weeks back, I had a post about Nima Arkani-Hamed’s talk at Strings 2016. Nima and his collaborators were trying to find what sorts of scattering amplitudes (formulas that calculate the chance that particles scatter off each other) are allowed in a theory of quantum gravity. Their goal was to show that, with certain assumptions, string theory gives the only consistent answer.

At the time, my old advisor Michael Douglas suggested that I might find Zohar Komargodski’s talk more interesting. Now that I’ve finally gotten around to watching it, I agree. The story is cleaner, more conclusive…and it gives me an excuse to say something else I’ve been meaning to talk about.

Zohar Komargodski has a track record of deriving interesting results that are true not just for the sorts of toy models we like to work with but for realistic theories as well. He’s collaborating with amplitudes miracle-worker Simon Caron-Huot (who I’ve collaborated with recently), Amit Sever (one of the integrability wizards who came up with the POPE program) and Alexander Zhiboedov, whose name seems to show up all over the place. Overall, the team is 100% hot young talent, which tends to be a recipe for success.

While Nima’s calculation focuses on gravity, Zohar and company are asking a broader question. They’re looking at any theory with particles of high spin and nonzero mass. Like Nima, they’re looking at scattering amplitudes, in the limit that the forces involved are weak. Unlike Nima, they’re focusing on a particular limit: rather than trying to fix the full form of the amplitude, they’re interested in how it behaves for extreme, unphysical values for the particles’ momenta. Despite being unphysical, this limit can reveal something about how the theory works.

What they figured out is that, for the sorts of theories they’re looking at, the amplitude has to take a particular form in their unphysical limit. In particular, it takes a form that indicates the presence of strings.

What sort of theories are they looking at? What theories have “particles of high spin and nonzero mass”? Well, some are string theories. Others are Yang-Mills theories … theories similar to QCD.

For the experts, I encourage you to watch Zohar’s talk or read the paper for more detail. It’s a fun story that showcases how very general constraints on scattering amplitudes can translate into quite specific statements.

For the non-experts, though, there’s something that may already be confusing. When I’ve talked about Yang-Mills theories before, I’ve talked about them in terms of particles of spin 1. Where did these “higher spin” particles come from? And where are the strings? How can there be strings in a theory that I’ve described as “similar to QCD”?

If I just stuck to the higher spin particles, things could almost stay familiar. The fundamental particles of Yang-Mills theories have spin 1, but these particles can combine into composite particles, which can have higher spin and higher mass. That should be intuitive: in some sense, it’s just like protons, neutrons, and electrons combining to form atoms.

What about the strings? I’ve actually talked about that before, but I’d like to try out a new analogy. Have you ever heard of Conway’s Game of Life?

[Image: the Game of Life board game]

Not this one!

[Animation: Gosper’s glider gun, in Conway’s Game of Life]

This one!

Conway’s Game of Life starts with a grid of black and white squares, and evolves in steps, with each square’s color determined by the color of adjacent squares in the last step. “Fundamentally”, the game is just those rules. In practice, though, structure can emerge: a zoo of self-propagating creatures that dance across the screen.
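If you’ve never seen it in action, the whole game fits in a few lines. Here’s a minimal sketch in Python (the starting coordinates are just one standard “glider” pattern):

from collections import Counter

def step(alive):
    # alive is a set of (x, y) live cells; return the next generation.
    # A cell is alive next step if it has exactly three live neighbours,
    # or if it is alive now and has exactly two.
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "glider": nothing in the rules mentions it, but run them and this
# little pattern crawls diagonally across the grid forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally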

The strings that can show up in Yang-Mills theories are like this. They aren’t introduced directly in the definition of the theory. Instead, they’re consequences: structures that form when you let the rules evolve and see what they create. They’re another description of the theory, one with its own advantages.

When I tell people I’m a theoretical physicist, they inevitably ask me “Have any of your theories been tested?” They’re operating from one idea of what a theoretical physicist does: propose new theories to describe the world, based on available evidence. Lots of theorists do that, they’re called phenomenologists, but it’s not what I do, or what most theorists I interact with day-to-day do.

So I describe what I do, how I test new mathematical techniques to make particle physics calculations faster. And in general, that’s pretty easy for people to understand. Just as they can imagine people out there testing theories, they can imagine people who work to support the others, making tools to make their work easier. But while that’s what I do, it’s not the best description of what most of my colleagues do.

What most theorists I know do is like finding new animals in Conway’s game of life. They start with theories for which we know the rules: well-tested theories like QCD, or well-studied proposals like string theory. They ask themselves, not how they can change the rules, but what results the rules have. They look for structures, and in doing so find new perspectives, learning to see the animals that live on Conway’s black and white grid. (This is something I’ve gestured at before, but this seems like a cleaner framing.)

Doing that, theorists have seen strings in the structure of QCD-like theories. And now Zohar and collaborators have a clean argument that the structures others have seen should show up, not only there, but in a broader class of theories.

This isn’t about whether the world is fundamentally described by string theory, ten dimensions and all. That’s an entirely different topic. What it is is a question about what sorts of structures emerge when we try to describe the world. What it does is show that strings are, in some sense (and, as for Nima, with some conditions) inevitable, that they come out of our rules even if we don’t expect them to.