The Hardest Audience Knows Just Enough to Be Dangerous

You’d think that it would be hard to explain physics to people who know absolutely nothing about physics.

And you might be right, if there were anyone these days who knew absolutely nothing about physics. If someone didn’t know what atoms were, or didn’t know what a physicist was, then yes, it would take quite a while to explain anything more than the basics. But most people know what atoms are, and know what physicists are, and at least have a basic idea that there are things called protons and neutrons and electrons.

And that’s often enough. Starting with a basis like that, I can talk people through the Large Hadron Collider, I can get them to picture Feynman Diagrams, I can explain, roughly, what it is I do.

On the other end, it’s not all that hard to explain what I do to people in my sub-field. Working on the same type of physics is like sharing a language: we have all sorts of terms that make explaining easier. While it’s still possible to trip up and explain too much or too little (a recent talk I gave left out the one part that one member of the audience needed…because everyone else would have gotten nothing out of it), you’re protected by a buffer of mutual understanding.

The hardest talks aren’t for the public, and they aren’t for fellow amplitudes-researchers. They’re for a general physics audience.

If you’re talking to physicists, you can’t start with protons and neutrons. Do that, and your audience is going to get annoyed with you rather quickly. You can’t rely on the common understanding everyone has of physics. In addition to making your audience feel like they’re being talked down to, you won’t manage to say anything substantial. You need to start at a higher level so that when you do describe what you do, it’s in enough detail that your audience feels like they really understand it.

At the same time, you can’t start with the jargon of your sub-field. If you want to really explain something (and not just have fifteen minutes of background before everyone tunes out) you need to build off of a common understanding.

The tricky part is, that “common understanding” is more elusive than you might think. For example, pretty much every physicist has some familiarity with Quantum Field Theory…but that can mean anything from “uses it every day” to “saw it a couple times back in grad school”. Too much background, and half your audience is bored. Too little, and half your audience is lost. You have to strike the proper balance, trying to show everyone enough to feel satisfied.

There are tricks to make this easier. I’ve noticed that some of the best speakers begin with a clever and unique take on something everyone understands. That way, people in very different fields will still have something they recognize, while people in the same field will still be seeing something new. Of course, the tricky part is coming up with a new example in the first place!

In general, I need to get better at estimating where my audience is. Talking to you guys is fun, but I ought to also practice a “physics voice” for discussions with physicists (as well as grants and applications), and an “amplitudes voice” for fellow specialists. The key to communication, as always, is knowing your audience.

A Nobel for Blue LEDs, or, How Does That Count as Physics?

When I first heard about this year’s Nobel Prize in Physics, I didn’t feel the need to post on it. The prize went to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, whose discoveries enabled blue LEDs. It’s a more impressive accomplishment than it might seem: while red LEDs have been around since the ’60s and ’70s, blue LEDs were only developed in the ’90s, and only with both can highly efficient, LED-based white light sources be made. Still, I didn’t consider posting on it because it’s pretty much entirely outside my field.


Shiny, though

It took a conversation with another PI postdoc to point out one way I can comment on the Nobel, and it started when we tried to figure out what type of physicists Akasaki, Amano, and Nakamura are. After tossing around terms like “device physicist” and “condensed matter”, someone wondered whether the development of blue LEDs wasn’t really a matter of engineering.

At that point I realized: I’ve talked about something like this before.

Physicists work on lots of different things, and many of them don’t seem to have much to do with physics. They study geometry and topology, biological molecules and the nature of evolution, income inequality and, yes, engineering.

On the surface, these don’t have much to do with physics. A friend of mine used to quip that condensed matter physicists seem to just “pick whatever they want to research”.

There is something that ties all of these topics together, though. They’re all things that physicists are good at.

Physics grad school gives you a wide variety of tools with which to understand the world. Thermodynamics gives you a way to understand large, complicated systems with statistics, while quantum field theory lets you understand everything with quantum properties, not just fundamental particles but materials as well. This batch of tools can be applied to “traditional” topics, but they’re equally applicable if you’re researching something else entirely, as long as it obeys the right kinds of rules.

In the end, the best definition of physics is the most useful one. Physicists should be people who can benefit from being part of physics organizations, from reading physics journals, and especially from training (and having been) physics grad students. The whole reason we have scientific disciplines in the first place is to make it easier for people with common interests to work together. That’s why Akasaki, Amano, and Nakamura aren’t “just” engineers, and why I and my fellow string theorists aren’t “just” mathematicians. We use our knowledge of physics to do our jobs, and that, more than anything else, makes us physicists.


Edit: It has been pointed out to me that there’s a bit more to this story than the main accounts have let on. Apparently another researcher named Herbert Paul Maruska was quite close to getting a blue LED up and running back in the early 1970’s, getting far enough to have a working prototype. There’s a whole fascinating story about the quest for a blue LED, related here. Maruska seems to be on friendly terms with Akasaki, Amano, and Nakamura, and doesn’t begrudge them their recognition.

Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably these programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…

I’ll just leave this here.

Love It or Hate It, Don’t Fear the Multiverse

“In an infinite universe, anything is possible.”

A nice maxim for science fiction, perhaps. But it probably doesn’t sound like productive science.

A growing number of high profile scientists and science popularizers have come out in favor of the idea that there may exist a “multiverse” of multiple universes, and that this might explain some of the unusual properties of our universe. If there are multiple universes, each with different physical laws, then we must exist in one of the universes with laws capable of supporting us, no matter how rare or unlikely such a universe is. This sort of argument is called anthropic reasoning.

(If you’re picky about definitions and don’t like the idea of more than one universe, think instead of a large universe with many different regions, each one separated from the others. There are some decent physics-based reasons to suppose we live in such a universe.)

Not to mention continuity reasons.

Why is anyone in favor of this idea? It all goes back to the Higgs.

The Higgs field interacts with other particles, giving them mass. What most people don’t mention is that the effect, in some sense, goes both ways. Because the Higgs interacts with other particles, the mass of the Higgs is also altered. This alteration is large, much larger than the observed mass of the Higgs. (In fact, in a sense it’s infinite!)

In order for the Higgs to have the mass we observe, then, something has to cancel out these large corrections. That cancellation can either be a coincidence, or there can be a reason for it.
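To get a rough feel for why this near-perfect cancellation makes physicists uncomfortable, here’s a toy numerical sketch. The scales below are invented for illustration (the actual size of the corrections depends on details like the cutoff scale, and a proper treatment involves renormalization); the point is only how precisely two huge numbers must agree to leave the small observed value behind.

```python
# Toy fine-tuning illustration. The numbers are invented; only the
# orders of magnitude matter for the argument.
observed_mass_sq = 125.0 ** 2   # observed Higgs mass, ~125 GeV, squared
correction = 1e16 ** 2          # hypothetical enormous quantum correction

# For the observed mass to come out right, some "bare" contribution
# must cancel the correction almost exactly:
bare_contribution = observed_mass_sq - correction

# The leftover is a tiny fraction of either large number:
leftover_fraction = observed_mass_sq / correction
print(f"{leftover_fraction:.1e}")  # ~1.6e-28
```

In other words, with these made-up scales the two contributions would have to agree to roughly 28 decimal places, and that is the kind of “coincidence” the Naturalness Problem refers to.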

The trouble is, we’re running out of good reasons. One of the best was supersymmetry, the idea that each particle has a partner with tightly related properties. But if supersymmetry was going to save the day, we probably would have detected some of those partners at the Large Hadron Collider by now. More generally, it can be argued that almost all possible “good reasons” require some new particle to be found at the LHC.

If there are no good reasons, then we’re stuck with a coincidence. (This is often referred to as the Naturalness Problem in particle physics.) And it’s this uncomfortable coincidence that has driven prominent physicists to the arms of the multiverse.

There’s a substantial backlash, though. Many people view the multiverse as a cop-out. Some believe it to be even more toxic than that: if there’s a near-infinite number of possible universes then in principle any unusual feature of our universe could be explained by anthropic reasoning, which sounds like it could lead to the end of physics as we know it.

You can disdain the multiverse as a cop-out, but, as I’ll argue here, you shouldn’t fear it. Those who think the multiverse will destroy physics are fundamentally misunderstanding the way physics research works.

The key thing to keep in mind is that almost nobody out there prefers the multiverse. When a prominent physicist supports the multiverse, that doesn’t mean they’re putting aside productive work on other solutions to the problem. In general, it means they don’t have other solutions to the problem. Supporting the multiverse isn’t going to stop them from having ideas: those are ideas they didn’t have to begin with.

And indeed, many of these people are quite supportive of alternatives to the multiverse. I’ve seen Nima Arkani-Hamed talk about the multiverse, and he generally lists a number of other approaches (some quite esoteric!) that he has worked (and failed to make progress) on, and encourages the audience to look into them.

Physics isn’t a zero-sum game, nor is it ruled by a few prominent people. If a young person has a good idea about how to explain something without the multiverse, they’re going to have all the support and recognition that such an idea deserves.

What the multiverse adds is another track, another potentially worthwhile line of research. Surprising as it may seem, the multiverse doesn’t automatically answer every question. It might not even answer the question of the mass of the Higgs! All that the existence of a multiverse tells us is that we should exist somewhere where intelligent life could exist…but if intelligent life is more likely to exist in a universe very different from ours, then we’re back to square one. There’s a lot of research involved in figuring out just what the multiverse implies, research by people who wouldn’t have been working on this sort of problem if the idea of the multiverse hadn’t been proposed.

That’s the key take-away message here. The multiverse may be wrong, but just considering it isn’t going to destroy physics. Rather, it’s opened up new avenues of research, widening the community of those trying to solve the Naturalness Problem. It may well be a cop-out for individuals, but science as a whole doesn’t have cop-outs: there’s always room for someone with a good idea to sweep away the cobwebs and move things forward.

(Interstellar) Dust In The Wind…

The news has hit the blogosphere: the team behind the Planck satellite has released new dust measurements, and they seem to be a nail in the coffin of BICEP2’s observation of primordial gravitational waves.

Some background for those who haven’t been following the story:

BICEP2, a telescope in Antarctica, is set up to observe the Cosmic Microwave Background, light left over from the very early universe. Back in March, they announced that they had seen characteristic ripples in that light, ripples that they believed were caused by gravitational waves in the early universe. By comparing the size of these gravitational waves to their (quantum-small) size when they were created, they could make statements about the exponential expansion of the early universe (called inflation). This amounted to better (and more specific) evidence about inflation than anyone else had ever found, so naturally people were very excited about it.

However, doubt was rather quickly cast on these exciting results. Like any experimental collaboration, BICEP2 needed to estimate the chance that their observations could be caused by something more mundane. In particular, interstellar dust can cause similar “ripples” to those they observed. They argued that dust would have contributed a much smaller effect, so their “ripples” must be the real deal…but to make this argument, they needed an estimate of how much dust they should have seen. They had several estimates, but one in particular was based on data “scraped” off of a slide from a talk by the Planck collaboration.

Unfortunately, it seems that the BICEP2 team misinterpreted this “scraped” data. Now, Planck have released the actual data, and it seems like dust could account for BICEP2’s entire signal.

I say “could” because more information is needed before we know for sure. The BICEP2 and Planck teams are working together now, trying to tease out whether BICEP2’s observations are entirely dust, or whether there might still be something left.

I know I’m not the only person who wishes that this sort of collaboration could have happened before BICEP2 announced their discovery to the world. If Planck had freely shared their early data with BICEP2, they would have had accurate dust estimates to begin with, and they wouldn’t have announced all of this prematurely.

Of course, expecting groups to freely share data when Nobel prizes and billion-dollar experiments are on the line is pretty absurdly naive. I just wish we lived in a world where none of this was at issue, where careers didn’t ride on “who got there first”.

I’ve got no idea how to bring about such a world, of course. Any suggestions?

So the Higgs is like, everywhere, right?

When I tell people I do particle physics, they generally jump to the first thing they’ve heard of, the Higgs boson. Unfortunately, what most people have heard about the Higgs boson is misleading.

The problem is the “crowded room” metaphor, a frequent favorite of people trying to describe the Higgs. The story goes that the Higgs works like trying to walk through a crowded room: an interesting person (massive particle) will find that the crowd clusters around them, so it becomes harder to make progress, while a less interesting person (less massive or massless particle) will have an easier time traveling through the crowd.

This metaphor gives people the impression that each of us is surrounded by an invisible sea of particles, like an invisible crowd constantly jostling us.

I see Higgs people!

People get very impressed by the idea of some invisible, newly discovered stuff that extends everywhere and surrounds everything. The thing is, this really isn’t the unique part of the Higgs. In fact, every fundamental particle works like this!

In physics, we describe the behavior of fundamental particles (like the Higgs, but also everything from electrons to photons) with a framework called Quantum Field Theory. In Quantum Field Theory, each particle has a corresponding field, and each field extends everywhere, over all space and time. There’s an electron field, and the electron field is absolutely everywhere. The catch is, most of the time, most of these fields are at zero. The electron field tells you that there are zero electrons in a generic region of space.

Particles are ripples in these fields. If the electron field wobbles a bit higher than normal somewhere, that means there’s an electron there. If it wobbles a bit lower than normal instead, then it’s an anti-electron. (Note: this is a very fast-and-loose way to describe how antimatter works, don’t take it for more than it’s worth.)

When the Higgs field ripples, you get a Higgs particle, the one discovered at the LHC. The “crowd” surrounding us isn’t these ripples (which are rare and hard to create), but the field itself, which surrounds us in the same way every other field does.

With all that said, there is a difference between the Higgs field and other fields. The Higgs field is the only field we’ve discovered (so far) that isn’t usually zero. This is because the Higgs is the only field we’ve discovered that is allowed to be something other than zero.

Symmetry is a fundamental principle in physics. At its simplest, symmetry is the idea that nothing should be special for no good reason. One consequence is that there are no special directions. Up, down, right, left, the laws of physics don’t care which one you choose. Only the presence of some object (like the Earth) can make differences like up versus down relevant.

What does that have to do with fields?

Think about a magnetic field. A magnetic field pulls in a specific direction.

So far, so good…

Now imagine a magnetic field everywhere. Which way would it point? If it was curved like the one in the picture, what would it be curved around?

There isn’t a good choice. Any choice would single out one direction, making it special. But nothing should be special for no good reason, and unless there was an object out there releasing this huge magnetic field there would be no good reason for it to be pointed that way. Because of that, the default value of the magnetic field over all space has to be zero.

You can make a similar argument for fields like the electron field. It’s even harder to imagine a way for electrons to be everywhere and not pick some “special” direction.

The Higgs, though, is special. The Higgs is what’s known as a scalar field. That means that it doesn’t have a direction. At any specific point it’s just a number, a scalar quantity. The Higgs doesn’t have to be zero everywhere because even if it isn’t, no special direction is singled out. One metaphor I’ve used before is colored construction paper: the paper can be blue or red, and either way it will still be empty until someone draws on it.

A bit less exciting than ghosts, huh?

The Higgs is special because it’s the first fundamental scalar field we’ve been able to detect, but there are probably others. Most explanations of cosmic inflation, for example, rely on one or more new scalar fields. (Just as the mass of a fundamental particle is a single number, the rate at which the universe inflates is a single number, and can likewise be described by a scalar field.) The Higgs isn’t special just because it’s “everywhere”, and imagining it as a bunch of invisible particles careening about around you isn’t going to get you anywhere useful.

Now, if you find the idea of being surrounded by invisible particles interesting, you really ought to read up on neutrinos….

No, Hawking didn’t say that a particle collider could destroy the universe

So apparently Hawking says that the Higgs could destroy the universe.


I’ve covered this already, right? No need to say anything more?

Ok, fine, I’ll write a real blog post.

The Higgs is a scalar field: a number, sort of like temperature, that can vary across space and time. In the case of the Higgs this number determines the mass of almost every fundamental particle (the jury is still somewhat out on neutrinos). The Higgs doesn’t vary much at all, in fact it takes an enormous (Large Hadron Collider-sized) amount of energy to get it to wobble even a little bit. That is because the Higgs is in a very very stable state.

Hawking was pointing out that, given our current model of the Higgs, there’s actually another possible state for the Higgs to be in, one that’s even more stable (because it takes less energy, essentially). In that state, the number the Higgs corresponds to is much larger, so everything would be much more massive, with potentially catastrophic results. (Matt Strassler goes into some detail about the assumptions behind this.)

For those who have been following my blog for a while, you may find these “stable states” familiar. They’re vacua, different possible ways to set up “empty” space. In that post, I may have given the impression that there’s no way to change from one stable state, one “vacuum”, to another. In the case of the Higgs, the state it’s in is so stable that vast amounts of energy (again, a Large Hadron Collider-worth) only serve to create a small, unstable fluctuation, the Higgs boson, which vanishes in a fraction of a second.

And that would be the full story, were it not for a curious phenomenon called quantum tunneling.

If you’ve heard someone else describe quantum tunneling, you’ve probably heard that quantum particles placed on one side of a wall have a very small chance of being found later on the other side of the wall, as if they had tunneled there.

Using their incredibly tiny shovels.

However, quantum tunneling applies to much more than just walls. In general, a particle in an otherwise stable state (whether stable because there are walls keeping it in place, or for other reasons) can tunnel into another state, provided that the new state is “more stable” (has lower energy).

The chance of doing this is small, and it gets smaller the more “stable” the particle’s initial state is. Still, if you apply that logic to the Higgs, you realize there’s a very very very small chance that one day the Higgs could just “tunnel” away from its current stable state, destroying the universe as we know it in the process.
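As a rough illustration of how sensitively that chance depends on stability, here’s a toy WKB-style estimate of a single particle tunneling through a rectangular barrier, in units where ħ and the mass are 1. All the numbers are invented, and this one-particle toy is far simpler than true vacuum decay; it only shows the trend that a taller barrier (a more stable starting state) makes the tunneling probability shrink drastically.

```python
import math

def tunneling_probability(barrier_height, energy, width):
    """Leading WKB estimate, exp(-2*kappa*width), for a rectangular
    barrier in units where hbar = m = 1. Toy model only."""
    kappa = math.sqrt(2.0 * (barrier_height - energy))
    return math.exp(-2.0 * kappa * width)

# A particle just below a shallow barrier vs. one deep in a stable state:
shallow = tunneling_probability(barrier_height=2.0, energy=1.0, width=1.0)
deep = tunneling_probability(barrier_height=50.0, energy=1.0, width=1.0)

print(shallow)  # ~0.059
print(deep)     # ~2.5e-9: far smaller for the more stable state
```

Scaling the barrier up by a factor of 25 didn’t cut the probability by 25; it cut it by seven orders of magnitude, which is why the Higgs, sitting in an extremely stable state, isn’t expected to tunnel for an absurdly long time.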

If that happened, everything we know would vanish at the speed of light, and we wouldn’t see it coming.

While that may sound scary, it’s also absurdly unlikely, to the extent that it probably won’t happen until the universe is many times older than it is now. It’s not the sort of thing anybody should worry about, at least on a personal level.

Is Hawking fear-mongering, then, by pointing this out? Hardly. He’s just explaining science. Pointing out the possibility that the Higgs could spontaneously change and end the universe is a great way to emphasize the sheer scale of physics, and it’s pretty common for science communicators to mention it. I seem to recall a section about it in Particle Fever, and Sean Carroll even argues that it’s a good thing, due to killing off spooky Boltzmann Brains.

What do particle colliders have to do with all this? Well, apart from quantum tunneling, just inputting enough energy in the right way can cause a transition from one stable state to another. Here “enough energy” means about a million times that produced by the Large Hadron Collider. As Hawking jokes, you’d need a particle collider the size of the Earth to get this effect. I don’t know whether he actually ran the numbers, but if anything I’d guess that a Large Earth Collider would actually be insufficient.

Either way, Hawking is just doing standard science popularization, which isn’t exactly newsworthy. Once again, “interpret something Hawking said in the most ridiculous way possible” seems to be the replacement du jour for good science writing.

The Near and the Far: Motivations for Physics

When I introduce myself, I often describe my job like this:

“I develop mathematical tools to make calculations in particle physics easier and more efficient.”

However, I could equally well describe my job like this:

“I’m looking for a radical new way to reformulate particle physics in order to solve fundamental problems in space and time.”

These may sound very different, but they’re both correct. That’s because in theoretical physics, like in many branches of science, we have two types of goals: near-term and far-term.

In the near-term, I develop mathematical tools and tricks, which let me calculate things I (and others) couldn’t calculate before. Pushing the tricks to their limits gives me more proficiency, making the tools I develop more robust. In the future, I can imagine applying the tools to more types of calculations, and specifically to more “important” calculations.

All of that still involves relatively near-term goals, though. Develop a new trick, and you can already envision what it might be used for. The far-term goals are generally deeper.

End of the road, not just the next tree.

In the far term, the new techniques that I and others develop might lead to fundamentally new ways to understand particle physics. That’s because a central feature of most of the tricks we develop is that they rephrase the calculation in a way that leaves out something that used to be thought of as fundamental. They’re “revolutions”, overthrowing some basic principle of how we do things. The hope is that the right “revolution” will help us solve problems that our current understanding of physics seems incapable of solving.

Most scientists have both sorts of goals. Someone who studies quantum mechanics might talk about developing a quantum computer, but in the near-term be interested in perfecting some algorithm. A biologist might study how information is stored in a cell, but introduce themselves as someone trying to cure cancer.

For some people, the far-term goals are a big component of how they view themselves. Nima Arkani-Hamed, for example, has joked that believing that “spacetime is doomed” is what allows him to get out of bed in the morning. (For a transcript of the relevant parts, see here.) There are plenty of others with similar perspectives, people who need a “big” goal to feel motivated.

Myself, I find it harder to identify with these kinds of goals, because the payoff is so uncertain. Rephrasing particle physics in a new way might be the solution to a fundamental problem…but it could also just be another way to say the same thing. There’s no guarantee that any one project will be that one magical solution. In contrast, for me, near-term goals are something I can feel confident I’m making real progress on. I can envision each step along the way, and see the part my work plays in a larger picture, led along by the satisfaction of solving each puzzle as it comes.

Neither way is better than the other, and both are important parts of science. Some people do better with one, some do better with the other, and in the end, everyone can view themselves as accomplishing something they care about.

What’s an Amplitude? Just about everything.

I am an Amplitudeologist. In other words, I study scattering amplitudes. I’ve explained bits and pieces of what scattering amplitudes are in other posts, but I ought to give a short definition here so everyone’s on the same page:

A scattering amplitude is the formula used to calculate the probability that some collection of particles will “scatter”, emerging as some (possibly different) collection of particles.

Note that I’m using some weasel words here. The scattering amplitude is not a probability itself, but “the formula used to calculate the probability”. For those familiar with the mathematics of waves, the scattering amplitude gives the amplitude of a “probability wave” that must be squared to get the probability. (Those familiar with waves might also ask: “If this is the amplitude, what about the phase?” The truth is that because scattering amplitudes are calculated using complex numbers, what we call the “amplitude” also contains the wave’s phase information. It may seem like an inconsistent way to name things from the perspective of a beginning student, but it is actually consistent with the terminology in a large chunk of physics.)
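To make the “squared to get the probability” step concrete, here’s a minimal Python sketch. The amplitude value is made up purely for illustration (real amplitudes come out of a full quantum field theory calculation); the point is that one complex number carries both the probability and the extra phase information.

```python
import cmath

# A scattering amplitude is a complex number. The observable
# probability is its squared magnitude; the complex phase is the
# extra "wave" information a bare probability would lose.
amplitude = 0.6 + 0.8j                # hypothetical value, for illustration only

probability = abs(amplitude) ** 2     # |0.6 + 0.8i|^2 = 0.36 + 0.64 = 1.0
phase = cmath.phase(amplitude)        # phase angle in radians, ~0.927

print(probability)
```

Two amplitudes with the same magnitude but different phases give the same probability on their own, yet can interfere when added, which is why physicists work with the amplitude rather than the probability directly.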

In some of the simplest scattering amplitudes particles literally “scatter”, with two particles “colliding” and emerging traveling in different directions.

A scattering amplitude can also describe a more complicated situation, though. At particle colliders like the Large Hadron Collider, two particles (a pair of protons for the LHC) are accelerated fast enough that when they collide they release a whole slew of new particles. Since it still fits the “some particles go in, some particles go out” template, this is still described by a scattering amplitude.

It goes even further than that, though, because “some particles” could also just be “one particle”. If you’re dealing with something unstable (the particle equivalent of radioactive, essentially) then one particle can decay into two or more particles. There’s a whole slew of questions that require that sort of calculation. For example, if unstable particles were produced in the early universe, how many of them would be left around today? If dark matter is unstable (and some possible candidates are), when it decays it might release particles we could detect. In general, this sort of scattering amplitude is often of interest to astrophysicists when they happen to get involved in particle physics.

You can even use scattering amplitudes to describe situations that, at first glance, don’t sound like collisions of particles at all. If you want to find the effect of a magnetic field on an electron to high accuracy, the calculation also involves a scattering amplitude. A magnetic field can be thought of in terms of photons, particles of light, because light is a vibration in the electromagnetic field. This means that the effect of a magnetic field on an electron can be calculated by “scattering” an electron and a photon.


If this looks familiar, check the handbook section.

In fact, doing the calculation in this way leads to what is possibly the most accurately predicted number in all of science.

Scattering amplitudes show up all over the place, from particle physics at the Large Hadron Collider to astrophysics to delicate experiments on electrons in magnetic fields. That said, there are plenty of things people calculate in theoretical physics that don’t use scattering amplitudes, either because they involve questions that are difficult to answer from the scattering amplitude point of view, or because they invoke different formulas altogether. Still, scattering amplitudes are central to the work of a large number of physicists. They really do cover just about everything.

Am I a String Theorist?

Perimeter, like most institutes of theoretical physics, divides its researchers into semi-informal groups. At Perimeter, these are:

  • Condensed Matter
  • Cosmology
  • Mathematical Physics
  • Particle Physics
  • Quantum Fields and Strings
  • Quantum Foundations
  • Quantum Gravity
  • Quantum Information
  • Strong Gravity

I’m in the Quantum Fields and Strings group, which many people seem to refer to simply as the String Theory group. So for the past week or so, I’ve been introducing myself as a String Theorist. As I briefly mention in my Who Am I? post, this isn’t completely accurate.

Am I a String Theorist?

The theories that I study do derive from string theory. They were first framed by string theorists, and research into them is still deeply intertwined with string theory research. I’ve definitely had occasion to compare my results to those of string theorists, or to bring in calculations by string theorists to advance my work.

And if you’re the kind of person who views the world as a competition between string theory and its rivals (like Loop Quantum Gravity) then I suppose I’m on the string theory “side”. I’m optimistic, at least, that the reason string theory research is so much more common than any other approach to quantum gravity is simply that string theory provides many more interesting and viable projects for researchers.

On the other hand, though, there’s the basic fact that the theories I work with are not, themselves, string theories. They’re quantum field theories, the broader class that encompasses the modern synthesis of quantum mechanics and special relativity. The theories I work with are often reasonably close to the well-tested theories of the real world, close enough that the calculations are more “particle physics” than they are “string theory”.

Of course, all of that could change. One of the great things about string theory is the way it connects lots of different interesting quantum field theories together. There’s a “string”, the “GKP string”, involved in the work of Basso, Sever, and Vieira, work that I will probably get involved with here at Perimeter. The (2,0) theory is a quantum field theory, but it’s much closer to string theory than to particle physics, so if I get more involved with the (2,0) theory would that make me a string theorist?

The fact is, these days string theory is so ubiquitous that the question “Am I a String Theorist?” doesn’t actually mean anything. String theory is there, lurking in the background, able to get involved at any time even if it’s not directly involved at present. Theoretical physicists don’t fall into neat categories.

I am a String Theorist. Also, I am not.