Insert Muscle Joke Here

I’m graduating this week, so I probably shouldn’t spend too much time writing this post. I ought to mention, though, that there has been some doubt about the recent discovery by the BICEP2 telescope of evidence for gravitational waves in the cosmic microwave background caused by the early inflation of the universe. Résonaances got to the story first and Of Particular Significance has some good coverage that should be understandable to a wide audience.

In brief, the worry is that the signal detected by BICEP2 might not be caused by inflation, but instead by interstellar dust. While the BICEP2 team used several models of dust to show that it should be negligible, the controversy centers on one of these models in particular, one taken from another, similar experiment called Planck.

The problem is, BICEP2 didn’t get Planck’s information on dust directly. Instead, it appears they took the data from a slide in a talk by the Planck team. This process, known as “data scraping”, involves taking published copies of the slides and reading information off of the charts presented. If BICEP2 misinterpreted the slide, they might have miscalculated the contribution from interstellar dust.

If you’re like me, the whole idea of data scraping seems completely ludicrous. The idea of professional scientists sneaking information off of a presentation, rather than simply asking the other team for data like reasonable human beings, feels almost cartoonishly wrong-headed.

It’s a bit more understandable, though, when you think about the culture behind these big experiments. The Planck and BICEP2 teams are colleagues, but they are also competitors. There is an enormous amount of glory in finding evidence for something like cosmic inflation first, and an equally enormous amount of shame in screwing up and announcing something that turns out to be wrong. As such, these experiments are quite protective of their data. Not only might someone with early access to the data preempt them on an important discovery, they might rush to publish a conclusion that is wrong. That’s why most of these big experiments spend a large amount of time checking and re-checking the data, communicating amongst themselves and settling on an interpretation before they feel comfortable releasing it to the wider community. It’s why BICEP2 couldn’t just ask Planck for their data.

From BICEP2’s perspective, they could reasonably expect that plots presented in a Planck talk would be accurate representations of the data. Unlike Fox News, scientists have an obligation to present their data in a way that isn’t misleading. And while relying on such a roundabout source seems like a bad idea, by all accounts that’s not all the BICEP2 team did: Planck’s data was just one of several dust models they used, kept in part because it agreed well with other, non-“data-scraped” models.

It’s a shame that these experiments are so large and prestigious that they need to guard their data in such a potentially destructive way. My sub-field is generally much nicer about this sort of thing: the stakes are lower, and the groups are smaller and have less media attention, so we’re able to share data when we need to. In fact, my most recent paper got a significant boost from some data shared by folks at the Perimeter Institute.

Only time will tell whether the BICEP2 result wins out, or whether it was a fluke caused by caustic data-sharing practices. A number of other experiments are coming online within the next year, and one of them may confirm or deny what BICEP2 has shown.

How do I get where you are?

I’ve mentioned before that this blog will be undergoing a redesign this summer, transitioning from its current address to just 4gravitons.wordpress.com. One part of that redesign will be the introduction of new categories to help people search for content, as well as new guides (like the existing ones for N=4 super Yang-Mills and the (2,0) theory) for some of those categories. One planned category/guide will discuss careers in physics, with an eye towards explaining some of the often-unstated assumptions behind the process.

I’ve already posted on being a graduate research assistant and on what a postdoc is. I haven’t said much yet about the process leading up to becoming a graduate student. In this post, I’m going to give an overview of a career in theoretical physics, with a focus on what happens before you find an advisor. This is going to be inherently biased, based as it will be on my experiences. In particular, each country’s education system is different, so much of this will only be relevant for students in the US.

Let’s start at the beginning.

A very good place to start.

If you want to become a theoretical physicist, you’d better start by taking physics and math courses in high school. Unfortunately, this is where socioeconomic status has a big effect. Some schools have Advanced Placement or International Baccalaureate courses that let you get a head-start on college; many don’t. Some schools don’t even have physics courses at all anymore. My only advice here is to get what you can, when you can. If you can take a physics course, do it. If you can take calculus, do it. If you can take classes that will give you university credit, take them.

After high school, you go to college for a Bachelor’s degree in physics. Getting into college these days is some sort of ridiculous popularity contest, and I don’t pretend to be able to give advice on that. What I can say is that once you’re in college, coursework is important, but research is more important. Graduate schools will look at how well you did in your courses and how advanced those courses were, but they will pay special attention to who you get recommendations from, and whether you did research with them. Whether or not your college has anyone you can do research with, you should consider doing summer research somewhere interesting. With programs like the NSF’s Research Experiences for Undergraduates (REU), you can apply to get matched with interesting projects and mentors. In addition to looking good on an application to grad school, doing research helps boost your self-confidence: knowing that you can do real research helps you start feeling like a scientist. Research also teaches you specialized skills much faster than coursework can: I’ve learned much more about programming from having to use it on projects than from any actual programming course.

That said, coursework is also useful. You want courses that will familiarize you with the basic tools of your field: physics courses on classical mechanics, quantum mechanics, and electromagnetism, and math courses on linear algebra and differential equations. You also want to take a math course on group theory, but only if it’s taught by a physicist, as mathematicians focus on different aspects. More than any of that, though, you want to take at least a few graduate-level courses while you’re still in college.

That’s important, because grad school in theoretical physics is kind of a mess. You’ll be there for around five years in total (I was at the low end with four; some people take six or seven). However, you take most if not all of your courses in the first two years. In general, during that time you are paid as a Teaching Assistant. The school pays your tuition and a livable (if barely) wage, and in return you lead lab sections or grade papers. Teaching experience can be a positive thing, but you don’t want to keep doing it for too long, because the point of grad school isn’t teaching or courses, it’s research. Your goal is to find an advisor who is willing to pay you out of one of their (usually government) grants, so that you can transition from Teaching Assistant to Research Assistant. This is hard to do while you’re still taking courses: you won’t have time, and worse, you won’t know everything you need. Theoretical physics requires a lot of background, and much of it gets taught in grad school. Here at Stony Brook, you’d be taking graduate-level quantum mechanics, quantum field theory, and string theory. Until recently, each one of those was a one-year course, and the most logical way to take them was one after the other. Add that up, and that’s three years…kind of a problem when you want to start research after two. That’s why getting ahead in courses, however and whenever you can, is so important: not so much for the courses themselves, but so you can get past them and do research.

Research is what you do for the rest of your time in grad school. It’s what you do after you graduate, when you become a postdoc. It, along with teaching, is what you do as a professor, and what you are judged on when they decide whether or not you get tenure. Working through research is going to teach you more than any other experience you will have, so get as much of it as you can. And good luck!

Look what I made!

In a few weeks, I’ll be giving a talk for Stony Brook’s Graduate Awards Colloquium, to an audience of social science grad students and their parents.

One of the most useful tools when talking to people in other fields is a shared image. You want something from your field that they’ve seen, that they’re used to, that they’ll recognize. Building off of that kind of thing can be a great way to communicate.

If there’s one particle physics image that lots and lots of people have seen, it’s the Standard Model. Generally, it’s organized into charts like this:

[Image: the Standard Model of Elementary Particles chart]

I thought that if people saw a chart like that, but for N=4 super Yang-Mills, it might make the theory seem a bit more familiar. N=4 super Yang-Mills has a spin-1 particle much like the Standard Model’s gluon, paired with four gluinos, spin-1/2 particles that are sort of (but not really) like quarks, and six scalars, spin-0 particles whose closest analogue in the Standard Model is the Higgs.

In N=4 super Yang-Mills, none of these particles have any mass, since if supersymmetry isn’t “broken” all particles have the same mass. So where mass is written in the Standard Model table, I can just put zero. The table I linked also gives the electric charge of each particle. That doesn’t really mean anything for N=4 super Yang-Mills. It isn’t a theory that tries to describe the real world, so there’s no direct equivalent to a real-world force like electromagnetism. Since everything in the theory has to have the same charge, again due to supersymmetry, I can just list all of their “electric charges” as zero.

Putting it all together, I get the diagram below. The theory has eleven particles in total, so it won’t fit into a nice neat square. Still, this should be more familiar than most of the ways I could present things.

[Image: the particle content of N=4 super Yang-Mills, arranged Standard Model-style]
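
The counting behind that chart is simple enough to tally in a few lines. Here’s a toy sketch of my own (pure bookkeeping, not a physics calculation), just listing the particle content described above:

```python
# Toy tally of the particle content of N=4 super Yang-Mills.
# By supersymmetry, every mass and "electric charge" is zero,
# so spin is the only label worth tracking here.
particles = (
    [("gluon", 1)]                                    # one spin-1 particle
    + [(f"gluino {i}", 1 / 2) for i in range(1, 5)]   # four spin-1/2 gluinos
    + [(f"scalar {i}", 0) for i in range(1, 7)]       # six spin-0 scalars
)

print(len(particles))  # 11 particles in total -- hence no neat square chart
```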

The PhD Defense

Last Wednesday I completed the final stage of my PhD, the Defense. I booted up a projector and, in a room filled with esteemed physicists, eager grad students, and a three-foot sub, I summarized the last two years of my work. A few questions later, people were shaking my hand and calling me “Doctor von Hippel”.

Now that I’m transitioning out of the grad student world, my blog will be transitioning too. I’ll be starting work as a Postdoctoral Fellow in the Fall at the Perimeter Institute for Theoretical Physics. Some time in between, probably in July, this blog will undergo a redesign, hopefully becoming easier to navigate. I’ll also be dropping the “and a grad student” from the title, switching to a new URL, 4gravitons.wordpress.com. Don’t worry, traffic from the old address will be forwarded, so infrequent readers won’t lose track. That said, if anyone with more experience has some advice about making the transition more seamless I’d love to hear it.

There are a lot of stereotypes about the PhD Defense, and mine broke almost all of them. My advisor hadn’t been directly involved in my work, my committee chair was one of the nicest, mellowest professors I’ve ever known, my experimentalist asked me a theoretical physics question, and my external member was Nima friggin’ Arkani-Hamed.

That said, I’ve also seen several other PhD Defenses, and I have to say that the stereotypes are usually right on the money. And since I’m on a bit of a list-based comedy kick recently, let me introduce you to the four members of your PhD committee:

First, of course, is your advisor. If you two collaborate closely, you may find yourself presenting material that your advisor had a hand in. Naturally, the other committee members will ask questions about this material, and naturally you will answer them. Naturally, those answers will not be how your advisor would have explained it, so naturally your advisor will start explaining it themselves. (After all, it’s their work that’s being questioned!) Manage things well and the whole defense will be an argument between your advisor and the other committee members, and you won’t have to say anything at all!

Second is your committee chair. This is someone from your field, chosen for their general eminence and chair-ish-ness. They’ve done a lot of these before, and in their mind they’ve developed a special bond with the students, a bond forged by questions. See, if you have a typical committee chair, they will ask you the toughest, most nitpicky, most downright irrelevant lines of questions possible. The chair’s goal isn’t to keep things moving, it’s to make sure that you took their class and remember everything from it, no matter how much time that takes away from discussing your actual dissertation.

Third you must face your experimentalist. According to the ancient ideals of academia (ideals somehow unbreakably important for grad students and largely irrelevant for top-level university administrators), a dissertation must be judged not only by the yes-men of your own sub-field, but also by someone from the rest of your department. For a theoretical physicist, that means bringing in an experimental physicist. You may try to make things accessible to this person, but eventually you have to actually start talking about your work. This is healthy, as it will allow them much-needed sleep. Once they awake, they will bless you with a question that represents the most tenuous link they can draw between their own work and yours, generally asking after the mass of some subatomic particle. Once you have demonstrated your ignorance in some embarrassing fashion the experimentalist may return to sleep.

Finally, the defense brings in a special individual, the external member. Not only must you prove your worth to an experimentalist, but to someone from outside of your department altogether! For the lucky, this means someone who does similar work at a nearby university. For the terminally rural, this instead means finding the closest department and bringing in someone who will at least recognize some of the words in your talk. For us, this generally means a mathematician. Like the experimentalist, they will favor you with bewildered looks or snores, depending on temperament. Unlike the experimentalist, they are under no illusion that anything they do is relevant to anything you do, so their questions will be mercifully brief.

Grilled by these four, you then leave the room, allowing them to talk about the weather or their kids or something before they ask you back in to tell you that, of course, you’ve got your PhD. Because after all that, anything else would just be rude.

Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but there must be a whole lot of sorts of particles out there, at least if you judge by science fiction (these two are also relevant). Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects: you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late 50’s and early 60’s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early 60’s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

The Four Ways Physicists Name Things

If you’re a biologist and you discover a new animal, you’ve always got Latin to fall back on. If you’re an astronomer, you can describe what you see. But if you’re a physicist, your only option appears to involve falling back on one of a few terrible habits.

The most reasonable option is just to name it after a person. Yang-Mills and the Higgs boson may sound silly at first, but once you know the stories of C. N. Yang, Robert Mills, Peter Higgs and Satyendra Nath Bose you start appreciating what the names mean. While this is usually the most elegant option, the increasingly collaborative nature of physics means that many things have to be named with a series of initials, like ABJM, BCJ and KKLT.

A bit worse is the tendency to just give it the laziest name possible. What do you call the particles that “glue” protons and neutrons together? Why gluons, of course, yuk yuk yuk!

This is particularly common when it comes to supersymmetry, where putting the word “super” in front of something almost always works. If that fails, it’s time for more specific conventions: to name the partner of an existing particle, if the new particle is a boson, just add “s-” (for “super”, or apparently “scalar”) to the name. This creates perfectly respectable names like stau, sneutrino, and selectron. If the new particle is a fermion, instead you add “-ino” to the end, getting something like a gluino if you start with a gluon. If you’ve heard of neutrinos, you may know that neutrino means “little neutral one”. You might perfectly rationally expect that gluino means “little gluon”, if you had any belief that physicists name things logically. We don’t. A gluino is called a gluino because it’s a fermion, and neutrinos are fermions, and the physicists who named it were too lazy to check what “neutrino” actually means.

Pictured: the superpartner of Nidoran?

Worse still are names that are obscure references and bad jokes. These are mercifully rare, and at least memorable when they occur. In quantum mechanics, you write down probabilities using brackets of two quantum states, \langle a | b\rangle. What if you need to separate the two states, \langle a| and |b\rangle? Then you’ve got a “bra” and a “ket”!

Or have you heard the story of how quarks were named? Quarks, for those of you unfamiliar with them, are found in protons and neutrons in groups of three. Murray Gell-Mann, one of the two people who first proposed the existence of quarks, got their name from Finnegans Wake, a novel by James Joyce, which at one point calls for “Three quarks for Muster Mark!” While this may at first sound like a heartwarming tale of respect for the literary classics, it should be kept in mind that a) Finnegans Wake is a novel composed almost entirely of gibberish, read almost exclusively by people who pretend to understand it to seem intelligent, and b) this isn’t exactly the most important or memorable line in the book. So Gell-Mann wasn’t so much paying homage to a timeless work of literature as he was referencing the most mind-numbingly obscure piece of nerd trivia before the invention of Mara Jade. Luckily these days we have better ways to remember the name.

Albeit wrinklier ways.

The final, worst category, though, doesn’t even have good stories going for it: names that tell you absolutely nothing about the thing they are naming.

Probably the worst examples of this from my experience are the a-theorem and the c-theorem. In both cases, a theory happened to have a parameter in it labeled by a letter. When a theorem was proven about that parameter, rather than giving it a name that told you anything at all about what it was, people just called it by the name of the parameter. Mathematics is full of names like this too. Without checking Wikipedia, what’s the difference between a set, a group, and a category? What the heck is a scheme?

If you ever have to name something, be safe and name it after a person. If you don’t, just try to avoid falling into these bad habits of physics naming.

A Question of Audience

I’ve been thinking a bit about science communication recently.

One of the most important parts of communicating science (or indeed, communicating anything) is knowing your audience. Much of the time if a piece is flawed, it’s flawed because the author didn’t have a clear idea of who they’re talking to.

A persistent worry among people who communicate science to the public is that we’re really just talking to ourselves. If all the people praising you for your clear language are scientists, then maybe it’s time to take a step back and think about whether you’re actually being understood by anyone else.

This blog’s goal has always been to communicate science to the general public, and most of my posts are written with as little background assumed as possible. That said, I sometimes wonder whether that’s actually the audience I’m reaching.
Wordpress has a handy feature to let me track which links people click on to get to my blog, which gives me a rough way to gauge my audience.

When a new post goes up, I get around ten to twenty clicks from Facebook. Those are people I know, which for the most part these days means physicists. I get a couple clicks from Twitter, where my followers are a mix of young scientists, science journalists, and amateurs interested in science. On WordPress, my followers are also a mix of specialists and enthusiasts. Most interesting, to me at least, are the readers who get to my blog via Google searches. Naturally, they come in regardless of whether I have a new post or not, adding an extra twenty-five or so views every day. Judging by the sites (google.fr, google.ca), these people come from all over the world, and judging by their queries they run from physics PhD students to people with no physics knowledge whatsoever.

Overall then, I think I’m doing a pretty good job getting the word out. As my site’s Google rankings improve, more non-physicists will read what I have to say. It’s a diverse audience, but I think I’m up to the challenge.

Numerics, or, Why can’t you just tell the computer to do it?

When most people think of math, they think of the math they did in school: repeated arithmetic until your brain goes numb, followed by basic algebra and trig. You weren’t allowed to use calculators on most tests for the simple reason that almost everything you did could be done by a calculator in a fraction of the time.

Real math isn’t like that. Mathematicians handle proofs and abstract concepts, definitions and constructions and functions and generally not a single actual number in sight. That much, at least, shouldn’t be surprising.

What might be surprising is that even tasks which seem very much like things computers could do easily take a fair bit of human ingenuity.

In physics, I do a lot of integrals. For those of you unfamiliar with calculus, integrals can be thought of as the area between a curve and the x-axis.

Areas seem like the sort of thing it would be easy for a computer to find. Chop the space into little rectangles, add up all the rectangles under the curve, and if your rectangles are small enough you should get the right answer. Broadly, this is the method of numerical integration. Since computers can do billions of calculations per second, you can chop things up into billions of rectangles and get as close as you’d like, right?

Heck, ten is a lot. Can we just do ten?

For some curves, this works fine. For others, though…

Ten might not be enough for this one.

See how the left side of that plot goes off the chart? That curve goes to infinity. No matter how many rectangles you put on that side, you still won’t have any that are infinitely tall, so you’ll still miss that part of the curve.

Surprisingly enough, the area under this curve isn’t infinite. Do the integral correctly, and you get a result of 2. Set a computer to calculate it via the sort of naïve numerical integration discussed above, though, and you’ll never find that answer. You need smarter methods: smart humans doing the math, or smart humans programming the computer.
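
To make this concrete, here’s a small sketch in Python (my own illustrative example; I’m assuming a curve with the same behavior as the one plotted, namely 1/√x, whose area from 0 to 1 is exactly 2):

```python
import math

def midpoint_rule(f, a, b, n):
    """Chop [a, b] into n rectangles and add up their areas."""
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) * width for i in range(n))

f = lambda x: 1 / math.sqrt(x)  # blows up at x = 0; the true area is 2

print(midpoint_rule(f, 0, 1, 10))       # way off: about 1.81
print(midpoint_rule(f, 0, 1, 100_000))  # still noticeably short of 2

# The smart-human fix: substitute x = u**2, dx = 2u du. The integrand
# becomes 1/sqrt(u**2) * 2u = 2, a constant -- the singularity is gone.
g = lambda u: 1 / math.sqrt(u ** 2) * 2 * u
print(midpoint_rule(g, 0, 1, 10))       # about 2, with only ten rectangles
```

The naïve sum does converge, but agonizingly slowly; a single change of variables, supplied by a human, does better with ten rectangles than brute force does with a hundred thousand.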

Another way this can come up is if you’re adding up two parts of something that go to infinity in opposite directions. Try to integrate each part by itself and you’ll be stuck.

[Plots: the two pieces, each heading off to infinity in opposite directions]

But add them together, and you get something quite a bit more tractable.

Yeah, definitely a ten-rectangle job.
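
A minimal sketch of that cancellation (again my own toy example, not the actual functions plotted): 1/x runs off to +∞ at x = 0 and −1/(x + x²) runs off to −∞, so neither can be integrated over [0, 1] on its own, but their sum is algebraically just 1/(1 + x).

```python
import math

def midpoint_rule(f, a, b, n):
    """Chop [a, b] into n rectangles and add up their areas."""
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) * width for i in range(n))

f1 = lambda x: 1 / x              # diverges to +infinity at x = 0
f2 = lambda x: -1 / (x + x ** 2)  # diverges to -infinity at x = 0

# Separately, each integral over [0, 1] is infinite. Together, the
# divergences cancel: f1(x) + f2(x) simplifies to 1/(1 + x).
total = lambda x: f1(x) + f2(x)

print(midpoint_rule(total, 0, 1, 10), math.log(2))  # both about 0.693
```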

Numerical integration, and computers in general, are a very important tool in a scientist’s arsenal. But in order to use them, we have to be smart, and know what we’re doing. Knowing how to use our tools right can take almost as much expertise and care as working without tools.

So no, I can’t just tell the computer to do it.

Gravity is Yang-Mills Squared

There’s a concept that I’ve wanted to present for quite some time. It’s one of the coolest accomplishments in my subfield, but I thought that explaining it would involve too much technical detail. However, the recent BICEP2 results have brought one aspect of it to the public eye, so I’ve decided that people are ready.

If you’ve been following the recent announcements by the BICEP2 telescope of their indirect observation of primordial gravitational waves, you’ve probably seen the phrases “E-mode polarization” and “B-mode polarization” thrown around. You may even have seen pictures, showing that light in the cosmic microwave background is polarized differently by quantum fluctuations in the inflaton field and by quantum fluctuations in gravity.

But why is there a difference? What’s unique about gravitational waves that makes them different from the other waves in nature?

As it turns out, the difference all boils down to one statement:

Gravity is Yang-Mills squared.

This is both a very simple claim and a very subtle one, and it comes up in many, many places in physics.

Yang-Mills, for those who haven’t read my older posts, is a general category that contains most of the fundamental forces. Electromagnetism, the strong nuclear force, and the weak nuclear force are all variants of Yang-Mills forces.

Yang-Mills forces have “spin 1”. Another way to say this is that Yang-Mills forces are vector forces. If you remember vectors from math class, you might remember that a vector has a direction and a strength. This hopefully makes sense: forces point in a direction, and have a strength. You may also remember that vectors can be described in terms of components. A vector in four space-time dimensions has four components: x, y, z, and time, like so:

\left( \begin{array}{c} x \\ y \\ z \\ t \end{array} \right)

Gravity has “spin 2”.

As I’ve talked about before, gravity bends space and time, which means that it modifies the way you calculate distances. In practice, that means it needs to be something that can couple two vectors together: a matrix, or more precisely, a tensor, like so:

\left( \begin{array}{cccc} xx & xy & xz & xt\\ yx & yy & yz & yt\\ zx & zy & zz & zt\\ tx & ty & tz & tt\end{array} \right)

So while a Yang-Mills force has four components, gravity has sixteen. Gravity is Yang-Mills squared.
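
A quick sanity check on the counting (a throwaway snippet of my own, nothing more): “squaring” a four-component vector with an outer product produces exactly the sixteen components of the tensor above.

```python
components = ["x", "y", "z", "t"]  # the four components of a vector force

# "Square" the vector: pair every component with every other component,
# reproducing the entries (xx, xy, ...) of the matrix shown above.
tensor = [[a + b for b in components] for a in components]

print(len(components))                  # 4 components: Yang-Mills
print(sum(len(row) for row in tensor))  # 16 components: gravity
print(tensor[0])                        # ['xx', 'xy', 'xz', 'xt']
```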

(Technical note: gravity actually doesn’t use all sixteen components, because its tensor is symmetric and traceless. However, when studying gravity’s quantum properties, theorists often add extra fields to “complete the square” and fill in the remaining components.)

There’s much more to the connection than that, though. For one, it appears in the kinds of waves the two types of forces can create.

In order to create an electromagnetic wave you need a dipole, a negative charge and a positive charge at opposite ends of a line, and you need that dipole to change over time.

Change over time, of course, is a property of GIFs.

Gravity doesn’t have negative and positive charges, it just has one type of charge. Thus, to create gravitational waves you need not a dipole, but a quadrupole: instead of a line between two opposite charges, you have four gravitational charges (masses) arranged in a square. This creates a “breathing” sort of motion, instead of the back-and-forth motion of electromagnetic waves.

This is your brain on gravitational waves.

This is why gravitational waves have a different shape than electromagnetic waves, and why they have a unique effect on the cosmic microwave background, allowing them to be spotted by BICEP2. Gravity, once again, is Yang-Mills squared.

But wait, there’s more!

So far, I’ve shown you that gravity is the square of Yang-Mills, but not in a very literal way. Yes, there are lots of similarities, but it’s not like you can just square a calculation in Yang-Mills and get a calculation in gravity, right?

Well actually…

In quantum field theory, calculations are traditionally done using tools called Feynman diagrams, organized by how many loops the diagram contains. The simplest diagrams have no loops, and are called tree diagrams.

Fascinatingly, for tree diagrams the message of this post is as literal as it can be. Using something called the Kawai-Lewellen-Tye relations, the result of a tree diagram calculation in gravity can be found just by taking a similar calculation in Yang-Mills and squaring it.
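At four points, for example, the relation takes a strikingly simple form (schematically; signs and factors of i vary between references):

M_4(1,2,3,4) = -i\, s_{12}\, A_4(1,2,3,4)\, \tilde{A}_4(1,2,4,3)

Here A_4 and \tilde{A}_4 are two copies of the Yang-Mills tree amplitude with different orderings of the external particles, and s_{12} = (p_1 + p_2)^2. Two Yang-Mills calculations multiplied together, and out comes gravity.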

(Interestingly enough, these relations were originally discovered using string theory, but they don’t require string theory to work. It’s yet another example of how string theory functions as a laboratory to make discoveries about quantum field theory.)

Does this hold beyond tree diagrams? As it turns out, the answer is again yes! The calculation involved is a little more complicated, but as discovered by Zvi Bern, John Joseph Carrasco, and Henrik Johansson, if you can get your calculation in Yang-Mills into the right format then all you need to do is square the right thing at the right step to get gravity, even for diagrams with loops!
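Schematically (suppressing couplings and conventions), a Yang-Mills amplitude can be organized as a sum over trivalent diagrams, and the “squaring” amounts to swapping each color factor for a second kinematic numerator:

\mathcal{A}_{\text{YM}} = \sum_i \frac{c_i\, n_i}{D_i} \quad \longrightarrow \quad \mathcal{M}_{\text{gravity}} = \sum_i \frac{n_i\, \tilde{n}_i}{D_i}

Here the c_i are color factors, the D_i are propagator denominators, and the trick is arranging the kinematic numerators n_i to satisfy the same Jacobi identities the color factors do. Once they do, the replacement c_i \to \tilde{n}_i turns Yang-Mills into gravity.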


This trick, called BCJ duality after its discoverers, has allowed calculations in quantum gravity that far outpace what would be possible without it. In N=8 supergravity, the gravity analogue of N=4 super Yang-Mills, calculations have progressed up to four loops, and have revealed tantalizing hints that the uncontrolled infinities that usually plague gravity theories are absent in N=8 supergravity, even without adding in string theory. Results like these are why BCJ duality is viewed as one of the “foundational miracles” of the field for those of us who study scattering amplitudes.

Gravity is Yang-Mills squared, in more ways than one. And because gravity is Yang-Mills squared, gravity may just be tame-able after all.

Flexing the BICEP2 Results

The physicsverse has been abuzz this week with news of the BICEP2 experiment’s observations of B-mode polarization in the Cosmic Microwave Background.

There are lots of good sources on this, and it’s not really my field, so I’m just going to give a quick summary before talking about a few aspects I find interesting.

BICEP2 is a telescope in Antarctica that observes the Cosmic Microwave Background, light left over from the first time that the universe was clear enough for light to travel. (If you’re interested in a background on what we know about how the universe began, Of Particular Significance has an article here that should be fairly detailed, and I have a take on some more speculative aspects here.) Earlier experiments that observed the Cosmic Microwave Background discovered a surprising amount of uniformity. This led to the proposal of a concept called inflation: the idea that at some point the early universe expanded exponentially, smearing any non-uniformities across the sky and smoothing everything out. Since the rate at which the universe expands is a single number, if that number is to vary over space and time it is most naturally described by a scalar field, which in this case is called the inflaton.

During inflation, distances themselves get stretched out. Think about inflation like enlarging an image. As you’ve probably noticed (maybe even in early posts on this blog), enlarging an image doesn’t always work out well. The resulting image is often pixelated or distorted. Some of the distortion comes from glitches in the program that enlarges the image, while some of it is just what happens when the pixels of the original image get enlarged to the point that you can see them.

Enlarging the Cosmic Microwave Background

Quantum fluctuations in the inflaton field itself are the glitches in the program, enlarging some areas more than others. The pattern they create in the Cosmic Microwave Background is called E-mode polarization, and several other experiments have been able to detect it.

Much weaker is the effect of the “pixels” of the original image. Since the original image is spacetime itself, the pixels are the quantum fluctuations of spacetime: quantum gravity waves. Inflation enlarged them to the point that they were visible on a large-distance scale, fundamental non-uniformity in the world blown up big enough to affect the distribution of light. The effect this had on light is detectably different: it’s called B-mode polarization, and BICEP2 is the first experiment to detect it on the right scale for it to be caused by gravity waves.

Measuring this polarization, in particular how strong it is, tells us a lot about how inflation occurred. It’s enough to rule out several models, and lend support to several others. If the results are corroborated this will be real, useful evidence, the sort physicists love to get, and folks are happily crunching numbers on it all over the world.

All that said, this site is called four gravitons and a grad student, and I’m betting that some of you want to ask this grad student: is this evidence for gravitons, or for gravity waves?

Sort of.

We already had good indirect evidence for gravity waves: pairs of neutron stars release gravity waves as they orbit each other, which causes them to slow down. Since we’ve observed them slowing down at the right rates, we were already confident gravity waves exist. And if you’ve got gravity waves, gravitons follow as a natural consequence of quantum mechanics.

The data from BICEP2 is also indirect. The gravity waves “observed” by BICEP2 were present in the early universe. It is their effect on the light that would become the Cosmic Microwave Background that is being observed, not the gravity waves themselves. We have yet to detect gravity waves directly, something a gravity telescope like LIGO aims to do.

On the other hand, a “gravity telescope” isn’t exactly direct either. In order to detect gravity waves, LIGO and other gravity telescopes attempt to measure their effect on the distances between objects. How do they do that? By looking at interference patterns of light.

In both cases, we’re looking at light, present in the environment of a gravity wave, and examining its properties. Of course, in a gravity telescope the light is from a nearby environment under tight control, while the Cosmic Microwave Background is light from as far away and long ago as anything within the reach of science today. In both cases, though, it’s not nearly as simple as “observing” an effect. “Seeing” anything in high energy physics or astrophysics is always a matter of interpreting data based on science we already know.

Alright, that’s evidence for gravity waves. Does that mean evidence for gravitons?

I’ve seen a few people describe BICEP2’s results as evidence for quantum gravity/quantum gravity effects. I felt a little uncomfortable with that claim, so I asked Matt Strassler what he thought. I think his perspective on this is the right one. Quantum gravity is just what happens when gravity exists in a quantum world. As I’ve said on this site before, quantum gravity is easy. The hard part is making a theory of quantum gravity that has real predictive power, and that’s something these results don’t shed any light on at all.

That said, I’m a bit conflicted. They really are seeing a quantum effect in gravity, and as far as I’m aware this really is the first time such an effect has been observed. Gravity is so weak, and quantum gravity effects so small, that it takes inflation blowing them up across the sky for them to be visible. Now, I don’t think there was anyone out there who thought gravity didn’t have quantum fluctuations (or at least, anyone with a serious scientific case). But seeing into a new regime, even if it doesn’t tell us much…that’s important, isn’t it? (After writing this, I read Matt Strassler’s more recent post, where he has a paragraph professing similar sentiments).

On yet another hand, I’ve heard it asserted in another context that loop quantum gravity researchers don’t know how to get gravitons. I know nothing about the technical details of loop quantum gravity, so I don’t know if that actually has any relevance here…but it does amuse me.