Tag Archives: cosmology

Is Everything Really Astonishingly Simple?

Neil Turok gave a talk last week, entitled The Astonishing Simplicity of Everything. In it, he argued that our current understanding of physics is really quite astonishingly simple, and that recent discoveries seem to be confirming this simplicity.

For the right sort of person, this can be a very uplifting message. The audience was spellbound. But a few of my friends were pretty thoroughly annoyed, so I thought I’d dedicate a post to explaining why.

Neil’s talk built up to showing this graphic, one of the masterpieces of Perimeter’s publications department:

Looked at in this way, the laws of physics look astonishingly simple. One equation, a few terms, each handily labeled with a famous name of some (occasionally a little hazy) relevance to the symbol in question.

In a sense, the world really is that simple. There are only a few kinds of laws that govern the universe, and the concepts behind them are really, deep down, very simple concepts. Neil adroitly explained some of the concepts behind quantum mechanics in his talk (here represented by the Schrödinger, Feynman, and Planck parts of the equation), and I have a certain fondness for the Maxwell-Yang-Mills part. The other parts represent different kinds of particles, and different ways they can interact.

While there are only a few different kinds of laws, though, that doesn’t mean the existing laws are simple. That nice, elegant equation hides 25 arbitrary parameters, tucked into the Maxwell-Yang-Mills, Dirac, Kobayashi-Maskawa, and Higgs parts. It also omits the cosmological constant, which fuels the expansion of the universe. And there are problems if you try to claim that the gravity part, for example, is complete.

When Neil mentions recent discoveries, he’s referring to the LHC not seeing new supersymmetric particles, and to telescopes not seeing any unusual features in the cosmic microwave background. The theories that were being tested, supersymmetry and inflation, are in many ways more complicated than the Standard Model, adding new parameters without getting rid of old ones. But I think it’s a mistake to say that if these theories are ruled out, the world is astonishingly simple. These theories are attempts to explain unlikely features of the old parameters, or unlikely features of the universe we observe. Without them, we’ve still got those unlikely, awkward, complicated bits.

Of course, Neil doesn’t think the Standard Model is all there is either, and while he’s not a fan of inflation, he does have proposals he’s worked on that explain the same observations, proposals that are also beyond the current picture. More broadly, he’s not suggesting here that the universe is just what we’ve figured out so far and no more. Rather, he’s suggesting that new proposals ought to build on the astonishing simplicity of the universe, instead of adding complexity, that we need to go back to the conceptual drawing board rather than correcting the universe with more gears and wheels.

On the one hand, that’s Perimeter’s mission statement in a nutshell. Perimeter’s independent nature means that folks here can focus on deeper conceptual modifications to the laws of physics, rather than playing with the sorts of gears and wheels that people already know how to work with.

On the other hand, a lack of new evidence doesn’t do anyone any favors. It doesn’t show the way for supersymmetry, but it doesn’t point to any of the “deep conceptual” approaches either. And so for some people, Neil’s glee at the lack of new evidence feels less like admiration for the simplicity of the cosmos and more like that one guy in a group project who sits back chuckling while everyone else fails. You can perhaps understand why some people felt resentful.

A Tale of Two CMB Measurements

While trying to decide what to blog about this week, I happened to run across this article by Matthew Francis on Ars Technica.

Apparently, researchers have managed to use Planck’s measurement of the Cosmic Microwave Background to indirectly measure a more obscure phenomenon, the Cosmic Neutrino Background.

The Cosmic Microwave Background, or CMB, is often described as the light of the Big Bang, dimmed and spread to the present day. More precisely, it’s the light released from the first time the universe became transparent. When electrons and protons joined to form the first atoms, light no longer spent all its time being absorbed and released by electrical charges, and was free to travel in a mostly-neutral universe.

This means that the CMB is less like a view of the Big Bang, and more like a screen separating us from it. Light and charged particles from before the CMB was formed will never be observable to us, because they would have been absorbed by the early universe. If we want to see beyond this screen, we need something with no electric charge.

That’s where the Cosmic Neutrino Background comes in. Much as the CMB consists of light from the first time the universe became transparent, the CNB consists of neutrinos from the first time the universe was cool enough for them to travel freely. Since this happened a bit before the universe was transparent to light, the CNB gives information about an earlier stage in the universe’s history.
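For the curious, the CNB’s temperature today can actually be predicted from the CMB’s: after the neutrinos decoupled, electron-positron annihilation dumped heat into the photons but not the neutrinos, leaving the neutrinos slightly colder, with T_\nu = (4/11)^{1/3} T_\gamma. A minimal numerical sketch (the 2.725 K CMB temperature is the standard measured value; the rest is textbook cosmology, not something from this post):

```python
# Predicted present-day temperature of the Cosmic Neutrino Background.
# After neutrino decoupling, e+/e- annihilation heats the photons but
# not the neutrinos, so today T_nu = (4/11)^(1/3) * T_cmb.
T_cmb = 2.725  # K, measured CMB temperature today

T_cnb = (4.0 / 11.0) ** (1.0 / 3.0) * T_cmb
print(f"CNB temperature: {T_cnb:.3f} K")  # roughly 1.95 K
```

That sub-2-kelvin temperature is part of why these relic neutrinos are so hard to detect directly.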

Unfortunately, neutrinos are very difficult to detect, the low-energy ones left over from the CNB even more so. Rather than being detected directly, the CNB has to be observed through its indirect effects on the CMB, and that’s exactly what these researchers did.

Now does all of this sound just a little bit familiar?

Gravitational waves are also hard to detect, hard enough that we haven’t directly detected any yet. They’re also electrically neutral, so they can also give us information from behind the screen of the CMB, letting us learn about the very early universe. And when the team at BICEP2 claimed to have measured these primordial gravitational waves indirectly, by measuring the CMB, the press went crazy about it.

This time, though? That Ars Technica article is the most prominent I could find. There’s nothing in major news outlets at all.

I don’t think that this is just a case of people learning from past mistakes. I also don’t think that BICEP2’s results were just that much more interesting: they were making a claim about cosmic inflation rather than just buttressing the standard Big Bang model, but (outside of certain contrarians here at Perimeter) inflation is not actually all that controversial. It really looks like hype is the main difference here, and that’s kind of sad. The difference between a big (premature) announcement that got me to write four distinct posts and an article I almost didn’t notice is just one of how the authors chose to make their work known.

All Is Dust

Joke stolen from some fellow PI postdocs.

The BICEP2 and Planck experiment teams have released a joint analysis of their data, discovering what many had already suspected: that the evidence for primordial gravitational waves found by BICEP2 can be fully explained by interstellar dust.

For those who haven’t been following the story, BICEP2 is a telescope in Antarctica. Last March, they told the press they had found evidence of primordial gravitational waves, ripples in space-time caused by the exponential expansion of the universe shortly after the Big Bang. Soon after, though, doubts were raised. It appeared that the BICEP2 team hadn’t taken proper account of interstellar dust, and in particular had misused some data they scraped from a presentation by the larger Planck experiment. After Planck released the correct version of their dust data, BICEP2’s announcement looked even more evidently premature.

Now, the Planck team has exhaustively gone over their data and BICEP2’s, and done a full analysis. The result is a pretty thorough statement: everything BICEP2 observed can be explained by interstellar dust.

A few news outlets have been describing this as “ruling out inflation” or “ruling out gravitational waves”, both of which are misunderstandings. What Planck has ruled out is inflation (and gravitational waves caused by inflation) powerful enough to have been observed by BICEP2.

To an extent, this was something Planck had already predicted before BICEP2 made their announcement. BICEP2 announced a value for a parameter r, called the tensor-to-scalar ratio, of 0.2. This parameter r is a way to measure the strength of the gravitational waves (if you want to know what gravitational waves have to do with tensors, this post might help), and thus indirectly the strength of inflation in the early universe.

Trouble is, Planck had already released results arguing that r had to be below 0.11! So a lot of people were already rather skeptical.

With the new evidence, Planck’s bound is relaxed slightly. They now argue that r should be below 0.13, so BICEP2’s evidence was enough to introduce some fuzziness into their measurements when everything was analyzed together.

I’ve complained before about the bad aspects of BICEP2’s announcement, how releasing their data prematurely hurt the public’s trust in science and revealed the nasty side of competition for funding on massive projects. In this post, I’d like to talk a little about the positive side of the publicity around BICEP2.

Lots of theorists care about physics at very very high energies. The scale of string theory, or the Planck mass (no direct connection to the experiment, just the energy where one expects quantum gravity to be relevant), or the energy at which the fundamental forces might unify, are all much higher than any energy we can explore with a particle collider like the LHC. If you had gone out before BICEP2’s announcement and asked physicists whether we would ever see direct evidence for physics at these kinds of scales, they would have given you a resounding no. Maybe we could see indirect evidence, but any direct consequences would be essentially invisible.

All that changed with BICEP2. Their announcement of an r of 0.2 corresponds to very strong inflation, inflation in which the inflaton field moved over a range larger than the Planck mass!

Suddenly, there was hope that, even if we could never see such high-energy physics in a collider, we could see it out in the cosmos. This falls into a wider trend. Physicists have increasingly begun to look to the stars as the LHC continues to show nothing new. But the possibility that the cosmos could give us data that not only meets LHC energies, but surpasses them so dramatically, is something that very few people had realized.

The thing is, that hope is still alive and kicking. The new bound, restricting r to less than 0.13, still allows enormously powerful inflation. (If you’d like to work out the math yourself, equation (14) here relates the scale of inflation \Delta \phi to the Planck mass M_{\textrm{Pl}} and the parameter r.)
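As a rough illustration (not necessarily the exact form of the linked equation (14)), one common version of this relation is the Lyth bound, \Delta \phi \gtrsim M_{\textrm{Pl}} \sqrt{r/8} \, N_e, where N_e is the number of e-folds of observable inflation; N_e \approx 60 is a conventional assumption, not a value from this post. A quick sketch of the numbers:

```python
import math

# Rough inflaton field excursion via the Lyth bound:
#   delta_phi / M_Pl ~ sqrt(r / 8) * N_e
# N_e ~ 60 e-folds is a conventional (assumed) value.
def field_excursion(r, n_efolds=60):
    """Inflaton field range in units of the Planck mass."""
    return math.sqrt(r / 8.0) * n_efolds

for r in (0.2, 0.13, 0.001):
    print(f"r = {r}: delta_phi ~ {field_excursion(r):.1f} M_Pl")
```

Even with r pushed down to the new 0.13 bound, the field excursion comes out at several Planck masses, which is why the Planck-scale hope survives.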

This isn’t just an “it hasn’t been ruled out yet” claim either. Cosmologists tell me that new experiments coming online in the next decade will have much more precision, and much better ability to take account of dust. These experiments should be sensitive to an r as low as 0.001!

With that kind of sensitivity, and the new mindset that BICEP2 introduced, we have a real chance of seeing evidence of Planck-scale physics within the next ten or twenty years. We just have to wait and see if the stars are right…

The Three Things Everyone Gets Wrong about the Big Bang

Ah, the Big Bang, our most science-y of creation myths. Everyone knows the story of how the universe and all its physical laws emerged from nothing in a massive explosion, growing from a singularity to the size of a breadbox until, over billions of years, it became the size it is today.

A hot dense state, if you know what I mean.

…actually, almost nothing in that paragraph is true. There are a lot of myths about the Big Bang, born from physicists giving sloppy explanations. Here are three things most people get wrong about the Big Bang:

1. A Massive Explosion:

When you picture the big bang, don’t you imagine that something went, well, bang?

In movies and TV shows, a time traveler visiting the big bang sees only an empty void. Suddenly, an explosion lights up the darkness, shooting out stars and galaxies until it has created the entire universe.

Astute readers might find this suspicious: if the entire universe was created by the big bang, then where does the “darkness” come from? What does the universe explode into?

The problem here is that, despite the name, the big bang was not actually an explosion.

In picturing the universe as an explosion, you’re imagining the universe as having finite size. But it’s quite likely that the universe is infinite. Even if it is finite, it’s finite like the surface of the Earth: as Magellan’s expedition (and others) demonstrated, you can’t get to the “edge” of the Earth no matter how far you go: eventually, you’ll just end up where you started. If the universe is truly finite, the same is true of it.

Rather than an explosion in one place, the big bang was an explosion everywhere at once. Every point in space was “exploding” at the same time. Each point was moving farther apart from every other point, and the whole universe was, as the song goes, hot and dense.

So what do physicists mean when they say that the universe at some specific time was the size of a breadbox, or a grapefruit?

It’s just sloppy language. When these physicists say “the universe”, what they mean is just the part of the universe we can see today, the Hubble Volume. It is that (enormously vast) space that, once upon a time, was merely the size of a grapefruit. But it was still adjacent to infinitely many other grapefruits of space, each one also experiencing the big bang.

2. It began with a Singularity:

This one isn’t so much definitely wrong as probably wrong.

If the universe obeys Einstein’s Theory of General Relativity perfectly, then we can make an educated guess about how it began. By tracking back the expansion of the universe to its earliest stages, we can infer that the universe was once as small as it can get: a single, zero-dimensional point, or a singularity. The laws of general relativity work the same backwards and forwards in time, so just as we could see a star collapsing and know that it is destined to form a black hole, we can see the universe’s expansion and know that if we traced it back it must have come from a single point.

This is all well and good, but there’s a problem with how it begins: “If the universe obeys Einstein’s Theory of General Relativity perfectly”.

In this situation, general relativity predicts an infinitely small, infinitely dense point. As I’ve talked about before, in physics an infinite result is almost never correct. When we encounter infinity, almost always it means we’re ignoring something about the nature of the universe.

In this case, we’re ignoring Quantum Mechanics. Quantum Mechanics naturally makes physics somewhat “fuzzy”: the Uncertainty Principle means that a quantum state can never be exactly in one specific place.

Combining quantum mechanics and general relativity is famously tricky, and the difficulty boils down to getting rid of pesky infinite results. However, several approaches to solving this problem exist, the most prominent of them being String Theory.

If you ask someone to list string theory’s successes, one thing you’ll always hear mentioned is string theory’s ability to understand black holes. In general relativity, black holes are singularities: infinitely small, and infinitely dense. In string theory, black holes are made up of combinations of fundamental objects: strings and membranes, curled up tight, but crucially not infinitely small. String theory smooths out singularities and tamps down infinities, and the same story applies to the infinity of the big bang.

String theory isn’t alone in this, though. Less popular approaches to quantum gravity, like Loop Quantum Gravity, also tend to “fuzz” out singularities. Whichever approach you favor, it’s pretty clear at this point that the big bang didn’t really begin with a true singularity, just a very compressed universe.

3. It created the laws of physics:

Physicists will occasionally say that the big bang determined the laws of physics. Fans of Anthropic Reasoning in particular will talk about different big bangs in different places in a vast multiverse, each producing different physical laws.

I’ve met several people who were very confused by this. If the big bang created the laws of physics, then what laws governed the big bang? Don’t you need physics to get a big bang in the first place?

The problem here is that “laws of physics” doesn’t have a precise definition. Physicists use it to mean different things.

In one (important) sense, each fundamental particle is its own law of physics. Each one represents something that is true across all of space and time, a fact about the universe that we can test and confirm.

However, these aren’t the most fundamental laws possible. In string theory, the particles that exist in our four dimensions (three space dimensions, and one of time) change depending on how six “extra” dimensions are curled up. Even in ordinary particle physics, the value of the Higgs field determines the mass of the particles in our universe, including things that might feel “fundamental” like the difference between electromagnetism and the weak nuclear force. If the Higgs field had a different value (as it may have early in the life of the universe), these laws of physics would have been different. These sorts of laws can be truly said to have been created by the big bang.

The real fundamental laws, though, don’t change. Relativity is here to stay, no matter what particles exist in the universe. So is quantum mechanics. The big bang didn’t create those laws, it was a natural consequence of them. Rather than springing physics into existence from nothing, the big bang came out of the most fundamental laws of physics, then proceeded to fix the more contingent ones.

In fact, the big bang might not have even been the beginning of time! As I mentioned earlier in this article, most approaches to quantum gravity make singularities “fuzzy”. One thing these “fuzzy” singularities can do is “bounce”, going from a collapsing universe to an expanding universe. In Cyclic Models of the universe, the big bang was just the latest in a cycle of collapses and expansions, extending back into the distant past. Other approaches, like Eternal Inflation, instead think of the big bang as just a local event: our part of the universe happened to be dense enough to form a big bang, while other regions were expanding even more rapidly.

So if you picture the big bang, don’t just imagine an explosion. Imagine the entire universe expanding at once, changing and settling and cooling until it became the universe as we know it today, starting from a world of tangled strings or possibly an entirely different previous universe.

Sounds a bit more interesting to visit in your TARDIS, no?

Physical Truths, Lost to the Ages

For all you tumblr-ers out there (tumblr-ists? tumblr-dwellers?), 4 gravitons is now on tumblr. It’s mostly going to be links to my blog posts, with the occasional re-blog of someone else’s work if something catches my eye.

Nima Arkani-Hamed gave a public lecture at Perimeter yesterday, which I encourage you to watch if you have time, once it’s up on the Perimeter site. He also gave a technical talk earlier in the day, where he finished up by making the following (intentionally) provocative statement:

There is no direct evidence of what happened during the Big Bang that could have survived till today.

He clarified that he doesn’t just mean “evidence we can currently detect”. Rather, there’s a limit on what we can know, even with the most precise equipment possible. The details of what happened at the Big Bang (the sorts of precise details that would tell you, for example, whether it is best described by String Theory or some other picture) would get diluted as the universe expands, until today they would be so subtle and so rare that they fall below the level we could even in principle detect. We simply don’t have enough information available, no matter how good our technology gets, to detect them in a statistically significant way.

If this talk had happened last week, I could have used this in my spooky Halloween post. This is exactly the sort of thing that keeps physicists up at night: the idea that, fundamentally, there may be things we can never truly know about the universe, truths lost to the ages.

It’s not quite as dire as it sounds, though. To explain why, let me mention another great physics piece, Tom Stoppard’s Arcadia.

Despite appearances, this is in fact a great work of physics popularization.

Arcadia is a play about entropy. The play depicts two time periods, the early 19th century and the present day. In the present day a pair of scholars, Hannah and Bernard, argue about the events of the 19th century, when the house was occupied by a mathematically precocious girl named Thomasina and her tutor Septimus. Thomasina makes early discoveries about fractals and (to some extent) chaos theory, while Septimus gradually falls in love with her. In the present, the two scholars gradually get closer to the truth, going from a false theory that one of the guests at the house was killed by Lord Byron, to speculation that Septimus was the one to discover fractals, to finally getting a reasonably accurate idea of how the events of the story unfolded. Still, they never know everything, and the play emphasizes that certain details (documents burned in a fire, the true feelings of some of the people) will be forever lost to the ages.

The key point here is that, even with incomplete information, even without the ability to fully test their hypotheses and get all the details, the scholars can still make progress. They can propose accounts of what happened, accounts that have implications they can test, that might be proven wrong or right by future discoveries. Their accounts will also have implications they can’t test: lost letters, feelings never written down. But the better their account, the more it will explain, and the longer it will agree with anything new they manage to turn up.

That’s the way out of the problem Nima posed. We can’t know the truth of what happened at the Big Bang directly. But if we have a theory of physics that describes everything we can test, it’s likely to also make a prediction for what happened in the Big Bang. In science, most of the time you don’t have direct evidence. Rather, you have a successful theory, one that has succeeded under scrutiny many times in many contexts, enough that you trust it even when it goes out of the area you’re comfortable testing. That’s why physicists can make statements about what it’s like on the inside of a black hole, and it’s why it’s still good science to think about the Big Bang even if we can’t gather direct evidence about the details of how it took place.

All that said, Nima is well aware of this, and the problem still makes him uncomfortable. It makes me uncomfortable too. Saying that something is completely outside of our ability to measure, especially something as fundamental and important as the Big Bang, is not something we physicists can generally be content with. Time will tell whether there’s a way around the problem.

Love It or Hate It, Don’t Fear the Multiverse

“In an infinite universe, anything is possible.”

A nice maxim for science fiction, perhaps. But it probably doesn’t sound like productive science.

A growing number of high profile scientists and science popularizers have come out in favor of the idea that there may exist a “multiverse” of multiple universes, and that this might explain some of the unusual properties of our universe. If there are multiple universes, each with different physical laws, then we must exist in one of the universes with laws capable of supporting us, no matter how rare or unlikely such a universe is. This sort of argument is called anthropic reasoning.

(If you’re picky about definitions and don’t like the idea of more than one universe, think instead of a large universe with many different regions, each one separated from the others. There are some decent physics-based reasons to suppose we live in such a universe.)

Not to mention continuity reasons.

Why is anyone in favor of this idea? It all goes back to the Higgs.

The Higgs field interacts with other particles, giving them mass. What most people don’t mention is that the effect, in some sense, goes both ways. Because the Higgs interacts with other particles, the mass of the Higgs is also altered. This alteration is large, much larger than the observed mass of the Higgs. (In fact, in a sense it’s infinite!)

In order for the Higgs to have the mass we observe, then, something has to cancel out these large corrections. That cancellation can either be a coincidence, or there can be a reason for it.

The trouble is, we’re running out of good reasons. One of the best was supersymmetry, the idea that each particle has a partner with tightly related properties. But if supersymmetry was going to save the day, we probably would have detected some of those partners at the Large Hadron Collider by now. More generally, it can be argued that almost all possible “good reasons” require some new particle to be found at the LHC.

If there are no good reasons, then we’re stuck with a coincidence. (This is often referred to as the Naturalness Problem in particle physics.) And it’s this uncomfortable coincidence that has driven prominent physicists to the arms of the multiverse.
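To get a feel for the size of that coincidence, the dominant one-loop correction to the Higgs mass-squared from physics at a high cutoff scale \Lambda goes roughly like y^2 \Lambda^2 / 16 \pi^2. A toy estimate (taking the top-quark coupling y \approx 1 and the reduced Planck mass as the cutoff; both are conventional ballpark assumptions, not values from this post):

```python
import math

# Toy estimate of the Higgs Naturalness Problem: compare the observed
# Higgs mass-squared to a one-loop correction of rough size
#   delta_m^2 ~ y^2 * Lambda^2 / (16 * pi^2),
# with the cutoff Lambda taken at the reduced Planck mass.
m_higgs = 125.0   # GeV, observed Higgs mass
y_top = 1.0       # top Yukawa coupling, roughly 1
cutoff = 2.4e18   # GeV, reduced Planck mass as an assumed cutoff

delta_m2 = y_top**2 * cutoff**2 / (16 * math.pi**2)
tuning = delta_m2 / m_higgs**2
print(f"correction / observed mass^2 ~ {tuning:.1e}")
```

The mismatch comes out around thirty orders of magnitude: the “coincidence” is a cancellation accurate to one part in roughly 10^30, which is why so many physicists find it uncomfortable.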

There’s a substantial backlash, though. Many people view the multiverse as a cop-out. Some believe it to be even more toxic than that: if there’s a near-infinite number of possible universes then in principle any unusual feature of our universe could be explained by anthropic reasoning, which sounds like it could lead to the end of physics as we know it.

You can disdain the multiverse as a cop-out, but, as I’ll argue here, you shouldn’t fear it. Those who think the multiverse will destroy physics are fundamentally misunderstanding the way physics research works.

The key thing to keep in mind is that almost nobody out there prefers the multiverse. When a prominent physicist supports the multiverse, that doesn’t mean they’re putting aside productive work on other solutions to the problem. In general, it means they don’t have other solutions to the problem. Supporting the multiverse isn’t going to stop them from having ideas: those ideas weren’t coming to begin with.

And indeed, many of these people are quite supportive of alternatives to the multiverse. I’ve seen Nima Arkani-Hamed talk about the multiverse, and he generally lists a number of other approaches (some quite esoteric!) that he has worked (and failed to make progress) on, and encourages the audience to look into them.

Physics isn’t a zero-sum game, nor is it ruled by a few prominent people. If a young person has a good idea about how to explain something without the multiverse, they’re going to have all the support and recognition that such an idea deserves.

What the multiverse adds is another track, another potentially worthwhile line of research. Surprising as it may seem, the multiverse doesn’t automatically answer every question. It might not even answer the question of the mass of the Higgs! All that the existence of a multiverse tells us is that we should exist somewhere where intelligent life could exist…but if intelligent life is more likely to exist in a universe very different from ours, then we’re back to square one. There’s a lot of research involved in figuring out just what the multiverse implies, research by people who wouldn’t have been working on this sort of problem if the idea of the multiverse hadn’t been proposed.

That’s the key take-away message here. The multiverse may be wrong, but just considering it isn’t going to destroy physics. Rather, it’s opened up new avenues of research, widening the community of those trying to solve the Naturalness Problem. It may well be a cop-out for individuals, but science as a whole doesn’t have cop-outs: there’s always room for someone with a good idea to sweep away the cobwebs and move things forward.

(Interstellar) Dust In The Wind…

The news has hit the blogosphere: the team behind the Planck satellite has released new dust measurements, and they seem to be a nail in the coffin of BICEP2’s observation of primordial gravitational waves.

Some background for those who haven’t been following the story:

BICEP2, a telescope in Antarctica, is set up to observe the Cosmic Microwave Background, light left over from the very early universe. Back in March, they announced that they had seen characteristic ripples in that light, ripples that they believed were caused by gravitational waves in the early universe. By comparing the size of these gravitational waves to their (quantum-small) size when they were created, they could make statements about the exponential expansion of the early universe (called inflation). This amounted to better (and more specific) evidence about inflation than anyone else had ever found, so naturally people were very excited about it.

However, doubt was rather quickly cast on these exciting results. Like all experimental science, BICEP2 needed to estimate the chance that their observations could be caused by something more mundane. In particular, interstellar dust can cause similar “ripples” to those they observed. They argued that dust would have contributed a much smaller effect, so their “ripples” must be the real deal…but to make this argument, they needed an estimate of how much dust they should have seen. They had several estimates, but one in particular was based on data “scraped” off of a slide from a talk by the Planck collaboration.

Unfortunately, it seems that the BICEP2 team misinterpreted this “scraped” data. Now, Planck have released the actual data, and it seems like dust could account for BICEP2’s entire signal.

I say “could” because more information is needed before we know for sure. The BICEP2 and Planck teams are working together now, trying to tease out whether BICEP2’s observations are entirely dust, or whether there might still be something left.

I know I’m not the only person who wishes that this sort of collaboration could have happened before BICEP2 announced their discovery to the world. If Planck had freely shared their early data with BICEP2, they would have had accurate dust estimates to begin with, and they wouldn’t have announced all of this prematurely.

Of course, expecting groups to freely share data when Nobel prizes and billion-dollar experiments are on the line is pretty absurdly naive. I just wish we lived in a world where none of this was at issue, where careers didn’t ride on “who got there first”.

I’ve got no idea how to bring about such a world, of course. Any suggestions?

Insert Muscle Joke Here

I’m graduating this week, so I probably shouldn’t spend too much time writing this post. I ought to mention, though, that there has been some doubt about the recent discovery by the BICEP2 telescope of evidence for gravitational waves in the cosmic microwave background caused by the early inflation of the universe. Résonaances got to the story first and Of Particular Significance has some good coverage that should be understandable to a wide audience.

In brief, the worry is that the signal detected by BICEP2 might not be caused by inflation, but instead by interstellar dust. While the BICEP2 team used several models of dust to show that it should be negligible, the controversy centers around one of these models in particular, one taken from another, similar experiment called PLANCK.

The problem is, BICEP2 didn’t get PLANCK’s information on dust directly. Instead, it appears they took the data from a slide in a talk by the PLANCK team. This process, known as “data scraping”, involves taking published copies of the slides and reading information off of the charts presented. If BICEP2 misinterpreted the slide, they might have miscalculated the contribution by interstellar dust.

If you’re like me, you’ll find the whole idea of data scraping completely ludicrous. Professional scientists sneaking information off of a presentation, rather than simply asking the other team for data like reasonable human beings, feels almost cartoonishly wrong-headed.

It’s a bit more understandable, though, when you think about the culture behind these big experiments. The Planck and BICEP2 teams are colleagues, but they are also competitors. There is an enormous amount of glory in finding evidence for something like cosmic inflation first, and an equally enormous amount of shame in announcing something that turns out to be wrong. As such, these experiments are quite protective of their data: someone with early access might not only preempt them on an important discovery, but rush to publish a conclusion that turns out to be wrong. That’s why most of these big experiments spend a long time checking and re-checking the data, communicating amongst themselves and settling on an interpretation before they feel comfortable releasing it to the wider community. It’s why BICEP2 couldn’t just ask Planck for their data.

From BICEP2’s perspective, they could reasonably expect that plots presented in a talk by Planck would be accurate renderings of the underlying data. Unlike Fox News, scientists have an obligation to present their data in a way that isn’t misleading. And while relying on such a dubious source seems like a bad idea, by all accounts BICEP2 didn’t rely on it alone: Planck’s data was just one of several dust models the team used, kept in part because it agreed well with other, non-“data-scraped” models.

It’s a shame that these experiments are so large and prestigious that they need to guard their data in such a potentially destructive way. My sub-field is generally much nicer about this sort of thing: the stakes are lower, and the groups are smaller and have less media attention, so we’re able to share data when we need to. In fact, my most recent paper got a significant boost from some data shared by folks at the Perimeter Institute.

Only time will tell whether the BICEP2 result wins out, or whether it was a fluke caused by these corrosive data-sharing practices. A number of other experiments are coming online within the next year, and one of them may confirm or refute what BICEP2 has shown.

Flexing the BICEP2 Results

The physicsverse has been abuzz this week with news of the BICEP2 experiment’s observations of B-mode polarization in the Cosmic Microwave Background.

There are lots of good sources on this, and it’s not really my field, so I’m just going to give a quick summary before talking about a few aspects I find interesting.

BICEP2 is a telescope in Antarctica that observes the Cosmic Microwave Background, light left over from the first time the universe was clear enough for light to travel. (If you’re interested in background on what we know about how the universe began, Of Particular Significance has a fairly detailed article here, and I have a take on some more speculative aspects here.) Earlier experiments that observed the Cosmic Microwave Background discovered a surprising amount of uniformity. This led to the proposal of a concept called inflation: the idea that at some point the early universe expanded exponentially, smearing any non-uniformities across the sky and smoothing everything out. Since the rate at which the universe expands is just a number, the natural way to let that number vary is to promote it to a scalar field, in this case called the inflaton.

During inflation, distances themselves get stretched out. Think about inflation like enlarging an image. As you’ve probably noticed (maybe even in early posts on this blog), enlarging an image doesn’t always work out well. The resulting image is often pixelated or distorted. Some of the distortion comes from glitches in the program that enlarges the image, while some of it is just what happens when the pixels of the original image get enlarged to the point that you can see them.

Enlarging the Cosmic Microwave Background
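The “pixel” half of the analogy above is easy to make concrete. Here’s a minimal sketch, using a hypothetical two-by-two “image” of 0s and 1s, of nearest-neighbor enlargement: each original pixel becomes a visible block, which is exactly the pixelation the analogy describes.

```python
# Toy nearest-neighbor enlargement: each pixel of a tiny 2D "image"
# becomes a factor-by-factor block in the enlarged version.
def enlarge(image, factor):
    """Scale up a 2D list by an integer factor, nearest-neighbor style."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in image for _ in range(factor)]

tiny = [[0, 1],
        [1, 0]]
big = enlarge(tiny, 2)
for row in big:
    print(row)
# Each original pixel is now a 2x2 block:
# [0, 0, 1, 1]
# [0, 0, 1, 1]
# [1, 1, 0, 0]
# [1, 1, 0, 0]
```

Blow the factor up far enough and the individual pixels of the original dominate the result, which is the role quantum fluctuations of spacetime play in the inflation story.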

Quantum fluctuations in the inflaton field itself are the glitches in the program, enlarging some areas more than others. The pattern they create in the Cosmic Microwave Background is called E-mode polarization, and several other experiments have been able to detect it.

Much weaker is the effect of the “pixels” of the original image. Since the original image is spacetime itself, the pixels are the quantum fluctuations of spacetime: quantum gravity waves. Inflation enlarged them to the point that they were visible on large distance scales, fundamental non-uniformity in the world blown up big enough to affect the distribution of light. The effect this had on light is detectably different: it’s called B-mode polarization, and BICEP2 is the first experiment to detect it on the right scale for it to be caused by gravity waves.

Measuring this polarization, in particular how strong it is, tells us a lot about how inflation occurred. It’s enough to rule out several models, and lend support to several others. If the results are corroborated this will be real, useful evidence, the sort physicists love to get, and folks are happily crunching numbers on it all over the world.

All that said, this site is called four gravitons and a grad student, and I’m betting that some of you want to ask this grad student: is this evidence for gravitons, or for gravity waves?

Sort of.

We already had good indirect evidence for gravity waves: pairs of neutron stars release gravity waves as they orbit each other, which causes them to slow down. Since we’ve observed them slowing down at the right rates, we were already confident gravity waves exist. And if you’ve got gravity waves, gravitons follow as a natural consequence of quantum mechanics.

The data from BICEP2 is also indirect. The gravity waves “observed” by BICEP2 were present in the early universe; it is their effect on the light that would become the Cosmic Microwave Background that is being observed, not the gravity waves themselves. We have yet to directly detect gravity waves with a gravity telescope like LIGO.

On the other hand, a “gravity telescope” isn’t exactly direct either. In order to detect gravity waves, LIGO and other gravity telescopes attempt to measure their effect on the distances between objects. How do they do that? By looking at interference patterns of light.

In both cases, we’re looking at light, present in the environment of a gravity wave, and examining its properties. Of course, in a gravity telescope the light is from a nearby environment under tight control, while the Cosmic Microwave Background is light from as far away and long ago as anything within the reach of science today. In both cases, though, it’s not nearly as simple as “observing” an effect. “Seeing” anything in high energy physics or astrophysics is always a matter of interpreting data based on science we already know.

Alright, that’s evidence for gravity waves. Does that mean evidence for gravitons?

I’ve seen a few people describe BICEP2’s results as evidence for quantum gravity/quantum gravity effects. I felt a little uncomfortable with that claim, so I asked Matt Strassler what he thought. I think his perspective on this is the right one. Quantum gravity is just what happens when gravity exists in a quantum world. As I’ve said on this site before, quantum gravity is easy. The hard part is making a theory of quantum gravity that has real predictive power, and that’s something these results don’t shed any light on at all.

That said, I’m a bit conflicted. They really are seeing a quantum effect in gravity, and as far as I’m aware this really is the first time such an effect has been observed. Gravity is so weak, and quantum gravity effects so small, that it takes inflation blowing them up across the sky for them to be visible. Now, I don’t think there was anyone out there who thought gravity didn’t have quantum fluctuations (or at least, anyone with a serious scientific case). But seeing into a new regime, even if it doesn’t tell us much…that’s important, isn’t it? (After writing this, I read Matt Strassler’s more recent post, where he has a paragraph professing similar sentiments).

On yet another hand, I’ve heard it asserted in another context that loop quantum gravity researchers don’t know how to get gravitons. I know nothing about the technical details of loop quantum gravity, so I don’t know if that actually has any relevance here…but it does amuse me.

Braaains…Boltzmann Braaaains…

In honor of Halloween yesterday, let me tell you a scary physics story:

Sarah was an ordinary college student, in an ordinary dorm room, ordinary bean bag chairs strewn around an ordinary bed with ordinary pink sheets. If she concentrated, she could imagine her ordinary parents back home in ordinary Minnesota. In her ordinary physics textbook on her ordinary desk, ordinary laws of physics were written, described as the result of centuries of experimentation.

Unbeknownst to Sarah, the universe was much more chaotic and random than she realized, and also much more vast. Arbitrary collections of matter formed and dissipated, and over the universe’s long history, any imaginable combination might come to be.

Combinations like Sarah.

You see, Sarah too was a random combination, a chance arrangement of particles formed only a bare few moments ago. In truth, she had no ordinary parents, nor was she surrounded by an ordinary college, and the laws of physics that her textbook asserted were discovered through centuries of experimentation were just a moment’s distribution of ink on a page.

And as she got up to open the door into the vast dark of the outside, her world dissipated, and she ceased to exist.

That’s the life of a Boltzmann Brain. If a universe is random and old enough, it is inevitable that such minds exist. They might have memories of an extended, orderly world, but these would just be illusions, chance arrangements of their momentary neurons. What’s more, they may think they know the laws of physics through careful experiment and reasoning, but such knowledge would be illusory as well. And most frightening of all, if the universe is truly ancient and unimaginably vast, there would be many orders of magnitude more Boltzmann Brains than real humans: so many that it would almost certainly be the case that you are in fact a Boltzmann Brain right now!

This is legitimately worrying to some physicists. The situation gets a bit more interesting when you remember that, as a Boltzmann Brain, anything you know about physics may well be a lie, since the history of research you think exists might not have. The problem is, if you manage to prove that you are probably a Boltzmann Brain, you had to use physics to do it. But your physics is probably wrong!

This, as Sean Carroll argues, is why the concept of a Boltzmann Brain is self-defeating. It is, in a way, a logical impossibility. And if a universe dominated by Boltzmann Brains is logically impossible, then any physics that makes Boltzmann Brains more likely than normal humans must similarly be wrong. That’s Carroll’s argument, one he uses to draw specific physical conclusions about the real world, namely a proposal about the properties of the Higgs boson.

It might seem philosophically illegitimate to use such a paradox to argue about the real world. However, philosophers make a similar argument when it comes to such “reality is a lie” scenarios. In general, modern philosophers point out that any argument purporting to prove that all of our knowledge is false or meaningless by necessity also proves itself false or meaningless. This is what allows analytic philosophy to carry on and make progress, even if it can’t reject the idea that reality is an illusion by more objective means.

With that said, there seems to be a difference between simply rejecting arguments that “show” that the world is an illusion or that we are all Boltzmann Brains, and using those arguments to draw conclusions about other parts of the world. I would be curious if there are similar arguments to Carroll’s in philosophy, arguments that draw conclusions more specific than “we exist and can know things”. Any philosopher readers should feel welcome to chime in in the comments!

And for the rest of you, you probably aren’t a Boltzmann Brain. But if the outside world looks a little too dark tonight…