Physical Truths, Lost to the Ages

For all you tumblr-ers out there (tumblr-ists? tumblr-dwellers?), 4 gravitons is now on tumblr. It’s mostly going to be links to my blog posts, with the occasional re-blog of someone else’s work if something catches my eye.

Nima Arkani-Hamed gave a public lecture at Perimeter yesterday, which I encourage you to watch if you have time, once it’s up on the Perimeter site. He also gave a technical talk earlier in the day, where he finished up by making the following (intentionally) provocative statement:

There is no direct evidence of what happened during the Big Bang that could have survived till today.

He clarified that he doesn’t just mean “evidence we can currently detect”. Rather, there’s a limit on what we can know, even with the most precise equipment possible. The details of what happened at the Big Bang (the sorts of precise details that would tell you, for example, whether it is best described by String Theory or some other picture) would get diluted as the universe expands, until today they would be so subtle and so rare that they fall below the level we could even in principle detect. We simply don’t have enough information available, no matter how good our technology gets, to detect them in a statistically significant way.

If this talk had happened last week, I could have used this in my spooky Halloween post. This is exactly the sort of thing that keeps physicists up at night: the idea that, fundamentally, there may be things we can never truly know about the universe, truths lost to the ages.

It’s not quite as dire as it sounds, though. To explain why, let me mention another great physics piece, Tom Stoppard’s Arcadia.

Despite appearances, this is in fact a great work of physics popularization.

Arcadia is a play about entropy. The play depicts two time periods, the early 19th century and the present day. In the present day a pair of scholars, Hannah and Bernard, argue about the events of the 19th century, when the house was occupied by a mathematically precocious girl named Thomasina and her tutor Septimus. Thomasina makes early discoveries about fractals and (to some extent) chaos theory, while Septimus gradually falls in love with her. In the present, the two scholars gradually get closer to the truth, going from a false theory that one of the guests at the house was killed by Lord Byron, to speculation that Septimus was the one to discover fractals, to finally getting a reasonably accurate idea of how the events of the story unfolded. Still, they never know everything, and the play emphasizes that certain details (documents burned in a fire, the true feelings of some of the people) will be forever lost to the ages.

The key point here is that, even with incomplete information, even without the ability to fully test their hypotheses and get all the details, the scholars can still make progress. They can propose accounts of what happened, accounts that have implications they can test, that might be proven wrong or right by future discoveries. Their accounts will also have implications they can’t test: lost letters, feelings never written down. But the better their account, the more it will explain, and the longer it will agree with anything new they manage to turn up.

That’s the way out of the problem Nima posed. We can’t know the truth of what happened at the Big Bang directly. But if we have a theory of physics that describes everything we can test, it’s likely to also make a prediction for what happened in the Big Bang. In science, most of the time you don’t have direct evidence. Rather, you have a successful theory, one that has succeeded under scrutiny many times in many contexts, enough that you trust it even when it goes out of the area you’re comfortable testing. That’s why physicists can make statements about what it’s like on the inside of a black hole, and it’s why it’s still good science to think about the Big Bang even if we can’t gather direct evidence about the details of how it took place.

All that said, Nima is well aware of this, and the problem still makes him uncomfortable. It makes me uncomfortable too. Saying that something is completely outside of our ability to measure, especially something as fundamental and important as the Big Bang, is not something we physicists can generally be content with. Time will tell whether there’s a way around the problem.

Why I Can’t Explain Ghosts: Or, a Review of a Popular Physics Piece

Since today is Halloween, I really wanted to write a post talking about the spookiest particles in physics, ghosts.

And their superpartners, ghost riders.

The problem is, in order to explain ghosts I’d have to explain something called gauge symmetry. And gauge symmetry is quite possibly the hardest topic in modern physics to explain to a general audience.

Deep down, gauge symmetry is the idea that irrelevant extra parts of how we represent things in physics should stay irrelevant. While that sounds obvious, it’s far from obvious how you can go from that to predicting new particles like the Higgs boson.
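For readers who have seen a little electromagnetism, the textbook instance of this redundancy involves the vector potential. In one common sign convention (conventions vary), the transformation

$$A_\mu \;\to\; A_\mu - \partial_\mu \alpha(x), \qquad \psi \;\to\; e^{ie\alpha(x)}\,\psi$$

shifts the potential and the electron’s quantum phase together, for any function $\alpha(x)$, while leaving the measurable electric and magnetic fields completely untouched. That arbitrary $\alpha$ is the “irrelevant extra part”, and demanding that it stay irrelevant turns out to tightly constrain how the forces can work.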

Explaining this is tough! Tough enough that I haven’t thought of a good way to do it yet.

Which is why I was fairly stoked when a fellow postdoc pointed out a recent popular physics article by Juan Maldacena, explaining gauge symmetry.

Juan Maldacena is a Big Deal. He’s the guy who figured out the AdS/CFT correspondence, showing that string theory (in a particular hyperbola-shaped space called AdS) and everybody’s favorite N=4 super Yang-Mills theory are secretly the same, a discovery which led to a Big Blue Dot on Paperscape. So naturally, I was excited to see what he had to say.

Big Blue Dot pictured here.

The core analogy he makes is with currencies in different countries. Just like gauge symmetry, currencies aren’t measuring anything “real”: they’re arbitrary conventions put in place because we don’t have a good way of just buying things based on pure “value”. However, also like gauge symmetry, they can have real-life consequences, as different currency exchange rates can lead to currency speculation, letting some people make money and others lose money. In Maldacena’s analogy the Higgs field works like a precious metal, making differences in exchange rates manifest as different prices of precious metals in different countries.

It’s a solid analogy, and one that is quite close to the real mathematics of the problem (as the paper’s Appendix goes into detail to show). However, I have some reservations, both about the paper as a whole and about the core analogy.

In general, Maldacena doesn’t do a very good job of writing something publicly accessible. There’s a lot of stilted, academic language, and a lot of use of “we” to do things other than lead the reader through a thought experiment. There’s also a sprinkling of terms that I don’t think the average person will understand; for example, I doubt the average college student knows flux as anything other than a zany card game.

Regarding the analogy itself, I think Maldacena has fallen into the common physicist trap of making an analogy that explains things really well…if you already know the math.

This is a problem I see pretty frequently. I keep picking on this article, and I apologize for doing so, but it’s got a great example of this when it describes supersymmetry as involving “a whole new class of number that can be thought of as the square roots of zero”. That’s a really great analogy…if you’re a student learning about the math behind supersymmetry. If you’re not, it doesn’t tell you anything about what supersymmetry does, or how it works, or why anyone might study it. It relates something unfamiliar to something unfamiliar.

I’m worried that Maldacena is doing that in this paper. His setup is mathematically rigorous, but doesn’t say much about the why of things: why do physicists use something like this economic model to understand these forces? How does this lead to what we observe around us in the real world? What’s actually going on, physically? What do particles have to do with dimensionless constants? (If you’re curious about that last one, I like to think I have a good explanation here.)

It’s not that Maldacena ignores these questions, he definitely puts effort into answering them. The problem is that his analogy itself doesn’t really address them. They’re the trickiest part, the part that people need help picturing and framing, the part that would benefit the most from a good analogy. Instead, the core imagery of the piece is wasted on details that don’t really do much for a non-expert.

Maybe I’m wrong about this, and I welcome comments from non-physicists. Do you feel like Maldacena’s account gives you a satisfying idea of what gauge symmetry is?

The Hardest Audience Knows Just Enough to Be Dangerous

You’d think that it would be hard to explain physics to people who know absolutely nothing about physics.

And you might be right, if there was anyone these days who knew absolutely nothing about physics. If someone didn’t know what atoms were, or didn’t know what a physicist was, then yes it would take quite a while to explain anything more than the basics. But most people know what atoms are, and know what physicists are, and at least have a basic idea that there are things called protons and neutrons and electrons.

And that’s often enough. Starting with a basis like that, I can talk people through the Large Hadron Collider, I can get them to picture Feynman Diagrams, I can explain, roughly, what it is I do.

On the other end, it’s not all that hard to explain what I do to people in my sub-field. Working on the same type of physics is like sharing a language: we have all sorts of terms to make explaining easier. While it’s still possible to trip up and explain too much or too little (a recent talk I gave left out the one part that one member of the audience needed…because everyone else would have gotten nothing out of it), you’re protected by a buffer of mutual understanding.

The hardest talks aren’t for the public, and they aren’t for fellow amplitudes-researchers. They’re for a general physics audience.

If you’re talking to physicists, you can’t start with protons and neutrons. Do that, and your audience is going to get annoyed with you rather quickly. You can’t rely on the common understanding everyone has of physics. In addition to making your audience feel like they’re being talked down to, you won’t manage to say anything substantial. You need to start at a higher level so that when you do describe what you do, it’s in enough detail that your audience feels like they really understand it.

At the same time, you can’t start with the jargon of your sub-field. If you want to really explain something (and not just have fifteen minutes of background before everyone tunes out) you need to build off of a common understanding.

The tricky part is, that “common understanding” is more elusive than you might think. For example, pretty much every physicist has some familiarity with Quantum Field Theory…but that can mean anything from “uses it every day” to “saw it a couple times back in grad school”. Too much background, and half your audience is bored. Too little, and half your audience is lost. You have to strike the proper balance, trying to show everyone enough to feel satisfied.

There are tricks to make this easier. I’ve noticed that some of the best speakers begin with a clever and unique take on something everyone understands. That way, people in very different fields will still have something they recognize, while people in the same field will still be seeing something new. Of course, the tricky part is coming up with a new example in the first place!

In general, I need to get better at estimating where my audience is. Talking to you guys is fun, but I ought to also practice a “physics voice” for discussions with physicists (as well as grants and applications), and an “amplitudes voice” for fellow specialists. The key to communication, as always, is knowing your audience.

A Nobel for Blue LEDs, or, How Does That Count as Physics?

When I first heard about this year’s Nobel Prize in Physics, I didn’t feel the need to post on it. The prize went to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, whose discoveries enabled blue LEDs. It’s a more impressive accomplishment than it might seem: while red LEDs have been around since the ’60s and ’70s, blue LEDs were only developed in the ’90s, and only with both can highly efficient, LED-based white light sources be made. Still, I didn’t consider posting on it because it’s pretty much entirely outside my field.

Shiny, though

It took a conversation with another Perimeter Institute postdoc to point out one way I can comment on the Nobel, and it started when we tried to figure out what type of physicists Akasaki, Amano, and Nakamura are. After tossing around terms like “device physicist” and “condensed matter”, someone wondered whether the development of blue LEDs wasn’t really a matter of engineering.

At that point I realized, I’ve talked about something like this before.

Physicists work on lots of different things, and many of them don’t seem to have much to do with physics. They study geometry and topology, biological molecules and the nature of evolution, income inequality and, yes, engineering.

On the surface, these don’t have much to do with physics. A friend of mine used to quip that condensed matter physicists seem to just “pick whatever they want to research”.

There is something that ties all of these topics together, though. They’re all things that physicists are good at.

Physics grad school gives you a wide variety of tools with which to understand the world. Thermodynamics gives you a way to understand large, complicated systems with statistics, while quantum field theory lets you understand everything with quantum properties, not just fundamental particles but materials as well. This batch of tools can be applied to “traditional” topics, but they’re equally applicable if you’re researching something else entirely, as long as it obeys the right kinds of rules.

In the end, the best definition of physics is the most useful one. Physicists should be people who can benefit from being part of physics organizations, from reading physics journals, and especially from training (and having been) physics grad students. The whole reason we have scientific disciplines in the first place is to make it easier for people with common interests to work together. That’s why Akasaki, Amano, and Nakamura aren’t “just” engineers, and why I and my fellow string theorists aren’t “just” mathematicians. We use our knowledge of physics to do our jobs, and that, more than anything else, makes us physicists.


Edit: It has been pointed out to me that there’s a bit more to this story than the main accounts have let on. Apparently another researcher named Herbert Paul Maruska was quite close to getting a blue LED up and running back in the early 1970s, getting far enough to have a working prototype. There’s a whole fascinating story about the quest for a blue LED, related here. Maruska seems to be on friendly terms with Akasaki, Amano, and Nakamura, and doesn’t begrudge them their recognition.

Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably these programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…

I’ll just leave this here.

Love It or Hate It, Don’t Fear the Multiverse

“In an infinite universe, anything is possible.”

A nice maxim for science fiction, perhaps. But it probably doesn’t sound like productive science.

A growing number of high-profile scientists and science popularizers have come out in favor of the idea that there may exist a “multiverse” of multiple universes, and that this might explain some of the unusual properties of our universe. If there are multiple universes, each with different physical laws, then we must exist in one of the universes with laws capable of supporting us, no matter how rare or unlikely such a universe is. This sort of argument is called anthropic reasoning.

(If you’re picky about definitions and don’t like the idea of more than one universe, think instead of a large universe with many different regions, each one separated from the others. There are some decent physics-based reasons to suppose we live in such a universe.)

Not to mention continuity reasons.

Why is anyone in favor of this idea? It all goes back to the Higgs.

The Higgs field interacts with other particles, giving them mass. What most people don’t mention is that the effect, in some sense, goes both ways. Because the Higgs interacts with other particles, the mass of the Higgs is also altered. This alteration is large, much larger than the observed mass of the Higgs. (In fact, in a sense it’s infinite!)

In order for the Higgs to have the mass we observe, then, something has to cancel out these large corrections. That cancellation can either be a coincidence, or there can be a reason for it.
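Schematically (with order-one factors dropped), the problem looks like this:

$$m_h^2 \;=\; m_{\text{bare}}^2 + \delta m^2, \qquad \delta m^2 \;\sim\; \frac{y_t^2}{16\pi^2}\,\Lambda^2,$$

where $y_t$ is the top quark’s coupling to the Higgs and $\Lambda$ is the energy scale up to which we trust the theory. If $\Lambda$ is anywhere near the Planck scale, the two terms on the right have to cancel to roughly thirty decimal places to leave the observed $m_h \approx 125\ \text{GeV}$.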

The trouble is, we’re running out of good reasons. One of the best was supersymmetry, the idea that each particle has a partner with tightly related properties. But if supersymmetry was going to save the day, we probably would have detected some of those partners at the Large Hadron Collider by now. More generally, it can be argued that almost all possible “good reasons” require some new particle to be found at the LHC.

If there are no good reasons, then we’re stuck with a coincidence. (This is often referred to as the Naturalness Problem in particle physics.) And it’s this uncomfortable coincidence that has driven prominent physicists to the arms of the multiverse.

There’s a substantial backlash, though. Many people view the multiverse as a cop-out. Some believe it to be even more toxic than that: if there’s a near-infinite number of possible universes then in principle any unusual feature of our universe could be explained by anthropic reasoning, which sounds like it could lead to the end of physics as we know it.

You can disdain the multiverse as a cop-out, but, as I’ll argue here, you shouldn’t fear it. Those who think the multiverse will destroy physics are fundamentally misunderstanding the way physics research works.

The key thing to keep in mind is that almost nobody out there prefers the multiverse. When a prominent physicist supports the multiverse, that doesn’t mean they’re putting aside productive work on other solutions to the problem. In general, it means they don’t have other solutions to the problem. Supporting the multiverse isn’t going to suppress ideas they never had in the first place.

And indeed, many of these people are quite supportive of alternatives to the multiverse. I’ve seen Nima Arkani-Hamed talk about the multiverse, and he generally lists a number of other approaches (some quite esoteric!) that he has worked (and failed to make progress) on, and encourages the audience to look into them.

Physics isn’t a zero-sum game, nor is it ruled by a few prominent people. If a young person has a good idea about how to explain something without the multiverse, they’re going to have all the support and recognition that such an idea deserves.

What the multiverse adds is another track, another potentially worthwhile line of research. Surprising as it may seem, the multiverse doesn’t automatically answer every question. It might not even answer the question of the mass of the Higgs! All that the existence of a multiverse tells us is that we should exist somewhere where intelligent life could exist…but if intelligent life is more likely to exist in a universe very different from ours, then we’re back to square one. There’s a lot of research involved in figuring out just what the multiverse implies, research by people who wouldn’t have been working on this sort of problem if the idea of the multiverse hadn’t been proposed.

That’s the key take-away message here. The multiverse may be wrong, but just considering it isn’t going to destroy physics. Rather, it’s opened up new avenues of research, widening the community of those trying to solve the Naturalness Problem. It may well be a cop-out for individuals, but science as a whole doesn’t have cop-outs: there’s always room for someone with a good idea to sweep away the cobwebs and move things forward.

(Interstellar) Dust In The Wind…

The news has hit the blogosphere: the team behind the Planck satellite has released new dust measurements, and they seem to be a nail in the coffin of BICEP2’s observation of primordial gravitational waves.

Some background for those who haven’t been following the story:

BICEP2, a telescope in Antarctica, is set up to observe the Cosmic Microwave Background, light left over from the very early universe. Back in March, they announced that they had seen characteristic ripples in that light, ripples that they believed were caused by gravitational waves in the early universe. By comparing the size of these gravitational waves to their (quantum-small) size when they were created, they could make statements about the exponential expansion of the early universe (called inflation). This amounted to better (and more specific) evidence about inflation than anyone else had ever found, so naturally people were very excited about it.

However, doubt was rather quickly cast on these exciting results. As in all experimental science, the BICEP2 team needed to estimate the chance that their observations could be caused by something more mundane. In particular, interstellar dust can cause similar “ripples” to those they observed. They argued that dust would have contributed a much smaller effect, so their “ripples” must be the real deal…but to make this argument, they needed an estimate of how much dust they should have seen. They had several estimates, but one in particular was based on data “scraped” off of a slide from a talk by the Planck collaboration.

Unfortunately, it seems that the BICEP2 team misinterpreted this “scraped” data. Now, Planck have released the actual data, and it seems like dust could account for BICEP2’s entire signal.

I say “could” because more information is needed before we know for sure. The BICEP2 and Planck teams are working together now, trying to tease out whether BICEP2’s observations are entirely dust, or whether there might still be something left.

I know I’m not the only person who wishes that this sort of collaboration could have happened before BICEP2 announced their discovery to the world. If Planck had freely shared their early data with BICEP2, they would have had accurate dust estimates to begin with, and they wouldn’t have announced all of this prematurely.

Of course, expecting groups to freely share data when Nobel prizes and billion-dollar experiments are on the line is pretty absurdly naive. I just wish we lived in a world where none of this was at issue, where careers didn’t ride on “who got there first”.

I’ve got no idea how to bring about such a world, of course. Any suggestions?

So the Higgs is like, everywhere, right?

When I tell people I do particle physics, they generally jump to the first thing they’ve heard of, the Higgs boson. Unfortunately, what most people have heard about the Higgs boson is misleading.

The problem is the “crowded room” metaphor, a frequent favorite of people trying to describe the Higgs. The story goes that the Higgs works like trying to walk through a crowded room: an interesting person (massive particle) will find that the crowd clusters around them, so it becomes harder to make progress, while a less interesting person (less massive or massless particle) will have an easier time traveling through the crowd.

This metaphor gives people the impression that each of us is surrounded by an invisible sea of particles, like an invisible crowd constantly jostling us.

I see Higgs people!

People get very impressed by the idea of some invisible, newly discovered stuff that extends everywhere and surrounds everything. The thing is, this really isn’t the unique part of the Higgs. In fact, every fundamental particle works like this!

In physics, we describe the behavior of fundamental particles (like the Higgs, but also everything from electrons to photons) with a framework called Quantum Field Theory. In Quantum Field Theory, each particle has a corresponding field, and each field extends everywhere, over all space and time. There’s an electron field, and the electron field is absolutely everywhere. The catch is, most of the time, most of these fields are at zero. The electron field tells you that there are zero electrons in a generic region of space.

Particles are ripples in these fields. If the electron field wobbles a bit higher than normal somewhere, that means there’s an electron there. If it wobbles a bit lower than normal instead, then it’s an anti-electron. (Note: this is a very fast-and-loose way to describe how antimatter works; don’t take it for more than it’s worth.)

When the Higgs field ripples, you get a Higgs particle, the one discovered at the LHC. The “crowd” surrounding us isn’t these ripples (which are rare and hard to create), but the field itself, which surrounds us in the same way every other field does.
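If a concrete (if cartoonish) picture helps, here’s a toy sketch in Python. To be clear, this is an illustration of the words above and nothing more: the grid, the Gaussian bump, and all the numbers are invented for the picture, and no actual field theory is being computed.

```python
import numpy as np

# A cartoon of the quantum-field picture: a field assigns a number to
# every point in space. Here, one spatial dimension on a discrete grid.
x = np.linspace(-10.0, 10.0, 401)

# "Empty" space: the field reads zero everywhere...
electron_field = np.zeros_like(x)

# ...but the field itself is still defined at every point. A particle is
# a localized ripple on that zero background (a Gaussian bump standing in
# for a wavepacket), an antiparticle a dip, in the loose sense above.
electron_field += np.exp(-(x - 2.0) ** 2)   # an "electron" near x = 2
electron_field -= np.exp(-(x + 5.0) ** 2)   # an "anti-electron" near x = -5

# Far from both ripples the field reads zero: no electrons there, yet the
# field is present everywhere all the same.
print(f"field at the ripple (x = {x[240]:.0f}): {electron_field[240]:.3f}")
print(f"field far away (x = {x[400]:.0f}): {electron_field[400]:.3f}")
```

The array is defined at every grid point, ripple or no ripple; that’s the sense in which the field is “everywhere”.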

With all that said, there is a difference between the Higgs field and other fields. The Higgs field is the only field we’ve discovered (so far) that isn’t usually zero. This is because the Higgs is the only field we’ve discovered that is allowed to be something other than zero.

Symmetry is a fundamental principle in physics. At its simplest, symmetry is the idea that nothing should be special for no good reason. One consequence is that there are no special directions. Up, down, right, left, the laws of physics don’t care which one you choose. Only the presence of some object (like the Earth) can make differences like up versus down relevant.

What does that have to do with fields?

Think about a magnetic field. A magnetic field pulls in a specific direction.

So far, so good…

Now imagine a magnetic field everywhere. Which way would it point? If it was curved like the one in the picture, what would it be curved around?

There isn’t a good choice. Any choice would single out one direction, making it special. But nothing should be special for no good reason, and unless there was an object out there releasing this huge magnetic field there would be no good reason for it to be pointed that way. Because of that, the default value of the magnetic field over all space has to be zero.

You can make a similar argument for fields like the electron field. It’s even harder to imagine a way for electrons to be everywhere and not pick some “special” direction.

The Higgs, though, is special. The Higgs is what’s known as a scalar field. That means that it doesn’t have a direction. At any specific point it’s just a number, a scalar quantity. The Higgs doesn’t have to be zero everywhere because even if it isn’t, no special direction is singled out. One metaphor I’ve used before is colored construction paper: the paper can be blue or red, and either way it will still be empty until someone draws on it.

A bit less exciting than ghosts, huh?
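For readers who’d like the equation behind “allowed to be nonzero”: in one standard convention, the energy stored in the Higgs field $\phi$ is

$$V(\phi) \;=\; -\mu^2\,|\phi|^2 \;+\; \lambda\,|\phi|^4.$$

The minus sign makes $\phi = 0$ a local maximum of the energy, so “empty” space actually sits at the nonzero value $|\phi| = \mu/\sqrt{2\lambda}$. And because $\phi$ is just a number at each point, that nonzero value singles out no direction in space.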

The Higgs is special because it’s the first fundamental scalar field we’ve been able to detect, but there are probably others. Most explanations of cosmic inflation, for example, rely on one or more new scalar fields. (Just like “mass of the fundamental particles” is just a number, “rate the universe is inflating” is also just a number, and can also be covered by a scalar field.) It’s not special just because it’s “everywhere”, and imagining it as a bunch of invisible particles careening about around you isn’t going to get you anywhere useful.

Now, if you find the idea of being surrounded by invisible particles interesting, you really ought to read up on neutrinos….

No, Hawking didn’t say that a particle collider could destroy the universe

So apparently Hawking says that the Higgs could destroy the universe.

I’ve covered this already, right? No need to say anything more?

Ok, fine, I’ll write a real blog post.

The Higgs is a scalar field: a number, sort of like temperature, that can vary across space and time. In the case of the Higgs this number determines the mass of almost every fundamental particle (the jury is still somewhat out on neutrinos). The Higgs doesn’t vary much at all, in fact it takes an enormous (Large Hadron Collider-sized) amount of energy to get it to wobble even a little bit. That is because the Higgs is in a very very stable state.
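(A quick aside on “determines the mass”: in the Standard Model, each matter particle’s mass is its coupling to the Higgs multiplied by the field’s background value,

$$m_f \;=\; \frac{y_f\, v}{\sqrt{2}}, \qquad v \approx 246\ \text{GeV},$$

so if that background value were larger, every such particle would be heavier in proportion.)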

Hawking was pointing out that, given our current model of the Higgs, there’s actually another possible state for the Higgs to be in, one that’s even more stable (because it takes less energy, essentially). In that state, the number the Higgs corresponds to is much larger, so everything would be much more massive, with potentially catastrophic results. (Matt Strassler goes into some detail about the assumptions behind this.)

For those who have been following my blog for a while, you may find these “stable states” familiar. They’re vacua, different possible ways to set up “empty” space. In that post, I may have given the impression that there’s no way to change from one stable state, one “vacuum”, to another. In the case of the Higgs, the state it’s in is so stable that vast amounts of energy (again, a Large Hadron Collider-worth) only serve to create a small, unstable fluctuation, the Higgs boson, which vanishes in a fraction of a second.

And that would be the full story, were it not for a curious phenomenon called quantum tunneling.

If you’ve heard someone else describe quantum tunneling, you’ve probably heard that quantum particles placed on one side of a wall have a very small chance of being found later on the other side of the wall, as if they had tunneled there.

Using their incredibly tiny shovels.

However, quantum tunneling applies to much more than just walls. In general, a particle in an otherwise stable state (whether stable because there are walls keeping it in place, or for other reasons) can tunnel into another state, provided that the new state is “more stable” (has lower energy).

The chance of doing this is small, and it gets smaller the more “stable” the particle’s initial state is. Still, if you apply that logic to the Higgs, you realize there’s a very very very small chance that one day the Higgs could just “tunnel” away from its current stable state, destroying the universe as we know it in the process.

If that happened, everything we know would vanish at the speed of light, and we wouldn’t see it coming.

While that may sound scary, it’s also absurdly unlikely, to the extent that it probably won’t happen until the universe is many times older than it is now. It’s not the sort of thing anybody should worry about, at least on a personal level.
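For those who want the shape of the estimate: in Coleman’s classic analysis of vacuum decay, the rate per unit volume is exponentially suppressed,

$$\frac{\Gamma}{V} \;\sim\; A\, e^{-S},$$

where $S$ is the action of the “bounce” configuration connecting the two states and $A$ is a prefactor. For the measured Higgs and top masses, $S$ comes out enormous, which is why the expected lifetime of our vacuum vastly exceeds the current age of the universe.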

Is Hawking fear-mongering, then, by pointing this out? Hardly. He’s just explaining science. Pointing out the possibility that the Higgs could spontaneously change and end the universe is a great way to emphasize the sheer scale of physics, and it’s pretty common for science communicators to mention it. I seem to recall a section about it in Particle Fever, and Sean Carroll even argues that it’s a good thing, due to killing off spooky Boltzmann Brains.

What do particle colliders have to do with all this? Well, apart from quantum tunneling, just inputting enough energy in the right way can cause a transition from one stable state to another. Here “enough energy” means about a million times that produced by the Large Hadron Collider. As Hawking jokes, you’d need a particle collider the size of the Earth to get this effect. I don’t know whether he actually ran the numbers, but if anything I’d guess that a Large Earth Collider would actually be insufficient.
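(Back-of-the-envelope, taking the LHC’s roughly $10\ \text{TeV}$ collisions as a benchmark: $10^6 \times 10\ \text{TeV} \sim 10^{10}\ \text{GeV}$, which is in the ballpark usually quoted for where the Standard Model Higgs potential develops its deeper minimum.)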

Either way, Hawking is just doing standard science popularization, which isn’t exactly newsworthy. Once again, “interpret something Hawking said in the most ridiculous way possible” seems to be the du jour replacement for good science writing.

The Near and the Far: Motivations for Physics

When I introduce myself, I often describe my job like this:

“I develop mathematical tools to make calculations in particle physics easier and more efficient.”

However, I could equally well describe my job like this:

“I’m looking for a radical new way to reformulate particle physics in order to solve fundamental problems in space and time.”

These may sound very different, but they’re both correct. That’s because in theoretical physics, like in many branches of science, we have two types of goals: near-term and far-term.

In the near-term, I develop mathematical tools and tricks, which let me calculate things I (and others) couldn’t calculate before. Pushing the tricks to their limits gives me more proficiency, making the tools I develop more robust. In the future, I can imagine applying the tools to more types of calculations, and specifically to more “important” calculations.

All of that still involves relatively near-term goals, though. Develop a new trick, and you can already envision what it might be used for. The far-term goals are generally deeper.

End of the road, not just the next tree.

In the far term, the new techniques that I and others develop might lead to fundamentally new ways to understand particle physics. That’s because a central feature of most of the tricks we develop is that they rephrase the calculation in a way that leaves out something that used to be thought of as fundamental. They’re “revolutions”, overthrowing some basic principle of how we do things. The hope is that the right “revolution” will help us solve problems that our current understanding of physics seems incapable of solving.

Most scientists have both sorts of goals. Someone who studies quantum mechanics might talk about developing a quantum computer, but in the near-term be interested in perfecting some algorithm. A biologist might study how information is stored in a cell, but introduce themselves as someone trying to cure cancer.

For some people, the far-term goals are a big component of how they view themselves. Nima Arkani-Hamed, for example, has joked that believing that “spacetime is doomed” is what allows him to get out of bed in the morning. (For a transcript of the relevant parts, see here.) There are plenty of others with similar perspectives, people who need a “big” goal to feel motivated.

Myself, I find it harder to identify with these kinds of goals, because the payoff is so uncertain. Rephrasing particle physics in a new way might be the solution to a fundamental problem…but it could also just be another way to say the same thing. There’s no guarantee that any one project will be that one magical solution. In contrast, for me, near term goals are something I can feel confident I’m making real progress on. I can envision each step along the way, and see the part my work plays in a larger picture, led along by the satisfaction of solving each puzzle as it comes.

Neither way is better than the other, and both are important parts of science. Some people do better with one, some do better with the other, and in the end, everyone can view themselves as accomplishing something they care about.