Tag Archives: DoingScience

4gravitons, Spinning Up

I had a new paper out last week, with Michèle Levi and Andrew McLeod. But to explain it, I’ll need to clarify something about our last paper.

Two weeks ago, I told you that Andrew and Michèle and I had written a paper, predicting what gravitational wave telescopes like LIGO see when black holes collide. You may remember that LIGO doesn’t just see colliding black holes: it sees colliding neutron stars too. So why didn’t we predict what happens when neutron stars collide?

Actually, we did. Our calculation doesn’t just apply to black holes. It applies to neutron stars too. And not just neutron stars: it applies to anything of roughly the right size and shape. Black holes, neutron stars, very large grapefruits…

LIGO’s next big discovery

That’s the magic of Effective Field Theory, the “zoom lens” of particle physics. Zoom out far enough, and any big, round object starts looking like a particle. Black holes, neutron stars, grapefruits: we can describe them all using the same math.

Ok, so we can describe both black holes and neutron stars. Can we tell the difference between them?

In our last calculation, no. In this one, yes!

Effective Field Theory isn’t just a zoom lens, it’s a controlled approximation. That means that when we “zoom out” we don’t just throw out anything “too small to see”. Instead, we approximate it, estimating how big of an effect it can have. Depending on how precise we want to be, we can include more and more of these approximated effects. If our estimates are good, we’ll include everything that matters, and get a good approximation for what we’re trying to observe.
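To give a rough sense of what including “more and more of these approximated effects” looks like (a schematic, textbook-style expansion, not the actual expression from our paper): the gravitational pull between two slowly-moving compact objects can be organized as a series of small corrections to Newton’s law,

$$V(r) \approx -\frac{G m_1 m_2}{r}\left(1 + a_1\,\frac{v^2}{c^2} + a_2\,\frac{G(m_1+m_2)}{r c^2} + \cdots\right),$$

where each successive term is smaller than the one before. Cutting the series off at a chosen order gives an approximation whose error you can estimate and control.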

At the precision of our last calculation, a black hole and a neutron star still look exactly the same. Our new calculation aims for a bit higher precision, though. (For the experts: we’re at a higher order in spin.) The higher precision means that we can actually see the difference: our result changes for two colliding black holes versus two colliding grapefruits.

So does that mean I can tell you what happens when two neutron stars collide, according to our calculation? Actually, no. That’s not because we screwed up the calculation: it’s because some of the properties of neutron stars are unknown.

The Effective Field Theory of neutron stars has what we call “free parameters”, unknown variables. People have tried to estimate some of these (called “Love numbers” after the mathematician A. E. H. Love), but they depend on the details of how neutron stars work: what stuff they contain, how that stuff is shaped, and how it can move. To find them out, we probably can’t just calculate: we’ll have to measure, observing an actual neutron star collision and seeing what the numbers actually are.
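To sketch what a “free parameter” looks like here (schematically, and glossing over many details specific to our calculation): the EFT picks up interaction terms along the lines of

$$\mathcal{L} \supset \lambda \, E_{\mu\nu} E^{\mu\nu},$$

where $E_{\mu\nu}$ describes the tidal field the star sits in, and the coefficient $\lambda$ is one of these unknown numbers, fixed only by the star’s internal structure.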

That’s one of the purposes of gravitational wave telescopes. It’s not (as far as I know) something LIGO can measure. But future telescopes, with more precision, should be able to. By watching two colliding neutron stars and comparing to a high-precision calculation, physicists will better understand what those neutron stars are made of. To do that, they’ll need someone to perform that high-precision calculation. And that’s why people like me are involved.

4gravitons Exchanges a Graviton

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

Just click the “zoom X10” button fifteen times, and you’re there!

In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.
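As a rough benchmark (a standard textbook result, not something new in our paper): the simplest version of this picture, a single graviton exchanged between two heavy, slowly-moving masses, reproduces Newton’s potential,

$$V(r) = -\frac{G m_1 m_2}{r}.$$

Everything beyond that, the extra gravitons and “loops”, supplies the corrections that these high-precision calculations chase.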

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

Understanding Is Translation

Kernighan’s Law states, “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” People sometimes make a similar argument about philosophy of mind: “The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps.”

Both points operate on a shared kind of logic. They picture understanding something as modeling it in your mind, with every detail clear. If you’ve already used all your mind’s power to design code, you won’t be able to model what happens when it goes wrong. And modeling your own mind is clearly nonsense: you would need an even larger mind to hold the model.

The trouble is, this isn’t really how understanding works. To understand something, you don’t need to hold a perfect model of it in your head. Instead, you translate it into something you can more easily work with. Like explanations, these translations can be different for different people.

To understand something, I need to know the algorithm behind it. I want to know how to calculate it, the pieces that go in and where they come from. I want to code it up, to test it out on odd cases and see how it behaves, to get a feel for what it can do.

Others need a more physical picture. They need to know where the particles are going, or how energy and momentum are conserved. They want entropy to be increased, action to be minimized, scales to make sense dimensionally.

Others in turn are more mathematical. They want to start with definitions and axioms. To understand something, they want to see it as an example of a broader class of thing, groups or algebras or categories, to fit it into a bigger picture.

Each of these is a kind of translation, turning something into code-speak or physics-speak or math-speak. They don’t require modeling every detail, but when done well they can still explain every detail.

So while yes, it is good practice not to write code that is too “smart”, and too hard to debug…it’s not impossible to debug your smartest code. And while you can’t hold an entire mind inside of yours, you don’t actually need to do that to understand the brain. In both cases, all you need is a translation.

You Can’t Anticipate a Breakthrough

As a scientist, you’re surrounded by puzzles. For every test and every answer, ten new questions pop up. You can spend a lifetime on question after question, never getting bored.

But which questions matter? If you want to change the world, if you want to discover something deep, which questions should you focus on? And which should you ignore?

Last year, my collaborators and I completed a long, complicated project. We were calculating the chance fundamental particles bounce off each other in a toy model of nuclear forces, pushing to very high levels of precision. We managed to figure out a lot, but as always, there were many questions left unanswered in the end.

The deepest of these questions came from number theory. We had noticed surprising patterns in the numbers that showed up in our calculation, reminiscent of the fancifully-named Cosmic Galois Theory. Certain kinds of numbers never showed up, while others appeared again and again. In order to see these patterns, though, we needed an unusual fudge factor: an unexplained number multiplying our result. It was clear that there was some principle at work, a part of the physics intimately tied to particular types of numbers.

There were also questions that seemed less deep. In order to compute our result, we compared to predictions from other methods: specific situations where the question becomes simpler and there are other ways of calculating the answer. As we finished writing the paper, we realized we could do more with some of these predictions. There were situations we didn’t use that nonetheless simplified things, and more predictions that it looked like we could make. By the time we saw these, we were quite close to publishing, so most of us didn’t have the patience to follow these new leads. We just wanted to get the paper out.

At the time, I expected the new predictions would lead, at best, to more efficiency. Maybe we could have gotten our result faster, or cleaned it up a bit. They didn’t seem essential, and they didn’t seem deep.

Fast forward to this year, and some of my collaborators (specifically, Lance Dixon and Georgios Papathanasiou, along with Benjamin Basso) have a new paper up: “The Origin of the Six-Gluon Amplitude in Planar N=4 SYM”. The “origin” in their title refers to one of those situations: when the variables in the problem are small, and you’re close to the “origin” of a plot in those variables. But the paper also sheds light on the origin of our calculation’s mysterious “Cosmic Galois” behavior.

It turns out that the origin (of the plot) can be related to another situation, when the paths of two particles in our calculation almost line up. There, the calculation can be done with another method, called the Pentagon Operator Product Expansion, or POPE. By relating the two, Basso, Dixon, and Papathanasiou were able to predict not only how our result should have behaved near the origin, but how more complicated as-yet un-calculated results should behave.

The biggest surprise, though, lurked in the details. Building their predictions from the POPE method, they found their calculation separated into two pieces: one which described the physics of the particles involved, and a “normalization”. This normalization, predicted by the POPE method, involved some rather special numbers…the same as the “fudge factor” we had introduced earlier! Somehow, the POPE’s physics-based setup “knows” about Cosmic Galois Theory!

It seems that, by studying predictions in this specific situation, Basso, Dixon, and Papathanasiou have accomplished something much deeper: a strong hint of where our mysterious number patterns come from. It’s rather humbling to realize that, were I in their place, I never would have found this: I had assumed “the origin” was just a leftover detail, perhaps useful but not deep.

I’m still digesting their result. For now, it’s a reminder that I shouldn’t try to pre-judge questions. If you want to learn something deep, it isn’t enough to sit thinking about it, just focusing on that one problem. You have to follow every lead you have, work on every problem you can, do solid calculation after solid calculation. Sometimes, you’ll just make incremental progress, just fill in the details. But occasionally, you’ll have a breakthrough, something that justifies the whole adventure and opens your door to something strange and new. And I’m starting to think that when it comes to breakthroughs, that’s always been the only way there.

What Do Theorists Do at Work?

Picture a scientist at work. You’re probably picturing an experiment, test tubes and beakers bubbling away. But not all scientists do experiments. Theoretical physicists work on the mathematical side of the field, making predictions and trying to understand how to make them better. So what does it look like when a theoretical physicist is working?

A theoretical physicist, at work in the equation mines

The first thing you might imagine is that we just sit and think. While that happens sometimes, we don’t actually do that very often. It’s better, and easier, to think by doing something.

Sometimes, this means working with pen and paper. This should be at least a little familiar to anyone who has done math homework. We’ll do short calculations and draw quick diagrams to test ideas, and do a more detailed, organized, “show your work” calculation if we’re trying to figure out something more complicated. Sometimes very short calculations are done on a blackboard instead; it can help us visualize what we’re doing.

Sometimes, we use computers instead. There are computer algebra packages, like Mathematica, Maple, or Sage, that let us do roughly what we would do with pen and paper, but with the speed and efficiency of a computer. Others program in more conventional languages (C++, Python, even Fortran), writing programs that calculate whatever they’re interested in.
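As a toy illustration (nothing like a real research calculation, just the flavor of the symbolic work these tools handle; SymPy is a standard Python computer algebra library):

```python
import sympy as sp

x = sp.symbols('x')

# Symbolic integration: the computer keeps track of the algebra exactly,
# the way we would on paper, just much faster.
antiderivative = sp.integrate(1 / (x**2 + 1), x)
print(antiderivative)  # atan(x)

# Series expansion: keep terms up to a chosen order and drop the rest,
# much like truncating an Effective Field Theory at a given precision.
expansion = sp.series(sp.sin(x) / x, x, 0, 6)
print(expansion)  # 1 - x**2/6 + x**4/120 + O(x**6)
```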

Sometimes we read. With most of our field’s papers available for free on arXiv.org, we spend time reading up on what our colleagues have done, trying to understand their work and use it to improve ours.

Sometimes we talk. A paper can only communicate so much, and sometimes it’s better to just walk down the hall and ask a question. Conversations are also a good way to quickly rule out bad ideas, and narrow down to the promising ones. Some people find it easier to think clearly about something if they talk to a colleague about it, even (sometimes especially) if the colleague isn’t understanding much.

And sometimes, of course, we do all the other stuff. We write up our papers, making the diagrams nice and the formulas clean. We teach students. We go to meetings. We write grant applications.

It’s been said that a theoretical physicist can work anywhere. That’s kind of true. Some places are more comfortable, and everyone has different preferences: a busy office, a quiet room, a cafe. But with pen and paper, a computer, and people to talk to, we can do quite a lot.

The Road to Reality

I build tools, mathematical tools to be specific, and I want those tools to be useful. I want them to be used to study the real world. But when I build those tools, most of the time, I don’t test them on the real world. I use toy models, simpler cases, theories that don’t describe reality and weren’t intended to.

I do this, in part, because it lets me stay one step ahead. I can do more with those toy models, answer more complicated questions with greater precision, than I can for the real world. I can do more ambitious calculations, and still get an answer. And by doing those calculations, I can start to anticipate problems that will crop up for the real world too. Even if we can’t do a calculation yet for the real world, if it requires too much precision or too many particles, we can still study it in a toy model. Then when we’re ready to do those calculations in the real world, we know better what to expect. The toy model will have shown us some of the key challenges, and how to tackle them.

There’s a risk, working with simpler toy models. The risk is that their simplicity misleads you. When you solve a problem in a toy model, could you solve it only because the toy model is easy? Or would a similar solution work in the real world? What features of the toy model did you need, and which are extra?

The only way around this risk is to be careful. You have to keep track of how your toy model differs from the real world. You must keep in mind the difficulties that come up on the road to reality: the twists and turns and potholes that real-world theories will give you. You can’t plan around all of them; that’s why you’re working with a toy model in the first place. But for a few key ones, you should keep your eye on the horizon. You should keep in mind that, eventually, the simplifications of the toy model will go away. And you should have ideas, perhaps not full plans but at least ideas, for how to handle some of those difficulties. If you put the work in, you stand a good chance of building something that’s useful, not just for toy models, but for explaining the real world.

Science, the Gift That Keeps on Giving

Merry Newtonmas, everyone!

You’ll find many scientists working over the holidays this year. Partly that’s because of the competitiveness of academia, with many scientists competing for a few positions, where even those who are “safe” have students who aren’t. But to put a more positive spin on it, it’s also because science is a gift that keeps on giving.

Scientists are driven by curiosity. We want to know more about the world, to find out everything we can. And the great thing about science is that, every time we answer a question, we have another one to ask.

Discover a new particle? You need to measure its properties, understand how it fits into your models, and look for alternative explanations. Do a calculation, and in addition to checking it, you can see whether the same method works on other cases, or whether you can use the result to derive something else.

Down the line, the science that survives leads to further gifts. Good science spreads, with new fields emerging to investigate new phenomena. Eventually, science leads to technology, and our lives are enriched by the gifts of new knowledge.

Science is the gift that keeps on giving. It takes new forms, builds new ideas, fills our lives, and nourishes our minds. It’s a never-ending puzzle.

So this Newtonmas, I hope you receive the greatest gift of all: the gift of science.