Tag Archives: DoingScience

4gravitons, Spinning Up

I had a new paper out last week, with Michèle Levi and Andrew McLeod. But to explain it, I’ll need to clarify something about our last paper.

Two weeks ago, I told you that Andrew and Michèle and I had written a paper, predicting what gravitational wave telescopes like LIGO see when black holes collide. You may remember that LIGO doesn’t just see colliding black holes: it sees colliding neutron stars too. So why didn’t we predict what happens when neutron stars collide?

Actually, we did. Our calculation doesn’t just apply to black holes. It applies to neutron stars too. And not just neutron stars: it applies to anything of roughly the right size and shape. Black holes, neutron stars, very large grapefruits…

LIGO’s next big discovery

That’s the magic of Effective Field Theory, the “zoom lens” of particle physics. Zoom out far enough, and any big, round object starts looking like a particle. Black holes, neutron stars, grapefruits, we can describe them all using the same math.

Ok, so we can describe both black holes and neutron stars. Can we tell the difference between them?

In our last calculation, no. In this one, yes!

Effective Field Theory isn’t just a zoom lens, it’s a controlled approximation. That means that when we “zoom out” we don’t just throw out anything “too small to see”. Instead, we approximate it, estimating how big of an effect it can have. Depending on how precise we want to be, we can include more and more of these approximated effects. If our estimates are good, we’ll include everything that matters, and get a good approximation for what we’re trying to observe.

At the precision of our last calculation, a black hole and a neutron star still look exactly the same. Our new calculation aims for a bit more precision, though. (For the experts: we’re at a higher order in spin.) The higher precision means that we can actually see the difference: our result changes for two colliding black holes versus two colliding grapefruits.

So does that mean I can tell you what happens when two neutron stars collide, according to our calculation? Actually, no. That’s not because we screwed up the calculation: it’s because some of the properties of neutron stars are unknown.

The Effective Field Theory of neutron stars has what we call “free parameters”, unknown variables. People have tried to estimate some of these (called “Love numbers” after the mathematician A. E. H. Love), but they depend on the details of how neutron stars work: what stuff they contain, how that stuff is shaped, and how it can move. To find them out, we probably can’t just calculate: we’ll have to measure, observe an actual neutron star collision and see what the numbers actually are.

That’s one of the purposes of gravitational wave telescopes. It’s not (as far as I know) something LIGO can measure. But future telescopes, with more precision, should be able to. By watching two colliding neutron stars and comparing to a high-precision calculation, physicists will better understand what those neutron stars are made of. In order to do that, they will need someone to do that high-precision calculation. And that’s why people like me are involved.

4gravitons Exchanges a Graviton

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

Just click the “zoom X10” button fifteen times, and you’re there!

In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

Understanding Is Translation

Kernighan’s Law states, “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” People sometimes make a similar argument about philosophy of mind: “The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps.”

Both points operate on a shared kind of logic. They picture understanding something as modeling it in your mind, with every detail clear. If you’ve already used all your mind’s power to design code, you won’t be able to model when it goes wrong. And modeling your own mind is clearly nonsense: you would need an even larger mind to hold the model.

The trouble is, this isn’t really how understanding works. To understand something, you don’t need to hold a perfect model of it in your head. Instead, you translate it into something you can more easily work with. Like explanations, these translations can be different for different people.

To understand something, I need to know the algorithm behind it. I want to know how to calculate it, the pieces that go in and where they come from. I want to code it up, to test it out on odd cases and see how it behaves, to get a feel for what it can do.

Others need a more physical picture. They need to know where the particles are going, or how energy and momentum are conserved. They want entropy to be increased, action to be minimized, scales to make sense dimensionally.

Others in turn are more mathematical. They want to start with definitions and axioms. To understand something, they want to see it as an example of a broader class of thing, groups or algebras or categories, to fit it into a bigger picture.

Each of these is a kind of translation, turning something into code-speak or physics-speak or math-speak. They don’t require modeling every detail, but when done well they can still explain every detail.

So while yes, it is good practice not to write code that is too “smart”, and too hard to debug…it’s not impossible to debug your smartest code. And while you can’t hold an entire mind inside of yours, you don’t actually need to do that to understand the brain. In both cases, all you need is a translation.

You Can’t Anticipate a Breakthrough

As a scientist, you’re surrounded by puzzles. For every test and every answer, ten new questions pop up. You can spend a lifetime on question after question, never getting bored.

But which questions matter? If you want to change the world, if you want to discover something deep, which questions should you focus on? And which should you ignore?

Last year, my collaborators and I completed a long, complicated project. We were calculating the chance fundamental particles bounce off each other in a toy model of nuclear forces, pushing to very high levels of precision. We managed to figure out a lot, but as always, there were many questions left unanswered in the end.

The deepest of these questions came from number theory. We had noticed surprising patterns in the numbers that showed up in our calculation, reminiscent of the fancifully-named Cosmic Galois Theory. Certain kinds of numbers never showed up, while others appeared again and again. In order to see these patterns, though, we needed an unusual fudge factor: an unexplained number multiplying our result. It was clear that there was some principle at work, a part of the physics intimately tied to particular types of numbers.

There were also questions that seemed less deep. In order to compute our result, we compared to predictions from other methods: specific situations where the question becomes simpler and there are other ways of calculating the answer. As we finished writing the paper, we realized we could do more with some of these predictions. There were situations we didn’t use that nonetheless simplified things, and more predictions that it looked like we could make. By the time we saw these, we were quite close to publishing, so most of us didn’t have the patience to follow these new leads. We just wanted to get the paper out.

At the time, I expected the new predictions would lead, at best, to more efficiency. Maybe we could have gotten our result faster, or cleaned it up a bit. They didn’t seem essential, and they didn’t seem deep.

Fast forward to this year, and some of my collaborators (specifically, Lance Dixon and Georgios Papathanasiou, along with Benjamin Basso) have a new paper up: “The Origin of the Six-Gluon Amplitude in Planar N=4 SYM”. The “origin” in their title refers to one of those situations: when the variables in the problem are small, and you’re close to the “origin” of a plot in those variables. But the paper also sheds light on the origin of our calculation’s mysterious “Cosmic Galois” behavior.

It turns out that the origin (of the plot) can be related to another situation, when the paths of two particles in our calculation almost line up. There, the calculation can be done with another method, called the Pentagon Operator Product Expansion, or POPE. By relating the two, Basso, Dixon, and Papathanasiou were able to predict not only how our result should have behaved near the origin, but how more complicated as-yet un-calculated results should behave.

The biggest surprise, though, lurked in the details. Building their predictions from the POPE method, they found their calculation separated into two pieces: one which described the physics of the particles involved, and a “normalization”. This normalization, predicted by the POPE method, involved some rather special numbers…the same as the “fudge factor” we had introduced earlier! Somehow, the POPE’s physics-based setup “knows” about Cosmic Galois Theory!

It seems that, by studying predictions in this specific situation, Basso, Dixon, and Papathanasiou have accomplished something much deeper: a strong hint of where our mysterious number patterns come from. It’s rather humbling to realize that, were I in their place, I never would have found this: I had assumed “the origin” was just a leftover detail, perhaps useful but not deep.

I’m still digesting their result. For now, it’s a reminder that I shouldn’t try to pre-judge questions. If you want to learn something deep, it isn’t enough to sit thinking about it, just focusing on that one problem. You have to follow every lead you have, work on every problem you can, do solid calculation after solid calculation. Sometimes, you’ll just make incremental progress, just fill in the details. But occasionally, you’ll have a breakthrough, something that justifies the whole adventure and opens your door to something strange and new. And I’m starting to think that when it comes to breakthroughs, that’s always been the only way there.

What Do Theorists Do at Work?

Picture a scientist at work. You’re probably picturing an experiment, test tubes and beakers bubbling away. But not all scientists do experiments. Theoretical physicists work on the mathematical side of the field, making predictions and trying to understand how to make them better. So what does it look like when a theoretical physicist is working?

A theoretical physicist, at work in the equation mines

The first thing you might imagine is that we just sit and think. While that happens sometimes, we don’t actually do that very often. It’s better, and easier, to think by doing something.

Sometimes, this means working with pen and paper. This should be at least a little familiar to anyone who has done math homework. We’ll do short calculations and draw quick diagrams to test ideas, and do a more detailed, organized, “show your work” calculation if we’re trying to figure out something more complicated. Sometimes very short calculations are done on a blackboard instead; it can help us visualize what we’re doing.

Sometimes, we use computers instead. There are computer algebra packages, like Mathematica, Maple, or Sage, that let us do roughly what we would do on pen and paper, but with the speed and efficiency of a computer. Others program in more normal programming languages: C++, Python, even Fortran, making programs that can calculate whatever they are interested in.

Sometimes we read. With most of our field’s papers available for free on arXiv.org, we spend time reading up on what our colleagues have done, trying to understand their work and use it to improve ours.

Sometimes we talk. A paper can only communicate so much, and sometimes it’s better to just walk down the hall and ask a question. Conversations are also a good way to quickly rule out bad ideas, and narrow down to the promising ones. Some people find it easier to think clearly about something if they talk to a colleague about it, even (sometimes especially) if the colleague isn’t understanding much.

And sometimes, of course, we do all the other stuff. We write up our papers, making the diagrams nice and the formulas clean. We teach students. We go to meetings. We write grant applications.

It’s been said that a theoretical physicist can work anywhere. That’s kind of true. Some places are more comfortable, and everyone has different preferences: a busy office, a quiet room, a cafe. But with pen and paper, a computer, and people to talk to, we can do quite a lot.

The Road to Reality

I build tools, mathematical tools to be specific, and I want those tools to be useful. I want them to be used to study the real world. But when I build those tools, most of the time, I don’t test them on the real world. I use toy models, simpler cases, theories that don’t describe reality and weren’t intended to.

I do this, in part, because it lets me stay one step ahead. I can do more with those toy models, answer more complicated questions with greater precision, than I can for the real world. I can do more ambitious calculations, and still get an answer. And by doing those calculations, I can start to anticipate problems that will crop up for the real world too. Even if we can’t do a calculation yet for the real world, if it requires too much precision or too many particles, we can still study it in a toy model. Then when we’re ready to do those calculations in the real world, we know better what to expect. The toy model will have shown us some of the key challenges, and how to tackle them.

There’s a risk, working with simpler toy models. The risk is that their simplicity misleads you. When you solve a problem in a toy model, could you solve it only because the toy model is easy? Or would a similar solution work in the real world? What features of the toy model did you need, and which are extra?

The only way around this risk is to be careful. You have to keep track of how your toy model differs from the real world. You must keep in mind difficulties that come up on the road to reality: the twists and turns and potholes that real-world theories will give you. You can’t plan around all of them, that’s why you’re working with a toy model in the first place. But for a few key, important ones, you should keep your eye on the horizon. You should keep in mind that, eventually, the simplifications of the toy model will go away. And you should have ideas, perhaps not full plans but at least ideas, for how to handle some of those difficulties. If you put the work in, you stand a good chance of building something that’s useful, not just for toy models, but for explaining the real world.

Science, the Gift That Keeps on Giving

Merry Newtonmas, everyone!

You’ll find many scientists working over the holidays this year. Partly that’s because of the competitiveness of academia, with many scientists competing for a few positions, where even those who are “safe” have students who aren’t. But to put a more positive spin on it, it’s also because science is a gift that keeps on giving.

Scientists are driven by curiosity. We want to know more about the world, to find out everything we can. And the great thing about science is that, every time we answer a question, we have another one to ask.

Discover a new particle? You need to measure its properties, understand how it fits into your models and look for alternative explanations. Do a calculation, and in addition to checking it, you can see if the same method works on other cases, or if you can use the result to derive something else.

Down the line, the science that survives leads to further gifts. Good science spreads, with new fields emerging to investigate new phenomena. Eventually, science leads to technology, and our lives are enriched by the gifts of new knowledge.

Science is the gift that keeps on giving. It takes new forms, builds new ideas, it fills our lives and nourishes our minds. It’s a neverending puzzle.

So this Newtonmas, I hope you receive the greatest gift of all: the gift of science.

Calculating the Hard Way, for Science!

I had a new paper out last week, with Jacob Bourjaily and Matthias Volk. We’re calculating the probability that particles bounce off each other in our favorite toy model, N=4 super Yang-Mills. And this time, we’re doing it the hard way.

The “easy way” we didn’t take is one I have a lot of experience with. Almost as long as I’ve been writing this blog, I’ve been calculating these particle probabilities by “guesswork”: starting with a plausible answer, then honing it down until I can be confident it’s right. This might sound reckless, but it works remarkably well, letting us calculate things we could never have hoped for with other methods. The catch is that “guessing” is much easier when we know what we’re looking for: in particular, it works much better in toy models than in the real world.

Over the last few years, though, I’ve been using a much more “normal” method, one that so far has a better track record in the real world. This method, too, works better than you would expect, and we’ve managed some quite complicated calculations.

So we have an “easy way”, and a “hard way”. Which one is better? Is the hard way actually harder?

To test that, you need to do the same calculation both ways, and see which is easier. You want it to be a fair test: if “guessing” only works in the toy model, then you should do the “hard” version in the toy model as well. And you don’t want to give “guessing” any unfair advantages. In particular, the “guess” method works best when we know a lot about the result we’re looking for: what it’s made of, what symmetries it has. In order to do a fair test, we must use that knowledge to its fullest to improve the “hard way” as well.

We picked an example in the middle: not too easy, and not too hard, a calculation that was done a few years back “the easy way” but not yet done “the hard way”. We plugged in all the modern tricks we could, trying to use as much of what we knew as possible. We trained a grad student: Matthias Volk, who did the lion’s share of the calculation and learned a lot in the process. We worked through the calculation, and did it properly the hard way.

Which method won?

In the end, the hard way was indeed harder…but not by that much! Most of the calculation went quite smoothly, with only a few difficulties at the end. Just five years ago, when the calculation was done “the easy way”, I doubt anyone would have expected the hard way to be viable. But with modern tricks it wasn’t actually that hard.

This is encouraging. It tells us that the “hard way” has potential, that it’s almost good enough to compete at this kind of calculation. It tells us that the “easy way” is still quite powerful. And it reminds us that the more we know, and the more we apply our knowledge, the more we can do.

Life Cycle of an Academic Scientist

So you want to do science for a living. Some scientists work for companies, developing new products. Some work for governments. But if you want to do “pure science”, science just to learn about the world, then you’ll likely work at a university, as part of what we call academia.

The first step towards academia is graduate school. In the US, this means getting a PhD.

(Master’s degrees, at least in the US, have a different purpose. Most are “terminal Master’s”, designed to be your last degree. With a terminal Master’s, you can be a technician in a lab, but you won’t get farther down this path. In the US you don’t need a Master’s before you apply for a PhD program, and having one is usually a waste of time: PhD programs will make you re-take most of the same classes.)

Once you have a PhD, it’s time to get a job! Often, your first job after graduate school is a postdoc. Postdocs are short-term jobs, usually one to three years long. Some people are lucky enough to go to the next stage quickly, others have more postdoc jobs first. These jobs will take you all over the world, everywhere people with your specialty work. Sometimes these jobs involve teaching, but more often you just do scientific research.

In the US system, if everything goes well, eventually you get a tenure-track job. These jobs involve both teaching and research. You get to train PhD students, hire postdocs, and in general start acting like a proper professor. This stage lasts around seven years, while the university evaluates you. If they decide you’re not worth it, then typically you’ll have to leave and apply for another job at another university. If they like you, though, you get tenure.

Tenure is the first time as an academic scientist that you aren’t on a short-term contract. Your job is more permanent than most: you have extra protection from being fired that most people don’t. While you can’t just let everything slide, you have the freedom to make more of your own decisions.

A tenured job can last until retirement, when you become an emeritus professor. Emeritus professors are retired but still do some of the work they did as professors. They’re paid out of their pension instead of a university salary, but they still sometimes teach or do research, and they usually still have an office. The university can hire someone new, and the cycle continues.

This isn’t the only path scientists take. Some work in a national lab instead. These don’t usually involve teaching duties, and the path to a permanent job is a bit different. Some get teaching jobs instead of research professorships. These teaching jobs are usually not permanent; instead, universities are hiring more and more adjunct faculty who have to string together temporary contracts to make a precarious living.

I’ve mostly focused on the US system here. Europe is a bit different: Master’s degrees are a real part of the system, tenure-track doesn’t really exist, and adjunct faculty don’t always either. Some particular countries, like Germany, have their own quite complicated systems; other countries fall somewhere in between.

Rooting out the Answer

I have a new paper out today, with Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, Cristian Vergu and Matthias Volk.

There’s a story I’ve told before on this blog, about a kind of “alphabet” for particle physics predictions. When we try to make a prediction in particle physics, we need to do complicated integrals. Sometimes, these integrals simplify dramatically, in unexpected ways. It turns out we can understand these simplifications by writing the integrals in a sort of “alphabet”, breaking complicated mathematical “periods” into familiar logarithms. If we want to simplify an integral, we can use relations between logarithms like these:

\log(a b)=\log(a)+\log(b),\quad \log(a^n)=n\log(a)

to factor our “alphabet” into pieces as simple as possible.
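As a quick numerical sanity check of those two identities (my own illustration, not something from the paper), you can verify them for arbitrary positive numbers:

```python
import math

a, b, n = 3.0, 7.0, 4

# log(a*b) = log(a) + log(b): a product of letters splits into a sum
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))

# log(a**n) = n*log(a): powers pull out as integer coefficients
assert math.isclose(math.log(a ** n), n * math.log(a))
```

This is the sense in which factoring the alphabet helps: once every letter is broken into its simplest factors, every logarithm in the calculation becomes an integer combination of a small set of basic logarithms.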

The simpler the alphabet, the more progress you can make. And in the nice toy model theory we’re working with, the alphabets so far have been simple in one key way. Expressed in the right variables, they’re rational. For example, they contain no square roots.

Would that keep going? Would we keep finding rational alphabets? Or might the alphabets, instead, have square roots?

After some searching, we found a clean test case. There was a calculation we could do with just two Feynman diagrams. All we had to do was subtract one from the other. If they still had square roots in their alphabet, we’d have proven that the nice, rational alphabets eventually had to stop.


So we calculated these diagrams, doing the complicated integrals. And we found they did indeed have square roots in their alphabet, in fact many more than expected. They even had square roots of square roots!

You’d think that would be the end of the story. But square roots are trickier than you’d expect.

Remember that to simplify these integrals, we break them up into an alphabet, and factor the alphabet. What happens when we try to do that with an alphabet that has square roots?

Suppose our alphabet has letters involving \sqrt{-5}. Suppose another letter is the number 9. You might want to factor it like this:

9=3\times 3

Simple, right? But what if instead you did this:

9=(2+ \sqrt{-5} )\times(2- \sqrt{-5} )

Once you allow \sqrt{-5} in the game, you can factor 9 in two different ways. The central assumption, that you can always just factor your alphabet, breaks down. In mathematical terms, you no longer have a unique factorization domain.
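You can check this non-uniqueness concretely with exact integer arithmetic (a sketch of my own, not code from the paper), representing each element a+b\sqrt{-5} as an integer pair (a, b):

```python
def mul(x, y):
    """Multiply two elements of Z[sqrt(-5)], each stored as an
    integer pair (a, b) meaning a + b*sqrt(-5). Since
    (sqrt(-5))**2 = -5, the product is
    (a + b√-5)(c + d√-5) = (ac - 5bd) + (ad + bc)√-5."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

# Two genuinely different factorizations of 9 = (9, 0):
print(mul((3, 0), (3, 0)))    # 3 * 3
print(mul((2, 1), (2, -1)))   # (2 + sqrt(-5)) * (2 - sqrt(-5))
```

Both products come out to the same element, 9, even though neither factorization can be refined into the other. That is exactly the failure of unique factorization that forces the move to prime ideals.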

Instead, we had to get a lot more mathematically sophisticated, factoring into something called prime ideals. We got that working and started crunching through the square roots in our alphabet. Things simplified beautifully: we started with a result that was ten million terms long, and reduced it to just five thousand. And at the end of the day, after subtracting one integral from the other…

We found no square roots!

After all of our simplifications, all the letters we found were rational. Our nice test case turned out much, much simpler than we expected.

It’s been a long road on this calculation, with a lot of false starts. We were kind of hoping to be the first to find square root letters in these alphabets; instead it looks like another group will beat us to the punch. But we developed a lot of interesting tricks along the way, and we thought it would be good to publish our “null result”. As always in our field, a surprising simplification may be just around the corner.