
Rooting out the Answer

I have a new paper out today, with Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, Cristian Vergu and Matthias Volk.

There’s a story I’ve told before on this blog, about a kind of “alphabet” for particle physics predictions. When we try to make a prediction in particle physics, we need to do complicated integrals. Sometimes, these integrals simplify dramatically, in unexpected ways. It turns out we can understand these simplifications by writing the integrals in a sort of “alphabet”, breaking complicated mathematical “periods” into familiar logarithms. If we want to simplify an integral, we can use relations between logarithms like these:

\log(a b)=\log(a)+\log(b),\quad \log(a^n)=n\log(a)

to factor our “alphabet” into pieces as simple as possible.
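These identities are easy to sanity-check numerically. Here's a minimal sketch in Python (my illustration, not a tool from the paper), using arbitrary positive values:

```python
import math

# Numeric spot-check of the two logarithm identities used to
# factor the "alphabet" (the values are arbitrary positive numbers)
a, b, n = 3.7, 12.1, 5

# log(a*b) = log(a) + log(b)
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))

# log(a**n) = n*log(a)
assert math.isclose(math.log(a ** n), n * math.log(a))
```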

The simpler the alphabet, the more progress you can make. And in the nice toy model theory we’re working with, the alphabets so far have been simple in one key way. Expressed in the right variables, they’re rational: in particular, they contain no square roots.

Would that keep going? Would we keep finding rational alphabets? Or might the alphabets, instead, have square roots?

After some searching, we found a clean test case. There was a calculation we could do with just two Feynman diagrams. All we had to do was subtract one from the other. If they still had square roots in their alphabet, we’d have proven that the nice, rational alphabets eventually had to stop.

Easy-peasy

So we calculated these diagrams, doing the complicated integrals. And we found they did indeed have square roots in their alphabet, in fact many more than expected. They even had square roots of square roots!

You’d think that would be the end of the story. But square roots are trickier than you’d expect.

Remember that to simplify these integrals, we break them up into an alphabet, and factor the alphabet. What happens when we try to do that with an alphabet that has square roots?

Suppose we have letters in our alphabet with \sqrt{-5}. Suppose another letter is the number 9. You might want to factor it like this:

9=3\times 3

Simple, right? But what if instead you did this:

9=(2+\sqrt{-5})\times(2-\sqrt{-5})

Once you allow \sqrt{-5} in the game, you can factor 9 in two different ways. The central assumption, that you can always just factor your alphabet, breaks down. In mathematical terms, you no longer have a unique factorization domain.
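You can check both factorizations with a few lines of code. Here's a sketch (purely illustrative, not the machinery from the paper) that models numbers a + b√-5 as integer pairs:

```python
# Minimal arithmetic in Z[sqrt(-5)], the ring where 9 factors in two
# genuinely different ways. An element a + b*sqrt(-5) is the pair (a, b).

def mul(x, y):
    # (a + b*r)(c + d*r) with r**2 = -5
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    # N(a + b*r) = a^2 + 5*b^2; multiplicative, so it tracks factorizations
    a, b = x
    return a * a + 5 * b * b

nine = (9, 0)

# Factorization 1: 9 = 3 * 3
assert mul((3, 0), (3, 0)) == nine

# Factorization 2: 9 = (2 + sqrt(-5)) * (2 - sqrt(-5))
assert mul((2, 1), (2, -1)) == nine

# Every factor has norm 9, and no element has norm 3
# (a^2 + 5*b^2 = 3 has no integer solutions), so each factor is
# irreducible: the two factorizations really are inequivalent.
assert norm((3, 0)) == norm((2, 1)) == norm((2, -1)) == 9
assert all(norm((a, b)) != 3 for a in range(-2, 3) for b in range(-1, 2))
```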

Instead, we had to get a lot more mathematically sophisticated, factoring into something called prime ideals. We got that working and started crunching through the square roots in our alphabet. Things simplified beautifully: we started with a result that was ten million terms long, and reduced it to just five thousand. And at the end of the day, after subtracting one integral from the other…
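For the \sqrt{-5} example above, there's a standard textbook resolution (a well-known fact of algebraic number theory, not a result of our paper). Define the prime ideals \mathfrak{p}=(3,\,2+\sqrt{-5}) and \bar{\mathfrak{p}}=(3,\,2-\sqrt{-5}). Then

(3)=\mathfrak{p}\,\bar{\mathfrak{p}},\quad (2+\sqrt{-5})=\mathfrak{p}^2,\quad (2-\sqrt{-5})=\bar{\mathfrak{p}}^2

so both factorizations of 9 collapse to the same prime-ideal factorization, (9)=\mathfrak{p}^2\,\bar{\mathfrak{p}}^2, and uniqueness is restored.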

We found no square roots!

After all of our simplifications, all the letters we found were rational. Our nice test case turned out much, much simpler than we expected.

It’s been a long road on this calculation, with a lot of false starts. We were kind of hoping to be the first to find square root letters in these alphabets; instead it looks like another group will beat us to the punch. But we developed a lot of interesting tricks along the way, and we thought it would be good to publish our “null result”. As always in our field, sometimes surprising simplifications are just around the corner.

When to Trust the Contrarians

One of my colleagues at the NBI had an unusual experience: one of his papers took a full year to get through peer review. This happens often in math, where reviewers will diligently check proofs for errors, but it’s quite rare in physics: usually the path from writing to publication is much shorter. Then again, the delays shouldn’t have been too surprising for him, given what he was arguing.

My colleague Mohamed Rameez, along with Jacques Colin, Roya Mohayaee, and Subir Sarkar, wants to argue against one of the most famous astronomical discoveries of the last few decades: that the expansion of our universe is accelerating, and thus that an unknown “dark energy” fills the universe. They argue that one of the key pieces of evidence used to prove acceleration is mistaken: that a large region of the universe around us is in fact “flowing” in one direction, and that tricked astronomers into thinking its expansion was accelerating. You might remember a paper making a related argument back in 2016. I didn’t like the media reaction to that paper, and my post triggered a response by the authors, one of whom (Sarkar) is on this paper as well.

I’m not an astronomer or an astrophysicist. I’m not qualified to comment on their argument, and I won’t. I’d still like to know whether they’re right, though. And that means figuring out which experts to trust.

Pick anything we know in physics, and you’ll find at least one person who disagrees. I don’t mean a crackpot, though they exist too. I mean an actual expert who is convinced the rest of the field is wrong. A contrarian, if you will.

I used to be very unsympathetic to these people. I was convinced that the big results of a field are rarely wrong, because of how much is built off of them. I thought that even if a field was using dodgy methods or sloppy reasoning, the big results are used in so many different situations that if they were wrong they would have to be noticed. I’d argue that if you want to overturn one of these big claims you have to disprove not just the result itself, but every other success the field has ever made.

I still believe that, somewhat. But there are a lot of contrarians here at the Niels Bohr Institute. And I’ve started to appreciate what drives them.

The thing is, no scientific result is ever as clean as it ought to be. Everything we do is jury-rigged. We’re almost never experts in everything we’re trying to do, so we often don’t know the best method. Instead, we approximate and guess, we find rough shortcuts and don’t check if they make sense. This can take us far sometimes, sure…but it can also backfire spectacularly.

The contrarians I’ve known got their inspiration from one of those backfires. They saw a result, a respected mainstream result, and they found a glaring screw-up. Maybe it was an approximation that didn’t make any sense, or a statistical measure that was totally inappropriate. Whatever it was, it got them to dig deeper, and suddenly they saw screw-ups all over the place. When they pointed out these problems, at best the people they accused didn’t understand. At worst they got offended. Instead of cooperation, the contrarians were told they couldn’t possibly know what they were talking about, and were ignored. Eventually, they concluded the entire sub-field was broken.

Are they right?

Not always. They can’t all be right: for every claim you can find a contrarian, and believing them all would be a contradiction.

But sometimes?

Often, they’re right about the screw-ups. They’re right that there’s a cleaner, more proper way to do that calculation, a statistical measure more suited to the problem. And often, doing things right raises subtleties, means that the big important result everyone believed looks a bit less impressive.

Still, that’s not the same as ruling out the result entirely. And despite all the screw-ups, the main result is still often correct. Often, it’s justified not by the original, screwed-up argument, but by newer evidence from a different direction. Often, the sub-field has grown to a point that the original screwed-up argument doesn’t really matter anymore.

Often, but again, not always.

I still don’t know whether to trust the contrarians. I still lean towards expecting fields to sort themselves out, to thinking that error alone can’t sustain long-term research. But I’m keeping a more open mind now. I’m waiting to see how far the contrarians go.

Knowing When to Hold/Fold ‘Em in Science

The things one learns from Wikipedia. For example, today I learned that the country song “The Gambler” was selected for preservation by the US Library of Congress as being “culturally, historically, or artistically significant.”

You’ve got to know when to hold ’em, know when to fold ’em,

Know when to walk away, know when to run.

Knowing when to “hold ’em” or “fold ’em” is important in life in general, but it’s particularly important in science.

And not just on poker night

As scientists, we’re often trying to do something no-one else has done before. That’s exciting, but it’s risky too: sometimes whatever we’re trying simply doesn’t work. In those situations, it’s important to recognize when we aren’t making progress, and change tactics. The trick is, we can’t give up too early either: science is genuinely hard, and sometimes when we feel stuck we’re actually close to the finish line. Knowing which is which, when to “hold” and when to “fold”, is an essential skill, and a hard one to learn.

Sometimes, we can figure this out mathematically. Computational complexity theory classifies calculations by how difficult they are, including how long they take. If you can estimate how long a calculation should take, you can decide whether you’ll finish it in a reasonable amount of time. If you just want a rough guess, you can do a simpler version of the calculation, see how long that takes, and estimate how much longer the full one will take. If you figure out you’re doomed, then it’s time to switch to a more efficient algorithm, or to a different question entirely.
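As a toy illustration of that rough guess (my own sketch, not anyone's published method): time the calculation at a few small sizes, then extrapolate to the full size under an assumed polynomial scaling.

```python
import time

def projected_runtime(calc, sizes, full_size, degree=3):
    """Time `calc` at small sizes, then extrapolate to `full_size`
    assuming polynomial scaling t ~ c * n**degree. The degree is a
    guess you'd take from complexity analysis of your algorithm."""
    timings = []
    for n in sizes:
        start = time.perf_counter()
        calc(n)
        timings.append((n, time.perf_counter() - start))
    # Fit the constant c from the largest timed case
    n_ref, t_ref = timings[-1]
    c = t_ref / n_ref ** degree
    return c * full_size ** degree

# Toy stand-in for a "calculation": a triple sum, which scales as n**3
def toy_calc(n):
    return sum(i * j * k for i in range(n) for j in range(n) for k in range(n))

# Estimate how long n = 500 would take from runs at n = 50..100
estimate = projected_runtime(toy_calc, [50, 75, 100], 500, degree=3)
```

Here the cubic degree is an assumption you'd need to justify; with the wrong exponent the extrapolation can be off by orders of magnitude, which is exactly why the estimate is worth doing carefully.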

Sometimes, we don’t just have to consider time, but money as well. If you’re doing an experiment, you have to estimate how much the equipment will cost, and how much it will cost to run it. Experimenters get pretty good at estimating these things, but they still screw up sometimes and run over budget. Occasionally this is fine: LIGO didn’t detect anything in its first eight-year run, but they upgraded the machines and tried again, and won a Nobel prize. Other times it’s a disaster, and money keeps being funneled into a project that never works. Telling the difference is crucial, and it’s something we as a community are still not so good at.

Sometimes we just have to follow our instincts. This is dangerous, because we have a bias (the “sunk cost fallacy”) to stick with something if we’ve already spent a lot of time or money on it. To counteract that, it’s good to cultivate a bias in the opposite direction, which you might call “scientific impatience”. Getting frustrated with slow progress may not seem productive, but it keeps you motivated to search for a better way. Experienced scientists get used to how long certain types of project take. Too little progress, and they look for another option. This can fail, killing a project that was going to succeed, but it can also prevent over-investment in a failing idea. Only a mix of instincts keeps the field moving.

In the end, science is a gamble. Like the song, we have to know when to hold ’em and fold ’em, when to walk away, and when to run an idea as far as it will go. Sometimes it works, and sometimes it doesn’t. That’s science.

Calabi-Yaus in Feynman Diagrams: Harder and Easier Than Expected

I’ve got a new paper up, about the weird geometrical spaces we keep finding in Feynman diagrams.

With Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, and most recently Cristian Vergu and Matthias Volk, I’ve been digging up odd mathematics in particle physics calculations. In several calculations, we’ve found that we need a type of space called a Calabi-Yau manifold. These spaces are often studied by string theorists, who hope they represent how “extra” dimensions of space are curled up. String theorists have found an absurdly large number of Calabi-Yau manifolds, so many that some are trying to sift through them with machine learning. We wanted to know if our situation was quite that ridiculous: how many Calabi-Yaus do we really need?

So we started asking around, trying to figure out how to classify our catch of Calabi-Yaus. And mostly, we just got confused.

It turns out there are a lot of different tools out there for understanding Calabi-Yaus, and most of them aren’t all that useful for what we’re doing. We went in circles for a while trying to understand how to desingularize toric varieties, and other things that will sound like gibberish to most of you. In the end, though, we noticed one small thing that made our lives a whole lot simpler.

It turns out that all of the Calabi-Yaus we’ve found are, in some sense, the same. While the details of the physics vary, the overall “space” is the same in each case. It’s the space we kept finding for our “Calabi-Yau bestiary”, and it turns out one of the “traintrack” diagrams we found earlier can be written in the same way. We found another example too, a “wheel” that seems to be the same type of Calabi-Yau.

And that actually has a sensible name

We also found many examples that we don’t understand. Add another rung to our “traintrack” and we suddenly can’t write it in the same space. (Personally, I’m quite confused about this one.) Add another spoke to our wheel and we confuse ourselves in a different way.

So while our calculation turned out simpler than expected, we don’t think this is the full story. Our Calabi-Yaus might live in “the same space”, but there are also physics-related differences between them, and these we still don’t understand.

At some point, our abstract included the phrase “this paper raises more questions than it answers”. It doesn’t say that now, but it’s still true. We wrote this paper because, after getting very confused, we ended up able to say a few new things that hadn’t been said before. But the questions we raise are if anything more important. We want to inspire new interest in this field, toss out new examples, and get people thinking harder about the geometry of Feynman integrals.

In Defense of the Streetlight

If you read physics blogs, you’ve probably heard this joke before:

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “this is where the light is”.

The drunk’s line of thinking has a name, the streetlight effect, and while it may seem ridiculous it’s a common error, even among experts. When it gets too tough to research something, scientists will often be tempted by an easier problem even if it has little to do with the original question. After all, it’s “where the light is”.

Physicists get accused of this all the time. Dark matter could be completely undetectable on Earth, but physicists still build experiments to search for it. Our universe appears to be curved one way, but string theory makes it much easier to study universes curved the other way, so physicists write a lot of nice proofs about a universe we don’t actually inhabit. In my own field, we spend most of our time studying a very nice theory that we know can’t describe the real world.

I’m not going to defend this behavior in general. There are real cases where scientists trick themselves into thinking they can solve an easy problem when they need to solve a hard one. But there is a crucial difference between scientists and drunkards looking for their keys, one that makes this behavior a lot more reasonable: scientists build technology.

As scientists, we can’t just grope around in the dark for our keys. The spaces we’re searching, from the space of all theories of gravity to actual outer space, are much too vast to search randomly. We need new ideas, new mathematics or new equipment, to do the search properly. If we were the drunkard of the story, we’d need to invent night-vision goggles.

Is the light better here, or is it just me?

Suppose you wanted to design new night-vision goggles, to search for your keys in the park. You could try to build them in the dark, but you wouldn’t be able to see what you were doing: you’d lose pieces, miss screws, and break lenses. Much better to build the goggles under that convenient streetlight.

Of course, if you build and test your prototype goggles under the streetlight, you risk that they aren’t good enough for the dark. You’ll have calibrated them in an unrealistic case. In all likelihood, you’ll have to go back and fix your goggles, tweaking them as you go, and you’ll run into the same problem: you can’t see what you’re doing in the dark.

At that point, though, you have an advantage: you now know how to build night-vision goggles. You’ve practiced building goggles in the light, and now even if the goggles aren’t good enough, you remember how you put them together. You can tweak the process, modify your goggles, and make something good enough to find your keys. You’re good enough at making goggles that you can modify them now, even in the dark.

Sometimes scientists really are like the drunk, searching under the most convenient streetlight. Sometimes, though, scientists are working where the light is for a reason. Instead of wasting their time lost in the dark, they’re building new technology and practicing new methods, getting better and better at searching until, when they’re ready, they can go back and find their keys. Sometimes, the streetlight is worth it.

“X Meets Y” Conferences

Most conferences focus on a specific sub-field. If you call a conference “Strings” or “Amplitudes”, people know what to expect. Likewise if you focus on something more specific, say Elliptic Integrals. But what if your conference is named after two sub-fields?

These conferences, with names like “QCD Meets Gravity” and “Scattering Amplitudes and the Conformal Bootstrap”, try to connect two different sub-fields together. I’ll call them “X Meets Y” conferences.

The most successful “X Meets Y” conferences involve two sub-fields that have been working together for quite some time. At that point, you don’t just have “X” researchers and “Y” researchers, but “X and Y” researchers, people who work on the connection between both topics. These people can glue a conference together, showing the separate “X” and “Y” researchers what “X and Y” research looks like. At a conference like that, speakers have a clear idea of what to talk about: the “X” researchers know how to talk to the “Y” researchers and vice versa, and the organizers can invite speakers who they know can talk to both groups.

If the sub-fields have less history of collaboration, “X Meets Y” conferences become trickier. You need at least a few “X and Y” researchers (or at least aspiring “X and Y” researchers) to guide the way. Even if most of the “X” researchers don’t know how to talk to “Y” researchers, the “X and Y” researchers can give suggestions, telling “X” which topics would be most interesting to “Y” and vice versa. With that kind of guidance, “X Meets Y” conferences can inspire new directions of research, opening one field up to the tools of another.

The biggest risk in an “X Meets Y” conference, that becomes more likely the fewer “X and Y” researchers there are, is that everyone just gives their usual talks. The “X” researchers talk about their “X”, and the “Y” researchers talk about their “Y”, and both groups nod politely and go home with no new ideas whatsoever. Scientists are fundamentally lazy creatures. If we already have a talk written, we’re tempted to use it, even if it doesn’t quite fit the occasion. Counteracting that is a tough job, and one that isn’t always feasible.

“X Meets Y” conferences can be very productive, the beginning of new interdisciplinary ideas. But they’re certainly hard to get right. Overall, they’re one of the trickier parts of the social side of science.

At Aspen

I’m at the Aspen Center for Physics this week, for a workshop on Scattering Amplitudes and the Conformal Bootstrap.

A place even greener than its ubiquitous compost bins

Aspen is part of a long and illustrious tradition of physics conference sites located next to ski resorts. It’s ten years younger than its closest European counterpart, the Les Houches School of Physics, but if anything its traditions are stricter: all talks are blackboard talks, and visits last a minimum of two weeks. Instead of the summer schools of Les Houches, Aspen’s goal is to inspire collaboration: to get physicists to spend time working and hiking around each other until inspiration strikes.

This workshop is a meeting between two communities: people who study the Conformal Bootstrap (nice popular description here) and my own field of Scattering Amplitudes. The Conformal Bootstrap is one of our closest sister-fields, so there may be a lot of potential for collaboration. This week’s talks have been amplitudes-focused; I’m looking forward to the talks next week that will highlight connections between the two fields.

Hexagon Functions VI: The Power Cosmic

I have a new paper out this week. It’s the long-awaited companion to a paper I blogged about a few months back, itself the latest step in a program that has made up a major chunk of my research.

The title is a bit of a mouthful, but I’ll walk you through it:

The Cosmic Galois Group and Extended Steinmann Relations for Planar N = 4 SYM Amplitudes

I calculate scattering amplitudes (roughly, probabilities that elementary particles bounce off each other) in a (not realistic, and not meant to be) theory called planar N=4 super-Yang-Mills (SYM for short). I can’t summarize everything we’ve been doing here, but if you read the blog posts I linked above and some of the Handy Handbooks linked at the top of the page you’ll hopefully get a clearer picture.

We started using the Steinmann Relations a few years ago. Discovered in the 1960s, the Steinmann relations restrict the kind of equations we can use to describe particle physics. Essentially, they mean that particles can’t travel two ways at once. In this paper, we extend the Steinmann relations beyond Steinmann’s original idea. We don’t yet know if we can prove this extension works, but it seems to be true for the amplitudes we’re calculating. While we’ve presented this in talks before, this is the first time we’ve published it, and it’s one of the big results of this paper.

The other, more exotic-sounding result, has to do with something called the Cosmic Galois Group.

Évariste Galois, the famously duel-prone mathematician, figured out relations between algebraic numbers (that is, numbers you can get out of algebraic equations) in terms of a mathematical structure called a group. Today, mathematicians are interested not just in algebraic numbers, but in relations between transcendental numbers as well, specifically a kind of transcendental number called a period. These numbers show up a lot in physics, so mathematicians have been thinking about a Galois group for transcendental numbers that show up in physics, a so-called Cosmic Galois Group.

(Cosmic here doesn’t mean it has to do with cosmology. As far as I can tell, mathematicians just thought it sounded cool and physics-y. They also started out with rather ambitious ideas about it, if you want a laugh check out the last few paragraphs of this talk by Cartier.)

For us, Cosmic Galois Theory lets us study the unusual numbers that show up in our calculations. Doing this, we’ve noticed that certain numbers simply don’t show up. For example, the Riemann zeta function shows up often in our results, evaluated at many different numbers…but never evaluated at the number three. Nor does any number related to that one through the Cosmic Galois Group show up. It’s as if the theory only likes some numbers, and not others.

This weird behavior has been observed before. Mathematicians can prove it happens for some simple theories, but it even applies to the theories that describe the real world, for example to calculations of the way an electron’s path is bent by a magnetic field. Each theory seems to have its own preferred family of numbers.

For us, this has been enormously useful. We calculate our amplitudes by guesswork, starting with the right “alphabet” and then filling in different combinations, as if we’re trying all possible answers to a word jumble. Cosmic Galois Theory and Extended Steinmann have enabled us to narrow down our guess dramatically, making it much easier and faster to get to the right answer.

More generally though, we hope to contribute to mathematicians’ investigations of Cosmic Galois Theory. Our examples are more complicated than the simple theories where they currently prove things, and contain more data than the more limited results from electrons. Hopefully together we can figure out why certain numbers show up and others don’t, and find interesting mathematical principles behind the theories that govern fundamental physics.

For now, I’ll leave you with a preview of a talk I’m giving in a couple weeks’ time:

The font, of course, is Cosmic Sans

Research Rooms, Collaboration Spaces

Math and physics are different fields with different cultures. Some of those differences are obvious, others more subtle.

I recently remembered a subtle difference I noticed at the University of Waterloo. The math building there has “research rooms”, rooms intended for groups of mathematicians to collaborate. The idea is that you invite visitors to the department, reserve the room, and spend all day with them trying to iron out a proof or the like.

Theoretical physicists collaborate like this sometimes too, but in my experience physics institutes don’t typically have this kind of “research room”. Instead, they have “collaboration spaces”. Unlike a “research room”, you don’t reserve a “collaboration space”. Typically, they aren’t even rooms: they’re a set of blackboards in the coffee room, or a cluster of chairs in the corner between two hallways. They’re open spaces, designed so that passers-by can overhear the conversation and (potentially) join in.

That’s not to say physicists never shut themselves in a room for a day (or night) to work. But when they do, it’s not usually in a dedicated space. Instead, it’s in an office, or a commandeered conference room.

Waterloo’s “research rooms” and physics institutes’ “collaboration spaces” can be used for similar purposes. The difference is in what they encourage.

The point of a “collaboration space” is to start new collaborations. These spaces are open in order to take advantage of serendipity: if you’re getting coffee or walking down the hall, you might hear something interesting and spark something new, with people you hadn’t planned to collaborate with before. Institutes with “collaboration spaces” are trying to make new connections between researchers, to be the starting point for new ideas.

The point of a “research room” is to finish a collaboration. They’re for researchers who are already collaborating, who know they’re going to need a room and can reserve it in advance. They’re enclosed in order to shut out distractions, to make sure the collaborators can sit down and focus and get something done. Institutes with “research rooms” want to give their researchers space to complete projects when they might otherwise be too occupied with other things.

I’m curious if this difference is more widespread. Do math departments generally tend to have “research rooms” or “collaboration spaces”? Are there physics departments with “research rooms”? I suspect there is a real cultural difference here, in what each field thinks it needs to encourage.

Nonperturbative Methods for Conformal Theories in Natal

I’m at a conference this week, on Nonperturbative Methods for Conformal Theories, in Natal on the northern coast of Brazil.

Where even the physics institutes have their own little rainforests.

“Nonperturbative” means that most of the people at this conference don’t use the loop-by-loop approximation of Feynman diagrams. Instead, they try to calculate things that don’t require approximations, finding formulas that work even for theories where the forces involved are very strong. In practice this works best in what are called “conformal” theories: roughly speaking, theories that look the same at whatever “scale” you use. Sometimes these theories are “integrable”, theories that can be “solved” exactly with no approximation. Sometimes these theories can be “bootstrapped”: starting with a guess and seeing how various principles of physics constrain it, mapping out a kind of “space of allowed theories”. Both approaches, integrability and bootstrap, are represented at this conference.

This isn’t quite my community, but there’s a fair bit of overlap. We care about many of the same theories, like N=4 super Yang-Mills. We care about tricks to do integrals better, or to constrain mathematical guesses better, and we can trade these kinds of tricks and give each other advice. And while my work is typically “perturbative”, I did have one nonperturbative result to talk about, one which turns out to be more closely related to the methods these folks use than I had appreciated.