
Hexagon Functions VI: The Power Cosmic

I have a new paper out this week. It’s the long-awaited companion to a paper I blogged about a few months back, itself the latest step in a program that has made up a major chunk of my research.

The title is a bit of a mouthful, but I’ll walk you through it:

The Cosmic Galois Group and Extended Steinmann Relations for Planar N = 4 SYM Amplitudes

I calculate scattering amplitudes (roughly, probabilities that elementary particles bounce off each other) in a (not realistic, and not meant to be) theory called planar N=4 super-Yang-Mills (SYM for short). I can’t summarize everything we’ve been doing here, but if you read the blog posts I linked above and some of the Handy Handbooks linked at the top of the page you’ll hopefully get a clearer picture.

We started using the Steinmann Relations a few years ago. Discovered in the 60’s, the Steinmann relations restrict the kind of equations we can use to describe particle physics. Essentially, they mean that particles can’t travel two ways at once. In this paper, we extend the Steinmann relations beyond Steinmann’s original idea. We don’t yet know if we can prove this extension works, but it seems to be true for the amplitudes we’re calculating. While we’ve presented this in talks before, this is the first time we’ve published it, and it’s one of the big results of this paper.

The other, more exotic-sounding result has to do with something called the Cosmic Galois Group.

Évariste Galois, the famously duel-prone mathematician, figured out relations between algebraic numbers (that is, numbers you can get out of algebraic equations) in terms of a mathematical structure called a group. Today, mathematicians are interested not just in algebraic numbers, but in relations between transcendental numbers as well, specifically a kind of transcendental number called a period. These numbers show up a lot in physics, so mathematicians have been thinking about a Galois group for transcendental numbers that show up in physics, a so-called Cosmic Galois Group.

(Cosmic here doesn’t mean it has to do with cosmology. As far as I can tell, mathematicians just thought it sounded cool and physics-y. They also started out with rather ambitious ideas about it; if you want a laugh, check out the last few paragraphs of this talk by Cartier.)

For us, Cosmic Galois Theory lets us study the unusual numbers that show up in our calculations. Doing this, we’ve noticed that certain numbers simply don’t show up. For example, the Riemann zeta function shows up often in our results, evaluated at many different numbers…but never evaluated at the number three. Nor does any number related to that one through the Cosmic Galois Group show up. It’s as if the theory only likes some numbers, and not others.
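To give a feel for the numbers involved (this is just an illustration, not anything from our actual calculation), here's a tiny pure-Python sketch of the Riemann zeta values in question. The even ones reduce to powers of π, while ζ(3) is an "independent" transcendental number, and it's that kind of number that mysteriously never appears in our results:

```python
from math import pi

def zeta(s: int, terms: int = 200_000) -> float:
    """Truncated Dirichlet series approximation to the Riemann zeta function."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

# Even zeta values reduce to powers of pi...
print(abs(zeta(2) - pi**2 / 6) < 1e-5)   # zeta(2) = pi^2/6
print(abs(zeta(4) - pi**4 / 90) < 1e-9)  # zeta(4) = pi^4/90
# ...while zeta(3) (Apery's constant) has no known closed form in terms of pi,
# and numbers of that kind are the ones our amplitudes seem to avoid.
print(zeta(3))
```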

This weird behavior has been observed before. Mathematicians can prove it happens for some simple theories, but it even applies to the theories that describe the real world, for example to calculations of the way an electron’s path is bent by a magnetic field. Each theory seems to have its own preferred family of numbers.

For us, this has been enormously useful. We calculate our amplitudes by guesswork, starting with the right “alphabet” and then filling in different combinations, as if we’re trying all possible answers to a word jumble. Cosmic Galois Theory and Extended Steinmann have enabled us to narrow down our guess dramatically, making it much easier and faster to get to the right answer.
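To make the "word jumble" idea concrete, here's a toy sketch (invented alphabet and rule, nothing like our actual setup): build every "word" from a symbol alphabet, then impose a made-up adjacency rule standing in for constraints like the Steinmann relations, and watch the space of guesses shrink:

```python
from itertools import product

# Toy "alphabet" of symbol letters (hypothetical stand-ins for the real thing).
alphabet = ["a", "b", "c", "d"]
length = 6  # stand-in for the "weight" of the functions at a given loop order

# Unconstrained guess: every possible word of the given length.
all_words = list(product(alphabet, repeat=length))

# A made-up adjacency rule standing in for a Steinmann-like constraint:
# forbid the letter "b" immediately after "a".
def allowed(word):
    return not any(x == "a" and y == "b" for x, y in zip(word, word[1:]))

constrained = [w for w in all_words if allowed(w)]

# The constraint removes a large fraction of candidate words,
# which is what makes the guesswork tractable.
print(len(all_words), len(constrained))
```

The real calculation works with linear combinations of such words rather than the words themselves, and the constraints are far more intricate, but the shrinking-the-guess logic is the same.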

More generally though, we hope to contribute to mathematicians’ investigations of Cosmic Galois Theory. Our examples are more complicated than the simple theories where they currently prove things, and contain more data than the more limited results from electrons. Hopefully together we can figure out why certain numbers show up and others don’t, and find interesting mathematical principles behind the theories that govern fundamental physics.

For now, I’ll leave you with a preview of a talk I’m giving in a couple weeks’ time:

The font, of course, is Cosmic Sans

Research Rooms, Collaboration Spaces

Math and physics are different fields with different cultures. Some of those differences are obvious, others more subtle.

I recently remembered a subtle difference I noticed at the University of Waterloo. The math building there has “research rooms”, rooms intended for groups of mathematicians to collaborate. The idea is that you invite visitors to the department, reserve the room, and spend all day with them trying to iron out a proof or the like.

Theoretical physicists collaborate like this sometimes too, but in my experience physics institutes don’t typically have this kind of “research room”. Instead, they have “collaboration spaces”. Unlike a “research room”, you don’t reserve a “collaboration space”. Typically, they aren’t even rooms: they’re a set of blackboards in the coffee room, or a cluster of chairs in the corner between two hallways. They’re open spaces, designed so that passers-by can overhear the conversation and (potentially) join in.

That’s not to say physicists never shut themselves in a room for a day (or night) to work. But when they do, it’s not usually in a dedicated space. Instead, it’s in an office, or a commandeered conference room.

Waterloo’s “research rooms” and physics institutes’ “collaboration spaces” can be used for similar purposes. The difference is in what they encourage.

The point of a “collaboration space” is to start new collaborations. These spaces are open in order to take advantage of serendipity: if you’re getting coffee or walking down the hall, you might hear something interesting and spark something new, with people you hadn’t planned to collaborate with before. Institutes with “collaboration spaces” are trying to make new connections between researchers, to be the starting point for new ideas.

The point of a “research room” is to finish a collaboration. They’re for researchers who are already collaborating, who know they’re going to need a room and can reserve it in advance. They’re enclosed in order to shut out distractions, to make sure the collaborators can sit down and focus and get something done. Institutes with “research rooms” want to give their researchers space to complete projects when they might otherwise be too occupied with other things.

I’m curious if this difference is more widespread. Do math departments generally tend to have “research rooms” or “collaboration spaces”? Are there physics departments with “research rooms”? I suspect there is a real cultural difference here, in what each field thinks it needs to encourage.

Nonperturbative Methods for Conformal Theories in Natal

I’m at a conference this week, on Nonperturbative Methods for Conformal Theories, in Natal on the northern coast of Brazil.

Where even the physics institutes have their own little rainforests.

“Nonperturbative” means that most of the people at this conference don’t use the loop-by-loop approximation of Feynman diagrams. Instead, they try to calculate things that don’t require approximations, finding formulas that work even for theories where the forces involved are very strong. In practice this works best in what are called “conformal” theories: roughly speaking, theories that look the same at whichever “scale” you use. Sometimes these theories are “integrable”, meaning they can be “solved” exactly, with no approximation. Sometimes they can be “bootstrapped”: you start with a guess and see how various principles of physics constrain it, mapping out a kind of “space of allowed theories”. Both approaches, integrability and bootstrap, are represented at this conference.

This isn’t quite my community, but there’s a fair bit of overlap. We care about many of the same theories, like N=4 super Yang-Mills. We care about tricks to do integrals better, or to constrain mathematical guesses better, and we can trade these kinds of tricks and give each other advice. And while my work is typically “perturbative”, I did have one nonperturbative result to talk about, one which turns out to be more closely related to the methods these folks use than I had appreciated.

Hexagon Functions V: Seventh Heaven

I’ve got a new paper out this week, a continuation of a story that has threaded through my career since grad school. With a growing collaboration (now Simon Caron-Huot, Lance Dixon, Falko Dulat, Andrew McLeod, and Georgios Papathanasiou) I’ve been calculating six-particle scattering amplitudes in my favorite theory-that-does-not-describe-the-real-world, N=4 super Yang-Mills. We’ve been pushing to more and more “loops”: tougher and tougher calculations that approximate the full answer better and better, using the “word jumble” trick I talked about in Scientific American. And each time, we learn something new.

Now we’re up to seven loops for some types of particles, and six loops for the rest. In older blog posts I talked in megabytes: half a megabyte for three loops, 15 MB for four loops, 300 MB for five loops. I don’t have a number like that for six and seven loops: we don’t store the result in that way anymore; it just got too cumbersome. We have to store it in a simplified form, and even that takes 80 MB.

Some of what we learned has to do with the types of mathematical functions that we need: our “guess” for the result at each loop. We’ve honed that guess down a lot, and discovered some new simplifications along the way. I won’t tell that story here (except to hint that it has to do with “cosmic Galois theory”) because we haven’t published it yet. It will be out in a companion paper soon.

This paper focused on the next step, going from our guess to the correct six- and seven-loop answers. Here too there were surprises. For the last several loops, we’d observed a surprisingly nice pattern: different configurations of particles with different numbers of loops were related, in a way we didn’t know how to explain. The pattern stuck around at five loops, so we assumed it was the real deal, and guessed the new answer would obey it too.

Yes, in our field this counts as surprisingly nice

Usually when scientists tell this kind of story, the pattern works, it’s a breakthrough, everyone gets a Nobel prize, etc. This time? Nope!

The pattern failed. And it failed in a way that was surprisingly difficult to detect.

The way we calculate these things, we start with a guess and then add what we know. If we know something about how the particles behave at high energies, or when they get close together, we use that to pare down our guess, getting rid of pieces that don’t fit. We kept adding these pieces of information, and each time the pattern seemed ok. It was only when we got far enough into one of these approximations that we noticed a piece that didn’t fit.

That piece was a surprisingly stealthy mathematical function, one that hid from almost every test we could perform. There aren’t any functions like that at lower loops, so we never had to worry about this before. But now, in the rarefied land of six-loop calculations, they finally start to show up.

We have another pattern, like the old one, that isn’t broken yet. But at this point we’re cautious: things get strange as calculations get more complicated, and sometimes the nice simplifications we notice are just accidents. It’s always important to check.

Deep physics or six-loop accident? You decide!

This result was a long time coming. Coordinating a large project with such a widely spread collaboration is difficult, and sometimes frustrating. People get distracted by other projects, they have disagreements about what the paper should say, even scheduling Skype around everyone’s time zones is a challenge. I’m more than a little exhausted, but happy that the paper is out, and that we’re close to finishing the companion paper as well. It’s good to have results that we’ve been hinting at in talks finally out where the community can see them. Maybe they’ll notice something new!


A Field That Doesn’t Read Its Journals

Last week, the University of California system ended negotiations with Elsevier, one of the top academic journal publishers. UC had been trying to get Elsevier to switch to a new type of contract, one in which instead of paying for access to journals they pay for their faculty to publish, then make all the results openly accessible to the public. In the end they couldn’t reach an agreement and thus didn’t renew their contract, cutting Elsevier off from millions of dollars and their faculty from reading certain (mostly recent) Elsevier journal articles. There’s a nice interview here with one of the librarians who was sent to negotiate the deal.

I’m optimistic about what UC was trying to do. Their proposal sounds like it addresses some of the concerns raised here with open-access systems. Currently, journals that offer open access often charge fees directly to the scientists publishing in them, fees that have to be scrounged up from somebody’s grant at the last minute. By setting up a deal for all their faculty together, UC would have avoided that. While the deal fell through, having an organization as big as the whole University of California system advocating open access (and putting the squeeze on Elsevier’s profits) seems like it can only lead to progress.

The whole situation feels a little surreal, though, when I compare it to my own field.

At the risk of jinxing it, my field’s relationship with journals is even weirder than xkcd says.

arXiv.org is a website that hosts what are called “preprints”, which originally meant papers that haven’t been published yet. They’re online, freely accessible to anyone who wants to read them, and will be for as long as arXiv exists to host them. Essentially everything anyone publishes in my field ends up on arXiv.

Journals don’t mind, in part, because many of them are open-access anyway. There’s an organization, SCOAP3, that runs what is in some sense a large-scale version of what UC was trying to set up: instead of paying for subscriptions, university libraries pay SCOAP3 and it covers the journals’ publication costs.

This means that there are two coexisting open-access systems, the journals themselves and arXiv. But in practice, arXiv is the one we actually use.

If I want to show a student a paper, I don’t send them to the library or the journal website, I tell them how to find it on arXiv. If I’m giving a talk, there usually isn’t room for a journal reference, so I’ll give the arXiv number instead. In a paper, we do give references to journals…but they’re most useful when they have arXiv links as well. I think the only times I’ve actually read an article in a journal were for articles so old that arXiv didn’t exist when they were published.

We still submit our papers to journals, though. Peer review still matters, we still want to determine whether our results are cool enough for the fancy journals or only good enough for the ordinary ones. We still put journal citations on our CVs so employers and grant agencies know not only what we’ve done, but which reviewers liked it.

But the actual copy-editing and formatting and publishing, that the journals still employ people to do? Mostly, it never gets read.

In my experience, that editing isn’t too impressive. Often, it’s about changing things to fit the journal’s preferences: its layout, its conventions, its inconvenient proprietary document formats. I haven’t seen them try to fix grammar, or improve phrasing. Maybe my papers have unusually good grammar; maybe they do more for other papers. And maybe they used to do more, when journals had a more central role. But now, they don’t change much.

Sometimes the journal version ends up on arXiv, if the authors put it there. Sometimes it doesn’t. And sometimes the result is in between. For my last paper about Calabi-Yau manifolds in Feynman diagrams, we got several helpful comments from the reviewers, but the journal also weighed in to get us to remove our more whimsical language, down to the word “bestiary”. For the final arXiv version, we updated for the reviewer comments, but kept the whimsical words. In practice, that version is the one people in our field will read.

This has some awkward effects. It means that sometimes important corrections don’t end up on arXiv, and people don’t see them. It means that technically, if someone wanted to insist on keeping an incorrect paper online, they could, even if a corrected version was published in a journal. And of course, it means that a large amount of effort is dedicated to publishing journal articles that very few people read.

I don’t know whether other fields could get away with this kind of system. Physics is small. It’s small enough that it’s not so hard to get corrections from authors when one needs to, small enough that social pressure can get wrong results corrected. It’s small enough that arXiv and SCOAP3 can exist, funded by universities and private foundations. A bigger field might not be able to do any of that.

For physicists, we should keep in mind that our system can and should still be improved. For other fields, it’s worth considering whether you can move in this direction, and what it would cost to do so. Academic publishing is in a pretty bizarre place right now, but hopefully we can get it to a better one.

Grant Roulette

Sometimes, it feels like applying for funding in science is a form of high-stakes gambling. You put in weeks of work assembling a grant application, making sure that it’s exciting and relevant and contains all the obnoxious buzzwords you’re supposed to use…and in the end, it gets approved or rejected for reasons that seem entirely out of your control.

What if, instead, you were actually gambling?

Put all my money on post-Newtonian corrections…

That’s the philosophy behind a 2016 proposal by Ferric Fang and Arturo Casadevall, recently summarized in an article on Vox by Kelsey Piper. The goal is to cut down on the time scientists waste applying for money from various government organizations (for them, the US’s National Institute of Health) by making part of the process random. Applications would be reviewed to make sure they met a minimum standard, but past that point every grant would have an equal chance of getting funded. That way scientists wouldn’t spend so much time perfecting grant applications, and could focus on the actual science.
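The mechanics of the proposal are simple enough to sketch in a few lines (the proposal names, scores, threshold, and budget below are all invented for illustration):

```python
import random

# Hypothetical applications with peer-review scores (0-10 scale, made up here).
applications = {
    "gravitational waves": 7.5,
    "post-Newtonian corrections": 6.1,
    "cold fusion": 1.2,
    "lattice QCD": 8.3,
    "perpetual motion": 0.4,
}

MINIMUM_STANDARD = 5.0  # review only filters out clearly unsound proposals
BUDGET = 2              # number of grants the agency can fund

# Stage 1: reviewers check the minimum standard, nothing more.
qualifying = [name for name, score in applications.items()
              if score >= MINIMUM_STANDARD]

# Stage 2: past that bar, every proposal has an equal chance -- a pure lottery.
funded = random.sample(qualifying, k=min(BUDGET, len(qualifying)))
print(funded)
```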

It’s an idea that seems, on its face, a bit too cute. Yes, grant applications are exhausting, but surely you still want some way to prioritize better ideas over worse ones? For all its flaws, one would hope the grant review process at least does that.

Well, maybe not. The Vox piece argues that, at least in medicine, grants are almost random already. Each grant is usually reviewed by multiple experts. Several studies cited in the piece looked at the variability between these experts: do they usually agree, or disagree? Measuring this in a variety of ways, they came to the same conclusion: there is almost no consistency among ratings by different experts. In effect, the NIH appears to already be using a lottery, one in which grants are randomly accepted or rejected depending on who reviews them.

What encourages me about these studies is that there really is a concrete question to ask. You could argue that physics shouldn’t suffer from the same problems as medicine, that grant review is really doing good work in our field. If you want to argue that, you can test it! Look at old reviews by different people, or get researchers to do “mock reviews”, and test statistical measures like inter-rater reliability. If there really is no consistency between reviews then we have a real problem in need of fixing.
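That kind of test is easy to set up. Here's a toy version with invented mock-review data, using Cohen's kappa (a standard inter-rater agreement measure, computed by hand here) for two reviewers rating the same proposals "fund" (1) or "reject" (0):

```python
from collections import Counter

# Hypothetical mock reviews: two reviewers rate the same ten proposals.
# These scores are invented purely for illustration.
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
rater_b = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for agreement by chance."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    # Chance agreement: probability both raters pick the same label at random.
    expected = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / n**2
    return (observed - expected) / (1 - expected)

# Kappa near (or below) zero means agreement no better than chance --
# which is what the NIH studies reportedly found.
print(round(cohens_kappa(rater_a, rater_b), 3))
```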

I genuinely don’t know what to expect from that kind of study in my field. But the way people talk about grants makes me suspicious. Everyone seems to feel like grant agencies are biased against their sub-field. Grant-writing advice is full of weird circumstantial tips. (“I heard so-and-so is reviewing this year, so don’t mention QCD!”) It could all be true…but it’s also the kind of superstition people come up with when they look for patterns in a random process. If all the grant-writing advice in the world boils down to “bet on red”, we might as well admit which game we’re playing.

What Science Would You Do If You Had the Time?

I know a lot of people who worry about the state of academia. They worry that the competition for grants and jobs has twisted scientists’ priorities, that the sort of dedicated research of the past, sitting down and thinking about a topic until you really understand it, just isn’t possible anymore. The timeline varies: there are people who think the last really important development was the Standard Model, or the top quark, or AdS/CFT. Even more optimistic people, who think physics is still just as great as it ever was, often complain that they don’t have enough time.

Sometimes I wonder what physics would be like if we did have the time. If we didn’t have to worry about careers and funding, what would we do? I can speculate, comparing to different communities, but here I’m interested in something more concrete: what, specifically, could we accomplish? I often hear people complain that the incentives of academia discourage deep work, but I don’t often hear examples of the kind of deep work that’s being discouraged.

So I’m going to try an experiment here. I know I have a decent number of readers who are scientists of one field or another. Imagine you didn’t have to worry about funding any more. You’ve got a permanent position, and what’s more, your favorite collaborators do too. You don’t have to care about whether your work is popular, whether it appeals to the university or the funding agencies or any of that. What would you work on? What projects would you personally do, that you don’t have the time for in the current system? What worthwhile ideas has modern academia left out?