Hexagon Functions – or, what is my new paper about?

I’ve got a new paper up on arXiv this week.

(For those of you unfamiliar with it, arXiv.org is a website where physicists, mathematicians, and researchers in related fields post their papers before submitting them to journals. It’s a cultural quirk of physics that probably requires a post in its own right at some point. Anyway…)

What’s it about? Well, the paper is titled Hexagon functions and the three-loop remainder function. Let’s go through that and figure out what it means.

When the paper refers to hexagon functions, it’s referring to functions used to describe situations with six particles involved. An important point to clarify here is that when counting the number of “particles involved”, we add together both the particles that go in and the particles that go out. So if three particles arrive somewhere, interact with each other in some complicated way, and then those three particles leave, that’s a six-particle process. Similarly, if two particles collide and four particles emerge, that’s also a six-particle process. (If you find the idea of more particles coming out than went in confusing, read this post.) Hexagon functions, then, can describe either of those processes.

What, specifically, are these functions being used for? Well, they’re being used to find the three-loop remainder function of N=4 super Yang-Mills.

N=4 super Yang-Mills is my favorite theory. If you haven’t read my posts on the subject, I encourage you to do so.

N=4 super Yang-Mills is so nice because it is so symmetric, and because it takes part in so many dualities. These two traits ended up being enough for Zvi Bern, Lance Dixon, and Vladimir Smirnov to propose an ansatz for all amplitudes in N=4 super Yang-Mills, called the BDS ansatz. (Amplitudes are how we calculate the probability of events occurring: for example, the probability of that “two particles going to four particles” situation I talked about earlier.)

Unfortunately, their formula was incomplete. While it was possible to prove that the formula was true for four-particle and five-particle processes, for six or more particles the formula failed. As it turned out though, it failed in a predictable way. All that was needed to fix it was to add something called the remainder function, the remaining part of the formula beyond the BDS ansatz.

The task, then, was to compute this remainder function.

I’ve talked before about how in quantum field theory, we calculate probabilities through increasingly complicated diagrams, keeping track of the complexity by counting the number of loops. The remainder function had already been computed up to two loops by working out these diagrams, but three loops looked to be considerably more difficult.

Luckily, we (myself, Lance Dixon, James Drummond, and Jeffrey Pennington) had a trick up our sleeves.

Formulas in N=4 super Yang-Mills have a property called maximal transcendentality. I’ve talked about transcendentality before: essentially, it’s a way of counting how many powers of pi and logarithms are in your equations. Maximal transcendentality means that every part of the formula has a fixed, maximum number for its degree of transcendentality. In the case of the remainder function, this is two times the number of loops. Thus, the two-loop remainder function has degree of transcendentality four, so it can have pi to the fourth power in it, while the three-loop remainder function (the one that we calculated) has degree of transcendentality six, so it can have pi to the sixth power.
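To make the counting concrete, here is the bookkeeping in worked form. The weight assignments below are the standard conventions, not anything specific to our paper:

```latex
% Transcendental weight T: logs and \pi count as weight 1,
% zeta values \zeta_n as weight n, and weights add under multiplication.
\mathcal{T}(\ln u) = 1, \qquad \mathcal{T}(\pi) = 1, \qquad
\mathcal{T}(\zeta_n) = n, \qquad \mathcal{T}(f\,g) = \mathcal{T}(f) + \mathcal{T}(g).
% Maximal transcendentality at L loops means every term has weight 2L.
% At three loops that weight is six, so terms like these can appear:
\mathcal{T}\bigl(\pi^6\bigr) = 6, \qquad
\mathcal{T}\bigl(\zeta_2 \, \ln^4 u\bigr) = 2 + 4 = 6,
% while anything of lower weight, like a bare \pi^4, is forbidden.
```

This uniformity is a huge constraint: it means whole families of lower-weight terms simply cannot show up.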

Of course, it can have lots of other expressions as well, which brings us back to the hexagon functions. By classifying the sort of functions that can appear in these formulas at each level of transcendentality, we find the basic building blocks that can show up in the remainder function. All we have to do then is ask what combinations of building blocks are allowed: which ones make good physics sense, for example, or which ones allow our formula to agree with the predictions of other researchers.

As it turns out, once you apply all the restrictions there is only one possible way to put the building blocks together that gives you a functioning formula. By process of elimination, this formula must be the correct three-loop six-point remainder function. Every extra constraint then serves as a check that nothing went wrong and that the formula is sound. Without calculating a single Feynman diagram, we’ve gotten our result!

Just to give you an idea of how complicated this result is: writing the formula out fully would take 800 pages. We’ve got shorter ways to summarize it, but perhaps it would be better to give a picture. The formula depends on three variables, called u, v, and w. To show how it behaves when all three variables change, here’s a plot of the formula in the variables u and v, for a series of different values of w.

[Plot: the remainder function in u and v, stacked for several values of w]

Without our various shortcuts, generating this formula would have taken an extraordinarily long time. Luckily, N=4 super Yang-Mills’s nice properties save the day, and allow us to achieve what I hope you won’t mind me calling a truly impressive result.

New Guide, Taking Suggestions

Hello readers!

Some of you have probably read the guide to N=4 super Yang-Mills theory linked at the top of my blog’s home page. A few of you even discovered this blog via the guide.

I’m thinking about doing another series of posts like those, explaining a different theory. I’d like to get an idea of which theory you guys are most interested in seeing described. Whichever I choose, it will be largely along the same lines as the N=4 posts: focused less on technical details and more on presenting something that a layman can understand.

Here are some of the options I’m considering:

  • N=8 Supergravity: Very broadly speaking, this is gravity’s equivalent of N=4 super Yang-Mills. It’s connected to N=4 in a variety of interesting ways, and it’s something I’d like to work more with at some point.
  • The (2,0) Theory: This was the motivation behind my first paper. It’s harder to explain than some of the other theories because it doesn’t have an easy analogy with the particles of the real world. It’s also even harder to work with, to the point that saying something rigorous about it is often worthy of a paper on its own.
  • String Theory/M Theory: This is a big topic, and there are many sites out there already that cover aspects of it. What I might try to do is look for an angle of approach that others haven’t covered, and try to explain some slightly more technical aspects of the situation in a popularly approachable way.

I could also give a more detailed description of some method from amplitudeology, like generalized unitarity or symbology.

Finally, I could always just keep posting like I have been doing. But this seems like a good time to add to my site’s utility. So what do you think? What should I talk about?

Dammit Jim, I’m a Physicist not a Graphic Designer!

Over the last week I’ve been working with a few co-authors to get a paper ready for publication. For my part, this has mostly meant making plots of our data. (Yes, theorists have data! It’s the result of calculations, not observations, but it’s still data!)

As it turns out, making the actual plots is only the first and easiest step. We have a huge number of data points, which means the plots ended up being very large files. To fix this I had to flatten the plots into pixel images, so that the files no longer store every individual point, a process called rasterizing the images. I also needed to make sure that the labels of the plots matched the fonts in the paper, and that the images in the paper were of the right file type to be included, which in turn meant understanding the sort of information retained by each type of image file. I had to learn which image files include transparency and which don’t, which store fonts as text and which as images, and which fonts were included in each program I used. By the end, I learned more about graphic design than I ever intended to.
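For anyone in the same boat, here’s a minimal sketch of the rasterizing step. It assumes matplotlib, which the post doesn’t actually name, so treat it as one common way to do this rather than a record of what we used:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without needing a display
import matplotlib.pyplot as plt

# A large synthetic data set standing in for real calculation output.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 200_000))

fig, ax = plt.subplots()
# rasterized=True flattens only this data-heavy element into pixels
# in the output PDF; the axes, ticks, and labels remain vector text,
# so they can still match the paper's fonts.
ax.scatter(x, y, s=1, rasterized=True)
ax.set_xlabel(r"$u$")
ax.set_ylabel(r"$v$")

# dpi sets the resolution of the rasterized layer; the resulting PDF
# can then be included in a LaTeX document like any other figure.
fig.savefig("plot.pdf", dpi=200)
```

The point of rasterizing only the scatter layer, rather than the whole figure, is that the file shrinks dramatically while the text stays crisp and searchable.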

In a company, this sort of job would be given to a graphic designer on-staff, or a hired expert. In academia, however, we don’t have the resources for that sort of thing, so we have to become experts in the nitty-gritty details of how to get our work in publishable form.

As it turns out, this is part of a wider pattern in academia. Any given project doesn’t have a large staff of specialists or a budget for outside firms, so everyone involved has to become competent at tasks that a business would parcel out to experts. This is why a large part of work in physics isn’t really physics per se; rather, we theorists often spend much of our time programming, while experimentalists often have to build and repair their experimental apparatus. The end result is that much of what we do is jury-rigged together, with an amateur understanding of most of the side disciplines involved. Things work, but they aren’t as efficient or as slick as they could be if assembled by a real expert. On the other hand, it makes things much cheaper, and it’s a big contributor to the uncanny ability of physicists to know about other peoples’ disciplines.

Talks, and what they’re good for

It’s an ill-kept secret that basically everyone in academia is a specialist. Nobody is just a “physicist”, or just a “high energy theorist”, or even just a “string theorist”. Even when I describe myself as something as specific as an “amplitudeologist”, I’m still over-generalizing: there’s a lot of amplitudes work out there that I would be hard-pressed to understand, and even harder-pressed to reproduce.

In the end, each of us is only going to understand a small subset of the vastness of our subject. This is problematic when it comes to attending talks.

Rarely, we get to attend talks about something we completely understand. Generally, we’re the ones giving those talks. The rest of the time, even at conferences for people of our particular specialty, we’re going to miss some fraction of the content, either because we don’t understand it or because we don’t find it interesting.

The question then becomes, why attend the talk in the first place? Why spend an hour of your time when you’re not getting an hour’s worth of content?

There are a couple reasons, of varying levels of plausibility.

One is that it’s always nice to know what other subfields are doing. It lets one feel connected to one’s compatriots, and it helps one navigate one’s career. That said, it’s unclear whether going to talks is really the best way of doing this. If you just want to know what other people are doing, you can always just watch to see what they publish. That doesn’t take an hour, unless you’re really dedicated to wasting time.

A more important benefit is increasing levels of familiarity. These days, I can productively pay attention to the first quarter of a talk, half if it’s particularly good. When I first got to grad school, I’d probably tune out after the first five minutes. The more talks you see on a subject, the more of the talk makes sense, and the more you get out of it. That’s part of why even fairly specialized people who are further along in their careers can talk on a wide range of subjects: often, they’ve intentionally kept themselves aware of what’s going on in other subfields, going to talks, reading papers, and engaging in conversation. This is a valuable end goal, since there is some truth to the hype about the benefits of interdisciplinarity in providing unconventional solutions to problems. That said, this is a gradual process. The benefit of one individual talk is tiny, and it doesn’t seem worth an hour of time. Much like exercise, it’s the habit that provides the benefit, not any individual session.

So in the end, talks are almost always unsatisfying. But we keep going to them, because they make us better scientists.

Duality: Find out what it means to me

There’s a cute site out there called Why String Theory. Started by Oxford and the Royal Society, Why String Theory contains lots of concise and well-illustrated explanations of string theory, and it even wades into some of the more complex topics like AdS/CFT and string dualities in general. Their explanation of dualities is a nice introduction to why dualities matter in string theory, but I don’t think it does a very good job of explaining what a duality actually is or how one works. As your fearless host, I’m confident that I can do better.

Why String Theory defines dualities as when “different mathematical theories describe the same physics.” How does that work, though? In what sense are the theories different, if they describe the same thing? And if they describe the same thing, why do we need both of them?

[Image: the classic “face or vase” optical illusion]

You’ve probably seen the above image before, or one much like it. Look at it one way, and you see a goblet. Another, and you see two faces.

Now imagine that instead of a flat image, these are 3D objects, models you have in your house. You’ve got a goblet, and a pair of clay faces. You’re still pretty sure they fit together like they do in the image, though. Maybe the packaging said they fit together, maybe you stuck them together and it didn’t look like there were any gaps. Whatever the reason, you’re confident enough about this that you’re willing to assume it’s true.

Now suppose you want to figure out how long the noses on the faces are. In case you’ve never measured a human nose, I can let you know that it’s tricky. You could lay a ruler along the nose, but it would sit at a diagonal rather than lying flat, so you wouldn’t get an accurate measurement. Even putting the ruler beneath the nose doesn’t work for rounded noses like these.

That said, measuring the goblet is easy. You can run measuring tape around the neck of the goblet to find the circumference, and then calculate the diameter. And if you measure the goblet in this way, you also know how long the faces’ noses are.

You could go further, and build up a list of things you can measure on one object that tell you about the other one. The necks match up to the base of the goblet, the foreheads to the mouth, etc. It would be like a dictionary, translating between two languages: the language of measurements of the faces, and the language of measurements of the goblet.

That sort of “dictionary” is the essence of duality. When two theories have a duality (are dual to each other), you can make a “dictionary” to translate measurements in one theory to measurements in the other. That doesn’t mean, however, that the theories are obviously connected: like the 3D models of the faces and the goblet, without looking at the particular “silhouette” defined by the duality, the two views can be radically different. Rather than physical objects, dualities relate mathematical “objects”, so rather than physical obstructions like the solidity of noses we have to deal with mathematical ones: situations where one quantity or another is easier or harder to calculate depending on how the math is set up. For example, many dualities relate things that require calculations at very high loops to things that can be calculated with fewer loops (for an explanation of loops, check out this post).
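To push the analogy slightly further, the “dictionary” really can be thought of as a lookup table: a hard measurement on one side, an easy measurement plus a translation rule on the other. This toy sketch is pure illustration of the faces/goblet analogy; the names and numbers are made up, and none of this is actual physics:

```python
import math

# The noses meet at the goblet's neck, so (in this toy model) the
# neck's diameter, circumference / pi, stands in for the nose length.
def nose_length_from_neck(neck_circumference):
    return neck_circumference / math.pi

# Toy "duality dictionary": each hard-to-measure quantity on the faces
# side maps to an easy goblet measurement and a translation rule.
duality_dictionary = {
    "nose length": ("goblet neck circumference", nose_length_from_neck),
}

# Measure the easy side, then translate to the hard side.
easy_measurement, translate = duality_dictionary["nose length"]
nose = translate(31.4)  # a 31.4 cm neck gives noses roughly 10 cm long
```

The real dictionaries of physics map entire calculations, not single numbers, but the structure is the same: whichever side is easier to measure, the dictionary carries the answer across.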

As Why String Theory points out, one of the most prominent dualities is called AdS/CFT, and it relates N=4 super Yang-Mills (a Conformal Field Theory, or CFT) to string theory in something called Anti-de Sitter (AdS) space (tricky to describe, but essentially a world in which space is warped like a hyperbola). Another duality relates N=4 super Yang-Mills Feynman diagrams with n particles coming in from outside to diagrams with an n-sided shape and particles randomly coming in from the edges of the shape (these latter diagrams are called Wilson loops). In general N=4 super Yang-Mills is involved in many, many dualities, which is a big part of why it’s so dang cool.

Shout-Out for a Fellow Blogger

This is a blog about explaining science. Science is for everybody!

In particular, science is not just for geeks/those into geek culture. Nevertheless, I’m willing to bet that a substantial fraction of you are into something nerdy, whether sci-fi, fantasy, or one of the many genres and subgenres that have sprung up in the crazy genre jungles of the internet.

As such, some of you may be aware of Geek and Sundry, the Felicia Day-headed geek media mini-empire. They’re adding a set of new Vloggers (like bloggers but with video), and they’re running a contest to determine their lineup. And of these Vloggers, you should definitely vote for Kiri Callaghan.

I know Kiri as a kickass director from way back when I used to do community theatre stuff. These days, she’s running an extra-mini geek media empire of her own, centered around her blog. That blog somehow manages to update every weekday (and used to update every day), which, as someone who updates once a week, I can tell you is physically impossible, especially while also holding down a real job, which she apparently does. She almost certainly owns a Time Turner or something. So yeah, very impressive, and the high quality nerd stuff attached (check out some of her parody songs, in particular I Dreamed a Dream of Firefly) adds to the picture.

So for those in the audience who are into this sort of thing, vote for her! Comment on the video (apparently the scoring for this stage is based on interaction with commenters)! Join the Facebook group to keep tabs on the competition!

You get paid to learn. How bad can that be?

In my “who am I” post, I describe being a grad student as like being an apprentice. I’d like to elaborate on that.

Ph.D. programs in the sciences are different at every school, but they have a few basic features. Generally you enter them with a bachelor’s degree from another university. The program lasts for somewhere between four and six years, longer for particularly unfortunate cases. Sometimes you get a Master’s degree after the first two years, sometimes you don’t, but you don’t usually have to go to another school to get one. Generally the first two years mostly involve taking courses while the later years are mostly research, but this can vary as well. And in general, once you’re in the program, you get paid: either as a Teaching Assistant, in which case you help grade papers, lead lab sections, and sometimes give lectures, or as a Research Assistant, in which you are paid to do research.

This last point is occasionally confusing to people. If a Ph.D. student learns by doing research, then why are they also paid to do research? That sounds like not just getting your education for free, but being paid for it, which sounds at the very least like a very good deal.

There are two ways to think about the situation. One, as I mentioned in my “who am I” post, is as an apprenticeship. An apprentice is expected to learn on the job, and provided they learn enough they are eventually certified to work on their own. Despite this, an apprenticeship is still very much a job. An apprentice is subservient to their master, and can generally be counted upon to work on the master’s projects and help the master in their job. In much the same way, a Ph.D. student is not certified to work on their own until they graduate from the program and obtain their Ph.D. In the meantime they are subservient to their advisor, and they have to take their advisor’s desires into account when choosing research projects. In general, most of a grad student’s research projects will be part of their advisor’s research in one way or another, furthering their advisor’s goals. Beyond the research itself, grad students will often have other duties, depending on the nature of their advisor’s work, especially if their advisor has a lab with complicated equipment that needs to be maintained.

The other thing to realize is that grad students are, ostensibly, part-time workers. The university pays me for 20 hours a week of work. The thing is, though, I don’t just work part-time. I work full-time. I also work at home, on the weekends…whenever I can make progress on my research (and I’m not doing some side project like this blog or taking a needed sanity break), I work. So if I work 40 hours a week and am paid for 20, that means I am effectively spending half my income on education.

Not so free, is it?

It’s not as if any of us could just work less and take on another part-time job, either. Apart from the fact that many grad students are international students on visas that don’t allow them to get other jobs, it’s research itself (keeping up, making progress, working towards graduating) that takes up so much of our time. To get any education out of the process at all, we have to be involved as much as possible. So we are, inevitably, paying for our education. And hopefully, we’re getting something out of it.

Blackboards

As a college student, I already knew that theoretical physicists weren’t like how they were portrayed in movies. They didn’t wear lab coats, or have universally frizzy, unkempt white hair. I knew they didn’t have labs, or plot to take over the world. And I was pretty sure that they didn’t constantly use blackboards.

After all, blackboards are a teaching tool. They’re nice for getting equations up so that the guy way in the back can see them. But if you were actually doing a real calculation, surely you’d prefer paper, or a computer, or some other method that doesn’t involve an unkempt scrawl and a heap of loose white dust all over your clothing.

Right?

Right?

Over the last few years I’ve come to appreciate the value of blackboards. Blackboards actually can be used for calculations. You don’t want to use them all the time, but there are times when it’s useful to have a lot of room on a page, to be able to make notes and structure the board around concepts. More importantly, though, there is a third function that I didn’t even consider back in college. Between calculation and teaching, there is collaboration.

Go to a physics or math department, and you’ll find blackboards on the walls. You’ll find them not just in classrooms, but in offices, and occasionally in corridors. Go to a high-class physics location like the Perimeter Institute or the Simons Center, and they’ll brag to you about how many blackboards they have strewn around their common areas.

The purpose of these blackboards is to facilitate conversation. If you want to explain your work to someone else and you aren’t using a blog post, you need space to write where you can both see what you’re doing. Blackboards are ideal for that sort of conversation, and as such are essential for collaboration and communication among scientists.

What about whiteboards? Well, whiteboards are just evil, obviously.

Hawking vs. Witten: A Primer

Have you seen the episode of Star Trek where Data plays poker with Stephen Hawking? How about the times he appeared on Futurama or The Simpsons? Or the absurd number of times he has come up in one way or another on The Big Bang Theory?

Stephen Hawking is probably the most recognizable theoretical physicist to laymen. Wheelchair-bound and speaking through a voice synthesizer, Hawking presents a very distinct image, while his work on black holes and the big bang, along with his popular treatments of science in books like A Brief History of Time, has made him synonymous in the public’s mind with genius.

He is not, however, the most recognizable theoretical physicist when talking to physicists. If Sheldon from The Big Bang Theory were a real string theorist he wouldn’t be obsessed with Hawking. He might, however, be obsessed with Edward Witten.

Edward Witten is tall and has an awkwardly high voice (for a sample, listen to the clip here). He’s also smart, smart enough to dabble in basically every subfield of theoretical physics and manage to make important contributions while doing so. He has a knack for digging up ideas from old papers and dredging out the solution to current questions of interest.

And far more than Hawking, he represents a clear target for parody, at least when that parody is crafted by physicists and mathematicians. Abstruse Goose has a nice take on his role in theoretical physics, while his collaboration with another physicist named Seiberg on what came to be known as Seiberg-Witten theory gave rise to the cyber-Witten pun.

If you would look into the mouth of physics-parody madness, let this link be your guide…

So why hasn’t this guy appeared on Futurama? (After all, his dog does!)

Witten is famous among theorists, but he hasn’t done as much as Hawking to endear himself to the general public. He hasn’t written popular science books, and he almost never gives public talks. So when a well-researched show like The Big Bang Theory wants to mention a famous physicist, they go to Hawking, not to Witten, because people already know who Hawking is. And unless Witten starts interfacing more with the public (or blog posts like this become more common), that’s not about to change.

Perimeter and Patronage

I’m visiting the Perimeter Institute this week. For the non-physicists in the audience, Perimeter is a very prestigious institute of theoretical physics, founded by the founder of BlackBerry. It’s quite swanky. Some first impressions:

  • This occurred to me several times: this place is what the Simons Center wants to be when it grows up.
  • You’d think that the building is impossible to navigate because it was designed by a theoretical physicist, but Freddy Cachazo assured us that he actually had to get the architect to tone down the impossibly ridiculous architecture. Looks like the only person crazier than a physicist is an artist.
  • Having table service at an institute café feels very swanky at first, but it’s actually a lot less practical than cafeteria-style dining. I think the Simons Center Café has it right on this one, even if they don’t quite understand the concept of hurricane relief (don’t have a link for that joke, but I can explain if you’re curious).
  • Perimeter has some government money, but much of its funding comes from private companies and foundations, particularly Research in Motion (or RIM, now BlackBerry). Incidentally, I’m told that PeRIMeter is supposed to be a reference to RIM.

What interests me is that you don’t see this sort of thing (private support) very often in other fields. Private donors will fund efforts to solve some real-world problem, like autism or income inequality. They rarely fund basic research*. When they do fund basic research, it’s usually at a particular university. Something like Perimeter, a private institute for basic research, is rather unusual. Perimeter itself describes its motivation as something akin to a long-range strategic investment, but I think this also ties back to the concept of patronage.

Like art, physics has a history of being a fashionable thing for wealthy patrons to support, usually when the research topic is in line with their wider interests. Newton, for example, re-cast his research in terms of its implications for an understanding of the tides to interest the nautically-minded King James II, despite the fact that he couldn’t predict the tides any better than anyone else in his day. Much like supporting art, supporting physics can allow someone’s name to linger on through history, while not running a risk of competing with others’ business interests like research in biology or chemistry might.

A man who liked his sailors

*basic research is a term scientists use to refer to research that isn’t done with a particular application in mind. In terms of theoretical physics, this often means theories that aren’t “true”.