Author Archives: 4gravitons

In Defense of Lord Kelvin, Michelson, and the Physics of Decimals

William Thomson, Lord Kelvin, was a towering genius of 19th century physics. He is often quoted as saying,

There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.

[Image: lord_kelvin_photograph]

Certainly sounds like something I would say!

As it happens, he never actually said this. It’s a paraphrase of a quote from Albert Michelson, of the Michelson-Morley Experiment:

While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals.

[Image: albert_abraham_michelson2]

Now that’s more like it!

In hindsight, this quote looks pretty silly. When Michelson said that “it seems probable that most of the grand underlying principles have been firmly established” he was leaving out special relativity, general relativity, and quantum mechanics. From our perspective, the grandest underlying principles had yet to be discovered!

And yet, I think we should give Michelson some slack.

Someone asked me on Twitter recently what I would choose if given the opportunity to unravel one of the secrets of the universe. At the time, I went for the wishing-for-more-wishes answer: I’d ask for a procedure to discover all of the other secrets.

I was cheating, to some extent. But I do think that the biggest and most important mystery isn’t black holes or the big bang, isn’t asking what will replace space-time or what determines the constants in the Standard Model. The most critical, most important question in physics, rather, is to find the consequences of the principles we actually know!

We know our world is described fairly well by quantum field theory. We’ve tested it, not just to the sixth decimal place, but to the tenth. And while we suspect it’s not the full story, it should still describe the vast majority of our everyday world.

If we knew not just the underlying principles, but the full consequences of quantum field theory, we’d understand almost everything we care about. But we don’t. Instead, we’re forced to calculate with approximations. When those approximations break down, we fall back on experiment, trying to propose models that describe the data without precisely explaining it. This is true even for something as “simple” as the distribution of quarks inside a proton. Once you start trying to describe materials, or chemistry or biology, all bets are off.

This is what the vast majority of physics is about. Even more, it’s what the vast majority of science is about. And that was just as true in Michelson’s day. Quantum mechanics and relativity were revelations…but there are still large corners of physics in which neither matters very much, and even larger parts of the more nebulous “physical science”.

New fundamental principles get a lot of press, but you shouldn’t discount the physics of “the sixth place of decimals”. Most of the big mysteries don’t ask us to challenge our fundamental paradigm: rather, they’re challenges to calculate or measure better, to get more precision out of rules we already know. If a genie gave me the solution to any of physics’ mysteries I’d choose to understand the full consequences of quantum field theory, or even of the physics of Michelson’s day, long before I’d look for the answer to a trendy question like quantum gravity.

Things You Don’t Know about the Power of the Dark Side

Last Wednesday, Katherine Freese gave a Public Lecture at Perimeter on the topic of Dark Matter and Dark Energy. The talk should be on Perimeter’s YouTube page by the time this post is up.

Answering Twitter questions during the talk made me realize that there’s a lot the average person finds confusing about Dark Matter and Dark Energy. Freese addressed much of this pretty well in her talk, but I felt like there was room for improvement. Rather than try to tackle it myself, I decided to interview an expert on the Dark Side of the universe.

[Image: darth_vader]

Twitter doesn’t know the power of the dark side!

Lord Vader, some people have a hard time distinguishing Dark Matter and Dark Energy. What do you have to say to them?

Fools! Light side astronomers call “dark” that which they cannot observe and cannot understand. “Fear” and “anger” are different heights of emotion, but to the Jedi they are only the path to the Dark Side. Dark Energy and Dark Matter are much the same: both distinct, both essential to the universe, and both “dark” to the telescopes of the light.

Let’s start with Dark Matter. Is it really matter?

You ask an empty question. “Matter” has been defined in many ways. When we on the Dark Side refer to Dark Matter, we merely mean to state that it behaves much like the matter you know: it is drawn to and fro by gravity, sloshing about.

It is distinct from your ordinary matter in that two of the forces of nature, the strong nuclear force and electromagnetism, do not concern it. Ordinary matter is bound together in the nuclei of atoms by the strong force, or woven into atoms and molecules by electromagnetism. This makes it subject to all manner of messy collisions.

Dark Matter, in contrast, is pure, partaking neither of nuclear nor chemical reactions. It passes through each of us with no notice. Only the weak nuclear force and gravity affect it. The latter has brought it slowly into clumps and threads through the universe, each one a vast nest for groupings of stars. Truly, Dark Matter surrounds us, penetrates us, and binds the galaxy together.

Could Dark Matter be something we’re more familiar with, like neutrinos or black holes? What about a modification of gravity?

Many wondered as much, when the study of the Dark Side was young. They were wrong.

The matter you are accustomed to composes merely a twentieth of the universe, while Dark Matter is more than a quarter. There is simply not enough of these minor contributions, neutrinos and black holes, to account for the vast darkness that surrounds the galaxy, and with each astronomer’s investigation we grow more assured.

As for modifying gravity, do you seek to modify a fundamental Force?

If so, you should be wary. Forces, by their nature, are accompanied by particles, and gravity is no exception. Take care that your tinkering does not result in a new sort of particle. If so, you may be unknowingly walking the path of the Dark Side, for your modification may be just another form of Dark Matter.

What sort of things could Dark Matter be? Can Dark Matter decay into ordinary matter? Could there be anti-Dark Matter?

As of yet, your scientists are still baffled by the nature of Dark Matter. Still, there are limits. Since only rare events could produce it from ordinary matter, the universe’s supply of Dark Matter must be ancient, dating back to the dawn of the cosmos. In that case, it must decay only slowly, if at all. Similarly, if Dark Matter had antimatter forms then its interactions must be so weak that it has not simply annihilated with its antimatter half across the universe. So while either is possible, it may be simpler for your theorists if Dark Matter did not decay, and was its own antimatter counterpart. On the other hand, if Dark Matter did undergo such reactions, your kind may one day be able to detect it.

Of course, as a master of the Dark Side I know the true nature of Dark Matter. However, I could only impart it to a loyal apprentice…

Yeah, I think I’ll pass on that. They say you can only get a job in academia when someone dies, but unlike the Sith they don’t mean it literally.

Let’s move on to Dark Energy. What can you tell us about it?

Dark “Energy”, like Dark Matter, is named for what people on your Earth cannot comprehend. Nothing, not even Dark Energy, is “made of energy”. Dark Energy is “energy” merely because it behaves unlike matter.

Matter, even Dark Matter, is drawn together by the force of gravity. Under its yoke, the universe would slow down in its expansion and eventually collapse into a crunch, like the throat of an incompetent officer.

However, the universe is not collapsing, but accelerating, galaxies torn away from each other by a force that must compose more than two thirds of the universe. It is rather like the Yuuzhan Vong, a mysterious force from outside the galaxy that scouts persistently under- or over-estimate.

Umm, I’m pretty sure the Yuuzhan Vong don’t exist anymore, since Disney got rid of the Expanded Universe.

That perfidious Mouse!

Well folks, Vader is now on a rampage of revenge in the Disney offices, so I guess we’ll have to end the interview. Tune in next week, and until then, may the Force be with you!

Symbology 101

I work with functions called polylogarithms. There’s a whole field of techniques out there for manipulating these functions, and for better or worse people often refer to them as symbology.

My plan for this post is to give a general feel for how symbology works: what we know how to do, and why. It’s going to be a lot more technical than my usual posts, so the lay reader may want to skip this one. At the same time, I’m not planning to go through anything rigorously. If you want that sort of thing there are plenty of good papers on the subject; here’s one of mine that covers the basics. Rather, I’m going to draw what I hope is an illuminating sketch of what it is we do.

Still here? Let’s start with an easy question.

What’s a log?

[Image: balch_park_hollow_log]

Ok, besides one of these.

For our purposes, a log is what happens when you integrate dx/x.

\log x=\int \frac{dx}{x}

 Schematically, a polylog is then what happens when you iterate these integrations:

G=\int \frac{dx_1}{x_1} \int \frac{dx_2}{x_2}\ldots

The simplest thing you can get from this is of course just a product of logs. The next most simple thing is one of the classical polylogarithms. But in general, this is a much wider class of functions, known as multiple, or Goncharov, polylogarithms.
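
To make this concrete, here’s a quick numerical sketch (my addition, not part of the original discussion): the classical dilogarithm is one more dx/x-style integration applied to a log, and you can check that picture against its power series, \mathrm{Li}_2(x)=\sum_n x^n/n^2.

```python
import math

def li2_series(x, terms=200):
    # classical dilogarithm via its power series: Li2(x) = sum over n of x^n / n^2
    return sum(x**n / n**2 for n in range(1, terms + 1))

def li2_integral(x, n=100_000):
    # the same function as an iterated integral: Li2(x) = -int_0^x log(1 - t) dt / t,
    # i.e. one more dt/t integration applied to a log
    h = x / n
    total = 0.0
    for i in range(1, n + 1):
        t = (i - 0.5) * h  # midpoint rule, which sidesteps the t = 0 endpoint
        total += -math.log(1 - t) / t
    return total * h

# both should agree: Li2(1/2) = pi^2/12 - (log 2)^2/2 ≈ 0.58224
print(li2_series(0.5), li2_integral(0.5))
```

The agreement of the two numbers is exactly the statement that the iterated-integral picture and the series definition describe the same weight-two function.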

The number of integrations is the transcendental weight. Naively, you’d expect an L-loop Feynman integral in four dimensions to give you something with transcendental weight 4L. In practice, that’s not the case: some of the momentum integrations end up just giving delta functions, so in the end an L-loop amplitude has transcendental weight 2L.

In most theories, you get a mix of functions: some with weight 2L, some with weight 2L-1, etc., all the way down to rational functions. N=4 super Yang-Mills is special: there, everything is at the maximum transcendental weight. In either case, though, being able to manipulate transcendental functions is very useful, and the symbol is one of the simplest ways to do so.

The core idea of the symbol is pretty easy to state, though it takes a bit more technology to state it rigorously. Essentially, we take our schematic polylog from above, and just list the logs:

\mathcal{S}(G)=\ldots\otimes x_2\otimes x_1

(Here I have reversed the order, to agree with standard conventions.)

What does that do? Well, it reminds us that these aren’t just some weird functions we don’t understand: they’re collections of logs, and we can treat them like collections of logs.

In particular, we can do this with logs,

\log (x y)=\log x+\log y

so we can do it with symbols as well:

x_1\otimes x y\otimes x_3=x_1\otimes x \otimes x_3+x_1\otimes y\otimes x_3

Similarly, we can always get rid of unwelcome exponents, like so:

\log (x^n)=n\log x

x_1\otimes x^n\otimes x_3=n( x_1\otimes x \otimes x_3)
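
These two rules are enough to mechanically expand any symbol whose entries are products of powers of letters. Here’s a toy implementation (a sketch of my own, with a made-up representation: each entry is a dict mapping letters to integer exponents):

```python
from itertools import product

def expand_symbol(term):
    """Expand one symbol term into single-letter entries.

    Each entry is a dict {letter: exponent}. Applying log(xy) = log x + log y
    and log(x^n) = n log x entry by entry, the term becomes a sum of terms
    whose entries are single letters. Returns {tuple_of_letters: coefficient};
    negative exponents (denominators of rational functions) work too.
    """
    result = {}
    # pick one letter from each entry; the coefficient is the product of exponents
    for choice in product(*[list(entry.items()) for entry in term]):
        letters = tuple(letter for letter, _ in choice)
        coeff = 1
        for _, exponent in choice:
            coeff *= exponent
        result[letters] = result.get(letters, 0) + coeff
    return result

# x1 ⊗ xy ⊗ x3  =  x1 ⊗ x ⊗ x3  +  x1 ⊗ y ⊗ x3
print(expand_symbol([{"x1": 1}, {"x": 1, "y": 1}, {"x3": 1}]))
# x1 ⊗ x^n ⊗ x3  =  n (x1 ⊗ x ⊗ x3), here with n = 3
print(expand_symbol([{"x1": 1}, {"x": 3}, {"x3": 1}]))
```
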

This means that, in general, we can always factorize any polynomial or rational function that appears in a symbol. As such, we often express symbols in terms of some fixed symbol alphabet, a basis of rational functions that can be multiplied to get any symbol entry in the function we’re working with. In general, it’s a lot easier to calculate amplitudes when we know the symbol alphabet beforehand. For six-particle amplitudes in N=4 super Yang-Mills, the symbol alphabet contains just nine “letters”, which makes it particularly easy to work with.

That’s arguably the core of symbol methods. It’s how Spradlin and Volovich managed to get a seventeen-page expression down to two lines. Express a symbol in the right alphabet, and it tends to look a lot simpler. And once you know the right alphabet, it’s pretty straightforward to build an ansatz with it and constrain it until you get a candidate function for whatever you’re interested in.

There’s more technical detail I could give here: how to tell whether a symbol actually corresponds to a function, how to take limits and do series expansions and take derivatives and discontinuities…but I’m not sure whether anyone reading this would be interested.

For now, I’ll just mention that the symbol is only part of the story. In particular, it’s a special case of something called a coproduct, which breaks up polylogarithms into various chunks. Break them down fully until each chunk is just an individual log, and you get the symbol. Break them into larger chunks, and you get other components of the coproduct, consisting of tensor products of polylogarithms with lower transcendental weight. These larger chunks mean we can capture as much of a function’s behavior as we like, while still taking advantage of these sorts of tricks. While in older papers you might have seen mention of “beyond-the-symbol” terms that the symbol couldn’t capture, this doesn’t tend to be a problem these days.

You Go, LIGO!

Well folks, they did it. LIGO has detected gravitational waves!

FAQ:

What’s a gravitational wave?

Gravitational waves are ripples in space and time. As Einstein figured out a century ago, masses bend space and time, which causes gravity. Wiggle masses in the right way and you get a gravitational wave, like a ripple on a pond.

Ok, but what is actually rippling? It’s some stuff, right? Dust or something?

In a word, no. Not everything has to be “stuff”. Energy isn’t “stuff”, and space-time isn’t either, but space-time is really what vibrates when a gravitational wave passes by. Distances themselves are changing, in a way that is described by the same math and physics as a ripple in a pond.

What’s LIGO?

LIGO is the Laser Interferometer Gravitational-Wave Observatory. In simple terms, it’s an observatory (or rather, a pair of observatories in Washington and Louisiana) that can detect gravitational waves. It does this using beams of laser light four kilometers long. Gravitational waves change the length of these beams when they pass through, causing small but measurable changes in the laser light observed.
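
To get a feel for how small “small but measurable” is, here’s a back-of-the-envelope calculation (the strain value is a rough illustrative number of my own, not LIGO’s official figure):

```python
# A passing gravitational wave changes lengths by a fractional amount, the strain h.
arm_length = 4_000.0   # LIGO's arms are four kilometers, in meters
strain = 1e-21         # rough strain from a strong astrophysical source (illustrative)

delta_length = strain * arm_length
print(delta_length)    # about 4e-18 m, far smaller than a proton (~1e-15 m)
```

That’s the scale of length change the interferometers have to pick out of the noise.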

Are there other gravitational wave observatories?

Not currently in operation. LIGO originally ran from 2002 to 2010, and during that time there were other gravitational wave observatories in operation as well (VIRGO in Italy and GEO600 in Germany). None of them (including LIGO) detected anything, so LIGO and VIRGO were shut down to be upgraded into more sensitive, advanced versions. Advanced LIGO went into operation first, and made the detection. VIRGO is still under construction, as is KAGRA, a detector in Japan. There are also plans for a detector in India.

Other sorts of experiments can detect gravitational waves on different scales. eLISA is a planned space-based gravitational wave observatory, while Pulsar Timing Arrays could use distant neutron stars as an impromptu detector.

What did they detect? What could they detect?

The gravitational waves that LIGO detected came from a pair of black holes merging. In general, gravitational waves come from a pair of masses, or one mass with an uneven and rapidly changing shape. As such, LIGO and future detectors might be able to observe binary stars, supernovas, weird-shaped neutron stars, colliding galaxies…pretty much any astrophysical event involving large things moving comparatively fast.

What does this say about string theory?

Basically nothing. There are gravitational waves in string theory, sure (and they play a fairly important role), but they were already there in Einstein’s general relativity. As far as I’m aware, no-one at this point seriously thought that gravitational waves didn’t exist. Nothing that LIGO observed has any bearing on the quantum properties of gravity.

But what about cosmic strings? They mentioned those in the announcement!

Cosmic strings, despite the name, aren’t a unique prediction of string theory. They’re big, string-shaped wrinkles in space and time, possible results of the rapid expansion of space during cosmic inflation. You can think of them a bit like the cracks that form in an over-inflated balloon right before it bursts.

Cosmic strings, if they exist, should produce gravitational waves. This means that in the future we may have concrete evidence of whether or not they exist. This wouldn’t say all that much about string theory: while string theory does have its own explanations for cosmic strings, it’s unclear whether it actually has unique predictions about them. It would say a lot about cosmic inflation, though, and would presumably help distinguish it from proposed alternatives. So keep your eyes open: in the next few years, gravitational wave observatories may well have something important to say about the overall history of the universe.

Why is this discovery important, though? If we already knew that gravitational waves existed, why does discovering them matter?

LIGO didn’t discover that gravitational waves exist. LIGO discovered that we can detect them.

The existence of gravitational waves is no discovery. But the fact that we now have observatories sensitive enough to detect them is huge. It opens up a whole new type of astronomy: we can now observe the universe not just by the light it sheds (and neutrinos), but through a whole new lens. And every time we get another observational tool like this, we notice new things, things we couldn’t have seen without it. It’s the dawn of a new era in astronomy, and LIGO was right to announce it with all the pomp and circumstance they could muster.

 

My impressions from the announcement:

Speaking of pomp and circumstance, I was impressed by just how well put-together LIGO’s announcement was.

As the US presidential election heats up, I’ve seen a few articles about the various candidates’ (well, usually Trump’s) use of the language of political propaganda. The idea is that there are certain visual symbols at political events for which people have strong associations, whether with historical events or specific ideas or the like, and that using these symbols makes propaganda more powerful.

What I haven’t seen is much discussion of a language of scientific propaganda. Still, the overwhelming impression I got from LIGO’s announcement is that it was shaped by a master in the use of such a language. They tapped in to a wide variety of powerful images: from the documentary-style interviews at the beginning, to Weiss’s tweed jacket and handmade demos, to the American flag in the background, that tied LIGO’s result to the history of scientific accomplishment.

Perimeter’s presentations tend to have a slicker look, and my friends at Stony Brook are probably better at avoiding jargon. But neither is quite as good at propaganda, at saying “we are part of history” and doing so without a hitch, as the folks at LIGO have shown themselves to be with this announcement.

I was also fairly impressed that they kept this under wraps for so long. While there were leaks, I don’t think many people had a complete grasp of what was going to be announced until the week before. Somehow, LIGO made sure a collaboration of thousands was able to (mostly) keep their mouths shut!

Beyond the organizational and stylistic notes, my main thought was “What’s next?” They’ve announced the detection of one event. I’ve heard others rattle off estimates that they should be detecting anywhere from one black hole merger per year to a few hundred. Are we going to see more events soon, or should we settle into a long wait? Could they already have detected more, with the evidence buried in their data, to be revealed by careful analysis? (The waves from this black hole merger were clear enough to detect in real time, but more subtle events might not make things so easy!) Should we be seeing more events already, and does not seeing them tell us something important about the universe?

Most of the reason I delayed my post till this week was to see if anyone had an answer to these questions. So far, I haven’t seen one, besides the “one to a few hundred” estimate mentioned. As more people weigh in and more of LIGO’s run is analyzed, it will be interesting to see where that side of the story goes.

Gravitational Waves, and Valentine’s Day Physics Poem 2016

By the time this post goes up, you’ll probably have seen Advanced LIGO’s announcement of the first direct detection of a gravitational wave. We got the news a bit early here at Perimeter, which is why we were able to host a panel discussion right after the announcement.

From what I’ve heard, this is the real deal. They’ve got a beautifully clear signal, and unlike BICEP, they kept this under wraps until they could get it looked at by non-LIGO physicists. While I think peer review gets harped on a little too much in these sorts of contexts, in this case their paper getting through peer review is a good sign that they’re really seeing something.

[Image: IMG_20160211_104600]

Pictured: a very clear, very specific something

I’ll have more to say next week: explanations of gravitational waves and LIGO for my non-expert audience, and impressions from the press release and PI’s panel discussion for those who are interested. For now, though, I’ll wait until the dust (metaphorical this time) settles. If you’re hungry for immediate coverage, I’m sure that half the blogs on my blogroll have posts up, or will in the next few days.

In the meantime, since Valentine’s Day is in two days, I’ll continue this blog’s tradition and post one of my old physics poems.


 

When a sophisticated string theorist seeks an interaction

He does not go round and round in loops

As a young man would.

 

Instead he turns to topology.

 

Mature, the string theorist knows

That what happens on

(And between)

The (world) sheets,

Is universal.

 

That the process is the same

No matter which points

Which interactions

One chooses.

 

Only the shapes of things matter.

 

Only the topology.

 

For such a man there is no need.

To obsess

To devote

To choose

One point or another.

The interaction is the same.

 

The world, though

Is not an exercise in theory.

Is not a mere possibility.

And if a theorist would compute

An experiment

A probability

 

He must pick and choose

Obsess and devote

Label his interactions with zeroes and infinities

 

Because there is more to life

Than just the shapes of things

Than just topology.

 

The Universe, Astronomy’s Lab

There’s a theme in a certain kind of science fiction.

Not in the type with laser swords and space elves, and not in cyberpunk dystopias…but in the kind that imagines what humanity might do if it really got the chance to explore its own capabilities. In a word, the theme is scale.

We start out with a Dyson sphere, built around our own sun to trap its energy. As time goes on, the projects get larger and larger, involving multiple stars and, eventually, reshaping the galaxy.

There’s an expectation, though, that this sort of thing is far in our future. Treating the galaxy as a resource, as a machine, seems well beyond our present capabilities.

On Wednesday, Victoria Kaspi gave a public lecture at Perimeter about neutron stars. At the very end of the lecture, she talked a bit about something she covered in more detail during her colloquium earlier that day, called a Pulsar Timing Array.

Neutron stars are one of the ways a star can end its life. Too big to burn out quietly and form a white dwarf, and too small to collapse all the way into a black hole, the progenitors of neutron stars have so much gravity that they force protons and electrons to merge, so that the star ends up as a giant ball of neutrons, like an enormous atomic nucleus.

Many of these neutron stars have strong magnetic fields. A good number of them are what are called pulsars: stars that emit powerful pulses of electromagnetic radiation, often at regular intervals. Some of these pulsars are very regular indeed, rivaling atomic clocks in their precision. The idea of a Pulsar Timing Array is to exploit this regularity by using these pulsars as a gravitational wave telescope.

Gravitational waves are ripples in space-time. They were predicted by Einstein’s theory, and we’ve observed their indirect effects, but so far we have yet to detect them directly. Attempts have been made: vast detectors like LIGO have been built that bounce light across long “arms”, trying to detect minute disruptions in space. The problem is, it’s hard to distinguish these disruptions from ordinary vibrations in the area, like minor earthquakes. Size also limits the effectiveness of these detectors, with larger detectors able to see the waves from bigger astronomical events.

Pulsar Timing Arrays sidestep both of those problems. Instead of trying to build a detector on the ground like LIGO (or even in space like LISA), they use the pulsars themselves as the “arms” of a galaxy-sized detector. Because these pulsars emit light so regularly, small disruptions can be a sign that a gravitational wave is passing by the earth and disrupting the signal. Because they are spread roughly evenly across the galaxy, we can correlate signals across multiple pulsars, to make sure we’re really seeing gravitational waves. And because they’re so far apart, we can use them to detect waves from some of the biggest astronomical events, like galaxies colliding.
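
The correlation step can be illustrated with a toy simulation (the numbers here are entirely made up, just to show the logic): give every pulsar the same weak “wave” buried under its own independent noise, and the shared signal survives a cross-correlation even though it’s invisible in either series alone.

```python
import math
import random

random.seed(0)
N = 5000  # number of timing measurements per pulsar

# a common signal seen by every pulsar, buried in per-pulsar noise
common = [0.5 * math.sin(2 * math.pi * i / 500) for i in range(N)]

def timing_residuals():
    # shared signal plus this pulsar's own (independent) noise
    return [c + random.gauss(0, 1) for c in common]

def corr(a, b):
    # ordinary Pearson correlation coefficient
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

pulsar1, pulsar2 = timing_residuals(), timing_residuals()
noise1 = [random.gauss(0, 1) for _ in range(N)]
noise2 = [random.gauss(0, 1) for _ in range(N)]

# the shared signal shows up as a clearly nonzero correlation;
# pure independent noise correlates near zero
print(corr(pulsar1, pulsar2), corr(noise1, noise2))
```

Real Pulsar Timing Arrays do something far more sophisticated (they look for a particular angular pattern of correlations across the sky), but the basic trick of digging a common signal out of many noisy clocks is the same.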

[Image: ptas]

Earth very much not to scale.

Longtime readers know that I find astronomy really inspiring, but Kaspi’s talk woke me up to a completely different aspect, that of our mastery of scale.

Want to dream of a future where we use the solar system and the galaxy as resources? We’re there, and we’ve been there for a long time. We’re a civilization that used nearby planets to bootstrap up the basic laws of motion before we even had light bulbs. We’ve honed our understanding of space and time using distant stars. And now, we’re using an array of city-sized balls of neutronium, distributed across the galaxy, as a telescope. If that’s not the stuff of science fiction, I don’t know what is.


 

By the way, speaking of webcast lectures, I’m going to be a guest on the Alda Center’s Science Unplugged show next week. Tune in if you want to hear about the sort of stuff I work on, using string theory as a tool to develop shortcuts for particle physics calculations.

PSI Winter School

I’m at the Perimeter Scholars International Winter School this week. Perimeter Scholars International is Perimeter’s one-of-a-kind master’s program, which jams the basics of theoretical physics into a one-year curriculum. We’ve got students from all over the world, including plenty of places that don’t get any snow at all. As such, it was decided that the students should spend a week somewhere with even more snow than Waterloo: Muskoka, Ontario.

[Image: IMG_20160127_152710]

A place that occasionally manages to be this photogenic

This isn’t really a break for them, though, which is where I come in. The students have been organized into groups, and each group is working on a project. My group’s project is related to the work of integrability master Pedro Vieira. He and his collaborators came up with a way to calculate scattering amplitudes in N=4 super Yang-Mills without the usual process of loop-by-loop approximations. However, this method comes at a price: a new approximation, this time to low energy. This approximation is step-by-step, like loops, but in a different direction. It’s called the Pentagon Operator Product Expansion, or POPE for short.

[Image: IMG_20160127_123210]

Approach the POPE, and receive a blessing

What we’re trying to do is go back and add up all of the step-by-step terms in the approximation, to see if we can match to the old expansion in loops. One of Pedro’s students recently managed to do this for the first approximation (“tree” diagrams), and the group here at the Winter School is trying to use her (still unpublished) work as a jumping-off point to get to the first loop. Time will tell whether we’ll succeed…but we’re making progress, and the students are learning a lot.

Trust Your Notation as Far as You Can Prove It

Calculus contains one of the most famous examples of physicists doing something silly that irritates mathematicians. See, there are two different ways to write down a derivative, both dating back to the invention of calculus: Newton’s method, and Leibniz’s method.

Newton cared a lot about rigor (enough that he actually published his major physics results without calculus because he didn’t think calculus was rigorous enough, despite inventing it himself). His notation is direct and to the point: to take the derivative of a quantity x (with respect to time, in his language of fluxions), you just put a dot over it,

\dot{x}

Leibniz cared a lot less about rigor, and a lot more about the scientific community. He wanted his notation to be useful and intuitive, to be the sort of thing that people would pick up and run with. To write a derivative in Leibniz notation, you write,

\frac{df}{dx}

This looks like a fraction. It’s really, really tempting to treat it like a fraction. And that’s the point: the notation is telling you that treating it like a fraction is often the right thing to do. In particular, you can do something like this,

y=\frac{df}{dx}

y dx=df

\int y dx=\int df

and what you did actually makes a certain amount of sense.
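
Here’s a quick numerical sanity check of that manipulation (a sketch of mine, with an arbitrary made-up function): in definite-integral form, integrating df/dx from a to b really does hand back f(b) − f(a), which is the fundamental theorem of calculus in fraction-flavored clothing.

```python
def derivative(f, x, h=1e-6):
    # central finite difference: df/dx at the point x
    return (f(x + h) - f(x - h)) / (2 * h)

def integrate(g, a, b, n=10_000):
    # simple trapezoid rule for the definite integral of g from a to b
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        total += g(a + i * h)
    return total * h

f = lambda x: x**3 - 2 * x  # an arbitrary test function

# "int y dx = int df" in definite form: integral of df/dx over [1, 2] is f(2) - f(1)
left = integrate(lambda x: derivative(f, x), 1.0, 2.0)
right = f(2.0) - f(1.0)
print(left, right)  # both ≈ 5.0
```
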

The tricky thing here is that it doesn’t always make sense. You can do these sorts of tricks up to a point, but you need to be aware that they really are just tricks. Take the notation too seriously, and you end up doing things you aren’t really allowed to do. It’s always important to stay aware of what you’re really doing.

There are a lot of examples of this kind of thing in physics. In quantum field theory, we use path integrals. These aren’t really integrals…but a lot of the time, we can treat them as such. Operators in quantum mechanics can be treated like numbers and multiplied…up to a point. A friend of mine was recently getting confused by operator product expansions, where similar issues crop up.

I’ve found two ways to clear up this kind of confusion. One is to unpack your notation: go back to the definitions, and make sure that what you’re doing really makes sense. This can be tedious, but you can be confident that you’re getting the right answer.

The other option is to stop treating your notation like the familiar thing it resembles, and start treating it like uncharted territory. You’re using this sort of notation to remind you of certain operations you can do, certain rules you need to follow. If you take those rules as basic, you can think about what you’re doing in terms of axioms rather than in terms of the suggestions made by your notation. Follow the right axioms, and you’ll stay within the bounds of what you’re actually allowed to do.

Either way, familiar-looking notation can help your intuition, making calculations more fluid. Just don’t trust it farther than you can prove it.

Amplitudes for the New Year

Ah, the new year, time of new year’s resolutions. While some people resolve to go to the gym or take up online dating, physicists resolve to finally get that paper out.

At least, that’s the impression I get, given the number of papers posted to arXiv in the last month. Since a lot of them were amplitudes-related, I figured I’d go over some highlights.

Every once in a while people ask me for the latest news on the amplituhedron. While I don’t know what Nima is working on right now, I can point to what others have been doing. Zvi Bern, Jaroslav Trnka, and collaborators have continued to make progress towards generalizing the amplituhedron to non-planar amplitudes. Meanwhile, a group in Europe has been working on solving an issue I’ve glossed over to some extent. While the amplituhedron is often described as calculating an amplitude as the volume of a geometrical object, in fact there is a somewhat more indirect procedure involved in going from the geometrical object to the amplitude. It would be much simpler if the amplitude were actually the volume of some (different) geometrical object, and that’s what these folks are working towards. Finally, Daniele Galloni has made progress on solving a technical issue: the amplituhedron gives a mathematical recipe for the amplitude, but it doesn’t tell you how to carry out that recipe, and Galloni provides an algorithm for part of this process.

With this new algorithm, is the amplituhedron finally as efficient as older methods? Typically, the way to show that is to do a calculation with the amplituhedron that wasn’t possible before. It doesn’t look like that’s happening soon though, as Jake Bourjaily and collaborators compute an eight-loop integrand using one of the more successful of the older methods. Their paper provides a good answer to the perennial question, “why more loops?” What they find is that some of the assumptions that people made at lower loops fail to hold at this high loop order, and it becomes increasingly important to keep track of exactly how far your symmetries can take you.

Back when I visited Brown, I talked to folks there about some ongoing work. Now that they’ve published, I can talk about it. A while back, Juan Maldacena resurrected an old technique of Landau’s to solve a problem in AdS/CFT. In that paper, he suggested that Landau’s trick might help prove some of the impressive simplifications in N=4 super Yang-Mills that underlie my work and the work of those at Brown. In their new paper, the Brown group finds that, while useful, Landau’s trick doesn’t seem to fully explain the simplicity they’ve discovered. To get a little partisan, I have to say that this was largely the result I expected, and that it felt a bit condescending for Maldacena to assume that an old trick like that from the Feynman diagram era could really be enough to explain one of the big discoveries of amplitudeology.

There was also a paper by Freddy Cachazo and collaborators on an interesting trick to extend their CHY string to one loop, and one by Bo Feng and collaborators on an intriguing new method called Q-cuts that I will probably say more about in future, but I’ll sign off for now. I’ve got my own new year’s physics resolutions, and I ought to get back to work!

The Higgs Solution

My grandfather is a molecular biologist. Over the holidays I had many opportunities to chat with him, and our conversations often revolved around explaining some aspect of our respective fields. While talking to him, I came up with a chemistry-themed description of the Higgs field, and how it leads to electro-weak symmetry breaking. Very few of you are likely to be chemists, but I think you still might find the metaphor worthwhile.

Picture the Higgs as a mixture of ions, dissolved in water.

In this metaphor, the Higgs field is a sort of “Higgs solution”. Overall, this solution should be uniform: if you have more ions of a certain type in one place than another, over time they will dissolve until they reach a uniform mixture again. In this metaphor, the Higgs particle detected by the LHC is like a brief disturbance in the fluid: by stirring the solution at high energy, we’ve managed to briefly get more of one type of ion in one place than the average concentration.

What determines the average concentration, though?

Essentially, it’s arbitrary. If this were really a chemistry experiment, it would depend on the initial conditions: which ions we put into the mixture in the first place. In physics, quantum mechanics plays a role, randomly selecting one option out of the many possibilities.

 

nile_red_01

Choose wisely

(Note that this metaphor doesn’t explain why there has to be a solution, why the water can’t just be “pure”. A setup that required this would probably be chemically complicated enough to confuse nearly everybody, so I’m leaving that feature out. Just trust that “no ions” isn’t one of our options.)

Up till now, the choice of mixture didn’t matter very much. But different ions interact with other chemicals in different ways, and this has some interesting implications.

Suppose we have a tube filled with our Higgs solution. We want to shoot some substance through the tube, and collect it on the other side. This other substance is going to represent a force.

If our force substance doesn’t react with the ions in our Higgs solution, it will just go through to the other side. If it does react, though, then it will be slowed down, and only some of it will get to the other side, possibly none at all.

You can think of the electro-weak force as a mixture of these sorts of substances. Normally, there is no way to tell the different substances apart. Just like the different Higgs solutions, different parts of the electro-weak force are arbitrary.

However, once we’ve chosen a Higgs solution, things change. Now, different parts of our electro-weak substance will behave differently. The parts that react with the ions in our Higgs solution will slow down, and won’t make it through the tube, while the parts that don’t interact will just flow on through.

We call the part that gets through the tube electromagnetism, and the part that doesn’t the weak nuclear force. Electromagnetism is long-range: its waves (light) can travel great distances. The weak nuclear force is short-range, and doesn’t have an effect outside of the scale of atoms.

The important thing to take away from this is that the division between electromagnetism and the weak nuclear force is totally arbitrary. Taken by themselves, they’re equivalent parts of the same, electro-weak force. It’s only because some of them interact with the Higgs, while others don’t, that we distinguish those parts from each other. If the Higgs solution were a different mixture (if the Higgs field had different charges) then a different part of the electroweak force would be long-range, and a different part would be short-range.

We wouldn’t be able to tell the difference, though. We’d see a long-range force, and a short-range force, and a Higgs field. In the end, our world would be completely the same, just based on a different, arbitrary choice.
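To put a toy number on that arbitrariness, here’s a sketch of my own (the vev, charges, and units are all made up for illustration). The Higgs charges pick a direction, and whatever combination of fields is perpendicular to that direction stays massless:

```python
import numpy as np

# Toy model: two gauge fields mix via a mass matrix generated by the
# Higgs vev. Which combination stays massless (the "long-range" force)
# depends entirely on which direction the Higgs "solution" picks.
v = 1.0                        # hypothetical vev, arbitrary units
charge = np.array([3.0, 4.0])  # hypothetical Higgs charges of the two fields

# Mass matrix ~ v^2 times the outer product of the charges: it has rank
# one, so one eigenvalue is always zero -- one combination of the fields
# never feels the Higgs, no matter which charges we picked.
M = v**2 * np.outer(charge, charge)
masses_sq = np.linalg.eigvalsh(M)   # sorted: [massless, massive]
print(masses_sq)
```

Change the charge vector and the massless combination rotates with it, but there is always exactly one: a long-range force and a short-range force, whichever mixture the “solution” happened to choose.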