Symbology 101

I work with functions called polylogarithms. There’s a whole field of techniques out there for manipulating these functions, and for better or worse people often refer to these techniques as symbology.

My plan for this post is to give a general feel for how symbology works: what we know how to do, and why. It’s going to be a lot more technical than my usual posts, so the lay reader may want to skip this one. At the same time, I’m not planning to go through anything rigorously. If you want that sort of thing, there are plenty of good papers on the subject; here’s one of mine that covers the basics. Rather, I’m going to draw what I hope is an illuminating sketch of what it is we do.

Still here? Let’s start with an easy question.

What’s a log?

[Image: a hollow log]

Ok, besides one of these.

For our purposes, a log is what happens when you integrate dx/x.

\log x=\int \frac{dx}{x}

Schematically, a polylog is then what happens when you iterate these integrations:

G=\int \frac{dx_1}{x_1} \int \frac{dx_2}{x_2}\ldots

The simplest thing you can get from this is of course just a product of logs. The next simplest thing is one of the classical polylogarithms. But in general, this is a much wider class of functions, known as multiple, or Goncharov, polylogarithms.
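
To make this concrete: the dilogarithm, the simplest classical polylogarithm beyond the log itself, is in this picture

\mathrm{Li}_2(z)=-\int_0^z \frac{dt}{t}\log(1-t)

where -\log(1-t) itself comes from an inner integration of ds/(1-s). Here’s a quick numerical check of that iterated-integral form, a sketch of my own using mpmath rather than anything from the symbology literature:

```python
# Check numerically that the dilogarithm is this iterated integral.
# Purely illustrative; mpmath's built-in polylog is the reference.
import mpmath

z = 0.5
value = mpmath.quad(lambda t: -mpmath.log(1 - t) / t, [0, z])
print(value)                 # ~0.5822405264650...
print(mpmath.polylog(2, z))  # agrees
```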

The number of integrations is the transcendental weight. Naively, you’d expect an L-loop Feynman integral in four dimensions to give you something with transcendental weight 4L. In practice, that’s not the case: some of the momentum integrations end up just giving delta functions, so in the end an L-loop amplitude has transcendental weight 2L.

In most theories, you get a mix of functions: some with weight 2L, some with weight 2L-1, etc., all the way down to rational functions. N=4 super Yang-Mills is special: there, everything is at the maximum transcendental weight. In either case, though, being able to manipulate transcendental functions is very useful, and the symbol is one of the simplest ways to do so.

The core idea of the symbol is pretty easy to state, though it takes a bit more technology to state it rigorously. Essentially, we take our schematic polylog from above, and just list the logs:

\mathcal{S}(G)=\ldots\otimes x_2\otimes x_1

(Here I’ve switched the order to agree with standard conventions.)

What does that do? Well, it reminds us that these aren’t just some weird functions we don’t understand: they’re collections of logs, and we can treat them like collections of logs.

In particular, we can do this with logs,

\log (x y)=\log x+\log y

so we can do it with symbols as well:

x_1\otimes x y\otimes x_3=x_1\otimes x \otimes x_3+x_1\otimes y\otimes x_3

Similarly, we can always get rid of unwelcome exponents, like so:

\log (x^n)=n\log x

x_1\otimes x^n\otimes x_3=n(x_1\otimes x\otimes x_3)

This means that, in general, we can always factorize any polynomial or rational function that appears in a symbol. As such, we often express symbols in terms of some fixed symbol alphabet, a basis of rational functions that can be multiplied to get any symbol entry in the function we’re working with. In general, it’s a lot easier to calculate amplitudes when we know the symbol alphabet beforehand. For six-particle amplitudes in N=4 super Yang-Mills, the symbol alphabet contains just nine “letters”, which makes it particularly easy to work with.
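
To see those rules in action, here’s a minimal sketch of symbol expansion in sympy. The data layout (a term as a coefficient paired with a tuple of entries) and the function names are my own choices for illustration, not any standard package:

```python
# Expand symbol entries using log(ab) = log a + log b,
# log(a/b) = log a - log b, and log(a^n) = n log a.
import sympy as sp

def expand_entry(entry):
    """Factor one entry into (letter, multiplicity) pairs.
    Overall constants from the factorization are dropped here."""
    num, den = sp.fraction(sp.together(entry))
    factors = []
    for expr, sign in ((num, 1), (den, -1)):
        _, flist = sp.factor_list(expr)
        factors += [(f, sign * n) for f, n in flist]
    return factors

def expand_symbol(entries):
    """Expand one tensor term into a list of (coefficient, word)."""
    terms = [(1, ())]
    for entry in entries:
        terms = [(c * n, word + (f,))
                 for c, word in terms
                 for f, n in expand_entry(entry)]
    return terms

x, y = sp.symbols('x y')
# x ⊗ (x*y**2/(1-x)) expands to (x ⊗ x) + 2 (x ⊗ y) - (x ⊗ (x-1));
# the sign flip inside the last letter is a dropped constant.
print(expand_symbol((x, x * y**2 / (1 - x))))
```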

That’s arguably the core of symbol methods. It’s how Spradlin and Volovich managed to get a seventeen-page expression down to two lines. Express a symbol in the right alphabet, and it tends to look a lot simpler. And once you know the right alphabet, it’s pretty straightforward to build an ansatz with it and constrain it until you get a candidate function for whatever you’re interested in.
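
As a toy version of the counting involved in that ansatz-building step (with placeholder letters, not the real hexagon alphabet):

```python
# Enumerate every candidate symbol term of a given weight over a
# fixed alphabet. Letters here are stand-ins, not the real ones.
from itertools import product

alphabet = [f"a{i}" for i in range(1, 10)]  # nine placeholder letters
weight = 4                                  # e.g. a two-loop amplitude

# Every weight-4 word is a candidate term; physical constraints
# (branch cuts, integrability, known limits) then fix the coefficients.
candidates = list(product(alphabet, repeat=weight))
print(len(candidates))  # 9**4 = 6561 candidate terms
```

The point of knowing the alphabet in advance is that this finite list of words, however large, is guaranteed to contain the answer; the constraints just have to fix the coefficients.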

There’s more technical detail I could give here: how to tell whether a symbol actually corresponds to a function, how to take limits and do series expansions and take derivatives and discontinuities…but I’m not sure whether anyone reading this would be interested.

As it stands, I’ll just mention that the symbol is only part of the story. In particular, it’s a special case of something called a coproduct, which breaks polylogarithms up into various chunks. Break them down fully, until each chunk is just an individual log, and you get the symbol. Break them into larger chunks, and you get other components of the coproduct, consisting of tensor products of polylogarithms with lower transcendental weight. These larger chunks mean we can capture as much of a function’s behavior as we like, while still taking advantage of these sorts of tricks. While in older papers you might have seen mention of “beyond-the-symbol” terms that the symbol couldn’t capture, this doesn’t tend to be a problem these days.

You Go, LIGO!

Well folks, they did it. LIGO has detected gravitational waves!

FAQ:

What’s a gravitational wave?

Gravitational waves are ripples in space and time. As Einstein figured out a century ago, masses bend space and time, which causes gravity. Wiggle masses in the right way and you get a gravitational wave, like a ripple on a pond.

Ok, but what is actually rippling? It’s some stuff, right? Dust or something?

In a word, no. Not everything has to be “stuff”. Energy isn’t “stuff”, and space-time isn’t either, but space-time is really what vibrates when a gravitational wave passes by. Distances themselves are changing, in a way that is described by the same math and physics as a ripple in a pond.

What’s LIGO?

LIGO is the Laser Interferometer Gravitational-Wave Observatory. In simple terms, it’s an observatory (or rather, a pair of observatories in Washington and Louisiana) that can detect gravitational waves. It does this using beams of laser light four kilometers long. Gravitational waves change the length of these beams when they pass through, causing small but measurable changes in the laser light observed.
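
How small is “small”? Here’s a back-of-the-envelope estimate (the strain number is my own ballpark for a detectable signal, not a figure from the announcement):

```python
# Order-of-magnitude arithmetic only. A gravitational wave stretches
# distances by a fractional amount called the strain.
arm_length = 4_000  # meters: one LIGO arm
strain = 1e-21      # rough size of a detectable strain (assumed)
print(f"{strain * arm_length:.0e} m")  # ~4e-18 m, hundreds of times
                                       # smaller than a proton
```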

Are there other gravitational wave observatories?

Not currently in operation. LIGO originally ran from 2002 to 2010, and during that time other gravitational wave observatories were also in operation (VIRGO in Italy and GEO600 in Germany). All of them (including LIGO) failed to detect anything, and so LIGO and VIRGO were shut down to be upgraded into more sensitive, advanced versions. Advanced LIGO went into operation first, and made the detection. VIRGO is still under construction, as is KAGRA, a detector in Japan. There are also plans for a detector in India.

Other sorts of experiments can detect gravitational waves on different scales. eLISA is a planned space-based gravitational wave observatory, while Pulsar Timing Arrays could use distant neutron stars as an impromptu detector.

What did they detect? What could they detect?

The gravitational waves that LIGO detected came from a pair of black holes merging. In general, gravitational waves come from a pair of orbiting masses, or from one mass with an uneven and rapidly changing shape. As such, LIGO and future detectors might be able to observe binary stars, supernovas, weird-shaped neutron stars, colliding galaxies…pretty much any astrophysical event involving large things moving comparatively fast.

What does this say about string theory?

Basically nothing. There are gravitational waves in string theory, sure (and they play a fairly important role), but there were already gravitational waves in Einstein’s general relativity. As far as I’m aware, no-one at this point seriously thought that gravitational waves didn’t exist. Nothing that LIGO observed has any bearing on the quantum properties of gravity.

But what about cosmic strings? They mentioned those in the announcement!

Cosmic strings, despite the name, aren’t a unique prediction of string theory. They’re big, string-shaped wrinkles in space and time, possible results of the rapid expansion of space during cosmic inflation. You can think of them a bit like the cracks that form in an over-inflated balloon right before it bursts.

Cosmic strings, if they exist, should produce gravitational waves. This means that in the future we may have concrete evidence of whether or not they exist. This wouldn’t say all that much about string theory: while string theory does have its own explanations for cosmic strings, it’s unclear whether it actually has unique predictions about them. It would say a lot about cosmic inflation, though, and would presumably help distinguish it from proposed alternatives. So keep your eyes open: in the next few years, gravitational wave observatories may well have something important to say about the overall history of the universe.

Why is this discovery important, though? If we already knew that gravitational waves existed, why does discovering them matter?

LIGO didn’t discover that gravitational waves exist. LIGO discovered that we can detect them.

The existence of gravitational waves is no discovery. But the fact that we now have observatories sensitive enough to detect them is huge. It opens up a whole new type of astronomy: we can now observe the universe not just by the light it sheds (and neutrinos), but through a whole new lens. And every time we get another observational tool like this, we notice new things, things we couldn’t have seen without it. It’s the dawn of a new era in astronomy, and LIGO was right to announce it with all the pomp and circumstance they could muster.

 

My impressions from the announcement:

Speaking of pomp and circumstance, I was impressed by just how well put-together LIGO’s announcement was.

As the US presidential election heats up, I’ve seen a few articles about the various candidates’ (well, usually Trump’s) use of the language of political propaganda. The idea is that there are certain visual symbols at political events for which people have strong associations, whether with historical events or specific ideas or the like, and that using these symbols makes propaganda more powerful.

What I haven’t seen is much discussion of a language of scientific propaganda. Still, the overwhelming impression I got from LIGO’s announcement is that it was shaped by a master in the use of such a language. They tapped into a wide variety of powerful images, from the documentary-style interviews at the beginning, to Weiss’s tweed jacket and handmade demos, to the American flag in the background, all of which tied LIGO’s result to the history of scientific accomplishment.

Perimeter’s presentations tend to have a slicker look, and my friends at Stony Brook are probably better at avoiding jargon. But neither is quite as good at propaganda, at saying “we are part of history” and doing so without a hitch, as the folks at LIGO have shown themselves to be with this announcement.

I was also fairly impressed that they kept this under wraps for so long. While there were leaks, I don’t think many people had a complete grasp of what was going to be announced until the week before. Somehow, LIGO made sure a collaboration of thousands was able to (mostly) keep their mouths shut!

Beyond the organizational and stylistic notes, my main thought was “What’s next?” They’ve announced the detection of one event. I’ve heard others rattle off estimates that they should be detecting anywhere from one black hole merger per year to a few hundred. Are we going to see more events soon, or should we settle into a long wait? Could they already have detected more, with the evidence buried in their data, to be revealed by careful analysis? (The waves from this black hole merger were clear enough to detect in real time, but more subtle events might not make things so easy!) Should we be seeing more events already, and does not seeing them tell us something important about the universe?

Most of the reason I delayed my post till this week was to see if anyone had an answer to these questions. So far, I haven’t seen one, beyond the “one to a few hundred” estimate mentioned above. As more people weigh in and more of LIGO’s run is analyzed, it will be interesting to see where that side of the story goes.

Gravitational Waves, and Valentine’s Day Physics Poem 2016

By the time this post goes up, you’ll probably have seen Advanced LIGO’s announcement of the first direct detection of a gravitational wave. We got the news a bit early here at Perimeter, which is why we were able to host a panel discussion right after the announcement.

From what I’ve heard, this is the real deal. They’ve got a beautifully clear signal, and unlike BICEP, they kept this under wraps until they could get it looked at by non-LIGO physicists. While I think peer review gets harped on a little too much in these sorts of contexts, in this case their paper getting through peer review is a good sign that they’re really seeing something.

[Image: the detected signal]

Pictured: a very clear, very specific something

I’ll have more to say next week: explanations of gravitational waves and LIGO for my non-expert audience, and impressions from the press release and PI’s panel discussion for those who are interested. For now, though, I’ll wait until the dust (metaphorical this time) settles. If you’re hungry for immediate coverage, I’m sure that half the blogs on my blogroll have posts up, or will in the next few days.

In the meantime, since Valentine’s Day is in two days, I’ll continue this blog’s tradition and post one of my old physics poems.


 

When a sophisticated string theorist seeks an interaction

He does not go round and round in loops

As a young man would.

 

Instead he turns to topology.

 

Mature, the string theorist knows

That what happens on

(And between)

The (world) sheets,

Is universal.

 

That the process is the same

No matter which points

Which interactions

One chooses.

 

Only the shapes of things matter.

 

Only the topology.

 

For such a man there is no need.

To obsess

To devote

To choose

One point or another.

The interaction is the same.

 

The world, though

Is not an exercise in theory.

Is not a mere possibility.

And if a theorist would compute

An experiment

A probability

 

He must pick and choose

Obsess and devote

Label his interactions with zeroes and infinities

 

Because there is more to life

Than just the shapes of things

Than just topology.

 

The Universe, Astronomy’s Lab

There’s a theme in a certain kind of science fiction.

Not in the type with laser swords and space elves, and not in cyberpunk dystopias…but when sci-fi tries to explore what humanity might do if it really got the chance to stretch its capabilities. In a word, the theme is scale.

We start out with a Dyson sphere, built around our own sun to trap its energy. As time goes on, the projects get larger and larger, involving multiple stars and, eventually, reshaping the galaxy.

There’s an expectation, though, that this sort of thing is far in our future. Treating the galaxy as a resource, as a machine, seems well beyond our present capabilities.

On Wednesday, Victoria Kaspi gave a public lecture at Perimeter about neutron stars. At the very end of the lecture, she talked a bit about something she covered in more detail during her colloquium earlier that day, called a Pulsar Timing Array.

Neutron stars are one of the ways a star can end its life. Too big to burn out quietly and form a white dwarf, and too small to collapse all the way into a black hole, the progenitors of neutron stars have so much gravity that they force protons and electrons to merge, so that the star ends up as a giant ball of neutrons, like an enormous atomic nucleus.

Many of these neutron stars have strong magnetic fields. A good number of them are what are called pulsars: stars that emit powerful pulses of electromagnetic radiation, often at regular intervals. Some of these pulsars are very regular indeed, rivaling atomic clocks in their precision. The idea of a Pulsar Timing Array is to exploit this regularity by using these pulsars as a gravitational wave telescope.

Gravitational waves are ripples in space-time. They were predicted by Einstein’s theory, and we’ve observed their indirect effects, but so far we have yet to detect them directly. Attempts have been made: vast detectors like LIGO have been built that bounce light across long “arms”, trying to detect minute disruptions in space. The problem is, it’s hard to distinguish these disruptions from ordinary vibrations in the area, like minor earthquakes. Size also limits the effectiveness of these detectors, with larger detectors able to see the waves from bigger astronomical events.

Pulsar Timing Arrays sidestep both of those problems. Instead of trying to build a detector on the ground like LIGO (or even in space like LISA), they use the pulsars themselves as the “arms” of a galaxy-sized detector. Because these pulsars emit light so regularly, small disruptions can be a sign that a gravitational wave is passing by the earth and disrupting the signal. Because they are spread roughly evenly across the galaxy, we can correlate signals across multiple pulsars, to make sure we’re really seeing gravitational waves. And because they’re so far apart, we can use them to detect waves from some of the biggest astronomical events, like galaxies colliding.

[Image: a pulsar timing array]

Earth very much not to scale.
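
To make the correlation trick concrete, here’s a toy simulation, entirely my own sketch: a shared wiggle (the gravitational wave) survives in the cross-correlation of two pulsars’ timing residuals, while each pulsar’s private noise averages away. Real analyses fit the expected correlation pattern across many pulsar pairs rather than a single coefficient.

```python
# Toy model: two pulsars' timing residuals share a common
# gravitational-wave signal but have independent noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)             # observation times (years)
gw = 1e-7 * np.sin(2 * np.pi * 0.5 * t)  # shared signal (seconds)
residuals_a = gw + 1e-7 * rng.standard_normal(t.size)  # pulsar A
residuals_b = gw + 1e-7 * rng.standard_normal(t.size)  # pulsar B

# The shared signal produces a significant cross-correlation;
# with no gravitational wave it would hover near zero.
print(np.corrcoef(residuals_a, residuals_b)[0, 1])  # ~0.3 here
```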

Longtime readers know that I find astronomy really inspiring, but Kaspi’s talk woke me up to a completely different aspect, that of our mastery of scale.

Want to dream of a future where we use the solar system and the galaxy as resources? We’re there, and we’ve been there for a long time. We’re a civilization that used nearby planets to bootstrap up the basic laws of motion before we even had light bulbs. We’ve honed our understanding of space and time using distant stars. And now, we’re using an array of city-sized balls of neutronium, distributed across the galaxy, as a telescope. If that’s not the stuff of science fiction, I don’t know what is.


 

By the way, speaking of webcast lectures, I’m going to be a guest on the Alda Center’s Science Unplugged show next week. Tune in if you want to hear about the sort of stuff I work on, using string theory as a tool to develop shortcuts for particle physics calculations.

PSI Winter School

I’m at the Perimeter Scholars International Winter School this week. Perimeter Scholars International is Perimeter’s one-of-a-kind master’s program, which jams the basics of theoretical physics into a one-year curriculum. We’ve got students from all over the world, including plenty of places that don’t get any snow at all. As such, it was decided that the students need to spend a week somewhere with even more snow than Waterloo: Muskoka, Ontario.

[Image: snowy Muskoka]

A place that occasionally manages to be this photogenic

This isn’t really a break for them, though, which is where I come in. The students have been organized into groups, and each group is working on a project. My group’s project is related to the work of integrability master Pedro Vieira. He and his collaborators came up with a way to calculate scattering amplitudes in N=4 super Yang-Mills without the usual process of loop-by-loop approximations. However, this method comes at a price: a new approximation, this time to low energy. This approximation is step-by-step, like loops, but in a different direction. It’s called the Pentagon Operator Product Expansion, or POPE for short.

[Image]

Approach the POPE, and receive a blessing

What we’re trying to do is go back and add up all of the step-by-step terms in the approximation, to see if we can match to the old expansion in loops. One of Pedro’s students recently managed to do this for the first approximation (“tree” diagrams), and the group here at the Winter School is trying to use her (still unpublished) work as a jumping-off point to get to the first loop. Time will tell whether we’ll succeed…but we’re making progress, and the students are learning a lot.

Trust Your Notation as Far as You Can Prove It

Calculus contains one of the most famous examples of physicists doing something silly that irritates mathematicians. See, there are two different ways to write down a derivative, both dating back to the invention of calculus: Newton’s notation, and Leibniz’s notation.

Newton cared a lot about rigor (enough that he actually published his major physics results without calculus because he didn’t think calculus was rigorous enough, despite inventing it himself). His notation is direct and to the point: to take the derivative of a quantity x (with respect to time), you put a dot over it,

\dot{x}

Leibniz cared a lot less about rigor, and a lot more about the scientific community. He wanted his notation to be useful and intuitive, to be the sort of thing that people would pick up and run with. To write a derivative in Leibniz notation, you write,

\frac{df}{dx}

This looks like a fraction. It’s really, really tempting to treat it like a fraction. And that’s the point: the notation is telling you that treating it like a fraction is often the right thing to do. In particular, you can do something like this,

y=\frac{df}{dx}

y dx=df

\int y dx=\int df

and what you did actually makes a certain amount of sense.
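
Here’s the same manipulation checked on a concrete function, a toy example of my own in sympy:

```python
# For a concrete f, integrating y = df/dx over x recovers f (up to a
# constant), just as the fraction-style manipulation suggests.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) * sp.exp(x)          # any concrete function will do
y = sp.diff(f, x)                  # y = df/dx

recovered = sp.integrate(y, x)     # the "∫ y dx = ∫ df" step
print(sp.simplify(recovered - f))  # 0: the trick worked this time
```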

The tricky thing here is that it doesn’t always make sense. You can do these sorts of tricks up to a point, but you need to be aware that they really are just tricks. Take the notation too seriously, and you end up doing things you aren’t really allowed to do. It’s always important to stay aware of what you’re really doing.

There are a lot of examples of this kind of thing in physics. In quantum field theory, we use path integrals. These aren’t really integrals…but a lot of the time, we can treat them as such. Operators in quantum mechanics can be treated like numbers and multiplied…up to a point. A friend of mine was recently getting confused by operator product expansions, where similar issues crop up.

I’ve found two ways to clear up this kind of confusion. One is to unpack your notation: go back to the definitions, and make sure that what you’re doing really makes sense. This can be tedious, but you can be confident that you’re getting the right answer.

The other option is to stop treating your notation like the familiar thing it resembles, and start treating it like uncharted territory. You’re using this sort of notation to remind you of certain operations you can do, certain rules you need to follow. If you take those rules as basic, you can think about what you’re doing in terms of axioms rather than in terms of the suggestions made by your notation. Follow the right axioms, and you’ll stay within the bounds of what you’re actually allowed to do.

Either way, familiar-looking notation can help your intuition, making calculations more fluid. Just don’t trust it farther than you can prove it.

Amplitudes for the New Year

Ah, the new year, time of new year’s resolutions. While some people resolve to go to the gym or take up online dating, physicists resolve to finally get that paper out.

At least, that’s the impression I get, given the number of papers posted to arXiv in the last month. Since a lot of them were amplitudes-related, I figured I’d go over some highlights.

Every once in a while people ask me for the latest news on the amplituhedron. While I don’t know what Nima is working on right now, I can point to what others have been doing. Zvi Bern, Jaroslav Trnka, and collaborators have continued to make progress towards generalizing the amplituhedron to non-planar amplitudes. Meanwhile, a group in Europe has been working on solving an issue I’ve glossed over to some extent. While the amplituhedron is often described as calculating an amplitude as the volume of a geometrical object, in fact there is a somewhat more indirect procedure involved in going from the geometrical object to the amplitude. It would be much simpler if the amplitude were actually the volume of some (different) geometrical object, and that’s what these folks are working towards. Finally, Daniele Galloni has made progress on a technical issue: the amplituhedron gives a mathematical recipe for the amplitude, but it doesn’t tell you how to carry out that recipe, and Galloni provides an algorithm for part of this process.

With this new algorithm, is the amplituhedron finally as efficient as older methods? Typically, the way to show that is to do a calculation with the amplituhedron that wasn’t possible before. It doesn’t look like that’s happening soon though, as Jake Bourjaily and collaborators compute an eight-loop integrand using one of the more successful of the older methods. Their paper provides a good answer to the perennial question, “why more loops?” What they find is that some of the assumptions that people made at lower loops fail to hold at this high loop order, and it becomes increasingly important to keep track of exactly how far your symmetries can take you.

Back when I visited Brown, I talked to folks there about some ongoing work. Now that they’ve published, I can talk about it. A while back, Juan Maldacena resurrected an old technique of Landau’s to solve a problem in AdS/CFT. In that paper, he suggested that Landau’s trick might help prove some of the impressive simplifications in N=4 super Yang-Mills that underlie my work and the work of those at Brown. In their new paper, the Brown group finds that, while useful, Landau’s trick doesn’t seem to fully explain the simplicity they’ve discovered. To get a little partisan, I have to say that this was largely the result I expected, and that it felt a bit condescending for Maldacena to assume that an old trick like that from the Feynman diagram era could really be enough to explain one of the big discoveries of amplitudeology.

There was also a paper by Freddy Cachazo and collaborators on an interesting trick to extend their CHY string to one loop, and one by Bo Feng and collaborators on an intriguing new method called Q-cuts that I will probably say more about in the future, but I’ll sign off for now. I’ve got my own new year’s physics resolutions, and I ought to get back to work!

The Higgs Solution

My grandfather is a molecular biologist. Over the holidays I had many opportunities to chat with him, and our conversations often revolved around explaining some aspect of our respective fields. While talking to him, I came up with a chemistry-themed description of the Higgs field, and how it leads to electro-weak symmetry breaking. Very few of you are likely to be chemists, but I think you still might find the metaphor worthwhile.

Picture the Higgs as a mixture of ions, dissolved in water.

In this metaphor, the Higgs field is a sort of “Higgs solution”. Overall, this solution should be uniform: if you have more ions of a certain type in one place than another, over time they will dissolve until they reach a uniform mixture again. In this metaphor, the Higgs particle detected by the LHC is like a brief disturbance in the fluid: by stirring the solution at high energy, we’ve managed to briefly get more of one type of ion in one place than the average concentration.

What determines the average concentration, though?

Essentially, it’s arbitrary. If this were really a chemistry experiment, it would depend on the initial conditions: which ions we put into the mixture in the first place. In physics, quantum mechanics plays a role, randomly selecting one option out of the many possibilities.

 

[Image: colorful solutions]

Choose wisely

(Note that this metaphor doesn’t explain why there has to be a solution, why the water can’t just be “pure”. A setup that required this would probably be chemically complicated enough to confuse nearly everybody, so I’m leaving that feature out. Just trust that “no ions” isn’t one of our options.)

Up till now, the choice of mixture didn’t matter very much. But different ions interact with other chemicals in different ways, and this has some interesting implications.

Suppose we have a tube filled with our Higgs solution. We want to shoot some substance through the tube, and collect it on the other side. This other substance is going to represent a force.

If our force substance doesn’t react with the ions in our Higgs solution, it will just go through to the other side. If it does react, though, then it will be slowed down, and only some of it will get to the other side, possibly none at all.

You can think of the electro-weak force as a mixture of these sorts of substances. Normally, there is no way to tell the different substances apart. Just like the different Higgs solutions, different parts of the electro-weak force are arbitrary.

However, once we’ve chosen a Higgs solution, things change. Now, different parts of our electro-weak substance will behave differently. The parts that react with the ions in our Higgs solution will slow down, and won’t make it through the tube, while the parts that don’t interact will just flow on through.

We call the part that gets through the tube electromagnetism, and the part that doesn’t the weak nuclear force. Electromagnetism is long-range: its waves (light) can travel great distances. The weak nuclear force is short-range, and doesn’t have an effect outside the scale of atoms.

The important thing to take away from this is that the division between electromagnetism and the weak nuclear force is totally arbitrary. Taken by themselves, they’re equivalent parts of the same, electro-weak force. It’s only because some of them interact with the Higgs, while others don’t, that we distinguish those parts from each other. If the Higgs solution were a different mixture (if the Higgs field had different charges) then a different part of the electroweak force would be long-range, and a different part would be short-range.
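
If a toy model helps, here’s that arbitrariness in miniature, with conventions entirely of my own invention (a 2D force vector and a random “Higgs direction”; the real electroweak math is richer than this):

```python
# The Higgs "mixture" is an arbitrary direction; whatever part of the
# force overlaps it becomes short-range, and the rest stays long-range.
import numpy as np

rng = np.random.default_rng()
theta = rng.uniform(0, 2 * np.pi)
higgs = np.array([np.cos(theta), np.sin(theta)])  # random "mixture"

def split_force(force):
    """Split a toy 2D force into Higgs-interacting (short-range)
    and non-interacting (long-range) parts."""
    short_range = np.dot(force, higgs) * higgs
    long_range = force - short_range
    return short_range, long_range

weak, em = split_force(np.array([1.0, 0.0]))
print("weak-like part:", weak, "electromagnetic-like part:", em)
```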

We wouldn’t be able to tell the difference, though. We’d see a long-range force, and a short-range force, and a Higgs field. In the end, our world would be completely the same, just based on a different, arbitrary choice.

Who Needs Non-Empirical Confirmation?

I’ve figured out what was bugging me about Dawid’s workshop on non-empirical theory confirmation.

It’s not the concept itself that bothers me. While you might think of science as entirely based on observations of the real world, in practice we can’t test everything. Inevitably, we have to add in other sorts of evidence: judgments based on precedent, philosophical considerations, or sociological factors.

It’s Dawid’s examples that annoy me: string theory, inflation, and the multiverse. Misleading popularizations aside, none of these ideas involves non-empirical confirmation. In particular, string theory doesn’t need non-empirical confirmation, inflation doesn’t want it, and the multiverse, as yet, doesn’t merit it.

In order for non-empirical confirmation to matter, it needs to affect how people do science. Public statements aren’t very relevant from a philosophy of science perspective; they ebb and flow based on how people promote themselves. Rather, we should care about what scientists assume in the course of their work. If people are basing new work on assumptions that haven’t been established experimentally, then we need to make sure their confidence isn’t misplaced.

String theory hasn’t been established experimentally…but it fails the other side of this test: almost no-one is assuming string theory is true.

I’ve talked before about theorists who study theories that aren’t true. String theory isn’t quite in that category, it’s still quite possible that it describes the real world. Nonetheless, for most string theorists, the distinction is irrelevant: string theory is a way to relate different quantum field theories together, and to formulate novel ones with interesting properties. That sort of research doesn’t rely on string theory being true, often it doesn’t directly involve strings at all. Rather, it relies on string theory’s mathematical abundance, its versatility and power as a lens to look at the world.

There are string theorists who are more directly interested in describing the world with string theory, though they’re a minority. They’re called String Phenomenologists. By itself, “phenomenologist” refers to particle physicists who try to propose theories that can be tested in the real world. “String phenomenology” is actually a bit misleading, since most string phenomenologists aren’t actually in the business of creating new testable theories. Rather, they try to reproduce some of the more common proposals of phenomenologists, like the MSSM, from within the framework of string theory. While string theory can reproduce many possible descriptions of the world (10^500 by some estimates), that doesn’t mean it covers every possible theory; making sure it can cover realistic options is an important, ongoing technical challenge. Beyond that, a minority within a minority of string phenomenologists actually try to make testable predictions, though often these are controversial.

None of these people need non-empirical confirmation. For the majority of string theorists, string theory doesn’t need to be “confirmed” at all. And for the minority who work on string phenomenology, empirical confirmation is still the order of the day, either directly from experiment or indirectly from the particle phenomenologists struggling to describe it.

What about inflation?

Cosmic inflation was proposed to solve an empirical problem, the surprising uniformity of the observed universe. Look through a few papers in the field, and you’ll notice that most are dedicated to finding empirical confirmation: they’re proposing observable effects on the cosmic microwave background, or on the distribution of large-scale structures in the universe. Cosmologists who study inflation aren’t claiming to be certain, and they aren’t rejecting experiment: overall, they don’t actually want non-empirical confirmation.

To be honest, though, I’m being a little unfair to Dawid here. The reason that string theory and inflation are in the name of his workshop isn’t that he thinks they independently use non-empirical confirmation. Rather, it’s that, if you view both as confirmed (and make a few other assumptions), then you’ve got a multiverse.

In this case, it’s again important to compare what people are doing in their actual work to what they’re saying in public. While a lot of people have made public claims about the existence of a multiverse, very few of them actually work on it. In fact, the two sets of people seem to be almost entirely disjoint.

People who make public statements about the multiverse tend to be older prominent physicists, often ones who’ve worked on supersymmetry as a solution to the naturalness problem. For them, the multiverse is essentially an excuse. Naturalness predicted new particles, we didn’t find new particles, so we need an excuse to have an “unnatural” universe, and for many people the multiverse is that excuse. As I’ve argued before, though, this excuse doesn’t have much of an impact on research. These people aren’t discouraged from coming up with new ideas because they believe in the multiverse, rather, they’re talking about the multiverse because they’re currently out of new ideas. Nima Arkani-Hamed is a pretty clear case of someone who has supported the multiverse in pieces like Particle Fever, but who also gets thoroughly excited about new ideas to rescue naturalness.

By contrast, there are many fewer people who actually work on the multiverse itself, and they’re usually less prominent. For the most part, they actually seem concerned with empirical confirmation, trying to hone tricks like anthropic reasoning to the point where they can actually make predictions about future experiments. It’s unclear whether this tiny group of people are on the right track…but what they’re doing definitely doesn’t seem like something that merits non-empirical confirmation, at least at this point.

It’s a shame that Dawid chose the focus he did for his workshop. Non-empirical theory confirmation is an interesting idea (albeit one almost certainly known to philosophy long before Dawid), and there are plenty of places in physics where it could use some examination. We seem to have come to our current interpretation of renormalization non-empirically, and while string theory itself doesn’t rely on non-empirical confirmation, many of its arguments with loop quantum gravity seem to rest on non-empirical considerations, in particular arguments about what is actually required for a proper theory of quantum gravity. But string theory, inflation, and the multiverse aren’t the examples he’s looking for.

Newtonmas 2015

Merry Newtonmas!

I’ll leave up my poll a bit longer, but the results are already looking pretty consistent.

A strong plurality of my readers, a little more than a quarter, have PhDs in high energy or theoretical physics. Another big chunk (a bit over a fifth) are physics grad students. All together, that means almost half of my readers have some technical background in what I do.

In the comments, Cliff suggests this is a good reason to start writing more technical posts. Looking at the results, I agree, it looks like there would definitely be an audience for that sort of thing. Technical posts take a lot more effort than general audience posts, so don’t expect a lot of them…but you can definitely look forward to a few technical posts next year.

On the other hand, between people with some college physics and people who only saw physics in high school, about a third of my audience wouldn’t get much out of technical posts. Most of my posts will still be geared to this audience, since it’s kind of my brand at this point, but I do want to start experimenting with aiming a few posts to more specific segments.

Beyond that, I’ve got a smattering of readers in other parts of physics, and a few mathematicians. Aside from the occasional post defending physics notation, there probably won’t be much aimed at either group, but do let me know what I can do to make things more accessible!