Tag Archives: theoretical physics

Blackboards

As a college student, I already knew that theoretical physicists weren’t like how they were portrayed in movies. They didn’t wear lab coats, or have universally frizzy, unkempt white hair. I knew they didn’t have labs, or plot to take over the world. And I was pretty sure that they didn’t constantly use blackboards.

After all, blackboards are a teaching tool. They’re nice for getting equations up so that the guy way in the back can see them. But if you were actually doing a real calculation, surely you’d prefer paper, or a computer, or some other method that doesn’t involve an unkempt scrawl and a heap of loose white dust all over your clothing.

Right?

Right?

Over the last few years I’ve come to appreciate the value of blackboards. Blackboards actually can be used for calculations. You don’t want to use them all the time, but there are times when it’s useful to have a lot of room on a page, to be able to make notes and structure the board around concepts. More importantly, though, there is a third function that I didn’t even consider back in college. Between calculation and teaching, there is collaboration.

Go to a physics or math department, and you’ll find blackboards on the walls. You’ll find them not just in classrooms, but in offices, and occasionally in corridors. Go to a high-class physics location like the Perimeter Institute or the Simons Center, and they’ll brag to you about how many blackboards they have strewn around their common areas.

The purpose of these blackboards is to facilitate conversation. If you want to explain your work to someone else and you aren’t using a blog post, you need space to write in a way that you can both see what you’re doing. Blackboards are ideal for that sort of conversation, and as such are essential for collaboration and communication among scientists.

What about whiteboards? Well, whiteboards are just evil, obviously.

Achieving Transcendence: The Physicist Way

I wanted to shed some light on something I’ve been working on recently, but I realized that a little background was needed to explain some of the ideas. As such, this post is going to be a bit more math-y than usual, but I hope it’s educational!

Pi is special. Familiar to all through the area of a circle \pi r^2, pi is particularly interesting in that you cannot write a polynomial equation with whole-number coefficients whose solution is pi. While you can easily get fractions (3x=4 gives x=\frac{4}{3}) and even many irrational numbers (x^2=2 gives x=\sqrt{2}), pi is one of a set of numbers that are impossible to get this way. These special numbers transcend the others, in that you cannot use more everyday numbers to reach them, and as such mathematicians call them transcendental numbers.

In addition to transcendental numbers, you can have transcendental functions. Transcendental functions are functions that can take in a normal number and produce a transcendental number. For example, you may be aware of the delightful equation below:

e^{i \pi}=-1

We can manipulate both sides of this equation by taking the natural logarithm, \ln, to find

i\pi=\ln(-1)

This tells us that the natural logarithm function can take a (negative) whole number (-1) and give us a transcendental number (pi). This means that the natural logarithm is a transcendental function.
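You can check this numerically with Python's standard cmath module, which handles complex numbers. This is just a floating-point illustration of the two equations above, not anything rigorous:

```python
import cmath

# Euler's identity: e^(i*pi) = -1, up to floating-point rounding
euler = cmath.exp(1j * cmath.pi)
print(euler)  # approximately (-1+0j)

# Taking the natural log of -1 (principal branch) recovers i*pi
log_minus_one = cmath.log(-1)
print(log_minus_one)       # i*pi, i.e. 3.141592653589793j
print(log_minus_one.imag)  # pi itself
```

The imaginary part of the result is pi: a whole number in, a transcendental number out.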

There are many other transcendental functions. In addition to logarithms, there are a whole host of related functions called the polylogarithms, and even more generally the harmonic polylogarithms. All of these functions can take in whole numbers like -1 or 1 and give transcendental numbers.

Here physicists introduce a concept called degree of transcendentality, or transcendental weight, which we use to measure how transcendental a number or a function is. Pi (along with functions that can give pi, like the natural logarithm) has transcendental weight one. Pi squared has transcendental weight two. Pi cubed (and another number called \zeta(3)) have transcendental weight three. And so on.
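The counting rule here is that weights add under multiplication: pi squared is pi times pi, so its weight is 1+1=2. A toy bit of bookkeeping makes the rule concrete (the weight assignments are the physicists' convention described above, not a rigorous mathematical definition):

```python
# Toy transcendental-weight bookkeeping: weights add under multiplication.
WEIGHT = {"pi": 1, "ln(2)": 1, "zeta(3)": 3, "zeta(5)": 5}

def product_weight(*constants):
    """Weight of a product of constants = sum of their weights."""
    return sum(WEIGHT[c] for c in constants)

print(product_weight("pi", "pi"))       # pi^2 has weight 2
print(product_weight("pi", "pi", "pi")) # pi^3 has weight 3, like zeta(3)
print(product_weight("pi", "zeta(3)"))  # pi * zeta(3) has weight 4
```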

Note here that, according to mathematicians, there is no rigorous way that a number can be “more transcendental” than another number. In the case of some of these numbers, like \zeta(5), it hasn’t even been proven that the number is actually transcendental at all! However, physicists still use the concept of transcendental weight because it allows us to classify and manipulate a common and useful set of functions. This is an example of the differences in methods and standards between physicists and mathematicians, even when they are working on similar things.

In what way are these functions common and useful? Well, it turns out that in N=4 super Yang-Mills, many calculated results are not only made up of these polylogarithms, they have a particular (fixed) transcendental weight. In situations where we expect this to be true, we can use our knowledge to guess most, or even all, of the result without doing direct calculations. That's immensely useful, and it's a big part of what I've been doing recently.

Model-Hypothesis-Experiment: Sure, Just Not All the Same Person!

At some point, we were all taught how science works.

The scientific method gets described differently in different contexts, but it goes something like this:

First, a scientist proposes a model, a potential explanation for how something out in the world works. They then create a hypothesis, predicting some unobserved behavior that their model implies should exist. Finally, they perform an experiment, testing the hypothesis in the real world. Depending on the results of the experiment, the model is either supported or rejected, and the scientist begins again.

It’s a handy picture. At the very least, it’s a good way to fill time in an introductory science course before teaching the actual science.

But science is a big area. And just as no two sports have the same league setup, no two areas of science use the same method. While the central principles behind the method still hold (the idea that predictions need to be made before experiments are performed, the idea that in order to test a model you need to know something it implies that other models don’t, the idea that the question of whether a model actually describes the real world should be answered by actual experiments…), the way they are applied varies depending on the science in question.

In particular, in high-energy particle physics, we do roughly follow the steps of the method: we propose models, we form hypotheses, and we test them out with experiments. We just don’t expect the same person to do each step!

In high energy physics, models are the domain of Theorists. Occasionally referred to as “pure theorists” to distinguish them from the next category, theorists manipulate theories (some intended to describe the real world, some not). “Manipulate” here can mean anything from modifying the principles of the theory to see what works, to attempting to use the theory to calculate some quantity or another, to proving that the theory has particular properties. There’s quite a lot to do, and most of it can happen without ever interacting with the other areas.

Hypotheses, meanwhile, are the province of Phenomenologists. While theorists often study theories that don’t describe the real world, phenomenologists focus on theories that can be tested. A phenomenologist’s job is to take a theory (either proposed by a theorist or another phenomenologist) and calculate its consequences for experiments. As new data comes in, phenomenologists work to revise their theories, computing just how plausible the old proposals are given the new information. While phenomenologists often work closely with those in the next category, they also do large amounts of work internally, honing calculation techniques and looking through models to find explanations for odd behavior in the data.

That data comes, ultimately, from Experimentalists. Experimentalists run the experiments. With experiments as large as the Large Hadron Collider, they don’t actually build the machines in question. Rather, experimentalists decide how the machines are to be run, then work to analyze the data that emerges. Data from a particle collider or a neutrino detector isn’t neatly labeled by particle. Rather, it involves a vast set of statistics, energies and charges observed in a variety of detectors. An experimentalist takes this data and figures out what particles the detectors actually observed, and from that what sorts of particles were likely produced. Like the other areas, much of this process is self-contained. Rather than being concerned with one theory or another, experimentalists will generally look for general signals that could support a variety of theories (for example, leptoquarks).

If experimentalists don’t build the colliders, who does? That’s actually the job of an entirely different class of scientists, the Accelerator Physicists. Accelerator physicists not only build particle accelerators, they study how to improve them, with research just as self-contained as the other groups.

So yes, we build models, form hypotheses, and construct and perform experiments to test them. And we’ve got very specialized, talented people who focus on each step. That means a lot of internal discussion, and many papers published that only belong to one step or another. For our subfield, it’s the best way we’ve found to get science done.

In Defense of Pure Theory

I’d like to preface this by saying that this post will be a bit more controversial than usual. I have somewhat unconventional opinions about the nature and purpose of science, and what I say below shouldn’t be taken as representative of the field in general.

A bit more than a week ago, Not Even Wrong had a post on the Fundamental Physics Prize. Peter Woit is often…I’m going to say annoying…and this post was no exception.

The Fundamental Physics Prize, for those not in the know, is a fairly recently established prize for physicists, mostly theoretical physicists.  Clocking in at three million dollars, the prize is larger than the Nobel, and is currently the largest prize of its sort. Woit has several objections to the current choice of award recipient (Alexander Polyakov). I sympathize with some of these objections, in particular the snarky observation that a large number of the awardees are from Princeton’s Institute for Advanced Study. But there is one objection in particular that I feel the need to rebut, if only due to its wording: the gripe that “Viewers of the part I saw would have no idea that string theory is not tested, settled science.”

There are two problems with this statement. The first is something that Woit is likely aware of, but it probably isn’t obvious to everyone reading this. To be clear, the fact that a certain theory is not experimentally tested is not a barrier to its consideration for the Fundamental Physics Prize. Far from it, the purpose of the Fundamental Physics Prize is precisely to honor powerful insights in theoretical physics that have not yet been experimentally verified. The Fundamental Physics Prize was created, in part, to remedy what was perceived as unfairness in the awarding of the Nobel Prize, as the Nobel is only awarded to theorists after their theories have received experimental confirmation. Since the whole purpose of this prize is to honor theories that have not been experimentally tested, griping that the prizes are being awarded to untested theories is a bit like griping that Oscars aren’t awarded to scientists, or objecting that viewers of the Oscars would have no idea that the winners haven’t done anything especially amazing for humanity. If you’re watching the ceremony, you probably know what it’s for.

Has this been experimentally verified?

The other problem is a difference of philosophy. When Woit says that string theory is not “tested, settled science” he is implying that in order to be “settled science”, a theory must be tested, and while I can’t be sure of his intent I’m guessing he means tested experimentally. It is this latter implication I want to address: whether or not Woit is making it here, it serves to underscore an important point about the structure of physics as an institution.

Past readers will be aware that a theory can be valuable even if it doesn’t correspond to the real world because of what it can teach us about theories that do correspond to the real world. And while that is an important point, the point I’d like to make here is a bit more controversial. I would like to argue that pure theory, theory unconnected with experiment, can be important and valuable and “settled science” in and of itself.

First off, let’s talk about how such a theory can be science, and in particular how it can be physics. Plenty of people do work that doesn’t correspond to the experimentally accessible real world.  Mathematicians are the clearest example, but the point also arguably applies to fields like literary analysis. Physics is ostensibly supposed to be special, though: as part of science, we expect it to concern itself with the real world, otherwise one would argue that it is simply mathematics. However, as I have argued before, the difference between mathematics and physics is not one of subject matter, but of methods. This makes sense, provided you think of physics not as some sort of fixed school of thought, but as an institution. Physicists train new physicists, and as such physicists learn methods common to other physicists. That which physicists like to do, then, is physics, which means that physics is defined much more by the methods used to do it than by its object of study.

How can such a theory be settled, then? After all, if reality is out, what possible criteria could there be for deciding what is or is not a “good” theory?

The thing about physics as an institution is that physics is done by physicists, and physicists have careers. Over the course of those careers, those physicists need to publish papers, which need to catch the attention and approval of other physicists. They also need to have projects for grad students to do, so as to produce more physicists. Because of this, a “good” theory cannot be worked on alone. It has to be a theory with many implications, a theory that can be worked on and understood consistently by different people. It also needs to constrain further progress, to make sure that not just anyone can create novel results: this is what allows papers to catch the attention of other physicists! If you have all that, you have all of the relevant advantages of reality.

String theory has not been experimentally tested, but it meets all of these criteria. String theory has been a major force in theoretical physics for the past thirty years because it can fuel careers and lead to discussion in a way that nothing else on the table can. It has been tested mathematically in numerous ways, ways which demonstrate its robustness as a theory of quantum gravity. In this sense, string theory is a prime example of tested, settled science.

Ansatz: Progress by Guesswork

I’ve talked before about how hard traditional Quantum Field Theory is. Building things up step by step is slow and inefficient. And like any slow and inefficient process, there is a quicker way. An easier way. A…riskier way.

You guess.

Guess is such an ugly word, though…so let’s call it an ansatz.

Ansatz is a word of German origin. In German, it is part of various idiomatic expressions, where it can refer to an approach, an attempt, or a starting point. When physicists and mathematicians use the term ansatz, they mean a combination of all of these.

An ansatz is an approach in that it is a way of finding a solution to a problem without using more general, inefficient methods. Rather than approaching problems starting from the question, an ansatz approaches problems by starting with an answer, or rather, an attempt at an answer.

An ansatz is an attempt in that it serves as a researcher's best first guess at what the answer is, based on what they know about it. This knowledge can come from several sources. Sometimes, the question constrains the answer, ruling out some possibilities or restricting the output to a particular form. Usually, though, the attempt of an ansatz goes beyond this, incorporating the scientist's experience as to what sorts of answers similar questions have had in the past, even if it isn't yet understood why those sorts of answers are common. With information from both of these sources, a scientist comes up with a preliminary guess, or ansatz, as to the answer to the problem at hand.

What if the answer is wrong, though? The key here is that an ansatz is only a starting point. Rather than being a full answer with all the details filled in, an ansatz generally leaves some parameters free. These free parameters represent unknowns, and it is up to further tests to fix their values and complete the answer. These tests can be experimental, but they can also be mathematical: often there are restrictions on possible answers that are difficult to apply when creating a first guess, but easier to apply when one has only a few parameters to fix. In order to avoid the risk of finding an ansatz that only works by coincidence, many more tests are done than there are parameters. That way, if the guess behind the ansatz is wrong, then some of the tests will give contradictory rules for the values of the parameters, and you’ll know that it’s time to go back and find a better guess.
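Here is a minimal sketch of that logic in Python. The ansatz is the (hypothetical) guess that some sequence is linear, f(n) = a*n + b: two data points fix the two free parameters, and every additional point is a consistency test. If the tests contradict the fitted parameters, the ansatz is rejected:

```python
from fractions import Fraction

def fit_linear_ansatz(data):
    """Ansatz: f(n) = a*n + b. Fix a and b from the first two data
    points, then use every remaining point as a consistency test.
    Returns (a, b) if all tests pass, or None if the ansatz fails."""
    (n0, f0), (n1, f1) = data[0], data[1]
    a = Fraction(f1 - f0, n1 - n0)  # exact arithmetic, no rounding
    b = f0 - a * n0
    for n, f in data[2:]:  # overdetermined: more tests than parameters
        if a * n + b != f:
            return None    # contradiction: time to find a better guess
    return a, b

# f(n) = 2n + 3 really is linear: the extra tests all pass
print(fit_linear_ansatz([(0, 3), (1, 5), (2, 7), (3, 9)]))  # a=2, b=3

# f(n) = n^2 is not linear: the ansatz is rejected
print(fit_linear_ansatz([(0, 0), (1, 1), (2, 4), (3, 9)]))  # None
```

Real ansätze in physics have far more parameters and far subtler tests, but the structure is the same: guess a form, fix the unknowns, and demand consistency from the leftover constraints.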

In the end, this approach, using your first attempt as a starting point, should end up with only a few parameters free, ideally none at all. One way or another, you have figured out a lot about your question just by guessing the answer!

The use of ansatzes is quite common in theoretical physics. Some of the most interesting problems either can't be solved or are tedious to solve through traditional means. The only way to make progress, to go beyond what everyone else can already do, is to notice a pattern, make a guess, and hope you get lucky. Well, not just a guess: an ansatz.

Nature Abhors a Constant

Why is a neutrino lighter than an electron? Why is the strong nuclear force so much stronger than the weak nuclear force, and why are both so much stronger than gravity? For that matter, why do any particles have the masses they do, or forces have the strengths they do?

To some people, these sorts of questions are meaningless. A scientist’s job is to find out the facts, to measure what the constants are. To ask why, though…why would you want to do that?

Maybe a sense of history?

See, physics has a history of taking what look like arbitrary facts (the orbits of the planets, the rate objects fall, the pattern of chemical elements) and finding out why they are that way. And there’s no reason not to expect this trend to continue.

The point can be made even more strongly: increasingly, it is becoming clear that nature abhors a constant.

To explain this, I first have to clarify what I mean by a constant. If you were asked to think of a constant, you’d probably think of the speed of light. The thing is, the speed of light is actually not the sort of constant I have in mind. The speed of light is three hundred million meters per second…but it’s also 671 million miles per hour, or one light year per year. Choose the right units, and the speed of light is just one. To go a bit further: the speed of light is merely an artifact of how we choose our units of distance and time, so it’s not a “real” constant at all!

So what would a “real” constant look like? Well, imagine if there were two fundamental speeds: a maximum, like the speed of light, and a minimum, which nothing could go slower than. You could pick units so that one of the speeds was one, or so that the other was…but they couldn’t both be one at the same time. Their ratio stays the same, no matter what units you’re using. That’s the sign of a true constant. To say it another way: a “real” constant is dimensionless.
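A quick numerical sketch of the point, using a hypothetical "minimum speed" (the numbers below are made up for illustration, not physical): converting both speeds to different units rescales each one, but leaves their ratio untouched.

```python
# A single speed can always be scaled to 1 by a choice of units,
# but the ratio of two speeds is unit-independent: dimensionless.
c_mps = 3.0e8   # maximum speed in meters/second (roughly light speed)
v_mps = 1.5e8   # hypothetical "minimum speed", made up for illustration

# Convert both to miles per hour (1 m/s is about 2.23694 mph)
MPH_PER_MPS = 2.23694
c_mph = c_mps * MPH_PER_MPS
v_mph = v_mps * MPH_PER_MPS

print(c_mps / v_mps)  # 2.0
print(c_mph / v_mph)  # 2.0 -- the ratio survives any change of units
```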

It is these “real” constants that nature so abhors, because whenever such a “real” constant appears to exist, it is likely to be due to a scalar field.

To remind readers, a scalar field is a type of quantum field consisting of a number that can vary through space. Temperature is an iconic illustration of a scalar field: at any given point you can define temperature by a number, and that number changes as you move from place to place.

Now constants, being constant, are not known for changing from place to place. Just because we don’t see mass or charge being different in different places, though, doesn’t mean they aren’t scalar fields.

To illustrate, imagine that you live far in the past, far enough that no-one knows that air has weight. Through careful experimentation, though, you can observe air pressure: everything is pressed upon in all directions by some mysterious force. Even if you don’t have access to mountains and therefore can’t see that air pressure varies by height, maybe you have begun to guess that air pressure is related to the weight of the air. You have a possible explanation for your constant pressure, in terms of a scalar pressure field. But how do you test your idea? Well, the big difference between a scalar and a constant is that a scalar can vary. Since there’s so much air above you, it’s hard to get air pressure to vary: you have to put enough energy in to the air to make it happen. More specifically, you vibrate the air: you create sound waves! By measuring how fast the sound waves go, you can test out your proposed number for the mass of the air, and if everything lines up right, you have successfully replaced a mysterious constant with a logical explanation.

This is almost exactly what happened with the Higgs. Scientists observed that particle masses seemed to be arbitrary numbers, and proposed a scalar field to explain them. (As a matter of fact, the masses in question cannot just be constants; the mathematics doesn’t allow it. They must be scalar fields.) In order to test the theory, we built the Large Hadron Collider, and used it to cause ripples in the seemingly constant masses, just like sound waves in air. In this case, those ripples were the Higgs particle, which served as evidence for the Higgs field just as sound waves serve as evidence for the mass of air.

And this sort of method keeps going. The Higgs explains mass in many cases, but it doesn’t explain the differences between particle masses, and it may be that new fields are needed to explain those. The same thing goes for the strengths of forces. Scalar fields are the most likely explanations for inflation, and in string theory scalars control the size and shape of the extra dimensions. So if you’ve got a mysterious constant, nature likely has a scalar field waiting in the wings to explain it.

So what do you actually do?

A few days ago, my sister asked me what I do at work. What do I actually do in order to do my job? What sort of tasks does it involve?

I answered by showing her this:

(Image: a “What I Do” meme)

Needless to say, that wasn’t very helpful, so I thought a bit and now I have a better answer.

Doing theoretical physics is basically like doing homework. In particular, it’s like doing difficult, interesting homework.

Think of the toughest homework assignment you’ve ever had to do. A homework assignment so tough, you and all your friends in the class worked together to finish it, and none of you were sure you were going to get it right.

Chances are, you handled the situation in one of two ways, depending on whether this was a group project, or an individual one.

Group Project:

This is what you do when you’re supposed to be in a group. Maybe you’re putting together a presentation, or building a rocket. Whatever you’re doing, you’ve got a lot of little tasks that need to get done in order to achieve your goals, so you parcel them out: each group member is assigned a specific task, and at the end everyone meets and puts it all together.

This sort of situation is common in theoretical physics as well, and it happens when different people have different skills to contribute. If one theorist is good at programming, while another understands a particular esoteric type of mathematics, then the math person will do the calculations and then give the results to the programming person, who writes a program to implement them.

Individual Project:

On the other hand, if everyone needs to submit their own work, you can’t very well just do part of it (not without cheating, anyway). Still, it’s not as if you’re doing this on your own. You do your own work to solve the problem, but you keep in contact with your classmates, and when you get stuck, you ask one of them for help.

This sort of situation happens in theoretical physics when everyone is relatively on the same page. Everyone works through the problem individually, doing the calculation and making their own programs, and whenever someone gets stuck, they talk to the others. Everyone periodically compares their results, which serves as a cross-check to make sure nobody made a mistake. The only difference from doing homework is that you and your collaborators write your own problems…which means, none of you know if there is a solution!

In both cases (group and individual), theoretical physics is a matter of doing calculations, writing programs, and thinking through thought experiments. Sometimes that means specific tasks as part of one huge project; sometimes it means working side by side on the same calculation. Either way, it all boils down to one thing: I’m someone who does homework for a living.

Why I Am Not A Mathematician

(No relation to Russell’s Why I Am Not A Christian. Well, not much.)

I am a theorist. I study theories. Not the well-supported theories of the AAAS definition, but simply potential lists of particles, lists that, moreover, are almost certainly not “true”.

Most people find that disconcerting. Used to thinking of scientists as people who investigate the real world, people whose ideas are always tested in the fire of experiment, the idea of a scientist whose work has no direct connection to the real world is a major source of cognitive dissonance…for at least a few minutes. After that, a light dawns in most people’s heads, as they turn to me with a sigh of relief and say,

“Oh. So you’re a Mathematician.”

No.

No, I am not a Mathematician. There is a difference, subtle but vast, between what I do and what a mathematician does.

An illustrative example: Quantum Electro-Dynamics, or QED, is the most successful theory in the entirety of science. Yes, I do mean the entirety of science. Quantum Electro-Dynamics, the theory of how electrons and light behave, agrees with experiments to ten decimal places. Ten digits of detail, predicted then observed. That’s more confirmed accuracy than anything else in physics, in science at all, has ever achieved.

And if you ask a mathematician who specializes in this sort of thing, they’ll tell you that QED probably doesn’t exist.

Now, by this they don’t mean that electrons don’t exist, or that light doesn’t exist. What they mean is that, if you follow the theory’s implications all the way, you get a contradiction. You can calculate each step of the way, getting reasonable results each time, results that keep agreeing perfectly with experiments…but if you were to go all the way, off to infinity, you get results that make your whole theory stop making any sort of reasonable sense.

But as physicists, we keep using it. Because before reaching infinity, for any real calculation, it works. Perfectly.

That’s the difference between a theoretical physicist and a mathematician: for a mathematician, everything must be completely rigorous, and every implication, out to infinity, has to be vetted. For a physicist, if a theory gives reasonable results, we don’t really care whether it is completely clear how it works mathematically. We use physical reasoning, using concepts that work in the physical world, even if we’re studying a theory that doesn’t actually exist in the physical world. And while that sounds like a poor way to study abstract ideas, it allows us to take risks mathematicians can’t, which sometimes means we can make discoveries that even the mathematicians find interesting.

N=4: Maximal Particles for Maximal Fun

Part Four of a Series on N=4 Super Yang-Mills Theory

This is the fourth in a series of articles that will explain N=4 super Yang-Mills theory. In this series I take that phrase apart bit by bit, explaining as I go. Because I’m perverse and out to confuse you, I started with the last bit here, and now I’ve reached the final part.

N=4 Super Yang-Mills Theory

Last time I explained supersymmetry as a relationship between two particles, one with spin X and the other with spin X-½. It’s actually a leeetle bit more complicated than that.

When a shape is symmetric, you can turn it around and it will look the same. When a theory is supersymmetric, you can “turn” it, moving from particles with spin X to particles of spin X-½, and the theory will look the same.

With a 2D shape, that’s the whole story. But if you have a symmetric 3D shape, you can turn it in two different directions, moving to different positions, and the shape will look the same either way. In supersymmetry, the number of different ways you can “turn” the theory and still have it look the same is called N.

N=1 symmetric shape

N=2 symmetric shape

Consider the example of super Yang-Mills. If we start out with a particle of spin 1 (a Yang-Mills field), N=1 supersymmetry says that there will also be a particle of spin ½, similar to the particles of everyday matter. But suppose that instead we had N=2 supersymmetry. You can move from the spin 1 particle to spin ½ in one direction, or in the other, and just like with a regular symmetry, moving in two different directions gets you to two different positions. That means you need two different spin ½ particles! Furthermore, you can also move in one direction and then the other: you go from spin 1 to spin ½, then down from spin ½ to spin 0. So our theory can’t just have spin 1 and spin ½, it has to have spin 0 particles as well!

You can keep increasing N, as long as you keep increasing the number and types of particles. Finally, at N=4, you’ve got the maximal set: one Yang-Mills field with spin 1, four different spin ½ particles, and six different spin 0 scalars. The diagram below shows how the particles are related: you start in the center with a Yang-Mills field, and then travel in one of four directions to the spin ½ particles. Picking two of those directions, you travel further, to a scalar in between two spin ½ particles. Applying more supersymmetry just takes you back down: first to spin ½, then all the way back to spin 1.
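The counting above follows binomial coefficients: with N supersymmetries, the number of distinct ways to take k “steps” down from the top is N choose k. A short sketch (the labels assume we start from a spin-1 Yang-Mills field, as in the text; the two bottom rows of the N=4 case are the opposite-helicity partners of the fermions and the gauge field, which is why the particle count comes out as 1 gauge field, 4 fermions, and 6 scalars):

```python
from math import comb

def multiplet(N):
    """States at each helicity level, starting from +1 and stepping
    down by 1/2 with each of k supersymmetry 'turns': comb(N, k)."""
    return {1 - k / 2: comb(N, k) for k in range(N + 1)}

print(multiplet(1))  # {1.0: 1, 0.5: 1}
print(multiplet(2))  # {1.0: 1, 0.5: 2, 0.0: 1}
print(multiplet(4))  # {1.0: 1, 0.5: 4, 0.0: 6, -0.5: 4, -1.0: 1}
```

For N=4 the counts are 1, 4, 6, 4, 1, matching the one Yang-Mills field, four spin ½ particles, and six scalars described above.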

N=4 super Yang-Mills is where the magic happens. Its high degree of symmetry gives it conformal invariance and dual conformal invariance, it has been observed to have maximal transcendentality and it may even be integrable. Any one of those statements could easily take a full blog post to explain. For now, trust me when I tell you that while N=4 super Yang-Mills may seem complicated, its symmetry means that deep down it is one of the easiest theories to work with, and in fact it might be the simplest non-gravity quantum field theory possible. That makes it an immensely important stepping stone, the first link to take us to a full understanding of particle physics.

One final note: you’re probably wondering why we stopped at N=4. At N=4 we have enough symmetry to go out from spin 1 to spin 0, and then back in to spin 1 again. Any more symmetry, and we need more space, which in this case means higher spin, which means we need to start talking about gravity. Supergravity takes us all the way up to N=8, and has its own delightful properties…but that’s a topic for another day.

A Theorist’s Theory

Part One of a Series on N=4 Super Yang-Mills Theory

In my last post, I called Wikipedia’s explanation of N=4 super Yang-Mills theory only “half-decent”. It’s not particularly bad, though it could use more detail. What it isn’t, and what I wanted, was an explanation that would make sense to a general audience (i.e., you guys!).

Well, if you want something done right, you have to quote that cliché. Or, well, do it yourself.

This is the first in a series of articles that will explain N=4 super Yang-Mills theory. In this series I will take that phrase apart bit by bit, explaining as I go. And because I’m perverse and out to confuse you, I’ll start with the last bit and work my way up.

N=4 Super Yang-Mills Theory

Now as a relatively well-educated person, you may be grumbling at this point. “I know what a theory is!”

“A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment.”

Ah. It appears you’ve been talking to the biologists again. This is exactly why we needed this post. Let’s have a chat.

To be clear, when a biologist says that something (evolution, say, or germ theory) is a theory, this is exactly what they mean. They are describing an idea that has been repeatedly tested and that actually describes the real world. Most other scientists work the same way: geologists (plate tectonics theory), chemists (molecular orbital theory), even most physicists (big bang theory). But this isn’t what theoretical physicists mean when they say theory. In contrast, most things that theorists call theories have no experimental evidence, and usually aren’t even meant to describe the real world.

Unlike the AAAS definition above, theoretical physicists don’t have a formal definition of their usage of theory. If we did, it might go something like this:

“A theory (in theoretical physics) consists of a list of quantum fields, their properties, and how they interact. These fields do not need to be ones that exist in the natural world, but they do have to be (relatively) mathematically consistent. To study a theory is then to consider the interactions of a specific list of quantum fields, without taking into account any other fields that might otherwise interfere.”

Note that there are ways to get around parts of this definition. The (2,0) theory is famously mysterious because we don’t know how to write down the interactions between its fields, but even there we have an implicit definition of how the fields interact built into the theory’s definition, and the challenge is to make that definition explicit. Other theories stretch the definition of a quantum field, or cover a range of different properties. Still, all of them fit the basic template: define some mathematical entities, and describe how they interact.

With that definition in hand, some of you are already asking the next question: “What are the quantum fields of N=4 super Yang-Mills? How do they interact?”

Tune in to the next installment to find out!