# The Cycle of Exploration

Science is often described as a journey of exploration. You might imagine scientists carefully planning an expedition, gathering their equipment, then venturing out into the wilds of Nature, traveling as far as they can before returning with tales of the wonders they discovered.

Is it capybaras? Please let it be capybaras.

This misses an important part of the story, though. In science, exploration isn’t just about discovering the true nature of Nature, as important as that is. It’s also about laying the groundwork for future exploration.

Picture our explorers, traveling out into the wilderness with no idea what’s in store. With only a rough idea of the challenges they might face, they must pack for every possibility: warm clothing for mountains, sunscreen for the desert, canoes to cross rivers, cameras in case they encounter capybaras. Since they can only carry so much, they can only travel so far before they run out of supplies.

Once they return, though, the explorers can assess what they did and didn’t need. Maybe they found a jungle, full of capybaras. The next time they travel they’ll make sure to bring canoes and cameras, but they can skip the warm coats. This frees up room for supplies that are actually useful. In the end, it lets them travel farther.

Science is a lot like this. The more we know, the better questions we can ask, and the further we can explore. It’s true not just for experiments, but for theoretical work as well. Here’s a slide from a talk I’m preparing, about how this works in my sub-field of Amplitudeology.

Unfortunately not a capybara.

In theoretical physics, you often start out doing a calculation using the most general methods you have available. Once you’ve done it, you understand a bit more about your results: in particular, you can start figuring out which parts of the general method are actually unnecessary. By paring things down, you can figure out a new method, one that’s more efficient and allows for more complicated calculations. Doing those calculations then reveals new patterns, letting you propose even newer methods and do even more complicated calculations.

It’s the circle of exploration, and it really does move us all, motivating everything we do. With each discovery, we can go further and learn more than the last attempt, keeping science churning long into the future.

# Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably these programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program, you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…

I’ll just leave this here.

# Fads and Popularity

97% of climate scientists agree that global warming exists, and is most probably human-caused. On a more controversial note, string theorists vastly outnumber adherents of other approaches to quantum gravity, such as Loop Quantum Gravity.

As many who disagree with climate change or string theory would argue, the majority is not always right. Science should be concerned with truth, not merely with popularity. After all, what if scientists are merely taking part in a fad? What makes climate change any more objectively true than pet rocks?

Apparently this is Wikipedia’s best example of a fad.

People are susceptible to fads, after all. A style of music becomes popular, and everyone’s listening to the same sounds. A style of clothing, and everyone’s wearing the same thing. So if an idea in science became popular, everyone might…write the same papers?

That right there is the problem. Scientists only succeed by creating meaningfully original work. If we don’t discover something new, we can’t publish, and as the old saying goes, it’s publish or perish out there. Even if social pressure gets us working on something, if we’re going to get any actual work done there has to be enough there, at least, for us to do something different, something no one has done before.

This doesn’t mean scientists can’t be influenced by popularity, but it means that that influence is limited by the requirements of doing meaningful, original work. In the case of climate change, climate scientists investigate the topic with so many different approaches and look at so many different areas of impact (for example, did you know rising CO2 levels make the ocean more acidic?) that the whole field simply wouldn’t function if climate change wasn’t real: there’d be a contradiction, and most of the myriad projects involving it simply wouldn’t work. As I’ve talked about before, science is an interlocking system, and it’s hard to doubt one part without being forced to doubt everything else.

What about string theory? Here, the situation is a little different. There aren’t experiments testing string theory, so whether or not string theory describes the real world won’t have much effect on whether people can write string theory papers.

The existence of so many string theory papers does say something, though. The upside of not involving experiments is that you can’t go and test something slightly different and write a paper about it. In order to be original, you really need to calculate something that nobody expected you to calculate, or notice a trend nobody expected to exist. The fact that there are so many more string theorists than loop quantum gravity theorists is in part because there are so many more interesting string theory projects than interesting loop quantum gravity projects.

In string theory, projects tend to be interesting because they unveil some new aspect of quantum field theory, the class of theories that explain the behavior of subatomic particles. Given how hard quantum field theory is, any insight is valuable, and in my experience these sorts of insights are what most string theorists are after. So while string theory’s popularity says little about whether it describes the real world, it says a lot about its ability to say interesting things about quantum field theory. And since quantum field theories do describe the real world, string theory’s continued popularity is also evidence that it continues to be useful.

Climate change and string theory aren’t fads, not exactly. They’re popular, not simply because they’re popular, but because they make important and valuable contributions to science. And as long as science continues to reward original work, that’s not about to change.

# Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but there must be a whole lot of different sorts of particles out there, at least if you judge by science fiction (these two are also relevant). Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects: you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late 50’s and early 60’s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early 60’s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

# Hype versus Miscommunication, or the Language of Importance

A fellow amplitudes-person was complaining to me recently about the hype surrounding the debate over whether black holes have “firewalls”. New York Times coverage seems somewhat excessive for what is, in the end, a fairly technical debate, and its enthusiasm was (rightly?) mocked in several places.

There’s an attitude I often run into among other physicists. The idea is that when hype like this happens, it’s because senior physicists are, at worst, cynically manipulating the press to further their positions or, at best, so naïve that they genuinely see what they’re working on as important enough to deserve hype-y coverage. Occasionally, the blame will instead be put on the journalists, with largely the same ascribed motivations: cynical need for more page views, or naïve acceptance of whatever story they’re handed.

In my opinion, what’s going on here is a bit deeper, and not so easily traceable to any particular person.

In the articles on the (2, 0) theory I put up in the last few weeks, I made some disparaging comments about the tone of this Scientific American blog post. After exchanging a few tweets with the author, I think I have a better idea of what went down.

The problem here is that when you ask a scientist about something they’re excited about, they’re going to tell you why they’re excited about it. That’s what happened here when Nima Arkani-Hamed was interviewed for the above article: he was asked about the (2, 0) theory, and he seems to have tried to convey his enthusiasm with a metaphor that explained how the situation felt to him.

The reason this went wrong and led to a title as off-base and hype-sounding as “the Ultimate Ultimate Theory of Physics” was that we (scientists and science journalists) are taught to express enthusiasm in the language of importance.

There has been an enormous resurgence in science communication in recent years, but it has come with a very us-vs.-them mentality. The prevailing attitude is that the public will only pay attention to a scientific development if they are told that it is important. As such, both scientists and journalists try to make whatever they’re trying to communicate sound central, either to daily life or to our understanding of the universe. When both sides of the conversation are operating under this attitude, it creates an echo chamber where a concept’s importance is blown up many times greater than it really deserves, without either side doing anything other than communicating science in the only way they know.

We all have to step back and realize that most of the time, science isn’t interesting because of its absolute “importance”. Rather, a puzzle is often interesting simply because it is a puzzle. That’s what’s going on with the (2, 0) theory, or with firewalls: they’re hard to figure out, and that’s why we care.

Being honest about this is not going to lose us public backing, or funding. It’s not just scientists who value interesting things because they are challenging. People choose the path of their lives not based on some absolute relevance to the universe at large, but because things make sense in context. You don’t fall in love because the target of your affections is the most perfect person in the universe, you fall in love because they’re someone who can constantly surprise you.

Scientists are in love with what they do. We need to make sure that that, and not some abstract sense of importance, is what we’re communicating. If we do that, if we calm down and make a bit more effort to be understood, maybe we can win back some of the trust that we’ve lost by appearing to promote Ultimate Ultimate Theories of Everything.

# Physics and its (Ridiculously One-Sided) Search for a Nemesis

Maybe it’s arrogance, or insecurity. Maybe it’s due to viewing themselves as the arbiters of good and bad science. Perhaps it’s just because, secretly, every physicist dreams of being a supervillain.

Physicists have a rivalry, you see. Whether you want to call it an archenemy, a nemesis, or even a kismesis, there is another field of study that physicists find so antithetical to everything they believe in that it crops up in their darkest and most shameful dreams.

What field of study? Well, pretty much all of them, actually.

Won’t you be my Kismesis?

## Chemistry

A professor of mine once expressed the following sentiment:

“I have such respect for chemists. They accomplish so many things, while having no idea what they are doing!”

Disturbingly enough, he actually meant this as a compliment. Physicists’ relationship with chemists is a bit like a sibling rivalry. “Oh, isn’t that cute! He’s just playing with chemicals. Little guy doesn’t know anything about atoms, and yet he’s just sluggin’ away…wait, why is it working? What? How did you…I mean, I could have done that. Sure.”

## Biology

They study all that weird, squishy stuff. They get to do better mad science. And somehow they get way more funding than us, probably because the government puts “improving lives” over “more particles”. Luckily, we have a solution to the problem.

## Mathematics

Saturday Morning Breakfast Cereal has a pretty good take on this. Mathematicians are rigorous…too rigorous. They never let us have any fun, even when it’s totally fine, and everyone thinks they’re better than us. Well they’re not! Neener neener.

## Computer Science

I already covered math, didn’t I?

## Engineering

Think about how mathematicians think about physicists, and you’ll know how physicists think about engineers. They mangle our formulas, ignoring our pristine general cases for silly criteria like “ease of use” and “describing the everyday world”. Just lazy!

## Philosophy

What do these guys even study? I mean, what’s the point of metaphysics? We’ve covered that, it’s called physics! And why do they keep asking what quantum mechanics means?

These guys have an annoying habit of pointing out moral issues with things like nuclear power plants and worry entirely too much about world-destroying black holes. They’re also our top competition for GRE scores.

## Economics

So, what do you guys use real analysis for again? Pretending to be math-based science doesn’t make you rigorous, guys.

## Psychology

We point out that surveys probably don’t measure anything, and that you can’t take the average of “agree” and “strongly agree”. Plus, if you’re a science, where is your F=ma?

They point out that we don’t actually know anything about how psychology research works, and that we seem to think that all psychologists are Freud. Then they ask us to look at just how fuzzy the plots we get from colliders actually are.

The argument escalates from there, often ending with frenzied makeout sessions.

## Geology? Astronomy?

Hey, we want a nemesis, but we’re not that desperate.

# Why I Study a Theory That Isn’t “True”

I study a theory called N=4 super Yang-Mills. (There’s a half-decent explanation of the theory here. For now, just know that it involves a concept called supersymmetry, where forces and matter are very closely related.) When I mention this to people, sometimes they ask me if I’m expecting to see evidence for N=4 super Yang-Mills at the Large Hadron Collider. And if not there, when can we expect a test of the theory?

Never.

Never? Yep. N=4 super Yang-Mills will never be tested, because N=4 super Yang-Mills (sYM for short) is not “true”.

We know it’s not “true”, because it contains particles that don’t exist. Not just particles we might not have found yet, but particles that would make the universe a completely different and possibly unknowable place.

So if it isn’t true, why do I study it?

Let me give you an analogy. Remember back in 2008 when Sarah Palin made fun of funding “fruit fly research in France”?

Most people I talked to found that pretty ridiculous. After all, fruit flies are one of the most stereotypical research animals, second only to mice. And besides, hadn’t we all grown up knowing about how they were used to research HOX genes?

Wait, you didn’t know about that? Evidently, you weren’t raised by a biologist.

HOX genes are how your body knows what limbs go where. When HOX genes activate in an embryo, they send signals, telling cells where to grow arms and legs.

Much of HOX genes’ power was first discovered in fruit flies. With their relatively simple genetics, scientists were able to manipulate the HOX genes, creating crazy frankenflies like Antennapedia (literally: antenna-feet) here.

A fruit fly’s HOX genes, and the body parts they correspond to.

Old antenna-feet. Ain’t he a beauty?

It was only later, as the science got more sophisticated, that biologists began to track what HOX genes do in humans, making substantial progress in understanding debilitating mutations.

How is this related to N=4 super Yang-Mills? Well, just as fruit flies are simpler to study than humans, sYM is simpler to study than the whole mess of unconnected particles that exist in the real world. We can do calculations with sYM that would be out of reach in normal particle physics. As we do these calculations, we discover new patterns and new techniques. The hope is that, just like HOX genes, we will discover traits that still hold in the more complicated situation of the real world. We’re not quite there yet, but it’s getting close.

By the way, make sure to watch Big Bang Theory on Thursday (11/29, 8/7c on CBS). Turns out, Sheldon is working on this stuff too, and for those who have read arXiv:1210.7709, his diagrams should look quite familiar…

# Who Am I?

I call myself a String Theorist, someone who describes the world in terms of subatomic lengths of string that move in ten dimensions (nine of space and one of time),

But in practice I’m more of a Particle Theorist, describing the world not in terms of short lengths of string but rather with particles that each occupy a single point in space,

More specifically, I’m an Amplitudeologist, part of a trendy new tribe including the likes of Zvi Bern, Lance Dixon, Nima Arkani-Hamed, John Joseph Carrasco (jjmc on twitter), and sometimes Sheldon Cooper,

In terms of my career, I’m a Graduate Student, less like a college student and more like an apprentice, learning not primarily through classes but rather through working to advance my advisor’s research,

And what do I work on? Things like this.