Category Archives: Amateur Philosophy

Entropy is Ignorance

(My last post had a poll in it! If you haven’t responded yet, please do.)

Earlier this month, philosopher Richard Dawid ran a workshop entitled “Why Trust a Theory? Reconsidering Scientific Methodology in Light of Modern Physics” to discuss his idea of “non-empirical theory confirmation” for string theory, inflation, and the multiverse. They haven’t published the talks online yet, so I’m stuck reading coverage, mostly these posts by skeptical philosopher Massimo Pigliucci. I find the overall concept annoying, and may rant about it later. For now, though, I’d like to talk about a talk on the second day by philosopher Chris Wüthrich about black hole entropy.

Black holes, of course, are the entire-stars-collapsed-to-a-point-so-dense-that-no-light-can-escape objects that everyone knows and loves. Entropy is often thought of as the scientific term for chaos and disorder, the universe’s long slide towards dissolution. In reality, it’s a bit more complicated than that.


For one, you need to take Elric into account…

Can black holes be disordered? Naively, that doesn’t seem possible. How can a single point be disorderly?

Thought about in a bit more detail, the conclusion seems even stronger. Via something called the “No Hair Theorem”, it’s possible to prove that black holes can be described completely with just three numbers: their mass, their charge, and how fast they are spinning. With just three numbers, how can there be room for chaos?

On the other hand, you may have heard of the Second Law of Thermodynamics. The Second Law states that entropy always increases. Absent external support, things will always slide towards disorder eventually.

If you combine this with black holes, you get some weird implications. In particular, what happens when something disordered falls into a black hole? Does the disorder just “go away”? Doesn’t that violate the Second Law?

This line of reasoning has led to the idea that black holes have entropy after all. It led Bekenstein to calculate the entropy of a black hole based on how much information is “hidden” inside, and Hawking to find that black holes in a quantum world should radiate as if they had a temperature consistent with that entropy. One of the biggest successes of string theory is an explanation for this entropy. In string theory, black holes aren’t perfect points: they have structure, arrangements of strings and higher dimensional membranes, and this structure can be disordered in a way that seems to give the right entropy.

Note that none of this has been tested experimentally. Hawking radiation, if it exists, is very faint: not the sort of thing we could detect with a telescope. Wüthrich is worried that Bekenstein’s original calculation of black hole entropy might have been on the wrong track, which would undermine one of string theory’s most well-known accomplishments.
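How faint is “very faint”? Here’s a quick back-of-the-envelope sketch in Python, plugging a solar mass into the standard textbook formulas for an uncharged, non-spinning black hole (the code and the choice of example are just my illustration, nothing from Wüthrich’s talk):

```python
# A back-of-the-envelope sketch: Hawking temperature and Bekenstein-Hawking
# entropy for a solar-mass (Schwarzschild) black hole.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann's constant, J/K

def hawking_temperature(M):
    """T = hbar c^3 / (8 pi G M k_B), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def bekenstein_hawking_entropy(M):
    """S / k_B = A c^3 / (4 G hbar), with horizon area A = 16 pi (G M / c^2)^2."""
    A = 16 * math.pi * (G * M / c**2)**2
    return A * c**3 / (4 * G * hbar)

M_sun = 1.989e30  # kg
print(hawking_temperature(M_sun))         # ~6e-8 K
print(bekenstein_hawking_entropy(M_sun))  # ~1e77, an absurdly large entropy
```

Sixty billionths of a kelvin, swamped by the 2.7 K cosmic microwave background: not telescope material.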

I don’t know Wüthrich’s full argument, since the talks haven’t been posted online yet. All I know is Pigliucci’s summary. From that summary, it looks like Wüthrich’s primary worry is about two different definitions of entropy.

See, when I described entropy as “disorder”, I was being a bit vague. There are actually two different definitions of entropy. The older one, Gibbs entropy, grows with the number of states of a system. What does that have to do with disorder?

Think about two different substances: a gas, and a crystal. Both are made out of atoms, but the patterns involved are different. In the gas, atoms are free to move, while in the crystal they’re (comparatively) fixed in place.


Blurrily so in this case

There are many different ways the atoms of a gas can be arranged and still be a gas, but fewer in which they can be a crystal, so a gas has more entropy than a crystal. Intuitively, the gas is more disordered.

When Bekenstein calculated the entropy of a black hole he didn’t use Gibbs entropy, though. Instead, he used Shannon entropy, a concept from information theory. Shannon entropy measures the amount of information in a message, with a formula very similar to that of Gibbs entropy: the more different ways you can arrange something, the more information you can use it to send. Bekenstein used this formula to calculate the amount of information that gets hidden from us when something falls into a black hole.
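To see how similar the two formulas are, here’s a minimal sketch in Python (the function names and the toy “gas” and “crystal” are mine, purely for illustration):

```python
# The two entropy formulas, side by side: the same sum over probabilities,
# one measured in joules per kelvin, one measured in bits.
import math

def gibbs_entropy(p, k_B=1.380649e-23):
    """Gibbs entropy: -k_B * sum(p ln p), in J/K."""
    return -k_B * sum(q * math.log(q) for q in p if q > 0)

def shannon_entropy(p):
    """Shannon entropy: -sum(p log2 p), in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# A "gas" that could be in any of 1000 arrangements, a "crystal" in any of 10:
gas = [1 / 1000] * 1000
crystal = [1 / 10] * 10
print(shannon_entropy(gas), shannon_entropy(crystal))  # ~9.97 bits vs ~3.32 bits
```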

Wüthrich’s worry here (again, as far as Pigliucci describes) is that Shannon entropy is a very different concept from Gibbs entropy. Shannon entropy measures information, while Gibbs entropy is something “physical”. So by using one to predict the other, are predictions about black hole entropy just confused?

It may well be he has a deeper argument for this, one that wasn’t covered in the summary. But if this is accurate, Wüthrich is missing something fundamental. Shannon entropy and Gibbs entropy aren’t two different concepts. Rather, they’re both ways of describing a core idea: entropy is a measure of ignorance.

A gas has more entropy than a crystal because it can be arranged in a larger number of different ways. But let’s not talk about a gas. Let’s talk about a specific arrangement of atoms: one is flying up, one to the left, one to the right, and so on. Space them apart, but be very specific about how they are arranged. This arrangement could well be a gas, but now it’s a specific gas. And because we’re this specific, there are now many fewer states the gas can be in, so this (specific) gas has less entropy!

Now of course, this is a very silly way to describe a gas. In general, we don’t know what every single atom of a gas is doing; that’s why we call it a gas in the first place. But it’s that lack of knowledge that we call entropy. Entropy isn’t just something out there in the world, it’s a feature of our descriptions…but one that, nonetheless, has important physical consequences. The Second Law still holds: the world goes from lower entropy to higher entropy. And while that may seem strange, it’s actually quite logical: the things we describe in vaguer terms should become more common than the things we describe in specific terms; after all, there are many more of them!
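In terms of the little sketch above, being specific just means piling all the probability onto fewer states:

```python
# Entropy as ignorance: the same Shannon formula as before.
import math

def shannon_entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

vague_gas = [1 / 1000] * 1000  # "it's a gas": any of 1000 arrangements
specific_gas = [1.0]           # THIS gas: exactly one known arrangement
print(shannon_entropy(vague_gas))     # ~9.97 bits of ignorance
print(shannon_entropy(specific_gas))  # 0.0 bits: full knowledge, zero entropy
```

Same atoms either way; the entropy lives in the description.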

Entropy isn’t the only thing like this. In the past, I’ve bemoaned the difficulty of describing the concept of gauge symmetry. Gauge symmetry is in some ways just part of our descriptions: we prefer to describe fundamental forces in a particular way, and that description has redundant parameters. We have to make those redundant parameters “go away” somehow, and that leads to non-existent particles called “ghosts”. However, gauge symmetry also has physical consequences: it was how people first knew that there had to be a Higgs boson, long before it was discovered. And while it might seem weird to think that a redundancy could imply something as physical as the Higgs, the success of the concept of entropy should make this much less surprising. Much of what we do in physics is reasoning about different descriptions, different ways of dividing up the world, and then figuring out the consequences of those descriptions. Entropy is ignorance…and if our ignorance obeys laws, if it’s describable mathematically, then it’s as physical as anything else.

What’s so Spooky about Action at a Distance?

With Halloween coming up, it’s time once again to talk about the spooky side of physics. And what could be spookier than action at a distance?

Pictured here.

Ok, maybe not an obvious contender for spookiest concept of the year. But physicists have struggled with action at a distance for centuries, and there are deep reasons why.

It all dates back to Newton. In Newton’s time, all of nature was expected to be mechanical. One object pushes another, which pushes another in turn, eventually explaining everything that ever happens. And while people knew by that point that the planets were not circling around on literal crystal spheres, it was still hoped that their motion could be explained mechanically. The favored explanations of the time were vortices, whirlpools of celestial fluid that drove the planets around the Sun.

Newton changed all that. Not only did he set down a law of gravitation that didn’t use a fluid, he showed that no fluid could possibly replicate the planets’ motions. And while he remained agnostic about gravity’s cause, plenty of his contemporaries accused him of advocating “action at a distance”. People like Leibniz thought that a gravitational force without a mechanical cause would be superstitious nonsense, a betrayal of science’s understanding of the world in terms of matter.

For a while, Newton’s ideas won out. More and more, physicists became comfortable with explanations involving a force stretching out across empty space, using them for electricity and magnetism as these became more thoroughly understood.

In time, though, the tide began to shift back. Electricity and magnetism were explained, not in terms of action at a distance, but in terms of a field that filled the intervening space. Eventually, gravity was too.

The difference may sound purely semantic, but it means more than you might think. These fields were restricted in an important way: when the field changed, it changed at one point, and the changes spread at a speed limited by the speed of light. A theory composed of such fields has a property called locality, the property that all interactions are fundamentally local, that is, they happen at one specific place and time.

Nowadays, we think of locality as one of the most fundamental principles in physics, on par with symmetry in space and time. And the reason why is that true action at a distance is quite a spooky concept.

Much of horror boils down to fear of the unknown. From what might lurk in the dark to the depths of the ocean, we fear that which we cannot know. And true action at a distance would mean that our knowledge might forever be incomplete. As long as everything is mediated by some field that changes at the speed of light, we can limit our search for causes. We can know that any change must be caused by something only a limited distance away, something we can potentially observe and understand. By contrast, true action at a distance would mean that forces from potentially anywhere in the universe could alter events here on Earth. We might never know the ultimate causes of what we observe; they might be stuck forever out of reach.

Some of you might be wondering, what about quantum mechanics? The phrase “spooky action at a distance” is famous because Einstein used it as an accusation against quantum entanglement, after all.

The key thing about quantum mechanics is that, as J. S. Bell showed, you can’t have locality…unless you throw out another property, called realism. Realism is the idea that quantum states have definite values for measurements before those measurements are taken. And while that sounds important, most people find getting rid of it much less scary than getting rid of locality. In a non-realistic world, at least we can still predict probabilities, even if we can’t observe certainties. In a non-local world, there might be aspects of physics that we just can’t learn. And that’s spooky.
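For the concretely-minded, here’s a quick sketch of the arithmetic behind that trade-off, in its modern CHSH form (the measurement angles below are the standard textbook choice, not anything special to this post):

```python
# The CHSH form of Bell's argument: local realism caps |S| at 2, but the
# quantum singlet state, with correlation E(a, b) = -cos(a - b), reaches
# 2 * sqrt(2) at the standard measurement angles.
import math

def E(a, b):
    """Singlet-state correlation between spin measurements at angles a and b."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))  # 2.828..., comfortably above the local-realist bound of 2
```

Experiments agree with the quantum value, which is why one of the two assumptions has to go.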

Explanations of Phenomena Are All Alike; Every Unexplained Phenomenon Is Unexplained in Its Own Way

Vladimir Kazakov began his talk at ICTP-SAIFR this week with a variant of Tolstoy’s famous opening to the novel Anna Karenina: “Happy families are all alike; every unhappy family is unhappy in its own way.” Kazakov flipped the order, stating that “un-solvable models are each un-solvable in their own way, while solvable models are all alike.”

In talking about solvable and un-solvable models, Kazakov was referring to a concept called integrability, the idea that in certain quantum field theories it’s possible to avoid the messy approximations of perturbation theory and instead jump straight to the answer. Kazakov was observing that these integrable systems seem to have a deep kinship: the same basic methods appear to work to understand all of them.

I’d like to generalize Kazakov’s point, and talk about a broader trend in physics.

Much has been made over the years of the “unreasonable effectiveness of mathematics in the natural sciences”, most notably in physicist Eugene Wigner’s famous essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences. There’s a feeling among some people that mathematics is much better at explaining physical phenomena than one would expect, that the world appears to be “made of math” and that it didn’t have to be.

On the surface, this is a reasonable claim. Certain mathematical ideas, group theory for example, seem to pop up again and again in physics, sometimes in wildly different contexts. The history of fundamental physics has tended to see steady progress over the years, from clunkier mathematical concepts to more and more elegant ones.

Some physicists tend to be dismissive of this. Lee Smolin in particular seems to be under the impression that mathematics is just particularly good at providing useful approximations. This perspective links to his definition of mathematics as “the study of systems of evoked relationships inspired by observations of nature,” a definition to which Peter Woit vehemently objects. Woit argues what I think any mathematician would when presented with a statement like Smolin’s: that mathematics is much more than just a useful tool for approximating observations, and that, contrary to physicists’ vanity, most of mathematics goes on without any explicit interest in observing the natural world.

While it’s generally rude for physicists to propose definitions for mathematics, I’m going to do so anyway. I think the following definition is one mathematicians would be more comfortable with, though it may be overly broad: Mathematics is the study of simple rules with complex consequences.

We live in a complex world. The breadth of the periodic table, the vast diversity of life, the tangled webs of galaxies across the sky, these are things that display both vast variety and a sense of order. They are, in a rather direct way, the complex consequences of rules that are at heart very very simple.

Part of the wonder of modern mathematics is how interconnected it has become. Many sub-fields, once distinct, have discovered over the years that they are really studying different aspects of the same phenomena. That’s why when you see a proof of a three-hundred-year-old mathematical conjecture, it uses terms that seem to have nothing to do with the original problem. It’s why Woit, in an essay on this topic, quotes Edward Frenkel’s description of a particular recent program as a blueprint for a “Grand Unified Theory of Mathematics”. Increasingly, complex patterns are being shown to be not only consequences of simple rules, but consequences of the same simple rules.

Mathematics itself is “unreasonably effective”. That’s why, when faced with a complex world, we shouldn’t be surprised when the same simple rules pop up again and again to explain it. That’s what explaining something is: breaking down something complex into the simple rules that give rise to it. And as mathematics progresses, it becomes more and more clear that a few closely related types of simple rules lie behind any complex phenomena. While each unexplained fact about the universe may seem unexplained in its own way, as things are explained bit by bit they show just how alike they really are.

The Real Problem with Fine-Tuning

You’ve probably heard it said that the universe is fine-tuned.

The Standard Model, our current best understanding of the rules that govern particle physics, is full of fiddly adjustable parameters. The masses of fundamental particles and the strengths of the fundamental forces aren’t the sort of thing we can predict from first principles: we need to go out, do experiments, and find out what they are. And you’ve probably heard it argued that, if these fiddly parameters were even a little different from what they are, life as we know it could not exist.

That’s fine-tuning…or at least, that’s what many people mean when they talk about fine-tuning. It’s not exactly what physicists mean though. The thing is, almost nobody who studies particle physics thinks the parameters of the Standard Model are the full story. In fact, any theory with adjustable parameters probably isn’t the full story.

It all goes back to a point I made a while back: nature abhors a constant. The whole purpose of physics is to explain the natural world, and we have a long history of taking things that look arbitrary and linking them together, showing that reality has fewer parameters than we had thought. This is something physics is very good at. (To indulge in a little extremely amateurish philosophy, it seems to me that this is simply an inherent part of how we understand the world: if we encounter a parameter, we will eventually come up with an explanation for it.)

Moreover, at this point we have a rough idea of what this sort of explanation should look like. We have experience playing with theories that don’t have any adjustable parameters, or that only have a few: M theory is an example, but there are also more traditional quantum field theories that fill this role with no mention of string theory. From our exploration of these theories, we know that they can serve as the kind of explanation we need: in a world governed by one of these theories, people unaware of the full theory would observe what would look at first glance like a world with many fiddly adjustable parameters, parameters that would eventually turn out to be consequences of the broader theory.

So for a physicist, fine-tuning is not about those fiddly parameters themselves. Rather, it’s about the theory that predicts them. Because we have experience playing with these sorts of theories, we know roughly the sorts of worlds they create. What we know is that, while sometimes they give rise to worlds that appear fine-tuned, they tend to only do so in particular ways. Setups that give rise to fine-tuning have consequences: supersymmetry, for example, can give rise to an apparently fine-tuned universe but has to have “partner” particles that show up in powerful enough colliders. In general, a theory that gives rise to apparent fine-tuning will have some detectable consequences.

That’s where physicists start to get worried. So far, we haven’t seen any of these detectable consequences, and it’s getting to the point where we could have, had they been the sort many people expected.

Physicists are worried about fine-tuning, but not because it makes the universe “unlikely”. They’re worried because the more finely-tuned our universe appears, the harder it is to find an explanation for it in terms of the sorts of theories we’re used to working with, and the less likely it becomes that someone will discover a good explanation any time soon. We’re quite confident that there should be some explanation, hundreds of years of scientific progress strongly suggest that to be the case. But the nature of that explanation is becoming increasingly opaque.

Merry Newtonmas!

Yesterday, people around the globe celebrated the birth of someone whose new perspective and radical ideas changed history, perhaps more than any other.

I’m referring, of course, to Isaac Newton.

Ho ho ho!

Born on December 25, 1642, Newton is justly famed as one of history’s greatest scientists. By relating gravity on Earth to the force that holds the planets in orbit, Newton arguably created physics as we know it.

However, as with many prominent scientists, Newton’s greatness was not so much in what he discovered as in how he discovered it. Others had already had similar ideas about gravity. Robert Hooke in particular had written to Newton mentioning a law much like the one Newton eventually wrote down, leading Hooke to accuse Newton of plagiarism.

Newton’s great accomplishment was not merely proposing his law of gravitation, but justifying it, in a way that no-one had ever done before. When others (Hooke for example) had proposed similar laws, they were looking for a law that perfectly described the motion of the planets. Kepler had already proposed ellipse-shaped orbits, but it was clear by Newton and Hooke’s time that such orbits did not fully describe the motion of the planets. Hooke and others hoped that if some sufficiently skilled mathematician started with the correct laws, they could predict the planets’ motions with complete accuracy.

The genius of Newton was in attacking this problem from a different direction. In particular, Newton showed that his law of gravitation does result in those (not-quite-correct) ellipses…provided that there is only one planet.

With multiple planets, things become much more complicated. Even just two planets orbiting a single star is so difficult a problem that it’s impossible to write down an exact solution.

Sensibly, Newton didn’t try to write down an exact solution. Instead, he figured out an approximation: since the Sun is much bigger than the planets, he could simplify the problem and arrive at a partial solution. While he couldn’t perfectly predict the motions of the planets, he knew more than that they were just “approximately” ellipses: he had a prediction for how different from ellipses they should be.

That step was Newton’s great contribution. That insight, that science was able not just to provide exact answers to simpler problems but to guess how far those answers might be off, was something no-one else had really thought about before. It led to error analysis in experiments, and perturbation methods in theory. More generally, it led to the idea that scientists have to be responsible, not just for getting things “almost right”, but for explaining how their results are still wrong.
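For a cartoonishly simple version of that insight (my toy example, nothing to do with Newton’s actual calculation): approximate a hard expression by an easy one, and predict how far off you are.

```python
# A toy perturbation theory: approximate sqrt(1 + eps) by 1 + eps/2, and
# predict the error from the next term in the expansion, eps**2 / 8.
import math

def first_order(eps):
    return 1 + eps / 2  # the "easy problem" answer plus the first correction

for eps in [0.1, 0.01, 0.001]:
    actual = abs(math.sqrt(1 + eps) - first_order(eps))
    predicted = eps**2 / 8  # size of the next correction
    print(f"eps = {eps}: error {actual:.2e}, predicted about {predicted:.2e}")
```

The approximation itself isn’t the point; the point is the last column: a prediction not just of the answer, but of how wrong the answer should be.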

So this holiday season, let’s give thanks to the man whose ideas created science as we know it. Merry Newtonmas everyone!

What Can Replace Space-Time?

Nima Arkani-Hamed is famous for believing that space-time is doomed, that as physicists we will have to abandon the concepts of space and time if we want to find the ultimate theory of the universe. He’s joked that this is what motivates him to get up in the morning. He tends to bring it up often in talks, both for physicists and for the general public.

The latter especially tend to be baffled by this idea. I’ve heard a lot of questions like “if space-time is doomed, what could replace it?”

In the past, Nima and I both tended to answer this question with a shrug. (Though a more elaborate shrug in his case.) This is the honest answer: we don’t know what replaces space-time, we’re still looking for a good solution. Nima’s Amplituhedron may eventually provide an answer, but it’s still not clear what that answer will look like. I’ve recently realized, though, that this way of responding to the question misses its real thrust.

When people ask me “what could replace space-time?” they’re not asking “what will replace space-time?” Rather, they’re asking “what could possibly replace space-time?” It’s not that they want to know the answer before we’ve found it, it’s that they don’t understand how any reasonable answer could possibly exist.

I don’t think this concern has been addressed much by physicists, and it’s a pity, because it’s not very hard to answer. You don’t even need advanced physics. All you need is some fairly old philosophy. Specifically we’ll use concepts from metaphysics, the branch of philosophy that deals with categories of being.

Think about your day yesterday. Maybe you had breakfast at home, drove to work, had a meeting, then went home and watched TV.

Each of those steps can be thought of as an event. Each event is something that happened that we want to pay attention to. You having breakfast was an event, as was you arriving at work.

These events are connected by relations. Here, each relation specifies the connection between two events. There might be a relation of cause-and-effect, for example, between you arriving at work late and meeting with your boss later in the day.

Space and time, then, can be seen as additional types of relations. Your breakfast is related to you arriving at work: it is before it in time, and some distance from it in space. Before and after, distant in one direction or another, these are all relations between the two events.

Using these relations, we can infer other relations between the events. For example, if we know the distance relating your breakfast and arriving at work, we can make a decent guess at another relation, the difference in amount of gas in your car.

This way of viewing the world, events connected by relations, is already quite common in physics. With Einstein’s theory of relativity, it’s hard to say exactly when or where an event happened, but the overall relationship between two events (distance in space and time taken together) can be thought of much more precisely. As I’ve mentioned before, the curved space-time necessary for Einstein’s theory of gravity can be thought of equally well as a change in the way you measure distances between two points.
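For the concretely-minded, here’s a minimal numerical illustration (one space dimension, units where the speed of light is 1): the time separation and the space separation each change between frames, but the combination Δt² − Δx² does not.

```python
# The invariant interval: under a Lorentz boost (c = 1), Delta t and
# Delta x each change, but t**2 - x**2 comes out the same in every frame.
import math

def boost(t, x, v):
    """The same separation between two events, seen from a frame moving at speed v."""
    gamma = 1 / math.sqrt(1 - v**2)
    return gamma * (t - v * x), gamma * (x - v * t)

t, x = 3.0, 2.0            # separation between two events in one frame
t2, x2 = boost(t, x, 0.6)  # the same pair of events, from a moving frame
print(t**2 - x**2)         # 5.0
print(t2**2 - x2**2)       # also 5.0 (up to rounding): same relation, new coordinates
```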

So if space and time are relations between events, what would it mean for space-time to be doomed?

The key thing to realize here is that space and time are very specific relations between events, with very specific properties. Some of those properties are what cause problems for quantum gravity, problems which prompt people to suggest that space-time is doomed.

One of those properties is the fact that, when you multiply two distances together, it doesn’t matter which order you do it in. This probably sounds obvious, because you’re used to multiplying normal numbers, for which this is always true anyway. But even slightly more complicated mathematical objects, like matrices, don’t always obey this rule. If distances were this sort of mathematical object, then multiplying them in different orders could give slightly different results. If the difference were small enough, we wouldn’t be able to tell that it was happening in everyday life: distance would have given way to some more complicated concept, but it would still act like distance for us.
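If you’ve never seen this, it’s easy to check with a couple of 2×2 matrices (these particular ones are just made up for the demonstration):

```python
# Ordinary numbers commute: 3 * 5 == 5 * 3. Matrices, in general, don't.
import numpy as np

A = np.array([[1, 2],
              [0, 1]])
B = np.array([[1, 0],
              [3, 1]])

print(A @ B)  # [[7 2]
              #  [3 1]]
print(B @ A)  # [[1 2]
              #  [3 7]]  -- a different answer: the order matters
```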

That specific idea isn’t generally suggested as a solution to the problems of space and time, but it’s a useful toy model that physicists have used to solve other problems.

It’s the general principle I want to get across: if you want to replace space and time, you need a relation between events. That relation should behave like space and time on the scales we’re used to, but it can be different on very small scales (Big Bang, inside of Black Holes) and on very large scales (long-term fate of the universe).

Space-time is doomed, and we don’t know yet what’s going to replace it. But whatever it is, whatever form it takes, we do know one thing: it’s going to be a relation between events.

Physical Truths, Lost to the Ages

For all you tumblr-ers out there (tumblr-ists? tumblr-dwellers?), 4 gravitons is now on tumblr. It’s mostly going to be links to my blog posts, with the occasional re-blog of someone else’s work if something catches my eye.

Nima Arkani-Hamed gave a public lecture at Perimeter yesterday, which I encourage you to watch if you have time, once it’s up on the Perimeter site. He also gave a technical talk earlier in the day, where he finished up by making the following (intentionally) provocative statement:

There is no direct evidence of what happened during the Big Bang that could have survived till today.

He clarified that he doesn’t just mean “evidence we can currently detect”. Rather, there’s a limit on what we can know, even with the most precise equipment possible. The details of what happened at the Big Bang (the sorts of precise details that would tell you, for example, whether it is best described by String Theory or some other picture) would get diluted as the universe expands, until today they would be so subtle and so rare that they fall below the level we could even in principle detect. We simply don’t have enough information available, no matter how good our technology gets, to detect them in a statistically significant way.

If this talk had happened last week, I could have used this in my spooky Halloween post. This is exactly the sort of thing that keeps physicists up at night: the idea that, fundamentally, there may be things we can never truly know about the universe, truths lost to the ages.

It’s not quite as dire as it sounds, though. To explain why, let me mention another great physics piece, Tom Stoppard’s Arcadia.

Despite appearances, this is in fact a great work of physics popularization.

Arcadia is a play about entropy. The play depicts two time periods, the early 19th century and the present day. In the present day a pair of scholars, Hannah and Bernard, argue about the events of the 19th century, when the house was occupied by a mathematically precocious girl named Thomasina and her tutor Septimus. Thomasina makes early discoveries about fractals and (to some extent) chaos theory, while Septimus gradually falls in love with her. In the present, the two scholars gradually get closer to the truth, going from a false theory that one of the guests at the house was killed by Lord Byron, to speculation that Septimus was the one to discover fractals, to finally getting a reasonably accurate idea of how the events of the story unfolded. Still, they never know everything, and the play emphasizes that certain details (documents burned in a fire, the true feelings of some of the people) will be forever lost to the ages.

The key point here is that, even with incomplete information, even without the ability to fully test their hypotheses and get all the details, the scholars can still make progress. They can propose accounts of what happened, accounts that have implications they can test, that might be proven wrong or right by future discoveries. Their accounts will also have implications they can’t test: lost letters, feelings never written down. But the better their account, the more it will explain, and the longer it will agree with anything new they manage to turn up.

That’s the way out of the problem Nima posed. We can’t know the truth of what happened at the Big Bang directly. But if we have a theory of physics that describes everything we can test, it’s likely to also make a prediction for what happened in the Big Bang. In science, most of the time you don’t have direct evidence. Rather, you have a successful theory, one that has succeeded under scrutiny many times in many contexts, enough that you trust it even when it goes out of the area you’re comfortable testing. That’s why physicists can make statements about what it’s like on the inside of a black hole, and it’s why it’s still good science to think about the Big Bang even if we can’t gather direct evidence about the details of how it took place.

All that said, Nima is well aware of this, and the problem still makes him uncomfortable. It makes me uncomfortable too. Saying that something is completely outside of our ability to measure, especially something as fundamental and important as the Big Bang, is not something we physicists can generally be content with. Time will tell whether there’s a way around the problem.

A Nobel for Blue LEDs, or, How Does That Count as Physics?

When I first heard about this year’s Nobel Prize in Physics, I didn’t feel the need to post on it. The prize went to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, whose discoveries enabled blue LEDs. It’s a more impressive accomplishment than it might seem: while red LEDs have been around since the 60’s and 70’s, blue LEDs were only developed in the 90’s, and only with both can highly efficient, LED-based white light sources be made. Still, I didn’t consider posting on it because it’s pretty much entirely outside my field.


Shiny, though

It took a conversation with another Perimeter Institute postdoc to show me one way I can comment on the Nobel, and it started when we tried to figure out what type of physicists Akasaki, Amano, and Nakamura are. After tossing around terms like “device physicist” and “condensed matter”, someone wondered whether the development of blue LEDs wasn’t really a matter of engineering.

At that point I realized, I’ve talked about something like this before.

Physicists work on lots of different things, and many of them don’t seem to have much to do with physics. They study geometry and topology, biological molecules and the nature of evolution, income inequality and, yes, engineering.

On the surface, these don’t have much to do with physics. A friend of mine used to quip that condensed matter physicists seem to just “pick whatever they want to research”.

There is something that ties all of these topics together, though. They’re all things that physicists are good at.

Physics grad school gives you a wide variety of tools with which to understand the world. Thermodynamics gives you a way to understand large, complicated systems with statistics, while quantum field theory lets you understand everything with quantum properties, not just fundamental particles but materials as well. This batch of tools can be applied to “traditional” topics, but they’re equally applicable if you’re researching something else entirely, as long as it obeys the right kinds of rules.

In the end, the best definition of physics is the most useful one. Physicists should be people who can benefit from being part of physics organizations, from reading physics journals, and especially from training (and having been) physics grad students. The whole reason we have scientific disciplines in the first place is to make it easier for people with common interests to work together. That’s why Akasaki, Amano, and Nakamura aren’t “just” engineers, and why I and my fellow string theorists aren’t “just” mathematicians. We use our knowledge of physics to do our jobs, and that, more than anything else, makes us physicists.


Edit: It has been pointed out to me that there’s a bit more to this story than the main accounts have let on. Apparently another researcher named Herbert Paul Maruska was quite close to getting a blue LED up and running back in the early 1970’s, getting far enough to have a working prototype. There’s a whole fascinating story about the quest for a blue LED, related here. Maruska seems to be on friendly terms with Akasaki, Amano, and Nakamura, and doesn’t begrudge them their recognition.

Does Science have Fads?

97% of climate scientists agree that global warming exists, and is most probably human-caused. On a more controversial note, string theorists vastly outnumber adherents of other approaches to quantum gravity, such as Loop Quantum Gravity.

As many who disagree with climate change or string theory would argue, the majority is not always right. Science should be concerned with truth, not merely with popularity. After all, what if scientists are merely taking part in a fad? What makes climate change any more objectively true than pet rocks?

Apparently this is Wikipedia’s best example of a fad.

People are susceptible to fads, after all. A style of music becomes popular, and everyone’s listening to the same sounds. A style of clothing, and everyone’s wearing the same thing. So if an idea in science became popular, everyone might…write the same papers?

That right there is the problem. Scientists only succeed by creating meaningfully original work. If we don’t discover something new, we can’t publish, and as the old saying goes, it’s publish or perish out there. Even if social pressure gets us working on something, if we’re going to get any actual work done there has to be enough there, at least, for us to do something different, something no-one has done before.

This doesn’t mean scientists can’t be influenced by popularity, but it means that that influence is limited by the requirements of doing meaningful, original work. In the case of climate change, climate scientists investigate the topic with so many different approaches and look at so many different areas of impact (for example, did you know rising CO2 levels make the ocean more acidic?) that the whole field simply wouldn’t function if climate change wasn’t real: there’d be a contradiction, and most of the myriad projects involving it simply wouldn’t work. As I’ve talked about before, science is an interlocking system, and it’s hard to doubt one part without being forced to doubt everything else.

What about string theory? Here, the situation is a little different. There aren’t experiments testing string theory, so whether or not string theory describes the real world won’t have much effect on whether people can write string theory papers.

The existence of so many string theory papers does say something, though. The up-side of not involving experiments is that you can’t go and test something slightly different and write a paper about it. In order to be original, you really need to calculate something that nobody expected you to calculate, or notice a trend nobody expected to exist. The fact that there are so many more string theorists than loop quantum gravity theorists is in part because there are so many more interesting string theory projects than interesting loop quantum gravity projects.

In string theory, projects tend to be interesting because they unveil some new aspect of quantum field theory, the class of theories that explain the behavior of subatomic particles. Given how hard quantum field theory is, any insight is valuable, and in my experience these sorts of insights are what most string theorists are after. So while string theory’s popularity says little about whether it describes the real world, it says a lot about its ability to say interesting things about quantum field theory. And since quantum field theories do describe the real world, string theory’s continued popularity is also evidence that it continues to be useful.

Climate change and string theory aren’t fads, not exactly. They’re popular, not simply because they’re popular, but because they make important and valuable contributions to science. And as long as science continues to reward original work, that’s not about to change.

What does Copernicus have to say about String Theory?

Putting aside some highly controversial exceptions, string theory has made no testable predictions. Conceivably, a world governed by string theory and a world governed by conventional particle physics would be indistinguishable to every test we could perform today. Furthermore, it’s not even possible to say that string theory predicts the same things with fewer fudge-factors, as string theory descriptions of our world seem to have dramatically more free parameters than conventional ones.

Critics of string theory point to this as a reason why string theory should be excluded from science, sent off to the chilly arctic wasteland of the math department. (No offense to mathematicians, I’m sure your department is actually quite warm and toasty.) What these critics are missing is an important feature of the scientific process: before scientists are able to make predictions, they propose explanations.

To explain what I mean by that, let’s go back to the beginning of the 16th century.

At the time, the authority on astronomy was still Ptolemy’s Syntaxis Mathematica, a book so renowned that it is better known by the Arabic-derived superlative Almagest, “the greatest”. Ptolemy modeled the motions of the planets and stars as a series of interlocking crystal spheres with the Earth at the center, and did so well enough that until that time only minor improvements on the model had been made.

This is much trickier than it sounds, because even in Ptolemy’s day astronomers could tell that the planets did not move in simple circles around the Earth. There were major distortions from circular motion, the most dramatic being the phenomenon of retrograde motion.

If the planets really were moving in simple circles around the Earth, you would expect them to keep moving in the same direction. However, ancient astronomers saw that sometimes, some of the planets moved backwards. The planet would slow down, turn around, go backwards a bit, then come to a stop and turn again.

Thus sparking the invention of the spirograph.

In order to take this into account, Ptolemy introduced epicycles, extra circles of motion for the planets. The epicycle would move on the planet’s primary circle, or deferent, and the planet would rotate around the epicycle, like so:

French Wikipedia had a better picture.
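In equations, the epicycle construction is just two circles added together. Here’s a quick sketch with made-up numbers (not Ptolemy’s actual parameters), showing the apparent reversal it produces:

```python
# Ptolemy-style motion: the planet rides a small circle (the epicycle)
# whose center rides a big circle (the deferent) around the Earth.
import math

def ptolemaic_angle(t, R=1.0, omega=1.0, r=0.3, Omega=7.0):
    """The planet's angular position as seen from Earth at the center."""
    x = R * math.cos(omega * t) + r * math.cos(Omega * t)
    y = R * math.sin(omega * t) + r * math.sin(Omega * t)
    return math.atan2(y, x)

# Print whenever the apparent direction of motion flips:
prev_angle, prev_dir = ptolemaic_angle(0.0), None
for step in range(1, 30):
    t = 0.05 * step
    angle = ptolemaic_angle(t)
    direction = "backward" if angle < prev_angle else "forward"
    if direction != prev_dir:
        print(f"t = {t:.2f}: the planet appears to move {direction}")
    prev_angle, prev_dir = angle, direction
```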

These epicycles weren’t just for retrograde motion, though. They allowed Ptolemy to model all sorts of irregularities in the planets’ motions. Any deviation from a circle could conceivably be plotted out by adding another epicycle (though Ptolemy had other methods to model this sort of thing, among them something called an equant). Enter Copernicus.

Enter Copernicus’s hair.

Copernicus didn’t like Ptolemy’s model. He didn’t like equants, and what’s more, he didn’t like the idea that the Earth was the center of the universe. Like Plato, he preferred the idea that the center of the universe was a divine fire, a source of heat and light like the Sun. He decided to put together a model of the planets with the Sun in the center. And what he found, when he did, was an explanation for retrograde motion.

In Copernicus’s model, the planets always go in one direction around the Sun, never turning back. However, some of the planets are faster than the Earth, and some are slower. If a planet is slower than the Earth, then when the Earth passes it, the planet will look like it is going backwards, due to the Earth’s speed. This is tricky to visualize, but hopefully the picture below will help: as you can see in the picture, Mars starts out ahead of Earth in its orbit, then falls behind, making it appear to move backwards.
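And if the picture isn’t enough, here’s a rough numerical sketch of the same thing, using circles instead of the true elliptical orbits, with Earth at 1 AU and a 1-year period and Mars at roughly 1.5 AU with a roughly 1.88-year period:

```python
# Retrograde motion with no epicycles: two circular orbits around the Sun.
# Earth and "Mars" start lined up (at opposition), so the sketch begins in
# the middle of a retrograde episode.
import math

def apparent_longitude(t):
    """Direction from Earth to Mars against the background stars, t in years."""
    ex, ey = math.cos(2 * math.pi * t), math.sin(2 * math.pi * t)
    mx = 1.5 * math.cos(2 * math.pi * t / 1.88)
    my = 1.5 * math.sin(2 * math.pi * t / 1.88)
    return math.atan2(my - ey, mx - ex)

prev_lon, prev_dir = apparent_longitude(0.0), None
for step in range(1, 40):
    t = 0.02 * step
    lon = apparent_longitude(t)
    direction = "backward" if lon < prev_lon else "forward"
    if direction != prev_dir:
        print(f"t = {t:.2f} yr: Mars appears to move {direction}")
    prev_lon, prev_dir = lon, direction
```

No epicycle is needed for the reversal itself: it falls out of the Earth overtaking Mars.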

Despite this simplification, Copernicus still needed epicycles. The planets’ motions simply aren’t perfect circles, even around the Sun. After getting rid of the equants from Ptolemy’s theory, Copernicus’s model ended up having just as many epicycles as Ptolemy’s!

Copernicus’s model wasn’t any better at making predictions (in fact, due to some technical lapses in its presentation, it was even a little bit worse). It didn’t have fewer “fudge factors”, as it had about the same number of epicycles. If you lived in the 16th century, you would have been completely justified in believing that the Earth was the center of the universe, and not the Sun. Copernicus had failed to establish his model as scientific truth.

However, Copernicus had still done something Ptolemy didn’t: he had explained retrograde motion. Retrograde motion was a unique, qualitative phenomenon, and while Ptolemy could include it in his math, only Copernicus gave you a reason why it happened.

That’s not enough to become the reigning scientific truth, but it’s a damn good reason to pay attention. It was justification for astronomers to dedicate years of their lives to improving the model, to working with it and trying to get unique predictions out of it. It was enough that, over half a century later, Kepler could take it and turn it into a theory that did make predictions better than Ptolemy, that did have fewer fudge-factors.

String theory as a model of the universe doesn’t make novel predictions, and it doesn’t have fewer fudge factors. What it does do is explain: it explains the spectra of particles in terms of shapes of space and time, the existence of gravity and light in terms of closed and open strings, and the temperature of black holes in terms of what’s going on inside them (this last really ought to be the subject of its own post; it’s one of the big triumphs of string theory). You don’t need to accept it as scientific truth. Like Copernicus’s model in his day, we don’t have the evidence for that yet. But you should understand that, as a powerful explanation, the idea of string theory as a model of the universe is worth spending time on.

Of course, string theory is useful for many things that aren’t modeling the universe. But that’s the subject of another post.