Anthropic Reasoning, Multiverses, and Eternal Inflation (Part Two of Two)

So suppose you want to argue that, contrary to appearances, the universe isn’t impossible, and you want to use anthropic reasoning to do it. Suppose further that you read my post last week, so you know what anthropic reasoning is. In case you haven’t, anthropic reasoning means recognizing that, while it may be unlikely that the location/planet/solar system/universe you’re in is a nice place for you to live, as long as there is at least one nice place to live you will almost certainly find yourself living there. Applying this to the universe as a whole requires there to be many additional universes, making up a multiverse, at least one of which is a nice place for human life.

Is there actually a multiverse, though? How would that even work?

One of the more plausible proposals for a multiverse is the concept of eternal inflation.

Eternal inflation is an idea with many variants (such as chaotic inflation), and rather than give the details of any particular variant, I want to describe the setup in as broad strokes as possible.

The first thing to be aware of is that the universe is expanding, and has been since the Big Bang. Counter-intuitively, this doesn’t mean that the universe was once small, and is now bigger: in all likelihood, the universe was always infinite in size. Instead, it means that things began packed in close together, and have since moved further apart. While various forces (gravity, electromagnetism) hold things together on short scales, the wide open spaces between galaxies are constantly widening, spreading out the map of the universe.

You would expect this process to slow down over time. While it might have started with a burst of energy (the aforementioned Big Bang), as the universe gets more and more spread out it should be running out of steam. The thing is, it’s not. The evidence (complicated enough that I’m not going to go into it now) shows that the expansion actually sped up dramatically shortly after the Big Bang, and seems to be speeding up again now. That early, dramatic speeding up is called inflation.

So what could make the universe speed up? You might have heard of Einstein’s cosmological constant, a constant added to Einstein’s equations of general relativity that, while originally intended to make the universe stay in a steady state forever, can also be chosen so as to speed up the universe’s expansion. While that works mathematically, it’s not really an explanation, especially since a constant can’t account for an expansion rate that changes with time.

Enter scalar fields. A scalar field is what happens when you let what looks like a constant of nature vary as a quantum field. Scalar fields can vary over space, and they can change over time, making them ideal candidates for explaining inflation. And as a quantum field, the scalar field behind inflation (often called the inflaton) should randomly fluctuate, giving rise to the occasional particle just like the Higgs (another scalar field) does.

Well, not just like the Higgs. See, the Higgs controls mass, and if the mass of some particles increases a bit in a tiny area, it’s weird, but it’s not going to spread. On the other hand, if space in some place is inflating faster than space in another place…

Suppose you have two empty blocks in the middle of intergalactic space, each a cube one foot on a side, with one inflating faster than the other. Twice as fast, let’s say. Because expansion compounds like interest, inflating twice as fast means the growth factor gets squared: when one cube grows to two feet on a side, the other grows to four feet on a side. Then when the first cube is four feet on a side, the other will be sixteen. When the first has eight foot sides, the other’s will be sixty-four. And so forth. Even a small difference in expansion rates quickly leads to one region dominating the other. And if inflation stops slightly later in one region than in another, that can be a pretty dramatic difference too.
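If you want to see how quickly that runs away, here’s a minimal sketch in Python (toy numbers chosen to match the cubes above, nothing physical about them):

```python
# Two regions expanding exponentially, one at twice the rate of the other.
# Doubling the rate squares the growth factor: 2**t versus 4**t.
import math

H = math.log(2)  # expansion rate of the slow region (arbitrary units)

for t in [1, 2, 3, 10]:
    slow = math.exp(H * t)      # side of the slow cube, in feet
    fast = math.exp(2 * H * t)  # side of the fast cube, in feet
    print(f"t={t}: slow cube {slow:.0f} ft, fast cube {fast:.0f} ft,"
          f" ratio {fast / slow:.0f}")
```

By t=10 the fast cube’s sides are already a thousand times longer, which makes it a billion times the volume.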

The end result is that if inflation were driven by this sort of scalar field, the universe would just keep expanding forever, faster and faster. Only small pockets would slow down enough that anything could actually stick together. So while most of the universe would just tear itself apart forever, some of it, the parts that tear themselves apart slowly, can contain atoms and stars and, well, life. A universe like that is one that is experiencing eternal inflation. It’s eternal because it doesn’t have a beginning or end: what looks to us like the Big Bang, the beginning of our universe, is really just the point at which our part of the universe started expanding slowly enough that anything we recognize as matter could exist.

There’s no reason for us to be the only bubble that slowed down, though, and that’s where the multiverse aspect comes in. In eternal inflation there are lots and lots of slow regions, each one like a mini-universe in its own right. What’s more, each region can have totally different constants of nature.

To understand how that works, remember that each region has a different rate of inflation, and thus a different value for the inflaton scalar field. It turns out that many types of scalar fields like to interact with each other. If you recall my post on scalar fields (already linked, not gonna link it again), you’ll remember that for everything that looks like a constant of nature, chances are there’s a scalar field that controls it. So different values for the inflaton mean different values for all of those scalar fields too, which means different physical constants. With so many (possibly infinitely many) regions with different physical constants, there’s bound to be one where we could live.

Now, before you get excited here, there are a few caveats. Well, a lot of caveats.

First, it’s all well and good if the multiverse can produce life, but what if it produces dramatically different life? What sort of life is eternal inflation most likely to produce, and what are the chances it would look at all like us? For that matter, how do you figure out the chances of anything in an infinite, eternally expanding universe? This last is a very difficult problem, and work on it is ongoing.

Beyond that, we don’t even know enough about inflation to know whether eternal inflation would happen or not. We’ve got a pretty good idea that inflation involves scalar fields, but how many and in what combination? We don’t know yet, and the evidence is still coming in. We’re right on the cutting edge of things now, and until we know more it’s tough to say for certain whether any of this is viable. Still, it’s fun to think about.

Anthropic Reasoning, Multiverses, and Eternal Inflation (Part One of Two)

You and I are very very lucky. Human life is very delicate, and the conditions under which it can thrive are not in the majority. Going by random chance, neither of us should exist.

I am referring, of course, to the fact that the Earth’s surface is about 70 percent ocean. Just think how lucky you are not to have been born there: you would have drowned! Let alone if you were born beneath the Earth’s crust!

If you understand why the above is ridiculous, congratulations: you’ve just discovered anthropic reasoning.

There are some situations we find ourselves in because they are common. Most (all) of the Earth is in orbit around the Sun, so if you find yourself in orbit around the Sun you should hardly be surprised. Other situations keep happening to us not because they are common in the universe in general, but because they occur in the part of the universe in which we can exist. Recognizing those situations is anthropic reasoning.

It’s not weird that you were born on land, even though land is rarer than water, because land, and not water, is where people live. As long as there was any land on the Earth at all, you would expect people to be born on it (or on ships, I suppose) rather than on the ocean.

The same sort of reasoning explains why we evolved on Earth to begin with. There are eight planets in the solar system (yes, Pluto is not a planet, get over it), and only one of them is in the right place for life like us. We aren’t “lucky” that we ended up on Earth rather than another planet, nor is it something “unlikely” that needs to be explained: we’re on Earth because the universe is big enough that there happens to be a planet that has the right conditions for life, and Earth is that planet.

What anthropic reasoning has a harder time explaining (but what some people are working very hard to make it explain) is the question of why our whole universe is the way it is. Our universe is a pretty good place for life to evolve. Granted, that’s probably just a side effect of it being a good place for stars to evolve, but let’s put that aside for a second. Suppose the universe really is a particularly nice place for life, even improbably nice. Can anthropic reasoning explain that?

Probably. But it takes some work.

See, the difficulty is that in order for anthropic reasoning to work, you need to be certain that some place hospitable to life actually is likely to exist. Earthlike planets may be rare, but there are enough planets in the universe that some of them are bound to be like Earth. If universes like ours are rare, though, then how can there be enough universes to guarantee one like ours? How can there be more than one universe at all?

That’s why you need a multiverse.

A multiverse, in simple terms, is a collection of universes. If you object that a universe is, by definition, all that exists, and thus there can’t possibly be more than one, then you can use an alternate definition: a multiverse is a vast universe in which there are many smaller universe-like regions. These sub-universes don’t have much (or any) contact with each other, and (in order for anthropic reasoning to work) must have different properties.

Does a multiverse exist, though? How would one work?

There are several possibilities, of varying degrees of plausibility. Some people have argued that quantum mechanics leads to many parallel universes, while others posit that each universe could be like a membrane in some higher dimensional space. The multiple universes could be separated in ordinary space, or even in time.

In the next post, I will discuss one of the more plausible (if still controversial) possibilities, called eternal inflation, in which new universes are continually birthed in a vast sea of exponentially expanding space. If you have no idea what the heck I meant by that, great! Tune in next time to find out!

In Defense of Pure Theory

I’d like to preface this by saying that this post will be a bit more controversial than usual. I have somewhat unconventional opinions about the nature and purpose of science, and what I say below shouldn’t be taken as representative of the field in general.

A bit more than a week ago, Not Even Wrong had a post on the Fundamental Physics Prize. Peter Woit is often…I’m going to say annoying…and this post was no exception.

The Fundamental Physics Prize, for those not in the know, is a fairly recently established prize for physicists, mostly theoretical physicists.  Clocking in at three million dollars, the prize is larger than the Nobel, and is currently the largest prize of its sort. Woit has several objections to the current choice of award recipient (Alexander Polyakov). I sympathize with some of these objections, in particular the snarky observation that a large number of the awardees are from Princeton’s Institute for Advanced Study. But there is one objection in particular that I feel the need to rebut, if only due to its wording: the gripe that “Viewers of the part I saw would have no idea that string theory is not tested, settled science.”

There are two problems with this statement. The first is something that Woit is likely aware of, but it probably isn’t obvious to everyone reading this. To be clear, the fact that a certain theory is not experimentally tested is not a barrier to its consideration for the Fundamental Physics Prize. Far from it: the prize exists precisely to honor powerful insights in theoretical physics that have not yet been experimentally verified. It was created, in part, to remedy what was perceived as unfairness in the awarding of the Nobel Prize, as the Nobel is only awarded to theorists after their theories have received experimental confirmation. Since the whole purpose of this prize is to honor theories that have not been experimentally tested, griping that the prizes are being awarded to untested theories is a bit like griping that Oscars aren’t awarded to scientists, or objecting that viewers of the Oscars would have no idea that the winners haven’t done anything especially amazing for humanity. If you’re watching the ceremony, you probably know what it’s for.

Has this been experimentally verified?

The other problem is a difference of philosophy. When Woit says that string theory is not “tested, settled science” he is implying that in order to be “settled science”, a theory must be tested, and while I can’t be sure of his intent I’m guessing he means tested experimentally. It is this latter implication I want to address: whether or not Woit is making it here, it serves to underscore an important point about the structure of physics as an institution.

Past readers will be aware that a theory can be valuable, even if it doesn’t correspond to the real world, because of what it can teach us about theories that do. And while that is an important point, the point I’d like to make here is a bit more controversial. I would like to argue that pure theory, theory unconnected with experiment, can be important and valuable and “settled science” in and of itself.

First off, let’s talk about how such a theory can be science, and in particular how it can be physics. Plenty of people do work that doesn’t correspond to the experimentally accessible real world. Mathematicians are the clearest example, but the point also arguably applies to fields like literary analysis. Physics is supposed to be special, though: as part of science, we expect it to concern itself with the real world, and otherwise one could argue that it is simply mathematics. However, as I have argued before, the difference between mathematics and physics is not one of subject matter, but of methods. This makes sense, provided you think of physics not as some sort of fixed school of thought, but as an institution. Physicists train new physicists, and as such physicists learn methods common to other physicists. That which physicists like to do, then, is physics, which means that physics is defined much more by the methods used to do it than by its object of study.

How can such a theory be settled, then? After all, if reality is out, what possible criteria could there be for deciding what is or is not a “good” theory?

The thing about physics as an institution is that physics is done by physicists, and physicists have careers. Over the course of those careers, physicists need to publish papers, which need to catch the attention and approval of other physicists. They also need to have projects for grad students to do, so as to produce more physicists. Because of this, a “good” theory cannot be something one person works on alone. It has to be a theory with many implications, a theory that can be worked on and understood consistently by different people. It also needs to constrain further progress, to make sure that not just anyone can create novel results: this is what allows papers to catch the attention of other physicists! If you have all that, you have all of the relevant advantages of reality.

String theory has not been experimentally tested, but it meets all of these criteria. String theory has been a major force in theoretical physics for the past thirty years because it can fuel careers and lead to discussion in a way that nothing else on the table can. It has been tested mathematically in numerous ways, ways which demonstrate its robustness as a theory of quantum gravity. In this sense, string theory is a prime example of tested, settled science.

Ansatz: Progress by Guesswork

I’ve talked before about how hard traditional Quantum Field Theory is. Building things up step by step is slow and inefficient. And as with any slow and inefficient process, there is a quicker way. An easier way. A…riskier way.

You guess.

Guess is such an ugly word, though…so let’s call it an ansatz.

Ansatz is a word of German origin. In German, it is part of various idiomatic expressions, where it can refer to an approach, an attempt, or a starting point. When physicists and mathematicians use the term ansatz, they mean a combination of all of these.

An ansatz is an approach in that it is a way of finding a solution to a problem without using more general, inefficient methods. Rather than approaching problems starting from the question, an ansatz approaches problems by starting with an answer, or rather, an attempt at an answer.

An ansatz is an attempt in that it serves as a researcher’s best first guess at what the answer is, based on what they know about it. This knowledge can come from several sources. Sometimes, the question constrains the answer, ruling out some possibilities or restricting the output to a particular form. Usually, though, the attempt of an ansatz goes beyond this, incorporating the scientist’s experience as to what sorts of answers similar questions have had in the past, even if it isn’t yet understood why those sorts of answers are common. With information from both of these sources, a scientist comes up with a preliminary guess, or ansatz, as to the answer to the problem at hand.

What if the answer is wrong, though? The key here is that an ansatz is only a starting point. Rather than being a full answer with all the details filled in, an ansatz generally leaves some parameters free. These free parameters represent unknowns, and it is up to further tests to fix their values and complete the answer. These tests can be experimental, but they can also be mathematical: often there are restrictions on possible answers that are difficult to apply when creating a first guess, but easier to apply when one has only a few parameters to fix. In order to avoid the risk of finding an ansatz that only works by coincidence, many more tests are done than there are parameters. That way, if the guess behind the ansatz is wrong, then some of the tests will give contradictory rules for the values of the parameters, and you’ll know that it’s time to go back and find a better guess.
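To make the workflow concrete, here’s a toy example in Python (my example, far simpler than anything in quantum field theory): guess that the sum 1 + 2 + … + n is a quadratic in n, fix the free parameters from a few cases, then run more tests than there are parameters:

```python
# Toy ansatz: guess 1 + 2 + ... + n = a*n**2 + b*n + c,
# fix the free parameters a, b, c, then over-test the guess.
import numpy as np

def target(n):
    return sum(range(1, n + 1))

# Three free parameters, so three test cases fix them.
ns = np.array([1, 2, 3])
A = np.column_stack([ns**2, ns, np.ones_like(ns)])
a, b, c = np.linalg.solve(A, np.array([target(n) for n in ns]))
print(f"ansatz: {a:.2f}*n^2 + {b:.2f}*n + {c:.2f}")  # 0.50*n^2 + 0.50*n + 0.00

# Many more tests than parameters: if the guessed form were wrong,
# some of these would fail, and we'd know to go back and guess again.
for n in [10, 50, 100, 1000]:
    assert np.isclose(a * n**2 + b * n + c, target(n))
print("the ansatz survives every test")
```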

In the end, this approach, using your first attempt as a starting point, should end up with only a few parameters free, ideally none at all. One way or another, you have figured out a lot about your question just by guessing the answer!

The use of ansatzes is quite common in theoretical physics. Some of the most interesting problems either can’t be solved or are tedious to solve through traditional means. The only way to make progress, to go beyond what everyone else can already do, is to notice a pattern, make a guess, and hope you get lucky. Well, not just a guess: an ansatz.

Nature Abhors a Constant

Why is a neutrino lighter than an electron? Why is the strong nuclear force so much stronger than the weak nuclear force, and why are both so much stronger than gravity? For that matter, why do any particles have the masses they do, or forces have the strengths they do?

To some people, these sorts of questions are meaningless. A scientist’s job is to find out the facts, to measure what the constants are. To ask why, though…why would you want to do that?

Maybe a sense of history?

See, physics has a history of taking what look like arbitrary facts (the orbits of the planets, the rate objects fall, the pattern of chemical elements) and finding out why they are that way. And there’s no reason not to expect this trend to continue.

The point can be made even more strongly: increasingly, it is becoming clear that nature abhors a constant.

To explain this, I first have to clarify what I mean by a constant. If you were asked to think of a constant, you’d probably think of the speed of light. The thing is, the speed of light is actually not the sort of constant I have in mind. The speed of light is three hundred million meters per second…but it’s also 671 million miles per hour, or one light year per year. Choose the right units, and the speed of light is just one. To go a bit further: the speed of light is merely an artifact of how we choose our units of distance and time, so it’s not a “real” constant at all!

So what would a “real” constant look like? Well, imagine if there were two fundamental speeds: a maximum, like the speed of light and a minimum, which nothing could go slower than. You could pick units so that one of the speeds was one, or so that the other was…but they couldn’t both be one at the same time. Their ratio stays the same, no matter what units you’re using. That’s the sign of a true constant. To say it another way: a “real” constant is dimensionless.
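Here’s the same point in a few lines of Python (my own illustration, not anything deep): the number you get for the speed of light is an artifact of your units, while a ratio of two speeds is not:

```python
# The numerical value of c depends entirely on the choice of units...
c_m_per_s = 299_792_458              # meters per second
c_mph = c_m_per_s * 3600 / 1609.344  # miles per hour: about 671 million
c_ly_per_yr = 1.0                    # light-years per year, by definition

# ...but a ratio of two speeds comes out the same in any units.
v_m_per_s = 0.5 * c_m_per_s          # some other speed, half of c
v_mph = 0.5 * c_mph
print(v_m_per_s / c_m_per_s)  # 0.5
print(v_mph / c_mph)          # 0.5 again: dimensionless, unit-independent
```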

It is these “real” constants that nature so abhors, because whenever such a “real” constant appears to exist, it is likely to be due to a scalar field.

To remind readers, a scalar field is a type of quantum field consisting of a number that can vary through space. Temperature is an iconic illustration of a scalar field: at any given point you can define temperature by a number, and that number changes as you move from place to place.

Now constants, being constant, are not known for changing from place to place. Just because we don’t see mass or charge being different in different places, though, doesn’t mean they aren’t scalar fields.

To illustrate, imagine that you live far in the past, far enough that no-one knows that air has weight. Through careful experimentation, though, you can observe air pressure: everything is pressed upon in all directions by some mysterious force. Even if you don’t have access to mountains and therefore can’t see that air pressure varies with height, maybe you have begun to guess that air pressure is related to the weight of the air. You have a possible explanation for your constant pressure, in terms of a scalar pressure field. But how do you test your idea? Well, the big difference between a scalar and a constant is that a scalar can vary. Since there’s so much air above you, it’s hard to get air pressure to vary: you have to put enough energy into the air to make it happen. More specifically, you vibrate the air: you create sound waves! By measuring how fast the sound waves go, you can test out your proposed number for the mass of the air, and if everything lines up right, you have successfully replaced a mysterious constant with a logical explanation.
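To put rough numbers on that test (standard textbook values, my addition to the story): the speed of sound in a gas is the square root of γP/ρ, so a guessed density for the air predicts a sound speed you can go out and measure:

```python
# Predicted speed of sound from a guessed air density: v = sqrt(gamma * P / rho).
import math

P = 101_325      # measured air pressure, in pascals
gamma = 1.4      # adiabatic index of air (a diatomic gas)
rho_guess = 1.2  # your guess for the mass density of air, in kg/m^3

v = math.sqrt(gamma * P / rho_guess)
print(f"predicted sound speed: {v:.0f} m/s")  # ~344 m/s, close to the measured ~343 m/s
```

If the prediction had come out wildly different from the measured sound speed, you’d know your guess for the weight of the air was wrong.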

This is almost exactly what happened with the Higgs. Scientists observed that particle masses seemed to be arbitrary numbers, and proposed a scalar field to explain them. (As a matter of fact, the masses in question cannot just be constants; the mathematics doesn’t allow it. They must be scalar fields.) In order to test out the theory, we built the Large Hadron Collider, and used it to cause ripples in the seemingly constant masses, just like sound waves in air. In this case, those ripples were the Higgs particle, which served as evidence for the Higgs field just as sound waves serve as evidence for the mass of air.

And this sort of method keeps going. The Higgs explains mass in many cases, but it doesn’t explain the differences between particle masses, and it may be that new fields are needed to explain those. The same thing goes for the strengths of forces. Scalar fields are the most likely explanations for inflation, and in string theory scalars control the size and shape of the extra dimensions. So if you’ve got a mysterious constant, nature likely has a scalar field waiting in the wings to explain it.

What are colliders for, anyway?

Above is a thoroughly famous photo from ATLAS, one of six different particle detectors that sit around the ring of the Large Hadron Collider (or LHC for short). Forming a 27 kilometer ring straddling the border between France and Switzerland near Geneva, the LHC is the biggest experiment of its kind, with the machine alone costing around 4 billion dollars.

But what is “its kind”? And why does it need to be so huge?

Aesthetics, clearly.

Explaining what a particle collider like the LHC does is actually fairly simple, if you’re prepared for some rather extreme mental images: using incredibly strong magnetic fields, the LHC accelerates protons until they’re moving at 99.9999991% of the speed of light, then lets them smash into each other in the middle of sophisticated detectors designed to observe and track everything that comes out of the collision.

That’s all well and awesome, but why do the protons need to be moving so fast? Are they really really hard to crack open, or something?

This gets at a common misunderstanding of particle physics, which I’d like to correct here.

When most people imagine what a particle collider does, they picture it smashing particles together like hollow shells, revealing the smaller particles trapped inside. You may have even heard particle colliders referred to as “atom smashers”, and if you’re used to hearing about scientists “splitting the atom”, this all makes sense: with lots of energy, atoms can be broken apart into protons and neutrons, which is what they are made of. Protons are made of quarks, and quarks were discovered using particle colliders, so the story seems to check out, right?

The thing is, lots of things have been discovered using particle colliders that definitely aren’t part of protons and neutrons. Relatives of the electron like muons and tau particles, new varieties of neutrinos, heavier quarks…pretty much the only particles that are part of protons or neutrons are the three lightest quarks (and that’s leaving aside the fact that what is or is not “part of” a proton is a complicated question in its own right).

So where do the extra particles come from? How do you crash two protons together and get something out that wasn’t in either of them?

You…throw Einstein at them?

E equals m c squared. This equation, famous to the point of cliché, is often misinterpreted. One useful way to think about it is that it describes mass as a type of energy, and clarifies how to convert between units of mass and units of energy. Then E in the equation is merely the contribution to the energy of a particle from its mass, while the full energy also includes kinetic energy, the energy of motion.

Energy is conserved, that is, it cannot be created or destroyed. Mass, on the other hand, being merely one type of energy, is not necessarily conserved. The reason why mass seems to be conserved in day to day life is that it takes a huge amount of energy to make any appreciable mass: the c in m c squared is the speed of light, after all. That’s why a radioactive atom decays into products that, taken together, weigh less than the original, never more: the lost mass escapes as energy.

However, this changes with enough kinetic energy. If you get something like a proton accelerated to up near the speed of light, its kinetic energy will be comparable to (or even much higher than) its mass. With that much “spare” energy, energy can transform from one form into another: from kinetic energy into mass!
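To get a feel for the numbers, here’s a back-of-the-envelope sketch in Python, using the LHC speed quoted earlier and E = γmc² (my arithmetic, not official LHC specifications):

```python
# How much "spare" kinetic energy a proton has at 99.9999991% of light speed.
import math

v_over_c = 0.999999991
gamma = 1 / math.sqrt(1 - v_over_c**2)  # Lorentz factor, roughly 7500

m_proton = 0.938                        # proton rest mass-energy, in GeV
total = gamma * m_proton                # about 7000 GeV = 7 TeV
kinetic = total - m_proton

print(f"gamma = {gamma:.0f}")
print(f"total energy ~ {total:.0f} GeV, kinetic ~ {kinetic:.0f} GeV")
# Thousands of times the proton's own mass: plenty to pay for, say,
# a 125 GeV Higgs, with the mass coming out of the motion.
```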

Of course, it’s not quite that simple. Energy isn’t the only thing that’s conserved: so is charge, not just electric charge but other sorts of charge too, like the colors of quarks. All in all, the sorts of particles that are allowed to be created are governed by the ways particles can interact. So you need not just one high energy particle, but two high energy particles interacting, in order to discover new particles.

And that, in essence, is what a particle collider is all about. By sending two particles hurtling towards each other at almost the speed of light you are allowing two high energy particles to interact. The bigger the machine, the faster those particles can go, and thus the more kinetic energy is free to transform into mass. So the more powerful you make your particle collider, the more likely you are to see rare, highly massive particles that, left alone in nature, would quickly and unseen transform into less massive particles to release their copious energy. By producing these massive particles inside a particle collider we make sure they are created inside sophisticated particle detectors, letting us observe with precision what they turn into and extrapolate what the original particles were. That’s how we found the Higgs, and it’s how we’re trying to find superpartners. It’s one of the only ways we have to answer questions about the fundamental rules that govern the universe.

Breakthrough or Crackpot?

Suppose that you have an idea. Not necessarily a wonderful, awful idea, but an idea that seems like it could completely change science as we know it. And why not? It’s been done before.

My advice to you is to be very very careful. Because if you’re not careful, your revolutionary idea might force you to explain much much more than you expect.

Let’s consider an example. Suppose you believe that the universe is only six thousand years old, in contrast to the 13.772 ± 0.059 billion years that scientists who study the subject have calculated. And furthermore, imagine that you’ve gone one step further: you’ve found evidence!

Being no slouch at this sort of thing, you read the Wikipedia article linked above, and you figure you’ve got two problems to deal with: extrapolations from the expansion of the universe, and the cosmic microwave background. Let’s say your new theory is good enough that you can address both of these: you can explain why calculations based on both of these methods give 14 billion years, while you still assert that the universe is only six thousand years old. You’ve managed to explain away all of the tests that scientists used to establish the age of the universe. If you can manage that, you’re done, right?

Not quite. Explaining all the direct tests may seem like great progress, but it’s only the first step, because the age of the universe can show up indirectly as well. No stars have been observed that are 13.772 billion years old, but every star whose age has been calculated has been found to be older than six thousand years! And even if you can explain why every attempt to measure a star’s age turned out wrong, there’s more to it than that, because the age of stars is a very important part of how astronomers model stellar behavior. Every time astronomers make a prediction about a star, whether estimating its size, its brightness, or its color, and that prediction turns out correct, they’re using the fact that the star is some specific age, much much older than six thousand years. And because almost everything we can see in space either is made of stars, or orbits a star, or once was a star, changing the age of the universe means you have to explain all those results too. If you propose that the universe is only six thousand years old, you need to explain not only the cosmic microwave background, not only the age of stars, but almost every single successful prediction made in the last fifty years of astronomy, none of which would have been successful if the universe were only six thousand years old.

Daunting, isn’t it?

Oh, we’re not done yet!

See, it’s not just astronomy you have to contend with, because the age of the Earth specifically is also calculated to be much larger than six thousand years. And just as astronomers use the age of stars to make successful predictions about their other properties, geologists use the age of rock formations to make their own predictions. And the same is true for species of animals and plants, studied through genetic drift with known rates over time, or fossils with known ages. So in proposing that the universe is only six thousand years old, you need to explain not just two pieces of evidence, but the majority of successful predictions made in three distinct disciplines over the last fifty years. Is your evidence that the universe is only six thousand years old good enough to outweigh all of that?

This is one of the best ways to tell a genuine scientific breakthrough from ideas that can be indelicately described as crackpot. If your idea questions something that has been used to make successful predictions for decades, then it becomes your burden of proof to explain why all those results were successful, and chances are, you can’t fulfill that burden.

This test can be applied quite widely. As another example, homeopathic medicine relies on the idea that if you dilute a substance (medicine or poison) drastically then rather than getting weaker it will suddenly become stronger, sometimes with the reverse effect. While you might at first think this could be confirmed or denied merely by testing homeopathic medicines themselves, the principle would also have to apply to any other dilution, meaning that a homeopath needs to explain everything from the success of water treatment plants that wash out all but tiny traces of contaminants to high school chemistry experiments involving diluting acid to observe its pH.
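The arithmetic behind that is worth seeing. Here’s a quick count in Python (my numbers, using the common “30C” homeopathic dilution, a factor of one hundred repeated thirty times):

```python
# How many molecules of the original substance survive a 30C dilution?
avogadro = 6.022e23   # molecules in one mole
start = avogadro      # generously, start with a full mole of substance
dilution = 100 ** 30  # "30C": dilute by a factor of 100, thirty times

remaining = start / dilution
print(f"expected molecules left: {remaining:.0e}")  # ~6e-37: effectively zero
```

Any principle powerful enough to make that nothingness potent would have to act on every other dilution in chemistry too.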

This is why scientific revolutions are hard! If you want to change the way we look at the world, you need to make absolutely sure you aren’t invalidating the success of prior researchers. In fact, the successes of past research constrain new science so much that it is sometimes possible to make predictions just from these constraints!

So whenever you think you’ve got a breakthrough, ask yourself: how much does this mean I have to explain? What is my burden of proof?

Why a Quantum Field Theorist is the wrong person to ask about Quantum Mechanics

Quantum Mechanics is quite possibly the sexiest, most mysterious thing to come out of 20th century physics. Almost a century of evidence has confirmed that the world is fundamentally ambiguous and yet deeply predictable, that physics is best described probabilistically, and that however alien this seems the world wouldn’t work without it. Quantum Mechanics raises deep philosophical questions about the nature of reality, some of the most interesting of which are still unanswered to this day.

And I am (for the moment, at least) not the best person to ask about these questions. Because while I specialize in Quantum Field Theory, that actually means I pay very little attention to the paradoxes of Quantum Mechanics.

It all boils down to the way calculations in quantum field theory work. As I described in a previous post, quantum field theory involves adding up progressively more complicated Feynman Diagrams. There are methods that don’t involve Feynman Diagrams, but in one way or another they work on the same basic principle: to take quantum mechanics into account, add up all possible outcomes, either literally or through shortcuts.

That may sound profound, but in many ways it’s quite mundane. Yes, you’re adding up all possibilities, but each possibility is, on its own, mundane. There are a few caveats, but essentially each element you add in, each Feynman Diagram for example, looks roughly like the sort of thing you could get without quantum mechanics.
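If you’re curious what “adding up all possibilities” looks like in practice, here’s a standard toy model in Python (a zero-dimensional caricature, not a real quantum field theory calculation): a one-variable integral expanded as a series in a small coupling g, where each term plays the role of a Feynman diagram:

```python
# Toy "path integral": Z(g) = integral of exp(-x**2/2 - g*x**4) over all x.
# Expanding exp(-g*x**4) in powers of g gives a sum of simple Gaussian
# integrals, one per order, much like a sum over Feynman diagrams.
import math
from scipy.integrate import quad

def double_factorial(k):
    return math.prod(range(k, 0, -2)) if k > 0 else 1

g = 0.01

# n-th order term: (-g)**n / n! times the Gaussian moment
# integral of x**(4n) * exp(-x**2/2) dx = (4n-1)!! * sqrt(2*pi).
series = sum(
    (-g)**n / math.factorial(n) * double_factorial(4*n - 1) * math.sqrt(2*math.pi)
    for n in range(4)  # the first few "diagrams"
)

# The exact answer, by brute-force numerical integration.
exact, _ = quad(lambda x: math.exp(-x**2/2 - g*x**4), -math.inf, math.inf)

print(f"series (4 terms): {series:.4f}")  # ~2.4402
print(f"exact:            {exact:.4f}")   # agrees to better than a tenth of a percent
```

Each term in that sum is an ordinary, mundane integral; the quantum mechanics lives entirely in the instruction to add them all up.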

In a typical quantum field theory calculation, you don’t see the mysterious parts of quantum mechanics: you don’t see entanglement, or measurements collapsing the wavefunction, and you don’t have to think about whether reality is really real. Because of that, I’m not the best person to ask about quantum paradoxes, as I’ve got little more than an undergraduate’s knowledge of these things.

There are people whose work focuses much more on quantum paradoxes. Generally these people focus on systems closer to everyday experiments, atoms rather than more fundamental particles. Because the experimentalists they cooperate with have much more ability to manipulate the systems they study, they are able to probe much more intricate quantum properties. People interested in the possibility of a quantum computer are often at the forefront of this, so if you’ve got a question about a quantum paradox, don’t ask me, ask people like WLOG blog.

A final note: there are many people (often very experienced and elite researchers) who, though they might primarily be described as quantum field theorists, have weighed in on the subject of quantum paradoxes. If you’ve heard of the black hole firewall debate, that is a recent high-profile example of this. The important thing to remember is that these people are masters of many areas of physics. They have taken the time to study the foundations of quantum mechanics, and have broadened their horizons to the tools more commonly used in other subfields. So while your average grad student quantum field theorist won’t know an awful lot about quantum paradoxes, these guys do.

Valentine’s Day Physics Poem

In honor of Valentine’s Day, a physics-themed poem I wrote a few years ago, about unrequited love.

Measurement:

 

I once took a measurement

It was a simple, two-body problem,

Solvable. Not Poisson’s mess.

Two particles, drifting, perhaps entangled.

I wanted to know two things:

Position, and momentum:

Where they were, and where they might go.

 

I perturbed the system

Like a good scientist, I interacted, and observed,

Added input, caused change.

Then I knew their positions.

They became tightly entangled,

Bound together,

And there was no way of knowing

Any way they could change.

 

I should have remembered:

In quantum systems

The observer is always involved;

And a three-body problem

Has no solution.

Black Holes and a Superluminal River of Glass

If I told you that scientists have been able to make black holes in their labs for years, you probably either wouldn’t believe me, or would suddenly get exceptionally paranoid. Turns out it’s true, provided you understand a little bit about black holes.

A black hole is, at its most basic, an object that light cannot escape. That’s why it’s “black”: it absorbs all colors of light. That’s really, deep down, all you need in order to have a black hole.

Black holes out in space, as you are likely aware, are the result of collapsed stars. Gather enough mass into a small enough space and, according to general relativity, space and time begin to bend. Bend space and time enough and the paths that light would follow curve in on themselves, until inside the event horizon (the “point of no return”) the only way light can go is down, into the center of the black hole.

That’s not the only way to get a “point of no return”, though. Imagine flying a glider above a fast-moving river. If the glider is slower than the river, then any object floating in the river marks a “point of no return” of its own: once the object sweeps past you, you can never catch up to it again.

Of course, trying to apply this to light runs into a difficulty: you can have a river faster than a plane, but it’s pretty hard to have a river faster than light. You might even say it’s impossible: nothing can travel faster than light, after all, right?

The idea that nothing can travel faster than light is actually a common misconception, held because it makes a better buzzword than the truth: nothing can travel faster than light in a vacuum. Light in a vacuum goes straight to its target, the fastest thing in the universe. But light in a substance, moving through air or water or glass, gets deflected: it runs into atoms, gets absorbed, gets released, and overall moves slower. So in order to make a black hole, all we need is some substance moving faster than light moves in that substance: a superluminal river of glass.

(By the way, is that not an amazingly evocative phrase? Sounds like the title of a Gibson novel.)
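For a sense of scale (standard refractive indices, my numbers): light in a medium travels at c divided by the medium’s index of refraction, so it’s noticeably slower in water or glass:

```python
# Speed of light in a medium: v = c / n, with n the index of refraction.
c = 299_792_458  # speed of light in a vacuum, in m/s

for medium, n in [("vacuum", 1.0), ("water", 1.33), ("glass", 1.5)]:
    print(f"light in {medium}: {c / n / 1e6:.0f} million m/s")
# In glass, light crawls along at ~200 million m/s. Matter that moves
# faster than that, relative to the light, is all a lab black hole needs.
```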

Now it turns out that literally making glass move faster than light moves inside it is still well beyond modern science. But scientists can get around that. Instead of making the glass move, they  make the properties of the glass change, using lasers to alter the glass so that the altered area moves faster than the light around it. With this sort of setup, they can test all sorts of theoretical black hole properties up close, in the comfort of a university basement.

That’s just one example of how to create an artificial black hole. There are several others, and all of them rely on various ingenious manipulations of the properties of matter. You live in a world in which artificial black holes are routine and diverse. Inspiring, no?