With Halloween coming up, it’s time once again to talk about the spooky side of physics. And what could be spookier than action at a distance?

Ok, maybe not an obvious contender for spookiest concept of the year. But physicists have struggled with action at a distance for centuries, and there are deep reasons why.

It all dates back to Newton. In Newton’s time, all of nature was expected to be mechanical. One object pushes another, which pushes another in turn, eventually explaining everything that ever happens. And while people knew by that point that the planets were not circling around on literal crystal spheres, it was still hoped that their motion could be explained mechanically. The favored explanations of the time were vortices, whirlpools of celestial fluid that drove the planets around the Sun.

Newton changed all that. Not only did he set down a law of gravitation that didn’t use a fluid, he showed that no fluid could possibly replicate the planets’ motions. And while he remained agnostic about gravity’s cause, plenty of his contemporaries accused him of advocating “action at a distance”. People like Leibniz thought that a gravitational force without a mechanical cause would be superstitious nonsense, a betrayal of science’s understanding of the world in terms of matter.

For a while, Newton’s ideas won out. More and more, physicists became comfortable with explanations involving a force stretching out across empty space, using them for electricity and magnetism as these became more thoroughly understood.

Eventually, though, the tide began to shift back. Electricity and magnetism were explained, not in terms of action at a distance, but in terms of a *field* that filled the intervening space. In time, gravity was too.

The difference may sound purely semantic, but it means more than you might think. These fields were restricted in an important way: when the field changed, it changed at one point, and the changes spread at a speed limited by the speed of light. A theory composed of such fields has a property called **locality**, the property that all interactions are fundamentally *local*, that is, they happen at one specific place and time.
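This finite propagation speed is easy to see in a toy model. The sketch below (plain Python, with all variable names made up for illustration) evolves a leapfrog discretization of the 1D wave equation u_tt = c² u_xx and checks that a localized disturbance never outruns the wave speed c:

```python
# Locality in a toy field theory: a localized change in a 1D wave
# field spreads no faster than the wave speed c. A minimal sketch
# using a leapfrog discretization of u_tt = c^2 u_xx; all names
# here are illustrative.
c, dx = 1.0, 0.1
dt = dx / c                 # step size chosen so the grid speed equals c
n, steps = 400, 100
center = n // 2

u_prev = [0.0] * n          # field at time t - dt
u = [0.0] * n               # field at time t
u[center] = 1.0             # localized "kick" at t = 0

for _ in range(steps):
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = (2 * u[i] - u_prev[i]
                     + (c * dt / dx) ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next

# After time t = steps * dt the kick can have traveled at most c * t,
# i.e. `steps` grid points; everything farther out is still exactly zero.
outside = max(abs(u[i]) for i in range(n) if abs(i - center) > steps)
inside = max(abs(u[i]) for i in range(n) if abs(i - center) <= steps)
print(outside == 0.0, inside > 0)  # True True
```

The update rule only ever couples each grid point to its immediate neighbors, which is exactly what locality means: to find the cause of a change somewhere, you only have to look within a light-cone’s reach.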

Nowadays, we think of locality as one of the most fundamental principles in physics, on par with symmetry in space and time. And the reason why is that true action at a distance is quite a spooky concept.

Much of horror boils down to fear of the unknown. From what might lurk in the dark to the depths of the ocean, we fear that which we cannot know. And true action at a distance would mean that our knowledge might forever be incomplete. As long as everything is mediated by some field that changes at the speed of light, we can limit our search for causes. We can know that any change must be caused by something only a limited distance away, something we can potentially observe and understand. By contrast, true action at a distance would mean that forces from potentially anywhere in the universe could alter events here on Earth. We might never know the ultimate causes of what we observe; they might be stuck forever out of reach.

Some of you might be wondering: what about quantum mechanics? After all, the phrase “spooky action at a distance” became famous because Einstein used it as an accusation against quantum entanglement.

The key thing about quantum mechanics is that, as J. S. Bell showed, you can’t have locality…unless you throw out another property, called realism. Realism is the idea that quantum states have definite values for measurements before those measurements are taken. And while that sounds important, most people find getting rid of it much less scary than getting rid of locality. In a non-realistic world, at least we can still predict probabilities, even if we can’t observe certainties. In a non-local world, there might be aspects of physics that we just can’t learn. And that’s spooky.
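To put a number on what Bell showed, here is a small numerical sketch (toy code, all names invented): the CHSH combination of correlations stays at or below 2 for a simple local hidden-variable model, while the quantum singlet-state prediction E(a,b) = −cos(a−b) reaches 2√2 ≈ 2.83 at suitably chosen angles.

```python
import math
import random

random.seed(0)

def chsh(E):
    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b');
    # any local realist model satisfies |S| <= 2
    a, ap = 0.0, math.pi / 2
    b, bp = math.pi / 4, 3 * math.pi / 4
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

# Quantum-mechanical prediction for the spin-singlet state
quantum_S = chsh(lambda a, b: -math.cos(a - b))   # 2*sqrt(2) ≈ 2.83

# A toy local hidden-variable model: the two particles share a random
# angle lam, and each outcome depends only on the local setting and lam
def E_local(a, b, trials=100_000):
    total = 0
    for _ in range(trials):
        lam = random.uniform(0, 2 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1   # anti-correlated with A
        total += A * B
    return total / trials

local_S = chsh(E_local)   # ≈ 2: saturates, but never exceeds, the bound
print(quantum_S > 2, local_S)
```

This particular local model happens to saturate the bound of 2 exactly at these settings; what no local pre-assignment of outcomes can do is exceed it, while the quantum prediction does.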

Tony: Great post! I didn’t know that about Leibniz (or had forgotten it long ago). Goes to show that, despite co-inventing the differential calculus, these two were quite different.

Personally, I find the Higgs vacuum tunneling quite scary, even if the new vacuum bubble would propagate at the speed of light. Don’t know why, possibly from some old SciFi show with beings from another dimension creeping into ours.

4gravitonsandagradstudent (Post author): Vacuum decay is definitely one of the scarier ones, being one of those world-destroying events that you can’t really prepare for. You might find Ashoke Sen’s take on it interesting.

nueww: Thanks for that nice piece of Sen!

ohwilleke: Isn’t there really a trilemma? Realism, locality or causality?

4gravitonsandagradstudent (Post author): If I recall correctly, causality is included in locality in the usual treatment.

Lubos Motl: I’ve written a follow-up post to your blog post on realism and locality here:

http://motls.blogspot.com/2015/11/locality-correct-realism-incorrect-why.html?m=1

Andrei Bocan: Dear 4gravitons,

Your interpretation of the implications of Bell’s theorem is as wrong as it is widespread. Along realism (represented by the hidden variable), the theorem assumes measurement independence, or free-will. This limits the scope of the theorem to those hidden-variable theories that are non-deterministic and/or non-contextual. But classical field theories, like general relativity and electromagnetism, are both deterministic and contextual so one cannot make the claim that this type of theory has been ruled out by Bell’s theorem.

The correct implication of Bell’s theorem is that the world is either local and deterministic or non-local and non-deterministic. So, as long as you want to preserve locality you need to reject the assumption of fundamental randomness.

Andrei

4gravitonsandagradstudent (Post author): So there are two important points to keep in mind here:

First, yes, Bell’s theorem assumes that you can set what measurements the experimenters take independently of the “hidden variables” that determine the results of those measurements. In an utterly deterministic world, this seems false: after all, the measurements performed are determined like everything else. But in order to argue for local realism in Bell’s case, you need more than that: not just that the experimenters’ measurements are determined, but that they’re determined in a way that’s (fairly) closely linked to the “hidden variables” of the quantum system. In effect, you’re violating something like a naturalness principle: you’ve tied two things together that really have no reason to be tied together, and unless you’re a fan of Leibniz-style determinism this is a tough pill to swallow.

The other point is that what you’re rejecting here isn’t precisely free will in the philosophical sense. Rather, it’s the ability to pose counterfactuals: to say “what if we had measured X instead?” and have that question actually have meaning. And if you toss out counterfactuals altogether, then you can’t really write down any laws of physics at all. Scientific statements boil down to claims like “if an object with properties Y is acted upon by a force Z, then…”, and getting rid of the “if…then” phrasing completely changes the game. Rebuilding science in that sort of world is quite tricky; it’s what leads philosophers like David Lewis to argue for a multiverse.

Free will’s interaction with Bell’s theorem is not a pointless thing to notice. But it won’t give you the kind of glib answers you’re looking for.

Lubos Motl: A very concise rebuttal of this “superdeterminism”. It may be fun to see a broader blog post about all these conspiracy theories. Not that it’s the only “kind of a problem” with superdeterminism but it’s perhaps the most physical one. It’s great that you were provoked to write this good point about the required correlation between the hidden variables and our decisions by Andrei’s post whose last paragraph said:

“The correct implication of Bell’s theorem is that the world is either local and deterministic or non-local and non-deterministic. So, as long as you want to preserve locality you need to reject the assumption of fundamental randomness.”

This is really bizarre because every single part of this claim seems to be the opposite of the truth. Bell’s theorem implies that the world is either local and non-deterministic (yes, it is, like in QM), or non-local and deterministic (some relativity-violating hidden-variable theories). Andrei wrote exactly the other two options among the four! 😉 And the final sentence is exactly the opposite of the truth, too (although it’s perhaps equivalent to a part of the previous sentence). If one wants to preserve locality, he needs a fundamentally probabilistic framework of quantum mechanics, while he writes that one needs to reject the fundamental randomness. Oops?

I am afraid that up to the moment when one fixes these Andrei-style mistakes, he can’t comprehend your point that discusses a problem that Andrei could have raised if he had done so correctly.

Andrei: Dear Dr. Lubos Motl,

I said:

“The correct implication of Bell’s theorem is that the world is either local and deterministic or non-local and non-deterministic. So, as long as you want to preserve locality you need to reject the assumption of fundamental randomness.”

And you replied:

“This is really bizarre because every single part of this claim seems to be the opposite of the truth. Bell’s theorem implies that the world is either local and non-deterministic (yes, it is, like in QM), or non-local and deterministic (some relativity-violating hidden-variable theories). Andrei wrote exactly the other two options among the four!”

Let me support my original claim.

1. Deterministic theories reject free-will, therefore they are not ruled out by Bell.

2. Non-deterministic theories with a local hidden-variable are ruled out.

3. What remains (non-deterministic theories with a non-local hidden-variable) are allowed.

It is interesting to note that de Broglie-Bohm theory is overkill: it evades Bell by being both deterministic and non-local!

“And the final sentence is exactly the opposite of the truth, too (although it’s perhaps equivalent to a part of the previous sentence). If one wants to preserve locality, he needs a fundamentally probabilistic framework of quantum mechanics, while he writes that one needs to reject the fundamental randomness. Oops?”

1. By rejecting fundamental randomness you can (at least in theory) recover locality and realism together, which seems like quite a good deal. No spookiness, no weirdness, just the good old classical universe.

2. The non-realistic probabilistic theories (the last possible type) seem to be ruled out simply by looking at the results of a Bell test with both detectors fixed on the same axis. If you roll two independent dice, one at Alice’s station and one at Bob’s, you cannot get 100% correlation. You need to roll one die and set the result at both places (which is non-local).

Andrei
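Andrei’s dice analogy above is easy to check numerically. A toy sketch (all variable names made up) comparing independent local randomness with a single shared random value:

```python
import random

random.seed(1)
N = 100_000

# Two independent dice, one rolled at each station: agreement is chance
independent = sum(random.randint(1, 6) == random.randint(1, 6)
                  for _ in range(N)) / N          # ≈ 1/6

# One die rolled once, with its result used at both stations
shared = 0
for _ in range(N):
    roll = random.randint(1, 6)
    alice, bob = roll, roll                       # same value at both ends
    shared += alice == bob
shared /= N                                       # exactly 1.0

print(independent < 0.2, shared == 1.0)  # True True
```

Independent local randomness agrees only about one time in six; reproducing perfect same-axis correlations requires a value fixed in common, whether by a shared hidden variable or by non-local coordination.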

Tim Maudlin: Well, I just stumbled across this, and it is certainly amusing to see the blind leading the blind, or more accurately the blind having a catfight with the blind, or even yet more accurately three different blind people all arguing at the top of their lungs about whether a fire engine that is sitting right in front of them is purple or green or blue. Not a single one of you has the vaguest idea what Bell’s theorem proves, but all three of you think you are experts. The whole thing would be hilarious if it were not so sad.

For the record.

Bell’s theorem has precisely two premises, no more and no less. One is locality or local causality or Bell-locality or Einstein-locality, which are all words for the same condition and all express what any normal person means by “no spooky action-at-a-distance”. And the other is called Statistical Independence, and states that, over a large collection of runs of a particular experimental set-up, especially a set-up that employs a physical randomizing device to determine the setting of the experimental apparatus, the statistical characteristics of the group of systems that are subjected to one experimental arrangement will be statistically independent of which arrangement that happens to be, or, in other words, that the various subsets of systems that are subjected to different experimental arrangements are (within epsilon) statistically similar to each other. Period. End of story.

In particular, there is no assumption of determinism, or assumption of “realism” (whatever that means) or assumption of counterfactual definiteness or assumption that you are using an “ontological model” (whatever that means) or assumption that human beings have free will (how in the world did that ever even enter the discussion?). There are exactly and precisely and only two premises:

1) Locality

2) Statistical Independence

If you want to retain locality, then you have to deny Statistical Independence. That’s it. That is your only, sole choice. And then you have a “conspiracy” theory or a “hyperfine tuned” theory (my choice of terminology), or a “superdeterministic” theory (Bell’s choice of terminology, and one of the few bad decisions he made). And since it is insane to adopt a conspiracy theory or a hyperfine tuned theory or a superdeterministic theory, because that would undermine all of empirical science, the only rational choice is to give up on locality. Period.

This applies to quantum theory as well, including QFT: those theories are all non-local. Just as Einstein correctly noted.

Andrei: Dear 4gravitonsandagradstudent,

“First, yes, Bell’s theorem assumes that you can set what measurements the experimenters take independently of the “hidden variables” that determine the results of those measurements. In an utterly deterministic world, this seems false: after all, the measurements performed are determined like everything else.”

I find syntagms like super-determinism, strong-determinism, hard-determinism, and your “utterly deterministic” both superfluous and misleading. Determinism, as understood in classical physics, implies that the state at T0 uniquely determines the state at any future time T1. It so happens that our world is also reversible, so the state at any time in the past is uniquely determined as well.

“in order to argue for local realism in Bell’s case, you need more than that: not just that the experimenters’ measurements are determined, but that they’re determined in a way that’s (fairly) closely linked to the “hidden variables” of the quantum system. In effect, you’re violating something like a naturalness principle: you’ve tied two things together that really have no reason to be tied together, and unless you’re a fan of Leibniz-style determinism this is a tough pill to swallow.”

Let’s think of a Bell test assuming that the world is described by a classical field theory. You may think of classical electromagnetism or general relativity, but this is not important. The only condition is that every particle is a field source and the field is not limited in range. In this case Alice and her detector (and whatever you want her to use to set the measurement angle) are a group of charges (A1, A2, A3, …, An), Bob and his detector are another group of charges (B1, B2, B3, …, Bn), and the source of the entangled particles another one (S1, S2, S3, …, Sn).

Any classical field theory (as defined above) implies that the motion of each particle is a function of A1, A2, …, An, B1, B2, …, Bn, S1, S2, …, Sn. So, it is mathematically impossible to have Alice, Bob and the source evolve independently. They were never independent, they are not independent, and they will never be independent, no matter how many tricks you may use in the experiment. So, contrary to your claim, it is to be expected that all objects have correlated evolution. Nothing unnatural about that.

“The other point is that what you’re rejecting here isn’t precisely free will in the philosophical sense. Rather, it’s the ability to pose counterfactuals: to say “what if we had measured X instead?” and have that question actually have meaning.”

It is perfectly possible to imagine a different experiment, however, you need to take care to make the necessary changes in all parts of the system, to preserve consistency. In order to have Bob measure on x instead of y you need a different set of initial parameters which in turn would determine a different value for the hidden variable as well.

“Scientific statements boil down to claims like “if an object with properties Y is acted upon by a force Z, then…”, and getting rid of the “if…then” phrasing completely changes the game.”

The proper way to analyse a Bell test is as follows:

Run1: initial parameters: x11,x12……x1n, p11,p12….p1n. Experimental outcome: Bob measures x, Alice measures x, spin of Bob’s particle on X: +1/2, spin of Bob’s particle on Y: -1/2, spin of Bob’s particle on Z: -1/2, spin of Alice’s particle on X: -1/2, spin of Alice’s particle on Y: -1/2, spin of Alice’s particle on Z: +1/2.

Run2:……

……

Run n:….

After a large enough n, calculate the statistics. As you can see, this kind of calculation takes care of the dependency between Alice, Bob and the hidden variable, and it is not at all obvious that the correct result, as predicted by quantum mechanics, cannot be reproduced.

“Free will’s interaction with Bell’s theorem is not a pointless thing to notice. But it won’t give you the kind of glib answers you’re looking for.”

I think the free-will assumption is the core of the theorem. Assume it and you can reject classical determinism without even mentioning entanglement or even quantum mechanics. Reject it, and no contradiction between the quantum and classical descriptions can be proven.

Andrei

4gravitonsandagradstudent (Post author): “Syntagm” is a fun word, if one of limited applicability. Going to have to find more uses for it.

Anyway, here is the crux of the issue: what stops me from simply picking a set of initial parameters such that Alice and Bob’s decisions vary, while the particle states stay fixed? Remember, quite a wide range of physical brain states can encode the same information, and result in actions that are sufficiently similar that Bell’s theorem classifies them as the same measurement. If you’re stating that there are no parameters that satisfy this condition, then you’re making a very specific claim about brain states…and the only reason you would make that sort of claim would be if you had a very specific model that predicted it. So what’s the model?

As for counterfactuals, remember that your “initial conditions” are, much like the “free will” of the experimenters, just a convenient stopping-point. In reality, they would be determined by yet more conditions, and eventually fixed completely.

Andrei: Dear 4gravitons,

“Anyway, here is the crux of the issue: what stops me from simply picking a set of initial parameters such that Alice and Bob’s decisions vary, while the particle states stay fixed?”

In order for the Alice and Bob’s decisions to vary you need to change the position/momenta of the particles in their brains. Because the particles are field sources the field at every location will change so the particle states will change as well.

“As for counterfactuals, remember that your “initial conditions” are, much like the “free will” of the experimenters, just a convenient stopping-point. In reality, they would be determined by yet more conditions, and eventually fixed completely.”

There is a big difference between the initial conditions and free-will. Free-will has the effect of making the parts of a system uncorrelated, while a set of initial conditions does not. There is nothing wrong with having a stopping point.

Andrei

4gravitonsandagradstudent (Post author): Largely agreed. But it’s harder to make a spooooky Halloween post about causality violation. 😉

Lubos Motl: LOL, I actually find the retroactive modifications of our past – and people’s rewriting of the history – spookier than nonlocality. 25 years ago in my country, people were rewriting the past all the time, while action at a distance was a common-sense classical physics concept believed by Newton and almost 300 years of his followers.
