Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics, not like a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate: I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude they perceive much like we do, and we learn from their observations just as we learn from our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this train lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables were unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
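To make “algebra” concrete with the simplest possible example, here is a minimal sketch, just spin-1/2 bookkeeping: the Pauli matrices are the observables of a single spin, and their commutators encode how the three spin measurements relate.

```python
import numpy as np

# The three spin observables of a spin-1/2 particle, as Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    # The basic algebraic relationship between two observables
    return a @ b - b @ a

# [sx, sy] = 2i sz: x-spin and y-spin don't commute, which is the
# algebraic fact underlying their uncertainty relation
print(np.allclose(commutator(sx, sy), 2j * sz))  # True
```

The algebras of observables that show up in quantum field theory are vastly more sophisticated, but the bookkeeping is of the same kind.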

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without this discipline, you find paradoxes, only to resolve them when you carefully track what each person can observe. More recently, physicists in my field have had success computing the chance that particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we have typically done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

54 thoughts on “Reality as an Algebra of Observables”

    1. 4gravitons Post author

      Oddly enough, I had the idea before I saw Scott’s post. I don’t remember quite what got me thinking about it, but I think the “beable” advocate I have in my head is less of a Bohmian and more of a Hossenfelder-style superdeterminist.

  1. Alejandro Baroni

    Hello, I liked your post. I think a good and useful philosophical reference for your thinking is the work of Richard Rorty, in particular his book ‘Philosophy and the Mirror of Nature’. Regards, Alejandro Baroni, Montevideo, Uruguay

    1. 4gravitons Post author

      Yeah, I’m sure there’s a little bit of second-hand Rorty-style Pragmatism behind my beliefs, along the lines of that “independent politicians are actually slaves to defunct economists” quote.

  2. Madeleine Birchfield

    I’m a subscriber to the Arnold Neumaier tradition of viewing quantum mechanics as deterministic in nature (the thermal interpretation of quantum mechanics). Quantum physics could be founded upon the Ehrenfest equation, whose variables, what Neumaier calls q-expectations, are deterministic variables by nature, and where the measurement problem is just reduced to a technical question of quantum statistical mechanics. The algebra of q-expectations is a Lie algebra $\mathbb{L}$, with an associated Poisson manifold $\mathbb{L}^*$, and what Neumaier calls the Ehrenfest picture of quantum mechanics is just classical Hamiltonian mechanics on $\mathbb{L}^*$. No observables appear in this picture at all, but neither do beables.
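    For concreteness, the Ehrenfest equation in its standard form, for a time-independent observable $A$ and state $\rho$, reads

    $$\frac{d}{dt}\langle A\rangle = \frac{i}{\hbar}\,\langle[H,A]\rangle, \qquad \langle A\rangle := \mathrm{Tr}(\rho A),$$

    and it is these q-expectations $\langle A\rangle$ that evolve deterministically.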

    In the thermal interpretation, the objective reality is the quantum fields, such as electron fields or photon fields, and particles are only a semi-classical approximation of the full quantum system. For example, traditionally, an electron beam is said to consist of a series of undetected electrons, and the electrons are the objective reality. In this interpretation, the beam itself is the objective reality, an electron field, with electron particles being only a semi-classical approximation of the quantum electron beam.

    1. Madeleine Birchfield

      I should add that there do exist these things called q-observables in this picture, but they are unlike traditional observables, as q-observables are deterministic in nature, while regular observables are probabilistic in nature. The Born rule in particular only has limited validity for q-observables, and does not always apply, like for the total energy of a composite system, or for quantum fields.

      1. 4gravitons Post author

        Not knowing much about the thermal interpretation, which loophole in Bell’s Theorem does it exploit? Is it statistical independence? (In other words, is this a superdeterminism picture?)

  3. Andrei

    4gravitons,

    “But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual.”

    The problem with this view is that it can be shown to be wrong by a simple argument based on the EPR-Bohm experiment. The argument is presented below:

    At two distant locations (A and B) you can measure the X-spin of an entangled pair. QM predicts that:

    P1: If you measure the X-spin at A you can predict with certainty (probability 1) the X-spin at B.

    Let’s exclude non-locality:

    P2: The X-spin at B is not determined by the measurement at A.

    From P1 and P2 it follows that there was something at B that determined the result of the B measurement. EPR named that “something” an “element of reality”. So:

    P3: There is an element of reality at B that determines the measurement result at B.

    You may observe that there is no other logical option available (unless you think that it’s by pure luck that we manage to always predict with certainty the X-spin at B, which is rather absurd).

    From P1 and P3 it follows:

    P4: There was an element of reality at A that determined the measurement at A.

    This is because once we’ve established that the measurement at B was fixed (P3) it’s impossible that the measurement at A could have been different, right?

    OK, so P3 and P4 lead to:

    C: The X-spins of both particles, at A and B, were determined before any measurement took place (deterministic realism).

    In conclusion, the “beables” are not some unnecessary addition, designed to satisfy some classical type of thinking, but the conclusion of a sound argument. Both premises (the validity of QM’s prediction and locality) are very well established both from a theoretical and an experimental point of view.
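    For concreteness, here is a minimal numerical check of P1, a sketch assuming only the standard spin-singlet state:

```python
import numpy as np

# x-spin observable and its eigenstates (eigenvalues sorted ascending: -1, +1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
_, evecs = np.linalg.eigh(sx)
minus, plus = evecs[:, 0], evecs[:, 1]

# The spin-singlet state, built in the x-basis: (|+,-> - |-,+>)/sqrt(2)
singlet = (np.kron(plus, minus) - np.kron(minus, plus)) / np.sqrt(2)

# Probability that the two x-spin measurements, at A and B, agree:
proj_pp = np.kron(np.outer(plus, plus.conj()), np.outer(plus, plus.conj()))
proj_mm = np.kron(np.outer(minus, minus.conj()), np.outer(minus, minus.conj()))
p_agree = np.real(singlet.conj() @ (proj_pp + proj_mm) @ singlet)
print(f"P(A and B agree) = {p_agree:.3f}")  # 0.000: knowing A fixes B with certainty
```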

    1. 4gravitons Post author

      There’s a problem with this argument in the first line:

      “At two distant locations (A and B) you can measure the X-spin of an entangled pair.”

      No, you can’t. If things are sufficiently distant for locality to matter here, you can only measure at one location, at least initially. Someone else can measure at the other location, and you can eventually make a measurement to discover what that person measured.

      In the meantime, yes, you can predict with certainty what that other measurement will tell you. That doesn’t mean the other person can do that before they make their measurement: they don’t know what you measured; again, the two locations are too distant for you to communicate that, and that’s the whole point. So while it’s determined based on your knowledge, it isn’t determined based on theirs.

      You may object that “determined” doesn’t work that way: things either are one way or the other, they aren’t just determined relative to one person or another’s knowledge. But that’s the point under dispute. What I’m claiming here is that, in normal (even non-quantum) science, “determined relative to one’s knowledge” is usually what we care about, not some absolute “determined”. That’s not to say there can’t be an argument that the absolute sense is worthwhile…but you would have to make that argument, not assume it.

  4. Andrei

    4gravitons,

    “If things are sufficiently distant for locality to matter here, you can only measure at one location, at least initially. Someone else can measure at the other location, and you can eventually make a measurement to discover what that person measured.”

    This is not required. You can perform both measurements using computers and synchronized clocks. You record the measurements (the results and the time of measurement). Looking at the experimental records you conclude that indeed, your prediction about B, based on your measurement at A, was correct. No need to involve somebody else here.

    “In the meantime, yes, you can predict with certainty what that other measurement will tell you. ”

    So you in fact agree with P1, fine.

    “That doesn’t mean the other person can do that before they make their measurement: they don’t know what you measured, again, the two locations are too distant for you to communicate that, that’s the whole point. So while it’s determined based on your knowledge, it isn’t determined based on theirs.”

    This is a red herring. My argument does not depend on what any other person knows or does.

    “You may object that “determined” doesn’t work that way: things either are one way or the other, they aren’t just determined relative to one person or another’s knowledge. But that’s the point under dispute.”

    I don’t need to object to this. My argument does not depend on multiple persons doing the experiment. Just one is OK. So, no dispute here.

    “What I’m claiming here is that, in normal (even non-quantum) science, “determined relative to one’s knowledge” is usually what we care about, not some absolute “determined”. ”

    Again, this is not relevant to my argument. It’s one person, let’s keep it simple! But, I’m curious, if you have another person looking at the same experimental records would you expect some disagreement? I don’t quite get your point. As far as I know when some results are published they are not considered relevant only for the authors of the paper.

    “That’s not to say there can’t be an argument that the absolute sense is worthwhile…but you would have to make that argument, not assume it.”

    I didn’t assume anything about other persons. My argument assumes just two things: QM’s prediction is true (P1) and locality (P2). You agreed with P1. Do you agree or disagree with P2?

    1. 4gravitons Post author

      Yeah, upon reflection the question of “who is measuring B” is indeed a red herring.

      What I was trying (clumsily) to get across is a particular perspective. You’re coming at this from the usual perspective of the “beables” crowd, that Bell’s theorem gives two options: quantum mechanics, or locality, pick one. The other perspective, the one in most modern QM textbooks, is that there are three options: quantum mechanics, locality, or realism, pick two. At which point the “beables” crowd usually boggles at what the heck giving up realism could possibly mean.

      I am not a quantum foundations researcher. I am aware I don’t know all of the arguments here, so I don’t pretend to actually know whether “give up realism” is ultimately a philosophically coherent thing to do. I’ll leave that to the foundations field to sort out.

      With that said, I don’t think “give up realism” is unimaginable or inconceivable or scientifically unprecedented or anything like that. I think it can be perfectly plausible, and that’s what I’m trying to convey.

      So to make a cleaner argument for that:

      Let’s focus on your P2. One way to answer it is to say that there is in some sense nonlocality here, the X-spin at B is determined by the measurement at A. It’s just a trivial nonlocality, a harmless nonlocality. To be more specific, it’s the nonlocality of knowledge.

      To illustrate, imagine a classical version of the experiment. One object at rest separates into two, A and B, and you measure object A. Once you measure it, you now know something about object B. You know this no matter how far away B is, and you know it instantly. A property of B has thus changed nonlocally: that property being your knowledge about it.
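      A minimal sketch of that classical version, assuming nothing beyond momentum conservation:

```python
import numpy as np

rng = np.random.default_rng(0)

# An object at rest splits in two; conservation fixes the fragments'
# momenta to be opposite at the moment of separation.
p_A = rng.normal(size=10_000)   # momenta of fragment A, unknown until measured
p_B = -p_A                      # set locally, at the split

# Measuring A makes you certain about B, instantly and at any distance.
# Nothing at B changed; only your knowledge did.
print(np.allclose(-p_A, p_B))   # True: the prediction is certain, no signal sent
```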

      The QBist perspective, as far as I understand it, is that all statements in science are actually statements of knowledge. The statement “the X-spin at B is determined by the measurement at A” means simply “I know the X-spin at B because I know the X-spin at A”. Depending on who’s arguing it, either there is no “X-spin at B” independent of your knowledge of it, or there is, but there’s no point in talking about it.

      My point in this post is that that perspective isn’t so unprecedented. Philosophically, it’s something people have dallied with many times before, though it’s less popular at present. And scientifically, it isn’t all that different from a set of statements we should be comfortable with: “entropy depends on how you choose to distinguish particles”, “simultaneity depends on your reference frame”, “a CFT is defined by a spectrum of operators and their structure constants”.

      So, to finally ask you a question:

      Do you believe you have an argument that would convince someone starting from that perspective that your perspective is correct? And do you understand why that argument can’t merely be restating EPR?

  5. Andrei

    4gravitons,

    “What I was trying (clumsily) to get across is a particular perspective. You’re coming at this from the usual perspective of the “beables” crowd, that Bell’s theorem gives two options: quantum mechanics, or locality, pick one.”

    Bell’s theorem should always be discussed together with EPR, because it is a refinement of EPR. Bell accepted that EPR proved that QM is either incomplete (in which case it should be completed with the so-called hidden variables) or non-local. In his seminal paper:

    On the Einstein Podolsky Rosen paradox

    Bell writes:

    “THE paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality [2].”

    So, there is a reason for introducing hidden variables: locality. I think locality is a very well established principle of physics, so it is reasonable to find a way to harmonize it with QM. And, as my argument proves, the only way is deterministic hidden variables. There can be no local indeterministic theories.

    The purpose of Bell’s theorem is to investigate the only remaining class of local theories (the hidden variable ones). Beyond locality, the theorem assumes that the hidden variables are independent of detectors’ settings. The theorem proves that any theory that is able to reproduce the experimental results must either be non-local or superdeterministic (the hidden variables are not independent of detectors’ settings). So, it’s not true that we have to choose between QM and locality; we have to choose between locality (in the form of superdeterminism) and non-locality.

    “The other perspective, the one in most modern QM textbooks, is that there are three options: quantum mechanics, locality, or realism, pick two. At which point the “beables” crowd usually boggles at what the heck giving up realism could possibly mean.”

    This perspective is wrong as it is not based on any argument. Just look at my argument, where do you see the assumption of realism? There are only two assumptions: QM is correct (P1) and locality (P2). So I don’t care about realism at this point. It’s a red herring.

    “I don’t pretend to actually know whether “give up realism” is ultimately a philosophically coherent thing to do”

    This is not the issue. The issue is that “realism” is not an assumption, so dropping it does you no good.

    “Let’s focus on your P2. One way to answer it is to say that there is in some sense nonlocality here, the X-spin at B is determined by the measurement at A. It’s just a trivial nonlocality, a harmless nonlocality. To be more specific, it’s the nonlocality of knowledge.
    To illustrate, imagine a classical version of the experiment. One object at rest separates into two, A and B, and you measure object A. Once you measure it, you now know something about object B. You know this no matter how far away B is, and you know it instantly. A property of B has thus changed nonlocally: that property being your knowledge about it.”

    There is nothing nonlocal here. It’s just a classical correlation (Bertlmann’s socks). This is the hidden variable explanation of the experiment.

    “The QBist perspective, as far as I understand it, is that all statements in science are actually statements of knowledge. The statement “the X-spin at B is determined by the measurement at A” means simply “I know the X-spin at B because I know the X-spin at A”. Depending on who’s arguing it, either there is no “X-spin at B” independent of your knowledge of it, or there is, but there’s no point in talking about it.”

    If the X-spin at B does not exist “independent of your knowledge of it”, it means that it acquires existence at the same time as your knowledge, when A is measured. Hence, the theory is non-local. If the X-spin at B exists “independent of your knowledge of it”, we are back to hidden variables.

    “My point in this post is that that perspective isn’t so unprecedented.”

    True, but this perspective is useless. We already knew our options (locality+hidden variables or non-locality). I don’t see what QBism’s novelty is.

    “So, to finally ask you a question:
    Do you believe you have an argument that would convince someone starting from that perspective that your perspective is correct?”

    Yes, the argument presented in my original post is rock-solid. I am sure there will never be a valid rebuttal. A QBist cannot touch it. He may try but he will fail.

    “And do you understand why that argument can’t merely be restating EPR?”

    This is a bit funny because QBists accept that QM is not complete, in fact, according to them it’s not describing nature at all. It’s just an algorithm to make bets. So they don’t actually contradict EPR, they just choose incompleteness. But then, they argue for locality. Again, this is funny because if you don’t know how nature works how would you know it’s local? Their answer is ridiculous. The betting algorithm (QM) is applied locally, they say, by the agent. Hence, everything is local. But of course, this isn’t the issue. Any theory (Newtonian gravity for example) can be computed locally and this does not make it local.

    Well, my argument is more powerful than EPR. The mistake of EPR was to insist that non-commuting observables have simultaneous existence. This doesn’t work so well because it depends on counterfactuals (unperformed measurements).

    1. 4gravitons Post author

      What you’re missing/eliding here is that the QBist perspective isn’t supposed to apply just to QM, but to classical physics, and indeed all of scientific reasoning. Every theory is just an algorithm to make bets. In every case, things only come into existence with your knowledge of them, because “your knowledge of them” is the only entity about which we can speak productively. Showing that science is capable of anything beyond that is a significant philosophical project, and one that QBists and the like are not yet convinced by.

      As you are probably well aware, that doesn’t make everything local by default, because locality becomes a different statement: not about what causes what “in reality”, but the usual thing particle physicists care about when they use the word “locality”, the principle of cluster decomposition, or equivalently, the absence of superluminal signaling.

      (If you don’t like that definition of locality, fine…but it suffices for the task you’re complaining about, it distinguishes Newtonian gravity from GR.)

    2. 4gravitons Post author

      Actually, perhaps here’s a cleaner point: you seem to allow for the possibility of a non-local and non-deterministic/non-realistic theory. If so, then I think a QBist would agree with your reasoning, provided they weren’t the type to haggle over definitions. According to your definitions, QBism thinks of QM as a non-local, non-deterministic, non-realist theory. They just use the word “local” to mean something different in practice, because when doing QFT “is this polynomial in derivatives” is a useful thing to have a word for.

  6. Andrei

    4gravitons,

    “Every theory is just an algorithm to make bets.”

    I disagree. A theory like classical electromagnetism provides you with a certain understanding about the world. If you want a charge to follow a curved path you know that you need to apply a magnetic field for example. It can answer “why” and “how” questions. Sure, you can use it to make bets, but it’s far more than that.

    The QBist view, as you expose it, is more like an advanced AI program that has access to a lot of data you know nothing about. It gives you the right answer so you can make bets, but if asked why was that answer chosen it gets silent. It completely lacks explanatory power.

    “In every case, things only come into existence with your knowledge of them, because “your knowledge of them” is the only entity about which we can speak productively.”

    I think the above is either demonstrably false or irrelevant, depending on what “your knowledge of them” is supposed to mean.

    If by “your knowledge of them” you mean any information you can acquire in principle, then everything exists in the universe, just as in classical physics, with some limitation of accuracy imposed by the uncertainty principle. For example, you “feel” the gravitational field of all objects in the universe by virtue of having a non-zero mass. Once recorded at the macroscopic level, quantum measurements would also exist irrespective of your conscious awareness of them.

    On the other hand, if by “your knowledge of them” you mean only what you consciously experienced then your claim is false. It implies that a kind of gravitational/EM shock wave would be expected when new objects are discovered. Also, different agents should experience different such shock waves as each makes the discovery in a different order, etc.

    “Showing that science is capable of anything beyond that is a significant philosophical project, and one that QBists and the like are not yet convinced by.”

    I’ve just proven that above. In order to avoid conflict with experiment (those shock waves) you need to assume that everything that classical physics says exists actually exists, even if it is not directly experienced.

    “According to your definitions, QBism thinks of QM as a non-local, non-deterministic, non-realist theory. They just use the word “local” to mean something different in practice, because when doing QFT “is this polynomial in derivatives” is a useful thing to have a word for.”

    This is fine for me, it’s a logically consistent view (even if I cannot imagine what you find appealing about it), yet I don’t think this view represents QBism. I also think that accepting even such a form of “mild” non-locality requires some revisions of relativity (both special and general). You would need to introduce an absolute frame of reference, etc. It seems important, especially when working on quantum gravity. Who knows, if you can reformulate GR with an absolute frame it might be easier to progress towards unification. But I know next to nothing about string theory/QG so maybe what I am saying is plainly stupid. Let me quote here from Fuchs, Mermin, and Schack on the problem of non-locality in QBism:

    An Introduction to QBism with an Application to the Locality of Quantum Mechanics

    arXiv:1311.5253

    “QBist quantum mechanics is local because its entire purpose is to enable any single agent to organize her own degrees of belief about the contents of her own personal experience. No agent can move faster than light: the space-time trajectory of any agent is necessarily timelike. Her personal experience takes place along that trajectory.
    Therefore when any agent uses quantum mechanics to calculate “[cor]relations between the manifold aspects of [her] experience”, those experiences cannot be space-like separated. Quantum correlations, by their very nature, refer only to time-like separated events: the acquisition of experiences by any single agent. Quantum mechanics, in the QBist interpretation, cannot assign correlations, spooky or otherwise, to space-like separated events, since they cannot be experienced by any single agent. Quantum mechanics is thus explicitly local in the QBist interpretation.”

    If I give you a camera capable of instantly taking a picture of Mars, and, after 20 minutes or so (when another picture of the same event arrives from Mars at the speed of light), you could confirm that the camera really did take that picture instantly, then according to the trio of physicists above you should be undisturbed. You didn’t move faster than light, so nothing nonlocal has taken place. Do you really buy this?

    1. 4gravitons Post author

      It’s a bit ironic that you’re using electromagnetism as an example of a theory that explains, not merely predicts observations: its contemporaries certainly didn’t view it that way.

      I’m not 100% sure, but I think “every piece of information you could acquire in principle” is the right answer here. The more information you take into account, the better your predictions will be, so correctly using QM means using all information at your disposal. In practice you won’t take into account all such information, and your predictions will be wrong to the extent they depend on that information.

      This includes every measurement in your light-cone, but nothing that “hasn’t been measured yet”: i.e. no system sufficiently isolated that it hasn’t entangled with its environment. It only has a preferred frame in the sense that your own frame is a preferred frame, which is already true in SR: every measurement that you make is perforce a measurement in your own frame, you have to extrapolate to compute measurements in another.

      The QBist quote you give is phrased pretty recklessly. I suspect that they didn’t have superluminal signaling in mind, and if pressed they would agree that it is perfectly meaningful to call actual superluminal signaling (like your Mars camera example) nonlocal. For one, you can use it to make bets that you couldn’t make in a local theory. I can’t prove that, though I suspect if you read their longer pieces you can probably find something that clarifies.

  7. Andrei

    4gravitons,

    “It only has a preferred frame in the sense that your own frame is a preferred frame, which is already true in SR: every measurement that you make is perforce a measurement in your own frame, you have to extrapolate to compute measurements in another.”

    Let me know if I understand this correctly. This time we have two persons, Alice and Bob, each performing a measurement on their particle at the same time (they are not moving, so they share the same frame).

    According to the QBist account, as I understand it from you, once Alice measures her particle both spins become real, so she can claim that her measurement caused Bob’s spin to take the value Bob measures. Bob’s account is different: his measurement created both spins. When Alice and Bob meet they cannot agree on whose account is the correct one; both cannot be true at the same time. So it seems to me that we arrive at a contradiction. This version of QBism is not consistent.

  8. Andrei

    Sorry, I’ve made a mistake. In this example they can simply decide which measurement came first, and that account would be the correct one. However, other observers would disagree about which measurement came first (simultaneity is not absolute in SR), so, with regard to those observers, the QBist account is contradictory.

    1. 4gravitons Post author

      Eh, the issue with your point is the same regardless.

      You’re smuggling the word “real” in here, and “create” as well. Remember, QBists are doing something like philosophical positivism or philosophical pragmatism, they don’t think ontological statements have any meaning. A QBist, as we already established, thinks of science as a tool to make bets, nothing more. At no point do a QBist Alice and Bob think the spin “becomes real”. It just “becomes known”.

      Look, I get it, you’re not a positivist or a pragmatist, you don’t think about the world in this way. You probably think of them as betrayals of the scientific method. Myself, I think giving up counterfactuals is a pretty thorough betrayal of the scientific method, that it makes the notions of scientific laws and cause and effect completely incoherent. Either way, neither of us are philosophers, and we’re both (or rather, you and my picture of QBists are both) biting some unpalatable bullets.

      If you can’t either A) phrase your objections in a way that fits within some kind of positivist/pragmatist/”science is just for making bets” philosophy or B) provide an argument against that philosophy that isn’t just an emotional appeal, then I don’t think this conversation is going to be productive.

  9. Andrei

    “You’re smuggling the word “real” in here, and “create” as well.”

    Sorry if I’m doing that but I am trying to understand what you are saying. You said:

    “there is no “X-spin at B” independent of your knowledge of it”

    I deduced from here that once you have knowledge of it (after you measured A), the X-spin at B exists. Is this correct? Because if you don’t agree with its existence even after you have “knowledge of it”, it means solipsism. It only exists in your mind. And also, the word “knowledge” is inappropriate. Again, I’m sorry if I misunderstand your point but as I have shown you with the Mermin/Schack/Fuchs paper, your view of QBism is different from what has been published. But I promise you, I will learn fast.

    “they don’t think ontological statements have any meaning”

    Do you think an external world exists at all? If so, ontological statements should have meaning. If not, well, that’s solipsism.

    “At no point do a QBist Alice and Bob think the spin “becomes real”. It just “becomes known”.”

    In order to “know” something, that something has to be there. If it’s not there, the correct word would be “dream” or “hallucination”. Again, I’m fine if you think that it is possible to “know” something that does not exist in the external world, but this is not the usual meaning of the word.

    “Look, I get it, you’re not a positivist or a pragmatist, you don’t think about the world in this way.”

    True, but this is not the point. I am only interested to understand your view. QBists are quite vocal in denying solipsism. Bohr was (presumably) a positivist, yet at no point did he claim that ontological statements have no meaning. So it’s not easy for me to get a clear view of your claims.

    “Myself, I think giving up counterfactuals is a pretty thorough betrayal of the scientific method, that it makes the notions of scientific laws and cause and effect completely incoherent.”

    I agree with this. Counterfactual reasoning should be accepted, but one should be careful when thinking about counterfactuals.

    I agree that a scientific theory should be able to give answers for different measurements (including unperformed ones). But the original EPR paper required more than that. It required that properties measured in one environment should remain the same in a different, counterfactual experiment. This assumption is typically false when long-range interactions are involved. And QM is more or less a theory of EM, which implies such interactions.

    An electron does not move in straight lines, like a bullet, until it bumps into something. It has a curved trajectory, sensitive to all EM fields produced by any objects (including the so-called neutral ones). In a double-slit experiment, the electron “feels” the EM fields produced by the charged particles (electrons and nuclei) in the barrier and reacts according to Lorentz force. A barrier with two slits will produce a different field from a barrier with one slit, hence one cannot assume that the electron’s trajectory is the same regardless of the presence/absence of the slit the particle didn’t pass through. So, it’s OK to use counterfactuals, but when we change the geometry of the barrier we should also change the EM fields associated with that geometry. Once you do that, this quantum “mystery” evaporates.
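    (For reference, the Lorentz force on a charge $q$ moving with velocity $\mathbf{v}$ through fields $\mathbf{E}$ and $\mathbf{B}$ is $\mathbf{F} = q\,(\mathbf{E} + \mathbf{v}\times\mathbf{B})$.)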

    The same holds in gravity (another long-range field theory). The trajectory of a star is different if you change the distribution of the other stars in the galaxy, even if they are distant. In both GR and EM the properties of a system are not independent of the experimental environment. Both theories accept counterfactuals, but one should use them correctly.

    “If you can’t either A) phrase your objections in a way that fits within some kind of positivist/pragmatist/”science is just for making bets” philosophy or B) provide an argument against that philosophy that isn’t just an emotional appeal, then I don’t think this conversation is going to be productive.”

    I am trying. I am building on your statements and am using the words with their generally accepted meaning. I assume you are not a solipsist (if you are, sure, there is no point debating further) so you accept that an external world exists. If so, it is meaningful to speak about it. I assume “knowledge” to mean information about something in the external world. If not, provide your own definition and we will work from there.

    1. 4gravitons Post author

      Thanks. Part of why I seem to be being vague or inconsistent is that I’m trying to defend a few different philosophical pictures here, and, along the way, some physicists who haven’t really read the relevant philosophy. (All while not having read enough of the philosophy myself to be comfortable having my own opinion.)

      While Bohr may have described himself as a positivist and been comfortable talking ontology, the impression I get is that the actual philosophers who describe themselves as positivists wouldn’t be: they explicitly rejected metaphysics, including ontology. This isn’t equivalent to solipsism, because solipsism is itself a metaphysical claim: “nothing exists besides me”. Positivists think that whenever you try to talk about what exists and what doesn’t, rather than about what you observe and what you don’t observe, you are using a word that doesn’t actually have any meaning.

      Pragmatists instead care about whether beliefs are useful. In practice, this cashes out similarly, just instead of arguing that “exists” is a meaningless word, they think that what you characterize as “existing” can and should change based on what it enables you to do. (Wikipedia describes this as a “pluralist” view of metaphysics.) I suspect they’d be a bit more comfortable with superdeterminism and friends, provided experiments supported them. That is, if you in practice can predict the result of a quantum measurement, great! If you have this whole theory that posits deterministic measurement outcomes, but that can’t do that, and that introduces a whole bunch of formalism that doesn’t help you with predicting anything else, then it isn’t pragmatic to believe in it.

      In practice, QBists seem to sort of stir these two perspectives together. They tend to be more comfortable talking ontology than either group of philosophers, but I don’t know how much this is due just to them speaking more casually. Some of them probably have a very specific philosophical picture behind their statements, some haven’t thought it through as far. You’d need to ask them.

      I think I had misremembered an earlier comment by you as saying you rejected counterfactuals. If you just think people should be careful about them in the way you describe, I agree. But I think there is a risk in taking this too far. In practice, we need to be able to abstract away imperfections in experimental design. We need to be able to argue that, though the circumstances of two different measurements are different, they are similar enough that they shouldn’t affect the outcome. (There is no perfectly isolated experiment after all.) One tries to establish this by showing that these effects are small in comparison to the effects considered in the experiment. I think if superdeterminists had a specific model to propose, people would be a lot more friendly to them: they could estimate the size of these kinds of effects, and see what it would take to make an experiment that could suppress them enough to go beyond QM. In practice, even the most detailed superdeterminist proposals don’t yet let you do that.

  10. Andrei

    I think I understand your view better and I think I can reformulate my argument so that it makes perfect sense even for a QBist.

    First, I agree with QBist that we have no way to access “absolute reality”. It is always possible that we live in a simulation, that we are brains in vats in a Matrix-like reality we know nothing about. Even more, we have no theory of consciousness, so there exists a large gap between what we experience and what causes (if anything) those experiences.

    That being said, I don’t think this allows QBism to evade the EPR-Bohm argument. We only need to define reality in a way that does not make any assumption about the hypothetical “absolute reality”. So, here is my definition:

    If you are virtually certain that a certain experience would take place given some well-defined conditions, we define that experience to be “an object/system/property” that exists.

    Example 1: If you are virtually certain that by hitting hard with your leg in some direction you would experience “hitting a wall” we consider, by definition, that “a wall” exists there.
    Example 2: If you are virtually certain that by looking through a telescope in a certain direction you would experience “seeing a planet” we consider, by definition, that “a planet” exists there.
    Example 3: If you are virtually certain that by looking at a chair (under appropriate lighting conditions) you would have a “color red” experience we consider, by definition, that the chair is red.

    I think those definitions are consistent with our use of “real objects” in physics. They cover everything from chairs, planets and dogs to electric fields and viruses and electrons. The whole of physics (classical mechanics, relativity, electromagnetism, QM) can be understood as dealing with these experiences masquerading as “real objects”.

    Back to the EPR-Bohm argument, the X-spin at B exists (not absolutely, but as defined above) after X-spin at A is measured. It exists in the same way as chairs or planets exist, and it must be subjected to the same physical principles like relativity. Relativity itself is also a way to “organize our experiences”, so, if you want to be consistent you need to make the experience of entangled spins consistent with relativity. In conclusion, replacing an objective reality with a subjective reality does not help in any way avoiding EPR-Bohm. The way QBists pretend to evade these arguments is by using a double standard. Relativity is supposed to apply only to the objective world, so, spin measurements, being subjective, are somehow exempt. But if we use QM and relativity consistently (either both objective or both subjective) the difficulty remains the same. Even from a QBist perspective, hidden variables are required to preserve locality.

    Now let’s clear up the problem of counterfactuals. You say:

    “But I think there is a risk in taking this too far. In practice, we need to be able to abstract away imperfections in experimental design. We need to be able to argue that, though the circumstances of two different measurements are different, they are similar enough that they shouldn’t affect the outcome. (There is no perfectly isolated experiment after all.)”

    I fully agree, however, I am not speaking about some imperfections. It’s fundamental physical principles/phenomena that are completely ignored. I’m not complaining that, in a two-slit experiment, the nuclei of the atoms in the barrier are approximated to point charges or something like that. I could not find any treatment of this experiment, no matter how simplified, using the correct theory (electromagnetism). Feynman and a huge cohort of great physicists are surprised that we cannot make an interference pattern using bullets. What about using charged bullets instead?

    The independence assumption of Bell’s theorem rests on a confusion about field theories. The assumption is that if you separate your systems enough, they become “independent”. By this logic, Earth should be more independent from the Sun than Mercury is and Pluto’s motion should be virtually random. Well, this is not how those theories work. There is no dependency of the EM equations (or gravity’s) on any scale. The motion of a charge is determined by the field produced by all charges, everywhere, regardless of the distance. Moving a charge or a group of charges far away does not make it “independent”. The equations remain exactly the same. What is called “superdeterminism” is just a consequence of how long-range field theories behave. Large distance means the field is weaker, but a weak field does not give you “independence”.
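    To put rough numbers on “weaker, but never zero”, a quick back-of-the-envelope sketch using only Coulomb’s law:

```python
# Coulomb field of a single elementary charge at increasing distances:
# it falls off as 1/r^2, but is nonzero at every finite distance.
k = 8.9875517923e9     # Coulomb constant, N*m^2/C^2
e = 1.602176634e-19    # elementary charge, C

for r in [1e-2, 1.0, 1e3, 1e6]:   # distances in metres
    print(f"r = {r:8.0e} m   E = {k * e / r**2:.3e} V/m")
```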

    One argument against superdeterminism, vocally propagated by Tim Maudlin and Scott Aaronson, is that superdeterminism is not scientific because, they say, if the object of your experiment is pre-correlated with your measurement device you cannot do medical tests, or chemistry experiments, and so on. Here are some reasons this argument fails:

    1. A superdeterministic interpretation of QM would give, by definition, the same predictions as QM. So, if QM allows you to do medical tests, so would a superdeterministic interpretation of QM. The superdeterministic correlations are confined to the quantum world, just like the non-locality of Bohm’s theory.
    2. The lack of “independence” implied by the electromagnetic interaction applies to the fundamental level of elementary particles. Large, macroscopic objects do not obey those laws directly. The existing correlations at the fundamental level cancel out, statistically, at the macroscopic level.
    3. The lack of independence is a direct consequence of field theories, so, unless you are ready to replace them, live with it!

    “I think if superdeterminists had a specific model to propose, people would be a lot more friendly to them: they could estimate the size of these kinds of effects, and see what it would take to make an experiment that could suppress them enough to go beyond QM.”

    Here is ‘t Hooft’s latest paper:

    Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy

    arXiv:2103.04335

    We will see what the reaction of the physics community will be, but I am quite sure it will be ignored. They “know” it can’t be done, so it must be wrong. Why bother?

    1. 4gravitons Post author

      So, there’s a usual QBist slogan that stops your initial argument pretty early: “probability 1 is still a probability”. That is, going from uncertain to certain of B based on your measurement of A doesn’t change B, and doesn’t make anything “exist” at B that didn’t before your measurement. Only a measurement at B (if that, depending on the framing of QBism) will do that.

      Another kind of response (one which cedes your first point instead of rejecting it) would be to say that special relativity shouldn’t be understood as a restriction on all “things”, or all “experiences”, or anything like that, but specifically on causal influence: on things that can be used to send signals. If you can’t send a signal with something, you can’t set up a paradox, nor do you violate Eötvös experiments. So even if you want to view it as violating relativity, it does so in a way that neither breaks the formalism of the theory nor contradicts experiment.

      To clarify, is your paragraph about charged bullets speculating that if you just rig up a classical experiment with charged bullets, you’d find an interference pattern? You’re aware you can just compute that yourself, right? Or more generally, that experiments have used many different types of diffraction gratings, not just conducting ones?

      When people argue that superdeterminism rules out pretty much all experimental science, they’re not saying that superdeterministic QM rules out experimental science. They’re saying that superdeterminism demands an approach to epistemology that, if you applied it elsewhere, would rule out all experimental science. And since there’s nothing about the approach that restricts it to QM, you have to apply it elsewhere as well.

      Unlike Maudlin and Aaronson (or your summary of them at least, not having read their arguments myself), I don’t think superdeterminism necessarily does this. I think it depends on the approach taken.

      On the one hand, a superdeterminist might believe that, in principle, one can perform experiments that are independent enough from what they measure that one can approximate them that way within experimental error. They just argue that this is not true of QM, that it so happens that the experiments people have done in QM and QFT have not been cautious enough in this respect, that there are effects from the choice of measurement that are large enough to affect the outcome. Such a superdeterminist must believe that, eventually, reality will deviate from QM: for example, in ‘t Hooft’s picture, once we can probe short enough time scales to notice his “fast fluctuating variables”. I get the impression this is Sabine Hossenfelder and Tim Palmer’s view.

      On the other hand, a superdeterminist might think that, no matter what you do or how careful you are, you can never safely neglect correlations between your measurement device and the subject of your measurement. Such a superdeterminist can freely believe that QM will never be violated, but in doing so is giving up the ability to do any science. If that principle, that you can never safely neglect correlations between your measurement device and subject, is true, then it is true generally, not just for QM, and the chain of reasoning people used to get to QM was fallacious from the first scientist onwards. That’s what Aaronson and Maudlin are warning about.

      Regarding ‘t Hooft’s paper, it doesn’t look like he estimates the scale of the violation of QM, or describes which experiments could probe it. That’s the most crucial step: make a prediction, put your money where your mouth is, and the rest follows. I’d worry that the “fast-fluctuating variables” can be made as fast as desired to skirt experimental constraints, in the same way one sometimes complains people make supersymmetric particles as heavy as needed to avoid constraints.

      (It’s also a bit weird that he’s claiming some variety of uniqueness when Tim Palmer’s setup looks very different…those two should probably sort out what’s going on there.)

  11. Andrei

    “So, there’s a usual QBist slogan that stops your initial argument pretty early: “probability 1 is still a probability”. That is, going from uncertain to certain of B based on your measurement of A doesn’t change B, and doesn’t make anything “exist” at B that didn’t before your measurement. Only a measurement at B (if that, depending on the framing of QBism) will do that.”

    I can understand the above in two ways:

    1. There was nothing at B before the measurement at A and there is still nothing at B after the measurement at A. The X-spin at B is “created” by the measurement at B.
    2. There was “something” at B before the measurement at A and that “something” at B does not change after the measurement at A. That “something” at B determines the measurement result at B. (I assume that if that “something” at B does not determine the measurement result at B it’s simply irrelevant and we are back to case 1.)

    Do you agree with either of these two options, or is it something else?

    “Another kind of response (one which cedes your first point instead of rejecting it) would be to say that special relativity shouldn’t be understood as a restriction on all “things”, or all “experiences”, or anything like that, but specifically on causal influence: on things that can be used to send signals. If you can’t send a signal with something, you can’t set up a paradox, nor do you violate Eotvos experiments. So even if you want to view it as violating relativity, it does so in a way that neither breaks the formalism of the theory or contradicts experiment.”

    I think I can illustrate a paradox in case the measurement at A is instantly “creating” a result at B. If the measurements are space-like, there is a frame in which A happened before B (let’s call that C) and another frame in which B happened before A (let’s call that D).

    According to an observer in C, the result at A is completely random (you said you want indeterminism) and the result at B is determined by the one at A. In this frame, B did not happen so there is no constraint on the result at A.

    According to an observer in D, the result at B is completely random and the result at A is determined by the one at B. In this frame, A did not happen so there is no constraint on the result at B.

    What is the probability that the C and D observers will agree on the outcome of the experiment? It’s 50%. The measurements in each frame are like independent coin flips. And the probability for two coin-flips to give head-head or tail-tail is 50%, the rest being head-tail and tail-head. But this cannot happen because we are speaking here about the same experiment seen from two frames. So, either the first measurement is not random (the hidden variable interpretation), or there is a preferred (absolute) frame, all observers agree with the order of measurements in that frame, and you can save indeterminism. Bohm’s theory does not let you send FTL signals either, but the Bohmian camp accepted that an absolute frame is unavoidable. An absolute frame does not seem to go well with the QBist view, but that’s the way it is.
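    To spell out that arithmetic, a toy simulation of the two frames’ stories, under the assumption that each frame’s “first” measurement is an independent coin flip:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Frame C's story: A's result is a fresh coin flip, B's is then forced to -A.
a_in_C = rng.choice([-1, +1], size=n)
# Frame D's story: B's result is a fresh, independent coin flip, A's is -B.
b_in_D = rng.choice([-1, +1], size=n)
a_in_D = -b_in_D

# If the two stories really were independent, they would describe the same
# experiment identically only about half the time.
print(f"accounts agree: {(a_in_C == a_in_D).mean():.3f}")  # ~0.5
```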

    “To clarify, is your paragraph about charged bullets speculating that if you just rig up a classical experiment with charged bullets, you’d find an interference pattern?”

    No, I have no idea what the outcome would be (of course you also need a barrier built out of charged “bullets” to play the role of the atoms). My point is that nobody (as far as I know) tried. So there is no justification to claim that classical physics has any problem with this experiment. It’s simply an experiment that was not properly analyzed according to classical physics.

    “They’re saying that superdeterminism demands an approach to epistemology that, if you applied it elsewhere, would rule out all experimental science.”

    Superdeterminism does not require any new “approach to epistemology”. One should look at the experiment in discussion and see what interactions take place. If A interacts with B we would expect a correlation. Earth interacts gravitationally with the Sun, hence we expect a correlation. That correlation was confirmed experimentally. The same should be done with Bell’s tests. Something like this:

    We have an electron-positron pair and two distant magnets (Stern-Gerlach devices). Do the electron and the positron interact with those magnets? Yes. Should we expect a correlation? Yes, maybe, let’s evaluate the problem! We may find out that those EM interactions cancel out, so we may find QBism appealing. But it’s possible that a careful assessment of those interactions would explain entanglement in terms of trivial EM interactions. We need to find out.

    Exactly the same reasoning should be applied everywhere, including medical tests. Does a lab mouse interact electromagnetically and gravitationally with the experimenter? Yes. Can this interaction mutate the DNA or increase the number of blood cells? No, the only change would be at the level of EM microstates. When the experimenter moves, the EM fields around the mouse would change, some electrons inside the mouse would go a little up, some nuclei a little down, that’s all. The experiment is not sensitive to a specific EM microstate so we can assume independence.

    What makes the quantum experiments “quantum” is their sensitivity to a specific microstate. The way an electron-positron pair splits depends on the exact configuration of the EM fields at that location and this determines the results. Change those fields and the pair may not split at that time, or split in a different direction and giving you completely different results.

    “And since there’s nothing about the approach that restricts it to QM, you have to apply it elsewhere as well.”

    Hopefully, I’ve explained this above. Initially, you need to look at all possible interactions. You may decide that some experiments depend on a specific microstate (the quantum ones) or not (classical mechanics). So, we have a solid justification to treat the system as independent from how it is measured at the macroscopic level, but not at the microscopic level.

    “On the one hand, a superdeterminist might believe that, in principle, one can perform experiments that are independent enough from what they measure that one can approximate them that way within experimental error.”

    This is not a matter of belief. Superdeterminist or not, a physicist should justify his theory based on what physical effects take place in the experiment. I know of no justification for the independence assumption in Bell’s theorem. Such a justification can be offered for macroscopic experiments, as presented above. Any unjustified assumption should be rejected.

    “Such a superdeterminist must believe that, eventually, reality will deviate from QM: for example, in t’Hooft’s picture, once we can probe short enough time scales to notice his “fast fluctuating variables”.”

    This depends on what is possible to measure according to the theory. It could be that the uncertainty principle can be defeated in some situations, or it could not.

    “Such a superdeterminist can freely believe that QM will never be violated, but in doing so is giving up the ability to do any science.”

    Why? If QM is a mathematically correct statistical approximation of an underlying theory, QM will never be violated.

    “If that principle, that you can never safely neglect correlations between your measurement device and subject, is true, then it is true generally, not just for QM”

    Whether it is “safe” to neglect those correlations should be established based on the physics of the experiment, not on dogma.

    “Regarding t’Hooft’s paper, it doesn’t look like he estimates the scale of the violation of QM, or describes which experiments could probe it.”

    ’t Hooft took a bottom-up approach. He provided a proof that QM can be recovered from a local, deterministic mechanism. In itself (assuming his math is correct) this is a Nobel-level discovery that would give us a deep understanding of physics.

    “That’s the most crucial step: make a prediction, put your money where your mouth is, and the rest follows.”

    Can you tell me what predictions the QBists have proposed so far? At least ’t Hooft claims:
    “All interaction parameters for the fundamental particles are calculable in terms of simple, rational coefficients.”
    It’s not very impressive at this time, but it’s at least something. How is QBism progressing here?

    1. 4gravitons Post author

      “There was nothing at B before the measurement at A and there is still nothing at B after the measurement at A. The X-spin at B is “created” by the measurement at B.”

      I think this would be the typical QBist answer, yeah.

      For the second kind of response, I agree that it results in a preferred frame. This corresponds to the vague statement I made earlier, that the observer’s frame (whichever of A, B, C, or D you happen to be) is preferred. Alternatively, one can literally be a Bohmian, and pick whatever preferred frame their model picks.

      From that, it would look like there are three choices here: that the spins are only “created” when measured, that the observer’s frame is preferred, or that some other frame is preferred according to some specific Bohmian model. The key thing is that, from a positivist or a pragmatist perspective, these are actually all the same choice. As a positivist, there is no experiment you can do that can distinguish between these possibilities and tell you which frame is preferred. As a pragmatist, no matter which option you choose, you still can’t send superluminal signals; your available actions (aside from chatting metaphysics) are identical. From either perspective, it makes no difference which one of these you choose; and since it makes no difference, you should either not talk about it at all (positivist) or pick whichever option is more convenient in the moment (pragmatist).

      For the rest, I think you’re a little too hung up on an arbitrary distinction between microstates and macrostates. The only difference between a “microstate” and a “macrostate” is size. If you’re experimenting on a mouse, you know that the electric field from your body won’t do much that matters biologically. But a stronger electromagnetic field would, and would kill the mouse. Similarly, if you’re calculating the orbit of Pluto, you can’t neglect the Sun, but you can typically neglect the Earth. In each case, it’s not that there is a bright line between “micro” and “macro”, but rather that you must estimate your errors based on your model. If the errors are small enough, you can treat your measurement choices as independent. If not, you can’t.
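
      To make that error estimate concrete, here is a rough back-of-the-envelope sketch (rounded astronomical values; the numbers and variable names are my own, for illustration) comparing the Sun’s and the Earth’s gravitational pull on Pluto:

      ```python
      # Order-of-magnitude only: how big an error do you make by neglecting
      # Earth's gravitational pull on Pluto, relative to the Sun's?
      G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
      M_SUN = 1.989e30      # kg
      M_EARTH = 5.972e24    # kg
      R_PLUTO_SUN = 5.9e12  # mean Pluto-Sun distance, m
      R_EARTH_SUN = 1.5e11  # ~1 AU, m

      a_sun = G * M_SUN / R_PLUTO_SUN**2
      a_earth = G * M_EARTH / (R_PLUTO_SUN - R_EARTH_SUN)**2  # closest approach

      print(f"a_sun   = {a_sun:.2e} m/s^2")
      print(f"a_earth = {a_earth:.2e} m/s^2")
      print(f"ratio   = {a_earth / a_sun:.1e}")  # ~3e-6, a parts-per-million effect
      ```

      If parts-per-million precision is good enough for what you’re modeling, you neglect the Earth; if it isn’t, you can’t.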

      I find this statement particularly baffling:

      “Why? If QM is a mathematically correct statistical approximation of an underlying theory, QM will never be violated.”

      An approximation by definition is eventually violated. If not, it’s not an approximation, it’s the full result. This is certainly true in classical statistical mechanics: take a system far enough from equilibrium, or zoom in to a small enough scale, and statistical mechanics breaks down. If quantum mechanics is ultimately an approximation to a classical statistical model, then the right experiment will reveal that, violating quantum mechanics in the process. That’s what approximation means.

      QBism, by contrast (as well as all of the other “interpretations” that don’t alter QM, like many-worlds, consistent histories, some varieties of Bohm, etc…) doesn’t predict anything that QM doesn’t already. They don’t view QM as an approximation, but as the full theory, so they don’t need any extra tests beyond the ones already carried out daily on QM.

      By the way, this is the other thing I think you’re missing with your talk of charged bullets. You seem to be thinking of QM as something tested in a few seminal experiments, like double-slit, Stern-Gerlach, and the Aspect et al. tests of EPR. QM is used every day, in a variety of technologies and a variety of physical disciplines. The simpler ways it could be violated or trivialized, like your speculation about EM effects on the diffraction grating, are things that are tested constantly, not directly but implicitly by their relevance to routine problems. For example, you might be wondering if electron diffraction has something to do with electrons being charged. It doesn’t: neutron diffraction is not just something people have tested, it’s used every day to measure the crystal structure of materials. You complained there wasn’t an EM-based calculation of the double-slit experiment, but analogues to the double-slit are performed every day by laser physicists, using shaped electromagnetic fields for the gratings. They aren’t going to be called “double-slit experiments” because they don’t view themselves as testing the double-slit experiment: that’s just settled science to them. Instead, they’re using it to test something else, or build something else, and the fact that their construction works is indirect evidence. Meanwhile, quantum computing researchers’ holy grail is to isolate a quantum system reliably enough that the external world won’t mess with it. Everything they do is aimed at getting the interaction with the outside world as low as possible. If QM is just an artifact of not shutting off that interaction enough, they will eventually find that out.

      That doesn’t guarantee none of these people have missed something. It just makes it hard, a lot harder than just saying “well, there might be some interaction”. You need some decent argument why the effects are so small they haven’t been seen yet. ’t Hooft at least sort of has this argument, by postulating that the variables are very quickly changing… though note that he admits his model has problems with Lorentz invariance as a result.

      (And again, you should be able to calculate the classical EM answer to a charged bullet moving through a charged wall yourself, if you have the physics background.)

      I think you missed the “if only” in the paragraph before ’t Hooft’s sentence that you quote there.

  12. Andrei

    Andrei: “There was nothing at B before the measurement at A and there is still nothing at B after the measurement at A. The X-spin at B is “created” by the measurement at B.”

    4gravitons: “I think this would be the typical QBist answer, yeah.”

    In that case, you are in trouble, because such an answer contradicts QM. The measurement at B would be independent of what happened at the time of emission (there was nothing at B before the measurement at A) and also of the measurement at A (there is still nothing at B after the measurement at A). The probability of anticorrelation would be 50% (independent random coin flips), not 100% as required. So, if this is really “the typical QBist answer”, QBism is falsified.

    “For the second kind of response, I agree that it results in a preferred frame. This corresponds to the vague statement I made earlier, that the observer’s frame (whichever of A, B, C, or D you happen to be) is preferred.”

    This is exactly the situation that leads to a paradox (the C and D observers from my previous post used their own preferred frame). Again, this leads to a contradiction with the experiment (50% instead of 100% anticorrelation). QBism is falsified again.

    “Alternatively, one can literally be a Bohmian, and pick whatever preferred frame their model picks.”

    This has to be implemented in the formalism. If QBism is to be used, you need to specify what frame is to be used, when, how it is found, etc. You cannot just say: take a look at the latest Bohmian paper and use that!

    “The key thing is that, from a positivist or a pragmatist perspective, these are actually all the same choice.”

    As shown above, the first and second options lead to falsification, while the third is undefined in QBism’s formalism. At this point the theory is unusable.

    “As a positivist, there is no experiment you can do that can distinguish between these possibilities and tell you which frame is preferred”

    This is bad for QBism. If it cannot distinguish between a situation that leads to a contradiction with experiment and one that could in principle work it cannot be of any use. If you are sure that an absolute frame cannot be implemented in QBism then you are stuck with options 1 and 2, both leading to falsification. This seems to lead to the conclusion that realism is inevitable.

    “As a pragmatist, no matter which option you choose, you still can’t send superluminal signals; your available actions (aside from chatting metaphysics) are identical.”

    True, but you still can arrive at a contradiction with the experiment.

    “For the rest, I think you’re a little too hung up on an arbitrary distinction between microstates and macrostates. The only difference between a “microstate” and a “macrostate” is size.”

    I am using “microstate” and “macrostate” as they are defined in statistical mechanics. It has nothing to do with the size of the system. A macrostate is a statistical (macroscopic) quantity like temperature, pressure, density, the position of the center of mass, etc. A microstate consists of the positions/momenta of all internal particles, the magnitude of the electric/magnetic/gravitational fields at each point, and the magnitude of all charges. A macrostate corresponds to an ensemble of microstates.

    The EM interaction between two distant, neutral objects (say two chairs 100 km apart) does not change the macrostates (the EM forces between the internal electrons and quarks cancel out) but changes the microstates. Hence, we can neglect those interactions when the experiment’s results depend on macrostates. We cannot neglect them when the results depend on microstates. This is why it is OK to assume independence between such objects when doing classical mechanics, thermodynamics, fluid mechanics, chemistry, biology, or medicine. In a Bell test, the result depends on microstates (the exact conditions at the locus of emission), so the independence condition is not justified.

    “An approximation by definition is eventually violated. If not, it’s not an approximation, it’s the full result.”

    Probably we understand “violation” differently. If you have a theory that predicts the exact location where the electron goes in a double-slit experiment, you do not violate QM, as its probabilistic prediction would still be OK. QM does not postulate that a better prediction cannot exist. If QM is a correct statistical limit of the underlying theory, it will be mathematically impossible to violate (falsify) it.

    “take a system far enough from equilibrium, or zoom in to a small enough scale, and statistical mechanics breaks down”

    If the theory was only derived for equilibrium conditions, it is to be expected that it will fail when those conditions are not met. But if you provide a rigorous statistical treatment for non-equilibrium conditions, it will work. If the system is very small (a few particles), the predicted deviations would be large, but the statistics will still work (just like in the case of QM).

    “QBism, by contrast (as well as all of the other “interpretations” that don’t alter QM, like many-worlds, consistent histories, some varieties of Bohm, etc…) doesn’t predict anything that QM doesn’t already. They don’t view QM as an approximation, but as the full theory, so they don’t need any extra tests beyond the ones already carried out daily on QM.”

    As shown above, that doesn’t work well for QBism. Many-worlds does not work either (it has no way to get the probabilities right, and the attempts to do that based on decision theory are inherently circular). Consistent histories does not work either (it accepts a kind of hidden variables – the measurements reveal pre-existing properties – but then claims that all frameworks are valid, which leads to a contradiction; I can go into more detail if you want). Bohm’s interpretation works (at least for the non-relativistic part), but it goes beyond QM by postulating real particles. So, after almost 100 years, the EPR conclusion still stands. There are no local and complete versions of QM.

    “By the way, this is the other thing I think you’re missing with your talk of charged bullets. You seem to be thinking of QM as something tested in a few seminal experiments, like double-slit, Stern-Gerlach, and the Aspect et al. tests of EPR.”

    Not at all.

    “For example, you might be wondering if electron diffraction has something to do with electrons being charged. It doesn’t: neutron diffraction is not just something people have tested, it’s used every day to measure the crystal structure of materials.”

    In order for this experiment to work, the barrier should behave like a barrier (stop the particles). The only way to stop the particles is to interact with them. Neutrons are neutral in the same way atoms or chairs are neutral (they are made out of equal numbers of positive and negative charges). Such neutral objects interact electromagnetically (a dipole has a field). Try the experiment with non-interacting particles (like neutrinos) and no interference would be observed.

    “Meanwhile, quantum computing researchers’ holy grail is to isolate a quantum system reliably enough that the external world won’t mess with it.”

    Sure, but it’s important here what you mean by “isolating”. The concept of an isolated electron (an electron that does not interact electromagnetically with the rest of the universe) is meaningless. An electron IS a field that permeates the whole universe; there is nothing you can do about that. This can be rigorously proven. Let’s assume that you can build a box that stops electric/magnetic/gravitational fields. Closing that box would “delete” the charge/mass from the universe. But charge and mass/energy are conserved, so such a box cannot exist.

    The only way you can isolate a system is to prevent other particles from bumping directly into it (make a vacuum), go underground to prevent cosmic radiation from bumping into it, and cool it to reduce infrared radiation bumping into it. Electric/magnetic/gravitational fields are still there.

    And, by the way, the above simple argument proves that all those famous experiments that depend on such insulating magical boxes (Schrödinger’s cat, Wigner’s friend, and the like) are meaningless. There is no box that stops you from tracking a cat inside it. The cat is never in a superposition of dead/alive states. Its state is objective for the whole universe, box or no box (within the limits of the uncertainty principle, sure – but for such a large object they don’t matter).

    1. 4gravitons Post author

      “The measurement at B would be independent of what happened at the time of emission (there was nothing at B before the measurement at A) ”

      This statement doesn’t follow from the one in parentheses. “There needs to be a preexisting object to lead to a correlation” is a metaphysical assumption. If you rule out metaphysics (by being a positivist or a pragmatist), then it is unjustified.

      “This is exactly the situation that leads to a paradox (the C and D observers from my previous post used their own preferred frame).”

      You are not both C and D, you are one or the other. In the perspective I was describing, you, the person who happens to be using QM right now, are the one with the preferred frame. Not any arbitrary observer, but the specific one doing the calculation.

      “This is bad for QBism. If it cannot distinguish between a situation that leads to a contradiction with experiment and one that could in principle work it cannot be of any use. If you are sure that an absolute frame cannot be implemented in QBism then you are stuck with options 1 and 2, both leading to falsification. This seems to lead to the conclusion that realism is inevitable.”

      Remember, QBism isn’t positivism or pragmatism. QBism is an interpretation of QM; positivism and pragmatism are philosophies. My point is that if you start out as a pragmatist or positivist, then you can’t distinguish the three possibilities: they’re identical. So if you begin as a positivist or a pragmatist, you are forced to treat the three possibilities as physically identical, until and unless you discover phenomena that go beyond QM (superluminal signaling for Bohm, sufficiently precise experiments to capture the hidden variables for superdeterminism).

      As for the discussion of macrostates and microstates, are you familiar with Effective Field Theory? That would make this easier to explain. If not, then I should just say that it seems like you don’t understand that statistical mechanics is subjective. The scale at which one delineates macrostates versus microstates in any given system is a choice, depending on what one is choosing to model. In plenty of well-studied quantum systems, the relevant objects are macrostates with respect to a microscopic theory: for example, protons and neutrons with respect to quarks and gluons, or atoms with respect to protons and neutrons.

  13. Andrei

    “ “There needs to be a preexisting object to lead to a correlation” is a metaphysical assumption. If you rule out metaphysics (by being a positivist or a pragmatist), then it is unjustified.”

    I have provided two options: a preexisting “element of reality” (a hidden variable), OR a direct influence exerted by A on B. If you deny BOTH, you reject any causal connection between the results recorded at A and B; hence we should expect the same correlation as in the case of two independent coin flips (fundamentally random events). Now, if you can give me a reason for expecting 100% anticorrelation from independent coin flips, I am eager to know it. If you deny that the measurements should behave like independent coin flips, then you must accept that the measurements are predetermined (past/local causality), OR that one causes the other (non-local influence).

    “You are not both C and D, you are one or the other.”

    True, but what stops C from comparing his account with D’s after the experiment? Sure, before they meet, C can believe A measured first and D can believe B measured first, no problem. However, once they meet, they would expect their experimental records to coincide only 50% of the time (random coin flips). The fact that they will notice a 100% coincidence would force them to accept that at least one prior belief was wrong (true assumptions cannot lead to false conclusions). They would either accept that the measurements cannot be random or find a way to decide, in an observer-independent way, which measurement came first.

    “My point is that if you start out as a pragmatist or positivist, then you can’t distinguish the three possibilities: they’re identical.”

    The contradictions explained above are a matter of logic. Independent random events with binary outcomes should coincide only 50% of the time. Of course, you can claim that you are just lucky: you always get your prediction right, for no reason whatsoever. This is logically possible, but I doubt the scientific community would be convinced. Once you accept logic and basic statistics, you can see that your initial belief, that the measurements are random, must be false. And from here you can take the local (hidden-variable) route or the non-local one. If pragmatism or positivism requires you to abandon logic, I think it is time to drop them.

    “So if you begin as a positivist or a pragmatist, you are forced to treat the three possibilities as physically identical, until and unless you discover phenomena that go beyond QM (superluminal signaling for Bohm, sufficiently precise experiments to capture the hidden variables for superdeterminism).”

    Would you agree with the reformulation below?

    “So if you begin as a positivist or a pragmatist, you are forced to treat the three possibilities as physically identical, until you discover phenomena that go beyond QM, or a logical contradiction can be proven”?

    If so, I’ve proven that the first two options lead to a logical contradiction, so you either have to accept the third (absolute reference frame) or abandon positivism/pragmatism or abandon logic. The choice is yours.

    “As for the discussion of macrostates and microstates, are you familiar with Effective Field Theory?”

    I only have formal studies in non-relativistic QM, not in QFT. But this does not matter, because Bell’s theorem is not about testing QFT, but hidden-variable models. In order to justify the independence assumption for ruling out those models, you need to show that the assumption holds for THEM. The model I consider is classical electromagnetism; I am free to choose whatever hidden-variable model I like. Hopefully, you would not claim that classical EM is not science. According to this model, I argue, the independence assumption in the context of Bell’s tests is not justified. I assume that what you say about QFT is true, but it does not contradict my point.

    “I should just say that it seems like you don’t understand that statistical mechanics is subjective. The scale at which one delineates macrostates versus microstates in any given system is a choice, depending on what one is choosing to model. “

    And this is exactly what I have done. In order to model the split of an electron/positron pair in the context of classical electromagnetism, one should take into account the exact configuration of the fields at the location of the pair. Describing the particle source in terms of its macroscopic orientation or temperature does not allow you to determine when and how the pair will split. If you want to use classical electromagnetism to describe the interaction between two chairs 100 km apart, a complete specification of the fields at their location is not necessary, because the EM forces cancel (I have no rigorous proof of that, I admit it’s just my intuition here). So, the macroscopic characteristics of the chairs (except, I guess, temperature – if they are at different temperatures, they would equilibrate in time) could be assumed to be independent.

    1. 4gravitons Post author

      “because the EM forces cancel (I have no rigorous proof of that, I admit it’s just my intuition here)”

      They don’t cancel perfectly, they just approximately cancel, up to the limits you are choosing to consider in your macroscopic picture. Does that make my point a bit clearer? There’s no difference between what you are doing there and what people do when they ignore gravity in a double-slit experiment.
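
      As a crude order-of-magnitude sketch of “approximately cancel” (my own invented numbers: one atom of the chair treated as a dipole with an atomic-scale charge separation, ignoring O(1) angular factors):

      ```python
      import math

      EPS0 = 8.854e-12  # vacuum permittivity, F/m
      Q = 1.602e-19     # elementary charge, C

      r = 100e3   # distance between the two "chairs", m (from the discussion)
      d = 1e-10   # ~atomic separation between the + and - charge, m

      # Field of one *unpaired* charge at distance r:
      monopole = Q / (4 * math.pi * EPS0 * r**2)
      # Field of the same charge *paired* into a neutral dipole (p = Q*d):
      dipole = Q * d / (4 * math.pi * EPS0 * r**3)

      print(f"monopole field: {monopole:.2e} V/m")
      print(f"dipole field:   {dipole:.2e} V/m")
      print(f"suppression ~ d/r = {d / r:.0e}")
      ```

      The residual field is suppressed by a factor d/r: tiny, but not zero, and whether you may neglect it depends on the precision your model needs.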

      Regarding the rest, let me focus on the “C and D make different claims” version. As mentioned, I get the impression this is not what QBists have in mind, but it works to illustrate my overall point. (And keeping track of all the different versions has gotten quite messy, to the extent that I likely screwed up my explanations somewhere.)

      Suppose C and D are both extremely self-centered. Each one believes that they have some magical role to play in the universe, that they and only they determine the preferred frame for all of physics. Suppose these two self-obsessed physicists meet, and compare their observations.

      They will disagree about which measurement happened first, and hence which one “collapsed the wavefunction”. But if they look at the actual measurements they made, they will notice no disagreement. Each one will see the same correlation between measurements at A and B, each one will see them not as independent but as always paired in a particular way. They will have different descriptions of why those measurements ended up that way. But if they tried to come up with an experiment that would distinguish, they would come up empty. The only experiment that could distinguish the two descriptions would be one that measures the particles before they are measured, which is impossible by definition.

      So C and D are at an impasse. Both believe something nonlocal is going on, both believe that their version of the story is correct. You can go to either and tell them they’re being ridiculous, that they ought to agree, that their situations are symmetrical. But you can’t propose an experiment to show them that, or find a logical contradiction in their opinions. You can criticize them on aesthetic grounds, metaphysical grounds, philosophical grounds: but not via physics or mathematics.

      A positivist or a pragmatist looks at this situation, and sees a textbook case of their philosophy. Here we have two people disagreeing about something, in a way that no experiment can ever resolve, that has no implications for any technology or actions outside of their silly argument. To a positivist or a pragmatist, the conclusion to draw is that these people are arguing about nothing. They are playing a game with language, not talking about a valid object of study. And the proper response, if you are a positivist or a pragmatist, or in general if you “view reality as an algebra of observables”, is to just refuse to play that game. To say that, not only are C and D wrong, but the thing they are arguing about is wrong. The question “which measurement happened first?” is wrong. The question “what were the spins before you heard about them?” is wrong. The right question is “what is the mathematics I need to use to predict observations?”. The right question is “what is the algebra of observables?”

      So to distill out one question, and maybe cut out some of the earlier confusion: let’s say you were faced with one of these people. Let’s say C. Suppose C is convinced their frame is the correct frame, and that measurements nonlocally collapse the wavefunction in the order they observe. Can you propose an experiment to convince C that they’re wrong?

  14. Andrei

    “Suppose C and D are both extremely self-centered. Each one believes that they have some magical role to play in the universe, that they and only they determine the preferred frame for all of physics. Suppose these two self-obsessed physicists meet, and compare their observations.”

    So, I guess those physicists reject relativity, right? Do they replace it with some other theory equally effective in explaining experiments like the half-life of the muon? I admit, my argument is effective only if the agent ascribes a very high probability to the currently accepted theories being right.

    “They will disagree about which measurement happened first, and hence which one “collapsed the wavefunction”. But if they look at the actual measurements they made, they will notice no disagreement. Each one will see the same correlation between measurements at A and B, each one will see them not as independent but as always paired in a particular way. They will have different descriptions of why those measurements ended up that way. ”

    I agree.

    “But if they tried to come up with an experiment that would distinguish, they would come up empty.”

    Yes, they cannot distinguish who is right and who is wrong but they know that at least one of them must be wrong.

    “So to distill out one question, and maybe cut out some of the earlier confusion: let’s say you were faced with one of these people. Let’s say C. Suppose C is convinced their frame is the correct frame, and that measurements nonlocally collapse the wavefunction in the order they observe. Can you propose an experiment to convince C that they’re wrong?”

    Yes. I would invite C to send two drones, equipped with high-quality cameras of his choice, to record the observations. In this case, neither frame is “his frame”; he is just relaxing at home. I would ask him to decide which of the drones is right and which is wrong, and what reason stands behind that judgment. I would also ask him to explain the reason for his strong belief in the randomness of the measurements. From a practical point of view, randomness is indistinguishable from pseudorandomness.

    1. 4gravitons Post author

      In the thought experiment, C rejects relativity only so far as they assume their reference frame is special. They still understand that, if they want to figure out what anyone else observes, they need to use relativity to convert between reference frames. They just think that the order of events/simultaneity in their frame is what “really happened”, and everyone else is doing their calculations in the wrong frame. So if they sent drones to observe, they’d get the drones back, take the footage, and translate it into the reference frame they occupied at the time, lounging at home, using that to decide which measurement happened first.

      I agree that on some level you can just ask C “why”: “why do you believe that” “why do you care about this” etc. C won’t have any good answers. The problem is, C can ask you the reverse questions: “why not”, “why not prioritize this” etc., and you also won’t be able to say anything convincing, anything that isn’t on some level about aesthetics or philosophy. C’s point of view may seem strange to us, but our point of view is equally strange to C. In science, that sort of situation tends to be a sign that the whole question is unproductive and should be de-prioritized in favor of more practical matters.

      Personally, unlike some other physicists, I appreciate that there are genuine loopholes in QM, that what seems like pure randomness may well be pseudorandom, either due to something nonlocal or to something embedded in the experimental conditions (though I’d really like to get a clearer idea of how the latter is supposed to work). I’m just not convinced that these loopholes are any more special than the loopholes in SR or GR. They’re one way someone could conceivably make progress, but after a century they haven’t yet had a productive impact on practical physics, let alone clean experimental evidence.

  15. Andrei

    “I agree that on some level you can just ask C “why”: “why do you believe that” “why do you care about this” etc. C won’t have any good answers.”

    Indeed.

    “The problem is, C can ask you the reverse questions: “why not”, “why not prioritize this” etc., and you also won’t be able to say anything convincing, anything that isn’t on some level about aesthetics or philosophy.”

    I disagree. The experiment with the two drones is performed by C. If this is supposed to be science, he needs to provide some explanation for his claims. He wouldn’t be able to publish a paper saying that one of the instruments (the D drone) fails each and every time without being able to suggest what is wrong with it. The reviewers would simply reject it. It’s not their job to explain “why not”. There is a logical principle regarding the burden of proof. The one who makes a claim needs to provide evidence for that claim. The burden does not fall on the opponent to prove it wrong.

    With this approach, creationism becomes legitimate science. The dinosaurs were killed in the flood – why not? The established age of the fossils is wrong, the instrument did not work properly – why not?

    In a previous post, you expressed your concerns regarding the use of counterfactuals in superdeterminism. Hopefully, I was able to provide a clear explanation (one needs to take into account all the physical consequences of the change in the experimental setup). But in QBism, as far as I can see, this problem is practically unsolvable. Not only can C not explain a different experiment that was not performed, he also refuses to comment on what a performed experiment (D’s observation) proves. He just ignores it, claiming that only his experiment is relevant.

    “I’m just not convinced that these loopholes are any more special than the loopholes in SR or GR.”

    Can you give me some examples of loopholes in SR/GR? I do not know of any.

    “They’re one way someone could conceivably make progress, but after a century they haven’t yet had a productive impact on practical physics, let alone clean experimental evidence.”

    True, but this is because they were not properly investigated. Hopefully, ’t Hooft will change that.

    1. 4gravitons Post author

      First, let’s be clear here: C is not supposed to be a QBist. The point of the example is to illustrate why, from a positivist or pragmatist perspective, one ought not to be a realist about quantum measurements. QBism is then one interpretation that’s compatible with that anti-realism.

      In the example, C isn’t ignoring anyone’s experiments. C accepts that D measured what they measured, and that their drones measured what they measured as well. C doesn’t think the experiments coincidentally failed or anything like that. C just thinks that, in order to properly interpret these measurements, you need to convert them back into C’s frame.

      You bring up the question of who has the burden of proof. That’s one way to try to distinguish between competing scientific proposals when the evidence is ambiguous. It’s very difficult to formulate in a clean way, though. For example, the most naive way to formulate it: “whoever proposes a new idea has to provide evidence for it” is very time-dependent: whoever suggests their idea first has a big advantage! You could instead try to apply something like Occam’s razor, and say that the more complicated theory has the burden of proof. Then you need some way to judge which theories are more complex, which leads to all of the usual arguments between interpretations, from MWI proponents claiming their interpretation is minimal because it’s just the Schrödinger equation to others claiming MWI isn’t minimal because of all the extra universes. The whole thing is a massive philosophy problem, and one needs to be very good at philosophy to sort it out conclusively.

      As for loopholes in SR/GR, I’m largely thinking of tests of the equivalence principle, along the lines of the Eöt-Wash group. In general, there is a small amount of room for violations of the equivalence principle, though they would have to be very small violations. The impression I’d gotten from Palmer and Hossenfelder’s explanations of superdeterminism is that the situation is pretty analogous, just as of yet less precisely formulated. (So for example, the rapid degrees of freedom in ’t Hooft’s model would have to be very rapid.)

  16. Andrei

    “In the example, C isn’t ignoring anyone’s experiments. C accepts that D measured what they measured, and that their drones measured what they measured as well. C doesn’t think the experiments coincidentally failed or anything like that. C just thinks that, in order to properly interpret these measurements, you need to convert them back into C’s frame.”

    The C drone records a movie showing A performing the measurement while B is still drinking his coffee. After some time, B performs his own measurement.

    The D drone records a movie showing B performing the measurement while A is still drinking his coffee. After some time, A performs his own measurement.

    So, C cannot accept that the movie recorded by D represents what happened. The C movie is OK, but the D one is a lie. This is not about some philosophical interpretation of what might have happened; it’s about claiming that a movie is wrong, that what you see there with your own eyes did not happen. So, it seems to me that it is quite clear who has the burden of proof here. C must stand in front of his peers and explain why the D movie has to be dismissed as evidence. If he fails to support his claims, the conclusion is that both movies are valid, so the measurements cannot be random. Of course, nobody can prove to him that he is wrong, but he will not be taken seriously.

    “the most naive way to formulate it: “whoever proposes a new idea has to provide evidence for it” is very time-dependent: whoever suggests their idea first has a big advantage!”

    I guess you wanted to say “disadvantage”. But I disagree. If someone has no good reason to make a claim, why make it in the first place? Beliefs should be based on sound arguments. C in the above example has no reason whatsoever to insist that measurements are random. There is no evidence or logical argument that would point in that direction.

    Occam’s razor is OK if the theories have the same explanatory power. I think it is better to replace it with Bayes’ theorem: a less parsimonious theory of similar explanatory power would be found less likely, because each new assumption decreases the overall probability.
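
    (A one-line version of that argument, assuming the extra assumption $A$ is not already implied by the evidence $E$:

    $$P(T \wedge A \mid E) = P(T \mid E)\,P(A \mid T, E) \le P(T \mid E),$$

    with strict inequality whenever $P(A \mid T, E) < 1$: each genuinely new assumption can only lower the posterior.)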

    MWI does not provide a clear account of how to get the probabilities right, so we cannot speak about how parsimonious it is. Once it achieves that, if ever, we can compare it to other interpretations, no problem.

    As far as I know, the equivalence principle holds, so there is no problem for SR.

    I admit that I don’t understand Palmer and Hossenfelder’s models; they are too abstract for me. I tried to discuss them with Sabine, but it was off-topic, so I dropped it after some time. I hope she will dedicate a post to describing them for non-professionals.

    1. 4gravitons Post author

      Remember, the two drones aren’t each simultaneously at A and B. In order for the drones to measure anything, light has to go from A and B to their position. So in order for each drone to decide whether A or B was measured first, it needs to do a calculation, using some measure of the distance between it, A, and B. It can use the distance in its frame, and then it will come to the judgement you did. But if instead both drones use the distances of C’s frame, they will agree on the order of the events.
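
      Here is a minimal numerical sketch of that bookkeeping (invented coordinates, units with c = 1), showing the same two spacelike-separated events coming out in either order depending on the frame the calculation is done in:

      ```python
      import math

      def boost_time(t, x, v):
          """Time coordinate of event (t, x) in a frame moving at velocity v (c = 1)."""
          gamma = 1.0 / math.sqrt(1.0 - v**2)
          return gamma * (t - v * x)

      # Two spacelike-separated measurements in some lab frame:
      # A at (t=0, x=0), B at (t=0.1, x=1); |dx| > |dt|, so no light signal connects them.
      event_a = (0.0, 0.0)
      event_b = (0.1, 1.0)

      for v in (-0.5, 0.0, 0.5):  # three different observers' velocities
          t_a = boost_time(*event_a, v)
          t_b = boost_time(*event_b, v)
          first = "A" if t_a < t_b else "B"
          print(f"v = {v:+.1f}: t_A = {t_a:+.3f}, t_B = {t_b:+.3f} -> {first} measured first")
      ```

      Both drones apply the same (correct) transformation rules; only the frame fed into the calculation differs.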

      (Think about it like this: C’s approach breaks locality in the same way that Bohm’s does, by picking a preferred notion of simultaneity/a preferred foliation. If C’s approach led to paradoxes, so would Bohm’s. I’m just using C’s approach rather than Bohm’s to show that you don’t need anything fancy to do this; the extra elements in Bohm’s approach are there to make the theory satisfying on a philosophical level, not to make it consistent.)

      I meant “advantage” there, actually. And your bringing up Bayes’ theorem helps me explain why: the first person to propose a theory will determine the prior the second person has to weigh against. Unless you’re imagining all knowledge as descending from some uniform prior at the beginning of time (which I get the impression is mathematically impossible), the prior will encode some assumptions about how the world works, and those will come in part from the theories that worked in the past. So you’re still quite history-dependent: if C’s approach had been the standard for thousands of years, it would be very difficult to shift with new evidence.

      You also have essentially the same problem with assumption-counting you had before with naive Occam’s razor: you need to characterize which proposals are introducing new assumptions and which are simply getting rid of old assumptions.

      “As far as I know, the equivalence principle holds, so there is no problem for SR.”

      And as far as I know there are no regularities in quantum randomness, so there is no problem with assuming it is perfectly random. Do you see the analogy now?

  17. Andrei

    “Remember, the two drones aren’t each simultaneously at A and B. In order for the drones to measure anything, light has to go from A and B to their position. So in order for each drone to decide whether A or B was measured first, it needs to do a calculation, using some measure of the distance between it, A, and B. It can use the distance in its frame, and then it will come to the judgement you did. But if instead both drones use the distances of C’s frame, they will agree on the order of the events.”

    Yes, all that is true. But the same reasoning can be applied to D’s frame. You can transform everything into that frame and everything is fine. Now, what C has to do is convince his peers that using the D frame is wrong, that the order is wrong. What argument can he provide? That he was in the C frame. So what? Why should anybody care about the frame C was in when the experiment unfolded? Oh, he believes he has “some magical role to play in the universe”. Is this supposed to be science?

    ” If C’s approach led to paradoxes, so would Bohm’s.”

    No, the absolute-frame approach does not lead to paradoxes. The problem is that C has no justification for imposing his frame as the absolute one. He wasn’t even doing the experiment: A and B did the measurements, and the drones were on autopilot. He has no argument of any sort; he just claims some sort of divine revelation.

    The approach in Bohm’s theory is completely different. In this paper:

    Can Bohmian mechanics be made relativistic?

    https://arxiv.org/abs/1307.1714

    ” The strategy proposed here involves extracting from the wave function also a foliation of space-time into space-like hypersurfaces, which foliation is used to define a Bohmian dynamics in a manner similar to the way equal-time hyperplanes are used to define the usual Bohmian dynamics. We show how this extraction can itself be Lorentz invariant in an appropriate sense, and argue that virtually any relativistic quantum theory, Bohmian or otherwise, will thus already contain a special space-time foliation, buried in the structure of the wave function.”

    Their foliation is based on the wave function which defines the system under observation. So, they use an element that is perfectly relevant to the experiment, not what some dude believes about his “magical role to play in the universe”.

    “the first person to propose a theory will determine the prior the second person has to weigh against.”

    The second person can propose a different set of assumptions. He doesn’t need to accept those proposed by the first one.

    “the prior will encode some assumptions about how the world works, and those will come in part from the theories that worked in the past.”

    True.

    “So you’re still quite history-dependent: if C’s approach had been the standard for thousands of years, it would be very difficult to shift with new evidence.”

    Indeed, but it so happens that C’s approach was not the standard, because it didn’t work. He makes claims that cannot be checked by anybody, so he cannot convince anybody. Realism worked. It gave us all of science up to QM, and there are also realist versions of QM. There is simply no justification, empirical or logical, to replace it with some observer-centered view. And, by the way, the absolute reference frame was widely believed from Newton on, so pragmatism has the historical advantage here. Yet, such a view was rejected once SR was discovered, exactly because that absolute frame provided no explanatory power. If you want to reintroduce it, fine, but you must explain why it is preferable to determinism, which leads us to this crucial point:

    “as far as I know there are no regularities in quantum randomness, so there is no problem with assuming it is perfectly random.”

    The question here is: what regularities would you expect under the assumption of determinism? Determinism implies that observed states are perfectly correlated with past states by some equation. In order to observe those regularities, you need to know the past state and compare it with the present one. And, lo and behold, each time we can do that, determinism works just fine. Subsequent measurements of the X-spin will confirm the first measurement. On the other hand, if we have no clue regarding the past state of the system (say, the conditions relevant to the emission of the particle), we have nothing to compare, so no regularities are expected. So, the observed lack of regularities in some experiments does not make indeterminism more likely than determinism. But the fact that regularities are observed under many other circumstances does make determinism more likely.

    There is another issue regarding the determinism/indeterminism debate. We know for sure that deterministic systems can appear indeterministic due to our lack of knowledge. I do not know if the reverse is true. Can you get deterministic behavior, like unitary evolution, starting from an indeterministic fundamental layer? I do not know if this is possible. My guess is that a purely random background would evolve in a random-walk manner, never achieving equilibrium. But I may be wrong.

    1. 4gravitons Post author

      You should read the paper you cite there more carefully. They are proposing a particular strategy; they do not argue, and certainly do not prove, that that strategy is unique. While they might have aesthetic arguments that the frame they pick is more naturally linked to the wave function in some sense, they have no more empirical or mathematical justification for it than C does for their frame choice.

      As for whether indeterministic systems can appear deterministic, you have already provided an example where they do so: statistical systems when considering macrostates. From a statistical point of view, whether the underlying microstates are deterministic or indeterministic is irrelevant; the system will still be approximately deterministic on a macroscopic level. (To be more irritable about this than probably warranted: please don’t pretend ignorance of things you’ve already shown you know, that’s a really quick way to get me to think you’re arguing in bad faith.)

  18. Andrei

    “From a statistical point of view, whether the underlying microstates are deterministic or indeterministic is irrelevant; the system will still be approximately deterministic on a macroscopic level.”

    I do not know if this is true. See for example this derivation for an ideal gas:

    (linked PDF: kinetic_theory.pdf)

    The gas atoms here are assumed to obey Newton’s laws. Would such a derivation work for atoms NOT obeying deterministic laws? I don’t think so.

    “please don’t pretend ignorance of things you’ve already shown you know, that’s a really quick way to get me to think you’re arguing in bad faith.”

    I hope that the above explanation clears that up.

    One example of a theory with fundamental indeterminism is the GRW interpretation. But even there you have a deterministic “backbone”, the wavefunction, assumed to be a “real” entity. I do not know of any example where you start with random events at the fundamental level and get determinism at the macroscopic level. And, it seems to me, this is required in a non-realistic interpretation like QBism, where the wavefunction is just a tool used to make sense of an agent’s experiences (measurement results).

    1. 4gravitons Post author

      Ok, I think the problem on my end is I’ve been overestimating your physics background a bit.

      In this case, you’re missing that that derivation, at each step, uses average quantities: the average speed of particles, average energy, etc. In light of that, a simpler answer to your question is that while individual quantum particles are treated as following indeterministic laws, their average properties follow deterministic laws: this is the Ehrenfest theorem. One can then go on (and people have) to derive statistical mechanics from this in a way appropriate to quantum particles, including Fermi-Dirac and Bose–Einstein statistics. You end up with systems that macroscopically appear deterministic.
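
      For reference, the Ehrenfest theorem in its textbook form says the expectation values obey Newton-like equations,

      $$\frac{d}{dt}\langle x \rangle = \frac{\langle p \rangle}{m}, \qquad \frac{d}{dt}\langle p \rangle = -\left\langle \frac{\partial V}{\partial x} \right\rangle,$$

      so the averages evolve (approximately) deterministically even though individual outcomes do not.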

      (The other standard answer to your question is, well, decoherence, which I’m assuming you’ve read enough of the interpretation debate to be at least broad-strokes familiar with.)

      In general I don’t want to discourage you from asking questions, but you’ve suggested a lot of “maybe scientists are missing this one simple thing” questions. For any question like that, it’s worth asking two questions. First, would everything people do today work if this were true? (All the quantum technology, etc.?) Second, would the people you respect have noticed? For a lot of the things you suggest, if it were that simple ’t Hooft would not have had to go to so much trouble to develop his model. I get that some of this is just curiosity: you want to know why this or that doesn’t work, and I’ve been patient enough so far to answer some of your questions. But I don’t have the time to actually walk you through full derivations, and good answers to some of your questions would require that. So it’s worth distinguishing “this would be interesting to know” from “on reflection, I think this could be a real loophole”.

  19. Andrei

    “In this case, you’re missing that that derivation, at each step, uses average quantities: the average speed of particles, average energy, etc.”

    The gas particles are assumed to obey, individually, Newton’s laws. Averaging comes later. On the Wikipedia page:

    https://en.wikipedia.org/wiki/Kinetic_theory_of_gases

    “The particles undergo random elastic collisions between themselves and with the enclosing walls of the container.”

    Elastic collisions imply that each collision obeys Newton’s laws. Momentum is conserved for each collision, not the average momentum.

    Later on the page we read:

    “Thus, the dynamics of particle motion can be treated classically, and the equations of motion are time-reversible.”

    If the particles moved at random, the dynamics could not be time-reversible, etc.

    “while individual quantum particles are treated as following indeterministic laws, their average properties follow deterministic laws”

    True, and I have given an example myself (the GRW interpretation). The problem is that the deterministic evolution of the wavefunction is postulated in QM. It is not derived from some fundamental random events. So the QM–classical mechanics correspondence is not a valid example of a fundamentally random system appearing deterministic in some limit. Anyway, feel free to ignore this point, as I need some more time to crystallize it into a clearer argument.

    “In general I don’t want to discourage you from asking questions, but you’ve suggested a lot of “maybe scientists are missing this one simple thing” questions.”

    The above issue, of how to get determinism out of indeterminism, was indeed a question. I just don’t know of such examples; I’ve never claimed there are none. On the other hand, I have made two important claims (not questions) that I still find perfectly justified:

    1. The EM interactions between the quantum system and the experimental environment are not treated properly. They are not mentioned in the two-slit experiment, Bell’s tests, the Vaidman bomb tester, etc. Yes, I think the scientists are “missing this one simple thing”. It’s easy to refute my claim with no effort on your part: just show me a paper where this is done.
    2. The Wigner’s-friend-type experiments are meaningless because electric and gravitational fields cannot be shielded. There are probably thousands of papers discussing those thought experiments, yet I’ve never seen one describing how one could make a box that stops the content of the box from interacting with the exterior. Again, I think the scientists are “missing this one simple thing”. And, obviously, it’s easy to refute my claim with no effort on your part: just show me a paper where this is done.

    “For any question like that, it’s worth asking two questions. First, would everything people do today work if this were true? (All the quantum technology, etc.?)”

    Yes, it would work. You can use QM to make a superconductor without being bothered by any of the above. QM is right, but it is interpreted in the wrong way.

    “Second, would the people you respect have noticed? For a lot of the things you suggest, if it were that simple ’t Hooft would not have had to go to so much trouble to develop his model.”

    ’t Hooft did notice point 1 above. His model IS a field theory. I don’t know about the second. And, of course, observing that something is wrong is much easier than building a model that solves the problem. And sure, I respect most physicists even if they missed those points.

    “I get that some of this is just curiosity: you want to know why this or that doesn’t work, and I’ve been patient enough so far to answer some of your questions. ”

    I appreciate your patience!

    “But I don’t have the time to actually walk you through full derivations, and good answers to some of your questions would require that.”

    Just providing a link to a paper where a valid rebuttal exists would be enough. If finding such a paper proves difficult, you may consider the possibility that I am right.

    1. 4gravitons Post author

      Here’s a paper for the diffraction question. As I mentioned earlier, experimentalists regularly use EM fields to make diffraction gratings, so you don’t need to model a metal diffraction grating atom-by-atom to answer your question.

      (Also, your sentence “observing that something is wrong is much easier than building a model that solves the problem” is totally inapplicable to that particular line of questioning: your question was whether EM by itself explained diffraction. If it did, nobody would need to build any new model, they could just use EM. Again, this is 100% something either you or ’t Hooft can calculate, which is part of why you’re not going to find a random paper on it: it’s too easy. Any paper on it will be like the one I linked above and instead apply it to an unusual situation, which means it takes more work to find and decode.)

      As for the question of whether you can get determinism out of indeterministic laws, look at the actual derivation of the ideal gas law, not a summary of the assumptions involved. Yes, the kinetic theory of gases assumes Newtonian mechanics, and gets corrected to incorporate quantum effects. Yes, this makes it deterministic by assumption. However, if you actually look at the derivation, you will see it doesn’t need this property in order to be deterministic, because it only ever invokes the average behavior of the gas molecules. It could just as easily have been derived from a gas of indeterministic particles that on average obey Newton’s laws. Again, go through the derivation and point me to where it uses anything other than average properties of the molecules of the gas.
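
      A minimal sketch of that last point (assuming nothing about the dynamics except that each molecular velocity is drawn at random from a fixed distribution): the kinetic-theory pressure P ~ n·m·⟨v_x²⟩ fluctuates like 1/√N, so it looks deterministic for macroscopic particle numbers.

      ```python
      import random

      def pressure_estimate(n_particles, rng, m=1.0, number_density=1.0):
          """Kinetic-theory pressure ~ n * m * <v_x^2>, with each v_x purely random."""
          mean_vx2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n_particles)) / n_particles
          return number_density * m * mean_vx2

      rng = random.Random(0)
      for n in (100, 10_000, 1_000_000):
          samples = [pressure_estimate(n, rng) for _ in range(5)]
          print(n, [f"{p:.4f}" for p in samples])  # spread shrinks like 1/sqrt(n)
      ```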

  20. Andrei

    “As I mentioned earlier, experimentalists regularly use EM fields to make diffraction gratings, so you don’t need to model a metal diffraction grating atom-by-atom to answer your question.”

    I didn’t say that EM fields are not used. I said that there has been no attempt to explain the two-slit experiment in the context of classical EM. While I cannot read the paper (I don’t have free access), the abstract clearly speaks about a quantum treatment of the system (in this case rubidium atoms) in an external magnetic field. The rubidium atoms are treated as “atomic de Broglie waves”; there seems to be no discussion of the EM interaction between the charged particles inside those atoms and the permanent magnets.

    The problem I see again and again is that the literature abounds with completely unjustified claims regarding the so-called “failures” of classical physics to explain this or that experiment (the two-slit experiment is just the most famous example). These false claims are then used to support the thesis that QM requires a paradigm shift, that realism and/or determinism and/or locality must be rejected. This is all a house of cards.

    “your question was whether EM by itself explained diffraction. If it did, nobody would need to build any new model; they could just use EM.”

    I disagree. QM was not invented to explain diffraction, but the structure of atoms, black-body radiation, the specific heat of solids, etc., where, the claim was, classical EM failed. Following this, QM was used for everything, without much attention given to classical EM. Now, this position seems justified; we don’t try to explain neutron-star mergers using epicycles. The difference is that those failures were not necessarily failures of classical EM itself (Coulomb’s law is still used to get the Hamiltonian in quantum calculations, and it works very well) but were (probably) the result of a wrong assumption: that the systems are isolated. Most of the initial problems have been solved in the context of classical EM by a theory called Stochastic Electrodynamics. See a short presentation here:

    Stochastic Electrodynamics: The Closest Classical Approximation to Quantum Theory
    Timothy H. Boyer
    https://arxiv.org/abs/1903.00996

    The only remaining problem, the classical atom, is still likely solvable, and progress is being made as we speak:

    Relativity and Radiation Balance for the Classical Hydrogen Atom in Classical Electromagnetic Zero-Point Radiation
    Timothy H. Boyer

    https://arxiv.org/abs/2103.09084

    Most of these results were published a long time ago, but they were ignored.

    “Again, this is 100% something either you or ’t Hooft can calculate, which is part of why you’re not going to find a random paper on it: it’s too easy.”

    It’s not so easy. You need to solve the N-body EM problem for probably hundreds of charges, and take into account the rest of the charges in the universe as well (statistically, this is probably approximated by the zero-point field of stochastic electrodynamics). It is probably doable, but not on my PC. Another point I can make is that there are a lot of papers discussing the two-slit experiment using rigid-body mechanics with contact forces (bullets/billiard balls), claiming that the lack of interference there proves classical physics wrong. I very much doubt that the corresponding EM treatment is simpler than the bullet one.

    “Yes, the kinetic theory of gases assumes Newtonian mechanics, and gets corrected to incorporate quantum effects. Yes, that makes it deterministic by assumption. However, if you actually look at the derivation, you will see that it doesn’t need determinism at the level of individual molecules, because it only ever invokes the average behavior of the gas. It could just as easily have been derived for a gas of indeterministic particles that on average obey Newton’s laws.”

    OK, I will look more carefully into it.

    1. 4gravitons Post author

      Let me address this first:

      “It’s not so easy. You need to solve the N-body EM problem for probably hundreds of charges, and take into account the rest of the charges in the universe as well…”

      No, you don’t. The reason I brought up the use of EM fields as diffraction gratings is to illustrate that you don’t need to consider the atoms in a diffraction grating to model diffraction results that others attribute to QM. If you think electron diffraction experiments can be fully explained by EM, then all you need to do is cook up an EM field that gives a diffraction pattern when you shoot random electrons at it. You don’t need to consider the sources, because EM is local: the principle you’re eager to preserve with superdeterminism says that it doesn’t matter what the far away sources look like, their effect is entirely summarized by the EM field in the region you’re considering. So cook up a field (or better, solve for the required field) that classically gives electron diffraction. From what you’ve described, that’s within your capabilities. If the paper were more accessible I’d suggest you also check what happens with the field they use in their calculation.
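
      As a concrete scaffold for that exercise, here’s a sketch (my own improvisation; the field below is a placeholder, and solving for the right one is the actual work): integrate the Lorentz force for electrons fired at a candidate static field, and histogram where they land on the screen.

      ```python
      import numpy as np

      # Sketch of the proposed exercise: shoot electrons through a *candidate*
      # static magnetic field B_z(x, y) and histogram their arrival positions.
      # The field below is a placeholder; the task would be to find a field
      # whose histogram reproduces the two-slit interference pattern.

      q_over_m = -1.758820e11        # electron charge-to-mass ratio (C/kg)
      v0, L, dt = 1.0e7, 0.1, 1e-12  # launch speed (m/s), screen distance (m), step (s)

      def Bz(x, y):
          return 1e-6 * np.sin(y / 1e-4)   # placeholder candidate field (tesla)

      rng = np.random.default_rng(1)
      hits = []
      for _ in range(500):
          x, y = 0.0, rng.normal(0.0, 1e-5)    # small random launch offset (m)
          vx, vy = v0, 0.0
          while x < L:
              B = Bz(x, y)
              # F = q v x B with B along z: a = (q/m)(vy*B, -vx*B)
              vx, vy = vx + q_over_m * vy * B * dt, vy - q_over_m * vx * B * dt
              x, y = x + vx * dt, y + vy * dt
          hits.append(y)

      counts, edges = np.histogram(hits, bins=50)
      print(counts)   # compare against the QM interference pattern
      ```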

      I’m assuming in the above that you didn’t have Boyer’s model in mind, and were asking about a scenario in which the classical vacuum is treated the usual way. I don’t have time to go through it in detail, but what Boyer is doing looks even fishier than ’t Hooft’s setup (see here for some people trying to figure out what he means by a Lorentz-invariant spectrum, for one). Regardless, it’s a different scenario, not EM business-as-usual.

      1. Andrei

        “If you think electron diffraction experiments can be fully explained by EM, then all you need to do is cook up an EM field that gives a diffraction pattern when you shoot random electrons at it.”

        I don’t see the point of this, except as an existence proof. A CRT monitor is able to produce any pattern you want, including a diffraction pattern, so it’s possible. It is still necessary to model the field in a real experiment.

        “I’m assuming in the above that you didn’t have Boyer’s model in mind, and were asking about a scenario in which the classical vacuum is treated the usual way.”

        Boyer does not use a different theory; it’s still the classical theory of electromagnetism. The justification for the zero-point field (ZPF) comes from the Casimir effect, so it is based on actual experiments. Presumably, this field corresponds to all the charges in the universe, so any accurate model should take it into account. It is necessary in order to explain the stability of atoms. But it might be that the particle–barrier interaction (ignoring the reason the atoms inside the barrier are stable) is enough to explain the observed pattern. This simpler situation should be investigated first.

        1. 4gravitons Post author

          To be clearer, let me explain the goal of that exercise (solving for the EM fields that give a diffraction pattern). The point is that once you have parametrized all fields that give you a diffraction pattern, you can investigate whether any of them resemble a double-slit barrier. So for example, you can see whether any of the solutions let you restrict electron trajectories to only (or approximately only) go through two slices of a line in the middle, rather than all points on that line. You can also investigate the strength of the field in the intervening space, and how it changes as you change the size of the apparatus, to see whether you can get by with the field being approximately undetectable (since a dramatic field, like one in a CRT monitor, would be obvious in an experiment). I would be, to put it mildly, extremely surprised if you can cook up a field that satisfies those restrictions, and that’s the minimum possible requirement to explain diffraction with classical E&M.

          If Boyer were deriving his field from classical EM, classical EM alone would justify the appearance of Planck’s constant. It doesn’t; he has to put it in by hand to attempt to match (for example) the Casimir effect. He may believe the field comes from “all charges in the universe” (though I didn’t get that impression: since he emphasizes it’s a vacuum solution, it seemed like he was treating it as existing independently of any charges), but if he does, he still needs to show that’s where it comes from in order to actually have used classical EM. Otherwise he’s using classical EM plus some other unexplained physics.

          ETA: Just realized the above concern is irrelevant for the point you were making. Yeah, if Boyer’s setup actually represents a legitimate EM field (without the divergence problems the people in that thread I linked were worried about) then it would work as a solution to the above problem: postpone the question of where the field comes from, and just see if you can get a field that does the right thing without an obvious disagreement with experiment. As said, I suspect that you won’t be able to (and thus that Boyer’s setup either has divergence issues or contradicts experiment).

  21. Andrei

    “To be clearer, let me explain the goal of that exercise (solving for the EM fields that give a diffraction pattern). The point is that once you have parametrized all fields that give you a diffraction pattern, you can investigate whether any of them resemble a double-slit barrier.”

    OK, let me try. Place a coil at each slit and let a current run so that all electrons are sent towards the central band. After some time, change the current so that the electrons go to the second band, and so on. The desired intensity of each band can be set by how long each current is left on. I guess this should work.
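
    Just to show the bookkeeping works, here is a toy sketch (made-up intensities, nothing about real coil design): if deflection setting k is held for a dwell time proportional to the desired band intensity, and electrons arrive at random times, the band counts come out in the right proportions.

    ```python
    import numpy as np

    # Toy version of the time-multiplexed coil idea: setting k steers electrons
    # to band k and is held for a time proportional to the desired intensity.
    # Electrons arriving at uniformly random times then populate the bands in
    # proportion to the dwell times.

    rng = np.random.default_rng(2)
    I = np.array([0.35, 0.25, 0.25, 0.075, 0.075])   # made-up band intensities
    dwell = np.cumsum(I / I.sum())                   # cumulative on-time fractions

    arrivals = rng.random(100_000)                   # arrival times in [0, 1)
    bands = np.searchsorted(dwell, arrivals, side="right")

    print(np.bincount(bands, minlength=len(I)) / len(arrivals))  # close to I
    ```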

    I guess it is possible that something like that happens in the real experiment. There are probably many charge configurations of the barrier with the same energy, so the system will oscillate between them. If most of them generate a field that guides the electron towards the central maximum, a smaller number send it towards a secondary maximum, and so on, you could get the correct pattern. But this has to be established first, most likely by a computer simulation.

    “a dramatic field, like one in a CRT monitor, would be obvious in an experiment”

    Not if the field is changing quickly. On average the field is zero (if a copper foil is used as a barrier) or the classical expected value (if permanent magnets or currents are used instead of a solid barrier).

    Regarding the ZPF, and for a more critical view of stochastic electrodynamics, you can look at this paper:

    Stochastic electrodynamics and the interpretation of quantum theory
    Emilio Santos
    https://arxiv.org/abs/1205.0916

    On page 4 we read:

    “Of course the spectrum eq.(1) implies a divergent energy density and any cutoff would break Lorentz invariance. However we may assume that it is valid for low enough frequencies, the behaviour at high frequencies requiring the inclusion of other vacuum fields and general relativity theory.”
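
    For context, the spectrum in question (their eq. (1)) is the standard zero-point one, with energy $\frac{1}{2}\hbar\omega$ per normal mode; if I have the constants right, the spectral energy density is

    \[
    \rho(\omega)\,d\omega = \frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}\,d\omega ,
    \]

    which is why integrating it over all frequencies diverges, as Santos says.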

    1. 4gravitons Post author

      My guess is that a coil powerful enough to deflect the electrons sufficiently would change the field around the barrier enough to be detectably different. You should do the calculation and see (and if you’re unsure what would count as detectably different, I can see if I have any experimentalist friends who can tell you). A rapidly varying field doesn’t help you here, at least not as easily as you think: remember the time derivatives of the field show up in Maxwell’s equations. Unless you can keep the intensity undetectably low, you’d be generating a lot of detectable UV light. You would also need to make sure you’re getting the same diffraction pattern predicted by QM, and not one with a different fringe spacing: in particular, this means it depends on the mass of the incident particles in a particular way. If you have to tweak your field for different incoming particles, then (absent much wilder superdeterminism than you seem willing to commit to here) you won’t actually be solving the problem.
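
      To spell out that mass dependence (the textbook small-angle two-slit formula, with L the slit-to-screen distance and d the slit separation):

      \[
      \Delta y = \frac{\lambda L}{d}, \qquad \lambda = \frac{h}{m v} ,
      \]

      so at fixed speed, doubling the mass halves the fringe spacing, and any would-be classical field has to reproduce exactly that scaling.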

      Santos’s paper seems a bit more detailed, and at least admits that the proposal as stated doesn’t work and would require extension. Again, I don’t have the time to study it enough for a detailed critique, and you’d be better off hearing that kind of thing from either an experimentalist or a quantum foundations researcher. But I will make the general point that it’s quite a bit more useful to have a theory that already lets you calculate (and match to experiment) fifth-order corrections in Planck’s constant than one that is falsified past leading order, even if one likes the aesthetics of the latter.

  22. Andrei

    “A rapidly varying field doesn’t help you here, at least not as easily as you think: remember the time derivatives of the field show up in Maxwell’s equations. Unless you can keep the intensity undetectably low, you’d be generating a lot of detectable UV light.”

    A lot of radiation is generated, right. This is the point of SED: all matter emits and absorbs radiation like hell. That radiation is detectable, in the form of the Casimir effect.

    But I don’t know, maybe you are right and my proposed “solution” does not work. I am not used to this kind of calculation. My degree is in chemistry, and I’ve studied electromagnetism and QM in relation to my field. So I am not the right person to design magnetic coils and all that.

    That being said, I think the problem is severely underdetermined. It’s mathematically impossible to calculate a 3D trajectory from only two points (the tip of the electron gun and the spot on the screen). For all we know, the electron could loop around the slits a number of times and then spiral towards the screen. There is no way to know: there are infinitely many trajectories and, consequently, infinitely many field configurations. If the field is not constant in time, you cannot even use the pattern to infer it, since each electron interacts with a different field. Could a physicist with great intuition guess the right field? It’s possible, but I doubt it. So I still think that a simulation of the experiment is the best way to see what’s going on.

    “I will make the general point that it’s quite a bit more useful to have a theory that already lets you calculate (and match to experiment) fifth-order corrections in Planck’s constant than one that is falsified past leading order, even if one likes the aesthetics of the latter.”

    Sure, my point is not that QM should be replaced with SED. But I think SED provides a counterexample to the claim that some observations (like the stability of atoms) present insurmountable problems for classical physics. With some tweaks it might work. And if it works, it could provide an easier road towards unification, since (correct me if I’m wrong) electromagnetism and GR are already unified in the framework of Kaluza–Klein theory.

    1. 4gravitons Post author

      You need the radiation to actually be the right amount, and for it to be sufficiently undetectable. I’m not convinced the SED proponents actually solve this: even the sophisticated one you cite expects to introduce a cutoff where some other physics takes over, but it’s unclear to me that this lets them keep clear of observable problems without also giving up on the effects they’re trying to get at energies where people frequently do experiments.

      In general, there’s a lot less wiggle-room in the phenomena attributed to quantum mechanics than you seem to think. You’re still largely picturing one kind of experiment: an electron gets sent across assumed-to-be-empty space, goes through one of two slits, and hits a detector on the other side. But there are a huge number of variations on this picture. Diffraction gets used to measure crystalline structure, for example, mapping a densely-packed 3D space. The maps obtained then get used to design new materials with tailor-made macroscopic properties, confirming that the crystal structure computed is the right one. As I think I’ve mentioned, this can be done with neutrons as well, and the forces involved are so different in that context that the same alternative explanation working for both is very unlikely.
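
      (For reference, the crystallography here runs on Bragg’s condition, with d the lattice-plane spacing:

      \[
      n\lambda = 2d\sin\theta , \qquad \lambda = h/p ,
      \]

      and the same λ = h/p fits X-rays, electrons, and neutrons, even though they interact with the crystal through completely different forces.)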

      Since you’re a chemist, let me give you an analogy here. If you were arguing with a homeopath, you might dismiss them on the basis of what you know, Avogadro’s number and so forth. But you can also dismiss them because, if homeopathy was true, then basically every experiment chemists perform on a day-to-day basis wouldn’t work. If a tiny concentration of an additive, vigorously stirred, could dramatically change the medical properties of something, then chemistry would be completely unreliable. There would almost always be a risk of some negligible contaminant throwing things off, and the experiments precise enough to rule out those contaminants are also precise enough to clearly not see the effects expected by homeopaths.

      The situation in quantum physics is a lot like that. There are a huge number of different experiments where people need to establish extremely precise EM fields, or need reliable behavior between very different situations. There is extremely little room for a conventional explanation that deviates from that.

      That’s why most of the people proposing alternatives to QM don’t explain experiments one by one. Instead, they try to show some overall equivalence, that their proposal gives the same predictions as QM for every possible experiment. Those that deviate from QM do so at extremely small scales, small enough to be well out of range of the kinds of experiments people regularly do.

      That’s part of why I’m skeptical of SED. The other reason is that, taking the claims you cited before at face value, I’m a lot less impressed than you are about something that works at first order. I do perturbative calculations for a living, and my field is littered with the graves of methods that worked great at first order and taught nothing useful beyond. In my experience, working at first order means you got lucky, it’s not a reason for optimism.

      As a sidenote, I don’t know why you’re bringing up KK theory there. Are you assuming that some successor to SED would still only need EM and gravity? Even if it were fully classical, I’d assume that any stochastic replacement for QFT would have to include the weak and the strong force as well, not to mention spin-1/2 fermions, and you can’t get any of those from KK alone.

  23. Andrei

    “In general, there’s a lot less wiggle-room in the phenomena attributed to quantum mechanics than you seem to think.”

    I don’t need any “wiggle-room”. I am not proposing some new, yet to be defined theory that needs to be adjusted to fit QM. The mathematics of classical electromagnetism is well known and unambiguous. It needs to be used as such.

    “You’re still largely picturing one kind of experiment: an electron gets sent across assumed-to-be-empty space, goes through one of two slits, and hits a detector on the other side.”

    It’s good to focus on one experiment so that it is clear what the argument is. But, fundamentally, all experiments are the same. You have a bunch of interacting charged particles.

    “Diffraction gets used to measure crystalline structure, for example, mapping a densely-packed 3D space. The maps obtained then get used to design new materials with tailor-made macroscopic properties, confirming that the crystal structure computed is the right one. As I think I’ve mentioned, this can be done with neutrons as well, and the forces involved are so different in that context that the same alternative explanation working for both is very unlikely.”

    I am not sure what the argument is. Yes, there is an infinite number of experiments consisting of interacting charged particles. Can you find at least one where a prediction in the context of classical EM has been rigorously calculated and it does not fit the experiment? If not, we need to conclude that classical EM is still an unfalsified theory, on par with QM and GR. You cannot assume that a theory is false based on some intuition that its prediction is “unlikely” to come true. You need to show it, unambiguously, for at least one experiment.

    “If you were arguing with a homeopath, you might dismiss them on the basis of what you know, Avogadro’s number and so forth. But you can also dismiss them because, if homeopathy was true, then basically every experiment chemists perform on a day-to-day basis wouldn’t work.”

    The main problem with homeopathy is that it does not work. As far as I know, there is no study showing any effect of those solutions. Hence, there is no need to bother with what the underlying mechanism must be.

    “If a tiny concentration of an additive, vigorously stirred, could dramatically change the medical properties of something, then chemistry would be completely unreliable.”

    Well, catalysis works. Still, homeopathic solutions are so diluted that not even a single molecule of the active ingredient is likely to be found in a bottle, so even a catalytic mechanism is excluded. But again, the real problem is that there is no effect.
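
    The arithmetic behind that is standard: a typical 30C remedy is diluted by a factor of $100^{30} = 10^{60}$, so even starting from a full mole of active ingredient, the expected number of surviving molecules is

    \[
    N \sim 6\times10^{23}\times10^{-60} \approx 10^{-36} ,
    \]

    effectively zero molecules per bottle.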

    “The situation in quantum physics is a lot like that. There are a huge number of different experiments where people need to establish extremely precise EM fields, or need reliable behavior between very different situations. There is extremely little room for a conventional explanation that deviates from that.”

    I disagree. There is no similarity between electromagnetism and homeopathy. Both QM and classical EM are well-established theories with countless confirmed predictions. Neither has been falsified. For both, we have experiments for which detailed calculations are missing (uranium’s spectrum would be an example for QM). We have to wait for those predictions to be calculated. Homeopathy never worked; there are no exact predictions and no equations to be checked. It’s not even crackpottery, it’s a fraud, collecting billions of dollars from the gullible.

    “That’s part of why I’m skeptical of SED.”

    SED is itself an approximation of classical EM. It’s still better than the initial approximation, in which the external fields were completely ignored. Your skepticism is justified.

    “The other reason is that, taking the claims you cited before at face value, I’m a lot less impressed than you are about something that works at first order.”

    I agree, but, as stated before, SED puts to rest the generic arguments against classical EM of the sort “it cannot work because atoms would be unstable”. Now there is no argument supporting the claim that classical EM has been falsified. It hasn’t.

    “As a sidenote, I don’t know why you’re bringing up KK theory there. Are you assuming that some successor to SED would still only need EM and gravity? Even if it were fully classical, I’d assume that any stochastic replacement for QFT would have to include the weak and the strong force as well, not to mention spin-1/2 fermions, and you can’t get any of those from KK alone.”

    The electron’s spin has been derived in SED’s framework as far back as 1982:

    The spin of the electron according to stochastic electrodynamics
    L. de la Peña and A. Jáuregui, Found. Phys. 12, 441–465 (1982). https://doi.org/10.1007/BF00729994

    There is also a recent paper:

    Physical basis for the electron spin and antisymmetry: A first-principles explanation
    https://www.researchgate.net/publication/318729544_Physical_basis_for_the_electron_spin_and_antisymmetry_A_first-principles_explanation

    I’ve never studied the nuclear forces, but I’ve heard that they are somewhat similar to EM, so classical equivalents could be possible. If so, this could be helpful for unification efforts, so it’s not just a matter of philosophical preference. But, sure, it’s just speculation at this point.

    1. 4gravitons Post author

      There’s something I’ve been assuming you realized here, but maybe you haven’t gotten the implications: every time someone computes something using quantum mechanics, that calculation implicitly includes the classical calculation as well. It’s not always the “full” classical calculation in the sense you’re imagining, with the field of every proton and electron included in full detail (though when people use diffraction to determine crystal structure, it essentially is: at least, every proton and electron in the crystal). Instead, typically, there’s some approximation being used, with different approximations in different cases.

      So let’s examine the consequences of what you seem to be proposing. Suppose that some broad class of effects, like electron diffraction, is actually due to EM. That would mean that the approximations used in every such experiment are wrong, and are neglecting effects with a measurable impact. It would also mean that in every case, those effects exactly mimic the behavior attributed to QM. It would mean that this was true no matter what approximation was taken, despite very different choices in different experiments. That would require a massive, massive coincidence.

      (I’m also not convinced that any of these SED guys, even the more careful ones, are actually doing things honestly… but since I haven’t gone over any of it in detail, that’s neither here nor there. Just a general point: you should probably not trust random theorists.)

      Two asides:

      One, I’m saying this as someone who used to do a lot of debunking of homeopathy: they absolutely do have studies showing effects. In many cases, they’re not even that bad for medical studies (this is because medical studies are often quite bad). The more sophisticated homeopaths are also perfectly willing to admit that no atom of the substance they’re diluting survives, they just have some wild claims about water having memory (including theory papers which honestly don’t look that different from the SED papers you’ve shown me). In practice, one does need a bit more to debunk them, and that bit more generally involves pointing out that their burden of proof is much higher because of the ridiculous coincidences it implies.

      Two, you still haven’t clarified what difference you expected KK to make there. Or were you just confused?

  24. Andrei

    “There’s something I’ve been assuming you realized here, but maybe you haven’t gotten the implications: every time someone computes something using quantum mechanics, that calculation implicitly includes the classical calculation as well.”

    I know that. In my previous post, I mentioned the uranium spectrum as an example of an experiment for which a quantum prediction does not exist. A single atom (if complex enough) is still too much for our computational ability. So, obviously, I wouldn’t expect a QM description of the two-slit experiment to treat all particles individually.

    “Suppose that some broad class of effects, like electron diffraction, is actually due to EM.”

    OK.

    “That would mean that the approximations used in every such experiment are wrong, and are neglecting effects with a measurable impact.”

    For the so-called “classical” approach to the experiment (with bullets), this is right. The approximation is wrong and the prediction is wrong. For the QM approach it is not right, because QM takes those effects into account in an implicit way. In the path integral approach, you simply assert that the particles cannot pass through the barrier. This is an implicit assumption that the particles interact with the barrier. In classical EM you cannot just say where a charge should go; you have to provide a reason for it, and that reason has to be a field that is properly calculated.

    QM, in my opinion, is just a statistical description of the underlying EM interactions. Those interactions are not ignored in QM, they are implicit in some way (in the same way Newton’s laws are implicit in the ideal gas law). This is my working hypothesis here. If this hypothesis is correct, a rigorous classical calculation should reproduce the quantum prediction. We do not have that calculation so we do not know.

    “It would also mean that in every case, those effects exactly mimic the behavior attributed to QM.”

    Yes, because QM is, IMHO, a statistical description of those effects.

    “It would mean that this was true no matter what approximation was taken, despite very different choices in different experiments.”

    I fail to see the relevance of those “different choices”. Yes, we have a lot of different experiments, and we do not have a proper classical calculation for any of them. The only thing we need is one single experiment treated properly. And sure, I would not expect an exact calculation; that cannot be done. A statistical one is perfectly fine, but the statistics have to be properly grounded in the underlying physics: Maxwell’s equations, the Lorentz force law, Newton’s laws. The only place I’ve seen this attempted is in the framework of SED. I have my own doubts about some of their assumptions, but it is a step in the right direction.

    “I’m also not convinced that any of these SED guys, even the more careful ones, are actually doing things honestly”

    Well, I was unable to find any rebuttal. They have been publishing for decades, many papers in top journals. Of course, anything is possible, but I see no reason to reject their claims. On the other hand, even if SED is wrong, my point remains: no proper classical calculation has been done, so there has been no proper falsification of classical EM.

    “I’m saying this as someone who used to do a lot of debunking of homeopathy: they absolutely do have studies showing effects.”

    Well, I don’t know much about the subject. Still, homeopathy is not part of the curriculum. Doctors in hospitals do not prescribe homeopathic treatments, and so on. They claim they are doing “alternative” medicine. Classical EM is taught in every school; it’s not “alternative” physics.

    “…including theory papers which honestly don’t look that different from the SED papers you’ve shown me”

    Can you be more specific? Show me one of those papers. And how are the SED papers different from the QBist papers, or any other paper of your choice? Are they logically unsound? Are their calculations wrong?

    “In practice, one does need a bit more to debunk them, and that bit more generally involves pointing out that their burden of proof is much higher because of the ridiculous coincidences it implies.”

    Homeopathy asserts that some undetected memory effect exists. That is a claim that needs to be supported by strong evidence. SED claims that Maxwell’s equations hold; no new evidence is needed for that, because a vast amount already exists. It also claims a certain field exists, and provides the required evidence (the Casimir effect).

    1. 4gravitons Post author

      Ok, I think you didn’t get the point I was making there, so let me try to focus in on it. You said

      “In the path integral approach, you simply assert that the particles cannot pass through the barrier.”

      That was what I was refuting there. That statement is simply not true, and I’ve been saying so over and over again in this conversation. In some calculations for some experiments you approximate the barrier as impassable. In others, like the ones involving diffraction through an EM field, you instead approximate the field, assuming it’s exactly what you tried to generate, without any small imperfections or extra radiation. In others, like diffraction through crystals, you approximate the field as generated by the individual atoms of the crystal, neglecting nuclear forces and the fields of whatever equipment you’re holding the crystal in.

      Each of these is a different approximation, neglecting different things. What you are implying is that in each case, the QM calculation covers precisely whatever was left out of the classical approximation, no matter where the line was drawn.

      That cannot be due to any specific details of the experiment, not without massive coincidences. That, at least, is something the SED people realize, and it’s why the effect they posit is independent of the details of the specific experiment, and instead is an isotropic radiation bath.

      In terms of why the SED papers look fishy (and again, making clear I haven’t read in enough detail to say anything definite):

      The papers seem to handle several special cases in idiosyncratic ways, rather than working from an explicit unified framework. That’s something you do if your problem is quite computationally or conceptually difficult, but the kind of mix of statistical physics and EM they’re invoking isn’t actually that difficult.

      Basically, what I’d expect very early in that field’s existence (at least from, say, the 1980s onward, after Wilson’s work) is that one of them would write down an action for EM fields in a stochastic background. (That’s something I’m 90% sure is not very hard to do for people who do modern statistical physics.) Then it could be directly compared to, for example, the Euler–Heisenberg Lagrangian, to see whether it matches the predictions of QM to leading order. There would be no need for this piece-by-piece business of comparing to the Casimir effect, then to diffraction, and so on.
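
      (For reference, the target is completely standard: in natural units the weak-field Euler–Heisenberg expansion reads

      \[
      \mathcal{L} = \tfrac{1}{2}\left(E^{2}-B^{2}\right) + \frac{2\alpha^{2}}{45\,m_{e}^{4}}\left[\left(E^{2}-B^{2}\right)^{2} + 7\left(\mathbf{E}\cdot\mathbf{B}\right)^{2}\right] + \cdots ,
      \]

      with m_e the electron mass and α the fine-structure constant. A stochastic-background action would predict definite numbers to compare against these coefficients.)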

      Maybe Santos does this? It didn’t look like it from my skim of the paper. And even if he does, it shouldn’t have taken that long for someone to finally do it.

      Maybe there’s a reason why that calculation is more complicated than I would expect. But the end result is that the papers look really evasive and disorganized in their reasoning: they don’t state their assumptions, and they use a lot of words where a few more equations could make it much clearer what they were actually proposing. Certainly, there are mainstream papers with those traits, even papers in very nice journals: the black hole firewall debate was famous for that kind of thing. But in general (and this absolutely includes black hole firewalls, random QBist manifestos, and the like), papers like that aren’t trustworthy, and outside of the small communities that produce them, they aren’t trusted. It’s far too easy to rationalize whatever you want when you don’t commit yourself to a clear formalism.

