The Quantum Paths Not Traveled

Before this week’s post: a former colleague of mine from CEA Paris-Saclay, Sylvain Ribault, posted a dialogue last week presenting different perspectives on academic publishing. One of the highlights of my brief time at the CEA was getting to chat with Sylvain and others about the future forms academia might take. He showed me a draft of his dialogue a while ago, designed as a way to introduce newcomers to the debate about how, and whether, academics should do peer review. I’ve got a different topic this week so I won’t say much more about it, but I encourage you to take a look!


Matt Strassler has a nice post up about waves and particles. He’s writing to address a common confusion between two concepts that sound very similar. On the one hand, there are the waves of quantum field theory, ripples in fundamental fields, the smallest versions of which correspond to particles. (Strassler likes to call them “wavicles”, to emphasize their wavy role.) On the other hand, there are the wavefunctions of quantum mechanics, descriptions of the behavior of one or more interacting particles over time. To distinguish them, he points out that wavicles can hurt you, while wavefunctions cannot. Wavicles are the things that collide and light up detectors, one by one; wavefunctions are the math that describes when and how that happens. Many types of wavicles can run into each other one by one, but their interactions can all be described together by a single wavefunction. It’s an important point, well stated.

(I do think he goes a bit too far in saying that the wavefunction is not “an object”, though. That smacks of metaphysics, and I think that’s not worth dabbling in for physicists.)

After reading his post, there’s something that might still confuse you. You’ve probably heard that in quantum mechanics, an electron is both a wave and a particle. Does the “wave” in that saying mean “wavicle”, or “wavefunction”?

A “wave” built out of particles

The gif above shows data from a double-slit experiment, an important type of experiment from the early days of quantum mechanics. These experiments were first conducted before quantum field theory (and thus, before the ideas that Strassler summarizes with “wavicles”). In a double-slit experiment, particles are shot at a screen through two slits. The particles that hit the screen can travel through one slit or the other.

A double-slit experiment, in diagram form

Classically, you would expect particles shot randomly at the screen to form two piles on the other side, one in front of each slit. Instead, they bunch up into a rippling pattern, the same sort of pattern that was used a century earlier to argue that light was a wave. The peaks and troughs of the wave pass through both slits, and either line up or cancel out, leaving the distinctive pattern.

When it was discovered that electrons do this too, it led to the idea that electrons must be waves as well, despite also being particles. That insight led to the concept of the wavefunction. So the “wave” in the saying refers to wavefunctions.

But electrons can hurt you, and as Strassler points out, wavefunctions cannot. So how can the electron be a wavefunction?

To risk a bit of metaphysics myself, I’ll just say: it can’t. An electron can’t “be” a wavefunction.

The saying, that electrons are both particles and waves, is from the early days of quantum mechanics, when people were confused about what it all meant. We’re still confused, but we have some better ways to talk about it.

As a start, it’s worth noticing that, whenever you measure an electron, it’s a particle. Each electron that goes through the slits hits your screen as a particle, a single dot. If you see many electrons at once, you may get the feeling that they look like waves. But every actual electron you measure, every time you’re precise enough to notice, looks like a particle. And for each individual electron, you can extrapolate back the path it took, exactly as if it traveled like a particle the whole way through.

The same is true, though, of light! When you see light, photons enter your eyes, and each one that you see triggers a chemical change in a molecule called a photopigment. The same sort of thing happens for photographs, while in a digital camera an electrical signal gets triggered instead. Light may behave like a wave in some sense, but every time you actually observe it, it looks like a particle.

But while you can model each individual electron, or photon, as a classical particle, you can’t model the distribution of multiple electrons that way.

That’s because in quantum mechanics, the “paths not taken” matter. A single electron will only go through one slit in the double-slit experiment. But the fact that it could have gone through both slits matters, and changes the chance that it goes through each particular path. The possible paths in the wavefunction interfere with each other, the same way different parts of classical waves do.
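
(To make that concrete, here’s a minimal numerical sketch, with made-up units and parameters rather than any real experimental setup. The two rules below differ only in whether you add the two slits’ amplitudes before squaring, or square each one first and then add: only the first gives fringes, and even then each simulated electron still lands at a single point.)

```python
import numpy as np

# Toy two-slit model at the screen. All numbers are illustrative, not a real setup:
# each slit contributes a complex amplitude with a Gaussian envelope and a phase
# set by the extra path length from that slit to the screen position x.
x = np.linspace(-10, 10, 2000)
wavelength, slit_sep, screen_dist, envelope = 0.1, 5.0, 100.0, 2.0
k = 2 * np.pi / wavelength
phase = k * (slit_sep / 2) * x / screen_dist

A1 = np.exp(-(x - slit_sep / 2) ** 2 / (2 * envelope**2)) * np.exp(1j * phase)
A2 = np.exp(-(x + slit_sep / 2) ** 2 / (2 * envelope**2)) * np.exp(-1j * phase)

# Quantum rule: add the amplitudes for the two paths, then square -> fringes.
P_quantum = np.abs(A1 + A2) ** 2

# "One slit or the other" rule: square each amplitude, then add -> two piles,
# one per slit, and no fringes.
P_classical = np.abs(A1) ** 2 + np.abs(A2) ** 2

# Each electron still lands at a single point; the fringes only show up in the
# statistics of many such dots.
rng = np.random.default_rng(0)
dots = rng.choice(x, size=2000, p=P_quantum / P_quantum.sum())
```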

That role of the paths not taken, of the “what if”, is the heart and soul of quantum mechanics. No matter how you interpret its mysteries, “what if” matters. If you believe in a quantum multiverse, you think every “what if” happens somewhere in that infinity of worlds. If you think all that matters is observations, then “what if” shows the folly of modeling the world as anything else. If you are tempted to try to mend quantum mechanics with faster-than-light signals, then you have to declare one “what if” the true one. And if you want to double down on determinism and replace quantum mechanics, you need to declare that certain “what if” questions are off-limits.

“What if matters” isn’t the same as a particle traveling every path at once; it’s its own weird thing with its own specific weird consequences. It’s a metaphor, because everything written in words is a metaphor. But it’s a better metaphor than thinking an electron is both a particle and a wave.

17 thoughts on “The Quantum Paths Not Traveled”

  1. Peter Morgan

    As a slightly different approach, I suggest that to say ‘particle’ at all can be thought a metaphysical claim too far: we can instead say ‘event’ and discuss the time and place data we have about them and statistics of event properties instead of statistics of particle properties. Is an ‘event’ caused by a ‘particle’? Is the time and place data about an ‘event’ really and always time and place data about a ‘particle’?

    How do we know when an event happened? Hardware typically measures the signal level out of a device at GHz rates (a million times slower than the PHz of optical frequencies) and notices when the signal level suddenly rises. The usually significant noise level makes it necessary to have elaborate hardware and software algorithms to decide what time to save in memory, so each event is a complex enough dance that we ought to hesitate before we say it is caused by a single particle. At a minimum, the event would not have happened if the device were not there, and of a carefully chosen type, and vacuum noise, thermal noise, and every aspect of the whole apparatus contribute to dark rate events.
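
    (A minimal sketch of the kind of threshold-based event finding described above, run on a fake sampled trace; the sample rate, pulse shape, noise level, and threshold are all made-up numbers, and real event-finding algorithms are far more elaborate.)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Fake detector trace: Gaussian baseline noise sampled at 1 GSample/s (assumed),
    # with a few injected pulses standing in for "events".
    fs = 1e9                                   # samples per second
    t = np.arange(200_000) / fs                # 200 microseconds of trace
    trace = rng.normal(0.0, 0.05, t.size)
    for t0 in (20e-6, 75e-6, 130e-6):          # injected pulse arrival times
        trace += np.exp(-((t - t0) / 30e-9) ** 2)

    # Minimal "event finder": find where the level first crosses a threshold,
    # and save only those times instead of the whole trace.
    threshold = 0.5
    above = trace > threshold
    rising_edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    event_times = t[rising_edges]

    print(event_times * 1e6)                   # roughly [20, 75, 130] microseconds
    ```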

    Admittedly, we must save only the time that an event happened insofar as it is prohibitive to store Gigabytes per second as a record of the signal level: without compression rates of thousands or millions we would be overwhelmed. Yet storing the signal level instead of storing the event times would give us far more information, and who is to say what we might learn from it if we give it a chance? What I think we can say is that if we change the experiment apparatus by increments, the statistics of event data will change by increments and so would the statistics we could compute for signal level data, if we stored it.
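
    (For scale, a back-of-envelope version of that compression factor, using assumed, purely illustrative numbers for a single channel.)

    ```python
    # Raw signal: 1 GSample/s at 2 bytes per sample (assumed ADC word size)
    raw_rate = 1e9 * 2                    # ~2 GB per second of signal-level data

    # Stored events: 10,000 events per second (assumed), 8-byte timestamp each
    event_rate = 1e4 * 8                  # ~80 kB per second of event times

    print(raw_rate / event_rate)          # ~2.5e4, i.e. tens of thousands to one
    ```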

    It seems that everything we know about quantum mechanics and quantum field theory, we know by mathematical inference from data about experiments and the events that happen within them, which used to be written in notebooks but is now stored in computer memory at much faster rates. I’d like to know more precisely and with less pretense about ‘particles’ and ‘waves’ how that inference has constructed the edifice we have today.

    That’s obviously rather idiosyncratic and almost a rant, but I hope it’s not a complete waste of the time of the three people who read it.

    1. 4gravitons Post author

      I think you can chalk up my use of the word “particle” not (just) to leftover metaphysics, but to a field theorist’s idiom. “A particle is an irreducible representation of the Poincaré group”. These are events, but they’re related to other events in reliable ways, part of a family of very similar events that differ only by changes of reference frame. I think it’s worth thinking of those groups of events as meaningful, instead of just the individual events. Calling them particles is indeed residual metaphysics though. 😉

      It’s not obvious to me that there haven’t been experiments where you store the signal level and not just the event times, even if modern colliders don’t do it. Certainly they exist when you’re not trying to identify individual particles: experiments where you measure a lot of particles and keep track of the signal level are how this hardware was developed in the first place. More powerfully, I think, you can think of colliders not just as using their silicon hardware as detectors, but as using the colliding quarks and gluons themselves. Because the statistics of measured collisions match well (sometimes very very well) with theory, you can use that match to probe events happening at much smaller timescales than your actual hardware can measure.

      1. Peter Morgan

        The irrep idea for asymptotically free particles associates them with a wavenumber, so we might call it the space-time opposite of a recorded ‘event’: everywhere instead of where an event is. But yes, that idea seems to me very clear in the mathematics of QFT as associated with the Fourier transform of a real-space operator-valued distribution.

        I think of lines or spirals of events at CERN, say, as very different from the one isolated event we typically see for a photon or an electron in a quantum optics or Stern-Gerlach experiment. I suppose a pattern of thousands of events that can near-obviously be assigned to a single particle can be almost as substantial as the apparatus the pattern occurs in, but it seems much less obviously substantial if we only have one event.

        It has seemed to me that we give names to different patterns of events: this is a proton pattern-of-events, this is an electron pattern-of-events, this is a tau-neutrino pattern-of-events, et cetera. If that’s what we’re doing, it seems better to say “look, an electron-type pattern-of-events” instead of “look, an electron particle” (yeah, we don’t say “electron particle”, but do people say in their heads electron-not-a-particle?)

        I’ve seen signals as they’ve been recorded at GHz rates in many articles about not-particle-physics experiments. In any case, when an experiment is debugged there are often occasions when an oscilloscope has to be attached to determine why there is some timing or other glitch. Too much data has to be analyzed until some feature emerges that explains the glitch so it can be eliminated and we can go back to massive compression. My understanding is that LIGO stores Petabytes of signal data and consequently needs lots of compute to analyze it after the fact.

        Love this last phrase: “you can use that match to probe events happening at much smaller timescales than your actual hardware can measure”, but I want to restate ‘probe events’ as about measurement: we use actual experimental records to determine a class of quantum states for the experimental apparatus that are consistent with the actual results. Then we can report the statistics of the results we expect would have been recorded by a measurement device that is much more refined than any measurement device we actually have, if we had put it in various different places.

        One place where I think the signal/event distinction comes into its own is when we consider Bell inequality violating experiments. When people think about event pairs, they think that a particle pair caused the events to be as they are; but if we think about signals, that encourages us to think that all the degrees of freedom of the EM field that are contained by the apparatus are needed to cause the signals to be as they are. I have a video on YouTube on that, “Bell’s theorem for noisy fields” (and an article in JPhysA 2006).

        Sorry this is so philosophical, but I’ve found that trying to clear out metaphysics is always something like this. Thank you for your reply but feel free to ignore this.

        1. 4gravitons Post author

          This is the fun kind of rambly philosophical discussion, don’t worry.

          There’s a similarity with the kind of “let’s just focus on events” thing you’re leaning towards and one of the long-term dreams of the scattering amplitudes field, where the hope is that everything can be expressed in terms of on-shell states. A key difference, as you point out, is that one usually thinks of measurements as happening at a definite place and time, and that’s not true of the external states in scattering amplitudes. There’s another “theorist idiom” that supports the bias towards these kinds of things, “there are no local observables in quantum gravity”. Since diffeomorphism in gravity plays the same role as gauge symmetries in gauge theories, you want to define your observables in a diffeomorphism-invariant way, which seems to lock you into considering the S-matrix rather than the in-practice local measurements people actually make. There was a recent Witten paper that discussed a nice way to transcend this: instead of insisting on observables at infinity or local observables, talk about what can be observed on a specific worldline. Taken fully seriously this would mean taking a step back even further than you seem to be doing, and thinking not about the hits of a particle on a detector but the hits of the news of said particle on your brain.

          I agree that there’s also a valuable distinction to be made between the measurements actually made and the counterfactual measurements one infers: “if we had a detector that was at this vertex and was fine-grained enough, we could have seen…”. I do think it’s important to not confuse avoiding metaphysics with avoiding theory-laden observations. We infer that a detection is evidence of something because there are lots of different types of experiments that are collectively well-described if we propose a particular concept. If we throw out all such concepts we’re not just losing the ability to predict very much, we’re also understating how good the evidence really is. The fact that we end up with a consistent story of these counterfactual measurements/events is a very nontrivial consequence, and not easily satisfied, as can be seen by how powerful unitarity constraints can be on scattering amplitudes.

  2. Peter Morgan

    I like almost all of that comment a lot. Witten has recently been discussing physics in ways that seem to me part-way between the Wightman axioms and the Haag-Kastler axioms, in which the S-matrix and other asymptotic observables are very little or not mentioned. Thinking about QFT as a Data&Signal Analysis formalism is all about one or more timelines, so I’m very in favor of his approach in general and specifically of his invocation of the Timelike Tube Theorem. In detail, however, I’m much more empiricist than Witten, preferring to discuss how we infer from the data to quantum theory and quantum field theory, instead of using inference the other way round. I’m not against theory-first approaches, it’s just that theory-first seems to me to have been heavily investigated for the last 50 years, so it seems good to get off that beaten path a little and think in a data-first way for a while. I want to think about physics in a high theory-informed way, but focused on an engineer’s hands-on approach to experimental apparatus and data and how an engineering textbook would discuss how to analyze that data. I think it helps that Machine Learning is taking much the same path as I am but with what is, so far, a different focus (it was fun to take part in the Yale Inference Workshop, where I developed that cross-fertilization a little, though I still know next to nothing about ML).

    Here’s a thing: data&signal analysis, which is often the heart of real experiments, seems very badly served by QM/QFT. If we perform millions of measurements at time-like separation, that means we either have that many collapses of the wave function happen, we model the whole experiment as a single final measurement, or we do something else… An approach I suggest in JPhysA 2022, “The collapse of a quantum state as a joint probability construction”, rethinks the measurement problem by introducing Quantum Non-Demolition measurements, which allows us to construct a very-nearly-classical formalism inside QFT. Somehow we have to use the resources of quantum measurement theory to construct joint probabilities for measurements at time-like separation, which can be done in many different ways, but the simplest is what we call ‘collapse of the wave function’. I’m not the only person to have thought about this problem, of course, but there isn’t nearly as much out there as I think the practical relevance of the question justifies; in particular, I think Witten hasn’t yet thought through this problem in anything like this way, though given the right push he could do in a week what takes me a year.

    I don’t think it’s necessary to step all the way back to the brain/mind of the observer. An experiment records event times or signal levels to computer memory at MHz to GHz rates. Nobody can tell me that my mind (or anybody else’s?) works at that speed. Most of the data in a PetaByte dataset will never be looked at by any human: a computer will compute some statistical summary of billions or trillions of such items and we’ll be done. We can use QND measurements to construct a consistent histories kind of no-collapse interpretation of QFT, so that I think we cannot as easily invoke collapse of the wave function to claim that the mind/brain must be introduced (to be clear, no-collapse has become quite closely associated with MWI/RSI, but this is not that).

    “one of the long-term dreams of the scattering amplitudes field, where the hope is that everything can be expressed in terms of on-shell states” seems rather differently stated than I have seen in other places. Your version seems closer to the nonlinear mathematics I have been developing as a way to construct a subalgebra of a number of free field algebras. We can use the Reeh-Schlieder theorem for Wightman fields to prove that with non-Lagrangian methods we can reproduce anything we can do with a renormalized path integral formalism, but very differently. So differently that it will take some time for it be adopted and a few decades for new calculation methods to be introduced, but I hope this new approach will be helpful just because it allows us to rethink what we’re doing when we use QFT. Sorry that’s so grandiose. The ideas are slowly developing, so that my current preprint on the topic, arXiv:2109.04412v2, is much less good than a talk I gave to the Math Physics Seminar at the University of Iowa in September 2023, https://www.youtube.com/watch?v=JWU32B8rJ14 (sorry for the link: there’s a link there to a PDF of the slides on Dropbox, so nobody has to watch it.)

    I should say that I only started following 4 gravitons when your “Well That Didn’t Work” post was picked up in various places. I like it a lot. I’m gonna not apologize for how long this comment is, but if you want me to dial it back please say.

    1. 4gravitons Post author

      Ok, the idea of thinking about electronic readouts as giving rise to a consistent-histories style classical history of an experiment is pretty damn cool. One of those things I never would have realized someone out there was working on and am glad they are. I do tend to lean QBist (I’m not going to say I am QBist because I get the impression there’s some sillier stuff that the main QBists think is crucial to the interpretation), so I tend to think of measurement as subjective anyway, a feature of how we frame a particular problem and what knowledge we’re posing ourselves to have for it. But “what can the machine conclude” is just as reasonable a question to pose as “what can the human conclude from the histograms afterwards”, so that’s kind of how I would cod-QBist-ly think about your approach.

      1. Peter Morgan

        Part of why I’ve never felt willing to commit to a QBism is that the range of different QBisms has seemed to me a little dizzying. I suppose my church is of algebraic models of Generalized Probability Theory as part of a Data&Signal Analysis interpretation of QFT. Perhaps the most important difference is what I said above: I prefer to put the data first, then we infer from that data to what would happen in a new experiment. Within that seemingly very operational perspective, however, we can use the QND-consistent histories picture to say what the results of measurements not done would have been, which gives us an idea of a collection of experiments that is more complete than I think we ever had from classical non-equilibrium thermodynamics.

        FWIW, the way I put the human mind into physics is in the creation of a new experimental apparatus. Deciding what it would be interesting to investigate, designing an apparatus, persuading committees and grant bodies to give the money for the apparatus and its running costs, ordering the parts, debugging the inevitable problems that arise as it is constructed, coding the data analysis, et cetera, is all done by people using whatever tools they have available. But when the apparatus works and the power has been turned on, almost everyone can get a coffee or check their e-mail while the machine collects data at MHz or GHz rates for a few hours or months. The data only exists as a whole because somebody imagined that the apparatus would be interesting, but this is not the intimate moment-by-moment contact of more mind-imbued interpretations of QM.

        About that Consistent Histories approach: for the free EM field, we can construct what I call a QNDFT, a Quantum Non-Demolition Field Theory, a completely commutative algebraic version of a classical random field, within quantized electromagnetism. With that done, we can construct isomorphisms between the QNDFT and QFT Hilbert spaces. If we allow the use of the vacuum projection operator as freely as we do when we use QFT, we can construct isomorphisms between the respective algebras of measurements. That’s in an article in Physica Scripta 2019, “Classical states, quantum field measurement” (and, much more succinctly, also in the talk I mention above). The relationship in this mathematics between classical and quantum is different from quantization because it’s about isomorphisms. QNDFT seems better suited to signal analysis (indeed it is an extension of ordinary signal analysis to 1+3-dimensions that includes an explicit Lorentz invariant quantum noise model), whereas QFT is better suited when we consider an experiment only at a single time and without complex measurement schemes. There is a fairly clear sense in which we can call QFT ‘analytic QNDFT’, by analogy with the idea of the analytic signal in signal analysis [of course for interacting QFT we can’t construct isomorphisms because we have no well-defined interacting QFTs.]

        This structure allows us to construct a QNDFT model for anything for which we can construct a model in the equivalent QFT. In this way of imagining experiments, an apparatus is not classical or quantum, it’s the model of the apparatus that is a QNDFT model or a QFT model. The apparatus doesn’t care how we model it, but how we model experiments influences what we think might be interesting in the future. I hope this will eventually simplify our worldview significantly, although there are definitely problematic details and it is inevitably not a panacea.

        Thank you for your patience.

  3. flippiefanus

    Perhaps the reason why all measurements of electrons make them look like particles is that all the measurement devices that we use to detect electrons can only detect particles. To say it more precisely, the elements of the measurement bases of such measurement devices are always localized functions. Since these measurement bases are complete orthogonal bases, the electron wavefunction can be expanded in terms of them, and only one of these terms in the expansion is detected. Since the measurement involves a fundamental interaction that requires a single quantum of the field, only one such measurement can be recorded per electron. Therefore it looks like a (classical) particle, but that conclusion is actually imposed by the measurement device.
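
    (A toy numerical illustration of this point, with a made-up state and Born-rule sampling standing in for the detection: when the measurement basis is localized, each detection is a single dot at a definite spot; in a delocalized basis, a single detection picks out a mode that is not localized anywhere.)

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # A toy state on N discrete "screen pixels": a broad wave packet (made-up numbers).
    N = 64
    n = np.arange(N)
    psi = np.exp(-(n - N / 2) ** 2 / (2 * 6.0**2)) * np.exp(2j * np.pi * 5 * n / N)
    psi /= np.linalg.norm(psi)

    # Measurement basis 1: the localized pixels themselves. One pixel fires per
    # detection, so every outcome looks like a particle at a definite spot.
    p_pixels = np.abs(psi) ** 2
    hit_pixel = rng.choice(N, p=p_pixels / p_pixels.sum())

    # Measurement basis 2: plane-wave (Fourier) modes. One mode fires per detection,
    # but that outcome is not localized anywhere on the screen.
    psi_modes = np.fft.fft(psi) / np.sqrt(N)        # unitary change of basis
    p_modes = np.abs(psi_modes) ** 2
    hit_mode = rng.choice(N, p=p_modes / p_modes.sum())

    print("pixel basis: a dot at site", hit_pixel)
    print("mode basis: a click in plane-wave mode", hit_mode)
    ```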

    1. 4gravitons Post author

      So, you can absolutely measure in different bases and get results that “look” different, including not looking like particles at all. (In fact, I have a post pointing out that thinking about photons as individual particles is also not the only right approach, because of IR divergences. If you’re measuring photons, most of the time you’re really measuring photons plus an unmeasured amount of soft photons, so your measurement device is already imposing a conclusion that’s not just individual particles.)

      I don’t think the idea that different bases can make something look more or less particle-like is really the substance of the pop science idea of “particle-wave duality” though. I think that idea is trying to say that sometimes things are wavefunctions and sometimes they aren’t, and that isn’t really the right way to think about wavefunctions.

      1. Peter Morgan

        The pop science idea of “particle-wave duality” seems somewhat closely related to the idea of Fourier transforms in particular, which in mathematical terms is just one of an infinite class of transforms between different bases of a Hilbert space. I support this partly by reference to the many videos that can be found on YouTube that are effectively low-level mathematics discussions of time-frequency analysis applied to music. There are many more, but I cite four in my AnnPhys 2020, “An algebraic approach to Koopman classical mechanics”:

        [18] 3Blue1Brown, https://www.youtube.com/watch?v=MBnnXbOM5S4. (Accessed 8 June 2019).
        [19] Minute Physics, https://www.youtube.com/watch?v=7vc-Uvp3vwg. (Accessed 8 June 2019).
        [20] Sixty Symbols, https://www.youtube.com/watch?v=VwGyqJMPmvE. (Accessed 8 June 2019).
        [21] The Science Asylum, https://www.youtube.com/watch?v=Q2OlsMblugo. (Accessed 8 June 2019).

        I think the Sixty Symbols video is especially noteworthy because it mentions that it is the Fourier transform applied to probability distributions that is characteristic of quantum theory, which is significantly more abstract. If the analogy with sound were perfect, I doubt we would still be confused by quantum theory. The analogy with sound is potent at the pop science level because almost everyone has some intuitive grasp of different frequencies and many have seen oscilloscope displays of, for example, what a narrow-band sound looks like in the position basis.
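
        (For what it’s worth, the time-frequency tradeoff the analogy rests on is easy to see numerically. Here is a minimal sketch in position/momentum language, i.e. the Fourier transform of the amplitude rather than of a probability distribution, with made-up units where hbar = 1.)

        ```python
        import numpy as np

        hbar, sigma = 1.0, 1.0
        x = np.linspace(-40, 40, 4096)
        dx = x[1] - x[0]

        # A Gaussian wave packet, normalized so that sum(|psi|^2) * dx = 1.
        psi_x = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

        # Momentum-space amplitude: the (unitary) Fourier transform of psi_x.
        k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
        psi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2 * np.pi)
        p, dp = hbar * k, hbar * (k[1] - k[0])

        # Widths of the two probability distributions: narrow in x means broad in p.
        width_x = np.sqrt(np.sum(x**2 * np.abs(psi_x) ** 2) * dx)
        width_p = np.sqrt(np.sum(p**2 * np.abs(psi_p) ** 2 / hbar) * dp)

        print(width_x * width_p, "vs hbar/2 =", hbar / 2)   # ~0.5: the usual tradeoff
        ```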

      2. flippiefanus

        Not sure what the “substance of the pop science idea … ” means, but I am not trying to learn anything about nature by studying pop science ideas. So the question is more about the physical process that occurs when a measurement is made. In my view, all measurements require at least one interaction. It is the properties of such interactions that dictate the effects of the measurement.

        About the photons that one measures, I’m also confused about the statement that one is “measuring … unmeasured … soft photons.” The statement seems to me to be internally inconsistent. So it is difficult to understand what it means. The post for which you provided the link seems to assume that a single photon always has a single fixed frequency. But that is not the case. A single photon usually has a finite frequency spectrum. It is not a plane wave. My apologies if this is not what you meant.

        1. 4gravitons Post author

          I brought up pop science to say that your question, while interesting, was not what my post was about. My post is trying to correct a particular kind of pop physics misunderstanding, and isn’t talking about the distinction between different kinds of measurable states. I’m happy to discuss your topic, just wanted to clarify that that wasn’t what I was talking about in the post.

          Treating photons as plane waves is indeed a particle physicist’s (over-)simplification. What I’m referring to here is the concept of soft photons. If you’re not familiar with the idea, the point is that any measurement apparatus will have a minimum frequency it is able to register, so photons that lack any measurable modes above this frequency will be unmeasurable. That means that any experiment that claims to measure a single photon hasn’t actually measured a single photon: it has measured a superposition of a single photon plus all possible configurations of “soft photons” (because it can’t distinguish a single photon from a single photon plus soft photons, and you can’t measure something you can’t distinguish). This matters a lot in particle physics, because without taking this into account you get divergent scattering amplitudes even in theories that don’t need to be renormalized. But even outside of that context, it should be clear that we never actually measure a single photon; we’re always measuring a superposition of a single photon and all possible configurations of soft photons.

          1. flippiefanus

            Any superposition of single photons is still a single photon. It may just be an issue of semantics in the end.

            1. 4gravitons Post author

              Ah, I think I see the confusion.

              Let $\lvert p_1 p_2 \ldots p_n \rangle$ be an $n$-photon state. We take one large momentum $p$ and some number of small momenta $k_i \ll p$, such that each $k_i$ is too small for our measurement device to detect.

              Then when your measurement device triggers, you can't distinguish whether it was due to a state $\lvert p \rangle$, to a state $\lvert p\, k_1 \rangle$, or to $\lvert p\, k_1 k_2 \rangle$, and so on for arbitrarily many $k_i$. So what you actually measure is a superposition of states with different particle numbers. In no case is your measurement apparatus actually capable of measuring single photons, as distinct from multi-photon states, because photons are massless and thus can have arbitrarily low energy.
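
              (A toy bookkeeping version of this point, not the real QFT calculation: the threshold, energies, and probabilities below are all made up. States that differ only by photons below the detector's threshold produce the identical readout, so only their combined probability is observable.)

              ```python
              # Toy detector with an assumed energy threshold; photons below it never register.
              threshold = 0.01

              # A hard photon of energy 1.0 plus 0, 1, or 2 soft photons (arbitrary units),
              # with made-up probabilities assigned to each configuration.
              states = {
                  "|p>":       ([1.0],                0.6),
                  "|p k1>":    ([1.0, 0.004],         0.3),
                  "|p k1 k2>": ([1.0, 0.004, 0.002],  0.1),
              }

              def readout(photon_energies, thr=threshold):
                  # What the detector reports: only the photons it can actually see.
                  return tuple(e for e in photon_energies if e > thr)

              observable = {}
              for energies, prob in states.values():
                  key = readout(energies)
                  observable[key] = observable.get(key, 0.0) + prob

              print(observable)   # -> {(1.0,): ~1.0}: all three states give the identical readout
              ```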

  4. Andrei

    “Classically, you would expect particles shot randomly at the screen to form two piles on the other side, one in front of each slit.”

    This is only true in the absence of long-range interactions, like electromagnetism. Macroscopic objects like bullets behave in this way because, being neutral on average, they only interact electromagnetically at very close range (during collisions). Electrons do not behave like that. We just do not know how charged particles interacting with a barrier composed of charged particles would behave according to classical electromagnetism. As far as I know, such a calculation has not been performed yet.

    “But while you can model each individual electron, or photon, as a classical particle, you can’t model the distribution of multiple electrons that way.

    That’s because in quantum mechanics, the “paths not taken” matter. A single electron will only go through one slit in the double-slit experiment. But the fact that it could have gone through both slits matters, and changes the chance that it goes through each particular path.”

    Of course we have a well-known classical concept that makes “paths not taken” matter. It’s the concept of fields. The information about the geometry of the barrier exists at the electron’s location in the form of electric, magnetic and gravitational fields associated with the electrons and nuclei in the barrier. The electron does not need to bump into each point in the barrier to “know” if there is a hole there. Earth “knows” about the Sun, Moon or Jupiter not because it bumps into them at the same time in parallel universes, as many worlds interpretation would put it, but because that information is locally encoded in the gravitational field.

    1. 4gravitons Post author

      Andrei, if you don’t get a comment through, posting it multiple times won’t help: WordPress will just classify it as Spam. If you think you’ve gotten something erroneously classified as Spam, just send me a message through the contact form and I’ll check. In this case your post was 100% OK and on-topic, so I would have approved it.

  5. flippiefanus

    OK, now I understand what you mean. Nevertheless, if the detector system can only detect photon energies that are above a certain threshold, those additional $k$’s are not actually detected. They are traced out.

    There are many different scenarios related to the detection of photons. It is perhaps a bit misleading to try and discuss photon detection in general without considering the specific scenario. Often the detection of a single photon is inferred more from the experimental setup than from the properties of the detection system.

