Monthly Archives: June 2021

A Few Advertisements

A couple different things that some of you might like to know about:

Are you an amateur with an idea you think might revolutionize all of physics? If so, absolutely do not contact me about it. Instead, you can talk to these people. Sabine Hossenfelder runs a service that will hook you up with a scientist who will patiently listen to your idea and help you learn what you need to develop it further. They do charge for that service, and they aren’t cheap, so only do this if you can comfortably afford it. If you can’t, then I have some advice in a post here. Try to contact people who are experts in the specific topic you’re working on, ask concrete questions that you expect to yield useful answers, and be prepared to do some background reading.

Are you an undergraduate student planning for a career in theoretical physics? If so, consider the Perimeter Scholars International (PSI) master’s program. Located at the Perimeter Institute in Waterloo, Canada, PSI is an intense one-year boot camp in theoretical physics, teaching the foundational ideas you’ll need for the rest of your career. It’s something I wish I had been aware of when I was applying for schools at that age. Theoretical physics is a hard field, and a big part of what makes it hard is all the background knowledge one needs to take part in it. Starting work on a PhD with that background knowledge already in place can be a tremendous advantage. There are other programs with similar concepts, but I’ve gotten a really good impression of PSI specifically, so it’s them I would recommend. Note that applications for the new year aren’t open yet: I always plan to advertise them when they open, and I always forget. So consider this an extremely-early warning.

Are you an amplitudeologist? Registration for Amplitudes 2021 is now live! We’re doing an online conference this year, co-hosted by the Niels Bohr Institute and Penn State. We’ll be doing a virtual poster session, so if you want to contribute to that please include a title and abstract when you register. We also plan to stream on YouTube, and will have a fun online surprise closer to the conference date.

Light and Lens, Collider and Detector

Why do particle physicists need those enormous colliders? Why does it take a big, expensive, atom-smashing machine to discover what happens on the smallest scales?

A machine like the Large Hadron Collider seems pretty complicated. But at its heart, it’s basically just a huge microscope.

Familiar, right?

If you’ve ever used a microscope in school, you probably had one with a light switch. Forget to turn on the light, and you spend a while confused about why you can’t see anything before you finally remember to flick the switch. Just like seeing something normally, seeing something with a microscope means that light is bouncing off that thing and hitting your eyes. Because of this, microscopes are limited by the wavelength of the light that they use. Try to look at something much smaller than that wavelength and the image will be too blurry to understand.

To see smaller details, then, people use light with smaller wavelengths. Using massive X-ray-producing machines called synchrotrons, scientists can study matter on the sub-nanometer scale. To go further, scientists can take advantage of wave-particle duality, and use electrons instead of light. The higher the energy of the electrons, the smaller their wavelength. The best electron microscopes can see objects measured in angstroms, not just nanometers.

Less familiar?

A particle collider pushes this even further. The Large Hadron Collider accelerates protons until they have 6.5 Tera-electron-Volts of energy. That might be an unfamiliar type of unit, but if you run the numbers you can estimate that this lets the LHC see details below the attometer scale. That’s a quintillionth of a meter, or a hundred million times smaller than an atom.
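If you want to run those numbers yourself, here’s a rough sketch. It uses the textbook relation that an ultra-relativistic particle of energy E probes length scales around hc/E; the constants and energies below are just illustrative values plugged into that back-of-envelope formula, not anything specific to the LHC’s actual design.

```python
# Back-of-envelope estimate: the length scale a high-energy particle can probe,
# using lambda ~ h*c / E for ultra-relativistic particles.
H_TIMES_C = 1.24e-6  # Planck's constant times the speed of light, in eV * meters

def probe_wavelength_m(energy_eV):
    """Rough smallest length scale (in meters) resolvable at this energy."""
    return H_TIMES_C / energy_eV

lhc_proton_energy = 6.5e12  # 6.5 TeV, in electron-Volts
print(probe_wavelength_m(lhc_proton_energy))  # ~1.9e-19 m: below an attometer
```

Comparing that ~2×10⁻¹⁹ m to an atom’s ~10⁻¹⁰ m gives the “hundred million times smaller than an atom” figure, up to the usual order-of-magnitude slop in estimates like this.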

A microscope isn’t just light, though, and a collider isn’t just high-energy protons. If it were, we could just wait and look at the sky. So-called cosmic rays are protons and other particles that travel to us from outer space. These can have very high energy: protons with similar energy to those in the LHC hit our atmosphere every day, and rays have been detected that were millions of times more powerful.

People sometimes ask why we can’t just use these cosmic rays to study particle physics. While we can certainly learn some things from cosmic rays, they have a big limitation. They have the “light” part of a microscope, but not the “lens”!

A microscope lens magnifies what you see. Starting from a tiny image, the lens blows it up until it’s big enough that you can see it with your own eyes. Particle colliders have similar technology, using their particle detectors. When two protons collide inside the LHC, they emit a flurry of other particles: photons and electrons, muons and mesons. Each of these particles is too small to see, let alone distinguish with the naked eye. But close to the collision there are detector machines that absorb these particles and magnify their signal. A single electron hitting one of these machines triggers a cascade of more and more electrons, in proportion to the energy of the electron that entered the machine. In the end, you get a strong electrical signal, which you can record with a computer. There are two big machines that do this at the Large Hadron Collider, each with its own independent scientific collaboration to run it. They’re called ATLAS and CMS.
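The way the cascade ends up proportional to the incoming energy can be captured with a standard textbook cartoon (the “Heitler model” of an electromagnetic shower): at each step every particle splits in two, sharing its energy equally, until the pieces fall below a critical energy and are simply absorbed. The specific numbers below are made up for illustration; only the proportionality is the point.

```python
# Toy Heitler-style model of a detector cascade: a particle repeatedly
# splits in two until each piece is below the critical energy, so the
# number of particles at shower maximum tracks the initial energy.

def shower_max_particles(initial_energy, critical_energy):
    """Particles at shower maximum in the toy doubling model."""
    n_particles = 1
    energy_each = initial_energy
    # Keep splitting while each daughter would still be above critical energy.
    while energy_each / 2 >= critical_energy:
        energy_each /= 2
        n_particles *= 2
    return n_particles

# Doubling the incoming energy roughly doubles the signal:
print(shower_max_particles(1e6, 1.0))  # 524288
print(shower_max_particles(2e6, 1.0))  # 1048576
```

Real calorimeters are far messier than this, but the same rough logic is why the recorded signal can be read back as the energy of the particle that entered.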

The different layers of the CMS detector, magnifying signals from different types of particles.

So studying small scales needs two things: the right kind of “probe”, like light or protons, and a way to magnify the signal, like a lens or a particle detector. That’s hard to do without a big expensive machine…unless nature is unusually convenient. One interesting possibility is to try to learn about particle physics via astronomy. In the Big Bang particles collided with very high energy, and as the universe has expanded since then those details have been magnified across the sky. That kind of “cosmological collider” has the potential to teach us about physics at much smaller scales than any normal collider could reach. A downside is that, unlike in a collider, we can’t run the experiment over and over again: our “cosmological collider” only ran once. Still, if we want to learn about the very smallest scales, some day that may be our best option.

Who Is, and Isn’t, Counting Angels on a Pinhead

How many angels can dance on the head of a pin?

It’s a question famous for its sheer pointlessness. While probably no-one ever had that exact debate, “how many angels fit on a pin” has become a metaphor, first for a host of old theology debates that went nowhere, and later for any academic study that seems like a waste of time. Occasionally, physicists get accused of doing this: typically string theorists, but also people who debate interpretations of quantum mechanics.

Are those accusations fair? Sometimes yes, sometimes no. In order to tell the difference, we should think about what’s wrong, exactly, with counting angels on the head of a pin.

One obvious answer is that knowing the number of angels that fit on a needle’s point is useless. Wikipedia suggests that was the origin of the metaphor in the first place, a pun on “needle’s point” and “needless point”. But this answer is a little too simple: the debate would still be useful if angels were real and we could interact with them. “How many angels fit on the head of a pin” is really a question about whether angels take up space, whether two angels can be at the same place at the same time. Asking that question about particles led physicists to bosons and fermions, which among other things led us to invent the laser. If angelology worked, perhaps we would have angel lasers as well.

Be not afraid of my angel laser

“If angelology worked” is key here, though. Angelology didn’t work, it didn’t lead to angel-based technology. And while Medieval people couldn’t have known that for certain, maybe they could have guessed. When people accuse academics of “counting angels on the head of a pin”, they’re saying they should be able to guess that their work is destined for uselessness.

How do you guess something like that?

Well, one problem with counting angels is that nobody doing the counting had ever seen an angel. Counting angels on the head of a pin implies debating something you can’t test or observe. That can steer you off-course pretty easily, into conclusions that are either useless or just plain wrong.

This can’t be the whole of the problem though, because of mathematics. We rarely accuse mathematicians of counting angels on the head of a pin, but the whole point of math is to proceed by pure logic, without an experiment in sight. Mathematical conclusions can sometimes be useless (though we can never be sure, some ideas are just ahead of their time), but we don’t expect them to be wrong.

The key difference is that mathematics has clear rules. When two mathematicians disagree, they can look at the details of their arguments, make sure every definition is as clear as possible, and discover which one made a mistake. Working this way, what they build is reliable. Even if it isn’t useful yet, the result is still true, and so may well be useful later.

In contrast, when you imagine Medieval monks debating angels, you probably don’t imagine them with clear rules. They might quote contradictory bible passages, argue everyday meanings of words, and win based more on who was poetic and authoritative than on who had the better argument. Picturing a debate over how many angels can fit on the head of a pin, it seems more like Calvinball than like mathematics.

This then, is the heart of the accusation. Saying someone is just debating how many angels can dance on a pin isn’t merely saying they’re debating the invisible. It’s saying they’re debating in a way that won’t go anywhere, a debate without solid basis or reliable conclusions. It’s saying, not just that the debate is useless now, but that it will likely always be useless.

As an outsider, you can’t just dismiss a field because it can’t do experiments. What you can and should do, is dismiss a field that can’t produce reliable knowledge. This can be hard to judge, but a key sign is to look for these kinds of Calvinball-style debates. Do people in the field seem to argue the same things with each other, over and over? Or do they make progress and open up new questions? Do the people talking seem to be just the famous ones? Or are there cases of young and unknown researchers who happen upon something important enough to make an impact? Do people just list prior work in order to state their counter-arguments? Or do they build on it, finding consequences of others’ trusted conclusions?

A few corners of string theory do have this Calvinball feel, as do a few of the debates about the fundamentals of quantum mechanics. But if you look past the headlines and blogs, most of each of these fields seems more reliable. Rather than interminable back-and-forth about angels and pinheads, these fields are quietly accumulating results that, one way or another, will give people something to build on.

Papers With Questions and Papers With Answers

I’ve found that when it comes to reading papers, there are two distinct things I look for.

Sometimes, I read a paper looking for an answer. Typically, this is a “how to” kind of answer: I’m trying to do something, and the paper I’m reading is supposed to explain how. More rarely, I’m directly using a result: the paper proved a theorem or computed a formula, and I just take it as written and use it to calculate something else. Either way, I’m seeking out the paper with a specific goal in mind, which typically means I’m reading it long after it came out.

Other times, I read a paper looking for a question. Specifically, I look for the questions the author couldn’t answer. Sometimes these are things they point out, limitations of their result or opportunities for further study. Sometimes, these are things they don’t notice, holes or patterns in their results that make me wonder “what if?” Either can be the seed of a new line of research, a problem I can solve with a new project. If I read a paper in this way, typically it just came out, and this is the first time I’ve read it. When that isn’t the case, it’s because I start out with another reason to read it: often I’m looking for an answer, only to realize the answer I need isn’t there. The missing answer then becomes my new question.

I’m curious about the balance of these two behaviors in different fields. My guess is that some fields read papers more for their answers, while others read them more for their questions. If you’re working in another field, let me know what you do in the comments!