Tag Archives: electron

These Ain’t Democritus’s Particles

Physicists talk a lot about fundamental particles. But what do we mean by fundamental?

The Ancient Greek philosopher Democritus thought the world was composed of fundamental indivisible objects, constantly in motion. He called these objects “atoms”, and believed they could never be created or destroyed, with every other phenomenon explained by different types of interlocking atoms.

The things we call atoms today aren’t really like this, as you probably know. Atoms aren’t indivisible: their electrons can be split from their nuclei, and with more energy their nuclei can be split into protons and neutrons. More energy yet, and protons and neutrons can in turn be split into quarks. Still, at this point you might wonder: could quarks be Democritus’s atoms?

In a word, no. Nonetheless, quarks are, as far as we know, fundamental particles. As it turns out, our “fundamental” is very different from Democritus’s. Our fundamental particles can transform.

Think about beta decay. You might be used to thinking of it in terms of protons and neutrons: an unstable neutron decays, becoming a proton, an electron, and an (electron-anti-)neutrino. You might think that when the neutron decays, it literally “decays”, falling apart into smaller pieces.

But when you look at the quarks, the neutron’s smallest pieces, that isn’t the picture at all. In beta decay, a down quark in the neutron changes, turning into an up quark and an unstable W boson. The W boson then decays into an electron and an antineutrino, while the up quark becomes part of the new proton. Even looking at the most fundamental particles we know, Democritus’s picture of unchanging atoms just isn’t true.

Could there be some even lower level of reality that works the way Democritus imagined? It’s not impossible. But the key insight of modern particle physics is that there doesn’t need to be.

As far as we know, up quarks and down quarks are both fundamental. Neither is “made of” the other, or “made of” anything else. But they also aren’t little round indestructible balls. They’re manifestations of quantum fields, “ripples” that slosh from one sort to another in complicated ways.

When we ask which particles are fundamental, we’re asking what quantum fields we need to describe reality. We’re asking for the simplest explanation, the simplest mathematical model, that’s consistent with everything we could observe. So “fundamental” doesn’t end up meaning indivisible, or unchanging. It’s fundamental like an axiom: used to derive the rest.

The Theorist Exclusion Principle

There are a lot of people who think theoretical physics has gone off-track, though very few of them agree on exactly how. Some think that string theory as a whole is a waste of time, others that the field just needs to pay more attention to their preferred idea. Some think we aren’t paying enough attention to the big questions, or that we’re too focused on “safe” ideas like supersymmetry, even when they aren’t working out. Some think the field needs less focus on mathematics, while others think it needs even more.

Usually, people act on these opinions by writing strongly worded articles and blog posts. Sometimes, they have more power, and act with money, creating grants and prizes that only go to their preferred areas of research.

Let’s put the question of whether the field actually needs to change aside for the moment. Even if it does, I’m skeptical that this sort of thing will have any real effect. While grants and blogs may be very good at swaying experimentalists, theorists are likely to be harder to shift, due to what I’m going to call the Theorist Exclusion Principle.

The Pauli Exclusion Principle is a rule from quantum mechanics that states that two fermions (particles with half-integer spin) can’t occupy the same state. Fermions include electrons, quarks, protons…essentially, all the particles that make up matter. Many people learn about the Pauli Exclusion Principle first in a chemistry class, where it explains why electrons fall into different energy levels in atoms: once one energy level “fills up”, no more electrons can occupy the same state, and any additional electrons are “excluded” and must occupy a different energy level.

Those 1s electrons are such a clique!

In contrast, bosons (like photons, or the Higgs) can all occupy the same state. It’s what allows for things like lasers, and it’s why all the matter we’re used to is made out of fermions: because fermions can’t occupy the same state as each other, as you add more fermions the structures they form have to become more and more complicated.

Experimentalists are a little like bosons. While you can’t stuff two experimentalists into the same quantum state, you can get them working on very similar projects. They can form large collaborations, with each additional researcher making the experiment that much easier. They can replicate each other’s work, making sure it was accurate. They can take some physical phenomenon and subject it to a battery of tests, so that someone is bound to learn something.

Theorists, on the other hand, are much more like fermions. In theory, there’s very little reason to work on something that someone else is already doing. Replication doesn’t mean very much: the purest theory involves mathematical proofs, where replication is essentially pointless. Theorists do form collaborations, but they don’t have the same need for armies of technicians and grad students that experimentalists do. With no physical objects to work on, there’s a limit to how much can be done pursuing one particular problem, and if there really are a lot of options they can be pursued by one person with a cluster.

Like fermions, then, theorists expand to fill the projects available. If an idea is viable, someone will probably work on it, and once they do, there isn’t much reason for someone else to do the same thing.

This makes theory a lot harder to influence than experiment. You can write the most beautiful thinkpiece possible to persuade theorists to study the deep questions of the universe, but if there aren’t any real calculations available nothing will change. Contrary to public perception, theoretical physicists aren’t paid to just sit around thinking all day: we calculate, compute, and publish, and if a topic doesn’t lend itself to that then we won’t get much mileage out of it. And no matter what you try to preferentially fund with grants, mostly you’ll just get people re-branding what they’re already doing, shifting a few superficial details to qualify.

Theorists won’t occupy the same states, so if you want to influence theorists you need to make sure there are open states where you’re trying to get them to go. Historically, theorists have shifted when new states have opened up: new data from experiment that needed a novel explanation, new mathematical concepts that opened up new types of calculations. You want there to be fewer string theorists, or more focus on the deep questions? Give us something concrete to do, and I guarantee you’ll get theorists flooding in.

Journalists Are Terrible at Quasiparticles

TerribleQuasiparticleHeadline

No, they haven’t, and no, that’s not what they found, and no, that doesn’t make sense.

Quantum field theory is how we understand particle physics. Each fundamental particle comes from a quantum field, a law of nature in its own right extending across space and time. That’s why it’s so momentous when we detect a fundamental particle, like the Higgs, for the first time, why it’s not just like discovering a new species of plant.

That’s not the only thing quantum field theory is used for, though. Quantum field theory is also enormously important in condensed matter and solid state physics, the study of properties of materials.

When studying materials, you generally don’t want to start with fundamental particles. Instead, you usually want to think about overall properties, the ways the material as a whole can move and change. If you want to understand the quantum properties of these changes, you end up describing them the same way particle physicists talk about fundamental fields: you use quantum field theory.

In particle physics, particles come from vibrations in fields. In condensed matter, your fields are general properties of the material, but they can also vibrate, and these vibrations give rise to quasiparticles.

Probably the simplest examples of quasiparticles are the “holes” in semiconductors. Semiconductors are materials used to make transistors. They can be “doped” with extra slots for electrons. Electrons in the semiconductor will move around from slot to slot. When an electron moves, though, you can just as easily think about it as a “hole”, an empty slot, that “moved” backwards. As it turns out, thinking about electrons and holes independently makes understanding semiconductors a lot easier, and the same applies to other types of quasiparticles in other materials.
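To make the electron-and-hole picture concrete, here’s a toy illustration in Python. It’s purely bookkeeping with a list, not real solid-state physics: the “slots” and indices are invented for the example.

```python
# Toy picture of a doped semiconductor: 1 = slot holding an electron, 0 = empty slot (a "hole").
slots = [1, 1, 0, 1, 1]
print("hole at index", slots.index(0))   # the hole starts at index 2

# An electron hops one slot to the left, into the empty slot...
slots[2], slots[3] = slots[3], slots[2]
print("hole at index", slots.index(0))   # ...which looks exactly like the hole moving one slot to the right
```

Keeping track of the single 0 is much easier than keeping track of all the 1s, which is essentially why the hole picture simplifies things.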

Unfortunately, the article I linked above is pretty impressively terrible, and communicates precisely none of that.

The problem starts in the headline:

Scientists have finally discovered massless particles, and they could revolutionise electronics

Scientists have finally discovered massless particles, eh? So we haven’t seen any massless particles before? You can’t think of even one?

After 85 years of searching, researchers have confirmed the existence of a massless particle called the Weyl fermion for the first time ever. With the unique ability to behave as both matter and anti-matter inside a crystal, this strange particle can create electrons that have no mass.

Ah, so it’s a massless fermion, I see. Well indeed, there are no known fundamental massless fermions, not since we discovered neutrinos have mass anyway. The statement that these things “create electrons” of any sort is utter nonsense, however, let alone that they create electrons that themselves have no mass.

Electrons are the backbone of today’s electronics, and while they carry charge pretty well, they also have the tendency to bounce into each other and scatter, losing energy and producing heat. But back in 1929, a German physicist called Hermann Weyl theorised that a massless fermion must exist, that could carry charge far more efficiently than regular electrons.

Ok, no. Just no.

The problem here is that this particular journalist doesn’t understand the difference between pure theory and phenomenology. Weyl didn’t theorize that a massless fermion “must exist”, nor did he say anything about their ability to carry charge. Weyl described, mathematically, how a massless fermion could behave. Weyl fermions aren’t some proposed new fundamental particle, like the Higgs boson: they’re a general type of particle. For a while, people thought that neutrinos were Weyl fermions, before it was discovered that they had mass. What we’re seeing here isn’t some ultimate experimental vindication of Weyl, it’s just an old mathematical structure that’s been duplicated in a new material.

What’s particularly cool about the discovery is that the researchers found the Weyl fermion in a synthetic crystal in the lab, unlike most other particle discoveries, such as the famous Higgs boson, which are only observed in the aftermath of particle collisions. This means that the research is easily reproducible, and scientists will be able to immediately begin figuring out how to use the Weyl fermion in electronics.

Arrgh!

Fundamental particles from particle physics, like the Higgs boson, and quasiparticles, like this particular Weyl fermion, are completely different things! Comparing them like this, as if this is some new efficient trick that could have been used to discover the Higgs, just needlessly confuses people.

Weyl fermions are what’s known as quasiparticles, which means they can only exist in a solid such as a crystal, and not as standalone particles. But further research will help scientists work out just how useful they could be. “The physics of the Weyl fermion are so strange, there could be many things that arise from this particle that we’re just not capable of imagining now,” said Hasan.

In the very last paragraph, the author finally mentions quasiparticles. There’s no mention of the fact that they’re more like waves in the material than like fundamental particles, though. This description makes it sound like they’re just particles that happen to chill inside crystals, like they’re agoraphobic or something.

What the scientists involved here actually discovered is probably quite interesting. They’ve discovered a new sort of ripple in the material they studied. The ripple can carry charge, and because it can behave like a massless particle it can carry charge much faster than electrons can. (To get a basic idea as to how this works, think about waves in the ocean. You can have a wave that goes much faster than the ocean’s current. As the wave travels, no actual water molecules travel from one side to the other. Instead, it is the motion that travels, the energy pushing the wave up and down being transferred along.)

There’s no reason to compare this to particle physics, to make it sound like another Higgs boson. This sort of thing dilutes the excitement of actual particle discoveries, perpetuating the misconception of particles as just more species to find and catalog. Furthermore, it’s just completely unnecessary: condensed matter is a very exciting field, one that the majority of physicists work on. It doesn’t need to ride on the coat-tails of particle physics rhetoric in order to capture people’s attention. I’ve seen journalists do this kind of thing before, comparing new quasiparticles and composite particles with fundamental particles like the Higgs, and every time I cringe. Don’t you have any respect for the subject you’re writing about?

Want to Make Something New? Just Turn on the Lights.

Isn’t it weird that you can collide two protons, and get something else?

It wouldn’t be so weird if you collided two protons, and out popped a quark. After all, protons are made of quarks. But how, if you collide two protons together, do you get a tau, or the Higgs boson: things that not only aren’t “part of” protons, but are more massive than a proton by themselves?

It seems weird…but in a way, it’s not. When a particle releases another particle that wasn’t inside it to begin with, it’s actually not doing anything more special than an everyday light bulb.

Eureka!

How does a light bulb work?

You probably know the basics: when an electrical current enters the bulb, the electrons in the filament start to move. They heat the filament up, releasing light.

That probably seems perfectly ordinary. But ask yourself for a moment: where did the light come from?

Light is made up of photons, elementary particles in their own right. When you flip a light switch, where do the photons come from? Were they stored in the light bulb?

Silly question, right? You don’t need to “store” light in a light bulb: light bulbs transform one type of energy (electrical, or the movement of electrons) into another type of energy (light, or photons).

Here’s the thing, though: mass is just another type of energy.

I like to describe mass as “energy we haven’t met yet”. Einstein’s equation, E=mc^2, relates a particle’s mass to its “rest energy”, the energy it would have if it stopped moving around and sat still. Even when a particle seems to be sitting still from the outside, there’s still a lot going on, though. “Composite” particles like protons have powerful forces between their internal quarks, while particles like electrons interact with the Higgs field. These processes give the particle energy, even when it’s not moving, so from our perspective on the outside they’re giving the particle mass.

What does that mean for the protons at the LHC?

The protons at the LHC have a lot of kinetic energy: they’re going 99.9999991% of the speed of light! When they collide, all that energy has to go somewhere. Just like in a light bulb, the fast-moving particles will release their energy in another form. And while some of that energy will add to the speed of the fragments, much of it will go into the mass and energy of new particles. Some of these particles will be photons, some will be tau leptons, or Higgs bosons…pretty much anything that the protons have enough energy to create.
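For a rough feel for the numbers, here’s a back-of-the-envelope sketch in Python. The speed is the one quoted above; the proton rest energy (about 0.938 GeV) and the Higgs mass (about 125 GeV) are standard values, and the comparison is meant only as an order-of-magnitude illustration.

```python
import math

beta = 0.999999991                  # LHC proton speed, as a fraction of the speed of light
gamma = 1 / math.sqrt(1 - beta**2)  # relativistic (Lorentz) factor

proton_rest_energy_GeV = 0.938      # the proton's rest energy, E = mc^2
higgs_mass_GeV = 125.0              # the Higgs boson's rest energy, for comparison

proton_total_energy_GeV = gamma * proton_rest_energy_GeV

print(f"Lorentz factor: {gamma:.0f}")                                  # roughly 7500
print(f"Energy per proton: {proton_total_energy_GeV / 1000:.1f} TeV")  # roughly 7 TeV
print(f"Higgs rest energies per proton: {proton_total_energy_GeV / higgs_mass_GeV:.0f}")  # dozens
```

Each proton carries thousands of times its own rest energy, so a collision has far more than enough energy to pay the “mass cost” of a tau or a Higgs boson.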

So if you want to understand how to create new particles, you don’t need a deep understanding of the mysteries of quantum field theory. Just turn on the lights.

“China” plans super collider

When I saw the headline, I was excited.

“China plans super collider” says Nature News.

There’s been a lot of worry about what may happen if the Large Hadron Collider finishes its run without discovering anything truly new. If that happens, finding new particles might require a much bigger machine…and since even that machine has no guarantee of finding anything at all, world governments may be understandably reluctant to fund it.

As such, several prominent people in the physics community have put their hopes on China. The country’s somewhat autocratic nature means that getting funding for a collider is a matter of convincing a few powerful people, not a whole fractious gaggle of legislators. It’s a cynical choice, but if it keeps the field alive so be it.

If China was planning a super collider, then, that would be great news!

Too bad it’s not.

Buried eight paragraphs into Nature’s article, we find the following:

The Chinese government is yet to agree on any funding, but growing economic confidence in the country has led its scientists to believe that the political climate is ripe, says Nick Walker, an accelerator physicist at DESY, Germany’s high-energy physics laboratory in Hamburg. Although some technical issues remain, such as keeping down the power demands of an energy-hungry ring, none are major, he adds.

The Chinese government is yet to agree on any funding. China, if by China you mean the Chinese government, is not planning a super collider.

So who is?

Someone must have drawn these diagrams, after all.

Reading the article, the most obvious answer is Beijing’s Institute of High Energy Physics (IHEP). While this is true, the article leaves out any mention of a more recently founded site, the Center for Future High Energy Physics (CFHEP).

This is a bit odd, given that CFHEP’s whole purpose is to compose a plan for the next generation of colliders, and persuade China’s government to implement it. It was founded, with heavy involvement from non-Chinese physicists, including its director Nima Arkani-Hamed, with that express purpose in mind. And since several of the quotes in the article come from Yifang Wang, director of IHEP and member of the advisory board of CFHEP, it’s highly unlikely that this isn’t CFHEP’s plan.

So what’s going on here? On one level, it could be a problem on the journalists’ side. News editors love to rewrite headlines to be more misleading and click-bait-y, and claiming that China is definitely going to build a collider draws much more attention than pointing out the plans of a specialized think tank. I hope that it’s just something like that, and not the sort of casual racism that likes to think of China as a single united will. Similarly, I hope that the journalists involved just didn’t dig deep enough to hear about CFHEP, or left it out to simplify things, because there is a somewhat darker alternative.

CFHEP’s goal is to convince the Chinese government to build a collider, and what better way to do that than to present them with a fait accompli? If the public thinks that this is “China’s” plan, that wheels are already in motion, wouldn’t it benefit the Chinese government to play along? Throw in a few sweet words about the merits of international collaboration (a big part of the strategy of CFHEP is to bring international scientists to China to show the sort of community a collider could attract) and you’ve got a winning argument, or at least enough plausibility to get US and European funding agencies in a competitive mood.

This…is probably more cynical than what’s actually going on. For one, I don’t even know whether this sort of tactic would work.

Do these guys look like devious manipulators?

Indeed, it might just be a journalistic omission, part of a wider tendency of science journalists to focus on big projects and ignore the interesting part, the nitty-gritty things that people do to push them forward. It’s a shame, because people are what drive the news forward, and as long as science is viewed as something apart from real human beings people are going to continue to mistrust and misunderstand it.

Either way, one thing is clear. The public deserves to hear a lot more about CFHEP.

What’s so hard about Quantum Field Theory anyway?

As I have mentioned before, a theory in theoretical physics can be described as a list of quantum fields and the ways in which they interact. It turns out this is all you need to start drawing Feynman Diagrams.

Feynman Diagrams are tools physicists use to calculate the probability of things happening: radioactive particles decaying, protons colliding, electrons changing course in a magnetic field…basically anything small enough that quantum mechanics is important. Each Feynman Diagram depicts the paths that a group of particles take over time, interacting as they go. It’s important to remember, however, that Feynman Diagrams are not literally what’s going on: rather, they are tools for calculation.

To start making a Feynman Diagram, think about what needs to be present in order to start whatever process you’re investigating. For the examples given above, this means a radioactive particle, two protons, and an electron and a magnetic field, respectively. For each particle or field that you start out with, draw a line on the left of the diagram.

4gravincoming

If you’re making a Feynman Diagram, you’re looking for the probability of some particular outcome. Draw lines corresponding to the particles and fields in that outcome on the right of the diagram. For example, if you were looking at a radioactive decay, you’d want the new particles the original particle decayed into. For an electron moving in a magnetic field, you want the electron’s new path.

4gravoutgoing

Now come the interactions. Each way that the particles and fields can interact is a potential way that lines can come together. For example, electrons are affected by the photons that make up electric and magnetic fields. Specifically, an electron can absorb a photon, changing its path. This gives us an interaction: an electron and a photon go in, and an electron comes out.

4gravinteraction

You’ve got the basic building blocks: particles as lines, and interactions where the lines come together. Now, just link them all up! Something like this:

4gravclassical

Then again, you could also do it like this:

4gravanom

Or this:

4grav2loop

Or this:

4gravcomplicated

You get the idea. To use these diagrams, a physicist assigns a number to each line and each interaction, depending on various traits of the particles involved including their energy and angles of travel. For each diagram, all these numbers are multiplied together. Then, because in quantum mechanics every possible event has to be included, you add up all the numbers from all of the diagrams. Every single one.

Not just the simple diagram on the top, but also the more complicated one below it, and the one below that, and every way you could possibly link up all of the particles going in and coming out, each more and more complicated. An infinite list of diagrams. Only by adding all of those diagrams together can a physicist find the true, complete probability of a quantum event.
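As a cartoon of that bookkeeping, here’s the multiply-then-sum structure in Python. The numbers are invented purely to show the shape of the calculation; they don’t correspond to any real process.

```python
# Each diagram becomes a list of factors: one number per line and per interaction point.
# (These values are made up, just to show the structure.)
diagrams = [
    [0.9, 0.8, 0.1],             # a simple diagram: a few lines, one interaction
    [0.9, 0.8, 0.1, 0.1, 0.05],  # a more complicated diagram: more lines, more interactions
    # ...in principle this list never ends: every possible diagram contributes
]

def diagram_value(factors):
    value = 1.0
    for factor in factors:
        value *= factor          # multiply together the numbers for every line and interaction
    return value

total = sum(diagram_value(d) for d in diagrams)  # then add up the contribution from every diagram
print(total)
```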

Adding an infinite set of increasingly complicated diagrams is tricky. By tricky, I mean nearly impossible in practice, and so insane in principle that mathematicians aren’t even sure the sum has any real meaning.

Because of this, everything that physicists calculate is an approximation. This approximation is possible because each interaction multiplies the total for a diagram by a “small” number, which gets smaller the weaker the force involved, from around 1/2 for the strong nuclear force to about 1/12 for electricity and magnetism. If you limit the number of points of interaction, you limit the number of possible diagrams. For our example, limiting things to one point of interaction gives only the first diagram. If you allow up to three points, you get the second diagram, and so on. Each time you add two more interactions, your diagram gets another loop, and its contribution to the total gets smaller. For a force as weak as electricity and magnetism, just four loops gets you all but a billionth of the total, which is about as accurate as the experiments are anyway.
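Here’s a rough counting argument for why that truncation works, in Python, using the numbers above. It’s only an estimate of sizes, not a real Feynman diagram calculation, and it assumes each extra loop simply costs two more interaction points.

```python
per_interaction = 1 / 12         # rough "small number" per interaction for electricity and magnetism
per_loop = per_interaction ** 2  # each extra loop adds two more interactions

for loops in range(1, 5):
    size = per_loop ** loops     # rough size of this loop order, relative to the simplest diagram
    print(f"{loops}-loop diagrams: roughly {size:.0e} of the leading term")
```

By four loops, the terms are down around a billionth of the leading contribution, which is why stopping there is usually good enough to match experiment.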

What this means, though, is that we’re only at the very edge of a vast ocean of knowledge. We know the rules, the laws of physics if you will, but we can only tiptoe loop by loop towards the full formulas, sitting infinitely far away.

That, in essence, is what I work on. I look for patterns in the numbers, tricks in the calculation, ways to yank ourselves up by our bootstraps to higher and higher loops, and maybe, just maybe, for a shortcut up to infinity.

Because just because we know the rules doesn’t mean we know how the game is played.

That’s Quantum Field Theory.