The Real E=mc^2

It’s the most famous equation in all of physics, written on thousands of chalkboard stock photos. Part of its charm is its simplicity: E for energy, m for mass, c for the speed of light, just a few simple symbols in a one-line equation. Despite its simplicity, E=mc^2 is deep and important enough that there are books dedicated to explaining it.

What does E=mc^2 mean?

Some will tell you it means mass can be converted to energy, enabling nuclear power and the atomic bomb. This is a useful picture for chemists, who like to think about balancing ingredients: this much mass on one side, this much energy on the other. It’s not the best picture for physicists, though. It makes it sound like energy is some form of “stuff” you can pour into your chemistry set flask, and energy really isn’t like that.

There’s another story you might have heard, in older books. In that story, E=mc^2 tells you that in relativity mass, like distance and time, is relative. The more energy you have, the more mass you have. Those books will tell you that this is why you can’t go faster than light: the faster you go, the greater your mass, and the harder it is to speed up.

Modern physicists don’t talk about it that way. In fact, we don’t even write E=mc^2 that way. We’re more likely to write:

E=\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}

“v” here stands for the velocity, how fast the mass is moving. The faster the mass moves, the more energy it has. Take v to zero, and you get back the familiar E=mc^2.

The older books weren’t lying to you, but they were thinking about a different notion of mass: “relativistic mass” m_r instead of “rest mass” m_0, related like this:

m_r=\frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}

which explains the difference in how we write E=mc^2.
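To make the two pictures concrete, here’s a small numerical sketch (the function names and the 1 kg example mass are just illustrative choices, not anything from the physics literature). It computes the modern formula for the energy of a moving mass, and the old-style “relativistic mass”, and checks that they tell the same story:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def energy(m0: float, v: float) -> float:
    """Total energy of a rest mass m0 (kg) moving at speed v (m/s),
    using the modern formula E = m c^2 / sqrt(1 - v^2/c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m0 * C ** 2

def relativistic_mass(m0: float, v: float) -> float:
    """The 'relativistic mass' m_r of the older textbooks:
    m_r = m0 / sqrt(1 - v^2/c^2)."""
    return m0 / math.sqrt(1.0 - (v / C) ** 2)

m0 = 1.0  # a 1 kg rest mass, just as an example

# At v = 0, the energy reduces to the familiar E = m c^2.
print(energy(m0, 0.0))

# At 90% of light speed, the energy is larger by a factor of about 2.29...
print(energy(m0, 0.9 * C))

# ...and it equals relativistic mass times c^2, by definition:
print(relativistic_mass(m0, 0.9 * C) * C ** 2)
```

The two descriptions are the same equations, just with the factor of 1/√(1−v²/c²) filed under “energy” in one case and under “mass” in the other.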

Why the change? In part, it’s because of particle physics. In particle physics, we care about the rest mass of particles. Different particles have different rest mass: each electron has one rest mass, each top quark has another, regardless of how fast they’re going. They still get more energy, and harder to speed up, the faster they go, but we don’t describe it as a change in mass. Our equations match the old books, we just talk about them differently.

Of course, you can dig deeper, and things get stranger. You might hear that mass does change with energy, but in a very different way. You might hear that mass is energy, that they’re just two perspectives on the same thing. But those are stories for another day.

I titled this post “The Real E=mc^2”, but to clarify, none of these explanations are more “real” than the others. They’re words, useful in different situations and for different people. “The Real E=mc^2” isn’t the E=mc^2 of nuclear chemists, or old books, or modern physicists. It’s the theory itself, the mathematical rules and principles that all the rest are just trying to describe.

A Field That Doesn’t Read Its Journals

Last week, the University of California system ended negotiations with Elsevier, one of the top academic journal publishers. UC had been trying to get Elsevier to switch to a new type of contract, one in which instead of paying for access to journals they pay for their faculty to publish, then make all the results openly accessible to the public. In the end they couldn’t reach an agreement and thus didn’t renew their contract, cutting Elsevier off from millions of dollars and their faculty from reading certain (mostly recent) Elsevier journal articles. There’s a nice interview here with one of the librarians who was sent to negotiate the deal.

I’m optimistic about what UC was trying to do. Their proposal sounds like it addresses some of the concerns raised here with open-access systems. Currently, journals that offer open access often charge fees directly to the scientists publishing in them, fees that have to be scrounged up from somebody’s grant at the last minute. By setting up a deal for all their faculty together, UC would have avoided that. While the deal fell through, having an organization as big as the whole University of California system advocating open access (and putting the squeeze on Elsevier’s profits) seems like it can only lead to progress.

The whole situation feels a little surreal, though, when I compare it to my own field.

At the risk of jinxing it, my field’s relationship with journals is even weirder than xkcd says.

arXiv.org is a website that hosts what are called “preprints”, which originally meant papers that haven’t been published yet. They’re online, freely accessible to anyone who wants to read them, and will be for as long as arXiv exists to host them. Essentially everything anyone publishes in my field ends up on arXiv.

Journals don’t mind, in part, because many of them are open-access anyway. There’s an organization, SCOAP3, that runs what is in some sense a large-scale version of what UC was trying to set up: instead of paying for subscriptions, university libraries pay SCOAP3 and it covers the journals’ publication costs.

This means that there are two coexisting open-access systems, the journals themselves and arXiv. But in practice, arXiv is the one we actually use.

If I want to show a student a paper, I don’t send them to the library or the journal website, I tell them how to find it on arXiv. If I’m giving a talk, there usually isn’t room for a journal reference, so I’ll give the arXiv number instead. In a paper, we do give references to journals…but they’re most useful when they have arXiv links as well. I think the only times I’ve actually read an article in a journal were for articles so old that arXiv didn’t exist when they were published.

We still submit our papers to journals, though. Peer review still matters, we still want to determine whether our results are cool enough for the fancy journals or only good enough for the ordinary ones. We still put journal citations on our CVs so employers and grant agencies know not only what we’ve done, but which reviewers liked it.

But the actual copy-editing and formatting and publishing, that the journals still employ people to do? Mostly, it never gets read.

In my experience, that editing isn’t too impressive. Often, it’s about changing things to fit the journal’s preferences: its layout, its conventions, its inconvenient proprietary document formats. I haven’t seen them try to fix grammar, or improve phrasing. Maybe my papers have unusually good grammar, maybe they do more for other papers. And maybe they used to do more, when journals had a more central role. But now, they don’t change much.

Sometimes the journal version ends up on arXiv, if the authors put it there. Sometimes it doesn’t. And sometimes the result is in between. For my last paper about Calabi-Yau manifolds in Feynman diagrams, we got several helpful comments from the reviewers, but the journal also weighed in to get us to remove our more whimsical language, down to the word “bestiary”. For the final arXiv version, we updated for the reviewer comments, but kept the whimsical words. In practice, that version is the one people in our field will read.

This has some awkward effects. It means that sometimes important corrections don’t end up on arXiv, and people don’t see them. It means that technically, if someone wanted to insist on keeping an incorrect paper online, they could, even if a corrected version was published in a journal. And of course, it means that a large amount of effort is dedicated to publishing journal articles that very few people read.

I don’t know whether other fields could get away with this kind of system. Physics is small. It’s small enough that it’s not so hard to get corrections from authors when one needs to, small enough that social pressure can get wrong results corrected. It’s small enough that arXiv and SCOAP3 can exist, funded by universities and private foundations. A bigger field might not be able to do any of that.

For physicists, we should keep in mind that our system can and should still be improved. For other fields, it’s worth considering whether you can move in this direction, and what it would cost to do so. Academic publishing is in a pretty bizarre place right now, but hopefully we can get it to a better one.

What Science Would You Do If You Had the Time?

I know a lot of people who worry about the state of academia. They worry that the competition for grants and jobs has twisted scientists’ priorities, that the sort of dedicated research of the past, sitting down and thinking about a topic until you really understand it, just isn’t possible anymore. The timeline varies: there are people who think the last really important development was the Standard Model, or the top quark, or AdS/CFT. Even more optimistic people, who think physics is still just as great as it ever was, often complain that they don’t have enough time.

Sometimes I wonder what physics would be like if we did have the time. If we didn’t have to worry about careers and funding, what would we do? I can speculate, comparing to different communities, but here I’m interested in something more concrete: what, specifically, could we accomplish? I often hear people complain that the incentives of academia discourage deep work, but I don’t often hear examples of the kind of deep work that’s being discouraged.

So I’m going to try an experiment here. I know I have a decent number of readers who are scientists of one field or another. Imagine you didn’t have to worry about funding any more. You’ve got a permanent position, and what’s more, your favorite collaborators do too. You don’t have to care about whether your work is popular, whether it appeals to the university or the funding agencies or any of that. What would you work on? What projects would you personally do, that you don’t have the time for in the current system? What worthwhile ideas has modern academia left out?

Congratulations to Arthur Ashkin, Gérard Mourou, and Donna Strickland!

The 2018 Physics Nobel Prize was announced this week, awarded to Arthur Ashkin, Gérard Mourou, and Donna Strickland for their work in laser physics.

Some Nobel prizes recognize discoveries of the fundamental nature of reality. Others recognize the tools that make those discoveries possible.

Ashkin developed techniques that use lasers to hold small objects in place, culminating in “optical tweezers” that can pick up and move individual bacteria. Mourou and Strickland developed chirped pulse amplification, the current state of the art in extremely high-power lasers. Strickland is only the third woman to win the Nobel prize in physics, and Ashkin, at 96, is the oldest person ever to win the prize.

(As an aside, the phrase “optical tweezers” probably has you imagining two beams of laser light pinching a bacterium between them, like microscopic lightsabers. In fact, optical tweezers use a single beam, focused and bent so that if an object falls out of place it will gently roll back to the middle of the beam. Instead of tweezers, it’s really more like a tiny laser spoon.)

The Nobel announcement emphasizes practical applications, like eye surgery. It’s important to remember that these are research tools as well. I wouldn’t have recognized the names of Ashkin, Mourou, and Strickland, but I recognized atom trapping, optical tweezers, and ultrashort pulses. Hang around atomic physicists, or quantum computing experiments, and these words pop up again and again. These are essential tools that have given rise to whole subfields. LIGO won a Nobel based on the expectation that it would kick-start a vast new area of research. Ashkin, Mourou, and Strickland’s work already has.

Don’t Marry Your Arbitrary

This fall, I’m TAing a course on General Relativity. I haven’t taught in a while, so it’s been a good opportunity to reconnect with how students think.

This week, one problem left several students confused. The problem involved Christoffel symbols, the bane of many a physics grad student, but the trick that they had to use was in the end quite simple. It’s an example of a broader trick, a way of thinking about problems that comes up all across physics.

To see a simplified version of the problem, imagine you start with this sum:

g(j)=\sum_{i=0}^n ( f(i,j)-f(j,i) )

Now, imagine you want to sum the function g(j) over j. You can write:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n ( f(i,j)-f(j,i) )

Let’s break this up into two terms, for later convenience:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{j=0}^n \sum_{i=0}^n f(j,i)

Without telling you anything about f(i,j), what do you know about this sum?

Well, one thing you know is that i and j are arbitrary.

i and j are letters you happened to use. You could have used different letters: x and y, or \alpha and \beta. You could even use different letters in each term, if you wanted to. Or you could just pick one term, and swap i and j:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{i=0}^n \sum_{j=0}^n f(i,j) = 0

And now, without knowing anything about f(i,j), you know that \sum_{j=0}^n g(j) is zero.
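You can check this numerically without knowing anything about f: the sketch below (a hypothetical f built from random numbers, just for illustration) fills in arbitrary values for f(i,j) and confirms that the double sum cancels:

```python
import random

n = 5

# An arbitrary function of two indices -- the trick works for ANY f,
# so we may as well fill it with random values.
table = {(i, j): random.random()
         for i in range(n + 1) for j in range(n + 1)}

def f(i, j):
    return table[(i, j)]

def g(j):
    return sum(f(i, j) - f(j, i) for i in range(n + 1))

# Summing g(j) over j pairs every f(i, j) with a matching -f(i, j)
# once the dummy indices in the second term are relabeled.
total = sum(g(j) for j in range(n + 1))
print(total)  # zero, up to floating-point rounding
```

No matter what values f takes, relabeling the dummy indices makes the two terms identical, so their difference vanishes.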

In physics, it’s extremely important to keep track of what could be really physical, and what is merely your arbitrary choice. In general relativity, your choice of polar versus spherical coordinates shouldn’t affect your calculation. In quantum field theory, your choice of gauge shouldn’t matter, and neither should your scheme for regularizing divergences.

Ideally, you’d do your calculation without making any of those arbitrary choices: no coordinates, no choice of gauge, no regularization scheme. In practice, sometimes you can do this, sometimes you can’t. When you can’t, you need to keep that arbitrariness in the back of your mind, and not get stuck assuming your choice was the only one. If you’re careful with arbitrariness, it can be one of the most powerful tools in physics. If you’re not, you can stare at a mess of Christoffel symbols for hours, and nobody wants that.

Different Fields, Different Worlds

My grandfather is a molecular biologist. When we meet, we swap stories: the state of my field and his, different methods and focuses but often a surprising amount of common ground.

Recently he forwarded me an article by Raymond Goldstein, a biological physicist, arguing that biologists ought to be more comfortable with physical reasoning. The article is interesting in its own right, contrasting how physicists and biologists think about the relationship between models, predictions, and experiments. But what struck me most about the article wasn’t the content, but the context.

Goldstein’s article focuses on a question that seemed to me oddly myopic: should physical models be in the Results section, or the Discussion section?

As someone who has never written a paper with either a Results section or a Discussion section, I wondered why anyone would care. In my field, paper formats are fairly flexible. We usually have an Introduction and a Conclusion, yes, but in between we use however many sections we need to explain our work. In contrast, biology papers seem to have a very fixed structure: after the Introduction, there’s a Results section, a Discussion section, and a Materials and Methods section at the end.

At first blush, this seemed incredibly bizarre. Why describe your results before the methods you used to get them? How do you talk about your results without discussing them, but still take a full section to do it? And why do reviewers care how you divide things up in the first place?

It made a bit more sense once I thought about how biology differs from theoretical physics. In theoretical physics, the “methods” are most of the result: unsolved problems are usually unsolved because existing methods don’t solve them, and we need to develop new methods to make progress. Our “methods”, in turn, are often the part of the paper experts are most eager to read. In biology, in contrast, the methods are much more standardized. While papers will occasionally introduce new methods, there are so many unexplored biological phenomena that most of the time researchers don’t need to invent a new method: just asking a question no-one else has asked can be enough for a discovery. In that environment, the “results” matter a lot more: they’re the part that takes the most scrutiny, that needs to stand up on its own.

I can even understand the need for a fixed structure. Biology is a much bigger field than theoretical physics. My field is small enough that we all pretty much know each other. If a paper is hard to read, we’ll probably get a chance to ask the author what they meant. Biology, in contrast, is huge. An important result could come from anywhere, and anyone. Having a standardized format makes it a lot easier to scan through an unfamiliar paper and find what you need, especially when there might be hundreds of relevant papers.

The problem with a standardized system, as always, is the existence of exceptions. A more “physics-like” biology paper is more readable with “physics-like” conventions, even if the rest of the field needs to stay “biology-like”. Because of that, I have a lot of sympathy for Goldstein’s argument, but I can’t help but feel that he should be asking for more. If creating new mathematical models and refining them with observation is at the heart of what Goldstein is doing, then maybe he shouldn’t have to use Results/Discussion/Methods in the first place. Maybe he should be allowed to write biology papers that look more like physics papers.

Conferences Are Work! Who Knew?

I’ve been traveling for over a month now, from conference to conference, with a bit of vacation thrown in at the end.

(As such, I haven’t had time to read up on the recent announcement of the detection of neutrinos and high-energy photons from a blazar, Matt Strassler has a nice piece on it.)

One thing I didn’t expect was how exhausting going to three conferences in a row would be. I didn’t give any talks this time around, so I thought I was skipping the “work” part. But sitting in a room for talk after talk, listening and taking notes, turns out to still be work! There’s effort involved in paying attention, especially in a scientific talk where the details matter. You assess the talks in your head, turning concepts around and thinking about what you might do with them. It’s the kind of thing you don’t notice for a seminar or two, but at a conference, after a while, it really builds up. After three, let’s just say I’ve really needed this vacation. I’ll be back at work next week, and maybe I’ll have a longer blog post for you folks. Until then, I ought to get some rest!