The Undefinable

If I can teach one lesson to all of you, it’s this: be precise. In physics, we try to state what we mean as precisely as we can. If we can’t state something precisely, that’s a clue: maybe what we’re trying to state doesn’t actually make sense.

Someone recently reached out to me with a question about black holes. He was confused about how they were described, about what would happen when you fall into one versus what we could see from outside. Part of his confusion boiled down to a question: “is the center really an infinitely small point?”

I remembered a commenter a while back who had something interesting to say about this. Trying to remind myself of the details, I dug up this question on Physics Stack Exchange. user4552 has a detailed, well-referenced answer, with subtleties of General Relativity that go significantly beyond what I learned in grad school.

According to user4552, the reason this question is confusing is that the usual setup of general relativity cannot answer it. In general relativity, singularities like the singularity in the middle of a black hole aren’t treated as points, or collections of points: they’re not part of space-time at all. So you can’t count their dimensions, you can’t see whether they’re “really” infinitely small points, or surfaces, or lines…

This might surprise people (like me) who have experience with simpler equations for these things, like the Schwarzschild metric. The Schwarzschild metric describes space-time around a black hole, and in the usual coordinates it sure looks like the singularity is at a single point where r=0, just like the point where r=0 is a single point in polar coordinates in flat space. The thing is, though, that’s just one choice of coordinates. You can re-write a metric in many different sorts of coordinates, and the singularity in the center of a black hole might look very different in those coordinates. In general relativity, you need to stick to things you can say independent of coordinates.
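To see what those coordinates look like, here is the Schwarzschild metric in its usual coordinates, a standard textbook formula (in units where G = c = 1):

```latex
ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
       + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)
```

The apparent blow-up at r = 2M is a famous coordinate artifact: it goes away if you switch to, say, Eddington–Finkelstein coordinates. What happens as r → 0 is different. There, a coordinate-independent quantity, the curvature invariant R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} = 48 M^2 / r^6, diverges, and that is the kind of statement general relativity can make without picking coordinates.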

Ok, you might say, so the usual mathematics can’t answer the question. Can we use more unusual mathematics? If our definition of dimensions doesn’t tell us whether the singularity is a point, maybe we just need a new definition!

According to user4552, people have tried this…and it only sort of works. There are several different ways you could define the dimension of a singularity. They all seem reasonable in one way or another. But they give different answers! Some say they’re points, some say they’re three-dimensional. And crucially, there’s no obvious reason why one definition is “right”. The question we started with, “is the center really an infinitely small point?”, looked like a perfectly reasonable question, but it actually wasn’t: the question wasn’t precise enough.

This is the real problem. The problem isn’t that our question was undefined: after all, we can always add new definitions. The problem was that our question didn’t specify clearly enough which definitions we needed. That is why the question doesn’t have an answer.

Once you understand the difference, you see these kinds of questions everywhere. If you’re baffled by how mass could have come out of the Big Bang, or how black holes could radiate particles in Hawking radiation, maybe you’ve heard a physicist say that energy isn’t always conserved. Energy conservation is a consequence of symmetry, specifically, symmetry in time. If your space-time itself isn’t symmetric (the expanding universe making the past different from the future, a collapsing star making a black hole), then you shouldn’t expect energy to be conserved.
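The link between symmetry and energy conservation can be stated precisely with a standard textbook derivation (Noether’s theorem in its simplest mechanical form, nothing specific to this post). For a system with Lagrangian L(q, q̇, t), the energy and its rate of change are

```latex
E = \dot{q} \, \frac{\partial L}{\partial \dot{q}} - L ,
\qquad
\frac{dE}{dt} = -\frac{\partial L}{\partial t} ,
```

where the second equation follows from the equations of motion. If L has no explicit time dependence, energy is conserved. In an expanding universe, the time-dependent metric enters the Lagrangian explicitly, so ∂L/∂t ≠ 0 and there is no reason for E to stay constant.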

I sometimes hear people object to this. They ask, is it really true that energy isn’t conserved when space-time isn’t symmetric? Shouldn’t we just say that space-time itself contains energy?

And well yes, you can say that, if you want. It isn’t part of the usual definition, but you can make a new definition, one that gives energy to space-time. In fact, you can make more than one new definition…and like the situation with the singularity, these definitions don’t always agree! Once again, you asked a question you thought was sensible, but it wasn’t precise enough to have a definite answer.

Keep your eye out for these kinds of questions. If scientists seem to avoid answering the question you want, and keep answering a different question instead…it might be that their question is the only one with a precise answer. You can define a method to answer your question, sure…but it won’t be the only way. You need to ask precise enough questions to get good answers.

Gateway Hobbies

When biologists tell stories of their childhoods, they’re full of trails of ants and fireflies in jars. Lots of writers start young, telling stories on the playground and making skits with their friends. And the mere existence of “chemistry sets” tells you exactly how many chemists get started. Many fields have these “gateway hobbies”, like gateway drugs for careers, ways that children and teenagers get hooked and gain experience.

Physics is a little different, though. While kids can play with magnets and electricity, there aren’t a whole lot of other “physics hobbies”, especially for esoteric corners like particle physics. Instead, the “gateway hobbies” of physics are more varied, drawing from many different fields.

First, of course, even if a child can’t “do physics”, they can always read about it. Kids will memorize the names of quarks, read about black holes, or watch documentaries about string theory. I’m not counting this as a “physics hobby” because it isn’t really: physics isn’t a collection of isolated facts, but of equations: frameworks you can use to make predictions. Reading about the Big Bang is a good way to get motivated and excited, it’s a great thing to do…but it doesn’t prepare you for the “science part” of the science.

A few efforts at physics popularization get a bit more hands-on. Many come in the form of video games. You can get experience with relativity through Velocity Raptor, quantum mechanics through Quantum Chess, or orbital mechanics through Kerbal Space Program. All of these get a bit closer to “doing physics” rather than merely reading about it.

One can always gain experience in other fields, and that can be surprisingly relevant. Playing around with a chemistry set gives first-hand experience of the kinds of things that motivated quantum mechanics, and some things that still motivate condensed matter research. Circuits are physics, more directly, even if they’re also engineering: and for some physicists, designing electronic sensors is a huge part of what they do.

Astronomy has a special place, both in the history of physics and the pantheon of hobbies. There’s a huge amateur astronomy community, one that both makes real discoveries and reaches out to kids of all ages. Many physicists got their start looking at the heavens, using it, as Newton’s contemporaries did, as a first glimpse into the mechanisms of nature.

More and more research in physics involves at least some programming, and programming is another activity kids have access to in spades, from Logo to robotics competitions. Learning how to program isn’t just an important skill: it’s also a way for young people to experience a world bound by clear laws and logic, another motivation to study physics.

Of course, if you’re interested in rules and logic, why not go all the way? Plenty of physicists grew up doing math competitions. I have fond memories of Oregon’s Pentagames, and the more “serious” activities go all the way up to the famously challenging Putnam Competition.

Finally, there are physics competitions too, at least in the form of the International Physics Olympiad, where high school students compete in physics prowess.

Not every physicist did these sorts of things, of course: some got hooked later. Others did more than one. A friend of mine who’s always been “Mr. Science” got almost the whole package, with a youth spent exploring the wild west of the early internet, working at a planetarium, and discovering just how easy it is to get legal access to dangerous and radioactive chemicals. There are many paths into physics, so even if kids can’t “do physics” the same way they “do chemistry”, there’s still plenty to do!

Keeping It Colloquial

In the corners of academia where I hang out, a colloquium is a special kind of talk. Most talks we give are part of weekly seminars for specific groups. For example, the theoretical particle physicists here have a seminar. Each week we invite a speaker, who gives a talk on their recent work. Since they expect an audience of theoretical particle physicists, they can go into more detail.

A colloquium isn’t like that. Colloquia are talks for the whole department: theorists and experimentalists, particle physicists and biophysicists. They’re more prestigious, for big famous professors (or sometimes, for professors interviewing for jobs…). The different audience, and different context, means that the talk plays by different rules.

Recently, I saw a conference full of “colloquium-style” talks, trying to play by these rules. Some succeeded, some didn’t…and I think I now have a better idea of how those rules work.

First, in a colloquium, you’re not just speaking for yourself. You’re an ambassador for your field. For some of the audience, this might be the first time they’ve heard a talk by someone who does your kind of research. You want to give them a good impression, not just about you, but about the whole topic. So while you definitely want to mention your own work, you want to tell a full story, one that gives more than a glimpse of what others are doing as well.

Second, you want to connect to something the audience already knows. With an audience of physicists, you can assume a certain baseline, but not much more than that. You need to make the beginning accessible and start with something familiar. For the conference I mentioned, a talk that did this well was the talk on exoplanets, which started with the familiar planets of the solar system, classifying them in order to show what you might expect exoplanets to look like. In contrast, ’t Hooft’s talk did this poorly. His work exploits a loophole in a quantum-mechanical argument called Bell’s theorem, which most physicists have heard of. Instead of mentioning Bell’s theorem, he referred vaguely to “criteria from philosophers”, and even that only near the end of the talk; he started instead with properties of quantum mechanics his audience was much less familiar with.

Moving on, then, you want to present a mystery. So far, everything in the talk has made sense, and your audience feels like they understand. Now, you show them something that doesn’t fit, something their familiar model can’t accommodate. This activates your audience’s scientist instincts: they’re curious now, they want to know the answer. A good example from the conference was a talk on chemistry in space. The speaker emphasized that we can see evidence of complex molecules in space, but that space dust is so absurdly dilute that it seems impossible such molecules could form: two atoms could go a billion years without meeting each other.

You can’t just leave your audience mystified, though. You next have to solve the mystery. Ideally, your solution will be something smart, but simple: something your audience can intuitively understand. This has two benefits. First, it makes you look smart: you described a mysterious problem, and then you show how to solve it! Second, it makes the audience feel smart: they felt the problem was hard, but now they understand how to solve it too. The audience will have good feelings about you as a result, and good feelings about the topic: in some sense, you’ve tied a piece of their self-esteem to knowing the solution to your problem. This was well-done by the speaker discussing space chemistry, who explained that the solution was chemistry on surfaces: if two atoms are on the surface of a dust grain or meteorite, they’re much more likely to react. It was also well-done by a speaker discussing models of diseases like diabetes: he explained the challenge of controlling processes with cells, when cells replicate exponentially, and showed one way they could be controlled, when the immune system kills off any cells that replicate much faster than their neighbors. (He also played the guitar to immune system-themed songs…also a good colloquium strategy for those who can pull it off!)

Finally, a picture is worth a thousand words, as long as it’s a clear one. For an audience that won’t follow most of your equations, it’s crucial to show them something visual: graphics, puns, pictures of equipment or graphs. Crucially, though, your graphics should be something the audience can understand. If you put up a graph with a lot of incomprehensible detail (parameters you haven’t explained, or a setup your audience doesn’t get), then your audience gets stuck. Much like an unfamiliar word, a mysterious graph will have members of the audience scratching their heads, trying to figure out what it means. They’ll be so busy trying, they’ll miss what you say next, and you’ll lose them! So yes, put in graphs, put in pictures: but make sure that the ones you use, you have time to explain.

Answering Questions: Virtue or Compulsion?

I was talking to a colleague about this blog. I mentioned worries I’ve had about email conversations with readers: worries about whether I’m communicating well, whether the readers are really understanding. For the colleague though, something else stood out:

“You sure are generous with your time.”

Am I?

I’d never really thought about it that way before. It’s not like I drop everything to respond to a comment, or a message. I leave myself a reminder, and get to it when I have time. To the extent that I have a time budget, I don’t spend it freely, I prioritize work before chatting with my readers, as nice as you folks are.

At the same time, though, I think my colleague was getting at a real difference there. It’s true that I don’t answer questions right away. But I do answer them eventually. I can’t imagine being asked a question, and just never answering it.

There are exceptions, of course. If you’re obviously just trolling, just insulting me or messing with me or asking the same question over and over, yeah I’ll skip your question. And if I don’t understand what you’re asking, there’s only so much effort I’m going to put in to try to decipher it. Even in those cases, though, I have a certain amount of regret. I have to take a deep breath and tell myself no, I can really skip this one.

On the one hand, this feels like a moral obligation, a kind of intellectual virtue. If knowledge, truth, information are good regardless of anything else, then answering questions is just straightforwardly good. People ought to know more, asking questions is how you learn, and that can’t work unless we’re willing to teach. Even if there’s something you need to keep secret, you can at least say something, if only to explain why you can’t answer. Just leaving a question hanging feels like something bad people do.

On the other hand, I think this might just be a compulsion, a weird quirk of my personality. It may even be more bad than good, an urge that makes me “waste my time”, or makes me too preoccupied with what others say, drafting responses in my head until I find release by writing them down. I think others are much more comfortable just letting a question lie, and moving on. It feels a bit like the urge to have the last word in a conversation, just more specific: if someone asks me to have the last word, I feel like I have to oblige!

I know this has to have its limits. The more famous bloggers get so many questions they can’t possibly respond to all of them. I’ve seen how people like Neil Gaiman describe responding to questions on tumblr, just opening a giant pile of unread messages, picking a few near the top, and then going back to their day. I can barely stand leaving unread messages in my email. If I got that famous, I don’t know how I’d deal with that. But I’d probably figure something out.

Am I too generous with you guys? Should people always answer questions? And does the fact that I ended this post with questions mean I’ll get more comments?

Of Snowmass and SAGEX

arXiv-watchers might have noticed an avalanche of papers with the word Snowmass in the title. (I contributed to one of them.)

Snowmass is a place, an area in Colorado known for its skiing. It’s also an event in that place, the Snowmass Community Planning Exercise for the American Physical Society’s Division of Particles and Fields. In plain terms, it’s what happens when particle physicists from across the US get together in a ski resort to plan their future.

Usually someone like me wouldn’t be involved in that. (And not because it’s a ski resort.) In the past, these meetings focused on plans for new colliders and detectors. They got contributions from experimentalists, and a few theorists heavily focused on their work, but not the more “formal” theorists beyond.

This Snowmass is different. It’s different because of the COVID-19 pandemic, which changed it from a big meeting in a resort to a spread-out series of meetings and online activities. It’s also different because they invited theorists to contribute, and not just those interested in particle colliders. The theorists involved study everything from black holes and quantum gravity to supersymmetry and the mathematics of quantum field theory. Groups focused on each topic submit “white papers” summarizing the state of their area. These white papers in turn get organized and summarized into a few subfields, which in turn contribute to the planning exercise. No-one I’ve talked to is entirely clear on how this works, how much the white papers will actually be taken into account or by whom. But it seems like a good chance to influence US funding agencies, like the Department of Energy, and see if we can get them to prioritize our type of research.

Europe has something similar to Snowmass, called the European Strategy for Particle Physics. It also has smaller-scale groups, with their own purposes, goals, and funding sources. One such group is called SAGEX: Scattering Amplitudes: from Geometry to EXperiment. SAGEX is an Innovative Training Network, an organization funded by the EU to train young researchers, in this case in scattering amplitudes. Its fifteen students are finishing their PhDs and ready to take the field by storm. Along the way, they spent a little time in industry internships (mostly at Maple and Mathematica), and quite a bit of time working on outreach.

They have now summed up that outreach work in an online exhibition. I’ve had fun exploring it over the last couple days. They’ve got a lot of good content there, from basic explanations of relativity and quantum mechanics, to detailed games involving Feynman diagrams and associahedra, to a section that uses solitons as a gentle introduction to integrability. If you’re in the target audience, you should check it out!

How Expert Is That Expert?

The blog Astral Codex Ten had an interesting post a while back, about when to trust experts. Rather than thinking of some experts as “trustworthy” and some as “untrustworthy”, the post suggests an approach of “bounded distrust”. Even if an expert is biased or a news source sometimes lies, there are certain things you can still expect them to tell the truth about. If you are familiar enough with their work, you can get an idea of which kinds of claims you can trust and which you can’t, in a consistent and reliable way. Knowing how to do this is a skill, one you can learn to get better at.

In my corner of science, I can’t think of anyone who outright lies. Nonetheless, some claims are worth more trust than others. Sometimes experts have solid backing for what they say, direct experience that’s hard to contradict. Other times they’re speaking mostly from general impressions, and bias could easily creep in. Luckily, it’s not so hard to tell the difference. In this post, I’ll try to teach you how.

For an example, I’ll use something I saw at a conference last week. A speaker gave a talk describing the current state of cosmology: the new tools we have to map the early universe, and the challenges in using them to their full potential. After the talk, I remember her answering three questions. In each case, she seemed to know what she was talking about, but for different reasons. If she was contradicted by a different expert, I’d use these reasons to figure out which one to trust.

First, sometimes an expert gives what is an informed opinion, but just an informed opinion. As scientists, we are expected to know a fairly broad range of background behind our work, and be able to say something informed about it. We see overview talks and hear our colleagues’ takes, and get informed opinions about topics we otherwise don’t work on. This speaker fielded a question about quantum gravity, and her answer made it clear that the topic falls into this category for her. Her answer didn’t go into much detail, mentioning a few terms but no specific scientific results, and linked back in the end to a different question closer to her expertise. That’s generally how we speak on this kind of topic: vaguely enough to show what we know without overstepping.

The second question came from a different kind of knowledge, which I might call journal club knowledge. Many scientists have what are called “journal clubs”. We meet on a regular basis, read recent papers, and talk about them. The papers go beyond what we work on day-to-day, but not by that much, because the goal is to keep an eye open for future research topics. We read papers in close-by areas, watching for elements that could be useful, answers to questions we have or questions we know how to answer. The kind of “journal club knowledge” we have covers a fair amount of detail: these aren’t topics we are working on right now, but if we spent more time on it they could be. Here, the speaker answered a question about the Hubble tension, a discrepancy between two different ways of measuring the expansion of the universe. The way she answered focused on particular results: someone did X, there was a paper showing Y, this telescope is planning to measure Z. That kind of answer is a good way to tell that someone is answering from “journal club knowledge”. It’s clearly an area she could get involved in if she wanted to, one where she knows the important questions and which papers to read, with some of her own work close enough to the question to give an important advantage. But it was also clear that she hadn’t developed a full argument on one “side” or the other, and as such there are others I’d trust a bit more on that aspect of the question.

Finally, experts are the most trustworthy when we speak about our own work. In this speaker’s case, the questions about machine learning were where her expertise clearly shone through. Her answers there were detailed in a different way than her answers about the Hubble tension: not just papers, but personal experience. They were full of phrases like “I tried that, but it doesn’t work…” or “when we do this, we prefer to do it this way”. They also had the most technical terms of any of her answers, terms that clearly drew distinctions relevant to those who work in the field. In general, when an expert talks about what they do in their own work, and uses a lot of appropriate technical terms, you have especially good reason to trust them.

These cues can help a lot when evaluating experts. An expert who makes a generic claim, like “no evidence for X”, might not know as much as an expert who cites specific papers, and in turn they might not know as much as an expert who describes what they do in their own research. The cues aren’t perfect: one risk is that someone may be an expert on their own work, but that work may be irrelevant to the question you’re asking. But they help: rather than distrusting everyone, they help you towards “bounded distrust”, knowing which claims you can trust and which are riskier.

At the Bohr Centennial

One hundred years ago, Niels Bohr received his Nobel prize. One hundred and one years ago, the Niels Bohr Institute opened its doors (it would have been one hundred and two, but pandemics are inconvenient things).

This year, also partly delayed by a pandemic, the Niels Bohr Institute is celebrating.

Using the fanciest hall the university has.

We’ve had a three-day conference, packed with Nobel prizewinners, people who don’t feel out of place among Nobel prizewinners, and for one morning’s ceremony the crown prince of Denmark. There were last-minute cancellations but also last-minute additions, including a moving speech by two Ukrainian PhD students. I don’t talk politics on this blog, so I won’t say much more about it (and you shouldn’t in the comments either, there are better venues), but I will say that was the only time I’ve seen a standing ovation at a scientific conference.

The other talks ran from reminiscences (Glashow struggled to get to the stage, but his talk was witty, even quoting Peter Woit apparently to try to rile David Gross in the front row (next to the Ukrainian PhD students, who must have found it very awkward)) to classic colloquium-style talks (really interesting, crisply described puzzles from astrochemistry to biophysics) to a few more “conference-ey” talks (’t Hooft’s, unfortunately). It’s been fun, but also exhausting, and as such that’s all I’m writing this week.

The Only Speed of Light That Matters

A couple weeks back, someone asked me about a Veritasium video with the provocative title “Why No One Has Measured The Speed Of Light”. Veritasium is a science popularization YouTube channel, and usually a fairly good one…so it was a bit surprising to see it make a claim usually reserved for crackpots. Many, many people have measured the speed of light, including Ole Rømer all the way back in 1676. To argue otherwise seems like it demands a massive conspiracy.

Veritasium wasn’t proposing a conspiracy, though, just a technical point. Yes, many experiments have measured the speed of light. However, the speed they measure is in fact a “two-way speed”, the speed that light takes to go somewhere and then come back. They leave open the possibility that light travels differently in different directions, and only has the measured speed on average: that there are different “one-way speeds” of light.

The loophole is clearest using some of the more vivid measurements of the speed of light, timing how long it takes to bounce off a mirror and return. It’s less clear using other measurements of the speed of light, like Rømer’s. Rømer measured the speed of light using the moons of Jupiter, noticing that the time they took to orbit appeared to change based on whether Jupiter was moving towards or away from the Earth. For this measurement Rømer didn’t send any light to Jupiter…but he did have to make assumptions about the regularity of the moons’ orbits, using them like a distant clock. Those assumptions also leave the door open to a loophole, one where the different one-way speeds of light are compensated by different speeds for distant clocks. You can watch the Veritasium video for more details about how this works, or see the Wikipedia page for the mathematical details.

When we think of the speed of light as the same in all directions, in some sense we’re making a choice. We’ve chosen a convention, called the Einstein synchronization convention, that lines up distant clocks in a particular way. We didn’t have to choose that convention, though we prefer to (the math gets quite a bit more complicated if we don’t). And crucially for any such choice, it is impossible for any experiment to tell the difference.
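One standard way to make this precise is Reichenbach’s synchronization parameter ε, where Einstein’s convention is ε = 1/2 (the formulas are textbook; the distance and the specific values of ε below are just illustrative). Each choice of ε assigns light different one-way speeds, but the round-trip time comes out the same, which is why no bouncing-light experiment can distinguish them. A minimal sketch:

```python
# Reichenbach synchronization: the one-way speed of light is
# c / (2 * eps) on the outbound leg and c / (2 * (1 - eps)) on the
# return leg, for any eps strictly between 0 and 1.
# eps = 0.5 is Einstein's convention (same speed both ways).

def round_trip_time(d, c, eps):
    """Time for light to cover distance d out and back, under the
    one-way speeds fixed by the convention parameter eps."""
    c_out = c / (2 * eps)
    c_back = c / (2 * (1 - eps))
    return d / c_out + d / c_back

c = 299_792_458.0  # measured two-way speed of light, m/s
d = 1000.0         # meters, an arbitrary illustrative distance

einstein = round_trip_time(d, c, 0.5)
skewed = round_trip_time(d, c, 0.9)  # light much faster one way than the other
print(einstein, skewed)  # both equal 2 * d / c
```

Whatever ε you pick, 1/c_out + 1/c_back = 2/c, so only the two-way speed ever shows up in a measurement.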

So far, Veritasium is doing fine here. But if the video was totally fine, I wouldn’t have written this post. The technical argument is fine, but the video screws up its implications.

Near the end of the video, the host speculates whether this ambiguity is a clue. What if a deeper theory of physics could explain why we can’t tell the difference between different synchronizations? Maybe that would hint at something important.

Well, it does hint at something important, but not something new. What it hints at is that “one-way speeds” don’t matter. Not for light, or really for anything else.

Think about measuring the speed of something, anything. There are two ways to do it. One is to time it against something else, like the signal in a wire, and assume we know that speed. Veritasium shows an example of this, measuring the speed of a baseball that hits a target and sends a signal back. The other way is to send it somewhere with a clock we trust, and compare it to our clock. Each of these requires that something goes back and forth, even if it’s not the same thing each time. We can’t measure the one-way speed of anything because we’re never in two places at once. Everything we measure, every conclusion we come to about the world, rests on something “two-way”: our actions go out, our perceptions go in. Even our depth perception is an inference from our ancestors, whose experience seeing food and traveling to it calibrated our notion of distance.

Synchronization of clocks is a convention because the external world is a convention. What we have really, objectively, truly, are our perceptions and our memories. Everything else is a model we build to fill the gaps in between. Some features of that model are essential: if you change them, you no longer match our perceptions. Other features, though, are just convenience: ways we arrange the model to make it easier to use, to make it not “sound dumb”, to tell a coherent story. Synchronization is one of those things: the notion that you can compare times in distant places is convenient, but as relativity already tells us in other contexts, not necessary. It’s part of our storytelling, not an essential part of our model.

Geometry and Geometry

Last week, I gave the opening lectures for a course on scattering amplitudes, the things we compute to find probabilities in particle physics. After the first class, one of the students asked me if two different descriptions of these amplitudes, one called CHY and the other called the amplituhedron, were related. There does happen to be a connection, but it’s a bit subtle and indirect, not the sort of thing the student would have been thinking of. Why then, did he think they might be related? Well, he explained, both descriptions are geometric.

If you’ve been following this blog for a while, you’ve seen me talk about misunderstandings. There are a lot of subtle ways a smart student can misunderstand something, ways that can be hard for a teacher to recognize. The right question, or the right explanation, can reveal what’s going on. Here, I think the problem was that there are multiple meanings of geometry.

One of the descriptions the student asked about, CHY, is related to string theory. It describes scattering particles in terms of the path of a length of string through space and time. That path draws out a surface called a world-sheet, showing all the places the string touches on its journey. And that picture, of a wiggly surface drawn in space and time, looks like what most people think of as geometry: a “shape” in a pretty normal sense, which here describes the physics of scattering particles.

The other description, the amplituhedron, also uses geometric objects to describe scattering particles. But the “geometric objects” here are much more abstract. A few of them are familiar: straight lines, the area between them forming shapes on a plane. Most of them, though, are generalizations of this: instead of lines on a plane, they have higher dimensional planes in higher dimensional spaces. These too get described as geometry, even though they aren’t the “everyday” geometry you might be familiar with. Instead, they’re a “natural generalization”, something that, once you know the math, is close enough to that “everyday” geometry that it deserves the same name.
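To give a flavor of that kind of generalization (a toy example of my own, far simpler than the amplituhedron itself): the area of a triangle in the plane is a 2×2 determinant divided by 2, and the identical formula, with a larger determinant, gives the volume of a simplex spanned by points in any number of dimensions:

```python
from math import factorial

def det(m):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def simplex_volume(vertices):
    """Volume of the simplex with n+1 vertices in n dimensions:
    |determinant of the edge vectors| / n!"""
    base = vertices[0]
    edges = [[v[i] - base[i] for i in range(len(base))] for v in vertices[1:]]
    return abs(det(edges)) / factorial(len(edges))

print(simplex_volume([(0, 0), (1, 0), (0, 1)]))  # triangle: 0.5
print(simplex_volume([(0, 0, 0), (1, 0, 0),
                      (0, 1, 0), (0, 0, 1)]))    # tetrahedron: 1/6
```

The same few lines work in two dimensions or ten; that is the sense in which higher-dimensional “planes in planes” count as geometry even though nobody can picture them.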

This week, two papers presented a totally different kind of geometric description of particle physics. In those papers, “geometric” has to do with differential geometry, the mathematics behind Einstein’s theory of general relativity. The descriptions are geometric because they use the same kinds of building-blocks of that theory, a metric that bends space and time. Once again, this kind of geometry is a natural generalization of the everyday notion, but once again in a different way.

All of these notions of geometry do have some things in common, of course. Maybe you could even write down a definition of “geometry” that includes all of them. But they’re different enough that if I tell you that two descriptions are “geometric”, it doesn’t tell you all that much. It definitely doesn’t tell you the two descriptions are related.

It’s a reasonable misunderstanding, though. It comes from a place where, used to “everyday” geometry, you expect two “geometric descriptions” of something to be similar: shapes moving in everyday space, things you can directly compare. Instead, a geometric description can be many sorts of shape, in many sorts of spaces, emphasizing many sorts of properties. “Geometry” is just a really broad term.

Valentine’s Day Physics Poem 2022

Monday is Valentine’s Day, so I’m following my yearly tradition and posting a poem about love and physics. If you like it, be sure to check out my poems from past years here.

Time Crystals

A physicist once dreamed
of a life like a crystal.
Each facet the same, again and again,
     effortlessly
         until the end of time.

This is, of course, impossible.

A physicist once dreamed
of a life like a crystal.
Each facet the same, again and again,
      not effortlessly,
	   but driven,
with reliable effort
input energy
(what the young physicists call work).

This, (you might say of course,) is possible.
It means more than you’d think.

A thing we model as a spring
(or: anyone and anything)
has a restoring force:
a force to pull it back
a force to keep it going.

A thing we model as a spring
(yes you and me and everything)
has a damping force, too:
this slows it down
and tires it out.
The dismal law
of finite life.

The driving force is another thing
no mere possession of the spring.
The driving force comes from

    o u t s i d e

and breaks the rules.

Your rude “of course”:
a sign you guess
a simple resolution.
That outside helpmeet,
doing work,
will be used up,
drained,
fueling that crystal life.

But no.

That was the discovery.

No net drain,
but back and forth,
each feeding the other.
With this alone
(and only this)
the system breaks the dismal law
and lives forever.

(As a child, did you ever sing,
of giving away, and giving away,
and only having more?)

A physicist dreamed,
alone, impossibly,
of a life like a crystal.

Collaboration made it real.