Tag Archives: mathematica

This Week at Quanta Magazine

I’ve got an article in Quanta Magazine this week, about a program called FORM.

Quanta has come up a number of times on this blog; they’re a science news outlet set up by the Simons Foundation. Their goal is to enhance the public understanding of science and mathematics. They cover topics other outlets might find too challenging, and they cover the topics others do cover in more depth. Most people I know who’ve worked with them have been impressed by their thoroughness: they take fact-checking to a level I haven’t seen with other science journalists. If you’re doing a certain kind of mathematical work, then you hope that Quanta decides to cover it.

A while back, as I was chatting with one of their journalists, I had a startling realization: if I want Quanta to cover something, I can send them a tip, and if they’re interested they’ll write about it. That realization resulted in the article I talked about here. Chatting with the journalist interviewing me for that article, though, I learned something, if anything, even more startling: if I want Quanta to cover something, and I want to write about it, I can pitch the article to Quanta, and if they’re interested they’ll pay me to write about it.

Around the same time, I happened to talk to a few people in my field who had a problem they thought Quanta should cover. A program called FORM was used in all the most serious collider physics calculations. Despite that, the software wasn’t being supported: its future was unclear. You can read the article to learn more.

One thing I didn’t mention in that article: I hadn’t used FORM before I started writing it. I don’t do those “most serious collider physics calculations”, so I’d never bothered to learn FORM. I mostly use Mathematica, a common choice among physicists who want something easy to learn, even if it’s not the strongest option for many things.

(By the way, it was surprisingly hard to find quotes about FORM that didn’t compare it specifically to Mathematica. In the end I think I included one, but believe me, there could have been a lot more.)

Now, I wonder if I should have been using FORM all along. Many times I’ve pushed to the limits of what Mathematica could comfortably handle, the limits of what my computer’s memory could hold, equations so long that just expanding them out took complicated work-arounds. If I had learned FORM, maybe I would have breezed through those calculations, and pushed even further.

I’d love it if this article gets FORM more attention, and more support. But also, I’d love it if it gives a window on the nuts and bolts of hard-core particle physics: the things people have to do to turn those T-shirt equations into predictions for actual colliders. It’s a world in between physics and computer science and mathematics, a big part of the infrastructure of how we know what we know that, precisely because it’s infrastructure, often ends up falling through the cracks.

Edit: For researchers interested in learning more about FORM, the workshop I mentioned at the end of the article is now online, with registrations open.

The Unpublishable Dirty Tricks of Theoretical Physics

As the saying goes, it is better not to see laws or sausages being made. You’d rather see the clean package on the outside than the mess behind the scenes.

The same is true of science. A good paper tells a nice, clean story: a logical argument from beginning to end, with no extra baggage to slow it down. That story isn’t a lie: for any decent paper in theoretical physics, the conclusions will follow from the premises. Most of the time, though, it isn’t how the physicist actually did it.

The way we actually make discoveries is messy. It involves looking for inspiration in all the wrong places: pieces of old computer code and old problems, trying to reproduce this or that calculation with this or that method. In the end, once we find something interesting enough, we can reconstruct a clearer, cleaner, story, something actually fit to publish. We hide the original mess partly for career reasons (easier to get hired if you tell a clean, heroic story), partly to be understood (a paper that embraced the mess of discovery would be a mess to read), and partly just due to that deep human instinct to not let others see us that way.

The trouble is, some of that “mess” is useful, even essential. And because it’s never published or put into textbooks, the only way to learn it is word of mouth.

A lot of these messy tricks involve numerics. Many theoretical physics papers derive things analytically, writing out equations in symbols. It’s easy to make a mistake in that kind of calculation, either writing something wrong on paper or as a bug in computer code. To correct mistakes, many things are checked numerically: we plug in numbers to make sure everything still works. Sometimes this means using an approximation, trying to make sure two things cancel to some large enough number of decimal places. Sometimes instead it’s exact: we plug in prime numbers, and can much more easily see if two things are equal, or if something is rational or contains a square root. Sometimes numerics aren’t just used to check something, but to find a solution: exploring many options in an easier numerical calculation, finding one that works, and doing it again analytically.
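The prime-number trick is easy to sketch outside any computer algebra system. Here’s a toy illustration in Python (my own made-up example, not a real physics calculation): two rational expressions that should be equal are compared by plugging in primes, with exact arithmetic, so agreement at several independent points is strong evidence the expressions really match.

```python
from fractions import Fraction

# Two expressions that should be equal as rational functions:
# 1/(x(x+1)(x+2)) versus its partial-fraction decomposition.
def lhs(x):
    return Fraction(1, x * (x + 1) * (x + 2))

def rhs(x):
    return Fraction(1, 2 * x) - Fraction(1, x + 1) + Fraction(1, 2 * (x + 2))

# Plug in primes: exact rational arithmetic, no rounding error,
# so any mismatch shows up immediately.
for p in [2, 3, 5, 7, 11, 101]:
    assert lhs(p) == rhs(p)
print("expressions agree at all test primes")
```

In a real calculation the expressions might be thousands of terms long, but the principle is the same: an exact numerical check at a handful of primes is vastly cheaper than expanding everything out symbolically.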

“Ansätze” are also common: our fancy word for an educated guess. These we sometimes admit, when they’re at the core of a new scientific idea. But the more minor examples go unmentioned. If a paper shows a nice clean formula and proves it’s correct, but doesn’t explain how the authors got it…probably, they used an ansatz. This trick can go hand-in-hand with numerics as well: make a guess, check that it matches the right numbers, then try to see why it’s true.
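Here’s a toy version of that ansatz-plus-numerics loop in Python (an illustration I’ve invented, far simpler than anything in a real paper): guess that the sum of the first n cubes is a quartic polynomial, fix the unknown coefficients from a few data points, then test the guess against fresh data it wasn’t fitted to.

```python
from fractions import Fraction

# "Numerical data": sums of cubes, computed by brute force.
def s(n):
    return sum(k**3 for k in range(1, n + 1))

# Gauss-Jordan elimination on an augmented matrix of exact Fractions.
def solve(rows):
    n = len(rows)
    for i in range(n):
        pivot = rows[i][i]
        rows[i] = [x / pivot for x in rows[i]]
        for j in range(n):
            if j != i:
                factor = rows[j][i]
                rows[j] = [x - factor * y for x, y in zip(rows[j], rows[i])]
    return [row[-1] for row in rows]

# Ansatz: S(n) = a*n^4 + b*n^3 + c*n^2 + d*n (no constant, since S(0)=0).
# Fix the four unknowns from four data points.
points = [1, 2, 3, 4]
rows = [[Fraction(n**4), Fraction(n**3), Fraction(n**2), Fraction(n), Fraction(s(n))]
        for n in points]
a, b, c, d = solve(rows)
print(a, b, c, d)  # 1/4 1/2 1/4 0

# Check the ansatz against data points it was NOT fitted to.
assert all(a*n**4 + b*n**3 + c*n**2 + d*n == s(n) for n in range(5, 30))
```

Once the check passes, the final step is the one that makes it into the paper: recognize the result as (n(n+1)/2)², and prove it.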

The messy tricks can also involve the code itself. In my field we often use “computer algebra” systems, programs to do our calculations for us. These systems are programming languages in their own right, and we need to write computer code for them. That code gets passed around informally, but almost never standardized. Mathematical concepts that come up again and again can be implemented very differently by different people, some much more efficiently than others.

I don’t think it’s unreasonable that we leave “the mess” out of our papers. They would certainly be hard to understand otherwise! But it’s a shame we don’t publish our dirty tricks somewhere, even in special “dirty tricks” papers. Students often start out assuming everything is done the clean way, and start doubting themselves when they notice it’s much too slow to make progress. Learning the tricks is a big part of learning to be a physicist. We should find a better way to teach them.

The Wolfram Physics Project Makes Me Queasy

Stephen Wolfram is…Stephen Wolfram.

Once a wunderkind student of Feynman, Wolfram is now best known for his software, Mathematica, a tool used by everyone from scientists to lazy college students. Almost all of my work is coded in Mathematica, and while it has some flaws (can someone please speed up the linear solver? Maple’s is so much better!) it still tends to be the best tool for the job.

Wolfram is also known for being a very strange person. There’s his tendency to name, or rename, things after himself. (There’s a type of Mathematica file that used to be called “.m”. Now by default they’re “.wl”, “Wolfram Language” files.) There’s his live-streamed meetings. And then there’s his physics.

In 2002, Wolfram wrote a book, “A New Kind of Science”, arguing that computational systems called cellular automata were going to revolutionize science. A few days ago, he released an update: a sprawling website for “The Wolfram Physics Project”. In it, he claims to have found a potential “theory of everything”, unifying general relativity and quantum physics in a cellular automata-like form.

If that gets your crackpot klaxons blaring, yeah, me too. But Wolfram was once a very promising physicist. And he has collaborators this time, who are currently promising physicists. So I should probably give him a fair reading.

On the other hand, his introduction for a technical audience is 448 pages long. I may have more time now due to COVID-19, but I still have a job, and it isn’t reading that.

So I compromised. I didn’t read his 448-page technical introduction. I read his 90-ish page blog post. The post is written for a non-technical audience, so I know it isn’t 100% accurate. But by seeing how someone chooses to promote their work, I can at least get an idea of what they value.

I started out optimistic, or at least trying to be. Wolfram starts with simple mathematical rules, and sees what kinds of structures they create. That’s not an unheard-of strategy in theoretical physics, including in my own field. And the specific structures he’s looking at look weirdly familiar, a bit like a generalization of cluster algebras.

Reading along, though, I got more and more uneasy. That unease peaked when I saw him describe how his structures give rise to mass.

Wolfram had already argued that his structures obey special relativity. (For a critique of this claim, see this twitter thread.) He found a way to define energy and momentum in his system, as “fluxes of causal edges”. He picks out a particular “flux of causal edges”, one that corresponds to “just going forward in time”, and defines it as mass. Then he “derives” E=mc^2, saying,

Sometimes in the standard formalism of physics, this relation by now seems more like a definition than something to derive. But in our model, it’s not just a definition, and in fact we can successfully derive it.

In “the standard formalism of physics”, E=mc^2 means “mass is the energy of an object at rest”. It means “mass is the energy of an object just going forward in time”. If the “standard formalism of physics” “just defines” E=mc^2, so does Wolfram.
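For reference, the textbook relation behind this point (standard special relativity, nothing specific to Wolfram’s model): energy, momentum, and mass are tied together by

```latex
E^2 = (pc)^2 + (mc^2)^2
```

Setting p = 0, the case of an object at rest (an object “just going forward in time”), gives E = mc^2 directly: mass is rest energy, which is exactly the statement at issue.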

I haven’t read his technical summary. Maybe this isn’t really how his “derivation” works, maybe it’s just how he decided to summarize it. But it’s a pretty misleading summary, one that gives the reader entirely the wrong idea about some rather basic physics. It worries me, because both as a physicist and a blogger, he really should know better. I’m left wondering whether he meant to mislead, or whether instead he’s misleading himself.

That feeling kept recurring as I kept reading. There was nothing else as extreme as that passage, but a lot of pieces that felt like they were making a big deal about the wrong things, and ignoring what a physicist would find the most important questions.

I was tempted to get snarkier in this post, to throw in a reference to Lewis’s trilemma or some variant of the old quip that “what is new is not good; and what is good is not new”. For now, I’ll just say that I probably shouldn’t have read a 90 page pop physics treatise before lunch, and end the post with that.

My Other Brain (And My Other Other Brain)

What does a theoretical physicist do all day? We sit and think.

Most of us can’t do all that thinking in our heads, though. Maybe Stephen Hawking could, but the rest of us need to visualize what we’re thinking. Our memories, too, are all too finite, prone to forgetting what we’re doing midway through a calculation.

So rather than just use our imagination and memory, we use another imagination, another memory: a piece of paper. Writing is the simplest “other brain” we have access to, but even by itself it’s a big improvement, adding weeks of memory and the ability to “see” long calculations at work.

But even augmented by writing, our brains are limited. We can only calculate so fast. What’s more, we get bored: doing the same thing mechanically over and over is not something our brains like to do.

Luckily, in the modern era we have access to other brains: computers.

As I write, the “other brain” sitting on my desk works out a long calculation. Using programs like Mathematica or Maple, or more serious programming languages, I can tell my “other brain” to do something and it will do it, quickly and without getting bored.

My “other brain” is limited too. It has only so much memory, only so much speed, it can only do so many calculations at once. While it’s thinking, though, I can find yet another brain to think at the same time. Sometimes that’s just my desktop, sitting back in my office in Denmark. Sometimes I have access to clusters, blobs of synchronized brains to do my bidding.

While I’m writing this, my “brains” are doing five different calculations (not counting any my “real brain” might be doing). I’m sitting and thinking, as a theoretical physicist should.

Romeo and Juliet, through a Wormhole

Perimeter is hosting this year’s Mathematica Summer School on Theoretical Physics. The school is a mix of lectures on a topic in physics (this year, the phenomenon of quantum entanglement) and tips and tricks for using the symbolic calculation program Mathematica.

Juan Maldacena is one of the lecturers, which gave me a chance to hear his Romeo and Juliet-based explanation of the properties of wormholes. While I’ve criticized some of Maldacena’s science popularization work in the past, this one is pretty solid, so I thought I’d share it with you guys.

You probably think of wormholes as “shortcuts” to travel between two widely separated places. As it turns out, this isn’t really accurate: while “normal” wormholes do connect distant locations, they don’t do it in a way that allows astronauts to travel between them, Interstellar-style. This can be illustrated with something called a Penrose diagram:


Static “Greyish Black” Diagram

In the traditional Penrose diagram, time goes upward, while space goes from side to side. In order to measure both in the same units, we use the speed of light, so one year on the time axis corresponds to one light-year on the space axis. This means that if you’re traveling along a 45-degree line on the diagram, you’re going at the speed of light. Any shallower angle is impossible, while any steeper angle means you’re going slower.

If we start in “our universe” in the diagram, can we get to the “other universe”?

Pretty clearly, the answer is no. As long as we go slower than the speed of light, when we pass the event horizon of the wormhole we will end up, not in the “other universe”, but at the part of the diagram labeled Future Singularity, the singularity at the center of the black hole. Even going at the speed of light only keeps us orbiting the event horizon for all eternity, at best.

What use could such a wormhole be? Well, imagine you’re Romeo or Juliet.

Romeo has been banished from Verona, but he took one end of a wormhole with him, while the other end was left with Juliet. He can’t go through and visit her, and she can’t go through and visit him. But if they’re already considering taking poison, there’s an easier way. If they both jump into the wormhole, they’ll fall into the singularity. Crucially, though, it’s the same singularity, so once they’re past the event horizon they can meet inside the black hole, spending some time together before the end.

Depicted here for more typical quantum protagonists, Alice and Bob.

This explains what wormholes really are: two black holes that share a center.

Why was Maldacena talking about this at a school on entanglement? Maldacena has recently conjectured that quantum entanglement and wormholes are two sides of the same phenomenon, that pairs of entangled particles are actually connected by wormholes. Crucially, these wormholes need to have the properties described above: you can’t use a pair of entangled particles to communicate information faster than light, and you can’t use a wormhole to travel faster than light. However, it is the “shared” singularity that ends up particularly useful, as it suggests a solution to the problem of black hole firewalls.

Firewalls were originally proposed as a way of getting around a particular paradox relating three states connected by quantum entanglement: a particle inside a black hole, radiation just outside the black hole, and radiation far away from the black hole. The way the paradox is set up, it appears that these three states must all be connected. As it turns out, though, this is prohibited by quantum mechanics, which only allows two states to be entangled at a time. The original solution proposed for this was a “firewall”, a situation in which anyone trying to observe all three states would “burn up” when crossing the event horizon, thus avoiding any observed contradiction. Maldacena’s conjecture suggests another way: if someone interacts with the far-away radiation, they have an effect on the black hole’s interior, because the two are connected by a wormhole! This ends up getting rid of the contradiction, allowing the observer to view the black hole and distant radiation as two different descriptions of the same state, and it depends crucially on the fact that a wormhole involves a shared singularity.

There’s still a lot of detail to be worked out; part of the reason Maldacena presented this research here was to inspire more investigation from students. But it does seem encouraging that Romeo and Juliet might not have to face a wall of fire before being reunited.

Where Do the Experts Go When They Need an Expert?

If your game crashes, or Windows keeps spitting out bizarre error messages, you google the problem. Chances are, you find someone on a help forum who had the same problem, and hopefully someone else posted the answer.

(If your preferred strategy is to ask a younger relative, then I’m sorry, but nine times out of ten they’re just doing that.)

What do scientists do, though? We’re at the cutting-edge of knowledge. When we have a problem, who do we turn to?

Typically, Stack Exchange.

The thing is, when we’re really confused about something, most of the time it’s not really a physics problem. We get mystified by the intricacies of Mathematica, or we need some clever trick from numerical methods. And while I haven’t done much with them yet, there are communities dedicated to answering actual physics questions, like Physics Overflow.

The idea I was working on last week? That came from a poster on the Mathematica Stack Exchange, who mentioned a handy little function called Association that I hadn’t heard of before. (It worked, by the way.)
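For readers who don’t use Mathematica: an Association is Mathematica’s key-value store, the analogue of a dict in Python. One common use (a generic sketch of mine, not the specific idea from that week) is caching pieces of a calculation so each is computed once and looked up ever after:

```python
# A dict playing the role of Mathematica's Association: cache results
# of an expensive step so repeated queries become instant lookups.
cache = {}

def expensive(n):
    if n not in cache:
        cache[n] = sum(k**2 for k in range(n))  # stand-in for a slow step
    return cache[n]

print(expensive(10))  # computed once...
print(expensive(10))  # ...then merely looked up
```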

Science is a collaborative process. Sometimes that means actual collaborators, but sometimes we need a little help from folks online, just like everyone else.

“Super” Computers: Using a Cluster

When I join a new department or institute, the first thing I ask is “do we have a cluster?”

Most of what I do, I do on a computer. Gone are the days when theorists would always do all their work on notepads and chalkboards (though many still do!). Instead, we use specialized computer programs like Mathematica and Maple. Using a program helps keep us from forgetting pesky minus signs, and it allows working with equations far too long to fit on a sheet of paper.

Now, if computers help, more computers should help more. Since physicists like to add “super” to things, what about a supercomputer?

The Jaguars of the computing world.

Supercomputers are great, but they’re also expensive. The people who use supercomputers are the ones who model large, complicated systems, like the weather, or supernovae. For most theorists, you still want power, but you don’t need quite that much. That’s where computer clusters come in.

A computer cluster is pretty much what it sounds like: several computers wired together. Different clusters contain different numbers of computers. For example, my department has a ten-node cluster. Sure, that doesn’t stack up to a supercomputer, but it’s still ten times as fast as an ordinary computer, right?

The power of ten computers!

Well, not exactly. As several of my friends have been surprised to learn, the computers on our cluster are actually slower than most of our laptops.

The power of ten old computers!

Still, ten older computers are faster than one new one, yes?

Even then, it depends how you use it.

Run a normal task on a cluster, and it’s just going to run on one of the computers, which, as I’ve said, are slower than a modern laptop. You need to get smarter.

There are two big advantages of clusters: time, and parallelization.

Sometimes, you want to do a calculation that will take a long time. Your computer is going to be busy for a day or two, and that’s inconvenient when you want to do…well, pretty much anything else. A cluster is a space to run those long calculations. You put the calculation on one of the nodes, you go back to doing your work, and you check back in a day or two to see if it’s finished.

Clusters are at their most powerful when you can parallelize. If you need to do ten versions of the same calculation, each slightly different, then rather than doing them one at a time a cluster lets you do them all at once. At that point, it really is making you ten times faster.
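The “ten versions at once” pattern looks roughly like this in Python (a local sketch using a process pool; on a real cluster the same idea goes through a job scheduler such as Slurm, one job per version):

```python
from multiprocessing import Pool

# One "version" of the calculation: same formula, slightly different input.
# (Toy stand-in for a real physics calculation.)
def one_version(parameter):
    return sum(k ** parameter for k in range(1, 1000))

if __name__ == "__main__":
    # Run all ten versions at once, one per worker, instead of one at a time.
    with Pool(processes=10) as pool:
        results = pool.map(one_version, range(1, 11))
    print(len(results), "results collected")
```

The speedup is close to ideal precisely because the versions don’t need to talk to each other; this is what people mean by an “embarrassingly parallel” workload.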

If you ever program, I’d encourage you to look into the resources you have available. A cluster is a very handy thing to have access to, no matter what you’re doing!

Why we Physics

There are a lot of good reasons to study theories in theoretical physics, even the ones that aren’t true. They teach us how to do calculations in other theories, including those that do describe reality, which lets us find out fundamental facts about nature. They let us hone our techniques, developing novel methods that often find use later, in some cases even spinoff technology. (Mathematica came out of the theoretical physics community, while experimental high energy physics led to the birth of the modern internet.)

Of course, none of this is why physicists actually do physics. Sure, Nima Arkani-Hamed might need to tell himself that space-time is doomed to get up in the morning, but for a lot of us, it isn’t about proving any wide-ranging point about the universe. It’s not even all about the awesome, as some would have it: most of what we do on a day-to-day basis isn’t especially awesome. It goes a bit deeper than that.

Science, in the end, is about solving puzzles. And solving puzzles is immensely satisfying, on a deep, fundamental level.

There’s a unique feeling that you get when all the pieces come together, when you’re calculating something and everything cancels and you’re left with a simple answer, and for some people that’s the best thing in existence.

It’s especially true when you’re working with an ansatz or using some other method where you fix parameters and fill in uncertainties, one by one. You can see how close you are to the answer, which means each step gives you that little thrill of getting just that much closer. One of my colleagues describes the calculations he does in supergravity as not tedious but “delightful” for precisely this reason: a calculation where every step puts another piece in the right place just feels good.

Theoretical physicists are the kind of people who would get a Lego set for their birthday, build it up to completion, and then never play with it again (unless it was to take it apart and make something else). We do it for the pure joy of seeing something come together and become complete. Save what it’s “for” for the grant committees, we’ve got a different rush in mind.

So what do you actually do?

A few days ago, my sister asked me what I do at work. What do I actually do in order to do my job? What sort of tasks does it involve?

I answered by showing her this:


Needless to say, that wasn’t very helpful, so I thought a bit and now I have a better answer.

Doing theoretical physics is basically like doing homework. In particular, it’s like doing difficult, interesting homework.

Think of the toughest homework assignment you’ve ever had to do. A homework assignment so tough, you and all your friends in the class worked together to finish it, and none of you were sure you were going to get it right.

Chances are, you handled the situation in one of two ways, depending on whether this was a group project, or an individual one.

Group Project:

This is what you do when you’re supposed to be in a group. Maybe you’re putting together a presentation, or building a rocket. Whatever you’re doing, you’ve got a lot of little tasks that need to get done in order to achieve your goals, so you parcel them out: each group member is assigned a specific task, and at the end everyone meets and puts it all together.

This sort of situation is common in theoretical physics as well, and it happens when different people have different skills to contribute. If one theorist is good at programming, while another understands a particular esoteric type of mathematics, then the math person will do the calculations and then give the results to the programming person, who makes a program to implement it.

Individual Project:

On the other hand, if everyone needs to submit their own work, you can’t very well just do part of it (not without cheating, anyway). Still, it’s not as if you’re doing this on your own. You do your own work to solve the problem, but you keep in contact with your classmates, and when you get stuck, you ask one of them for help.

This sort of situation happens in theoretical physics when everyone is relatively on the same page. Everyone works through the problem individually, doing the calculation and making their own programs, and whenever someone gets stuck, they talk to the others. Everyone periodically compares their results, which serves as a cross-check to make sure nobody made a mistake. The only difference from doing homework is that you and your collaborators write your own problems…which means none of you knows if there is a solution!

In both cases (group and individual), theoretical physics is a matter of doing calculations, writing programs, and thinking through thought experiments. Sometimes that means specific tasks as part of one huge project; sometimes it means working side by side on the same calculation. Either way, it all boils down to one thing: I’m someone who does homework for a living.