Ingredients of a Good Talk

It’s one of the hazards of physics that occasionally we have to attend talks about other people’s sub-fields.

Physics is a pretty heavily specialized field. It’s specialized enough that an otherwise perfectly reasonable talk can be totally incomprehensible to someone just a few sub-fields over.

I went to a talk this week on someone else’s sub-field, and was pleasantly surprised by how much I could follow. I thought I’d say a bit about what made it work.

In my experience, a good talk tells me why I should care, what was done, and what we know now.

Most talks start with a Motivation section, covering the why I should care part. If a talk doesn’t provide any motivation, it’s assuming that everyone finds the point of the research self-evident, and that’s a risky assumption.

Even for talks with a Motivation section, though, there’s a lot of variety. I’ve been to plenty of talks where the motivation presented is very sketchy: “this sort of thing is important in general, so we’re going to calculate one”. While that’s technically a motivation, all it does for an outsider is to tell them which sub-field you’re part of. Ideally, a motivation section does more: for a good talk, the motivation should not only say why you’re doing the work, but what question you’re asking and how your work can answer it.

The bulk of any talk covers what was done, but here there’s also varying quality. Bad talks often make it unclear how much was done by the presenter versus how much was done before. This is important not just to make sure the right people get credit, but because it can be hard to tell how much progress has been made. A good talk makes it clear not only what was done, but why it wasn’t done before. The whole point of a talk is to show off something new, so it should be clear what the new thing is.

If those two parts are done well, it becomes a lot easier to explain what we know now. If you’re clear on what question you were asking and what you did to answer it, then you’ve already framed things in those terms, and the rest is just summarizing. If not, you have to build it up from scratch, ending up with the important information packed into the last few minutes.

This isn’t everything you need for a good talk, but it’s important, and far too many people neglect it. I’ll be giving a few talks next week, and I plan to keep this structure in mind.

Science Is a Collection of Projects, Not a Collection of Beliefs

Read a textbook, and you’ll be confronted by a set of beliefs about the world.

(If it’s a half-decent textbook, it will give justifications for those beliefs, and they will be true, putting you well on the way to knowledge.)

The same is true of most science popularization. In either case, you’ll be instructed that a certain set of statements about the world (or about math, or anything else) are true.

If most of your experience with science comes from popularizations and textbooks, you might think that all of science is like this. In particular, you might think of scientific controversies as matters of contrasting beliefs. Some scientists “believe in” supersymmetry, some don’t. Some “believe in” string theory, some don’t. Some “believe in” a multiverse, some don’t.

In practice, though, only settled science takes the form of beliefs. The rest, science as it is actually practiced, is better understood as a collection of projects.

Scientists spend most of their time working on projects. (Well, or procrastinating in my case.) Those projects, not our beliefs about the world, are how we influence other scientists, because projects build off each other. Any time we successfully do a calculation or make a measurement, we’re opening up new calculations and measurements for others to do. We all need to keep working and publishing, so anything that gives people something concrete to do is going to be influential.

The beliefs that matter come later. They come once projects have been so successful, and so widespread, that their success itself is evidence for beliefs. They’re the beliefs that serve as foundational assumptions for future projects. If you’re going to worry that some scientists are behaving unscientifically, these are the sorts of beliefs you want to worry about. Even then, things are often constrained by viable projects: in many fields, you can’t have a textbook without problem sets.

Far too many people seem to miss this distinction. I’ve seen philosophers focus on scientists’ public statements instead of their projects when trying to understand the implications of their science. I’ve seen bloggers and journalists who mostly describe conflicts of beliefs, what scientists expect and hope to be true rather than what they actually work on.

Do scientists have beliefs about controversial topics? Absolutely. Do those beliefs influence what they work on? Sure. But only so far as there’s actually something there to work on.

That’s why you see quite a few high-profile physicists endorsing some form of multiverse, but barely any actual journal articles about it. The belief in a multiverse may or may not be true, but regardless, there just isn’t much that one can do with the idea right now, and it’s what scientists are doing, not what they believe, that constitutes the health of science.

Different fields seem to understand this to different extents. I’m reminded of a story I heard in grad school, of two dueling psychologists. One of them believed that conversation was inherently cooperative, and showed that, unless unusually stressed or busy, people would put in the effort to understand the other person’s perspective. The other believed that conversation was inherently egocentric, and showed that, the more stressed or busy people are, the more they assume that everyone else has the same perspective they do.

Strip off the “beliefs”, and these two worked on the exact same thing, with the same results. With their beliefs included, though, they were bitter rivals who bristled if their grad students so much as mentioned the other scientist.

We need to avoid this kind of mistake. The skills we have, the kind of work we do, these are important, these are part of science. The way we talk about it to reporters, the ideas we champion when we debate, those are sidelines. They have some influence, dragging people one way or another. But they’re not what science is, because on the front lines, science is about projects, not beliefs.

arXiv, Our Printing Press

Johannes Gutenberg, inventor of the printing press, and possibly the only photogenic thing on the Mainz campus

I’ve had a few occasions to dig into older papers recently, and I’ve noticed a trend: old papers are hard to read!

Ok, that might not be surprising. The older a paper is, the greater the chance it will use obsolete notation, or assume a context that has long passed by. Older papers have different assumptions about what matters, or what rigor requires, and their readers cared about different things. All this is to be expected: a slow, gradual approach to a modern style and understanding.

I’ve been noticing, though, that this slow, gradual approach doesn’t always hold. Specifically, it seems to speed up quite dramatically at one point: the introduction of arXiv, the website where we store all our papers.

Part of this could just be a coincidence. As it happens, the founding papers in my subfield, those that started Amplitudes with a capital “A”, were right around the time that arXiv first got going. It could be that all I’m noticing is the difference between Amplitudes and “pre-Amplitudes”, with the Amplitudes subfield sharing notation more than they did before they had a shared identity.

But I suspect that something else is going on. With arXiv, we don’t just share papers (that was done, piecemeal, before arXiv). We also share LaTeX.

LaTeX is a document formatting language, like a programming language for papers. It’s used pretty much universally in physics and math, and increasingly in other fields. As it turns out, when we post a paper to arXiv, we don’t just send a pdf: we include the raw LaTeX code as well.

Before arXiv, if you wanted to include an equation from another paper, you’d format it yourself. You’d probably do it a little differently from the other paper, in accord with your own conventions, and just to make it easier on yourself. Over time, more and more differences would crop up, making older papers harder and harder to read.

With arXiv, you can still do all that. But you can also just copy.

Since arXiv makes the LaTeX code behind a paper public, it’s easy to lift the occasional equation. Even if you’re not lifting it directly, you can see how they coded it. Even if you don’t plan on copying, the default gets flipped around: instead of having to try to make your equation like the one in the previous paper and accidentally getting it wrong, every difference is intentional.
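As a hypothetical illustration (the macro and equation here are invented for this post, not taken from any real paper), here is how the "same" equation can drift between two papers that each format it themselves, versus papers that share source:

```latex
% Hypothetical example: two papers coding the "same" loop integral.
% Paper A spells everything out inline:
\begin{equation}
  M = \int \frac{d^4 k}{(2\pi)^4} \, \frac{1}{k^2 \, (k+p)^2}
\end{equation}

% Paper B uses its own conventions and macros, so even a careful
% retyping ends up looking different:
\newcommand{\loopint}{\int \frac{d^D \ell}{(2\pi)^D}}
\begin{equation}
  M = \loopint \frac{1}{\ell^2 \, (\ell + p)^2}
\end{equation}
```

With arXiv, Paper B’s authors could instead lift Paper A’s source line for line, and the notation would stay identical by default.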

This reminds me, in a small-scale way, of the effect of the printing press on anatomy books.

Before the printing press, books on anatomy tended to be full of descriptions, but not illustrations. Illustrations weren’t reliable: there was no guarantee the monk who copied them would do so correctly, so nobody bothered. This made it hard to tell when an anatomist (fine, it was always Galen) was wrong: he could just be using an odd description. It was only after the printing press that illustrations could be reliable across copies of a book. Suddenly, it was possible to point out that a fellow anatomist had left something out: it would be missing from the illustration!

In a similar way, arXiv seems to have led to increasingly standard notation. We still aren’t totally consistent…but we do seem a lot more consistent than older papers, and I think arXiv is the reason why.

Most of String Theory Is Not String Pheno

Last week, Sabine Hossenfelder wrote a post entitled “Why not string theory?” In it, she argued that string theory has a much more dominant position in physics than it ought to: that it’s crowding out alternative theories like Loop Quantum Gravity and hogging much more funding than it actually merits.

If you follow the string wars at all, you’ve heard these sorts of arguments before. There’s not really anything new here.

That said, there were a few sentences in Hossenfelder’s post that got my attention, and inspired me to write this post.

So far, string theory has scored in two areas. First, it has proved interesting for mathematicians. But I’m not one to easily get floored by pretty theorems – I care about math only to the extent that it’s useful to explain the world. Second, string theory has shown to be useful to push ahead with the lesser understood aspects of quantum field theories. This seems a fruitful avenue and is certainly something to continue. However, this has nothing to do with string theory as a theory of quantum gravity and a unification of the fundamental interactions.

(Bolding mine)

Here, Hossenfelder explicitly leaves out string theorists who work on “lesser understood aspects of quantum field theories” from her critique. They’re not the big, dominant program she’s worried about.

What Hossenfelder doesn’t seem to realize is that right now, it is precisely the “aspects of quantum field theories” crowd that is big and dominant. The communities of string theorists working on something else, and especially those making bold pronouncements about the nature of the real world, are much, much smaller.

Let’s define some terms:

Phenomenology (or pheno for short) is the part of theoretical physics that attempts to make predictions that can be tested in experiments. String pheno, then, covers attempts to use string theory to make predictions. In practice, though, it’s broader than that: while some people do attempt to predict the results of experiments, more work on figuring out how models constructed by other phenomenologists can make sense in string theory. This still tests string theory in some sense: if a phenomenologist’s model turns out to be true, but can’t be replicated in string theory, then string theory would be falsified. That said, it’s more indirect. In parallel to string phenomenology, there is also the related field of string cosmology, which has a similar relationship with cosmology.

If other string theorists aren’t trying to make predictions, what exactly are they doing? Well, a large number of them are studying quantum field theories. Quantum field theories are currently our most powerful theories of nature, but there are many aspects of them that we don’t yet understand. For a large proportion of string theorists, string theory is useful because it provides a new way to understand these theories in terms of different configurations of string theory, which often uncovers novel and unexpected properties. This is still physics, not mathematics: the goal, in the end, is to understand theories that govern the real world. But it doesn’t involve the same sort of direct statements about the world as string phenomenology or string cosmology: crucially, it doesn’t depend on whether string theory is true.

Last week, I said that before replying to Hossenfelder’s post I’d have to gather some numbers. I was hoping to find some statistics on how many people work on each of these fields, or on their funding. Unfortunately, nobody seems to collect statistics broken down by sub-field like this.

As a proxy, though, we can look at conferences. Strings is the premier conference in string theory. If something has high status in the string community, it will probably get a talk at Strings. So to investigate, I took a look at the talks given last year, at Strings 2015, and broke them down by sub-field.

[Pie chart: talks at Strings 2015, broken down by sub-field]

Here I’ve left out the historical overview talks, since they don’t say much about current research.

“QFT” is for talks about lesser understood aspects of quantum field theories. Amplitudes, my own sub-field, should be part of this: I’ve separated it out to show what a typical sub-field of the QFT block might look like.

“Formal Strings” refers to research into the fundamentals of how to do calculations in string theory: in principle, both the QFT folks and the string pheno folks find it useful.

“Holography” is a sub-topic of string theory in which string theory in some space is equivalent to a quantum field theory on the boundary of that space. Some people study this because they want to learn about quantum field theory from string theory, others because they want to learn about quantum gravity from quantum field theory. Since the field can’t be cleanly divided into quantum gravity and quantum field theory research, I’ve given it its own category.

While all string theory research is in principle about quantum gravity, the “Quantum Gravity” section refers to people focused on the sorts of topics that interest non-string quantum gravity theorists, like black hole entropy.

Finally, we have String Cosmology and String Phenomenology, which I’ve already defined.

Don’t take the exact numbers here too seriously: not every talk fit cleanly into a category, so there were some judgement calls on my part. Nonetheless, this should give you a decent idea of the makeup of the string theory community.

The biggest wedge in the diagram by far, taking up a majority of the talks, is QFT. Throwing in Amplitudes (part of QFT) and Formal Strings (useful to both), and you’ve got two thirds of the conference. Even if you believe Hossenfelder’s tale of the failures of string theory, then, that only matters to a third of this diagram. And once you take into account that many of the Holography and Quantum Gravity people are interested in aspects of QFT as well, you’re looking at an even smaller group. Really, Hossenfelder’s criticism is aimed at two small slices on the chart: String Pheno, and String Cosmo.

Of course, string phenomenologists also have their own conference. It’s called String Pheno, and last year it had 130 participants. In contrast, LOOPS’ 2015, the conference for string theory’s most famous “rival”, had…190 participants. The fields are really pretty comparable.

Now, I have a lot more sympathy for the string phenomenologists and string cosmologists than I do for loop quantum gravity. If other string theorists felt the same way, then maybe that would cause the sort of sociological effect that Hossenfelder is worried about.

But in practice, I don’t think this happens. I’ve met string theorists who didn’t even know that people still did string phenomenology. The two communities are almost entirely disjoint: string phenomenologists and string cosmologists interact much more with other phenomenologists and cosmologists than they do with other string theorists.

You want to talk about sociology? Sociologically, people choose careers and fund research because they expect something to happen soon. People don’t want to be left high and dry by a dearth of experiments, don’t feel comfortable working on something that may only be vindicated long after they’re dead. Most people choose the safe option, the one that, even if it’s still aimed at a distant goal, is also producing interesting results now (aspects of quantum field theories, for example).

The people that don’t? Tend to form small, tight-knit, passionate communities. They carve out a few havens of like-minded people, and they think big thoughts while the world around them seems to only care about their careers.

If you’re a loop quantum gravity theorist, or a quantum gravity phenomenologist like Hossenfelder, and you see some of your struggles in that paragraph, please realize that string phenomenology is like that too.

I feel like Hossenfelder imagines a world in which string theory is struck from its high place, and alternative theories of quantum gravity are of comparable size and power. But from where I’m sitting, it doesn’t look like it would work out that way. Instead, you’d have alternatives grow to the same size as similarly risky parts of string theory, like string phenomenology. And surprise, surprise: they’re already that size.

In certain corners of the internet, people like to argue about “punching up” and “punching down”. Hossenfelder seems to think she’s “punching up”, giving the big dominant group a taste of its own medicine. But by leaving out string theorists who study QFTs, she’s really “punching down”, or at least sideways, and calling out a sub-group that doesn’t have much more power than her own.

Amplitudes for the New Year

Ah, the new year, time of new year’s resolutions. While some people resolve to go to the gym or take up online dating, physicists resolve to finally get that paper out.

At least, that’s the impression I get, given the number of papers posted to arXiv in the last month. Since a lot of them were amplitudes-related, I figured I’d go over some highlights.

Everyone once in a while people ask me for the latest news on the amplituhedron. While I don’t know what Nima is working on right now, I can point to what others have been doing. Zvi Bern, Jaroslav Trnka, and collaborators have continued to make progress towards generalizing the amplituhedron to non-planar amplitudes. Meanwhile, a group in Europe has been working on solving an issue I’ve glossed over to some extent. While the amplituhedron is often described as calculating an amplitude as the volume of a geometrical object, in fact there is a somewhat more indirect procedure involved in going from the geometrical object to the amplitude. It would be much simpler if the amplitude was actually the volume of some (different) geometrical object, and that’s what these folks are working towards. Finally, Daniele Galloni has made progress on solving a technical issue: the amplituhedron gives a mathematical recipe for the amplitude, but it doesn’t tell you how to carry out that recipe, and Galloni provides an algorithm for part of this process.

With this new algorithm, is the amplituhedron finally as efficient as older methods? Typically, the way to show that is to do a calculation with the amplituhedron that wasn’t possible before. It doesn’t look like that’s happening soon though, as Jake Bourjaily and collaborators compute an eight-loop integrand using one of the more successful of the older methods. Their paper provides a good answer to the perennial question, “why more loops?” What they find is that some of the assumptions that people made at lower loops fail to hold at this high loop order, and it becomes increasingly important to keep track of exactly how far your symmetries can take you.

Back when I visited Brown, I talked to folks there about some ongoing work. Now that they’ve published, I can talk about it. A while back, Juan Maldacena resurrected an old technique of Landau’s to solve a problem in AdS/CFT. In that paper, he suggested that Landau’s trick might help prove some of the impressive simplifications in N=4 super Yang-Mills that underlie my work and the work of those at Brown. In their new paper, the Brown group finds that, while useful, Landau’s trick doesn’t seem to fully explain the simplicity they’ve discovered. To get a little partisan, I have to say that this was largely the result I expected, and that it felt a bit condescending for Maldacena to assume that an old trick like that from the Feynman diagram era could really be enough to explain one of the big discoveries of amplitudeology.

There was also a paper by Freddy Cachazo and collaborators on an interesting trick to extend their CHY string to one-loop, and one by Bo Feng and collaborators on an intriguing new method called Q-cuts that I will probably say more about in future, but I’ll sign off for now. I’ve got my own new years’ physics resolutions, and I ought to get back to work!

Knowing Too Little, Knowing Too Much

(Commenter nueww has asked me to comment on the flurry of blog posts around an interview with Lance Dixon that recently went up on the SLAC website. I’m not going to comment on it until I have a chance to talk with Lance, beyond saying that this is a remarkable amount of attention paid to a fairly workaday organizational puff piece.)

I’ve been in Oregon this week, giving talks at Oregon State and at the University of Oregon. After my talk at Brown in front of some of the world’s top experts in my subfield, I’ve had to adapt quite a bit for these talks. Oregon State doesn’t have any particle theorists at all, while at the University of Oregon I gave a seminar for their Institute of Theoretical Science, which contains a mix of researchers ranging from particle theorists to theoretical chemists.

Guess which talk was harder to give?

If you guessed the UofO talk, you’re right. At Oregon State, I had a pretty good idea of everyone’s background. I knew these were people who would be pretty familiar with quantum mechanics, but probably wouldn’t have heard of Feynman diagrams. From that, I could build a strategy, and end up giving a pretty good talk.

At the University of Oregon, if I aimed for the particle physicists in the audience, I’d lose the chemists. So I should aim for the chemists, right?

That has its problems too. I’ve talked about some of them: the risk that the experts in your audience feel talked-down to, that you don’t cover the more important parts of your work. But there’s another problem, one that I noticed when I tried to prepare this talk: knowing too little can lead to misunderstandings, but so can knowing too much.

What would happen if I geared the talk completely to the chemists? Well, I’d end up being very vague about key details of what I did. And for the chemists, that would be fine: they’d get a flavor of what I do, and they’d understand not to read any more into it. People are pretty good at putting something in the “I don’t understand this completely” box, as long as it’s reasonably clearly labeled.

That vagueness, though, would be a disaster for the physicists in the audience. It’s not just that they wouldn’t get the full story: unless I was very careful, they’d end up actively misled. The same vague descriptions that the chemists would accept as “flavor”, the physicists would actively try to read for meaning. And with the relevant technical terms replaced with terms the chemists would recognize, they would end up with an understanding that would be actively wrong.

In the end, I ended up giving a talk mostly geared to the physicists, but with some background and vagueness to give the chemists some value. I don’t feel like I did as good of a job as I would like, and neither group really got as much out of the talk as I wanted them to. It’s tricky talking for a mixed audience, and it’s something I’m still learning how to do.

Scooped Is a Spectrum

I kind of got scooped recently.

I say kind of, because as I’ve been realizing being scooped isn’t quite the all-or-nothing thing you’d think it would be. Rather, being scooped is a spectrum.

Go ahead and scoop up a spectrum as you’re reading this.

By the way, I’m going to be a bit cagey about what exactly I got scooped on. As you’ll see, there are still a few things my collaborator and I need to figure out, and in the meantime I don’t want to put my foot in my mouth. Those of you who follow what’s going on in amplitudes might have some guesses. In case you’re worried, it has nothing to do with my work on Hexagon Functions.

When I heard about the paper that scooped us, my first reaction was to assume the project I’d been working on for a few weeks was now a dead end. When another group publishes the same thing you’ve been working on, and does it first, there doesn’t seem to be much you can do besides shake hands and move on.

As it turns out, though, things are a bit more complicated. The risk of publishing fast, after all, is making mistakes. In this case, it’s starting to look like a few of the obstructions that were holding us back weren’t solved by the other group, and in fact that they may have ignored those obstructions altogether in their rush to get something publishable.

This creates an interesting situation. It’s pretty clear the other group is beyond us in certain respects, they published first for a (good) reason. On the other hand, precisely because we’ve been slower, we’ve caught problems that it looks like the other group didn’t notice. Rather than rendering our work useless, this makes it that much more useful: complementing the other group’s work rather than competing with it.

Being scooped is a spectrum. If two groups are working on very similar things, then whoever publishes first usually wins. But if the work is different enough, then a whole range of roles opens up, from corrections and objections to extensions and completions. Being scooped doesn’t have to be the end of the world, in fact, it can be the beginning.

The Theorist Exclusion Principle

There are a lot of people who think theoretical physics has gone off-track, though very few of them agree on exactly how. Some think that string theory as a whole is a waste of time, others that the field just needs to pay more attention to their preferred idea. Some think we aren’t paying enough attention to the big questions, or that we’re too focused on “safe” ideas like supersymmetry, even when they aren’t working out. Some think the field needs less focus on mathematics, while others think it needs even more.

Usually, people act on these opinions by writing strongly worded articles and blog posts. Sometimes, they have more power, and act with money, creating grants and prizes that only go to their preferred areas of research.

Let’s put the question of whether the field actually needs to change aside for the moment. Even if it does, I’m skeptical that this sort of thing will have any real effect. While grants and blogs may be very good at swaying experimentalists, theorists are likely to be harder to shift, due to what I’m going to call the Theorist Exclusion Principle.

The Pauli Exclusion Principle is a rule from quantum mechanics that states that two fermions (particles with half-integer spin) can’t occupy the same state. Fermions include electrons, quarks, protons…essentially, all the particles that make up matter. Many people learn about the Pauli Exclusion Principle first in a chemistry class, where it explains why electrons fall into different energy levels in atoms: once one energy level “fills up”, no more electrons can occupy the same state, and any additional electrons are “excluded” and must occupy a different energy level.

Those 1s electrons are such a clique!
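The shell-filling rule above can be sketched in a few lines of code. This is my own toy illustration, not anything from the post: the `electron_configuration` helper and the Madelung filling order it uses are just a convenient way to show Pauli exclusion capping each subshell’s occupancy.

```python
# Toy sketch of the Pauli exclusion principle at work in atoms:
# each subshell (n, l) holds at most 2*(2l+1) electrons, because no
# two electrons may share the same quantum state (2 spin states per
# orbital, 2l+1 orbitals per subshell). Extra electrons are "excluded"
# into the next subshell up.
def electron_configuration(n_electrons):
    # Subshells sorted by (n + l, n): the Madelung filling order.
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(n)),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    letters = "spdfghi"  # conventional subshell labels by l
    config = []
    for n, l in subshells:
        if n_electrons <= 0:
            break
        capacity = 2 * (2 * l + 1)  # Pauli's cap on this subshell
        filled = min(capacity, n_electrons)
        config.append(f"{n}{letters[l]}{filled}")
        n_electrons -= filled
    return " ".join(config)

print(electron_configuration(8))   # oxygen → 1s2 2s2 2p4
```

Because the 1s subshell fills up at two electrons, oxygen’s remaining six are forced into 2s and 2p, which is exactly the “excluded into a different energy level” behavior described above.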

In contrast, bosons (like photons, or the Higgs) can all occupy the same state. It’s what allows for things like lasers, and it’s why all the matter we’re used to is made out of fermions: because fermions can’t occupy the same state as each other, as you add more fermions the structures they form have to become more and more complicated.

Experimentalists are a little like bosons. While you can’t stuff two experimentalists into the same quantum state, you can get them working on very similar projects. They can form large collaborations, with each additional researcher making the experiment that much easier. They can replicate each other’s work, making sure it was accurate. They can take some physical phenomenon and subject it to a battery of tests, so that someone is bound to learn something.

Theorists, on the other hand, are much more like fermions. In theory, there’s very little reason to work on something that someone else is already doing. Replication doesn’t mean very much: the purest theory involves mathematical proofs, where replication is essentially pointless. Theorists do form collaborations, but they don’t have the same need for armies of technicians and grad students that experimentalists do. With no physical objects to work on, there’s a limit to how much can be done pursuing one particular problem, and if there really are a lot of options they can be pursued by one person with a cluster.

Like fermions, then, theorists expand to fill the projects available. If an idea is viable, someone will probably work on it, and once they do, there isn’t much reason for someone else to do the same thing.

This makes theory a lot harder to influence than experiment. You can write the most beautiful thinkpiece possible to persuade theorists to study the deep questions of the universe, but if there aren’t any real calculations available nothing will change. Contrary to public perception, theoretical physicists aren’t paid to just sit around thinking all day: we calculate, compute, and publish, and if a topic doesn’t lend itself to that then we won’t get much mileage out of it. And no matter what you try to preferentially fund with grants, mostly you’ll just get people re-branding what they’re already doing, shifting a few superficial details to qualify.

Theorists won’t occupy the same states, so if you want to influence theorists you need to make sure there are open states where you’re trying to get them to go. Historically, theorists have shifted when new states have opened up: new data from experiment that needed a novel explanation, new mathematical concepts that opened up new types of calculations. You want there to be fewer string theorists, or more focus on the deep questions? Give us something concrete to do, and I guarantee you’ll get theorists flooding in.

Why I Spent Convergence Working

Convergence is basically Perimeter Institute Christmas.

This week, the building was dressed up in festive posters and elaborate chalk art, and filled with Perimeter’s many distant relations. Convergence is like a hybrid of an alumni reunion and a conference, where Perimeter’s former students and close collaborators come to hear talks about the glory of Perimeter and the marvels of its research.

Sponsored by the Bank of Montreal

And I attended none of those talks.

I led a discussion session on the first day of Convergence (which was actually pretty fun!), and I helped out in the online chat for the public lecture on Emmy Noether. But I didn’t register for the conference, and I didn’t take the time to just sit down and listen to a talk.

Before you ask, this isn’t because the talks are going to be viewable online. (Though they are, and I’d recommend watching a few if you’re in the mood for a fun physics talk.)

It’s partly to do with how general these talks are. Convergence is very broad: rather than being focused on a single topic, its goal is to bring people from very different sub-fields together, hopefully to spark new ideas. The result, though, is talks that are about as broad as you can get while still being directed at theoretical physicists. Most physics departments have talks like these once a week: they’re called colloquia. Perimeter has colloquia too, typically held in the very room the Convergence talks happened in. Some of the Convergence talks have already been given as colloquia! So part of my reluctance is the feeling that, if I haven’t seen these talks before, I probably will before too long.

The main reason, though, is work. I’ve been working on a fairly big project since shortly after I got to Perimeter. It’s an extension of my previous work, dealing with the next, more complicated step in the same calculation. And it’s kind of driving me nuts.

The thing is, we had almost all of what we needed around January. We’ve accomplished our main goal; we’ve got the result we were looking for. We just need to plot it, to get actual numbers out. And for some reason, that’s taken six months.

This week, I thought I had an idea that would make the calculation work. Rationally, I know I could have just taken the week to attend Convergence, and worked on the problem afterwards. We’ve waited six months; we can wait another week.

But that’s not why I do science. I do science to solve problems. And right here, in front of me, I had a problem that maybe I could solve. And I knew I wasn’t going to be able to focus on a bunch of colloquium talks with that sitting in the back of my mind.

So I skipped Convergence, and sat watching the calculation run again and again, each time trying to streamline it until it was fast enough to work properly. It hasn’t worked yet, but I’m so close. So I’m hoping.

Physics Is a Small World

Earlier this week, Vilhelm Bohr gave a talk at Perimeter about the life of his grandfather, the famous physicist Niels Bohr. The video of the talk doesn’t appear to be up on the Perimeter site yet, but it should be soon.

Until then, here is a picture of some eyebrows.

This talk was especially meaningful for me, because my family has a longstanding connection to the Bohrs. My great grandfather worked at the Niels Bohr Institute in the mid-1930s, and his children became good friends with Bohr’s grandchildren, often visiting each other even after my family relocated to the US.

These kinds of connections are more common in physics than you might think. Time and again I’m surprised by how closely linked people are in this field. There’s a guy here at Perimeter who went to school with Jaroslav Trnka, and a bunch of Israelis at nearby institutions all know each other from college. In my case, I went to high school with an unusually large number of mathematicians.

While it’s fun to see familiar faces, there’s a dark side to the connected nature of physics. So much of what it takes to succeed in academia involves knowing unwritten rules, as well as a wealth of other information that just isn’t widely known. Many people don’t even know it’s possible to have a career in physics, and I’ve met plenty who didn’t know that science grad schools pay your tuition. Academic families, and academic communities, have an enormous leg up on this kind of knowledge, so it’s not surprising that so many physicists come from so few sources.

Artificially limiting the pool of people who become physicists is bound to hurt us in the long run. Great insights often come from outsiders, like Hooke in the 17th century and Noether in the early 20th. If we can expand the reach of physics, make the unwritten rules written and the secret tricks revealed, if we work to make physics available to anyone who might be suited for it, then we can make sure that physics doesn’t end up a hereditary institution, with all the problems that entails.