Monthly Archives: April 2018

Seeing the Wires in Science Communication

Recently, I’ve been going to Science and Cocktails, a series of popular science lectures in Freetown Christiania. The atmosphere is great fun, but I’ve been finding the lectures themselves a bit underwhelming. It’s mostly my fault, though.

There’s a problem, common to all types of performing artists. Once you know the tricks that make a performance work, you can’t un-see them. Do enough theater and you can’t help but notice how an actor interprets their lines, or how they handle Shakespeare’s dirty jokes. Play an instrument, and you think about how they made that sound, or when they pause for breath. Work on action movies, and you start to see the wires.

This has been happening to me with science communication. Going to the Science and Cocktails lectures, I keep seeing the tricks the speaker used to make the presentation easier. I notice the slides that were probably copied from the speaker’s colloquiums, sometimes without adapting them to the new audience. I notice when an example doesn’t really fit the narrative, but is wedged in there anyway because the speaker wants to talk about it. I notice filler, like a recent speaker who spent several slides on the history of electron microscopes, starting with Hooke!

I’m not claiming I’m a better speaker than these people. The truth is, I notice these tricks because I’ve been guilty of them myself! I reuse slides, I insert pet topics, I’ve had talks that were too short until I added a long historical section.

And overall, it doesn’t seem to matter. The audience doesn’t notice our little shortcuts, just like they didn’t notice the wires in old kung-fu movies. They’re there for the magic of the performance; they want to be swept away by a good story.

I need to reconnect with that. It’s still important to avoid using blatant tricks, to cover up the wires and make things that much more seamless. But in the end, what matters is whether the audience learned something, and whether they had a good time. I need to watch not just the tricks, but the magic: what makes the audience’s eyes light up, what makes them laugh, what makes them think. I need to stop griping about the wires, and start seeing the action.

Bubbles of Nothing

I recently learned about a very cool concept, called a bubble of nothing.

Read about physics long enough, and you’ll hear all sorts of cosmic disaster scenarios. If the Higgs vacuum decays, and the Higgs field switches to a different value, then the masses of most fundamental particles would change. It would be the end of physics, and life, as we know it.

A bubble of nothing is even more extreme. In a bubble of nothing, space itself ceases to exist.

The idea was first explored by Witten in 1982. Witten started with a simple model, a world with our four familiar dimensions of space and time, plus one curled-up extra dimension. What he found was that this simple world is unstable: quantum mechanics (and, as was later found, thermodynamics) lets it “tunnel” to another world, one that contains a small “bubble”, a sphere in which nothing at all exists.


Except perhaps the Nowhere Man

A bubble of nothing might sound like a black hole, but it’s quite different. Throw a particle into a black hole and it will fall in, never to return. Throw it into a bubble of nothing, though, and something more interesting happens. As you get closer, the extra dimension of space gets smaller and smaller. Eventually, it stops, smoothly closing off. The particle you threw in will just bounce back, smoothly, off the outside of the bubble. Essentially, it reached the edge of the universe.

The bubble starts out small, comparable to the size of the curled-up dimension. But it doesn’t stay that way. In Witten’s setup, the bubble grows, faster and faster, until it’s moving at the speed of light, erasing the rest of the universe from existence.
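For the mathematically inclined, the geometry can be made concrete. Witten’s bubble comes from analytically continuing the five-dimensional Euclidean Schwarzschild solution; in the conventions I remember (worth checking against the original paper), the metric is

```latex
\mathrm{d}s^2 \;=\; \left(1-\frac{R^2}{r^2}\right)\mathrm{d}\phi^2
\;+\; \left(1-\frac{R^2}{r^2}\right)^{-1}\mathrm{d}r^2
\;+\; r^2\,\mathrm{d}s^2_{\mathrm{dS}_3}
```

where $\phi$ is the curled-up dimension, with circumference $2\pi R$ far from the bubble. At $r = R$ the circle smoothly pinches off, and there is simply no space with $r < R$: that’s the “nothing”. The $\mathrm{dS}_3$ factor, three-dimensional de Sitter space, describes the bubble wall, whose expansion asymptotically approaches the speed of light.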

You probably shouldn’t worry about this happening to us. As far as I’m aware, nobody has written down a realistic model that can transform into a bubble of nothing.

Still, it’s an evocative concept, and one I’m surprised isn’t used more often in science fiction. I could see writers using a bubble of nothing as a risk from an experimental FTL drive, or using a stable (or slowly growing) bubble as the relic of some catastrophic alien war. The idea of a bubble of literal nothing is haunting enough that it ought to be put to good use.

By Any Other Author Would Smell as Sweet

I was chatting with someone about this paper (which probably deserves a post in its own right, once I figure out an angle that isn’t just me geeking out about how much I could do with their new setup), and I referred to it as “Claude’s paper”. This got me chided a bit: the paper has five authors, experts on Feynman diagrams and elliptic integrals. It’s not just “Claude’s paper”. So why do I think of it that way?

Part of it, I think, comes from the experience of reading a paper. We want to think of a paper as a speech act: someone talking to us, explaining something, leading us through a calculation. Our brain models that as a conversation with a single person, so we naturally try to put a single face to a paper. With a collaborative paper, this is almost never how it was written: different sections are usually written by different people, who then edit each other’s work. But unless you know the collaborators well, you aren’t going to know who wrote which section, so it’s easier to just picture one author for the whole thing.

Another element comes from how I think about the field. Just as it’s easier to think of a paper as the speech of one person, it’s easier to think of new developments as continuations of a story. I at least tend to think about the field in terms of specific programs: these people worked on this, which is a continuation of that. You can follow those kinds of threads through the field, but in reality they’re tangled together: collaborations are an opportunity for two programs to meet. In other fields you might have a “first author” to default to, but in theoretical physics we normally write authors alphabetically. For “Claude’s paper”, it just feels like the sort of thing I’d expect Claude Duhr to write, like a continuation of the other things he’s known for, even if it couldn’t have existed without the other four authors.

You might worry that associating papers with people like this takes away deserved credit. I don’t think it’s quite that simple, though. In an older post I described this paper as the work of Anastasia Volovich and Mark Spradlin. On some level, that’s still how I think about it. Nevertheless, when I heard that Cristian Vergu was going to be at the Niels Bohr Institute next year, I was excited: we’re hiring one of the authors of GSVV! Even if I don’t think of him immediately when I think of the paper, I think of the paper when I think of him.

That, I think, is more important for credit. If you’re a hiring committee, you’ll start out by seeing names of applicants. It’s important, at that point, that you know what they did, that the authors of important papers stand out, that you assign credit where it’s due. It’s less necessary on the other end, when you’re reading a paper and casually classify it in your head.

Nevertheless, I should be more careful about credit. It’s important to remember that “Claude Duhr’s paper” is also “Johannes Broedel’s paper” and “Falko Dulat’s paper”, “Brenda Penante’s paper” and “Lorenzo Tancredi’s paper”. It gives me more of an appreciation of where it comes from, so I can get back to having fun applying it.

A Paper About Ranking Papers

If you’ve ever heard someone list problems in academia, citation-counting is usually near the top. Hiring and tenure committees want easy numbers to judge applicants with: number of papers, number of citations, or related statistics like the h-index. Unfortunately, these metrics can be gamed, leading to a host of bad practices that get blamed for pretty much everything that goes wrong in science. In physics, it’s not even clear that these statistics tell us anything: papers in our field have been including more citations over time, and for thousand-person experimental collaborations the number of citations and papers don’t really reflect any one person’s contribution.

It’s pretty easy to find people complaining about this. It’s much rarer to find a proposed solution.

That’s why I quite enjoyed Alessandro Strumia and Riccardo Torre’s paper last week, on Biblioranking fundamental physics.

Some of their suggestions are quite straightforward. With the number of citations per paper increasing, it makes sense to divide the weight of each citation by the number of references in the citing paper: a citation from a paper with ten references means more than one from a paper with a hundred. Similarly, you could divide credit for a paper among its authors, rather than giving each author full credit.

Some are more elaborate. They suggest using a variant of Google’s PageRank algorithm to rank papers and authors. Essentially, the algorithm imagines someone wandering from paper to paper and tries to figure out which papers are more central to the network. This is apparently an old idea, but by combining it with their normalization by number of citations they eke out a bit more mileage from it. (I also found their treatment a bit clearer than the older papers they cite. There are a few more elaborate setups in the literature as well, but they seem to have a lot of free parameters so Strumia and Torre’s setup looks preferable on that front.)
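The random-reader picture can be sketched with a small power iteration. This is again my own toy version, not Strumia and Torre’s implementation: a reader follows a random reference of the current paper with some probability, and otherwise jumps to a random paper.

```python
def citation_pagerank(papers, damping=0.85, iters=100):
    """Rank papers by how often a 'random reader' would visit them.

    papers: dict mapping each paper to the list of papers it cites.
    With probability `damping` the reader follows a random reference;
    otherwise (or if the paper cites nothing) they restart at random.
    """
    names = list(papers)
    n = len(names)
    rank = {p: 1.0 / n for p in names}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in names}
        for p in names:
            refs = papers[p]
            if refs:
                share = damping * rank[p] / len(refs)
                for q in refs:
                    new[q] += share
            else:
                # Papers with no references: spread their weight uniformly.
                for q in names:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

On a toy graph where paper A is cited by both B and C, and B is cited only by C, the iteration settles on A above B above C, matching the intuition that widely cited papers are more central.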

One final problem they consider is that of self-citations, and citation cliques. In principle, you could boost your citation count by citing yourself. While that’s easy to correct for, you could also be one of a small number of authors who cite each other a lot. To keep the system from being gamed in this way, they propose a notion of a “CitationCoin” that counts (normalized) citations received minus (normalized) citations given. The idea is that, just as you can’t make anyone richer by passing money between your friends without doing anything with it, a small community can’t earn “CitationCoins” without getting the wider field interested.
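In code, my loose reading of that proposal (not their exact definition) looks like a zero-sum ledger: each paper with references gives away one unit in total, spread over its references, and the “coin” is what it receives minus what it gives.

```python
def citation_coin(papers):
    """CitationCoin ~ normalized citations received minus citations given.

    papers: dict mapping each paper to the list of papers it cites.
    Each citing paper gives away one unit total, split over its references,
    so the coins always sum to zero: a closed clique gains nothing.
    """
    received = {p: 0.0 for p in papers}
    given = {p: 0.0 for p in papers}
    for citing, refs in papers.items():
        if refs:
            weight = 1.0 / len(refs)
            given[citing] += 1.0
            for cited in refs:
                received[cited] += weight
    return {p: received[p] - given[p] for p in papers}

# Two papers citing only each other: everything cancels, both end at 0.0.
clique = citation_coin({"X": ["Y"], "Y": ["X"]})
```

The clique example is the point: mutual citation moves coins in a circle, so nobody in the clique ends up ahead.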

There are still likely problems with these ideas. Dividing each paper by its number of authors seems like overkill: a thousand-person paper is not typically going to get a thousand times as many citations. I also don’t know whether there are ways to game this system: since the metrics are based in part on citations given, not just citations received, I worry there are situations where it would be to someone’s advantage to cite others less. I think they manage to avoid this by normalizing by number of citations given, and they emphasize that PageRank itself is estimating something we directly care about: how often people read a paper. Still, it would be good to see more rigorous work probing the system for weaknesses.

In addition to the proposed metrics, Strumia and Torre’s paper is full of interesting statistics about the arXiv and InSpire databases, both using more traditional metrics and their new ones. Whether or not the methods they propose work out, the paper is definitely worth a look.