A Field That Doesn’t Read Its Journals

Last week, the University of California system ended negotiations with Elsevier, one of the top academic journal publishers. UC had been trying to get Elsevier to switch to a new type of contract, one in which, instead of paying for access to journals, universities pay for their faculty to publish, then make all the results openly accessible to the public. In the end they couldn’t reach an agreement and thus didn’t renew their contract, cutting Elsevier off from millions of dollars and their faculty from reading certain (mostly recent) Elsevier journal articles. There’s a nice interview here with one of the librarians who was sent to negotiate the deal.

I’m optimistic about what UC was trying to do. Their proposal sounds like it addresses some of the concerns raised here with open-access systems. Currently, journals that offer open access often charge fees directly to the scientists publishing in them, fees that have to be scrounged up from somebody’s grant at the last minute. By setting up a deal for all their faculty together, UC would have avoided that. While the deal fell through, having an organization as big as the whole University of California system advocating open access (and putting the squeeze on Elsevier’s profits) seems like it can only lead to progress.

The whole situation feels a little surreal, though, when I compare it to my own field.

At the risk of jinxing it, my field’s relationship with journals is even weirder than xkcd says.

arXiv.org is a website that hosts what are called “preprints”, which originally meant papers that haven’t been published yet. They’re online, freely accessible to anyone who wants to read them, and will be for as long as arXiv exists to host them. Essentially everything anyone publishes in my field ends up on arXiv.

Journals don’t mind, in part, because many of them are open-access anyway. There’s an organization, SCOAP3, that runs what is in some sense a large-scale version of what UC was trying to set up: instead of paying for subscriptions, university libraries pay SCOAP3 and it covers the journals’ publication costs.

This means that there are two coexisting open-access systems, the journals themselves and arXiv. But in practice, arXiv is the one we actually use.

If I want to show a student a paper, I don’t send them to the library or the journal website, I tell them how to find it on arXiv. If I’m giving a talk, there usually isn’t room for a journal reference, so I’ll give the arXiv number instead. In a paper, we do give references to journals…but they’re most useful when they have arXiv links as well. I think the only times I’ve actually read an article in a journal were for articles so old that arXiv didn’t exist when they were published.

We still submit our papers to journals, though. Peer review still matters, we still want to determine whether our results are cool enough for the fancy journals or only good enough for the ordinary ones. We still put journal citations on our CVs so employers and grant agencies know not only what we’ve done, but which reviewers liked it.

But the actual copy-editing and formatting and publishing, that the journals still employ people to do? Mostly, it never gets read.

In my experience, that editing isn’t too impressive. Often, it’s about changing things to fit the journal’s preferences: its layout, its conventions, its inconvenient proprietary document formats. I haven’t seen them try to fix grammar, or improve phrasing. Maybe my papers have unusually good grammar, maybe they do more for other papers. And maybe they used to do more, when journals had a more central role. But now, they don’t change much.

Sometimes the journal version ends up on arXiv, if the authors put it there. Sometimes it doesn’t. And sometimes the result is in between. For my last paper about Calabi-Yau manifolds in Feynman diagrams, we got several helpful comments from the reviewers, but the journal also weighed in to get us to remove our more whimsical language, down to the word “bestiary”. For the final arXiv version, we updated for the reviewer comments, but kept the whimsical words. In practice, that version is the one people in our field will read.

This has some awkward effects. It means that sometimes important corrections don’t end up on arXiv, and people don’t see them. It means that technically, if someone wanted to insist on keeping an incorrect paper online, they could, even if a corrected version was published in a journal. And of course, it means that a large amount of effort is dedicated to publishing journal articles that very few people read.

I don’t know whether other fields could get away with this kind of system. Physics is small. It’s small enough that it’s not so hard to get corrections from authors when one needs to, small enough that social pressure can get wrong results corrected. It’s small enough that arXiv and SCOAP3 can exist, funded by universities and private foundations. A bigger field might not be able to do any of that.

For physicists, we should keep in mind that our system can and should still be improved. For other fields, it’s worth considering whether you can move in this direction, and what it would cost to do so. Academic publishing is in a pretty bizarre place right now, but hopefully we can get it to a better one.

What Science Would You Do If You Had the Time?

I know a lot of people who worry about the state of academia. They worry that the competition for grants and jobs has twisted scientists’ priorities, that the sort of dedicated research of the past, sitting down and thinking about a topic until you really understand it, just isn’t possible anymore. The timeline varies: there are people who think the last really important development was the Standard Model, or the top quark, or AdS/CFT. Even more optimistic people, who think physics is still just as great as it ever was, often complain that they don’t have enough time.

Sometimes I wonder what physics would be like if we did have the time. If we didn’t have to worry about careers and funding, what would we do? I can speculate, comparing to different communities, but here I’m interested in something more concrete: what, specifically, could we accomplish? I often hear people complain that the incentives of academia discourage deep work, but I don’t often hear examples of the kind of deep work that’s being discouraged.

So I’m going to try an experiment here. I know I have a decent number of readers who are scientists of one field or another. Imagine you didn’t have to worry about funding any more. You’ve got a permanent position, and what’s more, your favorite collaborators do too. You don’t have to care about whether your work is popular, whether it appeals to the university or the funding agencies or any of that. What would you work on? What projects would you personally do, that you don’t have the time for in the current system? What worthwhile ideas has modern academia left out?

Congratulations to Arthur Ashkin, Gérard Mourou, and Donna Strickland!

The 2018 Physics Nobel Prize was announced this week, awarded to Arthur Ashkin, Gérard Mourou, and Donna Strickland for their work in laser physics.

Some Nobel prizes recognize discoveries of the fundamental nature of reality. Others recognize the tools that make those discoveries possible.

Ashkin developed techniques that use lasers to hold small objects in place, culminating in “optical tweezers” that can pick up and move individual bacteria. Mourou and Strickland developed chirped pulse amplification, the current state of the art in extremely high-power lasers. Strickland is only the third woman to win the Nobel prize in physics, and Ashkin, at 96, is the oldest person ever to win the prize.

(As an aside, the phrase “optical tweezers” probably has you imagining two beams of laser light pinching a bacterium between them, like microscopic lightsabers. In fact, optical tweezers use a single beam, focused and bent so that if an object falls out of place it will gently roll back to the middle of the beam. Instead of tweezers, it’s really more like a tiny laser spoon.)

The Nobel announcement emphasizes practical applications, like eye surgery. It’s important to remember that these are research tools as well. I wouldn’t have recognized the names of Ashkin, Mourou, and Strickland, but I recognized atom trapping, optical tweezers, and ultrashort pulses. Hang around atomic physicists, or quantum computing experiments, and these words pop up again and again. These are essential tools that have given rise to whole subfields. LIGO won a Nobel based on the expectation that it would kick-start a vast new area of research. Ashkin, Mourou, and Strickland’s work already has.

Don’t Marry Your Arbitrary

This fall, I’m TAing a course on General Relativity. I haven’t taught in a while, so it’s been a good opportunity to reconnect with how students think.

This week, one problem left several students confused. The problem involved Christoffel symbols, the bane of many a physics grad student, but the trick that they had to use was in the end quite simple. It’s an example of a broader trick, a way of thinking about problems that comes up all across physics.

To see a simplified version of the problem, imagine you start with this sum:

g(j) = \sum_{i=0}^n \left( f(i,j) - f(j,i) \right)

Now, imagine you want to sum the function g(j) over j. You can write:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n \left( f(i,j) - f(j,i) \right)

Let’s break this up into two terms, for later convenience:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{j=0}^n \sum_{i=0}^n f(j,i)

Without telling you anything about f(i,j), what do you know about this sum?

Well, one thing you know is that i and j are arbitrary.

i and j are letters you happened to use. You could have used different letters, x and y, or \alpha and \beta. You could even use different letters in each term, if you wanted to. You could even just pick one term, and swap i and j.

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{i=0}^n \sum_{j=0}^n f(i,j) = 0

And now, without knowing anything about f(i,j), you know that \sum_{j=0}^n g(j) is zero.
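The trick is easy to check numerically. Here’s a quick sketch in Python, with an arbitrarily chosen f (any function of two indices will do):

```python
# Numerical check of the index-swap trick: summing f(i, j) - f(j, i)
# over all i, j in the same range always gives zero, because relabeling
# the dummy indices i <-> j in the second term makes the two double
# sums identical.

def g(f, j, n):
    """The inner sum g(j) = sum_{i=0}^{n} (f(i, j) - f(j, i))."""
    return sum(f(i, j) - f(j, i) for i in range(n + 1))

def total(f, n):
    """The full double sum sum_{j=0}^{n} g(j)."""
    return sum(g(f, j, n) for j in range(n + 1))

# Try it with an arbitrary, asymmetric choice of f:
f = lambda i, j: i**2 * j + 3 * i - j
print(total(f, 10))  # prints 0, whatever f and n you pick
```

The individual g(j) are generally nonzero; it’s only the full sum over j, where i and j range over the same values, that has to vanish.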

In physics, it’s extremely important to keep track of what could be really physical, and what is merely your arbitrary choice. In general relativity, your choice of polar versus spherical coordinates shouldn’t affect your calculation. In quantum field theory, your choice of gauge shouldn’t matter, and neither should your scheme for regularizing divergences.

Ideally, you’d do your calculation without making any of those arbitrary choices: no coordinates, no choice of gauge, no regularization scheme. In practice, sometimes you can do this, sometimes you can’t. When you can’t, you need to keep that arbitrariness in the back of your mind, and not get stuck assuming your choice was the only one. If you’re careful with arbitrariness, it can be one of the most powerful tools in physics. If you’re not, you can stare at a mess of Christoffel symbols for hours, and nobody wants that.

Different Fields, Different Worlds

My grandfather is a molecular biologist. When we meet, we swap stories: the state of my field and his, different methods and focuses but often a surprising amount of common ground.

Recently he forwarded me an article by Raymond Goldstein, a biological physicist, arguing that biologists ought to be more comfortable with physical reasoning. The article is interesting in its own right, contrasting how physicists and biologists think about the relationship between models, predictions, and experiments. But what struck me most about the article wasn’t the content, but the context.

Goldstein’s article focuses on a question that seemed to me oddly myopic: should physical models be in the Results section, or the Discussion section?

As someone who has never written a paper with either a Results section or a Discussion section, I wondered why anyone would care. In my field, paper formats are fairly flexible. We usually have an Introduction and a Conclusion, yes, but in between we use however many sections we need to explain what we need to. In contrast, biology papers seem to have a very fixed structure: after the Introduction, there’s a Results section, a Discussion section, and a Materials and Methods section at the end.

At first blush, this seemed incredibly bizarre. Why describe your results before the methods you used to get them? How do you talk about your results without discussing them, but still take a full section to do it? And why do reviewers care how you divide things up in the first place?

It made a bit more sense once I thought about how biology differs from theoretical physics. In theoretical physics, the “methods” are most of the result: unsolved problems are usually unsolved because existing methods don’t solve them, and we need to develop new methods to make progress. Our “methods”, in turn, are often the part of the paper experts are most eager to read. In biology, in contrast, the methods are much more standardized. While papers will occasionally introduce new methods, there are so many unexplored biological phenomena that most of the time researchers don’t need to invent a new method: just asking a question no-one else has asked can be enough for a discovery. In that environment, the “results” matter a lot more: they’re the part that takes the most scrutiny, that needs to stand up on its own.

I can even understand the need for a fixed structure. Biology is a much bigger field than theoretical physics. My field is small enough that we all pretty much know each other. If a paper is hard to read, we’ll probably get a chance to ask the author what they meant. Biology, in contrast, is huge. An important result could come from anywhere, and anyone. Having a standardized format makes it a lot easier to scan through an unfamiliar paper and find what you need, especially when there might be hundreds of relevant papers.

The problem with a standardized system, as always, is the existence of exceptions. A more “physics-like” biology paper is more readable with “physics-like” conventions, even if the rest of the field needs to stay “biology-like”. Because of that, I have a lot of sympathy for Goldstein’s argument, but I can’t help but feel that he should be asking for more. If creating new mathematical models and refining them with observation is at the heart of what Goldstein is doing, then maybe he shouldn’t have to use Results/Discussion/Methods in the first place. Maybe he should be allowed to write biology papers that look more like physics papers.

Conferences Are Work! Who Knew?

I’ve been traveling for over a month now, from conference to conference, with a bit of vacation thrown in at the end.

(As such, I haven’t had time to read up on the recent announcement of the detection of neutrinos and high-energy photons from a blazar; Matt Strassler has a nice piece on it.)

One thing I didn’t expect was how exhausting going to three conferences in a row would be. I didn’t give any talks this time around, so I thought I was skipping the “work” part. But sitting in a room for talk after talk, listening and taking notes, turns out to still be work! There’s effort involved in paying attention, especially in a scientific talk where the details matter. You assess the talks in your head, turning concepts around and thinking about what you might do with them. It’s the kind of thing you don’t notice for a seminar or two, but at a conference, after a while, it really builds up. After three, let’s just say I’ve really needed this vacation. I’ll be back at work next week, and maybe I’ll have a longer blog post for you folks. Until then, I ought to get some rest!

Why Physicists Leave Physics

It’s an open secret that many physicists end up leaving physics. How many depends on how you count things, but for a representative number, this report has 31% of US physics PhDs in the private sector after one year. I’d expect that number to grow with time post-PhD. While some of these people might still be doing physics, in certain sub-fields that isn’t really an option: it’s not like there are companies that do R&D in particle physics, astrophysics, or string theory. Instead, these physicists get hired in data science, or quantitative finance, or machine learning. Others stay in academia, but stop doing physics: either transitioning to another field, or taking teaching-focused jobs that don’t leave time for research.

There’s a standard economic narrative for why this happens. The number of students grad schools accept and graduate is much higher than the number of professor jobs. There simply isn’t room for everyone, so many people end up doing something else instead.

That narrative is probably true, if you zoom out far enough. On the ground, though, the reasons people leave academia don’t feel quite this “economic”. While they might be indirectly based on a shortage of jobs, the direct reasons matter. Physicists leave physics for a wide variety of reasons, and many of them are things the field could improve on. Others are factors that will likely be present regardless of how many students graduate, or how many jobs there are. I worry that an attempt to address physics attrition on a purely economic level would miss these kinds of details.

I thought I’d talk in this post about a few reasons why physicists leave physics. Most of this won’t be new information to anyone, but I hope some of it is at least a new perspective.

First, to get it out of the way: almost no-one starts a physics PhD with the intention of going into industry. I’ve met a grand total of one person who did, and he’s rather unusual. Almost always, leaving physics represents someone’s dreams not working out.

Sometimes, that just means realizing you aren’t suited for physics. These are people who feel like they aren’t able to keep up with the material, or people who find they aren’t as interested in it as they expected. In my experience, people realize this sort of thing pretty early. They leave in the middle of grad school, or they leave once they have their PhD. In some sense, this is the healthy sort of attrition: without the ability to perfectly predict our interests and abilities, there will always be people who start a career and then decide it’s not for them.

I want to distinguish this from a broader reason to leave, disillusionment. These are people who can do physics, and want to do physics, but encounter a system that seems bent on making them do anything but. Sometimes this means disillusionment with the field itself: phenomenologists sick of tweaking models to lie just beyond the latest experimental bounds, or theorists who had hoped to address the real world but begin to see that they can’t. This kind of motivation lay behind several great atomic physicists going into biology after the Second World War, to work on “life rather than death”. Sometimes instead it’s disillusionment with academia: people who have been bludgeoned by academic politics or bureaucracy, who despair of getting the academic system to care about real research or teaching instead of its current screwed-up priorities, or who just don’t want to face that kind of abuse again.

When those people leave, it’s at every stage in their career. I’ve seen grad students disillusioned into leaving without a PhD, and successful tenured professors who feel like the field no longer has anything to offer them. While occasionally these people just have a difference of opinion, a lot of the time they’re pointing out real problems with the system, problems that actually should be fixed.

Sometimes, life intervenes. The classic example is the two-body problem, where you and your spouse have trouble finding jobs in the same place. There aren’t all that many places in the world that hire theoretical physicists, and still fewer with jobs open. One or both partners end up needing to compromise, and that can mean switching to a career with a bit more choice in location. People also move to take care of their parents, or because of other connections.

This seems closer to the economic picture, but I don’t think it quite lines up. Even if there were a lot fewer physicists applying for the same number of jobs, it’s still not certain that there’s a job where you want to live, specifically. You’d still end up with plenty of people leaving the field.

A commenter here frequently asks why physicists have to travel so much. Especially for a theorist, why can’t we just work remotely? With current technology, shouldn’t that be pretty easy to do?

I’ve done a lot of remote collaboration, it’s not impossible. But there really isn’t a substitute for working in the same place, for being able to meet someone in the hall and strike up a conversation around a blackboard. Remote collaborations are an ok way to keep a project going, but a rough way to start one. Institutes realize this, which is part of why most of the time they’ll only pay you a salary if they think you’re actually going to show up.

Could I imagine this changing? Maybe. The technology doesn’t exist right now, but maybe someday someone will design a social network with the right features, one where you can strike up and work on collaborations as naturally as you can in person. Then again, maybe I’m silly for imagining a technological solution to the problem in the first place.

What about more direct economic reasons? What about when people leave because of the academic job market itself?

This certainly happens. In my experience though, a lot of the time it’s pre-emptive. You’d think that people would apply for academic jobs, get rejected, and quit the field. More often, I’ve seen people notice the competition for jobs and decide at the outset that it’s not worth it for them. Sometimes this happens right out of grad school. Other times it’s later. In the latter case, these are often people who are “keeping up”, in that their career is moving roughly as fast as everyone else’s. Rather, it’s the stress, of keeping ahead of the field and marketing themselves and applying for every grant in sight and worrying that it could come crashing down any moment, that ends up too much to deal with.

What about the people who do get rejected over and over again?

Physics, like life in Jurassic Park, finds a way. Surprisingly often, these people manage to stick around. Without faculty positions they scrabble up postdoc after postdoc, short-term position after short-term position. They fund their way piece by piece, grant by grant. Often they get depressed, and cynical, and pissed off, and insist that this time they’re just going to quit the field altogether. But from what I’ve seen, once someone is that far in, they often don’t go through with it.

If fewer people went to physics grad school, or more professors were hired, would fewer people leave physics? Yes, absolutely. But there’s enough going on here, enough different causes and different motivations, that I suspect things wouldn’t work out quite as predicted. Some attrition is here to stay, some is independent of the economics. And some, perhaps, is due to problems we ought to actually solve.

Why Your Idea Is Bad

By A. Physicist

 

Your idea is bad…

 

…because it disagrees with precision electroweak measurements

…………………………………..with bounds from ATLAS and CMS

…………………………………..with the power spectrum of the CMB

…………………………………..with Eötvös experiments

…because it isn’t gauge invariant

………………………….Lorentz invariant

………………………….diffeomorphism invariant

………………………….background-independent, whatever that means

…because it violates unitarity

…………………………………locality

…………………………………causality

…………………………………observer-independence

…………………………………technical naturalness

…………………………………international treaties

…………………………………cosmic censorship

…because you screwed up the calculation

…because you didn’t actually do the calculation

…because I don’t understand the calculation

…because you predict too many magnetic monopoles

……………………………………too many proton decays

……………………………………too many primordial black holes

…………………………………..remnants, at all

…because it’s fine-tuned

…because it’s suspiciously finely-tuned

…because it’s finely tuned to be always outside of experimental bounds

…because you’re misunderstanding quantum mechanics

…………………………………………………………..black holes

………………………………………………………….effective field theory

…………………………………………………………..thermodynamics

…………………………………………………………..the scientific method

…because Condensed Matter would contribute more to Chinese GDP

…because the approximation you’re making is unjustified

…………………………………………………………………………is not valid

…………………………………………………………………………is wildly overoptimistic

………………………………………………………………………….is just kind of lazy

…because there isn’t a plausible UV completion

…because you care too much about the UV

…because it only works in polynomial time

…………………………………………..exponential time

…………………………………………..factorial time

…because even if it’s fast it requires more memory than any computer on Earth

…because it requires more bits of memory than atoms in the visible universe

…because it has no meaningful advantages over current methods

…because it has meaningful advantages over my own methods

…because it can’t just be that easy

…because it’s not the kind of idea that usually works

…because it’s not the kind of idea that usually works in my field

…because it isn’t canonical

…because it’s ugly

…because it’s baroque

…because it ain’t baroque, and thus shouldn’t be fixed

…because only a few people work on it

…because far too many people work on it

…because clearly it will only work for the first case

……………………………………………………………….the first two cases

……………………………………………………………….the first seven cases

……………………………………………………………….the cases you’ve published and no more

…because I know you’re wrong

…because I strongly suspect you’re wrong

…because I strongly suspect you’re wrong, but saying I know you’re wrong looks better on a grant application

…….in a blog post

…because I’m just really pessimistic about something like that ever actually working

…because I’d rather work on my own thing, that I’m much more optimistic about

…because if I’m clear about my reasons

……and what I know

…….and what I don’t

……….then I’ll convince you you’re wrong.

 

……….or maybe you’ll convince me?

 

The Way You Think Everything Is Connected Isn’t the Way Everything Is Connected

I hear it from older people, mostly.

“Oh, I know about quantum physics, it’s about how everything is connected!”

“String theory: that’s the one that says everything is connected, right?”

“Carl Sagan said we are all stardust. So really, everything is connected.”


It makes Connect Four a lot easier anyway

I always cringe a little when I hear this. There’s a misunderstanding here, but it’s not a nice clean one I can clear up in a few sentences. It’s a bunch of interconnected misunderstandings, mixing some real science with a lot of confusion.

To get it out of the way first, no, string theory is not about how “everything is connected”. String theory describes the world in terms of strings, yes, but don’t picture those strings as links connecting distant places: string theory’s proposed strings are very, very short, much smaller than the scales we can investigate with today’s experiments. The reason they’re thought to be strings isn’t because they connect distant things, it’s because being strings lets them wiggle (counteracting some troublesome wiggles in quantum gravity) and wind (curling up in six extra dimensions in a multitude of ways, giving us what looks like a lot of different particles).

(Also, for technical readers: yes, strings also connect branes, but that’s not the sort of connection these people are talking about.)

What about quantum mechanics?

Here’s where it gets trickier. In quantum mechanics, there’s a phenomenon called entanglement. Entanglement really does connect things in different places…for a very specific definition of “connect”. And there’s a real (but complicated) sense in which these connections end up connecting everything, which you can read about here. There’s even speculation that these sorts of “connections” in some sense give rise to space and time.

You really have to be careful here, though. These are connections of a very specific sort. Specifically, they’re the sort that you can’t do anything through.

Connect two cans with a length of string, and you can send messages between them. Connect two particles with entanglement, though, and you can’t send messages between them…at least not any faster than between two non-entangled particles. Even in a quantum world, physics still respects locality: the principle that you can only affect the world where you are, and that any changes you make can’t travel faster than the speed of light. Ansibles, science-fiction devices that communicate faster than light, can’t actually exist according to our current knowledge.

What kind of connection is entanglement, then? That’s a bit tricky to describe in a short post. One way to think about entanglement is as a connection of logic.

Imagine someone takes a coin and cuts it along the rim into a heads half and a tails half. They put the two halves in two envelopes, and randomly give you one. You don’t know whether you have heads or tails…but you know that if you open your envelope and it shows heads, the other envelope must have tails.


Unless they’re a spy. Then it could contain something else.

Entanglement starts out with connections like that. Instead of a coin, take a particle that isn’t spinning and “split” it into two particles spinning in different directions, “spin up” and “spin down”. Like the coin, the two particles are “logically connected”: you know if one of them is “spin up” the other is “spin down”.

What makes a quantum coin different from a classical coin is that there’s no way to figure out the result in advance. If you watch carefully, you can see which half of the coin gets put into which envelope, but no matter how carefully you look you can’t predict which particle will be spin up and which will be spin down. There’s no “hidden information” in the quantum case, nowhere nearby you can look to figure it out.
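The cut-coin analogy is simple enough to sketch in a few lines of Python. Note that this is a classical simulation only: it captures the perfect anti-correlation, but in the classical case the answer is fixed the moment the halves are dealt, whereas for the quantum “coin” no amount of peeking beforehand could tell you the outcome.

```python
import random

# Classical "cut coin": on each trial, the heads half and the tails half
# are shuffled at random into envelope A and envelope B.
def deal():
    halves = ["heads", "tails"]
    random.shuffle(halves)
    return halves[0], halves[1]  # (envelope A, envelope B)

for _ in range(5):
    a, b = deal()
    # Opening envelope A immediately tells you what's in envelope B:
    assert {a, b} == {"heads", "tails"}
```

No message travels between the envelopes when you open one; the correlation was built in when the halves were dealt. The quantum version keeps the correlation but drops the predetermined answer.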

That makes the connection seem a lot weirder than a regular logical connection. It also has slightly different implications, weirdness in how it interacts with the rest of quantum mechanics, things you can exploit in various ways. But none of those ways, none of those connections, allow you to change the world faster than the speed of light. In a way, they’re connecting things in the same sense that “we are all stardust” is connecting things: tied together by logic and cause.

So as long as this is all you mean by “everything is connected” then sure, everything is connected. But often, people seem to mean something else.

Sometimes, they mean something explicitly mystical. They’re people who believe in dowsing rods and astrology, in sympathetic magic, rituals you can do in one place to affect another. There is no support for any of this in physics. Nothing in quantum mechanics, in string theory, or in big bang cosmology has any support for altering the world with the power of your mind alone, or the stars influencing your day to day life. That’s just not the sort of connection we’re talking about.

Sometimes, “everything is connected” means something a bit more loose, the idea that someone’s desires guide their fate, that you could “know” something happened to your kids the instant it happens from miles away. This has the same problem, though, in that it’s imagining connections that let you act faster than light, where people play a special role. And once again, these just aren’t that sort of connection.

Sometimes, finally, it’s entirely poetic. “Everything is connected” might just mean a sense of awe at the deep physics in mundane matter, or a feeling that everyone in the world should get along. That’s fine: if you find inspiration in physics then I’m glad it brings you happiness. But poetry is personal, so don’t expect others to find the same inspiration. Your “everything is connected” might not be someone else’s.

Movie Review: The Truth is in the Stars

Recently, Perimeter aired a showing of The Truth is in the Stars, a documentary about the influence of Star Trek on science and culture, with a panel discussion afterwards. The documentary follows William Shatner as he wanders around the world interviewing scientists and film industry people about how Star Trek inspired them. Along the way he learns a bit about physics, and collects questions to ask Stephen Hawking at the end.


I’ll start with the good: the piece is cute. They managed to capture some fun interactions with the interviewees, there are good (if occasionally silly) visuals, and the whole thing seems fairly well edited. If you’re looking for an hour of Star Trek nostalgia and platitudes about physics, this is the documentary for you.

That said, it doesn’t go much beyond cute, and it dances between topics in a way that felt unsatisfying.

The piece has a heavy focus on Shatner, especially early on, beginning with a clumsily shoehorned-in visit to his ranch to hear his thoughts on horses. For a while, the interviews are all about him: his jokes, his awkward questions, his worries about getting old. He has a habit of asking the scientists he talks to whether “everything is connected”, which to the scientists’ credit is usually met by a deft change of subject. All of this fades somewhat as the movie progresses, whether by a trick of editing or because, after talking to so many scientists, he begins to pick up some humility.

(Incidentally, I really ought to have a blog post debunking the whole “everything is connected” thing. The tricky part is that it involves so many different misunderstandings, from confusion around entanglement, to the role of strings, to “we are all star-stuff”, that it’s hard to be comprehensive.)

Most of the scientific discussions are quite superficial, to the point that they’re more likely to confuse inexperienced viewers than to tell them something new (especially the people who hinted at dark energy-based technology…no, just no). While I don’t expect a documentary like this to cover the science in-depth, trying to touch on so many topics in this short a time mostly just fuels the “everything is connected” misunderstanding. One surprising element of the science coverage was the choice to have both Michio Kaku giving a passionate description of string theory and Neil Turok bluntly calling string theory “a mess”. While giving the public “both sides” like that isn’t unusual in other contexts, for some reason most science documentaries I’ve seen take one side or the other.

Of course, the point of the documentary isn’t really to teach science, it’s to show how Star Trek influenced science. Here too, though, the piece was disappointing. Most of the scientists interviewed could tell their usual story about the power of science fiction in their childhood, but didn’t have much to say about Star Trek specifically. It was the actors and producers who had the most to say about Star Trek, from Ben Stiller showing off his Gorn mask to Seth MacFarlane admiring the design of the Enterprise. The best of these was probably Whoopi Goldberg’s story of being inspired by Uhura, which has been covered better elsewhere (and might have been better as Mae Jemison’s similar story, which would at least have involved an astronaut rather than another actor). I did enjoy Neil deGrasse Tyson’s explanation of how as a kid he thought everything on Star Trek was plausible…except for the automatic doors.

Shatner’s meeting with Hawking is the finale, and is the documentary’s strongest section. Shatner is humbled, even devout, in Hawking’s presence, while Hawking seems to show genuine joy swapping jokes with Captain Kirk.

Overall, the piece felt more than a little disjointed. It’s not really about the science, but it didn’t have enough content to be really about Star Trek either. If it was “about” anything, it was Shatner’s journey: an aging actor getting to hang out and chat with interesting people around the world. If that sounds fun, you should watch it: but don’t expect anything deeper than that.