I calculate things called scattering amplitudes, the building blocks of predictions in particle physics. I’m part of a community of “amplitudeologists” who try to find better ways to compute these things, to achieve more efficiency and deeper understanding. We meet once a year for our big conference, called Amplitudes. And this year, I’m one of the organizers.
This year also happens to be the 100th anniversary of the founding of the Niels Bohr Institute, so we wanted to do something special. We found a group of artists working on a rendering of Niels Bohr. The original idea was to do one of those celebrity holograms, but after the conference went online we decided to make a few short clips instead. I wrote a Bohr-esque script, and we got help from one of Bohr’s descendants to get the voice just so. Now, you can see the result, as our digital Bohr invites you to the conference.
We’ll be livestreaming the conference on the same YouTube channel, and posting videos of the talks each day. If you’re curious about the latest developments in scattering amplitudes, I encourage you to tune in. And if you’re an amplitudeologist yourself, registration is still open!
Last month, our local nest of science historians at the Niels Bohr Archive hosted a Zoom talk by Jed Z. Buchwald, a Newton scholar at Caltech. Buchwald had a story to tell about experimental uncertainty, one where Newton had an important role.
If you’ve ever had a lab course in school, you know experiments never quite go like they’re supposed to. Set a room of twenty students to find Newton’s constant, and you’ll get forty different answers. Whether you’re reading a ruler or clicking a stopwatch, you can never measure anything with perfect accuracy. Each time you measure, you introduce a little random error.
Whole textbooks’ worth of statistical know-how has cropped up over the centuries to compensate for this error and get closer to the truth. The simplest trick, though, is just to average over multiple experiments. It’s so obvious a choice, taking a thousand little errors and smoothing them out, that you might think people have been averaging in this way throughout history.
They haven’t, though. As far as Buchwald has found, the first person to average experiments in this way was Isaac Newton.
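Newton’s trick is easy to see in action with a toy simulation (the numbers here are made up, purely for illustration): many noisy measurements of the same quantity, each off by a random error, average out to something far closer to the truth than any single measurement is likely to be.

```python
import random

random.seed(42)  # make this toy run reproducible

TRUE_VALUE = 9.81  # hypothetical "true" quantity being measured
NOISE = 0.5        # maximum size of each random measurement error

def measure():
    """One noisy measurement: the truth plus a random error."""
    return TRUE_VALUE + random.uniform(-NOISE, NOISE)

# Any single measurement can be off by as much as NOISE...
single = measure()

# ...but Newton's trick, averaging many of them, smooths the errors out.
many = [measure() for _ in range(10_000)]
average = sum(many) / len(many)

print(f"single measurement error: {abs(single - TRUE_VALUE):.4f}")
print(f"error of the average:     {abs(average - TRUE_VALUE):.4f}")
```

The individual errors don’t vanish; they cancel, a thousand little overshoots against a thousand little undershoots.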
What did people do before Newton?
Well, what might you do, if you didn’t have a concept of random error? You can still see that each time you measure you get a different result. But you would blame yourself: if you were more careful with the ruler, quicker with the stopwatch, you’d get it right. So you practice, you do the experiment many times, just as you would if you were averaging. But instead of averaging, you just take one result, the one you feel you did carefully enough to count.
Before Newton, this was almost always what scientists did. If you were an astronomer mapping the stars, the positions you published would be the last of a long line of measurements, not an average of the rest. Some other tricks existed. Tycho Brahe for example folded numbers together pair by pair, averaging the first two and then averaging that average with the next one, getting a final result weighted to the later measurements. But, according to Buchwald, Newton was the first to just add everything together.
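Brahe’s pair-by-pair folding and Newton’s plain average are both two-line algorithms, so the difference is easy to sketch (the data here are invented, chosen to expose the weighting):

```python
def brahe_fold(measurements):
    """Tycho Brahe's trick: average the first two measurements,
    then average that average with the next one, and so on."""
    result = measurements[0]
    for m in measurements[1:]:
        result = (result + m) / 2
    return result

def newton_average(measurements):
    """Newton's innovation: every measurement counts equally."""
    return sum(measurements) / len(measurements)

# Made-up data, chosen to expose the difference: one late outlier.
data = [10.0, 10.0, 10.0, 14.0]

print(brahe_fold(data))      # 12.0 -- the last value carries half the weight
print(newton_average(data))  # 11.0 -- the outlier is diluted four ways
```

In the folded version the final measurement always carries half the total weight, with each earlier one counting for less and less; Newton’s average treats them all the same.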
Even Newton didn’t yet know why this worked. It would take later research, and theorems of statistics, to establish the full justification. It seems Newton and those who followed him had a vague physics analogy in mind, finding a sort of “center of mass” of the different experiments. This doesn’t make much sense – but it worked, well enough for physics as we know it to begin.
So this Newtonmas, let’s thank the scientists of the past. Working piece by piece, concept by concept, they gave us the tools to navigate our uncertain times.
I recently read a biography of James Franck. Many of you won’t recognize the name, but physicists might remember the Franck-Hertz experiment from a lab class. Franck and Hertz performed a decisive test of Bohr’s model of the atom, ushering in the quantum age and receiving the 1925 Nobel Prize. After fleeing Germany when Hitler took power, Franck worked on the Manhattan project and co-authored the Franck Report urging the US not to use nuclear bombs on Japan. He settled at the University of Chicago, which named an institute after him.*
You can find all that on his Wikipedia page. The page also mentions his marriage later in life to Hertha Sponer. Her Wikipedia page talks about her work in spectroscopy, about how she was among the first women to receive a PhD in Germany and the first on the physics faculty at Duke University, and that she remained a professor there until 1966, when she was 70.
Neither Wikipedia page talks about two-body problems, or teaching loads, or pensions.
That’s why I was surprised when the biography covered Franck’s later life. Until Franck died, he and Sponer would travel back and forth, he visiting her at Duke and she visiting him in Chicago. According to the biography, this wasn’t exactly by choice: they both would have preferred to live together in the same city. Somehow though, despite his Nobel Prize and her scientific accomplishments, they never could. The biography talks about how the university kept her teaching class after class, so she struggled to find time for research. It talks about what happened as the couple got older, as their health made it harder and harder to travel back and forth, and they realized that without access to their German pensions they would not be able to support themselves in retirement. The biography gives the impression that Sponer taught till 70 not out of dedication but because she had no alternative.
When we think about the heroes of the past, we imagine them battling foes with historic weight: sexism, antisemitism, Nazism. We don’t hear about their more everyday battles, with academic two-body problems and stingy universities. From this, we can get the impression that the dysfunctions of modern academia are new. But while the problems have grown, we aren’t the first academics with underpaid, overworked teaching faculty, nor the first to struggle to live where we want and love who we want. These are struggles academics have faced for a long, long time.
*Full disclosure: Franck was also my great-great-grandfather, hence I may find his story more interesting than most.
I’ve been reading a biography of James Clerk Maxwell, who formulated the theory of electromagnetism. Nowadays, we think about the theory in terms of fields: we think there is an “electromagnetic field”, filling space and time. At the time, though, this was a very unusual way to think, and not even Maxwell was comfortable with it. He felt that he had to present a “physical model” to justify the theory: a picture of tiny gears and ball bearings, somehow occupying the same space as ordinary matter.
Maxwell didn’t think space was literally filled with ball bearings. He did, however, believe he needed a picture that was sufficiently “physical”, that wasn’t just “mathematics”. Later, when he wrote down a theory that looked more like modern field theory, he still thought of it as provisional: a way to use Lagrange’s mathematics to ignore the unknown “real physical mechanism” and just describe what was observed. To Maxwell, field theory was a description, but not an explanation.
This attitude surprised me. I would have thought physicists in Maxwell’s day could have accepted fields. After all, they had accepted Newton.
In Newton’s time, there was quite a bit of controversy about whether his theory of gravity was “physical”. When rival models described planets driven around by whirlpools, Newton simply described the mathematics of the force, an “action at a distance”. He famously declared hypotheses non fingo, “I feign no hypotheses”, insisting that he wasn’t saying anything about why gravity worked, merely how it worked. Over time, as the whirlpool models continued to fail, people gradually accepted that gravity could be explained as action at a distance.
You’d think that this would make them able to accept fields as well. Instead, by Maxwell’s day the options for a “physical explanation” had simply been enlarged by one. Now instead of just explaining something with mechanical parts, you could explain it with action at a distance as well. Indeed, many physicists tried to explain electricity and magnetism with some sort of gravity-like action at a distance. They failed, though. You really do need fields.
The author of the biography is an engineer, not a physicist, so I find his perspective unusual at times. After discussing Maxwell’s discomfort with fields, the author says that today physicists are different: instead of insisting on a physical explanation, they accept that there are some things they just cannot know.
At first, I wanted to object: we do have physical explanations, we explain things with fields! We have electromagnetic fields and electron fields, gluon fields and Higgs fields, even a gravitational field for the shape of space-time. These fields aren’t papering over some hidden mechanism, they are the mechanism!
Are they, though?
Fields aren’t quite like the whirlpools and ball bearings of historical physicists. Sometimes fields that look different are secretly the same: the two “different explanations” will give the same result for any measurement you could ever perform. In my area of physics, we try to avoid this by focusing on the measurements instead, building as much as we can out of observable quantities instead of fields. In effect we’re going back yet another layer, another dose of hypotheses non fingo.
Physicists still ask for “physical explanations”, and still worry that some picture might be “just mathematics”. But what that means has changed, and continues to change. I don’t think we have a common standard right now, at least nothing as specific as “mechanical parts or action at a distance, and nothing else”. Somehow, we still care about whether we’ve given an explanation, or just a description, even though we can’t define what an explanation is.
There’s a picture we learn in high school. It’s not the whole story, certainly: philosophers of science have much more sophisticated notions. But for practicing scientists, it’s a picture that often sits in the back of our minds, informing what we do. Because of that, it’s worth examining in detail.
In the high school picture, scientific theories make predictions. Importantly, postdictions don’t count: if you “predict” something that already happened, it’s too easy to cheat and adjust your prediction. Also, your predictions must be different from those of other theories. If all you can do is explain the same results with different words you aren’t doing science, you’re doing “something else” (“metaphysics”, “religion”, “mathematics”…whatever the person you’re talking to wants to make fun of, but definitely not science).
Seems reasonable, right? Let’s try a thought experiment.
In the late 1950’s, the physics of protons and neutrons was still quite mysterious. They seemed to be part of a bewildering zoo of particles that no-one could properly explain. In the 60’s and 70’s the field started converging on the right explanation, from Gell-Mann’s eightfold way to the parton model to the full theory of quantum chromodynamics (QCD for short). Today we understand the theory well enough to package things into computer code: amplitudes programs like BlackHat for collisions of individual quarks, jet algorithms that describe how those quarks become signals in colliders, lattice QCD implemented on supercomputers for pretty much everything else.
Now imagine that you had a time machine, prodigious programming skills, and a grudge against 60’s era-physicists.
Suppose you wrote a computer program that combined the best of QCD in the modern world: BlackHat and its kin from the amplitudes side, the best jet algorithms, lattice QCD code, and more: a program that could reproduce any calculation in QCD that anyone can do today. Further, suppose you don’t care about silly things like making your code readable. Since I began the list above with BlackHat, we’ll call the combined box of different codes BlackBox.
Now suppose you went back in time, and told the bewildered scientists of the 50’s that nuclear physics was governed by a very complicated set of laws: the ones implemented in BlackBox.
Your “BlackBox theory” passes the high school test. Not only would it match all previous observations, it could make predictions for any experiment the scientists of the 50’s could devise. Up until the present day, your theory would match observations as well as…well as well as QCD does today.
(Let’s ignore for the moment that they didn’t have computers that could run this code in the 50’s. This is a thought experiment, we can fudge things a bit.)
Now suppose that one of those enterprising 60’s scientists, Gell-Mann or Feynman or the like, noticed a pattern. Maybe they got it from an experiment scattering electrons off of protons, maybe they saw it in BlackBox’s code. They notice that different parts of “BlackBox theory” run on related rules. Based on those rules, they suggest a deeper reality: protons are made of quarks!
But is this “quark theory” scientific?
“Quark theory” doesn’t make any new predictions. Anything you could predict with quarks, you could predict with BlackBox. According to the high school picture of science, for these 60’s scientists quarks wouldn’t be scientific: they would be “something else”, metaphysics or religion or mathematics.
And in practice? I doubt that many scientists would care.
“Quark theory” makes the same predictions as BlackBox theory, but I think most of us understand that it’s a better theory. It actually explains what’s going on. It takes different parts of BlackBox and unifies them into a simpler whole. And even without new predictions, that would be enough for the scientists in our thought experiment to accept it as science.
Why am I thinking about this? For two reasons:
First, I want to think about what happens when we get to a final theory, a “Theory of Everything”. It’s probably ridiculously arrogant to think we’re anywhere close to that yet, but nonetheless the question is on physicists’ minds more than it has been for most of history.
Right now, the Standard Model has many free parameters, numbers we can’t predict and must fix based on experiments. Suppose there are two options for a final theory: one that has a free parameter, and one that doesn’t. Once that one free parameter is fixed, both theories will match every test you could ever devise (they’re theories of everything, after all).
If we come up with both theories before testing that final parameter, then all is well. The theory with no free parameters will predict the result of that final experiment, the other theory won’t, so the theory without the extra parameter wins the high school test.
What if we do the experiment first, though?
If we do, then we’re in a strange situation. Our “prediction” of the one free parameter is now a “postdiction”. We’ve matched numbers, sure, but by the high school picture we aren’t doing science. Our theory, the same theory that was scientific if history went the other way, is now relegated to metaphysics/religion/mathematics.
I don’t know about you, but I’m uncomfortable with the idea that what is or is not science depends on historical chance. I don’t like the idea that we could be stuck with a theory that doesn’t explain everything, simply because our experimentalists were able to work a bit faster.
My second reason focuses on the here and now. You might think we have nothing like BlackBox on offer, no time travelers taunting us with poorly commented code. But we’ve always had the option of our own Black Box theory: experiment itself.
The Standard Model fixes some of its parameters from experimental results. You do a few experiments, and you can predict the results of all the others. But why stop there? Why not fix all of our parameters with experiments? Why not fix everything with experiments?
That’s the Black Box Theory of Everything. Each individual experiment you could possibly do gets its own parameter, describing the result of that experiment. You do the experiment, fix that parameter, then move on to the next experiment. Your theory will never be falsified, you will never be proven wrong. Sure, you never predict anything either, but that’s just an extreme case of what we have now, where the Standard Model can’t predict the mass of the Higgs.
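In code, the Black Box Theory is nothing but a lookup table, one free parameter per experiment (a deliberately silly sketch, with a made-up experiment name as the key):

```python
class BlackBoxTheory:
    """A "theory" with one free parameter per experiment:
    it memorizes every result and predicts nothing new."""

    def __init__(self):
        self.parameters = {}  # experiment -> its own free parameter

    def fix_parameter(self, experiment, result):
        # Do the experiment first, then fix that experiment's parameter.
        self.parameters[experiment] = result

    def predict(self, experiment):
        # Never falsified: it only "predicts" what it has already seen.
        if experiment not in self.parameters:
            raise ValueError("No prediction available: do the experiment first!")
        return self.parameters[experiment]

theory = BlackBoxTheory()
theory.fix_parameter("Higgs boson mass (GeV)", 125.1)
print(theory.predict("Higgs boson mass (GeV)"))  # never wrong, never predictive
```

Every query it can answer is one it has already been told the answer to; every query it can’t answer just becomes another parameter waiting to be fixed.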
What’s wrong with the Black Box Theory? (I trust we can all agree that it’s wrong.)
It’s not just that it can’t make predictions. You could make it a Black Box All But One Theory instead, that predicts one experiment and takes every other experiment as input. You could even make a Black Box Except the Standard Model Theory, that predicts everything we can predict now and just leaves out everything we’re still confused by.
The Black Box Theory is wrong because the high school picture of what counts as science is wrong. The high school picture is a useful guide, it’s a good rule of thumb, but it’s not the ultimate definition of science. And especially now, when we’re starting to ask questions about final theories and ultimate parameters, we can’t cling to the high school picture. We have to be willing to actually think, to listen to the philosophers and consider our own motivations, to figure out what, in the end, we actually mean by science.
George Gamow was one of the “quantum kids” who got their start at the Niels Bohr Institute in the 30’s. He’s probably best known for the Alpher, Bethe, Gamow paper, which managed to combine one of the best sources of evidence we have for the Big Bang with a gratuitous Greek alphabet pun. He was the group jester in a lot of ways: the historians here have archives full of his cartoons and in-jokes.
Naturally, he also did science popularization.
I recently read two of Gamow’s science popularization books, “Mr Tompkins” and “Thirty Years That Shook Physics”. Reading them was a trip back in time, to when people thought about physics in surprisingly different ways.
“Mr. Tompkins” started as a series of articles in Discovery, a popular science magazine. They were published as a book in 1940, with a sequel in 1945 and an update in 1965. Apparently they were quite popular among a certain generation: the edition I’m reading has a foreword by Roger Penrose.
(As an aside: Gamow mentions that the editor of Discovery was C. P. Snow…that C. P. Snow?)
Mr Tompkins himself is a bank clerk who decides on a whim to go to a lecture on relativity. Unable to keep up, he falls asleep, and dreams of a world in which the speed of light is much slower than it is in our world. Bicyclists visibly redshift, and travelers lead much longer lives than those who stay at home. As the book goes on he meets the same professor again and again (eventually marrying his daughter) and sits through frequent lectures on physics, inevitably falling asleep and experiencing it first-hand: jungles where Planck’s constant is so large that tigers appear as probability clouds, micro-universes that expand and collapse in minutes, and electron societies kept strictly monogamous by “Father Paulini”.
The structure definitely feels dated, and not just because these days people don’t often go to physics lectures for fun. Gamow actually includes the full text of the lectures that send Mr Tompkins to sleep, and while they’re not quite boring enough to send the reader to sleep they are written on a higher level than the rest of the text, with more technical terms assumed. In the later additions to the book the “lecture” aspect grows: the last two chapters involve a dream of Dirac explaining antiparticles to a dolphin in basically the same way he would explain them to a human, and a discussion of mesons in a Japanese restaurant where the only fantastical element is a trio of geishas acting out pion exchange.
Some aspects of the physics will also feel strange to a modern audience. Gamow presents quantum mechanics in a way that I don’t think I’ve seen in a modern text: while modern treatments start with uncertainty and think of quantization as a consequence, Gamow starts with the idea that there is a minimum unit of action, and derives uncertainty from that. Some of the rest is simply limited by timing: quarks weren’t fully understood even by the 1965 printing, in 1945 they weren’t even a gleam in a theorist’s eye. Thus Tompkins’ professor says that protons and neutrons are really two states of the same particle and goes on to claim that “in my opinion, it is quite safe to bet your last dollar that the elementary particles of modern physics [electrons, protons/neutrons, and neutrinos] will live up to their name.” Neutrinos also have an amusing status: they hadn’t been detected when the earlier chapters were written, and they come across rather like some people write about dark matter today, as a silly theorist hypothesis that is all-too-conveniently impossible to observe.
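For the curious, here is a rough sketch of that style of argument, my reconstruction rather than Gamow’s own words: action has the same units as momentum times position, so if action only comes in whole units of Planck’s constant, there is an immediate limit on how finely both can be pinned down at once.

```latex
% Action ~ (momentum) x (position). If action comes only in
% whole units of h, no process can resolve a trajectory more
% finely than one unit's worth:
\Delta p \, \Delta x \gtrsim h
```

Modern treatments run the logic the other way, taking the uncertainty relation as fundamental and quantization as its consequence.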
“Thirty Years That Shook Physics”, published in 1966, is a more usual sort of popular science book, describing the history of the quantum revolution. While mostly focused on the scientific concepts, Gamow does spend some time on anecdotes about the people involved. If you’ve read much about the time period, you’ll probably recognize many of the anecdotes (for example, the Pauli Principle that a theorist can break experimental equipment just by walking into the room, or Dirac’s “discovery” of purling); even the ones specific to Gamow have by now spread far and wide.
As in Mr Tompkins, the level of this book is not particularly uniform. Gamow will spend a paragraph carefully defining an average, and then drop the word “electroscope” as if everyone should know what it is. The historical perspective taught me a few things I perhaps should have already known, but found surprising anyway. (The plum-pudding model was an actual mathematical model, and people calculated its consequences! Muons were originally thought to be mesons!)
Both books are filled with Gamow’s whimsical illustrations, something he was very much known for. Apparently he liked to imitate other art styles as well, which is visible in the portraits of physicists at the front of each chapter.
1966 was late enough that this book doesn’t have the complacency of the earlier chapters in Mr Tompkins: Gamow knew that there were more particles than just electrons, nucleons, and neutrinos. It was still early enough, though, that the new particles were not fully understood. It’s interesting seeing how Gamow reacts to this: his expectation was that physics was on the cusp of another massive change, a new theory built on new fundamental principles. He speculates that there might be a minimum length scale (although oddly enough he didn’t expect it to be related to gravity).
It’s only natural that someone who lived through the dawn of quantum mechanics should expect a similar revolution to follow. Instead, the revolution of the late 60’s and early 70’s was in our understanding: not new laws of nature so much as new comprehension of just how much quantum field theory can actually do. I wonder if the generation who lived through that later revolution left it with the reverse expectation: that the next crisis should be solved in a similar way, that the world is quantum field theory (or close cousins, like string theory) all the way down and our goal should be to understand the capabilities of these theories as well as possible.
The final section of the book is well worth waiting for. In 1932, Gamow directed Bohr’s students in staging a play, the “Blegdamsvej Faust”. A parody of Faust, it features Bohr as god, Pauli as Mephistopheles, and Ehrenfest as the “erring Faust” (Gamow’s pun, not mine) that he tempts to sin with the promise of the neutrino, Gretchen. The piece, translated to English by Gamow’s wife Barbara, is filled with in-jokes on topics as obscure as Bohr’s habitual mistakes when speaking German. It’s gloriously weird and well worth a read. If you’ve ever seen someone do a revival performance, let me know!
I gave a pair of public talks at the Niels Bohr International Academy this week on “The Quest for Quantum Gravity” as part of their “News from the NBIA” lecture series. The content should be familiar to long-time readers of this blog: I talked about renormalization, and gravitons, and the whole story leading up to them.
(I wanted to title the talk “How I Learned to Stop Worrying and Love Quantum Gravity”, like my blog post, but was told Danes might not get the Doctor Strangelove reference.)
I also managed to work in some history, which made its way into the talk after Poul Damgaard, the director of the NBIA, told me I should ask the Niels Bohr Archive about Gamow’s Thought Experiment Device.
“What’s a Thought Experiment Device?”
If you’ve heard of George Gamow, you’ve probably heard of the Alpher-Bethe-Gamow paper, his work with grad student Ralph Alpher on the origin of atomic elements in the Big Bang, where he added Hans Bethe to the paper purely for an alpha-beta-gamma pun.
As I would learn, Gamow’s sense of humor was prominent quite early on. As a research fellow at the Niels Bohr Institute (essentially a postdoc) he played with Bohr’s kids, drew physics cartoons…and made Thought Experiment Devices. These devices were essentially toy experiments, apparatuses that couldn’t actually work but that symbolized some physical argument. The one I used in my talk, pictured above, commemorated Bohr’s triumph over one of Einstein’s objections to quantum theory.
Learning more about the history of the institute, I kept noticing the young researchers, the postdocs and grad students.
Lev Landau, George Gamow, Edward Teller. The kids are Aage and Ernest Bohr. Picture from the Niels Bohr Archive.
We don’t usually think about historical physicists as grad students. The only exception I can think of is Feynman, with his stories about picking locks at the Manhattan project. But in some sense, Feynman was always a grad student.
This was different. This was Lev Landau, patriarch of Russian physics, crowning name in a dozen fields and author of a series of textbooks of legendary rigor…goofing off with Gamow. This was Edward Teller, father of the Hydrogen Bomb, skiing on the institute lawn.
These were the children of the quantum era. They came of age when the laws of physics were being rewritten, when everything was new. Starting there, they could do anything, from Gamow’s cosmology to Landau’s superconductivity, spinning off whole fields in the new reality.
On one level, I envy them. It’s possible they were the last generation to be on the ground floor of a change quite that vast, a shift that touched all of physics, the opportunity to each become gods of their own academic realms.
I’m glad to know about them too, though, to see them as rambunctious grad students. It’s all too easy to feel like there’s an unbridgeable gap between postdocs and professors, to worry that the only people who make it through seem to have always been professors at heart. Seeing Gamow and Landau and Teller as “quantum kids” dispels that: these are all-too-familiar grad students and postdocs, joking around in all-too-familiar ways, who somehow matured into some of the greatest physicists of their era.
Johannes Gutenberg, inventor of the printing press, and possibly the only photogenic thing on the Mainz campus
I’ve had a few occasions to dig into older papers recently, and I’ve noticed a trend: old papers are hard to read!
Ok, that might not be surprising. The older a paper is, the greater the chance it will use obsolete notation, or assume a context that has long passed by. Older papers have different assumptions about what matters, or what rigor requires, and their readers cared about different things. All this is to be expected: a slow, gradual approach to a modern style and understanding.
I’ve been noticing, though, that this slow, gradual approach doesn’t always hold. Specifically, it seems to speed up quite dramatically at one point: the introduction of arXiv, the website where we store all our papers.
Part of this could just be a coincidence. As it happens, the founding papers in my subfield, those that started Amplitudes with a capital “A”, were right around the time that arXiv first got going. It could be that all I’m noticing is the difference between Amplitudes and “pre-Amplitudes”, with amplitudeologists sharing notation more than they did before they had a shared identity.
But I suspect that something else is going on. With arXiv, we don’t just share papers (that was done, piecemeal, before arXiv). We also share LaTeX.
LaTeX is a document formatting language, like a programming language for papers. It’s used pretty much universally in physics and math, and increasingly in other fields. As it turns out, when we post a paper to arXiv, we don’t just send a pdf: we include the raw LaTeX code as well.
Before arXiv, if you wanted to include an equation from another paper, you’d format it yourself. You’d probably do it a little differently from the other paper, in accord with your own conventions, and just to make it easier on yourself. Over time, more and more differences would crop up, making older papers harder and harder to read.
With arXiv, you can still do all that. But you can also just copy.
Since arXiv makes the LaTeX code behind a paper public, it’s easy to lift the occasional equation. Even if you’re not lifting it directly, you can see how they coded it. Even if you don’t plan on copying, the default gets flipped around: instead of trying to match the equation from the previous paper and accidentally getting it wrong, every difference is intentional.
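As a hypothetical illustration, here is the same equation as it might appear in two papers. Before arXiv, the second author would retype it from scratch and differences of convention would creep in unnoticed; afterwards, the source can simply be copied:

```latex
% Pre-arXiv: each paper re-typesets the equation in its own conventions.
\begin{equation}
  E^2 = \mathbf{p}^2 c^2 + m_0^2 c^4   % bold momentum, explicit rest mass
\end{equation}

% Post-arXiv: the next paper can copy the source verbatim, so any
% difference in notation is a deliberate choice, not an accident.
\begin{equation}
  E^2 = p^2 c^2 + m^2 c^4
\end{equation}
```

Neither version is wrong, but a literature full of small accidental differences like these is harder to read than one where every difference means something.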
This reminds me, in a small-scale way, of the effect of the printing press on anatomy books.
Before the printing press, books on anatomy tended to be full of descriptions, but not illustrations. Illustrations weren’t reliable: there was no guarantee the monk who copied them would do so correctly, so nobody bothered. This made it hard to tell when an anatomist (fine, it was always Galen) was wrong: he could just be using an odd description. It was only after the printing press that books could actually have illustrations that were reliable across copies of a book. Suddenly, it was possible to point out that a fellow anatomist had left something out: it would be missing from the illustration!
In a similar way, arXiv seems to have led to increasingly standard notation. We still aren’t totally consistent…but we do seem a lot more consistent than older papers, and I think arXiv is the reason why.
I don’t get a lot of time to read for pleasure these days. When I do, it’s usually fiction. But I’ve always had a weakness for stories from the dawn of science, and David Wootton’s The Invention of Science: A New History of the Scientific Revolution certainly fit the bill.
Wootton’s book is a rambling tour of the early history of science, from Brahe’s nova in 1572 to Newton’s Optics in 1704. Tying everything together is one clear, central argument: that the scientific revolution involved, not just a new understanding of the world, but the creation of new conceptual tools. In other words, the invention of science itself.
Wootton argues this, for the most part, by tracing changes in language. Several chapters have a common structure: Wootton identifies a word, like evidence or hypothesis, that has an important role in how we talk about science. He then tracks that word back to its antecedents, showing how early scientists borrowed and coined the words they needed to describe the new type of reasoning they had pioneered.
Some of the most compelling examples come early on. Wootton points out that the word “discover” only became common in European languages after Columbus’s discovery of the new world: first in Portuguese, then later in the rest of Europe. Before then, the closest term meant something more like “find out”, and was ambiguous: it could refer to finding something that was already known to others. Thus, early writers had to use wordy circumlocutions like “found out that which was not known before” to refer to genuine discovery.
The book covers the emergence of new social conventions in a similar way. For example, I was surprised to learn that the first recorded priority disputes were in the sixteenth century. Before then, discoveries weren’t even typically named for their discoverers: “the Pythagorean theorem”, oddly enough, is a name that wasn’t used until after the scientific revolution was underway. Beginning with explorers arguing over the discovery of the new world and anatomists negotiating priority for identifying the bones of the ear or the “discovery” of the clitoris, the competitive element of science began to come into its own.
Along the way, Wootton highlights episodes both familiar and obscure. You’ll find Bruno and Torricelli, yes, but also disputes over whether the seas are higher than the land or whether a weapon could cure wounds it caused via the power of magnetism. For anyone as fascinated by the emergence of science as I am, it’s a joyous wealth of detail.
If I had one complaint, it would be that for a lay reader far too much of Wootton’s book is taken up by disputes with other historians. His particular foes are relativists, though he spares some paragraphs to attack realists too. Overall, his dismissals of his opponents are so pat, and his descriptions of their views so self-evidently silly, that I can’t help but suspect that he’s not presenting them fairly. Even if he is, the discussion is rather inside baseball for a non-historian like me.
I read part of Newton’s Principia in college, and I was hoping for a more thorough discussion of Newton’s role. While he does show up, Wootton seems to view Newton as a bit of an enigma: someone who insisted on using the old language of geometric proofs while clearly mastering the new science of evidence and experiment. In this book, Newton is very much a capstone, not a focus.
Overall, The Invention of Science is a great way to learn about the twists and turns of the scientific revolution. If you set aside the inter-historian squabbling (or if you like that sort of thing) you’ll find a book brim full of anecdotes from the dawn of modern thought, and a compelling argument that what we do as scientists is neither an accident of culture nor obvious common-sense, but a hard-won invention whose rewards we are still reaping today.
We don’t have IRBs in theoretical physics. We didn’t get quite as wacky as the psychologists did. But the 60’s were still a time of utopian dreams and experimentation, even in physics. We may not have done unethical experiments on people…but we did have the Analytic S-Matrix Program.
The Analytic S-Matrix Program was an attempt to rebuild quantum field theory from the ground up. The “S” in S-Matrix stands for “scattering”: the S-Matrix is an enormous matrix that tells you, for each set of incoming particles, the probability that they scatter into some new set of outgoing particles. Normally, this gets calculated piece by piece with what are called Feynman diagrams. The goal of the Analytic S-Matrix program was a loftier one: to derive the S-Matrix from first principles, without building it out of quantum field theory pieces. Without Feynman diagrams’ reliance on space and time, people like Geoffrey Chew, Stanley Mandelstam, Tullio Regge, and Lev Landau hoped to reach a deeper understanding of fundamental physics.
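The “matrix of probabilities” idea can be illustrated with a toy two-state example (the numbers here are purely hypothetical; a real S-matrix acts on a continuum of multi-particle states, not two discrete ones):

```python
import numpy as np

# A toy "S-matrix": a unitary matrix whose entry S[f, i] is the complex
# amplitude for incoming state i to scatter into outgoing state f.
theta = 0.3  # an arbitrary mixing angle, chosen for illustration
S = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])

# Unitarity (S† S = 1) encodes conservation of probability:
# the outgoing probabilities for any incoming state sum to 1.
assert np.allclose(S.conj().T @ S, np.eye(2))

# The probability of scattering from state 0 into state 1 is the
# squared magnitude of the corresponding amplitude.
prob = abs(S[1, 0]) ** 2
print(prob)  # sin^2(theta), about 0.0873
```

The analyticity conditions the program studied are constraints on how such amplitudes vary as smooth (complex-analytic) functions of the particles’ energies and momenta, which is where the “Analytic” in the name comes from.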
If this sounds familiar, it should. Amplitudeologists like me view the physicists of the Analytic S-Matrix Program as our spiritual ancestors. Like us, they tried to skip the mess of Feynman diagrams, looking for mathematical tricks and unexpected symmetries to show them the way forward.
Unfortunately, they didn’t have the tools we do now. They didn’t understand the mathematical functions they needed, nor did they have novel ways of writing down their results like the amplituhedron. Instead, they had to work with what they knew, which in practice usually meant going back to Feynman diagrams.
Paradoxically, then, much of the lasting impact of the Analytic S-Matrix Program has been on how we understand the results of Feynman diagram calculations. Just as psychologists learn about the Milgram experiment in school, we learn about Mandelstam variables and Regge trajectories. Recently, we’ve been digging up old concepts from those days and finding new applications, like the recent work on Landau singularities, or some as-yet unpublished work I’ve been doing.
Of course, this post wouldn’t be complete without mentioning the Analytic S-Matrix Program’s most illustrious child, String Theory. Some of the mathematics cooked up by the physicists of the 60’s, while dead ends for the problems they were trying to solve, ended up revealing a whole new world of potential.
The physicists of the 60’s were overly optimistic. Nevertheless, their work opened up questions that are still worth asking today. Much as psychologists can’t ignore what they got up to in the 60’s, it’s important for physicists to be aware of our history. You never know what you might dig up.
And as LeVar Burton would say, you don’t have to take my word for it.