Category Archives: Astrophysics/Cosmology

4gravitons, Spinning Up

I had a new paper out last week, with Michèle Levi and Andrew McLeod. But to explain it, I’ll need to clarify something about our last paper.

Two weeks ago, I told you that Andrew and Michèle and I had written a paper, predicting what gravitational wave telescopes like LIGO see when black holes collide. You may remember that LIGO doesn’t just see colliding black holes: it sees colliding neutron stars too. So why didn’t we predict what happens when neutron stars collide?

Actually, we did. Our calculation doesn’t just apply to black holes. It applies to neutron stars too. And not just neutron stars: it applies to anything of roughly the right size and shape. Black holes, neutron stars, very large grapefruits…

LIGO’s next big discovery

That’s the magic of Effective Field Theory, the “zoom lens” of particle physics. Zoom out far enough, and any big, round object starts looking like a particle. Black holes, neutron stars, grapefruits, we can describe them all using the same math.

Ok, so we can describe both black holes and neutron stars. Can we tell the difference between them?

In our last calculation, no. In this one, yes!

Effective Field Theory isn’t just a zoom lens, it’s a controlled approximation. That means that when we “zoom out” we don’t just throw out anything “too small to see”. Instead, we approximate it, estimating how big of an effect it can have. Depending on how precise we want to be, we can include more and more of these approximated effects. If our estimates are good, we’ll include everything that matters, and get a good approximation for what we’re trying to observe.
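To give a schematic sense of what “controlled approximation” means here (this is an illustration of the general idea, not the specific expansion in our paper): an observable measured at a distance d from an object of size R can be organized as a series in the small ratio R/d,

\[
\mathcal{O}(d) \;=\; c_0 \;+\; c_1\,\frac{R}{d} \;+\; c_2\left(\frac{R}{d}\right)^{\!2} \;+\;\cdots
\]

Each successive term is smaller by another power of R/d, so you can estimate in advance how many terms you need to keep for a given precision.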

At the precision of our last calculation, a black hole and a neutron star still look exactly the same. Our new calculation aims for a bit higher precision though. (For the experts: we’re at a higher order in spin.) The higher precision means that we can actually see the difference: our result changes for two colliding black holes versus two colliding grapefruits.
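For the curious, the standard example of such a difference (and roughly the kind of effect that enters at this order in spin, though I’m simplifying) is the spin-induced quadrupole moment. In units with G = c = 1 it is usually written as

\[
Q \;=\; -\,\kappa\,\frac{S^2}{m}\,,
\]

where S is the spin and m the mass. For a Kerr black hole κ = 1 exactly, while for a neutron star κ is larger, a number of order a few that depends on the equation of state. Coefficients like that are what let the waveform tell black holes apart from grapefruits.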

So does that mean I can tell you what happens when two neutron stars collide, according to our calculation? Actually, no. That’s not because we screwed up the calculation: it’s because some of the properties of neutron stars are unknown.

The Effective Field Theory of neutron stars has what we call “free parameters”, unknown variables. People have tried to estimate some of these (called “Love numbers” after the mathematician A. E. H. Love), but they depend on the details of how neutron stars work: what stuff they contain, how that stuff is shaped, and how it can move. To find them out, we probably can’t just calculate: we’ll have to measure, observe an actual neutron star collision and see what the numbers actually are.
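As a concrete example of such a free parameter (this is the standard tidal case, not necessarily the exact parametrization in our paper): an external tidal field \(\mathcal{E}_{ij}\) induces a quadrupole moment in the star,

\[
Q_{ij} \;=\; -\,\lambda\,\mathcal{E}_{ij}\,, \qquad \lambda \;=\; \frac{2}{3}\,k_2\,\frac{R^5}{G}\,,
\]

where R is the star’s radius and k_2 is the dimensionless Love number. The Effective Field Theory treats λ as an unknown coefficient; measuring it tells you about the star’s interior.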

That’s one of the purposes of gravitational wave telescopes. It’s not (as far as I know) something LIGO can measure. But future telescopes, with more precision, should be able to. By watching two colliding neutron stars and comparing to a high-precision calculation, physicists will better understand what those neutron stars are made of. In order to do that, they will need someone to do that high-precision calculation. And that’s why people like me are involved.

4gravitons Exchanges a Graviton

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

Just click the “zoom X10” button fifteen times, and you’re there!

In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.
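To see how literal this picture is, here is the textbook sanity check (schematic, and convention-dependent in its normalization): the exchange of a single graviton between two heavy, slow-moving masses gives an amplitude that, Fourier-transformed, is just Newton’s potential,

\[
\mathcal{M}(\mathbf{q}) \;\approx\; \frac{16\pi G\, m_1^2 m_2^2}{\mathbf{q}^2}
\quad\Longrightarrow\quad
V(r) \;=\; -\frac{1}{4 m_1 m_2}\int \frac{d^3q}{(2\pi)^3}\, e^{i\mathbf{q}\cdot\mathbf{r}}\,\mathcal{M}(\mathbf{q}) \;=\; -\frac{G\, m_1 m_2}{r}\,.
\]

Everything beyond this, higher loops, spin, finite-size effects, is a correction to that leading exchange.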

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.
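Schematically (with coefficients c_n left unspecified, and suppressing velocity-dependent structure), each extra loop adds another power of Newton’s constant to the interaction between the two bodies, and spin multiplies the whole thing by further series in the spins:

\[
V(r) \;\sim\; -\frac{G\,m_1 m_2}{r}\left[\,1 \;+\; c_1\,\frac{G(m_1+m_2)}{r c^2} \;+\; c_2\left(\frac{G(m_1+m_2)}{r c^2}\right)^{\!2} \;+\;\cdots\right] \;+\; \text{(spin-dependent terms)}.
\]

So “more loops” literally means more terms in this series, and “higher order in spin” means going further into the spin-dependent pieces.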

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

Guest Post: On the Real Inhomogeneous Universe and the Weirdness of ‘Dark Energy’

A few weeks ago, I mentioned a paper by a colleague of mine, Mohamed Rameez, that generated some discussion. Since I wasn’t up for commenting on the paper’s scientific content, I thought it would be good to give Rameez a chance to explain it in his own words, in a guest post. Here’s what he has to say:


In an earlier post, 4gravitons had contemplated the question of ‘when to trust the contrarians’, in the context of our about-to-be-published paper, in which we argue that, accounting for the effects of the bulk flow in the local Universe, there is no evidence for any isotropic cosmic acceleration, which would be required to claim some sort of ‘dark energy’.

In the following I would like to emphasize that this is a reasonable view, and not a contrarian one. To do so I will examine the bulk flow of the local Universe and the historical evolution of what appears to be somewhat dodgy supernova data. I will present a trivial solution (from data) to the claimed ‘Hubble tension’.  I will then discuss inhomogeneous cosmology, and the 2011 Nobel prize in Physics. I will proceed to make predictions that can be falsified with future data. I will conclude with some questions that should be frequently asked.

Disclaimer: The views expressed here are not necessarily shared by my collaborators. 

The bulk flow of the local Universe:

The largest anisotropy in the Cosmic Microwave Background is the dipole, believed to be caused by our motion with respect to the ‘rest frame’ of the CMB with a velocity of ~369 km s^-1. Under this view, all matter in the local Universe appears to be flowing. At least out to ~300 Mpc, this flow continues to be directionally coherent, to within ~40 degrees of the CMB dipole, and the scale at which the average relative motion between matter and radiation converges to zero has so far not been found.
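For orientation, the quoted velocity corresponds directly to the measured dipole amplitude, since to leading order

\[
\frac{\Delta T}{T} \;\approx\; \frac{v}{c} \;\approx\; \frac{369\ \mathrm{km\,s^{-1}}}{3\times 10^{5}\ \mathrm{km\,s^{-1}}} \;\approx\; 1.2\times 10^{-3},
\qquad
\Delta T \;\approx\; 1.2\times 10^{-3}\times 2.725\ \mathrm{K} \;\approx\; 3.4\ \mathrm{mK}.
\]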

This is one of the most widely accepted results in modern cosmology, to the extent that SNe Ia data come pre-‘corrected’ for it.

Such a flow has covariant consequences under general relativity and this is what we set out to test.

Supernova data, directions in the sky, and dodginess:

Both Riess et al. 1998 and Perlmutter et al. 1999 used samples of supernovae down to redshifts of 0.01, in which almost all SNe at redshifts below 0.1 were in the direction of the flow.

Subsequently, in Astier et al. 2006, Kowalski et al. 2008, Amanullah et al. 2010, and Suzuki et al. 2011, it is reported that a process of outlier rejection was adopted in which data points more than 3σ from the Hubble diagram were discarded. This was done using a highly questionable statistical method that involves adjusting an intrinsic dispersion term σ_int by hand until a χ²/ndof of 1 is obtained with respect to the assumed ΛCDM model. The number of outliers rejected is, however, far in excess of 0.3%, which is the 3σ expectation. As the sky coverage became less skewed, supernovae with redshifts less than ~0.023 were excluded for being outside the Hubble flow.

While the Hubble diagram had so far been inferred from heliocentric redshifts and magnitudes, with the introduction of SDSS supernovae that happened to lie in the direction opposite to the flow, peculiar velocity ‘corrections’ were adopted in the JLA catalogue and supernovae down to extremely low redshifts were reintroduced. While the early claims of a cosmological constant were stated as ‘high-redshift supernovae were found to be dimmer (15% in flux) than the low-redshift supernovae (compared to what would be expected in a Λ=0 universe)’, it is worth noting that the peculiar velocity corrections change the redshifts and fluxes of low-redshift supernovae by up to ~20%.
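For reference, the 0.3% figure quoted above is simply the Gaussian expectation for the fraction of points scattering beyond 3σ:

\[
P\!\left(|x-\mu| > 3\sigma\right) \;=\; \operatorname{erfc}\!\left(\tfrac{3}{\sqrt{2}}\right) \;\approx\; 0.27\%.
\]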

When it was observed that even with this ‘corrected’ sample of 740 SNe, any evidence for isotropic acceleration using a principled Maximum Likelihood Estimator is less than 3σ, it was claimed that by adding 12 additional parameters (to the 10-parameter model) to allow for redshift and sample dependence of the light-curve fitting parameters, the evidence was greater than 4σ.
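As a generic illustration of why added parameters matter statistically (this is textbook likelihood-ratio bookkeeping, not a reproduction of the analyses being criticized, and the numbers in it are made up): when a fit gains extra free parameters, the improvement in likelihood has to be weighed against a χ² distribution with that many degrees of freedom.

```python
# Generic likelihood-ratio accounting (Wilks' theorem); illustrative only,
# not the analysis from any of the papers discussed above.
from scipy import stats

def improvement_significance(delta_two_ln_L, extra_params):
    """Convert an improvement 2*Delta(ln L) from adding `extra_params` free
    parameters into a one-sided Gaussian-equivalent significance."""
    p_value = stats.chi2.sf(delta_two_ln_L, df=extra_params)
    return stats.norm.isf(p_value)

# Hypothetical numbers, purely for illustration:
print(improvement_significance(25.0, extra_params=1))   # ~4.9 sigma
print(improvement_significance(25.0, extra_params=12))  # ~2.2 sigma
```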

As we discuss in Colin et al. 2019, these corrections also appear to be arbitrary, and betray an ignorance of the fundamentals of both basic statistical analysis and relativity. With the Pantheon compilation, heliocentric observables were no longer public and these peculiar velocity corrections initially extended far beyond the range of any known flow model of the Local Universe. When this bug was eventually fixed, both the heliocentric redshifts and magnitudes of the SDSS SNe that filled in the ‘redshift desert’ between low and high redshift SNe were found to be alarmingly discrepant. The authors have so far not offered any clarification of these discrepancies.

Thus it seems to me that the latest generation of ‘publicly available’ supernova data are not aiding either open science or progress in cosmology.

A trivial solution to the ‘Hubble tension’?

The apparent tension between the Hubble parameter as inferred from the Cosmic Microwave Background and from low-redshift tracers has been widely discussed, and recent studies suggest that redshift errors as small as 0.0001 can have a significant impact. Redshift discrepancies as large as 0.1 have been reported. The shifts reported between JLA and Pantheon appear to be sufficient to lower the Hubble parameter from ~73 km s^-1 Mpc^-1 to ~68 km s^-1 Mpc^-1.
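A rough back-of-the-envelope (an illustration of the sensitivity, not a re-derivation of the JLA-Pantheon comparison): at low redshift H_0 ≈ cz/d, so a shift δz at fixed distance moves the inferred Hubble parameter by δH_0/H_0 ≈ δz/z. Moving from ~73 to ~68 km s^-1 Mpc^-1 is a ~7% shift, which at z ≈ 0.05 corresponds to

\[
\delta z \;\approx\; 0.07 \times 0.05 \;\approx\; 0.0035,
\]

well within the range of the redshift discrepancies mentioned above.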

On General Relativity, cosmology, metric expansion and inhomogeneities:

In the maximally symmetric Friedmann-Lemaître-Robertson-Walker solution to general relativity, there is only one meaningful global notion of distance and it expands at the same rate everywhere. However, the late-time Universe has structure on all scales, and one may only hope for statistical (not exact) homogeneity. The Universe is expected to be lumpy. A background FLRW metric is not expected to exist, and quantities analogous to the Hubble and deceleration parameters will vary across the sky. Peculiar velocities may be more precisely thought of as variations in the expansion rate of the Universe. At what rate does a real Universe with structure expand? The problems of defining a meaningful average notion of volume, its dynamical evolution, and connecting it to observations are all conceptually open.
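For concreteness, the exact solution being referred to is the FLRW metric, in which a single scale factor a(t) carries all of the expansion:

\[
ds^2 \;=\; -c^2 dt^2 \;+\; a(t)^2\left[\frac{dr^2}{1-kr^2} \;+\; r^2 d\Omega^2\right],
\qquad H(t) \;\equiv\; \frac{\dot{a}}{a}.
\]

The issue raised above is that the real, lumpy Universe need not admit any single a(t) that plays this role.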

On the 2011 Nobel Prize in Physics:

The Fitting Problem in cosmology was written in 1987. In the context of this work and the significant theoretical difficulties involved in inferring fundamental physics from the real Universe, any claims of having measured a cosmological constant from directionally skewed, sparse samples of intrinsically scattered observations should have been taken with a grain of salt.  By honouring this claim with a Nobel Prize, the Swedish Academy may have induced runaway prestige bias in favour of some of the least principled analyses in science, strengthening the confirmation bias that seems prevalent in cosmology.

This has resulted in the generation of a large body of misleading literature, while normalizing the practice of ‘massaging’ scientific data. In her recent video about gravitational waves, Sabine Hossenfelder says “We should not hand out Nobel Prizes if we don’t know how the predictions were fitted to the data”. What about when the data were fitted (in 1998-1999) using a method that had been discredited in 1989, to a toy model that had been cautioned against in 1987, leading to a ‘discovery’ of profound significance to fundamental physics?

A prediction with future cosmological data:

With the advent of high statistics cosmological data in the future, such as from the Large Synoptic Survey Telescope, I predict that the Hubble and deceleration parameters inferred from supernovae in hemispheres towards and away from the CMB dipole will be found to be different in a statistically significant (>5\sigma ) way. Depending upon the criterion for selection and blind analyses of data that can be agreed upon, I would be willing to bet a substantial amount of money on this prediction.
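To make the proposed test concrete, here is a minimal sketch of the kind of hemisphere split involved (the dipole direction used is the commonly quoted approximate value, RA ≈ 168°, Dec ≈ −7°; an actual analysis would of course involve a full fit in each hemisphere):

```python
import numpy as np

# Approximate equatorial coordinates of the CMB dipole apex.
DIPOLE_RA_DEG, DIPOLE_DEC_DEG = 168.0, -7.0

def unit_vector(ra_deg, dec_deg):
    """Cartesian unit vector for a sky position given in degrees."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def hemisphere(ra_deg, dec_deg):
    """Label a supernova as 'towards' or 'away' from the CMB dipole."""
    cos_angle = unit_vector(ra_deg, dec_deg) @ unit_vector(DIPOLE_RA_DEG, DIPOLE_DEC_DEG)
    return "towards" if cos_angle > 0 else "away"

# One would then fit the Hubble and deceleration parameters separately in
# each subsample and compare, with the selection criteria fixed in advance.
```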

Concluding : on the amusing sociology of ‘Dark Energy’ and manufactured concordance:

Of the two authors of the well-known cosmology textbook ‘The Early Universe’, Edward Kolb has written interesting papers questioning dark energy, while Michael Turner is credited with coining the term ‘Dark Energy’. Reasonable scientific perspectives have to be presented as ‘Dark Energy without dark energy’. Papers questioning the need to invoke such a mysterious component that makes up ‘68% of the Universe’ are quickly targeted by inane articles by non-experts, or by perhaps well-meant but still misleading YouTube videos. Much of this is nothing more than a spectacle.

In summary, while the theoretical debate about whether what has been observed as Dark Energy is the effect of inhomogeneities is ongoing, observers appear to have been actively using the most inhomogeneous feature of the local Universe through opaque corrections to data, to continue claiming that this ‘dark energy’ exists.

It is heartening to see that recent works lean toward a breaking of this manufactured concordance and speak of a crisis for cosmology.

Questions that should be frequently asked:

Q. Is there a Hubble frame in the late time Universe?

A. The Hubble frame is a property of the FLRW exact solution, and in the late time Universe in which galaxies and clusters have peculiar motions with respect to each other, an equivalent notion does not exist. While popular inference treats the frame in which the CMB dipole vanishes as the Hubble frame, the scale at which the bulk flow of the local Universe converges to that frame has never been found. We are tilted observers.

Q. I am about to perform blinded analyses on new cosmological data. Should I correct all my redshifts towards the CMB rest frame?

A. No. Correcting all your redshifts towards a frame that has never been found is a good way to end up with ‘dark energy’. It is worth noting that while the CMB dipole has been known since 1994, supernova data have been corrected towards the CMB rest frame only after 2010, for what appear to be independent reasons.

Q. Can I combine new data with existing Supernova data?

A. No. The current generation of publicly available supernova data suffer from the natural biases that are to be expected when data are compiled incrementally through a human mediated process. It would be better to start fresh with a new sample.

Q. Is ‘dark energy’ new fundamental physics?

A. Given that general relativity is a 100+ year old theory and significant difficulties exist in describing the late-time Universe with it, it is unnecessary to invoke new fundamental physics when confronting any apparent acceleration of the real Universe. All signs suggest that what has been ascribed to dark energy is the result of a community that is hell-bent on repeating what Einstein supposedly called his greatest mistake.

Digging deeper:

The inquisitive reader may explore the resources on inhomogeneous cosmology, as well as the works of George Ellis, Thomas Buchert and David Wiltshire.

Still Traveling, and a Black Hole

I’m still at the conference in Natal this week, so I don’t have time for a long post. The big news this week was the Event Horizon Telescope’s close-up of the black hole at the center of galaxy M87. If you’re hungry for coverage of that, Matt Strassler has some of his trademark exceptionally clear posts on the topic, while Katie Mack has a nice twitter thread.

Pictured: Not a black hole

Cosmology, or Cosmic Horror?

Around Halloween, I have a tradition of posting about the “spooky” side of physics. This year, I’ll be comparing two no doubt often confused topics, Cosmic Horror and Cosmology.


Pro tip: if this guy shows up, it’s probably Cosmic Horror

Cosmic Horror | Cosmology
Started in the 1920’s with the work of Howard Phillips Lovecraft | Started in the 1920’s with the work of Alexander Friedmann
Unimaginably ancient universe | Precisely imagined ancient universe
In strange ages even death may die | Strange ages, what redshift is that?
An expedition to Antarctica uncovers ruins of a terrifying alien civilization | An expedition to Antarctica uncovers…actually, never mind, just dust
Alien beings may propagate in hidden dimensions | Gravitons may propagate in hidden dimensions
Cultists compete to be last to be eaten by the Elder Gods | Grad students compete to be last to realize there are no jobs
Oceanic “deep ones” breed with humans | Have you seen daycare costs in a university town? No way.
Variety of inventive and bizarre creatures, inspiring libraries worth of copycat works | Fritz Zwicky
Hollywood adaptations are increasingly popular, not very faithful to source material | Actually this is exactly the same
Can waste hours on an ultimately fruitless game of Arkham Horror | Can waste hours on an ultimately fruitless argument with Paul Steinhardt
No matter what we do, eventually Azathoth will kill us all | No matter what we do, eventually vacuum decay will kill us all

The Physics Isn’t New, We Are

Last week, I mentioned the announcement from the IceCube, Fermi-LAT, and MAGIC collaborations of high-energy neutrinos and gamma rays detected from the same source, the blazar TXS 0506+056. Blazars are sources of gamma rays, thought to be enormous spinning black holes that act like particle colliders vastly more powerful than the LHC. This one, near Orion’s elbow, is “aimed” roughly at Earth, allowing us to detect the light and particles it emits. On September 22, 2017, a neutrino with energy around 300 TeV was detected by IceCube (a kilometer-wide block of Antarctic ice stuffed with detectors), coming from the direction of TXS 0506+056. Soon after, the satellite Fermi-LAT and the ground-based telescope MAGIC were able to confirm that the blazar was flaring at the time. The IceCube team then looked back and found more neutrinos coming from the same source in earlier years. There are still lingering questions (Why didn’t they see this kind of behavior from other, closer blazars?), but it’s a nice development in the emerging field of “multi-messenger” astronomy.

It also got me thinking about a conversation I had a while back, before one of Perimeter’s Public Lectures. An elderly fellow was worried about the LHC. He wondered if putting all of that energy in the same place, again and again, might do something unprecedented: weaken the fabric of space and time, perhaps, until it breaks? He acknowledged this didn’t make physical sense, but what if we’re wrong about the physics? Do we really want to take that risk?

At the time, I made the same point that gets made to counter fears of the LHC creating a black hole: that the energy of the LHC is less than the energy of cosmic rays, particles from space that collide with our atmosphere on a regular basis. If there was any danger, it would have already happened. Now, knowing about blazars, I can make a similar point: there are “galactic colliders” with energies so much higher than any machine we can build that there’s no chance we could screw things up on that kind of scale: if we could, they already would have.
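The numbers here are striking even on the back of an envelope (rough figures, fixed-target kinematics): the highest-energy cosmic rays observed carry around 10^20 eV = 10^11 GeV, so a single such proton hitting a proton at rest in our atmosphere reaches a center-of-mass energy of roughly

\[
\sqrt{s} \;\approx\; \sqrt{2\,E\,m_p c^2} \;\approx\; \sqrt{2 \times 10^{11}\ \mathrm{GeV} \times 0.94\ \mathrm{GeV}} \;\approx\; 4\times 10^{5}\ \mathrm{GeV} \;\approx\; 430\ \mathrm{TeV},
\]

compared to the LHC’s roughly 13 TeV, and that collision has been happening over our heads for as long as the Earth has existed.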

This connects to a broader point, about how to frame particle physics. Each time we build an experiment, we’re replicating something that’s happened before. Our technology simply isn’t powerful enough to do something truly unprecedented in the universe: we’re not even close! Instead, the point of an experiment is to reproduce something where we can see it. It’s not the physics itself, but our involvement in it, our understanding of it, that’s genuinely new.

The IceCube experiment itself is a great example of this: throughout Antarctica, neutrinos collide with ice. The only difference is that in IceCube’s ice, we can see them do it.

More broadly, I have to wonder how much this is behind the “unreasonable effectiveness of mathematics”: if mathematics is just the most precise way humans have to communicate with each other, then of course it will be effective in physics, since the goal of physics is to communicate the nature of the world to humans!

There may well come a day when we’re really able to do something truly unprecedented, that has never been done before in the history of the universe. Until then, we’re playing catch-up, taking laws the universe has tested extensively and making them legible, getting humanity that much closer to understanding physics that, somewhere out there, already exists.

Bubbles of Nothing

I recently learned about a very cool concept, called a bubble of nothing.

Read about physics long enough, and you’ll hear all sorts of cosmic disaster scenarios. If the Higgs vacuum decayed, and the Higgs field switched to a different value, the masses of most fundamental particles would change. It would be the end of physics, and of life, as we know it.

A bubble of nothing is even more extreme. In a bubble of nothing, space itself ceases to exist.

The idea was first explored by Witten in 1982. Witten started with a simple model, a world with our four familiar dimensions of space and time, plus one curled-up extra dimension. What he found was that this simple world is unstable: quantum mechanics (and, as was later found, thermodynamics) lets it “tunnel” to another world, one that contains a small “bubble”, a sphere in which nothing at all exists.


Except perhaps the Nowhere Man

A bubble of nothing might sound like a black hole, but it’s quite different. Throw a particle into a black hole and it will fall in, never to return. Throw it into a bubble of nothing, though, and something more interesting happens. As you get closer, the extra dimension of space gets smaller and smaller. Eventually, it stops, smoothly closing off. The particle you threw in will just bounce back, smoothly, off the outside of the bubble. Essentially, it will have reached the edge of the universe.

The bubble starts out small, comparable to the size of the curled-up dimension. But it doesn’t stay that way. In Witten’s setup, the bubble grows, faster and faster, until it’s moving at the speed of light, erasing the rest of the universe from existence.
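For readers who want to see the actual geometry: Witten’s solution is usually written (up to conventions) as a double analytic continuation of the five-dimensional Schwarzschild metric,

\[
ds^2 \;=\; \frac{dr^2}{1-R^2/r^2} \;+\; \left(1-\frac{R^2}{r^2}\right)d\phi^2 \;+\; r^2\left(-dt^2 + \cosh^2\! t \; d\Omega_2^2\right),
\]

where φ is the curled-up Kaluza-Klein direction, with circumference 2πR far away. Space only exists for r ≥ R: at r = R the extra dimension shrinks smoothly to zero size, which is the “edge” described above, and the bubble wall sits at radius R cosh t, expanding ever closer to the speed of light.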

You probably shouldn’t worry about this happening to us. As far as I’m aware, nobody has written down a realistic model that can transform into a bubble of nothing.

Still, it’s an evocative concept, and one I’m surprised isn’t used more often in science fiction. I could see writers using a bubble of nothing as a risk from an experimental FTL drive, or using a stable (or slowly growing) bubble as the relic of some catastrophic alien war. The idea of a bubble of literal nothing is haunting enough that it ought to be put to good use.