Paper Explainer: Force-feeding Supermassive Black Holes with Dissipative Dark Matter
This is a paper explainer for my recent paper “Force-feeding Supermassive Black Holes with Dissipative Dark Matter” with Rutgers postdoc Nicolas Fernandez.
There is a problem in the early Universe, one that demands an answer.
Or maybe there isn’t. But at least, there is something weird going on in the data, and as a theorist, that’s good enough for me.
Let’s start over.
JWST, our newest space telescope, is capable of peering further back into the history of the Universe than was previously possible. Sensitive to infrared (IR) wavelengths, it can see the earliest stars, whose visible light has been redshifted into the IR by the expansion of the Universe. Among its many surprising discoveries, it has identified a population of “little red dots”: early galaxies with a perhaps surprising number of stars given how early they formed, and with evidence of already-massive supermassive black holes in their galactic cores.
These early galaxies are seen at high redshifts, a measure of how much the wavelengths have stretched between the emission of the photons and JWST’s detection of them. Redshift is also correlated with the age of the Universe when the light was emitted: larger redshifts mean the light left its source when the Universe was smaller, and thus younger. We speak of redshift rather than age because redshift $z$ is “easily” measured (you identify a series of characteristic spectral lines from atomic energy-level transitions in the galaxies’ light, and from that work out the wavelength change). Age is inferred, and requires both a redshift and a model of cosmology to calculate. These early supermassive black holes are being seen in galaxies with redshifts as high as 9 (wavelengths stretch by a factor of $1+z$, so the ‘cosmic meter-stick’ of the Universe was a tenth the size it is today), which corresponds to something like 13.2 billion years ago, or around 500 million years after the initial Big Bang (though that age of course assumes something about the evolution of the Universe).
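If you want to check numbers like that yourself, here is a quick back-of-the-envelope script. It converts a redshift into a cosmic age for a flat ΛCDM Universe; the cosmological parameters below are rough Planck-like values I am assuming for illustration, not numbers taken from the paper.

```python
# Quick sketch (my own, with assumed Planck-like parameters): convert a
# redshift into the age of a flat LCDM Universe at that redshift.
import numpy as np
from scipy.integrate import quad

H0 = 67.7              # Hubble constant [km/s/Mpc] (assumed)
Om, OL = 0.31, 0.69    # matter and dark-energy fractions (assumed)
KM_PER_MPC = 3.086e19  # km in a megaparsec
S_PER_GYR = 3.156e16   # seconds in a gigayear

def age_at_redshift(z):
    """Age of the Universe at redshift z, in Gyr, for flat LCDM."""
    H0_per_s = H0 / KM_PER_MPC                      # H0 in 1/s
    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + OL)   # H(z)/H0
    t, _ = quad(lambda zp: 1.0 / ((1 + zp) * E(zp)), z, np.inf)
    return t / H0_per_s / S_PER_GYR

print(f"scale factor at z=9 : {1/(1+9):.2f} of today")        # ~0.10
print(f"age at z=9          : {age_at_redshift(9):.2f} Gyr")  # ~0.55 Gyr
print(f"age today           : {age_at_redshift(0):.2f} Gyr")  # ~13.8 Gyr
```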
At such early times, JWST is reporting black holes in the centers of large galaxies with masses on the order of a million or ten million Suns (a Solar mass, $M_\odot$, is a very convenient measure of mass at the scale of galaxies. Our Milky Way weighs in at around $10^{12}M_\odot$ including all the dark matter, and contains a $\sim 4\times 10^6M_\odot$ black hole at its center). A bit later, at redshifts $z\sim 6$, JWST finds that there are already monster black holes of $\sim 10^{10}M_\odot$ lurking in the centers of the very largest galaxies.
Now, the mystery is not exactly that supermassive black holes (SMBHs) exist. The presence of such black holes has been known for a long time in the “modern” ($z\sim 0$) Universe. Every big galaxy appears to have a SMBH, with the mass of the galaxy tracking the mass of the SMBH (though this relation is itself not understood: it makes intuitive sense that the two masses are correlated, but we don’t actually have a good causal mechanism relating the mass of the SMBH to the galaxy it resides in. Despite being incredibly massive, SMBHs are only a tiny fraction of the mass of their galaxies, and it is hard to understand how the two parameters come to be related). Given that they exist today, it was always reasonable to assume that these massive objects existed in the early Universe at lower masses, slowly growing through the infall of material until they reached the masses we see today.
However, the JWST black holes are a bit too big a bit too early. I want to be a bit careful here, since it is completely possible that these early black holes are totally in line with “normal” physics. But as I’ll explain, it is also possible that they are not.
What is certainly the case is that the SMBHs we see today are the result of a “seed” black hole which started at much lower mass.
Here’s the standard physics story of how these black holes come to be. The easiest way to do this in the standard cosmology is if the seeds are the remnants of the first generation of stars. These stars would have been created from the primordial hydrogen and helium that came out of the plasma after the Big Bang. Without elements heavier than helium (“metals” in the parlance of astronomers), the primordial gas clouds would tend to create more massive stars than the results of star formation today. This is a complicated problem, but the basic reason is that gas clouds fragment as they cool and collapse, and without metals adding new cooling channels the fragmentation tends to stop at higher mass. I’ll return to how fragmentation works for clouds of material later.
These massive early stars ($100-1000M_\odot$) would almost immediately burn through their gigantic reserves of nuclear fuel (the rate of nuclear fusion in a star goes as a high power of the stellar mass, so even though they have a lot of hydrogen to fuse, they go through it very quickly). Within a few million years (or less), these early stars run out of fuel, undergo core-collapse, explode as supernovae, and leave behind black hole remnants with masses of tens to hundreds of solar masses. The earliest such seed black holes near the core of a young galaxy will have other gas fall through the event horizon, “feeding” the black hole and resulting in a mass increase. Eventually, black holes near the galactic core can merge, creating bigger and bigger central black holes. All of this makes sense and requires no exotic new physics.
But the problem is that it is actually really hard to feed a black hole regular matter (that is, “baryonic” matter made of protons and neutrons). The reason is that as matter falls in toward a black hole, it heats up due to scattering and collisions with the other infalling material. Before they fall past the event horizon, these hot baryons emit radiation, which pushes back against the mass falling in behind them. This results in what is called the Eddington limit: the maximum rate at which you can feed a black hole baryons in the form of hot gas. You can overcome the Eddington limit by feeding the black hole “cold” matter if the baryons come in along streams, but such unusual configurations are expected to be somewhat rare.
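For the quantitatively inclined, the Eddington limit is the luminosity at which radiation pressure on the electrons balances gravity pulling on the protons. In its standard form (my notation here, not anything specific to the paper):

$$L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T} \simeq 1.3\times 10^{38}\left(\frac{M}{M_\odot}\right)~{\rm erg/s},$$

where $M$ is the black hole mass, $m_p$ the proton mass, and $\sigma_T$ the Thomson cross section for photons scattering off electrons. The key point for later is that this limit depends on the ratio of a scattering cross section to a particle mass.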
So, the problem is this. For baryons, the Eddington limit sets a maximum rate at which black holes are expected to grow, which, remember, is set only by the photon pressure pushing back against the proton-electron plasma of infalling hot gas. This rate is most often expressed in terms of the Salpeter time, which is around 45 million years. In one Salpeter time, a black hole can grow at most by a factor of $e\sim 2.72$ (unless there has been a high duty cycle of super-Eddington accretion). And the black holes seen by JWST appear to be too massive too early to be explained by sub-Eddington accretion onto the seed black holes left over by early stars. There aren’t enough 45-million-year Salpeter times between the first seeds and the little red dots seen by JWST.
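To see the tension concretely, here is a toy e-folding budget. The seed mass, target mass, and cosmic ages below are illustrative round numbers I am choosing, not the paper’s benchmarks:

```python
# Rough e-folding budget (illustrative numbers, not the paper's): how long
# does Eddington-limited growth take, and how much time is available?
import numpy as np

t_salpeter_myr = 45.0    # Salpeter time for baryonic accretion [Myr]
M_seed = 100.0           # assumed Pop III remnant seed mass [Msun]
M_target = 1.0e7         # assumed JWST-like SMBH mass at z ~ 9 [Msun]

n_efolds = np.log(M_target / M_seed)   # each Salpeter time gives one factor of e
t_needed_myr = n_efolds * t_salpeter_myr

# time between seed formation (z ~ 20, Universe ~180 Myr old) and
# observation (z ~ 9, Universe ~550 Myr old); rough LCDM ages, assumed
t_available_myr = 550.0 - 180.0

print(f"e-folds needed      : {n_efolds:.1f}")         # ~11.5
print(f"time needed   [Myr] : {t_needed_myr:.0f}")     # ~520, even at full Eddington
print(f"time available [Myr]: {t_available_myr:.0f}")  # ~370
```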
So, either the JWST black holes are not actually as young as we think they are (and this is early days, so maybe they are not at the redshifts we currently assign them), or the seed black holes left over by the first stars are more massive than we expected (which again is possible, we don’t fully understand these complicated systems), or super-Eddington accretion is the primary growth mechanism of early SMBHs (again, possible, though seems a bit unlikely).
Or, and this is where theorists like me come in, maybe we’re thinking about the seeding mechanism all wrong. If we could create early black holes some other way, more massive than the stellar remnants, or if we could feed early black holes more efficiently, then maybe what we’re seeing with these JWST black holes is a first sign of new physics.
As I said, it is early days, and the safe money is that the SMBHs we’re seeing are all generated through normal baryonic processes, and everything is fine. But I’m a theorist, and where’s the fun in that?
So, let’s build a black hole seed in a new way. It needs to be built fast (so let’s say starting at redshifts of $z\sim 20$), and it needs to start big — big here being at least $1000M_\odot$. Also, I study dark matter, so of course I’m going to do this with dark matter.
Black holes will eat dark matter without any fancy theorist tricks: gravity is gravity and the event horizon is ecumenical in accepting all forms of stress-energy. But typically, it is hard to throw dark matter into a black hole. This is because dark matter in a galaxy has kinetic energy and angular momentum, and thus as gravity pulls it towards the black hole, it will tend to “miss” as it falls inwards. Instead of coming close enough to pass the event horizon, dark matter will just form a giant cloud orbiting around the black hole (remember: at distances much beyond the event horizon, black holes don’t “suck” material in any more than any other gravitating object). Only a very tiny fraction would ever get close enough to the center to pass the point of no return.
So if I want to get dark matter to fall into a black hole, or even better, if I want dark matter to form a black hole, I’m going to need to remove kinetic energy (and angular momentum, but it is easiest to think just about the energy for now) from the dark matter system. In boring old dark matter theories, this is very, very hard. In the simplest theories dark matter doesn’t interact very much, even with itself, so it can’t “cool” by radiating energy away.
But we can change that. The nature of dark matter is unknown, and it is reasonable to ask if dark matter is part of a larger “dark sector” of physics where there are multiple particles and interactions, just like in the baryonic matter of which we are made. After all, the Standard Model of particle physics isn’t boring, so why should dark matter be?
This is an idea that I and others have played with for many years, and it turns out that it is completely possible for dark matter to have its own set of interactions. There are constraints, lots of them, and they all come from astrophysical observations of how dark matter distributes itself in galaxies and in the early Universe. In general, to keep the dark matter halos of galaxies from looking too different from how they are observed to be, dark matter must interact with itself somewhat less than photons interact with electrons and protons. But “somewhat less” is not zero.
So let’s take a very simple model: “dark atoms,” where you imagine dark matter is made of a heavy proton analogue (which I’ll call $H$) and a lighter electron analogue ($L$), charged equally and oppositely under a new long-range $U(1)$ gauge force whose carrier is a “dark photon” $\hat{\gamma}$. The dark force allows bound states (dark “atoms”). Such models have been explored by many groups, considering a wide range of mass and interaction-strength parameters, and they are a nice proxy for other forms of complicated dark sector physics: even if in the end you want to think about some other form of interacting dark matter, dark atoms are a good starting point for getting a sense of the possible phenomena. As I showed some years ago, you can make all of the dark matter out of such dark atomic states without changing the structure of the known dark matter halos: the effects are only noticeable in small halos. In the present work, we use this model specifically, but most of the core ideas would apply as long as you have a dark matter scenario with a long-range force.
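Schematically, the kind of Lagrangian I have in mind looks like this (the notation here is mine and may differ from the paper’s):

$$\mathcal{L} \supset -\tfrac{1}{4}\hat{F}_{\mu\nu}\hat{F}^{\mu\nu} + \bar{H}\left(i\gamma^\mu D_\mu - m_H\right)H + \bar{L}\left(i\gamma^\mu D_\mu - m_L\right)L, \qquad D_\mu = \partial_\mu \mp i g_D \hat{A}_\mu,$$

with $H$ and $L$ carrying opposite unit charge under the dark $U(1)$, $\hat{A}_\mu$ the dark photon field, and a dark fine-structure constant $\alpha_D = g_D^2/4\pi$. The hydrogen-like bound state then has a binding energy of roughly $\tfrac{1}{2}\alpha_D^2 m_L$ (for $m_L \ll m_H$), which sets the temperature at which dark recombination happens.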
To do this right, you need to run a full simulation of a cloud of dark matter with dark photon interactions. Those simulations are computationally expensive, and you should only run them if you think there’s a very good reason. So in the present work, it makes sense to start with some “back of the envelope” calculations to see whether interesting behavior is even possible.
We start by thinking about what a cloud of atomic dark matter can do. The more massive the cloud is, the deeper its gravitational potential well is, and the faster the dark matter will be moving within that well (this follows from the virial theorem). Faster-moving material is “hotter,” so dark matter in large halos has a higher temperature than dark matter in small halos.
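As a rough illustration of that scaling, here is an order-of-magnitude estimate of the virial temperature of a halo. The particle mass and overdensity below are assumptions I am making for illustration, not the paper’s parameters:

```python
# Order-of-magnitude sketch (assumed numbers, not the paper's): virial
# "temperature" of a dark matter halo of mass M that collapses at redshift z.
# The virial theorem gives k_B*T ~ m*v^2/2 with v^2 ~ G*M/R_vir, and R_vir
# set by the halo being ~200x denser than the background at collapse.
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
K_B = 1.381e-23    # J/K
MSUN = 1.989e30    # kg
GEV = 1.783e-27    # kg per GeV/c^2

def virial_temperature(M_halo_msun, z, m_particle_gev, overdensity=200.0):
    """Rough virial temperature [K] felt by a particle of the given mass."""
    M = M_halo_msun * MSUN
    rho = overdensity * 2.7e-27 * (1 + z)**3   # ~200x mean matter density [kg/m^3]
    R_vir = (3 * M / (4 * np.pi * rho))**(1.0 / 3.0)
    v2 = G * M / R_vir                         # ~ virial velocity squared
    return m_particle_gev * GEV * v2 / (2 * K_B)

# heavier halo -> deeper well -> hotter: T scales as M^(2/3) at fixed redshift
for M in (1e3, 1e6, 1e9):
    print(f"M = {M:.0e} Msun : T_vir ~ {virial_temperature(M, 20, 1.0):.1e} K")
```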
When atomic dark matter is cold, the $H$ and $L$ particles can combine and form bound states (“dark atoms”). When it is hot, the dark matter is more and more ionized. When you have ionized or partially ionized material, particles passing by each other can scatter and lose energy by emitting photons (or dark photons, in this case). That is, dark atoms can cool. The rate of cooling is set by the range of interactions that are possible within your model, but interestingly, in most cases the cooling rate scales as $\Gamma \propto T^n$ with $n < 1$, i.e., it grows more slowly than linearly with temperature (the one exception is $L$-$L$ bremsstrahlung scattering, which turns out not to be too important here). So as you make a halo more massive, the dark matter gets hotter, but it can’t radiate that extra energy away very well. As a result, for big collections of dark matter, these dark atomic cooling mechanisms don’t matter: they just don’t radiate energy away fast enough (where “fast” means compared to the free-fall time of a dark matter particle in the halo).
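Here is a toy version of that comparison between the cooling time and the free-fall time. Everything here is schematic: the cooling-function normalization is made up just to put the crossover somewhere visible, and none of these are the paper’s actual rates.

```python
# Toy comparison (schematic, not the paper's rates): if the emissivity
# scales as n^2 * Lambda0 * T^n_exp with n_exp < 1, the ratio t_cool/t_ff
# grows with temperature, so hotter (more massive) halos fail the
# "cool faster than you collapse" test while small halos pass it.
import numpy as np

G = 6.674e-11    # m^3 kg^-1 s^-2
K_B = 1.381e-23  # J/K

def t_freefall(rho):
    """Free-fall time [s] for a cloud of mass density rho [kg/m^3]."""
    return np.sqrt(3 * np.pi / (32 * G * rho))

def t_cool(T, n, Lambda0=8e-43, n_exp=0.5):
    """Toy cooling time [s]: thermal energy density / radiated power density.
    Lambda0 is a made-up normalization chosen so the crossover lands near 1e4 K."""
    return 1.5 * n * K_B * T / (n**2 * Lambda0 * T**n_exp)

rho = 5e-21                # kg/m^3, roughly 200x the mean matter density at z~20
n = rho / 1.783e-27        # number density for ~1 GeV particles [1/m^3]
for T in (1e2, 1e4, 1e6):  # hotter halo = more massive halo
    print(f"T = {T:.0e} K : t_cool/t_ff = {t_cool(T, n) / t_freefall(rho):.2f}")
```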
A similar behavior is known for baryons. A Milky Way-mass spiral galaxy is a pancake of baryons (in a giant roughly-spherical halo of dark matter). Those baryons cooled efficiently and as a result collapsed into a disk. A galaxy cluster on the other hand has baryons that are not collapsed: the gas floats in intergalactic space rather than forming structures like disks and stars. Why? Because the rate of cooling for protons and electrons doesn’t increase fast enough to keep up with the higher temperature of galaxy cluster baryons. This was pointed out by Joe Silk in 1977, and I really like the result: the size of a spiral galaxy is set by the particle physics parameters of the mass of a proton, the mass of an electron, and the strength of electromagnetism. That’s really very cool.
As a theorist, I get to pick those parameters for my dark atoms. If the $H$ and $L$ masses are heavier than the proton and electron masses of the Standard Model, and if the dark photon coupling is slightly weaker than electromagnetism, then we can arrange things so that only dark matter halos below $\sim 3000M_\odot$ in mass will efficiently cool.
As those halos cool, they will fragment: little overdensities will collect more matter than others, and the density increases more in those regions than on average. Initially, the energy loss through radiation of dark photons causes the collapsing region to lose temperature while the density remains roughly constant. Eventually, the temperature hits a wall where the $H$ and $L$ form bound dark atoms, and the temperature remains constant while the density starts increasing. All the while, the fragmentation continues, forming smaller and smaller masses. This process is only arrested when the clouds of dark matter become opaque to the dark radiation. At that point, the fragmentation stops, and the smallest masses created will be the seeds of whatever comes next.
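The fragment masses here are controlled by the Jeans mass, which scales as $M_J \propto T^{3/2}/\rho^{1/2}$: cooling at fixed density shrinks it, and contracting at fixed temperature shrinks it further. A schematic illustration (the densities and the dark-particle mass below are made-up values, purely to show the scaling):

```python
# Fragmentation sketch (made-up numbers): the Jeans mass is the smallest
# clump that can collapse under its own gravity. It shrinks as the cloud
# cools at constant density, and shrinks further as the cloud contracts
# at constant temperature, until the cloud goes opaque to its own radiation.
import numpy as np

G, K_B, MSUN = 6.674e-11, 1.381e-23, 1.989e30

def jeans_mass_msun(T, rho, m_particle_kg):
    """Jeans mass [Msun] for temperature T [K] and density rho [kg/m^3]."""
    c_s2 = K_B * T / m_particle_kg   # ~ thermal sound speed squared
    return (np.pi / 6) * (np.pi * c_s2 / G)**1.5 / np.sqrt(rho) / MSUN

m_dm = 5 * 1.783e-27   # assumed dark "proton" mass of ~5 GeV (illustrative)
stages = [("initial cloud        ", 20.0, 5e-21),
          ("cooled, same density ", 2.0, 5e-21),
          ("contracted, same temp", 2.0, 5e-13)]
for label, T, rho in stages:
    print(f"{label}: M_J ~ {jeans_mass_msun(T, rho, m_dm):.1e} Msun")
```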
This is shown for an example set of parameters in the figure here: starting at a particular point of temperature and density (chosen to correspond to a $1000\,M_\odot$ halo of dark matter at $z=20$, the 1st red star), the cooling of dark atoms causes the temperature to decrease while the average density of the halo stays more or less constant. Eventually, the cooling hits a wall: if the halo cooled any further, the energy-loss mechanisms would become inefficient, while gravitational collapse keeps adding heat to the system. As a result, the system stops getting colder and starts getting denser and denser (the inflection point is the 2nd red star). All the while, the mass of the fragments (shown by the diagonal dotted lines, in units of $10^x\,M_\odot$) keeps decreasing. Finally, when the system reaches the 3rd red star, the dark matter has gotten dense enough that it is opaque to its own dark radiation. The evolution of temperature and density is more complicated at this point, but the system will heat up while getting denser, and the fragment mass finally stops decreasing. Without additional new physics to keep the fragments stable, they will themselves collapse into tiny black holes.
This happens in the Standard Model as well, with the seeds forming protostars. The differences in cooling mechanisms and the resulting opacity limit between pristine hydrogen and helium for the first stars and the metal-rich gas today is why we suspect the first stars were more massive: for them, the minimum fragment mass is expected to be $100-1000\,M_\odot$, whereas for interstellar gas today the fragments are around a solar mass (which is why we have solar mass stars). For dark matter in our dark atom story, these seeds would be less massive, perhaps Earth-mass or lower. But without a new energy source akin to nuclear fusion, the seeds collapse directly to black holes.
So we’ve created a bunch of sub-Earth mass black holes out of a cloud of a few thousand $M_\odot$ of dark matter. Not an auspicious start to supermassive black holes, it would seem.
But remember, the issue with standard black hole growth is that damned Eddington limit, which comes about because photons are too good at pushing back on baryons as they try to fall into a black hole. Dark matter is not only dark to ordinary photons; it is also really bad at talking to itself. Dark photons, if they exist, are inefficiently coupled to the dark matter: per unit of infalling mass, dark radiation pushes back far less effectively than ordinary photons push on a proton-electron plasma. The Eddington accretion time for dark atoms (the time you have to wait for a black hole fed on dark matter to increase by a factor of $e$) is around 20,000 years for the parameters we’re working with, not 45 million years. So the tiny little black hole seeds will almost immediately consume all the available cooling (dissipative) dark matter, and grow. The growth is limited not by time (because the dark Salpeter time is so short), but by the amount of material available. Full simulations would be required to understand what the resulting black holes look like, but I think it is reasonable to expect that you can get a black hole of a few thousand solar masses (the characteristic mass of the collapsing halo of dark matter) out of this. They can be created early, and they will be able to keep feeding and growing by consuming baryons as well.
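The scaling behind that number is simple if you assume, as I do here, that the dark Eddington limit is set by dark photons scattering off the light $L$ particles while the infalling mass is mostly in the heavy $H$ particles. The parameter values below are illustrative choices of mine, not necessarily the paper’s benchmark point:

```python
# Scaling sketch (illustrative parameters, not necessarily the paper's):
# the Salpeter time is proportional to the scattering cross-section per
# unit mass of the infalling plasma. Swapping Thomson scattering for its
# dark analogue, sigma ~ (8*pi/3)*(alpha_D/m_L)^2, and the proton mass
# for m_H gives
#   t_Sal(dark) / t_Sal(baryons) = (alpha_D/alpha)^2 * (m_e/m_L)^2 * (m_p/m_H)
t_salpeter_baryon_yr = 45e6

alpha_ratio = 0.5    # assumed: dark coupling half the strength of electromagnetism
m_L_over_me = 10.0   # assumed: dark "electron" 10x heavier than the electron
m_H_over_mp = 5.0    # assumed: dark "proton" 5x heavier than the proton

suppression = alpha_ratio**2 * (1.0 / m_L_over_me)**2 * (1.0 / m_H_over_mp)
print(f"dark Salpeter time ~ {t_salpeter_baryon_yr * suppression:.2e} yr")  # ~2e4 yr
```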
Other groups have considered black holes from dark atoms, and other mechanisms to create black holes from dark matter to explain the SMBHs we see. But I think the interesting relation between the darkness of dark matter and the rate at which you could — if dark matter had the right interactions — feed a black hole with dark matter hasn’t been pointed out before. It’s a fascinating oddity: If I can get dark matter to cool, I can throw a lot more of it into a black hole than I can of “normal” atoms. Maybe that is a useful property to explain what we see in the Universe.
We don’t know if these JWST black holes are really a problem for the standard cosmology. As I said, the most likely outcome is that they can be created through standard physics. But it is worth thinking about what is possible. As we learn more about the population of SMBHs across cosmic time, and about the population of stars that might have seeded them, we might find that there is a problem that can’t be reconciled within the Standard Model. If so, novel solutions are required, and I think looking into what dark matter can do is a worthwhile endeavor.