Nobody wants to die. Natural risks are known to be pretty low, because we can estimate their future frequencies from their past frequencies. As it happens, supervolcanic explosions and planet-killing asteroids don’t come around very often. So if very few people are trying to wipe humanity out and natural risk is low, then why worry?
Consider the risk posed by passively listening for alien messages (recently explored in an excellent post by Matthew Barnett). If we expect that some alien civilizations will expand very rapidly but still significantly slower than the speed of light, there will be a large margin between the frontier of their physical expansion and the furthest places they can reach by sending messages at light speed. Expansionist aliens might try to use messages to start expansion waves from new points further out or to prevent other civilizations from grabbing stuff that is in the future path of their expanding frontier.
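To get a feel for the size of that margin, here is a minimal back-of-the-envelope sketch in Python. The expansion speed and the time spent expanding are invented numbers, chosen purely for illustration; the only point is that the gap between message reach and physical reach grows with time.

```python
# Back-of-the-envelope sketch (invented numbers) of the gap between an
# expansionist civilization's physical frontier and the reach of its
# light-speed messages.

C = 1.0                      # speed of light, in light-years per year
expansion_speed = 0.5 * C    # assumed physical expansion speed
years_expanding = 1_000_000  # assumed time since expansion began

frontier = expansion_speed * years_expanding  # how far the ships have gotten
message_reach = C * years_expanding           # how far a signal has gotten
margin = message_reach - frontier             # the shell only messages can touch

print(f"Physical frontier: {frontier:,.0f} light-years")
print(f"Message reach:     {message_reach:,.0f} light-years")
print(f"Margin:            {margin:,.0f} light-years")
```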
Therefore, if we get an alien message, it might be bad news. It might encode instructions for some kind of nightmarish world-destroying weapon or hostile, alien-created AI. Maybe we just shouldn’t try to interpret it or run it on a computer. (Bracketing all the technical problems this obviously raises: if we get a message that we don’t recognize as such, or that we can’t make head or tail of, then there’s nothing to worry about.) As one commenter summarized Matthew’s argument: “Passive SETI exposes an attack surface which accepts unsanitized input from literally anyone, anywhere in the universe. This is very risky to human civilization.”
The SETI Institute’s current plan if they get an alien message is apparently to post it on the internet. For the above reasons, this is a terrible idea. Matthew wrote: “If a respectable academic wrote a paper carefully analyzing how to deal with alien signals, informed by the study of information hazards, I think there is a decent chance that the kind people at the SETI Institute would take note, and consider improving their policy (which, for what it’s worth, was last modified in 2010)”.
I have studied information hazards a bit, and the subject is very interesting. But as far as I can tell, the study of information hazards is short on general-purpose lessons besides: be careful! One important finding is the idea of the unilateralist’s curse. If a group of independent actors discovers a piece of sensitive information, the probability that it will be released is determined not by the average of the probabilities that each member will release it but by the probability that the most optimistic or risk-tolerant member will; it only takes one. This leads to a “principle of conformity”. In an information hazard situation, you shouldn’t just do what you think best. You should take the other group members’ assessment of how risky publicizing something is into account. Be careful!
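As a toy illustration of why the curse holds, here is a small Monte Carlo sketch with invented per-actor probabilities: release happens if any one actor decides to publish, so the group’s effective risk tolerance is set by its most permissive member (in fact, the overall release probability is at least that high).

```python
import random

# Toy illustration of the unilateralist's curse (invented probabilities).
# Each actor independently decides whether to release the information;
# release happens if *any* one of them does.

release_probs = [0.01, 0.02, 0.05, 0.30]  # hypothetical per-actor probabilities
trials = 100_000

released = sum(
    any(random.random() < p for p in release_probs)
    for _ in range(trials)
)

print(f"Average individual probability: {sum(release_probs) / len(release_probs):.2f}")
print(f"Most risk-tolerant member:      {max(release_probs):.2f}")
print(f"Simulated release frequency:    {released / trials:.2f}")
# Exact value: 1 - (0.99 * 0.98 * 0.95 * 0.70) ≈ 0.35, above even the maximum.
```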
Information hazard research pioneer Nick Bostrom came up with an analogy for existential risks created by future technologies. Imagine that there is an urn containing white, gray, and black balls. A white ball is a beneficial new invention, a gray ball is an invention with mixed effects, and a black ball is an invention that destroys human civilization (for example, a bomb that any idiot can assemble once it is discovered, and that would destroy the entire earth if detonated). So far, technological progress has been good for humanity. We’ve drawn lots of white balls, a few gray balls, and no black balls.
But will that continue? Bostrom wrote:
Most scientific communities have neither the culture, nor the incentives, nor the expertise in security and risk assessment, nor the institutional enforcement mechanisms that would be required for dealing effectively with infohazards. The scientific ethos is rather this: every ball must be extracted from the urn as quickly as possible and revealed to everyone in the world immediately; the more this happens, the more progress has been made; and the more you contribute to this, the better a scientist you are. The possibility of a black ball does not enter into the equation.
I think the existence of this ethos is the most important big picture reason for pessimism about existential risk. It is much harder to bound the risks created by new technologies than it is to bound natural risks. We have only had a few centuries of fast technological progress. Presumably the technologies of the future will be more powerful than the technologies of the past. Presumably things that are more powerful are riskier. And, if a black ball had come out of the urn already, there would be nobody around to ponder this question. So how much can we conclude, really, from the fact that no black ball has been drawn so far in our own history?
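To see how little a few centuries of survival can tell us, here is a toy Bayesian sketch with made-up priors and per-century risk levels. Even before worrying about observation selection, the evidence is weak; once you note that observers only exist in worlds where no black ball has been drawn, it is arguably weaker still.

```python
# Toy Bayesian sketch (made-up numbers): how much does surviving a few
# centuries of technological progress tell us about the per-century risk
# of drawing a black ball?

safe_risk, risky_risk = 0.001, 0.05  # hypothetical per-century extinction risks
prior_risky = prior_safe = 0.5       # hypothetical 50/50 prior over the two worlds
centuries_survived = 5               # a few centuries of fast technological progress

p_survive_safe = (1 - safe_risk) ** centuries_survived
p_survive_risky = (1 - risky_risk) ** centuries_survived

posterior_risky = (p_survive_risky * prior_risky) / (
    p_survive_risky * prior_risky + p_survive_safe * prior_safe
)
print(f"Posterior on the risky world after surviving: {posterior_risky:.2f}")
# The posterior only moves from 0.50 to about 0.44. And if observers can only
# find themselves in worlds that survived, even this modest update is suspect.
```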
We should be worried that our civilization spends almost no energy worrying about this possibility. Science emerged from the breakdown of various orthodoxies and taboos. Scientists’ hatred of taboos is pretty understandable–and the fact that I find it understandable worries me all the more. I hate, hate, hate people who try to institute taboos on exploration and discussion! And that is even after spending a lot of time thinking about why future technologies might be risky. Even though I think this attitude imperils my species, I cannot suppress my allergy to taboos.
So what are the chances, then, that the “pull out as many balls as possible, color be damned” ethos changes, without other huge changes in the structure of human civilization? Maybe we could convince SETI organizations to “be careful!”. Then that (small, in the grand scheme of things) problem might be solved. But how many other groups, doing other kinds of research, would we have to convince? We either have to scramble to invent countermeasures to technologies that do not exist yet, or we have to try to persuade various research communities to change in ways that they find very uncongenial. We are running around a dam that is springing leaks, trying to plug them with our fingers. That is the kind of thing that will eventually stop working.