Risk = Hazard + Outrage, or Why Risk Stats about Toddlers, Lightning, and Guns vs. Terrorists Don’t Work

Philosopher Massimo Pigliucci recently tweeted a comparison of risk stats.


While these statistics are surely correct, they are unlikely to have much impact on the public's perception of risk. Peter Sandman is an expert in the field of risk communication. One of his contributions is the quasi-formula, "Risk = Hazard + Outrage." Pigliucci's statistics are an example of what Sandman calls data about the "hazard." But there is another set of considerations, what he calls "outrage," which plays a much bigger role in how most people think about risk.

Sandman defines “outrage” as “how upsetting the risky situation is.” He explains as follows:

When I invented the label “outrage” some 30 years ago I had in mind the sort of righteous anger people feel when they suspect a nearby factory is belching carcinogens into the air. But as I use the concept now, it applies to fear-arousing situations as much as anger-arousing situations. High-outrage risks are the risks that tend to upset people, independent of how much harm they’re actually likely to do.

A risk that is voluntary, for example, provokes less outrage than one that is coerced. A fair risk is less outrage-provoking than an unfair one. Among the other outrage factors:

  • Familiar versus exotic
  • Not memorable versus memorable
  • Not dreaded versus dreaded
  • Individually controlled versus controlled by others
  • Trustworthy sources versus untrustworthy sources
  • Responsive process versus unresponsive process


Not only does Sandman's hazard vs. outrage distinction explain why appeals to comparative risk statistics don't work, it is actually constructive: it helps guide the conversation. Most of Sandman's website is, in fact, an extended lesson in how to use an awareness of his "outrage factors" to get people's outrage to better align with the hazard.

I'm not going to spell out how Sandman's outrage factors play into the debate over Trump's Executive Order temporarily banning immigration from seven Muslim-majority nations. I'll leave that as an exercise for the reader. But I predict that, if you apply his techniques, you can both predict the talking points of Trump's supporters and identify areas where Trump's critics could do a much better job.


2 thoughts on “Risk = Hazard + Outrage, or Why Risk Stats about Toddlers, Lightning, and Guns vs. Terrorists Don’t Work”

  1. This formula is nice, simple, and illuminating. E.g., "Outrage" can be interpreted to capture the sense of the "other" that makes white, Christian people in the U.S. fear Islamic terrorism more than white-supremacist terrorism.

    It misses or overly simplifies some key points about risk assessment, however.

    The Black Swan Principle

    Normal people correctly take the potential for Black Swan events into account. E.g., the risk of dying from lightning or being accidentally or purposefully shot is pretty constant over time. But as we saw with 9/11, the 10-year average death rate for terrorist deaths does not predict the potential death rate. Factor in the very real potential of ex-USSR nukes kicking about uncontrolled, and it’s not unreasonable to fear terrorism more than constant threats, because it is spiky and unpredictable. This isn’t just irrationality and fear-mongering. This is one area where normal people, on average, may have a better appreciation of overall risk than many “experts”.
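The spikiness point can be sketched numerically. This is a minimal illustration with made-up yearly figures (not real terrorism data): a risk that looks small and stable on a trailing average, then produces a single event that dwarfs it.

```python
# Illustrative sketch (hypothetical numbers): a "spiky" risk whose
# trailing average badly understates its tail potential.
yearly_deaths = [5, 12, 8, 3, 10, 7, 4, 9, 6, 11]  # a calm decade
trailing_avg = sum(yearly_deaths) / len(yearly_deaths)

spike_year = 3000  # a single Black Swan event in year 11

# The decade average says ~7.5 deaths/year; the spike is off by
# orders of magnitude, so the historical rate was a poor guide.
print(f"10-year average: {trailing_avg:.1f}")
print(f"spike vs. average: {spike_year / trailing_avg:.0f}x")
```

The trailing average is a fine estimator for the lightning-style constant risk but says almost nothing about the spiky one, which is the commenter's point.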

    The Sharknado Principle

    U.S.-wide risk numbers do not account for individual or group behavior. People know instinctively that certain risks can be nearly eliminated by certain behaviors, and take that into account when assessing personal risk. E.g., if you spend all your time in Manhattan and don’t go outside in a major thunderstorm (both quite doable), you have a 0% chance of being killed by lightning. If you don’t have a toddler, or don’t have a gun in your household, or keep it locked up religiously, you have a 0% chance of being shot by a toddler. If you live in Seattle, you won’t be killed by a tornado; if you live in Montana, you won’t be killed by an earthquake. U.S.-wide risk is mostly useless in assessing personal risk.

    A great example of this in my life is shark attacks. Since I'm a surfer (or aspire to be), my risk of dying from a shark attack is substantially non-zero. Most of the places I've surfed or might surf regularly, including Hawaii, Oregon, California, and Washington, have shark danger warnings posted. However, for the roughly 99% of the population who never swim in the ocean, the risk of dying from a shark attack is zero, which severely skews nation-wide data for the small group of people who swim in the ocean regularly. The only shark attack rate statistics (as opposed to incident reports) readily available are nation-wide or maybe state-wide, meaning I can't reasonably assess the danger of a shark attack while surfing vs., e.g., drowning, without a lot more research. In fact, I've tried to find more specific risk numbers without success, but then I'm a bad researcher.
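The skew described above is just a base-rate adjustment: dividing the same number of incidents by the exposed subgroup rather than the whole population. A minimal sketch with assumed, order-of-magnitude numbers (the swimmer fraction is a guess, not a measured figure):

```python
# Hedged sketch of the reference-class problem with made-up numbers:
# nationwide per-capita risk vs. risk for the exposed subgroup.
population = 330_000_000        # rough U.S. population
fatal_attacks_per_year = 1      # order-of-magnitude shark fatality rate
ocean_swimmer_fraction = 0.01   # assume ~1% swim in the ocean regularly

nationwide_risk = fatal_attacks_per_year / population
exposed_risk = fatal_attacks_per_year / (population * ocean_swimmer_fraction)

# The exposed group's risk is (1 / fraction) times the nationwide figure.
print(exposed_risk / nationwide_risk)
```

Whatever the true swimmer fraction is, the nationwide figure understates the surfer's personal risk by exactly its reciprocal.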

    Basically, people know Sharknados don’t exist, and adjust their risk assessment accordingly.

    The Business as Usual Principle

    This is related to the “outrage” concept, but “non outrage” needs to be examined just as closely as “outrage” to understand how people assess risk.

    For a long time now, since at least WW2, if you live in the U.S., the vast majority of things that can kill you before your time are totally mundane, falling into a mental "business as usual" bucket. Four of the last five items on your chart fall squarely in this category. Not only are these things perceived as background hum in people's journey from birth to death, many of them are explicitly seen by most people as an acceptable cost of doing business. Traffic fatalities and drinking alcohol, e.g., are so accepted as normal that drunk drivers aren't considered terrorists, and barely even criminals, even though they kill more people through deliberate, unforced actions (getting drunk, then driving a car) in the U.S. every year than terrorists have killed in our entire history. Industrial "accidents" also fall into this category: people assign much higher risk to "deliberate" acts than to "accidental" acts (even if they are the result of deliberate policies) and "acts of God".

    Fortunately, we have people and orgs like OSHA, the NTSB, and Elon Musk who crunch the numbers and try to chip away at these problems, with varying degrees of effectiveness depending on the political climate.

    3a. The Boiling Frog Corollary

    An important corollary to Business as Usual is Boiling Frog: people are evolutionarily incapable of prioritizing slow catastrophes in their risk assessments. They tend to perceive any changes related to these threats as Business as Usual, adjusting their idea of “usual” over time.

    This is the flip side of your average person’s common sense understanding of Black Swan: if the Black Swan is a comet strike or terrorist nuke, people can grok that; if it’s a slow-moving catastrophe like climate change, people just can’t get their head around it enough to see it as the existential threat it is. This is true even if people are already dying from the slow-moving catastrophe. E.g., many view the conflicts in the Middle East as being driven by slow-moving changes to resource economics (Oil and Water) that are tied to climate change in lots of interesting ways. Thus, you could see the current Syrian refugee crisis that is leading to progressive democracies electing fascist dictators as the first big climate change-driven Black Swan Event. But it’s impossible to make this argument in a way that has real-time political impact.

    3a-i. The “Exponential Curves Look Linear in their Lag Phase” Problem

    A lot of times, when you rigorously crunch the numbers on a trend that’s about to change the world fundamentally, you run across an exponential curve where something is doubling every year. Ice melt. Processing power. Whatever. The problem is, most people, including statisticians doing risk assessments and general analysts making predictions, don’t assume they are looking at an exponential curve. They assume they are looking at linear growth, and try to fit a linear curve to the data. Unfortunately, in the lag phase of an exponential curve, it’s easy to “fit” a linear curve to exponential data and make it seem, from a statistics perspective, reasonable. If you build your business model or disaster response strategy around this presumed linear trend, the exponential curve will smash right through your plans when it hits its log phase (the business end of the “hockey stick”).
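The fitting trap above can be demonstrated directly. In this sketch (hypothetical data: a quantity doubling every year, observed only for its first six years), an ordinary least-squares line fits the lag-phase points plausibly, then badly underestimates year 10:

```python
# A quantity doubling every year, observed only in its "lag phase".
years = list(range(6))
values = [2 ** t for t in years]  # 1, 2, 4, 8, 16, 32

# Ordinary least-squares fit, computed by hand from the standard
# slope/intercept formulas.
n = len(years)
mean_t = sum(years) / n
mean_v = sum(values) / n
slope = sum((t - mean_t) * (v - mean_v) for t, v in zip(years, values)) / \
        sum((t - mean_t) ** 2 for t in years)
intercept = mean_v - slope * mean_t

linear_forecast = intercept + slope * 10  # the "reasonable" projection
actual = 2 ** 10                          # 1024: the hockey stick hits

print(f"linear forecast for year 10: {linear_forecast:.0f}")
print(f"actual value in year 10:     {actual}")
```

The line tracks the six observed points closely, yet its year-10 projection is off by more than an order of magnitude, which is exactly what happens to plans built on the presumed linear trend.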

    This most often emerges as a serious problem when there’s a feedback loop at play: the more of some change that happens, the faster that change happens. Ice melt has feedback loops. Information tech has feedback loops. Feedback loops can connect to external systems, making them even harder to spot. E.g., anything that benefits from IT (e.g. biotech) has feedback loops with IT and thus itself (since biotech spending feeds IT spending). The oceans store heat for long periods, but they may also release that pent-up heat rapidly under the right conditions.

    Humans do a great job of understanding exponential curves that manifest in short time spans, like “charging tiger”. They do a terrible, terrible job of seeing them over longer time spans.


    1. Hi Bruce, I agree with all of your data and supporting points, but I disagree with your conclusion. I work professionally in risk management, specifically information risk management, and I don't think your points support your conclusion that "It misses or overly simplifies some key points about risk assessment, however."

      The Black Swan Principle: in Sandman’s terminology, this principle would apply to the “hazard” portion of the quasi-formula.

      The Sharknado Principle: this is just another name for the reference class problem. It also applies to the “hazard” portion of his quasi-formula.

      The Business as Usual Principle: this is covered by the “outrage” portion of the quasi-formula. It probably wasn’t obvious from the very brief summary in my post, however. If you take a look at his classic book on risk communication and skip to the chapter on “outrage,” you’ll find that each outrage factor is really a pair of factors, one which increases outrage and one which decreases it. The “Business as Usual Principle” relates to what Sandman calls “familiar” vs. “exotic” risks (page 19).

