It’s reasonable to say that communication is intrinsic to the risk management process, yet it’s all too easy to get caught up in risk analysis and forget to adequately communicate its results. This is perhaps especially true for complex risks such as terrorism and national security, where specialist knowledge is required to understand the issues in any depth. There are, however, a couple of very simple things we can do to improve our risk communication.
It’s beyond the scope of this article to cover every element of risk communication, but one critical element is worth singling out: how we as risk professionals communicate the nature of risks to our leaders, to laypersons and to the general public. So how, in fact, do we communicate risk?
“Badly” is, unfortunately, often the answer to that question, and examples abound. Consider that most people are more afraid of terrorism than of driving, yet United States statistics show that an average of about 100 Americans are killed each year in terrorism-related events, while 40,000 to 45,000 die on the roads in the same period. Somewhere between 50,000 and 100,000 Americans die each year in hospital from documented and preventable medical errors [i], and roughly 400,000 die annually from tobacco-related illnesses. Yet both the level of fear and the funds expended to address these risks are, broadly speaking, inversely proportional to the actual consequences. Given that these statistics are relatively consistent across most developed nations, effective risk communication is clearly not one of humankind’s strong points.
We are not going to solve humanity’s risk management issues with a wave of a magic wand, but there are some things we can do. We have any number of channels available to us, including one-on-one conversations, meetings, emails, newsletters and mass media. The issue, however, is not how to communicate but what to communicate. The key challenge lies in the way our brains are programmed to consider risks. Our brains are finely tuned instruments for assessing immediate fight-or-flight risks, but our ability to consider more complex risks is a relatively recent product of the neocortex. Large numbers and abstract ideas are unfortunately not what we do best. Saying that next year 40,000 out of 300 million people will probably die on the roads, that 19,000 will be murdered, and that an average of 100 will die from terrorism simply doesn’t register in any meaningful way. The numbers are too large and too abstract for us to really comprehend. A better way to present complex risk information is to break it down into natural frequencies.
To illustrate this concept, let’s examine a potentially fatal risk for which we have some existing data and research available. Imagine that you are responsible for publishing public health risk information for counselors and doctors, and have to produce a leaflet for patients who are about to take an HIV test. By way of background, false positives are not uncommon, so when an HIV test produces a positive result, the blood sample is normally retested once or twice in the laboratory to verify it. Despite this additional testing, a small proportion of cases (roughly 0.01%) can still yield false positives (or false negatives) for a variety of reasons, including medical conditions, accidental swapping of blood samples and data-entry errors. Most HIV information does not mention this seemingly minor false positive rate; a study of 21 HIV/AIDS information leaflets in America found that precisely none of them mentioned even the possibility of a false positive. [ii]
America is not alone in this oversight. Another example of poor risk communication was confirmed in a 1998 German study of pre-test counseling for HIV tests. [iii] Twenty counselors were assessed, and although they were very knowledgeable about most aspects of the topic, they exhibited significant gaps in the interpretation of test results. Of the 20 counselors who gave pre-test counseling to a client with no known risk behavior (e.g. homosexual men, IV drug users), 5 incorrectly claimed that false negatives never occur and 16 incorrectly claimed that false positives never occur. The reasons for this inaccurate information included poor risk communication in their training, the illusion of certainty in testing, and a failure to understand that the proportion of false positives is highest among low-risk patients.
To understand why these otherwise knowledgeable health professionals should be so consistently ill-informed, consider the results of some research by Gerd Gigerenzer. [iv] He first phrased the following question to HIV counselors in probabilities, as is typical of the way statistics are presented to counselors and medical professionals:
“About 0.01 percent of men with no known risk behavior are infected with HIV. If such a man has the virus, there is a 99.99 percent chance that the test result will be positive. If a man is not infected, there is a 99.99 percent chance that the test result will be negative. What is the chance that a man with no known risk behavior who tests positive actually has the virus?”
Most people think the answer is 99.99 percent or higher (as did most of the counselors in the above study). Now consider the same question worded differently.
“Imagine 10,000 men who are not in any known risk category. One is infected and will test positive with practical certainty. Of the 9,999 men who are not infected, one will test positive. So we can expect that two men will test positive.”
From this latter question, you can easily see that the odds are roughly 1 in 2 or 50% that someone from a low-risk category who has a positive test result is actually HIV positive.
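The natural-frequency arithmetic above can be sketched in a few lines of code. This is only an illustration of the counting logic in the text (1 true positive and roughly 1 false positive per 10,000 low-risk men); the function name is my own, not from any standard library.

```python
# Natural-frequency view of the low-risk HIV example from the text:
# in a crowd of 10,000 low-risk men, 1 is infected (and tests positive
# with practical certainty) and roughly 1 uninfected man tests positive.

def positive_predictive_value(true_positives, false_positives):
    """Share of positive tests that reflect a real infection."""
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(true_positives=1, false_positives=1)
print(f"Chance a low-risk man who tests positive is infected: {ppv:.0%}")  # 50%
```

Counting people rather than multiplying probabilities makes the 50% answer almost self-evident, which is precisely the point of the natural-frequency wording.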
It’s worth noting that for men in high-risk categories (homosexual men or IV drug users, for example), with a base rate of 1.5% HIV infection, the chance of a false positive is less than 1 percent. In a group of 10,000 homosexual men we would expect about 150 to be HIV positive, and with practical certainty they will all test positive. Of the 9,850 who are HIV negative, about 1 would test positive. The chance of this person’s positive result being false is therefore 1 in 151, or less than 1 percent.
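The same counting approach applied to the high-risk group shows why the base rate matters so much. A minimal sketch using the figures in the text (a 1.5% base rate and roughly 1 expected false positive among the uninfected):

```python
# Natural-frequency arithmetic for the high-risk group in the text:
# base rate of 1.5% in a group of 10,000 men, with about 1 false
# positive expected among the uninfected.
group_size = 10_000
infected = int(group_size * 0.015)   # 150 men, all expected to test positive
false_positives = 1                  # of the 9,850 uninfected, about 1 tests positive

false_positive_share = false_positives / (infected + false_positives)
print(f"A false positive here is 1 in {infected + false_positives}")
print(f"That is about {false_positive_share:.2%}")  # about 0.66%, i.e. less than 1 percent
```

The identical test behaves very differently in the two populations: the same 1-in-10,000 error rate produces a 50% false-positive share in the low-risk group but under 1% in the high-risk group.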
The significance of this information for low-risk individuals should not be underestimated. Countless people have endured traumatic psychological stress, lost jobs, separated from spouses, participated in unprotected sex with HIV positive persons or committed suicide as a result of false positive tests. By 1987, for example, 22 blood donors in Florida had committed suicide after being told that they were HIV positive. An analysis of these cases many years later concluded that the chances were at most only 50-50 that these individuals were actually infected. [v] Nor are the downstream impacts of poor risk communication confined to the recipients of the communication. The potential for legal action against doctors or government agencies is just one example of a cascading spiral of risk begetting risk.
As you can see from the example above, the way in which we communicate risk can have a significant impact. If not carefully considered, risk communication can, of itself, introduce considerable risks where none existed. The problem of inappropriate risk communication is by no means rare, but it is relatively easily addressed. The information above could be better communicated by providing patients and counselors with the same data presented in terms of natural frequencies, as outlined below. [vi]
Depending on the exact procedure used, an HIV test is likely to be positive for about 998 of 1,000 people infected with HIV. About 1 in 10,000 persons will generate a false positive result. False positives can be reduced by repeated testing using different methods but not completely eliminated as certain medical conditions and laboratory errors can still generate false positives.
About 1 in 10,000 heterosexual men with low-risk behavior are infected with HIV. Of those 10,000 low-risk men, one is likely to be infected and will almost certainly test positive (99.8% likelihood). Of the 9,999 non-infected men, 1 will also test positive. Thus we expect that out of 2 men who test positive, only 1 has HIV. This is the situation you would be in if you were to test positive and are in a low-risk group. Your chance of having the virus would be about 1 in 2. [vii]
Therefore, for persons with no known risk behaviors, a second HIV test should be conducted before confirming the positive diagnosis.
The above wording appears so much clearer because our brain absorbs the information in a distinctly different way. Presenting the data as natural frequencies means we evaluate it using numbers we can intuitively understand. It yields the same result, but our brains find that result far easier to calculate. The difference between the two approaches is illustrated in Figure 1 below: presenting the data as a tree based on actual numbers of people, as shown on the left, yields the same result as the more complex probability formula on the right, but makes the correct answer much easier to reach.
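The equivalence shown in Figure 1 can be checked numerically. The sketch below runs both routes using the probabilities quoted in Gigerenzer's question; the variable names are mine, chosen for illustration.

```python
# Two routes to the same answer for the low-risk example:
# (1) Bayes' theorem on probabilities, (2) counting people (natural frequencies).
base_rate = 0.0001           # 1 in 10,000 low-risk men is infected
sensitivity = 0.9999         # infected men almost always test positive
false_positive_rate = 0.0001 # 1 in 10,000 uninfected men tests positive

# Route 1: Bayes' theorem (the "complex formula" side of Figure 1)
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive_rate
bayes_answer = base_rate * sensitivity / p_positive

# Route 2: natural frequencies in a crowd of 10,000 (the tree side of Figure 1)
true_pos = 1    # the one infected man tests positive
false_pos = 1   # of the 9,999 uninfected men, about one tests positive
frequency_answer = true_pos / (true_pos + false_pos)

print(round(bayes_answer, 4), frequency_answer)  # both are 0.5
```

Both routes give 1 in 2; the frequency route simply replaces the conditional-probability algebra with two counts a reader can picture.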
Figure 1: How People Interpret Natural Frequencies vs. Probabilities
Perception and Deception
Another simple example of poor or misleading risk communication can be found in the O. J. Simpson murder trial. One piece of information that O. J.’s defense team was able to quash was the prosecution’s assertion that spousal abuse leads to murder. The defense argued that Simpson’s history of assaulting his wife, Nicole Brown Simpson, was not relevant to whether or not he had murdered her. Alan Dershowitz, a Harvard law professor, argued in his book about the case that in the United States:
“As many as 4 million women are battered annually by husbands and boyfriends. Yet in 1992, according to the FBI Uniform Crime Reports, a total of 913 women were killed by their husbands and 519 were killed by their boyfriends. In other words, while there were 2 ½ to 4 million incidents of abuse, there were only 1,432 homicides. Some of these homicides may have occurred after a history of abuse but obviously most abuse, presumably even more serious abuse, does not end in murder” 
Essentially, the defense argued that based on these figures there is less than one homicide per 2,500 incidents of abuse, and used this to claim that there was no evidence of domestic violence being a prelude to murder. While factually true, this is not a useful statistic.
Not only was it unhelpful, it may well have misled the court. The correct question to ask was: “How many murdered women were killed by men who had previously abused them?” At the time of the trial, statistics showed that out of every 100,000 battered women, 45 were murdered, and of those 45, 40 were murdered by the men who had battered them. In short, roughly 90% of murdered women who had been battered by their partners had, in fact, been killed by those partners. Rather than a 1 in 2,500 probability, past data suggested a probability of roughly 90% that a battered woman who is murdered was killed by her batterer.
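Both framings can be recomputed directly from the figures quoted in the text. A sketch, noting that 40/45 rounds to 89% rather than a flat 90%, and that the 4 million figure is the upper end of Dershowitz's 2.5-4 million estimate:

```python
# Both statistics from the trial, recomputed from the figures in the text.

# Dershowitz's framing: homicides per incident of abuse.
homicides_1992 = 913 + 519            # wives + girlfriends killed (FBI, 1992)
abuse_incidents = 4_000_000           # upper end of the 2.5-4 million estimate
print(f"About 1 homicide per {abuse_incidents // homicides_1992:,} abuse incidents")

# The relevant framing: of battered women who were murdered, how many were
# killed by the man who battered them? (45 murdered per 100,000 battered
# women; 40 of those 45 killed by their batterer.)
share = 40 / 45
print(f"About {share:.0%} of murdered battered women were killed by their batterer")
```

The two numbers come from the same underlying data; only the conditioning differs, which is exactly why the choice of question matters so much.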
It’s also worth bearing in mind that the death of a woman at the hands of a partner who has previously battered her may appear predictable in hindsight, but when only 1 in 2,500 battered women goes on to be murdered, that statistic has little if any utility for predicting the risk of murder in advance.
Does a 90% probability constitute evidence of OJ’s guilt? Of course not! Whether or not it would have influenced the jury is another story. We can never say for certain but ask yourself – is it likely that the way in which this information was presented would have influenced your views?
A Call to Action
Given what you now know, is it any wonder that our political leaders and the general public have trouble understanding and prioritising risks such as terrorism, crime, health and national security, among hundreds of others? It seems that even in the 21st century, with all our amazing communications technologies, we have a long way to go to master the simple act of communicating risk in any meaningful fashion. The groundwork on how to present risks using natural frequencies has been done for us by practitioners in areas such as medicine, psychology and statistics. Perhaps it is time that we as security and risk professionals looked more closely at the data we already have in our field and exactly how we choose to present it.
[i] Kohn, L. T., Corrigan, J. M., and Donaldson, M. S. (2000), To Err Is Human: Building a Safer Health System, National Academy Press, Washington, DC, USA.
[ii] Reported in Gigerenzer, Gerd (2002), Calculated Risks, Simon & Schuster, New York, USA.
[iii] Gigerenzer, Gerd, Hoffrage, Ulrich and Ebert, A. (1998), AIDS counselling for low-risk clients, AIDS Care, 10, 197-211.
[iv] Gigerenzer, Gerd (2002), Calculated Risks, Simon & Schuster, New York, USA.
[v] Stine, G. J. (1996), Acquired Immune Deficiency Syndrome: Biological, Medical, Social, and Legal Issues (2nd ed.), Prentice Hall, Englewood Cliffs, NJ, USA.
[vi] Adapted from Gigerenzer (2002).
[vii] Gigerenzer, Hoffrage and Ebert (1998).
[viii] Dershowitz, Alan (1997), Reasonable Doubts: The Criminal Justice System and the O. J. Simpson Case, Touchstone, New York, USA.
Julian Talbot, M Risk Mgmt, CPP, FRMIA
This article is based on excerpts from Julian’s latest book, “Snapshot Guide to ISO 31000:2009 Risk Management fast!”, due for publication in 2010 at http://www.riskebooks.com.
info [at] juliantalbot.com