Terrorism, specifically how best to address it, is a subject of considerable debate in the United States. This article argues that people, expert and average alike, are prone to thinking about terrorism in an emotionally driven manner, using heuristics that express themselves in systematic patterns of judgment. These patterns can play out when evidence is unavailable to the person or institution, when it does not exist, and even in the presence of contrary evidence. Institutional overconfidence and the appeal of certainties based on possibilities figure prominently in a snowballing tendency to focus on what is emotionally and heuristically salient rather than on what is a legitimate threat. This is further complicated by the current political situation in the U.S., particularly the fact that the U.S. has effectively crippled its most salient terrorist threat: Al Qaeda in the Arabian Peninsula (AQAP). To this end, the article prescribes revisiting the idea of institutional accountability, individual measures for moderating the effect of fear on judgment, and a paradigm shift from “who” to “how” in security forecasting (Stewart & Burton, 2009).
Propagation of Myths Surrounding Counterterrorism
The word ‘terrorism’ has become a trigger in the collective consciousness of U.S. citizens. It can elicit fear, solidarity, outrage, indignation, patriotism, and a number of other feelings that can motivate for good or ill. Nor are its effects limited to the average person trying to make sense of political issues. The demonstrably real but often misrepresented threat of terrorist acts has also assumed a more prominent role in the administration of government and non-government organizations dealing with foreign policy.
The language of national security has changed. Analysts and politicians spend less time talking about the threat posed by an air force or tanks and more time talking about the threat posed by a small group of people with a plan (Stewart & Burton, 2009). As military and government targets tighten their security procedures, the focus of terror attacks shifts increasingly towards so-called “soft” targets (Stewart & Burton, 2009): typically public places with limited security that are frequented by foreigners and diplomats, such as hotels and markets. This makes civilian casualties likely not merely as collateral damage, but as the intended result.
For these reasons, it comes as no surprise that people in and from the U.S. are worried about terrorism. Attacks seem to appear out of nowhere, leading to catastrophic results. Even so, many common beliefs and policies do not address the actual risk, but instead the fears that have become most salient. International security strategist Thomas Barnett (2005) claims that we are trapped in a pattern of fabricating so-called “imminent threats” to our national security in order to justify action, threats of a kind the country has not truly faced since the Cuban Missile Crisis. Of interest to this article are the misleading and sometimes spurious assumptions that at times guide both policy and popular opinion regarding how best to address the threat of terrorism. The goal here is not to prescribe policy, but rather to identify some of the individual and institutional mechanisms at work in the spread of misinformation.
Daniel Kahneman and Amos Tversky performed a compelling empirical study at the individual level (1983 as cited in Kahneman & Tversky, 2000) before the events of September 11, 2001 made the issue of terrorism more salient. It is a powerful demonstration of how a specific threat can be viewed as more dangerous than an inclusive category.
Kahneman and Tversky (1983) interviewed sixteen MBA students who were planning to take a trip to Thailand as part of their degree program. The U.S. State Department had at that time issued a warning about potential acts of terrorism in Thailand. Kahneman and Tversky (1983) asked the students how much they would be willing to pay for $100,000 worth of “terrorism insurance” covering their flights to and from Bangkok, offered under one of two plans. Each was essentially a life insurance policy: the flight insurance paid out in case of death due to terrorism in the air, while the travel insurance paid out in case of any terror-related death.
While Kahneman and Tversky had a number of findings, the one most interesting to this article is that students asked about flight insurance were willing to pay $13.90 on average, while those asked about the more comprehensive terror insurance were willing to pay only $7.44. Kahneman and Tversky suggest that this is a result of what they call the vividness and availability heuristics. It has been suggested (Shedler & Manis, 1986) that vividness and availability are related cognitive phenomena in which people equate the ease of bringing an event to mind with the statistical likelihood that the event will occur. Thus, students could more easily bring to mind thoughts of air-related terrorism, and consequently “behaved” (offered to pay) as if it were more likely when the insurance plan was framed around a more specific threat. While Kahneman and Tversky (1983) were perhaps not attempting to make a point specifically about fear, they suggest a potential cognitive mechanism through which fear produces skewed beliefs about the likelihood of a low-probability aversive event. The finding also suggests that people may undervalue a more general and comprehensive protection plan compared to one that addresses a more specific threat. This may be related to the conjunction fallacy associated with the representativeness heuristic (Bazerman, 2006; Tversky & Kahneman, 1983 as cited in Camerer & Kunreuther, 1989), wherein people consider a detailed account of an event more likely than a vague and more inclusive one. Because a more specific event better represents what people expect (which, as stated earlier, is often based on a statistically weak sample size), it is seen as more likely.
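The formal core of the conjunction fallacy is simple: the specific event (death from terrorism in the air) is a subset of the inclusive event (any terror-related death), so its probability can never be higher. A minimal sketch, using purely hypothetical numbers chosen for illustration rather than figures from the study:

```python
# Conjunction rule: for any events A and B, P(A and B) <= P(A).
# Judging the more specific event likelier than the inclusive one,
# as the insurance offers implied, violates this basic rule.
# All numbers below are hypothetical, for illustration only.
p_terror_death = 0.0001        # P(any terror-related death on the trip)
p_air_given_terror = 0.5       # P(death occurs in the air | terror death)

# The specific event's probability is the product of the two:
p_air_terror = p_terror_death * p_air_given_terror

# The conjunction can never exceed the inclusive event's probability.
assert p_air_terror <= p_terror_death
print(p_air_terror)  # 5e-05, half the probability of the inclusive event
```

Willingness to pay roughly tracks perceived probability, so offering more for the narrower coverage amounts to treating the subset as larger than the whole.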
Other literature has suggested that fear specifically can be a strong factor in an individual’s expectation of low-probability events (Glassner, 1999; Camerer & Kunreuther, 1989). Camerer and Kunreuther (1989) list a number of fear-based beliefs people hold, such as those concerning the safety of air travel and nuclear power. They suggest that risks that are “unknowable” or uncontrollable tend to elicit particularly strong fear-based responses.
Unknowability and uncontrollability are particularly potent when combined with low-probability aversive events. Because such events seldom occur, people tend to base their opinions about them on a statistically meaningless sample size (Camerer & Kunreuther, 1989). Thus, when something actually does go wrong in areas where people have cultivated fear-based beliefs, it is often seen as a clear sign that they were ‘right all along,’ and that whatever protective and preventative measures were in place were ineffective (Camerer & Kunreuther, 1989). A more nuanced presentation might say that the preventative measures were inadequate and ‘we need to do better,’ but even that statement may be based on what amounts to statistical noise.
Terror attacks seem to fit this picture on all counts: they are low-probability, plainly uncontrollable, and the risk is reasonably “unknowable.” Even professional analysts have at best a vague idea of when a terror attack may or may not occur. Given the nature of their work, as well as that of law enforcement and the military, any highly predictable attack would in theory be averted. Similarly, successful terror attacks tend by their nature to involve relatively little advance knowledge that would allow accurate forecasting (“The Terrorist Attack Cycle (4),” 2005).
Fear and heuristics each play a unique role in the problem of reasoning about terrorism. Fear is crucial as a motivator. If people believed that “we” had nothing to fear from terrorists, then they would not be motivated to form strong beliefs using whatever information and faculties were available to them. Simply put, they would not care. Because fear gives people motivation to care, they then resort to the heuristically and emotionally biased reasoning common with low-probability aversive events. An individual usually also looks to outside sources for more information, but whether he or she is apt to have better luck there is an open question.
While the odds are high that many will revert to emotionally driven, heuristic-based beliefs regarding low-probability events in general, the odds are even higher that they will do so in the absence of reliable information from experts. In a sense, the average citizen is not entirely at fault, because the very nature of the problem of terrorism lends itself to scarce information of dubious credibility.
This leads to the question of how institutions, some of which do use statistics to guide their decisions when possible or expedient, can still fall prey to acting on fear-based beliefs. After all, only through anthropomorphism could an institution be afraid of something. The answer is threefold. First: an issue as complicated and pressing as terrorism requires that even experts act on incomplete information, even if the action is to say “we consider the risk low.” Second: institutions consist of people, and when they must act on sparse information they are subject to the same emotional and heuristic biases as the rest of us. What kind of evidence is considered and how it is used are as important as the use of evidence itself. Third: most institutions in question, whether directly or indirectly, hope to reach the average citizen, and in order to do so must to some extent address the average citizen’s concerns.
Whether it is Fox News or the Department of Homeland Security, institutions can seldom afford to wait until all the facts are in before rendering a decision. This sounds hyperbolic, but carries some truth. Often authorities struggle to piece together the details regarding a terror attack (attempted or successful) months if not years after the fact. Khalid Sheikh Mohammed, alleged mastermind of the attacks of September 11, 2001, has yet to be tried and sentenced; his trial location in New York City was only recently announced (Savage, 2009). Part of this delay has to do with the U.S. legal system and the political importance of his trial, but ultimately much is still unknown about the initial attack.
Seldom is the problem entirely a lack of available information. Rather, analysts and investigators have to wade through a sea of information and weed out what is relevant. Particularly in culturally diverse areas, attempts to profile suspects based on background are difficult (Burton, 2006). Sorting out every brand of radical who may resort to terrorism, or even one, is a daunting challenge. Additionally, analysts have to separate those affectionately known as “jihadist cheerleaders,” who merely hold radical views, from hardened veterans who can and will use violence (Burton, 2006). Operating in a country with civil rights protections further complicates matters, as there are limits on acceptable methods of gathering evidence.
In such situations, speculation is somewhat inevitable, even if it comes in the guise of an expert opinion. Expert judgment can be subject to many of the same pitfalls as the average person’s once their area of expertise runs out of explanatory power, but one phenomenon is particularly problematic: overconfidence. For purposes of this article, overconfidence is a belief in the accuracy of one’s judgment beyond what available evidence supports. This phenomenon has been documented in a variety of experts including physicians, clinical psychologists, negotiators, lawyers, and security analysts (Tversky & Shafir, 2004).
There are a number of conditions that facilitate expert overconfidence and, as a result, ‘expert’ opinions can be minimally anchored in reality (Tversky & Shafir, 2004). Situations where evidence is of high “strength” but low “weight” tend to result in overconfidence, while the opposite tends to result in underconfidence. Strength is the ‘extremeness’ of the evidence: loosely, its persuasive power and how much the event differs from a normal state (statistical or perceived). Eyewitness testimony by an emotional individual would be considered high strength. Weight is the predictive validity of the evidence, or how relevant it is statistically; within the terrorism context, this could be considered the credibility of the source (Tversky & Shafir, 2004). Thus, in many cases a particularly strong claim will trump a weaker claim of equal, or even greater, credibility.
Another common problem is lack of discriminability (Tversky & Shafir, 2004). While evidence may support the hypothesis in question, it may also support one or more plausible alternative hypotheses. In scientific journals, this is part of the process: peer review points out flaws in reply to an article, the study evolves, further theories are formulated, and so on. To a certain extent this happens in the policy world as well, but once a broadcast or official statement is released, the information is public. This makes lack of discriminability a relevant consideration.
Also relevant are findings that as a judgment call becomes increasingly difficult, overconfidence increases dramatically (Tversky & Shafir, 2004). This seems counterintuitive, which is what makes it particularly insidious. While people do tend to understand that their odds on a difficult task are lower than on an easier one, they still overestimate those odds, drastically so in the case of near-impossible tasks. As mentioned earlier, security forecasting of specific events often falls within that category.
Last but not least, a lack of immediate and meaningful feedback prevents experts from adjusting their opinions accurately based on prior experience (Tversky & Shafir, 2004). Analysts are required to deal with ambiguous information and situations, and as a result their predictions also tend to be ambiguous. Often, they will give a vague assessment that the risk of a terror attack is “high” or “low” over a stretch of several weeks. For example, the Transportation Security Administration’s (TSA) threat level has been “orange” every time I have traveled recently, yet no terror attacks occurred. Due to the relatively low probability of terror attacks in general, institutions lack timely and meaningful feedback on their assessments. If analysts or agencies declare an “elevated risk” and nothing happens, the tendency among institutions is to assume they were correct but the risk simply did not manifest in an attack. Because the predictions themselves seldom carry a precise meaning or an associated statistical figure, it is virtually impossible for such a forecast to be proven wrong.
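The unfalsifiability of vague forecasts can be made concrete with a scoring rule. The sketch below uses the Brier score, which is my illustrative choice rather than a method drawn from this article’s sources, and entirely hypothetical numbers: a forecast stated as a probability accumulates a measurable penalty when it habitually over-predicts, whereas “risk is elevated” assigns no probability and can never be scored at all.

```python
# Brier score: mean squared error between a stated probability and the
# observed outcome (1 = event occurred, 0 = it did not). Lower is better.
# Hypothetical example: an analyst who repeatedly calls attacks likely.
forecasts = [0.9, 0.8, 0.9, 0.7]   # stated P(attack this week)
outcomes  = [0,   0,   0,   0]     # no attacks actually occurred

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(round(brier, 4))  # 0.6875 -- a poor score; chance-level guessing of
                        # 0.5 every week would have scored 0.25

# A vague forecast ("risk is elevated") maps to no number at all,
# so no such penalty can ever be computed for it.
```

Requiring forecasts in this numeric form is one way institutions could create exactly the feedback loop whose absence the paragraph above describes.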
Nor can the expert be saved by simple peer review. Evidence suggests that groups are as subject to overconfidence as individuals (Bazerman, 2006). While a group setting can have a moderating effect on extreme opinions, everybody being on the same page can nonetheless mean everybody being on the wrong page. Each group member is still human, and carries to varying extents the same emotional and heuristic cocktail that the rest of us do.
This is problematic because expert opinions carry a fair amount of weight in contemporary society. In modern times, knowledge and credibility have shifted from being based primarily on familiarity with the source to being based on faith in institutions and the abstract capacities they are believed to possess (Shapin, 2004, p. 411). This can even extend beyond the credibility the experts themselves ascribe to a statement or policy, to the point where ‘strong evidence supporting the possibility that this may be the case’ becomes ‘this is fact’ in colloquial discourse.
This leads to the third point: institutions tend to give people what they want and expect. An institution aims to deliver information in a manner that will draw attention. Not only does such publicity often suit institutional goals (e.g., a news program needs an audience), but it can also make the institution appear useful and secure its own future existence. Certainty is desirable, even if it is conditional upon uncertain events (Bazerman, 2006). People want to be certain they are safe, and they want to be certain a suspect committed the crime of which she was accused. There are a variety of possible interpretations of this finding in policy and media, one of which is that institutions are better received when they offer ‘conditional certainties.’
These factors may account for what is largely maintenance of the status quo in how institutions address the issue of terrorism. Note that many of the references in this article are decades old, yet have done little to reshape policies in the interim. While the fields of cognitive psychology and international affairs have advanced significantly in that time, biologically people have not evolved much. This is not to say that meaningful reform along these lines is a fool’s errand; rather, until such an effort accounts for people being people, it is likely to fail.
Interaction between Individual and Institution
It is not sufficient to say that both institutions and individuals can be motivated by fear into facile heuristic reasoning about terrorism. Neither effect occurs in a vacuum, and the interface between the individual and the institution is a major factor in the spread of misinformation. Consider the following examples from current events.
Liquids on a Plane
Unsurprisingly, post-9/11 airline security has been a priority, both for the U.S. government and for media coverage. One of the more visible and controversial policies enacted by TSA has been the recent (though now seemingly ancient) restrictions on liquids and gels in carry-on luggage. Is this policy based on a sound analysis of the best available evidence, or on emotionally salient heuristics?
It is doubtful that this policy would actually stop liquid explosives from being brought onto a plane, as they could simply be stored in checked luggage (Stratfor, 2006). Rather, it is meant to stop a specific method of deploying liquid explosives. Officials believed that terrorists were planning to work in two- to three-person teams, each member bringing a seemingly innocuous component of an improvised explosive device (IED) onto the plane. They would then assemble and detonate it in flight.
While the situation certainly was dangerous, consider the countermeasure employed. What was initially a quick fix to an “active” plot is still in effect years later, and has seen little if any refinement. It protects against a vivid, available, and representative plot of the kind people imagine when they think of terrorism, but does nothing against a wider and more inclusive set of circumstances. This points to heuristic bias, either in the intelligence leading to the decision or in the decision itself.
Additionally, the policy provides the traveler with a false sense of security by suggesting that ‘we found this potential problem and stopped it in its tracks.’ TSA offers a certainty based upon a possibility, which as Bazerman (2006) pointed out has a selective advantage over offering merely a possibility. The policy’s survival can be attributed as much, if not more, to fear and heuristic exploitation as to its merits as a security measure.
Nor did the policy necessarily put a dent in the problems facing airline security. As of 2006, checked luggage, air mail, and cargo were still major vulnerabilities in the air travel system (Stratfor, 2006). The destruction of a cargo plane, for example, could result in hundreds of deaths (“Vulnerabilities in the air cargo system,” 2005). These threats are not, however, as salient in the media or among government agencies as that of a terrorist physically carrying an IED onto a passenger plane.
As a closing thought, it has also been suggested that the Department of Homeland Security (DHS) in its entirety is a “strategic feel-good measure” (Barnett, 2005). One possible reason for this is that it further fragments U.S. government agencies, producing gaps in communication (Associated Press, 2005). Indeed, Barnett accuses DHS of being an agency whose primary role is to scare the American public by exaggerating terrorist threats and enacting ineffective security measures (Barnett, 2009). If his accusation carries weight, it is easy enough for DHS to then go to the rest of the government and/or the public and point out all the disasters it ‘prevented.’ As in the case of liquids in carry-on baggage, this would suggest overconfidence and the marketing of certainties based on possibilities.
Terror Returns to NYC
As mentioned earlier, a number of terror suspects accused of involvement with the 9/11 attacks (including the alleged mastermind) will be on trial in New York City (Savage, 2009). Not long after the announcement, opinion columns and political blogs exploded with claims of government irresponsibility. While it is almost unnecessary to suggest that many of these opinions are fear-driven and based on heuristic reasoning, it is impressive that some choose to actively embrace the process.
Consider the blogger known only as “LooseKannon,” who claims that the government’s actions are both “naive and dangerous” (2009). His argument hinges primarily upon NYC’s status as a prime target for terrorist attacks, and upon accepting that the trials will increase that risk significantly. Where he truly comes into his own is in discussing the psychological damage NYC as a gestalt whole has suffered (including a story about how he and others gladly sought imaginary protection after 9/11), and how holding the trials in NYC would essentially trigger mass Post-Traumatic Stress Disorder (PTSD). Aside from an unsupported claim about additional danger, his argument is that the trials are a bad idea because they would scare NYC residents. For this individual, acting on fear constitutes not only a means to an end but a worthy end unto itself.
Taken with the same grain of salt as all security forecasting, it is unlikely that NYC residents have anything additional to be afraid of (Slakas, 2009; West & Burton, 2009). Or, to be more precise, it is unlikely that there is a better place to hold the trials in a U.S. civilian court. The staff of the U.S. attorney’s office in New York City has amassed years of experience prosecuting terrorists (during which average residents probably didn’t so much as bat an eyelash), and the U.S. Marshals’ Special Operations Group (SOG) specializes in providing physical security for precisely these types of trials. Additionally, the NYPD now has extensive training and manpower for dealing with terrorist threats that few other cities can match (West & Burton, 2009). In reality, it may be more dangerous and naïve to hold the trial anywhere else than in NYC.
War on Terror
It is common to encounter political or interpersonal rhetoric to the effect that the U.S. armed forces are fighting a war on terror. A fundamental sticking point is that terrorists by nature usually prefer not to operate in a wartime environment (Barnett, 2005). Rather, they increasingly prefer civilian targets, or military targets that do not expect to be attacked. Arguably, the public has picked up on this disconnect, in part due to a lack of representativeness and no immediate reason to fear somebody across the world, which helps to explain why the idea has become fairly controversial.
Consider the recent Fort Hood shooting. While it is still unclear whether Nidal Malik Hasan’s intent was to commit a terror attack for the end purpose of media exploitation, it is well documented at this point that he was a Muslim with radical political views and contact with known jihadists, and that he attacked a base populated by troops being prepared for deployment (“The Hasan Case,” 2009). As such, the end effect is much the same. The effectiveness of this attack can largely be attributed to the soft nature of the target and the difficulty of profiling Hasan among a plethora of false positives in the intelligence world (“The Hasan Case,” 2009). Soldiers at Fort Hood are only issued weapons for training purposes, making it essentially a peacetime target despite its arterial role in feeding the war on terror.
Perhaps a more accurate description of the war on terror would be that the U.S. armed forces are conducting a counterinsurgency designed to hinder specific terror groups, most prominently Al Qaeda in the Arabian Peninsula (AQAP). To this end, they have severely disrupted AQAP’s ability to conduct terror attacks on the U.S. homeland (Stewart & Burton, 2009). Fragmented leadership and the creation of a less permissive environment abroad have essentially crippled its ability to conduct the type of large-scale attack that occurred on 9/11.
Even so, AQAP is a fairly specific threat made salient to U.S. citizens through the 9/11 attacks and the ensuing political exploitation. Exploitation in this sense does not refer to an inherently negative process but rather to a number of processes that essentially created AQAP as a major enemy in the public eye. Because AQAP has become a highly salient (available, representative, feared) target, institutions may over-commit resources to addressing that particular threat. Certainly, it would not sound as good to call government policy “the war on those terrorists deemed most politically relevant.” While crippling AQAP may act as a symbolic statement or deterrent to other groups, it does not directly affect them.
Indeed, Stewart and Burton (2009) suggest that the greatest threat to U.S. homeland security today comes not from Al Qaeda itself, but from home-grown grassroots cells and so-called “lone wolves” inspired to commit terror attacks. These people are particularly dangerous because they lack traceable ties to a larger organization and often have minimal communication accessible to law-enforcement agencies. Al Qaeda itself, meanwhile, has largely become a P.R. machine, glorifying past acts and issuing public statements encouraging jihadist cheerleaders to step off the sidelines (Stewart & Burton, 2009). While this contribution may be a significant one, Al Qaeda and radical Islam are far from the only threats to national security.
On November 28th, a Russian train was derailed by an IED (“Russia: Bomb attack,” 2009). A white supremacist group known as Combat 18 claimed responsibility for the attack. A Google search on “Combat 18” (December 1, 2009) turns up no major U.S. news sources covering the group in the first five pages of results. Even so, Combat 18 has a branch operating in the U.S. (“Russia: Bomb attack,” 2009). In a sense, giving a terrorist organization media coverage often gives it exactly what it wants while facilitating fear-based salience. At the same time, neglecting a potential “rising star” in the terrorist world can prove costly. A corollary of fear, availability, and representativeness driving institutional attention is that a lack thereof may allow a group to prepare an attack undisturbed. This is a legitimate concern because, once a terrorist plot enters the ‘operational’ phase, it is almost guaranteed to succeed (“The Terrorist Attack Cycle (4),” 2005). More importantly, it was precisely this kind of attack that occurred on September 11, 2001 (Friedman, 2009). Perhaps the attitude that “we don’t want another 9/11” is being pursued and endorsed a bit too literally.
“There’s No Time!” (Kiefer Sutherland)
It seems that every season of the hit show “24” (Surnow, 2001-2009) involves nuclear weapons, chemical weapons, biological weapons, or all of the above wielded by terrorists. While this is perhaps not the ultimate window into the North American collective consciousness, it is illuminating. Clearly, the idea sells. Each season creates a worst-case scenario with a disastrous (and low-probability) potential outcome and representative bad guys running the gamut from Islamic radicals to Russian ex-military to greedy private military companies.
Practically speaking, this is more or less rubbish (“Debunking myths,” 2009). As a representative example, even AQAP in its prime was only able to acquire fake components for a nuclear weapon. Both nuclear and chemical weapons require extensive financial backing, highly trained personnel, and undisturbed time to build. Any installation designed for the manufacture of WMDs would be extremely vulnerable to discovery; the international community is more than happy to keep track of certain necessary components and, when needed, give the U.S. Air Force a forwarding address (“Debunking myths,” 2009).
The above does not, however, apply to home-brewed devices. While the thought of basement labs churning out WMDs may sound alarming, attempts to deploy crude chemical or biological weapons in terror attacks have a poor track record. Aside from being more costly and time-consuming than traditional explosives, such weapons also have a high rate of outright malfunction or of not working as intended (“Debunking myths,” 2009). Dispersal is difficult to control well enough that bystanders receive a lethal dose rather than a merely irritating one, and handling such substances before an attack is inherently dangerous to the terrorist. With costs and benefits weighed, these devices are unlikely to be a viable terrorist method.
Implications for Public Dissemination of Information on Terrorism
It is telling that the information on heuristics and fear-driven thinking cited here (Kahneman & Tversky, 1983; Glassner, 1999) is relatively old. The policy implications of this material were readily apparent when it was first published, yet reform along those lines has been scarce. Inattention to cognitive and social psychology may partially account for this, but perhaps more troublesome is the selective advantage that ideas exploiting heuristics enjoy in planning and public discourse.
One possible step towards minimizing their adverse effects is to redefine institutional accountability. Rather than being held accountable for delivering the information people expect or want to hear, institutions would be held responsible for identifying, where possible, systematic flaws in information that is otherwise largely accepted. This is not necessarily a pipe dream. Strategic Forecasting Inc. (www.stratfor.com) is an example of an NGO staffed by members of the intelligence community no longer affiliated with the U.S. government. As such, its staff have the experience to warrant attention but lack the pressure placed on other institutions to deliver forecasts that serve a specific agenda. A costly subscription fee means that Stratfor can support itself with a much smaller user base than the mainstream media, similar to the model employed by scholarly journals.
A more direct approach is to reduce fear-based salience itself. Lee, Gibson, Markon, and Lemyre (2009) found that individual preparation for terrorist attacks reduced psychological stress over the possibility of such attacks. Assuming that they measured the same sort of fear that may drive heuristic thinking, preparation could make people slower to resort to heuristics. At the institutional level, however, preparing for nigh-impossible events in order to reduce their emotional salience is of questionable effectiveness and can place an immense drain on resources.
A third possibility is to refocus the practice of security forecasting. This article has noted a propensity among analysts to pursue specific threats on minimal evidence. This is, in a sense, their job. It can create problems, however, when more pressing threats receive less attention as a result. If there is actionable intelligence that a specific person or group is pursuing a specific plot, then acting against precisely that plot makes sense. Across the rest of the gray expanse considered significant, perhaps evidence should factor into a types-based analysis more than an actor-based one. This would in essence force a mode of thought less conducive to heuristics.
Volumes have been written on the subject of terrorism, to which this article adds a meager portion. The truth, as best it can be determined, is that the world is both less and more dangerous than people think. A systematic tendency to misjudge certain threats based on heuristics rather than on the situation can lead to costly mistakes, both financial and in terms of security. These patterns of judgment are pervasive and persistent in the face of psychological findings precisely because they represent the human element of decision-making. While it is neither possible nor desirable to erase heuristics and fear from the human repertoire, it is possible to mitigate the adverse effects they have on the world.
References
Associated Press (2005). Report: Poor coordination threatens security. Retrieved from http://www.msnbc.msn.com/id/6867491/
Barnett, T. (2009). One scare-the-hell-out-of-the-American-public department was enough. Retrieved from http://thomaspmbarnett.com/weblog/2009/01/one_scare-the-hell-out-of-the-.html
Barnett, T. (2005). Thomas Barnett draws a new map for peace. Retrieved from http://www.ted.com/index.php/talks/thomas_barnett_draws_a_new_map_for_peace.html
Bazerman, M. (2006). Judgment in managerial decision making. Hoboken, NJ: John Wiley & Sons.
Burton, F. (2006). Tactical realities in the counterterrorism war. Stratfor. Retrieved from http://www.stratfor.com/tactical_realities_counterterrorism_war
Camerer, C. & Kunreuther, H. (1989). Decision processes for low-probability events: Policy implications. Journal of Policy Analysis and Management, 8(4), 565-592.
Friedman, G. (2009). Torture and the U.S. intelligence failure. Stratfor. Retrieved from http://www.stratfor.com/weekly/20090420_torture_and_u_s_intelligence_failure
Glassner, B. (1999). The culture of fear: Why Americans are afraid of the wrong things. New York, NY: Basic Books.
Kahneman, D. & Tversky, A. (2000). Choices, values and frames. Cambridge, UK: Cambridge University Press.
Lee, K., Gibson, S., Markon, M., & Lemyre, L. (2009). A preventative coping perspective of individual response to terrorism in Canada [Abstract]. Current Psychology, 28(2), 1046-1310.
LooseKannon (2009). Jury’s in: NYC terror trial naïve and dangerous. Retrieved from http://loosekannon.com/jurys-in-nyc-terror-trial-naive-and-dangerous/
National Counterterrorism Center (2009). 2008 Report on Terrorism. Retrieved from http://www.fas.org/irp/threat/nctc2008.pdf
Savage, C. (2009). Accused 9/11 mastermind to face civilian trial in N.Y. The New York Times, November 13, 2009. Retrieved from http://www.nytimes.com/2009/11/14/us/14terror.html
Shapin, S. (1994). The social history of truth: Civility and science in seventeenth-century England. Chicago, IL: University of Chicago Press.
Slakas, J. (2009). We shouldn’t fear terror trials in NYC. Fox News. Retrieved from http://www.foxnews.com/opinion/2009/11/20/joe-slakas-terror-trials-new-york-giuliani/
Stewart, S. & Burton, F. (2009). Counterterrorism: shifting from ‘who’ to ‘how.’ Stratfor. Retrieved from http://www.stratfor.com/weekly/20091104_counterterrorism_shifting_who_how
Stewart, S. & Burton, F. (2009). Paying attention to the grassroots. Stratfor. Retrieved from http://www.stratfor.com/weekly/20090805_paying_attention_grassroots
Stewart, S. & Burton, F. (2009). The Hasan case: overt clues and tactical challenges. Stratfor. Retrieved from http://www.stratfor.com/weekly/20091111_hasan_case_overt_clues_and_tactical_challenges
Stratfor (2005). The terrorist attack cycle (4): deployment and attack. Retrieved from http://www.stratfor.com/terrorist_attack_cycle_deployment_and_attack
Stratfor (2005). U.S.: vulnerabilities in the air cargo system. Retrieved from http://www.stratfor.com/u_s_vulnerabilities_air_cargo_system
Stratfor (2006). The quick fix for airline security. Retrieved from http://www.stratfor.com/quick_fix_airline_security
Stratfor (2009). Debunking myths about nuclear weapons and terrorism. Retrieved from http://www.stratfor.com/analysis/20090528_debunking_myths_about_nuclear_weapons_and_terrorism
Stratfor (2009). Russia: bomb attack on train. Retrieved from http://www.stratfor.com/analysis/20091128_russia_rail_attack_train
Surnow, J. (2001-2009). 24 [Television series]. 20th Century Fox Television.
Tversky, A. & Shafir, E. (2004). The weighing of evidence and the determinants of confidence. In Preference, belief, and similarity: Selected writings. Cambridge, MA: MIT Press.
West, B., & Burton, F. (2009). A terrorist trial in New York City. Stratfor. Retrieved from http://www.stratfor.com/weekly/20091118_terrorist_trial_new_york_city