Journal of Homeland Security and Emergency Management


Deliberative Risk Ranking to Inform Homeland Security Strategic Planning

Russell Lundberg
  • Corresponding author
  • Department of Security Studies, Sam Houston State University, Huntsville, TX 77341-2296, USA
  • RAND Corporation, 1776 Main Street, Santa Monica, CA 90401, USA
Henry H. Willis
  • RAND Corporation, Homeland Security and Defense Center, PA, USA
Published Online: 2016-04-13 | DOI: https://doi.org/10.1515/jhsem-2015-0065

Abstract

Reliably managing homeland security risks requires an understanding of which risks are more concerning than others. This paper applies a validated risk ranking methodology, the Deliberative Method for Ranking Risks, to the homeland security domain. This method provides a structured approach to elicit risk rankings from participants based on deliberative consideration of science-based risk assessments. Steps in this effort include first identifying the set of attributes that must be covered when describing terrorism and disaster hazards in a comprehensive manner, then developing concise summaries of existing knowledge of a broad set of homeland security hazards. Using these materials, the study elicits relative concerns about the hazards that are being considered. The relative concerns about hazards provide a starting point for prioritizing solutions for reducing risks to homeland security. The consistency and agreement of the rankings, as well as the individual satisfaction with the process and results, suggest that the Deliberative Method for Ranking Risks can be appropriately applied in the homeland security domain. The rankings themselves reflected greater concern over natural disasters than terrorist events. This research suggests that deliberative risk ranking methods could provide a valid and reliable description of individuals’ concerns about homeland security risks to inform strategic planning.

Keywords: homeland security; priority setting; risk ranking

1 Introduction

The homeland security domain encompasses countless types of accidents, disasters, and malicious attacks motivated by terrorism and crime. The damage caused by these events can be high – the attacks on September 11, 2001 killed thousands and cost billions of dollars, and even larger events (such as a terrorist nuclear detonation) are imaginable. While in most years the number killed in natural disasters and terrorist events is low, the small probability of such worst-case scenarios results in a risk on the order of tens of billions of dollars and hundreds to thousands of lives per year on average (Lundberg and Willis 2015). Deciding how to manage these risks involves making choices among alternatives to protect facilities, strengthen infrastructure resilience, and enhance communities’ emergency preparedness. Ideally these choices would reflect a strategy that takes into account societal perceptions of what is an acceptable risk (Fischhoff et al. 1984; International Risk Governance Council 2005; Drennan et al. 2014). However, doing this requires comparing risks that are different in kind and consequence, and as a result are perceived differently.

How people perceive the risks around them influences the choices they make about activities to pursue, opportunities to take, and situations to avoid. Reliably capturing these perceptions in risk management is a challenging example of comparative risk assessment. One challenge of comparative risk ranking is bias, where aspects of the risk other than consequence distort one’s perception of the risk (Lichtenstein et al. 1978; Kahneman and Tversky 1982; Slovic 1987; Slovic et al. 2004; Viscusi and Zeckhauser 2015). Homeland security risks are particularly susceptible to these concerns: large but rare events associated with dread consequences (in the case of terrorism, intentionally so) amplify perceptions of risk (Slovic and Weber 2002; Sunstein 2003; Kunreuther and Michel-Kerjan 2004). Another challenge involves disagreement when bringing together diverse risks with multiple, differing aspects of consequence. Even if people could understand the risks perfectly, they may still come to different conclusions because they value the consequences differently. For example, the relative concern for casualties as compared to economic damage may differ from one person to another. Valuing risk involves an inherent subjectivity. A third challenge involves the difficulty of estimating risks and risk reductions in the homeland security domain, given rare events, non-probabilistic actors, interdependencies, and other complications (Cox Jr 2008; Cox Jr 2009; Ezell et al. 2010; Brown and Cox 2011).

Identifying risks on a national level is an important step in national risk prioritization. National risk assessments are being conducted by governments across the world, including the UK, the Netherlands, Japan, and others (OECD 2009; Willis et al. 2012; Vlek 2013). According to the OECD, “[national risk assessments are important] for top level policymakers to make informed decisions on the relative benefits of buying down risks to public health, safety, or security” (OECD 2009: p. 40). While many of these risk assessments reflect a perspective similar to that of the US in considering threats to the homeland, national conceptualizations of homeland security vary, and some national risk assessments include social, economic, and geo-political risks (such as structural unemployment or the impact of the Scottish Referendum) in addition to terrorism, accidents, and disease (Department of the Taoiseach 2014).

In the US, national risk assessment has focused on risks in the homeland security domain – terrorism, major accidents, and major disasters. The Strategic National Risk Assessment is one such risk assessment, executed in support of Presidential Policy Directive 8 (PPD-8) (DHS 2011). A National Academy of Sciences review of Department of Homeland Security (DHS) risk analyses identifies developing methods of comparative risk assessment as an analytic priority for homeland security planning and analysis (Committee to Review the DHS’s Approach to Risk Analysis 2010). Specifically, they recommend the use of qualitative methodologies that can “help illuminate the discussion of risks and thus aid decision makers” (Committee to Review the DHS’s Approach to Risk Analysis 2010: p. 9). Qualitative comparative risk assessments allow individuals to incorporate two aspects of risk assessment into deliberations. First, some characteristics of risk are not easily measured using quantitative approaches because of either a lack of valid measures or a lack of reliable data. For example, the psychological impacts of risks or environmental consequences of risk have been characterized using qualitative methods in past comparative risk analysis studies (Morgan 1999; Florig et al. 2001; Morgan et al. 2001; Willis et al. 2004). Second, a quantitative approach requires importance weights to aggregate across different types of consequences. This can be challenging if it is unclear whose values the weights should represent or if it is impractical to elicit the weights from the appropriate individuals or groups (Committee to Review the DHS’s Approach to Risk Analysis 2010: p. 84).

This paper examines the use of one such qualitative methodology, a deliberative risk ranking method, as applied in the homeland security domain. We apply this methodology to identify the homeland security concerns of members of the general public. As DHS has been encouraged to strengthen its scientific practices regarding risk analysis (Committee to Review the DHS’s Approach to Risk Analysis 2010: p. 112), validating this risk assessment methodology for use in the homeland security domain can be a useful first step towards integrating the methodology in practical applications.

1.1 The Deliberative Method for Ranking Risks

The Deliberative Method for Ranking Risks (also known as the Carnegie Mellon Risk Ranking Method) was developed to address issues of concern in environmental policy. From its creation in the 1970s, the Environmental Protection Agency (EPA) had a mandate to manage a portfolio of environmental risks that varied greatly in kind and consequence. After a 1987 EPA report entitled Unfinished Business: A Comparative Assessment of Environmental Problems concluded that environmental risks were too often considered in isolation and in reaction to political perceptions rather than actual risk (EPA 1987), steps were taken to estimate these risks relative to each other. Initial comparative risk assessments were often ad hoc, leading to calls for more systematic approaches. In response, Morgan et al. proposed a framework for a risk-ranking method that could engage a wide range of stakeholders in a systematic process using multiple quantitative and qualitative estimates of consequence (Morgan et al. 1996). Later papers developed the framework into a systematic process called the Deliberative Method for Ranking Risks (Jenni 1997; Morgan et al. 2000; Florig et al. 2001; Morgan et al. 2001).

The Deliberative Method for Ranking Risks was initially tested in a ranking of the health and safety risks at a school and was validated for both lay people and risk experts (Morgan 1999; Florig et al. 2001; Morgan et al. 2001). Later papers expanded the set of risks to include both human health and ecological environmental concerns (Willis et al. 2004), applied the risk ranking method in high-level governmental decision-making contexts (Willis et al. 2010), and examined the utility of the method in the different contexts of the UAE (Willis et al. 2010) and China (Xu et al. 2011). This paper presents the first known attempt to apply the Deliberative Method for Ranking Risks to homeland security risks.

1.2 Overview of this Analysis

We adopt the goals of Florig et al., who state “a good ranking should (a) make use of available theory and empirical knowledge in behavioral social science, decision theory, and risk analysis; (b) encourage those doing the ranking to systematically consider all relevant information; (c) assist individual participants in expressing (or constructing) internally consistent rankings; (d) ensure that participants understand the procedures and feel satisfied with both the process and the products; and (e) describe the level of agreement and sources of disagreement among participants” (Florig et al. 2001: p. 914). To assess whether we met these goals, we examined three things: the extent to which the individual rankings converge; the extent to which the individual rankings correlate with the consequences; and the extent to which the participants support the process and results.

The primary question of this analysis is whether approaches to using comparative risk analysis in a deliberative risk management process, developed in other domains, are applicable in the homeland security domain. To answer this question, we consider the following subordinate questions: Which dimensions of risk should be used to describe homeland security risks? Can assessments be made across these dimensions? Does the Deliberative Method for Ranking Risks perform in the homeland security domain as it has performed in other domains? Further, we consider the implications of the public ranking itself: which hazards are of greatest concern; which hazards have the greatest consensus in their rankings and which have the least; and what factors contribute to people ranking some homeland security risks higher than others. These latter considerations are limited in their claims of representativeness, but they may provide interesting insights for later consideration.

The remainder of this article is organized as follows. Section 2 describes the steps involved in the Deliberative Method for Ranking Risks, including how those steps were implemented for homeland security risks. Section 3 presents the results of the risk ranking exercise, with attention both to the outcome of the rankings and the process by which these rankings were generated. Section 4 explores the implication of the rankings and ranking process for setting homeland security priorities.

2 Methods

The Deliberative Method for Ranking Risks involves five steps (see Figure 1) (Florig et al. 2001). The first steps involve conceptualizing the risks, concurrently determining the risks to rank and the attributes that inform that ranking. Next, the risks are assessed in terms of the identified attributes and described in risk summary sheets that reflect best practices of risk communication. These risk sheets then inform workshops in which a structured process guides stakeholders through the risks to develop a ranking. Finally, the results of the workshops are analyzed, identifying the relevant issues from the resulting rankings.

Figure 1:

Steps in the Deliberative Method for Ranking Risks (Adapted from Florig et al. 2001).

The initial steps of conceptualizing and describing the risks (steps A through C) are described in detail in Lundberg (2013). This article briefly describes the work done to conceptualize the risks, then focuses on the later steps regarding the risk ranking sessions.

2.1 Defining and Categorizing Risks

2.1.1 Defining Risks in Terms of Hazards

The first step of the Deliberative Method for Ranking Risks is to determine the risks to be ranked. Risks can be described in many ways. Morgan et al. gave examples of these different categorization schemes in terms of environmental risks, using activities (e.g. power plants), initiation (e.g. sulfur oxides), exposure (e.g. outdoor air), effects (e.g. lung disease), and perception and valuation (e.g. sociological status) as examples of how the respective strategies could be applied (Morgan et al. 2000). Similarly, homeland security risks can be categorized in several ways, such as by event (e.g. hurricane), area (e.g. Washington, DC), facility (e.g. power plants), sector (e.g. electricity) or cause of risk (e.g. terrorist groups). Morgan et al. conclude that the categorization structure used should align with the organizational structures responsible for managing the risks (Morgan et al. 2000). For our purposes, we chose to frame homeland security risks in a way that aligns with the organizational structures and interventions of DHS. We adopt an approach framing risks in terms of hazards, which DHS defines as the “natural or man-made source or cause of harm or difficulty” (DHS-RSC 2010). Focusing on the cause of harm aligns with how DHS decisions to lessen those harms are framed in existing strategic-level planning documents (HSC/DHS 2005; DHS 2008, 2009, 2010).

We selected a set of ten hazards that cover the range of causal agents and consequences (see Table 1). Previous studies using this method have used between 6 and 14 hazards, as larger sets become confusing for participants (Florig et al. 2001); because this is far fewer than the 300 threat categories identified by Luiijf and Nieuwenhuijs (2008), the 47 hazards identified in the National Fire Protection Association Standard on Disaster/Emergency Management and Business Continuity Programs (National Fire Protection Association 2007), or the counts in numerous other frameworks, we selected a subset of 10 hazards. This subset was selected to present an interesting set along a number of dimensions: cause, with natural disasters, accidents, and terrorism (reflecting the DHS mission); frequency, with common and rare events; novelty, with events that have occurred and events that have not yet occurred; and consequence, with high and low fatality events, high and low economic damage events, and high and low environmental damage events. While the selected hazards do not include every hazard of concern to DHS, they do present an interesting set of hazards reflecting the DHS mission and concerns.

Table 1:

Hazards Selected for Risk Ranking.

2.1.2 Identifying Attributes

In order to express an informed assessment of the risk of a hazard, one must be given an overview of how that hazard affects communities in all relevant ways. This means describing many attributes of the hazards. Identifying the attributes to describe the hazards requires bringing behavioral social science and decision theory together with knowledge of the emergency management field. As estimating the risk with regard to any particular attribute is a lengthy analytical process, we identified a parsimonious set of attributes for use in the ranking sessions.

A literature review was conducted to identify important attributes of homeland security risks. As there are tens of thousands of articles that consider disasters or terrorism in terms of one aspect of consequence or another, we focused our review on articles that attempted to present an overarching framework, notably Committee on Assessing the Costs of Natural Disasters, Mileti, Lindell and Prater, Committee on Disaster Research in the Social Sciences, Committee to Review the DHS’s Approach to Risk Analysis, and Keeney and von Winterfeldt (Committee on Assessing the Costs of Natural Disasters 1999; Mileti 1999; Lindell and Prater 2003; Committee on Disaster Research in the Social Sciences: Future Challenges and Opportunities 2006; Committee to Review the DHS’s Approach to Risk Analysis 2010; Keeney and von Winterfeldt 2011). Additionally, several articles that examined only some portion of the consequences rather than an overarching framework were included if they were identified when assessing the specific risks in the hazard sheets (Step C of the method). Such attributes included aspects of mental health (Ursano et al. 1995; Norwood et al. 2000), indirect damage (including damage to critical infrastructure and regional economic impacts) (Haimes et al. 2005; Rose 2009), international relations (Treverton et al. 2008), and others.

In addition to aspects of consequence, we included attributes associated with non-consequence aspects of hazards. There is a well-established literature that examines how non-consequence attributes – often characterized in terms of dread and uncertainty – affect how risks are perceived (Slovic 1992; Slovic et al. 2004). Jenni explored specific attributes of dread and uncertainty appropriate for a deliberative risk ranking, identifying attributes that were justifiable, clearly defined, and measurable (Jenni 1997). Consistent with other applications of the method, we drew on Jenni’s research to identify a set of non-consequence attributes appropriate for describing homeland security risks.

While the number of ways that hazards could be described can be quite large, the high correlation among many attributes (for example, between population affected and economic losses) allows us to describe them with a more parsimonious set (Slovic et al. 1985). We selected 17 attributes to describe the hazards: five attributes described health consequences, four described economic consequences, three described non-economic societal consequences, and five represented non-consequence attributes associated with the perception of risks (Table 2). Consistent with previous studies using the deliberative method, we included multiple perspectives on important characteristics (Florig et al. 2001; Morgan et al. 2001; Willis et al. 2004; Willis et al. 2010; Xu et al. 2011); in this case, for both health and economic consequences we included both the expected value and the consequences of a single event, should one occur. We presented both perspectives to avoid constraining the choices of those comparing the risks (Lundberg 2013). Perspectives other than expected value were also included for the size and duration of economic damages, as well as for non-consequence characteristics (such as natural or human-induced).

Table 2:

Attributes Selected to Describe Homeland Security Hazards for Risk Ranking.

The broad range of attributes suggested by the literature was supported by the assessment of the risks and the participants’ perception of the risks. The values of the attributes for this set of hazards showed only limited correlation between the identified attributes, suggesting that the different attributes did capture different aspects of the risks (Lundberg 2013). Additionally, participants reported considering all of the attributes to a large extent (Lundberg 2013).

2.1.3 Describe the Hazards in Terms of the Attributes

Once the attributes were selected, the hazards were described in terms of these attributes in risk summary sheets. The hazards were described in terms of the attributes with regard to the risk to the US as a whole over the course of one year (Lundberg 2013; Lundberg and Willis 2015). Estimating the risks involved significant judgment; over 400 technical reports and publications were reviewed to identify the exposure to and consequences of each hazard (see Lundberg and Willis 2015 for a detailed discussion of the methodology for the underlying risk assessments). No classified or sensitive data were used. Where possible, attributes were described using quantitative estimates; these included attributes related to physical injury, economic damage, and the number displaced from their homes. Qualitative estimates were used when quantitative approaches were conceptually unclear or not supported by the data; these included psychological consequences, size and duration of economic damage, environmental damage, government disruption, and the non-consequence attributes. The data quality varied by attribute and hazard, with novel and less common risks better informed by modeled estimates than by historical data. Our estimates of quantitative consequences also incorporated uncertainty, with lowest and highest estimates for each attribute serving as bounds on the best estimate. If existing data could not support a quantitative estimate, qualitative levels were used instead, whether because of limited data availability or because of conceptual limitations surrounding less precisely defined concepts such as psychological damages. The estimates derived were similar in precision (measured here as the number of orders of magnitude between the lowest and highest estimates for each attribute) to those of studies using the method in other domains (Lundberg 2013).
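To make the precision measure concrete, the following is a minimal sketch of the orders-of-magnitude calculation, assuming hypothetical low and high bounds; the attribute names and values below are illustrative, not the study's estimates.

```python
import math

# Hypothetical low/high bounds for one hazard's quantitative attributes
# (illustrative values only, not the study's actual estimates).
bounds = {
    "deaths_per_year": (10, 1_000),
    "economic_damage_usd": (1e8, 1e11),
}

for attribute, (low, high) in bounds.items():
    # Precision measured as orders of magnitude separating the bounds.
    spread = math.log10(high / low)
    print(f"{attribute}: {spread:.1f} orders of magnitude")
```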

Each of the hazards was then summarized in a four-page risk summary sheet standard to the risk ranking method. The summary sheet format was developed for the Deliberative Method for Ranking Risks to reflect best practices in risk communication (Florig et al. 2001). The first page included a summary of the risk and a summary table containing the 17 risk attributes for that hazard (Figure 2). The subsequent pages described the scope and mechanisms of the risk, the exposure to the risk, and what is already being done about the risk. The summary sheets and the documentation of the risk assessment underlying them can be found in the technical appendices of Lundberg (2013).

Figure 2:

Example of the First Page of a Risk Summary Sheet.

2.2 Conducting and Analyzing Ranking Sessions

2.2.1 Risk Ranking Workshops

The Deliberative Method for Ranking Risks follows multiple stages to create informed rankings of risk (Figure 3); we followed a modified approach similar to that of Willis et al. (2010). Each step in the process is designed to encourage analytical thought over experiential thought while incorporating individuals’ own values. Participants first learn from the risk summary sheets and rank the hazards based on their own judgment and experiences. Then participants are asked to rank the attributes in order of their concern; these attribute rankings are used to develop a multi-attribute ranking of the hazards. For a second ranking, participants are asked to examine where their initial rankings diverge from the multi-attribute ranking based on their concerns and to consider why they diverged. This re-focuses participants’ concerns on the attributes of the risks. The third step is a group ranking. Group discussion allows people to learn from each other’s perspectives while requiring individuals to articulate the specific reasons why one risk may be of greater concern than another. However, there is a concern that group rankings can produce forced consensus. As a final step, individuals were allowed to dissent from the group rankings with a final individual ranking of the hazards. This process lasted 5–6 hours.

Figure 3:

Overview of Process used during Risk-Ranking Workshops (Adapted from Willis et al. 2010).
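The article does not specify how the elicited attribute ranks were converted into a multi-attribute hazard ranking. The sketch below illustrates one common construction, a weighted sum with rank-sum weights; the hazards, scores, and weighting scheme are all assumptions for illustration only.

```python
import numpy as np

# Hypothetical data: rows are hazards, columns are attributes,
# entries are normalized attribute scores in [0, 1] (1 = worst).
hazards = ["hurricane", "earthquake", "cyber-attack"]
scores = np.array([
    [0.9, 0.8, 0.3],
    [0.8, 0.9, 0.2],
    [0.2, 0.4, 0.9],
])

# A participant's elicited ranking of the attributes (1 = most important).
attribute_ranks = np.array([1, 2, 3])

# Rank-sum weights, w_i = (n - r_i + 1) / sum(...): one illustrative scheme.
n = len(attribute_ranks)
raw = n - attribute_ranks + 1
weights = raw / raw.sum()

# Weighted-sum concern score per hazard; higher score = greater concern.
concern = scores @ weights

# Convert scores to a ranking (1 = hazard of greatest concern).
order = np.argsort(-concern)
for rank, idx in enumerate(order, start=1):
    print(f"{rank}. {hazards[idx]} (score {concern[idx]:.2f})")
```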

A facilitator guided the participants through this risk ranking process. The initial descriptions of the process and the definitions of the attributes were presented using a script and supporting documents (see Lundberg 2013). The facilitator reminded participants to focus on the risks as faced by the nation as a whole rather than their own personal exposure, and challenged participants to support their conclusions. The facilitators also encouraged each member to express their opinion and sought out different points of view.

We drew our focus group population from members of the general public. First, as Andrews noted, comparative risk efforts without some public transparency are often ignored (Andrews 1998). Second, the perceptions of members of the general public can be important in their own right. People’s perceptions of risks, rightly or wrongly, can drive their behaviors (for example, Gigerenzer’s finding that people’s fear of aviation after 9/11 drove them to the relatively riskier mode of transportation of driving, Gigerenzer 2004). Third, as the US is a representative government, the views of the people should be taken into account to at least some extent; awareness of those views is a bare minimum, if only to identify where they differ from official policy. Finally, following the National Academy of Sciences recommendation that DHS use validated methodologies (Committee to Review the DHS’s Approach to Risk Analysis 2010: p. 112), there is value in validating the methodology itself. Using members of the public to validate the method generally is useful; applying it in practical settings with risk or homeland security professionals is a logical next step.

Twenty-six individuals participated in three risk ranking workshops conducted in the fall of 2012. The first workshop was conducted in Pittsburgh, Pennsylvania, and the subsequent two workshops were conducted in Santa Monica, California. Most of the participants were recruited through online ads, although four were solicited through school parent/teacher organizations. Participants were offered $100 in return for 5–6 hours of participation.

This participant selection process provided a convenience sample, limiting the ability to extend the results of these ranking sessions to the concerns of US residents generally. The Deliberative Method for Ranking Risks will always have limitations in national representativeness due to its use of focus groups to identify concern, although other applications of the methodology have shown that representative samples can be developed if the population is constrained either geographically or organizationally (e.g. a city, a company, or a government agency) (Morgan et al. 2001; Willis et al. 2010). Instead of trying to develop a representative sample, we used purposeful sampling to identify a sample that covered a range of races, ages, educational levels, and genders.

While the risk ranking groups were not necessarily representative of their nation or city as a whole, they were purposely selected to cover an interesting range of characteristics. Participants were selected to create a group that was diverse in terms of gender, education, age, and race. Table 3 shows the demographic breakdown of the participants. No individuals over 40 with only a high school education applied, so none were selected. Similarly no younger individuals (in their 20s) with a post-graduate degree applied, so none were selected. All participants were US citizens and could speak and write English.

Table 3:

Summary Statistics of Workshop Participants.

2.2.2 Analyzing Ranking Sessions

The analysis of the ranking sessions examined the resulting rankings as well as the appropriateness of the process. The analysis of the results largely focused on the final individual rankings, using the mean and standard deviation of the individual ranks, consistent with previous studies of the method. Additional analyses compared results across stages and compared the final ranks to ranks based on individual attributes. The analysis of the process drew on a two-page survey of participants’ perceptions of the workshop, examining both the average and the distribution of their responses, as well as tests of targeted hypotheses using the ranking data.

3 Results

3.1 Participant Concern for the Hazards

3.1.1 Which Hazards were of Greatest Concern

The primary outcome of the deliberative risk ranking process is a final ranking from the hazard of most concern (number 1) to the hazard of least concern (number 10). Figure 4 shows the final individual rankings in this study, with the average of individuals’ final rankings within the bounds of the 25th and 75th percentiles of these rankings. This figure reveals three tiers of concern: a high-concern tier with earthquakes, hurricanes, and pandemic flu; a low-concern tier with anthrax attacks and cyber-attacks; and a moderate-concern tier with the remaining hazards. The 25th and 75th percentiles describe the variation in individuals’ rankings rather than bounding the estimate of the average of those rankings. Assuming a normal distribution for statistical purposes to determine bounds for the average ranking, we can differentiate between the hazards to a large extent, but there is some overlap in the estimates of the average ranks for the closest hazards. Notably, the bounds for the average rankings overlap when comparing hurricanes and earthquakes; toxic industrial chemical accidents and oil spills; anthrax and cyber-attacks; and tornadoes, terrorist nuclear detonations, and terrorist explosive bombings.

Figure 4:

Participants’ Final Rankings of Homeland Security Hazards in the US.
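A minimal sketch of the two summaries referenced above – percentile ranges describing the spread of individual rankings, and normal-approximation bounds on the average rank – using hypothetical rank data for a single hazard (the 95% level and 1.96 multiplier are assumptions; the paper does not state which bounds were used).

```python
import numpy as np

# Hypothetical final ranks assigned to one hazard by 26 participants.
ranks = np.array([1, 2, 1, 3, 2, 1, 2, 4, 1, 2, 3, 1, 2,
                  2, 1, 3, 2, 1, 2, 5, 1, 2, 3, 2, 1, 2])

mean = ranks.mean()
# Standard error of the mean; 1.96 gives approximate 95% bounds
# under the normality assumption noted in the text.
se = ranks.std(ddof=1) / np.sqrt(len(ranks))
print(f"average rank {mean:.2f}, "
      f"bounds ({mean - 1.96 * se:.2f}, {mean + 1.96 * se:.2f})")

# The 25th/75th percentiles in Figure 4 instead describe the spread
# of individual rankings, not the uncertainty in the mean.
p25, p75 = np.percentile(ranks, [25, 75])
print(f"25th-75th percentile range: {p25:.1f}-{p75:.1f}")
```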

There is some evidence that people were basing their rankings on the attributes of consequence. One way to analyze the hazards is to rank them based on each attribute individually – for example, based solely on the best estimates of average lives lost per year, pandemic influenza was the greatest risk while cyber-attacks were the lowest. The participants’ elicited rankings of risk can be compared to these attribute-based ranks. It would be preferable to analyze these rankings in a multi-attribute fashion, but given the limited number of hazards in the dataset such multivariate analyses cannot be done. Still, univariate Spearman correlations of the average individual elicited rankings with the rankings based on each attribute can provide some insight. Table 4 includes the correlation between the average individual rankings of risk and the rank based on expected mortality, but similar checks were done for each of the selected attributes (Lundberg 2013). For each of the attributes of consequence, there was a positive correlation with the average of the individuals’ rankings. The rankings based on non-consequence attributes were negatively correlated, which is also expected – for example, less control was correlated with higher risk. However, the correlations with rankings based on non-consequence factors were smaller in magnitude than those with the consequence-based factors, providing some evidence that people were basing their rankings on the consequence attributes of the risk. This evidence is by no means conclusive but suggests an avenue for future research.

Table 4:

Average Individual and Group Rankings of Hazards.
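The univariate Spearman check described above can be sketched as follows, assuming hypothetical average elicited ranks and mortality-based ranks for the ten hazards; the specific values are illustrative, not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical data for 10 hazards: the average of participants' final
# rankings and the ranking implied by one attribute (expected deaths/year).
elicited_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
mortality_rank = [2, 1, 3, 5, 4, 7, 6, 9, 8, 10]

rho, p_value = spearmanr(elicited_rank, mortality_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A positive rho indicates the elicited ranking tracks that attribute.
# With only 10 hazards, multivariate analysis is not feasible,
# hence the one-attribute-at-a-time comparisons in Table 4.
```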

3.1.2 Which Hazards had the Greatest/Least Consensus

The rankings demonstrate substantial agreement as to which hazards are of greatest concern in the homeland security domain, as can be seen in both the individual rankings and the group rankings.

The ranking sessions were not designed to produce consensus on an individual level; while the information was designed to attenuate the disagreements based on the participants’ knowledge of the risk, their different perspectives on values and interests can lead to valid disagreements as to the relative concern for each of the hazards. The degree to which individuals agreed on the rankings varied from hazard to hazard. As discussed above, the 25th and 75th percentile ranges in Figure 4 show the variation in how individuals ranked hazards. Some hazards – notably terrorist nuclear detonations and toxic industrial chemical accidents – had noticeably less agreement over the rank of that hazard on an individual level. Further examination revealed that most of the individual rankings were generally distributed around a single value in a unimodal distribution. The one clear exception is the scenario for terrorist nuclear detonations, which has a much more uniform distribution and no clear peak. This reflects the lack of consensus regarding nuclear terrorism.

These differences in perspective can also be seen at the group level. The size of the groups was small enough that we could identify differences in perspective from group to group. Notably, while rankings were generally consistent with regard to natural disasters and most terrorist events, there was disagreement as to the level of concern over terrorist nuclear detonations in particular, with one group ranking it as the risk of greatest concern while another ranked it as the sixth greatest concern out of ten (Table 4).

These disagreements about concern over nuclear terrorism reflected differences in the perception of how large the risk actually was. In the group discussions, the participants voiced significant disagreement as to whether the likelihood of a terrorist nuclear detonation was low (with participants arguing that nuclear weapons had only been used (a) in war by a nation-state, not by terrorists, and (b) by the US rather than against the US) or high (with participants arguing that history shows a continual advance in weaponry and that the use of such a weapon is inevitable in the long run). The uncertainty as to the likelihood of a nuclear event was also reflected in the summary sheets, as terrorist nuclear detonations had the widest bounds on estimates of consequence of any of the selected hazards.

3.1.3 Which Attributes were Important to their Concern

Participants used more than one attribute in making their rankings. In the multi-attribute phase of the ranking sessions, participants were asked to identify the attributes that they considered when ranking the hazards. All respondents reported considering multiple attributes, with the lowest number of attributes reported as relevant being 10 and the median number being 15. Participants were most likely to report concern over attributes reflecting the consequences of the event. Any given attribute of consequence was reported as being considered by anywhere from 85% to 100% of the participants. Non-consequence attributes related to the psychometric paradigm were less likely to be reported as important but still played a role in participants’ perceptions; depending on the attribute, these were reported as being considered by between 39% (for time between exposure and health effects) and 69% (for natural/human-induced and ability of individuals to control their exposure) of the participants.

In addition to the binary classification of important or not, respondents ranked the attributes from most important to least important. The average rankings of the attributes are presented in Figure 5. This figure presents the average of the individual rankings of concern for each attribute, along with the 25th and 75th percentiles of the individual rankings. The participants identified three attributes that they considered most concerning (greatest number killed in a single event, average number killed per year, and average number of more severe injuries or illnesses per year) and three non-consequence attributes that they considered least concerning (time between exposure and health effects, quality of scientific understanding, and combined uncertainty), with the majority of the attributes being of some concern. These attributes of moderate concern were to some extent indistinguishable; while some people reported more concern for one attribute over another, on average these relative concerns balanced out, leaving all moderate attributes similarly concerning on average.

Figure 5:

Average Individual Ranking of Attributes of Importance.

3.2 Assessing the Quality and Level of Support for the Ranking Process and Results

3.2.1 Consensus Reflected Increased Knowledge Rather than Forced Consensus

As described earlier, there was some degree of consensus on the ranking of the hazards, with individuals consistently ranking natural disasters as greater concerns than terrorist events or major accidents (Table 4). This consensus also grew throughout the process, as evidenced by increased correlations among participants’ rankings from the initial to the final ranking steps. Pairwise correlations of each individual’s ranking with each other individual’s ranking were calculated for the initial rankings and the final rankings, and the average of those correlations was then taken, consistent with previous studies of the method (Table 5). These correlations increased from the initial step to the final rankings.

Table 5:

Agreement among Individuals’ First and Final Rankings as Measured through Mean Pairwise Correlations of Results.
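A sketch of the agreement measure in Table 5 – the mean of pairwise correlations between participants' rankings – under the assumption that Spearman rank correlations were used; the rankings below are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def mean_pairwise_correlation(rankings):
    """Average Spearman correlation over all pairs of participants.

    `rankings` is an (n_participants, n_hazards) array of ranks."""
    corrs = [spearmanr(a, b)[0]  # [0] is the correlation coefficient
             for a, b in combinations(rankings, 2)]
    return np.mean(corrs)

# Hypothetical initial and final rankings for 4 participants, 5 hazards.
initial = np.array([[1, 2, 3, 4, 5],
                    [2, 1, 4, 3, 5],
                    [3, 1, 2, 5, 4],
                    [1, 3, 2, 4, 5]])
final = np.array([[1, 2, 3, 4, 5],
                  [1, 2, 4, 3, 5],
                  [2, 1, 3, 4, 5],
                  [1, 2, 3, 5, 4]])

print(f"initial agreement: {mean_pairwise_correlation(initial):.2f}")
print(f"final agreement:   {mean_pairwise_correlation(final):.2f}")
```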

Producing consensus is not the goal of the risk ranking workshops, as different participants can bring their own values and perspectives to the consideration of risks. However, the ranking process is designed to increase participants’ knowledge of the risks, reducing misconceptions or misunderstandings (i.e. becoming more informed).

Several findings provide evidence that the increased consensus is the result of increased knowledge rather than forced consensus. First, participants were asked where they gained their knowledge of the risks; individuals reported bringing little prior knowledge to the ranking sessions and instead reported learning in each stage of the process (Figure 6). Compared to other studies using the Deliberative Method for Ranking Risks conducted in other domains, individuals had less initial knowledge of the risks (mean 3.3 on a scale of 0–6, c.f. 4.03 in Willis et al. 2010) and relied more upon information developed through the process, including the initial ranking (mean 4.2, c.f. 3.35 in Willis et al. 2004, 3.92 in Willis et al. 2010), the examination of the attributes (mean 4.5, c.f. 3.78 in Willis et al. 2004, 4.14 in Willis et al. 2010), and the group discussion (mean 4.7, c.f. 4.82 in Willis et al. 2004, 4.59 in Willis et al. 2010) (Willis et al. 2004; Willis et al. 2010). The difference between the knowledge gained in each of the three stages and the prior knowledge was statistically significant (t-tests of the differences of the mean scores gave p-values of 0.0001, less than 0.00001, and less than 0.00001, respectively).

Figure 6:

Sources of Knowledge that Informed Participant Rankings.
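The t-tests described above compare each stage's knowledge rating with participants' prior knowledge. A paired t-test is one plausible construction (the paper does not specify paired versus unpaired); the sketch below uses simulated 0–6 ratings, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical 0-6 ratings from 26 participants: prior knowledge vs.
# knowledge gained from the group discussion stage.
rng = np.random.default_rng(0)
prior = np.clip(rng.normal(3.3, 1.0, 26), 0, 6)
group_stage = np.clip(rng.normal(4.7, 1.0, 26), 0, 6)

# Paired t-test: did the stage contribute more than prior knowledge?
t_stat, p_value = ttest_rel(group_stage, prior)
print(f"t = {t_stat:.2f}, p = {p_value:.5f}")
```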

Additionally, this increased consensus reflects a greater consideration of the risks as described in each of the hazard sheets. In both the initial ranking and the final ranking, the individuals’ rankings correlated with each of the attributes of consequence in the expected direction (e.g. the individuals’ rankings correlated with the rankings based solely on the number of deaths per year, the number of deaths in a single event, etc.). Moreover, the correlation between the individuals’ rankings and the rankings based on each attribute of consequence either held steady or increased for each attribute.

On the other hand, there is always the concern that group dynamics can lead to forced consensus. Similar to previous studies, this was tested in several ways. First, individuals were asked directly what contributed to their final rankings. Individuals reported their final rankings to be based most on the calculated ranking (with participants reporting an average of 4.2 on a scale of 0–6), then the group rankings (average 3.9) and the initial rankings (average 3.5) (Figure 7).

Figure 7:

Individual Perceptions of the Contributions to their Final Ranking.

Second, these individual reports were tested empirically using a regression-based analysis. A linear regression was run using the final ranking as the outcome and the initial and group rankings as explanatory variables, consistent with other studies using the Deliberative Method for Ranking Risks in other domains. Both the group rankings and the initial individual rankings were found to contribute to the final rankings to a statistically significant extent. While the group ranking was more strongly associated with the final ranking (0.66, c.f. 0.60–0.84 in previous studies using this method) than the initial ranking was (0.36, c.f. 0.17–0.39 in previous studies using this method), both coefficients were statistically significant (Table 6).

Table 6:

Association of Initial and Group Rankings on Individuals’ Final Rankings as Determined by a Regression Analysis.
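A minimal sketch of the regression described above, fitting final ranks on initial and group ranks by ordinary least squares; the stacked observations are hypothetical, and the arrangement (one row per participant-hazard pair) is an assumption about how the data were organized.

```python
import numpy as np

# Hypothetical stacked observations: one row per participant-hazard pair,
# with that participant's initial, group, and final ranks.
initial = np.array([1, 2, 3, 4, 5, 2, 1, 4, 3, 5])
group   = np.array([1, 2, 4, 3, 5, 1, 2, 3, 4, 5])
final   = np.array([1, 2, 4, 3, 5, 1, 2, 4, 3, 5])

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(initial), initial, group])
coef, *_ = np.linalg.lstsq(X, final, rcond=None)
print(f"intercept={coef[0]:.2f}, initial={coef[1]:.2f}, group={coef[2]:.2f}")
# The study reports coefficients of roughly 0.36 (initial) and 0.66 (group):
# both contribute, with the group ranking weighted more heavily.
```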

Finally, forced consensus was addressed directly by asking participants the extent to which different points of view were discussed and individual opinions were encouraged. Participants reported that the sessions were very inclusive of individual opinions (average of 5.2 on a scale of 0–6, see Figure 8).

Figure 8:

Workshop Participation Encouraged Different Points of View.

Another concern involves whether the rankings are subject to conceptual biases in addition to group dynamics. The Deliberative Method for Ranking Risks is designed to focus people on deliberating on the risks to the nation as a whole, reducing biases by using an analytical rather than an experiential framework for considering risks. But it is also possible that people are biased in their estimation of risk, ranking some hazard higher than they should because it is fresh in their memory or because they have personal experience with it. The timing and placement of the workshops allowed some limited testing of hypothesized biases. The influence of personal exposure to risk was tested by comparing respondents in the high earthquake risk area (Santa Monica) to those in the low earthquake risk area (Pittsburgh). The influence of the availability heuristic was tested based on the potential impact of three events: Hurricane Sandy, which made landfall between the first and second sessions; a small-scale cyber-event (smaller than our definition of a cyber-attack, a distinction of which our participants were aware), which occurred immediately before the first session and which the first session discussed but which was not discussed in the subsequent sessions over a month later; and a chemical spill that occurred the morning of the third session (although it is unclear whether the group was aware of the spill, as they did not discuss it). These specific comparisons showed no evidence of the expected biases – the differences between groups were not statistically significant in three of the four cases, and ran counter to the hypothesized direction in the final case (i.e. the recent chemical spill was associated with a decrease in concern over chemical spills). These tests should not be read too broadly, as the lack of evidence could be due to a lack of statistical power, but they provide no evidence of personal biases in the rankings.

3.2.2 Participants Supported the Process and Resulting Rankings

Participants were also asked to reflect on the process and its results. Measures of satisfaction in risk rankings serve two purposes. First, participants’ satisfaction with the ranking process and results serves as a measure of face validity. Second, risk rankings where the participants are satisfied with the results provide a more useful input to public risk-management decision making (Morgan et al. 2001; Willis et al. 2004; National Research Council 2008).

Our participants approved of the use of the Deliberative Method for Ranking Risks to address homeland security issues in multiple ways (Figure 9). Participants reported satisfaction with the group rankings (average score=4.8), agreed that the group rankings were representative of their concerns (average score=4.3), and approved of submitting the rankings to DHS for use in making decisions (average score=4.5). Each of these values was statistically different from the scale mid-point of three at a very high confidence level (p-values of 1e-10, 1.7e-6, and 7.4e-9, respectively). These results were comparable to earlier studies using the Deliberative Method for Ranking Risks with regard to inclusiveness, satisfaction, representativeness, and utility of the rankings (Table 7).

Figure 9:

Participants’ Support for Using the Rankings to Develop Risk Management Policies.

Table 7:

Participants’ Perceptions of the Risk Ranking Workshops, on a Scale from 0 (Lowest) to 6 (Highest).
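The comparison to the scale mid-point reads naturally as a one-sample t-test; the sketch below shows that construction on simulated 0–6 scores and is an assumption about the analysis rather than a documented detail.

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical 0-6 satisfaction scores for 26 participants.
rng = np.random.default_rng(1)
scores = np.clip(rng.normal(4.8, 0.8, 26), 0, 6)

# Test whether the mean response differs from the scale mid-point of 3.
t_stat, p_value = ttest_1samp(scores, popmean=3)
print(f"mean={scores.mean():.2f}, t={t_stat:.2f}, p={p_value:.2e}")
```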

The questions asked were drawn from those used in previous studies of the method (Morgan et al. 2001; Willis et al. 2004; Willis et al. 2010; Xu et al. 2011). In those studies, the survey questions were used to evaluate the ranking process alongside analyses of the correlations of rankings across sessions and among participants. The results of those previous studies showed the questions to be reliable, with significant correlation of the responses across sessions. Together the surveys and the analyses of rankings provided a consistent perspective on the performance of the ranking method. The results of this study were consistent with those of prior studies.

4 Discussion

4.1 It is Possible to Elicit Informed Rankings of Homeland Security Risk

The participants provided clear rankings and reported satisfaction with both the process and the results of the sessions. Comparing participants’ rankings to the rankings based on the attributes of the risk provides some evidence that the rankings reflected knowledge of the risks. At the same time, the examinations of identifiable biases found no evidence of availability bias. There is also little evidence of forced consensus; individuals reported feeling free to dissent from the group and provide their own point of view, and there is evidence that both their individual rankings and the group rankings contributed to their final rankings. The multiple stages all provided information, supporting the idea that the deliberative method guides participants to a more considered ranking based on the actual nature of the risks. These results were comparable to validated studies using the Deliberative Method for Ranking Risks in other domains.

There are significant limitations to this research. The set of hazards is neither exhaustive nor representative of all the risks managed by DHS, but rather an interesting set of selected hazards. Incorporating other risks into these analyses may be useful. Additionally, the rankings are informed only insofar as the risk assessments that inform them are sound. The risks described in these assessments reflect considerable judgment, particularly with regard to selecting a best estimate for the risk from a range of estimates that can vary across several orders of magnitude. Other risk rankings informed by different assessments could lead to different results.

It is also important to recognize that these ranking sessions reflect convenience samples, and questions of representativeness should be explored. The participants were selected online, and it may be that those who respond to online ads perceive risks differently. Additionally, the views of the respondents do not necessarily represent the views of the nation as a whole, or even of the cities from which they were drawn. While larger and repeated samples can be useful, the logic of qualitative sampling is different from that of quantitative sampling and explores representativeness by examining specific hypotheses of concern (Patton 1990). Specific hypotheses that could be worthwhile to explore include whether people in urban areas see these risks differently from those in rural areas, and whether people in more conservative states see these risks differently from those in more liberal states. The examination of these hypothesized differences could provide some evidence for or against representativeness.

Another limitation is that these results reflect public perceptions, not expert perceptions, suggesting experts as another group with which the rankings should be conducted. A comparison between experts and the lay public could be illuminating in itself; additionally, it may be more feasible to achieve representativeness with groups of experts because they would be drawn from smaller populations. Previous studies using the Deliberative Method for Ranking Risks have established that the method can produce representative results if the population of interest is constrained geographically or organizationally. Now that the method has been validated for general use in the homeland security domain, it may be useful to elicit the concerns of risk experts within DHS (e.g. conducting these rankings with a representative or complete set of DHS’s Risk Steering Committee or with specific offices within DHS).

Finally, there is room for a deeper examination of the results of the rankings rather than just the process. While we find some evidence that the rankings are based on attributes of consequence more than on non-consequence attributes associated with dread and uncertainty, the evidence is by no means conclusive; further research into the influence of information on the rankings would be useful. Also, while we find no evidence of bias in our results, this does not necessarily imply that no bias is present. It may be that we did not have strong enough examples to test, had too small a sample, or that types of bias existed that we did not test for (we tested for availability bias and bias based on personal exposure). Additional research into the ability of the method to attenuate bias would be useful.

The results of the study support the need for a multi-attribute approach. DHS risk assessments too often involve only a single attribute of consequence (Committee to Review the DHS’s Approach to Risk Analysis 2010), even as experts identify multiple important attributes of consequence in disasters (Committee on Assessing the Costs of Natural Disasters 1999; Mileti 1999; Lindell and Prater 2003; Committee on Disaster Research in the Social Sciences: Future Challenges and Opportunities 2006; Committee to Review the DHS’s Approach to Risk Analysis 2010; Keeney and von Winterfeldt 2011). Our analyses provide support for these multi-attribute frameworks, as the elicited rankings reflected consideration of several dimensions of consequence but were dictated by no single aspect. While the rankings were correlated with health-related consequences, they were also related to economic and societal consequences. Participants reported being more concerned about health-related consequences than economic consequences, societal consequences, or non-consequence attributes, but generally took a range of attributes into consideration. And as analyzed elsewhere, their reported concern is actually reflected in their rankings; participants do appear to consider not only lives lost and economic damages but other consequence and non-consequence attributes as well (Lundberg 2013). A wide range of health, economic, social, and environmental attributes should be included in comparative risk assessments.

These benefits of the Deliberative Method for Ranking Risks may be useful for setting DHS priorities. Alternative methods of comparing homeland security risks have proven problematic (Cox Jr 2009; Brown and Cox 2011): quantitative risk models have been useful in other domains, but the challenges of bringing together risks with diverse attributes of consequence and of estimating likelihood for terrorist hazards are so extensive that a National Academy of Sciences report recommended that DHS avoid using quantitative comparative risk assessments at this time (Committee to Review the DHS’s Approach to Risk Analysis 2010). Instead, the report recommended the use of qualitative methods to elicit informed rankings but did not provide guidance on using such methods. Considering the risks without a structured approach may be simpler, but the limited engagement with the information and the limited challenging of assumptions can produce internally inconsistent rankings. Surveys are one option, but these are susceptible to uninformed consideration and a range of known biases (Kahneman 2011). Alternatively, considering risks only on dimensions of dread and uncertainty omits other important characteristics of consequence (such as lives lost) (Slovic 1992). Indeed, our analysis suggests that in the domain of homeland security this would be counter-productive, as those terrorist risks associated with greater dread and the unknown are also associated with lesser consequences. Other qualitative methodologies, such as the Dutch National Risk Assessment, use approaches to incorporate public values that have limited validity (Willis et al. 2012; Vlek 2013).

This presents opportunities to improve national risk assessment methodologies in the homeland security domain. Strategic-level national risk assessments in the US (such as the Strategic National Risk Assessment) have been qualitative but unstructured (DHS 2011). More recently, the consideration of risks informing the Quadrennial Homeland Security Review (QHSR) structured the hazards and attributes similarly to the approach in this research (steps A, B, and C), but provided this information to experts without guiding them through the process and repeatedly engaging them with the information as in our structured risk ranking process (step D). While the research in this article examined only a lay population, other studies have identified similar improvements due to the method in expert rankings (Morgan et al. 2001; Willis et al. 2010). The addition of this validated, structured consideration process could fit into DHS’s existing process with little additional effort.

Relatedly, of the six hazards, threats, and trends identified as strategic drivers of homeland security risk in the QHSR, four were related to terrorism while only one was related to natural disasters, counter to our finding that natural disasters were of greater concern to the public. This may represent a disconnect between the perception of risks and the setting of strategic priorities within DHS. The rankings indicated greater concern for natural hazards than terrorist hazards. This finding would suggest different priorities within DHS than its current emphasis on terrorist risk as expressed in the budget, organizational structures, and departmental priorities. Researchers have documented how the intentional nature of terrorism can amplify concerns over risks, all things being equal (Sunstein 2003); however, as we see here, all things are not equal, and when focusing on deliberative consideration even lay people may be less concerned about terrorism. The correlations of individual rankings with rankings based on attributes of interest provide some evidence to support the hypothesis that the high ranking of natural disasters reflects an appreciation for the higher expected consequences of natural disasters. This finding should not be interpreted too broadly, as the rankings reflect a convenience sample and may not be reflective of the risk as viewed by the nation as a whole.

The QHSR is not the only DHS process involving risk rankings (a deliberative approach could also be useful for comparing assets in a jurisdictional Threat and Hazard Identification and Risk Assessment, for example), and risk rankings are only the first step in prioritizing risk reductions, but this presents one specific example of where the Deliberative Method for Ranking Risks may be useful for integrating values and deliberative consideration into homeland security risk rankings.

Funding: This research was supported by the United States Department of Homeland Security (DHS) through the National Center for Risk and Economic Analysis of Terrorism Events (CREATE) at the University of Southern California (USC) under award number 2010-ST-061-RE0001. However, any opinions, findings, and conclusions or recommendations in this document are those of the authors and do not necessarily reflect the views of the United States Department of Homeland Security, the University of Southern California, or CREATE.

References

  • Andrews, C. J. (1998) “Substance, Process, and Participation: Evaluating a Decade of Comparative Risk,” Paper Presented at the Annual Meeting of the Association of Collegiate Schools of Planning, Pasadena, CA.

  • Brown, Gerald G. and Louis Anthony Cox Jr (2011) “How Probabilistic Risk Assessment Can Mislead Terrorism Risk Analysts,” Risk Analysis, 31(2):196–204.

  • Committee on Assessing the Costs of Natural Disasters (1999) In: (edited by National Research Council) The Impacts of Natural Disasters: A Framework for Loss Estimation. Washington, DC: National Academies Press.

  • Committee on Disaster Research in the Social Sciences: Future Challenges and Opportunities (2006) In: (edited by National Research Council) Facing Hazards and Disasters: Understanding Human Dimensions. Washington, DC: National Academies Press.

  • Committee to Review the DHS’s Approach to Risk Analysis (2010) In: (edited by National Research Council of the National Academies) Review of the Department of Homeland Security’s Approach to Risk Analysis. Washington, DC: National Academies Press.

  • Cox Jr, L. A. T. (2008) “Some Limitations of ‘Risk = Threat × Vulnerability × Consequence’ for Risk Analysis of Terrorist Attacks,” Risk Analysis, 28(6):1749–1761.

  • Cox Jr, Louis Anthony (2009) “Improving Risk-Based Decision Making for Terrorism Applications,” Risk Analysis, 29(3):336–341.

  • Department of the Taoiseach (2014) In: (edited by Republic of Ireland Department of the Taoiseach) Draft National Risk Assessment. Dublin, Ireland.

  • DHS (2008) In: (edited by U.S. Department of Homeland Security Federal Emergency Management Agency) National Response Framework – Emergency Support Function #10 Annex. Washington, DC, USA: Department of Homeland Security.

  • DHS (2009) In: (edited by U.S. Department of Homeland Security) National Infrastructure Protection Plan. Washington, DC, USA: Department of Homeland Security.

  • DHS (2010) In: (edited by U.S. Department of Homeland Security) Quadrennial Homeland Security Review Report: A Strategic Framework for a Secure Homeland. Washington, DC, USA: Department of Homeland Security.

  • DHS (2011) In: (edited by U.S. Department of Homeland Security) The Strategic National Risk Assessment in Support of PPD 8: A Comprehensive Risk-Based Approach toward a Secure and Resilient Nation. Washington, DC, USA: Department of Homeland Security.

  • DHS-RSC (2010) In: (edited by U.S. Department of Homeland Security – Risk Steering Committee) DHS Risk Lexicon. Washington, DC, USA: Department of Homeland Security.

  • Drennan, Lynn T., Allan McConnell and Alastair Stark (2014) Risk and Crisis Management in the Public Sector. Routledge.

  • EPA (1987) In: (edited by U.S. Environmental Protection Agency (EPA)) Unfinished Business: A Comparative Assessment of Environmental Problems. Alexandria, VA: National Technical Information Service Report.

  • Ezell, B. C., S. P. Bennett, D. Von Winterfeldt, J. Sokolowski and A. J. Collins (2010) “Probabilistic Risk Analysis and Terrorism Risk,” Risk Analysis, 30(4):575–589.

  • Fischhoff, B., S. Lichtenstein, S. L. Derby, P. Slovic and R. L. Keeney (1984) Acceptable Risk. Cambridge, UK: Cambridge University Press.

  • Florig, H. K., M. G. Morgan, K. M. Morgan, K. E. Jenni, B. Fischhoff, P. S. Fischbeck and M. L. DeKay (2001) “A Deliberative Method for Ranking Risks (I): Overview and Test Bed Development,” Risk Analysis, 21(5):913–913.

  • Gigerenzer, G. (2004) “Dread Risk, September 11, and Fatal Traffic Accidents,” Psychological Science, 15(4):286–287.

  • Haimes, Y. Y., B. M. Horowitz, J. H. Lambert, J. Santos, K. Crowther and C. Lian (2005) “Inoperability Input-Output Model for Interdependent Infrastructure Sectors. II: Case Studies,” Journal of Infrastructure Systems, 11:80.

  • HSC/DHS (2005) In: (edited by Homeland Security Council in partnership with the U.S. Department of Homeland Security) National Planning Scenarios. Washington, DC.

  • International Risk Governance Council (2005) Risk Governance: Towards an Integrative Approach. Geneva: IRGC.

  • Jenni, K. E. (1997) “Attributes for Risk Evaluation,” Unpublished doctoral dissertation, Carnegie Mellon University.

  • Kahneman, Daniel (2011) Thinking, Fast and Slow. New York, USA: Macmillan.

  • Kahneman, Daniel and Amos Tversky (1982) “On the Study of Statistical Intuitions,” Cognition, 11(2):123–141.

  • Keeney, R. L. and D. von Winterfeldt (2011) “A Value Model for Evaluating Homeland Security Decisions,” Risk Analysis, 31(9):1470–1487.

  • Kunreuther, Howard and Erwann Michel-Kerjan (2004) Dealing with Extreme Events: New Challenges for Terrorism Risk Coverage in the US. Center for Risk Management and Decision Processes, Wharton School, University of Pennsylvania.

  • Lichtenstein, S., P. Slovic, B. Fischhoff, M. Layman and B. Combs (1978) “Judged Frequency of Lethal Events,” Journal of Experimental Psychology: Human Learning and Memory, 4(6):551.

  • Lindell, M. K. and C. S. Prater (2003) “Assessing Community Impacts of Natural Disasters,” Natural Hazards Review, 4:176.

  • Luiijf, H. A. M. and A. H. Nieuwenhuijs (2008) “Extensible Threat Taxonomy for Critical Infrastructures,” International Journal of Critical Infrastructures, 4(4):409–417.

  • Lundberg, Russell (2013) “Comparing Homeland Security Risks Using a Deliberative Risk Ranking Methodology,” Ph.D., Pardee RAND Graduate School (RGSD-319).

  • Lundberg, R. and H. H. Willis (2015) “Assessing Homeland Security Risks: a Comparative Risk Assessment of 10 Hazards,” Homeland Security Affairs, 11:1–24.

  • Mileti, D. S. (1999) Disasters by Design: A Reassessment of Natural Hazards in the United States. Washington, DC, USA: National Academies Press.

  • Morgan, K. M. (1999) “Development and Evaluation of a Method for Risk Ranking,” Unpublished doctoral dissertation, Carnegie Mellon University.

  • Morgan, M. G., B. Fischhoff, L. Lave and P. Fischbeck (1996) “A Proposal for Ranking Risk within Federal Agencies.” In: (Davies, J.C. ed.) Comparing Environmental Risks: Tools for Setting Government Priorities. Washington, DC: Routledge.

  • Morgan, M. G., H. K. Florig, M. L. DeKay and P. Fischbeck (2000) “Categorizing Risks for Risk Ranking,” Risk Analysis, 20(1):49–58.

  • Morgan, K. M., M. L. DeKay, P. S. Fischbeck, M. G. Morgan, B. Fischhoff and H. K. Florig (2001) “A Deliberative Method for Ranking Risks (II): Evaluation of Validity and Agreement among Risk Managers,” Risk Analysis, 21(5):923–923.

  • National Fire Protection Association (2007) NFPA 1600: Standard on Disaster/Emergency Management and Business Continuity Programs. NFPA.

  • National Research Council (2008) “Public Participation in Environmental Assessment and Decision Making. Panel on Public Participation in Environmental Assessment and Decision Making.” In: (Dietz, T. and P. C. Stern, eds.) Committee on the Human Dimensions of Global Change. Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

  • Norwood, A. E., R. J. Ursano and C. S. Fullerton (2000) “Disaster Psychiatry: Principles and Practice,” Psychiatric Quarterly, 71(3):207–226.

  • OECD (2009) Innovation in Country Risk Management. Paris, France: Organization for Economic Co-operation and Development.

  • Patton, Michael Quinn (1990) Qualitative Evaluation and Research Methods. Thousand Oaks, California, USA: SAGE Publications, Inc.

  • Rose, A. Z. (2009) “A Framework for Analyzing the Total Economic Impacts of Terrorist Attacks and Natural Disasters,” Journal of Homeland Security and Emergency Management, 6(1):9.

  • Slovic, P. (1987) “Perception of Risk,” Science, 236(4799):280–285.

  • Slovic, P. (1992) “Perceptions of Risk: Reflections on the Psychometric Paradigm.” In: (Krimsky, S. and D. Golding, eds.) Social Theories of Risk. New York: Praeger, pp. 117–152.

  • Slovic, P. and E. U. Weber (2002) Perception of Risk Posed by Extreme Events. New York, NY, USA: Columbia University Center for Hazards and Risk Research.

  • Slovic, P., B. Fischhoff and S. Lichtenstein (1985) “Characterizing Perceived Risk.” In: (Kates, R. W., S. Hohenemser and J. X. Kasperson, eds.) Perilous Progress: Managing the Hazards of Technology. Boulder, CO: Westview, pp. 91–125.

  • Slovic, Paul, Melissa L Finucane, Ellen Peters and Donald G MacGregor (2004) “Risk as Analysis and Risk as Feelings: Some thoughts about Affect, Reason, Risk, and Rationality,” Risk Analysis, 24(2):311–322.

  • Sunstein, Cass R. (2003) “Terrorism and Probability Neglect,” Journal of Risk and Uncertainty, 26(2–3):121–136.

  • Treverton, G. F., J. L. Adams, J. Dertouzos, A. Dutta, S. S. Everingham and E. V. Larson (2008) “The Costs of Responding to the Terrorist Threats: The US Case,” In: (Keefer, P. and N. Loayza, eds.) Terrorism, Economic Development, and Political Openness. New York, NY: Cambridge University Press, pp. 48–80.

  • Ursano, R. J., C. S. Fullerton and A. E. Norwood (1995) “Psychiatric Dimensions of Disaster: Patient Care, Community Consultation, and Preventive Medicine,” Harvard Review of Psychiatry, 3(4):196–209.

  • Viscusi, W. Kip and Richard J. Zeckhauser (2015) Recollection Bias and Its Underpinnings: Lessons from Terrorism-Risk Assessments (October 26, 2015). HKS Working Paper No. RWP15-066; Vanderbilt Law and Economics Research Paper No. 15-26. Available at SSRN: http://ssrn.com/abstract=2692253 or http://dx.doi.org/10.2139/ssrn.2692253.

  • Vlek, Charles (2013) “How Solid Is the Dutch (and the British) National Risk Assessment? Overview and Decision-Theoretic Evaluation,” Risk Analysis, 33(6):948–971.

  • Willis, H. H., M. L. DeKay, M. G. Morgan, H. K. Florig and P. S. Fischbeck (2004) “Ecological Risk Ranking: Development and Evaluation of a Method for Improving Public Participation in Environmental Decision Making,” Risk Analysis, 24(2):363–378.

  • Willis, H. H., J. MacDonald Gibson, R. A. Shih, S. Geschwind, S. Olmstead, J. Hu, A. E. Curtright, G. Cecchine and M. Moore (2010) “Prioritizing Environmental Health Risks in the UAE,” Risk Analysis, 30(12):1842–1856.

  • Willis, H. H., D. Potoglou, W. B. de Bruin and S. Hoorens (2012) The Validity of the Preference Profiles used for Evaluating Impacts in the Dutch National Risk Assessment. TR-1278, RAND Europe, Cambridge, UK.

  • Xu, J., H. K. Florig and M. L. DeKay (2011) “Evaluating an Analytic–Deliberative Risk-Ranking Process in a Chinese Context,” Journal of Risk Research, 14(7):899–918.
