
Journal of Homeland Security and Emergency Management

Editor-in-Chief: Renda-Tanali, Irmak

Managing Editor: McGee, Sibel



Online ISSN: 1547-7355
Volume 16, Issue 1

Disaster Risk Analysis Part 2: The Systemic Underestimation of Risk

Aaida A. Mamuji / David Etkin
Published Online: 2019-01-30 | DOI: https://doi.org/10.1515/jhsem-2017-0006

Abstract

How risk is defined, the nature of methodologies used to assess risk, and the degree to which rare events should be included in a disaster risk analysis, are important considerations when developing policies, programs and priorities to manage risk. Each of these factors can significantly affect risk estimation. In Part 1 of this paper [Etkin, D. A., A. A. Mamuji, and L. Clarke. 2018. “Disaster Risk Analysis Part 1: The Importance of Including Rare Events.” Journal of Homeland Security and Emergency Management.] we concluded that excluding rare events has the potential to seriously underestimate the cumulative risk from all possible events,1 though including them can be very challenging both from a methodological and data availability perspective. Underestimating risk can result in flawed disaster risk reduction policies, leading to insufficient attention being devoted to mitigation and/or prevention. In Part 2, we survey various governmental emergency management policies and methodologies in order to evaluate varying equations used to define risk, and to assess potential biases within disaster risk analyses that do comparative risk ranking. We find (1) that the equations used by emergency management organizations to define risk are frequently less robust than they could and should be, and (2) that methodologies used to assess risk are often inadequate to properly account for the potential contribution of rare events. We conclude that there is a systemic bias within many emergency management organizations that results in underestimation of risk.

Keywords: disaster risk analysis; emergency management; hazard mitigation plan; rare events; risk matrix; worst case scenario

1 Introduction and Methodology

In Part 1 of this paper (Etkin, Mamuji, and Clarke 2018) we argue that rare events must be included in a disaster risk analysis if the analysis is not to be significantly biased towards underestimation. Policy-making must be based upon accurate risk estimates if it is to avoid devoting insufficient attention to mitigation and prevention strategies. Building on this argument, in this paper we explore how risk is assessed in various governmental emergency management organizations. Given that risk assessment processes are used to determine appropriate mitigation measures and priorities, sound methodologies are essential for emergency management to be efficacious.

There are numerous public policy documents that outline how rare events should be, or are, used in disaster risk analyses. These include guidance documents from standards organizations such as the Canadian Standards Association, federal/state hazard mitigation plans in the US, federal/provincial hazard and risk assessment guidelines in Canada, national risk registers, and various other guidance documents. Many of these documents aggregate risk spatially and across hazards, and therefore allow for comparison. Through a survey of hazard mitigation plans and risk assessment methodologies, we argue (1) that many of the risk equations used in emergency management risk analyses are poorly formulated, and (2) that many risk assessment methodologies, in particular ones that emphasize the use of risk matrices, do not sufficiently account for rare events (see Section 4.3 and Etkin, Mamuji, and Clarke 2018). The latter can happen through the non-inclusion of rare events, insufficient granularity of analysis, or by creating a false equivalency between frequent low consequence events and rare high consequence events.

Our findings are drawn from a detailed analysis of the most recent hazard mitigation plans for each of the 50 US states, 13 Canadian provinces and territories, and nine currently existing national risk registers (Australia, Denmark, Estonia, Finland, Ireland, the Netherlands, Sweden, Tasmania, United Kingdom). These documents are all available publicly via the Internet. The plans were assessed for their use of risk equations, and for the rarest probability category used where quantitative values are provided. This paper begins with a review of risk definitions and risk matrices in emergency management literature, as well as notable emergency management standards and national policies. It then presents the results of our survey of risk analyses and critiques the risk assessment methodologies used by emergency management organizations. The paper concludes with a discussion on characteristics of effective risk assessments that serve the public good.

2 Understanding Risk

2.1 Risk Definitions

Historically, the development of risk theory goes back to at least the Middle Ages and the creation of probability theory.2 Since the mid-20th century, however, it has been increasingly embraced as a conceptual paradigm. The notion of risk has practical applications and is also used as a conceptual lens to better understand social and environmental processes, one example being the Risk Society (Beck 1992). To a large degree, its development as a field of practice was due to the growing needs of a scientific-industrial society to quantify threat and uncertainty (Zachmann 2014). The insurance industry is one example of this; to maintain profitability, insurance companies need to be able to quantify projected annual losses in order to develop risk-based premiums.

It is important to acknowledge that there is not one common definition of risk within academic and professional communities. At times it has been used to mean probability, hazard, threat, or some combination of all three (Blaikie et al. 2014). Within disaster and emergency management communities, risk is generally understood to involve some combination of hazard, exposure, and vulnerability. All three elements are needed for risk to exist: a hazard that has the potential to cause harm; exposure to a hazard; and a vulnerable target. In fact, the notions of hazard and vulnerability are intertwined, as each only has meaning within the context of the other. For example, the United Nations International Strategy for Disaster Reduction (UNISDR) defines risk as “the combination of the probability of an event and its negative consequences” (UNISDR 2016), while Blaikie et al. (2014) define risk = hazard × vulnerability. Similarly, according to the National Research Council, risk “can be defined as a hazard, a probability, a consequence, or a combination of probability and severity of consequences” (NRC 2007). The Department of Homeland Security defines risk as “potential for an unwanted outcome resulting from an incident, event, or occurrence, as determined by its likelihood and the associated consequences;” hazard as “natural or man-made source or cause of harm or difficulty;” threat as “natural or man-made occurrence, individual, entity, or action that has or indicates the potential to harm life, information, operations, the environment and/or property;” vulnerability as “physical feature or operational attribute that renders an entity open to exploitation or susceptible to a given hazard;” and resilience as “ability to resist, absorb, recover from or successfully adapt to adversity or a change in conditions” (Homeland Security 2008).

Capacity and resilience are also considered to be parts of vulnerability by some (Tierney and Bruneau 2007), while others consider them separately (e.g. ACF 2012). Where they are considered separately, risk is normally shown as being inversely proportional to them (i.e. risk ∝ vulnerability/capacity). Deconstructing vulnerability or consequences can be extraordinarily difficult for reasons that go beyond incomplete databases and a lack of knowledge. Since impacts depend upon the value society assigns to human suffering, the spaces we inhabit, our possessions, culture, and our constructed environment, there is an important element of social constructionism when evaluating risk. This introduces a strong subjective factor that is culturally dependent and very difficult to capture. For this reason, social scientists generally agree that there is no such thing as objective risk (Clarke and Chess 2008). Rather it is a powerful socially constructed concept that is used as a calculus or metaphor to assist with meaning making and decision-making. One cannot say that one definition of risk is correct and another is wrong; rather, the issue is how useful a particular understanding or definition is within the context that it is being used and with respect to the purpose of the user. Therefore, when considering various definitions of risk one must evaluate their contextual usefulness, their internal logic, and users’ goals. For comparative risk analyses, the methodologies and metrics used need to be consistent. Good risk assessments need to be transparent about their purpose, if and how expert judgement was included, why the metrics used are suitable, what elements are missing, and they must also be explicit regarding levels of uncertainty.
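The inverse relationship between risk and capacity noted above can be sketched in a few lines. This is a minimal illustration only: the multiplicative functional form and the input values are assumptions, not any organization's standard.

```python
def relative_risk(hazard: float, vulnerability: float, capacity: float) -> float:
    """Illustrative composite: risk grows with hazard and vulnerability,
    and is inversely proportional to coping capacity (risk proportional to V/C)."""
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    return hazard * vulnerability / capacity

# Two communities facing the same hazard and vulnerability: the one with
# twice the coping capacity carries half the relative risk.
print(relative_risk(hazard=0.8, vulnerability=0.5, capacity=1.0))  # 0.4
print(relative_risk(hazard=0.8, vulnerability=0.5, capacity=2.0))  # 0.2
```

The point of the sketch is structural rather than numerical: under any such formulation, investments that raise capacity reduce estimated risk even when hazard and vulnerability are unchanged.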

Hazard, vulnerability, capacity and resilience (all components of disaster risk) are complex factors that only have meaning with respect to each other. To illustrate, consider the hazard of rainfall. Most often a resource, it becomes a hazard when there is too little or too much of it, such as in a drought or flood (Burton 1993). But what is too little or too much? That depends upon how society has adapted to it (Figure 1), which is highly variable. There are other complexities as well. Consider hurricanes, for example. Rated on the Saffir-Simpson Hurricane Scale from 1 to 5 (which is defined by wind speed), their impact on a community will also depend upon their speed of motion and rate of rainfall; slow-moving hurricanes will have a much greater impact on a region than fast-moving ones, because they linger over communities for a longer period of time, resulting in greater flooding. Thus, simply characterizing hurricane hazard using a 1–5 hazard scale is a simplification that may lose important aspects of the hazard. From the perspective of risk prediction, an additional complication is that hazard, vulnerability, capacity and resilience are all evolving elements due to such factors as climate change, urbanization, quality of governance and other social-economic and environmental factors. Predicting how they will evolve injects large uncertainties into the risk assessment problem, to the point where it should be classified as a wicked problem (Levin et al. 2007).

Figure 1: Relationship between Resource and Hazard.

Society has become adapted to a range of external events, such that they become a resource. But, beyond those ranges, events become hazardous. In the example used, a normal distribution exists where most values occur near the mean, and rare occurrences are at the low and high end of magnitudes. The left vertical axis (solid line) on the graph illustrates frequency of occurrence. The right vertical axis (dashed line) illustrates vulnerability, where society is well adapted to the most frequent events, but poorly adapted at the low and high end of magnitudes.

Another factor to consider is that vulnerability and resilience, as properties of systems (Haimes 2009), are variables rather than constants. A community will have low vulnerability and high resilience to a range of hazards below some critical threshold, but become highly vulnerable, with lower resilience, above it. This is the basis of fragility curves, which are used by engineers and risk analysts to quantify failure probability as a function of hazard level (Schultz et al. 2010). These variables vary greatly from place to place and culture to culture, and if critical thresholds are not well understood, risk estimations will be flawed. One example of this is the failure of levees in New Orleans during Hurricane Katrina: though supposedly designed to resist a Category 3 hurricane, they failed to do so (Leavitt and Kiefer 2006).

2.2 Risk Assessment Methodologies

Risk is enormously complex and assessing it in a robust way requires a detailed and thorough understanding of many factors, for which communities may not have sufficient knowledge, data, or even a general agreement on values. There is, therefore, a common conundrum that faces communities in their efforts to evaluate and rank the level of threats that they face: a robust risk assessment is difficult, resource intensive, and perhaps not even possible given the many constraints that exist, but such assessments are demanded by society and often required by policy or legislation.

In order to make risk assessment more tractable, many emergency management organizations have developed methodologies that allow them to rank the risks that they face in order to inform the allocation of resources, and the development of risk reduction policies and strategies. The Rhode Island Hazard Mitigation Plan, for example, notes that, “the HIRA [Hazard Identification and Risk Assessment] provides a factual basis for developing mitigation strategies and for prioritizing those jurisdictions that are most threatened and vulnerable to natural hazards” (Rhode Island, 2014, 41). These methodologies vary from organization to organization and are a mixture of quantitative and qualitative methods, often incorporating the use of a two-dimensional risk matrix. A risk matrix ranks risks using a combination of hazard frequency and disaster consequence (Figure 2). The axes can be linear, logarithmic, or categorical. The strength of a risk matrix methodology is that it is an intuitive and easy-to-visualize method of ranking risks. For many emergency management organizations within communities with few resources and low expertise in risk assessment, it is considered to be a very useful tool (for example, Oregon, 2015, 203; Pennsylvania, 2013, 99). The risk matrix also has a number of weaknesses.

Figure 2: Example of a Risk Matrix.
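The mechanics of a categorical risk matrix can be reduced to a short sketch. The category labels and the product scoring below are illustrative conventions, not taken from any surveyed plan:

```python
# Ordinal category scores for a simple 4x4 risk matrix (illustrative).
FREQUENCY = {"unlikely": 1, "possible": 2, "likely": 3, "highly likely": 4}
CONSEQUENCE = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}

def matrix_score(frequency: str, consequence: str) -> int:
    """Rank a hazard as the product of its two ordinal scores."""
    return FREQUENCY[frequency] * CONSEQUENCE[consequence]

# A frequent, low-consequence hazard and a rare, catastrophic one receive
# the same score under this scheme -- the false equivalence discussed
# in Section 2.2.1.
print(matrix_score("highly likely", "negligible"))  # 4
print(matrix_score("unlikely", "catastrophic"))     # 4
```

Because the scores are products of small ordinals, very different risk profiles collapse onto the same rank, which is one source of the weaknesses discussed below.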

The risk matrix approach assumes that probabilities and consequences are well understood and measurable; that stakeholder agreement exists in terms of the values that underlie consequences; that data is available; that different kinds of risks are comparable on the two scales of probability and consequence; that risk as defined by this model is a suitable construct for public decision-making; that bias and systematic errors in subjective risk assessment are tolerable; that arbitrary categories of frequency and consequence are able to adequately capture the intricacies needed for robust policy making; and that the process is transparent. It should also be noted that a risk matrix approach assumes that other risk management strategies, such as use of the precautionary principle, are not appropriate. For a more in-depth review of the serious limitations of risk matrices, see Beven et al. (2015), Duijm (2015), Hopkins (2016), Ball and Watt (2013), Hagmann and Cavelty (2012), and Cox (2008).

Three additional traps in the use of a risk matrix relate to (a) capturing systemic impacts, (b) limitations related to the reliance upon specific metrics in order to measure consequences, and (c) non-linearity of variables.

2.2.1 Systemic Impacts and False Equivalence

With respect to systemic impacts, consider the hypothetical example of a community exposed to two risks, both with a mean annual expected economic loss of 1% of community assets. Assume that the first risk occurs annually (probability = 100%/year), while the second risk occurs with a return period of 100 years (probability = 1%/year). The community might easily recover from its annual risk and be sustainable indefinitely, but would be devastated by the latter one when it eventually occurs, and possibly unable to recover. Though mathematically equivalent using the metric of mean annual economic loss, considering them as equal creates a false equivalence. The symmetry within risk matrices between low-probability/high-consequence events and high-probability/low-consequence events is a poor representation of how impacts are systemically experienced by people, institutions and communities. Unpacking disaster consequences requires more than the aggregation of singular metrics; rare events should be analyzed from a system perspective.
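The arithmetic behind this false equivalence is easy to reproduce. The sketch below uses the hypothetical figures from the example above:

```python
# Two hypothetical risks with identical mean annual expected loss
# (1% of community assets), following the example above.
assets = 100.0                               # community assets, arbitrary units

annual_prob_a, loss_a = 1.00, 0.01 * assets  # occurs every year, 1% loss
annual_prob_b, loss_b = 0.01, 1.00 * assets  # 100-year event, total loss

expected_a = annual_prob_a * loss_a
expected_b = annual_prob_b * loss_b
print(expected_a, expected_b)  # identical expected annual losses
print(loss_a, loss_b)          # radically different single-event impacts
```

The expected annual losses are equal, yet the single-event impacts differ by two orders of magnitude, which is precisely the systemic difference that a mean-annual-loss metric hides.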

2.2.2 Metrics

Estimates of disaster risk are often calculated through the use of specific quantitative metrics, though there are other approaches in use such as narratives and qualitative analysis. Clearly, any one metric will oversimplify consequences. But it must also be noted that metrics such as economic loss or number of deaths are underlain by specific sets of values that are often unstated. For example, choosing total economic loss as compared to per capita economic loss will bias risk estimates towards developed nations and wealthier communities, and says nothing about who was affected and what was lost. Similarly, choosing total number of fatalities says nothing about who died, or how large a fraction of an affected community was impacted. Even when data is normalized, the choice of normalization will affect risk estimations. For example, in North America the number of car accident fatalities per distance driven has declined dramatically over time, whereas the number of car accident fatalities normalized by population has remained fairly stable (Wilde 2013). The outcome of any risk evaluation process is highly dependent upon methodology and choice of metric.
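A toy comparison makes the point about normalization concrete. The figures below are fabricated for illustration only:

```python
# Illustrative (fabricated) figures: the normalization choice
# reverses which region appears to be at greater risk.
regions = {
    "large city": {"fatalities": 200, "population": 4_000_000},
    "small town": {"fatalities": 50, "population": 20_000},
}

by_total = max(regions, key=lambda r: regions[r]["fatalities"])
by_rate = max(regions, key=lambda r: regions[r]["fatalities"] / regions[r]["population"])
print(by_total)  # large city: more total deaths
print(by_rate)   # small town: 1 in 400 residents vs 1 in 20,000
```

Neither ranking is "the" correct one; each encodes a value judgement about whether absolute losses or proportional community impact matters more.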

2.2.3 Non-Linearity

Another trap in the use of risk matrices is that actual relationships between hazard frequency and consequence are generally non-linear, often reflecting critical thresholds beyond which losses increase exponentially (Michel-Kerjan et al. 2013). Such sensitivities are difficult to capture in a simple matrix with relatively few categories, hence the importance of granularity in the analysis. It should also be noted that whereas a particular scenario such as a 500-year earthquake can be represented as a point or a box in a risk matrix, risk in a more general sense is best represented by curves that cover the range of possible magnitudes (see for example, Lee and Rosowsky 2005).
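The threshold behaviour described here can be sketched with a simple loss function. The functional form and parameters are assumptions chosen purely for illustration:

```python
import math

THRESHOLD = 3.0  # e.g. a levee design level, in arbitrary hazard units

def loss(hazard_level: float) -> float:
    """Illustrative loss function: losses grow modestly below the
    critical threshold and exponentially above it."""
    if hazard_level <= THRESHOLD:
        return 2.0 * hazard_level
    return 2.0 * THRESHOLD * math.exp(hazard_level - THRESHOLD)

# Losses roughly double per unit of hazard below the threshold,
# then jump sharply once the threshold is exceeded.
for level in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(level, round(loss(level), 1))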

The development of comparative risk assessment methodologies is an important area that requires further research, as noted by the Committee to Review the DHS’s Approach to Risk Analysis (2010). Lundberg and Willis (2016) present an interesting and useful article on comparative risk assessment using qualitative methods and expert judgement. Their paper applies a deliberative risk ranking method to threats identified by the Department of Homeland Security, and finds that the methodology was able to achieve a high level of consensus amongst the experts involved in evaluating those threats.

Within the context of the above discussion, the following sections focus on two specific issues, namely (1) how risk is mathematically defined by a number of emergency management organizations, and (2) how rare high consequence events are incorporated into their risk assessment methodologies, particularly risk matrix approaches.

3 Survey of Emergency Management Risk Assessments: US States/Canadian Provinces/National Risk Registers

3.1 EM Standards

There exist various standards that guide risk analyses. Most notable internationally is ISO 31000, a family of risk management standards codified by the International Organization for Standardization (ISO 31000, 2009). Drawing from Standards Australia’s own risk management principles and guidelines, ISO 31000 offers both a universal risk management vocabulary, and guidance on the selection and application of risk assessment techniques. Rather than refer to the chance or probability of loss, ISO 31000 conceptualizes risk as “the effect of uncertainty on objectives,” a paradigm shift that recognizes the potential of positive possibilities resulting from events, in addition to the negative ones (Purdy 2010; Lalonde and Boiral 2012). The ISO 31000 definition of risk is vague, especially given the inbuilt subjectivity in the terms “uncertainty” and “objectives.” Nonetheless, risk expressed mathematically in the standard is a combination of the consequences of an event (hazard) and the associated likelihood/probability of its occurrence, which can be expressed in a qualitative, semi-quantitative, or quantitative manner (ISO 31000, 2009). We note that uncertainty around likelihood and consequences does not explicitly appear in the risk equation, though the risk conceptualization is based upon it. ISO 31000 does not explicitly discuss the use of risk matrices.

Consensus standard organizations have also developed country-specific best practices in risk management. The National Fire Protection Association (NFPA) of the United States of America introduced NFPA 1600, a set of criteria that has been used for developing and assessing local emergency management and business continuity programs since 1991 (NFPA 2013). NFPA 1600 defines risk assessment as the process of hazard identification, and the analysis of probabilities, vulnerability, and impacts, and adopts the ISO 31000 definition of risk (NFPA 2013). The impacts of hazards, whether natural, human-caused, or technology-caused, range from the health and safety of persons in the affected area to the reputation of, or confidence in, an entity.

While the 2013 NFPA edition makes no reference to risk matrices, the 1995 edition includes a section entitled Recommended Practice for Disaster Management, which contains a risk assessment code matrix approach to evaluating risk (NFPA 1600 1995). The first step is described as assessing the “worst credible result” of a disaster scenario and assigning a severity code to it: negligible, marginal, critical, or catastrophic. Use of the word “credible” allows for considerable variance in terms of how different users might interpret this guidance, as was illustrated in the tanker spill study in Part 1 of this series (see Etkin, Mamuji, and Clarke 2018). Each severity category includes consequences to personnel, the public, investment loss, the environment, compliance and mission impact. If a hazard meets the criteria of more than one category, users of the standard are advised to place it in the highest relevant one. Hazard probability must then be estimated for each hazard, and categorized as follows: unlikely (<1%/year), possible (1–10%/year), likely (10–100%/year) or highly likely (very probable within the next year).

In May 2014, the UK’s Fire Protection Association (FPA) announced a formalized relationship to use NFPA’s codes and standards in FPA’s Property Risk Management Education and Certification Program (NFPA 2014). The Canadian equivalent, the Canadian Standards Association (CSA Group) Z1600 standard, also adopts the ISO 31000 definition of risk (CSA 2014). While it does not recommend the use of risk matrices or worst-case scenarios, it urges organizations to categorize hazards and threats by their frequency and severity, “keeping in mind that there could be any possible combinations of frequency and severity for each” (CSA 2014, 36). Although these standards do not carry the force of law unless adopted by local government bodies, their conceptualization of risk informs numerous national policies.

In 2009, the European Commission adopted a Community approach on the prevention of natural and manmade disasters (CEC 2009), a Communication which sets out an overall disaster prevention framework for member countries, proposing measures to minimize the impacts of disasters. Further detailing the ISO 31000 definition of risk, it expresses risk as a function of the probability of occurrence of a hazard, the exposure (total value of all elements at risk), and the vulnerability (specific impact on exposure).

Risk = f(p, E, V)

In addition to hazard and risk identification, impact analysis, and risk assessments, other major components of the European Union (EU) disaster prevention framework include risk matrices, scenario development, risk management measures, and regular reviews at all levels of government.

Canadian federal legislation requires the identification and development of emergency plans for risks, but does not specify the need for a risk assessment, though this is present in some provincial legislation (Ontario 2009; Public Safety 2011). However, Public Safety Canada does provide guidance for lower levels of government on structuring a risk assessment. The All Hazards Risk Assessment Methodology Guidelines 2012–2013 includes a table of suggested return periods ranging from 1 to 100,000 years (Public Safety 2012). Similarly, under the Disaster Mitigation Act of 2000, all US states and territories are required to prepare and update a hazard mitigation plan every 3 years, with the aim of reducing or eliminating the effects of natural hazards (FEMA 2015). The Federal Emergency Management Agency (FEMA) in the US provides guidance to states on how to incorporate hazard frequency in a risk analysis, stating the following in its State Mitigation Plan Review Guide:

States can define the likelihood of a hazard occurring in terms of a general descriptor like unlikely, likely, or highly likely. If general descriptors are used, the plan must define the descriptors. For example, “highly likely” could be defined as “equals a nearly 100-percent chance of occurrence next year” or “happens every year.” It is recommended that these general descriptors only be used in instances where statistical probabilities or historical analyses are not available (FEMA 2015).
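Return periods such as those in the Public Safety Canada table relate to annual and multi-year probabilities through standard arithmetic, which can be sketched as follows (the 25-year planning horizon is an illustrative choice, not drawn from the guidelines):

```python
def annual_probability(return_period_years: float) -> float:
    """Annual exceedance probability for a T-year return period."""
    return 1.0 / return_period_years

def probability_within(return_period_years: float, horizon_years: int) -> float:
    """Chance of at least one occurrence within a planning horizon,
    assuming independent years."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

# Even a 500-year event has a roughly 5% chance of occurring at least
# once during a 25-year planning horizon.
print(round(probability_within(500, 25), 3))  # 0.049
```

This is one reason excluding long-return-period events from a plan understates cumulative risk: over realistic planning horizons their occurrence probabilities are not negligible.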

FEMA’s 2013 Threat and Hazard Identification and Risk Assessment Guide states that “Communities should consider only those threats and hazards that could plausibly occur,” an indication of the lack of attention being paid to the occurrence of rare events (FEMA 2013). Like the word “credible” discussed above, the use of “plausibly” allows for significantly different subjective interpretations.

The UK government monitors the most significant emergencies that the country and its citizens could face over the next 5 years through its National Risk Assessment (NRA) (UK Cabinet Office 2012). The National Risk Register (NRR), the public version of the assessment, refers to ‘reasonable worst case’ scenarios, which represent a challenging manifestation of the event “after highly implausible scenarios are excluded” (UK Cabinet Office 2012).

3.2 Survey of Risk Analyses by Emergency Management Organizations

The policies of some US States, Canadian provinces/territories, and countries explicitly provide guidance on the generation of a hazard and risk analysis that incorporates probability. These analyses are often formalized through the use of a risk matrix, which uses quantitative, qualitative or ordinal classifications of frequency and consequences to rate and rank risk, based upon varying equations. Table 1 provides a summary of risk equations used in state, provincial and country level Hazard Mitigation Plans (where risk equations are provided). Where risk matrices are used, the ordinal categories are described either qualitatively or quantitatively using linear or logarithmic scales. Risk is then calculated as either the product or sum of the ordinal scales. Table 2 provides a summary of the lowest probability categories used in policy documents.

Table 1:

Summary Table of Risk Equations used in State, Provincial and Country Hazard Mitigation Plans.a

Table 2:

Summary Table of Rarest Category Used (where Quantitative Values are Provided).a

3.3 Critique of Emergency Management Organization Risk Assessment Methodologies

This section uses specific examples from the surveyed emergency plans to illustrate the issues described above. Many states use HAZUS and RAMCAP to assist with their risk assessments. HAZUS (Hazards US) is a software package used by FEMA to estimate physical, economic, and social impacts of flooding, hurricane, coastal surge, tsunami, and earthquake disasters (FEMA 2018). HAZUS will calculate average annualized losses as well as losses from rare events. RAMCAP (Risk Analysis and Management for Critical Asset Protection) provides a methodology to assess and manage risks to critical infrastructure from terrorist attacks and naturally occurring hazards. Supply chain dependencies and exposure to hazardous locations are addressed. Assessing terrorism risk is particularly challenging from a traditional risk perspective, since attack probabilities are essentially unknown. RAMCAP incorporates the following steps: asset and threat characterization; consequence and vulnerability analysis; threat and risk assessment; and risk management (Brashear and Jones 2010). It uses the formula risk = (threat) × (vulnerability) × (consequence) to estimate risk. Here ‘threat’ is the likelihood that an adverse event will occur within a specified period, usually 1 year; ‘vulnerability’ is the probability that, given an adverse event, the estimated consequences will ensue; and ‘consequence’ is a metric of loss such as fatalities or economic losses. Both HAZUS and RAMCAP are objective and quantitative risk estimators.
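The RAMCAP formula quoted above is a product of three terms, which can be sketched directly. The input values below are illustrative placeholders, not outputs of the RAMCAP process:

```python
def ramcap_risk(threat: float, vulnerability: float, consequence: float) -> float:
    """risk = threat x vulnerability x consequence, per the RAMCAP formula.

    threat: annual likelihood of the adverse event
    vulnerability: probability the estimated consequences ensue, given the event
    consequence: a loss metric, e.g. economic loss"""
    return threat * vulnerability * consequence

# Illustrative: a 1-in-50-year event with a 40% chance of full
# consequences and a $10M loss gives an expected annual loss of $80,000.
print(round(ramcap_risk(threat=0.02, vulnerability=0.4, consequence=10_000_000)))  # 80000
```

Note that the output is again a mean annualized figure, and therefore inherits the false-equivalence problem discussed in Section 2.2.1 when used to compare frequent and rare events.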

3.3.1 Ohio

The Ohio risk analysis is based upon FEMA guidance, and uses a simple 5×4 risk matrix (OEMA 2011). Their analysis (Figure 3) ranks 25 risks on a scale of 1–5. This methodology includes events with “Little to no probability in next 100 years,” but cannot assign a risk ranking higher than 2 to such events, thereby relegating the rarest and most catastrophic events to a low priority irrespective of their potential impact. By this measure, a nuclear accident that would destroy the state poses less risk than a more common event such as coastal flooding or wildfires. We argue that such a methodology results in an underestimation of rare catastrophic events, by relying upon mean annualized statistics that ignore larger systemic impacts.
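The capping behaviour can be reproduced schematically. The score table below is a simplified stand-in for a frequency-capped matrix of this kind, not a transcription of the Ohio matrix:

```python
# Simplified stand-in for a frequency-capped risk matrix: rows are
# probability categories (1 = rarest), columns are consequence levels
# (last column = most severe). In the rarest row no cell can score
# above 2, whatever the consequence.
RANKING = {
    1: [1, 1, 2, 2],   # "little to no probability in next 100 years"
    2: [1, 2, 3, 3],
    3: [2, 3, 4, 4],
    4: [2, 3, 4, 5],
    5: [3, 4, 5, 5],
}

worst_case_rare = RANKING[1][3]   # rare, maximum-consequence event
common_moderate = RANKING[4][1]   # frequent, moderate-consequence event
print(worst_case_rare, common_moderate)  # 2 3 -- the rare catastrophe ranks lower
```

Any hazard placed in the rarest row is structurally barred from a high ranking, which is the systemic bias this section describes.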

Figure 3: Ohio Ranking Assessment (OEMA 2011, p. 70).

3.3.2 United Kingdom Risk Matrix

The UK uses a definition of risk = [hazard exposure × vulnerability]/[coping capacity], and provides a risk matrix in its National Risk Register (see Figure 4) (UK Cabinet Office 2015). We assume that by hazard exposure they mean likelihood, and by impact they mean vulnerability/coping capacity. Note that very rare events are included in this methodology. The hazards in this figure are scenario based, reflecting either serious potential events or historical ones.

Figure 4: UK Risk Matrix – UK National Risk Register (UK Cabinet Office 2015, p. 13).

From a communication perspective, the scales used are potentially confusing to a lay user who is trying to understand relative risks. The likelihood scale on the x-axis is logarithmic, while the impact levels on the y-axis, ranging from insignificant (level 1) to catastrophic (level 5), are qualitatively defined and may or may not be logarithmic. The risk matrix as presented, therefore, is potentially misleading in that it actually shows log(risk), not risk as defined by the UK methodology (assuming that the y-axis is logarithmic). If the risk calculations are done using a linear scale (Figure 5), then the relative rankings appear visually quite different than in Figure 4. The relative risk calculations in Figure 5 are based upon multiplying annual probabilities by impact scores of 1–4. If the impact ranges are logarithmic rather than linear, then the relative importance of pandemic would be much greater.
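The sensitivity of rankings to the scale assumption can be demonstrated with a toy recomputation. The hazard names echo the UK register, but the probabilities and impact scores below are illustrative assumptions, not figures from the NRR:

```python
# Illustrative hazards with an annual probability and an ordinal impact score.
hazards = {
    "pandemic": (0.04, 5),
    "coastal flooding": (0.02, 4),
    "severe storm": (0.5, 2),
}

# Treating impact scores as linear weights versus as logarithmic steps
# (each step = 10x the consequence) changes the top-ranked risk.
linear = {h: p * impact for h, (p, impact) in hazards.items()}
log10 = {h: p * 10 ** impact for h, (p, impact) in hazards.items()}

print(max(linear, key=linear.get))  # severe storm
print(max(log10, key=log10.get))    # pandemic
```

Whichever interpretation is intended, a matrix that does not state it invites readers to compare cells on the wrong scale.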

Figure 5: UK Relative Risk Rankings using a Linear Scale (original image). Note the dominance of pandemics.

3.3.3 Kansas

Kansas uses four probability categories, ranging from unlikely (score = 1) to highly likely (score = 4) (KDEM 2010). The lowest category, unlikely, is defined as:

  • “Event is possible within the next 10 years

  • Event has up to 1 in 10 years chance of occurring (1/10 = 10%)

  • History of events is less than or equal to 10% likely per year

  • Event is ‘Unlikely’ but is possible of occurring.”

Hazards (e.g. floods and landslides) are considered generically. Thus floods are given a single risk calculation, landslides another, and so on. Ranges in hazard magnitude and specific scenarios are not examined, though historical events are listed. The formula used to compare risks (similar to those used by Arizona, Iowa, and Oklahoma) is:

Cumulative Probability Risk Index (CPRI) = (Probability × 0.45) + (Magnitude/Severity × 0.30) + (Warning Time × 0.15) + (Duration × 0.10)

We argue that using warning time and duration generically for all hazards is a flawed methodology, since their significance in terms of consequence depends upon factors that vary greatly. The warning time needed to save lives differs from that needed to prevent property damage, and the effectiveness of a warning depends upon the type of hazard, available resources, potential responses, the degree to which those responses can reduce risk, risk perception biases, how knowledgeable local populations are, and so on. The warning categories used by Kansas, based upon FEMA guidelines, range from level 1 (24+ hours) to level 4 (<6 hours). These timeframes are simply not appropriate for all hazards. For example, earthquake warnings of a few minutes can save many lives, although they make little or no difference to property damage, and a 6-hour earthquake warning is probably as good for saving lives as a 24-hour one. Some disasters, such as drought, are difficult to frame in terms of a beginning and end time at all. Including generic warning time and duration in a risk equation for all hazards likely introduces noise into risk calculations that may mask meaningful rankings.

Further, the CPRI formula makes an error of logic by adding probability and magnitude instead of multiplying them. Using this formula, a hazard with high probability but zero magnitude (i.e. a frequent but inconsequential event) would receive a significant risk level, even though the risk is clearly zero. Similarly, it does not matter how large a potential impact is if it cannot happen. This error of logic appears in all additive risk formulations, such as the Risk Factor formula used by Idaho, Pennsylvania and Wyoming, and is also present in risk matrices that use additive formulas.
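The flaw is easy to demonstrate by transcribing the CPRI weights directly. Note that category scores in the Kansas plan run 1–4; the literal zeros below fall outside that range and are used purely to expose the additive logic:

```python
def cpri(probability, magnitude, warning_time, duration):
    """Kansas-style CPRI; each argument is a category score (1-4 in the plan)."""
    return (0.45 * probability + 0.30 * magnitude
            + 0.15 * warning_time + 0.10 * duration)

# A frequent but inconsequential event (magnitude forced to zero).
harmless = cpri(probability=4, magnitude=0, warning_time=4, duration=4)   # 2.8

# An impossible but catastrophic event (probability forced to zero).
impossible = cpri(probability=0, magnitude=4, warning_time=1, duration=1)
```

Both scores sit well above zero, although the true risk of each event is zero; a multiplicative formulation would correctly return zero in both cases.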

To illustrate, consider two 5×5 risk matrices (Table 3), one based upon the multiplicative formula risk = probability × consequences, and the other upon the additive formula risk = probability + consequences. The two risk profiles are quite different. In the former case, risk is uniformly zero wherever probability or consequences are zero, which matches how risk is generally understood. This is reflected in the Swiss national risk register, which notes that “hazard and vulnerability must be simultaneously present at the same location to give rise to risk” (SCCA 2012, 18). In the latter case risk is non-zero, which is not a realistic representation; in fact, a zero-probability or zero-consequence event can have a risk rating of 50% of the maximum risk calculation in the matrix. As well, the ranges of the risk estimates differ substantially, from 0–25 in the first case to 1–10 in the latter. Risk calculations of this type should be multiplicative, not additive, in the two variables probability and consequence.
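Table 3's contrast can be generated directly. Probability and consequence are scored 0–4 here (an illustrative choice, so that a literal zero is representable):

```python
# risk = p * c versus risk = p + c over a 5x5 grid, scores 0-4.
mult = [[p * c for c in range(5)] for p in range(5)]
add  = [[p + c for c in range(5)] for p in range(5)]

# Multiplicative: zero probability (or zero consequence) always yields zero risk.
zero_prob_row = mult[0]                      # [0, 0, 0, 0, 0]

# Additive: an impossible event with maximum consequence still scores
# half the matrix maximum.
worst_impossible = add[0][4]                 # 4
matrix_max = max(max(row) for row in add)    # 8
```

The entire first row and first column of the additive matrix are non-zero, which is exactly the unrealistic profile described above.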

Table 3:

Risk as Multiplicative, as Compared to Additive.

It is interesting to consider the sources of risk definitions that are based upon the addition of probability and consequence, as opposed to their multiplication. Based on our survey, at least 18 US States use addition of these variables in their equation. In a review of the 2004 edition of the book At Risk (Blaikie et al. 1994), Cyr (2005) incorrectly references the equation for risk as Risk = Hazard + Vulnerability, whereas in At Risk the equation is stated as Risk = Hazard × Vulnerability. According to Google Scholar the Cyr paper has been cited by 9 other publications, with both Marianti (2008) and Jamil and Ariadurai (2013) subsequently using the addition sign in the risk equation and attributing it to Blaikie et al. Further complicating matters, Marianti (2008) cites the equation “Disaster = Risk + Vulnerability,” a complete confusion of the terms as used in At Risk. There may be further examples, but it appears that incorrect versions of the Blaikie et al. (1994, 2014) risk equation worked their way into the academic and professional literature, and likely played a role in the adoption of additive definitions of risk by some emergency management organizations, though we do not know in how many instances this has occurred.

We also note that for Kansas, few risk categories are identified (low, moderate, and high), resulting in a limited ability to discriminate between different risk situations. The Kansas risk assessment methodology suffers from the problems and traps inherent to risk matrices, including a problematic formulation of risk that adds rather than multiplies its components.

3.3.4 The Netherlands

We note that the national risk assessment done by the Netherlands (Appendix 1) is unusually well done, and incorporates both error bars and a sensitivity analysis, which allows the user to better frame risk calculations.

3.3.5 Texas

In some cases, such as the Texas methodology, the terminology used can be confusing or unclear (TDEM 2013). Frequency of occurrence is categorized as (1) Highly likely: event probable in the next year, (2) Occasional: event possible in the next 5 years, or (3) Unlikely: event possible in the next 10 years. Different users, for example, might interpret the words possible and probable quite differently. In fact, taken literally, any event with a non-zero probability would fit into the occasional or unlikely categories.

This example highlights the important issue of granularity. In the surveyed plans, the number of probability categories ranges from 3 to 6. When data is binned into only a few categories, information is lost, and the shape of the tail of a distribution is obscured as compared to a more granular analysis (Virkar and Clauset 2014; Wheatley, Sovacool, and Sornette 2016). A practical example of this issue is how nuclear accidents are categorized: on a 7-point scale ranging from ‘anomaly’ to ‘major accident’ (IAEA 2017). This scale has been critiqued as not having enough categories to effectively rank nuclear risks (Smythe 2011). The problem can be avoided through the use of a larger number of bins, scenario planning that incorporates rare and/or worst case scenarios, or more sophisticated quantitative analyses that use continuous variables in risk calculations (Duijm 2015), such as the CAT models used by the insurance industry (Grossi 2005), which include outputs such as loss exceedance probabilities.
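The information lost to coarse binning can be illustrated with a simulated heavy-tailed loss distribution. A continuous loss-exceedance curve, of the kind CAT models output, retains the tail that a three-category low/moderate/high scheme collapses; the Pareto losses below are purely synthetic:

```python
import random

random.seed(1)
# Synthetic heavy-tailed annual losses, as a CAT model might simulate.
losses = [random.paretovariate(1.5) for _ in range(10_000)]

def exceedance_prob(threshold):
    """Estimated P(annual loss >= threshold) from the simulated losses."""
    return sum(l >= threshold for l in losses) / len(losses)

# The continuous curve still distinguishes tail events that would all be
# lumped into a single "unlikely" bin under a 3-category scheme:
curve = {t: exceedance_prob(t) for t in (1, 10, 100, 1000)}
```

Each successive threshold here carries distinct probability information; a scheme with three probability bins would assign the 100-unit and 1000-unit losses the same category despite their very different exceedance probabilities.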

4 Discussion

The analysis above critiqued risk assessment methodologies used by emergency management organizations for several reasons, including: the absence of rare events; risk equations that add variations of hazard, probability and consequence instead of multiplying them; the inclusion of terms in risk equations that add no value but instead increase the noise-to-signal ratio; and the use of equations that create a false equivalence between frequent low-consequence events and rare high-consequence events.

Some of these problems arise from the effort to create objective, quantitative measures of risk that can form a rational basis for policy making. Though this is a very appealing goal, it is also an illusory one, since risk is socially constructed (though there are certainly aspects of risk that are objective and quantitative). It is for this reason that the National Academy of Sciences review of the Department of Homeland Security recommended the further development and use of qualitative methodologies and better comparative risk assessment methods (Committee to Review the DHS’s Approach to Risk Analysis 2010).

Including worst case scenarios in quantitative risk assessments can be very challenging since their probability and consequences may not be well known. A qualitative or narrative-based analysis can still provide insight into how a community views such events in terms of their acceptable level of risk. Basing risk assessments only upon quantitative data and deterministic equations is a fantasy, giving the illusion of knowledge where it does not exist. Of course, quantitative analyses form a crucial part of any risk assessment, but only represent part of the whole.

Including systemic impacts is important. Very large events can restructure communities. And since every community is different, the systemic impacts from large events will depend upon variables such as social and economic capacity and resilience, amongst others. The impact of Hurricane Katrina on New Orleans is a good example of this; the urban residence pattern of that city was rapidly transformed, something that does not tend to happen from the cumulative impacts of more frequent smaller events that allow for incremental adaptive responses. There is no accepted standard way to measure this issue, but that does not take away from its importance.

Though not previously discussed, the metrics chosen to rank risk are always underlain by values which go largely unacknowledged. For example, the common use of total economic damage reinforces existing power and wealth structures. If attention is to be paid to the more vulnerable in society, then economic impact normalized by some measure of wealth would be a better metric. Rarely do these risk analyses discuss who is affected, what rights and obligations are owed to them by their governments, or what their duties are in terms of mitigating their own risks.

To serve the public good, a risk assessment should have the following characteristics:

  • It should be placed in context, such that its purpose is clearly understood.

  • Those who perform the risk assessment need to be identified, along with any potential biases or conflicts of interest.

  • It should incorporate both quantitative and qualitative methodologies.

  • It should include the full range of possible events, so that risks can be properly evaluated. Communities can then decide upon their acceptable levels of risk and how much residual risk is tolerable.

  • A variety of metrics should be used, which reflect the interests of all the stakeholders. Values underlying the choice of metrics should be identified. More broadly, the ethical framework that underlies the risk assessment methodology should be made clear.

  • Systemic impacts need to be addressed in order to avoid the false equivalence most methodologies create between high-frequency low-impact events and low-frequency high-impact events.

  • Where quantitative methods and equations are being used, they should be qualified by caveats indicating their limitations and supported by robust analyses including sensitivity analyses such as used in the Netherlands national risk assessment.

5 Conclusion

For societies to rationally manage the risks that they face, it is essential for those risks to be assessed and compared. Doing so is not a trivial task, and it faces challenges that include how risk is best defined, incomplete data and scientific knowledge, methodological issues in risk assessment, and an awareness that the ways risks are perceived and measured have strong subjective components, with metrics that are value laden.

Mathematical definitions of risk are best considered as a metaphor or model, which is contextually useful for particular purposes. Their utility should be assessed in terms of their internal logic and their correlation to the policy purposes they are designed to serve.

Specifically, this paper has considered how emergency management organizations in the US and Canada, and various national risk registers, define and assess risk. We conclude that there is a systemic underestimation of risk, resulting from problems with risk-definition equations and from methodological issues that do not adequately account for rare high-consequence events. We recommend the further development of guidance documents and/or standards for emergency management organizations on how risk should be assessed, which specifically outline the traps and methodological issues that can result in biased or unrealistic assessments.

Appendix 1

State, Provincial and National Risk Assessment References

State References

Alabama Emergency Management Agency. Alabama State Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://ema.alabama.gov/filelibrary/AL%20Standard%20State%20Mitigation%20Plan.pdf

Alaska Division Of Emergency Services. Alaska Hazard Mitigation Plan, 2013. Accessed December 11, 2016. https://ready.alaska.gov/plans/documents/Alaskas%20HMP%202016.pdf

Arizona Division Of Emergency Services. Arizona Hazard Mitigation Plan, 2013. Accessed December 11, 2016. https://dema.az.gov/sites/default/files/EM-PLN_2013%20State%20Hazard%20Mitigation%20PlanFINAL_0.pdf

Arkansas Office Of Emergency Services. Arkansas All Hazard Mitigation Plan, 2010. Accessed December 11, 2016. http://www.adem.arkansas.gov/aem/wp-content/uploads/2016/02/Arkansas%20All-Hazards%20Mitigation%20Plan%20-%202013%20FINAL.pdf

California Office Of Emergency Services. California Multi-Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://hazardmitigation.calema.ca.gov/docs/SHMP_Final_2013.pdf

Colorado Division Of Homeland Security And Emergency Management. Colorado Natural Hazards Mitigation Plan, 2013. Accessed December 11, 2016. https://www.colorado.gov/pacific/mars/natural-hazard-mitigation-plan-0

Connecticut Office Of Emergency Management. 2014 Connecticut Natural Hazards Mitigation Plan Update, 2014. Accessed December 11, 2016. http://www.ct.gov/deep/lib/deep/water_inland/hazard_mitigation/ct_nhmp_adopted_final.pdf

Florida Division Of Emergency Management. Florida State Risk Assessment Section 3, 2013. Accessed December 11, 2016. http://www.floridadisaster.org/mitigation/State/documents/2013stateplan/Section%203%20State%20Risk%20Assessment%20FINAL.pdf and http://www.floridadisaster.org/mitigation/State/documents/2013stateplan/Appendix%20C%20Risk%20Assessment%20Tables%20FINAL.pdf

Georgia Emergency Management Agency. Georgia Hazard Mitigation Strategy, 2014. Accessed December 11, 2016. http://www.gema.ga.gov/Mitigation/Documents/Planning/2014%20GHMS.pdf and http://www.gema.ga.gov/Plan%20Library/Georgia%20Hazard%20Mitigation%20Plan%20Appendices%20%20(2014).pdf

Hawaii Emergency Management Agency. “Multi-Hazard Mitigation Plan” 2017. Accessed January 11, 2017. http://dod.hawaii.gov/hiema/emergency-management/hazard-mitigation-plan/

Idaho Bureau Of Disaster Services/Military Division. Idaho Hazard Mitigation Plan, 2013. Accessed December 11, 2016. https://ioem.idaho.gov/Pages/Plans/Mitigation/SHMP.aspx

Illinois Emergency Management Agency. Illinois Natural Hazard Mitigation Plan, 2013. Accessed December 11, 2016. https://www.illinois.gov/iema/Mitigation/Documents/Plan_IllMitigationPlan.pdf

Indiana Emergency Management Agency. Indiana Standard Multi-Hazard Mitigation Plan, 2014. Accessed December 11, 2016. http://www.savi.org/savi/documents/Polis%20docs/Indiana%20SHMP%20FINAL.pdf

Iowa State Emergency Management Division. Iowa Hazard Mitigation Plan, Risk Assessment, 2013. Accessed December 11, 2016. http://www.homelandsecurity.iowa.gov/disasters/hazard_mitigation.html

Kansas Division Of Emergency Management, KDEM. Kansas Hazard Mitigation Plan, 2010. Accessed December 11, 2016. http://www.kansastag.gov/AdvHTML_doc_upload/CompleteKSHMP2.5.11.pdf

Kentucky Division Of Emergency Management. Kentucky State Enhanced Hazard Mitigation Plan, 2010. Accessed December 11, 2016. http://kyem.ky.gov/recovery/Pages/State-Hazard-Mitigaton-Plan.aspx

Louisiana Office Of Emergency Preparedness. Louisiana Hazard Mitigation Plan, 2014. Accessed December 11, 2016. http://gohsep.la.gov/MITIGATE/HM-PLANNING/State-Hazard-Mitigation-Plan

Maine Emergency Management Agency. Maine State Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://www.maine.gov/mema/mitigation/mema_mit_plans.shtml

Maryland Emergency Management Agency. Maryland Hazard Mitigation Plan Update, 2011. Accessed December 11, 2016. http://mema.maryland.gov/community/Documents/2016%20Maryland%20Hazard%20Mitigation%20Plan%20final%202.pdf

Massachusetts Emergency Management Agency. Massachusetts State Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://www.mass.gov/eopss/docs/mema/resources/plans/state-hazard-mitigation-plan/massachusetts-state-hazard-mitigation-plan.pdf

Michigan Division Of Emergency Management. Michigan Hazard Mitigation Plan, 2014. Accessed December 11, 2016. https://www.michigan.gov/documents/msp/MHMP_480451_7.pdf

Minnesota Division Of Emergency Management (2014). Minnesota State Hazard Mitigation Plan, https://dps.mn.gov/divisions/hsem/hazard-mitigation/Documents/State%20Plan%20Final%202014.pdf

Mississippi Emergency Management Agency. Mississippi State Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://www.msema.org/wp-content/uploads/2012/06/State-Hazard-Mitigation-Plan-2013.pdf

Missouri Emergency Management Agency. Missouri State Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://sema.dps.mo.gov/docs/programs/LRMF/mitigation/MO_Hazard_Mitigation_Plan_2013.pdf

Montana Division Of Disaster And Emergency Services (2010). State Of Montana Multi-Hazard Mitigation Plan And Statewide Hazard Assessment, http://montanadma.org/montana-mitigation-plan

Nebraska State Emergency Management Agency. Nebraska Hazard Mitigation Plan, 2014. Accessed December 11, 2016. https://nema.nebraska.gov/sites/nema.nebraska.gov/files/doc/hazmitplan.pdf

Nevada Division Of Emergency Management. Nevada Enhanced Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://dem.nv.gov/uploadedFiles/demnvgov/content/DEM/0_HazardMitigationPlan_FULL.pdf

New Hampshire Department Of Safety Homeland Security And Emergency Management. New Hampshire Multi-Hazard Mitigation Plan Update 2013, 2013. Accessed December 11, 2016. https://www.nh.gov/safety/divisions/hsem/HazardMitigation/documents/hazard-mitigation-plan.pdf

New Jersey Office of Emergency Management. New Jersey Hazard Mitigation Plan, Section 5. State Risk Assessment, 2014. Accessed December 11, 2016. http://www.ready.nj.gov/programs/mitigation_plan2014.html

New Mexico Division Of Emergency Management. New Mexico State Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://www.nmdhsem.org/uploads/files/NM%20HMP%20Final%209-30-13.pdf

New York State Emergency Management Office. New York State Hazard Mitigation Plan, Section 3.0: Hazard Identification And Risk Assessment, 2014. Accessed December 11, 2016. http://www.dhses.ny.gov/recovery/mitigation/documents/2014-shmp/Section-3-0-3-2-HazardProfile-Risk-Assessment.pdf

North Carolina Division Of Emergency Management. North Carolina State Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://www.ncdps.gov/Emergency-Management/EM-Community/Recovery-Mitigation/Hazard-Mitigation/Mitigation-Planning

North Dakota Division Of Emergency Management. North Dakota 2014 Multi-Hazard Mitigation Plan, Draft, October 2013, 2013. Accessed December 11, 2016. http://www.nd.gov/des/uploads/resources/845/nd_hazard_mitigation_plan_2013_update.pdf

Ohio Emergency Management Agency, OEMA. State Of Ohio Enhanced Hazard Mitigation Plan, Section 2: Hazard Identification & Risk Assessment, 2011. Accessed December 11, 2016. https://ohiosharpp.ema.state.oh.us/OhioSHARPP/Planning.aspx#shmp

Oklahoma Department Of Emergency Management. Oklahoma Standard Hazard Mitigation Plan Update for the Great State of Oklahoma, 2014. Accessed December 11, 2016. https://www.ok.gov/OEM/documents/Oklahoma%20State%20HM%20Plan%202014.pdf

Oregon Emergency Management Division. Oregon Natural Hazards Mitigation Plan, Chapter 2 Risk Assessment, 2015. Accessed December 11, 2016. https://www.oregon.gov/LCD/HAZ/docs/2015ORNHMP/2015ORNHMPApproved/Approved_2015ORNHMP.pdf and http://www.oregon.gov/LCD/HAZ/docs/2015ORNHMP/Approved_2015ORNHMP_5_RAState.pdf#page=151

Pennsylvania Emergency Management Agency. Pennsylvania Standard State All-Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://www.pema.pa.gov/responseandrecovery/Disaster-Assistance/Documents/General%20Mitigation%20Forms%20and%20Information/Pennsylvania%20State%20Hazard%20Mitigation%20Plan%20-%20Oct%2031%202013.pdf

Rhode Island Emergency Management Agency. Rhode Island 2014 Hazard Mitigation Plan Update, 2014. Accessed December 11, 2016. http://www.riema.ri.gov/resources/emergencymanager/mitigation/documents/RI%20HMP_2014_FINAL.pdf

South Carolina Emergency Preparedness Division. South Carolina Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://www.scemd.org/files/Mitigation/State_Hazard_Mitigation_Plan/1_SHMP_FINAL_2013.pdf

South Dakota Division Of Emergency Management (2013). South Dakota Hazard Mitigation Plan, Public Review Draft, 2013. Accessed December 11, 2016. http://dps.sd.gov/emergency_services/emergency_management/documents/SD_SHMP_PublicReviewDraft_rd.pdf

Tennessee Emergency Management Agency. Tennessee Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://www.tnema.org/ema/grants/documents/Tennessee%20Hazard%20Mitigation%20Plan%20-%202013.pdf

Texas Division Of Emergency Management (TDEM). State Of Texas Hazard Mitigation Plan, 2013. Accessed December 11, 2016. https://www.txdps.state.tx.us/dem/documents/txHazMitPlan.pdf

Utah Division Of Comprehensive Emergency Management. State Of Utah Hazard Mitigation Plan, Section Two Identifying Natural Hazards, 2014. Accessed December 11, 2016. https://docs.google.com/viewer?a=v&pid=sites&srcid=dXRhaC5nb3Z8dXRhaHxneDoyZGQyN2UwOTZkNzRhMTQ4

Vermont Division Of Emergency Management. Vermont Hazard Mitigation Plan, 2013. Accessed December 11, 2016. http://demhs.vermont.gov/sites/demhs/files/VT_SHMP2013%20FINAL%20APPROVED%20ADOPTED%202013%20VT%20SHMP.pdf

Virginia Department Of Emergency Management. Virginia Hazard Mitigation Plan, Chapter 3 Hazard Identification And Risk Assessment (HIRA), 2013. Accessed December 11, 2016. http://www.vaemergency.gov/em-community/recovery/haz-mit-plans

Washington Division Of Emergency Management. 2013 Washington State Enhanced State Hazard Mitigation Plan, Appendix 1 Hazard, Risk, And Vulnerability Primer, 2013. Accessed December 11, 2016. http://mil.wa.gov/other-links/enhanced-hazard-mitigation-plan

West Virginia Office of Emergency Services. 2013 West Virginia Statewide Standard Hazard Mitigation Plan Update, 2013. Accessed December 11, 2016. http://www.dhsem.wv.gov/MitigationRecovery/Documents/2013%20WV%20Statewide%20Hazard%20Mitigation%20Plan%20Update.pdf

Wisconsin Division Of Emergency Government. Wisconsin Hazard Mitigation Plan, 2011. Accessed December 11, 2016. http://emergencymanagement.wi.gov/mitigation/docs/2011%20State%20of%20Wisconsin%20Hazard%20Mitigation%20Plan.pdf

Wyoming Emergency Management Agency. Wyoming Multi-Hazard Mitigation Plan Comprehensive Update. 2014. Accessed December 11, 2016. http://wyohomelandsecurity.state.wy.us/library/2014mitigationplan/MITIGATIONDRAFTPLAN.pdf

Provincial References

Emergency Management Ontario, EMO. Hazard Identification And Risk Assessment For The Province Of Ontario, 2012. Accessed December 11, 2016. https://www.emergencymanagementontario.ca/english/emcommunity/ProvincialPrograms/hira/hira_2012.html

Northwest Territories Municipal And Community Affairs. Northwest Territories Hazard Identification Risk Assessment, 2014. Accessed December 11, 2016. http://www.maca.gov.nt.ca/hira/

Nova Scotia Emergency Management Office. Hazard Risk Vulnerability Assessment (Hrva) Model Guidelines For Use, 2010. Accessed December 11, 2016. http://docplayer.net/7263188-Nova-scotia-emo-hazard-risk-vulnerability-assessment-hrva-model-guidelines-for-use-october-2010.html

Manitoba Office Of The Fire Commissioner. Manitoba Hazard Analysis And Risk Assessment, n.d. Accessed December 11, 2016. http://www.firecomm.gov.mb.ca/docs/hazard_analysis_risk_assessment.pdf

Country References

Emergency Management Australia. National Emergency Risk Assessment Guidelines, 2010. Accessed December 11, 2016. http://coastaladaptationresources.org/PDF-files/1438-National-Emergency-Risk-Assessment-Guidelines-Oct-2010.PDF

Danish Emergency Management Agency (DEMA), National Risk Profile (NRP), 2013. Accessed January 19, 2017. https://brs.dk/viden/publikationer/Documents/National_Risk_Profile_(NRP)_-_English-language_version.pdf

Tammepuu, Ants. “Emergency risk assessment in Estonia.” PhD diss., Estonian University of Life Sciences, 2014.

Finland Ministry of the Interior Publication. National Risk Assessment 2015, 2015. Accessed January 19, 2017. https://www.intermin.fi/download/65647_julkaisu_042016.pdf?ff47d27a36a7d388.

Ireland Department Of The Taoiseach. National Risk Assessment 2015 – Overview Of Strategic Risks, 2015. Accessed December 11, 2016. http://www.taoiseach.gov.ie/eng/Publications/Publications_2015/National_Risk_Assessment_2015.pdf

Netherlands. Working with scenarios, risk assessment and capabilities in the National Safety and Security Strategy of the Netherlands, 2009. Accessed December 11, 2016. https://english.nctv.nl/binaries/working-with-scenarios-risk-assessment-and-capabilities_tcm32-84297.pdf

Swedish Civil Contingencies Agency, SCCA. Guide To Risk And Vulnerability Analyses, 2012. Accessed December 11, 2016. https://www.msb.se/RibData/Filer/pdf/26267.pdf

Tasmanian Department of Police and Emergency Services. 2012 Tasmanian State Natural Disaster Risk Assessment, 2012. Accessed December 11, 2016. http://www.ses.tas.gov.au/assets/files/EM%20Publications/disaster_resilience/2012%20TSNDRA%20Report.pdf

UK Cabinet Office. National Risk Register Of Civil Emergencies, 2015. Accessed December 11, 2016. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/419549/20150331_2015-NRR-WA_Final.pdf

References

  • Action Contre La Faim, ACF. 2012. Participatory Risk, Capacity & Vulnerability Analysis: A Practitioner Manual for Field Workers. ACF International. Accessed October 2, 2016. http://www.preventionweb.net/files/34092_34444acf2013practicalmanuelpcva1[1].pdf

  • Anthony Tony Cox, Louis. 2008. “What’s Wrong with Risk Matrices?” Risk analysis 28 (2): 497–512. CrossrefGoogle Scholar

  • Ball, David J., and John Watt. 2013. “Further Thoughts on the Utility of Risk Matrices.” Risk Analysis 33 (11): 2068–2078. CrossrefGoogle Scholar

  • Beck, Ulrich. 1992. Risk Society: Towards a New Modernity. Vol. 17. London and New York: Sage. Google Scholar

  • Beven, K. J., W. P. Aspinall, P. D. Bates, E. Borgomeo, K. Goda, J. W. Hall, T. Page, J. C. Phillips, J. T. Rougier, M. Simpson, D. B. Stephenson, P. J. Smith, T. Wagener, and M. Watson. 2015. “Epistemic Uncertainties and Natural Hazard Risk Assessment – Part 1: A Review of the Issues.” Nat Hazards Earth Syst Sci Discuss 3: 7333–7377. CrossrefGoogle Scholar

  • Blaikie, Piers, Terry Cannon, Ian Davis, and Ben Wisner. 1994. At Risk: Natural Hazards, People’s Vulnerability and Disasters. London: Routledge. Google Scholar

  • Blaikie, Piers, Terry Cannon, Ian Davis, and Ben Wisner. 2014. At Risk: Natural Hazards, People’s Vulnerability and Disasters. Cornwall: Routledge. Google Scholar

  • Brashear, Jerry, and James Jones. 2010. “Risk Analysis and Management for Critical Asset Protection (RAMCAP Plus).” In Wiley Handbook of Science and Technology for Homeland Security, USA: John Wiley & Sons. Google Scholar

  • Burton, Ian. 1993. The Environment as Hazard. Guilford Press. Google Scholar

  • Canadian Standards Association, CSA. 2014. CSA Z1600-14 Essentials–Emergency Management & Business Continuity Programs. Mississauga ON, Canada. Google Scholar

  • Clarke, Lee, and Caron Chess. 2008. “Elites and Panic: More to Fear than Fear Itself.” Social Forces 87 (2): 993–1014. CrossrefGoogle Scholar

  • Committee to Review the DHS’s Approach to Risk Analysis. 2010. In Review of the Department of Homeland Security’s Approach to Risk Analysis, edited by National Research Council of the National Acadamies, Washington, DC: National Academies Press. Google Scholar

  • Communication of the European Communities, CEC. 2009. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – A Community Approach on the Prevention of Natural and Man Made Disasters, Brussels. Accessed September 15, 2016. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52009DC0082

  • Cyr, Joseph F. 2005. “At Risk: Natural Hazards, People’s Vulnerability, and Disasters.” Journal of Homeland Security and Emergency Management 2 (2). DOI: . CrossrefGoogle Scholar

  • Duijm, Nijs Jan. 2015. “Recommendations on the Use and Design of Risk Matrices.” Safety Science 76: 21–31. CrossrefGoogle Scholar

  • Edwards, A. W. F. 1983. “Pascal’s Problem: The ‘Gambler’s Ruin’.” International Statistical Review/Revue Internationale de Statistique50: 73–79. Google Scholar

  • Etkin, D. A., A. A. Mamuji, and L. Clarke. 2018. “Disaster Risk Analysis Part 1: The Importance of Including Rare Events.” Journal of Homeland Security and Emergency Management 15. DOI: . CrossrefGoogle Scholar

  • Federal Emergency Management Agency, FEMA. 2013. Threat and Hazard Identification and Risk Assessment Guide: Comprehensive Preparedness Guide. Accessed September 15, 2016. https://www.fema.gov/media-library/assets/documents/26335

  • Federal Emergency Management Agency, FEMA. 2015. State Mitigation Plan Review Guide. Accessed August 14, 2016. https://www.fema.gov/media-library-data/1425915308555-aba3a873bc5f1140f7320d1ebebd18c6/State_Mitigation_Plan_Review_Guide_2015.pdf

  • FEMA. HAZUS. 2018. https://www.fema.gov/hazus Accessed March 31. 

  • Grossi, Patricia. 2005. Catastrophe Modeling: A New Approach to Managing Risk. Vol. 25. Springer Science & Business Media.

  • Hagmann, Jonas, and Myriam Dunn Cavelty. 2012. “National Risk Registers: Security Scientism and the Propagation of Permanent Insecurity.” Security Dialogue 43 (1): 79–96.

  • Haimes, Yacov Y. 2009. “On the Definition of Resilience in Systems.” Risk Analysis 29 (4): 498–501.

  • Homeland Security. 2008. Risk Steering Committee, DHS Risk Lexicon. Accessed March 11, 2018. https://www.dhs.gov/xlibrary/assets/dhs_risk_lexicon.pdf

  • Hopkins, Andrew. 2016. “How Much Should Be Spent to Prevent Disaster? A Critique of Consequence Times Probability.” Australian Pipeliner: Official Publication of the Australian Pipelines and Gas Association 165: 70.

  • International Atomic Energy Agency, IAEA. 2017. The International Nuclear and Radiological Event Scale. Accessed January 18, 2017. http://www-ns.iaea.org/tech-areas/emergency/ines.asp

  • International Organization for Standardization, ISO. 2009. Risk Management–Principles and Guidelines. Geneva, Switzerland.

  • Jamil, S. Faiza, and S. Anbahan Ariadurai. 2013. “Prospects of Using Geosynthetic Materials for Disaster Mitigation–A Case Study.” Journal of Engineering and Technology of the Open University of Sri Lanka (JET-OUSL) 1.

  • Lalonde, Carole, and Olivier Boiral. 2012. “Managing Risks Through ISO 31000: A Critical Analysis.” Risk Management 14 (4): 272–300.

  • Leavitt, William M., and John J. Kiefer. 2006. “Infrastructure Interdependency and the Creation of a Normal Disaster: The Case of Hurricane Katrina and the City of New Orleans.” Public Works Management & Policy 10 (4): 306–314.

  • Lee, Kyung Ho, and David V. Rosowsky. 2005. “Fragility Assessment for Roof Sheathing Failure in High Wind Regions.” Engineering Structures 27 (6): 857–868.

  • Levin, Kelly, Benjamin Cashore, Steven Bernstein, and Graeme Auld. 2007. “Playing It Forward: Path Dependency, Progressive Incrementalism, and the ‘Super Wicked’ Problem of Global Climate Change.” In International Studies Association 48th Annual Convention, Vol. 28. UK: Academic Conferences Limited.

  • Lundberg, Russell, and Henry H. Willis. 2016. “Deliberative Risk Ranking to Inform Homeland Security Strategic Planning.” Journal of Homeland Security and Emergency Management 13 (1): 3–33.

  • Marianti, Ruly. 2008. What Is to Be Done with Disasters? A Literature Survey on Disaster Study and Response. No. 22532. East Asian Bureau of Economic Research.

  • Michel-Kerjan, Erwann, Stefan Hochrainer-Stigler, Howard Kunreuther, Joanne Linnerooth-Bayer, Reinhard Mechler, Robert Muir-Wood, Nicola Ranger, Pantea Vaziri, and Michael Young. 2013. “Catastrophe Risk Models for Evaluating Disaster Risk Reduction Investments in Developing Countries.” Risk Analysis 33 (6): 984–999.

  • National Fire Protection Association, NFPA. 1995. Recommended Practice for Disaster Management. Massachusetts, USA: National Fire Protection Association.

  • National Fire Protection Association, NFPA. 2013. NFPA 1600: Standard on Disaster/Emergency Management and Business Continuity Programs. Massachusetts, USA.

  • National Fire Protection Association, NFPA. 2014. “NFPA and United Kingdom’s Fire Protection Association formalize relationship with Memorandum of Understanding.” News & Research. Accessed August 5, 2016. http://www.nfpa.org/news-and-research/news-and-media/press-room/news-releases/2014/nfpa-and-united-kingdoms-fire-protection-association-formalize-relationship-with-mou

  • National Research Council, NRC. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, D.C.: National Academies Press.

  • Ontario. 2009. Emergency Management and Civil Protection Act, R.S.O. 1990, c. E.9. Accessed January 14, 2017. https://www.ontario.ca/laws/statute/90e09

  • Public Safety Canada. 2011. An Emergency Management Framework for Canada: Ministers Responsible for Emergency Management. Second Edition. Ottawa, Canada. Accessed January 01, 2017. https://www.publicsafety.gc.ca/cnt/rsrcs/pblctns/mrgnc-mngmnt-frmwrk/mrgnc-mngmnt-frmwrk-eng.pdf

  • Public Safety Canada. 2012. All Hazards Risk Assessment Methodology Guidelines 2012–2013. Ottawa, ON, Canada. Accessed December 11, 2016. https://www.publicsafety.gc.ca/cnt/rsrcs/pblctns/ll-hzrds-ssssmnt/ll-hzrds-ssssmnt-eng.pdf

  • Purdy, Grant. 2010. “ISO 31000:2009 – Setting a New Standard for Risk Management.” Risk Analysis 30 (6): 881–886.

  • Schultz, Martin T., Ben P. Gouldby, Jonathan D. Simm, and Johannes L. Wibowo. 2010. Beyond the Factor of Safety: Developing Fragility Curves to Characterize System Reliability. No. ERDC-SR-10-1. Vicksburg, MS: Engineer Research and Development Center, Geotechnical and Structures Lab.

  • Smythe, David. 2011. “An Objective Nuclear Accident Magnitude Scale for Quantification of Severe and Catastrophic Events.” Physics Today: Points of View. Accessed January 3, 2017. http://www.physicstoday.org/daily_edition/points_of_view/an_objective_nuclear_accident_magnitude_scale_for_quantification_of_severe_and_catastrophic_events

  • Tierney, Kathleen, and Michel Bruneau. 2007. “Conceptualizing and Measuring Resilience: A Key to Disaster Loss Reduction.” TR News 250.

  • UK Cabinet Office. 2012. National Risk Register of Civil Emergencies. Accessed January 14, 2017. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/211858/CO_NationalRiskRegister_2012_acc.pdf

  • United Nations International Strategy for Disaster Reduction, UNISDR. 2016. UNISDR Terminology, 2009. Accessed November 2, 2016. https://www.unisdr.org/we/inform/terminology

  • Virkar, Yogesh, and Aaron Clauset. 2014. “Power-Law Distributions in Binned Empirical Data.” The Annals of Applied Statistics 8 (1): 89–119.

  • Wheatley, Spencer, Benjamin K. Sovacool, and Didier Sornette. 2016. “Reassessing the Safety of Nuclear Power.” Energy Research & Social Science 15: 96–100.

  • Wilde, Gerald J. S. 2013. “Homeostasis Drives Behavioural Adaptation.” In Behavioural Adaptation and Road Safety: Theory, Evidence and Action, 61–86. Chicago: CRC Press.

  • Zachmann, Karin. 2014. “Risk in Historical Perspective: Concepts, Contexts, and Conjunctions.” In Risk–A Multidisciplinary Introduction, 3–35. Switzerland: Springer International Publishing.

Footnotes

  • 2. A 1654 correspondence between two mathematicians, Blaise Pascal and Pierre de Fermat, is sometimes referred to as the beginning of modern probability theory (Edwards 1983).

  • 3. We note that any event with a non-zero probability is possible within the next 10 years, including events with very high probabilities.

About the article

Published Online: 2019-01-30


Citation Information: Journal of Homeland Security and Emergency Management, Volume 16, Issue 1, 20170006, ISSN (Online) 1547-7355, DOI: https://doi.org/10.1515/jhsem-2017-0006.


©2019 Walter de Gruyter GmbH, Berlin/Boston.
