This article reviews the main insights from selected literature on risk perception, particularly in connection with natural hazards. It draws on numerous case studies of perception and social behavior dealing with floods, droughts, earthquakes, volcanic eruptions, wildfires, and landslides. The review reveals that personal experience of a natural hazard and trust, or lack of trust, in authorities and experts have the most substantial impact on risk perception. Cultural and individual factors such as media coverage, age, gender, education, income, and social status play a lesser role, acting instead as mediators or amplifiers of the main causal connections between experience, trust, perception, and preparedness to take protective actions. When analyzing how experience and trust affect risk perception and the likelihood that individuals will take preparedness action, the review identifies a risk perception paradox: it is commonly assumed that high risk perception will lead to personal preparedness and, in the next step, to risk mitigation behavior, but this is not necessarily true. Individuals with high risk perception may still choose not to prepare themselves in the face of a natural hazard. Based on the results of the review, this article therefore offers three explanations for why this paradox might occur. These findings have implications for future risk governance and communication as well as for the willingness of individuals to invest in risk preparedness or risk mitigation actions.
The Protective Action Decision Model (PADM) is a multistage model that is based on findings from research on people's responses to environmental hazards and disasters. The PADM integrates the processing of information derived from social and environmental cues with messages that social sources transmit through communication channels to those at risk. The PADM identifies three critical predecision processes (reception, attention, and comprehension of warnings, or exposure, attention, and interpretation of environmental/social cues) that precede all further processing. The revised model identifies three core perceptions—threat perceptions, protective action perceptions, and stakeholder perceptions—that form the basis for decisions about how to respond to an imminent or long‐term threat. The outcome of the protective action decision‐making process, together with situational facilitators and impediments, produces a behavioral response. In addition to describing the revised model and the research on which it is based, this article describes three applications (development of risk communication programs, evacuation modeling, and adoption of long‐term hazard adjustments) and identifies some of the research needed to address unresolved issues.
Avoiding dangerous climate change is one of the most urgent social risk issues we face today and understanding related public perceptions is critical to engaging the public with the major societal transformations required to combat climate change. Analyses of public perceptions have indicated that climate change is perceived as distant on a number of different dimensions. However, to date there has been no in‐depth exploration of the psychological distance of climate change. This study uses a nationally representative British sample in order to systematically explore and characterize each of the four theorized dimensions of psychological distance—temporal, social, and geographical distance, and uncertainty—in relation to climate change. We examine how each of these different aspects of psychological distance relate to each other as well as to concerns about climate change and sustainable behavior intentions. Results indicate that climate change is both psychologically distant and proximal in relation to different dimensions. Lower psychological distance was generally associated with higher levels of concern, although perceived impacts on developing countries, as an indicator of social distance, was also significantly related to preparedness to act on climate change. Our findings clearly point to the utility of risk communication techniques designed to reduce psychological distance. However, highlighting the potentially very serious distant impacts of climate change may also be useful in promoting sustainable behavior, even among those already concerned.
Behavioral decision research has demonstrated that judgments and decisions of ordinary people and experts are subject to numerous biases. Decision and risk analysis were designed to improve judgments and decisions and to overcome many of these biases. However, when eliciting model components and parameters from decisionmakers or experts, analysts often face the very biases they are trying to help overcome. When these inputs are biased they can seriously reduce the quality of the model and resulting analysis. Some of these biases are due to faulty cognitive processes; some are due to motivations for preferred analysis outcomes. This article identifies the cognitive and motivational biases that are relevant for decision and risk analysis because they can distort analysis inputs and are difficult to correct. We also review and provide guidance about the existing debiasing techniques to overcome these biases. In addition, we describe some biases that are less relevant because they can be corrected by using logic or decomposing the elicitation task. We conclude the article with an agenda for future research.
Flood hazards are the most common and destructive of all natural disasters. For decades, experts have been examining how flood losses can be mitigated. Just as in other risk domains, the study of risk perception and risk communication has gained increasing interest in flood risk management. Given this growth of research, a review of the state of the art in this domain is warranted. The review comprises 57 empirically based peer‐reviewed articles on flood risk perception and communication from the Web of Science and Scopus databases. The characteristics of these articles are listed in a comprehensive table, presenting research design, research variables, and key findings. The review shows that the majority of studies are exploratory in nature and have not applied any of the theoretical frameworks available in social science research. Consequently, there is little methodological standardization in measuring and analyzing people's flood risk perceptions and their adaptive behaviors. This heterogeneity makes it difficult to compare results among studies. It is also shown that theoretical and empirical studies on flood risk communication are nearly nonexistent. The article concludes with a summary of methodological issues in the fields of flood risk perception and flood risk communication and proposes an agenda for future research.
The United Nations Office for Disaster Risk Reduction reported that the 2011 natural disasters, including the earthquake and tsunami that struck Japan, resulted in $366 billion in direct damages and 29,782 fatalities worldwide. Storms and floods accounted for up to 70% of the 302 natural disasters worldwide in 2011, with earthquakes producing the greatest number of fatalities. Average annual losses in the United States amount to about $55 billion. Enhancing community and system resilience could lead to massive savings through risk reduction and expeditious recovery. The rational management of such reduction and recovery is facilitated by an appropriate definition of resilience and associated metrics. In this article, a resilience definition is provided that meets a set of requirements with clear relationships to the metrics of the relevant abstract notions of reliability and risk. Those metrics also meet logically consistent requirements drawn from measure theory, and provide a sound basis for the development of effective decision‐making tools for multihazard environments. Improving the resilience of a system to meet target levels requires the examination of system enhancement alternatives in economic terms, within a decision‐making framework. Relevant decision analysis methods would typically require the examination of resilience based on its valuation by society at large. The article provides methods for valuation and benefit‐cost analysis based on concepts from risk analysis and management.
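The economic comparison of resilience enhancements described above typically reduces to discounting a stream of avoided expected annual losses against an upfront investment. The following is a minimal sketch of that benefit-cost logic; the cost, avoided-loss, discount-rate, and horizon values are illustrative assumptions, not figures from the article.

```python
def present_value(annual_benefit, rate, years):
    """Discounted present value of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

def benefit_cost_ratio(avoided_annual_loss, upfront_cost, rate=0.03, years=50):
    """Benefit-cost ratio of a resilience enhancement: benefits are the
    avoided expected annual losses (risk reduction plus faster recovery)
    over the planning horizon, discounted to present value."""
    return present_value(avoided_annual_loss, rate, years) / upfront_cost

# Illustrative enhancement: costs 100 (in millions), avoids 5 per year in
# expected losses, evaluated at a 3% discount rate over 50 years.
bcr = benefit_cost_ratio(avoided_annual_loss=5.0, upfront_cost=100.0)
```

A ratio above 1 indicates the enhancement is justified at the assumed discount rate; in practice the valuation would rest on society-wide loss estimates rather than a single point value.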
Ground‐level ozone (O3) and fine particulate matter (PM2.5) are associated with increased risk of mortality. We quantify the burden of modeled 2005 concentrations of O3 and PM2.5 on health in the United States. We use the photochemical Community Multiscale Air Quality (CMAQ) model in conjunction with ambient monitored data to create fused surfaces of summer season average 8‐hour ozone and annual mean PM2.5 levels at a 12 km grid resolution across the continental United States. Employing spatially resolved demographic and concentration data, we assess the spatial and age distribution of air‐pollution‐related mortality and morbidity. For both PM2.5 and O3 we also estimate: the percentage of total deaths due to each pollutant; the reduction in life years and life expectancy; and the deaths avoided under hypothetical air quality improvements. Using PM2.5 and O3 mortality risk coefficients drawn from the long‐term American Cancer Society (ACS) cohort study and the National Morbidity, Mortality, and Air Pollution Study (NMMAPS), respectively, we estimate 130,000 PM2.5‐related deaths and 4,700 ozone‐related deaths to result from 2005 air quality levels. Among populations aged 65–99, we estimate nearly 1.1 million life years lost from PM2.5 exposure and approximately 36,000 life years lost from ozone exposure. Among the 10 most populous counties, the percentage of deaths attributable to PM2.5 and ozone ranges from 3.5% in San Jose to 10% in Los Angeles. These results show that despite significant improvements in air quality in recent decades, recent levels of PM2.5 and ozone still pose a nontrivial risk to public health.
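Burden assessments of this kind generally rest on a log-linear concentration-response function that converts a mortality risk coefficient and a concentration increment into attributable deaths. The sketch below shows that standard form only; the baseline death count, coefficient, and concentration change are illustrative assumptions, not the article's inputs.

```python
import math

def attributable_deaths(baseline_deaths, beta, delta_c):
    """Log-linear health impact function: the attributable fraction of
    baseline deaths for a concentration increment delta_c (ug/m3) given a
    log-linear mortality risk coefficient beta (per ug/m3)."""
    attributable_fraction = 1.0 - math.exp(-beta * delta_c)
    return baseline_deaths * attributable_fraction

# Illustrative inputs: 1,000,000 baseline deaths in the exposed population,
# beta = 0.0058 per ug/m3 (roughly a 6% mortality increase per 10 ug/m3 of
# PM2.5, a commonly cited cohort-study magnitude), and a 10 ug/m3 increment.
deaths = attributable_deaths(1_000_000, 0.0058, 10.0)
```

Spatially resolved estimates like those in the study apply this function grid cell by grid cell with local baseline rates and modeled concentrations, then sum.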
Recent natural and man‐made catastrophes, such as the Fukushima nuclear power plant, flooding caused by Hurricane Katrina, the Deepwater Horizon oil spill, the Haiti earthquake, and the mortgage derivatives crisis, have renewed interest in the concept of resilience , especially as it relates to complex systems vulnerable to multiple or cascading failures. Although the meaning of resilience is contested in different contexts, in general resilience is understood to mean the capacity to adapt to changing conditions without catastrophic loss of form or function. In the context of engineering systems, this has sometimes been interpreted as the probability that system conditions might exceed an irrevocable tipping point. However, we argue that this approach improperly conflates resilience and risk perspectives by expressing resilience exclusively in risk terms. In contrast, we describe resilience as an emergent property of what an engineering system does , rather than a static property the system has . Therefore, resilience cannot be measured at the systems scale solely from examination of component parts. Instead, resilience is better understood as the outcome of a recursive process that includes: sensing, anticipation, learning, and adaptation. In this approach, resilience analysis can be understood as differentiable from, but complementary to, risk analysis, with important implications for the adaptive management of complex, coupled engineering systems. Management of the 2011 flooding in the Mississippi River Basin is discussed as an example of the successes and challenges of resilience‐based management of complex natural systems that have been extensively altered by engineered structures.
In recent years, shale gas formations have become economically viable through the use of horizontal drilling and hydraulic fracturing. These techniques carry potential environmental risk due to their high water use and substantial risk for water pollution. Using probability bounds analysis, we assessed the likelihood of water contamination from natural gas extraction in the Marcellus Shale. Probability bounds analysis is well suited when data are sparse and parameters highly uncertain. The study model identified five pathways of water contamination: transportation spills, well casing leaks, leaks through fractured rock, drilling site discharge, and wastewater disposal. Probability boxes were generated for each pathway. The potential contamination risk and epistemic uncertainty associated with hydraulic fracturing wastewater disposal was several orders of magnitude larger than the other pathways. Even in a best‐case scenario, it was very likely that an individual well would release at least 200 m³ of contaminated fluids. Because the total number of wells in the Marcellus Shale region could range into the tens of thousands, this substantial potential risk suggested that additional steps be taken to reduce the potential for contaminated fluid leaks. To reduce the considerable epistemic uncertainty, more data should be collected on the ability of industrial and municipal wastewater treatment facilities to remove contaminants from used hydraulic fracturing fluid.
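The core idea of bounding analysis under sparse data can be illustrated without full probability boxes: if each pathway's release probability and release volume are known only as intervals, expected releases per well can still be bounded by interval arithmetic, since lower and upper bounds on expectations add pathway-wise. The pathway names below follow the abstract, but every number is a made-up placeholder, not the study's data.

```python
# Hypothetical per-well inputs: (probability bounds, release volume bounds in m3).
pathways = {
    "transportation spill": ((0.02, 0.10), (0.5, 5.0)),
    "well casing leak":     ((0.01, 0.05), (1.0, 10.0)),
    "fracture migration":   ((0.001, 0.01), (10.0, 100.0)),
    "site discharge":       ((0.02, 0.08), (1.0, 20.0)),
    "wastewater disposal":  ((0.10, 0.50), (100.0, 2000.0)),
}

def expected_release_bounds(pathways):
    """Bound the expected contaminated-fluid release per well: expectations
    are additive, so the pathway-wise lower/upper products sum directly."""
    lo = sum(p_lo * v_lo for (p_lo, _), (v_lo, _) in pathways.values())
    hi = sum(p_hi * v_hi for (_, p_hi), (_, v_hi) in pathways.values())
    return lo, hi

lo, hi = expected_release_bounds(pathways)
```

Even with these placeholder intervals, a single wide pathway (here, wastewater disposal) dominates both the bounds and their spread, which is the qualitative pattern the probability-bounds study reports.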
Dependence assessment among human errors in human reliability analysis (HRA) is an important issue. Many of the dependence assessment methods in HRA rely heavily on expert opinion and are thus subjective, which may sometimes cause inconsistency. In this article, we propose a computational model based on the Dempster‐Shafer evidence theory (DSET) and the analytic hierarchy process (AHP) method to handle dependence in HRA. First, dependence influencing factors among human tasks are identified and the weights of the factors are determined by experts using the AHP method. Second, judgment on each factor is given by the analyst referring to anchors and linguistic labels. Third, the judgments are represented as basic belief assignments (BBAs) and are integrated into a fused BBA by weighted average combination in DSET. Finally, the conditional human error probability (CHEP) is calculated based on the fused BBA. The proposed model can deal with ambiguity and the degree of confidence in the judgments, and is able to reduce the subjectivity and improve the consistency in the evaluation process.
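The fusion step described above can be sketched compactly: one BBA per influencing factor, combined into a single mass function by weighted averaging with the AHP-derived factor weights. The dependence-level frame, factor names, masses, and weights below are all illustrative assumptions; the article's own anchors and factor set would replace them in practice.

```python
# Hypothetical frame of discernment: THERP-style dependence levels
# (zero, low, moderate, high, complete dependence).
LEVELS = ("ZD", "LD", "MD", "HD", "CD")

def weighted_average_bba(bbas, weights):
    """Fuse one basic belief assignment per influencing factor into a
    single BBA by weighted averaging of the mass functions."""
    assert abs(sum(weights) - 1.0) < 1e-9, "factor weights must sum to 1"
    fused = {level: 0.0 for level in LEVELS}
    for bba, w in zip(bbas, weights):
        for level, mass in bba.items():
            fused[level] += w * mass
    return fused

# Two illustrative factors with AHP-derived weights 0.7 / 0.3: the analyst's
# judgment on each factor, expressed as masses over the dependence levels.
m_time = {"ZD": 0.1, "LD": 0.6, "MD": 0.3, "HD": 0.0, "CD": 0.0}
m_cue  = {"ZD": 0.0, "LD": 0.2, "MD": 0.5, "HD": 0.3, "CD": 0.0}
fused = weighted_average_bba([m_time, m_cue], [0.7, 0.3])
```

The CHEP would then be read off the fused masses, e.g. via a mapping from dependence levels to conditional error probabilities; that last mapping is method-specific and omitted here.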
Despite the prognoses of the effects of global warming (e.g., rising sea levels, increasing river discharges), few international studies have addressed how flood preparedness should be stimulated among private citizens. This article aims to predict Dutch citizens’ flood preparedness intentions by testing a path model, including previous flood hazard experiences, trust in public flood protection, and flood risk perceptions (both affective and cognitive components). Data were collected through questionnaire surveys in two coastal communities ( n = 169, n = 244) and in one river area community ( n = 658). Causal relations were tested by means of structural equation modeling (SEM). Overall, the results indicate that both cognitive and affective mechanisms influence citizens’ preparedness intentions. First, a higher level of trust reduces citizens’ perceptions of flood likelihood, which in turn hampers their flood preparedness intentions (cognitive route). Second, trust also lessens the amount of dread evoked by flood risk, which in turn impedes flood preparedness intentions (affective route). Moreover, the affective route showed that levels of dread were especially influenced by citizens’ negative and positive emotions related to their previous flood hazard experiences. Negative emotions most often reflected fear and powerlessness, while positive emotions most frequently reflected feelings of solidarity. The results are consistent with the affect heuristic and the historical context of Dutch flood risk management. The great challenge for flood risk management is the accommodation of both cognitive and affective mechanisms in risk communications, especially when most people lack an emotional basis stemming from previous flood hazard events.
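The cognitive route in such a path model (trust lowers perceived flood likelihood, which in turn lowers preparedness intention) follows the standard product-of-coefficients logic for indirect effects. The simulation below is a minimal numerical sketch of that logic only; the coefficients are invented for illustration and are not the study's SEM estimates.

```python
import random

random.seed(1)

# Simulate the cognitive route: trust reduces perceived flood likelihood,
# and perceived likelihood raises preparedness intention, so trust has a
# negative indirect effect on intention (all coefficients illustrative).
n = 5000
trust = [random.gauss(0, 1) for _ in range(n)]
likelihood = [-0.5 * t + random.gauss(0, 1) for t in trust]
intention = [0.6 * m + random.gauss(0, 1) for m in likelihood]

def slope(x, y):
    """OLS slope of y regressed on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Product-of-coefficients estimate of the indirect (mediated) effect.
a = slope(trust, likelihood)      # recovers roughly -0.5
b = slope(likelihood, intention)  # recovers roughly +0.6
indirect = a * b                  # negative: trust dampens intentions
```

A full SEM additionally estimates the affective route and direct paths simultaneously and with fit statistics; this sketch shows only why a negative path times a positive path yields the dampening effect reported.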
This study compares two widely used approaches for robustness analysis of decision problems: the info‐gap method originally developed by Ben‐Haim and the robust decision making (RDM) approach originally developed by Lempert, Popper, and Bankes. The study uses each approach to evaluate alternative paths for climate‐altering greenhouse gas emissions given the potential for nonlinear threshold responses in the climate system, significant uncertainty about such a threshold response and a variety of other key parameters, as well as the ability to learn about any threshold responses over time. Info‐gap and RDM share many similarities. Both represent uncertainty as sets of multiple plausible futures, and both seek to identify robust strategies whose performance is insensitive to uncertainties. Yet they also exhibit important differences, as they arrange their analyses in different orders, treat losses and gains in different ways, and take different approaches to imprecise probabilistic information. The study finds that the two approaches reach similar but not identical policy recommendations and that their differing attributes raise important questions about their appropriate roles in decision support applications. The comparison not only improves understanding of these specific methods, it also suggests some broader insights into robustness approaches and a framework for comparing them.
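Info-gap's central quantity, the robustness of a decision, is the largest uncertainty horizon within which worst-case performance still meets a critical requirement. The sketch below implements that definition for a one-dimensional uncertain parameter; the payoff function, nominal value, and requirement are illustrative assumptions, not the climate model of the study.

```python
def robustness(decision_payoff, u_nominal, r_crit, alpha_max=5.0, steps=500):
    """Info-gap robustness: the largest horizon alpha such that the
    worst-case payoff over |u - u_nominal| <= alpha still meets r_crit.
    Assumes worst-case payoff declines as alpha grows (true below, since
    the payoff is concave with its peak at the nominal value)."""
    best_alpha = 0.0
    for i in range(steps + 1):
        alpha = alpha_max * i / steps
        # For a concave payoff the worst case lies at the interval endpoints.
        worst = min(decision_payoff(u_nominal - alpha),
                    decision_payoff(u_nominal + alpha))
        if worst >= r_crit:
            best_alpha = alpha
        else:
            break
    return best_alpha

# Illustrative payoff that erodes as the uncertain parameter drifts from 0.
payoff = lambda u: 10.0 - abs(u)
alpha_hat = robustness(payoff, u_nominal=0.0, r_crit=7.0)
```

RDM inverts this arrangement: rather than fixing a requirement and maximizing the horizon, it stress-tests candidate strategies over an ensemble of futures and then characterizes the conditions under which each fails, which is one of the ordering differences the comparison highlights.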
Two images, black swans and perfect storms, have struck the public's imagination and are used, at times indiscriminately, to describe the unthinkable or the extremely unlikely. These metaphors have been used as excuses to wait for an accident to happen before taking risk management measures, both in industry and government. These two images represent two distinct types of uncertainties (epistemic and aleatory). Existing statistics are often insufficient to support risk management because the sample may be too small and the system may have changed. Rationality as defined by the von Neumann axioms leads to a combination of both types of uncertainties into a single probability measure (Bayesian probability) and accounts only for risk aversion. Yet, the decisionmaker may also want to be ambiguity averse. This article presents an engineering risk analysis perspective on the problem, using all available information in support of proactive risk management decisions and considering both types of uncertainty. These measures involve monitoring of signals, precursors, and near-misses, as well as reinforcement of the system and a thoughtful response strategy. They also involve careful examination of organizational factors, such as the incentive system, that shape human performance and affect the risk of errors. In all cases, including rare events, risk quantification does not allow prediction of accidents and catastrophes; instead, it is meant to support effective risk management rather than simply reacting to the latest events and headlines.
Major nuclear accidents, such as the recent accident in Fukushima, Japan, have been shown to decrease the public's acceptance of nuclear power. However, little is known about how a serious accident affects people's acceptance of nuclear power and the determinants of acceptance. We conducted a longitudinal study ( N = 790) in Switzerland: one survey was done five months before and one directly after the accident in Fukushima. We assessed acceptance, perceived risks, perceived benefits, and trust related to nuclear power stations. In our model, we assumed that both benefit and risk perceptions determine acceptance of nuclear power. We further hypothesized that trust influences benefit and risk perceptions and that trust before a disaster relates to trust after a disaster. Results showed that acceptance and perceptions of nuclear power, as well as trust, were more negative after the accident. In our model, perceived benefits and risks determined the acceptance of nuclear power stations both before and after Fukushima. Trust had strong effects on perceived benefits and risks, at both times. People's trust before Fukushima strongly influenced their trust after the accident. In addition, perceived benefits before Fukushima correlated with perceived benefits after the accident. Thus, the nuclear accident did not seem to have changed the relations between the determinants of acceptance. Even after a severe accident, the public may still consider the benefits as relevant, and trust remains important for determining their risk and benefit perceptions. A discussion of the benefits of nuclear power seems most likely to affect the public's acceptance of nuclear power, even after a nuclear accident.
Risk matrices—tables mapping “frequency” and “severity” ratings to corresponding risk priority levels—are popular in applications as diverse as terrorism risk analysis, highway construction project management, office building risk analysis, climate change risk management, and enterprise risk management (ERM). National and international standards (e.g., Military Standard 882C and AS/NZS 4360:1999) have stimulated adoption of risk matrices by many organizations and risk consultants. However, little research rigorously validates their performance in actually improving risk management decisions. This article examines some mathematical properties of risk matrices and shows that they have the following limitations. (a) Poor Resolution . Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards. They can assign identical ratings to quantitatively very different risks (“range compression”). (b) Errors . Risk matrices can mistakenly assign higher qualitative ratings to quantitatively smaller risks. For risks with negatively correlated frequencies and severities, they can be “worse than useless,” leading to worse‐than‐random decisions. (c) Suboptimal Resource Allocation . Effective allocation of resources to risk‐reducing countermeasures cannot be based on the categories provided by risk matrices. (d) Ambiguous Inputs and Outputs . Categorizations of severity cannot be made objectively for uncertain consequences. Inputs to risk matrices (e.g., frequency and severity categorizations) and resulting outputs (i.e., risk ratings) require subjective interpretation, and different users may obtain opposite ratings of the same quantitative risks. These limitations suggest that risk matrices should be used with caution, and only with careful explanations of embedded judgments.
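The "range compression" limitation is easy to demonstrate numerically: two hazards can land in the same matrix cell while differing by orders of magnitude in quantitative risk (frequency times severity). The cut points and hazard values below are invented for illustration; any monotone categorization exhibits the same effect.

```python
def matrix_cell(frequency, severity, f_cuts=(1e-3, 1e-1), s_cuts=(1e4, 1e6)):
    """Assign a 3x3 risk-matrix cell from quantitative frequency (events/yr)
    and severity (loss per event), using illustrative category cut points.
    Returns (f_category, s_category) with 0 = low, 1 = medium, 2 = high."""
    f_cat = sum(frequency > c for c in f_cuts)
    s_cat = sum(severity > c for c in s_cuts)
    return f_cat, s_cat

# Two hazards that land in the same medium/medium cell...
a = (2e-3, 2e4)   # expected loss: 2e-3 * 2e4 = 40 per year
b = (9e-2, 9e5)   # expected loss: 9e-2 * 9e5 = 81,000 per year
same_cell = matrix_cell(*a) == matrix_cell(*b)

# ...yet differ in quantitative risk by a factor of over 2,000.
ratio = (b[0] * b[1]) / (a[0] * a[1])
```

Since the matrix assigns both hazards identical priority, any resource allocation driven by cell ratings alone cannot distinguish them, which is the article's suboptimal-allocation point in miniature.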
Recently, considerable attention has been paid to a systems‐based approach to risk, vulnerability, and resilience analysis. It is argued that risk, vulnerability, and resilience are inherently and fundamentally functions of the states of the system and its environment. Vulnerability is defined as the manifestation of the inherent states of the system that can be subjected to a natural hazard or be exploited to adversely affect that system, whereas resilience is defined as the ability of the system to withstand a major disruption within acceptable degradation parameters and to recover within an acceptable time and at acceptable composite costs and risks. Risk, on the other hand, is probability based, defined by the probability and severity of adverse effects (i.e., the consequences). In this article, we look more closely into this approach. It is observed that the key concepts are inconsistent in the sense that the uncertainty (probability) dimension is included for the risk definition but not for vulnerability and resilience. In the article, we question the rationale for this inconsistency. The suggested approach is compared with an alternative framework that provides a logically defined structure for risk, vulnerability, and resilience, where all three concepts incorporate the uncertainty (probability) dimension.
Phthalic acid esters (phthalates) are used as plasticizers in numerous consumer products, commodities, and building materials. Consequently, phthalates are found in human residential and occupational environments in high concentrations, both in air and in dust. Phthalates are also ubiquitous food and environmental contaminants. An increasing number of studies sampling human urine reveal the ubiquitous phthalate exposure of consumers in industrialized countries. At the same time, recent toxicological studies have demonstrated the potential of the most important phthalates to disturb the human hormonal system and human sexual development and reproduction. Additionally, phthalates are suspected to trigger asthma and dermal diseases in children. To identify the important sources of phthalate exposure for European consumers, a scenario‐based approach is applied here. Scenarios representing realistic exposure situations are generated to calculate the age‐specific range in daily consumer exposure to eight phthalates. The scenarios demonstrate that exposure of infant and adult consumers is caused by different sources in many cases. Infant consumers experience significantly higher daily exposure to phthalates in relation to their body weight than older consumers. The use of consumer products and different indoor sources dominate the exposure to dimethyl, diethyl, benzylbutyl, diisononyl, and diisodecyl phthalates, whereas food has a major influence on the exposure to diisobutyl, dibutyl, and di‐2‐ethylhexyl phthalates. The scenario‐based approach chosen in the present study provides a link between the knowledge on emission sources of phthalates and the concentrations of phthalate metabolites found in human urine.
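Scenario-based exposure estimates of this kind aggregate, per age group, the dose from each source (concentration times daily intake of the medium) and normalize by body weight, which is why infants, with small body weights and high relative intake, come out highest. The concentrations, intake rates, and body weights below are placeholder values for illustration, not the study's scenario data.

```python
def daily_exposure(sources, body_weight_kg):
    """Aggregate daily phthalate dose (ug per kg body weight per day) over
    exposure sources, each given as (concentration in ug/g of medium,
    g of medium ingested per day)."""
    return sum(conc * intake for conc, intake in sources) / body_weight_kg

# Illustrative scenario for one phthalate: food at 0.2 ug/g plus indoor
# dust ingestion at 500 ug/g, with age-specific intakes and body weights.
infant_dose = daily_exposure([(0.2, 800.0), (500.0, 0.1)], body_weight_kg=8.0)
adult_dose = daily_exposure([(0.2, 2000.0), (500.0, 0.02)], body_weight_kg=70.0)
```

Even though the adult ingests more food in absolute terms, the body-weight normalization and the higher relative dust intake give the infant the larger dose per kilogram, mirroring the pattern the scenarios report.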
Despite the growing scientific consensus about the risks of global warming and climate change, the mass media frequently portray the subject as one of great scientific controversy and debate. And yet previous studies of the mass public's subjective assessments of the risks of global warming and climate change have not sufficiently examined public informedness, public confidence in climate scientists, and the role of personal efficacy in affecting global warming outcomes. By examining the results of a survey on an original and representative sample of Americans, we find that these three forces—informedness, confidence in scientists, and personal efficacy—are related in interesting and unexpected ways, and exert significant influence on risk assessments of global warming and climate change. In particular, more informed respondents feel less personally responsible for global warming and also show less concern about it. We also find that confidence in scientists has unexpected effects: respondents with high confidence in scientists feel less responsible for global warming and likewise show less concern about it. These results have substantial implications for the interaction between scientists and the public in general, and for the public discussion of global warming and climate change in particular.