Risk-based inspection (RBI) planning for engineering systems is considered. Due to difficulties in formulating computationally tractable approaches to RBI for systems, most procedures hitherto have focused exclusively on individual components or have considered system effects in only a very simplified manner. Several studies have pointed to the importance of taking system effects into account in inspection planning. Especially for large engineering systems, it is not possible to identify cost-optimal solutions if the various types of functional and statistical dependencies in the systems are not explicitly addressed. Based on new developments in RBI for individual components, the present paper presents an integral approach for the consideration of entire systems in inspection planning. The various aspects of dependencies in systems are presented and discussed, followed by an introduction to the decision problems encountered in inspection and maintenance planning of structural systems. It is then shown how these decision problems can be consistently represented by decision-theoretic models. The presentation of a practical procedure for the inspection planning of steel structures subject to fatigue concludes the paper.
► AK-MCS is a reliability method based on Kriging and Monte Carlo Simulation. ► AK-MCS is an active learning classifier. ► Moderate computation time to accurately estimate the failure probability. ► AK-MCS performs successfully on complicated cases, including high dimensionality. An important challenge in structural reliability is to keep the number of calls to the numerical models to a minimum. Engineering problems involve increasingly complex computer codes, and evaluating the probability of failure may require very time-consuming computations. Metamodels are used to reduce these computation times. To assess reliability, the most popular approach remains the numerous variants of response surfaces. Polynomial Chaos and Support Vector Machine are also possibilities and have gained consideration among researchers in recent decades. More recently, however, Kriging, which originated in geostatistics, has emerged in reliability analysis. Widespread in optimisation, Kriging has only just started to appear in uncertainty propagation and reliability studies. It presents interesting characteristics such as exact interpolation and a local index of uncertainty on the prediction, which can be used in active learning methods. The aim of this paper is to propose an iterative approach based on Monte Carlo Simulation and a Kriging metamodel to assess the reliability of structures in a more efficient way. The method is called AK-MCS, for Active learning reliability method combining Kriging and Monte Carlo Simulation. It is shown to be very efficient, as the probability of failure obtained with AK-MCS is very accurate while requiring only a small number of calls to the performance function. Several examples from the literature are analysed to illustrate the methodology and to prove its efficiency, particularly for problems dealing with high non-linearity, non-differentiability, non-convex and non-connex domains of failure, and high dimensionality.
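The active-learning loop behind AK-MCS can be sketched in very few lines. Below is a minimal illustration on a hypothetical one-dimensional problem (g(x) = 1.5 − x with X ~ N(0, 1), so pf = P(X > 1.5) ≈ 0.0668), using a zero-mean Gaussian-kernel Kriging predictor, the U learning function, and the usual min U ≥ 2 stopping rule; the length-scale, nugget, population size, and initial design are illustrative choices, not prescriptions from the paper.

```python
import math
import random

def inverse(A):
    """Gauss-Jordan inverse with partial pivoting (small dense matrices only)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        for i in range(n):
            if i != k and M[i][k] != 0.0:
                f = M[i][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [row[n:] for row in M]

def kriging(xs, ys, ell=1.0, nug=1e-6):
    """Zero-mean Gaussian-kernel Kriging: returns a predictor x -> (mean, std)."""
    k = lambda a, b: math.exp(-0.5 * ((a - b) / ell) ** 2)
    K = [[k(a, b) + (nug if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    Ki = inverse(K)
    w = [sum(Ki[i][j] * ys[j] for j in range(len(ys))) for i in range(len(ys))]

    def predict(x):
        kv = [k(a, x) for a in xs]
        mu = sum(wi * kj for wi, kj in zip(w, kv))
        t = [sum(Ki[i][j] * kv[j] for j in range(len(kv))) for i in range(len(kv))]
        var = 1.0 + nug - sum(ti * kj for ti, kj in zip(t, kv))
        return mu, math.sqrt(max(var, 1e-12))

    return predict

g = lambda x: 1.5 - x               # performance function; failure when g(x) <= 0
random.seed(1)
population = [random.gauss(0.0, 1.0) for _ in range(1000)]   # Monte Carlo population
design = [-2.0, -1.0, 0.0, 1.0, 2.0]                         # initial design of experiments
values = [g(x) for x in design]
for _ in range(12):                                          # enrichment iterations
    predict = kriging(design, values)
    U = [abs(mu) / sd for mu, sd in map(predict, population)]  # learning function |mu|/sigma
    if min(U) >= 2.0:                                        # classification trusted: stop
        break
    best = population[min(range(len(U)), key=U.__getitem__)]
    design.append(best)                                      # one extra call to g only
    values.append(g(best))
predict = kriging(design, values)
pf = sum(predict(x)[0] < 0.0 for x in population) / len(population)
```

The failure probability is finally read off the metamodel sign over the whole population, so the number of calls to g stays equal to the design size.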
The sources and characters of uncertainties in engineering modeling for risk and reliability analyses are discussed. While many sources of uncertainty may exist, they are generally categorized as either aleatory or epistemic. Uncertainties are characterized as epistemic, if the modeler sees a possibility to reduce them by gathering more data or by refining models. Uncertainties are categorized as aleatory if the modeler does not foresee the possibility of reducing them. From a pragmatic standpoint, it is useful to thus categorize the uncertainties within a model, since it then becomes clear as to which uncertainties have the potential of being reduced. More importantly, epistemic uncertainties may introduce dependence among random events, which may not be properly noted if the character of uncertainties is not correctly modeled. Influences of the two types of uncertainties in reliability assessment, codified design, performance-based engineering and risk-based decision-making are discussed. Two simple examples demonstrate the influence of statistical dependence arising from epistemic uncertainties on systems and time-variant reliability problems.
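The dependence-inducing role of epistemic uncertainty can be illustrated with a toy two-component parallel system whose capacities share a common epistemic bias term; all numbers below are invented for illustration. When the bias is shared (one realization applied to both components) rather than sampled independently, the component failures become correlated and the system failure probability rises well above the independent-case value.

```python
import random

random.seed(2)
N = 100_000
demand = 2.0                 # component i fails when its capacity falls below this
sig_b, sig_e = 0.7, 0.7      # epistemic (bias) and aleatory (noise) standard deviations

def system_fails(shared_bias):
    b = random.gauss(0.0, sig_b)          # one epistemic realization per system
    r1 = (b if shared_bias else random.gauss(0.0, sig_b)) + random.gauss(0.0, sig_e) + 3.0
    r2 = (b if shared_bias else random.gauss(0.0, sig_b)) + random.gauss(0.0, sig_e) + 3.0
    return r1 < demand and r2 < demand    # parallel system: fails only if both fail

p_shared = sum(system_fails(True) for _ in range(N)) / N
p_indep = sum(system_fails(False) for _ in range(N)) / N
```

With a shared bias the capacity correlation is 0.5 here, and the joint failure probability is roughly 2.5 times the value obtained by (incorrectly) treating the epistemic term as independent between components.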
► A new multi-stage framework to analyze urban infrastructure system resilience under multiple hazards is developed. ► The effects of different strategies for resilience improvement are compared using power grids. ► Under limited resources, recovery sequences play a crucial role in resilience improvement. ► Increases of 0.034% in power grid expected annual resilience may save millions of dollars in operations and maintenance per year. ► The expected annual resilience is more effectively improved by managing random hazards than hurricane hazards. This paper proposes a new multi-stage framework to analyze infrastructure resilience. For each stage, a series of resilience-based improvement strategies are highlighted and appropriate correlates of resilience identified, to then be combined for establishing an expected annual resilience metric adequate for both single hazards and concurrent multiple hazard types. Taking the power transmission grid in Harris County, Texas, USA, as a case study, this paper compares an original power grid model with several hypothetical resilience-improved models to quantify their effectiveness at different stages of their response evolution to random hazards and hurricane hazards. Results show that the expected annual resilience is mainly compromised by random hazards due to their higher frequency of occurrence relative to hurricane hazards. In addition, under limited resources, recovery sequences play a crucial role in resilience improvement, while under sufficient availability of resources, deploying redundancy, hardening critical components and ensuring rapid recovery are all effective responses regardless of their ordering. The expected annual resilience of the power grid with all three stage improvements increases 0.034% compared to the original grid. Although the improvement is small in absolute magnitude due to the high reliability of real power grids, it can still save millions of dollars per year as assessed by energy experts. 
This framework can provide insights to design, maintain, and retrofit resilient infrastructure systems in practice.
This paper studies the reliability of infinite slopes in the presence of spatially variable shear strength parameters that increase linearly with depth. The mean trend of the shear strength parameters increasing with depth is highlighted. The spatial variability in the undrained shear strength and the friction angle is modeled using random field theory. Infinite slope examples are presented to investigate the effect of spatial variability on the depth of the critical slip line and the probability of failure. The results indicate that the mean trend of the shear strength parameters has a significant influence on clay slope reliability. The probability of failure will be overestimated if a linearly increasing trend underlying the shear strength parameters is ignored. The possibility of critical slip lines occurring at the bottom of the slope decreases considerably when the mean trend of undrained shear strength is considered. The linearly increasing mean trend of the friction angle has a considerable effect on the distribution of the critical failure depths of sandy slopes. The most likely critical slip line only lies at the bottom of the sandy slope under the special case of a constant mean trend.
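The overestimation effect can be shown with a crude Monte Carlo sketch for a hypothetical undrained infinite slope: a depth-increasing mean strength s_u(z) = 20 + 8z kPa is compared against its depth-averaged constant value, with all spatial fluctuation collapsed into a single multiplicative lognormal factor. This simplification illustrates only the pf overestimation, not the random-field critical-depth results, and every parameter value is assumed for illustration.

```python
import math
import random

random.seed(3)
gamma, beta, H = 19.0, math.radians(30.0), 5.0   # unit weight (kN/m3), slope angle, depth (m)
tau = lambda z: gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
depths = [0.5 + 0.25 * i for i in range(19)]                  # candidate slip depths, 0.5-5.0 m

def prob_failure(mean_su, n=20_000):
    """P(min FS over depth < 1), with one lognormal variability factor per trial."""
    fails = 0
    for _ in range(n):
        eps = random.lognormvariate(0.0, 0.2)                 # COV of about 20 %
        fs_min = min(mean_su(z) * eps / tau(z) for z in depths)
        fails += fs_min < 1.0
    return fails / n

su_trend = lambda z: 20.0 + 8.0 * z            # mean strength increasing with depth (kPa)
su_const = lambda z: 20.0 + 8.0 * H / 2.0      # same depth-averaged mean, trend ignored
pf_trend = prob_failure(su_trend)
pf_const = prob_failure(su_const)
```

Because the constant-mean model understates the strength near the bottom of the layer, where the driving stress is largest, the computed probability of failure is dramatically larger than under the linearly increasing trend.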
This paper presents a survey on the development and use of Artificial Neural Network (ANN) models in structural reliability analysis. The survey identifies the different types of ANNs, the methods of structural reliability assessment that are typically used, the techniques proposed for ANN training set improvement and also some applications of ANN approximations to structural design and optimization problems. ANN models are then used in the reliability analysis of a ship stiffened panel subjected to uniaxial compression loads induced by hull girder vertical bending moment, for which the collapse strength is obtained by means of nonlinear finite element analysis (FEA). The approaches adopted combine the use of adaptive ANN models to directly approximate the limit state function with Monte Carlo simulation (MCS), first order reliability methods (FORM), and MCS with importance sampling (IS) for reliability assessment. A comprehensive comparison of the predictions of the different reliability methods with ANN-based LSFs and classical LSF evaluation linked to the FEA is provided.
Electric power systems are critical to economic prosperity, national security, public health and safety. However, in hurricane-prone areas, a severe storm may simultaneously cause extensive component failures in a power system and lead to cascading failures within it and across other power-dependent utility systems. Hence, the hurricane resilience of power systems is crucial to ensure their rapid recovery and support the needs of the population in disaster areas. This paper introduces a probabilistic modeling approach for quantifying the hurricane resilience of contemporary electric power systems. This approach includes a hurricane hazard model, component fragility models, a power system performance model, and a system restoration model. Together, these four coupled models enable quantification of hurricane resilience and estimation of economic losses. Taking as an example the power system in Harris County, Texas, USA, along with real outage and restoration data after Hurricane Ike in 2008, the proposed resilience assessment model is calibrated and verified. In addition, several dimensions of resilience as well as the effectiveness of alternative strategies for resilience improvement are simulated and analyzed. Results show that among technical, organizational and social dimensions of resilience, the organizational resilience is the highest with a value of 99.964% (3.445 in a proposed logarithmic scale) while the social resilience is the lowest with a value of 99.760% (2.620 in the logarithmic scale). Although these values seem high in absolute terms due to the reliability of engineered systems, the consequences of departing from ideal resilience are still high as economic losses can add up to $83 million per year.
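A resilience metric of the kind used in such assessments is commonly the normalized area under the system performance curve Q(t) over the recovery horizon. The sketch below uses invented restoration values, not the Hurricane Ike data, purely to show the mechanics of the calculation.

```python
# system performance ratio Q(t) at days after landfall
# (illustrative values only, not the Hurricane Ike restoration data)
t = [0, 1, 2, 4, 7, 10, 14]
q = [1.00, 0.35, 0.50, 0.75, 0.90, 0.98, 1.00]

def resilience(t, q):
    """Normalized area under the performance curve (trapezoidal rule)."""
    area = sum(0.5 * (q[i] + q[i + 1]) * (t[i + 1] - t[i]) for i in range(len(t) - 1))
    return area / (t[-1] - t[0])

R = resilience(t, q)
```

A value of R = 1 means performance was never lost; the shortfall 1 − R aggregates both the depth of the outage and the slowness of the restoration, which is why distinct improvement strategies (hardening vs. faster recovery) can trade off against each other in this single number.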
The moving least-squares method (MLSM) is more accurate than the least-squares method (LSM) in approximating the implicit response of a structure. This advantage of MLSM over LSM is exploited to reduce the number of iterations required to obtain the updated centre point of the design of experiment (DOE) used to construct the final response surface for efficient reliability analysis of structures. The initial response surface is constructed based on a simplified DOE with the mean values of the random variables as the centre point, and it is updated successively to obtain an improved response surface. The reliability of the structure is evaluated using this final response surface. The efficiency of the proposed method hinges on the use of a simplified DOE, instead of a computationally involved full factorial design, to achieve the desired accuracy. As MLSM is more accurate than LSM in evaluating the response surface polynomial, the centre point obtained during the iterations is expected to be more accurate. Thus, the number of iterations in the update procedure is reduced and the accuracy of the computed reliability is improved. The improved performance of the proposed approach with regard to efficiency and accuracy is elucidated with the help of three numerical examples.
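The only difference between LSM and MLSM is the weight matrix entering the normal equations. A minimal one-dimensional sketch, using sin(x) as a stand-in for an implicit structural response and a Gaussian weight of assumed bandwidth h = 1 centred at the evaluation point, shows how the moving weights localize the quadratic fit:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [v] for row, v in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def weighted_quadratic(xs, ys, w):
    """Weighted least-squares fit of y ~ a0 + a1*x + a2*x**2."""
    basis = lambda x: (1.0, x, x * x)
    A = [[sum(wi * basis(x)[r] * basis(x)[c] for wi, x in zip(w, xs))
          for c in range(3)] for r in range(3)]
    b = [sum(wi * basis(x)[r] * y for wi, x, y in zip(w, xs, ys)) for r in range(3)]
    a = solve(A, b)
    return lambda x: a[0] + a[1] * x + a[2] * x * x

response = math.sin                                # stand-in for an implicit FE response
xs = [0.5 * i for i in range(-8, 9)]               # design of experiments on [-4, 4]
ys = [response(x) for x in xs]
x_star, h = 2.0, 1.0                               # evaluation point, MLSM bandwidth
lsm = weighted_quadratic(xs, ys, [1.0] * len(xs))  # classical LSM: uniform weights
mlsm = weighted_quadratic(xs, ys, [math.exp(-((x - x_star) / h) ** 2) for x in xs])
err_lsm = abs(lsm(x_star) - response(x_star))
err_mlsm = abs(mlsm(x_star) - response(x_star))
```

Because the Gaussian weights suppress points far from x*, the MLSM polynomial tracks the response locally, whereas the single global LSM quadratic cannot; this is why an MLSM-based centre-point update converges in fewer iterations.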
► The proposed method aims at assessing small failure probabilities. ► The basic underlying concept is similar to subset simulation. ► A SVM surrogate is adaptively built at each threshold of the limit state function. ► The efficiency of the method is assessed on some challenging examples. Estimating small probabilities of failure remains quite a challenging task in structural reliability when models are computationally demanding. FORM/SORM are very suitable solutions when applicable but, due to their inherent assumptions, they sometimes lead to incorrect results for problems involving for instance multiple design points and/or nonsmooth failure domains. Recourse to simulation methods may therefore be the only viable solution for these kinds of problems. However, a major shortcoming of simulation methods is that they require a large number of calls to the structural model, which may be prohibitive for industrial applications. This paper presents a new approach for estimating small failure probabilities by considering the subset simulation proposed by S.-K. Au and J. Beck from the point of view of Support Vector Machine (SVM) classification. This approach, referred to as SMART, is detailed, and its efficiency, accuracy and robustness are assessed on three representative examples. Specific attention is paid to series system reliability and to problems involving moderately large numbers of random variables.
With complex performance functions and time-demanding computation of structural responses, the estimation of small failure probabilities is a challenging problem in engineering. Although Subset Simulation (SS) is a powerful tool for small probabilities, the computational effort remains large for time-consuming numerical procedures. Metamodelling is an important approach to increasing the computational efficiency of engineering problems; however, a larger set of sample points is required for higher accuracy, which is a time-consuming task when the performance function must be evaluated numerically. To address this issue, AK–SS, an active learning method combining the Kriging model and SS, is proposed in this paper. The efficiency of this new method relies on the advantages of SS in evaluating small failure probabilities and on the Kriging model, whose active learning and updating characteristics allow it to approximate the true performance function. The proposed method is applied to several benchmark functions from the literature and to the reliability analysis of a shield tunnel, which requires finite element analysis. The results demonstrate that, compared with other approaches in the literature, AK–SS provides accurate solutions more efficiently, making it a promising approach for structural reliability analyses involving small failure probabilities, high-dimensional performance functions, and time-consuming simulation codes in practical engineering.
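For reference, plain Subset Simulation without any metamodel can be sketched as follows, on a hypothetical scalar problem with pf = Φ(−4.5) ≈ 3.4e−6; the level probability p0 = 0.1 and the unit-variance random-walk modified-Metropolis sampler are illustrative choices. An AK-SS-style method replaces the calls to g inside this loop with an actively refined Kriging prediction.

```python
import math
import random

def g(x):
    return 4.5 - x            # failure when g(x) <= 0; exact pf = Phi(-4.5) ~ 3.4e-6

def subset_simulation(g, n=2000, p0=0.1, max_levels=10):
    samples = [random.gauss(0.0, 1.0) for _ in range(n)]
    pf = 1.0
    for _ in range(max_levels):
        b = sorted(g(x) for x in samples)[int(p0 * n)]   # intermediate threshold
        if b <= 0.0:                                     # failure domain reached
            return pf * sum(g(x) <= 0.0 for x in samples) / n
        pf *= p0
        seeds = [x for x in samples if g(x) <= b]
        samples = []                                     # regrow population in {g <= b}
        while len(samples) < n:
            x = random.choice(seeds)
            for _ in range(int(1.0 / p0)):               # short modified-Metropolis chain
                c = x + random.gauss(0.0, 1.0)
                ratio = math.exp(0.5 * (x * x - c * c))  # standard-normal density ratio
                if random.random() < min(1.0, ratio) and g(c) <= b:
                    x = c
                samples.append(x)
        samples = samples[:n]
    return pf

random.seed(5)
pf_hat = subset_simulation(g)
```

Each level multiplies the estimate by p0 while conditioning the population on ever-rarer intermediate events, so probabilities of order 1e−6 are reached with only a few thousand evaluations per level instead of the millions crude Monte Carlo would need.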
Slope reliability under incomplete probability information is a challenging problem. In this study, three copula-based approaches are proposed to evaluate slope reliability under incomplete probability information. The Nataf distribution and copula models for characterizing the bivariate distribution of shear strength parameters are briefly introduced. Then, both global and local dispersion factors are defined to characterize the dispersion in probability of slope failure. Two illustrative examples are presented to demonstrate the validity of the proposed approaches. The results indicate that the probabilities of slope failure associated with different copulas differ considerably. The commonly used Nataf distribution or Gaussian copula produces only one of the various possible solutions of probability of slope failure. The probability of slope failure under incomplete probability information exhibits large dispersion. Both global and local dispersion factors increase with decreasing probability of slope failure, especially for small coefficients of variation and strongly negative correlations underlying shear strength parameters. The proposed three copula-based approaches can effectively reduce the dispersion in probability of slope failure and significantly improve the estimate of probability of slope failure. In comparison with the Nataf distribution, the copula-based approaches result in a more reasonable estimate of slope reliability.
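A Gaussian-copula sample of negatively correlated cohesion and friction angle can be generated in a few lines; the marginal distributions, their parameters, and the copula parameter ρ = −0.7 below are all hypothetical. Other copulas (e.g. Frank or Plackett) would change only the way the pair of uniforms is generated, which is exactly why the computed failure probability can differ considerably between copulas sharing the same marginals and correlation.

```python
import math
import random
from statistics import NormalDist

random.seed(6)
nd = NormalDist()
rho = -0.7                                 # Gaussian-copula parameter (assumed)

def sample_pair():
    # correlate two uniforms through a bivariate standard normal (the copula step)
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    u1, u2 = nd.cdf(z1), nd.cdf(z2)
    # map the uniforms through assumed marginals (inverse-CDF transform)
    c = math.exp(2.3 + 0.3 * nd.inv_cdf(u1))      # cohesion (kPa), lognormal
    phi = 25.0 + 10.0 * u2                        # friction angle (deg), uniform on [25, 35]
    return c, phi

pairs = [sample_pair() for _ in range(20_000)]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / math.sqrt(va * vb)

r = pearson(*zip(*pairs))
```

The Pearson correlation of the generated pairs is slightly attenuated relative to ρ because of the non-normal marginals, a point worth remembering when fitting a copula to measured correlation coefficients.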
The primary goal of seismic provisions in building codes is to protect life safety through the prevention of structural collapse. To evaluate the extent to which current and past building code provisions meet this objective, the authors have conducted detailed assessments of collapse risk of reinforced-concrete moment frame buildings, including both ‘ductile’ frames that conform to current building code requirements, and ‘non-ductile’ frames that are designed according to out-dated (pre-1975) building codes. Many aspects of the assessment process can have a significant impact on the evaluated collapse performance; this study focuses on methods of representing modeling parameter uncertainties in the collapse assessment process. Uncertainties in structural component strength, stiffness, deformation capacity, and cyclic deterioration are considered for non-ductile and ductile frame structures of varying heights. To practically incorporate these uncertainties in the face of the computationally intensive nonlinear response analyses needed to simulate collapse, the modeling uncertainties are assessed through a response surface, which describes the median collapse capacity as a function of the model random variables. The response surface is then used in conjunction with Monte Carlo methods to quantify the effect of these modeling uncertainties on the calculated collapse fragilities. Comparisons of the response surface based approach and a simpler approach, namely the first-order second-moment (FOSM) method, indicate that FOSM can lead to inaccurate results in some cases, particularly when the modeling uncertainties cause a shift in the prediction of the median collapse point. An alternate simplified procedure is proposed that combines aspects of the response surface and FOSM methods, providing an efficient yet accurate technique to characterize model uncertainties, accounting for the shift in median response. 
The methodology for incorporating uncertainties is presented here with emphasis on the collapse limit state, but is also appropriate for examining the effects of modeling uncertainties on other structural limit states.
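The response-surface-plus-Monte-Carlo step can be sketched as follows. The fitted surface, its coefficients, the dispersion values and the intensity level are all hypothetical; the point is only the mechanics of propagating model random variables through a median-capacity surface and comparing the result against an analysis with record-to-record variability alone.

```python
import math
import random

random.seed(13)

def median_capacity(x1, x2):
    # hypothetical response surface: median collapse capacity (in g) as a function of
    # two standardized model random variables (e.g. component deformation capacity
    # and cyclic-deterioration rate)
    return 1.8 * math.exp(0.25 * x1 + 0.15 * x2)

beta_rtr = 0.4          # record-to-record lognormal dispersion (assumed)
im = 1.5                # ground-motion intensity level, Sa (g)
N = 100_000

def p_collapse(with_model_uncertainty):
    n_coll = 0
    for _ in range(N):
        x1 = random.gauss(0.0, 1.0) if with_model_uncertainty else 0.0
        x2 = random.gauss(0.0, 1.0) if with_model_uncertainty else 0.0
        capacity = median_capacity(x1, x2) * random.lognormvariate(0.0, beta_rtr)
        n_coll += capacity < im
    return n_coll / N

p_full = p_collapse(True)    # modeling plus record-to-record uncertainty
p_rtr = p_collapse(False)    # record-to-record variability only
```

At this intensity level, below the median capacity, including the modeling random variables widens the fragility and raises the collapse probability, which is the kind of effect the FOSM approximation can misrepresent when the uncertainties also shift the median.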
The inherent spatial variability of soils is one of the major sources of uncertainties in soil properties, and it can be characterized explicitly using random field theory. In the context of random fields, the spatial correlation between the values of a soil property concerned at different locations is represented by its correlation structure (i.e., correlation functions). How to select a proper correlation function for a particular site has been a challenging task, particularly when only a limited number of project-specific test results are obtained during geotechnical site characterization. This paper develops a Bayesian model comparison approach for selection of the most probable correlation function among a pool of candidates (e.g., single exponential correlation function, binary noise correlation function, second-order Markov correlation function, and squared exponential correlation function) for a particular site using project-specific test results and site information available prior to the project (i.e., prior knowledge, such as engineering experience and judgments). Equations are derived for the proposed Bayesian model comparison approach, in which the inherent spatial variability is modeled explicitly using random field theory. Then, the proposed method is illustrated and validated through simulated cone penetration test (CPT) data and four sets of real CPT data obtained from the sand site of the US National Geotechnical Experimentation Sites (NGES) at Texas A&M University. In addition, sensitivity studies are performed to explore the effects of prior knowledge, the measurement resolution (i.e., sampling interval), and data quantity (i.e., sampling depth) on selection of the most probable correlation function for soil properties. It is found that the proposed approach properly selects the most probable correlation function and is applicable for general choices of prior knowledge. 
The performance of the method is improved as the measurement resolution improves and the data quantity increases.
Metamodeling has been widely adopted to enhance computational efficiency in reliability analysis. This work develops an efficient reliability method which takes advantage of the Adaptive Support Vector Machine (ASVM) and Monte Carlo Simulation (MCS). A pool-based ASVM is employed for metamodel construction with the minimum number of training samples, for which a learning function is proposed to sequentially select informative training samples. Then MCS is employed to compute the failure probability based on the SVM classifier obtained. The proposed method is applied to four representative examples, which show the great effectiveness and efficiency of ASVM-MCS, leading to accurate estimation of the failure probability at rather low computational cost. ASVM-MCS is a powerful and promising approach for reliability computation, especially for nonlinear and high-dimensional problems.
This paper studies the effect of cascading failures in the risk and reliability assessment of complex infrastructure systems. Conventional reliability assessment for these systems is limited to finding paths between predefined components and does not include the effect of increased flow demand or flow capacity. Network flows are associated with congestion-based disruptions which can worsen path-based predictions of performance. In this research, overloads due to cascading failures are modeled with a tolerance parameter that measures network element flow capacity relative to flow demands in practical power transmission systems. Natural hazards and malevolent targeted disruptions constitute the triggering events that evolve into widespread failures due to flow redistribution. It is observed that improvements in network component tolerance alone do not ensure system robustness or protection against disproportionate cascading failures. Topological changes are needed to increase cascading robustness at realistic tolerance levels. Interestingly, targeted topological disruptions of a small fraction of network components can affect system-level performance more severely than earthquake or lightning events that trigger similar fractions of element failure. Also, regardless of the nature of the hazards, once the triggering events that disrupt the networks under investigation occur, the additional loss of performance due to cascading failures can be orders of magnitude larger than the initial loss of performance. These results reinforce the notion that managing the risk of network unavailability requires a combination of redundant topology, increased flow carrying capacity, and other non-conventional consequence reduction strategies, such as layout homogenization and the deliberate inclusion of weak links for network islanding. 
Furthermore, the accepted idea that rare loss-of-performance events become exponentially less frequent as the performance reduction intensifies contrasts with the more frequent network vulnerabilities that result from initial hazard-induced failures and subsequent cascading-induced failure effects. These compound hazard-cascading detrimental effects can have profound implications for infrastructure failure prevention strategies.
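The disproportionate nature of cascading failures can be illustrated with a deliberately minimal overload model: every element carries an initial flow, its capacity is (1 + α) times that flow (α being the tolerance parameter), and the flow of failed elements is redistributed uniformly among survivors. This ignores network topology, unlike the flow-based model of the study, but already shows the all-or-nothing character of cascades.

```python
def cascade(loads, alpha, initial_failures):
    """Return the number of surviving elements after overload redistribution."""
    caps = [(1.0 + alpha) * L for L in loads]            # tolerance parameter alpha
    alive = [i not in initial_failures for i in range(len(loads))]
    changed = True
    while changed:
        changed = False
        shed = sum(L for L, a in zip(loads, alive) if not a)
        n_alive = sum(alive)
        if n_alive == 0:
            return 0
        extra = shed / n_alive                           # uniform flow redistribution
        for i, L in enumerate(loads):
            if alive[i] and L + extra > caps[i]:
                alive[i] = False
                changed = True
    return sum(alive)

# 20 identical elements with 10 % spare capacity: one triggering failure is
# absorbed, but three triggering failures overload every survivor at once
survivors_one = cascade([1.0] * 20, 0.10, {0})
survivors_three = cascade([1.0] * 20, 0.10, {0, 1, 2})
```

One initial failure sheds 1/19 of a unit onto each survivor and the system absorbs it, whereas three initial failures shed 3/17 each, exceeding the 10 % tolerance and collapsing the whole system, a loss of performance orders of magnitude larger than the triggering event.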
In the stochastic dynamic analysis of nonlinear structures, the strategy of point selection plays a critical role in achieving a tradeoff between accuracy and efficiency. To this end, in conjunction with the concept of the extended F-discrepancy (EF-discrepancy), the Koksma–Hlawka inequality, which bounds the worst-case error of cubature formulae, is extended in the present paper to cases involving non-uniform distributions. Further, in order to avoid the computational complexity of the EF-discrepancy, the Generalized F-discrepancy (GF-discrepancy) is introduced. In light of the quantitative equivalence between the EF-discrepancy and the GF-discrepancy, the extended Koksma–Hlawka inequality can be modified by replacing the EF-discrepancy with the GF-discrepancy. The rationality of adopting the GF-discrepancy as the objective function of point selection is thereby theoretically supported. Thus, by reducing the GF-discrepancy, a new strategy of representative point set determination via rearrangement is proposed. The proposed approach is then applied to the stochastic dynamic response analysis of strongly nonlinear structures by incorporating it into the probability density evolution method, showing its effectiveness for practical applications. Problems to be further studied are outlined.
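One common form of such a discrepancy measure is the worst deviation, over all marginals, between the weighted empirical CDF of the point set and the target marginal CDF evaluated at the points. A sketch under that assumed definition (standard normal marginals, equal assigned probabilities), comparing a random point set with a stratified inverse-CDF set:

```python
import random
from statistics import NormalDist

nd = NormalDist()

def gf_discrepancy(points, weights):
    """Max over marginals of the deviation between the weighted empirical CDF of
    the point set and the target (standard normal) marginal CDF, checked at the
    points themselves -- a simplified sketch of a GF-discrepancy-style measure."""
    d = len(points[0])
    worst = 0.0
    for k in range(d):
        order = sorted(range(len(points)), key=lambda i: points[i][k])
        cum = 0.0
        for i in order:
            cum += weights[i]
            worst = max(worst, abs(cum - nd.cdf(points[i][k])))
    return worst

random.seed(20)
n = 100
w = [1.0 / n] * n
random_set = [(random.gauss(0.0, 1.0),) for _ in range(n)]
# rearranged/stratified set: inverse CDF of the midpoints of n equal-probability strata
stratified_set = [(nd.inv_cdf((i + 0.5) / n),) for i in range(n)]
d_random = gf_discrepancy(random_set, w)
d_stratified = gf_discrepancy(stratified_set, w)
```

The stratified set achieves a discrepancy of exactly 0.5/n under this measure, far below a random set of the same size, which is the kind of improvement a rearrangement strategy that reduces the GF-discrepancy is after.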
Reliability sensitivity analysis aims at studying the influence of the parameters of the probabilistic model on the probability of failure of a given system. Such an influence may either be quantified over a given range of values of the parameters of interest using a parametric analysis, or only locally by means of its partial derivatives. This paper is concerned with the latter approach when the limit-state function involves the output of an expensive-to-evaluate computational model. In order to reduce the computational cost, it is proposed to compute the failure probability by means of the recently proposed meta-model-based importance sampling method. This method resorts to the adaptive construction of a Kriging meta-model which emulates the limit-state function. Then, instead of using this meta-model as a surrogate for computing the probability of failure, its probabilistic nature is used in order to build a quasi-optimal instrumental density function for accurately computing the actual failure probability through importance sampling. The proposed estimator of the failure probability is recast as a product of two terms. The augmented failure probability is estimated using the emulator only, while the correction factor is estimated using both the actual limit-state function and its emulator in order to quantify the substitution error. This estimator is then differentiated by means of the score-function approach, which enables the estimation of the gradient of the failure probability without any additional call to the limit-state function (nor its Kriging emulator). The approach is validated on three structural reliability examples.
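The score-function identity that makes this possible is dPf/dθ = E[1{g(X) ≤ 0} · ∂ln f(X; θ)/∂θ]: the gradient is an expectation under the same density as Pf itself, so it can be accumulated from samples already drawn, with no extra calls to g. A bare Monte Carlo version (without the paper's meta-model importance sampling), on a hypothetical g(x) = 3 − x with X ~ N(μ, 1):

```python
import math
import random

random.seed(7)
mu, sigma = 0.0, 1.0
g = lambda x: 3.0 - x                        # failure when g(x) <= 0
N = 400_000
hits, score_sum = 0, 0.0
for _ in range(N):
    x = random.gauss(mu, sigma)
    if g(x) <= 0.0:
        hits += 1
        score_sum += (x - mu) / sigma ** 2   # d ln f / d mu for the normal density
pf_hat = hits / N
dpf_dmu_hat = score_sum / N
# analytic check for this toy case: pf = Phi(mu - 3), d pf / d mu = phi(3 - mu)
```

For this toy case the exact values are Pf = Φ(−3) ≈ 1.35e−3 and dPf/dμ = φ(3) ≈ 4.43e−3, and both estimators come out of the single sample set.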
Although the influence of ground motion duration on liquefaction and slope stability is widely acknowledged, its influence on structural response is a topic of some debate. This study examines the effect of ground motion duration on the collapse of reinforced concrete structures by conducting incremental dynamic analysis on nonlinear multiple-degree-of-freedom models of concrete frame buildings with different structural properties. Generalized linear modeling regression techniques are used to predict the collapse capacity of a structure, and the duration of the ground motion is found to be a significant predictor of collapse resistance. As a result, the collapse risk of the analyzed buildings is higher when subjected to longer-duration ground motions than to shorter-duration ground motions of the same intensity. Ground motion duration affects the collapse capacity of both highly deteriorating (non-ductile) and less deteriorating (ductile) concrete structures. Therefore, it is recommended to consider the duration of the ground motion, in addition to its intensity and frequency content, in structural design and assessment of seismic risk. ▸ Effect of ground motion duration on collapse of concrete structures is assessed. ▸ A generalized linear model quantifying structural collapse capacity is defined. ▸ Ground motion duration, building period and ductility affect collapse capacity. ▸ Structural collapse risk is higher for longer duration ground shaking. ▸ Ground motion duration is an important factor for seismic risk assessment.
Structural reliability analysis is typically based on a model that describes the response, such as maximum deformation or stress, as a function of several random variables. In principle, reliability can be evaluated once the probability distribution of the response becomes available. The paper presents a new method to derive the probability distribution of a function of random variables representing the structural response. The derivation is based on the maximum entropy principle, in which constraints are specified in terms of fractional moments, in place of the commonly used integer moments. In order to compute the fractional moments of the response function, a multiplicative form of the dimensional reduction method (M-DRM) is presented. Several examples presented in the paper illustrate the numerical accuracy and efficiency of the proposed method in comparison to the Monte Carlo simulation method.
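The appeal of fractional moments is that a single real-order moment E[Y^α] carries information about many integer moments at once. For a lognormal response, E[Y^α] = exp(αμ + α²σ²/2) holds for any real α, which gives a quick check of Monte Carlo fractional-moment estimates; the maximum-entropy step would then choose a few such α values and fit the density matching them (the M-DRM part is not reproduced here, and the lognormal parameters below are assumed for illustration).

```python
import math
import random

random.seed(8)
mu, sig = 0.5, 0.4                     # parameters of an assumed lognormal response
N = 200_000
ys = [random.lognormvariate(mu, sig) for _ in range(N)]

def frac_moment(alpha):
    """Monte Carlo estimate of the fractional moment E[Y**alpha]."""
    return sum(y ** alpha for y in ys) / N

rel_errors = []
for a in (0.5, 1.3, -0.7):             # real-valued (fractional) orders
    exact = math.exp(a * mu + 0.5 * (a * sig) ** 2)
    rel_errors.append(abs(frac_moment(a) / exact - 1.0))
```

Note that negative and non-integer orders pose no difficulty for a positive response variable, which is precisely what lets a handful of fractional moments constrain the maximum-entropy density as tightly as many integer moments would.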
This paper presents an innovative fully-probabilistic Performance-Based Hurricane Engineering (PBHE) framework for risk assessment of structural systems located in hurricane-prone regions. The proposed methodology is based on the total probability theorem and disaggregates the risk assessment into elementary components, namely hazard analysis, structural characterization, environment–structure interaction analysis, structural analysis, damage analysis, and loss analysis. This methodology accounts for the multi-hazard nature of hurricane events by considering both the separate effects of and the interaction among hurricane wind, flood, windborne debris, and rainfall hazards. A discussion on the different sources of hazard is provided, and vectors of intensity measures for hazard analyses are proposed. Suggestions on the selection of appropriate parameters describing the interaction between the environmental actions and the structure, the structural response, and the resulting damage are also provided. The proposed PBHE framework is illustrated through an application example consisting of the performance assessment of a residential building subjected to windborne debris and hurricane strong winds. The PBHE framework introduced in this paper represents a step toward a rational methodology for probabilistic risk assessment and design of structures subjected to multi-hazard scenarios.