Decision Sciences Statistics, Probability and Uncertainty

Risk and Safety Analysis

Description

This cluster of papers covers recent advances in risk analysis and management, with a focus on topics such as Bayesian networks, fault tree analysis, quantitative risk assessment, dynamic safety analysis, domino effect in industrial accidents, inherent safety, human reliability analysis, process safety, hazardous materials transportation, and uncertainty handling.

Keywords

Bayesian Networks; Fault Tree Analysis; Quantitative Risk Assessment; Dynamic Safety Analysis; Domino Effect; Inherent Safety; Human Reliability Analysis; Process Safety; Hazardous Materials Transportation; Uncertainty Handling

1. Introduction. 2. Toxicology. 3. Industrial Hygiene. 4. Source Models. 5. Toxic Release and Dispersion Models. 6. Fires and Explosions. 7. Designs to Prevent Fires and Explosions. 8. Introduction to Reliefs. 9. Relief Sizing. 10. Hazards Identification. 11. Risk Assessment. 12. Accident Investigations. 13. Case Histories. Appendix I. Formal Safety Review Report. Appendix II. Saturation Vapor Pressure Data. Appendix III. Unit Conversion Constants.
The primary purpose of the Handbook is to present methods, models, and estimated human error probabilities (HEPs) to enable qualified analysts to make quantitative or qualitative assessments of occurrences of human errors in nuclear power plants (NPPs) that affect the availability or operational reliability of engineered safety features and components. The Handbook is intended to provide much of the modeling and information necessary for the performance of human reliability analysis (HRA) as a part of probabilistic risk assessment (PRA) of NPPs. Although not a design guide, a second purpose of the Handbook is to enable the user to recognize error-likely equipment design, plant policies and practices, written procedures, and other human factors problems so that improvements can be considered. The Handbook provides the methodology to identify and quantify the potential for human error in NPP tasks.
Resilience engineering has since 2004 attracted widespread interest from industry as well as academia. Practitioners from various fields, such as aviation and air traffic management, patient safety, off-shore exploration and production, have quickly realised the potential of resilience engineering and have become early adopters. The continued development of resilience engineering has focused on four abilities that are essential for resilience. These are the ability a) to respond to what happens, b) to monitor critical developments, c) to anticipate future threats and opportunities, and d) to learn from past experience - successes as well as failures. Working with the four abilities provides a structured way of analysing problems and issues, as well as of proposing practical solutions (concepts, tools, and methods). This book is divided into four main sections which describe issues relating to each of the four abilities. The chapters in each section emphasise practical ways of engineering resilience and feature case studies and real applications. The text is written to be easily accessible for readers who are more interested in solutions than in research, but will also be of interest to the latter group.
In the social sciences, two prevailing definitions of risk are: (1) risk is a situation or event where something of human value (including humans themselves) is at stake and where the outcome is uncertain; (2) risk is an uncertain consequence of an event or an activity with respect to something that humans value. According to these definitions, risk expresses an ontology (a theory of being) independent of our knowledge and perceptions. In this paper, we look closer into these two types of definitions. We conclude that the definitions provide a sound foundation for risk research and risk management, but compared to common terminology, they lead to conceptual difficulties that are incompatible with the everyday use of risk in most applications. By considering risk as a state of the world, we cannot conclude, for example, about the risk being high or low, or compare different options with respect to risk. A rephrasing of the two definitions is suggested: Risk refers to uncertainty about and severity of the consequences (or outcomes) of an activity with respect to something that humans value.
Multivariable methods of analysis can yield problematic results if methodological guidelines and mathematical assumptions are ignored. A problem arising from a too-small ratio of events per variable (EPV) can affect the accuracy and precision of regression coefficients and their tests of statistical significance. The problem occurs when a proportional hazards analysis contains too few "failure" events (e.g., deaths) in relation to the number of included independent variables. In the current research, the impact of EPV was assessed for results of proportional hazards analysis done with Monte Carlo simulations in an empirical data set of 673 subjects enrolled in a multicenter trial of coronary artery bypass surgery. The research is presented in two parts: Part I describes the data set and strategy used for the analyses, including the Monte Carlo simulation studies done to determine and compare the impact of various values of EPV in proportional hazards analytical results. Part II compares the output of regression models obtained from the simulations, and discusses the implication of the findings.
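The events-per-variable ratio at the heart of this problem is simple to compute. A minimal sketch; the threshold of 10 is a commonly cited rule of thumb, not a figure taken from this paper:

```python
def events_per_variable(n_events: int, n_covariates: int) -> float:
    """Ratio of outcome events (e.g., deaths) to candidate predictors."""
    if n_covariates <= 0:
        raise ValueError("need at least one covariate")
    return n_events / n_covariates

def epv_too_small(n_events: int, n_covariates: int, threshold: float = 10.0) -> bool:
    """Flag a model whose EPV falls below a conventional threshold."""
    return events_per_variable(n_events, n_covariates) < threshold
```

For example, a proportional hazards model with 40 deaths and 5 candidate covariates has EPV = 8 and would be flagged under the 10-per-variable rule of thumb.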
In quantitative uncertainty analysis, it is essential to define rigorously the endpoint or target of the assessment. Two distinctly different approaches using Monte Carlo methods are discussed: (1) the end point is a fixed but unknown value (e.g., the maximally exposed individual, the average individual, or a specific individual) or (2) the end point is an unknown distribution of values (e.g., the variability of exposures among unspecified individuals in the population). In the first case, values are sampled at random from distributions representing various “degrees of belief” about the unknown “fixed” values of the parameters to produce a distribution of model results. The distribution of model results represents a subjective confidence statement about the true but unknown assessment end point. The important input parameters are those that contribute most to the spread in the distribution of the model results. In the second case, Monte Carlo calculations are performed in two dimensions producing numerous alternative representations of the true but unknown distribution. These alternative distributions permit subjective confidence statements to be made from two perspectives: (1) for the individual exposure occurring at a specified fractile of the distribution or (2) for the fractile of the distribution associated with a specified level of individual exposure. The relative importance of input parameters will depend on the fractile or exposure level of interest. The quantification of uncertainty for the simulation of a true but unknown distribution of values represents the state‐of‐the‐art in assessment modeling.
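The two-dimensional scheme described in the second case can be sketched as follows. The priors on the log-exposure parameters and all numerical settings are illustrative assumptions, not values from the paper:

```python
import random

def two_dim_monte_carlo(n_outer=200, n_inner=1000, fractile=0.95):
    """Two-dimensional Monte Carlo: the outer loop samples state-of-knowledge
    (epistemic) uncertainty about distribution parameters; the inner loop
    samples variability among individuals. Each outer draw yields one
    alternative representation of the unknown exposure distribution."""
    fractile_estimates = []
    for _ in range(n_outer):
        # Epistemic draw: uncertain parameters of a lognormal exposure model
        # (hypothetical priors for illustration only).
        mu = random.gauss(0.0, 0.2)
        sigma = abs(random.gauss(1.0, 0.1)) + 1e-6
        # Variability draw: one realization of the population of exposures.
        exposures = sorted(random.lognormvariate(mu, sigma) for _ in range(n_inner))
        fractile_estimates.append(exposures[int(fractile * n_inner) - 1])
    fractile_estimates.sort()
    # Subjective 90% confidence interval for the true but unknown exposure
    # at the chosen fractile of the population distribution.
    return (fractile_estimates[int(0.05 * n_outer)],
            fractile_estimates[int(0.95 * n_outer) - 1])

random.seed(42)
lo, hi = two_dim_monte_carlo()
```

The spread between `lo` and `hi` reflects epistemic uncertainty about the 95th-percentile exposure, which is exactly the quantity the abstract's perspective (1) addresses.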
Let $S_0$ be any sequential probability ratio test for deciding between two simple alternatives $H_0$ and $H_1$, and $S_1$ another test for the same purpose. We define $(i, j = 0, 1):$ $\alpha_i(S_j) =$ probability, under $S_j$, of rejecting $H_i$ when it is true; $E_i^j (n) =$ expected number of observations to reach a decision under test $S_j$ when the hypothesis $H_i$ is true. (It is assumed that $E^1_i (n)$ exists.) In this paper it is proved that, if $\alpha_i(S_1) \leq \alpha_i(S_0)\quad(i = 0,1)$, it follows that $E_i^0 (n) \leq E_i^1 (n)\quad(i = 0, 1)$. This means that of all tests with the same power the sequential probability ratio test requires on the average fewest observations. This result had been conjectured earlier ([1], [2]).
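The test whose optimality is proved here is easy to state for Bernoulli data. A minimal sketch using Wald's approximate boundaries $A = (1-\beta)/\alpha$ and $B = \beta/(1-\alpha)$:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli observations,
    H0: p = p0 versus H1: p = p1, with approximate error rates alpha, beta."""
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this log-LR
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this log-LR
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return ("accept H0", n)
        if llr >= upper:
            return ("accept H1", n)
    return ("undecided", len(samples))
```

With `p0 = 0.2`, `p1 = 0.8` and `alpha = beta = 0.05`, a run of three successes already crosses the upper boundary, illustrating the early stopping that underlies the optimality result.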
The incident command system (ICS) is a particular approach to assembly and control of the highly reliable temporary organizations employed by many public safety professionals to manage diverse reso...
Safety assessments of technological systems, such as nuclear power plants, chemical process facilities, and hazardous waste repositories, require the investigation of the occurrence and consequences of rare events. The subjectivistic (Bayesian) theory of probability is the appropriate framework within which expert opinions, which are essential to the quantification process, can be combined with experimental results and statistical observations to produce quantitative measures of the risks from these systems. A distinction is made between uncertainties in physical models and state-of-knowledge uncertainties about the parameters and assumptions of these models. The proper role of past and future relative frequencies and several issues associated with elicitation and use of expert opinions are discussed.
A critical decision method is described for modeling tasks in naturalistic environments characterized by high time pressure, high information content, and changing conditions. The method is a variant of J. C. Flanagan's (1954) critical incident technique, extended to include probes that elicit aspects of expertise such as the basis for making perceptual discriminations, conceptual discriminations, typicality judgments, and critical cues. The method has been used to elicit domain knowledge from experienced personnel such as urban and wildland fireground commanders, tank platoon leaders, structural engineers, design engineers, paramedics, and computer programmers. A model of decision-making derived from these investigations is presented as the theoretical background to the methodology. Instruments and procedures for implementing the approach are described. Applications of the method include developing expert systems, evaluating expert systems' performance, identifying training requirements, and investigating basic decision research issues.
Risk matrices—tables mapping “frequency” and “severity” ratings to corresponding risk priority levels—are popular in applications as diverse as terrorism risk analysis, highway construction project management, office building risk analysis, climate change risk management, and enterprise risk management (ERM). National and international standards (e.g., Military Standard 882C and AS/NZS 4360:1999) have stimulated adoption of risk matrices by many organizations and risk consultants. However, little research rigorously validates their performance in actually improving risk management decisions. This article examines some mathematical properties of risk matrices and shows that they have the following limitations. (a) Poor Resolution. Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards. They can assign identical ratings to quantitatively very different risks (“range compression”). (b) Errors. Risk matrices can mistakenly assign higher qualitative ratings to quantitatively smaller risks. For risks with negatively correlated frequencies and severities, they can be “worse than useless,” leading to worse‐than‐random decisions. (c) Suboptimal Resource Allocation. Effective allocation of resources to risk‐reducing countermeasures cannot be based on the categories provided by risk matrices. (d) Ambiguous Inputs and Outputs. Categorizations of severity cannot be made objectively for uncertain consequences. Inputs to risk matrices (e.g., frequency and severity categorizations) and resulting outputs (i.e., risk ratings) require subjective interpretation, and different users may obtain opposite ratings of the same quantitative risks.
These limitations suggest that risk matrices should be used with caution, and only with careful explanations of embedded judgments.
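The misranking failure described in (b) is easy to reproduce with a toy matrix. The 2x2 binning and the two hazards below are hypothetical, constructed only to exhibit the effect:

```python
def matrix_rating(frequency: float, severity: float) -> str:
    """A hypothetical 2x2 risk matrix: each axis is binned low/high at 0.5,
    and a cell is rated 'high' only when both inputs fall in the high bin."""
    return "high" if frequency >= 0.5 and severity >= 0.5 else "low"

# Misranking: hazard A has the larger quantitative risk (frequency x severity)
# yet receives the lower qualitative rating, because its frequency falls
# just below the bin boundary (range compression across the cut-off).
hazard_a = (0.49, 0.90)   # quantitative risk 0.441
hazard_b = (0.50, 0.50)   # quantitative risk 0.250

assert hazard_a[0] * hazard_a[1] > hazard_b[0] * hazard_b[1]
assert matrix_rating(*hazard_a) == "low"
assert matrix_rating(*hazard_b) == "high"
```

Any finite binning of continuous inputs admits such pairs near the cut-offs, which is why the article's resolution and error results hold for typical matrix designs, not just this toy one.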
Fault trees represent problem situations by organizing things that could go wrong into functional categories. Such trees are essential devices for analyzing and evaluating the fallibility of complex systems. They follow many different formats, sometimes by design, other times inadvertently. Major results were: people were quite insensitive to what had been left out of a fault tree; increasing the amount of detail for the tree as a whole, or just for some of its branches, produced small effects on perceptions; and the perceived importance of a particular branch was increased by presenting it in pieces (as two separate component branches). Aside from their relevance for the study of problem solving, such results may have important implications for how best to inform the public about technological risks and involve it in policy decisions, and for how experts should perform fault-tree analyses of the risks from technological systems.
The hazard function of time-related events, such as death or reoperation following heart valve replacement, often is time-varying in a structured fashion, as is the influence of risk factors associated with the events. A completely parametric system is presented for the decomposition of time-varying patterns of risk into additive, overlapping phases, descriptively labeled as early, constant-hazard, and late. Each phase is shaped by a different generic function of time constituting a family of nested equations and is scaled by a separate logit-linear or log-linear function of concomitant information. Model building uses maximum likelihood estimation. The resulting parametric equations permit hazard function, survivorship function, and probability estimates and their confidence limits to be portrayed and adjusted for concomitant information. These provide a comprehensive analysis of time-related events from which inferences may be drawn to improve, for example, the management of patients with valvar heart disease.
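The additive phase decomposition can be written generically as follows; the symbols are illustrative, and the paper's nested families of generic time functions are more elaborate than this sketch:

```latex
\lambda(t) \;=\; \underbrace{\mu_E\, g_E(t)}_{\text{early}}
          \;+\; \underbrace{\mu_C}_{\text{constant hazard}}
          \;+\; \underbrace{\mu_L\, g_L(t)}_{\text{late}},
\qquad \mu_k = \exp\!\left(\beta_k^{\mathsf T} x\right),
```

where each phase's scale $\mu_k$ is tied to the concomitant information $x$ through its own log-linear (or logit-linear) function, so different risk factors can act on different phases of risk.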
Risk assessment and management was established as a scientific field some 30–40 years ago. Principles and methods were developed for how to conceptualise, assess and manage risk. These principles and methods still represent to a large extent the foundation of this field today, but many advances have been made, linked to both the theoretical platform and practical models and procedures. The purpose of the present invited paper is to perform a review of these advances, with a special focus on the fundamental ideas and thinking on which these are based. We have looked for trends in perspectives and approaches, and we also reflect on where further development of the risk field is needed and should be encouraged. The paper is written for readers with different types of background, not only for experts on risk.
Not so long ago I experienced an emergency landing. We had been aloft only a short time when the pilot announced some mechanical failure. As we headed toward the nearest airport, the man behind me, no less frightened than I, said to his companion, "Here's where my luck runs out." A few minutes later we touched down to a safe landing amidst foam trucks and asbestos-clad fire fighters. On the ground I ran into the pilot and asked him about the trouble. His response was vague, but he did indicate that something had been wrong with the rudder. How, then, was he able to direct and land the plane? He replied that the situation had not really been as ominous as it had seemed: the emergency routines we had followed were necessary precautions and he had been able to compensate for the impairment of the rudder by utilizing additional features of the aircraft. There were, he said, safety factors built into all planes. Happily, such matters had not been left to chance, luck, as we say. For a commercial airliner is a very redundant system, a fact which accounts for its reliability of performance; a fact which also accounts for its adaptability.
This study employed fuzzy AHP methodology to assess risks in pilot transfer operations, identifying four critical hazards through expert evaluations with 19 maritime professionals. The highest risks include: pilot falls from ladder (Cr1, 0.20077), pilot boat entanglement displacing ladder (Cr12, 0.17512), compression between ship and boat (Cr11, 0.14466), and limb entrapment in ladder (Cr7, 0.11002). These factors collectively represent over 60% of total risk weight, highlighting mechanical and human-factor dangers in transfer operations. The fuzzy AHP approach effectively quantifies expert judgments, addressing uncertainties in risk assessment. Findings emphasize the need for targeted safety measures: smart ladder systems with fall prevention, enhanced boat handling training, standardized distance protocols, and ergonomic ladder designs. This research provides a data-driven framework for prioritizing interventions to improve pilot transfer safety, offering practical insights for maritime operators and regulators to reduce accidents during this high-risk operation.
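The weight derivation underlying AHP-style prioritization can be sketched in crisp form; this uses the row geometric-mean method and a hypothetical judgment matrix, whereas the study's fuzzy AHP additionally propagates triangular fuzzy judgments from its 19 experts:

```python
import math

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix using
    the row geometric-mean method, a standard approximation to the
    principal-eigenvector weights of crisp AHP."""
    gmeans = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical 3x3 judgment matrix comparing three transfer hazards on
# Saaty's 1-9 scale (illustrative numbers, not the study's data).
judgments = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(judgments)
```

The resulting weights sum to one and preserve the dominance ordering of the judgments, which is how criterion weights such as the 0.20077 reported for Cr1 arise from pairwise comparisons.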
The objective of this study is to apply the Potential Value Level (PVL) scale to assess the value of Research, Development and Innovation (RD&I) in safety innovation in the oil and gas industry, and to evaluate its strengths, weaknesses, opportunities and threats against the existing literature. The methodology involved a two-layer Analytic Hierarchy Process (AHP) based on the criteria Royalties, Participations, Intangibles and Strategic. Data were collected through interviews with seven specialists, including professors, doctoral students and professionals with portfolio-management experience. The analysis was conducted with the support of Microsoft Excel, allowing the PVL scale to be compared with the traditional TRL and CRL scales. The results indicated that the PVL scale is effective for prioritising projects from a value-generation perspective, with emphasis on the intangible criterion, which is frequently neglected by traditional methods. However, as a single indicator it is expected to be combined with other indicators, and its calibration depends on a coherent choice of criteria. The multicriteria application showed consistency in the evaluations across decision-makers and provided evidence of the PVL scale's usefulness as a decision-support tool. The theoretical contribution of the research lies in proposing an innovative value-assessment model applied to the RD&I context. In practical terms, the study offers managers a structured instrument for selecting safety-innovation projects, aligned with organisational strategies and value creation in highly complex and uncertain environments.
This paper develops a comprehensive framework for the risk assessment of 115 kV power circuit breakers (PCBs) by evaluating their condition, replacement needs, and criticality to the electrical network. The primary objective is to create a risk assessment tool that enhances maintenance practices and improves operational efficiency. The framework begins with a condition assessment, quantified through the use of a health index, derived from historical diagnostic test results and routine checks. The next step involves a replacement assessment, using a replacement index that considers factors such as age, rating adequacy, and technological obsolescence to determine the necessity of replacement. Finally, a criticality assessment is performed using a criticality index, which evaluates the PCB’s role in the network by factoring in location, load importance, failure severity, and the consequences of failure on network operations. By integrating these indices, the framework offers a holistic view of the associated risks. The methodology is applied to assess the risk of 149 sample PCBs across 30 substations in Thailand, with relevant data collected for each unit. The resulting risk assessments support proactive maintenance, minimize downtime, optimize the allocation of limited resources, and enhance the overall efficiency, reliability, and safety of the electrical network.
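One common way to integrate such indices is a weighted linear score. A minimal sketch; the weights and the [0, 1] normalization are assumptions for illustration, not the paper's actual aggregation:

```python
def risk_index(health, replacement, criticality, weights=(0.4, 0.3, 0.3)):
    """Hypothetical linear aggregation of health, replacement, and
    criticality indices into one risk score. Each index is assumed
    normalized to [0, 1], higher meaning worse; the weights are
    illustrative and not taken from the paper."""
    for x in (health, replacement, criticality):
        if not 0.0 <= x <= 1.0:
            raise ValueError("indices must be normalized to [0, 1]")
    w_h, w_r, w_c = weights
    return w_h * health + w_r * replacement + w_c * criticality
```

Ranking a breaker fleet by this score supports the proactive-maintenance use described above: units with high condition degradation and high network criticality surface first.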
The article attempts to thoroughly address the issue of suspending firefighters from their duties during the years 2013–2024. It involves the applicable legal regulations, stages of administrative procedures, and the authorities responsible for issuing such decisions. The article also examines the legal and professional implications of suspension. A novel aspect in the research approach is the inclusion of statistical data regarding the number and course of proceedings in the discussed period, which not only describe the legal mechanisms but also evaluate their practical functioning. The aim of the article is to analyze the legal and practical aspects of suspending firefighters from their duties and to assess the effectiveness of this institution in light of the data from 2013–2024. The work aims not only to present the procedures but also to add value by diagnosing administrative practice in this area, constituting a new element in Polish literature on the subject. The following research questions have been posed: Is the firefighter suspension procedure in line with the principles of the rule of law, and does it provide the officer with appropriate guarantees? What are the legal, professional, and social effects of suspension from service? Was the suspension institution applied more frequently in certain years, and for what reasons? The following hypotheses have also been formulated: 1. The procedure for suspending an officer is highly formalized and standardized, yet not always transparent to the participants in the proceedings. 2. Suspension causes significant and lasting legal and social effects on the officer, regardless of the final outcome. 3. During the analyzed period, significant quantitative and qualitative changes in the application of the suspension institution can be observed.
The study applied the method of legal (dogmatic) analysis through a review of the legislation governing the suspension of officers from their duties, case law, and literature on the subject. The empirical part employed statistical analysis: the numerical data concerning administrative proceedings conducted by the relevant authorities in the years 2013–2024 were compiled. This data was obtained from official sources (reports from the National Headquarters of the Fire Service, public inquiries), allowing a reliable comparative analysis. The innovative nature of the work is demonstrated in the combination of normative analysis with an empirical assessment of the effectiveness and frequency of suspension in practice. Previous publications have mainly focused on the description of legal provisions; this study fills a gap in the literature by evaluating to what extent suspensions are implemented in accordance with the letter and spirit of the law and what effects they have on officers.
This study identifies and validates critical risk factors in storage tank construction projects through a Systematic Literature Review (SLR) and expert judgment using Aiken’s V method. Initially, 103 journal articles were screened, with 43 selected for in-depth analysis, revealing 33 causal factors and six key risk categories. A Focus Group Discussion (FGD) involving industry professionals (project managers, QA personnel, safety officers) enriched the findings by incorporating practical insights missing in academic literature. Eight experts then evaluated these factors using Aiken’s V, validating 13 causal factors and four risk factors as highly significant. Key causal factors included Structure Design, Material Delivery, and Foundation Design, while major risk factors were financial loss, non-compliance, workplace accidents, and poor-quality outcomes. The study establishes a structured risk model for storage tank projects, supporting future quantitative risk analysis and mitigation strategies.
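Aiken's V itself is a one-line statistic. A minimal sketch with hypothetical ratings (the study's eight-expert data are not reproduced here):

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V content-validity coefficient for a single item:
    V = sum(r - lo) / (n * (hi - lo)), ranging from 0 (all experts give
    the lowest rating) to 1 (all give the highest)."""
    if not ratings or any(not lo <= r <= hi for r in ratings):
        raise ValueError("ratings must lie on the [lo, hi] scale")
    return sum(r - lo for r in ratings) / (len(ratings) * (hi - lo))

# Eight hypothetical expert ratings of one causal factor on a 1-5 scale.
v = aikens_v([5, 4, 5, 4, 5, 5, 4, 4])
```

A factor is typically retained when its V exceeds a tabulated critical value for the given number of raters and categories, which is how the 13 causal factors and four risk factors were judged highly significant.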
To obtain License Renewal (LR) for the Long-Term Operation (LTO) of a Nuclear Power Plant (NPP), it is important to ensure the integrity of Structures, Systems, and Components (SSCs). Additionally, the SSCs must continue to perform their intended functions under Design Extension Conditions (DEC), which encompass a wide range of conditions beyond the design basis, including both severe and non-severe scenarios. The ageing process can compromise the intended functions of SSCs in an NPP, and NPPs must demonstrate their ability to cope with DEC without compromising safety. The objective of this paper is to describe a methodology developed to identify the SSCs within the scope of LR that are needed to cope with DEC during LTO. The proposed methodology follows U.S. Nuclear Regulatory Commission (NRC) requirements and International Atomic Energy Agency (IAEA) safety standards, demonstrating a commitment to safety and environmental protection. The main contribution of the new methodology is to identify the systems, components, and subcomponents needed to cope with design extension conditions that have neither a Maintenance Program (MP) nor an Aging Management Program (AMP), and to address these components in the plant-level AMP or recommend an Aging Management Review (AMR) according to their characteristics and function. Accomplishing the intended safety functions of SSCs throughout the plant's operational lifetime, including during LTO, is crucial to the safe and reliable operation of the plant. An example of application of the proposed methodology is presented.
This article examines the application of the Ishikawa and Bow-Tie techniques to identify root causes in a real-life case of a passenger injured while boarding a suburban train, analyzes the effectiveness of existing control measures (barriers), and proposes new ones. The issue of enhancing passenger safety in suburban railway transport remains relevant despite the general downward trend in the number of railway incidents in Ukraine. Following the signing of the Association Agreement with the European Union, Ukraine undertook obligations to implement European standards and practices, including those related to risk assessment and the investigation of railway incidents as part of the overall risk evaluation process. However, the techniques for identifying root causes, as defined in ISO 31010:2019, have not yet been implemented in forensic expert practice or official and technical investigations. A real railway incident that was previously subject to forensic examination, a case of passenger injury during boarding of an electric multiple unit train, is analyzed in this study. Root causes and the central event, the trapping of the passenger's limb by the train doors, were identified using techniques provided by ISO 31010:2019. The paper outlines the application areas for these techniques and presents a comparative analysis. Potential consequences of the central event were examined, along with the existing barriers between root causes and the central event, and between the central event and its consequences. New barriers aimed at mitigating the effects of the central event are proposed.
The scientific novelty of the research lies in the application of European approaches to root cause identification of railway incidents, which had not previously been applied in domestic expert or investigative practice. The practical value of this study is in demonstrating the introduction of new root cause analysis techniques into forensic practice and certified forensic methodologies. Further research should focus on applying ISO 31010:2019-compliant root cause analysis techniques to other types of railway incidents, such as derailments, rolling stock collisions, and strikes involving vehicles or pedestrians, as well as on the use of additional techniques outlined in ISO 31010:2019 to enhance railway safety.
Background: The transportation of petroleum products via multimodal logistics systems is a complex process subject to operational inefficiencies and elevated risk exposure. The efficient and resilient transportation of petroleum products increasingly depends on multimodal logistics systems, where operational risks and process inefficiencies can significantly impact safety and performance. This study addresses the research question of how an integrated risk-based and workflow-driven approach can enhance the management of oil products logistics in complex port environments. Methods: A dual methodological framework was applied at the Port of Midia, Romania, combining a probabilistic risk assessment model, quantifying incident probability, infrastructure vulnerability, and exposure, with dynamic business process modeling (BPM) using specialized software. The workflow simulation replicated real-world multimodal oil operations across maritime, rail, road, and inland waterway segments. Results: The analysis identified human error, technical malfunctions, and environmental hazards as key risk factors, with an aggregated major incident probability of 2.39%. BPM simulation highlighted critical bottlenecks in customs processing, inland waterway lock transit, and road tanker dispatch. Process optimizations based on simulation insights achieved a 25% reduction in operational delays. Conclusions: Integrating risk assessment with dynamic workflow modeling provides an effective methodology for improving the resilience, efficiency, and regulatory compliance of multimodal oil logistics operations. This approach offers practical guidance for port operators and contributes to advancing risk-informed logistics management in the petroleum supply chain.
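The probability aggregation described in this abstract can be sketched in minimal form. The sketch below is an assumption-laden illustration, not the authors' model: it treats individual incident contributors as independent events and scores risk as the product of likelihood, vulnerability, and exposure (the three factors the study names).

```python
def aggregate_incident_probability(event_probs):
    """Probability that at least one of several independent incident
    contributors occurs: 1 minus the product of the individual
    non-occurrence probabilities. Independence is an assumption."""
    p_none = 1.0
    for p in event_probs:
        p_none *= 1.0 - p
    return 1.0 - p_none


def risk_score(probability, vulnerability, exposure):
    """Multiplicative risk index over the three factors named in the study."""
    return probability * vulnerability * exposure
```

With hypothetical per-segment probabilities of 1.0%, 0.8%, and 0.6%, the aggregate works out to roughly 2.4%, the same order as the 2.39% reported.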
Traditional methods for mission reliability assessment under operational testing conditions exhibit some limitations. They include coarse modeling granularity, significant parameter estimation biases, and inadequate adaptability for handling heterogeneous test data. To address these challenges, this study establishes an assessment framework using a vehicular missile launching system (VMLS) as a case study. The framework constructs phase-specific reliability block diagrams based on mission profiles and establishes mappings between data types and evaluation models. The framework integrates the maximum entropy criterion with reliability monotonic decreasing constraints, develops a covariate-embedded Bayesian data fusion model, and proposes a multi-path weight adjustment assessment method. Simulation and physical testing demonstrate that compared with conventional methods, the proposed approach shows superior accuracy and precision in parameter estimation. It enables mission reliability assessment under practical operational testing constraints while providing methodological support to overcome the traditional assessment paradigm that overemphasizes performance verification while neglecting operational capability development.
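The Bayesian data fusion step can be illustrated with the simplest conjugate case. This is a hedged sketch only (the paper's covariate-embedded model is considerably richer), showing how a reliability prior and heterogeneous pass/fail test results combine into a posterior estimate:

```python
def beta_posterior(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update: fuse a Beta(alpha, beta) reliability
    prior with observed pass/fail test outcomes."""
    return alpha + successes, beta + failures


def posterior_mean(alpha, beta):
    """Point estimate of reliability from the posterior Beta parameters."""
    return alpha / (alpha + beta)
```

Starting from a flat Beta(1, 1) prior and observing 9 successes and 1 failure gives a Beta(10, 2) posterior with mean about 0.83; further test phases can be folded in by repeating the update.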
Presented on 28 May 2025: Session 23. In 2023, the global oil, gas, and energy sector experienced 27 fatalities across 17 separate incidents, according to data from the International Association of Oil and Gas Producers. A concerning trend is the disproportionate impact on contractors, who accounted for 78% of these fatalities, continuing a pattern observed since 2019. The increasing reliance on contractors within companies’ business models creates inherent safety vulnerabilities due to less direct oversight. Several factors contribute to the rise in contractor-related incidents, including contractors undertaking increasingly higher-risk activities requiring specialised skill sets and competencies not typically held internally, coupled with budget and margin pressures, tight deadlines, high workforce turnover, disparities between contractor qualifications and capability, and differing safety cultures and leadership styles between organisations. Drawing on experience partnering with global oil, gas, and energy clients, this paper examines how to embed safety within each stage of the contractor management framework: qualification, selection, contract and planning, induction, execution, verification, and closeout. Central to this framework is strong leadership. Leaders must engage and unite teams and, importantly, empower others to do the same. This fosters an environment and culture where employees and contractors prioritise mutual care. Strong leadership cultivates a culture that drives outstanding safety performance, efficiency, and productivity; these elements are intrinsically linked.
Presented on 28 May 2025: Session 14. As an operator in the oil and gas industry developing new energy and low-carbon solutions, we recognise the importance of effective Process Safety Management (PSM) to avoid major accident and environmental events due to loss of control of hazardous substances. In support of this, Woodside embarked on an initiative in 2014 to create an integrated approach to PSM. In 2015, an internal team was assembled to review industry best practice and implement the solution with support from Joint Venture Partners’ Subject Matter Experts and DuPont Sustainable Solutions (now dss+). The Energy Institute’s Process Safety Management Framework was selected as it was a comprehensive, multi-sector and open-source framework. This program of work ran for 3 years with a focus on: (1) leadership buy-in and support; (2) incorporating process safety requirements into the integrated Woodside Management System; (3) defining Process Safety Critical Roles and associated competency requirements; (4) developing a training and coaching model; (5) consistently applying process safety risk assessment methodologies; (6) aligning Equipment and Management System Performance Standards; (7) establishing a suite of performance metrics and a governance framework; (8) learning from audits, investigations and others to improve. Now, 10 years on from the start of this initiative and following the merger with BHP’s petroleum business in 2022, we will discuss the PSM competency program element of this work.
Presented on 28 May 2025: Session 14. Across the world, process safety incidents continue to occur at an unacceptably high rate. This paper explores the application of bowtie and AcciMap analyses to perform meta-analyses of these incidents. Bowties are used to investigate what insights can be derived about causes, consequences and barrier performance. AcciMap analysis is used to investigate what insights can be derived about organisational contributions to barrier performance during incidents. Based on the findings from these analyses, recommendations are made about future directions to develop more timely and effective learning-from-incidents systems.
Presented on 28 May 2025: Session 14. This paper aims to enhance the understanding of professionals involved in the development, engineering, operation and regulation of gas processing systems, particularly with respect to carbon capture and storage (CCS), regarding gas dispersion analysis. Traditionally, knowledge in this area is derived from experience in natural gas projects and simplified simulation analyses. This study investigates gas dispersion behaviour in the presence of obstructions during accidental methane and CO2 releases using computational fluid dynamics (CFD) simulations with FLACS (FLame ACceleration Software). By modelling obstructed dispersion scenarios for both gases, the assessment examines the influence of operating conditions, facility layout and environmental factors. The study highlights similarities and differences in CO2 and methane cloud development, offering insights into their dispersion characteristics. FLACS CFD modelling provides a detailed representation of transient and steady-state cloud behaviour, including obstruction effects, compared to free-field dispersion models like ‘EFFECTS’ or ‘Phast’, which, while faster and more user-friendly, offer a more simplified analysis. This comparison underscores the advantages of high-fidelity CFD simulations in complex gas dispersion scenarios. The results indicate that simplified methodologies may fail to fully capture cloud extent and, in some cases, underestimate associated risks.
The intrinsic hazards associated with high-pressure hydrogen, combined with electromechanical interactions in hybrid architectures, pose significant challenges in predicting potential system risks during the conceptual design phase. In this paper, a risk analysis methodology integrating systems-theoretic process analysis (STPA), D-S evidence theory, and Bayesian networks (BN) is established. The approach employs STPA to identify unsafe control actions and analyze their loss scenarios. Subsequently, D-S evidence theory quantifies the likelihood of risk factors, while the BN models nodal uncertainties to construct a risk network that identifies critical risk-inducing events. This methodology provides a comprehensive risk analysis process that identifies systemic risk elements, quantifies risk probabilities, and incorporates uncertainties for quantitative risk assessment. These insights inform risk-averse design decisions for hydrogen–electric hybrid powered aircraft. A case study demonstrates the framework’s effectiveness. The approach bridges theoretical risk analysis with early-stage engineering practice, delivering actionable guidance for advancing zero-emission aviation.
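The D-S evidence step named above rests on Dempster's rule of combination, which fuses mass functions from independent sources. A minimal sketch follows; the focal elements and masses are hypothetical, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of hypotheses. Conflicting mass (empty
    intersections) is removed and the rest renormalized."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:  # non-empty intersection keeps its product mass
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:      # empty intersection is conflict mass
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}
```

Two partially uncertain sources that both lean toward "fail" reinforce each other: the combined belief in failure exceeds either source's alone.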
As the global transition to carbon neutrality accelerates, hydrogen energy has emerged as a key alternative to fossil fuels due to its potential to reduce carbon emissions. Many countries, including Korea, are constructing hydrogen refueling stations; however, safety concerns persist due to accidents caused by equipment failures and human errors. While various accident analysis models exist, the application of the root cause analysis (RCA) technique to hydrogen refueling station accidents remains largely unexplored. This study develops an RCA modeling map specifically for hydrogen refueling stations to identify not only direct and indirect causes of accidents, but also root causes, and applies it to actual accident cases to provide basic data for identifying the root causes of future hydrogen refueling station accidents. The RCA modeling map developed in this study uses accident cause investigation data from accident investigation reports over the past five years, which include information on the organizational structure and operational status of hydrogen refueling stations, as well as the RCA handbook. The primary defect sources identified were equipment defect, personal defect, and other defects. The problem categories, which were the substructures of the primary defect source “equipment defect,” consisted of four categories: the equipment design problem, the equipment installation/fabrication problem, the equipment reliability program problem, and the equipment misuse problem. Additionally, the problem categories, which were the substructures of the primary defect source “personal defect,” consisted of two categories: the company employee problem and the contract employee problem.
The problem categories, which were the substructures of the primary defect source “other defects,” consisted of three categories: sabotage/horseplay, natural phenomena, and other. Compared to existing accident investigation reports, which identified only three primary causes, the RCA modeling map revealed nine distinct causes, demonstrating its superior analytical capability. In conclusion, the proposed RCA modeling map provides a more systematic and comprehensive approach for investigating accident causes at hydrogen refueling stations, which could significantly improve safety practices and assist in identifying root causes more efficiently in future incidents.
Yong‐Shik Lee | Edward Elgar Publishing eBooks
Land transportation is a key pillar in the freight transportation system that is often faced with various operational risk events. This study aims to identify risk agents and develop mitigation strategies for transporting goods through land transportation. The House of Risk (HOR) method identifies risks, determines priority risk agents, and formulates effective mitigation strategies. Based on observations and questionnaire analysis, 14 risk events caused by 21 risk agents were identified, of which 12 agents were categorized as priority and nine as non-priority. Delays in issuing delivery notes or bills of lading (A4) were identified as the risk agent with the highest priority index of 3954.71. Of the 16 risk-handling strategies formulated, periodic inspection and maintenance of vehicles (PA2) has the highest Effectiveness to Difficulty (ETD) value of 63755.2. The results of this study provide a systematic framework for risk management in land transportation to improve the effectiveness and reliability of freight operations.
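The priority index and ETD figures above come from the two standard House of Risk quantities: the aggregate risk potential ARP_j = O_j * sum_i(S_i * R_ij) in phase 1, and the effectiveness-to-difficulty ratio ETD_k = TE_k / D_k in phase 2. A minimal sketch with illustrative inputs:

```python
def aggregate_risk_potential(occurrence, severities, correlations):
    """House of Risk phase 1: ARP_j = O_j * sum_i(S_i * R_ij) for one risk
    agent j, given the severities S_i of the risk events it drives and the
    correlation scores R_ij between agent and events."""
    return occurrence * sum(s * r for s, r in zip(severities, correlations))


def effectiveness_to_difficulty(total_effectiveness, difficulty):
    """House of Risk phase 2: ETD_k = TE_k / D_k, used to rank strategies."""
    return total_effectiveness / difficulty
```

Agents are ranked by descending ARP (often with a Pareto cutoff to split priority from non-priority agents, as the 12/9 split above suggests), and strategies by descending ETD.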
Several nuclear and radiological accidents have occurred in recent decades, bringing major changes in protocols and safety measures for specialized personnel. In Brazil, the experience of responding to the radiological accident in Goiânia/GO in 1987 showed the need for a rapid and intense mobilization of human resources across several areas of knowledge (radiological monitoring of personnel and areas, dosimetry, waste management, logistical support, social communications, among others). At that time, most of the people involved had not had the opportunity to receive even the training necessary to act in an event of that nature and magnitude. Taking the global scenario into account, the International Atomic Energy Agency (IAEA) has issued a series of documents that aim to guide its Member States toward an adequate level of preparedness to respond to emergency situations of nuclear or radiological origin. The paper presents an introduction, the methodology applied, a contextualization of nuclear accidents with emphasis on occurrences in Peru, the accident in Goiânia, and finally the preparation for and response to accidents, the Nuclear and Radiological Emergency Response System (SAER), and a conclusion.
ABSTRACT In the early stages of system design, due to the lack of detailed information, unconstrained methods are often adopted. Among these, the weighted factor allocation method is a common approach. However, this method is mainly suitable for series systems. Although some extended methods allow its application to more complex series‐parallel systems, these methods are often influenced by subjective scoring. To address this issue, we propose a system reliability allocation method based on fault tree models and optimization methods. This approach constructs a fault tree model considering the impact of common cause failures and establishes an objective function between system reliability and the reliability to be allocated. Through optimization algorithms, the minimum value of the objective function is found, and the optimal solution of this objective function is the allocation result of the reliability target. This method can be implemented in the early stages of system development and does not rely on expert experience. We demonstrate the effectiveness of this allocation method through a case study.
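The optimization-based allocation idea can be sketched under a strong simplifying assumption: a common unit reliability across a series chain of parallel blocks (the paper's fault-tree objective with common cause failures is richer). Because system reliability is monotone in the unit reliability, a bisection search finds the allocation:

```python
def system_reliability(r, structure):
    """Reliability of a series chain of parallel blocks, where structure[i]
    is the number of redundant units in block i and every unit has
    reliability r (an illustrative simplification)."""
    R = 1.0
    for n in structure:
        R *= 1.0 - (1.0 - r) ** n
    return R


def allocate(target, structure, tol=1e-10):
    """Bisection on the common unit reliability r so that the system just
    meets the target; works because system reliability is monotone in r."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if system_reliability(mid, structure) < target:
            lo = mid
        else:
            hi = mid
    return hi
```

For two single units in series and a 0.9 system target, the search recovers r = sqrt(0.9), matching the closed-form series allocation.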
This paper explores the application and evolution of Configuration Risk Management (CRM) systems in Chinese nuclear power plants, focusing on their alignment with the National Nuclear Safety Administration (NNSA)’s technical policies. It examines how CRM integrates risk management action matrices, risk limits, and monitoring tools to manage risks effectively across both second-generation plants with a higher baseline Core Damage Frequency (CDF) and third-generation plants with a lower baseline CDF. The study highlights the significant advancements in CRM system development, including the improvement of risk monitoring tools and the establishment of standardized technical guidelines, and underscores the critical role of ongoing regulatory support and CRM system enhancement to ensure the safe and sustainable operation of nuclear power plants in China, offering valuable insights for future nuclear safety management.
The chemical industry occupies a critical position in the national economy, but its complex production process leads to many risks in the chemical process system. From the system theory perspective, this study proposes a risk analysis method for chemical process systems that integrates systems-theoretic process analysis (STPA) with an improved decision-making trial and evaluation laboratory and interpretive structural modeling (DEMATEL-ISM) approach. First, the potential risk factors of the system are comprehensively identified by STPA, including the risks of equipment failure, operation error and anomalous element interaction. Then, the interactions of the risk factors are quantitatively analyzed by the improved DEMATEL-ISM method, and the threshold is determined by introducing the Maximum Mean Entropy Decrease (MMDE) algorithm, which constructs the hierarchical structure of the risk factors, revealing the critical risk points and the transmission paths. The results show that temperature sensor failure and control algorithm error are the critical cause factors, actuator failure is the critical result factor, and the risk transfer path matches the actual operation logic of the system. This study provides a scientific risk identification and analysis tool for chemical process system safety management.
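At the core of the DEMATEL step is the total-relation matrix T = D(I - D)^(-1), computed from the normalized direct-influence matrix D. A closed-form 2x2 sketch is shown below; real applications use larger matrices and a linear solver, and the matrix values here are invented:

```python
def dematel_total_relation(D):
    """Total-relation matrix T = D @ (I - D)^(-1) for a 2x2 normalized
    direct-influence matrix D, using the closed-form 2x2 inverse.
    Equivalent to summing the influence series D + D^2 + D^3 + ..."""
    (a, b), (c, d) = D
    p, q, r, s = 1 - a, -b, -c, 1 - d   # entries of I - D
    det = p * s - q * r
    inv = ((s / det, -q / det), (-r / det, p / det))
    return tuple(
        tuple(sum(D[i][k] * inv[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )
```

Row sums of T then give each factor's total dispatched influence and column sums its total received influence, which is what DEMATEL uses to separate cause factors from result factors.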
This study presents a metric selection framework and a normalization method for the quantitative assessment of cyber resilience, with a specific focus on availability as a core dimension. To develop a generalizable evaluation model, service types from 1124 organizations were categorized, and candidate metrics applicable across diverse operational environments were identified. Ten quantitative metrics were derived based on five core selection criteria—objectivity, reproducibility, scalability, practicality, and relevance to resilience—while adhering to the principles of mutual exclusivity and collective exhaustiveness. To validate the framework, two availability-oriented metrics—Transactions per Second (TPS) and Connections per Second (CPS)—were empirically evaluated in a simulated denial-of-service environment using a TCP SYN flood attack scenario. The experiment included three phases: normal operation, attack, and recovery. An Area Under the Curve (AUC)-based Normalized Resilience Index (NRI) was introduced to quantify performance degradation and recovery, using each organization’s Recovery Time Objective (RTO) as a reference baseline. This approach facilitates objective, interpretable comparisons of resilience performance across systems with varying service conditions. The findings demonstrate the practical applicability of the proposed metrics and normalization technique for evaluating cyber resilience and underscore their potential in informing resilience policy development, operational benchmarking, and technical decision-making.
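An AUC-based resilience index of the kind described can be sketched as follows. This is an illustrative time-window version; the study's exact normalization, anchored to each organization's RTO, may differ:

```python
def normalized_resilience_index(times, performance, baseline):
    """AUC of performance relative to baseline over the observed window,
    via the trapezoidal rule, divided by the window length. A value of
    1.0 means no observable degradation; lower values mean deeper or
    longer service loss. Above-baseline samples are capped at 1.0."""
    ratio = [min(p / baseline, 1.0) for p in performance]
    auc = 0.0
    for i in range(1, len(times)):
        auc += 0.5 * (ratio[i] + ratio[i - 1]) * (times[i] - times[i - 1])
    return auc / (times[-1] - times[0])
```

A TPS trace that collapses to zero mid-attack and fully recovers by the end of the window scores 0.5 under this definition, halfway between total loss and unimpaired service.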
Caj Frostell | Edward Elgar Publishing eBooks
Tool combines analysis of CRUD deposition, boron concentrations and neutron flux to assess safety risks in nuclear reactors as eroded products deposit on fuel rods.
Abstract Oil refineries operate in environments of high pressure and temperature. Therefore, maintaining the availability of these assets is a challenge to the maintenance teams. The paper deals with the development of maintenance key performance indicators (KPIs), considering both leading and lagging (post-failure) metrics for maintenance performance. Traditionally, these KPIs are looked upon in isolation; possible relationships between them are not considered. The present study investigated the relationships between various lagging maintenance KPIs relating production rate to equipment availability. The study was conducted on the refinery crude unit, which is generally considered one of the most important parts of any oil refinery. A regression model was developed that empirically relates equipment availability with the production rate. This was further extended to an empirical equation correlating equipment availability and downtime duration with various downtime factors. The results indicate that the availability of equipment has a high correlation with the production rate. This relationship is driven by factors of downtime, which have been sorted according to duration and type. These equations greatly support the management decisions in the development of maintenance strategies. In addition, the research quantifies the costs related to each factor of downtime, which can be applied in formulating future maintenance strategies.
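The availability-to-production regression can be illustrated with a one-predictor least-squares fit. This is a generic sketch, not the authors' fitted model or coefficients:

```python
def ols_fit(x, y):
    """Ordinary least-squares fit of the line y = a + b*x, e.g. relating
    equipment availability (x) to production rate (y)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope: production change per unit availability
    a = my - b * mx        # intercept
    return a, b
```

The same machinery extends to the multi-factor downtime equation by stacking the downtime factors as additional predictors.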
Accurately forecasting emergency material demand during the initial stages of disaster response is challenging due to communication disruptions and data scarcity. This study proposes a hybrid model integrating regression analysis and intelligent analysis to estimate casualties and predict emergency supply needs indirectly. A case study of five earthquake-affected villages validates the model, using building collapse rates and population data to calculate casualties and determine the demand for essential supplies, including food, water, medicine, and tents. The findings demonstrate that the proposed approach effectively addresses the “black box” condition by utilizing correction factors for population density, disaster preparedness, and emergency response capacity, providing a structured framework for rapid and accurate demand forecasting in disaster scenarios.
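The correction-factor approach can be sketched as below. The functional form, factor names, and all numbers are hypothetical, chosen only to mirror the factors the abstract names (population density, preparedness, response capacity):

```python
def estimate_casualties(population, collapse_rate, density_factor=1.0,
                        preparedness_factor=1.0, response_factor=1.0):
    """Illustrative casualty estimate: exposed population scaled by the
    building collapse rate and the three correction factors named in the
    study. The multiplicative form is an assumption."""
    return (population * collapse_rate
            * density_factor * preparedness_factor * response_factor)


def supply_demand(casualties, per_capita):
    """Demand for one supply category given per-casualty consumption."""
    return casualties * per_capita
```

A village of 1000 with a 20% collapse rate, densely settled (1.1) but well-prepared (0.9), yields roughly 198 estimated casualties, from which food, water, medicine, and tent demands follow per category.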
Purpose Specific degradation modes like corrosion shorten the service life of equipment. However, using existing corrosion modeling approaches is difficult because the necessary variables are rarely collected, and traditional models do not readily elucidate service life impacts. There is a need for using more widely available data to measure the effects of such degradation mechanisms on the service life of facility components. This insight can improve the ability to plan repairs or develop effective mitigation programs. Design/methodology/approach Traditional corrosion models require features like time-of-wetness or ion concentration that are difficult to collect and associate with in situ equipment. This study instead utilized more widely available maintenance and inspection text data. Natural language processing techniques group the components into subpopulations of similar degradation modes. Reliability models for the subpopulations infer the impact of distress mechanisms on the service life of components. Findings Corrosion effects of building enclosures reduced the predictive mean time to failure by about 12 years. This result was validated with a statistically significant t-test and Bayesian analysis. Similar results were demonstrated on other component types in electrical, mechanical and structural systems. These findings help identify specific risks that may be targeted for mitigation policies without collecting degradation-specific data features. Originality/value The traditional approach requires collection of physical reliability modeling features like ion concentration or time of wetness, which are rarely collected for facility components.
The developed model instead relies on more widely available maintenance text data to measure the impact of corrosion on facility equipment service life.
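The t-test validation mentioned in the findings can be illustrated with Welch's two-sample statistic, here applied to hypothetical time-to-failure samples from a corroded and a non-corroded subpopulation:

```python
def welch_t_statistic(a, b):
    """Welch's two-sample t statistic (unequal variances), e.g. comparing
    mean time-to-failure between a corroded and a non-corroded
    subpopulation of components."""
    na, nb = len(a), len(b)
    ma = sum(a) / na
    mb = sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)
```

A large positive statistic (compared against the t distribution with Welch-Satterthwaite degrees of freedom) indicates the non-corroded group's mean service life is significantly longer.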
Marine terminals have an important place in the global crude oil supply. Tugboats are one of the major components for marine terminals. Safe and trouble-free operation of tugboats is of critical importance. In this study, a risk analysis was performed by considering the safety, environmental and economic effects of main engine failures for a tugboat operating in a crude oil terminal. In this context, firstly, the importance levels of safety, environmental and economic criteria were determined with the fuzzy AHP method. Then, the risk ranking was carried out for 26 failure modes with the fuzzy TOPSIS method by considering the safety, environmental and economic effects together. The results showed that the most important risk factor for the marine terminal was safety, followed by economic and environmental factors. Then, the risk ranking of failure modes was performed with the fuzzy TOPSIS by considering the importance weights of the risk factors, and the riskiest failure was determined as fuel line leakage. This was followed by air filter blockage and back pressure in the exhaust system, respectively. This study provides a comprehensive risk assessment for tugboats operating in a crude oil terminal and is expected to be an important guide for the relevant stakeholders.
The failure mode and effect analysis (FMEA) method, which estimates the risk levels of systems or components solely by multiplying simple risk rating indices, faces several limitations. These include the risk of inaccurate risk level judgment and the potential for misjudgments due to human factors, both of which pose significant threats to the safe operation of aircraft. Therefore, a probabilistic linguistic (PL) risk assessment strategy based on cumulative prospect theory was proposed, combined with the technique for order preference by similarity to an ideal solution (TOPSIS). The probabilistic linguistic term values and probability values were fused through cumulative prospect theory, and a new PL measure function was introduced. The comprehensive weights of evaluation strategies were determined by calculating the relevant weights of various indicators through a synthesis of subjective expert weights and objective entropy weights. A weighted decision matrix was then constructed to determine the ranking order closest to the ideal scheme. Finally, the risk level of each failure mode was ranked according to its closeness to the ideal situation. Through case validation, the consistency of risk ranking was improved by 23.95% compared to the traditional FMEA method, the rationality of weight allocation was increased by 18.2%, and robustness was also enhanced to some extent. Compared with the traditional FMEA method, the proposed method has better rationality, applicability, and effectiveness. It can provide technical support for formulating a new generation of airworthiness documents for the risk level assessment of civil aircraft and its subsystem components.
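The limitation named above (the plain RPN product cannot distinguish failure modes with very different risk profiles) and the TOPSIS-style remedy can be sketched as follows. The criterion weights are hypothetical, and this crisp sketch omits the probabilistic linguistic and prospect-theory layers of the proposed method:

```python
import math


def rpn(s, o, d):
    """Classic FMEA risk priority number: severity x occurrence x detection."""
    return s * o * d


def weighted_closeness(s, o, d, weights=(0.5, 0.3, 0.2)):
    """TOPSIS-style closeness to the highest-risk point (10, 10, 10) on a
    1-10 scale, with criterion weights; higher means riskier. The weights
    are hypothetical placeholders for the expert/entropy synthesis."""
    point = (s, o, d)
    d_best = math.sqrt(sum(w * (x - 1) ** 2 for w, x in zip(weights, point)))
    d_worst = math.sqrt(sum(w * (10 - x) ** 2 for w, x in zip(weights, point)))
    return d_best / (d_best + d_worst)
```

Modes (9, 2, 5) and (3, 6, 5) share RPN = 90 yet carry very different severity, which the unweighted product hides; the weighted closeness separates them.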