Engineering Control and Systems Engineering

Fault Detection and Control Systems

Description

This cluster of papers focuses on the application of various data-driven and statistical techniques for process fault detection and diagnosis in industrial settings. It covers topics such as process monitoring, fault isolation, soft sensors, model-based diagnosis, and the use of machine learning in analyzing and improving industrial processes.

Keywords

Process Monitoring; Fault Detection; Data-Driven Techniques; Statistical Analysis; Soft Sensors; Model-Based Diagnosis; Industrial Processes; Multivariate Statistical Methods; Fault Isolation; Machine Learning

Introduction to fault detection and diagnosis; discrete linear systems; random variables; parameter estimation fundamentals; analytical redundancy concepts; parity equation implementation of residual generators; design for structured residuals; design for directional residuals; residual generation for parametric faults; robustness in residual generation; statistical testing of residuals; model identification for the diagnosis of additive faults; diagnosing multiplicative faults by parameter estimation.
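Pairing the outline's parity-equation chapters with something concrete: below is a minimal numerical sketch of a static parity-space residual generator for a known discrete-time model. The matrices and window length are illustrative stand-ins chosen for demonstration, not the book's own example.

    import numpy as np
    from scipy.linalg import null_space

    # Illustrative discrete-time model x[k+1] = A x + B u, y = C x + D u.
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])
    s = 2  # parity window length

    # Stacked relation over the window: Y = O x[k-s] + T U (fault/noise free),
    # with O the extended observability matrix and T the Markov-parameter Toeplitz matrix.
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
    p, m = C.shape[0], B.shape[1]
    T = np.zeros(((s + 1) * p, (s + 1) * m))
    for i in range(s + 1):
        for j in range(i + 1):
            blk = D if i == j else C @ np.linalg.matrix_power(A, i - j - 1) @ B
            T[i * p:(i + 1) * p, j * m:(j + 1) * m] = blk
    W = null_space(O.T).T  # rows w with w @ O = 0: the parity space

    def residual(y_win, u_win):
        # y_win: (s+1)*p stacked outputs, u_win: (s+1)*m stacked inputs.
        # Zero in the fault-free, noise-free case; in practice the residual
        # is compared against a threshold that accounts for noise.
        return W @ (y_win - T @ u_win)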
With the ever-increasing complexity and expense of industrial systems, there is less tolerance for performance degradation, productivity decrease, and safety hazards, which greatly stimulates the need to detect and identify any kind of potential abnormality and fault as early as possible and to implement real-time fault-tolerant operation to minimize performance degradation and avoid dangerous situations. During the last four decades, fruitful results have been reported on fault diagnosis and fault-tolerant control methods and their applications in a variety of engineering systems. This three-part survey paper aims to give a comprehensive review of real-time fault diagnosis and fault-tolerant control, with particular attention to results reported in the last decade. In this first part, fault diagnosis approaches and their applications are reviewed comprehensively from the model-based and signal-based perspectives, respectively.
Part 1, Probability and Random Variables: 1 The Meaning of Probability; 2 The Axioms of Probability; 3 Repeated Trials; 4 The Concept of a Random Variable; 5 Functions of One Random Variable; 6 Two Random Variables; 7 Sequences of Random Variables; 8 Statistics. Part 2, Stochastic Processes: 9 General Concepts; 10 Random Walk and Other Applications; 11 Spectral Representation; 12 Spectral Estimation; 13 Mean Square Estimation; 14 Entropy; 15 Markov Chains; 16 Markov Processes and Queueing Theory.
Abstract Multivariate statistical procedures for monitoring the progress of batch processes are developed. The only information needed to exploit the procedures is a historical database of past successful batches. Multiway principal component analysis is used to extract the information in the multivariate trajectory data by projecting them onto low-dimensional spaces defined by the latent variables or principal components. This leads to simple monitoring charts, consistent with the philosophy of statistical process control, which are capable of tracking the progress of new batch runs and detecting the occurrence of observable upsets. The approach is contrasted with other approaches which use theoretical or knowledge-based models, and its potential is illustrated using a detailed simulation study of a semibatch reactor for the production of styrene-butadiene latex.
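In its simplest reading, the multiway PCA recipe above amounts to unfolding the batch trajectories batch-wise and fitting an ordinary PCA. A minimal sketch under that reading, with hypothetical dimensions and synthetic data standing in for the historical database of good batches:

    import numpy as np

    # Hypothetical history: I good batches, J variables, K time points.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5, 100))          # (I, J, K)
    Xu = X.reshape(50, -1)                     # unfold to I x (J*K)
    mu, sd = Xu.mean(0), Xu.std(0) + 1e-12
    Z = (Xu - mu) / sd                         # autoscale each column

    # PCA via SVD; keep a few latent variables.
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    R = 3
    P = Vt[:R].T                               # loadings, (J*K) x R

    def spe(new_batch):
        # Squared prediction error of a finished batch against the model;
        # compare against a limit estimated from the historical SPE values.
        z = (new_batch.reshape(-1) - mu) / sd
        e = z - P @ (P.T @ z)
        return float(e @ e)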
Abstract Many statistical models, and in particular autoregressive moving average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values, the resulting sequence is referred to as the "residuals," which can be regarded as estimates of the errors. If the appropriate model has been chosen, there will be zero autocorrelation in the errors. In checking adequacy of fit it is therefore logical to study the sample autocorrelation function of the residuals. For large samples the residuals from a correctly fitted model resemble very closely the true errors of the process; however, care is needed in interpreting the serial correlations of the residuals. It is shown here that the residual autocorrelations are, to a close approximation, representable as a singular linear transformation of the autocorrelations of the errors, so that they possess a singular normal distribution. Failing to allow for this results in a tendency to overlook evidence of lack of fit. Tests of fit and diagnostic checks are devised which take these facts into account.
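The portmanteau statistic that grew out of this analysis is easy to compute. Below is a sketch of the Ljung-Box refinement of the Box-Pierce idea, with the degrees of freedom reduced by the number of fitted ARMA parameters as the singularity argument above requires; the function name and defaults are illustrative.

    import numpy as np
    from scipy.stats import chi2

    def ljung_box(resid, max_lag=10, n_params=0):
        # Portmanteau check on residual autocorrelations; a small p-value
        # indicates remaining serial correlation, i.e., lack of fit.
        r = np.asarray(resid) - np.mean(resid)
        n = len(r)
        denom = r @ r
        acf = np.array([(r[k:] @ r[:-k]) / denom for k in range(1, max_lag + 1)])
        Q = n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, max_lag + 1)))
        dof = max_lag - n_params   # lose one degree of freedom per fitted parameter
        return Q, chi2.sf(Q, dof)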
This book is downloadable from http://www.irisa.fr/sisthem/kniga/. Many monitoring problems can be stated as the problem of detecting a change in the parameters of a static or dynamic stochastic system. The main goal of this book is to describe a unified framework for the design and the performance analysis of algorithms for solving these change detection problems. The book also contains the key mathematical background necessary for this purpose. Finally, links with the analytical redundancy approach to fault detection in linear systems are established. We call an abrupt change any change in the parameters of the system that occurs either instantaneously or at least very fast with respect to the sampling period of the measurements. Abrupt changes by no means refer to changes of large magnitude; on the contrary, in most applications the main problem is to detect small changes. Moreover, in some applications the early warning of small, and not necessarily fast, changes is of crucial interest in order to avoid the economic or even catastrophic consequences that can result from an accumulation of such small changes. For example, small faults arising in the sensors of a navigation system can result, through the underlying integration, in serious errors in the estimated position of the plane. Another example is the early warning of small deviations from the normal operating conditions of an industrial process. The early detection of slight changes in the state of the process allows one to plan in a more adequate manner the periods during which the process should be inspected and possibly repaired, and thus to reduce exploitation costs.
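The CUSUM detector is the workhorse algorithm of this change-detection framework for small mean shifts. A minimal two-sided sketch follows, with the shift size delta and threshold h treated as illustrative tuning constants.

    import numpy as np

    def cusum(x, mu0, delta, h):
        # Two-sided CUSUM for a shift of size delta in the mean of x,
        # nominal mean mu0, alarm threshold h (in the units of x).
        gp = gm = 0.0
        for k, xi in enumerate(x):
            gp = max(0.0, gp + (xi - mu0) - delta / 2)   # upward-shift statistic
            gm = max(0.0, gm - (xi - mu0) - delta / 2)   # downward-shift statistic
            if gp > h or gm > h:
                return k                                  # first alarm time
        return None                                       # no change detected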
Estimation of the probability of an event as a function of several independent variables. Strother H. Walker and David B. Duncan (Johns Hopkins University). Biometrika, Volume 54, Issue 1-2, June 1967, Pages 167-179. https://doi.org/10.1093/biomet/54.1-2.167
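Walker and Duncan's problem, estimating an event probability from several covariates, is what is now called logistic regression. A minimal modern sketch on synthetic data (all names and numbers are hypothetical):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical data: estimate P(event) from three independent variables.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=200) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    p_event = model.predict_proba(X[:5])[:, 1]   # estimated event probabilities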
In this paper, the three principal control effects found in present controllers are examined and practical names and units of measurement are proposed for each effect. Corresponding units are proposed for a classification of industrial processes in terms of the two principal characteristics affecting their controllability. Formulas are given which enable the controller settings to be determined from the experimental or calculated values of the lag and unit reaction rate of the process to be controlled. These units form the basis of a quick method for adjusting a controller on the job. The effect of varying each controller setting is shown in a series of chart records. It is believed that the conceptions of control presented in this paper will be of assistance in the adjustment of existing controller applications and in the design of new installations.
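The reaction-curve settings usually quoted from this paper can be packaged as a small helper. The rules below are the commonly cited Ziegler-Nichols open-loop formulas in terms of the unit reaction rate R and lag L; treat the constants as the textbook rendering rather than a transcription of the original charts.

    def zn_settings(R, L):
        # Classic Ziegler-Nichols reaction-curve rules: R is the unit reaction
        # rate (slope of the open-loop step response per unit input), L the lag.
        return {
            "P":   {"Kp": 1.0 / (R * L)},
            "PI":  {"Kp": 0.9 / (R * L), "Ti": L / 0.3},
            "PID": {"Kp": 1.2 / (R * L), "Ti": 2.0 * L, "Td": 0.5 * L},
        }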
Often in control design it is necessary to construct estimates of state variables which are not available by direct measurement. If a system is linear, its state vector can be approximately reconstructed by building an observer which is itself a linear system driven by the available outputs and inputs of the original system. The state vector of an n-th order system with m independent outputs can be reconstructed with an observer of order n - m. In this paper it is shown that the design of an observer for a system with m outputs can be reduced to the design of m separate observers for single-output subsystems. This result is a consequence of a special canonical form developed in the paper for multiple-output systems. In the special case of reconstruction of a single linear functional of the unknown state vector, it is shown that a great reduction in observer complexity is often possible. Finally, the application of observers to control design is investigated. It is shown that an observer's estimate of the system state vector can be used in place of the actual state vector in linear or nonlinear feedback designs without loss of stability.
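Numerically, an observer gain is routinely obtained through duality with state-feedback pole placement. A minimal sketch with an illustrative second-order system (this is the standard full-order construction, not the paper's reduced-order canonical-form design):

    import numpy as np
    from scipy.signal import place_poles

    # Illustrative system x' = A x + B u, y = C x; choose the observer gain L
    # by placing the poles of A - L C via duality (place on A.T, C.T).
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    C = np.array([[1.0, 0.0]])
    L = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T

    # Observer: xhat' = A xhat + B u + L (y - C xhat); the estimation error
    # decays with the placed eigenvalues of A - L C.
    print(np.linalg.eigvals(A - L @ C))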
Simple descriptive techniques; probability models for time series; estimation in the time domain; forecasting; stationary processes in the frequency domain; spectral analysis; bivariate processes; linear systems; state-space models and the Kalman filter; non-linear models; multivariate time series modelling; some other topics.
The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as a procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed, and a new estimate, the minimum information theoretic criterion (AIC) estimate (MAICE), designed for the purpose of statistical identification, is introduced. When there are several competing models, the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC, defined by AIC = (-2) log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedures. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.
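A minimal MAICE-style order selection for autoregressive models, using the Gaussian least-squares form of AIC (equivalent, up to an additive constant, to -2 log likelihood + 2k); the data and candidate orders are illustrative.

    import numpy as np

    def ar_aic(x, p):
        # Fit AR(p) by least squares and return the Gaussian AIC (up to a constant).
        n = len(x)
        Y = x[p:]
        X = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        rss = np.sum((Y - X @ beta) ** 2)
        m = len(Y)
        return m * np.log(rss / m) + 2 * (p + 1)   # +1 for the noise variance

    # Toy AR(2) series; MAICE picks the order minimizing AIC.
    rng = np.random.default_rng(2)
    e = rng.normal(size=500)
    x = np.zeros(500)
    for t in range(2, 500):
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]
    best_p = min(range(1, 6), key=lambda p: ar_aic(x, p))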
Recently, to ensure the reliability and safety of modern large-scale industrial processes, data-driven methods have been receiving considerably increasing attention, particularly for the purpose of process monitoring. However, great challenges are also met under different real operating conditions by using the basic data-driven methods. In this paper, widely applied data-driven methodologies suggested in the literature for process monitoring and fault diagnosis are surveyed from the application point of view. The major task of this paper is to sketch a basic data-driven design framework with necessary modifications under various industrial operating conditions, aiming to offer a reference for industrial process monitoring on large-scale industrial processes.
Abstract This paper provides an overview and analysis of statistical process monitoring methods for fault detection, identification and reconstruction. Several fault detection indices in the literature are analyzed and unified. Fault reconstruction for both sensor and process faults is presented which extends the traditional missing value replacement method. Fault diagnosis methods that have appeared recently are reviewed. The reconstruction-based approach and the contribution-based approach are analyzed and compared with simulation and industrial examples. The complementary nature of the reconstruction- and contribution-based approaches is highlighted. An industrial example of polyester film process monitoring is given to demonstrate the power of the contribution- and reconstruction-based approaches in a hierarchical monitoring framework. Finally we demonstrate that the reconstruction-based framework provides a convenient way for fault analysis, including fault detectability, reconstructability and identifiability conditions, resolving many theoretical issues in process monitoring. Additional topics are summarized at the end of the paper for future investigation.
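One concrete instance of the reconstruction-based analysis reviewed here: reconstructing along each variable direction gives a closed-form reconstruction-based contribution for the SPE index. A sketch assuming a PCA model with loading matrix P (the per-variable formula is the standard one from this line of work, not code from the paper):

    import numpy as np

    def rbc_spe(x, P):
        # Reconstruction-based contribution of each variable to the SPE index:
        # RBC_i = (xi_i' M x)^2 / (xi_i' M xi_i) with M = I - P P'.
        # Assumes no variable lies entirely inside the model subspace (M_ii > 0).
        n = len(x)
        M = np.eye(n) - P @ P.T
        Mx = M @ x
        return Mx**2 / np.diag(M)   # largest entry points to the faulty variable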
1. The Basics; 2. Parameter Estimation I; 3. Parameter Estimation II; 4. Model Selection; 5. Assigning Probabilities; 6. Non-parametric Estimation; 7. Experimental Design; 8. Least-Squares Extensions; 9. Nested Sampling; 10. Quantification; Appendices; Bibliography.
Abstract This book presents a comprehensive treatment of the state space approach to time series analysis. The distinguishing feature of state space time series models is that observations are regarded as being made up of distinct components such as trend, seasonal, regression elements and disturbance elements, each of which is modelled separately. The techniques that emerge from this approach are very flexible. Part I presents a full treatment of the construction and analysis of linear Gaussian state space models. The methods are based on the Kalman filter and are appropriate for a wide range of problems in practical time series analysis. The analysis can be carried out from both classical and Bayesian perspectives. Part I then presents illustrations for real series, and exercises are provided for a selection of chapters. Part II discusses approximate and exact approaches for handling broad classes of non-Gaussian and nonlinear state space models. Approximate methods include the extended Kalman filter and the more recently developed unscented Kalman filter. The book shows that exact treatments become feasible when simulation-based methods such as importance sampling and particle filtering are adopted. Bayesian treatments based on simulation methods are also explored.
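The scalar local level model is the simplest member of the linear Gaussian family this book treats, and its Kalman filter fits in a few lines. A minimal sketch with illustrative initial conditions:

    import numpy as np

    def local_level_filter(y, q, r, a0=0.0, p0=1e6):
        # Kalman filter for the local level model y_t = mu_t + eps_t,
        # mu_t = mu_{t-1} + eta_t; q and r are the state and observation variances.
        a, p = a0, p0
        out = []
        for yt in y:
            p = p + q                      # predict
            k = p / (p + r)                # Kalman gain
            a = a + k * (yt - a)           # update with the innovation
            p = (1 - k) * p
            out.append(a)
        return np.array(out)               # filtered level estimates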
By projecting the data into a lower-dimensional space that accurately characterizes the state of the process, dimensionality reduction techniques can greatly simplify and improve process monitoring procedures. Principal component analysis (PCA) is such a dimensionality reduction technique. It produces a lower-dimensional representation in a way that preserves the correlation structure between the process variables, and is optimal in terms of capturing the variability in the data.
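A minimal sketch of the PCA projection just described, retaining enough components to explain a chosen fraction of the variance; the data and the 90% cut-off are illustrative.

    import numpy as np

    X = np.random.default_rng(3).normal(size=(500, 8))   # hypothetical process data
    Z = (X - X.mean(0)) / X.std(0)                       # autoscale
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    var_explained = S**2 / np.sum(S**2)
    R = int(np.searchsorted(np.cumsum(var_explained), 0.9) + 1)  # ~90% of variance
    T = Z @ Vt[:R].T   # scores: the reduced representation used for monitoring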
This paper examines the combined and separate effects of geopolitical risk, economic policy uncertainty, and oil prices on the stock market within a multivariate time-frequency framework, focusing on Saudi Arabia as an oil-rich country. We implement the wavelet local multiple correlation approach using monthly data from January 2000 to December 2024. Our results reveal that oil prices, geopolitical risk, and economic uncertainty are key drivers of Saudi market behavior. The joint and individual effects vary significantly across time scales and frequencies. Increasing uncertainty surrounding economic policies and rising geopolitical tensions in the region have intensified the impact of oil price movements on the Saudi market. These findings have several implications for portfolio managers, foreign investors, and policymakers. When analyzing and forecasting stock returns, portfolio managers should consider oil prices, geopolitical risk, and changes in economic policy uncertainty.
ABSTRACT An aero-engine usually works in time-varying operating conditions. The coupling effect between the gas path fault mode and the operating condition makes the characteristics of the fault mode vary with the operating condition, which increases the difficulty of fault diagnosis of the gas path system. Moreover, class imbalance commonly exists between normal data and fault mode data. To address these challenges, we propose a multi-stage semi-supervised fault diagnosis method for the gas path system, considering time-varying operating conditions and class imbalance. At the first stage, a multilayer perceptron (MLP) is constructed to identify the operating condition based on altitude, Mach number, fuel flow, and high- and low-pressure rotor speeds. At the second stage, an auxiliary classifier Wasserstein generative adversarial network with gradient penalty (ACWGAN-GP) model is developed to create fault samples during the self-training process. All the data are divided into three categories, that is, pressure, speed, and temperature data, which are used to train fault diagnosis models, respectively. The Dempster-Shafer evidence theory is employed for information fusion at the decision level. The effectiveness of our proposed method is validated on data generated by a gas turbine simulation program.
Roll-to-roll (R2R) manufacturing processes demand precise control of web or yarn velocity and tension, alongside robust mechanisms for handling system failures. This paper presents an integrated approach combining high-performance control with reliable fault detection for an experimental R2R system. A model-based cascade control strategy is designed, incorporating system identification, radius compensation for varying roll diameters, and a Kalman filter to mitigate load sensor noise, ensuring accurate regulation of yarn velocity and tension under normal operating conditions. In parallel, a data-driven fault detection layer uses Gaussian Process Regression (GPR) models, trained offline on healthy operating data, to predict yarn tension and motor speeds. During operation, discrepancies between measured and GPR-predicted values that exceed predefined thresholds trigger an immediate shutdown of the system, preventing material loss and equipment damage. Experimental trials demonstrate tension regulation within ±0.02 N and velocity errors below ±5 rad/s across varying roll diameters, while yarn-break and motor-fault scenarios are detected within a single sampling interval (<100 milliseconds) with zero false alarms. This study validates the integrated system's capability to enhance both the operational precision and resilience of R2R processes against critical failures.
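The fault-detection layer described, a GPR model trained on healthy data plus a fixed discrepancy threshold, can be sketched as follows. The inputs, kernel, and threshold here are illustrative stand-ins, not the authors' configuration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Train on healthy operation only: predict tension from speed-related inputs.
    rng = np.random.default_rng(4)
    X_healthy = rng.uniform(0, 1, size=(200, 2))    # e.g. motor speeds (hypothetical)
    y_healthy = X_healthy @ np.array([0.8, 0.3]) + 0.01 * rng.normal(size=200)

    gpr = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X_healthy, y_healthy)

    def fault_alarm(x_now, tension_measured, threshold=0.05):
        # Large measured-minus-predicted discrepancy trips the shutdown.
        pred = gpr.predict(x_now.reshape(1, -1))[0]
        return abs(tension_measured - pred) > threshold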
Determining the physical parameters of pulsating variable stars such as RR Lyrae is essential for understanding their internal structure, pulsation mechanisms, and evolutionary state. In this study, we present a machine learning framework that uses feedforward artificial neural networks (ANNs) to infer stellar parameters, namely mass (M), luminosity (log(L/L⊙)), effective temperature (log(Teff)), and metallicity (Z), directly from Transiting Exoplanet Survey Satellite (TESS) light curves. The network is trained on a synthetic grid of RRab light curves generated from hydrodynamical pulsation models spanning a broad range of physical parameters. We validate the model using synthetic self-inversion tests and demonstrate that the ANN accurately recovers the input parameters with minimal bias. We then apply the trained model to RRab stars observed by TESS. The observed light curves are phase-folded, corrected for extinction, and passed through the ANN to derive physical parameters. Based on these results, we construct an empirical period-luminosity-metallicity (PLZ) relation: log(L/L⊙) = (1.458 ± 0.028) log(P/days) + (-0.068 ± 0.007) [Fe/H] + (2.040 ± 0.007). This work shows that ANN-based light-curve inversion offers an alternative method for extracting stellar parameters from single-band photometry. The approach can be extended to other classes of pulsators such as Cepheids and Miras.
Current differential protection, using either phase currents or negative-sequence components, is commonly applied for the sensitive protection of power transformers. However, this method proves insufficient for autotransformers, particularly when their tertiary winding is fully loaded, as demonstrated in this paper. To address this limitation, the authors' previously proposed negative-sequence integral approach for power transformers has been adapted and evaluated for three-winding autotransformers. The results show that this integral protection offers significantly higher sensitivity than current differential schemes while maintaining security during external faults with current transformer saturation.
Abstract This paper proposes a fault-tolerant predictive tracking control strategy for the industrial process based on a high-order fully actuated (HOFA) system method. First, a novel system representation method is employed to model the industrial process as a HOFA system. Subsequently, a fault-compensated HOFA predictive fault-tolerant control scheme is introduced, which includes two components: HOFA feedback stabilization and HOFA model predictive tracking control. Within this framework, an incremental predictive model is developed to replace the reduced-order prediction model by employing a Diophantine equation. The cost function, which incorporates tracking performance, is subsequently minimized using multi-step output prediction. Additionally, sufficient conditions for the stability and tracking performance of the closed-loop HOFA system are derived. The advantage of this approach lies in its ability to reduce system dimensionality while effectively eliminating the impact of faults on system stability through the introduction of an observer-based compensation concept. This ensures stable operation under fault conditions, or even operation unaffected by faults. Finally, the effectiveness and reliability of the proposed method are validated through a case study involving a nonlinear reactor.
Abstract To address the loss of information between layers when mining spatiotemporal features of process data, and the question of whether pseudo-labels should be generated for unlabelled data, this paper proposes a dynamically scaling spatio-temporal semi-supervised adaptive network based soft sensor for industrial processes (DSST-SSAN). In order to extract local temporal correlation features and decrease the information loss between layers, a dynamic scaled spatio-temporal feature module is constructed: local prediction models between the current input and the hidden layer features are built in each hidden layer of the long short-term memory (LSTM) network, the prediction deviations of the multiple local models are calculated, and dynamic scaled factors are constructed to update the corresponding hidden layer features. Spatial features are extracted in parallel using a graph attention network (GAT), and the spatio-temporal features are obtained by fusion to establish a soft sensor model. To address the lack of labelled modelling data, a semi-supervised thresholding mechanism is proposed to filter pseudo-labelled data for adaptive data accumulation. The threshold is constructed using the likelihood root mean square of the root mean square error (RMSE) and mean absolute error (MAE) of the labelled data, which determines whether the unlabelled data need pseudo-labels, so that modelling data can be accumulated and the model updated. The effectiveness of the proposed method is confirmed by simulation experiments on two industrial cases, a debutanizer column and a sulphur recovery unit.
Abstract The shipboard atomic interferometric gravimeter is a high-precision gravity measurement instrument applied broadly in geophysics, ocean exploration, inertial navigation, and other precision measurement physics fields. However, environmental vibration noise has been one of the most serious factors limiting its performance. Based on the above, this paper develops an empirical wavelet transform technique that optimizes the minimum value of the trend boundary and utilizes it to extract features from vibration signals of shipboard atomic gravimeters. The boundary of the segmented spectrum is determined by calculating the minimum value of the trend component, and the key spectral negentropy is employed to filter each initial frequency band to extract the frequency band containing vibration information. The left and right demarcations of the extracted frequency bands are adopted as the final boundaries for reconstruction, after which the components containing vibration information can be obtained. This method improves how the empirical wavelet transform segments the boundary and reduces the problem of too many invalid components. Validation on vibration interference signals from simulated signals and shipboard atomic gravimeters in dynamic environments showed that our method could accurately decompose the signal components and improve the decomposition efficiency. It provides a theoretical basis for the design of the active vibration isolation system of shipboard atomic gravimeters.
A single type of sensor signal cannot fully represent the operational status of mechanical equipment, leading to incomplete state characterization and inaccurate diagnostics. This paper proposes an innovative fault diagnosis method based on convolutional autoencoders combined with multivariate information fusion to accurately identify the overall health status of bearings by analyzing various sensor data. Our approach leverages convolutional autoencoders to effectively integrate heterogeneous sensor data from multiple sources, including vibration and sound signals, with data augmentation and normalization techniques for preprocessing, thereby improving the model's generalization capability and accuracy. Furthermore, the integration of K-means clustering and a sparse attention mechanism enables precise recognition of critical fault features. The model's effectiveness is validated through comprehensive performance evaluation using confusion matrices and visualization techniques. Experimental results demonstrate that our method achieves high accuracy and robustness in fault diagnosis tasks, offering a significant advancement in intelligent maintenance and fault prediction of rolling bearings by addressing the limitations of traditional methods.
With the advancement of industrial manufacturing and the shift toward high automation, machines have increasingly taken over many production tasks, greatly improving efficiency and reducing human labor. However, this also introduces new challenges, particularly the inability of machines to autonomously detect and diagnose faults. Such undetected issues may cause unexpected breakdowns, interrupting critical operations, leading to economic losses and potential safety hazards. To address this, the present study proposes a novel hybrid deep learning framework that integrates Convolutional Neural Networks (CNN) for feature extraction with a Transformer architecture for temporal modeling. The model is validated using NASA's CMAPSS dataset, a widely used benchmark that includes multi-sensor data and Remaining Useful Life (RUL) labels from aeroengines. By learning from time-series sensor data, the framework achieves accurate RUL predictions and early fault detection. Experimental results show that the model attains over 97% accuracy under both single and multiple operating conditions, highlighting its robustness and adaptability. These findings suggest the framework's potential in developing intelligent maintenance systems and contribute to the field of Prognostics and Health Management (PHM), enabling more reliable, efficient, and self-monitoring industrial systems.
The European Union's 2050 targets for decarbonization and electrification are promoting the widespread integration of heat pumps for space heating, cooling, and domestic hot water in buildings. However, their energy and environmental performance can be significantly compromised by soft faults, such as refrigerant leakage or heat exchanger fouling, which may reduce system efficiency by up to 25%, even with maintenance intervals every two years. As a result, the implementation of self-fault detection, diagnosis, and evaluation (FDDE) tools based on operational data has become increasingly important. The complexity and added value of these tools grow as they progress from simple fault detection to quantitative fault evaluation, enabling more accurate and timely maintenance strategies. Direct fault measurements are often unfeasible due to spatial, economic, or intrusiveness constraints, thus requiring indirect methods based on low-cost and accessible measurements. In such cases, overlapping fault symptoms may create diagnostic ambiguities. Moreover, the accuracy of FDDE approaches depends on the type and number of sensors deployed, which must be balanced against cost considerations. This paper provides a comprehensive review of current FDDE methodologies for heat pumps, drawing insights from the academic literature, patent databases, and commercial products. Finally, the role of artificial intelligence in enhancing fault evaluation capabilities is discussed, along with emerging challenges and future research directions.
This paper presents a method for estimating the parameters of the dynamic error signal in a measurement chain. The method is based on knowledge of the transmittance of the successive elements in the chain and on analysis of the spectrum of the processed signal. The proposed approach allows the parts of the uncertainty budget of the measurement chain related to dynamic errors to be determined for the currently processed signal, together with the expanded uncertainty associated with the dynamic error signal. The paper describes the proposed error model, the algorithm for estimating the error signal parameters, and the measurement and simulation experiments conducted to verify the effectiveness of the proposed method.
In order to enhance the management and application of fault knowledge within intelligent production lines, thereby increasing the efficiency of fault diagnosis and ensuring the stable and reliable operation of these systems, we propose a fault diagnosis methodology that leverages knowledge graphs. First, we designed an ontology model for fault knowledge by integrating textual features from various components of the production line with expert insights. Second, we employed the ALBERT–BiLSTM–Attention–CRF model to achieve named entity and relationship recognition for faults in intelligent production lines. The introduction of the ALBERT model resulted in a 7.3% improvement in the F1 score compared to the BiLSTM–CRF model. Additionally, incorporating the attention mechanism in relationship extraction led to a 7.37% increase in the F1 score. Finally, we utilized the Neo4j graph database to facilitate the storage and visualization of fault knowledge, validating the effectiveness of our proposed method through a case study on fault diagnosis in CNC machining centers. The research findings indicate that this method excels in recognizing textual entities and relationships related to faults in intelligent production lines, effectively leveraging prior knowledge of faults across various components and elucidating their causes. This approach provides maintenance personnel with an intuitive tool for fault diagnosis and decision support, thereby enhancing diagnostic accuracy and efficiency.
Prof. Mandar Padhye | International Journal of Scientific Research in Engineering and Management
In gearboxes, load fluctuations and gear defects are two major sources of vibration. Further, at times, measurement of vibration in the gearbox is not easy because of the inaccessibility in mounting the vibration transducers. For detecting different types of gear tooth faults, experimental data are taken from a single-stage gearbox setup. Vibration analysis techniques are used for detection of faults in the gear system and fluctuations in gear load. A method is presented for detecting the evolution of gear faults based on time-amplitude analysis through LabVIEW, comparing signals in the defective condition with the healthy condition through a vibration analyzer. The validation is done by taking the input signal from an accelerometer into the LabVIEW program, which calculates effective statistical parameters in the defective condition for time- and amplitude-domain analysis. The actual angular position of a missing tooth in the gearbox is also investigated using the LabVIEW program. The approach is also a helpful tool for health monitoring of gears in different conditions. Key Words: Gearbox, Vibration Analysis, Accelerometer
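The statistical parameters typically computed in such time-amplitude analyses (the abstract does not list the exact set used) include RMS, peak, crest factor, and kurtosis; a small sketch:

    import numpy as np
    from scipy.stats import kurtosis

    def vibration_features(sig):
        # Common time-domain statistics for comparing healthy vs. defective
        # gear conditions (sig: zero-mean acceleration samples).
        sig = np.asarray(sig, dtype=float)
        rms = np.sqrt(np.mean(sig**2))
        peak = np.max(np.abs(sig))
        return {
            "rms": rms,
            "peak": peak,
            "crest_factor": peak / rms,
            "kurtosis": kurtosis(sig),   # impulsive tooth faults raise kurtosis
        }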
As a critical technology for industrial system reliability and safety, machine monitoring and fault diagnostics has advanced transformatively with Large Language Models (LLMs). This paper reviews LLM-based monitoring and diagnostics methodologies, categorizing them into in-context learning, fine-tuning, retrieval-augmented generation, multimodal learning, and time-series approaches, and analyzing advances in diagnostics and decision support. It identifies bottlenecks such as limited industrial data and edge deployment issues, proposing a three-stage roadmap to highlight LLMs' potential in shaping adaptive, interpretable PHM frameworks.
Abstract In industrial production systems, rapid and accurate fault diagnosis is crucial for enhancing productivity, ensuring safety, and reducing the risk of accidents. However, since fault data is generally imbalanced, existing sampling methods are typically sensitive to noisy data and prone to generating noisy data. Meanwhile, the high dimensionality of fault data further significantly weakens fault diagnosis performance. To solve the above problems, this paper proposes an improved hybrid sampling method based on a diffusion model with two-stage noise processing, combined with counterfactual feature optimization, for fault diagnosis (CITL-PM-CITL-CE). Firstly, we introduce a confidence evaluation mechanism to optimize the Tomek Links method and propose a data-cleaning approach named CITL. Based on CITL and the diffusion model, a hybrid sampling method (CITL-PM-CITL) with a two-stage noise processing mechanism is designed to create a balanced dataset and reduce noisy data. Secondly, to eliminate redundant features and improve data quality, a feature optimization method based on counterfactual explanations is employed for dimensionality reduction. Finally, fault diagnosis is performed on the optimized dataset.
Abstract A new design of observer-based compensators for robust asymptotic rejection of persistent disturbances is presented. A systematic method to prevent controller windup caused by input saturation is the observer technique, i.e., driving the observer by the constrained plant input. When using the internal model principle this technique does not stabilize the compensator during saturation. Therefore, several additional measures have been developed to overcome this problem. The approach presented here follows a completely different path. It starts with a compensator of sufficient order, which is first designed for pole placement. By subsequently modifying the feedback of the observer states and, if necessary, of the system outputs, the conditions for a robust disturbance rejection are satisfied. Thus, the observer technique directly assures windup prevention.
The thermal data of the hypersonic wind tunnel field accurately reflect the aerodynamic performance and key parameters of the aircraft model. However, the prediction of the temperature in hypersonic wind tunnels suffers from problems such as large delay, nonlinearity, and multivariable coupling. In order to reduce the influence of temperature changes and improve the accuracy of temperature prediction in the field control of hypersonic wind tunnels, this paper first combines kernel principal component analysis (KPCA) with phase space reconstruction to preprocess the temperature data set of wind tunnel tests, and the processed data set is used as the input of the temperature-prediction model. Secondly, support vector regression is applied to the construction of the temperature prediction model for the hypersonic wind-tunnel temperature field. Meanwhile, aiming at the problem of difficult parameter-combination selection in support vector regression machines, an Improved African Vulture Optimization Algorithm (IAVOA) based on adaptive chaotic mapping and local search enhancement is proposed to optimize the parameter combination of the support vector regression machine. The improved algorithm was compared with the traditional AVOA, Particle Swarm Optimization (PSO), and Grey Wolf Optimizer (GWO) algorithms on 10 basic test functions, and its superiority in optimizing the parameters of the support vector regression machine was verified on actual temperature data from wind-tunnel field control.
Abstract Today, with the progress of the automotive industry and its ancillary industries, as well as the progress of the communication and telecommunications industries, and as a result of the growth and development of ITS, a new approach has emerged in automotive companies and their accessory manufacturers to provide service to vehicles in the event of breakdowns and problems. In particular, a method based on receiving information from the vehicle with the help of the OBD II device has become more prominent. In most existing methods, the goal is to transfer the vehicle data collected from the sensors and other parts of the vehicle by the vehicle ECU to a server or application program using OBD II in order to use this data to detect vehicle defects. Most of the research presented has focused on one or a few parameters in the ECU, for example, parameters that indicate correct fuel consumption, so that defects affecting the vehicle's polluting properties can be fixed to help protect the environment. In this article, the goal is to present an architecture for a vehicle health monitoring center based on connected vehicles, intended to diagnose vehicle defects and ensure the health of the vehicles connected to this center. To this end, an Android application has been developed that connects over Bluetooth to the OBD II device plugged into the vehicle, transfers the data the device reads from the ECU to the user's mobile phone, stores it on the phone, and sends it to the monitoring center server, where the data is analyzed to detect possible vehicle damage and initiate repairs. Our main goal is to design an architecture for the monitoring center. In an evaluation against existing methods, the proposed method proved more complete in terms of quality, and its implementation was also easier and more cost-effective. In the quantitative evaluation of data obtained from testing the proposed method on a Peugeot 206, interesting results were obtained regarding the continuity of the parameters and the relationships between them, which can be used to introduce an appropriate model of vehicle behavior. In addition to this model of the behavior of a healthy vehicle, a series of relationships derived from analyzing the data received from the vehicle were presented.
Fault diagnosis and identification are important goals in ensuring the safe production of industrial processes. This article proposes a data reconstruction method based on Center Nearest Neighbor (CNN) theory for fault diagnosis and abnormal variable identification. Firstly, the k-nearest neighbor (k-NN) method is used to monitor the process and determine whether there is a fault. Secondly, when there is a fault, a high-precision CNN reconstruction algorithm is used to reconstruct each variable and calculate the reconstructed control index. The variable that reduces the control index the most is replaced with the reconstructed variable in sequence, and the iteration is carried out until the control index is within the control range, at which point all abnormal variables have been determined. The accuracy of the CNN reconstruction method was verified through a numerical example. Additionally, it was confirmed that the method is not only suitable for fault diagnosis of a single sensor but can also be used for sensor faults that occur simultaneously or propagate due to variable correlation. Finally, the effectiveness and applicability of the proposed method were validated on the penicillin fermentation process.
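The k-NN monitoring step can be sketched with a distance-based statistic and an empirical control limit; the averaged squared-distance statistic and the 99% limit below are common choices, assumed here rather than taken from the paper.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # Detection statistic: average squared distance of a sample to its
    # k nearest neighbors among normal-operation training samples.
    X_train = np.random.default_rng(5).normal(size=(300, 4))  # normal operation
    k = 5
    nn = NearestNeighbors().fit(X_train)
    d_train = nn.kneighbors(X_train, n_neighbors=k + 1)[0][:, 1:]  # drop self
    limit = np.percentile((d_train**2).mean(axis=1), 99)           # 99% control limit

    def is_faulty(x):
        d = nn.kneighbors(x.reshape(1, -1), n_neighbors=k)[0]
        return (d**2).mean() > limit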
Abstract Myocardial infarction (MI) continues to be a leading cause of death worldwide. The precise quantification of infarcted tissue is crucial to diagnosis, therapeutic management, and post-MI care. Late gadolinium enhancement-cardiac magnetic resonance (LGE-CMR) is regarded as the gold standard for precise infarct tissue localization in MI patients. A fundamental limitation of LGE-CMR is the invasive intravenous introduction of gadolinium-based contrast agents that present potential high-risk toxicity, particularly for individuals with underlying chronic kidney diseases. Herein, a completely non-invasive methodology is developed to identify the location and extent of an infarct region in the left ventricle via a machine learning (ML) model using only cardiac strains as inputs. In this approach, the remarkable performance of a multi-fidelity ML model is demonstrated, which combines rodent-based in-silico-generated training data (low-fidelity) with very limited patient-specific human data (high-fidelity) in predicting LGE ground truth. The results offer a new paradigm for developing feasible prognostic tools by augmenting synthetic simulation-based data with very small amounts of in vivo human data. More broadly, the proposed approach can significantly assist with addressing biomedical challenges in healthcare where human data are limited.
Haifa Ethabet, Leila Dadi, Mohamed Aoun +1 more | Proceedings of the Institution of Mechanical Engineers Part I Journal of Systems and Control Engineering
In this paper, we address the challenge of designing an interval Fault Detection (FD) method for continuous-time Switched Linear Systems (SLS) in the Unknown But Bounded Error (UBBE) context. The considered systems are subject to unknown but bounded disturbances (state and output disturbances) in predefined intervals. A novel technique is developed to construct residual framers using an interval observer with an L∞ criterion. The proposed approach is used to supply more degrees of design freedom and to achieve accurate FD results. The design conditions of the proposed FD interval observer are provided in terms of Linear Matrix Inequalities (LMIs) adopting a common quadratic Lyapunov function, under an arbitrary switching signal. Furthermore, a fault detection decision rule is proposed to indicate the presence of faults using interval analysis. The performance of the proposed approach is highlighted on illustrative examples and comparative studies.
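The interval-analysis detection decision reduces to checking whether zero stays inside the residual framer. A minimal sketch of that rule follows (the framer computation itself, which requires the observer design via the LMIs, is omitted):

    import numpy as np

    def interval_fault_flags(r_lower, r_upper):
        # Interval-observer decision rule: in the fault-free case the residual
        # framer [r_lower, r_upper] must contain zero at every sample; a fault
        # is declared whenever zero leaves the interval.
        r_lower, r_upper = np.asarray(r_lower), np.asarray(r_upper)
        return (r_lower > 0.0) | (r_upper < 0.0)   # True -> fault detected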
The aim of this study was to evaluate machine learning algorithms' capacity to improve prescriptive maintenance. A pumping system consisting of two hydraulic pumps with an electric motor from a Spanish petrochemical company was used as a case study. Sensors were used to record data on the variables, with the target variable being the bearing temperature of the electric motor. Several regression models and a neural network time series model were tested to model the system variables. A bearing temperature sensitivity analysis was conducted based on the coefficients obtained from the optimization of the regression model. To fully exploit the capabilities of these techniques for application in this field, we designed a reference framework intended to foster model deployment in an industrial context by promoting the self-monitoring and updating of the models when required. The impact on decision-making processes is explored using reinforcement learning in the context of this framework.
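A sensitivity analysis of the kind described, reading relative influence off standardized regression coefficients, can be sketched as follows; the variables and data are hypothetical stand-ins for the pump instrumentation.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    # Regress bearing temperature on the other sensed variables and rank
    # their relative influence via standardized coefficients.
    rng = np.random.default_rng(6)
    X = rng.normal(size=(1000, 4))          # e.g. flow, pressure, speed, ambient T
    y = X @ np.array([0.5, 0.1, 0.8, 0.2]) + 0.1 * rng.normal(size=1000)

    Xs = StandardScaler().fit_transform(X)
    coef = LinearRegression().fit(Xs, y).coef_
    sensitivity = np.abs(coef) / np.abs(coef).sum()   # relative sensitivities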