Monotone Piecewise Cubic Interpolation

Type: Article
Publication Date: 1980-04-01
Citations: 2274
DOI: https://doi.org/10.1137/0717021

Abstract

Necessary and sufficient conditions are derived for a cubic to be monotone on an interval. These conditions are used to develop an algorithm which constructs a visually pleasing monotone piecewise cubic interpolant to monotone data. Several examples are given which compare this algorithm with other interpolation methods.
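The construction summarized in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the paper's exact algorithm: it estimates node derivatives with a harmonic-mean limiter in the spirit of Fritsch–Carlson (and Butland's later variant), which keeps each cubic piece monotone for monotone data, and then evaluates the resulting $C^1$ piecewise cubic Hermite interpolant. The function names and endpoint treatment are our own simplifications.

```python
import numpy as np

def monotone_cubic_slopes(x, y):
    """Derivative estimates limited so each cubic Hermite piece is
    monotone (a simplified Fritsch-Carlson/Butland-style sketch)."""
    h = np.diff(x)               # interval widths
    delta = np.diff(y) / h       # secant slopes
    d = np.zeros_like(y)
    # Interior nodes: harmonic mean of adjacent secant slopes when they
    # agree in sign; zero at a local extremum or flat segment. The
    # harmonic mean is at most twice the smaller secant slope, which
    # places (alpha, beta) inside the monotonicity region.
    for i in range(1, len(x) - 1):
        if delta[i - 1] * delta[i] > 0:
            d[i] = 2.0 / (1.0 / delta[i - 1] + 1.0 / delta[i])
    # Simple one-sided endpoint slopes (the paper discusses better ones).
    d[0] = delta[0]
    d[-1] = delta[-1]
    return d

def eval_hermite(x, y, d, t):
    """Evaluate the C^1 piecewise cubic Hermite interpolant at points t."""
    i = np.clip(np.searchsorted(x, t) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    s = (t - x[i]) / h           # local coordinate in [0, 1]
    h00 = (1 + 2 * s) * (1 - s) ** 2
    h10 = s * (1 - s) ** 2
    h01 = s ** 2 * (3 - 2 * s)
    h11 = s ** 2 * (s - 1)
    return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]
```

A production implementation of this idea is available as `scipy.interpolate.PchipInterpolator`.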

Locations

  • SIAM Journal on Numerical Analysis
We present a $C^1$ interpolating scheme to deal with the problem of monotonicity (that is, when the data are monotone, the interpolant should also preserve that monotonicity). The scheme uses a piecewise rational cubic function in its most general form, which involves four free parameters in each interval. We derive data-dependent sufficient conditions on these parameters which ensure monotonicity for monotone data. These parameters also give the user freedom to refine the appearance of the curves interactively.
An idea due to J. Butland (Computer Graphics 80, Online Public., Ltd., 1980, 409-422) leads to an improved algorithm for monotone piecewise cubic interpolation. The modifications to the published algorithm (SIAM J. Numer. Anal., 17, 2 (April 1980), 238-246) and plans for software implementing the new algorithm are described. 4 figures.
A novel algorithm for computing monotone order-six piecewise polynomial interpolants is proposed. Algebraic constraints for enforcing monotonicity are provided that align with quintic monotonicity theory. The algorithm is implemented, tested, and applied to several sample problems to demonstrate the improved accuracy of monotone quintic spline interpolants compared to the previous state-of-the-art monotone cubic spline interpolants.
An explicit representation of a $C^1$ piecewise rational cubic spline has been developed, which can produce a monotonic interpolant to given monotonic data. The explicit representation is easily constructed, and numerical experiments indicate that the method produces visually pleasing curves. Furthermore, an error analysis of the interpolant is given.
A simple and effective algorithm to construct a monotonicity-preserving cubic Hermite interpolant for data with rapid variations is presented. Constraining the derivatives of the interpolant according to geometric considerations makes the interpolant consistent with local monotonicity properties of the data. Numerical examples are given that compare the quality and accuracy of the proposed interpolation method with other standard interpolants.
A monotonicity-preserving piecewise rational cubic interpolation function is proposed, and the interpolation function is $C^1$ continuous. Because the expression has an adjustable parameter, the interpolation curve has more flexibility.
Fritsch and Carlson [SIAM J. Numer. Anal., 17 (1980), pp. 238–246] developed an algorithm which produces a monotone $C^1$ piecewise cubic interpolant to a monotone function. We show that their algorithm yields a third-order approximation, while a modification is fourth-order accurate.
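The per-interval monotonicity conditions that the Fritsch–Carlson construction relies on are easy to test numerically. Below is a minimal sketch of a sufficient (not necessary) check: with the endpoint derivatives normalized by the interval's secant slope, the box $0 \le \alpha, \beta \le 3$ is a well-known subset of the full monotonicity region. The function name is our own.

```python
def piece_monotone_sufficient(d0, d1, delta):
    """Sufficient (not necessary) monotonicity test for one cubic
    Hermite piece: with alpha = d0/delta and beta = d1/delta, the box
    0 <= alpha, beta <= 3 lies inside the monotonicity region derived
    by Fritsch and Carlson."""
    if delta == 0.0:
        # Flat data: the piece is constant only if both slopes vanish.
        return d0 == 0.0 and d1 == 0.0
    a, b = d0 / delta, d1 / delta
    return 0.0 <= a <= 3.0 and 0.0 <= b <= 3.0
```

A slope pair that fails this box test may still give a monotone cubic; the exact region in the paper is larger than the box.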
Monotonicity-preserving interpolation is very important in many science and engineering problems. This paper discusses monotonicity-preserving interpolation for monotone data sets using a $C^2$ rational cubic spline interpolant (cubic/quadratic) with three parameters. Data-dependent sufficient conditions for monotonicity are derived with two degrees of freedom. Numerical results suggest that the proposed $C^2$ rational cubic spline preserves the monotonicity of the data and outperforms other rational cubic spline schemes in terms of visual quality.
In this paper, we consider features concerning the approximation of data using piecewise interpolation techniques. Numerical examples are given which compare piecewise cubic interpolation methods.
A monotonicity-preserving piecewise rational interpolation function with linear denominator is proposed, and the interpolation function is $C^1$ continuous. Because the expression has an adjustable parameter, the interpolation curve has more flexibility.
Spline interpolation has been used in several applications due to its favorable properties regarding smoothness and accuracy of the interpolant. However, when there exists a discontinuity or a steep gradient in the data, some artifacts can appear due to the Gibbs phenomenon. Also, preservation of data monotonicity is a requirement in some applications, and that property is not automatically verified by the interpolator. In this paper, we study sufficient conditions to obtain monotone cubic splines based on Hermite cubic interpolators and propose different ways to construct them using non-linear formulas. The order of approximation, in each case, is calculated and several numerical experiments are performed to contrast the theoretical results.
The intent of the paper is to introduce $C^1$ cubic trigonometric fractal interpolation functions with two shape parameters. Sufficient conditions for shape-preserving interpolation of a prescribed set of positive data are obtained by restricting the shape parameters and vertical scaling factors. The results are verified by a simple example.
Many fields of science rely on the collection of samples and estimation of true population distributions from those samples. There are several effective nonparametric methods for approximating a true distribution from empirical data; however, it is unclear which methods produce the best approximations in practice. This work presents a case study on the effectiveness of various distribution approximations. Results show that piecewise linear approximations produce the smallest maximum absolute error, while the classic empirical distribution function (EDF) produces the smallest median absolute error as well as the smallest first-quartile and minimum absolute error when approximating a distribution from a sample. When building distribution prediction models, the piecewise quintic and cubic approximations produce the lowest absolute error at most error percentiles. This case study encourages more research on the best methods of fitting empirical data with smooth functions to generate accurate distribution approximations.
For linear and fully non-linear diffusion equations of Bellman–Isaacs type, we introduce a class of approximation schemes based on differencing and interpolation. As opposed to classical numerical methods, these schemes work for general diffusions with coefficient matrices that may be non-diagonally dominant and arbitrarily degenerate. In general such schemes have to have a wide stencil. Besides providing a unifying framework for several known first-order accurate schemes, our class of schemes includes new first- and higher-order versions. The methods are easy to implement and more efficient than some other known schemes. We prove consistency and stability of the methods, and for the monotone first-order methods, we prove convergence in the general case and robust error estimates in the convex case. The methods are extensively tested.
This paper discusses the construction of a new $C^2$ rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters $\alpha_i$, $\beta_i$, and $\gamma_i$. The sufficient conditions for positivity are derived on one parameter, $\gamma_i$, while the other two parameters, $\alpha_i$ and $\beta_i$, are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with $C^2$ continuity is not able to preserve the shape of the positive data. Notably, our scheme is easy to use and does not require knot insertion, and $C^2$ continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives $d_i$, $i = 1, \dots, n-1$. Comparisons with existing schemes have also been done in detail. From all presented numerical results, the new $C^2$ rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis for the case when the function to be interpolated is $f(t) \in C^3[t_0, t_n]$ is also investigated in detail.
A curve interpolation scheme for monotonic data has been developed. This scheme uses piecewise rational cubic functions. The parameters in the description of the rational interpolant have been constrained to preserve the shape of the data as well as to control the shape if further modification is required. An algorithm has been constructed for an economical $C^1$ curve implementation and a pleasing demonstration. Furthermore, the $C^2$ case has also been considered to further smooth the interpolant.
Discrete data are abundant and often arise as counts or rounded data. Yet even for linear regression models, conjugate priors and closed-form posteriors are typically unavailable, which necessitates approximations such as MCMC for posterior inference. For a broad class of count and rounded data regression models, we introduce conjugate priors that enable closed-form posterior inference. Key posterior and predictive functionals are computable analytically or via direct Monte Carlo simulation. Crucially, the predictive distributions are discrete to match the support of the data and can be evaluated or simulated jointly across multiple covariate values. These tools are broadly useful for linear regression, nonlinear models via basis expansions, and model and variable selection. Multiple simulation studies demonstrate significant advantages in computing, predictive modeling, and selection relative to existing alternatives.
Abstract We propose a class of numerical schemes for mixed optimal stopping and control of processes with infinite activity jumps and where the objective is evaluated by a nonlinear expectation. Exploiting an approximation by switching systems, piecewise constant policy timestepping reduces the problem to nonlocal semi-linear equations with different control parameters, uncoupled over individual time steps, which we solve by fully implicit monotone approximations to the controlled diffusion and the nonlocal term, and specifically the Lax–Friedrichs scheme for the nonlinearity in the gradient. We establish a comparison principle for the switching system and demonstrate the convergence of the schemes, which subsequently gives a constructive proof for the existence of a solution to the switching system. Numerical experiments are presented for a recursive utility maximization problem to demonstrate the effectiveness of the new schemes.
This paper presents a sensitivity analysis for the hypersonic aero-thermal convective heat transfer from the free molecular to the slip-flow regime for cylindrical and cubic geometries. The analyses focus on a surface-averaged heat transfer coefficient at various atmospheric conditions. The sensitivity analyses have been performed by coupling a High Dimensional Model Representation based approach and a Direct Simulation Monte Carlo code. The geometries have been tested with respect to different input parameters: altitude, attitude, wall temperature, and geometric characteristics. After the initial sensitivity analyses, the N-dimensional surrogate models of the surface-averaged heat transfer coefficient have been defined and tested. A shape-based DSMC mesh refinement correction factor for reducing the overall computational time of the analyses is also presented.
This paper describes an algorithm for interpolating data on a rectangular mesh by piecewise bicubic functions. In [SIAM J. Numer. Anal., 26 (1989), pp. 1–9] the authors presented a monotonicity-preserving algorithm for data that are monotone in both variables. The present paper shows how the algorithm can be modified to treat data that are monotone in only one variable.
We develop theory and methodology for the problem of nonparametric registration of functional data that have been subjected to random deformation (warping) of their time scale. The separation of this phase variation (horizontal variation) from the amplitude variation (vertical variation) is crucial in order to properly conduct further analyses, which otherwise can be severely distorted. We determine precise nonparametric conditions under which the two forms of variation are identifiable. These show that the identifiability delicately depends on the underlying rank. By means of several counterexamples, we demonstrate that our conditions are sharp if one wishes a genuinely nonparametric setup; and in doing so we caution that popular remedies such as structural assumptions or roughness penalties can easily fail. We then propose a nonparametric registration method based on a local variation measure, the main element in elucidating identifiability. A key advantage of the method is that it is free of any tuning or penalisation parameters regulating the amount of alignment, thus circumventing the problem of over/under-registration often encountered in practice. We provide asymptotic theory for the resulting estimators under the identifiable regime, but also under mild departures from identifiability, quantifying the resulting bias in terms of the amplitude variation's spectral gap.
Abstract Background Although many patients receive good prognoses with standard therapy, 30–50% of diffuse large B-cell lymphoma (DLBCL) cases may relapse after treatment. Statistical or computational intelligent models are powerful tools for assessing prognoses; however, many cannot generate accurate risk (probability) estimates. Thus, probability calibration-based versions of traditional machine learning algorithms are developed in this paper to predict the risk of relapse in patients with DLBCL. Methods Five machine learning algorithms were assessed, namely, naïve Bayes (NB), logistic regression (LR), random forest (RF), support vector machine (SVM) and feedforward neural network (FFNN), and three methods were used to develop probability calibration-based versions of each of the above algorithms, namely, Platt scaling (Platt), isotonic regression (IsoReg) and shape-restricted polynomial regression (RPR). Performance comparisons were based on the average results of the stratified hold-out test, which was repeated 500 times. We used the AUC to evaluate the discrimination ability (i.e., classification ability) of the model and assessed the model calibration (i.e., risk prediction accuracy) using the H-L goodness-of-fit test, ECE, MCE and BS. Results Sex, stage, IPI, KPS, GCB, CD10 and rituximab were significant factors predicting the 3-year recurrence rate of patients with DLBCL. For the 5 uncalibrated algorithms, the LR (ECE = 8.517, MCE = 20.100, BS = 0.188) and FFNN (ECE = 8.238, MCE = 20.150, BS = 0.184) models were well-calibrated.
The errors of the initial risk estimate of the NB (ECE = 15.711, MCE = 34.350, BS = 0.212), RF (ECE = 12.740, MCE = 27.200, BS = 0.201) and SVM (ECE = 9.872, MCE = 23.800, BS = 0.194) models were large. With probability calibration, the biased NB, RF and SVM models were well-corrected. The calibration errors of the LR and FFNN models were not further improved regardless of the probability calibration method. Among the 3 calibration methods, RPR achieved the best calibration for both the RF and SVM models. The power of IsoReg was not obvious for the NB, RF or SVM models. Conclusions Although these algorithms all have good classification ability, several cannot generate accurate risk estimates. Probability calibration is an effective method of improving the accuracy of these poorly calibrated algorithms. Our risk model of DLBCL demonstrates good discrimination and calibration ability and has the potential to help clinicians make optimal therapeutic decisions to achieve precision medicine.
"For how many days during the past 30 days was your mental health not good?" The responses to this question measure self-reported mental health and can be linked to important … "For how many days during the past 30 days was your mental health not good?" The responses to this question measure self-reported mental health and can be linked to important covariates in the National Health and Nutrition Examination Survey (NHANES). However, these count variables present major distributional challenges: the data are overdispersed, zero-inflated, bounded by 30, and heaped in five- and seven-day increments. To meet these challenges, we design a semiparametric estimation and inference framework for count data regression. The data-generating process is defined by simultaneously transforming and rounding (STAR) a latent Gaussian regression model. The transformation is estimated nonparametrically and the rounding operator ensures the correct support for the discrete and bounded data. Maximum likelihood estimators are computed using an EM algorithm that is compatible with any continuous data model estimable by least squares. STAR regression includes asymptotic hypothesis testing and confidence intervals, variable selection via information criteria, and customized diagnostics. Simulation studies validate the utility of this framework. STAR is deployed to study the factors associated with self-reported mental health and demonstrates substantial improvements in goodness-of-fit compared to existing count data regression models.
Accelerated cooling (ACC) is a key technology in producing thermomechanically controlled processed (TMCP) steel plates. In a TMCP process, hot plates are subjected to a strong cooling resulting in a complex microstructure leading to increased strength and fracture toughness. The microstructure, residual stresses, and flatness deformations are strongly affected by the temperature evolution during the cooling process. Therefore, the full control (quantification) of the temperature evolution is essential regarding plate design and processing. It can only be achieved by a thermophysical characterization of the material and the cooling system. In this paper, the focus is on the thermophysical characterization of the material properties which govern the heat conduction behavior inside of the plates. Mathematically, this work considers a specific inverse heat conduction problem (IHCP) utilizing inner temperature measurements. The temperature evolution of a heated steel plate passing through the cooling device is modeled by a 1D nonlinear partial differential equation with temperature-dependent material parameters which describe the characteristics of the underlying material. Usually, the material parameters considered in IHCPs are defined as functions of the space and/or time variables only. Since the measured data (the effect) and the unknown material properties (the cause) depend on temperature, the cause-to-effect relationship cannot be decoupled. Hence, the parameter-to-solution operator can only be defined implicitly. By proposing a parametrization approach via piecewise interpolation, this problem can be resolved.
Lastly, using simulated measurement data, the presentation of the numerical procedure shows the ability to identify the material parameters (up to some canonical ambiguity) without any a priori information.
Although I have been involved in the development of programs for numerical evaluation of integrals for a number of years, it was only recently that I became aware of the importance of providing such tools in BASIC. I admit to having had a built-in prejudice against people using BASIC. To me it seemed that these people were either solving trivial problems, or were too lazy to learn FORTRAN. But in the last year or so, I have been made aware that BASIC is very much an alive language with sophisticated scientific users. The growth in popularity of the microcomputer is partly responsible for this. At NBS I also see many more scientists who are not as computer oriented as those I met at Los Alamos, and this has some effect too. In this talk I would like to describe a few important quadrature techniques that are essential to have and how they can be best implemented in BASIC. This seems to me a rather neglected area. There may be a few odd bits of software around in BASIC, but most are not what I would call state-of-the-art. If I am wrong about this I would be happy to be corrected. We could publish a bibliography of such programs here in the SIGNUM Newsletter. In the first section, as a convenience to those of us who are not very familiar with the language, I outline several of the differences with FORTRAN.
Network telemetry, characterized by its efficient push model and high-performance communication protocol (gRPC), offers a new avenue for collecting fine-grained real-time data. Despite its advantages, existing network telemetry systems lack a theoretical basis for setting measurement frequency, struggle to capture informative samples, and face challenges in setting a uniform frequency for multi-metric monitoring. We introduce FineMon, an innovative adaptive network telemetry scheme for precise, fine-grained, multi-metric data monitoring. FineMon leverages a novel Two-sided Frequency Adjustment (TFA) to dynamically adjust the measurement frequency on the Network Management System (NMS) and infrastructure sides. On the NMS side, we provide a theoretical basis for frequency determination, drawing on changes in the rank of multi-metric data to minimize monitoring overhead. On the infrastructure side, we adjust the frequency in real-time to capture significant data fluctuations. We propose a robust Enhanced-Subspace-based Tensor Completion (ESTC) to ensure accurate recovery of fine-grained data, even with noise or outliers. Through extensive experimentation with three real datasets, we demonstrate FineMon's superiority over existing schemes in reduced measurement overhead, enhanced accuracy, and effective capture of intricate temporal features.
In this article, we propose an efficient approach for inverting computationally expensive cumulative distribution functions. A collocation method, called the Stochastic Collocation Monte Carlo sampler (SCMC sampler), within a polynomial chaos expansion framework, allows us to generate any number of Monte Carlo samples based on only a few inversions of the original distribution plus independent samples from a standard normal variable. We will show that with this path-independent collocation approach the exact simulation of the Heston stochastic volatility model, as proposed in Broadie and Kaya [Oper. Res., 2006, 54, 217–231], can be performed efficiently and accurately. We also show how to efficiently generate samples from the squared Bessel process and perform the exact simulation of the SABR model.
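The collocation idea can be sketched in a few lines. This is a sketch only, not the authors' implementation: the chi-square target, the equiprobable node placement, and the sample count are assumptions for illustration (the paper's applications are the Heston, squared Bessel, and SABR models, with optimal Gauss-type collocation points).

```python
import numpy as np
from numpy.polynomial import Polynomial
from scipy import stats

# "Expensive" target distribution; chi-square stands in for a costly
# model-implied distribution (an assumption for this sketch).
target = stats.chi2(df=4)

# A handful of collocation points in the standard-normal variable
# (equiprobable midpoints here; the paper uses optimal Gauss-type nodes).
n_coll = 7
z_nodes = stats.norm.ppf((np.arange(1, n_coll + 1) - 0.5) / n_coll)

# The only expensive step: invert the target CDF at the n_coll points.
y_nodes = target.ppf(stats.norm.cdf(z_nodes))

# Polynomial map g with g(z_i) = y_i, cheap to evaluate everywhere.
g = Polynomial.fit(z_nodes, y_nodes, deg=n_coll - 1)

# Any number of samples now costs only normal draws plus evaluations of g.
rng = np.random.default_rng(0)
samples = g(rng.standard_normal(100_000))
```

The accuracy in the distribution tails depends on the node placement, which is why the paper's choice of collocation points matters beyond this toy version.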
The monotonicity and stability of difference schemes for, in general, hyperbolic systems of conservation laws with source terms are studied. The basic approach is to investigate the stability and monotonicity of a non-linear scheme in terms of its corresponding scheme in variations. Such an approach leads to application of the stability theory for linear equation systems to establish stability of the corresponding non-linear scheme. The main methodological innovation is a set of theorems establishing that a non-linear scheme is stable (and monotone) if the corresponding scheme in variations is stable (and, respectively, monotone). Criteria are developed for monotonicity and stability of difference schemes associated with the numerical analysis of systems of partial differential equations. The theorem of Friedrichs (1954) is generalized to be applicable to variational schemes with non-symmetric matrices. A new modification of the central Lax-Friedrichs (LxF) scheme is developed to be second-order accurate. A monotone piecewise cubic interpolation is used in the central schemes to give an accurate approximation for the model in question. The stability and monotonicity of the modified scheme are investigated. Some versions of the modified scheme are tested on several conservation laws, and the scheme is found to be accurate and robust. For hyperbolic conservation laws with, in general, stiff source terms, a second-order scheme based on operator-splitting techniques is constructed.
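The monotone piecewise cubic interpolation referred to here is the construction of the present article. For a concrete picture, SciPy's `PchipInterpolator` implements a monotone piecewise cubic of this family, and a small sketch (with made-up data) shows how it differs from an unconstrained cubic spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with a sharp transition: the classic case where an
# ordinary C^2 cubic spline over- and undershoots (data values invented).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.1, 5.0, 5.0, 5.0])

xs = np.linspace(0.0, 5.0, 501)
spline = CubicSpline(x, y)(xs)       # smooth, but oscillates near the jump
pchip = PchipInterpolator(x, y)(xs)  # C^1, monotone between the data points

# pchip is non-decreasing and stays inside [0, 5]; spline does not.
```

This containment of over- and undershoot is precisely why monotone cubics are attractive inside shock-capturing central schemes.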
Longitudinal studies are those in which the same variable is repeatedly measured at different times. These studies are more likely than others to suffer from missing values. Since the presence of missing values may have an important impact on statistical analyses, it is important that they be dealt with properly. In this paper, we present “Copy Mean”, a new method to impute intermittent missing values. Its efficiency was compared across eleven imputation methods dedicated to the treatment of missing values in longitudinal data. All these methods were tested on three markedly different real datasets (stationary, increasing, and sinusoidal pattern) with complete data. For each of them, we generated nine types of incomplete datasets containing 10%, 30%, or 50% missing data, using either a Missing Completely at Random, a Missing at Random, or a Missing Not at Random missingness mechanism. Our results show that Copy Mean is highly effective, exceeding or equaling the performance of the other methods in almost all configurations. The effectiveness of linear interpolation is highly data-dependent. The Last Occurrence Carried Forward method is strongly discouraged.
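Copy Mean itself is specific to the paper, but two of the baselines it is judged against, linear interpolation and Last Occurrence Carried Forward, are standard and easy to sketch; the toy series below is invented:

```python
import numpy as np
import pandas as pd

# A toy longitudinal series with intermittent missing values.
s = pd.Series([1.0, np.nan, 3.0, np.nan, np.nan, 6.0])

locf = s.ffill()                         # Last Occurrence Carried Forward
linear = s.interpolate(method="linear")  # straight lines between observations

# LOCF flattens the trajectory between visits (one reason the study
# discourages it for trending data); linear interpolation recovers a
# linear trend exactly, but its quality is data-dependent in general.
```

On this increasing series LOCF yields the stair-step `[1, 1, 3, 3, 3, 6]` while linear interpolation recovers `[1, 2, 3, 4, 5, 6]`, which mirrors the paper's finding that LOCF is the weakest baseline on trending data.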
Interval-censored data can arise in questionnaire-based studies when the respondent gives an answer in the form of an interval without having pre-specified ranges. Such data are called self-selected interval data. In this case, the assumption of independent censoring is not fulfilled, and therefore the ordinary methods for interval-censored data are not suitable. This paper explores a quantile regression model for self-selected interval data and suggests an estimator based on estimating equations. The consistency of the estimator is shown. Bootstrap procedures for constructing confidence intervals are considered. A simulation study indicates satisfactory performance of the proposed methods. An application to data concerning price estimates is presented.
Polynomial interpolation is an important component of many computational problems. In several of these computational problems, failure to preserve positivity when using polynomials to approximate or map data values between meshes can lead to negative unphysical quantities. Currently, most polynomial-based methods for enforcing positivity are based on splines and polynomial rescaling. The spline-based approaches build interpolants that are positive over the intervals in which they are defined and may require solving a minimization problem and/or system of equations. The linear polynomial rescaling methods allow for high-degree polynomials but enforce positivity only at limited locations (e.g., quadrature nodes). This work introduces open-source software (HiPPIS) for high-order data-bounded interpolation (DBI) and positivity-preserving interpolation (PPI) that addresses the limitations of both the spline and polynomial rescaling methods. HiPPIS is suitable for approximating and mapping physical quantities such as mass, density, and concentration between meshes while preserving positivity. This work provides Fortran and Matlab implementations of the DBI and PPI methods, presents an analysis of the mapping error in the context of PDEs, and uses several 1D and 2D numerical examples to demonstrate the benefits and limitations of HiPPIS.
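The failure mode that motivates DBI/PPI is easy to reproduce: a single high-degree polynomial through strictly positive data can dip well below zero between nodes. The data below are invented for illustration, and HiPPIS's own algorithms are not reproduced here:

```python
import numpy as np

# Strictly positive data: a narrow spike on a small positive background.
x = np.linspace(-1.0, 1.0, 9)
y = np.full(9, 0.01)
y[4] = 1.0  # spike at x = 0

# The unique degree-8 polynomial interpolating the 9 points.
coeffs = np.polyfit(x, y, deg=8)
xs = np.linspace(-1.0, 1.0, 1001)
p = np.polyval(coeffs, xs)

# Every data value is positive, yet the interpolant goes negative between
# nodes: exactly the unphysical behavior (negative mass, density, or
# concentration) that positivity-preserving interpolation rules out.
```

Here the interpolant drops to roughly -0.3 between the nodes at 0.25 and 0.5, despite the smallest data value being +0.01.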
Gambling disorder and problem gambling are characterized by persistent and repetitive problematic gambling behavior. Attentional bias toward gambling-related stimuli such as casino chips, dice, and roulette has been observed in problem gamblers, but it remains unclear whether stimuli in gambling tasks elicit greater attention and pupillary responses in problem gamblers. To address this issue, we administered a gambling task, accompanied by eye-tracking measurements, to problem and non-problem gamblers, in which the participants were required to choose one of a pair of pictures to receive monetary rewards and avoid punishments. Concerning attentional allocation, problem gamblers showed a greater attentional preference for the right-hand pictures in the decision and feedback phases, and compared to non-problem gamblers, problem gamblers’ attention was particularly high around the center. Concerning pupillary dynamics, pupillary dilation in response to rewards and punishments was observed only in problem gamblers. Accordingly, the asymmetrical allocation of attention by problem gamblers may reflect greater concentration on the gambling task, and pupillary dynamics in problem gamblers may reflect hypersensitivity to wins and losses.
We present a numerical procedure to compute the nodes and weights in rational Gauss-Chebyshev quadrature formulas. Under certain conditions on the poles, these nodes are near best for rational interpolation with prescribed poles (in the same sense that Chebyshev points are near best for polynomial interpolation). As an illustration, we use these interpolation points to solve a differential equation with an interior boundary layer using a rational spectral method. The algorithm to compute the interpolation points (and, if required, the quadrature weights) is implemented as a Matlab program.
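The benchmark invoked here, Chebyshev points being near best for polynomial interpolation, can be illustrated directly. The rational, prescribed-pole construction of the paper is not reproduced; the Runge-type test function and degree are arbitrary choices for the sketch:

```python
import numpy as np

def cheb_points(n):
    """Chebyshev points of the second kind on [-1, 1]."""
    return np.cos(np.pi * np.arange(n + 1) / n)

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)  # Runge's function
xs = np.linspace(-1.0, 1.0, 2001)
deg = 20

# Degree-20 interpolation at Chebyshev vs. equispaced points.
x_cheb = cheb_points(deg)
p_cheb = np.polyval(np.polyfit(x_cheb, f(x_cheb), deg), xs)
x_equi = np.linspace(-1.0, 1.0, deg + 1)
p_equi = np.polyval(np.polyfit(x_equi, f(x_equi), deg), xs)

err_cheb = np.max(np.abs(p_cheb - f(xs)))
err_equi = np.max(np.abs(p_equi - f(xs)))
# Clustering the nodes toward the endpoints suppresses the wild
# oscillations that ruin the equispaced interpolant.
```

The paper's contribution is the analogous near-best node set for rational interpolation with prescribed poles, which is what makes its rational spectral method effective on boundary layers.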
In this paper, smoothing a curve or surface subject to both interpolation conditions and inequality constraints is considered as a general convex optimization problem in a Hilbert space. We propose a new approximation method based on a discretized optimization problem in a finite-dimensional Hilbert space under the same set of constraints. We prove that the approximate solution converges uniformly to the optimal constrained interpolating function. An efficient algorithm is derived, and numerical examples with bound and monotonicity constraints in one and two dimensions are given. A comparison with existing monotone cubic spline interpolation algorithms in terms of a linearized energy criterion is included.