This cluster of papers focuses on the theory, implementation, and optimization of floating-point arithmetic for scientific computation. Topics include interval analysis, high-precision computation, hardware implementation on FPGAs, numerical verification methods, decimal floating-point arithmetic, accuracy-guaranteed bit-width optimization, Taylor models, and the handling of interval uncertainty.
Floating-Point Arithmetic; Interval Analysis; High-Precision Computation; Hardware Implementation; Numerical Verification Methods; FPGA Acceleration; Decimal Floating-Point; Accuracy-Guaranteed Bit-Width Optimization; Taylor Models; Interval Uncertainty