A Block Algorithm for Matrix 1-Norm Estimation, with an Application to 1-Norm Pseudospectra

Type: Article

Publication Date: 2000-01-01

Citations: 142

DOI: https://doi.org/10.1137/s0895479899356080

Abstract

The matrix 1-norm estimation algorithm used in LAPACK and various other software libraries and packages has proved to be a valuable tool. However, it has the limitations that it offers the user no control over the accuracy and reliability of the estimate and that it is based on level 2 BLAS operations. A block generalization of the 1-norm power method underlying the estimator is derived here and developed into a practical algorithm applicable to both real and complex matrices. The algorithm works with n × t matrices, where t is a parameter. For t=1 the original algorithm is recovered, but with two improvements (one for real matrices and one for complex matrices). The accuracy and reliability of the estimates generally increase with t and the computational kernels are level 3 BLAS operations for t > 1. The last t-1 columns of the starting matrix are randomly chosen, giving the algorithm a statistical flavor. As a by-product of our investigations we identify a matrix for which the 1-norm power method takes the maximum number of iterations. As an application of the new estimator we show how it can be used to efficiently approximate 1-norm pseudospectra.
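The block iteration the abstract describes can be sketched as follows. This is a minimal illustration of the idea, not the paper's Algorithm 2.4: it handles real matrices only, omits the paper's detection of repeated/parallel columns and its convergence tests, and uses an illustrative function name and defaults. The first column of the starting block is the uniform vector, the remaining t-1 columns are random ±1 vectors, and each iteration needs only the two products AX and A^T S, which are level 3 BLAS kernels for t > 1.

```python
import numpy as np

def block_onenorm_est(A, t=2, itmax=5, rng=None):
    """Sketch of a block 1-norm power method in the spirit of the paper.

    Simplified: real matrices only, no repeated/parallel-column detection,
    fixed iteration cap. Name and defaults are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    # Starting block: uniform first column, then t-1 random +/-1 columns,
    # all scaled to unit 1-norm so the estimate is always a lower bound.
    X = np.empty((n, t))
    X[:, 0] = 1.0 / n
    if t > 1:
        X[:, 1:] = rng.choice([-1.0, 1.0], size=(n, t - 1)) / n
    est_old = 0.0
    used = set()                      # indices of unit vectors already tried
    for _ in range(itmax):
        Y = A @ X                     # n x t product: level-3 BLAS kernel
        est = np.abs(Y).sum(axis=0).max()
        if est <= est_old:            # no improvement: keep previous estimate
            return est_old
        est_old = est
        S = np.sign(Y)
        S[S == 0] = 1.0
        Z = A.T @ S                   # second level-3 kernel
        h = np.abs(Z).max(axis=1)     # per-row scores guiding the next block
        order = np.argsort(-h)
        if set(order[:t]) <= used:    # best candidates already visited
            return est
        X = np.zeros((n, t))
        k = 0
        for j in order:               # next block: unit vectors e_j for the
            if j not in used:         # t largest scores not yet visited
                X[j, k] = 1.0
                used.add(j)
                k += 1
                if k == t:
                    break
        if k == 0:
            return est
    return est_old
```

Because every column of X has 1-norm at most 1, the estimate never exceeds the true norm, and on small matrices it frequently attains it exactly. For a production implementation of the paper's algorithm see `scipy.sparse.linalg.onenormest`, which is based on it.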

Locations

  • SIAM Journal on Matrix Analysis and Applications
  • MIMS EPrints (University of Manchester)

Similar Works

  • A Symmetry Preserving Algorithm for Matrix Scaling (2014). Philip A. Knight, Daniel Ruiz, Bora Uçar
  • Perturbation analysis on T-eigenvalues of third-order tensors (2021). Changxin Mo, Weiyang Ding, Yimin Wei
  • Efficient Algorithm for L∞-Norm Calculations (2006). Vasile Sima
  • Analytical block-diagonalization of matrices using unit spherical tensors (1993). G. J. Bowden, M. J. Prandolini
  • Perturbation Analysis for t-Product-Based Tensor Inverse, Moore-Penrose Inverse and Tensor System (2022). Zhengbang Cao, Pengpeng Xie
  • Parameter Sensitivity Analysis of the SparTen High Performance Sparse Tensor Decomposition Software: Extended Analysis (2020). Jeremy M. Myers, Daniel Dunlavy, Keita Teranishi, D. S. Hollman
  • MATLAB Tensor Toolbox (2006). Tamara G. Kolda, Brett W. Bader
  • Constrained Low-Rank Matrix/Tensor Factorisation (2020). Shuai Jiang
  • Recent Numerical and Conceptual Advances for Tensor Decompositions — A Preview of Tensorlab 4.0 (2019). Nico Vervliet, Michiel Vandecappelle, Martijn Boussé, Rob Zink, Lieven De Lathauwer
  • Codebase release 0.3 for ITensor (2022). Matthew Fishman, Steven R. White, E. Miles Stoudenmire
  • Parameter Sensitivity Analysis of the SparTen High Performance Sparse Tensor Decomposition Software (2020). Jeremy M. Myers, Daniel Dunlavy, Keita Teranishi, Daisy S. Hollman
  • Parameter Sensitivity Analysis of the SparTen High Performance Sparse Tensor Decomposition Software (Extended Analysis) (2020). Jeremy B. Myers, Daniel Dunlavy, Keita Teranishi, David S. Hollman
  • Calculating the Isotropic Subspace of a Symmetric Quasi-Definite Matrix (2018). Kh. D. Ikramov
  • Novel Algorithms for Exact and Efficient L1-Norm-Based Tucker2 Decomposition (2018). Dimitris G. Chachlakis, Panos P. Markopoulos
  • Obtaining Pseudo-inverse Solutions With MINRES (2023). Yang Liu, Andre Milzarek, Fred Roosta
  • Tensor-Tensor Product Toolbox (2018). Canyi Lu
  • kbatseli/STEROID: Symmetric Tensor decomposition and tensor QR algorithm for symmetric tensors (2019). Kim Batselier
  • Accurate Solution of Structured Least Squares Problems via Rank-Revealing Decompositions (2013). N. Castro-González, Johan Ceballos, Froilán M. Dopico, Juan M. Molera
  • Acute perturbation for Moore-Penrose inverses of tensors via the T-Product (2022). Zhanlei Cong, Haifeng Ma
  • Convergence analysis of iterative methods for computing the T-pseudoinverse of complete full-rank third-order tensors based on the T-product (2023). Pablo Soto-Quiros