Abstract Many researchers have proposed Bayesian predictive densities for Type-II censored data, which are generated by ordered observations. However, these predictive densities have not been evaluated adequately, because the Bayesian predictive density depends on prior parameters and one therefore faces the problem of selecting them. In this study, we consider two types of predictive densities, posterior predictive and plug-in, for Type-II censored observations from an exponential distribution. We discuss which predictive density is preferable using the risk under the Kullback–Leibler loss function. For mathematical tractability, we adopt a Gamma prior, which is conjugate in this setting. We prove that the posterior predictive density based on an improper Gamma prior dominates the plug-in densities, regardless of the value of the unknown parameter. Finally, a simulation study shows that the posterior predictive density also outperforms the plug-in densities in terms of coverage probabilities for the data left unobserved by censoring.
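Illustrative note (a minimal sketch of the conjugate calculation underlying this setting, not quoted from the paper; the symbols $n$, $r$, $a$, $b$ and $T_r$ are introduced here only for illustration): suppose $n$ units have exponential lifetimes with density $p(x \mid \lambda) = \lambda e^{-\lambda x}$ and only the first $r$ order statistics $x_{(1)} \le \dots \le x_{(r)}$ are observed (Type-II censoring). The likelihood is proportional to $\lambda^{r} \exp(-\lambda T_r)$ with total time on test $T_r = \sum_{i=1}^{r} x_{(i)} + (n - r) x_{(r)}$. Under a Gamma$(a, b)$ prior $\pi(\lambda) \propto \lambda^{a-1} e^{-b\lambda}$, the posterior is Gamma$(a + r,\, b + T_r)$; the plug-in density substitutes an estimate such as the MLE $\hat\lambda = r / T_r$ into $p(y \mid \lambda)$, whereas the posterior predictive density of a further exponential observation $Y$ is the mixture
$$\hat p(y \mid x_{(1)}, \dots, x_{(r)}) = \int_0^\infty \lambda e^{-\lambda y}\, \pi(\lambda \mid \text{data})\, d\lambda = \frac{(a + r)(b + T_r)^{a+r}}{(b + T_r + y)^{a+r+1}}, \qquad y > 0,$$
a Lomax (Pareto type II) density. Formally letting $a, b \to 0$ gives the improper prior $\pi(\lambda) \propto 1/\lambda$; the paper's improper Gamma prior may be specified differently.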
Journal Article: On asymptotic properties of predictive distributions. FUMIYASU KOMAKI, The Institute of Statistical Mathematics, 4-6-7 Minami-Azabu, Minato-ku, Tokyo 106, Japan. Biometrika, Volume 83, Issue 2, June 1996, Pages 299–313, https://doi.org/10.1093/biomet/83.2.299. Received: 01 December 1994; Revision received: 01 August 1995; Published: 01 June 1996.
This work treats the problem of estimating the predictive density of a random vector when both the mean vector and the variance are unknown. We prove that the reference predictive density in this context is inadmissible under the Kullback–Leibler loss in a nonasymptotic framework. Our result holds even when the dimension of the vector is strictly lower than three, which is surprising in comparison with the known-variance setting. Finally, we discuss the relationship between the prediction and the estimation problems.
In this paper, we consider posterior predictive distributions of Type-II censored data for an inverse Weibull distribution. These distributions are given in terms of conditional density functions and conditional survival functions. Although the conditional survival functions were expressed in integral form in previous studies, we derive them in closed form and thereby reduce the computational cost. In addition, we calculate predictive confidence intervals and coverage probabilities for the unobserved values by using the posterior predictive survival functions.
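Illustrative note (parametrization and notation chosen here for concreteness and possibly differing from the paper's): writing the inverse Weibull distribution function as $F(x; \alpha, \beta) = \exp\{-\alpha x^{-\beta}\}$ for $x > 0$, the values censored beyond the $r$-th order statistic are, conditionally on $X_{(r)} = x_{(r)}$, distributed as independent draws from the parent distribution left-truncated at $x_{(r)}$. Hence the conditional survival function of a censored value $Y$ is $\Pr(Y > y \mid X_{(r)} = x_{(r)}, \alpha, \beta) = \{1 - F(y; \alpha, \beta)\}/\{1 - F(x_{(r)}; \alpha, \beta)\}$ for $y > x_{(r)}$; the posterior predictive survival function averages this ratio over the posterior of $(\alpha, \beta)$, and predictive confidence limits and coverage probabilities follow by inverting it.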
A family of estimators, each of which dominates the "usual" one, is given for the problem of simultaneously estimating means of three or more independent normal random variables which have a common unknown variance. Charles Stein [4] established the existence of such estimators (for the case of a known variance) and later, with James [3], exhibited some, both for the case of unknown common variances considered here and for other cases as well. Alam and Thompson [1] have also obtained estimators which dominate the usual one. The class of estimators given in this paper contains those of James and Stein and also those of Alam and Thompson.
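Illustrative note (a classical member of such a family, stated here for orientation rather than quoted from the paper): if $X \sim N_p(\theta, \sigma^2 I)$ with $p \ge 3$ and $S \sim \sigma^2 \chi^2_m$ is an independent estimate of the common unknown variance, the James–Stein estimator $\hat\theta_{\mathrm{JS}}(X, S) = \bigl(1 - \tfrac{p-2}{m+2}\, \tfrac{S}{\|X\|^2}\bigr) X$ has uniformly smaller total squared-error risk than the usual estimator $\hat\theta_0(X) = X$; the families referred to above replace the constant shrinkage factor by a suitable function of $\|X\|^2 / S$.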
We investigate shrinkage methods for constructing predictive distributions. We consider the multivariate Normal model with a known covariance matrix and show that there exists a shrinkage predictive distribution dominating the Bayesian predictive distribution based on the vague prior when the dimension is not less than three. Kullback–Leibler divergence from the true distribution to a predictive distribution is adopted as a loss function.
Let $X \mid \mu \sim N_p(\mu, v_x I)$ and $Y \mid \mu \sim N_p(\mu, v_y I)$ be independent $p$-dimensional multivariate normal vectors with common unknown mean $\mu$. Based on only observing $X = x$, we consider the problem of obtaining a predictive density $\hat{p}(y \mid x)$ for $Y$ that is close to $p(y \mid \mu)$ as measured by expected Kullback–Leibler loss. A natural procedure for this problem is the (formal) Bayes predictive density $\hat{p}_{\mathrm{U}}(y \mid x)$ under the uniform prior $\pi_{\mathrm{U}}(\mu) \equiv 1$, which is best invariant and minimax. We show that any Bayes predictive density will be minimax if it is obtained by a prior yielding a marginal that is superharmonic or whose square root is superharmonic. This yields wide classes of minimax procedures that dominate $\hat{p}_{\mathrm{U}}(y \mid x)$, including Bayes predictive densities under superharmonic priors. Fundamental similarities and differences with the parallel theory of estimating a multivariate normal mean under quadratic loss are described.
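Illustrative note (a standard computation in the notation of the abstract): under the uniform prior the posterior of $\mu$ given $X = x$ is $N_p(x, v_x I)$, so the formal Bayes predictive density is $\hat{p}_{\mathrm{U}}(y \mid x) = N_p(y;\, x,\, (v_x + v_y) I)$; a familiar superharmonic prior yielding a dominating predictive density when $p \ge 3$ is Stein's harmonic prior $\pi_{\mathrm{S}}(\mu) \propto \|\mu\|^{2-p}$.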
Let $\mathbf{X}$ be a $p$-variate $(p \geqq 3)$ vector normally distributed with mean $\mathbf{\theta}$ and covariance matrix $\Sigma$, positive definite but unknown. Let $A$ be a $p \times p$ Wishart matrix with parameters $(n, \Sigma)$, independent of $\mathbf{X}$. To estimate $\mathbf{\theta}$ relative to quadratic loss function $(\hat{\mathbf{\theta}} - \mathbf{\theta})'\Sigma^{-1}(\hat{\mathbf{\theta}} - \mathbf{\theta})$, we obtain a family of minimax estimators $\mathbf{\delta}(\mathbf{X}, \mathbf{A})$ based on $\mathbf{X}$ and $\mathbf{A}$ through $\mathbf{X}$ and $\mathbf{X}'\mathbf{A}^{-1}\mathbf{X}$. It is shown that there are minimax estimators of the form $\mathbf{\delta}(\mathbf{X}, \mathbf{A})$ which are also generalized Bayes. A special case where $\Sigma = \sigma^2\mathbf{I}$ is also considered.
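Illustrative note (only the general shape of the estimators is sketched here; the precise conditions and constants are those of the paper): estimators of the form $\boldsymbol{\delta}(\mathbf{X}, \mathbf{A}) = \bigl(1 - r(F)/F\bigr)\mathbf{X}$ with $F = \mathbf{X}'\mathbf{A}^{-1}\mathbf{X}$ depend on the data only through $\mathbf{X}$ and the scalar $F$, and minimaxity is obtained by restricting the shrinkage function $r$, much as in the known-covariance theory where $\mathbf{A}$ is replaced by $\Sigma$.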
Fitting a parametric model or estimating a parametric density function plays an important role in a number of statistical applications. Two widely-used methods, one replacing the unknown parameter by an efficient estimate and so termed estimative and the other using a mixture of the possible density functions and commonly termed predictive, are compared. On a general criterion of closeness of fit based on a discriminating information measure the predictive method is shown to be preferable. Explicit measures of the relative closeness of predictive and estimative fits are obtained for gamma and multinomial models.
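Illustrative note (the normal case with known variance, given only to fix ideas; the paper itself works out the gamma and multinomial models): for $X_1, \dots, X_n$ i.i.d. $N(\mu, \sigma^2)$ with $\sigma^2$ known, the estimative fit plugs the sample mean into the model, giving $N(\bar{x}, \sigma^2)$, whereas the predictive fit under a uniform prior on $\mu$ is $N(\bar{x}, \sigma^2(1 + 1/n))$. The extra variance $\sigma^2 / n$ accounts for the uncertainty in $\bar{x}$, and it is this allowance that makes the predictive fit closer to the true density on average under a Kullback–Leibler-type criterion.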
In this paper, the methods of information geometry are employed to investigate a generalized Bayes rule for prediction. Taking α-divergences as the loss functions, we consider optimality and asymptotic properties of the generalized Bayesian predictive densities. We show that the Bayesian predictive densities minimize a generalized Bayes risk. We also find that the asymptotic expansions of the densities are related to the coefficients of the α-connections of a statistical manifold. In addition, we discuss the difference between two risk functions of the generalized Bayesian predictions based on different priors. Finally, using non-informative priors (i.e., Jeffreys and reference priors), the uniform prior, and a conjugate prior, two examples are presented to illustrate the main results.
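Illustrative note (one common Amari-type parametrization; the paper's convention may differ by a reparametrization of α): the α-divergence used as a loss can be written $D_\alpha(p, q) = \frac{4}{1 - \alpha^2}\bigl(1 - \int p(y)^{(1-\alpha)/2}\, q(y)^{(1+\alpha)/2}\, dy\bigr)$ for $\alpha \ne \pm 1$, with the Kullback–Leibler divergences $D(p \,\|\, q)$ and $D(q \,\|\, p)$ recovered in the limits $\alpha \to -1$ and $\alpha \to 1$, respectively, and a multiple of the squared Hellinger distance obtained at $\alpha = 0$.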
In the case of prior knowledge about the unknown parameter, the Bayesian predictive density coincides with the Bayes estimator for the true density in the sense of the Kullback–Leibler divergence, but this is no longer true if we consider another loss function. In this paper we present a generalized Bayes rule to obtain Bayes density estimators with respect to any α-divergence, including the Kullback–Leibler divergence and the Hellinger distance. For curved exponential models, we study the asymptotic behaviour of these predictive densities. We show that, whatever prior we use, the generalized Bayes rule improves (in a non-Bayesian sense) the estimative density corresponding to a bias modification of the maximum likelihood estimator. It gives rise to a correspondence between choosing a prior density for the generalized Bayes rule and fixing a bias for the maximum likelihood estimator in the classical setting. A criterion for comparing and selecting prior densities is also given.