This study treats the asymptotic distribution of measures of predictive power for generalized linear models (GLMs). We focus on the regression correlation coefficient (RCC), one of these measures. The RCC, proposed by Zheng and Agresti, is a population value and a generalization of the population coefficient of determination; it is therefore easy to interpret and familiar. Recently, Takahashi and Kurosawa provided an explicit form of the RCC and proposed a new RCC estimator for the Poisson regression model, and they showed the validity of the new estimator compared with other estimators. This study discusses new statistical properties of the RCC for the Poisson regression model. Furthermore, we establish the asymptotic normality of the RCC estimator.
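The population RCC discussed here is the correlation between the response and its conditional expectation given the covariates (see also the Zheng and Agresti abstract further below); displaying the definition with generic symbols $Y$ for the response and $X$ for the covariate vector may help fix ideas:
$$
\mathrm{RCC}=\operatorname{corr}\bigl(Y,\mu(X)\bigr)=\frac{\operatorname{Cov}\bigl(Y,\mathrm{E}[Y\mid X]\bigr)}{\sqrt{\operatorname{Var}(Y)\,\operatorname{Var}\bigl(\mathrm{E}[Y\mid X]\bigr)}},\qquad \mu(X)=\mathrm{E}[Y\mid X],
$$
and the sample version correlates the observed responses with the model's fitted values.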
We define $q$-linear arithmetical functions and $-q$-linear ones and show the algebraic independence over $\mathbb{C}(z)$ of their generating functions. Let $s_q(n)=\sum_{j\ge 0}e_j$, where $n=\sum_{j\ge 0}e_j q^j$ ($0\le e_j\le q-1$) is the $q$-adic expansion of $n\in\mathbb{N}$. The sum $\sum_{n\le x}s_q(n)$ and also the power sum $\sum_{n\le x}s_q(n)^l$ ($l\ge 1$) have been extensively studied (cf. [1], [8], [9]). In this paper we introduce $q$-linear functions and $-q$-linear ones and prove the algebraic independence of the generating functions and their values. Our method of proof is to apply two basic theorems in the transcendence theory of Mahler functions (see Lemmas 2.1 and 2.2 below).
In this article, we study the statistical properties of the goodness-of-fit measure $m_{pp}$ proposed by Eshima & Tabata (2007, Statistics & Probability Letters 77, 583–593) for generalised linear models. Focusing on the special case of Poisson regression using the canonical log link function, and assuming a random vector X of covariates, we obtain an explicit form for $m_{pp}$ that enables us to study its properties and construct a new estimator for the measure by utilising information about the shape of the covariate distribution. Simulations show that the newly proposed estimator for $m_{pp}$ exhibits better performance in terms of mean squared error than the simple unbiased covariance estimator, especially for larger absolute values of the slope coefficients. In contrast, it may be more unstable when the value of the slope coefficient is close to the boundary of the domain of the moment generating function for the corresponding covariate. We illustrate the application of $m_{pp}$ on a data set of counts of complaints against doctors working in a hospital emergency unit, in particular showing how our proposed estimator can be efficiently computed across a series of candidate models.
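The role of the covariate distribution, and of the domain of its moment generating function, can be seen from a short calculation under the stated Poisson model with log link; this is only a sketch of the kind of expression involved, using a single covariate $X$ with illustrative coefficients $\beta_0,\beta_1$:
$$
Y\mid X\sim\mathrm{Poisson}\bigl(e^{\beta_0+\beta_1X}\bigr)\quad\Longrightarrow\quad \mathrm{E}[Y]=\mathrm{E}\bigl[e^{\beta_0+\beta_1X}\bigr]=e^{\beta_0}M_X(\beta_1),
$$
where $M_X$ is the moment generating function of $X$. Expectations of this type are finite only when the slope lies inside the domain of $M_X$, which is consistent with the instability reported near its boundary.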
Let $d\geq 2$ be an integer. In 2010, the second, third, and fourth authors gave necessary and sufficient conditions for the infinite products $$ \prod_{\textstyle {k=1\atop U_{d^k}\neq-a_i}}^{\infty}\biggl( 1+\frac{a_i}{U_{d^k}}\biggr)\quad (i=1,\dots,
In this paper, we consider posterior predictive distributions of Type-II censored data for an inverse Weibull distribution. These functions are given by using conditional density functions and conditional survival functions. Although the conditional survival functions were expressed in integral form in previous studies, we derive them in closed form and thereby reduce the computation cost. In addition, we calculate predictive confidence intervals and coverage probabilities for unobserved values by using the posterior predictive survival functions.
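For reference, one common parameterisation of the inverse Weibull distribution (the paper's own parameterisation may differ) has shape $\beta>0$ and scale $\theta>0$ with
$$
F(x)=\exp\bigl(-(\theta/x)^{\beta}\bigr),\qquad f(x)=\beta\theta^{\beta}x^{-\beta-1}\exp\bigl(-(\theta/x)^{\beta}\bigr),\qquad x>0,
$$
so the survival function is $1-\exp(-(\theta/x)^{\beta})$; the closed forms mentioned above concern conditional versions of these quantities given the observed censored order statistics.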
In this paper, we consider estimation of the unknown parameters of a conditional Gaussian MA(1) model. In the majority of cases, the maximum-likelihood estimator is chosen because it is consistent. However, for small sample sizes the error is large, because the estimator has a bias of $O(n^{-1})$. Therefore, we derive the $O(n^{-1})$ bias of the maximum-likelihood estimator for the conditional Gaussian MA(1) model. Moreover, we propose new estimators for the unknown parameters of the conditional Gaussian MA(1) model based on this $O(n^{-1})$ bias. We investigate the properties of the bias, as well as the asymptotic variance of the maximum-likelihood estimators of the unknown parameters, by performing simulations. Finally, we demonstrate the validity of the new estimators through the simulation study.
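The setting can be summarised as follows; the notation ($\theta$, $\sigma^2$, $b(\theta)$) is illustrative rather than taken from the paper. The Gaussian MA(1) model is
$$
X_t=\varepsilon_t+\theta\,\varepsilon_{t-1},\qquad \varepsilon_t\sim\mathrm{N}(0,\sigma^2)\ \text{i.i.d.},
$$
and, given a bias expansion $\mathrm{E}[\hat\theta_{\mathrm{ML}}]=\theta+b(\theta)/n+o(n^{-1})$, a typical bias-corrected estimator of the kind proposed takes the form
$$
\tilde\theta=\hat\theta_{\mathrm{ML}}-\frac{b(\hat\theta_{\mathrm{ML}})}{n}.
$$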
This paper studies a Metropolis-Hastings (MH) algorithm for the unknown parameters of a multinomial logit model. The MH algorithm, one of the standard tools of Bayesian estimation, requires prior and proposal distributions. Selecting the prior and proposal distributions is an important issue in Bayesian estimation; however, there is no decisive approach for determining them. The posterior distribution is determined by the prior and the likelihood, and the MH algorithm generates samples from the posterior distribution of the unknown parameters. Unless appropriate distributions are given, the resulting posterior inference can be misleading. In this paper, we discuss how the choice of proposal distribution affects the behaviour of the autocorrelation functions of the generated samples.
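As a rough illustration of the kind of sampler being compared (not the authors' implementation; the data, prior scale and proposal scale below are invented for the example), the following sketch runs a random-walk Metropolis-Hastings algorithm for the coefficients of a three-category multinomial logit model with a normal prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n observations, p covariates, K = 3 categories (category 0 is the baseline).
n, p, K = 200, 2, 3
X = rng.normal(size=(n, p))
beta_true = rng.normal(scale=1.0, size=(K - 1, p))
logits = np.column_stack([np.zeros(n), X @ beta_true.T])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(K, p=pr) for pr in probs])

def log_posterior(beta_flat, prior_sd=10.0):
    """Multinomial-logit log-likelihood plus an independent N(0, prior_sd^2) prior."""
    beta = beta_flat.reshape(K - 1, p)
    logit = np.column_stack([np.zeros(n), X @ beta.T])
    loglik = np.sum(logit[np.arange(n), y]) - np.sum(np.log(np.exp(logit).sum(axis=1)))
    logprior = -0.5 * np.sum(beta_flat ** 2) / prior_sd ** 2
    return loglik + logprior

def random_walk_mh(n_iter=5000, step=0.1):
    """Random-walk Metropolis-Hastings with a spherical normal proposal."""
    beta = np.zeros((K - 1) * p)
    current_lp = log_posterior(beta)
    samples = np.empty((n_iter, beta.size))
    for t in range(n_iter):
        proposal = beta + step * rng.normal(size=beta.size)
        proposal_lp = log_posterior(proposal)
        if np.log(rng.uniform()) < proposal_lp - current_lp:  # MH accept/reject step
            beta, current_lp = proposal, proposal_lp
        samples[t] = beta
    return samples

draws = random_walk_mh()
print(draws[1000:].mean(axis=0))  # posterior means after discarding burn-in
```

The discussion of autocorrelation functions corresponds to varying the proposal (here the scale parameter `step`, or the proposal family itself) and inspecting the autocorrelation of the resulting draws.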
Cahen's constant is defined as the alternating sum of the reciprocals of the terms of Sylvester's sequence minus 1. Davison and Shallit proved the transcendence of the constant, and Becker improved it. In this paper, we study the rationality of functions satisfying certain functional equations and generalize the result of Becker by a variant of Mahler's method.
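Concretely (standard facts recalled here, not results of the paper): Sylvester's sequence is defined by
$$
s_0=2,\qquad s_{n+1}=s_n^{2}-s_n+1,
$$
so that $s_0,s_1,s_2,\dots=2,3,7,43,1807,\dots$, and Cahen's constant is the alternating sum
$$
C=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{s_n-1}=\frac{1}{1}-\frac{1}{2}+\frac{1}{6}-\frac{1}{42}+\frac{1}{1806}-\cdots\approx 0.6434.
$$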
In this study, we consider a bias reduction of the conditional maximum likelihood estimators for the unknown parameters of a Gaussian second-order moving average (MA(2)) model. In many cases, we use the maximum likelihood estimator because it is consistent; however, when the sample size n is small, the error is large because the estimator has a bias of $O(n^{-1})$. Furthermore, the exact form of the maximum likelihood estimator for moving average models is somewhat complicated even for Gaussian models, so we sometimes rely on simpler maximum likelihood estimation methods. As one such method, we focus on the conditional maximum likelihood estimator and examine its bias for a Gaussian MA(2) model. Moreover, we propose new estimators for the unknown parameters of the Gaussian MA(2) model based on the bias of the conditional maximum likelihood estimators. By performing simulations, we investigate the properties of this bias, as well as the asymptotic variance of the conditional maximum likelihood estimators of the unknown parameters. Finally, we confirm the validity of the new estimators through the simulation study.
Many researchers have proposed numerous Bayesian predictive densities for Type-II censored data, which are generated as ordered observations. However, their evaluations of these predictive densities were insufficient because the Bayesian predictive density depends on prior parameters, and so their selection remains a difficulty. In this study, we consider two types of predictive densities, posterior predictive and plug-in, for Type-II censored observations from an exponential distribution. We discuss which predictive density is suitable using the risk under the Kullback–Leibler loss function. In our setting, we consider a Gamma prior, which is a conjugate prior, for mathematical tractability. We prove that, in our setting, the posterior predictive density based on an improper Gamma prior dominates the plug-in densities without depending on the selection of an unknown parameter. Finally, we show in a simulation study that the posterior predictive density outperforms the plug-in densities in terms of coverage probabilities for data unobserved due to censoring.
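A minimal sketch of the objects involved, assuming an exponential distribution with rate $\lambda$, a $\mathrm{Gamma}(a,b)$ prior on $\lambda$, and a Type-II censored sample consisting of the first $r$ of $n$ order statistics $x_{(1)}\le\cdots\le x_{(r)}$ (the notation is generic, not the paper's): the likelihood is proportional to $\lambda^{r}e^{-\lambda T}$ with $T=\sum_{i=1}^{r}x_{(i)}+(n-r)x_{(r)}$, so the posterior is $\mathrm{Gamma}(a+r,\,b+T)$ and the posterior predictive density of a new observation $y>0$ is
$$
\hat p(y\mid \text{data})=\int_0^\infty \lambda e^{-\lambda y}\,\pi(\lambda\mid \text{data})\,d\lambda
=\frac{(a+r)\,(b+T)^{a+r}}{(b+T+y)^{a+r+1}},
$$
while a plug-in density simply substitutes an estimate $\hat\lambda$ into $\lambda e^{-\lambda y}$.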
We compute the exact irrationality exponents of certain series of rational numbers, first studied in a special case by Hone, by transforming them into suitable continued fractions.
We state and prove three general formulas that allow one to transform formal finite sums into formal continued fractions, and apply them to generalize certain continued fraction expansions given by Hone and Varona.
This paper proposes a method for deriving interpretable common factors based on canonical correlation analysis applied to the vectors of common factors and manifest variables in the factor analysis model. First, an entropy-based method for measuring factor contributions is reviewed. Second, the entropy-based contribution measure of the common-factor vector is decomposed into those of canonical common factors, and it is also shown that the importance order of factors is that of their canonical correlation coefficients. Third, the method is applied to derive interpretable common factors. Numerical examples are provided to demonstrate the usefulness of the present approach.
Let $\{x_n\}$ be a sequence of rational numbers greater than one such that $x_{n+1} \geq x^2_n$ for all sufficiently large $n$ and let $\varepsilon_n \in \{-1,1\}$. Under certain growth conditions on the denominators of $x_{n+1}/x^2_n$ we prove that the irrationality exponent of the number $\sum^{\infty}_{n=1} \varepsilon_n/x_n$ is equal to $\limsup_{n\to\infty}(\log x_{n+1}/\log x_n)$.
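For instance (a standard example consistent with the statement above, not taken from the paper): taking $x_n=2^{2^n}$ and $\varepsilon_n=1$ gives $x_{n+1}=x_n^2$, so every ratio $x_{n+1}/x_n^2$ equals $1$ and any growth condition on its denominators holds trivially; the stated result then gives
$$
\mu\Bigl(\sum_{n=1}^{\infty}2^{-2^{n}}\Bigr)=\limsup_{n\to\infty}\frac{\log x_{n+1}}{\log x_n}=\limsup_{n\to\infty}\frac{2^{n+1}}{2^{n}}=2,
$$
the smallest possible irrationality exponent for an irrational number.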
We generalize the naïve estimator of a Poisson regression model with measurement errors as discussed in Kukush et al. [1]. The explanatory variable is not always normally distributed, as they assume it to be. In this study, we assume that the explanatory variable and the measurement error are not limited to a normal distribution. We clarify the requirements for the existence of the naïve estimator and derive its asymptotic bias and asymptotic mean squared error (MSE). In addition, we propose a consistent estimator of the true parameter by correcting the bias of the naïve estimator. As illustrative examples, we present simulation studies that compare the performance of the naïve estimator and the new estimator for a Gamma explanatory variable with a normal error or a Gamma error.
We generalize the naive estimator of a Poisson regression model with a measurement error as discussed in Kukush et al. in 2004. The explanatory variable is not always normally distributed, as they assume it to be. In this study, we assume that the explanatory variable and the measurement error are not limited to a normal distribution. We clarify the requirements for the existence of the naive estimator and derive its asymptotic bias and asymptotic mean squared error (MSE). The requirements for the existence of the naive estimator can be expressed using an implicit function, and they can be deduced from the characteristics of Poisson regression models. In addition, using the implicit function obtained from the system of equations of the Poisson regression models, we propose a consistent estimator of the true parameter by correcting the bias of the naive estimator. As illustrative examples, we present simulation studies that compare the performance of the naive estimator and the new estimator for a Gamma explanatory variable with a normal error or a Gamma error.
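As background for what the naive estimator converges to, here is a short calculation in the fully normal case that is being generalised (illustrative notation; a single covariate with classical additive error): if $Y\mid X\sim\mathrm{Poisson}(e^{\beta_0+\beta_1X})$, $X\sim\mathrm{N}(\mu_x,\sigma_x^2)$ and $W=X+U$ with $U\sim\mathrm{N}(0,\sigma_u^2)$ independent of $X$, then
$$
\mathrm{E}[Y\mid W]=\mathrm{E}\bigl[e^{\beta_0+\beta_1X}\mid W\bigr]=\exp\bigl(\beta_0^{*}+\beta_1^{*}W\bigr),\qquad
\beta_1^{*}=\frac{\sigma_x^2}{\sigma_x^2+\sigma_u^2}\,\beta_1,
$$
so the naive Poisson fit of $Y$ on $W$ targets an attenuated slope. With a non-normal explanatory variable or error the limit is no longer of this simple form, which is the situation analysed above.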
Logistic regression models have a severe problem called separation. The maximum likelihood estimator does not exist in logistic regression models for data structures under separation; the forcibly estimated maximum likelihood estimate may take an extremely large value. Separation often occurs when the dataset is small. Consequently, goodness-of-fit measures based on the likelihood ratio, and those based on covariance functions, computed from the maximum likelihood estimate indicate that the model is excessively good regardless of the cause of the separation. The Firth and exact logistic regression methods are valid estimation methods for separation problems. We therefore propose methods to reasonably evaluate the goodness-of-fit of statistical models under separation in small samples using these estimation methods. The goodness-of-fit measures based on covariance functions, namely the regression correlation coefficient (a generalization of the multiple correlation coefficient) and the entropy coefficient of determination, are then combined with these methods for the separated data. In addition, we conduct a data analysis using a non-separation ratio defined via the regression depth.
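For context on one of the remedies mentioned above (standard background, not a result of this paper): Firth's method maximises a penalised log-likelihood in which a Jeffreys-prior penalty keeps the estimates finite even under separation,
$$
\ell^{*}(\beta)=\ell(\beta)+\tfrac{1}{2}\log\bigl|\,I(\beta)\,\bigr|,
$$
where $I(\beta)$ is the Fisher information matrix. A textbook example of complete separation is a binary response with $x=(1,2,3,4)$ and $y=(0,0,1,1)$, for which the ordinary maximum likelihood estimate of the slope diverges, while the penalised estimate remains finite.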
The aim of this paper is to prove the transcendence of certain infinite products. As applications, we get necessary and sufficient conditions for the transcendence of the value of $\prod_{k=0}^{\infty}(1+a_{k}^{(1)}z_{1}r^{k}+\cdots+a_{k}^{(m)}z_{m}r^{k})$ at appropriate algebraic points, where $r\ge 2$ is an integer and $\{a_n^{(i)}\}_{n\ge 0}$ ($1\le i\le m$) are suitable sequences of algebraic numbers.
An Engel series is a sum of reciprocals of a non-decreasing sequence $(x_n)$ of positive integers, which is such that each term is divisible by the previous one, and a Pierce series is an alternating sum of the reciprocals of a sequence with the same property. Given an arbitrary rational number, we show that there is a family of Engel series which when added to it produces a transcendental number $\alpha$ whose continued fraction expansion is determined explicitly by the corresponding sequence $(x_n)$, where the latter is generated by a certain nonlinear recurrence of second order. We also present an analogous result for a rational number with a Pierce series added to or subtracted from it. In both situations (a rational number combined with either an Engel or a Pierce series), the irrationality exponent is bounded below by $(3+\sqrt{5})/2$, and we further identify infinite families of transcendental numbers $\alpha$ whose irrationality exponent can be computed precisely. In addition, we construct the continued fraction expansion for an arbitrary rational number added to an Engel series with the stronger property that $x_j^2$ divides $x_{j+1}$ for all $j$.
We consider series of the form
$$
\frac{p}{q}+\sum_{j=2}^{\infty}\frac{1}{x_j},
$$
where $x_1=q$ and the integer sequence $(x_n)$ satisfies a certain non-autonomous recurrence of second order, which entails that $x_n\mid x_{n+1}$ for $n\ge 1$. It is shown that the terms of the sequence, and multiples of the ratios of successive terms, appear interlaced in the continued fraction expansion of the sum of the series, which is a transcendental number.
This paper studies summary measures of the predictive power of a generalized linear model, paying special attention to a generalization of the multiple correlation coefficient from ordinary linear regression. The population value is the correlation between the response and its conditional expectation given the predictors, and the sample value is the correlation between the observed response and the model predicted value. We compare four estimators of the measure in terms of bias, mean squared error and behaviour in the presence of overparameterization. The sample estimator and a jack-knife estimator usually behave adequately, but a cross-validation estimator has a large negative bias with large mean squared error. One can use bootstrap methods to construct confidence intervals for the population value of the correlation measure and to estimate the degree to which a model selection procedure may provide an overly optimistic measure of the actual predictive power.
We obtain a general transcendence theorem for the solutions of a certain type of functional equation. A particular and striking consequence of the general result is that, for any irrational number $w$, the function takes transcendental values at all algebraic points $\alpha$ with $0<|\alpha|<1$.
We consider a family of integer sequences generated by nonlinear recurrences of the second order, which have the curious property that the terms of the sequence, and integer multiples of the ratios of successive terms (which are also integers), appear interlaced in the continued fraction expansion of the sum of the reciprocals of the terms. Using the rapid (double exponential) growth of the terms, for each sequence it is shown that the sum of the reciprocals is a transcendental number.
In this article we consider the problem of prediction for a general class of Gaussian models, which includes, among others, autoregressive moving average time-series models, linear Gaussian state space models and Gaussian Markov random fields. Using an idea presented in Sjöstedt-De Luna and Young (2003), in the context of spatial statistics, we discuss a method for obtaining prediction limits for a future random variable of interest, taking into account the uncertainty introduced by estimating the unknown parameters. The proposed prediction limits can be viewed as a modification of the estimative prediction limit, with unconditional, and eventually conditional, coverage error of smaller asymptotic order. The modifying term has a quite simple form and it involves the bias and the mean square error of the plug-in estimators for the conditional expectation and the conditional variance of the future observation. Applications of the results to Gaussian time-series models are presented.
Part 1 Background: scope; notation; distributions derived from the normal distribution.
Part 2 Model fitting: plant growth sample; birthweight sample; notation for linear models; exercises.
Part 3 Exponential family of distributions and generalized linear models: exponential family of distributions; generalized linear models.
Part 4 Estimation: method of maximum likelihood; method of least squares; estimation for generalized linear models; example of simple linear regression for Poisson responses; MINITAB program for simple linear regression with Poisson responses; GLIM.
Part 5 Inference: introduction; sampling distribution for scores; sampling distribution for maximum likelihood estimators; confidence intervals for the model parameters; adequacy of a model; sampling distribution for the log-likelihood statistic; log-likelihood ratio statistic (deviance); assessing goodness of fit; hypothesis testing; residuals.
Part 6 Multiple regression: maximum likelihood estimation; least squares estimation; log-likelihood ratio statistic; multiple correlation coefficient and R; numerical example; residual plots; orthogonality; collinearity; model selection; non-linear regression.
Part 7 Analysis of variance and covariance: basic results; one-factor ANOVA; two-factor ANOVA with replication; crossed and nested factors; more complicated models; choice of constraint equations and dummy variables; analysis of covariance.
Part 8 Binary variables and logistic regression: probability distributions; generalized linear models; dose response models; general logistic regression; maximum likelihood estimation and the log-likelihood ratio statistic; other criteria for goodness of fit; least squares methods; remarks.
Part 9 Contingency tables and log-linear models: probability distributions; log-linear models; maximum likelihood estimation; hypothesis testing and goodness of fit; numerical examples; remarks.
Appendices: conventional parametrizations with sum-to-zero constraints; corner-point parametrizations; three response variables; two response variables and one explanatory variable; one response variable and two explanatory variables.
A technique is given for the Edgeworth type asymptotic expansion for the joint as well as marginal and conditional distributions of the maximum likelihood estimators in autoregressive moving-average (ARMA) models. Our methodology is illustrated and results on the expansions for some simple ARMA models are presented.
A generalization of the sampling method introduced by Metropolis et al. (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and difficulties of assessing the error in Monte Carlo estimates. Examples of the methods, including the generation of random orthogonal matrices and potential applications of the methods to numerical problems arising in statistics, are discussed.
Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also gives guidance on the choice of sample size and allocation.
Let $a\in\mathbb{N}\setminus\{0,1\}$ and let $(b_n)$ be a sequence of rational integers satisfying $b_n=O(\eta^{-2^n})$ for every $\eta\in\,]0,1[$. We prove that the number $S=\sum_{n=0}^{+\infty}1/(a^{2^n}+b_n)$ is transcendental by using a special form of Mahler's transcendence method.
This work treats the problem of estimating the predictive density of a random vector when both the mean vector and the variance are unknown. We prove that the density of reference in this context is inadmissible under the Kullback–Leibler loss in a nonasymptotic framework. Our result holds even when the dimension of the vector is strictly lower than three, which is surprising compared to the known variance setting. Finally, we discuss the relationship between the prediction and the estimation problems.
A generalization of the coefficient of determination $R^2$ to general regression models is discussed. A modification of an earlier definition to allow for discrete models is proposed.
A family of estimators, each of which dominates the "usual" one, is given for the problem of simultaneously estimating means of three or more independent normal random variables which have a common unknown variance. Charles Stein [4] established the existence of such estimators (for the case of a known variance) and later, with James [3], exhibited some, both for the case of unknown common variances considered here and for other cases as well. Alam and Thompson [1] have also obtained estimators which dominate the usual one. The class of estimators given in this paper contains those of James and Stein and also those of Alam and Thompson.
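For concreteness (a standard statement of the James–Stein result referred to above, written in generic notation): if $X\sim\mathrm{N}_p(\theta,\sigma^2 I_p)$ with $p\ge 3$ and $S$ is an independent statistic with $S/\sigma^2\sim\chi^2_m$, then the estimator
$$
\delta(X,S)=\Bigl(1-\frac{(p-2)\,S}{(m+2)\,\|X\|^{2}}\Bigr)X
$$
dominates the usual estimator $X$ under squared error loss; the family discussed in the paper contains estimators of this general shrinkage form.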
This article considers maximum likelihood (ML) and restricted maximum likelihood (REML) estimation of time series regression models with autoregressive AR(p) noise. Approximate biases of the ML and REML estimators of the AR parameters, based on their approximate representations, are derived. In addition, a bias result for the ML estimator (MLE) of the error variance is established. Numerical results are presented to illustrate the biases of the MLE and REML estimator for the AR parameters, and simulation results are provided to assess the adequacy of our approximations. The impact of bias of the AR estimates on testing of linear trend in a regression trend model is also investigated. For a time series of short or moderate sample length, the REML estimator is generally much less biased than the MLE. Consequently, the REML approach leads to more accurate inferences for the regression parameters.
Let $X\mid\mu\sim N_p(\mu,v_xI)$ and $Y\mid\mu\sim N_p(\mu,v_yI)$ be independent p-dimensional multivariate normal vectors with common unknown mean $\mu$. Based on only observing $X=x$, we consider the problem of obtaining a predictive density $\hat{p}(y\mid x)$ for $Y$ that is close to $p(y\mid\mu)$ as measured by expected Kullback–Leibler loss. A natural procedure for this problem is the (formal) Bayes predictive density $\hat{p}_{\mathrm{U}}(y\mid x)$ under the uniform prior $\pi_{\mathrm{U}}(\mu)\equiv 1$, which is best invariant and minimax. We show that any Bayes predictive density will be minimax if it is obtained by a prior yielding a marginal that is superharmonic or whose square root is superharmonic. This yields wide classes of minimax procedures that dominate $\hat{p}_{\mathrm{U}}(y\mid x)$, including Bayes predictive densities under superharmonic priors. Fundamental similarities and differences with the parallel theory of estimating a multivariate normal mean under quadratic loss are described.
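For orientation (a standard computation, not quoted from the paper): under the uniform prior the Bayes predictive density is itself Gaussian,
$$
\hat p_{\mathrm{U}}(y\mid x)=\int_{\mathbb{R}^p} \mathrm{N}_p(y;\mu,v_yI)\,\mathrm{N}_p(\mu;x,v_xI)\,d\mu
=\mathrm{N}_p\bigl(y;\,x,\,(v_x+v_y)I\bigr),
$$
and the minimax results concern priors whose marginals improve on this benchmark.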
We investigate shrinkage methods for constructing predictive distributions. We consider the multivariate Normal model with a known covariance matrix and show that there exists a shrinkage predictive distribution dominating the Bayesian predictive distribution based on the vague prior when the dimension is not less than three. Kullback–Leibler divergence from the true distribution to a predictive distribution is adopted as a loss function.
We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution-Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel "relaxation" algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.
We summarize and classify results on the evaluations and expressions in closed form of various reciprocal sums of Fibonacci numbers and Lucas numbers using classical functions, such as theta functions, complete elliptic integrals of the first and second kind, Lambert series, and q-exponential functions. In the final section we give closed form expressions of series of rational functions and evaluate series involving Fibonacci and Lucas numbers.
On asymptotic properties of predictive distributions. Fumiyasu Komaki. Biometrika, Volume 83, Issue 2, June 1996, Pages 299–313, https://doi.org/10.1093/biomet/83.2.299.
Statistical models whose independent variables are subject to measurement errors are often referred to as 'errors-in-variables models'. To correct for the effects of measurement error on parameter estimation, this paper considers a correction for score functions. A corrected score function is one whose expectation with respect to the measurement error distribution coincides with the usual score function based on the unknown true independent variables. This approach makes it possible to do inference as well as estimation of model parameters without additional assumptions. The corrected score functions of some generalized linear models are obtained.
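To illustrate the idea for the model most relevant to the papers above (a sketch with illustrative notation, assuming a Poisson log-linear model and additive normal measurement error $W=X+U$, $U\sim\mathrm{N}(0,\Sigma_u)$): the usual score based on the true covariate is $U(\beta)=\sum_i\{Y_i-\exp(\beta^{\top}X_i)\}X_i$, and a corrected score built from the observed $W_i$ is
$$
U^{*}(\beta)=\sum_{i}\Bigl[\,Y_iW_i-\bigl(W_i-\Sigma_u\beta\bigr)\exp\bigl(\beta^{\top}W_i-\tfrac{1}{2}\beta^{\top}\Sigma_u\beta\bigr)\Bigr],
$$
which satisfies $\mathrm{E}\bigl[U^{*}(\beta)\mid X,Y\bigr]=U(\beta)$ because $\mathrm{E}[e^{\beta^{\top}U}]=e^{\beta^{\top}\Sigma_u\beta/2}$ and $\mathrm{E}[Ue^{\beta^{\top}U}]=\Sigma_u\beta\,e^{\beta^{\top}\Sigma_u\beta/2}$.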
Markov chain Monte Carlo (MCMC) methods make possible the use of flexible Bayesian models that would otherwise be computationally infeasible. In recent years, a great variety of such applications have been described in the literature. Applied statisticians who are new to these methods may have several questions and concerns, however: How much effort and expertise are needed to design and use a Markov chain sampler? How much confidence can one have in the answers that MCMC produces? How does the use of MCMC affect the rest of the model-building process? At the Joint Statistical Meetings in August, 1996, a panel of experienced MCMC users discussed these and other issues, as well as various "tricks of the trade." This article is an edited recreation of that discussion. Its purpose is to offer advice and guidance to novice users of MCMC—and to not-so-novice users as well. Topics include building confidence in simulation results, methods for speeding and assessing convergence, estimating standard errors, identification of models for which good MCMC algorithms exist, and the current state of software development. Key Words: Bayesian software; convergence assessment; Gibbs sampler; Metropolis-Hastings algorithm.