Coauthors (papers together)
Péter Major (2)
G. Tusnády (2)
P. Révész (1)
Let $\{x_1, x_2,\cdots\}$ be a sequence of i.i.d.r.v. with mean zero, variance one, and (1) $\mathbf{P}(|x_k| \geqq \lambda) \leqq C \exp(-\alpha\lambda^\varepsilon)$ for positive $\alpha, \varepsilon$. Let $f(t, x)$ (with its first partial derivatives) be of slow growth in $x$, let $F_n(x)$ be the distribution function of $(1/n) \sum^n_1 f(k/n, s_k/n^{\frac{1}{2}})$ where $s_k = x_1 + x_2 + \cdots + x_k$, and let $F(x)$ be the distribution function of $\int^1_0 f(t, w(t)) dt$ where $\{w(t)\}$ is Brownian motion. Then $\sup_x |F_n(x) - F(x)| = O((\log n)^\beta/n^{\frac{1}{2}})$ provided $F(x)$ has a bounded derivative. The proof uses the Skorokhod representation; also, a theorem is proven which would indicate that the Skorokhod representation cannot be used in general to obtain a rate of convergence better than $O(1/n^{\frac{1}{4}})$. A corresponding result is obtained if (1) is replaced by the existence of a finite $p$th moment, $p \geqq 4$.
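The convergence in distribution behind this rate can be checked numerically. Below is a minimal Monte Carlo sketch (our illustration, not from the paper) comparing the law of $(1/n)\sum_{k=1}^n f(k/n, s_k/n^{1/2})$ with that of $\int_0^1 f(t, w(t))\,dt$ for the assumed choice $f(t,x) = x^2$ and standard normal summands; all function names are ours, and the two-sample Kolmogorov-Smirnov distance is only resolved down to the Monte Carlo noise floor of order $\mathrm{reps}^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def riemann_functional(n, reps, f=lambda t, x: x * x):
    """Sample (1/n) * sum_{k=1..n} f(k/n, s_k / sqrt(n)) for i.i.d. N(0,1) steps."""
    steps = rng.standard_normal((reps, n))
    s = np.cumsum(steps, axis=1)               # partial sums s_1, ..., s_n
    t = np.arange(1, n + 1) / n                # time grid k/n
    return f(t, s / np.sqrt(n)).mean(axis=1)   # one sample of the functional per row

def brownian_functional(m, reps, f=lambda t, x: x * x):
    """Approximate integral_0^1 f(t, w(t)) dt on a fine grid of m points."""
    w = np.cumsum(rng.standard_normal((reps, m)) / np.sqrt(m), axis=1)
    t = np.arange(1, m + 1) / m
    return f(t, w).mean(axis=1)

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic sup_x |F_a(x) - F_b(x)|."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(Fa - Fb).max()

limit = brownian_functional(m=2000, reps=10_000)
for n in (10, 100, 1000):
    dist = ks_distance(riemann_functional(n, reps=10_000), limit)
    print(f"n={n:5d}  sup_x |F_n - F| ~ {dist:.4f}")
```

The printed distances shrink with $n$, consistent with the theorem; resolving the $(\log n)^\beta/n^{1/2}$ rate itself would require far more replications than this sketch uses.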
Article Data
History: submitted 04 June 1956; published online 17 July 2006.
ISSN (print): 0040-585X. ISSN (online): 1095-7219.
Publisher: Society for Industrial and Applied Mathematics. CODEN: TPRBAU.
Let $(X_j), j = 1, 2, \cdots$, be a sequence of independent random variables with the distribution functions $V_j(x)$. We assume the existence of ${\bf D}X_j = \sigma_j^2$, $s_n^2 = \sum\nolimits_{j = 1}^n \sigma_j^2$, ${\bf E}X_j = 0$, $j = 1, 2, \cdots$. We put \[ Z_n = \sum\limits_{j = 1}^n X_j / s_n. \] With the aid of the saddlepoint method of function theory, several local limit theorems are derived, in complete analogy to the previously known integral limit theorems for large deviations of H. Cramér [1] and V. Petrov [5]. These authors considered the behavior of the function ${\bf P}\{ Z_n < x\} = F_n(x)$ for $n \to \infty$, where $x$ together with $n$ becomes infinite ("large deviations"). V. Petrov generalized Cramér's theorem from the case of identically distributed $X_j$ to the general case and at the same time improved the remainder term and the growth of $x$. The present work shows that their method of proof, namely the introduction of a definite transformation of the distribution laws of the $X_j$, was very natural. It makes consistent use of the function-theoretic possibilities afforded by the assumption that the functions \[ M_j(z) = {\bf E}\, e^{zX_j} = \int_{-\infty}^\infty e^{zy} \, dV_j(y) \] are analytic in a strip $|\operatorname{Re} z| < A$. Theorem 1. Let conditions A–C be fulfilled. Then for sufficiently large $n$ each $Z_n$ possesses a distribution density $p_{Z_n}(x)$. Assume further that $x > 1$ and $x = o(\sqrt n)$ for $n \to \infty$. Then \[ \frac{p_{Z_n}(x)}{\varphi(x)} = e^{(x/\sqrt n)\lambda_n(x/\sqrt n)} \left[ 1 + O\left( \frac{x}{\sqrt n} \right) \right], \] where $\lambda_n(t)$ is a power series converging, uniformly in $n$, for sufficiently small values of $|t|$, and $\varphi(x)$ is the density of the normal distribution. For negative $x$ there is a similar relation. For identically distributed $X_j$, condition C can be considerably weakened; in this case Theorem 2 holds. In the case of a lattice distribution of the random variables $X_j$, an analogous limit relation holds (Theorem 3).
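To make the mechanism concrete in the simplest i.i.d. setting, here is a short numerical sketch (our illustration, not from the paper): for centered Exp(1) summands the density of $Z_n$ is an affinely transformed gamma density, so the saddlepoint (tilted) approximation can be compared against both the exact density and the plain normal density $\varphi(x)$. The choice of distribution and all names below are assumptions made for the example.

```python
import math

def exact_density(x, n):
    """Exact density of Z_n = (S_n - n)/sqrt(n) for S_n ~ Gamma(n, 1),
    i.e. i.i.d. centered Exp(1) summands (sigma_j = 1, s_n = sqrt(n))."""
    s = n + math.sqrt(n) * x
    log_f = (n - 1) * math.log(s) - s - math.lgamma(n)  # Gamma(n, 1) log-density
    return math.sqrt(n) * math.exp(log_f)

def saddlepoint_density(x, n):
    """Saddlepoint approximation: with K(z) = -log(1 - z) (CGF of Exp(1)),
    the saddle z_hat solves n * K'(z_hat) = s, giving z_hat = 1 - n/s."""
    s = n + math.sqrt(n) * x
    exponent = n * math.log(s / n) + n - s           # n*K(z_hat) - z_hat*s
    prefactor = math.sqrt(n / (2 * math.pi)) / s     # (2*pi*n*K''(z_hat))**-0.5
    return math.sqrt(n) * prefactor * math.exp(exponent)

n = 20
for x in (0.0, 1.0, 2.0, 4.0):  # probes the x = o(sqrt(n)) regime
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    print(f"x={x:3.1f}  exact={exact_density(x, n):.3e}  "
          f"saddlepoint={saddlepoint_density(x, n):.3e}  normal={phi:.3e}")
```

Already at moderate $x$ the normal density is off by an order of magnitude while the saddlepoint value tracks the exact density; this gap is precisely what the correction factor $e^{(x/\sqrt n)\lambda_n(x/\sqrt n)}$ in Theorem 1 accounts for.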
Let $H(y \mid x)$ be a family of distribution functions depending upon a real parameter $x$, and let $M(x) = \int^\infty_{-\infty} y \, dH(y \mid x)$ be the corresponding regression function. It is assumed that $M(x)$ is unknown to the experimenter, who is, however, allowed to take observations on $H(y \mid x)$ for any value $x$. Robbins and Monro [1] give a method for defining successively a sequence $\{x_n\}$ such that $x_n$ converges to $\theta$ in probability, where $\theta$ is a root of the equation $M(x) = \alpha$ and $\alpha$ is a given number. Wolfowitz [2] generalizes these results, and Kiefer and Wolfowitz [3] solve a similar problem in the case when $M(x)$ has a maximum at $x = \theta$. Using a lemma due to Loève [4], we show that in both cases $x_n$ converges to $\theta$ with probability one, under weaker conditions than those imposed in [2] and [3]. Further, we solve a similar problem in the case when $M(x)$ is the median of $H(y \mid x)$.
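For orientation, the procedure whose almost-sure convergence is at issue is the recursion $x_{n+1} = x_n - a_n(y_n - \alpha)$, where $y_n$ is an observation on $H(y \mid x_n)$ and the steps satisfy $\sum a_n = \infty$, $\sum a_n^2 < \infty$. The sketch below is our illustration under assumed choices (a quasi-linear $M$, Gaussian noise, $a_n = c/n$); nothing in it is specific to this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

ALPHA = 1.0  # we seek theta with M(theta) = ALPHA

def M(x):
    """Regression function, unknown to the experimenter (illustrative choice)."""
    return 2.0 * x + np.tanh(x)

def observe(x):
    """One noisy observation drawn from H(y | x), with mean M(x)."""
    return M(x) + rng.standard_normal()

def robbins_monro(x0=5.0, n_steps=100_000, c=1.0):
    """Robbins-Monro recursion x_{n+1} = x_n - a_n (y_n - alpha) with a_n = c/n."""
    x = x0
    for n in range(1, n_steps + 1):
        x -= (c / n) * (observe(x) - ALPHA)
    return x

x_hat = robbins_monro()
print(f"x_n = {x_hat:.4f},  M(x_n) - alpha = {M(x_hat) - ALPHA:+.2e}")
```

Running this drives $M(x_n) - \alpha$ to zero along essentially every sample path, which is the "with probability one" strengthening of the convergence in probability of Robbins and Monro.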
Asymptotic properties are established for the Robbins-Monro [1] procedure of stochastically solving the equation $M(x) = \alpha$. Two disjoint cases are treated in detail. The first may be called the "bounded" case, in which the assumptions we make are similar to those in the second case of Robbins and Monro. The second may be called the "quasi-linear" case, which restricts $M(x)$ to lie between two straight lines with finite and nonvanishing slopes but postulates only the boundedness of the moments of $Y(x) - M(x)$ (see Sec. 2 for notations). In both cases it is shown how to choose the sequence $\{a_n\}$ in order to establish the correct order of magnitude of the moments of $x_n - \theta$. Asymptotic normality of $a_n^{-1/2}(x_n - \theta)$ is proved in both cases under a further assumption. The case of a linear $M(x)$ is discussed to point up other possibilities. The statistical significance of our results is sketched.
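The normality statement can be checked empirically in the linear case mentioned at the end of the abstract. The following sketch is our illustration (the parameter names, noise model, and the quoted limiting variance $c^2\sigma^2/(2cb - 1)$, the classical value for $a_n = c/n$ with $cb > 1/2$, are background assumptions, not taken from the paper): many independent chains are run and the endpoints are standardized by $a_n^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(2)

def rm_endpoints(b=1.0, c=1.0, n_steps=10_000, reps=4_000, x0=3.0):
    """Run `reps` independent Robbins-Monro chains for the linear case
    M(x) = b*x, alpha = 0 (so theta = 0), with steps a_n = c/n and unit noise."""
    x = np.full(reps, x0)
    for n in range(1, n_steps + 1):
        y = b * x + rng.standard_normal(reps)  # observations Y(x_n)
        x -= (c / n) * y                       # x_{n+1} = x_n - a_n (y_n - alpha)
    return x, n_steps

x_final, n = rm_endpoints()
z = np.sqrt(n) * x_final  # a_n^{-1/2} (x_n - theta) with a_n = c/n, c = 1
# For c*b > 1/2 the classical limiting variance is c^2 sigma^2 / (2*c*b - 1) = 1 here.
print(f"mean = {z.mean():+.3f}, std = {z.std():.3f} (predicted 0, 1)")
```

With these choices the sample mean and standard deviation of $z$ come out close to $0$ and $1$, matching the predicted Gaussian limit for $a_n^{-1/2}(x_n - \theta)$.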