Economics, Econometrics and Finance › General Economics, Econometrics and Finance

Monetary Policy and Economic Impact

Description

This cluster of papers covers a wide range of topics in macroeconomic analysis, including monetary and fiscal policy, economic growth, financial development, business cycles, panel data analysis, interest rates, exchange rates, inflation dynamics, and structural change. The papers provide insights into the empirical and theoretical aspects of these macroeconomic phenomena and their implications for policy making.

Keywords

Monetary Policy; Fiscal Policy; Economic Growth; Financial Development; Business Cycles; Panel Data Analysis; Interest Rates; Exchange Rates; Inflation Dynamics; Structural Change

 This paper develops a new approach to the problem of testing the existence of a level relationship between a dependent variable and a set of regressors, when it is not known with certainty whether the underlying regressors are trend- or first-difference stationary. The proposed tests are based on standard F- and t-statistics used to test the significance of the lagged levels of the variables in a univariate equilibrium correction mechanism. The asymptotic distributions of these statistics are non-standard under the null hypothesis that there exists no level relationship, irrespective of whether the regressors are I(0) or I(1). Two sets of asymptotic critical values are provided: one when all regressors are purely I(1) and the other if they are all purely I(0). These two sets of critical values provide a band covering all possible classifications of the regressors into purely I(0), purely I(1) or mutually cointegrated. Accordingly, various bounds testing procedures are proposed. It is shown that the proposed tests are consistent, and their asymptotic distribution under the null and suitably defined local alternatives are derived. The empirical relevance of the bounds procedures is demonstrated by a re-examination of the earnings equation included in the UK Treasury macroeconometric model. Copyright © 2001 John Wiley & Sons, Ltd.
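The bounds procedure rests on an F-test of the lagged levels in the conditional equilibrium correction equation. The following is a minimal sketch on simulated data, not the full procedure: it uses a single regressor, a single lag, and omits the tabulated critical-value bounds, which must be taken from the paper; all data and parameter values are hypothetical.

```python
import numpy as np

def bounds_f_stat(y, x):
    """F-statistic on the lagged levels in a one-lag conditional ECM:
    dy_t = c + a*y_{t-1} + b*x_{t-1} + g*dx_t + e_t, with H0: a = b = 0."""
    dy, dx = np.diff(y), np.diff(x)
    ylag, xlag = y[:-1], x[:-1]
    n = dy.size
    # unrestricted regression, including the lagged levels
    Xu = np.column_stack([np.ones(n), ylag, xlag, dx])
    ssr_u = np.sum((dy - Xu @ np.linalg.lstsq(Xu, dy, rcond=None)[0]) ** 2)
    # restricted regression, lagged levels excluded
    Xr = np.column_stack([np.ones(n), dx])
    ssr_r = np.sum((dy - Xr @ np.linalg.lstsq(Xr, dy, rcond=None)[0]) ** 2)
    q, k = 2, Xu.shape[1]
    return ((ssr_r - ssr_u) / q) / (ssr_u / (n - k))

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=400))        # an I(1) regressor
y = 1.0 + 0.8 * x + rng.normal(size=400)   # a level relationship with x
F = bounds_f_stat(y, x)                    # compare with the tabulated bounds
```

If F exceeds the upper bound one rejects the null of no level relationship regardless of whether the regressors are I(0) or I(1); below the lower bound one cannot reject; between the bounds the test is inconclusive.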
The panel data unit root test suggested by Levin and Lin (LL) has been widely used in several applications, notably in papers on tests of the purchasing power parity hypothesis. This test is based on a very restrictive hypothesis which is rarely of interest in practice. The Im–Pesaran–Shin (IPS) test relaxes the restrictive assumption of the LL test. This paper argues that although the IPS test has been offered as a generalization of the LL test, it is best viewed as a test for summarizing the evidence from a number of independent tests of the same hypothesis. This problem has a long statistical history going back to R. A. Fisher. This paper suggests the Fisher test as a panel data unit root test, compares it with the LL and IPS tests and with the Bonferroni bounds test, which is valid for correlated tests. Overall, the evidence points to the Fisher test with bootstrap-based critical values as the preferred choice. We also suggest the use of the Fisher test for testing stationarity as the null and also in testing for cointegration in panel data.
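The Fisher combination described above is simple to compute: P = −2 Σ ln pᵢ, which under the null of a unit root in every series is χ² with 2N degrees of freedom. A minimal sketch with hypothetical p-values from N = 5 individual ADF tests (the bootstrap-based critical values the paper prefers are not shown; the χ² reference assumes the tests are independent):

```python
import math

def fisher_panel_stat(pvalues):
    """Fisher's combined statistic P = -2 * sum(ln p_i); under the null of a
    unit root in every series it is chi-squared with 2N degrees of freedom
    (assuming the N tests are independent)."""
    return -2.0 * sum(math.log(p) for p in pvalues)

# hypothetical p-values from N = 5 individual ADF tests, one per panel member
pvals = [0.04, 0.20, 0.01, 0.30, 0.08]
P = fisher_panel_stat(pvals)
reject = P > 18.307   # chi-squared(10) 5% critical value from standard tables
```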
 Traditional econometric models assume a constant one-period forecast variance. To generalize this implausible assumption, a new class of stochastic processes called autoregressive conditional heteroscedastic (ARCH) processes are introduced in this paper. These are mean zero, serially uncorrelated processes with nonconstant variances conditional on the past, but constant unconditional variances. For such processes, the recent past gives information about the one-period forecast variance. A regression model is then introduced with disturbances following an ARCH process. Maximum likelihood estimators are described and a simple scoring iteration formulated. Ordinary least squares maintains its optimality properties in this set-up, but maximum likelihood is more efficient. The relative efficiency is calculated and can be infinite. To test whether the disturbances follow an ARCH process, the Lagrange multiplier procedure is employed. The test is based simply on the autocorrelation of the squared OLS residuals. This model is used to estimate the means and variances of inflation in the U.K. The ARCH effect is found to be significant and the estimated variances increase substantially during the chaotic seventies.
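The Lagrange multiplier test mentioned above reduces to T·R² from a regression of the squared residuals on their own lags. A minimal sketch for ARCH(1) on simulated data (the parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate an ARCH(1) process: e_t = z_t * sqrt(a0 + a1 * e_{t-1}^2)
n, a0, a1 = 2000, 0.5, 0.5
e = np.zeros(n)
for t in range(1, n):
    e[t] = rng.normal() * np.sqrt(a0 + a1 * e[t - 1] ** 2)

# LM test: regress e_t^2 on a constant and e_{t-1}^2;
# LM = T * R^2 is asymptotically chi-squared(1) under the null of no ARCH
u2 = e ** 2
dep, X = u2[1:], np.column_stack([np.ones(n - 1), u2[:-1]])
resid = dep - X @ np.linalg.lstsq(X, dep, rcond=None)[0]
r2 = 1.0 - resid.var() / dep.var()
lm = (n - 1) * r2    # compare with the chi-squared(1) 5% value, 3.84
```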
In this paper we describe a method for testing the null of no cointegration in dynamic panels with multiple regressors and compute approximate critical values for these tests. Methods for non-stationary panels, including panel unit root and panel cointegration tests, have been gaining increased acceptance in recent empirical research. To date, however, tests for the null of no cointegration in heterogeneous panels based on Pedroni (1995, 1997a) have been limited to simple bivariate examples, in large part due to the lack of critical values available for more complex multivariate regressions. The purpose of this paper is to fill this gap by describing a method to implement tests for the null of no cointegration for the case with multiple regressors and to provide appropriate critical values for these cases. The tests allow for considerable heterogeneity among individual members of the panel, including heterogeneity in both the long-run cointegrating vectors as well as heterogeneity in the dynamics associated with short-run deviations from these cointegrating vectors.
In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical applications of the procedures. We first address the problem of estimation of the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most least-squares operations of order O(T²) for any number of breaks. Our method can be applied to both pure and partial structural change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program. Copyright © 2002 John Wiley & Sons, Ltd.
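The dynamic-programming idea can be sketched for the simplest case, a pure structural change model in the mean. This is an illustration only (the paper's GAUSS program handles general regressors, confidence intervals, and tests); segment sums of squared residuals come from cumulative sums so each evaluation is O(1), and the data are hypothetical:

```python
import numpy as np

def optimal_breaks(y, m, h=5):
    """Global SSR-minimizing partition of y into m+1 constant-mean segments
    by dynamic programming (h = minimum admissible segment length)."""
    T = y.size
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    def ssr(i, j):                     # SSR of y[i:j] around its own mean
        s = s1[j] - s1[i]
        return (s2[j] - s2[i]) - s * s / (j - i)
    INF = float("inf")
    # best[k][t] = minimal SSR of splitting y[0:t] into k+1 segments
    best = [[INF] * (T + 1) for _ in range(m + 1)]
    back = [[0] * (T + 1) for _ in range(m + 1)]
    for t in range(h, T + 1):
        best[0][t] = ssr(0, t)
    for k in range(1, m + 1):
        for t in range((k + 1) * h, T + 1):
            for b in range(k * h, t - h + 1):
                c = best[k - 1][b] + ssr(b, t)
                if c < best[k][t]:
                    best[k][t], back[k][t] = c, b
    breaks, t = [], T                  # walk back through the optimal partition
    for k in range(m, 0, -1):
        t = back[k][t]
        breaks.append(t)
    return sorted(breaks)

rng = np.random.default_rng(2)
y = np.concatenate([np.zeros(50), np.full(50, 3.0), np.full(50, -2.0)])
y += rng.normal(scale=0.3, size=150)
breaks = optimal_breaks(y, 2)          # break dates near 50 and 100
```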
Efficient estimators of cointegrating vectors are presented for systems involving deterministic components and variables of differing, higher orders of integration. The estimators are computed using GLS or OLS, and Wald statistics constructed from these estimators have asymptotic χ² distributions. These and previously proposed estimators of cointegrating vectors are used to study long-run U.S. money (M1) demand. M1 demand is found to be stable over 1900–1989; the 95% confidence intervals for the income elasticity and interest rate semielasticity are (0.88, 1.06) and (−0.13, −0.08), respectively. Estimates based on the postwar data alone, however, are unstable, with variances which indicate substantial sampling uncertainty.
Existing strategies for econometric analysis related to macroeconomics are subject to a number of serious objections, some recently formulated, some old. These objections are summarized in this paper, and it is argued that taken together they make it unlikely that macroeconomic models are in fact overidentified, as the existing statistical theory usually assumes. The implications of this conclusion are explored, and an example of econometric work in a non-standard style, taking account of the objections to the standard style, is presented. The study of the business cycle, fluctuations in aggregate measures of economic activity and prices over periods from one to ten years or so, constitutes or motivates a large part of what we call macroeconomics. Most economists would agree that there are many macroeconomic variables whose cyclical fluctuations are of interest, and would agree further that fluctuations in these series are interrelated. It would seem to follow almost tautologically that statistical models involving large numbers of macroeconomic variables ought to be the arena within which macroeconomic theories confront reality and thereby each other. Instead, though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models. In this lecture I intend to discuss some aspects of this situation, attempting both to offer some explanations and to suggest some means for improvement.
I will argue that the style in which their builders construct claims for a connection between these models and reality, the style in which identification is achieved for these models, is inappropriate, to the point at which claims for identification in these models cannot be taken seriously. This is a venerable assertion, and there are some good old reasons for believing it, but there are also some reasons which have been more recently put forth. After developing the conclusion that the identification claimed for existing large-scale models is incredible, I will discuss what ought to be done in consequence. The line of argument is: large-scale models do perform useful forecasting and policy-analysis functions despite their incredible identification; the restrictions imposed in the usual style of identification are neither essential to constructing a model which can perform these functions nor innocuous; an alternative style of identification is available and practical. Finally we will look at some empirical work based on an alternative style of macroeconometrics, in which a six-variable dynamic system is estimated without using such restrictions.
This paper contains the likelihood analysis of vector autoregressive models allowing for cointegration. The author derives the likelihood ratio test for cointegrating rank and finds its asymptotic distribution. He shows that the maximum likelihood estimator of the cointegrating relations can be found by reduced rank regression and derives the likelihood ratio test of structural hypotheses about these relations. The author shows that the asymptotic distribution of the maximum likelihood estimator is mixed Gaussian, allowing inference for hypotheses on the cointegrating relation to be conducted using the chi-squared distribution. Copyright 1991 by The Econometric Society.
This paper proposes a very tractable approach to modeling changes in regime. The parameters of an autoregression are viewed as the outcome of a discrete-state Markov process. For example, the mean growth rate of a nonstationary series may be subject to occasional, discrete shifts. The econometrician is presumed not to observe these shifts directly, but instead must draw probabilistic inference about whether and when they may have occurred based on the observed behavior of the series. The paper presents an algorithm for drawing such probabilistic inference in the form of a nonlinear iterative filter.
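The nonlinear iterative filter can be sketched for the simplest case: a two-state switching mean with known parameters (the values below are hypothetical). Each step forms one-step-ahead regime probabilities from the transition matrix, weights them by the Gaussian likelihood of the new observation, and renormalizes:

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities Pr(s_t = j | y_1..y_t) for the model
    y_t = mu[s_t] + e_t, e_t ~ N(0, sigma^2), with transition matrix
    P[i, j] = Pr(s_t = j | s_{t-1} = i). Two regimes, known parameters."""
    T = y.size
    probs = np.zeros((T, 2))
    p = np.array([P[1, 0], P[0, 1]])   # ergodic distribution of the chain
    p = p / p.sum()
    for t in range(T):
        prior = p @ P                                     # predict the regime
        lik = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2)   # densities, up to scale
        post = prior * lik
        p = post / post.sum()                             # Bayes update
        probs[t] = p
    return probs

rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(0.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])
P = np.array([[0.95, 0.05], [0.05, 0.95]])
probs = hamilton_filter(y, np.array([0.0, 2.0]), 0.5, P)
```

In practice the parameters are unknown and are estimated by maximizing the likelihood that this same recursion delivers as a by-product; the sketch assumes them known for clarity.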
A study documents some features of aggregate economic fluctuations sometimes referred to as business cycles. The investigation uses quarterly data from the postwar US economy. The fluctuations studied are those that are too rapid to be accounted for by slowly changing demographic and technological factors and changes in the stocks of capital that produce secular growth in output per capita. The study proposes a procedure for representing a time series as the sum of a smoothly varying trend component and a cyclical component. The nature of the comovements of the cyclical components of a variety of macroeconomic time series is documented. It is found that these comovements are very different from the corresponding comovements of the slowly varying trend components.
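The trend/cycle decomposition this study proposes (the Hodrick–Prescott filter) solves a penalized least-squares problem with a closed-form linear solution: the trend τ minimizes Σ(yₜ − τₜ)² + λ Σ(Δ²τₜ)². A dense-matrix sketch, fine for short series (large T calls for sparse solvers); the example series is hypothetical:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Split y into a smooth trend and a cyclical component by solving
    (I + lam * D'D) * trend = y, where D takes second differences.
    lam = 1600 is the conventional value for quarterly data."""
    T = y.size
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)
    return trend, y - trend

t = np.arange(120, dtype=float)
y = 0.02 * t + np.sin(2 * np.pi * t / 20)   # a line plus a 20-period cycle
trend, cycle = hp_filter(y)
```

A useful sanity check: a purely linear series has zero second differences, so the filter returns it unchanged and the cyclical component is identically zero.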
This paper considers estimation and testing of vector autoregression coefficients in panel data, and applies the techniques to analyze the dynamic relationships between wages and hours worked in two samples of American males. The model allows for nonstationary individual effects and is estimated by applying instrumental variables to the quasi-differenced autoregressive equations. The empirical results suggest the absence of lagged hours in the wage forecasting equation. The results also show that lagged hours is important in the hours equation. Copyright 1988 by The Econometric Society.
 The overall test for lack of fit in autoregressive-moving average models proposed by Box & Pierce (1970) is considered. It is shown that a substantially improved approximation results from a simple modification of this test. Some consideration is given to the power of such tests and their robustness when the innovations are nonnormal. Similar modifications in the overall tests used for transfer function-noise models are proposed
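The modified statistic referred to above is the Ljung–Box Q, which rescales each squared sample autocorrelation by T(T+2)/(T−k) rather than Box–Pierce's flat T. A minimal sketch on simulated data; under the null Q is compared with a χ²(m) distribution (with degrees of freedom reduced by the number of fitted ARMA parameters when applied to model residuals):

```python
import numpy as np

def ljung_box(x, m):
    """Ljung-Box portmanteau statistic Q = T(T+2) * sum_{k<=m} r_k^2 / (T-k),
    the small-sample refinement of the Box-Pierce statistic T * sum r_k^2."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    T = x.size
    denom = np.dot(x, x)
    q = 0.0
    for k in range(1, m + 1):
        r_k = np.dot(x[:-k], x[k:]) / denom   # lag-k sample autocorrelation
        q += r_k ** 2 / (T - k)
    return T * (T + 2) * q

# a persistent AR(1) should be flagged as serially correlated
rng = np.random.default_rng(7)
e = rng.normal(size=500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + e[t]
q_stat = ljung_box(x, 10)   # compare with the chi-squared(10) 5% value, 18.31
```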
The relationship between co-integration and error correction models, first suggested in Granger (1981), is here extended and used to develop estimation procedures, tests, and empirical examples. If each element of a vector of time series x first achieves stationarity after differencing, but a linear combination a'x is already stationary, the time series x are said to be co-integrated with co-integrating vector a. There may be several such co-integrating vectors so that a becomes a matrix. Interpreting a'x = 0 as a long run equilibrium, co-integration implies that deviations from equilibrium are stationary, with finite variance, even though the series themselves are nonstationary and have infinite variance. The paper presents a representation theorem based on Granger (1983), which connects the moving average, autoregressive, and error correction representations for co-integrated systems. A vector autoregression in differenced variables is incompatible with these representations. Estimation of these models is discussed and a simple but asymptotically efficient two-step estimator is proposed. Testing for co-integration combines the problems of unit root tests and tests with parameters unidentified under the null. Seven statistics are formulated and analyzed. The critical values of these statistics are calculated based on a Monte Carlo simulation. Using these critical values, the power properties of the tests are examined and one test procedure is recommended for application. In a series of examples it is found that consumption and income are co-integrated, wages and prices are not, short and long interest rates are, and nominal GNP is co-integrated with M2, but not M1, M3, or aggregate liquid assets.
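The two-step estimator described above fits the co-integrating regression by OLS and then applies a unit-root test to the residuals. A minimal sketch on simulated data (no augmentation lags; the residual-based test requires nonstandard critical values from the paper's tables, not ordinary t tables, and the data here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=500))          # an I(1) series
y = 2.0 + 0.5 * x + rng.normal(size=500)     # co-integrated with vector (1, -0.5)

# Step 1: estimate the co-integrating regression by OLS
X = np.column_stack([np.ones(x.size), x])
alpha, beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - alpha - beta * x

# Step 2: Dickey-Fuller regression on the residuals, du_t = rho * u_{t-1} + e_t;
# a large negative t-ratio suggests the residuals are stationary, i.e.
# the series are co-integrated
du, ulag = np.diff(u), u[:-1]
rho = np.dot(ulag, du) / np.dot(ulag, ulag)
s2 = np.sum((du - rho * ulag) ** 2) / (du.size - 1)
t_stat = rho / np.sqrt(s2 / np.dot(ulag, ulag))
```

The first-step OLS slope is superconsistent under co-integration, which is why the residual-based second step is valid despite the generated regressor.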
We examine properties of residual-based tests for the null of no cointegration for dynamic panels in which both the short-run dynamics and the long-run slope coefficients are permitted to be heterogeneous across individual members of the panel. The tests also allow for individual heterogeneous fixed effects and trend terms, and we consider both pooled within dimension tests and group mean between dimension tests. We derive limiting distributions for these and show that they are normal and free of nuisance parameters. We also provide Monte Carlo evidence to demonstrate their small sample size and power performance, and we illustrate their use in testing purchasing power parity for the post–Bretton Woods period.
 In the first half of the paper I study spurious regressions in panel data. Asymptotic properties of the least-squares dummy variable (LSDV) estimator and other conventional statistics are examined. The asymptotics of LSDV estimator are different from those of the spurious regression in the pure time-series. This has an important consequence for residual-based cointegration tests in panel data, because the null distribution of residual-based cointegration tests depends on the asymptotics of LSDV estimator. In the second half of the paper I study residual-based tests for cointegration regression in panel data. I study Dickey–Fuller (DF) tests and an augmented Dickey–Fuller (ADF) test to test the null of no cointegration. Asymptotic distributions of the tests are derived and Monte Carlo experiments are conducted to evaluate finite sample properties of the proposed tests.
 This paper introduces methods to compute impulse responses without specification and estimation of the underlying multivariate dynamic system. The central idea consists in estimating local projections at each period of interest rather than extrapolating into increasingly distant horizons from a given model, as it is done with vector autoregressions (VAR). The advantages of local projections are numerous: (1) they can be estimated by simple regression techniques with standard regression packages; (2) they are more robust to misspecification; (3) joint or point-wise analytic inference is simple; and (4) they easily accommodate experimentation with highly nonlinear and flexible specifications that may be impractical in a multivariate context. Therefore, these methods are a natural alternative to estimating impulse responses from VARs. Monte Carlo evidence and an application to a simple, closed-economy, new-Keynesian model clarify these numerous advantages.
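The central idea can be sketched directly: for each horizon h, run a separate regression of y at t+h on the period-t shock. On a simulated AR(1) with observed innovations the slopes recover ρ^h. This sketch assumes the shock series is observed; in practice it must be identified from a first-stage model or an external instrument, and all values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
T, rho = 5000, 0.5
e = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + e[t]          # AR(1): true impulse response is rho**h

def local_projection_irf(y, shock, H):
    """For each horizon h, regress y_{t+h} on the period-t shock (plus a
    constant); the slope coefficient estimates the impulse response at h."""
    irf = []
    for h in range(H + 1):
        dep, s = y[h:], shock[:y.size - h]
        X = np.column_stack([np.ones(s.size), s])
        irf.append(np.linalg.lstsq(X, dep, rcond=None)[0][1])
    return np.array(irf)

irf = local_projection_irf(y, e, 5)
```

Note that each horizon is a separate regression, so misspecification at one horizon does not propagate to the others, which is the robustness point the abstract emphasizes.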
 This paper considers tests for parameter instability and structural change with unknown change point. The results apply to a wide class of parametric models that are suitable for estimation by generalized method of moments procedures. The asymptotic distributions of the test statistics considered here are nonstandard because the change point parameter only appears under the alternative hypothesis and not under the null. The tests considered here are shown to have nontrivial asymptotic local power against all alternatives for which the parameters are nonconstant. The tests are found to perform quite well in a Monte Carlo experiment reported elsewhere. Copyright 1993 by The Econometric Society.
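Because the break date is unidentified under the null, a natural statistic is the supremum of break-specific F statistics over a trimmed set of candidate dates. A minimal sketch for a single break in the mean (the idea only; the nonstandard critical values must come from the appropriate tables, not the F distribution, and the simulated data are hypothetical):

```python
import numpy as np

def sup_f_mean_break(y, trim=0.15):
    """Sup-F statistic for a single break in the mean at an unknown date:
    the largest break-specific F statistic over the trimmed middle of the
    sample. Its null distribution is nonstandard because the break date
    appears only under the alternative."""
    T = y.size
    ssr0 = np.sum((y - y.mean()) ** 2)         # SSR under the no-break null
    best = 0.0
    for b in range(int(trim * T), int((1 - trim) * T)):
        ssr1 = (np.sum((y[:b] - y[:b].mean()) ** 2)
                + np.sum((y[b:] - y[b:].mean()) ** 2))
        best = max(best, (ssr0 - ssr1) / (ssr1 / (T - 2)))
    return best

rng = np.random.default_rng(8)
y_break = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
supf = sup_f_mean_break(y_break)
```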
This paper develops asymptotic distribution theory for instrumental variable regression when the partial correlation between the instruments and a single included endogenous variable is weak, here modeled as local to zero. Asymptotic representations are provided for various instrumental variable statistics, including the two-stage least squares (TSLS) and limited information maximum-likelihood (LIML) estimators and their t-statistics. The asymptotic distributions are found to provide good approximations to sampling distributions with just 20 observations per instrument. Even in large samples, TSLS can be badly biased, but LIML is, in many cases, approximately median unbiased. The theory suggests concrete quantitative guidelines for applied work. These guidelines help to interpret Angrist and Krueger's (1991) estimates of the returns to education: whereas TSLS estimates with many instruments approach the OLS estimate of 6%, the more reliable LIML and TSLS estimates with fewer instruments fall between 8% and 10%, with a typical confidence interval of (6%, 14%).
Kenneth Rogoff (University of Wisconsin–Madison), "The Optimal Degree of Commitment to an Intermediate Monetary Target," The Quarterly Journal of Economics, Volume 100, Issue 4, November 1985, pp. 1169–1189. https://doi.org/10.2307/1885679
This paper analyzes the relation of stock volatility with real and nominal macroeconomic volatility, economic activity, financial leverage, and stock trading activity using monthly data from 1857 to 1987. An important fact, previously noted by Officer (1973), is that stock return variability was unusually high during the 1929–1939 Great Depression. While aggregate leverage is significantly correlated with volatility, it explains a relatively small part of the movements in stock volatility. The amplitude of the fluctuations in aggregate stock volatility is difficult to explain using simple models of stock valuation, especially during the Great Depression.
 We estimate a forward-looking monetary policy reaction function for the postwar United States economy, before and after Volcker's appointment as Fed Chairman in 1979. Our results point to substantial differences in the estimated rule across periods. In particular, interest rate policy in the Volcker-Greenspan period appears to have been much more sensitive to changes in expected inflation than in the pre-Volcker period. We then compare some of the implications of the estimated rules for the equilibrium properties of inflation and output, using a simple macroeconomic model, and show that the Volcker-Greenspan rule is stabilizing.
We interpret fluctuations in GNP and unemployment as due to two types of disturbances: disturbances that have a permanent effect on output and disturbances that do not. We interpret the first as supply disturbances, the second as demand disturbances. We find that demand disturbances have a hump-shaped effect on both output and unemployment; the effect peaks after a year and vanishes after two to five years. Up to a scale factor, the dynamic effect on unemployment of demand disturbances is a mirror image of that on output. The effect of supply disturbances on output increases steadily over time, to reach a peak after two years and a plateau after five years. Favorable supply disturbances may initially increase unemployment. This is followed by a decline in unemployment, with a slow return over time to its original value. While this dynamic characterization is fairly sharp, the data are not as specific as to the relative contributions of demand and supply disturbances to output fluctuations. We find that the time series of demand-determined output fluctuations has peaks and troughs which coincide with most of the NBER troughs and peaks. But variance decompositions of output at various horizons giving the respective contributions of supply and demand disturbances are not precisely estimated. For instance, at a forecast horizon of four quarters, we find that, under alternative assumptions, the contribution of demand disturbances ranges from 40 to over 95 percent.
 We present a model embodying moderate amounts of nominal rigidities that accounts for the observed inertia in inflation and persistence in output. The key features of our model are those that prevent a sharp rise in marginal costs after an expansionary shock to monetary policy. Of these features, the most important are staggered wage contracts that have an average duration of three quarters and variable capital utilization.
 The ‘credit channel’ theory of monetary policy transmission holds that informational frictions in credit markets worsen during tight-money periods. The resulting increase in the external finance premium--the difference in cost between internal and external funds--enhances the effects of monetary policy on the real economy. The authors document the responses of GDP and its components to monetary policy shocks and describe how the credit channel helps explain the facts. They discuss two main components of this mechanism, the balance sheet and bank lending channels. The authors argue that forecasting exercises using credit aggregates are not valid tests of this theory.
Optimization on the part of consumers is shown to imply that the marginal utility of consumption evolves according to a random walk with trend. To a reasonable approximation, consumption itself should evolve in the same way. In particular, no variable apart from current consumption should be of any value in predicting future consumption. This implication is tested with time-series data for the postwar United States. It is confirmed for real disposable income, which has no predictive power for consumption, but rejected for an index of stock prices. The paper concludes that the evidence supports a modified version of the life cycle--permanent income hypothesis.
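The testable implication is an orthogonality condition: nothing dated t−1 should predict the change in consumption. A minimal sketch of such a regression test on simulated data that satisfies the hypothesis by construction (all series and names here are hypothetical, standing in for consumption changes and a lagged predictor such as income):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 1000
dc = rng.normal(size=T)   # consumption changes: pure innovations under the hypothesis
z = rng.normal(size=T)    # a candidate predictor dated t-1 (e.g. income growth)

# orthogonality test: regress dc_t on z_{t-1}; under the hypothesis the slope
# is zero, so its t-ratio should be statistically insignificant
X = np.column_stack([np.ones(T - 1), z[:-1]])
b = np.linalg.lstsq(X, dc[1:], rcond=None)[0]
resid = dc[1:] - X @ b
s2 = np.sum(resid ** 2) / (T - 3)
se = np.sqrt(s2 / np.sum((z[:-1] - z[:-1].mean()) ** 2))
t_stat = b[1] / se
```

Rejection of the zero slope for some lagged variable, as the paper finds for stock prices, is evidence against the simple random-walk implication.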
 Using a Bayesian likelihood approach, we estimate a dynamic stochastic general equilibrium model for the US economy using seven macroeconomic time series. The model incorporates many types of real and nominal frictions and seven types of structural shocks. We show that this model is able to compete with Bayesian Vector Autoregression models in out-of-sample prediction. We investigate the relative empirical importance of the various frictions. Finally, using the estimated model, we address a number of key issues in business cycle analysis: What are the sources of business cycle fluctuations? Can the model explain the cross correlation between output and inflation? What are the effects of productivity on hours worked? What are the sources of the “Great Moderation”? (JEL D58, E23, E31, E32)
 A number of panel unit root tests that allow for cross-section dependence have been proposed in the literature that use orthogonalization type procedures to asymptotically eliminate the cross-dependence of the series before standard panel unit root tests are applied to the transformed series. In this paper we propose a simple alternative where the standard augmented Dickey–Fuller (ADF) regressions are augmented with the cross-section averages of lagged levels and first-differences of the individual series. New asymptotic results are obtained both for the individual cross-sectionally augmented ADF (CADF) statistics and for their simple averages. It is shown that the individual CADF statistics are asymptotically similar and do not depend on the factor loadings. The limit distribution of the average CADF statistic is shown to exist and its critical values are tabulated. Small sample properties of the proposed test are investigated by Monte Carlo experiments. The proposed test is applied to a panel of 17 OECD real exchange rate series as well as to log real earnings of households in the PSID data. Copyright © 2007 John Wiley & Sons, Ltd.
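The cross-sectional augmentation is easy to sketch: for each unit, the ADF regression of the first difference on the lagged level is augmented with the cross-section averages of lagged levels and first differences. A minimal simulation (one common factor, no extra lags; series and loadings are invented for illustration, and the tabulated CADF critical values would be needed for inference):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 10, 120
f = np.cumsum(rng.normal(size=T))                    # common I(1) factor
y = np.empty((N, T))
for i in range(N):
    lam = rng.uniform(0.5, 1.5)                      # heterogeneous factor loading
    y[i] = lam * f + np.cumsum(rng.normal(size=T))   # unit-specific stochastic trend

ybar = y.mean(axis=0)                                # cross-section averages
dy, dybar = np.diff(y, axis=1), np.diff(ybar)

def cadf_t(i):
    """t-statistic on the lagged level in the cross-sectionally augmented
    Dickey-Fuller regression for unit i (intercept, no extra lags)."""
    X = np.column_stack([np.ones(T - 1), y[i, :-1], ybar[:-1], dybar])
    b, *_ = np.linalg.lstsq(X, dy[i], rcond=None)
    e = dy[i] - X @ b
    s2 = e @ e / (len(e) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se

cips = np.mean([cadf_t(i) for i in range(N)])        # average CADF statistic
```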
This paper develops asymptotic distribution theory for instrumental variable regression when the partial correlation between the instruments and a single included endogenous variable is weak, here modeled as local to zero. Asymptotic representations are provided for various instrumental variable statistics, including the two-stage least squares (TSLS) and limited information maximum-likelihood (LIML) estimators and their t-statistics. The asymptotic distributions are found to provide good approximations to sampling distributions with just 20 observations per instrument. Even in large samples, TSLS can be badly biased, but LIML is, in many cases, approximately median unbiased. The theory suggests concrete quantitative guidelines for applied work. These guidelines help to interpret Angrist and Krueger's (1991) estimates of the returns to education: whereas TSLS estimates with many instruments approach the OLS estimate of 6%, the more reliable LIML and TSLS estimates with fewer instruments fall between 8% and 10%, with a typical confidence interval of (6%, 14%).
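The local-to-zero setup is straightforward to simulate: with many instruments whose first-stage coefficients are near zero, TSLS is pulled toward the inconsistent OLS estimate even in large samples. A hedged numpy sketch (all parameter values are invented for illustration):

```python
import numpy as np

n, k = 2000, 20            # large sample, many weak instruments
pi = np.full(k, 0.02)      # local-to-zero first-stage coefficients
beta = 1.0                 # true structural coefficient

def tsls(seed):
    r = np.random.default_rng(seed)
    Z = r.normal(size=(n, k))
    u = r.normal(size=n)
    v = 0.8 * u + 0.6 * r.normal(size=n)   # endogeneity: corr(u, v) > 0
    x = Z @ pi + v
    y = beta * x + u
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage fitted values
    return (xhat @ y) / (xhat @ x)                    # 2SLS estimate

estimates = np.array([tsls(s) for s in range(200)])
median_bias = np.median(estimates) - beta   # pulled toward OLS when instruments are weak
```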
Frank Smets (European Central Bank and CEPR) and Raf Wouters (National Bank of Belgium), "An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area," Journal of the European Economic Association, Volume 1, Issue 5, September 2003, pp. 1123–1175, https://doi.org/10.1162/154247603770383415.
 The paper reviews the recent literature on monetary policy rules. We exposit the monetary policy design problem within a simple baseline theoretical framework. We then consider the implications of adding various real world complications. Among other things, we show that the optimal policy implicitly incorporates inflation targeting. We also characterize the gains from making a credible commitment to fight inflation. In contrast to conventional wisdom, we show that gains from commitment may emerge even if the central bank is not trying to inadvisedly push output above its natural level. We also consider the implications of frictions such as imperfect information.
 Weak instruments can produce biased IV estimators and hypothesis tests with large size distortions. But what, precisely, are weak instruments, and how does one detect them in practice? This paper proposes quantitative definitions of weak instruments based on the maximum IV estimator bias, or the maximum Wald test size distortion, when there are multiple endogenous regressors. We tabulate critical values that enable using the first-stage F-statistic (or, when there are multiple endogenous regressors, the Cragg–Donald [1993] statistic) to test whether the given instruments are weak.
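In the single-endogenous-regressor case the diagnostic reduces to the first-stage F-statistic on the excluded instruments, compared against the tabulated critical values (the familiar F > 10 rule of thumb is one such threshold). A minimal sketch of the computation on simulated data (the first-stage coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 500, 3
Z = rng.normal(size=(n, k))
x = Z @ np.array([0.4, 0.3, 0.2]) + rng.normal(size=n)   # reasonably strong first stage

# First-stage F: restricted model (intercept only) vs. model with k instruments.
Zc = np.column_stack([np.ones(n), Z])
b, *_ = np.linalg.lstsq(Zc, x, rcond=None)
rss1 = np.sum((x - Zc @ b) ** 2)
rss0 = np.sum((x - x.mean()) ** 2)
F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
weak = F < 10   # common rule of thumb derived from the tabulated critical values
```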
The difference and system generalized method of moments (GMM) estimators are growing in popularity. As implemented in popular software, the estimators easily generate instruments that are numerous and, in system GMM, potentially suspect. A large instrument collection overfits endogenous variables even as it weakens the Hansen test of the instruments' joint validity. This paper reviews the evidence on the effects of instrument proliferation, and describes and simulates simple ways to control it. It illustrates the dangers by replicating Forbes [American Economic Review (2000) Vol. 90, pp. 869–887] on income inequality and Levine et al. [Journal of Monetary Economics (2000) Vol. 46, pp. 31–77] on financial sector development. Results in both papers appear driven by previously undetected endogeneity.
This monograph is concerned with the statistical analysis of multivariate systems of non-stationary time series of type I(1). It applies the concepts of cointegration and common trends in the framework of the Gaussian vector autoregressive model. The main result on the structure of cointegrated processes as defined by the error correction model is Granger's representation theorem. The statistical results include derivation of the trace test for cointegrating rank, tests on cointegrating relations, and tests on adjustment coefficients and their asymptotic distributions.
India is considered the world's fourth-largest economy by GDP, but is it really growing, and which macroeconomic factors serve as parameters of that growth? To answer these questions, the researcher examines the impact of the macroeconomic environment on the Nifty50, since various studies show a strong relationship between the Nifty50 and GDP. The Indian stock market, measured by the market capitalization-to-GDP ratio, makes a significant and increasing contribution to the country's economy: this ratio has reached a 15-year high of 140.2%, meaning the total value of listed companies exceeds India's GDP by a substantial margin and signifying a marked increase in the stock market's influence on the overall economy. The study therefore tests the relationship between the Nifty50 and macroeconomic variables, namely inflation, interest rates, exchange rates, FII flows, silver prices, and gold prices. Data for the last five years (2020-2024) are used with a descriptive research design, and regression analysis is applied to identify the macroeconomic variables that most strongly affect the Nifty50. The study makes no claim about the relationship before 2020 or after 2024.
Using microdata from the European Consumer Survey (CES) for 11 European countries and 53 months, we investigate the formation and heterogeneity of inflation expectations, as well as their consistency with Phillips curve theory, in the euro area and across countries and demographic groups. We examine how individuals in the euro area form their inflation expectations. Our findings show that people place significant weight on their current perception of inflation. Past experiences with prices also play a role, though to a lesser extent. Importantly, the formation of expectations tends to be forward-looking rather than backward-looking. A similar pattern emerges when we analyze the consistency of these expectations and perceptions with Phillips curve theory: individuals in the euro area generally do not hold theory-consistent expectations regarding inflation. We find notable variation across gender, age, income, education level, and household size in the formation of inflation expectations.
This study analyzes the effect of Foreign Direct Investment (FDI) on environmental pollution in Turkey within the framework of the Pollution Haven Hypothesis (PHH) and the Pollution Halo Hypothesis. Using data for the period 1975-2022 on carbon emissions, per capita energy consumption, FDI inflows, trade openness, gross fixed capital formation, and economic growth, the Augmented ARDL bounds testing (A-ARDL) method is applied. The findings show that FDI increases CO₂ emissions in both the short and the long run, and that the Pollution Haven Hypothesis holds for Turkey. In particular, FDI directed toward energy-intensive, high-emission sectors threatens environmental sustainability. These results underline the need to steer FDI policy toward environmentally friendly technologies and renewable-energy investment. Implementing structural transformation policies that raise energy efficiency will also play a critical role in reducing carbon emissions. Future research deepening the analysis at the sectoral and regional level would provide a more comprehensive picture of the environmental effects of FDI.
Under the explicit inflation-targeting framework implemented in Turkey since 2006, the primary objective of monetary policy has been price stability; following the 2018 exchange-rate shock and the pandemic, however, inflation rose rapidly and diverged markedly from the targeted rates. In particular, it has recently been debated whether credit card spending, as well as loans extended to SMEs (small and medium-sized enterprises), have fueled demand-side inflation. This study therefore analyzes, within a VAR (Vector Autoregressive) framework using monthly data for 2008:1-2024:6, how strongly increases in credit card spending and SME loans have affected inflation. The analysis finds a long-run cointegration relationship among the variables and bidirectional Granger causality between inflation on the one hand and SME loans and credit card spending on the other; in other words, SME loans and credit card spending Granger-cause inflation while inflation also Granger-causes both variables, and SME loans are moreover found to Granger-cause credit card spending. According to the variance decomposition results, changes in credit card spending explain a larger share of the variation in inflation than changes in SME loans.
 This study aims to identify the economic and financial factors affecting investments in stocks, exchange traded funds, and private-sector debt instruments. For this purpose, three different models are developed based on the dependent and independent variables used in the study and the period range of the study is determined as 2008:01 - 2023:07. The current study follows a time series analysis process that takes structural breaks into account and conducts cointegration, causality, impulse-response and variance decomposition analyses. According to the short-term findings, stock investments are affected by inflation, interest rates, reserves, CDS, investor sentiment, risk appetite, and consumer loans; fund investments are affected by inflation, interest rates, reserves, investor sentiment, risk appetite, and consumer loans; and private sector debt instruments are affected by interest rates, reserves, risk appetite, and consumer loans. In addition, according to the long-term findings, stock investments are affected by all independent variables used in the study; fund investments are affected by inflation, interest rates, reserves, investor sentiment, risk appetite, and consumer loans; and finally, private sector debt instruments are affected by inflation, interest rates, reserves, risk appetite, and consumer loans.
 Currency in circulation (CIC) is an important variable in monetary policy as it affects liquidity and guides the currency issuance operations of central banks. This paper proposes a novel approach to forecast CIC using central bank balance sheet variables, namely assets and liabilities other than currency issued. The balance sheet approach is able to generate monthly CIC forecasts as opposed to demand-for-currency models anchored on quarterly Gross Domestic Product (GDP). This allows for more responsive currency policy, particularly during crisis periods when precautionary motives intensify—reflected in a decoupling of GDP and CIC—or when spikes in currency demand arise due to heightened transaction motives. Dynamic time series regression models are estimated to operationalize the balance sheet approach and are compared to baseline predictive methods such as Error-Trend-Seasonality (ETS) models, Autoregressive Integrated Moving Average (ARIMA), and seasonal naïve methods. Results show that including balance sheet variables significantly improves the predictive ability of CIC models in terms of mean absolute percentage error (MAPE) and root mean squared scaled error (RMSSE). These findings hold across multiple training and test sets through time series cross-validation, suggesting stability of forecast accuracy results.
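The time series cross-validation used to compare forecast accuracy can be sketched as a rolling-origin loop: refit on data up to period t, forecast t, and accumulate errors. The toy series below is invented (a regression on a stand-in balance-sheet variable against a one-period naive benchmark), not the paper's CIC data or models:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 120
balance = np.cumsum(rng.normal(0.5, 1, T))        # stand-in balance-sheet driver
cic = 100 + 0.8 * balance + rng.normal(0, 1, T)   # stand-in currency in circulation

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((actual - pred) / actual)) * 100

# Rolling-origin cross-validation: refit on [0, t), predict period t.
pred_naive, pred_reg = [], []
for t in range(60, T):
    pred_naive.append(cic[t - 1])                 # one-period naive benchmark
    X = np.column_stack([np.ones(t), balance[:t]])
    b, *_ = np.linalg.lstsq(X, cic[:t], rcond=None)
    pred_reg.append(b[0] + b[1] * balance[t])     # balance-sheet regression forecast

mape_naive = mape(cic[60:], np.array(pred_naive))
mape_reg = mape(cic[60:], np.array(pred_reg))
```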
 ABSTRACT We trace the developments in the empirical trade literature to make fifteen recommendations for estimating gravity equations, which are structured in three categories: data, estimating equation, and heterogeneity. We also offer practical tips and identify areas where further research is needed. Based on these recommendations, we specify a comprehensive estimating model, which can serve as a benchmark for gravity estimations even when it is not possible to implement all of our recommendations. The proposed methods should be useful for gravity estimations beyond international trade, e.g., migration, foreign investment, cross‐border patenting, and other flows.
 Abstract We show that undercapitalized banks with large holdings of government bonds subject to sovereign default risk lead to a new crowding‐out channel: deficit‐financed fiscal stimuli lead to higher bond yields, triggering capital losses for the banks. Banks then cut back loans, which reduces fiscal multipliers. Crowding out increases for longer maturity bonds and higher sovereign default risk. We estimate a dynamic stochastic general equilibrium (DSGE) model with financial frictions for Spain and find strong support for these results. The cumulative multiplier decreases substantially with the size of the stimulus, and with the amount of time between the announcement and implementation of the stimulus.
 Abstract This paper proposes a simple iterative method—time iteration—to solve linear rational expectation models. I prove that this method converges to the desired stable solution, and provide the conditions under which the solution is unique. Apart from its transparency and simplicity of implementation, the method provides a straightforward approach to solving models with less standard features, such as regime‐switching frameworks in which constraints may occasionally bind—for example, the zero lower bound.
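For a scalar expectational difference equation the method is a few lines: guess a recursive law of motion, solve the period-t problem given the guess, and iterate to a fixed point. A minimal sketch (the coefficients are illustrative; the paper treats the general linear system, where F, a, b, c become matrices):

```python
import numpy as np

# Stylized scalar expectational difference equation:
#   a * E[x_{t+1}] + b * x_t + c * x_{t-1} = 0
# Guess a recursive solution x_t = F * x_{t-1}. Substituting the guess for
# x_{t+1} and solving for x_t gives the time-iteration update
#   F_{n+1} = -c / (a * F_n + b),
# which converges to the stable root when the model is determinate.
a, b, c = 0.5, -1.1, 0.3

F = 0.0
for _ in range(1000):
    F_new = -c / (a * F + b)
    if abs(F_new - F) < 1e-12:
        F = F_new
        break
    F = F_new

# F solves the quadratic a*F^2 + b*F + c = 0 and is the stable (|F| < 1) root.
```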
The manufacturing industry is one of the main components of a country's economic structure and development process. In the course of economic development, high value-added production plays a critical role in securing competitive advantage. Turkey's manufacturing industry, while shaped by global industrialization processes, faces structural problems such as low technology intensity and import dependence. In this context, the manufacturing producer price index is a critical indicator for analyzing the sector's resilience to market fluctuations, and it helps clarify the interaction between the sector's sustainability and economic stability. This study examines the macroeconomic determinants of manufacturing inflation in Turkey using Ordinary Least Squares (OLS) and Weighted-Average Least Squares (WALS) methods, with variables including the domestic producer price index for manufacturing, the industrial production index, fixed capital investment expenditure, the manufacturing capacity utilization rate, the real effective exchange rate, crude oil prices per barrel, energy prices (the change in CPI-Energy), Turkey's two-year benchmark interest rate (TR interest rate), and the US federal funds rate (Fed interest rate). The results show that one-unit increases in the industrial production index, fixed capital investment expenditure, the Fed interest rate, and crude oil prices have positive and statistically significant effects on Turkish manufacturing inflation, whereas one-unit increases in the real effective exchange rate, the manufacturing capacity utilization rate, and the TR interest rate have negative and statistically significant effects.
In addition, when the TR interest rate is treated as a fixed parameter in the macro production model, a one-unit increase in the TR interest rate is found to have a positive and significant effect on Turkish manufacturing inflation. In the model that includes the Fed interest rate, however, the sign of the TR interest rate turns negative, indicating that changes in the Fed rate exert indirect pressure on TR interest rates.
The analysis and solution of financial problems involve complexities that push traditional mathematical methods to their limits. Features observed in financial markets, such as volatility, memory effects, and long-range dependence, may be inadequately captured by classical differential equations. In this context, fractional differential equations (FDEs) offer an innovative approach in financial mathematics with the potential to represent such complex processes more effectively. Fractional calculus works with non-integer orders of differentiation and integration, making it possible to model anomalous diffusion processes and memory effects. These features provide a powerful tool for explaining long-range dependence in financial systems, the influence of past events on current states, and the fractal nature of markets more accurately. This article reviews the theoretical foundations of fractional differential equations and examines their applicability to financial problems in detail. The advantages of fractional models are discussed in core areas of finance such as volatility analysis, option pricing, risk management, and portfolio optimization. Specific applications such as the fractional version of the classical Black-Scholes model demonstrate the potential of these methods by allowing markets to be modeled more realistically. Moreover, subjecting financial data to fractional time-series analysis yields a better understanding of long-memory effects and anomalous market behavior. The article also sheds light on the analytical and numerical methods used to solve fractional equations: finite difference methods, spectral approaches, and the Grünwald-Letnikov technique play a critical role.
In addition, AI-assisted algorithms are highlighted as having the potential to deliver more effective solutions by learning from financial data. Nevertheless, the difficulties encountered in solving fractional differential equations and their high computational cost indicate the need for further work in this area. In conclusion, fractional differential equations open new horizons in the mathematical solution of financial problems. In the future, the integration of more advanced computational methods and data-driven approaches will further increase the role of these models in financial mathematics. This study aims to provide a theoretical foundation for solving financial problems with fractional equations, as well as an inspiring guide for application and research.
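As an illustration of the numerical side, the Grünwald-Letnikov scheme discretizes the fractional derivative as a weighted sum over the whole history of the series, with binomial weights computed recursively. A minimal sketch (the grid and test function are arbitrary):

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov fractional derivative of order alpha on a uniform
    grid with step h, using the recursive coefficients
    c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    n = len(f)
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1 - (alpha + 1) / k)
    out = np.empty(n)
    for j in range(n):
        # Weighted sum over the whole history: sum_k c_k * f_{j-k}
        out[j] = np.dot(c[: j + 1], f[j::-1]) / h ** alpha
    return out

# Sanity check: for alpha = 1 the operator reduces to a backward difference.
h = 0.01
t = np.arange(0, 1, h)
d1 = gl_fractional_derivative(t ** 2, 1.0, h)   # approximately 2t - h
```

For non-integer alpha the weights decay slowly, which is exactly how the scheme encodes long memory.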
This research studies the impact of macroeconomic announcement surprises on daily U.S. Treasury excess returns during the heart of Alan Greenspan's tenure as Federal Reserve Chair, addressing the possible limitations of standard static regression (SSR) models, which may suffer from omitted variable bias, parameter instability, and poor mis-specification diagnostics. To complement the SSR framework, an automated general-to-specific (Gets) modeling approach, enhanced with modern indicator saturation methods for robustness, is applied to improve empirical model discovery and mitigate potential biases. By progressively reducing an initially broad set of candidate variables, the Gets methodology discards unstable parameters and limits information loss while steering the model toward congruence and precision. The findings suggest that U.S. Treasury market responses to macroeconomic news shocks exhibited stability for a core set of announcements that reliably influenced excess returns. In contrast to computationally costless standard static models, the automated Gets-based approach enhances parameter precision and provides a more adaptive structure for identifying relevant predictors. These results demonstrate the potential value of incorporating interpretable automated model selection techniques alongside traditional SSR and Markov switching approaches to improve empirical insights into macroeconomic announcement effects on financial markets.
This study examines the relationship between participation banking and economic growth in Turkey, focusing on the effect of participation banking on the growth of the Turkish economy. Data on participation banks' loans and collected funds, together with GDP as the indicator of economic growth, are analyzed for the period 2005:Q4-2024:Q1. To determine whether a long-run relationship exists between participation banking and economic growth, the ARDL bounds test is applied to test for cointegration among the variables, and the Toda-Yamamoto procedure is used to examine the direction and nature of causality. The ARDL bounds test results show that the loans and funds provided by participation banks make a positive and significant long-run contribution to economic growth, indicating that participation banking activities support sustainable growth over time. The Toda-Yamamoto causality test reveals a bidirectional causal relationship between participation banks' loans and GDP: participation bank lending affects economic growth, and economic growth in turn can affect the lending volume of participation banks.
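The core of the ARDL bounds test is an F-statistic on the lagged levels in an unrestricted error-correction regression, compared against the I(0)/I(1) critical-value bounds. A stripped-down numpy sketch on simulated cointegrated data (one regressor, no extra lags; the bounds themselves must be taken from the published tables):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 200
x = np.cumsum(rng.normal(size=T))      # I(1) regressor (e.g. participation bank credit)
y = 0.7 * x + rng.normal(size=T)       # cointegrated dependent variable (e.g. GDP)

# Unrestricted error-correction regression:
#   dy_t = a + b1*y_{t-1} + b2*x_{t-1} + g1*dx_t + e_t
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([np.ones(T - 1), y[:-1], x[:-1], dx])
b, *_ = np.linalg.lstsq(X, dy, rcond=None)
rss1 = np.sum((dy - X @ b) ** 2)

# Restricted model: lagged levels excluded (no level relationship).
Xr = np.column_stack([np.ones(T - 1), dx])
br, *_ = np.linalg.lstsq(Xr, dy, rcond=None)
rss0 = np.sum((dy - Xr @ br) ** 2)

q = 2                                  # number of lagged-level restrictions
F = ((rss0 - rss1) / q) / (rss1 / (T - 1 - X.shape[1]))
# An F above the upper I(1) bound indicates a level (long-run) relationship.
```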
 We establish a correspondence between Lorenz curves and Pickands dependence functions, thereby reframing the construction of any bivariate extreme‑value copula as an inequality problem. We discuss the conditions under which a Lorenz curve generates a closed‑form Pickands model, considerably expanding the small set of tractable parametrizations currently available. Furthermore, the Pickands measure‑generating function M can be written explicitly in terms of the quantile function underlying the Lorenz curve, providing a constructive route to model specification. Finally, classical inequality indices like the Gini coincide with scale‑free, rotation‑invariant indices of global upper‑tail dependence, thereby complementing local coefficients such as the upper tail dependence index λU.
This paper evaluates the performance of machine learning models for nowcasting U.S. GDP, comparing them to traditional econometric approaches in both fixed-horizon and rolling-horizon settings. We construct composite indicators from high-frequency macroeconomic data and assess models across varying forecast lead times. The results show that during periods of economic volatility, machine learning models—particularly a manually tuned Multi-Layer Perceptron—significantly reduce forecasting error compared to autoregressive benchmarks. In more stable periods, simpler models perform comparably. Horizon-stratified evaluation reveals that combining high-frequency indicators with autoregressive components delivers the most accurate forecasts in the critical 4–8 week window before official GDP releases. Our findings underscore the conditional value of machine learning techniques in improving real-time macroeconomic surveillance and decision-making.
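A minimal sketch of the comparison between a small multilayer perceptron and a trivial benchmark, using scikit-learn on simulated indicator data (the data-generating process, network size, and split are all invented; the paper's models and indicators are richer):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
T = 200
# Stand-in monthly indicators and a target with a mild nonlinearity.
X = rng.normal(size=(T, 4))
gdp = (X @ np.array([0.5, -0.3, 0.2, 0.1])
       + 0.3 * np.tanh(X[:, 0])
       + 0.1 * rng.normal(size=T))

split = 160   # hold out the last 40 periods as a pseudo out-of-sample window
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:split], gdp[:split])
pred = mlp.predict(X[split:])
rmse = np.sqrt(np.mean((pred - gdp[split:]) ** 2))

# Trivial benchmark: predict with the training-sample mean.
rmse_bench = np.sqrt(np.mean((gdp[:split].mean() - gdp[split:]) ** 2))
```

A rolling-horizon evaluation would wrap the fit/predict step in a loop over expanding training windows, as in the paper.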
 <title>Abstract</title> <bold>Purpose</bold> – This study compares four dynamic modeling frameworks—Laplace Transforms, Ordinary Differential Equations (ODEs), Partial Differential Equations (PDEs), and Delayed Differential Equations (DDEs)—to assess their relative effectiveness in capturing the macroeconomic impacts of tariff-induced policy shocks. The central aim is to determine which modeling approach most accurately reflects delayed feedback mechanisms, nonlinear adjustment dynamics, and policy lags inherent in real-world economies. <bold>Design/methodology/approach</bold> – We develop a stylized two-sector trade-economy model incorporating domestic and imported output, the consumer price index, and government subsidies. Each model—Laplace, ODE, PDE, and DDE—is subjected to an equivalent tariff shock scenario. We utilize asymptotic expansions, delay differential simulations, and numerical solvers to explore dynamic behavior under both instantaneous and lagged policy responses. In particular, the DDE framework incorporates a dual-delay structure to reflect both endogenous market inertia and exogenous policy implementation lags. Supplementary analyses using bifurcation diagrams, cobweb plots, and Lyapunov exponents provide insight into system stability, oscillations, and potential chaotic dynamics. <bold>Findings</bold> – The comparative analysis reveals that while ODEs and PDEs can model smooth or spatially diffused dynamics, they fail to capture the effects of feedback delays and memory-dependent behavior. Laplace transform methods, although analytically concise, are inadequate in time-domain interpretation. In contrast, Delayed Differential Equations (DDEs) offer superior capability in modeling the nonlinear, lagged, and oscillatory features of policy shocks. The dual-delay DDE system uniquely reflects how economic and policy-driven delays interact to shape inflation, output volatility, and long-term equilibrium shifts. 
<bold>Originality/value</bold> – This paper provides one of the first comprehensive comparisons of classical and advanced time-evolution models in a unified economic context, with an explicit focus on policy-induced delays. By highlighting the importance of dual-delay DDEs in modern macroeconomic modeling, it contributes a robust framework for evaluating trade interventions, inflationary impacts, and policy design under delayed feedback conditions—critical for real-time economic forecasting and decision-making in policy environments where timing is pivotal. To strengthen the analysis, we also employed advanced mathematical tools: ‱ <bold>Lyapunov exponent analysis</bold>, to assess the system’s sensitivity to initial conditions and detect chaotic regimes; ‱ <bold>Bifurcation diagrams</bold>, to explore stability boundaries and transition behaviors under varying delay parameters; ‱ <bold>Cobweb plots and discrete economic maps</bold>, to simulate adjustment dynamics in consumer and producer behavior; ‱ <bold>Time-evolving simulations</bold>, to evaluate policy impacts under realistic, non-instantaneous conditions. <bold>JEL Classification</bold> – C61, C62, C63, C65, F13
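The dual-delay DDE mechanism described above can be sketched numerically. The scalar form, parameter values, and delays below are illustrative assumptions, not the authors' calibration; the state `p` can be read as a price-level deviation responding to a tariff shock with one endogenous (market-inertia) lag and one exogenous (policy-implementation) lag:

```python
import numpy as np

def simulate_dual_delay(a=1.0, b=0.4, c=0.3, tau1=1.0, tau2=2.0,
                        shock=1.0, t_max=20.0, dt=0.01):
    """Euler integration of the illustrative dual-delay DDE
        p'(t) = -a*p(t) + b*p(t - tau1) + c*s(t - tau2),
    where s is a unit tariff shock switched on at t = 0 and the
    history is p = 0 for t < 0. All parameters are assumptions."""
    n = int(t_max / dt)
    d1, d2 = int(tau1 / dt), int(tau2 / dt)
    p = np.zeros(n + 1)
    for i in range(n):
        p_lag = p[i - d1] if i >= d1 else 0.0  # endogenous market-inertia delay
        s_lag = shock if i >= d2 else 0.0      # exogenous policy-implementation delay
        p[i + 1] = p[i] + dt * (-a * p[i] + b * p_lag + c * s_lag)
    return p

path = simulate_dual_delay()
```

For these parameter values the system is stable and settles near the steady state c*shock/(a - b) = 0.5; larger delays or a stronger feedback coefficient b can produce the oscillatory or chaotic regimes the abstract discusses.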
 This study examines the effect of systemic risk on economic growth and provides GDP growth forecasts for five emerging countries. To do so, Mixed Data Sampling (MIDAS) models are used to incorporate the Emerging Market Bond Index (EMBI) and the Composite Leading Indicator (CLI) as explanatory variables. This methodology combines time series of different frequencies under various weighting schemes and lag structures. The main empirical results suggest that MIDAS models provide accurate forecasts, in particular during periods of deep recession, as in the subprime crisis. Moreover, the results are consistent with many previous studies on emerging countries, which likewise demonstrate the accuracy of MIDAS models. This research provides a deeper understanding of the impact of systemic risk on economic growth and of the financial factors that influence it, thereby contributing substantially to better economic policy design in developing countries.
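A core ingredient of MIDAS regressions is a parsimonious weighting of high-frequency lags. A minimal sketch of the widely used exponential Almon scheme follows; the function names and parameter values are illustrative, not this paper's specification:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial: w_k proportional to
    exp(theta1*k + theta2*k**2), normalized to sum to one."""
    k = np.arange(n_lags)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_regressor(x_high, n_lags, theta1, theta2):
    """Collapse the last n_lags high-frequency observations into one
    weighted low-frequency regressor; the most recent observation
    receives weight w[0]."""
    w = exp_almon_weights(theta1, theta2, n_lags)
    return float(w @ x_high[-n_lags:][::-1])
```

In estimation the two theta parameters are fit jointly with the regression coefficients, so a single pair of parameters governs an arbitrarily long lag profile.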
 The Phillips curve is one of the most widely debated economic patterns. Its practical application, including for adjusting monetary policy, still causes significant disagreement among economists. Understanding the nature (essence) of the Phillips curve is therefore an urgent task. The purpose of the study is to substantiate the hypothesis that the Phillips curve rests on a different pattern than economists currently believe. Methods of analysis and synthesis, as well as systemic and logical analysis, were used. The empirical basis of the study is US economic statistics for the period from 1980 to 2022. The core argument is that analysis of real economic indicators (real wages, real GDP, etc.) in the vast majority of cases takes precedence over analysis of nominal indicators (nominal wages, nominal GDP, etc.). The two analyses coincide if prices remain constant, and it was precisely during the first part of the period Phillips studied in the British economy (1862–1913) that prices remained virtually unchanged. The remainder of Phillips’s sample (1914–1957) was heavily influenced by non-economic factors and may therefore be less significant. Since Phillips originally defined his curve as an inverse relationship between nominal wages and unemployment, at constant prices this implies an inverse relationship between real wages and unemployment. The author explains this relationship by the cyclical pattern already present in the UK economy: during economic growth real wages rise and unemployment falls, and vice versa. Conclusion: it is quite reasonable to believe that the curve in question shows an inverse relationship between fluctuations in unemployment and fluctuations in real wages.
La presente investigación se centra en estudiar los efectos de las noticias digitales en las expectativas de inflación de los consumidores. De forma específica, se realiza un anålisis de las 
 La presente investigación se centra en estudiar los efectos de las noticias digitales en las expectativas de inflación de los consumidores. De forma específica, se realiza un anålisis de las noticias que informan sobre la tasa de cambio con el objetivo de identificar el proceso de formación de expectativas de inflación de estos agentes. Para ello, se utilizan noticias de periódicos digitales de la economía colombiana para el periodo 2009-2018. A través de un anålisis macroeconométrico de series de tiempo, se encuentra que existe una influencia significativa de los medios digitales en la formación de expectativas de los consumidores. En particular, el mayor volumen de noticias sobre la tasa de cambio tiene la capacidad de aumentar las expectativas de inflación de los consumidores.
 Purpose: The study tests the asymmetric effects of income and wealth inequality on the GDP growth rate in India during 1995-2023 using a NARDL model. Methods: The paper applies the Shin et al. (2014) approach to estimate asymmetry in the NARDL model, the Dickey and Fuller (1979) test for unit roots, and the Breusch-Pagan (1979) tests for serial correlation and heteroscedasticity. Stability was tested following Page (1954), and symmetry using the Wald test (1943). Data on income and wealth inequality were collected from the World Inequality Data Lab, and data on the GDP growth rate from the World Bank. Results: The paper finds that positive and negative changes in the cumulative dynamic multipliers of both income and wealth inequality affect GDP growth favourably and adversely, respectively. Positive and negative responses diverge from their long-run limits, and the asymmetry lines of wealth and income inequality show no convergence. Cointegration of wealth and income inequality has a negative impact on the GDP growth rate. Conclusion: The NARDL model can help policy makers conduct fiscal and monetary policy and other welfare measures in both the short run and the long run to ameliorate inequalities and sustain the GDP growth rate. The model shows how positive and negative asymmetric responses in the short and long run affect GDP growth.
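The NARDL approach of Shin et al. (2014) enters a regressor through the partial sums of its positive and negative changes, which is what lets the model estimate separate long-run effects for increases and decreases. A minimal sketch of that decomposition (illustrative only, not this paper's estimation code):

```python
import numpy as np

def partial_sums(x):
    """Decompose a series into the cumulative positive and negative
    changes, x_t^+ and x_t^-, the asymmetric regressors of a NARDL
    specification. The first change is set to zero."""
    dx = np.diff(x, prepend=x[0])
    pos = np.cumsum(np.maximum(dx, 0.0))  # x_t^+ : cumulative increases
    neg = np.cumsum(np.minimum(dx, 0.0))  # x_t^- : cumulative decreases
    return pos, neg
```

By construction pos + neg reproduces the demeaned series (x - x[0]); pos is non-decreasing and neg non-increasing, so the two regressors capture rises and falls separately.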
 Given the impact of inflation under the prevailing monetary policy, company policy can improve financial performance by increasing operating income, increasing cash holdings, and utilizing short-term loans received. This study analyzes the impact of inflation on the liquidity of Asiacell Telecommunications Company in Iraq from 2012 to 2023 by estimating the effect of inflation on the company's liquidity variables (operating income, current operating surplus, cash, and short-term loans received). Using a quantitative analysis method, the study finds that inflation has a significant positive impact on all the components of financial flows examined. The results also show conditional volatility and asymmetry in the response of the variables to cash liquidity: for operating income and cash, the response to positive shocks is greater than the response to adverse shocks, while the opposite holds for the current operating surplus and short-term loans received, where the response to adverse shocks is greater. The study provides analytical insights on inflation and liquidity that can help improve Asiacell's financial stability and assist the company in formulating more effective financial policies.
 Abstract In this paper, we demonstrate that revisions to the BLS’s monthly Employment Situation report are forecastable, using a Bayesian hierarchical model. By incorporating labor market and economic activity measures, our model accurately predicts both the level and sign of data revisions. Enhancing the ability to forecast data revisions can significantly improve financial market efficiency and support better policy decisions by government and central bank officials, who often depend on initial employment estimates or must wait for time-consuming revisions to obtain a more accurate picture of the labor market.
 Abstract I develop a two-asset heterogeneous-agent New Keynesian model with search and matching frictions in the labor market, which extends the transmission mechanism of monetary policy to household consumption. Uninsurable countercyclical unemployment risk plays a crucial role in the transmission of monetary shocks to consumption through a novel channel driven by countercyclical precautionary saving motives. Following an increase in the real interest rate, unconstrained households raise their liquid savings and reduce current consumption to insure against the risk of lower future individual labor income, resulting from longer expected unemployment durations. This mechanism accounts for 16 % of the total decline in consumption in a model calibrated to a realistic wealth distribution. The strength of the countercyclical precautionary saving motive depends on the degree of wage rigidity and the fiscal policy rule in general equilibrium. Additionally, I extend the sequence-space Jacobian algorithm to a continuous-time framework, where the efficiency of constructing partial equilibrium Jacobians is enhanced by a generalized approach to handling a large number of income grid points in the heterogeneous-agent block.
 This paper analyses how stock market volatility responds to monetary policy during bull and bear market phases over the period 1990Q1 to 2023Q4 using an MS-VAR model. It investigates stock market fluctuations in both bull and bear periods using the All Share Index, the composite index of the Nigerian Stock Exchange (NSE), and interest rates, the most appropriate monetary policy indicator. Stock market volatility was found to respond positively to monetary policy shocks, with relatively small volatility in the first regime. In the second regime, an increase in the monetary policy shock raises volatility at the onset before the effect turns negative. The policy advice is that the Central Bank of Nigeria (CBN) should be extra cautious when setting and enforcing policy measures. Furthermore, given the erratic performance of the Nigerian stock market, the government and the relevant authorities should avoid interfering with the market during such episodes, as interference may trigger further instability: such measures merely slow the causes down and do not bring lasting solutions.
 Fluctuations in the balance of payments reflect exchange-rate instability; crisis factors also strongly affect whether the balance of payments is in deficit or surplus. If the exchange rate depreciates, a country tends to increase exports because domestic prices become relatively cheaper than foreign prices, which strengthens export competitiveness; conversely, if a country's currency appreciates, imports tend to rise. This study analyzes and compares whether there is a causal relationship between the balance of payments and currency exchange rates in ASEAN countries. It applies the Granger causality test to time-series data for 2005-2019. Only in Myanmar and the Philippines do the exchange rate and balance of payments variables exhibit a causal relationship. This is in line with the exchange-rate comparison presented in the introduction, where the two countries' exchange rates are relatively higher than those of Indonesia and Vietnam. It also means that, however low (depreciated) the exchange rate, there is little or no scope for Indonesia and Vietnam to expand exports, which reduces the current account, a component of the balance of payments.
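The Granger causality test used here amounts to an F-test of whether lags of one series improve the prediction of another beyond its own lags. A minimal bivariate sketch follows; the lag order and simulated data are illustrative assumptions (applied studies typically choose lags by information criteria):

```python
import numpy as np

def granger_f(y, x, p=1):
    """F-statistic for H0: p lags of x do not help predict y beyond
    y's own p lags. Restricted model: y_t on a constant and lags of y;
    the unrestricted model adds the lags of x."""
    T = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - j:T - j] for j in range(1, p + 1)])
    xlags = np.column_stack([x[p - j:T - j] for j in range(1, p + 1)])
    Xr = np.column_stack([np.ones(T - p), ylags])
    Xu = np.column_stack([np.ones(T - p), ylags, xlags])
    rss = lambda X: float(np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2))
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = (T - p) - Xu.shape[1]  # residual degrees of freedom, unrestricted model
    return ((rss_r - rss_u) / p) / (rss_u / df)
```

A large F-statistic relative to the F(p, df) critical value rejects the null of no Granger causality; running the test in both directions, as the study does, distinguishes one-way from bidirectional causality.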
Emilia Tomczyk | Przegląd Statystyczny Statistical Review
 This study examines the strength of the consensus on the expected prices across the European Union (EU) countries with respect to various factors: seniority in the EU (‘old’ vs. ‘new’ EU Member States, i.e. those that joined the community in 2004), the size of the economy (small vs. large) and currency cohesion (eurozone vs. local-currency countries). The results show that the lowest consensus on expected prices and relatively little variation in such a consensus occur in the ‘old’ EU countries. Opinions on the direction of the expected price changes vary substantially, but this variation remains stable in time. For almost every EU country, the consensus on the expected prices is higher in the ‘regular times’ subsample than in the ‘pandemic and war’ subsample, and for many countries, the differences in the strength of the consensus are larger for the ‘pandemic and war’ subsample. As far as the correlation with the observed price changes is concerned, the highest correlation coefficients are noted for small economies. Analysing correlation coefficients across subsamples shows that during difficult times of the pandemic and war, seniority in the EU helps the respondents to predict the direction of the expected price changes more in line with the actual price developments.
 In this paper, we apply GrĂŒnwald–Letnikov-type fractional-order calculus to simulate the growth of Serbia’s gross domestic product (GDP). We also compare the fractional-order model’s results with those of a similar integer-order model. The significance of variables is assessed by the Akaike Information Criterion (AIC). The research demonstrates that the GrĂŒnwald–Letnikov fractional-order model provides a more accurate representation compared to the standard integer-order model and performs very accurately in predicting GDP values.
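The GrĂŒnwald–Letnikov fractional difference underlying such models generalizes integer-order differencing through binomial weights. A minimal sketch with a unit time step (illustrative only, not the paper's estimated specification):

```python
import numpy as np

def gl_weights(alpha, n):
    """GrĂŒnwald–Letnikov weights (-1)**j * C(alpha, j), computed with
    the recursion w_0 = 1, w_j = w_{j-1} * (j - 1 - alpha) / j."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

def gl_fracdiff(x, alpha):
    """Fractional difference of order alpha of a series, truncating
    the weight sum at the start of the sample (unit step assumed)."""
    w = gl_weights(alpha, len(x))
    return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])
```

Setting alpha = 1 recovers the ordinary first difference and alpha = 0 the identity; non-integer alpha produces the long-memory behavior that the fractional-order GDP model exploits.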
 We study discretionary monetary policy in an economy where economic agents have quasi-hyperbolic discounting. We demonstrate that a benevolent central bank is able to keep inflation under control for a wide range of discount factors. If the central bank, however, does not adopt the household’s time preferences and tries to discourage early consumption and delayed saving, then a marginal increase in steady state output is achieved at the cost of a much higher average inflation rate. Indeed, we show that it is desirable from a welfare perspective for the central bank to quasi-hyperbolically discount by more than households do. Welfare is improved because this discount structure emphasizes the current-period cost of price changes and leads to lower average inflation. We contrast our results with those obtained when policy is conducted according to a Taylor-type rule.
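Quasi-hyperbolic (beta-delta) discounting replaces the exponential factor delta**t with beta*delta**t for all future periods, producing the present bias the paper builds on. A minimal illustration (the parameter values are assumptions):

```python
def qh_factors(beta, delta, horizon):
    """Quasi-hyperbolic discount factors: 1 for today and
    beta * delta**t for every future period t >= 1."""
    return [1.0 if t == 0 else beta * delta ** t for t in range(horizon)]

# Present bias: patience between today and tomorrow is beta*delta,
# but between any two future periods it is just delta.
f = qh_factors(beta=0.7, delta=0.97, horizon=5)
```

Because the short-run discount ratio f[1]/f[0] = beta*delta is smaller than the long-run ratio f[2]/f[1] = delta, agents are more impatient about immediate trade-offs, which is what a central bank with standard preferences would be tempted to lean against.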
 We revisit the reversal puzzle: A counterintuitive contraction of inflation in response to an interest rate peg. We show that it is intimately related to the degree of agents' anticipation. If agents perfectly anticipate the peg, reversals occur depending on the duration of the peg. If they do not anticipate the peg, reversals are absent. In the case of imperfect anticipation, implemented by a Markov-switching framework, we measure the degree of anticipation by the frequency of the peg regime. Even if the frequency of the peg takes on a value twice as large as empirically observed, the reversal puzzle is absent.