For How Long Do IMF Forecasts of World Economic Growth Stay Up to Date?
Katja Heinisch, Axel Lindner
Applied Economics Letters, No. 3, 2019
Abstract
This study analyses the performance of the International Monetary Fund (IMF) World Economic Outlook output forecasts for the world as a whole and for the groups of advanced economies and of emerging and developing economies. Focusing on the forecasts for the current and the next year, we examine the durability of IMF forecasts: how much time has to pass before they can be improved upon by leading indicators that are updated monthly. Using a real-time data set for GDP and for the indicators, we find that some simple single-indicator forecasts based on data available at higher frequency can significantly outperform the IMF forecasts once the IMF's Outlook publication is only a few months old. In particular, there is a clear gain from using leading indicators from January to March for the forecast of the current year.
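As a rough illustration of the kind of horse race described here, the following sketch (simulated data only, not the paper's real-time data set; the variable names and numbers are hypothetical) compares the RMSE of a fixed annual projection with that of a simple single-indicator regression that is re-estimated as more monthly observations become available.

```python
# Minimal sketch: a fixed annual forecast versus a single-indicator forecast
# that uses the monthly observations available so far. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)

T = 30                                   # hypothetical years of annual GDP growth
growth = 3.0 + rng.normal(0, 1.0, T)     # "true" world GDP growth (illustrative)
# monthly leading indicator correlated with growth, observed January to December
indicator = growth[:, None] + rng.normal(0, 1.5, (T, 12))
# stand-in for a current-year projection fixed at the start of the year
fixed_forecast = growth + rng.normal(0, 1.2, T)

def indicator_forecast(month, train_years):
    """OLS of annual growth on the indicator averaged over the first `month` months."""
    x = indicator[:, :month].mean(axis=1)
    beta = np.polyfit(x[train_years], growth[train_years], 1)
    return np.polyval(beta, x)

# pseudo out-of-sample: estimate on the first 20 years, evaluate on the rest
train, test = np.arange(20), np.arange(20, T)
rmse = lambda e: np.sqrt(np.mean(e ** 2))

print("fixed annual forecast RMSE:", rmse(growth[test] - fixed_forecast[test]))
for month in (3, 6, 9):                  # information up to March, June, September
    f = indicator_forecast(month, train)
    print(f"indicator RMSE with {month} months of data:", rmse(growth[test] - f[test]))
```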
Expectation Formation, Financial Frictions, and Forecasting Performance of Dynamic Stochastic General Equilibrium Models
Oliver Holtemöller, Christoph Schult
Abstract
In this paper, we document the forecasting performance of estimated basic dynamic stochastic general equilibrium (DSGE) models and compare it with that of extended versions which consider alternative expectation formation assumptions and financial frictions. We also show how standard model features, such as price and wage rigidities, contribute to forecasting performance. It turns out that neither alternative expectation formation behaviour nor financial frictions systematically improve the forecasting performance of basic DSGE models. Financial frictions improve forecasts only during periods of financial crises. However, traditional price and wage rigidities systematically help to increase the forecasting performance.
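Statements about whether one model "systematically" forecasts better than another typically rest on tests of equal predictive accuracy. The following sketch (simulated forecast errors, not the estimated DSGE models of the paper) shows a bare-bones Diebold-Mariano-type calculation of that kind.

```python
# Minimal sketch: a simple Diebold-Mariano-type comparison of two sets of
# forecast errors under squared-error loss. Errors here are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 80
e_basic = rng.normal(0, 1.0, n)             # forecast errors of a "basic" model (simulated)
e_extended = rng.normal(0, 1.05, n)         # forecast errors of an "extended" model (simulated)

d = e_basic ** 2 - e_extended ** 2          # loss differential
dm = d.mean() / np.sqrt(d.var(ddof=1) / n)  # DM statistic (no small-sample or HAC correction)
print(f"DM statistic: {dm:.2f} (|DM| > 1.96 would indicate a significant accuracy difference)")
```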
Bottom-up or Direct? Forecasting German GDP in a Data-rich Environment
Katja Heinisch, Rolf Scheufele
Empirical Economics, No. 2, 2018
Abstract
In this paper, we investigate whether there are benefits in disaggregating GDP into its components when nowcasting GDP. To answer this question, we conduct a realistic out-of-sample experiment that deals with the most prominent problems in short-term forecasting: mixed frequencies, ragged-edge data, asynchronous data releases and a large set of potential information. We compare a direct leading indicator-based GDP forecast with two bottom-up procedures—that is, forecasting GDP components from the production side or from the demand side. Generally, we find that the direct forecast performs relatively well. Among the disaggregated procedures, the production side seems to be better suited than the demand side to form a disaggregated GDP nowcast.
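The basic comparison can be illustrated with a small simulation (artificial data, not the paper's mixed-frequency setup): a direct indicator-based GDP forecast versus a bottom-up forecast that predicts the components separately and aggregates them with fixed weights.

```python
# Minimal sketch: "direct" GDP nowcast from one indicator versus a "bottom-up"
# nowcast that forecasts components and aggregates them. All data are simulated.
import numpy as np

rng = np.random.default_rng(2)
T, n_comp = 60, 4
weights = np.array([0.25, 0.3, 0.25, 0.2])       # hypothetical component shares
components = rng.normal(0.4, 1.0, (T, n_comp))   # quarterly component growth rates
gdp = components @ weights                       # aggregate growth by construction
indicator = gdp + rng.normal(0, 0.5, T)          # noisy leading indicator

def ols_forecast(y, x, train, test):
    beta = np.polyfit(x[train], y[train], 1)
    return np.polyval(beta, x[test])

train, test = np.arange(40), np.arange(40, T)

direct = ols_forecast(gdp, indicator, train, test)
bottom_up = sum(w * ols_forecast(components[:, i], indicator, train, test)
                for i, w in enumerate(weights))

rmse = lambda e: np.sqrt(np.mean(e ** 2))
print("direct RMSE:   ", rmse(gdp[test] - direct))
print("bottom-up RMSE:", rmse(gdp[test] - bottom_up))
```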
Predicting Earnings and Cash Flows: The Information Content of Losses and Tax Loss Carryforwards
Sandra Dreher, Sebastian Eichfelder, Felix Noth
Abstract
We analyse the relevance of losses, accounting information on tax loss carryforwards, and deferred taxes for the prediction of earnings and cash flows up to four years ahead. We use a unique hand-collected panel of German listed firms encompassing detailed information on tax loss carryforwards and deferred taxes from the tax footnote. Our out-of-sample predictions show that considering accounting information on tax loss carryforwards and deferred taxes does not enhance the accuracy of performance forecasts and can even worsen performance predictions. We find that common forecasting approaches that treat positive and negative performances equally or that use a dummy variable for negative performance can lead to biased performance forecasts, and we provide a simple empirical specification to account for that issue.
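The asymmetry issue can be illustrated with a small simulation (hypothetical numbers, not the hand-collected panel): a pooled regression of future earnings on current earnings is compared with a specification that gives loss firms their own intercept and slope.

```python
# Minimal sketch: pooled earnings forecast versus a specification that treats
# negative performance separately. The panel here is simulated.
import numpy as np

rng = np.random.default_rng(3)
n = 500
earnings = rng.normal(0.05, 0.10, n)          # current earnings (scaled, hypothetical)
loss = (earnings < 0).astype(float)
# in this illustration, losses are assumed to be less persistent than profits
future = 0.01 + 0.8 * earnings - 0.6 * loss * earnings + rng.normal(0, 0.05, n)

def fit_and_rmse(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sqrt(np.mean((y - X @ beta) ** 2))

ones = np.ones(n)
pooled = np.column_stack([ones, earnings])                        # one slope for all firms
asymmetric = np.column_stack([ones, earnings, loss, loss * earnings])  # loss dummy and interaction

print("pooled RMSE:    ", fit_and_rmse(pooled, future))
print("asymmetric RMSE:", fit_and_rmse(asymmetric, future))
```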
Optimizing Policymakers' Loss Functions in Crisis Prediction: Before, Within or After?
Peter Sarlin, Gregor von Schweinitz
Abstract
Early-warning models most commonly optimize signaling thresholds on crisis probabilities. The ex-post threshold optimization is based upon a loss function accounting for preferences between forecast errors, but comes with two crucial drawbacks: unstable thresholds in recursive estimations and an in-sample overfit at the expense of out-of-sample performance. We propose two alternatives for threshold setting: (i) including preferences in the estimation itself and (ii) setting thresholds ex-ante according to preferences only. Given probabilistic model output, it is intuitive that a decision rule is independent of the data or model specification, as thresholds on probabilities represent a willingness to issue a false alarm vis-à-vis missing a crisis. We provide simulated and real-world evidence that this simplification results in stable thresholds and improves out-of-sample performance. Our solution is not restricted to binary-choice models, but directly transferable to the signaling approach and all probabilistic early-warning models.
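The contrast between ex-post optimized and purely preference-based thresholds can be sketched as follows (simulated probabilities and a simplified loss function, not the paper's exact specification): if a missed crisis costs μ and a false alarm costs 1 − μ, the preference-based rule signals whenever the predicted crisis probability exceeds 1 − μ.

```python
# Minimal sketch: ex-post optimized signalling threshold versus an ex-ante
# threshold set from preferences alone, under a simplified loss function.
import numpy as np

rng = np.random.default_rng(4)
mu = 0.8                                     # preference weight on missing a crisis

def simulate(n):
    p = rng.beta(1, 6, n)                    # well-calibrated crisis probabilities (simulated)
    crisis = rng.random(n) < p
    return p, crisis

def loss(p, crisis, tau):
    signal = p >= tau
    missed = np.mean(crisis & ~signal)       # share of missed crises
    false_alarm = np.mean(~crisis & signal)  # share of false alarms
    return mu * missed + (1 - mu) * false_alarm

p_train, c_train = simulate(300)
p_test, c_test = simulate(3000)

grid = np.linspace(0, 1, 101)
tau_expost = grid[np.argmin([loss(p_train, c_train, t) for t in grid])]
tau_exante = 1 - mu                          # preference-based threshold for this loss

print("ex-post threshold:", tau_expost, " test loss:", loss(p_test, c_test, tau_expost))
print("ex-ante threshold:", tau_exante, " test loss:", loss(p_test, c_test, tau_exante))
```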
Should Forecasters Use Real-time Data to Evaluate Leading Indicator Models for GDP Prediction? German Evidence
Katja Heinisch, Rolf Scheufele
Abstract
In this paper we investigate whether differences exist among forecasts using real-time or latest-available data to predict gross domestic product (GDP). We employ mixed-frequency models and real-time data to reassess the role of survey data relative to industrial production and orders in Germany. Although we find evidence that forecast characteristics based on real-time and final data releases differ, we also observe minimal impacts on the relative forecasting performance of indicator models. However, when obtaining the optimal combination of soft and hard data, the use of final release data may understate the role of survey information.
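A minimal sketch of the evaluation question (simulated series with artificial revision noise, not the German real-time data) compares the measured accuracy of the same indicator model against first-release and against final GDP figures.

```python
# Minimal sketch: evaluating one indicator model against real-time (first-release)
# versus latest-available (final) GDP data. All series are simulated.
import numpy as np

rng = np.random.default_rng(5)
T = 80
gdp_final = rng.normal(0.4, 0.8, T)                # final GDP growth figures
gdp_realtime = gdp_final + rng.normal(0, 0.3, T)   # first releases with revision noise
survey = gdp_final + rng.normal(0, 0.5, T)         # survey indicator (unrevised)

def rmse_of_indicator_model(target):
    train, test = np.arange(60), np.arange(60, T)
    beta = np.polyfit(survey[train], target[train], 1)
    fcst = np.polyval(beta, survey[test])
    return np.sqrt(np.mean((target[test] - fcst) ** 2))

print("RMSE against final data:    ", rmse_of_indicator_model(gdp_final))
print("RMSE against real-time data:", rmse_of_indicator_model(gdp_realtime))
```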
Qual VAR Revisited: Good Forecast, Bad Story
Makram El-Shagi, Gregor von Schweinitz
Journal of Applied Economics, No. 2, 2016
Abstract
In the wake of the recent financial crisis, interest in econometric models that allow for the incorporation of binary variables (such as the occurrence of a crisis) has surged. This paper evaluates the performance of the Qual VAR, originally proposed by Dueker (2005). The Qual VAR is a VAR model that includes a latent variable governing the behavior of an observable binary variable. While we find that the Qual VAR performs reasonably well in forecasting (outperforming a probit benchmark), there are substantial identification problems even in a simple VAR specification. Typically, identification in economic applications is far more difficult than in our simple benchmark. Therefore, when the economic interpretation of the dynamic behavior of the latent variable and the chain of causality matter, use of the Qual VAR is inadvisable.
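The model structure can be illustrated with a short simulation (this only mimics the assumed data-generating process; it does not implement Dueker's estimation procedure): an observed variable and a latent variable follow a joint VAR(1), and the binary indicator records whether the latent variable is above zero.

```python
# Minimal sketch: simulating the data-generating process a Qual VAR assumes.
# Only the first variable and the binary indicator would be observed in practice.
import numpy as np

rng = np.random.default_rng(6)
T = 200
A = np.array([[0.6, 0.2],        # VAR(1) coefficients: first row -> observed variable
              [0.3, 0.7]])       # second row -> latent variable
y = np.zeros((T, 2))             # column 0: observed, column 1: latent
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

crisis = (y[:, 1] > 0).astype(int)   # the econometrician only sees this binary series
print("observed series and crisis indicator, first 10 periods:")
print(np.column_stack([np.round(y[:10, 0], 2), crisis[:10]]))
```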