Who Benefits from GRW? Heterogeneous Employment Effects of Investment Subsidies in Saxony-Anhalt
Eva Dettmann, Mirko Titze, Antje Weyh
IWH Discussion Papers, No. 27, 2017
Abstract
The paper estimates the plant-level employment effects of investment subsidies in one of the most strongly subsidized German Federal States. We analyze the treated plants as a whole, as well as the influence of heterogeneity in plant characteristics and the economic environment. Modifying the standard matching and difference-in-differences approach, we develop a new procedure that is particularly useful for the evaluation of funding programs with individual treatment phases within the funding period. Our database combines treatment, employment and regional information from different sources, which allows us to relate the absolute effects to the amount of the subsidy paid. The results suggest that investment subsidies have a positive influence on employment development in absolute and standardized figures, with considerable effect heterogeneity.
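To make the evaluation design more concrete, here is a minimal Python sketch of a propensity-score matching and difference-in-differences estimator, assuming a hypothetical plant-level data set with columns 'treated', 'emp_pre' and 'emp_post'; it illustrates the general technique only, not the authors' actual procedure, which additionally accounts for individual treatment phases within the funding period.

# Minimal matching + difference-in-differences sketch (illustrative column names).
import pandas as pd
from sklearn.linear_model import LogisticRegression

def matched_did(df, covariates):
    # Propensity score estimated from plant characteristics
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df['treated'])
    df = df.assign(pscore=ps_model.predict_proba(df[covariates])[:, 1])
    treated = df[df['treated'] == 1]
    controls = df[df['treated'] == 0]
    # Nearest-neighbour matching on the propensity score (with replacement)
    idx = [(controls['pscore'] - p).abs().idxmin() for p in treated['pscore']]
    matched_controls = controls.loc[idx]
    # Difference-in-differences: employment change of treated minus matched controls
    diff_treated = (treated['emp_post'] - treated['emp_pre']).mean()
    diff_control = (matched_controls['emp_post'] - matched_controls['emp_pre']).mean()
    return diff_treated - diff_control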
Read article
Qual VAR Revisited: Good Forecast, Bad Story
Makram El-Shagi, Gregor von Schweinitz
Journal of Applied Economics, No. 2, 2016
Abstract
Due to the recent financial crisis, interest in econometric models that allow the incorporation of binary variables (such as the occurrence of a crisis) has surged. This paper evaluates the performance of the Qual VAR, originally proposed by Dueker (2005). The Qual VAR is a VAR model including a latent variable that governs the behavior of an observable binary variable. While we find that the Qual VAR performs reasonably well in forecasting (outperforming a probit benchmark), there are substantial identification problems even in a simple VAR specification. Typically, identification in economic applications is far more difficult than in our simple benchmark. Therefore, when the economic interpretation of the dynamic behavior of the latent variable and the chain of causality matter, use of the Qual VAR is inadvisable.
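For intuition, the data-generating process behind a Qual VAR can be sketched as a latent series that evolves jointly with the observables, of which only the sign is visible as a binary indicator; the VAR(1) coefficients below are illustrative assumptions, not estimates from the paper.

# Simulated Qual VAR data-generating process (illustrative coefficients).
import numpy as np

rng = np.random.default_rng(0)
T = 200
A = np.array([[0.5, 0.2],     # VAR(1) coefficients for (observable x, latent y*)
              [0.3, 0.7]])
state = np.zeros(2)
x = np.empty(T)
crisis = np.empty(T, dtype=int)
for t in range(T):
    state = A @ state + rng.normal(size=2)
    x[t] = state[0]                # observed macro variable
    crisis[t] = int(state[1] > 0)  # binary indicator; the latent y* itself is unobserved
# Estimation (e.g. Gibbs sampling of the latent path) would try to recover y* from
# x and the binary indicator; identifying that path is exactly the paper's concern.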
Read article
College Choice and the Selection of Mechanisms: A Structural Empirical Analysis
J.-R. Carvalho, T. Magnac, Qizhou Xiong
Abstract
We use rich microeconomic data on performance and choices of students at college entry to study the interaction between the revelation of college preferences through exams and the selection of allocation mechanisms. We propose a method in which preferences and expectations of students are identified from data on choices and multiple exam grades. The counterfactuals we consider balance costs arising from congestion and exam organization. Moving to deferred acceptance or inverting the timing of choices and exams is shown to increase welfare. Redistribution among students or schools is sizeable in all counterfactual experiments.
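Since one of the counterfactuals moves to deferred acceptance, a bare-bones student-proposing version of that mechanism is sketched below; the preference lists, rankings and capacities are made-up illustrations, not data from the paper.

# Student-proposing deferred acceptance (Gale-Shapley), minimal sketch.
def deferred_acceptance(student_prefs, college_rank, capacity):
    # student_prefs: dict student -> list of colleges, most preferred first
    # college_rank:  dict college -> dict student -> rank (lower is better)
    # capacity:      dict college -> number of seats
    assignment = {c: [] for c in capacity}
    next_choice = {s: 0 for s in student_prefs}
    unmatched = set(student_prefs)
    while unmatched:
        s = unmatched.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                                 # student has exhausted their list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        assignment[c].append(s)
        assignment[c].sort(key=lambda stud: college_rank[c][stud])
        if len(assignment[c]) > capacity[c]:         # over capacity: reject the worst-ranked
            unmatched.add(assignment[c].pop())
    return assignment

# Example: two colleges with one seat each, three students
prefs = {'s1': ['A', 'B'], 's2': ['A', 'B'], 's3': ['B', 'A']}
ranks = {'A': {'s1': 1, 's2': 2, 's3': 3}, 'B': {'s1': 2, 's2': 1, 's3': 3}}
print(deferred_acceptance(prefs, ranks, {'A': 1, 'B': 1}))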
Read article
Bottom-up or Direct? Forecasting German GDP in a Data-rich Environment
Katja Drechsel, Rolf Scheufele
Abstract
This paper presents a method to conduct early estimates of GDP growth in Germany. We employ MIDAS regressions to circumvent the mixed frequency problem and use pooling techniques to efficiently summarize the information content of the various indicators. More specifically, we investigate whether it is better to disaggregate GDP (either via total value added of each sector or by the expenditure side) or whether a direct approach is more appropriate when it comes to forecasting GDP growth. Our approach combines a large set of monthly and quarterly coincident and leading indicators and takes into account the respective publication delay. In a simulated out-of-sample experiment we evaluate the different modelling strategies conditional on the given state of information and depending on the model averaging technique. The proposed approach is computationally simple and can be easily implemented as a nowcasting tool. Finally, this method also allows retracing the driving forces of the forecast and hence makes the forecast outcome interpretable.
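As a rough illustration of the MIDAS idea, the sketch below regresses a quarterly target on weighted monthly lags using an exponential Almon polynomial; the simulated data, lag length and starting values are assumptions for illustration only, and the paper additionally pools many such indicator models.

# MIDAS regression with exponential Almon lag weights (illustrative data).
import numpy as np
from scipy.optimize import minimize

def exp_almon(theta1, theta2, n_lags):
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()                        # weights sum to one

def midas_loss(params, y, X_monthly):
    beta0, beta1, theta1, theta2 = params
    w = exp_almon(theta1, theta2, X_monthly.shape[1])
    fitted = beta0 + beta1 * (X_monthly @ w)  # weighted high-frequency lags
    return np.sum((y - fitted) ** 2)

rng = np.random.default_rng(1)
X_monthly = rng.normal(size=(40, 9))          # 40 quarters, 9 monthly lags per quarter
y = 0.3 + 0.8 * X_monthly[:, :3].mean(axis=1) + rng.normal(scale=0.2, size=40)
res = minimize(midas_loss, x0=[0.0, 0.5, 0.0, -0.05], args=(y, X_monthly), method='Nelder-Mead')
print(res.x)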
Read article
Qual VAR Revisited: Good Forecast, Bad Story
Makram El-Shagi, Gregor von Schweinitz
Abstract
Due to the recent financial crisis, interest in econometric models that allow the incorporation of binary variables (such as the occurrence of a crisis) has surged. This paper evaluates the performance of the Qual VAR, i.e. a VAR model including a latent variable that governs the behavior of an observable binary variable. While we find that the Qual VAR performs reasonably well in forecasting (outperforming a probit benchmark), there are substantial identification problems. Therefore, when the economic interpretation of the dynamic behavior of the latent variable and the chain of causality matter, the Qual VAR is inadvisable.
Read article
Bottom-up or Direct? Forecasting German GDP in a Data-rich Environment
Katja Drechsel, Rolf Scheufele
Abstract
This paper presents a method to conduct early estimates of GDP growth in Germany. We employ MIDAS regressions to circumvent the mixed frequency problem and use pooling techniques to summarize efficiently the information content of the various indicators. More specifically, we investigate whether it is better to disaggregate GDP (either via total value added of each sector or by the expenditure side) or whether a direct approach is more appropriate when it comes to forecasting GDP growth. Our approach combines a large set of monthly and quarterly coincident and leading indicators and takes into account the respective publication delay.
Read article
The Performance of Short-term Forecasts of the German Economy before and during the 2008/2009 Recession
Katja Drechsel, Rolf Scheufele
International Journal of Forecasting, No. 2, 2012
Abstract
The paper analyzes the forecasting performance of leading indicators for industrial production in Germany. We focus on single and pooled leading indicator models both before and during the financial crisis. Pairwise and joint significance tests are used to evaluate single indicator models as well as forecast combination methods. In addition, we investigate the stability of forecasting models during the most recent financial crisis.
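One common way to run the pairwise significance tests mentioned above is a Diebold-Mariano type test of equal predictive accuracy; the sketch below uses a simplified variance estimate and placeholder forecast errors, and the exact tests applied in the paper may differ.

# Simplified Diebold-Mariano style test of equal predictive accuracy.
import numpy as np
from scipy import stats

def dm_test(errors_model, errors_benchmark):
    d = errors_model**2 - errors_benchmark**2          # loss differential (squared errors)
    # A full DM test would use a HAC estimate of the variance of d; this uses the plain one.
    t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))
    return t_stat, p_value

rng = np.random.default_rng(2)
e_indicator = rng.normal(scale=0.9, size=60)  # placeholder errors of an indicator model
e_benchmark = rng.normal(scale=1.0, size=60)  # placeholder errors of a benchmark model
print(dm_test(e_indicator, e_benchmark))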
Read article
Should We Trust in Leading Indicators? Evidence from the Recent Recession
Katja Drechsel, Rolf Scheufele
Abstract
The paper analyzes leading indicators for GDP and industrial production in Germany. We focus on the performance of single and pooled leading indicators during the pre-crisis and crisis period using various weighting schemes. Pairwise and joint significance tests are used to evaluate single indicator models as well as forecast combination methods. In addition, we use an end-of-sample instability test to investigate the stability of forecasting models during the recent financial crisis. In general, we find that only a small number of single indicator models performed well before the crisis. Pooling can substantially increase the reliability of leading indicator forecasts. During the crisis the relative performance of many leading indicator models improved. At short horizons, survey indicators perform best, while at longer horizons financial indicators, such as term spreads and risk spreads, improve relative to the benchmark.
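Two simple pooling schemes from this literature, equal weights and inverse-MSE weights based on past performance, can be sketched as follows; the forecast values are placeholders, and the paper's weighting schemes may differ in detail.

# Forecast combination with equal or inverse-MSE weights (illustrative data).
import numpy as np

def combine_forecasts(forecasts, past_errors, scheme="inverse_mse"):
    # forecasts:   (n_models,) current-period forecasts
    # past_errors: (n_models, n_periods) historical forecast errors per model
    if scheme == "equal":
        w = np.full(len(forecasts), 1.0 / len(forecasts))
    else:
        mse = (past_errors**2).mean(axis=1)
        w = (1.0 / mse) / (1.0 / mse).sum()    # better past performance -> larger weight
    return w @ forecasts

rng = np.random.default_rng(3)
print(combine_forecasts(rng.normal(size=5), rng.normal(size=(5, 20))))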
Read article
Evaluating the German (New Keynesian) Phillips Curve
Rolf Scheufele
North American Journal of Economics and Finance, 2010
Abstract
This paper evaluates the New Keynesian Phillips curve (NKPC) and its hybrid variant within a limited information framework for Germany. The main interest resides in the average frequency of price re-optimization by firms. We use the labor income share as the driving variable and consider a source of real rigidity by allowing for a fixed firm-specific capital stock. A GMM estimation strategy is employed as well as an identification robust method based on the Anderson–Rubin statistic. We find that the German Phillips curve is purely forward-looking. Moreover, our point estimates are consistent with the view that firms re-optimize prices every 2–3 quarters. These estimates seem plausible from an economic point of view. But the uncertainties around these estimates are very large and also consistent with perfect nominal price rigidity, where firms never re-optimize prices. This analysis also offers some explanation as to why previous results for the German NKPC based on GMM differ considerably. First, standard GMM results are very sensitive to the way in which orthogonality conditions are formulated. Further, model mis-specifications may be left undetected by conventional J tests. This analysis points out the need for identification robust methods to get reliable estimates for the NKPC.
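For reference, the hybrid NKPC and the kind of GMM orthogonality condition used to estimate it can be written in generic notation (not necessarily the paper's exact specification) as

\pi_t = \gamma_f \, E_t[\pi_{t+1}] + \gamma_b \, \pi_{t-1} + \lambda \, s_t + \varepsilon_t,
\qquad
E\big[(\pi_t - \gamma_f \pi_{t+1} - \gamma_b \pi_{t-1} - \lambda s_t)\, z_t\big] = 0,

where s_t is the labor income share and z_t a vector of instruments dated t or earlier; the purely forward-looking specification corresponds to \gamma_b = 0. Under Calvo pricing with re-optimization probability 1-\theta, the average time between price re-optimizations is 1/(1-\theta), which is where the 2-3 quarter figure comes from.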
Read article
Is there a Superior Distance Function for Matching in Small Samples?
Eva Dettmann, Claudia Becker, Christian Schmeißer
Abstract
The study contributes to the development of 'standards' for the application of matching algorithms in empirical evaluation studies. The focus is on the first step of the matching procedure, the choice of an appropriate distance function. Supplementary to most former studies, the simulation is strongly based on empirical evaluation situations. This reality orientation induces the focus on small samples. Furthermore, variables with different scale levels must be considered explicitly in the matching process. The choice of the analysed distance functions is determined by the results of former theoretical studies and recommendations in the empirical literature. Thus, in the simulation, two balancing scores (the propensity score and the index score) and the Mahalanobis distance are considered. Additionally, aggregated statistical distance functions not yet used for empirical evaluation are included. The matching outcomes are compared using non-parametric scale-specific tests for identical distributions of the characteristics in the treatment and the control groups. The simulation results show that, in small samples, aggregated statistical distance functions are the better choice for summarising similarities in differently scaled variables compared to the commonly used measures.
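For illustration, the Mahalanobis distance, one of the candidate distance functions compared in the study, can be computed between a treated unit and all control candidates as sketched below; the covariate data are placeholders, and the aggregated statistical distance functions favoured by the study are not shown here.

# Mahalanobis distance of one treated unit to all control candidates (illustrative data).
import numpy as np

def mahalanobis(x, candidates):
    # x: (k,) covariates of a treated unit; candidates: (n, k) control units
    cov_inv = np.linalg.inv(np.cov(candidates, rowvar=False))
    diffs = candidates - x
    return np.sqrt(np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs))

rng = np.random.default_rng(4)
controls = rng.normal(size=(30, 3))
treated_unit = np.array([0.5, -0.2, 1.0])
d = mahalanobis(treated_unit, controls)
print(controls[np.argmin(d)])   # nearest control under the Mahalanobis metric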
Read article