Dear forum. The methods I know so far for assigning numerical values to parameters can, to the best of my knowledge, be summarized as: i) calibrating from previous literature findings, ii) IRF-matching (from an SVAR, as noted here), and iii) estimating directly from a BVAR posterior distribution.
I'd be grateful if you could point me to other methods that exist and are used and accepted as sound empirical validation of DSGE models. Could you also comment on the difficulty-robustness trade-off of each approach? For example, the BVAR approach may be the most robust, but it requires a very good understanding of some specialized statistical and numerical methods.
I have seen published DSGE papers whose calibration criterion is the model's ability to replicate second moments, at least in sign. Is this approach robust?
Also, is IRF-matching commonly used in published DSGE journal papers?
(By “robust” I mean a method that is statistically valid, or at least widely accepted in the literature.)
This is a good question, and I struggled with it for a while. I think your intuition is telling you that there should be a specific algorithm for doing this “correctly”. I find this (Bayesian estimation, and estimation of DSGE models in general) to be more of an “art”, if that makes any sense; I don’t think there is a 100% proven correct way to do it. Ultimately, you are building a stethoscope with which to diagnose your patient, and two different doctors will read and interpret the stethoscope’s results differently. That being said, Eric Sims’ notes are a great place to start:
You are alluding to the differences between full information and partial information approaches. Some of the discussion can be found at
Estimating the model with full information techniques requires you to fully specify the data generating process. That is often hard. For that reason, you still find papers using GMM/SMM or IRF-matching (potentially in the form of indirect inference).
That’s a fair point. Actually, having the model perform well against empirical second moments would be enough, and I think I can accomplish that without any method that is currently out of my reach, such as Bayesian estimation.
Well, that’s true. For business-cycle moment matching, would it suffice to filter the cyclical component from the data, compute second moments, and use some routine that minimizes the difference between the theoretical moments and the data moments by changing the values of some parameters?
Also, what would be an efficient way of doing so? Gradient-based optimization (as in machine learning) comes to mind; do you know of an example or guide for such a process? Thanks!
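The loop described above can be sketched in a few lines. Below is an illustrative moment-matching exercise using a toy AR(1) "model" whose second moments are known in closed form; the parameter names, target moments, and the derivative-free optimizer are all assumptions for the sketch, not part of any particular DSGE workflow:

```python
# "Hand-made" moment matching on a toy AR(1) model: choose (rho, sigma) so
# the model's theoretical second moments match given data moments.
import numpy as np
from scipy.optimize import minimize

# Illustrative "data" moments: variance and first-order autocorrelation of
# the cyclical component (in practice, computed from filtered data).
data_moments = np.array([2.0, 0.8])

def model_moments(params):
    rho, sigma = params
    var = sigma**2 / (1.0 - rho**2)   # AR(1) unconditional variance
    acf1 = rho                        # AR(1) first-order autocorrelation
    return np.array([var, acf1])

def distance(params):
    rho, sigma = params
    if not (0.0 < rho < 1.0) or sigma <= 0.0:  # stay in the valid region
        return 1e10
    diff = model_moments(params) - data_moments
    return diff @ diff                # identity weighting matrix

# Derivative-free search; gradient-based methods also work when the
# moment map is smooth in the parameters.
res = minimize(distance, x0=[0.5, 1.0], method="Nelder-Mead")
rho_hat, sigma_hat = res.x
print(rho_hat, sigma_hat)  # should recover rho ~ 0.8, sigma ~ 0.849
```

With more moments than parameters, one would replace the identity weighting matrix with an estimated optimal one (the usual GMM/SMM two-step), but the structure of the loop stays the same.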
This should be covered in, e.g., Greene’s econometrics textbook. Full information maximum likelihood methods employ the likelihood function, which incorporates the full stochastic information of the model; beyond the likelihood, there is no further information you would need. Partial information methods, in contrast, rely on a subset of the information embedded in the likelihood.
Thanks! I was also checking the code that uses the method_of_moments function in the unstable Dynare version, but I’m not sure I’ll be able to use it properly for my particular needs. Do you perhaps have an example of “hand-coded” moment matching?
Besides, I still see the HP filter used a lot for cycle extraction, but I wonder whether you still recommend it for that purpose, or whether we should use other methods that have been proposed. I’d be very grateful to hear your comments on that, and also for any literature you could recommend on this problem of filter choice. Thanks!
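For concreteness, the HP filter itself is just a penalized least-squares problem with a closed-form solution, so it is easy to implement by hand when comparing filtered data and simulated series. A minimal sketch of the standard textbook formula (this is not Dynare's implementation; the linear-trend check is only a sanity test):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Return (trend, cycle) via the HP filter's closed form:
    trend = (I + lam * D'D)^{-1} y, with D the second-difference matrix."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
    return trend, y - trend

# Sanity check: a pure linear trend has second differences of zero,
# so the filter returns it unchanged and the cycle is (numerically) zero.
t = np.arange(50, dtype=float)
trend, cycle = hp_filter(2.0 + 0.5 * t)
print(np.max(np.abs(cycle)))  # essentially 0
```

The dense solve is fine for macro sample sizes; for long series one would use the sparse/banded structure of `D'D` instead.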
I’ll check the code, thanks. On the other hand, as far as I understand, since I’m (most likely) using indirect inference, the choice of filter is more a matter of making the simulation and the data comparable; in that sense it’s not strictly necessary to choose a “perfect” filter for my work, and the standard HP filter will work fine. Am I right?
On the other hand, if I consider Bayesian estimation with Dynare, it’s not completely clear to me, as in
whether few observations (~50 in my case, for about 7 variables) are too little for Bayesian estimation. For example, if I were to perform IRF-matching with an empirical VAR, having more than 6 variables would eat up my degrees of freedom quickly; what are the corresponding drawbacks in Bayesian estimation? And if I were to venture into this kind of estimation, what should I read first?
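The degrees-of-freedom worry can be made concrete with back-of-the-envelope arithmetic; the numbers below are illustrative, assuming an unrestricted VAR with an intercept:

```python
# Degrees-of-freedom check for an unrestricted VAR(p) with intercept:
# each of the n equations has n*p lag coefficients plus a constant,
# and the usable sample loses p initial observations.
n, p, T = 7, 2, 50                   # variables, lags, observations
params_per_eq = n * p + 1            # 7*2 + 1 = 15 coefficients
dof_per_eq = (T - p) - params_per_eq # 48 usable obs - 15 = 33
print(params_per_eq, dof_per_eq)
```

Even at two lags, 15 of the 48 usable observations per equation go to coefficients, which is why short samples push people toward Bayesian shrinkage (e.g. Minnesota-type priors) rather than unrestricted OLS.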
I have a question on the hp-filter and the new GMM estimation toolbox provided in the Dynare 4.7 unstable version, that somehow relates to this thread.
Actually, I want to estimate the dynamic parameters of my model by minimizing the distance between a few second-order moments in my data and the corresponding theoretical moments in my model. My idea was to compare HP-filtered moments in both the data and the model, because the moments presented in the literature are often based on this filter, and I wanted comparable values.
However, I am not sure how to minimize the distance between HP-filtered data moments and HP-filtered theoretical moments with the new method_of_moments command. From what I understand, the prefilter option only applies to the data. I used to rely on the hp_filter option of the stoch_simul command (which no longer seems to work for theoretical moments in Dynare 4.7), but the method_of_moments command does not use it.
Is there a way to HP-filter the theoretical moments within the new method_of_moments command, or should I keep relying on my own code (which takes much longer to run)? If there is no way to do this with the new command, is there a good reason for that? I am aware that the use of the HP filter is much debated, but still, most of the recent papers I read use it.
When I use the method_of_moments command, the theoretical second-order moments I get are quite different from those obtained after applying the HP filter (the variances are much larger), so this is quite an issue.
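One way to see why the unfiltered theoretical variances come out much larger, and to HP-filter theoretical moments without any simulation, is to work in the frequency domain: multiply the model's spectral density by the squared gain of the HP cycle filter and integrate. A minimal sketch for an AR(1) with illustrative parameter values (this is not Dynare's internal routine; the gain formula is the standard King-Rebelo expression):

```python
import numpy as np

# Filtered vs. unfiltered theoretical variance of an AR(1), in the
# frequency domain. The HP cycle filter's gain at frequency w is
# 4*lam*(1-cos w)^2 / (1 + 4*lam*(1-cos w)^2).
rho, sigma, lam = 0.9, 1.0, 1600.0

w = np.linspace(1e-6, np.pi, 20000)
dw = w[1] - w[0]

# AR(1) spectral density and HP cycle gain
spectrum = sigma**2 / (2 * np.pi) / (1 - 2 * rho * np.cos(w) + rho**2)
gain = 4 * lam * (1 - np.cos(w))**2 / (1 + 4 * lam * (1 - np.cos(w))**2)

# Integrate over [-pi, pi] using symmetry (simple Riemann sum)
var_unfiltered = 2 * np.sum(spectrum) * dw             # ~ sigma^2/(1-rho^2)
var_filtered = 2 * np.sum(gain**2 * spectrum) * dw     # much smaller
print(var_unfiltered, var_filtered)
```

Because the HP filter removes the low frequencies, where a persistent AR(1) concentrates most of its variance, the filtered theoretical variance is far below the unfiltered one, which is consistent with the discrepancy described above.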