Best methods for calibrating/estimating parameters

Dear forum. The methods I know so far for assigning numerical values to parameters can, to the best of my knowledge, be summarized as: i) calibrating to previous findings in the literature, ii) IRF-matching (against an SVAR, as noted here), and iii) estimating directly from a BVAR posterior distribution.

I'd be grateful if you could point me to other methods that exist, are used, and are accepted as sound empirical validation of DSGE models. I would also appreciate your thoughts on the difficulty-robustness trade-off of each approach; for example, the BVAR approach may be the most robust, but it requires a very good understanding of some specialized statistical and numerical methods.

I have seen published DSGE papers whose calibration criterion is the model's ability to replicate second moments, at least in sign. Is this approach robust?

Also, is IRF-matching commonly used in journal-published DSGE papers?

(By “robust” I mean a method that is statistically valid, or at least widely accepted in the literature.)

Thanks, answers to this would really help me!


This is a good question, and I struggled with it for a bit. I think what your intuition is telling you is that there is a specific algorithm to do this “correctly”. I think I find this (Bayesian estimation and estimation of DSGE models, in general) to be more “art”, if that makes any sense; I don’t think there is a 100% proven correct way to do this. Ultimately, you are creating a stethoscope to diagnose your patient with. Two different doctors will read and interpret the stethoscope results differently. With that being said, Eric Sims’ notes are a great place to start:


You are alluding to the differences between full information and partial information approaches. Some of the discussion can be found at

Estimating the model with full information techniques requires you to fully specify the data generating process. That is often hard. For that reason, you still find papers using GMM/SMM or IRF-matching (potentially in the form of indirect inference).


That's a good point. Actually, having the model perform well against the empirical second moments would be enough, and I think I can accomplish that without resorting to a method that is currently out of my reach, such as Bayesian estimation.

If my objective is purely a good replication of the empirical second moments, would it suffice to do moment matching to a VAR directly, or would it be better to do IRF-matching?

It sounds as if you want to match the second moments, i.e. do moment matching. I don't know why you would want to match a VAR for that purpose.

Well, that's true. In order to do business-cycle moment matching, would it suffice to filter the cyclical component from the data, compute the second moments, and use some routine to minimize the difference between the theoretical moments and the ones I get from the data by changing the values of some parameters?

Also, what would be an efficient way of doing so? Some sort of gradient-based optimization comes to mind (as in machine learning). Do you know of an example or guide for such a procedure? Thanks!

Yes, that would work. Dynare’s unstable version has a GMM/SMM-tool to do that. See e.g. tests/estimation/method_of_moments/AnScho/AnScho_MoM.mod · master · Dynare / dynare · GitLab

You can use different optimizers. A gradient-based one often works well.
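In case it helps, below is a minimal "hand-made" sketch of that loop in Python. It uses the statsmodels HP filter on the data side; `model_moments` is a hypothetical placeholder for your own model solution or simulation, and the file name, variable names, and starting values are purely illustrative.

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize
from statsmodels.tsa.filters.hp_filter import hpfilter

# --- data side: extract the cyclical component and compute second moments ---
data = pd.read_csv("macro_data.csv")          # hypothetical file with y, c, i in logs
cycles = pd.DataFrame({v: hpfilter(data[v], lamb=1600)[0] for v in ["y", "c", "i"]})

data_moments = np.array([
    cycles["y"].std(),                        # std of output
    cycles["c"].std() / cycles["y"].std(),    # relative std of consumption
    cycles["i"].std() / cycles["y"].std(),    # relative std of investment
    cycles["y"].corr(cycles["c"]),            # corr(y, c)
    cycles["y"].corr(cycles["i"]),            # corr(y, i)
])

# --- model side: placeholder for your own solution/simulation code ---
def model_moments(theta):
    """Return the same vector of moments implied by the model at parameters theta,
    e.g. theoretical moments from your solved model, or moments of simulated and
    HP-filtered series. Hypothetical: fill in with your own code."""
    raise NotImplementedError

W = np.eye(len(data_moments))                 # identity weighting matrix (simplest choice)

def distance(theta):
    g = model_moments(theta) - data_moments   # gap between model and data moments
    return g @ W @ g                          # quadratic form to be minimized

theta0 = np.array([0.95, 0.01])               # illustrative start values (e.g. shock persistence and std)
result = minimize(distance, theta0, method="BFGS")   # gradient-based, numerical gradients
print(result.x)
```

The identity weighting matrix is only the simplest choice; weighting the moments by the inverse of their variances (or an optimal GMM weighting matrix) would be a natural refinement.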


Do you have any literature recommendation for clarifying the terminology, so that I can properly understand the difference between full- and partial-information approaches? Thanks!

This should be covered in e.g. Greene's econometrics textbook. Full-information maximum likelihood methods employ the likelihood function, which incorporates the full stochastic information of the model; there is nothing beyond the likelihood that you would need to know. Partial-information methods, in contrast, rely on only a subset of the information embedded in the likelihood.
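Schematically (my own summary, not a quote from Greene): a full-information estimator uses the entire likelihood,

$$\hat{\theta}_{\text{FIML}} = \arg\max_{\theta}\; \log \mathcal{L}\left(\theta \mid Y_{1:T}\right),$$

whereas a partial-information estimator such as GMM only imposes a chosen set of moment conditions $\mathrm{E}\left[g(y_t,\theta)\right]=0$ and minimizes

$$\hat{\theta}_{\text{GMM}} = \arg\min_{\theta}\; \bar g_T(\theta)'\, W\, \bar g_T(\theta), \qquad \bar g_T(\theta) = \frac{1}{T}\sum_{t=1}^{T} g(y_t,\theta),$$

so any information in the likelihood that is not reflected in $g(\cdot)$ is simply not used.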


Thanks! I was also checking the code that uses the unstable method_of_moments function in Dynare, but I'm not sure I'll be able to adapt it to my particular needs. Do you perhaps have an example of “hand-made” moment matching?

Besides, I still see the HP filter being used a lot for cycle extraction, but I wonder whether you still recommend this filter for that purpose, or whether we should use one of the other methods that have been proposed. I'd be very grateful for your comments on that, and also for any literature you could recommend on this problem of filter choice. Thanks!

  1. The file at DSGE_mod/Born_Pfeifer_RM_Comment.mod at master · JohannesPfeifer/DSGE_mod · GitHub contains a basic SMM.
  2. I have never recommended using the HP-filter for ML or Bayesian estimation (see e.g. Demean the series or not - #2 by jpfeifer). The only exception is indirect inference. See HP Filter simulated variables
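To make point 2 concrete, here is a minimal sketch of the indirect-inference logic in Python; `simulate_model` is a hypothetical placeholder for your own solved model or a Dynare simulation. The point is simply that the identical HP filter and the identical moment function are applied to both the observed and the simulated series, so whatever distortion the filter introduces affects both sides symmetrically.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def hp_moments(series, lamb=1600):
    """Second moments of the HP-filtered cyclical component of a single series."""
    cycle, _trend = hpfilter(series, lamb=lamb)
    return np.array([np.std(cycle), np.corrcoef(cycle[1:], cycle[:-1])[0, 1]])

def simulate_model(theta, periods):
    """Hypothetical: simulate the model at parameters theta (e.g. via your own
    state-space recursion or a call to Dynare) and return the series of interest."""
    raise NotImplementedError

def smm_distance(theta, observed, sim_periods=10_000):
    simulated = simulate_model(theta, sim_periods)
    gap = hp_moments(simulated) - hp_moments(observed)   # same filter on both sides
    return gap @ gap                                      # identity weighting matrix
```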

I'll check the code, thanks. On the other hand, as far as I understand, since I'm (most likely) using indirect inference, the choice of filter is mainly a matter of making the simulation and the data comparable; in that sense it is not strictly necessary to choose a “perfect” filter for my work, and the regular HP filter will work fine. Am I right?

On the other hand, when it comes to Bayesian estimation with Dynare, it is not completely clear to me, as in

whether few observations (~50 in my case, for about 7 variables) are too little for Bayesian estimation. For example, if I were to perform IRF-matching with an empirical VAR, having more than 6 variables would eat up my degrees of freedom quickly. What would the analogous drawbacks be in Bayesian estimation? And if I were to venture into this kind of estimation, what should I read first?
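(To put rough numbers on the degrees-of-freedom point, my own back-of-the-envelope count: an unrestricted VAR(p) with n variables has np + 1 coefficients per equation, so with n = 7 and, say, p = 2 that is already 7 × 2 + 1 = 15 coefficients per equation, and 7 × 15 = 105 in the whole system, against roughly 50 observations.)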

Thanks!!

  1. Yes, the filter is an auxiliary model.
  2. You may want to consult

Hi,

I have a question about the HP filter and the new GMM estimation toolbox provided in the Dynare 4.7 unstable version, which somehow relates to this thread.

Actually, I want to estimate the dynamic parameters of my model by minimizing the distance between a few second-order (theoretical) moments in my data and in my model. My idea was to compare filtered moments in both the data and the model (using the HP filter), because the moments reported in the literature are often based on this filter, and I wanted to have comparable values.

However, I am not sure how to minimize the distance between HP-filtered data moments and HP-filtered theoretical moments with the new method_of_moments command. The prefilter option only concerns the data, from what I understand. And I usually rely on the hp_filter option of the stoch_simul command (which no longer seems to work for theoretical moments with Dynare 4.7), but the method_of_moments command does not make use of it.

Is there a way to HP-filter the theoretical moments within the new method_of_moments command, or should I keep relying on my own code (which takes much longer to run)? If there is no way to do this with the new command, is there a good reason for that? I am aware that the use of the HP filter is much debated, but still, most of the recent papers that I read use this filter.

When I use the method_of_moments command, the theoretical second-order moments that I get are quite different from those obtained when applying the HP filter (the variances are much larger), so this is quite an issue.
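For what it is worth, the direction of the gap is what one would expect: for persistent series, most of the variance sits at low frequencies, which is exactly what the HP filter removes. A throwaway illustration with a simulated AR(1), purely to show the order of magnitude and not based on my actual model:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Persistent AR(1): most of its variance is concentrated at low frequencies.
rng = np.random.default_rng(0)
rho, T = 0.95, 10_000
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.standard_normal()

cycle, _trend = hpfilter(y, lamb=1600)
print(np.var(y), np.var(cycle))   # the unfiltered variance is considerably larger
```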

Many thanks!
