Measurement error always hits upper bound

Dear Johannes,

Whenever I add measurement errors as shocks to whatever model I estimate, the posterior standard deviation of the measurement error always hits the upper bound.
In such a scenario, mode_compute=4 or 9 sometimes fails because the Hessian is not positive definite. Then the only way I can get the estimation to run is mode_compute=6, which uses an MCMC to find the mode. If the MCMC chains eventually converge, are the results reliable? Please find attached three convergence figures, one posterior plot, and one mode-check plot.

Best regards,
Huan
twoshocks4me_CheckPlots1.pdf (15.2 KB)
twoshocks4me_udiag3.pdf (18.7 KB)
twoshocks4me_udiag2.pdf (23.7 KB)
twoshocks4me_udiag1.pdf (24.1 KB)
twoshocks4me_PriorsAndPosteriors1.pdf (61.4 KB)

Unfortunately, this is a common problem. Any model misspecification is likely to turn up in the measurement error. Because most models are severely misspecified, the estimated measurement error tends to be large, and it regularly hits the upper bound if that bound is set relatively low. This happens, for example, in the Schmitt-Grohe/Uribe (2012) “What’s news in business cycles” paper and in the Garcia-Cicco et al. (2010) AER paper with the measurement error specification used there (see the note at github.com/JohannesPfeifer/DSGE_mod/tree/master/GarciaCicco_et_al_2010).

However, even if the mode is at the boundary, the posterior parameter distribution is typically not degenerate. The MCMC, when started with any valid covariance matrix, will correctly sample from the posterior. It may just be inefficient, i.e. take many draws to achieve convergence. Instead of mode_compute=6, you may want to directly use the mcmc_jumping_covariance option (see the manual).

Another way around this is to use a more informative prior on the measurement error that pushes the posterior towards the lower bound.
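To make both suggestions concrete, here is a minimal sketch in Dynare .mod syntax. All names (eps_me for the measurement-error shock, gy_obs for the observable, mydata for the datafile) are placeholders, not from this thread; adjust prior values to your own units:

```
// Hypothetical sketch, not from the original thread.

// (1) A tighter (more informative) prior on the measurement-error
//     standard deviation, pulling the posterior away from the bound:
estimated_params;
stderr eps_me, inv_gamma_pdf, 0.01, 0.005;
end;

// (2) Skip mode-finding and launch the MCMC directly, scaling the
//     proposal density with the prior variance instead of a Hessian:
estimation(datafile=mydata, mode_compute=0,
           mcmc_jumping_covariance=prior_variance,
           mh_replic=500000, mh_nblocks=2, mh_jscale=0.3) gy_obs;
```

With mode_compute=0 and no mode_file, the chain starts from the initial parameter values, so expect a longer burn-in and tune mh_jscale for a reasonable acceptance rate.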

Many thanks, Johannes.

If I do model comparison based on the log data density, but one model has a measurement error as a shock while the other has no measurement error, would that be a problem?

No problem. The measurement error is part of the model, and nothing in model comparison requires nested models. As long as the data is the same, there is no issue.
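For reference, the comparison itself can be sketched as follows (a hypothetical MATLAB fragment; the numbers are placeholders for the values Dynare stores in oo_.MarginalDensity after each estimation):

```
% Hypothetical sketch. After estimating each model on the same datafile,
% read off the log data density Dynare stores, e.g.
%   oo_.MarginalDensity.ModifiedHarmonicMean
% The two numbers below are placeholders, not real results.
logML_with_me    = -1050.2;   % model with measurement error
logML_without_me = -1063.7;   % model without measurement error

% Bayes factor in favor of the model with measurement error
% (posterior odds under equal prior model probabilities):
bayes_factor = exp(logML_with_me - logML_without_me);
```

The model with the higher log data density is preferred; differences of a few log points already imply strong posterior odds.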

Hi Johannes,
In terms of mcmc_jumping_covariance, should I use it like this (I have already run the MCMC once)?

estimation(mode_compute=0, mcmc_jumping_covariance=hessian, mode_file=name_mh_mode, mh_replic=1000000, ...)?

Thanks,
Huan

That combination is unusual. If you already have a mode-file with a Hessian, using mode_compute=0 with a mode_file already does the trick without specifying mcmc_jumping_covariance.
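In other words, something along these lines should suffice (file and variable names are placeholders for your own model):

```
// Hypothetical sketch: reuse the mode and Hessian from an earlier run.
// mymodel_mode refers to the *_mode.mat file a previous estimation saved;
// Dynare then builds the jumping covariance from the stored Hessian.
estimation(datafile=mydata, mode_compute=0, mode_file=mymodel_mode,
           mh_replic=1000000, mh_nblocks=2) gy_obs;
```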

Dear Johannes,

Since it is common that the measurement error hits the upper bound, are the shock_decomposition results still reliable in that case? (If you change the measurement error's upper bound, would the shock decomposition change correspondingly?)

Thanks in advance.
Huan

Dear Huan,
the shock_decomposition will be as reliable as your whole estimation. If you worry that the measurement error is implausibly large, you should not trust any results, including the shock_decomposition.
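For completeness, a minimal sketch of how such a decomposition is typically requested after estimation (the observable name is a placeholder):

```
// Hypothetical sketch: decompose the observable into shock
// contributions at the posterior mean of the estimated parameters.
shock_decomposition(parameter_set=posterior_mean) gy_obs;
```

The measurement error then shows up as one of the contributing shocks, so a large estimated measurement error directly translates into a large unexplained component in the decomposition.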