If my model’s impulse responses look very strange and deviate severely from reality, can Bayesian estimation with data and relatively large prior standard deviations help me revise them?
Generally, if the model does not make sense even when you pick reasonable parameter values yourself, having the data select the parameter values will not help. Most of the time, non-sensible results from a calibrated model indicate a problem with the model setup.
Reading about global sensitivity analysis, I have found that individual parameters can have a large impact on model behavior. When calibrating by hand, I often only know a very wide range for a parameter and lack a strong prior belief about its true value. Moreover, applying global sensitivity analysis to a large model with many parameters is complex, and even when results are available, interpreting them and pinpointing the problem can be complicated. Given this, and with the connection between DSGE models and VARs in mind, I am curious whether the data could automatically give me a reasonable ‘calibration’ of the parameters.
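To make the kind of screening I have in mind concrete, here is a toy Python sketch of Monte Carlo filtering, the basic idea behind global sensitivity analysis: draw parameters from wide ranges, flag the draws whose impulse responses look ‘strange’, and compare the parameter distributions across the two groups. The AR(2) model and every name in it are purely illustrative, not taken from my actual model or from any GSA toolbox.

```python
import numpy as np

rng = np.random.default_rng(0)

def irf(phi1, phi2, horizon=20):
    """Impulse response of y_t = phi1*y_{t-1} + phi2*y_{t-2} + e_t to a unit shock."""
    y = np.zeros(horizon)
    y[0] = 1.0
    for t in range(1, horizon):
        y[t] = phi1 * y[t - 1] + (phi2 * y[t - 2] if t >= 2 else 0.0)
    return y

# Draw candidate parameters uniformly over wide "calibration" ranges.
n_draws = 5000
phi1 = rng.uniform(0.0, 1.5, n_draws)
phi2 = rng.uniform(-0.9, 0.5, n_draws)

# "Behavioral" draws: the response neither explodes nor turns negative
# (a stand-in for whatever one considers a non-strange impulse response).
behavioral = np.empty(n_draws, dtype=bool)
for i in range(n_draws):
    resp = irf(phi1[i], phi2[i])
    behavioral[i] = np.all(np.abs(resp) < 10.0) and np.all(resp >= -1e-8)

# Compare the parameter distributions across the two groups; a large gap
# points to the parameter driving the strange behavior. (A real GSA toolbox
# would compare the full empirical CDFs, e.g. with a Smirnov test.)
for name, draws in [("phi1", phi1), ("phi2", phi2)]:
    print(f"{name}: mean given behavioral = {draws[behavioral].mean():.2f}, "
          f"mean given non-behavioral = {draws[~behavioral].mean():.2f}")
```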
If your model’s prior support includes the range of plausible outcomes, you can of course try estimation and see what it gives you.
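As a purely illustrative sketch of what a ‘relatively large prior standard deviation’ does, here is a conjugate normal-normal update in Python (not a DSGE estimation, and none of the numbers come from your model): with a tight prior the posterior stays near the calibrated value, while a loose prior lets the data dominate. The same logic carries over to the posterior in a full estimation, but only within what the model structure itself allows, which is why estimation cannot fix a mis-specified model.

```python
import numpy as np

def posterior(prior_mean, prior_sd, data, obs_sd):
    """Posterior mean and sd of a normal mean with known observation noise."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = len(data) / obs_sd**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
    return post_mean, np.sqrt(post_var)

rng = np.random.default_rng(1)
# Pretend the "true" value is 0.9 while the calibration guess is 0.5.
data = rng.normal(loc=0.9, scale=0.2, size=50)

for prior_sd in (0.01, 0.5):
    m, s = posterior(prior_mean=0.5, prior_sd=prior_sd, data=data, obs_sd=0.2)
    print(f"prior sd = {prior_sd}: posterior mean = {m:.3f}, posterior sd = {s:.3f}")
# A tight prior keeps the posterior mean near the calibrated 0.5; a loose
# prior lets it move to roughly the sample mean, around 0.9.
```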