In a linearized DSGE model, after arriving at a state-space representation, we usually have matrices whose elements are functions of several structural parameters.
Also, in most economics papers that do Bayesian estimation, typically via a Metropolis-Hastings algorithm, I see many authors put priors on the structural parameters (this seems to help with the economic interpretation) instead of on the matrices of the state-space model, and the proposal chosen for the parameters is most of the time a multivariate t-distribution, where the location parameter of this distribution follows \theta_{t}=\theta_{t-1}+\frac{\partial}{\partial \theta}\, p(\mathbf{y}\mid\theta_{t-1}).
I have the following questions:

The likelihood function may be very ‘complex’, and its complexity grows with the number of parameters and the sample size. How does one compute the above derivative for the proposal? Is it feasible to just define p(\mathbf{y}\mid\theta)=\text{Likelihood}(\theta) and then try to maximise it numerically?

The chosen proposal does not guarantee that the parameters drawn from it will obey the linearized DSGE, let alone the original DSGE; i.e., when drawing from the proposal I may get several inconsistent values. How does one make sure to get the right draws?
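To make the first question concrete: maximising the likelihood numerically typically relies on finite-difference approximations of its gradient. A minimal sketch of a central-difference gradient (the log-likelihood here is a toy placeholder, not an actual DSGE likelihood, which would involve running the Kalman filter on the state-space form):

```python
import numpy as np

def log_likelihood(theta):
    # Toy stand-in; in practice this would evaluate the Kalman-filter
    # likelihood of the state-space model implied by theta.
    return -0.5 * np.sum((theta - 1.0) ** 2)

def numerical_gradient(f, theta, h=1e-6):
    """Central finite-difference approximation of the gradient of f."""
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = h
        grad[i] = (f(theta + step) - f(theta - step)) / (2.0 * h)
    return grad

theta0 = np.array([0.5, 2.0])
g = numerical_gradient(log_likelihood, theta0)
# Exact gradient of the toy likelihood is 1 - theta, i.e. [0.5, -1.0].
```

The same numerical gradient can be fed to any quasi-Newton optimiser to locate the likelihood (or posterior) mode.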
Hi,
I do not know the papers you are referring to. In Dynare the default jumping distribution is the Gaussian distribution centred on the previous state of the chain and with a covariance matrix given by the inverse of the Hessian matrix of the posterior kernel evaluated at the posterior mode.
Optionally you can use a multivariate Student distribution (see the posterior_sampler_options
option in the reference manual). Why do you need the gradient of the likelihood here?
Obviously, ideally we should use a proposal distribution as close as possible to the targeted posterior distribution… But that is not possible, since the targeted distribution is unknown. The Metropolis-Hastings algorithm defines a stochastic process whose ergodic distribution is the posterior distribution. This works with a Gaussian proposal (jumping distribution), even if the posterior is non-Gaussian. All this is nicely explained in Tierney (The Annals of Statistics, 1994).
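The random-walk scheme described above can be sketched as follows; the posterior kernel and the mode/Hessian are toy placeholders (a standard bivariate Gaussian), not an actual DSGE posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta):
    # Toy posterior kernel (log-likelihood + log-prior, up to a constant).
    return -0.5 * np.dot(theta, theta)

# Covariance of the jumping distribution: inverse Hessian of the negative
# posterior kernel at the mode (identity for this toy kernel).
Sigma = np.eye(2)
chol = np.linalg.cholesky(Sigma)

theta = np.zeros(2)          # start at the posterior mode
draws = []
accepted = 0
for _ in range(5000):
    proposal = theta + chol @ rng.standard_normal(2)   # Gaussian jump
    log_alpha = log_posterior(proposal) - log_posterior(theta)
    if np.log(rng.uniform()) < log_alpha:              # accept/reject step
        theta = proposal
        accepted += 1
    draws.append(theta)
draws = np.array(draws)
```

No gradient of the likelihood appears anywhere: the proposal is centred on the previous state of the chain, and the accept/reject step alone makes the ergodic distribution the posterior.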
Best,
Stéphane.
Stéphane thanks for the prompt reply.
So, let’s work with the Gaussian. The support of the Gaussian is the whole space (R, or R^n in the multivariate case).
However, the economic structural parameters will not be ‘living’ in the whole space, only in a very specific region. So the Gaussian may draw from the ‘forbidden’ (outside) region, where the draws have no economic/physical sense. How does one proceed in this case?
These draws are automatically discarded in the Metropolis-Hastings iterations, since the prior density is zero in these regions (provided you carefully choose your priors to be consistent with what economic theory suggests).
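A minimal sketch of this mechanism, with a toy scalar parameter restricted to (0, 1) (think of a discount factor); the prior and likelihood are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_prior(theta):
    # Flat prior on (0, 1); log-density is -inf outside the admissible region.
    return 0.0 if 0.0 < theta < 1.0 else -np.inf

def log_likelihood(theta):
    return -0.5 * (theta - 0.9) ** 2  # toy likelihood

def log_kernel(theta):
    return log_prior(theta) + log_likelihood(theta)

theta = 0.5
for _ in range(1000):
    proposal = theta + 0.3 * rng.standard_normal()
    # If the proposal falls outside (0, 1), log_kernel is -inf, the
    # acceptance probability is exactly zero, and the draw is discarded.
    if np.log(rng.uniform()) < log_kernel(proposal) - log_kernel(theta):
        theta = proposal
```

The chain never leaves the admissible region, even though the Gaussian proposal has support on the whole real line.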
Best,
Stéphane.
That being said, the Metropolis-Hastings algorithm often performs better if you reparameterize your model to have the unbounded support you mention. For example, if you have a parameter that can only be positive, estimating the log of the parameter instead, with a Gaussian proposal, circumvents the issue that you mention. This is discussed e.g. in
Adolfson, Malin, Jesper Lindé, and Mattias Villani (2007). “Bayesian analysis of DSGE models – some comments”. Econometric Reviews 26 (2–4), 173–185.
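A sketch of this reparameterization for a positive parameter (say a shock standard deviation sigma); the kernel is a toy placeholder, and note the Jacobian term from the change of variables sigma = exp(phi):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_kernel(sigma):
    # Toy posterior kernel for a positive parameter:
    # exponential prior times a Gaussian toy likelihood.
    return -sigma - 0.5 * (sigma - 1.0) ** 2 if sigma > 0 else -np.inf

def log_kernel_phi(phi):
    # Kernel in phi = log(sigma); the +phi term is the log-Jacobian
    # of the transformation sigma = exp(phi).
    return log_kernel(np.exp(phi)) + phi

phi = 0.0
for _ in range(2000):
    proposal = phi + 0.5 * rng.standard_normal()  # unbounded Gaussian jump
    if np.log(rng.uniform()) < log_kernel_phi(proposal) - log_kernel_phi(phi):
        phi = proposal

sigma = np.exp(phi)  # positive by construction
```

Every proposal is admissible, so no draws are wasted on the forbidden region; the only subtlety is remembering the Jacobian term when working with the transformed parameter.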
That’s very interesting! It should increase the efficiency of the MH algorithm; at least intuitively that’s what I would say. Thanks for the reference.