# Prior distribution problem

Hello.
As far as I understand, the following line
"stderr e_z, inv_gamma_pdf, 1, 1"
means that we set an inverse gamma distribution with mean 1 and standard deviation 1 as the prior for the standard deviation of e_z. Are there any restrictions on the parameter values of the inverse gamma distribution (other than mu, std > 0)?
The thing is that when I add a new parameter that is absent from the system of equations, I expect the posterior distribution of this parameter to equal its prior. When I set the parameters of the inverse gamma prior to 1 and 4 respectively, I get what I expected. But if I set the parameters to 0.00125 and 4 respectively, the posterior variance of the parameter of interest is far less than 16 (approximately 10^(-7)). I ran 500,000 MH iterations.
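For intuition, here is a Python sketch of the mean/std to shape/scale mapping for a plain inverse-gamma distribution (an assumption for illustration: Dynare's inv_gamma_pdf has its own internal parameterization, and this is not Dynare code). It shows why mean 0.00125 with std 4 is an extreme choice: the implied shape parameter is only barely above 2, so the prior's variance is only just finite and its tails are very heavy, which makes it hard to sample from.

```python
def inv_gamma_shape_scale(mean, std):
    """Shape/scale (alpha, beta) of a plain inverse gamma IG(alpha, beta)
    with the given mean and standard deviation.
    Uses: mean = beta/(alpha-1) for alpha > 1,
          var  = mean^2/(alpha-2) for alpha > 2."""
    alpha = (mean / std) ** 2 + 2.0
    beta = mean * (alpha - 1.0)
    return alpha, beta

# Prior with mean 1, std 4: shape comfortably above 2.
a1, b1 = inv_gamma_shape_scale(1.0, 4.0)        # alpha = 2.0625
# Prior with mean 0.00125, std 4: shape exceeds 2 by less than 1e-7,
# i.e. the variance is barely finite and the tails are extremely heavy.
a2, b2 = inv_gamma_shape_scale(0.00125, 4.0)
```

A random-walk sampler needs very many draws to explore such a heavy-tailed distribution, so sample moments can stay far from the theoretical ones even after hundreds of thousands of iterations.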
Thank you in advance
Excuse me for my English

I think you are not understanding the concepts behind Bayesian estimation. The posterior is a weighted average of the prior and the ML estimate, with the weights determined by their relative precisions. The posterior will only equal the prior if the data are uninformative, if the prior is very strong, or if the prior happens to equal the ML estimate. Moreover, the prior must be independent of the data, so setting the prior equal to the posterior is not allowed.
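The weighted-average logic can be made concrete in the conjugate normal-normal case (a minimal sketch of the general principle, not the DSGE posterior):

```python
def normal_posterior(prior_mean, prior_var, ml_mean, ml_var):
    """Posterior mean/variance when a normal prior meets a normal
    likelihood: the posterior mean is the precision-weighted average
    of the prior mean and the ML estimate."""
    w_prior = 1.0 / prior_var   # precision of the prior
    w_data = 1.0 / ml_var       # precision of the ML estimate
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * ml_mean)
    return post_mean, post_var

# Informative data (small ml_var) pull the posterior toward the ML estimate:
m, v = normal_posterior(0.0, 1.0, 2.0, 0.25)   # mean 1.6, variance 0.2
```

As ml_var grows (uninformative data), w_data shrinks and the posterior collapses back to the prior, which is the point at issue in this thread.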

Regarding the restrictions:

An additional restriction in Dynare is that the variance needs to be finite.
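That restriction can be checked directly for a plain inverse gamma with shape alpha and scale beta (a hypothetical helper for illustration, not part of Dynare): the variance is finite only when alpha > 2.

```python
import math

def inv_gamma_variance(alpha, beta):
    """Variance of a plain inverse gamma IG(alpha, beta).
    Infinite unless the shape parameter exceeds 2."""
    if alpha <= 2.0:
        return math.inf
    mean = beta / (alpha - 1.0)
    return mean ** 2 / (alpha - 2.0)

v_ok = inv_gamma_variance(3.0, 2.0)   # mean 1, variance 1
v_inf = inv_gamma_variance(1.5, 1.0)  # shape <= 2: infinite variance
```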

What would the posterior distribution of a parameter (let's call it Q) be if its prior is h(q) and my system of equations does not contain this parameter, but the optimization still takes it into account? I think I should get a posterior for this parameter equal to h(q). So we come back to my question: what is wrong with the inverse gamma prior? Because I did not get the posterior parameters I expected.

Excuse me for my English
Thank you in advance

If your parameter does not show up in the model, then you should indeed get a posterior that looks like the prior, because the data are uninformative about it. If that is not the case, your Metropolis-Hastings algorithm most probably has not yet converged and is not correctly sampling from the posterior.
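One way to see this is a toy random-walk Metropolis run with a flat likelihood, so the target is the prior itself (a sketch using an illustrative N(0,1) prior rather than the inverse-gamma case): once the chain mixes, the sampled moments match the prior's moments.

```python
import math
import random

def rw_metropolis(logpdf, x0, n_iter, step, seed=0):
    """Random-walk Metropolis: propose x' = x + step*eps and accept
    with probability min(1, p(x')/p(x))."""
    rng = random.Random(seed)
    x, lp = x0, logpdf(x0)
    draws = []
    for _ in range(n_iter):
        prop = x + step * rng.gauss(0.0, 1.0)
        lp_prop = logpdf(prop)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        draws.append(x)
    return draws

# Flat likelihood: the posterior is just the prior, here N(0, 1).
draws = rw_metropolis(lambda x: -0.5 * x * x, x0=0.0, n_iter=50_000, step=2.0)
burn = draws[10_000:]                      # discard burn-in
mean = sum(burn) / len(burn)
var = sum((d - mean) ** 2 for d in burn) / len(burn)
```

With a well-behaved target this recovers mean approximately 0 and variance approximately 1; with a barely-finite-variance prior the same sampler can badly understate the variance for a very long time, which matches the symptom described above.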