I have an estimation that only runs when I use a restrictive prior for one parameter. I am not sure about the mechanism behind this result or about its economic explanation.
What is the correct explanation to give in a paper or chapter to justify this restrictive prior choice?
I apologize in advance if my question has already been posted and answered.
The smaller the prior variance, the more a priori information you have relative to the sample information. By decreasing the prior variance, you also decrease the weight of the likelihood function in the posterior density. In the limit, as the prior variance converges to zero, you do not exploit any sample information to identify the parameter: you are doing calibration. Obviously, this is easier than ML estimation. All this is explained in the textbook by Arnold Zellner (1971, An Introduction to Bayesian Inference in Econometrics).
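This shrinkage-toward-the-prior effect can be illustrated with a simple conjugate normal-normal sketch (not from the thread; the numbers are made up for illustration): the posterior mean is a precision-weighted average of the prior mean and the sample mean, so as the prior variance goes to zero the posterior collapses onto the prior mean, i.e. calibration.

```python
# Conjugate normal-normal model: posterior mean is a precision-weighted
# average of the prior mean and the sample mean.

def posterior_mean(prior_mean, prior_var, sample_mean, sample_var, n):
    prior_prec = 1.0 / prior_var       # precision contributed by the prior
    data_prec = n / sample_var         # precision contributed by the data
    return (prior_prec * prior_mean + data_prec * sample_mean) / (prior_prec + data_prec)

# Suppose the data point to 0.9 while the prior is centered at 0.5.
# As the prior variance shrinks, the likelihood loses all weight:
for prior_var in (1.0, 0.01, 1e-8):
    print(prior_var, posterior_mean(0.5, prior_var, 0.9, 1.0, 50))
```

With a diffuse prior the posterior mean is close to the sample mean (about 0.89 here); with `prior_var = 1e-8` it is essentially the prior mean 0.5, regardless of the data.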
Thanks, I understand better now :). I will take a look at this book.
In my example, I moved from a beta distribution with parameters (0.5, 0.1) to a beta distribution with parameters (0.5, 0.05).
When you say
what is the problem you are facing? Changing the prior is not OK if it masks deeper underlying issues.
I agree with you. In my final estimation I kept the "classic" distribution after solving a problem in my model. However, this manipulation showed me where the problem came from :).