Estimating unused parameters improves the estimation?!?

Hi,
this is the first time I am actively using this forum, so I would like to start by saying that it is really helpful! I found that most problems I have had were already discussed by someone else. :smiley:

Here is my problem/question: I estimate a large model with about 56 equations and more than 40 parameters. I need to use the Monte Carlo based optimization routine (mode_compute = 6) to find starting values for the MH. The model has several shocks, and after a first estimation I set those that seemed to be of no importance to zero. However, when estimating the model I forgot to drop the AR(1) parameters of 3 shocks that are set to 0 (so I estimated 3 unused parameters). After I realized this, I repeated the estimation without these parameters, and the results changed dramatically!? In fact, the results with the unused parameters are much better! I checked the code several times to be sure that the parameters really are unused (they only appear multiplied by a zero shock).
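To make this concrete, here is a minimal sketch of the kind of setup I mean (the shock and parameter names, priors, and option values are just placeholders, not my actual model):

```
shocks;
var e_a; stderr 0;   // shock switched off after the first estimation
end;

estimated_params;
// rho_a is the AR(1) coefficient of the process driven by e_a; with
// stderr(e_a) fixed at 0 it has no effect on the model dynamics
rho_a, beta_pdf, 0.5, 0.2;
stderr e_b, inv_gamma_pdf, 0.01, inf;
end;

estimation(datafile=mydata, mode_compute=6, mh_replic=50000, mh_nblocks=2);
```

Keeping rho_a in (or dropping it from) the estimated_params block is the only difference between the two runs.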

Now I have 3 questions:

  1. Does anyone have an explanation for this? I thought this would not change the estimation of any other parameter, since varying the unused parameters does not change anything in the model dynamics.

  2. Is it possible that this improves the starting values somehow, so that the MH converges to a different result?

  3. Most importantly: is the estimation reliable? I was really happy with the result! :slight_smile:

Thanks for any comments/suggestions/solutions.

Best,
Michael

[quote="Michael Paetz"]1. Does anyone have an explanation for this? I thought this would not change the estimation of any other parameter, since varying the unused parameters does not change anything in the model dynamics.[/quote]

The presence of additional (unused) parameters will modify the progression of Metropolis iterations. The priors for the unused parameters will modify the posterior. The number of additional parameters will change the constant in the computation of the likelihood.
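To see why, a sketch assuming the priors are independent and proper: if $\theta$ collects the parameters that actually enter the likelihood and $\rho$ the unused AR(1) coefficients, the posterior factorizes as

$$
p(\theta,\rho \mid Y) \;\propto\; p(Y \mid \theta)\,p(\theta)\,p(\rho),
$$

so the marginal posterior of $\theta$ is unchanged in theory. What does change is the path of the sampler through the enlarged parameter space and the dimension-dependent constants in the approximation of the marginal data density (e.g. the Laplace approximation).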

[quote="Michael Paetz"]2. Is it possible that this improves the starting values somehow, so that the MH converges to a different result?[/quote]

This will not modify the starting values, only the progression of the iterative procedure. Be careful about the criteria you use to decide that one result is better than the other.

[quote="Michael Paetz"]3. Most importantly: is the estimation reliable? I was really happy with the result! :slight_smile:[/quote]

Probably not. At the very least, you need to run the convergence diagnostics (with at least two chains).
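For example (a sketch; the datafile name and the option values are placeholders), run the Metropolis-Hastings with at least two chains:

```
estimation(datafile=mydata, mode_compute=6,
           mh_replic=200000, mh_nblocks=2, mh_jscale=0.3);
```

With mh_nblocks set to 2 or more, Dynare reports the univariate and multivariate convergence diagnostics (Brooks and Gelman, 1998), which you should inspect before trusting either set of results.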

Best

Michel

Thanks, I will check this!