Hi,

To be brief: if I set alpha=0.351 (alpha being the capital share) and calibrate my model, two of my Lagrange multipliers come out at 0.0036 and 0.0143. With alpha=0.34, they become 0.0085 and 0.0192. The more I decrease alpha, the larger the multipliers become. I would prefer alpha to be as high as possible to stay in line with other DSGE papers and micro estimates, but the higher it gets, the closer the Lagrange multipliers move to 0 (alpha=0.3568 is the ceiling, as it drives one of the multipliers to almost 0). I assume that very small Lagrange multipliers mean the corresponding restrictions barely bind in the data and are therefore weak, hard-to-justify restrictions (am I right?). Is there a general unwritten rule for the minimum acceptable value of a Lagrange multiplier? I don't know the multipliers in the main DSGE papers to compare mine to. So I seem to be stuck both ways: if I decrease alpha, it moves further away from its empirically plausible value; if I increase alpha, the two restrictions become weaker and harder to justify.

Which Lagrange multipliers are you talking about? The only true restriction is that they need to be positive. Their size is not an economic issue but a numerical one: you will run into numerical problems if a multiplier gets too close to zero. Note that the units of a Lagrange multiplier depend on the setup of the model. Sometimes you can avoid issues by rescaling them.

Thank you, Dr. Pfeifer. The two Lagrange multipliers belong to a working-capital finance constraint and a loan collateral constraint; the working-capital one is the smaller of the two, although both are very small. Can you please explain what kind of numerical problems my model can run into? I have been able to estimate my model with Bayesian methods, but, for example, habit formation in consumption is estimated at 0.26, which is much smaller than in the prevalent papers, and the Rotemberg parameter, which is usually around 37, is estimated at about 10. That said, I have included a novel monetary policy transmission channel in the model, which I suspect may be why these two estimates come out smaller than in other DSGE papers. Also, can you please explain how to rescale the constraints to increase the Lagrange multipliers' values? Should I simply multiply the restriction by 0.02 so that the Lagrange multiplier becomes 50 times larger? Will that rescaling change the Bayesian estimation results, in the hope of getting higher habit formation and Rotemberg parameter values (that is, if there is anything wrong with a habit formation of 0.26 and a Rotemberg parameter of 10)? And apart from the results, can rescaling speed up the estimation? (Right now it takes 12 minutes to find the modes and a further 3.5 hours for the MCMC.)
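To sanity-check the rescaling arithmetic in the question above: in a Lagrangian, replacing a constraint g(x)=0 with c*g(x)=0 leaves the solution unchanged but rescales the multiplier to lam/c, so multiplying a restriction by 0.02 does make its multiplier 50 times larger. A minimal sketch on a toy one-variable problem (purely illustrative, not the DSGE model discussed here):

```python
# Toy problem: maximize f(x) = -(x - 2)^2 subject to g(x) = x - 1 = 0.
# The first-order condition of L = f(x) + lam * c * g(x) is
#   f'(x) + lam * c = 0,  with the constraint pinning down x = 1.

def multiplier(c):
    """Multiplier when the constraint is written as c * (x - 1) = 0."""
    x = 1.0                     # the constraint forces x = 1 regardless of c
    f_prime = -2.0 * (x - 2.0)  # f'(x) = -2(x - 2) = 2 at x = 1
    return -f_prime / c         # FOC: f_prime + lam * c = 0

lam_original = multiplier(1.0)   # constraint written as       (x - 1) = 0
lam_rescaled = multiplier(0.02)  # constraint written as  0.02*(x - 1) = 0

print(lam_original)                 # -2.0
print(lam_rescaled)                 # -100.0
print(lam_rescaled / lam_original)  # 50.0 -- multiplier scales by 1/c
```

The key point is that the constrained optimum itself (here x = 1) is identical under both scalings; only the units in which the multiplier is reported change, which is why such a rescaling can ease numerical problems without altering the economics.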

Another question: I spent a lot of time tweaking prior distributions and changing observable variables in order to get the estimation to work (mode_compute=4). Is it possible that, with a lot more time, I could find another set of priors that yields higher habit formation and Rotemberg parameter values, or would that be a waste of time?

Also, one of the important shocks in the estimation (out of 11 shocks) is estimated to have a standard deviation of 0.022. But when I simulate the model with all the estimated parameters and hit it with only that one shock, at the size of the previously estimated standard deviation, the dynamics are explosive and require pruning. Is it normal that a shock estimated (alongside the others) to have an SD of 0.022 produces explosive dynamics when applied on its own in another file? Shouldn't the explosive dynamics show up in the estimation process itself? Or do the other 10 shocks somehow cancel out each other's explosive dynamics, so that the lone shock's explosiveness never becomes a problem during estimation?
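For intuition on the pruning question: in a higher-order approximation, the quadratic terms feed the simulated state back into itself, so a path can explode for a shock size that is perfectly harmless at first order, while pruning keeps the simulation bounded by never letting the higher-order terms re-enter the quadratic part. A toy scalar sketch (hypothetical coefficients rho, a, and shock size chosen for illustration, not taken from the model above):

```python
# Toy second-order-style law of motion (illustrative only):
#   x_t = rho * x_{t-1} + a * x_{t-1}^2 + shock_t
# A large enough shock makes the unpruned recursion explode,
# while the pruned version stays bounded.

rho, a = 0.9, 0.5
shock = 3.0   # one-off shock in period 0, zero afterwards
T = 50

# Unpruned: the full state feeds into the quadratic term.
x = 0.0
for t in range(T):
    e = shock if t == 0 else 0.0
    x = rho * x + a * x * x + e
unpruned = x

# Pruned: keep a separate first-order state x1 and square only x1,
# so higher-order terms never re-enter the quadratic part.
x1, x2 = 0.0, 0.0
for t in range(T):
    e = shock if t == 0 else 0.0
    x1_new = rho * x1 + e           # first-order part
    x2_new = rho * x2 + a * x1 * x1  # second-order part uses x1 only
    x1, x2 = x1_new, x2_new
pruned = x1 + x2

print(unpruned)  # inf -- the unpruned path explodes
print(pruned)    # a small, stable number
```

This also suggests why the issue can stay hidden during estimation: if the solution used there is linear (or pruned), no simulated path ever explodes, so the likelihood gives no warning about what the same shock does in an unpruned higher-order simulation.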