Prior distribution and strange simulation results

Dear professor jpfeifer,

I wrote a paper on unemployment dynamics and constructed an open-economy DSGE model. The total number of workers is normalized to 1, steady-state employment is calibrated to 0.96, and steady-state unemployment is therefore 0.04.

I am trying to estimate the variances of the shocks using Bayesian methods, but I ran into problems that I cannot handle:

As far as I know, most papers set the standard deviations of shocks to 1%, so I set the prior distributions as follows: stderr e_f, inv_gamma_pdf, 0.01, 2; where e_f is the world interest rate shock (the gross rate should be larger than 1); I use this shock just as an example. What surprises me are the results of a 400-period simulation.

I find that total employment is sometimes larger than 1, so unemployment is sometimes negative; the gross foreign interest rate is sometimes less than 1; even output sometimes takes negative values. All the level values of the variables are wrong.

If I instead set the prior distributions as follows: stderr e_f, inv_gamma_pdf, 0.0001, 0.01; then all the level values of the variables are fine: employment is less than 1, output and unemployment are positive, and the gross interest rate is always larger than 1.

I think this indicates that the model does not allow for large volatility. Is it because the steady-state values leave no room for large volatility? Can you explain the problem I ran into? I think something is seriously wrong, but I really do not understand it.

This problem has confused me for more than two weeks, and I hope for your kind help.

Thank you very much, professor.

Best wishes.

The problem is the solution technique (perturbation), where you are approximating the model with a polynomial. There is no way to restrict a polynomial to be bounded. So as long as you work with perturbation techniques, there is always some probability of simulations violating any bound. The problem gets bigger the more volatile the shocks are, as they put you further away from the steady state.
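The point about unbounded polynomial decision rules can be seen in a minimal sketch (hypothetical numbers: steady-state employment 0.96 from the question, an assumed AR(1) persistence of 0.9): a first-order approximation never enforces the bound n ≤ 1, so with 1% innovations the simulated level eventually crosses it, while with 0.01% innovations it stays far from the bound.

```python
import numpy as np

def simulate_employment(sigma, T=400, rho=0.9, n_ss=0.96, seed=0):
    """First-order (linear) approximation of employment around its
    steady state: n_t - n_ss = rho*(n_{t-1} - n_ss) + eps_t.
    Nothing in this polynomial rule enforces the bound n_t <= 1."""
    rng = np.random.default_rng(seed)
    dev = 0.0
    path = np.empty(T)
    for t in range(T):
        dev = rho * dev + rng.normal(0.0, sigma)
        path[t] = n_ss + dev
    return path

big = simulate_employment(sigma=0.01)      # ~1% innovations: bound violated
small = simulate_employment(sigma=0.0001)  # ~0.01% innovations: far from bound
```

With rho = 0.9 the unconditional standard deviation is sigma / sqrt(1 - rho**2), about 0.023 for sigma = 0.01, so the bound at 1 is less than two standard deviations away and 400 periods almost surely hit it.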

Thank you so much, professor. I still don’t know how to handle this problem. Can you give me some advice? Should I log-linearize the model instead of using level variables? Please teach me how to handle this problem that has confused me for a long time.

There is not really anything you can do about this. If you cannot reduce the shock size and the number of problematic observations is too high, you can only try to use a more accurate solution technique.

Thank you very much, professor. Can you tell me why I ran into this problem? I see that most papers can set the standard deviations of shocks to 1 percent or even bigger. Why can’t I? I believe the model setup is quite standard. Can you please explain my problem in more detail? Thank you.

It depends on the shock size and the model’s amplification of shocks. For some models there is no issue; in others you will hit the variable bounds.

Thank you, professor. May I ask you some more questions?

First, I see that papers set different standard deviations for identical shocks. For example, the standard deviation of the technology shock is set to 1%, 50%, or even more than 100%. Does this mean there is no consensus value and the standard deviations of shocks can be set to different values in different models? In my case, can I set the standard deviation of the shocks to 0.01% (a very small value) to avoid problematic observations? Does this make sense?

Second, I see this statement in a paper: “We calibrate the standard deviation of the technology processes such that the model replicates the observed volatility of output during our sample period.” How is this calibration done? I really want to estimate my model using Bayesian methods so that it fits the real data well, I mean the moments. Can you give me some suggestions on model fit?

Thank you so much, professor; I always appreciate your kind help.

  1. There are two big ways of setting shock processes: calibrating the overall model to fit the data, e.g. GDP volatility, or estimating the exogenous process directly without the rest of the model. In the former case, results will obviously differ across models as the transmission mechanism changes. Bayesian estimation falls into that first category.
  2. When working with levels, you need to be careful to get the scaling right. A 1% TFP shock needs to be coded as 0.01, not 1. The 100% you mention most likely comes from a linearized model that has been scaled by 100.
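The first approach, calibrating the shock size to match a data moment, can be sketched with a toy model (a hypothetical AR(1) for log-output; the function names and the 2% volatility target are illustrative, not from this thread): because simulated volatility is monotone in the innovation size, a simple bisection recovers the required standard deviation.

```python
import numpy as np

def output_std(sigma, rho=0.9, T=10000, seed=1):
    # Simulated std of log-output under a hypothetical AR(1) driven by
    # a technology shock with innovation std `sigma` (fixed seed so the
    # mapping sigma -> volatility is deterministic).
    rng = np.random.default_rng(seed)
    y = np.empty(T)
    dev = 0.0
    for t in range(T):
        dev = rho * dev + rng.normal(0.0, sigma)
        y[t] = dev
    return y.std()

def calibrate_sigma(target, lo=1e-5, hi=0.1, tol=1e-8):
    # Bisection: simulated output volatility is increasing in sigma.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if output_std(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma_hat = calibrate_sigma(target=0.02)  # match an assumed 2% output volatility
```

In a real application the AR(1) would be replaced by a solved DSGE model, but the logic, searching over the innovation size until the simulated moment matches the data moment, is the same.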

Thank you, professor. In my case, if I set the standard deviations of the shocks to 0.01%, everything is OK with no problematic observations. Can you tell me whether 0.01% makes sense? Moreover, I want to estimate some parameters using Bayesian methods. How should I set the prior distributions of the standard deviations of the shocks? I just want to estimate the model so that it fits the real data well, but I cannot find a way. If I set the standard deviations of the shocks to 1% as most papers do, I always get a large number of problematic observations, and the model then cannot fit the real data and is even wrong, as I see it. So can you give me some suggestions on setting the standard deviations of the shocks? Sorry to bother you, and thank you very much.

That sounds like there is still an issue in your model. Could you provide the mod-file?

Of course, professor. The mod-file sets the standard deviations of the shocks as “stderr e_piw, inv_gamma_pdf,0.01,2;”. The paper I wrote studies unemployment dynamics and normalizes total workers to 1. The mod-file runs, but you will see:

First, if you end the mod-file after identification, you will obtain 400 simulation results with a large number of problematic observations. Just look at the interest rate (which should be larger than 1), employment (which should be less than 1), and output (which should be positive).

Second, if you run the mod-file to the end, it never fits the real data. I think that is “garbage in, garbage out”, as you taught me before.

Sorry to bother you, but I hope you can help me with this problem. I really appreciate your help, professor.
employment2022.mod (17.8 KB)
unemployment2022.xlsx (16.7 KB)

I would focus on the simulated moments. You can see in the variance decomposition that the world inflation shock and the foreign interest rate shock explain most of the variable movements. After that, there is the monetary policy shock, whose variance you set to 1%. But that is way too large, because it implies 4 percent for the annualized interest rate. Also, having 10 shocks with a variance of 1 percent will make the model quite volatile.

Thank you, professor. I really don’t understand why a 1% shock implies 4 percent for the annualized interest rate. Can you give me some detailed suggestions on how to modify the model so that it fits the data well? For the monetary policy shock, is 0.01% a suitable value?

I have one more question, professor. If the Bayesian estimate of a standard deviation, e.g. of the government expenditure shock, is 0.17 (just for example), does this result make sense? It would surely mean a large number of problematic observations. Thank you, professor.

  1. You are modeling the quarterly interest rate. The annual one is roughly four times the quarterly one. Thus, a 25 basis point shock to the quarterly interest rate corresponds to 1 percent of the annual one.
  2. No, 1 basis point is unrealistically small for a monetary policy shock.
  3. 17 percent for a true government spending shock is really huge (unless it is exogenous absorption including net exports)
  4. Usually the observed data should restrict the number of problematic observations. The bounded data should not allow for too much volatility.
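The annualization arithmetic in point 1 is just compounding (using the reply’s own numbers, a 25 basis point quarterly shock): at small rates, quarterly net rates roughly add up over four quarters.

```python
# Why a 25 bp quarterly interest-rate shock is ~1% at an annual rate
# (and a 1% quarterly shock is ~4% annualized): net rates compound,
# and for small rates compounding is close to simple addition.
r_q = 0.0025                     # 25 basis point quarterly net rate shock
exact = (1 + r_q) ** 4 - 1       # exact annual compounding
approx = 4 * r_q                 # linear approximation: four quarters add up
```

The two numbers differ only at the fourth decimal place, which is why the “annual is roughly four times quarterly” rule of thumb works.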

Thank you, professor. I re-read SW (2007) and CMR (Risk Shocks, 2014) and find that I can set the standard deviations of the shocks by referring to these papers, e.g. stderr e_iff, inv_gamma_pdf,0.002,0.0033.
Anyway, can the simulated gross interest rate fall below 1? If we set the discount factor to 0.99, the steady-state interest rate will be about 1.01. If the standard deviation of the interest rate shock is set to 0.2% as in CMR (2014), then I think the simulated gross interest rate can sometimes be less than 1. Is this right? Thank you, professor.

Yes, that can indeed happen. It is well-known that these models do not implement the zero lower bound.
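Under a normal approximation one can gauge how often this happens (steady state 1.01 and innovation std 0.002 from the post above; the persistence of 0.9 is an assumed, hypothetical value):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Gross-rate steady state 1.01 (beta = 0.99), innovation std 0.002 as in
# CMR (2014); the AR(1) persistence rho = 0.9 is an illustrative assumption.
R_ss, sigma_eps, rho = 1.01, 0.002, 0.9
sd = sigma_eps / math.sqrt(1 - rho**2)        # unconditional std of the rate
p_below_one = normal_cdf((1.0 - R_ss) / sd)   # P(R < 1) each period
expected_hits = 400 * p_below_one             # expected violations in 400 periods
```

With these numbers the unconditional standard deviation is about 0.0046, so the lower bound at 1 is only about two standard deviations below the steady state, and a 400-period simulation can be expected to cross it several times.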

Thank you for your kind help, professor; I have learned so much from you. The gross interest rate no longer has problematic observations now that I set the standard deviation of the interest rate shock to 0.2%. I think my model’s transmission mechanism may have some problems that always produce problematic total labor demand observations (larger than 1). I will work on it following your useful suggestions. Thank you so much.