Blanchard & Kahn conditions are not satisfied: no stable equilibrium & The Jacobian of the static model is singular

  1. mode_compute=6 is a very inefficient mode-finder. Its strength is not in finding the mode, but in delivering a positive definite Hessian.
  2. mode_compute=5 worked well in finding the region of highest posterior density. mode_compute=6 then helped in getting a Hessian.
  3. Try a sequence of mode-finders; that often works well (see the sketch after this list).
  4. I would only use one chain with TaRB. Yes, you should adjust the jscale to get a better proposal density. 10 000 draws should be fine (and will still take some time).
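A minimal sketch of the chaining in point 3, in Dynare syntax; the datafile, variable list, and mode-file name are placeholders borrowed from the estimation command quoted further down this thread, not a tested setup:

// Run 1: locate the high-density region with a robust optimizer, skipping the MCMC
estimation(datafile=dataset1, mode_compute=5, mh_replic=0) y c i n pi r tot rer;

// Run 2 (a separate Dynare call): start from the mode file saved by run 1 and
// let mode_compute=6 deliver a positive definite Hessian
estimation(datafile=dataset1, mode_compute=6, mode_file='model1_mode.mat', mh_replic=0) y c i n pi r tot rer;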

The model and mode file you provided worked with one chain and 10 000 draws, but I got

"Estimation::marginal density: Let me try again.
Estimation::marginal density: There’s probably a problem with the modified harmonic mean estimator.

ESTIMATION RESULTS

Log data density is -Inf."

The acceptance ratio was quite low, only 10%. Could this be the reason, or is this reflected in some of the priors? I’ll try to re-run it now and adjust the jscale, but the TaRB sampler is very computationally heavy: 10 000 draws took me 50 h to run.

I would first investigate the trace plots.
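If it helps, this is roughly how I would call Dynare's trace_plot utility from the Matlab command window after the run; the parameter and shock names are placeholders, so replace them with names from your estimated_params block:

% options_, M_ and estim_params_ are the structures Dynare leaves in the workspace
trace_plot(options_, M_, estim_params_, 'PosteriorDensity', 1);         % posterior density of chain 1
trace_plot(options_, M_, estim_params_, 'DeepParameter', 1, 'alpha');   % trace of a deep parameter ('alpha' is a placeholder)
trace_plot(options_, M_, estim_params_, 'StructuralShock', 1, 'eps_y'); % trace of a shock standard deviation ('eps_y' is a placeholder)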

I’m sorry to disturb you, but I get the same error over and over when I change the value of the jscale
"
Error using chol_SE (line 74)
A is not symmetric

Error in posterior_sampler_iteration (line 114)
proposal_covariance_Cholesky_decomposition_upper=chol_SE(inverse_hessian_mat,0);

Error in posterior_sampler_core (line 197)
[par, logpost, accepted, neval] = posterior_sampler_iteration(TargetFun, last_draw(curr_block,:), last_posterior(curr_block), sampler_options,dataset_,dataset_info,options_,M_,estim_params_,bayestopt_,mh_bounds,oo_);

Error in posterior_sampler (line 121)
fout = posterior_sampler_core(localVars, fblck, nblck, 0);

Error in dynare_estimation_1 (line 471)
posterior_sampler(objective_function,posterior_sampler_options.proposal_distribution,xparam1,posterior_sampler_options,bounds,dataset_,dataset_info,options_,M_,estim_params_,bayestopt_,oo_);

Error in dynare_estimation (line 118)
dynare_estimation_1(var_list,dname);

Error in model_TaRB.driver (line 808)
oo_recursive_=dynare_estimation(var_list_);

Error in dynare (line 281)
evalin('base',[fname '.driver']);"

I guess this is related to the mode file?

What did you change? I am still trying to replicate the issue.

The first thing I did was just to run the files, and it worked. Then I changed the number of draws to 10 000 and got the error. I fixed that by reducing the number of chains from two to one; it then worked with 10 000 draws, but the acceptance rate was really low and there was no log data density, as stated two days ago.

Since then I’ve experimented with different jscale values and numbers of draws in the range 1-300; strangely enough, it worked for some values and not for others. It worked with 300 draws but not with 10 000, etc. The most recent run was one chain with 10 000 draws and a jscale of 0.14; the error happened approximately 1.5 h into the estimation.

I’m testing a jscale of 0.1 as we speak; it worked with 1000 draws on Saturday…

Could you please try with

I’ll try that after the current estimation is complete. It seems to be working with a jscale of 0.1; I want to see the acceptance ratio, maybe everything is fine. I’ll get back in 1-2 days, depending on the runtime. Thanks.

I’m trying this now, I’ll get back with the result.

It worked, thanks. The acceptance ratio is still very low, though, so I guess I’ll have to re-estimate until I’m in the 25-35% interval.

Everything works. Thank you for your help.

I do have some last questions, however:

  1. The marginal likelihood that researchers report and use to compare models: is it fine to report the log data density shown in the estimation results, or are any alterations needed?
    [screenshot of the estimation results output]

  2. Regarding the shocks: to my understanding, Dynare computes one-standard-deviation shocks. But if I model my shocks as var eps_XX; stderr 0.1;, does this mean the shock in the Bayesian (or standard) IRFs will be of size 0.1, i.e. a 10% increase if the model is log-linearized? In other words, will the IRFs show the response to a 10% shock? And are all shocks (Bayesian IRFs, stoch_simul IRFs, etc.) orthogonalised?

  3. Lastly, the only thing missing in my thesis is some kind of model-fit or robustness/sensitivity analysis, but I’m a bit unsure what to do in a fully estimated model. I think I’ve read here on the forum that a model-fit exercise won’t work if the model is completely estimated, so I’m wondering: what would you, as an examiner, like to see in the way of robustness checks for an estimated model in a master’s thesis?

Thanks a lot.
Sincerely,

  1. Yes, that is the marginal data density often reported. But it mostly makes sense in the context of model comparison (see the sketch after this list).
  2. I am not entirely sure what you mean. But yes, reported IRFs are to one standard deviation shocks. 0.1 usually means 10 percent in a log-linearized model where the data were not scaled by 100.
  3. If it were me, I would be happy with a well-estimated model and a decent interpretation of the finding. I would not ask for additional robustness checks beyond the usual diagnostics.
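To make point 1 concrete, a minimal Matlab sketch of how two log data densities are usually compared; the numbers are purely illustrative, and the oo_ field name is my assumption about where the modified harmonic mean estimate is stored:

logML_A = -512.3;   % log data density of the baseline model (illustrative number)
logML_B = -518.9;   % log data density of the alternative model (illustrative number)
                    % e.g. read off the estimation output or oo_.MarginalDensity.ModifiedHarmonicMean
log_BF = logML_A - logML_B;           % log Bayes factor in favour of model A
post_prob_A = 1/(1 + exp(-log_BF));   % posterior probability of A under equal prior model weights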
  1. Yes, that is the marginal data density often reported. But it mostly makes sense in the context of model comparison.

Will it make sense if I run different versions of my model? I have one baseline version, a second where I model the shock process differently from an AR(1), and then restricted versions where I shut down the frictions one at a time and compare the variance decompositions.

  1. I am not entirely sure what you mean. But yes, reported IRFs are to one standard deviation shocks. 0.1 usually means 10 percent in a log-linearized model where the data were not scaled by 100.

Will the IRFs always show one standard deviation regardless of how I specify the shocks? So it doesn’t matter whether I specify stderr 0.1; or stderr 0.01;?
I did not scale the data; the series are in log differences, or treated as described in the manual for interest rates and inflation.

  1. If it were me, I would be happy with a well-estimated model and a decent interpretation of the finding. I would not ask for additional robustness checks beyond the usual diagnostics.

Just to be clear, which are the usual diagnostics you are referring to?

Sincerely,

  1. Yes, that is what many people do. Whether that fits your paper, I don’t know.
  2. It will always be one standard deviation. Of course, how you specify the standard deviation will matter.
  3. mode_check, prior-posterior plots, convergence diagnostics (see the sketch below).
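As a hedged illustration of point 3, this is roughly how those diagnostics map onto estimation options; the datafile, variable list, and tuning values are placeholders mirroring the command quoted further below:

estimation(datafile=dataset1,
    mode_check,       // plots slices of the posterior around the mode for each parameter
    mh_replic=10000,
    mh_nblocks=1,     // with a single chain Dynare reports Geweke convergence diagnostics;
                      // the Brooks/Gelman diagnostics require mh_nblocks>1
    mh_jscale=0.49)
    y c i n pi r tot rer;
// Prior-posterior plots are produced automatically after the MCMC run.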
  1. It will always be one standard deviation. Of course, how you specify the standard deviation will matter.

I see, so it’s not possible to get the IRF plots to show a 0.1 or 0.5 standard deviation shock, etc.?

Also, in the generated model folder in Matlab (model/Output/model_PriorsAndPosteriors.eps) I want to change the parameter names in these plots to the corresponding Greek letters. For example, on line 742 of the .eps file the first name appears as

(SE_epss_y) t 

but I cannot manage to display it as sigma_y. Any suggestions?

Have a great weekend!
Sincerely,

Due to linearity, you can scale the IRFs however you want. I would suggest creating the desired graphs manually in Matlab based on the results stored in oo_.
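A minimal Matlab sketch of that rescaling, assuming a variable y and a shock eps_y purely for illustration (check fieldnames(oo_.irfs) for the actual <variable>_<shock> names in your run):

scale = 0.5;                         % e.g. show the response to a 0.5 standard deviation shock
irf_y = scale * oo_.irfs.y_eps_y;    % stoch_simul IRFs (requires irf>0); with bayesian_irf the posterior
                                     % mean IRFs are stored in oo_.PosteriorIRF.dsge.Mean.<variable>_<shock>
figure;
plot(irf_y, 'LineWidth', 1.5);
xlabel('Quarters'); ylabel('Percent deviation from steady state');
title('y after a 0.5 s.d. shock to eps\_y');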

  1. mode_check, prior-posterior plots, convergence diagnostics.

Maybe I should report the second moments from the model and compare them with the data? (This seems to be quite common in the literature.) When I run the stoch_simul command after the estimation, can I use the theoretical moments that Dynare prints?

I haven’t really done this before, so I’m a bit uncertain how to proceed. Thanks.

Yes, that cannot hurt. And yes, you can use the reported theoretical moments from stoch_simul after estimation.
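In case you prefer building the table yourself, a small sketch of where those theoretical moments live in oo_ after stoch_simul; as far as I know the ordering follows the variable list of the stoch_simul command, so cross-check against the printed moments table:

model_mean = oo_.mean;             % theoretical means of the listed endogenous variables
model_sd   = sqrt(diag(oo_.var));  % theoretical standard deviations from the covariance matrix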

Yes, that cannot hurt. And yes, you can use the reported theoretical moments from stoch_simul after estimation.

  1. Does it matter if I use moments_varendo in the estimation or not?

  2. What is the best/simplest way to obtain/calculate the moments (standard deviations) from the data? I couldn’t find this in the manual or on the forum.

I’m not using any HP filter, just

estimation(posterior_sampling_method='tailored_random_block_metropolis_hastings', datafile=dataset1, first_obs=1, mh_replic=12000, mh_nblocks=1, mh_drop=0.1, mh_jscale=0.49, mode_compute=0, mode_file='model1_mode.mat', mode_check, bayesian_irf, diffuse_filter, prior_trunc=0) y c i n pi r tot rer;
stoch_simul(order=1, conditional_variance_decomposition=[1,8,32], irf=0) y c i n pi r tot rer;

and my data are log-differenced already.

Thank you.

  1. No, that does not matter.
  2. Simply compute the standard deviations of your data in Matlab (see the sketch below).
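For point 2, a hedged Matlab sketch; the file name, loading function, and column names are placeholders for whatever your dataset1 file actually contains:

data    = readtable('dataset1.xlsx');           % or load('dataset1.mat'), depending on the file format
obs     = data{:, {'y_obs','c_obs','pi_obs'}};  % select the (already log-differenced) observable columns
data_sd = std(obs)                              % one sample standard deviation per observable
% These can then be put side by side with sqrt(diag(oo_.var)) from the model.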