Likelihood only!

I may be missing something, but you seem to be confusing the ordering in M_.params with the one used in estimation. For example, you take parameter 20 in xparam1 to be s_tech, but the actual ordering is given in bayestopt_.name. There you can see that the first entry is the standard error of ea (which is s_tech). The parameters with the flat likelihood are actually the constants of the observation equation.
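If you want to double-check the ordering yourself, something along the following lines (a minimal sketch, run after Dynare has set up bayestopt_, e.g. after the estimation command) lists the entries of xparam1 together with their names:

    % List the estimated parameters in the order Dynare uses for xparam1
    for ii = 1:length(bayestopt_.name)
        fprintf('%3d: %s\n', ii, bayestopt_.name{ii});
    end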

Yes you’re right. I didn’t realize that the parameter order was different when estimating. Thank you very much!!

Let’s say that I use this method and end up with some estimates of my parameters. Is there then some relatively easy way to make inferences about these parameters in Dynare?

Which type of inference do you mean? And how did you estimate your parameters? You can always work with the Delta method yourself, but Dynare does not provide prepackaged routines for inference on external estimates.

By inference I mean being able to say how certain each of the parameter values is. I estimated them with an algorithm that changed the parameter vector with regard to the likelihood until it reached an optimum. Yes, I had something like the delta method in mind, or maybe MCMC.

See for example Ruge-Murcia, “Methods to Estimate Dynamic Stochastic General Equilibrium Models”, on how to use the inverse Hessian to get the asymptotic distribution of the parameters from the likelihood function. Using numerical differentiation, you can easily compute this from the likelihood function. The relevant part of Dynare’s code in the current snapshot is:

hh = reshape(hessian(dsge_likelihood,xparam1, ...
    options_.gstep,dataset_,dataset_info,options_,M_,estim_params_,bayestopt_,oo_),nx,nx);
invhess = inv(hh);
stdh = sqrt(diag(invhess));
oo_.posterior.optimization.Variance = invhess;
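Once invhess and stdh are available (with nx the number of estimated parameters), a minimal sketch of how one might turn them into approximate 95% confidence intervals, relying on asymptotic normality of the ML estimator:

    % Approximate 95% confidence intervals from the inverse Hessian
    ci_lower = xparam1 - 1.96*stdh;
    ci_upper = xparam1 + 1.96*stdh;
    for ii = 1:length(xparam1)
        fprintf('%-25s %10.4f  (%10.4f, %10.4f)\n', ...
            bayestopt_.name{ii}, xparam1(ii), ci_lower(ii), ci_upper(ii));
    end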

Oki, nice! Thanks for the advice :)

When estimating, I’ve realised that every once in a while Matlab gives the following warnings:

  1. From the m-file evaluate_steady_state.m, row 86: ys = ys_init-jacob\fvec.
    It sometimes gives the warning message that the Jacobian is badly scaled and the results may be inaccurate.

  2. In the m-file dyn_first_order_solver.m, row 315: ghu = - A_ \ fu;
    It sometimes gives the warning message that A_ is badly scaled and the results may be inaccurate.

  3. In the m-file lyapunov_symm2.m, regarding the q matrix in rows 158 and 185.
    It sometimes gives the warning message that q is badly scaled and the results may be inaccurate.

How should I think about these problems? I’ve realised that in regions where this happens a lot, the log likelihood function sometimes goes to a very high value, which is unrealistically high, and sometimes even 0, which is the highest possible value that a log likelihood function can get.

These are parameter regions that lead to numerical problems, so the results should be taken with a grain of salt as they may be numerically imprecise. That’s why you get the warning: inversion of a near-singular matrix is required in those steps.
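If you want to see how severe the conditioning problem is for a particular draw, a quick sketch (the line number is the one you quote and may differ across Dynare versions) is to set a breakpoint and inspect the reciprocal condition number:

    dbstop in dyn_first_order_solver.m at 315
    % at the breakpoint, before ghu is computed:
    rcond(A_)   % values close to eps signal a badly conditioned matrix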

Regarding your last statement: it’s not true that 0 is the highest value of a log likelihood, as we are talking about pdfs of continuous distributions. See the thread “Computing Log Likelihood and Optimization”.

Also, when you say high likelihood values, are we talking about the actual likelihood or minus the likelihood? Ideally those problems result in low likelihoods and the draws are rejected.

Oki, you’re right about the 0 thing.

I mean the log likelihood, so in absolute terms the lower the value the better (more likely). But can’t it be that these inaccuracy problems produce unrealistically good values and therefore trick one into believing that this is the best likelihood value? I mean, is there a guarantee that the values are always wrong in the sense that they produce a bad likelihood? Shouldn’t the inaccuracy go both ways?

Yes, it can go both ways. The only problematic one of the three warnings you refer to is the second.

The first is just a warning during the solution of the steady state, but there is still a check whether the solution actually is a steady state. So nothing can go wrong here.
Similarly, the third one refers to the initialization of the Kalman filter. Any problems here should be minimal as initial conditions usually die out quickly.

The warning about an inaccurate solution is trickier. In theory, the results may be inaccurate and the effect on the likelihood may go both ways. Unfortunately, there is no way to know the size of the error, as there is no independent way to compute the exact solution. The accuracy of the likelihood depends on the accuracy of the solution and is of the same order (see also onlinelibrary.wiley.com/doi/10.1111/j.1468-0262.2006.00650.x/abstract and onlinelibrary.wiley.com/doi/10.3982/ECTA7669/abstract). So unless you think there is a massive problem with your solution for some parameters, you should be fine.

I tried out different parameter vectors for the model in order to get different log likelihood values. I got values roughly in the interval (-2300, -1700). And then I also got a few values of about -0.01 and -10. Since these deviated so much, I figured something was wrong, restarted everything, and used the following commands in the estimation setup

set_dynare_seed('clock');
estimation(order=1, datafile=usmodel_data_eget, mh_replic=10, mh_nblocks=2, mh_jscale=0.8, nograph, nodiagnostic);

Do you agree that this should do the trick of generating a slightly different initial estimation setup?

And then I compared all the parameter vectors to see if any gave a different likelihood value compared to the previous setup. As it turns out, the extreme values of -0.01 and -10, and maybe three other vectors, generated the error value 1e+8 or values very close to 1e+8. I also noted a pattern in those extreme values regarding the warning in dyn_first_order_solver.m, row 315: ghu = - A_ \ fu; there, rcond(A_) was around 1e-7. I usually don’t see such low values for rcond(A_).

What do you say about an approach where I penalize the likelihood function when rcond(A_) gets very low and the solution is therefore inaccurate?

Could you provide me with example codes?

Penalizing when the solution is inaccurate should work (if you know that a low rcond only means exactly this). But I was wondering whether your few outliers materially affect the posterior distributions.
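As a rough illustration of the idea (this is not official Dynare code; compute_minus_loglik is a hypothetical stand-in for however you evaluate the likelihood and recover rcond(A_) in your own routine, and the threshold is an arbitrary choice):

    function fval = objective_with_penalty(xparam1)
        % Hypothetical wrapper: return the usual minus log likelihood, but a large
        % penalty (the 1e+8 you already observe for failed draws) when A_ is badly
        % conditioned.
        penalty      = 1e8;
        rcond_cutoff = 1e-6;   % illustrative threshold, tune for your model
        [minus_loglik, rcond_A] = compute_minus_loglik(xparam1);  % hypothetical helper
        if ~isfinite(minus_loglik) || rcond_A < rcond_cutoff
            fval = penalty;    % treat numerically suspect draws as very unlikely
        else
            fval = minus_loglik;
        end
    end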

When I run the code that I’m attaching, I get:

rcond(jacob)            rcond(A_)               log likelihood
1.57584115349113e-10    0.000772325365577031    -1001.73193838161
4.84036186378225e-06    0.000423716426874983    -1174.39290562767
5.06986120088668e-06    0.000465094332509841    -1173.43665614883
5.84371209498350e-11    3.60035055286181e-09    -124.641919336953
6.56431006951448e-06    0.000759552906717238    -1177.21986016283
1.44535382775040e-06    0.000148846120569051    -1176.61074901448

As you can see, there are two rows that deviate from the rest when it comes to the log likelihood, namely rows 1 and 4, especially row 4. For row 4, you see that the Jacobian in evaluate_steady_state.m, row 86: ys = ys_init-jacob\fvec, has a lower rcond value than the others, and so does rcond(A_) in dyn_first_order_solver.m, row 315: ghu = - A_ \ fu;

I attach my files in a zip file. All you have to do is run the m-file “AA_run.m” and you will get the minus log likelihood value in the vector “end_vec”.

\\Glm
2014-09-16 ML.zip (20.4 KB)

Sorry, but there is at least one file missing in the zip-file. I cannot run the mod-file, because min_max_funk cannot be found.

Hiii, I have my model expressed in terms of logs, so ctilda = css*exp(c), where ctilda is the simulated series in Matlab and css is the steady state value.
I want to recover my original “c”, but the series Matlab simulates gives me negative values, so I cannot apply the logarithm.

I want to know which transformation I can use to recover my “c”.

THANKX!

@lilianagnr Please don’t clutter old posts with new questions. This is not a catch-all post. That consumption becomes negative is strange and indicates deeper problems with your model. Investment sometimes becoming negative in first-order approximations can happen, but for consumption this is strange and suggests a wrong shock size or something similar.
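For reference, a minimal sketch of the back-transformation implied by ctilda = css*exp(c) (using the variable names from the post above); it only makes sense once the simulated series is strictly positive:

    % Back out c from the level series, given ctilda = css*exp(c)
    if any(ctilda(:) <= 0)
        warning('ctilda contains non-positive values; check the shock size/model first')
    end
    c = log(ctilda./css);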

Sorry about the missing file; it should be there now with this new attachment.

\\Glm
2014-09-17 ML.zip (20.7 KB)

I am investigating the issue, but it seems to be something else. For some reason, for those parameter values the forecast error matrix becomes singular.

Could you please try putting

use_univariate_filters_if_singularity_is_detected=0;
in the estimation command.
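That is, something along the lines of the following (the other options copied from your earlier call; adjust as needed):

    estimation(order=1, datafile=usmodel_data_eget, mh_replic=10, mh_nblocks=2,
               mh_jscale=0.8, nograph, nodiagnostic,
               use_univariate_filters_if_singularity_is_detected=0);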