Identification problem with diffuse filter

Dear users,

Before estimating my model, I tried to run the identification command, but MATLAB kept throwing an error message even though I had put 'order=1' into the identification command. My question is how to create the 'static_params_derivs.m' file so that the identification command works. Or maybe the error is caused by some other reason. The error message is listed below:

Testing calibration
Error using get_perturbation_params_derivs (line 449)
For analytical parameter derivatives 'static_params_derivs.m' file is needed, this can be created by putting
identification(order=1) into your mod file.

Error in get_identification_jacobians (line 153)
oo.dr.derivs = get_perturbation_params_derivs(M, options, estim_params, oo, indpmodel, indpstderr, indpcorr, d2flag);

Error in identification_analysis (line 139)
[MEAN, dMEAN, REDUCEDFORM, dREDUCEDFORM, DYNAMIC, dDYNAMIC, MOMENTS, dMOMENTS, dSPECTRUM, dSPECTRUM_NO_MEAN, dMINIMAL, derivatives_info] = get_identification_jacobians(estim_params_, M_, oo_, options_, options_ident, indpmodel, indpstderr, indpcorr, indvobs);

Error in dynare_identification (line 485)
identification_analysis(params, indpmodel, indpstderr, indpcorr, options_ident, dataset_info, prior_exist, 1); %the 1 at the end implies initialization of persistent variables

Error in Sector41_model.driver (line 60554)
dynare_identification(options_ident);

Error in dynare (line 281)
evalin('base',[fname '.driver']);

Here's my mod file; I have zipped and uploaded it.
Mymodel.rar (254.0 KB)
I thank you in advance for your help.
jason

Hi everyone,

I have checked the 'get_perturbation_params_derivs.m' file and found that the error was caused by an 'out of memory' problem when the file tried to create a large matrix of size [4020 x 4020 x 7061].
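As a rough back-of-the-envelope check of why this fails, a dense double-precision array of that size (8 bytes per element) would require on the order of

```latex
4020 \times 4020 \times 7061 \times 8 \;\text{bytes} \;\approx\; 9.1 \times 10^{11} \;\text{bytes} \;\approx\; 913 \;\text{GB},
```

which far exceeds the RAM of any ordinary workstation, so MATLAB's zeros call cannot possibly allocate it.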

I tried to limit the derivative order with respect to the parameters, as suggested by Prof. jpfeifer in this topic, but it still did not work:
https://forum.dynare.org/t/identification-tools-memory-issue/6255

Is there any way to deal with the out-of-memory problem when using Dynare?
Thanks!

@wmutschl Is there any way around this limitation?

I downloaded your files, but “par.mod” is missing. Could you upload your model again, so I can try to replicate your issue?

tl;dr: use the following options:

  • analytic_derivation_mode=-2 or analytic_derivation_mode=-1
  • no_identification_minimal
  • no_identification_spectrum
  • no_identification_strength

Are you sure you want to estimate a model with 650 equations? Are 42 similar sectors/countries/states really necessary?

In more detail:

I can replicate your memory issue and was not even able to run identification on my server with 96GB of RAM. Indeed, in the params_derivs files the preprocessor initializes the objects with a zeros statement, and MATLAB errors out because the model dimensions are just too large. Typically, one could overcome this by e.g. using sparse instead of zeros, or by using the use_dll option, which creates compiled files; however, we don't support this for the params_derivs files, as we still rely on MATLAB code to evaluate them.

So, the underlying issue is that you have a very large model (over 650 equations). To solve the issue you can run the identification command with analytic_derivation_mode=-2. This does not use the params_derivs files (which contain the analytic derivatives of the model equations and steady-state with respect to the parameters) to compute the parameter Jacobian of the steady-state, but instead simply uses numerical differentiation to compute the parameter derivative of the steady-state. The actual Jacobians that are used to check local identification are still computed in closed-form. So this is what I usually suggest to people trying to run the identification toolbox with large models.
Alternatively, you could also try analytic_derivation_mode=-1, where all derivatives and all identification Jacobians are computed numerically.
Also, you should deactivate the criteria based on the minimal system and the spectrum, as these are quite expensive to compute for large models. Likewise, deactivating the strength saves computational time. So the identification command would be:

identification(order=1,parameter_set=calibration,diffuse_filter,analytic_derivation_mode=-2,no_identification_minimal,no_identification_spectrum,no_identification_strength);

which outputs the following:

======== Identification Analysis ========

Testing calibration
  
Note that differences in the criteria could be due to numerical settings,
numerical errors or the method used to find problematic parameter sets.
Settings:
    Derivation mode for Jacobians:                         Numerical
    Method to find problematic parameters:                 Nullspace and multicorrelation coefficients
    Normalize Jacobians:                                   Yes
    Tolerance level for rank computations:                 robust
    Tolerance level for selecting nonzero columns:         1e-08
    Tolerance level for selecting nonzero singular values: 1e-03

REDUCED-FORM:
    All parameters are identified in the Jacobian of steady state and reduced-form solution matrices (rank(Tau) is full with tol = robust).

MOMENTS (ISKREV, 2010):
    All parameters are identified in the Jacobian of first two moments (rank(J) is full with tol = robust).

==== Identification analysis completed ====

It took about 6 minutes on my server. So local identification seems to be okay. Of course, you could activate the strength by dropping no_identification_strength, but this will take quite some time to finish.
Either way, I am not sure how USEFUL this insight is, because your goal seems to be to estimate quite a large model! Is this really useful to you?!
Looking at the mod file, the underlying model structure seems to be quite straightforward; however, you are blowing up the size of the model by repeating the equations for 42 regions/countries/sectors. Is this really necessary for what you want to study? Maybe try to decrease the number of sectors; otherwise, I doubt that your estimation will bring you any meaningful results at all.

One more thing: for estimation you don't have to declare the measurement errors as varexo, so you can get rid of the c_obs equations, simply declare c as observable, and put a stderr c in the estimated_params block. For identification this is not yet possible, but I will introduce that feature very soon.
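As a minimal sketch of that setup in a mod file, it could look like the following (the inv_gamma prior and its hyperparameters are purely illustrative placeholders, not values from the original model):

```
varobs c;

estimated_params;
  // measurement error on the observable c, declared directly via stderr
  // instead of an auxiliary c_obs equation with a varexo shock;
  // prior shape and hyperparameters here are illustrative only
  stderr c, inv_gamma_pdf, 0.01, inf;
end;
```

This way the measurement error is handled internally by the estimation routines rather than being carried around as an extra exogenous variable in the model block.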

Hope that helps a bit?


I appreciate your detailed advice and kind help! You are right, the model I am trying to estimate is very large; the I-O linkages in the multi-sector model greatly increase the number of parameters. I think setting some parameters to be homogeneous across sectors will reduce the model's complexity.
Again, your help is very much appreciated.