Non-linear identification toolbox and measurement errors

Dear All,

I have tried using the new features by Willy Mutschler for non-linear identification within Dynare.

When running non-linear estimation, we have to include measurement errors in the estimated_params block and try to recover the standard deviation of the error.
Should I include measurement errors within the model block when using the non-linear identification toolbox?
In the linear case, I usually include measurement errors within the model block to check whether they help identify additional parameters - as shown, for instance, in the complementary code to https://www.researchgate.net/publication/333420538_The_effect_of_observables_functional_specifications_model_features_and_shocks_on_identification_in_linearized_DSGE_models.
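For concreteness, this is roughly what I do in the linear case (only a sketch with made-up variable and shock names; `e_me` is the measurement-error shock added directly to the model block before running the identification command):

```
var y y_obs;                // y: model variable, y_obs: observed counterpart
varexo e_a e_me;            // structural shock and measurement-error shock

model;
// ... structural equations determining y ...
y_obs = y + e_me;           // observation equation with additive measurement error
end;

shocks;
var e_a;  stderr 0.01;
var e_me; stderr 0.005;
end;

varobs y_obs;

identification;             // linear (first-order) identification check
```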

Would this operation have the same meaning in a non-linear context?
I’ll try to explain myself better …
Besides solving issues of non-identification that arise when parameters drop out of the first-order solution, higher-order approximations help with issues of weak identification by increasing the curvature of the likelihood function for some target parameters.
However, compared to the initial model, we have to include measurement errors in order to run non-linear estimation.
(My understanding is that we include measurement errors in a non-linear estimation also to provide a distribution from which to sample particles, to initialize the algorithms, and to help with issues of stochastic singularity.)
I believe this also helps the non-linear estimation outperform the linear one (without measurement errors).
So, I guess I should hope that the estimated standard deviations of the measurement errors turn out quite small in order to trust my estimated parameters. Am I right?
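In code, the non-linear estimation setup I have in mind looks roughly like this (again only a sketch with hypothetical names and priors, continuing the mod-file fragment above; the measurement error is now declared in the estimated_params block rather than in the model block):

```
estimated_params;
  stderr y_obs, inv_gamma_pdf, 0.01, inf;   // std. dev. of the measurement error on y_obs
  // ... priors for the structural parameters ...
end;

// order=2 triggers the particle filter; the other options are purely illustrative
estimation(order=2, number_of_particles=5000, mode_compute=8, mh_replic=0);
```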

Many thanks in advance for your help.

Best regards,

DB

@wmutschl This one is for you.

Yes. Currently, the identification toolbox does not allow setting measurement errors in the estimated_params block, so you should add them to the model block when you check identification. However, when you then go ahead and do estimation with order > 1, you have to declare them in the estimated_params block as measurement errors and remove them from the model block (as the particle filter relies on this). I am aware that this is inconsistent and will fix it soon in the identification toolbox.
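Schematically (again just a sketch, with the made-up names from above), the higher-order identification check keeps the measurement error in the model block:

```
model;
// ...
y_obs = y + e_me;          // keep the measurement error here for the identification check
end;

identification(order=2);   // non-linear identification check

// For estimation with order > 1: remove e_me from the model block and instead
// declare "stderr y_obs, ..." in estimated_params, as in the estimation sketch above.
```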

Regarding your other question on measurement errors:

Measurement errors in Dynare enter the measurement equations additively,
y_t = f( x_{t-1}, u_t) + e_t
where e_t are independent of u_t. The identification checks are based on the rank of the Jacobian of the moments or of the spectral density, and the rank of a matrix sum is subadditive, i.e. rank(A+B) <= rank(A)+rank(B). So, in my experience, the improvement in (or resolution of) theoretical identification of parameters comes solely from the higher-order approximation of f and not from the inclusion of the measurement errors e_t. Also, you are right: it is always good if the estimated standard deviations of the measurement errors are small.
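To spell out that last step a bit (a sketch of the reasoning, assuming i.i.d. measurement errors and a check based on the mean and autocovariances of the observables): the autocovariances decompose as

$$
\Gamma_y(h;\theta,\Sigma_e) = \operatorname{Cov}(y_t, y_{t-h}) = \Gamma_f(h;\theta) + \mathbf{1}\{h=0\}\,\Sigma_e ,
$$

so that

$$
\frac{\partial \Gamma_y(h)}{\partial \theta} = \frac{\partial \Gamma_f(h)}{\partial \theta} .
$$

The columns of the moment Jacobian that belong to the structural parameters θ are therefore unaffected by adding e_t; the measurement errors only contribute extra columns for their own variances in Σ_e, and any gain in the rank condition for θ has to come from the higher-order terms of f.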

PS: I find the discussion of measurement errors in Atkinson, Richter, and Throckmorton (2019, JME) quite good.