How should we calculate the standard deviation of the estimated parameters in IRF matching (e.g. for inference)? Is it important?
The sample code available on GitHub does not seem to provide this.
Thank you so much, Dr. Pfeifer.
Actually, the formula is a bit hard for me to implement. Do you have any easier ideas?
By the way, what are the standard errors in the CMAES method in your code (IRF matching on GitHub)? If we use @#define CMAES=1, some standard errors are reported for each estimated parameter. Aren't they simply what we want, i.e. the standard errors of the estimators?
The formula is complicated, but it is the correct formula. There are no shortcuts.
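In case a code sketch helps: for a classical minimum-distance (IRF-matching) estimator, the textbook asymptotic variance is the sandwich $(G'WG)^{-1} G'W\Sigma WG\,(G'WG)^{-1}$, where $G$ is the Jacobian of the model IRFs with respect to the parameters, $W$ the weighting matrix, and $\Sigma$ the covariance of the empirical IRFs. A minimal NumPy sketch of that formula (the function name and toy inputs below are mine for illustration, not from the repository):

```python
import numpy as np

def cmd_standard_errors(G, W, Sigma_hat):
    """Sandwich standard errors for a classical minimum-distance estimator.

    G         : (m x k) Jacobian of stacked model IRFs w.r.t. parameters
    W         : (m x m) weighting matrix used in the objective
    Sigma_hat : (m x m) estimated covariance of the empirical IRFs
    Returns the k-vector of asymptotic standard errors.
    """
    bread = np.linalg.inv(G.T @ W @ G)      # (G'WG)^{-1}
    meat = G.T @ W @ Sigma_hat @ W @ G      # G'W Sigma W G
    V = bread @ meat @ bread                # full sandwich
    return np.sqrt(np.diag(V))

# Toy check: with the optimal weighting W = Sigma^{-1}, the sandwich
# collapses to (G' Sigma^{-1} G)^{-1}.
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Sigma = np.eye(3) * 0.04
W = np.linalg.inv(Sigma)
se = cmd_standard_errors(G, W, Sigma)
```

In practice $G$ would come from numerically differentiating the model IRFs at the estimate, and $\Sigma$ from the bootstrap or VAR-based covariance of the empirical IRFs; those are the inputs that make the formula laborious, not the matrix algebra itself.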
What about the standard errors reported by the CMAES algorithm in your code? If they are not the SEs of the estimators, what are they?
Which standard errors do you mean? The output is
[x_opt_hat, fhat, COUNTEVAL, STOPFLAG, OUT, BESTEVER] = cmaes('IRF_matching_objective',x_start,H0,cmaesOptions,IRF_empirical,IRF_weighting);
If you run the code (with @#define CMAES=1), it displays the mean and standard deviation for the estimated parameters during the iterations, and a standard error at the end.
That cannot be the correct SE, because CMAES does not know about, e.g., the sampling uncertainty underlying the empirical IRFs. It is an internal measure of the optimizer's search distribution, not the asymptotic standard error of the estimator.