I’ve been reading the Ruge-Murcia (2012) paper, in which the author lays out SMM estimation for DSGE models. I’ve also seen papers that perform moment matching by minimizing (often down to zero) the norm of the difference between simulated and empirical moments, but that never mention that this procedure is actually SMM.
In my work I do the same thing: I minimize (hopefully to zero) the norm of the difference between simulated and empirical moments. Nonetheless, I’m a bit confused about whether this really is SMM, since I just run the minimization and that’s it. So:

Is it safe to say (academically speaking) that simply minimizing the norm of the difference between simulated and empirical moments is SMM? And if not, what would it take for that minimization to be actual SMM?
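For concreteness, here is a toy sketch of the kind of procedure I mean. This is not my actual DSGE model: the normal "model", the `sim_moments` function, and the identity weighting matrix are all illustrative stand-ins. Note the fixed simulation draws (common random numbers), which keep the objective smooth in the parameters.

```python
import numpy as np
from scipy.optimize import minimize

# "empirical" data and its moments (toy stand-in for real data)
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=5000)
emp_moments = np.array([data.mean(), data.var()])

# fixed simulation shocks, reused at every theta (common random numbers)
shocks = np.random.default_rng(1).normal(size=50000)

def sim_moments(theta):
    # toy "model": a normal location-scale family; in a real application
    # this would be the moments of a simulated DSGE path at parameters theta
    mu, sigma = theta
    sim = mu + sigma * shocks
    return np.array([sim.mean(), sim.var()])

def objective(theta):
    g = sim_moments(theta) - emp_moments   # moment gap
    return g @ g                           # quadratic form with W = I

res = minimize(objective, x0=[0.5, 0.5], method="Nelder-Mead")
print(res.x)  # should be close to (2, 1), the true (mu, sigma)
```

With as many moments as parameters (exact identification), the objective can typically be driven to (near) zero, which is the "perfect fit" case I described above.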

Also, in the Ruge-Murcia paper the minimization is performed over the sum of squared differences (unlike my “norm” approach). But in the case that my model fits the empirical moments perfectly, the estimators should coincide (I think), so would I end up with the same Ruge-Murcia estimator?
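My reasoning here: with an identity weighting matrix, the sum of squares is just the squared Euclidean norm, and since x → x² is strictly increasing on [0, ∞), both objectives have the same minimizer, perfect fit or not. A quick numerical check of this claim (again with an illustrative toy model, not my actual one):

```python
import numpy as np
from scipy.optimize import minimize_scalar

emp_moments = np.array([1.0, 2.0])

def sim_moments(theta):
    # toy model: moments are a smooth function of a scalar parameter;
    # theta = 1 matches emp_moments exactly
    return np.array([theta, theta**2 + 1.0])

def gap(theta):
    return sim_moments(theta) - emp_moments

# minimize the norm ||g(theta)|| and the sum of squares g(theta)'g(theta)
th_norm = minimize_scalar(lambda t: np.linalg.norm(gap(t)),
                          bounds=(0.0, 3.0), method="bounded").x
th_ssq = minimize_scalar(lambda t: gap(t) @ gap(t),
                         bounds=(0.0, 3.0), method="bounded").x
print(th_norm, th_ssq)  # both should be close to 1.0
```

Of course this only covers the identity-weighted case; with a non-trivial weighting matrix W the quadratic form g'Wg is no longer the norm I’m computing, which is part of what I’m unsure about.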

Is there any kind of test or diagnostic I should apply to the estimates obtained with the method I described, in order to assess the results I’m getting?
One more thing worth mentioning: the paper I’m referring to calls the procedure just “calibration,” not “estimation.” Is there any difference in this case?
Thanks.