Transforming Simulated Interest Rate Data

Hello,

I have a question regarding the transformations of interest rate data after a simulation.

My linearized model is at quarterly frequency, and I have empirical interest rate data for each quarter, but the data is in annualized percentage terms. I therefore transformed the data as follows:

r_obs = (1 + r_data/400) - mean(1 + r_data/400)

where r_data is the empirical series and r_obs is the series I used as input to my model.
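The transformation above can be sketched numerically, for example in Python with numpy (the r_data values here are purely illustrative, not from the thread):

```python
import numpy as np

# Hypothetical annualized net interest rates in percent
r_data = np.array([4.0, 4.5, 5.0, 4.25, 3.75])

# Convert to quarterly gross rates, then demean
gross_quarterly = 1 + r_data / 400
r_obs = gross_quarterly - gross_quarterly.mean()

# The demeaned series has mean zero by construction
print(np.isclose(r_obs.mean(), 0.0))
```

Note that the constant 1 cancels after demeaning, so this is equivalent to (r_data - mean(r_data)) / 400.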

My question

Do I need to transform the simulated data from Dynare in the same way as below before comparing the second moments?
Or should I just compare the data straight out of the simulation to the r_obs moments?

Example:
r_sim_obs = (1 + r_sim/400) - mean(1 + r_sim/400)

and then compare Var(r_sim_obs) to Var(r_obs).
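One property worth keeping in mind when deciding whether to re-transform the simulated series: demeaning leaves the variance unchanged, and dividing by 400 scales it by 1/400². A minimal sketch (the simulated values are hypothetical draws, not model output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical simulated annualized rates in percent
r_sim = rng.normal(4.0, 0.5, size=1000)

# Apply the same transformation used on the empirical data
r_sim_obs = (1 + r_sim / 400) - np.mean(1 + r_sim / 400)

# Variance is invariant to the additive demeaning, but scaled by 1/400^2
print(np.isclose(np.var(r_sim_obs), np.var(r_sim) / 400**2))
```

So whether the transformation matters for the comparison comes down entirely to whether the model variable is already in the same units as r_obs.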

Any insight would be greatly appreciated.

Thank you,
Richard

The transformed series r_obs will be a demeaned quarterly gross interest rate. The question is now what the output of your model is; your description does not tell us this. If your model simulates the equivalent of r_obs, you can compare them directly.

Hello,

Thank you for the reply.

In my model I map the observed variable straight to the model variable as follows:
r_obs = r ;

and the interest rate r is used in the Taylor rule as follows:
r = dom_tal_r * r(-1) + dom_tal_inf * ppi + dom_tal_out * ygap + r_shk_domestic;

If I understand your reply correctly, since I am not transforming r_obs in the model and am just running a simulation, I can compare the simulated r moments to the empirical data moments directly.

Is this correct?

If the above statement is correct, what would be an example where I could not compare the simulated data moments directly?
I would like to know for future reference.

Thank you in advance.

Regards,
Richard

The whole point of an observation equation is to define a transformation of the data that has a perfect correspondence to a model variable.

Hello,

I wish to check my understanding of comparing moments.

Givens:

Model is quarterly.

I transformed the data as follows in Excel:

r_obs = (1 + r_data/400) - mean(1 + r_data/400)

where r_data is the empirical series and r_obs is the series I used as input to my model.

In my model, I map the observed variable straight to the model variable as follows:
r_obs = r ;

where r is the interest rate variable in my model.

After running a simulation for r, can I compare the variance of simulated r to the variance of r_obs directly?

Or do I need to transform the simulated r values first and then compare the variance? I believe I do not…
I am asking because my simulated r has a much higher variance than r_obs.

My model’s output

MOMENTS OF SIMULATED VARIABLES
VARIABLE      MEAN        STD. DEV.   VARIANCE    SKEWNESS    KURTOSIS
r             0.078086    0.310664    0.096512    -0.225035   -0.309958

My empirical transformed data variance

r_obs variance = 0.000006012

Thank you,
Richard

By having

r_obs = r ;

you are telling the program that both variables are identical. Thus, they measure the same concept and can be compared without any further transformation. In principle, any difference in moments between r_obs in the data and r generated by the model is punished in the likelihood function. If the fit is poor, you should check the fit of your model and/or try the endogenous_prior option.