Btw, we’re amazed by the quality and speed of replies; a tremendous help!

Our question:

We run simul on a deterministic model with temporary shocks. Our model is already linearized in percent deviations from the steady state, so the steady-state values of all variables are zero.

We ran across three related phenomena:

If we specify X simulation periods, we get X+2 simulated values.

If we specify fewer simulation periods than the system seems to need to return “naturally” to steady state, a kink appears in the simulated values towards the last periods, where they drop to the steady state. Is this some sort of forced convergence?

If we first specify X simulation periods and then Y simulation periods (with Y > X), the value of any variable in a period common to both samples differs, taking a higher value in the Y-period simulation.

The first of the two extra values is the initial condition, before the start of the simulation. The last one is the terminal condition, after the last period of the simulation.
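All three observations can be reproduced with a tiny stacked-time sketch (a hypothetical scalar toy model, not Dynare itself; the parameter names and values are purely illustrative): one equation with a lead and a lag, solved as a single linear system, with the initial and terminal conditions appended to the returned path the way the simulation output is laid out.

```python
import numpy as np

def simulate(periods, a=0.4, b=0.4, shock=1.0):
    """Deterministic simulation of the toy linearized model
    x[t] = a*x[t+1] + b*x[t-1] + e[t] (variables in percent
    deviation from a zero steady state), solved as one stacked
    linear system with x[0] = 0 (initial condition) and
    x[periods+1] = 0 (terminal condition imposed by the solver)."""
    A = np.eye(periods)
    for t in range(1, periods):
        A[t, t - 1] = -b   # lagged term
        A[t - 1, t] = -a   # lead term
    e = np.zeros(periods)
    e[0] = shock           # temporary shock in period 1
    x = np.linalg.solve(A, e)
    # Returned path mimics the simulation output layout:
    # initial condition + `periods` simulated values + terminal condition.
    return np.concatenate(([0.0], x, [0.0]))

short = simulate(10)   # 12 values: X + 2
long = simulate(100)   # 102 values
# Common periods differ: the forced terminal condition drags the
# short simulation down, so the longer run gives slightly higher values.
print(len(short), short[5] < long[5])
```

With `simulate(10)` the path is visibly bent toward zero in the last few periods (the kink of the second observation), while `simulate(100)` decays smoothly; the length check and the comparison of a common period illustrate the first and third observations.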

The algorithm makes the simplifying assumption of imposing a return to equilibrium in finite time, when in theory the convergence is only asymptotic. If the number of simulation periods is too small, this assumption distorts the true dynamics.
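The size of the distortion can be made concrete in a scalar toy model (an illustration, not Dynare's actual algorithm). Take x_t = a x_{t+1} + b x_{t-1} with a stable root λ_s and an unstable root λ_u, |λ_s| < 1 < |λ_u|. After a temporary shock the exact path decays as x_t = A λ_s^t, while imposing the terminal condition x_{T+1} = 0 adds a small unstable component:

```latex
x_t^{(T)} = A\,\lambda_s^{t} + B\,\lambda_u^{t},
\qquad
B = -A\left(\frac{\lambda_s}{\lambda_u}\right)^{T+1}
```

The error in any fixed period t is |B| λ_u^t = |A| λ_s^{T+1} λ_u^{t-T-1}, which vanishes geometrically as T grows. The B λ_u^t term is what bends the path toward zero near the end (the kink), and since B has the opposite sign to A, the truncated path lies below the true one, matching the observation that the longer simulation yields higher values.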

You should choose a number of simulation periods X such that simulations with Y periods (Y > X) don't differ by more than an accuracy you can live with.
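That rule of thumb can be automated in a small sketch (using the same hypothetical toy model as above; the tolerance and starting horizon are arbitrary choices): double the horizon until all periods shared with the previous run agree to the desired accuracy.

```python
import numpy as np

def simulate(periods, a=0.4, b=0.4, shock=1.0):
    """Stacked-time solution of the toy linearized model
    x[t] = a*x[t+1] + b*x[t-1] + e[t], with x[0] = 0 and the
    terminal condition x[periods+1] = 0 imposed by the algorithm."""
    A = np.eye(periods)
    for t in range(1, periods):
        A[t, t - 1] = -b   # lagged term
        A[t - 1, t] = -a   # lead term
    e = np.zeros(periods)
    e[0] = shock           # temporary shock in period 1
    return np.linalg.solve(A, e)

# Double the horizon until the periods common to two consecutive runs
# agree to the chosen accuracy.
tol = 1e-8
X = 10
prev = simulate(X)
while True:
    X *= 2
    cur = simulate(X)
    gap = np.max(np.abs(cur[: len(prev)] - prev))
    if gap < tol:
        break
    prev = cur
print(X, gap)
```

Once the loop stops, further doubling of the horizon changes the common periods by less than `tol`, so X is large enough for that accuracy.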