Steady state vs. mean of simulation (basic question)

Hello community,

I am using stoch_simul() with order=2 to simulate a DSGE model with capital that is written in levels and has some non-linear equations. I noticed that output in the steady state (as reported by Dynare right after you run it) is smaller than the level of output that I get in a simulation without any shocks. By 'simulation without any shocks' I mean that I first set the shock matrix DynareResults.exo_simul in simult.m to zero, so that there are no shock realizations, and then simulate the model using stoch_simul.
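For reference, here is a minimal sketch of what I did (the simulation length T is a placeholder, and the simult_ signature follows the Dynare 4.6+ convention, which may differ in other releases):

```matlab
% After running stoch_simul(order=2, ...) on the .mod file:
T  = 1000;                      % simulation length (placeholder)
ex = zeros(T, M_.exo_nbr);      % zero realizations for all shocks
y0 = oo_.dr.ys;                 % start at the deterministic steady state
% Simulate the order-2 decision rules with no shock realizations
y  = simult_(M_, options_, y0, oo_.dr, ex, options_.order);
```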

In particular, output in the shock-free simulation is about 2 percent greater than output in the steady state. Does this have to do with the fact that households are unaware of shocks in the steady state, but anticipate the possibility of shocks (even when they never occur) in the simulation? In other words, am I comparing the deterministic steady state with the stochastic steady state here, and is the difference caused by precautionary savings or something like that?

Many thanks for any confirmation that I am on the right track, or for any clarifications if I am not 🙂
Simon

Essentially, yes: you are computing the stochastic steady state/ergodic mean in the absence of shocks. In nonlinear models solved at order 2 or higher, it will differ from the deterministic steady state.
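As a rough illustration, continuing from the shock-free simulation sketched above, the gap can be read off by comparing the deterministic steady state with the point the zero-shock path settles at (the variable name 'y' for output is an assumption; this uses the Dynare 4.6+ convention where M_.endo_names is a cell array):

```matlab
iy    = find(strcmp('y', M_.endo_names));  % index of output (name assumed)
y_det = oo_.dr.ys(iy);                     % deterministic steady state
y_sss = y(iy, end);                        % end of the zero-shock path: stochastic steady state
gap   = 100*(y_sss - y_det)/y_det;         % the roughly 2 percent gap you observed
fprintf('Deterministic SS: %g, stochastic SS: %g, gap: %g%%\n', y_det, y_sss, gap);
```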