Policy functions with third order approximation

Hi,

I am running a DSGE model with a third-order approximation (Dynare 4.4.3). After obtaining the policy functions,
I simulate variables with intentionally provided shocks (e.g., a one-time shock at time 0) using the simult_ function.

One problem I encountered is that the simulated series do not converge to their steady-state values.
Even stranger, the simulated series with no shocks provided (as in the deterministic case)
do not even stay at the steady state, although theoretically they should.

After quite a bit of investigation, it seems this problem could be due to the g_0 term in the policy function,
which captures the uncertainty correction.

My question is:
Is there a way to adjust the mean of simulated series to its steady state?
By the way, shifting the simulated series down by the gap between the steady state and the mean of the simulated series does not solve the problem.

Any help would be appreciated, and please let me know if I need to clarify the problem in more detail.

At third order, there is an uncertainty correction. For that reason, simulations without shocks will not converge to the deterministic steady state, but rather to the “stochastic steady state” or “ergodic mean in the absence of shocks”. This is expected behavior. For details, have a look at the Appendix to Born/Pfeifer (2014), “Risk Matters: A Comment”, at aeaweb.org/articles.php?doi=10.1257/aer.104.12.4231, and at the forum thread “Simult_ and nonzero IRFs in higher-order approximations”.
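To see why a constant uncertainty-correction term moves the no-shock fixed point away from the deterministic steady state, here is a minimal, purely illustrative sketch. The scalar policy rule and all coefficients (g0, g1, g2, g3) are made up for illustration and are not taken from any actual model or from Dynare's output:

```python
# Illustrative scalar "policy rule" with an uncertainty-correction constant g0:
#   x_t = xbar + g0 + g1*dev + (1/2)*g2*dev**2 + (1/6)*g3*dev**3,  dev = x_{t-1} - xbar
# All numbers below are hypothetical, chosen only so the iteration is stable.
xbar = 1.0              # deterministic steady state
g0 = 0.02               # uncertainty correction (nonzero at higher order)
g1, g2, g3 = 0.9, 0.1, 0.05

# Iterate the rule with zero shocks, starting from the deterministic steady state.
x = xbar
for _ in range(500):
    dev = x - xbar
    x = xbar + g0 + g1 * dev + 0.5 * g2 * dev**2 + (1 / 6) * g3 * dev**3

# x has converged to a fixed point strictly above xbar: the analogue of the
# "stochastic steady state", not the deterministic one. Simply shifting the
# series by a constant would not make it satisfy the deterministic dynamics.
print(x)
```

This is why a no-shock simulation drifts away from the deterministic steady state and settles somewhere else: the fixed point of the approximated rule is shifted by g0, and the shift feeds through the nonlinear terms as well, so a pure level shift of the series cannot undo it.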

I had just missed the lengthy conversation about this issue in the earlier thread.
Thanks for your help!