Understanding stoch_simul output

I still have some confusion about the output of stoch_simul after reading this post:
https://forum.dynare.org/t/question-about-understanding-irfs-in-dynare/10622/4

I looked at the output of a simple test model. I understand that when I have an already log-linearized model and run perfect_foresight_solver, the output is in percentage deviations from the steady state, which is what the variables in my model already measure in that case.

However, I’m not sure what the output means when I run stoch_simul (order=1 with only a first-period shock). It appears to be approximately the same output that I get with perfect_foresight_solver, but multiplied by a factor of ten.
The post above states that when my model already measures percentages, I will get the percentage deviation from steady state, so I expected to receive the same output as in the perfect_foresight_solver case. Why does it differ by a factor of ten?

test_linear.mod (564 Bytes)
test_nonlinear.mod (1.0 KB)
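
For reference, here is a minimal sketch of the two setups that would produce this pattern; the shock name epsilon, the shock size 0.01, and the simulation length are assumptions, since the attached .mod files are not reproduced here.

// Deterministic experiment: a one-period shock of size 0.01
shocks;
    var epsilon;
    periods 1;
    values 0.01;
end;
perfect_foresight_setup(periods=200);
perfect_foresight_solver;

// Stochastic experiment: epsilon declared with 0.01 in the shocks block
shocks;
    var epsilon = 0.01;
end;
stoch_simul(order=1, irf=40);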

For stoch_simul, Dynare by default shows impulse responses to a one-standard-deviation shock. With var epsilon = 0.01; the 0.01 is a variance, so the implied standard deviation is sqrt(0.01) = 0.1, i.e. ten times the 0.01 shock fed to perfect_foresight_solver. That explains the roughly factor of ten. To match the deterministic experiment, it must be

shocks;
    var epsilon;
    stderr 0.01;
end;

instead of var epsilon = 0.01;.
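
Equivalently, one could keep the var form but supply the variance that corresponds to the intended standard deviation. A minimal sketch, again assuming the shock is named epsilon:

shocks;
    var epsilon = 0.01^2; // variance of 0.0001, i.e. a standard deviation of 0.01
end;

With either specification, the first-period response from stoch_simul should line up (at order=1) with the response to the 0.01 shock from perfect_foresight_solver in the linear model.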