Policy function simplification question

Hello everyone,
Sorry for the basic question; I am fairly new to modeling in a stochastic setting, so I am not sure whether I am allowed to do a transformation of the following form:

Given a reservation cut-off value for a binary decision defined by

$$x_{t}^{*}=\frac{y_{t}\, x_{t-1}^{*}\, x_{t}^{*}+1}{E[y_{t+1}]\, E[x_{t+1}^{*}]\, x_{t}^{*}}$$

where $y_t$ follows the stochastic process $\ln y_t = a \ln y_{t-1} + u_t$ and $u_t$ is normally distributed with mean zero.
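In case it matters for the expectation term: assuming the shock variance is constant at $\sigma^2$ (which I did not state above), my understanding is that lognormality of the shock gives

$$E[y_{t+1}] = \exp\!\left(a \ln y_t + \tfrac{\sigma^2}{2}\right) = y_t^{a}\, e^{\sigma^2/2},$$

so $E[y_{t+1}]$ is a known function of $y_t$.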

Can I simply reformulate my condition as follows?

$$x_{t+1}^{*}=E[x_{t+1}^{*}]=\frac{y_{t}\, x_{t-1}^{*}\, x_{t}^{*}+1}{E[y_{t+1}]\, x_{t}^{*}\, x_{t}^{*}}$$

Starting from a steady state, I could obtain values for $x_{t-1}^{*}$ and $x_{t}^{*}$ since I know the previous realizations of $y$. Is such a step valid, or am I missing something crucial?
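To make concrete what I have in mind, here is a rough sketch of the forward iteration I would run. Everything in it is illustrative: the persistence $a$, the shock standard deviation $\sigma$, the horizon, and the steady-state cutoff are made-up placeholders, and $E[y_{t+1}]$ uses the lognormal formula from above.

```python
import numpy as np

# Rough sketch of the forward iteration; all parameter values are placeholders.
a = 0.9          # persistence of ln y (assumed)
sigma = 0.01     # std. dev. of the normal shock u (assumed)
T = 20           # number of periods to iterate
rng = np.random.default_rng(0)

# Deterministic steady state: with y = E[y] = 1 the cutoff condition
# x* = (x*^2 + 1) / x*^2 becomes x*^3 - x*^2 - 1 = 0.
roots = np.roots([1.0, -1.0, 0.0, -1.0])
x_ss = float(roots[np.abs(roots.imag) < 1e-9][0].real)

# Start from the steady state
ln_y = 0.0                     # ln(1) = 0
x_lag, x_cur = x_ss, x_ss      # x_{t-1}^*, x_t^*

for t in range(T):
    y_cur = np.exp(ln_y)
    # E[y_{t+1}] for the log-AR(1) with normal shocks (lognormal mean)
    Ey_next = y_cur**a * np.exp(sigma**2 / 2)
    # Proposed reformulation:
    # x_{t+1}^* = (y_t x_{t-1}^* x_t^* + 1) / (E[y_{t+1}] x_t^* x_t^*)
    x_next = (y_cur * x_lag * x_cur + 1.0) / (Ey_next * x_cur * x_cur)
    # Move one period forward and draw the next shock
    x_lag, x_cur = x_cur, x_next
    ln_y = a * ln_y + rng.normal(0.0, sigma)

print(x_cur)
```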

Thank you for your time.
Best,
Mirjam

Edit: Format

Could you please use the LaTeX capabilities to make the above formulas readable? Use e.g. $y_t^*$ to get y_t^*

Of course, thank you for the hint. I did try at first but left out the “$”.

If the TeXed equations are really what you intended, then the above transformation is valid. Everything on the right-hand side is dated time $t$ because of the conditional expectations, so you are not erroneously dividing within an expected value.

However, I do not understand the part where you set $x_{t+1}^{*}=E[x_{t+1}^{*}]$: why does the expected value drop?