I have a similar issue. Here is what I'm doing; let me know if there is a more efficient way to do it.
In particular, I'm testing my DSGE model in real-life forecasting in two settings: 1) an automated setup where the data and short-term forecasts are fed into the DSGE, which then produces a medium-term forecast, and 2) a platform where experts can enter their judgemental forecasts and see what the model spits out. To my understanding, the
conditional_forecast command is neither suitable nor convenient in either case: not only does one have to specify an exogenous variable that drives each conditioned endogenous variable (which is not easy to do, e.g., for a short-term forecast of domestic GDP), but it is also unclear how to automate the forecasting process and allow different conditioning horizons for different variables.
Instead, one can use the Kalman smoother to fill in the remaining variables and horizons (judgement horizons can differ across variables). However, because I'm using measurement errors to get reasonable forecasting performance, those measurement errors are also applied to the judgemental/short-term forecasts. Applying measurement errors to the forecasts may be logical (forecasts are prone to error), but it is sometimes undesirable if one wants to pin a variable down exactly, see its full effect on the remaining variables, and preserve the national-accounts identity (Y = C + I + G + NX).
For that reason, I'm using a "shadow variable" without measurement error that is observed only over the forecast horizon (its historical values are NaNs), while the same variable with measurement error is observed over both history and the forecast horizon. Both versions enter as observables. This way I can impose measurement-error-free forecasts while still allowing for measurement error in the past.
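In case it helps to make the setup concrete, here is a minimal sketch of the measurement block I mean, in Dynare .mod syntax. All names (y, y_obs, y_shadow, me_y) are illustrative, and the stderr value is a placeholder; the structural model itself is omitted:

```
var y y_obs y_shadow;   // y is the model variable; y_obs and y_shadow are observables
varexo e_y me_y;        // e_y: structural shock; me_y: measurement-error shock

model;
  // ... structural equations for y ...

  // Measurement equations:
  y_obs    = y + me_y;  // noisy observable; data available for the full sample
  y_shadow = y;         // error-free observable; data are NaN over history and
                        // equal to the judgemental forecast over the conditioning horizon
end;

varobs y_obs y_shadow;

shocks;
  var me_y; stderr 0.1; // illustrative measurement-error size
end;
```

In the data file, y_shadow is NaN for all historical periods and takes the judgemental values only where a condition is imposed; since the smoother treats NaNs as missing observations, this lets the conditioning horizon differ across variables while history is still fit through the noisy observable.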