Shock Size in Deterministic Models

Hi all,

I am trying to perform an exercise to test the effect of a government subsidy during a recession. I want to see what happens if I increase the TFP shock size while keeping the government subsidy shock size fixed. For example, with a government subsidy shock size of 0.01 and
a) TFP shock size = - 0.01
b) TFP shock size = -0.03 (or larger in absolute value)
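For reference, a deterministic two-shock setup along these lines would be declared in a shocks block roughly as follows (a minimal sketch; the shock names eps_s and eps_a are assumptions, not taken from the posted .mod files):

```
shocks;
  var eps_s;      // government subsidy shock (name assumed)
  periods 1;
  values 0.01;
  var eps_a;      // TFP shock (name assumed)
  periods 1;
  values -0.01;   // case (a); set to -0.03 for case (b)
end;
```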

The problem is that whenever I use a TFP shock size larger than 0.011 (in absolute value), the solver fails and no perfect foresight solution can be found.

I am using a modified New Keynesian model, i.e., a labor selection model with endogenous hiring and firing, and I am solving it as a deterministic model. Workers face heterogeneous operating costs that follow a logistic probability distribution over the real line (-inf, +inf).

My guess is that the solver fails because of the value of the standard deviation parameter of that distribution.

Any suggestions on how I can fix this?


Without the code it is impossible to tell.

Here are the relevant files.

fc25_ua.mod (8.4 KB) This is the model only with TFP shock
loadMat.m (693 Bytes)
Values.csv (1.8 KB)
fc25a_ups.mod (9.2 KB) This is the model with both shocks

Did you check whether the solution shows anything suspicious as you move closer to the shock values that fail? Maybe you are hitting something infeasible.
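One way to do that check is to sweep the TFP shock size toward the failing region via Dynare's macro processor, so you don't have to edit the .mod file by hand each time. A sketch, assuming the file and shock names from above (the macro variable TFP_SHOCK and the shock name eps_a are hypothetical):

```
// at the top of fc25a_ups.mod
@#ifndef TFP_SHOCK
  @#define TFP_SHOCK = -0.01
@#endif

shocks;
  var eps_a;              // TFP shock (name assumed)
  periods 1;
  values (@{TFP_SHOCK});
end;
```

You can then call, e.g., `dynare fc25a_ups -DTFP_SHOCK=-0.012` from the MATLAB command line, stepping the value from -0.011 toward -0.03 and inspecting the simulated paths just before the solver starts failing.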