Issue with Perfect Foresight Solver in Basic Pissarides Model

Hello everyone,

I am working on the basic Pissarides search and matching model (Chapter 1 from his book) and trying to compute the transition path from one steady state to another using the perfect foresight solver. My goal is to simulate the path between steady states when there is a deterministic shock to productivity.

I can successfully compute the steady states using an external function, and I can plot the path between steady states with the perfect foresight solver as long as the productivity shock is small. However, once the shock exceeds a certain size, the solver fails to compute the transition path, even though I still find the new steady state.

I’ve already tried extending the number of periods in the simulation, but the solver still does not converge. My understanding was that with a basic model like the one I am using, the solver should be able to handle this task. Are there any adjustments I can make to the solver settings to improve convergence, or is it possible that my model setup itself is causing the issue?
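
For concreteness, the deterministic simulation boils down to the two standard Dynare commands below; the number of periods shown is only an illustrative value, not my actual setting.

```
perfect_foresight_setup(periods=500);   // number of simulation periods (already tried increasing this)
perfect_foresight_solver;               // currently called with its default options
```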

I’m attaching the .mod file along with the external function I’m using to compute the steady state. Any guidance or suggestions on what might be going wrong or how to make the model more robust to these kinds of issues would be greatly appreciated.

Best regards,

example_PissaridesII.mod (1.7 KB)
example_PissaridesII_steadystate.m (3.2 KB)

The permanent shock size of 50 percent seems to be the issue. That is a very large shock that shifts the steady state dramatically and apparently causes numerical issues. TFP goes from Z=1 to Z=148.413. In steady state, you have
\ln Z=\rho \ln Z +\varepsilon
which means the new steady state is
Z=e^{\varepsilon/(1-\rho)}.
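For instance, with \varepsilon=0.5 and a persistence of \rho=0.9 (the value consistent with the number reported above), this gives
Z=e^{0.5/(1-0.9)}=e^{5}\approx 148.41,
so TFP increases roughly 148-fold rather than by 50 percent.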
Is that really what you intended?

Hi Professor Pfeifer,

Thank you very much for your response and for pointing out the issue with the shock size. Indeed, such a large shock was not intended for practical purposes, but rather as part of my attempt to explore the model’s behavior under extreme conditions and understand when it might encounter numerical difficulties.

It seems that while the new steady state is identified, the perfect foresight solver struggles to compute the transition path when the shock is too large.

My broader goal is to work with a more complex model, but I wanted to first test scenarios in a simpler environment, starting with the basic model from Pissarides. My thinking was that if the approach does not work well in this simpler setting, it is likely to run into similar issues in the more complex model, even with more reasonable shocks.

Therefore I have a few follow-up questions that would help clarify my approach:

  1. Is there a way to tackle this problem, i.e., to find a transition path even when the economy receives an unusually large (perhaps unrealistic) shock? Are there limitations?

  2. If I use a stochastic shock of similar magnitude, and the model finds a path back to the original steady state, does that imply that it should also be able to find the path to the new steady state in the deterministic case? If not, what might cause this difference?

I appreciate your feedback and guidance, and I look forward to learning more as I refine my approach.

Best regards

  1. For very large shocks, you may encounter numerical over-/underflow issues. It sometimes helps to express everything in logs to compress the actual numbers (a minimal sketch follows below).
  2. stoch_simul works with a linear approximation. Here, the shock size does not matter.
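
Regarding point 1, here is a minimal sketch of the log substitution with generic variable names and an illustrative calibration (y, z, eps_z, rho, alpha are placeholders, not the names in the attached .mod file). Writing the TFP process in logs keeps the simulated numbers moderate: the terminal steady state is z = 5 instead of Z = 148.413.

```
// Sketch only: generic two-equation model in logs, not the attached Pissarides model.
var y z;                       // z = log of TFP
varexo eps_z;
parameters rho alpha;
rho   = 0.9;
alpha = 0.3;

model;
  y = exp(z)^alpha;            // use exp(z) wherever the level of TFP appeared
  z = rho*z(-1) + eps_z;       // AR(1) written directly in logs
end;

initval;
  eps_z = 0;
  z = 0;
  y = 1;
end;
steady;

endval;
  eps_z = 0.5;                 // permanent innovation of 0.5
  z = 0.5/(1-rho);             // terminal steady state in logs (= 5)
  y = exp(0.5/(1-rho))^alpha;
end;
steady;

perfect_foresight_setup(periods=300);
perfect_foresight_solver;
```

The same substitution can be applied to any strictly positive variable that takes on very large values.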

Thank you very much, Professor Pfeifer. Your comments were extremely helpful. I continued developing the model I had in mind, which includes two types of workers, M and N, and two skill markets: low (L) and high (H) qualification.

In this model, there is a household with logarithmic preferences, and unlike standard search and matching models, I incorporate a fixed cost K of starting to post vacancies in a given market (L and H). Therefore, in the steady state, the value of an open vacancy is not zero but equals the fixed cost K.

I have structurally calibrated the model and use an external function to find the steady state.

Unfortunately, when I apply a deterministic productivity shock, the perfect foresight simulation converges only if the shock is very small, even though I do find the steady state after the shock. What could be going wrong? I understand that this model adds new elements, and I am concerned that some of them might be causing the failure.

I would appreciate any help or suggestions on how to resolve this issue.

I’m attaching the .mod file along with the external function I’m using to compute the steady state.

Best regards

example_PissaridesIV_steadystate.m (3.3 KB)
example_PissaridesIV.mod (10.7 KB)
solve_SS_HL_MN.m (3.0 KB)

The stst_guess.mat is missing.

Yes, sorry for the inconvenience. Here is the missing file.

stst_guess.mat (1.7 KB)

It seems that even tiny shocks already create large numerical differences in some variables. Have you tried using the logs of the larger variables or introducing normalizing constants?

Thank you, Professor Pfeifer.

Could you clarify which specific variables might be considered “larger” in this context? I am not sure whether you are referring to variables with particularly high values or to something else.

Regarding normalizing constants, I have not implemented any yet. The only relevant constants are the fixed costs; should I consider normalizing these? The model already normalizes the price index to 1, so would it still be possible to normalize the fixed costs on top of that?

Thank you again for your help!

I was talking about the orders-of-magnitude differences in

i       		 0.010101
W_HM    		 205.647
W_HN    		 205.647
W_LM    		 374.746
W_LN    		 187.735
U_HM    		 202.569
U_HN    		 202.569
U_LM    		 365.108
U_LN    		 182.889
C       		 0.0441079

The value functions in particular can easily be rescaled. Utility is ordinal, so scaling by e.g. 1/1000 should not affect anything in economic terms but may facilitate the numerics.
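
Schematically, for a generic worker Bellman equation (w is the flow wage, s the separation rate, \beta the discount factor; this is not necessarily the exact equation in your .mod file),
W_t = w_t + \beta\left[(1-s)W_{t+1} + s\,U_{t+1}\right]
becomes, after dividing by 1000 and defining \tilde W \equiv W/1000 and \tilde U \equiv U/1000,
\tilde W_t = \frac{w_t}{1000} + \beta\left[(1-s)\tilde W_{t+1} + s\,\tilde U_{t+1}\right],
which is the same equation with all value-function variables at a much smaller order of magnitude.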