Estimation in sub-samples

Hi.
I would like to estimate a model in which some parameters change over 2 sub-samples.
Here’s an example.

y = 0.9*E(y(+1)) + alphaa2*r + eps_y;
r = gamma*y + eps_r;

I would like to estimate alphaa2, which should be the same over the whole sample, and gamma, which takes one value in the first half of the sample and another in the second half.
I could think of some approaches:

  1. First step: estimate (alphaa2, gamma) over the whole sample using ML.
    Second step: estimate again in each sub-sample (using ML), fixing alphaa2 at the value found before and estimating only gamma in each sub-sample.

  2. First step: estimate (alphaa2, gamma) over the whole sample using ML.
    Second step: estimate again in each sub-sample using a Bayesian approach, fixing alphaa2 as in (1). For gamma I could use the value found in the ML estimation as the mean/mode of my prior distribution.

  3. Do the same as (1), but using a Bayesian approach for each estimation.

  4. Introduce an observed dummy variable (dum, as in the attached mod file) and estimate everything at once (maybe using varexo_det); a rough mod-file sketch follows the equations below:

y = 0.9*E(y(+1)) + alphaa2*r + eps_y;
r = (gamma1 + gamma2*dum)*y + eps_r;
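
Roughly, what I have in mind for (4) is a mod file along these lines (the parameter values are placeholders; since gamma2*dum*y is a product of variables, I do not declare the model block as linear):

var y r;
varexo eps_y eps_r;
varexo_det dum;                     // observed dummy: 0 in the first half of the sample, 1 in the second
parameters alphaa2 gamma1 gamma2;

alphaa2 = -0.5;                     // placeholder calibration
gamma1  = 0.3;
gamma2  = 0.1;

model;
y = 0.9*y(+1) + alphaa2*r + eps_y;
r = (gamma1 + gamma2*dum)*y + eps_r;
end;

// shocks, varobs, estimated_params and estimation(...) as in the attached toym.mod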

Here are some problems/thoughts I can think of for each approach.

  1. I can’t see any problem (is there one?). A rough Dynare sketch of the second step is given right after this list.

  2. The Bayesian priors should not be obtained from the same data.

  3. Since I am estimating with Bayesian methods on the whole sample, it seems odd to me to collapse the distribution of alphaa2 to a point in the second step, but I am not sure whether that is really an issue (beyond using the same data to form a prior).
    Another point is that I would like to update the distribution of gamma found in the first step. I asked a question about this a while ago, and I don’t think it is feasible in Dynare, am I right?

  4. I am declaring the model as nonlinear; is that a problem?
    I think this approach works fine for ML. If the structural change is unexpected, can I estimate it using a Bayesian approach?
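
For concreteness, here is a rough sketch of the second step of (1) in Dynare, as I understand the first_obs/nobs options (the numbers, bounds and the 50/50 sample split are placeholders; for (2)/(3) the ML line for gamma would be replaced by a prior centered at the step-1 estimate):

var y r;
varexo eps_y eps_r;
parameters alphaa2 gamma;

alphaa2 = -0.5;                          // fixed at the step-1 full-sample estimate (placeholder number)
gamma   = 0.3;                           // initial value only, re-estimated below

model(linear);
y = 0.9*y(+1) + alphaa2*r + eps_y;
r = gamma*y + eps_r;
end;

shocks;
var eps_y; stderr 0.01;
var eps_r; stderr 0.01;
end;

varobs y r;

estimated_params;                        // alphaa2 is left out, so it stays at the calibrated value above
gamma, 0.3, -2, 2;                       // ML syntax: initial value, lower and upper bound
// gamma, normal_pdf, 0.3, 0.1;          // Bayesian variant as in (2)/(3): prior centered at the step-1 estimate
stderr eps_y, 0.01, 0.001, 1;
stderr eps_r, 0.01, 0.001, 1;
end;

// first half of the sample (assuming 100 usable observations in the attached data_toym.xlsx)
estimation(datafile=data_toym, first_obs=1, nobs=50, mode_compute=4, mh_replic=0);

// second half of the sample
estimation(datafile=data_toym, first_obs=51, nobs=50, mode_compute=4, mh_replic=0);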

Would anyone care to comment on this? Am I missing something?

data_toym.xlsx (11.5 KB)
toym.mod (1.0 KB)

  1. The dummy approach will not work. See Structural break in Bayesian estimation and DYNARE syntaxis
  2. The correct way would be as in

None of the other approaches will deliver the full joint density.

Thank you for your prompt response, professor Pfeifer.
I didn’t understand some things:

  1. I couldn’t follow Michel’s suggestion on this. What does “Then have X and the extra variable acting additively in the y equation” mean? I supposed that the extra variable in that example is the dummy times x; this is what I tried in the attached mod file (a rough sketch follows this list).

  2. I understand that the other approaches won’t deliver the full joint density, but I would like to know whether there is something actually incorrect about them, even though they are not the desirable way.
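
In terms of my toy model, the reading I tried in the attached toym4.mod is roughly the following (the names z, gamma1 and gamma2 are my own, and this may well not be what was meant): build an extra observed series z = dum*y in the data file and let it enter the rate equation additively,

r = gamma1*y + gamma2*z + eps_r;

so that the break only shows up through the coefficient on z.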

Thank you again.

toym4.mod (1.7 KB)
data_toym4.xlsx (12.6 KB)

  1. The point from Michel is that nonlinear setups like the one you envisioned will not survive linearization. It’s far from clear whether a linear alternative exists.
  2. The big problem is: the density of alphaa2 will depend on gamma and vice versa. By fixing alphaa2 at a value that conditions on there being no break in gamma, you are neglecting that dependence. It does not matter whether you are doing ML or Bayesian estimation; the same problem remains (see the summary right below).
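
To put it compactly (a is shorthand for alphaa2):

p(a, gamma1, gamma2 | Y)   — the joint density you are ultimately after
p(gamma1 | Y_first, a = a_hat) and p(gamma2 | Y_second, a = a_hat)   — what the two-step schemes deliver, with a_hat estimated under the restriction gamma1 = gamma2

Fixing a at a_hat both discards the uncertainty about a and conditions on the no-break restriction, so the dependence between a and the gammas is lost.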

Thank you, professor Pfeifer.
Everything’s clear for me now.