Here is the problem:
I have 2 models (model A and model B) and 4 subsamples (2015-1955; 2015-2007; 2007-1985; 1985-1955).
I run Bayesian estimations for these 2 models over each period (mode_compute=6). All the estimations are OK and the acceptance rates are good.
I find that the marginal density for model B is better than that for model A over 2015-2007, 2007-1985, and 1985-1955.
But over the full sample, 2015-1955, the marginal density for model A is better than that for model B.

How can I justify such a result?

Could such a result come from demeaning à la Smets and Wouters (2007), which I follow?
Thank you for your answers.
Best,
Jonathan
Are the priors kept the same? And how big are the differences? Do the Laplace approximation and the modified harmonic mean estimator deliver the same consistent picture?
Hi Johannes,
Are the priors kept the same?
Yes
And how big are the differences?
Big (around 20)
Do the Laplace approximation and the modified harmonic mean estimator deliver the same consistent picture?
Yes
That is strange. I thought the log marginal data density of two subsamples together is (almost) equal to the sum of the log marginal data densities of the subsamples, as you can write the MDD p(Y_{1:T}|M) as a product of conditional densities using the prediction decomposition. The main difference would be the conditioning at the beginning of the second sample.
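In LaTeX, the prediction decomposition just mentioned, with the sample split at some date T_1, reads:

```latex
p(Y_{1:T} \mid M) = \prod_{t=1}^{T} p(y_t \mid Y_{1:t-1}, M),
\qquad \text{so} \qquad
\log p(Y_{1:T} \mid M)
  = \log p(Y_{1:T_1} \mid M)
  + \log p(Y_{T_1+1:T} \mid Y_{1:T_1}, M).
```

The second term differs from the stand-alone subsample MDD \log p(Y_{T_1+1:T} \mid M) only through the conditioning on the first subsample.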
But this would rule out what you observe. So I have to think more about this.
I don’t think detrending as in SW2007 is the reason. In their model, the data used is the same across samples, as the constant terms are estimated. You would be right if they had used data that was detrended for each sample individually.
Hi Johannes,
thank you for your answer.
What do you mean exactly when you say: "You would be right if they had used data that was detrended for each sample individually"?
Do you mean that if I demeaned only with the average of the considered subsample (and not with the average of the overall sample), this could explain the picture I have?
If I understand correctly, and even if it is not made explicit in SW2007, SW used subsamples demeaned with the average of the overall sample.
I thought that, if I want to analyze, for instance, the subsample 1985-2007, I should not use data from outside this subsample (and therefore also no averages computed with data from outside this subsample).
Another remark: because I used a subsample-specific detrending, could the use of different reference points (for instance, base 100 in Oct. 1997 for estimating 1985-2007; base 100 in Jul. 2010 for estimating 2007-2015, etc.) also affect the marginal densities?
What is the right practice, in terms of data transformations, to analyze different subsamples without being affected by data from outside the considered subsample?
When you demean the individual samples with their respective means, you are changing the data/model from the subsamples to the full sample. Say the full sample is 40 years with an average growth rate of 2%, with 1% growth in the first 20 years and 3% in the second 20 years (forget about compounding issues). Concatenating the demeaned first 20 observations and the demeaned last 20 observations will not give the same series as demeaning the full sample with 2%.
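A minimal numerical sketch of this point (illustrative Python, not Dynare code), using the stylized 1%/3% example:

```python
import numpy as np

# 40 "years" of growth rates: 1% in the first half, 3% in the second,
# so the full-sample mean is 2%.
g = np.concatenate([np.full(20, 1.0), np.full(20, 3.0)])

# Demean each subsample with its own mean ...
sub_demeaned = np.concatenate([g[:20] - g[:20].mean(),
                               g[20:] - g[20:].mean()])

# ... versus demeaning the full sample once with the 2% full-sample mean.
full_demeaned = g - g.mean()

print(sub_demeaned[:3])   # [0. 0. 0.]
print(full_demeaned[:3])  # [-1. -1. -1.]

# The concatenated subsample-demeaned series is not the same dataset
# as the full-sample-demeaned series.
print(np.allclose(sub_demeaned, full_demeaned))  # False
```

Since the two resulting series differ, the full-sample estimation and the subsample estimations are effectively run on different data, which breaks the additivity of the log marginal data densities.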
Another way to see this is to realize that it can be interpreted as estimating different models in which the mean of the observables is a parameter that differs across samples (1% in the first subsample model, 3% in the second subsample model, and 2% in the full sample model). As this parameter differs for each subsample, the corresponding model differs. It would be different if you estimated the mean growth rate and kept the prior the same across models.
Different reference points are an issue only when they change the data, as you are trying to compare the capability of different models in explaining the SAME data. If you use growth rates, any transformation that leaves these growth rates unchanged is therefore not a problem.
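To see why rebasing is harmless when you work with growth rates, here is a small sketch (illustrative Python with made-up index numbers): rebasing an index is a pure rescaling, so log growth rates are unchanged.

```python
import numpy as np

# A quarterly index with base 100 at observation 0 (made-up numbers).
index = np.array([100.0, 102.0, 105.0, 103.0])

# Rebase so that observation 2 is 100 instead (e.g. a different base date).
rebased = index / index[2] * 100.0

# Log growth rates before and after rebasing.
growth = np.diff(np.log(index))
growth_rebased = np.diff(np.log(rebased))

# Rescaling by a constant cancels out in log differences.
print(np.allclose(growth, growth_rebased))  # True
```

So base-100-in-Oct-1997 versus base-100-in-Jul-2010 does not matter as long as the observables entering the estimation are growth rates; it would matter if the (differently scaled) levels themselves were used as data.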