Dear Johannes,
In DSGE model estimation, observable variables such as output (Y) and consumption (C) are taken in per capita terms, not in total form. Is that just because we are interested in per capita output or consumption growth instead of the totals?
Currently I am looking at the Born, Peter & Pfeifer (2013) model for data issues; in the model description I do not see anything that hints at per capita variables instead of totals.

Second, from the guide to observation equations, I understand that if the interest rate series is in percentage points (net), I need to add 1 to turn it into a gross interest rate if R is a gross rate in the model. In the attached FEDFUNDS series from FRED, the unit is percent. So I should not add 1, right? To convert it to quarterly frequency, do I just divide it by 4 or take the geometric mean? Is that correct? Later I de-mean the series.

First: When not doing explicit detrending, we typically set up our models abstracting from population growth. Our representative household then is one that is not growing over time. To match this to the data, the data must be purged of effects due to population growth, which the model cannot replicate. Often this is left implicit in the model description. That is why our JEDC paper does not explicitly mention it, but you need to keep it in mind when going to the data.

Second: when your interest rate is in the form of 5 meaning 5 percent, your transformation should be something like 1+(FEDFUNDS/400) to get a gross quarterly interest rate. If you match this to the log of the gross nominal interest rate in your model, you need to take the log as well.
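As a sketch of this transformation (in Python; the series values below are made-up placeholders, not actual FEDFUNDS data):

```python
import math

# Annualized net interest rate in percent, e.g. 5 means 5 percent
# (placeholder values, not actual FEDFUNDS observations)
fedfunds = [5.0, 4.75, 4.5]

# Gross quarterly interest rate: 1 + (FEDFUNDS/400)
gross_quarterly = [1 + r / 400 for r in fedfunds]

# If the model variable is the log of the gross nominal rate,
# take logs of the transformed series as well
log_gross_quarterly = [math.log(g) for g in gross_quarterly]

print(gross_quarterly[0])  # 1.0125
```

Dividing the percent rate by 400 combines the percent-to-decimal conversion (divide by 100) and the annual-to-quarterly conversion (divide by 4) in one step.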

I am trying to check my understanding of the Dynare output generated after estimation. In the theoretical moments stored in oo_, I understand that the correlation field gives autocorrelations (the default is orders 1 to 5?) and cross-correlations between the variables specified during estimation. So these are the autocorrelations of model-generated data that can be compared to the autocorrelations of actual data. Do we read oo_.PosteriorTheoreticalMoments.dsge.correlation.Mean.var1.var1 as an autocorrelation and oo_.PosteriorTheoreticalMoments.dsge.correlation.Mean.var1.var2 as a cross-correlation?

How about getting the standard deviation of model-generated data? Is it in the covariance field, where we see a matrix like oo_.PosteriorTheoreticalMoments.dsge.covariance.Mean.var1.var1? And what does the variance in oo_.PosteriorTheoreticalMoments.dsge.correlation.Variance.var1 say?
Apologies if these questions are too basic to ask here. I am posting this after a little searching in past forum discussions. Thanks in advance for any help.

oo_.PosteriorTheoreticalMoments.dsge.correlation stores the correlations. The next field after this says which moment of the distribution of these objects you are considering. For example,

oo_.PosteriorTheoreticalMoments.dsge.correlation.Variance.var1

is the variance of the autocorrelations of var1 over the MCMC draws, i.e. a measure of the uncertainty about the autocorrelation statistic.

Thank you, Johannes, for your reply. I still have questions and would be grateful for your comments on the following:

Originally, after one-sided HP-filtering the historical data in E-views, I checked the autocorrelations and standard deviations there. For example, autocorr(1) for Y_obs was 0.90.
Later, after estimation, I collected the filtered variables from the estimation results and put them into E-views to see the standard deviations and correlograms (which give autocorrelations (AC) and partial ACs) and to match them with those in the theoretical moments. For Y, autocorr(1) appears as 0.88 in E-views, but in oo_ the mean first-order autocorrelation is 0.88. If

oo_.PosteriorTheoreticalMoments.dsge.correlation.Mean.var1.var1

generates the autocorrelation coefficient for var1, then why are they different?

The filtered variables and the 1-step-ahead predictions contain the same values.

This point is not about Dynare, but any comment would be highly appreciated. I do not clearly understand why the hourly wage data for the US nonfarm business sector appear so different in the FRED database (series ID: COMPRNFB) and in the BLS series PRS85006103. The FRED series gives 'Nonfarm Business Sector: Real Compensation Per Hour', and its description says all historical data were revised, while the BLS series gives the original data values (in nominal terms).
In the first source, for instance, the 1961Q1 hourly real wage index is 53.258, while in the second the hourly nominal wage index is 8.12. Apart from the real/nominal issue, what should be considered when taking wage data into our DSGE model in such a case?

I am not following. Which objects are you comparing? What is the unexpected difference? If you are comparing theoretical moments and data moments: Bayesian estimation is not about moment matching. Unless you have a perfect fit, there will be differences.

If you look into the manual, you will see that they are the same concept.

This is all about inflation. That should explain the difference. In the base year, both are equal. When taking wage data to the model, it is often recommended to allow for measurement error.

I understand 2 and 3. Thank you.
For 1, what I was saying is that the moments of model-generated data in Dynare and E-views appear different. I am not comparing actual-data moments with model-generated-data moments; rather, it is the moments of the 1-step-ahead forecast data in the two programs. I took the 1-step-ahead forecast data to E-views to get the standard deviation and later checked it against the theoretical moments.

Measurement error in Listing 12 of the guide to observation equations is a special form of structural shock that appears only in the respective observation equation. Since we do not include any process for this shock in the model equations, can we observe more variables based on the number of measurement errors? For example, if I have five shocks and five observables, and I specify that I observe one of these variables with measurement error (as in Listing 12), can I then add another observable? What implications would this have for identification?

It seems you are confusing something. If you take the 1-step-ahead forecast data to E-views, you are computing the moments over the time dimension, given the mean forecasts over draws. What Dynare provides is the standard deviation/uncertainty about the forecast for a given time t. That is, you are looking at very different objects. That is why E-views provides only one number while Dynare provides a vector.
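The distinction can be sketched numerically (Python, with made-up draws standing in for posterior forecast draws):

```python
import random
import statistics

random.seed(0)

# Hypothetical posterior draws of a 1-step-ahead forecast path:
# n_draws forecast paths, each T periods long (made-up numbers)
n_draws, T = 500, 8
draws = [[random.gauss(0.5 * t, 1.0) for t in range(T)]
         for _ in range(n_draws)]

# The E-views computation: take the mean forecast path, then one
# standard deviation over the time dimension -> a single number
mean_path = [statistics.mean(d[t] for d in draws) for t in range(T)]
sd_over_time = statistics.stdev(mean_path)

# The Dynare-style object: for each period t, the standard deviation
# across draws -> a vector of length T (forecast uncertainty at each t)
sd_over_draws = [statistics.stdev([d[t] for d in draws]) for t in range(T)]

print(sd_over_time)        # one number (variation along the path)
print(len(sd_over_draws))  # T numbers (uncertainty per period)
```

The single number and the vector answer different questions, which is why they cannot be matched against each other.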

In general, the restriction for avoiding stochastic singularity is that you can have at most as many observables as shocks, where shocks comprise both structural shocks and measurement errors. In your example, you could have another observable. But an additional restriction is that none of these shocks may be perfectly collinear, i.e. they must be separately identified. For example, say you have an observation equation of the form

y_obs = y + e_1 + e_2

and both shocks only enter in this equation. In this case, there is no way to separate the two and you cannot add another observable.
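A toy numerical illustration of this collinearity (Python, hypothetical variance numbers): when two independent shocks enter only through a single equation, the data pin down only the sum of their variances, so different variance pairs are observationally equivalent.

```python
# Suppose an observation equation of the form y_obs = y + e1 + e2,
# where e1 and e2 appear nowhere else in the model. With independent
# shocks, the observable variance depends only on sigma1^2 + sigma2^2.
def implied_obs_variance(sigma1_sq, sigma2_sq, var_y=0.0):
    # Var(y_obs) = Var(y) + sigma1^2 + sigma2^2
    return var_y + sigma1_sq + sigma2_sq

# Two very different parameterizations...
v_a = implied_obs_variance(0.75, 0.25)
v_b = implied_obs_variance(0.5, 0.5)

# ...imply exactly the same observable variance:
# the two shock variances are not separately identified.
print(v_a, v_b)  # 1.0 1.0
```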

Thank you for your reply. Could an example in my case be: if I have a wage markup shock and add a measurement error on wages, then the two shocks will be perfectly collinear and hence not separately identifiable?

Looks like I need to understand more about the theoretical moments.
As I understand it, oo_.PosteriorTheoreticalMoments.dsge.correlation.Mean.x.x gives autocorrelations and oo_…x.y gives cross-correlations between x and y. In my results, oo_.PosteriorTheoreticalMoments.dsge.correlation.Mean.x.y and
oo_.PosteriorTheoreticalMoments.dsge.correlation.Mean.y.x give me different values. What is the reason for these different cross-corr(x,y) values? I have seen an old post on this, but there was no answer there.

Second, in oo_.PosteriorTheoreticalMoments.dsge.covariance.Mean.x.x I have a value of 16.098. Why does the variance look so large? Is it because the data series were scaled by 100?

The reason is that the correlations are the cross-correlations starting at lag 1. Thus, they are not symmetric: cor(x_t, y_{t-1}) is not the same as cor(y_t, x_{t-1}), as the latter would be cor(x_{t+1}, y_t).
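The asymmetry is easy to verify numerically (Python, with a made-up pair of series in which x leads y by one period):

```python
# Lag-1 cross-correlations are not symmetric: corr(x_t, y_{t-1}) need
# not equal corr(y_t, x_{t-1}). Illustrated with constructed series.
def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

x = [0.0, 1.0, 0.0, -1.0] * 3   # made-up cyclical series
y = [0.0] + x[:-1]              # y_t = x_{t-1}: x leads y

corr_y_xlag = corr(y[1:], x[:-1])  # corr(y_t, x_{t-1}): equals 1 here
corr_x_ylag = corr(x[1:], y[:-1])  # corr(x_t, y_{t-1}): negative here

print(corr_y_xlag, corr_x_ylag)
```

Because y is just x shifted by one period, corr(y_t, x_{t-1}) is perfect while corr(x_t, y_{t-1}) is effectively a two-period autocorrelation of x, which is negative for this cycle, so the two lag-1 cross-correlations differ sharply.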

Regarding the large variance: yes, if you scaled by 100, this is the reason.
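For intuition (Python, made-up numbers, not the actual series behind the 16.098): scaling a series by 100 scales its variance by 100^2, so a standard deviation of about 0.04 in decimal units corresponds to a variance of about 16 in percent units.

```python
# Made-up series in decimal units (e.g. 0.03 = 3 percent)
data = [0.03, -0.05, 0.04, -0.02, 0.01, -0.01]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

v_decimal = variance(data)
v_percent = variance([100 * x for x in data])  # series scaled by 100

# Scaling by 100 multiplies the variance by 100**2 = 10000
print(v_percent / v_decimal)  # 10000.0
```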

I have tested identification of my model, and part of the result in the command window is as follows:

==== Identification analysis ====

Testing prior mean

All parameters are identified in the model (rank of H).

All parameters are identified by J moments (rank of J)

Monte Carlo Testing

Testing MC sample

All parameters are identified in the model (rank of H).

All parameters are identified by J moments (rank of J)

==== Identification analysis completed ====

Is it okay to run the identification test before estimation and then continue with the same command? Or is it still recommended to comment out 'identification' once the test is done and then start estimation? I think I have seen this recommendation somewhere. I am using version 4.4.3, and so far the results look okay. For interpreting the identification results, I am following Ratto (2011), as you referenced.

I would still recommend commenting out the identification command. It may happen that identification changes some options before the estimation command, resulting in unexpected occurrences. In particular, identification will usually set