Theoretical Questions Related to Bayesian Estimation


The purpose of this thread is to (hopefully) compile in one place all theory-related questions about Bayesian estimation. Dynare does a lot of the work in the background, which is both convenient and dangerous. So if you have a theory-related question about your work, please post it here.

Here are a couple of general references to start with:

Chapter 9 of the Handbook of Macroeconomics: Solution and Estimation Methods for DSGE Models by Fernandez-Villaverde, Rubio-Ramirez and Schorfheide

“The Econometrics of DSGE Models” by Fernandez-Villaverde


I have a specific question related to the choice of observable variables and how to pick the additional shocks needed to identify them. For instance, I have a New Keynesian model with financial frictions and use the same 7 standard observables (and shocks) as in Smets and Wouters (2007). I also have data on 2 financial variables and would like to include data on bond holdings as well (bonds are not neutral in my model). However, I am not sure whether that would make sense within the scope of the estimation. Also, how does one generally choose the shocks in a model?



Generally, there is no good guidance. It is mostly trial and error. See also the section on selecting observables in Pfeifer (2013): “A Guide to Specifying Observation Equations for the Estimation of DSGE Models”


Thanks Johannes. I wasn’t aware you had written such an extensive guide yourself. Another paper I found on choosing variables is Canova et al. (2014).


On a separate note, suppose one wants to introduce a new observable into a model. How does one also introduce the additional shock into the system? Should the shock be related somehow to the observable (or what might be affecting it)?


Usually, theory will guide you. It is a matter of debate whether full information estimation is sensible if you need to add a new structural shock for each variable you want to consider. One obvious way out is to assume measurement error.


Hi Johannes,

Are you aware of any papers (apart from Leeper et al. (2010) that you cite in your estimation guide) that use government debt as an observable in the estimation?

For the purposes of what I am trying to do, I need to use holdings of government debt by private investors as an observable in the estimation. The model and question are not fiscal in nature, but I am not quite sure how I need to modify that data to make it consistent with the model. I can provide more details for what I am trying to do, if you think you can help me out with this.


No, I am not aware of other papers. The reason that papers often do not consider debt as an observable is that the fit between data and model is very imperfect. In the model we usually have a zero-coupon one-period bond, while in practice there are various maturities. Thus, you can only compare the market values of the bonds in the model and in the data, because face values are meaningless with different durations. The other complication is stationarizing the data. You could look at growth rates of the market value, but that is most probably not what you are looking for, as you will lose the level information. So using the debt-to-GDP ratio may be an option.
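A minimal sketch of the debt-to-GDP option (hypothetical Python helper and series names, only to fix ideas, not Dynare code):

```python
import numpy as np

def debt_to_gdp_observable(market_value_debt, nominal_gdp):
    # Stationarize debt by expressing its market value relative to
    # annualized nominal GDP; unlike growth rates, the ratio keeps
    # the level information.
    debt = np.asarray(market_value_debt, dtype=float)
    gdp = np.asarray(nominal_gdp, dtype=float)
    return debt / (4.0 * gdp)  # quarterly GDP, annualized
```

Whether to annualize (the factor of 4) depends on the convention used in the corresponding model equation; the point is only that a ratio of two nominal series is stationary without differencing.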


Thanks Johannes.

I have 2 general question related to treating data prior to feeding it into the model. I have read your guide and it is very helpful, but I want to clear up some things I was confused about.

Take consumption, for example. I define consumption as in Smets and Wouters (2007), namely C = ln((PCEC/GDPDEF)/LNSindex). Now, SW multiply this directly by 100 and then take the difference C_t - C_(t-1) to get their observable c_obs. In my case, I want to use a one-sided HP filter. Hence, I HP-filter C and get c = c_trend + c_cycle, so in my case c_obs = c_cycle. Then, in the model I can scale it by 100 and define c_obs = 100*c_cycle. Is everything in this procedure for treating non-stationary variables for estimation correct?
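The transformation described above can be sketched in Python (a rough illustration, not Dynare code; the helper names are hypothetical, the series names follow the FRED mnemonics in the question, and the expanding-window loop is only one common approximation to a true one-sided HP filter):

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    # Two-sided HP trend: solve (I + lam * K'K) tau = y, where K is
    # the (T-2) x T second-difference operator.
    y = np.asarray(y, dtype=float)
    T = len(y)
    K = np.zeros((T - 2, T))
    for i in range(T - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    return np.linalg.solve(np.eye(T) + lam * (K.T @ K), y)

def one_sided_hp_cycle(y, lam=1600.0, min_obs=12):
    # Approximate one-sided filter: re-run the two-sided filter on an
    # expanding sample and keep only the endpoint of each trend.
    y = np.asarray(y, dtype=float)
    cycle = np.full(len(y), np.nan)
    for t in range(min_obs - 1, len(y)):
        cycle[t] = y[t] - hp_trend(y[: t + 1], lam)[-1]
    return cycle

# Hypothetical raw inputs named after the FRED series in the question:
# c = np.log(PCEC / GDPDEF / LNSindex)  # log real consumption per capita
# c_obs = 100 * one_sided_hp_cycle(c)   # percent deviations from trend
```

Because the filter is linear, multiplying the log series by 100 before filtering gives the same c_obs as multiplying the cycle by 100 afterwards.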

Second, I have seen that some people also demean inflation and interest rates in the data and use the resulting series as observables. Is that something I should do?

For instance, inflation is defined as LN(GDPDEF)-LN(GDPDEF(-1)). The resulting series is then demeaned, and the demeaned series is used as the observable rather than LN(GDPDEF)-LN(GDPDEF(-1)) itself.

  1. Yes, the first step is correct. You generate percentage deviations of real consumption per capita from its trend by one-sided HP filtering the log of real consumption per capita. Multiplying the result by 100 makes a value of 1 stand for 1 percent. As the filter is linear, this is equivalent to multiplying the log by 100 before filtering.
  2. Whether you take out the mean depends on whether you want to estimate or calibrate the mean inflation rate. SW estimate the mean and therefore cannot take it out of the data.
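Point 2 can be sketched as follows (plain Python, not Dynare; `inflation_obs` is a hypothetical helper introduced only for illustration):

```python
import numpy as np

def inflation_obs(gdpdef, demean=True):
    # Quarterly inflation: log first difference of the GDP deflator.
    p = np.log(np.asarray(gdpdef, dtype=float))
    pi = np.diff(p)
    # Demean only if mean inflation is calibrated rather than estimated;
    # SW (2007) estimate the mean and therefore keep it in the data.
    return pi - pi.mean() if demean else pi
```

With `demean=False`, the observation equation must contain the steady-state inflation term that is being estimated; with `demean=True`, that term is fixed to zero.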


If I were to multiply the data by 100 before filtering, then in the model c_obs=c rather than c_obs=100*c, since I already scaled the data by 100 before entering it into the observation equations. Is that correct?


No, not necessarily. The model is linear, so multiplying by 100 can be used to simply have the shock standard deviations scaled up by a factor of 100, i.e.
stderr epsilon = 1
means a 1 percent shock.
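This linearity is easy to see on a toy AR(1) (a hypothetical illustration, not the actual model): scaling the shock standard deviation by 100 scales every simulated path by exactly that factor.

```python
import numpy as np

def simulate_ar1(rho, sigma, eps):
    # x_t = rho * x_{t-1} + sigma * eps_t, starting from x_0 = 0
    x = np.zeros(len(eps) + 1)
    for t, e in enumerate(eps):
        x[t + 1] = rho * x[t] + sigma * e
    return x[1:]

rng = np.random.default_rng(0)
eps = rng.standard_normal(200)
x_small = simulate_ar1(0.9, 0.01, eps)  # stderr 0.01
x_big = simulate_ar1(0.9, 1.0, eps)     # stderr 1: "1 means 1 percent"
# The two paths are identical up to the factor of 100:
# np.allclose(100 * x_small, x_big) holds
```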


So if the observables are scaled by 100 prior to estimation and then once again in the model (entered as x_obs=x*100), a shock with stderr = 0.01 would be interpreted as 0.01 percent?


On a separate note, how does one choose the priors for the standard deviations of the different shocks when estimating a model? I was trying to read up on the theory behind this, but I wasn’t able to find a specific discussion.

  1. I don’t understand the first question. Are you multiplying by 100 twice? If yes, specifying x_obs=x*100 would only take into account one of the two factors of 100. So stderr = 0.01 would indeed be interpreted as 0.01 percent.
  2. Prior elicitation is complicated. For shocks, we usually specify them in percentage deviations and then use a rather diffuse prior allowing for shocks in the range of e.g. 0% to 10%. That is about the range typically found in economic applications.


Hi Johannes,

All the models that I have seen using a first-difference filter also estimate the mean inflation and interest rate. Can one first-difference the data and still demean inflation and interest rates? For instance, define inflation as LN(GDPDEF)-LN(GDPDEF(-1)). Can I demean the resulting series? The same with the Fed Funds rate: say I use FedFunds/400. Can I demean the resulting series?

Secondly, what is the best way to transform labor hours into an observable? In previous posts on the forum I have seen that some people suggest the log growth rate: LN(N)-LN(N(-1)).

  1. Of course you can do that if you are not interested in estimating the discount factor or steady state inflation.
  2. Either use growth rates for hours as well, or percentage deviations from their mean.
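Both options for hours can be sketched as follows (hypothetical Python helper, assuming `hours` is a per-capita hours series):

```python
import numpy as np

def hours_obs(hours, use_growth=False):
    # Either demeaned log growth rates, or demeaned log levels
    # (percent deviations from the sample mean), as suggested above.
    h = np.log(np.asarray(hours, dtype=float))
    x = np.diff(h) if use_growth else h
    return 100.0 * (x - x.mean())
```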


Hi Johannes,

I have a general question on the estimation of models with a specified trend and non-demeaned data versus a model without a trend and with demeaned data. In terms of estimating a mean, I understand that one may want to estimate the steady state value of a variable, but what does the trend buy you in terms of estimation?

For instance, if I have a term given by x(a_obs-b_obs), where a and b are data and x is a parameter to be estimated, should I expect a significant difference in the estimate of x depending on how I specify the observables? I am not sure if that’s a useful example.


Whether you specify a trend or not typically depends on whether you want to consider the effect of shocks to the trend. Most papers with an explicitly specified trend consider for example permanent TFP shocks.

With “demeaning” you mean demeaned growth rates, I guess. In that case, the question is mostly whether you want to know the uncertainty about average trend growth or whether you impose that it perfectly corresponds to the mean growth rate observed in your finite sample.


Thanks Johannes.

I have a question related to the estimation of models during the ZLB period. Assuming one does not want to explicitly estimate a model where the ZLB binds, there shouldn’t be any technical issues in a standard model with using the short-term rate as an observable, correct? I mean that the small variation in the short-term rate during this period shouldn’t create technical problems for the estimation. I have estimated a model with a demeaned short-term interest rate series between 2008 and 2017 and did not encounter explicit errors.