Simplest NL NK Model Estimation

Hi,

I am working on a number of non-linear NK models, but up until now I have mostly concentrated on producing results based on calibration. I started looking into the alternatives and realised that estimating non-linear models is an order of magnitude harder than estimating linear ones, because the Kalman filter approach no longer applies.

Now, given that time is a limited resource and the goal of my thesis is not to develop new (or improve existing) estimation techniques, but rather to produce some empirical justification for the results that I obtain, here is the question. What is the simplest way to estimate a non-linear model (medium scale: around 60 endogenous variables, 25-30 parameters, 7 shocks; solved using a 2nd-order approximation, with the non-linearities coming in the form of LINEX adjustment costs)? By the simplest, I mean the simplest to implement in Matlab/Dynare given some knowledge of how to estimate linear models in Dynare using MH or MCMC-type algorithms (if that is of any use…).
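For concreteness, the LINEX specification I have in mind is the standard one from the literature (as in, e.g., Kim and Ruge-Murcia's work on asymmetric adjustment costs); for an adjustment gap x,

$$ \Phi(x) = \frac{\phi}{\zeta^2}\left[e^{-\zeta x} + \zeta x - 1\right] = \frac{\phi}{2}x^2 - \frac{\phi\zeta}{6}x^3 + O(x^4), $$

so phi scales the overall cost, zeta governs the asymmetry, and the quadratic case is recovered in the limit as zeta goes to zero. Note that zeta only shows up at the cubic term and beyond, i.e. it drops out of a first-order approximation entirely.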

Briefly skimming the relevant literature, I came across particle-filtered MCMC, the Simulated Method of Moments (SMM), the Generalised Method of Moments (GMM) and more.

Any recommendations on how best to approach this would be very helpful. Any relevant references would be even more helpful. There seems to be a lot of recent work out there, but as things stand, most of it is highly user-unfriendly and would require more than a couple of months' investment of time.

P.S. I have already looked into Born/Pfeifer (2014): “Risk Matters: A comment”.

Thank you in advance.

The particle filter will most probably not work well with a model of that size. Both GMM and SMM are in principle possible, but you will run into identification issues, as we did in Born/Pfeifer (2014): “Policy risk and the business cycle”, Journal of Monetary Economics. That’s why we went for a Bayesian version of SMM. GMM could be feasible, but tends to run into problems with large models, because computing theoretical moments for large pruned state spaces is computationally challenging.
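For intuition: a Bayesian (quasi-posterior) version of SMM, in the spirit of Chernozhukov/Hong (2003), replaces the likelihood in Bayes’ rule with the SMM objective, i.e. (up to the exact scaling convention for the objective)

$$ p(\theta \mid \hat{m}) \propto \exp\left(-\tfrac{1}{2}\,q(\theta)\right) p(\theta), \qquad q(\theta) = T\,\big(\hat{m} - m(\theta)\big)' W \big(\hat{m} - m(\theta)\big), $$

where \hat{m} are the empirical moments, m(\theta) the simulated model moments, and W a weighting matrix. This quasi-posterior can then be sampled with a standard MH algorithm, just like an ordinary posterior.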

Looking at the Dynare manual yet again, it (by which I mean the particle filter, GMM or SMM) doesn’t seem to be built into anything similar to mode_compute for linear models. I was wondering whether there are any examples of Dynare code using Bayesian SMM, ideally with a PDF of guidance?

I did have a look at Born_Pfeifer_2014/smm_diff_function.m in github.com/JohannesPfeifer/DSGE_mod/commit/7f3940c2eb9e70d48751309b472ed1aa0cbbd83e, but more (and more user-friendly) examples would be quite welcome.

If not, would the Dynare team consider producing a document with a very simple non-linear model (à la An & Schorfheide (2007), Econometric Reviews) where the emphasis is on how to prepare the data for non-linear estimation (such as Bayesian SMM) and how to specify the estimation algorithms? I wager that there would be quite a bit of interest in this.

Thanks

That is still on my to-do list. Do you have any specific questions?

Let me get back to you in a day or two. I will replicate An & Schorfheide (2007) with a slight tweak that introduces a simple downward price (or wage) rigidity and write a little PDF (+ LaTeX source) describing how I set it up. This should be a concrete starting point.

I’ll also put in some references to people who have done something similar, only (again) their code is not available online. My main interest is in understanding how SMM works in theory (which I can read up on in other papers) and in practice (which I don’t yet understand).

So here are the code and a PDF of a simple model with downward price rigidity. There are 9 endogenous variables, 2 shocks and 13 parameters (6 of which are structural and can be calibrated). The question is: having simulated this model successfully (no collinearity, smooth IRFs, etc.), how do I go about estimating it using SMM? Since SMM is not built into Dynare, an auxiliary .m file has to be written, which I’m guessing also requires quite a bit of knowledge of Dynare internals - all of which I’m interested in. My rough guess at the structure of such a file is sketched below.
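To make it concrete (and so you can correct me), here is how I imagine the objective function would look. This is a sketch, not working code: compute_moments is a placeholder of mine, and the exact signatures of resol and simult_ differ across Dynare versions.

[code]
function q = smm_objective(params, data_moments, W, shock_draws)
% SMM distance sketch: push trial parameters into the model,
% re-solve at the chosen order, simulate, and compare moments.
global M_ options_ oo_

% 1) overwrite the estimated parameters in Dynare's parameter vector
M_.params(strmatch('phi', M_.param_names, 'exact'))  = params(1);
M_.params(strmatch('zeta', M_.param_names, 'exact')) = params(2);

% 2) re-solve the model; penalise draws where no solution exists
[oo_.dr, info] = resol(0, M_, options_, oo_);  % signature varies by version
if info(1)
    q = 1e8;
    return
end

% 3) simulate with shock draws that are fixed across function calls,
%    so the simulated moments vary smoothly with the parameters
y_sim = simult_(oo_.dr.ys, oo_.dr, shock_draws, options_.order);

% 4) same moment selection as applied to the actual data
sim_moments = compute_moments(y_sim);  % placeholder helper

% 5) weighted quadratic distance
g = sim_moments - data_moments;
q = g' * W * g;
end
[/code]

Am I right that the key practical detail is to draw the shocks once, outside the objective, and reuse them for every parameter draw?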

What would be the best place to start?

Thanks

After some more digging through the web, I found this:

dynare.org/dynare-matlab-m2html/matlab/simulated_moments_estimation.html

I’ll have a go and let you know how I get on.

That is an old function that does not really work. Please give me a few days to get back to you.

Yes please, we would really appreciate a ‘minimal’ working version of SMM estimation for a minimal model. Less is more. jpfeifer, your replication codes for papers are helpful, but they are a bit too complex and most parts of the code are not specific to the SMM task. We would appreciate learning from the simplest possible example, if you can provide one.

Hi Johannes,

While we’re waiting for the guidance on SMM, I started looking into the ways in which this mini model can be estimated using the particle filter (I found the “Particle filter dynare” thread on this forum helpful). First things first, I am thinking of applying this to the UK, since CPI inflation is particularly positively skewed in many OECD economies (though not in the US). Here are some questions about specifying the observation equations in a non-linear setting (I have also read your handout ‘A Guide to Specifying Observation Equations for the Estimation of DSGE Models’ in great detail).

  1. There is no government expenditure or trade balance in this stylised model. So to specify the observation equation that links model output to GDP in the data, one must first subtract gross fixed capital formation, government expenditure and net exports from the GDP series to get the series of interest (i.e. Y_t = GDP_t - GOV_EXP_t - NX_t - I_t)? I’m not sure whether people usually subtract NX_t in closed-economy settings, though (e.g. SW (2007) or SGU (2012)). If not, why not?

  2. Output in the model is stationary, but GDP in the data is non-stationary. Specifically, data GDP grows over time due to population growth and technological progress. So Y_t needs to be divided by some measure of the population or the labour force (say N_t) to translate the data into per-capita terms. Also, while the theoretical model incorporates non-stationary technological progress, the code is expressed in ‘per effective worker’ terms (i.e. de-trended by productivity growth), so the data series Y_t needs to be de-trended in some way to produce an accurate mapping. As far as I understand, a simple HP filter or polynomial de-trending will not do, because these methods are typically scale-dependent (i.e. you log the data first and then de-trend, but that defeats the purpose of this non-linear model). So the observation equation I intend to use is based on the gross de-meaned growth rate (i.e. Y_t/Y_{t-1} - mu_y); see the sketch after this list. Is this a legitimate approach? What are the potential caveats, and are there other suggestions?

As for inflation, it is also a gross growth rate, only it is not de-meaned. Rather, the long-run trend is calibrated to a parameter equal to the quarterly inflation target. The nominal interest rate is the annualised interest rate divided by 400 (as in the guide).

  3. If there are 2 shocks in the model, but I want to have 3 observable variables (GDP, CPI inflation and the nominal interest rate), I should introduce 1 measurement error (e.g. in the observation equation for output) in order to avoid stochastic singularity, right? The reason I ask is that there are examples in the guide (e.g. Listing 2 on page 16) with 3 observables, 1 shock and 1 measurement error. That doesn’t seem to add up.

  4. Is there any downside to including additional observables (say 2-3 more) using this measurement-error approach?

  5. If I specify the observation equations in the above way, but decide to choose order=1 in stoch_simul, will there be any hurdles for Dynare in estimating the linear version of the model? Since it is simple to nest the model in a linear setting, I’m thinking of using it as a cross-check, only I expect the parameters phi and zeta not to be separately identified at first order (instead, people often use the slope of the Phillips curve for such purposes). I could also just use a quadratic functional form to start with.
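For reference, the observation equations I have in mind look like this in Dynare notation (a sketch only; the variable and parameter names are mine, with y de-trended output, pie gross quarterly inflation and R the gross quarterly nominal rate, so net vs. gross has to be kept consistent with the data transformation):

[code]
// inside the model block:
y_obs  = y/y(-1) - mu_y + e_me; // de-meaned gross per-capita GDP growth,
                                // plus measurement error to avoid singularity
pi_obs = pie;                   // gross quarterly inflation; long-run trend
                                // pinned down by the inflation-target parameter
r_obs  = R;                     // nominal rate; in the data, 1 + the
                                // annualised rate divided by 400

// after the model block:
varobs y_obs pi_obs r_obs;
[/code]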

I’ll upload the code with particle filter estimation + data once I get the green light for the above specifications of observation equations.
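The estimation call I am planning to use is along these lines (option names taken from the particle-filter section of the manual; exact names and defaults may differ across Dynare versions):

[code]
estimation(datafile=as_uk_data, order=2, mode_compute=6,
           filter_algorithm=sis, number_of_particles=5000,
           mh_replic=0);
[/code]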

Thanks in advance

I wrote the code for the particle filter estimation (see attached - the data cannot be uploaded because the xlsx extension is not admissible). I had no problems estimating this at order=1 with quadratic adjustment costs. However, as soon as I introduce LINEX at order=2, the parameter driving the asymmetry (i.e. zeta) is ‘not identified’.

[quote]==== Identification analysis ====

Testing prior mean

WARNING !!!
The rank of H (model) is deficient!

zeta is not identified in the model!
[dJ/d(zeta)=0 for all tau elements in the model solution!]

WARNING !!!
The rank of J (moments) is deficient!

zeta is not identified by J moments!
[dJ/d(zeta)=0 for all J moments!]

==== Identification analysis completed ====[/quote]

I’ve tried: 1) setting order=3; 2) fixing phi (i.e. not estimating the ratio of phi and zeta); 3) a more informative prior for zeta (i.e. normal_pdf); 4) changing the prior mean (because the Jacobian may contain NaNs when zeta is close to zero). However, the identification issue persists, and running the particle filter estimation for a couple of hours ends in the following error message:

[quote]Estimation using a non linear filter!

Loading 105 observations from as_uk_data.xlsx

Initial value of the log posterior (or likelihood): -30403.4145

==========================================================
Change in the covariance matrix = 10000.
Mode improvement = 31752.4905
New value of jscale = 6.8518e-17

==========================================================
Change in the covariance matrix = 3.4106e-15.
Mode improvement = 1.7528e-07
New value of jscale = 1.4526e-18

Warning: Matrix is singular to working precision.

In dynare_estimation_1 (line 452)
In dynare_estimation (line 89)
In as_bpf (line 224)
In dynare (line 180)
Error using chol
Matrix must be positive definite.
Error in gmhmaxlik (line 197)
dd = transpose(chol(CovJump));
Error in dynare_estimation_1 (line 438)
gmhmaxlik(objective_function,xparam1,[lb ub],…
Error in dynare_estimation (line 89)
dynare_estimation_1(var_list,dname);
Error in as_bpf (line 224)
dynare_estimation(var_list_);
Error in dynare (line 180)
evalin(‘base’,fname) ; [/quote]

What could be the cause of this identification issue?

Thanks in advance

Let me start with the last one: the identification tests currently rely on a first-order approximation; higher-order versions are not yet implemented. Since zeta drops out of the first-order solution (it only enters at higher orders), the tests are bound to flag it as unidentified. Now to the others:

  1. Net exports for the US are so small that people often abstract from them. In Smets/Wouters (2007), they are lumped together with G into an exogenous spending component.

  2. That is not correct. With the HP filter (which you should never use in its two-sided version) you assume that the trend you filter out is a linear combination of the observables; but that is an assumption about the trend, not about the data. Similarly with a polynomial. Logging the data before detrending is not a problem: it is simply an invertible transformation of the data that does not lose any information (it has nothing to do with approximation). That being said, you can use gross growth rates or log differences; it should not make a big difference whether you approximate the log difference or the ratio nonlinearly. Using log differences is somewhat more common (see the data sketch after this list).

  3. Yes, you need (at least) as many shocks as observables. Listing 2, which you refer to, has more than two shocks (note the “…” between the two specified shocks).

  4. The only downside I see is that you may have a hard time defending your estimation if you find that the various measurement errors explain most of the data.

  5. Everything you describe should be independent of the approximation order
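For what it’s worth, the data transformation for the output observable could then look like this (a minimal sketch in Matlab; GDP and POP are placeholder names for the raw series):

[code]
% building the output observable from per-capita real GDP
gdp_pc = GDP ./ POP;                        % per-capita real GDP
g      = gdp_pc(2:end) ./ gdp_pc(1:end-1);  % gross quarterly growth rate
y_obs  = g - mean(g);                       % de-meaned, matches y/y(-1) - mu_y

% the (more common) log-difference alternative
dly    = diff(log(gdp_pc));
y_obs2 = dly - mean(dly);
[/code]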

Could you please rename the xlsx to mod and upload it, together with the mod file itself?
We would really appreciate a working example here.
