Optimal simple rule -- replication of Huang and Liu (JME, 2005)


Dear Dynare team,

I am using the “osr” command to replicate the optimal simple rule (TR1) in Huang and Liu (JME, 2005), which features a closed-economy New Keynesian model with two stages of production. The osr command runs successfully, but it does not reproduce the results shown in Table 3 of the paper. In particular, the estimated coefficients are not stable across different choices of initial values for the Taylor-rule coefficients. How can I fix this problem?

Just for reference, in writing the mod file I am strictly following the definition of the optimal monetary policy problem as specified at the beginning of Section 5 of the paper.
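For context, the basic structure of such an osr setup in Dynare looks like the sketch below. The parameter names (gamma_pi, gamma_y), initial values, and weights are placeholders for illustration, not the paper's actual specification:

```
// Hypothetical sketch of an osr block; names and numbers are placeholders.
osr_params gamma_pi, gamma_y;  // Taylor-rule coefficients to be optimized

gamma_pi = 1.5;                // initial values handed to the optimizer
gamma_y  = 0.5;

optim_weights;
pi 1;                          // weight on the unconditional variance of inflation
y  0.5;                        // weight on the unconditional variance of output
end;

osr;                           // minimize the weighted sum of variances
```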

Attached are my mod file and the paper.



TaylorRule_HuangLiu.mod (1.3 KB)

Huang and Liu_2005_JME_Inflation targeting, what inflation rate to target.pdf (310.4 KB)


The problem is that you are looking for a global optimum. When you use different initial values and get different answers, you should compare the resulting values of the objective function. Do you get values lower than the one obtained at the parameters reported in the paper? Also, did you try a different opt_algo?
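For instance, one can rerun the optimization with a different optimizer and compare the achieved losses across runs. This is a sketch, assuming a recent Dynare version where the achieved value is stored in oo_.osr.objective_function:

```
// Try a different optimizer; opt_algo=9 selects CMA-ES, which searches
// more globally than the default gradient-based routines.
osr(opt_algo=9);

// After each run, compare the achieved loss across starting values:
// the relevant value is stored in oo_.osr.objective_function.
```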


Dear Prof,

Thanks a lot for your quick response. I have tried different settings of “opt_algo”, such as “opt_algo=2,8,102”. They report even weirder estimates, with some of the estimated coefficients larger than 100.

I played with different initial values, and it is true that when the initial values are close to the parameters reported in the paper, I get the lowest objective function. But ex ante, how should I choose the initial values? Also note that the estimated results are always near the initial values, so I guess the optimizer is getting stuck in a local minimum.


Most papers in the literature actually impose bounds on the parameters so they do not go through the roof. If you are facing local optima, a typical approach is to try different starting values and then keep the point with the best value of the objective function.
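In recent Dynare versions, such bounds can be imposed directly with an osr_params_bounds block. The parameter names and bound values below are illustrative placeholders:

```
// Hypothetical bounds on the Taylor-rule coefficients; names and
// values are placeholders, not the paper's specification.
osr_params_bounds;
gamma_pi, 0, 3;  // parameter name, lower bound, upper bound
gamma_y,  0, 3;
end;

// Bounds require an optimizer that supports them, e.g. opt_algo=9.
osr(opt_algo=9);
```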


Thanks a lot!

There is one more relevant question. In the current code, the shocks are set to “var e1; stderr 0.02^2; var e2; stderr 0.02^2;”, which yields estimates similar to those in the paper. But in the paper, the standard deviation of the shock is 0.02, not 0.02^2, and I think 0.02 is the right scale, as in other papers in the literature. If I set “var e1; stderr 0.02; var e2; stderr 0.02;”, the estimated Taylor-rule coefficient becomes extremely large, i.e., more than 1000. What’s wrong here?

Is that because, in the paper, the endogenous variables in the welfare loss function are specified as deviations from the flexible-price equilibrium, while “osr” works with deviations from the steady state?


As written, you are confusing the variance and the standard deviation. Of course, if you want to compare your results to the original paper, you need to use the same objective function. I infer from your message that you are not doing that right now.
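Concretely, in a Dynare shocks block the stderr keyword expects the standard deviation, while the assignment form expects the variance, so the following two lines specify the same shock:

```
shocks;
var e1; stderr 0.02;  // standard deviation of 0.02
var e2 = 0.02^2;      // equivalently, a variance of 0.0004
end;
```

Writing “stderr 0.02^2” instead sets the standard deviation to 0.0004, i.e., a variance of 0.02^4.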



Probably I did not express my problem correctly. If I strictly follow Huang and Liu’s parameters, my estimates of the Taylor-rule coefficients are extremely large, over 1000. Attached is my mod file that strictly follows their paper. Could you please tell me where I am going wrong?

TaylorRule_HuangLiu.mod (1.5 KB)


Do you know whether they imposed bounds on the valid parameter range? That is quite common, unfortunately often without explicitly stating so.


I am not quite sure whether they impose bounds or not. But the issue is that, no matter how wide the bounds I impose, the estimated results always end up close to the upper bound. As I raise the upper bound, the estimates increase accordingly.


That is not unusual. Often the optimal parameter values in such exercises are actually infinite, so the choice of upper bound is crucial. Does the value of the objective change much if you impose such a restriction?


Got it! It does not change much. For instance, if I increase the upper bound on the coefficients on the inflation measures from 3 to 5, the objective function decreases by 1%; if I increase the bound from 3 to 10, the objective function decreases by 3%.

Instead of using “osr”, I am wondering whether an alternative approach would work better, and would appreciate your view: use a search algorithm (like fminsearch) to find the coefficients – given a set of coefficients, simulate the linear system, then update the coefficients to minimize the unconditional variances entering the objective function.



What you outline is exactly how Dynare implements the osr command: it runs an optimizer over a set of parameters to minimize an objective defined over unconditional variances.