One-sided HP filter on a matrix of time series or one by one?

Hi,

I would like to ask a few questions:

  1. Should I apply the one-sided HP filter to the matrix of my data (all series at once) or to the time series one by one? The function seems to accept a matrix of time series. And if I have to use it on all my data at once, should I treat aggregate variables like consumption and investment differently from variables like inflation, the interest rate, and share (percentage) variables? If yes, should I further split the second group into interest rate and inflation on the one hand and share variables on the other?
  2. What is the difference between the one-sided HP filter and quadratic detrending? I am currently using the HP filter, but a paper similar to my work used quadratic detrending. What makes someone choose one over the other? Do they try different methods and pick the one that seems to estimate more easily?
  3. Does the scaling of my time series matter, i.e., does it make a difference whether I measure y, c, g, i in billions or millions of dollars, or labor in 1000 hours versus 1 hour?
  4. When I convert my time series to per capita terms, should I divide by the total population over 16 or by the number of employees?
  5. I have two share parameters in my model for which I have data; they are calibrated as the means of their respective time series. I have turned these two parameters into endogenous variables, given each of them a shock with a steady state equal to the data mean, and included the rho and sigma of these shocks in the estimation alongside the other shocks. I have also added their two real-world time series as observables. Is this approach OK? And if it is, should I simply demean the data or use the HP filter? Because these are percentage variables (between 0 and 1), I am not sure how to preprocess them.
  6. In the paper ‘Risk Shocks’, the authors simply demean the log of hours worked per capita as their labor data. Why didn’t they detrend it with the one-sided HP filter or by differencing, as they did with their aggregate variables? Should I detrend my labor observable with the one-sided HP filter in the same way as my other observables (excluding inflation, interest rates, and possibly the share/percentage variables I asked about)? I thought labor data should be detrended like the aggregate variables, but after reading ‘Risk Shocks’ I don’t know what is right.
  7. There is no way I can calibrate the Frisch elasticity of labor supply from my equations, so I just set it to 1. Is that OK? It’s a typical New Keynesian utility function with real estate in it. The equation containing the parameter Phi is L^Phi/W = lambda_1, where lambda_1 is the Lagrange multiplier on one of my constraints, whose steady state is coefficient/C. So I essentially have to calibrate C*(L^Phi)/W = (a number based on the data if I take the average, or a time series if I don’t), which I can’t: the left side is too complex (I do have the right side). The values on the left side are scale-dependent, so I can’t use something like a regression on the logs of the equation to back out Phi.

Thank you in advance.

  1. The one-sided HP filter is univariate. If you pass a matrix, each series will be detrended one by one (a sketch follows after this list).
  2. Each filter removes different frequency components as the trend, i.e., the part that the business cycle model is not supposed to explain. Each researcher has a different prior on what the trend should look like (a quadratic detrending sketch also follows below).
  3. Ideally, your data is passed in logs in order to turn exponential trends into linear ones. Because the logarithm makes the series unitless, the scaling will not matter; otherwise, it may (see the note at the end of the second sketch).
  4. That depends on whether your model is supposed to explain variables per capita or per worker. There is no general rule.
  5. Without more context it is impossible to tell. Are the shares supposed to be constant?
  6. There is a big debate in the literature on whether hours worked contain a trend. If you think they don’t, then obviously there is no need for detrending.
  7. If you are not going to estimate your model, you need to pick a number. A unit elasticity can be justified.
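To make answer 1 concrete, here is a minimal Python sketch of a Stock-Watson-style one-sided HP filter applied column by column to a data matrix. The function name `one_sided_hp`, the `min_obs` cutoff, and lambda = 1600 are my own illustrative choices, not a fixed API; only statsmodels' standard two-sided `hpfilter` is an actual library call.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def one_sided_hp(series, lamb=1600, min_obs=12):
    """One-sided HP filter: at each date t, run the standard two-sided
    HP filter on the sample up to t and keep only the endpoint of the
    estimated trend (Stock-Watson style)."""
    x = np.asarray(series, dtype=float)
    trend = np.full(x.shape, np.nan)
    for t in range(min_obs - 1, len(x)):
        _, trend_t = hpfilter(x[: t + 1], lamb=lamb)
        trend[t] = trend_t[-1]  # keep only the endpoint
    return x - trend, trend     # (cycle, trend)

# A "matrix" of series is handled column by column:
# data = pd.DataFrame(...)  # T x N, one column per (log) series
# cycles = data.apply(lambda col: one_sided_hp(col.to_numpy())[0])
```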
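For answers 2 and 3, a sketch of quadratic detrending in logs; the closing comment shows why the scaling question from point 3 becomes irrelevant after taking logs. Again, this is illustrative, not a fixed recipe.

```python
import numpy as np

def quadratic_detrend(series):
    """Regress the log series on a constant, t, and t^2 and return
    the residuals as the cyclical component."""
    y = np.log(np.asarray(series, dtype=float))
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t, t**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Scaling (answer 3): measuring y in millions instead of billions
# multiplies the series by 1000, but log(1000*y) = log(1000) + log(y),
# and the constant log(1000) is absorbed by the intercept (or by
# demeaning), so the cyclical component is unchanged.
```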

Thank you very much, Prof. Pfeifer, for all the answers.

  1. For Q#5: the two variables are constant parameters in the final model, but I turn them into endogenous variables with shocks and steady states equal to the data means in order to help estimate the model. Although they will ultimately be static parameters in the model, they explain a lot of the variance in the data, hence the shocks. I cannot endogenize them in a way that determines their values from FOCs; they come only from the data. So giving them shocks and adding their data as observables (data whose averages are the variables’ steady states) is all I can do.
  2. As for Q#7: I am estimating my model with Bayesian methods, mostly to determine the coefficients of the adjustment costs (the other parameters are calibrated by hand, since some FOCs pin down their values as ratios of data series that can be computed and averaged before the Bayesian estimation). But in all the other papers I have read, the Frisch elasticity is set beforehand without estimation, which is why I didn’t consider estimating it. Is it normal to estimate it alongside the other parameters of my model? Should I estimate it?
  3. I just came up with another question. My share variables (say x and y) are each the ratio of one series to another (x = a/b, y = c/d). For x, I divide the raw data of a by the raw data of b, and likewise for y. Because they are share variables, I do not detrend them and only demean them (in logs or not, depending on their specification in my model). Is this OK? I tried filtering the data of a, b, c, d first and then constructing x and y, but the results don’t look right. Should I instead take the ratio of the unfiltered data and only filter the resulting share variables at the end? Or, to ask more broadly, does the choice between filtering/differencing and not depend only on whether the share variables have a trend, so that if they don’t (they don’t), I should certainly not filter or difference them? All in all, I have to choose between filtering a, b, c, d before constructing x, y, or constructing x, y from unfiltered data; and after that, I have to decide whether to filter x, y themselves.
  4. Another question: papers report including inflation, the interest rate, and share variables as observables in demeaned form without logs, so they presumably do not wrap them in exp() in the model. They probably do that to get percentage changes rather than percentage deviations from steady state. Is it also OK to use exp() for these kinds of variables and add the observables as demeaned logs? It doesn’t seem to me that the results should change, but I am not sure. I have already added exp() for them, so I would rather not change this; that is why I ask. After the Bayesian estimation I will transfer the estimated parameter values to my other mod file, which is the same model without any exp() in it. That is why it does not matter to me that the interest rate, inflation, and share variables appear in exp() in the estimation file.
  5. I have also included quarterly year-over-year inflation as an observable. Is that OK, or should it be price index growth from t-1 to t (instead of t-4 to t, which is YoY)? I have used quarterly year-over-year inflation for my calibration and estimation, but in my model inflation is an ordinary t-1 to t variable. I am not sure whether this is OK. Do papers use the t-1 to t definition? I don’t want to be stubborn about the specification; whatever other papers normally do is fine with me for now.
  6. And the last one. I know about the spikes in population growth in 1990 and 2000, the need to smooth the growth rate, and, if necessary, to reconstruct the population index, so I will proceed as explained in the observables guide on your website. But I also use the ‘Nonfarm business sector hours worked for all employed persons’ index. You specify that employment data is better put in per capita terms with the unsmoothed population data, since both have similar distortions that cancel out. I don’t know, however, whether the ‘Nonfarm business sector hours worked for all employed persons’ index shares the population data’s distortions. Should I put that index in per capita terms using the smoothed or the unsmoothed population series?

Thanks again.

  1. That strikes me as problematic. If you require them to be time-varying to explain the data, assuming them to be constant in the model is an obviously poor assumption that will be hard to defend.
  2. Most calibrated parameters are set based on long-run averages and are rather uncontroversial. That does not apply to the Frisch elasticity, whose value can be very controversial. That’s why it’s often estimated.
  3. Usually, there is no point in filtering non-trending variables (except for people sometimes only wanting their model to explain business cycle frequency movements in all variables, in which case you may want to filter everything).
  4. The difference is essentially whether you estimate your model based on net or gross interest/inflation rates. Usually, that should not matter at all (see the approximation after this list).
  5. Using one-period or year-over-year inflation is equivalent, except that the latter requires more initial periods for initialization (which may be less efficient); see the identity below.
  6. Looking at the data, it seems that there are no suspicious spikes around Census dates. I would thus use the smoothed population series (a sketch of the growth-rate smoothing follows below).
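To illustrate answer 4 with the usual first-order approximation (a generic textbook step, not specific to any paper): for a net rate r_t with gross counterpart R_t = 1 + r_t,

```latex
\log R_t = \log(1 + r_t) \approx r_t
\quad\Longrightarrow\quad
\log R_t - \overline{\log R} \approx r_t - \bar{r},
```

so the demeaned log of the gross rate and the demeaned net rate are approximately the same observable.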
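Similarly, for answer 5: with quarterly log inflation defined from the price index, the year-over-year rate is just a four-quarter sum,

```latex
\pi_t \equiv \log P_t - \log P_{t-1}, \qquad
\pi_t^{yoy} \equiv \log P_t - \log P_{t-4}
            = \pi_t + \pi_{t-1} + \pi_{t-2} + \pi_{t-3},
```

so one can match a YoY observable with a measurement equation summing four quarterly model inflation rates; the cost is the extra lags needed for initialization.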
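Finally, on answer 6, a rough Python sketch of what smoothing Census-date spikes in the population growth rate and rebuilding the index could look like. The window, the outlier cutoff, and the local-median replacement are my own arbitrary choices, not the exact procedure from the observables guide.

```python
import numpy as np

def smooth_population(pop, window=5, z=4.0):
    """Flag outlier growth rates (e.g. Census-revision spikes) via a
    robust z-score, replace them with a local median, and rebuild the
    population index from the smoothed growth rates."""
    pop = np.asarray(pop, dtype=float)
    g = np.diff(np.log(pop))                  # log growth rates
    med = np.median(g)
    mad = np.median(np.abs(g - med)) + 1e-12  # robust scale
    spikes = np.abs(g - med) / (1.4826 * mad) > z
    g_s = g.copy()
    for i in np.where(spikes)[0]:
        lo, hi = max(0, i - window), min(len(g), i + window + 1)
        g_s[i] = np.median(np.concatenate([g[lo:i], g[i + 1:hi]]))
    # rebuild the index starting from the first observation
    return pop[0] * np.exp(np.concatenate([[0.0], np.cumsum(g_s)]))
```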

Thank you, Prof. Pfeifer, I got all my answers.
Just to mention: I have included those two parameters/variables as observables for now because they cannot be endogenized (not just in my model, in any model). Those two variables/parameters change in the real world, for the US, only through exogenous shocks; I have never seen any paper able to, or even trying to, endogenize them. But I will certainly consider what you explained if I manage to change the observables without hurting how much of the observables’ variance the model explains. I have seen a paper (‘Financial Business Cycles’, Matteo Iacoviello) in which loan-default redistribution shocks are included as observables without being endogenized in the model (they are added as zero-mean shocks). The same paper also adds TFP as an observable, although the TFP in its production function has no growth. If I remember correctly, that paper is where I got the idea of adding exogenous variables (with their shocks) as observables when their variance matters a lot for the model.