Stationarity of data

I am trying to estimate (via Bayesian methods) a small open economy model with real consumption per capita, the real oil price (Brent price / US CPI), headline inflation, food inflation, quarterly interest rates, and the RER (nominal exchange rate × US CPI / home CPI).
Data period: 2004Q1 to 2019Q4.
I have seasonally adjusted my data using X-13ARIMA-SEATS.
ADF tests on my data fail to reject the null (except for consumption).
I find the first differences to be non-stationary as well (except for the inflation rates; but does feeding first differences of an inflation rate into the model even make sense?).
HP-filtered log(C) is stationary.
Demeaned interest rates, RER, the real oil price, and the inflation rates are also non-stationary.
My model is declared in levels.

  1. Can I use data that doesn’t pass the ADF test?
  2. My estimated shock size for the quarterly oil price is around 0.15 (in decimal terms). Is this plausible?
    When I also estimate measurement errors, the oil price shock shrinks to 0.00x, but its measurement error comes out at around 0.15.
  3. Also, when I include the COVID-19 data, the shock sizes for TFP and the oil price become very large. Does this necessarily indicate a model error or a data preparation error?
    In all cases, I have consistently updated my observation equations.
  4. The oil price is a stationary AR(1) process in my model. I have set its steady-state value to 1, and I have also normalized the steady-state RER to 1. Could you please advise whether these normalizations are valid?
    All my variables are real except the interest rate. There is no unit root.
  5. Should I necessarily match the steady states of my model with the averages of my data? I did calibrate certain ratios according to the country’s long-run averages. Should I also match beta (the discount factor) with the discount factor implied by the sample?
    @jpfeifer @stepan-a
  1. Failing to reject the null is not equivalent to there actually being a unit root. Your sample is rather short, which may reduce the power of the test. Generally, there are good theoretical arguments for considering growth rates to be stationary. The levels are clearly trending.
  2. That suggests the data has a volatility that requires a shock of 0.15. Why do you have measurement error for that series? The spot price should be well-measured.
  3. Covid is a massive break in the data. That is not surprising.
  4. Yes, such normalizations are usually unproblematic.
  5. Not necessarily. Sometimes there are good reasons to fix, e.g., the discount factor. But if the fixed value is too far from the steady-state value the data implies (assuming your data is informative about the mean), you will run into problems.
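On point 4, a sketch of why the unit normalization is harmless (notation illustrative). Writing the oil price as a stationary AR(1) in logs,

$$\log p^{oil}_t = \rho \, \log p^{oil}_{t-1} + \varepsilon_t,$$

the steady state $p^{oil} = 1$ just means $\log p^{oil} = 0$, so log-deviations from steady state coincide with plain logs. The choice of 1 only fixes the units of the price; it changes none of the dynamics, which is why such normalizations are usually unproblematic.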
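On point 2: a shock standard deviation of 0.15 is easy to reconcile with the volatility of oil-price data. For a stationary AR(1), the unconditional standard deviation exceeds the innovation standard deviation. A minimal sketch, with rho = 0.9 as an assumed persistence (not an estimate from the model):

```python
# For a stationary AR(1),
#   x_t = rho * x_{t-1} + eps_t,  eps_t ~ N(0, sigma^2),
# the unconditional standard deviation is sigma / sqrt(1 - rho^2).
# rho = 0.9 and sigma = 0.15 are illustrative numbers only.
import numpy as np

rho, sigma = 0.9, 0.15
theoretical_std = sigma / np.sqrt(1 - rho**2)  # about 0.34 here

# Long simulation to confirm the formula
rng = np.random.default_rng(1)
x = 0.0
draws = []
for _ in range(200_000):
    x = rho * x + sigma * rng.standard_normal()
    draws.append(x)

print(f"theoretical: {theoretical_std:.3f}, simulated: {np.std(draws):.3f}")
```

So a 0.15 innovation is consistent with a series whose overall swings are considerably larger than 15 percent.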
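On point 5: if one did want the sample-implied discount factor, the steady-state consumption Euler equation pins it down from average rates. The numbers below are made up for illustration, not moments of the actual dataset:

```python
# Steady-state Euler equation (no growth): 1 = beta * (1 + i) / (1 + pi),
# so beta = (1 + pi_bar) / (1 + i_bar) at quarterly rates.
# Both averages below are assumed values, purely for illustration.
i_bar = 0.015   # average quarterly nominal interest rate (assumed)
pi_bar = 0.010  # average quarterly inflation rate (assumed)

beta = (1 + pi_bar) / (1 + i_bar)
print(f"implied quarterly beta: {beta:.4f}")
```

Whether to impose this or to calibrate beta separately is exactly the trade-off discussed above: a fixed beta whose implied steady-state rate is far from the sample average will strain the fit.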