Can I specify in my “initval” block the exact values of my endogenous variables for my baseline year (2017)? I have calibrated the parameters of my model on these values (the ratios).
As an example, fixing the value of my GDP to 2’500’000 million euros (Y=2500), and labor to 23 million workers (L=23)? I often see papers with “steady state values” for the endogenous variables which are lower than 1, like Y=0.74, C=0.58, etc. (because of the normalization of the time endowment to 1, I guess). I assume they computed these values from the steady-state equations and parameters. So maybe I should fix the values of my variables from my BGP equations and parameters.

I’m doing deterministic simulations, so I would like graphs with actual values of my endogenous variables.

Short version: If you want to achieve particular values for endogenous variables you will need to calibrate the structural parameters which determine the steady states accordingly. For example, labor supply is often taken as a “proportion of time spent working” and thus calibrated to ~1/3 by, among other things, the choice of a labor supply disutility parameter. Many references probably exist showing this process. Appendix A.2 of one of my working papers [link] goes into detail for the news-shock model of Jaimovich and Rebelo (2009).
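To make the hours-worked example concrete, here is a minimal sketch of that calibration step in a textbook RBC model (log utility in consumption, power disutility of labor, Cobb-Douglas production). All parameter values below are illustrative, not taken from any particular paper:

```python
# Hedged sketch: calibrating the labor-disutility weight chi so that
# steady-state hours equal 1/3 of the time endowment in a textbook RBC model.
# Utility: ln(C) - chi * N^(1+phi)/(1+phi); production: Y = K^alpha * N^(1-alpha).
alpha, beta, delta, phi = 0.33, 0.99, 0.025, 1.0  # illustrative values
N_target = 1 / 3  # target: a third of time spent working

# The steady-state Euler equation pins down the capital-labor ratio:
# 1/beta = alpha*(K/N)^(alpha-1) + 1 - delta
k = ((1 / beta - 1 + delta) / alpha) ** (1 / (alpha - 1))  # K/N

w = (1 - alpha) * k**alpha   # steady-state wage
c = k**alpha - delta * k     # consumption per unit of labor (C/N)
C = c * N_target             # level of consumption

# Intratemporal FOC: chi * N^phi * C = w  =>  back out chi to hit N_target
chi = w / (C * N_target**phi)
print(f"K/N = {k:.3f}, w = {w:.3f}, C = {C:.3f}, chi = {chi:.3f}")
```

The logic is the same in richer models: target a value for an endogenous variable, then invert a steady-state equation for the free parameter.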

Considerably Longer version:

Yes, and you should as well. As you probably know, the solution technique in Dynare and in most DSGE modeling involves linearizing the equilibrium conditions. Linearization is always done around some particular constellation of variable values. The convention is to pick the nonstochastic steady state (or, for a model with a trend, the steady state of the stationarized model along the BGP) as this constellation, since this is where the economy will return in the long run.

The steady state values of endogenous variables will be entirely pinned down by the values of parameters. Think, for example, of the simple Keynesian cross from intermediate macroeconomics: the “solution” is output as a function of autonomous expenditure and a multiplier, both of which are entirely determined by structural parameters; since everything else is a function of output itself (or exogenous), the other endogenous variables inherit this quality. One can then see how these values adjust under alternative parameterizations and choose the one which is “best” in some sense.
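As a toy illustration of "steady-state values are pinned down by parameters", here is the Keynesian cross in a few lines (all numbers made up):

```python
# Toy Keynesian-cross example: every endogenous value is a function of
# structural parameters and exogenous expenditure alone.
a, c = 100.0, 0.6              # autonomous consumption, marginal propensity to consume
I, G, T = 150.0, 200.0, 100.0  # exogenous investment, government spending, taxes

# "Solution": output from parameters and exogenous quantities only
Y = (a - c * T + I + G) / (1 - c)
# Other endogenous variables inherit this quality
C = a + c * (Y - T)
print(Y, C)
```

Changing any parameter (say the marginal propensity to consume `c`) moves Y and C jointly, which is exactly why calibrating many targets at once becomes a fitting problem.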

This would be simple if you had just a few variables and parameters. But DSGE models often have a large number of both, and changing one parameter might improve the fit for one variable while sacrificing the fit on another. One very simple way of choosing parameter values is the method of simulated moments (MSM), which compares the moments implied by your model at a given parameter constellation to those of the data, and adjusts the parameters to minimize the (weighted) distance between the two sets of moments. You as the modeler must choose (1) the target moments, (2) the parameters to vary, and (3) the weighting matrix. This is what Beaudry and Portier (2004) [link] do to calibrate a few parameters for which there is little data. They have a footnote providing further references for more technical detail and applications. This discussion on the Dynare forums will also help [link].
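A stripped-down sketch of the MSM idea, with one parameter and one target moment so the weighting matrix collapses to a scalar (all numbers and the grid search are illustrative stand-ins for a real moment set and optimizer):

```python
# Minimal method-of-moments sketch: pick the parameter value whose
# model-implied moment is closest to a data target. One parameter (alpha)
# and one moment (capital-output ratio), so the "weighting matrix" is a scalar.
beta, delta = 0.99, 0.025
KY_data = 10.0  # illustrative target: quarterly capital-output ratio

def model_moment(alpha):
    # Steady-state K/Y implied by the Euler equation of a basic RBC model
    return alpha / (1 / beta - 1 + delta)

def loss(alpha, weight=1.0):
    gap = model_moment(alpha) - KY_data
    return weight * gap**2  # (weighted) squared moment distance

# A crude grid search stands in for a proper optimizer
grid = [i / 1000 for i in range(100, 600)]
alpha_hat = min(grid, key=loss)
print(alpha_hat, model_moment(alpha_hat))
```

With several parameters and moments you would minimize the full weighted quadratic form instead, but the structure of the problem is the same.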

Finally, a comment on the steady state values you have previously seen: fitting the parameters to the levels of these macroeconomic data might come at a high cost. While you can probably approximate the levels reasonably well at some point in time, I suspect this will sacrifice the reasonableness of the transition dynamics with respect to the data. In that case your simulations might begin reasonably at your somewhat arbitrary starting period and then diverge from the data.

DSGE models have done fairly well explaining the “stylized facts” for many advanced economies - indeed, this focus on matching stylized facts is one of the main contributions of Kydland and Prescott (1982) [link]. You might consider calibrating the BGP to match features of the BGP, e.g. data on growth rates, cross-correlations, and autocorrelations of key macroeconomic variables, and then retroactively calculate the implied paths for the variables you want to express in levels.

I am sure others on this board have practical advice for implementing this.

@bdombeck’s answer mostly applies to stochastic simulations with perturbation techniques, where you indeed need to remove the trend/balanced growth path.
For your perfect foresight simulation, in contrast, you don’t need to do this. You can leave the trend in. See e.g. https://github.com/JohannesPfeifer/DSGE_mod/blob/master/Solow_model/Solow_nonstationary.mod
The tricky part is getting the level right. The problem is that the production technology is usually constant returns to scale, so the model involves some arbitrary constants, like the level of technology or the labor disutility parameter, that need to be set. Because of this, it is often easier to work with a normalized version of the model and then scale everything up.

I actually thought of normalizing my model à la Klump et al. (2008), if that’s what you meant. And yes, I would like to keep the trends in.

My issue is now with some of my exogenous variables. I can indeed compute the BGP values for all my endogenous variables from the BGP equations and parameters. However, I have three exogenous variables (investment, capital and subsidies in the electricity system) that I have to fix in Dynare because I forecast them “elsewhere”. If I have a BGP value of, let’s say, 2.89 for the stock of capital of my economy, how do I fix the value of the stock of capital of the electricity system (equal to 8 billion euros for 2020, e.g.)? The same applies to my two other exogenous variables. I could also take as examples the excise taxes and social transfers in my model, which are fixed too.

Your model typically only determines ratios, not levels. But that means you can work backwards: if you have the steady state of capital and the capital-to-output ratio, you can use the empirical output to compute capital.
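Spelled out in the numbers from this thread (the 2.89 capital value and the 2’500’000 million euro GDP; the Y=1 normalization is my assumption), the "work backwards" step is just:

```python
# Working backwards from ratios to levels: the normalized model gives a
# steady-state capital of 2.89 with output normalized to 1 (assumed here),
# so the K/Y ratio is what carries over to the data.
K_model, Y_model = 2.89, 1.0
KY_ratio = K_model / Y_model   # the model pins down this ratio

Y_data = 2_500_000.0           # empirical GDP in million euros
K_data = KY_ratio * Y_data     # implied empirical capital stock
print(K_data)                  # 7,225,000 million euros
```

The same rescaling applies to any other variable whose ratio to output the model determines.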

But what happens to my exogenous variables? I can’t specify SS values for endogenous variables that lie between 0 and 4, let’s say, alongside values for my exogenous variables that amount to billions of euros, like social transfers or excise taxes.

Most of my taxes are in %, but I have two excise taxes on my fuel and electricity products, and prices for fuels and electricity (which are exogenous and fixed). So I can’t make them ad valorem taxes (can I?).

Okay. I have an economy composed of households, firms and a public authority:

Households consume different goods, among which electricity and petroleum products, and make labor decisions (I could easily drop that part, as I don’t really care about labor decisions);

Firms have three factors of production: labor, capital and energy (which is a bundle of electricity and petroleum products);

The public authority levies taxes on consumption (ad valorem), on labor and capital revenues (ad valorem), and on energy products (excise taxes) from households and firms.

I have several exogenous variables: social transfers from the public authority; investment, capital and subsidies in the electricity system; and prices and taxes of energy products (electricity and petroleum products).
I already have the forecasted data for my energy variables (taxes/prices for electricity and petroleum products, and investment/capital/subsidies in the electricity sector) from a bottom-up engineering model, for 2017 to 2035.

I’d like to constrain my economy with these “energy” exogenous variables, and see:

a) how investments/subsidies in the electricity system (stemming from the energy transition policy) impact the GDP;
b) how deterministic shocks on energy taxes and prices impact the energy consumption behaviour of my agents and the finances of my public authority in the long run (from 2017 to 2035).

That is why I have a specific model with “exogenous” variables (meaning I already have their values for the whole simulation period) on the energy side of my model.

The thing with the prices is tricky. In the end, the model only requires relative prices, not absolute ones. One way to get around this is to determine the appropriate relative price again via a ratio, like the tax revenue (price times quantity) as a share of GDP.

I can do that for energy excise taxes, making them a share of the pre-tax price and hence expressing them in %. However, for energy prices it’s another issue.
I agree with the relative-price part. But the price of other (non-energy) goods is fixed to 1 for simplicity. Let’s say I take electricity demand for households. This will be expressed in terms of the relative price of electricity (200 euros per MWh of consumption) to the consumption price, which in my model is normalized to 1. So the relative price in the end will give me 200 euros…

I see many papers simulating an oil price shock in Dynare. Maybe I should suppose that the price of my energies is unity, so I can have the expression (Pe + Te)*E, with Te being the excise tax expressed in “%” of the pre-tax energy price. But I guess that having a price equal to 1 or 200 does not matter: if I want to increase the price of energy by 5%, say, the outcome for energy consumption should be the same (?) whether the price is at 1 or 200; it is just a matter of normalization (?).
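The intuition in that last question can be checked directly for an isoelastic (CES-style) demand function, which depends on the price level only through a multiplicative constant. The elasticity value below is made up:

```python
# Sketch of the normalization point: with isoelastic demand
# E = A * P**(-sigma), the percentage response of demand to a 5% price
# increase is identical whether the initial price is 1 or 200.
sigma = 0.8  # illustrative price elasticity of energy demand

def demand(P, A=1.0):
    return A * P**(-sigma)

for P0 in (1.0, 200.0):
    pct_change = demand(1.05 * P0) / demand(P0) - 1
    print(P0, f"{pct_change:.4%}")
```

So yes, under this functional form the level of the price is pure normalization; only relative changes matter for the quantity response.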

I’m getting back to this because I have a hard time finding output in the model. Indeed, all of my variables at the SS (in my case, the BGP) depend on SS output. My problem is that I have a CES production function, and I can’t solve for output at the SS from that function, though I have tried. I usually see papers with a Cobb-Douglas function, where it is much easier to find output at the SS.

I have two ideas:

Should I fix steady-state output at 1? I saw in Klump et al. (2011) that they fix N0 and K0 to find Y0. I wonder how they set that K0 (in my case N0 equals 1).
or

A) Should I express all my steady-state variables in deviation from output? Then I would have a clear SS in the initval block for each ratio relative to output, depending only on parameters.
B) If I apply that technique, how do I recover the levels after I run my model? Should I multiply the ratios obtained by the normalization point of my output from my data (my model is normalized à la Cantore and Levine (2011))?
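For what it's worth, here is a minimal sketch of the normalized-CES idea being discussed (à la Klump et al. / Cantore and Levine): choosing a normalization point (Y0, K0, N0) fixes the level, output equals Y0 at that point by construction, and ratios can then be scaled up by empirical output. All numbers besides N0 = 1 are illustrative:

```python
# Hedged sketch of a normalized CES production function: at the chosen
# normalization point (K0, N0), output equals Y0 by construction.
sigma = 0.6                   # elasticity of substitution (illustrative)
rho = (sigma - 1) / sigma
pi0 = 0.33                    # capital income share at the normalization point
Y0, K0, N0 = 1.0, 10.0, 1.0   # normalization point (N0 = 1 as in the thread)

def Y(K, N):
    return Y0 * (pi0 * (K / K0)**rho + (1 - pi0) * (N / N0)**rho)**(1 / rho)

print(Y(K0, N0))  # 1.0 by construction

# Scaling up to levels: multiply normalized values by empirical output
Y_data = 2_500_000.0  # million euros, from the question
print(Y(K0, N0) * Y_data)
```

With output normalized to 1, every other steady-state value is automatically a ratio to output, which is exactly option A); option B)'s rescaling is then the multiplication in the last line.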

I’ve followed your paper. Is it normal that I find a normalization factor of 53806841.31? My model is more complex, as I have multiple nested CES functions.
I mean, as long as all my variables (especially an output of 1 at the BGP) are consistent with that normalization factor, the value itself shouldn’t matter, right?

If I calibrated all my parameters based on long-run ratios, do I have to compute the BGP values of my variables? Or can I just use the long-run ratios for these variables, since I based my calibration on them?

Normally, if I use the calibrated parameters and compute the BGP variables from my FOCs, I should find that my variables at the BGP equal the ratios I used, right?

The problem is that I don’t recover my long-run ratios exactly when I use my FOCs and parameters to compute the variables at the BGP. The difference is only about 0.001 for the variables at the BGP, but when scaling everything up (multiplying all the BGP variables by the real value of output), the difference becomes huge.