Returning the Gradient


I want to run an outside optimization routine in Python, trying both variational inference and Hamiltonian Monte Carlo. I imagine there must be a way to do this, but I was wondering if there is an easy way to solve a model and then return both the likelihood and the gradient, so I can pass them to the Python code. I would prefer analytic gradients (from analytical derivation) but would be fine either way.


Dynare’s `dsge_likelihood.m` can return the analytic gradient. See e.g. `tests/analytic_derivatives/fs2000_analytic_derivation.mod` in the Dynare repository on GitLab.
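On the Python side, once you have any routine that returns the log-likelihood together with its gradient (for instance by calling `dsge_likelihood.m` through the MATLAB Engine API for Python or Oct2Py, which is an assumption here, not something I've tested), HMC only needs that pair. A minimal sketch of the interface, using a stand-in Gaussian target in place of the Dynare call:

```python
import numpy as np

def log_post_and_grad(theta):
    """Stand-in target: standard Gaussian log-density and its gradient.
    In practice this function would instead call out to Dynare's
    dsge_likelihood.m and return its likelihood value and analytic
    derivative (hypothetical bridge, e.g. via matlab.engine)."""
    return -0.5 * theta @ theta, -theta

def leapfrog(theta, p, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics; this is the one
    place the gradient of the log posterior is actually consumed."""
    _, g = log_post_and_grad(theta)
    p = p + 0.5 * eps * g                  # initial half step for momentum
    for _ in range(n_steps):
        theta = theta + eps * p            # full position step
        _, g = log_post_and_grad(theta)
        p = p + eps * g                    # full momentum step
    p = p - 0.5 * eps * g                  # undo half of the last momentum step
    return theta, p

def hmc_step(theta, eps=0.1, n_steps=20, rng=None):
    """One HMC transition with a Metropolis accept/reject correction."""
    rng = np.random.default_rng() if rng is None else rng
    p0 = rng.standard_normal(theta.shape)
    logp0, _ = log_post_and_grad(theta)
    h0 = -logp0 + 0.5 * p0 @ p0            # Hamiltonian at the start
    theta_new, p_new = leapfrog(theta, p0, eps, n_steps)
    logp1, _ = log_post_and_grad(theta_new)
    h1 = -logp1 + 0.5 * p_new @ p_new      # Hamiltonian after the trajectory
    if np.log(rng.uniform()) < h0 - h1:    # accept with prob min(1, exp(h0-h1))
        return theta_new
    return theta
```

The only contract the sampler imposes is that `log_post_and_grad` returns a consistent (value, gradient) pair, so analytic derivatives from Dynare and numerical ones are interchangeable from Python's point of view.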