I have the following general theoretical question concerning Bayesian estimation of DSGE models: suppose we have a medium-scale DSGE model that features 14 exogenous shocks. The model is to be estimated on 10 observable time series. Given that there are more structural shocks than observable time series used in the estimation, can one be sure that the Bayesian estimation will be able to properly identify the shocks?

I am asking this question because the Dynare manual stipulates that in Bayesian estimation the number of shocks should be at least as large as the number of observable variables, which therefore includes the scenario in which the number of shocks exceeds the number of observables.

Hi,
of course you can never be sure that the shocks are properly identified, as you may, for example, be missing other kinds of shocks in your model that are correlated with the ones you try to identify, or your model may simply be misspecified. Moreover, it matters which data you use as observables, see e.g. ftp://ftp.ncsu.edu/pub/ncsu/economics/RePEc/pdf/whatvariables.pdf.
What Bayesian estimation of DSGE models as implemented in Dynare does is choose the parameters of the model (including the standard deviations of the shocks) so as to maximize the likelihood of your model, i.e. it maximizes the fit of your model to the data. If you have unobserved states, this is done using the Kalman filter. If your true error terms/shocks are normally distributed, this is the best you can do (the Kalman filter gives the optimal estimates). In this sense, given the data and the model, you cannot improve on these results.
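To make the likelihood evaluation concrete, here is a rough sketch (not Dynare's actual code) of how the Kalman filter computes the Gaussian log-likelihood for a generic linear state-space model; the matrices A, B, C and the model form are illustrative assumptions, not taken from your model:

```python
import numpy as np

# Hypothetical linear state-space model:
#   state:       s_t = A s_{t-1} + B e_t,  e_t ~ N(0, I)
#   observables: y_t = C s_t
# Bayesian estimation searches over parameters to maximize this likelihood.

def kalman_loglik(y, A, B, C):
    """Gaussian log-likelihood of data y (T x n_obs) given the model."""
    n_s = A.shape[0]
    s = np.zeros(n_s)                  # predicted state mean
    Q = B @ B.T                        # state-shock covariance
    # unconditional state covariance via iterating P = A P A' + Q
    P = Q.copy()
    for _ in range(1000):
        P = A @ P @ A.T + Q
    ll = 0.0
    for y_t in y:
        v = y_t - C @ s                # one-step-ahead forecast error
        F = C @ P @ C.T                # forecast-error covariance
        Finv = np.linalg.inv(F)        # fails if F is singular (too few shocks)
        ll += -0.5 * (len(y_t) * np.log(2 * np.pi)
                      + np.linalg.slogdet(F)[1] + v @ Finv @ v)
        K = A @ P @ C.T @ Finv         # Kalman gain
        s = A @ s + K @ v              # predicted state for next period
        P = A @ P @ A.T + Q - K @ C @ P @ A.T
    return ll

# demo on simulated data from the same toy model (one shock, one observable)
rng = np.random.default_rng(0)
A = np.array([[0.9]]); B = np.array([[1.0]]); C = np.array([[1.0]])
s_true = np.zeros(1); y = []
for _ in range(50):
    s_true = A @ s_true + B @ rng.standard_normal(1)
    y.append(C @ s_true)
ll = kalman_loglik(np.array(y), A, B, C)
```

The key line is the inversion of the forecast-error covariance F: the whole recursion only works when that matrix has full rank, which connects directly to the singularity issue below.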
The reason you need at least as many shocks as observables is that otherwise you get “stochastic singularity” (see e.g. the introduction of cireq.umontreal.ca/publications/17-2003-cah.pdf). In this case, estimation is simply impossible. Hence, you need as many shocks as observables as a technical requirement to estimate the model. A common way around this constraint is to introduce measurement error. If you have more shocks than observables, this poses no problem a priori. Identification is achieved by maximizing the joint likelihood of the model given the data.
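The stochastic singularity can be illustrated numerically with a toy example (all numbers hypothetical): if a single shock drives two observables, the model-implied covariance matrix of the observables has rank one, so it cannot be inverted in the likelihood; adding measurement error restores full rank:

```python
import numpy as np

# One AR(1) state driven by one shock, but TWO observables loading on it:
#   s_t = 0.9 s_{t-1} + e_t,   y_t = C s_t
A = np.array([[0.9]])          # state transition
B = np.array([[1.0]])          # shock loading
C = np.array([[1.0], [2.0]])   # two observables, both linear in one state

# unconditional state variance from iterating P = A P A' + B B'
P = B @ B.T
for _ in range(500):
    P = A @ P @ A.T + B @ B.T

F = C @ P @ C.T                          # 2x2 covariance of the observables
rank_no_me = np.linalg.matrix_rank(F)    # rank 1: singular, likelihood undefined

# adding measurement error with variance 0.1 on each observable
F_me = F + 0.1 * np.eye(2)
rank_with_me = np.linalg.matrix_rank(F_me)  # rank 2: invertible again
```

With more shocks than observables the same matrix F is generically full rank, which is why that case poses no problem a priori.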