Most of my time working on Bayesian models is spent diagnosing divergences and bad model fits. It is surprising how much of that struggle with a broken model can be spared by doing proper prior predictive checks.

In my case, this often means testing whether the majority of predictions generated from prior draws can reproduce the experimental observations. Although I routinely do this, making predictions for a specific scenario is often a bit laborious, and sometimes I neglect it and fall back on some standard checks.
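To make this concrete, here is a minimal, library-agnostic sketch of such a check. The simple linear model, the prior scales, and the observations are all made-up stand-ins for illustration, not my actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up observations standing in for the experimental data
y_obs = np.array([2.1, 3.4, 2.8, 4.0, 3.1])
x = np.linspace(0.0, 1.0, y_obs.size)

# Draw parameters from the (assumed) priors: intercept, slope, half-normal noise scale
n_draws = 1000
intercept = rng.normal(0.0, 2.0, size=n_draws)
slope = rng.normal(0.0, 2.0, size=n_draws)
sigma = np.abs(rng.normal(0.0, 1.0, size=n_draws))

# Prior predictive: simulate one fake dataset per prior draw
noise = rng.normal(0.0, 1.0, size=(n_draws, x.size)) * sigma[:, None]
y_sim = intercept[:, None] + slope[:, None] * x[None, :] + noise

# Do the bulk of the simulated datasets cover the observations?
lo, hi = np.percentile(y_sim, [5, 95], axis=0)
covered = np.mean((y_obs >= lo) & (y_obs <= hi))
print(f"Fraction of observations inside the 90% prior predictive interval: {covered:.2f}")
```

In a probabilistic programming framework this amounts to the same thing: drawing from the prior predictive distribution (e.g. `sample_prior_predictive` in PyMC, or simulating the data in Stan's `generated quantities` block) and comparing the simulated data to what was actually measured.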

When the model then fails, I wonder whether it was the choice of priors or a wrong model specification, lose time and nerves over the model, and end up with several model specifications, none of them even halfway good.

Only when I finally sit down, take my time, go through the model step by step, and make proper prior predictions incorporating the data as I go, do I end up with a solid model specification including reasonable priors.

And surprisingly often, the divergences go away, or, if they remain, their underlying cause is revealed.

Of course, in this way the priors are informed by the data. But as long as it is ensured that the priors cover the whole range of possible data and additionally allow for surrounding values with a considerable margin, I don’t see a problem with priors informed by the dataset.
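As a rough illustration of that margin criterion, the following snippet (again with made-up stand-in observations and prior predictive draws) checks whether the central bulk of the prior predictive distribution brackets the observed range widened by a generous margin:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in observations and prior predictive draws (both hypothetical)
y_obs = np.array([2.1, 3.4, 2.8, 4.0, 3.1])
y_sim = rng.normal(0.0, 5.0, size=10_000)

# Widen the observed range by a generous margin
margin = 0.5 * (y_obs.max() - y_obs.min())
target_lo, target_hi = y_obs.min() - margin, y_obs.max() + margin

# The central bulk of the prior predictive should bracket that widened range
pred_lo, pred_hi = np.percentile(y_sim, [1, 99])
print("Priors cover the data range plus margin:",
      pred_lo <= target_lo and pred_hi >= target_hi)
```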