Statistical decisions based partly or solely on predictions from probabilistic models may be sensitive to model mis-specification. Statisticians are taught from an early stage that all "models are wrong", but little formal guidance exists on how to assess the impact of model approximation, or how to proceed when optimal actions appear sensitive to model fidelity. In this talk I will present a general applied framework to address this issue. It builds on diagnostic techniques, including graphical approaches and summary statistics, to highlight decisions made by minimizing expected loss that are sensitive to model mis-specification. The stability of decision systems can then be assessed by considering perturbations within a neighborhood of model space centered at the (mis-specified) approximating model. This neighborhood can be defined either via an information (Kullback-Leibler) divergence or by using the Dirichlet process as a non-parametric extension of the model. A Bayesian approach is adopted throughout, although the methods are agnostic to this position.
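The general idea of a KL-divergence neighborhood can be illustrated with a minimal sketch (not the talk's actual method; all models, losses, and parameter values below are illustrative assumptions). Under squared-error loss the expected-loss-minimizing action for a Gaussian model is its mean, and for two equal-variance Gaussians the KL divergence is (m1 - m0)^2 / (2 s^2), so the range of optimal actions over a KL ball of radius eps can be computed in closed form:

```python
import math

def kl_normal(m1, s1, m0, s0):
    """KL( N(m1, s1^2) || N(m0, s0^2) ) in nats."""
    return math.log(s0 / s1) + (s1**2 + (m1 - m0)**2) / (2 * s0**2) - 0.5

def optimal_action_sq_loss(mean):
    # Under squared-error loss, the expected-loss minimizer is the mean.
    return mean

# Baseline (possibly mis-specified) approximating model: N(0, 1).
m0, s0 = 0.0, 1.0
eps = 0.05  # KL radius of the neighborhood (illustrative value)

# Equal-variance mean-shift perturbations satisfy KL = (m1 - m0)^2 / 2,
# so the KL ball corresponds to |m1 - m0| <= sqrt(2 * eps).
shift = math.sqrt(2 * eps)
perturbed_means = [m0 - shift, m0, m0 + shift]
actions = [optimal_action_sq_loss(m) for m in perturbed_means]

# Width of the action range indicates decision sensitivity to mis-specification.
print(min(actions), max(actions))
```

If the spread of optimal actions over the neighborhood is large relative to the decision's consequences, the decision is flagged as sensitive to the model approximation.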