

To measure software delivery performance, more and more organizations are defaulting to the four key metrics as defined by the DORA research program: change lead time, deployment frequency, mean time to restore (MTTR) and change fail percentage. This research and its statistical analysis have shown a clear link between high delivery performance and these metrics; they provide a great leading indicator for how a delivery organization as a whole is doing. We're still big proponents of these metrics, but we've also learned some lessons. We're still observing misguided approaches with tools that help teams measure these metrics based purely on their continuous delivery (CD) pipelines. In particular when it comes to the stability metrics (MTTR and change fail percentage), CD pipeline data alone doesn't provide enough information to determine what counts as a deployment failure with real user impact. Stability metrics only make sense if they include data about real incidents that degrade service for the users. We recommend always keeping in mind the ultimate intention behind a measurement and using it to reflect and learn. For example, before spending weeks building up sophisticated dashboard tooling, consider just regularly taking the DORA quick check in team retrospectives. This gives the team the opportunity to reflect on which capabilities they could work on to improve their metrics, which can be much more effective than overdetailed out-of-the-box tooling.
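To make the point about data sources concrete, here is a minimal sketch of how the four metrics might be computed. The `Deployment` and `Incident` records and the `caused_by_change` flag are hypothetical illustrations, not part of DORA or any particular tool; the point is that the stability metrics require incident data with real user impact, which a CD pipeline alone cannot supply.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional


@dataclass
class Deployment:
    committed_at: datetime   # when the change was committed
    deployed_at: datetime    # when it reached production


@dataclass
class Incident:
    started_at: datetime     # when users were first impacted
    resolved_at: datetime    # when service was restored
    caused_by_change: bool   # whether a deployment triggered it


def four_key_metrics(deployments: List[Deployment],
                     incidents: List[Incident],
                     period_days: int = 30) -> dict:
    # Throughput metrics can be derived from pipeline data alone.
    lead_times = [d.deployed_at - d.committed_at for d in deployments]
    change_lead_time = sum(lead_times, timedelta()) / len(lead_times)
    deployment_frequency = len(deployments) / period_days  # per day

    # Stability metrics need real incidents that degraded service for users;
    # pipeline failures alone are not enough.
    change_failures = [i for i in incidents if i.caused_by_change]
    change_fail_percentage = 100 * len(change_failures) / len(deployments)
    restore_times = [i.resolved_at - i.started_at for i in incidents]
    mttr: Optional[timedelta] = (
        sum(restore_times, timedelta()) / len(restore_times)
        if restore_times else None
    )

    return {
        "change_lead_time": change_lead_time,
        "deployment_frequency_per_day": deployment_frequency,
        "change_fail_percentage": change_fail_percentage,
        "mttr": mttr,
    }
```

Even in this simplified form, the incident feed (where `started_at`, `resolved_at` and `caused_by_change` come from) has to originate in incident management, not the pipeline, which is exactly why pipeline-only tooling tends to misrepresent MTTR and change fail percentage.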
