How do you measure the success of a design system?
October 7, 2020—There are many ways to assess the impact of a design system, varying from fully automated approaches—like scanning a codebase for design system code—to manual assessments that identify where and how the system is used. But the metric that underpins all these approaches is adoption. Product teams cannot get value from the system if they are not using it, and they cannot benefit from new capabilities if their implementation is not up to date.
At Morningstar we’ve been tracking adoption of our design system since the first release back in 2017, using a combination of surveys, coverage assessments, and direct engagement with product teams. While all of these methods have value, they are labor-intensive and provide only an incomplete, subjective view of system use.
To more effectively serve product teams, we needed a comprehensive view, and we needed to replace time-consuming surveys with a self-service, automated approach.
As we began our discovery, we had a guiding principle for our approach:
Our goal is to inform decisions, not dictate them.
With the above goals in mind, the MDS team will soon be launching an automated, self-service dashboard to help product teams track their adoption of MDS version 3, following a three-step process that allows teams to adopt gradually in accordance with their road map.
The first two steps focus on planning. Product teams complete a brief survey to share the link to their repo(s) along with a few other details, and then follow up with the MDS team to ensure we have tools and resources to support their requirements. We will also use this data to provide targeted communications regarding updates and changes to the system.
The third step—for products to upgrade to MDS version 3 and remove dependencies on earlier versions—will be tracked automatically from usage data, based on the MDS dependencies defined for each project in source code.
Tracking MDS dependencies in source code ensures a single and comprehensive view of system usage. While components, capabilities, and even applications are often reused and deployed in multiple contexts, all source repositories are accessible through a common environment, allowing scans to be automated.
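The scan itself can be straightforward. As a minimal sketch—assuming the components are published as npm packages under a scope such as `@mds/`, which is a hypothetical naming convention, not the actual package names—an automated job could walk each registered repository and collect the MDS dependencies declared in its `package.json` manifests:

```python
import json
from pathlib import Path

# Hypothetical package scope; the real MDS package names may differ.
MDS_SCOPE = "@mds/"

def scan_repo(repo_path):
    """Collect MDS dependencies declared in a repo's package.json files."""
    found = {}
    for manifest in Path(repo_path).rglob("package.json"):
        # Skip installed copies under node_modules; only declared deps matter.
        if "node_modules" in manifest.parts:
            continue
        data = json.loads(manifest.read_text())
        for section in ("dependencies", "devDependencies"):
            for name, version in data.get(section, {}).items():
                if name.startswith(MDS_SCOPE):
                    found[name] = version
    return found
```

Because every repository lives in the same source-control environment, a job like this can run on a schedule and feed the dashboard without any manual reporting.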
Since each MDS V3 component is a separate package, we can identify the specific versions of each component used. This will allow us to identify which teams may be out of date with one or more components, and which need to be notified when new versions are released. It will also allow us to ensure we invest in the most used and most valuable components.
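Flagging out-of-date teams then reduces to comparing the versions each team has resolved against the latest release of each component. A sketch, assuming plain `major.minor.patch` version strings (real version ranges would need a proper semver library) and hypothetical component names:

```python
def parse_version(v):
    """Parse a plain 'major.minor.patch' string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def out_of_date(used, latest):
    """Return components where a team's version trails the latest release.

    used:   e.g. {"@mds/button": "3.0.2"}  versions a team has resolved
    latest: e.g. {"@mds/button": "3.2.0"}  current release per component
    """
    return {
        name: (version, latest[name])
        for name, version in used.items()
        if name in latest and parse_version(version) < parse_version(latest[name])
    }
```

The same per-component data doubles as an investment signal: components that appear in the most repositories are the ones where improvements pay off most.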
While usage metrics assess the scope and impact of design system use, it is important to note that usage does not indicate coverage or quality. While the dashboard tells us which versions of MDS components a given application is using, it does not tell us how many instances of those components exist, how they are deployed, or whether they are used in accordance with brand guidelines. Just as a design system does not replace the need for designers, this dashboard does not replace the need for close collaboration with product teams to ensure the system is meeting real user needs. It simply provides another lens to guide and inform those efforts as the system evolves.
Once the dashboard is live, the next step is to work with product teams to ensure all products and capabilities are accurately reflected on the dashboard. As adoption proceeds, we will follow up with tools to help product teams assess coverage and quality, and offer consulting to teams that require more focused and extended support from MDS to ensure they are getting the most out of the system.