Over the last few years, I’ve become increasingly uneasy about the role of “adoption” as a metric for a design system’s success. My working theory is that the metric is popular precisely because design systems have, historically, been so difficult to measure. Often, the adoption of a given component is the easiest thing a design systems team *can* measure.1
But adoption is a fuzzy, imprecise metric. And it’s often an opaque one. Focusing on the adoption of a given component obscures other, more critical measures of success. Adoption can’t convey how successfully teams are using a given component, or why they’re using it; it can’t tell you how effectively the pages built with your design system are performing, or how satisfied your users are. Focusing solely on adoption prevents organizations from having harder — but more valuable — conversations with teams, and from getting a richer, more qualitative sense of how their design system is performing for its community.
Additionally, a single-minded focus on adoption causes organizations to treat design systems work as something akin to, well, library maintenance. Looking at adoption alone can encourage a “component machine go brr” mindset toward the design system, treating it more like a software platform in need of patches and releases. And that, in turn, often leaves the critical infrastructural work — the community management, the relationship-building, the ongoing research — by the wayside.
I don’t want to suggest that it’s not beneficial to measure the adoption of your design system across an organization. It absolutely can be. But if adoption is the sole marker of success you’re tracking, that might signal a different set of challenges.
1. That’s not to say it’s easy, mind you. For many organizations, it remains fiendishly complicated. (And often, expensive.) ↩