Measuring Service Performance at Government of Alberta

The Overview
Working within the public sector means navigating complex governance environments, cross-ministry dependencies, and high-impact decision pathways. Even minor design changes can have wide-ranging effects, making thoughtful planning and stakeholder alignment essential. And here I was, working on a dashboard that needed to do more than just show numbers. It had to accurately reflect how every service under every ministry was performing, highlight the true story behind each service, and reveal how effectively funds were being used.
The Problem
The primary challenge was low user engagement and adoption. Each quarter, teams were required to update the dashboard with new performance metrics: some standardized, others unique to individual services. To users, this felt like yet another administrative task piled onto their already heavy workloads.
Compounding the issue, the dashboard failed to accurately reflect the real story behind the numbers. Certain metrics were measured only once, yet the dashboard displayed them as stagnant, because there was no way to mark a completed target with an end date. When metrics didn’t meet expectations, users had no place to add context or explanations, leaving executives with an incomplete or misleading picture.
Over time, this created a cycle of frustration: users dreaded returning to the dashboard, updates were delayed, and when they did come back after long gaps, they often couldn’t remember where or how to make the necessary changes. The result was a tool that was neither trusted nor effectively used.
My Actions & Processes as a Senior Front-End Designer
When I joined the project, my first step was to deeply understand the product, the people using it, and the operational realities behind their frustrations. Asking questions has always been my way of building relationships, uncovering assumptions, and fully understanding a system end-to-end. For this project, it became the foundation of my process.
Initiating User Research through Usability Testing
I recommended starting with usability tests to hear directly from users—what worked for them, what didn’t, and why the dashboard had become a burden rather than an effective tool. Leading these sessions required careful preparation, so I began by segmenting users into meaningful groups:
- frequent users
- users who had access but rarely used the dashboard
- new or inexperienced users
For each group, I designed tailored question sets to draw out relevant insights. All sessions were moderated, with detailed notes and observations captured. For many participants, this was the first time they felt truly heard.

Usability Test Work
Synthesizing Insights into Actionable Themes
After completing the sessions, I analyzed the feedback, organized insights into categories, and translated them into clear opportunity areas. This helped surface patterns in behavior, pain points, and system gaps that were previously invisible to the team.

Post-usability test workshop
Designing Targeted Enhancements
Based on user needs and business priorities, I worked on several key improvements, including:
- enabling monthly metric updates for greater granularity
- adding a context field so users could explain unexpected performance trends
- incorporating cost data to give executives clearer visibility into investment vs. outcomes
- enhancing the onboarding experience with a setup wizard, so users could revisit the setup tutorial whenever they needed it
- enabling users to add statuses and end dates to metrics for improved tracking
These enhancements directly addressed the core adoption issues and provided executives with a more accurate narrative behind the numbers.
High-Fidelity Design
Impact
- Improved performance visibility: Users could now measure service performance at a far more granular level, gaining clearer insights into trends, progress, and operational health.
- Reduced cognitive load: By shifting from quarterly manual aggregation to monthly inputs, the design eliminated mental calculations and reduced opportunities for error.
- Increased user trust and engagement: Through research sessions where users finally felt heard, feedback became more frequent and constructive, creating a healthier feedback loop between teams and stakeholders.
- Higher platform adoption: With clearer workflows, contextual fields, and more accurate storytelling through data, adoption rates increased as users felt the dashboard reflected their work more honestly and effectively.
- Greater decision-making confidence for executives: Enhanced metric granularity and the ability to understand “the story behind the numbers” allowed leadership to make more informed, context-aware decisions.
- Improved onboarding and task recall: An always-accessible setup wizard helped users quickly remember how to navigate and use key dashboard functionalities, reducing their reliance on lengthy guidance documents, which they had previously avoided, and significantly cutting the time spent searching for instructions.
- Streamlined access to supporting information: An external link within the tool let users navigate directly to relevant documents for additional context, reducing friction and helping them reference critical information without leaving the workflow or searching through multiple systems.
- Clearer performance narratives and accountability: Adding target timelines to metrics communicated not just what was being measured but until when, giving executives a clearer understanding of progress and intent. Status indicators enabled admins and leadership to quickly spot risks or flags, directing attention to the areas that most needed intervention.
- Enhanced financial visibility and value tracking: A cost component for each service gave leadership a clearer understanding of where funds were allocated and how effectively they were used, enabling more informed budgeting and investment decisions.