
CASE STUDY 01

Measuring

digital service

performance

A performance dashboard that stopped just showing data - and started telling the truth behind it.

AI-ASSISTED RESEARCH SYNTHESIS 

CLIENT

Govt. of Alberta

DOMAIN

Public Sector . Service Design

MY ROLE

Senior Product Designer

CORE METHODS

Usability testing . Research synthesis . Hi-fi design

IMPACT

ADOPTION

EXECUTIVE CONFIDENCE

COGNITIVE LOAD

OVERVIEW

Where every design decision

has consequences.

Working within the public sector means navigating complex governance environments, cross-ministry dependencies, and high-impact decision pathways. Even minor design changes can have wide-ranging effects, making thoughtful planning and stakeholder alignment essential.

 

The challenge: a dashboard that needed to do more than display numbers. It had to accurately reflect how every service under every ministry was performing, reveal the true story behind each metric, and show how effectively public funds were being utilized.


THE PROBLEM

A tool 

nobody trusted.

The primary challenge was low user engagement and adoption. Each quarter, teams were required to update the dashboard with new performance metrics - some standardized, others unique to individual services. To users, it felt like yet another administrative burden piled on top of an already heavy workload.

" When  metrics didn't meet expectations, users had no place to add context - leaving executives with an incomplete, often misleading picture."

Compounding the issue: certain metrics were measured only once, yet the dashboard displayed them as permanently stagnant - because there was no way to mark a target as complete. Over time this created a cycle of frustration. Users dreaded returning. Updates were delayed. And when they finally did come back, they couldn't remember how to navigate the tool.

 

The result: a tool that was neither trusted nor effectively used by the people it was built for.

RESEARCH AND DISCOVERY

Listening before

designing.

My first step was to deeply understand the product, the people using it, and the operational realities behind their frustrations. I recommended starting with usability tests - segmenting users into meaningful groups so each session could surface distinct, relevant insights.

GROUP 01

Frequent Users

Power users who knew the system's workarounds and could articulate exactly where it broke down.

GROUP 02

Lapsed Users

Had access but rarely engaged - their avoidance was the most telling signal about adoption barriers.

GROUP 03

New Users

Fresh eyes that revealed onboarding failures invisible to anyone already familiar with the tool.

For many participants, it was the first time they felt truly heard. All sessions were moderated with detailed observation notes captured throughout.


USABILITY TEST SYNTHESIS - MIRO

AI IN THIS PROJECT

From raw notes to clear themes - faster.

After completing the usability sessions, I used AI to help process and cluster the qualitative feedback. Interview transcripts and observation notes were fed into an AI synthesis tool to surface recurring language patterns and behavioral themes - work that once took days of manual affinity mapping, compressed into focused hours. The output wasn't the insight; my synthesis was. AI just cleared the noise.

TOOLS USED: CHATGPT . NOTEBOOKLM 


POST USABILITY TEST WORKSHOP SYNTHESIS

DESIGN SOLUTIONS

Targeted enhancements,

not a redesign.

1

Monthly Metric Update

Greater granularity in how performance is tracked - shifting from quarterly to monthly inputs to reduce cognitive load and improve accuracy.

2

Context Field

A dedicated space for users to explain unexpected performance trends - so executives see the story, not just the trend.

3

3

Cost Data Integration

Financial visibility added per service - enabling leadership to connect investment decisions to actual outcomes.

4

Setup Wizard

An always-accessible onboarding tool so users could re-learn the dashboard without hunting through documentation.

5

Status and End Dates

Metrics could now be marked complete - eliminating the false "no progress" signal that had been eroding trust in the data.

6

External Link Integration

Direct links to supporting documents within the tool - reducing the friction of context switching mid-task.

HIGH FIDELITY DESIGNS

The screens

that shipped.

SCREENS SHOWN: SETUP WIZARD . EXTERNAL LINKS . DEACTIVATING METRICS . MONTHLY CADENCE . COST DATA . TARGET TIMELINE

THE IMPACT

What actually

changed.

These weren't incremental polish updates. Each enhancement directly closed a gap between how the system worked and what users needed - resulting in a tool that was used more, trusted more, and that made better decisions possible.

Improved Performance Visibility

Monthly inputs replaced quarterly aggregation - giving teams and leadership a far more granular, accurate view of service health.

Reduced Cognitive Load

Smaller, more frequent updates eliminated the mental overhead of recalling months of data in a single session.

Increased Trust & Engagement

Users who felt heard during research became advocates. Feedback loops between teams and stakeholders became healthier and more constructive.

Higher Platform Adoption

Clearer workflows and more honest data storytelling gave users confidence that the dashboard reflected their work accurately.

Better Executive Decisions

Context fields and target timelines gave leadership the narrative behind the numbers - not just the numbers themselves.

Financial Clarity

Cost data per service enabled leadership to connect investment decisions to service outcomes for the first time. 

NEXT CASE STUDY

IMPROVING MINING OPERATIONS @NUTRIEN

bottom of page