Healthcare Client - AI Supported Triage Experience
Client Project | UX Design
To respect confidentiality, this case study is intentionally abstracted and focuses on my design process, thinking, and decisions rather than final production screens or client‑specific details.

Overview
This project explored how human‑centered design and AI can support people navigating healthcare uncertainty. Working as part of a cross‑functional team that included researchers, strategists, and fellow designers, we set out to envision a future‑state navigation experience that blends empathetic guidance with AI support.
The collective goal was to reduce confusion and decision fatigue while preserving user agency and trust. My role focused on translating shared insights and principles into concrete design solutions that demonstrate how the experience could function in real‑world scenarios.
Time
Nov 2025 - Jan 2026
Methods & Tools
Research Methods
- Stakeholder workshop
- Workshop analysis
- Persona/scenario‑based design
- Rapid prototyping for concept validation
Tools
- Figma (flows, wireframes, proof‑of‑concept prototypes)
- FigJam / Mural (workshops, synthesis)
The Challenge
Healthcare navigation often breaks down when users are unsure what type of care they need, how urgent their situation is, or what their next step should be. While technology can help, overly automated solutions risk eroding trust, especially in high‑stress moments.
As a team, we aligned on the need for an experience that:
- Supports decision‑making without forcing it
- Makes recommendations transparent and explainable
- Adapts to different user needs and comfort levels
Team Approach
This work was highly collaborative. As a group, we:
- Synthesized research findings and identified core user challenges
- Defined experience principles around trust, clarity, and optionality
- Aligned on how intelligent assistance should support, not replace, human judgment
Regular workshops, critiques, and feedback loops ensured design decisions stayed grounded in shared goals and user needs.
My Individual Approach
Within this collaborative framework, I owned key aspects of the design execution by:
- Designing three persona‑based prototypes to stress‑test the experience across different user contexts, emotional states, and levels of digital comfort
- Leading the UX design of AI‑informed interaction patterns, ensuring guidance was optional, explainable, and easy to bypass
- Defining and applying design guardrails for responsible AI use, such as clear rationale (“why”), confirmation moments, and visible paths to human support
- Creating clear flows and visual narratives that helped the team evaluate how the experience would feel over time
My focus was on turning abstract principles into tangible, testable designs that the broader team could discuss, refine, and build upon.
The Design
Rather than designing a single linear flow, the solution was explored through scenario‑based prototypes, each reflecting a different user need—from quick reassurance to long‑term care coordination.
Across all scenarios, key design decisions included:
- Presenting guidance as suggestions, not directives
- Clearly explaining decision tradeoffs in plain language
- Maintaining consistent tone, accessibility, and trust cues
- Designing for continuity across steps, not one‑off interactions
These choices ensured the experience felt supportive and adaptable rather than prescriptive.
Key Outcomes
The resulting prototypes demonstrated how a navigation experience could:
- Reduce confusion and decision fatigue
- Increase confidence in next steps
- Support both digital‑first and human‑assisted pathways
- Feel respectful, approachable, and trustworthy

The work provided a clear foundation for future exploration and helped align stakeholders around a shared, user‑centered vision.
Reflection
This project reinforced the importance of designing with restraint alongside innovation, particularly when working with AI in sensitive domains like healthcare. Collaborating closely with the team strengthened my ability to balance multiple perspectives while still taking ownership of execution and decision‑making.
The project made me more confident in my ability to design AI‑informed experiences that are transparent, empathetic, and grounded in real human needs.
