Senior Product & Service Designer

I design products that hold up under real conditions — not just in the slide deck.

Eight years in product design. I work on the systems behind the screens — onboarding flows, provider tools, financial habits, AI workflows — across digital health, fintech, and enterprise platforms. Before that, I co-founded a fintech that raised $65M+.

8+ yrs · Product & service design
500K+ users · Digital health
$65M+ raised · Fintech founder
Goli Bahri
What I actually do

I design the whole product, not just the pretty parts.

That means flows, edge cases, internal tools, and the awkward moments between teams where most products quietly break.

I'm best on hard, multi-stakeholder problems where there isn't a clean answer yet — and where shipping the right thing matters more than shipping the obvious one.

01 / Selected work

Three products. One way of thinking.

Most "user problems" are actually team problems wearing a costume.

02 / About

The short version.

Goli Bahri

A decade in.

I'm Goli — a Senior Product & Service Designer based in Toronto.

Over the past few years, my work has shifted from designing screens to designing systems — the kind that sit behind complex, real-life experiences. Most recently at TELUS Health, I worked across three connected products: a digital CBT platform, a financial wellbeing experience, and internal care workflows built on Dynamics 365.

What that really means: I've been designing for situations where people are already under pressure — navigating mental health support, financial stress, or operational complexity behind the scenes. That changes how you think about design. It's not about making things look good. It's about making things work — for everyone involved.

Before that, I co-founded a fintech crowdfunding platform (Mehrabane), where I built the product from 0 to 1. That experience shaped how I think about ownership, trade-offs, and impact. When you're close to real users, real money, and real outcomes, you stop designing in isolation.

These days, I'm most interested in problems that sit at the intersection of product, service, and operations — where design can actually shift how a system behaves, not just how it looks.

How I think.

I start with outcomes, not features. Before I open Figma, I want to understand:

  • What actually changes for someone if this works?
  • What breaks if it doesn't?
  • Who else is carrying the weight of this decision?

A lot of "UX problems" aren't really UX problems — they're system problems, workflow gaps, or misaligned incentives showing up in the interface.

I care about design that holds up in reality. Not just in a review, not just in a prototype — but in the messy, cross-functional, real-world environment where people are trying to do their jobs.

A few things I've learned along the way:

  • The most interesting problems are almost always systemic
  • Onboarding is rarely just onboarding
  • Internal tools are the product
  • If something feels like a user problem, it's often a team problem underneath

How I work.

I do my best work when I'm embedded in the problem — not just handed a brief.

I partner closely with product, engineering, research, and domain experts to shape direction early. I like getting into the details of how things actually work — where the constraints are, what's expensive, what's fragile, what's overlooked.

I'm direct, and I care about clarity. I'll push when something doesn't make sense, but I'm quick to adapt when there's a better idea in the room.

I've also spent time mentoring designers and helping teams get sharper — clearer thinking, better collaboration, fewer "we'll figure it out later" moments.

Outside of work.

I care a lot about the human side of what we build.

Mental health, financial dignity, and everyday experiences that quietly make life easier — those are the kinds of problems I want to work on. The ones that don't always get attention, but matter deeply to people.

I still think like a builder. I'm drawn to 0→1 ideas, complex systems, and teams where design has a voice in shaping direction — not just execution.

On a more personal note: I love LEGO (the more intricate, the better), I collect Funko Pops, and I'll always pick an owl as my favorite anything. I'm also a big comic book fan — DC and Marvel — probably because I've always been drawn to layered worlds, complex characters, and stories that connect across systems… which, now that I think about it, isn't that different from how I approach design.

Case study · 01

Trust, by design.

A digital CBT platform — therapist-led, multi-stakeholder, end-to-end.

CBT platform across desktop, mobile, and tablet
Role
Senior Product Designer (lead). End-to-end across web, iOS, Android + internal tools.
Team
Cross-functional: clinical leads, 2 PMs, 4 engineers, researcher, ops. Sole senior designer across this surface area.
Timeline
~18 months · multiple releases

01 — The hook

A mental health service that earns trust before it asks for it.

This was a digital CBT platform serving people in active mental health care — patients on one side, licensed therapists on the other, an internal ops team holding it all together.

It's the kind of product where every design decision has consequences. A confusing onboarding step doesn't just lower a conversion rate; it loses someone at the exact moment they were brave enough to ask for help. A clunky therapist tool doesn't just irritate the provider; it eats into the 50 minutes a patient is paying for.

I led product design across the patient experience (web responsive + native mobile) and the therapist + internal tooling underneath it. My job was to make the whole service feel like one product — not a stack of disconnected surfaces stitched together by support tickets.

02 — Context & problem space

Three problems, tangled together.

The product had a real problem: people were signing up, then disappearing.

Onboarding was long, intake-heavy, and felt clinical in the wrong way. Patients had to answer dozens of screening questions before they understood what the service even was. Therapists were getting matched to patients based on rules that made sense to ops but felt random to everyone else. And the internal team was running half the service out of spreadsheets.

A trust problem
Patients didn't have enough signal early to feel safe handing over their hardest stuff.
A service-design problem
The product was a thin layer on top of a heavy operational process, and the seams showed.
A system problem
Patient experience, therapist experience, and ops were being designed in three different rooms, optimizing for three different things.

This wasn't an onboarding redesign. It was a service redesign that happened to start at onboarding.

03 — Approach

I refused to start in Figma.

I sat in on therapist sessions (with consent), shadowed ops, watched five recorded patient onboardings without sound, and read every single drop-off support ticket from the last quarter. The picture that emerged was different from the brief.

The brief said: make onboarding shorter. The reality: people weren't dropping off because it was long. They were dropping off because they didn't trust the system enough to keep going.

Earn trust in 90 seconds, not 90 minutes.
Move heavy intake later. Show real human signal — therapist faces, real wait times, what a session feels like — before asking for anything emotionally expensive.
Design the service, not just the screens.
Patient onboarding, therapist matching, and ops triage are one system. Designed apart, they leak. Designed together, they reinforce each other.
Make the invisible work visible.
The ops team was doing extraordinary work that the product was hiding. We brought parts of it forward to make the service feel safer, not just smoother.

I made tradeoffs along the way. We chose to delay some clinical screening to later in the journey, which clinical pushed back on. We agreed on a structured compromise: defer the bulk of screening, never the safety-critical parts. That decision held up under audit and improved completion meaningfully.

04 — Key work & solutions

What we shipped — and why each piece mattered.

A reshaped onboarding spine.
Instead of a long form, a guided story — what's bothering you, what kind of help works for you, here's what we'd recommend, here's the therapist we think fits. Heavy clinical screening moved to a quieter post-match flow.
A therapist-first matching experience.
We made the therapist the hero of the moment, not the form. Real photos, written voice, the kind of cases they tend to take on. Matching went from feeling algorithmic to feeling considered.
A unified provider workspace.
A single workspace pulling together patient queue, session notes, scheduling, and safety flags. Therapists arrived at sessions less frazzled.
Internal ops dashboards.
A real-time triage tool that surfaced where patients were stuck and which cases needed a human. Service quality stopped being a guess.
A shared design language.
One system across patient app, therapist tools, and ops. Trust in a mental health product compounds when everything looks like the same hand made it.

05 — Impact

Directional, NDA-safe — but real.

  • Onboarding completion improved meaningfully — biggest single lift in the product's history at that point.
  • Time-to-first-session dropped, mostly because matching got faster and clearer.
  • Therapist tooling cut admin time per session — translating into more presence in actual sessions.
  • Ops handled significantly more cases without growing headcount.
  • Patient sentiment shifted from "I wasn't sure if this was real" to "I felt like the system actually saw me."

That last one is the one I'm proudest of.

06 — Reflection

What I learned, and what I'd do differently.

What I learned. Trust isn't a screen, it's a sequence. You build it by designing what people see and what they feel coming.

What I'd do differently. I'd bring ops into the design process even earlier. We treated them as stakeholders for too long. They're co-designers — they know the failure modes nobody else sees.

The senior takeaway

A lot of mental health products try to solve trust through tone. That's table stakes. Real trust comes from system design. Tone gets you in the door. Systems keep people in the room.

Case study · 02

Habits, not features.

A consumer financial wellbeing platform — designed for the regular Tuesday, not the launch screenshot.

Financial wellbeing — personalized home and daily check-in
Role
Lead Product Designer. Personalization engine, behavior loops, core product surfaces.
Team
PM, behavioral scientist, 2 engineers, data, growth. Partnered closely with leadership on direction.
Timeline
~12 months · two major releases

01 — The hook

Most finance apps are dashboards in disguise.

They show you numbers, they don't change your relationship with them.

This product was trying to do something harder: actually shift how people behave with money. Save a little more. Spend a little more intentionally. Stop dreading the app icon. The hard part wasn't building features — the category is full of features. The hard part was building a product people would come back to once the novelty wore off.

I led product design across the parts of the product that decided whether people stayed: onboarding, personalization, daily habit loops, and the moments where the app had to either nudge or get out of the way.

02 — Context & problem space

Acquisition was fine. Retention was the story.

When I joined, the product had decent acquisition and weak retention. Classic curve: big install spike, fast drop-off, the long tail you don't want.

Personalization was performative.
The app said it was personalized. In practice, two users with very different financial lives saw nearly identical experiences.
The habit loop was thin.
No clear reason to come back tomorrow. Notifications were generic. Win moments were buried under chrome.
The product was speaking the wrong language.
Most users didn't enjoy thinking about finance. They felt anxious, behind, or just bored. The product was answering questions they weren't asking.

The business problem was retention. The product problem was that we hadn't yet earned a place in someone's day.

03 — Approach

Why would someone open this on a regular Tuesday?

I anchored the work in a single question: what would have to be true for someone to want to open this app on a regular Tuesday — when nothing exciting is happening with their money?

That question reshaped a lot of the roadmap.

Personalization people can feel.
If the system is personalized but the user can't tell, it's not personalized — it's just an expensive backend.
Make the smallest action the most rewarding.
The biggest behavior change came from making "did the small thing" feel as good as "did the big thing." We over-celebrated small wins on purpose.
Respect that money is emotional.
A calm friend who's good with numbers — not a coach yelling about goals.

Tradeoffs: growth wanted aggressive nudges; retention wanted fewer, smarter ones. I made the case that aggressive notifications were borrowing trust from a product that hadn't built any yet. We landed on a quieter, more personal system that performed better long-term.

04 — Key work & solutions

The spine of retention.

A personalization layer that actually shows up.
Different home screens depending on where someone was in their financial life. Same product, very different experience.
Habit loops, designed deliberately.
One keystone habit (a small daily check-in), with the rest of the product reinforcing it. Became the spine of retention.
Goal-setting that feels like a conversation.
A few questions, a recommendation, a path. Goals that fit people's lives, not aspirations they'd feel guilty about.
A redesigned notification system.
Fewer, way more relevant. Open rates up. Unsubscribes down.
A new product voice.
Less coachy, less finance-bro. The voice change shifted sentiment in a way you could feel in support tickets.

05 — Impact

From acquisition story to habit story.

  • Day-30 retention improved meaningfully — the kind of lift you usually only get from a major new feature.
  • The keystone habit became the most reliable predictor of long-term retention.
  • Notification engagement up. Unsubscribes down.
  • NPS moved in the right direction. Sentiment shifted from "I'm stressed" to "I actually like checking in."

On the business side, this work changed how leadership talked about the product — from "an acquisition story" to "a habit story."

06 — Reflection

The emotional layer is where retention lives.

What I learned. In behavior-change products, the design unit isn't a screen, it's a moment in someone's week. If you don't design the moment, you're just decorating it.

What I'd do differently. I'd push earlier on personalization being a product principle, not a feature on the roadmap. We treated it like a thing we'd add and ended up rebuilding it later.

The senior takeaway

Finance products over-design the rational layer and under-design the emotional one. You can't math your way out of how someone feels about money — but you can design around it.

Case study · 03

AI, exactly where the work is.

An AI assistant inside a clinical platform — built for therapists and ops, not patients.

AI case management — case detail with AI suggestion sidebar
Role
Senior Product Designer. AI workflows: scoping, flows, error states, trust model, integration.
Team
PM, ML/AI engineering, clinical leads, ops, compliance. Design partner across all of them.
Timeline
~9 months · framing to staged rollout

01 — The hook

The hardest part of designing for AI right now isn't the model.

It's deciding where the model belongs in the work — and where it absolutely doesn't.

This project added an AI layer to an existing clinical platform. The audience was therapists and internal staff, not patients. The goal wasn't to make the AI visible or impressive; it was to remove a category of busywork that was quietly burning out the clinical team — without introducing new risks.

My job: make AI useful, controllable, and almost boring — the highest compliment you can pay an AI feature in a healthcare context.

02 — Context & problem space

A service-delivery problem dressed up as a software problem.

Therapists were spending a meaningful chunk of every working day on admin: documenting sessions, catching up on patient context, writing follow-ups, updating the rest of the team. The longer the caseload, the worse it got.

The work was high-stakes.
Clinical notes affect care. We couldn't afford "AI got it slightly wrong and nobody caught it" failure modes.
The system wasn't built for AI.
Adding AI on top of a mature platform meant designing for retrofit, not greenfield. Every new flow had to live alongside non-AI flows.
Trust was the real product.
Therapists are highly trained professionals. They don't need a chatbot. They need an assistant that respects their judgment and their license.

03 — Approach

Design the trust model before the UI.

I started by mapping the actual workday — not the idealized one. That gave us a list of candidate AI moments. Then we filtered hard.

Not every task should be AI-assisted.
We were ruthless about cutting candidate AI features. AI earns its keep on long, repetitive, low-judgment work — not short, easy, high-trust work.
Trust model first, UI second.
For each task we defined: what does AI propose, what does the human approve, what's logged, what's reversible. The UI was a translation of the trust model.
Always show the seams.
Drafts were drafts. Suggestions were suggestions. Confidence was visible. The product never pretended to know more than it did.

PM wanted full automation; I pushed for human-in-the-loop. Automation we couldn't audit was automation we couldn't ship in healthcare. We landed on staged automation — confidence-building first, automation later. Compliance loved this. Clinical loved this. Ops eventually loved this.

04 — Key work & solutions

Useful, controllable, almost boring.

AI-assisted session notes.
AI drafts a structured note, the therapist edits, the system learns their voice. Drafts always labeled, never auto-saved as final.
Patient context summaries.
A "before-session brief" — short, source-linked, editable. Replaced re-reading entire case histories.
A shared trust pattern.
Same labels (Draft, Suggested, Auto), same affordances (edit, override, dismiss, see source) across every AI surface.
An ops-side review surface.
Internal dashboard to monitor AI quality at the system level. Made the AI program legible to leadership and gave us a real feedback loop.
Failure states designed first.
Most AI products design the happy path and bolt on errors. We did the opposite.

05 — Impact

High adoption, healthy override rates, clean compliance sign-off.

  • Therapist admin time per session dropped meaningfully — described by clinical leadership as one of the most material caseload-sustainability improvements in years.
  • Adoption was unusually high for an internal AI feature — driven by the trust-first design choices, not the model.
  • Override rates were healthy and trending in the right direction over time.
  • Ops gained a visibility layer they'd never had into clinical workload — changed how they staffed.
  • Compliance signed off cleanly because the trust model was documented in the design itself.

06 — Reflection

There's almost always a second user.

What I learned. The hardest design problem in AI products isn't the surface — it's deciding what the AI is allowed to do, and how clearly the user can see and control it. The UI is the easy part once those decisions are made.

What I'd do differently. I'd build the ops-side observability surface in the very first release, not the second. Design teams talk about "the user," but for AI products there's almost always a second user — the team operating the system — and they need product-grade tools, not afterthought dashboards.

The senior takeaway

The bar for AI in healthcare isn't "can it do the task?" It's "can a clinician trust it enough to keep their license intact?" Once you frame it that way, the design problem becomes much clearer.