Eight years in product design. I work on the systems behind the screens — onboarding flows, provider tools, financial habits, AI workflows — across digital health, fintech, and enterprise platforms. Before that, I co-founded a fintech that raised $65M+.
I design the whole product, not just the pretty parts.
That means flows, edge cases, internal tools, and the awkward moments between teams where most products quietly break.
I'm best on hard, multi-stakeholder problems where there isn't a clean answer yet — and where shipping the right thing matters more than shipping the obvious one.
A multi-sided platform serving patients, therapists, and internal ops teams. I owned the design system, the onboarding spine, and the provider workflows.
Read case study →
Most finance apps treat people like spreadsheets. We treated them like humans trying not to feel broke. I led product design across habit loops, nudges, and personalized journeys.
Read case study →
Therapists were drowning in admin. I designed an AI layer that drafted notes, surfaced relevant patient context, and quietly removed an entire category of busywork.
Read case study →
Most "user problems" are actually team problems wearing a costume.
I'm Goli — a Senior Product & Service Designer based in Toronto.
Over the past few years, my work has shifted from designing screens to designing systems — the kind that sit behind complex, real-life experiences. Most recently at TELUS Health, I worked across three connected products: a digital CBT platform, a financial wellbeing experience, and internal care workflows built on Dynamics 365.
What that really means: I've been designing for situations where people are already under pressure — navigating mental health support, financial stress, or operational complexity behind the scenes. That changes how you think about design. It's not about making things look good. It's about making things work — for everyone involved.
Before that, I co-founded a fintech crowdfunding platform (Mehrabane), where I built the product from 0 to 1. That experience shaped how I think about ownership, trade-offs, and impact. When you're close to real users, real money, and real outcomes, you stop designing in isolation.
These days, I'm most interested in problems that sit at the intersection of product, service, and operations — where design can actually shift how a system behaves, not just how it looks.
I start with outcomes, not features. Before I open Figma, I want to understand what's actually driving the problem, because a lot of "UX problems" aren't really UX problems. They're system problems, workflow gaps, or misaligned incentives showing up in the interface.
I care about design that holds up in reality. Not just in a review, not just in a prototype — but in the messy, cross-functional, real-world environment where people are trying to do their jobs.
A few things I've learned along the way:
I do my best work when I'm embedded in the problem — not just handed a brief.
I partner closely with product, engineering, research, and domain experts to shape direction early. I like getting into the details of how things actually work — where the constraints are, what's expensive, what's fragile, what's overlooked.
I'm direct, and I care about clarity. I'll push when something doesn't make sense, but I'm quick to adapt when there's a better idea in the room.
I've also spent time mentoring designers and helping teams get sharper — clearer thinking, better collaboration, fewer "we'll figure it out later" moments.
I care a lot about the human side of what we build.
Mental health, financial dignity, and everyday experiences that quietly make life easier — those are the kinds of problems I want to work on. The ones that don't always get attention, but matter deeply to people.
I still think like a builder. I'm drawn to 0→1 ideas, complex systems, and teams where design has a voice in shaping direction — not just execution.
On a more personal note: I love LEGO (the more intricate, the better), I collect Funko Pops, and I'll always pick an owl as my favorite anything. I'm also a big comic book fan — DC and Marvel — probably because I've always been drawn to layered worlds, complex characters, and stories that connect across systems… which, now that I think about it, isn't that different from how I approach design.
A digital CBT platform — therapist-led, multi-stakeholder, end-to-end.
This was a digital CBT platform serving people in active mental health care — patients on one side, licensed therapists on the other, an internal ops team holding it all together.
It's the kind of product where every design decision has consequences. A confusing onboarding step doesn't just lower a conversion rate; it loses someone at the exact moment they were brave enough to ask for help. A clunky therapist tool doesn't just irritate the provider; it eats into the 50 minutes a patient is paying for.
I led product design across the patient experience (web responsive + native mobile) and the therapist + internal tooling underneath it. My job was to make the whole service feel like one product — not a stack of disconnected surfaces stitched together by support tickets.
The product had a real problem: people were signing up, then disappearing.
Onboarding was long, intake-heavy, and felt clinical in the wrong way. Patients had to answer dozens of screening questions before they understood what the service even was. Therapists were getting matched to patients based on rules that made sense to ops but felt random to everyone else. And the internal team was running half the service out of spreadsheets.
This wasn't an onboarding redesign. It was a service redesign that happened to start at onboarding.
I sat in on therapist sessions (with consent), shadowed ops, watched five recorded patient onboardings without sound, and read every single drop-off support ticket from the last quarter. The picture that emerged was different from the brief.
The brief said: make onboarding shorter. The reality was: people aren't dropping off because it's long. They're dropping off because they don't trust the system enough to keep going.
I made trade-offs along the way. We chose to delay some clinical screening until later in the journey, which clinical pushed back on. We agreed on a structured compromise: defer the bulk of the screening, never the safety-critical parts. That decision held up under audit and improved completion meaningfully.
That compromise is the decision I'm proudest of.
What I learned. Trust isn't a screen, it's a sequence. You build it by designing what people see and what they feel coming.
What I'd do differently. I'd bring ops into the design process even earlier. We treated them as stakeholders for too long. They're co-designers — they know the failure modes nobody else sees.
A lot of mental health products try to solve trust through tone. That's table stakes. Real trust comes from system design. Tone gets you in the door. Systems keep people in the room.
A consumer financial wellbeing platform — designed for the regular Tuesday, not the launch screenshot.
Most finance apps show you numbers; they don't change your relationship with them.
This product was trying to do something harder: actually shift how people behave with money. Save a little more. Spend a little more intentionally. Stop dreading the app icon. The hard part wasn't building features — the category is full of features. The hard part was building a product people would come back to once the novelty wore off.
I led product design across the parts of the product that decided whether people stayed: onboarding, personalization, daily habit loops, and the moments where the app had to either nudge or get out of the way.
When I joined, the product had decent acquisition and weak retention. Classic curve: big install spike, fast drop-off, the long tail you don't want.
The business problem was retention. The product problem was that we hadn't yet earned a place in someone's day.
I anchored the work in a single question: what would have to be true for someone to want to open this app on a regular Tuesday — when nothing exciting is happening with their money?
That question reshaped a lot of the roadmap.
Tradeoffs: growth wanted aggressive nudges; retention wanted fewer, smarter ones. I made the case that aggressive notifications were borrowing trust from a product that hadn't built any yet. We landed on a quieter, more personal system that performed better long-term.
On the business side, this work changed how leadership talked about the product — from "an acquisition story" to "a habit story."
What I learned. In behavior-change products, the design unit isn't a screen, it's a moment in someone's week. If you don't design the moment, you're just decorating it.
What I'd do differently. I'd push earlier for personalization as a product principle, not a feature on the roadmap. We treated it like a thing we'd add later, and ended up rebuilding it.
Finance products over-design the rational layer and under-design the emotional one. You can't math your way out of how someone feels about money — but you can design around it.
An AI assistant inside a clinical platform — built for therapists and ops, not patients.
The hard part isn't the interface. It's deciding where the model belongs in the work — and where it absolutely doesn't.
This project added an AI layer to an existing clinical platform. The audience was therapists and internal staff, not patients. The goal wasn't to make the AI visible or impressive; it was to remove a category of busywork that was quietly burning out the clinical team — without introducing new risks.
My job: make AI useful, controllable, and almost boring — the highest compliment you can pay an AI feature in a healthcare context.
Therapists were spending a meaningful chunk of every working day on admin: documenting sessions, catching up on patient context, writing follow-ups, updating the rest of the team. The longer the caseload, the worse it got.
I started by mapping the actual workday — not the idealized one. That gave us a list of candidate AI moments. Then we filtered hard.
PM wanted full automation; I pushed for human-in-the-loop. Automation we couldn't audit was automation we couldn't ship in healthcare. We landed on staged automation — confidence-building first, automation later. Compliance loved this. Clinical loved this. Ops eventually loved this.
What I learned. The hardest design problem in AI products isn't the surface — it's deciding what the AI is allowed to do, and how clearly the user can see and control it. The UI is the easy part once those decisions are made.
What I'd do differently. I'd build the ops-side observability surface in the very first release, not the second. Design teams talk about "the user," but for AI products there's almost always a second user — the team operating the system — and they need product-grade tools, not afterthought dashboards.
The bar for AI in healthcare isn't "can it do the task?" It's "can a clinician trust it enough to keep their license intact?" Once you frame it that way, the design problem becomes much clearer.