The Problem
Goldman Sachs' Controllers sign off on the accuracy of daily P&L data across trading desks — investigating hundreds of data quality discrepancies ("exceptions") every day before financial close. The existing tooling gave them raw data but zero intelligence. Controllers spent hours on what should take minutes: reviewing each exception individually, switching between four systems for context, and hand-documenting every action for audit. Roughly 70% of exceptions were duplicates of the same root cause, each still requiring the full manual cycle. No grouping, no prioritisation, no AI.
My Role
Sole UX designer on the engagement, owning the full process — discovery workshops, system mapping, wireframes, prototypes, and user testing facilitation. Worked directly with the Goldman Sachs Controllers AI Working Group in weekly design cycles, collaborating with their UX Lead, Working Group Lead, and domain SMEs across Revenue and Risk teams.
Discovery & Design Philosophy
Early work focused on building a mental model of the exception review lifecycle through stakeholder workshops and technical deep-dives across the domain.
The critical insight emerged early: stakeholders didn't want a chatbot. They wanted proactive AI suggestions embedded in the workflow, not a reactive conversational interface. This became the foundational principle.
Key Decisions
Through rapid, stakeholder-validated iteration:
Before/after over rationale. Summary tables show "current → suggested value" for fast action. AI reasoning lives in hover tooltips and drill-down panels via progressive disclosure.
Grouped exceptions with bulk actions. Since ~70% are duplicates, grouping by pattern with per-action-type rows dramatically reduces cognitive load while preserving audit traceability.
Non-blocking AI processing. Replaced a full-screen blocking modal with row-level inline indicators and a collapsible side drawer — Controllers keep working while AI runs.
Rejection as a forward path. Rejecting a suggestion prompts context addition (file uploads, data source selection) and AI re-analysis, not a dead end.
Invisible audit trails. Every action logged automatically with timestamps and attribution. Zero extra documentation effort from the user.
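Two of the decisions above imply a simple data model: exceptions bucketed by root-cause pattern so duplicates can be bulk-actioned, with every action logged automatically. A minimal sketch of that idea, with all names hypothetical and not drawn from any actual Goldman Sachs system:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PnLException:
    id: str
    pattern: str            # root-cause signature used for grouping
    current_value: float
    suggested_value: float  # the AI's proposed correction

@dataclass
class AuditEntry:
    user: str
    action: str
    exception_ids: list[str]
    timestamp: str

audit_log: list[AuditEntry] = []

def group_by_pattern(exceptions: list[PnLException]) -> dict[str, list[PnLException]]:
    """Bucket exceptions by root-cause pattern so duplicates (~70% of volume)
    can be reviewed and actioned as one group."""
    groups: dict[str, list[PnLException]] = defaultdict(list)
    for exc in exceptions:
        groups[exc.pattern].append(exc)
    return dict(groups)

def bulk_accept(user: str, group: list[PnLException]) -> None:
    """Accept the suggested value for every exception in a group, writing a
    single timestamped, attributed audit entry with no extra user effort."""
    for exc in group:
        exc.current_value = exc.suggested_value
    audit_log.append(AuditEntry(
        user=user,
        action="accept_suggestion",
        exception_ids=[e.id for e in group],
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
```

The point of the sketch is that traceability survives bulk action: one audit entry records which user did what to which exception IDs, so grouping reduces clicks without collapsing the audit trail.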
Expanding to Risk
As Revenue matured, I designed a version-branching strategy extending the same architecture to Risk Controllers. Beyond a different status model (three-state: Not Resolved, Non-blocking, Blocking — where blocking exceptions prevent sign-off), Risk discovery surfaced a new capability: "risk explains" — AI-driven post-exception attribution analysis that helps Risk Controllers understand why a break occurred, not just how to resolve it. This opened up risk-specific action types like adjustments, rolls, and risk acceptances that don't exist in the Revenue workflow.
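The Risk status model described above can be expressed as a small enum plus a sign-off gate, where only blocking exceptions prevent close. A hedged sketch under that reading, with hypothetical names:

```python
from enum import Enum

class RiskStatus(Enum):
    NOT_RESOLVED = "not_resolved"
    NON_BLOCKING = "non_blocking"
    BLOCKING = "blocking"       # prevents sign-off until resolved

class RiskAction(Enum):
    # Risk-specific action types that have no Revenue equivalent
    ADJUSTMENT = "adjustment"
    ROLL = "roll"
    RISK_ACCEPTANCE = "risk_acceptance"

def can_sign_off(statuses: list[RiskStatus]) -> bool:
    """Sign-off is gated only by blocking exceptions; not-resolved and
    non-blocking items can carry past the close."""
    return all(s is not RiskStatus.BLOCKING for s in statuses)
```

Modeling the gate this way keeps the Revenue and Risk versions on one architecture: the status enum and allowed actions branch per domain, while the surrounding review workflow stays shared.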
Current State
The prototype is in structured user testing with Revenue Controllers, with Risk sessions planned next. The north star: turn a multi-hour daily review into minutes, with AI doing the heavy lifting and humans reviewing and making the final call.
