AI • Enterprise Tools

AI agent dashboard for OCP sales teams

Designing trust, visibility, and control into AI collaboration

In April–May 2025, during a short engagement with OCP (Casablanca), I led a conceptual product exploration to help internal sales reps work with AI agents (not around them).

The brief

Make AI visible, controllable, and accountable so reps feel confident delegating work without losing context or client ownership.

I worked primarily solo, using my own design system to move from research to wireframes and high-fidelity proof-of-concept quickly.

Inputs

Desk research (Brave AI, ChatGPT, Hugging Face), literature on AI trust in high-stakes professions

One line that sums it up

Make the AI visible. Keep humans in control. Earn trust every click.

My role

Lead Product Designer (sole)

Timeline

April–May 2025

WHAT I DID

Research & Strategy
Information Architecture & UX
UI & Systems
Prototyping & Wireframes
Collaboration & Delivery

What this unlocks

While not yet implemented, the concept establishes a scalable framework for human-AI collaboration in enterprise sales.

These hypotheses would be validated in a pilot with sales reps:

−40% — Manual status updates (measured via telemetry & logs)

−30–50% — Time to prep a client update (task-based study)

The Problem

OCP’s sales motions span supplier quotes, logistics, delivery tracking, and B2B negotiations.

Autonomous agents were being introduced, but adoption stalled because reps:

- couldn’t see what the AI was doing or why,
- feared losing control of client relationships, and
- worried about black-box decisions in high-value deals.

Reframed challenge

How might we help sales teams collaborate effectively with AI agents (monitoring, guiding, or taking over workflows), without losing control or context?

Designing for visibility, control, and collaboration

With limited user access, I grounded the work in assumption-based research and a focused persona: Karim, a mid-level OCP rep managing 10–15 complex deals.

Empathy mapping and theme clustering surfaced three truths:

- Visibility builds trust: Every AI action must be inspectable.
- Human-in-the-loop by default: Reps need to approve, edit, or dismiss.
- One hub beats many tools: Keep all context and actions in a single place.

From these, I defined the product principles:

- Transparency: Show what happened, why, from which data, and with what confidence.
- Control: Human authority at every step; nothing irreversible; undo is a promise.
- Collaboration: The AI should feel like a teammate: proactive, auditable, never opaque.

Trust & Safety Model

To make the system adoptable in a high-stakes context, I proposed explicit states and guardrails:

  • Suggest — AI proposes actions; requires approval.

  • Auto-run (safe) — Low-risk automations (drafts, reminders) run; undo available.

  • Await — Pauses when info is missing; asks user.

  • Escalate — Flags low confidence or conflicts; routes to owner/manager.

  • Explain — Every action shows sources, reasoning, and confidence.

  • Audit — Immutable action log; version history.

  • Recovery — Undo/Redo + “Revert to checkpoint.”

This framework turns an “AI feature” into an enterprise-ready teammate.
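The states and guardrails above could be expressed as a small data model. The TypeScript below is a minimal, hypothetical sketch (all names such as `AgentState` and `requiresApproval` are my own assumptions, not from any real implementation); Explain, Audit, and Recovery appear as properties of an action rather than as states:

```typescript
// Hypothetical model of the proposed agent states and guardrails.
type AgentState =
  | "suggest"   // AI proposes an action; human approval required
  | "auto-run"  // low-risk automation runs; undo available
  | "await"     // paused on missing info; asks the user
  | "escalate"; // low confidence or conflict; routed to owner/manager

interface AgentAction {
  state: AgentState;
  risk: "low" | "high";
  confidence: number;  // 0..1, surfaced to the rep ("Explain")
  sources: string[];   // data lineage shown with every action ("Audit")
  reversible: boolean; // "undo is a promise" ("Recovery")
}

// Guardrail: only low-risk, reversible actions may auto-run;
// everything else waits for an explicit human decision.
function requiresApproval(a: AgentAction): boolean {
  return !(a.state === "auto-run" && a.risk === "low" && a.reversible);
}
```

The point of the sketch is that the guardrail is a single, auditable predicate: there is no code path by which a high-risk or irreversible action bypasses human review.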

Solution

I explored low-fi flows first, then moved to high-fi using my existing design system to accelerate detail and consistency.

Deals Overview — the command center

A high-level dashboard across all active deals with AI status signals (e.g., “awaiting approval,” “blocked by missing PO,” “follow-up scheduled”).

- Priority cues highlight deals needing human attention now.
- Global search + saved views reduce tool-switching.
- At-a-glance totals (pipeline value, SLAs at risk) inform daily planning.

AI Panel — collaborate in plain language

A chat-like surface to summarize, compare quotes, draft emails, or request next steps with structured outputs (not just text):

- Responses are rendered as Action Cards (e.g., “Draft reply to client,” “Generate quote v2,” “Schedule follow-up”).
- Each card shows why (source snippets), confidence, and impact (“saves 15m manual entry”).
- One-click Approve / Edit / Dismiss keeps humans in charge.
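A structured output like an Action Card implies a data shape. The following TypeScript is an illustrative sketch only (field names like `why` and `impact` are assumptions based on the description above):

```typescript
// Hypothetical shape of an Action Card as described above.
interface ActionCard {
  title: string;      // e.g. "Draft reply to client"
  why: string[];      // source snippets backing the suggestion
  confidence: number; // 0..1, shown alongside the card
  impact: string;     // e.g. "saves 15m manual entry"
  decision?: "approved" | "edited" | "dismissed"; // one-click human control
}

// Recording a decision returns a new card rather than mutating the old one,
// so the original suggestion stays intact for the audit trail.
function decide(
  card: ActionCard,
  d: NonNullable<ActionCard["decision"]>
): ActionCard {
  return { ...card, decision: d };
}
```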

Deal Details — a complete, auditable timeline

A timeline logging every AI + human action with timestamps and data lineage:

- Emails sent, quotes generated, status changes, follow-ups paused; all traceable.
- Layered detail: clean summaries at a glance; expandable rationale for peer review or audits.
- Role-aware permissions ensure sensitive steps require the right approvals.
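An auditable timeline is, structurally, an append-only log. As a minimal sketch (class and field names are hypothetical, not a real API):

```typescript
// Sketch of an append-only audit log for the deal timeline.
interface TimelineEntry {
  readonly actor: "ai" | "human";
  readonly action: string;             // "email sent", "quote generated", ...
  readonly timestamp: string;          // ISO 8601
  readonly sources: readonly string[]; // data lineage
  readonly rationale?: string;         // expandable detail for peer review
}

class AuditLog {
  private entries: TimelineEntry[] = [];

  append(e: TimelineEntry): void {
    this.entries.push(e); // entries are never edited or deleted
  }

  history(): readonly TimelineEntry[] {
    return [...this.entries]; // read-only copy; callers can't rewrite the log
  }
}
```

Exposing only `append` and a read-only `history` is what makes the log "immutable" in practice: corrections are new entries, not edits, which is what auditors and version history need.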

Reflections & Lessons Learned

Transparency + layered detail turned the AI from a “black box” into a credible teammate.

What I learned

Trust in AI isn’t built on intelligence; it’s built on visibility, accountability, and human authority.
Sales reps won’t care how advanced the model is if they can’t see why it acts or intervene when needed.

When designing AI for high-stakes, high-value professions, the measure of success isn’t whether the AI is smart; it’s whether the human feels empowered, respected, and still in control.

The challenge ahead

The hardest balance remains how much information is “enough.” Too little detail and trust collapses; too much, and reps get overwhelmed. This is where real-world validation will be critical.

Designing clarity where stakes are highest.

6+ YEARS | SaaS, Fintech & Enterprise Systems Design

© 2025 Abdellah ibach

Contact

GET IN TOUCH

I usually respond within 24 hours. Let’s explore how I can help your team scale with clarity.
