Agentic AI Discovery Research
Shaping the future of AI at JPMC
Reframing the AI assistant’s purpose from a helpdesk chatbot to a personalized workflow companion
Conclusion
Final Thoughts and Lessons Learned
What started as a fairly straightforward research project–figure out how people might use an AI assistant–quickly turned into something much bigger. We found that people weren’t resistant to AI–they were resistant to things that made their jobs harder, more confusing, or riskier. That completely changed the direction of the product. Instead of building a chatbot that could do everything, we focused on building something that could do a few things really well, quietly and contextually.
Impact
The team moved away from a generic AI interface toward deeper integration with Outlook and Teams
“Assistant” became less about full autonomy and more about showing up in the right moments with the right guardrails
Personalization and transparency became non-negotiable features, not afterthoughts
Lessons Learned
This project reminded me that being a researcher isn’t always about collecting insights. It’s about creating clarity when there are too many assumptions, and advocating for users when it’s easy to prioritize speed.
1:1 User Interviews
11 Participants
30 mins
UserTesting
Transcripts
Lucid Board
Excel


Affinity mapping board showing early quote clustering (left) and final categories (right)

Brain map showing the relationship between pain points and opportunities. Credits: my teammate, Mary
1.
Plan
The screener captured key variables:
Interest in generative AI
Experience level in using GenAI tools
Line of Business (LOB)
Current role and seniority
Region and tenure
TL;DR
As JPMC prepared to launch its enterprise-wide agentic AI assistant, my team and I led a one-month discovery study to understand employee expectations, ideal use cases, and concerns.
I joined midstream and quickly took the lead on refining the research plan, moderating interviews, and synthesizing findings into product-shaping insights.
We conducted 11 interviews across lines of business and job functions. Our findings shaped key product bets–such as native Outlook integration, user-controlled automation, and trust-building UI patterns–and were presented to leadership to influence MVP scope and adoption strategy.
Scope
Stakeholder Interviews
Contextual Inquiries
Thematic Analysis
Opportunity Framing
Timeline
1 Month
Tools Used

What Did We Learn?
Key Themes & Insights
We set out to understand how employees might use a new AI assistant. But what we discovered was something deeper: people didn’t just want help–they needed to trust that help, understand it, and feel in control of it.
Employees saw AI as a tool to ask–not a partner to work with
Most people thought of GenAI as something you prompt–like a smarter search bar. It was seen as something useful, but only for quick tasks like summarizing text or rewriting an email. The idea of an “agentic assistant” felt far away
“I use it for rewriting stuff or summarizing emails. It’s helpful, but I wouldn’t call it an assistant.”
This constrained view limited engagement and surfaced a new risk: employees would underuse the product unless its capabilities were clearly reframed
Implication
The assistant needed to shift from “ask me something” to “here’s how I can support your work holistically.” We recommended onboarding flows that would expand mental models through role-based scenarios.
Trust was brittle–users wanted AI to act, but feared being left out of the loop
Many participants expressed a willingness to delegate work to AI, especially repetitive tasks. But they drew a hard line at full autonomy because they believed they–not the AI–would be held responsible for mistakes
“If it sends something with an error, it’s on me. Not the tool. That’s why I don’t let it send anything on its own.”
This sense of risk ownership was deeply ingrained across roles and seniority
Implication
AI could draft, organize, and suggest, but employees needed final review and edit rights.
Use cases weren’t universal–they were deeply role specific.
Role | Desired AI Support | Unique Challenges
Relationship Manager | Follow-up tasks, research | Client tone & escalation control
Engineer | Debugging, creating Jira tickets | No integration with dev stack
Manager | Meeting recaps, dashboards | Overload of shallow summaries
HR | Resume filtering, policy drafting | Trust in data boundaries
Implication
Push for personalization as a core product pillar–not just preferences, but adaptive behavior based on role, team, and toolset.
Timeline
Plan
WEEK 1
Stakeholder interviews, research reframing, and user segmentation
Test
WEEK 2
11 contextual interviews across business units and levels
Analyze
WEEK 3
Affinity mapping, pain point clustering, segment based needs
Deliver
WEEK 4
Presentation creation
Deliver insights to stakeholders
The Problem
The AI assistant was being built quickly, and momentum was high. But as the hype grew, so did the uncertainty. The product team had a lot of questions they wanted answered:
Would employees trust AI to act on their behalf?
Could the tool adapt to very different needs across roles and levels?
Would it integrate into existing workflows?
What does a successful launch look like?
Study Objectives
1.
Understand Employee Expectations for a Gen AI solution
2.
Identify high-value use cases across job functions
3.
Uncover blockers to trust, adoption, and personalization
4.
Inform MVP functionality and onboarding strategy
Understanding the problem
Stakeholder Interviews
To sharpen our focus, I began by interviewing key stakeholders from product, design, and engineering. These conversations helped define priorities, surfaced early hypotheses, and ensured buy-in before participant research began.
I believe stakeholder interviews are absolutely critical when a new research request is made. I set a quick meeting and follow a basic outline I created to gather information.
Stakeholder Interview Questions
Setting the scope
Research Document
After the stakeholder meeting, we collaboratively defined project objectives and research questions, and meticulously outlined our approach to the entire project, covering aspects such as methodology, participant recruitment strategies, and data analysis.
Creating a research plan


The rest of my team started using this template for future studies. I love it when I can simplify processes for others!
Key Takeaways
Q:
What is the product?
A GenAI assistant to help employees complete tasks, access support, and surface info across systems like Outlook, Zoom, Confluence, and ServiceNow.
Q:
What do you want to learn?
The team was curious about where GenAI fits best in workflows, how employees will trust it, and what it must do to be truly valuable–not just novel.
Q:
Who are the users?
Segmenting was key: client-facing employees (e.g., sales), business ops (e.g., HR, Finance), and product/tech teams (e.g., engineers).
Q:
What will you do with the results?
Shape MVP features, escalation flows, and role-based onboarding strategies.
Methodology & Recruitment
To ensure a range of perspectives across the firm, we used a randomized sampling approach and a structured screener to recruit participants.
We began by generating a randomized list of ~250 employees from the internal directory, with employees out of scope (e.g., branch and contact center staff who require special permissions) removed. We then distributed the screener via email to the remaining employees.
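The sampling step above can be sketched in a few lines. This is a hypothetical illustration only–the directory, group labels, and exclusion list are stand-ins, not JPMC systems or actual segment names.

```python
import random

# Assumed toy directory: 500 employees tagged with a business group.
# Group names here are illustrative stand-ins.
directory = [
    {"id": i, "group": g}
    for i, g in enumerate(
        ["client_facing", "business_ops", "product_tech",
         "branch", "contact_center"] * 100
    )
]

# Remove out-of-scope groups (e.g., staff requiring special permissions).
out_of_scope = {"branch", "contact_center"}
eligible = [e for e in directory if e["group"] not in out_of_scope]

# Draw a randomized shortlist of ~250 employees to receive the screener.
random.seed(7)  # fixed seed so the illustrative draw is reproducible
shortlist = random.sample(eligible, k=250)

print(len(shortlist))  # 250
```

In practice the draw would run against the real internal directory; the point is that exclusions happen before sampling, so every shortlisted employee is in scope.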
We used these variables to segment participants into three core groups:
Business Operations
Product & Tech
Client Facing
250
Employees Contacted
18
Completed Screener
15
Interviews Scheduled
11
Interviews Complete
This was a discovery study, focused on surfacing directional insights rather than reaching statistical significance.
Based on qualitative research best practices and time constraints, we aimed for 4-6 participants per segment.
Contextual Interviews
Interview Script & Design
The script was co-developed with partners from the product and design teams to align research goals with upcoming roadmap decisions. Together, we ensured the questions would uncover:
Current workflow challenges
Experiences with existing GenAI tools
Trust, usability, and security perceptions
Opportunities for an AI assistant to provide meaningful value
Data Analysis & Synthesis
Affinity Mapping
After completing the interviews, we reviewed session recordings and created detailed notes for each participant. These notes were translated into digital sticky notes and color-coded by participant to maintain traceability across insights.
We conducted a collaborative affinity mapping workshop, organizing over 150 individual quotes into 9 primary themes and 63 sub-themes.
This process allowed us to identify recurring behaviors, pain points, and patterns across user segments, while preserving nuances in region, role, and AI familiarity.
Brainmapping
Using insights from the affinity map and stakeholder discussions, we framed three major opportunity areas where Agentic AI could meaningfully contribute:
Integration: Connect across existing systems (Zoom, Outlook, Confluence)
Automation: Eliminate low-value repetitive tasks (e.g., summaries, scheduling)
Personalization: Adapt to tone, context, and user preferences
We mapped each opportunity back to real world examples from the interviews, such as:
Following up on meeting notes
Managing email overload
Reformatting content into templates
Scheduling meetings and reminders
Interview Script
The discussion guide followed a semi-structured format with four parts:
Current Workday Background
What tasks are repetitive or frustrating?
Where do employees seek help or info?
Current AI Usage and Perception
Have they used GenAI tools before?
What worked well? What didn’t?
Identifying Use Cases
What would compel them to use AI?
Where could AI help streamline their day?
Concerns
What tasks should AI not do?
Would they trust AI with sensitive data?

Screener made via Optimal Workshop