Why AI initiatives succeed when they start with the employees closest to the work—and what leaders can learn from failed top-down rollouts.
Artificial intelligence promised a revolution in how work gets done. For many organizations, that promise has translated into mandates: “Every team needs to use AI.” “We’re investing in this enterprise AI suite.” “Find ways to apply this tool to your workflow by next quarter.”
The intent is understandable. Leaders want to stay competitive, improve efficiency, and unlock new capabilities. But in practice, many of these top-down AI deployments fail to deliver meaningful ROI. Some never gain traction at all.
Why? Because AI adoption isn’t a technology problem. It’s a work design problem.
A growing body of case studies from global brands shows that the organizations seeing real results with AI aren’t the ones mandating tool use from the C-suite. They’re the ones empowering the people closest to the day-to-day work to experiment, evaluate, and evolve their processes organically.
In other words: successful AI adoption starts at the bottom, not the top.
And as someone who has spent the last several years in a role dedicated to designing how work gets done—not just what gets done—I’ve seen firsthand why bottom-up AI adoption is the most reliable, sustainable way to integrate new technology into human systems.
Contents
- The Problem With Top-Down AI Mandates
- What We’ve Learned From Failed Top-Down Rollouts
- Why Bottom-Up AI Adoption Works
- AI Literacy vs. Workflow Literacy: A Critical Distinction Leaders Miss
- The Hidden “AI Friction Costs” That Top-Down Mandates Overlook
- A Note on Shadow AI
- A Framework for Bottom-Up AI Adoption
- Bottom-Up AI Adoption Isn’t About Tools — It’s About Designing Better Work
The Problem With Top-Down AI Mandates
When leadership pushes AI from the top without involving employees in the design process, the rollout often suffers from three predictable flaws:
- It misdiagnoses the real problems in the workflow. Leadership sees inefficiencies in aggregate; employees experience them in detail.
- It assumes a universal value-add, even when work is highly contextual. AI does not improve every task equally; sometimes it introduces more steps or more verification work.
- It bypasses the cultural and psychological realities of change. Humans resist tools they didn’t choose, don’t trust, or don’t see as helpful.
These issues aren’t theoretical. The last decade is full of high-profile examples where top-down tech rollouts didn’t produce what leaders expected.
What We’ve Learned From Failed Top-Down Rollouts
IBM Watson for Oncology
IBM attempted to revolutionize cancer treatment with AI-powered clinical decision support. Leadership championed the breakthrough before clinicians validated its usefulness. In practice, Watson frequently recommended unsafe or irrelevant treatment options. Doctors couldn’t rely on it, and hospitals quietly backed away.
The lesson: No matter how powerful the technology, AI must be validated by domain experts—the people who understand the nuance of the work.
Amazon’s AI Hiring Tool
Amazon’s leadership envisioned a streamlined, automated hiring system. They rolled out an AI model trained on historical hiring data. But engineers didn’t involve recruiters early enough in evaluating the system’s biases. The AI learned to downgrade résumés containing the word “women’s,” penalize graduates from all-women’s colleges, and favor terms more frequently found in men’s résumés.
The project was scrapped.
The lesson: AI systems inherit and amplify the institutional biases present in the data they’re trained on. Without domain experts involved in validating how AI makes decisions, top-down AI initiatives can unintentionally reinforce inequity and produce unreliable outcomes.
Zillow Offers (Zillow’s AI Home-Flipping Program)
Zillow invested heavily in an AI-driven home-buying model. The model overestimated housing prices. Local real estate experts like agents and market analysts flagged issues early, but the algorithm remained the single source of truth. The company ultimately shut down the program and laid off a quarter of its workforce after massive losses.
The lesson: Insights from people on the ground matter more than algorithmic forecasts. Top-down trust in models without bottom-up feedback can be catastrophic.
McDonald’s Dynamic Yield AI Drive-Thru System
McDonald’s spent roughly $300 million to acquire Dynamic Yield and deploy AI-powered drive-thru technology that automatically adjusted menus based on weather, time of day, and trending sales. Franchise owners and employees reported that the system added complexity, produced incorrect orders, and slowed drive-thru service. After years of underperformance, McDonald’s wound down the program.
The lesson: If the tool complicates the workflow of the frontline employees, it will not succeed—even at massive scale.
Across industries, the theme repeats: When employees don’t participate in evaluating or shaping AI, the AI doesn’t deliver.
Why Bottom-Up AI Adoption Works
Bottom-up AI adoption flips the traditional model. Instead of mandating AI tools and waiting for people to integrate them, leaders empower employees to identify and test the AI solutions that actually improve their work.
This approach is more successful because it leverages three powerful advantages:
1. Employees Understand Their Work Better Than Anyone
Employees can see micro-opportunities for improvement that leadership doesn’t have visibility into—tiny inefficiencies, repetitive tasks, or data gaps that compound over time. When they choose tools that solve these issues, adoption happens naturally.
For example, a content specialist knows which parts of the writing process slow them down. A project manager knows where tasks get stuck. A developer knows where documentation or QA bottlenecks occur.
When the people closest to the problem choose the tool, the ROI is immediate and real.
2. Bottom-Up AI Adoption Creates Psychological Safety
Change is easier when people feel ownership over it.
Leadership-driven mandates often trigger resistance, not because employees dislike change, but because experience tells them that tools forced on them may actually increase their workload.
Bottom-up AI adoption signals: “We trust your expertise. You know your work best. Your decisions matter.”
That creates:
- agency
- intrinsic motivation
- greater exploration
- reduced tool fatigue
- stronger collaboration
Instead of performing change, employees co-create it.
3. AI Works Best When It’s Role-Specific, Not Universal
AI produces the highest ROI when its use cases align with:
- task-level complexity
- domain knowledge
- repeatable workflows
- contextual decision-making
This is why bottom-up pilots outperform top-down rollouts. Employees can quickly determine whether:
- the AI output is accurate
- the tool saves time
- it introduces new friction
- it integrates smoothly with existing systems
Leaders get better data. Employees get better tools. Teams get better outcomes.
AI Literacy vs. Workflow Literacy: A Critical Distinction Leaders Miss
Many leaders believe the primary obstacle to AI adoption is AI literacy: teaching people how AI works and how to use it.
But literacy alone doesn’t create ROI.
The deeper issue is workflow literacy: understanding where AI fits naturally into the sequence of tasks that make up someone’s role.
These are two different kinds of knowledge:
- AI literacy: “I understand how this tool functions and what it can generate.”
- Workflow literacy: “I understand where this tool reduces cognitive load or unlocks value within my actual work.”
You can have high AI literacy and still reject a tool because it breaks your workflow.
You can have low AI literacy and still find a tool invaluable because it fits seamlessly into the task at hand.
Bottom-up adoption ensures that the workflow owners evaluate where AI is helpful—and where it’s not.
This distinction is rarely acknowledged in leadership conversations, yet it’s the difference between AI being used and AI being abandoned. And it reinforces a broader truth: leaders must think like experience designers. When leaders intentionally shape workflows around how people actually think, work, and collaborate, technology becomes an enhancer rather than a disruption. I explored this more deeply in our UX leadership and workplace innovation guide, where human-centered design serves as the foundation for meaningful workplace change.
The Hidden “AI Friction Costs” That Top-Down Mandates Overlook
Every new AI tool introduces friction before it creates value. Leaders often underestimate these costs, which include:
- tool-switching costs (context shift kills productivity)
- verification costs (AI hallucinations require review)
- integration gaps (tools that don’t talk to existing systems)
- cognitive load (learning yet another interface)
- workflow disruption (work must bend around the tool, not vice versa)
- trust costs (if early output is poor, adoption plummets)
Top-down mandates magnify these friction points because employees weren’t involved in choosing solutions that align with their workflows. And in an environment where hundreds of new AI tools seem to launch every month, many leaders feel pressure to “do something with AI” quickly—often adding unnecessary friction in the process. As I wrote in our guide to navigating the AI gold rush, the real challenge isn’t adopting more tools; it’s choosing the right ones that solve genuine workflow problems.
Bottom-up adoption, by contrast, reduces friction because employees select tools that already complement how they work.
This single insight explains why some AI deployments fail at extraordinary cost — and why others spread effortlessly inside teams.
A Note on Shadow AI
When employees feel restricted, unsupported, or pressured to use ineffective top-down tools, they often turn to unapproved AI tools on their own.
This “shadow AI” is already happening everywhere.
It’s a sign not of employee disobedience, but of misalignment between actual work needs and leadership-driven tool choices.
Bottom-up AI adoption isn’t just a path to ROI — it’s a path to governance and safety.
A Framework for Bottom-Up AI Adoption
To move from theory to practice, leaders can follow a simple, repeatable model.
1. Invite Experimentation, Don’t Mandate It
Instead of announcing, “We’re adopting AI,” try:
- “Explore any AI tools that might improve your workflow.”
- “If you find something that saves you time, share it.”
This opens the door without forcing it.
2. Encourage Role-Level Pilots
Small, rapid tests allow teams to:
- explore AI in a low-stakes way
- validate improvements
- uncover risks
- document learnings
These pilots become a knowledge base for the entire organization.
3. Build an Internal AI Exchange
Make learning social:
- Create a Slack/Teams channel for AI discoveries
- Hold biweekly “show and share” sessions
- Build an internal wiki of vetted tools, prompts, and workflows
- Encourage cross-functional insight sharing
Innovation spreads faster when it’s communal.
4. Evaluate Through Workflow Impact
Instead of focusing on the tool itself, evaluate:
- where it cuts time
- where it adds friction
- where accuracy matters
- where human oversight is required
- how it interacts with existing systems
- how people feel using it
This workflow-first lens is what separates sustainable AI adoption from novelty.
5. Scale What Works — and Only What Works
Bottom-up pilot data is your filter.
Only scale tools that demonstrate:
- repeatable benefit
- low friction cost
- integration potential
- employee buy-in
- measurable ROI
This eliminates shiny-object syndrome and protects your teams from tool overload.
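To make steps 4 and 5 concrete, here is a minimal sketch of how a team might turn pilot feedback into a scale-or-stop decision. The criteria names, weights, rating scale, and threshold are all illustrative assumptions, not a standard; the point is that the evaluation is workflow-first and that any hard failure on a single criterion blocks scaling.

```python
# Hypothetical pilot scorecard for the scale-or-stop decision.
# Weights and the 0-5 rating scale are assumptions for illustration.
CRITERIA = {
    "repeatable_benefit": 0.30,
    "low_friction": 0.25,
    "integration": 0.15,
    "employee_buy_in": 0.20,
    "measurable_roi": 0.10,
}

def pilot_score(ratings: dict) -> float:
    """Weighted average of 0-5 ratings, one per criterion."""
    return sum(CRITERIA[name] * ratings[name] for name in CRITERIA)

def should_scale(ratings: dict, threshold: float = 3.5) -> bool:
    """Scale only if the weighted score clears the bar AND no single
    criterion is a hard failure (rated below 2)."""
    if min(ratings.values()) < 2:
        return False
    return pilot_score(ratings) >= threshold

# Example: a drafting assistant employees love, but that doesn't
# connect to existing systems -- the integration gap blocks scaling.
ratings = {
    "repeatable_benefit": 4,
    "low_friction": 4,
    "integration": 1,
    "employee_buy_in": 5,
    "measurable_roi": 3,
}
print(should_scale(ratings))  # False: integration is a hard failure
```

The hard-failure guard matters more than the weights: a tool that scores well on average but breaks integration or loses employee buy-in tends to fail in exactly the ways the case studies above describe.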
Bottom-Up AI Adoption Isn’t About Tools — It’s About Designing Better Work
Bottom-up adoption is often misunderstood as leaderless or hands-off. In reality, it requires thoughtful leadership and strategic intent.
Leaders must create:
- psychological safety
- room for experimentation
- tolerance for failed pilots
- clarity around goals
- structures for knowledge sharing
Organizations that do this well are not only faster at adopting AI — they’re better at adapting to the changes AI brings.
Because the truth is: AI will evolve. Tools will change. Workflows will shift.
But the organizations that trust their people—and design work around human-AI collaboration—will move the fastest and thrive the most.
In the next decade, the most competitive companies will not be the ones with the most AI tools, but the ones that have designed work in a way that lets humans and AI collaborate fluidly.
This is the future leaders should start building now — a thoughtful, human-centered AI adoption strategy that empowers people first.
