Global leaders are grappling with five core tensions as artificial intelligence (AI) integrates into workplaces, according to insights from over 100 builders, executives, investors, advisors, and researchers.
These tensions pit experts against novices, centralized against decentralized governance, flatter against layered hierarchies, rapid against deliberate implementation, and top-down against peer-driven change.
Research indicates early applications of AI have shown dual effects. Polish endoscopists using AI for cancer detection became more accurate during AI-assisted procedures but less accurate when working without the tool. Students drafting SAT-style essays with AI initially showed creativity spikes, yet those who started from AI-generated ideas exhibited reduced alpha-wave activity and produced highly similar output. A 2025 study across 20 European countries further found that workers in highly automated jobs reported less purpose, less control, and more stress, even as their tasks became technically easier.
The new Work AI Institute at Glean compiled these findings into the “AI Transformation 100,” an annotated list of ideas for capturing AI’s benefits while mitigating its drawbacks. The initiative aims to separate genuine transformation from hype: where AI is progressing, where it is stalling, and where it is producing unintended consequences.
One tension arises from AI’s ability to blur the line between experts and novices, enabling non-specialists to perform tasks that historically required extensive training, such as coding or data analysis. This shift expands who can contribute but risks mistaking AI fluency for true mastery. John Lilly, a Duolingo board member, noted that non-engineers prototyped a chess course with AI in four months, outperforming other internal initiatives; experts brought in too early, he suggested, tend to enumerate the reasons a project cannot work. Google teams have likewise adopted a “prototype-first” approach, using AI-powered “vibe coding” to build working demos before drafting proposals, which accelerates iteration. Overreliance on novices, however, can produce “AI slop”: outputs that look convincing but lack substance. Stanford research shows entry-level developer hiring declining while demand for senior engineers rises, suggesting companies lean on experts to catch exactly these failures.
To address this, strategies include allowing generalists to initiate projects with AI but ensuring experts refine and scale successful outcomes. At Stitch Fix, former Chief Algorithms Officer Eric Colson described custom algorithms flagging unmet needs, with human designers then selecting options that aligned with the brand and quality standards. Organizations should also involve top employees, such as clinicians or data experts, in AI model training and pilot programs from the outset, as recommended by TELUS VP of AI Alexandre Guilbault, who said, “The best people are the ones who can drive the biggest transformation.” Embedding experts within local teams, like Glean’s “Glean on AI” team for functional automation and the “AI Outcomes” team for customer solutions, also facilitates the identification and development of AI-driven processes.
Another tension involves the balance between centralizing and decentralizing AI control within organizations. Centralization, often through AI centers of excellence, aims to enforce standards and manage risk but can stifle innovation due to extensive approval processes. Conversely, decentralized AI development can lead to rapid but uncoordinated innovation, resulting in fragmented tools and digital exhaustion, according to UC Santa Barbara professor Paul Leonardi.
To navigate this, companies centralize high-risk areas such as data governance and infrastructure, where security is paramount, while decentralizing low-risk experimentation such as workflow automations. Organizations should also avoid creating symbolic AI roles without budget or authority, instead distributing AI responsibilities across existing teams. Technology choices should include enterprise-grade governance features, such as access controls and audit trails, while leaving individual teams flexibility. Booking.com’s HR teams implemented an AI-powered search platform that ensures employees access only information they have permission to see, according to Senior Engineering Manager Tadeu Faedrich, who said, “We didn’t want people finding documents they shouldn’t have access to.”
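The mechanics behind that guarantee are worth making concrete. The sketch below is a minimal illustration of permission-aware retrieval, not Booking.com’s actual implementation; the `Document` and `User` types and the group-based ACL are assumptions made for the example.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    content: str
    allowed_groups: set[str] = field(default_factory=set)  # hypothetical ACL field


@dataclass
class User:
    user_id: str
    groups: set[str]


def permission_aware_search(query: str, index: list[Document], user: User) -> list[Document]:
    """Filter before matching: documents the user cannot see never enter
    the candidate set, so they cannot leak through ranking or snippets."""
    visible = [d for d in index if d.allowed_groups & user.groups]
    return [d for d in visible if query.lower() in d.content.lower()]
```

Filtering before ranking, rather than redacting afterward, is the property that keeps restricted documents from surfacing even indirectly, for example through snippets or result counts.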
The third tension concerns the trend toward flatter organizational hierarchies. AI automates routine decisions and reporting, and many leaders assume that lets them strip out management layers to move faster. However, Michael Arena, former Chief Talent Officer at General Motors, found that excessive flattening can overload managers and create bottlenecks, especially once they supervise more than seven direct reports; such managers often work 10- to 13-hour days and still struggle to keep up.
Organizations should evaluate their work modes before flattening. If work is primarily “heads-down,” requiring minimal coordination, AI agents can absorb routine tasks and managers can lead larger teams. For “heads-up” work, which depends on real-time interdependence and communication, smaller team sizes let managers focus on coaching, judgment, and relationship-building. AI should lighten management, not eliminate it, by offloading administrative tasks like status updates and scheduling. Workday VP of People Analytics Phil Wilburn noted his team no longer compiles briefing decks or weekly updates: an AI system aggregates unstructured data from Slack and project plans, and before meetings he asks it to compile briefs or research topics. The administrative burden is gone; the management function remains.
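For a sense of what that aggregation step involves, here is a minimal sketch. It is not Workday’s system; the message and task field names (`posted_at`, `status`, and so on) are assumptions, and a real pipeline would hand this digest to a summarization model rather than print it raw.

```python
from datetime import datetime, timedelta


def compile_brief(messages: list[dict], tasks: list[dict], days: int = 7) -> str:
    """Aggregate recent chat messages and open project tasks into one
    plain-text digest. Field names are assumed for this sketch; an LLM
    would normally condense the digest into a readable brief."""
    cutoff = datetime.now() - timedelta(days=days)
    recent = [m["text"] for m in messages if m["posted_at"] >= cutoff]
    open_tasks = [t["title"] for t in tasks if t["status"] != "done"]
    lines = ["Weekly brief (auto-compiled)"]
    lines += [f"- Update: {text}" for text in recent]
    lines += [f"- Open task: {title}" for title in open_tasks]
    return "\n".join(lines)
```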
The fourth tension sets the impulse to adopt AI rapidly against the need for careful, deliberate integration. An excessive focus on speed creates decision-implementation gaps: new tools are adopted quickly without addressing existing systemic issues or understanding technological fit, leading to uneven adoption, delays, or abandoned initiatives. Northwestern University professor Hatim Rahman described a hospital project in which training an AI for medical diagnostics required thousands of ultrasound images, yet the hospital’s efficiency pressures pushed staff to capture as few images as possible. Patient consent processes and interdepartmental conflicts slowed progress further, and technicians resisted the project, fearing the data would be used for performance monitoring or job cuts. Counterweights suggested for this tension include:
- Protect slow modes by building speed bumps into creative and strategic work, including checkpoints and reflection periods. Perry Klebahn, who leads the Stanford d.school’s Launchpad accelerator, observed that while AI speeds prototype generation, it can diminish founders’ commitment to ideas, as they perceive them as too easily generated.
- Reward learning over showmanship. Udemy’s “U-Days” AI learning events award prizes for business impact, measurable improvement, and peer feedback, rather than solely for flashy demonstrations.
- Conduct an “AI Residue” test: remove all AI-related jargon from a pitch and assess what remains. If the remaining content is insubstantial, the idea is weak; a rough sketch of such a filter appears after this list.
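As a rough illustration of the residue test, the sketch below strips a hypothetical jargon list from a pitch and scores what survives. The word list is a placeholder, not a vetted taxonomy; any real version would use terms your own pitches overuse.

```python
import re

# Illustrative jargon list (replace with the terms your pitches overuse).
AI_JARGON = {"ai", "ai-powered", "genai", "llm", "agentic", "transformative",
             "cutting-edge", "intelligent", "revolutionary"}


def ai_residue(pitch: str) -> float:
    """Fraction of words remaining after AI jargon is stripped.
    A low score flags a pitch whose substance is mostly buzzwords."""
    words = re.findall(r"[a-z0-9][a-z0-9-]*", pitch.lower())
    if not words:
        return 0.0
    return sum(w not in AI_JARGON for w in words) / len(words)


print(ai_residue("Our agentic, AI-powered platform cuts invoice errors by 30%"))
# 7 of 9 words survive, roughly 0.78: the pitch has substance beneath the jargon.
```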
The final tension is whether AI transformation should be driven top-down or peer-driven. Top-down leadership matters for company-wide adoption: Worklytics data shows teams were twice as likely to adopt AI tools if their managers used them first. Yet excessive top-down pressure breeds resistance or superficial compliance, while over-reliance on bottom-up efforts yields fragmentation and uncoordinated experimentation. One CTO compared the result to “hundreds of little speed boats racing in different directions.”
To balance these approaches, companies establish rhythms of change. A Fortune 20 retailer’s CEO keeps AI as a standing topic in monthly VP meetings, and a cross-functional steering committee aligns adoption and use cases; departmental staff meetings include an “AI moment” for sharing experiences. Organizations also plan for experiments to fail, recognizing that roughly 80% of AI projects may miss their initial productivity goals. One Fortune 500 organization does not redesign jobs until there is “convincing evidence” AI will increase efficiency, and review cycles capture lessons from employees at lower levels so that failures fuel learning.
Measuring impact over activity is equally critical. Zendesk Senior VP of Engineering Nan Guo uses a balanced scorecard of six engineering productivity metrics, including cycle time and change failure rate, rather than superficial indicators like logins or prompt counts. Formalizing peer networks, such as Uber’s initiative that identified 53 early AI champions across functions, fosters internal learning communities and engagement.
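Two of those metrics are straightforward to make concrete. The sketch below computes cycle time and change failure rate from a list of deployment records; the record schema (`first_commit_at`, `deployed_at`, `caused_incident`) is an assumption for illustration, not Zendesk’s actual scorecard.

```python
from datetime import datetime
from statistics import median


def cycle_time_days(changes: list[dict]) -> float:
    """Median days from first commit to production deploy."""
    spans = [(c["deployed_at"] - c["first_commit_at"]).total_seconds() / 86400
             for c in changes]
    return median(spans)


def change_failure_rate(changes: list[dict]) -> float:
    """Share of deploys that triggered an incident or rollback."""
    return sum(c["caused_incident"] for c in changes) / len(changes)


# Hypothetical deployment records for demonstration.
changes = [
    {"first_commit_at": datetime(2025, 1, 2), "deployed_at": datetime(2025, 1, 5),
     "caused_incident": False},
    {"first_commit_at": datetime(2025, 1, 3), "deployed_at": datetime(2025, 1, 10),
     "caused_incident": True},
]
print(cycle_time_days(changes), change_failure_rate(changes))  # 5.0 0.5
```

Both metrics measure outcomes of shipped work, which is why they resist gaming in a way that logins or prompt counts do not.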
Successful leaders navigate the complexities of AI by treating these tensions as design features rather than flaws. They remain adaptable, recognizing that the optimal path forward is provisional. “Fifty percent of what we teach you will turn out to be wrong,” a Harvard Medical School dean reportedly told incoming students, reflecting the uncertainty in AI implementation. Leaders who exhibit humility and cultivate organizational flexibility will be best equipped to continuously learn and adapt as the technological landscape evolves.