AI Workplace Success: Leadership, Lab & Crowd

Discover the three-pillar framework for AI transformation: Leadership vision, experimental Labs, and empowered employee Crowds driving real results.

Companies are charging into AI transformation with incomplete information and mixed results. As an AI founder, I’ve spent countless hours with organizations across industries grappling with how to adopt AI. From these conversations and my own research, I’ve identified four key realities about AI in the workplace today.

1. AI is dramatically boosting individual productivity.

Employees who use generative AI report getting complex tasks done in a fraction of the time. In one survey, workers said that using AI turned 90-minute tasks into 30-minute tasks, essentially tripling their productivity. Self-reports can be exaggerated, but controlled studies back up significant (if smaller) gains. For example, a field experiment at a Fortune 500 call center found that giving customer service agents an AI assistant raised their output by 14% on average. Developers using GitHub’s AI coding assistant completed tasks 26% faster than those without it. Whether through writing, coding, or planning, AI is helping individuals work quickly and, in some cases, produce higher-quality output than before.

2. Many workers are already using AI, often quietly.

AI adoption on the ground has been explosive. A representative study in Denmark early last year found that 65% of marketers, 64% of journalists, and even 30% of lawyers had already used AI on the job. In the U.S., the share of workers who say they use AI at work jumped from about 30% in late 2024 to over 40% by April 2025. This surge has made ChatGPT one of the world’s busiest websites. Yet much of this AI use is happening under the radar. Surveys find that official company-provided AI tools often see only ~20% adoption, while many more employees use AI informally or in secret. Why the secrecy? Some fear punishment under vague AI policies; others worry that admitting huge efficiency gains will just lead to higher workload expectations or even job cuts. In other cases, workers simply aren’t sure how to apply AI effectively, beyond the basic training they might have received. In short, frontline employees are both the innovators and the “secret agents” of workplace AI — enthusiastically using it where they can, but often without organizational support or awareness.

3. There’s far more transformative potential in today’s AI than most companies realize.

The current generation of AI systems can do more than draft emails or summarize text. They can perform deep, complex analyses and multi-step tasks that would have seemed like science fiction a year ago. For instance, new “deep research” tools can autonomously crawl hundreds of sources and produce a detailed 30-page report with citations in minutes — work that might take human analysts weeks to compile. Early versions of AI “agents” are appearing that can carry out sequences of tasks (like researching a market, then generating a business plan, then creating a slideshow). In my own trials, I gave an AI agent a couple of paragraphs' description of a hypothetical startup along with a clear direction. In response, it generated a working website, a PowerPoint deck, and a 45-page business model analysis, complete with market research and financial projections, in just a couple of prompts. The analysis wasn’t perfect, but it was remarkably thorough — arguably more comprehensive than what a team of human consultants might produce in days. Every month, AI tools are getting smarter (at reasoning, coding, and understanding context) and more capable of high-quality output. The ceiling of what you can do with “today’s AI” keeps rising, and most organizations have only scratched the surface.

4. Yet companies so far are capturing only a fraction of these gains.

Paradoxically, while individuals tout big efficiency boosts from AI, at the organizational level, we’re not yet seeing dramatic performance improvements. Many firms report only modest productivity upticks from their AI pilots. Broad economic data shows no major jump in labor productivity, and no reduction in hours worked, through the end of 2024. One large-scale study of workplaces found no significant impact of AI on employees’ overall output or earnings. On average, workers saved only about 3% of their time with AI, with minimal effect on business metrics. In other words, the 10x individual speed-ups aren’t yet translating into 10x organizational performance. There’s also no evidence (so far) of AI-driven mass layoffs or wage declines across industries, despite a few high-profile tech companies announcing staff cuts in favor of AI. In sum, lots of people are using AI and seeing personal productivity wins, but most companies have not figured out how to turn those individual wins into broad, lasting advantages. Why not?

The reason is that improving individual productivity with AI doesn’t automatically improve organizational productivity. To capture AI’s benefits at scale, companies can’t just let employees “figure it out” ad hoc, nor can they simply install an AI tool and call it a day. It requires organizational innovation — rethinking workflows, incentives, and even the fundamental design of jobs. Over the decades, many organizations have grown accustomed to outsourcing innovation to consultants or buying off-the-shelf software solutions. But with AI, there is no turnkey solution or expert playbook yet: even AI vendors themselves are often surprised by how people end up using their tools. Every company’s context is different, and we are all figuring this out in real time. Gaining an edge with AI means learning faster than others and adapting on the fly. In my experience, the companies that are starting to see real performance boosts have one thing in common: they are harnessing the efforts of Leadership, Lab, and Crowd. These are the three pillars of effective AI transformation in an organization. Let’s break down each of these and how they work together.

Leadership

Any successful AI adoption starts as a leadership challenge. Leaders must urgently recognize both the opportunities and the risks that AI presents for the organization’s future. Six months ago, many executives were on the fence; today, we’re finally seeing a shift. In fact, a wave of internal memos from CEOs has been making the rounds, all carrying a similar message: AI is here, it’s critical to our future, and everyone in the company needs to get on board. Shopify’s CEO, Tobi Lütke, for example, told employees that using AI “reflexively” is now a baseline expectation at the company. Duolingo’s CEO, Luis von Ahn, proclaimed an “AI-first” strategy, urging teams to embrace AI or risk falling behind. Similar mandates have come from leaders at firms ranging from tech giants to banks and retail. This sudden top-down urgency is a positive development — it signals that leadership is waking up to the AI moment.

But urgency alone isn’t enough. The next step for leaders is to paint a vivid picture of what an AI-powered future looks like for the organization. It’s not sufficient to say “AI is important, we must use it” or to tout potential efficiency gains. Employees need to hear how work will change and what the destination might be. Will AI make their day-to-day tasks easier? Will it free them from drudgery to focus on creative work? How will success be measured in an AI-augmented team? Crucially, what happens to employees if AI makes certain tasks 10× faster — will they be downsized, or will they be retrained for new opportunities? These are the questions on everyone’s mind. Research on organizational change shows that people respond to concrete, specific visions of the future, not abstract promises. As a leader, you may not have all the answers (no one does right now!), but you should articulate a clear vision or set of principles. For example, you might declare: “Five years from now, our sales process will run 24/7 with AI assistants qualifying leads, while human reps focus only on closing deals and managing relationships — and we’ll grow the team, not shrink it, as efficiency increases.” A vision like that addresses why to adopt AI and what it will ideally achieve, giving your people a sense of direction and reassurance. Without this clarity, workers may either resist change or misuse the new tools in counterproductive ways.

Having an overall vision also forces leadership to anticipate how work and roles will evolve in an AI-enabled organization. AI isn’t going to replace most jobs outright in the near term, but it will replace or alter specific tasks within jobs. Leaders need to start asking: if AI can now do X task in seconds, do we still need humans doing X at all, or should they focus on Y? I’ve spoken with legal team managers who realize that AI can handle the first pass of expensive research memos, which changes how they allocate junior lawyers’ time (maybe those lawyers spend more time on client interaction and strategic counsel, and less on case law review). In software development, tools that can generate code (“vibe coding”) mean engineers might spend relatively more time on design and architecture, and less on writing boilerplate code. In marketing and content creation, the rise of AI-generated video is a hint of things to come. For example, Google’s latest generative video model can produce a short advertisement clip with sound from just a text prompt (“An ad for Cheesy Otters crackers…”) in seconds. When any employee with a prompt can conjure up a polished video or a working app, it fundamentally changes the speed and cycle of projects. Leaders must start experimenting with new workflows that integrate AI and human work hand-in-hand. It might mean redefining job descriptions (“prompt editor,” “AI workflow designer” could become roles), or dismantling old process bottlenecks. One company I’ve observed took the bold step of reorganizing its product teams: instead of a central IT dev group handling all coding, they embedded software engineers within cross-functional teams alongside product managers, domain experts, and marketers. These small squads were empowered to “vibe-work” — rapidly prototype and launch ideas using AI tools — without layers of approval. The result? 
Projects that used to take 6 months across siloed departments were getting done in a few days by a focused team using AI to handle much of the grunt work. This kind of radical re-thinking of org structure might not be right for every company, but it illustrates the scale of change that AI enables. Leaders should encourage pilot programs and skunkworks projects to explore what’s now possible, and use those to inform a broader transformation strategy.

Finally, leadership sets the tone for how the entire organization approaches AI. Beyond vision, leaders need to address the culture and policies around AI use. If the default stance is fear or strict control (e.g., “don’t use AI or you’ll be fired”), employees will either hide their usage or avoid the tech altogether. Instead, smart leaders create safe zones for experimentation. They might explicitly designate certain projects or departments where any use of AI is allowed (as long as laws and basic ethics are observed), so people don’t feel paralyzed by compliance worries. They also rewrite policies to be specific — e.g., “Feel free to use AI to assist with coding or writing, but do not paste confidential client data into external tools” — rather than a blanket “no AI” edict. I often see the legal department acting as a choke point here; leadership should push them to revisit outdated privacy objections. (For instance, major AI providers now let companies opt out of having data used for training, and there are enterprise-grade models that meet strict privacy standards. Shadow AI use is already happening regardless — it’s safer to allow it with guidelines than to drive it underground.) Leaders can further incentivize and model AI adoption. Some companies have offered substantial rewards — extra vacation days, public recognition, even cash bonuses — to employees who come up with game-changing uses of AI in their workflow. The message is: we want you to experiment and share what you learn. And when executives themselves use AI in visible ways (say, a VP demoing in a meeting how they used ChatGPT to help analyze a business problem), it sends a powerful signal that “this is not cheating or trivial — this is our new way of working.” In summary, Leadership’s role is to set the vision and the stage: communicate urgency and optimism, define guardrails that encourage innovation, and reorganize resources to explore AI’s potential. 
But leaders don’t have to figure out every detail alone. That’s where the other two pillars — the Crowd and the Lab — come into play.

The Crowd

In the context of AI adoption, “The Crowd” means your general workforce — all the employees on the front lines of doing the actual work. They are crucial because true innovation in how AI can be used often bubbles up from the ground. Why? There is no manual for how to apply AI to every job — we’re all learning by doing. Experienced workers, who deeply understand their tasks and pain points, are usually the ones to spot clever ways an AI tool can help. I’ve seen accountants build AI prompts to automate checking Excel sheets for errors, recruiters using AI to draft tailored outreach emails in seconds, and project managers who create whole risk assessment reports via AI that previously took them days. These are things no outside consultant could have perfectly pre-defined, because they rely on intimate knowledge of the work. So, empowering the Crowd to experiment is key to unlocking AI’s value. When employees start trying AI on their own tasks, they discover workflows that managers or IT might never have imagined.

Many companies say they want this bottom-up innovation. It’s becoming common for firms (even in regulated industries) to roll out a ChatGPT-style assistant to all staff, along with basic training sessions on “how to write good prompts.” The results, however, have been mixed. Typically, you see an initial spike of curiosity followed by a plateau: perhaps 15–20% of employees become regular users of the official AI tool, and the rest revert to old habits. When surveyed, those official users report only minor productivity improvements. This can lead management to conclude, “Well, I guess AI isn’t that big a deal here.” Meanwhile, a quiet revolution is happening under their noses: in recent surveys, over 40% of workers admit to using AI tools at work in some form — often using public tools or writing their own little scripts — and many of them swear it has dramatically improved their effectiveness. The discrepancy reveals two groups of employees: the “Secret Cyborgs” and the unsure majority. The Secret Cyborgs are those actively (but covertly) using AI to boost their work. They keep it secret for a variety of very human reasons: fear of being penalized for breaking some rule, fear that if they admit an AI co-produced their work, then their own contributions will be discounted, or fear that if they make their workflow too efficient, they might work themselves out of a job. On the other hand, the unsure majority are those who haven’t moved past superficial uses of AI. They tried the company chatbot once or twice, got an irrelevant answer, and shrugged. Or they simply don’t know which parts of their job could be made easier with AI — especially if they’re not tech savvy or if their manager hasn’t encouraged experimentation.

Tackling both issues, the hiding and the hesitation, is essential. Leadership (as discussed) must create an environment where using AI is encouraged and safe. If employees are scared that “AI efficiency = layoffs,” they’ll understandably keep their AI tricks to themselves. Leaders should explicitly reassure teams that productivity gains will be used to grow or improve the business, not just to cut headcount. For example, they might promise, “If AI lets us handle two times the workload, we’ll aim to reassign people to new projects and markets we couldn’t tackle before — not replace them.” Backing this up with actions (like not immediately slashing team size when an AI tool comes in) builds trust. Also, celebrate those who use AI openly: make it clear that figuring out how to boost your work with AI is a path to promotion, not a shameful shortcut. Some companies have instituted internal forums or “AI fairs” where employees demo their AI-augmented workflows to peers and execs, trading tips and getting recognition. This kind of knowledge sharing turns secret cyborgs into proud pioneers.

For the employees who are unsure how to start, education and tools from The Lab (next section) will be critical. But even at the Crowd level, there’s a lot that can be done. Beyond generic prompt-engineering classes, it helps to give people concrete, hands-on practice in communicating with AI and integrating it into daily tasks. One approach we’ve seen is running internal hackathons or challenges: for instance, “Use our company data + an AI tool to solve X problem,” with teams competing. Another approach is to identify “AI champions” in different departments — early adopters who can coach their colleagues one-on-one on simple use cases (like a salesperson showing others how she uses GPT to draft proposal outlines). The goal is to lower the barrier for the skeptics: once they see a few practical examples relevant to their job, the lightbulb often goes off.

It’s worth noting that the Crowd will produce a stream of innovative ideas and needs if you listen to them. Perhaps a finance analyst comes up and says, “If only we had a custom AI that could pull data from these 5 systems and answer my questions, I’d save 10 hours a week.” Or a customer support rep says, “I’ve been using ChatGPT at home to summarize customer emails, but it would be great if it had real-time access to our knowledge base.” These insights are gold. They should flow directly into your AI Lab (or whatever innovation process you have) because they highlight high-impact opportunities. In short, your employees on the front lines are an extension of your R&D team in the era of AI. By encouraging experimentation, surfacing their discoveries, and reducing fear, The Crowd can significantly boost company-wide performance. But to truly capitalize on that, you need structure and technical support, which is where the Lab comes in.

The Lab

If the Crowd is decentralized, the Lab is a more centralized engine for AI innovation. This isn’t a traditional R&D lab in an ivory tower, nor just a data science team doing analysis. Think of it as a deployment and discovery task force for AI, a group charged with both exploring future possibilities and exploiting current opportunities. The Lab’s mission is twofold: build new capabilities quickly and continuously, and chart the path ahead. To do this, it should be composed of a mix of people: some engineers or data scientists, yes, but also savvy non-engineers and domain experts. In fact, some of the best Lab members are often those very same enthusiastic employees from the Crowd who were hacking together AI solutions in their old departments. By pulling them into a dedicated team, you free them to focus on multiplying those solutions across the organization.

What does the Lab actually do on a day-to-day basis? First and foremost, it builds and iterates on AI-driven tools and workflows for the company. A good Lab operates with startup-like agility: identify a use case, prototype a solution in days, test it with real users, gather feedback, refine, and either scale it up or scrap it and move on. For example, if marketing folks are manually sorting hundreds of incoming customer messages, the Lab might build a quick AI system to auto-tag and route those messages — maybe using an off-the-shelf model fine-tuned on your data. If consultants in your firm are spending hours making slide decks, the Lab could create an internal “AI Slide Assistant” that generates draft PowerPoints (in the style of tools like Gamma) from a few bullet points. The key is rapid implementation of ideas coming from the field (the Crowd). The Lab should operate almost like a conveyor belt from Crowd insights to pilot solutions. One week, they’re packaging a clever prompt someone wrote into a reusable app; the next, they’re wiring up an AI agent to handle a routine process end-to-end. By quickly spreading these innovations, the Lab ensures that a brilliant trick discovered by one employee can benefit hundreds of employees.
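For illustration, a message-routing pilot like the one described can start as very little code. In this sketch, the queue names are hypothetical, and `classify_with_llm` is a stand-in for a real model call (a keyword heuristic here so the sketch runs end to end); the important pattern is validating the model’s label before acting on it:

```python
# Minimal sketch of an auto-tag-and-route pipeline (all names hypothetical).

ROUTES = {
    "billing": "finance-queue",
    "bug": "support-queue",
    "other": "triage-queue",
}

def classify_with_llm(message: str) -> str:
    """Placeholder for a real LLM call; returns one of the ROUTES keys.
    A keyword heuristic stands in so the sketch is runnable."""
    text = message.lower()
    if "invoice" in text or "charge" in text:
        return "billing"
    if "error" in text or "crash" in text:
        return "bug"
    return "other"

def route(message: str) -> str:
    label = classify_with_llm(message)
    if label not in ROUTES:  # never trust a free-text label blindly
        label = "other"
    return ROUTES[label]
```

A first pilot can run this on a copy of the inbox and have humans spot-check the routing before anything is automated for real.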

Secondly, the Lab needs to develop AI benchmarks and evaluation metrics that matter for your business. It’s not enough to rely on generic benchmarks like coding tests or trivia quizzes (those are what AI model vendors use to tout their model’s prowess). Your Lab should figure out, for instance, which model is best at writing a client-ready report in your company’s style, or which chatbot gives the most helpful answers about your product catalog. This might involve creating sample tasks and scoring different AI tools on accuracy, clarity, and so on. Some of these evaluations can be automated, but many will require human judgment, and that’s okay. You can literally have experts do a “blind taste test” of outputs from Model A vs Model B and decide which is higher quality for the task. Track these results over time. You might find that a cheaper open-source model works just as well as an expensive API for a particular task, saving costs. Or you might see that while no current AI can adequately reconcile two conflicting legal documents (for example), the gap is closing with each model release, which informs you that this task might be automatable next year. (Anthropic, an AI company, published a useful guide on creating custom benchmarks for organizational AI evaluation — a good starting reference.) The Lab essentially becomes your in-house AI performance center, constantly asking: How good are the AIs now at what we need? and When should we switch or upgrade our tools?
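A blind taste test like this is straightforward to set up in code. The sketch below (sample data and names are illustrative) shuffles which side of each pair came from which model, so judges only record “left” or “right,” and then computes Model A’s win rate:

```python
import random

def blind_pairs(a_outputs, b_outputs, seed=0):
    """Pair up outputs from two models and randomize left/right placement
    so judges cannot tell which model produced which side."""
    rng = random.Random(seed)
    pairs = []
    for a, b in zip(a_outputs, b_outputs):
        if rng.random() < 0.5:
            pairs.append({"left": a, "right": b, "left_is_a": True})
        else:
            pairs.append({"left": b, "right": a, "left_is_a": False})
    return pairs

def tally(pairs, judgments):
    """judgments[i] is 'left' or 'right' -- the side the judge preferred.
    Returns Model A's win rate across all pairs."""
    a_wins = 0
    for pair, choice in zip(pairs, judgments):
        if (choice == "left") == pair["left_is_a"]:
            a_wins += 1
    return a_wins / len(pairs)
```

Run this over a few dozen representative tasks each quarter and the win rates become exactly the kind of longitudinal record the Lab needs to decide when to switch tools.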

Third, a forward-looking Lab will build things that don’t fully work… yet. This is a bit counterintuitive in a business setting — why build something that fails? The point is to anticipate the future. Suppose there’s a core business process (say, drafting a complex contract, or managing a supply chain schedule) that today is too complicated for AI to handle alone. The Lab could attempt to create an AI agent or workflow to do it anyway, knowing it will perform poorly at first. By doing so, you learn where the current technology falls short — maybe the agent can draft 80% of a contract but misses important nuances, or it can handle routine scheduling but fails when an unexpected event occurs. You keep these prototype systems around and periodically plug in the latest AI models. One of these days, you’ll find that a new model has crossed the threshold and your once-failing prototype suddenly works pretty well. Because you’ve already built the pipeline, you can immediately consider deploying it. In effect, you’ve pre-invested in solutions for when the tech catches up. Given how fast AI is advancing, this approach can put you miles ahead of competitors who only start integrating new capabilities months after they emerge.
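One lightweight way to structure such pre-built prototypes is to keep the pipeline and its evaluation fixed while making the model a swappable function, so each new release can be plugged in and re-scored with one line changed. A toy sketch, with stub models and a made-up contract-clause coverage metric standing in for real API calls:

```python
# Swappable-model prototype pattern (metric and models are illustrative stubs).

def contract_eval(model, test_cases):
    """Score a model on internal test cases: average fraction of required
    clauses that appear in its draft."""
    scores = []
    for case in test_cases:
        draft = model(case["brief"])
        hits = sum(1 for clause in case["required"] if clause in draft)
        scores.append(hits / len(case["required"]))
    return sum(scores) / len(scores)

def model_v1(brief):
    # Stand-in for an early model release: drafts only part of the contract.
    return "payment terms"

def model_v2(brief):
    # Stand-in for a later release: covers more of the required clauses.
    return "payment terms; liability cap; termination"

TEST_CASES = [
    {
        "brief": "vendor agreement",
        "required": ["payment terms", "liability cap", "termination"],
    },
]
```

When a new model crosses your internal threshold on this fixed eval, the pipeline around it is already built and deployment becomes a decision rather than a project.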

Finally, the Lab should create “AI provocations” — demos and experiments that jolt the organization’s thinking. These are not meant for immediate ROI; they are meant to broaden imaginations and overcome inertia. For example, the Lab might fully automate a fictional project proposal, from initial client query to final slide deck, just to show it’s (almost) possible. Or generate a hyper-personalized fake marketing campaign for a made-up product in an afternoon. Or have two AI agents simulate a negotiation between a customer and a salesperson. These demos can be shown in town halls or team meetings to spark discussion. Often, seeing is believing — when people witness an AI doing something that normally only an expert would do, it can inspire them to rethink their own work and get more creative with AI. Provocations can also flush out legitimate concerns and ethical questions, which leadership and the Lab can then address proactively.

In summary, the Lab is the bridge between possibility and practice. It takes the raw energy and ideas from the Crowd, adds technical savvy and resources, and turns them into tangible solutions aligned with leadership’s vision. It also feeds information back up to leadership — for instance, what the latest models can or can’t do — helping refine strategy. Many companies starting their AI journey don’t have a formal “Lab” at first; often, it begins with a small task force or a single talented “AI guru” hire who starts spinning up projects. But as you scale, dedicating a team (even a modest one) to this function makes a huge difference. It creates a center of gravity for AI knowledge and helps ensure the organization as a whole moves in a coordinated way, rather than fragmenting into disparate AI experiments.

Rethinking the Nature of Work

Even with strong Leadership, an empowered Crowd, and an active Lab, companies may need to confront a deeper question: Are we doing the right work? All our organizational structures and processes were built in an era when human brainpower was the only intelligence available. Now that we have machines that can provide intelligence on demand, we must rethink some very basic assumptions about how work is organized.

Consider this: if an AI can generate a comprehensive research report in 30 minutes, the bottleneck is no longer doing the research — it’s defining what questions to research and deciding how to act on the findings. If writing code becomes 10 times faster thanks to AI, the scarce resource isn’t code; it’s a clear understanding of user needs and creative ideas for new features (since pumping out code is cheap). If content (blogs, social media, even video) can be churned out almost instantly by generative models, then simply producing content is not a differentiator — the focus shifts to strategy, curation, and truly original creativity, as well as building trust with an audience. In short, when AI takes over certain tasks, it elevates the importance of the tasks around those tasks. We have to ask, “What is truly valuable here? What should humans focus on, now that AI can do X or Y?” This may lead to stopping certain activities altogether. For example, if an internal report can be auto-generated in seconds but nobody actually reads 100 auto-generated reports, maybe the team that used to spend days preparing reports should instead spend time on synthesizing insights, building relationships, or something with higher impact.

The pace of technological change adds urgency to this reconsideration. Six months ago, most AIs couldn’t reliably analyze a spreadsheet and make business recommendations; now, some can. A year ago, we didn’t have generative models that could create short videos with sound from text; now we do. And tomorrow, AI agents might be able to browse the web, execute code, or interact with our internal databases autonomously to accomplish goals. Every new capability means tasks that were firmly in humans’ domain might become shared with (or handed off to) AI. Organizations need to become extremely adaptable. This is fundamentally a learning challenge: the companies that thrive will be those that can learn and relearn how to work every time AI opens a new door. It’s less about any single tool and more about the mindset and process of adapting. That’s why the feedback loops between Leadership, Lab, and Crowd are so important — they create a system for continuous learning. The Crowd finds what works on the ground, the Lab amplifies and evaluates it, and Leadership adjusts strategy and vision accordingly, which in turn encourages the Crowd to explore further.

Crucially, companies cannot outsource this adaptation. You can and should leverage outside experts for technical help, buy great AI tools, and learn from industry best practices, of course. But no consultant can tell you exactly how your unique combination of people, culture, and operations should integrate AI, at least not yet. In the early days of electricity, factories had to rethink their layouts (no longer clustering machines around a single power source). In the early days of the internet, businesses had to rethink processes entirely (e.g., moving from paper forms to web portals). We’re in a similar moment. AI isn’t just another IT system to install; it’s a general-purpose capability that will infuse every process, every role, in unpredictable ways. It demands organizational transformation, not just technology deployment.

The encouraging news is that the sooner you start, the better positioned you’ll be. We are in a messy, uncertain phase — there will be experiments that fail, and course corrections needed. But sitting on the sidelines is riskier. By developing the Leadership-Lab-Crowd triad, you create an organization that learns by doing. You’ll make mistakes, but you’ll also make discoveries, and you’ll be able to react when the environment shifts. In contrast, organizations that wait for “proven” playbooks may find that by the time certainty arrives, they’re already years behind in experience.

In conclusion, making AI work in your company means empowering your people at all levels. It means leaders are setting a bold, clear direction and ensuring a culture of trust and experimentation. It means unleashing the creativity of employees (“the Crowd”) by encouraging them to find AI solutions for the work they know best. And it means establishing a dedicated capability (“the Lab”) to turn those solutions into scalable tools, evaluate new technologies, and keep the organization on the cutting edge. Done right, this creates a virtuous cycle of innovation: leadership guides and learns, the crowd experiments and shares, and the lab builds and propels the whole organization forward.

I invite you to join this conversation. How is your organization approaching AI integration? What successes or hurdles have you encountered in aligning leadership vision, grassroots innovation, and dedicated AI teams? The best practices for AI in business are still being written — let’s discuss and write them together. Feel free to contact me or share your thoughts, examples, or questions, and let’s learn from each other about crafting effective AI strategies for our organizations. Together, by tapping into our collective insight, we can figure out how to truly make AI work for everyone.

