Author: Nick

  • AI signal and noise: The first wave in learning and enablement

    AI is already becoming a coworker for people in the enablement space. At Learning Technologies 2026, nearly every participant in each AI session agreed on one thing: you cannot be a viable partner to your stakeholders without using AI.

    Fosway data backs the sentiment: nearly half of L&D work could be done by AI within the next few years, and 80% of teams plan to use AI to resource projects this year. But what does that mean?

    The question isn’t whether AI will be adopted, but whether what we’re using it for is solving the right problems and what new problems it could solve next.

    AI started with content, but that’s not the end

    To answer that, let’s take a look at a familiar content creation workflow. Most enablement and L&D teams operate inside a process that hasn’t fundamentally changed in decades, but with the introduction of AI content tools, the workflow is changing.

    A typical learning content lifecycle workflow for enablement

    Let’s break this down.

    1. Knowledge lives in scattered sources: SharePoint, Confluence, Notion, Google Drive, Seismic, the heads of SMEs, half a dozen Slack channels
    2. When a business need surfaces, an instructional designer is asked to build something, usually a course or a piece of content. The trigger could be a product launch, a sales or skill gap, a product or strategy update, etc.
    3. They compile knowledge from those sources, design the experience, build it in Rise or another authoring tool, and hand it to an admin
    4. The admin uploads it to the LMS as a SCORM package

    But the content lifecycle is not done…

    • From there, learners engage (or don’t)
    • Feedback comes back through course evaluations
    • Analytics report pass/fail
    • Completions are used to track compliance (notably not outcomes)
    • When the source content changes, an update loop fires (eventually)
    • Updates are flagged for work in the backlog
    • The cycle restarts

    The process is designed for maximum reach, but it targets no one’s specific needs. It’s not personalized. It takes weeks to build. It is hard to keep up to date. Feedback gets lost, forgotten, or buried under the next request. Analytics aren’t tied to business outcomes; they’re stuck at pass/fail.

    By the time learning reaches the learner, the moment of need has often passed, and forget about recall. No one is revisiting this content once it’s marked as complete.

    Is AI here to save us?

    Now look at where AI showed up first. In the middle of that diagram.

    A typical learning content lifecycle workflow for enablement, now with AI

    AI authoring tools, avatar generators, course assemblers, AI-generated assessments, AI writing assistants embedded in every authoring platform, and AI coaching tools like Yoodli that let learners practice and get real-time feedback before high-stakes moments. My team at Snowflake uses Yoodli and I’m a fan of what they’re building.

    There are three reasons why AI tools targeted content creation first:

    1. AI is good at content and patterns without full context. It can generate a first draft from a brief. It can turn a transcript into a course outline. It can produce passable assessments. None of this requires AI to understand your full knowledge base, your sales process, or your analytics stack. And that’s good, because it doesn’t have access to most of those anyway. Not yet.
    2. Content acceleration is the most obvious productivity win. Enablement and L&D leaders have been promised faster content production for decades, and ATD’s research is clear on why. Two-thirds of learning teams cite limited resources as their top barrier to effective learning, roughly 40% cite scope creep, and another 37% can’t get enough SME time to keep their materials accurate. These are exactly the bottlenecks AI content tools were built to dissolve.
    3. It fits inside the existing workflow. AI in content creation doesn’t require rewiring the process. The instructional designer’s role gets faster, not different. The LMS still distributes the content. Analytics still report pass/fail. Nothing else has to change.

    ATD top barriers to L&D output, aligned with expected impact from AI

    While AI has sped up the reaction, the real problem remains: the cycle is still reactive. Still generalized. Still pass/fail.

    AI made the same shape of work move faster, but the shape itself was the constraint. As Fosway flagged, most AI activity is concentrated on the production side of learning rather than on outcomes.

    If you’re measuring AI by content speed, you’re measuring fit to the old workflow, not outcomes.

    The first wave is real. The next wave is the work.

    This is the first wave of AI in enablement: making the existing shape of work faster. Real wins. Real savings. Not yet transformation.

    The next wave is rewiring the shape of work itself. A connective membrane between knowledge, skills, and signals. The central brain that turns AI from accelerant into the operating layer. I’ll cover how this is starting to take shape in the next post.

    This is part two of a series on what I took away from Learning Technologies 2026. Part three: the central brain. What the connective layer between knowledge, skills, and outcomes actually looks like, with the architecture that makes it real.

  • Skills are not the new gamification

    I came in skeptical. I left convinced. What happens now will define enablement’s next evolution.

    Two weeks immersed in the L&D world: Docebo’s Inspire 2026, then Learning Technologies 2026 in London.

    Across every track and keynote, one theme was inescapable: skills. Yes, “AI” was plastered across every vendor booth, but it was a buzzword in search of problems to solve. Everyone was offering AI content creation, and there were plenty of creepy AI avatar vendors. But underneath the noise, only a handful were trying to crack how AI actually drives outcomes.

    My first reaction to skills: another hype cycle. Points, badges, leaderboards. We’ve seen this pattern before.

    Then the inversion hit. The AI on the floor was flashy and feature-driven. The skills work in the sessions was digging into the heart of what enablement solves for. AI felt like a demo. Skills felt like a transformation. That’s not a knock on AI, but the substance was in solving skills at scale.

    Gamification’s hangover

    Skills entered the conversation carrying gamification’s baggage. Gamification wasn’t wrong; for most organizations, it was simply applied without objectives. People got excited by Duolingo and Salesforce Trailhead and wanted their own slice without doing the underlying work of aligning incentives to outcomes. Yu-kai Chou’s Octalysis framework lays out the eight core drives behind real behavior change. Almost no LMS gamification module engages with any of it.

    Skills programs are one bad design decision away from the same fate.

    So when Docebo bought a small skills vendor, 365Talents, and announced plans to integrate them, my first reaction was: how cute, they’re filling a product gap. Another vendor solution without vision. But that view inverted, first on Docebo’s main stage, then in London, where fully dedicated tracks focused on skills. Something was up.

    The turn

    We’d been investigating how to reset skills and competencies at Snowflake, rethinking the approach entirely. There’s no shortage of frameworks and tools. None of them quite fit.

    Then Docebo’s keynote changed my read. Instead of trotting out a complex framework of levels and methodologies, they framed everything around outcomes and argued that the skills layer matters only if it’s connected to what’s actually happening in the work. That’s a different conversation than “let’s standardize a taxonomy.”

    The picture sharpened in London. LT2026 had dedicated tracks for skills, with every block covering skills strategy, assessment, or implementation. The strongest session was Koreen Pagano’s From Roles to Skills: What It Really Takes to Become a Skills-Based Organisation. She listed the historic places skills programs die: stuck on definitions, stuck on ontology frameworks, stuck on proficiency scales, stuck on technology-first approaches. Every one of those failure modes is a place where the work was too slow, too manual, or too brittle to keep up with how organizations actually change.

    What’s different now is that AI lets you cut through all four. You don’t need a perfect taxonomy if a system can infer skills from work product. You don’t need a manual proficiency scale if you can pull signals from real performance. You don’t need a six-month technology rollout if you can stand up a usable system in weeks. The historic blockers weren’t conceptual. They were operational. AI dissolves the operational ones.
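
    As a minimal sketch of what “infer skills from work product” could look like, here’s the shape of the idea in code. Everything in it is an assumption for illustration: the skill name, the proficiency scale, and the keyword stub standing in for what would really be an LLM scoring a transcript against a rubric. The point is the shape: the profile updates from observed work, not from a self-assessment form.

    ```python
    from dataclasses import dataclass

    # Illustrative only: the skill name, scoring scale, and keyword stub are
    # assumptions, not any vendor's actual taxonomy or API. In practice the
    # stub would be an LLM scoring the transcript against a rubric.

    @dataclass
    class SkillSignal:
        skill: str          # e.g. "discovery questioning"
        evidence: str       # pointer back to the work product
        proficiency: float  # 0.0-1.0, inferred rather than self-reported

    def infer_skills(transcript: str) -> list[SkillSignal]:
        """Turn a call transcript into skill evidence."""
        signals = []
        if "what problem are you trying to solve" in transcript.lower():
            signals.append(SkillSignal(
                skill="discovery questioning",
                evidence="asked an open-ended problem question",
                proficiency=0.7,
            ))
        return signals

    # The profile updates from observed behavior, not an annual form.
    profile: dict[str, float] = {}
    for sig in infer_skills("Rep: What problem are you trying to solve this quarter?"):
        # keep the strongest observed evidence per skill
        profile[sig.skill] = max(profile.get(sig.skill, 0.0), sig.proficiency)

    print(profile)  # {'discovery questioning': 0.7}
    ```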

    That reframes the whole thing. Skills aren’t trying to be the new feature. They’re trying to be the new substrate.

    Why this is different

    Gamification was a UI pattern. Or really, what should have been a psychological model of motivation that mostly got implemented as a UX layer on top of learning. Toggle it on, toggle it off. The underlying course was the same.

    Skills are an organizing layer. They’re the thing that lets you ask: do we have this capability? Where? At what depth? Building toward what? Are we delivering it through training, hiring, contracting, or AI?

    You can’t answer any of those questions with badges and certifications. Those are signals of what someone has already done, not what they’re capable of today.

    What the demos skip

    Skills don’t work in isolation. They connect a knowledge layer (product docs, GTM strategy, sales collateral, tooling) to a signals layer (call transcripts, CRM, product usage, customer data, the work itself). Without those two, a skills system is just another static taxonomy on the LMS. Never updated. Never connected to outcomes.

    The value isn’t in the layers themselves. It’s in the interplay. When a CRM signal shows a customer interested in a specific use case but the rep never positions the relevant product, you can dig in: has the rep done the training, the roleplay, the validation? When someone joins with deep product knowledge from a prior role, why make them sit through the 101? Skills are how you meet people where they are, accelerate the work, and stay aligned to outcomes.
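
    Here’s a minimal sketch of that dig-in, with every system, field name, and mapping invented for illustration. Once the three layers are connected, the question becomes a lookup chain: signal first, then training, then practice, then validation.

    ```python
    from dataclasses import dataclass, field

    # Illustrative sketch of the knowledge/skills/signals interplay. Every
    # system, field name, and mapping here is hypothetical for the example.

    @dataclass
    class Rep:
        name: str
        completed_training: set[str] = field(default_factory=set)
        passed_roleplays: set[str] = field(default_factory=set)
        validated_skills: set[str] = field(default_factory=set)

    @dataclass
    class CrmSignal:
        rep: str
        customer_interest: str         # use case surfaced in notes/transcripts
        products_positioned: set[str]

    # Hypothetical mapping from a use case to the product and skill behind it
    USE_CASE_MAP = {"data sharing": ("Collaboration", "positioning data sharing")}

    def diagnose(signal: CrmSignal, rep: Rep) -> str:
        """Given a missed-positioning signal, locate where the gap actually is."""
        product, skill = USE_CASE_MAP[signal.customer_interest]
        if product in signal.products_positioned:
            return "no gap: product was positioned"
        if skill not in rep.completed_training:
            return f"knowledge gap: route {rep.name} to training on {skill!r}"
        if skill not in rep.passed_roleplays:
            return f"practice gap: assign an AI roleplay on {skill!r}"
        if skill not in rep.validated_skills:
            return f"validation gap: observe {rep.name} live on {skill!r}"
        return "skills look fine: investigate territory or process instead"

    rep = Rep("Sam", completed_training={"positioning data sharing"})
    print(diagnose(CrmSignal("Sam", "data sharing", set()), rep))
    # practice gap: assign an AI roleplay on 'positioning data sharing'
    ```

    The logic is trivial on purpose: with the layers connected, diagnosis is a lookup, not a research project.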

    The data was sobering. Deloitte: 90% of organizations are moving toward skills-based models, but only 1 in 5 have operationalized them beyond traditional job descriptions. Most are saying the right things without rewiring anything, and that’s where this could still go sideways. That version of “skills” really will be gamification 2.0.

    Fosway adds the technical layer. 74% of L&D leaders say their current LMS doesn’t meet their AI expectations, even as live AI capabilities in learning systems doubled (11% to 26%) in a single year. That 74% isn’t an indictment of skills as a concept. It’s an indictment of stacks that can’t connect skills to knowledge to signals to outcomes.

    The vendors with answers weren’t on the main stage presenting their solutions. They were in the booth conversations, talking about agentic content systems, knowledge layers feeding AI tutors, signal pipelines from Gong and Salesforce, learner profiles that update from behavior instead of from self-assessment. The vendor capabilities are clear. The architecture and execution gaps are wide open.

    The year ahead

    Skills aren’t the new gamification. They’re a new learning object.

    Instead of tracking completions and advancements solely through learning, we have an opportunity to measure skills through real-world behavior, and to use training and AI-powered roleplays to fill the knowledge and skill gaps.

    If you’re an enablement or L&D leader, this is the theme of your year. The vendors won’t hand it to you. The frameworks won’t get you there. Three things to take with you:

    1. AI is part of the workforce now. Build, Buy, Borrow, Bot. RedThread Research’s 4B model captures it: AI is a labor option alongside humans. Someone (or something) has to decide who does which work, and skills are how that decision gets made. The superpower: pair skills data with real-world signals and do what wasn’t possible before. Two prerequisites. Your content has to be retrievable, not packaged inside courses (see the sketch after this list). Your systems have to capture signals (or AI can pull them from systems you already have, like Gong).
    2. Pick outcomes, not roles. Skills only matter when they’re tied to objectives. Pick the outcomes that count, map the skills behind them, and build for those, not for the org chart. Done right, your team’s skills data tells you what the org can actually do and what gaps to close. Done wrong, it’s a taxonomy that decorates the LMS.
    3. Dig in. We’re in uncharted territory. Plenty of opportunity, plenty of dead ends. Pilot small. Connect tight. One team, one outcome, one feedback loop. The orgs that win this cycle won’t be the ones with the cleanest taxonomy. They’ll be the ones connecting skills to the rest of their stack while everyone else is still admiring the framework.
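
    On that first prerequisite, here’s a minimal sketch of what “retrievable, not packaged” could mean. The schema and field names are assumptions, not a standard; the point is content addressable by skill rather than sealed inside a SCORM zip.

    ```python
    from dataclasses import dataclass

    # Illustrative only: the chunk schema and field names are assumptions,
    # not a standard. Content is addressable by skill, not locked in a course.

    @dataclass
    class ContentChunk:
        chunk_id: str
        text: str
        skills: list[str]     # which skills this chunk supports
        source: str           # system of record, for the update loop
        last_verified: str    # when an SME last confirmed accuracy

    library = [
        ContentChunk(
            chunk_id="collab-positioning-01",
            text="Lead with the customer's data-sharing pain, not features.",
            skills=["positioning data sharing"],
            source="Confluence",
            last_verified="2026-01-15",
        ),
    ]

    def chunks_for_skill(skill: str) -> list[ContentChunk]:
        """Retrieve by skill gap instead of by course enrollment."""
        return [c for c in library if skill in c.skills]

    print(chunks_for_skill("positioning data sharing")[0].chunk_id)
    ```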

    Gamification rewarded completions. Skills measure work.


    This is part one of a series on what I took away from Learning Technologies 2026. Next up: AI’s noise vs. AI’s signal, and what “AI that actually drives outcomes” looks like in practice.