Why Readiness Is Shaping the Future of Work: A Conversation with Alicia Sanchez, CAIO at MPF Federal

Undercurrents surfaces insights from leaders driving capacity-building and performance improvement beyond traditional L&D to explore where learning happens and where it’s headed.

By Mark Britz

AI didn’t arrive in organizations through strategy decks or roadmaps; it showed up quietly through curiosity, convenience, peer pressure, and fear.

People are already using it, leaders are already behind it, and learning teams are already sensing that something at the very core of their work has shifted.

This tension sits at the center of my conversation with Alicia Sanchez, Chief AI Officer at a federal contracting firm and someone whose career has consistently lived just ahead of where learning eventually lands.

Alicia and I go way back, to around 2010, when she was an early innovator in games and simulations and we were both attending the Learning Solutions conference. She has since pivoted, drawing on that learning and games experience, and now plays a significant role in the AI readiness space.

Alicia doesn’t describe AI as a future capability. She describes it as current behavior. “You can’t manage what you can’t imagine,” she told me. “And a lot of organizations still can’t imagine how work is actually changing.”

Her concern isn’t that organizations lack tools or training. It’s that they’re treating AI like something that can be scheduled, then rolled out once governance is perfect, data is pristine, and everyone has completed a course.

Meanwhile, the work has already moved on!

Readiness Isn’t a Phase

When people hear “AI readiness,” Alicia says, they often assume she’s talking about tools, platforms, or upskilling programs. However, she is not.

Readiness, as she frames it, is about performance—whether people understand what they’re trying to accomplish, why it matters, and how AI fits into that outcome.

She breaks readiness into two parallel realities: organizational readiness and workforce readiness. Both must exist.

Across those, she looks at three dimensions: preparedness, ability, and willingness:

  • Preparedness asks whether people have access to the right tools and foundational understanding.
  • Ability looks at whether they can actually demonstrate proficiency in real workflows.
  • Willingness—the human side of change—is where most efforts quietly break down.

“If people don’t have the willingness to explore,” she said, “it doesn’t matter how thoughtful the system is. Humans can and will bomb it.”

This isn’t a learning problem. It’s a motivation, trust, and design problem. And no amount of training fixes that.

Learning Is the Wrong Starting Point

One of the clearest throughlines in Alicia’s thinking is that learning has been positioned too late in the change cycle.

AI tools are often dropped into organizations with expectations already attached: faster work, better quality, reduced costs. When those gains don’t immediately materialize, the reflex is to train more.

But the issue isn’t that people don’t know how to use the tools. It’s that the tools are being layered onto workflows that were never designed for them.

“We keep asking how AI fits into the way we already work, instead of asking how the work itself should change,” she said.

That kind of disruption isn’t solved with content. It requires rethinking processes, roles, and expectations, and preparing people emotionally and cognitively for that rethink.

When Work Gets Rewritten

As Alicia talked, I started to see AI as less about automation and more about permission.

Permission to question why work is done the way it is, to remove steps that only exist because “that’s how we’ve always done it.” Permission to design new workflows instead of preserving old ones.

This is where her work starts to resemble organizational design more than traditional enablement. AI flattens access to expertise and democratizes information. In doing so, it reshapes where authority, decision-making, and value actually live.

Experts still matter, but so do novices. In fact, Alicia sees their interaction as essential.

Experts understand why processes evolved the way they did, she explained, while novices aren’t constrained by legacy assumptions. When those perspectives collide, organizations can finally ask the most important question of all: Is this still necessary?

The Human Role Doesn’t Disappear; It Moves

Throughout our conversation, Alicia returned to one idea again and again: AI doesn’t remove humans from the system; it relocates them.

As access to information becomes ubiquitous, knowing things becomes less valuable than knowing when, where, and how to apply them.

“The availability of information isn’t the stopgap anymore,” she said. “Application is.”

This shift makes the human role more consequential, especially in moments of high ambiguity, high risk, and ethical consequence. Those are the points where AI must hand control back to people. And this is where learning expertise still matters.

L&D’s Control Problem

For learning teams, Alicia sees a familiar risk emerging: When faced with disruption, the instinct is to tighten control and own the learning.

“The more L&D tries to control things that are out of their control,” she said, “the more they become a bottleneck.”

But people won’t wait. They’ll work around formal channels to get what they need. Shadow AI will flourish. And learning will become something that happens despite the organization, not because of it.

Where L&D still has unique value is not in owning content, but in shaping conditions, helping define when human judgment is required, designing learning into the flow of work, and enabling people to operate responsibly inside rapidly changing systems.

That requires letting go of the course mindset and leaning harder into what learning professionals already understand best: behavior, motivation, and decision-making under uncertainty.

‘Respawn’ Points, Not Roadmaps

One of Alicia’s most telling metaphors comes from her background in games.

She doesn’t believe AI transformation follows a linear roadmap. She thinks in terms of “respawn points,” the places where a player’s character reappears and rejoins the game after a failure.

“If something goes sideways,” she asked, “how far back do you fall? Do you have a place to come back to that isn’t business-ending?”

Organizations that experiment thoughtfully—piloting, learning, adjusting—can push forward without betting everything on a single outcome. They can explore new ways of working while preserving the ability to recover, regroup, and redirect.

The Undercurrent

What stayed with me after speaking with Alicia is how clearly she sees the present moment: AI adoption isn’t coming later. It’s happening now, often invisibly, unevenly, and without support. The longer organizations pretend otherwise, the more risk they introduce.

Readiness, in this sense, isn’t a phase before learning. It is the work.

For L&D professionals, the opportunity isn’t to defend familiar roles. It’s to step into new ones: designing for judgment, enabling experimentation, and helping organizations stay human while work itself is being rewritten.
