AI Doesn't Fail Where You Think It Fails
“Human + AI Alliance” session at SXSW26, March 16, 2026
Four weeks out from #SXSW26, one moment keeps coming back to me.
Sixty seconds into our session, Margaret Spence and I did something you're not supposed to do at a conference.
We put the slides away.
The room was an interesting mix. Gen Z students. HR executives. AI developers. UX designers. A crowd that rarely sits in the same room, let alone has a real conversation.
We looked at them and said, "We prepared a deck. But you're going to decide what this actually looks like."
Then we asked a single question.
What are you most afraid of?
What came back is the reason I'm writing this.
The fears weren't what you'd expect.
Nobody said "the singularity."
Nobody mentioned rogue models.
Nobody brought up “machines taking over”.
What they shared was deeply human:
1. Misalignment on my team.
2. Losing my professional identity.
3. Not having the space to think.
I've been working in talent development long enough to recognize that list. It isn't a list about AI. It's a list about belonging, meaning, and cognitive overload.
And that was the reveal of the room.
AI doesn't fail on the server. It fails in the human.
The industry still treats AI implementation like a software upgrade. Pick the vendor. Pick the model. Roll it out. Train the team. Measure adoption.
But the often-cited 83% failure rate of AI projects isn't a technology problem.
It's a collaboration problem. It's a human design problem. It's a leadership problem.
In our session, you could watch it playing out in real time. AI developers in one corner, UX designers in another, HR leaders somewhere in the middle. Everyone polite. Nobody really listening.
They were repeating the same complaints to different audiences: unclear goals, poor communication, and a lack of shared purpose.
Four silos. One failing implementation. Zero surprises.
The voice that's missing from most AI conversations
Here is what I keep noticing in organization after organization.
When AI gets introduced at work, three groups are usually in the room. Technology. Strategy. Finance.
The people who actually understand how humans learn, adapt, and grow at work (HR, talent development, L&D, change managers, coaches) are brought in afterward, if at all. Usually to "handle adoption."
That's backward.
AI adoption is not a training issue. It's a change issue. And change is a human issue.
When we skip the people who understand human adaptation, we don't get faster rollouts. We get quieter resistance, eroded trust, and employees under implicit pressure to use AI every day or risk their jobs, with no clarity, no real conversation, and no shared purpose.
What I learned by throwing out the slides
The best feedback we got at #SXSW26 wasn't about the content. It was about the space.
People told us our session was the best one they attended, because they got to talk.
Not to be talked at.
Not to be prompted.
Not to be pitched.
Talk.
Margaret said something that I've been sitting with ever since.
“People don't need more prompts. They need a reflecting space.”
Space to voice what they're afraid of.
Space to sit next to someone who thinks nothing like them.
Space to realize that AI is not a technical transition. It's a human one.
If you're a leader rolling out AI right now, I'd invite you to sit with four questions before your next initiative:
1. Who is actually at the table designing this, and who's missing?
2. Have we asked our people what they're afraid of, and did we listen without defending?
3. Are we building a human transition or a software rollout?
4. Is the purpose of this change clear enough that a new hire could repeat it in one sentence?
If any of those land awkwardly, you don't have an AI problem. You have a leadership and design problem that AI is about to amplify.
The future belongs to the deeply human.
That was the theme of SXSW26. I didn't invent it. It kept rising to the top, no matter which session you walked into.
Even in a year dominated by AI, what wouldn't stay quiet was purpose. I saw it in a panel of three young CEOs running Kulture City, Newman's Own Foundation, and HOPE Hydration, talking about building organizations around something bigger than themselves. I saw it in a session that, imperfectly, tried to connect Ikigai to the work of leadership. I heard it in the hallway conversations after our own session ended.
In a year when almost every stage touched AI, what wouldn't stay quiet was human. Mattering. Trust. Purpose. The space to think. The irreplaceable ones.
As Nataliya Kosmyna of MIT warned on one of the stages, AI will finish your prompt today. Tomorrow, it will try to finish your thought.
The leaders who win the next decade won't be the ones with the best stack. They'll be the ones who built the cross-functional alliance of technology, design, and human development before everyone else realized they needed one.
And they'll still know how to hold space.
If your team had sixty seconds and total safety to answer one question about the change happening at work right now, what do you think they'd actually say?
I'd love to read your thinking in the comments.

