A few weeks ago, I was digging through old files looking for insights and analogies while prepping some material for a guest lecture at DePaul. I came across a LinkedIn article draft I wrote in June 2019 called “5 Lessons I’ve Learned from Brewing Beer.”
I never published it.
I remembered writing it. I was a year and a half into home brewing at the time, sitting on my back porch with a glass of Poolside Shandy (my summer recipe, trademark pending, not really), and I thought I was writing about customer experience. Apparently, I didn’t think it was ready. I saved the draft and moved on.
I re-read it last week with fresh eyes.
I wasn’t writing about customer experience. I was writing about AI adoption. I just didn’t know it yet. And apparently, neither version of me thought it was worth publishing — until now.
Every lesson I pulled from five years of fermenting five-gallon batches in my garage maps almost perfectly to the conversations I’m having right now with colleagues, industry analysts, students, and organizations trying to figure out how to actually build AI capability. Not just buy it.
Let me show you what I mean. And let me add what seven years of hindsight gives me that 2019 Fred didn’t have.
Lesson 1: Results Are Only Good If You Can Repeat Them
What I wrote in 2019:
Sierra Nevada didn’t build their second brewery near their original one in California. They built it in Asheville, North Carolina, because the river water there has the same mineral composition as their source water out west. Same ingredients. Same process. Same result. Same customer expectation met, every single time.
Samuel Adams does the same thing. They engineer the water in their other locations to match the composition in Boston. They go to those lengths because their customers expect a consistent experience from every glass.
My point in 2019 was about brand consistency and customer experience. Meet the expectation every time, or you lose the relationship.
What it actually means in 2026:
This is the most underdiagnosed failure mode in AI adoption right now.
Companies celebrate the pilot. The proof of concept works beautifully. Leadership gets excited. The deck goes to the board or an executive team. Then they try to scale it, and nothing replicates. The model behaves differently in production than it did in the demo. The workflow that worked in one team breaks in another. Output quality degrades the moment another person runs the same process.
And everyone blames the technology.
The technology is not the problem. The problem is that nobody engineered for repeatability.
Sierra Nevada didn’t assume their second brewery would produce the same beer automatically. They studied the inputs, identified the variables, and intentionally controlled for them. Most organizations deploying AI skip this step entirely. They get a win, declare victory, and move on without ever documenting what produced the win in the first place.
Repeatable AI results require the same intentional engineering Sierra Nevada applied to water composition. Documented prompts. Standardized workflows. Defined quality benchmarks. A clear record of what input produced what output under what conditions.
Without that, every AI win is a one-time event. And one-time events don’t build organizational capability.
MIT’s NANDA initiative studied 300 public AI deployments and found that only 5% of AI pilots achieve meaningful business impact. S&P Global puts the abandonment rate even more bluntly: 42% of companies scrapped most of their AI initiatives in 2025, up from 17% the year before. Nearly half of all proof-of-concepts never make it to production.
That is not a technology problem. That is a repeatability problem.
Lesson 2: Experimentation Can Lead to Brilliance — or Disaster
What I wrote in 2019:
Home brewing taught me to document everything. Every change to a recipe, every adjustment to the process, every batch that came out wrong. Write it down, because you won’t remember six weeks from now when you crack the first bottle and try to figure out what happened.
ABT. Always Be Testing. I wrote that this was almost silly to call a lesson, because every marketer knows it. And yet even in the most mature marketing organizations, disciplined testing is not actually the norm. You have to document what works before you can experiment intelligently on top of it.
The frustrating thing about brewing, I noted, is that you wait weeks to find out if an experiment worked. Unlike digital marketing, where you can run an A/B test and have results by Thursday, brewing demands patience between hypothesis and data.
What it actually means in 2026:
AI collapsed that feedback loop to almost nothing.
You can test a prompt variation in thirty seconds. You can run a workflow experiment in an afternoon. You can iterate on an AI-assisted process faster than any A/B test I ever ran as a marketing leader.
That speed is the most powerful thing about working with AI tools right now. It is also, frankly, the most dangerous.
The faster you can experiment, the faster you can drift without ever building anything durable. I see this constantly. Teams are running AI experiments at a pace nobody could have imagined five years ago, and almost none of it is documented. What prompt produced that output? What model? What context was provided? What changed between the version that worked and the version that didn’t?
Nobody knows. Because nobody wrote it down.
The organizations building real AI capability treat every deployment like a brew log. What went in. What came out. What changed. What they’re trying next and why. Most AI experimentation inside companies today is completely undocumented, which means every win is accidental, every loss is unexplained, and institutional knowledge evaporates the moment the person who figured it out leaves the team.
Always Be Testing still applies. It just requires Always Be Documenting first.
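To make the brew-log idea concrete, here is a minimal sketch of what one logged AI experiment might look like in practice. This is illustrative only: the field names, the `RunLog` structure, and the JSONL file are my own invention, not a reference to any particular tool.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# One brew-log entry for an AI experiment: what went in, what came out,
# what changed, and what to try next. All field names are illustrative.
@dataclass
class RunLog:
    prompt: str          # the exact prompt used
    model: str           # which model (and version) ran it
    context: str         # what context or documents were provided
    output_summary: str  # what came out, in brief
    quality_ok: bool     # did the output meet the defined benchmark?
    next_change: str     # what to vary in the next run, and why
    timestamp: str = ""  # auto-filled at creation time if left blank

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_run(entry: RunLog, path: str = "ai_brewlog.jsonl") -> None:
    """Append one run to a JSONL log so wins stop being accidental."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```

A spreadsheet would work just as well. The point isn't the format; it's that every run records its inputs, its output, and the next hypothesis, so the knowledge survives the person who ran it.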
Lesson 3: You Can’t Buy Your Way to Competency
What I wrote in 2019:
You can spend $7,000 on a home brewing system. Pumps, pots, oxygenation tools, a full professional-grade setup. I know people who have done it. And the equipment doesn’t make you a better brewer if you don’t know the fundamentals.
I started with a basic kit. A five-gallon pot, a spoon, some buckets. After a couple of batches and a few complaints from my family about the smell inside the house, I invested in some new gear to brew outside. The gear helped. But the gear was only useful because I’d already put in the time to understand the process.
My point in 2019 was about organizational investment. The best CX technology stack in the world doesn’t produce great customer experiences if you haven’t invested in the people running it. Who’s your rockstar developer? What are they doing to mentor the junior team members? Where’s the organizational knowledge being built, not just stored?
What it actually means in 2026:
This one landed harder than any other when I re-read it.
Organizations are spending real money on AI. Enterprise licenses. Platform fees. Implementation consultants. Customization projects. I’m not saying those investments are wrong. I’m saying they are the $7,000 brewing system, and most of the people who bought them have never fermented anything.
The tool is not the capability. The tool sets the ceiling your capability can reach, and only if your team has built the foundation to get there.
AI literacy is the foundation. The ability to prompt effectively. To evaluate outputs critically. To understand where the technology is reliable and where it fails. To identify which workflows actually benefit from AI and which ones don’t. To know the difference between a good result and a confident-sounding wrong answer.
Without that foundation, you have expensive software and a team that doesn’t know how to use it. You have a $7,000 brewing system and no one who understands fermentation.
Pluralsight’s 2025 AI Skills Report found that 65% of organizations abandoned AI projects specifically because their people didn’t have the skills to execute them. Not because the technology failed. Because the humans weren’t equipped.
BCG found that only one in three employees say they’ve been properly trained on AI. McKinsey found that 80% of organizations know upskilling is the most effective lever for closing the skills gap — and only 28% are actually investing in it.
We know what works. We’re just not doing it. The investment that matters most right now is not the platform. It’s the people.
Lesson 4: Patience Is a Virtue, But Not at the Expense of…
The honest part:
Here’s where I have to come clean.
The original post ends here. Lesson four has a title and nothing else. The body is blank. I never finished writing it.
I have no idea what “not at the expense of” was supposed to say. My best guess is momentum. Or competitive position. Or the window of opportunity that closes while you’re waiting for certainty.
I’m going to finish it now. Seven years late.
What it actually means in 2026:
Patience in AI adoption is not a virtue. It is a liability dressed up as prudence.
I understand the instinct. AI is moving fast, the landscape is confusing, the ROI frameworks aren’t clean, and nobody wants to make a major organizational bet on something that might look completely different in eighteen months. I get it. I’ve had those conversations. I’ve felt that uncertainty myself.
But the organizations that waited for AI to “mature” before experimenting lost two years of institutional learning they cannot buy back. The learning curve is not theoretical. It’s real, and it takes time to climb. The teams that started building in 2022 and 2023, even imperfectly, even with tools that have since been superseded, developed judgment that the late movers are still trying to acquire.
The ones waiting now for a clear ROI framework before committing are making the same mistake with a shorter runway.
Only 6% of organizations currently qualify as AI high performers — meaning they’re seeing real, measurable impact on the bottom line. That means 94% of companies are still on the wrong side of the divide. The window is open. But it won’t stay that way indefinitely.
Patience is a virtue, but not at the expense of momentum.
The feedback loop in AI is faster than anything we’ve worked with before. The cost of a failed experiment is low. The cost of not experimenting, of waiting on the sidelines while your team watches from the edge of the pool, is measured in the capability gap between your organization and the ones that jumped in.
Jump in.
What This Is Actually About
I want to be clear about something: this isn’t a post about being prescient. I wasn’t. I wrote about brewing beer because I liked brewing beer, and I drew some marketing lessons because that’s what I did.
What I’m pointing at is something more useful than hindsight.
The fundamentals don’t change. Repeatability. Disciplined experimentation. Investing in people over tools. Not confusing patience with avoidance. These were the right principles for building great customer experiences in 2019. They are the right framework for building AI capability in 2026.
The technology changed. Human organizational behavior didn’t.
The companies struggling with AI adoption right now are not struggling because the technology is too hard. They’re struggling because they haven’t applied the same operational discipline to AI that any good brewer applies to a batch of beer. Document the inputs. Control the variables. Invest in the craft before you invest in the equipment. And don’t wait so long for perfect conditions that you miss the window entirely.
I’ve been thinking about this stuff for longer than I realized. Every week I go down rabbit holes experimenting with my own ideas, and every week I become more convinced that the gap isn’t technical.
It’s operational. It’s cultural. And it’s closeable.
Go back and read something you wrote five years ago. You might be surprised what you already knew.
