I’ve been concerned for some time about the future of work in a world increasingly dominated by AI, specifically about the balance between using AI for efficiency in knowledge work and the actual value being delivered. The result of that imbalance is what is now being called AI slop. If you’ve ever read a GenAI-generated memo that sounds like it was written by a robot with a corporate thesaurus addiction, you’ve experienced AI slop.
Sadly, it is everywhere. Generic blog posts. Unhelpful summaries. LinkedIn comments that read like ChatGPT tried to be inspirational but forgot the context. AI is writing more, and if we’re not careful, the result is that we’re going to be thinking less.
This isn’t just a content problem. It’s a thinking problem.
The Rise of AI in Knowledge Work
Marketers, analysts, consultants, students — knowledge workers across the board are integrating AI into their workflows. I have, too. I have several custom GPTs that help me do my work. And that’s not necessarily a bad thing. AI speeds up drafting, summarizes information, and automates tedious tasks. Nothing is perfect. No matter how I use AI in my work, I’m still responsible for the output, and I always review the work before delivering it. AI and the use of AI are not going away. Pandora’s box has been opened and is not closing.
But there’s a difference between using AI to accelerate your thinking and letting it replace your thinking. That’s where my growing concern lies. When outputs are taken at face value, or used without critical review, we end up with automation posing as insight. That’s AI slop in action. And that’s content that looks credible but crumbles under scrutiny, ultimately risking your credibility and reputation.
When Polished Meets Problematic: Deloitte’s Costly AI Misstep
Deloitte recently made headlines when it admitted that AI was used in a client deliverable and inaccuracies were missed. Until now, I think we’ve all treated the risks of AI slop as mostly theoretical. Now we have evidence that it’s showing up in high-stakes environments and in deliverables from one of the most recognized brands in consulting.
Deloitte Australia recently had to repay a portion of a government contract after submitting a report riddled with fabricated sources and bogus legal citations. The report, which analyzed IT systems for the country’s welfare compliance framework, included references that didn’t exist and court cases that were never filed. Oops! I’d hate to be the team that delivered that report.
Deloitte later confirmed that generative AI was used during the drafting process, which isn’t necessarily bad. And while the tool produced well-formatted output, it also introduced “hallucinated” content that made it into the final report. Government officials flagged it. Media outlets piled on. And the credibility hit was real.
“The AI-generated report was well-organized but full of inaccuracies. Ultimately, Deloitte had to refund part of its AU$440,000 contract.”
— Fortune
Where was the human in the loop?
The Myth of the AI Shortcut
I use AI tools, custom GPTs, and third-party tools to help me draft content, strategies, and more, for work and for personal use. There are times when the “AI shortcut” just isn’t there. I spend more time rethinking what I’m trying to say, refactoring prompts, and reviewing the output. AI is not always a shortcut to a final output at the quality level I desire. That’s where the critical thinking comes in. I want a better output, and there are times I find it hard to articulate in a prompt or a custom instruction what is already in my head from years of experience. So for me, AI is a tool, but not always a shortcut. Others see AI as the “easy button,” a shortcut to a goal or a deadline.
The Deloitte situation should serve as a cautionary tale. Even the biggest names in the business, firms that rolled out AI solutions first and are staffed with seasoned professionals, can fall into the trap of taking output at face value. When critical thinking, or even basic human oversight, is removed from the process, the results may look clean but lack depth, accuracy, and accountability. That’s a dangerous place to be.
I can guarantee that companies all over the world are seeing AI slop show up in:
- Campaign briefs built entirely from AI-generated fluff
- Strategy decks that sound insightful but say nothing
- Research reports citing non-existent data points
And while these don’t always make headlines, they erode trust just the same. Your clients will notice. Your customers will see it. Years of built-up trust can be thrown away at a moment’s notice when your team is not using AI responsibly.
Future Knowledge Workers: Gen Z Enters the Chat
OK, so today’s knowledge workers, marketers, and others are figuring out how to use AI in their work, and companies are still figuring out their AI policies and literacy programs. Here’s my larger concern: college students, our next wave of knowledge workers, are more comfortable using AI tools than most senior executives. But comfort isn’t the same as competence, and it doesn’t guarantee they understand the real-world impact of using AI for work product without thinking critically about it. We have a gap: our future knowledge workers are not being prepared to use AI in the real world.
“Critical thinking, problem solving, and adaptability are emerging as the most in-demand cognitive skills for Gen Z.”
— McKinsey & Company
According to McKinsey, as AI tools become embedded in academic and professional life, students must learn how to question, refine, and contextualize what those tools produce — not just prompt and copy-paste.
Otherwise, we’re setting them up to become prompt jockeys, not strategic thinkers.
Therefore, I really hope universities and academic programs, even down to high school and grade school, can adjust to not just teaching students to use AI responsibly, but also educating them in core critical thinking skills and in articulating what they are thinking in written form. This will be a must-have skill for making better use of AI in the future. It will also be a critical skill for separating signal from noise in the abundance of content that AI will continue to create.
AI Isn’t the Problem. Thoughtless Use Is.
Generative AI gives us speed. It gives us structure. But it doesn’t give us intent, discernment, or values.
Gartner’s research backs this up: nearly 47% of digital workers struggle to find the information they need, even with AI-enhanced tools at their disposal. I can personally speak to this: I have connected tools like Microsoft Copilot Studio (paid version) and ChatGPT to knowledge infrastructures like SharePoint and Google Drive, and I still can’t find what I’m looking for. What is supposed to be a “simple task” ends up taking more time to get the correct information.
It’s not just about access to tools — it’s about knowing what questions to ask, what answers to trust, and when something looks too good to be true.
Five Ways to Reinforce Critical Thinking in an AI World
Easy to say, not easy to do. No matter how much we want to trust our AI tools, we have to continue to bring critical thinking into the mix. Here are five ways to reinforce our habits when working with AI tools.
- Challenge the output — If something seems perfect, ask why. Dig deeper.
- Evaluate the source — Can the logic be traced? Are the citations real?
- Push for originality — Reward thought leadership, not volume.
- Create room for disagreement — If AI says X, what does your team think? Why?
- Model smart AI use — Treat it like a junior analyst. Capable, but in need of supervision.
The biggest theme here is Human In The Loop.
My Final Thoughts
AI isn’t replacing critical thinking. But it’s making it optional. And that’s the real risk.
In a world filled with AI slop, polished nonsense with no depth, the people who pause, ask better questions, and think a layer deeper will stand out. Can you stand up and justify your work product? Can you use AI to help you craft a better output vs. just taking it at face value? Can you shape your inputs to get better outputs?
Those are the people companies will pay a premium for.