Most AI Critics Are Wrong About Slop
Artificial intelligence will make everything better, faster, and cheaper
Hi there,
Many AI critics get the technology's risks completely wrong.
My theory? AI, used correctly, won't create slop, poor quality, or security breaches.
Instead, AI will make things better, faster, cheaper.
The main challenge is skilling up fast enough to actually take advantage of these new tools in whatever you do. Safety-critical domains will need heavier guardrails, sure, but for most work the current AI adoption rate is just too slow.
So what does "better" even mean here?
An outcome is better if it gives customers more value - pretty straightforward. It's also better when it has more features, better testing, solid proof of compliance, or fewer negative impacts on people and systems.
With AI, developers should build better products (though I've seen plenty who don't use it well yet). Engineers can simulate more edge cases and tackle complex maths faster. Consultants can deliver work that's much closer to what clients actually need.
A cracking example came from Anthropic's Claude Code team last month. They shared on a Latent Space podcast how AI helped fix small bugs and ship small feature requests almost entirely on its own.
You know the ones - those "nice-to-haves" sitting in every backlog, the quality-of-life tweaks ignored because nobody has time. Every piece of software has hundreds of these. Imagine squashing all those small bugs quickly once teams actually let AI handle the tasks it's suited to.
What about faster? What does that really mean?
Faster means customers get outcomes sooner - duh. But speed isn't just idea-to-launch time... it's about fixing fundamentally slow processes. "Time-to-value" is really the key metric here. Incumbents face huge AI disruption from agile challengers who get this.
With a blank editor and a prompt, you can quickly explore ideas and solve problems early if you're skilled. But man, those legacy code bases, rigid processes, and corporate bureaucracy mean endless approvals before even starting anything useful.
One reason AI actually makes things faster is that agents can work on more tasks in parallel - you can fire off a deep research query, keep working, and come back to your starting point later for further investigation. I've been doing this with my project planning lately, and it's wild how much ground you can cover.
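For the programmers reading, the pattern is just ordinary concurrency. Here's a minimal Python sketch of that fire-off-and-come-back idea; deep_research() and outline_plan() are hypothetical stand-ins for a long-running agent call and your own planning work, not any real API.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def deep_research(question: str) -> str:
    # Stand-in for a slow agent call, e.g. a deep research query.
    time.sleep(2)
    return f"Research notes for: {question}"

def outline_plan() -> list[str]:
    # Stand-in for the planning work you keep doing in the meantime.
    return ["scope", "milestones", "risks"]

with ThreadPoolExecutor() as pool:
    # Kick off the slow research in the background...
    research = pool.submit(deep_research, "Compare vendor options for the new platform")
    # ...keep working on the plan while it runs...
    plan = outline_plan()
    # ...then fold the research back in once it's ready.
    plan.append(research.result())
    print(plan)
```

The same idea scales to several background agents at once; the point is the slow work no longer blocks you.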
And cheaper?
Cheaper means paying less or getting much more for the same money. Think of a skilled worker at $100 an hour. For $4,000 a week, you get 40 hours of work, maybe one or two project deliverables. That's typical human output.
Now, use that $4,000 for AI instead. At $15 per million output tokens, it buys about 266 million tokens. If a big AI task, like handling huge data or writing a long document, needs two million tokens, $4,000 funds about 133 such AI jobs. This shows massive potential for information processing at an unheard-of scale.
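If you want to sanity-check that arithmetic, here it is as a few lines of Python. The price and task size are the assumptions stated above, not quotes from any provider.

```python
budget = 4_000               # weekly budget in dollars
price_per_million = 15       # assumed price in dollars per million output tokens
tokens_per_job = 2_000_000   # assumed size of one big AI task

total_tokens = budget / price_per_million * 1_000_000
jobs = total_tokens / tokens_per_job

print(f"{total_tokens:,.0f} tokens, enough for about {jobs:.0f} large AI jobs")
# 266,666,667 tokens, enough for about 133 large AI jobs
```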
AI transforms how we solve problems. People still guide these AI tasks, using their skills for strategy and quality checks. This tech gives huge leverage - firms can handle big data or create foundational work far more efficiently. The same money buys way more output, or makes entirely new large projects viable.
This is one reason why I laugh when people complain about the costs of AI usage via APIs. Even if you're spending $100 a day on AI tokens to vibe code via OpenRouter or Requesty (and spending that much is genuinely hard to do, believe me I've tried), that is still so much cheaper than going to market to hire a skilled developer.
Why are so many getting this wrong?
Too many AI critics misjudge AI simply because they're not using the tools right. I'm not dismissing real issues like hallucinations or errors - new tech always has challenges. But much anti-AI talk is just misunderstanding. Many critics are still running outdated 2023 arguments against tools that have improved dramatically since then.
Few critics share their actual prompts, models, or context for bad results. That's why I'm sceptical of much AI pushback. If you dislike a ChatGPT output, share the chat link! It's like academia's replicability crisis: if I can't replicate the critique, is that an acceptable standard for judging new tech?
A key lesson in prompting: break big problems into small ones. Give the model enough context - upload files and chat back and forth so more of the context window is filled with relevant material.
This gives models more than their training data, helping them deliver your desired outcome. A prompt window isn't Google, nor a guaranteed-output machine. "Garbage in, garbage out" still applies: clean data and clear thinking lead to good AI output.
Tooling like MCP servers, web search, and integrations with Google Drive or other file systems further extends a model's ability to pull in the extra context it needs to deliver the right value to the end user.
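To make the "small problems, rich context" idea concrete, here's a rough sketch using the OpenAI Python client. The model name, background text, and subtasks are placeholders, and a real workflow would add file uploads, tools, and error handling.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Context the model can't get from its training data alone.
background = "(paste your project brief, meeting notes, or data extracts here)"

# One big job broken into small, checkable steps.
subtasks = [
    "Summarise the key constraints in the brief.",
    "List the three riskiest assumptions.",
    "Draft a one-page plan that addresses those risks.",
]

notes = ""  # carry earlier answers forward as extra context for later steps
for task in subtasks:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a careful project analyst."},
            {
                "role": "user",
                "content": f"Brief:\n{background}\n\nNotes so far:\n{notes}\n\nTask: {task}",
            },
        ],
    )
    answer = response.choices[0].message.content
    notes += f"\n\n## {task}\n{answer}"

print(notes)
```

Each step gets the original brief plus everything produced so far, which is exactly the "clean data in, good output out" point above.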
What about the downsides?
Yes, AI has risks. So does crossing the road! I'm tired of AI risk catastrophising. It's mostly tech risks firms should already manage: security, quality, testing, compliance, ethics.
Focusing on "AI slop" misses a bigger point about poor AI adoption. AI slop is just one part of a bigger economic picture in our competitive world.
The real downside: slow AI adopters will be shocked when competitors deliver better, faster, cheaper results.
This impacts workers too. Remaining jobs will need higher skills to justify human time over AI tokens. Many jobs will become unjustifiable because human output will cost an order of magnitude more than solving the same problems with tokens.
Real disruption is just around the corner, and few leaders at the political or corporate level seem to be across it to the extent they should be, given the enormous societal impact it will have.
Have a nice weekend,
Brennan
There was always slop and there will always be slop.
Sturgeon's Law: 90% of everything is crap. This was true without AI and remains true with AI.
The Red Queen effect: we must keep running to stay in the same place. Yesterday's cheap, safe, and fast is tomorrow's expensive, unsafe, and slow.
Slop will never go away. Like change, entropy, and taxes.