Over the past year, I have spent more time than ever auditing sites that collapsed after trying to scale with AI. The pattern has become so common that our team started using a term that captures what is really happening: AI waste.
It is the build-up of low-value, low-differentiation AI output that damages search performance, erodes trust and contributes to a digital ecosystem that is becoming more polluted by the month.
In March 2025, I published an analysis in Search Engine Land that documented an 80 percent ranking collapse caused by aggressive AI scaling.
At the time, I thought it would be a wake-up call. Instead, it has become a template for what we have continued to see throughout 2025. Different businesses. Different industries. Similar mistakes.
I work with AI every day. I also write about using it responsibly and have given two talks on the subject, at BrightonSEO and Women in Technical SEO. I am definitely not anti-AI. Even this piece started with me uploading my previous work and getting a first draft I could edit more quickly.
AI is an incredible accelerant for research, analysis and structured content tasks. But over the last year I have watched something that worries me: the rise of what we at Vixen have started to call ‘AI waste’.
How we use AI tools affects every aspect of our lives, professional and personal. AI waste isn’t just about the risk (and impact) of using AI at scale within a business; it’s also about the impact this has on the world we live in: sustainability, ethics and the quality of our digital ecosystem as a whole. As someone I admire immensely in our industry puts it: ‘churning out AI slop at scale is peeing in the community pool’.
What AI Waste Looks Like in Practice
So, when I talk about AI waste, I am not only talking about weak content. I am talking about the wider consequences that come with poor AI use.
In the audits I have completed this year and during my own journey with Vixen and AI, some of the failures have looked like this:
- Throwing LLMs at everything, kitchen sink included, when there is a perfectly suitable alternative
- Using general purpose models for specialised tasks
- More than 90 percent duplication across generated content (see the sketch after this list for one way to measure this)
- Large volumes of text that contribute nothing new to the topic
- A deluge of FAQ and listicle type content created by AI
- Failing to show E-E-A-T even though I personally know the company has the receipts
- Security issues, such as PII visible in AI-powered tools
- Workflows that neither batch requests nor clean their data
- Engagement metrics that appear healthy but do not convert
- High authority domains with almost no branded search interest
- Sudden and severe ranking loss after a Core Update
- Running excessive prompt iterations instead of improving inputs
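To make the duplication point measurable, here is a minimal sketch of the kind of check anyone can run before publishing a batch of generated pages. It uses only Python’s standard library; the URLs, sample text and 90 percent threshold are illustrative, not our actual audit tooling.

```python
# Minimal sketch: flag near-duplicate pages in a batch of generated content.
# Standard library only; the sample pages and threshold are illustrative.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Rough text overlap between two documents, from 0.0 to 1.0."""
    return SequenceMatcher(None, a, b).ratio()

def flag_duplicates(pages: dict[str, str], threshold: float = 0.9):
    """Yield page pairs whose overlap meets or exceeds the threshold."""
    for (url_a, text_a), (url_b, text_b) in combinations(pages.items(), 2):
        ratio = similarity(text_a, text_b)
        if ratio >= threshold:
            yield url_a, url_b, ratio

pages = {
    "/guide-to-widgets": "Widgets are essential tools that help businesses grow...",
    "/guide-to-gadgets": "Gadgets are essential tools that help businesses grow...",
}
for url_a, url_b, ratio in flag_duplicates(pages):
    print(f"{url_a} vs {url_b}: {ratio:.0%} overlap")
```

SequenceMatcher gets slow on long documents; in practice you would reach for shingling or embeddings, but the principle is the same: quantify overlap before you publish, not after the rankings drop.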
Wider impacts on the world we live in are well documented by now. Here’s a recent example to make the point: a peer-reviewed study published in Nature Scientific Reports provides the clearest evidence so far. Failed or repeated AI attempts can use between 8 and 53 times more energy than successful ones. A single prompt looks insignificant on its own. But multiply that by billions of queries per day and the waste becomes enormous.
At the same time, MIT research from August 2025 found that 95 percent of generative AI pilots fail to deliver measurable returns, with the biggest chunk of budget going to marketing and sales when in reality the gains are more evident in back-office automation.
Google is now serving responses in conversational formats via AI Overviews and AI Mode, right there in the SERPs, and my opinion here is still very divided. From a user perspective, there is something genuinely attractive about reducing the cognitive load that comes with traditional blue links. That is both a good and a bad thing, and not just because of the sustainability dimension. People trust those answers more because they arrive in a conversational format, which sits uneasily with the fact that LLMs will never be completely accurate.
AI tools are popping up (and beginning to fail) left, right and centre, and The Economist welcomes us to the AI trough of disillusionment. All the while, we are collectively contributing to the problem, either by failing repeatedly or by succeeding, but at what cost?
Why Frameworks Matter More Than Tools
This year, Vixen Digital became one of the first UK agencies to achieve ISO 42001 for Responsible AI Management, alongside ISO 27001 for Information Security. It gives us a formal structure for how we evaluate, deploy and monitor AI use across our work.
But you do not need an ISO certificate to reduce AI waste. You need a framework.
A framework forces conversations before damage is done. It shifts your process from reactive to intentional. It reduces unnecessary iterations. It keeps teams honest about whether AI is the right tool for the task.
The core principles I recommend are simple:
- Invest in training so you can understand what LLMs are – my personal favourites are Lazarina Stoy’s ML for SEO courses and Britney Muller’s Actionable AI for Marketers.
- Assess whether AI is necessary for the task. A couple of great resources can help here: Lazarina Stoy’s excellent problem formulation framework or Elias Dabbas’ LLM app framework
- Use specialised models where possible. For example, ChatGPT is not the best option for tasks like keyword extraction or topic classification. You are better off using KeyBERT or BERTopic! (There is a short KeyBERT sketch after this list.)
- Continuously search for tools that can optimise the process, not just hand you results. For example, DSPy is now on my list to test, since it helps optimise prompts, and that is where a lot of quality issues arise
- Evaluate models for environmental efficiency. None will be 100 percent bulletproof here, but at least try to use a more efficient model (the CodeCarbon sketch after this list shows one way to measure your own footprint)
- Build clear oversight and QA checkpoints
- Run controlled tests instead of large batches
- Define thresholds at which an AI attempt should be abandoned (see the retry-budget sketch after this list)
- Keep transparency policies active and visible
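To make the specialised-model point concrete, here is a minimal KeyBERT sketch. It assumes `pip install keybert`, and the sample text is a placeholder; nothing here is specific to our stack.

```python
# Minimal sketch: keyword extraction with a small, purpose-built model
# instead of a general-purpose LLM. Assumes `pip install keybert`.
from keybert import KeyBERT

kw_model = KeyBERT()  # loads a compact sentence-transformer under the hood
doc = (
    "AI waste is the build-up of low-value, low-differentiation AI output "
    "that damages search performance and erodes trust."
)
keywords = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 2), top_n=5)
for phrase, score in keywords:
    print(f"{phrase}: {score:.2f}")
```

A model like this runs locally, costs a fraction of an LLM call and gives repeatable output, which is exactly what you want for classification-style tasks.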
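On environmental efficiency, measuring beats guessing. Here is a hedged sketch using CodeCarbon, an open-source library that estimates the emissions of a block of code; the workload in the middle is a placeholder for whatever model call you want to profile.

```python
# Minimal sketch: estimate the carbon footprint of an AI workload.
# Assumes `pip install codecarbon`; the workload below is a placeholder.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run your model call or batch job here ...
emissions_kg = tracker.stop()  # estimated emissions in kg CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```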
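And for abandonment thresholds, here is a minimal retry-budget sketch. `generate` and `passes_qa` are hypothetical stand-ins for a real model call and QA checkpoint, and the budget of three attempts is illustrative.

```python
# Minimal sketch: cap AI attempts with an explicit abandonment threshold
# instead of iterating prompts indefinitely. `generate` and `passes_qa`
# are hypothetical stand-ins for a real model call and QA checkpoint.
MAX_ATTEMPTS = 3  # illustrative; set this from your own cost and quality data

def generate(prompt: str) -> str:
    raise NotImplementedError("call your model here")

def passes_qa(draft: str) -> bool:
    raise NotImplementedError("apply your QA checklist here")

def generate_with_budget(prompt: str) -> str | None:
    for _ in range(MAX_ATTEMPTS):
        draft = generate(prompt)
        if passes_qa(draft):
            return draft
    # Budget exhausted: escalate to a human or rework the inputs,
    # rather than burning more energy on further re-prompts.
    return None
```

Returning None forces the workflow to make the abandonment decision explicit, which is the whole point of defining the threshold up front.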
The Way Forward
AI is not the problem. AI used without intention is the problem.
AI waste is what happens when we chase scale without strategy and convenience without consideration. It hurts rankings. It hurts trust. And now, it also hurts the planet.
If businesses learn anything from the past year, I hope it is this. You can leverage AI at speed and still protect quality. You can scale your content pipeline and still maintain originality. You can experiment with new tools and still stay accountable.
It starts with a framework. It starts with asking the right questions before hitting generate. And it starts with accepting that sometimes the most responsible decision is to use less AI, not more.

