Google's latest algorithm update, allowing for AI-generated content to score well, leaves website owners fuming as they watch visitor numbers plummet.
In the grand spectacle of the internet, Google plays the role of both the puppet master and the punching bag. Their latest "helpful content" update, meant to help us untangle the digital web, is causing quite a commotion. Instead of cheers, it's met with a resounding chorus of jeers.
Imagine a world where AI-generated content reigns supreme, where thought-provoking articles are outshone by 500 words of machine gibberish. This dystopian nightmare is the reality many website owners now face. According to posters on a dedicated webmaster forum, their visitor numbers are down between 15% and 30%.
Google's tinkering with the Search guidelines is the talk of the town. They dropped the "written by people, for people" mantra for "helpful content written for people." This shift in language has website owners breaking a sweat as they fear an impending AI content tsunami.
Google says it wants to curb content mills, those sites that regurgitate old information to boost rankings. However, the irony is thicker than a dictionary in this situation: Google is promoting AI content while aiming to penalize content that's AI-like. John Mueller, Senior Search Analyst and Search Relations team lead at Google, had this to say: "I think you should focus on unique, compelling, high-quality content that adds to the web," Mueller wrote on X. "As you have it now, it looks like a compilation of ChatGPT output on topics that tons of sites have already covered." Fair enough, that makes sense. In August, he also said, "By definition (I'm simplifying), if you're using AI to write your content, it's going to be rehashed from other sites."
Wait. So you can use AI to create "unique, compelling, high-quality content that adds to the web," but "by definition (I'm simplifying), if you're using AI to write your content, it's going to be rehashed from other sites"?
I’m scratching my head to figure out how rehashed content can be unique. I suppose, taken literally, it can be. But then we’d be playing with semantics.
Another conundrum is distinguishing between AI-generated and human-crafted content. Google has made it clear that AI content isn't a no-go; it just has to meet the company's quality standards. However, in the fast-paced world of content generation, it's a battle royale to identify who's who, and the machines are getting smarter on both sides.
As AI content creation gets easier and faster, we're on the brink of an AI content flood. Google's task of distinguishing quality among machine-generated posts will be as Herculean as cleaning the Augean stables. This algorithmic epic is only beginning.
As you get ready to click away to a new story, hopefully here, I'll leave you with one terrifying piece of information for context. In January, CNN reported that CNET had "issued corrections on a number of articles, including some that it described as 'substantial,' after using an artificial intelligence-powered tool to help write dozens of stories."
Quoting directly from the CNN article:
"Guglielmo said CNET used an 'internally designed AI engine,' not ChatGPT, to help write 77 published stories since November. She said this amounted to about 1% of the total content published on CNET during the same period, and was done as part of a 'test' project for the CNET Money team 'to help editors create a set of basic explainers around financial services topics.'
Some headlines from stories written using the AI tool include 'Does a Home Equity Loan Affect Private Mortgage Insurance?' and 'How to Close A Bank Account.'
"'Editors generated the outlines for the stories first, then expanded, added to and edited the AI drafts before publishing,' Guglielmo wrote. 'After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit.'"
Let's pause right there. This is CNET, a large, popular news outlet. CNET was using AI to write stories – 1% of its output over the course of a couple of months. The articles were part of a "test," and they were edited by editors – presumably human ones – before going live on CNET's site.
And they contained errors.
AI content is here; the genie is out of the bottle. But what's truly worrying is that a serious news site like CNET couldn't use AI to produce error-free content even after editorial review. What's also worrying, given the above, is that if the complaints of those angry site owners are to be believed, AI-written content is beating out human-written content in search results.
As we've said before, we at Finance Magnates welcome our robot overlords.