The article goes into much more detail, but I will summarize it below.
First, there are two takeaways from the article:
“It would be wrong to conclude that ‘AI content does not work’ from such an experiment. However, this demonstrated to me that at that particular time, Google:

- Was not classifying unsupervised GPT-3 content as ‘quality.’
- Could detect and remove such results with a raft of other signals.”
What this means seems clear to me: simple answers will no longer earn an easy SERP listing. Now you have to provide significant value by giving the facts context and supplying the supporting reasons a search engine user should trust your content. Google appears to be doing pretty well at keeping spam at bay; the article described two updates that caused this AI content to be deindexed and suppressed.
I have another takeaway from this. We read from another source that GPT-4 and GPT-3 have different levels of trustworthiness, and that those levels change over time. I think there will eventually be some kind of certification showing that an AI passes a daily test of factual accuracy, and that certification should be plainly visible. With AI there should always be a way to challenge its results and to stop whatever process would follow from its faulty analysis. I don’t want HAL arguing with me about the pod bay doors.
The other interesting thing here is that the quality bar keeps rising. In the past it was enough just to have a site on the Internet; now Google and other gatekeepers demand that you become even more valuable as time passes. I wonder whether websites can keep innovating to stay ahead of Google’s AI and remain relevant sources of primary information. Can experts stand out when anyone can appear to be an expert just by using AI?
All these questions and more will be answered if the AI is right.