
Every time I use AI (mostly ChatGPT and Copilot lately), I find it tells me things that are not factually true. When I challenge it to give a source, it lies about the source! It once gave me a link that led to a completely different post, and when I pointed out that it was a lie, it didn’t respond.
Here is a New Scientist article that says AI hallucinations are getting worse – and they’re here to stay. Apparently it is a byproduct of how the models are made.
So what does this mean? It means that if we can’t trust AI output, a human has to check it. Whatever productivity we gain from having the AI sort something out is perhaps marginal, and the time spent untangling its mistakes may cost even more than we saved.
Some technological improvements are real and physical, like the seat belt. Others are logical, like the computer, which helped end the war. Other developments, like AI, seem to be a wash. Sure, we have lots of new images and trashy stories, but are our lives really better because of it?