Seems like there is no end to bad news for ChatGPT. IBM researchers say they easily tricked it into assisting with hacking.
AI capabilities are often sold with security as an “added value” of what they can do. However, if these systems can be subverted with mere words, subverting them through software will be even easier. It makes you wonder whether AI-enhanced security tools introduce additional vulnerabilities of their own.
The evidence for the value of AI is not convincing at the moment. I like that AI can create photos and other content, but there are also concerning aspects. I don’t like the deepfakes that can be created and used for propaganda. I don’t like that people are passing off AI output as their own research and ability. I don’t like the privacy concerns, like the current practice of scraping the internet and copying copyrighted material. I’m concerned about the loss of jobs, like the actors and writers, and about getting entertainment drivel even worse than what we have now. We have seen that AI exhibits bias and discrimination, and several systems have had to be shut down because they turned racist or nasty. I don’t like the lack of safety, like the AI that suggested shoppers combine chemicals that would produce a deadly gas. I think we need to go slower and think through these implications before we rush into the arms of Skynet.
Of course, I would love an AI that is safe and helpful. It is fun to use when it can do creative things, but I don’t see anything genuinely creative about AI yet. It is derivative: it hasn’t created anything that wasn’t first a human conception it copied from an example. We don’t need more copying in the world; we need fresh perspectives and thought. We need AI guided by values that can’t be corrupted by utilitarianism or cold logic. We need Asimov’s Three Laws of Robotics, and then some more rules. We need to consider whether our goal is to enhance the human experience or to extinguish it. If the majority of people don’t have jobs, our society has to change, and I don’t see that happening soon.