Isn’t this disturbing? Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
It turns out the dataset contained both suspected and confirmed images of child sexual abuse. If companies are not screening their datasets, what does that say about their processes and ethics?
Imagine asking for an AI image and having it create something illegal. In our haste to use AI, and in the greed of decision-makers, we haven't put proper ethical boundaries on its use. We always make speed the priority, not ethics. How hard would it have been to have people screen the datasets to make sure they were fit for purpose? Now we have to question the datasets and whether we are truly training AI to do what we want.
Right now, I see AI being trained without regard for accuracy or ethics. It is crucial that we carefully design an intelligence that may one day supersede us. If we build it on corruption, what kind of output do we expect it to have? Capitalism is not a good way to ensure ethical behavior. It makes the very old idea of philosopher kings more appealing: rather than being guided by self-interest and special lobbies, we need someone enlightened who can show us the way. Perhaps the republic is an idea that has failed.
More importantly, now that we know this, can we at least scrub all the datasets and have a third party verify that they are what they claim to be? Or is child sexual abuse just the cost of doing business, a problem only when it embarrasses the companies involved? Is that family values? Is that a self-regulated market that doesn't need government intervention? Is that the free market regulating itself through competition? The formation and concentration of power creating these monopolies doesn't look self-regulated to me.