You can read an excellent article here about how AI worsens racial bias in banking.
There is a lot of hidden bias and assumption in banking. I know because I used to work in banking, and I saw firsthand how decisions that could be considered racist were baked into everyday assumptions. For example, the bank wouldn't lend to anyone below a certain credit score. When I asked why and made the case for broader criteria, I was told that was just the way it was. A year later, the bank lowered its threshold, and my request was effectively granted. Not because of me, I assure you.
AI can be an excuse for whatever BS a company wants to get away with. AI is the new consultant: if a consultant causes a problem, the company just gets rid of him. It was the consultant's fault, not the company's or the bank's. As long as there is plausible deniability, companies can do whatever they want.
To me, AI is becoming a kind of puppet: something that can be anything, and which ultimately means nothing. If the AI chatbot makes a mistake, well, that was a rogue employee, or bad data, or a hacker, or any number of other excuses. The less human touch something has, the less clear the chain of responsibility. Should we blame the third-party AI chat company?