
No one can make everyone happy all of the time. Neither can AI. Here are three examples of clients who were unhappy when I used AI.
Client 1: I had a successful initial interview in which the client said he was impressed with my knowledge. After the second interview, he asked me to write up a statement of work describing how I planned to fix one of the issues they were facing. I did that using ChatGPT. The recruiter told me the client was unimpressed that I had used ChatGPT, and they were no longer interested in me. Ok then.
Client 2: During the engagement, they asked me lots of questions about what was possible in different scenarios. Using ChatGPT, I wrote up 14 different papers totaling 75 pages. The client said it was too much to read, but that was the consequence of all the different plans they had asked for. I made the client happy in the end by not writing another paper and instead answering in very simple language, in less than a paragraph. People ask questions, but they don't always want the answers.
Client 3: During the engagement, I used Copilot to prepare some of the answers they wanted. The feedback was that AI was something they could use themselves; they wanted to hear things in my own words and from my point of view. Ok then. I would still do my research with AI, but keep everything very brief and in my own language. Within the hour of being told this, the client shared an AI-generated excerpt with me, and I had to laugh. I'm leaving AI to the professionals.
What can we conclude from these experiences? I think we can conclude that people don't want to read all the details behind the answers to their questions; what they want is the least amount of information, delivered as a summary. I think we can also conclude that even when you are told not to use AI, just as we once leaned on Google search results, the appearance that you are doing everything yourself is better than the reality. The perception has to be that the human is essential, even when the real answer comes from a Google or AI search.
Got it?