
ChatGPT does very well with summarizing things. However, one of its issues is that writing prompts that get it to summarize the way you want can take some practice; you have to learn to think like ChatGPT.
Today, for example, I had a web page with a lot of detailed information and I needed to convert it into a CSV. The first two attempts didn't work, and the results were wildly different from each other. Finally I gave up and just said "convert everything into a csv," and it did it correctly. The only time it worked was when I gave it basically no directions.
Many people have said that prompt engineering requires you to be detailed, but I have tried detailed prompts and gotten worse results. If getting complicated doesn't work, try simpler prompts. I think there is a lag between how fast the AI models are being updated and how little the AI companies communicate about how prompt engineering should actually work.
With that said, I tried Copilot Prompt analysis on some prompts that I had written. I didn't find that it helped me get more usable output at all. Maybe I am an edge case: I work in IT and need information in a very specific format, usually CSV or markdown, while most people may not care how it is output. For my clients, the field names are just as important as the data, so everything has to be double-checked, and I randomly fact-check things too.
Still, I don't trust ChatGPT/Copilot output. You need to double-check everything. It likes to sound confident and make things up.