You can read more in this article titled "ChatGPT’s odds of getting code questions correct are worse than a coin flip."
To summarize, it says:

ChatGPT, OpenAI’s fabulating chatbot, produces wrong answers to software programming questions more than half the time, according to a study from Purdue University. That said, the bot was convincing enough to fool a third of participants.
I shared earlier that ChatGPT was also getting basic math questions wrong. Its error rate has increased over time, which means ChatGPT is getting less useful, not more. It's a kind of reverse intelligence: it gets dumber as it gets older.
If it only knows a limited set of facts, and it can't reliably get those facts right, what use is it? I see it being popular with people who are too lazy to write or who can't write. It is a way to pretend to be smarter than they are and to rely on the CliffsNotes of being human. If we surrender our human abilities to a machine, what ability will we have left to be critical thinkers?