ChatGPT answers 52% of programming questions incorrectly

The popular AI service ChatGPT answers 52% of programming questions incorrectly.

That is the finding of a study by Purdue University researchers, recently presented at the Computer-Human Interaction (CHI) conference.

The researchers analyzed ChatGPT's answers to 517 questions from Stack Overflow.

"We found that 52% of ChatGPT responses contain misinformation, 77% of responses are more verbose than human responses, and 78% of responses have varying degrees of inconsistency," they wrote.

Especially worrying is that many programmers consult ChatGPT anyway. In the study, participants preferred ChatGPT's answers 35% of the time and failed to notice the errors in them 39% of the time.

"Subsequent semi-structured interviews showed that polite language, textbook-style answers, and comprehensiveness were among the main reasons that made ChatGPT's answers more convincing, so participants tended to trust them," the researchers wrote.

Notably, Stack Overflow's traffic has dropped by about 35% over the past year and a half, a decline that began after ChatGPT's release. In response, Stack Overflow announced OverflowAI, a set of features powered by artificial intelligence.

Stack Overflow also banned ChatGPT-generated answers on its platform, citing doubts about their accuracy.

Nevertheless, these measures did not stem the decline, and Stack Overflow laid off 28% of its staff.