This is the best summary I could come up with:
In recent years, computer programmers have flocked to chatbots like OpenAI’s ChatGPT to help them code, dealing a blow to places like Stack Overflow, which had to lay off nearly 30 percent of its staff last year.
The Purdue University study behind the piece found that 52 percent of ChatGPT’s answers to programming questions contained incorrect information. That’s a staggeringly large proportion for a program that people are relying on to be accurate and precise, underlining what other end users like writers and teachers are experiencing: AI platforms like ChatGPT often hallucinate totally incorrect answers out of thin air.
For the study, the researchers examined 517 questions on Stack Overflow and analyzed ChatGPT’s attempts to answer them.
The team also performed a linguistic analysis of 2,000 randomly selected ChatGPT answers and found they were “more formal and analytical” while conveying “less negative sentiment” — the sort of bland and cheery tone AI tends to produce.
The Purdue researchers also polled 12 programmers — admittedly a small sample size — and found that they preferred ChatGPT’s answers 35 percent of the time and failed to catch the AI-generated mistakes 39 percent of the time.
The study demonstrates that ChatGPT still has major flaws — but that’s cold comfort to people laid off from Stack Overflow or programmers who have to fix AI-generated mistakes in code.
The original article contains 340 words, the summary contains 199 words. Saved 41%. I’m a bot and I’m open source!