Can we start with the product actually working? I had some people on a team pinging me, telling me they were getting an error message and how to fix said error message, which after a little investigation was not only wrong, it was horrifically wrong in both its analysis and its supposed solution. I did further digging, and of course the team had used ChatGPT to come up with what I should do. Upon even further digging, I saw what was going on: ChatGPT had found and spit out the solution for an error message that was “close” to the one we were getting from the software, but since ChatGPT isn’t a human, it just has to go with what it has been programmed with, and evidently that means spitting out whatever it finds that’s close. It really is some fucking annoying software, and the fact that the press and others ran with this whole ‘artificial intelligence’ name still pisses me off. Just putting in more if/then statements doesn’t make your software ‘intelligent’, and they have obviously been feeding it everything whether it’s true or not, with no actual vetting of said data. Back when we had physical encyclopedias, and even digital ones, you could at least trust that the information had been researched. These guys can’t even get that right.
I guess it’s working to specification, though. ChatGPT is supposed to generate plausible text; being correct is more of a nice side effect we get on occasion. I’ve had similar experiences. Sometimes it gives good advice and tells people where to look and what to do. Other times it makes something look easy and doable for a non-technical person: the steps are nicely written down, but they’re completely fabricated and none of it works that way.
I’ve had people approach me with that. It’s always a letdown for them when I tell them it’s entirely unusable. It is a great opportunity to teach about AI, though, and about its limitations with factual matters. Another area where this surfaces is GitHub issues of Free Software projects. We sometimes get wild bug reports or pull requests written by AI. Of course it mostly wastes everyone’s time, as AI isn’t “intelligent” enough to deal with the complexity involved. It’d be way better if people simply wrote down what they saw and what they’re dealing with.
And by the way, I think the term “Artificial Intelligence” dates back to 1955; it’s not like journalists jumped on it recently. But it is a broad field, including things like expert systems and formal reasoning. I think those definitely fit, as logic and reasoning have something to do with intelligence. Modern chatbots and LLMs belong to the same field; it’s just that the “artificial” aspect is very pronounced. And due to the realism of the output text, it’s easy to mistake them for proper intelligence, or to forget about the nature of the thing…
I always advocate for making watermarking of AI text a legal requirement. I can’t stop people from using AI, but I kind of need to be able to tell whether a bug report is genuine or made up.
Great reply. As far as the term goes, I’m just a little salty with the media and how they ran with the term ‘drone’ when talking about quadcopters. They did this to stir up controversy with the general public and induce a manufactured fright. Modern media’s manipulation of language to their advantage is another topic, though…