Alt Text: an image of Agent Smith from The Matrix with the following text superimposed, “1999 was described as being the peak of human civilization in ‘The Matrix’ and I laughed because that obviously wouldn’t age well and then the next 25 years happened and I realized that yeah maybe the machines had a point.”
Gawd, me too. They’ve started scraping my LinkedIn recommenders to try to bait me in.
For context, I work at a university. The subject was something like “xxxxxx recommends you for a company like us,” implying my contact was actually behind it, which they obviously weren’t.
And obviously it reads like it was written by one of the GPTs.
Had they actually looked at our profiles, they’d know what it is we do and how ridiculous recommending a chat AI is. That’s sooooo beneath our knowledge and expertise. Like a random suggesting ivermectin to Dr. Fauci.
From their example, it seems like all they’ve “innovated” is a new, less reliable way to write database queries!
Yep. And query languages are some of the quickest things an analyst can use, written with 100% knowledge of the data and of any wrangling/conditions needed to assure accurate results.
A bot would never be able to accurately answer these questions from my data unless I thoroughly trained and tested it. And if it’s GPT-based, I’d always have to double-check its output, so it’d just be a hindrance in my workflow. There is no way money would be paid to a third party for such a situation.
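To make the point above concrete, here’s a minimal sketch of the kind of “wrangling/conditions” an analyst bakes into a query by hand. The table, columns, and data are all hypothetical, just for illustration:

```python
import sqlite3

# Hypothetical orders table with the sort of messiness real data has:
# test rows and NULL amounts that must be excluded for an accurate total.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 100.0, "paid"), (2, 50.0, "test"), (3, None, "paid"), (4, 25.0, "paid")],
)

# An analyst who knows this data knows the 'test' rows and NULL amounts
# have to be filtered out -- conditions a generic chatbot would have no
# way of knowing without being told (and then double-checked anyway).
total = conn.execute(
    "SELECT SUM(amount) FROM orders "
    "WHERE status != 'test' AND amount IS NOT NULL"
).fetchone()[0]
print(total)  # 125.0
```

A naive `SELECT SUM(amount) FROM orders` would silently give a wrong answer here, which is exactly why the hand-written version, with its conditions made explicit, is both faster and more trustworthy than prompting a bot and auditing its output.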
Since there’s a mathematical proof that LLMs without hallucinations are impossible, I think this kind of usage is a lost cause.