It still requires a LOT of human oversight, which it obviously wasn't given in this example, but a good writer paired with knowledgeable use of LLMs is already significantly better than a good content writer alone.
I’m talking about the future state. The goal clearly is to avoid the need for human oversight altogether. The purpose of that is saving some rich people more money. I also disagree that LLMs improve the output of good writers, but even if they did, the cost to society is high.
I’d much rather just have the human author, and I just hope that saying “we don’t use AI” becomes a plus for PR due to shifting public opinion.
No, it’s not the ‘goal’.
Somehow when it comes to AI it’s humans who have the binary thinking.
It’s not going to be “either/or” anytime soon.
Collaboration between humans and ML is going to be the paradigm for the foreseeable future.
The hundreds of clearly AI-written help articles with bad or useless info every time I try to look something up in the last few months say otherwise…