It’s so fucked up. I’ve seen several professors (old as well as “young”) trust the outputs of an LLM. Sure, they might know the real answer and just forgot, but blindly letting an LLM correct your memory is just not very convincing.
I’m so glad I finished my education before all of this became some kind of fucked-up norm. I feel bad for younger folks though…
Yeah, this is the key here. When even professors are blindly trusting AI output, we’ve got a serious problem!