• @NounsAndWords@lemmy.world
    23 • 10 months ago

    “First, ELIZA’s responses tend to be conservative. While this generally leads to the impression of an uncooperative interlocutor, it prevents the system from providing explicit cues such as incorrect information or obscure knowledge. Second, ELIZA does not exhibit the kind of cues that interrogators have come to associate with assistant LLMs, such as being helpful, friendly, and verbose. Finally, some interrogators reported thinking that ELIZA was “too bad” to be a current AI model, and therefore was more likely to be a human intentionally being uncooperative.”

    It turns out one of our most human-like qualities is to be dumb and willfully obtuse.

  • @oDDmON@lemmy.world
    10 • 10 months ago

    One potential fly in this study’s ointment? The way the article reads, it wasn’t a blind or double-blind study. Human participants were sometimes in the same room, even acquaintances. Supposedly, those results were excluded, tho’.