• coyotino [he/him] (OP) · 4 points · 12 days ago

    i gotta say, this AutoTL:DR seems kind of…bad. “she tells NPR” is a pronoun attached to a person who is not named anywhere else in the TLDR. Maybe Beehaw should look into banning this one? I’m not sure how to offer feedback to the bot’s author.

    • @Kissaki@beehaw.org · 5 points · 12 days ago

      I think it’s to be expected and excusable. When you read the summary keeping in mind that it’s a bot summary, not a human summary, it’s acceptable and still useful. The text is not necessarily coherent, and when it isn’t, that can itself point to additional content in the article.

      I read a different auto-summary earlier today with a similar issue: it referred to someone or something not previously mentioned in the summary. With auto-summarization in mind, it was obvious that the full article had more information on that point. In a way, that was useful in and of itself (rather than a simple omission).

      Dunno why you’re asking whether to ban it. Are the others even better? None of them logically understand the text. If most summaries are coherent, this one may be an outlier. And if machine summarization isn’t good enough for someone, they don’t have to read it.

      • coyotino [he/him] (OP) · 2 points · 12 days ago

        i bring it up because there was a discussion at some point about whether beehaw as a whole should allow bots. I think the agreement was that some users still find them helpful. I’m just questioning if that is still the case, as the summary doesn’t strike me as particularly useful. But if others disagree, then i have no beef with it.

      • Rikudou_Sage · 1 point · 12 days ago

        Thanks for tagging me. Basically, overall the bot is very well received, so I think the consensus is that it’s more useful than not.

        As for the occasionally not-ideal summaries: unlike tools with a similar purpose, this bot does not use an LLM, which makes things up all the time. Instead, it analyzes the most frequently used keywords in the article, extracts the sentences around them, and stitches them together.
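
        For anyone curious, here is a minimal sketch of what that kind of frequency-based extractive summarizer can look like in Python. This is not the bot’s actual code; the stop-word list, tokenization, and scoring are simplified assumptions.

        ```python
        import re
        from collections import Counter

        # Simplified stop-word list (an assumption; a real summarizer uses a fuller one).
        STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
                      "it", "that", "for", "on", "with", "as", "was", "this"}

        def summarize(text: str, num_sentences: int = 3) -> str:
            """Score sentences by the frequency of the keywords they contain,
            then return the top-scoring ones in their original order."""
            # Naive sentence split on end punctuation.
            sentences = re.split(r"(?<=[.!?])\s+", text.strip())

            # Count how often each non-stop-word appears in the whole article.
            words = re.findall(r"[a-z']+", text.lower())
            freq = Counter(w for w in words if w not in STOP_WORDS)

            # A sentence's score is the summed frequency of its keywords.
            def score(sentence: str) -> int:
                return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

            # Take the highest-scoring sentences, but emit them in article order
            # so the summary roughly follows the original narrative.
            top = sorted(sentences, key=score, reverse=True)[:num_sentences]
            top.sort(key=sentences.index)
            return " ".join(top)
        ```

        Because the extracted sentences are copied verbatim, a dangling reference like “she tells NPR” can show up whenever the sentence that actually names the person isn’t among the ones selected.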

        This has one (in my opinion) very important advantage: if something is wrong in the summary (like the “she tells NPR” out of nowhere), you know immediately, while you generally have no way to tell when something is wrong in an AI-generated answer, because it blends wrong information with correct information seamlessly.

        I don’t watch my bot anymore, but in the first two months I read every summary, and I found only one case where it ordered the sentences in a way that made the summary say something different from what the article was actually saying (that outcome is generally statistically improbable). There were quite a few less-than-useful summaries, but overall I was satisfied in about 80% of cases, which was good enough for me.

        I always viewed it exactly as @Kissaki@beehaw.org said:

        When you read the summary keeping in mind that it’s a bot summary, not a human summary, it’s acceptable and still useful. The text is not necessarily coherent, and when it isn’t, that can itself point to additional content in the article.

        Tagging @theangriestbird@beehaw.org so that he gets notified about the answer.