• @ryven@lemmy.dbzer0.com
    5 points · 7 months ago

    The TEA is right about one thing: calling early 21st century machine learning systems “AI” is letting the marketers win. The scoring system is not an “intelligence” in the sense most people understand the term.

  • @mke_geek@lemm.ee
    3 points · 7 months ago

    There were computerized test-scoring machines 40 years ago; they were called Scan-tron.

  • AutoTL;DR
    1 point · 7 months ago

    This is the best summary I could come up with:


    Students in Texas taking their state-mandated exams this week are being used as guinea pigs for a new artificial intelligence-powered scoring system set to replace a majority of human graders in the region.

    The STAAR exams, which test students between the third and eighth grades on their understanding of the core curriculum, were redesigned last year to include fewer multiple-choice questions.

    According to a slideshow hosted on TEA’s website, the new scoring system was trained using 3,000 exam responses that had already received two rounds of human grading.

    Some safety nets have also been implemented — a quarter of all the computer-graded results will be rescored by humans, for example, as will answers that confuse the AI system (including the use of slang or non-English responses).

    A 2019 report from Motherboard found that automated scoring systems were being used in at least 21 states, with varying degrees of success, though TEA seems determined to avoid the same reputation.

    The attempt to draw a line between the new scoring system and generative AI isn’t surprising — there’s no shortage of teachers despairing online about how generative AI services are being used to cheat on assignments and homework.


    The original article contains 485 words, the summary contains 188 words. Saved 61%. I’m a bot and I’m open source!