• @mindbleach@sh.itjust.works

    One, the subject is LLMs, and I point this out because I’ve suffered multiple cycles of opinionated meatbags saying computers can never ever become intelligent. I fully expect some dingus to shove headlines like “AI can’t get smarter!” in people’s faces, like it’s divine writ.

    Two, more training is what makes these things smarter. Data was only a major obstacle when there was next to nothing. And they didn’t just pour the new data into the old setup; every major iteration reconfigures the network. Deeper tends to be better but is slow to train. Wider is a cheap path to novel results but requires obscene amounts of memory. Naturally the companies dumping money into this (because they’ve gambled their reputation on an unproven new thingamajig) are only trying to scale up up up - and that’s why this limit appears. A lot more neat shit is going to arise from small networks. They’ll be organized with better human insight (partly derived from the experience of these big dumb money sinks) and they’ll train much more quickly on much more affordable machines.
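
    For a ballpark sense of that depth-vs-width trade-off, here's a rough back-of-envelope sketch. It assumes the standard estimate of roughly 12·d_model² parameters per transformer layer (attention projections plus a 4x-wide MLP; embeddings and biases ignored), so doubling depth doubles the parameter count while doubling width quadruples it:

```python
# Back-of-envelope parameter counts for a standard transformer stack,
# illustrating why "wider" eats memory faster than "deeper".
# Assumes ~12 * d_model^2 parameters per block; embeddings ignored.

def transformer_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model ** 2

base   = transformer_params(24, 1024)   # small baseline: ~302M
deeper = transformer_params(48, 1024)   # 2x depth -> 2x params (~604M)
wider  = transformer_params(24, 2048)   # 2x width -> 4x params (~1208M)

print(f"baseline:  {base / 1e6:.0f}M params")
print(f"2x deeper: {deeper / 1e6:.0f}M params (linear growth)")
print(f"2x wider:  {wider / 1e6:.0f}M params (quadratic growth)")
```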

    Three, tech is not why these idiot corporations are struggling. The tech works as engineers promised. It's the marketing people and executives who promised the moon and the stars as soon as this thing could almost hold a conversation. We the dorks were cautiously optimistic about the emergent properties. GPT-3 could sorta do math. Yeah yeah yeah, computers doing math doesn't sound surprising, but the surprising part is that the network has no arithmetic hardware to lean on: it has to work through the digits the way you do.
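
    To see why that's remarkable, here's what the model actually receives when you hand it arithmetic. This is a minimal sketch using OpenAI's tiktoken tokenizer (exact token ids depend on the encoding); the point is that the model never sees numbers, only token ids, so carrying digits is a learned pattern over text, not a built-in calculator:

```python
# Shows what an LLM actually "sees" when asked to do arithmetic:
# a sequence of token ids, not numbers. Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models
prompt = "127 + 395 ="
ids = enc.encode(prompt)

print(ids)                              # a handful of integer ids
print([enc.decode([t]) for t in ids])   # the text chunk behind each id

# The answer (522) would just be more tokens the model has to predict,
# one at a time, from patterns it learned in text.
```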

    We the dorks also pointed out that GPT was set up as pure text completion, so it was incapable of holding an opinion. Leave your side of the conversation open and it'd finish it for you, and sometimes it'd do a really good job. This approach may get a lot closer to intelligence than critics are comfortable with. Every advancement in AI demonstrates how little we understand ourselves, via endless failed predictions that 'only a sentient mind could do [blank].'
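
    Here's a minimal sketch of that completion behavior, using Hugging Face transformers with GPT-2 as a stand-in base model (any model without chat tuning behaves this way):

```python
# A base completion model continues whatever text it's given,
# including *your* side of a dialogue if that's what's left open.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in base model

# The transcript ends mid-conversation on the human's turn:
prompt = (
    "AI: I think the park is a fine place to meet.\n"
    "Human:"
)

out = generator(prompt, max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])  # GPT-2 happily writes the human's reply
```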