Running AI models without matrix math means far less power consumption—and fewer GPUs?

  • @bitfucker@programming.dev · 5 months ago

    Good

    Edit: Oh shit, nvm. It still requires dedicated hardware (an FPGA), so it's no different from, say, an NPU. To be fair, though, the researchers also tested the model on a traditional GPU, and it reduced memory consumption there too.

    • Pennomi · 5 months ago

      Only for maximum efficiency. LLMs already run tolerably well on ordinary CPUs, and this technique would make them much more efficient there as well.
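
For context on why this maps well to plain CPUs: the paper the article covers constrains weights to the ternary set {-1, 0, +1}, so each matrix-vector product collapses into additions and subtractions, with no multiplications at all. Below is a minimal NumPy sketch of that idea; it is illustrative only (the function name `ternary_matvec` and the shapes are assumptions, not the authors' code):

```python
import numpy as np

def ternary_matvec(W, x):
    """Compute W @ x using only adds/subtracts, assuming W is ternary.

    With weights restricted to {-1, 0, +1}, each output element is just
    the sum of inputs where the weight is +1 minus the sum where it is -1.
    """
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i in range(W.shape[0]):
        row = W[i]
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8)).astype(np.float32)  # ternary weights
x = rng.standard_normal(8).astype(np.float32)

# Matches the ordinary matrix multiply, but needed no multiplications.
print(np.allclose(ternary_matvec(W, x), W @ x))  # True
```

A real implementation would presumably also pack the ternary weights into a couple of bits each rather than storing full floats, which is where the memory savings the commenter mentions would come from.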