• Pennomi
    39 · 11 months ago

    Basically the government knows that AI is the next big weapon and is trying to ensure it can’t be outcompeted.

    Every one of these policies is designed to allow the military to copy the best technology for free and prevent opposing militaries from doing the same.

      • @kakes@sh.itjust.works
        18 · 11 months ago

        “Wrong info” – you mean the potential for an infinite and omnipresent amount of targeted propaganda?

        • @gravitas_deficiency@sh.itjust.works
          1 · 11 months ago

          For real, we were all genuinely shocked and a little freaked out when we accidentally started predicting the future with a nontrivial degree of accuracy in the winter of 2022. Geopolitical shitposting shouldn’t be able to serve as anything even remotely close to an Oracle… and yet here we are.

          • gregorum
            11 · 11 months ago (edited)

            Predicting the recent resurgence of neofascism doesn’t require AI, nor is it remarkable that AIs have noticed it. It was predicted long ago and has been widely speculated about and commented upon.

    • MxM111
      8 · 11 months ago

      That does not require any new policy; such policies have existed around technology for a long time. And I’m sure the military has had its own version of KillGPT4 for a while.

  • @linearchaos@lemmy.world
    19 · 11 months ago

    So

    A. The government installs agencies to prepare it to step in and regulate AI if the need arises.

    B. Those agencies are also in charge of finding new ways to use AI for the government’s benefit.

    Hello 1984. It’s obviously nothing dangerous in the short term, but I can’t say I’m a very big fan of either of those line items.