• rottingleaf@lemmy.world · 57 minutes ago

      When you are a nation-state, you can find a considerable amount of money to pay for a video of some unfavorable person committing a crime they never committed. Or of a dead/incapacitated/unwilling politician giving a speech. Or of a person in a position of authority confirming something they didn't confirm.

      When you are an entertainment company, you can find a considerable amount of money to pay for technology that keeps characters looking consistent a decade after the actors have died.

      When you are a multitude of clueless investors, you can together find a considerable amount of money to pay for Sun hardware right before the dot-com bubble bursts. Same for this thing: it may just be a bubble.

      I don't think any of the three variants is stable. For #1, people already know deepfakes exist, and fiction has prepared us for such things: Saruman's voice, charm spells in Harry Potter, convincing illusions in Star Trek, the Force affecting minds in Star Wars, and so on. (That might be why the mainstream doesn't like geek culture, or tries to present it neutered and bland.) For #2, the results would have to be unbelievably good, and generative models are still not very good at philosophy and writing plots. For #3, I think it's simply too optimistic.

  • Telorand@reddthat.com · 9 hours ago

    The example videos are pretty good. Only a few had obvious tells (like the Taylor Swift one), and the rest seemed pretty human-like.

    But before everyone goes out and invests in OmniHuman-1 systems, remember that marketing campaigns always show the best they could make, not the average case most people are likely to get. Will it be good enough to trick the average consumer who’s not looking that hard? Maybe. I guess we’ll have to see.

    But if all these generative models are designed to replace the people upon whose videos they're based, who or what will train the next generation of models, I wonder?

    • Yingwu@lemmy.dbzer0.com · 4 hours ago

      Until these start getting used on a broader scale, I'm not convinced they aren't just schemes to funnel more investment money into these companies. The examples are really short, probably made after who knows how many attempts, and probably very limited in what poses and actions they can show. I'm so tired of the LLM hype in general.

    • TheFogan@programming.dev · 8 hours ago

      But if all these generative models are designed to replace the people upon whose videos they're based, who or what will train the next generation of models, I wonder?

      Maybe we hit a regular cycle…

      i.e., models are all trained on real video and get good enough that humans can't tell the difference. Real video becomes rare, so AIs end up training on AI videos. Result: AI video effectively becomes a copy of a copy of a copy, and degradation becomes obvious as mistakes compound. AI developers have to start creating and introducing untampered video to train with, and AI starts to get better again.
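
      The "copy of a copy" cycle described above is sometimes called model collapse, and a toy version of it is easy to simulate: fit a simple model (here just a Gaussian, estimated with Python's standard library) to data, then repeatedly refit each new generation only on samples drawn from the previous generation's fit. The generation count and sample size below are arbitrary choices for illustration; real generative models are vastly richer, but the compounding of sampling error is the same basic mechanism.

      ```python
      # Toy sketch of generational training on synthetic data (model collapse).
      # Each "model" is just a fitted Gaussian; each generation trains only on
      # the previous generation's samples, so estimation noise compounds.
      import random
      import statistics

      random.seed(42)

      def fit(samples):
          """'Train' a model: estimate mean and stdev from the data."""
          return statistics.mean(samples), statistics.stdev(samples)

      # Generation 0: "real video" data from the true distribution N(0, 1).
      data = [random.gauss(0.0, 1.0) for _ in range(200)]
      mu, sigma = fit(data)

      history = [sigma]
      for generation in range(1, 31):
          # Each new model trains only on the previous model's output.
          data = [random.gauss(mu, sigma) for _ in range(200)]
          mu, sigma = fit(data)
          history.append(sigma)

      print(f"stdev at generation 0:  {history[0]:.3f}")
      print(f"stdev at generation 30: {history[-1]:.3f}")
      ```

      Because each fit is estimated from finite samples, the fitted parameters drift further from the true distribution as generations pass, with no mechanism to pull them back; that is the "mistakes compound" step, and mixing fresh real data back in is what breaks the cycle.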

  • A_A@lemmy.world · 9 hours ago

    After DeepSeek-R1 (and Alibaba's thingie surpassing o4), we now have this (OmniHuman-1). All of this shows, once again, that the Chinese are ahead of the game in AI. Researchers who want to keep up have to, at the very least, study what the Chinese have accomplished.

    (…) OmniHuman-1 is [way] above previous deepfake techniques, (…) While ByteDance hasn’t released the system, the AI community tends [to rapidly] reverse-engineer models like these.