So I opened Google Lens for the first time in years to identify a logo, and got prompted by this. Thank you but no thank you

  • Saik0
    2 points · 4 months ago

    Depends on the operation. Basic object recognition is something your phone can easily do: run an object-detection model against the image after you take it and store the found objects in the metadata. Then you just search the metadata.

    This doesn’t have to go to the cloud if that’s all you’re doing.

    I do this on my Nextcloud instance. It doesn’t require a full “AI” implementation at all.
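
    As a minimal sketch of that approach (assuming a local pretrained model via the ultralytics package and JSON sidecar files for the metadata; Nextcloud typically does this server-side with its Recognize app, and the paths and filenames here are just placeholders):

    ```python
    from pathlib import Path
    import json

    from ultralytics import YOLO  # any locally runnable detector works; YOLO is just an example

    model = YOLO("yolov8n.pt")  # small pretrained model, downloaded once, then runs fully offline

    def tag_image(image_path: str) -> list[str]:
        """Run local object detection and store the found labels in a JSON sidecar."""
        result = model(image_path)[0]
        labels = sorted({result.names[int(c)] for c in result.boxes.cls})
        Path(image_path).with_suffix(".json").write_text(json.dumps({"objects": labels}))
        return labels

    def search(photo_dir: str, wanted: str) -> list[str]:
        """Search only the stored metadata; no image ever leaves the machine."""
        hits = []
        for sidecar in Path(photo_dir).glob("*.json"):
            if wanted in json.loads(sidecar.read_text()).get("objects", []):
                hits.append(str(sidecar.with_suffix(".jpg")))  # assumes .jpg photos
        return hits

    print(tag_image("photos/IMG_0001.jpg"))  # e.g. ['dog', 'person']
    print(search("photos", "dog"))
    ```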

    • @MillerLife777
      -2 points · 4 months ago

      Ya, but to do it well you need more data, correct?

      • Saik0
        1 point · 4 months ago

        No, you don’t need “more” data. I have a Coral TPU in my security setup that does object recognition. I don’t “need” to send any data anywhere else for it to do object detection at 225 fps… Split across all 8 of my cameras, that’s about 28 fps of object detection per camera… and you only need like 5-10 fps to do it properly.
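
        Rough back-of-the-envelope math for that budget (numbers from this comment; per-camera need taken as the high end of the 5-10 fps estimate):

        ```python
        # Frame-budget arithmetic for one Coral edge TPU shared by several cameras.
        TPU_THROUGHPUT_FPS = 225      # total detections per second the accelerator sustains
        NUM_CAMERAS = 8
        NEEDED_FPS_PER_CAMERA = 10    # upper end of the "5-10 fps is enough" estimate

        per_camera_fps = TPU_THROUGHPUT_FPS / NUM_CAMERAS
        print(f"per camera: {per_camera_fps:.1f} fps")                          # ~28.1 fps
        print(f"headroom: {per_camera_fps / NEEDED_FPS_PER_CAMERA:.1f}x need")  # ~2.8x
        ```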

        The only thing I would need “more” data for is newer/better object-detection models, and that requires nothing from me; I just download them. Nothing goes to the cloud. They even make cheap little NUC-style boxes that can do this kind of detection these days (GMKtec, for instance). There’s absolutely no reason a phone can’t do this as well. I am completely non-reliant on the cloud for any of these operations.
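
        For illustration, a minimal sketch of that no-cloud workflow using Google’s pycoral library and a pretrained detection model downloaded once from the Coral model zoo (the model and image filenames below are placeholders, not my actual setup):

        ```python
        from PIL import Image
        from pycoral.adapters import common, detect
        from pycoral.utils.dataset import read_label_file
        from pycoral.utils.edgetpu import make_interpreter

        # Pretrained model + label file fetched once from the Coral model zoo;
        # inference then runs entirely on the local Edge TPU.
        MODEL = "ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite"
        LABELS = "coco_labels.txt"

        interpreter = make_interpreter(MODEL)
        interpreter.allocate_tensors()
        labels = read_label_file(LABELS)

        image = Image.open("camera_frame.jpg")
        _, scale = common.set_resized_input(
            interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))
        interpreter.invoke()

        # Print every detected object with its label, confidence, and bounding box.
        for obj in detect.get_objects(interpreter, score_threshold=0.5, image_scale=scale):
            print(labels.get(obj.id, obj.id), f"{obj.score:.2f}", obj.bbox)
        ```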