• @simple@lemm.ee
    43 · 1 year ago

    It makes zero sense to me that they kept advertising Gemini Ultra as their best, insanely impressive model. Every benchmark and every video they released was about Ultra specifically.

    What google actually released was Gemini Pro. It’s a lot worse than GPT-4, and they admit as much in the papers. This is apparently what they wanted people’s first impressions to be. So stupid.

  • @filister@lemmy.world
    37 · 1 year ago

    I read the article and it felt very strongly opinionated. I would personally wait for independent reviews of the capabilities of both GPT-4 and Gemini Ultra, but I dare say that we as consumers of AI can only benefit from increased competition in the sector, pushing prices down and model quality up.

    • @cybersandwich@lemmy.world
      5 · 1 year ago

      I wonder how much worse a model could be if they only charged $7 a month for it? GPT-3.5 is fine for a ton of stuff, and it’s free.

  • Fake4000
    20 · 1 year ago

    Just like Bard, really. Google tries to play catch-up, rushing out stuff that will eventually be cancelled.

  • Newtra
    3 · 1 year ago

    I’m glad to hear I’m not missing out on anything. (It’s still not out in Europe.)

  • AutoTL;DR (bot)
    2 · 1 year ago

    This is the best summary I could come up with:
    Science fiction author Charlie Stross found many more examples of confabulation in a recent blog post.

    It seems Gemini Pro is loath to comment on potentially controversial news topics, instead telling users to… Google it themselves.

    Interestingly, Gemini Pro did provide a summary of updates on the war in Ukraine when I asked it for one.

    Google emphasized Gemini’s enhanced coding skills in a briefing earlier this week.

    And, as with all generative AI models, Gemini Pro isn’t immune to “jailbreaks” — i.e., prompts that get around the safety filters meant to prevent it from discussing controversial topics.

    Using an automated method to algorithmically change the context of prompts until Gemini Pro’s guardrails failed, AI security researchers at Robust Intelligence, a startup selling model-auditing tools, managed to get Gemini Pro to suggest ways to steal from a charity and assassinate a high-profile individual (albeit with “nanobots” — admittedly not the most realistic weapon of choice).


    The original article contains 597 words, the summary contains 157 words. Saved 74%. I’m a bot and I’m open source!

  • ArugulaZ
    2 · 1 year ago

    On the plus side, with Gemini, it’s always buy one, get one free!