The actor told an audience in London that AI was a “burning issue” for actors.

  • @Lmaydev@programming.dev
    -6 points · 1 year ago

    Some AIs are more intelligent than the average person.

    Ask a normal person to do the tasks ChatGPT can and I bet the results would be even worse.

    • @42Firehawk@lemmy.zip
      14 points · 1 year ago

      Ask ChatGPT to do things a normal person can, and it also fails. ChatGPT is a tool, a particularly dangerous Swiss Army chainsaw.

      • @Lmaydev@programming.dev
        -4 points · 1 year ago

        I use it all the time at work.

        Getting it to summarize articles is a really useful way to use it.

        It’s also great at explaining concepts.

        • @abbotsbury@lemmy.world
          10 points · 1 year ago

          > It’s also great at explaining concepts.

          Is it? Or is it just great at making you think that? I’ve seen many ChatGPT outputs “explaining” something I’m knowledgeable about that were deliriously wrong.

          • @gedaliyah@lemmy.world
            4 points · 1 year ago

            I agree. I have very specialized knowledge in certain areas, and when I’ve tried to use ChatGPT to supplement my work, it often misses key points or gets them completely wrong. If it can’t process the information, it will err on the side of creating an answer, whether it is correct or not, and whether it is real or not. The creators call this “hallucination.”

          • @Lmaydev@programming.dev
            -3 points · 1 year ago

            Yeah, it is if you prompt it correctly.

            I basically use it instead of reading the docs when learning new programming languages and frameworks.

            • nickwitha_k (he/him)
              4 points · 1 year ago

              A coworker tried to use it with a well-established Python library and it responded with a solution involving a Class that did not exist.

              LLMs can be useful tools, but be careful about trusting them too much - they are great at what I’d say is best described as “bullshitting”. It’s not even “trust but verify”; it’s more “be skeptical of anything it says”. I’d encourage you to actually read the docs, especially those for libraries, as that will give you a deeper understanding of what’s actually happening and make debugging and innovating easier.
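              The hallucinated-class failure mode described above is at least cheap to guard against in Python: before trusting an LLM-suggested import, you can check that the symbol actually exists in the installed library. A minimal sketch (the `symbol_exists` helper and the `MagicParser` name are made up for illustration; `MagicParser` stands in for the kind of class an LLM might invent):

              ```python
              import importlib


              def symbol_exists(module_name: str, symbol: str) -> bool:
                  """Return True if `symbol` is actually defined in the named module."""
                  try:
                      module = importlib.import_module(module_name)
                  except ImportError:
                      # The module itself may be hallucinated or simply not installed.
                      return False
                  return hasattr(module, symbol)


              # json.JSONDecoder is real; json.MagicParser is invented.
              print(symbol_exists("json", "JSONDecoder"))  # True
              print(symbol_exists("json", "MagicParser"))  # False
              ```

              It only proves the name exists, not that the suggested usage is right, so it complements rather than replaces reading the docs.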

              • @Lmaydev@programming.dev
                3 points · 1 year ago

                I’ve had no problem using them. The more specific you get, the more likely they are to do that. You just have to learn how to use them.

                I use them daily for refactoring and things like that without issue.

            • @abbotsbury@lemmy.world
              4 points · 1 year ago

              That’s great; it works until it doesn’t, and you won’t know when unless you’re already knowledgeable from a real source.