Barack Obama: “For elevator music, AI is going to work fine. Music like Bob Dylan or Stevie Wonder, that’s different”

Barack Obama has weighed in on AI’s impact on music creation in a new interview, saying, “For elevator music, AI is going to work fine”.

  • @remus989@sh.itjust.works
    40 points · 1 year ago

    Do people actually care what Obama has to say about AI? I’m just having a hard time seeing where his skillset overlaps with this topic.

    • @EnderMB@lemmy.world
      23 points · 1 year ago

      Probably as much as I care about most other people’s thoughts on AI. As someone who works in AI, 99% of the people making noise about it know fuck all about it, and are probably just as qualified as Barack Obama to have an opinion on it.

        • @EnderMB@lemmy.world
          4 points · 1 year ago

          I work for Amazon as a software engineer, and primarily work on a mixture of LLMs and compositional models. I work mostly with scientists and legal entities to ensure that we are able to reduce our footprint of invalid data (i.e. anything that includes deleted customer data, anything that is blocked online, things that are blocked in specific countries, etc.). It’s basically data prep for training and evaluation, alongside in-model validation for specific patterns that indicate a model contains data it shouldn’t have (and then releasing a model that doesn’t have that data within a tight ETA).

          It can be interesting at times, but the genuinely interesting work seems to happen on the science side of things. They do some cool stuff, but have their own battles to fight.

          • @krazzyk@lemmy.world
            2 points · 1 year ago

            That sounds cool. I’ve had roles that were heavy on data cleansing, although never on anything so interesting. What languages/frameworks are used for transforming the data? I understand if you can’t go into too much detail.

            I did wonder how much software engineers contribute in the field; so it’s the scientists doing the really interesting stuff when it comes to AI? Not surprising, I guess 😂

            I’m a full-stack engineer and was thinking of getting into contracting, but now I’m not so sure. I don’t know enough about AI’s potential coding capabilities to know whether I should be concerned about job security in the short or long term.

            Getting involved in AI in some capacity seems like a smart move though…

            • @EnderMB@lemmy.world
              1 point · 1 year ago

              We do a lot of orchestration of closed environments, so that we can access critical data without worrying about leaks. We use Spark and Scala for most of our applications, with Step Functions and custom EC2 instances to host our environments. This way, we build verticals that can scale with the amount of data we process.
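              To give a rough idea of what that prep stage does, here’s a toy sketch in plain Python (not our actual Spark/Scala stack; the field names and blocklists are made up for illustration):

              ```python
              # Toy training-data prep: drop records tied to deleted customers,
              # or records blocked in the target country. All names/IDs invented.

              DELETED_CUSTOMERS = {"cust-123", "cust-456"}    # hypothetical
              BLOCKED_IN = {"DE": {"doc-9"}}                  # hypothetical per-country blocks

              def is_valid(record: dict, country: str) -> bool:
                  """True if the record may be used for training in `country`."""
                  if record["customer_id"] in DELETED_CUSTOMERS:
                      return False
                  if record["doc_id"] in BLOCKED_IN.get(country, set()):
                      return False
                  return True

              def prep(records: list[dict], country: str) -> list[dict]:
                  return [r for r in records if is_valid(r, country)]
              ```

              The real pipelines obviously involve a lot more (scale, auditing, the tight ETAs I mentioned), but the core idea is filtering against evolving blocklists.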

              If I’m perfectly honest, I don’t know how smart a move it is, considering our org just went through layoffs. We’re popular right now, but who knows how long for.

              It can be interesting at times, but to be honest, if I were really interested in it, I would go back and get my PhD so I could actually contribute. Sometimes it feels like SWEs are support roles, and science managers only really care that we are unblocking scientists from their work. They rarely give a shit if we release anything cool.

        • @unexpectedteapot@lemmy.ml
          2 points · 1 year ago

          There is a tad bit of difference between caring about an opinion and tolerating one. Obama’s opinions on AI are unqualified pop-culture nonsense. They wouldn’t be relevant in an actual discussion that cites the relevant technical, economic, and philosophical aspects of AI.

          • @lugal@lemmy.world
            1 point · 1 year ago

            Sure, care about it or don’t, I don’t care. It was the “being qualified to have an opinion” bit I didn’t like. I don’t have to be qualified to have an opinion, and I can write an opinion piece; sure enough, fewer people will read it than Obama’s. I might not be qualified to teach on the subject, but everyone is qualified to form their own opinion.

            But maybe that’s just overly pedantic on my side. You are qualified to have a different opinion.

      • Queen HawlSera
        -1 point · 1 year ago

        I know this was once said about the automobile, but I am confident in the knowledge that AI is just a passing fad

        • @ricecake@sh.itjust.works
          1 point · 1 year ago

          Why? It’s a tool like any other, and we’re unlikely to stop using it.

          Right now there’s a lot of hype because some tech that made a marked impact on consumers was developed, and that’s likely to ease off a bit, but the actual AI and machine learning technology was a thing for years before that hype, and will continue after it.

          Much like voice-driven digital assistants, it’s unlikely to redefine how we interact with technology, but every other way I used to set a short timer has been obsoleted at this point, and I’m betting that autocomplete having insight into what you’re writing will just be the norm going forward.

          • Queen HawlSera
            -2 points · 1 year ago

            It’s just a Chinese room, dude, it doesn’t actually do anything useful

            • Trantarius
              2 points · 1 year ago

              The Chinese room argument doesn’t have anything to do with usefulness. It’s about whether or not a computer that passes the Turing test is conscious. Besides, the argument is a ridiculous one to begin with. It assumes that if a subcomponent of a system (i.e. the human) lacks “understanding”, then the system itself (the human + the room + the program) lacks understanding.

              • @ricecake@sh.itjust.works
                2 points · 1 year ago

                Anything else aside, I wouldn’t be so critical of the thought experiment. It’s from 1980 and was intended as an argument against the idea that symbolic manipulation is all that’s required for a computer to have understanding of language.
                It being a thought experiment that examines where understanding originates in a system, and one that’s been given serious replies and discussion for 43 years, makes me feel like it’s not ridiculous.

                https://plato.stanford.edu/entries/chinese-room/#LargPhilIssu

            • @ricecake@sh.itjust.works
              0 points · 1 year ago

              What?

              At best you’re arguing that because it’s not conscious it’s not useful, which… No.
              My car isn’t conscious and it’s perfectly useful.

              A system that can analyze patterns and either identify instances of the pattern or extrapolate on the pattern is extremely useful. It’s the “hard but boring” part of a lot of human endeavors.
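              As a toy illustration of “identify or extrapolate the pattern” (just a sketch, nothing to do with any real product):

              ```python
              # Find the period of a repeating sequence, then extrapolate
              # the next element from that pattern.

              def find_period(seq):
                  """Smallest p such that the sequence repeats every p elements."""
                  for p in range(1, len(seq)):
                      if all(seq[i] == seq[i % p] for i in range(len(seq))):
                          return p
                  return len(seq)

              def predict_next(seq):
                  """Extrapolate the next element of a periodic sequence."""
                  p = find_period(seq)
                  return seq[len(seq) % p]
              ```

              Boring for a human to do at scale, trivial to automate; that’s exactly the niche.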

              We’re gonna see it wane as a key marketing point at some point, but it’s been in use for years and it’s gonna keep being in use for a while.

              • @aesthelete@lemmy.world
                1 point · edited · 1 year ago

                A system that can analyze patterns and either identify instances of the pattern or extrapolate on the pattern is extremely useful. It’s the “hard but boring” part of a lot of human endeavors.

                I agree with most of what you’re saying here, but just wanted to add that another really hard part of a lot of human endeavors is actual prediction, which none of these things (despite their names) actually do.

                These technologies are fine for figuring out that you often buy avocados when you buy tortillas, but they were utter shit at predicting anything about, for instance, pandemic supply chains…and I think that’s at least partially because they expect (given the input data and the techniques that drive them) the future to be very similar to the past. Which holds ok, until it very much doesn’t anymore.
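                The avocado/tortilla kind of insight really is just co-occurrence counting. A toy sketch (invented shopping baskets; lift > 1 roughly means “bought together more often than chance”):

                ```python
                # Tiny association-mining example over invented basket data.

                baskets = [
                    {"tortillas", "avocados", "salsa"},
                    {"tortillas", "avocados"},
                    {"tortillas", "beans"},
                    {"milk", "eggs"},
                ]

                def support(item_set):
                    """Fraction of baskets containing every item in item_set."""
                    return sum(item_set <= b for b in baskets) / len(baskets)

                def lift(a, b):
                    """How much more often a and b co-occur than if independent."""
                    return support({a, b}) / (support({a}) * support({b}))
                ```

                It works fine on stable historical data; it says nothing about a future where everyone suddenly panic-buys.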

                • @jasondj@ttrpg.network
                  2 points · edited · 1 year ago

                  I’m sorry, they aren’t good at predicting?

                  My man, do you have any idea how modern meteorology works?

                  A ton of data gets dumped into a ton of different systems. That data gets analyzed against a bunch of different models to produce forecasts. The median of all those models is essentially what makes it into the forecast on the news.
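                  Roughly, that median-of-models step is just this (numbers invented; real NWP ensembles are vastly more involved):

                  ```python
                  # Consensus forecast as the median of several model outputs.
                  from statistics import median

                  forecast_high_temp = {
                      "model_a": 21.0,   # invented model outputs (°C)
                      "model_b": 23.5,
                      "model_c": 22.0,
                  }

                  consensus = median(forecast_high_temp.values())
                  ```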

                • @ricecake@sh.itjust.works
                  1 point · 1 year ago

                  Well, I would disagree that they don’t predict things. That’s entirely what LLMs and such are.

                  Making predictions about global supply chains isn’t the “hard but boring” type of problem I was talking about.
                  Circling a defect, putting log messages under the right label, or things like that is what it’s suited for.

                  Nothing is good at predicting global supply chain issues. It’s unreasonable to expect AI to be good at it when human intelligence is also shit at it.

                  • @aesthelete@lemmy.world
                    1 point · 1 year ago

                    They make probabilistic predictions, which are OK if you’re doing simple forecasting or bucketing based on historical data, correlations, and all of that.

                    What they’re crappier at is things that are somewhat intuitively obvious but can’t be forecast from historical trends. So, new and emerging trends, or things like panic-buying behavior making it so the whole world is somehow out of TP for a time.

                    I’d argue that relying solely on “predictive analytics” and just in time supply chains aggravated a lot of issues during the big COVID crunches, and also makes your supply chain more brittle in general.

            • @SCB@lemmy.world
              0 points · 1 year ago

              You not having a job where you work at a level to see how useful AI is just means you don’t have a terribly important job.

              • @aesthelete@lemmy.world
                1 point · 1 year ago

                What a brain-drained asshole take to have. But I’ve seen your name before in my replies and it makes sense that you’d have it.

                AI is useful for filling out quarterly goal statements at my job, and boy are those terribly important… 😆

      • @Rooskie91@discuss.online
        -2 points · edited · 1 year ago

        Absolutely not. We need to learn the difference between intelligence and expertise. Is Obama an intelligent person? Of course. Is he allowed to have and voice an opinion? Sure, it’s a free country. Does that mean that his opinion is informed by expertise and should dictate people’s actions and therefore the direction of an industry? No.

        This is the same logic that allows right-wing ideologues to become legitimate sources of information. A casual interest in a topic is NOT the same as being an industry expert, and the opinions of industry experts should be weighted far heavier in our minds than those of people who “sound like they know what they’re talking about”.

        • @aesthelete@lemmy.world
          3 points · edited · 1 year ago

          This is the same logic that allows right-wing ideologues to become legitimate sources of information. A casual interest in a topic is NOT the same as being an industry expert, and the opinions of industry experts should be weighted far heavier in our minds than those of people who “sound like they know what they’re talking about”.

          And your logic is the same followed by government agencies when they effectively agree to regulatory capture because all of the industry experts work at this company, so why not just let the company write the rulebook? 🤔

          I personally don’t believe we need “industry experts” in every new, emerging type of tech to be the sole voices considered about them because that’s how we largely arrived at the great enshitterment we’re already experiencing.

          Edit: It’s really quite a baffling take (given a moment’s thought) that a big problem facing America is that we aren’t cozy enough with “industry experts”. Industry practically writes the policy in this country, and the only places where we have any kind of great debate (e.g. net neutrality, encryption) are where there are conflicting industry concerns.

    • @Mango@lemmy.world
      -3 points · 1 year ago

      I’m just a dude who does general labor and has lots of insights about AI, just because I’m interested and smart. People tend to come to me just to hear what I have to say.

      Now look at Obama. He’s all of that and much more in the eyes of a society that’s put him in the spotlight. He can talk about totally boring stuff and people will still respect his opinion.