• yeahiknow3 · 4 days ago

    Okay, we can create the illusion of thought by executing complicated instructions. But there’s still a difference between a machine that does what it’s told and one that thinks for itself. The fact that it might be crazy is irrelevant, since we don’t know how to make it, at all, crazy or not.

      • yeahiknow3 · 4 days ago

        The discussion is over whether we can create an AGI. An AGI is an inorganic mind of some sort. We don’t need to make an AGI; I personally don’t care. The question was: can we? The answer is no.

          • yeahiknow3 · 4 days ago

            Your definition of AGI as doing “jobs” is arbitrary, since the concept of “a job” is made up; literally anything can count as economic labor.

            For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel’s first incompleteness theorem.
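
            (For reference, the standard textbook statement of the theorem I mean is roughly the following; the formalization below is the usual one, not anything specific to this discussion.)

            ```latex
            % Gödel's first incompleteness theorem (standard form)
            \textbf{Theorem.} If $T$ is a consistent, effectively axiomatizable theory that
            interprets basic arithmetic, then there is a sentence $G_T$ such that
            $T \nvdash G_T$ and $T \nvdash \lnot G_T$.
            ```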

            To quote Gödel himself: “We cannot mechanize all of our intuitions.”

            Alan Turing drew the same conclusion a few years later with the halting problem.
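
            (To make the connection concrete, here is a minimal Python sketch of the classic diagonalization behind the halting problem; the `halts` oracle is hypothetical by construction, which is the point.)

            ```python
            # Sketch of Turing's diagonalization argument.
            # Assume, for contradiction, that a total, always-correct decider exists.

            def halts(program, argument):
                """Hypothetical oracle: True iff program(argument) eventually halts."""
                raise NotImplementedError("no such total, correct decider can exist")

            def diagonal(program):
                # Do the opposite of whatever the oracle predicts about `program` run on itself.
                if halts(program, program):
                    while True:   # oracle says it halts, so loop forever
                        pass
                return            # oracle says it loops, so halt immediately

            # Asking halts(diagonal, diagonal) is contradictory either way,
            # so the assumed oracle cannot exist.
            ```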

            In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.

            • Communist@lemmy.frozeninferno.xyz · 3 days ago

              Jobs are not arbitrary; they’re tasks humans want another human to accomplish, and an AGI could accomplish any of those that a human can.

              > For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel’s first incompleteness theorem.

              Why do you assume we have to? Even a shitty current AI can do a decent job at this if you fact-check it, better than a lot of modern politicians. Feed it the entire internet and let it figure out what humans value; why would we do this manually?
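
              (As a toy illustration of “let it figure out what humans value”: a minimal Bradley-Terry-style sketch that fits latent value scores to made-up pairwise preferences. The comparisons and scores here are hypothetical, not a claim about how an AGI would actually do it.)

              ```python
              import math

              # Hypothetical pairwise judgments: (preferred, rejected). Entirely made up.
              comparisons = [
                  ("honesty", "deception"),
                  ("honesty", "harm"),
                  ("fairness", "harm"),
                  ("fairness", "deception"),
                  ("honesty", "fairness"),
              ]

              # One latent "value" score per outcome, fit by gradient ascent on the
              # Bradley-Terry log-likelihood of the observed preferences.
              outcomes = {o for pair in comparisons for o in pair}
              score = {o: 0.0 for o in outcomes}

              lr = 0.1
              for _ in range(2000):
                  for preferred, rejected in comparisons:
                      p = 1.0 / (1.0 + math.exp(score[rejected] - score[preferred]))
                      score[preferred] += lr * (1.0 - p)
                      score[rejected] -= lr * (1.0 - p)

              # Outcomes sorted from most to least "valued" under the fitted model.
              print(sorted(score.items(), key=lambda kv: -kv[1]))
              ```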

              > In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.

              Humans are conscious and have gotten no closer to doing this, ever. I see no reason to believe consciousness will help at all with this matter.

              • yeahiknow3 · 3 days ago

                > Feed it the entire internet and let it figure out what humans value

                There are theorems in mathematical logic that tell us this is literally impossible. Also common sense.

                And LLMs are notoriously stupid. Why would you offer them as an example?

                I keep coming back to this: what we were discussing in this thread is the creation of an actual mind, not a zombie illusion. You’re welcome to make your half-assed malfunctional zombie LLM machine to do menial or tedious uncreative statistical tasks. I’m not against it. That’s just not what interests me.

                Sooner or later humans will create real artificial minds. Right now, though, we don’t know how to do that. Oh well.

                https://introtcs.org/public/index.html

                • Communist@lemmy.frozeninferno.xyz · 3 days ago

                  That’s just because there is no consistent set of axioms for human intuition. Obviously the best you can do is approximate, and I see no reason you can’t approximate this. Feel free to give me proof to the contrary, but so far all you’ve done is appeal to authority without explaining your arguments.

                  • yeahiknow3 · 3 days ago

                    Why do you talk about shit you don’t understand with such utter confidence? Being a fucking moron has to be the chillest way to go through the world.