• @serialandmilk@lemmy.ml
    9 points · 1 year ago

    Many of the building blocks of computing come from complex abstractions built on top of less complex abstractions built on top of even simpler concepts in algebra and arithmetic. If Q* can pass middle school math, then building more abstractions can be a big leap.

    Huge computing resources only seem ridiculous, unsustainable, and abstract until they aren’t anymore. Like typing messages on bending glass screens for other people to read…

    • @SkyeStarfall@lemmy.blahaj.zone
      3 points · 1 year ago

      With middle school math you can fairly straightforwardly do math all the way to linear algebra. Calculus requires a bit of a leap, but this still leaves a lot of the math world available.

      • @serialandmilk@lemmy.ml
        1 point · edited · 1 year ago

        I can’t recall all of it, but most of my calculus courses, all the way to multivariate calc, and my signals-processing courses required understanding and using memorized and abstract trig functions, which can all be reduced to algebra on polynomials. One of the big leaps that let us go from trig functions through limits to calc happened when we used language to understand that summation can tell us what the “area” under the curve is. Geometric functions, odd/even symmetry, etc. are all algebra and trig. If this model can use language to solve those challenges, those abstractions can be made more useful to future linguistic models. That’s so much more to teach and embed in these “statistical” models and NNs. (Edited, because I forgot to check how bad my autocorrect is)

    • @Aceticon@lemmy.world
      3 points · 1 year ago

      The thing is, in general computing it was humans who figured out how to build support for complex abstractions on top of support for the simplest concepts. To be AGI, this would have to not just support the simple concepts but actually figure out and build support for the complex abstractions by itself.

      Training a neural network to do a simple task (such as addition) isn’t all that hard (I get the impression that the “breakthrough” here is that they got an LLM, which is a very specific kind of NN, built for language, to do it); getting it to build support for complex abstractions out of support for simpler concepts by itself is something else altogether.

      • @ChrisLicht@lemm.ee
        3 points · 1 year ago

        I know jack shit, but actual mastery of first principles would seem a massive leap in LLM development. A shift from talented bullshitter to deductive extrapolator does sound worthy of notice/concern.

        • @Aceticon@lemmy.world
          2 points · edited · 1 year ago

          The simplest way to get an LLM to “do” maths is to have it translate the human-language tokens related to maths into a standard set of maths tokens, pass those to a perfectly normal library that does the maths, and then translate the results back into human-language tokens. Easy-peasy, the LLM “does maths”, only it doesn’t: it’s just integrated with something else (which was coded by a human) that does the maths, and it only serves as a translation layer.
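          The pattern described above can be sketched in a few lines. This is purely illustrative: the phrase table stands in for the LLM’s language-to-maths translation (a real model would generalize rather than use a lookup), while the arithmetic itself is done by an ordinary, human-coded routine.

```python
import ast
import operator

# Stand-in for the LLM's "translation layer": natural language -> maths tokens.
# (Hypothetical phrases; a real LLM would generalize instead of using a table.)
PHRASE_TO_EXPR = {
    "what is two plus two": "2 + 2",
    "multiply seven by six": "7 * 6",
}

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str):
    """Perfectly normal hand-coded maths engine: walk an arithmetic AST."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer(question: str) -> str:
    expr = PHRASE_TO_EXPR[question]   # "LLM" translates language -> maths tokens
    result = evaluate(expr)           # conventional code does the actual maths
    return f"The answer is {result}"  # translate the result back into language

print(answer("multiply seven by six"))  # The answer is 42
```

          Nothing in the “AI” part here computes anything; all the arithmetic happens in the hand-written `evaluate` step, which is exactly the point being made.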

          Further, the actual implementation of the LLM itself is already doing maths. For example, a single neuron can add 2 numbers: give it 2 inputs, each with a weight of 1, and a single output, and that is exactly how the simplest of neurons already calculates an output from its inputs in a standard neural-network implementation. It can do simple maths because the very implementation is already doing maths; the “ability” to do maths is supported by the programming language in which the LLM was coded, so the LLM would be doing maths with as much cognition as a human does food digestion.
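          A minimal sketch of that point: a linear neuron with two inputs, both weighted 1.0 and with zero bias, “adds” its inputs simply because a weighted sum is what the implementation already computes.

```python
def neuron(inputs, weights, bias=0.0):
    """Standard neuron pre-activation: weighted sum of inputs plus bias."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# With weights [1.0, 1.0] and zero bias, the neuron is literally addition;
# no learning or "understanding" is involved, just the framework's arithmetic.
print(neuron([3.0, 4.0], [1.0, 1.0]))  # 7.0
```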

          Given the amount of bullshit in the AI domain, I would be very, very wary of presuming this breakthrough is anywhere near an entirely independent, self-assembled (as in, trained rather than coded) maths engine.

          • @ChrisLicht@lemm.ee
            1 point · 1 year ago

            This sounds very knowledgeable. If the reporting is to be believed, why do you think the OpenAI folks might be so impressed by the Q* model’s skills in simple arithmetic?

      • @serialandmilk@lemmy.ml
        1 point · 1 year ago

        The thing is, in general computing it was humans who figured out how to build the support for complex abstractions up from support for the simplest concepts, whilst this would have to not just support the simple concepts but actually figure out and build support for complex abstractions by itself to be GAI.

        Absolutely

        “breaktrough” here is that they got an LLM - which is a very specific kind of NN, for language - to do it)

        To some degree this is how humans are able to go about creating abstractions. Intelligence isn’t 1:1 with language but it’s part of the puzzle. Communication of your mathematical concepts and abstractions in a way that can be replicated and confirmed using a rigorous proofing/scientific method requires the use of communication through language.

        Speech and writing are touch at a distance. Speech moves the air to eventually touch nerve endings in ear and brain. Similarly, yet very differently, writing stores ideas (symbols, emotions, images, words, etc.) as an abstraction on/in some type of storage media (ink on paper, stone etched in stone, laser-cut words in metal, a stick in the mud…) to reflect just the right wavelengths of light into the sensors in your retina, focused by your lenses, “touching” you from a distance as well.

        Having two+ “language” models be capable of using an abstraction to solve mathematical ideas is absolutely the big deal…

        • @Aceticon@lemmy.world
          0 points · edited · 1 year ago

          Don’t take this badly, but you’re both overcomplicating (by totally unnecessarily “decorating” your post with wholly irrelevant details on the transmission and reception of specific forms of human communication) and oversimplifying (by going for some pretty irrelevant details and getting some of them wrong).

          Also, there’s just one language model. The means by which the language was transmitted and turned into data (sound, images, direct ASCII data, whatever) are entirely outside the scope of the language model.

          You have a really really confused idea of how all of this works and not just the computing stuff.

          Worse, even putting aside all of that “wtf” stuff about language transmission processes in your post, them getting an LLM to do maths from language might not be a genuine breakthrough. They might’ve done this “maths support” by cheating: for example, having the NN recognise maths-related language and transform those language tokens into standard maths tokens that a perfectly normal algorithmic engine (i.e. hand-coded by humans) uses to calculate stuff, then translating the results back into human-language tokens. That wouldn’t be the “AI” part doing, or understanding the concept of, maths in any way whatsoever; it would just be the AI translating tokens between formats, while an algorithmic piece of software designed by a person does the actual maths using hardcoded algorithms. Somebody integrating a maths-calculating program into an LLM isn’t AI, it’s just normal coding.

          Also the basis of the actual implementation of an LLM is basic maths and it’s stupidly simple to get, for example, a neuron in a neural network to add 2 numbers.