• @sugar_in_your_tea@sh.itjust.works
    2 · 1 year ago

    How do you explain PCIe, DDR, and M.2 standards? Maybe we could’ve had similar performance sooner if motherboard vendors did their own thing, but with standardization, we get more variety and broader adoption.

    If a company wants or needs a major change, they go through the standards body and all competitors benefit from that work. The time to market for an individual feature may be a little longer, but the overall pace is likely pretty similar; they just need to front-load the I/O design work.

    • conciselyverbose
      1 · 1 year ago

      Completely and utterly irrelevant? They are explicitly for the purpose of communicating between two pieces of hardware from different manufacturers, and obscenely simple. The entire purpose is to do the same small thing faster. Standardizing communication costs zero.

      The architecture of GPUs is many, many orders of magnitude more complex, solving problems many orders of magnitude harder than that. There isn’t even a slim possibility that hardware ray tracing would exist if Nvidia hadn’t unilaterally built it and said “this is happening now”. We almost definitely wouldn’t have refresh-rate-synced displays even today, either. It took Nvidia making a massive investment to prove it was possible and worth doing, after a solid decade of completely unusable software solutions, before FreeSync became something that wasn’t vomit-inducing.

      There is no such thing as innovation on standards. Standardization is worth the sacrifice for modular PCs; it’s not remotely worth the sacrifice in graphics performance. We’d still be stuck in the “literally nothing but increasing core counts and clocks” race, which is all AMD can do for GPUs, if Nvidia needed to involve other manufacturers in their giant leaps forward.

      • @sugar_in_your_tea@sh.itjust.works
        2 · 1 year ago

        “communicating between two pieces of hardware from different manufacturers”

        • like a GPU and a monitor? (FreeSync/G-Sync)
        • like a GPU and a PSU? (the 12v cable)

        DLSS and RTX are the same way, except instead of communication between two hardware products, it’s communication between two software components, which is then translated into commands for specialized hardware.

        Both DLSS and RTX are simpler, more specific cases of GPGPU, so NVIDIA likely could have opened and extended CUDA, extended OpenCL, or extended Vulkan/DirectX instead, with the hardware reporting whether it can handle the DLSS or RTX extensions efficiently. CPUs do exactly that for things like SIMD instructions, and compilers change the generated code depending on the features the CPU exposes.
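
        For comparison, here’s a minimal C sketch of that CPU-side model, using the real GCC/Clang __builtin_cpu_supports builtins (x86-only; the "kernel" names are illustrative placeholders, not any real API):

        ```c
        #include <stdio.h>

        /* Ask the CPU what it supports and dispatch accordingly; this is the
         * same "hardware reports, software adapts" pattern described above. */
        static const char *pick_kernel(void)
        {
            __builtin_cpu_init();                  /* populate the feature flags */
            if (__builtin_cpu_supports("avx2"))
                return "avx2";                     /* wide-SIMD fast path */
            if (__builtin_cpu_supports("sse4.2"))
                return "sse4.2";                   /* narrower-SIMD fallback */
            return "scalar";                       /* portable baseline */
        }

        int main(void)
        {
            printf("dispatching to the %s kernel\n", pick_kernel());
            return 0;
        }
        ```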

        But in all of those cases, they went proprietary with minimal documentation instead. That means it was intentional: they don’t want competitors to compete directly using those technologies, and instead expect them to build their own competing APIs.

        Here’s how the standards track should work:

        1. company proposes new API A for the standards track
        2. company builds a product based on proposal A
        3. standards body considers and debates proposal A
        4. company releases product based on A, ideally after the standards body agrees on A
        5. if there is a change needed to A, company releases a patch to support the new, agreed-upon standard, and competitors start building their own implementations of A

        That’s it. Step 1 shouldn’t take much effort, and if they did a good job designing the standard, step 5 should be pretty small.
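
        Ray tracing in Vulkan actually played out close to this track: NVIDIA shipped the vendor extension VK_NV_ray_tracing first, and Khronos later ratified the cross-vendor VK_KHR_ray_tracing_pipeline. Here’s a minimal C sketch of how an application probes for the ratified extension and falls back to the vendor one; the pick_ray_tracing_path helper is hypothetical and assumes a VkPhysicalDevice from the usual instance setup:

        ```c
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <vulkan/vulkan.h>

        /* Returns nonzero if the device advertises the named extension. */
        static int has_extension(VkPhysicalDevice dev, const char *name)
        {
            uint32_t count = 0;
            vkEnumerateDeviceExtensionProperties(dev, NULL, &count, NULL);
            if (count == 0)
                return 0;

            VkExtensionProperties *props = malloc(count * sizeof *props);
            vkEnumerateDeviceExtensionProperties(dev, NULL, &count, props);

            int found = 0;
            for (uint32_t i = 0; i < count; i++)
                if (strcmp(props[i].extensionName, name) == 0)
                    found = 1;
            free(props);
            return found;
        }

        /* Hypothetical helper: prefer the ratified KHR extension, fall back
         * to NVIDIA's original vendor extension, else give up gracefully. */
        void pick_ray_tracing_path(VkPhysicalDevice dev)
        {
            if (has_extension(dev, VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME))
                puts("using the ratified KHR ray tracing extension");
            else if (has_extension(dev, VK_NV_RAY_TRACING_EXTENSION_NAME))
                puts("falling back to the NV vendor extension");
            else
                puts("no hardware ray tracing available");
        }
        ```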

        But instead, NVIDIA ignores the whole process and just does their own thing until they either get their way or are essentially forced to adopt the standard. They basically lost the G-Sync fight (after years of winning), and they seem to have lost the Wayland EGLStreams debate, having since adopted the GBM standard. But they win more than they lose, so they keep doing it.

        That’s why we need competition: not because NVIDIA isn’t innovating, but because NVIDIA is innovating in a way that locks out competition. If AMD and Intel can eat away at NVIDIA’s dominant market share, NVIDIA will be forced to play nice more often.

        • conciselyverbose
          1 · 1 year ago (edited)

          Every single thing you’re describing literally guarantees that GPUs stay dogshit. There’s no path to any of the features we’re discussing getting accepted into open standards if AMD has input. They were only added after Nvidia proved how much better they are than brute force by putting them in people’s hands.

          Standards do not and fundamentally cannot work when actual innovation is called for. Nvidia competing is exactly 100% of the reason we have the technology we have. We’d be a decade behind, bare minimum, if AMD had any input at all in a standards body that controlled what Nvidia can make.

          We’re not going to agree, though, so I’ll stop here.

          • @sugar_in_your_tea@sh.itjust.works
            1 · 1 year ago

            The process I detailed doesn’t require consensus before a product can be released; it just allows that consensus to happen eventually. So by definition, it won’t impede progress. It does encourage direct competition, and that’s something NVIDIA would rather avoid.