• @BlinkAndItsGone@lemm.ee · 41 points · 1 year ago (edited)

    Here’s the most important part IMO:

    He admits that — in general — when AMD pays publishers to bundle their games with a new graphics card, AMD does expect them to prioritize AMD features in return. “Money absolutely exchanges hands,” he says. “When we do bundles, we ask them: ‘Are you willing to prioritize FSR?’”

    But Azor says that — in general — it’s a request rather than a demand. “If they ask us for DLSS support, we always tell them yes.”

    So developers aren’t contractually forced to exclude DLSS, but outside the contract language they’re pressured to ignore it in favor of FSR. That explains why these deals tend to result in DLSS being left out, and also why there are some exceptions (e.g., Sony games; I imagine Sony knows what features it wants its PC releases to have and has pushed back to keep DLSS in). I think AMD is being honest this time, and I’m surprised it admitted publicly that it’s doing this. Hopefully word about this will get out and more developers will insist on including DLSS.

      • @sugar_in_your_tea@sh.itjust.works · 30 points · 1 year ago

        Well, FSR is open, as are FreeSync and most other AMD technologies; it’s just that NVIDIA is so dominant that there’s really no reason for them to use anything other than their own proprietary tech. If Intel can eat away at NVIDIA’s market share, maybe we’ll see some more openness.

        • conciselyverbose · -3 points · 1 year ago

          I guess they could just use FSR as a wrapper for DLSS, but they made DLSS because there was nothing like it available, and it leverages the hardware to absolutely blow the doors off FSR. They’re not comparable effects.

          • @sugar_in_your_tea@sh.itjust.works · 6 points · 1 year ago

            Last I checked, DLSS requires integration work from developers to function properly, so it’s less “leveraging the hardware” and more “leveraging better data,” though maybe FSR 3 has a similar process.
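For context on the “better data” point: DLSS-style temporal upscalers need the engine to supply per-pixel motion vectors (and usually depth and sub-pixel jitter) so that pixels from previous frames can be reprojected and accumulated; that is the integration work involved. Here is a toy sketch of the reproject-and-blend idea, purely illustrative (it is not NVIDIA’s actual algorithm, and all names are made up):

```python
import numpy as np

def reproject_and_blend(prev_hi, cur_lo, motion, alpha=0.9):
    """Toy temporal-upscaler step (illustrative only, not DLSS).

    prev_hi: previous high-res frame, shape (H, W)
    cur_lo:  current low-res frame, shape (H//2, W//2)
    motion:  per-pixel motion vectors at high res, in pixels, shape (H, W, 2)
    alpha:   how much accumulated history to keep vs. the new sample
    """
    H, W = prev_hi.shape
    # Naive spatial upsample of the current frame (nearest-neighbor) as a base.
    up = np.repeat(np.repeat(cur_lo, 2, axis=0), 2, axis=1)

    # Reproject history: for each output pixel, fetch where it was last frame
    # according to the engine-supplied motion vectors.
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    py = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, H - 1)
    px = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, W - 1)
    history = prev_hi[py, px]

    # Blend accumulated history with the new low-res sample.
    return alpha * history + (1 - alpha) * up
```

Without the motion vectors, the reprojection step has nothing to work with, which is why upscaler quality depends on per-game integration and not just on the GPU.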

            • conciselyverbose · 2 points · 1 year ago

              It’s a hardware-level feature, though. The reason they didn’t support hardware prior to RTX is that those cards didn’t have the tensor cores to do the required math.

              FSR is substantially less capable because it can’t assume the hardware is there to deliver the throughput DLSS needs to work. I know the “corporations suck” talking point is fun, and there’s some truth to it, but most of the proprietary stuff Nvidia does is either first or better by a significant margin. They use the marriage of hardware and software to do things you can’t do effectively with broad compatibility, because they exploit the architecture of the cards it’s designed for (and of those going forward) extremely effectively.

              • @sugar_in_your_tea@sh.itjust.works · 6 points · 1 year ago

                I think it’s more the other way around. They designed the feature around their new hardware as a form of competitive advantage. Most of the time, you can trade cross-platform compatibility for better performance.

                Look at CUDA vs OpenCL, for example. Instead of improving OpenCL or making CUDA an open standard, they instead doubled down on keeping it proprietary. They probably get a small performance advantage there, but the main reason they do this is to secure their monopoly. The same goes for GSync vs FreeSync, though it seems they are backing down and supporting FreeSync as well.

                They want you to think it’s a pro-consumer move, but really it’s just a way to keep their competition one step behind.

                • conciselyverbose · -1 point · 1 year ago

                  They can’t improve OpenCL. They can make suggestions or proposals, but because broad compatibility is the priority, most of it wouldn’t get added. They’d be stuck with a worse instruction set and tooling that spends half its time figuring out all the different hardware compatibility you have to deal with.

                  CUDA is better than OpenCL. GSync was better than FreeSync (though the gap has closed enough that FreeSync is viable now). DLSS is better than FSR. None of them are small advantages, and they were all created when there was nothing else available to adopt even if Nvidia had wanted to. Supporting any of the alternatives in place of their own tech would have been a big step back and would have meant abandoning what they had just sold their customers.

                  It’s not “pro consumer”. It absolutely is “pro technology”, though. Nvidia has driven graphics and GPGPU massively forward. Open technology is nice, but it has limitations as well, and Nvidia’s approach has been constant, substantial improvement to what can be done.

                    • @sugar_in_your_tea@sh.itjust.works · 4 points · 1 year ago

                    CUDA is only better because the industry has moved to it and NVIDIA pumps money into its development. OpenCL could be just as good if the industry adopted it and card manufacturers invested in it. AMD and Intel aren’t going to invest as much in it as NVIDIA invests in CUDA because the market share just isn’t there.
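It’s worth noting how close the two programming models are at the kernel level; the real gap is in tooling, libraries, and ecosystem investment. Here is the classic SAXPY operation expressed both ways (kernel sources shown as comments for comparison), with a NumPy reference for what both compute:

```python
import numpy as np

# The same SAXPY kernel (y := a*x + y), expressed in both models.
#
# CUDA C:
#   __global__ void saxpy(int n, float a, const float *x, float *y) {
#       int i = blockIdx.x * blockDim.x + threadIdx.x;
#       if (i < n) y[i] = a * x[i] + y[i];
#   }
#
# OpenCL C:
#   __kernel void saxpy(int n, float a,
#                       __global const float *x, __global float *y) {
#       int i = get_global_id(0);
#       if (i < n) y[i] = a * x[i] + y[i];
#   }

def saxpy(a, x, y):
    """CPU reference for the kernels above: returns a*x + y element-wise."""
    return a * x + y
```

The per-element logic is nearly identical; what differs is everything around it (drivers, profilers, cuBLAS-style libraries), which is where sustained vendor investment shows up.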

                    Look at Vulkan: it has a ton of potential for greater performance, yet many games (at least Baldur’s Gate) work better with DirectX 12, because resources have been invested into making the DirectX path work better. If those same resources were put into Vulkan development, Vulkan would outperform DirectX in those games.

                    The same goes for GSync vs FreeSync: most of the problems with FreeSync were poor implementations by monitor makers, or poor support from NVIDIA. More people had NVIDIA cards, so GSync monitors tended to work better. If NVIDIA and AMD had worked together from the start, variable refresh would’ve worked better from day one.
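The value of variable refresh (GSync/FreeSync) comes down to simple frame-pacing arithmetic: on a fixed-refresh display with vsync, a finished frame has to wait for the next vblank, so a frame that misses the 16.7 ms deadline at 60 Hz is held until 33.3 ms; a VRR panel instead refreshes when the frame is ready. A toy model of this (made-up function names, simplified numbers, ignoring details like low-framerate compensation):

```python
import math

def fixed_vsync_latency_ms(frame_ms, refresh_hz=60):
    """With fixed refresh + vsync, a finished frame waits for the next
    vblank, so render-to-display time rounds UP to a refresh interval."""
    interval = 1000.0 / refresh_hz          # 16.67 ms at 60 Hz
    return math.ceil(frame_ms / interval) * interval

def vrr_latency_ms(frame_ms, min_hz=48, max_hz=144):
    """With variable refresh, the panel fires as soon as the frame is
    ready, clamped to the refresh range the monitor supports."""
    return min(max(frame_ms, 1000.0 / max_hz), 1000.0 / min_hz)
```

A 20 ms frame displays at 33.3 ms under fixed 60 Hz vsync (an effective halving to 30 fps) but at 20 ms under VRR, which is why implementation quality at the monitor level mattered so much.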

                    Look at web standards: when organizations worked well together (e.g., to overtake IE 6), the web progressed really well, and you could largely say “use a modern browser” and things would tend to work. Now that Chrome has a near monopoly, there are a ton of little things that don’t work as nicely between Chrome and Firefox. Things were pretty good until Chrome became dominant, and now it’s getting worse.

                    It absolutely is “pro technology”

                    Kind of. It’s more of an excuse to be anti-consumer by locking out competition with a somewhat legitimate “pro technology” stance.

                    If they really were so “pro technology,” why not release DLSS, GSync, and CUDA as open standards? That way other companies could provide that technology in new ways to more segments of the market. But instead of that, they go the proprietary route, and the rest try to make open standards to oppose their monopoly on that tech.

                    I’m not proposing any solutions here, just pointing out that NVIDIA does this because it works to secure their dominant market share. If AMD and Intel drop out, they’d likely slow the pace of innovation. If AMD and Intel catch up, NVIDIA will likely adopt open standards. But as long as they have a dominant position, there’s no reason for them to play nicely.

      • @BlinkAndItsGone@lemm.ee · 2 points · 1 year ago (edited)

        Well, Nvidia isn’t directly involved here at all; they’ve only commented on the issue once (to say that they don’t block other companies’ upscaling). The objections tend to come from users, the majority of whom have Nvidia cards and want to use what is widely considered the superior upscaling technology.