• NVIDIA gaming performance guide:

    1) Reduce render resolution ( perf ↗️ | quality ↘️ )

    2) Use AI to upscale ( perf ↘️ | quality ↗️ )

    3) Render fewer frames ( perf ↗️ | quality ↘️ )

    4) Fill in AI generated frames ( perf ↘️ | quality ↘️ )

    5) ???

    6) Profit? (NVDA ↗️)

    quality: image quality or responsiveness

    Edit:

    I have no idea why so many people are ignoring the meme's second panel (or that this is r/AyyMD, a satire sub). Yes, we know why preset M from DLSS 4.5 is slower, thank you.

    The render pipeline is now officially a roller coaster

    I get to max out my 240 Hz monitor. I don't notice much artifacting with 2x frame gen, and Quality DLSS looks better than native xD

    This is a satire sub

    wooosh post generated

    DLSS does not use AI to upscale, it uses AI to better select samples from previous frames.

    66% scale rendered --> Deep Learning Super Sampling --> 100% scale displayed

    Upscaling is creating higher resolution from a lower resolution, upping the scale, and deep learning is AI.

    DLSS got its name from DLSS 1 being trained on highly supersampled images. Spatial AI upscaling looked horrible, so starting with DLSS 2 it switched to the same principles as TAA(U), often called "temporal supersampling" or "temporal super resolution", as it takes extra samples from previous frames. DLSS 2+ does NOT use AI to upscale anything, it uses AI to replace manually written heuristics for sample selection, to reduce temporal artifacts as much as possible. This is also why starting with DLSS 2, it's required to reduce mipmap bias on lower resolution inputs to improve texture crispness, PRECISELY because DLSS does not use AI to upscale the image.
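    For the curious, the TAA(U)-style loop being described looks roughly like this. It's a minimal Python sketch of the general idea, not Nvidia's actual code; the hand-written decide_blend_weight heuristic below is exactly the piece that DLSS 2+ swaps out for a trained network.

```python
import numpy as np

# Minimal sketch of a TAA(U)-style temporal accumulator (illustrative only).
# In DLSS 2+, the hand-written decide_blend_weight() heuristic is the part a
# trained network replaces; the jitter/reproject/accumulate framework is the
# same as in other TAA(U) solutions.

def decide_blend_weight(history_px: float, current_px: float, motion_confidence: float) -> float:
    """Hand-written heuristic: trust history unless it looks stale or disoccluded."""
    if abs(history_px - current_px) > 0.25 or motion_confidence < 0.5:
        return 0.8   # lean on the new jittered sample (risk: undersampling/shimmer)
    return 0.1       # lean on accumulated history (risk: ghosting)

def accumulate(history: np.ndarray, current_jittered: np.ndarray, motion_confidence: float) -> np.ndarray:
    """Blend the reprojected history buffer with the new low-res, jittered frame."""
    out = np.empty_like(history)
    for i, (h, c) in enumerate(zip(history.flat, current_jittered.flat)):
        w = decide_blend_weight(float(h), float(c), motion_confidence)
        out.flat[i] = (1.0 - w) * h + w * c
    return out

history = np.full((4, 4), 0.5, dtype=np.float32)       # stand-in for the output-resolution history buffer
new_frame = np.random.rand(4, 4).astype(np.float32)    # stand-in for a reprojected, jittered sample
print(accumulate(history, new_frame, motion_confidence=0.9))
```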

    that just sounds like it's using AI to upscale the image with more steps to me

    No, it does not use AI to upscale the image. It's literally written on the wiki.

    It should also be noted that forms of TAAU such as DLSS 2.0 are not upscalers in the same sense as techniques such as ESRGAN or DLSS 1.0, which attempt to create new information from a low-resolution source; instead, TAAU works to recover data from previous frames, rather than creating new data.

    All that says is that it's not like ESRGAN; it still uses AI in combination with engine data to upscale. Which, if you noticed, includes the word "AI" when both you and I describe how it upscales.

    It is very unfortunate that you don't even try to understand the topic. I honestly tried to explain it to you.

    Have a good day!

    I did. You are trying to act like what you say isn't what you're saying

    To prove that it isn't upscaling or using AI, you just explained how AI is used in the sampling to get a bigger picture from a smaller one.

    Did I get that right?

    From DLSS 2's release post:

    With Deep Learning Super Sampling (DLSS), NVIDIA set out to redefine real-time rendering through AI-based super resolution - rendering fewer pixels and then using AI to construct sharp, higher resolution images. With our latest 2.0 version of DLSS, we’ve made big advances towards this vision.

    Powered by dedicated AI processors on GeForce RTX GPUs called Tensor Cores, DLSS 2.0 is a new and improved deep learning neural network that boosts frame rates while generating beautiful, crisp game images.

    No, I have explained to you that DLSS does not use AI for upscaling. Upscaling is done the same way as in other TAA(U) solutions, by jittering the image and combining samples from previous frames into a new image.

    Look, we have an expert here who knows better than nvidia themselves. Hats off!

    You know, sometimes I wonder what kind of person watches content from Hardware Unboxed, who can't even max out a GPU, and takes it seriously. But then I meet people like you, and, well, yeah. You absolutely are HUB's target audience.

    Have a good day!

    Resorting to ad hominem on a satire sub. Oh the class!

    Are you going to correct nvidia, then professional HW reviewers too?

    You have a mighty fine repertoire of credentials, might I see them?

    Are you going to correct nvidia

    That's what you're trying to do here. My general understanding of DLSS 2+ came from the relevant presentation by Edward Liu.

  • Idk if that's worse than not giving your latest model to older cards at all.

    The thing is that DLSS is being marketed, and marketed unfaithfully. This is what makes it worse.

    How is it being marketed unfaithfully?

    5090 path tracing at 4k for frame gen not dlss

    Not to excuse Nvidia's decision to unlock 6X and call it a win when even at 3X the tech is all but unusable due to its terrible image quality, but frame gen is one of the technologies under the DLSS umbrella. It started out meaning upscaling only, but as new cards and new tech arrived, their marketing team expanded it to be more of an umbrella term encompassing things like "DLSS Frame Gen." It has lost almost all of its original meaning.

    This is wrong. DLSS stands for "Deep Learning Super Sampling" and means using ML training to infer pixels instead of rendering them, and it has done more than just upscaling since the initial release (like smoothing/sharpening).

    As an acronym, yes.

    As a marketing term designed to sell GPUs, no. https://www.nvidia.com/en-us/geforce/technologies/dlss/ "DLSS 4 technologies" now encompasses:

    • DLSS Multi-Frame Generation
    • DLSS Dynamic Frame Generation
    • DLSS Frame Generation (Christ they're reaching making Frame Gen into 3 bullet points)
    • DLSS Ray Reconstruction
    • DLSS Super Resolution <- this is the classic "DLSS" Deep Learning Super Sampling, used both to upscale content from a small internal render resolution to a larger one and, in Super Resolution mode, to enable a larger render resolution than the monitor supports.
    • DLAA

    All of those technologies do exactly what I said DLSS does

    DLSS is DLSS; any other technology has its own name even if it's under the DLSS umbrella. AMD didn't replace FSR with Redstone.

    The legend:

    grey: Native

    bright green: DLSS 4.5 Dynamic

    dark green: DLSS 4.5 6x

    DLSS is the marketing term for the whole software stack.

    NVIDIA DLSS 4 Supreme Speed. Superior Visuals. Powered by AI.

    DLSS is a revolutionary suite of neural rendering technologies that uses AI to boost FPS, reduce latency, and improve image quality. DLSS 4 introduced Multi Frame Generation (MFG) and transformer models. DLSS 4.5 brings Dynamic MFG and a second gen transformer model. All backed by an NVIDIA AI supercomputer in the cloud, constantly improving your PC’s gaming capabilities.

    src: https://www.nvidia.com/en-us/geforce/technologies/dlss/

    Hardware Unboxed is only testing the new DLSS models, which don't have any impact on RT or FG. Your slide is showing a metric for RT with FG; you are comparing apples to oranges and you probably know it.

    you are comparing apples to oranges and you probably know it

    Riiiiight... I'm the one muddying the waters, right.

    4090 performance at $549 (RTX 5070)

    -Jensen Huang

    It's not; they clearly state this is mainly for 40/50 series cards, but people with older cards can try it with worse performance because the hardware doesn't really support it.

    I mean, it's an option, why would it ever be worse? It can be changed in the Nvidia app, and this makes it clear it's harder to run on older cards but is still being made available for them.

    Well, that's the point of it all.

    "Ok we are going to take your resolution at 4k and lower it to 1080p and then smooth and ai it back to originalish detail. Meanwhile only giving you 4fps more in performance. If you have the latest and greatest of gpu."

    It's interesting and, in a certain way, impressive. But also, why wouldn't you just run it at native and skip using this?

    no? it's still cool to see even if it's not all that useful

    Not really. What would be cool to see is them focusing on making better hardware at a better price, so that people can have affordable actual frames, not frames generated from thin air by AI. Those are practically useless when the hardware can't output enough real frames and are essentially a "nice-to-have" when it can. That would be very cool. Sadly, it won't happen.

    na fuggit DLSS is pretty damn awesome.
    Even FSR 3 can look pretty decent in some games and can be worth the image quality hit for the gained performance, but DLSS looks just SOOOO much better. It's not that you won't notice it; it's just that I don't care when I get better FPS for marginally degraded image quality.

    We're not talking just upscaling here. Upscaling is awesome and FSR4 is just as awesome as DLSS. My previous comment was about framegen and multiframegen (which the post is also about), which are nice to have

    them focusing on making better hardware at a better price

    LMAO

    It's like reading the comments of a 5 year old

    Nice, I'm younger now. I'm sorry that I want companies to stop being scummy assholes wringing money out of consumers at every step. I thought that would be a universally good thing, but sure, let's keep on sucking multi-billionaires' dicks, no problem.

    And the performance hit with DLSS 4.5 on the older cards is way higher than FSR4 INT8's is on RDNA3.

    Unfortunately for AMD users:

    FSR 4 INT 8 < DLSS 3 ~ FSR 4 FP8 < DLSS 4 < DLSS 4.5

    It's really not even remotely comparable. DLSS 4.5 in Performance will still look better most of the time than FSR 4 INT 8 in Quality mode.

    But yeah, DLSS 4.5 only makes sense on 40 and 50 series that have the hardware acceleration for it.

    Did you test all those yourself?

  • 2080 Ti user here. Here's a simple but definitive one: FHD native, DLSS 3 vs DLSS 4.5.

    People are so quick to forget how good the old stuff looks. There were barely any things left for 4.0 to solve over 3.x, and fewer still for 4.5 to solve over 4.0.

    Diminishing returns are once again brought to light for people.

    Same reason games don't graphically look that much better than 10 years ago, compared to 10 years before that.

    It's the reason why I plan to hold onto my 3080 until real-time path tracing is fast and economically viable. (And no, a 5090 is not economically viable.) Path tracing is the only thing that looks better than what I have now... Native + High settings in 2020, DLSS 3.0 Q + High settings in 2022, DLSS 4.0 Q-B + Medium settings in 2024... looks like 2026 will bring DLSS 4.5 P + Medium settings, which despite running on a 30 series card will look as good as or better than what I've been using.

    Why would I spend $700+ USD to replace my card when so far every game I play has looked the same and performed well enough? This isn't the $350 upgrade cost of yesteryear; if I'm going to spend so much then I expect a SIGNIFICANT upgrade, not just 20-50% depending on the game... and anything under $700 USD is less than 10% or is actually a step down in performance from my 5 year old card.

    (Also, I admit I finally broke down and bought Lossless Scaling for $10 CAD, because sometimes 40-70 -> 80-120 fps in "I don't care about input latency or minor image quality losses" type games can be better than a native 50-80.)

    Yep, this here. It's why the comparison between FSR4 and DLSS 4/4.5 is such a nothingburger. All are usable and provide a damn good experience.

    Much like raster is mostly a solved problem, upscaling IMO is totally solved at this point and any improvements will be so minor in speed or quality that you won't be able to tell one tech/algorithm from another... which is fantastic! My only asks for next gen are:

    1. Better looking frame gen - I should need slow motion and pixel peeping to tell real from generated!
    2. Lower latency for frame gen - Begin calculating them before the current frame is even complete, and if possible, offload the task to a dedicated chip / on-die chiplet so that shaders can stay focused on shading and cache doesn't have to keep swapping back and forth between the two!
    3. Faster path tracing - it needs to be usable even on affordable midrange cards, not limited to those who can afford to spend hundreds or thousands of dollars!

    E/M/L?

    It's DLSS presets.

    Oh, that's news to me, I just use default and quality/balanced

    DLAA/Quality/Balanced/Performance/Ultra Performance are the so-called PerfQuality modes. Letters from A to M represent the DLSS presets created so far. The new DLSS (marketing version DLSS 4.5, library version 310.5) added presets M and L. As of the latest version, Nvidia uses preset K (DLSS 4 / DLSS 310.2) for DLAA/Quality/Balanced, preset M for Performance, and preset L for Ultra Performance. Presets E and F were the latest and best presets before DLSS 4 came around, E being sharper and F blurrier but with better anti-aliasing.

    Even though games default to specific presets for specific PerfQuality modes, you can combine any preset with any mode via Nvidia App or Nvidia Profile Inspector. For the comparisons I linked, I selected DLAA in the game and then forced specific presets in real time via OptiScaler. Having a 2080 Ti, I have enough performance to play games at FHD native (DLAA is the native mode of DLSS), which is why I tested the new DLSS 4.5 presets at native as usual. Well, you can see how insanely heavy they can be for 2000 series cards.
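    To summarize the defaults described above as a quick sketch (the mapping below is just what this comment states, not something pulled from official Nvidia documentation):

```python
# Default PerfQuality-mode to preset mapping as described in the comment above
# (DLSS 4.5 era). Reader's summary, not official documentation.
DEFAULT_PRESET = {
    "DLAA":              "K",
    "Quality":           "K",
    "Balanced":          "K",
    "Performance":       "M",
    "Ultra Performance": "L",
}

# Tools such as Nvidia App, Nvidia Profile Inspector or OptiScaler let you
# override these pairings, e.g. forcing an older preset like E or F, or
# running preset L with a mode it isn't defaulted to.
print(DEFAULT_PRESET["Ultra Performance"])   # -> "L"
```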

  • But isn't DLSS 4.5 meant for performance and ultra performance upscaling?

    Regardless of what quality level that guy on Twitter says they are meant for, a preset really just defines a bias for the various tradeoffs the model needs to take into consideration.

    For example: a model may be faced with a choice to either greedily hold onto historical samples, which risks ghosting, or aggressively reject dodgy historical samples and risk undersampling. Higher vs lower upscaling ratios may favor one over the other (rough sketch after this comment). The presets also clearly differ in compute demand; L takes much longer than M, which makes sense: ultra performance absolutely dumpsters the internal resolution, so the GPU should have compute headroom to spend on upscaling, and with an internal resolution that low, careful evaluation of each sample is really important.

    In practice for any given quality level, L if you want more quality, M if you want more performance.
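    A rough way to picture "a preset is just a bias over tradeoffs" is below. The fields and numbers are invented for the sketch; real DLSS presets are opaque network variants, not a config struct like this.

```python
from dataclasses import dataclass

# Hypothetical illustration of a preset as a bias over tradeoffs.
# All fields and values are invented; they are not real DLSS parameters.

@dataclass
class Preset:
    history_retention: float    # higher = hold onto more history (risk: ghosting)
    rejection_threshold: float  # lower = reject dodgy history sooner (risk: undersampling)
    relative_cost: float        # rough compute demand per output pixel

PRESETS = {
    "M": Preset(history_retention=0.85, rejection_threshold=0.30, relative_cost=1.0),
    "L": Preset(history_retention=0.95, rejection_threshold=0.15, relative_cost=1.8),
}

# Ultra Performance renders so few input pixels that spending the extra compute
# of something like "L" on careful per-sample evaluation can be the better tradeoff.
print(PRESETS["L"])
```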

    I think L is the only preset that's only relevant for UP but I could be wrong

  • I'd rather have the option to use a slower FSR4 than be stuck on FSR3.

    Real. I'm using FSR4 INT8 with OptiScaler on Where Winds Meet and Cyberpunk, and it's a game changer.

  • Ngreedia is a joke! Don't give the stage to the scumbags

  • Same story as FSR4 FP8 on RDNA 3 and 2 cards. I'd rather still have access to it than not.

    Agreed. Having the option would be better even when the performance gains are modest.

    On Windows you can swap a .dll and on Linux you can tell proton to use FSR4 on RDNA3. (I don’t know if RDNA <3 works.)

    The thing is that DLSS is being marketed, and marketed unfaithfully. This is what makes it not the same story.

    FSR4 doesn't run on FP8 on RDNA 2 and 3, it runs on INT8. And while the performance hit over FSR3 is a bit bigger on the older cards, it isn't slower than native, unlike DLSS 4.5.

    You can run the FP8 version, but the Quality setting will be slower than native.

  • Nvidiots don't care.

    at least they're given the choice to use it, while AMD won't even let your expensive ass XTX use FSR4.

    Yeah, but I can understand the reasoning somehow, as FSR4 is built for their new AI cores. It can work with older hardware, but the performance impact is higher.

    And the issue is that everyone would go "FSR4 bad performance, Nvidia ftw" if they did that, or had done it from the beginning.

    I mean, honestly, AMD gets shit for nothing and Nvidia can basically do whatever they want and will be supported and defended.

    There are some braindead AMD fanboys too, but by far not as many as on the green side.

    I mean, look at dlss. They cap it at every generation, so the newest one needs the newest GPU. Like nobody cares. But AMD does it and it's the end of the world. And DLSS can run on the older cards without issue, as was proven in the past.

    Really, it's like the GOP and Dems. The GOP can do whatever it wants and nobody cares. Meanwhile, Obama wears a tan suit and the media goes batshit over it for months.

    FSR 4, DLSS 4.5... same diff. Both cases of "the old cards' pipelines are FP16 and can't run our new FP8-based upscale as quickly as the new cards."

    As for DLSS being "capped", frame gen is restricted based on the hardware's ability to perform it, but I applaud Nvidia for giving upscaling to every card since 2018, even when they can't run it well. Personally I think the sweet spot for both companies should be "off by default, but able to be turned on (with a warning that your hardware wasn't designed to run it and it will incur a significant performance penalty) in the driver settings."

    Yeah, but I can understand the reasoning somehow, as FSR4 is built for their new AI cores. It can work with older hardware, but the performance impact is higher.

    Almost as if it's the same story with DLSS 4.5.

    Oh, but there is. Look at this post and all the people with XTXs rushing to talk about native, when it's been clear for years that quality upscaling (DLSS 3+ and FSR4) negates bad native TAA. And most games have bad TAA. So not only do you fix native (mostly; it ain't perfect, I agree), but you're also getting a big performance enhancement. Admittedly, DLSS 4.5 is not a direct K model replacement; it's more of a "making PT+FG better" thing.

    And for an AMD user to be making fun of this with a fucking XTX is so moronic I can't believe it. He is so deep into the cope that he is oblivious to his own idiocy. But he will get an ML upscaling GPU eventually, and I promise you he will take a big gulp and be like "oh damn..."

    I know I did. And the literature/critical consensus agrees.

    And I know this because I had an XTX, sold it, and got a 5070 Ti, so I can very much attest to the fact.

    And for an AMD user to be making fun of this with a fucking XTX is so moronic I can't believe it. He is so deep into the cope that he is oblivious to his own idiocy.

    Don't make me tap the sign:

    r/AyyMD

    We are a satirical PC hardware community dedicated to proving that AMD is clearly the better choice. Everyone is welcome, including non-AMD fanboys.

    Don't want to burn your house down with Novideo GPUs or Shintel CPUs? Then AyyMD is the right place for you! From dank memes to mocking silly Nvidiots, we have it all.

    But dude, you didn't even make a joke. What's the joke? If anything, this is an ironic joke about how shitty AMD is, because they won't even give you the option to use the technology, even if it doesn't handle well on older gens.

    Like, can't you see this is not actually a joke? Again, what's the joke; new technology works badly on old hardware but has better quality? Like you're clearly just trying to portray DLSS 4.5 in a bad light.

    The joke: a performance-boosting feature giving lower performance.

    Like you're clearly just trying to portray DLSS 4.5 in a bad light.

    Exactly right! At least that part came through.

    Upscaling is not JUST a performance enhancer; it also benefits visual quality, and DLSS 4.5 specifically improves motion clarity (greatly), which makes FG work a lot better. But yeah, I can see it from an XTX user's side: haha, Nvidia bad.

    Unfortunately for you, it's simply ignorant and ironic.

    Edited for better etiquette.

    Please don't break rule 1 or you might get banned.

    Fair I edited for better etiquette.

    No point explaining anything lol. You don't see the irony of OP owning a 7900 XTX, their last flagship GPU, and being forced to buy a next-gen GPU that's only mid-tier (and worse in some scenarios) if they want to use their FSR Redstone anti-aliasing. That'd be like owning a 4080 Super and being forced to buy a 5070 Ti for DLSS 4.5.

    Wow what a word salad.

    Also, a 4080 Super runs DLSS 4.5 better than a 5070 Ti.

    It isn't a question of it not handling it well; the older GPUs don't have FP8 execution cores. They aren't physically capable of executing the code.

    The INT8 version that can be used is still in development. I would agree that it should be released for use when it's ready, but this was always something that was going to happen if AMD switched to ML based upscaling. AMD didn't build it into the architecture from the start like nVidia did.

    > "FSR4 bad performance, Nvidia ftw" if they did that, or had done it from the beginning.

    Yeah, literally this thread. People shitting on dlss4.5 being worse on older cards that have weak AI accelerators.

    This just isn't true; Nvidia also locks parts of DLSS down to newer cards, like MFG, which only works on 5000 series cards.

    What? I said nothing untrue.

    I didn't say anything about MFG, because AMD isn't offering that, so a comparison can't be made.

    I don't need to, or want to tho. If I want more fps for worse image I'll drop settings lol.

    And you won't have great anti-aliasing, only blurry TAA

    You do know that all scaling effects are based on TAA.

    Also, I just turn AA off. It's not like I'll notice "jagged edges" on a 1440p screen...

    DLSS4 and FSR4 are much better than TAA.

    Also, I just turn AA off

    Haha, good luck!

    Perhaps that's poor eyesight, try getting glasses maybe?

    I have glasses. I honestly never really noticed jagged edges even at 1080p; however, I'm super sensitive to motion clarity and get annoyed by frame drops.

    good luck in games that don't have real antialiasing, TAA/TSR/FSR3 AA makes XeSS AA look good lmao

    This is the thing, DLSS upscaling is only great because UE5's TAA is ass, most often configured wrongly. It's not hard to look better than dog awful, and it immediately gets the praise of "better than native quality".

    Having AI upscaling is only the latest crutch to ship poor graphics and have it fixed in post processing. Threat Interactive (YT) has this topic covered in depth with benchmarks and visual comparisons.

    not sure why they hate you. You're right lol

    also...

    bojler eladó (boiler for sale) :P

    Threat Interactive is a grifter with poor knowledge on literally anything

    From his UE5 render pipeline breakdowns it is clear to me he knows more than I do, and I at least know a bit about it, having developed games.

    To my knowledge, there are technical reasons why they haven't shipped FSR 4 on RDNA3. (Not that it stops you, since on Windows you can do a .dll swap, or if you're on Linux you can tell proton to use FSR4.)

    technically yes, but the reasons are bullshit.

    We have a leaked, working INT8 version of FSR4 on RDNA 2 & 3. There is a performance loss compared to the FP8 version, but it isn't nearly enough to justify discarding it completely. Being able to use FSR4 anti-aliasing instead of TAA/TSR/FSR3/XeSS anti-aliasing would be huge.

    You can take a look for yourself.
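    For anyone wondering what the INT8 vs FP8 distinction means in practice, here's a toy sketch of symmetric INT8 weight quantization. It's illustrative only; FSR4's actual implementation is not public.

```python
import numpy as np

# Toy symmetric INT8 quantization of a weight tensor (illustrative only;
# FSR4's real quantization scheme is not public).
weights = np.random.randn(4, 4).astype(np.float32)

scale = np.abs(weights).max() / 127.0                      # map the largest magnitude to 127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale                 # what the math effectively operates on

print("max abs error:", float(np.abs(weights - dequantized).max()))
```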

    And that’s worse than not giving FSR4 support to previous gen cards that can run it right? 😭

    Salty much?

    They can't run it; none of them have FP8 support.

    Yea lol it's about their cash btw.

    Probably a bug lol. You have to force enable it right now so most don’t even use it.

  • RDNA3 users coping because AMD still wouldn't release FSR4 for RDNA3.

    What they don't understand is that when you have an older RTX GPU, you still have the option to choose between DLSS 4 and 4.5.

    If you are happy with the image quality of 4, you can stick to that. If you want a better image quality and are okay with losing some performance, you can choose 4.5.

    The key word here is CHOICE. Something AMD did not give their RDNA3 customers.

    Also, many people forget that with 4.5, the image quality on Ultra Performance and Performance is close to Balanced and Quality on DLSS 4. Not the same, but close enough, with an added performance boost.

    I don't like Nvidia as a corporation, but when it comes to supporting older GPUs, they are the better of the two brands.

    RDNA3 users coping

    Me when I forget I'm on a satire sub:

    /uj I agree, having the choice is better in general. But from what I have seen of some modded FSR4 versions on RDNA2 and 3, they were still faster than native. I'm sure there are exceptions; DLSS 4.5 isn't always slower than native, either. So releasing a performance-degraded version vs not releasing a less-performant version is, I'd say, a toss-up.

    You people have never used DLSS and it shows. The image quality with DLSS tends to be better than native because of its DLSS-based AA implementation, at least in newer games using TAA. So producing better image quality sacrifices performance, as compute power is not infinite.

    This is what happened with FSR4 and FSR3. FSR3 is faster than FSR4, but nobody on RDNA4 ever said "hey, let me use FSR3 instead because it performs better." 🤣

    The key point here is that you can choose whichever version of DLSS suits your needs. Again, something that AMD refuses to give their customers.

    I can't read the last part of the meme, too many jaggies.

  • there's a huge vram usage increase too on older RTX cards

  • Same old story of nerfing cards with drivers by Nvidia.

    Was done with the 500 and 700 series, after the poor upgrade in performance with the 600 and 900 series.

    And at the time the excuse was a new anti-aliasing solution, PhysX, and GameWorks.

    It's not nerfing old cards, it's just the case that older hardware wasn't made with the new tech in mind because that new tech didn't exist yet, and thus the old hardware might not be capable of running it. Is it more right to take the AMD FSR 4 approach and driver-limit the new upscaler to only hardware that can run it? Is it more right to take the Nvidia DLSS 4.5 approach and allow the new upscaler to run on any hardware, even if it harms the experience?

    Both have their pros and cons. Personally I'd prefer the option even if I intend never to use it, but I'm more of a power user, so I can see how it might be a better idea not to allow owners of older hardware to unintentionally harm themselves.

    Also, the idea with the new presets is that you go down a step further in resolution: 4.5 Performance should look better than 4.0 Quality, and thus they should not be compared 1:1. Even then, the old cards are often seeing a performance regression; it's really hit-or-miss with them, depending on the game and the max resolution you're trying to reach.

    As I said, as always, Nvidia is using a technology with a very low margin of benefit as an excuse to sell the new architecture and nerf the previous one, by deliberately making that technology the default inside the driver, or tricky to disable.

    PhysX in the late 2000s shifted workloads toward newer GPUs and off CPUs.

    GameWorks in the 2010s often hit older cards harder due to optimization focus moving on.

    Today it’s AI features (DLSS, frame generation, denoising), which increasingly assume newer hardware paths.

    It's not a conspiracy; it's making the new path the default and letting older GPUs absorb the overhead. That's not accidental, and it's why people call it soft planned obsolescence.

    Are we still talking about DLSS 4.5? Or are you shifting the conversation to RTX 5000 as a whole? Or to Nvidia's past market practices as a whole?

    Because if it's about RTX 5000, then I agree: there was no reason to even release the architecture when they could have held onto its meagre advances, like frame-flip pacing, until they had enough of a performance / architecture increase to better justify launching a new lineup.

    And if we're talking about Nvidia's business dealings, then I agree they've got a history of scummy behaviour.

    But if we're still talking about DLSS 4.5 running on old cards, then nothing is limited. Every RTX GPU can run it... however the older generations, which lack hardware support for native FP8, cannot run it well. And seeing as we haven't invented time travel to go back to 2018 to replace FP16 pipelines with "FP16 or 2xFP8" pipelines, or thwart the laws of physics to make those FP16 pipelines suddenly double in speed, there's sadly nothing Nvidia or AMD can do to make their older gen cards run the new FP8-based algorithms swiftly enough. Advances in performance aren't planned obsolescence. Your argument is invalid.

    however the older generations, which lack hardware support for native FP8, cannot run it well. [...]

    there's sadly nothing Nvidia or AMD can do to make their older gen cards run the new FP8-based algorithms swiftly enough.

    Why do you take it for granted that each generation must introduce backwards incompatibility? Everyone would lose their minds if Intel or AMD added new CPU instructions every generation. Code compiled for 64-bit x86 works just the same on a CPU from 20 years ago as on one released this year.

    But for some reason this gradual obsolescence is fine in GPU land? Utter BS!

    Nvidia and AMD have the engineering know-how to build lasting hardware. But they are not incentivized, and they do not choose to do so.

    Old CPUs can run the new instructions, they just run them slowly on full-fat interpretive pipelines instead of having the pre-optimized path that new CPUs have. And similarly for Nvidia, their old GPUs can run it but run it slowly if you choose to do so. It's only AMD who locked out older hardware from running the hot new upscaling algorithms.

    But none of this is really planned obsolescence, because all of the old GPUs still have and will always have access to the upscalers that they had on release. It's not like the GPUs suddenly can no longer perform upscaling if they are locked out of the latest and greatest. It doesn't even really apply 1:1, because a 3.0 upscale will work just as quickly even if the game was optimized for FSR 4.0 / DLSS 4.5. Whereas in the CPU space, if new software is made with only the newest instruction set, then old CPUs cannot simply fall back to old instructions - they must chug through what they were given.

    It's only AMD who locked out older hardware

    I remember when RTX voice launched and GTX cards were locked out. Yet, it ran great on 1060s if you tricked the software.

    But none of this is really planned obsolescence, because all of the old GPUs still have and will always have access to the upscalers that they had on release.

    Blackwell has introduced FP4 and FP6. I believe the next generation of DLSS models will similarly not perform well on Ada. It is not happenstance that newer data types / instructions are not backwards compatible.

    Down to the last transistor on the chip and the last byte of firmware, it is by design. Nvidia could have widened FP8 pipelines in a new generation, and compatibility would have remained. This is just one example of the many knobs and dials the architecture engineers have at their disposal.

    But rather than getting lost in the infinite design space, ask the question: would Nvidia pass up an opportunity to incentivize users to upgrade sooner?

    But again, I thought we were talking about the present, not the past?

    I fault them for not including enough VRAM and for bumping prices to the moon (including naming chips in a predatory manner: "but it's a 70! Don't look at the size of the chip, it's a 70! Pay us a 70 price now!"). But I can't fault them on the DLSS front, where they've given each new advance away even to 7-year-old cards.

    I see DLSS as their primary tool to get away with selling smaller chips at the same or bigger prices.

    And as discussed, the latest versions suffer increased penalties the older your hardware is. Having them on older gen is better than nothing, but worse than if your card performed regardless of upscaling / framegen tech.

    Or another way to think about the same process is buying at full price, but only getting the full performance with the right software. Relying on DLSS or FSR for the advertised performance means Nvidia / AMD has more control than they ever had over how their products perform, through software compatibility.

    They could decide any day that there is a new direction and that support ends with the next generation (similar to what happened to PhysX), and consumers would be powerless to do anything about it.

    Jensen Huang was saying just a few years ago that Moore's law is dead, yet as soon as the AI space started to bubble, he was saying progress is exceeding Moore's law. It is a lot of hype and not much substance.

    Yesteryear, it was ray tracing that everyone was supposed to obsess about. Ray-traced shadows, reflections, global illumination, all that jazz. But looking back, was the change in the image that revolutionary? Now we are at AI-based upscalers and frame fillers, arguing about the same thing. Looking back five years from now, will this be laughable? I don't know. But I'm deeply distrustful when companies try to convince me what is good for me.

    This could be said of driver software going all the way back to the 90s. DLSS / FSR may be further from the metal, but they're still just required software to get the full features from the hardware.

    I was there when Nvidia nuked the performance of 700 series cards by like 25% while the 900 series only saw like a 2% loss. It was so obvious what Nvidia was doing, but people then, and even today, refuse to see it.

    Nvidia has been caught many times before nerfing old cards to push people into new gen stuff, just harder to see these days due to them encrypting their drivers since the 500 series or so.

    Got any sources for this? Never felt that my 780ti's got worse after updates.

    Just Google the whole Witcher 3 situation with the 780 vs 980 Ti and GameWorks.

    It's pure copium / youtubers that do clickbaity and misleading benchmarks without the hardware.

    No, really, I've seen videos that claimed things like that, and when I tried it, it turns out that it had the same perf.

    Never trust random benchmarks on YT, especially if they never show the hardware, or if they are a channel with 50k subs that somehow has every single GPU from the last 10 years benchmarked.

    Hardware unboxed is one of the largest tech channels on YouTube lol

    He hasn't done that lol

    I'm talking about the videos like "OwO nvidia just slowed down old GPUs with the 570.81 driver" blablabla

    It's completely optional. What are we actually doing here?

  • New software being optimized for new hardware isn't the gotcha you think it is.

    But the performance diff to native even on new cards makes the upscaling kinda pointless, no? 106 fps to 110 fps in Cyberpunk is incredibly underwhelming.

  • Not really sure what's up with these comments. Would you rather have the model not be runnable at all on older hardware? The new model uses FP8. It is also a much larger model. In FP8, a substantially higher number of operations can be run in the same time (even just looking at memory throughput, but the hardware itself also needs less silicon per operation). Except: if there is no hardware support, it has to be run on slower FP16 hardware.

    Only so much you can do in a given compute budget.
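    To make the compute-budget point concrete, here's a rough back-of-envelope sketch. Every number in it is a made-up placeholder, not a spec for any real GPU or DLSS model; the point is only that if FP8 work falls back to FP16 paths and effective throughput roughly halves, the same model eats a bigger slice of the frame budget on older cards.

```python
# Back-of-envelope illustration only: all numbers are hypothetical placeholders,
# not measured specs for any real GPU or upscaling model.

def upscale_cost_ms(model_ops: float, fp8_tops: float, has_fp8: bool) -> float:
    """Estimate per-frame upscaler cost in milliseconds.

    model_ops -- operations the upscaling model needs per frame (hypothetical)
    fp8_tops  -- card's FP8 throughput in tera-ops/s (hypothetical)
    has_fp8   -- whether the card has native FP8 execution units
    """
    # Without FP8 units the same work falls back to FP16 paths; assume that
    # roughly halves effective throughput (a simplification).
    effective_tops = fp8_tops if has_fp8 else fp8_tops / 2
    return model_ops / (effective_tops * 1e12) * 1e3

MODEL_OPS = 4e11                  # hypothetical op count for a "bigger" transformer model
FRAME_BUDGET_MS = 1000 / 120      # ~8.3 ms per frame at 120 fps

for name, tops, fp8 in [("newer card", 700, True), ("older card", 300, False)]:
    cost = upscale_cost_ms(MODEL_OPS, tops, fp8)
    print(f"{name}: upscale ~{cost:.2f} ms out of an ~{FRAME_BUDGET_MS:.1f} ms frame budget")
```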

    Most of the crawlers on these forums have a paltry understanding of architecture and click for the bits and bites not knowing bytes are bits.

  • Remember where this is going and what the point of it is: frame generation uses fewer transistors than real rendering. Nvidia is starting down the path of decreasing native GPU performance and relying on AI. In the future you won't even be given the option; you will just have an AI card that slops to your screen.

  • Oh no. The shitty blur generator got worse. Anyway..

  • It's because the new transformer model is meant for 40 and 50 series cards; Ampere lacks the architecture to run it efficiently. There's enough stuff to shit on Nvidia for without needing to make stuff up or feign ignorance.

    It's because the new transformer model is meant for 40 and 50 series cards; Ampere lacks the architecture to run it efficiently.

    Notice the huge performance drop even on the 50 and 40 series compared to DLSS 4.

    CP2077 on RTX 5070:

    Native: 106

    Quality DLSS 4: 126

    Quality DLSS 4.5: 110
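    For the numbers quoted above, the relative uplift works out like this (quick arithmetic check):

```python
native = 106
results = {"Quality DLSS 4": 126, "Quality DLSS 4.5": 110}

for name, fps in results.items():
    uplift = (fps - native) / native * 100
    print(f"{name}: {fps} fps, {uplift:+.1f}% vs native")

# Quality DLSS 4:   126 fps, +18.9% vs native
# Quality DLSS 4.5: 110 fps, +3.8% vs native (the "~4%" mentioned below)
```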

    Yeah, because DLSS 4.5 is harder to run than 4.0? Am I missing something here?

    Yeah, dropping render resolution and increasing response time for ~4% extra fps even on the latest architecture? What a long way we have come from DLSS 1.0

    ~4% extra fps with AA that surpasses anything native can do. Sounds like a good deal to me ngl

    More like,

    Native: 106
    4.0 Quality: 126
    4.5 Balanced: ~115 (looks better than 4.0 Quality)
    4.5 Performance: ~130 (still looks better than 4.0 and a slight perf increase)
    4.5 Ultra Perf: ~150 (looks only slightly worse but gives a good boost to perf)

    Besides, 4.0 is still there if you prefer the fps over the image quality which... I kind of do. 4.0 still looks amazing and I don't feel like the visuals of most games are in any way hampered. I never notice ghosting, etc. But to each their own; having more options to choose from is never a bad thing.

  • Love this meme format!

  • What about the 5060 👉👈

  • I mean that is how antialiasing used to work sooo

  • Nvidia really needs to add some info boxes on the DLSS model selection screen explaining what the different models are made for. They explained all of it elsewhere.

    There are too many uninformed people posting pics like this unironically and being dumb in general.

    In case you didn't know what is happening here: the new model utilizes hardware only available on the 40 series and later, but Nvidia still allows you to run it on older GPUs. The new models L and M are also made specifically to be used with DLSS Performance and Ultra Performance, which are best used for 4K. For Quality and Balanced, the older model K is still typically the better choice in terms of the performance-quality tradeoff.

  • This is clearly being done so that people who don't know just turn it on and feel like they need to upgrade sooner. But everyone is saying "good guy Nvidia, bad guy AMD" because AMD won't make an official FSR4 release for older cards, even if the performance would be trash.

    AMD! Why is my Radeon HD 7870 not getting Redstone support?! AMMMDDD!!! :@

  • How the hell did the 1% lows go down too?

  • I mean, it relies on hardware components that older cards have little of...

    I have no idea why everyone is ignoring the meme's second panel.

  • What is the use of rendering at a lower resolution if you are going to get worse performance?

  • Funny thing is that games like the Oblivion Remaster just look plain bad without an AA crutch at native resolution, so using super sampling is required just so they look decent... Forget going backwards, we are free falling.

  • I hate that I was forced to buy an RTX 5060, the best low-profile option currently available.

    I had been waiting for years for a new low-profile GPU from AMD, but they dropped the ball again, and my old RX 6400 just doesn't cut it in 2026.

  • I turned off frame generation anyway. It's worse most of the time. I can't describe exactly why, but I'd rather have a lower native FPS.

  • This sub is so hilarious because y'all are mad at Nvidia for making the option available to users of old GPUs, but are ok with AMD not making FSR4 available to older GPUs at all? Come on, there's so much to criticize Nvidia for, but this ain't it.

    Like, how's this a negative thing? It's literal proof the older cards can't run the new models as well, and yet they still made it available, while AMD left the RDNA3 and older cards with the shitty FSR3.

    but are ok with AMD not making FSR4 available to older GPUs at all?

    Who says that? AMD should give official support (besides that accidental INT8 version leak).

    Not taking shit from either company is the right thing for us, consumers, to do. I'm mad at nvidia mainly for their marketing and entrenching their proprietary tech, building a walled garden. RTX, DLSS, FG, MFG, Ray reconstruction, Neural textures etc. These are today's HairWorks and PhysX.

    And we have seen with 32 bit PhysX how they can drop support whenever they want to for their closed source tech and leave the user hanging. (The backlash made them add back support. So that's a win.)

    They can't just drop DX11 support or create DX13, but they can play with their own libraries however they want to. And what they want is to convince you to upgrade, even when offering the same perf/$ for generations.

    And AMD's Radeon isn't innocent either. They gladly follow this trend too, fragmenting the graphics space with even more incompatible implementations. Leaked memos confirm they axed promising initiatives in favor of copying Nvidia. What has AMD come up with since Nvidia started pushing ray tracing? All they have done is waste resources developing the same shit many months later.

    Imagine the gaming landscape if this were all standardized. It wouldn't be the first time these companies cooperated, either; that's how Vulkan was made!

    >This sub is so hilarious because y'all are mad at Nvidia for making the option available to users of old GPUs, but are ok with AMD not making FSR4 available to older GPUs at all?

    Nah, half the posts on the AMD and Radeon subs are people being pissed off and/or something to do with "look, DLSS 4.5 isn't that good!" Nvidia gets talked about more in those subs than actual AMD stuff; the other half is usually troubleshooting or "use OptiScaler" posts.

    100% rent free lol

  • It's obviously because the ML cores can't handle it. It's a decent upgrade for DLSS overall.

    It's a decent upgrade for DLSS overall.

    CP2077 on RTX 5070:

    Native: 106

    Quality DLSS 4.5: 110

    Try to understand: it's a model for Performance mode with the same quality as the 4.0 model in Quality mode.

    Try to understand this is a satire sub for mocking nvidia and intel.

    dlss 4+ at Balanced looks better than older models at Quality

    The closer they get to native quality, the closer they get to native performance.

    I've seen good things about preset L though. It's by far the most expensive to run, but it somehow against all odds looks fantastic, and the offset of "expensive upscale" can be worth it when there's a massive gain from running 720p rather than 4K, or 480p rather than 1440p. Blurry in motion, though.

  • So weird, now that 4.5 is out I'm not seeing that one guy who kept posting "nVidia W" and "nVidia crushes AMD" in all the AMD and Radeon subreddits.
    Even when it was pointed out that performance degrading to worse than native on 20 and 30 series cards does not constitute a "W" just because they make the tech available, the argument was that it was VRAM limited and tested on the wrong cards. Bro, how is a 3090 going to be VRAM limited in these scenarios, exactly?

  • How is it that the shitpost AMD community has more rabid AMD fanboys than the actual AMD communities? Are you all lost?

  • Rule of thumb:

    - Run everything at native resolution
    - Lock the framerate as your target (60 etc.)
    - Motion Blur OFF
    - Chromatic Aberration OFF
    - Vignetting OFF
    - Sharpen with ReShade
    - Set the best quality according to your hardware specs

    • Run it at native / High to see how it should look, along with the settings you prefer, e.g. turning off chromatic aberration.
    • Slowly lower settings to see what can improve performance while still looking as good as native, or at least as good as you can accept without ever thinking "this looks bad."
    • Turn on best-quality upscaling and play for a short time, just a minute or two. Note that it looks exactly the same as native. Slowly turn up the upscaling amount until it starts to look worse than native. Back up by 1 so you're back to the strongest upscale that still looks like native.
    • If you don't have the performance you desire, you'll have to accept a worse image, and it's up to you to find a balance of reduced settings / cranked upscaling that works for you.
  • So the issue that Nvidia has is that they have fewer data centers to dedicate to gaming.

    DLSS works by analyzing common frames and assets, predicting them, and then generating fake frames based on that generation...

    The issue is we're running out of data centers as the AI bubble grows; they don't have power.

    So what's nVidia going to do? Train on gaming which is less than 10% of their customer base, or push more resources towards the other 90%?

    It's likely to improve with updates... slowly.

    If it improves at all; I'm sure it's at least going to be on par with the last gen DLSS.

    But that all said: AI crash can't come soon enough.

    We do not have enough Power in the US

    We do not have enough demand for AI

    AI Assistants and AI Chatbots are falling apart as they run into the data-center restrictions...

    The only thing still improving is AI Art Theft, which is getting better at reproducing (read: stealing) art styles as it data-scrapes (read: steals) more and more image assets (read: intellectual property) and navigates the US copyright system (read: abuses the DMCA).

  • Not sure why HWUnboxed didn't include Model M Performance benchmarks alongside the Model K Quality preset. No Model L benchmarks either which is pretty jarring for a video titled 'DLSS 4.5 vs DLSS 4 Performance'.

  • So the interesting thing I saw is that DLSS 4.5 performance in cyberpunk looks better than DLSS 4 quality AND runs better. If you use it like that maybe it's worth it?

    But that would contradict his narrative.

  • How well does the FSR4 work on RX7000 and 6000 series?

  • The same thing but worse happened with the 7900 XTX and FSR 4. Except that DLSS of course looks way better still.

  • I mean, you can probably map new Balanced to old Quality or better, and the comparison will be more interesting. If you are really interested and not just hating, wait for Tim's analysis or Alex from Digital Foundry.

  • DLSS 9.0 = all frames are fake, the real game looks like PS1, but with DLSS it looks like The Witcher 3... The goal is that you, as a dev, can make games that look like crap and get the same money out of them as if you had made a game like The Witcher 3, because the users pay for the look with their GPU.

  • They literally said these models are for performance and ultra performance

  • I mean those cards physically don’t have FP8 acceleration. It’s literally the same sort of penalty with the Int8 leaked build of FSR4, but you guys love talking about that version even when it gets worse performance.

  • Jokes aside, at least they have a choice while we, rdna 3 users, have nothing... shame

  • Wait so my 4080 is actually competing with a 5070?????

  • Because the new 4.5 models are meant to be used with Performance and Ultra Performance respectively, not Quality.

    I have no idea why so many people are ignoring the meme's second panel (or that this is r/AyyMD, a satire sub). Yes, we know why preset M from DLSS 4.5 is slower, thank you.

  • Lmao nvidia probably just created an AI bot net and performance drop is because cards are processing tokens for Copilot or something.

  • AMD users big mad they can’t use FSR4 on their 3 year old cards 💀

  • Probably hub benchmarks again

  • [removed]

    I'm sorry if my meme post caused you distress.

    Memes are supposed to be funny

    Yes, and 19 out of 20 people think it is

  • Don't own shit old cards, simple as that

    3090 and 2080ti shit cards, lol

    A 4070Ti is more powerful than a 3090 so my point stands.