Perhaps an open discussion on reasonable VRAM usage for the visual return is due - after talking with @HeyItsThatDon about Diablo 4 and reading the @ComputerBase article, I am scratching my head. These are the textures you need to use to get a stutter-free experience on a 12GB GPU: (1/5)
IMO, textures like those in the screenshot above from ComputerBase and the one below from @HeyItsThatDon make me wonder why these are what you need for a 12GB GPU. They honestly look lower res than games from a decade ago! Surely we can expect better... (2/5)
As a contrast, here are screenshots from The Witcher 3 Complete Edition at a Diablo 4-style camera angle, where each screenshot uses 8 GB of VRAM or less at 4K (no RT on): much higher-fidelity textures here in TW3 with far more reasonable VRAM utilisation. (3/5)
In Diablo 4, by comparison, the textures in the screenshot below are what is required to get a stable experience with *12 GB* of VRAM -> surely these Doom 3-level normal maps (honestly, I think they look worse) should not be what one expects from a 12 GB GPU? (4/5)
Not wanting to pick on Diablo 4 alone, but the way some games deliver questionable texture quality for their VRAM requirements is distressing. Especially since games like The Last of Us and Forspoken both saw huge post-launch patches that raised quality while using even less VRAM! (5/5)
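To put rough numbers on what high-res textures actually cost, here is a minimal back-of-the-envelope sketch in C++. The assumptions are entirely mine for illustration (BC7-style block compression at ~1 byte per texel, a three-map material set); none of this is pulled from Diablo 4's actual asset data:

```cpp
#include <cstdint>
#include <cstdio>

// Back-of-the-envelope VRAM cost of one square texture with a full mip chain.
// bytesPerTexel: ~1.0 for BC7 block compression, 4.0 for uncompressed RGBA8.
double textureBytes(uint32_t size, double bytesPerTexel) {
    double total = 0.0;
    for (uint32_t s = size; s >= 1; s /= 2)  // each mip is a quarter the texels
        total += double(s) * double(s) * bytesPerTexel;
    return total;
}

int main() {
    const double MiB = 1024.0 * 1024.0;
    double oneMap = textureBytes(4096, 1.0);  // one 4096^2 BC7 map
    std::printf("4096^2 BC7 map + mips:     %.1f MiB\n", oneMap / MiB);        // ~22.4
    std::printf("albedo+normal+rough set:   %.1f MiB\n", 3 * oneMap / MiB);    // ~67
    std::printf("such sets per 8 GiB:       %.0f\n", 8192.0 * MiB / (3 * oneMap)); // ~122
}
```

Roughly 120 full-resolution 4K material sets fit in 8 GiB by this math, which suggests raw texture resolution alone does not explain multi-GiB gaps between games; streaming pools, duplication and residency policy matter at least as much.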
Every executive/technical officer at Microsoft should watch this video. Summarized: Windows/DX12's current state is an active hindrance to responsive, smooth gaming on PC. Unlike under Windows/DX12, games on Bazzite do not suffer from intrusive shader compilation stutter. (1/4)
DX12 and Windows' default state is to leave competent shader compilation to devs, who regularly fail to do it as well as they should. That means games on PC can be hitchy, jittery messes. Microsoft and the DX team need to embrace that reality and ship a mitigation strategy. (2/4)
Bazzite avoids shader compilation stutter thanks to Vulkan shader playback from Valve's Fossilize. Why on earth is there no Microsoft-made equivalent in the Windows/DX12 environment? Ten years of DX12, and its default experience without hefty dev work is "runs like junk". (3/4)
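To illustrate the kind of mitigation I mean, here is a minimal C++/Vulkan sketch of persisting a pipeline cache across runs so known pipelines can be warmed behind a loading screen. This shows only the general caching idea, not Fossilize's actual implementation (which records and replays full pipeline state):

```cpp
#include <vulkan/vulkan.h>
#include <fstream>
#include <iterator>
#include <vector>

// Persist the driver's pipeline cache across runs so that pipeline creation
// during gameplay is a cache hit instead of a multi-millisecond compile.

VkPipelineCache loadPipelineCache(VkDevice device, const char* path) {
    std::vector<char> blob;
    std::ifstream in(path, std::ios::binary);
    if (in) blob.assign(std::istreambuf_iterator<char>(in), {});

    VkPipelineCacheCreateInfo info{VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO};
    info.initialDataSize = blob.size();  // zero on a first run: empty cache
    info.pInitialData    = blob.empty() ? nullptr : blob.data();

    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &info, nullptr, &cache);
    return cache;
}

void savePipelineCache(VkDevice device, VkPipelineCache cache, const char* path) {
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);  // query size first
    std::vector<char> blob(size);
    vkGetPipelineCacheData(device, cache, &size, blob.data());
    std::ofstream(path, std::ios::binary)
        .write(blob.data(), static_cast<std::streamsize>(size));
}

// At load time: create every pipeline the game knows about through this
// cache (behind a loading screen), then save it -- so first use of an
// effect mid-game no longer triggers a compile.
```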
We all have our preferences, but one that I could never get over in video game graphics is thanks to Doom 3.
When idTech 4 was shown off at Macworld 2001 running on a GeForce 3, I was struck by Carmack saying "...the final unification of lighting and shadowing across all surfaces in a game..."
With game objects, moving and static, being treated equally for lighting and shadowing, there were no longer obvious visual discontinuities in the game world. This was huge for me then and thereafter, as I had always known something was wrong but could not put my finger on it before.
I could always visually spot the seams in a game from how objects looked: "oh! That object is about to move, it looks different." Video games had the problem old cartoons have, e.g. the boulder Wile E. Coyote is standing next to looks completely different from the surrounding rocks.
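A toy sketch of why unified lighting kills those seams (entirely my own illustration, not idTech 4 code): the render loop simply has no static/dynamic branch, so every surface gets the same per-light shadow and lighting treatment.

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct Mesh  { std::string name; bool isStatic; };  // flag exists only to show it is ignored
struct Light { std::string name; };

void renderShadowPass(const Light& l, const Mesh& m) {
    std::printf("  shadow: %s casts into %s\n", m.name.c_str(), l.name.c_str());
}
void renderLitPass(const Light& l, const Mesh& m) {
    std::printf("  light:  %s lit by %s\n", m.name.c_str(), l.name.c_str());
}

int main() {
    std::vector<Mesh>  scene  = {{"wall", true}, {"boulder", true}, {"monster", false}};
    std::vector<Light> lights = {{"ceiling_lamp"}};

    for (const Light& l : lights) {
        // No isStatic branch anywhere: identical treatment for world geometry
        // and movable objects is what removes the "that one is about to move"
        // visual tell, instead of lightmaps for one and a runtime path for the other.
        for (const Mesh& m : scene) renderShadowPass(l, m);
        for (const Mesh& m : scene) renderLitPass(l, m);
    }
}
```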
Reading impressions of FFVII Remake online, I think I have come to the conclusion that I cannot intrinsically trust subjective reports AND that PC benchmarking perhaps needs to use frame capture analysis rather than relying on Windows-based tools:
If you started up FFVII Remake in its default mode, the first level had incredible stutter. Every effect displayed for the first time was accompanied by a long stutter. It has to be that way; there is no other possibility. The only difference is that some people do not subjectively perceive it.
Yet if you look at a benchmark online like those from GameGPU, they report nothing of this in their GPU bench numbers. Curiously, the PC Gamer article which quotes John and me mentions they loaded up the game and saw none of the stuttering in the opening.
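This is why I think capture-based analysis matters: frame times measured from the captured output expose these spikes regardless of what in-process tools or subjective impressions say. A minimal C++ sketch, using hypothetical timestamps (one per unique frame found in a capture; the input source is an assumption here):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Per-frame delivery times from capture timestamps (milliseconds).
std::vector<double> frameTimesMs(const std::vector<double>& timestampsMs) {
    std::vector<double> dt;
    for (size_t i = 1; i < timestampsMs.size(); ++i)
        dt.push_back(timestampsMs[i] - timestampsMs[i - 1]);
    return dt;
}

int main() {
    // Hypothetical capture: a steady ~16.7 ms cadence with one ~150 ms hitch,
    // the classic signature of a first-use shader compile.
    std::vector<double> ts = {0, 16.7, 33.4, 50.1, 200.0, 216.7, 233.4};
    std::vector<double> dt = frameTimesMs(ts);

    std::vector<double> sorted = dt;  // median as the "normal" frame time
    std::nth_element(sorted.begin(), sorted.begin() + sorted.size() / 2, sorted.end());
    double median = sorted[sorted.size() / 2];

    for (size_t i = 0; i < dt.size(); ++i)
        if (dt[i] > 3.0 * median)  // spike threshold: 3x the median frame time
            std::printf("stutter at frame %zu: %.1f ms (median %.1f ms)\n",
                        i + 1, dt[i], median);
}
```

An averaged-FPS number smooths that 150 ms frame into statistical noise, which is exactly how a benchmark can report nothing while the opening visibly hitches.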
Working more on the checkerboard rendering vs. DLSS 2.0 video for Death Stranding today, and I think it is time to write an open letter about DLSS 2.0 as a quality-of-life feature on PC that devs should consider if they already use TAA. Devs, please read! 🙂 (1/11 Thread)
TAA has been a great compromise between quality and speed, since graphics engines are full of specular aliasing and aliasing in motion, while MSAA and SSAA are too expensive or complex to integrate. Single-frame post-process AA (FXAA/SMAA) is just too unstable over time. (2/11)
With DLSS 2.0 we are essentially looking at a more perfect TAA, one not showing the same characteristic full-frame faults that usual accumulation TAA has, or even two-frame TAA like SMAA T2x. It is doing temporal reprojection and rejection so much better that... (3/11)
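For context on what "reprojection and rejection" means, here is a single-pixel, single-channel C++ sketch of a conventional accumulation TAA resolve. This is illustrative only; real implementations work on RGB (often in YCoCg), use bicubic history sampling, handle jitter, and tune these heuristics per engine:

```cpp
#include <algorithm>  // std::min, std::max, std::clamp (C++17)

// 1. Reproject: 'history' is last frame's color fetched at (pixel - motion vector).
// 2. Reject:    clamp history to the current frame's 3x3 neighborhood range,
//               since history outside it is likely stale (disocclusion, lighting change).
// 3. Accumulate: blend ~90% clamped history with ~10% of the new sample.
// DLSS 2.0's gain, roughly, is replacing these hand-tuned clamp/blend heuristics
// with a learned model, so it fails less often in motion.
float taaResolve(float current,
                 float history,               // reprojected sample from last frame
                 const float neighborhood[9]) // current frame's 3x3 around the pixel
{
    float lo = neighborhood[0], hi = neighborhood[0];
    for (int i = 1; i < 9; ++i) {
        lo = std::min(lo, neighborhood[i]);
        hi = std::max(hi, neighborhood[i]);
    }
    float clamped = std::clamp(history, lo, hi);  // the rejection step

    const float blend = 0.1f;  // fraction of the new frame entering the history
    return clamped * (1.0f - blend) + current * blend;
}
```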