Originally Posted by MrSchmelzer
Apparently, the shader caching helps with level loading times, while
many of the improvements seen in NV's latest drivers seem to
originate in better use of multithreading (that's what journalists
on some German forums have been saying). Me, I'd hope for a
decent writeup explaining those improvements, without
marketing lingo or slides from PowerPoint presentations.
To expand on the first point a bit: the fun thing about shaders is that, due to how GPU state is managed, the shader fed into D3D and bound for use is not the same shader the hardware sees. The driver will, on the fly, recompile parts of the shader depending on usage. Part of the reason is that things like vertex fetch, which originally had dedicated hardware, are now nothing more than memory reads by the GPU, so the driver will recompile a version of the shader with different fetch instructions depending on the usage pattern and the data supplied. (Everyone does this, and it's not limited to vertex shaders; all the shader stages have this system applied to them.)
What the 'shader cache' gets you is a pre-warmed set of shaders, removing the initial startup cost of generating all those combinations.
The second point, regarding threading, is something I mentioned in another thread: NV simply can't make the D3D issues vanish from their end, as much of the problem is in the runtime, which they don't control, but they can do more work to mitigate it. Chances are that, compared to something like Mantle, they are burning more CPU time to maintain the higher frame rates, which is a "good" short-term fix but doesn't address the underlying problem. Unfortunately, I'm pretty sure all the benchmarks were done on high-end systems, so this CPU cost isn't going to be exposed, and I doubt any review site will take the time to figure out the relative CPU usage either, as the end user is unlikely to care at this point.