It's good to see Zogrim's site: Physxinfo still going. Nvidia still spending resources on PhysX and Flex.
Self-explanatory, if you want to play your Geralt adventure with HairWorks, or Batman with the fancy cape and smoke, without any frame drops whatsoever on your AMD GPU.
Personally, I think it has to do with the DX12/Vulkan open approach to multi-GPU, just wildly speculating here. Micro$oft and Khronos got the call after all, from a developer point of view. AMD has been open source all along, from what I'm aware. Nvidia, well... yeah, well, $.
P.S. Who will get the little, tiny, itsy-bitsy hint I put in there?
Support plan for 32-bit CUDA
Updated 01/17/2025 01:22 AM
32-bit native compilation and cross-compilation were removed from the CUDA 12.0 and later Toolkits. 32-bit CUDA applications cannot be developed or debugged using the CUDA 12.0 or later toolkit for any target architecture. Use a CUDA Toolkit from an earlier release for 32-bit compilation.
CUDA Driver will continue to support running 32-bit application binaries on GeForce RTX 40 (Ada), GeForce RTX 30 series (Ampere), GeForce RTX 20/GTX 16 series (Turing), GeForce GTX 10 series (Pascal) and GeForce GTX 9 series (Maxwell) GPUs. CUDA Driver will not support 32-bit CUDA applications on GeForce RTX 50 series (Blackwell) and newer architectures.
Support for running x86 32-bit applications on x86_64 Windows is limited to use with:
- CUDA Driver
- CUDA Runtime (cudart)
- CUDA Math Library (math.h)
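Since only 32-bit application binaries are affected by this support plan, it can be useful to check what a given game executable actually is. A minimal sketch in Python that reads the `Machine` field from a Windows PE/COFF header (offsets follow the published PE format; the `game.exe` path in the usage note is hypothetical):

```python
# Sketch: detect whether a Windows executable is a 32-bit (x86) binary.
# Per the PE/COFF format: the DOS header stores the PE header offset at
# 0x3C, and the COFF Machine field sits 4 bytes past the "PE\0\0"
# signature. 0x014C = x86 (32-bit), 0x8664 = x86-64.

IMAGE_FILE_MACHINE_I386 = 0x014C
IMAGE_FILE_MACHINE_AMD64 = 0x8664

def pe_machine(data: bytes) -> int:
    """Return the COFF Machine value of a PE image, given its raw bytes."""
    if data[:2] != b"MZ":
        raise ValueError("not a DOS/PE executable")
    e_lfanew = int.from_bytes(data[0x3C:0x40], "little")
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    return int.from_bytes(data[e_lfanew + 4:e_lfanew + 6], "little")

def is_32bit_pe(data: bytes) -> bool:
    """True if the image targets 32-bit x86."""
    return pe_machine(data) == IMAGE_FILE_MACHINE_I386
```

Usage would be something like `is_32bit_pe(open("game.exe", "rb").read())`; a `True` result means the binary relies on the legacy 32-bit path described above.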
Probably not a big deal, as the various graphics engines probably already have their own libraries to deal with it on systems that don't have the feature, and it frees up chip space for more general-purpose/rendering hardware?