But there are still plenty of areas that I think RSR/FSR/spatial upscaling is really going to be great for.
- Playing older games on portable devices like laptops or the Steam Deck. These are games that will never support temporal upscaling, and that probably wouldn't need it on a desktop-class gaming PC anyway. It's especially useful for upscaling Steam Deck games to a gaming monitor, for example.
- VR. There are already a variety of toolkits that let you add FSR to VR games, and that's neat and useful. But the ultimate solution will be embedded in the headset itself. For native Quest apps, building in an RSR implementation would be very beneficial. For PC apps running on Quest (via Link or Virtual Desktop), rendering a lower-res image, streaming it to the headset at a high bitrate, and then performing FSR on the headset itself would be massively beneficial.
- Any kind of game streaming, where you can stream a lower-res image and upscale it with FSR on the client.
- Embedded into portable devices like Nintendo Switch.
- Might have some benefits for emulating old console games, compared to other upscaling filters.
- HDMI upscaling devices similar to the mClassic or RetroTINK, built to upscale anything from older devices to 4K. Might be interesting to combine with scanline filters, etc.
Additional thoughts:
- One of the biggest challenges for an RSR-style use case is going to be handling dynamic resolution. That especially affects the last bullet above: an HDMI upscaler could be amazing for making a Switch or Xbox One image more presentable on a 4K screen, but many games on those platforms use dynamic resolution scaling, so you'd need a spatial upscaler that's resilient to a source resolution that changes from frame to frame. I still think a spatial upscaler could do the job, but it would need a more robust algorithm.
- I've been playing with running RSR at custom resolutions, especially low ones, to get a feel for what it's good at. Many people say they would never run FSR/RSR below the ultra quality setting, but I think that misses the point a bit. Upscaling from lower resolutions won't give you a high-detail image that looks 4K, but it will give you a low-detail image that is presentable on a 4K screen. I've been running Far Cry 3 at 600p with RSR, for example, and I find myself saying "I would totally play this and love it if I were on a mid-range laptop GPU"... of course I wouldn't actually play that way on my desktop, except just to experiment with it.
- As for VR, I've been playing MSFS with OpenXR Toolkit over Virtual Desktop, and it's really great: by far the best image quality and performance I've gotten out of MSFS on my mid-range system. I'd love to be able to apply FSR/RSR to everything in VR, and I'd actually play that way. There are at least three toolkits available, but they're still a bit primitive, and you can't just have them work on everything. Also, running FSR on your computer is sub-optimal, because then you have to stream a much higher-res image to your headset. The Quest has limited bandwidth for streamed video, so some of FSR's advantages are undone by compression artifacts. It would be far better to stream the low-res image to the headset at higher quality and then apply FSR on the headset itself.
- Although at first glance RSR might seem like a perfect blanket solution for upscaling VR games, VR actually exposes the limitations of RSR. RSR upscales to your monitor's native resolution, but what does that mean when you're displaying on a headset? The VR rendering pipeline is just different. Games must use OpenVR, OpenXR, or OculusVR to determine many aspects of their rendering, including the render target, and many games then let you adjust resolution as a percentage of that target in-game. Then consider game streaming over Virtual Desktop, which has its own render target. I don't think it's possible to implement RSR at the driver level in a way that works with VR... it has to be built into the VR pipeline itself.
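To make the VR point concrete, here's a toy Python sketch (every number is hypothetical, just for illustration) of how the final VR render size gets composed: the runtime's recommended target, times a streamer's scale (e.g. Virtual Desktop), times an in-game render percentage. There's no single "native resolution" sitting at the end of that chain for a driver-level RSR to aim at.

```python
def effective_render_size(base, *scales):
    """Compose a base render target with successive multiplicative
    scale factors, as the VR pipeline effectively does.

    base:   (width, height) recommended by the VR runtime (OpenXR/OpenVR)
    scales: e.g. a streamer's resolution scale, then an in-game render %
    """
    w, h = base
    for s in scales:
        w, h = round(w * s), round(h * s)
    return (w, h)

# Hypothetical numbers: an OpenXR-recommended per-eye target,
# a Virtual Desktop scale of 0.8, and an in-game setting of 75%.
openxr_target = (2496, 2592)
print(effective_render_size(openxr_target, 0.8, 0.75))
```

The point of the sketch: each layer only knows its own input and output sizes, which is why the upscaling step has to live inside the pipeline rather than at the driver.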
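On the streaming-bandwidth point: at a fixed video bitrate, every extra pixel you stream gets fewer bits from the encoder, so streaming the already-upscaled image dilutes quality. A rough back-of-the-envelope sketch (bitrate, frame sizes, and refresh rate are all hypothetical stand-ins, not real Quest specs):

```python
def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average encoder bits available per pixel per frame at a fixed bitrate."""
    return bitrate_mbps * 1_000_000 / (width * height * fps)

MBPS, FPS = 200, 90  # hypothetical Link/Virtual Desktop stream settings

# Streaming the FSR-upscaled frame from the PC...
upscaled = bits_per_pixel(MBPS, 2880, 1700, FPS)
# ...versus streaming the low-res render and upscaling on the headset:
low_res = bits_per_pixel(MBPS, 2016, 1190, FPS)

print(f"{upscaled:.2f} vs {low_res:.2f} bits per pixel")
```

With these made-up sizes the low-res stream gets roughly twice the bits per pixel, which is the intuition behind doing FSR on the headset after the stream rather than before it.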
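And on the dynamic-resolution problem: a spatial upscaler's sampling kernel depends on the exact source-to-target ratio, and with dynamic resolution that ratio changes every frame. A game- or console-integrated upscaler can re-derive it per frame; an HDMI box only sees the final scanout signal and can't. A minimal sketch (frame sizes hypothetical) of the one piece of information the upscaler would need each frame:

```python
def sampling_ratio(src, dst):
    """Per-axis source->target ratio a spatial upscaler needs in order to
    configure its sampling kernel (cf. AMD's FsrEasuCon, whose shader
    constants are derived from exactly these input/output dimensions)."""
    return (src[0] / dst[0], src[1] / dst[1])

target = (3840, 2160)
# Hypothetical per-frame render sizes from a dynamic-resolution game:
for frame_size in [(1600, 900), (1408, 792), (1280, 720)]:
    rx, ry = sampling_ratio(frame_size, target)
    print(f"{frame_size} -> ratio ({rx:.3f}, {ry:.3f})")
```

A "more robust" external upscaler would essentially have to estimate this ratio from the image itself, which is exactly the hard part.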