Company: ATI Technologies
Author: Alex 'Morgoth Bauglir' Voicu
Editor: Charles 'Lupine' Oliver
Date: April 10th, 2008
ATi's new HD3870X2 video card allows for up to 8X MSAA (with 16X AA available as a special setting, detailed later in this article), using a rotated grid for 2X and 4X AA and a sparse sample pattern for 8X. An innovation ATi introduced with the R600 is the ability to use custom resolve filters beyond the traditional "box" filter:
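To make the sampling idea concrete, here is an illustrative sketch of how MSAA estimates edge coverage. The sample offsets below are a generic rotated-grid 4X pattern chosen for illustration; they are not ATi's documented sample positions.

```python
# Illustrative MSAA coverage estimation (generic rotated-grid 4X
# offsets, NOT ATi's actual hardware sample pattern): each pixel is
# tested at several subpixel locations, and the fraction of samples
# covered by a primitive determines how strongly its colour
# contributes to the final pixel.

ROTATED_GRID_4X = [(-0.375, -0.125), (0.125, -0.375),
                   (0.375, 0.125), (-0.125, 0.375)]

def coverage(inside_test, offsets=ROTATED_GRID_4X):
    """inside_test(x, y) -> True if the subpixel position lies
    inside the primitive; returns the covered fraction of the pixel."""
    hits = sum(1 for x, y in offsets if inside_test(x, y))
    return hits / len(offsets)
```

A pixel straddled by a vertical edge, for instance, reports partial coverage, which is what produces the intermediate colours that smooth the edge.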
Tent Filters: Narrow and Wide
Resolve is the stage where subpixel data (the samples we talked about) is combined to produce the final pixel color. Whilst the old "box" filter used only subpixels within the area of a single pixel, the "tent" filters can also use subpixel data from neighboring pixels. The "Narrow tent" mode takes one sample from neighboring pixels, whilst "Wide tent" takes two. If you're thinking that mixing in neighboring pixels leads to blurring, you'd be correct. The "tent" modes attempt to alleviate this by computing a weighted average of the samples, based on a linear function that decreases the weight of samples further from the pixel center, instead of the simple average customary with the "box" filter. In practice, the blurring is still quite apparent. On the other hand, their effect on edge quality is pronounced, and for moving images they're a superior solution; under certain circumstances, such as games with a very "bloomy" fantasy look (Oblivion or Overlord, for example), the "tent" modes tend to shine. Overall, they can be considered an interesting innovation, although not one that is broadly applicable.
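The distinction between the two resolve styles can be sketched as follows. This is a minimal illustration of the weighting idea, not ATi's implementation; the actual filter kernel and sample weights are not publicly documented.

```python
# Illustrative resolve filters: "box" is a plain average of a
# pixel's own samples; "tent" mixes in samples from neighbouring
# pixels, weighting each by a linear falloff with distance from the
# pixel centre (hypothetical falloff; the real kernel is undisclosed).

def box_resolve(samples):
    """Simple average of the pixel's own subpixel sample colours."""
    return sum(samples) / len(samples)

def tent_resolve(samples, radius=2.0):
    """samples: list of (color, distance_from_center) pairs, with
    distance measured in pixel widths. Weight decreases linearly,
    reaching zero at `radius`, so far-away neighbours count less."""
    weights = [max(0.0, 1.0 - d / radius) for _, d in samples]
    total = sum(weights)
    return sum(c * w for (c, _), w in zip(samples, weights)) / total
```

Because the tent weights never fully discount the neighbouring samples, some colour from adjacent pixels always bleeds in, which is exactly the source of the blurring described above.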
You'll have noticed that we didn't speak about the "Edge-Detect" mode in the above paragraph, beyond mentioning its existence. Whereas the "tent" modes are quite well documented, this custom filter is a bit more mysterious. What is known is what ATi decided to share: Edge-Detect is an advanced form of AA in which the image is first analyzed by an algorithm that locates edges (there are a number of ways this can be accomplished, and ATi hasn't detailed how its algorithm works its magic), after which a high level of AA is applied to them. In practice, the high level of AA appears to be achieved with an even wider "tent" mode, one that takes three samples from neighboring pixels (or at least, analysis of magnified screen captures suggests as much). Since it's applied only to edges, the blurring should be minimal (and, in practice, it is). The downside is that the edge-detection algorithm itself isn't cheap, and the wider "tent" mode adds its own contribution to the final cost, but the quality it produces is arguably the best currently available.
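Since ATi hasn't disclosed its detector, here is a hypothetical sketch of the general idea, assuming a simple sample-variance test as the edge detector: pixels whose subpixel samples disagree are treated as edge pixels and get the expensive wide resolve, while flat interior pixels keep the cheap box resolve, confining the blur to edges.

```python
# Hypothetical edge-detect resolve pass (the variance test and the
# threshold are illustrative assumptions, not ATi's algorithm):
# only pixels whose subpixel samples disagree receive the costly
# wide filter; interior pixels take the cheap box path.

def resolve_with_edge_detect(pixel_samples, wide_resolve, box_resolve,
                             threshold=0.01):
    """pixel_samples: list of per-pixel sample colour lists.
    wide_resolve / box_resolve: callables taking a sample list."""
    out = []
    for samples in pixel_samples:
        mean = sum(samples) / len(samples)
        variance = sum((s - mean) ** 2 for s in samples) / len(samples)
        if variance > threshold:   # samples disagree -> an edge crosses
            out.append(wide_resolve(samples))
        else:                      # flat interior -> cheap path
            out.append(box_resolve(samples))
    return out
```

This also makes the cost structure plain: the detector runs on every pixel, while the wide filter runs only where edges are found.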
Before seeing the above theory applied in real life, one peculiarity of the R6XX line that directly influences AA needs to be discussed: the absence of hardware-based MSAA resolve. All of the custom resolve magic, as well as the less trendy "box" mode, is done through shaders, whereas before the R600 it was handled by dedicated hardware inside the ROPs. Whilst all DX10 hardware can support such shader-based resolve, ATi claims to have added hardware tweaks that speed up its implementation compared to competing solutions. The catch is that those competing solutions still have hardware-based resolve, and unless forced to give that advantage up (for example, a developer using a custom resolve in a DX10 title in order to get HDR-correct AA), they'll enjoy a performance advantage. This state of affairs has caused many to theorize that AA is somehow broken in the R6XX line, and that shader resolve is a major performance limiter; this is untrue. The performance hit from doing resolve this way is rather small, except in scenarios where rendering could otherwise occur at very high framerates (hundreds of FPS), in which case it will indeed prove to be a limiting factor.