Product: ATI HD3870X2 CrossfireX
Company: ATI Technologies
Author: Alex 'Morgoth Bauglir' Voicu
Editor: Charles 'Lupine' Oliver
Date: April 4th, 2008
The History of the HD3870X2

When two little guys become preferable to a single big one

The 28th of January was a significant date in the calendar of most, if not all, GPU enthusiasts. It was the day when, after a complex mélange of rumours, leaked benchmarks and cryptic hints, ATi's (or AMD's, if you fancy saying it like that) comeback to the high-end occurred. In order to grasp why this is significant, and why we are talking of a comeback, a walk down memory lane is in order (don't worry, it'll be a short one).

ATi's woes started with the somewhat ill-fated R6XX GPU family, which was released as the 2X00XT. It was late, hot, power hungry, had less than stellar yields and, most importantly, couldn't properly tackle the high-end segment of the market. Whilst the high-end isn't exactly the most profitable of segments, the general perception created by having a part that rules there tends to trickle down to the lower ones, and thus it's important to be at least competitive on that front. Having a big part that was caught in limbo, competing with lower-high-end parts from its main competitor (Nvidia), was not exactly the stuff dreams were made of for ATi, so they got to work on fixing everything that could be fixed, because it was too early for a complete architectural overhaul.

The RV670

The result of those efforts ended up being the RV670: a relatively small 55nm chip that turned out great. It came back from the fab in tip-top shape, thus allowing for an earlier than planned release; the RV670 was cooler than its predecessor, and could be priced very aggressively. While not a high-end competitor itself, the RV670 sold quite well, and managed to erase some of the unpleasant memories the R600 had seeded. Around this time, down the grapevine came hushed rumours of an R680 part that would reassert ATi's position as a high-end GPU provider.

The R680

There was much speculation surrounding the R680: it went from being a huge monolithic chip, to being an MCM (multi-chip module: see this for a tad more information on the concept) with two dies on a single package, to being Crossfire (see here) on a PCB/on a card, like some AIB-made dual 2600XT solutions were (example), or like Nvidia's 7950GX2 (example). The sheer density of these rumours ensured that at least part of them were right, as we'll soon find out for ourselves.

On January 28th, the R680 materialized, proving to be a beastly card: comprising two RV670 chips crammed onto a single PCB and linked by a PLX bridge chip, armed with 512 MB of GDDR3 memory per GPU, and marketed (quite correctly and decently, in our humble opinion) as the HD 3870X2. As expected, this generated an entire spectrum of reactions, from unabashed enthusiasm to condescending giggles, depending on whether one was an ATi or Nvidia fan. What should be clarified right from the get-go is that the 3870X2 is neither the be-all and end-all of GPUs, nor is it some fluke part, like the 2900XT arguably was. As with all things in life, the R680 is neither pure black nor immaculate white: it's a grey! Translation: it's a very good card, with both strong and weak points and, as shall be shown throughout this review, it's a solid high-end contender.

Little Guys and Big Guys

Time to explain the subtitle, as many of you are likely scratching your heads trying to figure out what little guys and big guys have to do with complex pieces of silicon. Whilst 3D rendering has gone from strength to strength since 3Dfx awed us with bilinear filtering (alas, poor point-sampling... for we knew it well), adding more and more power, which in turn enabled ever more complex rendering work, it's still not at the point where one can say that it's enough to accurately recreate reality; at best, we're around the entrance to the Uncanny Valley in this area (see here in order to figure out what the theory behind the concept is). So graphics power has to continue to scale upwards, making chips larger and larger in spite of being built on progressively more advanced process technologies that achieve incredibly small transistor sizes. The trouble with this is that, at some point, you find yourself with all of your eggs in a single basket: your huge top-end GPU depends on the latest, not completely mastered process technology, gets delayed due to unexpected bugs in the silicon, and yields in a completely unsatisfactory fashion. It needn't happen all the time, but when and if it happens, it's a very hard situation to tackle.

An alternative to building constantly larger, more powerful chips is using several weaker chips in tandem to shoulder the rendering load. The concept is not new, having been around for quite a while: over the years 3Dfx, ATi, XGi and Nvidia (listed in chronological order) have all employed it. Whilst it was nice for providing impressive paper specs, and when it worked it did so in an equally impressive fashion, the inherent redundancies and inefficiencies of this approach, coupled with the fact that straight chip scaling still had a lot of life left in it, made it less than successful overall. Only in fairly recent times, with Nvidia's SLI (Scalable Link Interface, not Scan Line Interleave, which was 3Dfx territory) and ATi's Crossfire, was a proper foothold established. Both technologies were employed to cater to a very select niche who wanted the absolute best performance, rather than to create a flagship product rounding off a product line (the 7950 GX2 could be considered an exception, but it flopped for a number of reasons, not least being caught in the wake of the G80); both were tied to certain chipsets and, most importantly, required you to buy two (expensive) high-end cards for at best a 70% improvement.
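To put that "at best a 70% improvement" in perspective, here is a minimal sketch of the arithmetic involved. The function name, the 40 fps baseline and the 0.7 efficiency figure are illustrative assumptions, not measured data: the second GPU contributes only a fraction of the first one's throughput, because driver overhead, inter-frame dependencies and the bridge link all eat into the gain.

```python
def dual_gpu_fps(single_gpu_fps: float, scaling_efficiency: float) -> float:
    """Estimate dual-GPU frame rate: the second chip adds only a
    fraction (scaling_efficiency) of the first chip's throughput."""
    return single_gpu_fps * (1.0 + scaling_efficiency)

# A hypothetical single card managing 40 fps, at the 70% scaling
# ceiling mentioned above, lands around 68 fps rather than 80 fps.
print(dual_gpu_fps(40.0, 0.7))
```

Perfect scaling (`scaling_efficiency = 1.0`) would double the frame rate, which is why reviews of multi-GPU cards tend to report per-title scaling figures rather than a single number.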

A Market in Transition

As GPUs near the one-billion-transistor mark, the risks are growing and the graphics war is moving towards trench-based conflict rather than all-out battle in the open field. Neither IHV can afford another NV30/R600 debacle, and sustaining the accelerated development that we've grown accustomed to solely on the back of bigger and more complex chips might create risks that aren't justified by the possible rewards, so using multiple GPUs to continue scaling graphics performance becomes an increasingly attractive alternative. If you will, it's the inverse of how a product line used to be built. Previously, you had the big guy on top who got scaled down for lower market segments, whilst the future seems to be (at least on ATi's side) starting in the middle and scaling upwards by adding more GPUs, and downwards by adjusting clock rates and disabling functional units. Arguably, this should simplify the entire process. Another possible and likely scenario is a staggered approach to GPU progression: huge monolithic chips released at large intervals, in order to gain a good grasp on process technologies and to ensure the squashing of all bugs, with refreshes happening between these releases by means of multi-GPU cards. One way or the other, the future is certainly interesting, to say the least.
