Product: R700 - The 4870 Gets Kinky
Company: AMD
Author: Alex 'AlexV' Voicu
Editor: Charles 'Lupine' Oliver
Date: September 25th, 2008
A Bumpy Introduction

The portent of the end of days is here: we've actually managed to finish our 4870X2 review! Kind of ... it's probable that the story behind the delay will be slightly more thrilling than the review itself - which doesn't mean that the 4870X2 is a disappointment! On the contrary, it's undoubtedly a great card, stable and fast, which equates to being boringly good most of the time. Its performance is very much in line with what you saw from the Crossfired 4870s, except in the scenarios where the added VRAM allows it to leap in front. But we're getting ahead of ourselves; let's first handle introductions properly.

Meet the 4870X2

Drool about it, not on it

What we have here is the finishing touch to what can safely be considered ATi's most impressive, and successful, product line in history. It's aimed at those that care solely about the best performance, the nutters that get QX processors, piles of RAM, large LCDs and other such toys ... you know, guys like us. Whilst the 4870 managed to be surprisingly competitive here, it couldn't truly outmatch its higher-priced opponents overall; the X2 takes care of that though.

If you're familiar with the 3870X2, there are few visual or even technical surprises in store for you here. The board looks more or less the same, although removing the cooler shows that a certain amount of component shuffling has happened. Also, the 10.5” PCB is black, a departure from ATi's traditional red color scheme. You'll note there aren't pictures of it scattered around this section, seeking to do mean things with our bandwidth ... the net is already quite flooded with 4870X2 nudies as it is, and adding our own would hardly have contributed any meaningful information.

PLX 2.0

What the 4870X2 has going for it, as opposed to its predecessor, is the upgrade that the PLX switch has received: it's now a PCI-E 2.0 part, effectively doubling the theoretically available bandwidth. This should cover certain situations where the GPUs were limited by the PLX port; resource upload comes to mind, as that was one of the points where an X2 solution didn't fare all that well. Be aware that we're primarily talking about the initial resource upload that takes place before you effectively start playing (textures, vertex and index buffers etc.). If you're moving amounts of data large enough to saturate the bandwidth the old bridge provided as you're playing through a level, performance will be crappy anyhow, so the improvement there would be from crap to less crap ... of course, the increased VRAM prevents this, but we'll get back to that in just a bit.
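
For a bit of perspective on what the upgraded bridge buys you, here's a rough back-of-the-envelope sketch (in Python) of how long shoving a pile of resources across the bridge would take at the theoretical rates of a 16-lane PCI-E 1.1 link versus a PCI-E 2.0 one. The 512MB payload is an illustrative figure we picked, not something we measured, and real transfers carry protocol overhead that this happily ignores.

# Back-of-the-envelope: initial resource upload over the old (PCI-E 1.1)
# style bridge versus the new PCI-E 2.0 PLX part.
# Theoretical per-direction rates for a 16-lane link; overhead is ignored.

PCIE_1_1_X16_GB_S = 4.0   # 16 lanes x 250 MB/s
PCIE_2_0_X16_GB_S = 8.0   # 16 lanes x 500 MB/s

def upload_time_ms(payload_mb, link_gb_s):
    """Best-case time to push payload_mb megabytes over the link."""
    return payload_mb / (link_gb_s * 1024.0) * 1000.0

payload_mb = 512.0  # illustrative pile of textures, vertex and index buffers
print(f"PCI-E 1.1 bridge: {upload_time_ms(payload_mb, PCIE_1_1_X16_GB_S):.0f} ms")
print(f"PCI-E 2.0 bridge: {upload_time_ms(payload_mb, PCIE_2_0_X16_GB_S):.0f} ms")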

Sideport Interconnect

This is as good a place as any to discuss the rather mystical Sideport Interconnect. This part of the X2 has heated quite a few imaginations in the interval prior to the launch of the card. After the launch, it caused a number of hissy fits. Why is that, you ask? Simple: it's not enabled in current drivers ... so when something that was supposed to change life, the universe and everything else gets disabled, conspiracy theories and imaginative reasoning spring up like in a “Lost” episode.

The less glamorous truth is that changing the world as we know it was never really on the Sideport's “to-do” list. Summing it up, it's pretty much an extra PCI-E 2.0 link that connects the GPUs directly, without going through the bridge. This path has a slightly lower latency associated with it compared to the PLX one, but not to a notable extent, and the bandwidth it provides is insufficient for doing really adventurous stuff like a shared memory pool. So is it completely worthless then? Like all things, that depends: it will not be all that helpful when doing typical AFR, but it could be useful in alternative schemes. We'd urge you to go through our interview with Mr. Eric Demers for slightly more information about the interconnect (and a number of other interesting topics). It's possible to enable the interconnect in current drivers, although until ATi decides to use it there's no benefit in doing so (and no, we won't detail how to do it ... not that it's some big secret and arcane ritual, mind you).

2GB GDDR5? Oh My!

One final bit of novelty is the rather awe-inspiring amount of GDDR5 that this black terror sports: 2GB, split evenly and fairly between the two GPUs, each of them having its own 1GB pool to do nasty things with (remember, no shared memory pool this round). This should cover the increasing number of cases where 512MB becomes less than sufficient - a statement which should have a few panties up in a wad, based on the reactions this opinion generated when expressed in another article of ours. So, let's slowly and carefully untangle the underwear, shall we?

The misunderstanding in this case stemmed from seeing memory management in a binary suck/rock key: if it does not suck, it rocks (in other words, if I'm not forever pegged at 1 FPS, then it's all good ... my averages are still quite good). Well, that's not exactly how things are: you should see things on a [suck, rock] continuum, with quite a few intermediary steps in between. The basic idea is that in an ideal scenario, you'd upload all the stuff you need for rendering the current level/cell/whatever (depending on how the developer opted to partition his gameworld) to VRAM at once and be done with it. This only works if all the resources you need actually fit in there. Once developers start piling on the shadowmaps, normal maps, treasure maps (joke alert) and high resolution textures, whilst using multiple render targets, it becomes an increasingly tight fit. Add a high enough resolution coupled with a decent amount of AA and odds are your GPU will face the same problem that Oprah faces when choosing a skirt: it's too small (the VRAM, that is).
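
To make the tight fit a bit more concrete, here's a simplified sketch of how much memory the render targets alone can chew through as resolution, AA and the number of render targets go up. The assumptions (32-bit colour, 32-bit depth/stencil, storage scaling linearly with the sample count, no compression) are ours and deliberately crude, so treat the figures as ballpark rather than gospel - textures and geometry still need to fit on top of this.

# Simplified render-target footprint: 32-bit colour, 32-bit depth/stencil,
# multisampled surfaces scaling linearly with sample count. Real hardware
# compresses and the driver keeps extra copies, so these are ballpark only.

def render_target_mb(width, height, samples=1, colour_targets=1):
    colour = width * height * samples * 4 * colour_targets  # bytes
    depth = width * height * samples * 4                    # bytes
    return (colour + depth) / (1024.0 ** 2)

for width, height, samples, targets in [(1920, 1200, 1, 1),
                                        (1920, 1200, 4, 4),
                                        (2560, 1600, 8, 4)]:
    mb = render_target_mb(width, height, samples, targets)
    print(f"{width}x{height}, {samples}xAA, {targets} target(s): ~{mb:.0f} MB")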

Good things come in pairs

What happens now? Well, magic! Okay, not really: all is fine and dandy as long as the GPU doesn't need something that's not in VRAM. If it does, the driver must evict something that's already there to make room for the new resource. An LRU (least recently used) scheme is employed, in which the resource that was accessed least recently gets flushed. What you'll perceive is your framerate going down. For how long? Well, that depends on how much data needs to be evicted/uploaded - and herein lies the source of the conundrum! Since most games today are really aimed at 512MB SKUs (at best/worst, depending on your stance on graphical advancement), it's likely that their requirements at extreme settings won't significantly surpass what's available (if they surpass it at all); in translation, the whole enchilada outlined above won't occur very frequently. The more the requirements surpass available VRAM, the more frequent the shuffling. In a worst case scenario, you'd have to flush and repopulate the entire memory space every few frames - but at that point you're likely to have given up on the game or reduced your settings.
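
For those who like their explanations in code form, here's a toy model of the LRU idea described above - just the principle, mind you, not a recreation of what the driver's memory manager actually does, which is considerably hairier.

from collections import OrderedDict

class VramLru:
    """Toy LRU model of VRAM residency: when a new allocation doesn't fit,
    the least recently used resources get evicted first. Purely illustrative."""

    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.resources = OrderedDict()   # name -> size in MB, oldest first

    def touch(self, name):
        """The GPU referenced this resource: mark it most recently used."""
        if name in self.resources:
            self.resources.move_to_end(name)

    def upload(self, name, size_mb):
        """Bring a resource into VRAM, evicting LRU victims until it fits."""
        evicted = []
        while self.used + size_mb > self.capacity and self.resources:
            victim, victim_size = self.resources.popitem(last=False)
            self.used -= victim_size
            evicted.append(victim)
        self.resources[name] = size_mb
        self.used += size_mb
        return evicted   # every eviction/re-upload is where the frame time goes

pool = VramLru(capacity_mb=512)
pool.upload("level_textures", 400)
pool.upload("shadow_maps", 100)
print(pool.upload("hi_res_pack", 200))   # forces 'level_textures' out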

So, if you've been paying attention (probably not, as no one actually reads this stuff in practice), what the rather lengthy paragraph above says is that average framerates can still be decent even when VRAM constrained - they're not what you should be looking at. It's the minimum framerates that will invariably suffer. The averages can and will be affected, but the extent of this is highly dependent on just how frequently the driver starts doing its balancing act. Ultimately, VRAM requirements also depend on the level/chapter/whatever being played: if one level needs X amount of VRAM, there's no guarantee that the next won't need twice that. The only way to get an idea of what's going on is to monitor VRAM allocation ... and even that's a bit tricky under Vista.
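
A quick illustration of why the averages lie: take a made-up frame-time trace where most frames render happily but every so often one stalls while the driver reshuffles VRAM. The numbers below are invented purely to show the shape of the problem, not taken from any benchmark.

# Why averages hide the pain: most frames take 16 ms, but every 60th frame
# stalls at 120 ms while resources get evicted and re-uploaded (hypothetical).

frame_times_ms = [16.0] * 300
for i in range(0, 300, 60):
    frame_times_ms[i] = 120.0          # hypothetical eviction stall

average_fps = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)
minimum_fps = 1000.0 / max(frame_times_ms)
print(f"average FPS: {average_fps:.1f}")   # still looks respectable
print(f"minimum FPS: {minimum_fps:.1f}")   # this is the part you feel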

Considering all of the above, the increase from 512MB to 1GB per GPU makes sense given the typical usage patterns these cards should see (resolutions of 1920x1200 and above, high levels of AA, highest possible in-game settings), as well as the trend towards higher and higher VRAM consumption that's noticeable in more recent games (:cough: Crysis :cough:). It's more of a forward-looking thing at this point in time: you can count the games where it makes a difference today without having to borrow a hand or resort to mucky feet. The coming months might force you to employ at least your toes for that count, though. Oh, and we hope no one is upset over the above paragraphs ... they're tongue-in-cheek to the extreme, because the best way to present info is the friendly neighborhood joker way.

Aside from all of this hubris, there isn't much else differentiating the 4870X2 from a pair of 4870s. Frequencies are the same - 750MHz core/900MHz RAM - and installation is painless. Oh, we nearly forgot: the X2 is “smarter” when it comes to thermal/power management. It has a full PowerPlay implementation, meaning that it downclocks and downvolts the GPU cores, as well as downclocking the memory. The 4870 implements only the most basic PowerPlay level, merely downclocking the GPU core without touching voltage or RAM clocks. We'll (shamelessly) plug our content once more here, and direct you to our interview with Mr. Demers for more elaboration on the topic.
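
If a picture of the difference helps, here's a small sketch contrasting what each card's PowerPlay implementation is allowed to touch, per the description above. The 3D clocks are the stock 750/900 figures; the idle values are placeholders we made up for illustration, not measured numbers.

# Contrast of the two PowerPlay implementations described above. 750/900 are
# the stock 3D clocks; the idle figures below are placeholders, not measured.

STOCK_3D = {"core_mhz": 750, "mem_mhz": 900, "core_voltage": "stock"}

def idle_state(card):
    state = dict(STOCK_3D)
    state["core_mhz"] = 500                 # placeholder idle core clock
    if card == "4870X2":                    # full PowerPlay
        state["core_voltage"] = "reduced"   # cores get downvolted too
        state["mem_mhz"] = 500              # placeholder idle memory clock
    return state                            # plain 4870: only the core drops

for card in ("4870", "4870X2"):
    print(card, "idle:", idle_state(card))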

So now comes the part where we show you pretty charts, right? If only it had been that easy...
