
AMD Radeon HD 7970 Launch Review @ Rage3D.com



    The new AMD Radeon HD 7970 is here. Is it as hot as its namesake, Tahiti? Our performance review pits it against the best high-end GPUs from the last two generations, and details what's new and exciting in the fastest single GPU available today.

    AMD Radeon HD 7970 Launch Review

    #2
    Great review, thanks.



      #3
      Very good work, guys.
      CROSSHAIR X670E HERO / R9 7950X3D / RTX 4090 GAMING OC / TRIDENT Z5 NEO RGB 6000 CL30 / SAMSUNG 980pro 1TB / 2x SAMSUNG 980 1TB / H150i ELITE LCD / ATH-A2000Z / HX1200 / AW3821DW 38" / LG C2 OLED evo 55" / Enthoo 719 / K70 MKII + Zowie S2 / K57 + Harpoon / Xbox Series X Controller / REVERB G2 V2
      ____________________



        #4
        Caveman, good review and awesome write-up on the conclusion. Too many reviewers just see the numbers, but you looked at the numbers and then considered the trickle-down. Well done.
        Edward Crisler
        SAPPHIRE NA PR Representative

        #SapphireNation



          #5
          tl;dr

          j/k

          Good review

          Some of your pictures (such as the specifications) have very small text, which makes them difficult to read/view.



            #6
            Still can't decide whether I should replace my 5970 with it or not.



              #7
              On the AF fixes and changes, your article made it sound like they may be only for future video cards, and not implemented in current cards. I.e., it's hardware, not drivers. Is this true?



                #8
                Just a quick note: PRTs are not part of DX 11.1. More UAVs and TIR are (which the 7900 series supports), but PRTs are currently only available in OpenGL, via a specific extension.

                We'll work to add it to DX, but we'll be working with developers on that.



                  #9
                  Nice review, James.

                  I'll wait for prices to get lower; I can't afford one right now.



                    #10
                    Love having the 5870 in there for comparisons; definitely a great upgrade route for me. Good review, Cav.

                    I don't understand the resolution thing with Eyefinity 2 and custom resolutions. Does that mean each monitor can have a different resolution, or does it mean you can drive all the displays at the same resolution via a custom profile resolution? Not sure how this will help... Does it mean you can drive a different-resolution monitor to match a lower-resolution monitor?

                    Wasn't sure of the Eyefinity resolution tested (I may just be blind).

                    Of course, future insights and updates on this new technology will probably be prudent as time goes on. Very exciting new tech here, and AMD did more than just upgrade fps; they improved virtually every aspect of this GPU. AWESOME JOB BY AMD!

                    Now I just wonder when they will start appearing for sale. Also, what are the availability expectations in January? Full availability?

                    I actually think the price is very good for what it offers; I do not want AMD to limit their enthusiast card due to some fixed price scheme. I do believe AMD needs competitive products at each price target, though. Pricing the 7970 and 7950 at these points also allows older-generation items to keep selling without too much of a price/profit cut, with continued production until 28nm capacity can support the lower-end GPUs.

                    Great job by AMD; now let's see some cards in the hands of end users.
                    Ryzen 1700x 3.9ghz, Thermaltake Water 2.0 Pro, Asus CrossHair 6 Hero 9, 16gb DDR4 3200 @ 3466, EVGA 1080 Ti, 950w PC pwr & cooling PS, 1TB NVMe Intel SSD M2 Drive + 256mb Mushkin SSD + 512gb Samsung 850evo M.2 in enclosure for Sata III and 2x 1tb WD SATA III, 34" Dell " U3415W IPS + 27" IPS YHAMAKASI Catleap. Win10 Pro

                    Custom SFF built case, I7 6700k OC 4.4ghz, PowerColor R9 Nano,, 1TB NVMe Intel SSD M2 Drive, 16gb DDR 4 3000 Corsair LPX, LG 27" 4K IPS FreeSync 10bit monitor, Win 10



                      #11
                      Very well done review.

                      I think one of the things people are missing is that just because AMD decided on a power envelope and set its frequencies accordingly does not mean that the end user has to use it at that power envelope. With the overclocks many have been getting with this card, I have a feeling that the card is priced for the hardcore who are going to OC the thing to all get-out anyway.
                      Main rig: look at system spec tab
                      Storage Server: Dual AMD Opteron 6120 CPUs, 64Gigs ECC Ram 50TB usable space across 3 zfs2 pools


                      HOURGLASS = most appropriate named ICON/CURSOR in the Windows world :-)

                      In a dank corner of ATI central, the carpet covered with corn flakes, the faint sound of clicking can be heard........Click......click, click............as the fate of the graphics world and the future of the human race hangs in the balance.

                      I know....I know........Keep my day job :-)- catcather



                        #12
                        As always, great review.



                          #13
                          Eyefinity 3 already ???

                          Checked: no, with 7970 they released Eyefinity 2.
                          Intel i7 2700K @ 4.8 Ghz | Zalman CPNS9900LED | Asus Maximus IV Extreme-Z | 16 GB Corsair Dominator GT CMT16GX3M4X2133C9 DDR3 | 2 x (Sapphire) AMD Radeon HD7970 (crossfire) | Creative X-Fi Titanium Fatality Pro | Corsair AX 1200W | 4 x WDC WD1002FAEX | Optiarc AD 5240S | Dell U3011 @ 2560x1600 | Steelseries 7G | Razer Imperator 2012 | Coolermaster Stacker STC T01 | Logitech Z-5500 | Sennheiser HD598 | Windows 7 Ultimate x64 SP1 |



                            #14
                            First: Merry Christmas! The best of New Years to everyone! I invite everyone at Rage 3d, staff and readers alike, to attend my coronation next week as King of North America. Hic.. Hand me some more egg nog, you bi---SLLAP! Hic...! Yo-ho-ho, it's the jolly gre-en giant! What a PAR-TAAY, as the goofballs say!!! uh...is'at cool, or whut?

                            Good grief

                            ________________________________

                            Overall, the review was a good effort, but rushed--but whose 7970 review wasn't?... Jumping right in there, a few bones to pick:

                            In the image-quality section you say:

                            ...Testing this ourselves we see that there is a significant improvement in image quality, a lot less shimmer is discernible...This adjustment [that produced shimmer] is a test case that should never be seen in any game and is done for illustrative purposes only, not as demonstration or proof of wrongdoing or poor quality.
                            But then just a scant few lines later :

                            In specific games with high noise textures used on floor, wall or ceiling objects, shimmer is not observed easily but is one of those items that once you see it you can't unsee it. AMD has more work to do on their Anisotropic Filtering, and gladly admits it - they found enough issues and fixes to put into the next three generations of GPUs, so hopefully along that path they'll get the shimmer issue resolved and perhaps look at new AF techniques like High Quality Elliptical Texture filtering using an Elliptical Weighted Average filter as presented at SIGGRAPH this year.
                            So you go from talking about shimmer not being observable except in test cases where you are pushing it and you are trying to force it--to more or less saying that it's everywhere with "busy" textures and you hope at some point, maybe in the next three generations of ATi gpus, that they'll eventually get around to fixing it.

                            I guess I've read >18 7970 card reviews in the last few days, and of those sites that did mention being able to see shimmer in past ATI gpu generations (I've seen it myself), they also quite specifically mentioned that with the 7970 they could no longer see it, even though they looked for it. The absence of the shimmer they attribute to improvements ATi made with the new architecture, and I think they are right. By most every source I know, ATi's AF is considered the best available. I think that if you are going to make these kinds of criticisms then the least you could do is publicize exactly what you did to force the shimmer that you saw--I mean, did you go -5 scaling on the LOD, or what?...

                            As to your AA/AF bar charts...I confess to not being able to decipher them very well... I know from what you wrote that you were running in only one resolution--1920x1080, but after that things got hazy. What exactly do you mean by "normalized results"? For Dirt 3, the chart says "0xAA 100"--the bottom of the chart says "Performance (average fps %)"--and then for 8xAA you list "85.9"...Here's what I think you mean here:

                            You took the frame rate of Dirt 3 running at 1920x1080, at 0xAA, whatever that frame rate was (we don't know because you didn't tell us), and normalized it to 100%, and then ran the degrees of AA and represented them as percentages of the "norm" in corresponding bars on the chart. I honestly think it would have been slightly more informative to list your frame rates as they occurred with each level of AA, and then to summarize in a sentence of text the respective percentages.

                            I must've looked at this particular bar (AA) chart three times before it hit me that "100" was 100% instead of 100 fps, and so on down the line. I mean, doing it that way seems really clumsy when you could just have listed your frame rates and let your readers do the percentage math if they wanted, or else you could have simply summarized the percentage of performance hits in a sentence of text, or you could have included this bar chart along with a chart listing the actual frame rates that occurred at each level of AA.

                            The reason I would suggest this approach is because almost every game is going to see AA levels as a different percentage of its 0xAA frame rates, because every game is different in ways that directly affect their performances on any gpu. IE, 8xAA might well have a 15% hit in Dirt 3 @ 1920x1080, ultra settings (whatever those are), but even Dirt 3 @ different resolutions and different game settings will provide different percentages of AA hit, and will give you a different 0xAA frame rate "norm" to begin with. Moving along outside of Dirt 3, other games are going to see different levels of performance drop relative to 0xAA when AA is used, and so the percentages as listed in your AA percentage-hit bar chart for Dirt 3 would simply not apply. There's just no such thing as a "universal" percentage hit for FSAA...

                            Ditto the above for your AF "percentage hit" representation in Aliens versus Predator--one game and one resolution and one level of IQ settings does not apply to every game at every resolution and every level of IQ in-game settings. I'd simply suggest that in the future you either forgo this kind of chart completely and simply list frame rates and CCC/in-game settings for each resolution you test, and let your readers do the percentages if they are so inclined. Either that, or use your present bar chart for AA & AF percentage hits along with a separate bar chart for actual frame rates at each level of AA. And if you really want to get thorough, you could do the AA & AF percentage-hit bar charts for every game tested wherein you did list their frame rates--being careful to group them all together (each game's AA and AF percentage-hit & frame rate bar charts) so that context isn't lost by the reader..

                            Also, I know that later on in the review you did actually use bar charts demonstrating the frame-rate performance of Dirt 3 & A vs P, but they should not have been separated in the review from their respective AA & AF percentage-hit bar charts, simply because these two examples don't tell us how much of an AA or AF hit we'll see in the other games you benched, and for which you did provide frame-rate bar charts--all presumably running at 1920x1080, with differing in-game settings.
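                            For what it's worth, the normalization being debated here is just division by a baseline; a minimal sketch of how such a chart is derived (the fps figures below are invented for illustration, since the review's raw numbers weren't published):

```python
# Hypothetical Dirt 3 average fps at each AA level -- illustrative only,
# not the review's measurements.
fps_by_aa = {"0xAA": 62.0, "2xAA": 58.9, "4xAA": 56.4, "8xAA": 53.25}

# Normalize everything against the 0xAA baseline, so 0xAA maps to 100
# and each other bar shows the percentage of baseline performance retained.
baseline = fps_by_aa["0xAA"]
normalized = {level: round(100 * fps / baseline, 1) for level, fps in fps_by_aa.items()}

print(normalized)
```

With these made-up inputs, 8xAA comes out at 85.9, i.e. the chart's "85.9" reads as "85.9% of the 0xAA frame rate", not 85.9 fps, which is exactly the ambiguity being complained about.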

                            Moving on to your "normalized bar chart" for tessellation, I saw that you used Crysis 2 & a benchmark for the unique results the chart illustrates, using the simple CCC controls to force tessellation, if possible. I can think of many more examples of tessellation processing that would have been far more demonstrative than this one--like the Heaven benchmark, for instance.

                            Some other reviews used some games in which previous ATi 6xxx and < gpus had done very poorly compared to some present-gen nVidia cards, precisely because the nVidia gpus were simply more efficient and did a better job with tessellation. I think that for the new architecture AMD is citing up to a 400% (4x) improvement over their tessellation processing in the 6000-series, and the games that the GTX 580 used to win because of its tessellation superiority are now taken by the 7970, and by a convincing margin, too. You glossed over the tessellation tests as though they were unimportant, and I think tessellation-processing efficiency and capability is one of the new architecture's high points.

                            FPS bar charts: My main wish here is for better logic in the placement of the various gpus in the charts (you don't list them fastest-to-slowest or slowest-to-fastest, or by model families, or by IHVs), and I *really* wish you had used, say, orange, white & green for the 7970, so that the eye would be drawn in each chart to the gpu that is the focus of this review. As it is, I had to study each chart far longer than I wished to locate and parse the 7970 info that it contained... (Jeepers, creepers, who got my peepers?)

                            Aside from that, I would have liked to see maybe a couple more resolutions tested--I know your monitor won't do >1920x1080, but it will surely do less for testing even if the immediate IQ isn't so hot due to the monitor running in a non-native res--that won't affect the frame rates that result. Of course, as I'm sure you've seen in many other reviews, the 7970 is a high end card designed for high-resolutions, and as such the card doesn't really begin to stretch its legs until you're running @ 2560x1600 or >. Speaking of >...

                            Eyefinity tests: First, let me commend you on doing Eyefinity testing with the 7970! I was amazed at the number of review sites that mentioned the capability but then didn't think it was worth reviewing and testing. I think it is one of the signature strengths of the card versus its GTX 580 competition (It takes two nVidia gpus to do the multi-monitor gaming tango, but ATi can do Eyefinity with only one.)

                            I think that is a major and significant difference, and you did a great job in reviewing and testing for it! It's weird, but did you know you are in the minority of review sites in that sense? Yea, it was hard for me to believe, too, that the majority of sites lucky enough to receive a card to review elected *not* to test Eyefinity, and some didn't even mention the fact that the card they compared the 7970 to, the stock GTX 580, couldn't do anything similar unless you wanted to pay $599 for a custom 580 with a custom chip onboard that would allow it to at least support 3 monitors in a game.

                            Anyway, my only question relative to your Eyefinity testing is that it would be nice to know at what resolution(s) you tested... Also, the above suggestions about the arrangement of the bar charts and the color of the review-card's bars apply here, too.

                            Unfortunately for us regular joe consumers, AMD has noticed that fact, and the halcyon bang/buck value days of Radeon cards appear over. In a global economy where every company (including AMD) is optimizing for efficiency, AMD is (soft) launching an ultra enthusiast single GPU card for Christmas. With the current one year old competition well and truly trounced, the AMD Radeon HD 7970 is a deserving winner of the title 'World's Fastest Single GPU'. It also does this at a price point cheaper than the equivalent frame buffer competitor card, which is in apparently scarce supply.
                            There seem to be plenty of GTX 580s around. I know you were talking about 3GB 580s but, hey, the GTX 580 is the closest thing nVidia has to the 7970, and it is what it is. That's what these reviews show--that the 7970 is the all-around better product. NewEgg's GTX 580s range in price from a low of $479 to a high of $729, depending on the degree and type of customization. The 590, a dual-gpu nVidia card capable of 3-monitor surround gaming, is $799, I believe. Sure makes the $549 tag on the 7970 look a whole lot better, doesn't it?

                            I think you've maybe misinterpreted some things along the way. First is the fact that the 7970 is a high-end card, not mid-range and certainly not a "value" card. It's not the only one ATi will produce and ship next year. We will definitely see mid-range ~$300 7000-series cards that will most likely blow the socks off of the competition's ~$300 cards... So , I think AMD is keenly aware of world economic conditions...

                            Second, AMD has never had anything against putting a gpu product in the $500+ markets--it's just that with the 6000 series, with the exception of their 6000-series dual gpu offering, up to now AMD didn't think it had a single-gpu 6000-series product that *would sell* in that price bracket relative to the competition. Now, ATi/AMD is convinced they have such a product and so it has been accordingly MSRP'ed. More power to them! I don't know why you'd think the fact that because AMD now has a $500+ card that is very marketable, it means that AMD isn't going to produce a ~$300 7000-series variant next year that will be a killer card, too. Why would you think that?

                            As for announcing this card before Christmas, just a couple of weeks before AMD's January 9th 2012 announced availability date for the 7970, I think it's been a tremendous publicity initiative and I think AMD should be proud. They've come out pretty straight and narrow with the 7970--with very little hyperbole or sensationalism--and they've publicized this information at a time of the year guaranteed to produce a maximum amount of coverage for them--and I think that's great! It's about time that AMD enjoyed some effective PR!

                            The cooling solution is a compromise; you'll have to settle for less than great acoustics as the fan isn't really quiet enough for a premium product.
                            I wouldn't be surprised to see something better when shipping commences Jan 9th.

                            The second compromise is the biggest - software support. There are plenty of niggles in VECC, and the recent fumble with Saints Row: The Third, Battlefield 3, Skyrim and RAGE launch support doesn't inspire premium product confidence.
                            Most of that stuff is already past tense, isn't it? Also, I think AMD has done a lot of talking lately about the new initiatives they are putting into place next year to see that a repeat of this year's fumbles doesn't happen again. Supposedly, AMD is pulling out all the stops for DevRel next year. Until they prove otherwise, I'm content to take them at their word--they *know* how important driver support is, I'm convinced of it...

                            A continued lack of an option for forced SSAA/Alpha texture AA/SSAO to be added to modern game titles in 2012 isn't premium enough. This is a brand new architecture with all the driver teething troubles accompanying that; you might be settling for less than you can expect for a premium $549 price tag.
                            Well, keep in mind that this *is* a brand new architecture and in the second half of Eric D's interview here at Rage3D he specifically talks about how excited he is about the great stuff coming up in the drivers in the next three months! Sounds good to me, and not at all unusual for new architectures.

                            AMD's outline for Catalyst features and improvements are heading in the right direction, finally getting towards solving some of the problems that premium features such as Crossfire, Eyefinity and HD3D present to users.
                            I don't really give a flip about HD3D, honestly--never have. What a gimmick. The only "3d" I want is the kind that doesn't require glasses--hey, thanks for asking, though, AMD...

                            Is this AMD's admission that NVIDIA had it right for pricing, and that you just can't build GPUs to the lower price points and make money?
                            No, this is AMD's admission that prior to now it didn't think it could make a single-gpu product worthy of the $500+ market niche as presently occupied by its discrete gpu competitor, nVidia. As far as "building gpus for lower price points and making money" goes, I think AMD/ATi has been pretty good at doing that for years. Just because they've come up with another single-gpu product they think is suitable for the $500+ market (after a good long hiatus from that market), that's no reason to think they aren't going to come out with killer gpus @~$300 next year. Has nothing to do with it, I'd say.

                            ...will the 7800 series simply replace the 6970/50 at around the same price and performance with the only benefits being DX11.1 & ZeroCore power? What about the 7700 series, will they be similar performance and price to the 6800s, too, a new architecture with little to no performance increase? Say it ain't so!
                            OK. "It ain't so!" echo: "it aint'so-itain'tso..." Heh. The 7000-series gpu is a brand new architecture. Is the brand new architecture going to displace the old one? You betcha'! The customer is the one who has to make up his mind as to whether or not he's interested in doing that. Far as I know, new architectures *always* replace the old ones, which means, yes, some overlap of feature support (I mean, 'cause you don't want to *give up* the good stuff when you go to something new, do you?), but which will generally, as a rule, also offer superior performance and some *new* and possibly killer features at the same time.

                            Gotta' go now...Little rug rats are all grown up and their dang presents hardly fit under the tree anymore---grrrrr. Happy New Year and the best of all possible worlds to ya'!
                            **It is well-known that I am incapable of mistakes, so if you encounter a mistake in anything I have written, then you can be sure that I did not write it!



                              #15
                              I'm mostly a lurker... my 2 cents on this, though: it looks like the 7970 will be going up against the GTX 770... sorta like the current-gen 570 vs 6970, I guess...



                                #16
                                Originally posted by Blín D'ñero View Post
                                Eyefinity 3 already ???

                                Checked: no, with 7970 they released Eyefinity 2.
                                Eyefinity 3 refers to the number of screens in use. Previously we had the 5870 Eyefinity 6 edition, remember?

                                @WaltC - when I have 2-3 hours to read through that I'll take a look. In the meantime, thanks for taking the time to read and respond, and have a Merry Christmas!



                                  #17
                                  Good jorb Caveboy! The only niggle I have is that some of the pictures are kinda small and hard to read, as was mentioned...



                                    #18
                                    Good job on the article.



                                      #19
                                      Any Bitcoin benchmarks?

                                      Those who will not reason, are bigots, those who cannot, are fools, and those who dare not, are slaves. (George Gordon Noel Byron)



                                        #20
                                        I will be blunt: Nvidia doesn't have a card that competes with the 7970 at this time. AMD's 1.5GB 7950 at $399 (tentative) appears to be a 580 competitor, and may still outperform the 580 while having more modern features. The 3GB version of the 7950 would compete against the 3GB 580.

                                        Consider that for $100 less you may get the same or better performance than Nvidia's best (7950 vs 580); this is great bang for your buck coming from AMD.
                                        Ryzen 1700x 3.9ghz, Thermaltake Water 2.0 Pro, Asus CrossHair 6 Hero 9, 16gb DDR4 3200 @ 3466, EVGA 1080 Ti, 950w PC pwr & cooling PS, 1TB NVMe Intel SSD M2 Drive + 256mb Mushkin SSD + 512gb Samsung 850evo M.2 in enclosure for Sata III and 2x 1tb WD SATA III, 34" Dell " U3415W IPS + 27" IPS YHAMAKASI Catleap. Win10 Pro

                                        Custom SFF built case, I7 6700k OC 4.4ghz, PowerColor R9 Nano,, 1TB NVMe Intel SSD M2 Drive, 16gb DDR 4 3000 Corsair LPX, LG 27" 4K IPS FreeSync 10bit monitor, Win 10



                                          #21
                                          Originally posted by indio007 View Post
                                          Any Bitcoin benchmarks?

                                          No, sorry.



                                            #22
                                            Originally posted by WaltC View Post
                                            So you go from talking about shimmer not being observable except in test cases where you are pushing it and you are trying to force it--to more or less saying that it's everywhere with "busy" textures and you hope at some point, maybe in the next three generations of ATi gpus, that they'll eventually get around to fixing it.

                                            I guess I've read >18 7970 card reviews in the last few days, and of those sites that did mention being able to see shimmer in past ATI gpu generations (I've seen it myself), they also quite specifically mentioned that with the 7970 they could no longer see it, even though they looked for it. The absence of the shimmer they attribute to improvements ATi made with the new architecture, and I think they are right. By most every source I know, ATi's AF is considered the best available. I think that if you are going to make these kinds of criticisms then the least you could do is publicize exactly what you did to force the shimmer that you saw--I mean, did you go -5 scaling on the LOD, or what?...
                                            Firstly, I stated what AMD has told the world as their fixes and their level of perception. Secondly, I reported what I found.
                                            I did publish information on how to see the shimmer; you can see the extra shimmer vs. reference with no adjustments. If you re-read, you'll see that, y'know, with the giant screencaps with outlines of where the shimmer can be located?

                                            Originally posted by WaltC View Post
                                            As to your AA/AF bar charts...I confess to not being able to decipher them very well... I know from what you wrote that you were running in only one resolution--1920x1080, but after that things got hazy. What exactly do you mean by "normalized results"? For Dirt 3, the chart says "0xAA 100"--the bottom of the chart says "Performance (average fps %)"--and then for 8xAA you list "85.9"...Here's what I think you mean here:

                                            You took the frame rate of Dirt 3 running at 1920x1080, at 0xAA, whatever that frame rate was (we don't know because you didn't tell us), and normalized it to 100%, and then ran the degrees of AA and represented them as percentages of the "norm" in corresponding bars on the chart. I honestly think it would have been slightly more informative to list your frame rates as they occurred with each level of AA, and then to summarize in a sentence of text the respective percentages.

                                            I must've looked at this particular bar (AA) chart three times before it hit me that "100" was 100% instead of 100 fps, and so on down the line. I mean, doing it that way seems really clumsy when you could just have listed your frame rates and let your readers do the percentage math if they wanted, or else you could have simply summarized the percentage of performance hits in a sentence of text, or you could have included this bar chart along with a chart listing the actual frame rates that occurred at each level of AA.

                                            The reason I would suggest this approach is because almost every game is going to see AA levels as a different percentage of its 0xAA frame rates, because every game is different in ways that directly affect their performances on any gpu. IE, 8xAA might well have a 15% hit in Dirt 3 @ 1920x1080, ultra settings (whatever those are), but even Dirt 3 @ different resolutions and different game settings will provide different percentages of AA hit, and will give you a different 0xAA frame rate "norm" to begin with. Moving along outside of Dirt 3, other games are going to see different levels of performance drop relative to 0xAA when AA is used, and so the percentages as listed in your AA percentage-hit bar chart for Dirt 3 would simply not apply. There's just no such thing as a "universal" percentage hit for FSAA...
                                            Indeed, and you'll recall there was a shortage of time for things like this. It can be revisited and adjusted, but you understood what was happening, and that was the main point. It's not perfect, sure; consider it a discussion point for the forums, then...

                                            Originally posted by WaltC View Post
                                            Ditto the above for your AF "percentage hit" representation in Aliens versus Predator--one game and one resolution and one level of IQ settings does not apply to every game at every resolution and every level of IQ in-game settings. I'd simply suggest that in the future you either forgo this kind of chart completely and simply list frame rates and CCC/in-game settings for each resolution you test, and let your readers do the percentages if they are so inclined. Either that, or use your present bar chart for AA & AF percentage hits along with a separate bar chart for actual frame rates at each level of AA. And if you really want to get thorough, you could do the AA & AF percentage-hit bar charts for every game tested wherein you did list their frame rates--being careful to group them all together (each game's AA and AF percentage-hit & frame rate bar charts) so that context isn't lost by the reader..
In general, the reason for using normalized results is to abstract the results away from the absolutes and focus on the differences. It wasn't a performance test, it was a performance delta test, aimed at quickly seeing what levels of efficiency were on offer.

AF shows little delta when moving from 0xAF to 16xAF, meaning it is essentially free at 1920x1080 - turn on texture filtering wherever you can, it's not going to hurt performance.

AA shows a bigger delta, but still only ~15% for 8xMSAA at 1920x1080 - if you remember the first generation of unified shader cards, you'll know how much progress that is.
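For anyone who wants to reproduce the normalization from their own benchmark runs, here's a minimal sketch of the calculation. The frame rates are made-up illustrative numbers, not figures from the review:

```python
# Sketch of the normalization described above: express each AA level's
# frame rate as a percentage hit relative to the 0xAA baseline.

def aa_hit_percent(baseline_fps: float, aa_fps: float) -> float:
    """Performance cost of an AA level, as a percent of the 0xAA rate."""
    return (baseline_fps - aa_fps) / baseline_fps * 100.0

# Hypothetical frame rates for one game at one resolution/settings combo.
results = {"0xAA": 100.0, "4xMSAA": 92.0, "8xMSAA": 85.0}

for level, fps in results.items():
    print(f"{level}: {aa_hit_percent(results['0xAA'], fps):.1f}% hit")
```

As WaltC notes, the percentages only hold for that one game at that one resolution and settings combo; the point of normalizing is just to compare deltas, not absolutes.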

                                            Originally posted by WaltC View Post
                                            Also, I know that later on in the review you did actually use bar charts demonstrating the frame-rate performance of Dirt 3 & A vs P, but they should not have been separated in the review from their respective AA & AF percentage-hit bar charts, simply because these two examples don't tell us how much of an AA or AF hit we'll see in the other games you benched, and for which you did provide frame-rate bar charts--all presumably running at 1920x1080, with differing in-game settings.

                                            Moving on to your "normalized bar chart" for tessellation, I saw that you used Crysis 2 & a benchmark for the unique results the chart illustrates, using the simple CCC controls to force tessellation, if possible. I can think of many more examples of tessellation processing that would have been far more demonstrative than this one--like the Heaven benchmark, for instance.
I didn't use the Heaven benchmark because it's not used in any games. The CCC control caps tessellation levels, so you can see the difference between what the game wants to do and what happens if you put a lid on the tessellation factor, for performance.

                                            Originally posted by WaltC View Post
Some other reviews used some games in which previous ATi 6xxx and < gpus had done very poorly compared to some present-gen nVidia cards, precisely because the nVidia gpus were simply more efficient and did a better job with tessellation. I think that for the new architecture AMD is citing up to a 400% (4x) improvement over their tessellation processing in the 6000-series, and the games that the GTX 580 used to win because of its tessellation superiority are now taken by the 7970, and by a convincing margin, too. You glossed over the tessellation tests as though they were unimportant, and I think tessellation-processing efficiency and capability is one of the new architecture's high points.
Tessellation is largely irrelevant, and I don't think NVIDIA's ability to process higher tessellation factors is a function of efficiency. They have multiple smaller tessellation units where AMD has one, or from Cayman on, two. AMD improved tessellation, but the fact is, even in games that make heavy use of tessellation like HAWX 2 and Crysis 2, the extra tessellation power of NV cards doesn't make for a better game experience - the raw power of the card is the biggest differentiator.

                                            Originally posted by WaltC View Post
FPS bar charts: My main wish here is for better logic in the placement of the various gpus in the charts (you don't list them by fastest-slowest or slowest-fastest, or by model families, or by IHVs), and I *really* wish you would have used, say, orange, white & green for the 7970, so that the eye would be drawn in each chart to the gpu that is the focus of this review. As it is, I had to study each chart far longer than I wished to locate and parse the 7970 info that it contained... (Jeepers, creepers, who got my peepers?)
                                            The charts are limited in what they can do. When I get time I'll look for better alternatives. Thanks for your feedback. Don't expect an all new set of charts in the next review we publish, it'll be some time before I can devote time to this.

                                            Originally posted by WaltC View Post
                                            Aside from that, I would have liked to see maybe a couple more resolutions tested--I know your monitor won't do >1920x1080, but it will surely do less for testing even if the immediate IQ isn't so hot due to the monitor running in a non-native res--that won't affect the frame rates that result. Of course, as I'm sure you've seen in many other reviews, the 7970 is a high end card designed for high-resolutions, and as such the card doesn't really begin to stretch its legs until you're running @ 2560x1600 or >. Speaking of >...
1920x1080 is the most common enthusiast gaming resolution, and I don't have any 27" or 30" mega monitors. I do have Eyefinity. You use what you've got.

                                            Originally posted by WaltC View Post
                                            Eyefinity tests: First, let me commend you on doing Eyefinity testing with the 7970! I was amazed at the number of review sites that mentioned the capability but then didn't think it was worth reviewing and testing. I think it is one of the signature strengths of the card versus its GTX 580 competition (It takes two nVidia gpus to do the multi-monitor gaming tango, but ATi can do Eyefinity with only one.)

                                            I think that is a major and significant difference, and you did a great job in reviewing and testing for it! It's weird, but did you know you are in the minority of review sites in that sense? Yea, it was hard for me to believe, too, that the majority of sites lucky enough to receive a card to review elected *not* to test Eyefinity, and some didn't even mention the fact that the card they compared the 7970 to, the stock GTX 580, couldn't do anything similar unless you wanted to pay $599 for a custom 580 with a custom chip onboard that would allow it to at least support 3 monitors in a game.
                                            Glad you liked it.

                                            Originally posted by WaltC View Post
                                            Anyway, my only question relative to your Eyefinity testing is it would be nice to know at what resolution(s) you tested... Also, the above suggestions about the arrangement of the bar charts and the color of the review-card's bars apply here, too.
                                            The Dell P2210h's have a native resolution of 1920x1080. I used 3, in portrait configuration. This gives 3240x1920 resolution, plus bezel compensation (which was around 110px per bezel interface, so add 220px to the resolution or so).
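The effective resolution math above can be sketched out quickly. This is just an illustration of the arithmetic in the post, using the ~110px-per-bezel figure quoted there:

```python
# Effective Eyefinity desktop resolution for N identical panels side by
# side in portrait orientation, with bezel compensation between panels.

def eyefinity_portrait(panels: int, native_w: int, native_h: int,
                       bezel_px: int) -> tuple[int, int]:
    """Return (width, height) of the combined desktop."""
    # In portrait, each panel contributes its native *height* to the
    # combined width; bezel compensation adds pixels at each interface.
    width = panels * native_h + (panels - 1) * bezel_px
    height = native_w
    return width, height

# Three 1920x1080 Dell P2210h panels in portrait:
print(eyefinity_portrait(3, 1920, 1080, 0))    # -> (3240, 1920), raw
print(eyefinity_portrait(3, 1920, 1080, 110))  # -> (3460, 1920), with comp
```

So the ~220px of bezel compensation takes the raw 3240x1920 to roughly 3460x1920.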

                                            Originally posted by WaltC View Post
                                            Seem to be plenty of GTX 580's around. I know you were talking about 3GB 580's but, hey, the GTX 580 is the closest thing nVidia has to the 7970, and it is what it is. That's what these reviews show--that the 7970 is the all-around better product. NewEgg's GTX 580's range in price from a low of $479 to a high of $729, depending on the degree and type of customization. The 590, a dual-gpu nVidia card capable of 3-monitor surround gaming, is $799, I believe. Sure makes the $549 tag on the 7970 look a whole lot better, doesn't it?
No, they've been out for 9-12 months, and people expect the new generation to offer more performance for the same money than the last generation. We expect this in everything. Of course, supply and demand and competitive pressure will adjust prices. But the 580 is a year old, and this new card is faster... yet not putting much pressure on it. $499 to $549 is the comparison people will make, regardless of equipped VRAM. OEMs may buy by "bigger numbers are better", but readers of Rage3D tend to be a bit more discerning; they will look at the performance that matters to them, and use that as their basis.

                                            Originally posted by WaltC View Post
                                            I think you've maybe misinterpreted some things along the way. First is the fact that the 7970 is a high-end card, not mid-range and certainly not a "value" card. It's not the only one ATi will produce and ship next year. We will definitely see mid-range ~$300 7000-series cards that will most likely blow the socks off of the competition's ~$300 cards... So , I think AMD is keenly aware of world economic conditions...
I'm not sure what you're trying to tell me here... of course there are more cards - I stated that in the review. First page, you can see the other SI chips and their codenames. That doesn't change the fact that AMD's last two high-end cards have been significantly cheaper. Why? Competitive performance. Do we really expect Kepler and GK100 to be similar performance to Tahiti? If GK100 outperforms GF110 by 30%, what's fair pricing? $679? If you continue AMD's trend it might be. But NV aren't going to do that, they're going to stick around $500-600, which means Tahiti is going to be poor value again and will get cheaper.


                                            Originally posted by WaltC View Post
                                            Second, AMD has never had anything against putting a gpu product in the $500+ markets--it's just that with the 6000 series, with the exception of their 6000-series dual gpu offering, up to now AMD didn't think it had a single-gpu 6000-series product that *would sell* in that price bracket relative to the competition. Now, ATi/AMD is convinced they have such a product and so it has been accordingly MSRP'ed. More power to them! I don't know why you'd think the fact that because AMD now has a $500+ card that is very marketable, it means that AMD isn't going to produce a ~$300 7000-series variant next year that will be a killer card, too. Why would you think that?
                                            I don't think that, and didn't say that. I questioned the line up strategy. If the 7900 doesn't replace the 6900 - it doesn't, it's in a different price point - then the 7800 will.

                                            If the 7800 has enough performance to be worth buying over a 6900, that will make the 7900 look too expensive. To preserve the relative performance of the line up, the 7800 has to be at a performance level you can already get, for a price you can already pay. What's the point? DX11.1? PRT's? Compute? Look at the die sizes for Pitcairn and Cape Verde, how much performance uplift will there be from Barts and Juniper? What if they're priced against Cayman and Barts? That's the question.


                                            Originally posted by WaltC View Post
                                            As for announcing this card before Christmas, just a couple of weeks before AMD's January 9th 2012 announced availability date for the 7970, I think it's been a tremendous publicity initiative and I think AMD should be proud. They've come out pretty straight and narrow with the 7970--with very little hyperbole or sensationalism--and they've publicized this information at a time of the year guaranteed to produce a maximum amount of coverage for them--and I think that's great! It's about time that AMD enjoyed some effective PR!

                                            I wouldn't be surprised to see something better when shipping commences Jan 9th.
                                            Obviously AMD's partners will eventually have custom designs, but I doubt they will have any for general availability for a while.

Nobody likes a soft launch; launch when you're ready to sell. But it works for a lot of companies (Apple, for example).

                                            Originally posted by WaltC View Post
                                            Most of that stuff is already past tense, isn't it? Also, I think AMD has done a lot talking lately about the new initiatives they are putting into place next year to see that a repeat of this years' fumbles don't happen again. Supposedly, AMD is pulling out all of the stops for DevRels next year. Until they prove otherwise, I'm content to take them at their word--they *know* how important driver support is, I'm convinced of it...
                                            Fair enough, your opinion.


                                            Originally posted by WaltC View Post
                                            Well, keep in mind that this *is* a brand new architecture and in the second half of Eric D's interview here at Rage3D he specifically talks about how excited he is about the great stuff coming up in the drivers in the next three months! Sounds good to me, and not at all unusual for new architectures.
Keep in mind I did the interviews with him.


                                            Originally posted by WaltC View Post
                                            I don't really give a flip about HD3D, honestly--never have. What a gimmick. The only "3d" I want is the kind that doesn't require glasses--hey, thanks for asking, though, AMD...
AMD will support common or open standards, or widely adopted industry standards. 120Hz / 60Hz-per-eye 3D had to wait until Fast HDMI appeared because AMD don't want to support somebody else's solution; they want to support a solution they can have input into.

                                            Originally posted by WaltC View Post
No, this is AMD's admission that prior to now it didn't think it could make a single-gpu product worthy of the $500+ market niche as presently occupied by its discrete gpu competitor, nVidia. As far as "building gpus for lower price points and making money" goes, I think AMD/ATi has been pretty good at doing that for years. Just because they've come up with another single-gpu product they think is suitable for the $500+ market (after a good long hiatus from that market), that's no reason to think they aren't going to come out with killer gpus @~$300 next year. Has nothing to do with it, I'd say.
Again, I didn't say that it did, but there's also no indication that the cards at the price points the 6700, 6800, and 6900 inhabit now will be worth replacing those with. Look at 4870 to 5770: similar performance, a little bit cheaper. The 5770 to 6770? Hmm. Rebrand. 4870 to 5870: great! Double the performance, only $99 difference in launch MSRP! 5870 to 6970: it was cheaper when it launched! Fantastic! 6970 to 7970... ouch on the wallet. A 4870-to-5870-sized uplift, but nearly $200 on the price. Hmm. OK, so redefining segments... but what happens to performance? That's the point, we don't have info on that.

                                            But, I'm like you, I'm hopeful AMD have an awesome strategy to come in.


                                            Originally posted by WaltC View Post
                                            Far as I know, new architectures *always* replace the old ones, which means, yes, some overlap of feature support (I mean, 'cause you don't want to *give up* the good stuff when you go to something new, do you?), but which will generally, as a rule, also offer superior performance and some *new* and possibly killer features at the same time.!
                                            Exactly, but when you redefine the top of your price stack you leave room for adjustment on the rest of the line up.

Let's look at a little history. RV770 became the 4870/4850/4830 (not all launched at the same time). Cypress became the 5870, 5850, 5830 (again a staggered launch). Cayman, the 6970 and 6950.

All three generations had a problem - very, very close price stacking. The price spread from mainstream to performance to enthusiast was like $150-$200. A lot of cards in a small range. Great for consumers, great for OEMs, hard for AMD.

So Evergreen moved Cypress up a bit, to give more breathing room, and the delay between launching the 5850 and the 5830 left a gap between the 5770 and 5850 you could drive a 460 through. Northern Islands was a bit better, but still a lot of density between $100 and $250 - 6750, 6770, 6850, 6870, 6950 in a price range of $150. Hard to make money with that much product in a little space, they tell me.

Putting the Southern Islands top dog Tahiti XT at $550 gives everything more breathing room. Now, if there's a Tahiti Pro and a Tahiti LE, they can be, let's make up a number, ~25% less each. That puts Pro at ~$400 and LE at ~$300. If we consider there will likely be a ~20% perf drop between each, that's a pretty strong competitive line up. That leaves $100 to $300 for the two other ASICs, so maybe $300 is a little low for Tahiti LE and it's going to be $330. Will it be more/same performance as Cayman XT? Possible, and that's an interesting point.
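The ~25% step-down above is a made-up number from the post, but as a quick sanity check on the arithmetic:

```python
# Hypothetical price ladder: step each Tahiti variant down ~25% from
# the $549 flagship MSRP. Purely illustrative, per the speculation above.
FLAGSHIP_MSRP = 549
STEP = 0.75  # each tier keeps ~75% of the tier above

prices = [round(FLAGSHIP_MSRP * STEP ** tier) for tier in range(3)]
print(prices)  # XT, Pro, LE -> [549, 412, 309]
```

Which lands right on the ~$400 Pro / ~$300 LE guesses.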

So for Pitcairn we have the $200-$300 'sweet spot' for enthusiast gamers. RV770 nailed that segment; Cypress was only just in it with the Pro, but did well once the LE was added; Cayman was only there with the Pro, but Barts covered the bottom end, and that worked well.

Juniper stayed in the $150-$200 point for a long time, then shifted into the $100-$150 range to make room for Barts (and for competitive reasons as well). Cape Verde can come back to the full range, maybe spread over $100-$200 if there are three versions. We're going to be on 28nm for a while, and yields aren't the best they can be, so there's going to be some binning to recover costs - it's possible.

So it could be three GPUs, each with three variants: 9 products covering $100-$600. And there's the dual-Tahiti card... the 6990 was $739, vs. a 6970 which was $369. Hmm. That's pretty much double the price for the same performance, but in a smaller form factor and with a lower power profile. $549 doubled... $1100 for a graphics card? ASUS only made 1K of their limited edition MARS style cards for a reason. People don't buy $1K+ graphics cards. So that price point is unlikely. But $899? $949? Definitely conceivable.

                                            It's going to be fun to find out!

                                            Comment


                                              #23
For price ranges, many factors come into it: R&D costs, yields, card complexity, competition, availability, supply & demand, and lately currency inflation. I thought AMD would concentrate on the high end due to availability and the profit margin on smaller sales numbers. The 28nm process is just taking off, with many others wanting it as well. Also, the price structure looks like it will continue to support 40nm production for the lower cost solutions until 28nm production and yields ramp up.

Personally I want to see AMD make some money, real money, and not try to please everyone by giving away cards at below cost. Put out superior products and price them reasonably, which for the 7970 is a rather reasonable price for what you get - frankly almost too cheap. The 7950 price range should be rather enticing for a lot of folks if it performs around a 580 (I think it will be a little bit faster). If AMD can sell 7970s and make a good profit, it will entice them to do this again, pulling out all the stops and not being too restricted by price structures. Nvidia sold Ultras in the $800 range, which were by no means Ultra in performance compared to their next performance card, and they sold well. The bad part (my opinion on superficial cards) is they bring in individuals that become more like zealots, religiously spreading the greatness of that $800 card, which prompts others to do the same. I hope AMD does not do that - just keep it reasonable and fun and build a loyal following, but more importantly prudent customers that appreciate great products at a good price (not free).
                                              Ryzen 1700x 3.9ghz, Thermaltake Water 2.0 Pro, Asus CrossHair 6 Hero 9, 16gb DDR4 3200 @ 3466, EVGA 1080 Ti, 950w PC pwr & cooling PS, 1TB NVMe Intel SSD M2 Drive + 256mb Mushkin SSD + 512gb Samsung 850evo M.2 in enclosure for Sata III and 2x 1tb WD SATA III, 34" Dell " U3415W IPS + 27" IPS YHAMAKASI Catleap. Win10 Pro

                                              Custom SFF built case, I7 6700k OC 4.4ghz, PowerColor R9 Nano,, 1TB NVMe Intel SSD M2 Drive, 16gb DDR 4 3000 Corsair LPX, LG 27" 4K IPS FreeSync 10bit monitor, Win 10

                                              Comment


                                                #24
                                                AMD has a winning card here. It's great to see and I love it. I waited a few weeks to see which card I'll buy but this is more than I'm willing to pay. It's priced right for what it is. I settled for a cheaper nvidia card.
                                                CURRENT PC:
                                                Seasonic X760 Gold | Corsair 600TM case | 32GB RAM | Some AM4 mobo | AMD 3900X |Lots of SSD and NVME | EVGA RTX 3090 | Some LG 4k monitor

                                                Comment


                                                  #25
Could we possibly get a reverse situation? Nvidia trying to get on the wagon with their low-priced cards (I guess by the time they are out they will have no chance, huh)? This could get interesting.

                                                  Comment


                                                    #26
                                                    OMG total pwnage of the Green Team's idol GTX580
                                                    HD7970 is da bomb!

                                                    Comment


                                                      #27
                                                      Originally posted by Crisler View Post
                                                      Caveman good review and awesome write up on the conclusion. To many reviewers are just seeing the numbers but you looked at the numbers and then considered the trickle down, well done.
                                                      Totally agreed.

                                                      Comment


                                                        #28
                                                        Thanks for the review

                                                        I have 2 x 5870 in eyefinity @ 5870 x 1080

                                                        i currently run SWTOR and RIFT on 1 x 5870 (crossfire doesn't really help)

                                                        It's mostly smooth as silk apart from some areas where i get some slowdowns.

                                                        would i see improvements with a single 7970?
                                                        Gigabyte X58 UD3R - I7 920 @ 3.6GHz - HIS7970
                                                        6 Gig Corsair Dominator - 3 x 27" LG 6040 x 1080
                                                        Vantage Coolit CPU Cooler - Corsair SSD 128g
                                                        Corsair 800D Case - 850 Antec Tru - win 7 x64

                                                        Comment


                                                          #29
Yes, but depending on the rest of your system you may be CPU limited. It really depends on why you're getting slowdowns: are you out of VRAM, is it a GPU bottleneck, a CPU bottleneck, or system memory?

                                                          Comment
