RX 480 power usage vs Nvidia...

pax

So Higgy said this in his own thread, but I can't find it, so I made one:

People need to ask more serious questions about Nvidia's huge power-efficiency edge. TPU finds the differences odd. I remember Nvidia cheating by rendering fewer pixels with a darker image way back when (around the time I had a Ti 4200, IIRC), but it got more FPS. That's when I went red team and never looked back. Well, that and two Nvidia video cards that died in under two years.

I mean, 70% more efficient with 2 billion more transistors? What is Nvidia cutting out to get there vs the RX 480? When I buy a desktop card I don't want a mobile part, with whatever throttling it plays with or a lack of features like proper DX12 support.
 
I have a 6950 that draws 200 W...
I also have an FX-8370...
So you feel Nvidia is cheating in some way?
I'll get the popcorn... :popcorn: :popcorn:
 
That was a long time ago .. I don't think that's the case anymore.

I think NV has the more efficient process. They had better efficiency with Maxwell compared to Fury/Fury X, and TSMC is just plain better than GloFo in nearly every way.

GCN was not an efficient architecture (though the Nano was a great development and maturation of the arch), and I think that has sort of carried over to Polaris with the RX 480.

I think a lot of the blame lies with GloFo, but we don't really have much evidence to support that beyond the history of AMD's inefficiency, especially since the move to GloFo.

I'm not sure what you mean by "When I buy desktop I don't want a mobile part" .. you're not making much sense there. Assuming you're talking about the boost system, it's very well developed and I've had no issues with it. You have a point on full DX12 compliance, but it seems NV just doesn't care much about DX12 yet. I think that's a smart business decision: the cards are DX12-capable, but until DX12 titles are flooding the market there's not much reason to care, I guess. In other words, I understand the business decision and I agree with it. We'll see if it comes back to bite NV in the long run, but I don't believe it will.

I'm suspecting Big Pascal will have "proper DX12 support" like you want.

Anyway, to get back to the power-usage discussion: I'm just really worried about Vega if this is the kind of efficiency AMD is seeing with the RX 480. I'm hoping GloFo gets their **** together and gets ****ing efficient, giving AMD more room to strut their stuff with Vega. Vega being on HBM will also free up a little wattage, so there will be some efficiency gains there too.

I just want some damn competition so NV can't sell reference coolers for $100 extra. Reference ****ing coolers. The worst.
 
A few sites said the current Nvidia lineup was designed as a mobile part first and then ported to desktop. But mobile parts are known to throttle a lot to hit power targets, or to lack features vs their desktop counterparts.

I'm speculating that to achieve such huge power savings per transistor or per clock, Nvidia had to give something up somewhere. As it stands, it sounds like they're doing one of two things: breaking the laws of physics, or cheating (throttling some aspect of their GPUs, cutting features, etc.).

When I see the reference RX 480 get somewhat close to a 1070 in a few games at 1080p, I start to wonder.


https://www.techpowerup.com/reviews/AMD/RX_480/17.html

https://www.techpowerup.com/reviews/AMD/RX_480/13.html

Really looking forward to overclocked numbers next month on AIB versions of the 480.
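
If anyone wants to sanity-check a chart like TPU's, the perf-per-watt math is trivial. Here's a quick Python sketch; the FPS and wattage values below are placeholders, not TPU's measurements, so plug in the figures from the pages linked above.

```python
# Back-of-envelope performance-per-watt comparison.
# All numbers are illustrative placeholders -- substitute the average FPS
# and measured gaming power from the TPU review pages linked above.

cards = {
    # name: (avg_fps_1080p, gaming_power_watts)  <- placeholders only
    "RX 480 (ref)": (80.0, 164.0),
    "GTX 1070":     (100.0, 147.0),
}

def perf_per_watt(fps: float, watts: float) -> float:
    """Average frames per second delivered per watt drawn."""
    return fps / watts

baseline = perf_per_watt(*cards["RX 480 (ref)"])
for name, (fps, watts) in cards.items():
    ppw = perf_per_watt(fps, watts)
    print(f"{name:13s} {ppw:.3f} FPS/W ({ppw / baseline:.0%} of the RX 480)")
```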
 
Where do you see the 480 getting close to the 1070 in that link?

The only throttling issues I've seen are from people running the reference coolers, where it gets too hot. On the AIB 1080s/1070s I've seen little to none, and very good overclocking numbers being put up.

Also keep in mind that the 1080/1070 are akin to the 680/670: a small, power-efficient die that isn't the full-featured architecture. The big boy with the full feature set will be the 1080 Ti/Titan, as it has been for the 6xx/7xx/9xx series.
 
At 1080p it's only about 20% slower than a 1070. Now ask yourself: if the RX 480 had 2 billion more transistors, putting it at the same level as the 1070, how much better would those scores be?
 
Look at it however you want, but this should not be happening. Look at the improvement over Fury: this is downright terrible. Why even bother with a new node?

[Chart: performance per watt at 1920x1080, from the TPU review]
 
I'm surprised to see the 980 Ti coming out 2% ahead in performance per watt, lol. That's actually a crazy number for a 28 nm card against a brand-new 14 nm part.
 
Yeah, we all know about the ~70% power-usage edge, which is even bigger per transistor/clock. It'll be interesting to see how the 1060, which will have closer to the same transistor count as the 480, does in that respect.
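
To put that per-transistor angle in rough numbers, here's a quick sketch normalizing relative 1080p performance by transistor count. The 1070's 1.25x figure just restates the "480 is about 20% slower" claim from this thread, and the ~4.4B count for the unreleased 1060 is an assumption; 5.7B (Polaris 10) and 7.2B (GP104) are the published counts.

```python
# Crude relative-performance-per-transistor comparison.
# rel_perf: RX 480 = 1.00 baseline; the 1070 at 1.25 follows from the
# "480 is ~20% slower" claim above (1 / 0.8 = 1.25). The GTX 1060 row
# is speculation: performance unknown, ~4.4B transistor count assumed.

chips = {
    # name: (rel_perf_1080p, transistors_in_billions)
    "RX 480 (Polaris 10)": (1.00, 5.7),
    "GTX 1070 (GP104)":    (1.25, 7.2),
    "GTX 1060 (GP106)":    (None, 4.4),
}

for name, (perf, xtors) in chips.items():
    if perf is None:
        print(f"{name:20s} perf TBD, {xtors}B transistors")
    else:
        print(f"{name:20s} {perf / xtors:.3f} rel-perf per billion transistors")
```

By that crude measure the two are close to parity per transistor; the gap that jumps out is in watts, not transistor count.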
 
At 1080p it's only about 20% slower than a 1070. Now ask yourself: if the RX 480 had 2 billion more transistors, putting it at the same level as the 1070, how much better would those scores be?

Just Cause 3: the 480 @ 71 FPS, the 1070 @ 110 FPS.

Fallout 4: the 480 @ 91 FPS, the 1070 @ 114 FPS.

Both of these are CPU-heavy games, though. Not sure they're a very good indicator.

1080p is almost done as an "I need a flagship to run this maxed out" resolution. My single 980 Ti crushed everything @ 1080p. It's not the res to look at anymore. Mid-range cards can run it well, especially in mediocre-looking titles like JC3 and FO4.
 
Although I agree with the power-usage comparison, can you show me a new 1070 or 980 Ti that I can purchase for $200-$239? I'm sure both companies are still working out inefficiencies and will only get better.
 
I'll have to agree with most of the sentiments here. While I truly like Polaris (especially the price these things are selling at), the power usage is a real head scratcher...

There is just something about the GCN architecture that, first, doesn't like high clocks. And second, AMD pushes clocks as high as they reasonably can to stay competitive, which means pumping a ton more juice into the chips.

When these chips are clocked 100-200 MHz lower than stock, they SIP power. And when I say "these chips", I'm talking every GCN chip since 1.0. AMD has just been perpetually caught in a place where they HAVE to push the limits (killing a lot of efficiency along the way) to stay competitive and sell.

I hope the Vega architecture has something a little more special going on. Hope we're not looking at GTX 1080/Ti performance in their top-tier chip at Fury X-level power usage... I don't know about the rest of you, but I don't like that I have to push +24 mV to get to 1100 MHz on my Fury X, which in turn consumes an additional 100 W(!) or so.
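
For what it's worth, here's a minimal sketch of the textbook dynamic-power relation (P is roughly C*V^2*f), assuming a ~1.2 V stock voltage and the 1050 MHz reference clock on Fiji. It shows why that extra ~100 W is so striking.

```python
# Textbook dynamic power scaling: P_dyn is proportional to C * V^2 * f.
# Stock values are assumptions (~1.2 V varies per Fiji ASIC; 1050 MHz is
# the Fury X reference clock).

f0, v0 = 1050.0, 1.200   # assumed stock clock (MHz) and voltage (V)
f1, v1 = 1100.0, 1.224   # +50 MHz at +24 mV, per the post above

scale = (f1 / f0) * (v1 / v0) ** 2
print(f"dynamic power scales by ~{scale:.3f}x (+{scale - 1:.1%})")

# On a ~275 W card that's only ~25 W from this simple model, so a ~100 W
# jump points at effects beyond plain V^2*f scaling (leakage, VRM losses,
# a larger effective voltage swing under load).
```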

But hey, maybe they DO have something special going on for Vega, which is why it won't hit until the end of this year/early next year. I'd be completely OK with a power hog if it destroys the obviously upcoming Ti version of the 1080... If not, it had better be damn efficient!
 
Most people will just play games... 160 W.
Others will worry about comparing it vs Pascal stuff... I'm not saying the difference isn't visible, but does it really matter enough to burn this much time and brainpower on? :lol:
 
I hope the Vega architecture has something a little more special going on...

Hope so...

...or Nvidia fixes Pascal's DX12 support on the 1080 Ti.
 
Courtesy of TH:

[Chart: power consumption and performance, Tom's Hardware]


So if we look at performance, the 1080 uses 9 watts more power but also has a massive performance lead. The surprising one is the GTX 1070, which uses 16 W less power and still has a commanding lead. Factor in CrossFire and, well, the more you look at it: less wattage, less heat, better performance in the single-GPU scenario, and it's very easy to tell the outcome. Now, this is just one game, but the other tests in the TH review show a similar story.
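
As a rough cross-check, here's a watts-per-frame sketch using reference board powers and the Just Cause 3 averages quoted earlier in the thread. Measured gaming draw, as in the TH chart, will shift the exact numbers, but the direction matches.

```python
# Watts per average frame at 1080p, Just Cause 3 numbers from this thread.
# Uses reference board power (both cards rated 150 W); measured gaming
# draw, as in the TH chart above, differs from the rated figure.

cards = {
    # name: (board_power_watts, jc3_avg_fps)
    "RX 480":   (150, 71),
    "GTX 1070": (150, 110),
}

for name, (watts, fps) in cards.items():
    print(f"{name:8s} {watts / fps:.2f} W per frame, {fps / watts:.3f} FPS/W")
```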

Now compare that with the cards the 480 is actually supposed to go up against.

GTX 970 - 145 W
GTX 980 - 165 W

Note that I'm comparing reference specs here rather than the AIB cards available. These cards, released two years ago, use the same or less wattage, have half the memory, and still manage to trade blows with the 480.

In my eyes AMD may have made advances within its own tech, but it is still far behind the competition.
 