AMD Polaris architecture for Arctic Islands / Radeon 400 series GPUs

Exactly, anything coming from Gandolf must be taken with a grain of salt. I couldn't agree more.

To be fair, a lot of Nvidia fans did call out the water cooling on the Fury X. Seems it was necessary for AMD's architecture on the ol' 28nm process to keep temps in check for the high-end card. I don't think either side will need WC on their top-end cards this time around, but it wouldn't surprise me, as it is still the best way to get a max OC! I just want to see the performance jumps for both sides, as 4K HQ gaming is in dire need of it... blahh
 
To be fair, a lot of Nvidia fans did call out the water cooling on the Fury X. Seems it was necessary for AMD's architecture on the ol' 28nm process to keep temps in check for the high-end card. I don't think either side will need WC on their top-end cards this time around, but it wouldn't surprise me, as it is still the best way to get a max OC! I just want to see the performance jumps for both sides, as 4K HQ gaming is in dire need of it... blahh

Honestly I never once bashed the WC setup. I've thought about buying AIO WC cards in the past but never did because I've had pumps go bad before and would hate to see that happen on a GPU. To each their own I guess
 
Given that both Nvidia and AMD will try to extract as much performance as they can from Pascal and Greenland, and will do so as long as the yields are halfway decent, both are likely to push clock speeds pretty high right from the start, at the expense of power consumption, even with FinFETs and the 14–16 nm process.


Remains to be seen if Nvidia hops on the integrated water block/pump just like AMD did on the Fury. If nothing else, it made for a cool-running card that is pretty silent, and it's especially useful in multi-GPU setups: the system overall can still be fairly quiet even with two cards, as the vast majority of the heat is dissipated by the radiators, which aren't near the cards themselves and have a greater surface area than a regular heatsink bolted onto the card directly, never mind that water is a better heat conductor to begin with.



60–65°C under load is pretty impressive for a stock cooler.


Nope. If you push the clocks too high for the process and the architecture, you will need it, just like AMD did. If you look at an overclocked Nano, a 10% increase in clock speed creates a 30% increase in power; it's outside the ideal range of its architecture and process limits, and you have to turn the fan up to 100% speed to keep it in its thermal range.

If you take a look at nV's Maxwell 2 architecture, they have no problem going 1400+ MHz on most of their cards without WC, so why can't Fiji GPUs do the same, or even reach that with WC? They are on the same process; architecture has a lot to do with frequency and power consumption. Power consumption has a direct link to the heat created, it's 1:1. And power consumption has a direct link to frequency and temperature. All of these are interrelated.
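
To put rough numbers behind that 10%-clock-for-30%-power observation, here is a back-of-the-envelope sketch; the clock and voltage figures are made up for illustration, not measured Nano values:

[code]
#include <cstdio>

int main() {
    // Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
    const double f0 = 1000.0;    // baseline core clock in MHz (illustrative)
    const double v0 = 1.10;      // baseline core voltage in volts (assumed)
    const double f1 = f0 * 1.10; // +10% clock
    const double v1 = 1.20;      // assumed voltage needed to hold the higher clock

    // Relative dynamic power: (V1/V0)^2 * (f1/f0)
    const double scale = (v1 / v0) * (v1 / v0) * (f1 / f0);
    std::printf("relative power: %.0f%% of baseline\n", scale * 100.0);
    // Prints roughly 131%, i.e. ~30% more power for a 10% clock bump
    // once a voltage increase is involved.
    return 0;
}
[/code]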
 
You act like all nVidia customers were bashing the AMD WC setup :nuts:
Point me in the direction of an Nvidia loyalist that praised the WC setup, or at least talked about it in a good light. I must have missed all of them, because all I saw was non-stop bashing and how AMD needed WC due to the extremely high power consumption of Fury.
 
To be fair, a lot of Nvidia fans did call out the water cooling on the Fury X. Seems it was necessary for AMD's architecture on the ol' 28nm process to keep temps in check for the high-end card. I don't think either side will need WC on their top-end cards this time around, but it wouldn't surprise me, as it is still the best way to get a max OC! I just want to see the performance jumps for both sides, as 4K HQ gaming is in dire need of it... blahh
Not temps; noise, maybe. But the non-X Fury runs quiet and doesn't run over temp.

Honestly I never once bashed the WC setup. I've thought about buying AIO WC cards in the past but never did because I've had pumps go bad before and would hate to see that happen on a GPU. To each their own I guess
After an ASUS or MSI 3-year warranty, who cares? It will be long obsolete.

I like them better than I thought I would. They are very quiet and run cool, even the one with its rad on a post mount inside my case.

I do hope they both keep making them and that Polaris has an AIO model.
 
When I read the word WC here for the first time, my mind translated it as "water closet"... I had to read it three times to switch my mind to "water cooled". Funny mind I have... Also, reading the phrase as "water closet" was really humorous...

I also like that AMD is talking about stars now...
 
Nope. If you push the clocks too high for the process and the architecture, you will need it, just like AMD did. If you look at an overclocked Nano, a 10% increase in clock speed creates a 30% increase in power; it's outside the ideal range of its architecture and process limits, and you have to turn the fan up to 100% speed to keep it in its thermal range.

If you take a look at nV's Maxwell 2 architecture, they have no problem going 1400+ MHz on most of their cards without WC, so why can't Fiji GPUs do the same, or even reach that with WC? They are on the same process; architecture has a lot to do with frequency and power consumption. Power consumption has a direct link to the heat created, it's 1:1. And power consumption has a direct link to frequency and temperature. All of these are interrelated.



True, but I think how the individual features are managed in terms of power usage, and keeping them turned on (or off) in Maxwell's case, also plays a part there. There's no way Nvidia would ignore a nearly 300 MHz increase in clock (from 1125 MHz to ~1400 MHz) if the yields were good and it could sustain those clocks with all the units inside the GPU remaining functional under 100% usage scenarios in all games, while temperatures and power usage stayed in check. Worst-case scenario, basically.



Hence why I've heard that under more recent and demanding titles, where the functional units are put to use in a more demanding way, a fair number of users are having to dial down the clocks, as the temperatures at those high clocks go into the danger zone, to put it mildly, even if we ignore the power usage altogether.



There is built-in clock throttling once the GPU hits 95°C no matter what, and such protection is only there because it is possible to make the card hit those temperatures at its stock clocks, depending on the game, the game settings, the ambient temperature of the room where the PC is, the ventilation inside the case... The list of variables goes on and on, as you know.
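
Roughly speaking, that protection boils down to something like the following sketch; the thresholds, step size and stock clock are assumed values for illustration, not the actual firmware logic:

[code]
// Simplified thermal-throttle step: back the clock off while the GPU sits at
// or above the trip point, recover toward stock once there is headroom again.
int throttle_step(int clock_mhz, double temp_c) {
    const double trip_c    = 95.0;   // throttle threshold mentioned above
    const double resume_c  = 90.0;   // assumed hysteresis point
    const int    step_mhz  = 13;     // assumed throttle step
    const int    stock_mhz = 1000;   // assumed stock clock

    if (temp_c >= trip_c)
        return clock_mhz - step_mhz;   // back off
    if (temp_c <= resume_c && clock_mhz < stock_mhz)
        return clock_mhz + step_mhz;   // climb back toward stock
    return clock_mhz;                  // hold
}
[/code]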
 
True, but I think how the individual features are managed in terms of power usage, and keeping them turned on (or off) in Maxwell's case, also plays a part there. There's no way Nvidia would ignore a nearly 300 MHz increase in clock (from 1125 MHz to ~1400 MHz) if the yields were good and it could sustain those clocks with all the units inside the GPU remaining functional under 100% usage scenarios in all games, while temperatures and power usage stayed in check. Worst-case scenario, basically.



Hence why I've heard that under more recent and demanding titles, where the functional units are put to use in a more demanding way, a fair number of users are having to dial down the clocks, as the temperatures at those high clocks go into the danger zone, to put it mildly, even if we ignore the power usage altogether.



There is built-in clock throttling once the GPU hits 95°C no matter what, and such protection is only there because it is possible to make the card hit those temperatures at its stock clocks, depending on the game, the game settings, the ambient temperature of the room where the PC is, the ventilation inside the case... The list of variables goes on and on, as you know.


They didn't ignore them; power consumption goes up for them too. If they needed to push them more they would have, but they didn't need to. With nV's products, though, the increase in power usage is more linear (this is because nV's chips don't need voltage increases, whereas Fiji does). Temperature, yeah, but they could have made a new cooler for them if they needed to as well; instead they decided to use an older cooler, and that saved them design money.

Both IHVs' chips have clock gating and power-management states. If anything, AMD should have had an advantage by going to HBM: they don't need to clock their memory bus as high as nV's, and the amount of silicon needed in the GPU for it is less too!
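
As a rough back-of-the-envelope comparison, using the commonly quoted per-pin rates for Fiji's HBM1 and GM200's GDDR5:

[code]
#include <cstdio>

// Peak memory bandwidth in GB/s = bus width (bits) * per-pin rate (Gb/s) / 8.
double peak_gbs(int bus_bits, double gbps_per_pin) {
    return bus_bits * gbps_per_pin / 8.0;
}

int main() {
    // Fiji: 4096-bit HBM1 at 500 MHz DDR -> 1 Gb/s per pin.
    std::printf("Fiji  HBM1 : %.0f GB/s\n", peak_gbs(4096, 1.0)); // 512 GB/s
    // GM200: 384-bit GDDR5 at 7 Gb/s per pin.
    std::printf("GM200 GDDR5: %.0f GB/s\n", peak_gbs(384, 7.0));  // 336 GB/s
    return 0;
}
[/code]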

The list of variables goes on, I agree, but what we know doesn't. Unless AMD has been feeding us spoiled food about their own architecture and HBM, I don't think we can ignore those facts, because AMD sure thought they were important; they wouldn't have touted all those things if they didn't think they helped. They did help, just not enough.

What if HBM hadn't been ready and AMD had needed to use GDDR5, what would the situation have been then? I think they would have been stuck with the R9 290X at this node. If Fiji had been cut down because they needed the extra silicon for the larger GDDR5 bus, its performance wouldn't have been enough to make it viable.

What probably happened is that they weren't expecting such a large increase in power efficiency from nV without changing nodes. By the time they got wind of Maxwell gen 1, it was too late for them to make any substantial modifications to what they had in store for Fiji. When they saw the GTX 980 they probably thought, OK, that's not too bad, we can compete with the Nano, that can be our high end; it would have done well, and even though it used a bit more power, its performance would have justified that. Then the Titan X came out: OK, well, we can get close at around the same power consumption, and since the Titan X is priced really high we can compete on price and still come out ahead even with it being WC'd. Then the 980 Ti just crushed all hope.

And nV planned for the higher frequency in its architecture too; there is no way a 40% increase in frequency isn't planned for.

Frequency is one of the things chip manufacturers look into as well when designing chips. Yeah, ten years ago they would just leave that to the end and expect the newer nodes to take care of it, but not anymore; that's something they plan for now.

Hmm, more like 15 years ago. I think they started planning for frequency around the G80 and R600, maybe a bit earlier for nV; I remember something about the time when they had issues with the FX line and the node change. But it really showed up with the G80 and the semi-custom libraries nV started to use. This is something AMD hasn't done yet either.

And from the looks of it, AMD will not be using custom libraries with Polaris either, since they have split their manufacturing between three different plants / two different fab processes.

Sum this all up and, to keep things back on topic: there is nothing wrong with an AIO WC system for a graphics card, but if the competition doesn't need it and beats you without it, well, then you've got a problem.

So if Pascal's high end needs WC to stay marketable against a Polaris high end that doesn't need it, Pascal will be a fail in my eyes. But if both need it, OK, there are no options, although I don't like using AIOs, or any WC for that matter; if I were forced to go WC, I would prefer custom radiators, where I have control of what I want.
 
They didn't ignore them; power consumption goes up for them too. If they needed to push them more they would have, but they didn't need to. With nV's products, though, the increase in power usage is more linear (this is because nV's chips don't need voltage increases, whereas Fiji does). Temperature, yeah, but they could have made a new cooler for them if they needed to as well; instead they decided to use an older cooler, and that saved them design money.

Both IHVs' chips have clock gating and power-management states. If anything, AMD should have had an advantage by going to HBM: they don't need to clock their memory bus as high as nV's, and the amount of silicon needed in the GPU for it is less too!

The list of variables goes on, I agree, but what we know doesn't. Unless AMD has been feeding us spoiled food about their own architecture and HBM, I don't think we can ignore those facts, because AMD sure thought they were important; they wouldn't have touted all those things if they didn't think they helped. They did help, just not enough.

What if HBM hadn't been ready and AMD had needed to use GDDR5, what would the situation have been then? I think they would have been stuck with the R9 290X at this node. If Fiji had been cut down because they needed the extra silicon for the larger GDDR5 bus, its performance wouldn't have been enough to make it viable.

What probably happened is that they weren't expecting such a large increase in power efficiency from nV without changing nodes. By the time they got wind of Maxwell gen 1, it was too late for them to make any substantial modifications to what they had in store for Fiji. When they saw the GTX 980 they probably thought, OK, that's not too bad, we can compete with the Nano, that can be our high end; it would have done well, and even though it used a bit more power, its performance would have justified that. Then the Titan X came out: OK, well, we can get close at around the same power consumption, and since the Titan X is priced really high we can compete on price and still come out ahead even with it being WC'd. Then the 980 Ti just crushed all hope.

And nV planned for the higher frequency in its architecture too; there is no way a 40% increase in frequency isn't planned for.

Frequency is one of the things chip manufacturers look into as well when designing chips. Yeah, ten years ago they would just leave that to the end and expect the newer nodes to take care of it, but not anymore; that's something they plan for now.



It's hard to say, to be honest, as Fiji's transistor budget is nearly 1 billion higher than Maxwell's (8.9 billion versus 8 billion for Maxwell), and it does have an edge when it comes to shading power and geometry handling, but Maxwell has the edge when it comes to texturing and raw fill rate when not memory-bandwidth constrained, hence why one or the other is slightly faster depending on the game and settings.




In any case, current games aren't using that much for the time being, so it's a moot point, as we're already discussing Polaris versus Pascal. Basically, either GPU's time in the spotlight is limited, and the real abilities of Maxwell and Fiji won't really be known until games are created that need that much GPU firepower, at least for the recommended specifications. We won't care who's on top by then, since both companies will have even more powerful architectures no matter what.



So the main issue is that both Nvidia and AMD have to design an architecture not just for games in the future, but also for games that were never designed to need that much GPU firepower to begin with. It's why I hardly pay attention to benchmarks anymore, as there's no way to design a bad GPU anymore when talking about transistor budgets that exceed the entire human population of the planet, never mind Polaris and Pascal likely doubling that figure.
 
It's hard to say, to be honest, as Fiji's transistor budget is nearly 1 billion higher than Maxwell's (8.9 billion versus 8 billion for Maxwell), and it does have an edge when it comes to shading power and geometry handling, but Maxwell has the edge when it comes to texturing and raw fill rate when not memory-bandwidth constrained, hence why one or the other is slightly faster depending on the game and settings.

Hmm, no, Fiji only has the shader performance; Maxwell, and even Kepler, beats Fiji in geometry throughput.


In any case, current games aren't using that much for the time being, so it's a moot point, as we're already discussing Polaris versus Pascal. Basically, either GPU's time in the spotlight is limited, and the real abilities of Maxwell and Fiji won't really be known until games are created that need that much GPU firepower, at least for the recommended specifications. We won't care who's on top by then, since both companies will have even more powerful architectures no matter what.
Yes and no. I think these cards were just stopgaps for much better DX12 and next-gen API architectures. They were planned as backups once they noticed 20nm wasn't going to be good for high-end GPUs.

True, we can't really speculate much about Pascal and Polaris, even with what AMD has shown so far; with the frame-rate cap on in their demo, it really doesn't show anything. Frame-rate locks give AMD hardware a sizable power reduction that nV hardware can't take advantage of to the same degree, and the cap doesn't let us know what the actual frame rates are for each card.
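
For reference, a frame-rate cap amounts to little more than the following sketch (real limiters handle timer resolution and pacing more carefully), which is also why capped numbers say little about uncapped performance:

[code]
#include <chrono>
#include <thread>

// Minimal frame limiter: sleep away whatever is left of the frame budget so the
// GPU idles instead of rendering flat out -- which is where the power saving
// comes from.
void limit_frame(std::chrono::steady_clock::time_point frame_start, double target_fps) {
    const auto budget  = std::chrono::duration<double>(1.0 / target_fps);
    const auto elapsed = std::chrono::steady_clock::now() - frame_start;
    if (elapsed < budget)
        std::this_thread::sleep_for(budget - elapsed);
}
[/code]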

So the main issue is that both Nvidia and AMD have to design an architecture not just for games in the future, but also for games that were never designed to need that much GPU firepower to begin with. It's why I hardly pay attention to benchmarks anymore, as there's no way to design a bad GPU anymore when talking about transistor budgets that exceed the entire human population of the planet, never mind Polaris and Pascal likely doubling that figure.
Hmm, benchmarks themselves don't mean much, but what developers want for their next-gen games does. Developers want more polygons, they want more tessellation, they want more shader horsepower, they want more bandwidth. I have been talking to other development teams for insight on what their target goals are for different aspects, so I can plan the game my team and I are making: 3 times the polygon counts for characters seems to be common, 8K textures for everything, possibly multiple 8K textures for main characters. Shaders are going to be more complex, but not too much more than what we have now, with the possibility of that changing depending on how much performance Polaris and Pascal bring to the table. More focus on better lighting techniques, which will require more shader horsepower. Etc.
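
To put the 8K texture target in perspective, a quick footprint estimate, assuming plain RGBA8 versus a 1-byte-per-texel block format like BC7 and a full mip chain adding roughly a third:

[code]
#include <cstdio>

// Rough VRAM footprint of a single 8192x8192 texture.
int main() {
    const double texels = 8192.0 * 8192.0;
    const double mips   = 4.0 / 3.0;      // full mip chain adds roughly a third
    const double mib    = 1024.0 * 1024.0;

    const double rgba8_mib = texels * 4.0 * mips / mib; // 4 bytes per texel uncompressed
    const double bc7_mib   = texels * 1.0 * mips / mib; // BC7: 1 byte per texel

    std::printf("8K RGBA8 + mips: ~%.0f MiB\n", rgba8_mib); // ~341 MiB
    std::printf("8K BC7   + mips: ~%.0f MiB\n", bc7_mib);   // ~85 MiB
    return 0;
}
[/code]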

nV is disadvantaged in one of those four areas, shader horsepower.

AMD has two areas they have to look into, geometry throughput and tessellation, and both are linked together, so in reality it's only one area: geometry throughput. And this is a big one; they are lacking a lot here. I didn't realize it was so low until last week, when I was discussing tessellation over at B3D and it was pointed out to me. I was thinking it was tessellation performance where AMD was crapping out, but it was actually geometry throughput. And it gets murdered.
[Image: b3d-poly-throughput.gif (B3D polygon throughput comparison)]



Bandwidth is going to increase for both architectures, so I don't think we need to worry about that.
 
Seems they need to fix that, and it should without question be fixed with Polaris, as they have had time to do so now.

There is the question of how much of an increase they are looking at, because when did they realize how much that deficit had set them back, and to what degree? Also, to what degree are newer games going to push this? Many devs have stated they want to use 2 to 3 times the polycounts on characters alone, and around 2 times the polycounts on environments.

It's one thing if we were looking at a 10 to 20% difference, but this is up to a 300% difference in throughput in certain cases. That's a large gap to close with everything else that is going on (there is only so much silicon they have to play with, i.e. more shader units, more efficiency in utilization and power, etc.).
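
To get a feel for the scale, a rough supply-versus-demand calculation; the per-clock setup rates and scene numbers are illustrative, not figures for any specific chip:

[code]
#include <cstdio>

// Triangles per second a scene demands, versus what a GPU front end can set up.
int main() {
    // Demand side (assumed scene): 10 million visible triangles at 60 fps.
    const double demand = 10e6 * 60.0;               // 600 Mtris/s

    // Supply side: setup rate = triangles per clock * core clock (illustrative).
    const double clock_hz  = 1.0e9;                  // 1 GHz
    const double supply_1x = 1.0 * clock_hz;         // 1 triangle/clock front end
    const double supply_4x = 4.0 * clock_hz;         // 4 triangles/clock front end

    std::printf("demand : %.1f Gtris/s\n", demand / 1e9);
    std::printf("supply : %.1f vs %.1f Gtris/s\n", supply_1x / 1e9, supply_4x / 1e9);
    // Double or triple the polycounts and the narrower front end runs out of
    // headroom much sooner, especially with lots of tiny tessellated triangles.
    return 0;
}
[/code]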

I expect them to do some catch-up here, but I don't expect them to equal nV, just as I don't expect nV to equal AMD's GPUs in raw shader throughput.
 
I guess that is the multi-million-dollar balancing act each vendor faces: where to emphasize and dedicate your resources for the amount of silicon you have.
 
I guess that is the multi-million-dollar balancing act each vendor faces: where to emphasize and dedicate your resources for the amount of silicon you have.

Yep!

I don't know if you guys have watched this video before, but you should know this is how Crytek abused tessellation. These objects were tessellated way too much, and yet they still look flat and bland. Nvidia is faster at tessellation since they have a better geometry engine, and this made ATI GPUs look bad.

http://techreport.com/review/21404/crysis-2-tessellation-too-much-of-a-good-thing

[yt]IYL07c74Jr4[/yt]

They didn't know how CryEngine 2/3 worked; in wireframe mode there is no culling going on, so the engine thinks it has to do the tessellation even though in some cases it didn't.

I think this was discussed quite a few times.
 
They didn't know how CryEngine 2/3 worked; in wireframe mode there is no culling going on, so the engine thinks it has to do the tessellation even though in some cases it didn't.

Being in wireframe has nothing to do with it; engines don't care whether they are in wireframe mode or not for culling purposes, they just care whether stuff is fully behind a triangle or an object's bounding box.
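
For what it's worth, a visibility test is purely geometric and never looks at the fill mode. Here's a minimal frustum-cull check against an object's bounding box, just to show the shape of such a test; the names and plane layout are my own, not any engine's actual API, and occlusion tests follow the same purely geometric idea:

[code]
#include <array>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // point p is "inside" when dot(n, p) + d >= 0
struct Aabb  { Vec3 min, max; };

// Conservative cull: the box is rejected only if it lies entirely on the
// negative side of some plane. Nothing here knows or cares about fill mode.
bool potentially_visible(const Aabb& box, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        // Test the box corner farthest along the plane normal.
        const Vec3 corner{
            p.n.x >= 0 ? box.max.x : box.min.x,
            p.n.y >= 0 ? box.max.y : box.min.y,
            p.n.z >= 0 ? box.max.z : box.min.z,
        };
        if (p.n.x * corner.x + p.n.y * corner.y + p.n.z * corner.z + p.d < 0)
            return false;  // fully outside this plane -> cull the object
    }
    return true;
}
[/code]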

And they didn't know how CryEngine 2 worked? Who is "they"? Nvidia? As a GPU maker and a graphics driver maker, they are required to know how a game engine works; if not, how the hell would they optimize their drivers for it?

Crytek implemented extreme tessellation in Crysis 2, an Nvidia-sponsored game. They had to test it in-house on both vendors' GPUs (if not, that would be very bad engineering practice from a company like Crytek) and would have realized right away that AMD users with similarly performing cards would get the shaft from that tech, so why didn't they optimize it?

And yes, this has been discussed many times, and every time there are only these conclusions: either Crytek's rendering staff was incompetent and didn't know how to implement tessellation, or Nvidia had leverage on the decision. We all know who got all the benefits in the end.
 