Why can't AMD get 1.4 - 1.5 GHz boost clocks at stock?

Matrixzer01

New member
When I say stock I mean launch stock clocks. I was hoping for a 1.2 - 1.3 GHz base clock and a 1.4 - 1.5 GHz boost clock. So...

1. Did AMD shoot themselves in the foot by not using an 8-pin connector? Is the heat coming from drawing too much power through the PCI Express slot?

2. Are there any PCB issues, like missing parts, besides the missing 8-pin connector?

Or...

3. Was this architecture not meant to go over 1.0 GHz, but AMD saw the early results in internal testing and decided to overclock the damn thing at the last minute, sticking with the 6-pin connector from the original design?

After all, the PCBs were probably already in production at that stage, or they already had them on hand.

I am an AMD supporter (not a fanboy), but it seems strange that they cannot get a midrange 14 nm part to clock high. I hope Vega is a different/newer architecture than Polaris, is built for higher clocks, and draws power mainly from an 8-pin or 8+6-pin connector. Plus, the 14 nm process should be more refined by then.

Thanks.
 
I'm guessing they did want the clocks to be higher, like you said. With new tech on a new 14 nm process, I'd imagine they were hoping for more out of the latest revision of the chips. They might get it right with one more revision, but that takes time. I think the problem is keeping the power and heat down once the clocks ramp up.
 
It's a different process and manufacturer than what Nvidia is using... Also, I'm a firm believer that AMD has a more complex architecture than Nvidia. That's not a knock on Nvidia at all; they know how to push performance where it is needed and crank up the speed.

Another thing to consider is that AMD has continuously made refinements to the GCN architecture, and Polaris is no different. Vega may be a further enhancement and end up being considered GCN 1.5 or something. It will take time to get those clocks up, and they will likely revise the chips a bit (perhaps releasing RX 485 and 495 models, etc.) as they learn more.

From everything I've read, Nvidia's Pascal is hardly any different from their Maxwell architecture aside from the node shrink. It's a lot easier to get clocks out of something that is already proven...

Lastly, just give it a little time and don't focus on clocks too much (you can't compare clocks across different architectures). We are at a turning point with APIs. AMD is seeing massive gains in Vulkan (Doom) and sizeable gains in DX12 (which hasn't even really seen a proper implementation yet). Today, a DX12 benchmark was finally released for 3DMark, and it is showing the same story. AMD's hardware async is looking better and better all the time, and there are a ton of DX12 (and some Vulkan) games on the horizon. The architecture choices AMD made several years ago with GCN 1.0 are finally starting to pay off.

A lot of people like to think that DX11 will be most prominent for several more years, but I highly doubt that. I expect most AAA titles to be DX12/Vulkan by early/mid next year, period.

EDIT: I realized I left out a big piece of the answer: power consumption. AMD could likely push a ton of power into Polaris and crank the clocks up with a massive cooler, but they are marketing efficiency right now. You can't have efficiency when you're pushing chips to their max. The rest goes back to what I said above: AMD's architecture choices make it difficult to push clocks, but those same decisions are starting to pay off in other ways.
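
To put rough numbers on that, here's a minimal sketch of the usual dynamic-power relation (power scales roughly with voltage squared times frequency). The voltage/frequency points are made up for illustration, since the real Polaris voltage curve isn't public, but they show why a 10-15% clock bump can cost far more than 10-15% extra power.

# Toy model of why chasing clocks hurts efficiency. Dynamic power scales
# roughly with V^2 * f, and higher clocks need higher voltage, so power
# grows much faster than performance. Voltage/frequency points are made up.

def dynamic_power(voltage, freq_mhz):
    """Relative dynamic power, proportional to V^2 * f."""
    return voltage ** 2 * freq_mhz

# (clock in MHz, assumed core voltage) -- illustrative values only
points = [(1120, 0.95), (1266, 1.05), (1400, 1.15)]

base_freq, base_volt = points[0]
base_power = dynamic_power(base_volt, base_freq)

for freq, volt in points:
    perf = freq / base_freq                       # perf scales ~linearly with clock
    power = dynamic_power(volt, freq) / base_power
    print(f"{freq} MHz: perf x{perf:.2f}, power x{power:.2f}, "
          f"perf/watt x{perf / power:.2f}")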
 
I can also see a 485 refresh once yields are up and the process is more refined. A 485 could use lower power and hit much higher boost clocks. I see AMD having room to grow on clock speed, while Nvidia is closer to the peak of what the process allows.
 
It seems like with the reference card, the cooler is the major bottleneck. Even with the current boost clocks, it starts overheating and ends up throttling back down. The engineering team selecting the cooler clearly didn't pick one that left much headroom.

I don't see evidence that they are pushing the chip hard, like with Hawaii. One site (can't remember which) compared power efficiency at different speeds and they are pretty much at the sweet spot right now. If they were pushing it beyond its limits then you'd see performance/watt increase as you reduced the clock speed.
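
That sweet-spot check is easy to redo yourself: log average FPS and board power at a few clock settings and compare performance per watt. The numbers below are invented, not measurements from that site; the point is just the shape of the comparison.

# If a chip is pushed well past its efficiency sweet spot, perf/watt should
# climb noticeably as you underclock. The (clock, FPS, watts) samples below
# are invented for illustration, not real RX 480 measurements.

samples = [
    (1150, 55.0, 142.0),   # underclocked
    (1266, 58.0, 150.0),   # stock-ish boost
    (1350, 60.5, 178.0),   # manual overclock
]

for clock_mhz, fps, watts in samples:
    print(f"{clock_mhz} MHz: {fps / watts:.3f} FPS/W")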

Personally, I think the issue is just that GCN doesn't clock as high as Nvidia's architecture, rather than it being down to GloFo's process versus TSMC's.
 
Clocks are never equal among competing architectures. Wondering why GPU A is clocked at 1.5 GHz while GPU B is clocked at 1.3 GHz is a complete waste of "wondering" time... ;) It's entirely possible that the 1.3 GHz part will run appreciably faster than the 1.5 GHz part; in fact, that's often the case. Folks should become familiar with IPC, an acronym for "instructions per clock." Basically, the following is true:

MHz clocks by themselves provide no meaningful performance information. Performance is described like this:

IPC x MHz = Performance (P). IPC or MHz numbers by themselves tell us nothing about performance.

The more instructions per clock a given architecture executes, then (a) the lower it is likely to be clocked, and (b) the more performance it is likely to provide, everything other than clock speed being equal. This is *very* basic, of course, but it is far more informative than trying to compare competing architectures on MHz alone, which is a total waste of time.

So, if GPU A executes 4 instructions per clock and is clocked at 1.3 GHz, it will perform better than GPU B, which executes 3 instructions per clock but is clocked at 1.5 GHz (4 x 1300 MHz = 5,200 million instructions executed per second; 3 x 1500 MHz = 4,500 million per second). Thus GPU A, while clocked lower, is the faster GPU. (Again, everything else being equal, of course.)
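
The same arithmetic as a tiny script, using those hypothetical IPC figures (they are not real AMD or Nvidia numbers):

# Effective throughput = IPC * clock. IPC values are hypothetical,
# carried over from the example above.

def throughput_mips(ipc, clock_mhz):
    """Millions of instructions executed per second."""
    return ipc * clock_mhz

gpu_a = throughput_mips(ipc=4, clock_mhz=1300)   # lower clock, higher IPC
gpu_b = throughput_mips(ipc=3, clock_mhz=1500)   # higher clock, lower IPC

print(f"GPU A: {gpu_a} million instructions/s")  # 5200
print(f"GPU B: {gpu_b} million instructions/s")  # 4500
print("Faster:", "GPU A" if gpu_a > gpu_b else "GPU B")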

The IPC number for any GPU is determined solely by its architecture, and the AMD, Nvidia, and Intel architectures are all different from one another, often very different (I only threw in Intel for completeness; Intel isn't competitive in 3D acceleration). So raw MHz numbers tell us nothing about comparative performance. The same is true for CPUs of differing architectures.
 
It seems like with the reference card, the cooler is the major bottleneck. Even with the current boost clocks, it starts overheating and ends up throttling back down. The engineering team selecting the cooler clearly didn't pick one that left much headroom.

I don't see evidence that they are pushing the chip hard, like with Hawaii. One site (can't remember which) compared power efficiency at different speeds and they are pretty much at the sweet spot right now. If they were pushing it beyond its limits then you'd see performance/watt increase as you reduced the clock speed.

Personally, I think the issue is just that GCN doesn't clock as high as Nvidia's architecture, rather than it being down to GloFo's process versus TSMC's.


No, the cooler is not the problem; the power limit is what bottlenecks the whole RX 480 card. (See "Advanced AMD RX 480 Overclocking - 1500 MHz using I2C and watercooling" by der8auer on YouTube.)

Even with a very good cooler, the card would still need to draw more power to clock higher, and the card itself has a power limit.

The R9 290/390 series was very different, since that was just a cooler problem: give it a good cooler and it's done. This RX 480 is a different story.

The GCN architecture probably also just isn't designed to run at very high clocks, I'd guess.
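
For context on that power limit: the board power budget follows straight from the PCI Express ratings (75 W from the slot, 75 W from a 6-pin connector, 150 W from an 8-pin). A quick sketch of the arithmetic; the overclocked draw figure is hypothetical, just to show why the reference 6-pin layout runs out of headroom first.

# Board power budget per the PCI Express ratings:
# slot = 75 W, 6-pin connector = 75 W, 8-pin connector = 150 W.
RATINGS_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

def budget(*sources):
    return sum(RATINGS_W[s] for s in sources)

reference_rx480 = budget("slot", "6-pin")   # 150 W total
custom_8pin = budget("slot", "8-pin")       # 225 W total

overclock_draw = 200  # hypothetical board power at ~1400+ MHz, for illustration
for name, limit in [("reference (6-pin)", reference_rx480),
                    ("custom (8-pin)", custom_8pin)]:
    over = max(0, overclock_draw - limit)
    print(f"{name}: {limit} W budget, over by {over} W at {overclock_draw} W draw")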
 
You could be right, as it seems this custom RX 480 at TPU still throttled even though it wasn't hot.

https://www.techpowerup.com/reviews/ASUS/RX_480_STRIX_OC/

But the reference heatsink didn't even manage to maintain full stock boost speeds without getting very hot, so I think it's accurate to say it's also playing a role. Unfortunately, if the power limit is a significant limiting factor, that means the custom cards won't be able to clock up much higher, even with better cooling.
 
You could be right, as it seems this custom RX 480 at TPU still throttled even though it wasn't hot.

https://www.techpowerup.com/reviews/ASUS/RX_480_STRIX_OC/

But the reference heatsink didn't even manage to maintain full stock boost speeds without getting very hot, so I think it's accurate to say it's also playing a role. Unfortunately, if the power limit is a significant limiting factor, that means the custom cards won't be able to clock up much higher, even with better cooling.

Realistically, you can expect most custom RX 480s to clock between 1300 and 1400 MHz. To go beyond 1400 MHz, you need a full water-cooling setup that can run 24/7. But again, a higher clock isn't always better, because AMD and Nvidia have different architectures and you cannot compare them on clocks alone.

Also, there's a German site that tested memory overclocking with the core clock at stock, and the overclocked memory alone gained around 8-10% more performance. So there is definitely a memory bottleneck, since the card doesn't have a wider memory interface. I bet the RX 480 still has some potential left.
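
The bandwidth arithmetic supports that: peak bandwidth is just the bus width times the effective data rate, and the RX 480's 256-bit bus at 8 Gbps gives 256 GB/s. A minimal sketch; the overclocked data rate below is just an example, not a figure from that review.

# Memory bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
# The RX 480 pairs a 256-bit bus with 8 Gbps GDDR5 (7 Gbps on some 4 GB cards).

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

stock = bandwidth_gb_s(256, 8.0)        # 256 GB/s
overclocked = bandwidth_gb_s(256, 8.8)  # example ~10% memory overclock

print(f"Stock:       {stock:.0f} GB/s")
print(f"Overclocked: {overclocked:.0f} GB/s (+{(overclocked / stock - 1) * 100:.0f}%)")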
 