That is priceless considering who it's coming from.
Exactly, anything coming from Gandolf must be taken with a grain of salt. I couldn't agree more.
To be fair, a lot of Nvidia fans did call out the water cooling on the Fury X. It seems it was necessary for AMD's architecture on the ol' 28nm process to keep temps in check for the high-end card. I don't think either side will need WC on their top-end cards this time around, but it wouldn't surprise me, as it is still the best way to get a max OC. I just want to see the performance jumps from both sides, as 4K high-quality gaming is in dire need of it... blah.
Given that both Nvidia and AMD will try to extract as much performance as they can from Pascal and Greenland, and will do so as long as yields are halfway decent, both are likely to push clock speeds pretty high right from the start, at the expense of power consumption, even with FinFETs and the 14~16 nm process.
It remains to be seen whether Nvidia hops on the integrated water block / pump like AMD did on the Fury. If nothing else, it made for a cool-running card that is pretty silent, and it's especially useful in multi-GPU setups: the system overall can still be fairly quiet even with two cards, since the vast majority of the heat is dissipated by the radiators, which aren't near the cards themselves and have a greater surface area than a regular heatsink bolted onto the card directly, never mind that water is a better heat conductor to begin with.
60~65°C under load is pretty impressive for a stock cooler.
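Just to put rough numbers on why the loop runs that cool (the wattage and flow rate below are assumptions for the sake of the math, not measured Fury X figures), the coolant itself only warms up by a few degrees even with the whole board's heat dumped into it:

```python
# Back-of-the-envelope coolant temperature rise in an AIO loop.
# Assumed figures: ~275 W of heat into the loop, ~1 L/min pump flow.
heat_w = 275.0            # heat dumped into the coolant, watts (assumed)
flow_l_per_min = 1.0      # pump flow rate, litres per minute (assumed)
cp_water = 4186.0         # specific heat of water, J/(kg*K)

mass_flow_kg_s = flow_l_per_min / 60.0    # roughly 1 kg per litre of water
delta_t = heat_w / (mass_flow_kg_s * cp_water)
print(f"Coolant temperature rise: {delta_t:.1f} C")   # ~3.9 C
# The water barely warms up, so almost all of the thermal resistance is
# between the die and the block, not in the loop or the radiator.
```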
"You act like all nVidia customers were bashing the AMD WC setup!"

Point me in the direction of an Nvidia loyalist that praised the WC setup, or at least talked about it in a good light. I must have missed all of them, because all I saw was non-stop bashing and how AMD needed WC due to the extremely high power consumption of Fury.
"It seems it was necessary for AMD's architecture on the ol' 28nm process to keep temps in check for the high-end card."

Not temps; noise, maybe, but the Fury non-X runs quiet and doesn't go over its temps.
"Honestly I never once bashed the WC setup. I've thought about buying AIO WC cards in the past but never did, because I've had pumps go bad before and would hate to see that happen on a GPU. To each their own I guess."

After an ASUS or MSI 3-year warranty, who cares? It will be long obsolete by then.
Nope. If you push the clocks too high for the process and the architecture, you will need it, just like AMD did. If you look at an overclocked Nano, a 10% increase in clock speed creates a 30% increase in power; it's outside the ideal range of its architecture and process limits, and you have to turn the fan to 100% speed to keep it in its thermal range.
If you take a look at nV's Maxwell 2 architecture, they have no problem going 1400+ MHz on most of their cards without WC, so why can't Fiji GPUs do the same, or even reach that with WC? They are on the same process; architecture has a lot to do with frequency and power consumption. Power consumption has a direct link to the heat created, it's 1 to 1. And power consumption has a direct link to frequency and temperature. All of these are interrelated.
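That 10% clock / 30% power relationship drops straight out of basic dynamic-power scaling once a voltage bump is needed to hold the higher clock. Quick sketch, with the voltage step being an assumed number just to illustrate the shape of it:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f.
# The ~9% voltage bump is an assumption for illustration, not a measured Nano value.
base_clock_mhz, base_voltage = 1000.0, 1.20

oc_clock_mhz = base_clock_mhz * 1.10   # +10% clock
oc_voltage   = base_voltage * 1.09     # assumed voltage increase to hold that clock

power_scale = (oc_voltage / base_voltage) ** 2 * (oc_clock_mhz / base_clock_mhz)
print(f"Relative dynamic power: {power_scale:.2f}x")   # ~1.31x, i.e. ~30% more power
```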
True, but I think how the individual features are managed in terms of power usage, and keeping them turned on (or off) in Maxwell's case, also plays a part there. There's no way that Nvidia would ignore a near 300 MHz increase in clock (from 1125 MHz to ~1400 MHz) if the yields were good and it could sustain those clocks with all the units inside the GPU remaining functional under 100% usage scenarios in all games, while temperatures and power usage stay in check. Worst-case scenario, basically.
Hence why I've heard that under more recent and demanding titles, where the functional units are put to use in a more demanding way, a fair number of users are having to dial down the clocks, as the temperatures at those high clocks go into the danger zone, to put it mildly, even if we ignore the power usage altogether.
There is built-in clock throttling once the GPU hits 95°C no matter what, and such protection is only there because it is possible to make the card hit those temperatures at its stock clocks, depending on the game, game settings, ambient temperature of the room where the PC is, ventilation inside the case... The list of variables goes on and on, as you know.
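Conceptually that protection is just a feedback loop; this is only a sketch of the idea in code, not anyone's actual firmware or driver logic, and the step size is made up:

```python
# Minimal sketch of temperature-based clock throttling (illustrative only).
THROTTLE_TEMP_C = 95     # hard limit mentioned above
CLOCK_STEP_MHZ = 25      # assumed step size for dropping/restoring clocks

def adjust_clock(current_mhz, stock_mhz, temp_c):
    """Shed clock while over the limit, creep back toward stock otherwise."""
    if temp_c >= THROTTLE_TEMP_C:
        return max(current_mhz - CLOCK_STEP_MHZ, 300)   # never below an idle floor
    if current_mhz < stock_mhz:
        return min(current_mhz + CLOCK_STEP_MHZ, stock_mhz)
    return current_mhz

print(adjust_clock(1050, 1050, 96))   # 1025 -- hot case, clock drops
print(adjust_clock(1025, 1050, 90))   # 1050 -- temps recover, clock restored
```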
They didn't ignore them; power consumption goes up for them too. If they had needed to push them more they would have, but they didn't need to. With nV's products, though, the increase in power usage is more linear (this is because nV's chips don't need voltage increases, whereas Fiji does). Temperature, yeah, but they could have made a new cooler for them if they needed to as well; instead they decided to use an older cooler, and that saved them design money.
Both IHVs' chips have clock gating and power management states. If anything, AMD should have had an advantage: by going to HBM they don't need to clock their memory bus as high as nV's, and the amount of silicon in the GPU needed for it is less too!
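The clock-versus-bus-width point is easy to see from the commonly quoted specs (Fury X's HBM at 500 MHz on a 4096-bit bus versus 7 Gbps GDDR5 on a 384-bit bus; treat these as the usual published numbers):

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate per pin.
hbm_bus_bits,   hbm_gbps_per_pin   = 4096, 1.0   # 500 MHz DDR -> 1 Gbps per pin
gddr5_bus_bits, gddr5_gbps_per_pin = 384,  7.0   # 1750 MHz quad-pumped -> 7 Gbps per pin

print(f"HBM:   {hbm_bus_bits / 8 * hbm_gbps_per_pin:.0f} GB/s")     # 512 GB/s
print(f"GDDR5: {gddr5_bus_bits / 8 * gddr5_gbps_per_pin:.0f} GB/s") # 336 GB/s
# More bandwidth at a fraction of the memory clock, thanks to the wide bus.
```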
The list of variables goes on, I agree, but what we know doesn't. Unless AMD has been feeding us spoiled food about their own architecture and HBM, I don't think we can ignore those facts, because AMD sure thought they were important, and they wouldn't have touted all those things if they weren't something they thought helped them. They did help them, but it just wasn't enough.
What if HBM hadn't been ready and AMD had needed to use GDDR5, what would the situation have been then? I think they would have been stuck with the R9 290X at this node. If Fiji had to be cut down because they needed the extra silicon for the larger GDDR5 bus, its performance wouldn't have been enough to make it viable.
What probably happened is that they weren't expecting such a large increase in power efficiency from nV without changing nodes. By the time they got wind of Maxwell gen 1, it was too late for them to make any substantial modifications to what they had in store for Fiji. When they saw the GTX 980 they probably thought, OK, that's not too bad, we can compete with the Nano, that can be our high end, and it would have done well; even though it used a bit more power, its performance would have justified that. Then the Titan X came out: OK, well, we can get close at around the same power consumption, and since the Titan X is priced really high we can compete on price and still come out ahead even with it being WC'd. Then the 980 Ti just crushed all hope.
And nV planned for the higher frequency in its architecture too; there is no way a 40% increase in frequency isn't planned for.
Frequency is one of the things chip manufacturers look into as well when designing chips. Yeah, ten years ago they would just let that go to the end and expect the newer nodes to take care of it, but not anymore; that's something they plan for now.
It's hard to say, to be honest, as Fiji's transistor budget is nearly a billion higher than Maxwell's (8.9 billion versus 8 billion), and it does have an edge when it comes to shading power and geometry handling, but Maxwell has the edge when it comes to texturing and raw fill rate when not memory-bandwidth constrained, hence why either one or the other is slightly faster depending on the game and settings.
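For whatever the paper specs are worth (reference unit counts and clocks as commonly listed, ignoring boost behaviour), the split described above shows up directly in the theoretical peaks:

```python
# Theoretical peaks from unit counts * clock (reference specs, boost ignored).
# Fury X: 4096 shaders, 64 ROPs @ 1.05 GHz.  GM200 (Titan X): 3072 shaders, 96 ROPs @ ~1.0 GHz.
def peaks(shaders, rops, clock_ghz):
    fp32_tflops = shaders * 2 * clock_ghz / 1000   # 2 FLOPs per shader per clock (FMA)
    fill_gpix_s = rops * clock_ghz                 # pixel fill rate, Gpix/s
    return round(fp32_tflops, 1), round(fill_gpix_s, 1)

print("Fury X:", peaks(4096, 64, 1.05))   # (~8.6 TFLOPS, ~67 Gpix/s)
print("GM200 :", peaks(3072, 96, 1.00))   # (~6.1 TFLOPS, ~96 Gpix/s)
# Fiji leads on raw shader throughput, Maxwell on fill rate -- hence game-dependent results.
```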
"In any case, current games aren't using that much for the time being, so it's a moot point, as we're already discussing Polaris versus Pascal. Basically, either GPU's time in the spotlight is limited, and the real abilities of Maxwell and Fiji aren't really known until games are created that need that much GPU firepower, at least for the recommended specifications. We won't care who's on top by then, since both companies will have even more powerful architectures no matter what."

Yes and no. I think these cards were just stopgaps for much better DX12 and next-gen API architectures. They were planned as backups once they noticed 20nm wasn't going to be good for high-end GPUs.
"So the main issue is that both Nvidia and AMD have to design an architecture not just for games in the future, but also for games that were never designed to need that much GPU firepower to begin with. It's why I hardly pay attention to benchmarks anymore, as there's no way to design a bad GPU when talking about transistor budgets that exceed the entire human population of the planet, never mind Polaris and Pascal likely doubling that figure."

Hmm, benchmarks themselves don't mean much, but what developers want for their next-gen games does. Developers want more polygons, more tessellation, more shader horsepower, more bandwidth. I have been talking to other development teams for insight into what their target goals are for different aspects, so I can plan the game my team and I are making. Three times the polygon counts for characters seems to be good, 8K textures for everything, possibly multiple 8K textures for main characters. Shaders are going to be more complex, but not too much more than what we have now; that could change depending on how much performance Polaris and Pascal bring to the table. More focus on better lighting techniques, which will require more shader horsepower. Etc.
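To put the 8K-texture goal into perspective, here is the rough VRAM cost of a single one (the RGBA8 format and 4:1 block compression are assumptions, just to get ballpark sizes):

```python
# Rough VRAM cost of one 8K texture.
width = height = 8192
bytes_per_texel = 4                          # assumed RGBA8
uncompressed_mb = width * height * bytes_per_texel / 2**20   # 256 MB
compressed_mb = uncompressed_mb / 4          # assumed 4:1 block compression (e.g. BC7)
with_mips_mb = compressed_mb * 4 / 3         # full mip chain adds roughly a third

print(f"uncompressed: {uncompressed_mb:.0f} MB")   # 256 MB
print(f"compressed:   {compressed_mb:.0f} MB")     # 64 MB
print(f"with mips:    {with_mips_mb:.0f} MB")      # ~85 MB
# A few of these per hero character eats VRAM and bandwidth quickly, which is
# exactly the kind of jump those target specs are asking the next GPUs to absorb.
```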
Hmm, no, Fiji only has the shader performance; Maxwell, and even Kepler, beats Fiji in geometry throughput.
Seems they need to fix that, and it should without question be fixed with Polaris, as they have had time to do so now.
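For a rough sense of the gap, using the commonly cited setup rates (four primitives per clock for Fiji's geometry engines versus six for GM200's; take these as ballpark figures, and tessellated throughput diverges even further):

```python
# Peak primitive setup rate = primitives per clock * core clock.
fiji_prims_per_clk,  fiji_clock_ghz  = 4, 1.05
gm200_prims_per_clk, gm200_clock_ghz = 6, 1.00

print(f"Fiji : {fiji_prims_per_clk * fiji_clock_ghz:.1f} Gprims/s")    # ~4.2
print(f"GM200: {gm200_prims_per_clk * gm200_clock_ghz:.1f} Gprims/s")  # ~6.0
```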
I guess that is the multi-million-dollar balancing act each vendor faces: where to emphasize and dedicate your resources for the amount of silicon you have.
I don't know if you guys have watched this video before, but you should know how Crytek abused tessellation. These objects were over-tessellated way too much, and yet they still look flat and bland. Nvidia is faster at tessellation since they have a better geometry engine, and this over-tessellation made ATI GPUs look bad.
http://techreport.com/review/21404/crysis-2-tessellation-too-much-of-a-good-thing
https://www.youtube.com/watch?v=IYL07c74Jr4
They didn't know how Crysis 2's CryEngine 3 worked: in wireframe mode there is no culling going on, so it looks like the engine had to do the tessellation even though in some cases it didn't.
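The culling point can be shown with a toy count: if the engine zeroes the tessellation factor for patches that are off-screen or facing away (standard practice), a wireframe capture that skips culling will show far more triangles than the normal render path ever actually generates. A small sketch with made-up patch data:

```python
# Toy model: a patch tessellated with factor f produces roughly f*f triangles.
# Patches flagged as culled get skipped in the normal render, but a wireframe
# capture with culling disabled tessellates everything.
patches = [
    {"tess_factor": 16, "culled": False},  # visible wall
    {"tess_factor": 16, "culled": True},   # wall behind the camera
    {"tess_factor": 32, "culled": True},   # ocean plane under the level
    {"tess_factor": 8,  "culled": False},  # visible debris
]

def triangle_count(skip_culled):
    return sum(p["tess_factor"] ** 2
               for p in patches
               if not (skip_culled and p["culled"]))

print("normal render:        ", triangle_count(skip_culled=True))    # 320
print("wireframe, no culling:", triangle_count(skip_culled=False))   # 1600
```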