Theory on roadmap offerings: Design vs wafer size...

MasterGoa

New member
So after seeing both AMD and Nvidia take a generation of chips based on 28nm
and actually do 3 passes of optimizations over the last 4 years,
do you believe we will see more of this with this generation?

I.e., using an OK design and benefiting from 14nm, then further optimizing with the next gen?
 
I think everything is going to slow down. Intel has been pushing up against this wall for a while now. The limits of physics are starting to come into play, making further shrinks more difficult. We could be less than a decade out from reaching the end of the road on silicon entirely.
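For a rough sense of what that wall looks like, here's a quick back-of-the-envelope in Python. It only assumes silicon's lattice constant of roughly 0.543 nm and treats the node name as a literal feature size (which modern marketing names no longer are), so take it as illustrative only:

lattice_nm = 0.543  # approximate silicon lattice constant, in nm
for feature_nm in (28, 14, 10, 7, 5):
    cells = feature_nm / lattice_nm
    print(f"{feature_nm:>2} nm feature is only ~{cells:.0f} lattice cells across")

At 5 nm you'd be down to a single-digit number of lattice cells, which is why quantum effects and variability start to dominate.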
 

Yes, it's a real problem, but IMO they will never let this stuff stagnate, because if they do, it's the death of the industry and a total collapse of our economies. By the day silicon-based transistors become impossible to shrink any further, they will already have other ideas to replace them, or other ways to use them. As that day gets nearer, research on new tech will get more support. There are already some ideas: biological transistors, quantum computing (still a pipe dream though), 3D processors instead of the 2D ones we have now (though cooling is a real problem for those), etc.
 
Didn't Samsung tape out a 10nm part recently? One rumor had it that the next Galaxy Note would have a 10nm Exynos part...
 
I'm guessing 4-5 years of 14nm, similar to 28nm.

AMD will move to 10nm first and utilize the next memory tech first. Does it mean AMD will be "better"? Maybe, maybe not... they just tend to push new tech first, for better or worse.

Just my .02
 

How would AMD be the first to 10nm? They're solely reliant on other companies' process technology. Aside from that, what TSMC and Samsung call 10nm is closer to Intel's 14nm in actual feature size. There is no industry standard for what 10nm actually means; it's just the naming convention for the next node.
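To put some numbers on why the names don't tell you much on their own: if node names were literal linear dimensions, the ideal density gain would just be the square of the shrink. Quick Python sketch, purely illustrative, since real "14nm" and "10nm" processes from different foundries deviate from this in different ways:

def ideal_area_factor(old_nm, new_nm):
    # Area (and therefore ideal transistor density) scales with the square of the linear shrink.
    return (new_nm / old_nm) ** 2

for old, new in [(28, 14), (16, 10), (14, 10)]:
    f = ideal_area_factor(old, new)
    print(f"{old}nm -> {new}nm: ideal area factor {f:.2f}, ~{1/f:.1f}x density")

In practice the actual pitches each foundry ships are what matter, not the number in the name.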
 

I am just speaking strictly of GPUs in relation to Nvidia. And it's just a wild guess...

I feel the only reason Nvidia got their 14nm cards out a month ahead of AMD is because they (by general consensus thus far...) have only shrunk the Maxwell architecture to create Pascal. AMD, on the other hand, has touted various large improvements and changes to GCN 1.4 (4.0). Hell, clock for clock Pascal isn't even as efficient as Maxwell. Nvidia went quick and dirty, and it will likely pay off for them anyway, because objectively the cards DO perform well at those high clock speeds.

On the memory side, it's no contest. AMD helped create HBM (and GDDR5 iirc). They will continue to push memory tech and apparently already have plans for something new in Navi.
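For context on why HBM was worth pushing, the peak-bandwidth arithmetic is simple: bus width times per-pin data rate, divided by 8 bits per byte. Quick Python sketch using the commonly quoted figures (a 256-bit GDDR5 card at 7 Gbps, and first-gen HBM at 1 Gbps per pin with four 1024-bit stacks); exact clocks vary by card:

def peak_bandwidth_gbs(bus_bits, gbps_per_pin):
    # Peak theoretical bandwidth in GB/s.
    return bus_bits * gbps_per_pin / 8

print("GDDR5, 256-bit @ 7 Gbps:", peak_bandwidth_gbs(256, 7), "GB/s")          # 224 GB/s
print("HBM1, 4 x 1024-bit @ 1 Gbps:", peak_bandwidth_gbs(4 * 1024, 1), "GB/s") # 512 GB/s

The wide-and-slow approach is also a big part of the power savings, since you don't need the very high GDDR5 clocks.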
 