AMD has a master plan that spells doom to Nvidia

Argoon1981

Don't know if this was posted before, but I saw this video today and I want to know your opinion about it.

[yt]aSYBO1BrB1I[/yt]

Imo this is too far-fetched, but even though I'm an AMD user, if this for some crazy reason becomes true, I won't like it; a monopoly is always bad.

:bleh:
 
It's not that far out there.

Node shrinks are failing; multi-chip is the only way to go.

The same thing happened with IPC on CPUs: gains couldn't keep up with software demands, so they had to go multicore.
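The multicore parallel can be made concrete with Amdahl's law: once per-core gains stall, adding cores (or, by analogy, chips) only speeds up the parallel fraction of the work. A rough sketch; the 90% parallel fraction below is an assumption for illustration, not a measured workload:

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# is parallelizable. Shows why going multicore (or multi-chip) gives
# diminishing returns once the serial part dominates.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))
# With a 10% serial part: 2 cores -> ~1.82x, but 16 cores -> only ~6.4x
```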
 
Lol, well, why can't nV do the same with multiple small GPUs? I mean, it's in the DirectX specs, isn't it? This isn't anything exclusive to AMD. And this master plan thing: wasn't CavemanJim the one who stated they were in it for the long haul? He hinted at this years ago. It still hasn't happened, and as for thinking it will happen exactly as Adored laid it out, it won't; plans change based on what competitors do and how they react to pressure.

And the rest of his rambling about getting out of gaming, man, that is so much BS. Neural nets are a 500-billion-dollar industry; if you have a product that can do those things, why not go into it? Stupidity has no bounds.
 
razor1, even though I do agree with you that this master plan talk sounds too out there, this guy does seem to know what he's talking about in his other videos, so it's not easy to dismiss him as a moron who doesn't know what he's talking about. Perhaps he is reading too much into what AMD is doing and seeing things that aren't there, but contrary to you, I don't bet so easily on him being flat out wrong.
 


Actually, he posted a few times at B3D, and his thought process is so limited it isn't funny; things right in his face, he doesn't see. It's great that he dumbs things down for others to understand, but not understanding the fundamentals of business, why something happened, and what has happened thus far (pretty much leaving all of that out) is just as bad as forecasting something ten years from now without even being in the industry.

The first part of the master plan has already failed; the console dominance doesn't take effect, as it doesn't affect the PC realm as much as he is alluding to. Hence why AMD's margins tanked: there are no margins in consoles. And AMD's foolhardy view of that was bad management. It sounded all rosy when AMD first talked about it, but years later we can see it did nothing for them; if anything, it's hurting their bottom line. The three new semi-custom wins weren't wins either; they are replacements for current products they already have. AMD made it sound like they are getting more; they aren't, it's just going to be the same as before. Will margins improve, we might be thinking? No, they won't, because the process they are using now doesn't accommodate margin increases unless product prices increase too, and product prices might actually drop, not rise.

The first part of the video went into console game sales. It has been known for ages that console games make much more money than PC games, but have we seen that affect nV's or AMD's bottom line in the past? For AMD, no, we haven't; for nV, yes, with the Xbox, but MS wasn't happy with what they were paying, so they went elsewhere. That tells you that you just aren't going to make any large amount of money from consoles. Consoles make money for game developers; that has never directly translated into profits for the hardware makers.

The second part of the plan is not in AMD's hands; it's in MS's, Sony's, Nintendo's (if Nintendo ever becomes a major player again), and nV's. None of these companies will want a strong AMD (stronger than they are), and all of them will make sure AMD does not get a stranglehold on any of them, be it from an API perspective or from pure graphics technology superiority. AMD would have to achieve either of those two things on their own without setting off alarms at the other companies. MS will do their best to make sure nV and AMD are on equal footing from an API perspective, as it's beneficial for them to maintain power through the API. Sony will want to keep AMD's influence to a minimum too, so they have options to get a graphics processor and CPUs from someone else in case AMD tries to strong-arm them, and this is why they will never go outside their own APIs, over which they have complete control. And nV, we know about them; they won't twiddle their thumbs and just wait for AMD to screw them.

I have to quote this:
The best-laid plans of mice and men often go awry.
Granted, some parts of the plan might be successful, which is great, but as a doomsday prediction? Not going to happen; there are too many variables to plan around when you're up against companies that have been much smarter in their tactics and understanding of their market, and that have far more resources than AMD.

Things like NVLink were staring him right in the face. Why would nV create something like this? The reason was right there: an interconnect that can complement grouped-processor interactions. Wait, wouldn't Navi need something like that too, to be effective? I think it would.

PS: I forgot to add another party to the second part of the plan, the biggest of them all: Intel! You think Intel is just going to sit around and wait for the "master plan" (sounds like some German Nazi concoction, and we all know how those turned out, lol) to sidestep them?
 
:lol: Nvidia never saw it coming. GameWorks is just backfiring and probably caused them more harm than good.

Still, the video's conclusion is FUBAR. Why? Nvidia has the means, the know-how, and most importantly the competitive spirit to design hardware that will win. After the FX 5800 fiasco, Nvidia rapidly developed a next-generation GPU that beat AMD's, with more DX features, partly by imitating AMD's designs. Nvidia can always change plans, divert resources, and make new resources available; they have the cash flow and the means to do this.

Still, Nvidia is hitting a marketing wall in the gaming industry: if they are pretty much confined to the PC, then their gaming market share has been declining over time due to the AMD consoles. So Nvidia is wise to pursue new markets such as VR, cars, and HPC. The cutthroat Android market and falling tablet sales are also not the best place to make money unless they have something extremely good; they don't.

So this may be Nvidia's most important and interesting time: their previous business strategy was very successful, but they are now on the tail end of it. I expect Nvidia not to sit still for too long.

Oh, and with IBM and Nvidia: NVLink works with IBM processors, so I would not rule out an Nvidia console in the future using IBM CPUs. So much can happen; AMD is the one that had better look out beyond the next few years, because Nvidia will be around long after that. It could be a Linux-based, Vulkan-powered (thanks, AMD), IBM-CPU powerhouse of a game-destroying box (Steam Machine + VR, maybe).

https://www.ibm.com/blogs/systems/ibm-power8-cpu-and-nvidia-pascal-gpu-speed-ahead-with-nvlink/
 




I have to quote this too:

"If you want to make GOD laugh ... MAKE A PLAN" :D
 

Wait...aren't you the guy I had to school on fab process after you tried to tell me about node differences? I noticed you shut up about it after Jawed put you right.
 


No, I wasn't really wrong about that. Let's see: when you go from a full node to a half node, there are things that don't change, but it wasn't worth getting into that discussion since it was pretty off topic, so I dropped it.

The posts you and I talked about were specific to Polaris: the showing of Polaris and what the frame cap does for AMD hardware.

Also, things like "shrinking of transistors to drop power usage" on the new node: the transistors really didn't shrink, at least when you compare FinFETs on the same node. FinFETs tend to be larger than regular planar transistors, so the power savings don't come from them being smaller. Now, if you are comparing 28nm to 14nm, yeah, they are smaller, but that isn't the reason for the major drop in power usage, or at least not all of it. The majority of it comes from elsewhere, not the transistor size.
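The point about where the savings come from can be illustrated with the standard dynamic-power relation, P ≈ α·C·V²·f: the quadratic voltage term, not transistor footprint, does most of the work. The capacitance and voltage numbers below are made-up illustration values, not real 28nm/14nm figures:

```python
# CMOS dynamic (switching) power: P = alpha * C * V^2 * f.
# The V^2 term means a modest supply-voltage drop saves far more
# power than the shrink in transistor dimensions alone would.
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts**2 * f_hz

p_old = dynamic_power(0.1, 1.0e-9, 1.2, 1e9)  # hypothetical planar design point
p_new = dynamic_power(0.1, 0.7e-9, 0.9, 1e9)  # hypothetical FinFET point: lower C and V

print(round(p_new / p_old, 3))  # 0.394: mostly from the (0.9/1.2)^2 = 0.5625 factor
```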


I suggest you pick up a book on basic EE before you keep posting. Or ask questions.


I would go on historical precedence where we know that AMD generally does more with less die space (with Maxwell being a spectacular exception)
When you make statements like this, come on, what are you thinking? Are you talking about transistor density, GPU structure, or what? Those are tied to one another, and you can't talk about the size of a GPU and the performance of a GPU without them.


Well are you expecting something out of the ordinary? If so it must be from the AMD side as you clearly believe that Nvidia hasn't done a whole lot different from Maxwell, right?

So that leaves us with an AMD failure or an AMD success. If they've failed then we can expect Polaris 10 to be much slower than the 1070, fair enough? But how can you come to that conclusion based on what we've already seen from Polaris 11 compared to the 950 in the December demonstration? We've also seen Polaris 10 playing Hitman at higher framerate than Fury X. Do you expect the 1070 to be much faster than the 980 Ti?

So if we take all of this information I guess you must believe Polaris will be a runaway success instead? Therefore the 232mm2 Polaris 10 will easily sail past the cut 294mm2 GTX 1070? Isn't it a lot more likely that they'll be pretty close in performance?
Now, if you really want me to, I can go through your videos and rip them apart too. The assumptions you make are boundless, and people fall for them because they don't understand what it really entails. Not to mention, you don't understand it either, so you tend to leave out large amounts of important data points that you should be drawing conclusions from. So who is really doing what here? A fool telling the masses of fools what he thinks is right? That is a great way to get YouTube hits! Guess what, people need a YouTube video to explain this stuff? Yeah, I can see why ADD is such a problem with today's kids.

Here is another great quote of yours

Yes because Fury X is an awful GPU in so many ways. It barely beats that same 440mm2 Hawaii GPU in most cases - certainly not by enough to warrant HBM and the extra area.

And on top of that, Hawaii is another unbalanced GPU. Doesn't even have compression, so the only real factor preventing it from losing to Polaris 10 (bandwidth), is basically a non-factor.

I'm coming from a simple angle here - AMD's current generation of GPUs are basically, terrible. If they can't get at least back on track vs Nvidia, given that they barely even put out a passable GPU the past 3 years, then they just need to give up altogether.
You know Hawaii wasn't that unbalanced of a chip, right? It was made very well; the only unbalanced chip was Fiji, and that was because of the amount of bandwidth it had. And where does Hawaii or Grenada ever come close to Fiji? I can only think of one situation: async shaders helping Grenada or Hawaii when Fiji is CPU-bottlenecked. Does that mean, let's see how you put it, Fury X was just an awful chip? By that train of thought, anyone with a 1080p monitor would think any card better than a 960 or Tonga is just crap.

You say AMD's current generation is terrible. There is only one problem with that: it wasn't terrible. Yeah, they used more power, but performance-wise they are just fine. Capabilities were just fine.

Oh yeah, I remember that in one of your videos you stated nV has an advantage at 28nm because they pushed its limits, and because of this they won't have the same advantage at 16nm, lol. Funny thing is, do you know anything about the custom libraries they are using to push the node limits? I would like to hear your thoughts on this, because the guy you called a neural network sarcasm generator works at TSMC!

If you don't remember that

https://forum.beyond3d.com/posts/1906563/




If you can't figure out that it's more difficult to beat Titan X on 16nm (with a midrange GPU) than it would be to beat the 780 Ti, it's your comprehension that's at fault.






What is so difficult about that? And FYI you're like a neural network sarcasm generator.


This is why people look at you as naive. I sure look at you as something else.

Now, do you want me to rip apart all your videos on YouTube and post there? I'm giving you the option. Or I will rip them apart and post on B3D, where others who know more than I do about specific topics will rip them apart even more.

The whole problem with this, though, is that you will get more hits, more attention; you are the YouTube equivalent of Charlie and SemiAccurate!
 

Razor and Adored

I recommend you keep B3D stuff at B3D and R3D threads here. Gets rather confusing and impossible to follow line of discussion when out of nowhere quotes are made out of context of a whole discussion.

I think the reason a good portion of the console business went AMD's way is just that they were generally correct in their outlook on gaming hardware and software. Plus, in the end, by designing an x86-based APU they did not have to reinvent the wheel for anyone: MS and Sony can rapidly get a good API without having to worry about too many unique custom chips, and developers will have four platforms (Sony, MS, Nintendo, PC) with broadly similar architectures and programming methods.

Nvidia should also be able to link up multiple GPUs using NVLink, as well as use an interposer for multiple chips if need be. The video did have some great points on die size, yield, and using multi-chip solutions to get the most out of a process. I do believe AMD will regain market share in the mobile market and make good inroads with new discrete PC GPUs. Time will tell.

I do not think AMD will be able to corner the PC market; they just have a good starting point, or a lead at this time, on where things are going. It is anyone's guess how all of this will play out. Plus, the console generation after the next refresh could be an Intel APU or an IBM/Nvidia combination. Nvidia probably made a mistake not trying harder in the console business. Steam Machines are just not cost-effective for console buyers; that could change, though, in a few years.
 

Well, Intel has their own high-speed interconnect too, so expect them to be a big player in this as well.

Also, smaller chips on an interposer: for the next five years or so, don't expect to see this happen. It's not cost-effective and won't be for a while. Not to mention that as long as chips near the reticle limit can still be made without much issue, that will hinder it too, so those five years might become ten.

Smaller chips are easier to manufacture, yes, but what are the reasons they are easier to manufacture? And why does it become easier to make larger chips as the process matures? So it's a question of whether we should do smaller chips on an interposer rather than wait for the bigger chips to become viable. The secondary question is: if the larger chips are viable at a certain time, will the smaller chips still have a better yield? And will that yield offset the loss from the chips that have to be cut down?
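Those yield questions can be put in rough numbers with the classic Poisson die-yield model, Y = e^(-A·D0). The defect densities and die areas below are assumptions for illustration, not real foundry data:

```python
import math

# Poisson yield model: probability that a die of area A (cm^2) has zero
# defects at defect density D0 (defects/cm^2). Big dies get hit
# disproportionately on a young process; maturity closes the gap.
def die_yield(area_cm2, d0):
    return math.exp(-area_cm2 * d0)

small, big = 2.3, 6.0            # die areas in cm^2 (~230mm^2 vs ~600mm^2)
d0_young, d0_mature = 0.5, 0.1   # assumed early vs mature defect densities

print(round(die_yield(small, d0_young), 2))  # 0.32: small die, young process
print(round(die_yield(big, d0_young), 2))    # 0.05: big die barely yields early on
print(round(die_yield(big, d0_mature), 2))   # 0.55: big die viable once mature
```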

And if you look at Raja's statements, he even says there are cases where this will just not work; not all software models suit multiple chips, even with multi-adapter. So engines and games have to change too. So when are we expecting Navi? What is the time frame for new games coming out after Navi launches? What are the other players in the industry doing in the meantime? What has happened thus far in the context of this "master plan"? Can AMD play puppet master to the rest of the industry, a company that has been relegated to the bottom of the barrel?

Unlike multiple cores on a CPU, graphics is a much tougher problem to scale because of bandwidth constraints and because of the inherent parallelism of the GPU to begin with. If multi-GPU scalability is to be achieved, both of these problems have to be addressed in unison, in software and hardware. By that time, what is the specialty of Navi, when the other industry leaders are just as invested, are in a better position than AMD, and aren't going to sit around and let things slide for AMD's "master plan"?

nV didn't make a mistake by dropping out of consoles; that showed in AMD's past three quarters, where they took losses on their console sales. Margins are so thin that console sales aren't making up for the total production of chips. That is just not good. Consoles were traditionally sold at a loss for this very reason, and the components were bought outright from, or made by, the console company. AMD's contract was different: they take on a portion of the sales risk by taking a percentage instead of an outright buy, which turned around and bit them. This is exactly why nV stepped away; remember what MS wanted nV to do with the original Xbox? The PS3 was also nV taking a risk with a percentage of the sale price. They decided the risk was too great, and now we can see why.
 

It also depends on when die stacking beyond memory starts to happen. Once you have established the communication lanes (a 4096-bit bus and up), you can start separating different parts of the chip into stacks; for example, an APU could be two distinct parts, the CPU on one stack and the GPU on another. Multiple GPUs (scalability) could be stacked, making the interposer even smaller than in today's designs. Think of 3D chips, or 2.5D if using an interposer. It reminds me of the 3D chip design in Terminator. That is the future. When? That is anyone's guess, but AMD looks to be on top of it.

http://www.microarch.org/micro46/files/keynote1.pdf

The video also hinted that AMD is already using a modular design, like the ACEs in their GPUs. It is working; whether the parts sit on separate stacks or not should not make much difference, as a thought. Still, this is probably 5-10 years out, and rather forward-looking.
 


Well, I would expect that to be more than ten years in the future. Die-stacking different types of chips into one socket is very, very complex, and that is a good paper for seeing where all the hurdles are. I'm sure there will be more, too, as node sizes drop.

Look, modern GPUs are for all practical purposes modular in design; pretty much all the pieces have been decoupled since nV and AMD went scalar. So it's not just the ACEs.
 

So using both sides of the interposer is also an option; it does not need to be only one side, except that cooling might be somewhat of an issue, and a new cooling design for both sides would be needed. Still, it's much simpler to cool both sides than to stack chips (especially high-power ones) on top of each other; that would be a cooling nightmare. In the end, that is where the future looks to be heading.
 
So... the next-gen console will be interesting. Maybe no one will want in anymore, and by no one I mean neither Nvidia nor AMD.

If that happens, the only thing remaining will be the... PC.

:D

This is... interesting:

http://megagames.com/news/amd-creates-new-rivals-intel-licensing-its-chips-china

"AMD is licensing its x86 processors and system-on-chip technology to THATIC (Tianjin Haiguang Advanced Technology Investment Co. Ltd.), a joint venture between AMD and a consortium of public and private Chinese companies."
 
Well, stacking isn't the issue; the issue is how the stack affects other things.

In any case, we can also ask, regarding the master plan: where does Zen fit into all this if it fails? And since AMD has now been trying to license Intel's x86 patents to a Chinese partner, does that put AMD's cross-licensing contract with Intel in jeopardy, to the point where Intel could forcibly shut down AMD's CPU business worldwide, going as far as an injunction? Yeah, we can start drawing conclusions based on this, but will it really all work out? Probably not, but it is surely much more immediate than a plan that far out.

PS: the secondary Chinese company is not a subsidiary of AMD, and the contract is very specific that only subsidiary companies of AMD have access, outside of the companies manufacturing the chip. So until we know more about what this secondary company is doing, it's pretty much up in the air.

Oh yeah, AMD was confident spinning off GF wouldn't hurt their cross-license contract either, and Intel made AMD drop the anti-competitive lawsuits in Japan and the EU in exchange for the change to the contract. Them being confident doesn't mean jack when it's in black and white on paper.

What this is, is AMD reeking of desperation, not a master plan. They are cash-strapped, and the deadline for their bonds to mature is fast approaching.
 

http://wccftech.com/intel-x86-isa-license-spreadtrum/
Was Intel violating AMD64? Does Intel pay a licensing fee? Who knows. Also, TSMC has been making APUs, meaning x86 chips, so using a foundry, or several different ones, was not an issue. So if AMD gets yet another party to make x86 chips, what is the issue?
 