Old Jun 17, 2018, 04:13 PM   #1
Hapatingjaky
Maximum Disappointment
 
Join Date: Nov 2005
Location: Canada New Ukraine
Posts: 9,238

NAVI No Longer MCM

https://www.pcgamesn.com/amd-navi-monolithic-gpu-design

Seems that, with multi-GPU a la CrossFire and SLI being killed off, AMD plans to ditch the MCM design and remain a "monolithic GPU".

Quote:
“We are looking at the MCM type of approach,” says Wang, “but we’ve yet to conclude that this is something that can be used for traditional gaming graphics type of application.”
Game developers' adoption of multi-GPU support:

Quote:
That infrastructure doesn’t exist with graphics cards outside of CrossFire and Nvidia’s SLI. And even that kind of multi-GPU support is dwindling to the point where it’s practically dead. Game developers don’t want to spend the necessary resources to code their games specifically to work with a multi-GPU array with a miniscule install base, and that would be the same with an MCM design.
Now, if say a console had an MCM CPU/APU/GPU (whatever you want to call it), then we may see multi-GPU being supported better.
__________________
Intel Core i7 8700K @ 5.2GHz, Asus Maximus X Apex, GSkill Trident-Z 16GB DDR4 4266MHz CAS19, Asus Strix 1080Ti OC, Creative Labs Soundblaster E3, Samsung 970 Pro 1TB, Corsair AXI 1500i PSU, ThermalTake View 71, Corsair K95 Platinum RGB, Corsair Dark Core RGB SE, Acer Predator X34, Windows 10 Professional X64
Old Jun 17, 2018, 04:21 PM   #2
bill dennison
Radeon Arctic Islands
 
Join Date: Jan 2007
Location: United States phoenix
Posts: 19,085


Quote:
It’s definitely something AMD’s engineering teams are investigating, but it still looks a long way from being workable for gaming GPUs, and definitely not in time for the AMD Navi release next year. “We are looking at the MCM type of approach,” says Wang, “but we’ve yet to conclude that this is something that can be used for traditional gaming graphics type of application.”


http://www.rage3d.com/board/showpost...9&postcount=64

welcome to 3 days ago




…..

Navi was always iffy for MCM, too soon after Raja.

maybe the next one

Last edited by bill dennison : Jun 17, 2018 at 04:27 PM.
Old Jun 17, 2018, 09:18 PM   #3
0091/2
We Do It!
 
Join Date: Dec 2002
Location: China Brooklyn, NY
Posts: 7,311

What are the users here doing with their SLI setups? Are the newer AAA games supporting them?
__________________
Lenovo x61t - Display : 12.1 (Multi-Touch) - CPU : Intel Lv7700 @1.8ghz - Graphics : Intel GMA X3100 graphics - Chipset : Intel 965 Express - Communication : Intel Wireless WiFi Link 4965AGN
10/100/1000 Ethernet - RAM : G.skill ddr2 800 4gb - Storage : G.Skill 64 SSD(SLC) - Battery : 8cell


Current Desktop [2016]
Monitor: NEC EA244wmi | CPU: Intel 3570k @4.2ghz | Heatsink: NH-D14 | GPU: Intel HD4000 | Mobo: ASUS P8Z77-v pro | WiFi: Asus PCEAC68 | SSD: Samsung 830pro 128GB | HDD: WD Black 8========D~13TB | PSU: Seasonic Plat. 660w
Old Jun 17, 2018, 10:38 PM   #4
pax
Rage3D Spammer
 
Join Date: Apr 2001
Location: Canada Grand Falls, New Brunswick, Canada
Posts: 26,487

If they had used HBM in Navi I think it could've worked with 2 GPUs on one card. Probably one of the issues is that GDDR6 will be the RAM of choice for speed and cost on gaming cards, and that makes a dual-GPU card harder to make.
__________________
I talked to the tree. Thats why they put me away!..." Peter Sellers, The Goon Show
Only superficial people cant be superficial... Oscar Wilde

Piledriver Rig 2016: Gigabyte G1 gaming 990fx. FX 8350 cpu. Asus R290 DCUII Cats 19.4.3, SoundBlaster ZXR, 2 x 8 gig ddr3 1866 Kingston. 1 x 2tb Firecuda seagate with 8 gig mlc SSHD. Sharp 60" 4k 60 hz tv. Win 10 home.

Ryzen Rig 2017: Gigabyte X370 K7 F40 bios. Ryzen 1700x. 2 x 8 ddr4 3600 (@3200) Gskill. Sapphire Vega 64 Reference Cooler Cats 19.6.2. Soundblaster X Ae5 May 10th 2018 driver. 28" Upstar 4k ips 60hz panel, Intel 600p NVME 512GB. 4 TB HGST NAS HD. Win 10 pro.

Ignore List: Keystone... -My Baron he wishes to inform you that vendetta, as he puts it in the ancient tongue, the art of kanlee is still alive... He does not wish to meet or speak with you...-
"Either half my colleagues are enormously stupid, or else the science of darwinism is fully compatible with conventional religious beliefs and equally compatible with atheism." -Stephen Jay Gould, Rock of Ages.
"The Intelligibility of the Universe itself needs explanation. It is not the gaps of understanding of the world that points to God but rather the very comprehensibility of scientific and other forms of understanding that requires an explanation." -Richard Swinburne


www.realitysandwich.com

www.makepovertyhistory.org
Old Jun 18, 2018, 06:56 AM   #5
Hapatingjaky
Maximum Disappointment
 
Join Date: Nov 2005
Location: Canada New Ukraine
Posts: 9,238

Quote:
Originally Posted by pax View Post
If they had used hbm in navi I think it couldve worked with 2 gpu on one card. Probably one of the issues is that gddr6 will be the ram of choice for speed and cost for gaming cards. And that makes a dual gpu card harder to make.
No, the way Crossfire and SLI work it wouldn't really matter if you used Infinity Fabric or not; it's still the same issue. As long as both AMD and Nvidia use an AFR method over SFR (which is what 3dfx used back in the day), we are screwed. HBM has nothing to do with it. GDDR6 is fast enough, if not faster, and cheaper to produce than HBM.

This comes down to the fact that there is no developer support for multi-GPU in today's games anymore. You have DX12 and Vulkan 1.1 (2018 release) that support multi-GPU natively, but developers have to enable and support the feature. Gone are the days of Crossfire/SLI profiles from AMD or Nvidia.

Even if AMD or Nvidia were to release a multi-core GPU, it would still come down to developer support. And that won't happen unless a console gets the multi-GPU treatment.
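
For reference, this is roughly what "developers have to enable and support it" looks like under DX12's explicit multi-adapter model: the application, not the driver, enumerates every adapter and decides what to do with each one. A minimal sketch; the helper name and structure are illustrative, the DXGI/D3D12 calls are the standard ones, and error handling is omitted.

Code:
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
// Link with d3d12.lib and dxgi.lib.
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateAllGpuDevices()
{
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // GetNodeCount() > 1 would indicate a linked-node adapter (a CrossFire/SLI
            // style link exposed to the app); work still has to be distributed manually.
            devices.push_back(device);
        }
    }
    return devices;
}

Nothing here happens automatically: if the engine only ever uses the first device it finds, any second GPU simply sits idle, which is exactly the adoption problem being described.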
Old Jun 18, 2018, 10:24 AM   #6
GTwannabe
Radeon Northern Islands
 
Join Date: Nov 2002
Posts: 2,049

Very few developers will waste time and money on optimizations for the tiny fraction of gamers that run multi-GPU. Makes more sense to tweak your code to run well on consoles. For mGPU to work, it needs to be transparent to the OS.
Old Jun 18, 2018, 10:36 AM   #7
bill dennison
Radeon Arctic Islands
 
Join Date: Jan 2007
Location: United States phoenix
Posts: 19,085

I thought one of the two ways to do MCM was SFR: two or more GPUs working on one HBM memory pool at the same time,
and you need HBM because you need it all on the same chip package.

Windows might need driver balancing, but the game would not,
a little like Ryzen.



and nv is working on it also

http://research.nvidia.com/publicati...ip-Module-GPUs
Old Jun 18, 2018, 11:04 AM   #8
bobvodka
Sleeping.
 
Join Date: Sep 2003
Posts: 1,852

Quote:
Originally Posted by bill dennison View Post
windows might need driver balancing but the game would not
a little like ryzen
So, firstly, games don't ignore CPU topology - they know what they are running on (and best guess forward compatibility) so that resources can be spread as they need - so already it's not that simple.

Secondly, trying to pretend that 2 GPUs are one is what DX11 and OpenGL did, and that led to driver profiles... so, I mean, if you want some kind of driver profile provided by MS in order to load balance your games correctly for each GPU then knock yourself out, but that's the reality of what you are suggesting.

But then I keep pointing this out and apparently no one pays attention so... *shrug*
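
To make the CPU topology point concrete: engines routinely query core counts and topology up front and size their job systems from that, rather than treating the processor as a black box. A minimal Windows-flavoured sketch, purely illustrative:

Code:
#include <windows.h>
#include <thread>
#include <vector>
#include <cstdio>

int main()
{
    // Cheap, portable answer: logical processor count.
    unsigned logical = std::thread::hardware_concurrency();

    // More detailed answer: walk the processor-core records so SMT siblings
    // aren't mistaken for extra physical cores.
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<char> buf(len);
    GetLogicalProcessorInformationEx(
        RelationProcessorCore,
        reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data()), &len);

    int physicalCores = 0;
    for (DWORD offset = 0; offset < len; )
    {
        auto* rec = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data() + offset);
        if (rec->Relationship == RelationProcessorCore)
            ++physicalCores;
        offset += rec->Size;
    }

    std::printf("logical processors: %u, physical cores: %d\n", logical, physicalCores);
    // A game's job system sizes its worker pool from numbers like these;
    // the equivalent GPU-side information is what an MCM design would need to expose.
}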
Old Jun 18, 2018, 11:17 AM   #9
pax
Rage3D Spammer
 
Join Date: Apr 2001
Location: Canada Grand Falls, New Brunswick, Canada
Posts: 26,487

Quote:
Originally Posted by Hapatingjaky View Post
No, the way Crossfire and SLI work it wouldn't really matter if you used Infinity Fabric or not; it's still the same issue. As long as both AMD and Nvidia use an AFR method over SFR (which is what 3dfx used back in the day), we are screwed. HBM has nothing to do with it. GDDR6 is fast enough, if not faster, and cheaper to produce than HBM.

This comes down to the fact that there is no developer support for multi-GPU in today's games anymore. You have DX12 and Vulkan 1.1 (2018 release) that support multi-GPU natively, but developers have to enable and support the feature. Gone are the days of Crossfire/SLI profiles from AMD or Nvidia.

Even if AMD or Nvidia were to release a multi-core GPU, it would still come down to developer support. And that won't happen unless a console gets the multi-GPU treatment.

I meant the space needed to put 2 GPUs with GDDR6; it makes for a big PCB. As for DX12 and Infinity Fabric, it was always deemed to be developer agnostic. Mind you, I'd always like a user-side switch in case something doesn't work well.
Old Jun 18, 2018, 11:26 AM   #10
bill dennison
Radeon Arctic Islands
 
Join Date: Jan 2007
Location: United States phoenix
Posts: 19,085

Quote:
Originally Posted by bobvodka View Post
So, firstly, games don't ignore CPU topology - they know what they are running on (and best guess forward compatibility) so that resources can be spread as they need - so already it's not that simple.

Secondly, trying to pretend that 2 GPUs are one is what DX11 and OpenGL did and that lead to driver profiles... so, I mean, if you want some kind of driver profile provided by MS in order to load balance your games correctly for each GPU then knock yourself out but that's the reality of what you are suggesting.

But then I keep pointing this out and apparently no one pays attention so... *shrug*
then we are s*it out of luck

because it is not getting much smaller than 7nm; soon we are going to hit the wall on die shrinks
what is next, 3nm? where to after that, and how many years will it take?

so I sure hope they can get MCM working
Old Jun 18, 2018, 01:25 PM   #11
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

Quote:
Originally Posted by bobvodka View Post
So, firstly, games don't ignore CPU topology - they know what they are running on (and best guess forward compatibility) so that resources can be spread as they need - so already it's not that simple.

Secondly, trying to pretend that 2 GPUs are one is what DX11 and OpenGL did and that lead to driver profiles... so, I mean, if you want some kind of driver profile provided by MS in order to load balance your games correctly for each GPU then knock yourself out but that's the reality of what you are suggesting.

But then I keep pointing this out and apparently no one pays attention so... *shrug*

The latest version of DX12 has multi-GPU support built in without resorting to driver profiles provided by either the GPU vendors or MS itself. It's all up to developers to use it, but it is there.



It still makes sense to support at least 2-way multi-GPU, since the performance scaling going from 1 to 2 GPUs is definitely there in the vast majority of cases, with the best cases going over 80% scaling, which isn't peanuts by any standard. The exception is the user who doesn't even bother with a 4K display and sticks to 1080p or 1440p, where one GPU is enough for what is these days considered a low resolution, and where one gets CPU limited quite often, especially at 1080p (almost all the time in that case).
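
To put the scaling figure in numbers, a toy calculation; only the 80% comes from the post above, the base frame rate is an assumption for illustration.

Code:
// Toy numbers: only the 0.80 scaling factor comes from the discussion above.
constexpr double single_gpu_fps = 45.0;                              // assumed 4K single-GPU result
constexpr double scaling        = 0.80;                              // "over 80% scaling" best case
constexpr double dual_gpu_fps   = single_gpu_fps * (1.0 + scaling);  // = 81 fps
// 45 -> 81 fps is the difference between a marginal and a comfortable 4K experience.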
Old Jun 18, 2018, 07:41 PM   #12
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

Quote:
Originally Posted by bill dennison View Post
then we are s*it out of luck

because it is not getting much smaller than 7nm; soon we are going to hit the wall on die shrinks
what is next, 3nm? where to after that, and how many years will it take?

so I sure hope they can get MCM working


There is basically nothing beyond 7nm while the chip is made of silicon. The gate portion that allows current to flow through it (or not), and that controls the behavior of the transistor, is so thin at that point (think 2 to 3 atoms wide) that even in its closed position current will pass right through it when it's not supposed to (quantum tunneling). The transistor becomes too erratic and unreliable in its behavior, having a hard time producing the same results under the same conditions... AKA useless for a computing device of any kind, where consistent and repeatable results are a requirement, not just a nice thing to have.
Old Jun 18, 2018, 09:34 PM   #13
pax
Rage3D Spammer
 
Join Date: Apr 2001
Location: Canada Grand Falls, New Brunswick, Canada
Posts: 26,487

They've been talking 5nm and 3nm though... mind you, they tend to measure transistors differently from one node to the next even if it's the same number, with Intel nodes usually being smaller than others at the same number.

Some even say Intel 10nm is in some ways smaller than GF or TSMC at 7nm.
Old Jun 18, 2018, 11:39 PM   #14
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

Quote:
Originally Posted by pax View Post
They've been talking 5 nm and 3 nm tho.. mind you they tend to measure transistors differently from one node to the next even if its the same number such as intel nodes being usually smaller than others at the same number.

Some even saying intel 10 nm is in some ways smaller than GF or TSMC at 7 nm.

That's because it is: all the parts within the transistor are made at 10nm, even the most difficult part, the gate I mentioned earlier, while TSMC and GlobalFoundries may be doing 7nm on the source and drain side but still go with 12nm at the gate itself, which is the hardest part.

Add to that the use of regular ultraviolet light rather than deep ultraviolet (DUV) or even extreme ultraviolet (EUV), and it's not too surprising that, for the first time in 30+ years, Intel doesn't have the fab process advantage it used to have (18 to 24 months ahead of everyone else), which always gave them the edge of pulling off a design while die size and power use were still reasonable.

I can't wait another 6~7 weeks until 2nd generation Threadripper is out and I get a 32-core / 64-thread one built at 12nm, still backwards compatible with existing boards to boot, and for much less than the 28-core monstrosity shown by Intel, which was cooled with chilled water, devoured near 1000 watts of power, and needed a custom 3647-pin socket usually reserved for servers.

Then add 3rd gen Ryzen next year at 7nm, and 3rd gen Epyc and Threadripper also at 7nm, still keeping the pressure on Intel.
Old Jun 19, 2018, 05:17 AM   #15
demo
space cadet
 
Join Date: Sep 2007
Location: Australia Melbourne
Posts: 26,288

Quote:
Originally Posted by shadow001 View Post
The latest version of DX12 has multi GPU support built in without resorting to driver profiles by either GPU vendors or MS itself providing them.....It's all up to developers to use it, but it is there.
That's a blessing and a curse at the same time. On one hand, great! It's supported via the API.

On the other hand, it's up to devs instead of IHVs to support it, meaning IHVs no longer have an incentive to invest time and money into mGPU. They have to rely on devs that primarily code for consoles to implement support... mGPU is dead.
Old Jun 19, 2018, 09:42 AM   #16
bobvodka
Sleeping.
 
Join Date: Sep 2003
Posts: 1,852

Quote:
Originally Posted by bill dennison View Post
then we are s*it out of luck

so I sure hope they can get MCM working
MCM working isn't a problem; we've had gfx cards with two GPUs on before, and AMD have interconnect stuff working for CPUs so the basics are there.

It depends on what they come up with, however, which could make it complicated for games (a rough sketch of what options 2/2a look like in DX12 terms follows after this post):
1) 1 'device' which is really two under the hood is 'welcome to driver profiles' land
2) 2 devices, with a dedicated bus connection to their own memory, would result in a faster SLI/CFX setup (no PCIe bus transfers) but relies on games being coded for it and has some of the old problems of resource ownership
2a) As above but with a NUMA style setup so GPU0 can ask GPU1 to fetch some data - still requires support, and 'remote' access would be slow but removes classic SLI/CFX data transfer options at the expense of 'remote' bandwidth on the other GPU.
3) 2 devices with a shared memory bus means the resource ownership and transfer problems go away, but now everyone is hitting the same bus so unless bandwidth goes up you risk slowing things down

(I could do a much longer post on the problems of work sync but I'll refrain for now at least)

All of that assumes they just glue multiple GPUs as we know them today on to a single interconnect.

Maybe the solution is to go sideways from where we are: break the GPU apart and redesign how everything fits together? But that comes with a series of massive unknowns and, more than likely, a need for developer buy-in when it comes to the newer APIs.

In short; simple MCM is possible, but comes with a metric ****ton of issues where SLI/CFX and shared memory bandwidth collide in an orgy of potential performance issues.
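
To make options 2/2a above concrete, this is roughly what explicit resource placement looks like on a DX12 linked-node adapter. A fragment only: it assumes a ComPtr<ID3D12Device> named device whose GetNodeCount() is 2, the d3dx12.h helper header, and no error handling.

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include "d3dx12.h"  // for CD3DX12_RESOURCE_DESC

// Node masks are bitfields: bit 0 = GPU node 0, bit 1 = GPU node 1.
D3D12_HEAP_PROPERTIES heap = {};
heap.Type             = D3D12_HEAP_TYPE_DEFAULT;
heap.CreationNodeMask = 0x1;  // physical memory lives on node 0, which "owns" the resource
heap.VisibleNodeMask  = 0x3;  // nodes 0 and 1 may both access it (2a: remote reads are slower)

D3D12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Buffer(64ull * 1024 * 1024);  // 64 MB buffer

Microsoft::WRL::ComPtr<ID3D12Resource> sharedBuffer;
device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                D3D12_RESOURCE_STATE_COMMON, nullptr,
                                IID_PPV_ARGS(&sharedBuffer));

Nothing about ownership, visibility or synchronisation is hidden from the application here, which is the point being made: the API exposes it, and somebody still has to write the code.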
Old Jun 19, 2018, 01:05 PM   #17
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

Quote:
Originally Posted by demo View Post
That's a blessing and a curse at the same time. On one hand, great! It's supported via API.

On the other hand, it's up to devs instead of IHV's to support it.. meaning IHV's no longer have incentive to invest time and money into MGPU. They have to rely on devs that primarily code for console to implement support.. MGPU is dead.


It's looking more and more like it, but it can also backfire big time: if games are inherently designed so that a single GPU can run them at 60+ fps even at high quality settings and resolutions, then the sales from those who were doing multi-GPU up to that point disappear, and those are the customers that usually buy the higher-end / higher-margin cards, even if the total volume is peanuts compared to mainstream cards in the 200~300$ range.

Quote:
Originally Posted by bobvodka View Post
MCM working isn't a problem; we've had gfx cards with two GPUs on before, and AMD have interconnect stuff working for CPUs so the basics are there.

It depends on what they come up with, however, which could make it complicated for games:
1) 1 'device' which is really two under the hood is 'welcome to driver profiles' land
2) 2 devices, with a dedicated bus connection to their own memory, would result in a faster SLI/CFX setup (no PCIe bus transfers) but relies on games being coded for it and has some of the old problems of resource ownership
2a) As above but with a NUMA style setup so GPU0 can ask GPU1 to fetch some data - still requires support, and 'remote' access would be slow but removes classic SLI/CFX data transfer options at the expense of 'remote' bandwidth on the other GPU.
3) 2 devices with a shared memory bus means the resource ownership and transfer problems go away, but now everyone is hitting the same bus so unless bandwidth goes up you risk slowing things down

(I could do a much longer post on the problems of work sync but I'll refrain for now at least)

All of that assumes they just glue multiple GPUs as we know them today on to a single interconnect.

Maybe the solution is to go sideways from where we are; break the GPU apart and redesign how everything fits together? but that comes with a series of massive unknowns and, more than likely, developer buy in when it comes to the newer APIs.

In short; simple MCM is possible, but comes with a metric ****ton of issues where SLI/CFX and shared memory bandwidth collide in an orgy of potential performance issues.

At least the bandwidth issue is easily solved with HBM, as the bus runs inside the GPU package rather than across the video card PCB, so it can be made as wide as it needs to be. Current Vega chips use an HBM memory bus 1024 bits wide for each memory stack, which is undoable on a PCB in the traditional way, and the memory runs at a pretty low clock speed. So the next generation coming up next year passing the 1 TB/sec mark wouldn't be the least bit surprising, and it doesn't even require a new memory type; it's all in how wide the memory bus is.
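
Back-of-the-envelope on that bandwidth claim; the per-pin data rate and stack count below are illustrative assumptions, not vendor specs.

Code:
// Rough HBM bandwidth math.
constexpr double bits_per_stack = 1024.0;  // HBM interface width per stack (from the post)
constexpr double gbps_per_pin   = 2.0;     // assumed HBM2-class per-pin data rate
constexpr double stacks         = 4.0;     // hypothetical 4-stack card
constexpr double gbytes_per_sec = bits_per_stack * gbps_per_pin * stacks / 8.0;  // = 1024 GB/s
// Today's Vega 64 (2 stacks at ~1.9 Gbps/pin) works out to roughly 484 GB/s by the same math.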



It would be a sad joke if, now that they've got the bandwidth issue licked once and for all, even for a multi-GPU-on-one-card design, they gave up on it just the same.
Old Jun 19, 2018, 01:33 PM   #18
bill dennison
Radeon Arctic Islands
 
Join Date: Jan 2007
Location: United States phoenix
Posts: 19,085

I thought the whole point of HBM was to do MCM eventually.


Going back to GDDR6 for Navi is one reason MCM on it is out.


But they need a card fast, so Navi is going the old way: big chip and GDDR6.
Old Jun 19, 2018, 10:09 PM   #19
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

Quote:
Originally Posted by bill dennison View Post
I thought the whole point of HBM was to do MCM eventually


going back to gddr6 for navi is one reason MCM on it is out


but they need a card fast so Navi is going the old way big chip and gddr6

At this point I think it would be harder to get Navi working with GDDR6, since it means redesigning the PCB of the card the way it was done before HBM was available: more complicated boards, a given memory capacity that is harder to reach without a huge number of memory chips surrounding the GPU package, and power delivery to each memory module, which uses more than HBM does. The higher clocks of that memory, given that its bus can't be as wide as HBM's, also mean tighter timings than with HBM, which runs at a much lower clock speed to begin with.


With Navi potentially being the 4th GPU using HBM since AMD started with the 28nm R9 Fury and its nearly 600mm² die, it would be beyond dumb on their part to throw all that work in the trash, having gained serious experience in integrating HBM into the same package. That is essentially MCM already, just with the companion chips being memory rather than graphics processing. Never mind the MCM approach being used in Threadripper, with the 4-die, 32-core / 64-thread part being released in a little over a month from now using Infinity Fabric, much to the grief of Intel, which is still sticking to a monolithic die.




It seems socket 2066 / X299 for Intel will max out with a 22-core part on a single die, still Skylake based, so its IPC per core at the same clock is no better than Threadripper's, while the latter throws in another 10 cores and 20 threads for the highest-end version. For multitasking and/or multithreaded scenarios it's about to become very painful for Intel for the next year or two, especially if 3rd generation Threadripper simply adds more improvements to the same formula, this time built on a 7nm process rather than the 12nm used for the 2nd gen Threadripper being released soon... (wallet is ready now).



Not applying the same trick to GPUs would be dumb on their part: the single Navi GPU is the successor to the RX 580, and 2/3/4-GPU versions within the same HBM-style packaging cover all the other markets, from high-end gaming desktops to workstations and compute cards like the Instinct series. One chip covers it all rather than making ~3 distinct chips for the low end, mid range and high end of the market, and the only trick is if this multi-GPU approach is completely transparent to the O/S and developers. If it is, it's the death of the big monolithic GPU as we know it, the kind that absolutely needs the latest cutting-edge fab process to be even remotely feasible.



So that is what it comes down to: can they pull off on the GPU side what they've done with high-end CPUs and go MCM there? My guess is an easy yes.
Old Jun 19, 2018, 10:38 PM   #20
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

http://www.guru3d.com/news-story/ben...e-surface.html




Ouch.....What does intel have to counter this on high end desktops?....Nothing.
Old Jun 19, 2018, 10:44 PM   #21
bill dennison
Radeon Arctic Islands
 
Join Date: Jan 2007
Location: United States phoenix
Posts: 19,085

Quote:
Originally Posted by shadow001 View Post
http://www.guru3d.com/news-story/ben...e-surface.html




Ouch.....What does intel have to counter this on high end desktops?....Nothing.
I can't wait till 3rd gen ryzen
Old Jun 19, 2018, 11:17 PM   #22
NWR_Midnight
Radeon Volcanic Islands
 
Join Date: Jul 2001
Location: United States Under the Sun
Posts: 3,564

Quote:
Originally Posted by shadow001 View Post
http://www.guru3d.com/news-story/ben...e-surface.html


Ouch.....What does intel have to counter this on high end desktops?....Nothing.
I thought their answer was a 28 core refrigerator.
__________________
I speak my mind! if you can't handle that, you might want to leave, because **** is going to get real!!

~I had the right to remain silent, I just didn't have the ability. ~ Ron White
~You can't fix Stupid! ~ Ron White
~There's not a pill you can take; there's not a class you can go to. - ~Stupid is forever. ~ Ron White
~Life is a hard teacher, it gives you the test before it teaches you the lesson.
~It's never to late to have a good childhood! The older you are, the better the toys! ~ My Dad
~Live everyday as though it is your last, it can all end at any moment!
Old Jun 20, 2018, 03:28 AM   #23
bobvodka
Sleeping.
 
Join Date: Sep 2003
Posts: 1,852

Quote:
Originally Posted by shadow001 View Post
and the only trick is if this multi GPU approach is completely transparent to the O/S and developers
We had this.
It was called 'SLI with driver profiles'.

And seriously, did you just ignore the bit where I pointed out that games (and indeed other high performance apps) don't treat the CPU as some kind of black box and have some awareness of what they are running on?

What you are suggesting is the GPU version of telling every app they are running on a single core, single thread machine and hope some magic lets them scale up via the OS or hardware just 'rescheduling work' behind the scenes...
Old Jun 20, 2018, 05:27 AM   #24
bobvodka
Sleeping.
 
Join Date: Sep 2003
Posts: 1,852

Quote:
Originally Posted by shadow001 View Post
At least the bandwidth issue is easily solved with HBM, as the bus runs inside the GPU package rather than across the video card PCB, so it can be made as wide as it needs to be. Current Vega chips use an HBM memory bus 1024 bits wide for each memory stack, which is undoable on a PCB in the traditional way, and the memory runs at a pretty low clock speed. So the next generation coming up next year passing the 1 TB/sec mark wouldn't be the least bit surprising, and it doesn't even require a new memory type; it's all in how wide the memory bus is.
Awwww, bless, you think there is such a thing as 'enough bandwidth', how adorable.

You put a 1TB/s bus on a single GPU and someone will find a way to use it to increase the quality of what is being rendered - that's basically the history of graphics development right there.

It's also not just about the total bandwidth, but about usage - you don't want one GPU doing a heavy bandwidth process at the same time as the other GPU, so you start going down the route of cross-GPU sync/stalls and communications, which just adds a whole host of New **** That Can Go Wrong.

And this is without taking into account the overhead of all the new signaling and various other factors, which means that as soon as you start hammering the bandwidth from two locations in a non-coherent fashion, your overall bandwidth availability drops faster than you expect. (Early PS4 docs, for example, had some bandwidth numbers for when the GPU and CPU were both touching the bus at the same time... the fall-off was non-linear. Two GPUs only make the problem worse.)

Frankly, trying to say 'Oh, they can do it on a CPU, so a GPU is easy...' is naïve at best, utterly foolish at worst...
Old Jun 20, 2018, 06:34 AM   #25
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

Quote:
Originally Posted by bobvodka View Post
We had this.
It was called 'SLI with driver profiles'.

And seriously, did you just ignore the bit where I pointed out that games (and indeed other high performance apps) don't treat the CPU as some kind of black box and have some awareness of what they are running on?

What you are suggesting is the GPU version of telling every app they are running on a single core, single thread machine and hope some magic lets them scale up via the OS or hardware just 'rescheduling work' behind the scenes...


And we seem to have multi-GPU support in the latest version of DirectX 12, so driver profiles for either SLI or Crossfire are no longer needed, which seems to suggest that hardware resource management is handled directly through the API with no driver help required any longer.

Quote:
Originally Posted by bobvodka View Post
Awwww, bless, you think there is such a thing as 'enough bandwidth', how adorable.

You put a 1TB/s bus on a single GPU and someone will find a way to use it to increase the quality of what is being rendered - that's basically the history of graphics development right there.

It's also not just about the total bandwidth, but about usage - you don't want one GPU doing a heavy bandwidth process at the same time as the other GPU, so you start going down the route of cross-GPU sync/stalls and communications, which just adds a whole host of New **** That Can Go Wrong.

And this is without taking into account the overhead of all the new signaling and various other factors, which means that as soon as you start hammering the bandwidth from two locations in a non-coherent fashion, your overall bandwidth availability drops faster than you expect. (Early PS4 docs, for example, had some bandwidth numbers for when the GPU and CPU were both touching the bus at the same time... the fall-off was non-linear. Two GPUs only make the problem worse.)

Frankly, trying to say 'Oh, they can do it on a CPU, so a GPU is easy...' is naïve at best, utterly foolish at worst...

I never said that. I said that whatever bandwidth is required is easily achieved through HBM simply by making the bus as wide as it needs to be, and my stating the 1 TB/s figure was simply to keep it simple: just make the bus twice as wide and all the other variables stay the same... there's no major re-engineering involved.


I'd have a hard time believing that, if we took the GDDR6 route, instead of the roughly 384-bit memory bus that previous high-end Nvidia cards have used, they could go to a 768-bit bus to double overall bandwidth and somehow find room for 24 GDDR6 memory modules on a card that has to stay the same physical size to be PCIe certified.


Just like GDDR5, GDDR6 uses a 32-bit memory interface per module, so 24 of them would need to be fitted to the PCB, powered, and routed with traces of exactly the same length to the GPU so the timing is identical. HBM not only makes that connection within the GPU package, which puts the modules as physically close to the GPU as they can be and the bus as wide as needed, it also stacks the memory dies vertically like a high-rise building... kills three birds with one stone.


As for the last part: given that CPUs run hundreds of different programs that behave differently in every case and they still went MCM, going the same route with a GPU, where the primary goal is running graphics and supporting the graphics features of DX12 and Vulkan, doesn't sound like a big deal when you look at the big picture.
Old Jun 20, 2018, 08:10 AM   #26
bobvodka
Sleeping.
 
Join Date: Sep 2003
Posts: 1,852

Quote:
Originally Posted by shadow001 View Post
And we seem to have multi-GPU support in the latest version of DirectX 12, so driver profiles for either SLI or Crossfire are no longer needed, which seems to suggest that hardware resource management is handled directly through the API with no driver help required any longer.
Yes, I know, and it does this by not hiding anything behind any kind of abstraction and letting you query the system for all the hardware available, meaning it isn't "completely transparent to the O/S and developers".

Which is really the two ends;
- multiple bits of hardware hidden = 'profiles'
- everything can be queried = requires dev support

There is no magic here.
Old Jun 20, 2018, 08:23 AM   #27
bobvodka
Sleeping.
 
Join Date: Sep 2003
Posts: 1,852

Quote:
Originally Posted by shadow001 View Post
As for the last part, given that CPU's run hundreds of different programs that behave differently in every case and they went MCM with it, going the same route with a GPU where the primary goal is running graphics and supporting the graphics features of DX12 and Vulkan, doesn't sound like a big deal when you look at the big picture.
You didn't repeat the mantra did you.

Getting MCM to work and getting MCM to work well are two totally different problems but as you seem to have such a hard-on for the CPU comparison then let me point out that when CPUs started to either come in pairs (ah, the dual CPU Celeron boards) or picked up extra cores it took time for games to make use of it and it took even longer to make anything approaching optimal use of it as core counts climbed.

However, I guess I shouldn't be too hard on you.. the fact you seem to think you can just slap these things on an interconnect and make the bus a bit wider and everything will be great is to be expected... I mean, if you had any real knowledge or training in this area you wouldn't be making these arguments.

Edit:
And this is my last post on the subject... I really REALLY cba going around in circles on this subject any more, I'm bored of it already frankly...
Old Jun 20, 2018, 05:53 PM   #28
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

Quote:
Originally Posted by bobvodka View Post
You didn't repeat the mantra did you.

Getting MCM to work and getting MCM to work well are two totally different problems but as you seem to have such a hard-on for the CPU comparison then let me point out that when CPUs started to either come in pairs (ah, the dual CPU Celeron boards) or picked up extra cores it took time for games to make use of it and it took even longer to make anything approaching optimal use of it as core counts climbed.

However, I guess I shouldn't be too hard on you.. the fact you seem to think you can just slap these things on an interconnect and make the bus a bit wider and everything will be great is to be expected... I mean, if you had any real knowledge or training in this area you wouldn't be making these arguments.

Edit:
And this is my last post on the subject... I really REALLY cba going around in circles on this subject any more, I'm bored of it already frankly...


My arguments are based on technical feasibility, design and development cost, and mass volume production cost, so you'll have to do better than 'it's more trouble to program for on your end' as the only major drawback here.


I'm done too.....
Old Jun 20, 2018, 07:29 PM   #29
bill dennison
Radeon Arctic Islands
 
Join Date: Jan 2007
Location: United States phoenix
Posts: 19,085

Quote:
Originally Posted by shadow001 View Post
My arguments are based on technical feasibility, design development cost and mass volume production cost, so you have to do better than it being more trouble to program for on your end of things as the only major drawback here.


I'm done too.....
Well, if both NV and AMD do get it working and both go that way (and they are both talking about it),

game programmers will just have to live with the extra work if need be,

or see a loss of sales, as both 4K and 8K gaming are going to need more than 7nm+ can give, I think.
Old Jun 20, 2018, 08:44 PM   #30
shadow001
Captain thread derail....
 
Join Date: Jul 2003
Posts: 9,219

Quote:
Originally Posted by bill dennison View Post
Well, if both NV and AMD do get it working and both go that way (and they are both talking about it),

game programmers will just have to live with the extra work if need be,

or see a loss of sales, as both 4K and 8K gaming are going to need more than 7nm+ can give, I think.

It's a sure thing it won't be enough, especially at 8K. Assuming the scene is completely GPU limited and the CPU or I/O has nothing to do with it, rendering each frame at 8K is 4x more demanding than rendering it at 4K, and there are plenty of titles right now where a single high-end GPU just barely hits 60 fps at 4K. Running the same game in that completely GPU-limited scenario at 8K means doing no better than 15 fps - a slide show - and even putting 2 GPUs together and overclocking them to hell and back might only get close to 30 fps. Still far from smooth.
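
The 4x figure is just pixel count; a quick sanity check against standard resolutions, assuming frame time scales linearly with pixels in a purely GPU-limited scene.

Code:
constexpr long long px_4k  = 3840LL * 2160;            // 8,294,400 pixels
constexpr long long px_8k  = 7680LL * 4320;            // 33,177,600 pixels = exactly 4x 4K
constexpr double    fps_4k = 60.0;                     // the "just barely 60 fps" case above
constexpr double    fps_8k = fps_4k * px_4k / px_8k;   // = 15 fps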