
Programmers Discussion Forum Use this discussion forum to talk about the art of programming. Everything from website development to 3D graphics programming should go in here.

"
Reply
 
Thread Tools Display Modes
Old Jun 4, 2002, 09:34 AM   #1
Hellbinder
Nothing but the Truth
 
Join Date: Oct 2001
Location: Canada Coeur d Alene, Idaho
Posts: 5,124
Hellbinder is still being judged by the masses


Default Days of 3d accelerators numbered?

Odd topic, I know.

With the ever-increasing focus on complete programmability throughout the entire 3D rendering process, it's looking more and more like the old software-rendering days. Comments from various developers have gone from "wow, we can do all this cool stuff now" to "we are being held back by limitations in programmability".

The most excitement about any new product recently seems to be focused on the P10, which is nearly a second complete CPU. I'm wondering whether what we really need is a new, powerful 3D programming language, with the focus placed on more powerful CPUs. I don't see why companies like ATI, Nvidia and others don't simply develop a specialized CPU with an instruction set tailored to 3D. GPUs are already starting to go that route. Surely it would be faster to have a 2 GHz C-GPU than the setup we have today. Of course there are many other factors involved in making such a design really fly, like memory access, bandwidth, cache, its own specialized bus, etc.

Anyone else have any input on the idea?
__________________
Intel Core i5 2500 @ 3.6ghz
His Radeon 6870 @ 945/1140
4GB Ram
Hellbinder is offline   Reply With Quote
Old Jun 5, 2002, 02:31 PM   #2
Ichneumon
Lord of the Flies
 
Join Date: Sep 2000
Location: United States Michigan
Posts: 3,735
Ichneumon is still being judged by the masses


Default

I'm not sure I agree with that. I think there will always be some form of independent graphics processor (even if that means it is integrated into the core of the CPU).

In 5-10 years I could maybe see what you're talking about, when we're talking CPUs in the hundreds of GHz. At that point processing power will be such that there will be power to spare, and room on small processes to integrate functions that accelerate the really complicated math required for graphics but that isn't needed in day-to-day computing.
__________________
Ichneumon
http://www.rage3d.com

"A lie gets halfway around the world before the truth has a chance to get its pants on. "
- Sir Winston Churchill (1874-1965)
Ichneumon is offline   Reply With Quote
Old Jun 5, 2002, 02:50 PM   #3
kakarot
Rage3D Veteran
 
Join Date: Feb 2002
Location: United States Boston
Posts: 4,026
kakarot is still being judged by the masses


Default

That would be nice, but I agree with Ichneumon. That thing would need a lot more power than what's available today to get everything done. I don't see why it wouldn't be possible in the future, though.
kakarot is offline   Reply With Quote
Old Jun 6, 2002, 08:23 AM   #4
merry
(unconfirmed)
 
Join Date: May 2002
Location: Romania bucharest.ro
Posts: 56
merry is still being judged by the masses


Default Re: Days of 3d accelerators numbered?

Quote:
Originally posted by Hellbinder
I don't see why companies like ATI, Nvidia and others don't simply develop a specialized CPU with an instruction set tailored to 3D. GPUs are already starting to go that route.
Well, this is a general trend in computing, in automation and - generally speaking - in decision making: decentralization.

You already have several processors in your PC. Apart from the CPU, which is general purpose, there is the GPU, the chipset - which does a lot on its own, without assistance from the CPU - and the IDE controller (well, it has become part of the chipset... years ago the i8272 was just controlling a floppy or two); even the keyboard has a controller, i.e. a dedicated processor of its own. The purpose is to reduce execution time by using 'specialists' for each function, and to simplify communication by reducing the length of the messages passed between them.

A good team of specialists with good communication.
merry is offline   Reply With Quote
Old Jun 7, 2002, 09:09 PM   #5
BeardedClem
Nine and a half Courics
 
Join Date: Sep 2000
Posts: 1,162
BeardedClem is still being judged by the masses


Default

I'm an idiot, so I have a dumb question.

Why do GPUs have such lower clock speeds compared to today's CPUs, when the die size is even smaller and the transistor count is higher in most cases?

Would higher-clocked GPUs be an advantage, or is there a bottleneck somewhere that would make it unnecessary?
BeardedClem is offline   Reply With Quote
Old Jun 9, 2002, 12:56 PM   #6
wandernACE64
heaven ^
 
Join Date: Mar 2002
Location: 5 min from Nvidia HQ
Posts: 1,376
wandernACE64 is still being judged by the masses


Default

Quote:
Originally posted by BeardedClem
I'm an idiot, so I have a dumb question.

Why do GPUs have such lower clock speeds compared to today's CPUs, when the die size is even smaller and the transistor count is higher in most cases?

Would higher-clocked GPUs be an advantage, or is there a bottleneck somewhere that would make it unnecessary?
I think the high transistor count of GPUs is what holds the clock speed down, for reasons of heat and others I'm not quite sure about... maybe someone else can answer that.

later
__________________
Folding@Home -- personal goal: 35k po!nts...for now.

Join Rage3D's team! [team number: 64]
If you like eating beef and are afraid of getting Mad Cow Disease, then you should join Rage3D's F@H team and help to find a cure faster! :)
Thanks for those of YOU who'd joined; YOU are making a difference!

:: direction.diligence.discipline.
wandernACE64 is offline   Reply With Quote
Old Jun 10, 2002, 09:44 PM   #7
cbsboyer
Rage3D Veteran
 
Join Date: Jan 2002
Location: Canada In the golden horseshoe
Posts: 6,518
cbsboyer can beat 'Minesweeper' on any difficultycbsboyer can beat 'Minesweeper' on any difficulty


Default

Quote:
Originally posted by wandernACE64


I think the high transistor count of GPUs is what holds the clock speed down, for reasons of heat and others I'm not quite sure about... maybe someone else can answer that.

later
More than likely. When a company is producing a chip with a lifespan of 12-18 months, they are probably not going to put much effort into efficiency. It seems to be more of a "do what you need to do to hit this target" approach, and if they need 80 million transistors to do it but can only reach 250-300 MHz, then so be it. It would probably take a lot more time and effort to make a smarter chip with fewer transistors and a higher clock rate than to just bash something through that works.

Chris.
cbsboyer is offline   Reply With Quote
Old Jun 10, 2002, 09:55 PM   #8
cbsboyer
Rage3D Veteran
 
Join Date: Jan 2002
Location: Canada In the golden horseshoe
Posts: 6,518
cbsboyer can beat 'Minesweeper' on any difficultycbsboyer can beat 'Minesweeper' on any difficulty


Default Re: Days of 3d accelerators numbered?

Quote:
Originally posted by Hellbinder
Odd topic I know,

--->8---
I don't see why companies like ATI, Nvidia and others don't simply develop a specialized CPU with an instruction set tailored to 3D. GPUs are already starting to go that route. Surely it would be faster to have a 2 GHz C-GPU than the setup we have today.
---8<---
Anyone else have any input on the idea?
Actually, that was what they used to do long, long ago. Way back in the day, Number Nine (and a few others) made a really good series of cards based on a Texas Instruments DSP. The upshot was that if they wanted to add or fix Windows accelerator functions, it was just a driver update rather than new firmware or a new chip. The problem was that fixed-function accelerators were a lot cheaper to design and build, so they took over. That was when S3 took over the market for 2D accelerators with their fixed-function 801/805 series chips. Funny how things come around again if you wait long enough.

Chris.
cbsboyer is offline   Reply With Quote
Old Jun 11, 2002, 08:08 AM   #9
merry
(unconfirmed)
 
Join Date: May 2002
Location: Romania bucharest.ro
Posts: 56
merry is still being judged by the masses


Default

The term DSP is a little ambiguous - it stands for Digital Signal Processor, and there are a lot of chips marketed under this acronym. However, they are general-purpose DSPs, meaning processors with built-in support for some of the functions most widely used in digital signal processing, especially digital filters and Fourier transforms.

A GPU is - in a way - a DSP (it does digital signal processing), only it needs more specialized functions than, say, digital audio boxes, Dolby 5.1 decoders, or the EMU10K1.

The point of using dedicated chips is that the GPU will not waste time fetching and processing code (like 'get x', 'get y', 'calculate x² + y²', 'next x', etc.) to simulate those functions; it will accept commands like 'draw a circle', with the rest of the 'get x' work performed implicitly by the whole bunch of transistors.
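
To make that concrete, here is a minimal, self-contained C++ sketch (the type and function names are just illustrative, not any real API): a software loop that has to compute the circle point by point, versus a single high-level command handed to a stand-in for dedicated hardware.

Code:
#include <cmath>
#include <cstdio>
#include <vector>

struct Point { int x, y; };

// CPU-style: fetch and compute every point of the circle in a software loop.
std::vector<Point> plotCircleSoftware(int cx, int cy, int r) {
    std::vector<Point> pts;
    for (int x = -r; x <= r; ++x) {
        int y = static_cast<int>(std::lround(std::sqrt(double(r) * r - double(x) * x)));
        pts.push_back({cx + x, cy + y});
        pts.push_back({cx + x, cy - y});
    }
    return pts;
}

// GPU-style: the host issues one command; the "hardware" (here a stand-in
// function) performs all the per-point work implicitly.
struct DrawCircleCmd { int cx, cy, r; };

void submitCommand(const DrawCircleCmd& cmd) {
    std::printf("draw circle at (%d,%d), radius %d\n", cmd.cx, cmd.cy, cmd.r);
}

int main() {
    std::vector<Point> pts = plotCircleSoftware(0, 0, 100);  // CPU does hundreds of iterations
    std::printf("software path produced %zu points\n", pts.size());
    submitCommand({0, 0, 100});                              // host issues a single command
}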

Yes, it's cheaper to build those chips than to buy some $200 TI DSP (check the DSP prices, you'll be surprised), and current DSPs also run in the 100-300 MHz range...

The slowness of GPUs is, IMO, down to the fact that they are not built from scratch like CPUs but are in fact some sort of ready-made configurable chips (programmable gate arrays, ASICs and the like), which are slower because of the structural overhead implied by configurability. That is, you buy the chip and 'configure' it to be a GPU (or anything else), but the internal structures that allow you to do that remain within the chip and consume some bandwidth, power, etc.

As far as development is concerned, it's easier to do a DSP, not that easy with an ASIC - and really not that easy with transistor-by-transistor chip design. In the end, it's easiest to leave it all to the CPU.

Ten years ago, Intel made 50 MHz chips - I think. The consumer graphics industry will come of age, eventually.
merry is offline   Reply With Quote
Old Jun 11, 2002, 10:47 AM   #10
cbsboyer
Rage3D Veteran
 
Join Date: Jan 2002
Location: Canada In the golden horseshoe
Posts: 6,518
cbsboyer can beat 'Minesweeper' on any difficultycbsboyer can beat 'Minesweeper' on any difficulty


Default

Quote:
Originally posted by merry
Yes, it's cheaper to build those chips than to buy some $200 TI DSP (check the DSP prices, you'll be surprised), and current DSPs also run in the 100-300 MHz range...

...

As far as development is concerned, it's easier to do a DSP, not that easy with an ASIC - and really not that easy with transistor-by-transistor chip design. In the end, it's easiest to leave it all to the CPU.
Well, it might be easier to build the actual DSP chip, but then you have a whole new department to develop the internal code for it. It's the engineering that's expensive, not the chip. The reason that DSPs are expensive is the toolkits they provide to help you actually do something with them. The upshot is that you can tell the DSP exactly what you want, rather than having to munge something together from the grocery list of supported functions in an ASIC, which may or may not do exactly what you need.

As far as keeping everything in the CPU goes, who here got a chance to use a Commodore Amiga? They had a graphics processor, a sound processor, and the CPU as separate hardware, which is why they could do motion video or play games with great colour and digital audio when people on IBM PCs were playing Commander Keen in EGA with an AdLib card.

Chris.
cbsboyer is offline   Reply With Quote
Old Jun 11, 2002, 03:40 PM   #11
UFO
Who's your buddy?
 
Join Date: Feb 2002
Posts: 4,616
UFO is still being judged by the masses


Default

Think about sound cards. About four years ago they often came with an advanced signal processor for voice recognition etc., and they often had a pretty advanced MIDI wavetable synthesizer. Look at the sound cards of today: the wavetable synthesizer is often gone, and the card only handles a large number of streams instead. The wavetable synthesizer has moved from hardware to software (just listen to the Yamaha software synth, better than almost all hardware synths), and a software synth doesn't make much of an impact on performance today.

One interesting aspect of future CPUs is that they will come with multiple cores on one chip. One question is whether that will make GPUs more redundant, since they are almost an extra CPU, albeit a bit more specialized.

However, there is still very much to do in the visual domain. Scientists are saying that real-time ray tracing will take over the world one day. Ray tracing would benefit greatly from hardware, since it can be parallelized to a great extent. I am really wondering why the big graphics companies don't put more effort into this area, since ray tracing gives much more for "free" (reflections, refractions, easy portals).
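
As a rough illustration of that parallelism, here is a small, self-contained C++ sketch (the one-sphere scene and the helper names are invented for the example): every pixel's ray is independent, so rows can simply be divided among threads with no synchronization beyond the final join.

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

// Does the ray through pixel (x, y) hit a unit sphere centred at (0, 0, -3)?
float tracePixel(int x, int y, int w, int h) {
    float u = (x + 0.5f) / w * 2.0f - 1.0f;           // pinhole camera mapping
    float v = (y + 0.5f) / h * 2.0f - 1.0f;
    float dx = u, dy = v, dz = -1.0f;
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;
    float cx = 0.0f, cy = 0.0f, cz = -3.0f, r = 1.0f; // sphere centre and radius
    float tca = cx * dx + cy * dy + cz * dz;          // ray origin is (0, 0, 0)
    float d2  = cx * cx + cy * cy + cz * cz - tca * tca;
    return (d2 <= r * r) ? 1.0f : 0.0f;               // hit = white, miss = black
}

int main() {
    const int w = 256, h = 256;
    std::vector<float> image(w * h);
    const int nThreads = static_cast<int>(std::max(1u, std::thread::hardware_concurrency()));
    std::vector<std::thread> workers;

    // Each thread takes every nThreads-th row; no two threads touch the same
    // pixel, so no locking is needed anywhere.
    for (int t = 0; t < nThreads; ++t) {
        workers.emplace_back([&, t] {
            for (int y = t; y < h; y += nThreads)
                for (int x = 0; x < w; ++x)
                    image[y * w + x] = tracePixel(x, y, w, h);
        });
    }
    for (std::thread& th : workers) th.join();
    std::printf("rendered %dx%d with %d threads\n", w, h, nThreads);
}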

For those who are interested in real-time ray tracing, go to www.openrt.de and read some interesting papers.
__________________
Intel 80386 16 MHz, VGA graphics, 4MB RAM, 30MB HDD, 3.5" floppy drive
45303 3dMark06, Crysis 2 60 fps all settings at highest.

Last edited by UFO : Jun 11, 2002 at 03:42 PM.
UFO is online now   Reply With Quote
Old Jun 11, 2002, 06:25 PM   #12
noko
Rage3D Veteran
 
Join Date: Sep 2000
Location: United States Orlando, Florida, USA
Posts: 7,181
noko can beat 'Minesweeper' on any difficultynoko can beat 'Minesweeper' on any difficulty


Default

Integrating features onto chips is an ongoing process; it reduces overall costs, improves dependability and probably helps a number of other aspects. I remember XT motherboards that would make the motherboards of today look like unpopulated toys, yet today's motherboards are eons more advanced and more capable. I don't think we are even remotely close to integrating an R300 or NV30 onto a CPU, or to having a CPU capable enough to reproduce even the basic features we take for granted. 5 to 10 years from now will probably look way different, but I think dedicated chips are here to stay. I would like to see the wire traces disappear and light traces or fiber optics take over, though.
__________________
FX8350 4.6ghz, Thermaltake Water 2.0 Pro, Asus Sabortooth 990fx, 8gb DDR3 2133, Asus/PowerColor Radeon R9 290x/ 290, XFire, 950w PC pwr & cooling PS, 128gb Kingston SSD and 1tb WD SATA III, 27" IPS YHAMAKASI Catleap. Win7

HTPC - ASUS M5A88-M ATi 880G AM3+ chipset, FX8120, 750w P/S, PowerColor Radeon HD 7970, 8gb Ripjaw DDR3 1600, 1TB SATA III Seagate, MicroATX Silverstone box, 60" LED HDTV, Win8.1 Pro 64
noko is offline   Reply With Quote
Old Jun 11, 2002, 09:03 PM   #13
cbsboyer
Rage3D Veteran
 
Join Date: Jan 2002
Location: Canada In the golden horseshoe
Posts: 6,518
cbsboyer can beat 'Minesweeper' on any difficultycbsboyer can beat 'Minesweeper' on any difficulty


Default

Integration might work for low-cost systems, but for high-bandwidth/high-end applications it still makes sense for a processor to be where the work is. Why share the memory and I/O bandwidth of the CPU if you can have an external unit take care of it by itself with a completely separate memory bus? The only reason to do differently is to make it cheap (like those sound cards).

Chris.
cbsboyer is offline   Reply With Quote
Old Jun 12, 2002, 05:21 AM   #14
merry
(unconfirmed)
 
Join Date: May 2002
Location: Romania bucharest.ro
Posts: 56
merry is still being judged by the masses


Default

Quote:
Originally posted by cbsboyer


Well, it might be easier to build the actual DSP chip, but then you have a whole new department to develop the internal code for it. It's the engineering that's expensive, not the chip
I guess this is the point where my English starts failing me. This is precisely what I wanted to say. Intel has the big bucks to support chip design.

The point with expensive DSP chips is that, well, to do a good job you need an expensive chip, more expensive than a middle-of-the-road graphics card - mine was $85, and for that money I couldn't buy a reasonably performant DSP (I tried to find a DSP56307 or 311 evaluation module; I would have had to pay some $300 for the board, software and all - and it's not even suitable for graphics).

A chip that you develop yourself will be cheaper than that, design costs included.

I don't know much about designing with ASICs; I mentioned it because I've heard that is what they do for GPUs, ATI at least.

And it can't be as fast as a CPU, because of the internal complexity required for engineering the chip, which is useless for the final activity of the chip.

Oof, I hope this makes any sense. I actually totally agree with you:
Quote:
Integration might work for low-cost systems, but for high-bandwidth/high-end applications it still makes sense for a processor to be where the work is. Why share the memory and I/O bandwidth of the CPU if you can have an external unit take care of it by itself with a completely separate memory bus?
merry
merry is offline   Reply With Quote
Old Jul 12, 2004, 01:15 PM   #15
Crawdaddy79
Radeon 8500
 
Join Date: Dec 2001
Location: United States Burke, VA
Posts: 16,868
Crawdaddy79 can beat 'Minesweeper' on any difficultyCrawdaddy79 can beat 'Minesweeper' on any difficultyCrawdaddy79 can beat 'Minesweeper' on any difficulty


Default

Two years later, and 3D acceleration is still alive and kicking, with no sign of going away.

It's just getting faster and faster and bigger, and it has bigger heatsinks and fans now.

My Voodoo3 did not have a fan. It had just a heatsink, and even that was probably not necessary.
Crawdaddy79 is offline   Reply With Quote
Old Jul 26, 2004, 06:50 AM   #16
Hanners
Zetsubou Sensei
 
Join Date: Oct 2000
Location: United Kingdom England
Posts: 15,680
Hanners is still being judged by the masses


Default

I think an interesting point here is that Tim Sweeney (of Unreal fame) seems to be thinking along similar lines - he seems to be convinced that at some stage in the future, the CPU and GPU will converge into a single, all-purpose module.

I can't say that I agree with him personally, but it's an interesting theory.
__________________
Owner / Editor-in-Chief - Elite Bastards
Hanners is offline   Reply With Quote
Old Jul 26, 2004, 07:28 AM   #17
Vengeance
May 2, 1965 - April 21, 2008
 
Join Date: Feb 2003
Location: United States Undercity
Posts: 17,017
Vengeance is still being judged by the masses


Default

Quote:
Originally Posted by Hanners
I think an interesting point here is that Tim Sweeney (of Unreal fame) seems to be thinking along similar lines - he seems to be convinced that at some stage in the future, the CPU and GPU will converge into a single, all-purpose module.

I can't say that I agree with him personally, but it's an interesting theory.
That would be cool. I'm sick and tired of hearing "CPU limited". Why is it that you can go out and buy a $500 video card and your 2.8 GHz P4 holds it back?
__________________
eVGA nForce 680i SLi
Intel Q6600 Quad Core Stock @ 2.4GHz
nVidia 8800GTX @ stock speeds
2GB Crucial Ballistix DDR2 @ 800mhz 4-4-4-12 1T
Seagate Barracuda 320GB SATA2
PC Power & Cooling Turbo-Cool 1KW
COOLER MASTER Stacker Black Aluminum ATX Full Tower
Westinghouse 22in wide screen LCD @ 1680x1050

Report a bug with the NVIDIA graphics driver.
Vengeance is offline   Reply With Quote
Old Jul 26, 2004, 07:43 AM   #18
EMunEeE
Formerly MrEMan4K
 
Join Date: Jun 2002
Location: United States Raleigh, NC
Posts: 5,753
EMunEeE is still being judged by the masses


Default

Quote:
Originally Posted by Vengeance
That would be cool. I'm sick and tired of hearing "CPU limited". Why is it that you can go out and buy a $500 video card and your 2.8 GHz P4 holds it back?
I think it's because the CPU is not as fast at doing things like AI or physics, so the GPU ends up waiting on the CPU for them. It would be nice if either a) CPU manufacturers could speed that kind of processing up, or b) GPU manufacturers used the GPU to accelerate AI/physics/etc. calculations. Either way, we will be CPU limited for some time. I do not know much about dual-core processors and SMP, but it would be nice if, during a gaming session, one core handled the necessary system tasks and some game-specific calculations, while the other core did game-specific calculations exclusively.
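
A rough sketch of that split, assuming two cores and plain C++ threads (the workload functions are stand-ins, not a real engine): one thread simulates the next frame's AI and physics while the other prepares the current frame's rendering work.

Code:
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct GameState { std::vector<float> positions; };

// Stand-in for per-entity AI decisions and physics integration.
void updateAiAndPhysics(GameState& s) {
    for (float& p : s.positions) p += 0.016f;
}

// Stand-in for culling, sorting and building the command list the GPU consumes.
void buildRenderCommands(const GameState& s) {
    volatile float sum = 0.0f;
    for (float p : s.positions) sum += p;
}

int main() {
    GameState current{std::vector<float>(100000, 0.0f)};
    GameState next = current;

    for (int frame = 0; frame < 3; ++frame) {
        // Core 1 simulates the next frame while core 2 submits the current one.
        std::thread sim(updateAiAndPhysics, std::ref(next));
        std::thread render(buildRenderCommands, std::cref(current));
        sim.join();
        render.join();
        current = next;   // hand the freshly simulated state to the next frame
        std::printf("frame %d done\n", frame);
    }
}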
__________________
Visit my website: http://emuneee.com -- It's pretty cool.
EMunEeE is offline   Reply With Quote
Old Jul 26, 2004, 08:34 AM   #19
Hanners
Zetsubou Sensei
 
Join Date: Oct 2000
Location: United Kingdom England
Posts: 15,680
Hanners is still being judged by the masses


Default

I think once PCI Express has matured and become the standard, and games start being coded around its inherent benefits, then we may well see a lot of things like physics shifted onto the GPU.

We're already seeing things like geometry instancing taking some of the strain away from the CPU, and I imagine that trend will only continue.
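
As a conceptual illustration of why instancing helps (the draw functions below are hypothetical stand-ins, not a real graphics API): without it, the CPU pays per-call driver overhead once per object; with it, the CPU submits one call plus a buffer of per-instance data, and the repetition moves onto the GPU.

Code:
#include <cstdio>
#include <vector>

struct Transform { float x, y, z; };

// Stand-in for one draw call: one round of CPU-side state validation and
// command building per object.
void drawMesh(const Transform& /*t*/) {}

// Stand-in for an instanced draw call: one round of CPU-side overhead,
// and the hardware repeats the mesh once per entry in the buffer.
void drawMeshInstanced(const std::vector<Transform>& /*instances*/) {}

int main() {
    std::vector<Transform> trees(10000, Transform{0.0f, 0.0f, 0.0f});

    // Without instancing: 10,000 calls, 10,000 rounds of per-call overhead.
    for (const Transform& t : trees) drawMesh(t);

    // With instancing: a single call carrying all per-instance transforms.
    drawMeshInstanced(trees);

    std::printf("submitted %zu instances in 1 call instead of %zu calls\n",
                trees.size(), trees.size());
}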
__________________
Owner / Editor-in-Chief - Elite Bastards
Hanners is offline   Reply With Quote
Old Jul 27, 2004, 10:02 AM   #20
scificube
Radeon HD 7970
 
Join Date: Apr 2004
Location: the nth dimesion.
Posts: 750
scificube is still being judged by the masses


Default

Isn't there a shift towards parallel processing in the computing world, such as dual-core processors or dual-CPU systems?

I think this shift is due to the logical assumption that if two tasks are unrelated, they can be done concurrently, saving the time it would take to do each separately.

With this in mind, I think the tasks, and the methods for completing them, are unrelated with respect to the CPU and GPU. The tasks a CPU performs a lot of are unrelated tasks that require, for lack of a better way of saying it, step-by-step decision making. Tasks like physics, and even more so AI, are very suited to being placed on the CPU. The GPU handles batches of very similar tasks that don't require a lot of decision making (ignoring dynamic branching in shaders for a moment). As someone previously noted, where a CPU would plot a circle point by point, a GPU would simply draw a circle, as it is optimized to do.

Since general tasks and graphics processing are for the most part unrelated, I don't think the call for parallelism would be defeated by a merger of the CPU and GPU. Making a single core that was efficient at doing both kinds of task concurrently seems very difficult. I feel it is likely, however, that a dual-core processor may replace the CPU and GPU down the road. That is to say, a GPU core will be added to a traditional CPU core on a single chip. This would give the GPU core the speed boost it needs without crippling the CPU core with tasks it is not optimized to perform, and since the GPU would be up to speed there would be no real need to burden the CPU core with geometry at all, allowing it to be dedicated solely to AI, physics, and system tasks. This of course means you need a faster bus and more bandwidth, but that is already happening.

Who knows, multicore design may be even more explicit than that, with a core dedicated to graphics, AI, physics, and general system tasks respectively. Sony certainly seems unafraid to add cores to the Cell chip. (Maybe they should be...)

Well, that's my two cents. I think multi-"specialized"-core processors will win out over a complicated single-core attempt to do all tasks efficiently. I'm not as smart as you guys, so I apologize if I'm egregiously off base.
__________________
Windows XP Home Edition SP2
Catalyst 5.7, AMD 3000+ @2.4GHz, Thermaltake Polo @4000RPM, Chaintech VNF-250 Nforce3, ANTEC 430Watt PSU, 1 Gig DDR 333 @400MHz 2T timing, Sapphire Radeon X800Pro Vivo 256MB - 16 pipes unlocked @ 510/550 - stock cooling, 7.1 X-Mystique Gold Edition, Maxtor SATA 120GB-8MB cache, Western Digital 40GB-8MB cache

Last edited by scificube : Jul 28, 2004 at 03:25 PM.
scificube is offline   Reply With Quote
Old Nov 6, 2006, 05:59 AM   #21
merry
(unconfirmed)
 
Join Date: May 2002
Location: Romania bucharest.ro
Posts: 56
merry is still being judged by the masses


Default

This thread started in 2002 and was continued in 2004. Time for the 2006 edition.

AMD-ATI are into Fusion, which seems to have been predicted by some of the posts above.

Quote:
AMD plans to create a new class of x86 processor that integrates the central processing unit (CPU) and graphics processing unit (GPU) at the silicon level with a broad set of design initiatives collectively codenamed “Fusion.”
(from the press release)

Meanwhile, Valve is working on multicore optimizations for its games, counting on future high-count multicore processors and no dedicated GPU:

Quote:
Newell even talked about a trend he sees happening in the future that he calls the "Post-GPU Era." He predicts that as more and more cores appear on single chip dies, companies like Intel and AMD will add more CPU instructions that perform tasks normally handled by the GPU. This could lead to a point where coders and gamers no longer have to worry if a certain game is "CPU-bound" or "GPU-bound," only that the more cores they have available the better the game will perform. Newell says that if it does, his company is in an even better position to take advantage of it.
(article at arstechnica)
merry is offline   Reply With Quote
Old Dec 3, 2008, 02:05 PM   #22
merry
(unconfirmed)
 
Join Date: May 2002
Location: Romania bucharest.ro
Posts: 56
merry is still being judged by the masses


Default

It's that time of the two year interval again:

http://www.tomshardware.com/news/win...-gpu,6645.html
merry is offline   Reply With Quote
Old Dec 3, 2008, 08:16 PM   #23
cbsboyer
Rage3D Veteran
 
Join Date: Jan 2002
Location: Canada In the golden horseshoe
Posts: 6,518
cbsboyer can beat 'Minesweeper' on any difficultycbsboyer can beat 'Minesweeper' on any difficulty


Default

Quote:
Originally Posted by merry View Post
It's that time of the two year interval again:

http://www.tomshardware.com/news/win...-gpu,6645.html
At least the numbers show how very, very, very poorly suited CPUs are compared to a GPU for video rendering.
__________________
"When you find a big kettle of crazy, it's best not to stir it." - Dilbert's pointy hair boss

"Relationships are like dormant volcanoes, most of the time things are fine or manageable but there's always a chance she blows molten crazy all over you." - ice
cbsboyer is offline   Reply With Quote
Old Dec 4, 2008, 11:40 PM   #24
red_star
Banned
 
Join Date: Aug 2004
Posts: 2,247
red_star is still being judged by the masses


Default

The idea of converging the GPU and CPU didn't come from IBM or Intel or AMD or Nvidia or ATI, but from Commodore/Amiga and their secret 1989 AAA project. They went even a step further and actually had the CPU, GPU and sound chip all together in one package. They were ahead of their time, and the IBM PC concept was and still is an utterly bad concept for a personal computer.
It was supposed to be released in 1990 and they had a working prototype, but it seems the greedy Commodore CEO was doing everything possible to kill the company; heck, I wouldn't even be surprised if IBM got their dirty fingers in there. Oh man, I hate ****ing IBM, because everything that came from them is utter crap. I can't wait for the day that ****ing company dies, because it did so much damage to the IT industry with its crappy products.

Anyway, speaking of sound chips: in 1989 Amiga came up with an 18-bit sound chip... way, way ahead of its time. Just to toss some numbers out: the CPU/GPU package had the power of today's $40 low-end card. Guys, we are talking about 18 years in the past.

The biggest loss IT ever had was when Commodore/Amiga closed their business. Apple and IBM together did not have better sales than Commodore. And the true reason Steve Jobs left Apple was their inability to compete with Commodore/Amiga, not the IBM PC. It's the biggest lie, and for some reason IBM and Apple don't mention Commodore at all, a company that simply outperformed them in every possible way.

Should I mention Amiga OS? Everything else at that time was a ****ing joke compared to the Amiga OS written for the new AAA chip.
red_star is offline   Reply With Quote
Old Feb 3, 2012, 05:22 AM   #25
merry
(unconfirmed)
 
Join Date: May 2002
Location: Romania bucharest.ro
Posts: 56
merry is still being judged by the masses


Default

Warning: this thread is very old... it started almost ten years ago now.

Sort of interesting to follow though:

http://www.pcworld.com/article/24922...thodology.html
merry is offline   Reply With Quote
Old Feb 3, 2012, 10:10 AM   #26
cbsboyer
Rage3D Veteran
 
Join Date: Jan 2002
Location: Canada In the golden horseshoe
Posts: 6,518
cbsboyer can beat 'Minesweeper' on any difficultycbsboyer can beat 'Minesweeper' on any difficulty


Default

Kind of shows how most ideas get recycled, but this kind of turns the idea on its head in a way - AMD's marketing concept is almost a case of a GPU with an integrated CPU. This is a bold play for AMD, and if successful it could redefine a large segment of the market.

This will be excellent for low-power/mobile, mainstream and embedded applications. For enthusiast/performance markets, discrete components will still be the only real option, though, as there will still be bandwidth and power limitations that an integrated solution wouldn't be able to overcome in a cost-effective manner. Imagine trying to put a 125W 256-bit bus part and a 250W 384-bit bus part on one piece of silicon in a single socket while trying to keep the board layer count down so it wouldn't cost as much as a car, plus deal with the power supply and heat dissipation requirements - the engineers would explode.
__________________
"When you find a big kettle of crazy, it's best not to stir it." - Dilbert's pointy hair boss

"Relationships are like dormant volcanoes, most of the time things are fine or manageable but there's always a chance she blows molten crazy all over you." - ice
cbsboyer is offline   Reply With Quote
Old Feb 3, 2012, 12:44 PM   #27
caveman-jim
Retired
 
Join Date: Oct 2003
Posts: 48,680
caveman-jim can recite pi backwardscaveman-jim can recite pi backwardscaveman-jim can recite pi backwardscaveman-jim can recite pi backwardscaveman-jim can recite pi backwardscaveman-jim can recite pi backwardscaveman-jim can recite pi backwards


Default

Quote:
Originally Posted by cbsboyer View Post
Kind of shows how most ideas get recycled, but this kind of turns the idea on its head in a way - AMD's marketing concept is almost a case of a GPU with an integrated CPU. This is a bold play for AMD, and if successful it could redefine a large segment of the market.
You're still thinking in the wrong terms; this is AMD acting as a design company and allowing third-party IP to be inserted into their designs. What this leverages is that companies who currently make custom ASICs or build custom platforms can move to HSA and get the benefits of a common architecture/platform with their custom IP. It massively expands AMD's markets while allowing more people access to broader capabilities.


Quote:
Originally Posted by cbsboyer View Post
This will be excellent for low-power/mobile, mainstream and embedded applications. For enthusiast/performance markets, discrete components will still be the only real option, though, as there will still be bandwidth and power limitations that an integrated solution wouldn't be able to overcome in a cost-effective manner. Imagine trying to put a 125W 256-bit bus part and a 250W 384-bit bus part on one piece of silicon in a single socket while trying to keep the board layer count down so it wouldn't cost as much as a car, plus deal with the power supply and heat dissipation requirements - the engineers would explode.
In form factors like a PC, you're right. However, in a custom design environment where there is no expectation of changing components or upgrading, it's perfectly doable. There are plenty of boxes out there with 1200W TDP designs that sell very well - they're called servers. PC gamers spend $1-2K+ on a bunch of components they put together themselves, or on an OEM prebuilt. The hurdle stopping AMD (or Intel, or NVIDIA) from putting together a box with a 125W CPU and a 250W GPU on a board with adequate cooling is expansion slots and upgradability: people won't accept not being able to swap out parts when the new stuff comes out. It's not an engineering challenge; if you've got sign-off on the form factor and TDP, then you can build that easily. It's harder to get a 125W CPU and a 250W GPU on a single chip, though... not impossible, just hard to manufacture, and hard to price where it's not cheaper to make discrete components (until the system architecture changes).
caveman-jim is offline   Reply With Quote
Old Feb 3, 2012, 02:17 PM   #28
aviphysics
Atari 800 FTW
 
Join Date: Sep 2008
Location: United States Livermore Ca
Posts: 6,873
aviphysics knows why the caged bird singsaviphysics knows why the caged bird singsaviphysics knows why the caged bird singsaviphysics knows why the caged bird singsaviphysics knows why the caged bird sings


Default

Almost sounds like they are moving to a more ARM-like approach, at least in terms of viewing the CPU and GPU as something below the chip level that can be integrated with other companies' IP onto the same chip. I wonder if, in these arrangements, they will be licensing AMD's IP and letting the other company take care of manufacturing.
__________________
THG is to computer hardware what MTV is to music.

Last edited by aviphysics : Feb 3, 2012 at 02:21 PM.
aviphysics is offline   Reply With Quote
Old Feb 5, 2012, 11:57 AM   #29
cbsboyer
Rage3D Veteran
 
Join Date: Jan 2002
Location: Canada In the golden horseshoe
Posts: 6,518
cbsboyer can beat 'Minesweeper' on any difficultycbsboyer can beat 'Minesweeper' on any difficulty


Default

Quote:
Originally Posted by caveman-jim View Post
You're still thinking in the wrong terms; this is AMD acting as a design company and allowing third-party IP to be inserted into their designs. What this leverages is that companies who currently make custom ASICs or build custom platforms can move to HSA and get the benefits of a common architecture/platform with their custom IP. It massively expands AMD's markets while allowing more people access to broader capabilities.




In form factors like a PC, you're right. However, in a custom design environment where there is no expectation of changing components or upgrading, it's perfectly doable. There are plenty of boxes out there with 1200W TDP designs that sell very well - they're called servers. PC gamers spend $1-2K+ on a bunch of components they put together themselves, or on an OEM prebuilt. The hurdle stopping AMD (or Intel, or NVIDIA) from putting together a box with a 125W CPU and a 250W GPU on a board with adequate cooling is expansion slots and upgradability: people won't accept not being able to swap out parts when the new stuff comes out. It's not an engineering challenge; if you've got sign-off on the form factor and TDP, then you can build that easily. It's harder to get a 125W CPU and a 250W GPU on a single chip, though... not impossible, just hard to manufacture, and hard to price where it's not cheaper to make discrete components (until the system architecture changes).
The bolded part was exactly my point - a 1200W server box (I've owned a few) is not the same as a 400W chip. Trying to make a 350-400W TDP processor on a single chip without pricing it out of the consumer market would be very challenging, particularly with the crazy-wide data buses required to keep the monster fed. The chip would be very complex and expensive, the motherboard would be very complex and expensive, and upgrades for new generations would involve throwing out pretty much everything and starting over. This keeps the enthusiast/performance market locked to discrete components rather than a monolithic unit, at least for the foreseeable future.
__________________
"When you find a big kettle of crazy, it's best not to stir it." - Dilbert's pointy hair boss

"Relationships are like dormant volcanoes, most of the time things are fine or manageable but there's always a chance she blows molten crazy all over you." - ice
cbsboyer is offline   Reply With Quote
Old Feb 5, 2012, 08:09 PM   #30
noko
Rage3D Veteran
 
Join Date: Sep 2000
Location: United States Orlando, Florida, USA
Posts: 7,181
noko can beat 'Minesweeper' on any difficultynoko can beat 'Minesweeper' on any difficulty


Default

Fusion is an ongoing process (not sure of the new name; good grief, AMD). What is CPU and what is GPU looks like it will blur into a common structure free from bus slowdowns. I am wondering when AMD will create a new instruction set extending x86/x64 with GPU-like instructions, meaning programming-to-the-metal possibilities. As for memory bandwidth, I still hope to see fiber optics used for memory, with virtually unlimited bandwidth potential. The APU, or whatever it ends up being called, would need a fiber optic receiver/transmitter/decoder to hook up to the rest of the computer, while still having pins for power and other things.
noko is offline   Reply With Quote