How far is far?

Back to mystic's original question, since all the talk since is a little over my moronic head. I think true-to-life rendering is where it will stop. I've always thought that since I was 10 or so and upgraded from Nintendo to Super Nintendo. We've got quite a few years yet to wait for that, though.
 
Matrox:

It's not space that really matters. Do you have any idea how much electricity it would take to power 20 GBs of RAM at 2.5V to 3.3V?

The Quantum Rushmore Ultra 5320 is a solid-state SRAM drive.
It is 3.2GB and costs $28,000 USD EACH. However, it does sport 0.5ms access times.

You can take a look at it here:
http://www.quantum.com/products/ssd/ultra/ultra_overview.htm

The next generation of RAM will probably be DDR-II running at 200MHz, giving us 6.4 GB/s of bandwidth.
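As a sanity check on that figure, peak bandwidth is just clock rate × transfers per clock × bus width. A quick sketch (the 128-bit path is my assumption; a single 64-bit DDR channel at 200MHz only gets you 3.2 GB/s, so 6.4 GB/s implies something like a dual-channel 64-bit bus):

```python
# Back-of-the-envelope peak memory bandwidth. The bus widths below are
# illustrative assumptions, not any specific chipset's datasheet numbers.

def bandwidth_gb_s(clock_mhz, transfers_per_clock, bus_bits):
    """Peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

print(bandwidth_gb_s(200, 2, 64))   # 200MHz DDR, 64-bit bus  -> 3.2
print(bandwidth_gb_s(200, 2, 128))  # 200MHz DDR, 128-bit bus -> 6.4
```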

The whole point of AGP's bandwidth is not for games today, it is for the games and technology of tomorrow.

The bottleneck with graphics cards isn't due to GPU/CPU co-dependency, but with memory bandwidth. And why would you want to share resources with the CPU? That would just slow things down.

Intel tried integrating graphics onto a CPU. It was called Timna. It failed miserably. It was a lot slower, and it cost twice as much as a discrete solution.

Do you even know what a co-processor does?

The GPU does NOT "pull extra resources" for the CPU.

The PS2 has a specialized graphics core that is more advanced than virtually any graphics chip out there. None of the consoles use "PC graphics cards"; they use specialized graphics chips with dedicated resources and P2P resource mapping.

And those games you refer to look so good because the consoles are a known quantity and the games are optimized for that platform. PC game developers have to go for the lowest common denominator and thus design for P133s with 32MB of RAM and an 8MB video card. Read the boxes and that's what you'll find.

I already told you that Intel is designing the replacement for AGP.

And unless we speed up HDDs the rest won't matter.
 
Klingon:

I've thought about the multi-head solution too. But I wonder if since the drives are operating so quickly there might be synchronization errors? I mean, think how hard it would be to write to a drive if you have to write to both halves of a drive at the same time?

And think of the fragmentation!!!

I agree we need new technologies, but as we both know even evolutionary technologies are hard to accept, much less revolutionary ones. And with holographic technology around the corner?

What I'm interested in is better processors in the HDDs and improved ECC, which is NECESSARY for higher areal densities. Already IBM is falling behind Seagate in non-recoverable errors. Simple Reed-Solomon doesn't cut it anymore. I read somewhere that there is an improved Reed-Solomon ECC coming soon that requires fewer resources to run and thus will be more effective.
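Since Reed-Solomon came up: the scheme appends parity symbols computed over GF(2^8), and nsym parity bytes let a full decoder correct up to nsym/2 corrupted bytes. Here's a toy sketch of the encode-and-check half (the correction step is omitted); this is purely illustrative, not how any particular drive's firmware implements its ECC:

```python
# Toy Reed-Solomon encoder over GF(2^8) -- an illustration of the idea
# only. nsym parity bytes let a full decoder correct up to nsym//2
# corrupted bytes per sector.

PRIM = 0x11D  # a common primitive polynomial for GF(2^8)
EXP, LOG = [0] * 512, [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= PRIM
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]  # doubled table avoids a mod-255 on lookups

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_generator(nsym):
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1))."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, EXP[i]])
    return g

def rs_encode(msg, nsym):
    """Systematic encoding: append nsym parity bytes to msg."""
    gen = rs_generator(nsym)
    buf = list(msg) + [0] * nsym
    for i in range(len(msg)):          # synthetic division by g(x)
        c = buf[i]
        if c:
            for j in range(1, len(gen)):
                buf[i + j] ^= gf_mul(gen[j], c)
    return list(msg) + buf[len(msg):]  # remainder = parity bytes

def syndromes(codeword, nsym):
    """All-zero syndromes <=> no detected errors."""
    out = []
    for i in range(nsym):
        s = 0
        for c in codeword:             # Horner evaluation at a^i
            s = gf_mul(s, EXP[i]) ^ c
        out.append(s)
    return out

sector = list(b"example sector payload")
codeword = rs_encode(sector, 8)
print(all(s == 0 for s in syndromes(codeword, 8)))  # True for a clean codeword
```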

I know that IBM and Seagate are using 2.5" platters in their 15k drives to improve seek times and thus access speeds.

That new technology you mention is a kind of holographic medium. There are several companies working on them, including Candescent and Constellation 3D.

I guess if the heads move there would be greater chances of vibration. Not a good thing.

Didn't you know everything posted here becomes Rage3D property? Hahaha...
 
Irock:

But how real does it have to be to become "True-Life Rendering"?

When we get high enough polygon counts? Or 64-bit colour? Or holographic imaging?
 
To put it into context: when I said true-to-life rendering, think of Madden football games. When you can look at the screen and see no difference between the game and the picture of a live football game on TV, that would be true-to-life rendering. Whether that will ever actually happen is an entirely different issue.
 
Eventually that will happen as processors get faster, but what about the rest of the experience? It has to be truly immersive, which means actual 3D and a wraparound view.
 
It's the interface!

It's the interface!

Vigilant writes " . . . It has to be truly immersive, which means actual 3D and a wraparound view. . ."

===

The actual 3D and wraparound view sound feasible with only a few enhancements to existing technology. The problem is going to be the interface.

There's no way that an environment is going to be "immersive" if you have to use a mouse, joystick, keyboard etc. to do something as simple as walking.

I've given thought to ways to get around this in a VR environment, and frankly, it's going to be tough. The best idea I came up with is an interface that has the user wearing high-grip shoes on the surface of a GIANT mounted trackball that measures at least 20 feet across. Somehow I don't think it will sell.

The only other tech that stands a chance of circumventing this obstacle is direct neural connectivity, a la Neuromancer. By plugging one's neural cortex directly into a VR system, it would be possible to intercept the brain signals that control "walking," and route them into a computer where the signals are parsed and translated into graphical signals that are routed directly either to the eyes themselves, or directly to the brain through the same neural cortex connector.

This may sound like a lot of science-fiction mumbo-jumbo, and you'd be right. We're leagues away from this sort of thing. I doubt any of us will see technology remotely like this in our lifetimes, but hardly a year goes by when I don't see progress being made.

I just read recently about the first "cyborg." Yes, it's a little lame... it's a lamprey (a really ugly fish) spinal column that's been interfaced with a computer, and is capable of reacting to light. Not exactly "Terminator," but it's ... progress.

Anyway, the graphics won't be the bottleneck - it's the interface we gotta worry about.

- N
 
Yes, I agree the interface is the issue, and like all interfaces we will have to deal with bandwidth issues.

But I doubt that people will want a direct neural interface, especially not if Microsoft has anything to do with it.

Haha, think of it - Direct X 2050: DirectNeuralLink
 
Vigilant said:
Klingon:

I've thought about the multi-head solution too. But I wonder if since the drives are operating so quickly there might be synchronization errors? I mean, think how hard it would be to write to a drive if you have to write to both halves of a drive at the same time?

And think of the fragmentation!!!

I agree we need new technologies, but as we both know even evolutionary technologies are hard to accept, much less revolutionary ones. And with holographic technology around the corner?

What I'm interested in is better processors in the HDDs and improved ECC, which is NECESSARY for higher areal densities. Already IBM is falling behind Seagate in non-recoverable errors. Simple Reed-Solomon doesn't cut it anymore. I read somewhere that there is an improved Reed-Solomon ECC coming soon that requires fewer resources to run and thus will be more effective.

I know that IBM and Seagate are using 2.5" platters in their 15k drives to improve seek times and thus access speeds.

That new technology you mention is a kind of holographic medium. There are several companies working on them, including Candescent and Constellation 3D.

I guess if the heads move there would be greater chances of vibration. Not a good thing.

Didn't you know everything posted here becomes Rage3D property? Hahaha...
OK, so everything I write here is now pre-copyrighted... ;)

Why would there be any synchronization errors? I’m sure you know how things work for a one-head solution, so just make it work similarly for two heads. Just provide the appropriate controlling components and you’re done.

For the benefit of those who don’t know the details, the r/w heads are capable of constantly reading the data that’s passing under them as the platters spin. Sectors contain more than just the 512 bytes of user information. They contain several fields, some of which help with identifying the sector and with error detection and correction. When a sector has to be accessed, the head assembly is first moved to the proper cylinder. Then the proper head monitors what information is passing by on that track. Eventually the required sector information is read, identifying the desired sector, and the read or write operation can then take place. Double the spindle speed and reduce the drive size, and you might need a faster ASIC in a one-head solution anyway to do that reliably.

Note that while one operation is taking place, only one head is used, but the drive moves all the other heads needlessly, possibly away from where they will be needed next. That’s not very efficient.

Now if you have two heads per cylinder (at 180 degrees from each other), you just do what I’ve described above, independently for the two heads. You only need to design an ASIC that is marginally smarter and faster. I really don’t see the synchronization problem inside the drive.
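For what it’s worth, the rotational-latency payoff of the 180-degree idea is easy to put numbers on, assuming the drive can pick whichever head reaches the target sector first (my assumption, not a shipping design):

```python
# Average rotational latency with k heads spaced evenly around a track.
# With one head you wait half a revolution on average; a second head at
# 180 degrees halves that again.

def avg_rotational_latency_ms(rpm, heads_per_track=1):
    revolution_ms = 60_000 / rpm
    return revolution_ms / heads_per_track / 2

print(avg_rotational_latency_ms(15_000, 1))  # -> 2.0 ms
print(avg_rotational_latency_ms(15_000, 2))  # -> 1.0 ms: 30k-rpm latency at 15k rpm
```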

As for fragmentation, that’s an interesting question. I don’t pretend I’m an electronics engineer, and I don’t pretend I have an answer to every question. I’m only suggesting new ways to improve the performance of hard drives, ways that the industry doesn’t seem to have considered yet, but rather has intentionally avoided.

There may be very “good” reasons why. The original design of hard drives was somewhat dumb from the start, based on apparently unwritten postulates: there can be only one head per track; each head can only read/write on one track at a time; and all heads must always seek a common cylinder. Because of these brain-dead hardware designs, software engineers have designed brain-dead file systems and drivers (and likewise for firmware). So, improving drives by throwing away the postulates is likely to require a significant rethinking of how drives are accessed, both in the drive’s firmware itself and in the layers above it. Instead, manufacturers are clearly doing their best to avoid that, which is why we only see the incremental improvements we all know about.

It’s not that these improvements are stupid or useless. There comes a time when it becomes increasingly harder to improve something significantly. I guess it will become very hard to spin drives beyond 15,000 rpm, so transfer rates will have to increase through higher areal densities. Higher areal densities require increasingly precise head positioning, so at some point it might become harder to get higher capacities without compromising seek times with current designs.
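To make the rpm-versus-density point concrete, the sustained media rate is just sectors per track × sector size × revolutions per second, so doubling the linear density buys roughly as much streaming throughput as doubling the spindle speed. A sketch with invented sector counts (the geometries below are illustrative, not a real drive’s):

```python
# Sustained media transfer rate from drive geometry. The sector counts
# here are made up for illustration.

def media_rate_mb_s(rpm, sectors_per_track, sector_bytes=512):
    revs_per_sec = rpm / 60
    return sectors_per_track * sector_bytes * revs_per_sec / 1e6

print(media_rate_mb_s(7_200, 600))    # -> 36.864 MB/s
print(media_rate_mb_s(15_000, 600))   # -> 76.8 MB/s (faster spindle)
print(media_rate_mb_s(7_200, 1_200))  # -> 73.728 MB/s (denser tracks)
```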

I’m suggesting three improvements: multi-head, as you call it (heads at 180 degrees); a head that can access more than one track without having to physically move; and (new) letting each head move independently of all the others. (A fourth improvement would be to mandate transparent drive enclosures so that we can watch the thing while it does its magic!) Why not have larger sectors too...

Yes, that has the potential for extreme fragmentation. But you’ll have to agree that a drive that has the potential to create tons of fragments all over the place at incredibly fast rates also has the potential to deal with them much better than current designs. As I said, I’m not an engineer, but I believe this problem can be overcome at least in part, and the residual fragmentation (if it still matters at all) might not cause much of a problem. It depends on the new firmware, drivers and file systems that we decide to invent and implement for drives that have these new capabilities. Drives based on my ideas would probably not benefit much from a FAT or FAT32 type of file system on current operating systems. More sophisticated existing file systems, such as NTFS, might see better improvements.

Imagine a planet where normal people have only one leg, and imagine yourself getting there and trying to explain to them the notions of walking, running and dancing while you are in transit. They might have some difficulty understanding the ideas, but once you have landed they can appreciate the advantages of having two legs when, at last, they see you doing these things. Unfortunately, that is something they will never be able to experience for themselves, so the explanation alone could not help them. Current drives are one-legged at best, and I am trying to give the basic design more degrees of freedom, enabling the hardware to do many things at the same time.

Removing the current restrictions (the above postulates) makes it possible to have larger numbers of platters without any inertia problems on the r/w heads because they move independently of each other.

Back to fragmentation: It is an important concern because of the current design. Change it and you need to redefine the problems it creates and the ways to overcome them, or minimize their importance by design. I can imagine near-future workstations with 2 or 4 processors, sharing a single hard drive, becoming more common place. Drives based on my improvements, if done properly, should perform much better at 7200 rpm under heavy load in multi-threaded, multi-processor environments than current designs could at 30,000 rpm. More so with more platters (very large drives), or with increasingly random access patterns.

I don’t believe the 1-terabyte media I was referring to has anything to do with holography.

Finally, about heads creating vibrations when they move: sure, it could probably happen if you go too far... As tracks get ever closer to each other, you might not need a very large head to cover several tracks. Current heads already do, actually. My point was to find a way to fit more “r/w sub-heads” on a head that would remain about the same size...

Regards, ../K
 