
Farewell to DirectX?



    AMD's head of GPU developer relations, Richard Huddy, says games developers are asking AMD to 'make the API go away'

    Despite what delusional forum chimps might tell you, we all know that the graphics hardware inside today's consoles looks like a meek albino gerbil compared with the healthy tiger you can get in a PC. Compare the GeForce GTX 580's count of 512 stream processors with the weedy 48 units found in the Xbox 360's Xenos GPU, not to mention the ageing GeForce 7-series architecture found inside the PS3.

    It seems pretty amazing, then, that while PC games often look better than their console equivalents, they still don't beat console graphics into the ground. A part of this is undoubtedly down to the fact that many games are primarily developed for consoles and then ported over to the PC. However, according to AMD, this could potentially change if PC games developers were able to program PC hardware directly at a low-level, rather than having to go through an API, such as DirectX.

    (please visit the source link for more information)...


    Source: bit-tech

    #2
    I didn't read the whole article, just the first page and a half.

    Still, this seems like nonsense to me. Developers can go for broke on consoles because the hardware doesn't change. It remains a constant standard.

    On the PC, that is not the case at all. Hardware is changing constantly. So DirectX is supposed to be the layer between the game and the hardware, where it is the constant standard. This gives the game developers something to program against. It's a contract.

    The hardware people are responsible for first creating the hardware, and then writing drivers that go between the hardware and DirectX (or any other standardized constant API).

    So if you take DirectX out of the picture, then each developer is responsible for creating a game that can work on multiple interfaces. One for AMD, one for NVidia, another for Intel, etc. And that's only if they interface directly with the device drivers, treating them as a standard. It could be argued that they could skip those as well and interface directly with the GPU itself for added performance. But I just can't see any developer wanting to support all the different GPUs on the market directly in their game without a standard interface to aid them. Would it give them more power? Yes, probably. But development would be costly and lengthy.
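    Roughly what I'm picturing, as a made-up C++ sketch (none of these vendor interfaces actually exist; they just stand in for "talk to each vendor's driver or GPU directly"):

    Code:
    // Hypothetical sketch only: what a game might look like without a common API.
    #include <memory>
    #include <stdexcept>
    #include <string>

    // The "contract" DirectX currently provides: one interface the game codes
    // against, and every driver promises to implement it.
    struct IRenderer {
        virtual ~IRenderer() = default;
        virtual void DrawTriangles(const float* vertices, int count) = 0;
    };

    // Without that contract, someone has to write (and maintain) one of these per
    // vendor, and per architecture that breaks compatibility.
    struct AmdRenderer    : IRenderer { void DrawTriangles(const float*, int) override { /* AMD-specific path */ } };
    struct NvidiaRenderer : IRenderer { void DrawTriangles(const float*, int) override { /* NVIDIA-specific path */ } };
    struct IntelRenderer  : IRenderer { void DrawTriangles(const float*, int) override { /* Intel-specific path */ } };

    // The game ships with a fixed list of back-ends; a GPU released after the game
    // simply isn't on the list.
    std::unique_ptr<IRenderer> CreateRenderer(const std::string& vendor) {
        if (vendor == "AMD")    return std::make_unique<AmdRenderer>();
        if (vendor == "NVIDIA") return std::make_unique<NvidiaRenderer>();
        if (vendor == "Intel")  return std::make_unique<IntelRenderer>();
        throw std::runtime_error("No back-end for this GPU: " + vendor);
    }

    That last throw is basically my worry about new cards and old games.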

    Am I thinking about this wrong? It just doesn't seem plausible. I mean, could you imagine buying a new video card and not being able to play old games on it because the game was written before the card existed?

    Comment


      #3
      God help us all if we have to go back to the DOS days...
      "Curiosity is the very basis of education and if you tell me that curiosity killed the cat, I say only that the cat died nobly." - Arnold Edinborough


      Comment


        #4
        Originally posted by Ristogod View Post
        One for AMD, one for NVidia, another for Intel, etc. And that's only if they interface directly with the device drivers, treating them as a standard.
        There's an easy solution... everyone just uses AMD.

        Why else do companies ever make up proprietary BS?

        Comment


          #5
          Originally posted by Ristogod View Post
          I didn't read the whole article, just the first page and a half.

          Still, this seems like nonsense to me. Developers can go for broke on consoles because the hardware doesn't change. It remains a constant standard.

          On the PC, that is not the case at all. Hardware is changing constantly. So DirectX is supposed to be the layer between the game and the hardware, where it is the constant standard. This gives the game developers something to program against. It's a contract.

          The hardware people are responsible for first creating the hardware, and then writing drivers that go between the hardware and DirectX (or any other standardized constant API).

          So if you take DirectX out of the picture, then each developer is responsible for creating a game that can work on multiple interfaces. One for AMD, one for NVidia, another for Intel, etc. And that's only if they interface directly with the device drivers, treating them as a standard. It could be argued that they could skip those as well and interface directly with the GPU itself for added performance. But I just can't see any developer wanting to support all the different GPUs on the market directly in their game without a standard interface to aid them. Would it give them more power? Yes, probably. But development would be costly and lengthy.

          Am I thinking about this wrong? It just doesn't seem plausible. I mean, could you imagine buying a new video card and not being able to play old games on it because the game was written before the card existed?
          I agree; on a multi-configurable platform with different hardware capabilities, one would have to write optimized low-level code for each configuration. This may work on a console, that is, if there are a great number of a given type. If you go the compiler route for the different hardware, then you are right back at the DirectX level, where the driver compiles the code for the given GPU anyway. Allowing low-level access seems like one choice, but then you end up with games optimized for one piece of hardware over another. It looks like draw calls are what's holding up DirectX; still - fix that and allow some more lower-level stuff, which I thought DirectCompute was supposed to do.
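          Just to put the draw-call point in concrete terms, here is a rough D3D11-style sketch (not a complete or tested program, just the shape of it): thousands of tiny draw calls each pay the API/driver overhead, while one instanced call submits the same work in a single request.

          Code:
          // Rough sketch of draw-call overhead, D3D11-style.
          #include <d3d11.h>

          // Naive: one draw call per tree, so the per-call overhead is paid 10,000 times.
          void DrawForestNaive(ID3D11DeviceContext* ctx, UINT indexCount)
          {
              for (int i = 0; i < 10000; ++i)
              {
                  // ...update a per-object constant buffer here...
                  ctx->DrawIndexed(indexCount, 0, 0);
              }
          }

          // Batched: one instanced call submits all 10,000 trees.
          // (Per-instance data would come from a second vertex buffer; omitted here.)
          void DrawForestInstanced(ID3D11DeviceContext* ctx, UINT indexCount)
          {
              ctx->DrawIndexedInstanced(indexCount, 10000, 0, 0, 0);
          }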

          Is this AMD's version of trying to corner the PC market with Fusion, like Nvidia with CUDA? Basically having 100 million Fusion processors out there rendering, where game developers say, "Hey man, let's do something unique, that market has more potential than the consoles!"
          Last edited by noko; Mar 19, 2011, 12:11 AM.
          Ryzen 1700x 3.9ghz, Thermaltake Water 2.0 Pro, Asus CrossHair 6 Hero 9, 16gb DDR4 3200 @ 3466, EVGA 1080 Ti, 950w PC pwr & cooling PS, 1TB NVMe Intel SSD M2 Drive + 256mb Mushkin SSD + 512gb Samsung 850evo M.2 in enclosure for Sata III and 2x 1tb WD SATA III, 34" Dell U3415W IPS + 27" IPS YHAMAKASI Catleap. Win10 Pro

          Custom SFF built case, I7 6700k OC 4.4ghz, PowerColor R9 Nano,, 1TB NVMe Intel SSD M2 Drive, 16gb DDR 4 3000 Corsair LPX, LG 27" 4K IPS FreeSync 10bit monitor, Win 10

          Comment


            #6
            This news was already posted.

            Comment


              #7
              Personally I think the end of the API would be a bad thing for consumers. The idea of Direct X from a consumer point of view is that if I buy a game I know it will work with my hardware no matter who made it.

              Now, we still have some issues with this today thanks to the various dev programs introduced by the hardware companies, but overall it's pretty much: you buy it and it works.

              If we go away from a standardized API we will stratify the industry and thus the consumer base as well.

              In fact, to hear this kind of stuff discussed by someone at AMD surprises me. The reason I say that is that in the realm of GPU computing nVidia has a HUGE lead right now. AMD has put its eggs in the open market basket, and this approach will not let that strategy work very well.
              Edward Crisler
              SAPPHIRE NA PR Representative

              #SapphireNation

              Comment


                #8
                Agreed, this is bull****. I wonder who these game devs are and what they really said. I can't see anyone wanting to program everything directly to the hardware. Sure, a bit of "assembly language" code here and there could be nice to speed things up, but nobody is mad enough to program a full game in assembly language, especially one geared towards specific hardware.

                As someone said on another forum, the reason games look the same is that there's a small number of AAA engines which are used for many games. I'd add that devs outside the AAA engines simply don't have the ability to create new visuals, since these are complex to do, and it's the fact that AAA engines are sold for use in multiple games which allows them to be created in the first place. Lastly, games look the same because most aim for a certain realism. Adding power won't create more differentiation. (And of course, games don't really look the same.)

                Comment


                  #9
                  We are returning to the 3dfx and Glide era again.

                  Comment


                    #10
                    Originally posted by Sound_Card View Post
                    This news was already posted.

                    Comment


                      #11
                      Thought about it some more, and the reason I think this will not work is that it's AMD doing it. NVIDIA is very good at pushing proprietary interfaces: CUDA, PhysX, Cg, you name it, NVIDIA has done it, and kept doing it until it got some acceptance. AMD never managed that. ATI Stream, proprietary tessellation, all the way back to the Rage Pro proprietary SDK - all were failures, with AMD/ATI seemingly doing its best to hide them from developers. Using a proprietary AMD extension would mean using something that's likely to die before the game is even out.

                      Still, I'm sure this could give NVIDIA an idea, and it'd pull it off, and have another proprietary technology to leverage against AMD.

                      Comment


                        #12
                        AMD are upset because their latest and greatest hardware doesn't do much more than the 4870 from 3 years ago. They are finally realising that the endless stream of console ports isn't taking advantage of current hardware, so why buy new hardware?

                        Regarding the API, the problem is that a lot of developers put as little effort as possible into the PC version of a multiplatform title, so why would getting rid of the API be a good thing? It just gives them more work for the PC version! Sure, indie developers or maybe 1 or 2 major PC devs might be able to get more out of the hardware without the API making the calls, but how many dedicated PC devs are left who are willing to do all that extra work? Not many, I bet.
                        Why doesn't batman dance anymore?

                        Comment


                          #13
                          Originally posted by ET View Post
                          Thought about it some more, and the reason I think this will not work is that it's AMD doing it. NVIDIA is very good at pushing proprietary interfaces: CUDA, PhysX, Cg, you name it, NVIDIA has done it, and kept doing it until it got some acceptance. AMD never managed that. ATI Stream, proprietary tessellation, all the way back to the Rage Pro proprietary SDK - all were failures, with AMD/ATI seemingly doing its best to hide them from developers. Using a proprietary AMD extension would mean using something that's likely to die before the game is even out.

                          Still, I'm sure this could give NVIDIA an idea, and it'd pull it off, and have another proprietary technology to leverage against AMD.
                          Nothing really proprietary about this. They are not going to replace D3D with their own API. They want no API. That means the developer would have to program CTM (close to the metal). All AMD would need is a light low-level compiler to stick in between. It's a risky move, because the PC is an afterthought for developers these days. It's known as the "quickie cash" platform. So I don't think many developers are going to jump on programming CTM on the PC when they already do on consoles. Then again, on the other hand, this is probably the only way to make the PC platform truly take off and grow again. Separate itself from the consoles and get unique, original games. A truly separate experience.

                          But I honestly don't think it would be that hard to, say, take two vendor platforms with the highest common denominator and code CTM on both, then let both vendors (NV/AMD) throw their OWN compiler in between. The hard part would be in AMD's and NV's hands: making a compiler that can virtualize this common-denominator platform and scale it up or down. Preferably up. Not that much more trouble for the developers, to be honest.

                          So I don't see anything proprietary here. As far as your assessment goes, it's slightly flawed in the sense that NV has a rather long list of failures as well. The only semi-successful thing they have is CUDA, which in due time will fall to OpenCL.

                          Comment


                            #14
                            Read the already posted thread for info on why this is a bad idea.

                            Comment


                              #15
                              I think that "only" standards do slow down innovation and also need competition from technology players to help speed up innovation. By creating more awareness places pressure on Microsoft to improve things and try to move quicker and more efficiently for the developers that may have the resources to do more for the PC platform.

                              Innovation doesn't wait for anything.
                              Really enjoy 3d gaming flexibility; a gamer's best friend!

                              Comment


                                #16
                                IMHO, this is nothing more than an attempt to push the stock price up. DirectX is there for a reason. With the next generation of consoles, PC gaming will become irrelevant anyway.

                                Comment


                                  #17
                                  Imho,

                                  To me, AMD is trying to make a case for the importance of heterogeneous low-latency computing and what it can do for the PC platform. The future is Fusion.

                                  That's how I see this message.

                                  Comment


                                    #18
                                    Originally posted by SIrPauly View Post
                                    Imho,

                                    To me, AMD is trying to make a case for the importance of heterogeneous low-latency computing and what it can do for the PC platform. The future is Fusion.

                                    That's how I see this message.
                                    Good insight there. I do believe AMD is expecting, or doing their best, to have Fusion in at least one of the next-generation consoles, and I believe that will happen, if not in two of them. Once that is in place, games made for those consoles could be ported to PCs with Fusion or AMD GPUs using the low-level code path. Developers are using low-level code now for the consoles, but to put a game on the PC they have to go the DX route, with all its limitations. It seems AMD, who was successful with x64, implementing it and getting Intel to finally go along with it, may be doing something similar here. Fusion can have an instruction set and low-level programming model, like x64, inherent to it beyond DX. Fusion APUs, as long as AMD can make enough of them and keep developing them, will probably become the dominant gaming hardware platform on the PC, as in the largest base of gaming-capable PCs will be Fusion based.

                                    I think one of the major mistakes Microsoft made with the 360 was that it was not DX10 based or fully DX10 capable, which stagnated or hurt PC game and DX advancement, on top of the lousy sales of Vista itself.

                                    As for next generation of consoles killing off PC gaming - .
                                    Last edited by noko; Mar 19, 2011, 11:42 PM.

                                    Comment


                                      #19
                                      *puts on Professional Graphics Programmer hat*

                                      So, yeah, read the thread Jim linked above for the majority of my thoughts on this. (Although I'll probably re-cover a few points here.)

                                      Secondly: Fusion will NOT replace GPUs. As much as I love Fusion, the discrete GPU simply has too much power for us to dump it for graphics; from memory bandwidth to ALU counts it will just outclass a Fusion system for a few years yet.

                                      This means that even with Fusion the DX problem for graphics cards is going to exist; and by 'problem' I mean not that the API exists but that it has a bit too much overhead. And this isn't going to go away; as long as any given PC can have any combination of hardware, you WILL need an API and HAL layer to present a unified interface.

                                      The only reason we can go 'low level' on consoles is because the hardware is fixed; although it generally takes a few years before this starts to happen, which is a few years more than a GPU arch generally stays current. (For added amusement, consider that AMD put this idea forward, yet HD5 to HD6 breaks backwards compatibility and thus would have created more work.) Even then this 'low level'-ness tends to only be a case of constructing command lists and pointing the GPU at them using, you guessed it, an API, aka DX on the 360.

                                      DX itself isn't the limitation; the limitation is the fact that you can't know what hardware you are programming against. If we assume that at any given time both NV and AMD have 4 generations in the field, that is 8 combinations of hardware with different characteristics, all of which need to be coded against. Then AMD release a new card and, unless the hardware and specs have been in developers' hands for a few months, the low-level code isn't going to work and suddenly your shiny new card is... well... useless.
                                      (I'd also like to point out that right now, even with a common interface, games have issues. Just yesterday while playing Dragon Age 2 my card reset and killed the game and wouldn't run it again until I had reset; this is using a driver from a company which knows the hardware).

                                      Finally, on this subject, I feel it's a bit rich for AMD to complain that multi-threading isn't helping when they haven't bothered to implement multi-threaded command lists in their drivers yet (NV are a bit further along, but basically demand a core to do the work)... and guess what: on the consoles we can construct command lists on multiple threads. gg AMD, gg.
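                                      For anyone wondering what 'multi-threaded command lists' look like on the PC side, this is roughly the D3D11 shape of it (a sketch only; device creation, error handling and the actual draw code are left out):

                                      Code:
                                      // Sketch: record a command list on a worker thread via a deferred context,
                                      // then play it back on the immediate context from the main thread.
                                      #include <d3d11.h>

                                      ID3D11CommandList* RecordOnWorkerThread(ID3D11Device* device)
                                      {
                                          ID3D11DeviceContext* deferred = nullptr;
                                          device->CreateDeferredContext(0, &deferred);

                                          // ... set state and issue draw calls on 'deferred' from this thread ...

                                          ID3D11CommandList* commandList = nullptr;
                                          deferred->FinishCommandList(FALSE, &commandList);
                                          deferred->Release();
                                          return commandList;
                                      }

                                      void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
                                      {
                                          immediate->ExecuteCommandList(commandList, FALSE);
                                          commandList->Release();
                                      }

                                      Whether that actually buys you anything today depends entirely on how much of it the driver runs in parallel, which is exactly the complaint.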

                                      (Side note: while you might well complain about MS and the 360 not being DX10 or 'fully capable', I'm just going to point out that the GPU in the 360 is leaps and bounds better than the PS3's; if anything, of the two consoles the PS3 makes our lives much harder from a graphics point of view, as it's just utterly underpowered.)

                                      Oh, and as for 'Fusion in the consoles', which is basically 'x86/x64 in the consoles': DO.NOT.WANT.
                                      x86/x64 is frankly anemic when compared to the current PPC chips, as it has hardly any registers to work with.
                                      PPC, on the other hand, lacks out-of-order execution and decent branch prediction, but that is much easier to work around and add support for, I'm sure.

                                      Comment


                                        #20
                                        If the HAL could become the performance API, with a standardized hardware feature set and DirectX sitting on top, it would probably help. Keep DX for existing apps and apps that don't need a lot of optimization (basically as a wrapper), and have performance-sensitive games and apps access a thin, standardized HAL directly. If the driver and HAL layers were kept as close as possible to a standard model that the hardware has to follow, then aside from performance differences between vendors and architectures, things would be much simpler (i.e. x86 is x86, but the performance between generations and vendors may vary).
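                                        Purely hypothetical, but something like this is the shape I have in mind for the thin HAL; none of these types exist anywhere, the point is just a small fixed contract every vendor implements, with DirectX layered on top as the convenience wrapper:

                                        Code:
                                        // Hypothetical thin-HAL sketch; invented names, not a real API.
                                        #include <cstdint>
                                        #include <cstddef>

                                        struct HalBuffer;   // opaque handles owned by the driver
                                        struct HalPipeline;

                                        struct HalDevice {
                                            // Fixed feature set: every compliant card implements exactly these entry
                                            // points. Vendors compete on how fast they run them, not on extra extensions.
                                            virtual HalBuffer*   CreateBuffer(std::size_t bytes, const void* initialData) = 0;
                                            virtual HalPipeline* CreatePipeline(const std::uint32_t* bytecode, std::size_t words) = 0;
                                            virtual void         Submit(const std::uint32_t* commandStream, std::size_t words) = 0;
                                            virtual void         Present() = 0;
                                            virtual ~HalDevice() = default;
                                        };

                                        // Performance-sensitive engines talk to HalDevice directly; everything else
                                        // keeps using DirectX, which would itself sit on top of HalDevice as a wrapper.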

                                        Way back in the days of DOS and DOS gaming, you could buy a lot of different more or less 100% VGA/VESA SVGA compatible video cards with a lot of different chipsets, and depending on the vendor the performance could vary a lot with the same code, but the same software would work on any of them without recompiling. If you had the same hardware/driver feature set for "HAL Standard A", you could have a lot of different performance levels on compatible hardware, but not actually break any code vendor to vendor or generation to generation. We would just have to do away with the seemingly mandatory vendor specific "Cappuccino foam simulation acceleration, only available on {chipset x}". Pick a hardware feature set and stick to it with no extensions that some support and others don't. You want to make your card better? Make it do the standard feature set better, be it more instances/streams, faster or both.
                                        "When you find a big kettle of crazy, it's best not to stir it." - Dilbert's pointy hair boss

                                        "Relationships are like dormant volcanoes, most of the time things are fine or manageable but there's always a chance she blows molten crazy all over you." - ice

                                        Comment


                                          #21
                                          Originally posted by Sound_Card View Post
                                          As far as your assessment goes, it's slightly flawed in the sense that NV has a rather long list of failures as well. The only semi-successful thing they have is CUDA, which in due time will fall to OpenCL.
                                          I agree that NVIDIA's track record isn't perfect, but it's much better than anything AMD has ever done. NVIDIA's developer site was and is better than ATI/AMD's, NVIDIA pushes its technologies a lot more, NVIDIA makes them available to the public (indie developers) more easily, and NVIDIA supports them over a longer period of time, usually with better tools which are more regularly updated.

                                          AMD's way of operating is more: here's something cool, it's minimally documented and hard to find on the site, but there you go. In some ways AMD is a lot more open than NVIDIA, and it has decent support for open source and standards, but proprietary technologies are exactly where it fails.

                                          If you think that CUDA will lose to OpenCL, then you have to also think that any proprietary AMD technology will lose to DirectX.

                                          Comment


                                            #22
                                            By the end of the article, it felt to me that it was bitching more about DX not moving fast enough, and then suggesting the idea of APIs custom to each hardware vendor (like Glide from 3dfx and Metal from S3).

                                            Imho, DirectX is moving rather slowly. But that doesn't surprise me, as the company behind it has a conflict of interest.
                                            -Trunks0
                                            not speaking for all and if I am wrong I never said it.
                                            (plz note that is meant as a joke)


                                            System:
                                            Asus TUF Gaming X570-Pro - AMD Ryzen 7 5800x - Noctua NH-D15S chromax.Black - 32gb of G.Skill Trident Z NEO - Asus DRW-24F1ST DVD±RW - Samsung 850 Evo 250Gib - 4TiB Seagate - PowerColor RedDevil Radeon RX 7900XTX - Creative AE-5 Plus - Windows 10 64-bit

                                            Comment


                                              #23
                                              Fusion may not replace a GPU, but it could make a rather nice package for a console. As for bandwidth limitations, there are options: a fast memory pool, wider than now, with a secondary pool, or a larger amount of slower memory. In addition, programmable firmware could be added to an APU so it could be updated for new instructions, which the APU could load at boot time. It may not be optimized for a new instruction set, but it doesn't necessarily have to break either if used down the road. Remember: "The Future Is Fusion".

                                              Comment


                                                #24
                                                Richard Huddy chimed in at Beyond3d:


                                                Comment


                                                  #25
                                                  I don't get it... the 360 and PS3 both use APIs to get their work done. I don't know how it is on the PS3, but Microsoft (supposedly) explicitly forbids developers from bypassing the API on the 360. And if it's a DirectX issue, why not look to OpenGL? Even Apple (who have complete control over their hardware) is still reliant on APIs in their OSes, so I don't really see how it's feasible in Windows' space.

                                                  I've always found the whole argument of '10x more powerful GPUs in PCs' a bit flawed. The consoles now drop their rendering resolutions as far as 960x540, which means you would need an almost 5x more powerful GPU just to render the same frame at 1920x1200. Throw in stereoscopic 3D and you need twice that again. It might be a simplified equation, but I still find it scales somewhat linearly.
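                                                  The back-of-the-envelope numbers behind that, for anyone who wants to check (plain C++ just for the arithmetic):

                                                  Code:
                                                  // Pixel counts behind the "almost 5x" estimate.
                                                  #include <cstdio>

                                                  int main()
                                                  {
                                                      const double consolePixels = 960.0 * 540.0;    //   518,400
                                                      const double pcPixels      = 1920.0 * 1200.0;  // 2,304,000
                                                      std::printf("scale factor:   %.2fx\n", pcPixels / consolePixels);        // ~4.44x
                                                      std::printf("with stereo 3D: %.2fx\n", 2.0 * pcPixels / consolePixels);  // ~8.89x
                                                      return 0;
                                                  }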

                                                  And wasn't DX10 supposed to remove all those bottlenecks?

                                                  Comment
