
Power consumption 'review' and chart for Video cards...


    Power consumption 'review' and chart for Video cards...

    No 2900xt or 8800Ultra though...

    We focus not only on the 3D performance of a new graphics card, but also on its power consumption. Today we have about eight cards for this comparison, but it is a pity that we don't have NVIDIA's and AMD's latest flagships: the 8800 Ultra and HD 2900XT.

    Remarks:

    Except for AMD's reference HD 2600XT, all of the cards are retail products. They all run at their default frequencies with their stock coolers. We use Seasonic's Power Angel to measure the actual power consumption of the whole platform. Because all of the cards were tested on the same platform, we believe the data we collected is trustworthy.

    We obtained the results in two ways:

    1. Idle in Vista for 15 minutes.

    2. Using ATITool's "show 3D view" to simulate the graphics card's full load.
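
    Since the review reports whole-platform draw at the wall, any per-card comparison boils down to subtracting readings taken on the same baseline system. Here is a minimal bookkeeping sketch of that idea in Python; the card names and wattages are placeholders for illustration, not the review's figures.

    Code:
# Sketch: compare cards measured as whole-platform wall draw.
# All wattages below are placeholder values, not the review's data.
readings = {
    # card: (idle watts, load watts) for the entire test platform
    "8800GT": (145, 231),
    "HD3870": (144, 233),
}

for card, (idle_w, load_w) in readings.items():
    delta = load_w - idle_w  # extra draw while ATITool's 3D view runs
    print(f"{card}: idle {idle_w} W, load {load_w} W, load minus idle {delta} W")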





    http://en.expreview.com/?p=72

    #2
    jeez... the gtx would tank my old power supply just by itself

    i guess being a gamer is definitely not an eco-friendly lifestyle...



      #3
      Impressive. Slightly less consumption than the GTS at load yet performance somewhere betwixt the GTX and the Ultra. So how is big N going to move all that G80 stock if they keep prices where they are?
      Last edited by eatyour1337ies; Dec 2, 2007, 11:53 PM.


        #4
        Originally posted by Ryoko View Post
        jeez... the gtx would tank my old power supply just by itself

        i guess being a gamer is definitely not an eco-friendly lifestyle...
        The 8800GTX is a very, very power-hungry card, yet for a long time it was the best in performance per watt ((kinda crazy as technology evolves)). But imagine running two of them in SLI... or maybe an even higher configuration?

        That's the beauty of process shrinks, and it's why I love the 8800GT so much. While the 8800GTX is more performant, the 8800GT comes within shouting distance at much lower power usage. I can get nearly 8800GTX SLI performance from two 8800GTs in SLI with a 550 watt Enermax PSU.

        Chris


          #5
          Originally posted by ChrisRay View Post
          That's the beauty of process shrinks, and it's why I love the 8800GT so much. While the 8800GTX is more performant, the 8800GT comes within shouting distance at much lower power usage. I can get nearly 8800GTX SLI performance from two 8800GTs in SLI with a 550 watt Enermax PSU.

          Chris
          Hey, Chris, I just got my kid an 8800 GT and it's clocked to 700 core / 1000 RAM. Shouldn't that card beat my GTX in most games @ 1600x1200? I'm just curious since you've mentioned that the GTX would win in some scenarios. Can you point me to some numbers?

          Thanks!!



            #6
            2. Using ATITool's "show 3D view" to simulate the graphics card's full load.
            Is this some sort of joke?

            When using ATITool's 3D view, the GeForce 8800GT and Radeon HD 3870 are very close; the 8800GT is even 2 watts lower than the HD 3870. Comparing the load figures of the new G92-based 8800GTS with the old G80-based 8800GTS also shows only a 2 watt difference, but in idle mode there is a 20 watt gap between them. I guess the new 65nm process used to shrink the GPU core only affects idle power consumption.
            lol? This is a joke.



              #7
              Originally posted by acroig View Post
              Hey, Chris, I just got my kid an 8800 GT and it's clocked to 700 core / 1000 RAM. Shouldn't that card beat my GTX in most games @ 1600x1200? I'm just curious since you've mentioned that the GTX would win in some scenarios. Can you point me to some numbers?

              Thanks!!


              It really depends on your settings..


                #8
                Originally posted by Sound_Card View Post
                Is this some sort of joke?
                ehh... dunno. I don't know exactly what it's doing, but I do know it heats my core up just as high as, if not higher than, any game I own. It should be stressing the core and memory. I think that's because I get artifacts when I push my core too high, and other artifacts when my memory goes too high as well. So it's doing something. How, though, I don't know.

                Where is Ray Adams when you need him?



                lol? This is a joke.
                That theory of theirs... it is a silly statement, have to agree there.



                  #9
                  Originally posted by ChrisRay View Post
                  It really depends on your settings..
                  OK, 4xAA / 16xAF @ 1600x1200. Which card would get the best fps? My oc'd GTX or the GT @ 700/1728/2000?



                    #10
                    Probably a little faster, but it's going to depend on the game. For instance, FEAR seems to prefer the 8800GTX design due to its higher bandwidth/stencil shadowing performance, but Bioshock will probably do better on the 8800GT ((at those clocks)).

                    Your mileage will vary, but I'd say they're close to equivalent. As for linked numbers, no, I don't have any, as that configuration isn't widely available or benchmarked for reference. I could probably attempt to make some if I can keep my 8800GTs stable at those settings long enough to do some real testing. The settings where the 8800GTX is going to be undisputed are high-memory-utilization scenarios such as 8xQ, 16xQ, 16xS, 32xS and high resolutions. Below is an example of Bioshock on my system using dual 8800GTX versus 8800GT, where 16xQ causes stuttering and a larger FPS deficit due to running out of memory.

                    Bioshock 1680x1050



                    Performance Thoughts: Bioshock does extremely well on the 8800GT with 16xAA enabled. You can definitely say that this will be the preferred setting for DirectX 9.0 users. Unfortunately, under 16xQ I ran into some framebuffer limitations on the 8800GT SLI configuration, which caused some hitching and prevented performance from being optimal.
                    From my 8800GT SLI preview.

                    http://forums.slizone.com/index.php?showtopic=9798

                    But as I said in another thread, if you use 4xAA/8xCSAA/16xCSAA it's almost impossible to tell the difference between the 8800GTX cards and my 8800GT ones, unless you're actually looking for something to point out. But as games progress and memory requirements increase, you'll find the 8800GTX hardware will behave more consistently.
                    Last edited by ChrisRay; Dec 3, 2007, 07:36 PM.


                      #11
                      Ok finished editing. Now your question should be answered better.


                        #12
                        Originally posted by ChrisRay View Post
                        Ok finished editing. Now your question should be answered better.
                        As usual, awesome job. Thanks so much. The GT is the new Ti 4200. What an awesome card. The same performance as GTX for about half the price. Incredible.



                          #13
                          Originally posted by acroig View Post
                          As usual, awesome job. Thanks so much. The GT is the new Ti 4200. What an awesome card. The same performance as GTX for about half the price. Incredible.
                          Huh?

                          GT is a severely bandwidth crippled card that often gets raped by the GTX at high resolutions w/ AA & AF...& that gap will only get bigger as newer more demanding games come out.

                          Is the GT a fantastic deal compared to the GTX?

                          Absolutely.

                          But saying it's the same performance as the GTX is nonsense, unless you run at uber low resolutions.

                          Start benching @ 2560x1600 w/ eye candy turned up, & that "same performance" becomes remarkably dissimilar.

                          n7


                            #14
                            Originally posted by -n7- View Post
                            Huh?

                            GT is a severely bandwidth crippled card that often gets raped by the GTX at high resolutions w/ AA & AF...& that gap will only get bigger as newer more demanding games come out.

                            Is the GT a fantastic deal compared to the GTX?

                            Absolutely.

                            But saying it's the same performance as the GTX is nonsense, unless you run at uber low resolutions.

                            Start benching @ 2560x1600 w/ eye candy turned up, & that "same performance" becomes remarkably dissimilar.
                            Please take a look at post #9. No one is talking about stupid high resolutions, just 1600x1200. That's not "uber low" is it now?



                              #15
                              Originally posted by acroig View Post
                              Please take a look at post #9. No one is talking about stupid high resolutions, just 1600x1200. That's not "uber low" is it now?
                              Kinda yeah, but i'm used to a tad *cough* higher.

                              I don't know why you'd want a GT though since you have a GTX, unless you want SLI... which I of course cannot recommend, since taking two bandwidth-limited cards results in even worse scaling...

                              But each to their own.

                              I'm personally waiting for a real nV release... the GT & GTS 512 MB only annoy me, since having nothing really better than the GTX for over a year doesn't make me happy.

                              n7


                                #16
                                Originally posted by -n7- View Post
                                I don't know why you'd want a GT though since you have a GTX....
                                Please look at post #5.

                                And yes, I too would like to see a new high end from nV.



                                  #17
                                  Originally posted by -n7- View Post
                                  Kinda yeah, but i'm used to a tad *cough* higher.
                                  That's the blessing and the curse of owning huge displays. Awesome to look at but tough to get top performance at native res.



                                    #18
                                    Originally posted by Jas420221 View Post
                                    ehh... dunno. I don't know exactly what it's doing, but I do know it heats my core up just as high as, if not higher than, any game I own. It should be stressing the core and memory. I think that's because I get artifacts when I push my core too high, and other artifacts when my memory goes too high as well. So it's doing something. How, though, I don't know.

                                    Where is Ray Adams when you need him?



                                    That theory of theirs... it is a silly statement, have to agree there.

                                    ATITool is made by W1zzard. You must be thinking of ATI Tray Tools. I hope they did not use Tray Tools, as that would be even worse. My X1800 GTO2 gets over 300 fps in ATITool's 3D view.

                                    It's a 3D hairy cube spinning in random directions. It's going to increase power consumption just like any 3D application will, but obviously not every 3D app is the same, and the load will vary with the workload. That's why I get a little frustrated when power consumption is measured with 3DMark06 and 3DMark06 only; it's not a good indicator of how today's GPUs behave across workloads.



                                      #19
                                      Originally posted by -n7- View Post
                                      Huh?

                                      GT is a severely bandwidth crippled card that often gets raped by the GTX at high resolutions w/ AA & AF...& that gap will only get bigger as newer more demanding games come out.

                                      Is the GT a fantastic deal compared to the GTX?

                                      Absolutely.

                                      But saying it's the same performance as the GTX is nonsense, unless you run at uber low resolutions.

                                      Start benching @ 2560x1600 w/ eye candy turned up, & that "same performance" becomes remarkably dissimilar.
                                      I don't think the gap will "increase" greatly as newer games come out ((unless framebuffer limitations come into account)). You will probably see similar behavior patterns throughout the "useful" lifetime of either of these cards, the only exception being when that 512 MB barrier gets hit. The 8800GT has some interesting bandwidth-saving technology, and the shader domain is clocked high enough that there is not a large difference in shader power.

                                      Chris


                                        #20
                                        Originally posted by ChrisRay View Post
                                        The 8800GT has some interesting bandwidth-saving technology, and the shader domain is clocked high enough that there is not a large difference in shader power.

                                        Chris
                                        I noticed. 1728, impressive. The best I could get out of my GTX is 1620.



                                          #21
                                          That power drain on the 8800 GTX is why I built a low-power downloading machine. Running my gaming machine overnight just to download things was a huge waste of power. Now I've got a small box that pulls 42 W total when it's downloading, much less than the few hundred watts it used to cost me to watch Heroes.
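
                                          For anyone curious about the actual savings, the arithmetic is just watts times hours times price; here is a quick sketch, where the 8-hour night and the $/kWh rate are assumed example values, not anything from this thread.

                                          Code:
# Sketch: nightly energy cost of leaving a box on to download.
# The hours and the $/kWh rate are assumed example values, not from this thread.
HOURS_PER_NIGHT = 8
RATE_PER_KWH = 0.12  # assumed electricity price in $/kWh

def nightly_cost(watts: float) -> float:
    """Cost in dollars of running a constant load overnight."""
    kwh = watts * HOURS_PER_NIGHT / 1000.0
    return kwh * RATE_PER_KWH

# 42 W download box vs. a gaming rig left on at a few hundred watts
for watts in (42, 300):
    print(f"{watts:>3} W for {HOURS_PER_NIGHT} h: ${nightly_cost(watts):.2f} per night")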


                                            #22
                                            Originally posted by Sound_Card View Post
                                            ATITool is made by W1zzard. You must be thinking of ATI Tray Tools. I hope they did not use Tray Tools, as that would be even worse. My X1800 GTO2 gets over 300 fps in ATITool's 3D view.

                                            It's a 3D hairy cube spinning in random directions. It's going to increase power consumption just like any 3D application will, but obviously not every 3D app is the same, and the load will vary with the workload. That's why I get a little frustrated when power consumption is measured with 3DMark06 and 3DMark06 only; it's not a good indicator of how today's GPUs behave across workloads.

                                            That 3D hairy cube is very graphically intensive; anyone want to look into how many calculations it does and which parts of the GPU are being stressed?

                                            Just a hint: there's a reason we don't see fur or grass shaders that fill up the screen in real-time games yet...




                                              #23
                                              Originally posted by Crag2804 View Post
                                              That power drain on the 8800 GTX is why I built a low-power downloading machine. Running my gaming machine overnight just to download things was a huge waste of power. Now I've got a small box that pulls 42 W total when it's downloading, much less than the few hundred watts it used to cost me to watch Heroes.
                                              Your gaming rig DLs things at 400W? Impossible. Unless you have an SLI setup, a sheeeeeeeeee ton of HDDs, and active USB peripherals, it's not pulling 400W to DL.

                                              Originally posted by razor1 View Post
                                              That 3D hairy cube is very graphically intensive; anyone want to look into how many calculations it does and which parts of the GPU are being stressed?

                                              Just a hint: there's a reason we don't see fur or grass shaders that fill up the screen in real-time games yet...
                                              Thank you for your support; that was much more intelligently put!



                                                #24
                                                Originally posted by razor1 View Post
                                                That 3D hairy cube is very graphically intensive; anyone want to look into how many calculations it does and which parts of the GPU are being stressed?

                                                Just a hint: there's a reason we don't see fur or grass shaders that fill up the screen in real-time games yet...
                                                It may be graphically intensive from an artistic standpoint, but it's not GPU intensive. I hope you're not trying to argue that it's a valid way to measure power consumption; it is not. It's a 400x400 window with an even smaller cube and a solid background, on which an X1600 XT gets about 100 frames per second.

                                                Hint: these GPUs are not going to run all-out on ATITool's 3D view.

                                                Here is what happens when you take power consumption from a handful of games and factor them together.
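
                                                In other words, something like the sketch below: sample load power in several different game workloads and combine the readings, rather than trusting one synthetic test. The titles and wattages are invented purely for illustration.

                                                Code:
# Sketch: combine load-power samples from several game workloads
# instead of relying on a single synthetic test. Values are invented.
load_watts_by_game = {
    "game_a": 255,
    "game_b": 270,
    "game_c": 238,
    "game_d": 262,
}

average = sum(load_watts_by_game.values()) / len(load_watts_by_game)
peak = max(load_watts_by_game.values())
print(f"average load {average:.0f} W, peak load {peak} W over {len(load_watts_by_game)} games")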



                                                  #25
                                                  next time actually link to the article

                                                  http://www.hothardware.com/articles/..._RV670/?page=5

                                                  what motherboards are they using again?

                                                  We tested all of the graphics cards used in this article on either an EVGA nForce 680i SLI motherboard (NVIDIA GPUs) or an Asus P5E3 Deluxe (ATI GPUs) powered by a Core 2 Extreme QX6850 quad-core processor and 2GB of low-latency Corsair RAM. The first thing we did when configuring the test systems was enter their respective BIOSes and set all values to their "optimized" or "performance" default settings. Then we manually configured the memory timings and disabled any integrated peripherals that wouldn't be put to use. The hard drive was then formatted, and Windows Vista Ultimate was installed. When the installation was complete we fully updated the OS, and installed the latest DX10 redist and various hotfixes along with the necessary drivers and applications.




                                                    #26
                                                    Too bad the difference is not 50 watts between x38 and 680i.

                                                    http://techreport.com/articles.x/13351/13



                                                      #27
                                                      The amount of power consumed by a board will depend greatly on what's installed. Three GPUs, for instance, will use more power ((at the motherboard level)) than two on the 680i boards.


                                                        #28
                                                        Originally posted by Sound_Card View Post
                                                        Too bad the difference is not 50 watts between x38 and 680i.

                                                        http://techreport.com/articles.x/13351/13
                                                        Weird how that picture shows the 2900XT consuming less power than a GTX... was it always that way? I thought the 2900XT took that crown by about 20W, not that it used less by almost 30W???

                                                        It may not be 50W, but it is ~30W.



                                                          #29
                                                          Yeah I remember everyone screaming when 2900XT launched: "ZOMG I'll need a power plant to run R600 !"



                                                            #30
                                                            Originally posted by jam2k View Post
                                                            Yeah I remember everyone screaming when 2900XT launched: "ZOMG I'll need a power plant to run R600 !"
                                                            Who can forget...

                                                            That's why I personally don't like that picture that was posted.



                                                              #31
                                                              Originally posted by Sound_Card View Post
                                                              It may be graphically intensive from an artistic standpoint, but it's not GPU intensive. I hope you're not trying to argue that it's a valid way to measure power consumption; it is not. It's a 400x400 window with an even smaller cube and a solid background, on which an X1600 XT gets about 100 frames per second.

                                                              Hint: these GPUs are not going to run all-out on ATITool's 3D view.

                                                              Here is what happens when you take power consumption from a handful of games and factor them together.

                                                              I don't think ATITool's hairy box per se is a great stress test, as it doesn't use the latest shaders and other features, but that has nothing to do with it being a 400x400 window. Small windowed demos should work great, assuming vsync is off and the card doesn't have separate 2D and 3D clock settings (which, to my knowledge, the GeForce 8s don't), since there is no processor or memory bottlenecking the graphics card, so it can go full bore. I've seen some of my highest GPU temps while running Humus's demos in a 200 by 200 window. Unlike Oblivion or Crysis, these demos do not let the card take a breather while caching memory or waiting for something to load.

                                                              Also keep in mind that some of the best processor stress tests are simple little programs that do things like test prime numbers or calculate pi.
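
                                                              In that spirit, a toy Python sketch along those lines (not any established stress tool) will happily peg a CPU core with nothing more than prime checks and a pi series.

                                                              Code:
# Sketch: a tiny CPU burner in the spirit of prime/pi stress programs.
# A toy illustration only, not a substitute for a real stability test.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def leibniz_pi(terms: int) -> float:
    """Approximate pi with the (slowly converging) Leibniz series."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

if __name__ == "__main__":
    primes = sum(1 for n in range(2, 200_000) if is_prime(n))
    print(f"primes below 200000: {primes}, pi ~= {leibniz_pi(1_000_000):.6f}")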
                                                              "In the beginning, the universe was created. This made a lot of people very angry, and has been widely regarded as a bad idea." - Douglas Adams

                                                              Comment


                                                                #32
                                                                Originally posted by jam2k View Post
                                                                Yeah I remember everyone screaming when 2900XT launched: "ZOMG I'll need a power plant to run R600 !"

                                                                For the performance that it gives, it's a lot.


