
Thread: 1.5 vs 3 GB VRAM (VRAM shared between two GPUs, or single GPU)?

  1. #16
    Super Member Paul_Boland's Avatar
    Join Date
    Nov 2003
    Location
    Waterford, Ireland.
    Posts
    1,087
    Quote Originally Posted by Hopper View Post
    You guys keep referring to "SLI configuration". If you're NOT talking about gaming (or anything full-screen 3D), then you're not talking about Scalable Link Interface. SLI doesn't generically mean "2 cards in one system". LightWave, 3DSMax, Maya, etc. can't use SLI or CrossFireX as they are not full-screen applications. If you mean "the use of two cards for a regular desktop application", then the application has to be specifically designed to use multiple GPUs for rendering frames, performing calculations, and so on. The previously mentioned applications are not designed for this.

    And as far as SLI configurations becoming out of sync - they haven't used AFR rendering since around 2001. Frames are checksummed through the bridge chipset now, and it's pretty much impossible to get out of sync. I would say it's a pretty safe bet that you wouldn't run into that problem unless you're using some really old equipment.
    I saw a computer with Nvidia 8800 cards in SLI that was out of sync. Every movement looked fuzzy due to the odd and even pixels being out of sync.

    As for desktop software not using SLI, you are right that it is for 3D processing, but doesn't LightWave avail of it? I know AutoCAD does, and I "think" (not 100% sure) 3D Studio does too.
    KnightTrek Productions
    http://www.knighttrek.com

  2. #17
    Fórum áss clówn Hopper's Avatar
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    Quote Originally Posted by Paul_Boland View Post
    I saw a computer with Nvidia 8800 cards in SLI that was out of sync. Every movement looked fuzzy due to the odd and even pixels being out of sync.
    That would simply be poor configuration and/or bad timing with the ICH on the motherboard. Poor quality RAM can also cause the same result, but SLI synchronization problems will result in dropped frames, stuttering (ripping), and actual render errors. The cards could have also been set to SFR rendering, but unlikely.
    Quote Originally Posted by Paul_Boland View Post
    As for desktop software not using SLI, you are right that it is for 3D processing, but doesn't LightWave avail of it? I know AutoCAD does, and I "think" (not 100% sure) 3D Studio does too.
    LightWave is a desktop application and thus does not make specific use of SLI. AutoCAD and 3D Studio are also desktop applications and do not make use of SLI. I think you may be mistaking multi-GPU processing through CUDA or some other shader API for SLI processing. These are not the same thing. A "3D application" as far as SLI is concerned is a full-screen OpenGL or DirectX application. All of the previously mentioned applications are windowed and use the platform-level windowing APIs to draw the displays. SLI is controlled by the driver - CUDA (and other APIs) are controlled by the application.
    Last edited by Hopper; 01-07-2012 at 04:15 PM.
    Playing guitar is an endless process of running out of fingers.
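    To make the distinction above concrete: SLI is set up in the driver and is invisible to the program, while multi-GPU compute through CUDA has to be written into the application itself. Below is a minimal sketch of that application-side control using the CUDA runtime API - illustrative only, not something LightWave, AutoCAD or 3D Studio actually do.

    Code:
    /* Sketch: enumerate CUDA devices and select each one explicitly.
     * Unlike SLI, nothing here is automatic - the application decides
     * which GPU runs which work. Link against the CUDA runtime (cudart). */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int device_count = 0;
        if (cudaGetDeviceCount(&device_count) != cudaSuccess || device_count == 0) {
            fprintf(stderr, "No CUDA-capable devices found.\n");
            return 1;
        }

        for (int dev = 0; dev < device_count; ++dev) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);

            /* The application picks the GPU; the driver does not split the
             * work for it the way SLI splits frames in a full-screen game. */
            cudaSetDevice(dev);
            printf("Device %d: %s, %zu MB of VRAM\n",
                   dev, prop.name, prop.totalGlobalMem / (1024 * 1024));

            /* ...launch kernels / copy data for this device here... */
        }
        return 0;
    }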

  3. #18
    obfuscated SDK hacker Lightwolf's Avatar
    Join Date
    Feb 2003
    Location
    Stuttgart, Germany
    Posts
    13,602
    Quote Originally Posted by Hopper View Post
    A "3D application" as far as SLI is concerned is a full screen OpenGL or DirectX application. All of the previously mentioned applications are windowed and use the platform level windowing API's to draw the displays.
    Unless you run a Quadro of course...

    Cheers,
    Mike

  4. #19
    Super Member
    Join Date
    Feb 2009
    Location
    Dallas
    Posts
    989
    Quote Originally Posted by Lightwolf View Post
    Unless you run a Quadro of course...

    Cheers,
    Mike
    How do the Quadros perform with 3D compared to a 580?

    Looking at the specs, they have 6GB of VRAM instead of 3GB for the 580, but the specs seem to indicate they would be slower (fewer cores, lower memory bandwidth, etc.)?

    http://www.nvidia.com/object/product...o-6000-us.html
    http://www.evga.com/products/moreInf...s%20Family&sw=

  5. #20
    Fórum áss clówn Hopper's Avatar
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    Quote Originally Posted by Lightwolf View Post
    Unless you run a Quadro of course...

    Cheers,
    Mike
    Good point.
    Playing guitar is an endless process of running out of fingers.

  6. #21
    Registered User
    Join Date
    Jan 2006
    Location
    Baltimore, MD
    Posts
    541
    Quote Originally Posted by 3dWannabe View Post
    How do the Quadros perform with 3D compared to a 580?

    Looking at the specs, they have 6GB of VRAM instead of 3GB for the 580, but the specs seem to indicate they would be slower (fewer cores, lower memory bandwidth, etc.)?

    http://www.nvidia.com/object/product...o-6000-us.html
    http://www.evga.com/products/moreInf...s%20Family&sw=
    The GeForce 580 is much quicker for modeling than a comparable Quadro, but for rendering the Quadro has better color reproduction. As stated earlier, most 3D packages still use the CPU to generate the final images, so a faster GPU doesn't necessarily translate into faster render times.

    Rich

  7. #22
    Super Member Paul_Boland's Avatar
    Join Date
    Nov 2003
    Location
    Waterford, Ireland.
    Posts
    1,087
    That's right, a standard gamer's graphics card is not a render card; go for the Quadro or similar if that's what you're after. But don't 3D packages need to support these cards before they work?

    Regarding SLI, gamer cards, and desktop applications: Hopper, I checked the Autodesk website. You are right, AutoCAD does not use SLI. But I found out that MotionBuilder, a desktop application, does use it, so SLI is not restricted to gaming.
    KnightTrek Productions
    http://www.knighttrek.com

  8. #23
    Super Member JonW's Avatar
    Join Date
    Jul 2007
    Location
    Sydney Australia
    Posts
    2,235
    Quote Originally Posted by RTSchramm View Post
    for rendering the Quadro has better color reproduction.
    I assume you mean by "better colour reproduction" that what is displayed on the monitor will look better using a Quadro?

    And if this is the case, would a calibrated monitor using a Quadro card look more accurate than a calibrated monitor using a standard card?


    Images on my monitors using standard cards (I don't have a Quadro) or built-in graphics look reasonably similar to my laser printer's output, and I haven't/can't calibrate my printer with the device I have.
    Procrastination, mankind's greatest labour saving device!

    W5580 x 2 24GB, Mac Mini, Spyder3Elite, Dulux 30gg 83/006 72/008 grey room,
    XeroxC2255, UPS EvolutionS 3kw+2xEXB

  9. #24
    obfuscated SDK hacker Lightwolf's Avatar
    Join Date
    Feb 2003
    Location
    Stuttgart, Germany
    Posts
    13,602
    Quote Originally Posted by JonW View Post
    I assume you mean by "better colour reproduction" that what is displayed on the monitor will look better using a Quadro?
    The only difference that I can think of is that Quadros allow for 30-bit output (10-bit per component) - in conjunction with the right display and the right software.

    Cheers,
    Mike
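    As a side note, an application can check whether it actually got a deep back buffer by querying the per-channel bit depth of the current context. A minimal sketch, assuming a GL context is already current (the legacy GL_*_BITS queries are used here for brevity; they require a compatibility context):

    Code:
    /* Sketch: report how many bits per channel the back buffer really has.
     * A typical consumer setup reports 8; a working 30-bit pipeline
     * (Quadro + suitable display + software that requested a deep pixel
     * format) reports 10. Assumes a GL context is already current. */
    #include <stdio.h>
    #include <GL/gl.h>

    void print_backbuffer_depth(void)
    {
        GLint red = 0, green = 0, blue = 0;
        glGetIntegerv(GL_RED_BITS,   &red);
        glGetIntegerv(GL_GREEN_BITS, &green);
        glGetIntegerv(GL_BLUE_BITS,  &blue);
        printf("Back buffer: R%d G%d B%d bits per channel\n", red, green, blue);
    }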

  10. #25
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,446
    ...and do not forget that the Nvidia 4xx/5xx consumer cards have crippled OpenGL, which may result in dismal OpenGL viewport performance - but it depends on the application used.
    Win10 64 - i7 [email protected], p6t Deluxe v1, 48gb, Nvidia GTX 1080 8GB, Revodrive X2 240gb, e-mu 1820. Screens: 2 x Samsung s27a850ds 2560x1440, HP 1920x1200 in portrait mode

  11. #26
    Super Member
    Join Date
    Feb 2009
    Location
    Dallas
    Posts
    989
    Quote Originally Posted by Rayek View Post
    ...and do not forget that the Nvidia 4xx/5xx consumer cards have crippled OpenGL, which may result in dismal OpenGL viewport performance - but it depends on the application used.
    Do you have more info on this?

  12. #27
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,446
    About a year and two months ago I replaced my GTX 280 with a GTX 480, obviously expecting at least twice the OpenGL performance in most 3D apps.

    Unfortunately, it was not to be: in a number of apps the performance was much worse than with the 280 (for example, Blender and Maya). To cut a long story short, lots of research on the net (there is still a lot of confusion among users about this topic, which is understandable) and a quick discussion with Nvidia's support made it clear that certain vital OpenGL functions are throttled.

    So: great performance in games (mostly DirectX), and abysmal performance in OpenGL apps. Users have been reporting much worse performance with a 460 than with an aging 8800.

    When I asked Nvidia support, they got back to me saying this is 'expected behaviour' of the new cards. Basically, Nvidia got tired of people no longer buying Quadro cards and instead opting for consumer cards to drive their 3D apps.

    I switched the 480 for an ATI 5870, which works great in OpenGL (I benchmarked all my apps, and the 5870 wins hands down against the 480 - no comparison). It does depend a bit on the app used - for example, LightWave seems less affected for some reason - though the OpenGL performance in LightWave is quite bad as it is, so...

    Quote (not mine):

    All AMD Radeon cards work fine.
    All Nvidia GeForce200 series cards and older work fine.
    All Nvidia GeForce400 series cards and newer are affected.
    All Nvidia Quadro cards are not affected.

    OpenGL calls like "glReadPixels()" on a GTX480 are ~4 times slower than on a GTX285.
    Same goes for certain buffer operations. VBOs are affected as well.
    ---

    From the Wikipedia page (since removed, I think by Nvidia):

    It has been reported by users as well as developers [13] [14] [15] [16] [17] [18] [19] [20] that nVidia 400-series cards have severe performance problems with 3D content-creation applications such as Autodesk Maya and 3ds Max, Blender, Rhinoceros 3D—as well as some OpenGL games—to the extent that video cards two generations older routinely outperform 400-series in such applications and games. The problem, which affects any OpenGL application using textures, involves accessing framebuffer contents or storing data on the GPU. So far, one customer using an OpenGL based application got a response from nVidia support indicating that the behavior is expected in the GeForce 400 line of cards, and no software update is available to improve the performance of the hardware.[21]. The problem can be worked around with a hack by using a CUDA memory copy to access the buffer object.
    ---

    Ha! Well, I can see why they would remove such anecdotal evidence - that one customer was me! Figures.

    Honestly, it really depends on whether an OpenGL app or game makes use of these crippled functions. If it does, it's horrible. In Blender, fewer than a million verts would slow the viewport down to a crawl on the 480. On my 5870 I have a smooth viewport at 8 million verts and more.
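
    For what it's worth, here is a hedged sketch of the two readback paths being discussed: the plain glReadPixels call reported as throttled, and the "CUDA memory copy" hack mentioned in the quoted paragraph. It assumes a current GL context, a loader such as GLEW for the buffer-object entry points, and the CUDA runtime with GL interop; it illustrates the idea only (error handling omitted), and whether it helps on a given card is down to the driver.

    Code:
    /* Sketch only: direct readback vs. pulling the data out through CUDA. */
    #include <stddef.h>
    #include <GL/glew.h>
    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    /* Path 1: the call reported as heavily throttled on GeForce 400/500. */
    void readback_direct(int w, int h, unsigned char *out_rgba)
    {
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, out_rgba);
    }

    /* Path 2: read into a pixel-pack buffer object, then copy the data out
     * with CUDA instead of the GL driver's readback path. */
    void readback_via_cuda(int w, int h, unsigned char *out_rgba)
    {
        size_t bytes = (size_t)w * h * 4;
        GLuint pbo;
        struct cudaGraphicsResource *res = NULL;

        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, bytes, NULL, GL_STREAM_READ);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0); /* into the PBO */
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

        /* Hand the buffer object to CUDA and memcpy it to host memory. */
        cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsRegisterFlagsReadOnly);
        cudaGraphicsMapResources(1, &res, 0);

        void *dev_ptr = NULL;
        size_t mapped = 0;
        cudaGraphicsResourceGetMappedPointer(&dev_ptr, &mapped, res);
        cudaMemcpy(out_rgba, dev_ptr, bytes, cudaMemcpyDeviceToHost);

        cudaGraphicsUnmapResources(1, &res, 0);
        cudaGraphicsUnregisterResource(res);
        glDeleteBuffers(1, &pbo);
    }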

    I plan to purchase the new 7970 3GB card from AMD next month.
    Win10 64 - i7 [email protected], p6t Deluxe v1, 48gb, Nvidia GTX 1080 8GB, Revodrive X2 240gb, e-mu 1820. Screens: 2 x Samsung s27a850ds 2560x1440, HP 1920x1200 in portrait mode

  13. #28
    Super Member
    Join Date
    Feb 2009
    Location
    Dallas
    Posts
    989
    Oh #@(&%%%%!!!!! My 580 arrives tomorrow. I really feel ripped off and will probably never buy nVidia again, as this should have been disclosed.

  14. #29
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,446
    Quote Originally Posted by 3dWannabe View Post
    Oh #@(&%%%%!!!!! My 580 arrives tomorrow. I really feel ripped off and will probably never buy nVidia again, as this should have been disclosed.
    That's how I felt when I found out about those issues myself - it cost me a $90 restocking fee with the 480.

    And for those still in doubt about this:
    http://www.opengl.org/discussion_boa...=284014&page=5
    http://forums.nvidia.com/index.php?showtopic=166757

    Whether you call it 'crippling' or 'disabling professional features', the fact remains that a GTX 280 is much faster in most OpenGL apps than a 580.

    Of course, the directx game performance is great! ;-)
    Win10 64 - i7 [email protected], p6t Deluxe v1, 48gb, Nvidia GTX 1080 8GB, Revodrive X2 240gb, e-mu 1820. Screens: 2 x Samsung s27a850ds 2560x1440, HP 1920x1200 in portrait mode

  15. #30
    Super Member
    Join Date
    Feb 2009
    Location
    Dallas
    Posts
    989
    Quote Originally Posted by Rayek View Post
    That's how I felt when I found out about those issues myself - it cost me a $90 restocking fee with the 480.

    And for those still in doubt about this:
    http://www.opengl.org/discussion_boa...=284014&page=5
    http://forums.nvidia.com/index.php?showtopic=166757

    Whether you call it 'crippling' or 'disabling professional features', the fact remains that a GTX 280 is much faster in most OpenGL apps than a 580.

    Of course, the directx game performance is great! ;-)
    The second link seems to indicate that the OpenGL issue could be bypassed by using a CUDA call. But that's from 2/17/'11, so... do you know if CUDA is also crippled now?
