Page 3 of 4 FirstFirst 1234 LastLast
Results 31 to 45 of 48

Thread: 1.5GB vs 3GB VRAM (VRAM Shared Between Two GPUs, or a Single GPU)?

  1. #31
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,448
    CUDA performance is also crippled on the consumer cards, though in my experience they still render fast - much faster than a CPU.

    The Octane devs had to overcome some real challenges getting the 480 to work in their gpu-based renderer, due to some of those changes.

    For full CUDA performance you will have to buy a Tesla, or the upcoming Maximus cards (which, according to Nvidia propaganda, will do both CUDA and OpenGL well!)

    Really, it's just Nvidia maximizing their revenue. Good or bad, that's what companies tend to do. And I think I read somewhere that Nvidia is going to have a bad time, what with Apple, Microsoft and Sony going with AMD graphics for the foreseeable future (next-gen consoles, Macs). Don't quote me on this, though. Rumors, rumors.

    Addendum: over at the Blender Foundation they seem to be acutely aware of the Fermi problems - their latest development notes state:
    Brecht found possible solution for Nvidia Fermi slowdown with faces using double sided light. He will check if this is feasible to add.
    Last edited by Rayek; 01-09-2012 at 04:47 PM.
    Win10 64 - i7 [email protected], p6t Deluxe v1, 48gb, Nvidia GTX 1080 8GB, Revodrive X2 240gb, e-mu 1820. Screens: 2 x Samsung s27a850ds 2560x1440, HP 1920x1200 in portrait mode

  2. #32
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,448
    The new 7970 is also getting great overclocking results. Up to 80% faster than a 580 in multi-monitor setups!

    http://www.hardocp.com/article/2012/...mance_review/7

  3. #33
    Super Member
    Join Date
    Feb 2009
    Location
    Dallas
    Posts
    989
    Quote Originally Posted by Rayek View Post
    OpenGL calls like "glReadPixels()" on a GTX480 are ~4 times slower than on a GTX285.
    Same goes for certain buffer operations. VBO's are affected as well.
    How does this actually affect Lightwave?

    If we're using OpenGL to display an estimation of the render for tweaking, I would guess Lightwave would be using something like glDrawPixels - much like a video game would to display the game.

    Is this only going to affect users who want to read back what OpenGL has 'rendered' - or is this going to affect normal usage of Lightwave? When would you need to read those buffers, rather than display them?

    I thought all actual rendering was done in software, at least for final output, but ... I could be confused?

    BTW - when you researched this, did you find any actual throttling of CUDA? Someone on ipisoft (used for mocap) reported that they saw a large speed increase when moving from a GTX-260 to a 460.

    ** EDIT *** this benchmark seems to indicate CUDA is getting faster in newer GTX cards!

    http://kernelnine.com/?p=218
    Last edited by 3dWannabe; 01-09-2012 at 05:17 PM.

  4. #34
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,448
    How does this actually affect Lightwave?
    Not too sure - when I tested with the 480, Lightwave 9 did not seem to be affected that much, but other apps were. However, I have read that it does affect the performance of VBOs, which is what Layout uses as well, so there may be a slowdown. The only way to tell for sure is if I test a heavy object in Layout on my machine (ATI 5870), and someone else tests on a GTX 285, a GTX 480 and a GTX 580. Then we will know what kind of impact it might have.

    I thought all actual rendering was done in software, at least for final output, but ... I could be confused?
    It has nothing to do with rendering to final output, but everything to do with how fast the hardware-accelerated OpenGL viewports render the image displayed on your screen while orbiting, panning and zooming - i.e. the realtime update of the viewport.

    Mind you, whereas a couple of years ago final-frame rendering and OpenGL output were sharply defined areas, the boundaries are getting more and more blurred :-)

    BTW - when you researched this, did you find any actual throttling of CUDA? Someone on ipisoft (used for mocap) reported that they saw a large speed increase when moving from a GTX-260 to a 460.
    Other people have been doing tests, and found that identical Fermi hardware is capped for CUDA unless you happen to buy an Nvidia Tesla. The consumer Fermi cards are still very fast - but way slower than they could be.

    And to complicate matters even more, the 3d api may have a huge impact on viewport performance as well:

    3.6mio poly model:
    Blender 2.57b x64 OpenGL: 9 FPS
    3dsMax 2011 x64 OpenGL: 0-1 FPS
    3dsMax 2011 x64 DX9: ~250 FPS (smooth+highlight, panning+rotating)
    Edit mode gets slow as crap in both tools at this poly count, but that's expected.
    (not my words)

    And, of course, it depends on how optimized the OpenGL viewport is in each application: http://www.cgchannel.com/2011/10/rev...a-vs-amd-2011/

    Huge differences in viewport performance between apps.

    So, overall, deciding on a video card depends on your software, whether you require cuda gpu-based rendering, and so on.

  5. #35
    Kamehameha Chameleon BigHache's Avatar
    Join Date
    Sep 2006
    Location
    Future Past Life
    Posts
    1,899
    Rayek thanks for this info, I hadn't come across the Fermi issue until this thread. I'm running a 260 GTX and have been passively researching a newer card myself. Meh.

  6. #36
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,448
    Quote Originally Posted by BigHache View Post
    Rayek thanks for this info, I hadn't come across the Fermi issue until this thread. I'm running a 260 GTX and have been passively researching a newer card myself. Meh.
    Thanks - the current situation is very confusing and messy, which makes it harder to choose a video card that fits one's specific needs versus affordability.

    PS: running the same Antec case!

  7. #37
    Super Member
    Join Date
    Feb 2009
    Location
    Dallas
    Posts
    989
    Is there some OpenGL test I could run with Lightwave? I'm getting the GTX-580 on Tuesday, and currently have a GTX-285.

    I actually want to have a few benchmarks in place for the different software I run before I install it.

    I think I can use Turbulence to test CUDA performance.

  8. #38
    Fórum áss clówn Hopper's Avatar
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    I dunno, I think you guys are just tossing around numbers at this point... I can flip a 1.4million poly model (completely textured) around in Modeler 10.1 with my GTX560 Ti like butter.

    Or to put it this way... (for most of us any way)

    Just think what your OpenGL performance was like 5 years ago. The cards now (even the ones you're calling "slow") are infinitely faster than they were.

    Now think about your models... Has the complexity of your models really changed that much? Probably not.

    Just sayin...
    Playing guitar is an endless process of running out of fingers.

  9. #39
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,448

  10. #40
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,448
    Quote Originally Posted by Hopper View Post
    I dunno, I think you guys are just tossing around numbers at this point... I can flip a 1.4million poly model (completely textured) around in Modeler 10.1 with my GTX560 Ti like butter.

    Or to put it this way... (for most of us any way)

    Just think what your OpenGL performance was like 5 years ago. The cards now (even the ones you're calling "slow") are infinitely faster than they were.

    Now think about your models... Has the complexity of your models really changed that much? Probably not.

    Just sayin...
    No, that's the point - the GTX 480's OpenGL performance in some popular 3d applications is on par with cards that came out almost 5 years ago. People were complaining about performance equal to a 9800 GTX (a three-generation-old card by now) - and I don't know about you, but my models and scenes have grown exponentially more complex, what with ZBrush and cloth, water and smoke simulations. I grew with the speed of the hardware. Why do you think I increased my RAM from 12GB to 24GB? Because some of my latest projects were breaking through the ceiling while rendering. My clients' expectations have grown rapidly in the last couple of years, and I don't see the end of it.

    Fact of the matter is that 3d apps keep adding more eye candy (well, Lightwave has been somewhat slow in adopting new tech), and the hardware requirements keep growing to adequately meet the demand in the market.

    At least, that is my personal situation.

    And discovering that the "latest and greatest" from Nvidia actually performed way below the previous generation of video cards was "slightly" bewildering. I am not expecting Quadro performance, but at least a little bit of an improvement. Instead, Nvidia decided to throw me (and a lot of other 3d creators) into a hole, trying to force us into buying an overpriced Quadro card that's only really worth it if you use Maya or Max. Lightwave, Cinema 4D, Blender, XSI: these do not benefit much, if at all, from a workstation card, because dedicated drivers do not exist!

    Furthermore, the difference in OpenGL between the (older) ATI 5870 and that GTX 480 is like night and day in the apps I use. Animating a 3d character in Blender was nigh on impossible - now it works fine. Sculpting at tens of millions of polys is completely smooth - on the GTX 480 I couldn't go beyond one or two million before the lag became unbearable. I'd say that's pretty darn bad.

    So, no, I politely disagree with you - expectations HAVE grown across the industry, as well as for freelancers like me. To keep up I have to upgrade my hardware once in a while, and I feel it's a cheap trick from Nvidia to pull the ground out from under me.

    Meanwhile, lots of users are complaining, and Nvidia acts like it's the most natural thing in the world - which, for a greedy company, it is, and it absolutely makes economic sense. But I will not be buying anything from them anymore, and I will warn other people whenever I can that Nvidia is pulling one over on us - which is what I have been doing since last year.

  11. #41
    Super Member
    Join Date
    Feb 2009
    Location
    Dallas
    Posts
    989
    The only benchmark I saw was on glReadPixels()

    and

    read: orphan buffer0, bind to PIXEL_PACK_BUFFER, glReadPixels
    copy: map buffer1, memcopy image data, unmap buffer1
    tex: not used
    swap(buffer0, buffer1)

    http://www.opengl.org/discussion_boa...789#Post291789

    I have no way of evaluating how that translates into actual usage of OpenGL in a 3D app such as Lightwave. Luckily, 3D-Coat offers a choice of OpenGL or DirectX (DirectX is advised).

    Thanks for posting the benchmarks, I'll take a look!!!

    But, yes - this is very confusing for everyone reading this, I'm sure.

    And that's a shame. It should be a simple decision to purchase a video card. You should not have to become an 'expert' on all these subjects.

    If nVidia has slowed performance in later graphics cards to below that of earlier ones, that's just very poor behavior which will not win them friends and supporters.

    The market for those high-end cards (the ones they reserve the performance for) is so small that the goodwill lost by doing this must cost them money.

  12. #42
    Super Member
    Join Date
    Feb 2009
    Location
    Dallas
    Posts
    989
    I found some benchmarks for the SPECapc for Lightwave 9.6.

    At the bottom of this page are the three tests for the GTX-480:

    http://jonpeddie.com/reviews/comment...me-workstation

    On this page are two of those tests for the Quadro 5000:

    http://www.tomshardware.co.uk/quadro...-31984-10.html

    and the three tests for the Quadro 5000 along with an AMD FirePro:

    http://www.tomshardware.com/reviews/...00,2780-4.html

    We're comparing apples with oranges, and I'm not sure how the computer specs and resolution figure in, but ... it seems that the GTX-480 is in the ballpark for this test compared to the Quadro - it's not 4x slower or anything.

    For Premiere Mercury playback, the GTX580 seems to beat the Quadro (so, I guess CUDA performance isn't throttled?)

    http://ppbm5.com/MPE%20Gains.png

  13. #43
    Fórum áss clówn Hopper's Avatar
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    Quote Originally Posted by Rayek View Post
    No, that's the point - the 480gtx's opengl performance in some popular 3d applications is on par with cards that came out almost 5 years ago. People were complaining about performance that was equal to a 9800gtx (three gen old video card by now )- and I don't know about you, but my models and scenes have exponentially grown more complex, ...

    <snip>...

    So, no, I politely disagree with you - expectations HAVE grown across the industry, as well for freelancers like me. And to keep up I have to upgrade my hardware once in a while, and I feel it's a cheap trick from nvidia to pull the ground under from me.

    Meanwhile, lots of users complaining, and nvidia acts like it's the most natural thing in the world ...
    Well stated. I can see your point of view. I would agree with all of your statements, but if I did, I fear it would somehow break the Internet.
    Last edited by Hopper; 01-10-2012 at 07:47 PM.

  14. #44
    Registered User Rayek's Avatar
    Join Date
    Feb 2006
    Location
    Vancouver, BC
    Posts
    1,448
    Quote Originally Posted by Hopper View Post
    Well stated. I can see your point of view. I would agree will all of your statements, but if I did, I fear it would somehow break the Internet.
    Indeed it would, wouldn't it?

    I do think that for most 3d tinkerers semi-old hardware is still more than enough. And in my case, there is a certain geek factor at work as well, so I need to be careful not to muscle up my rig just for the sake of "coolness".

    ... (silence)

    Why oh why did I just order an additional 24GB? Aargh... Now I have to make scenes that can render in 48GB.

  15. #45
    Super Member JonW's Avatar
    Join Date
    Jul 2007
    Location
    Sydney Australia
    Posts
    2,235
    Quote Originally Posted by Rayek View Post
    Why oh why did I just order an additional 24GB?
    So one can run two instances of LW, or have two Screamernet nodes fired up, so the CPU(s) run at 100%!

    No point having a whole box of parts running at 80% because one is a bit tight with RAM!
    Procrastination, mankind's greatest labour saving device!

    W5580 x 2 24GB, Mac Mini, Spyder3Elite, Dulux 30gg 83/006 72/008 grey room,
    XeroxC2255, UPS EvolutionS 3kw+2xEXB

