
1.5 vs 3GB VRAM (VRAM shared between two GPUs, or a single GPU)?



3dWannabe
01-04-2012, 03:13 PM
I was about to go with a GTX-590 with 3GB VRAM, but just realized that it splits the VRAM between two processors so each just gets 1.5GB.

I can get a GTX-580 with only a single GPU, but that GPU will have a full 3GB VRAM.

Which is going to end up faster? I'm concerned about going down to 1.5 GB VRAM as I've got 2GB VRAM now with a GTX-285.

Thanks!

Paul_Boland
01-04-2012, 07:28 PM
I'm not a fan of SLI graphics card setups; they can fall out of sync if not perfectly set up. Plus, as far as I know, an SLI setup requires two identical cards.

I have the Nvidia GeForce 560 with 1.25GB and it's a great card for gaming and Lightwave.

3dWannabe
01-04-2012, 07:40 PM
I'm not a fan of SLI graphics card setups; they can fall out of sync if not perfectly set up. Plus, as far as I know, an SLI setup requires two identical cards.

I have the Nvidia GeForce 560 with 1.25GB and it's a great card for gaming and Lightwave.

So you are saying that the 590, being an SLI solution from EVGA, may get 'out of sync'? Or, since they are choosing identical components, probably from the same production runs, do I not need to worry about that with the 590 (but would if I purchased separate cards for SLI)?

My main concern is running out of VRAM - as I'd like the card to last a few years, and I'm not sure if 1.5 GB is enough?

Thanks!

Paul_Boland
01-04-2012, 08:36 PM
Let me clarify...

I'm a gamer and an I.T. tech guy (not as a career). When folks ask me about new computers and/or graphics cards, I advise against SLI configurations and recommend that they get as powerful a single graphics card as they can afford.

As a tech guy, I've seen people with SLI setups that have gone out of sync. It's rare, but it does happen. When a computer runs two graphics cards in SLI, the way it works is that one card processes all the odd lines of pixels across the screen and the other processes all the even lines. So card 1 does lines 1, 3, 5, 7, 9, 11, etc., and card 2 does 2, 4, 6, 8, 10, 12, etc. As long as both cards are in sync and working fine, there is no issue and the display is very fast.

But the cards can go out of sync, which usually happens as the computer gets older and one card, through natural wear and tear, slows a bit; the odd and even lines of pixels then drift apart. That's fine when looking at a static image, but when viewing movement (a movie, scrolling, moving through a scene, etc.) you can see lag between the odd and even lines of pixels.

Also, unless this has changed, as far as I am aware an SLI configuration requires two cards of the same type. You said you have a GeForce 260. If you are looking to set up SLI with this card, you need another GeForce 260; I don't think you can just pop a GeForce 590 in beside it and connect them together. I could be wrong on this point, but I do believe an SLI setup needs two cards of the same type.

Also, note that an SLI setup generates a lot of heat and requires a lot of power. Make sure your computer has good ventilation and a good PSU.

I recommend to folks, and go for myself, a single powerful graphics card. I recently got a new PC, as my old one died completely on me, and I went with the Nvidia GeForce 560 card, which has 1.25GB of RAM on board. As a gamer, this card rocks. It also handles Lightwave's OpenGL scenes and VPR scenes just fine.

As for lasting a few years: my old PC that just died had an Nvidia GeForce 6800 card in it with only 256MB of RAM, and while it was showing its age as far as gaming goes, it handled Lightwave scenes just fine.

I hope this bit of info helps you out.

3dWannabe
01-04-2012, 09:38 PM
Ahh - I'm sorry for the confusion!

I'm replacing a single GTX-285 with a single GTX-590. I run three 1920 x 1200 monitors (using an EVGA UV19 USB adapter for the 3rd monitor with the GTX-285; the 590 itself will handle all 3 monitors).

From talking to EVGA tech support, they said that each GPU will only handle two monitors, so that the 2nd GPU runs the 3rd monitor.

It doesn't sound like the screens are having odd/even lines drawn by different GPUs, at least from their description.

As for VRAM, I'm not only interested in Lightwave, but also run 3ds Max, 3D-Coat, MotionBuilder, etc. - and I see this trend towards GPU acceleration (Turbulence for Lightwave is now GPU accelerated with CUDA). Lightwave itself might move in that direction, so having a bunch of CUDA-enabled cores could come in handy!

My SuperMicro dual Xeon motherboard can only have one dual slot video card, so ... whatever I get has to rock and roll all by itself. Unless I try something like the Cubix GPU-Xpander http://www.cubix.com/content/gpu-xpander-desktop, I'm pretty much stuck with just a single card.

Finding out that the VRAM was split between the GPUs came as a surprise to me; I'd counted on 3GB, which would have been more than enough. With 1.5GB, I just don't know.

Thanks!

JonW
01-04-2012, 11:03 PM
I've got the SuperMicro X8DAL-3 & it's got a GTX 280 in the main slot to run a 30" monitor, & a 9800 in one of the standard slots to run a 24" monitor (vertical).

I had both monitors running off the GTX 280 but my Spyder couldn't store the calibration for the second monitor. So with the second card I could calibrate both monitors (separate issue, I'm not overly impressed with the calibration).

From what I have heard over time, people who have 200 series cards are very happy with them. I also had my GTX 260 in this box for a long time & was very happy with it. I eventually got around to swapping it with the GTX 280 from my E5450 box. They are both very good cards; the 280 is a bit quicker than the 260, but there is not a lot in it. The 260 is a good card.

I'm even inclined, when I get a new box, to put the current cards in it unless there is some major breakthrough for LW.

Link to my old box, but it doesn't focus on graphics cards.
http://forums.newtek.com/showthread.php?t=100802

Paul_Boland
01-06-2012, 01:10 PM
Thanks for the clarification, 3dWannabe, I thought you were looking to set up an SLI configuration. The best advice I can give you is the same I give everyone else: go for the best, most powerful card you can afford. A 3GB card would be great, but I'm sure 1.5GB will see you fine too.

JonW, interesting to hear you have two different graphics cards in your system; I was not aware that was possible. Thanks for clearing that up for me.

3dWannabe
01-06-2012, 01:23 PM
Actually, I don't have two cards. The 2nd 'video card' is an EVGA USB video card (UV-19) http://www.evga.com/products/moreInfo.asp?pn=100-U2-UV19-TR&family=USB&sw=10. It just runs over USB 2.0 and I've never had an issue with it.

I'd never have thought it would work - or I assumed it would be very slow! I use it for Lightwave dialogs that don't change a lot, although it will actually play video at 1920 x 1200. Who would have guessed?

I took your advice and purchased the single GPU GTX-580 classified ultra
http://www.evga.com/products/moreInfo.asp?pn=03G-P3-1595-AR&family=GeForce%20500%20Series%20Family&sw=

If I can move to a new motherboard with 7 PCIe slots at some point, I can have up to 4 of them.

Thanks for your advice!

JonW
01-06-2012, 02:36 PM
Let us know how you feel the GTX 580 classified ultra performs compared to the GTX 285.


Attached: GTX 280 at top, another network card & below this a 9800.

3dWannabe
01-06-2012, 03:00 PM
Let us know how you feel the GTX 580 classified ultra performs compared to the GTX 285.


Attached: GTX 280 at top, another network card & below this a 9800.

Is the 9800 in a PCI-X slot? I'd tried to use a 9400 in the PCI-X slot of the X8DEA along with the GTX-285, but it wouldn't boot. Of course this was two years ago, maybe the video drivers or SuperMicro BIOS have dealt with this issue?

I'll have my card on Tuesday. My guess is that it will scream at any CUDA-enabled task (Turbulence, iPi mocap, Fusion, Premiere, etc.). Fusion will handle multiple GPUs, so ... I'm going to miss that extra one there.

JonW
01-06-2012, 04:48 PM
GTX 280 in slot 6 & 9800 in slot 3. I'm using the 24" in a vertical orientation for LW menus. One useful thing is you can get up to 116 layers visible in Modeler at one time, which helps make up for its total lack of user friendliness.

I bought this box in July 2009; it's prehistoric now & ready for the farm!

Paul_Boland
01-06-2012, 09:08 PM
3dWannabe, that EVGA USB video card is just the coolest thing I've ever seen! I might just pick one up for myself. Enjoy the Nvidia 580. With 3GB of RAM on board, it's a beast of a card for sure and will last you a long time. If you're a gamer, it will rock at that too ;).

JonW, I'm amazed that you have two different video cards installed. I would have thought using two different video cards would cause driver conflicts. Thanks for the pic.

Hopper
01-06-2012, 09:40 PM
You guys keep referring to "SLI configuration". If you're NOT talking about gaming (or anything full-screen 3D), then you're not talking about Scalable Link Interface. SLI doesn't mean generically "2 cards in one system". LightWave, 3DSMax, Maya, etc. can't use SLI or CrossFireX, as they are not full-screen applications. If you mean "the use of two cards for a regular desktop application", then the application has to be specifically designed to use multiple GPUs for rendering frames, performing calculations, etc. The previously mentioned applications are not designed for this.

And as far as SLI configurations becoming out of sync - they haven't used AFR rendering since around 2001. Frames are checksummed through the bridge chipset now and it's pretty much impossible to get out of sync. I would say it's a pretty safe bet that you wouldn't run into that problem unless you're using some really old equipment.

JonW
01-06-2012, 09:46 PM
The 9800 is overkill, but it was just a spare card I pulled out of one of the other boxes, as that box is only ever connected to remotely for Screamernet. With the calibration really annoying me, I thought I'd try it. I downloaded the latest drivers & everything worked well first time. It's a low power card so it's ideal for the purpose as well.

Still not overly impressed with the difference in calibration between the 2 monitors. There are plenty of excuses due to monitor differences. Although it's not a vast amount by any means, quite frankly it's still not good enough as far as I'm concerned in this day & age.


Link to a bit of fun with two 30" monitors running off the GTX 280 for a 5000-pixel-wide VPR! I should give it a go with the 24" as well!
http://forums.newtek.com/showthread.php?t=117008

JonW
01-06-2012, 10:30 PM
30" vertical on left (GTX 280) 24" vertical in centre (9800) & 30" horizontal right (GTX 280)

Quick VPR 5360 x 2560 overall with a bit of missing real estate!

I must say I could quickly get addicted to a few 30" monitors vertically! It could be quite handy when one needs to take those "aerial" screen shots!

Paul_Boland
01-07-2012, 12:51 PM
You guys keep referring to "SLI configuration". If you're NOT talking about gaming (or anything full-screen 3D), then you're not talking about Scalable Link Interface. SLI doesn't mean generically "2 cards in one system". LightWave, 3DSMax, Maya, etc. can't use SLI or CrossFireX, as they are not full-screen applications. If you mean "the use of two cards for a regular desktop application", then the application has to be specifically designed to use multiple GPUs for rendering frames, performing calculations, etc. The previously mentioned applications are not designed for this.

And as far as SLI configurations becoming out of sync - they haven't used AFR rendering since around 2001. Frames are checksummed through the bridge chipset now and it's pretty much impossible to get out of sync. I would say it's a pretty safe bet that you wouldn't run into that problem unless you're using some really old equipment.

I saw a computer with Nvidia 8800 cards in SLI go out of sync. Every movement looked fuzzy due to the odd and even pixels being out of sync.

As for desktop software not using SLI: you are right, it is for 3D processing, so doesn't Lightwave avail of it? I know AutoCAD does and I "think" (not 100% sure) 3D Studio does too.

Hopper
01-07-2012, 04:10 PM
I saw a computer with Nvidia 8800 cards in SLI go out of sync. Every movement looked fuzzy due to the odd and even pixels being out of sync.
That would simply be poor configuration and/or bad timing with the ICH on the motherboard. Poor quality RAM can also cause the same result, but SLI synchronization problems will result in dropped frames, stuttering (tearing), and actual render errors. The cards could have also been set to SFR rendering, but that's unlikely.


As for desktop software not using SLI: you are right, it is for 3D processing, so doesn't Lightwave avail of it? I know AutoCAD does and I "think" (not 100% sure) 3D Studio does too.
LightWave is a desktop application and thus does not make specific use of SLI. AutoCAD and 3D Studio are also desktop applications and do not make use of SLI. I think you may be mistaking multi-GPU processing through CUDA or some other shader API for SLI processing. These are not the same thing. A "3D application" as far as SLI is concerned is a full-screen OpenGL or DirectX application. All of the previously mentioned applications are windowed and use the platform-level windowing APIs to draw the displays. SLI is controlled by the driver - CUDA (and other APIs) are controlled by the application.
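
To make that distinction concrete: SLI is invisible to the program (the driver decides how the linked cards share a full-screen frame), whereas a multi-GPU CUDA application has to find and select the devices itself. A minimal sketch of that explicit device handling with the CUDA runtime API - purely illustrative, not code from any of the packages mentioned here:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);              // the application has to ask how many GPUs exist
    for (int i = 0; i < count; ++i)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, %zu MB, %d multiprocessors\n",
                    i, prop.name, prop.totalGlobalMem / (1024 * 1024), prop.multiProcessorCount);
        // to actually use a given GPU, the application binds to it explicitly,
        // e.g. cudaSetDevice(i), and then splits its own work across the devices
    }
    return 0;
}

With SLI there is nothing to enumerate - the driver presents the linked cards to a full-screen Direct3D/OpenGL application as one device, which is exactly why a windowed DCC app gets no benefit from it.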

Lightwolf
01-07-2012, 07:14 PM
A "3D application" as far as SLI is concerned is a full screen OpenGL or DirectX application. All of the previously mentioned applications are windowed and use the platform level windowing API's to draw the displays.
Unless you run a Quadro of course...

Cheers,
Mike

3dWannabe
01-07-2012, 08:41 PM
Unless you run a Quadro of course...

Cheers,
Mike

How do the Quadros perform with 3D compared to a 580?

Looking at the specs, they have 6GB VRAM instead of 3GB for the 580, but the specs seem to indicate they would be slower (fewer cores, lower memory bandwidth, etc.)?

http://www.nvidia.com/object/product-quadro-6000-us.html
http://www.evga.com/products/moreInfo.asp?pn=03G-P3-1595-AR&family=GeForce%20500%20Series%20Family&sw=

Hopper
01-08-2012, 01:34 AM
Unless you run a Quadro of course...

Cheers,
Mike
Good point.

RTSchramm
01-08-2012, 03:27 PM
How do the Quadros perform with 3D compared to a 580?

Looking at the specs, they have 6GB VRAM instead of 3GB for the 580, but the specs seem to indicate they would be slower (fewer cores, lower memory bandwidth, etc.)?

http://www.nvidia.com/object/product-quadro-6000-us.html
http://www.evga.com/products/moreInfo.asp?pn=03G-P3-1595-AR&family=GeForce%20500%20Series%20Family&sw=

The GeForce 580 is much quicker for modeling than a comparable Quadro, but for rendering the Quadro has better color reproduction. As stated earlier, most 3D packages still use the CPU to generate the 3D images, so a faster GPU doesn't necessarily translate into faster render times.

Rich

Paul_Boland
01-08-2012, 08:25 PM
That's right, a standard gamer's graphics card is not a render card; go for the Quadro or similar if that's what you're after. But don't 3D packages need to support these cards before they work?

Regarding SLI, gamer cards, and desktop applications: Hopper, I checked the Autodesk website. You are right, AutoCAD does not use SLI. But I found out that MotionBuilder, a desktop application, does use it, so SLI is not restricted to gaming.

JonW
01-08-2012, 09:35 PM
for rendering the Quadro has better color reproduction.

I assume you mean that, by "better colour reproduction", what is displayed on the monitor will look better using a Quadro?

& if this is the case, would a calibrated monitor using a Quadro card look more accurate than a calibrated monitor using a standard card?


Images on my monitors using standard cards (I don't have a Quadro) or built in graphics look reasonably similar to my laser printer & I haven't/can't calibrate my printer with the device I have.

Lightwolf
01-09-2012, 01:38 AM
I assume you mean that, by "better colour reproduction", what is displayed on the monitor will look better using a Quadro?

The only difference that I can think of is that Quadros allow for 30-bit output (10-bit per component) - in conjunction with the right display and the right software.

Cheers,
Mike
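
(For what it's worth, the "right software" part means the application itself has to ask for a 10-bit-per-channel framebuffer; whether the request is honoured depends on the card, driver and display. A minimal sketch of such a request using GLFW - purely illustrative, and not how any of the apps discussed here necessarily do it:)

#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return 1;

    // ask for 10 bits per colour component (30-bit colour) plus a 2-bit alpha channel
    glfwWindowHint(GLFW_RED_BITS,   10);
    glfwWindowHint(GLFW_GREEN_BITS, 10);
    glfwWindowHint(GLFW_BLUE_BITS,  10);
    glfwWindowHint(GLFW_ALPHA_BITS, 2);

    GLFWwindow* win = glfwCreateWindow(1280, 720, "30-bit colour test", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    // these hints are only a request; if the driver/display can't do 10 bpc the
    // context falls back to the closest match, so the application should verify
    // what it actually received before relying on it
    // ... render, glfwSwapBuffers(win), etc. ...

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}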

Rayek
01-09-2012, 09:21 AM
...and do not forget that the nvidia 4xx/5xx consumer cards have crippled opengl, which may result in a dismal opengl viewport performance - but it depends on the application used.

3dWannabe
01-09-2012, 09:26 AM
...and do not forget that the nvidia 4xx/5xx consumer cards have crippled opengl, which may result in a dismal opengl viewport performance - but it depends on the application used.
Do you have more info on this?

Rayek
01-09-2012, 10:24 AM
About a year and two months ago I replaced my 280gtx with a 480gtx, obviously expecting at least twice the opengl performance in most 3d apps.

Unfortunately, it was not to be: in a number of apps the performance was much worse than the 280 (for example: Blender, Maya). To cut a long story short, lots of research on the net (there is still a lot of confusion among users relating to this topic, which is understandable) and a quick discussion with nvidia's support made it clear that certain vital opengl functions are throttled down.

So great performance in games (mostly directx), and abysmal performance in opengl apps. Users have been reporting much worse performance with a 460 than an aging 8800.

When I asked nvidia support, they got back to me saying this is 'expected behaviour' of the new cards. Basically, nvidia got tired of people NOT buying Quadro cards anymore, instead opting for consumer cards to drive their 3d apps.

I switched the 480 for an ati 5870, which works great in opengl (I benchmarked all my apps, and the 5870 wins hands down compared to the 480 - no comparison). It does depend a bit on the app used - for example, Lightwave seems less affected for some reason - though the opengl performance in Lightwave is quite bad as it is, so...

Quote (not mine):

All AMD Radeon cards work fine.
All Nvidia GeForce200 series cards and older work fine.
All Nvidia GeForce400 series cards and newer are affected.
All Nvidia Quadro cards are not affected.

OpenGL calls like "glReadPixels()" on a GTX480 are ~4 times slower than on a GTX285.
Same goes for certain buffer operations. VBO's are affected as well.
---

From the Wikipedia page (since removed, I think by nvidia):

It has been reported by users as well as developers [13] [14] [15] [16] [17] [18] [19] [20] that nVidia 400-series cards have severe performance problems with 3D content-creation applications such as Autodesk Maya and 3ds Max, Blender, Rhinoceros 3D—as well as some OpenGL games—to the extent that video cards two generations older routinely outperform 400-series in such applications and games. The problem, which affects any OpenGL application using textures, involves accessing framebuffer contents or storing data on the GPU. So far, one customer using an OpenGL based application got a response from nVidia support indicating that the behavior is expected in the GeForce 400 line of cards, and no software update is available to improve the performance of the hardware.[21] The problem can be worked around with a hack by using a CUDA memory copy to access the buffer object.
---
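
The "CUDA memory copy" workaround mentioned at the end of that excerpt amounts to registering the OpenGL buffer object with CUDA and pulling the data back through the CUDA runtime instead of glMapBuffer/glGetBufferSubData. A rough sketch, assuming a current GL context and a pixel buffer object already filled by glReadPixels - illustrative only (the function name is made up for the example), not code from the Wikipedia article or this thread:

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// 'pbo' is an existing GL_PIXEL_PACK_BUFFER that glReadPixels has already written into
void readbackViaCuda(unsigned int pbo, void* hostDst, size_t maxBytes)
{
    cudaGraphicsResource* res = 0;

    // expose the GL buffer object to CUDA (in real code: register once, not every call)
    cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsRegisterFlagsReadOnly);

    cudaGraphicsMapResources(1, &res, 0);
    void*  devPtr = 0;
    size_t mappedSize = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &mappedSize, res);

    // device-to-host copy through CUDA, sidestepping the slow GL readback path
    cudaMemcpy(hostDst, devPtr, mappedSize < maxBytes ? mappedSize : maxBytes,
               cudaMemcpyDeviceToHost);

    cudaGraphicsUnmapResources(1, &res, 0);
    cudaGraphicsUnregisterResource(res);
}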

Ha! Well, I can see why they would remove such anecdotal evidence - that one customer was me! Figures.

Honestly, it really depends on whether an opengl app or game makes use of these crippled functions. If they do, it's horrible. In Blender, less than a million verts would slow the viewport down to a crawl. On my 5870 I have a smooth viewport at 8 million verts and more.

I plan to purchase the new 7970 3gb card from amd next month.

3dWannabe
01-09-2012, 11:07 AM
Oh #@(&%%%%!!!!! My 580 arrives tomorrow. I really feel ripped off and will probably never buy nVidia again, as this should have been disclosed.

Rayek
01-09-2012, 11:36 AM
Oh #@(&%%%%!!!!! My 580 arrives tomorrow. I really feel ripped off and will probably never buy nVidia again, as this should have been disclosed.

That's how I felt when I found out about those issues myself - cost me $90 in restocking fees for the 480.

And for those still in doubt about this:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=284014&page=5
http://forums.nvidia.com/index.php?showtopic=166757

Whether you call it 'crippling' or 'disabling professional features', the fact remains that a 280gtx is much faster for most opengl apps than a 580.

Of course, the directx game performance is great! ;-)

3dWannabe
01-09-2012, 11:48 AM
That's how I felt when I found out about those issues myself - cost me $90 in restocking fees for the 480.

And for those still in doubt about this:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=284014&page=5
http://forums.nvidia.com/index.php?showtopic=166757

Whether you call it 'crippling' or 'disabling professional features', the fact remains that a 280gtx is much faster for most opengl apps than a 580.

Of course, the directx game performance is great! ;-)

The 2nd link seems to indicate that the OpenGL issue could be bypassed by using a CUDA call. But, that's from 2/17/'11 - so ... do you know if CUDA is also crippled now?

Rayek
01-09-2012, 12:20 PM
Cuda performance is also crippled on the consumer cards. Though in my experience they still render fast - much faster than a cpu.

The Octane devs had to overcome some real challenges getting the 480 to work in their gpu-based renderer, due to some of those changes.

For full cuda performance you will have to buy a Tesla, or the upcoming Maximus cards (which, according to nvidia propaganda, will do both cuda and opengl well!)

Really, it's just nvidia maximizing their revenue. Good or bad, that's what companies tend to do. And I think I read somewhere that nvidia is going to have a bad time, what with apple, microsoft and sony going with amd graphics for the foreseeable future (new gen consoles, macs). Don't quote me on this, though. Rumors, rumors.

Addendum: over at the Blender Foundation they seem to be acutely aware of the Fermi problems - their latest development notes state:

Brecht found possible solution for Nvidia Fermi slowdown with faces using double sided light. He will check if this is feasible to add.

Rayek
01-09-2012, 12:34 PM
The new 7970 is also getting great overclocking results. Up to 80% faster than a 580 in multi-monitor setups!

http://www.hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/7

3dWannabe
01-09-2012, 05:09 PM
OpenGL calls like "glReadPixels()" on a GTX480 are ~4 times slower than on a GTX285.
Same goes for certain buffer operations. VBO's are affected as well.

How does this actually affect Lightwave?

If we're using opengl to display an estimation of the render for tweaking, I would guess Lightwave would mostly be writing to the screen - much like a video game does to display the game.

Is this only going to affect users who want to read back what opengl has 'rendered' - or is this going to affect normal usage of Lightwave? When would you need to read those buffers, rather than display them?

I thought all actual rendering was done in software, at least for final output, but ... I could be confused?

BTW - when you researched this, did you find any actual throttling of CUDA? Someone on ipisoft (used for mocap) reported that they saw a large speed increase when moving from a GTX-260 to a 460.

** EDIT *** this benchmark seems to indicate CUDA is getting faster in newer GTX cards!

http://kernelnine.com/?p=218

Rayek
01-09-2012, 06:26 PM
How does this actually affect Lightwave?

Not too sure - when I tested with the 480, Lightwave 9 did not seem to be affected that much, but other apps were. However, I have read it does affect the performance of VBOs, which is what Layout uses as well. So there may be a slowdown. The only way to tell is for me to test a heavy object in Layout on my machine (ati 5870), and for someone else to test on a 285gtx, a 480gtx and a 580gtx. Then we will know for sure what kind of impact it might have.


I thought all actual rendering was done in software, at least for final output, but ... I could be confused?

It has nothing to do with rendering to final output, but everything to do with how fast the opengl viewports are hardware accelerated to render the image displayed on your screen while orbiting, panning and zooming - so, realtime update of the viewport.

Mind you, whereas a couple of years ago final rendering and opengl output were sharply defined areas, the boundaries are getting blurred more and more :-)


BTW - when you researched this, did you find any actual throttling of CUDA? Someone on ipisoft (used for mocap) reported that they saw a large speed increase when moving from a GTX-260 to a 460.

Other people have been doing tests, and found that identical Fermi hardware is capped for cuda unless you happen to buy an nvidia Tesla. It is still very fast - but way slower than it could be on the consumer-based Fermi cards.

And to complicate matters even more, the 3d api may have a huge impact on viewport performance as well:


3.6mio poly model:
Blender 2.57b x64 OpenGL: 9 FPS
3dsMax 2011 x64 OpenGL: 0-1 FPS
3dsMax 2011 x64 DX9: ~250 FPS (smooth+highlight, panning+rotating)
Edit mode gets slow as crap in both tools though at this poly amount but it's expected.

(not my words)

And, of course, how optimized the opengl viewport is in applications: http://www.cgchannel.com/2011/10/review-professional-gpus-nvidia-vs-amd-2011/

Huge differences in viewport performance between apps.

So, overall, deciding on a video card depends on your software, whether you require cuda gpu-based rendering, and so on.

BigHache
01-09-2012, 09:46 PM
Rayek thanks for this info, I hadn't come across the Fermi issue until this thread. I'm running a 260 GTX and have been passively researching a newer card myself. Meh.

Rayek
01-09-2012, 10:05 PM
Rayek thanks for this info, I hadn't come across the Fermi issue until this thread. I'm running a 260 GTX and have been passively researching a newer card myself. Meh.

Thanks - the current situation is very confusing and messy, which makes it harder to choose a video card that fits one's specific needs versus affordability.

PS: running the same Antec case!

3dWannabe
01-09-2012, 10:57 PM
Is there some opengl test I could run with Lightwave? I'm getting the GTX-580 Tuesday, and currently have a GTX-285.

I actually want to have a few benchmarks in place for the different software I run before I install it.

I think I can use Turbulence to test CUDA performance.

Hopper
01-09-2012, 11:40 PM
I dunno, I think you guys are just tossing around numbers at this point... I can flip a 1.4 million poly model (completely textured) around in Modeler 10.1 with my GTX560 Ti like butter.

Or to put it this way... (for most of us any way)

Just think what your OpenGL performance was like 5 years ago. The cards now (even the ones you're calling "slow") are infinitely faster than they were.

Now think about your models... Has the complexity of your models really changed that much? Probably not.

Just sayin...

Rayek
01-10-2012, 12:04 AM
You can download:

SPECapc for Lightwave (free): http://www.spec.org/gwpg/downloadindex.html

Cinebench: http://www.maxon.net/downloads/cinebench.html

Heaven benchmark: http://unigine.com/products/heaven/
(set to opengl benchmark)

furmark: http://www.ozone3d.net/benchmarks/fur/

http://www.futuremark.com/

http://www.videocardbenchmark.net/

Rayek
01-10-2012, 12:32 AM
I dunno, I think you guys are just tossing around numbers at this point... I can flip a 1.4 million poly model (completely textured) around in Modeler 10.1 with my GTX560 Ti like butter.

Or to put it this way... (for most of us any way)

Just think what your OpenGL performance was like 5 years ago. The cards now (even the ones you're calling "slow") are infinitely faster than they were.

Now think about your models... Has the complexity of your models really changed that much? Probably not.

Just sayin...

No, that's the point - the 480gtx's opengl performance in some popular 3d applications is on par with cards that came out almost 5 years ago. People were complaining about performance that was equal to a 9800gtx (a three-generation-old video card by now) - and I don't know about you, but my models and scenes have exponentially grown more complex, what with zbrush and cloth, water, and smoke simulations. I grew with the speed of the hardware. Why do you think I increased my ram from 12gb to 24gb? Because some of my latest projects were breaking through the ceiling while rendering. My clients' expectations have grown rapidly in the last couple of years, and I don't see the end of it.

Fact of the matter is that 3d apps keep adding more eye candy (well, Lightwave has been somewhat slow in adopting new tech) and the hardware requirements keep growing to adequately meet the demand in the market.

At least, that is my personal situation.

And discovering that the "latest and greatest" from nvidia actually performed way below the previous generation of video cards was "slightly" bewildering. I was not expecting quadro performance, but at least a little bit of an improvement. Instead, nvidia decided to throw me (and a lot of other 3d creators) into a hole, trying to force us into buying an overpriced quadro card that's only really worth it if you use Maya or Max. Lightwave, Cinema4d, Blender, XSI: these do not benefit that much, if at all, from a workstation card, because dedicated drivers do not exist!

Furthermore, the difference in opengl between the (older) ati 5870 and that 480gtx is like night and day in the apps I use. Animating a 3d character in Blender was nigh on impossible - now it works fine. Sculpting at tens of millions of polys is completely smooth - on the 480gtx I couldn't go beyond one or two million before the lag became unbearable. I'd say that's pretty darn bad.

So, no, I politely disagree with you - expectations HAVE grown across the industry, as well as for freelancers like me. And to keep up I have to upgrade my hardware once in a while, and I feel it's a cheap trick from nvidia to pull the ground out from under me.

Meanwhile, lots of users are complaining, and nvidia acts like it's the most natural thing in the world - which, for a greedy company, it is, and it absolutely makes economic sense. But I will not be buying anything from them anymore, and I will warn other people whenever I can that nvidia is pulling one over on us - which is what I have been doing since last year.

3dWannabe
01-10-2012, 09:03 AM
The only benchmark I saw was on glReadPixels()

and

read: orphan buffer0, bind to PIXEL_PACK_BUFFER, glReadPixels
copy: map buffer1, memcopy image data, unmap buffer1
tex: not used
swap(buffer0, buffer1)

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=291789#Post291789
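
To unpack that pseudocode: the benchmark alternates between two pixel buffer objects, starting an asynchronous glReadPixels into one while mapping and copying out the one filled on the previous frame. A bare-bones sketch of that pattern, assuming a current OpenGL context and a loader such as GLEW - illustrative only (function names invented here), not the benchmark's actual code:

#include <GL/glew.h>
#include <cstring>

static GLuint pbo[2];

void initReadback(int width, int height)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void readbackFrame(int width, int height, int frame, void* dst)
{
    int readIdx = frame % 2;        // buffer glReadPixels writes into this frame
    int copyIdx = (frame + 1) % 2;  // buffer filled last frame, copied to system RAM now

    // "read": orphan buffer0, bind to PIXEL_PACK_BUFFER, glReadPixels (returns immediately)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIdx]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ); // orphan
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    // "copy": map buffer1, memcpy the image data out, unmap buffer1
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[copyIdx]);
    if (void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        std::memcpy(dst, src, width * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

The complaint in the thread is that it's exactly this glReadPixels/glMapBuffer path that got slow on the 4xx/5xx GeForce cards, not the drawing itself - so viewport spinning can still look fine while anything that reads the framebuffer back crawls.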

I have no way of evaluating how that translates into actual usage of opengl in a 3D app such as Lightwave. Luckily, 3D-Coat has a choice of opengl or directx (directx is advised).

Thanks for posting the benchmarks, I'll take a look!!!

But, yes - this is very confusing for everyone reading this, I'm sure.

And that's a shame. It should be a simple decision to purchase a video card. You should not have to become an 'expert' on all these subjects.

If nVidia has slowed down performance in later graphics cards to below that of earlier ones, that's just very poor behavior which will not win them friends and supporters.

The market for those high-end cards (that they keep these features for) is so small that the goodwill lost by doing this must cost them money.

3dWannabe
01-10-2012, 10:02 AM
I found some benchmarks for the SPECapc for Lightwave 9.6.

At the bottom of this page are shown the 3 tests for the GTX-480:

http://jonpeddie.com/reviews/comments/reviewing-the-boxx-4850-extreme-workstation

On this page are shown two of those tests for the Quadro 5000:

http://www.tomshardware.co.uk/quadro-5000-firepro-v8800-workstation-graphics,review-31984-10.html

and the three tests for the Quadro 5000 along with an AMD FirePro:

http://www.tomshardware.com/reviews/firepro-v9800-eyefinity-quadro-5000,2780-4.html

We're comparing apples with oranges, and I'm not sure how the computer specs or resolution figure in, but ... it seems that the GTX-480 is in the ballpark on this test compared to the Quadro; it's not 4x slower or anything.

For Premiere Mercury playback, the GTX580 seems to beat the Quadro (so, I guess CUDA performance isn't throttled?)

http://ppbm5.com/MPE%20Gains.png

Hopper
01-10-2012, 07:40 PM
No, that's the point - the 480gtx's opengl performance in some popular 3d applications is on par with cards that came out almost 5 years ago. People were complaining about performance that was equal to a 9800gtx (a three-generation-old video card by now) - and I don't know about you, but my models and scenes have exponentially grown more complex, ...

<snip>...

So, no, I politely disagree with you - expectations HAVE grown across the industry, as well as for freelancers like me. And to keep up I have to upgrade my hardware once in a while, and I feel it's a cheap trick from nvidia to pull the ground out from under me.

Meanwhile, lots of users are complaining, and nvidia acts like it's the most natural thing in the world ...

Well stated. I can see your point of view. I would agree with all of your statements, but if I did, I fear it would somehow break the Internet.

Rayek
01-10-2012, 09:11 PM
Well stated. I can see your point of view. I would agree with all of your statements, but if I did, I fear it would somehow break the Internet.

:D Indeed it would, wouldn't it?

I do think that for most 3d tinkerers semi-old hardware is still more than enough. And in my case, there is a certain geek factor at work as well, so I need to be careful not to muscle up my rig just for the sake of "coolness".

... (silence)

Why oh why did I just order an additional 24gb? Aarghh... Now I have to make scenes that can render in 48gb. :devil:

JonW
01-10-2012, 09:24 PM
Why oh why did I just order an additional 24gb?

So one can have 2 instances of LW or 2 Screamernet nodes fired up so the CPU/s run at 100%!

No point having a whole box of parts running at 80% because one is a bit tight with ram!

MUCUS
01-30-2012, 07:52 AM
Well, as the main question of this thread WAS 1.5 vs 3GB video cards, may I ask the question again?
Does the 3GB version of the same video card affect display performance a lot, compared to the 1.5GB one?
(Speaking about Layout and Modeler.)

Rayek
01-30-2012, 01:33 PM
Unless you have to deal with giant textures and show those in opengl, I do not think so.

MUCUS
01-30-2012, 03:11 PM
Thank you Rayek, indeed I was thinking that the total size of the images in a scene
(as shown in the Image Editor) was something handled by the graphics card.

So I was thinking that the more VRAM you have, the less lag you get in the display
when dealing with a high number of pictures.