
Graphics Card for LW 11.5



tcoursey
03-25-2013, 03:13 PM
I know this has been discussed many times here. What is a good step up from a Quadro FX 3700? Should I go with a high-end gaming card or an entry-level workstation card? I want NVIDIA, of course. Probably want to stay somewhere in the $500-or-less range.

Thoughts?

tcoursey
03-25-2013, 03:49 PM
I've looked up a bunch and it seems a good bang for the buck is the GTX 670 or 680. Anyone have these and use them with LW? I don't care about games... just poly performance in Modeler and shaded views in Layout. Sure wish VPR would use those CUDA cores... come on, NewTek!!!

phillydee
03-25-2013, 04:39 PM
I've looked up a bunch and it seems a good bang for the buck is the GTX 670 or 680.

I'm currently using a GTX 680. Fast? Yes... You'll be able to increase your bounding box threshold in Layout. I haven't really done many tests in terms of performance increases over my previous card, but it works well, and it's good enough for me. Surely others will chime in with some food for thought. If you do want to save yourself some $$, the 670 is only marginally "slower" for a lot less money - a good bang for less buck.

ActionBob
03-25-2013, 08:12 PM
I thought LightWave didn't really rely on the graphics card (whether pro or gamer), but was CPU-intensive. As far as I know, it has never really supported pro cards, and as such you would just buy a good gamer card and call it good. I will tell you something, though: you would want a 670 or higher if you think you are going to use Octane Render for LightWave - a GPU-based, unbiased render engine that has its limitations, but for what it does do, it is AMAZINGLY fast.

see info here: http://forums.newtek.com/showthread.php?131666-Octane-render-for-Lightwave

I run a 680 and it is really fun to put that GPU power to task...

-Adrian

OFF
03-25-2013, 09:17 PM
I've never worked with a professional card, and I'm curious whether there are performance gains when one is used in Modeler. I am currently working on a GTX 560 Ti 1GB, and its performance is very poor on heavily textured objects of 200,000 polys and above. In Layout, because there is support for multiprocessing, my card is enough to work on scenes of 30 million polygons and more. Does anyone have comparative experience with gaming and professional video cards in LightWave (especially in Modeler, working on a heavy object)?

Riff_Masteroff
03-26-2013, 03:36 AM
tcoursey: you didn't specify your system's intended specs.

Any GTX card will have 'spanning' issues on Win7 with 2 monitors. Not so for older GTX cards on WinXP. The reason: NVIDIA refuses to provide its nView applet with newer GTX drivers. The OS is also not as capable in this regard now as it was then.

5 1/2 years ago NVIDIA did include this applet with their drivers and . . . all was OK for me with my use of LW back then. This month I am assembling a new 'puter system, and LW (Modeler) is only usable on one monitor (either monitor, but not both). It's not a matter of speed. In practical terms, my ability to select polys when Modeler is stretched over two monitors is borked. Maximized on any one monitor, it works OK.

In searching the web, I found that others have had similar problems. Two possible workarounds are offered:
First: Hack NVIDIA's current Quadro drivers to extract the nView app. Detailed directions are provided.
Second: Use a third-party utility such as UltraMon (realtimesoft.com), $40 USD.

My system (first powered on 03/09/13):
i7 3970X CPU
Asus 'Sabertooth' mobo
64GB RAM
NVIDIA GTX 660 Ti
Water cooling w/ 240mm radiator
LSI hardware RAID5 data storage
2 Dell 6-bit IPS 24" monitors
Win7 x64 (Pro)

Mastoy
03-26-2013, 04:07 AM
I found this a year ago when looking for a new graphics card:

http://www.cgchannel.com/2011/10/review-professional-gpus-nvidia-vs-amd-2011/

I finally bought a FirePro V5900 and, honestly, I don't see any difference from a gamer card whatsoever.

prometheus
03-26-2013, 05:06 AM
Stay with NVIDIA. I've got a GeForce 480 and, on my laptop, a GeForce 560M; there's a big difference, I think, with the 480 being better, but it's hard to tell overall due to the other system specs.

You might want to check the Octane for LightWave thread about what's coming up and what is needed; that might tell you a little about the future of GPU rendering. Also check the requirements for TurbulenceFD, perhaps, since that uses the graphics card and CUDA.

Get as much on-board RAM on the graphics card as you can.
Investigate the NVIDIA forums and technology and what's coming up. I think if you spend some time doing this and don't rush in, you might come out very satisfied in the end.

Archviz, huge objects, rendering power, simulation power... you have to think about what you want here.

Michael

ianr
03-26-2013, 05:37 AM
Look on Jascha's TurbulenceFD plugin site; he gives GPU solve times, which will show you CUDA performance.
Then you can make a future leap of faith / judgement.
I feel that the LW3DG should embrace GPU stuff soon.

tcoursey
03-26-2013, 03:41 PM
I understand that LightWave doesn't use much multithreading for Modeler commands. I understand Layout doesn't use CUDA or the GPU for much other than OpenGL. But is there a standard or best practice for how to set up OpenGL in LW? Sort By Poly, Streaming, etc., etc.?

Thoughts?

tcoursey
03-26-2013, 03:44 PM
tcoursey: you didn't specify your system's intended specs.

Any GTX card will have 'spanning' issues on Win7 with 2 monitors. Not so for older GTX cards on WinXP. The reason: NVIDIA refuses to provide its nView applet with newer GTX drivers. The OS is also not as capable in this regard now as it was then.

5 1/2 years ago NVIDIA did include this applet with their drivers and . . . all was OK for me with my use of LW back then. This month I am assembling a new 'puter system, and LW (Modeler) is only usable on one monitor (either monitor, but not both). It's not a matter of speed. In practical terms, my ability to select polys when Modeler is stretched over two monitors is borked. Maximized on any one monitor, it works OK.

In searching the web, I found that others have had similar problems. Two possible workarounds are offered:
First: Hack NVIDIA's current Quadro drivers to extract the nView app. Detailed directions are provided.
Second: Use a third-party utility such as UltraMon (realtimesoft.com), $40 USD.

My system (first powered on 03/09/13):
i7 3970X CPU
Asus 'Sabertooth' mobo
64GB RAM
NVIDIA GTX 660 Ti
Water cooling w/ 240mm radiator
LSI hardware RAID5 data storage
2 Dell 6-bit IPS 24" monitors
Win7 x64 (Pro)

I will be on Windows 7 with two monitors, a 24" and a 19". I don't span my apps, but rather use each monitor with its own app. Will there still be issues with drivers in that regard? Maybe I'm not following. I know the higher-end cards now support 4 monitors... surely spanning and managing that many monitors is still handled in their drivers...

Riff_Masteroff
03-26-2013, 06:01 PM
. . . . 24 and 19. I don't span my apps, but rather use each monitor with its own app. Will there still be issues with drivers in that regard? . . .

tcoursey: I think you will be just fine using the existing drivers provided for any GTX card. As for myself, I want to work with LW spanned across both monitors. Not to be had out of the box, but I think I can find a workaround that suits my needs. Note that I am quite surprised that others in this forum have not spoken of this mess earlier (or maybe I didn't read all the postings).

Riff_Masteroff
03-26-2013, 06:31 PM
Off topic, but another thing to deal with is redirecting the LW configs folder. For the -c switch, the LW readme states that you must turn off UAC to accomplish this on initial LW setup. I have 'discovered' that UAC needs to be turned OFF always for 'my' use of LW. I am constantly changing the config files year in and year out, and I want my changes to stick. And I want to easily find the folders where they reside!

On the C:\ drive where my LW is installed (both in Program Files and Program Files (x86)) I use a folder called 'Configs'. The subfolders within are labeled (by me):
Layout_Key
Layout_Menu
Modeler_Key
Modeler_Menu
Program

This setup / method / style only works sans UAC. Advice: you better have a great anti-virus program installed. Or else.
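
For reference, here is a minimal sketch of what the shortcut targets can look like when pointing both programs at a custom config folder (the install path, executable names and config folder below are just examples - adjust them for your own setup):

"C:\Program Files\NewTek\LightWave11.5\bin\layout.exe" -cC:\Configs
"C:\Program Files\NewTek\LightWave11.5\bin\modeler.exe" -cC:\Configs

As I understand it, the -c switch takes the config path immediately after it, and each program then reads and writes its config files in the folder you point it at.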

Also note that if you are using Win7 x64, it's best to use LW x64 even though LW x32 also works. The reason is that 64-bit Windows runs 32-bit programs through a compatibility layer called WoW64 (Windows 32-on-Windows 64), which adds overhead that 64-bit programs don't incur. I advise myself to test both LW x32 and LW x64 using Benchmarkmarbles.scn to compare/check render time differences.

Titus
03-26-2013, 06:38 PM
I've a Quadro 2000. LW works as fast (or slow) as with the older card.

OFF
03-26-2013, 09:02 PM
Titus, thanks for your reply - so there is no difference between gaming (NVIDIA GTX) and professional (Quadro-type) cards for LightWave users?

Titus
03-27-2013, 11:25 AM
I don't think so.

tcoursey
03-27-2013, 01:39 PM
I can confirm, at least in Modeler, that a GTX 550 Ti (I think it was) in our video editing machine did not perform any better than my Quadro FX 3700 card. The 3D benchmarks on these two cards are quite different.

It all comes down to Modeler (not Layout) relying more on the CPU for its functions, for some reason, it seems. Many other threads talk about this, and it's a shame!

Because I open a "cough cough" competitor's product and it whirls around my 500,000-poly object like it was butter. You want to move some points around? Sure, go ahead! A very little lag, but nothing like my trusted LW!

I am truly hoping a very-near-future update addresses some of these issues that have been around for years now! We have had some updates; remember Legacy OGL? Don't even think about clicking that. It will make you think the current Modeler is a Ferrari! lol

prometheus
03-27-2013, 01:47 PM
Titus, thanks for your reply - so there is no difference between gaming (NVIDIA GTX) and professional (Quadro-type) cards for LightWave users?

I think the Quadro FX drivers were optimized to handle object data much better than gaming cards' drivers, even for LightWave Modeler; I'm not entirely sure about that, though.
Michael

tcoursey
03-27-2013, 01:53 PM
Michael, I agree and have read the same. But real-world experience, at least with these two cards, shows no significant sign of improvement either way. These two cards (which benchmark considerably differently) perform the same in LightWave Modeler.

I have not tested in Layout. I was hoping to get an improvement in Modeler first and foremost. Large poly-count models kill me. I hate managing layers... but I will indeed continue to do so. At least I have 100 to work with! :)

Riza
07-20-2013, 12:05 AM
This is a several-months-old discussion, but I just found this: http://www.tomshardware.com/reviews/best-workstation-graphics-card,3493-12.html
The Radeon gaming cards do better there than the NVIDIA gaming cards (though their professional cards yield different results).

Anyone using a Radeon 7970 GE who can confirm?

COBRASoft
07-20-2013, 02:38 AM
As far as I know, pro cards are much better at handling overlapping windows (up to 8 instead of 1), so Modeler would benefit from this only on a small scale. A pro card isn't worth the money, unless you use other packages, like Adobe After Effects, that benefit more from pro cards.

Waves of light
07-20-2013, 03:28 AM
I'm on the lookout for a second-hand GTX 580, as there's an issue with the 600 range because NVIDIA crippled the CUDA performance a bit in the Kepler series. I don't think this has much bearing on LW, but some of the older 500-series cards outdo the newer 600 models, especially in 3D-Coat.

http://3d-coat.com/forum/index.php?showtopic=14766&hl=

Also, http://gpuboss.com/ is a good way to compare two graphics cards.

prometheus
07-20-2013, 04:48 AM
I wouldn't have looked at Radeon cards before and will not now either; NVIDIA also has a sort of official cooperation with NewTek, I think. ATI Radeon cards have notoriously had issues with 3D performance and drivers, and I don't know how they can overcome that.

Stay with NVIDIA cards, but if you want to play fast games rather than do 3D graphics, then Radeon might be the way.

Michael