View Full Version : GFX card OpenGL question

09-17-2004, 01:42 PM
Hi Guys !

I wonder, has anyone of you had a chance to test the new GFX cards from 3DLabs?

They showed (at this Siggraph) the new Wildcat REALIZM series (100, 200 and 800), and by the specs they look impressive: a mighty 512MB RAM + OpenGL 1.5 - 2.0 support + DirectX 9.0 (not too important for me).

I'd like to see if this baby is capable of moving some polygons in Modeler as it should, since right now I'm not happy with Modeler's OpenGL speed :(. It's very slow to move anything above 15k SubPatches - just **** too slow to rotate/move or turn SubPatches on/off. I do work in layers, but often I need to see the complete model, and then 15-25k SubPatches come into play, so it's too slow.

I tested with both of my GFX cards and neither handles it very well, so it must be an LW issue then :(? I have a Wildcat VP760 and a GeForce FX 5950 Ultra, and the speed difference is very, very small in textured, shaded and shaded-wireframe views (too slow). Only the wireframe view is as smooth as I would expect from a usable application :).

I also use Deep Exploration software (for transferring files), which natively supports LW's SubPatches (and shows them in OpenGL or DirectX), and the SAME files/models rotate/move at least 3x faster than in LW.

Is LW's OpenGL so BADLY implemented :(??

Is anyone using the new 3DLabs cards who could help me with the buy-or-not decision :)?


09-18-2004, 02:11 AM
I think you are confusing or misinterpreting things here. A faster card would not accelerate Modeler, but a faster CPU would. The thing is that all transformation and tessellation is evaluated on a per-vertex basis, factoring in vertex maps and permanently checking for user interaction - a process that relies entirely on brute CPU power. It's pretty much like that in every other app. Deep Exploration does not do that: it only needs to create the tessellation once and then only does transforms. The same goes for Layout. That's why both use other drawing routines which are faster than the ones in Modeler. There may be some potential for optimizing things on both ends, however.
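As a toy illustration of that point (a hedged Python sketch with made-up stand-ins for real subdivision and transform code, not anything from LW's actual source): re-evaluating the subdivision on every interaction costs far more than tessellating once and only transforming the cached result per frame.

```python
import time

def tessellate(cage):
    # Stand-in for subdivision: expand each control point into several
    # render vertices (real Catmull-Clark evaluation is far more expensive).
    return [p * 0.25 + i for p in cage for i in range(4)]

def transform(verts, angle):
    # Stand-in for a cheap per-frame rotation applied to every vertex.
    return [v + angle for v in verts]

cage = list(range(20_000))  # a hypothetical 20k-point SubPatch cage

# "Modeler-style": re-evaluate the subdivision every frame,
# because any control vertex might have been edited.
start = time.perf_counter()
for frame in range(50):
    verts = transform(tessellate(cage), frame * 0.1)
modeler_style = time.perf_counter() - start

# "Viewer-style" (Deep Exploration / Layout): tessellate once,
# then only transform the cached result each frame.
start = time.perf_counter()
cached = tessellate(cage)
for frame in range(50):
    verts = transform(cached, frame * 0.1)
viewer_style = time.perf_counter() - start

print(f"re-tessellate each frame: {modeler_style:.3f}s")
print(f"tessellate once + cache:  {viewer_style:.3f}s")
```

The viewer-style loop does 49 fewer tessellations for the same 50 frames, which is the whole difference being described above.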


09-18-2004, 05:25 AM
Hi Millenyum!

I appreciate your help, but I doubt the CPU is the only thing that matters for Modeler, especially compared to other software.

I TESTED 4 GFX cards in the same machine (P4 3.0GHz with 2GB RAM) and the differences are visible, in some cases a BIG difference, so it can't be the CPU only.

Try to put Matrox G400 DualHead card in machine and them click TAB on 20k SubPatches on default Level 6. You'll wait 25-30 seconds just to transform in SubDs cage and rotating and zooming of that will last forever no matter what CPU is :(. I also tested Ati Radeon 9500pro and situation is that card is atleast 3x times faster at SAME opertion in modeler but too slow for serious work. Then i used current 3DLabs and Nvidia which enhanced movement but that's still to sloow (while in Wireframe mode is extremly fast thani nothers) comparing to other SWs wiht SAME or slower machines. I've seen guy modeling Car in XSI with more than 100k SubDs (with SLOWER CPU and weaked GFX card) and i can only dream about that in LW since it will be so slow that it's useles and frustrating :(.

If the GFX card only helps Layout, then what would all the modelers in this world DO with GFX cards? What's the use of the 3DLabs cards then, if they'll be as slow as a Matrox G400 which costs 10x less money :)?


09-18-2004, 06:42 AM
Lewis: Same experience over here, other software smashes LW in viewport performance.
Whatever part of it is CPU- or GPU-dependent, you can surely do it faster, as a lot of software developers have proved.

About the SubD evaluation: actually, Modeler becomes significantly SLOWER when I freeze a SubD model, even though the number of polygons that get displayed is still the same...
I have a very hard time working on large non-SubD models, like buildings with lots of detail. I can spin such a model in 3ds max at >10fps, but Modeler will cr*p out at below 2fps.
So in my opinion it is primarily Modeler's fault; I can't accept buying a $1000 video card just to get the performance I get in other software with a $100 video card. I'd rather buy Modo then.

09-18-2004, 08:24 AM
Yeah, I FEEL your "pain", Lynx3d :)

Let's hope that the NT guys are listening and that we will soon see MAJOR OpenGL improvements in 8.x.


P.S. So no one has tested the new 3DLabs REALIZM GFX cards yet?

09-18-2004, 12:48 PM
Hehe, a G400... That card has no 3D acceleration at all; it's all emulated via the driver ;o) (and thus CPU-dependent). As for your observations, I can't entirely confirm nor deny them. There are situations where other programs feel faster, but every once in a while you stumble over seemingly simple things that slow down the app terribly. It's funny things like in Maya, where fully shaded mode is sometimes faster than wireframe - go figure.


09-18-2004, 01:05 PM
Hehehe, YES, the G400 isn't the best for 3D (yet it still rules in 2D image quality :)), but simply look at the difference between ATI's R9500 and Nvidia's 5950 ULTRA: the price difference is 3.5x (Nvidia being more expensive), and yet it's only a few frames faster than the ATI :(.

So it must be possible to make it faster if programmed with full OpenGL support, IMO :)

09-18-2004, 01:41 PM
Well, gaming performance does not necessarily reflect performance in high-poly scenarios, because there fillrate and memory bandwidth are less important than geometry throughput.
So a Radeon 9500, which is basically a Radeon 9700 with half the pipelines (= half the raw fillrate), doesn't necessarily have a big disadvantage against a Radeon 9700 when displaying 1 million polygons, although it wouldn't reach the same framerates when displaying only several thousand polys.
(Hm, I only remember an old 3ds max benchmark from X-bit Labs demonstrating the small difference between a GeForce2 GTS and MX, but that's a bit outdated.)
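A rough back-of-envelope model of that argument (purely illustrative Python with invented numbers, not real Radeon specs): a frame is limited by whichever stage is slower, triangle setup or pixel fill, so halving the pixel pipelines only hurts when the workload is fill-bound.

```python
# Hypothetical figures for the sketch: a "9700-class" card with 8 pixel
# pipelines vs a "9500-class" card with 4, sharing the same geometry engine.
CLOCK_HZ = 275e6          # assumed core clock
TRIS_PER_SEC = 100e6      # assumed shared triangle-setup rate

def frame_time(n_tris, pixels_per_tri, pipelines):
    geometry = n_tris / TRIS_PER_SEC                          # setup-bound part
    fill = n_tris * pixels_per_tri / (pipelines * CLOCK_HZ)   # fill-bound part
    return max(geometry, fill)  # the slower stage limits the frame

# Game-like load: few triangles, each covering many pixels -> fill-bound.
# CAD-like load: a million tiny triangles -> geometry-bound.
loads = {"game": (50_000, 400), "CAD": (1_000_000, 4)}

for name, (tris, px) in loads.items():
    slow = frame_time(tris, px, pipelines=4)   # 9500-class
    fast = frame_time(tris, px, pipelines=8)   # 9700-class
    print(f"{name}: half the pipelines costs {slow / fast:.1f}x")
```

With these numbers the game-like load gets twice as slow on the 4-pipeline card, while the CAD-like load is unchanged, because both cards hit the same triangle-setup ceiling first.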

Btw, the Matrox G400 does have 3D acceleration just like any GeForce/Radeon, but Matrox isn't exactly famous for outstanding OpenGL drivers, to put it politely. It took Matrox around one year to deliver OpenGL drivers for their G200 cards (you can imagine how much I love Matrox since then...); until then you had to cope with a D3D wrapper (but for whatever reason the real OpenGL driver wasn't much faster for me...).