
Confusing Situation - Boot Camp to Windows, 3 Times the Rendering Speed!



Glendalough
06-13-2006, 03:27 PM
Ho!

I just got a dual-core Mac Mini with 1 GB RAM and thought I'd try Boot Camp and compare rendering speeds in LightWave 8 on both OSes.

I'm getting 3 times the speed on Windoze with scenes containing HyperVoxels, and almost 4 times the speed on plain scenes without.

Rebooting between the two OSes takes 40 seconds.

In defense of the Intel MiniMac (new name), it's only 15% to 20% slower than a first-edition single-processor G5...

Any comments, aside from "shut up"?

Scott C. Zupek
06-13-2006, 08:48 PM
3x the speed faster, or slower? If it's faster, it's because Apple won't update its OpenGL code. They do it on purpose, and that's why you'll get better render times on x86 (Windows).

Chilton
06-13-2006, 09:13 PM
Hi,

The speed difference is simply that LW is not a Universal Binary yet. So you're running LW under Rosetta emulation.
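
For the curious, here's a minimal sketch (plain Python, and the path at the bottom is purely hypothetical) that peeks at an application's Mach-O header to see which CPU architectures it contains - a PowerPC-only binary on an Intel Mac is exactly what Rosetta has to translate:

import struct

FAT_MAGIC = 0xcafebabe                        # multi-architecture ("Universal") binary
MH_MAGIC, MH_CIGAM = 0xfeedface, 0xcefaedfe   # single-architecture 32-bit Mach-O
CPU_NAMES = {7: "i386", 18: "ppc"}            # the two architectures that matter here

def architectures(path):
    """Return the list of CPU architectures baked into a Mach-O binary."""
    with open(path, "rb") as f:
        magic, = struct.unpack(">I", f.read(4))
        if magic == FAT_MAGIC:                            # Universal binary: read the fat_arch table
            nfat, = struct.unpack(">I", f.read(4))
            archs = []
            for _ in range(nfat):
                cputype = struct.unpack(">5I", f.read(20))[0]
                archs.append(CPU_NAMES.get(cputype, hex(cputype)))
            return archs
        if magic == MH_MAGIC:                             # big-endian thin binary (PowerPC)
            return [CPU_NAMES.get(struct.unpack(">I", f.read(4))[0], "unknown")]
        if magic == MH_CIGAM:                             # byte-swapped thin binary (Intel)
            return [CPU_NAMES.get(struct.unpack("<I", f.read(4))[0], "unknown")]
        return ["not a (32-bit) Mach-O file"]

# Hypothetical install path -- point it at your own LightWave binary:
print(architectures("/Applications/LightWave 3D 8/Programs/LightWave"))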

Scott C. Zupek
06-13-2006, 11:49 PM
Hi,

The speed difference is simply that LW is not a Universal Binary yet. So you're running LW under Rosetta emulation.

That too ;)

Captain Obvious
06-14-2006, 03:15 AM
3x the speed faster, or slower? If it's faster, it's because Apple won't update its OpenGL code. They do it on purpose, and that's why you'll get better render times on x86 (Windows).
OpenGL speed has no impact whatsoever on final frame rendering. And the OpenGL on Macs is plenty fast.

Glendalough
06-14-2006, 05:57 PM
(Thanks for comments)

We all know the reasons. It's just that the feeling is a bit disorienting, weird...

And the implications, i.e....

The greatest thing about the new Intel Macintoshes is that you can run LightWave (rendering) in Windows really fast...

Scott C. Zupek
06-14-2006, 06:00 PM
OpenGL speed has no impact whatsoever on final frame rendering. And the OpenGL on Macs is plenty fast.

So you're saying all that shading, reflection, refraction, etc. is done WITHOUT OpenGL? I kind of find that difficult to believe... why wouldn't they use it?

joao
06-15-2006, 04:05 AM
Rendering has nothing to do with the graphics card or OpenGL in LightWave (and most other 3D programs, for that matter).
There is no hardware rendering mode in LightWave where the graphics chip is used.
The speed increase is, as was said before, because LightWave is not a Universal Binary yet and is therefore running under emulation (Rosetta) in OS X. Basically, it is not a native Intel Mac program yet. Under Windows it is running natively.
As for OpenGL viewport updating, you will notice dramatic speed increases running LightWave (especially Modeler) in Windows. This is due to bad programming. NewTek has always said it was due to Apple's OpenGL implementation, until every other 3D program on the Mac proved this not to be the case. It seems to be solved in Layout in LightWave 9, but not yet in Modeler...
LW9 will also not run natively on Intel Macs when it first comes out.
It's all very sad, really...

Lightwolf
06-15-2006, 04:16 AM
So you're saying all that shading, reflection, refraction, etc. is done WITHOUT OpenGL? I kind of find that difficult to believe... why wouldn't they use it?
Because it would be a b*tch to code. Even nVidia, with their high-priced hardware-assisted (note, assisted only!) renderer, do most of the computation on the CPU.
Other issues are driver compatibility (the result could change from driver revision to driver revision, as well as from graphics board chipset to chipset), OpenGL not being able to handle textures larger than 4096x4096 pixels (sometimes less), raytracing and the like having to be "cheated" badly, etc.
All of the current fully hardware-based renderers are nothing but nice proof-of-concept demos - nothing you'd like to use in production.
An exception would be hardware rendering for special FX (i.e. particles, as Maya has done for ages) - or a low-quality, high-speed renderer (missing all of the exciting functionality, like raytracing, GI, procedurals, etc...).
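
Just to make the texture-size point concrete, here's a small sketch (plain Python; the 4096 limit is an assumption - the real ceiling varies per card and driver) of how a big texture map would have to be chopped into driver-sized tiles before a graphics card could even accept it:

MAX_TEX = 4096                      # assumed driver limit; query the real value from the driver

def tile_grid(width, height, max_size=MAX_TEX):
    """Return (x, y, w, h) tiles covering a width x height image."""
    tiles = []
    for y in range(0, height, max_size):
        for x in range(0, width, max_size):
            tiles.append((x, y, min(max_size, width - x), min(max_size, height - y)))
    return tiles

# A 10000x10000 texture map ends up as a 3x3 grid of tiles:
print(len(tile_grid(10000, 10000)))   # -> 9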

Cheers,
Mike

Captain Obvious
06-15-2006, 05:43 AM
or a low-quality, high-speed renderer (missing all of the exciting functionality, like raytracing, GI, procedurals, etc...)
Kind of like the OpenGL viewports! ;)

LSlugger
06-15-2006, 07:42 AM
The speed difference is simply that LW is not a Universal Binary yet. So you're running LW under Rosetta emulation.

I haven't spent much time with an Intel Mac yet, but I'm pretty sure that Rosetta only uses one core, so there may be a double penalty compared to Windows.

BeeVee
06-15-2006, 08:14 AM
It uses both cores here when rendering with more than one thread, but even so it isn't fast.
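
For what it's worth, the usual way a renderer keeps more than one core busy is to split the frame into bands and hand each band to a worker - here's a minimal sketch in Python (the per-pixel "work" is a made-up stand-in, not anything from LightWave):

from multiprocessing import Pool

WIDTH, HEIGHT, WORKERS = 640, 480, 2     # two workers for a dual-core Mini

def render_rows(row_range):
    """Stand-in for real shading work: fill one horizontal band of the frame."""
    start, stop = row_range
    return [[(x * 31 + y * 17) % 256 for x in range(WIDTH)] for y in range(start, stop)]

if __name__ == "__main__":
    band = HEIGHT // WORKERS
    ranges = [(i * band, (i + 1) * band) for i in range(WORKERS)]
    with Pool(WORKERS) as pool:
        bands = pool.map(render_rows, ranges)     # each process renders its own band
    frame = [row for piece in bands for row in piece]
    print("rendered %d rows in %d bands" % (len(frame), len(bands)))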

B

Scott C. Zupek
06-15-2006, 03:26 PM
Rendering has nothing to do with the graphics card or OpenGL in LightWave (and most other 3D programs, for that matter).
There is no hardware rendering mode in LightWave where the graphics chip is used.
The speed increase is, as was said before, because LightWave is not a Universal Binary yet and is therefore running under emulation (Rosetta) in OS X. Basically, it is not a native Intel Mac program yet. Under Windows it is running natively.
As for OpenGL viewport updating, you will notice dramatic speed increases running LightWave (especially Modeler) in Windows. This is due to bad programming. NewTek has always said it was due to Apple's OpenGL implementation, until every other 3D program on the Mac proved this not to be the case. It seems to be solved in Layout in LightWave 9, but not yet in Modeler...
LW9 will also not run natively on Intel Macs when it first comes out.
It's all very sad, really...

So do any of the higher-end cards (Quadro, FireGL, etc.) actually improve rendering speed at all, or is that all still CPU (processor, floating point) based?

Captain Obvious
06-15-2006, 04:13 PM
So do any of the higher-end cards (Quadro, FireGL, etc.) actually improve rendering speed at all, or is that all still CPU (processor, floating point) based?
It's all done on the CPU. You can stick all the video cards you like in your machine; it will still have zero impact on your render times. Unless you're using Nvidia's Gelato, of course, but that thing seems to be slower than LightWave's renderer even with a beefy video card...

Scott C. Zupek
06-15-2006, 10:56 PM
Well ****... is that true for all renderers? What would be the point of those high-end cards then?

Jarno
06-15-2006, 11:24 PM
The point of those high-end cards is to render lots of polys with pretty shading really quickly, in real time, through OpenGL or the like - as is done in Modeler and Layout, or the latest and greatest FPS game.

Graphics cards are quite restricted in what they can render: basically just opaque or semi-transparent polygons. They put speed over quality.

Renderers such as the one in LightWave put quality first. They also render things other than simple polygons, such as volumetrics, have arbitrary procedural texturing, can do arbitrary refractions and reflections, and a bunch more stuff that cannot be translated into something a graphics card can render.
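
To make that concrete, here's a toy, CPU-only ray/sphere render in plain Python - nothing to do with how LightWave itself is written, just an illustration that this kind of "final frame" work is ordinary floating-point math on the processor, with the graphics card never involved:

import math

WIDTH, HEIGHT = 320, 240
CENTER, RADIUS = (0.0, 0.0, -3.0), 1.0     # one sphere in front of the camera
LIGHT = (0.577, 0.577, 0.577)              # normalised direction towards the light

def hit_sphere(dx, dy, dz):
    """Distance along a ray from the origin to the sphere, or None if it misses."""
    lx, ly, lz = -CENTER[0], -CENTER[1], -CENTER[2]
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - RADIUS * RADIUS
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

rows = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        # Shoot a ray from the camera through this pixel.
        dx = (x / WIDTH - 0.5) * (WIDTH / HEIGHT)
        dy = 0.5 - y / HEIGHT
        dz = -1.0
        n = math.sqrt(dx * dx + dy * dy + dz * dz)
        dx, dy, dz = dx / n, dy / n, dz / n
        t = hit_sphere(dx, dy, dz)
        if t is None:
            row.append(0)                                  # background stays black
        else:
            # Simple Lambert shading from the surface normal and the light direction.
            nx, ny, nz = dx * t - CENTER[0], dy * t - CENTER[1], dz * t - CENTER[2]
            nl = math.sqrt(nx * nx + ny * ny + nz * nz)
            shade = max(0.0, (nx * LIGHT[0] + ny * LIGHT[1] + nz * LIGHT[2]) / nl)
            row.append(int(255 * shade))
    rows.append(row)

with open("sphere.pgm", "w") as f:                         # plain-text greyscale image
    f.write("P2\n%d %d\n255\n" % (WIDTH, HEIGHT))
    for row in rows:
        f.write(" ".join(str(v) for v in row) + "\n")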

---JvdL---

Scott C. Zupek
06-15-2006, 11:37 PM
Okay, so would a higher-end card help FPrime 2.0, or would something like a 7800 (Nvidia) be fine? Thanks.

Captain Obvious
06-16-2006, 05:11 AM
Okay, so would a higher-end card help FPrime 2.0, or would something like a 7800 (Nvidia) be fine? Thanks.
The 7800 IS a higher-end card! But no, it doesn't help Fprime. Fprime is a software renderer, just like the ordinary Lightwave renderer. However, the ordinary viewports in Lightwave benefit quite a lot from having a powerful video card. (Unless you're on a Mac, because Lightwave's OpenGL on the Mac seems to do most of the drawing on the CPU...)

Scott C. Zupek
06-16-2006, 06:12 AM
Ha, and this whole time I have been waiting for the day I would need a Quadro, but from the sound of it, it's not needed. :) And the 7800 is only high-end on the gaming side; on the workstation side, Quadros and FireGLs are in a league of their own. Now I'm just curious as to why they cost so much. I figured they helped rendering times drastically, but now I'm hearing otherwise. (I understand they help viewport rendering in Windows.)

Captain Obvious
06-16-2006, 06:26 AM
The basic reason for the high price of the Quadros and FireGLs is that people are willing to pay that much for them. They're really not that good. The Quadro 4500 is the same basic card as the 7800. There are some differences, but the performance is pretty much the same. Hardly worth the $1000 price difference, at any rate.

joao
06-16-2006, 06:27 AM
:)
The workstation graphics cards you mention are there for one purpose, in my understanding (though I have never used them): so that you can work and model in 3D programs effectively. This is not relevant until you are working on scenes or models with over 500,000 polygons (much less in the case of LightWave) and need to work on all of them first-hand - this applies mostly to architecture, mechanical design, environments... (areas where you must use a lot of instances or need access to the interior details of the model).
They work with specific drivers for specific programs (AutoCAD, Maya...) and are basically Windows-only. If you do a lot of very heavy modelling and find that you are not able to move around your model and work with it efficiently, then they are a good option. But then... you should be considering a Windows workstation and a different program first, since neither the Mac nor LightWave is suited for those applications.
If you just want faster renders... then your only two options are to try different render engines (FPrime, Kray, Maxwell for LightWave - or try a different program: modo, 3ds Max...) or to get a faster processor (or a bunch of them in a network, or rent a network).
That's all there is to it, I think... :-)

Captain Obvious
06-16-2006, 06:30 AM
I often display 500,000+ polygons on my computer with a Radeon 9800 Pro... Silky smooth, for the most part.

joao
06-16-2006, 09:53 AM
I was referring to LightWave Modeler in this case... Layout seems capable of handling a larger set of polygons - but again, in the case of LightWave, this is CPU-dependent rather than GPU-dependent. I assume the display is better in the land of G5s and Core Duos... :-)

joao
06-16-2006, 09:55 AM
And yes... the difference in performance between workstation and consumer graphics cards seems to be getting less relevant as time goes by (but this is just from what I hear).

Captain Obvious
06-16-2006, 10:51 AM
And yes... the difference in performance between workstation and consumer graphics cards seems to be getting less relevant as time goes by (but this is just from what I hear).
It's true. I even heard a rumour that Nvidia would merge the two lines at some point.