View Full Version : Hardware accelerated rendering

05-05-2003, 02:26 PM
I would like to see LightWave implement hardware accelerated rendering.

05-05-2003, 02:48 PM
accelerated by CPU and RAM?

05-05-2003, 02:52 PM
I would like to see GPUs taking the burden for things like HyperVoxels etc. They tend to be good at that sort of thing.

05-05-2003, 04:30 PM
Agreed...everything graphical should be moving towards hardware, especially as GPUs get better and better.

05-05-2003, 04:30 PM
GPUs can't accelerate voxels; they only accelerate polygons. GPUs can render shadows and full-scene antialiasing that look as good as raytraced ones.

05-05-2003, 04:32 PM
Even so...GPUs would make rendering faster, would they not?

05-05-2003, 04:49 PM
Implementing hardware rendering would put LightWave's rendering quality at the mercy of the GPU. Any competitor would be able to exploit the GPU's inherent weaknesses and produce a better renderer.

I wouldn't mind an option to use the GPU for previews, or for those who develop games. But I would definitely not want it to be my only option.

05-06-2003, 06:01 AM
Here is more info:


05-06-2003, 11:24 AM
I agree that it would have to be an option; actually, I don't see how else they would go about it. It would be a major overhaul to run renders solely off the graphics board. Still, support for boards should be added in order to keep up with Maya and other 3D renderers.

05-06-2003, 12:28 PM
I didn't mean that the GPU would be the only method; rather, use it for what it is good at, and leave the rest to the CPU etc. Although good GPUs might be capable of real time for the game people.

05-06-2003, 01:30 PM
If there were hardware to accelerate rendering, all render plugins would have to be rewritten. But I think it's a good idea.

And NewTek (Andrew Cross) knows about this.


05-06-2003, 03:40 PM
All render plugins would have to be rewritten.

That's not necessary, since hardware will most likely be unable to accelerate any part of the render process that involves raytracing.

05-07-2003, 11:06 AM
I believe Maya now supports hardware rendering... Up to 20x faster for broadcast graphics type stuff...

Elmar Moelzer
05-07-2003, 12:21 PM
As an option for previews and the like, yes; otherwise I don't think it is a good idea. Today's GPUs are still lacking in many areas.
I am not sure about hypervoxels not working, as acceleration for volumetrics was introduced with the GeForce 3, if I remember correctly. However, all this will only be useful for previews, or for cases where quality matters less than speed.

05-07-2003, 12:49 PM
Quality was mentioned. I don't see how the quality would be worse if a GPU were in use. It is built to do what the software does, just faster.

05-07-2003, 01:26 PM
Hi there,

I don't agree with you, Vertigo. We are still a million miles from comparable quality. There is always some limitation in hardware rendering (limited instruction sets, no real reflections, etc.). Also, I think you are forgetting one big issue: the translation from software shading algorithms of theoretically unlimited complexity down to the roughly 1024 instructions a pixel shader can have nowadays (if I recall correctly). For quick logo work, however, it may always be good enough.


05-07-2003, 07:05 PM
I am not sure about hypervoxels not working, as acceleration for volumetrics was introduced with the GeForce 3

When did the GeForce 3 accelerate volumetrics?

Elmar Moelzer
05-07-2003, 07:39 PM
I am not totally sure anymore, but I read about it somewhere.
I can still remember the pictures of some layered fog and fire, etc.
Actually, HW acceleration for volumetrics is not a new thing. It has been around for many years, though only on special-purpose hardware.

05-07-2003, 08:25 PM
but nobody would make a voxel accelerator.

05-08-2003, 01:56 AM
I think people have a wrong vision of what hardware acceleration can do.

Actually it's pretty simple: if you specialize the chips in doing some tasks *very fast*, like pushing polygons or mapping square textures, you'll indeed be fast as hell at those tasks. But you'll only be able to do what has been wired into the hardware.
Or, you try to make a general-purpose processor.
It'll be slower on the tasks it hasn't been specialized for, but it'll be able to do anything.
Specialized hardware has a strong tendency toward obsolescence. You're limited in what you can do with it, and it's fast only until newer hardware comes out that can do other things.
The same is true of our computers, but even a slow one can output the same images as the latest, if you wait long enough.

People see gorgeous games moving tons of polygons in real time and keep asking why they can't do that within LW. Truth is, these games cheat. Everything is precalculated as much as possible. There is a reason 3D software is used in designing these games. When looking at Splinter Cell or Unreal 2, remember there are hundreds of hours of work behind what you see, and that work was done with LW or Max or Maya.

Feature-wise it also doesn't work: sure, our video cards can now do rather decent smooth shading. They can cast specular highlights, and they can use good-looking bump maps or reflection maps.

In the meantime we're rendering with radiosity, raytraced blurred reflections, refraction, volumetric particles that are really volumes and not just sprites, complex shaders, and so on.
Would you really exchange LW's render engine for something that could only do what our cards can?

I say use these cards for what they're good at: previewing, animating. Don't let them near the render engine in any way; it's not their place.

I've seen the same kind of debate in numerical simulations. In the end, the software wins.


05-09-2003, 09:22 AM
One question: Has anyone actually BOTHERED to go read those links?!

Before you weigh in, read them.

Here is the meat-and-potatoes part for those too lazy to go look for themselves; at least read THIS.

"Bringing the horizon closer
But Duff wasn't counting right when he implicitly equated general-purpose UltraSPARC processors with dedicated graphics ASICs. Later that same year at the Siggraph CG conference, Mark S. Peercy and his colleagues from SGI presented a paper entitled Interactive Multi-Pass Programmable Shading. Peercy's paper provided the missing link between consumer graphics chips and cinematic CG by proving that any OpenGL-capable chip could produce high-end rendering techniques through multi-pass rendering. Essentially, Peercy showed how even highly complex effects could be broken down into manageable, bite-sized steps—steps even a GeForce2 could process.

Peercy's paper demonstrated precisely how to translate complex code—in this case, those of the RenderMan Shading Language used at places like Pixar—into OpenGL rendering passes using a compiler. The compiler would accept RenderMan shading programs and output OpenGL instructions. In doing so, Peercy was treating the OpenGL graphics accelerator as a general SIMD computer. (SIMD, or Single Instruction Multiple Data, is the computational technique employed by CPU instruction set extensions like MMX and SSE. SIMD instructions perform the same operation on entire matrices of data simultaneously.) If you know a little bit about CPUs and a little bit about graphics, the notion makes sense as Peercy explains it:

One key observation allows shaders to be translated into multi-pass OpenGL: a single rendering pass is also a general SIMD instruction—the same operations are performed simultaneously for all pixels in an object. At the simplest level, the framebuffer is an accumulator, texture or pixel buffers serve as per-pixel memory storage, blending provides basic arithmetic operations, lookup tables support function evaluation, the alpha test provides a variety of conditionals, and the stencil buffer allows pixel-level conditional execution. A shader computation is broken into pieces, each of which can be evaluated by an OpenGL rendering pass. In this way, we build up a final result for all pixels in an object.
Peercy goes on to explain how the OpenGL "computer" handles data types, arithmetic operations, variables (which are stored in textures) and flow control. It's heady stuff, and it works. Peercy's demonstration compiler was able to produce output nearly identical to RenderMan's built-in renderer.
Peercy compiled RenderMan Shading Language to OpenGL, but the same principle could be applied to other high-level shading languages and other graphics APIs, like Direct3D. The implications were simple but powerful: this method would enable consumer graphics chips to accelerate the rendering of just about "anything." Even if the graphics chips couldn't handle all the necessary passes in real time, they could generate the same output far faster than even the speediest general-purpose microprocessor."
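Peercy's "OpenGL as a SIMD computer" idea is easier to grasp with a toy sketch. The NumPy fragment below is purely illustrative (no real OpenGL calls; every array and "pass" here is invented for the analogy): each pass is one whole-framebuffer operation, mirroring blending for arithmetic, a lookup table for function evaluation, and the alpha test for conditionals.

```python
# Toy analogy of Peercy's multi-pass shading: a shader evaluated as a
# sequence of whole-framebuffer "passes", each acting like one SIMD
# instruction over all pixels at once. Illustrative only -- not OpenGL.
import numpy as np

H, W = 4, 4
base = np.linspace(0.0, 1.0, H * W).reshape(H, W)   # "texture" input
light = 0.5 * np.ones((H, W))                        # second input

fb = np.ones((H, W))        # framebuffer as accumulator
fb = fb * base              # pass 1: modulate blend (arithmetic via blending)
fb = fb + light             # pass 2: additive blend

# Pass 3: function evaluation via a 256-entry lookup table (gamma 2.2);
# indexing also clamps, like the framebuffer would.
lut = np.linspace(0.0, 1.0, 256) ** 2.2
fb = lut[np.clip((fb * 255).astype(int), 0, 255)]

# Pass 4: "alpha test" conditional -- keep only pixels above a threshold.
fb = np.where(fb > 0.1, fb, 0.0)

# The same shader evaluated directly, per pixel, for comparison:
direct = np.clip(base + 0.5, 0.0, 1.0) ** 2.2
direct = np.where(direct > 0.1, direct, 0.0)

print(np.allclose(fb, direct, atol=0.02))  # True, up to LUT quantization
```

The point of the exercise is the one in the quote: no single pass is "the shader," but the composition of simple fixed-function passes reproduces the per-pixel program, at the cost of quantization error from the lookup table.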

I trust the opinions of these guys (at SGI) more than most of yours. :rolleyes:


05-09-2003, 10:34 AM
I trust the opinions of these guys (at SGI) more than most of yours.

How wude.
If you don't want our opinion don't bother posting here, send a mail to NT instead.

So, some undoubtedly talented guys managed to stuff PRMan scenes through OpenGL. Fine. Great.
They don't do transparency; they don't do reflections other than planar, first-order ones; they don't filter textures; they don't support motion blur, high-quality AA, or depth of field. You can also bet they don't do caustics, radiosity, refraction, or volumetrics. Oh, and they don't do displacement either.

I'll keep LW's awfully unspecialized rendering engine, thanks.

It may be an interesting method, but there is quite a lot of distance between a paper and usability.

Phil, who never worked at SGI and -therefore- is a sucker.

05-09-2003, 10:45 AM
Sorry Phil, not trying to be rude. I wasn't really requesting an opinion, Newtek watches these fora for Feature Requests, it was aimed at them.

05-09-2003, 05:07 PM
And I'm sure they would want to hear counter-arguments.

Elmar Moelzer
05-09-2003, 11:39 PM
Hey Harhar.
Actually, I have worked on a few workstations from SGI and Sun that had HW acceleration for voxels.
Most medical software that is voxel-based uses special hardware designed to accelerate volumetrics. There is even an OpenGL extension for that.