LightWave and XGrid



royL
04-12-2005, 11:13 AM
Is it possible to run LW on an Xgrid on Mac OS X?
Has anyone tried?

Anatoly Zak
04-12-2005, 01:03 PM
As I understand it, Xgrid is a feature allowing distributed rendering, which comes with the Tiger version of Mac OS X. Sounds exciting, but I'm afraid we have to wait for Tiger to try it.

monovich
04-12-2005, 02:49 PM
Which is fine, because Tiger is out in a few weeks. Check out apple.com if you haven't already.

Captain Obvious
04-12-2005, 03:10 PM
Xgrid has been around for a while; it's just better integrated with Tiger, if I've understood things correctly. But I'm pretty sure you have to add support for it to an application, or at least have the application scriptable in such a way that Xgrid can access it. LightWave is not, unfortunately.

royL
04-12-2005, 06:58 PM
:( I think Xgrid could make life easier for those of us who like to make hi-res renders.

So I suggest that those interested in this feature start posting in the feature request forum, so the code team can add it to the wish list.

eblu
04-13-2005, 07:59 AM
:( I think Xgrid could make life easier for those of us who like to make hi-res renders.

So I suggest that those interested in this feature start posting in the feature request forum, so the code team can add it to the wish list.

I'd like to save LightWolf the trouble ;)

Xgrid IS NOT MUCH HELP for computational problems that rely on vast resources, or on a sequence of events.

Another way of putting that: Xgrid is bad at 3D rendering.

The first main issue is that Xgrid has to distribute the problem to each machine involved in the computation. That means either copying the content directory to each machine, or (as ScreamerNet does) relying on the user to provide access to those files; Xgrid does the former. Just imagine the fun your network would have as one machine copies your content directory out to multiple CPUs all at once. One image sequence could kill the whole thing.
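To put rough numbers on that, here's a back-of-the-envelope sketch; every figure in it is a made-up assumption, not a measurement:

# Rough cost of pushing a content directory to N render nodes
# over one shared 100 Mbit link (all numbers are assumptions).
content_gb = 2.0    # assumed size of the content directory
nodes = 8           # assumed number of render nodes
link_mbit = 100.0   # shared Fast Ethernet, megabits per second

total_bits = content_gb * 8e9 * nodes  # every node gets a full copy
seconds = total_bits / (link_mbit * 1e6)
print("~%.0f minutes just moving files" % (seconds / 60))
# With these numbers: roughly 21 minutes before a single pixel
# renders, with the link saturated the whole time.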

The second issue is that 3D rendering is a linear thing (at least LightWave rendering is). You can't do the steps out of order, so you can't just break up the problem, give a bit of it to each machine, and expect to be able to combine the results; you'd get Swiss cheese. With 3D rendering, you do a computation, take the result, do another, and so on. And if you handed the bits to each node in the grid and told them to wait for their turn, then what's the point of using a network in the first place? It'd be the same as letting one machine do the whole problem, except that the network would be sucking down MORE resources, adding to the overall slowness.

Xgrid is great for computational problems that don't use many resources and that can be done in parallel. 3D rendering is neither of those.

If you are going to post a request in the feature request forum, please, for the love of bob, do not ask for Xgrid compatibility. Ask for the replacement of ScreamerNet with something less embarrassing.

Captain Obvious
04-14-2005, 03:54 AM
The second issue is that 3D rendering is a linear thing (at least LightWave rendering is). You can't do the steps out of order, so you can't just break up the problem, give a bit of it to each machine, and expect to be able to combine the results; you'd get Swiss cheese. With 3D rendering, you do a computation, take the result, do another, and so on. And if you handed the bits to each node in the grid and told them to wait for their turn, then what's the point of using a network in the first place? It'd be the same as letting one machine do the whole problem, except that the network would be sucking down MORE resources, adding to the overall slowness.
Uhm, with rendering you can generally do things in any order, especially for animation renders. It doesn't matter which order you render the frames in. In the olden days, a common way to get really high-res renders was to divide the scene, set up the camera differently in each part, and render (either on different computers or in sequence). LightWave can even divide a scene on the fly (set the threads to '8', for example). There is no technical reason for not rendering with, say, Xgrid. Of course, it'd really suck for, say, FPrime, since it'd have to upload the files to each machine, but if you have a one-week render to do, it'd be nice.
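To illustrate the any-order point: frames are independent jobs, so a dispatcher can hand each node its own slice. A minimal Python sketch; the hosts, paths, and lwsn arguments are placeholder assumptions (loosely modeled on ScreamerNet's batch mode), not a tested setup:

# Frame-independent dispatch: each node renders its own slice.
import subprocess

NODES = ["node1", "node2", "node3", "node4"]  # hypothetical hosts
FIRST, LAST = 1, 240                          # frames to render

chunk = (LAST - FIRST + 1) // len(NODES)
for i, host in enumerate(NODES):
    start = FIRST + i * chunk
    end = LAST if i == len(NODES) - 1 else start + chunk - 1
    # Any frame can render on any node, in any order; that is the
    # whole point. The ssh call and paths are illustrative only.
    subprocess.Popen([
        "ssh", host,
        "lwsn", "-3", "-c/jobs/config", "-d/jobs/content",
        "/jobs/content/scene.lws", str(start), str(end), "1",
    ])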

Lightwolf
04-14-2005, 05:54 AM
I'd like to save LightWolf the trouble ;)
:D LOL eblu, you just made my day...

Cheers,
Mike

eblu
04-14-2005, 08:15 AM
Uhm, with rendering you can generally do things in any order, especially for animation renders. It doesn't matter which order you render the frames in. In the olden days, a common way to get really high-res renders was to divide the scene, set up the camera differently in each part, and render (either on different computers or in sequence). LightWave can even divide a scene on the fly (set the threads to '8', for example). There is no technical reason for not rendering with, say, Xgrid. Of course, it'd really suck for, say, FPrime, since it'd have to upload the files to each machine, but if you have a one-week render to do, it'd be nice.

Cap,
I understand your meaning, but it has no bearing on the discussion. You have clusters and networks confused. 3D rendering IS a procedural process (there ARE steps in the procedure; you can't skip them). There is a whole mess of math happening to get those colored pixels lined up next to each other. You can possibly do camera masking techniques, which allow distribution of individual "frames" to multiple CPUs, but that's a hack (it's the same as segmenting), you can easily do it on your own, and it isn't what Xgrid does.

Xgrid makes itself valuable by making a pile of CPUs work like ONE machine. It's more of a metaphor than a reality, but it serves our purposes. As a computer, a cluster has a few advantages and a few liabilities. The advantages are the megahertz and the interface: all of a sudden you have a supercomputer where you used to have a pile of computers. The biggest liability is that the communication of that "supercomputer" is now all done across a severely slow pipe, your network (and even if you run Fibre, it isn't anywhere near the speed you have between the CPUs that reside in one box). So smart clusters break problems up into smaller, easy-to-digest pieces, give a piece to each node, and then combine the results. This saves the pain of every machine screaming at every other across the network all at once and killing your network, the Ethernet hub, and the local power grid.

For instance, say we have a rendering engine "foo" that follows these steps:
1 figure out what is visible, hide the rest
2 draw the color layer
3 draw the shadow layer on the color layer
4 draw the specular layer on the shadow layer on the color layer

You make an animation and send it to foo, running on a cluster.
Foo starts rendering its first frame.

The cluster has 4 machines, so it breaks the render of the frame up thusly:

machine A gets step 1
machine B gets step 2
machine C gets step 3
machine D gets step 4

All 4 machines do their work at once, and the results are combined at the end... it's a black frame.

Or:
each machine does its part only after the machine before it in the chain hands it the appropriate data. The final result is exactly what we want, but the render took longer than a single machine would have taken, due to the overhead of moving information across the network as opposed to over the CPU bus.

Now, this is all gross oversimplification; take it as a fable that illustrates a point (a toy version is sketched below). The point is, clustering is a powerful tool, just not the best tool for 3D rendering at the moment.
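Here's that fable as runnable toy code; it is pure illustration, not how Xgrid or any real renderer actually works:

# Each step needs the output of the previous one, so handing the
# steps to different machines buys nothing: they just wait in line.
def visibility(scene): return scene + ["visible set"]
def color(data):       return data + ["color layer"]
def shadow(data):      return data + ["shadow layer"]
def specular(data):    return data + ["specular layer"]

STEPS = [visibility, color, shadow, specular]

frame = []
for machine, step in zip("ABCD", STEPS):
    # "Machine" B cannot start until A is done, C until B, and so
    # on. Putting these on four real nodes adds network latency
    # between every step without removing any of the waiting.
    frame = step(frame)
    print("machine", machine, "done:", frame[-1])
print(frame)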

A cluster doesn't work like a network, Cap; the interface is simpler, as if it were one CPU, so you send it one frame at a time. It's supposed to do that, but it isn't very helpful for 3D rendering. As an aside, there's an Xgrid plugin for After Effects, and the reviews basically support my argument: it's actually slower than using a single powerful desktop machine.

http://mograph.net/board/index.php?showtopic=3027&mode=linear

toby
04-14-2005, 05:01 PM
The first main issue is that Xgrid has to distribute the problem to each machine involved in the computation. That means either copying the content directory to each machine, or (as ScreamerNet does) relying on the user to provide access to those files; Xgrid does the former. Just imagine the fun your network would have as one machine copies your content directory out to multiple CPUs all at once. One image sequence could kill the whole thing.

I don't see how this is different from regular farms - everything has to be loaded into the RAM of each machine either way, right?

btw - for high-res rendering, isn't there any bucket-rendering software? Our TDs wrote one where I work, to render some 7K images on the farm.

Lightwolf
04-15-2005, 03:46 AM
I don't see how this is different from regular farms - everything has to be loaded into the RAM of each machine either way, right?

btw - for high-res rendering, isn't there any bucket-rendering software? Our TDs wrote one where I work, to render some 7K images on the farm.
Don't these two sentences contradict each other? ;)
One of the main advantages (besides distribution) of bucket rendering is that you can use certain techniques to load only the geometry and images needed for a given bucket into RAM. Which is why, imho, 64-bit isn't as needed in rendering as it is in compositing. There is plenty of smart caching that can be done in rendering.
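For the idea of it, here's a minimal bucket-splitting sketch; the frame and tile sizes are made-up assumptions, and this is not any particular renderer's API:

# Split a large frame into tiles ("buckets") so each worker only
# needs the scene data its tile actually touches.
WIDTH, HEIGHT = 7168, 3072  # assumed 7K frame
BUCKET = 256                # assumed tile size in pixels

def buckets(width, height, size):
    # Yield (x, y, w, h) rectangles covering the frame.
    for y in range(0, height, size):
        for x in range(0, width, size):
            yield (x, y, min(size, width - x), min(size, height - y))

tiles = list(buckets(WIDTH, HEIGHT, BUCKET))
print(len(tiles), "independent buckets")
# Each bucket renders on its own, loading only the geometry and
# textures whose screen bounds overlap the tile.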

Cheers,
Mike

Johnny
04-15-2005, 07:40 AM
Xgrid is bad at 3D rendering.
Xgrid is great for computational problems that don't use many resources and that can be done in parallel. 3D rendering is neither of those.

Nice explanation of that, eblu; thanks for posting it.

J

eblu
04-15-2005, 09:33 AM
I don't see how this is different from regular farms - everything has to be loaded into the RAM of each machine either way, right?

The main difference:
if we think of computers as individual people,
a render farm is a room packed with people, each with their own abacus.

A cluster is one person, a savant, like "Rain Man".

The cluster can solve math problems in an almost magical way, but it's not terribly flexible. "Rain Man" can't break up the render by frames; he takes each individual frame and breaks it up across the CPUs. Not only do the CPUs need to have all of the data available (which is the same as in a farm, and therefore cancels out of the equation), they also need to be much more chatty on the network, because your network has become the backbone of a "gestalt CPU", and "Rain Man" has to talk to himself while he works.

RonGC
11-25-2006, 12:12 PM
These arguments are true for the moment, with the current rendering process design. However, chip manufacturers are rapidly approaching the top speed they can build into their chips, given power and heat dissipation, so they are currently building quad-core chips to add more calculation power.

It may well take a fresh approach to render engine design, but they should be able to write a renderer specifically to use the full potential of cluster computing.

The holy grail of 3D seems to be the real-time/photoreal render. To achieve this, I feel they will need computer clusters for the raw processing power.

Ron

mlinde
11-28-2006, 10:15 PM
I'm going to drop in from the deep dark ether on this one.

Xgrid would be a good control system for LWSN if the two were compatible. Rather than having to set up and distribute all the data for LWSN, imagine being able to simply have it installed and Xgrid configured on each node: Xgrid gets the command strings from a master controller, and each node launches, configures, and runs LWSN based on that Xgrid command.
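Conceptually, the controller side could be as small as this; a purely hypothetical sketch, since the hosts, paths, and lwsn flags here are assumptions rather than a documented interface:

# Master controller ships a command string to each node; LWSN is
# already installed there and handles its own job/ack files.
import subprocess

NODES = ["render01.local", "render02.local"]  # hypothetical nodes
CONFIG = "/shared/lwsn/config"                # assumed shared paths
CONTENT = "/shared/lwsn/content"

for n, host in enumerate(NODES, start=1):
    cmd = "lwsn -2 -c%s -d%s job%d ack%d" % (CONFIG, CONTENT, n, n)
    subprocess.Popen(["ssh", host, cmd])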

That's what I'd like to see with Xgrid and LWSN - but this is a dead horse I've been beating for years.

eblu
11-29-2006, 07:31 AM
mlinde, I respectfully but thoroughly and completely disagree.

Xgrid is not a front end; it is a clustering solution.
It is not in any way compatible with, or even applicable to, ScreamerNet.

Investing time and money in a clustered version of ScreamerNet would give you a negative ROI. 3D rendering in general is NOT a good candidate (in the least) for clustered solutions, and ScreamerNet does not lend itself easily to being clustered.

Many people seem to think that clustering is a replacement for networking, and they associate ScreamerNet's biggest faults with the networking side of things. Clustering is NOT a network replacement; clustering runs on top of a network. And guess what: it's only as fast as that network topology. It's not magic, and it doesn't magically fix ANY of the problems we see with ScreamerNet and/or network rendering in general. It IS the wrong solution for ScreamerNet.

It's great for amino acid computations, however.

mlinde
11-29-2006, 11:10 AM
mlinde, I respectfully but thoroughly and completely disagree.

Xgrid is not a front end; it is a clustering solution.
It is not in any way compatible with, or even applicable to, ScreamerNet.

Investing time and money in a clustered version of ScreamerNet would give you a negative ROI. 3D rendering in general is NOT a good candidate (in the least) for clustered solutions, and ScreamerNet does not lend itself easily to being clustered.

Many people seem to think that clustering is a replacement for networking, and they associate ScreamerNet's biggest faults with the networking side of things. Clustering is NOT a network replacement; clustering runs on top of a network. And guess what: it's only as fast as that network topology. It's not magic, and it doesn't magically fix ANY of the problems we see with ScreamerNet and/or network rendering in general. It IS the wrong solution for ScreamerNet.

It's great for amino acid computations, however.

Ack! I'm on the wrong darn application again. QMaster ... that's what I'm thinking of. Replace Xgrid with QMaster in my last post and that's what I'm talking about.

eblu
11-29-2006, 03:29 PM
lol!
I disagree with you there too, but mostly out of personal preference.

Even if you get QMaster running with ScreamerNet,
you STILL have to work out ScreamerNet's content directory issues yourself!

There are better solutions today.

RonGC
11-29-2006, 06:03 PM
It seems to me that if we never ask for change, it never happens. Perhaps right now, as LW stands, Xgrid may not be the hot item, but if the app's rendering portion were rewritten to send Xgrid data that would work with clustered computing, would that not be a good thing?

Ron

John the Geek
11-29-2006, 06:48 PM
Well, in a close-knit cluster with Fibre Channel you'd have it made; otherwise, let's not confuse cluster rendering with distributed rendering.

I think distributed will always be faster, while clustered could potentially be easier. We just need it to be easier to set up and use.

For example: have the node manager generate the executable and config files in a folder for each of your nodes, so you just copy a folder over to the node and run it. The node manager could also enable network sharing for you, etc. Something like the sketch below:
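A minimal sketch of that node-manager idea; the file names and config keys are assumptions for illustration, not LWSN's actual layout:

# Generate one ready-to-copy folder per render node.
import os
import shutil

NODES = ["node-a", "node-b", "node-c"]  # hypothetical node names
LWSN_BIN = "/shared/bin/lwsn"           # assumed path to the binary

for i, node in enumerate(NODES, start=1):
    folder = os.path.join("nodes", node)
    os.makedirs(folder, exist_ok=True)
    shutil.copy(LWSN_BIN, folder)       # drop in the executable
    with open(os.path.join(folder, "lwsn.cfg"), "w") as cfg:
        # The keys below are placeholders, not real LWSN config.
        cfg.write("ContentDirectory /shared/content\n")
        cfg.write("JobFile job%d\nAckFile ack%d\n" % (i, i))
# Copy nodes/<name> to each machine and run it; no per-node hand
# setup required.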

There's growth potential here.