
View Full Version: Screamer Net2 Problems



marcopolio
02-26-2003, 01:20 PM
Ok, I am new to LightWave. In the past I have used Cinema 4D and 3D Studio Max, and the ease of use of their net render modules is astronomically superior to LightWave's. What is going on over there in the department responsible for creating the user interface? Do they drink on the job? I've gone through everything I have found online and am not able to get it to work.

I have the LWSN command line file saved in the Programs folder of my first machine, and it works to render in the background of one machine. Do I need to adjust the command line files of every machine I am using for the net render? What should the command line files of my render nodes look like? Can anyone help me out?
-A

paintboy
02-26-2003, 01:25 PM
Go here: www.catalystproductions.cc/screamernet
Excellent instructions for the initial setup procedure.
Everything will be fine once you get it set up the first time.
good luck

marcopolio
02-26-2003, 01:30 PM
Those are the instructions that I have followed. Now my problem comes in when copying the LWSN file and the LWSN command line file. Do I then move them to the node computers, or does only the host have to have the copied files? What should the command line file look like on the node computers? Do they all need to point to the same preferences folder, or the local preferences folder? Does the job directory have to be the same? Should they all point to a directory on the network? If anyone can take the time to help me out it would be greatly appreciated; it is critical that we get network rendering to work for LightWave.
-A

bakasaru
02-26-2003, 01:34 PM
Some of these may help:

http://members.shaw.ca/lightwavetutorials/rendering.htm

mlinde
02-26-2003, 02:35 PM
First, go to
http://homepage.mac.com/nonplanar/
and click on the OS X Scream! link. Review the tutorial there. Basically, the LWSN duplication requires two things:
1) Multiple LWSN apps named slightly differently (on the host computer), with corresponding cmdLine files that have individual job & ack numbers but otherwise identical content and preference directories.

2) Make sure the various nodes can access the applications directory on the host computer. Each one starts the copy of LWSN appropriate to it.
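Under that naming scheme, the pair of cmdLine files might look something like this (a sketch only; "YourHD" and the paths are placeholders, not real directories):

```
LWSN1 cmdLine:
-2 -c"YourHD:Users:you:Library:Preferences" -d"YourHD:Content" job1 ack1

LWSN2 cmdLine:
-2 -c"YourHD:Users:you:Library:Preferences" -d"YourHD:Content" job2 ack2
```

The only thing that changes between them is the job/ack number.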

paintboy
02-26-2003, 02:55 PM
Those are the instructions that I have followed.. now my problem comes in when copying the LWSN file and the LWSN command line file. Do I then move them to the node computers, or does only the host have to have the copied files?
...each node folder resides within the host LW/App folder.
Then you move to the node 2 machine, mount the host drive, and navigate to
the node 2 folder/LWSN app and create an alias, which is moved to the node 2 computer's desktop. When you click this it launches the appropriate LWSN node app. Then on to the node 3 computer and repeat.....etc.

What should the command line file look like on the node computers? Do they all need to point to the same preferences folder, or the local preferences folder?

If you followed JB's instructions (copying), they should all be pointing to the host prefs folder and the directory you chose.

The only difference in the command line for each node is the job# (same as the node folder #) and the ack# (same as the node folder #).

...slow down, take a breath and follow the instructions, it works

Julian Johnson
02-26-2003, 02:57 PM
I agree with Michael :-).....but I think that it could get confusing if you mix 'n' match Jonathan's setup technique with nonplanar's. Pick one and hold fast - both are very explicit about what you should do but differ in subtle ways: nonplanar's uses a naming convention for LWSN and the associated cmdlines which bypasses the need to set up distinct folders for each node and also, I believe, explicitly states the location of the jobx and ackx files, whereas JB's requires that you create fully populated duplicate program directories for each node and relies on the default location for the command directory (i.e. the Content Directory). Don't be tempted to pick bits from one and bits from the other. It's tricky enough as it is :-)

If you're already down the line with Jonathan Baker's instructions then you will already have created several subfolders within your host machine's Program folder called LWSN Node 2, LWSN Node 3, etc., and duplicated the content of the host program folder into each of those subfolders. In each of those folders you should have a copy of the LWSN cmdline text file containing the string which tells LWSN where your preference files and content directory are (just as Jonathan says) and which job file to act on (and where the command directory is). The jobx and ackx strings should have been incremented for each node. As you've duplicated the whole program folder, you don't need to worry about LWSNs with slightly different names.

You don't physically move anything to the node computers. You mount the volume that the host's Program folder (with the sub nodes) and Content Directory sit on onto the node's Desktop. Once you have your host machine's drive shared on the node, just navigate from the node back to the subfolder on the host (e.g. LWSN Node 2) you created for that node and either create an alias to the LWSN that sits in there, drag it to the desktop and launch it (as JB says), or just launch LWSN directly. The cmdline file will tell the LWSN that you just launched on the node (but which physically sits on the host) to look in the shared volume for both the preference file and content directory - there's no need to copy anything to the local preferences folder.

Effectively you're setting it up so that all the information that any of the nodes needs sits on one computer. Provided you share the relevant Host volumes on the node machines you shouldn't have to worry at all about copying stuff over from machine to machine.
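If it helps to picture it, the host's Program folder under JB's duplicated-folder scheme would look roughly like this (folder names illustrative, based on the steps above):

```
HostHD:LightWave:Programs:
  LWSN                  (host's own copy; its cmdLine uses job1/ack1)
  LWSN cmdLine
  LWSN Node 2:
    LWSN                (launched via an alias from the node 2 machine; job2/ack2)
    LWSN cmdLine
  LWSN Node 3:
    LWSN                (launched from the node 3 machine; job3/ack3)
    LWSN cmdLine
```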

Finally, if you're on anything less than 10.2.3, I'd try to upgrade, as there's an OS issue in 10.2.1/2 which wreaks unwarranted and hellish havoc with LWSN (although Jonathan also has a fix for that!).

Julian

marcopolio
02-26-2003, 03:12 PM
Thanks for all of your insight, Julian! It is much appreciated. I think we have it working... we set it up so everything is looking at one machine. Do all of the textures also need to be in the directory with the scene to be rendered? It just renders nothing at this point, but it sees all of our nodes (dual processor machines).
-A

Julian Johnson
02-26-2003, 03:48 PM
The textures need to be visible to the nodes, i.e. they need to be on a volume that the node has mounted, but, in theory, they could be anywhere on that mounted volume, either inside or outside the content directory, *provided* all the references to the images are accurate in the original scene and objects.

In the example of an Object file loaded into LWSN running on a node, it will read the texture locations directly from the object file. If the object was saved with textures that resided in the same content directory as the scene you're trying to render then the references in the object file will be 'relative' - get my textures from Images/ which sits in the Content Directory (and the Content Directory is as specced in the cmdline file). In that instance everything should be fine. If objects were saved with textures that sat outside of the current (at the time of save) Content Directory then they'd have been saved with absolute paths (MacintoshHD:RandomStuff:Misc Images:texture.tga) and it would be essential that you had that root volume (MacintoshHD) mounted.....if it isn't already.

If you use a per-project Content Directory system then you need to be sure that, if the objects were saved with relative paths, they actually do exist in the appropriate folder in the Images directory in the Content Directory you've set in the LWSN cmdline.
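To illustrate the two cases (the relative path here is invented for the example; the absolute one is from above):

```
Relative reference (object saved inside the Content Directory):
  Images/brick.tga
  -> resolved against the Content Directory given by -d in the cmdline

Absolute reference (object saved with a texture outside it):
  MacintoshHD:RandomStuff:Misc Images:texture.tga
  -> only works if the MacintoshHD volume is mounted on the node
```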

I always use Layout:Generics:Content Manager with my scene open if I need to check for any images that might be sitting outside of my current content directory. You don't have to apply it; just cancel it when you're done.

When you say it renders nothing, do you mean it outputs black frames, or that it isn't putting anything anywhere? If the latter, you need to make sure that the output location for the RGB files in the scene file is also visible to the node...

Julian

monovich
02-26-2003, 04:12 PM
I'm working with marcopolio on this problem, so I'll answer his question because he is reading some tutorials right now.

We've got all six nodes showing up correctly on three dual processor G4s. I used your tip above to make sure the files are showing up with relative paths. I mounted the drive with the files on it on the two other node computers (besides the host) and set the content directory, then quit Lightwave on both those computers.

Then I added the scene to the Network Rendering queue on the master machine and hit render. All nodes begin rendering immediately, but the renders take 0.0 seconds per frame. Then when I look at the output directory, it only has files for frames rendered by the nodes on the master computer, not the remote nodes. Also, the frames rendered are all black. If I hit F9 on the master computer, the image renders fine.

So my problems seem to be that the network nodes aren't seeing the content correctly and aren't saving their frames, and the frames that ARE being saved are empty black anyway.

:confused:

monovich
02-26-2003, 04:30 PM
Ok, got all the computers to render by correctly setting up the preferences and content directories for each computer.

So now the only problem that remains is that it's just rendering black frames.

-S

Julian Johnson
02-26-2003, 04:38 PM
OK. The questions I'd be asking are: 1) If you have LWSN open on the nodes, how long is a single frame taking, and are you getting any 'can't load object' messages in the LWSN window? 2) Are you absolutely sure that the Content Directory you've specified in the LWSN cmdline text file is right? Why not post the cmdline text here along with what your actual Content Directory path is and the location of your preference files. 3) What image saver have you set your scene to?

If you get black frames it's usually because the scene can load and output but can't locate any of the associated objects to render - with nothing in the scene, it just renders black....

Julian

monovich
02-26-2003, 04:51 PM
There are no "Can't load object" errors. Each frame is taking 0.0 seconds to render.

my LWSN cmdLine:

-2 -c"steve1_HD:Users:steve:Library:Preferences"
-d"steve1_HD:lwsn" job1 ack1


Each computer is referencing local preference files, not a global one on the master computer, but all nodes are pointed to a common content directory on the master computer.

Is the first line in the cmdLine file the content directory?

I'm confused. I'm going to go try setting both the above parameters to a common content directory.

Thanks for all the help. This is complicated and confusing, but also fun to figure out.

-Steve

monovich
02-26-2003, 05:35 PM
update:

Went back and read about how to set up the config directory and content directory correctly. Now I'm getting renders (finally!). But what was interesting is that my renders were coming out without the texture maps and were rendering to .flx file format even though I had specified .tga.
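For anyone who hits the same confusion: as I now read it (double-check against the manual), the pieces of my cmdLine break down like this:

```
-2                                              batch mode: watch a job file
-c"steve1_HD:Users:steve:Library:Preferences"   config (preferences) directory
-d"steve1_HD:lwsn"                              content directory
job1 ack1                                       job/ack files for this node
```

So no, the first flag after the mode isn't the content directory - that's -d.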

I've made sure my relative paths are correct, so I think it may be a plugin issue.

I'm still troubleshooting that problem, but I looked at my Layout and Hub preferences and saw that they were different (from renaming my HD), so I'm going to try to resolve that.

yeesh!

monovich
02-26-2003, 05:51 PM
update:

Got everything working. I just needed to clean up my cmdline files.

Again, thanks for all the help. I'm very happy to finally be able to do a batch render or a network render at my leisure.


So, the question begs to be asked, why is ScreamerNet so damn complicated? Why can't they write a simple GUI for it and add a few simple features to pump up scheduling and other controls?

hmmm.

-S

paintboy
02-26-2003, 06:30 PM
"So, the question begs to be asked, why is ScreamerNet so damn complicated? Why can't they write a simple GUI for it and add a few simple features to pump up scheduling and other controls?"

just the threads on this topic, on this forum, over the last 3 years
would make a book the size of the LW manual:p

roberthurt
03-18-2003, 03:29 PM
I've been reading through this with interest as I'm also having difficulty configuring my screamernet setup on a couple of Macs at work.

I've been following nonplanar's technique, using LWSN1, LWSN2, etc. I got as far as getting 2 local and 2 remote nodes running. My hard drive is uniquely named and mounted at the root level on the remote machine. My cmdline files look like:

-2 -c"Minbari:Users:hurt:Library:Preferences:LightWave Layout 3 Prefs" -d"Minbari:Users:hurt:Documents:Lightwave:" "Minbari:Applications:Lightwave 3D 7.5:Programs:job1" "Minbari:Applications:Lightwave 3D 7.5:Programs:ack1"

I get as far as running LWSN1-4, 2 on each machine. The network rendering panel shows all 4 nodes. The feedback messages are identical on both machines (Can't open Job file...).

However, when I start rendering, only the 2 local nodes are able to find the scene file. I verified that the job1-4 files contained the full path to the scene file:

Minbari:Users:hurt:Documents:Lightwave:Scenes:_Astro:L-1:Survey Cones 2 HV

But the problem shows up in the remote screamernet window:

LightWave command: load.
Loading "M".
Error: Can't open scene file.

On my local node the scene loads up OK, but on the remote node, it only sees a one-letter scenefile name and therefore can't get the file.

Note that I did get everything working in mode2 if I only use local nodes. Also, I found I could run mode3 successfully remotely using commandline syntax like:

-3 -c"Minbari:Users:hurt:Library:Preferences:LightWave Layout 3 Prefs" -d "Minbari:Users:hurt:Documents:Lightwave:Scenes:Games:Chessboard.lws" 0 30 2
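For reference, my reading of the mode-3 arguments in that line (hedged; worth checking against the manual):

```
-3        standalone mode: render the given scene directly, no job/ack files
0 30 2    first frame, last frame, frame step (here: frames 0-30, every 2nd frame)
```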

Any hints would be greatly appreciated!

Julian Johnson
03-18-2003, 03:35 PM
Hi Robert,

Are you on anything less than 10.2.3? This looks just like the network read/write error that Jon Baker discovered in 10.2.0/1/2.

Julian

roberthurt
03-19-2003, 02:21 PM
Julian, many thanks for the tip...

(wiping egg off my face) I knew about that bug but utterly forgot about it in the erroneous assumption that my coworkers would maintain consistently updated systems... D'ooh!