
Node Displacement



Mr Rid
08-25-2011, 12:40 AM
I will never understand nodes. Too mathy. But am using the typical node setup I see around, with the Normal from Spot Info, Subtract, Multiply (math) and Scale. Have no idea how that works, but it's not working right with the map I am using. No matter what values I enter, the mesh displaces in opposing, wrong ways.

Layer displacement operates on white = max positive displacement and black = no displacement.
This map assumes gray = no displacement, black = max negative displacement, and white = max positive displacement. I don't know if nodes understand this or what I may need to change?

dpont
08-25-2011, 01:08 AM
So your displacement values are stored in the image in the 0 to 1 range,
but they were originally baked in a -1 to +1 range,
so you subtract 0.5,
to get a -0.5 to +0.5 range,
and multiply by 2
to get the proper -1 to +1 range.
I don't know where your normal displacement comes from,
but if it is inverted,
multiply by -2 instead of 2.
If the original displacement has been normalized,
I mean, divided by the maximum real value,
you also need to multiply the result by this maximum value
before scaling the spot normal.

Hope that helps,
Denis.
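
Spelled out as a minimal Python sketch, the remap Denis describes looks like this (illustrative only; the function name and the max_value parameter are assumptions, and LightWave does this with Subtract and Multiply nodes rather than script):

# Denis's remap: shift the 0-1 image value to be centered on zero,
# expand to -1..+1 (negated if the map is inverted), undo any
# normalization, then scale the spot normal by the result.
def remap_displacement(luma, normal, max_value=1.0, inverted=False):
    centered = luma - 0.5                   # [0, 1] -> [-0.5, +0.5]
    scale = -2.0 if inverted else 2.0
    signed = centered * scale * max_value   # -> [-max, +max]
    return tuple(n * signed for n in normal)

# A white pixel (1.0) pushes 1 m out along the normal:
print(remap_displacement(1.0, (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)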

Mr Rid
08-25-2011, 01:09 AM
:eek: Ok now, you are using math. Scene: http://www.box.net/shared/static/0mr8h3ofpcq9u05gbho3.rar

dpont
08-25-2011, 02:06 AM
Sorry, but displacement is math,
and addition, subtraction and multiplication
are basic arithmetic.

Obviously your map is normalized or in a different unit, not
a real meter measurement, so if you want
an accurate displacement you need to
know how this map was exported;
the image map itself doesn't contain this information.
So as a first step,
subtracting 0.5 and multiplying by 2 is good;
then you need to add another Multiply node
before scaling the normal vector.
A small factor like 0.005 doesn't look too bad,
but I don't know the original model or your software.

Denis.

lardbros
08-25-2011, 05:58 AM
Oh my word... still complicated for a maths simpleton like me. A completely artistic brain like mine finds this stuff tricky... Glad to see someone else who is confused by this... :) I do have issues with this stuff; only some of it makes sense.

EDIT!
----------------

Actually... I read through your first post again and it makes sense now that I've put my maths hat on. I just wouldn't know how to figure this out myself :D I'll store this in a special slot in my brain for future reference!

dpont
08-25-2011, 06:46 AM
I agree,
this is a complicated way to export and import such data.
The simplest way would be:
use a floating-point image format (16, 32 or 96 bits),
which allows negative values,
make sure that baking in the exporter is done in meters,
then in the LightWave displacement editor
just scale the normal by the image value;
no compression, normalization or conversion needed,
that's all.

Denis.

Tobian
08-25-2011, 08:49 AM
Hmm, ok a couple of things to help explain.

Textures are assumed to be 0-1, but in nodes '1' actually represents 1 metre. So assuming you displace your model using that 0-1 map (multiplied with a spot normal), you will get 1 metre of displacement for a white pixel.

If you need to scale the texture then a multiply node here also works. Assuming that your 'white' pixel is 5cm (so like the old texture displace, typing in 5cm) you would input the texture into your multiply node and type in the value you want to set as your scale. A good hint here is that the node editor works like the rest of LW, so if you typed in '5cm' it will transform it to 0.05 for you, so you can type in real world values, and they will be translated to metre scale.

Nodes can look a little scary, but if you just go from one to the next you can see what's being done with your data, and in the case of displacement it's quite simple.

And I agree with you, DP: ideally the displacements should be stored in a float state, with negative values, but then you'd still have the issue of scale, as the other app might not use metres, or the object may not have been modelled to scale...
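
As a small worked illustration of the metre-scale point above (hypothetical values; LightWave parses '5cm' into 0.05 in the node's input field for you):

# A 0-1 texture value multiplied by a scale in metres gives real-world
# displacement; '5cm' typed into a Multiply node becomes 0.05.
white_pixel = 1.0
scale_m = 0.05
print(white_pixel * scale_m)  # 0.05 -> a white pixel displaces 5 cm along the normal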

JeffrySG
08-25-2011, 09:16 AM
So your displacement values are stored in the image in the 0 to 1 range... you also need to multiply the result by this maximum value before scaling the spot normal.


Sorry, but displacement is math, and addition, subtraction and multiplication are basic arithmetic... a small factor like 0.005 doesn't look too bad, but I don't know the original model or your software.

Ok, wow. At first glance I wasn't expecting to understand all of that but surprisingly it was really clear to me. I don't use displacement a lot but this will be great to know. :) Thank you!


Hmm, ok a couple of things to help explain... if you typed in '5cm' it will transform it to 0.05 for you, so you can type in real world values, and they will be translated to metre scale.
Great to know too! Thx Tobian! :)

lardbros
08-25-2011, 09:50 AM
Ah... okay... so all nodes work in metres? I never knew! These little things help massively. Entering 0.005 for a vector seemed odd, but now it makes sense! :D

dpont
08-25-2011, 10:05 AM
Just to avoid confusion:
in the sample posted by Mr Rid,
the texture map values are shifted and compressed (scaled)
to fit the restrictions of a classic image format,
so in the image,
50% (grey) is 0% displacement,
0% (black) is a negative/ingoing 100% displacement,
100% (white) is a positive/outgoing 100% displacement.

If the exporter works in millimeter units,
1 millimeter could represent a maximum of 100% (absolute disp. value)
or a max 50% negative + max 50% positive (relative disp. value);
in this last case, after the subtraction (for shifting),
a multiplication by 0.001 (mm to m) could be enough,
no more of this x2 and the (odd) x0.005.

Denis.
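
A worked version of Denis's millimetre case, with illustrative numbers (assumptions about how such a map might have been exported):

# Relative case: 50% grey = 0, full image range spans +/- 0.5 mm.
luma = 0.75                # image value, 75% grey
centered = luma - 0.5      # shift so 50% grey = no displacement -> 0.25
mm_to_m = 0.001            # exporter baked millimetres; LW wants metres
print(centered * mm_to_m)  # 0.00025 m, i.e. 0.25 mm outward along the normal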

dpont
08-25-2011, 10:12 AM
Ah... okay... so all nodes work in metres? I never knew! These little things help massively. Entering 0.005 for a vector seemed odd, but now it makes sense! :D

LightWave's coordinate system is in meters;
other units are just for the user in the interface
and are always converted internally to meters.
The nodal system is directly connected to
LightWave, without conversion, like a plugin
built with the LW SDK.

Denis.

XswampyX
08-25-2011, 10:41 AM
Forget the maths, just stuff it through a gradient node.

http://i465.photobucket.com/albums/rr16/xXswampyXx/DispGrad.jpg

Celshader
08-25-2011, 10:49 AM
I will never understand nodes. Too mathy... This map assumes gray = no displacement, black = max negative displacement, and white = max positive displacement. I don't know if nodes understand this or what I may need to change?

First, do not use the Node Editor in Object Properties. Use the Surface Node Editor to define your displacement, then use Bump Displacement in Object Properties to define the amount of displacement along the normals.

This frees up the Node Editor in Object Properties to load MDD/XML animation files onto the cage object, while the ZBrush displacements can be layered after Subdivision. We are using this technique on TERRA NOVA to layer ZBrush-generated displacements on top of Maya-generated geocache files in LightWave.

Here's how to do it...


1. In the Surface Node Editor, select the surface you want to displace.
2. Load your displacement map into an Image node. (Add Node->2D Textures->Image)
3. You need to redefine 50% grey as 0%. To do this, subtract 50% from the map. Load a Subtract node into the Node Editor (Add Node->Math->Scalar->Subtract). Plug the Luma output of the Image node into the "A" input of the Subtract node. Double-click on the Subtract node to access its options, and enter a value of 0.5 for "B." This will subtract 0.5 from all values on the map: values of 0 (black) will become -0.5, values of 1.0 (white) will become 0.5, and values of 0.5 (50% grey) will become 0 (no displacement). There is a quick numeric check of this remap after this list.
4. Plug the Result output of the Subtract node into the Displacement input on the Surface node in the Surface Node Editor.
5. Make sure the Node Editor for this surface is checked/activated on the main panel of the Surface Editor. Otherwise this surfacing will have no effect on the displacement.
6. Save the object so that this surfacing adjustment is saved.
7. In the Object Properties of the object, go to the Geometry tab and set the Subdivision Order to "After Bones."
8. Then go to the Deform tab, activate "Edit Nodes" and set its Node Displacement Order to "Before Bones." This way you can load MDD/XML files into this Node Editor in the future, and they will be applied to the cage and not the subdivided object.
9. Activate "Enable Bump" and set the Bump Displacement Order to "Before World Displacement," ensuring that the Bump Displacement only occurs after subdivision.
10. Set the Distance to the desired amount of displacement.
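
A quick numeric check of the remap in step 3 (the Subtract node does this per pixel; plain Python for illustration only):

# 50% grey maps to zero displacement; black goes negative, white positive.
for luma in (0.0, 0.5, 1.0):
    print(luma, "->", luma - 0.5)  # 0.0 -> -0.5, 0.5 -> 0.0, 1.0 -> 0.5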


If you have Bump maps you want to use in addition to the Displacement map, pipe them into the Bump input on the Surface Node Editor instead of the Bump channel's "T" button on the main surface panel. This is because the Node Editor will allow separate textures for Bump and Displacement, while the "T" button for the Bump channel on the main surface panel affects both bump and displacement.

If you already have bump maps in the "T" button, use Copy->All Layers to copy all the layers. Go to the Node Editor and add a Bump Layer node (Add Node->Layers->Bump Layer). Then use Paste->Replace All Layers to paste your original Bump textures into the Bump Layer node. Plug the output of the Bump Layer node into the Bump input on the Surface node. Then deactivate the layers in the "T" button on the main Surface panel.

Mr Rid
08-25-2011, 03:17 PM
Sorry, but displacement is math,
and addition, subtraction and multiplication
are basic arithmetic...

I have not looked into the last two posts yet, but I just want to express my frustration with nodes, and I am far from the only one. Everything is math under the hood, but the end artist shouldn't be bothered with it, especially just to 'apply map.' I don't care how an accelerator or brake works. I just want to know which pedal I push to make it go and which one to make it stop, and I'll take it over the finish line. We're just talking greyscale here, like the spec, bump and transparency maps that I don't need any special formula to apply. There should be a simple way to tell LW, 'black is in (-), white is out (+),' like a bias slider (am reminded of the lack of a moblur bias).

This displace map comes with a Daz model. I am not exporting or converting it. Within Daz, the map is selected under 'displacement' and it just works. No math. Daz is geared toward ease-of-use and it does some auto 'optimizing images' when I first apply displace maps that may be a levels eval (?). There are also several min, max, & sensitivity parameters to play with. But there really ought to be a way to apply this without all the node clutter and tutorials.

I took Introduction to Algebra all four years of high school and my right brain never passed (not kidding). Meanwhile I excelled at geometry (cuz I can see it) and physics (push or pull on this, then that happens). Programming is more of a left-brain endeavor. This too often results in a disconnect between interface and right-brain users. My brother is a brilliant but severe right-hemi artist who could never figure the first thing out about a Lightwave or Maya interface. It's a shame many artists who should be using the tools are literally unable to, because interfaces are unnecessarily technical and unintuitive. I am not as laterally challenged as some, but where CG goes into formulas and expressions, I won't be following.

Another problem with nodes is I can't easily experiment to figure out the effect of each node, because I can't see what each node is doing at that point in the flow. I can only see the cumulative end result of all nodes applied with F9, or VPR (although it's often impractical). I am used to Fusion nodes, where one click-and-drag allows me to see the effect each node has on the whole, at any point along the flow. And a hotkey toggles a node's 'bypass' so I can quickly compare before and after. To do the same in LW I have to isolate nodes and awkwardly connect/disconnect perhaps several inputs/outputs, and then I get lost.

I love the interface simplification work of Takeo Igarashi, which demos how complex functions like setting up cloth dynamics or modeling can be so easy that a six-year-old with no prior computer experience can figure it out in minutes (there's a demo of this somewhere I am not finding at the moment). Sweater http://www-ui.is.s.u-tokyo.ac.jp/~takeo/research/cloth/index.html and Smooth Teddy http://www.youtube.com/watch?v=7kid06mw0Pg were written a decade ago, but there's no reason why such intuitive approaches could not be applied in more complex tools.

I would LOVE to be an artist-friendly interface consultant.

Celshader
08-25-2011, 03:45 PM
Here are screengrabs to accompany my earlier post on this thread.

This is all you need to mix ZBrush displacements with MDD files.

Cageman
08-25-2011, 03:54 PM
Well, as Castius put it:

"3D is a form art that is made up by a ton of complicated crap"

At the same time, it is undersandable that you, as an artist, needs to be on your toes and dig into the technical side of things, in this case nodes, in order to keep up. I can't, or really, shoudn't, compare connecting 3 or 4 nodes to something like L-scripting or Programming. Completely different things, in my book.

I understand that nodes are difficult; I had a the exact same feeling you have now, but I took that fight several years ago when nodes were introduced in LW. I was lucky enough to know a guy who I could talk to through skype or msn chats, which helped A LOT. Don't you know anyone you could have as a mentor to get into nodes more easily?

When I started to understand nodes in LW, it also made it much easier to understand and use Mayas Hypershade. Not to mention, with the help of Dponts fantastic nodes, extend LightWave functionality and capability with an order of magnitude.

There will always be situations where "out of the box" doesn't cut it, and you need to create a tool by connecting nodes together. I rather have an application that is less limited in this way, if I had to pick one side. LWs shading is an order of magnitude more powerful within the node editor compared to just using the layersystem.

This all said and done, there are always things to do to make it better. We have a Normalmap loader and I think it would be quite easy to create a Displacementmap loader with some options exposed.

In any case, I've provided two nodesetups that you use depending on where you setup your displacement. These nodesetups works with ZBrush displacements, and as such, they should also work with images exported from 3DCoat and Mudbox as well, provided that they are exported correctly. The only thing you should have to tweak is the multiply node.

Oh well...

Good luck!

:)

dpont
08-25-2011, 05:02 PM
...The only thing you should have to tweak is the multiply node...

That's exactly what I was tracking in my posts,
and as I see it, whatever the solution,
surface displacement, displacement node editor,
(simple) math nodes or gradient,
this conversion value is needed in all cases,
and it should not be empirical if you want an accurate result.

...But what a bad tone, Mr Rid,
when we just want to help...
Please be cool.

Denis.

dwburman
08-25-2011, 05:13 PM
Having a node specifically for displacements, or at least some different options to interpret/translate the image, is a great idea.

I, too, wish we had some diagnostic nodes (plug the output in here to see the result of that node) and some workflows that mimic the node-based workflows of Fusion or Shake. The Node Editor UI can certainly be improved.

Now for an act of shameless self-promotion:
My Digital Bullet Hits: Metal Door Part One tutorial covers node-based displacements, among other things. I didn't use something as specific as a displacement map from a different app, but it may help. I used the method that Celshader recommended. There's a trailer video that shows some clips from the tutorial.

http://www.liberty3d.com/citizens/d-w-burman/danas-videos/digital-bullet-hits-making-holes-in-a-metal-door-part-one/

lardbros
08-26-2011, 06:20 AM
Thanks for all the input from everyone. Jennifer and DPont... always great to have such concise explanations. I definitely need a mentor for these nodes... there seems to be a trickle of information that one NEEDS to know before you can have a clear understanding of how some nodes work. I for one only found out recently that some diffuse shading nodes include the shadows (is that right?)... found that out when I was messing with Blochi's shadow catcher nodes. Seems VERY weird to me... and I'm certain not all other software can work this way!... or am I wrong? Just making assumptions here! :D

RebelHill
08-26-2011, 07:17 AM
I for one only found out recently that some diffuse shading nodes include the shadows (is that right?)

Of course they do... diffuse shading is based on light received by a surface... an area of that surface that is occluded in some way, and not receiving the light, is therefore in shadow.

As a slight aside to the whole "I'm an artist, don't want all this techy/math stuff"... it's a complaint I've heard a gazillion times. Quite simply put, my response to that mentality has to be... tough titties... If you want to do art that involves no techy stuff... take up painting, or ballet; forget CG.
So why does LW not include some nice easy "displace" node (or whatever)... I guess because disp maps are handled differently in different packages... if you had one preset disp node then there'd be cries of "why can't it work with disps from this/that package?"

Ok, so maybe there could be a nice "easy" node to just plug it into to "remap" these values... well there is, as already pointed out: a gradient node.

The problem with simple, artisty solutions is the fact that they are severely limiting... if you take the techy/mathy control away from the user, then you've painted them into a corner which they can't get out of (which is pretty much what Poser is... very artist friendly... extremely inflexible as 3D apps go).

For surfacing, pretty much the ONLY math functions you need to understand are add, multiply, divide and subtract... maybe handle a couple of fractions.

Now as for folks who say they don't "get" that, I have to say... I think you're just too lazy to try and want someone else to have done it for you... don't say you don't get it... these are the most basic of sums that they teach to 6-year-olds.

Anyhow... [/rant.]

Anyhow... [/rant.]

Tobian
08-26-2011, 07:29 AM
To be honest, the best solution for all of this would be compounds (clusters of nodes which are grouped together in a single node, with customised inputs and outputs). That way people who want the power can have it, and people who want pre-made, off-the-shelf, simple solutions can have it too. Most of the tools are there to make a 'ZBrush node'; it's just that it comes out as a bunch of complex nodes instead of a single one.

Nodes do take a lot of effort to get your head round, and some of the more advanced node clusters are way over my head, but this is no different from the rest of 3D: sorry, but 3D is horribly complicated. There are things you can do to mitigate that, but really complex sh*t will always end up as really complex sh*t :D (or horribly limited).

Tobian
08-26-2011, 07:33 AM
Oh, btw, while it doesn't help with the normalising and mid-point adjustment bit...

DB&W does a rather nice remap node http://www.db-w.com/products/dbwtools/docs?start=1 which does exactly that (though you may have to MANUALLY type in the correct values yourself, oh the horror! :P)

RebelHill
08-26-2011, 07:35 AM
To be honest, the best solution for all of this would be compounds

YES!!! That would be best. Just look at all the hackety whacking JR had to do to compress the blurry reflection optimiser node network down to a single node, having to implement it as a plugin... basically all he's created is a compound node, which could be made with 2 clicks were this feature built into LW's nodes.

Though I can still foresee the cries you'd get when the "standard" compound that had been created for a given task failed to produce the right result on a given setup, as users found they still had to dive into the root tree for some things.

lardbros
08-26-2011, 07:48 AM
Of course they do... diffuse shading is based on light received by a surface... an area of that surface that is occluded in some way, and not receiving the light, is therefore in shadow... Anyhow... [/rant.]

Hmmm, okay... it IS simple maths, but where do we see the output halfway through a node flow? And easily? Some of us can't visualise what a multiply node will do when you take an RGB of 145, 167, 199 and multiply that with 206, 76, 93. We have to plug it in and see the effect... OR have it explained to us in a way that makes sense. Other software is set up much better in this regard.
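
For what it's worth, that multiply example is computable by hand: node colour math works on channels normalized to 0-1 (a hedged sketch, assuming 8-bit values from the post above):

# Multiply the normalized channels, then scale back to 8-bit for display.
a = (145, 167, 199)
b = (206, 76, 93)
result = tuple(round((x / 255) * (y / 255) * 255) for x, y in zip(a, b))
print(result)  # (117, 50, 73): multiply always darkens, like a Photoshop multiply layer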

Anyway... i don't want to argue about any of this... I LOVE all the techy stuff, I enjoy learning... what I don't enjoy are things that don't make sense!!!

The diffuse pass, to me, is diffuse shading and JUST that. If an object is casting a shadow, then yes, the absence of light causes lower diffuse, but the diffuse of an object 'catching' the shadow shouldn't really include the shadow; well, it doesn't in 3ds Max.

You say 'of course they do'... then it must be obvious to YOU, and you must have a deeper knowledge of the underlying shading architecture within LW... but it's not THAT obvious. You look at a shadow pass, right... does that include the absence of light on the diffuse of an object? NO, it simply contains the shadows!! Do you see my point?

In 3ds Max, I render out a diffuse pass, and it DOES NOT contain shadows! It is purely the diffuse. YES, with the absence of light on the shaders, but not the shadows... so when I came to look at Blochi's shadow catcher node, I couldn't understand what the hell was going on, because his diffuse node was causing the transparency on the catching object... now, how was I to know that in Lightwave the diffuse buffer has the shadow in it?

The only way for me to know this is to pick apart other people's nodes, like Blochi's, so we're NOT lazy... just ill-informed on the inner workings of Lightwave's shaders! :D

lardbros
08-26-2011, 07:52 AM
To be honest, the best solution for all of this would be compounds

Completely agree... this is how most nodal portions of software work, and Virtools works like a charm in this regard!

RebelHill
08-26-2011, 08:09 AM
Hmmm, okay... it IS simple maths, but where do we see the output halfway through a node flow? And easily?... now, how was I to know that in Lightwave the diffuse buffer has the shadow in it?... so we're NOT lazy... just ill-informed on the inner workings of Lightwave's shaders! :D

Well, when I say "lazy"... I didn't mean you (or anyone) in particular... nor that your problem was caused by such things on your own part... but rather as a general attitude you see repeated to greater or lesser extents. And some have a very clear "I don't want to have to learn it" vibe going.

Otherwise, yes, I can see how the "of course" may not be so clear to you, coming from seeing such things in another package. Really, the only confusing part is the truncated terminology.

When LW talks about a "diffuse pass" it means diffuse SHADING... which has to include brights and shadows, cos that's what shading is. What I suspect, therefore, is that what Max calls diffuse is diffuse COLOUR... which ofc is just the colour info of the surfaces, devoid of shading. LW has this same pass, but it's called "raw RGB" (interestingly, LW has another diffuse buffer which is neither shading nor colour, but the diffuse value of the given surface: a BW buffer representing 0-100% values on the surface as defined by the surface settings, not colour or shading).

Ofc... if you're wanting to get a deeper understanding of the different buffers in LW, and what each produces... that's easy... just set up some basic sphere and cube render, throw in colours, shadows, reflections, transparencies... as many different surface attributes as you can, and then use the render buffer view/export filter, and just check every last buffer option. A quick click through the different images will clue you in in no time.

And yes... I agree that it'd be good to have a result/readout node that you could stuff into a network to see the values rendered by a particular set/chain of nodes... but for most things, especially surfacing, VPR does that well enough to my mind.

lardbros
08-26-2011, 08:56 AM
This is what confused me... maybe I should stick to one piece of software, eh :D Trouble is, how many other things are like this in Lightwave? :(

On the same subject... the Diffuse Buffer in 3ds Max doesn't actually contain the RAW RGB; it's simply the diffuse, as I'd expect it... as in the shading on all objects devoid of shadows, but including the absence of light on each object's shading (but again, no shadows). It may seem odd to others who don't use Max, but it makes sense over here :) The way it works in Max works well for compositing. I'm not keen on LW having the shadows in the Diffuse Buffer; it seems all wrong... but now I know, I guess I can get on with learning :D

MSherak
08-26-2011, 02:18 PM
This is what confused me... maybe I should stick to one piece of software, eh :D... The way it works in Max works well for compositing. I'm not keen on LW having the shadows in the Diffuse Buffer; it seems all wrong... but now I know, I guess I can get on with learning :D

Max does not separate Color from Diffuse... they are one and the same in Max, which drives me nuts, since they are not the same in the real world. Diffuse is shading, or the level of color bouncing back due to all the other colors being absorbed; when a shadow comes over a surface, that is just another change in the amount of light bouncing off the surface. Which is correct to the real world. Which brings us to the question: what color is a RED ball, GREEN ball, BLUE ball in a room with no light???

So think of the RAW channel in LW as the same as the DIFFUSE channel in Max.

-Sherak

P.S. 8/10 will get the answer wrong.

Cageman
08-26-2011, 03:50 PM
What color is a RED ball, GREEN ball, BLUE ball in a room with no light???

Black... if there is no light, there can't be colour, since colour is reflection of light.

EDIT: Rather... color is absorption of light as well. A red item absorbs all wavelengths of light except those that give a red colour, bouncing that red colour back.

Cageman
08-26-2011, 04:14 PM
btw...

Thinking more about this question of colour, and speaking with some guys on Skype... it is a trick question... because the one asking the question should also specify whether it is within the wavelengths of light that we as humans (and most camera systems) can see. A completely dark room will give the effect that everything is black, but just because the light isn't there doesn't mean it changes the properties of the objects that give off the reflection of red, green or blue... so in THAT sense, the colours don't change, but from a perceptual perspective (as in, what do we really see), it certainly changes from colour to complete black.

:)

ivanze
08-26-2011, 04:29 PM
This could help you to see the value of a given node, and it is free.

Input Spy

http://www.lwplugindb.com/Plugin.aspx?id=c793077b

Tobian
08-26-2011, 04:43 PM
This is one of those things which you can put down to peculiarities of software, as most software does it more like LightWave does. It's only Max that calls colour 'diffuse'. Really there should be diffuse reflectance (your diffuse input) and diffuse colour (which is caused by absorption), much the same as you have your specular reflectance value and your specular reflection colour (to get realistic metallic reflections). In maths terms, the two are multiplied together to give the net energy returned to the camera, and then added to the specular reflections, likewise multiplied together... it's all just maths!
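
A hedged sketch of that multiply-then-add combination (a generic Lambert-style sum, not LightWave's actual shader internals; all values are assumptions):

# Each term is a reflectance value multiplied by its colour; the terms
# are summed per channel to give the net energy returned to the camera.
def shade(diffuse_color, diffuse_reflectance, spec_color, spec_reflectance):
    return tuple(dc * diffuse_reflectance + sc * spec_reflectance
                 for dc, sc in zip(diffuse_color, spec_color))

# A reddish surface, 80% diffuse, with a faint white specular hit:
print(shade((1.0, 0.3, 0.2), 0.8, (1.0, 1.0, 1.0), 0.1))  # ~(0.9, 0.34, 0.26)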

As to the conundrum... well, how do you know you are in a room with a red, green and blue ball if there's no light? I question your question! :p Without any light you can't see squat to answer a question about the contents of the room, and last I checked I don't have echolocation, and that doesn't see in colour either :p To be honest, this reminds me of the conundrum "If a tree falls in a forest and no one is there to hear it, did it make a sound?" To which my answer is "yes, because the insects, animals and birds heard it :p"

lardbros
08-26-2011, 04:45 PM
Max does not separate Color from Diffuse... they are one and the same in Max... what color is a RED ball, GREEN ball, BLUE ball in a room with no light???... So think of the RAW channel in LW as the same as the DIFFUSE channel in Max.

Guess it's what you're used to... I'm used to 3ds Max, and to me, having the diffuse contain the shading but not the shadows is better for compositing. There are ways round this in LightWave, but it just makes sense the way Max does it, and I'd guess that other software works differently to LightWave too.

H_Molla
08-26-2011, 07:06 PM
Interesting thread...

MSherak
08-29-2011, 02:56 PM
Well how do you know you are in a room with a red, green and blue ball if there's no light?

The question was:
What color is a RED ball, GREEN ball, BLUE ball in a room with no light???

The question doesn't actually place you in there looking... There are two types of answers you will get from people: black (light) and the colors (matter)... and in the physical world, even if there is no light for your eye to see by, the properties of the material are still the same. It all depends on how your mind perceives the question. Knowing what the physical makeup of something is helps in learning how wavelengths interact with it... it's all math in the end.

-Sherak

If you really want to see this put to the test... do an oil painting. You will surprise yourself.

Tobian
08-29-2011, 03:14 PM
I've done oil painting, watercolour, gouache, charcoal sketching and a bit of decoupage, but that doesn't change physics :p

I think you're confusing 'low' light with 'no' light. If there is no light whatsoever, it doesn't matter how high you turn up the sensitivity of your light collector, or how long the exposure: you will never see anything. If there is low light then you can perceive something even if it's not normally visible to your eyes.

Colour is a perceptual phenomenon, based on how human eyes perceive wavelengths of light reflected or emitted from objects. Unless there is light in the equation then there is no colour, because it's entirely a subjective experience. In this case the argument of painting is invalid, as yes, an artist would make 'black' paint from a number of pigments which absorb light, but it would never truly be black, and it's also a subtractive model, which is the opposite of the additive model used for measuring colour in this context. In the subtractive model, black is a colour because it is made from many colours of pigment. In the additive model, black is not a colour, because it is simply an absence of information.

If you asked an artist to paint this scene they would have to do it from imagination, so the colours would be wrong (if subjectively right to their imagination :D), so I am not sure how that is even a valid argument?

Mr Rid
08-29-2011, 08:18 PM
"Simplify, simplify, simplify" - Thoreau


...
...But what a bad tone, Mr Rid,
when we just want to help...
Please be cool.

Have no idea what you are referring to.


... "Im an artist, dont want all this techy/math stuff"... Its a complaint Ive heard a gazillion times. Quite simply put, my response to that mentality has to be... tough titties... If you want to do art that involves no techy stuff... take up painting, or ballet, forget CG.

My point is about getting down to the fun part of why I got into this in the first place, and not wasting time that a client is usually paying for, when I can plainly see there's a better way to do it. 'Intuitive' means a way of setting up the interface that relates to experience common to an intermediate user, so they can somewhat reason out how to do things without having to constantly read docs and search out tutorials for each step of the process. But of course, new processes mean new ways of thinking.

An animation app is primarily a creative tool. It should allow an artist to focus more on the objective and less on the tool itself. The interface should clear the tech out of the way for ideas to get out. The complex part should be made transparent or understandable for the artist. As Dodgy says, push the software, don't let it be a pain in the rump. I once heard William Vaughan say something about how you have to figure out how the tool wants to work. I can't subscribe to that if the tool is a pain and keeps me from getting stuff done. The dev's job is to support production, not the other way around. On a deadline I must constantly weigh the productivity of my actions in order to keep schedule, and know when I or someone else is straying too far with a convoluted technique or tool. Then it's time to move on to a more elegant solution.

Traditionally, art, music and literature are not the domain of numbers. People who excel in these areas possess a particular aptitude, and their minds work differently than, say, a mathematician or programmer. Unique to computer art, the two hemispheres have to collaborate. As an artist, I see the final picture and am anxious to get to it. Am a perfectionist, and anxiety keeps me up all hours worrying how to solve or improve shots ("Anxiety is the handmaiden of creativity." - Eliot). I HATE the convolution of computers... constantly forced to deal with endless upgrades, configurations, settings, codecs, drivers, formats, instructions, FAQs, tutorials, forums, bugs, glitches, crashes, viruses, incompatibilities... I just wanna get sh.t done. All of these problems are due to math. Technology inevitably spawns as many or more problems than it solves. But you need artists to make artwork.

98% of the good CG artists I have worked with were drawists of some sort before all the mouse pushing. As a mentor, I've learned that I am probably wasting my time if the student wasn't doodling little characters all over their notebooks in school. Most friends are artists of some sort who learn and live visually, dropped out, are self-taught, disregard authority, are bad with money, intuitive, absent-minded, show up late, run against the grain, ignore rules, prefer to jump in and explore rather than be told what to do or have to read instructions, and are spontaneous free-associators who joke around a lot. In short, anti-math. I can think of only a couple of good animators with no prior artistic experience, and both eventually went into more technical aspects. I failed most classes my senior year (ignoring daily work) while scoring mostly A's on exams, because I was good at remembering and deducing rather than actually understanding the material, and was 20 points short of the minimum requirement for Harvard on the SAT (they threw a diploma at me anyway), without having passed Intro to Algebra in four years.

Change is not brought about by contentment and knowing boundaries. My instinctive approach to CG serves well enough, associating the tools in ways that slice through Gordian knots where others are stumped. Equations won't help me extrude a landscape mesh from a static plate of a hill and integrate instanced horses running down the hill when the app isn't designed for it. You have to get creative.

Complex tools place me at the mercy of a left brain's idea of how they should function. The first 3D app I saw was one that a friend was using to move shiny balls around by typing text on a black screen... no interface. Not my thing. But slap a steering wheel and a comfy seat on that engine, and I'll place in a few races. That friend was another LW animator who eventually found his place in more L-brain administration.

The best support I've experienced was with RealFlow. In v1 days, I sent about 80 emails over the course of a year, and they usually responded the next day with patches addressing whatever issues I raised, one of them being lack of motion blur (maybe they were already working on it but they solved it a few days after I brought it up). THAT is support. They listened to whoever was pushing the software, were not expecting me to come around to their way of thinking, and that tool has really evolved since.

Many years ago I street raced a lot in a stock Z. I didn't (don't) know the first thing about engines, yet I always won (even outran state troopers in their puny Mustangs) over guys who had spent years customizing a muscle car. I just want to get where I am going in the least time, and I expect the machine to respond to my instinct. Many years ago I saw Scorsese state in an interview how he dreamed of a camera that sees and moves the way his own head and eyes do. My dream is software that similarly keeps up with me, that I am not waiting on or troubleshooting every hour.



Now as for folks who say they don't "get" that, I have to say... I think you're just too lazy to try and want someone else to have done it for you... don't say you don't get it... these are the most basic of sums that they teach to 6-year-olds.
... [/rant.]

I normally multitask between three LWs, Modeler, Fusion, PShop, several web sites and wordpads (making tons of notes), using two or three workstations (a good roll chair is essential). A few supervisors have said I do the work of 2 or 3 people. Too 'lazy' to figure out technical tools, or too busy?

All 3D apps are complicated and have problems. I spend most of my time at the computer pushing LW and about 60% of that is spent troubleshooting and coming up with workarounds. Again, I understand that this stuff is complicated. But just once, I would love to have an NT dev sit next to me all day. I just spent four hours trying to do something very basic with cloth, including 20 minutes trying to rename a UV, 10 mins trying to load an object, followed by another 15 mins trying to qemloss a simple mesh, then a blue screen crash. Typical time wasted.

When I need to do something in nodes that I have not tried before, I waste so much time trying what seems logical but making little progress, because they are just not intuitive for a typical R-brain. Yes, I understand basic math, but that doesn't really begin to explain which node plugs into what, in what order, to get what, when all I want to do is, say, apply a damn displacement map. And as Lardbros said, I don't care what the numbers indicate, I need to SEE what is going on. Docs don't explain much (certainly not how to get to the big picture). Tuts show how to do one specific thing, but the moment I modify or combine, am back tearing hair out. There are efficient, and then less efficient, ways of doing things. When there is no basic pass-through/bypass (trialing variations and comparing before & after is basic), when I can only see the accumulated result, can't scale and shift grad values, and incidence values are still backward, I can only wonder: are they listening to feedback? Do they really know how users typically interact with the tool? Are they concerned the tool is doing what it needs to be doing? Couldn't they take a cue from a similar tool that handles a task more efficiently?

I don't know how many times I complain about a clunky process, and some tech defends the limitations of numbers and tells me how I should adjust my attitude. Then an innovative program somewhere comes up with the solution I was talking about, because they were listening to users. Seeing the big-picture solution without knowing how it works is another R-brain tendency.

I slogged through After Effects since version 1, always thinking 'there's got to be a better way.' Eyeon answers. In AE, I open and close windows and menus to death, like wandering around streets and corners to figure out how to get somewhere, while Fusion is like looking down from the sky. I can see everything! Again, the 'whole picture'. I can figure most of it out by just clicking around, without having to read docs (same experience with Poser). I knew several AE guys who took to Fusion and never looked back.

I worked at a place where for three months we struggled to set up a linear color workflow that turned into a huge mess, trying to get all apps, displays, platforms, employees and clients on the same page. The process was obstructing productivity when we already had a way of doing things that worked fine. I refused to waste any more time in the convolution of LCW until someone made it a checkbox. Same reason I skipped over LW 6, 7, 8, 9, 10 for production. I wait until more kinks are worked out in the .3+ versions. I supervised one large project where I had all the artists use LW 5.6 although LW 7 was just out, because we knew 5.6 and it was reliable. It's about not wasting time, money or hair.


The problem with simple, artisty solutions is the fact that they are severely limiting... if you take the techy/mathy control away from the user, then you've painted them into a corner which they can't get out of (which is pretty much what Poser is... very artist friendly... extremely inflexible as 3D apps go).

Simpler interfaces don't necessarily mean less capable tools. Igarashi's interface experiments from ten years ago demonstrated modeling and painting with surprisingly few strokes of a pen or mouse, without buttons or menus. Smooth Teddy allowed creation of organic 3D shapes from freeform strokes, which is more intuitive for artists than the rigid, CAD-tech mode of construction. Yes, Teddy is a limited, experimental tool. But at some point this intuitive gestural interaction was applied in apps like Silo and ZBrush, and artists swarmed all over it. Effort vs productivity. His Sweater demo is still amazing.

And that tired thing about how inferior Poser is... many very talented artists would probably never go near a 3D app if it weren't for Poser, where they are freed up to express more compelling ideas and storytelling than what I see many users of more complex apps doing. And they are always free to learn to model from scratch elsewhere. It's up to the individual to make something interesting.

"After a certain high level of technical skill is achieved, science and art tend to coalesce in esthetics, plasticity, and form. The greatest scientists are always artists as well." -Einstein

dpont
08-30-2011, 12:07 AM
...Have no idea what you are referring to...

So stop quoting me for this kind of thing;
you simply feigned having an issue to solve,
I tried to understand it, and the mathy
things were not mine but included in your posted sample.

I'm certainly no longer interested in this discussion;
I hate sectarianism on both the technical and the artist side.


"Bad tone" words are the best I found to describe
the greenish skin tone of Fantomas, a well known very bad guy,
without ears, from Z french serial movies.


Denis.

dpont
08-30-2011, 02:10 AM
...I also have proof that this is not CG,

[attachment 97850]


Denis.
(who also very much likes to learn and use classic/traditional VFX,
mixing artist and artisan work)

archijam
08-30-2011, 02:12 AM
...I also have proof that this is not CG...

[attachment 97850]

Denis - Love it! Fantomas et Funès! :thumbsup:

dpont
08-30-2011, 02:28 AM
Denis - Love it! Fantomas et Funès! :thumbsup:

But be aware: De Funès is sometimes
just Fantomas with a mask,

[attachment 97851]

Denis.

Philbert
08-30-2011, 06:14 PM
Good thread, I think Jen's little tutorial might help with my current project.

Some comments:


Now as for folks who say they don't "get" that, I have to say... I think you're just too lazy to try and want someone else to have done it for you... don't say you don't get it... these are the most basic of sums that they teach to 6-year-olds.

I'm sorry, but no, I don't get it, and it's not that I haven't tried. It's not that I want someone else to do it either; if someone could explain it in a way that makes sense to me I would be very eager to learn it. I'm tired of not understanding nodes.


Hmmm, okay... it IS simple maths, but where do we see the output halfway through a node flow? And easily?... We have to plug it in and see the effect... OR have it explained to us in a way that makes sense.

Exactly. I even picked up James Wilmott's DVD "Introduction to Node Based Texturing". I think James did a decent job with it, but he really only listed each node's name and what it's for, not really what to do with it. So now I might know that a node multiplies things, but I have no idea where to put it in my cluster or why I would use it vs. another node. Like LardBros said, some people might know what effect they want and automatically know what to do. I look at the huge list of nodes and have no idea which ones I want or which order I would put them in.

jeric_synergy
08-30-2011, 11:04 PM
I'm w/Mr. Rid: there does seem to be something about nodes that is very difficult for certain minds, mine very much included, to grok.

For me, it's that sometimes networks seem to work strictly from left to right, but then sometimes they do not. This is especially true, it seems, in Displacement applications.

Nodal is reminiscent of "Reverse Polish Notation": some people take to it, some never get it. Which sucks, 'cuz I can see that the nodal approach is very powerful.

Fortunately, there seems to be a lot of gratis help out there writing nodes, but frankly it just makes me nervous. That NewTek didn't make, TMK, a "preview node" immediately available strikes me as a rather big oversight. More nervousness.

dpont
08-31-2011, 12:53 AM
...For me, it's that sometimes networks seem to work strictly from left to right, but then sometimes they do not. This is especially true, it seems, in Displacement applications...


The nodal system works in both directions:
first, initialization to fill all needed info runs
from right (the root) to left (the node);
the Spot Info node provides almost all
of this info, but it is also accessible
from each node if needed.
Second, evaluation to build an output target runs
from left (your first node) to right (the root)
through your whole node tree.
There are also a few exceptions, like the Replace Spot node,
which modifies nodal info the way the root does,
but inside the node tree, from right to left.

Displacement is maybe just less easy to figure out
than Surface, because a 3D vector does indeed need
some math knowledge, compared to RGB color, which is maybe
more intuitive; but some parts of Surface, like Bump,
are obviously more difficult to handle than Normal
or displacement, even for me.

A chaotic displacement, let's say a randomized vector,
is simple to understand, but only good for noise or a natural effect;
moving a point along its normal is just a multiplication
of each x, y and z component of the normal by the distance (in meters).
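
In code form, the operation Denis describes might look like this (a minimal sketch, assuming a unit normal and a distance in meters):

# Moving a point along its normal: multiply each component of the (unit)
# normal by the distance in meters, then add it to the point.
def displace_along_normal(point, normal, distance_m):
    return tuple(p + n * distance_m for p, n in zip(point, normal))

# Push a point 2 mm out along a normal pointing up the Y axis:
print(displace_along_normal((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.002))  # (1.0, 0.002, 0.0)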

The problem above, with a normal displacement exported
by a specific application, is caused by a lack of standardization
among the third-party sculpting applications.
Mr Rid revealed lately that he used the Daz software,
but providing a 'one button' tool to import this image map in Layout
would only be possible if there were some kind of metadata
in the image file; a good informative thread could start
here if all users posted the available export settings
of ZBrush, 3DCoat, Mudbox, Daz etc...

Denis.

RebelHill
08-31-2011, 04:31 AM
...20 minutes trying to rename a UV

...Traditionally, art, music and literature are not the domain of numbers. People who excel in these areas possess a particular aptitude, and their minds work differently than, say, a mathematician or programmer.

...An animation app is primarily a creative tool. It should allow an artist to focus more on the objective and less on the tool itself. The interface should clear the tech out of the way for ideas to get out. The complex part should be made transparent or understandable for the artist.

...it's not that I haven't tried. It's not that I want someone else to do it either; if someone could explain it in a way that makes sense to me I would be very eager to learn it. I'm tired of not understanding nodes.

...he really only listed each node's name and what it's for, not really what to do with it. So now I might know that a node multiplies things, but I have no idea where to put it in my cluster or why I would use it vs. another node. Like LardBros said, some people might know what effect they want and automatically know what to do. I look at the huge list of nodes and have no idea which ones I want or which order I would put them in.

...there does seem to be something about nodes that is very difficult for certain minds

20 mins to rename a UV??? Ok, there's definitely something wrong there... Right-click, rename, type, return... less than 20 secs on every occasion I've ever done it.

But to get onto the main points...

First, I'm sorry if anyone felt offended by my "laziness" comment in any way... It really wasn't directed at anyone, but at the general malaise I've seen many times, over many years, from many CGers (and not just 3D CGers)... "Can't it be automated, can't it be presets??"

The answer there being in some cases yes, and in others no; in the case of the latter, because it represents a situation where doing so would put limitations on what can be done, or the provided preset would have so limited a use as to not be a huge time saver in many situations, still leaving all those other situations un-preset in any way... leaving you back at square 1. I mean, Rid... you complain that tutorials only show you how to do 1 specific thing in the 1 specific situation, and when confronted by another, you're lost... well that's EXACTLY what presets and so-called "artist friendly" solutions do so much of the time too. You cite ZB (et al) as a good example of artist friendly... but those sculpting apps largely fit right into this description... they're good at one thing, organics... need to now model a building or a car... you're SOL. (Yes, I know apps are getting hard surface tools now, but I think you'll be hard pushed to find many who can knock out hard surface stuff as quickly or precisely in a sculpting app as others can with a more "old fashioned" CAD tool.)

A 3D app should clear all the tech out of the way, and make it transparent to the artist... great idea... but fundamentally IMPOSSIBLE... at least so long as the aim of a 3D app is to allow a user to create anything... ANYTHING! The app cannot be structured to allow for every possible variation of every possible thing whilst remaining obfuscated tech-wise. I'd like to see anyone who thinks it should be that way actually try and offer a solution as to how you could make such an app, and how it should work for the user. Good f'ing luck, chuck!

Now as for the whole "OH, we're artists... our brains don't work that way"... tbh... I really DO believe that's a complete cop-out. You're not disabled, you're not retarded... There's no shortage of people out there in the world with severe limitations on their abilities who beat and wail on their own internal boundaries and achieve things that ordinary folk would never believe they could have done (sometimes even things that the ordinary/capable people themselves don't have the grit to do). Yes, it may be frustrating; yes, it may make you pull your hair out and want to scream and shout... but so what, that's how you get to new places, new heights... nothing worth doing is easy. (And oh, you can scratch that about music... musical ability is well known to be very tightly linked to numerical ability.)

So what is it about nodes that "artists" find so hard to understand?

Well first off... I think that to an extent it's the ASSUMPTION that somehow nodes are harder, or more complex... and that's all it is honestly, an assumption... an illusion.

For instance, Phil... you say "Ok, so I know there's a multiply node, but how/when would I use it?"

Ok, well... do you know how/when to use a multiply layer in the layered surface editor, or in Photoshop... cos if so, then you already know how to use the node... it's no different, it's just that somewhere along the line you've gotten confused/turned around, and it's left you THINKING that you don't know how when in fact you do... you just gotta adjust your thinking from a layered, top-down approach... to a FACETED top-down, bottom-up, left-to-right and right-to-left approach... that's really all there is to it.

In Photoshop, add a multiply layer on top, and it affects all layers beneath... in nodes, it'll only affect the nodes that you plug it into (either directly, or indirectly/eventually via the network).
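If it helps to see that in code form, here's a toy sketch in plain Python (made-up values, nothing to do with LW's internals):

# A Photoshop-style multiply layer sits on top and darkens everything below it:
layers = [0.8, 0.5]              # a base value, then a 0.5 multiply layer
stacked = layers[0] * layers[1]  # everything beneath the multiply is affected

# The same multiply as a node only affects what you wire into it:
def multiply(a, b):
    return a * b

texture = 0.8
colour_branch = multiply(texture, 0.5)  # wired through the multiply: affected
bump_branch = texture                   # never wired through it: untouched
print(stacked, colour_branch, bump_branch)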

The best I can do to try and explain it, I guess, is to look to something like After Effects. It uses layers... just like Photoshop... BUT you can pre-compose parts... so you can have one set of layers working in isolation from another set of layers, and then comp the results of both those layered images together in a third composition, by layering them together.

Maybe I need to do a video...

stiff paper
08-31-2011, 08:03 AM
Edit: I've just re-read what I've written below and decided it needed a preface - I'm arguing my side strongly here, that's all. I don't want anybody (particularly RH!) to feel like I'm attacking them. I'm not. I'm disagreeing forcefully, but hopefully politely. Imagine it in a vigorous debating voice, not a shouting voice.


"OH we're artists... our brains don't work that way"... tbh... I really DO believe that's a complete cop-out.

So here, essentially, you're saying that you don't believe anybody else's brain works differently in any way than yours. Ergo, nobody can do anything that you're incapable of if you simply put in the effort, and you can't do anything that somebody else is incapable of doing as long as they put in the effort.

I'm sorry but I outright reject that idea.

I've never seen a single shred of evidence that would suggest everybody out there is equally capable, and I've had 30 years as an adult that have been packed full to bursting with innumerable examples that suggest the specific and exact opposite of that is true.


So what is it about nodes that "artists" find so hard to understand?

Well we should make a distinction between LW's nodes and nodes in, say, Nuke (or Fusion). When nodes are used for a sensible purpose they're a) utterly intuitive, and b) a joy to use.

In Nuke the node tree is linear, and from a given single-node starting point it sequentially does one thing after another to the image. It's an utterly intuitive way of working in that you follow a single line to the next node, look at what the node does to the image, then follow another single line to the next node and... etc. Sure, you can have a branch coming in from the side, but that branch is exactly the same, just one thing after the other linearly until the point where it connects (usually with a merge of some kind).

So if we analyze our thinking when working in Nuke it looks like this:
"What thing do I want to do next to the image?"
- Do it.
"What thing do I want to do next to the image?"
- Do it.
"What thing do I want to do next to the image?"
- Do it.
"What thing do I want to do next to the image?"
- Do it.
"What thing do I want to do next to the image?"
- Do it.

And that's the process forever and always. It's always a matter of doing the next thing to the *same* data, over and over, until you've finished. I've shown Nuke to dozens of people over the past 10 years and I've *never* had a single person fail to understand how it works within two minutes. All that's left after two minutes is to learn what nodes there are and learn what the settings are in each node.
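In code terms that linear tree is just this (a toy Python sketch, with the operations invented purely to show the shape of the workflow):

def grade(img):  return [min(p * 1.2, 1.0) for p in img]   # brighten a touch
def invert(img): return [1.0 - p for p in img]
def clamp(img):  return [max(0.0, min(p, 1.0)) for p in img]

image = [0.2, 0.5, 0.8]             # a toy "image"
for op in (grade, invert, clamp):   # "what do I do next?" -- do it, repeat
    image = op(image)
print(image)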

(I should acknowledge here that, since Nuke 3, some of the things that have been added have been designed by people who didn't really understand exactly how Nuke's workflow worked back then, and so nowadays some things are a little funky. And the 3D system is not as intuitive as the regular 2D workflow. I should also add that Fusion has never been quite as blissfully intuitive as Nuke, but still, the linearity of the node tree is there.)

And now on to LW's node system. No real linearity. A vast, irritating mess of different types of data that can only be plugged in here, can't be plugged into that, won't do the right thing here, must be piped through this to convert it so it can do that. Instead of being a visual representation of linearly doing one thing after another to the same piece of data, it's a visual representation of taking lots and lots of different pieces of data, some loaded in, some specified numerically, some generated algorithmically and then processing them, attaching them and mixing them in a wide variety of complex ways.

Nuke's node display was written on top of the original Nuke, which was just a text file of one operation after another. The early Nuke compers had no interface, and sat there writing one instruction after another in a text editor, and then executed it in Nuke. If you open a Nuke script you'll find that even now it's just a text file of one thing after another.

Nuke's nodes are a way of graphically representing a simple, one thing after another list of instructions.

LW's nodes are a way of visually representing full-on programming. They replicate, visually, the entire structure of subroutines and main loops.

Node systems are just an interface. Nothing more, nothing less. In Nuke the node tree makes it a little easier to generate the list of commands, but it doesn't affect the way you think about the list at all. It can't. It's just an interface. Likewise in LW the node view does nothing to simplify the thinking. You still have to think exactly like a programmer.

If you don't think like a programmer, then having the node view in LW isn't going to help you in any way whatsoever. It's just an interface on top of a programming task.

If you *do* think like a programmer, then bully for you, it must be fabulous that they added a node view in LW to make generating all those little subroutines a breeze. You can get together with every single one of the other 14 people in the entire world that really does fully understand LW's nodes and happily make all kinds of lovely, complex nodal graphs that make LW do all kinds of wild things.

And you know what? The rest of us will all be truly impressed. We really, genuinely will. But we'll also be massively frustrated and p***ed off that the people developing LW thought it was acceptable to force everybody to use a thinly veiled programming system. It must have been great for them, because they didn't have to write any more interfaces for shaders or any of that *boring* guff. Yeah, cool, let's just make them program instead. Programming's cool.

Unfortunately, programming isn't cool. It's okay if you like it and it's vile if you don't.

If you don't like "programming" then you can't use SSS in LW any more, and you can't use any of the nice new shaders. And why not? Because there's no interface for them any more.

For years now this stuff has been driving people away from LW and over to packages where, if you want fancy shading for human skin, there's actually an interface and you just click on a button then enter numbers until you like what you see. Intuitive. Exactly like how LW used to work. Remember that?

Tobian
08-31-2011, 09:12 AM
WOOO I'm a programmer, command line typing into my nodes! Wait, when did that happen? Oh wait, it didn't :p

Sorry, but that's just not true at all. Yes, nodes can be used like programming, but it's incredibly limited in comparison to real programming (and if it was real programming, I can assure you I wouldn't and couldn't use nodes AT ALL). The real secret sauce with nodes is that most people who use them use them incredibly simply. The thing most people get in a knot about is when people do incredibly COMPLEX things with nodes... because they can, not because you HAVE to!

In your example with Nuke: all you are doing is taking an image and manipulating it through a series of filters and combining it with other images... exactly like you can do with LightWave nodes, and for the most part that's visual (unless you are using nodes which aren't supposed to be used for that), as many of the operators in LightWave are visual (go do the same thing in Maya and get back to me!). But the problem is this, as the comparison is really not valid at all: if you handled each texture channel as a separate series of linear chains, this would make most people's node workflows MORE complex, not less. And I'm sorry, but there's SO much more to do in a typical surface than there is in a compositing job: the fact that it's operating in 3D, the fact it's multiple texture channel layers, the fact there's alternative shader models, or materials. Case in point: the fact materials exist takes a LOT of the work out of it for people (with their simplified compound interface). The only reason people don't use them is because they CHOOSE not to, and that's the point, it's about flexibility.

It's not like the layer editor suddenly got broken either. Most things can be done in the layer editor, in terms of simple layered, linear compositing, but for many this is quite limiting and slow, and as soon as you want to do something more complex, there's no getting round that it's going to be complex, and you will hit a massive brick wall using that paradigm. If you want to do something simple, keep on using the classic surface editor and the layer channels.

Where I do agree with you naysayers is that the node editor could be simplified, improved and contextualised, to reduce the number of possibilities where they are not needed, but that is only ever going to be an improvement of the interface, not a reduction of the irreducible complexity of what it is doing. If you want to do something complex, then sorry, it's going to be complex, and I'd rather do it this way than have to write my own shaders!

RebelHill
08-31-2011, 09:23 AM
I don't want anybody (particularly RH!) to feel like I'm attacking them. I'm not. I'm disagreeing forcefully, but hopefully politely. Imagine it in a vigorous debating voice, not a shouting voice.

Not taken as such in the slightest... I hope my own comments are taken the same :thumbsup:


essentially, you're saying that you don't believe anybody else's brain works differently in any way than yours.

Essentially... yes, and that's not an opinion, it's a fact. The brain/mind is just a machine... like an aeroplane... and all planes are FUNDAMENTALLY the same... some are cut better for carrying heavy cargo loads, some for manoeuvrability, and others go supersonic. But still they all do much the same thing, based on the same basic parts/systems (or perfect forms, as the philosophical among us might say).

So yes, different people have brains that are better or worse suited to certain tasks (each individual having strengths and weaknesses compared to one another), but the core functions and abilities are the same (though not all to the same standard)... this, ofc, assuming we're talking about a healthy/fully functional brain. Just like all those of us with normal/functional legs can run... but we can't all run like The Bolt.


I've never seen a single shred of evidence that would suggest everybody out there is equally capable

Well there's TONS of evidence that proves just that (depending on how rigidly you want to define EQUAL, as per previous plane analogy)... As it happens (and some including me might say unfortunately)... my mother is a psychologist, and I spent a good deal of my teenage years locked in the bathroom reading her textbooks as she was doing her degree (in the downtime between masturbatory sessions ofc)... and there's no shortage of experimental, clinical evidence that this fact is indeed the case.

No, not everyone is capable to the same LEVEL in all tasks, but we all have some measure of ability in all the same tasks. It's like when folks say "I'm no good at learning languages"... it's complete nonsense... if you can learn one language, you can learn ANY. Chinese may seem like Greek to you (see what I did there), but had you grown up in China, or were you to go live there for a few years, you WOULD learn it just fine.

Add to this that my girlfriend is a special needs nurse who works with the mentally handicapped (and the things I've seen and learned being around her), and I can assure you that when a given individual brain is TRULY incapable of performing a particular type of task, the resultant individual is REMARKABLY different from your average person.


In Nuke the node tree is linear and from a given single-node starting point it sequentially does one thing after another to the image.

And LW's system can do the exact same. You can just go linearly from one thing, to the next, to the next, altering/adding to (for example) a texture/surface one node at a time... (and with VPR {or previously FP or VIPER, to slightly lesser extents} you can see each change as it happens on a node-by-node basis).

Ofc, it's capable of more than that... but if that's how one "gets" nodes, or wants to use them, then you can do just that quite happily.


A vast, irritating mess of different types of data that can only be plugged in here, can't be plugged into that, won't do the right thing here, must be piped through this to convert it so it can do that.

Yikes, don't go near Maya's Hypershade, or XSI's ICE then, you'll DIE!! But lolz aside, and to the point...

Ofc it's that way... the nodal system is designed to be able to work on lots of different things, not just textures... so what else would you expect?

Nodes can work on geometry (like displacements)... so that requires a vector data type (a triplet value of x, y, z)... How on earth would you connect that to, say, a specular input (0-1/min-max/however you wanna phrase it)? The idea of connecting the two defies logic itself. It'd be like saying that the area (m^2) of your living room somehow determines the colour of the carpet you want to lay, or that the temperature inside your fridge should influence the fat content of the milk in there... WTF??
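Put in code terms, the mismatch is just a type error (a rough Python sketch, with the names invented for illustration):

from typing import NamedTuple

class Vector(NamedTuple):   # a displacement-style triplet
    x: float
    y: float
    z: float

def set_specular(amount: float) -> float:   # expects a single 0-1 scalar
    return max(0.0, min(amount, 1.0))

normal = Vector(0.0, 1.0, 0.0)
print(set_specular(0.4))
# set_specular(normal)  # nonsense: a triplet isn't a scalar, which is
#                       # exactly why an editor would refuse the connection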


LW's nodes are a way of visually representing full-on programming. They replicate, visually, the entire structure of subroutines and main loops.

Careful... you'll upset Pooby and bring on another ICE rant:tongue:

But yes, ideally that's the sort of thing that LW's nodes can allow for (to whatever extent LW can manage such things, Pooby, quiet!)... but that's not the ONLY way they can be used. Again, to go back to the Afx example of multiple compositions of layers subsequently composed together in "second level" layers. You can use nodes either way.


Likewise in LW the node view does nothing to simplify the thinking. You still have to think exactly like a programmer.

Blah, blah Afx... precompose... yadda yadda... no, you don't HAVE to do any such thing. You get advantages over the standard layer system if you just use nodes linearly/sequentially/precompositionally, and if that's as far as you wanna go with them, fine, stop there, at least you've gained something. But if you want to go further and introduce some "tech" type thinking, you can do that too and get even more control.


Unfortunately, programming isn't cool. It's okay if you like it and it's vile if you don't.

Totally agree... I hate... HAAAAAAAAAAAATTTTTTTE programming. It does my bloody swede... but I get some of it, can apply a bit of it, and if that helps me get more out of the tools then groovy.


If you don't like "programming" then you can't use SSS in LW any more, and you can't use any of the nice new shaders. And why not? Because there's no interface for them any more... from LW and over to packages where, if you want fancy shading for human skin, there's actually an interface and you just click on a button then enter numbers until you like what you see. Intuitive. Exactly like how LW used to work. Remember that?

Nonsense... complete and utter nonsense.

For SSS just open nodal, pick your material... Skin, Simple Skin, Fast Skin... plug it into the Material input... double-click the SSS node, and boom... a lil interface where you can enter numbers/colours etc. exactly as you describe, and in a fashion that is no different whatsoever to all the previous shaders that have been in LW.

The only difference is that since the nodal system was fitted in, rather than mess around by spreading these "materials/shaders" into different parts of the interface, they're all contained in the one place, the nodal view. More compact, less mess, and better organisation for all. If you just want to stop with the one material node (no "network")... great, you can do that. If you wanna "layer" it up in the Nuke-ish linear manner you describe, you can do that too, and if you want to get in deeper with some codie/programmy type manipulations, you can do that too... but you don't HAVE to go to the fullest possible length, and can stop anywhere you want.

djlithium
08-31-2011, 09:23 AM
Max does not separate Color from Diffuse... they are one and the same in MAX, which drives me nuts since they are not the same in the real world. Diffuse is shading, or the level of color bouncing back due to all the other colors being absorbed; when a shadow comes over a surface, that's just another change in the amount of light bouncing off the surface. Which is correct to the real world. Which comes down to the question: what color is a RED ball, GREEN ball, or BLUE ball in a room with no light???

So think of the RAW channel in LW as the same as the DIFFUSE channel in MAX.

-Sherak

PS. 8 / 10 will get the answer wrong

I'm fighting this out myself right now with stuff that was textured for us by a Max artist and it's brutal. GRRR! Autodesk products! GRRR!
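To make Sherak's quiz concrete, here's a toy version of the color/diffuse split (plain Python, not any app's actual shading code):

def shade(color, diffuse, light):
    # color: what the surface could reflect; diffuse: how much bounces back
    return [c * diffuse * light for c in color]

red_ball = [1.0, 0.0, 0.0]
print(shade(red_ball, 0.8, 1.0))  # lit: [0.8, 0.0, 0.0]
print(shade(red_ball, 0.8, 0.0))  # no light: [0.0, 0.0, 0.0] -- no color at all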

lardbros
08-31-2011, 10:38 AM
I'm fighting this out myself right now with stuff that was textured for us by a Max artist and it's brutal. GRRR! Autodesk products! GRRR!

I'm on the other side of the fence, looking in and going grrr, lightwave ... grrrr! I can't understand why Lightwave does it the way it does, and worry I'll never remember all the differences each time I come to do some passes in Max, and others in LW!

This is where we need a pass system that you can load up and it will automatically have all your passes there... ready to go!

stiff paper
08-31-2011, 11:26 AM
@RebelHill "Well there's TONS of evidence that proves just that (depending on how rigidly you want to define EQUAL..."

Well, okay, that moves unavoidably onto a much more expansive topic, but if I can attempt to limit its scope, just for now...

I've had cause to try to introduce a number of people to the basics of 3D along the way at various places of work...

Hmm. I'm not sure how to... wait, let's try this: have you ever come across the person who can't tell which way around a wireframe is rotating in the perspective view? I have, and it's pretty awful. I don't mean it's awful for them, I mean it's not as if anybody *has* to do 3D, it's not a fundamental requirement of life or anything. I mean it's awful because there's nothing to be done about it. It doesn't matter how much you point at the vertices at the front and explain perspective. It doesn't matter how much they practice looking at rotating wireframes. All that happens is four months later they're sacked because their supervisor has just realized they can't "see" 3D.

Ten years ago (or thereabouts) Dreamworks decided to force all their old-school 2D animators to use Maya and do all their animation in 3D from now on. They all went to training. There were a lot of people. Some of them were fine. But some of them... well I don't think I've ever *seen* people be that miserable. I was talking to grown men who were in tears because they simply couldn't "see" 3D properly and they could tell they were never going to make it as 3D animators.

Imagine me flapping my arms about now and being bewildered as I say: people just aren't all as capable of doing any given thing as anybody else is. They just aren't. It doesn't pan out that way in practice. It would be nice if it did, but it doesn't.

@RebelHill "Yikes, Dont go near Maya hypershade, or XSIs ICE then, you'll DIE!!"

Yes, I know. (If LW goes away or gets any less pleasant to use I'll never do any 3D again. I'll find work as a comper instead.)

That does bring me on to another point about LW nodes. A few years back now, I actually had a couple of LW old-timers say to me, roughly, "If I have to do all that node cr*p in LW now then why the hell aren't I just using Maya?" right as they made the switch over. I suspect LW devl really thought nodes were a reason for people to stay with LW, but my experience was that nodes were a reason why people thought "There's no difference in usability now so I might as well go where the jobs are."

@RebelHill "It'd be like saying that the area (m^2) of your living room somehow determines..."

I don't think we shouldn't have nodes in LW. I just think they should be just one available tool, and 99% of the time for anything where people just want to input some values and add a couple of maps (like, say, a skin shader or a displacement map from ZBrush) there should be a simple, old fashioned LW interface panel.

@RebelHill "Totally agree... I hate... HAAAAAAAAAAAATTTTTTTE programming."

Well this is probably the moment when I should confess that I used to be an assembly language programmer. (Professionally. For a few years. Not a pretend programmer in my bedroom.) So I do understand programming, or at least I understand how to think about programming. And you may say you hate programming, but your training videos and the fact that you don't mind LW's nodes both demonstrate a grasp of technicalities that suggests you'd have an affinity for it.

In the end, despite several people being quite vocal about how straightforward LW's nodes are, surely nobody can deny that an overwhelmingly large majority of LW users doesn't "get" nodes and doesn't use them for much. And I suggest, respectfully, that any aspect of a piece of software that's alienating and baffling the majority of its users is a failure and needs to be looked at. The nodes themselves are fine, for maybe 10% of LW users. Meanwhile, can the rest of us have some good old fashioned interface please?

@Tobian "If you want to do something complex, then sorry it's going to be complex, and I'd rather do it this way than have to write my own shaders!"

Well, yes, I agree with that. But as it stands there are now important non-complex (in fact, very simple) things that can't be done in LW outside the node editor (like displacement from a ZBrush map). Years back, when LW devl announced that from now on they weren't going to add anything new to the traditional LW surfacing method and everybody now had to use nodes, do you think they did that for our benefit or do you think they did that because they were too damn... umm... let me rephrase that... or do you think they did it to save themselves effort?

Edit:

...
Things are currently feeling a bit... apocalyptic, in LW land. Plainly, *something* hasn't been impressing the users for years now and they've been voting with their feet. Unfortunately I still don't see that LW devl has accurately worked out what's been going wrong. I honestly think that nodes... well it's not that they're specifically the problem at all, but it's that they're symptomatic of something that might be. LW's strength was always that it was much more plug n play than others.

You want to put some maps on that cube? Sure, just click a few buttons. Done.

LW desperately needs things like a tool for applying ZBrush displacement maps with three clicks and a texture load. That's what LW was all about when it was at its most successful. It desperately needs to work in a way that allows people to apply a skin shader by making three clicks in a window and typing in two numbers.

RebelHill
08-31-2011, 12:45 PM
have you ever come across the person who can't tell which way around a wireframe is rotating in the perspective view?

Yes... ME!!!

This is a very common optical illusion, and interestingly enough, it's one that's been used many times experimentally... most commonly as a cube, but also as a wireframe of a human face... does it appear convex, or concave??

And ultimately what this test proves is that all humans interpret visual information via the exact same set of mechanisms... so that very point you make is actually a point of proof that the piece of our brain that does that is the same in all of us. The reason the illusion happens is because, with only a 2D representation, the brain has to make a choice as to whether it thinks it's facing one way or the other, and the result of that choice is what we visually "see".

But like any such minor idiosyncrasy of our functions, and more importantly because it's a CHOICE (albeit a subconscious one), and not a difference in the actual mechanics, you CAN train yourself around it.

When I first started with 3D, I would have this problem all the time. I still do from time to time (usually if my mind's wandered off) but nowhere near as often as I used to. The fix for me is to turn away... blink repeatedly like I'm having a seizure, TELL myself which way it's facing, and look back.

I guarantee you that if those guys who "couldn't" see it had persevered, their brains would eventually have figured it out, and corrected for it. Maybe there's a small minority who would never have gotten it... but I promise it would have been very small.

The reason they couldn't "see" it wasn't because they just plain couldn't... it was because they'd not had enough previous exposure to it, so their brains had never learned how.


had a couple of LW old-timers say to me, roughly, "If I have to do all that node cr*p in LW now then why the hell aren't I just using Maya?" right as they made the switch over. I suspect LW devl really thought nodes were a reason for people to stay with LW, but my experience was that nodes were a reason why people thought "There's no difference in usability now so I might as well go where the jobs are."...
...I just think they should be just one available tool, and 99% of the time for anything where people just want to input some values and add a couple of maps (like, say, a skin shader or a displacement map from ZBrush) there should be a simple, old fashioned LW interface panel.

Works both ways... I'd wager just as many (or nearly as many) people would, had LW not added nodes (or other functions, Poobs, you can chime in now), have thought: sod this, LW stagnates while others move on, I'm jumping ship.

Again, at the end of the day... you don't HAVE to use nodes... they're there if you do, ignore 'em if you don't.

Well, OK, maybe you HAVE to go in there sometimes... like if you wanna use Simple Skin, or CarPaint... but all you have to do is hook up one connection and double-click the node, and there is your bog-standard, age-old LW interface... piece of cake. If someone really is CHILDISH enough to say that those two clicks in a "new" window are too much/too difficult... then screw 'em I say, they're idiots.

And the idea that such things should have to be doubled up in the interface to make up for such folks is frankly ridiculous... doing things like that would only serve to slowly, and over time, turn LW into yet another cumbersome piece of bloatware (not unlike Maya, lol).


you may say you hate programming, but your training videos and the fact that you don't mind LW's nodes both demonstrate a grasp of technicalities that suggests you'd have an affinity for it.

Hey, I grasp the technicalities of gardening... still hate it more than words could ever express.


can the rest of us have some good old fashioned interface please? It desperately needs to work in a way that allows people to apply a skin shader by making three clicks in a window and typing in two numbers.

You have and it does... as I said... hook up your material node (that's 3 clicks), double-click the node to open its standard LW interface, and just carry on as you ever did, and just ignore the fact that it's all happening in a nodal window... cos if that's as deep as you're going, then the rest of the nodal window and workflow is of no consequence anyway.

Badda Bing!

jeric_synergy
08-31-2011, 01:34 PM
Essentially... yes, and that's not an opinion, it's a fact. The brain/mind is just a machine... like an aeroplane... and all planes are FUNDAMENTALLY the same...
{{huge eyeroll}} Obviously untrue. Entire schools of pedagogy are devoted to how to address those people that think differently in a general population.

The fact that 3d attracts a tiny sliver of the population just narrows it a bit, but I think that my and Mr. Rid's testimony here should suggest to you that you're absolutely wrong in this.

Are you a musician? If so, no doubt you've had the experience of running into those people for whom music is a snap, while others struggle, and others just completely fail. (I'm in the 'struggling' camp.) To blithely assert "Oh, everybody can get nodes!" is just insulting to the effort we've PUT INTO 'getting it'.

We're the equivalent of 'tone deaf', we're 'node blind'.

For me, the pertinent question is: is there a way to overcome this, either technologically (better nodes), or pedagogically (better learning aids)?

Philbert
08-31-2011, 01:44 PM
I was going to mention this odd analogy myself. Yes, a fighter jet and a cargo plane are the same at their core: they both have engines, wings, and a cockpit. But that doesn't mean an F-14 can somehow carry freight around just because it tries harder.

RebelHill
08-31-2011, 01:49 PM
I think that my and Mr. Rid's testimony here should suggest to you that you're absolutely wrong in this.

Well, then you'll just have to forgive me... but I'll take the results and conclusions of decades of clinical study by psychologists and neuropsychologists over subjective self-observation.

If you'll read what I wrote, I never said that everyone can "get" things to the same level... I said that some will struggle more than others... but that fundamentally no-one is totally INCAPABLE unless some part of their machine (brain) is broken.

It's like me asking what the square root of 69,274 is...

Anyone who's learned basic numeracy can figure it out in their head... some people will be able to tell you the answer instantly... others may take hours, days, or even months to do the sums.

Same abilities, different levels of that ability.
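For the record, the sums themselves are no mystery... guess, average your guess with the number divided by it, repeat. A quick toy sketch of that in Python (Newton's method, nothing LW-specific):

def newton_sqrt(n, guess, steps=4):
    for _ in range(steps):
        guess = (guess + n / guess) / 2.0   # average the guess with n/guess
    return guess

print(newton_sqrt(69274, 263))   # 263^2 = 69169, so 263 is a fair start: ~263.1995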

Lightwolf
08-31-2011, 02:30 PM
You have and it does... as I said... hook up your material node (that's 3 clicks), double-click the node to open its standard LW interface, and just carry on as you ever did...
Oy, don't do that. Expand the right side of the node editor and trim that down to a single click. :hey:

Sheesh, some people... :D

Cheers,
Mike

stiff paper
08-31-2011, 05:55 PM
@RebelHill ...

Well, I guess we fundamentally disagree on some very basic matters. Obviously it would be nice from my point of view if I could persuade you that at least some of what I say holds true, and I'm sure you feel the obverse, but I can't see it happening, and besides that, I don't feel any need to have everybody agree with me. It's just my analysis of things, nothing more.

This part:

And the idea that such things should have to be doubled up in the interface to make up for such folks is frankly ridiculous...
does make me kind of sad though, because from my point of view this approach will almost inevitably lead to LW eventually losing too many users to remain viable.

RebelHill
09-01-2011, 03:55 AM
Well I guess it depends on what you're wanting to persuade me of...

If it's that there are things, like nodes, that some folks are completely incapable of understanding and using to any level... all the science of cognition says otherwise, which I find more persuasive.

If it's that some folks find nodes so alien from the "traditional" LW to the point that they can't even hook up one or two nodes in a plain linear fashion to use materials... or that folks are unable to use these material nodes without some "nodal/programmatic" type thinking, because there's no traditional interface available... then you can't, because it's blatantly untrue... the interface is right there.

If it's that the program could or should offer the same tools in multiple places in LW... via different interfaces within the program... then fair enough if that's what you think... but it seems nutty to me, and I can't for the life of me think of a single program I've ever used that has had something akin to this inside it, and I'd assume for the same reasons I think it's a bit nuts (namely being redundant and bloaty). In all honesty I'd have to see it in action to make a final judgement though.

But if it's that some old-time LW users find themselves becoming more disenchanted with the package because they see it as becoming too convoluted or complex to get things done... then sure... I believe you just fine (you say you don't like apples, no probs, I buy it... it's your taste buds).

However, accepting that fact does provoke one glaring question about their abandonment of LW, which is...

"Where did they go?"

Maya, Houdini, Blender??

If anyone ditched LW for one of these packages because they found LW was becoming too complicated, then I fear they probably received a VERY rude shock upon arrival at their new destination. Unless ofc they decided to leave "complicated" 3D to go do video editing... or open a coffee shop. In which case, I'd think their perspective is irrelevant to development anyway.

lardbros
09-01-2011, 06:22 AM
This thread is getting a bit mental (get it... do you see what I did there?) :D

As far as psychology and neuropsychology go, it's all well and good that these doctors have written books or papers on all this stuff, but simply put, it's still ALL based on what THEY have studied. The fact is, it's only because they have a PhD in a subject that they get to write about these things and have other students study them. If you read through their books, most of it is common sense... the fact that autism could be caused by brain damage at a certain age (could be obvious)... some psychologists believe it may be nurture that brings about the effects of autism or Asperger's (not so obvious, but I could have come up with that theory). A psychologist doesn't HAVE to come up with this stuff, but they do, and they write about it, other people study it, it is deemed correct at time of printing, but is then invalidated/validated years later by other books/papers/experiments... I could quite easily come up with my own theories or hypotheses, and they'd be just as valid... but I can't make students study them because I'm not qualified in that area. :D Just being pedantic, and slightly facetious ;)

What I'm trying to say is... we're all human (well, most of us), and we all have a pretty good idea of what other humans are capable of; we see them and interact with them every bloody day! Some of us in the office do programming, some are creative, one guy is half and half... but I could NEVER be a programmer, I just don't think that way. The reason why? My parents weren't programmers, they were artists/architects, and their parents were similarly more creatively minded people. The programmer in our office can't design, and won't ever have an interest in it either. (Not saying that programmers aren't creative here... the stuff they do is very creative indeed, but in a different way to arty creative.) If I'd been brought up to do maths/engineering, I'd probably have a better chance of being that way inclined now, but I wasn't, and I therefore find the technical stuff just doesn't go into my head unless I really know the deep and dirty details of why I'm doing things.

I, for one, can play a violin; I haven't touched it in around 10 years, but I could still pick it up and play it because I could always sight-read... my identical (and mirror) twin brother used to play the piano... but could he play it right now? NO, he used to memorise the music and then play it... he used to practise more than me, until he knew it all off by heart. Not me... I never practised, yet could pick up a piece of music and play it. (I couldn't memorise music at all... even the first bar would stray from my mind.) We're a good example of the differences in a brain. He is left-handed, I am right... we are both creative, but use it in different ways. He does 2D... I do 3D.

I do believe that anyone can accomplish anything, as Rebel Hill suggests... just some find it harder than others to grasp.

Back to nodes... it's the grasping of the concept that is the hard part. I've learnt a lot about nodes since they were introduced. The material nodes are indeed very easy to use... and I use them over standard layers all the time. But constructing a complex dielectric shader from scratch, using nodes, with an etched pattern on the side that gives the glass softened/blurry refraction... it would take me longer to figure out using nodes, and I wouldn't necessarily find the most logical solution either. I could do it... but a logically minded person would find fault with it :D This has happened here at work... my brain tries to think too hard about things, and never seems to figure out the easier, most logical path.

So, how to solve this?

Have a system that works for both!? Yes. Antti is clearly VERY smart, and his work on the node editor is a very dumbed-down version of what he would probably rather use (making assumptions here)... but to someone like me it's still not perfect. The names of the nodes can be confusing, the inputs... what the hell do some of them do? How do I make a blurry reflection node that cuts off the amount of reflection when it reaches a certain number of bounces, or one based on Fresnel (the less reflective, the less detail), or something? Anyway, these are things that I know the principles of, and I have an understanding of the ideas involved in how to speed up rendering, but I wouldn't have any idea where to start in the LW Node editor! I've tried, but when you go through the help files and see:

Spot Info Node:
"The spot info node allows access to all information regarding the current hit point"

Great... no example at all... don't have a clue where this will work with what, when to use it, how to use it etc, etc.

I have used it in creating displacements in the nodal displacement thing, but only after following a tutorial on where to use it. I'd ideally need to see an example of each output of this node and where to use it. I have tried blindly plugging bits into other bits to see what happens, and it just doesn't help... and is a massive waste of time. Other people in here KNOW what each thing does, and tell other people, which is great... but how do they know?

Rant over :D

lardbros
09-01-2011, 06:31 AM
@jeric_synergy "{{huge eyeroll}} Obviously untrue. Entire schools of pedagogy are devoted to how to address those people that think differently in a general population... We're the equivalent of 'tone deaf', we're 'node blind'... is there a way to overcome this, either technologically (better nodes), or pedagogically (better learning aids)?"

Good post, and pretty much how I feel too :D

RebelHill
09-01-2011, 07:11 AM
As far as psychology and neuropsychology go, it's all well and good that these doctors have written books or papers on all this stuff, but simply put, it's still ALL based on what THEY have studied.

Well, yes and no... the points I'm trying to make have to do with the mechanics of cognition, and aren't just theories based on logic or common sense; these are KNOWN things based upon experimentation (which is ofc repeatable)... and ofc, the things I've stated aren't really MY opinions... they're things I've seen evidential proof of... kind of like me stating that the Earth goes round the Sun... not really MY opinion.

But back to the topic...


Back to nodes... it's the grasping of the concept that is the hard part... my brain tries to think too hard about things, and never seems to figure out the easier, most logical path.

So, how to solve this?

... but wouldn't have any idea where to start in the LW Node editor! I've tried, but when you go through the help files and see:

Spot Info Node:
"The spot info node allows access to all information regarding the current hit point"

Great... no example at all... don't have a clue where this will work with what, when to use it, how to use it etc, etc.

I have used it in creating displacements in the nodal displacement thing, but only after following a tutorial on where to use it. I'd ideally need to see an example of each output of this node and where to use it... Other people in here, KNOW what each thing does, and tell other people, which is great... but how do they know?

I do that first thing many times myself... find a path to a solution, only to revisit it later and see how it can be simplified... and so on and so forth. What's clearly important is finding that path in the first place... ANY path, even if it's an unnecessarily convoluted one (the long way). Almost no-one (apart from the odd rare genius that pops up every couple of centuries) can see the most direct route through a complex system off the bat.

Now as for the explanation/documentation/whatever of these "parts" (nodes)...

Now that IS a trickier one... I'll give y'all that, no argument at all. When nodal was first introduced I found many of the documented explanations more than a lil mystifying... but through re-reading, and re-reading, and turning it over and over in my mind over a period of time, things started to clear.

So part of the answer is, I think, TIME, on the part of the user (which harks back to what I've been saying about effort in <> productivity out). If at first you don't succeed, and all that.

Now the Spot Info IS a good example, and I keep seeing it mentioned by folks again and again... the fascinating thing about Spot Info is that it is DECEPTIVELY SIMPLE... it's this amazing simplicity that gives it its FLEXIBILITY of usage, and it is, most likely, that very flexibility that makes it APPEAR complicated.

I certainly agree that examples of use would be helpful... how can I use it... In what situations would I use it???

But herein lies the hitch... "access to ALL information regarding the current hit point"

So we know that the "hit point" must be a piece of geometry... a poly, a point, whatever (but clearly not empty space as there's nothing there to "hit").

And ALL information... so that could be colour, visibility to the camera, to a given light, or to another piece of geometry... it could be the normal... the base geometric normal, the normal after subdivision... the normal after surface smoothing. It could be the level of illumination received, or how strongly occluded by a shadow... it could be the specular level... it could be the angle between that point and the camera, or some other geo... and we could go on, and on, and on just listing all the possible pieces of info that could potentially exist at a given hit point.
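In code terms you could picture the hit point as a little record of everything the renderer knows right there... a rough Python sketch, with field names invented purely for illustration (not LW's actual internals):

from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class HitPoint:
    position: Vec3           # where on the geometry the ray landed
    geometric_normal: Vec3   # the raw, unsmoothed normal
    smoothed_normal: Vec3    # the normal after subdivision/smoothing
    incidence_angle: float   # angle between the surface and the eye ray
    illumination: float      # light arriving at this point
    # ...and on and on: occlusion, visibility to a given light, angle to other geo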

Now that list itself could be long enough... but then to provide examples of ALL (or even a large and expansive number) of the possible situations it could be used in... HOW LONG would that set of examples have to be?? I mean, you could probably fill a phone-book-sized tome on just that one node alone. Is that really gonna be helpful... ok, maybe... is it gonna be easily achievable to put together such a great volume... not really (and you can guarantee that just after going to press you'll think up or discover a new situation it could be used in).

So to me... it's much like saying...

"This is yellow... it's a nice friendly colour that usually inspires a nice calm feeling"

to wit, you reply...

"Ok... but under what circumstances would I use it? Should I use it in landscapes, or portraits... what about if the landscape is an industrial scene... should it go there?"

Or maybe it's more like cooking...

"Here's mayonnaise... it's thick and creamy, but with a punchy, savoury tang"...

How do I KNOW to put it on chicken or tuna fish, but not in a cheesecake, or drizzled on chocolate ice cream???

Damned if I know... intuition I guess, which is the result of repeated experimentation and experience of using it in different foodstuffs. Should there really be a cookbook that lists EVERY possible combination of recipes that use mayonnaise, and even if there were, would there not then need to be the same for every other possible ingredient that could find its way into my kitchen? That's a lot of cookbooks.

So I think it's kinda like the thing I talk about in the intro vids to my rigging course...

There's no way to TEACH creative thinking... there's no way to explain all these different examples of how things could or couldn't work. The best that can be done is to try and give as comprehensive and clear an overview as possible... but ultimately the rest has to be up to the user.

And to help with that I can only advocate the same approach I recommend for rigging...

Think about what it is you're trying to achieve, first as the big picture, and then break that idea apart into all its little component parts, until the parts can be broken down no smaller, and then try to take each of those parts and "build" your way up to the greater whole that you had envisioned.

The first time it may work but be a bit crappy, the second it'll be a little better, the third a lil better still, and so on, and so on.

Information is something that's easy to spread between individuals... UNDERSTANDING is, I fear, something you have to discover for yourself.

pooby
09-01-2011, 07:41 AM
I've had direct experience in this area regarding learning ICE in Softimage...
I have a lot of tutorials up simply explaining how it works. Many artists are reluctant and start with the same argument: it's not for me. I don't get nodes. It's too technical. You have to have a certain type of mind, etc.
The reason people who do actually try like my tutorials is that I came from EXACTLY the same standpoint. I can locate posts of mine from two years ago, before I learnt ICE, making the same arguments.
I know what makes you think like that. I would be willing to guarantee and put money down that I could take ANYONE with an average IQ and teach them ICE in a week, to where they understood the basic concepts to a point where they could start making their own stuff and actually enjoy it.

What puts you off this stuff is things not connecting and/or erroring... not understanding the logic of the flow and why things won't connect. It's extremely frustrating and gives the impression that there is a big incompatibility between your cognitive powers and the input that the nodes require. My experience, based on feedback from the many that have seen my tutorials, tells me that's not the case. Many who previously thought it was simply too hard and too obscure are now using ICE.

It's a combination of keeping hope that it's possible, and not letting failures and frustrations destroy that hope... this is why good tutorials help. They will hold your hand through the minefield of errors, whilst explaining why they are taking those particular steps. If done well, you can continue walking on your own.

RebelHill
09-01-2011, 08:00 AM
...start with the same argument: it's not for me. I don't get nodes. It's too technical. You have to have a certain type of mind, etc.
The reason people who do actually try like my tutorials is that I came from EXACTLY the same standpoint. I can locate posts of mine from two years ago, before I learnt ICE, making the same arguments.

Yes... I agree here fully.

In case anyone thinks I don't understand the emotional feeling that "I'll never get this"... you're wrong... I know that feeling just as well as anyone else, and like everyone else I have run up against it many times in my lifetime, not just related to 3D or computing (one prime example being in my early 20s, when the only job I could find was doing manual labour with heavy lifting, and me weighing in at 125lbs).

But I guess what helps make up the gap for me is that I weren't raised to be no quitter... I was always taught that anything worth doing required grit and graft, that "I can't" was NOT acceptable... and that the FEELING you couldn't do something (no matter how deeply felt and convincing) was just that... a feeling... a lil devil sat on your shoulder lying to you and breaking your confidence in yourself before you'd given yourself a chance to build any up.

A "can-do" attitude wont work miracles, turn u into heracles nor einstein... but it'll get you further than you might otherwise imagine.

Oh, and also to backtrack a lil to Jeric's post... I can understand, to an extent, how my "you should try harder" can at face value be taken as an insulting view... but I ask you this...

If you read back through all the things I've said, HOW is it in any way insulting for me to be saying that I believe you are capable of more than you credit yourself with?? That I have faith in your abilities above and beyond that which you have in them??

Now Pooby... Obviously you've got all your ICE tuts (which are great ofc), which tackle many of these "thought process issues" raised in Lard's previous post and my response... I'd be greatly interested to hear your thoughts on, not necessarily the learning process, but the teaching/documenting process. Although I think I've laid out where I think the stumbling blocks are, I'd very much like to see some further input on this, as it's not a topic that has a great level of discussion, nor recognised procedure, surrounding it... so... any thoughts?

pooby
09-01-2011, 09:24 AM
When it comes to learning a new paradigm, I think that a lot of teaching skips out essential nuggets of information that are assumed too simple or atomic but, for the newcomer, are essential.
For example, with ICE, there was nothing explaining even what a vector is, or an array, or a boolean, etc. They just assume that people who want to learn it will at least know those basics.
Even when you google them and find out, you need simple practical examples of them in use to see why you would need them.
I think learning is best done in tiny chunks that are clear, and progress on from the last, repeatedly applying these chunks in ever so slightly more complex configurations whilst slowly adding new ones as the old ones have sunk in.
So much documentation is just the dry facts, and much learning material skips over essential puzzle pieces, leaving the solitary user with the impression that everybody else understands it apart from them.
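For instance, those atomic bits each fit in a line, with a practical use attached (plain Python, purely illustrative):

position = (1.0, 2.0, 0.5)        # a vector: three numbers naming a point/direction
heights = [0.1, 0.4, 0.4, 0.9]    # an array: one value per point/particle
lift = max(heights) > 0.5         # a boolean: a yes/no switch you can branch on

if lift:                          # raise every point, but only if the switch is on
    heights = [h + 0.1 for h in heights]
print(position, heights, lift)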

lardbros
09-01-2011, 10:49 AM
When it comes to learning a new paradigm, I think that a lot of teaching skips out essential nuggets of information that are assumed too simple or atomic but, for the newcomer, are essential. ... So much documentation is just the dry facts, and much learning material skips over essential puzzle pieces, leaving the solitary user with the impression that everybody else understands it apart from them.

I do agree... and believe me, the harder we try, the easier things become. I've been trying the node editor and of course am understanding it on a higher level than I did at the beginning, but I, and people like me, need something to click in our minds, and then suddenly it will all be easy. I haven't reached this click/eureka moment in LightWave... yet watching Pooby's ICE tutorials made more sense to me than a simple texture editor in LightWave :( I have tried to break apart people's node flows in LightWave when I have the chance, and they make sense... and help me learn. Just the Spot Info node is confusing me somewhat.

prospector
09-01-2011, 11:54 AM
I don't think it is the nodes in and of themselves that are the stumbling area... it's the tree formation of the nodes.

There should be some basic trees that will give you certain results, mainly the basics like in the layers panels.
Add a tree setup, then add pics, and the basics are done.

Tweak from there.

I'm looking at my 'add node' pulldown, and I see a hundred or so different nodes, and I ask "where the h-e-double-hockey-sticks do I put them?"

Once a basic tree is formed I can usually just add a node here or there to see if it works, if the name of the node makes some kind of sense for tweaking something.

The basic tree is where everyone seems to find the first stumbling block.

It's like the first problem in the thread...adding displacement.

I started following the post showing the pics of a setup, and already a problem... my window has no surface node like the one shown to plug the image node into.
I checked all the 'add node' dropdowns and there is no surface node that I could find... so now I have to find out why my node setup is different, even though the posted pics otherwise match mine.

There should be a preset tree for every matching layers panel in the surface editor, which we can then tweak to our hearts' content by plugging in and testing other nodes somewhere.

And they should all match for the info in each.

Right now I can drive things with a weightmap in every panel in the surface editor's texture buttons... and yet in the texture button on the object properties panel... I can't.

There is no weightmap submenu... it's the only panel that doesn't have it... and it's the one I need to have it.

So after I figure out why my node editor has no surface node like the pic earlier in the thread...
I then have to test all the nodes I have listed (whether named coherently or not), to see if somewhere I stumble upon a way to get weightmaps to work with displacements.

A time-consuming proposition, to say the least.

So yes, tree presets preinstalled in LW would be very time-saving for those who have trouble with nodes. At least we'd have a place to start.
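For what it's worth, the weightmap-driven displacement chain prospector describes boils down to something like this (a rough Python sketch with invented names, not LW's actual API):

def displace(points, normals, weights, scale=0.05):
    # Push each point along its normal, scaled by its weightmap value.
    out = []
    for p, n, w in zip(points, normals, weights):
        out.append(tuple(pc + nc * w * scale for pc, nc in zip(p, n)))
    return out

points  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
normals = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
weights = [0.0, 1.0]   # the weightmap: 0 = leave alone, 1 = full push
print(displace(points, normals, weights))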

Philbert
09-01-2011, 09:10 PM
I would be willing to guarantee and put money down that I could take ANYONE with an average IQ and teach them ICE in a week, where they understood the basic concepts to a point where they could start making their own stuff and actually enjoy it.

I would love that with nodes. I think the problem is that nobody is teaching them, aside from a few very specific tutorials.

jasonwestmas
09-01-2011, 10:02 PM
Yes... ME!!!

This is a very common optical illusion, and interestingly enough, it's one thats been used many times expreimentally... most commonly as a cube, but also as a wireframe of a human face... does it appear convex, or concave??

And ultimately what this test proves is that all humans interpret visual information via the exact same set of mechanisms... so that very point you make is actually a point of proof that the piece of our brain that does that is the same in all of us. The reason the illusion happens is because with only a 2d representation, the brain has to make a choice as to wether it thinks its facing one way or the other, and the result of that choice is what we visually "see".

But like any such minor idiosyncracy of our functions, and more importantly because its a CHOICE (albeit a subconscious one), and not a difference in the actual mechanics, you CAN train yourself around it.

When I first started with 3D, I would have this problem all the time. I still do from time to time (usually if my minds wandered off) but nowhere near as often as I used to. The fix for me is to turn away... blink repeatedly like Im having a seizure, TELL myself which way Its facing, and look back.

I guarantee you that if those guys who "couldn't" see it had persevered, their brains would eventually have figured it out and corrected for it. Maybe there's a small minority who would never have gotten it... but I promise it would have been very small.

The reason they couldn't "see" it wasn't because they just plain couldn't... it was because they'd not had enough previous exposure to it, so their brains had never learned how.



Works both ways... I'd wager just as many (or nearly as many) people would, had LW not added nodes (or other functions, Poobs, you can chime in now), have thought: sod this, LW stagnates while others move on, I'm jumping ship.

Again, at the end of the day... you don't HAVE to use nodes... they're there if u do, ignore 'em if u don't.

Well, OK, maybe u HAVE to go in there sometimes... like if u wanna use simple skin, or carpaint... but all u have to do is hook up one connection and double-click the node, and there is your bog-standard, age-old LW interface... piece of cake. If someone really is CHILDISH enough to say that those 2 clicks in a "new" window are too much/too difficult... then screw 'em I say, they're idiots.

And the idea that such things should have to be doubled up in the interface to make up for such folks is frankly ridiculous... doing things like that would only serve to slowly, and over time, turn LW into yet another cumbersome piece of bloatware (not unlike Maya, lol).



Hey, I grasp the technicalities of gardening... still hate it more than words could ever express.



You have and it does... as I said... hook up your material node (that's 3 clicks), double-click the node to open its standard LW interface, and just carry on as you ever did, ignoring the fact that it's all happening in a nodal window... 'cos if that's as deep as you're going, then the rest of the nodal window and workflow is of no consequence anyway.

Badda Bing!

When I started sculpting in ZBrush 1.5 I would have the worst time trying to tell if the shading of the model was pushing in with the brush or pushing out. I think it was a balance of adding visual indicators in the material shaders and positioning the lighting that helped me better see what I was looking at depth-wise. I also prefer modeling with the color blue; it just agrees with my eye's depth perception more than greys or other colors. Plus it's more calming. =)

For the anti-node and anti-scripting crowds: that's just a silly attitude to have, and that's a fact. ;) You are simply adding more channels to your arsenal and then adding, subtracting, multiplying, and dividing their values automatically... and LW does that nodal math all for you.

If you have a viewport and VPR, you can see your results right away when using nodes, giving you visual examples of the values you enter. Everything else is just remembering which channels mean what; there are a lot of them to remember, but that doesn't mean ten of those channels wouldn't be useful to you and give you a result closer to your project's needs or your own artistic vision.

So if I were given a choice of more channels to play with or fewer, I really would choose more channels, wouldn't you? If you want to live in the past, you still have your older versions of LW to use anyway.

lardbros
09-02-2011, 12:40 AM
Jason, I'm CERTAINLY not with the type who don't ever want LightWave to change and move forward... I'd love a fully nodal system like ICE. I've used nodal systems before for animation, and even scripting, and they just make more sense and are more flexible. The biggest difference between those systems and LightWave is that with each node in Virtools, for instance, you can click on the node, press F1, and get the help system straight onto that node with examples of how to use it. Brilliant!

Would anyone here be kind enough to give an example node flow of what the Spot Info node could be used for? Just something a bit odd, rather than pumping the geometric normal out for the displacement... As RebelHill says, it can be used for millions of things, but I just don't know what the application of any of it is, or could be!

The weird thing is, I only struggle with all of this in texturing. The videos bryphi has done for dpont's displacement node make complete sense to me, but texturing seems arbitrary to me right now; I haven't made a 'link' yet that makes it click for me!

dpont
09-02-2011, 02:13 AM
Is it not just possible to do this without nodes?
Using Bump Displacement like Jen, 0% Bump, image map
as first layer, a Gradient as second layer set to "Previous Layer",
with a negative<->positive range for remapping,
adjusting the conversion in the Gradient or Bump
Displacement Amount.

Denis.

jasonwestmas
09-02-2011, 07:50 AM

What got me into the LightWave node editor was that I wanted to create different shading characteristics (skin, metal, fiber, ceramic, rubber, patterns, etc.). I was using a lot of texture maps for specific parts of a watertight model, and using 20 different surfaces with 20 different UV maps just slows the workflow when dealing with a single object. To me, that's really old-school to do with modern hardware and texturing programs.

To simplify all this, I wanted a single displacement map with a single UV map for a single object to define the detailed geometry using a single TIFF image. Then I wanted to create a complicated surface on top of that single displacement. Doing this in a single node editor was the quickest and cleanest way, without jumping between different surfaces, which doesn't work that well when you have a single piece of geometry with a single displacement map. Like I said, I wanted to work on one surface per subject, because then I could see my entire network within one editing environment, the node graph/tree. On top of that I get ultimate precision with the math nodes, and graphically the nodes are just naturally friendlier to me. And no, I didn't start with nodes when I began my 3D journey. I started with ZBrush and a little LightWave 7. So when LW9 came along I got pretty excited that it supported ZBrush displacements.

Maybe that's just a rare and advanced situation that most LightWavers don't come across much, but I personally use it all the time, especially with character costume setups, dynamic furniture material relationships, and other props.

So it really is the case (for me) that I use the node editor because it lets me create several materials on a single piece of geometry in a single editing environment and lets me see more at once, granting me more interactive behavior in the interface. Plus we get more channels and options within the nodes themselves.

lardbros
09-02-2011, 10:54 AM
I hope people don't think I'm stupid; I can use the node editor for all the things the layer one can, and if I get stuck I use a Layer node (adding that was a stroke of genius)... In fact, many things are easier in the node editor, especially the stuff using UVs and most things combining image textures with procedural textures.

I think I just struggle with some parts of it. I'll keep hammering away and see what I learn.

jasonwestmas
09-02-2011, 11:31 AM

Nah, and I hope people don't think I believe I have some super gene for using nodes. I'm really not that complicated. I actually use nodes to simplify a slightly more complicated idea.

Cageman
09-02-2011, 11:39 AM
Good stuff to look at and get inspired by...

http://www.youtube.com/watch?v=48WyVJP_8sI

:)

jasonwestmas
09-02-2011, 12:03 PM

Great example of simplifying an otherwise complicated and time-consuming modeling procedure. Plus it's in Layout, with nodes, and it still has a spline gizmo that never disappears, for later adjustments.

jeric_synergy
09-02-2011, 01:02 PM
This thread is getting a bit mental (get it... do you see what I did there?) :D

Hear that? ::crickets:: It's Vegas not calling. :D LOL, I keed, I keed.

...my mind). We're a good example of the differences in a brain. He is left-handed, I am right... we are both creative, but use it in different ways. He does 2D... I do 3D.

Interesting.


EDITED FOR BREVITY:
...............
So, how to solve this?

.......... but to someone like me it's still not perfect. The names of the nodes can be confusing, the inputs... what the hell do some of them do? ........, but when you go through the help files and see:

Spot Info Node:
"The spot info node allows access to all information regarding the current hit point"

Great... no example at all... I don't have a clue where this will work, with what, when to use it, how to use it, etc., etc.

I have used it in creating displacements in the nodal displacement thing, but only after following a tutorial on where to use it. I'd ideally need to see an example of each output of this node and where to use it. I have tried blindly plugging bits into other bits to see what happens, and it just doesn't help... and is a massive waste of time. Other people in here KNOW what each thing does, and tell other people, which is great... but how do they know?

Rant over :D
Good points. NAMING is a very basic and important thing: how the engineers name something can actually be an impediment to learning how to use it.
Naming is the first level of documentation.

(Example: in Blender, what they CALL layers is not what the rest of the CGI world calls layers, particularly the Adobe-centric world -- they are much more like what people think of as 'Groups'. So every time you hit 'layers' in a Blender context you have to do a little translation in your head.)

Engineers should not be naming ANYTHING in the UI. Designers should be (or at least signing off on names and labels). Designers dedicated to clarity, not just their own opinions.

So, I agree w/you: some of the nodes and their inputs could certainly be better named.

Your other point is the weakness of the documentation. I've been beating up NewTek on this topic for, literally, DECADES. Lately, they just don't want to hear it any more. Too g.d. bad.

If Tim Jensen would sell one of his frackin' Ferraris he could fund a dedicated worker who could address the woeful state of LW documentation. But nooooooooooooooooooooooooooo. :cursin:

RebelHill
09-03-2011, 01:46 PM
So the discussion here got me thinking about things, and the thoughts about explaining things in baby steps rang true with how I set about approaching my RHR set... so I popped this together in the hope that Jeric, Lard, and others may find it helpful, as it seems to be a node that has been mentioned specifically as daunting by a few users.

Hope it fits the bill some...

http://newtek.com/forums/showthread.php?p=1179176#post1179176

jeric_synergy
09-03-2011, 01:54 PM
Rebel, that's very generous. Thank you for your time. :beerchug:

Philbert
09-03-2011, 03:36 PM
I would pay for more like that.

Mr Rid
09-03-2011, 10:38 PM
Sorry this is so long; I fell behind here.

RH's tut (see, something good came out of this!) addresses one part of the problem, which is adequate explanation. "If you can't explain it simply, you don't understand it well enough." -Al Einstein


If you have a viewport and VPR, you can see your results right away
I have yet to find a practical use for the VPR. But this is another problem for sure for visual artists: not being able to see what is going on with nodes, particularly at intermediate points in the flow instead of only at the end. But the VPR is just too friggin' slow. The first scene I tried with VPR was a figure lit by an HDRI, and it took 2 minutes to render what FPrime did in literally 2 seconds. So forget it when I turned on Simple Skin, which FPrime just crashes on. And having the image break up into mosaic each time I change a value is just not efficient. You need to see a before-and-after comparison, especially with subtle changes.



NAMING is a very basic and important thing: how the engineers name something can actually be an impediment to learning how to use it.
Naming is the first level of documentation.

Yes. 'Oren–Nayar' tells me absolutely nothing. Again with the tech bias. A small example was the first time I went looking to load an envelope in the new Graph Editor... 'save, copy, paste... where the hell's the load button... Replace?!' WHY change the most ancient and commonly understood term in software, 'load'? Isn't paste the same as replace? So that term doesn't fit, it takes more characters, and it throws everyone off.



I hate sectarianism on both the technical and the artist side.

... "OH we're artists... our brains dont work that way"... tbh... I really DO believe thats a complete cop-out.
Nonetheless, it's ridiculous to think everyone has the same aptitude. Can just anyone have spontaneous recall like the savant Kim Peek, or solve complex math faster than a calculator like Scott Flansburg, or conduct electricity like Current Mohan, or regulate body temperature like Wim Hof, who climbed 7400 meters up Mt. Everest wearing only sandals and shorts? Aptitude is genetic and conditioned, and obviously it affects how easy or difficult some things may be to learn or accomplish.

Art is about evoking emotion, which is not the realm of numbers. Inspiration does not fit into numeric constraints, be they time, tech, or money related. Rules, conformity, and authority are not conducive to creativity, whereas it thrives on passion and spontaneity. The artist decides where to accept compromise when dealing with the structure of computers and the entertainment biz. Artists are generally not the types to spend a lot of time within the confines of equations. Thus the more technical or mathy the tools, the more an artist will struggle to grasp them. Notice how the more organic you try to make CG, the more difficult it is to achieve, since the tools are fundamentally based on rigid rules, straight lines, and right angles. Curves and splines must be approximated from straight lines. You have to go out of your way to bend CG to appear spontaneous and not look like a Computer Generated it. The artist tries to force the emotionally meaningless tech into something evocative.

Tech and intuition are opposite ends of a spectrum, and then there is everything in between. CG art attracts more of those somewhere in between (including myself), who are often content with the process and who consequently output much emotionally neutral art. Galleries are filled with mimicry and 'look how detailed and realistic this looks!' The most interesting artists I know would probably never touch anything as mired in tech as a 3D app. I was disappointed when I bought Exotique 4, partly because I was hoping to see more 3D renders (it's mostly 2D), but mainly because most of the ideas, while technically dazzling, are bland and derivative. It's all related to the nature of the tools and the aptitude that best fits them. Exceptions are just that.

Personally, I am so weary of constantly fighting the computer, as the majority of time is devoured by process, deciphering, glitches, and the constant acquiring of upgrades, plugins, and apps, and figuring out how to use them. It's not like you can just focus on mastering a medium or tool like oils or a piano. And the tools are composed of many components, all continuously upgrading, from different vendors with different ideas on how they should operate. So I welcome any form of reprieve, to just make stuff.



You cite ZB (et al) as a good example of artist friendly... but those sculpting apps largely fit right into this description... they're good at one thing, organics... need to now model a building or a car... you're SOL. (Yes, I know sculpting apps are getting hard surface tools now, but I think you'll be hard pushed to find many who can knock out hard surface stuff as quickly or precisely in a sculpting app as others can with a more "old fashioned" CAD tool.)

Of course, I am not saying you have to erase all hints of technology in the app. But the more extreme R-brains would have much less interest in mimicking a car or a building. They are attracted to the organic aspect of ZB, and I would never direct them to, say, Modeler. But even hard surface modeling can be made intuitive for an artist. A good friend of mine is an art director and production designer who has poked at LW and Modo, but he designs sets in SketchUp, which was much easier to pick up. I constantly see where a menu could be set up more intuitively while requiring little or no more effort, and no less capability. It would cut down on huge amounts of time wasted trying to figure the damn thing out, and on posting in forums and support, which in turn wastes the time of others. If you do it right up front, it saves time on the back end. And again, Igarashi's demos show that the tools don't have to be complex at all to 'create.'

Boujou comes to mind, which I have used dozens of times without ever reading any instructions. The setup wizard allows any idiot to easily command a $10,000 chunk of math for great tracks.


... do you know how/when to use a multiply layer in the layer surface, or in photoshop...
In PS or layers, I am used to multiplying an image with another image; I have no idea what math is applied, all I know is what the result looks like. I click through blend modes until something looks right. But in nodes, I have no comprehension of multiplying an image with a number, or how that tells LightWave to read displacement maps differently than what I am used to in Layers, or how the typical user is supposed to come up with this. Which brings me to...
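
(For what it's worth, the arithmetic behind both multiplies is the same per-pixel product; here's a minimal Python sketch, assuming pixel values normalized to 0-1 -- an assumption for illustration, not LightWave's actual internals:)

    def multiply_blend(a, b):
        # Photoshop-style Multiply: per-pixel product of two 0-1 images.
        # The result is always as dark as or darker than either input.
        return a * b

    def multiply_scalar(pixel, factor):
        # Node-style Multiply by a number: uniformly rescales values.
        # In displacement terms, 'factor' acts as a unit scale.
        return pixel * factor

    print(multiply_blend(0.5, 0.5))   # 0.25 -- the familiar darkening blend
    print(multiply_scalar(0.5, 2.0))  # 1.0  -- same math, used to stretch a range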


So stop to quote me, for this kind of things.
I was addressing 'displacement is math,' as if a user can only apply displacement by way of solving an equation. This is the problem.


you simply feign to have an issue to solve,
Huh?


I tried to understand it, and mathy
things was not mine but included in your posted sample.
As I posted, I am using the setup everyone else is, but it wasn't working for me.

So your displacement values are stored in the image in min 0 and max 1 values,
but is originally backed in -1 -> 1+ limit range,
so you substract 0.5,
to get a -0.5 -> +0.5 limit range,
and multiply by 2
to get the proper -1 -> +1 limit range,
don't know where your normal displacement comes from,
but if is inversed,
multiply by -2 instead of 2,
if the original displacement has been normalized,
I mean, it was divided by the max real value,
you need to multiply the result also by this max value,
before scaling the spot normal.
Denis.

This is alien hieroglyphics to me. I don't know what you mean by "backed." And I don't know what "-1 -> 1+"... 'minus 1, minus greater than, 1 plus?' means. When I subtract .5 from -1 I get -1.5, not -.5 (?).

I see that subtracting .5 from the map makes it darker, and multiplying makes it really dark as the values are pushed somewhere. That's all I know. I don't know how LW is using this, let alone how to anticipate what will happen as I add nodes or try to combine maps. I am back to experimenting futilely, trying to find a tut or post that explains specifically what I want to do, or asking someone how to do it. It's just NOT remotely intuitive.
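
(A note for anyone else stuck here: in Denis's posts, "-1 -> +1" is a range, from -1 to +1, not an arithmetic expression. The 0.5 is subtracted from each pixel value, which is always between 0 and 1, never from -1. Spelled out as a minimal Python sketch:)

    def decode(pixel):
        # Shift the 0..1 pixel value down to -0.5..+0.5, then
        # stretch it to the -1..+1 range the map was baked from.
        return (pixel - 0.5) * 2

    print(decode(0.0))  # -1.0: black = full inward displacement
    print(decode(0.5))  #  0.0: gray  = no displacement
    print(decode(1.0))  # +1.0: white = full outward displacement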



"Bad tone" words are the best I found to describe
the greenish skin tone of Fantomas, a well known very bad guy,
without ears, from Z french serial movies.


"Fantômas is anyone and no one, everywhere and nowhere, waging an implacable war against the very bourgeois society in which he moves with such ease and assurance."


... have you ever come across the person who can't tell which way around a wireframe is rotating in the perspective view?

This is what I am talking about. There should be an option to visually fade the wire from front to back. And when trying to click-select an item from a jumble of overlapping wires, there should be a pre-select highlight, and the item closest to the view could have preference. How hard is this?



"There's no difference in usability now so I might as well go where the jobs are."
Exactly where I am.



99% of the time for anything where people just want to input some values and add a couple of maps (like, say, a skin shader or a displacement map from ZBrush) there should be a simple, old fashioned LW interface panel.."
Your posts nailed it for me all around. I have to go into nodes because layer development ceased. SSS is the only thing I have been missing in layers. Now it's a displacement map, bringing attention to a basic problem that many users feel. Nodes are not that complicated, but they step off into the technical labyrinth that I personally have zero aptitude for.



What's clearly important is finding that path in the first place... ANY path, even if it's an unnecessarily convoluted one (the long way). Almost no one (apart from the odd rare genius that pops up every couple of centuries) can see the most direct route through a complex system off the bat...

It doesn't take genius to ask typical users what works or what doesn't work for them. An animator friend and I were fed up with how clunky all LW render controllers were, particularly how regularly they resulted in dropped frames. So he taught himself enough to write a controller that was more reliable than any of them (it never dropped frames) and that animators preferred (it was adopted at Rhythm and Hues commercials). He was just an animator, but he knew how it should work. If we had complained to support, we might have been ignored as temperamental artists who just want it all to be easy, or the limitations of the tech would have been explained. Among other features, his controller simply looked at the time stamp of a saved frame. If it wasn't there, it rendered the frame again, and just to be sure it checked all frames again after the last frame saved. This also allowed you to save over old frames with the same name. How come no one thought of this before?! It didn't take genius, yet it saved huge amounts of time (I believe the RnH animation director said it increased productivity 20%) normally wasted on 'patching' dropped frames... so ridiculous that there is a term for this commonly accepted part of the process. He went on to write a network scanner that let him make much more money than as an animator, competing with bigger entities and taking on clients like NASA, IBM, GM, Boeing, the Army, the Navy, and ILM, all because he simply looked at what the competition was doing wrong ('don't do that') and what users want ('do that').
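
(The core of that controller idea is simple enough to sketch. A purely hypothetical Python reconstruction of the approach described above -- the paths, frame range, and render_frame() stand-in are all made up for illustration, not his actual code:)

    import os

    FRAMES = range(1, 101)                 # made-up frame range
    PATTERN = "renders/shot01_{:04d}.png"  # made-up output path

    def render_frame(n):
        # Stand-in for launching the renderer on frame n.
        print("rendering frame", n)

    def missing():
        # A frame counts as done only if its file actually exists on disk.
        return [n for n in FRAMES if not os.path.exists(PATTERN.format(n))]

    for n in missing():   # first pass: render whatever is not on disk
        render_frame(n)
    for n in missing():   # second pass: sweep again to catch any drops
        render_frame(n)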

Mr Rid
09-04-2011, 01:37 AM
...
You're not disabled, you're not retarded...


Actually, I do have a disability from the last few years that definitely impairs my CG ability. But this reminds me of another interesting example. I have worked with a number of guys in this business who might easily be described as very nerdly; they all wear thick glasses, lack social skills, and are obsessive about Orcs, WWII, and such. But I know that these particular individuals actually have borderline autism (one admits being diagnosed). They all definitely lean toward the tech side, are good with numbers and computers, and at figuring out how stuff works. But they also fall short at coming up with creative solutions, and they need direction to complete tasks. Again, aptitude is absolutely relevant.

dpont
09-04-2011, 01:40 AM
At least I will try to improve my bad English syntax,
and my poor pedagogical talent, but I'm not optimistic
on this point.

In the 3rd party application,
exporting displacement data as an image map
is similar to a surface baking process;
displacement data is basically
the negative or positive distance
from the displaced position to the original position
along the normal of the point,
so the original displacement value for a given
geometric point ranges between a max negative
value and a max positive value.

Then this data must be interpolated to fill
the whole surface in the image map;
the main advantage of this is to be able
to displace a similar object with the same
UV map but with fewer or more points (subD level)
than the original "baked" model.




Classic image formats accept only values between 0 and 1
(we now have better formats which allow negative
and unlimited values).
This is why displacements are transformed in the
exporter, and why we need to invert this transformation
in LightWave.


In the exporter, ZBrush or other,
first step:
it scales the displacement distance,
reducing or extending the range limit
to between -0.5 and +0.5, so all values
fit within a field of 1.
For a unit conversion,
this could be a multiplication by 100,
or just a factor customized by the user
(it depends on the application; not sure
there is a standardization for this).
Second step:
it adds a 0.5 value to the displacement distance,
to push/shift all values into the 0 to 1 range limit,
avoiding negative values not printable in the image format.

In LightWave, we need to invert the transformation (from the end).
First step:
subtract the same 0.5 value to push/shift in
the inverse direction, retrieving our signed values,
negatives and positives.
Second step:
rescale, or multiply by the inverse factor;
if the exporter multiplied by 100, we multiply by 0.01.
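
(The whole round trip, as a minimal Python sketch; the factor of 100 is only an example and the real factor depends on the exporter:)

    SCALE = 100.0  # exporter's factor -- an assumption, application-dependent

    def bake(distance):
        # Exporter: real displacement distance -> 0..1 pixel value.
        return distance * SCALE + 0.5

    def unbake(pixel):
        # LightWave side: subtract 0.5 first, then apply the inverse factor.
        return (pixel - 0.5) * (1.0 / SCALE)

    d = -0.003         # 3 mm inward
    px = bake(d)       # 0.2 -- safely inside the 0..1 image range
    print(unbake(px))  # -0.003 -- the original signed distance recovered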

This can be done in different ways in LightWave,
but as I said in my previous post,
this is doable with Texture Layers only and no nodes at all,
using the Bump Layer with the "Enable Bump" Deform property checked;
indeed, with a gradient (set to "Previous Layer")
we can do the subtraction step
by setting -0.5 as the min value and +0.5 as the max value,
with the Bump Displacement distance set to 10 mm.

I tried this setup with the Skin surfaces only,
97921


IMHO this version of Fantomas is more parodic than romantic.

Denis.

dpont
09-04-2011, 02:04 AM
I think that Bump Displacement is definitely the way,
because you can have a different displacement map per surface.

btw, I forgot to include the object with the saved surface
settings; I will try to post a .srf later.

Denis.

dpont
09-04-2011, 02:29 AM
Here is the SkinFace surface,
97922

-0.5 and +0.5 are -50% and +50%, of course.
Also, the difference with the nodal system is that
nodal has 2 separate inputs for bump and displacement,
so if you also have a surface bump for your model,
I'm not sure how you could set this up;
the level of Bump should be reset to 100% (for bump),
but the displacement layer should not affect Bump
while still being available for Bump Displacement.

...Or you apply the same settings in a Scalar Layer node
connected to the Displacement input...

Denis.

dpont
09-04-2011, 03:06 AM
With a Texture Amplitude of 0% it is possible
to exclude a displacement from bump,
but a bump is always 'displaced'.

Here is the nodal surface version of SkinFace,
with similar settings in a Scalar Layer node,
very near to Jen's setup,
but in a 'Texture' form,
97923;
here the gradient is set from -0.5 to +0.5,
not in percentages.

So with this, the Classic Bump Texture layer
and the Nodal Bump input are free for a pure
and separate Bump mapping.

Denis.

lardbros
09-04-2011, 03:27 AM
This is alien hieroglyphics to me. I don't know what you mean by "backed." And I don't know what "-1 -> 1+"... 'minus 1, minus greater than, 1 plus?' means. When I subtract .5 from -1 I get -1.5, not -.5 (?).



Believe it or not, in maths, subtracting a negative number actually adds it! So that adds even more to the confusion! I think this is the limit of my mathematical knowledge from school! :)

Couldn't agree much more with your statement! Some of the physicists at my work are verging on Asperger's or autism... There is a huge scale of in-between too; some of them float around grinning to themselves, some are very sociable and funny, but you can tell they have a COMPLETELY different view of the world to me. I see nature as artistic beauty, and the science behind it all is hidden, unless I have a really deep think and realise I'll never understand what's fully going on... Yet these guys view things as mathematical challenges, or equations. Talking to these guys fascinates me... They are on another planet, but they probably think I am too :) We help each other though... Those of us who aren't quite so smart in their capacity actually come up with ideas that they haven't thought of, and it's interesting to see their reactions to our problem solving compared to their own. Common sense is generally the difference :) A massive generalisation, but this is what I notice at work.


Thanks RebelHill! I'll take a look... Thanks for persevering with us lot :)

jasonwestmas
09-04-2011, 07:42 AM
@Mr. Rid "I have yet to find a practical use for the VPR. But this is another problem for sure for visual artists- not being able to see what is going on with nodes, particularly at intermediate points in the flow instead of only at the end. But the VPR is just too friggin slow. The first scene I tried with VPR was a figure lit by an HDRI and it took 2 minutes for it to render what Fprime did in literally 2 seconds. So forget when I turned on Simple Skin which Fprime just crashes on. But having the image break up into mosaic each time I move a value is just not efficient. You need to see a before and after comparison especially with subtle changes."

Yes, there are some problems with VPR in sRGB mode, where certain shaders or nodes don't refine very fast and sometimes don't refine much at all. In linear space the issue is far less noticeable, and VPR is very fast on my computer with a low-end i7. And you're right about the GI: too slow. Reflection blurring doesn't work very well with sRGB VPR either.

I think those are fixable things though. Gotta be.

With the latest version of FPrime in 9.6 I was able to work on a Simple Skin head with textures and had very few crashes. I just remember the process going very smoothly with FPrime.

Cageman
09-04-2011, 08:50 AM
Regarding VPR... LW10.1 had its second development cycle applied to it. So, for being such a "young" implementation, it works quite darn well already. I use it all the time, and for me it is quite fast, even when throwing in thousands of DPInstances with all bells and whistles enabled. But yes... I am on a pretty spiffy computer as well.

:)

jasonwestmas
09-04-2011, 08:57 AM

I do have to agree that for VPR only being available for a year now, it's pretty damn good in that respect, whereas FPrime has been around for several years. But I'm sure that point has already been discussed to death. In linear color space VPR is silky smooth most of the time.

Tobian
09-05-2011, 08:10 AM
You know, I wonder if instead of taking many hours out of your day angrily explaining, in huge detail, with multiple quotations, with great verbosity, clearly demonstrating your inherent intelligence... you could have devoted some of that to learning the node editor :p

I think I would be more sympathetic to your plight, Mr. Rid, if I wasn't aware of your portfolio of hugely complicated, and technically clever work which goes right over my head, in terms of ability and skill :P

jasonwestmas
09-05-2011, 08:51 AM

Indeed, Rid definitely has an eye for things.

stiff paper
09-05-2011, 09:49 AM
I think I would be more sympathetic to your plight, Mr. Rid, if I wasn't aware of your portfolio of hugely complicated, and technically clever work which goes right over my head, in terms of ability and skill :P

See now, for me, if I've said something along the lines of:
"Mr. Rid, I'm aware of your portfolio of hugely complicated, and technically clever work which goes right over my head, in terms of ability and skill..."
I'd feel compelled by the simple dictates of logic, sense and evidence to follow it with:
"And the quality of your work and the effort you've put in means you've plainly earned the right to have your complaint taken dead seriously, and I wouldn't think of dismissing what you say or insisting that you're just not putting in enough effort to learn things."

But from this thread I see that it's only me that thinks that, and it doesn't matter how much proof there is that the person talking might just, you know, know what he's talking about...

Tobian
09-05-2011, 10:16 AM
and if you'd taken as much time learning nodes as you did correcting my grammar..... :p

Seriously though, sorry but no. You confuse me not agreeing with you with 'dismissing out of hand.' Those are not the same things: I agree there are a lot of things that could be done to improve the node editor; I just don't agree that it's a bad decision or direction for LightWave to take. I take people's opinions on the node editor more seriously when they take the time to learn and understand its functionality and can make an educated, considered contribution on things that need improving and changing. Dismissing it out of hand makes me dismiss their opinion likewise.

To use a bad analogy (which I am sure someone will deconstruct and turn around to support their position): you have 2 artists using a chisel to carve stone. Artist 1 has a go, fails to make anything of consequence, or even cuts his hand, immediately hates it, and declares all chisels stupid and wrong. Artist 2 perseveres, and after a lot of trial and error determines that a slightly differently shaped chisel would be better. Artist 1 might be an excellent drawer, painter, collager, and decoupager, but until he understands the tools and becomes proficient in them, I wouldn't take his opinion or advice on sculpting with chisels.

Sorry, but I don't take anyone's side ad hominem based on their skills, portfolio, or standing if it's not something they understand: I don't go to Stephen Hawking for cookery tips, I go to him for physics :p Like it or not, 3D is a really complicated beast which consists of several disciplines, which people have in greater or lesser degrees. I appreciate that some people don't grasp everything, but that doesn't mean it's bad and wrong just because THEY don't get it. I know next to nothing about rigging, so I wouldn't expect you to take my thoughts on rigging seriously just because I am good at solid modelling. If anything, I take such skilled people's opinions LESS seriously, specifically because I know them to be skilful, and therefore quite capable of learning it themselves :)

jasonwestmas
09-05-2011, 12:59 PM
Plus, Mr. Rid most likely isn't using LW for the same stuff I and other people are, so saying nodes are cumbersome and non-interactive is not true in all specialized cases and projects.

lardbros
09-05-2011, 01:39 PM
Having VPR has improved the nodal thing for me quite a bit... But it would be nice to be able to see the buffers in the VPR window... Maybe it's planned?! :)

jasonwestmas
09-05-2011, 03:48 PM

Yes, that would rock! Bookman has talked some about that. I know he would love that as well.

jeric_synergy
09-05-2011, 04:11 PM
But from this thread I see that it's only me that thinks that, and it doesn't matter how much proof there is that the person talking might just, you know, know what he's talking about...
You'd be wrong there. :beerchug:

stiff paper
09-05-2011, 04:53 PM
...correcting my grammar...

Eh?

I didn't "correct" your grammar at all. There was nothing wrong with your grammar.

I changed a negative sense to a positive one so that I could pull that section out as a quote by itself and maintain its exact meaning. If I'd left the negative at the start, then when I removed it from the original context its meaning would not have been the same as it was in context.

I apologise for having left you with the impression that I was correcting you rather than maintaining your original sense.

(I'm not saying I wouldn't ever correct a person's grammar, but they'd either have to annoy me a lot first or I'd have to feel like it was going to help them in some positive way.)

Mr Rid
09-08-2011, 10:00 PM
Believe it or not, in maths, subtracting a negative number actually adds it! So that adds even more to the confusion! I think this is the limit of my mathematical knowledge from school! :)

Well, the calculator gives me -1.5 when I subtract .5 from -1. :stumped:

Thanks Jen and Denis!!

This is how I would sum it up: with this image, black displaces 0 meters and white displaces 1 meter outward along the normal ("0 to 1"). To displace inward along the normal as well, shift all gray values 50% negatively by subtracting .5. Now black displaces half a meter inward, and white displaces half a meter outward ("-.5 to .5"). To increase displacement both in and out, multiply the values.

One node function down. Next!

But this is where I expect to simply input a min and max value somewhere. Early on I tried using Previous Layer in the Normal Displacement, but it was not quite working right. XswampyZ's gradient example looks to have the same problem -- the ear, nose, and lips are too bulbous. But the surface-bump examples should cover it... until I try to modify it for some other purpose and get lost again. ;D
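
(That min/max control is really just the same subtract-and-multiply in disguise; a minimal Python sketch of a hypothetical remap helper, not a real LW node:)

    def remap(pixel, out_min, out_max):
        # Map a 0..1 pixel value onto the out_min..out_max range.
        # With -0.5/+0.5 this is exactly the 'subtract 0.5' step;
        # with -1/+1 it reproduces the subtract-then-multiply-by-2 chain.
        return out_min + pixel * (out_max - out_min)

    print(remap(0.0, -0.5, 0.5))  # -0.5: black, half a meter inward
    print(remap(0.5, -0.5, 0.5))  #  0.0: mid gray, no displacement
    print(remap(1.0, -0.5, 0.5))  # +0.5: white, half a meter outward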



IMHO this version of Fantomas is more parodic than romantic.

All the more surreal appeal!
98047
And it's not your syntax but your technicality that I find difficult to read. I'm not always sure what you said, but I know what you mean :)


I do have to agree that for VPR only being available for a year now, it's pretty damn good in that respect, whereas FPrime has been around for several years. ...

The VPR is fine, but this is sorta like applauding a plane that only goes 300 mph while someone already broke the sound barrier several years ago.


You know I wonder if instead of taking many hours out of your day angrily explaining, in huge detail, .. you could have devoted some of that to learning the node editor :p


Yeah, I get a lot of Nick Burns attitude from the lefties :) (this sorta sums it up) http://www.nbc.com/saturday-night-live/video/nick-burns/2786/
and this- http://www.youtube.com/watch?v=9j6Og1QcpMU

When I started in CG I lived in 'dangerous' Deep Ellum, a starving-artist district in downtown Dallas, and I observed the left/right :yingyang: thing for many years (I am reminded of a book about Merlin revived in modern times, where he had to team up with a physicist to fight the bad guys, and each was impressed yet baffled by the other's 'powers'). It's interesting, and Spock-brained developers of all GUIs and sites should be very aware of it. I see that most everything about computers remains unduly convoluted, and I constantly see ways to make them more human-relatable.

My brother, for example, is a severely right-brained musician, talented at keyboards, drums, strings, vocals, and composition, and he knows more about music than anyone you will ever meet. He also draws, paints, sculpts, photographs, and writes brilliantly, yet he barely scraped out of high school and can't read music or balance a checkbook, literally. The only way he could begin to learn a techfest like a 3D app is if you put a gun to his head, but it would take him a LONG time, he would hate every minute of it, and you would sooner give up or splatter yourself before he could figure out nodes. Lower the gun, and he will joyfully return to more free-form ways of expression. My fave artist Szukalski once related that the most frustrating part of being an artist was having to actually make the art. He would absolutely detest using a computer, which is like sculpting with boxing gloves on, except the gloves have to be powered, booted, configured, and upgraded, and they glitch because they aren't compatible with the laces v3.1.

My favorite art piece, Struggle by Szukalski
98048

"expresses the struggle between Quantity, the fingers, and Quality, the thumb. It is not the fingers nor the Gods which make man survive, but his opposing thumbs. They are the makers of things, the builders, the creators of Cultures and Civilizations. The thumbs are like creative individuals who make the mammal herd into the human society.

Here the four fingers of the hand attack the opposing thumb in mortal struggle. The thumb carries a temple tower as a part of its head. The fingers dig a hole in the center of the hand, so as to cut themselves off from their Inspirer, not realizing that when they tear the hand apart, their concerted attack is in reality suicidal to their social organization, and they will perish or become slaves."

-Stanislav Szukalski 1917

jeric_synergy
09-09-2011, 09:38 AM
All our links are broken now that NewTek changed how the forum is addressed.

Mr Rid
09-09-2011, 04:03 PM
More tech in the way. Can't someone do a mass find-and-replace text edit? I guess it's a WIP.

jasonwestmas
09-10-2011, 09:01 AM
@Mr.Rid "The VPR is fine, but this is sorta like applauding a supersonic plane that only goes 300mph, while someone already broke the sound barrier several years ago."

I think I'm applauding more the fact that VPR is native and has more of a future. I don't trust 3rd parties very much to deliver a fully fleshed-out product, not that it is always their fault. I could say the same about NT and some of the plugins they own now, but potentially the ball is always in their court to fully integrate stuff.

Also, in this context, VPR supports LW better than FPrime in an all-around, feature-for-feature battle. The main things FPrime has over VPR are the faster and friendlier GI and maybe some speed advantages, but certainly not stability or preview accuracy. So those are my thoughts on that, after using VPR for a year now.