
Zbrush and Lightwave



Karl Hansson
05-17-2004, 04:36 AM
Has anyone tried it? I've seen some cool renderings in other 3d applications where displacement maps from zbrush were used to gain extra detail. I've only seen a few pictures that were rendered in lightwave using zbrush displacement maps, and they were quite bad quality. Is it worth buying zbrush for lightwave?

Exper
05-17-2004, 07:06 AM
You can greatly boost LW<->ZBrush interaction using two plugins by Lynx (plus they are free):

1) Normal Displacement
http://lynx.aspect-design.de/plugins/normal_displace_info.htm

2) 16bit grayscale TIFF image loader
http://lynx.aspect-design.de/plugins/tiff16bps.htm

Take a look here... for more examples:
Normal Displacement plugin by Lynx
http://vbulletin.newtek.com/showthread.php?s=&threadid=22997

Bye.

peteb
05-17-2004, 09:12 AM
I'm confused, why do people use displacement maps of high res models they've already made? Why not just use the model itself? You've got to have a high density mesh to apply the displacement map, so why not just use the model that you made all the detail on and forget about making a map of it first? Someone told me that it might be useful for LODs (level of detail) but apart from that I can't see the point. Can someone explain?

peteb
05-17-2004, 09:46 AM
it's ok, I've just read some stuff on the Zbrush forum. I didn't know that you could displace objects by just using the pixels. I mean I know you have bump maps and normal maps but they don't actually change the mesh. Apparently you can have sub-pixel displacement which changes the mesh without increasing polys. I'm assuming this affects shadows as well?

moth0027
05-17-2004, 09:53 AM
I'm also confused...

I was under the impression that zbrush was similar to Bodypaint but the website is very much focused on the modelling tools, displacement maps, etc...

Does it do what Bodypaint can do, and if it does, does it do it well?

peteb
05-17-2004, 10:03 AM
I'm learning it at the moment. I haven't got to the texturing part yet but from what I can see it is very similar to how Bodypaint can paint onto meshes. The great thing about Zbrush is you can deform your mesh like it's made of clay. Using a wacom tablet makes it even better. So you can start with a ball and then just push it about until you get what you want. It's a bit strange at first because once you change tools the object then becomes part of the canvas, or turns to "pixols" as they call them, basically 2d but with a look of 3d. Whatever you've created becomes a tool (basically a brush shape), so when you grab the "brush shape" it will paint the object you created.

I've been talking to our lead programmer and the way the sub-pixel stuff works is it stores the info in the displacement map. So when you render, it then adds all the polys it needs to create the complex mesh, but while you're just working in opengl you can just use your base model. I think that's why people are having problems with Lightwave, because it doesn't support sub-pixel rendering, well from what I've heard?

Karl Hansson
05-17-2004, 11:38 AM
I saw this amazing demo video of zbrush where they built a very "light" character but with amazing detail and smoothness.

I also saw a demo of unreal 3 (I think it was) where the characters and the environment were built using this technique.

sire
05-17-2004, 02:00 PM
Originally posted by peteb
I think that's why people are having problems with Lightwave, because it doesn't support sub-pixel rendering, well from what I've heard?

You could crank up the render resolution value for subpatch objects while keeping the value for opengl low (another option would be to use proxy objects). If the object should stay edgy, just apply an appropriate weight map to it.

jin choung
05-17-2004, 06:58 PM
GENERAL INFO:

the idea behind this recent phenomenon is MANAGING COMPLEXITY.

yes, you COULD just skin and rig your super hi poly mesh with all that detail but that would not only be a nightmare on your screen redraw but absolutely impossible to rig properly. if you have to weight map and keep track of the movements of that many points, it will slow your production to a crawl and probably your rigger will throw the model back at you.

so the idea is, if a subdivision cage is set up so that it has just enough verts that you DO want to weight map and skin, then that is IDEAL for rigging and animating.

but at that point, even if you subdivide into infinity, you get a super smooth and roundy model that has NO high frequency detail.

so, this method allows you to combine the best of all worlds.

1. you make your sds cage that has all the detail you need for animation and no more.

2. from that low density cage, you continue working until you get a super detailed mesh that no one in their right mind would think of animating.

3. you skin and animate the sds cage, but at render time, because of the displacement map, voila! you have your super dense mesh again automagically.
------------------------------------------------------------------------------------

ISSUES OF CONFUSION:

1. what the hell? we've had displacement maps for centuries... even the ancient greeks had displacement maps... WTF is the big deal?

- ABSOLUTELY TRUE. there's NOTHING NEW about displacement maps. but until recently, you had to generate your displacement map by PAINTING... either in 2d or 3d... in both cases, but especially with 2d painting, it was very very difficult to gauge precisely how your final model would look. and in addition, a paint brush is not necessarily the best tool for sculpting detail.

- THE NEW THING is the idea of taking a super dense, detailed mesh, COMPARING IT to the low poly cage that it was derived from and ENCODING the DIFFERENCE into a displacement map or a normal map (usually for games).

that is the new thing. (in film, the lord of the rings models encoded all their high frequency detail using this method; in computer games, the details are encoded into bump maps [normal maps] that are then placed on low poly models. this is being used in FAR CRY but gained fame from early DOOM 3 talks)
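
to make 'encoding the difference' concrete, here's a toy numpy sketch (hypothetical data and names; real tools establish the correspondence between the two meshes far more cleverly):

```python
# toy sketch: store the high-res detail as signed distances along the
# low-poly cage's normals, then rebuild the dense mesh from cage + map.
# hypothetical data layout; assumes matched vertex counts after subdivision.
import numpy as np

def encode_difference(cage_points, cage_normals, detail_points):
    # signed distance along each normal from cage vertex to detail vertex
    deltas = detail_points - cage_points
    return np.einsum('ij,ij->i', deltas, cage_normals)

def apply_difference(cage_points, cage_normals, displacements):
    # 'render time': push each cage vertex back out along its normal
    return cage_points + displacements[:, None] * cage_normals

# one vertex, normal pointing up, detail sitting 0.05 above the cage
p  = np.array([[0.0, 1.0, 0.0]])
n  = np.array([[0.0, 1.0, 0.0]])
hi = np.array([[0.0, 1.05, 0.0]])
d  = encode_difference(p, n, hi)                  # -> [0.05]
assert np.allclose(apply_difference(p, n, d), hi)
```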

now, we can use all the 3d tools that we've ever used to create as complex a mesh as we want and you don't have to worry about the massive amounts of detail that you're generating.

further, because of this concept, even such things as HIERARCHICAL SDS can be left out of the production loop because you STOP MODELING the base sds cage long before you ever would really need to use such things.

(the above is not strictly true... zbrush seems to use hierarchical sds as part of the modeling process but what i mean is that in essence, that complicated mesh type is never part of the render pipeline... in the end, you get a base cage and an image map... very simple. and simple is good)

and after you've stopped, you start modeling a separate model altogether, where you just slap on details without regard for any base cage.

zbrush allows you to build up the detail to immense density, while maintaining a relationship with your low poly cage (completely transparent to you) and then generate the difference map. it also allows you not only to 'paint displacements' like simple 3d painting but gives you actual modeling tools that allow you to work easily and well with millions of polys.

BUT

you don't NEED ZBRUSH to utilize these techniques... with marvin landis' plugins, you can do the same and just model everything in lw... also, there's a free app called ORB that will generate the difference maps as well.

2. displacement map, normal map, normal displacement, bump displacement, WTF?!?!?!

this is an unfortunate situation that newtek has created in 'multiplying entities without reason'.... occam's razor is a good thing after all.

'NORMAL MAPS', as people in the general cg public refer to them, are the 24bit color image maps that are basically used as BUMP MAPS. NO GEOMETRY IS DISPLACED. it is a fast bump that is used in real time gaming engines, and even in non-rt renderers it seems to produce nicer results than a straight bump (credit to karmacop for hashing out that finding with me).

displacement maps are grey scale images... either 8bit standard or HDRI (16bit or higher resulting in better precision and the ability to encode detail more accurately).
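
a rough illustration of why the extra bit depth matters... quantizing the same displacement into 256 vs 65536 steps (made-up numbers; only the step-size comparison matters):

```python
# made-up numbers: quantize the same displacement into 8-bit vs 16-bit steps
def quantize(value, bits, lo=-1.0, hi=1.0):
    # snap a displacement in [lo, hi] to the nearest representable level
    levels = (1 << bits) - 1
    t = (value - lo) / (hi - lo)
    return lo + round(t * levels) / levels * (hi - lo)

d = 0.123456
print(quantize(d, 8))    # ~0.1216  -> visible stair-stepping in fine detail
print(quantize(d, 16))   # ~0.12346 -> effectively smooth
```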

but unfortunately, lw has named all of their displacements as confusingly as possible. basically, all our displacements should be THE SAME!

they should all displace according to the NORMAL of the polygon (and not just in one of the 3 axes x, y or z).... but our original did not work like that (they intended it strictly for height fields evidently). then they gave us NORMAL DISPLACEMENT, which did work like that, but evidently it doesn't take to UV MAPPED projections very well, so we got BUMP DISPLACEMENT... a completely nonsensical name, but it allows you to displace according to the normal using a uv mapped image.

as i said, this is a complete mess... they gotta scrap everything and just give us one displacement option with all the options that we need.

so NORMAL DISPLACEMENT or BUMP DISPLACEMENT in lw have nothing to do with NORMAL MAPS.... got it?

and the only way to use NORMAL MAPS is with marvin landis' superb plugins.
------------------------------------------------------------------------------------

LW PROBLEMS:

1. it's not because we don't have a renderer that renders micropolys or 'subpixel displacement'. what that term basically refers to is renderers like RENDERMAN that generate final 'tessellation' resolution during rendertime, as a function of some relationship to the render resolution... for resolution independent surfaces like NURBS or SDS of course.

- so since every 'polygon' in an sds surface can be made to be 1 pixel large or smaller, you can potentially have an image that betrays no indication of FACETS... NO MATTER HOW CLOSE YOU GET! the closer you get, the more it tessellates... AUTOMAGICALLY!

- and it's pretty fast too because it does not UNIFORMLY tessellate! if i have an SDS ribbon stretching from camera to infinity, that surface will be very highly subdivided up close to camera, but as that surface goes farther away, it no longer needs to be subdivided so finely in order for a 'poly' to appear smaller than a pixel... so it lets a poly be larger and thus does not subdivide SENSELESSLY! also, it's really good about dumping unseen polys, so if you get really close to the eye, it will tessellate that mother like nobody's business but everything off to the sides has been dropped from calculation....

FAST FAST FAST!!!

- lw tessellates our SDS UNIFORMLY. all parts of a model are at the resolution that you determine in the subdiv settings in layout. we now have plugins that will allow that number to change based on distance, but your whole model will change.
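
the adaptive idea in a hypothetical sketch: pick a subdivision depth per patch so its polys project to about one pixel, instead of one uniform setting for the whole model (pinhole-camera math, made-up numbers):

```python
import math

def subdiv_level(edge_len, distance, focal_px, max_level=8):
    # doublings needed for an edge of edge_len world units at this
    # distance to project to one pixel or less (pinhole approximation)
    projected = edge_len * focal_px / max(distance, 1e-6)
    if projected <= 1.0:
        return 0                      # already sub-pixel, leave it alone
    return min(max_level, math.ceil(math.log2(projected)))

for dist in (0.5, 5.0, 50.0):
    print(dist, subdiv_level(edge_len=0.1, distance=dist, focal_px=1000))
# near patches get many levels, far ones few... the ribbon example above
```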

2. again, this is not lw's major problem though! sure, it's primitive as hell and slow and wasteful but you can theoretically get results JUST AS NICE as a micropoly renderer by simply manually jacking up the tessellation settings so that you never see a facet.

- considering that we pay so much less for lw, such manual primitiveness is completely understandable in my view.

- but the PROBLEM is that none of our existing displacement methods work properly!

- we get ugly artifacts and the results tend to be unpredictable. but luckily, our good friend LYNX3D has generated a NEW displacement plugin that should produce very nice results. further, he has enabled us to load in 16bit per channel HDRI displacements generated from zbrush.

- but one REMAINING PROBLEM is that you still have to put subdivision FIRST!!! what this means is that you will have to weightmap and deform the subdivided mesh! you can now do nice displacements but it's still useless in the context of animation and therefore, most production!!!
------------------------------------------------------------------------------------

as i've said elsewhere, the NEAR TERM SOLUTION is:

1. clean up our displacements!!! hire lynx3d and make 1 displacement for all seasons.

2. instead of merely FIRST and LAST for subdivision order, we need a re-orderable list that contains SUBDIVISION as one entry and another entry for EVERY POSSIBLE DEFORMATION OPERATION! and each of those entries is allowed to move around on the list.

so we can say that bones and endomorphs happen first, then subdivision, then displacements, etc.
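
a toy sketch of what such a re-orderable list could look like (hypothetical api, obviously not lw's):

```python
# each deformer is just a function; evaluation order = list order
def bones(mesh):      return mesh + ["bones"]
def endomorphs(mesh): return mesh + ["endomorphs"]
def subdivide(mesh):  return mesh + ["subdivide"]
def displace(mesh):   return mesh + ["displace"]

stack = [endomorphs, bones, subdivide, displace]

def evaluate(mesh, stack):
    for op in stack:
        mesh = op(mesh)
    return mesh

print(evaluate([], stack))
# -> ['endomorphs', 'bones', 'subdivide', 'displace']
# reordering is just moving entries around in the list
```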

jin

Triple G
05-17-2004, 08:26 PM
Very informative, and excellent suggestions as well. Well said, Jin.

Jeff_G
05-17-2004, 10:18 PM
Actually, with the new normal displace plugin (it totally saved my butt last week, thanks!) if you set the displacement to world, and set the sub-div level to after bones, you can deform via bones. So you can do some animating with it.

I haven't animated anything more complex than a sway back and forth on a biped sheep, but it seemed to deform nicely.

I absolutely agree about the cleanup. There are too many things thrown around in there that do the same(ish) thing.

jin choung
05-17-2004, 10:42 PM
i'm pretty sure that you CAN deform... what i meant is that it's not a good solution because you're deforming the post-subdivision mesh....

if you're referring to lynx3d's plugin, on his page it says that the option for specifying when the displacement happens has no effect.

am i turned around on that? are the bones affecting the pre-subdivision verts?

jin

hazmat777
05-17-2004, 11:38 PM
Thanks, jin!

While I'm just barely on the edge of understanding what you are saying, you always give enough info for us new people to go and do a little research of our own and come up with relevant questions.

And eventually maybe some solutions :)

moth0027
05-18-2004, 02:26 AM
wow, that explanation is nuts...

... but very informative. Thanks Jin!

jin choung
05-18-2004, 02:46 AM
:)

thanks fellows....

yah, i have a pet peeve in that i NEED to understand things... at least things i care about...

so as they say, 'do unto others...'

jin

Karmacop
05-18-2004, 08:25 AM
Jin, I skipped straight past your post, you use way too many words ;)

Just ask for displacement maps to have an extra axis button called "normal". This makes everything backwards compatible and lets lightwave work the way you want it to. Problem solved. ;)

You should write an email that says this to [email protected] . You don't need to say everything you said; they may just ignore it because it's so long, when all you really want to say is something simple. I think Bernard Shaw, a great writer, once said something to the effect of "sorry for this letter being so long, I didn't have the time to make it shorter". Just something to think about :)

Exper
05-18-2004, 08:28 AM
Jin is a... "long-writer"! ;)

peteb
05-18-2004, 09:03 AM
I'm still confused, I don't see why you'd get a problem with animating. If you leave the subds at a low division, why can't you do all your animation then? You can tell Lightwave how much you want it to subd at render time, which would be the point it uses the displacement map. So you could still animate your low subd cage beforehand and preview to see your animation?

caesar
05-18-2004, 09:22 AM
Uh... maybe I'm too stupid... maybe it's my bad english... I didn't understand... LW displacement maps are way cool - use a b/w image and it REALLY deforms your model in real time, bang boom! Ok, now normal maps are made of 24bit color images; they don't deform your model, BUT they "work" when you render the image - bumps in the 3 coordinates (that's why a color image, RGB-XYZ) fake real polys. Is this right?
Doesn't the Microwave plugin do it?
I also downloaded a free video tut about it from simplylightwave.com, but I didn't have time to watch it to see how the color image is created for normal mapping.
Normal mapping is being used in the greatest game engines today, like Doom3, Half Life 2, Far Cry, Halo 2, Unreal 3, and it really delivers a great graphical evolution in real time 3D.

Karmacop
05-18-2004, 12:37 PM
Hopefully the image below will help explain things. It's a sphere with a greyscale bump map as its texture. The "y-axis" image is a standard displacement map. The "normal" image is using the normal displacement plugin.

Basically, the standard y axis displacement just moves the point up or down on the y-axis. The normal displacement moves the point towards or away from the polygon, based on the direction the polygon is pointing. Both of these methods are useful, but it'd be nice to have them both built in. Hopefully the image and my explanation help.

EDIT: note that the image and the mapping type are the same for both displacements; only the way the points are being displaced is different.
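
If it helps, here's a toy numpy version of the two modes in the image (not LW's actual code, just the vector math):

```python
import numpy as np

def displace_y(points, heights):
    out = points.copy()
    out[:, 1] += heights                         # every point moves vertically
    return out

def displace_normal(points, normals, heights):
    return points + heights[:, None] * normals   # every point moves "outward"

# two points on a unit sphere: the pole and a point on the equator
pts = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
nrm = pts / np.linalg.norm(pts, axis=1, keepdims=True)  # sphere normal = position
h   = np.array([0.1, 0.1])

print(displace_y(pts, h))            # equator point slides sideways
print(displace_normal(pts, nrm, h))  # both points puff out along the surface
```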

caesar
05-18-2004, 12:58 PM
Karmacop, that last post enlightened me a lot on this point - I understand exactly what you showed - the normal map wrapping the sphere - but I've got a little doubt... I've only seen color normal maps; what's the difference when using grayscale images?

hope is the last to die.... (I don't know the motto in english... but in portuguese it makes sense) ;)

Thanks for the explanation!

sire
05-18-2004, 02:15 PM
Normal Maps don't encode actual displacements but rather the direction a surface faces at a certain point. They use RGB for the three world axes; that's why they're coloured. They work on the same level as edge smoothing (gouraud, phong etc.) or Bump Maps. Their purpose is to fake higher detail. As opposed to Displacement Maps, which are basically Bump Maps without the fake - they really change the shape of the object.

Normal Maps calculated the way mentioned above only bake the normals of a higher res mesh down onto a lower res mesh. Because the normals are in world space, they could tell a spot on a surface to behave as if it faced in a completely different, even the opposite, direction. This would be impossible with a Bump Map.
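
A small sketch of the usual decoding convention (the exact space - world vs. tangent - varies per engine):

```python
import numpy as np

def decode_normal(rgb):
    # map an (r, g, b) texel in 0..255 to a unit direction in -1..1
    n = np.asarray(rgb, dtype=float) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

print(decode_normal((128, 128, 255)))  # ~[0, 0, 1]: faces straight out
print(decode_normal((255, 128, 128)))  # ~[1, 0, 0]: faces along +x,
                                       # whatever the real geometry does
```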

toonafish
05-18-2004, 02:30 PM
You said it all Jin :-)

But Mathias Wein's Normal Displacement does allow you to have your normal displacements before the bone deformations and still have the bones deform the low-res object.

Silly that a 3rd party plugin has to save the day.... I'll bet this plugin will be part of LW 8.5 ;-)

But even in LW8 displacements show these weird polygon edges, so you have to set your SDs to silly amounts just to get rid of them. Would be cool if Newtek would fix small stuff like this before we get any new revolutionary tools.

Fish

jin choung
05-18-2004, 05:06 PM
peteb,

the problem is this:

if you create a model that has detail like WRINKLES and VEINS, your base cage ends up not being very 'low poly'.

if the EYELIDS of my model have fine crenellations and lines modeled in, your low poly cage is going to be pretty damn hi poly.

so with this method, you stop modeling your base cage when you have all the major topographical features to 'catch' the displacement map.

and then, you continue modeling the 'high frequency detail' like wrinkles, warts, the creases in the lips, etc, in a separate model.

then you generate the difference map, and if that map is applied on the base cage AFTER SUBDIVISION, you get the super dense model at render time.

how's that?

jin

Chris S. (Fez)
05-18-2004, 05:08 PM
"But even in LW8 displacements show these weird polygon edges so you have to set your SD's to silly amounts just to get rid of them"

Amen Fish. Those "weird polygon edges" are exactly why I have often asked for explicit control and stacking of subdivisions. An object with 6 subdivisions can potentially get great displacement details (subdivide a cube 6 times in modeler and marvel at all those points you have to play with!) but the Layout displacements are never really smooth. The solution is to simply give us the option to add one or more levels of subdivision after the displacement. Lynx's plugin would be incredibly useful to me and many others if Newtek could add such functionality ASAP!

Of course we should be able to add another normaldisplacement deformer on top of that level of subdivision and then another level of subdivision on top of that...and on and on. This would give us Zbrush-like control over our displacements and allow for all kinds of level-of-detail tweaks.

This would make Lightwave much more compatible with Zbrush without the R&D of subpixel displacement. Please get going!

peteb
05-19-2004, 08:42 AM
Sorry Jin, I'm still a little confused by what you're saying. Are you saying that the base model still has to have loads of detail so that the high res mesh can have minute detail? I thought as long as you had your base mesh in a rough shape of the final model then the displacement map would add all the wrinkles and creases that you made in the high res model? When I look at the tutorials on the zbrush site, usually the underlying mesh is basically a very simple subd cage. If you look at the head that's been doing the rounds, the basic model is very smooth and has very little detail in it. And yet when they add the displacement map that they made from that base model it adds all the wrinkles and creases. Sorry if I'm sounding pretty stupid here.

caesar
05-19-2004, 11:45 AM
Originally posted by sire
Normal Maps don't encode actual displacements but rather the direction a surface faces at a certain point. They use RGB for the three world axes; that's why they're coloured. They work on the same level as edge smoothing (gouraud, phong etc.) or Bump Maps. Their purpose is to fake higher detail. As opposed to Displacement Maps, which are basically Bump Maps without the fake - they really change the shape of the object.

Normal Maps calculated the way mentioned above only bake the normals of a higher res mesh down onto a lower res mesh. Because the normals are in world space, they could tell a spot on a surface to behave as if it faced in a completely different, even the opposite, direction. This would be impossible with a Bump Map.

Very clear now! :D

jin choung
05-19-2004, 03:09 PM
hey peteb, no problemo.

"Sorry Jin I'm still a little confused to what you're saying. Are you saying that the base model still has to have loads of detail so that the hign res mesh can have minute detail?"

yes. in lightwave if you have a vein in the hires model, that means you modeled that vein in low poly form in the low poly cage.

zbrush is using something called HIERARCHICAL SUBDIVISION SURFACES such that in zbrush, you can have a hires version with wrinkles and creases and warts and none of this is apparent in the low poly cage AT ALL! that's part of what makes it sooo cool.

but there is no equivalent of this in lw. in lw, if you have something with wrinkles and warts and such, in your lowest poly version you will have such details in low poly form too.

"I thought as long as you had your base mesh in a rough shape to the final model then the displacement map will add all the wrinkles and crease that you made in the high res model?"

that's exactly right.

but wasn't your question: "why not just use the super dense mesh?"

and so if you understand why it's beneficial to use the low poly cage, the reason we're using a displacement map is that it is the only way to EXPORT THE HI RES detail in a usable form into lightwave, maya, etc.

remember, nobody else has zbrush's implementation of hierarchical subdivision surfaces so you would not be able to have a low poly mesh that would result in the zbrush detail version by just turning up the resolution.

does that make sense yet?

jin

Chris S. (Fez)
05-19-2004, 03:36 PM
"remember, nobody else has zbrush's implementation of hierarchical subdivision surfaces so you would not be able to have a low poly mesh that would result in the zbrush detail version by just turning up the resolution."

Hey Jin. Wouldn't Lightwave let us access those hierarchical deformations if they implemented the subdivision stack I outlined above? For each new subdivision level you add and edit in Zbrush, just output a displacement map and then apply that map to the corresponding subdivision level in Layout. For instance, say you sculpt horns at subdivision level 3 in Zbrush. Export the "horn" map and apply it on top of subd level 3 in Layout. Back in Zbrush you subdivide the mesh three more times to level 6 so that you can add some "wart" detail. Again, export the "wart" map and apply it above both the "horn" map and three more subdivisions. Then add another subdivision on top of that to smooth it out. So the stack would look like this:

subdivide: 3
displacement: "horn" map
subdivide: 3
displacement: "wart" map
subdivide: 1

Obviously differences in the way Zbrush and Lightwave subdivide play a factor, but stackable displacement maps and subdivisions would still be useful even without Zbrush.
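
Here's a rough sketch of how such a stack could be evaluated, top to bottom (the ops are just stand-ins that print; the point is the interleaving):

```python
stack = [
    ("subdivide",    3),
    ("displacement", "horn_map.tif"),   # hypothetical file names
    ("subdivide",    3),
    ("displacement", "wart_map.tif"),
    ("subdivide",    1),
]

level = 0
for op, arg in stack:
    if op == "subdivide":
        level += arg
        print(f"subdivide to level {level}")
    else:
        print(f"apply {arg} at level {level}")
# horn_map lands on the level-3 mesh, wart_map on the level-6 mesh,
# then one last level smooths the result
```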

jin choung
05-19-2004, 04:13 PM
hey chris,

everything you say is true, or potentially true. but it's FAR more likely to happen with maya and xsi, which already have hierarchical sds.... but it hasn't happened there yet either.

problem in lw is that we have no notion whatsoever of a history and hierarchical sds is absolutely dependent on that.

it is desirable and such exports from zbrush are possible... but i don't think it'll happen.

lw's architecture is just not capable and although such an export from zbrush to xsi or maya is perhaps POSSIBLE... it may not be all that desirable.

if you're gonna render over a network or interface with third party renderers (or even other applications like LW!), it simply might be less of a hassle to use the displacement map.

simple seems to be better.

jin

Karmacop
05-19-2004, 11:22 PM
Why are hierarchical sds dependent on a history?

moth0027
05-20-2004, 06:14 AM
In people's opinion, is zbrush worth getting considering Lightwave's lack of hierarchical sds?

peteb
05-20-2004, 06:59 AM
Hey Jin, I think we had a bit of crossed wires there. That was my original question, but I posted again to say I'd figured that out by looking at the zbrush forums. So basically my only real question is that people are saying that even with the displacement maps from zbrush you still can't animate without slowdown. And what I was saying was, if the zbrush displacement map was giving you all that detail, why couldn't you just whack the subds up at render time when Lightwave will actually make use of the displacement map?

Mattoo
05-20-2004, 01:38 PM
PeteB: You've got it, that's exactly the idea. You animate the low res poly cage that you sent to ZBrush.

LW then uses the displacement map made in ZBrush during render time by heavily subdividing your model and displacing the polys.

Check out this thread for the process explanation, it includes some animated examples:

ZBrush to LW tutorial (http://www.spinquad.com/forums/showthread.php?s=&threadid=1265)

peteb
05-20-2004, 04:53 PM
Thanks for the tutorial, that makes it nice and clear. But why then are people saying that it doesn't solve animation, or were they asking that before seeing that link?

So here's another question, how do you go about texturing? I know you can texture in Zbrush, but what if you wanted to use Lightwave? You could texture the model before it went into zbrush, but then surely the displacement map would ruin the texture when it came to render it. So would you have to create UVs on the high res displacement map model in Lightwave?

jin choung
05-20-2004, 05:56 PM
hey peteb,

yah, you're right... i did miss a beat in the conversation! my bad.

actually, it looks like you can animate as you would in maya or as the WETA guys did for LOTR.... it does indeed look like it works!!! cooooool.

i think there was an issue with this if you didn't use lynx3d's fantastic plugin.... there was a conversation where splinegod brought it up i think.

anyhoo, so IT WORKS! woo hoo.
------------------------------------------------------------------------------------
but then, we run right smack dab into what you said! it's a minefield ain't it?!

you're right,
------------------------------------------------------------------------------------
if you texture in zbrush or in any 3d painter really, you're good to go. no problems at all:

1. you paint on the HIRES OBJECT. and you can take the resulting image map and map it right onto your LOW RES OBJECT because the uv map of the hires is EXACTLY THE SAME as the uv map of the low res.... i.e. all the outlines of all the parts in uvspace are exactly the same... no curvature is induced in uvspace... just more subdivision.

2. the fact that there's no curving induced in uvspace by subdivision is a big problem if you want to paint in a 2d app.

this is where you run into the dreaded 'uv distortion' issue. because edges in uvspace never end up curving, they stop resembling those same edges in 3d space that do undergo (sometimes radical) curving during subdivision.

the discrepancy creates distortion when you try to paint on the uv screenshot in photoshop.

this is not a problem if you use 3d painting or even jackydaniel's workaround because by painting on the 3d surface, the 'paint' is projected onto the image map 'already distorted'!

so if you look at the texture map of an sds model painted in zbrush, you will see lots of little distortions that end up looking just right when applied to your model.

this will be evident in the displacement maps as well.

3. main issue when it comes to this distortion however is HOW low poly your low poly cage is... the lower density it is, the more the distortion... the most distortion can be seen on a single quad sheet model.

so hopefully, by the time you finish modeling your low poly cage, it has enough detail to not distort that much.

the fact that lightwave has no method of addressing this problem if the mesh happens to be 'not dense enough' is currently the problem.
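
one hypothetical way to put a number on that distortion: compare an edge's length on the subdivided 3d surface with its length in uvspace (toy numbers):

```python
import math

def stretch(p3d_a, p3d_b, uv_a, uv_b):
    # 3d edge length / uv edge length; uniform ratios = little distortion
    return math.dist(p3d_a, p3d_b) / math.dist(uv_a, uv_b)

# a straight cage edge vs a neighbour that subdivision has curved in 3d
print(stretch((0, 0, 0), (1, 0, 0), (0.00, 0.0), (0.25, 0.0)))  # 4.0
print(stretch((0, 0, 0), (1, 1, 0), (0.25, 0.0), (0.50, 0.0)))  # ~5.66
# the second edge bulges out in 3d but covers the same uv span, so paint
# there gets stretched ~40% more; a denser cage keeps the ratios closer
```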

jin

jin choung
05-20-2004, 06:02 PM
hey karmacop,

although not commonly referred to as such, hierarchical subdivision IS a kind of history.

in apps like maya and max, when you refer to an object's history, you are referring to the series of steps it took for you to get your model into its current condition. this is arranged such that any change in the 'past' will propagate its effects down the chain of events.

actually, in max, i think it is explicitly called an 'object hierarchy'....

hierarchical subdivision may not incorporate an app's 'main history stack' but the principle is exactly the same... if it's not the same thing, it's a modified history stack dealing specifically with subdivision.

so hierarchical subdivision is a subset of history in a sense.

and so, my logic is that since lw doesn't have ANY notion of history whatsoever, it would be odd to see such a history heavy feature when nothing else supports history.

actually, i'd prefer to have a robust, general purpose history before we get the special purpose hierarchical subdivision.

jin

Mattoo
05-20-2004, 06:11 PM
I honestly don't know why people believe animation couldn't be done. I see people frequently misunderstanding how the Subdivide order works... perhaps that's it?

I managed to animate just fine with the old LW Normal Displacement so again, I dunno what problems other people might have come across, but I didn't get any in this regard.

I think Jin got the second question handled quite well.

Although I haven't got around to the texturing side of ZBrush just yet I do plan on keeping Body Paint for this side of things.

What I'll probably do is apply the Displacement map to the low res cage in LW - with a few subdivisions so that it gets some displacement.

Then I'll Save Transformed and send that to BodyPaint, where I'll apply the Displacement map again but as a bump map so I can see the detail as a guide.
Then I'll just paint my colour/spec/gloss etc as per usual.

Should work.... :)

jin choung
05-20-2004, 06:19 PM
for me at least,

my error probably came about while trying to use METAMATION to fix the uv distortion issue but being unable to animate with a displacement....

cool. i'm very glad to hear that it's doable in lw.

jin

peteb
05-21-2004, 03:10 AM
Hey thanks for all the replies on this.

Sounds like Lightwave has the same problem with texturing curved surfaces as Maya did a few years ago.
We were making a game back then which was one of the first to use bezier patches to create the vehicles. We were going to use max but it only had a really basic bezier patch feature, so we moved to Maya which had more control. All was good till we discovered that the textures would distort. We contacted Alias but they said that there was nothing in Maya that would solve this. So our programmers wrote an in-house texture package that meant you could place uvs and they wouldn't distort. Alias actually saw this and wanted to buy it from us; we said no. I think they've now sorted this out. If only Lightwave would do the same?

So would you say it's best to texture out of Lightwave? Is bodypaint 2 better to use than Zbrush? I know Bodypaint 2 has probably got more options, but it means I have to buy another package, so can I get good results in Zbrush?

xtrm3d
05-21-2004, 05:16 AM
here .. a picture resulting from the workflow.. lw to zbrush then back to lw ..

xtrm3d
05-21-2004, 05:18 AM
uuups... sorry, forgot to post the picture

xtrm3d
05-21-2004, 05:20 AM
and the low poly cage for the head

moth0027
05-21-2004, 05:30 AM
That's a really nice model. I also like your avatar a lot too.

Did you model it in Lightwave, then add detail and textures in zbrush and then finally back into lightwave to render?

xtrm3d
05-21-2004, 05:33 AM
yep.. that's the way i did it ..

using the new cool displacement plugin from lynx3d

MrWyatt
05-21-2004, 06:00 AM
Hi Christophe,
cool model, but I have some questions:

1. is it animatable?

2. does it deform right?

I read about this whole issue that you have to subdivide first and that it messes with morphs and bone deformations. How well does this plugin deal with the deformations?

Chris S. (Fez)
05-21-2004, 07:20 AM
Wow. Very nice. What subdivision level are you rendering at?