True Sub-pixel Displacement ...



Gregg "T.Rex"
04-08-2004, 08:10 PM
C'mon guys... :(

Even Poser 5 has true subpixel displacement, not to mention three of Max's renderers, Turtle for Maya and of course Renderman.

Especially in Poser and Turtle, it's just one mouse click away. How many more years do we have to wait for it in Lightwave? Is it so difficult to implement such a feature? Other software makes it look so simple and easy, with almost ZERO time penalty when rendering.

Right now, F-Prime saved Lightwave from being my second choice for rendering, against Maya's Turtle, which is a great, easy and fast renderer. I hate porting my scenes there just to use that feature, but for a certain project, I really have no other option...

Please, someone bring this feature to Lightwave, ASAP...

I started this poll just to see how many people need (or don't need) true subpixel displacement in Lightwave. Or am I the only one in the world?

Regards,

Karmacop
04-08-2004, 11:43 PM
It's hard to raytrace subpixel displacement, and Lightwave's renderer wasn't designed with it in mind. Newtek wants to implement it, though.

jin choung
04-09-2004, 12:22 AM
so what do you mean by 'subpixel displacement'?

1. RENDERER

changing the renderer so that non-explicit surfaces like NURBS (non-lw) and SDS (this is what we got), as well as metaballs, have no inherent TESSELATION RESOLUTION and can be adaptively subdivided/tessellated, the actual resolution of the rendering mesh determined by the SCREEN RESOLUTION OF THE RENDER such that a single facet is either 1 pixel large or smaller.

another method would be directly rasterizing the limit surface of the SDS... but considering that only realsoft3d does that and renderman does not, there might be reasons to avoid that method... though the precise reason WHY eludes me.

so if we go with actually turning into polys (though renderman uses bilinear quad patches they call MICROPOLYS)... errrrr, are there ANY RENDERERS THAT ARE NOT *REYES* THAT DO THAT?

and if not, errrr, there may be a reason for that! it may be really really hard. it may be based on technology that is not at all compatible with the current lw rendering engine. and if this is so, it would probably be easier to support a plugin renderer a la FPRIME, or an export plugin to renderman or compliant REYES RENDERERS.

2. SUPPORT OF DISPLACEMENT MAPS

the current degree of such support in lw is DUBIOUS... there seem to be all kinds of usability issues, especially as pertains to UV MAP SEAMS. not to mention that we have completely asinine, redundant notions like NORMAL DISPLACEMENT and BUMP DISPLACEMENT....

displacement, bump and normal maps in lw are a complete mess and need to be cleaned out and rethought before we can have any illusions of renderman-like rendering. hell, we need this fixed so that we can just export to renderman properly.

3. RENDERMAN CONSIDERATE SDS IMPLEMENTATION

because of point 1, changing lw's renderer to conform to your request is a pipe dream imo. SO... the next best thing is implementing SDS in such a way so that it behaves in lw as it can in renderman.

this means dealing with these SDS issues like renderman does:
a. UV MAPPING
b. EDGE SHARPENING
c. NGON SUBDIVISION
d. GENERAL IMPLEMENTATION such that a cage resolves into a limit surface in lw as it will look in renderman.

4. THERE ARE MANY FREE IMPLEMENTATIONS OF RENDERMAN COMPLIANT REYES RENDERERS.

AQSIS is one. BMRT used to be another but it's a goner now.

it is FAR FAR FAR FAR FAR MORE realistic to ask newtek to develop a robust and integrated EXPORT to renderman compliant renderers than it is to expect a total tear down of their render engine.

think that sounds like a TALL TASK? IT IS! JUST THE EXPORTER ITSELF! now think about how much the tear down would take.



we're not gonna get a reyes renderer in lw. not for the foreseeable future anyway. so the request as asked for is completely undoable.

expect and ask for items 2 and 3. they are doable and not unrealistic. if we get 2 and 3, we're golden.

jin

p.s. there IS a renderman exporter for lw... it's an external commercial plugin but it is available.

Gregg "T.Rex"
04-09-2004, 06:46 AM
Originally posted by jin choung
p.s. there IS a renderman exporter for lw... it's an external commercial plugin but it is available.

Well, at first I did try the Lightman plugin to export some scenes to Renderman. But it had lots of limitations, plus the pipeline was not optimized at all for real work.

Then I switched over to Maya to try Renderman directly, through MayaMan and R.A.T. (Renderman Artist Tools), and I must say it was not easy, but the results were most impressive.

And one day, I discovered TURTLE: a Maya renderer that's very well integrated into Maya (nothing like the crappy Mental Ray) and has true sub-pixel, or high-frequency, or micropolygon displacement. It fully supports the 16-bit displacement maps that ZBrush 2 exports, and it can displace geometry with raytraced reflection, refraction and final gather rays, just as if it were actual geometry, and in a fraction of the time. In addition to that, it has a superb feature called "Render as Subdivision", which actually takes ANY polygonal model and renders it just as if it were a subdivision model. I'm beta testing this renderer right now and I'm very, very impressed. Every feature is just a click away; it's very compact, streamlined and very well implemented in Maya.

Anyone who wants to take a look at the features of this renderer, click here: TURTLE Maya renderer features page (http://www.illuminatelabs.com/products/)

Lightwave is my primary application for most projects, but not for all. Newtek has to make the BIG decision to rewrite the core rendering engine, or else it will soon expire...

Cheers

jin choung
04-09-2004, 12:13 PM
"render as subdivision"

niiiiiiiiiiiiiiiice.

well, i agree that everything that you're asking for is desirable... i don't think it's gonna happen but it sure is desirable.

alas, so many things in life including paris hilton fall into this category....

jin

Chingis
04-09-2004, 02:13 PM
There's absolutely no reason to think that Newtek can't or won't implement this feature. I think this is THE feature that separates the more professional renderers from Lightwave's. You want to render something with scales at film res? Absolutely impossible with Lightwave. I think that's a pretty big limitation, and if Newtek doesn't see that, then they don't belong in the film market like they claim to. Don't get me wrong, I love Lightwave for all it's worth, but heaven forbid we be stuck rendering dinky spaceships in outer space for the rest of our Lightwave years.

I vote YES. This is a TOP priority feature. If it requires a rewrite - so be it.

jin choung
04-09-2004, 02:27 PM
ummmm, so be it eh?

well, a big gigantic huge 800lb gorilla of a reason is COST....

if you'll notice, all the other guys doing it cost significantly more and have much greater market share....

more moola to play around with you see....

if you're willing to pay more for more, you can already do that. but if you want more for the same price, that may indeed be undoable.

and as i've said before, lw meets a price/performance niche for me. if it costs more, i might as well just go with maya/mental ray.

jin

Chingis
04-09-2004, 05:28 PM
I agree that R&D requires much moola. However, Maya with Mental Ray is only $400 more than Lightwave. Why not just use Maya? Well, I do for work, but for personal projects I prefer Lightwave. I guess what I'm saying is I wouldn't mind at all if Newtek raised the price of Lightwave to $2000 if it meant adding more professional features... assuming they maintained their upgrade policy and free render nodes - which is where you save most of the money anyway. A small price increase could perhaps give them enough for some R&D without getting ridiculous with their policies and customer service like Maya and XSI. Just a thought. It makes a whole lot more sense for Newtek to add user-requested features (no matter the cost) than for all their users to have to jump ship and change their entire pipeline for just one feature. Not to mention all the customers they would lose - if people really want it, Newtek can't afford NOT to add it.
Lightwave has always been about bringing high-end FX to the budget user. Why stop at sub-pixel displacement? Why have SDS? Why have character tools? Why have anything? They all cost money to develop. It's really not a matter of IF, but a matter of WHEN.

jin choung
04-09-2004, 06:23 PM
chingis,

your argument is completely reasonable as presented.

but some of the assumptions may be off: i don't think they WOULD be able to keep the upgrade price the same, and i don't think $2000 would be enough (considering that maya has an unlimited version and that it probably has more sales to fx houses).

i think you're underestimating the amount of the cash discrepancy between privately owned newtek (with current lw sales/ upgrades) vs. the money that alias has at its disposal as a public company with current maya sales/ upgrades and other support incomes.

if a moderate increase in price results in the ability to really get up to speed in r&d, that would be well worth it.

but personally, i think in order to catch up, they would have to RADICALLY increase the price.

and considering that the rendering change requested would not be the equivalent of BOLTING ON a black box, but rather a complete rewrite of the engine (and the expertise that something like that requires certainly doesn't grow on trees - we can't even be sure there's someone there now who's equal to such a task), it seems to me very very pricey.

it may be that MY assumptions are off the mark. but in the reality that i live in, everything ends up costing a bit more and taking a little longer than estimated.

also, there's that ol' maxim:

"GOOD, FAST, CHEAP. PICK TWO."

jin

p.s. i actually kinda agree with 'not a matter of if but when' but the when may be so far out that it becomes equivalent to an 'if' for all intents and purposes.

mkiii
04-10-2004, 04:31 AM
The cost argument is a pretty important one to me, and is probably one of the key things that keeps people with LW even when other more expensive apps promise more features. Bang per buck is important.

However, the other main players, and those closer to LW in price & capability such as C4D, offer more than one version - or in the case of C4D, modules that can be added to the much cheaper main app. Maybe Newtek could consider a more modular approach too, allowing a full-blown 'Extreme' version to have the bells & whistles that normal users don't require.

I don't mean that LW should be crippled in the way that Inspire was, of course, and not quite as drastically as C4D does it, where the base package doesn't even include UV tools, which are only available once you buy Bodypaint (a major booboo imo).

Of course, we have this already to some extent. Not everyone owns G2, Sas Full, or FPrime, so we have a choice in that respect. But is the LW technology really up to the task of handling major bolt-on additions without a complete rewrite that would make the SoftImage-to-XSI transition look like a mere recompile?

Nemoid
04-10-2004, 06:04 AM
I agree with Jin

IMO a good idea would be for NT to buy the Renderman exporter Dan Maas wrote a while ago for his great Mars animation. It's ready right now, and it would give a lot of possibilities to all the artists who need sub-pixel displacement and Renderman power, for sure.

Another good thing would be for FPrime, which uses a different rendering technology, to grow more and more and allow good sub-pixel displacement as well.


Given LW's current structure and problems (related also to bump displacement and bad UV support), these are issues to work out when LW is migrated to a new environment, as is opening LW to third-party rendering technologies, be it Mental Ray, VRay or another.

I'm a big fan of LW's built-in rendering, but the freedom to choose what kind of rendering technology I want would be a great thing and a benefit for LW itself. :)

JDaniel
04-10-2004, 08:15 PM
Ooh yeah, I wanted this as soon as I saw ZB. I know it'll be forever before LW has it. So I'm getting ZB2 to sculpt, and my buddy's Mental Ray to render.

bloontz
04-10-2004, 10:17 PM
If Newtek opens their SDK to fully support external render engines like FPrime, what is to stop someone from writing a REYES renderer for Lightwave? Poser's FireFly is a REYES renderer ported by a third party, Andrew Bryant and his team at Pixels 3D. I'm dying to use ZBrush 2 with Lightwave.

JDaniel
04-11-2004, 01:49 AM
Originally posted by bloontz
I'm dying to use ZBrush 2 with Lightwave.
The only thing we can take advantage of is the normal map created by their normal shader, from the hi-res geo to the low-res geo. Maybe lower-detail displacement maps in combination with the normal map for the illusion of hi-res pixel displacement. But, of course, no highly detailed pixel displacement of geo.

bloontz
04-11-2004, 06:57 AM
Originally posted by JDaniel
Maybe lower-detail displacement maps in combination with the normal map for the illusion of hi-res pixel displacement. But, of course, no highly detailed pixel displacement of geo.
Pixologic support did suggest that that was a method being used (probably by Glenn Southern). I've had no luck getting Lightwave to load a 16-bit greyscale map - frustrating after waiting so long for the upgrade...

Nemoid
04-12-2004, 02:23 AM
Yep, I saw the Southern post on CGTalk. Beautiful work. BTW, issues like these have to be solved for sure. I'm going to work hard with ZBrush, because it's SUCH a cool app, hoping that LW too will work smoothly with it soon. In the meantime, I think something like Maya is the way to go with ZBrush for now.

jin choung
04-12-2004, 03:37 AM
hmmmmmm,

here's an interesting thought....

i thought that there was no infrastructure pre-existent in lw to handle RENDER TIME RASTERIZATION for stuff like 'SUBPIXEL DISPLACEMENT'....

this is not STRICTLY true but the only analog we have isn't well suited for ADAPTATION.

the EXISTING SURFACE TYPE that can perhaps be retrofitted to work is HYPERVOXELS.

the surfaces of hypervoxels are completely determined at rendertime and so, if SDS could be tagged in the renderer to determine the final limit surface JUST LIKE hypervoxels, it would be possible to do at least PIXEL LEVEL DISPLACEMENT.... i don't imagine it would be too difficult to tell the renderer to allow the user to specify fractions of a pixel, but you could easily get the same result by simply rendering at a higher res and downsampling.

and considering that you CAN texture hypervoxels with lw legacy projections, it shouldn't be too hard to tell lw to texture the hypervoxel surface according to UV MAPS.

i have a sneaking suspicion that if this is possible at all, we'd be constrained to making all the sds surfaces WATERTIGHT, solids...
------------------------------------------------------------------------------------

bad part,

it would be HORRIBLY, ABOMINABLY, REPREHENSIBLY SLOW! i wonder how fast REALSOFT3D's direct rasterization of higher order surfaces is... cuz basically, it seems to me roughly analogous to this method.

and perhaps that is why renderman and REYES renderers turn stuff into MICROPOLYS first instead of doing direct rasterization.

there's a lot of stuff that renderman does, like binary space partitions (or an equivalent process) and then ADAPTIVE SUBDIVISION, so that something in the background does not get subdivided as finely as something large in the foreground.

lw's hypervoxels don't....

------------------------------------------------------------------------------------

just an idea. impractical but an idea nonetheless.

jin

Gregg "T.Rex"
04-12-2004, 06:21 AM
...just an idea. impractical but an idea nonetheless.

Well, nice idea, but then it would be another low-end workaround by Newtek. Why not aim for the real thing, as all other apps do? These low-end "methods" accelerate the "expire" factor...

Gregg "T.Rex" Glezakos
3D Artist

JDaniel
04-12-2004, 01:23 PM
Jin, speaking indirectly about HV and UVs... I had a crazy idea about putting your model inside an HV and trying to bake the 3D texture of the HV's applied map around it all at once. :D

tudor
04-14-2004, 02:23 PM
Originally posted by jin choung
hmmmmmm,

here's an interesting thought....

the EXISTING SURFACE TYPE that can perhaps be retrofitted to work is HYPERVOXELS.

jin


Take a look at http://www.tufflittleunit.com/

They have a plug (waterpool) that transforms HV's according to a collider.

Imagine everything being hypervoxels.. Solid modelling :)

JDaniel
04-15-2004, 11:50 AM
That is very interesting.

sketchyjay
04-22-2004, 02:18 PM
I have to disagree with Jin here.

zBrush and Poser are both sub $500 apps.

So I see no reason why, if the Newtek team focused solely on updating the render engine, they could not shake out the bugs and get it updated.

This won't require any more money; it just means they focus on fixing the render pipeline the same way they updated the character animation tools. The character tools did not cost us more to get. With the money they get from upgrades and new purchases, it is obvious that they have limited resources, but when they focus on each part in turn they can make great strides.

Now I see several ways they could do this.

1. They acquire the Renderman plug and get it working better.

2. They build a whole new renderer and plug it into the hub. The hub allows them to add any module onto the whole package, so why not beef up the hub and add better modules? The guys who left Newtek saw this, so I don't see why Newtek can't work on improving the hub the same way.

3. Tear out the renderer or build up the renderer (since I don't know the programming aspect of how this is implemented, it is hard to say which is right). Tearing out is bad, since it will break lots of plugins, but improving the renderer with some good tricks may be possible.

4. Trick out the whole SDK so everyone and their grandmother can do what they want. It sounds like they are doing this with FPrime so if that comes to pass it may also allow all the other render engines in.

Speaking of FPrime, if Worley was able to build a whole new renderer and plug it in, then I don't see why someone couldn't implement a REYES renderer the same way. Once the SDK is fixed, who knows how many renderers may pop up for Lightwave.

So I don't think it is something that needs to be shot down. It is something that can be done with the proper allocation of resources and the right team to do it. Newtek seems to hire freelance programmers or whole teams to focus on various parts of Lightwave (DStorm, for instance), so don't give up on it yet.

jin choung
04-22-2004, 07:25 PM
ah,

but that's the thing though. all the new character tools are JUST PLUGINS. nothing really changed with the core architecture.

to my knowledge - and the fprime guys would back this up - there's a LOT left to be desired about the renderer plugin sdk. it may not be possible to bolster it this way.

as for the two sub $500 apps you mention... does poser support 'SUBPIXEL DISPLACEMENT'? does it tesselate to final mesh resolution at render time? i doubt it.

it probably supports simple DISPLACEMENT MAPPING and that's definitely something that we can and should fix. but that's a simple and shameful bug.... it's not what everyone really wants - though it would be a start.

as for zbrush.... i have no idea what their renderer is doing but it seems to me that their technology is just plain different.
------------------------------------------------------------------------------------

also, if you STARTED OUT ON THE RIGHT FOOT, it's probably cost effective. but if you already have a core technology that you're using and that technology is incompatible with new developments, then i simply can't see how that wouldn't cost a LOOOOOOOOOOOOOOT of money to address.

mark my words.

the evidence will be in how long it takes for us to finally get something like 'subpixel displacement'. my bet is on way out in the future if ever.

time will tell.

jin

sketchyjay
04-22-2004, 07:57 PM
Hmmmm...

The problem I'm seeing is that this Dr. in charge of Lightwave development is doing it piecemeal - that is, a little at a time instead of redoing everything from the ground up. So doing it right from the beginning doesn't seem to be in the stars for us LW users.

So to me this means that they have to gut the renderer and tell all the plugin people what the new functions are to get their various plugins to work with it again (if they bother to), or redo the equations and function calls so that they can expand the capabilities of the renderer. I have barely tapped LScript, so I couldn't guess what they can do with the renderer and won't even try.

I really think the only way for them to do a new renderer is to just break the SDK open for the render pipe so that more kinds of render plugins can be added or 3rd parties can get in and make pipes out to other engines.

A programmer would probably have to answer this one, but if they can reroute render calls to point at plugins and then make the LW render engine itself a plugin, it could be swapped out with newer renderers, or 3rd parties could make new ones for toon or REYES etc.

The limitation would be figuring out the render flow logic and how to reflow it out the right pipes. If done right it would not require breaking too many plugins.

If they focus on the render engine next I see them making some headway in the next year.


Well, that's my prediction: by 2006 we'll have a new render engine, or an open render SDK for others to add theirs in.

Hey, I can dream, can't I?

JDaniel
04-23-2004, 12:11 AM
Jin, I just got Zbrush2 and it rocks! I'm still learning the new workflow. The displacements are awesome though.

jin choung
04-23-2004, 12:52 AM
wow! lucky!!!

good for you.

hey jacky, you're actually the PERFECT guy to ask about this:

HOW does zbrush2 handle UV MAPS and SDS?!?! when you uv map the low poly base cage, do you see CURVES in the uv view (to defeat the distortion)... or is the low poly cage mapped with STRAIGHT LINES?

and therefore the maps that are generated are COUNTER DISTORTED as in your method?

i always wondered about this in zbrush.

thanks

jin

claymation
04-23-2004, 01:05 AM
zbrush does not uvmap the same way we are used to in lightwave. It basically breaks up the surface into individual squares, so you couldn't, for instance, take it into photoshop to do more work. Nendo uses the same system for texturing. The benefit of this is no distortion. There is a thread on this at the zbrush.com forum.

Just my luck - I ordered ZBrush 2 on the last day of the month and now have to wait for them to finish mastering the CDs to ship it to me. It is now a race to see if I get ZBrush or LW 8 first.

This broken-up UV map is like unwelding the entire mesh and then using atlas mapping. In a program like ZBrush this does not cause a problem, since I can paint right on the mesh. In LW it is pretty useless. Maybe if it was done this way and taken into BodyPaint or Tattoo it would work well.

jin choung
04-23-2004, 01:18 AM
hey claymation,

thanks for the response.

but if that were the case, it would be almost completely useless for mainstream vfx work.

what i'm really interested in - and i suspect most of us who have recently been swayed by the charms of zbrush2 - is the whole 'encoding hires detail into a DISPLACEMENT MAP'.

BUT

if the uv mapping is NON-STANDARD in any way, it would not be usable in external apps such as lw, xsi or maya and that would make zbrush2 a pointless little island except for 3d illustrators.

so in a multi-app environment, where you're encoding the hires detail in zb and painting a texture map in photoshop, what does the uv map look like....

cuz if it really is such an unexportable island, my interest has completely evaporated....

jin

Gregg "T.Rex"
04-23-2004, 02:20 AM
Jin,
FYI Poser 5 has true subpixel displacement, just like Renderman.
It does tessellate the mesh during rendering, with no extra time penalty, and it does a pretty good job. Though the stupid thing about Poser is that it only supports polygon models, and that sucks. It's also very easy to use:
you add your displacement map (8-bit grayscale), click on the
"Use Displacement Maps" button, set the displacement limit and you're off.

It's SOOOO simple that it makes me wonder why it's so hard to do in Lightwave, since Newtek already has the Hypervoxel engine, which is a "tessellate-during-render" type of engine...

Cheers,
___________________
Gregg "T.Rex" Glezakos
3D Artist

jin choung
04-23-2004, 02:57 AM
hahahahahahahaha!

wow....

they DO!

ummmmm... but if they don't support anything but polys, EVERYTHING is already tesselated... what does it mean that it supports micropoly displacement?

you mean it treats a single poly face like an implicit surface and will subdivide to accommodate a displacement map?

that is - treat polys as if they were a non explicit surface?

if so, that's cool.

but it seems that they also have a MODULAR RENDER architecture so that they can add renderers piecemeal....

so chalk it up then... even poser has a more flexible architecture!

------------------------------------------------------------------------------------

to be fair though, it also has fewer render 'features' to support, so it might not have been as difficult to implement.

i think the problem that the FPrime discussions seem to highlight is that not all of the data from lw has been properly BLACK BOXED so that any external renderer can take it and start rendering.... perhaps a lot of hard coded dependencies.... that's bad.

yah, hypervoxels is exactly the proper paradigm to cite but then again, it's slow as @#$%.... to get around that, we would have to have things like buckets and space partitions and such. i guess all of that would be part of the rendering module....

hmmmm, let's start a LIST shall we? of things that would have to be properly 'tied off' to support plug and play renderers:

1. geometry - polys, sds, lines, points (x,y,z,h,p,b,u,v,vertex maps)
2. bones and displacement
3. lights (including HDRI)
4. camera (and for this endeavor, we find out that most of the things in the camera panel probably belong more properly in the render panel - everything except position and orientation, in fact)
5. image maps and textures (surface properties) (including HDRI)
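
in code terms, a totally hypothetical sketch of what a clean hand-off might look like (none of these names come from the actual lw sdk... it's just to show the shape of the thing):

# hypothetical sketch of a 'black boxed' scene hand-off for pluggable
# renderers -- none of these names come from the actual lw sdk
from dataclasses import dataclass, field

@dataclass
class Camera:
    position: tuple              # x, y, z
    rotation: tuple              # h, p, b
    focal_length: float = 50.0   # resolution/AA belong in render settings,
                                 # not here, per point 4 above

@dataclass
class SceneSnapshot:
    """everything an external renderer needs, with no lw internals leaking"""
    meshes: list = field(default_factory=list)   # deformed geometry + vertex maps
    lights: list = field(default_factory=list)   # including HDRI environments
    images: dict = field(default_factory=dict)   # texture/image maps by name
    camera: Camera = None

class Renderer:
    """the interface any plug-and-play engine would implement"""
    def render(self, scene: SceneSnapshot, frame: int) -> bytes:
        raise NotImplementedError

if everything the native renderer consumes went through something like that snapshot, an external engine could be dropped in without caring how lw computed the deformations.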

if they made NO INTERFACE for HYPERVOXELS and SKYTRACER and forced us to use those renderers separately and composite on our own time, i wouldn't object to that. cuz i think certain features are so embedded into the lw renderer that it would not be possible to abstract them out to talk to other renderers.

and that might be it for architecture clean up.

but then they would have to either create a renderer with the desired features or acquire one. and neither of those are cheap propositions.

jin

p.s. oh, and i note that the new features are in the FIREFLY renderer... so there really may be something to the notion that some things are easy if you start from scratch but not worth doing if you want to bolt something new onto an existing system?

Gregg "T.Rex"
04-23-2004, 03:39 AM
Well...
I guess Newtek shouldn't spend much effort on developing or augmenting the current LW rendering engine, cause it's old and dated, though still quite good as it is.
As I see it, all the effort should be focused on making the LW SDK completely "open" and able to "talk" to third parties, so renderers like Maya's TURTLE or XSI's Mental Ray (or even FPrime v2.0) could be adopted like plugins and talk to all LW's handlers - displacements, surfaces and textures, volumetrics, motion etc.
Then I wouldn't mind paying any price for that new Lightwave beauty...

Cheers,
___________________
Gregg "T.Rex" Glezakos
3D Artist

jin choung
04-23-2004, 03:45 AM
yes.

that is the dream.

but i could not disagree more with the 'at any price' sentiment. but that's just me....

jin

JDaniel
04-23-2004, 11:08 AM
Jin, you can import geo and export geo w/ or w/out normal maps and displacement maps. Since LW can't handle the hi-res detail, I'll render the hi-res final geo (w/out displacement map) in FPrime, or, to take advantage of the lo-to-hi-res displacement map, I can render it in my friend's Maya 5 w/ Mental Ray.
I got it mainly for sculpting hi-res detail. I've only had it a couple days, so it's still kinda weird. :D Doing the tutorial thing still.
This guy is a ZBrush expert; he answered my emails too. :cool:
http://www.southerngfx.co.uk/
I figure if Zbrush2 is good enough for WETA in LOTR 3 then...

Edit: Mental Ray

jin choung
04-23-2004, 06:48 PM
yah,

but let me know when you figure out how the uvs work... that is the mystery for me.

jin

sketchyjay
04-23-2004, 07:42 PM
ZBrush has all the same mapping types as the rest. It also has AUVTiles:

http://www.pixolator.com/zbc-bin/ultimatebb.cgi?ubb=get_topic&f=1&t=010200


it is as I said before. It works like atlas but has several features that make it better.

1. it has a minimum size it will allow, so you don't get those tiny crumpled polys like in LW.

2. you can set the min-to-max difference, so you can make them all the same size, or only vary a small amount, or have it vary as much as in LW atlas mapping.

3. if your texture is too small it will warn you. LW will use a 256x256 as happily as a 4096x4096 texture.

it also seems to be better at joining up the edges of polys, so there won't be UV seam issues. the atlas/AUVTiles style mapping of course produces no stretching, or a minimal amount at most.

all of this is great for 3d paint programs but useless if you like to hand-paint your textures in photoshop

jin choung
04-23-2004, 08:02 PM
hey jay,

specifically, i want to know what the uv map of a low poly sds cage looks like in zbrush....

do you see straight lines or curves? are the images generated from zbrush counter-distorted?

these are the issues i'm talking about.

jin

claymation
04-23-2004, 09:21 PM
just scroll down through that link i posted; it shows what the uv map looks like. From the UV example he posts, it is not that the UV map is unique, just how it seems to seamlessly stitch it all back together.

http://209.132.69.82/uploaded_from_zbc/200308/user_image-1060868428yxn.jpg

as you can see it is very much like an atlas mapped lightwave model.

There is a thread on their site, which I was trying to find, where they show a model AUVTiled and exported to Lightwave to show the effects of distortion.
I loaded the model ages ago and there was little distortion.

jin choung
04-23-2004, 10:47 PM
hey clay,

ummm... what link?

anyhoo, yah, i anticipated that result but that would not be ideal for contributing to the texture map in something like photoshop.

i'm sure that there are other ways of dumping uvs into uv space so that it CAN be used in photoshop - e.g. do a pelt-like unwrap.

my essential question has to do with this:

------------------------------------------------------------------------------------

all apps that i know of, including lw and maya, uv map an SDS object by dumping the uvs of the BASE CAGE into uv space (actually, this is not strictly true in maya... will get into that later).

THUS - for an SDS SPHERE

in lw, you get SIX SQUARES in uv space. the squares are very different from the curved edges that would be present in the final 'LIMIT SURFACE' of the SDS object at render time. this discrepancy introduces DISTORTION.

MAYA gets over this to a degree by NOT dumping the uvs of the BASE CAGE but instead, uv mapping the cage after ONE ITERATION OF SUBDIVISION... so instead of SIX SQUARES for a sphere, it would have SIX "ROUNDY" SQUARES, each square composed of 4 quads... (perform a metaform on a square and you'll see what i mean).

the 'roundy squares' incur less distortion because the edges of each roundy square BETTER APPROXIMATE the limit surface than a perfect SQUARE does.
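
to put a rough number on that distortion, here's a toy sketch (my own analogy, not anybody's actual implementation): evenly spaced points along one edge of a cube cage do NOT land at evenly spaced spots on the limit sphere, so the linear cage uvs drift:

import math

# points along the top edge of one cube-cage face (y = 1, x from -1 to 1).
# bilinear cage uvs advance linearly in x, but on the projected sphere the
# same points land at angles atan(x), which are NOT evenly spaced
for i in range(5):
    x = -1.0 + i * 0.5
    u_cage = (x + 1.0) / 2.0                          # linear uv on the cage
    theta = math.atan(x)                              # angle on the limit sphere
    u_limit = (theta + math.pi / 4) / (math.pi / 2)   # arc-proportional position
    print(f"x={x:+.2f}  cage u={u_cage:.3f}  limit u={u_limit:.3f}")

at x=-0.5 the cage uv says 0.250 but the limit surface puts the point at about 0.205... that gap is the distortion.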

so i'm wondering how zbrush handles the issue.

does it indeed map the uvs of the BASE CAGE and just COUNTER-DISTORT all the images (bump, displacement map, etc) it generates, a la jacky daniel's method?

and considering that any other method that does not counter-distort the image maps would make export to other apps impossible, i would guess that that is what it does. but i just wanted confirmation....

whew... i do get tired of reiterating this over and over. this should really be in the FAQs of all future SDS apps. if for nothing else than to convince me that they are aware that the issue exists.

whew....

jin

kikuchiyo
04-27-2004, 06:13 AM
I do wish one of the options in this poll was:

"Subpixel displacement would be nice at some point, sure,
but I want Normal Displacement to work NOW".

JDaniel
04-27-2004, 01:47 PM
Haven't gotten into UVs yet, but this might help explain, Jin: http://www.jackydaniel.com/images/zb_uvs.jpg

jin choung
04-27-2004, 09:05 PM
thanks jackyd!

that was very informative... but it looks like they ultimately do not treat uvs any differently than lw... no consideration is given to uv curvature of sds objects....

so it relies on counterdistortion of the images generated from within zbrush....

or the mesh ends up simply being dense enough to defeat distortion....

ultimately, i would like to see how a simple zbrush sphere with a cage of six quads outputs a displacement map after it has been sculpted several subdivisions down....

thanks again.

jin

kikuchiyo
04-27-2004, 10:02 PM
Jin,

Not sure this is what you're looking for:

The first is a dmap of a sculpted 6-sided cube without 'smoothUV' switched on in ZB2. The second is with it switched on.

jin choung
04-27-2004, 10:56 PM
hey kikuchiyo,

yup, that's exactly what i was looking for... the edges are STRAIGHT.... that means that it IS exportable even to lw (if and when they fix the displacement deal), and that it relies on counter-distorting the image....

so basically, it seems to address uv distortion like everybody else except pixar currently.

thanks much.

jin

JDaniel
04-27-2004, 11:18 PM
You can also import your UVs onto a base mesh.
Rendering in LW, when you hit Tab on the base it will still distort as usual. I think the AUV tiles are the way to go (projected pixels per poly). Probably make UVs in LW, then convert them to AUV tiles in Z2.
I'm halfway through the Z2 practical manual tuts! :D

sailor
04-28-2004, 12:37 AM
Hey Jin btw,

I never realized, talking about distorting maps to fit UVs, that this is an option in Maya when trying to texture NURBS... As you might know, NURBS UV space is always square, and therefore there is a way in Maya to distort the map to fit the UV space that you can't edit... Knowing that there is a lot of heritage shared between NURBS and subdivision surfaces, you could certainly find interesting solutions by looking at the way NURBS deal with UVs, and even the way NURBS use reverse engineering to make some editing transparent for the user (see the example of edit points versus CVs, which reminds me of the way phantom points work in subdivision surfaces in LW).
All these workarounds existed in the NURBS workflow long before subdivs were used in the industry.

jin choung
04-28-2004, 12:58 AM
hey sailor,

actually, there's almost as much that's DIFFERENT about NURBS and SDS as there is similar....

UNLIKE nurbs models, sds models are NOT essentially rectangular sheets. because a single-skin sds model can take on numerous BRANCHING shapes, it is much more like polys.

nurbs lend themselves so well to uv mapping because u and v are essential properties of the surface of a nurbs patch. because a nurbs patch is indeed rectangular and because that surface is parameterized, you have INHERENT uv mapping.
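
for instance (a toy bilinear patch rather than real nurbs math, but the principle is the same): the (u,v) you evaluate the surface at IS the texture coordinate, for free:

# toy bilinear patch: four corner points, evaluated directly at (u, v).
# the parameters that position the point ARE the texture coordinates --
# no separate uv map to edit, and no cage-vs-limit-surface mismatch
def patch_point(p00, p10, p01, p11, u, v):
    def lerp(a, b, t):
        return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))
    return lerp(lerp(p00, p10, u), lerp(p01, p11, u), v)

# the middle of the patch; (0.5, 0.5) is also the texture lookup
print(patch_point((0, 0, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1), 0.5, 0.5))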
------------------------------------------------------------------------------------

and actually, uv mapping a nurbs model is almost NOTHING like uv mapping a polygonal model.

there is no UV EDIT VIEW for nurbs surfaces.... in fact, texture mapping nurbs is very very similar to the act of texture mapping in lw in the 5.6c days... you use projections... and you do not tweak individual control points (uvs).

and BECAUSE you are using projections to texture, you don't get the distortion that you get with uv mapping sds low res cages.

RIGHT NOW in lw, if you use legacy projections (so if you texture map it like a NURBS model), you do not incur the kind of distortion that you do with uv mapping.

but then, you end up with the same limitations as NURBS texture mapping... that is, no quick clean way to generate your painting template, the need to use multiple alpha channels to blend between multiple projections, etc....

that's why for nurbs models, almost everybody agrees that the best way to approach it is just use a 3D PAINT app....

jin

JDaniel
05-04-2004, 12:48 PM
If Messiah can have sub-pixel displacement ... c'mon LW. :mad:

jin choung
05-04-2004, 01:03 PM
yeah...

but they got their renderer pretty late on and it was touted as pretty advanced to start with....

heck, if they've got rendertime "tesselation" ("dicing" actually), that's a pretty damn nice renderer....

p.s. jacky, it seems that zbrush deals with uv distortion the exact same way that lw does: it doesn't....

in uv space, all edges are STRAIGHT and so there would be distortion... but the good thing is that like your method, all the image maps are counter-distorted so you get no apparent distortion.

and again, this is something that goes away as your base cage gets denser....

wow... it really does seem like everybody deals with this issue the same way: they simply don't! you uv map sds so as to incur distortion....

jin

kfinla
05-05-2004, 03:08 PM
I'd love to see sub-pixel displacement in LW ASAP! I actually emailed Steve Worley a few weeks ago about the subject and a possible plug-in. I assume he's busy working with Newtek and the Lightwave SDK to make the rendering engine more "open" to 3rd party plugins, so FPrime can finally work with G2, Sasquatch and other shaders.

Not to beat around the bush... I'd love to have ZBrush and Lightwave work together better; the workflow is currently quite lacking in their compatibility with each other. Which is where my interest lies in seeing sub-pixel displacement, and LW able to import 16-bit grayscale images like Maya etc.

-my thoughts

Gregg "T.Rex"
05-05-2004, 04:14 PM
Originally posted by kfinla
I assume he's busy working with Newtek and the Lightwave SDK to make the rendering engine more "open" to 3rd party plugins

Let's all cross our fingers about this, though I wouldn't hold my breath. No one official has yet cared to give a hint about subpixel displacement; only Deuce mentioned that procedurals in OpenGL will follow the new OpenGL additions in LW8. Is this a TOP priority?

Years have passed, and silence is just bad policy for Newtek nowadays, imho.

Turtle through Maya is the way to go; at least for subpixel displacement from ZBrush2. It's nice to have alternatives...

Cheers,
___________________
Gregg "T.Rex" Glezakos
3D Artist

tudor
05-06-2004, 01:33 AM
Let me see if I got this right..

At least for now, can we use bump maps (normal maps) from ZBrush without problems in LW?

Displacements in LW are atm no good, due to no micropoly displacement or adaptive tessellation.

Am I right so far?

If this is correct, I can use ZBrush to paint normal maps, which I can use with Marvin Landis's normal map shader. I lose the benefit of seeing the displacement in profile against the background, but the rest will look ok..

Gregg "T.Rex"
05-06-2004, 07:13 AM
Well...
Normal maps are FAR better than simple bump maps, but normal maps are still an illusion. They are nowhere near what you get with true subpixel displacement. Displacement is not only seen on the model's edges. The quality difference between subpixel displacement and normal maps can be huge, though it depends on the level of displacement you want to achieve. The ultimate abuse of true subpixel displacement can be seen in the movie Dinosaur from Disney. They used Renderman, of course. They used displacement to actually model and build those amazing dinosaurs.
Cheers,
___________________
Gregg "T.Rex" Glezakos
3D Artist

tudor
05-06-2004, 07:41 AM
I understand that displacement maps are much more powerful than normal maps, but for simple wrinkles and details like that, normal maps perform ok... if it is just the shading I want.

Why are normal maps better than bump maps, really? The way I understand it, normal maps are just bump maps halfway converted into something easier for the program to use. They were developed for games, where they couldn't afford to recalculate the bump on a frame-by-frame basis.

Gregg "T.Rex"
05-06-2004, 08:16 AM
Why are normal maps better than bump maps, really?

Well, for a start, they render about 20 times faster than bump maps. Most important is that raytraced shadows follow the slopes and valleys more accurately, where with bump maps they don't. At least, in most renderers they don't. One renderer I know of where bump maps cast proper raytraced shadows is finalRender for MAX...
Cheers...

Karmacop
05-06-2004, 09:32 AM
Bump maps (height maps) store the height of a pixel. Then, at render time, the renderer must work out from the bump map what direction every pixel is facing.

A normal map stores the direction each pixel is facing. That's why rendering with them is faster. Also, they are better because they essentially hold 3 times as much data as a height map.

To see the difference, convert a height map to a normal map, or a normal map to a height map.
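
For example, here's a minimal sketch of that conversion using central differences (a toy version; real converters add edge handling, wrap modes and strength options):

import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a 2D height (bump) map into a tangent-space normal map.
    A renderer has to derive these slopes from a bump map at render
    time; a normal map just stores the result up front."""
    dy, dx = np.gradient(height.astype(np.float32))   # slope in y and x
    nz = np.ones_like(height, dtype=np.float32) / strength
    n = np.stack([-dx, -dy, nz], axis=-1)             # unnormalized normals
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)   # pack [-1,1] into rgb

# tiny example: a 3x3 bump with one raised pixel in the middle
bump = np.zeros((3, 3))
bump[1, 1] = 1.0
print(height_to_normal_map(bump)[1, 1])   # the peak itself is locally flat,
                                          # so it packs to about (127, 127, 255)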

Gregg, I've never heard of a renderer that 'distorted' shadows based on bump maps. I thought by definition bump maps weren't meant to receive or produce shadows. Sounds like a cool feature though... how noticeable is it?

jin choung
05-06-2004, 10:32 AM
actually,

i have the intuition that normal maps don't contain much or any more information than a bump map.

consider:

1. normal maps are used in games over bump maps. that should tell us something.... it's much FASTER. but that is why i think that a normal map is simply a 'pre-digested' version of a bump map and that a normal map would be generated in the course of rendering a bump map in a non-rt renderer like lw. i think all renderers create a normal map in the process of rendering bump maps.

2. people compare 24bits of data VS. 8bits of data but i think that that is actually an inaccurate comparison. not all 24bits are used. why?

because 24bits encodes the normal data for pixels facing 360degs! meaning, half the 24bits face AWAY FROM CAMERA and are thus useless. that's why all the normal maps we're used to seeing have such a clipped palette... those colors represent the direction of pixels on the hemisphere facing us.

so, 24bits divided by 2 = 12bits. it's a comparison between 12bits vs. 8bits... and i just have a feeling that the remaining 4 bits don't make a noteworthy contribution.

3. the proper way to test would not be to simply convert one to another... use zbrush and create a normal map and render that.

then, use zbrush to create a grayscale image and use that as a bump map.

is there a qualitative difference?

4. upon much questioning, i've heard that a normal map is merely a 'SCALAR' of the grey scale value... i don't know what that means but it was said to say that it is not superior.

5. don't get too enamored with the idea of 'indicating a NORMAL vector'. it doesn't buy you any advantages... you cannot describe any surface that cannot be described by a grey scale map.

that is, you cannot possibly encode UNDERCUTS with a normal map.
------------------------------------------------------------------------------------

this is my intuition. almost everybody believes that normal maps are superior. but i think that's an assumption that may not bear out in the math of it all.

jin

Karmacop
05-06-2004, 11:32 AM
Ahhh Jin, you should start understanding things more ;)

A scalar is basically a magnitude. eg the length of something. A vector has both a magnitude and a direction, eg, wind velocity.

Anyway, I know you aren't big on maths Jin, but maybe I can help you understand why a normal map is better.

Ok, so the rgb channels define the xyz the pixel is pointing, right? So even when using tangent maps that only use 12 bits per pixel, we can find out how many degrees we can encode by dividing 360 by 128. That leaves us with 2.8... let's just say about 3. So a normal map can encode every 3 degrees on each axis... eg, 0°, 3°, 6° etc. Fair enough?

Now, bump maps are quite different. Let's find the angle in between pixels, as that's easiest to understand. Also, let's imagine that each pixel is actually a cube: 1 pixel high, 1 pixel wide, and one pixel deep. So say a pixel goes from 255 high to 254 high, a one-pixel height difference. Using the gradient formula (rise/run) we get 1, and taking the arctan of 1 we get 45°. Of course you're saying that's stupid, and it is, because bump maps are usually more subtle, but that's not even my point.

So if we use this same method on a few pixels, let's see what happens.

255 -> 254 = 45°

255 -> 253 = 63°

Again, the angles would never really be like this, but ok, this is where I start my point. These pixels are basically 1 pixel away from each other, and there is a 40% difference in the angle. Now watch this.

255 -> 2 = 89.774°

255 -> 1 = 89.775°

These pixels are again, 1 pixel different from each other, but this time there's a 0.001% difference between the angles.

So what has this shown? That bump map angles aren't evenly spaced. That bump map values only cover 45° at a 1:1 ratio - less when the map is scaled down (making it more accurate) and more when scaled up (making it less accurate, with a bigger jump in angles between pixels).
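
Here are those numbers computed directly, if anyone wants to check them:

import math

# gradient between neighbouring bump-map pixels, treating each height
# step as rise over a run of one pixel: angle = arctan(rise / 1)
for top, bottom in [(255, 254), (255, 253), (255, 2), (255, 1)]:
    angle = math.degrees(math.atan(top - bottom))
    print(f"{top} -> {bottom}: {angle:.4f} degrees")

# prints 45.0000, 63.4349, 89.7735 and 89.7744 degrees -- big jumps in
# angle between small height steps, almost none between big ones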

I hope you can understand this, I'm bad at explaining things.

jin choung
05-06-2004, 12:13 PM
wow,

nevermind the math. i couldn't quite choke down the condescension.

i don't know if you realize it or not but you have a tendency to state things as fact when issues are far from decided. you claim an authority on subjects that you don't have a right to claim. and you don't cite any sources... ever... for any of your very authoritative assertions.

you don't want to be the guy that thinks and speaks as if he knows everything already do you? cuz if you're wrong even once (and i clearly remember one instance already), you will damage whatever credibility you seemed to have.

jin

Karmacop
05-06-2004, 12:37 PM
Umm ... how was I condescending? I really didn't mean to be.

I state things as fact when they are. Whenever I'm not sure about something I will say so. I cite sources when I need to. What have I been wrong about?

jin choung
05-06-2004, 12:52 PM
what the....

if you don't get how what you wrote can be read as condescending:

you are not aware of the tone. ergo, no harm intended. forget it then.

------------------------------------------------------------------------------------
let's do some math then:

12bits per pixel yes? not 12bits per CHANNEL.

12bits = 4bits, 4bits, 4bits = x, y, z = r, g, b

correct?

4bits=2^4=16

each channel can have 16 discrete values... correct?

now does that math correspond with what you are saying?
------------------------------------------------------------------------------------

and actually, the math word that i've heard in relation to this topic was NOT scalar but:

GRADIENT

a programmer told me that lw has to sample 6 neighboring pixels and then calculate a GRADIENT (delta) vector which is added to the surface normal.
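
so i'd guess it looks something like this (my own toy guess at what he described, definitely not lw's actual code):

# toy guess: add a bump gradient (du, dv from neighboring height samples)
# to the surface normal, then renormalize
def perturbed_normal(normal, tangent, bitangent, du, dv, scale=1.0):
    n = [normal[i] - scale * (du * tangent[i] + dv * bitangent[i])
         for i in range(3)]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]

# flat surface, height rising along the tangent direction
print(perturbed_normal((0, 0, 1), (1, 0, 0), (0, 1, 0), du=0.5, dv=0.0))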

now do your math. does that model correspond to what you said a bump map is?
------------------------------------------------------------------------------------

jin

p.s. fine, i will not take offense. but you really do state things authoritatively. the instance that i remember that you were wrong was when we were swapping screenshots for how to resolve uv mapping sds distortion. do you remember that? the one example that you kept bringing up was impossible. and yet, every statement from you until we agreed that it was impossible, you stated completely as fact.

that may just be your personality but man, it rankles.

JDaniel
05-06-2004, 01:04 PM
umm.... sub-pixel displacement is cool! :eek: he-he-he

tokyo drifter
05-06-2004, 01:16 PM
Originally posted by jin choung
...you claim an authority on subjects that you don't have a right to claim...

LOL, a classic case of the pot calling the kettle black.

And who is handing out these "authority" certificates? I want one! I want the "right" to talk about things, too. But the bigger question is who gave Jin the "UV mapping expert award" when I've never even seen a single object textured by him? This forum is too funny.

Ohh, and sub-pixel displacement is very important and Lightwave needs it ASAP! Sorry I didn't cite sources for that last sentence, I'll have to go and write a thesis paper right now. :rolleyes: :D

wacom
05-06-2004, 01:22 PM
Originally posted by tokyo drifter
LOL, a classic case of the pot calling the kettle black.

And who is handing out these "authority" certificates? I want one! I want the "right" to talk about things, too. But the bigger question is who gave Jin the "UV mapping expert award" when I've never even seen a single object textured by him? This forum is too funny.

Ohh, and sub-pixel displacement is very important and Lightwave needs it ASAP! Sorry I didn't cite sources for that last sentence, I'll have to go and write a thesis paper right now. :rolleyes: :D

Now this is funny!

Still we do need to have some sub-pixel displacement!

wacom
05-06-2004, 01:30 PM
Not to get way OT in this... but is it just me, or does FPrime do a better job of taking bump maps into consideration for radiosity renders than the native LW renderer? I swear that I get better radiosity interaction, even when zoomed WAY in, in terms of shadows, reflections etc. It's like there is no minimum evaluation spacing. Could this not help us in the future if we DO get sub-pixel displacement?

tudor
05-06-2004, 02:59 PM
Uhm... but can we use normal maps from ZBrush in LW without problems? That is my main concern.

Otherwise.. Creating really good normal maps within LW is quite simple. Marvin Landis (The one, the only), normal map creator.
Create one lowpoly object. Duplicate to a new layer. Subdivide. Create new morph. Smooth shift a bit. Base layer. Airbrush, set it to the morph target. Paint. Subdivide more where needed. Run Landis normal map creator plug with lowpoly in front, high poly in back.. Tada!! Coolness.. Not as fast as Zbrush, but free, and a better interface then zbrush... btw. Normal map creator also produces displacementmaps. They look weird, but please, someone try them in zbrush and see if they hold up to some general standard for displacementmaps.

jin choung
05-06-2004, 03:28 PM
ok,

let's play then.

tell me when i've ever claimed to be an authority on anything? name one thread where i discuss something without an explicit or implicit "i believe".

name one thread where i've contradicted someone without showing explicit proof that it was so.

tokyo drifter, why is this your fight? who invited you in? i'm not even actively engaged in a tussle with karmacop now.

i work in professional gaming. i've textured lots of stuff for lots of games. you can see my work if you go to spider-man the ride in florida.

you want to get nasty? what have you done?

and excuse me if i don't feel the compulsion to prove myself to anyone, least of all to the likes of you.

how about you, wacom? you want some? want in on the fight too?

jin

wacom
05-06-2004, 04:09 PM
Originally posted by jin choung


how about you, wacom? you want some? want in on the fight too?

jin


I just said it was funny - that's all. Jin, you're a big help to the LW community. You ask for things that need to be in LW way before many people even know what they are. True, you do go on and on with technical know-how, which I'm in no position to question, and that sometimes alienates people from getting your main point, but that is a small price to pay for your insights. I consider you one of the UV map crusaders. You fight the nasty fight so that others don't have to.

Do I like your style? Not always. Do I like what you're saying? Almost always.

No need to bash pixels over my head! Let's focus our energies on getting LW to implement SPD instead of on each other.

tokyo drifter
05-06-2004, 04:28 PM
Originally posted by jin choung
tokyo drifter, why is this your fight? who invited you in? i'm not even actively engaged in a tussle with karmacop now.

I didn't know that I had to be invited. Who knew?

Originally posted by jin choung
i work in professional gaming. i've textured lots of stuff for lots of games. you can see my work if you go to spider-man the ride in florida.

What games? I'm a big gamer, I'd like to hear what you've worked on.

Originally posted by jin choung
you want to get nasty?

No.

Originally posted by jin choung
and excuse me if i don't feel the compulsion to prove myself to anyone, least of all to the likes of you.

I never said that I wanted you to prove anything. And thanks for the condescending "least of all to the likes of you" statement. Isn't that the same reason you got upset with karmacop? :rolleyes:

EDIT: I don't want this thread to get locked, so I'll stop posting off topic comments in this thread. I really do hope that Newtek develops a good displacement solution so that Lightwave can work well with amazing programs like ZBrush 2.

Everybody should invite more lightwave users to come to this thread and add their vote. If Newtek sees that a thousand users say they want sub-pixel displacement then it would be hard for them to ignore.

Gregg "T.Rex"
05-06-2004, 04:55 PM
Damn, people!
Calm down everybody! Take a deep breath for a sec, will ya?

To return things to "normal" let me post a very quick and dirty test of true subpixel displacement using Turtle renderer in Maya 5.01. Just to show what we miss in Lightwave and want Newtek to do for us in the next update, if possible.

Now, as you see, Turtle not only has true subpix. disp. but also a very, very cool feature called "Render as subdivision surface". With this you can render a polygon object just AS IF it were a subdivision surface object. Imagine having the "Render SubPatch Level" in Lightwave affect not only TAB-subdivided objects but also plain polygon objects.

Newtek people really have to consider where they "want to go" with Lightwave in the next revisions.
So here's the quick test: :D :D

jin choung
05-06-2004, 05:20 PM
wacom,

that was rather silly and histrionic of me... but it seemed like the right kind of thing to say when it looked like i was getting ganged up on.

true, you said merely that it was funny but the piece that you quoted implied that the fun was being had at my expense.

in the context of the thread, it looked like the beginnings of a dogpile - in which case i follow a martial arts tenet that goes something like: there's no such thing as being surrounded.... but there are times when you can swing in any direction.

i appreciate your conciliatory sentiments and if you want peace, good. you have it.

tokyo drifter,

either you are a sociopath or i am.

if it was not a challenge, what was the sentiment behind "i've never seen anything textured by him"? that's not just a meaningless statement of fact and you know it. in context, that is a direct indictment of my abilities and knowledge.

and so my response was DELIBERATELY rude... no need to compare it with my beef with karmacop.

i keep saying this but it is absolutely true: i make it a point NOT to start hostilities. and by hostilities, i do NOT mean disagreements. but if someone insults me, i don't turn the other cheek.

according to your response, you evidently have a LOT of problems with me. fine. but if you need to pick a fight, why not pick a moment when i've offended your sensibilities and take me on in a separate thread?

your choice. if you want an enemy, you got one.

as for the games i've worked on, if you are a hardcore gamer, you will likely never have even touched their boxes:

-secret agent barbie, ultimate ride, ultimate ride deluxe, ultimate ride disney edition, nascar racers web game, wild rides water park designer... all i can remember at the moment.

and so that there's no confusion, i worked on the PRE-RIDE CARTOON for spider-man, not the ride footage itself.
------------------------------------------------------------------------------------

finally,

the issue that brought this whole thing up is this:

i will not contradict anyone without PROVING IT or allowing for the possibility that i myself may be wrong.

i don't have a problem with people making innocent statements. but if you contradict someone, nevermind the fact that it is rather rude - nevermind that; but if you contradict, do you not incur an extra burden of proof?

either by logic by means of discussion, or by citation of authority if you are NOT an authority yourself. but to simply state that another's claim is false and yours is true...

that is my issue.

jin

jin choung
05-06-2004, 05:35 PM
gregg,

you want some too?!

kidding! kidding!

very nice.... again, i really hope that i will be proved wrong and that micropolygon dicing is do-able in lw because it is undeniably nice.

some zbrush guys have been turning out some pretty good results in lw on the cgtalk boards but larry schultz brought up a really good point that's almost been overlooked....

as it is, even though you CAN get nice results, you have to set subdivision order to FIRST! and so you can't gain any of the advantages of skinning/animating a low poly cage while getting a superdense mesh at render time. you have to animate (not to mention WEIGHT MAP) the super dense mesh!

right now, lw's method of determining subdivision as FIRST or LAST is really not ideal and we end up with situations where you can get NICE DISPLACEMENT but you'll NEVER BE ABLE TO ANIMATE IT!

so i hope that if/when newtek gives us better displacement, they're mindful of implementing it in such a way so that it's useful for more than just still sculptures!

jin

tokyo drifter
05-06-2004, 05:38 PM
Ahh Jin, you're so cute when you're angry. Wanna kiss and make up?

Gregg, in turtle, can you exclude certain polygonal objects from that "render as subdivision surface" feature (that is, if it's a global parameter), or is that option per object? How does it handle models with n-gons when subdividing? Nevertheless, that's a great feature!

wacom
05-06-2004, 06:00 PM
not to be an idiot but...

I know why I want True Sub-pixel Displacement, but I'm confused as to how hard or easy it would be to get into LW right now.

OK, so not even getting into the UV map issue, is all that's needed just that the mesh be divided at render time in a more intelligent way? Is Turtle just slice'n and dice'n the model up regardless of where it's being displaced, or does it do it only where needed, to help keep render times down?

I don't need a math answer here, or any proof that someone is a rocket scientist, just a simple explanation of what is going on with the mesh.

Could you render a mesh version of that displaced sphere so I can see the mesh, Gregg "T.Rex"?

jin choung
05-06-2004, 06:33 PM
fine... there will be peace in the kingdom... good.
------------------------------------------------------------------------------------
as for subpixel displacement... as far as i've heard, it is somewhat of a misnomer.... the crux of it is not the fact that it is getting displaced but that the renderer itself simply produces 'micropolygons'.

if turtle is a REYES renderer, it works something like this:

------------------------------------------------------------------------------------
1. resolution independent surfaces (nurbs, sds) are truly treated as if they have no INHERENT RESOLUTION. no tessellation state need be entered from the 3d app (high, med, low, etc).

2. the final DICING (if it's reyes, it creates micropolygons which are 'bilinear patch quads' - which is like a poly quad with straight edges and i can't personally understand what the distinction is but they do note that it is not a quad poly) can be set to "how much of a pixel". so you can set a 'micropoly' to be one pixel large or some percentage of that.

3. but what this means is that a part of a nurbs surface in the EXTREME FOREGROUND is NOT diced in the same way as that surface extends into the BACKGROUND! if it simply diced uniformly according to the highest setting needed for the part closest to camera, render time would skyrocket.

4. so they have some kind of SPACE PARTITION thingy where it sections up a model and dices those sections ADAPTIVELY, according to their own needs... so that according to distance to camera, the micropolys all end up being the 'percentage of a pixel' that you set.

5. afaik, REYES DOES NOT dice AGAIN after considering DISPLACEMENT.... what this means is that you can create micropolys that stretch larger than a pixel if you make an unreasonable displacement map (from second renderman book).

this is what i mean that the displacement itself is not really handled in an exotic manner... it's just that the exotic dicing that happens during render is the special part that the displacement then gets to act on.

------------------------------------------------------------------------------------
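to make the adaptive part of 2-4 concrete, here's a rough python sketch - NOT turtle's or prman's actual code, just the general idea under a simple pinhole-camera assumption (focal_px and shading_rate_px are made-up illustration parameters):

------------------------------------------------------------------------------------
import math

def dicing_depth(world_edge_len, distance, focal_px,
                 shading_rate_px=1.0, max_depth=10):
    # pick how many times to halve a patch edge so the resulting
    # micropolygon edges project to <= shading_rate_px pixels.
    # world_edge_len: patch edge length in world units
    # distance:       camera-to-patch distance in world units
    # focal_px:       perspective scale (focal length in pixels)
    projected_px = world_edge_len * focal_px / distance
    if projected_px <= shading_rate_px:
        return 0  # already subpixel, no dicing needed
    # each split halves the projected edge, so solve 2^d >= projected/rate
    depth = math.ceil(math.log2(projected_px / shading_rate_px))
    return min(depth, max_depth)

# the same 1-unit patch edge, diced at different distances:
for dist in (1, 10, 100):
    d = dicing_depth(1.0, dist, 800.0)
    print(f"distance {dist:>3}: depth {d} -> {4 ** d} micropolys per quad")
------------------------------------------------------------------------------------

the depth falls off with distance, which is exactly why the foreground of a surface gets diced much harder than the part of it that trails off into the background.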

as for how difficult this would be for lw, my opinion is that it is not do-able within newtek's current financial situation. many, including lightwolf, feel that it is. i hope they're right.

but considering that we don't do #s 1, 2, 3 or 4, and since our renderer is not in any way REYES-spec'd, i seriously doubt they can MODIFY the existing renderer to do this.

they MAY be able to alter the architecture and simply BUY a third party renderer that will though.... or even just make a conduit to something like AQSIS which i keep bringing up again and again... but hey, IT'S FREE RENDERMAN! that imo would be very very doable... heck, even as a fallout from their fixing the SDK for FPrime.

jin

jin choung
05-06-2004, 06:41 PM
wacom,

if the sphere is diced into micropolygons, the mesh will be composed of quads that are a pixel large or smaller.

so the 'mesh' that you see in your opengl views is an approximation without any necessary bearing on the 'final mesh'.

so you would have your 'control cage' or 'isoparms' that define a structure that is potentially INFINITELY DENSE.

and further, even if you zoomed in on that sphere very very closely and then rendered the mesh, the mesh would STILL be composed of quads that are a pixel large or smaller.

and that's really cool. you never ever get visible facets if you use a res independent geometry type like sds or nurbs.

jin

Gregg "T.Rex"
05-06-2004, 07:29 PM
Originally posted by tokyo drifter
Gregg, in turtle, can you exclude certain polygonal objects from that "render as subdivision surface" feature (that's if it's a global parameter) or is that option per object? How does it handle models with n-gons when subdividing? Never the less, That's a great feature!

It is not a global parameter. It is a flag set on a per-object basis. It doesn't care if the polys are n-gons; it subdivides them too. Though the manual states that for best results you should keep the geometry as quad-based as possible.


Originally posted by jin choung
if turtle is a REYES renderer...

Turtle, afaik, is NOT a REYES renderer, though i could be wrong here, since it is not that clear in the manual. Here's Turtle's site for those interested: Illuminate Labs (http://www.illuminatelabs.com/)

Right now, i'm racing against time to finish a Coca Cola commercial, which btw has a 2-million-polygon stadium and 7.4 billion polygons in the form of 100,000 high-res subdivision-surfaced people that populate that stadium. A nightmare job, believe me... When i'm done, i'll come back with more real-world "abuse" of Turtle's subpixel displacement...

Stay tuned everybody. And especially, Newtek guys...
Cheers,
___________________
Gregg "T.Rex" Glezakos
3D Artist

wacom
05-06-2004, 07:47 PM
OK...clear on it now! Thanks. So even if somebody wanted to "show me the mesh" they couldn't...or else it wouldn't be sub-pixel.

So does it slice and dice the whole model? Kind of like hitting ****-D as many times as there are pixels per polygon per image. So is the big secret getting the "big", closer-to-the-camera polygons cut up more than the "smaller" ones in the scene? Is this how it differs from the simple subdivision levels in LW?

jin choung
05-06-2004, 08:11 PM
hey gregg,

cool. can't wait to see what you're working on. let us know when it hits the airwaves.

yah, i have no idea about turtle either. but i've only ever heard of micropolygon dicing on reyes specd renderers. but heck, even POSER has it now so perhaps it's not the sole province of reyes.

wacom,

right... that is the great thing about micropolys. if it's micropolys, you will never ever see a facet and that's why it's impossible to show you a mesh! the ball would simply be the color of the wireframe you selected! but as i said, if you do a ridiculous kinda displacement map, you can still stretch out a micropoly until its 'edges' become visible.

right, and that's the big difference between micropolys and lw. in lw, you set a resolution for the model and that's the resolution for the WHOLE MODEL, near, far or inbetween.

if you zoom in far enough, you WILL DEFINITELY SEE EDGES if you don't up the subdivision level.

!!! aha!!!

but if you zoomed in that close, that means you have to up that subdivision level PRETTY FREAKIN' HIGH to get rid of facets!!! and again, that level of subdivision has propagated to the ENTIRE MODEL!!! and i'm not sure but i don't think lw is great at CULLING out non visible polys! and so you incur a MASSIVE render time hit to do super close ups (but that's rarely an issue though honestly).
------------------------------------------------------------------------------------

simply, lw's subdivision:
- HAS NO RELATIONSHIP TO SCREEN RESOLUTION
- SUBDIVIDES THE ENTIRE MODEL *UNIFORMLY*. it is much much easier to do this! all the computer has to do is the equivalent of hitting 'ctrl+d' to however many levels you need!
- does not take place in the renderer! lw just turns everything into triangles and feeds raw triangles to the renderer. the renderer never sees a 'subdivision surface' geometry type... just triangles.
------------------------------------------------------------------------------------
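the cost of that uniform approach is easy to put a number on. a trivial python sketch (the 5000-quad cage is a made-up figure):

------------------------------------------------------------------------------------
# every uniform subdivision level quadruples the quad count for the
# WHOLE model, whether a quad ends up covering one pixel or a thousand
base_quads = 5000  # hypothetical character cage
for level in range(1, 8):
    print(f"subd level {level}: {base_quads * 4 ** level:,} quads")
------------------------------------------------------------------------------------

by level 7 that's over 80 million quads, all at the same density whether they sit an inch from the lens or on the horizon.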

i just look at it as lw being much more MANUAL.

YOU have to make sure you won't get faceting at a certain resolution.

YOU have to make sure that the shot isn't so dense as to make rendering virtually impossible.

and i just consider that a price that must be paid, given the price of the product.

besides, if not for MENTAL RAY now shipping with virtually all the other apps, the other apps' native renderers do the same thing as lw. lw is tops amongst the native renderers but that's why people still like to go with renderman or reyes for movie production and stuff...

jin

Karmacop
05-06-2004, 10:23 PM
Originally posted by jin choung

let's do some math then:

12bits per pixel yes? not 12bits per CHANNEL.

12bits = 4bits, 4bits, 4bits = x, y, z = r, g, b

correct?

4bits=2^4=16

each channel can have 16 discrete values... correct?

now does that math correspond with what you are saying?


See, this is why I kept questioning things as I went through - I need people to check my math at 4 am :p

Ok, so, looking at normal maps a bit more, their vectors cover 180°. So even with tangent maps, you're still getting the full 180° on the x and y axes, but you're only getting 90° on the z axis. Now, it was really wrong of me to say 12 bits per pixel, because it's not - you shouldn't confuse me Jin :p. It'd be 21 bits per pixel ... but even this isn't true. Since the x and y axes are both still 8 bit no matter what, and the z axis is 7 bit, that's 23 bits per pixel ... one bit less than 24 bit for a world normal map.

So taking this, normal maps can actually encode a value every 1.5 degrees.



Originally posted by jin choung


and actually, the math word that i've heard in relation to this topic was NOT scalar but:

GRADIENT

a programmer told me that lw has to sample 6 neighboring pixels and then calculate a GRADIENT (delta) vector which is added to the surface normal.

now do your math. does that model correspond to what you said a bump map is?


What I wrote before was for angles in between points, so my math for that stands as far as I know. I don't know where you get 6 samples from ... I think it'd more likely be 8 :confused: . Anyway, I'm not 100% sure how to figure out exactly how many angles bump mapping can represent, but I know for a fact that because of how it is interpolated, there are some angles it can't do. Basically, bump maps are nice for doing smooth bumps, but not angular bumps. You'd need a much higher-res bump map than normal map to create similar angles. Procedural textures that are calculated at render time of course get past this.
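Whatever the exact kernel, the basic step is easy to sketch in Python: sample the height map around the shading point, take finite differences to get a gradient, and tilt the normal by it. A minimal 4-sample central-difference version (a real renderer may well use 6 or 8 samples and wider filters; border texels are ignored for brevity):

------------------------------------------------------------------------------------
def bump_normal(height, x, y, n, amplitude=1.0):
    # height: 2D array of greyscale heights in [0, 1]
    # x, y:   integer texel coords of the shading point (not on a border)
    # n:      surface normal as an (nx, ny, nz) tuple
    # central differences over 4 neighbouring texels
    dhdx = (height[y][x + 1] - height[y][x - 1]) * 0.5
    dhdy = (height[y + 1][x] - height[y - 1][x]) * 0.5
    # tilt the normal against the gradient, then re-normalize
    nx = n[0] - amplitude * dhdx
    ny = n[1] - amplitude * dhdy
    nz = n[2]
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)
------------------------------------------------------------------------------------

Because the result is always interpolated from neighbouring texels, a hard crease gets rounded off - which is what I mean about smooth versus angular bumps.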
------------------------------------------------------------------------------------


Originally posted by jin choung

jin

p.s. fine, i will not take offense. but you really do state things authoritatively. the instance that I remember that you were wrong was when we were swapping screenshots for how to resolve uv mapping sds distortion. do you remember that? the one example that you kept bringing up was impossible. and yet, every statement from you until we agreed that it was impossible, you stated completely as fact.

that may just be your personality but man, it rankles.

What? What was impossible about my uvmap? It was a completely possible uv map ... I had it mapping onto a real object. What we agreed on was that what I did wasn't the answer ... which was my point. But the way you were creating your uv maps, using atlas, wasn't the answer either, and would only work well with your cube example. After reading the pdf you posted from pixar I know, or have a good idea about, what Newtek is doing wrong with their sds, as well as how they are uv mapped.

JDaniel
05-06-2004, 10:43 PM
Gregg how's it look out of Mental Ray? Which do you prefer?

jin choung
05-06-2004, 11:40 PM
fine,

you were wrong in that what you proposed wasn't the answer. my point is merely that when you contradict me, you might want to throw in some magnanimous room for error. cuz you've been wrong in the past and you will be wrong again.

I'VE BEEN WRONG IN THE PAST and I'LL BE WRONG AGAIN.

(and as for my solution, if you use UVEDIT PRO which does not require unwelding, it should work)

as i hoped to clarify, i don't care that people just state things in a blank context.

but if you are challenging someone else's ideas, it is simply not enough to say that it is not so without proof... and imo, a generous helping of leeway for error.

we're none of us experts.

when i write, i am begging someone to throw in some authoritative facts. i am ignorant of a great many things that i would desperately like to be enlightened on.

when you come in and state things categorically, you tend to SHUT DOWN the search.
------------------------------------------------------------------------------------

if you're gonna bother to write about math, care to write in a way that is at all understandable to the rest of us?

HOW, from what you wrote, did you arrive at the conclusion that it's 21 bits per pixel?

and if what you're saying is true, why is the color palette for a tangent space map so markedly different from a worldspace one, then?

i've never seen a credible discussion of this issue. have you? do you have any sources that you could cite to back up anything?

cuz i'd really like to read about it.

jin

Karmacop
05-07-2004, 01:17 AM
Sorry, it's hard to write math in just text, and I don't know how much of it you understand. Also, I don't cite things often because I've either picked something up along the way, or I can't find any good links.

Ok, now to find out it's 23 bit, I opened the image in something where I can see the rgb channels separately. Of course you should use a good normal map; one of a sphere would be best as it covers all angles. Anyway, if I look at the red or green channels by themselves I can see they go from black to white. I can also check this by looking at the separate colour levels. Now, when I look at the blue channel I can see it doesn't go to black, it goes to a mid grey. Again, looking at the blue colour levels I can see it only takes up half the graph.

Now, with an 8 bit channel we know that there are 2^8 (2 to the power of 8) possible levels, or 256 different levels. So if the blue channel is only using half the possible levels, that'd mean it's using 128 different levels. This is where you got confused (and so did I): 128 is 2^7, not 2^4. So the blue channel is really only using 7 bits to encode its data.

So ok then, that's 8 bit for the red channel, 8 bit for the green channel, and as I just said, 7 bit for the blue channel. So 8+8+7 = 23bits.

Why I said doing it your way would be 21 bit is that you were saying it only uses half of each channel. As I said earlier, half of each channel is 7 bits. So 3 channels times 7 bits is 21 bits.

World space looks different to tangent space because tangent space is missing half of its blue channel, or 8,388,608 different colours.
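You can script the same check instead of eyeballing the levels dialog. A sketch assuming Pillow and NumPy are available, with "normalmap.png" standing in for whatever tangent-space map you have around:

------------------------------------------------------------------------------------
import numpy as np
from PIL import Image

img = np.asarray(Image.open("normalmap.png").convert("RGB"))  # hypothetical file
for i, name in enumerate("RGB"):
    lo, hi = int(img[..., i].min()), int(img[..., i].max())
    levels = hi - lo + 1
    print(f"{name}: {lo}..{hi} -> {levels} levels (~{np.log2(levels):.1f} bits)")
------------------------------------------------------------------------------------

For a typical tangent-space map this reports roughly the full 0..255 range on red and green but only 128..255 on blue, i.e. about 8 + 8 + 7 bits.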

If you want better explanations and diagrams I could do that, but I need to get going to work soon so you'll have to wait several hours.

wacom
05-07-2004, 01:29 AM
I know this is all really good stuff, Karmacop and Jin... but how is this thread going to gain any steam (besides the steam you're blowing off at each other) when you guys go on like this? Is it really to inform anyone, or to show up the other guy?

Just had to ask, as I think we're still talking about Sub Pixel Displacement, right?

Karmacop
05-07-2004, 01:39 AM
Jin wanted to know something, so I'm trying to inform him about it. Jin wanted to know whether normal maps or bump maps were better, as both can be used for sub-pixel deformation.

Gregg "T.Rex"
05-07-2004, 06:18 AM
Originally posted by JDaniel
Gregg how's it look out of Mental Ray? Which do you prefer?

Well...

Mental Ray in version 3.3 and in Maya 6.0 is looking great. They've done a great job this time. It's so much faster now for GI calculations and easier to work with.

But...

Mental Ray does not have the cool "Render as subdivision surface" feature Turtle has for polygon objects, and its analytic displacement is about 20 to 30 times slower than Turtle's.

And best of all, Turtle is - and take my word for it - "CLICK & PLAY". It's so easy to use that it's scary! It's as easy as editing options in Lightwave. The renderer sets up and does everything for the user. It's the most user-friendly renderer i've ever seen. And i think i've seen most of the well known renderers around.

Turtle is right now finishing its beta testing period and is about to go gold. I'm proud to say that i've helped the guys behind Turtle find bugs, since i've been using Turtle for a couple of months now, in a real production environment.

Newtek really has to stand up and look around at the competition. Until last year LW was my main app for everything. But we have to evolve as the bar rises. Nowadays, it's only for modeling and some volumetric stuff, with the HD Instance plugin. Who knows what will happen when (if) MODO hits the streets...

Imagine if there were no third party developers around, like D-Storm, Worley, Evasion, Happy Digital, Dynamic Realities to name a few, and all the great minds our Lightwave community has. Since version 6, who is actually writing innovative code for Lightwave? Newtek, or all the third parties out there?


My best regards,
___________________
Gregg "T.Rex" Glezakos
3D Artist

ps: I've been a hardcore LW user for more than 10 years and my heart is in pain, watching my favorite app fall so far behind the competition...

Karmacop
05-07-2004, 09:59 AM
Found this site, not very in depth but it has some nice examples.

http://www.pinwire.com/article82.html

JDaniel
05-07-2004, 10:15 AM
Thanks Gregg. How much will Turtle be?
You hit the nail on the head.
Also, Silo is looking damn good.

Luís Santos
05-07-2004, 10:18 AM
Newtek, realy have to stand up and look around to the competition. Until last year LW was my main app for everything. But we have to evolve, as the bar raises. Nowdays, it's only for modeling and some volumetric stuff, with the HD Instance plugin. Who knows, what will happen when (if) MODO hit the streets...

I´m using modeler 90% of the time; when modo hits the street i´m moving... for rendering i´m going to use messiah studio. It has subpixel displacement, fast radiosity, good rigging tools, and very nice sss.

I hope nt implements these features fast, or it´s going to lose some ppl.

LS

JDaniel
05-07-2004, 12:18 PM
Luis, have you seen this $100 subd modeler w/ edge selections too? http://www.nevercenter.com/

Luís Santos
05-07-2004, 01:59 PM
Yes, JDaniel i´ve tried the demo, it´s fun to work with. Let´s see modo, the expectations are pretty high. ;)

Cheers,

Luís Santos

jin choung
05-07-2004, 02:21 PM
hey wacom,

well i don't think me and karmacop are involved in a pissing contest anymore... other than karmacop's hellbent insistence on continually and unceasingly proclaiming his belief that i could not possibly understand anything of any value whatsoever of course....

(i mean, if subjects like quantum physics can be made understandable to the lay-person by authors like brian greene, michio kaku and stephen hawking, i think the mathematics of normal maps should be at least as much within reach, for heaven's sake!)

------------------------------------------------------------------------------------

my personal concern came up when someone brought up normal maps.

for me, i have a sneaking suspicion that normal maps are simply an explicit encoding of values that must be calculated during rendering when you're using a bump map - therefore being faster.

but i also have a feeling that they are no more descriptive. and since this issue is not really widely discussed (for some reason! i mean what question is begged more than this?!), i really want to get to the bottom of it if possible.

just a desire to understand. if i could ask john carmack, i would.

jin

Gregg "T.Rex"
05-07-2004, 03:01 PM
Originally posted by JDaniel
Thanks Gregg. How much will Turtle be?

I'm not sure about the price since it's still in beta, though i think it's something between $1000-1500 for a single workstation license. Far cheaper than Renderman or anything of the same quality.


You hit the nail on the head.

I hate it when i do that, but i'm afraid Chuck, William or someone official from Newtek has to stand up and explain to the whole LW community what the future of LW is and how long it will take to get there. Unless they are devoted 100% to the Toaster, LW won't get far from where it is now. They owe us paying customers at least an answer, because i'm tired of ..."hoping"...

A dedicated user,
___________________
Gregg "T.Rex" Glezakos
3D Artist

bloontz
05-07-2004, 03:46 PM
Originally posted by jin choung
for me, i have a sneaking suspicion that normal maps are simply an explicit encoding of values that must be calculated during rendering when you're using a bump map - therefore being faster.

but i also have a feeling that they are no more descriptive. and since this issue is not really widely discussed (for some reason! i mean what question is begged more than this?!), i really want to get to the bottom of it if possible.

just a desire to understand. if i could ask john carmack, i would.

jin

http://www.pinwire.com/article82.html

This seems to make sense, though I know nothing about the author. If correct it would indicate that normal maps do contain more information/detail than bump maps.

jin choung
05-07-2004, 07:38 PM
hey bloontz,

thanks much for the article! but that's the problem with some of these 'lay' articles... they seem to dwell on a 'popular wisdom'...

of course (!!!) 24bit normal maps are more accurate than 8bit depthmaps!

but the really hilariously funny thing is that the results of the depth map beethoven example that they use are REMARKABLY SIMILAR to the normal map one!

the only difference seems to be a deliberate attempt to light it badly and with less specularity!

seriously, look at the details in terms of folds and polys... REMARKABLY SIMILAR! and i really do wonder what it would look like if it was lit better.

i'll keep looking... wish i really could ask john carmack!

jin

bloontz
05-07-2004, 09:16 PM
What I found interesting was the method he uses to generate normal maps, at the bottom of the article. Helps to explain the wacky colors and also seems to suggest that normal maps can contain more info than bump maps in that they have directional info for each axis whereas the bump is just elevation. That seems like it would allow for the greater accuracy with sharp angles that he talks about. I would think that Marvin Landis should have a good handle on normal maps, you should send him an email.

jin choung
05-07-2004, 09:34 PM
actually,

i already wrote marvin about this and his brief response was that normal maps aren't really more accurate. i wish i could get him to write about it at more length but he's a busy man and i don't want to bother him just to satisfy my own curiosity....

jin

Karmacop
05-07-2004, 10:50 PM
Jin, what is hard to understand about my maths? I'm trying to make it simple, but it's hard to explain using text. If pictures will help I'll try to draw something up. Would you be happier if I made you a scene file showing a bump map vs a normal map?

Jin, I originally posted a link to that article.

You could email John at [email protected] . He usually answers emails.


they seem to dwell on a 'popular wisdom'... of course (!!!) 24bit normal maps are more accurate than 8bit depthmaps!

Jin, are you saying you know that 24bit normal maps are better, or that this person just assumes it with no proof like everyone else? And I can assure you, even if they were the same material (they probably are, just for your info), that the normal map has more info.

Karmacop
05-07-2004, 10:58 PM
Also Jin, when using bump maps with today's 3d engines, the bump map will be converted to a normal map on loading so that it doesn't need to be computed every frame. So then what's the difference between storing normal maps and storing bump maps? Bump maps can be stored in less space, but normal maps have more quality. Converting bump maps to normal maps at loading is very fast, so don't use that as a reason. Again, this is just coming from me so you will probably want a better source.
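A sketch of that load-time conversion, assuming NumPy - this is the generic gradient math, not any particular engine's loader. The 0.5 offset at the end is why a flat area comes out that familiar normal-map blue of roughly (128, 128, 255):

------------------------------------------------------------------------------------
import numpy as np

def bump_to_normal_map(height, amplitude=1.0):
    # bake a greyscale height map (2D float array, values 0..1) into an
    # 8-bit tangent-space normal map in a single pass at load time
    dhdy, dhdx = np.gradient(height)             # central differences
    nx, ny = -amplitude * dhdx, -amplitude * dhdy
    nz = np.ones_like(height)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    return np.uint8((n * 0.5 + 0.5) * 255)       # pack [-1, 1] into [0, 255]
------------------------------------------------------------------------------------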

jin choung
05-07-2004, 11:27 PM
Originally posted by Karmacop
Jin, why is hard to understand about my maths?

sigh....

i'm not saying that your math is hard to understand. i'm saying that you continue to condescend by doubting that I CAN understand... you keep saying that!

the implication of course being that i am a monkey.

but seriously... i'm coming to think that this is a language barrier or something - if it is i'll gladly just let perceived slights go from here on out.

actually, this will be impossible to ask without it sounding offensive but i ASSURE YOU, that i don't mean it as an offense:

is english your primary language?

again, i ask only because we seem to be having a lot of misunderstandings. if it is not, that is NOT a bad thing! there are millions of people who speak german, chinese, spanish, ancient greek, etc that are terrific people and better than me.

again, i just want to know because it seems like we're having trouble communicating.

jin

Luís Santos
05-07-2004, 11:49 PM
One word:

When?

http://www.pixolator.com/zbc-bin/ultimatebb.cgi?ubb=get_topic&f=1&t=015376

I feel ashamed, really, even c4d have it.

Luís Santos

jin choung
05-08-2004, 12:48 AM
in any case,

thanks karmacop. our little discussion really lit a fire under me to find an answer to this. and you were also right about john carmack!

and although there is some pretty technical stuff in here, i think that the gist is pretty clear if complex.

behold:

------------------------------------------------------------------------------------
Normal maps and bump maps can be converted back and forth for a given model, so they both do effectively the same thing. The question of precision has some subtleties.

24 bit normal maps do not really have as much precision as you would like, because the only values that are relevant by themselves as normals are those close to the surface of the unit sphere, which is a much, much smaller number than 2^24. If you have a fragment program normalize the values after loading, you can use many more of the values to get fractional levels between the surface values, but picking the right ones is tricky. Most models don't show a problem, but you can easily create models with subtle faceting that will look like crap with a straightforward normal map, especially if you use the normal map for reflection. This lack of precision is why Nvidia provided the hi-lo format, which is another way of specifying unit normals with much more precision. These were rarely used because of the awful NV20 texture shader interface.

Bump maps have a scale factor associated with them, so when this scale factor is small, the resolution of the generated normals can be arbitrarily fine. A sophisticated program may also use fairly wide filter kernels that look at quite a few bump map texels to determine a sampled normal.

A program like Maya is certainly generating higher quality normals from grey scale bump maps than a real time game is getting from 24 bit normal maps, but the same techniques could also be applied to normal map filtering if you wrote a long fragment program for it.

With state of the art hardware (NV40) you can store full floating point normal maps and get anisotropic trilinear filtering on the full floating point values if you want to spend the memory and clocks for it.

John Carmack
------------------------------------------------------------------------------------

and so here's the answer from a pretty freakin' credible source. i am more than sated.

jin

p.s. and in case it might be relevant to the reply, here is my initial mail:
------------------------------------------------------------------------------------
hi john,

long time admirer, first time writer.

sorry to bother you. with doom3 and rocketing stuff into the stratosphere, you must be a tremendously busy person so please ignore this if you haven't the time to spare.

but i have a technical question that i don't think anyone (according to web searches) has thought to seriously ask:

are 24bit normal maps being used in today's realtime engines superior in quality to the 8bit bump maps that we've been using for years in nonrealtime apps like maya, lightwave, max, etc.?

there's a lot of hoopla over normal maps these days primarily because of the really cool, novel way of GENERATING them: by comparing the mesh of a low poly with a high and encoding the difference into a normal map.

but my question is if you used the same technique to generate the image maps, would a normal map be superior to a bump map? or is the prime advantage merely speed for real time engines?

popular wisdom says that, "OF COURSE, normal maps are superior, providing better detail and more accuracy. it's 24bits vs. 8bits... what are you thinking?"

but since tangent space normal maps throw away all the normals on the hemisphere facing away from us, isn't it really 12bits vs. 8bits?

is it better? i have a sneaking suspicion that it's not.

and if you do write back, i hope you don't mind if i share the response with a group of people in the lightwave forums... we're having a discussion there about this issue.

thanks very much and i'm eagerly looking forward to doom3!

jin
------------------------------------------------------------------------------------

Karmacop
05-08-2004, 04:46 AM
Sorry Jin, I wasn't saying you don't understand maths, I was asking what about my maths you didn't understand. A lot of people have trouble with math, and trying to explain maths in words is hard to do.

Yes, english is my native language. I left out the word 'it' because I didn't proofread. I was in a rush and tired.

Anyway, I don't think John really answered what we're discussing. I assure you that if you make a normal map and a bump map from a model, and then convert the bump map to a normal map, you will end up with two very different normal maps.

Also Jin, again, it's not 12bits. Half of 8 bits is 7 bits, and the red and green channels still use all 8 bits.

Lynx3d
05-08-2004, 07:11 AM
Wow, i haven't read the whole thread, just the last 2 pages...

Precision of normal maps... well i guess 23 bits is not really true, and neither is 12 bits.

I just had a look at some normal maps, and it seems clear to me that the normals are normalized, i.e. the length of the reconstructed normal vector comes out as (nearly) 1.0.

This inevitably makes one coordinate redundant, so of your 24 bits, 8 are actually useless, because you can tell the third coordinate when you know that the vector length equals 1.0.
So we're at 16 bits, of which 1 is never used because of the backfacing situation.
That means, with the normal maps i just investigated, you get roughly 15 bits of useful information out of 24. That's still a lot more than 8, but not really fine enough to make a model look perfectly smooth without texture filtering!

You may argue that all channels contain values from 0-255 and hence all bits are used, but the 3 values have a mathematical relation that makes one redundant. You can actually see it: does a normal map contain near black or white pixels?
The ones i saw didn't, hence those perfectly possible values never get used.
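The redundancy is easy to demonstrate: for a normalized map x^2 + y^2 + z^2 = 1, so the blue channel can be rebuilt from red and green alone. A small Python sketch:

------------------------------------------------------------------------------------
import math

def reconstruct_z(r, g):
    # rebuild the blue channel of a normalized normal map texel from
    # red and green alone, assuming a front-facing (z >= 0) normal
    x = r / 255.0 * 2.0 - 1.0   # unpack [0, 255] -> [-1, 1]
    y = g / 255.0 * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return round((z * 0.5 + 0.5) * 255)  # repack; always >= 128

print(reconstruct_z(128, 128))  # a flat texel -> 255
------------------------------------------------------------------------------------

Which is also why you never see near-black or near-white pixels in such maps: those RGB values would imply a vector length other than 1.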

Karmacop
05-08-2004, 08:53 AM
Hmm ... interesting point. I was thinking about that the other night (that there can be more than one way to give the same angle) but didn't think about it any more than that. I'll have a look into it ...

EDIT: Ok, maybe this is just what I'm using to test my normal maps, but it doesn't like crazy values much at all :p Again, great work, it's something I overlooked.

Lynx3d
05-08-2004, 09:45 AM
Oh... wait, i think i overlooked a small fact that adds one necessary bit back... there are two possibilities for a vector to have the same length with two given coordinates: positive and negative sign....

Well, nobody's perfect :)
The shader Marvin Landis wrote can actually handle non-normalized vectors (he gave me the code some time ago), but it seems the normal map creator only produces normalized ones, but to be honest i haven't really looked into the creator(s) yet.

I wanted to make the shader usable for bones and morph deformations too, but this really isn't as easy as i thought...

Karmacop
05-08-2004, 10:33 AM
Just wondering, could you add a deformation plugin that'd pass the deformation information to the shader?

Lynx3d
05-08-2004, 11:17 AM
Basically, yes...but i haven't really gotten that far with my considerations.

I'd need the undeformed object too; i am a bit confused about which states you get the object in with the various ways to obtain a meshInfo from LW...
The next problem is, how do i get the transformation (-matrix) for each point... a polygon (triangle) can not only be rotated, it can be sheared too... how exactly would this affect normals? Because you also need some transition from one polygon to the next or you'll have a kink.

I tried to find out how this is done in games etc but i couldn't really find anything useful... only one article about bones that seems to assume that one vertex can only be assigned 100% to one bone, no weight maps or whatever.
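For what it's worth, the standard answer to the shear question (general graphics math, nothing LW-specific) is that normals transform by the transpose of the inverse of the point matrix. For pure rotations the two matrices coincide, which is why the problem only shows up once shear enters. A sketch:

------------------------------------------------------------------------------------
import numpy as np

def transform_normal(n, m):
    # points transform by m, but normals must transform by inv(m)^T
    # or they stop being perpendicular to sheared surfaces
    n2 = np.linalg.inv(m).T @ np.asarray(n, dtype=float)
    return n2 / np.linalg.norm(n2)

shear = np.array([[1.0, 0.5, 0.0],   # shear x by y
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
# the normal of the x = 0 plane tilts to stay perpendicular after the shear:
print(transform_normal((1.0, 0.0, 0.0), shear))  # ~[0.894, -0.447, 0]
------------------------------------------------------------------------------------

Using m itself would leave the normal at (1, 0, 0), which is no longer perpendicular to the sheared surface; blending these matrices smoothly across vertices is exactly the kink problem above.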

jin choung
05-08-2004, 11:31 AM
hey guys,

nice having you aboard lynx3d!

"I assure you that if you make a normal map and a bump map from a model, and then convert the bump map to a noprmal map, you will end up with two very different normal maps," karmacop.

that's not the point. the point is if you generate a DIFFERENCE MAP between hipoly object and lowpoly object and make one a BUMP MAP and the other a NORMAL MAP, and then you apply these images back to your low poly object and RENDER, would the normal map look any better or more accurate?

right now, that question is in doubt because:

1. "Bump maps have a scale factor associated with them, so when this scale factor is small, the resolution of the generated normals can be arbitrarily fine. A sophisticated program may also use fairly wide filter kernels that look at quite a few bump map texels to determine a sampled normal," john carmack.

and the second part about sampling neighboring texels is what marvin landis told me.

2. "The shader Marvin Landis wrote can actually handle non-normalized vecotrs (he gave me the code some time ago), but it seems the normal map create does only produce normalized ones, but to be honest i haven't really looked into the creator(s) yet," lynx3d.

and of course this makes sense because the normal map generator is based off the ATI app that's designed to help create game assets.

so does having normalized vectors in the actual NORMAL MAP produce less precise renders?

and finally,

3. "A program like Maya is certainly generating higher quality normals from grey scale bump maps that a real time game is getting from 24 bit normal maps, but the same techniques could also be applied to normal map filtering if you wrote a long fragment program for it," john carmack.

and so the question is, non-rt renderers do have established and hi quality systems for rendering grey scale bump maps. do the methods that render the normal maps utilize 'long fragment programs' to make it superior in quality?
------------------------------------------------------------------------------------

and finally, john carmack's response has pretty much resolved the issues that i had in any case:

A. that normal maps can describe surfaces that bump maps can't!

this is a big one. it is a misconception among many that because a bump map is 'just height' while a normal map is 'normal vectors' that normal maps are by their very nature superior.

but carmack's response has indicated clearly that this is NOT the case.

it is merely an issue of PRECISION!

B. that the discrepancy between the precision is NOT as wide as 24bit vs. 8bit.

and finally,

C. that this is indeed a 'subtle' issue and not CLEAR CUT. because of many factors, including point '1' in this post, it may in fact turn out that normal maps may (perhaps in most, maybe only in some) in fact NOT be any better than bump maps.

jin

p.s. and lynx3d, one of the things that i was thinking is that if normal maps are not vastly superior to bump maps, why even bother to get normal maps to work with bones and morphs and such?

just use marvin's plugin or ORB (free) to create a bump difference map! that will work perfectly.

Karmacop
05-08-2004, 11:44 AM
Hmm, interesting. By shearing, you mean stretching the texture basically right? So as you stretch the polygon to become bigger, the vectors would become more parallel with the base polygon, and as the polygon was stretched to become smaller, the normals would become more perpendicular to the base polygon. Or did you already know that? .. Or am I wrong? ... But if this is right, I don't know how you'd figure out how to distort the map ....

Actually, I was just thinking, if the polygon is sheared then the normal will change anyway wont it? So wouldn't you need to do nothing? Again, it's late, I should go to bed :p

Karmacop
05-08-2004, 12:14 PM
Normalizing the vectors does not affect the precision at all. It's basically like saying 2 quarters is the same as a half, so just call it a half.

Fragment programs are to OpenGL what pixel shaders are to Direct3D. What John was saying was that maya calculates higher quality normals from the bump map, but you could use the same method with normal maps to produce higher quality normal maps, so it's a bit of a moot point.

Jin, below is a render from lightwave. On the left side is the bump map, and on the right side is the normal map. The top bump map is 100% bump, the bottom bump map is 10000%. Does this prove that the normal map will look better? :p

jin choung
05-08-2004, 03:38 PM
holy moly,

i can only assume that you used some kind of inferior way to generate your bump map image.

i just used marvin landis' exact same normal map plugin, opted to save out a 'displacement map' and then just used that resulting image as a bump map.

voila.

i don't know why i'm getting the faceting but it's probably some switch somewhere that will control that.

in any case, i think the point is made.

i'm not challenging anymore that a normal map does not contain ANY more accuracy. it may. but as i said, it's not really a question between 24 v 8 bits.

and i'm also saying that they can DESCRIBE EXACTLY THE SAME DETAILS (i.e. it's not like you can describe a surface with normal maps that a bump map can't... and that is an important myth that i want to put to bed)... it is JUST a matter of accuracy. and i believe carmack says the same.

and as you can see, even when it comes to accuracy, it IS INDEED A SUBTLE MATTER. not clear cut.

is there any part of THIS post that you disagree with? if not, let's let it go finally eh?

jin

jin choung
05-08-2004, 04:21 PM
oh, but it should be noted that the above image is probably using an HDR image... marvin's plugin makes a flx and some other hdr image, so i'll try to make an 8bit one later. gotta get out of the house for a while.

jin

p.s. yah, it's a 96 bit image... 32bits per channel.

Karmacop
05-08-2004, 05:25 PM
You get facets because they are there, there's no way to smooth that out except for making more geometry.

So yeah, your 96 bit texture is better than my 24 bit texture, what's your point? :p

Do you have hdr shop or something? Maybe you could find a way to convert it down to 72 bits (I don't know if that'd work though).

EDIT: Oh and Jin, are you just using 100% for your bump map, or are you using something big like I did?

Karmacop
05-08-2004, 10:51 PM
Ok, left side is an 8bit bump map, right side is a 7 bit normal map. They are both ugly, but I'd prefer the normal map ;)

jin choung
05-08-2004, 11:02 PM
yup,

the normal map still looks a hell of a lot better.

as for my example, yah, i still had to bump up the percentage to several hundred percent and i amped up the texture amplitude too.

it looks ugly as hell with a downconverted 8bit image but it scaled pretty gracefully with the hdri.

but i only ended up using the hdri image in the first place because it's not possible to create an 8bit image directly from marvin landis' plugin and i think that the downconversion may play a part in generating the artifacts....

i'd also like to see if it looks any better in maya or rhino and then try ORB to create the 8bit bump directly (which it can do) and see how that looks.

so yes, the normal map looks a hell of a lot better right now in lw. but it's still true that it's not 24 v. 8, normal maps don't describe things that bumps can't and in terms of the underlying math, it is nuanced and not clear cut... more experimentation to come.

jin

Karmacop
05-09-2004, 01:29 AM
I didn't even know you could use hdri images for bump maps; it looks like a good alternative. You won't have any less quality downsampling to 8-bit compared to creating an 8bit texture straight from the model, so you shouldn't worry about that.

Yep, same here, I'd like to see if maya or any other program handles bump maps differently to Lightwave too. The way lightwave is interpreting the bump map looks very weird ... at high bump values anyway.

I think basically, normal maps are more precise for the normal of the pixel, and bump maps are more precise for the height of the pixel. I guess we should wait to see how other renderers handle bump maps though.

jin choung
05-09-2004, 01:43 PM
well,

haven't tried ORB yet but i've tried using the downsampled bump map in maya and rhino... rhino offers no amplitude control at all so it just looks like the bumped disk in your first example.

maya looks similar to lw (although i got better results than the really noisy example you got in lw... but not by much). in maya, you can really filter the hell out of it but it just looks like 'smoother artifacts'!

yah, i didn't know you could use hdri for bumps either, but if you can't get rid of the facets, that's not ideal either. that's weird though: if you generate normal maps, it respects whether your hires model used Gouraud smoothing or not, but if you make an hdri bump map, it doesn't.

anyway, i personally think that the 'height and normal' is essentially the same thing but it is undeniable that the normal map offers far superior results, with no pain.

jin

Karmacop
05-09-2004, 06:16 PM
Jin, it doesn't smooth the height map because that is the height of the pixel. There's no way to smooth that except for subdividing. Normal mapping records the angle of the pixel though, so if you use Gouraud shading the angle does get smoothed out.

Ztreem
01-15-2005, 05:38 PM
Sub Pixel Displacement is something I want in LW for sure. Just did a reply on this old thread so it stays alive. Maybe they can include it in the next update. :D

Lynx3d
01-15-2005, 06:16 PM
Well i know someone who wrote some nice sub-polygon displacement stuff, but some other company kinda cheated him... he already asked me if there was some major package left lacking this feature :D

Ramon
11-23-2005, 05:28 PM
Well, there you go! :)

erikals
11-23-2005, 07:32 PM
Isn't this somewhat the same?
http://forums.cgsociety.org/showthread.php?postid=1961492#poststop
(note, bugs in 8.3, works in 8.2/8.5)

BazC
11-24-2005, 03:28 AM
Holy poo! That looks amazing, I hope that development continues at some point, even with LW9's adaptive subdivision this would be really useful! - Baz

Dodgy
11-24-2005, 03:46 AM
Oops! Found the instructions :)

erikals
11-24-2005, 06:01 AM
Yeah, instructions are a bit "work-aroundish", here's a super-fast "get started" I made earlier.
http://home.no.net/erikals/cgtemp/AVT.jpg

Also interesting, using the same plug
http://www.spinquad.com/forums/showthread.php?t=8515

Dodgy
11-24-2005, 06:48 AM
Also interesting, using the same plug
http://www.spinquad.com/forums/showthread.php?t=8515

<wolf whistle>
Now that is very very nice :) This seems to be a very very important tool :)
Be nice if NT included it alongside the new adaptive displacement, making it more straightforward to use...

erikals
11-24-2005, 07:00 AM
yeah, it's quite cool :)
Never found out if they fixed the "popping" problem though, shown in this video.
http://www.erikalstad.com/cgtemp/Test3.avi (by Greg Malick)
not sure if that was the fix Gerardo was referring to. Some tests will show, I guess.
Note that it's a pixel filter, so not sure if this qualifies as "true sub-pixel displacement"

Very interesting thread, worth a read ;)

Dodgy
11-24-2005, 07:09 AM
But it's good enough to break up a profile of an object to help fool the eye :)
Shame it doesn't keep the shader on when you save the object :P Or has it been developed further since I last got it?

erikals
11-24-2005, 07:21 AM
Very true, also maybe there are cheats for the popping.

Nope :) It still doesn't save.
Very happy someone made this, great for fast displacement...

Editing the terrain where the "pop" occurs might solve some of it, as it seems like the "pop" occurs when new polys are "revealed".

Dodgy
11-24-2005, 09:21 AM
Yeah, since it seems to be basically a distortion effect based on the face angles and bump map, having a poly appear would make it pop for large displacements on very curved surfaces. Probably better to get LW to do some of the displacement and add that plugin to break up the edge...