View Full Version : Rural Landscaping with LightWave



John Geelan
01-14-2009, 07:29 AM
I am completely new to LightWave and 3D graphics. I purchased LightWave the week before Christmas and am now one quarter of the way into Dan Ablan's book, "Inside LightWave" - that's how new!
My interest during the last twenty years has been rural landscape photography. With LightWave, I hope to produce photorealistic prints based, wholly or in part, on my photography. I envisage modelling rural architecture, farming tools and traditional features of the rural landscape which can be reused when needed. I don't have a need for animation.
As I know what it is I want to do, I'm looking for any advice as to how I might go about it, i.e. a general overview of the most relevant LightWave tools, external plugins, other software etc. LightWave is a pretty deep program and I have yet to come across anybody who uses it to specialize in terrain modelling or rural landscaping. It seems to me that Vue satisfies most artists' needs in this regard. My limited experience of Vue suggests it is not suitable for an accurate reproduction of the flora and fauna of my local landscapes.
Whether I am being overly ambitious or not, I don't know - yet, every time I fire up LightWave, I have the distinct impression that everything is possible.

bluerider
01-14-2009, 09:40 AM
Hello John,
Have you got any examples of your photography to show? Make a thread in the Sketch of the Day section and post some in there. Would love to see them.

monovich
01-14-2009, 10:04 AM
would love to see some examples also. I've got a few thoughts on the subject, but I'm not sure what your end-game is.

bluerider
01-14-2009, 10:30 AM
would love to see some examples also. I've got a few thoughts on the subject, but I'm not sure what your end-game is.

monovich, you make awesome art with LightWave - can't wait to read your thoughts on this subject.

Exception
01-14-2009, 08:55 PM
There are definitely better landscaping tools out there, like Bryce and Vue; however, these are no match for LW at all in terms of render quality, options and modeling anything besides terrain. You might want to investigate a pipeline where you model your terrain in one program and then use it in LW to fill it up with vegetation (the VeggiPaint plugin is handy here, in conjunction with the plugin HD Instance), objects and so on.

You might want to check out Terragen. AFAIK there's a free version and it gives Vue and Bryce a run for their money on many features.

archijam
01-15-2009, 01:26 AM
Just to add to exception's post, it sounds like you need some good tree models. Google Xfrog and Onyx (or Cornucopia for Vue) and check your photo library for recurring species and ages of trees.

Post a few images and we can give more tips, regarding skies, lighting etc.

JBT27
01-15-2009, 02:32 AM
We've been round the houses on this one, buying this and that, but always ended back with Lightwave, purely for the very precise control, the texturing and the rendering.

We are, oddly enough, about to step more into this area as well but we are deciding from the outset not to massively step outside of LW for the whole job.

For terrain, look at spline modelling, especially where you need to have roads and maybe earth-banks - clean edge-loop modelling applies as much to this as it does to face modelling.

HD-Instance is a must, though if you are not in a hurry, I might hang on and see if LWX is going to offer anything in the way of instancing.....that said, no-one knows when we're going to see any of this, or how well it will work, so be cautious with buying plugins, but HD-Instance needs to be on the must-have list.

Onyx Tree is great, and they just added Grasses to their tool arsenal - I'd recommend that entire suite of tools, though it will set you back $595 now.

http://www.onyxtree.com/index.html

While you are looking into LW and how to get started with all this, I'd start looking into compositing as well - even though you are doing largely illustration, the same art and tech applies.

Julian.

John Geelan
01-15-2009, 09:33 AM
Sincere thanks to all who have made replies - your advice and comments have been invaluable.
Having spent a few hours working on your responses, I have narrowed my choices to the following;

For terrain modelling:
Bryce.
Terragen.

For vegetation:
Onyx

For rapid population of a terrain:
HDInstance.


While Bryce has an eccentric interface, IMHO, it appears to be a successfully established terrain modeller. I dismissed Vue because I have found it to be problematic and not as accurate as I would like for vegetation.

Though Terragen 2 is still in development, version 0.9 received much praise from experienced users. Rendered images from v 0.9 can only be saved in a bmp format - what consequences this limitation has, if any, for subsequent manipulation in LightWave, I have no idea.

Onyx turned out to be a very pleasant surprise - many thanks for the link, Julian! It seems to be exactly what I was looking for.

HDInstance is, I can now appreciate, absolutely essential.

Before I use my credit card, I wonder if there are any issues running the above programs with LightWave? From what I gather, there can be huge problems importing/exporting 3D geometry across applications.

All comments welcome!

cresshead
01-15-2009, 09:49 AM
There was a free version of Bryce (version 5) ... try that first off.

MooseDog
01-15-2009, 09:50 AM
From what I gather, there can be huge problems importing/exporting 3D geometry across applications.

All comments welcome!

absolutely correct, and incorrect :). if you're very clear about what you want your end-result to be, it makes your "pipeline" or workflow decisions a bit easier.

for instance, images of natural grandeur would require a different approach than intimate shots of natural beauty.

JBT27
01-15-2009, 12:30 PM
Although I am no veteran with Onyx, I will confirm that my first uses of it - which are essentially: make a tree, export it to LW object format, open it in Modeler, reduce polys (when necessary), copy points for leaves if you don't want to use theirs - worked very well indeed. So you can rest assured on that one. There are, as well, other threads from recent months where you will find LW artists supporting their choice of Onyx. Xfrog is another valid choice, but you will find that Onyx is actually developed by a programmer and a botanist (you've probably seen that already), and to be honest the specific species I have made so far look very real, in detail and in profile, and I am very sorry I didn't buy into this a few years ago.

HDInstance is a must - you'll be fine buying that as well.

But, I would, as others have said, investigate the cheap and free terrain options first, because I reckon sooner or later when you start looking at the terrain shapes you need and you practice those within Modeler, you may realise LW is all you need.

One thing maybe, is look at 3DCoat, for sculpting:

http://www.3d-coat.com/

I have Modo, but largely only use it for edge-cleanup and its sculpting - that's a lot of money for just that, and if it were me now, I'd look at 3D-Coat.

Julian.

John Geelan
01-16-2009, 12:38 PM
3D-Coat is indeed impressive!
No less so is its lead programmer, Andrew Shpagin who, from all accounts, is energetically dedicated to the perfection of 3D-Coat.
Unlike 3D-Coat's more expensive competitor, ZBrush, 3D-Coat is designed with LightWave firmly in mind.
I don't know how they compare on a technical level because the technical comparisons I've come across are way over my head.
User feedback suggests, however, if Andrew continues development at his present rate, 3D-Coat will, at least, equal the competition in a very short time.
I downloaded a 15-day trial last night. It is so intuitive, in comparison to a ZBrush demo I have, that I modelled a terrain almost by accident.
For the reasons given above, the question uppermost in my mind is not whether to buy 3D-Coat but how I can refuse - thanks again for the pointer, Julian!

I've decided not to use either Terragen or Bryce for terrain modelling. What's the point when LightWave can do it? Apart from the learning curve demanded by each application, the more links there are in the chain from creation to print, the greater the probability of compatibility issues with each software upgrade etc.

I purchased OnyxGarden Suite - received the activation files an hour or so ago. A few minutes looking at the detail and resolution of trees and grasses confirms what a worthwhile purchase it is.

Just two questions regarding LightWave:

1) Colour Management - I always profile my monitors, printer and paper and use a colour space when using Photoshop.
LightWave doesn't appear to support colour management. Yet, the metadata readout from the render window suggests there might have been an intention to implement it. For example, it lists:

White Point: 1
Gamma: 1
Color Space: uncalibrated

Without colour management, a linear gamma and an uncalibrated colour space, how can a LightWave user determine when colours are out of gamut, or trust in having colour consistency across different monitors and systems? For me, it raises the problem of ensuring accurate colour output from a printer.

2) Resolution for printing - What is the workflow when rendering for print graphics? The resolution of a render is pretty small, i.e. 640 x 480 - and, for argument's sake, let's say 1ppi = 1dpi - therefore much too small for a graphic intended for printing at, say, 17in by 20in at 300dpi. Yet, I see no way of determining a printing resolution before, during or after rendering. I read somewhere that LightWave is capable of resolutions up to 16000 x 16000 (ppi, I presume).
Now, I'm obviously missing something - I'm sure users don't use massive interpolation for print graphics. I suspect the answer lies somewhere in the conversion from vector graphics to raster graphics. I haven't had the time to read up on it, but the question keeps coming back to bug me all the same.

Looking forward to any comments.

JBT27
01-16-2009, 01:00 PM
Glad you're pinning down the terrain creation options!

I may buy 3D Coat as well, even having Modo, partly because it will be cheaper to maintain 3D Coat, and the vast majority of my core work is in LW.

OK. I'm not big on the tech of colour management, but first for your print resolution questions.

Generally, for print, publishing for magazines and books, you work at 300dpi, so an image 10 inches across will be 3,000 pixels on that side. That's a very simple calculation. What we do is create some custom camera sizes for these and keep them in the configs along with the film and video formats. We have A4 at 300dpi, A3 and some others. Just handy things to quicken things up a little at setup.

We recently did some images to be printed at 15ft across - they were done at 150dpi, partly because of file management, but also because it was all that was needed for the size and greater viewing distance.

Colour management is something others will have to comment on. Much as it's not universally applauded, we recently hit a similar dilemma, and looked to a mid-price hardware solution. At the risk of spending more of your money :D, take a look at the Spyder3 Elite:

http://www.datacolor.com/

and Datacolor's products in general, as well their tutorial videos and docs. We have found this device invaluable for colour matching, and are getting far better and more consistent colour than ever.

Julian.

JBT27
01-16-2009, 01:06 PM
Hang on - just re-read your post re. resolution.

You're actually asking about how to set the pixel size of an image, which is what the camera renders? That's simply typing values for Width and Height in the Camera Properties, and for print keeping your Aspect Ratio at 1.0.

Is this what you mean?

My comment about adding your own presets is still valid here, it just saves you typing values each time if you work with consistent image sizes, like A3.

Julian.

JBT27
01-16-2009, 01:21 PM
Just to hammer it home, taking your 17" x 20" print at 300dpi.

That's 5,100 x 6,000 pixels (17 x 300 and 20 x 300). That's what you type into the Camera Properties Width and Height fields, respectively.

You will know the size you are working to before you begin, like buying a canvas for an oil painting.

Once you bring the finished render into Photoshop, you can change the dpi in the Image Size dialog in Photoshop, or whatever you use. Providing you don't have Photoshop resample the image size, you will see the pixel count remain at 5,100 x 6,000 pixels. Clearly, dividing these values by 300dpi will give you the 17" x 20" size that you want. Divide them by 150dpi and you'll get a 34" x 40" print, though the pixel count will remain the same.
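
If it helps to see the arithmetic written out, here's a tiny Python sketch of the same calculation - nothing LightWave-specific, and the sizes and dpi are just the ones from this example:

    # Turn a target print size and dpi into the pixel dimensions to type
    # into the Camera Properties Width/Height fields.
    def print_to_pixels(width_in, height_in, dpi):
        return int(round(width_in * dpi)), int(round(height_in * dpi))

    print(print_to_pixels(17, 20, 300))   # -> (5100, 6000)
    print(print_to_pixels(17, 20, 150))   # -> (2550, 3000)

    # And the reverse: the same 5,100 x 6,000 pixels printed at 150dpi
    # gives the 34" x 40" print mentioned above.
    print(5100 / 150, 6000 / 150)         # -> 34.0 40.0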

Julian.

John Geelan
01-16-2009, 01:25 PM
Ah! Camera resolution! Now that makes sense! Thanks a million!
Regarding colour management - I have an i1 spectrophotometer and software - couldn't live without them.
My concern is how to implement colour management in LightWave?
Is this what you have achieved using Spyder3 and Datacolor?
If so, I'm curious to know how, because without colour management in LightWave there is no way to be sure the colours we give to our graphics will be accurately reproduced either in print or on other monitors or systems.

John Geelan
01-16-2009, 01:31 PM
It's my standard practice when using my own cameras - just never occurred to me the same applied to the virtual camera - thanks again, Julian!

JBT27
01-16-2009, 01:41 PM
Well, as I mentioned, colour management is not my big thing and I might not be fully understanding what you mean (though I think I am :)), but as far as I see it, LW works to, well I believe 128 bit colour (?? - anyone :question:) which is resampled according to what file format you use.....thin-ice here.....might need a pinch of salt until someone else chimes in :D

As far as I understand, what you are asking is done in comp, or edit, or in Photoshop for print.....isn't it?

Our Spyder3, I suspect, does the same job your spectrophotometer does - calibrates a monitor to your working colour space, and allows you to match all your monitors and indeed print output.

??? Sorry I'm being vague here.....

Julian.

JBT27
01-16-2009, 01:42 PM
It's my standard practice when using my own cameras - just never occurred to me the same applied to the virtual camera - thanks again, Julian!

Yes, almost everything you do and see in your photographic reality is mimicked closely in LW and the rest of your pipeline, right down to depth-of-field and motion-blur with the newer cameras.....quality stuff!

Julian.

John Geelan
01-16-2009, 02:10 PM
Essentially what is required is that LightWave would have a device-independent colour engine, just as Photoshop has, i.e. Adobe (ACE). LightWave should also allow the user to select a colour space to work in (device dependent). With a monitor profile in place, the colours you then use in your graphics are remapped by the device-independent colour engine from the colour space in which they were created to your monitor, according to the behaviour of your monitor as described by its monitor profile.
In other words, the device-independent colour engine knows how much the shade of red, say, you used from the colour space when creating the graphic must be altered to display accurately on the monitor. And it knows this because that is the purpose of the monitor profile - to describe the behaviour of your monitor.
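
To make that remapping concrete, here's a rough sketch in Python - purely an illustration of the idea, not how any real colour engine is implemented. The sRGB-to-XYZ matrix is the standard published one; the "monitor" matrix below is a made-up placeholder standing in for whatever the profiling device measured (a real ICC profile also carries tone curves, which I've left out):

    import numpy as np

    # Standard sRGB (D65) -> XYZ matrix - this part is published and fixed.
    SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                            [0.2126, 0.7152, 0.0722],
                            [0.0193, 0.1192, 0.9505]])

    # Hypothetical monitor primaries, as a profiling device might measure them.
    MONITOR_TO_XYZ = SRGB_TO_XYZ * 0.98   # pretend the monitor is slightly off sRGB

    def working_to_monitor(rgb_linear):
        xyz = SRGB_TO_XYZ @ rgb_linear              # into the device-independent hub
        return np.linalg.inv(MONITOR_TO_XYZ) @ xyz  # remapped for this particular monitor

    print(working_to_monitor(np.array([0.5, 0.2, 0.1])))
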
Hope this makes sense - volumes have been written on colour management.
As a test, I sometimes profile the lowest quality of paper, then print the same print on the same paper, before and after profiling - the difference is like that of night and day.

JBT27
01-16-2009, 03:34 PM
Yes, that makes sense, and I think I generally had that in mind - just rubbish at explaining that for some reason.

What I don't get, and this is where I might be missing the point, is that although I understand the nature and need for colour spaces, surely LW is operating with some kind of colour space, hence my point about the 128 bit colour, internally. With that depth, colour space becomes irrelevant at render stage, providing you preserve that level of data out into the pipeline, via OpenEXR, say. Adjustment for target devices and media comes further down the line.....doesn't it?

Assessment of images during production in LW is done on calibrated monitors, which you and we use - that calibration is based on the limits of that particular monitor, clearly, in order to display as neutral an image as possible, which is what LW is outputting anyway.

As I say, adjustment for specific colour space is done further down the pipeline.

Thinking back to my Maya days, which were years ago, I don't recall that having options for colour space either.....could be wrong though.

I'm probably making a monumental twit of myself to anyone who knows this stuff backwards :D

Julian.

John Geelan
01-16-2009, 05:43 PM
The concept of 128 bit colour cannot have any practical meaning in terms of visible colour, as far as I can see.:D
Take, for example, 8 bit colour. This means the maximum number of colour values it can represent is 2 to the power of 8, i.e. 2 multiplied by itself 8 times, which is 256.
16 bit colour = 2 to the power of 16, which equals 65,536 possible values.
Let's take 32 bit. 2 to the power of 32 equals 4,294,967,296 possible values. Over 4 billion!
For interest's sake, let's work it out for 128 bit - the number of possible values is 2 to the power of 128, which equals:
340282366920938463463374607431768211456
Well, we know human vision cannot see all of the colour values of 32 bit - and so certainly not 128 bit.
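
For interest's sake, a throwaway bit of Python reproduces those numbers (treating each figure as the depth of a single channel):

    # Number of distinct values an n-bit channel can represent: 2 ** n
    for bits in (8, 12, 16, 32, 128):
        print(bits, 2 ** bits)

    # 8    256
    # 12   4096        (the camera RAW depth I mention below)
    # 16   65536
    # 32   4294967296
    # 128  340282366920938463463374607431768211456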

This is where I'm inclined to agree with you that with 128 bit depth, colour spaces may become irrelevant.
Perhaps working in LightWave is equivalent to working with the RAW format in digital cameras?
I shoot in RAW format all the time because the only three factors which affect the shot are Shutter Speed, Aperture and ISO.
There are no colour space, contrast or brightness settings, or non-linear gamma values imposed by the camera to interpret the shot.
When I open the shot in Adobe's Camera RAW, it is then MY choice as to what colour space, gamma settings, white balance etc. I want to use.
I think my cameras shoot at a bit depth of 12, which equals 4,096 recorded values per channel.
Given the small colour gamuts of monitors, printers and printing paper, 4,096 values per channel is already more than the hardware can handle.
This would also explain the metadata listing from the render window in LightWave ie,

White Point: 1
Gamma: 1
Color Space: uncalibrated

If our speculations are correct, then we are, in effect, importing a RAW file from LightWave into Photoshop. The choice of a colour space and the other necessary settings allow us to impose our own interpretation on the imported object. It looks to me then as if it would be quite easy to setup a standard workflow for importing from LightWave. Of course, this workflow would require a profiled monitor, and a profiled printer if the goal is print graphics.
There is a lot to think about and experiment with. I must have a go this week. Many thanks for your thoughts. Without them I'd still be in the dark!


I'm probably making a monumental twit of myself to anyone who knows this stuff backwards
I heard once how Thomas Edison had made 3000 experiments before successfully inventing the lightbulb. A young reporter asked him what he thought of his many failures. Edison looked at him with genuine surprise and retorted - Not only did I discover the one way the electric bulb will work, but I also discovered 2999 ways in which it won't work!

The thrill of enquiry eventually leads to certain knowledge!:thumbsup:

Matt
01-16-2009, 10:36 PM
I have to say, for someone so new to LightWave, you seem to be grasping this all very well! I know I wasn't so savvy when I first started!

Matt
01-16-2009, 10:41 PM
2) Resolution for printing - What is the workflow when rendering for print graphics?

I've written a script for setting up the camera dimensions for print, if you need any help installing it, let me know.

Here's the script:
http://www.newtek.com/forums/showpost.php?p=788244&postcount=16

Here's a video showing it in action:
http://www.creactive-design.co.uk/lightwave/lscripts/DPI_Camera.zip

Hope you find it useful!

Cheers
Matt

JBT27
01-17-2009, 03:18 AM
The concept of 128 bit colour cannot have any practical meaning in terms of visible colour, as far as I can see.:D
Take,for example, 8 bit colour. This means the maximum number of colour values it can represent is 2 to the power of 8 ie, 2 by itself 8 times which is 256. <snip>

That all makes sense and is pretty much how I was interpreting it - the only reason for color spaces is to manage the limits of devices and media and the human eye/brain.

By starting out with more colour data than you can shake a stick at, by default, every potential colour space invented (and many still to come) is available in there.

I suppose the upshot of all that is, at least while generating imagery in LW, don't worry about colour space! :D

Good luck with the tests!

Julian.

MooseDog
01-17-2009, 07:58 AM
...volumes have been written on colour management...

and here's some more, but specifically with lightwave in mind. there are no built-in color management tools in lightwave. actually, afaik, none of the major 3d packages possess this ability.

sadly, you'll need to fight your way through some bickering, but read closely the posts by gerardstrata. an expert!

http://www.newtek.com/forums/showthread.php?t=79047&p=652370

as a starting point though, don't confuse a floating point workspace with color management. seems like you're into this stuff, so i would highly recommend this book, written by an expert lightwave user, which deals with all aspects of floating point/high dynamic range imaging:

HDRI Handbook (http://www.amazon.com/HDRI-Handbook-Dynamic-Imaging-Photographers/dp/1933952059/ref=sr_1_7?ie=UTF8&s=books&qid=1232203662&sr=1-7)

JBT27
01-17-2009, 01:02 PM
This is interesting.....just read that thread (and the bickering), downloaded the tools and read through the wiki. I do have Bloch's 'The HDRI Handbook', which yes, is well worth the price.

I cringe a little at it all, it being something I haven't delved into in a big way; I worry about the time it's going to take to understand and implement, and, given that we haven't had a major hardware upgrade for a while, especially monitors, I wonder at the worth of it anyway until we do.....that may not be all that relevant, though.

I guess I could say that we go so far and that's been fine and why should we bother changing that? But it seems to me, after reading all that, that it is something that should be looked at, understood and implemented.

So much learning and tech versus just the want of making pictures - I wonder how anything gets done sometimes :D

Julian.

John Geelan
01-17-2009, 04:47 PM
Matt
For my purposes you have hit the nail right on the head. This is going to be very valuable to me. Excellent work!
I had come across references to LScripts but dismissed them as being too advanced for me to explore. You've changed my mind!
Before I call on your help, I'm going to have a go at installing your .ls and .png file and experiment a little. Your movie is a model of clarity. I'll be in touch later.

MooseDog

sadly, you'll need to fight your way through some bickering
I agree! What a shame it is to have to witness such a useful thread being undermined by downright nonsense!
Before your post I had been trawling through reams of info on other sites re colour management and gamma correction. I haven't had time to digest all of it but I'm certainly winded by the discovery.
I wonder would you please help with some clarification regarding the differences between 2D and 3D colour management and the issues involved? What I'm looking for is a precise definition of the problem.

Let's start with 2D colour management.
Working with a colour managed Photoshop workflow is a non-linear gamma workflow. Gamma encoding, a white point value, a defined neutral grey, a device-independent colour engine and monitor and printer profiles are all in place. All of this ensures consistent colour across any system using colour management.

3D colour management.
Am I correct in thinking that 3D colour management is, by nature of its design, intended to follow a linear gamma workflow?
If yes, are the complications which arise in implementing such a colour-managed workflow due to the various technologies in a 3D application such as LightWave? I'm thinking here of features such as reflection, refraction, radiosity, caustics, procedural textures and raster textures. I gather some of those are processed in LightWave as linear gamma, while others are processed as non-linear gamma.
Is this the nub of the problem then - the problem of making linear those non-linear gamma processes so as to ensure a perfect, linear gamma workflow?
Aside from the software problem, a further problem seems to be the non-linear nature of the monitor and how to take it into account in a linear workflow.


don't confuse a floating point workspace with color management.

I don't understand the distinction you draw between "a floating-point workspace and colour management".
As I understand it, the term "floating-point" is a programming term which signifies large integer and large decimal values. Given the 128 bit colour depth referred to by Julian and the colossal value a 128 bit depth represents, I can see the relevance of the term.
So is it being used as a synonym for linear gamma or has it another significance?

Hope you can help with clarifying some of those questions. With a bit of luck, Gerardo, from the thread link you supplied, might discover our thread and lend a helping hand!


Julian
Looks like we hadn't solved it after all.:confused:
If MooseDog or somebody can supply the clarification we need we should be able to go forward from there. If it is only the non-linear gamma processes of LightWave and the non-linear nature of the monitor which are the complications, then the problem will have been defined and easily understandable. From what I've read, a linear workflow based on Sebastian Goetsch's SG-CCTools will be all that is required.
Had it not been for the complications due to the mixture of linear and non-linear gamma processes within LightWave, I think the parallel between RAW format and a saved image from LightWave might have held.
Whatever you do, don't despair! We have a saying here - a good workman doesn't blame his tools. Think of achieving a linear workflow as an enhancement to what you are now doing well, not as an alternative to it or a replacement for it. In fact, developing a linear workflow and understanding it should prolong the usefulness of your hardware, in that it will be used more efficiently.:thumbsup:


it is something that should be looked at, understood and implemented.

Agreed! I won't be letting go until then either!:dito:

MooseDog
01-17-2009, 06:18 PM
i've taken the liberty of attaching a couple of pages from the book i referenced earlier.

the author, christian bloch is a semi-regular through here, a well-known lightwave artist, and all-around talented dude, so if i have inappropriately abused his copyright, christian pls tell me and i'll yank the images! i just have the impression that where john geelan is coming from and where your book is going are a natural fit.

a floating point image is really no more than an image "file format". 8-bits, 16-bits, 32-bits are uber-important to computers and applications first. lightwave was one of, if not the first 3d application to work within a floating point image environment. and by image i mean mainly outputted render, but also the ability to accept and utilize hdr images as sources.

to the best of my knowledge, the under-the-hood magic of a lightwave rendering is not linear (which is to say pre-designed to accommodate all kindsa different viewing mediums (tv, monitor, screen, print, all from the same outputted render) ), but "merely" floating point. hence gerardo's efforts to teach us about a linear workflow and the internal and external tools needed.

i hope this helps, i have the impression you have it mostly sussed. keep this in mind: a rendered image is only as valuable as the format you choose to save it in. .hdr or .exr are two representative floating point examples, while .jpg, .tga are old-school, 8-bit images.
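
a tiny illustration of the point (this is just numpy for the sake of the example, not lightwave's own file-writing code): save the render as 8-bit and the data gets clipped and quantised; save it as floating point and it survives.

    import numpy as np

    # a few rendered pixel values, including a highlight brighter than 1.0
    render = np.array([0.02, 0.5, 1.0, 3.7], dtype=np.float32)

    # old-school 8-bit save (.jpg / .tga style): clip to 0..1, squash into 256 steps
    eight_bit = np.round(np.clip(render, 0.0, 1.0) * 255).astype(np.uint8)
    print(eight_bit)                  # [  5 128 255 255]  <- the highlight is gone

    # floating point save (.hdr / .exr style): the original values remain intact
    print(render)                     # [0.02 0.5  1.   3.7]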

caveat lecteur: the above is only mildly informed!

John Geelan
01-17-2009, 06:49 PM
MooseDog - Many thanks! If in contact with Christian, tell him your efforts have sold another copy.
The comparison of HDR with RAW is fascinating. Though I could sense a parallel of some sort, I just didn't have the working details. I see now the meaning of floating-point as applied to HDR.
Many questions remain unanswered and it's going to take a while to sort it all out.
One point though - it appears the virtual camera in LightWave works with a linear gamma. This leads to major complications for the image presented on a non-linear monitor.
A complete clarification of the problem has got to be my first stop!

Larry_g1s
01-17-2009, 09:49 PM
I've written a script for setting up the camera dimensions for print, if you need any help installing it, let me know.

Here's the script:
http://www.newtek.com/forums/showpost.php?p=788244&postcount=16

Here's a video showing it in action:
http://www.creactive-design.co.uk/lightwave/lscripts/DPI_Camera.zip

Hope you find it useful!

Cheers
Matt

Matt ... very cool. Thanks.

P.s. What color settings are you using to get LW to look like that, if you don't mind me asking?

JBT27
01-18-2009, 07:22 AM
Julian
Looks like we hadn't solved it after all.:confused:
If MooseDog or somebody can supply the clarification we need we should be able to go forward from there. If it is only the non-linear gamma processes of LightWave and the non-linear nature of the monitor which are the complications, then the problem will have been defined and easily understandable. From what I've read, a linear workflow based on Sebastian Goetsch's SG-CCTools will be all that is required.
Had it not been for the complications due to the mixture of linear and non-linear gamma processes within LightWave, I think the parallel between RAW format and a saved image from LightWave might have held.
Whatever you do, don't despair! We have a saying here - a good workman doesn't blame his tools. Think of achieving a linear workflow as an enhancement to what you are now doing well, not as an alternative to it or a replacement for it. In fact, developing a linear workflow and understanding it should prolong the usefulness of your hardware, in that it will be used more efficiently.:thumbsup:

Agreed! I won't be letting go until then either!:dito:

I think I need to go hide in a corner and absorb and play with all this :)

As we're all spending your money for you, John :D, you might also take a look at this book:

http://www.amazon.co.uk/Digital-Compositing-Visual-Effects-Animation/dp/024080760X/ref=sr_1_1?ie=UTF8&s=books&qid=1232278783&sr=8-1

Invaluable beyond understanding colour and gamma etc., for its compositing breakdowns and why you do something such and such a way.

Now I have the quandary of whether to buy those HDRI back issues and see what all this is about in practical terms.....or, seriously, can this be found in more detail in a book somewhere? Also have to be blunt about the age of our hardware and software at this point, particularly still running the Adobe CS range - but then that's tool-blaming :D

Julian.

JBT27
01-18-2009, 07:41 AM
I've written a script for setting up the camera dimensions for print, if you need any help installing it, let me know.

Here's the script:
http://www.newtek.com/forums/showpost.php?p=788244&postcount=16

Here's a video showing it in action:
http://www.creactive-design.co.uk/lightwave/lscripts/DPI_Camera.zip

Hope you find it useful!

Cheers
Matt

Forgot to say, in my ramblings, that this is very useful - thanks very much :thumbsup:

I never had a problem figuring this by hand, but it's always nice to have a push-button way.....we are using computers after all :D

Julian.

John Geelan
01-20-2009, 05:05 AM
As we're all spending your money for you ...

I know ... and it's getting serious ... I'm beginning to have a vision of myself as a well-informed down-and-out in some early-morning soup kitchen, staring at the steam rising from the soup saucepans and the sweep of moving shadows across the walls from the strengthening sun, with my only concern being how all of it could be reproduced in LightWave ... in a linear workflow, of course!:D

Perhaps I'm wrong, but the impression I'm getting is that few either recognise or are interested in the problem!


Now I have the quandary of whether to buy those HDRI back issues and see what all this is about in practical terms.....or, seriously, can this be found in more detail in a book somewhere?

:dito:So far unable to find any reference to alternative sources!

JBT27
01-20-2009, 04:13 PM
Agreed! Having been there with various things over the years, related to this job and the whole issue of working on your own and largely having to figure it out yourself, I mightily sympathise :)

You may be right on the recognition thing, or it might be that it has no major bearing on many people's work, or that further still, many are operating a different solution to arrive at the same end result.

I will concede that although I get your intentions about colour directly in LW, I had been working on the basis that I extract as much data as possible out of my scenes, via the appropriate format, and by utilising multi-pass rendering, and then working with all those bits later in comp. Either Photoshop or After Effects.

What this thread has done is pique my interest in this, as much for the fact that I may be missing out on an optimum workflow. But my immediate response, on the back of that, was to start thinking that I'd got it all wrong and I should make every effort to understand Gerardo's methodology, and 'buy' into that, because he knows this stuff and has developed those workflows.

But that's countered by the curious fact, as someone confirmed, that most if not all mainstream 3D apps do not offer this level of colour control, but work internally at very high bit-depths and allow you to save as much of that data as you need, for clean comp and grading down the line.

I haven't found anything else in print either on the specific workflows. So I guess it's back-issues of HDRI, or go down another route. Aside from this issue, I have thought more than once of getting HDRI, even specific issues, but nowhere around here stocks it or, as far as I can tell, has even heard of it.

Just as an academic point, I wish sometimes that these tech mags would switch to electronic publishing, which surely would slash their costs, and make it far more accessible to far more people. I don't want the Max articles, or the ZBrush, or XSI, and I don't want yet more paper magazines on my shelves that ultimately I will have to go through, tear out the bits I want, file those between more bits of paper, then throw the remainder out.

If the articles are for fun and for free, and the publishers are in it because they want to head-up a quality cgi mag, but not for profit, why not go electronic? Which in turn allows them to separate off, as extracts, individual articles. Happy to pay for all and any of that.

Not bickering and not flaming, just a comment on the accessibility of what are effectively relatively short-life tech/info articles.

.....or I could just order HDRI.....:D

Yeah, yeah, moan moan.....:D

Julian.

mikala
01-20-2009, 04:48 PM
Order the back issues and absorb the info.
Apply it if you can to what you are doing until the next greatest thing comes along.
Besides it's far cheaper than going to an overpriced 3D school to learn it.

John Geelan
01-20-2009, 08:18 PM
But that's countered by the curious fact, as someone confirmed, that most if not all mainstream 3D apps do not offer this level of colour control, but work internally at very high bit-depths and allow you to save as much of that data as you need, for clean comp and grading down the line.

And I think I would be happy with that scenario too, but I'm getting a different perception about the role of pre-processing.
Perhaps, I'm wrong, but my perception is this.
Of the many features of LightWave which can be called on by an artist, some demand a linearizing correction immediately.
They cannot wait for post treatment, as the errors cannot be undone - or, at best, can only be made to appear undone with a quick-fix clean-up.
What is bugging me, is trying to identify which features, if any, fall into this category.

BTW - I've ordered the #18 and #19 issues. To quote Oscar Wilde - I can resist everything but temptation!:D
A full report will follow once I digest the articles.
Have to agree with you that the articles should be available electronically.

John Geelan
01-21-2009, 08:42 AM
This is what I am referring to - much more succinct and pointed than I managed!:thumbsup:

It comes from the following link:
http://www.newtek.com/forums/showthread.php?t=79047



Consider also that varying the gamma at the end of the output render without any pre-processing doesn't mean that we are gamma-correcting colors, because we are not in fact correcting colors, but the opposite.

In those specific cases, we are not gamma-correcting colors because flat colors, colors from procedural textures, from 8-bit images, from lights, etc. are in log space (this means they are already gamma-encoded). Though the diffuse shading obtained is linear, colors are not linear; so when a simple gamma exponent is applied in post-processing (LW, PS or any compositing package), what we are really doing is cranking gamma up for those colors. This means we are adding gamma twice! So colors are totally wrong.
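
To put a number on the "adding gamma twice" problem - a quick check in Python, using a plain 2.2 power curve as a stand-in for the exact sRGB formula:

    # A colour picked from an ordinary 8-bit swatch is already gamma-encoded.
    encoded = 0.5                       # what the colour picker shows
    linear  = encoded ** 2.2            # what the renderer should really work with (~0.218)

    # Proper linear workflow: linearise first, then encode once at the end.
    displayed_ok    = linear ** (1 / 2.2)    # back to 0.5 - clean round trip

    # The mistake: leave the colour encoded AND still apply a gamma boost in post.
    displayed_wrong = encoded ** (1 / 2.2)   # ~0.73 - gamma applied twice, washed out

    print(round(linear, 3), round(displayed_ok, 3), round(displayed_wrong, 3))
    # 0.218 0.5 0.729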

Gerardo has posted on my other thread - Colour Management in LightWave.
At last there is clarity!!!!!
I'll be posting there later.:D

jaf
01-21-2009, 10:23 AM
John,

as far as plugins, I'm a little surprised LWCAD ( http://www.wtools3d.com/ ) was not mentioned (or maybe I missed it.)

If you are going to model structures -- barns, sheds, houses, etc. LWCAD has some really nice tools.

But most users here are way beyond my LW expertise, so maybe this will generate some comments about LWCAD's usefulness for your intended modeling.

John Geelan
01-21-2009, 04:04 PM
Yes! I've heard of LWCad but never got around to discover what it is all about.
Like yourself, I'm totally new to CG but I believe it's worth the effort to master.

colkai
01-22-2009, 02:42 AM
Yes! I've heard of LWCad but never got around to discover what it is all about.
Oh, you really must take a look - don't think it's just for ArchViz either; I use it pretty much daily for any hard modelling.

JBT27
01-22-2009, 03:04 AM
John,

as far as plugins, I'm a little surprised LWCAD ( http://www.wtools3d.com/ ) was not mentioned (or maybe I missed it.)

If you are going to model structures -- barns, sheds, houses, etc. LWCAD has some really nice tools.

But most users here are way beyond my LW expertise, so maybe this will generate some comments about LWCAD's usefulness for your intended modeling.

On a general response, LWCAD is an absolutely stunning addition to Modeler.

That said, it is primarily key in quickly building clean and precise built-structures, for arch-viz, amongst others.

But, there are sufficient gems of tools, really intelligent tools, to make some modelling work a real treat - the snapping alone is jaw-dropping - the highly priced Modo, famed for its modelling elegance (when it's not crashing :D), comes nowhere close to what LWCAD has.

Not that we're trying to bankrupt you or anything, John ..... :D

Julian.

John Geelan
01-22-2009, 08:00 AM
Oh boy! ... here we go again ... now where's that credit card ....

Somebody mentioned the word "recession" the other day .... not at all sure what it means ... problems with a non-linear workflow in the financial world, I think.:D

John Geelan
01-24-2009, 04:26 AM
Thanks to Moosedog and Julian for their recommendation of the HDRI Handbook - Yes! Worth every penny/cent.:)
Reading it has helped me define my problem more succinctly, i.e.:

How do I accurately preview a linear workflow in LW?

An LW scene is composed of materials and lighting. Lighting, since it originates within LW, will be handled linearly.
Materials, such as bitmapped textures, brought into LW will require immediate linearization.
So too will colours from all pickers except the SG_CCPicker.
Assuming I use the SG_CCPicker, the only remaining problem is how to linearise bitmaps.

My question is: How do you UN-GAMMA a gamma encoded bitmap or, for that matter, create a linear bitmap for importation into LW?
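
In the meantime, here is my rough understanding of what un-gamma-ing a bitmap amounts to - just a sketch using Pillow/numpy, a plain 2.2 power as an approximation of the real sRGB curve, and a made-up file name:

    import numpy as np
    from PIL import Image

    # load an ordinary (gamma-encoded) 8-bit texture - the file name is only an example
    img = np.asarray(Image.open("stone_wall.png"), dtype=np.float32) / 255.0

    # remove the encoding: raise the values to the power of the display gamma (~2.2)
    linear = img ** 2.2

    # keep the result in floating point (save to EXR/HDR, or hand it to the renderer
    # flagged as already linear); converting straight back to 8-bit would throw away
    # shadow precision, which is presumably why the correction is usually applied
    # inside the app rather than by resaving the bitmap itself.
    print(img.min(), img.max(), "->", linear.min(), linear.max())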

It's been remarkably quiet re colour management recently!:stumped:

JBT27
01-24-2009, 08:00 AM
I don't know - I suspect there are an awful lot of workflows done by individuals and small companies that don't take the majority of this into consideration. Those larger companies that do already know what they are doing, have complex and clean workflows in place already, and either aren't on these forums or have nothing to say about it :)

And I'm 'guilty' of not running a full enough workflow on this, partly because I wasn't fully up to speed on this, but also because the work I have done over the years has largely worked out fine, with a lot of pleased clients and me not seeing any major wanderings from the colour or tone I intended in the image.

That said, now I look closely at this, and despite what I said, I think I will also order those two back-issues of HDRI. Looking at Gerardo's posts on SpinQuad, the wiki on the SG tools, and reading your two threads, I am aware that I am missing out on a lot of control, whilst also mindful that what I do now is largely acceptable. I have to say that even with the greatest care, once a shot lands with the editor, or with the printer, I have seen the most horrendous things done on occasion which make me wonder why I even bother :D

I suppose also to a lot of artists, this level of tech is an awful lot of extra learning, understanding and implementation. Let's face it, not so long ago, Dave Jerrard was pointing out that with some image maps, you could easily strip them down to index colour to save memory, and still not see a major difference in the result (you actually don't oftentimes!). Of course, output has moved on since those days ..... :)

Julian.

clagman
02-17-2009, 08:40 AM
Posting to a long dead thread but I ran across this while searching for a linear gamma solution for Vue7. The way I do it is to use the SG tools for color picking and such and for bitmaps use the image panel to set the gamma (or use nodes). This way all the bitmaps and colors are 1.0 gamma.

My problem is working with the xStream renders. You can adjust each material with a gamma node, but dealing with the sun, fog, and light colors isn't nearly as easy since there is no linear color picker and the sun color is generated dynamically.

On the other hand I just really got started with Vue so perhaps there is a good work around that I haven't seen yet.