Page 3 of 29
Results 31 to 45 of 425

Thread: Radiosity, animated

  1. #31
    Quote Originally Posted by DrStrik9 View Post
    OK, so here's what I'm getting from the last several posts:

    Forget Lanczos for GI; it adds noise. Use Classic, which is the only reconstruction filter that isn't really a post effect.
    I would say that with the Advanced or Perspective cameras you should forget all but the Classic reconstruction filter, period. The others made a little sense with the Classic camera, but not with the newer cameras.

    What we really need (and I have asked for) is a choice of filters for how oversampling is applied. Right now it is always applied as a box filter.
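    For illustration, here is roughly the difference being requested, sketched in Python on a 1-D row of sub-samples. The function names and triangle weights are made up for the sketch; LW exposes no such API:

    ```python
    def downsample_box(samples, factor):
        """Plain average of each group of `factor` sub-samples (the current box behavior)."""
        return [sum(samples[i:i + factor]) / factor
                for i in range(0, len(samples), factor)]

    def downsample_triangle(samples, factor):
        """Triangle-weighted average: sub-samples near the group center count more."""
        out = []
        for i in range(0, len(samples), factor):
            group = samples[i:i + factor]
            # triangle weights, e.g. [1, 2, 2, 1] for factor 4
            weights = [min(j + 1, factor - j) for j in range(factor)]
            out.append(sum(s * w for s, w in zip(group, weights)) / sum(weights))
        return out

    # A hard edge rendered at 4x oversampling:
    sub = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
    print(downsample_box(sub, 4))       # [0.0, 0.25, 1.0] -- every sub-sample weighted equally
    print(downsample_triangle(sub, 4))  # middle pixel ~0.167 -- edge sub-sample weighted down
    ```

    Same sub-samples in, different pixel out; that is all a choice of oversampling filter would change.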

  2. #32
    inSPIRAL jay3d (Joined Jul 2003 · Wavers World · 902 posts)
    Quote Originally Posted by joelaff View Post

    I agree interpolated radiosity is a total waste of time, in LW and in Mental Ray, BTW. It is blotchy as all hell, flickers, and generally just sucks... Looks like a Quake map.
    I STRONGLY disagree (on the LW part; MR interpolated really does suck).
    When I ran tests on the same scene with LW 9.6 GI compared to V-Ray, mental ray, and modo, guess what? It beat them all on detail, speed, and animated GI.

    I was very impressed with the test results, the new team really got it right this time.

    I will follow up with some tests
    Last edited by jay3d; 05-22-2009 at 11:20 AM.

  3. #33
    I'd love to find some settings that actually work. However, I have yet to find interpolated settings that meet my standards. Post some anims (with scenes) if you can....

    Everything is going to vary by scene. I am sure there are some scenes where interpolated may work OK.

    Honestly, I would rather spend my time creating than testing out all the different settings per scene. This is why I like FPrime and non-interpolated Monte Carlo. They just work.

  4. #34
    inSPIRAL jay3d
    Quote Originally Posted by joelaff View Post
    I'd love to find some settings that actually work. However, I have yet to find interpolated settings that meet my standards. Post some anims (with scenes) if you can....
    You can start with these settings and tweak depending on the scene:

    Samples: 512
    Secondary: 256
    Bounces: 3
    Min Pixel: 0.5
    Max Pixel: 12
    Use Gradients

    AA: 1
    Threshold: 0.04 or 0.03, and use 0.02 or 0.01 for final rendering
    Over Sample: 0.7, and you can add some sharpening filters, but ONLY with the oversample setting, as it greatly enhances the AA when combined
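    For reference, the same starting point as plain data. The key names are descriptive labels for this sketch, not actual LightWave parameter names:

    ```python
    # jay3d's suggested starting point, encoded as plain dicts.
    gi_settings = {
        "rays_per_evaluation": 512,
        "secondary_bounce_rays": 256,
        "bounces": 3,
        "min_pixel_spacing": 0.5,
        "max_pixel_spacing": 12,
        "use_gradients": True,
    }

    aa_settings = {
        "aa_passes": 1,
        "adaptive_threshold_preview": 0.04,  # or 0.03
        "adaptive_threshold_final": 0.02,    # or 0.01
        "oversample": 0.7,                   # pair sharpening filters only with this
    }
    ```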

  5. #35
    Quote Originally Posted by jay3d View Post
    You can start with those settings and tweak depending on the scene

    Samples: 512
    Secondary: 256
    Bounces: 3
    Min Pixel: 0.5
    Max Pixel: 12
    Use Gradients

    AA: 1
    Threshold: 0.04 or 0.03, and use 0.02 or 0.01 for final rendering
    Over Sample: 0.7, and you can add some sharpening filters, but ONLY with the oversample setting, as it greatly enhances the AA when combined
    I tried this on our current project (detailed running shoes with no person in them). It rendered in about 40% of the time of the non-interpolated version, but it still had blotchiness animating along most edges (concave areas). It flickered and was unacceptable. I am now trying min 0.25, max 5, RPE 768, SBR 384.

  6. #36
    The new settings still flickered. I guess I will try one more... This time RPE 1024, SBR 512, MinPS 0.15, MaxPS 2.0...



    Note my angular tolerance is 45 deg. Should I be adjusting that instead/also?
    Last edited by joelaff; 05-22-2009 at 01:04 PM.

  7. #37
    inSPIRAL jay3d
    Quote Originally Posted by joelaff View Post
    I tried this on our current project (detailed running shoes with no person in them). It rendered in about 40% of the time of the non-interpolated version, but it still had blotchiness animating along most edges (concave areas). It flickered and was unacceptable. I am now trying min 0.25, max 5, RPE 768, SBR 384.
    Did you bake the animated radiosity, or do you calculate every frame without caching?

  8. #38
    Quote Originally Posted by jay3d View Post
    Did you bake the animated radiosity, or do you calculate every frame without caching?
    Every frame. No cache. Rendering with BNR.

    It has been too long since I have messed with this... Is there a way to bake it using ScreamerNet? Otherwise baking is out of the question in most production facilities; nobody renders on a single machine in production.

    Will "automatic" bake using the network? Or would I have to do Bake Scene on a single machine?

    Note that my shoes are animated with bones...

    Thx

  9. #39
    inSPIRAL jay3d
    Quote Originally Posted by joelaff View Post
    Every frame. No cache. Rendering with BNR.

    It has been too long since I have messed with this.. Is there a way to bake it using screamernet? Otherwise baking is out of the question in most production facilities. Nobody renders with a single machine in production.

    Will "automatic" bake using the network? Or would I have to do Bake Scene on a single machine?

    Note that my shoes are animated with bones...

    Thx
    Baking animated radiosity should be done on one machine, since each machine has its own distribution of the noise pattern. That's fine, because baking animated GI stores only the sample records, which is faster than a full evaluation; the final shading can then be done on the network with no problems.

    You're welcome

  10. #40
    President, 3D Product Division (Joined Jan 2005 · Orange County, CA · 2,438 posts)
    Quote Originally Posted by joelaff View Post
    Every frame. No cache. Rendering with BNR.

    It has been too long since I have messed with this.. Is there a way to bake it using screamernet? Otherwise baking is out of the question in most production facilities. Nobody renders with a single machine in production.

    Will "automatic" bake using the network? Or would I have to do Bake Scene on a single machine?

    Note that my shoes are animated with bones...

    Thx
    Joe, you need to cache the GI solution if you do not want it to flicker. That is the whole point of the cache...
    Jay Roth
    NewTek
    www.lightwave3d.com
    http://twitter.com/jaymroth

    "Everything I write is forward looking -- specifications are subject to change without notice..."

  11. #41
    I don't know what you have in your shot, but if it has anything deforming (i.e. bones, etc.), then it is not supported by LW's interpolated radiosity. The animation cache only works for animated 'solid' geometry.

  12. #42
    I follow how the cache works. It all makes sense. I just didn't know if you NewTek wizards had come up with a distributed solution yet...


    Here is the issue... With this cached interpolated Monte Carlo, the processing time is now in computing the GI solution, which takes up about 90% of the per-frame render time, and it has to be done on a single machine. The actual render takes very little time, so distributing the render doesn't help much, especially since both the radiosity calculation and the render need to compute the APS and nodal displacement.

    I understand that I can reuse the cache. But most changes would invalidate it (animation, major surface changes, lighting changes), requiring me to start all over and bake on a single machine.

    This method may be great for small projects where you don't need, or don't have, a render farm. But it is not very useful to tie up a workstation to do the baking (can you bake from the command line without a dongle?). LW is relatively cheap, and you could certainly buy more dongles to do the baking, but you are still limited by the speed of your baking machine. All this to save maybe 50% of the render time, so if your farm is two machines the savings is erased, not to mention the headache of the baking, keeping the cache in sync, storing the cache files, etc.
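    The break-even argument can be roughed out numerically. This is a toy model, not a benchmark: the 40% figure comes from the earlier shoe test, the 90% GI share from the paragraph above, and the per-frame minutes are invented.

    ```python
    def wall_time_cached(frames, t_frame, nodes, gi_share=0.9):
        """Bake GI on ONE machine (serial), distribute only the final shading."""
        bake = frames * t_frame * gi_share
        shade = frames * t_frame * (1 - gi_share) / nodes
        return bake + shade

    def wall_time_uncached(frames, t_frame_uncached, nodes):
        """Non-interpolated Monte Carlo: every frame fully distributable."""
        return frames * t_frame_uncached / nodes

    frames = 300
    t_cached = 4.0                 # minutes/frame with the cache (invented)
    t_uncached = t_cached / 0.4    # cached was ~40% of the uncached frame time

    for nodes in (1, 2, 4, 10):
        print(nodes,
              round(wall_time_cached(frames, t_cached, nodes), 1),
              round(wall_time_uncached(frames, t_uncached, nodes), 1))
    ```

    Under these made-up numbers the single-machine bake dominates as soon as the farm has a handful of nodes: the fully distributable non-interpolated render keeps scaling, while the cached pipeline flattens out at the serial bake time.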

    What we need is some way to do distributed baking. I understand this is difficult; we need to maintain the samples for the various points from frame to frame, right? (It's not that each machine generates random numbers differently; that is not how PRNGs work. They take a seed and always produce the same sequence from the same seed.) Perhaps this is something that could use a distributed database (or just a SQL server). It may simply turn into too many transactions for this approach, but perhaps not if all of the existing points were queried once at the start of the frame. Then the new points could be evaluated, and when they are inserted, each could be inserted just once (e.g. always use the first sample received, or whatnot).
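    A minimal sketch of those two points in Python. Everything here is hypothetical; nothing is an actual LW or ScreamerNet facility: seeded PRNGs are reproducible across machines, and a first-insert-wins keyed store is one way a shared sample database could stay consistent.

    ```python
    import random

    # Point 1: PRNGs are deterministic -- the same seed always yields the same
    # sequence, regardless of which machine runs it.
    a = random.Random(1234)
    b = random.Random(1234)
    assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

    # Point 2: a toy "first sample wins" store, the insert rule suggested above.
    # Keys are quantized sample positions; only the first writer for a key sticks,
    # so every node ends up agreeing on one record per point.
    class SampleStore:
        def __init__(self):
            self._records = {}

        def insert(self, position, value):
            key = tuple(round(c, 3) for c in position)  # quantize to merge near-duplicates
            self._records.setdefault(key, value)        # first insert wins
            return self._records[key]

    store = SampleStore()
    store.insert((0.1, 0.2, 0.3), "node-A")
    winner = store.insert((0.1, 0.2, 0.3), "node-B")
    print(winner)  # node-A: the second writer gets the existing record
    ```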

    For now, non-interpolated Monte Carlo still seems like the best bet for animations for anyone with a render farm. Using a single machine for the most time-consuming part of the render just doesn't make much sense.

    Note also that my scene uses bones. Does the caching even work properly with deformations? It looks like not, so the question may be moot for this scene. But even without bones, the above concerns would still be valid.
    Last edited by joelaff; 05-22-2009 at 02:08 PM.

  13. #43
    LightJustice Panikos (Joined Feb 2003 · Nicosia, Cyprus · 1,727 posts)
    In the time you spend tuning LW GI and typing all this, FPrime would have had it ready.

    GI in outdoor scenes is easier, since the geometry is openly exposed to the "light".
    The problems begin indoors, where accessibility is restricted.

  14. #44
    Quote Originally Posted by Panikos View Post
    In the time you spend tuning LW GI and typing all this, FPrime would have had it ready.
    As would non-interpolated Monte Carlo. Which is my point...

  15. #45
    Quote Originally Posted by joelaff View Post
    always use the Classic reconstruction filter with the Perspective or Advanced cameras. As confirmed by Matt Grainger, all the others are just post filters (though applied after each AA pass). If they are post filters, then apply them in post.
    This is what I used to think, but while they are post filters, they use 3D data, which you can't do in post. These two images are 1-pass AA with no adaptive sampling, Classic vs. Gaussian. Gaussian only took a tenth of a second longer, and it would take several passes for Classic to match it. This only works with the Perspective camera; with the Classic camera it is just a simple blur.
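    A toy Python sketch of why render-time reconstruction is not just a post blur: it weights each sub-pixel sample by its position relative to the pixel center, information that is gone once the frame is written out. The sample positions and sigma here are invented for the illustration:

    ```python
    import math

    def gaussian_reconstruct(samples, sigma=0.5):
        """samples: list of (dx, dy, color) with (dx, dy) offsets from the pixel center."""
        wsum = csum = 0.0
        for dx, dy, color in samples:
            w = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))  # Gaussian falloff
            wsum += w
            csum += w * color
        return csum / wsum

    # Four jittered samples within one pixel; one bright sample sits near the center.
    samples = [(-0.4, -0.4, 0.0), (0.4, -0.3, 0.0), (-0.05, 0.1, 1.0), (0.45, 0.4, 0.0)]

    box_average = sum(c for _, _, c in samples) / len(samples)
    print(box_average)                    # 0.25: every sample weighted equally
    print(gaussian_reconstruct(samples))  # > 0.25: the centered sample counts more
    ```

    A post filter only ever sees the already-collapsed 0.25; the sub-pixel offsets that let Gaussian reconstruction do better no longer exist in the saved image.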
    Attached thumbnails: clssc.png, gauss.png

