
Thread: tiled/segmented single image rendering with GI, issues and possible solutions

  1. #1

    tiled/segmented single image rendering with GI, issues and possible solutions

    is there any known solution for properly rendering a *single* tiled image of a GI scene over screamernet without getting different shading results at the tile borders? as every node renders based on a different random seed for its samples, the tiles will never perfectly match, even when using a locked GI cache file.

    known solutions are to render the tiles with a border and blend them later in PS, or to skip FG entirely and use only uncached, uninterpolated MC.

    the problem with the first solution is that it makes no sense with lots of small tiles, which compromises the efficiency of tiled image rendering on a farm. the problem with the second is that uninterpolated MC is notoriously slow to render in LW, so again you have no advantage in rendering a single image over a farm.

    would it make sense, technically, if we could define the same random seed for GI calculations on all render nodes, or is there more causing these render differences between nodes? maybe it would make a good feature request to ask NT for repeatable/matching GI calculations on any render node for a single image - what do you think?
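    to illustrate the seed idea, here's a toy python sketch (purely illustrative, not how LW actually distributes its samples - the sample values and counts are made up): two nodes estimating the same border pixel with different seeds land on different values, while a shared seed reproduces identical ones.

    import random

    def toy_gi_estimate(seed, samples=64):
        """toy monte carlo estimate of one border pixel's indirect light.
        purely illustrative; not LW's actual GI sampling."""
        rng = random.Random(seed)
        # pretend each sample returns some bounce contribution in [0, 1]
        return sum(rng.random() for _ in range(samples)) / samples

    # the two nodes rendering the tiles that meet at a seam
    print(toy_gi_estimate(seed=1), toy_gi_estimate(seed=2))  # different -> visible step at the border
    print(toy_gi_estimate(seed=7), toy_gi_estimate(seed=7))  # identical -> no seam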

    i'm using a small squidnet-based single-platform (OSX only) network render setup with LW11.

    any thoughts, tricks or solutions would be very welcome...

    cheers

    markus
    3dworks visual computing
    demo reel on vimeo
    instagram

    OSX 10.12.x, macpro 5.1, nvidia gtx 980 ti, LW2015.x / 2018.x, octane 3.x

  2. #2
    Super Member JonW · Join Date: Jul 2007 · Location: Sydney Australia · Posts: 2,235
    Turn on "Use Behind Test"
    Procrastination, mankind's greatest labour saving device!

    W5580 x 2 24GB, Mac Mini, Spyder3Elite, Dulux 30gg 83/006 72/008 grey room,
    XeroxC2255, UPS EvolutionS 3kw+2xEXB

  3. #3
    Super Member Captain Obvious · Join Date: Dec 2004 · Location: London · Posts: 4,502
    Simple: run the GI pre-process on a single machine at lower resolution and save that to a file that all the render nodes can read from. Once there's a 'baseline' GI in place at low res, that'll smooth out most of the discontinuities at the edges.

    And in answer to the question about using the same 'seed' on the machines: that unfortunately wouldn't help, and is probably what LW already does. Getting irradiance caching to match at the borders is very difficult, because the results are interpolated across several pixels. Because a certain tile of the image won't know about the GI in the bordering tile, it cannot interpolate across it.

    There are many possible solutions to this. You could, for example, calculate the GI pre-process for each tile first, transfer the entire irradiance cache to one machine and merge it together, and then start the main pass. But that's tricky for all sorts of reasons. There are no good easy solutions to this problem. It's just one of the limitations of irradiance caching.
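    A rough way to picture the interpolation problem (toy numbers, not LW's actual cache format or weighting): each tile can only interpolate border pixels from the cache records it computed itself, so the two sides of a seam arrive at different values unless both read the same pre-baked records.

    # toy 1-D "irradiance cache": sparse records, pixel values interpolated
    # from nearby records with inverse-distance weights. illustrative only.
    def interpolate(x, records):
        num = den = 0.0
        for pos, value in records:
            w = 1.0 / (abs(x - pos) + 1e-3)
            num += w * value
            den += w
        return num / den

    shared = [(10, 0.8), (30, 0.6), (50, 0.9), (70, 0.5), (90, 0.7)]  # pre-baked, full frame
    tile_a = [r for r in shared if r[0] < 50]    # records tile A would compute on its own
    tile_b = [r for r in shared if r[0] >= 50]   # records tile B would compute on its own

    seam = 49.5
    print(interpolate(seam, tile_a), interpolate(seam, tile_b))   # mismatch at the border
    print(interpolate(seam, shared), interpolate(seam, shared))   # identical with a shared cache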
    Last edited by Captain Obvious; 07-09-2012 at 01:54 AM.
    Are my spline guides showing?

  4. #4
    Quote Originally Posted by JonW View Post
    Turn on "Use Behind Test"
    unfortunately, this won't help.

    Quote Originally Posted by Captain Obvious View Post
    Simple: run the GI pre-process on a single machine at lower resolution and save that to a file that all the render nodes can read from. Once there's a 'baseline' GI in place at low res, that'll smooth out most of the discontinuities at the edges. ....
    simon, thanks for the tips! so it's actually better to pre-bake GI at a lower resolution in that case? interesting, i will try this. i baked GI at 100% for my tests and was even thinking of setting it to 200% or more.

    i wonder how bucket render engines handle this problem. wouldn't it make sense to make LW capable of rendering this way? i know there's a long debate for or against using a bucket rendering engine, but with multi-core machines being standard nowadays, the advantages will probably outweigh the disadvantages.

    in any case, buckets or not, it would be great to see NT working on a solution to the distributed single-image rendering problem for LW12! most other engines i know of have a solution for this, and i would expect LW to be an option for such a workflow as well.

    cheers

    markus
    Last edited by 3dworks; 07-09-2012 at 03:13 AM.
    3dworks visual computing
    demo reel on vimeo
    instagram

    OSX 10.12.x, macpro 5.1, nvidia gtx 980 ti, LW2015.x / 2018.x, octane 3.x

  5. #5
    Super Member Captain Obvious · Join Date: Dec 2004 · Location: London · Posts: 4,502
    Yes, pre-bake it at lower resolution, but make sure that you set the pre-process setting to always anyway. You'll need the additional pre-process at full res if you want all the details in the GI. So do the pre-process at, say, 10 % resolution on the main machine. Then turn it back up to 100 %, and send it off to the farm.

    Bucket renderers deal with it by having a pre-process, same as Lightwave. In modo, VRay, finalRender, mental ray, et cetera, they all render a pre-process, and then do the regular bucket rendering. The problem doesn't appear until you render on several different machines. It's not possible to keep the cache in sync across all machines throughout the rendering process; it would be way too slow.
    Are my spline guides showing?

  6. #6
    unfortunately, i found no settings which minimize the problem. here are a few experiments i did with a production scene. i used different multiplier settings for pre-baking the cache file, with the same GI settings otherwise. i also tried the 'gradients' switch (which should basically use the LW10 way of sample placement), as well as uninterpolated and uncached FG, without getting much better results, only longer render times. uncached 'brute force' monte carlo took too much time to be considered seriously. also, i always used the 'fixed' sampling pattern in the camera settings.

    what remains as a possible workaround is the border blending trick, but what we need is a solid native solution...
    Attached image: tiled_fg_gi_prebake_test.gif
    Last edited by 3dworks; 07-10-2012 at 03:00 AM.
    3dworks visual computing
    demo reel on vimeo
    instagram

    OSX 10.12.x, macpro 5.1, nvidia gtx 980 ti, LW2015.x / 2018.x, octane 3.x

  7. #7
    Almost newbie Cageman · Join Date: Apr 2003 · Location: Malmö, SWEDEN · Posts: 7,650
    Hi,

    I'm using DStorm's Divide Scene... I've attached an image that was divided into 100 regions and rendered on our renderfarm (around 40 machines right now). You can actually see the regions through the transparent window, but everywhere else you really can't see the seams.

    This scene uses Final Gather + Cache. I first cached the GI with multiplier set to 100%, then I used Divide Scene to generate 100 scenefiles that I then submitted to our farm through a neat little batchsubmit-tool.
    Attached image: 100_pieces_put_together0000.jpg
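    For anyone curious about the arithmetic a divide-scene style tool performs, here's a rough Python sketch that splits a frame into an n x n grid of limited-region rectangles, one per scene file. This is not DStorm's actual plugin code, and the 1920 x 1080 resolution is just an example.

    def tile_regions(width, height, n):
        """yield (index, x0, y0, x1, y1) pixel rectangles for an n x n grid of tiles."""
        for row in range(n):
            for col in range(n):
                x0, x1 = col * width // n, (col + 1) * width // n
                y0, y1 = row * height // n, (row + 1) * height // n
                yield row * n + col, x0, y0, x1, y1

    # 100 regions (10 x 10), e.g. for a 1920 x 1080 frame
    for idx, x0, y0, x1, y1 in tile_regions(1920, 1080, 10):
        print("tile %03d: x %d-%d, y %d-%d" % (idx, x0, x1, y0, y1))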
    Senior Technical Supervisor
    Cinematics Department
    Massive - A Ubisoft Studio
    -----
    Intel Core i7-4790K @ 4GHz
    16GB Ram
    GeForce GTX 1080 8GB
    Windows 10 Pro x64

  8. #8
    was this rendered with LW11?

    looks quite good indeed! i wonder if maybe there's something broken with lwsn actually loading the cache... on the other hand, i've already checked the individual node logs and found the GI cache loading command entry. i will investigate further - your example seems to indicate that the differences between tiles should not be that massive. i was also puzzled that the different multiplier factors seem not to have much of an effect on the renderings.

    is there any known bug where lwsn nodes do not correctly load disk cached gi? i'm on osx, just to be sure.
    3dworks visual computing
    demo reel on vimeo
    instagram

    OSX 10.12.x, macpro 5.1, nvidia gtx 980 ti, LW2015.x / 2018.x, octane 3.x

  9. #9
    Almost newbie Cageman · Join Date: Apr 2003 · Location: Malmö, SWEDEN · Posts: 7,650
    My example was rendered with LW11.0.1. All nodes find the cache file... However, I don't know whether the Mac version might have a bug regarding this.

    EDIT: Our farm is based around Windows and consists of a mix of AMD and Intel CPUs.
    Senior Technical Supervisor
    Cinematics Department
    Massive - A Ubisoft Studio
    -----
    Intel Core i7-4790K @ 4GHz
    16GB Ram
    GeForce GTX 1080 8GB
    Windows 10 Pro x64

  10. #10
    Super Member Captain Obvious · Join Date: Dec 2004 · Location: London · Posts: 4,502
    If you run the cache prepass at 100 % on a single machine and store that to a file, then all the nodes should produce perfectly smooth results because all the data they need is already there. I don't know if there are any bugs that could result in the nodes not loading the data properly.
    Are my spline guides showing?

  11. #11
    thanks guys! so it definitely seems like the border differences should be minimal. this all makes me think that there is likely a problem with my network render setup, or even a bug in the way lwsn works under osx. as soon as i have time again later this week, i will check this with a simple test scene and let you know.
    3dworks visual computing
    demo reel on vimeo
    instagram

    OSX 10.12.x, macpro 5.1, nvidia gtx 980 ti, LW2015.x / 2018.x, octane 3.x

  12. #12
    BINGO! found a nasty bug...

    when checking with a very simple scene, i found out that lwsn apparently does NOT read the locked GI cache from the network content folder where it is stored (inside the regular 'Radiosity' folder which the package scene plugin generated)! i checked with a text editor whether the line was missing from the LWS, but it was there, like

    RadiosityCacheFilePath Radiosity/radiosity_lw11.cache


    what i did was change that line into an absolute osx-style path like

    RadiosityCacheFilePath /Volumes/Netrender_1T/lw_net/Projects/netrender_test/Radiosity/radiosity_lw11.cache

    then i sent the modified scene to the lwsn network setup with squidnet, and now the scene renders with nearly no visible tile seams!

    this clearly seems to be an lwsn bug in the mac version of LW11 (using SP2 here). i will fogbugz it immediately and hope that NT can fix it asap. in the meantime, the only solution for mac users seems to be editing the generated LWS manually and rewriting the GI cache path as an absolute path before submitting to a farm! of course this also affects any full-frame animation rendering, which explains why i had quite a lot of trouble when i tried to get a flicker-free GI scene rendered with LW...
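    until it's fixed, something like this quick python sketch could automate the manual edit (assumptions: the LWS is plain text, the cache path in the scene is relative to the content directory, and you pass that directory yourself - not an official tool, adjust as needed):

    # rewrite a relative RadiosityCacheFilePath in a .lws to an absolute path
    # before submitting the scene to the farm. rough sketch only.
    import os, sys

    def absolutize_gi_cache(lws_file, content_dir):
        lines = []
        with open(lws_file) as f:
            for line in f:
                if line.startswith("RadiosityCacheFilePath "):
                    rel = line.split(None, 1)[1].strip()
                    if not os.path.isabs(rel):
                        line = "RadiosityCacheFilePath %s\n" % os.path.join(content_dir, rel)
                lines.append(line)
        with open(lws_file, "w") as f:
            f.writelines(lines)

    if __name__ == "__main__":
        # e.g.: python fix_gi_path.py scene.lws /Volumes/Netrender_1T/lw_net/Projects/netrender_test
        absolutize_gi_cache(sys.argv[1], sys.argv[2])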

    cheers

    markus
    Last edited by 3dworks; 07-12-2012 at 12:47 PM.
    3dworks visual computing
    demo reel on vimeo
    instagram

    OSX 10.12.x, macpro 5.1, nvidia gtx 980 ti, LW2015.x / 2018.x, octane 3.x

  13. #13
    here's a new test with the edited scene, screenshot from the squidnet interface...

    http://www.3dworks.com/skitch/Tile_R...712-222318.jpg

    looks much smoother, now
    3dworks visual computing
    demo reel on vimeo
    instagram

    OSX 10.12.x, macpro 5.1, nvidia gtx 980 ti, LW2015.x / 2018.x, octane 3.x

  14. #14
    Super Member JonW · Join Date: Jul 2007 · Location: Sydney Australia · Posts: 2,235
    I had this problem using Matt's SN setup. I put the Radiosity Cache file in my screamernet folder, one level higher up the chain. If it's in the Radiosity folder, the network just refuses to find it.
    Procrastination, mankind's greatest labour saving device!

    W5580 x 2 24GB, Mac Mini, Spyder3Elite, Dulux 30gg 83/006 72/008 grey room,
    XeroxC2255, UPS EvolutionS 3kw+2xEXB

  15. #15
    Electron wrangler jwiede · Join Date: Aug 2007 · Location: San Jose, CA · Posts: 6,497
    Quote Originally Posted by 3dworks View Post
    BINGO! found a nasty bug...

    ....
    Yuck. Did you file a bug on this? Did they give any hint whether it would be fixed in 11.5? That's a pretty nasty bug, IMO, and I worry what else might use that same path-handling code.
    John W.
    LW2015.3UB/2018.0.7 on MacPro(12C/24T/10.13.6),32GB RAM, NV 980ti
