PDA

View Full Version : Choose your weapon GPU or CPU rendering?



Nicolas Jordan
09-10-2018, 11:58 AM
I'm looking at what direction I need to go in with new hardware for rendering for Arch Viz, both exteriors and interiors, in Lightwave.

1. Looking at getting a new PC with the AMD 2990WX Threadripper and sticking with Lightwave native rendering with 2015 and 2018.

2. As an alternate option, I'm looking at getting 2 x GTX 1080Ti cards for my current PC and purchasing Octane for rendering. I only have 2 card slots on my current motherboard. This would be about half the cost of a Threadripper machine and would be limited to 22GB of GPU RAM, which is probably enough for most projects but could be a limitation on some. See the specs for my current PC in my signature.

I've played around a bit with Octane and like it, but my current video card is a GTX 770 2GB, so I can't do any realistic production testing with it. I'm still trying to figure out just how fast GPU rendering is compared to CPU rendering, especially now that a 64-thread consumer CPU is out. I'm a bit old school, so GPU rendering is still something I'm trying to wrap my head around. I'm looking for opinions on how GPU rendering with 2 x 1080Ti and Octane might stack up against rendering with Lightwave on a 2990WX Threadripper. I realize CPU vs GPU is very hard to compare directly, but I'm curious what others would choose based on their experience.

Any opinions or advice is appreciated and may help me make a decision on which direction to go in.

Gungho3D
09-11-2018, 02:17 AM
Good question ...

Can't tell you (comparatively) how much quicker the Threadripper will get work - as in rendering - done compared with your current setup, but if I gauge the hype right, there should be a reasonable speed-up (maybe check with someone who has one and knows?). The immediate advantage is that all of your skills, knowledge and expectations regarding surfacing an object will retain their value and be good-to-go "as is."

About Octane (my view only): been using it since about mid-2014, and love the results it gives. That said, you do have to do it "the Octane way," which does involve a little on-ramp time. All surfacing is achieved via Octane-specific nodes, and many of the conveniences and magic available per "the LW way" of getting surfacing done are not in reach any more - not to the Octane renderer at least. Still, you can do surfacing magic with the tool, but it takes a bit of getting used to ...

It does come down to making a value judgement: if you simply can't afford to accommodate (time-wise) the learning curve that comes with Octane (say, because you're super busy with projects), then maybe Threadripper is the way to go. But if you can take a little time to put yourself through the school of "I'm going to master Octane," then think seriously about doing that. Here's the thing: you can test the Octane waters with your current system without having to part with a cent: just download the Octane demo and put it through its paces.

My own conclusion after four-plus years of using Octane: I'm not going back to the old way of using LW's renderer, not unless I'm specifically gunning for a particular "look." And yes, I'm currently running the exact config you are proposing: 2 x 1080Tis, each with 11GB of RAM.

kopperdrake
09-11-2018, 03:14 AM
I'm also an Octane convert - two years now, running the setup you describe. The only time I've found Octane to fall short is interior shots where there's a lot of geometry outside (gardens etc.). Then I have to render in two passes and comp. The texturing is a doddle once you get the hang of it and start building your library up - there are some good node tests on the net for things like tree leaves which you can add to your existing tree models. I tend to have Octane-specific objects labelled OCT somewhere, for organisational reasons. The great things with Octane for me are the sunlight - it's beautiful - and the built-in film grading options. In fact, there are so many reasons to like it that I haven't felt the need to use 2018 yet, which I really must do - it's sat on my desktop after all!

I should also say that 2 x 1080Ti cards aren't crazy fast for exterior/interior shots, but I usually render in the evenings when I've packed up. I'd like to do a test animation with Octane, but I usually get asked for stills.

Nicolas Jordan
09-11-2018, 10:02 AM
Thanks for the honest opinions on Octane, guys. It's also great that you both have the same GPU setup I'm considering if I decide to go the Octane route. This is exactly the kind of first-hand info I was looking for to help me make a decision. Octane has interested me for a while, and some talk about it as if it were a godsend - and maybe it is in some situations. I used FPrime for many years back in the day, and Octane reminds me of FPrime, even though there are some huge technical differences between them - and we do have VPR in Lightwave, which is fully integrated and more than fills the FPrime void.

I'm usually busy working on various projects trying to meet an ever growing list of deadlines so the extra time it would take to become proficient enough with Octane to use it in production would definitely be something I need to consider as Gungho3D pointed out.

I have a pretty good idea of how much of a speed increase I would get with a 2990WX, so that would be a safe bet, but I always like to consider and explore all possible options before purchasing hardware that I will be using for at least the next 5 years. The Threadripper should also speed up VPR previewing considerably.

Markc
09-11-2018, 11:54 AM
Isn’t VPR single threaded?
Not too sure.
There is also the extra electricity burn with extra hardware.

Nicolas Jordan
09-11-2018, 02:10 PM
Isn’t VPR single threaded?
Not too sure.
There is also the extra electricity burn with extra hardware.

When I turn VPR on it shows all 12 threads at 100% on my machine. I think VPR would preview super fast with a 2990WX and I would likely be able to turn draft mode off most of the time.

hrgiger
09-11-2018, 06:55 PM
You're only going to get so fast with CPU - and nowhere near as fast as you will with GPU, which you can also expand. And Octane development has some good things in the works.

Gungho3D
09-11-2018, 10:31 PM
Thanks for the honest opinions on Octane guys ...
You're welcome NJ :)



... I used FPrime for many years back in the day and Octane reminds me of FPrime ...
I know the feeling. Back in the day I think my colleagues were laughing at how much I sang the praises of the product. It was a speed machine which allowed me to get so much done in so little time, not to mention the incredible, almost instant feedback during development.



... I'm usually busy working on various projects trying to meet an ever growing list of deadlines so the extra time it would take to become proficient enough with Octane to use it in production would definitely be something I need to consider as Gungho3D pointed out.
Yeah, I went the cautious way here as well, doing smaller tasks and jobs to wrap my head around "the Octane way" while everything else remained LW native rendered.

Maybe consider getting the Threadripper plus one decent GPU...? It might help you ease into Octane without being dependent on it from the get-go...

TheLexx
09-16-2018, 10:38 AM
I'm not really sure what this means for CPU rendering in LW, but:-

"Alleged AMD EPYC ‘Rome’ 7nm Based 64 Core Processor Performance Leaks Out – Scores an Incredible 12,500 Points in Cinebench Multi-Tasking Benchmark"
https://wccftech.com/amd-epyc-rome-7nm-64-core-cpu-performance-benchmark-leak/

Would a couple of these on a dual motherboard cut it against a GPU for rendering ?

rustythe1
09-16-2018, 10:50 AM
I'm not really sure what this means for CPU rendering in LW, but:-

"Alleged AMD EPYC ‘Rome’ 7nm Based 64 Core Processor Performance Leaks Out – Scores an Incredible 12,500 Points in Cinebench Multi-Tasking Benchmark"
https://wccftech.com/amd-epyc-rome-7nm-64-core-cpu-performance-benchmark-leak/

Would a couple of these on a dual motherboard cut it against a GPU for rendering ?


And aren't renderers like Kray also still being developed CPU-only, using the GPU just for certain OpenCL features? Yet they are still fast.

TheLexx
09-16-2018, 11:17 AM
And aren't renderers like Kray also still being developed CPU-only, using the GPU just for certain OpenCL features? Yet they are still fast.
I sometimes come across mentions of Kray, Keyshot, Maxwell and Luxrender, but know very little about them. Does anyone know if any of these are compatible with Lightwave animation (inc. CA), or are there gotchas everywhere? Either way, it seems like good news for the LW native renderer.

Ryan Roye
09-16-2018, 12:35 PM
With 3x 1080 TIs loaded into the rig, nothing compares to GPU rendering. We're talking about the difference between 10 seconds and 15 minutes here.

However, something I don't feel Octane does well is volumetrics. They are hard to control and always introduce a ton of noise, regardless of how much brute force you throw at them. Lightwave's solution is generally better there, and visual effects and anything involving fakery are generally easier to do.

I still feel Lightwave's renderer is much more forgiving and less dependent on emulating real-world scenarios (technically correct lighting is not always artistically correct)... so it really depends on how much you are willing to spend on brute-force solutions and/or how much flexibility you want in your renders.

Also bear in mind that with Octane you may need to use geometry to light non-outdoor scenes if you want anything other than quad/sphere lights. You can't just spawn a spotlight; you have to build a 3d representation of it and illuminate the scene that way.

pixym
09-16-2018, 02:12 PM
Nicolas,

I have been re-discovering Octane (V4) for about three months now, with two GTX 1080 Tis. For sure, I am about to simply give up CPU rendering!
The results and the speed increase I get with Octane and two graphics cards are simply amazing. I do ArchViz.

Nicolas Jordan
09-16-2018, 05:04 PM
Nicolas,

I have been re-discovering Octane (V4) for about three months now, with two GTX 1080 Tis. For sure, I am about to simply give up CPU rendering!
The results and the speed increase I get with Octane and two graphics cards are simply amazing. I do ArchViz.

Based on the machine specs in your signature you have a pretty fast 18-core CPU, and you're saying that 2 x 1080Ti is even faster? That's crazy!

pixym
09-16-2018, 05:50 PM
Better than words (see attached screen grabs)

erikals
09-17-2018, 12:22 AM
Better than words (see attached screen grabs)
https://forums.newtek.com/attachment.php?attachmentid=142823&d=1537141732

Antialiasing 24 - does it need to be that high?
If you cut it to 12 you will roughly halve the render time.

still, yep, that Octane render time kicks.
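For context on why halving the antialiasing/samples roughly halves the time: a progressive renderer's cost grows about linearly with samples per pixel, while noise only falls as the square root of the sample count. A back-of-envelope sketch (the 24-sample baseline is just the value from the screen grab being discussed):

```python
import math

def relative_time(samples, baseline=24):
    # Cost of a progressive render grows ~linearly with samples/pixel.
    return samples / baseline

def relative_noise(samples, baseline=24):
    # Standard error of a Monte Carlo estimate falls as 1/sqrt(samples).
    return math.sqrt(baseline / samples)

print(relative_time(12))             # 0.5 -> half the render time
print(round(relative_noise(12), 2))  # 1.41 -> only ~41% more noise
```

So cutting from 24 to 12 buys half the time at the cost of ~41% more noise - often an acceptable trade, especially with a denoiser in the loop.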

Lewis
09-17-2018, 05:50 AM
NJ - just to be sure you understand how Octane works: if you plug in 2 x 1080Ti you won't have 22GB of VRAM. Each GPU gets exactly the same data/textures, so RAM is not combined. It renders in parallel, so you will still have 11GB of VRAM no matter how many 1080Tis you plug into the machine. BUT Octane compresses all data when sending it to the GPUs, so VRAM usage is like 5-6x less than what LW 2015-2018 consumes (I tested some scenes in the past and LW was using 34GB of RAM while Octane used only 4.5GB of VRAM for the same scene), so 11GB of VRAM will be plenty.
Also, there is out-of-core memory, so you can offload textures to system RAM and keep GPU VRAM usage down. In V4 there is also out-of-core for geometry, which basically means almost unlimited scene size/models you can render in V4.

I've been using Octane for LW since the v1.9/2.0 era, and I've slowly progressed to using it exclusively for all projects (product design, car shoots, arch viz...). In 2014 I was 50/50 LW native and Octane, and now in 2018 I'm 99% fully on Octane for all rendering projects.
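For anyone skimming the thread, Lewis's description of Octane's memory behaviour can be sketched as a rough model. The function names are illustrative, and the ~5x compression figure is taken from his numbers above - it is an observation, not an Octane spec:

```python
def usable_vram_gb(cards_gb):
    # The scene is replicated to every card, so the smallest card's
    # VRAM is the limit - capacities do not add up.
    return min(cards_gb)

def fits_on_gpu(lw_scene_ram_gb, cards_gb, compression=5.0):
    # Rough fit test using the ~5-6x compression Lewis observed when
    # Octane sends scene data to the GPUs.
    return lw_scene_ram_gb / compression <= usable_vram_gb(cards_gb)

print(usable_vram_gb([11, 11]))   # two 1080Tis -> 11, not 22
print(fits_on_gpu(34, [11, 11]))  # the 34GB LW scene above -> True
```

The point the sketch makes: adding cards multiplies speed, not memory.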

pixym
09-17-2018, 05:55 AM
Antialiasing 24 - does it need to be that high?
If you cut it to 12 you will roughly halve the render time.

still, yep, that Octane render time kicks.

In fact, 24 is not enough, because this scene is set up for animation…

Kryslin
09-17-2018, 07:09 AM
I will probably be sticking to CPU based rendering for a while yet.

Fur and Fiber work consumes memory like PacMan (tm) eats dots. The last scene I did, which had 2 furred characters in it, took up 10GB of RAM, on top of the OS, for about 60% usage. I have managed to consume 20 of 24GB on one character (which turned out beautiful, though my FFX settings were not optimized).

Even if the memory across multiple GPUs were summed, it would take two 1080Tis to come close to my current available RAM - 20GB. At current prices, that's what a similarly specced computer costs (32GB RAM, quad-core i7 @ 3.2GHz). For my planned build next year, that would jump to 12 GPUs... which is more than twice the cost of the new build.

oliverhotz
09-17-2018, 07:36 AM
It's very important to understand: something needing 24GB of CPU RAM requires nowhere near that on the GPU. A couple of years ago I had a project that required 36GB on CPU, and it fit in my 3GB 780Ti at the time in Octane. Just read up and saw Lewis posted it already... doesn't hurt to underline it again. If you see how quickly it renders FiberFX... it's mindboggling.

Nicolas Jordan
09-17-2018, 10:24 AM
Better than words (see attached screen grabs)

Thanks for the speed comparison example.

Nicolas Jordan
09-17-2018, 10:46 AM
NJ - just to be sure you understand how Octane works: if you plug in 2 x 1080Ti you won't have 22GB of VRAM. Each GPU gets exactly the same data/textures, so RAM is not combined. It renders in parallel, so you will still have 11GB of VRAM no matter how many 1080Tis you plug into the machine. BUT Octane compresses all data when sending it to the GPUs, so VRAM usage is like 5-6x less than what LW 2015-2018 consumes (I tested some scenes in the past and LW was using 34GB of RAM while Octane used only 4.5GB of VRAM for the same scene), so 11GB of VRAM will be plenty.
Also, there is out-of-core memory, so you can offload textures to system RAM and keep GPU VRAM usage down. In V4 there is also out-of-core for geometry, which basically means almost unlimited scene size/models you can render in V4.

I've been using Octane for LW since the v1.9/2.0 era, and I've slowly progressed to using it exclusively for all projects (product design, car shoots, arch viz...). In 2014 I was 50/50 LW native and Octane, and now in 2018 I'm 99% fully on Octane for all rendering projects.

Thanks for explaining that, Lewis. RAM was one of the things I was worried about having enough of, and I was under the impression that I could get more RAM with more GPUs. I will probably start with one 1080Ti then, to learn Octane over time, and get a second GPU later if I need to speed up rendering.

Nicolas Jordan
09-17-2018, 10:53 AM
I will probably be sticking to CPU based rendering for a while yet.

Fur and Fiber work consumes memory like PacMan (tm) eats dots. The last scene I did, which had 2 furred characters in it, took up 10GB of RAM, on top of the OS, for about 60% usage. I have managed to consume 20 of 24GB on one character (which turned out beautiful, though my FFX settings were not optimized).

Even if the memory across multiple GPUs were summed, it would take two 1080Tis to come close to my current available RAM - 20GB. At current prices, that's what a similarly specced computer costs (32GB RAM, quad-core i7 @ 3.2GHz). For my planned build next year, that would jump to 12 GPUs... which is more than twice the cost of the new build.

Since all my assets and everything I do are currently based around the native Lightwave render engine, I will probably still be using CPU rendering for a while yet as well. I will get a 1080Ti to learn and test GPU rendering in Octane on some projects, in hopes of maybe switching over to it completely someday if it ends up working well for what I do.

50one
09-17-2018, 11:36 AM
I fell into the same trap with Octane. Thought the VRAM would double... nope, just a speed increase. Truth be told, there are a few other gotchas in Octane that are hidden from the window shoppers.

pixym
09-17-2018, 11:38 AM
Nicolas, from what I understood when the Nvidia CEO introduced NVLink, graphics card memory will be dynamically allocated with this new technology. So the RAM of NVidia cards will be pooled in the near future ;)

TheLexx
09-17-2018, 12:24 PM
In the meantime, I think I read somewhere that if there are two unequal cards in the system, say one with 2GB and the other with 8GB, then Octane defaults to the lower RAM - perhaps someone can confirm. But all roads do seem to lead to Octane anyway.

Lewis
09-17-2018, 12:30 PM
Yes, it works much like RAID for HDDs: the smallest one dictates how much VRAM is available for rendering.

pixym - NVLink will work only on the RTX 2080 and 2080Ti, and you can link only 2 GPUs max, so for those of us with more than 2 GPUs it's not going to be much use anyway.

pixym
09-17-2018, 01:32 PM
Yes, it works much like RAID for HDDs: the smallest one dictates how much VRAM is available for rendering.

pixym - NVLink will work only on the RTX 2080 and 2080Ti, and you can link only 2 GPUs max, so for those of us with more than 2 GPUs it's not going to be much use anyway.

Hi Lewis, Yes I forgot to mention that !

Ztreem
09-17-2018, 05:12 PM
Since all my assets and everything I do are currently based around the native Lightwave render engine, I will probably still be using CPU rendering for a while yet as well. I will get a 1080Ti to learn and test GPU rendering in Octane on some projects, in hopes of maybe switching over to it completely someday if it ends up working well for what I do.

That's one thing I love about Blender: I can set up all materials natively and then render with GPU or CPU or both without redoing anything.

pixym
09-17-2018, 05:16 PM
Keep in mind that you will have to texture your objects for Octane with nodes.

Gungho3D
09-17-2018, 09:29 PM
... In V4 there is also out-of-core for geometry, which basically means almost unlimited scene size/models you can render in V4

That is great news - I wasn't aware of that. I've been hitting an upper polygon limit in Octane v3 which outright crashes LW/Octane (although LW native handles it OK).

kopperdrake
09-18-2018, 03:34 AM
Likewise - good to hear about the geometry handling. That was why I had to comp one arch viz shot with loads happening inside and outside :thumbsup:

Another caveat I've bumped into with Octane is when two sets of geometry overlap, say an object and its morph target, you can get some rendering weirdness, even if one is totally dissolved. There are a few small gotchas that creep up like that, but I've found a work around for most. There's also the animations where you need to reload the frame due to movement (I think it was instance movement that needed a scene reload for each frame rendered), and the scene reload can take a chunk of your rendering time.

But the reason I went with Octane was a simple project where depth of field and motion blur were crippling the native renderer (2015.3). I tested Octane out on trial with my GTX970 and was so impressed with how it dealt with motion blur and DoF out of the box that I bought it and a single 1080Ti, using the 1080Ti in tandem with the GTX970. Each time I did a job, I put aside the money I'd have used for rendering and built up a pot for another 1080Ti to replace the GTX970. The next step would be to build an external box for more 1080Ti cards, I guess, but I haven't needed to do that yet.

dsol
09-18-2018, 08:43 AM
The thing that limits Octane is the number of GPUs you can stick in a PC - and this is massively affected by the CPU and chipset you have. So I've just upgraded to a cheap 1st-gen 12-core Threadripper (the CPU was less than £300!) and will be transplanting my twin 1080Tis from my old i7 system to it. With the extra PCIe lanes that Threadripper brings, I can stick at least another 2 GPUs in there in future (once the 2080s come down in price). I only do GPU rendering now (with Octane) - I just love the look of the renders so much.

Buuuuuuut... I think it's worth pointing out the downsides of Octane as well. Though it renders brute-force GI incredibly fast, its Achilles' heel is that you are pretty much stuck with using a single PC for rendering (network rendering via ORC is supposed to be a nightmare). And their license agreement means that 3rd-party render farms aren't allowed. They are planning a rather clever blockchain-based distributed render system called RNDR, but god knows when that will actually be up and running. Octane 4 is close to full release, but there are still quite a few bugs in there.

But getting back to your original post: personally, I'd try to get hold of a cheap 16-core Threadripper and one 1080Ti. Then later, when prices drop, you can either upgrade to a 32-thread Threadripper 2 (if you want to go the CPU route) or get an extra 2 x 2080s - which will have the ability to share a RAM pool via NVLink - so 2 x 8GB cards will create a shared 16GB space to work alongside the original 1080Ti's 11GB.

kopperdrake
09-18-2018, 10:25 AM
The thing that limits Octane is the number of GPUs you can stick in a PC - and this is massively affected by the CPU and chipset you have.

I told myself that if I ever needed more than the two cards, I'd look at something like the Amfeltec Expansion Cluster. They don't seem too bad in price, especially when compared to the cards themselves, plus they'd keep you warm in the winter ;)

http://amfeltec.com/products/gpu-oriented-cluster/


https://www.youtube.com/watch?v=Vm9mFtSq2sg

erikals
09-18-2018, 11:00 AM
In regards to clusters, or 3 cards or more:

doesn't Octane 4 require you to buy more licenses when using more than 2 GPU cards?

Lewis
09-18-2018, 11:04 AM
In regards to clusters, or 3 cards or more:

doesn't Octane 4 require you to buy more licenses when using more than 2 GPU cards?

No, you got it wrong.

erikals
09-18-2018, 11:08 AM
[relief]   glad i got that wrong.    https://i.imgur.com/tJGL61i.png

Waves of light
09-18-2018, 11:14 AM
I'm an Octane convert too. I purchased 3 x 780Tis from eBay (around £100 each) and a second-hand motherboard with 4 PCIe slots (2 x16 and 2 x8), which came with an old i7 2600k and 8GB of RAM. I then built a home-made render box with 4 intake fans and 4 exhaust fans for cooling. I had to buy an expensive riser (don't buy cheap - it crashes Octane) and I haven't yet had the need to upgrade.

I'm sure I will once my scenes require more VRAM - and that's where the 780Tis may not be good enough for you. They only have 3GB, and even if you add, say, a 1080Ti (with 11GB), Octane will only use the VRAM limit of the smallest card. There is something called out-of-core memory, which uses system RAM as a backup for heavy scenes, but I've never needed it.

I went down the subscription route ($20 per month) and that will, once V4 comes out, allow me to use up to 19 GPUs and will give me two seats (so one for my PC and one for the render box). The free version will allow up to 2 GPUs, but may not include all third-party plugins (e.g. the Lightwave, Blender, Cinema 4D plugins) - but you'll get the standalone renderer to try out.

For product viz it is stupidly fast.. and with the new denoiser in V4, even quicker. And you get no GI flicker.

I did a cartoon test animation recently and it was coming in at 4-6 secs per frame (using just two 780tis). It was all Octane Toon shaded:
http://rsldesigns.co.uk/downloads/Test_Octane_Toon_Grimm_2.mp4

And that's the best part... you can start off with old tech and still get really good render times. Then, when the time comes, upgrade when necessary (or when a bigger job requires it).

TheLexx
09-18-2018, 02:18 PM
@Waves of light, very interesting, and at HD resolution too, with what appears to be DoF blur on the hand. I imagine Octane would make very short work of character animation at SD resolution, which I find quite exciting.

As I understand it, HD is about a quarter of 4K resolution but four times SD resolution, so I was wondering if rendering time scales in "fours" in direct proportion to those output resolutions.

Waves of light
09-19-2018, 02:43 AM
@Waves of light, very interesting, and at HD resolution too, with what appears to be DoF blur on the hand. I imagine Octane would make very short work of character animation at SD resolution, which I find quite exciting.

As I understand it, HD is about a quarter of 4K resolution but four times SD resolution, so I was wondering if rendering time scales in "fours" in direct proportion to those output resolutions.

@TheLexx -

OK, I optimised the scene and the results are as follows (no denoiser needed, as I'm not using GI for the toon shader):

i7 2600k @3.4, 8GB of RAM, 2x780ti

100 samples, MB done inside of Octane:

SD (1280x720) - 1 sec per frame (system RAM used 2014MB, 1698 by Octane - avg)
HD (1920x1080) - 3 secs per frame (system RAM used 2254MB, 1864 by Octane - avg)
4k (3840x2160) - 15 secs per frame (system RAM used 2944MB, 1998 by Octane - avg)
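TheLexx's "scales in fours" question can be checked against these numbers with a quick pixel-count calculation (resolutions and timings taken from the list above; the overhead remark in the comments is an interpretation, not a measurement):

```python
# Resolutions and per-frame timings from the post above.
resolutions = {
    "SD/720p":  (1280, 720),
    "HD/1080p": (1920, 1080),
    "4K":       (3840, 2160),
}
measured_secs = {"SD/720p": 1, "HD/1080p": 3, "4K": 15}

base = 1280 * 720
for name, (w, h) in resolutions.items():
    ratio = (w * h) / base
    print(f"{name}: {ratio:.2f}x pixels, measured {measured_secs[name]}s")
# HD has 2.25x the pixels of 720p and 4K has 9x, while the measured
# times (3s, 15s) grow somewhat faster than the pixel count - per-frame
# overhead and memory pressure likely account for the difference.
```

So render time tracks resolution only roughly: proportional to pixel count as a first approximation, with extra cost creeping in at 4K.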

I need to look more into the denoiser, as for some reason it's taking a long time to produce and save out the denoised render (like 15 secs per frame at HD). This may be down to my hardware, or because it's the beta version of Octane 4.

But at SD I was able to render out all these frames in just under 6 mins, stick them into Fusion, and produce this video:
http://rsldesigns.co.uk/downloads/Test_Octane_Toon_Grimm_2_SD.mp4

Nicolas Jordan
10-01-2018, 11:43 AM
After some more thinking, and trying to put GPU and CPU rendering into perspective, I think it might really come down to biased vs unbiased rendering. There is no doubt that a GPU can do unbiased rendering much faster than a CPU can. I prefer unbiased rendering myself, but I know that I can easily get away with doing my work with biased rendering in Lightwave on CPU and everything will still look good. Now that Octane 4 has the AI denoiser, I guess it will be similar to biased rendering for those who use it.

Lewis
10-01-2018, 12:33 PM
Not really - the denoiser is not limited to, or usable only with, unbiased rendering; even CPU biased renders can use denoising. The good thing about the Otoy denoiser is that you can use it with animations too.

pixym
10-01-2018, 12:38 PM
There is also a "static noise" feature in Octane that helps with animation.

lwanmtr
11-25-2018, 03:59 AM
Just my .02...

I've looked at Octane (haven't been able to get a full license) and it looks great... though to be honest, I'd like to see the LW guys add GPU/CUDA rendering options natively in Layout.
If I could leverage the GPUs alongside my 16-core Threadripper, I'd be happy.

rustythe1
11-25-2018, 05:42 AM
Well, my .02 would actually be a different compromise. I think one of the big things with Octane is not so much that it's fast - it's the noise reduction that lets you produce cleaner renders from noisy ones, since from what I understand the normal rendering part is now slower than when it first came around. So, given that CPU tech is growing quite fast now (like I said in other threads, my 7980 is almost 5 times as fast as my 5960), it's worth looking at applying similar/the same noise reduction, even using the GPU just for that area. I'm guessing you could even use it in a hybrid sense, rendering on the CPU while the GPU is cleaning up the previous frame.
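The hybrid idea in that last sentence - the CPU renders frame N while the GPU denoises frame N-1 - can be sketched as a simple two-stage pipeline. The render and denoise functions here are stand-ins, not real LW or Octane calls:

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_render(frame):
    # Stand-in for a CPU render of one frame (returns a "noisy" image).
    return f"noisy_{frame}"

def gpu_denoise(image):
    # Stand-in for a GPU denoise pass on one rendered frame.
    return image.replace("noisy", "clean")

def hybrid_pipeline(frames):
    # Overlap the two stages: while the CPU renders frame N, a worker
    # thread denoises frame N-1, so neither stage sits idle.
    results = []
    with ThreadPoolExecutor(max_workers=1) as denoiser:
        pending = None
        for frame in frames:
            noisy = cpu_render(frame)             # CPU busy on frame N
            if pending is not None:
                results.append(pending.result())  # collect frame N-1
            pending = denoiser.submit(gpu_denoise, noisy)
        if pending is not None:
            results.append(pending.result())      # last frame
    return results

print(hybrid_pipeline([1, 2, 3]))  # ['clean_1', 'clean_2', 'clean_3']
```

With real workloads the win depends on the two stage times being comparable; if denoising is much faster than rendering, the GPU simply waits.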

Norka
11-25-2018, 06:48 AM
I chose my weapon (GPU)... over six years ago. I bought my first Octane/LW Octane licenses in 2012. Immediately I saw Octane completely smoke the LW engine, with (at the time) my two paltry GTX460s trouncing my dual Opterons. Then I did "The Mod" (ghetto DIY water-cooling on GPUs with CPU water blocks) on my two GTX460s and things got even better/quieter... then my 3GB GTX580s after that... then my three 780Tis after that... and now I have four 980Ti Hybrids. I'll likely be upgrading again before long...

I could write for hours about why Octane for LW is the shizz. I really could, but I have a ton to do today. To sum it up: I love LW. I love Octane. I really, really love Octane for LW. I love water-cooled GPUs, and as many as I can stuff in The Beast™. Juanjo Gonzales (the Octane plugin dev) rocks and is The UberDev™! Add these things together and you have your reason why... I.HAVE.NEVER.LOOKED.BACK...

paulk
11-25-2018, 08:46 AM
Just to muddy the waters: when does it become practical to use a commercial render farm? Can you get significant work done on Project B while Project A is being masticated by thread counts high enough for designer sheets, or ground out by gargantuan GPUs?

Tim Parsons
11-25-2018, 11:40 AM
Well, I see Octane 4 was released a few days ago, but I didn't see any reference to free seats with up to 2 GPUs. So I guess it was too good to be true. It looks like GPU rendering is fairly expensive - $600 for Octane plus $XXX for GPUs. I've been mucking about with the demo and I think it's really pretty amazing, but for what we do it's overkill and too complicated for my guys. :)

lwanmtr
11-25-2018, 01:36 PM
That would be a decent compromise for sure. It could even just have the GPU handling volumetric calculations and that sort of thing.

JCG
11-25-2018, 01:42 PM
Well, I see Octane 4 was released a few days ago, but I didn't see any reference to free seats with up to 2 GPUs. So I guess it was too good to be true. It looks like GPU rendering is fairly expensive - $600 for Octane plus $XXX for GPUs. I've been mucking about with the demo and I think it's really pretty amazing, but for what we do it's overkill and too complicated for my guys. :)

They said that their main objective is to offer the free tier to those who are using products that are free to use. So they started with Unity, which is already available, and they should have Blender out in Q1. A free tier for products that are paid is not impossible in the future, but would be planned later.