
Upgraded to 32GB RAM, no performance improvement in LW



adrian
03-11-2016, 10:58 AM
I upgraded to 32GB RAM today (from 16GB) hoping to see a significant improvement in LW (11.6.3) with high-poly scenes and models. Absolutely no difference at all; how can this be? I bought the RAM via Crucial.com so it's definitely compatible with my system.

Could it be my graphics card? I've attached the specs. My computer is running Windows 7, 64-bit, on an Intel Core i7 2700K CPU @ 3.5GHz. The RAM is 4 x 8GB sticks of DDR3L-1600 UDIMM 1.35V CL8, if that means anything.

The RAM is installed fine, as the computer recognises it has 32GB in it.

Very disappointing, and at 150 a very expensive mistake. :devil:

132866

RebelHill
03-11-2016, 11:01 AM
Is the model file size (or the sum of multiple files that might be open together) bigger than 16GB? If not, then why would more RAM make a difference?

Markc
03-11-2016, 11:47 AM
RAM is never a waste of money when doing graphical work (and other intensive stuff).
As RebelHill says, once you have a large model/scene and/or other apps running, you will be thankful for the 32GB.
The less your computer needs to access the HDD the better.

Ryan Roye
03-11-2016, 12:20 PM
More RAM will not make your computer faster directly; it allows your computer to store more "operating data". Most programs that cache any kind of data will benefit from more RAM in terms of capacity.

This means you'll be able to cache more MDD frames without crashing, load more/denser models into your scenes (but your processor still has to deal with them!), process more and larger textures in your scenes, play back more video without interruption, save larger previews, render larger images, etc etc etc.

If you want speed, the processor and/or GPU are pretty much the only things you need to be worried about. RAM does come in different speeds, but the difference in performance is not worth the cost; I'm talking about a 1% performance boost in most cases between low- and high-speed RAM on a compatible motherboard.

In a nutshell, 16GB is entry level for computer graphics. 32GB is generally recommended once you start delving into RAM-hungry programs like Fusion and RealFlow, where memory ceilings are always a problem.

MonroePoteet
03-11-2016, 03:48 PM
You can see how much memory applications are using with the Windows Task Manager. Start Layout and load up a "big" scene with a lot of high-density models in it. Right-click on the Task Bar and click Start Task Manager. Select View => Select Columns... and check the box next to Memory (Private Working Set), plus any other Memory columns you'd like displayed. Then click the column header on Memory (Private Working Set) to sort by that column.
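
If you'd rather script that check than click through Task Manager, here's a small sketch in Python using the third-party psutil package (my suggestion, nothing LightWave-specific; note psutil reports the working set, which is close to but not identical to the Private Working Set column):

# List the top memory-consuming processes, roughly equivalent to
# sorting Task Manager's memory column. Requires: pip install psutil
import psutil

procs = []
for p in psutil.process_iter(['name', 'memory_info']):
    mem = p.info['memory_info']
    if mem is not None:                       # None if access was denied
        procs.append((mem.rss, p.info['name'] or '?'))

# Top 10 by resident memory, in MB
for rss, name in sorted(procs, reverse=True)[:10]:
    print(f"{rss / 2**20:8.1f} MB  {name}")

Run it with a big scene loaded in Layout and you'll see where the memory is really going.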

My experience is that LightWave is pretty memory efficient. With the smallish projects I work on, I have to load a lot of high-density models to get into the gigabytes of memory usage. Having grown with it out of the Amiga through the years, I think LW engineering has kept the in-memory management of objects, etc., highly efficient.

Also, make sure you don't have some other memory hog running while trying to run LW. Windows will share the physical memory among all applications, and some other application (or several of them) may be using up your memory. Outlook, Firefox and my backup engine (CrashPlan) are the big memory users on my laptop system.

RE: the video card: you need to enable Layout and Modeler to use the high-end video card. Usually there are "built-in" graphics which the system uses by default, and the video card is a separate driver. For NVidia, you can bring up the NVidia Control Panel, select the Manage 3D Settings => Program Settings tab, and use the Add button to add Layout and Modeler and set them to use the NVidia card. Note that for whatever bizarre reason the NVidia Control Panel calls Layout "Google Sketchup" and Modeler "CST Microwave Studio", but they are the correct EXE files and use the GPU when invoked. Also, under the NVidia Control Panel's Desktop menu there's an option to Display GPU Activity Icon in the system tray; you can then see when the graphics card is being used or not.

mTp

OFF
03-11-2016, 06:41 PM
The main advantage you get from more memory comes when rendering very large scenes.

erikals
03-11-2016, 07:27 PM
it's kinda common knowledge that Modeler, for example, ain't a beast when it comes to speed,

even with more RAM
even with a faster CPU
even with a better graphics card

Modeler simply has an old architecture.
a rewrite is needed.
until then...



https://www.youtube.com/watch?v=1aEksR8jLLk

magiclight
03-12-2016, 01:32 AM
hoping to see a significant improvement in LW (11.6.3) with high-poly scenes and models

Where did you expect the performance boost?

As has been said above, more RAM in itself does not do anything for performance if your scene/model does not use it.

With that said, more RAM is always a good investment.

OpenGL performance has very little to do with the amount of CPU RAM available, so if you expected Modeler to fly circles around you, it will not.

Modern CPU/RAM infrastructure is built around executing a small subset of data very fast: the CPU caches data in its very fast L1/L2/L3 caches, while ordinary RAM is 10-20 times slower than the CPU. Accessing large amounts of data, as any 3D software does, therefore doesn't get a proportional performance boost from extra memory; it depends a lot on the size of the CPU's cache.
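
You can actually see this effect without any 3D software at all. A rough illustration in Python with numpy (the array size and access pattern are arbitrary assumptions, just chosen to be much larger than a typical L3 cache):

# Same amount of arithmetic, very different memory access patterns:
# sequential reads are prefetch/cache friendly, random reads are not.
import time
import numpy as np

data = np.arange(20_000_000, dtype=np.float64)   # ~160 MB, >> L3 cache

t0 = time.perf_counter()
data.sum()                                       # sequential pass
t_seq = time.perf_counter() - t0

idx = np.random.permutation(len(data))           # shuffled indices
t0 = time.perf_counter()
data[idx].sum()                                  # same sum, random order
t_rand = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f}s   random: {t_rand:.3f}s")

The random pass is typically several times slower even though the data fits in RAM either way; that's the cache, not the RAM capacity, talking.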

So, check how much memory Modeler or Layout actually use. If it's below 16GB then you will not see any difference; if you have gazillion-sized geometry or huge textures, then it may help. But as said, more RAM is never wrong; you will be happy you got it later on.

When it comes to rendering, the LW renderer is pretty fast even with very complex geometry, but the first time you render after loading a scene there is a lot of preprocessing before it actually starts to render; render it again and you will see a huge difference. With a 3M-polygon scene carrying 3GB of textures it takes 4 minutes the first time I render, but once that is done it takes around 20 seconds. That's just an example; all scenes are unique, and HV, motion blur and GI can add a lot to the time.

But the usual advice is good here: don't try to fix something because you think you know what is wrong; find out what is wrong before you try to fix it.

When it comes to OpenGL there are many things that can slow it down: an old video card, the video card's settings, and how much RAM the video card has. You can lower the texture resolution in OpenGL (I use 256x256 sometimes; it looks like rubbish but the pixels fly on the screen), and if you have big textures this can give a good boost. You can also turn off texture display in OpenGL, avoid any antialiasing settings, and check the OpenGL settings in LW.

Hope you get it working a little faster.

adrian
03-12-2016, 02:16 AM
Thanks for all the replies, I think I expected too much. Basically I expected my texture- and poly-heavy models to "perform" better in Modeler, i.e. I expected to be able to navigate around them much more easily rather than the jerky display I get now, or even worse: press the rotate button, wait a few seconds until the screen updates, then press it again. Rinse and repeat.

I'll certainly set LW to use my nVidia card; I did wonder why Layout and Modeler weren't in the list.

As for scenes, my biggest one yet uses 17GB of RAM to render.

JonW
03-12-2016, 04:10 AM
If everything running fits within the available RAM, adding more will make no difference. On my old machine it was actually a tiny bit slower with 24GB than with 12GB. Building efficient scenes helps: use 8-bit PNG files for bump maps etc., and don't use images thousands of pixels wide if they are simply not needed. Removing things from the scene that are not needed helps a bit too.

Modeler is only using one core!

Spinland
03-12-2016, 04:34 AM
Modeler is only using one core!

This, I believe, is the crux of the biscuit. From my computer architecture class days, as I understand it, more RAM will (as has been said already) only reduce or eliminate swapping pages of memory to disk. That alone is a huge speed-up if you were over-reaching your RAM before; otherwise, not so much.
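
If you want to confirm whether you were actually hitting the pagefile before the upgrade, the counters are easy to read; a sketch using the third-party psutil package (my suggestion, not tied to LW in any way):

# Check whether the system is actually paging. Requires psutil.
import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM used: {vm.percent}%   swap used: {sw.percent}%")

# Persistent non-zero swap use while LW runs means the old 16GB really
# was being exceeded; near-zero means extra RAM won't buy any speed.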

erikals
03-12-2016, 04:42 AM
Modeler is only using one core!

yep, like i said, a rewrite is needed.

you can have a $2000 computer and it won't make much of a difference in Modeler

Spinland
03-12-2016, 04:45 AM
yep, like i said, a rewrite is needed.

you can have a $2000 computer and it won't make much of a difference in Modeler

+1000!

JonW
03-12-2016, 05:04 AM
swapping pages of memory to disk.

I have always had this set to only a few MB: technically on, but in effect off, so there was basically no memory to swap. But then my PCs are only used for LW; everything else is done on a Mac.

JonW
03-12-2016, 05:20 AM
Also, what might help a touch: even though my 6 PCs are networked to my Mac, none of the PCs has ever been connected to the internet, so I have never had to have any antivirus software slowing things down. They are all still running XP; that's why I can't go further than LW11.

erikals
03-12-2016, 06:23 AM
antivirus software is a b****

http://pix.iemoji.com/sams33/0433.png

adrian
03-13-2016, 03:33 AM
Indeed, I only have Microsoft Security Essentials installed on my machine, as the only time it's connected to the Internet is to download Adobe updates. I've added Layout and Modeler to my list of programs in the Nvidia card settings, so we'll see if that makes a difference.

At least my extra 16GB RAM will come in handy when I go crazy with expansive landscape scenes with hundreds of Xfrog models in them (in terms of being able to load and render them, anyway).

Sensei
03-13-2016, 07:08 AM
To exceed 16 GB of memory, using the current architecture (doubly linked list), you would have to have 600 million vertices.
It's a pretty safe assumption that 16 GB is way more than enough to keep 300 million vertices and 300 million polygons.
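
As a back-of-the-envelope check on that figure (the byte counts below are my assumptions about a plausible doubly-linked-list layout, not LW's documented internals):

# Rough per-vertex cost for a doubly linked list of vertices.
# ASSUMED layout: 3 x 4-byte floats for position + 2 x 8-byte pointers.
position_bytes = 3 * 4                      # x, y, z as 32-bit floats
link_bytes = 2 * 8                          # prev/next pointers, 64-bit build
per_vertex = position_bytes + link_bytes    # 28 bytes

vertices = 600_000_000
print(f"{vertices * per_vertex / 2**30:.1f} GiB")   # ~15.7 GiB

So at roughly 28 bytes per vertex, 600 million vertices does land right around the 16 GB mark, before counting polygons, vertex maps, or overhead.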

magiclight
03-14-2016, 10:38 AM
Well, with a few morphs and vertex maps it runs up pretty fast.

Danner
03-14-2016, 11:30 AM
Of the hundreds of scenes I have rendered, I have had only 3 that didn't render on a 16GB machine.

Markc
03-14-2016, 12:36 PM
But once you open other applications like Fusion/Photoshop etc. and Interweb/email etc., that 16GB will soon disappear (plus whatever your OS is using).
I currently have 16GB (toying with upgrading to 32, coz it just sounds nice :D) and have had no issues, but I don't do anything huge.

kopperdrake
03-14-2016, 03:12 PM
I regularly go over 16GB; the PC is currently sitting at 28.9GB. But then, when you're doing landscapes at 3m x 2.5m and aren't afraid to throw instances and high-detail textures around, it soon adds up. A LightWave scene is like a goldfish: it fills the capacity of the tank it's in :D

Kevbarnes
03-14-2016, 06:18 PM
A LightWave scene is like a goldfish: it fills the capacity of the tank it's in :D

Like this one
132901
and he keeps on growing

Photogram
03-15-2016, 10:55 AM
Recently I got a new, more powerful system, cloned my old configuration over to it, and had very bad performance overall on the new system...

I had problems reading files and browsing directories, and software like LightWave and Modeler was very often not responding. I was thinking I had got a bad motherboard or memory...

After reviewing my BIOS configuration I realized the hard drive was configured in AHCI mode rather than IDE mode.

Since switching to IDE mode there are no wait states and everything runs pretty fast!

thiyaguthree
03-20-2016, 11:34 PM
If you want speed, the processor and/or GPU are pretty much the only things you need to be worried about. RAM does come in different speeds.

I want to ask something similar to your discussion, and I hope you will help me with this.
I have been using LightWave since 9.5. Right now my team workstation is a Dell Precision Workstation T7500:
Intel(R) Xeon(R) CPU E5520 @ 2.27GHz (2 processors), 12GB RAM, 64-bit OS, NVIDIA Quadro 2000.
With this spec we have been running LightWave 2015.3 for creating and rendering big process plants. A process plant layout runs to millions of polygons, and the final rendering can last three consecutive days with 3 local nodes.
Now we plan to cut the rendering time as much as possible, in order to leave room for post production and also deliver the project faster.
In this situation I have proposed 3 GEFORCE GTX TITAN Z cards (http://www.nvidia.in/gtx-700-graphics-cards/gtx-titan-z/), and in addition the 12GB RAM will be raised to 24GB (the maximum setup of the Dell Precision T7500).

My questions are:

1. I checked the results, which say a TITAN Z would be 50x faster than a Quadro 2000. So my rendering time would be dramatically reduced, right?
Here's the link to the performance details - https://render.otoy.com/octanebench/results.php?sort_by=avg&filter=&singleGPU=1
2. Is there any other processor you would suggest that gives the same result for less than the TITAN Z's cost?
3. Another reason for going for a big processor is fluids and particles, which consume a lot of calculation time. Is there any decent fluid plugin to support LightWave apart from RealFlow? I have been searching for years and can't find one for LightWave; I'm stuck doing animations with liquid dynamics...
4. LightWave suggests the Octane renderer for advanced rendering, and I have proposed Octane rendering for both 3ds Max and LightWave, as Octane needs a GPU.
So Octane renderer + TITAN Z would give the best results, right?
Also, if anyone has experience with plant/vegetation plugins that give the most realistic look, please make a suggestion.

Please give your ideas so that I can go further and configure the latest settings to improve my workflow.

Thanks
Thiyagu

Below are some of the renderings..

132999
133000

Danner
03-21-2016, 03:02 AM
1. Your render time will not be reduced, because the Quadro isn't used for rendering unless you are using Octane.

2. You could put 4 GTX 980s in one case and it would be faster than one Titan Z; you'd have to compare benchmarks to see which card or combination of cards gives you the best deal. The fastest would probably be 4 Titans, if your motherboard supports that many. (Again, this is only for Octane.)

spherical
03-21-2016, 03:33 AM
so Octane renderer + TITAN Z processor would give the best results right?

Just to clarify: a Titan Z (or any other GPU) will require Octane to give any render speed improvement. It appears that your understanding of the situation does not take that into account. In other words, plopping a Titan into the machine will not improve render times all on its own, which is what I am taking from your question. Adding Octane is not an option in that case; it is a requirement. Octane needs a GPU to do its work. LightWave is a CPU renderer, and your Xeon processors are great for that. GPU rendering is a different approach.

That said, your Quadro can then be dedicated to running the displays, freeing the GTX cards to do the rendering unimpeded by driving the system interface. If you have a power supply with enough capacity to handle multiple graphics cards, you can get it all done in one box without having to buy multiple Octane licenses for separate boxes.

thiyaguthree
03-23-2016, 09:17 PM
Hi! I am moving to Octane for some high-quality rendering. For that I need the best GPU. Yes, I am clear that LightWave is a CPU renderer on the existing system processor. But I don't know how quick Octane rendering will be if I install multiple TITAN Z or GTX 980 cards. If anyone is using the best setup for Octane rendering alone, please let me know, and please share your test render results so I can choose my PC setup.

Or: how can I dramatically increase the render speed of my individual system, with or without an external renderer?

Thanks
Thiyagu

spherical
03-24-2016, 12:59 AM
Have you asked these questions on the Otoy forums? They'll know better than anyone. It's their software.

thiyaguthree
03-29-2016, 04:56 AM
No, I didn't enquire about this; the issue is with the type of systems and processors. Once I test with the cards I will have a clearer picture. This weekend I will be receiving a K-series card, as Dell doesn't provide consumer support for anything other than the Quadro and AMD series.

adrian
03-30-2016, 08:39 AM
I just realised today that it's not the number of polygons that's causing LW to run so sluggishly; it's when they are sub-patched. If I un-subpatch them, even with all the textures, I can navigate around the high-poly models with ease.

Since upgrading to 32GB my system loads up very fast.

jeric_synergy
03-30-2016, 10:10 AM
What's your display subpatch level? You could probably reduce the worst offenders to zero while you work, and then up them if you need to. And of course the RENDER subpatch level is a completely different setting. Also, I believe the default display subpatch is a preference, so you may want to default to zero for EVERYTHING, and only increase those objects that need it.

Bit laborious, though. Does anybody know of any utilities that allow an animator to toggle subpatching globally, and then restore the various user-set levels for all the objects?
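
For what it's worth, the bookkeeping such a utility would need is trivial; here's a rough sketch of the save/restore logic in plain Python. The SceneObject class is a hypothetical stand-in invented for illustration; a real tool would call LightWave's actual scripting API to get and set the per-object display level.

# Toggle subpatch display levels globally and restore them later.
# SceneObject is a HYPOTHETICAL stand-in, not the LW scripting API.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    display_subpatch: int        # per-object display subpatch level

saved = {}

def subpatch_off(objects):
    """Remember each object's level, then drop everything to 0."""
    for obj in objects:
        saved[obj.name] = obj.display_subpatch
        obj.display_subpatch = 0     # real tool: call the LW setter here

def subpatch_restore(objects):
    """Put back whatever each object had before the toggle."""
    for obj in objects:
        obj.display_subpatch = saved.get(obj.name, obj.display_subpatch)

scene = [SceneObject("tree", 3), SceneObject("rock", 1)]
subpatch_off(scene)        # animate with everything at level 0
subpatch_restore(scene)    # back to the per-object levels (3 and 1)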

adrian
03-31-2016, 02:45 AM
My subpatch levels are set to the defaults, so 3 (both render and display). Good tip about reducing the display subpatch level to 0; I will definitely do that. In terms of objects in scenes, does it help to freeze them rather than just sending them to Layout? I've never really understood why you would freeze an object.

spherical
03-31-2016, 03:17 AM
I've never understood why you wouldn't. With every render initialization the renderer has to re-freeze the sub-patched object. Why incur that repeated hit?

Danner
03-31-2016, 06:54 AM
I often freeze and then do a poly reduction on the resulting mesh. This speeds everything up. I always keep a copy of my objects in subpatch form in case I need to modify them.

Sensei
03-31-2016, 10:42 AM
I just realised today that it's not the number of polygons that's causing LW to run so sluggishly; it's when they are sub-patched. If I un-subpatch them, even with all the textures, I can navigate around the high-poly models with ease.

Since upgrading to 32GB my system loads up very fast.

Traditional sub-patch level 3 produces 9 times more polygons than the non-subpatched version of the mesh.
In the case of Catmull-Clark sub-patches, it's (2^3)^2 = 8^2 = 64 times more polygons generated by the sub-patch.

Obviously it must run quicker without sub-patches, as there is many times less data to process and send to the gfx card.
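
Putting numbers on that for a concrete mesh (the 1M-poly base count below is just an example, not adrian's actual model):

# Polygon multipliers from the figures above.
# Traditional sub-patch level n: an n x n grid per source polygon.
# Catmull-Clark level n: each level quadruples the count, i.e. 4**n.
base_polys = 1_000_000
level = 3

traditional = base_polys * level**2        # 9x  ->  9,000,000 polys
catmull_clark = base_polys * 4**level      # 64x -> 64,000,000 polys
print(traditional, catmull_clark)

That 64x figure is why a scene that feels light in wireframe can crawl the moment sub-patch display kicks in.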

Kevbarnes
03-31-2016, 01:02 PM
I've never understood why you wouldn't. With every render initialization the renderer has to re-freeze the sub-patched object. Why incur that repeated hit?

I had understood that you get the same benefit by setting the Display sub-patch equal to the Render sub-patch (at render time, of course),
so typically I animate at display 0 or 1 and then adjust it to 3 for the final render stage.

I would be interested to know if this is not the case.

spherical
03-31-2016, 04:01 PM
Traditional sub-patch level 3 produces 9 times more polygons than the non-subpatched version of the mesh.
In the case of Catmull-Clark sub-patches, it's (2^3)^2 = 8^2 = 64 times more polygons generated by the sub-patch.

Obviously it must run quicker without sub-patches, as there is many times less data to process and send to the gfx card.

Exactly. Sub-patch modeling definitely has its benefits, but the oft-touted "low poly count" isn't one of them. Yes, fewer polys are needed in Modeler, but when it comes time to render the model they get multiplied anyway, sometimes to absurd levels. As with everything, there are uses for this; just not in all cases.

Sensei
03-31-2016, 04:09 PM
There are two ways to deal with it: have the mesh frozen in the app, or have it frozen on the gfx card by the GPU.
Each one has its own advantages.
If the mesh is frozen in the app, it is possible to manipulate the vertices after the fact (e.g. nodal displacement of each vertex),
but sending them from main memory to gfx memory every time a change/freeze happens is slow.
If the mesh is stored as sub-patches and sent that way to the gfx card, with the GPU doing the freezing,
there is less to send and the CPU is freed from that task; it just copies the data.
But then the vertices can't be modified later (forget about nodal displacement of each of them independently).
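
A rough sense of the transfer sizes involved (the cage size, bytes per vertex, and the level 3 Catmull-Clark multiplier are all illustrative assumptions, not measured LW behaviour):

# Compare uploading a frozen mesh vs. the sub-patch cage per edit.
cage_verts = 100_000
bytes_per_vert = 12                      # x, y, z as 32-bit floats
frozen_verts = cage_verts * 4**3         # CC level 3, ~64x the vertices

cage_upload = cage_verts * bytes_per_vert        # ~1.2 MB per change
frozen_upload = frozen_verts * bytes_per_vert    # ~77 MB per change
print(cage_upload, frozen_upload)

Sending the cage and letting the GPU tessellate moves roughly 64x less data per edit, which is exactly the trade-off described above.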

jeric_synergy
03-31-2016, 06:14 PM
I can understand why people would forgo it: version tracking would be a severe headache, I'd think, if you usually work on the mesh with Modeler open.

Is there a utility that lets you toggle between NO SUBPATCHING at all, and subpatching according to each mesh's user-defined s.p. level?