That's just so great, ...



Thomas M.
09-18-2006, 09:51 AM
... it's LW9 and I'm still running into frame buffer problems. Years of struggling to find workarounds, cutting your objects to pieces, stitching images, etc. Endless hours on the web looking for solutions fellow sufferers have found, or for some way to get that image rendered.

NT rewrote the core, but this silly problem hasn't been addressed. I still see the ugly shadows on subpatched objects (non-planar polys?) that I reported to them two years ago. But not being able to render what you've created is rather frustrating. "No borders" doesn't help, even though I reduced my render area to the minimum, never mind that the render panel constantly forgets that I chose "No borders". I lowered the render buffer to 2MB. Sometimes it works after I restart LW, but the next time I just get an error message. What's wrong with this program that it can't free its memory on its own? You need to quit to render again.

And I still need to load my surfaces in separate node parts and stitch them together, as LW isn't able to save my complex surfaces without losing the node connections.

Two and a half months and no bug fix yet. Judging from the problems users constantly run into, this is a total mystery to me.

Cheers
Thomas

Sensei
09-18-2006, 10:11 AM
I've never heard of the problems you've encountered. Can you share scene files to reproduce them? What are your settings?

Thomas M.
09-18-2006, 10:29 AM
Well, it's a very common and long-standing problem that LW refuses to render because it can't allocate whatever memory it needs. Sometimes quitting and trying to render again helps.

My render output is about 10000x5000 pixels, but the polycount matters, too. Lowering the polycount is sometimes the only remaining solution. Slicing the scene is possible insofar as you can render a limited region to stay memory friendly. I also cut my objects into several chunks that fit into the limited region to help LW, but even that doesn't necessarily help.
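Stitching the pieces back together afterwards isn't the problem; that part can be scripted outside of LW with a few lines, something along these lines (Pillow and the file-name convention here are just an illustration, not anything LW provides):

# Minimal sketch: paste separately rendered tiles back into one image,
# assuming each tile was saved as tile_<xoffset>_<yoffset>.png (a made-up
# naming convention) and that the final size is 10000x5000.
import glob
from PIL import Image

FULL_WIDTH, FULL_HEIGHT = 10000, 5000

canvas = Image.new("RGB", (FULL_WIDTH, FULL_HEIGHT))
for path in glob.glob("tile_*_*.png"):
    _, x, y = path.rsplit(".", 1)[0].split("_")   # parse the offsets from the name
    canvas.paste(Image.open(path), (int(x), int(y)))
canvas.save("stitched_10000x5000.png")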

LW's memory management has always been a problem, from 5.6 on. The point is that NT advertises its product as being able to render up to 16,000 by 16,000 pixels, which is utter nonsense for any normal scene with a few objects in it. Last century, Strata 3D 2.5 was able to render pretty much anything at any size with an fPrime-like render engine, more than five years before the real fPrime, on a G3/266MHz with 256MB. But LW, which has won so many prizes, just fails when it comes to high resolutions on a 3GHz machine with 2GB of RAM and 4GB of virtual memory on the hard disk.

That could have been addressed ages ago. The argument that 32-bit systems can only address a certain amount of memory and that the program therefore can't render beyond a certain limit is, in my opinion, a bit lame, as programmers have always found workarounds. Remember the C64: 64K of memory, and what amazing games were possible!

Unfortunately I can't share any files. The polycount varies between 500,000 and 4,000,000.

Cheers
Thomas

Phil
09-18-2006, 10:40 AM
Hmmm. With a completely blank scene, setting the camera size up, I immediately get 'Not enough memory for frame buffers'. It's completely reproducible for anything 8000x8000 or higher - instant (and I mean instant!) fail.

Anything lower works just fine, it seems.

Phil
09-18-2006, 10:41 AM
Unfortunately I can't share any files. The polycount varies between 500,000 and 4,000,000.

Cheers
Thomas

Send them to NewTek - they will honour NDAs so you shouldn't concern yourself about lost assets.

Thomas M.
09-18-2006, 10:43 AM
What's that again, NDA? "Nicht denkende Angestellte" (non-thinking employees)?

Phil
09-18-2006, 10:53 AM
Nondisclosure agreement.

If necessary, I can also write/explain everything in German.

Thomas M.
09-18-2006, 11:03 AM
Well, look at that! Thanks for the offer, but let's keep up the good spirit and continue in good ole English.

Sensei
09-18-2006, 11:35 AM
My render output is about 10000x5000 pixels, but the polycount matters, too. Lowering the polycount is sometimes the only remaining solution. Slicing the scene is possible insofar as you can render a limited region to stay memory friendly. I also cut my objects into several chunks that fit into the limited region to help LW, but even that doesn't necessarily help.

But 10k x 5k is approximately 50 MB if one pixel takes one byte. If each channel is 8 bytes (double-precision floating point), it's 50 MB * 8 * 4 channels, which is 1600 MB of memory, and that's not even counting additional channels like diffuse, specular, etc.

I can write a virtual renderer for you that will handle such resolutions for $300, but it'll have some limitations. Please contact me privately if you want to discuss it.



The point is that NT advertises its product as being able to render up to 16,000 by 16,000 pixels, which is utter nonsense for any normal scene with a few objects in it.

What NT advertises is true, but you have to have 64-bit and enough memory - contiguous memory.



Remember the C64: 64K of memory, and what amazing games were possible!

Games, and especially demoscene demos, are pure cheating, which is not an option for a real-life application if it has to have decent render quality.

Limbus
09-18-2006, 11:41 AM
Hi Thomas,
what kind of computer do you use, and how much RAM do you have? More RAM should enable you to render bigger images. Did you use F9 or F10 to render the image? When using F9, displaying the rendered image can take forever and use lots of RAM, so use F10 if possible. Maybe you can split your scene into multiple layers/objects, render them separately, and compose them back together.

I just rendered a 10240x10240 pixel image with LW. It did not contain any geometry, though, only an image filter (StarPro); it used a lot of RAM, but it did finish rendering.

But in the end I have to agree that NewTek should do something about the RAM consumption of the renderer. A lot of other renderers use much less RAM because they use bucket rendering, where only a small part of the scene and the image maps needs to be loaded into RAM at a time.
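Roughly, the bucket idea looks like this; render_bucket here is only a hypothetical placeholder for whatever actually shades a tile, and the sizes are just the ones from this thread. The point is that only one bucket-sized buffer is ever held in RAM, while the finished pixels go straight to disk:

# Sketch of bucket rendering: walk the image in small tiles ("buckets"),
# keep only one bucket-sized buffer in RAM, and write each finished bucket
# into a raw RGB file at the right offsets.
WIDTH, HEIGHT = 10000, 5000
BUCKET = 256  # bucket edge length in pixels

def render_bucket(x0, y0, w, h):
    # hypothetical placeholder: return w*h RGB pixels for this region; a real
    # renderer would only load the geometry/textures this region touches
    return bytearray(w * h * 3)

with open("render_raw_rgb.dat", "wb") as out:
    out.truncate(WIDTH * HEIGHT * 3)            # pre-size the full frame on disk
    for y0 in range(0, HEIGHT, BUCKET):
        for x0 in range(0, WIDTH, BUCKET):
            w = min(BUCKET, WIDTH - x0)
            h = min(BUCKET, HEIGHT - y0)
            pixels = render_bucket(x0, y0, w, h)
            for row in range(h):
                out.seek(((y0 + row) * WIDTH + x0) * 3)
                out.write(pixels[row * w * 3:(row + 1) * w * 3])
# Peak image memory is one 256x256 bucket (about 192 KB), not the 10k x 5k frame.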

Florian

Thomas M.
09-18-2006, 12:06 PM
Hi Sensei,

thanks for the reply. Well, as I wrote, I don't intend to render it in one go, but in several chunks that I then stitch together. So it's already "limited region" rendering, which should change things quite dramatically, but it doesn't. I don't know about your calculation, but in PS it's 92MB in 8-bit and 184MB in 16-bit, so floating point should be around 368MB.

Anyway, the point is that as a user I don't care how it works. I pay programmers to figure this out for me. And if a company advertises its product as suitable for this purpose, then it had better make sure the product actually does what's written on the package. Nowhere, not even in the tiniest print, does it say that rendering at this size is only theoretically possible, but not practically.

BTW, 16,000 pixels were announced long before 64-bit versions were available.

What happened in the past is one thing, but I'm rather frustrated that they didn't bother to change anything within the program and are simply waiting until 32-bit computers die out and 64-bit PCs solve everything for them.

Life could be so much nicer ...

Sensei
09-18-2006, 12:52 PM
thanks for the reply. Well, as I wrote, I don't intend to render it in one go, but in several chunks that I then stitch together.

My virtual renderer would render everything in one go, 10k x 5k, because it would render directly to virtual memory, without using physical memory. I used this technique in the past to quickly scan 30 MB text files; searching for a single word in such a file took 20 microseconds on an AthlonXP 2000 with my algorithm, without even loading the file into memory, and it could still be improved.
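The idea, roughly sketched in Python (the sizes, the output file name and the shade() stand-in are only illustrations, and a 32-bit process still needs free address space for the mapping itself):

# Back the full-resolution frame buffer with a memory-mapped file, so the OS
# pages it in and out and the whole 10k x 5k float buffer never has to sit
# in physical RAM at once.
import mmap
import struct

WIDTH, HEIGHT, CHANNELS, BYTES = 10000, 5000, 4, 4   # RGBA, 32-bit float
FRAME_BYTES = WIDTH * HEIGHT * CHANNELS * BYTES      # about 763 MB on disk

def shade(x, y):
    # hypothetical placeholder for the real per-pixel shading result
    return (0.0, 0.0, 0.0, 1.0)

with open("framebuffer.raw", "wb") as f:
    f.truncate(FRAME_BYTES)                          # create the file at full size
with open("framebuffer.raw", "r+b") as f:
    buf = mmap.mmap(f.fileno(), FRAME_BYTES)         # map it into the address space
    for y in range(HEIGHT):
        row = b"".join(struct.pack("4f", *shade(x, y)) for x in range(WIDTH))
        off = y * WIDTH * CHANNELS * BYTES
        buf[off:off + len(row)] = row                # the OS decides what stays resident
    buf.flush()
    buf.close()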


So it's already "limited region" rendering, which should change things quite dramatically, but it doesn't.

Limiting the region might not change the size of the render buffers that are allocated. I will check this with a temporary image/pixel filter.


I don't know about your calculation, but in PS it's 92MB in 8-bit and 184MB in 16-bit, so floating point should be around 368MB.

It sounds like each PS version counts differently. I get 143.1 MB for 10k x 5k RGB.

10000 * 5000 * 4 or 8 bytes per channel * 4 channels (at least 4; at maximum there are 24 channels sent to the image filter). For 4 bytes that's 763 MB, and for 8 bytes it's 1526 MB.
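For anyone who wants to redo the arithmetic (4 channels as above; the 24-channel maximum would scale it up accordingly):

# Frame buffer size for a 10000 x 5000 render with 4 channels per pixel
WIDTH, HEIGHT, CHANNELS = 10000, 5000, 4
for bytes_per_channel in (4, 8):                 # 32-bit vs 64-bit floats
    size = WIDTH * HEIGHT * CHANNELS * bytes_per_channel
    print(bytes_per_channel, "bytes/channel:", round(size / 2.0**20), "MB")
# -> 4 bytes/channel: 763 MB, 8 bytes/channel: 1526 MB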



What happened in the past is one thing, but I'm rather frustrated that they didn't bother to change anything within the program and are simply waiting until 32-bit computers die out and 64-bit PCs solve everything for them.


LightWave always was (and still is) a program that stores everything in memory, rather than loading and saving resources only when needed. That's simpler and faster for simple things, but as complexity increases it limits the 1 percent of artists who need that complexity. The other 99 percent don't care too much. That's why they didn't do much about it for such a long time.

Bog
09-18-2006, 01:00 PM
LightWave always was (and still is) a program that stores everything in memory, rather than loading and saving resources only when needed. That's simpler and faster for simple things, but as complexity increases it limits the 1 percent of artists who need that complexity. The other 99 percent don't care too much. That's why they didn't do much about it for such a long time.

!!!!

So... LightWave doesn't use virtual memory? That might explain some things. Is that the case?

Sensei
09-18-2006, 01:07 PM
No, of course it does! Open the Task Manager, select View and then Select Columns (or whatever it's called in the English Windows version), and there is an entry that shows virtual memory usage.

But Windows' automatic memory handling is not always the solution for everything, especially when you need one large contiguous region of memory.
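A quick way to see the difference between "enough memory" and "one contiguous block"; the size just mirrors the 10k x 5k example from earlier in the thread:

# Ask for the ~1526 MB frame buffer as a single contiguous allocation.
# On a fragmented 32-bit address space this raises MemoryError even when
# plenty of RAM plus swap is technically free.
FRAME_BYTES = 10000 * 5000 * 4 * 8        # 10k x 5k, 4 channels, 8 bytes each
try:
    buf = bytearray(FRAME_BYTES)          # must be one contiguous region
    print("got", FRAME_BYTES // 2**20, "MB in one piece")
except MemoryError:
    print("no contiguous block of that size available")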