
View Full Version : Layout OpenGL viewport improvement with new GPU



unplug_2k2
04-07-2017, 03:22 PM
Hello,

I have a solid computer, but hadn't changed the GPU for many years. I was using a GTX 570, which is quite old now, and just upgraded to an overclocked GTX 1060 Extreme with 6 GB.
I know that VPR and rendering are CPU-only, but I was expecting a significant boost in viewport performance when handling high-poly and texture-intensive scenes.
So far I have seen close to no gain in Layout viewport performance on high-poly scenes. I have a scene with 2 million polys, which is not that high, and 75 bones with IK, and I can't get all the objects to display smoothly. I have to set a maximum threshold so that some objects disappear when I move around the viewport.

Am I missing something here!?
I run with 4 viewports, but there was no significant gain going to 1 viewport.

Is it LightWave that has difficulty really using the power of the GTX!?

Norka
04-07-2017, 04:31 PM
Your GPU is a total puss. It's a mid-level gaming GPU at best. Put your 1060 on eBay and get a 1080Ti or a 980Ti. They will take anything you throw at them and ask for seconds.

jwiede
04-07-2017, 07:55 PM
Hello,

I have a solid computer, but hadn't changed the GPU for many years. I was using a GTX 570, which is quite old now, and just upgraded to an overclocked GTX 1060 Extreme with 6 GB.
I know that VPR and rendering are CPU-only, but I was expecting a significant boost in viewport performance when handling high-poly and texture-intensive scenes.
So far I have seen close to no gain in Layout viewport performance on high-poly scenes. I have a scene with 2 million polys, which is not that high, and 75 bones with IK, and I can't get all the objects to display smoothly. I have to set a maximum threshold so that some objects disappear when I move around the viewport.

Am I missing something here!?
I run with 4 viewports, but there was no significant gain going to 1 viewport.

Is it LightWave that has difficulty really using the power of the GTX!?

Kind of. Current LW uses a really old OpenGL profile, and that definitely impacts performance and capabilities. However, the aged LW viewport architecture is probably at least as responsible (in terms of how it feeds geometry into OpenGL). LW.Next may bring some improvement there, it's difficult to know, but as it stands you're unlikely to get a _lot_ better OpenGL results from a 1080 over a 1060 given LW's current viewport situation.
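
For anyone wondering what an "OpenGL profile" means in practice, here is a minimal, purely illustrative sketch (not LW3DG code, just a generic example using GLFW) of an application asking the driver for a modern core-profile context instead of the legacy/compatibility profile an older codebase gets by default:

// Hypothetical illustration: requesting a modern OpenGL core-profile context
// with GLFW. Omit the hints and most drivers hand back a compatibility
// context with only the old fixed-function guarantees.
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return 1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);   // required on macOS

    GLFWwindow* window = glfwCreateWindow(640, 480, "core profile test", nullptr, nullptr);
    if (!window) {            // driver cannot provide the requested profile
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}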

samurai_x
04-07-2017, 11:00 PM
There's no advantage going from a 1060 to a 1080 for viewport performance when handling high-poly and texture-intensive scenes. Set your bounding box threshold lower, like under 500,000.
Don't waste your money.

madno
04-07-2017, 11:28 PM
Users noted that Nvidia Quadro cards are faster than their GTX sisters due to different drivers.
Maybe get a used Quadro to try?
It was said that even quite old, mid-class cards work well.
Or wait for LW Next to see if it brings better viewport performance with it.

Sensei
04-08-2017, 01:37 AM
Am I missing something here!?


You're missing all the knowledge of how 3D apps/engines work and send data to graphics cards...

Games usually try to upload the entire mesh once, while loading the level, and then vertex shaders transform (change vertex positions of) that data, which stays in graphics-card memory the whole time.
3D apps like LightWave transform vertices themselves, adding new triangles/polygons using the CPU, and then send them to the card just for drawing.
The next frame, the same procedure.

The same applies in Modeler. If you have non-Legacy OpenGL enabled, with Buffered VBO (Vertex Buffer Object (https://en.wikipedia.org/wiki/Vertex_Buffer_Object)), spinning the viewport is fast even with a multi-million-polygon mesh. It's a static mesh.
All transformations are done entirely on the graphics card, without transferring data from regular CPU memory to graphics-card memory.
But if you try to edit that million-polygon mesh, the transformation is done by the CPU, the mesh has to be flushed from the graphics card, and a new version has to be sent.
It's a dynamic mesh.
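
To make the static vs. dynamic difference concrete, here is a minimal, generic OpenGL sketch in C++ (illustrative only, not LightWave's actual code; it assumes a GL context and an extension loader such as GLEW or glad are already set up, and the vertex data is a placeholder):

// Static mesh (the "spin a huge model" case): upload the vertex data once,
// then draw it every frame with no further CPU-to-GPU transfer.
#include <GL/glew.h>
#include <vector>

GLuint uploadStaticMesh(const std::vector<float>& xyz)          // x,y,z triplets
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // From here on the data lives in graphics-card memory.
    glBufferData(GL_ARRAY_BUFFER, xyz.size() * sizeof(float),
                 xyz.data(), GL_STATIC_DRAW);
    return vbo;
}

void drawStaticMesh(GLuint vbo, GLsizei vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);                 // no re-upload here
}

// Dynamic mesh (the editing/deforming case): the CPU rebuilds the vertices
// and re-sends the whole buffer every frame, so the host application, not
// the GPU, becomes the bottleneck.
void drawDynamicMesh(GLuint vbo, const std::vector<float>& freshXyz)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, freshXyz.size() * sizeof(float),
                 freshXyz.data(), GL_DYNAMIC_DRAW);             // full re-upload
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(freshXyz.size() / 3));
}

An animated, deforming Layout scene behaves like the dynamic case, which is roughly why a faster card by itself doesn't buy much there.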

unplug_2k2
04-08-2017, 08:33 AM
You're missing all the knowledge of how 3D apps/engines work and send data to graphics cards...

Games usually try to upload the entire mesh once, while loading the level, and then vertex shaders transform (change vertex positions of) that data, which stays in graphics-card memory the whole time.
3D apps like LightWave transform vertices themselves, adding new triangles/polygons using the CPU, and then send them to the card just for drawing.
The next frame, the same procedure.

The same applies in Modeler. If you have non-Legacy OpenGL enabled, with Buffered VBO (Vertex Buffer Object (https://en.wikipedia.org/wiki/Vertex_Buffer_Object)), spinning the viewport is fast even with a multi-million-polygon mesh. It's a static mesh.
All transformations are done entirely on the graphics card, without transferring data from regular CPU memory to graphics-card memory.
But if you try to edit that million-polygon mesh, the transformation is done by the CPU, the mesh has to be flushed from the graphics card, and a new version has to be sent.
It's a dynamic mesh.

I'm not sure I should, or want to, answer your comment. First, I think you didn't read the original post. The previous graphics card was a GTX 570, which is about 7-8 years old; an overclocked GTX 1060 with 6 GB of memory has much more horsepower to work with, though it uses a different architecture. I know LW is starting to age and that Layout has much more difficulty with performance compared to the Modeler viewport. I never mentioned anything about games, or about LightWave functioning like a game, or any expectation of getting render speed or better VPR performance out of a GPU in a CPU renderer... But when you replace an 8-year-old card, you expect some positive difference in OpenGL performance. It looks to me like LightWave makes very poor use of current hardware and technology, but I was asking the community whether I was missing anything or whether this is the behavior I should expect from Layout. Nothing else.

That said, even with the GTX 570 I almost never had any problem dealing with complex scenes in Modeler; as you said, most of the work is handled by the CPU. My computer runs at 4.8 GHz, so that's fine.

Now, about the transformation: even though I mentioned the objects being animated, I'm talking about rotating the viewport.
What kills the performance is having transparency on. I'm guessing the old and dying architecture behind Layout is mostly responsible for the bad performance, and I guess I need to wait for LightWave Next to see any difference. I haven't tested it yet, but I'm sure other applications like the Allegorithmic tools or ZBrush will get a performance boost. I'm not asking anyone whether I made a good purchase, lol; none of you know my needs for that card, and the GTX 1060 is known as a very good bang-for-the-buck option. Any card would be better than the one that was in that computer anyway, lol.

Threshold-wise, it works well at 600k with transparency on.

- - - Updated - - -


Your GPU is a total puss. It's a mid-level gaming GPU at best. Put your 1060 on eBay and get a 1080Ti or a 980Ti. They will take anything you throw at them and ask for seconds.

I'm amazed by people like you.

Sensei
04-08-2017, 11:21 AM
I'm not sure I should, or want to, answer your comment.

Better not, because you did not understand it at all...

My best graphics card is a GT 430 with 1 GB.

It would be better if you took an OpenGL book and a C/C++ book and wrote an application rendering triangles, millions of triangles, to gain real knowledge of how to program a graphics card...
Where is the problem? Don't you want to be smarter than you are now?



First, I think you didn't read the original post.

I think you're too incompetent to even understand what I wrote..



The previous graphics card was a GTX 570, which is about 7-8 years old; an overclocked GTX 1060 with 6 GB of memory has much more horsepower to work with, though it uses a different architecture. I know LW is starting to age and that Layout has much more difficulty with performance compared to the Modeler viewport.

I would call your card awesome from my point of view..
500%+ faster than mine..



I never mentioned anything about games, or about LightWave functioning like a game, or any expectation of getting render speed or better VPR performance out of a GPU in a CPU renderer...

Then, what do you expect?



But when you replace an 8-year-old card, you expect some positive difference in OpenGL performance.

If you replace a car with another car, you also expect the new car to be faster... But instead it uses less fuel, and you're disappointed...

If I replaced my GT 430 with anything newer, I wouldn't expect any boost in 3D apps. I would expect a boost only in 3D games, which would mean rendering more frames per second.



That said, even with the GTX 570 I almost never had any problem dealing with complex scenes in Modeler; as you said, most of the work is handled by the CPU. My computer runs at 4.8 GHz, so that's fine.


I would like to have your machine.. ;)



Now, about the transformation: even though I mentioned the objects being animated, I'm talking about rotating the viewport.
What kills the performance is having transparency on.

That's because when transparent polygons are present in a mesh, they need to be sorted by Z-distance from the camera and rendered back to front...
Which has nothing to do with the graphics card used.
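
Roughly what that sorting step looks like, as a standalone C++ sketch (illustrative only, with a made-up Triangle type; not LightWave's actual code):

// Transparent polygons must be re-sorted back-to-front from the camera
// every time the view changes, and that work happens on the CPU.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };     // placeholder mesh data

static float distanceToCamera(const Triangle& t, const Vec3& cam)
{
    // Sort key: distance from the camera to the triangle's centroid.
    Vec3 c{ (t.a.x + t.b.x + t.c.x) / 3.0f,
            (t.a.y + t.b.y + t.c.y) / 3.0f,
            (t.a.z + t.b.z + t.c.z) / 3.0f };
    float dx = c.x - cam.x, dy = c.y - cam.y, dz = c.z - cam.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

void sortTransparentBackToFront(std::vector<Triangle>& tris, const Vec3& cam)
{
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle& lhs, const Triangle& rhs) {
                  return distanceToCamera(lhs, cam) > distanceToCamera(rhs, cam);
              });
}

With a couple of million polygons and transparency enabled, that re-sort on every view change is pure CPU work, which is why a faster GPU barely moves the needle there.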

You have to be a programmer to know how OpenGL/Direct3D works, or talk to a programmer about it...


Yet another thing for you to check after reading a book about OpenGL and C++ programming...



I'm guessing the old and dying architecture behind Layout is mostly responsible for the bad performance, and I guess I need to wait for LightWave Next to see any difference. I haven't tested it yet, but I'm sure other applications like the Allegorithmic tools or ZBrush will get a performance boost.

ZBrush has a completely different architecture, and doesn't require the same things LW requires...

P.S. I hope you will appreciate my absolutely honest reply. If you don't appreciate honest replies from people, then ignore this without any reply. :)
What Eastern people call honest, UK/US people call being rude... I am not rude. I am honest/drunk ;)

unplug_2k2
04-08-2017, 02:05 PM
Better not, because you did not understand it at all...

My best graphics card is a GT 430 with 1 GB.

It would be better if you took an OpenGL book and a C/C++ book and wrote an application rendering triangles, millions of triangles, to gain real knowledge of how to program a graphics card...
Where is the problem? Don't you want to be smarter than you are now?



I think you're too incompetent to even understand what I wrote..



I would call your card awesome from my point of view..
500%+ faster than mine..



Then, what do you expect?



If you replace a car with another car, you also expect the new car to be faster... But instead it uses less fuel, and you're disappointed...

If I replaced my GT 430 with anything newer, I wouldn't expect any boost in 3D apps. I would expect a boost only in 3D games, which would mean rendering more frames per second.



I would like to have your machine.. ;)



That's because when transparent polygons are present in a mesh, they need to be sorted by Z-distance from the camera and rendered back to front...
Which has nothing to do with the graphics card used.

You have to be a programmer to know how OpenGL/Direct3D works, or talk to a programmer about it...


Yet another thing for you to check after reading a book about OpenGL and C++ programming...



ZBrush has a completely different architecture, and doesn't require the same things LW requires...

P.S. I hope you will appreciate my absolutely honest reply. If you don't appreciate honest replies from people, then ignore this without any reply. :)
What Eastern people call honest, UK/US people call being rude... I am not rude. I am honest/drunk ;)

Thanks again for taking some time to answer. I'll try to cover all your concerns about my personal life.

A - I mentioned not being sure about answering your post because I wasn't sure whether:

1 - you were trying to flame / be rude, or
2 - we just didn't understand each other well enough to get anything good out of it.

It was not me being rude.

I was not asking: did I buy the most powerful card ever? No, of course not. The thread started with a flamer telling me the card is mid-range... I don't really care; I picked this one for price vs. quality, and at that price range it's a good choice. Comparing a $1000 card's performance to a $300 card's is comparing apples to oranges, and it is not where I wanted my question to go. I was looking for information on LightWave, not graphics cards. It started the wrong way, and your post looked to be more clever, but, you know.

B - The original question was ONLY about this: is there any setting anywhere in Layout or the Nvidia panel that I have missed that causes the very limited gain, or is this normal behavior for LightWave because it is an old architecture? According to a few posts, it looks like LightWave is the cause of its own lack of performance. A simple, good, honest answer would have been: it is normal for LightWave to get no gain out of your new card, because of the way LightWave was built. See how easy and simple that is?

C - Calling me incompetent out of the blue just shows everyone how disrespectful and careless you are, and that, my friend, will eventually fall back on you. Come with me to my job and I'll make sure you feel like the stupidest person on earth, because I'm one of the best at what I do.

D - Thanks for your superb car analogy... "buying a new car will not make it faster than an old, faster car..." WOW, really? That was a marvelous answer, so clear now!!! I'm sure you worked hard on that to make my poor soul understand the basic concept of speed. Let's go back to the facts. We are comparing a GTX 570 to a GTX 1060. Now let's say that LightWave is a racetrack and that the performance in Layout is our timer. The GTX 1060 does the 1/4-mile track in 8 seconds while the GTX 570 does it in 15 seconds. Not because it is new or old, but because the 1060 is a way more powerful car/card... whatever. So yes, I do expect a brand-new Porsche Boxster (see how I didn't say 911 Turbo) to be faster on the track than an 8-year-old suffering Toyota. Much 3D software benefits from a newer, faster card, and LightWave should be designed that way too. Unless you don't consider ZBrush, Houdini, Revit, RealFlow, etc. to be 3D software? As far as I remember, people were complaining about LightWave scene performance. Don't pretend I'm the first, or that you never heard of it.

E - I know how books work, thanks. You should get the one called "Being a Good Person for Dummies". I have better things to do in life, and I have enough to learn in my own field of interest. Thanks for your great advice.

F - Oh, so ZBrush is not a clone of LightWave? Wow, thanks for the enlightenment again. So clear now. Again, that was kind of my point. It's disappointing to see major software benefit from newer cards and not LightWave, which is why I was asking whether it was normal for LightWave not to react positively to a card four times better.

G - I'm glad you would like my computer, but I don't think it would be compatible with you. I've been running at 4.8 GHz for 7 years with my overclocking solution. Go back 7 years and that was impressive; 4.8 GHz is still very good. I can't recall another computer fast enough to stay that long in my company. It's a warhorse workstation. I have 32-core and 24-core dual-CPU machines, but many functions of LightWave are single-core, so overall they are just better at rendering and worse for working in that specific software.

H - I know that it is normal for reflection/transparency to produce a performance drawback. It was just a clarification about the bottleneck.

I - I hope you truly appreciate my honest answers and that you will not reply to this post anymore. As you can see, I rarely come to this forum, and I already got the answer I was looking for. I put your post into Google Translate; it detected your language and translated your mile-long post into a simple, short answer: "Yes, it is normal."

Thanks

- - - Updated - - -


Kind of. Current LW uses a really old OpenGL profile, and that definitely impacts performance and capabilities. However, the aged LW viewport architecture is probably at least as responsible (in terms of how it feeds geometry into OpenGL). LW.Next may bring some improvement there, it's difficult to know, but as it stands you're unlikely to get a _lot_ better OpenGL results from a 1080 over a 1060 given LW's current viewport situation.

Thanks, buddy. Yes, I've been keeping an eye on LW Next. They don't post news often on the forum, but so far so good from what we know. Can't wait :)

SBowie
04-08-2017, 02:53 PM
I'm not quite so sure this thread represents a clash of cultural norms. I have been working closely with a group of Asian engineers from a leading organization for something like 9 months. I have found them both extremely well informed, and unfailingly courteous and respectful. I could only wish to see more with such good manners, wherever they live. That said, language and cultural differences can clearly impede good communication. Anyway, I'd suggest a fresh start.

Norka
04-08-2017, 04:36 PM
I'm amazed by people like you.

Thanks. I always love to help when possible. And it's nice to get occasional recognition for it. :-)

madno
04-09-2017, 12:29 AM
... I have a scene with 2 million polys, which is not that high, and 75 bones with IK, and I can't get all the objects to display smoothly. I have to set a maximum threshold so that some objects disappear when I move around the viewport.


Unplug_2k2, can you share your scene or a similar one?
I have a Quadro and a GTX in my PC and can test whether there is a difference in viewport speed.

samurai_x
04-09-2017, 01:25 AM
Users noted that Nvidia Quadro cards are faster than their GTX sisters due to different drivers.
Maybe get a used Quadro to try?
It was said that even quite old, mid-class cards work well.
Or wait for LW Next to see if it brings better viewport performance with it.

It won't make a difference in LightWave. The LightWave viewport is the bottleneck.
It will make a difference in Max and Maya; they're slightly optimized for Quadro. But since they moved to Nitrous Direct3D, I think gaming cards are better.

Asticles
04-09-2017, 03:43 AM
If I remember correctly, ZBrush is entirely CPU-based, so you can handle really dense meshes on a laptop with a standard graphics card.

It's possible Clarisse has a similar approach, I'm not really sure, but maybe that's the best route for VFX on multi-core CPUs.

As a question, would it benefit LightWave to stay with OpenGL? What about Vulkan?

jwiede
04-09-2017, 05:31 PM
As a question, would it benefit LightWave to stay with OpenGL? What about Vulkan?

LW3DG are going to need to completely reimplement LW's viewport and UI technology (the two are connected) in order to see significant improvements in viewport performance. LW3DG are also going to be limited by what Qt (or whichever cross-platform UI framework they select) makes easily available to them w.r.t. graphics APIs for viewport tech.

While LW3DG could choose to implement their own cross-platform UI framework, and select whichever graphics APIs they wish to integrate, there are compelling reasons to instead use an existing, well-supported cross-platform UI framework versus attempting to make their own (again). Many of LW's current problems stem from LW3DG not adequately maintaining their existing (self-implemented) UI framework to keep up with OS modernization on Win and Mac. There's little reason to believe LW3DG will be more capable of maintaining a modern, even more complex cross-platform UI framework going forward (nor that they could implement anything remotely as complete, stable, and mature in that regard as Qt). Clearly it would benefit LW3DG to have someone else responsible for that (very substantial) dev effort going forward, whether the Qt folks or an equivalent competitor.