PDA

View Full Version : very choppy OpenGL when using bones



matts152
08-21-2015, 10:34 AM
Anyone know why, when bones are activated on a fairly high-poly object, the OpenGL response becomes so poor? I mean just the viewport, not bone deformation speed: just rotating the perspective view becomes very slow, but when switched to wireframe it's very fast. When bones are deactivated, shaded mode is fast again.

Why would just activating bones slow down draw speed?

Ryan Roye
08-21-2015, 11:36 AM
In Bone properties, is it set to Faster Bones? (This is a per-object setting; changing it on one bone changes it for all the bones the object contains.)

Also, I have a hunch that your speed issue has to do with transparency being enabled. Turn transparency off and verify this; LightWave's OpenGL is known to be very slow when a lot of transparent surfaces are visible.

matts152
08-21-2015, 11:44 AM
Faster Bones is active, and no transparency. Also running 2 Titan Z graphics cards, so it's not that. What's weird is that, again, with no bones, OpenGL is lightning fast.

MSherak
08-21-2015, 12:26 PM
Faster Bones is active, and no transparency. Also running 2 Titan Z graphics cards, so it's not that. What's weird is that, again, with no bones, OpenGL is lightning fast.

Deformations are done at the CPU level, not at the GPU level. When no deformation tools exist on the model, it is cached to the GPU for drawing. When deformation tools are applied, this does not happen: only after the CPU is done is the data pushed to the GPU. Hence moving from frame to frame seems slower, since the CPU has to calculate it first. Adjusting your Dynamic Update and Bounding Box Threshold settings might help speed up the viewport drawing for you.

-M

matts152
08-21-2015, 02:31 PM
Deformations are done at the CPU level, not at the GPU level. When no deformation tools exist on the model, it is cached to the GPU for drawing. When deformation tools are applied, this does not happen: only after the CPU is done is the data pushed to the GPU. Hence moving from frame to frame seems slower, since the CPU has to calculate it first. Adjusting your Dynamic Update and Bounding Box Threshold settings might help speed up the viewport drawing for you.

-M

What I don't understand, though: if it's not a GPU thing, why does it respond very quickly with bones active in wireframe mode?

MSherak
08-21-2015, 08:27 PM
What I don't understand, though: if it's not a GPU thing, why does it respond very quickly with bones active in wireframe mode?

Wireframe requires no calculation of polygons, normals, z-buffer sorting, surfacing, lighting, etc., which the CPU currently handles because of how the OpenGL path works. For wireframe, the CPU hands the GPU the point cloud and says "draw the lines." Now, if there were a mini GPU vertex shader running in the background for the viewport, its speed could be greatly increased. The hard part is that vertex shaders in OpenGL can't match the complexity of, say, DirectX. That is why Maya's new Viewport 2.0 is DirectX-only and 3ds Max is only available on Windows. DirectX does not care what card you have in your machine, but the problem for a multi-platform piece of software is that there is no DirectX for the Mac, so you alienate those users if you go that route.

On the other hand, if programs used OpenCL, they could run a vertex shader the way DirectX does, get that speed into the viewport, and stay multi-platform friendly. It takes some work to get OpenCL running right, but it does work. That's partly why I don't like most of the GPU renderers out there: they lock you to one video-card type because they use CUDA (Nvidia-only). Any package that adopts OpenCL will be ahead of the game in the end, and you can see this even with current developers: Octane is going OpenCL in 3.0, and Bullet is going that route in 3.0 to use the GPU for calculating. There are a couple of other GPU renderers moving that way as well, so I would not be surprised if LightWave goes that route in the future. There is no need to limit a GPU renderer or vertex shader to one video-card type; in the end it is easier to ask the GPU to do something and not care who made the card (ATI or Nvidia). That is what DirectX does, and it works well for Windows. So OpenCL is the DirectX of multi-platform, taking advantage of any GPU, and, believe it or not, OpenCL was originally developed by Apple and then handed to the Khronos Group (the OpenGL devs).

One of the best things about OpenCL that most people don't know is that it takes advantage of all the computing power in a machine, allowing CPUs, GPUs, DSPs, FPGAs, etc. to work together in parallel. Of course, this is also why it takes some reworking from developers, since most software packages are not written for 100% parallel processing (full multi-threading). Usually in 3D packages only the renderer takes advantage of this ability. You can see it easily in LightWave: do a boolean on a million-polygon model and look at how many of your CPU cores are actually working. Now imagine if you could use all your GPUs and CPUs at the same time to do that boolean.

Parallel-processing software will happen, since we can't move much past the 5.0 GHz limit of silicon. If you want to see it in action, just look at the latest game consoles: they run their 8 CPU cores at around 1.8 GHz, but everything is 100% parallel processing with the GPUs, and they have some crazy polygon fill rates for under-$400 machines. OpenCL is the first step for personal computers to move in this direction.

Anyway sorry for the long message.
-M

matts152
08-22-2015, 07:25 AM
Cool, thanks for that breakdown, very cool! I get it now, I think. Would love to have that OpenCL multi-threading one day!

So the reason the viewport slows down when just orbiting the perspective view with bones active is that the CPU is the bottleneck? Because, again, when deformations are turned off it can tumble the model like butter. It's just so different, I figured there must be something going on. When it's in wireframe I can even move the joints and pose the character very smoothly.

jwiede
08-24-2015, 09:19 PM
The hard part is that vertex shaders in OpenGL can't match the complexity of, say, DirectX.

Can you please provide some actual examples of what you're referring to above?