PDA

View Full Version : Modeler only uses ONE CPU core



tcoursey
03-27-2013, 07:41 AM
How come Modeler only uses ONE CPU core in 2013? We have graphics cards that have hundreds if not thousands of CUDA cores, multi-core CPUs, and in many cases multiple CPUs each with multiple cores (i.e. 8-16+).

Modeler is so old....The new features mean nothing if they can't tap into modern tech on our computers.

Moving 5 points around on a 500,000-poly model takes 12 seconds to respond. Don't even think about trying an UNDO. Modeler has to rethink what it just did, sort through those 500,000 polys with one core of the CPU, and then redraw the OGL display.

Please someone tell me this will get better VERY SOON!

If you'd like it to, let's let NewTek know that features are great, but a modern Modeler is absolutely necessary for the future.

ridasaleeb2
03-27-2013, 07:51 AM
I was thinking of upgrading the computer and getting the LW 11.5 upgrade (from 9.6). So are you telling me that, even with hyped-up hardware, Windows 7, etc., it's still going to have these lags responding to operations? I want fast performance - isn't virtual real time supposed to be the strength of LW 10+, and that quickness embedded in the modeling tools??

tcoursey
03-27-2013, 07:55 AM
Always has, and NewTek isn't telling us any different. Modeler 11.5 is terrific in many ways: great new features, enhancements, and bug fixes. Layout is even better. Instancing, FiberFX, etc., etc. The list goes on and on. I'm not complaining about that.

The fact that Modeler's OGL and command system (best way to describe it) uses only one SINGLE core for most if not all operations is crazy in this day and age.

Short answer is yes: a hyped-up system, a Quadro K6000 or whatever you want to spend, will still have slowdowns, because Modeler relies on the CPU, and only one core of it.

Watch your CPU usage on slicing a large poly object, selecting a group of thousands of points in a large poly object. Try doing an UNDO after a large operation. ONE CORE.

But do get 11.5 upgrade and do get a new machine. It will all help. Just not as much as it truly should! IMO.

Snosrap
03-27-2013, 08:05 AM
Modeler just has clunky innards. :) 11.5 has a sampling of possible things to come with the Render Modes of some of the new tools. Not sure if you had any experience with LW CORE, but that had amazing polygon-pushing performance. So NT is aware of the issue and has solved it - they just need to shoehorn the new code into classic LW. The new tools show that they are doing it little by little. BTW, most modeling apps don't take advantage of multiple cores; even Bullet is single-threaded.

tcoursey
03-27-2013, 08:12 AM
Good to know, Snosrap... and in a weird sort of way you make my point.

NewTek is aware of it. For several years now. They have proven that it works (I have not seen it; I did not jump on CORE, although like many I was pumped to see the launch, only to find it was not coming to the masses as promised).

I'm not a programmer, but I can get my way around scripts. I doubt it will be a shoehorn approach, or we would have seen some of it by now. It's a total rework, and that's why CORE was being explored.

I don't care how many cores a program technically uses; I just care about the performance I see. Bullet is apparently a good example. I did not know it was single-threaded. It performs well. Think though how it could perform if it was multi-threaded.

It just seems silly or backwards in many ways, because LightWave at its core (no pun intended) has been multithreaded way before others. In a significant way. The render engine is bar none the best on the market IMO. Fast, multithreaded before it was cool or necessary.

Yet many features and commands are held back so drastically by only addressing one core.

I've been in other competing modeler apps and they perform amazingly. Period. Don't care if they use one core or not. They perform!

MarcusM
03-27-2013, 08:16 AM
What extremely powerful software LW could be if those guys were still developing LW. I could see it was a really big hit for the dev team, many years ago, when I read the Luxology team description:
http://www.luxology.com/company/key_staff.aspx

Modeler speed is fine for me. It's just that when I import a model from SolidWorks/Rhino (an .obj of more than 400 MB), only Layout doesn't hang ;]

tcoursey
03-27-2013, 09:41 AM
Yes, Luxology has come a long way in a very short time, relatively speaking. But the dev team for LightWave has made huge improvements in LW as well, especially considering they are having to work from an old code base. Nothing wrong with the team or its efforts! Great job NT.

Just wish there was some way we could inject some new code for multithreading modeler. I know it's been on the forums for years now!!!

OFF
03-27-2013, 09:58 AM
+1 for multithreading modeler!

Titus
03-27-2013, 10:27 AM
There are certain problems in computer programs you can divide into threads; others are not so efficient. Rendering may be highly parallelizable (and sometimes you only get a 10% gain from multithreaded rendering). I'm not sure if modeling is parallelizable.

Tobian
03-27-2013, 11:39 AM
Sorry, remind me which other software does multi-threaded modelling operations again? So there's Houdini.. and Hmm.. Houdini... and erm.... :p Oh and all those Cuda powered Modellers, like uhm......

I'll not argue that Modeler could do with a huge speedup (though note that the new tools use the new Modeler core, which is a LOT faster!). But most software does slicing and dicing of models in a single thread, as some operations just don't work very well if they are diced up into multiple threads. There's plenty of things they could do to speed up Modeller: viewport acceleration, better handling of large models (without the entire model being evaluated every time you move or select a point). But very few of those things would benefit from multithreading.

Sensei
03-27-2013, 12:02 PM
Think though how it could perform if it was multi-threaded.

Think how fast a thousand people could cut a single apple..

Your logic is: "a thousand people will cut an apple a thousand times faster than one"..

OFF
03-27-2013, 12:10 PM
Generally speaking, it's about manipulating high-polygon-count objects in Modeler's viewports.

shrox
03-27-2013, 12:21 PM
Think how fast a thousand people could cut a single apple..

Your logic is: "a thousand people will cut an apple a thousand times faster than one"..

One blender can do the work of thousands with that apple...mmm applesauce.

Titus
03-27-2013, 12:29 PM
Generally speaking, it's about manipulating high-polygon-count objects in Modeler's viewports.

This was supposed to be solved with Modeler 64-bit. NT even had a screenshot of a Battlestar Galactica scene with millions of polygons. Right now I'm working on a scene with half a dozen medium-poly storks flying, and my machine is crawling.

bazsa73
03-27-2013, 12:32 PM
never mind...

- - - Updated - - -

yes, but not blender the app! LOL

Tobian
03-27-2013, 12:43 PM
Your Modeler looks odd there titus.. those look very much like bones.. Mine can't do that :p

TBH, tumbling performance in both Layout and Modeler with many millions of polygons is actually OK. It's just switching views, deformations, or poly editing which is slow. Multiple deformations are always going to be slow in Layout. Do you have multithreaded deformations enabled there, Titus? Now that IS multithreaded... :D

As I said, I agree that Modeler needs to be much faster, but most modelling tools do not use multithreading to achieve that. Houdini is a special case because it's entirely procedurally driven. Of course, it is a horrifically poor modeller, in the classical sense.

hrgiger
03-27-2013, 03:38 PM
Modo 701 showed a demo that had a scene with 2.7 billion polygons in it and they were able to make a selection and bevel some polys pretty interactively. Lightwave needs much improvement in this area.

Tobian
03-27-2013, 04:17 PM
So they bevelled 2.7 billion polys? Wow impressive :p

hrgiger
03-27-2013, 05:45 PM
That would be impressive, but no, just a model within a scene with 2.7 billion polygons. That would make LightWave cry. A lot.

jwiede
03-28-2013, 08:10 AM
Sorry, remind me which other software does multi-threaded modelling operations again? So there's Houdini.. and Hmm.. Houdini... and erm.... :p Oh and all those Cuda powered Modellers, like uhm......
Softimage, modo, and C4D to name three off the top of my head. All three have multithreaded deformers, etc. which are commonly used as modeling tools in their environments. C4D & modo both have multithreaded sculpting ops as well, not sure about SI in that regard (but will check this weekend if I have time). I'll also try checking whether the "basic" modeling tools in each (bevel/chamfer, bridging, etc.) will multithread when heavily loaded, I believe at least some of SI's and modo 701's do now, but not sure about C4D.

Tobian
03-28-2013, 08:39 AM
Deformers... That being cage deformation, not poly editing. LightWave does multithreaded deformation, in Layout... Make sure you're comparing apples with apples there.

Lightwolf
03-28-2013, 09:02 AM
That would be impressive, but no, just a model within a scene with 2.7 billion polygons. That would make LightWave cry. A lot.
Yup, since there's no modelling in Layout... at least not enough to select polygons and bevel them.

Cheers,
Mike

Dexter2999
03-28-2013, 09:21 AM
Think how fast thousand people could cut single apple..

You logic is- "thousand people will cut apple thousand times faster than one"..

I don't know, sir. If you were talking about people trying to write code, I'd agree with you. But we aren't. We are talking about how a computer handles a dataset. And I think a computer is capable of splitting the data among various processors/cores and handling it more efficiently than people would be. Or rather people have to be clever in how they tell the computer to handle the data. (It obviously isn't magic.)

I think I will have to respectfully disagree with you on this.

Lightwolf
03-28-2013, 09:58 AM
Or rather people have to be clever in how they tell the computer to handle the data. (It obviously isn't magic.)
And that, precisely, is the problem. Coupled with the fact that some tasks cannot be split (which is what Sensei tried to hint at) or have diminishing returns (one example being shared resources, especially if they're both read and written).

And then there's Amdahl's Law (http://en.wikipedia.org/wiki/Amdahl's_law) as well.

Cheers,
Mike

Sensei
03-28-2013, 10:09 AM
My analogy was very appropriate. In many tasks, too many people/cores working on the same thing will cause a slowdown or show no advantage.
1000 people cutting 1000 apples will do the work 1000 times faster than 1 person cutting 1000 apples.
But 1000 people working on something that's not splittable won't give any speedup; even worse, there will be endless discussions about "how to split".
Modeling tools, except the ones in Modify (which are simply deformers taking a vertex and changing its position), often require doing stuff one step at a time, where the result of the previous stage is the input to the next stage. You can't start the next stage before finishing the first. How do you want to automagically speed that up across the entire app?
Basic deformers are extremely easy to multi-thread, especially the ones in LW, where the Evaluate() function processes just one vertex at a time. So split it: one core works on vertex index+0, a second core works on vertex index+1, etc.

The slowness of modeling in LW is IMHO a result of the undo system.
LW doesn't know which vertices or geometry will be modified, so it makes a duplicate of the entire object, then passes the mesh to the tool, and the tool does its stuff. If the result is final, the mesh is left as-is; if it's not final (because the user dragged the mouse farther), the mesh is freed and copied again, and the tool receives a fresh new mesh to modify. You have a mesh with 1M polys, so those 1M polys must be duplicated on every mouse move..

hrgiger
03-28-2013, 11:36 AM
... Or rather people have to be clever in how they tell the computer to handle the data. (It obviously isn't magic.)



Thanks a lot for the heartbreak. Next you're going to tell me there's no Santa Claus. Not that I would believe you, but...

Dexter2999
03-28-2013, 11:47 AM
I see your point, but still disagree. Sorry.

The Amdahl's Law thing you linked to, Mike, makes a valid point. But who's to say that the "critical 1 hour" or critical point couldn't be the loading process? An object is loaded and the dataset ranges split among cores.

It isn't a question of "1000" or Million. Even a simple Quad would benefit the process.
Splitting the data into quads on loading for four cores (which is fairly common these days) I believe would speed up data handling operations. And I don't think there needs to be "endless discussion" about how to split things.

In truth, benefit can be derived without taking it to extremes. 1000 people can't always do something better than one person. But that doesn't mean one person can always do something better than a group of people. How many people to dig a well? How many to dig a trench? How many to cut an apple? How many to run a kitchen? The answer will vary according to the situation.

I simply disagree that there is a valid argument in maintaining that single core operations are the only viable solution.

jwiede
03-28-2013, 11:56 AM
Deformers... That being cage deformation, not poly editing. Lightwave does multithreaded deformation, In layout... Make sure you're comparing apples with apples there.
It IS apples to apples. Modeler has "Bend", "Twist", "Skew" model operations which are essentially the same as the deformers of the same name in other unified apps. However, being unified environments, SI, C4D, and modo can all use the same deformers for animation and to permanently alter the geometry of the model in a tool-like fashion, where Modeler has to use separate "modeling deformer tools" to do the same thing.

Tobian
03-28-2013, 12:08 PM
You can disagree all you want Dexter. It's the single biggest issue in computing right now, how to deal with the fact some processes just don't split up well!

and jwiede, yes, but since you're comparing 'unified' environments, then we have to get a unified environment before you can start doing stuff like that. As I said already, the deformers in Layout are multithreaded for that reason.

jwiede
03-28-2013, 12:29 PM
and jwiede, yes, but since you're comparing 'unified' environments, then we have to get a unified environment before you can start doing stuff like that. As I said already, the deformers in Layout are multithreaded for that reason.
Whether LightWave has a unified environment has no bearing on whether Modeler's deformer tools could be rewritten to use multithreading -- that Layout's do generally implies Modeler's could as well; they just haven't been updated to do so.

Sensei
03-28-2013, 12:40 PM
As I said already, the deformers in Layout are multithreaded for that reason.

Deformers in Layout don't have to worry about undo, that's why they're fast..

Move is the simplest operation: just three floating-point additions per vertex.

Lightwolf
03-28-2013, 12:56 PM
Whether LightWave has a unified environment has no bearing on whether Modeler's deformer tools could be rewritten to use multithreading -- that Layout's do generally implies Modeler's could as well; they just haven't been updated to do so.
But it does have a bearing on the performance when it comes to massive scenes.
In Modeler... everything is instantly editable. And that goes beyond shuffling coordinates around (which is easy to multi-thread) down to intricate topology changes (which isn't... with differing levels of difficulty).
In a unified environment, only what you explicitly edit is editable, allowing the system to optimise the remaining geometry for faster display (and deformations).

As a side note... the current speed up in the new modeler tools is due to optimised data structures - not due to multi-threading. There's still a lot of room for optimisations here before taking multiple threads into account. And I'd suspect that a more modern backbone would make adding threads (if the algorithm can make use of them) and shared access to data across them easier as well.

Cheers,
Mike

Sensei
03-28-2013, 01:01 PM
Even a simple Quad would benefit the process.

Quad of what?
4 points/polygons split to 4 cores?


Splitting the data into quads on loading for four cores (which is fairly common these days) I believe would speed up data handling operations. And I don't think there needs to be "endless discussion" about how to split things.

You're not a programmer? That's rather obvious reading you..


The answer will vary according to the situation.

And that's why programmers optimize areas that make sense to optimize.


I simply disagree that there is a valid argument in maintaining that single core operations are the only viable solution.

Who said so?
If something is easy to multi-thread, it's programmed to use it.
Rendering an image is such an easy thing to multi-thread: just split the image into rows, for example.
One thread is crunching the row that it allocated; another thread is working on its own row..

int y = 0;
while( image.AllocateRow( y ) )
{
    for( int i = 0; i < width; i++ )
    {
        image.SetPixel( i, y, RenderPixel( i, y ) );
    }
}

AllocateRow() implementation (y is returned by reference; current_y and mutex are shared by all threads):

bool Image::AllocateRow( int &y )
{
    bool result = false;

    // lock multi-threading
    mutex.Lock();

    // check which row is not yet allocated, if there is none, return false.
    if( current_y < height )
    {
        y = current_y;
        current_y++;
        result = true;
    }

    // unlock multi-threading
    mutex.UnLock();

    return( result );
}

Lightwolf
03-28-2013, 01:04 PM
I see your point, but still disagree. Sorry.

The Amdahl's Law thing you linked to Mike makes a valid point. But whose to say that the "critical 1 hour" or critical point couldn't be the loading process? An object is loaded and the dataset ranges split among cores.
No, that happens any time a shared resource is accessed, especially if you write to it as well. That could be as little as a vertex shared between two parts of your dataset. Or just merging the split dataset back into one at the end of the processing.

It isn't a question of "1000" or Million. Even a simple Quad would benefit the process.
No, it's a question of either one... or any higher number (with less of a benefit the more threads you add due to scalability issues).
Two threads cause as many problems as 1000 (it's just more likely to show up in testing with 1000).

Splitting the data into quads on loading for four cores (which is fairly common these days) I believe would speed up data handling operations. And I don't think there needs to be "endless discussion" about how to split things.
Which operations are you talking about though? There's tons of different algorithms and purposes. A simple deform is different from creating sub-patches (which would require a single dataset by the way), different from re-topo or a mesh decimation.


I simply disagree that there is a valid argument in maintaining that single core operations are the only viable solution.
But that's not the argument. The argument is that: multiple cores are _very_ hard to use in certain situations, some of which certainly relate to modelling. Also, there's plenty of room for other changes to speed up things - and those would be beneficial across the board.

To phrase it differently... if it was so easy, why isn't everybody doing it? And, except for trivial cases (like deformations) it certainly only is Houdini (and then I don't know how far they go either).

Cheers,
Mike

Lightwolf
03-28-2013, 01:07 PM
// lock multi-threading
mutex.Lock();

...

// unlock multi-threading
mutex.UnLock();

Btw (for Dexter) - anything between those two -> Amdahl. ;)

Cheers,
Mike

Sensei
03-28-2013, 01:12 PM
it certainly only is Houdini (and then I don't know how far they go either).

Without being the programmer who wrote the app, we can't know what those threads are really doing.
Maybe one thread is simply working on a single item at a time, and another item is handled by another thread, so they don't conflict with each other..

dsol
03-28-2013, 01:21 PM
The new modelling tools in 11.5 are loads faster than the old ones - and look very different UI-wise. Which leads me to think they're using some of the plumbed-in CORE code. Either that, or they're just very nicely implemented. The new edge loop tool is sweet!

Lightwolf
03-28-2013, 04:31 PM
The new modelling tools in 11.5 are loads faster than the old ones - and look very different UI-wise. Which leads me to think they're using some of the plumbed-in CORE code. Either that, or they're just very nicely implemented. The new edge loop tool is sweet!
They use a new geometry core... thus the lag when starting the new tools and when exiting them - that's the mesh information being copied.

Hey, the old geometry sub-system was designed before CPUs had caches, when memory was running at the same clock as CPUs, when there was no OpenGL - etc, etc.

Cheers,
Mike

Lightwolf
03-28-2013, 04:40 PM
Without being the programmer who wrote the app, we can't know what those threads are really doing.
Maybe one thread is simply working on a single item at a time, and another item is handled by another thread, so they don't conflict with each other..
Well, it is easy to observe the CPU usage when doing certain operations. There's a lot that can be deduced that way - without having seen the code.

Cheers,
Mike

Mastoy
03-28-2013, 04:40 PM
They use a new geometry core... thus the lag when starting the new tools and when exiting it - that's for copying the mesh information.

Mike

I did not know that ! Very interesting !

netstile123
03-28-2013, 05:08 PM
Why is it that in Task Manager, when you manipulate an object, the CPU usage history shows more than one CPU core or thread working? Or are we talking video card cores here?

Dexter2999
03-28-2013, 05:45 PM
Quad of what?
4 points/polygons split to 4 cores?

Seriously? Is that meant to be sarcastic, or merely insulting?
A quad split of a data set, such as X-,Y-; X-,Y+; X+,Y-; X+,Y+.

You have eight cores? Why not throw the Z axis into the equation.



You're not programmer? That's rather obvious reading you..

That would be correct. I studied programming for two years and decided it wasn't something I wanted to spend my life doing.



And that's why programmers optimize areas that have sense to optimize.

Perhaps some programmers do that. Others go in and do what their bosses tell them to do.


Who said so?
If something is easy to multi-thread, it's programmed to use it.


I never said anything about it being easy. If I simplified my argument to the point of making it seem like the process would be easy, that was my error. What I did say is that single-core coding for large data sets is at end of life in today's market. It can/will/must be addressed.


Not sure I appreciate the TONE in this thread.

Tobian
03-28-2013, 08:12 PM
Well, I know I don't like the 'tone' of this thread. Calling LightWave antiquated and old-fashioned because it doesn't support multi-threaded modelling... when NO application does (save Houdini and some simplistic deforms) is very annoying. It is not a LightWave problem, it is a computing problem. Half of the computing world is trying to solve multithreading, not just NewTek :)

As I've said several times, in, yes, a sarcastic tone: no one really does it. The reason LW is slow is *not* because it's not handling things in a multi-threaded way. Yes, they could start converting them all, but I'd rather they start by making Modeler more modern and more efficient at handling single-threaded modelling first. Making things in Modeler multi-threaded right now wouldn't actually help, because it's not the transforms which are slow, but the data architecture. The new geometry core is a step in the right direction.

OFF
03-28-2013, 08:32 PM
For me it's not important how better performance in Modeler is achieved - through multithreading or by improving the OGL code - but the current level of performance is quite insufficient for "hard-n-heavy" projects.

Sensei
03-29-2013, 03:44 AM
Seriously? Is that meant to be sarcastic, or merely insulting?
A quad split of a data set, such as X-,Y-; X-,Y+; X+,Y-; X+,Y+.

You have eight cores? Why not throw the Z axis into the equation.

That kind of splitting is one of the worst ideas. What if the entire geometry is in positive X, positive Y, positive Z? Only one core will get it.. Additionally, if you move or modify the location of the geometry, does a vertex that was in one core migrate to another core? And at some point all the vertices end up in just one core and the remaining 7 are empty. What if a polygon uses vertices from all the cores - which core is responsible for it? What if all polygons cross a boundary, as happens with a cube?

The idea of quad-splitting the image in a renderer is bad too. 1-2 cores will get the sky and finish instantly, and then 3-4 (or just one of them) will be crunching the real object. It all depends on what is placed on screen and where.
That's why in the previous post I talked about splitting per row - there is less chance of one thread going idle while others are working hard. But once the number of cores reaches the height of the image, the problem returns.

You're expecting that the data will be uniformly spaced. But that's a very, very unusual case.



That would be correct. I studied programming for two years and decided it wasn't something I wanted to spend my life doing.


I have been programming computers for 27 years..



Perhaps some programmers do that. Others go in do what their bosses tell them to do.


That depends on whether the bosses are programmers too, and so know what is possible and what is not.



I never said anything about it being easy. If I simplified my argument to the point of making it seem like the process would be easy, that was my error. What I did say is that single-core coding for large data sets is at end of life in today's market. It can/will/must be addressed.

I don't see how, in a 3D application, especially one that has no modifier stack and no history, modeling tools could use general multi-threading..
Multi-threading can be used, for instance, to freeze sub-patches to polygons. That should be easy to do: the input mesh with sub-patches is treated as read-only, and each core collects its own output of frozen geometry. When they all finish, the only thing needed is gathering the data. The threads don't interfere with each other's work; no synchronization is needed.

I am still not sure what kind of multi-threading you're talking about - adding it to each tool one by one, rewriting them completely, so that a tool looks like:

Mesh::Build( ... )
{
    // start threads
    Threads::Start( Mesh::ThreadFunc );
}

Mesh::ThreadFunc( ... )
{
    // do stuff
}

which means 100 modeling tools = 100 independent rewrites.

or some general one in core.

How would a modeling tool use general multi-threading? Simply HOW?

Red_Oddity
03-29-2013, 04:21 AM
Modo 701 showed a demo that had a scene with 2.7 billion polygons in it and they were able to make a selection and bevel some polys pretty interactively. Lightwave needs much improvement in this area.

Actually, 15 million polygons (still insanely impressive, mind you); the 2.7 billion is mostly because of the use of replicators (instances) and render subD levels being higher than GL subD levels.

dsol
03-29-2013, 06:13 AM
Well I know I don't like the 'tone' of this thread. Calling LightWave antiquated and old fashioned because it doesn't support multi-threaded modelling... when NO application does (save Houdini and some simplistic deforms) is very annoying. It is not a Lightwave problem, it is a computing problem. Half of the computing world is trying to solve multi-threaded, not just Newtek :)

As I've said several times, in, yes, a sarcastic tone, No one really does it. The reason LW is slow is *not* because it's not handling things in a multi-threaded way. Yes they could start converting them all, but I;d rather they start just making Modeler more modern and more efficient in handling single threaded modelling first. Making things in Modeler multi-threaded right now wouldn't actually help, because it's not the transforms which are slow, but the data architecture. The new geometry core is a step in the right direction.

Exactly. The new tools are a huge leap forward and feel ridiculously smooth compared to the old ones. I don't know how pervasively threaded they are, or if they can even support greater use of parallelism, but right now they feel smooth and responsive - so who cares whether they're running on a single core or not? Apart from Houdini, ZBrush is probably the most extensively threaded app I've used, and the performance is amazing. That said, it kind of has to be, as it uses a software renderer for display (which is actually a good thing for showing uber-high-density meshes).

Parallelism is a fundamental problem with no easy solution. Some tasks lend themselves to it, and for those - yes, it should be supported. For others you just have to make it as efficient as possible on a single core, while making sure that no one task locks the UI or impedes the performance of other tools. I'm assuming that LW runs its UI and viewport code on the main thread, which is why a single plugin can lock up everything (using Bullet in Layout, the UI slows to a crawl while it's calculating). If that could be changed - the UI and other components running on separate threads - you'd get a lot more of "teh snappyz" back. But I appreciate that's a pretty major architectural shift (an MVC model) - one akin to the jump from Android 2.3 to 4.1 (Android had the same issue with the display code tied to the main thread - fixed now).

Hail
03-29-2013, 08:34 AM
Errhh.. weren't we told core was fully multithreaded?

probiner
03-29-2013, 10:03 AM
So, out of curiosity... Each Layout object can use a different core right? What about in Modeler, each Layer, each undo copy, each vertex map, etc using a different core?

Cheers

Lightwolf
03-29-2013, 10:30 AM
So, out of curiosity... Each Layout object can use a different core right?
Yes and no, mostly no though. I.e. displacements are threaded per mesh. And you still have dependencies within Layout that enforce certain orders of evaluation.


What about in Modeler, each Layer, each undo copy, each vertex map, etc using a different core?

That's not how multi-threading works, and it wouldn't make any sense either. I.e. vertex maps are attributes of vertices... giving each a different core wouldn't help at all (I can't even think of a way it would make sense to start with).
The same really goes with layers... since you usually work in one only anyhow.

In the end it would be up to individual tools in Modeler... along with an infrastructure that makes it easier to allow for multi-threading and maybe some hooks that use multiple threads on demand (@Sensei: I.e. some of the scanning functions with callbacks).

Cheers,
Mike

- - - Updated - - -


Exactly. The new tools are a huge leap forward and feel ridiculously smooth compared to the old ones. I don't know how pervasively threaded they are,...
Afaik not at all. It's "just" more modern data structures.

Cheers,
Mike

Lightwolf
03-29-2013, 10:32 AM
Errhh.. weren't we told core was fully multithreaded?
No, that was only assumed. The wording was "multi-cpu and gpu aware" (or something along those lines) - which tells you little about how they're actually used.

Cheers,
Mike

akademus
03-29-2013, 11:05 AM
Being a modeler myself, I know for sure LW Modeler didn't receive even a fraction of the love Layout did in past years. And it's completely understandable: I don't see how beveling 2 million polygons could be attractive enough to make me buy a piece of software. LW is seeing steady growth and is back on track, so I'm sure we'll see some huge improvements in Modeler as well.

I think that is exactly why they are bundling it with LWCAD, and I wouldn't be surprised to see integration of the two. I think Viktor and the other developers proved Modeler is capable of much more than it shows now.

I wish some things were multithreaded as well, but rare are the functions so calculation-heavy that they take 2-3 hours on any modern computer. It's usually below 10 minutes, which gives you a great excuse to go for a walk, look through the window, or read something interesting.


zapper1998
03-29-2013, 12:41 PM
modeler runs fine here, with multi million polygon models...
i have no problems....

i did OC my ram and that did seem to help a lot..

geo_n
03-29-2013, 08:55 PM
modeler runs fine here, with multi million polygon models...
i have no problems....

i did OC my ram and that did seem to help a lot..

Try editing it, not just tumbling.

In any case, LightWave needs to boost its content creation capabilities. Maya 2013:

http://www.youtube.com/watch?v=L5fOwSmSaW8

OFF
03-29-2013, 09:49 PM
modeler runs fine here, with multi million polygon models...
i have no problems....

i did OC my ram and that did seem to help a lot..
Which computer configuration do you use, and what is your graphics card? I have 32 gigabytes of memory and a 550 Ti video card, and it's not enough to comfortably work on 200,000-polygon objects with dozens of surfaces. Even an object like a tree, with only two surfaces and 500,000 polygons, is very heavy in Modeler on my system.

probiner
03-31-2013, 08:46 AM
The same really goes with layers... since you usually work in one only anyhow.


In the current modeling paradigm, no. Well, at least not me. First, there are all the layers used while building the model, with all the background-layer features, and the non-existent stack forces one to keep copies of each step so one can go back and redo things with changes. Then there's the whole separation of meshes into moving parts. So for me, in the course of actually modeling, on average I use all of the first 10 layers, and when I'm done I usually end up with 2-4 layers of moving parts and layer 10 with construction-mesh leftovers.
Though I would agree that layer 1 usually gets the heavier mesh, there's a lot going on while modeling.

Cheers