
What development of LW is needed to reach the speed and power of XSI ICE?



AbstractTech3D
10-07-2012, 03:41 PM
(Not sure if this discussion will be allowed…)

As one who regularly reaches the limits of LW nodal motion and displacement control, and has seen the power and speed of XSI ICE… I long for more. (My fundamental hope for Core had been such).

However, I despise AD licensing terms + international pricing policy, and am not confident of the product's future in AD hands.


1) What developments of LW are needed to reach the speed and comprehensive power & reach of XSI ICE?
2) Are those developments likely?

Phil
10-07-2012, 04:40 PM
I'd suggest filing requests for features and enhancements that you need with [email protected] The only way to effectively influence the development direction of LW is to file feature requests (detailed, with use cases and reference information, where relevant) in the system that NT uses to solicit this kind of feedback.

A long discussion here will not, in general, be seen by the development team - there are just too many threads in the forum for NT to keep track of, and it is certainly not a usable metric for them. :)

erikals
10-07-2012, 05:54 PM
i've been wondering about this myself, but until we see some heavy core technology integration it will be hard to say.
i think that's far-far in the future.

geo_n
10-07-2012, 07:31 PM
1) What developments of LW are needed to reach the speed and comprehensive power & reach of XSI ICE?
Imho, increase the price of LightWave after sorting out the split-app issue. You can't increase the price (to AD levels) and expect non-LWers to adapt with the oldest issue in LW still existing. Many people will complain about the price, but there's no way to pay for programmers to speed up development without more money. I don't like AD policy either, and it's one reason I have not bought a personal license from them even though I've spent nearly as much on LW plus plugins. But as far as AD software itself goes, no major complaints.

2) Are those developments likely?
With more money for development, anything is possible.

AbstractTech3D
10-07-2012, 07:32 PM
I'm not particularly querying specific features or enhancements (there are more than enough threads on those), but rather sweeping 'under the hood' engine changes - the fundamental tech of which 'might' already exist in Core tech (I hope and wonder!), waiting to be integrated…

edit…. price increases are of course not possible for HC members for the next 4 releases

geo_n
10-07-2012, 07:51 PM
Don't know what is in Core tech that they talk about around the forums, but the software itself, LW Core, was not even beta. VPR is not Core tech, and an ex-LW dev already said it could have been integrated as early as LW 8 but wasn't, and LW 8 pre-dates LW Core. A unified node architecture probably will not happen without merging the two apps. I've asked Sensei his opinion about this stuff but no answer. Maybe it's impossible without starting from scratch. :D
I'm in near-beta with another piece of software that has a full nodal architecture. Three guys are developing it. I'm trying to infuse as much LightWave and AD software into it as I can, and it's actually like LW Core mixed with some AD soft.
But I've come to realize it's nearly impossible for development to go as fast without more devs.

Sensei
10-07-2012, 10:45 PM
Show something in ICE that is not doable in LW, then we can discuss…

3D Kiwi
10-07-2012, 11:34 PM
Are you for real? Do you live under a rock???

AbstractTech3D
10-07-2012, 11:49 PM
https://vimeo.com/47852627
https://vimeo.com/36709750 (http://www.myshli.com/2012/02/26/ice-wind-tunnel/)
http://www.mootzoid.com/wb/pages/softimagexsi/emnewton2.php - not that it's mathematically impossible in LW, but I'd bet the LW engine would just be too impractically slow

Have a look around mootzoid.com for some impressive stuff.

And then of course there is Lagoa...

Sensei
10-08-2012, 01:20 AM
https://vimeo.com/47852627
https://vimeo.com/36709750

Aren't there nodes in DP_Kit to get the point position at an index, and to find the closest position to a given position?

Not to mention that instead of literally copying these videos, you can simply make nulls and get their positions instead of reading/finding points…

vncnt
10-08-2012, 02:15 AM
https://vimeo.com/47852627


I stopped watching the video when there were 15 (!!!) nodes on the screen just to solve an issue using morph for eye blink.

I understand some of us want flexibility by asking for universal tools, but a more user-friendly approach would be a turnkey solution for (morphed) eye blinks. Because that is what seems to be the goal in this case, right? And it's needed so many times when you want character animation.

OPTION #1
Use a bone and a weightmap to drive the eyelid and a morph for the wrinkles, driven by the bone rotation.
LW ISSUE: we have no weightmap or morph editing in Layout, so there is no feedback of the result using the current rig. If they could rebuild this in Layout I would never ask for a unified application.

OPTION #2
Newtek could change the current morph system into separate channels for X, Y, Z so that you can drive these with a phase shift via a bone rotation.
LW ISSUE: without a phase-shift system the setup becomes more complicated for simple folks like me. This should be a turnkey system (plugin).

AbstractTech3D
10-08-2012, 02:23 AM
Well, maybe it's just me not being smart enough yet. But I've so far not been able to replicate the XSI ICE rotational morph setup using Point Info inside LW.

You're most welcome to show me how it's done! :)

pooby
10-08-2012, 02:38 AM
I stopped watching the video when there were 15 (!!!) nodes on the screen just to solve an issue using morph for eye blink.

Hello.

I'm the guy that did that video.
It must be said that I am showing the programming of the tools, not the tools' usage. For example, the rotational morph is to demonstrate a concept that can be used when programming a deformer.
In no way do you need to go through that when applying the tool, so please do not confuse the two matters. It would be the equivalent of watching someone code the LightWave morph mixer in order to apply it.
This video hopefully demonstrates the difference. Once a tool is finished, it can be put on the menu and used like any other tool. It takes a few minutes to program the deformer and about 2 seconds to apply it.
You do not have to access the nodes to apply and use it.

vncnt
10-08-2012, 03:25 AM
I think your video was excellent.

My point was that we should not need such a complicated setup for such a common problem.
Even with fewer nodes.

pooby
10-08-2012, 03:39 AM
Thanks,

And my point is that the setup is irrelevant if it's simply part of the programming and not part of the workflow of the artist using the final tool.
You shouldn't worry about it any more than you worry about the lines of code that go into making the morph mixer when you use it.

The confusion with ICE comes from the fact that the programming aspect is so transparent to the user and so easy compared to coding, and can be done on the fly, that the line between using and programming can be blurred.

vncnt
10-08-2012, 03:57 AM
... the line between using and programming can be blurred.

This is exactly the problem I experience.

I don't mind some programming (with Unity - I never scripted for LW), but in LW I've been trying for years to focus on animation and story without spending 99% of my time on modeling and rigging. Just enough to build my own models.

That's the reason I prefer NT to focus on user-friendly tools.

pooby
10-08-2012, 04:17 AM
No. You're missing the point… it doesn't HAVE to be blurred if you just package tools up like I did in the last video.
It simply means that users have easy access to change the tools IF they want to.
There is nothing that is any more complex in an ICE tool than in any other tool. It's just that you can open up the hood and access the component parts IF you want to. If you don't want to, then you don't. You just use the tool.

Is that hard to understand? I thought I was being quite clear on it.


You can either have a tool that is unchangeable, i.e. the morph mixer, or you can have an ICE one that works exactly the same way from the interface, without any nodal interaction from the user.

BUT if you want to (you aren't obliged to), you can easily adapt it to accept weightmap input, or customise it in any other way you like without coding anything.

Most of the people who use my tools have no idea how they work, and they don't need to. What ICE has done for the Softimage user base is provide an environment where tools can be made easily. It has resulted in free tools coming out at a faster rate than Flay.com in its glory days. Many are hosted here: http://rray.de/xsi/

vncnt
10-08-2012, 04:59 AM
Ok, sorry I missed that.
Must be the flu.

Indeed an implementation like that would lower the threshold to dive in.

Sensei
10-08-2012, 07:34 AM
Hello.

I'm the guy that did that video.
It must be said that I am showing the programming of the tools, not the tools' usage. For example, the rotational morph is to demonstrate a concept that can be used when programming a deformer.
In no way do you need to go through that when applying the tool, so please do not confuse the two matters. It would be the equivalent of watching someone code the LightWave morph mixer in order to apply it.
This video hopefully demonstrates the difference. Once a tool is finished, it can be put on the menu and used like any other tool. It takes a few minutes to program the deformer and about 2 seconds to apply it.
You do not have to access the nodes to apply and use it.

So it's pretty much like TrueGroup - build a node tree, store it somewhere on disk, then import it into the project where it's needed and reuse what was used in a hundred other projects…

And then you can use this:
108422

where TrueGroup is actually doing internal stuff (not needed to know how):
108421

jwiede
10-08-2012, 11:13 AM
So it's pretty much like TrueGroup - build a node tree, store it somewhere on disk, then import it into the project where it's needed and reuse what was used in a hundred other projects…
Except that because compound support is built-in, any XSI user can use them (unlike TG where only TG owners can). Another difference is that XSI allows for effective distribution of "protected" compounds, so you don't have to expose _everything_ to the user, which allows for commercial distribution of compounds. Finally, there's the obvious difference that you really can program "tools" using nodes in XSI -- e.g. you really could write a morphmixer-like tool, with UI, etc. -- LW doesn't allow node networks that level of autonomy and user interaction. ICE's infrastructure-level integration in XSI means that nodes can do much, much more there than in LW (where nodes are integrated above the infrastructure level).

Surrealist.
10-08-2012, 11:50 AM
1) More resources than I think are available within a reasonable time frame.
2) See #1

Options:

1) Stick with SI (considering the alternatives are pretty much nil, how bad off will you really be?)
2) Look into Houdini if you can stomach it. The only other, and potentially even more powerful, solution within any reasonable time frame.

I am sure the developers of NT have had great ideas in this area. The intent and the desire is there I think, but competing with something like ICE anytime soon is not realistic. Something different, probably along the same lines, but who knows what that will be.

Sensei
10-08-2012, 01:06 PM
Except that because compound support is built-in, any XSI user can use them (unlike TG where only TG owners can).

You have to pay for XSI to use XSI solutions.
You have to pay for an LW upgrade to use future LW.
$30 for TG is much less than either of these.


Finally, there's the obvious difference that you really can program "tools" using nodes in XSI -- e.g. you really could write a morphmixer-like tool, with UI, etc. -- LW doesn't allow node networks that level of autonomy and user interaction.

But that's just a matter of adding a node-generating GUI. A week of programming or so…

Any 3rd-party developer has always been able to do that in a plugin - the proof is TrueGroup, GlobalMaterials and the node editors in DP_Kit.

If somebody knows how to program, he should always go the native plugin way rather than connecting components, because of the speed of execution (faster rendering).

Selling commercially protected, secret node trees is nonsense. It renders/works slower for the end user. He has to pay the same for less than he would for native compiled C/C++ code.


ICE's infrastructure-level integration in XSI means that nodes can do much, much more there than in LW (where nodes are integrated above the infrastructure level).

I disagree.
They're at a similar level.
Nodes in LW are just another LWSDK global (a library of functions), like the LWItemInfo global returning info about items, LWObjectInfo about geometric meshes, etc. etc.
A node has to call an LWSDK global to query a mesh and so on; an ICE node does the same, but with the XSI-equivalent library function.

Dexter2999
10-08-2012, 02:04 PM
Putting the argument about specific tools aside, LW has introduced Python with the idea that this will expand the LW toolset. There are currently a few active members of the board signed up for an online Python programming class. Big Hache is also creating a series of Python scripting videos for beginners as he learns himself. This is all a big step toward creating a community that can create new tools to exploit this new addition to LW.

I think what is attractive about ICE is that it is somewhat of a nodal graphical interface to do the same thing. Rather than being bogged down in reading lines of code, you pull in functions and create the function path. In the end it makes a tool.

I think if LW had a GUI to facilitate Python in a similar manner it would be a good way to get users to start creating tools who would otherwise be intimidated looking at lines of code. Pull in the blocks, connect the dots, compile, use the tool.
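As a rough illustration of the idea (plain C++ for the sake of example - this is not the actual LW Python or ICE API, and the block names are invented): a visual node graph is essentially just functions wired together, which is why a graphical front end can stand in for lines of code.

#include <functional>
#include <iostream>

// Each "node" is just a function; "wiring" nodes together is composition.
using Node = std::function<double(double)>;

// Connect two nodes: the output of `a` feeds the input of `b`.
Node connect(Node a, Node b)
{
    return [a, b](double x) { return b(a(x)); };
}

int main()
{
    Node scale  = [](double x) { return x * 2.0; };   // "Multiply by 2" block
    Node offset = [](double x) { return x + 1.0; };   // "Add 1" block

    // The "tool" is the finished graph; the user only ever calls it.
    Node tool = connect(scale, offset);

    std::cout << tool(3.0) << "\n";   // prints 7 (3*2 + 1)
}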

But this isn't anything new, many people have been clamoring on the boards for "nodal everything".

zarti
10-08-2012, 02:11 PM
So it's pretty much like TrueGroup - build node tree, then store it somewhere on disk, and then import in project where it's needed and use what was used in other hundred projects..
+

You have to pay for XSI, to use XSI solutions.
You have to pay for LW upgrade to use future LW.
30 usd for TG is much less than any of these.

..

please , with all due respect to you as the TG 's developer , do not make it sound so easy .

a $30 thingie does the same as the whole architecture of a software ( ?! )

i have used your plugin , so it is better you mention its limits too ..

also , if you 'pass' the arguments later to the host app ( LW ) as a limited technology .. well .. that's the point of this thread " to make LW BETTER " .

NO !? ;)

50one
10-08-2012, 02:21 PM
So...there you have it guys and gals!
Simple answer to that question - making LightWave's capabilities on par with XSI is a matter of $30 :D

I'll have beer now.

Surrealist.
10-08-2012, 02:34 PM
You don't have to be a programmer to use ICE or develop your own tools within ICE. There is so much available to a user in ICE without even getting into the SDK. You don't even have to use Python, much less any other programming language at all. It is all there already in the existing GUI.

Remember ICE stands for Interactive Creative Environment. (for the average end user not a software engineer)

But if it is really not such a big deal to write something as a plugin to LightWave then the door is wide open. Lots of money to be made. Because if you could write a plugin that allowed users the same functionality within LightWave (for the average user) or even remotely close, then you'd really have something.

But before you could do that you'd have to first understand all that ICE can do exactly. And it seems so far that is not understood here.

I think the OP understands, hence the question. And the answer is a resounding no. Unless someone wants to take it on as a massive multi-year 3P project. So even if a programmer, or a team of them, took it on, we would not see it for years to come.

It is not nearly as simple nor as limited as what is being discussed here. At all. In fact that aspect should not even be in question.

The question is not "Can LightWave do this with existing plugins?". That is not even a question. The question is: when, if ever, will LightWave have anything nearly as close?

If anyone wants to brush up on it, there are videos all over the net on ICE and in particular Lagoa which is fairly awesome just in itself. Then there is Syflex on ICE, ICE modeling, ICE rigging, completely aside from the usual dynamics and particles.

AbstractTech3D
10-08-2012, 02:35 PM
Putting the argument about specific tools aside, LW has introduced Python with the idea that this will expand the LW toolset. There are currently a few active members of the board signed up for an online Python programming class. Big Hache is also creating a series of Python scripting videos for beginners as he learns himself. This is all a big step toward creating a community that can create new tools to exploit this new addition to LW.

I think what is attractive about ICE is that it is somewhat of a nodal graphical interface to do the same thing. Rather than being bogged down in reading lines of code, you pull in functions and create the function path. In the end it makes a tool.

I think if LW had a GUI to facilitate Python in a similar manner it would be a good way to get users to start creating tools who would otherwise be intimidated looking at lines of code. Pull in the blocks, connect the dots, compile, use the tool.

But this isn't anything new, many people have been clamoring on the boards for "nodal everything".

The XSI ICE architecture is extremely fast. In fact I've seen examples where traditionally coded tools have been recreated in ICE, and the ICE versions significantly outperform the traditional coded versions.
Additionally the level of architecture exposure is vast and fundamental.

Correct me if I'm wrong, but I had understood that Python in LW is similar in scope and speed only to LScript (sitting a layer or two above the fundamental architecture), with limited architecture exposure. A good high level of control, accessible to those in the industry commonly using Python elsewhere - but very unlikely to be able to build a tool like Lagoa (or this http://www.mootzoid.com/wb/pages/sof.../emnewton2.php) with it (and expect performance).

zarti
10-08-2012, 02:50 PM
the usage of Python ( in any major app ) mostly consists of doing tasks practically unacceptable for a human being =)

example : assigning a channel modifier to 200 items , tweaking them all together , and such ..

not for computational purposes .




.cheers

Sensei
10-08-2012, 02:57 PM
In fact I've seen examples where traditionally coded tools have been recreated in ICE, and the ICE versions significantly outperform the traditional coded versions.

The programmer (should I say 3D artist?) using native C/C++ had no idea how to write the program, due to his lack of knowledge.
So he had no idea what the ICE node used in the "ICE version" was doing.
That's the only answer.

Otherwise it's like "exceeding the speed of light" in physics.

Sensei
10-08-2012, 03:07 PM
Example:

The node for finding the closest point in a mesh uses a kd-tree.
The traditional code has a loop over each point in the mesh and then computes the distance between two vectors. The mesh has 1M points, so that's 1M sqrt() calls, 3M multiplies and 2M additions…
Obviously the node way will (almost always) be faster.
Just (JUST!!!) because the programmer writing the traditional code was ignorant/lazy.

Is that proof that the nodal way was BETTER?

No.

It's just proof that the programmer was weak. Unpaid or so. Who would waste hours optimizing code for nothing?
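For reference, a minimal sketch of the brute-force version described above (plain C++ for illustration only, not LWSDK or ICE code): even this lazy version can drop the per-point sqrt() by comparing squared distances, and replacing the loop with a kd-tree query is what turns the O(N) scan into roughly O(log N) per lookup.

#include <cstddef>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

// Brute-force closest point: one pass over every vertex in the mesh,
// comparing squared distances so no sqrt() is needed inside the loop.
std::size_t closestPointIndex(const std::vector<Vec3>& mesh, const Vec3& p)
{
    std::size_t best = 0;
    float bestDistSq = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < mesh.size(); ++i) {
        const float dx = mesh[i].x - p.x;
        const float dy = mesh[i].y - p.y;
        const float dz = mesh[i].z - p.z;
        const float distSq = dx * dx + dy * dy + dz * dz;  // 3 multiplies, 2 adds
        if (distSq < bestDistSq) {
            bestDistSq = distSq;
            best = i;
        }
    }
    return best;  // at most one sqrt() afterwards, if the caller wants the real distance
}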

zarti
10-08-2012, 03:14 PM
In fact I've seen examples where traditionally coded tools have been recreated in ICE, and the ICE versions significantly outperform the traditional coded versions.

ohhh , boy !!

i bet you haven't seen this yet

python and GPU faster than ICE ..

{ NANODE }

.. yep ! nodes everywhere =)

the big Q is : can NT bring this to LW ??

--


https://vimeo.com/47892648

--



.dream on

AbstractTech3D
10-08-2012, 04:21 PM
Why should they not bring it to LW?

Hail
10-08-2012, 04:35 PM
Guys! Guys!!! XSI ICE WINs!!!
flawless victory!
Fatality!!

Surrealist.
10-08-2012, 05:40 PM
lol

Inevitable debate, here it seems.


Why should they not bring it to LW?

I had the idea way back when that nodes were going to be a big part of the next LightWave. The problem is that now they have decided to scale back development projections quite a bit. I am not sure how extensive the original plans for nodes were. But if the current development rate is any yardstick, I'd say that if they decided to work only on a code base that supported something half as powerful as ICE, with even 1/4 of the functionality, we could be looking at 5 years' time. Just for that. Never mind any other improvements.

However in reality, what you might expect to see is a gradual development of the nodal system over time to include more areas of LightWave. You will probably see this alongside other little bits of functionality, little by little. I cannot imagine it being a major change in one shot. And if there is such a change in the works, it would only be at a very basic level without a lot of functionality yet. In other words, a major release being a roll-out of "LightWave Nodal", with all-around functionality in place under the hood, so to speak, but a limited number of tools taking advantage of it. And then promises for future releases to invoke it in more tools. Just a guess, to give you an idea.

That's just from being on the sidelines observing and then finally using LightWave 11 after, what, 3 years of development time? We got Bullet Physics, a fracture tool, some FFX enhancements, render enhancements, tidbits here and there. I mean, a great release. But that was 3 years. Compared to the task of something like ICE. An "ICE" project would be monumental by comparison, I would think. Especially when you consider all it can do out of the box.

fablefox
10-08-2012, 09:12 PM
From where I can see it, LW development is focusing on:

a) becoming a pipeline tool
b) doing things other tools don't do yet.

Hence the slow updates on modelling, since people who want to model already own modo. That's also why LW decided to play nice with GoZ - people who want to model already own ZBrush.

People who need ICE already own Softimage. But I guess that is how LW can get a foothold. They cannot play catch-up; they can only play the "you want to do this? buy me" game.

3D Kiwi
10-08-2012, 09:23 PM
But what can LightWave do that other software can't?

m.d.
10-08-2012, 10:59 PM
It would be nice if we could turn a node network into a menu button… much the way you can with MEL and Houdini assets.

A question for the technical people…
How fast do LightWave nodes compute compared to compiled plugins? I would assume slower, but I know that with Houdini VEX, nodal is pretty much just as fast.

pooby
10-09-2012, 01:51 AM
Rather than building its own ICE-like tool, LightWave would benefit from hooking up to Fabric Engine (not a cloth sim): http://fabricengine.com/ It is a 3D programming tool that (amongst other things) allows extremely fast 3D processing to be offloaded from a DCC package to Fabric, and it allows custom applications to be written in dynamic languages such as Python that run nearly as fast as if they were written in C++.

It is possible to run Fabric code (plugins) from within Maya or Softimage (currently) that is independent of the host package's code but that interacts with the host package.
The concept is demonstrated on this page: http://fabricengine.com/creation/integrations/

It would mean that LightWave could host the exact same 'plugins' that Maya and Softimage will use, and the work is interchangeable between packages. This is not a baking import/export tool like MDD or Alembic; it's the exact same code running independently of the application it is running within.

Fabric is very new, in beta, and you may not have heard of it yet. It is being developed by some of the team behind ICE, and the signs are that it is going to be huge in the CGI industry in a few years. It solves so many pipeline issues. Its speed is phenomenal, and it will likely grow into a DCC package itself over time.

Surrealist.
10-09-2012, 02:10 AM
But what can LightWave do that other software can't?

There is a lot of difference in functionality and workflow, I think, with LightWave. I am not sure about the virtual production tools. That might be one thing many of the other 3D apps don't have (out of the box). But generally speaking, I think it is not what it can do but how it does it, and the workflow it allows you, which in many ways is simpler and in many ways cost-effective.

beverins
10-09-2012, 02:16 PM
+ 4000 for integrating a 100% workable solution in LW for Fabric Engine. Holy smokes.

AbstractTech3D
10-09-2012, 03:24 PM
+ 4000 for integrating a 100% workable solution in LW for Fabric Engine. Holy smokes.

How difficult would that be (technically)?

To try to clarify exactly what I think I understand Fabric to be: would Fabric democratize the 3D app scene - giving all the apps potentially relatively equal power, toolsets, and high performance, as they move towards essentially becoming an interface layer sitting above the Fabric Engine, to which the workload would increasingly be offloaded?

Would that put LW in a strong position, with its current price point, relatively broad solution offering and integrated renderer?

How would it work with the non-unified LW environment? Would it mean modelling possibilities inside Layout? History stack possibilities?

If it indeed becomes commonly used in conjunction with Maya, XSI and many others - there would be an enormous range of toolsets that would become available to all, would there not?


Could be quite a game changer.

pooby
10-09-2012, 03:57 PM
Fabric indeed promises that for the tools built with it.
If LightWave has all the hooks that Fabric needs and someone makes the adaptor CAPI part, then it can run Fabric applications. I suspect, however, that it will be hard to do in LW, at least for geometry creation tools, as Layout can't really handle that kind of thing.
If it worked for deformers though, there is no reason I can see why a Fabric-based modifier stack could not be made. It would take the mesh in Layout, put it through a Fabric-based stack with Fabric deformers, and write it back to Layout as one final deformer, circumventing Layout's own lack of a stack.
And it could feel to the user like it is LightWave doing the work, as it could use LW nulls, bones etc. to be fed in to aid the deforms. It all depends on how the tools are designed.
If I were newtek I would look very seriously at implementing the adaptor.
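To make the stack idea concrete, a rough conceptual sketch (plain C++, not Fabric or LWSDK code - all the names are made up): a list of deformers run in order over the mesh points, with the whole chain handed back to the host as one final deformation pass.

#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

// A "deformer" is just a function that modifies the points in place.
using Deformer = std::function<void(std::vector<Vec3>&)>;

// The "stack": deformers run in order, and the host app only ever
// sees the final result written back as a single deformation pass.
struct DeformerStack {
    std::vector<Deformer> stack;

    void apply(std::vector<Vec3>& points) const {
        for (const auto& d : stack)
            d(points);
    }
};

int main() {
    std::vector<Vec3> mesh = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};

    DeformerStack s;
    s.stack.push_back([](std::vector<Vec3>& pts) {        // push up
        for (auto& p : pts) p.y += 0.5f;
    });
    s.stack.push_back([](std::vector<Vec3>& pts) {        // scale on X
        for (auto& p : pts) p.x *= 2.0f;
    });

    s.apply(mesh);  // the host would call this once per evaluation and get the final mesh back
}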

Another exciting thing is that Fabric will likely evolve into a DCC app itself, but one that could be modular and totally customisable, and it is not setting itself up as a complete alternative that one has to switch over to.
For example, you could write your own painting module if you didn't like the one they had made themselves, or adapt and edit theirs to suit your needs, and use as many modules as you like to supplement your pipeline, either in your own app or in Fabric standalone.
I believe that nothing the Fabric team makes is black-boxed except the core engine, which you don't need to access anyway. However, developers can black-box their own Fabric creations if they wish to protect the source code of their tool.

I don't think Autodesk is going to like Fabric. And it's a good thing for them to have some serious competition from all sides when Fabric starts ramping up and making a splash. Which I think will really start happening over the next year.

Cageman
10-09-2012, 03:57 PM
From the description that Pooby made, it sounds like a sort of DCC API, that if you follow certain rules, the same tools can be run within any environment supporting that API. Just like every game following the DirectX 10 specification can run on all gfx cards supporting it. Maybe a bad comparison because we only have two vendors of GFX cards these days... but I guess you get my sentiment about it.

It sounds like a fantastic thing, really.... and if we consider how flexible LW has become since Nodal was introduced, it could be a good idea for LW3DG to investigate it further.

Cool post, Pooby! Thanks for the tip! :thumbsup:

pooby
10-09-2012, 04:16 PM
From the description that Pooby made, it sounds like a sort of DCC API, that if you follow certain rules, the same tools can be run within any environment supporting that API. :

That's exactly what it is. I think it's the most exciting development in the cgi world for a very long time. Something truly innovative.

Cageman
10-09-2012, 04:31 PM
That's exactly what it is. I think it's the most exciting development in the cgi world for a very long time. Something truly innovative.

Talk about playing nice with others! Yeah... this is going to be extremely interesting, and I will surely poke LW3DG about it (I bet they do know about it already though). But getting on this ASAP will most likely be very beneficial!

:)

AbstractTech3D
10-09-2012, 04:36 PM
I will surely poke LW3DG about it

:)

Please do!

Surrealist.
10-09-2012, 04:43 PM
Yeah Fabric makes a lot of sense for LightWave in its current state of development. I would say it would be a must to adapt.

AbstractTech3D
10-09-2012, 04:46 PM
Would Fabric integration at all imply or make possible an ICE type development / control solution? Shared amongst 3D apps?

adk
10-09-2012, 05:26 PM
This does look very very interesting, thanks for the heads up & links pooby.

geo_n
10-09-2012, 07:32 PM
Yeah Fabric makes a lot of sense for LightWave in its current state of development. I would say it would be a must to adapt.

Core could have been a good candidate for Fabric Engine in its early days, with Core's open architecture. Current LightWave would need a miracle.
Anyway, how would NewTek make money by basically offering a skin for Fabric that everyone else would offer, too?

A thread about fabric already existed.
http://forums.newtek.com/showthread.php?129672-Fabric-Engine

Dexter2999
10-09-2012, 07:54 PM
I think there are threads about CORE being dead and buried as well.

No offence intended, but I swear I wish they could build a bot to remove any mention of it.

AbstractTech3D
10-09-2012, 08:08 PM
Anyway, how would NewTek make money by basically offering a skin for Fabric that everyone else would offer, too?

http://forums.newtek.com/showthread.php?129672-Fabric-Engine

The LW skin is a lot cheaper than the AD skins, LW doesn't geographically price-discriminate, and their licensing terms are far more agreeable. Altogether much more attractive.

geo_n
10-09-2012, 08:22 PM
The LW skin is a lot cheaper than the AD skins, LW doesn't geographically price-discriminate, and their licensing terms are far more agreeable. Altogether much more attractive.

Good point. Wonder if the makers of Fabric Engine are also willing to license it to blender. That would be a big threat to AD.

3D Kiwi
10-09-2012, 08:25 PM
Newtek would still have a lot of work to do so LightWave could handle large amounts of data. In tests I did for myself with large scenes and objects etc., Blender and LightWave were the worst.

AbstractTech3D
10-09-2012, 08:39 PM
Good point. Wonder if the makers of Fabric Engine are also willing to license it to blender. That would be a big threat to AD.

The problem though with Blender interface is the Blender interface.

geo_n
10-09-2012, 09:17 PM
Newtek would still have a lot of work to do so LightWave could handle large amounts of data. In tests I did for myself with large scenes and objects etc., Blender and LightWave were the worst.

Data handling is not so bad with 64-bit in Layout. The biggest problem is viewport speed, even in Layout. It's abysmal compared even to Blender.
I'm measuring 10fps in Fraps with nothing in the scene except one subd character with expressions and a complex hierarchy. Add another character and it gets exponentially worse. I'm using cards from as low as an nvidia 220gt, 640gt, 8800gts, 460gt, and the only difference is that the 460gt gets 12fps while the rest are all at 10fps. :D Newtek needs to take advantage of gfx power more. It's not going to handle Fabric Engine interactive speed.

geo_n
10-09-2012, 09:21 PM
The problem though with Blender interface is the Blender interface.

Yeah, even with a photographic memory I can't remember the Blender interface and have to watch a video tut to refresh my memory when I use it. Thousands of people don't have a problem with it though, so it's us, not Blender.

Surrealist.
10-09-2012, 11:35 PM
I was under the impression that whatever Core was is eventually still on the way, but taking a different route. And with the speed of (or lack of) development, by the time anything new arrives in LightWave it ought to have some clever way to stay linked up to other software. It seems that alone is what is keeping LightWave alive at all.

That was my line of thinking anyway, for what it is worth, just to clarify.

OT warning:

As for Blender, the interface is extremely irritating. I am not sure what they were thinking. Sure, it is slick and new and modern, and has some more modern GUI functionality that the old stable, solid interface was missing. But they could have done it much differently IMHO. In fact the old interface was fine. They should have just ignored the cries, changed the color and then added in all the new functions that make the difference, and left all of the solidity of the old one intact. And then flat out lied about it and simply told everyone it was brand new. Then people would have bought it. :D Because all that about the interface before was just urban legend. It was a false impression. Two hours with the manual and all of that goes away. But it didn't look sexy and "appeared" to be difficult. It was not some kind of insider, "only for the few club" kind of thing. It was actually well thought out, just missing some more modern functions, like object lists in modifiers and that kind of thing where you had to type or copy rather than pick. And a few other things that it was lacking that could have simply been added. Change the color, change the fonts, a new OGL look and presto! It's new, everyone! :D

And that comes from using Blender since 2008, the last year or so on the new interface working full time. Full time every day of the week, and I had had enough and was just screaming to get away from using Blender ever again. That led me to where I am now. It drove me to start testing other options, and I gravitated toward AD and in particular Softimage. Thankfully I was making enough with Blender that I could afford the switch. :) I still have to use Blender for work. But I do all of the modeling parts of that in SI. And I dread when I have to go back to Blender. Though there are many features there that I love, so it is a love-hate relationship. :D

And all of that aside. Blender is still pretty amazing. And getting better all the time with a fairly swift development considering the small team.

zarti
10-10-2012, 04:23 AM
hot video ! ( some hours ago )

now with fur

--


http://vimeo.com/51077186

--

any 'sign' from NT would be nice to read here . are they looking into this ? is this an option for LWs future ??




.cheers

pooby
10-10-2012, 05:57 AM
Helge (the developer) only started on that fur system on Friday, so it goes to show how much can be achieved with Fabric in a few days. The team spent a few years building the framework and are now yielding the results of their efforts.
I do not want to drag CORE up again or bash it, but for many of us, this kind of time investment in order to bring future benefits was why we were behind the CORE promise. As it turned out, it was a bit of a mess, so I think Newtek may have done the best thing to retract and go back to familiar ground.
Maybe though Fabric could provide an alternative route.

By the way, there is no dependency on the Fabric team to get this hooked up to LW. Theoretically, if the hooks in the LW SDK are there, somebody could start linking LW up to Fabric today, which happens to be the day of release of the CAPI. The first license is free.

50one
10-10-2012, 06:10 AM
Helge (the developer) only started on that fur system on Friday, so it goes to show how much can be achieved with Fabric in a few days. The team spent a few years building the framework and are now yielding the results of their efforts.
I do not want to drag CORE up again or bash it, but for many of us, this kind of time investment in order to bring future benefits was why we were behind the CORE promise. As it turned out, it was a bit of a mess, so I think Newtek may have done the best thing to retract and go back to familiar ground.
Maybe though Fabric could provide an alternative route.

I have no idea what stage the LW code is at at the moment; I can only judge by the pace of the previous releases versus features/LW capabilities, and I don't want to sound negative or piss on anyone's parade, but I can imagine it will take at least a year or two before we see this in LightWave, as there are other things development needs to take into consideration [as soon as...]. But I would like to be proven wrong....:)

zarti
10-10-2012, 06:54 AM
{ WARNING !!! the CORE word might be mentioned ahead . i adapted it visually so the CORE-Phobia shouldn't happen to the reader . at least i hope ! }

--

as someone mentioned above ; the KORE appears to be the best place where such a techno would be injected .

the fact that NT is silent lately about the progress of KORE's implementation under the CLASSIC's hood , gives hope for the next version .

.. but as it seems to me , a lot of work seems to be done by 'others' ( esp in sucking power from the hardware which we all have paid and use ) .
NT ( or any other app-maker ) should have half of the job already Done .

the most impressive aspect to me is "SPEED" .
the UI or how it IS INTERNALLY BUILT are secondary in LWs context .

Q > does anyone know if the Fabric guys are strategically open 360 degrees to the market ? ..
.. or are there some strategic-political hidden vectors in that techno ??


--

hello NT !
are you there !

=)

pooby
10-10-2012, 06:56 AM
You don't necessarily need Newtek to do anything. As long as the hooks are there in the SDK, any developer could make the integration.

zarti
10-10-2012, 07:06 AM
oh ! fine then !

( beside the SDK part which is in the NTs bag )

we seem to have a lot of enthusiast developers around ..

Sensei .. ?

what are you thinking ? .. =)

50one
10-10-2012, 08:20 AM
oh ! fine then !

( beside the SDK part which is in the NTs bag )

we seem to have a lot of enthusiast developers around ..

Sensei .. ?

what are you thinking ? .. =)

It may be only me, but I would rather pay extra for the LW license and let Lightwave3DG implement this, rather than rely on any 3rd party, no offense Sensei!

zarti
10-10-2012, 08:35 AM
for me personally there is nothing wrong with a 'ninja dev' starting with it and later ( as has always been ) integrate natively .

NTs devs ( i imagine ) are busy with the app itself , which is at a strong turning point in its 'life' .

also having an 'independent' and relatively detailed opinion about where the things are ( + can be ) is good for users & decision makers in general .

..

the ' thing ' is appearing extremely attractive lately .

maybe this attraction will affect many people's ( + studio's ) decisions in the years to come .

sometimes Sooner is Better ..




.cheers

50one
10-10-2012, 08:41 AM
for me personally there is nothing wrong with a 'ninja dev' starting with it and later ( as has always been ) integrate natively .

NTs devs ( i imagine ) are busy with the app itself , which is at a strong turning point in its 'life' .

also having an 'independent' and relatively detailed opinion about where the things are ( + can be ) is good for users & decision makers in general .

..

the ' thing ' is appearing extremely attractive lately .

maybe this attraction will affect many people's ( + studio's ) decisions in the years to come .

sometimes Sooner is Better ..




.cheers


So basically, you're saying that Sensei will provide the time & resources, then Newtek comes in and starts implementing it again from scratch, going through the same trial & error etc. Besides, I don't think that a ninja dev knows all the bits & bobs of the native code, but I'm looking at it more from the project management POV, as I know nothing about the SDK though.

But your example is quite similar to the FPrime situation, and we all know what happened after....:hey:

HDI is in the same ballpark I can imagine....& Ibounce [though Newtek implemented Bullet before the official release of Ibounce, if I'm not mistaken]

Surrealist.
10-10-2012, 10:50 AM
I don't think Newtek would benefit from a 3P solution here, come to think of it. Really, we've been down that road. It would be best if NT got on it ASAP and made this a priority. Once Fabric is in LightWave, they would open up the door to existing development of tools that are not codependent on LightWave. That is the key issue here. With FPrime, Sas, IKB, anything that has been developed outside and/or integrated in some way has suffered as long as Newtek went through changes. The IKB guy is not around (let's not start up that whole thing again, it is what it is), and so there it sits, without further development. In this case, if a team has developed a tool that they have targeted for something say like 3Dmax for rigging and animating characters that kicks A and puts CAT on its end, then LightWave users would benefit from development targeted at the very lucrative and "stable" market of the game industry. Which means a better hope of continued development. And likewise, this would attract talented developers back to the LightWave market. Because if they develop a tool on Fabric, it can also be marketed in the other markets as well. Looks to me like in the long run a win-win for everyone. And as we have been saying, LightWave took a big hit from the old...*&^%... debacle. This is just the kind of thing it needs. This way the team can concentrate on what they are good at. Which by track record in the last 10 years has been keeping LightWave playing well within pipelines. That has been the bread and butter of LightWave as of late. So they can continue to make improvements on the rendering (something they seem to be talented at making easy and fast to use), work on integrating LightWave as a unified app and other things that will help the general condition. I'd personally rather see that than Bullet Physics and other halfway-implemented tools (thinking FFX) that have limited use and function. It is kind of like a piss or get off the pot kind of thing. Like LightWave is in limbo between trying to be something it can't be right now and being better at what it is. So I'd say put the beans in the other pile and get going in that direction. That's kinda how I see it now.

Hail
10-10-2012, 11:00 AM
I don't think Newtek would benefit from a 3P solution here, come to think of it. Really, we've been down that road. It would be best if NT got on it ASAP and made this a priority. Once Fabric is in LightWave, they would open up the door to existing development of tools that are not codependent on LightWave. That is the key issue here. With FPrime, Sas, IKB, anything that has been developed outside and/or integrated in some way has suffered as long as Newtek went through changes. The IKB guy is not around (let's not start up that whole thing again, it is what it is), and so there it sits, without further development. In this case, if a team has developed a tool that they have targeted for something say like 3Dmax for rigging and animating characters that kicks A and puts CAT on its end, then LightWave users would benefit from development targeted at the very lucrative and "stable" market of the game industry. Which means a better hope of continued development. And likewise, this would attract talented developers back to the LightWave market. Because if they develop a tool on Fabric, it can also be marketed in the other markets as well. Looks to me like in the long run a win-win for everyone. And as we have been saying, LightWave took a big hit from the old...*&^%... debacle. This is just the kind of thing it needs. This way the team can concentrate on what they are good at. Which by track record in the last 10 years has been keeping LightWave playing well within pipelines. That has been the bread and butter of LightWave as of late. So they can continue to make improvements on the rendering (something they seem to be talented at making easy and fast to use), work on integrating LightWave as a unified app and other things that will help the general condition. I'd personally rather see that than Bullet Physics and other halfway-implemented tools (thinking FFX) that have limited use and function. It is kind of like a piss or get off the pot kind of thing. Like LightWave is in limbo between trying to be something it can't be right now and being better at what it is. So I'd say put the beans in the other pile and get going in that direction. That's kinda how I see it now.

+100 :)

Sensei
10-10-2012, 11:24 AM
If something works on multiple software/hardware platforms, it's nowhere near using the full potential of any of them. From a math point of view it's called the common denominator.

Take, for example, a renderer which works on platforms X and Y.
How does it work?
It scans all the geometry, all the points, all the data the renderer understands, and it builds a compatible copy of the scene (so there are two - one in the app, a second in the render engine, and maybe a third in OpenGL).
Unique application features are ignored - the renderer doesn't know about them.
If the application doesn't have a feature that the renderer has, it's also not possible to use it.
Once the whole scene is duplicated, the renderer starts working on the data it understands.

Result - it works slower, and it eats more memory.
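A rough sketch of what that duplication means in practice (illustrative C++ only, not any real SDK - the structures are invented): the external engine keeps its own copy of every mesh it understands, so the same points exist twice in memory and have to be re-synced whenever the host changes them.

#include <vector>

struct Vec3 { float x, y, z; };

// What the host app already holds.
struct HostMesh {
    std::vector<Vec3> points;
    // ... plus host-specific features the engine knows nothing about
};

// What the external renderer/engine understands.
struct EngineMesh {
    std::vector<Vec3> points;   // a second copy of the same data
};

// "Scanning" the scene: every point is copied into the engine's own format.
// Anything the engine doesn't understand is simply dropped.
EngineMesh translateForEngine(const HostMesh& host)
{
    EngineMesh out;
    out.points = host.points;   // duplicates memory; must be redone on every change
    return out;
}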

pooby
10-10-2012, 11:48 AM
I can see your point, although I don't agree, but in say a deformer scenario I don't think light waves uniqueness ( no stack etc) is a real asset.
Why would users not want access to more power? That makes little sense to me.
Plus fabric could be used to make lightwave specific tools too. It's very flexible in that regard.

Sensei
10-10-2012, 12:30 PM
I can see your point, although I don't agree,

With what do you not agree?
That it will eat more memory?
That it will be slower (because of the needed duplication of the whole data set)?


but in say a deformer scenario I don't think light waves uniqueness ( no stack etc) is a real asset.
Why would users not want access to more power? That makes little sense to me.
Plus fabric could be used to make lightwave specific tools too. It's very flexible in that regard.

If Fabric takes care of holding and managing the scene data (and the base application doesn't have it duplicated!), what is the "base application" in such a situation? Just GUI and OpenGL.

If people start making application-specific tools on top of the Fabric engine, they will work with just the app they were written for, and not our app, and vice versa - so it'll be like now. Users won't benefit from it.
Users can only benefit when a tool works in multiple applications.
But you know that writing platform-independent software is a real hassle.
Learn from examples such as the Windows incompatibilities - Win 3.1, Win 95, Win 98, Win Me, Win XP, Win Vista, 7 and 8 - and the apps using them...
Learn from the different graphics cards and their drivers and the games using them.
The more specialized the tool, the more issues with compatibility.
A programmer making a Fabric module will have, e.g., Maya with Fabric support; he won't have C4D with Fabric, nor LW with Fabric (list all the apps which might use it), so he won't test his code on them - and it might not work correctly.

Cageman
10-10-2012, 12:48 PM
I can see your point, although I don't agree, but in say a deformer scenario I don't think light waves uniqueness ( no stack etc) is a real asset.
Why would users not want access to more power? That makes little sense to me.
Plus fabric could be used to make lightwave specific tools too. It's very flexible in that regard.

I notice that many here think that NewTek has to change a lot of the architecture to add something like this into it... as far as I can see, they shouldn't need to, except to allow for the hooks in the SDK. Any complex data set that would be thrown at a Fabric node inside of LW wouldn't force LW to do the evaluation; it would be Fabric doing it... at least, that is my understanding of what an API stands for and does for the host software.

To make a comparison.... when using HD-Instance, the user is effectively using a tool that operates outside of LW in terms of what it (HDI) does internally, and it only takes certain inputs, processes those, and sends the result back through the render engine. NewTek didn't have to re-code anything in LW to support HDI; they only needed to provide the right SDK-hooks.

Cageman
10-10-2012, 01:05 PM
Newtek needs to take advantage of gfx power more. It's not going to handle Fabric Engine interactive speed.

This is a common belief, but it is far from the truth. There are other things in LW that need to be done to speed things up in this regard... LW currently has the fastest possible OGL implementation, it is called Buffered VBO (you can read up about it here) (http://www.songho.ca/opengl/gl_vbo.html). There is very little LW3DG can do to make things go faster by trying to optimize OGL even further. While VBO does require resyncing of vertex arrays when something is changed, this operation is certainly fast enough for DCC applications. The limitations only become evident when you want to push data in realtime with lots of FPS, such as in games. That is why DirectX is a much better system for that level of realtime content (but it comes with a bunch of limitations as well), compared to OGL apps.

Sensei
10-10-2012, 01:06 PM
they only needed to provide the right SDK-hooks.

Hooks doing what, literally?

If the Fabric engine is implemented as a volumetric - yep, that can be done without exchanging large data sets between Fabric and LW, so it won't be slowed down by synchronization and duplication..
Such an implementation doesn't need changes in the LWSDK/API.
And the Fabric stuff will be visible in VPR, but not in OpenGL.
To display it in OpenGL there would need to be a gizmo with a draw function, or a custom object attached to some null..

Cageman
10-10-2012, 01:17 PM
Hooks doing what, literally?

If the Fabric engine is implemented as a volumetric - yep, that can be done without exchanging large data sets between Fabric and LW, so it won't be slowed down by synchronization and duplication..

True.. the data transfer will certainly become a bottleneck at some point... but the calculation of those datasets still happens outside of LW; no need for integration.


Such an implementation doesn't need changes in the LWSDK/API.
And the Fabric stuff will be visible in VPR, but not in OpenGL.
To display it in OpenGL there would need to be a gizmo with a draw function, or a custom object attached to some null..

Agreed. And none of this is impossible looking at the current SDK? Are all the hooks there to allow for this to work? Give or take slow data-translation times sending data to Fabric and then back again to display (or render) the result, correct?

Sensei
10-10-2012, 01:22 PM
That depends on whether Fabric opens its own windows or relies on the base app to display lists, trees, nodes etc. - everything.

Doing the GUI is always most of the work of developing an application ;)
Probably 90% or more.

tischbein3
10-10-2012, 01:29 PM
As long as the hooks are there in the SDK, any developer could make the integration.

The problem isn't the "hooks" themselves, but also _when_ those are called and which data is already available for manipulation (either generated or initialised)
(mesh data, displacement etc.).
Orchestrating both LW and Fabric to work synchronized might also be one of the bigger challenges and the most limiting factor.

chris

pooby
10-10-2012, 01:50 PM
I'm using the term 'hooks' in more general terms; I just mean that if the ability to do it is already there in LW as it is in Maya and Soft, then it doesn't need Newtek or Fabric to implement the CAPI.
I have no idea whether it's feasible at all. I would hope so though.

Hail
10-10-2012, 01:52 PM
I notice that many here think that NewTek has to change a lot of the architecture to add something like this into it... as far as I can see, they shouldn't need to, except to allow for the hooks in the SDK. Any complex data set that would be thrown at a Fabric node inside of LW wouldn't force LW to do the evaluation; it would be Fabric doing it... at least, that is my understanding of what an API stands for and does for the host software.

To make a comparison.... when using HD-Instance, the user is effectively using a tool that operates outside of LW in terms of what it (HDI) does internally, and it only takes certain inputs, processes those, and sends the result back through the render engine. NewTek didn't have to re-code anything in LW to support HDI; they only needed to provide the right SDK-hooks.

Well.. this reminds of ehhrr CatmullClark... :P

pooby
10-10-2012, 01:55 PM
Duplicate post, sorry

zarti
10-10-2012, 02:41 PM
( pardon is almost impossible to quote from this device )

--

@50one #67 :

i know i look somehow a bit impatient =) , but my idea was to better manage resources .

i was 'thinking' if NT would support , promote , why not direct ( ? ) a developer / a group of devs to benefit from gradually adding tools and extensions to LW . more about finding and using more resources to deliver tools to users faster , more than anything else . LW3DG-job-type .. vision , strategy and partnerships .

non-updated 3rdP stories are a good lesson for everyone i agree , that's why i 'insist' on NTs primary role .. or should i say LW3DG ?? =)

p.s. there are successful stories too about plugs 'plugged forever' or plugs living 'happily unplugged' ( in LW or somewhere else ) .

=)

--

@Cageman #74

fastest OGL on the market . Ok . let's accept that . now ..

have you seen the video ( of Fabric ) about building character setup , muscles , mesh weighting & Co ??

so imagine a LW user builds a complex system and wears a lot of info into the char's mesh .. now go on and drop that character into Layout . what's the performance you get there ? can you animate ?

maybe it is yet unclear to me how FabTools which appear superFast are going to keep that performance inside the host app ..

generating is one thing , deforming is another one ( !!? )



.cheers

Sensei
10-10-2012, 03:22 PM
This is a common belief, but it is far from the truth. There are other things in LW that need to be done to speed things up in this regard... LW currently has the fastest possible OGL implementation, it is called Buffered VBO (you can read up about it here) (http://www.songho.ca/opengl/gl_vbo.html). There is very little LW3DG can do to make things go faster by trying to optimize OGL even further. While VBO does require resyncing of vertex arrays when something is changed, this operation is certainly fast enough for DCC applications. The limitations only become evident when you want to push data in realtime with lots of FPS, such as in games. That is why DirectX is a much better system for that level of realtime content (but it comes with a bunch of limitations as well), compared to OGL apps.

I don't agree.

Why are games so fast?
Because, in the first place, they have a "loading" stage where the whole level and its objects are uploaded to gfx card memory.
Then OpenGL just uses the data that is already in gfx card memory.
3D applications are constantly generating and uploading data from regular memory to gfx card memory.
Morphing, bone deformation, is an example of this.
Try using the new OpenGL and just spin the viewport - it's very fast - all the data is in gfx memory. But when you try selecting some points and moving them it starts jumping - the old way of uploading data kicks in. It could be optimized if the display engine knew which part of the object was changed and needs updating, and which stayed the same. Then only the changed area is uploaded - a few vertices instead of millions in a complex scene.

In a game, a character, even if it's animated by bones, has each vertex controlled by just one bone - so it's very easy to handle from GPU code. Different bone = different transformation matrix by which the vertex is multiplied. A vertex is always multiplied by some transformation matrix, so it doesn't care by which matrix it's multiplied. The speed is the same.
A 3D application would be very, very limited if it worked like that. In a 3D app every bone can control every vertex of an object.

As another optimization, both a game and a 3D application can swap in much lower-quality versions of objects that are far from the camera. But that is more programmer work.
A game doesn't even have to generate them - it just loads a couple of level-of-detail versions of each object from disk.

Speed optimizations are hard to sell to customers. What would they say in the changelog? "we sped up routine xxx by 10%, so it won't execute in 0.1 seconds but in 0.09 seconds..." nobody will notice it!
It's easier to sell adding new features..
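
(A rough sketch of the "upload only what changed" idea above, in plain OpenGL - names like meshVBO, Vertex, firstChanged are hypothetical, this is not LW's actual code:)

// one-time upload: the whole mesh goes into gfx card memory, like a game's loading stage
glBindBuffer( GL_ARRAY_BUFFER, meshVBO );
glBufferData( GL_ARRAY_BUFFER, vertexCount * sizeof( Vertex ), vertices, GL_DYNAMIC_DRAW );

// later, when the user drags a handful of points: re-upload only that range,
// instead of re-sending all 5 million vertices
glBufferSubData( GL_ARRAY_BUFFER,
                 firstChanged * sizeof( Vertex ),     // byte offset into the buffer
                 changedCount * sizeof( Vertex ),     // size of the edited region
                 &vertices[ firstChanged ] );         // CPU-side copy of the edited vertices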

AbstractTech3D
10-10-2012, 04:09 PM
I'm not sure that the technical implementations are the fundamental concern.

What will the competitive landscape look like after the release of Fabric? In that context - does LW 'NEED' to implement Fabric to effectively compete?

Additional Fabric interface alternatives to the AD solutions will almost certainly arise. (The AD proposition certainly leaves room for competitors!).

As I understand it, Fabric will cost $3K.

LW still costs - what - about $800 (not sure exactly). LW by itself (without a Fabric license) will remain a relatively comprehensive solution accessible at that price point. A Fabric license proposes to be the ultimate upgrade to that - possibly bringing capabilities on par with XSI also integrating Fabric, while XSI plus Fabric would cost in total 2-3x as much as LW plus Fabric.

But to return to the questions of technical implementation (data bottlenecks and so on), since they should not be trivialised: anything is possible - it merely requires a creative solution!

Ultimately the customer is right. If fundamental changes in technical architecture need to happen to accommodate survival in a dynamic market - then they most certainly need to happen! (Harsh as it might be for the engineers).

Lightwolf
10-10-2012, 04:17 PM
fastest OGL on the market. ok, let's accept that. now ..
Just to clarify... it doesn't have the fastest OpenGL on the market... but it is using the fastest OpenGL hooks / API functions that are available.
Which means that the bottleneck isn't the usage of OpenGL - and speeding that up won't help.
It's likely to be more related to the general architecture, data structures, data flow etc. And that's a lot harder to fix (and then get in a stable state again) because it affects the backbone of the application and various sub-systems around it.

Cheers,
Mike

Cageman
10-10-2012, 04:43 PM
Well.. this reminds me of, ehhrr, Catmull-Clark... :P

Eh?

In what way?

Catmull-Clark SDS does need to be implemented deeply into the code and has to support all the tools you throw at it. Fabric, in the context of an API and, as such, a stand-alone product that does the magic "outdoors" so to speak, doesn't need to be implemented in the same way Catmull-Clark SubDs has to be.

As I said, you should compare Fabric to HD-Instance (which operates outside of LW, even if it "feels" like inside); it doesn't tie in the same way Catmull-Clark SDS does. There is a huge difference in the level of implementation needed from the host application.

Think about NextLimit RealFlow... it is a separate product, does magic stuff, and you send that data back to LW. Imagine Fabric as such a tool, but instead of needing to fire up an external GUI, you simply change settings on a node that then sends the data into the external system to be computed and then sends it back to the output of the node (I certainly make it sound extremely simple here, but that should be the nuts and bolts for an API to operate in a host environment).

HUGE difference!
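
(Purely as a hypothetical sketch of that idea - invented names, not the actual LW SDK or the Fabric API - such a node's evaluation could look like this:)

// the host only pushes inputs across and reads the output back;
// all the heavy lifting happens in the external engine
MeshData evaluateExternalNode( ExternalEngine& engine,
                               const NodeSettings& settings,
                               const MeshData& inputMesh )
{
    engine.setParameters( settings );   // the node's settings, as set by the user
    engine.setGeometry( inputMesh );    // geometry handed over by the host
    return engine.compute();            // computed "outdoors", returned as the node output
}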

EDIT: When did CC Subd become an API btw... I am very curious to see that information.

Cageman
10-10-2012, 04:55 PM
*Hmm*

Cageman
10-10-2012, 05:09 PM
fastest OGL on the market. ok, let's accept that. now ..

have you seen the video ( of Fabric ) about building character setup, muscles, mesh weighting & co. ??

OpenGL is about drawing the results from the computations done. That process is usually blindingly fast in all applications.

Here is an example you should try.... Create a 5 million poly sphere. Bring it into Layout... orbit it and also translate it (position, rotation, scale). It is very responsive, right? Now, bring this object over to Maya... do the exact same thing... which one is faster? Answer: LW!

Now, apply a deformer in LW... it will turn into a slideshow.... do the same in Maya, and Maya will suddenly become the faster of the two!! Is this related to OpenGL, or to something else!? *sigh*

erikals
10-10-2012, 05:13 PM
hm, interesting, so maybe there is a fix... i hope they can look into it,.. soon...

so, turn off all possible deformers before rendering to speed things up...
(wish there was a script that auto-did-it)

Sensei
10-10-2012, 05:20 PM
Enabled deformation means that every vertex is passed through a deformation handler or the deformation node tree's evaluate() function. 5 million is A LOT. Other applications that have a history/modifier stack ask the child node to provide the full geometry to them; the child node finds out there is no change between evaluations, so it returns what it returned previously. LW is asking for a single vertex vector. 5 million times.

Check Task Manager, and also try enabling/disabling Threaded Mesh Eval in Preferences.
Maybe LW is doing the deformation on every spin of the viewport, without double-buffering the results from the previous evaluation.

Attach example scene.
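
(A rough sketch of the "return what was returned previously" pattern - hypothetical types, not LW's internals:)

// evaluate the whole mesh once per change, instead of asking for one vertex at a time
const std::vector<Vec3>& evaluateDeformer( Deformer& d, const Mesh& mesh, double time )
{
    if ( !d.inputsChanged && time == d.lastEvalTime )
        return d.cachedVertices;                        // nothing changed: no per-vertex work at all

    d.cachedVertices.resize( mesh.vertexCount() );
    for ( size_t i = 0; i < mesh.vertexCount(); ++i )   // one pass over all vertices
        d.cachedVertices[ i ] = d.deform( mesh.vertex( i ), time );

    d.lastEvalTime = time;
    d.inputsChanged = false;
    return d.cachedVertices;
}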

Cageman
10-10-2012, 05:39 PM
LW is asking for a single vertex vector. 5 million times.

Exactly... and there are other things deeper than that!


Attach example scene.

Uh? You can't make such a simple scene yourself?

zarti
10-10-2012, 05:44 PM
yes, ok, i included all the calculations and OGL ( as the last procedure ) under a single word: OGL.

of course, what we as users measure is the feedback in the viewport till we hit RENDER.

in the char-setup presented in the video there are a lot of calculations happening before the mesh gets deformed.

besides the hierarchical maths, several kinds of maps are applied, which are related to muscles ( very 'expensive' themselves by their nature ).

in a char-animation context i see a lot of difference in responsiveness, and this makes me very sceptical about the current LW and Fab relationship ( ! ).

--

never touched Maya, but just to share my OGL experience with houdini 11: i have a quadro 2000, and on heavy geometries, when i need to rotate my view, the viewport stops for a fraction of a second ( it seems to suck it all into memory ) and then it flies ... just like the scene is empty.

houdini 12 has made great progress in the viewport, but i'm not currently using it ..



.cheers

Cageman
10-10-2012, 05:45 PM
Just to clarify... it doesn't have the fastest OpenGL on the market... but it is using the fastest OpenGL hooks / API functions that are available.
Which means that the bottleneck isn't the usage of OpenGL - and speeding that up won't help.

True about the facts! In reality though, as long as you don't try to deform the items, LW has shown itself to be the (in some cases) much faster one to use for pure display/orbit and translation/rotation/scale. Pretty much anything that doesn't deform the mesh. I have only compared LW, Max and Maya... Maya being the worst of them. But once you start to add deformers into this mix, LW loses badly...



It's likely to be more related to the general architecture, data structures, data flow etc. And that's a lot harder to fix (and then get in a stable state again) because it affects the backbone of the application and various sub-systems around it.

Cheers,
Mike

Yep! :)

geo_n
10-10-2012, 06:07 PM
Speed optimizations are hard to sell to customers. What would they say in the changelog? "we sped up routine xxx by 10%, so it won't execute in 0.1 seconds but in 0.09 seconds..." nobody will notice it!
It's easier to sell adding new features..

True. But I would kill (pay) to have speed optimizations in lw. Right now I get less than 15fps in Layout for one character.... Fabric Engine would be super slow if it worked within the lw environment.
Lw would be a skin for Fabric Engine that operates slowly compared to other skins. It needs to speed up first in real-world use.

geo_n
10-10-2012, 06:21 PM
Just to clarify... it doesn't have the fastest OpenGL on the market... but it is using the fastest OpenGL hooks / API functions that are available.
Which means that the bottleneck isn't the usage of OpenGL - and speeding that up won't help.
It's likely to be more related to the general architecture, data structures, data flow etc. And that's a lot harder to fix (and then get in a stable state again) because it affects the backbone of the application and various sub-systems around it.

Cheers,
Mike

So in essence it is the architecture of lw itself that causes this slow performance, and it can't be fixed until the architecture is fixed. I guess we're out of luck. They've been trying to do that since lw 6.

Cageman
10-10-2012, 06:27 PM
yes, ok, i included all the calculations and OGL ( as the last procedure ) under a single word: OGL.

OpenGL speed has nothing to do with the internal computations of any application though, so while you are being "smart" by lumping very different subroutines together as "OGL", you are effectively confusing users and developers. Two words: Be specific!

I do understand that this ain't easy... but some common sense does apply here. For instance... if you load a mesh and can orbit and translate/rotate/scale it with good performance, but when you apply a deformer it becomes a slideshow, you should be able to draw the conclusion that it isn't related to the display (OGL), but rather to a subroutine dealing with the mesh itself.

Lightwolf
10-10-2012, 06:29 PM
So in essence it is the architecture of lw itself that causes this slow performance, and it can't be fixed until the architecture is fixed.
More or less, yes.

One character with expressions and multiple hierarchy with nothing in the scene causes lightwave to slow down drastically. I was hoping it was a bug (reported before) or something that could be fixed easily. Removing the mesh improves it a bit but the real slowdown is rig calculation.
Bingo. There are probably ways to speed up playback (once the mesh has been deformed) by baking it automatically for display. However, that still wouldn't help once you edit. And it'd probably make more sense as part of bigger changes.

I'm guessing modeller uses a different OGL implementation, it certainly seems like an OpenGL bottleneck.
Same story really. Even more so, since meshes in Modeler must be editable at any time (which means that the underlying data structures are likely to be even slower for display)... any tool can potentially change the complete topology.

Cheers,
Mike

geo_n
10-10-2012, 06:36 PM
of course, what we as users measure is the feedback in the viewport till we hit RENDER.

in a char-animation context i see a lot of difference in responsiveness, and this makes me very sceptical about the current LW and Fab relationship ( ! ).



Right, all the technical aspects mean nothing if the end result is slow performance.
Realworld feedback, no excuses. No need to defend lw here.

Btw, looking at the money involved to get Fabric, I doubt lwvers are willing to pay: 3k for Fabric plus the lightwave base price.
The same kind of users are common to lw and modo. Not-willing-to-pay users. :D

Cageman
10-10-2012, 06:45 PM
Right, all the technical aspects mean nothing if the end result is slow performance.
Realworld feedback, no excuses. No need to defend lw here.

OpenGL speed has nothing to do with the internal computations of any application though, so while you are being "smart" by lumping very different subroutines together as "OGL", you are effectively confusing users and developers. Two words: Be specific!

I do understand that this ain't easy... but some common sense does apply here. For instance... if you load a mesh and can orbit and translate/rotate/scale it with good performance, but when you apply a deformer it becomes a slideshow, you should be able to draw the conclusion that it isn't related to the display (OGL), but rather to a subroutine dealing with the mesh itself.

geo_n
10-10-2012, 06:46 PM
Same story really. Even more so, since meshes in Modeler must be editable at any time (which means that the underlying data structures are likely to be even slower for display)... any tool can potentially change the complete topology.

Cheers,
Mike

Ingrained in my brain.
Modeller performance will never improve regardless of gfx card - a bit of common knowledge
Modeller needs to be rewritten to improve mesh handling - too much work, like core
So abandon modeller please while there's still time!!!
Unification...time well spent :D

erikals
10-10-2012, 06:50 PM
to speed up stuff in Layout (opengl) the Smooth Threshold trick is a good trick too...

Lightwave test - Smooth Threshold
http://www.youtube.com/watch?v=czWBYThz-Zo

Cageman
10-10-2012, 06:53 PM
Ingrained in my brain.
Modeller performance will never improve regardless of gfx card - a bit of common knowledge

The same issue is evident in Layout, and not related to OGL speed at all. As I said before, if you can orbit/translate/rotate/scale a mesh with good performance, but it becomes a slideshow as soon as you apply a deformer, it is something not related to OGL, since OGL is, effectively, displaying the same number of verts/polygons.

As such, you can't tell someone that OpenGL in LW is slow, because that isn't true; if it is fast without deformers, and slow with them, but the number of polygons and verts are the same... where is the problem? Certainly NOT OGL-speed, right?

geo_n
10-10-2012, 06:54 PM
OpenGL speed has nothing to do with the internal computations of any application though, so while you are being "smart" by lumping very different subroutines together as "OGL", you are effectively confusing users and developers. Two words: Be specific!

I do understand that this ain't easy... but some common sense does apply here. For instance... if you load a mesh and can orbit and translate/rotate/scale it with good performance, but when you apply a deformer it becomes a slideshow, you should be able to draw the conclusion that it isn't related to the display (OGL), but rather to a subroutine dealing with the mesh itself.

Did you read my other post? I said removing the mesh improves a bit. The bigger problem is rig calculation in that situation.

In another static scene with rigid objects the LW viewport is slow at everything and I have to resort to vertex display mode. I don't know if it's the multiple 4K maps, but removing the rigid objects makes the scene fly. LW OGL is at fault there.
My experience with other software is not the same as the feedback in LW. Lw is slower, no doubt.

Cageman
10-10-2012, 06:55 PM
to speed up stuff in Layout (opengl) the Smooth Threshold trick is a good trick too...

Lightwave test - Smooth Threshold
http://www.youtube.com/watch?v=czWBYThz-Zo

This is old news, and has nothing to do with the problems we are discussing. Smoothing has always been faster for OGL, but it will not solve the slowness of dense, deforming geometry.

erikals
10-10-2012, 06:59 PM
i know, but i thought it was worth a mention, as we're talking Layout speed.

yep, it has nothing to do with Lightwave deformation and speed.
sorry to confuse, i know some people read too fast and might think i mentioned that as a fix, which it is indeed not.

Cageman
10-10-2012, 07:00 PM
Did you read my other post? I said removing the mesh improves a bit. The bigger problem is rig calculation in that situation.

No... apply a displacement map and you will see the same drop in performance. Hardly a rig-evaluation thing.


In another static scene with rigid objects the LW viewport is slow at everything and I have to resort to vertex display mode. I don't know if it's the multiple 4K maps, but removing the rigid objects makes the scene fly. LW OGL is at fault there. My experience with other software is not the same as the feedback in LW. Lw is slower, no doubt.

In your case, what Erikals mentioned will most likely speed things up. LWs OGL is waaay faster with Smoothing turned on. Have you tried that.. if you don't have hundreds or thousands of objects, you should get extremely smooth performance with at least 6-10 millions of polygons, as long as the surface(s) of them are set to use Smoothing.

Sensei
10-10-2012, 07:03 PM
I don't know if it's the multiple 4K maps, but removing the rigid objects makes the scene fly. LW OGL is at fault there.

That's exactly an example of what Cageman is saying - that OpenGL is not guilty..

Cageman
10-10-2012, 07:09 PM
That's exactly an example of what Cageman is saying - that OpenGL is not guilty..

LOL :D

It would be very cool to investigate the scene geo_n is using. Over here, I have a scene with 4k maps and about 10 million polygons, and it flies.

geo_n
10-10-2012, 07:43 PM
No... apply a displacement map and you will see the same drop in performance. Hardly a rig-evaluation thing.



In your case, what Erikals mentioned will most likely speed things up. LWs OGL is waaay faster with Smoothing turned on. Have you tried that.. if you don't have hundreds or thousands of objects, you should get extremely smooth performance with at least 6-10 millions of polygons, as long as the surface(s) of them are set to use Smoothing.

I said in that case, one rigged character in an empty scene. It is rig evaluation in that case.
I think you're reading too fast! Too much coffee :D

Smoothing is on. I don't make a scene without smoothing on since it makes the model faceted, even subd objects, from what I recall. Anyway it's always on with some degree.
It's the multiple 4k maps. Removing the maps, LW OGL flies again.

geo_n
10-10-2012, 07:46 PM
That's exactly an example of what Cageman is saying - that OpenGL is not guilty..

So what is at fault when there's a slowdown with rigid objects and 4k maps? Texture memory?
Really this is interesting because it will speed up work.
But I do recall that working with lw 64bit in that project seemed to speed up the viewport. I have to test with fraps.
So could it be a 32bit vs 64bit issue?

Sensei
10-10-2012, 07:47 PM
In your case, what Erikals mentioned will most likely speed things up. LWs OGL is waaay faster with Smoothing turned on. Have you tried that.. if you don't have hundreds or thousands of objects, you should get extremely smooth performance with at least 6-10 millions of polygons, as long as the surface(s) of them are set to use Smoothing.

Something is really wrong in this area - flat polygons, with the same normal vector for the entire polygon, should always be faster than smoothed ones. When smoothing is turned off, a lot of calculations are not needed at all..

geo_n
10-10-2012, 07:51 PM
Something is really wrong in this area - flat polygons, with the same normal vector for the entire polygon, should always be faster than smoothed ones. When smoothing is turned off, a lot of calculations are not needed at all..

Its a unique lightwave feature? :D

Sensei
10-10-2012, 07:52 PM
Its a unique lightwave feature? :D

Made especially for you ;)

erikals
10-10-2012, 07:55 PM
geo_n,

.iff files might be faster
http://forums.newtek.com/showthread.php?102189-Highres-texture-amp-Baking-%2816000px%29

Cageman
10-10-2012, 07:55 PM
http://hangar18.gotdns.org/~cageman/LWOGL/LW_OGL.rar

Load the scene called LWOGL.lws... it consists of 5 objects, each having a unique texturemap in 4k (it is the same texturemap, but with a unique filename, so it forces LW to load it 5 times). The total scene consists of 12.7 million polygons. This stuff is absolutely realtime for me, even with sRGB colour correction turned on in OGL (this is set up by default.. some gfx cards can't compute this fast, so if you end up having slowness, turn off OpenGL colour correction). I can orbit, translate/rotate/scale these objects without any problems or lag.

Now, load the scene called LWOGL_1_object.lws. It consists of one of these objects (approx 2.5 million polys), but with a displacement deformer applied to it, no animation or anything... which one is the faster of the two? Bear in mind that in the first one, LW displays over 12 million polygons in realtime; in the other one it is just 2.5 million polygons, but it is way, way slower.

Conclusion: The way LW handles meshes when deformed, no matter if it is animated or not, is the culprit behind what people wrongly refer to as OGL slowness; in essence it is the way LW handles mesh deformation that makes it slow, not OpenGL.

This is very important to know when submitting bug reports; be specific!

EDIT: Just for fun... turn off Smoothing on those 5 spheres in the content I provided and you are in for a slideshow. :D Not nearly as severe as that single object with a displacement deformer though, but still... a HUGE difference on my system.

Sensei
10-10-2012, 07:57 PM
So what is at fault when there's a slowdown with rigid objects and 4k maps? Texture memory?
Really this is interesting because it will speed up work.

Remove the 4k maps, and check again.. Maps? So, how many of them?
A single 4k map at 32 bits takes 64 MB of memory..

Cageman
10-10-2012, 08:10 PM
So what is at fault when there's a slowdown with rigid objects and 4k maps? Texture memory?

Maybe. Are you using VBO btw? VBO stores all vertex arrays in GFX memory per object, and that does eat some RAM. Since I have no clue what size your scene is (regarding polygons or the number of texturemaps), it is hard to tell if you are running out of memory or not. Usually though, running out of memory will produce all kinds of gfx bugs... not slowness, usually.

- - - Updated - - -


Smoothing is on. I don't make a scene without smoothing on since it makes the model faceted, even subd objects, from what I recall. Anyway it's always on with some degree.
It's the multiple 4k maps. Removing the maps, LW OGL flies again.

I'm on LW11.0.3, which version are you using? There have been quite a few improvements since the LW9.x days regarding these things...

geo_n
10-10-2012, 08:13 PM
Remove the 4k maps, and check again.. Maps? So, how many of them?
A single 4k map at 32 bits takes 64 MB of memory..

So I just tried the test scene. This is my result scrubbing the timeline, playback, and orbiting objects. I'm no programmer, so I don't know the technical side and can only speak from real-world experience. What else is happening here, if not for users to conclude it's OGL?


http://forums.newtek.com/attachment.php?attachmentid=108488&d=1349921150
vertex display 30-40fps deformations on. Lw, like!


http://forums.newtek.com/attachment.php?attachmentid=108489&d=1349921168
textured display 0-1 fps, deformation off, but now the viewport displays multiple 4k maps baked to the object. The object surface has nothing special, no reflection, no specular, etc, just baked 4k textures. Lw unlike!
Renders fast of course, Lw like!

Cageman
10-10-2012, 08:32 PM
Is it possible for you to share that scene?

geo_n
10-10-2012, 09:21 PM
Nope, it's under NDA unfortunately.

Sensei
10-10-2012, 09:25 PM
I forgot about one thing: transparent polygons slow down OpenGL a lot,
because they completely change the way the viewport is rendered - polygons have to be sorted from farthest to nearest to the camera, and then drawn in that order.
If a near transparent polygon were drawn first, it would update the Z-buffer, preventing farther polygons - both transparent and opaque - from rendering, and everything would look bad.
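
(Roughly what that sorting step looks like, with hypothetical data structures:)

// transparent polygons are sorted from farthest to nearest to the camera,
// then drawn in that order, so a near polygon can't block the ones behind it
std::sort( transparentPolys.begin(), transparentPolys.end(),
           []( const Poly& a, const Poly& b )
           { return distanceToCamera( a ) > distanceToCamera( b ); } );   // farthest first

for ( const Poly& p : transparentPolys )
    drawPolygon( p );                                                     // opaque polygons were drawn earlier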

geo_n
10-10-2012, 10:50 PM
So this is OGL related? Newtek needs to improve this then. It's not even a heavy scene, comparatively.
Can't imagine Fabric Engine using the Lw OGL and environment. It has to go the messiah route. But what's the point then; better to go for the standalone Fabric Engine if it comes out and use its full potential there, not limited by the lw environment. I would be curious to see Fabric Engine perform in similar-class software like modo.

pooby
10-11-2012, 12:57 AM
As for the 3k license. It is worth bearing in mind that the first licence is free, so an individual would not have to pay anything.
A studio would get 2 free licences too.

Lightwolf
10-11-2012, 03:17 AM
Something is really wrong in this area - flat polygons, with the same normal vector for the entire polygon, should always be faster than smoothed ones. When smoothing is turned off, a lot of calculations are not needed at all..
Except that OpenGL uses vertex normals, not polygon normals. Hard edge -> duplicate points to get different normals.
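
(A tiny illustration of that, with made-up types - one cube corner shared by 3 faces:)

// smooth shading: the corner exists once, with one averaged normal
Vec3   pos          = { 1.0f, 1.0f, 1.0f };
Vec3   smoothNormal = normalize( faceN[0] + faceN[1] + faceN[2] );   // 1 vertex in the array

// hard edges: the same position is stored 3 times, one copy per face,
// each copy carrying that face's own normal
Vertex hardCorner[3] = { { pos, faceN[0] },
                         { pos, faceN[1] },
                         { pos, faceN[2] } };                         // 3 vertices in the array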

Cheers,
Mike

erikals
10-11-2012, 05:42 AM
can't you just turn off opengl transparency?

Sensei
10-11-2012, 12:06 PM
Except that OpenGL uses vertex normals, not polygon normals. Hard edge -> duplicate points to get different normals.

In my game engine speed tests, the best performance I got was using

glEnableClientState( GL_TEXTURE_COORD_ARRAY );  // per-vertex UVs
glEnableClientState( GL_VERTEX_ARRAY );         // per-vertex positions
glEnableClientState( GL_NORMAL_ARRAY );         // per-vertex normals
glInterleavedArrays( GL_T2F_N3F_V3F, sizeof( GameVertex ), m_OpenGL_Vertexes );  // one interleaved array: 2 floats UV, 3 floats normal, 3 floats position per vertex
glDrawArrays( GL_TRIANGLES, 0, polygon_count * 3 );  // draw everything in one call, 3 vertices per triangle

In other words, every triangle vertex has its own UV and normal vector, and they're in continuous memory. 96 bytes of memory for each triangle vertex.

Flat polygon - use the same normal vector (copied from the polygon normal).
Smooth polygon - you need to analyze which polygons each vertex is attached to and average their normals (a lot of work slowing down the routine while building; while spinning it should just reuse what was built previously anyway).

Conclusion - building the vertex array for OpenGL should be faster for flat polygons.
Displaying, i.e. spinning the viewport (because it's just reading data from the buffer), should be equal speed.

Cageman
10-11-2012, 01:06 PM
So this is OGL related? Newtek needs to improve this then. It's not even a heavy scene, comparatively.

Have you tried all 3 different modes of transparency sorting?

kopperdrake
10-11-2012, 03:32 PM
As for the 3k license. It is worth bearing in mind that the first licence is free, so an individual would not have to pay anything.
A studio would get 2 free licences too.

That sounds amazingly fair. With three or more artists warranting the need for it then 3k is not a great deal - it's a specialist thing and you're likely to be larger than three artists if you need more than 2 seats. And a great way to get in solo artists who can train themselves up on it for free.

Lightwolf
10-11-2012, 04:11 PM
Conclusion - building the vertex array for OpenGL should be faster for flat polygons.
But there are fewer vertices for smoothed polygons, since the normal can be shared.

Cheers,
Mike

Sensei
10-11-2012, 04:17 PM
But there are fewer vertices for smoothed polygons, since the normal can be shared.

Not while using glDrawArrays(). A vertex always takes 96 bytes. A triangle always takes 288 bytes. 1m triangles always take 288,000,000 bytes..
You are probably thinking about the other gl function, which stores vertices indirectly by indexes - normals as indexes and UVs as indexes.
From my experience it's slower.
Apparently OpenGL (or the particular driver?) builds the direct data array internally from the indirect data provided as indexes.
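
(For reference, the indexed path being talked about looks roughly like this - standard fixed-function OpenGL, reusing the GameVertex idea; the pos/normal members and the indices array are assumed here:)

// indexed drawing: each shared vertex is stored once and referenced by index,
// instead of being repeated for every triangle it belongs to
glEnableClientState( GL_VERTEX_ARRAY );
glEnableClientState( GL_NORMAL_ARRAY );
glVertexPointer( 3, GL_FLOAT, sizeof( GameVertex ), &vertices[0].pos );
glNormalPointer( GL_FLOAT, sizeof( GameVertex ), &vertices[0].normal );

// 3 indices per triangle; memory use is lower, but the driver has to chase the indexes
glDrawElements( GL_TRIANGLES, polygon_count * 3, GL_UNSIGNED_INT, indices );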

zarti
10-11-2012, 04:26 PM
OpenGL speed has nothing to do with the internal computations of any application though, so while you are being "smart" by lumping very different subroutines together as "OGL", you are effectively confusing users and developers. Two words: Be specific!

I do understand that this ain't easy... but some common sense does apply here. For instance... if you load a mesh and can orbit and translate/rotate/scale it with good performance, but when you apply a deformer it becomes a slideshow, you should be able to draw the conclusion that it isn't related to the display (OGL), but rather to a subroutine dealing with the mesh itself.

hey Man, i think i described what i meant with OGL :/

but i'm not sure i need to do it again

, since you replied with Exactly The Same Words to another forum member 3 posts later.

was it intentional ?

it sounded like a web-bot was using your account ..


=)

Lightwolf
10-11-2012, 04:26 PM
Not while using glDrawArrays(). A vertex always takes 96 bytes. A triangle always takes 288 bytes. 1m triangles always take 288,000,000 bytes..
You are probably thinking about the other gl function, which stores vertices indirectly by indexes - normals as indexes and UVs as indexes.
From my experience it's slower.
Apparently OpenGL (or the particular driver?) builds the direct data array internally from the indirect data provided as indexes.
I suppose one would normally try to render strips or fans using glDrawArrays, since it combines high speed with low memory usage.... but then it's back to the problem I mentioned.

And yes, performance is certainly driver and even more so vendor dependent.

Cheers,
Mike

geo_n
10-11-2012, 09:24 PM
can't you just turn off opengl transparency?

You can, but you shouldn't have to. I need to see all the fake layered procedural gas flow, since the timing of the whole engine process is critical. This simple scene should not slow down OGL. It's 2012. :D
Removing the engine, or keeping the engine without textures and leaving the gas, makes lw OGL fast again. So there's something not right with lw OGL.


Have you tried all 3 different modes of transparency sorting?
yes, the default is the best, sort by polygon is worse and I can't even scrub the timeline. Alpha clipping does not have transparency and the objects look solid.

Surrealist.
10-12-2012, 03:44 AM
Seems to me from all of this (and my own experience), it is clear that LightWave needs a lot of work under the hood - that I am sure it is getting as we speak.

It will be interesting to see what - if anything - NT does about the Fabric potential.

erikals
10-12-2012, 06:47 AM
Seems to me from all of this (and my own experience), it is clear that LightWave needs a lot of work under the hood - that I am sure it is getting as we speak.

i'm not so sure, as it might be quite difficult... :/

Lightwolf
10-12-2012, 06:53 AM
i'm not so sure, as it might be quite difficult... :/
Difficulty is no excuse... it "just" requires better planning and more work/time. ;)

Cheers,
Mike

zarti
10-13-2012, 03:28 PM
Seems to me from all of this (and my own experience), it is clear that LightWave needs a lot of work under the hood - that I am sure it is getting as we speak.

It will be interesting to see what - if anything - NT does about the Fabric potential.

more or less my actual opinion too ..




.cheers

AbstractTech3D
10-14-2012, 10:21 PM
Might it be too much to hope for a comment from LW3DG on the possible inclusion of a Fabric hook?

(It's not exactly the standard future-feature speculation we're talking about here).

pooby
10-15-2012, 04:17 AM
I think it's highly unlikely to get a statement. Fabric is still in an early stage of development and has not yet penetrated the 'public' cgi consciousness.
I do hope they will continue to look at it and consider it as an option though. It's the kind of thing that, if all the other apps have it, and it becomes generally used in the industry, then it becomes a must have.
Whether newtek decide to support it early or jump on the bandwagon later, it could end in the same result, but being ahead of the game is always good.

hrgiger
10-15-2012, 06:50 AM
Well, I've been uncertain the last few years since CORE about where things will head (as are most people) in terms of LW development, but the one thing I will credit NT for is improving interchange with other applications. I do hope that they will look at and consider what Fabric Engine could do for LightWave. Coming from the developers of XSI, it seems like a game changer.

ApacheDriver
10-15-2012, 07:31 PM
I don't know. Seemed awfully complex for the results.
I did not see anything within the links that could not be done without the nodal data exercise.
Nodal seems to be more a 'fashion statement' than useful in some of its uses.
I'll stick with using my noodle rather than your nodal.

Surrealist.
10-16-2012, 10:00 AM
Here is a collection of things:

Starting at the beginning of the evolution of Lagoa:


What was the reason behind choosing ICE to develop Lagoa upon (vs. standalone)?

Thiago:
ICE is very fast to develop, you can reach really quick results in zero time... As a research platform there’s nothing better today.
Even if I need to write code outside of ICE and port it back (to make it more usable), I can still gain a lot of time by using the whole Softimage framework (geometry API, manipulation tools, UI, etc...)

http://cgnewsupdate.blogspot.com/2011/01/interview-with-thiago-costa-lagoa.html

There are some reasons why you might want to use ICE for rigging. Perhaps not the best example. But with a little imagination...:

http://www.youtube.com/watch?v=Q2PoPmLGQFc

And the interesting thing about this video is that he explains fairly well why you would learn to do some simple things that you could also do with normal methods. But when they become part of an ICE tree it starts to get interesting:

http://www.youtube.com/watch?v=vQAv0q6pcco

And a few more:

http://www.youtube.com/watch?v=FuZinstlgYE&feature=endscreen&NR=1

http://www.youtube.com/watch?v=4VMG5jWk8yg&feature=relmfu

And a favorite of mine :):

http://www.youtube.com/watch?v=qxtdAvvo0KU&feature=relmfu

Just a few things. This is an entirely different kind of environment.

Interactive Creative Environment. :)

erikals
10-16-2012, 10:33 AM
 
has anyone been able to use this for something in production though?
http://area.autodesk.com/th.gen/?j/x26wn-wyych-03equ-5hb85:720x0.jpg
maybe if the mesh could be frozen per frame, just like realflow, it could be of use.
i'm just not sure what i would use it for, as the resolution is too low. (still impressive though)

this looks great, but can't match realflow (unless they develop it further)
http://area.autodesk.com/th.gen/?7/t1md6-4n498-50z41-ze5ha:720x0.jpg
(hope it happens, could make calculations quite fast...)

remember being very impressed by these simulations when i first saw them,
but how would i use them in production, how can i fake or pump up the resolution without getting a severe render-hit... (?) has anyone done it... ?

pooby
10-16-2012, 11:06 AM
I don't know. Seemed awfully complex for the results.
I did not see anything within the links that could not be done without the nodal data exercise.
Nodal seems to be more a 'fashion statement' than useful in some of its uses.
I'll stick with using my noodle rather than your nodal.

If you are referring to working in ICE, then as has already been stated in the thread, you are looking at the actual programming of the tool, not the usage of the tool.

Surrealist.
10-16-2012, 03:06 PM
 
has anyone been able to use this for something in production though?
http://area.autodesk.com/th.gen/?j/x26wn-wyych-03equ-5hb85:720x0.jpg
maybe if the mesh could be frozen per frame, just like realflow, it could be of use.
i'm just not sure what i would use it for, as the resolution is too low. (still impressive though)


I have not been using it in production yet. But I have done a fair amount of tutorials in ICE and Lagoa and I understand the basic workflow.

The above is a demonstration of elasticity and inelasticity, among other things I am sure, such as internal and external pressure. But it is fairly easy to set up and control. What you are doing here is not looking for a fluid simulation as in liquid, but rather fine particles such as dirt. You can adjust all kinds of parameters as far as how much they stick together and so on. This is not intended as liquid. And as for resolution, at this point you could do anything. If it were me I'd get as fine a res as possible and then apply a texture to the mesh particles to create more of a fine look to it.


Elasticity is the ability of a material to return to its original shape after a deformation has occurred. The Elasticity parameters here create a structure that is controlled by spring-like connections between points. This structure can have a breaking point.

For fluid sims you would use the Polygonizer op.


Polygonizer allows you to create a mesh around the points of a cloud and other objects in your scene. The effect is similar to metaballs and blobs. This is especially useful for water and other liquids, such as the result of a Lagoa simulation.

So basically what you are looking at is the sim particles. After polygonizing it, you then of course texture it.

I am not an expert on ICE in any way. But that is my understanding of the basic workflow. And it helps to understand that when looking at some of the sim examples.

Verlon
10-17-2012, 04:58 AM
Hey Pooby, as long as you're here: A while back, you were talking about animating a Lion and getting the hind legs to move correctly. You were pointing out how it was difficult/impossible to get it right. What do you think of Genoma and how it will improve this?

jasonwestmas
10-17-2012, 05:38 AM
Hey Pooby, as long as you're here: A while back, you were talking about animating a Lion and getting the hind legs to move correctly. You were pointing out how it was difficult/impossible to get it right. What do you think of Genoma and how it will improve this?

Most likely, people who complain about LW creature animation are talking about realistic muscle, fat and bone deformation of the mesh, not necessarily the motion of the IK/FK setups within the bone hierarchy. Genoma, I think, addresses a few of these deform issues with stretching bones and what-not, but without a deformer stack it will still be difficult to get better realism in this sliding muscle, fat and bone category. So as a result you are still stuck with that Robot/Doll kind of appearance/feel in the motion of the surface detail, with Genoma or without it.

pooby
10-17-2012, 06:58 AM
Hey Pooby, as long as you're here: A while back, you were talking about animating a Lion and getting the hind legs to move correctly. You were pointing out how it was difficult/impossible to get it right. What do you think of Genoma and how it will improve this?

I can't remember the lion thing, but I was probably referring to the fact that a spine doesn't have a fixed root and hierarchy in the way that you would make it with FK chains. Preferably you can control both ends and have it interpolate between them.
In that regard, I don't think Genoma helps, as far as I am aware.

hrgiger
10-17-2012, 07:06 AM
I can't remember the lion thing, but I was probably referring to the fact that a spine doesn't have a fixed root and hierarchy in the way that you would make it with FK chains. Preferably you can control both ends and have it interpolate between them.
In that regard, I don't think Genoma helps, as far as I am aware.

Yeah, I think Paul is right; I don't see Genoma changing the way LightWave bones functionally work, only the way they are set up. We haven't heard anything to the contrary as of yet.

Verlon
10-17-2012, 08:21 AM
Yes, that was it. You were talking about how you could not easily move the spine correctly in the LW system and were pointing out the advantages of XSI. I'd dig the thread up, but the discussion got heated.

I thought that might be the case

erikals
10-17-2012, 08:48 AM
i might be wrong, but i think it might be possible using the IKB quadruped rig in Genoma.
they are also working on a spline tool for LW11.5

as for muscles, the Genoma method seems limited, though useful in some cases.

jasonwestmas
10-17-2012, 08:54 AM
well yeah, Like HR is kinda saying, the only thing that seems to be changing here is quicker setup time but nothing under the hood appears to be changing much in regards to doing anything new to how the character looks when animated.

robertoortiz
10-22-2012, 10:12 AM
For a fantastic example of the power of Softimage ICE, check out this thread from Gustavo Boehs. It is his entry in the latest FXwars Challenge, 7 Towers.

It is VERY COOL what you can do with ICE nodes:
http://forums.cgsociety.org/showthread.php?f=139&t=1071546

jasonwestmas
10-22-2012, 10:18 AM
Super wow, I really want to do sand FX.

pooby
10-22-2012, 10:40 AM
Or you could do it like this

https://vimeo.com/51760127

adk
10-22-2012, 04:58 PM
Looking at more and more XSI vids of late makes me seriously want to give it a proper go.
I won a copy of it a few moons ago (before AD took over) but I seriously doubt their upgrade policy would entice / help me in any way.

erikals
10-22-2012, 05:25 PM
Or you could do it like this
https://vimeo.com/51760127

less banding / moire > http://vimeo.com/51815723

jasonwestmas
10-22-2012, 05:38 PM
Man, I love that sandy stuff! Thanks.

erikals
10-22-2012, 05:40 PM
Lagoa = Sandbox :]

AbstractTech3D
10-22-2012, 07:38 PM
So I am hoping for a Lagoa-like tool (plus all the other tools!) for Fabric, and a Fabric implementation in LW!

geo_n
10-22-2012, 08:40 PM
Looking at more and more XSI vids of late makes me seriously want to give it a proper go.
I won a copy of it a few moons ago (before AD took over) but I seriously doubt their upgrade policy would entice / help me in any way.

You probably don't need to upgrade too much with softimage if you get the current version. It's at least 10 years ahead of lw, 5 years ahead of 3dmax, toe to toe with maya. Years from now we'll probably catch up to current softimage, and that's the time you would upgrade again. :D

adk
10-22-2012, 09:56 PM
You probably don't need to upgrade too much with softimage if you get the current version. It's at least 10 years ahead of lw, 5 years ahead of 3dmax, toe to toe with maya. Years from now we'll probably catch up to current softimage, and that's the time you would upgrade again. :D

Just out of curiosity I had a quick look at the AD website ...
US - from $3145
AU - from $6758
... supposedly we're at parity with the US atm? Am I even supposed to take that seriously? Something would have to freeze over first before I pay double for the same thing!

geo_n
10-22-2012, 10:08 PM
Well, that's AD. To be expected, knowing their business practices. The software is great but the pricing and policy suck.

adk
10-22-2012, 10:35 PM
All I can say is ... with that sort of policy & practice they have 0% chance of me being even remotely interested on a personal level. How do NT & other small companies manage to offer close to US prices, even through resellers here, while the others are simply here to plainly rip us off ? Anyway, I'll grab a demo and dip my toes in.

geo_n
10-22-2012, 11:03 PM
To be fair, the markup price of lw in japan is very significant through dstorm. So newtek is in the same boat of having a very different pricing scheme per region, like AD. The best thing they did was offer lw through the web with no markup price.

Surrealist.
10-23-2012, 12:52 AM
If you contact an AD authorized reseller you can find out about deals and arrangements you might not otherwise find. For instance there is a competitive sidegrade offer if you own LightWave. Knocks XSI down to about 2 gs USD. If you are outside the US, you can try and contact a reseller in the US. They need the sale as much as the next guy and might be willing to sell you a version you can buy without shipping/tax to save on those costs as well. That's what I did. You can contact me directly also and I am happy to put you in touch with the salesman I deal with.

adk
10-23-2012, 04:58 AM
If you contact an AD authorized reseller you can find out about deals and arrangements you might not otherwise find. For instance there is a competitive sidegrade offer if you own LightWave. Knocks XSI down to about 2 gs USD. If you are outside the US, you can try and contact a reseller in the US. They need the sale as much as the next guy and might be willing to sell you a version you can buy without shipping/tax to save on those costs as well. That's what I did. You can contact me directly also and I am happy to put you in touch with the salesman I deal with.

Cheers a bunch Richard. Really appreciate the offer & might take you up on it.
This is not knocking LW in any way btw - I've been enjoying 11 all the way. Just want to expand my toolset & nodal horizons somewhat.

50one
10-23-2012, 05:35 AM
If you contact an AD authorized reseller you can find out about deals and arrangements you might not otherwise find. For instance there is a competitive sidegrade offer if you own LightWave. Knocks XSI down to about 2 gs USD. If you are outside the US, you can try and contact a reseller in the US. They need the sale as much as the next guy and might be willing to sell you a version you can buy without shipping/tax to save on those costs as well. That's what I did. You can contact me directly also and I am happy to put you in touch with the salesman I deal with.

+1, might contact you regarding updating my max 2012 to 13 or 14 when it's out, as i was on the upgrade program but ran out of time; besides, i haven't seen anything interesting in 2013...

newdogg
01-16-2013, 01:59 AM
which LW version recognizes over 2 Gb of RAM? what OS would I need?

kolby
01-16-2013, 03:30 AM
which LW version recognizes over 2 Gb of RAM? what OS would I need?

Any 64bit version of LW (v8.5 or higher) and 64bit OS

newdogg
01-19-2013, 11:17 AM
Any 64bit version of LW (v8.5 or higher) and 64bit OS
my LW64 8.5 (Windows 7 64) only recognizes 2 Gb of RAM. what am i doing wrong?
110500

kolby
01-20-2013, 03:46 AM
my LW64 8.5 (Windows 7 64) only recognizes 2 Gb of RAM. what am i doing wrong?
110500

64bit Lightwave uses all the RAM that is installed in the system. The segment memory limit is used to split the image into segments if you do not have enough memory to render large images at once. Up to LW10.1 the limit is 2GB. LW11 has a limit of 16GB.

erikals
01-23-2013, 09:57 AM
maybe try converting the image to a .png indexed color to save memory.


Lightwave test - 25000px Indexed Png
http://youtu.be/iFccG3v5ei8

AbstractTech3D
07-10-2013, 04:15 AM
Does anyone have any new info on LW + Fabric (Creation)?

dsol
07-10-2013, 08:15 AM
64bit Lightwave uses all the RAM that is installed in the system. The segment memory limit is used to split the image into segments if you do not have enough memory to render large images at once. Up to LW10.1 the limit is 2GB. LW11 has a limit of 16GB.

Segment memory limit is the buffer size that LW uses for rendering final images. These days, you rarely need to touch it unless you're rendering at a stupidly high (like Print res) resolution. Don't set it unnecessarily high, otherwise you'll just waste ram. LW64 uses all the memory you have in your system - for Meshes, Textures, GI Caching, Shadow maps etc. If you look in Windows System Monitor you should see it uses more than 2GB.

But really, LW8 is ancient. C'mon and join us in the Lw11.x pool, the water's lovely :)

jasonwestmas
07-10-2013, 08:20 AM
I'd also keep my eye on modo for anything nodal-animation related. I only say that because Foundology (Foundry) are making huge strides in the nodal rigging dept.

I can only assume that this will all be related to 3rd party simulations and game development.

chikega
07-11-2013, 04:19 PM
Houdini Engine as a plugin in LW may be the more plausible solution in the foreseeable future.

http://www.sidefx.com/index.php?option=com_content&task=view&id=2525&Itemid=66

jasonwestmas
07-11-2013, 04:41 PM
Houdini Engine as a plugin in LW may be the more plausible solution in the foreseeable future.

http://www.sidefx.com/index.php?option=com_content&task=view&id=2525&Itemid=66

Ooo, that's a cool idea. In the past I had wished that NT would allow "Lightwave Render" to be used as a plugin in other animation packages.