
Some questions



adrencg
05-29-2006, 07:44 PM
Can anyone answer these questions?

1. Is the modeler selection bug fixed?

2. Is OpenGL really much faster? Is it only for regular polys, or is there a speed-up with SubDs as well?

3. Does dynamics use both cores of a dual-core chip? At present, 8.5 uses just one.

Thanks to all of the posters who are filling in the non-beta users.

mattc
05-29-2006, 07:56 PM
Can anyone answer these questions?

1. Is the modeler selection bug fixed?

2. Is OpenGL really much faster? Is it only for regular polys, or is there a speed-up with SubDs as well?

3. Does dynamics use both cores of a dual-core chip? At present, 8.5 uses just one.

Thanks to all of the posters who are filling in the non-beta users.

1. Yes. You have no idea how happy I was to see that one gone...

2. It is in Layout. Modeler will be dealt with later in the 9.x cycle.

3. I don't think so.....but I could be wrong.

Regards
Matt

lots
05-30-2006, 09:17 PM
Dynamics is not exactly an easy problem to attack with multiple threads. Dynamics calculations often rely on the results of previous steps, and thus cannot be computed out of order by multiple threads.
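
To make the ordering problem concrete, here is a minimal sketch in Python (a hypothetical illustration only, not anything from LightWave): each frame's result feeds the next one, so the loop cannot simply be handed to several threads at once.

# Explicit Euler step for one particle falling under gravity.
# Frame n+1 needs the position and velocity produced by frame n,
# so the frames have to be computed in order.
def simulate(position, velocity, frames, dt=1.0 / 30.0, gravity=-9.81):
    history = []
    for _ in range(frames):
        velocity += gravity * dt   # depends on the previous velocity
        position += velocity * dt  # depends on the previous position
        history.append(position)
    return history

print(simulate(position=10.0, velocity=0.0, frames=5))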

RedBull
05-30-2006, 09:37 PM
Not sure about other applications; I believe they are only single-threaded. But RealFlow is multithread-aware, and I believe Blender might be (if not, it's to be re-implemented in a future version).

I would like to see LW multithread many bits: Viper, SE, and Dynamics, with the trend toward multicore set to become a mainstream standard.

Improvements like this will only mean more in the future.

wacom
05-31-2006, 12:41 AM
I'm no hardware wiz, but isn't multithreading different from multiprocessor/multicore? Just confused here...

mattc
05-31-2006, 01:48 AM
Yes, though both are schemes to increase thread-level parallelism (TLP).

M.

LSlugger
05-31-2006, 11:31 AM
As clock speed growth slows, systems are becoming increasingly parallel. You can have more than one CPU, which is called symmetric multiprocessing (http://en.wikipedia.org/wiki/Symmetric_multiprocessing) (although the Opteron touches on a related technology called non-uniform memory access (http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access)). You can have multiple cores on each CPU, which is really just a special case of SMP. Finally, you can have parts of each core work independently, which is called simultaneous multithreading (http://en.wikipedia.org/wiki/Simultaneous_multithreading). Intel's HyperThreading is the best-known implementation of SMT (and, unfortunately, not a very good one).

Normally, each application you run (e.g., Firefox or LightWave) is a single process (e.g., firefox.exe or lightwav.exe). The operating system can multitask (http://en.wikipedia.org/wiki/Computer_multitasking), switching between hundreds of different processes. If LightWave is busy with a render, you can still browse the web with Firefox. Your system may feel a bit sluggish, but it shouldn't completely hang. If you have a multi-core system, your system may not even feel sluggish, because the two processes can actually run at the same time.

Some applications are split into multiple processes (e.g., the Apache web server may spawn off several httpd.exe processes). Such an application should run faster on a multi-core system. Why aren't all applications written to use multiple processes? Two reasons: starting a lot of processes can be expensive (i.e., slow), and it is inconvenient for processes to communicate with each other. You may have heard people criticize the implementation of 64-bit OS X for the G5, because the GUI isn't 64 bit. A 64-bit Mac application has to be split into a 64-bit background process and a 32-bit GUI process, which is a pain.
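
As a rough illustration of the multi-process approach (hypothetical Python, just to show the shape of it; Apache and LightWave are obviously not written this way), independent jobs can be farmed out to worker processes that each get their own memory:

# Spread independent per-frame jobs across separate OS processes.
# Each worker has its own memory, so nothing is shared, but starting
# processes and shipping data between them has a real cost.
from multiprocessing import Pool

def render_frame(frame_number):
    # stand-in for an expensive, independent piece of work
    return frame_number + sum(i * i for i in range(100000))

if __name__ == "__main__":
    with Pool(processes=2) as pool:   # e.g., one worker per core
        print(pool.map(render_frame, range(8)))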

Some processes are also split into multiple threads (http://en.wikipedia.org/wiki/Thread_%28computer_science%29). Threads are sometimes called lightweight processes, because it doesn't take as long to create or destroy them as a full-fledged process. Threads can more easily communicate with each other, because they automatically share memory. Because CPUs are so much faster than the rest of your computer (memory, hard disk, network, etc.), an application can sometimes benefit from threads even on a single-core system. For example, your word processor can spell check and paginate while you are typing, and your web browser can render one image while it is downloading another. Why aren't all applications written to use multiple threads? Three reasons: as lots pointed out, some tasks are hard to split up; even though threads are lighter than processes, there is still some overhead; and threaded programming is hard.
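
And a comparable sketch with threads (again hypothetical Python): the threads share the same memory, so handing results around is trivial, but anything they both touch has to be guarded, which is part of why threaded programming is hard.

# Two threads inside one process working on shared data.
# The lock protects the shared results list; that kind of
# bookkeeping is exactly what makes threading tricky.
import threading

results = []
lock = threading.Lock()

def spell_check(words):
    checked = [w.lower() for w in words]   # stand-in for background work
    with lock:
        results.append(("spelling", checked))

def paginate(words):
    pages = [words[i:i + 2] for i in range(0, len(words), 2)]
    with lock:
        results.append(("pages", pages))

words = ["LightWave", "Layout", "Modeler", "Dynamics"]
threads = [threading.Thread(target=spell_check, args=(words,)),
           threading.Thread(target=paginate, args=(words,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)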

jayroth
05-31-2006, 01:09 PM
They scale in speed with the number of CPUs as you would expect.

Regards,

Jay Roth

lots
05-31-2006, 02:09 PM
I think only in special cases can you split up a dynamics calculation. Maybe if you have a lot of area to cover (fluid effects), you can break your computations down into sections to be computed by different CPUs (see the rough sketch at the end of this post). But overall, if you only know the initial point and velocity of a particle, you will have to compute its dynamics one frame at a time, in order. You can't know the position of the particle 500 frames from now, since the 499 frames before it have not been computed.

Now, if you have A LOT of communication between each thread/process/whatever, you could in theory compute multiple particles' positions at a time, but your I/O will be severely limited because of the overhead needed to keep the separate processes in sync with each other. Losing the synchronization would mean an error in your dynamics computations (odd, randomly moving particles :P).

It's a complex problem, and it's far simpler to compute dynamics with one process/thread (whatever :P).

Then try to introduce self-collision or collision with other particles in the simulation :/ You can see how the problem gets even more complicated with that :P
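
Here is a rough sketch of that split-by-space idea (hypothetical Python, using a thread pool only to show the structure): particles can be advanced in parallel chunks within one frame, but the frames themselves still have to run in order.

# Within a single frame the particle chunks are independent, so they
# can be stepped in parallel; across frames the simulation is still
# strictly sequential, because frame n+1 needs the state from frame n.
from concurrent.futures import ThreadPoolExecutor

DT, GRAVITY = 1.0 / 30.0, -9.81

def step_chunk(chunk):
    # advance one chunk of (position, velocity) pairs by one frame
    return [(p + (v + GRAVITY * DT) * DT, v + GRAVITY * DT) for p, v in chunk]

def simulate(particles, frames, workers=2):
    for _ in range(frames):                      # frames stay in order
        size = max(1, len(particles) // workers)
        chunks = [particles[i:i + size] for i in range(0, len(particles), size)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            particles = [p for chunk in pool.map(step_chunk, chunks) for p in chunk]
    return particles

print(simulate([(10.0, 0.0), (12.0, 0.0), (8.0, 0.0), (9.0, 0.0)], frames=5))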

Intuition
05-31-2006, 02:35 PM
How likely is it that 3D apps will take advantage of the new physics cards being made for games?

Is this dynamics hardware idea a good route to take for future development inside 3D apps? I would love to have hardware-driven dynamics calcs, since I want faster simulations so I can try many variations out in a day, as opposed to running a couple a day.

Oops, don't mean to derail the thread....

1. Yes.
2. Yes.
3. Still single-threaded.

wacom
05-31-2006, 03:39 PM
To me the hardware solution is good, but it always seems like a stopgap method. When things get too heavy for current software/standard hardware solutions, it seems someone comes up with a hardware solution, like the ICE cards or even the early VT. It seems, though, that things always come full circle eventually and lead back to more flexible software solutions as general-purpose CPUs etc. catch up.

The game industry is interesting, but in some ways a bad example for the FX/creative part of the industry. The game industry puts out standards and game machines which are fixed/semi-fixed for several years and therefore can benefit from hardware solutions, such as that physics chipset. On the 3D software front, though, it seems that things need to be too dynamic (no pun intended) and scalable for a hardware solution to really be feasible.

This might be different if, say, RealFlow were the ONLY way to do fluids, kind of like DirectX has become the standard for many console game engines. Then you could have hardware acceleration on standardized operations while still having other parts more dynamic.

But what do I know. **** if there were a chip for 100 bucks that did 90% of physics tasks in near real time, I wouldn't argue.

lots
05-31-2006, 09:10 PM
Speaking of dynamic, FPGA chips (hmm, thinking about it, I may have butchered the acronym) are basically hardware that is reconfigurable on the fly, while the machine is on. I could see a well-written software algorithm, able to configure such a piece of hardware, really having the advantages of both software and hardware. On the one hand, it is easily tweaked like software, with patches and updates, etc. On the other, it is fast and specialized like hardware. This is basically the holy grail of hardware design, though it is not without its faults.

It would be interesting, to say the least, if 3D apps took advantage of FPGAs and their flexibility (assuming FPGAs eventually reach practical real-world speeds); we would have a solution that fits almost anything. Especially when you consider what AMD has in store for the future of Coherent HyperTransport (licensing the technology so that the second socket in a dual-socket motherboard can be dedicated to specialized hardware: physics chips, FPGAs, etc.).