Cloud Computing



Greg Lloyd 5
09-27-2010, 09:43 PM
Does anyone know of a program a person can buy that would turn 16 dual quad-core Xeon computers into a single, wickedly fast system? I can only find render-farm software like Smedge, Muster, Spider, etc., but I would like LW itself to just be really fast. Hit F12 to render and have it done in a second instead of hours and hours. Anyone know of any projects or software out there?

JonW
09-27-2010, 10:52 PM
The way you put it, it sounds as though the 16 dual Xeons are already at hand, or you may just be using that as an example. LightWave's ScreamerNet just needs setting up. It's a pain in the neck to configure, but it works perfectly well & it comes with LW. Or you could get something like Butterfly Net Render.

When rendering on third-party computers, they will want about 4 to 12 cents per GHz of processing per hour.

So if your scene is 10,000 GHz-hours, you can chuck one i7 920 CPU at it & it will take about 39 days to render. Or put 16 920s on the job & it will take about 2.5 days. Or 16 dual 2.66 GHz Xeon machines & it will take about 1.25 days.

But it will still cost $400 to render at $0.04 per GHz-hour.
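
As a rough sketch, here's that arithmetic in a few lines of Python (the scene size, clock speeds and rate are just the example figures above, and the GHz-hours model itself is only an estimate):

# Back-of-the-envelope render time & cost, using the example figures above.
def render_days(scene_ghz_hours, machines, cpus_per_machine, cores_per_cpu, ghz_per_core):
    total_ghz = machines * cpus_per_machine * cores_per_cpu * ghz_per_core
    return scene_ghz_hours / total_ghz / 24.0   # hours -> days

def render_cost(scene_ghz_hours, rate_per_ghz_hour=0.04):
    return scene_ghz_hours * rate_per_ghz_hour

scene = 10000  # GHz-hours
print(round(render_days(scene, 1, 1, 4, 2.66), 1))    # one quad-core 2.66 GHz 920: ~39 days
print(round(render_days(scene, 16, 1, 4, 2.66), 1))   # 16 of them: ~2.4 days
print(round(render_days(scene, 16, 2, 4, 2.66), 1))   # 16 dual-Xeon boxes: ~1.2 days
print(render_cost(scene))                             # 400.0 dollars at $0.04 per GHz-hour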


LightWave & all plugins will need to be on every computer. You would not want to upload all of this to every node for each render - that would be a nightmare on a cloud setup - & then you still have the scene itself to upload. A local network is the way to go. Or you could use GarageFarm, for example.

Eagle66
09-29-2010, 01:12 PM
RENDERPAL V2
http://www.renderpal.com/features.php (LW plugin or built-in)

or

http://www.garagefarm.net

An animation will take 4 days to render on a dual Xeon W5580 3.20 GHz (8 cores, 16 threads in total). If Garage Farm is used for this project, it will be finished in about 16 hours and will cost $39.99.

An animation that would take two full weeks of rendering on an Intel Core i7-860 quad core + Hyper-Threading at 2.80 GHz will be finished in about 25 hours on the farm and will cost $69.99.

Greg Lloyd 5
09-29-2010, 04:30 PM
I have a farm and use BNR, but I would like a cloud computing solution. It's kind of a reverse virtual environment: hook up as many PCs as possible to make a supercomputer with the power to complete everything in the interface extremely fast. This would include test renders and everything. I have FPrime, but it has limitations, especially with volumetric lighting.

Hopper
09-29-2010, 06:41 PM
What you are referring to is a form of "distributed" computing at its most basic level, not cloud computing. Cloud computing is generally considered to be "Internet-based computing" in real time: it is designed to share resources on demand from service-based providers across disparate networks on the Internet.

Unless you are rendering across the farm in real time, most render farms are really just ordinary, run-of-the-mill batch-processing systems with a shared repository.

But still ... 16 dual Xeons would be pretty badass to have. :)

Greg Lloyd 5
09-29-2010, 06:54 PM
I'm sorry, my mistake... I guess I don't know what to call it, but I hope someone is working on a solution like this. It would be very cool to have all that power in a single system.

Hopper
09-29-2010, 06:58 PM
I'm sorry, my mistake... I guess I don't know what to call it, but I hope someone is working on a solution like this. It would be very cool to have all that power in a single system.
No worries ... just informational. I wasn't pokin' at ya.

Greg Lloyd 5
09-29-2010, 07:07 PM
I appreciate the correction, thank you. I searched for distributed computing software; there's not too much out there yet. Hopefully soon there will be all kinds :)

crashnburn
09-30-2010, 10:18 AM
I would imagine there would be a serious bottleneck within your network connections. Wouldn't each PC basically be only as powerful as its link? With all 16 PCs having to communicate with each other or a central computer constantly, I can see a potential bottleneck unless you can afford some serious network hardware.
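
Just to put a number on it, a rough sketch (Python; the asset size and link speeds are made-up figures for illustration, and it assumes all traffic funnels through one uplink):

# Hypothetical transfer-time estimate for pushing scene assets to render nodes.
def transfer_seconds(asset_gb, nodes, link_gbit_per_s):
    bits = asset_gb * 8              # GB -> gigabits (ignoring protocol overhead)
    return bits * nodes / link_gbit_per_s

print(transfer_seconds(2, 16, 1))    # 2 GB to 16 nodes over gigabit: ~256 s before any rendering
print(transfer_seconds(2, 16, 10))   # same push over 10 GbE: ~26 s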

Greg Lloyd 5
10-04-2010, 08:10 PM
Hmmm... I see your point. :agree:

I was at a trade show in '07 and saw a Gigabyte booth, so I checked it out. This guy had a huge case with 4 Gigabyte boards in it. They were all hooked together to form a monster computer. I didn't get a good enough look to see what he connected the boards together with, but now I am kicking myself because I should have gotten his personal contact info instead of the basic corporate business card.

I think you are right, the motherboards would have to be connected together in a fashion which would circumvent the need for a network switch.

I guess I should be going to school for computer engineering instead of computer science ;-)

Hopper
10-04-2010, 08:49 PM
I think you are right, the motherboards would have to be connected together in a fashion which would circumvent the need for a network switch.

The system you are referring to is called a backplane. The architecture has been around since the days of the mainframe. It still works and is still a good design. Many motherboard manufacturers have backplane-enabled systems.

Another similar (and more modern) architecture would be a blade system. It is very similar to a backplane system; however, each individual blade is a single system and is connected to an internal switch via GigaSwift MMF (fiber).

They are fast and reliable. Many systems are becoming priced well within the range of consumer purchase.

I sold my B1600 about a year ago. It was more than I needed and somewhat of a power hog: 16 systems in one chassis. You can also get load balancers that take 2 slots each (for things like distributing traffic between blades, e.g. web servers, etc.).

http://t2.gstatic.com/images?q=tbn:ANd9GcQDdkOIheTFSYSw4kEAHVPvxFSGCRhyyeqZ0C9larT13HU9EOw&t=1&usg=__nW6v6XfkUWmtKzz4E3bkfJ9asdw=

Note: I would not recommend looking into the B1600 as a purchase. They have been end-of-life'd and the blades are hard to come by these days (that and Sun was bought by Oracle). Dell, HP, and IBM have reasonably priced blade systems.

Greg Lloyd 5
10-04-2010, 09:04 PM
Were you able to use your Sun system with LW and other programs? How much faster was it, and was it worth having? What were the downsides to that 16-blade system?

I apologize for all of the questions, but I am really interested in building a system, maybe something similar to a backplane, and I am wondering if LW would even work on it. It seems like Windows would have problems with it (Windows always seems to have problems with certain types and configurations of hardware).

Hopper
10-04-2010, 09:19 PM
Were you able to use your Sun system with LW and other programs? How much faster was it, and was it worth having? What were the downsides to that 16-blade system?
I was running 2 load balancers, 4 SunOS blades, 6 RedHat Linux, 4 Windows Server 2003 VMs, and 2 cooler (fan) inserts on each side of the chassis. They were all faster than I needed them to be. Overkill actually, but I also wasn't using them as a render farm. They were for 2 Oracle database systems, 2 Linux app servers, and the rest were test grid systems for running security simulations.

Using SunOS's imaging system, I could re-image any blade (or all 14 blades for that matter) to any previously saved OS image in about 4 minutes.

Advantages ... (too many to list).
- Different blade types (Sun, Linux, x86, etc.). Imaging subsystem is a dream.
- Gigabit switch fabric was rock solid and supported LACP protocols and full VLAN configurations.
- The system bus never failed, and it had a remote CPU that you could connect to via modem to turn the entire unit on and off remotely, even when the main power was off.
- Configuration is simple... and you could store infinite OS config images.
- and on .. and on .. and on ... I could go on for days with it .. I really enjoyed having it, but like I said. It was EOL'd, so I can't afford to have unsupported hardware.

Disadvantages...
- The dual power supplies are loud and take a full 30-amp circuit at full load (keep in mind they are built to power 32 CPUs at once at full load and be redundant - one is always in standby mode).
- The blade hard drives weren't upgradeable (120 GB max) - but I rarely used local drives; I connected to the SAN for everything but boot images.
- Cooling is an issue. They are built for front-to-back server-room rack systems and make a room quite uncomfortable even in the wintertime.

Oh .. and it had wicked cool purple LEDs in the dark ... :-)

EDIT: That pic doesn't show scale very well ... it's only 5.25 inches tall (3RU) and fits a standard 19-inch rack. If you stood it up on end, it would be smaller than your average desktop / tower system.

Hopper
10-04-2010, 09:34 PM
But to be specific about your question... I could have easily loaded up all 14 systems with Linux and used them with ScreamerNet through a Windows VM. I use my notebook (running RedHat) as a ScreamerNet node all the time. Just load up a Windows VM and set up ScreamerNet just as you normally would. It's a no-brainer.

I also noticed my renders are faster through the VM than with a "real" Windows system.
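
For what it's worth, the node side inside the VM is just the stock LWSN command line. A minimal sketch (Python wrapper purely for illustration; the install/config/content paths and node number are placeholders, and the exact flags can differ between LW versions, so check the ScreamerNet docs for yours):

# Launch one ScreamerNet render node inside the Windows VM (placeholder paths).
import subprocess

LWSN = r"C:\Program Files\NewTek\LightWave\bin\lwsn.exe"   # hypothetical install path
CONFIG = r"C:\LW\config"             # folder holding the LW config files
CONTENT = r"\\fileserver\content"    # shared content directory
CMD_DIR = r"\\fileserver\commands"   # shared job/ack command folder
NODE = 1                             # this node's number

# "-2" is the batch node mode: the node polls jobN and reports back via ackN.
subprocess.run([
    LWSN, "-2",
    f"-c{CONFIG}",
    f"-d{CONTENT}",
    rf"{CMD_DIR}\job{NODE}",
    rf"{CMD_DIR}\ack{NODE}",
])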

Greg Lloyd 5
10-04-2010, 09:42 PM
I can't stand Windows. I wish Newtek would create a Linux version of LW.

Can you run LW on Linux normally with a Win VM?

It would be nice to have a system that runs all 14 machines in parallel, so that one single system is so quick you don't need a render farm: just hit F12 and it's done in a couple of hours, or, with a rather large scene, render overnight or even over the weekend. Do you think that would have been possible with your old system?

Hopper
10-04-2010, 09:55 PM
I can't stand Windows. I wish Newtek would create a Linux version of LW.
The new LW CORE works on Linux.


Can you run LW on Linux normally with a Win VM?
Not reliably. The issue is the Sentinel driver, and OGL through a VM usually isn't pretty with an app using multiple viewports. You can get Modeler to stumble along, but Layout is too unstable.



It would be nice to have a system that runs all 14 machines in parallel, so that one single system is so quick you don't need a render farm: just hit F12 and it's done in a couple of hours, or, with a rather large scene, render overnight or even over the weekend.
They have those... you just can't afford one.. :-)

You can get a nifty IBM P 595 for about $2.5 mil. But then again .. AIX is not a very LW friendly OS...


Do you think that would have been possible with your old system?
Nope. I could render individual frames in a batch, across all systems, but not make them act as one coherent system.

What you are wanting is no simple thing, engineering-wise. If it were that simple, you would have seen this capability in an affordable package long ago. A couple of multi-core Xeons is about as exotic as you're going to get within a reasonable price range. Internal multi-core systems are vastly simpler than multiple external multi-core CPUs. That's why you don't see 4- and 8-CPU non-ASMP systems (not cores... physical CPUs) at your local Best Buy.

At this point, most 4+ CPU servers require what's called a "mid-bridge", an ultra-high-precision timing system that integrates multiple bridge chipsets. These systems are extremely expensive to manufacture, and that's why such servers (often called minis) usually start at around $50k.

Greg Lloyd 5
10-04-2010, 10:17 PM
You are very knowledgeable, that is for sure. :bowdown:

Thank you for your time and for answering all of my questions so thoroughly.

G

Hopper
10-04-2010, 10:37 PM
You are very knowledgeable, that is for sure.
Thank you for your time and for answering all of my questions so thoroughly.

You're welcome... it's what I've been doing for the past 25 years, so it's pretty easy... BUT... ask me how to do morph targets, animate a character in Layout, or surface a complex texture, and I'd be better off with a box of crayons. In comparison to 99% of the LW professionals here, I am like the old lady who uses her CD-ROM tray as a coffee cup holder.