
Thread: Cloud Computing

  1. #1
    animate-a-holic
    Join Date
    Oct 2003
    Location
    Fairfield, California
    Posts
    64

    Cloud Computing

    Anyone know of a program you can buy to turn 16 dual quad-core Xeon computers into a single, wicked-fast system? I can only find render software like Smedge, Muster, Spider, etc., but I would like LW itself to just be really fast: hit F12 and have the render done in a second instead of hours and hours. Anyone know of any projects or software out there?

  2. #2
    Super Member JonW
    Join Date
    Jul 2007
    Location
    Sydney Australia
    Posts
    2,235
    The way you put it, it sounds like the 16 dual Xeons are already at hand, or you may just be using this as an example. LightWave ScreamerNet just needs setting up. It's a pain in the neck to set up, but it works perfectly OK & it comes with LW. Or you could get something like Butterfly Net Render.

    When rendering on third-party computers they will want about 4 to 12 cents per GHz of processing per hour.

    So if your scene is 10,000 GHz-hours, you can chuck one 920 CPU at it & it will take 39 days to render. Or put 16 920 CPUs on the job & it will take 2.5 days. Or 16 2.66 GHz dual Xeons & it will take 1.25 days.

    But it will still cost $400 to render at $0.04 per GHz-hour.
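    As a rough sanity check of that arithmetic (a sketch only: it assumes the "920" is an i7 920, i.e. 4 cores at 2.66 GHz, that a dual quad-core Xeon box is 8 cores at 2.66 GHz, and that render time scales linearly, which it never quite does):

        # Rough check of the GHz-hour figures above.
        job_ghz_hours = 10_000
        rate = 0.04                          # dollars per GHz-hour

        i7_920 = 4 * 2.66                    # ~10.6 GHz of aggregate clock
        dual_xeon = 8 * 2.66                 # ~21.3 GHz

        print(job_ghz_hours / i7_920 / 24)             # ~39 days on one 920
        print(job_ghz_hours / (16 * i7_920) / 24)      # ~2.5 days on 16 of them
        print(job_ghz_hours / (16 * dual_xeon) / 24)   # ~1.2 days on 16 dual Xeons
        print(job_ghz_hours * rate)                    # $400 either way; more nodes buy time, not money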


    LightWave & all plugins will need to be on every computer. You would not want to upload all of this to every node for a render; it would be a nightmare on a cloud setup, & then you still have the scene to upload. A local network is the way to go. Or you could use Garagefarm, for example.
    Procrastination, mankind's greatest labour saving device!

    W5580 x 2 24GB, Mac Mini, Spyder3Elite, Dulux 30gg 83/006 72/008 grey room,
    XeroxC2255, UPS EvolutionS 3kw+2xEXB

  3. #3
    RENDERPAL V2
    http://www.renderpal.com/features.php (LW plugin or built-in)

    or

    http://www.garagefarm.net

    An animation will take 4 days to render on a dual Xeon W5580 (3.20 GHz, 8 cores, 16 threads in total). If Garage Farm is used for the project, it will be finished in about 16 hours and will cost $39.99.

    An animation would take two full weeks of rendering on an Intel Core i7-860 (quad core + Hyper-Threading, 2.80 GHz). It will be finished in about 25 hours on the farm and will cost $69.99.
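    A quick check of the implied speed-up in those figures (treating the "4 days" and "two full weeks" as wall-clock estimates):

        # Implied speed-up and farm cost, taken straight from the quotes above.
        w5580_hours = 4 * 24                 # dual W5580 workstation, local render
        i7_860_hours = 14 * 24               # Core i7-860, local render

        print(w5580_hours / 16)              # ~6x faster on the farm, $39.99 total
        print(i7_860_hours / 25)             # ~13x faster, $69.99 total
        print(39.99 / 16, 69.99 / 25)        # roughly $2.50-$2.80 per farm-hour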
    Last edited by Eagle66; 09-29-2010 at 01:19 PM.

  4. #4
    animate-a-holic
    Join Date
    Oct 2003
    Location
    Fairfield, California
    Posts
    64
    I have a farm and use BNR, but I would like a cloud computing solution. It's kind of a reverse virtual environment: hook up as many PCs as possible to make a supercomputer with the power to complete everything in the interface extremely fast. This would include test renders and everything. I have FPrime, but it has limitations, especially with volumetric lighting.

  5. #5
    Fórum áss clówn Hopper
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    What you are referring to is a form of "distributed" computing at its most basic level, not cloud computing. Cloud computing is considered to be "Internet-based computing" in real time; it is designed to share resources on demand from service-based providers across disparate networks on the Internet.

    Unless you are rendering across the farm in real time, most render farms are really just ordinary, run-of-the-mill batch processing systems with a shared repository.
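    In other words, most farms boil down to something like this (a bare-bones sketch of the idea, not any particular render controller; render_frame just stands in for whatever actually launches the renderer on a node):

        # Minimal "batch system with a shared repository": workers pull frame
        # numbers off a shared queue and drop their output where everyone can see it.
        import multiprocessing as mp

        def render_frame(frame):
            # stand-in for invoking the real renderer on this node
            return f"frame {frame:04d} rendered"

        def worker(queue):
            while True:
                frame = queue.get()
                if frame is None:            # poison pill: no more work
                    break
                print(render_frame(frame))   # a real farm would write to the shared repo here

        if __name__ == "__main__":
            q = mp.Queue()
            for f in range(1, 101):          # frames 1-100
                q.put(f)
            workers = [mp.Process(target=worker, args=(q,)) for _ in range(4)]
            for _ in workers:
                q.put(None)                  # one stop signal per worker
            for w in workers:
                w.start()
            for w in workers:
                w.join()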

    But still ... 16 dual Xeons would be pretty badáss to have.
    Last edited by Hopper; 09-29-2010 at 06:50 PM.
    Playing guitar is an endless process of running out of fingers.

  6. #6
    animate-a-holic
    Join Date
    Oct 2003
    Location
    Fairfield, California
    Posts
    64
    I'm sorry, my mistake... I guess I don't know what to call it, but I hope someone is working on a solution like this. It would be very cool to have all that power in a single system.

  7. #7
    Fórum áss clówn Hopper
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    Quote Originally Posted by Greg Lloyd 5
    I'm sorry, my mistake... I guess I don't know what to call it, but I hope someone is working on a solution like this. It would be very cool to have all that power in a single system.
    No worries ... just informational. I wasn't pokin' at ya.
    Playing guitar is an endless process of running out of fingers.

  8. #8
    animate-a-holic
    Join Date
    Oct 2003
    Location
    Fairfield, California
    Posts
    64
    I appreciate the correction, thank you. I searched for distributed computing software; there's not too much out there yet. Hopefully there will soon be all kinds.

  9. #9
    Super Member crashnburn
    Join Date
    Oct 2003
    Location
    UK, Yorkshire
    Posts
    565
    I would imagine there would be a serious bottleneck within your network connections. Wouldn't each PC basically only be as powerful as its link? With all 16 PCs having to communicate with each other or with a central computer constantly, I can see a potential bottleneck unless you can afford some serious network hardware.
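    A rough back-of-the-envelope with made-up round numbers (2 GB of scene and content per node, everything funnelling through one gigabit link at the file server), just to show the shape of the problem:

        # How long just moving the data takes, before any rendering happens.
        scene_gb = 2                         # assumed content size per node
        nodes = 16
        link_gbit_per_s = 1                  # gigabit Ethernet, ~125 MB/s

        total_gbits = scene_gb * 8 * nodes   # all of it crosses the server's single link
        print(total_gbits / link_gbit_per_s / 60)   # ~4.3 minutes just to push the content out

    A few minutes of copying is nothing next to hours of rendering, but if the nodes chatter constantly or pull textures on demand during the render, that same link becomes the ceiling.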
    i7 920 2.8Ghz, 12GB OCZ Gold, Gigabyte X58A-UD3R, Nvidia 250 GTS, Cheiftec A-135 750W, Zalman Z7
    Core 2 duo E6600 2.4Ghz, 8GB Corsair Dominator, Gigabyte GA-965P-DQ6, CiT PSU, Quadro FX4500

  10. #10
    animate-a-holic
    Join Date
    Oct 2003
    Location
    Fairfield, California
    Posts
    64
    Hmmm... I see your point. :agree:

    I was at a trade show in '07 and saw a Gigabyte booth, so I checked it out. This guy had a huge case with 4 Gigabyte boards in it, all hooked together to form a monster computer. I didn't get a good enough look to see what he connected the boards with, but now I'm kicking myself because I should have gotten his personal contact info instead of the basic corporate business card.

    I think you are right; the motherboards would have to be connected in a fashion that circumvents the need for a network switch.

    I guess I should be going to school for computer engineering instead of computer science ;-)

  11. #11
    Fórum áss clówn Hopper
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    Quote Originally Posted by Greg Lloyd 5
    I think you are right, the motherboards would have to be connected together in a fashion which would circumvent the need for a network switch.
    The system you are referring to is called a backplane. The architecture has been around since the days of the mainframe; it still works and is still a good design. Many motherboard manufacturers have backplane-enabled systems.

    Another similar (and more modern) architecture is a blade system. It is very similar to a backplane system, except that each individual blade is a complete system in its own right and is connected to an internal switch via GigaSwift MMF (fibre).

    They are fast and reliable. Many systems are becoming priced well within the range of consumer purchase.

    I sold my B1600 about a year ago. It was more than I needed and somewhat of a power hog: 16 systems in one chassis. Or you can get load balancers that take 2 slots each (for things like distributing traffic between blades, i.e. web servers, etc.).



    Note: I would not recommend looking into the B1600 as a purchase. They have been end-of-life'd and the blades are hard to come by these days (that and Sun was bought by Oracle). Dell, HP, and IBM have reasonably priced blade systems.
    Last edited by Hopper; 10-04-2010 at 08:54 PM.
    Playing guitar is an endless process of running out of fingers.

  12. #12
    animate-a-holic
    Join Date
    Oct 2003
    Location
    Fairfield, California
    Posts
    64
    Were you able to use your Sun system with LW and other programs? How much faster was it, and was it worth having? What were the downsides to that 16-blade system?

    I apologize for all of the questions, but I am really interested in building a system, maybe similar to a backplane, and I am wondering if LW would even work on it. It seems like Windows would have problems with it (Windows always seems to have problems with certain types and configurations of hardware).

  13. #13
    Fórum áss clówn Hopper
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    Quote Originally Posted by Greg Lloyd 5
    Were you able to use your sun system with LW and other programs? How much faster was it and was it worth it to have? What was the down sides to that 16 blade system?
    I was running 2 load balancers, 4 SunOS, 6 RedHat Linux, and 4 Windows Server 2003 VMs, plus 2 cooler (fan) inserts on each side of the chassis. They were all faster than I needed them to be. Overkill, actually, but I also wasn't using them as a render farm. They were for 2 Oracle database systems and 2 Linux app servers, and the rest were test grid systems for running security simulations.

    Using SunOS's imaging system, I could re-image any blade (or all 14 blades, for that matter) to any previously saved OS image in about 4 minutes.

    Advantages ... (too many to list).
    - Different blade types (Sun, Linux, x86, etc.). Imaging subsystem is a dream.
    - Gigabit switch fabric was rock solid and supported LACP protocols and full VLAN configurations.
    - System bus never failed, and there was a remote management CPU you could connect to via modem to turn the entire unit on and off remotely, even when the main power was off.
    - Configuration is simple... and you could store as many OS config images as you had space for.
    - and on... and on... and on... I could go on for days with it. I really enjoyed having it, but like I said, it was EOL'd, so I can't afford to have unsupported hardware.

    Disadvantages...
    - The dual power supplies are loud and take a full 30-amp circuit at full load (keep in mind they are built to power 32 CPUs at once at full load and be redundant; one is always in standby mode).
    - Blade hard drives weren't upgradeable (120 GB max), but I rarely used local drives; I connected to the SAN for everything but boot images.
    - Cooling is an issue. They are built for front-to-back server-room rack systems and make a room quite uncomfortable even in the winter.

    Oh .. and it had wicked cool purple LED's in the dark ... :-)

    EDIT: That pic doesn't show scale very well... it's only 5.25 inches tall (3RU) and fits a standard 19-inch rack. If you stood it up on end, it would be smaller than your average desktop/tower system.
    Last edited by Hopper; 10-04-2010 at 09:29 PM.
    Playing guitar is an endless process of running out of fingers.

  14. #14
    Fórum áss clówn Hopper
    Join Date
    Jan 2005
    Location
    Austin
    Posts
    3,393
    But to be specific about your question: I could easily have loaded up all 14 systems with Linux and used them with ScreamerNet through a Windows VM. I use my notebook (running RedHat) as a ScreamerNet node all the time. Just load up a Windows VM and set up ScreamerNet as you normally would. It's a no-brainer.
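    If it helps, the node launch can even be scripted from inside the VM. This is only a sketch: the paths are placeholders and the exact LWSN.exe arguments should be checked against your LightWave version's ScreamerNet docs, but the usual mode -2 form looks something like this:

        # Launch a ScreamerNet node in mode -2 (controlled by job/ack files).
        # All paths below are placeholders; point them at your own install,
        # config folder, and content directory.
        import subprocess

        NODE = 1
        cmd = [
            r"C:\LightWave\Programs\LWSN.exe",
            "-2",
            r"-cC:\LightWave\Config",            # config dir the controller also uses
            r"-dC:\Content",                     # content directory for the scene
            rf"C:\LightWave\Config\job{NODE}",   # job file the controller writes
            rf"C:\LightWave\Config\ack{NODE}",   # ack file this node writes back
        ]
        subprocess.run(cmd, check=True)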

    I've also noticed my renders are faster through the VM than they are on a "real" Windows system.
    Playing guitar is an endless process of running out of fingers.

  15. #15
    animate-a-holic
    Join Date
    Oct 2003
    Location
    Fairfield, California
    Posts
    64
    I can't stand Windows. I wish NewTek would create a Linux version of LW.

    Can you run LW on Linux normally with a Windows VM?

    It would be nice to have a system which runs all 14 blades in parallel so that a single system is quick enough that you don't need a render farm: just hit F12 and the render is done in a couple of hours. Or, with a rather large scene, render overnight or even over the weekend. Do you think that would have been possible with your old system?
