Page 5 of 42
Results 61 to 75 of 619

Thread: 11.5's BenchmarkMarbles.lws - share your machine's render time here

  1. #61
    gettin all wavy rwhunt99's Avatar
    Join Date
    May 2004
    Location
    Osceola, IN
    Posts
    213
    Well, my 1st-gen i7 920 at 2.66 GHz with 12 GB of RAM, an Nvidia GTX 650 and Win 7 64-bit turned in an OK performance at 2h 57m 05s.
    Attachment: Marbles.jpg (ID 111224)

  2. #62
    obfuscated SDK hacker Lightwolf's Avatar
    Join Date
    Feb 2003
    Location
    Stuttgart, Germany
    Posts
    13,585
    Quote Originally Posted by Celshader View Post
    Dave's cores run at 2.1Ghz, though, not 3.6Ghz.

    A better comparison would be my own FX-8150 3.6Ghz machine, which has four Bulldozer modules on the processor (eight integer cores + four shared 256bit FPUs). The shared 256bit FPUs are supposed to be able to execute two 128bit instructions at a time, but in our tests it seemed like the speed boost was more like 1.6x instead of 2x.

    ...

    So, your $319 3.6Ghz processor easily bested my $180 3.6Ghz processor, even though my processor executed more instructions per second.
    I computed something similar, but tried to find the normalised render time: i.e. the time for one physical core of a given machine at 1 GHz. Essentially, multiply the render time by the number of (physical) cores and the clock in GHz - precisely to find the speed per clock (lower time = better).

    In my case, it'd be 1814 minutes running on a single core at 1 GHz (including the HT boost), or 3626 minutes if one physical core + HT is counted as two cores.
    Dave's machine would be 3939 minutes, yours 3888 (assuming 8 cores, especially since LW doesn't use 256-bit FP instructions and the cores can thus split the shared FPU).
    To double-check: the E5-2687W mentioned earlier would take 1636 minutes (assuming no automatic overclocking, at the stock speed of 3.1 GHz). Assuming it automatically overclocks by one bin (like mine does), it'd be around 1700 minutes, which is (not) surprisingly close to the performance of my i7 (it's the same generation of CPU).

    However, that also implies that Intel has the lead when it comes to the instructions per clock, one way or another (which is consistent with the background information that I've been reading).
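    The normalisation described above can be sketched in a few lines. This is a minimal illustration, assuming the metric is simply render time x physical cores x clock; the function name and the 135-minute input are my own (the input is inferred from the 3888-minute figure quoted above, not stated in the thread).

```python
def normalised_minutes(render_minutes, physical_cores, ghz):
    """Scale a render time to one physical core at 1 GHz.

    Lower is better: the result approximates time per clock,
    so it compares per-clock efficiency across machines."""
    return render_minutes * physical_cores * ghz

# Inferred example: a 135-minute render on 8 physical cores at 3.6 GHz
print(normalised_minutes(135, 8, 3.6))  # 3888.0
```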

    Cheers,
    Mike

  3. #63
    obfuscated SDK hacker Lightwolf's Avatar
    Join Date
    Feb 2003
    Location
    Stuttgart, Germany
    Posts
    13,585
    Quote Originally Posted by jeric_synergy View Post
    Time to introduce a dollar-based metric in these benchmarks. Surely render-farm managers must.
    Yes, but then you also need to factor in the complete system price (which will be much more equal, since most components cost the same) and then compare the price difference.

    I.e. a difference of 140 bucks sounds like a lot if the performance difference is, say, only 10-30%. However, if the base system costs you, say, US$1500, then the difference in price is in line with the difference in performance.

    For a larger installation, power use is crucial as well (direct and indirect cost: electricity and cooling).
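    The whole-system argument can be made concrete with a hedged sketch. All numbers below are hypothetical (only the US$1500 base price comes from the post), and the metric is my own illustration: price times render time, so lower means more rendering per dollar.

```python
def dollar_minutes(system_price, render_minutes):
    """Price-performance metric: system price times render time.

    Lower is better - it penalises both a slow machine and an
    expensive one. Comparing it across systems shows whether a
    pricier CPU pays for itself once the base system is included."""
    return system_price * render_minutes

# Hypothetical: a US$1500 base system plus a $180 CPU vs a $319 CPU,
# with made-up render times of 120 and 100 minutes respectively.
cheap = dollar_minutes(1500 + 180, 120)
fast = dollar_minutes(1500 + 319, 100)
print(cheap, fast)  # here the pricier CPU wins, since the base price dominates
```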

    Cheers,
    Mike

  4. #64
    gettin all wavy rwhunt99's Avatar
    Join Date
    May 2004
    Location
    Osceola, IN
    Posts
    213
    Spnland: I was noticing your render; it seems different from mine and others'. The marble on the left has little color in it, and you seem to have much more of, I don't know what you'd call it (clear glass), on the edges of the left and right marbles. Wonder what causes that?
    Last edited by rwhunt99; 02-06-2013 at 07:12 PM.

  5. #65
    Red Mage Celshader's Avatar
    Join Date
    Feb 2003
    Location
    Burbank, California
    Posts
    1,817
    Quote Originally Posted by Lightwolf View Post
    assuming 8 cores, especially since LW doesn't use 256bit fp instructions and the cores can thus split the shared fpu
    We've rendered the benchmark at 16 threads on Dave's machine (one thread per FPU) and 32 threads (two threads per FPU). Instead of twice the performance, it's more like 1.6x the performance.

    The speed boost is there, but it is not the same as 32 independent FPUs would have delivered.

    I also botched my description above -- my 3.6GHz system executed more total clock cycles than your 3.6GHz system, not more instructions. The Intels definitely do more per clock cycle than the AMDs. However, AMD systems are cheaper to build in the States and have much more forgiving upgrade paths, so Dave and I are sticking with the AMD platform at this time.
    Jen's 3D -- LightWave stuff.
    Jen's 2D -- my comic book.

    Python is my smashing board. LightWave is my S.M.A.K.

  6. #66
    obfuscated SDK hacker Lightwolf's Avatar
    Join Date
    Feb 2003
    Location
    Stuttgart, Germany
    Posts
    13,585
    Quote Originally Posted by Celshader View Post
    We've rendered the benchmark at 16 threads on Dave's machine (one thread per FPU) and 32 threads (two threads per FPU). Instead of twice the performance, it's more like 1.6x the performance.
    There's likely to be some overhead from managing the threads as well. However, I think my metric still holds in practical use.
    Quote Originally Posted by Celshader View Post
    The speed boost is there, but it is not the same as 32 independent FPUs would have delivered.
    If it is the FPUs - there's a whole lot of other issues that can come into play, especially given the number of CPUs.
    Quote Originally Posted by Celshader View Post
    I also botched my description above -- my 3.6Ghz system executed more total clock cycles than your 3.6Ghz system, not more instructions.
    Absolutely... even if you count HT as an extra 25% (which seems realistic). That's all it really does: fill the pipeline of a core to execute more instructions.
    Quote Originally Posted by Celshader View Post
    The Intels definitely do more per clock cycle than the AMDs. However, AMD systems are cheaper to build in the States and have much more forgiving upgrade paths, so Dave and I are sticking with the AMD platform at this time.
    If my box were a pure rendering machine it might make more sense as well. However, the Intels still have massively better (+30-40%) single-threaded performance - and for a lot of the applications I use, such as LW, that still counts.
    I suppose the extra 100 W under load also makes a difference considering our electricity prices - that easily eats up the price difference for the CPU after two years of heavy use.

    And I seriously wish AMD wouldn't give up on the enthusiast/semi-pro/workstation market - but it seems there won't be any new hardware this year. Great, then it's only Intel left to dominate the market and prices (E5-2687W = US$1800, 8 cores, 3.1 GHz *ouch* - yes, per piece).

    Cheers,
    Mike

    P.S. I suppose the only way to really check the scaling of the CPUs is to compare a single-threaded render with one that uses all cores - if you have the patience for the single-threaded version. The only good news is that it should be faster than my metric suggests, due to much lower thread-management overhead.
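    The scaling check suggested in the P.S. boils down to a single ratio. A minimal sketch with hypothetical numbers (the function name and inputs are mine, not from the thread):

```python
def parallel_efficiency(single_thread_minutes, all_cores_minutes, cores):
    """Fraction of ideal linear speedup actually achieved.

    1.0 would mean perfect scaling; shared FPUs, HT and
    thread-management overhead all push it below 1.0."""
    speedup = single_thread_minutes / all_cores_minutes
    return speedup / cores

# Hypothetical: a 480-minute single-threaded render that finishes
# in 75 minutes on 8 cores -> 6.4x speedup, 0.8 efficiency
print(parallel_efficiency(480, 75, 8))
```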
    Last edited by Lightwolf; 02-06-2013 at 07:44 PM.

  7. #67
    geo messy madno's Avatar
    Join Date
    Feb 2009
    Location
    Germany
    Posts
    794
    Quote Originally Posted by madno View Post
    33 min, 40 sec
    Attachment 111036

    System : DELL Precision T7600
    CPU : 2 x Xeon E5-2687W
    RAM : 32 GB 1600 MHz ECC
    SSD 1 : Samsung PM83 (OS)
    SSD 2 : Crucial M4 (Scenes)
    SSD 3 : Crucial C300 (Storage)
    Graphic : AMD FirePro V7900

    I love it:
    Attachment 111037
    Price-wise I would not have bought the monster Dell. I think someone who knows how to set up a render farm can get better results at a lower price. But I used my old PC for 4 years and it was too slow even for VPR in most cases. Not spending much money on other things, I finally decided to buy the fastest workstation available. Now I am surprised that it runs so fast. I expected it to be faster than those incredible single-CPU Sandy or Ivy Bridge i7 Intels, but I never thought the extra cores of the Xeons would give that much extra speed. By the way, is there nobody here who runs a BOXX machine?

  8. #68
    Super Member JonW's Avatar
    Join Date
    Jul 2007
    Location
    Sydney Australia
    Posts
    2,235
    Quote Originally Posted by madno View Post
    Price-wise I would not have bought the monster Dell. I think someone who knows how to set up a render farm can get better results at a lower price. But I used my old PC for 4 years and it was too slow even for VPR in most cases. Not spending much money on other things, I finally decided to buy the fastest workstation available. Now I am surprised that it runs so fast. I expected it to be faster than those incredible single-CPU Sandy or Ivy Bridge i7 Intels, but I never thought the extra cores of the Xeons would give that much extra speed. By the way, is there nobody here who runs a BOXX machine?
    My W5580 box will be 4 years old in a few months. Getting the fastest machine saves a lot of buggering around with upgrades, so it's actually an economical path to choose. I also use a small company that will build what I want at a much cheaper price than a brand name - cheaper than if I got the parts myself - and they have been extremely helpful over the decades.


    W5580 x 2 (3.2 GHz x 8 real cores, 25.6 GHz total)
    24 GB
    SSD + Velociraptor
    XP64
    Noctua coolers (1 needed a slight modification to the bracket) & they don't mind me modifying the computers!

    1h 13m 17s
    Attachment: Marbles_W5580x2.jpg (ID 111240)
    Procrastination, mankind's greatest labour saving device!

    W5580 x 2 24GB, Mac Mini, Spyder3Elite, Dulux 30gg 83/006 72/008 grey room,
    XeroxC2255, UPS EvolutionS 3kw+2xEXB

  9. #69
    1h 50m 28s on a Core i7-3770 3.4 GHz, 16 GB DDR4 RAM, Nvidia 620, Win7.

  10. #70
    Studio Animator DonJMyers's Avatar
    Join Date
    Aug 2003
    Location
    Los Angeles, CA
    Posts
    573
    I take it from these posts that Xeon processors are totally kick-butt. So I looked up how much one costs. $1,400 for a PROCESSOR! NOT A COMPUTER, A FRIGGIN PROCESSOR! IT'S A CITY ON A POSTAGE STAMP AND IT COSTS AS MUCH AS A USED CAR.

  11. #71
    1h 39m 44s
    Core i7 2600K @ 4.3 GHz / 8 GB / Win7 64-bit

    Not so bad for an old single CPU

  12. #72
    dang good :]

    what kind of cooling is that?...
    LW vidz   DPont donate   LightWiki   RHiggit   IKBooster   My vidz

  13. #73
    Not so newbie member lardbros's Avatar
    Join Date
    Apr 2003
    Location
    England
    Posts
    5,817
    OUCHHHH!!

    My Laptop
    4hrs 47mins!!!!!!

    Dell XPS 1501
    Intel i7 Q840 @ 1.87 GHz (quad core - 8 threads)
    8GB RAM
    Windows 8 Pro 64-bit


    Reaaalllly nice to know that some people on here can render it out 9 times faster than me
    LairdSquared | 3D Design & Animation

    Desk Work:
    HP Z840, Dual Xeon E5-2690 v2, 32GB RAM, Quadro K5000 4GB
    Desk Home:
    HP Z620, Dual Xeon E5-2680, 80GB RAM, Geforce 1050 Ti 4GB

  14. #74
    I sincerely don't understand why they aren't just called BenchMarbles.

    Some beastly setups around here...

  15. #75
    Newbie Member
    Join Date
    Oct 2004
    Location
    London
    Posts
    23
    1h 58m

    Macbook Pro OSX 10.8.2
    2.7 GHz i7
    16 GB 1600 Mhz DDR3

    Will try a PC when I have a chance

    Attachment: Screen Shot 2013-02-07 at 23.03.35.png (ID 111265)

