
Thread: 4790K - 3950X speed comparison

  1. #1
    Registered User
    Join Date
    Jan 2004
    Location
    Stavanger, Norway
    Posts
    172

    4790K - 3950X speed comparison

    ...so, I got my new Ryzen 9 3950X system installed, and have done a comparison using one of the more intensive frames from my Sleeper Service movie:

    I7 4790K - 4x4500 MHz - 68min 19sec - RAM speed 1600 MHz
    R9 3950X - 16x3500 MHz - 19min 53sec - RAM speed 3200 MHz (CL14), cores clocking themselves to 3900-4000 MHz, max registered temp 74C

    (R9 3950X - 16x3500 MHz - 20min 17sec - RAM speed defaulted by BIOS to 2133 MHz)

    Cooled by a completely noiseless Noctua air cooler, no liquid bother needed.

    I was going to try to hack Windows 7 into working with this setup, for benchmarking if nothing else, but it would not even load the installation disk, so I had to scrap that.

    I have used Win10 a lot at work, never privately, and boy, it is REALLY a piece of s**t software. I tried the Home edition first, which was a nightmare; basically no access to my own system, really sluggish file access performance, horrible OpenGL performance in LW, and for some reason it changed - messed up - all my lights and cameras in my scenes. All the focal lengths were set to 0mm, and the zoom factors were all over the place.

    So, I went for the Pro edition instead, where the addition of Group Policy control let me kill most of the nonsense that comes with it. Most of it - it is still a piece of s**t software that does not give you full control of your own machine. But so far, everything acts normally.


    Anyway - the 3950X is predictably fast; it renders that scene in 28.6% of the time used by the 4790K, so the arithmetic adds up nicely.
    Last edited by Amerelium; 12-16-2019 at 02:54 AM.
    - Ignorance is bliss...

  2. #2
    Registered User
    Join Date
    Jun 2014
    Location
    Right here
    Posts
    1,810
    Quote Originally Posted by Amerelium View Post
    ...so, I got my new Ryzen 9 3950X system installed...

    Cooled by a completely noiseless Noctua air cooler, no liquid bother needed.
    Hey Amerelium thanks for your post, good to know the 3950X runs fine with the Noctua cooler! (I don't like water cooling)

    I've not yet decided on the 3900X or 3950X, but the additional cooling and more expensive boards required for the 3950X (afaik) seem to make the 3900X a less troublesome installation.

  3. #3
    Registered User
    Join Date
    Jan 2004
    Location
    Stavanger, Norway
    Posts
    172
    Quote Originally Posted by Marander View Post
    Hey Amerelium thanks for your post, good to know the 3950X runs fine with the Noctua cooler! (I don't like water cooling)

    I've not yet decided on the 3900X or 3950X, but the additional cooling and more expensive boards required for the 3950X (afaik) seem to make the 3900X a less troublesome installation.

    The Noctua coolers are really good; because of the large RAM modules I could only install one of the two fans that come with it, and still it does its job. As for motherboards, I reckon both CPUs should be on the same chipset; I got an ASUS TUF Gaming X570-Plus, which isn't that pricey compared to the CPU and the RAM I got - CL14 IS pricey... The advantage of the X570 chipset is that you do not have to update the BIOS beforehand. And the Noctua cooler installation is REALLY easy - they know what they are doing.

    This is the cooler: Noctua NH-D15 SE-AM4 - they have three models for these CPUs, though that one is the best. It scores better than most liquid coolers in the tests.

    What surprised me the most is that the CPU jacks itself up to 3900-4000 MHz on its own, basically running in constant turbo mode as long as the temperatures permit it.
    Last edited by Amerelium; 12-16-2019 at 03:33 AM.
    - Ignorance is bliss...

  4. #4
    Registered User
    Join Date
    Jun 2014
    Location
    Right here
    Posts
    1,810
    Quote Originally Posted by Amerelium View Post
    ...As for motherboards, I reckon both CPUs should be on the same chipset...
    Yes correct, it's the 3960X and 3970X that need new (much more expensive) mainboards / a different chipset.

    https://www.anandtech.com/show/15044...2-cores-on-7nm

    Thanks again for the details, makes the 3950X with the Noctua cooler an interesting option.

    Cheers, Marander

  5. #5
    Registered User
    Join Date
    Jan 2004
    Location
    Stavanger, Norway
    Posts
    172
    Quote Originally Posted by Amerelium View Post
    ...so, I got my new Ryzen 9 3950X system installed, and have done a comparison using one of the more intensive frames from my Sleeper Service movie:

    I7 4790K - 4x4500 MHz - 68min 19sec - RAM speed 1600 MHz
    R9 3950X - 16x3500 MHz - 19min 53sec - RAM speed 3200 MHz (CL14), cores clocking themselves to 3900-4000 MHz, max registered temp 74C
    Update - I didn't overclock, but I set turbo mode 3 in the BIOS, which lets the cores go nuts for longer:

    R9 3950X - 16x3500 MHz - 18min 54sec - max registered temp 85C

    We're talking almost four times as fast now as the 4x4.5 GHz Devil's Canyon...
    - Ignorance is bliss...

  6. #6
    www.Digitawn.co.uk rustythe1's Avatar
    Join Date
    Feb 2006
    Location
    england
    Posts
    1,391
    um, with the risk of starting one of those degenerating threads, doesn't this say the AMD is not a great (or bad) improvement? it's a 6-year-old 4-core chip with half the RAM speed vs a new 16-core chip, and per core it's considerably slower, where even at a lower frequency I would expect it to be faster?
    16 / 4 = 4 (so yeah, it should be at least 4 times as fast, being 6 years more advanced)
    68 / 18.9 = 3.59 (3.4 before you had turbo) so is that not actually about 14% slower per core, not faster?
    not trying to put it down, just pointing out it's not yanking the LW chain; seems to me the AMD still has issues with needing things like 2 clock cycles for SSE etc., or some other rendering calculation issue?
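    For reference, here is the same arithmetic in a few lines of Python (times in seconds, taken straight from posts #1 and #5 - nothing here that isn't already in the thread):
    Code:
    # Render times from posts #1 and #5
    i7_secs  = 68 * 60 + 19      # 4790K, 4 cores @ ~4.5 GHz
    r9_secs  = 19 * 60 + 53      # 3950X, stock (post #1)
    r9_turbo = 18 * 60 + 54      # 3950X with "turbo mode 3" (post #5)

    speedup       = i7_secs / r9_secs     # ~3.44
    speedup_turbo = i7_secs / r9_turbo    # ~3.62
    core_ratio    = 16 / 4                # 4x the cores

    # Per-core throughput relative to the 4790K (1.0 = equal per core)
    print(speedup / core_ratio)           # ~0.86 -> ~14% slower per core (stock)
    print(speedup_turbo / core_ratio)     # ~0.90 -> ~10% slower per core (turbo mode 3)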
    Intel i9 7980xe, Asus Rampage Vi extreme, 2x NVIDIA GTX1070ti, 64GB DDR4 3200 corsair vengeance,
    http://digitawn.co.uk https://www.shapeways.com/shops/digi...ction=Cars&s=0

  7. #7
    Registered User
    Join Date
    Nov 2018
    Location
    Germany
    Posts
    44
    I think this might be a software problem.

    Many programs have been compiled with Intel compilers, and some programs, even really recent ones, check for the vendor ID when the program starts up.
    For example, if it detects an Intel CPU the program will use AVX; when it detects an AMD CPU it will use SSE2, even if that CPU would support AVX just fine.
    That results in very poor performance on the AMD part. Just recently it was discovered that the program Matlab did just that: it uses AVX on Intel and a slow codepath on AMD.
    A small fix to the program resulted in performance gains on AMD CPUs in excess of 200%.
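    Nobody outside the dev team knows what LightWave actually does at startup, but to illustrate the difference between the two dispatch strategies, here is a minimal Linux-only Python sketch (it just reads /proc/cpuinfo; purely illustrative, not anyone's actual code):
    Code:
    # Read vendor string and feature flags from /proc/cpuinfo (Linux only)
    def cpu_info(path="/proc/cpuinfo"):
        info = {}
        with open(path) as f:
            for line in f:
                if ":" in line:
                    key, _, value = line.partition(":")
                    info.setdefault(key.strip(), value.strip())
        return info

    info = cpu_info()
    flags = set(info.get("flags", "").split())

    # Vendor-based dispatch (the problematic pattern described above):
    use_avx_by_vendor = info.get("vendor_id") == "GenuineIntel"

    # Feature-based dispatch (what you actually want):
    use_avx_by_flags = "avx" in flags

    # On a Ryzen with AVX support this prints: False True
    print(use_avx_by_vendor, use_avx_by_flags)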

    Other render programs like Blender seem to be well optimized for both Intel and AMD, as you can see in this chart.



    Even Corona seems to be well optimized for both vendors.


    I mentioned this recently in another thread, but it would be very interesting to know what LightWave actually does when it detects an Intel/AMD CPU at startup and what kind of extensions it is using on CPUs from both vendors.
    So far nothing but silence.

  8. #8
    www.Digitawn.co.uk rustythe1's Avatar
    Join Date
    Feb 2006
    Location
    england
    Posts
    1,391
    Quote Originally Posted by Lemon Wolf View Post
    I think this might be a software problem.

    Many programs have been compiled with Intel compilers, and some programs, even really recent ones, check for the vendor ID when the program starts up.
    For example, if it detects an Intel CPU the program will use AVX; when it detects an AMD CPU it will use SSE2, even if that CPU would support AVX just fine.
    That results in very poor performance on the AMD part. Just recently it was discovered that the program Matlab did just that: it uses AVX on Intel and a slow codepath on AMD.
    A small fix to the program resulted in performance gains on AMD CPUs in excess of 200%.

    Other render programs like Blender seem to be well optimized for both Intel and AMD, as you can see in this chart.



    Even Corona seems to be well optimized for both vendors.


    I mentioned this recently in another thread, but it would be very interesting to know what LightWave actually does when it detects an Intel/AMD CPU at startup and what kind of extensions it is using on CPUs from both vendors.
    So far nothing but silence.
    that's what I was getting at; I've seen it with a couple of other render engines too. So my whole point through all these vs threads has been: if you're primarily using the LW renderer, you need to be careful about the hardware you select and not rely on benchmarks to make your decision - you need to see the exact result.
    Intel i9 7980xe, Asus Rampage Vi extreme, 2x NVIDIA GTX1070ti, 64GB DDR4 3200 corsair vengeance,
    http://digitawn.co.uk https://www.shapeways.com/shops/digi...ction=Cars&s=0

  9. #9
    Registered User
    Join Date
    Jun 2014
    Location
    Right here
    Posts
    1,810
    Quote Originally Posted by rustythe1 View Post
    that's what I was getting at; I've seen it with a couple of other render engines too. So my whole point through all these vs threads has been: if you're primarily using the LW renderer, you need to be careful about the hardware you select and not rely on benchmarks to make your decision - you need to see the exact result.
    Yes, agreed - considering only the LW render engine, the leap might not be similar to what other engines or Cinebench results show.

    However, compared to what Intel can provide in that performance segment, AMD is still a very good option for getting a lot of performance on a single machine.

    The Ryzen 9 3950X is the sweet spot for LW rendering in my opinion.

    Also, looking at Cinebench results, it seems that the Ryzen 3950X is faster than an Intel 9900K in the single-thread score, even at a lower clock. Furthermore, the Ryzen 9 / Zen 2 architecture is much better with regard to core interconnect and direct memory access & I/O (which is important with a high number of cores).

    By the way, 3900X and 3950X are not Threadripper processors. The Threadripper CPUs are 3960X - 3990X, so there is even more potential with up to 64 Cores / 128 Threads. But these require more expensive mainboards.

  10. #10
    Registered User
    Join Date
    Jan 2004
    Location
    Stavanger, Norway
    Posts
    172
    There is no problem with AMD, or LW - you're just underestimating the speed of the 4790K.

    CPU rendering performance is quite linear. RAM speed helps a bit, but not that much. Windows 10 is a much bigger hurdle, as I have noticed over 30 seconds of variance in render times on this frame, depending on what mood it is in. On Windows 7 it was 100% predictable; I could render the same frame 6 months apart in exactly the same number of seconds (I have date-logged test renders with render times going years back).

    So let's look at the math:

    4 x 4500 = 18000
    16 x 3500 = 56000

    56000 / 18000 = 3.11

    4099 sec / 1134 sec = 3.61

    That is a factor 0.5 better ratio than the arithmetic would have suggested; remember, I have not overclocked anything, it's just the quality of the Noctua cooler that allows it to turbo all the cores persistently.
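    The same numbers in a few lines of Python, plus the ratio redone with the ~3.9-4.0 GHz the cores actually turbo to (that clock figure is just what I observed, so treat it as approximate):
    Code:
    # Clock totals and render times from this thread
    base_ratio  = (16 * 3500) / (4 * 4500)   # ~3.11 - base clocks, 56000 / 18000
    turbo_ratio = (16 * 3950) / (4 * 4500)   # ~3.51 - assuming ~3.95 GHz all-core turbo
    measured    = 4099 / 1134                # ~3.61 - 68min 19sec vs 18min 54sec

    print(base_ratio, turbo_ratio, measured) # measured - base_ratio is about 0.5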

    The difference in speed is much greater if I disable the corona effect I use, by the way. I don't really know what mechanics are used for applying Corona, but that part is only marginally faster now than with the 4790K, and the cores clock themselves down whilst applying it. More a memory thing?

    I would have liked to be able to benchmark this on Win7, but as mentioned, the installer would not even load.

    In any event, I am very pleased with the 3950X setup (except having to use Win10) - and every test shows that it is faster than Intels that cost twice as much. I do not think we will see anything like it in the near future, in terms of the amount of bang you get for your buck.

    Edit: LW only uses two of the cores when applying the corona effect, with those clocking themselves to 4500+ MHz. So there you go, that's close to two minutes one can slice off both render times, which means an even greater difference.
    Last edited by Amerelium; 12-22-2019 at 07:44 AM.
    - Ignorance is bliss...

  11. #11
    Registered User
    Join Date
    Nov 2018
    Location
    Germany
    Posts
    44
    That is very interesting.
    To be honest, we don't know if there is a software issue with LightWave or not.
    Other data taken from different render packages suggest that there might be one.
    If parts of your render scene only use 2 cores, maybe you can test a scene where all cores are being used for the entire duration of the test?
    That would give you more accurate results.

  12. #12
    Registered User
    Join Date
    Jun 2014
    Location
    Right here
    Posts
    1,810
    Quote Originally Posted by Lemon Wolf View Post
    That is very interesting.
    To be honest, we don't know if there is a software issue with LightWave or not.
    Other data taken from different render packages suggest that there might be one.
    If parts of your render scene only use 2 cores, maybe you can test a scene where all cores are being used for the entire duration of the test?
    That would give you more accurate results.
    Maybe LW scales a tiny bit less than other engines, but to be fair towards LW, there are parts in every engine that don't use all cores (pixel filter / post effect), like the Corona effect in this example.

  13. #13
    Quote Originally Posted by Marander View Post
    Maybe LW scales a tiny bit less than other engines, but to be fair towards LW, there are parts in every engine that don't use all cores (pixel filter / post effect), like the Corona effect in this example.
    I noticed LightWave 2019 uses all threads more often than even Modo does when rendering, in tests on my machine.
    Threadripper 2990WX, X399 MSI MEG Creation, 64GB 2400Mhz RAM, GTX 1070 Ti 8GB

    https://www.dynamicrenderings.com/

  14. #14
    Registered User
    Join Date
    Jan 2004
    Location
    Stavanger, Norway
    Posts
    172
    Quote Originally Posted by Lemon Wolf View Post
    That is very interesting.
    To be honest, we don't know if there is a software issue with LightWave or not.
    Other data taken from different render packages suggest that there might be one.
    If parts of your render scene only use 2 cores, maybe you can test a scene where all cores are being used for the entire duration of the test?
    That would give you more accurate results.
    You can just remove a minute and a half or so from each render time to get a fairly accurate result.
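    Roughly, with the "minute and a half or so" (call it 90 seconds - an estimate, not a measurement) subtracted from both render times:
    Code:
    corona = 90                 # seconds; rough estimate of the mostly 2-core Corona pass
    i7_adj = 4099 - corona      # 4790K, 68min 19sec minus the Corona pass
    r9_adj = 1134 - corona      # 3950X (turbo mode 3), 18min 54sec minus the Corona pass

    print(4099 / 1134)          # ~3.61 uncorrected
    print(i7_adj / r9_adj)      # ~3.84 with the Corona pass taken out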
    - Ignorance is bliss...

  15. #15
    Registered User
    Join Date
    Jun 2014
    Location
    Right here
    Posts
    1,810

    Hey Amerelium, thanks again for the tip about the Noctua cooler for the Ryzen 3950X! This cooler is huge, well designed, great quality and very flexible to use. I built my new machine today:

    AMD Ryzen 3950X 16 Core / 32 Threads CPU
    ASUS STRIX ROG X570-E Mainboard
    BeQuiet Dark Power Pro 850W
    Noctua NH-D15 SE-AM4 Cooler
    64 GB Corsair Vengeance RGB Pro RAM
    Geforce RTX 2070 Super GPU
    1 TB Samsung 970 Plus m.2 SSD

    After I got all the components I realized I should have gotten the low-profile RAM instead. Bummer - I didn't even want any RGB nonsense, but Corsair always works well for me and I didn't see the LP ones when I ordered. However, I managed to replace the (optional) second Noctua CPU fan with a different one, plus 3 variable case fans for good air flow.

    The system works fine, but I haven't had time to test anything yet.

    Since I already have two good dual GPU machines I won't add another GPU in this machine for now.

    Looking forward to testing LW and other applications soon!

