
View Full Version : AMD teases Zen 2 Epyc CPUs with up to 64 cores and 128 threads



Ernest
11-06-2018, 05:23 PM
https://techreport.com/news/34242/amd-teases-zen-2-epyc-cpus-with-up-to-64-cores-and-128-threads


TSMC's 7-nm process offers twice the density of GlobalFoundries' 14-nm FinFET process. It can deliver the same performance as 14-nm FinFET for half the power, or 1.25 times the performance for the same power, all else being equal.
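As a quick illustration of those two scaling options (the 100 W baseline is a made-up figure; only the 0.5x power and 1.25x performance ratios come from the article):

```python
# Hypothetical 14 nm baseline part (made-up wattage; only the
# 7 nm scaling ratios below come from the article's claims).
power_14nm = 100.0   # watts
perf_14nm = 1.0      # normalized performance

iso_perf_power = power_14nm * 0.5   # option A: same performance, half the power
iso_power_perf = perf_14nm * 1.25   # option B: same power, 1.25x performance
print(iso_perf_power, iso_power_perf)  # 50.0 1.25
```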

Dan Ritchie
11-08-2018, 03:43 PM
If I heard it right, they're going to a 256-bit bus and doubling the floating-point performance.
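If that refers to widening the FP/SIMD datapaths from 128 to 256 bits, the effect on theoretical peak FLOPS can be sketched like this (all figures below are hypothetical; real per-core FMA unit counts and clocks vary):

```python
def peak_gflops(cores, ghz, vector_bits, fma_units=2):
    # Naive peak double-precision GFLOPS:
    # cores x GHz x SIMD lanes x FMA units x 2 ops per FMA (multiply + add)
    lanes = vector_bits // 64  # 64-bit doubles per vector register
    return cores * ghz * lanes * fma_units * 2

# Hypothetical 8-core part at 4.0 GHz, before and after the FP-width doubling
print(peak_gflops(8, 4.0, 128))  # 256.0
print(peak_gflops(8, 4.0, 256))  # 512.0 -- doubling vector width doubles peak FP
```

Peak numbers like these are an upper bound; a renderer only approaches them when its hot loops are actually vectorized.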

ianr
11-09-2018, 09:33 AM
If I heard it right, they're going to 256 bit bus and doubling the floating point performance.


Yeah Dan, then they've got to chase 512-bit throughput like Skylake has.

But someone must have put something in AMD's tea to speed them up. After a period of solo domination by Intel, things are looking fruitful, like there's more to play for.

Dan Ritchie
11-09-2018, 10:19 AM
I'm using their last-gen A12 right now, and I've always liked it, but I've got to admit the first-gen Ryzen 5 laptops meet or top it in just about everything performance-wise, and with better battery life. Hopefully mobile will catch up with the desktop parts eventually and get final drivers. It's still missing features like the AMD control panel. They should be coming out with some 45-watt parts soon too.

rustythe1
11-10-2018, 11:08 AM
Yeah Dan, then they've got to chase 512-bit throughput like Skylake has.

But someone must have put something in AMD's tea to speed them up. After a period of solo domination by Intel, things are looking fruitful, like there's more to play for.

Yes, I love my new i9-7980XE. For rendering in certain apps it smashes the AMD Threadripper by nearly twice, even though the numbers say it should be about the same, and it's 5 times as fast as my old i7-5960X! I've got the Marbles scene down to about 12 minutes, and there's probably still room for more overclocking.

Nicolas Jordan
11-10-2018, 01:31 PM
I couldn't wait any longer to jump on the Threadripper bandwagon. I'm in the process of building a 2990WX machine and can't wait to see 64 buckets on my screen in Lightwave!

Nicolas Jordan
11-10-2018, 09:12 PM
Yes, I love my new i9-7980XE. For rendering in certain apps it smashes the AMD Threadripper by nearly twice, even though the numbers say it should be about the same, and it's 5 times as fast as my old i7-5960X! I've got the Marbles scene down to about 12 minutes, and there's probably still room for more overclocking.

Which Threadripper CPU are you referring to, the 2950X or the 2990WX? The 2990WX is priced about the same as the i9-7980XE but packs way more punch with its 32 cores at a higher base clock. I'm also curious why you got an i9-7980XE over a 2990WX?

rustythe1
11-11-2018, 07:25 AM
Because it's 32 cores, and because it's twice as powerful (maybe not in the numbers, but in any benchmarks it just smashes the Threadripper, as the Threadripper is not very good at multi-threaded rendering). In fact there is another recent thread that points to a benchmark where the old 8800K was rendering Lightwave, Arnold, etc. almost as fast as, if not faster than, the AMDs.
Also the X/XE range is known for its longevity, as they are based on Xeon chips. I've used X/XE chips for the last 20 years or so, since the P3 Xeon, and I still have every single one working today.
https://www.cpubenchmark.net/high_end_cpus.html
If you check this chart, the 2950 was running at 3.2 GHz while the Intel was only running at 2.6 stock; mine is running at 4.6 GHz. So if you're on a budget, yes, the Threadripper is very appealing, but if you're on tight deadlines and want quality (and the fact is it's nearly twice as fast), then the much more expensive Intel wins hands down (it even smashes the $4-5,000 Xeon chips!). The Intel will also turbo on a decent air cooler, whereas the AMD has to be a full water rig at minimum; at 4.6 GHz mine is running on a Corsair H150i Pro cooler at 60°C!

Sorry, I couldn't find the thread I was referring to; I think it was actually on FB. But you can see in the Marbles benchmark thread that I rendered it twice as fast as the Threadrippers on the previous pages.

Nicolas Jordan
11-11-2018, 09:26 AM
Because it's 32 cores, and because it's twice as powerful (maybe not in the numbers, but in any benchmarks it just smashes the Threadripper, as the Threadripper is not very good at multi-threaded rendering). In fact there is another recent thread that points to a benchmark where the old 8800K was rendering Lightwave, Arnold, etc. almost as fast as, if not faster than, the AMDs.
Also the X/XE range is known for its longevity, as they are based on Xeon chips. I've used X/XE chips for the last 20 years or so, since the P3 Xeon, and I still have every single one working today.
https://www.cpubenchmark.net/high_end_cpus.html
If you check this chart, the 2950 was running at 3.2 GHz while the Intel was only running at 2.6 stock; mine is running at 4.6 GHz. So if you're on a budget, yes, the Threadripper is very appealing, but if you're on tight deadlines and want quality (and the fact is it's nearly twice as fast), then the much more expensive Intel wins hands down (it even smashes the $4-5,000 Xeon chips!). The Intel will also turbo on a decent air cooler, whereas the AMD has to be a full water rig at minimum; at 4.6 GHz mine is running on a Corsair H150i Pro cooler at 60°C!

Sorry, I couldn't find the thread I was referring to; I think it was actually on FB. But you can see in the Marbles benchmark thread that I rendered it twice as fast as the Threadrippers on the previous pages.

I guess I will have to see what kind of power it has with Lightwave. I have been impressed with all the rendering benchmarks I have seen for the 2990WX so far in Cinebench and Blender. I decided at the last minute to air-cool it with a Noctua NH-U14S and will probably add a second fan to it eventually. I went with air cooling since it appears to be much more reliable for heavy usage. It will probably be a little while yet, but once I get it up and running I will render the Marbles scene at base clock to see how it does. I expect the 2990WX to be about 5 times faster than my i7-4930K in Lightwave. There is also only one motherboard that was made with the 2990WX in mind, the MSI MEG X399 Creation, which I purchased for my build, so it will be interesting to see how far I might be able to push it eventually.

rustythe1
11-11-2018, 09:48 AM
Sorry, I got a bit confused about core counts. Looking at this https://www.cpubenchmark.net/compare/AMD-Ryzen-Threadripper-2990WX-vs-Intel-Core-i9-7980XE/3309vs3092 it would seem the 2990WX is a fairly poor chip in comparison (it's actually slower than the 2950; can that be correct?). The 7980 smashes it even though it's got half the physical cores, and just look at the yearly running costs! I would expect it's more than 5 times faster than the 4930, as it's close to the 7980, and my 7980 is 5 times the speed of my 5960, which is quite a bit faster than the 4930.

Nicolas Jordan
11-11-2018, 10:24 AM
Sorry, I got a bit confused about core counts. Looking at this https://www.cpubenchmark.net/compare/AMD-Ryzen-Threadripper-2990WX-vs-Intel-Core-i9-7980XE/3309vs3092 it would seem the 2990WX is a fairly poor chip in comparison (it's actually slower than the 2950; can that be correct?). The 7980 smashes it even though it's got half the physical cores, and just look at the yearly running costs! I would expect it's more than 5 times faster than the 4930, as it's close to the 7980, and my 7980 is 5 times the speed of my 5960, which is quite a bit faster than the 4930.

For multi-core, multi-threaded usage the 2990WX should be way faster at rendering than a 2950X, since it has double the cores at only a slightly lower base clock speed. I'm not sure what's going on with the benchmarks there; it seems a bit odd.

Strider_X
11-11-2018, 10:59 AM
Windows 10 strikes again
https://www.phoronix.com/scan.php?page=article&item=2990wx-linux-windows&num=1

hrgiger
11-11-2018, 12:14 PM
There are some inherent limitations to the 2990. That and cost vs performance gain is the reason I went with the 2950.

https://www.pcworld.com/article/3298859/components-processors/how-memory-bandwidth-is-killing-amds-32-core-threadripper-performance.html

That said, I only went with the threadripper to improve general computing tasks in applications that depend on CPU power outside of rendering. GPU is still the way to go forward.

Nicolas Jordan
11-11-2018, 02:43 PM
There are some inherent limitations to the 2990. That and cost vs performance gain is the reason I went with the 2950.

https://www.pcworld.com/article/3298859/components-processors/how-memory-bandwidth-is-killing-amds-32-core-threadripper-performance.html

That said, I only went with the threadripper to improve general computing tasks in applications that depend on CPU power outside of rendering. GPU is still the way to go forward.

I was thinking about getting the 2950X or even the 1950X since it's a bit cheaper but my reasoning for getting the 2990WX was that if I'm already spending thousands on building a new machine I might as well spend double on the CPU to squeeze more performance out of my entire investment. I'm just waiting for my Noctua cooler to arrive and then I can finish putting everything together and run some tests. Hopefully it will work well for what I need.

Imageshoppe
11-11-2018, 03:36 PM
I'm building a new high-end box to complement and expand my CPU-based rendering needs at the end of the year. I went from Intel to AMD a couple of years ago, after a major lightning strike killed my Intel systems. I've been running an AMD 1800X I built at release, and a 1700, both overclocked to their limits for that period of time, and have been very pleased with the dollar/value performance. However, I have NO loyalty to AMD or Intel. Whoever wins, wins. And a few hundred dollars one way or the other on the CPU isn't an issue.

I've been leaning toward the 2990WX, as I can't imagine that every core/thread won't be utilized to the max (with Lightwave) and wouldn't best any other offering with fewer cores or fewer threads at the same MHz. I will of course overclock to 24/7 sustained performance limits.

So, to be very clear, are people saying and confirming that the Intel i9-7980XE (overclocked to the usable limit) would out-render the 2990WX overclocked to its performance ceiling in Lightwave? I'm not concerned about single-thread performance in apps that aren't nicely multi-threaded, just the raw Lightwave render power of one box over the other. Please, no "might" or "could" if you "just do this or that" based on anyone's platform bias. Either yes or no based on personal experience or links to specific benchmarks.

Thanks,

hrgiger
11-11-2018, 04:09 PM
I was thinking about getting the 2950X or even the 1950X since it's a bit cheaper but my reasoning for getting the 2990WX was that if I'm already spending thousands on building a new machine I might as well spend double on the CPU to squeeze more performance out of my entire investment. I'm just waiting for my Noctua cooler to arrive and then I can finish putting everything together and run some tests. Hopefully it will work well for what I need.

Well, unless you're overclocking, you're not really doubling render speed with the 2990, as it has a 0.5 GHz slower base clock. It's more like a 58% speed boost. As always, the newest thing on the market is usually twice the cost of the next thing down without delivering twice the performance.

Nicolas Jordan
11-11-2018, 04:36 PM
I'm building a new high-end box to complement and expand my CPU-based rendering needs at the end of the year. I went from Intel to AMD a couple of years ago, after a major lightning strike killed my Intel systems. I've been running an AMD 1800X I built at release, and a 1700, both overclocked to their limits for that period of time, and have been very pleased with the dollar/value performance. However, I have NO loyalty to AMD or Intel. Whoever wins, wins. And a few hundred dollars one way or the other on the CPU isn't an issue.

I've been leaning toward the 2990WX, as I can't imagine that every core/thread won't be utilized to the max (with Lightwave) and wouldn't best any other offering with fewer cores or fewer threads at the same MHz. I will of course overclock to 24/7 sustained performance limits.

So, to be very clear, are people saying and confirming that the Intel i9-7980XE (overclocked to the usable limit) would out-render the 2990WX overclocked to its performance ceiling in Lightwave? I'm not concerned about single-thread performance in apps that aren't nicely multi-threaded, just the raw Lightwave render power of one box over the other. Please, no "might" or "could" if you "just do this or that" based on anyone's platform bias. Either yes or no based on personal experience or links to specific benchmarks.

Thanks,

When it comes to rendering in Lightwave I'm certain that the 2990WX would outdo the i9-7980XE, no problem. Watch this video https://www.youtube.com/watch?v=vmkLZcjFu2Y to see them compared at both base clock and overclocked speeds. He does a test using Cinebench later in the video.

Imageshoppe
11-11-2018, 04:36 PM
Well, unless you're overclocking, you're not really doubling render speed with the 2990, as it has a 0.5 GHz slower base clock. It's more like a 58% speed boost. As always, the newest thing on the market is usually twice the cost of the next thing down without delivering twice the performance.

But you ARE overclocking... isn't it sort of expected? Every box I've built in the last 6-8 years I've overclocked. Overclocking within reasonable tolerances is not the mad-science project it was 10-15 years ago. The tests I've seen suggest the 2990WX is a safe bet at 4.0, or at the outside 4.1. Sure, you need to be prepared for a system that will hit 700 or more watts under full load, but what's so bad about that compared to several smaller boxes that total 1,000 watts or more? It will throttle down when you don't need it.

So that is where I'm confused with aspects of this conversation... if the 2990WX is 32 cores at 4.0 and the Intel i9-7980XE is 18 cores at maybe 4.5, how is the 2990WX not the winner by a substantial amount? Again, both are overclocked to the limits of non-science-project 24/7 workload expectations. And again, I'm talking pure LW rendering, not how one measures up under single- or reduced-core situations with apps like After Effects or the like... just LW rendering.
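The cores-times-clock intuition can be written down as a naive estimate (using the overclock figures from this post; it deliberately ignores IPC, AVX width, and memory bandwidth, which is exactly where the disagreement in this thread comes from):

```python
def naive_throughput(cores, ghz):
    # Crude all-cores-busy estimate: assumes perfect scaling and identical
    # per-core IPC, which real renderers (and the 2990WX's memory layout)
    # won't fully deliver.
    return cores * ghz

amd = naive_throughput(32, 4.0)    # 2990WX, overclock figure from the post
intel = naive_throughput(18, 4.5)  # i9-7980XE, overclock figure from the post
print(round(amd / intel, 2))       # 1.58 -- the naive model favors the 2990WX
```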

So show me the way... is there a better total LW experience to be had with the 18-core Intel i9 over the 32-core Threadripper? I suppose I'd be happier if it's true, for all the other reasons people would choose that Intel box over an AMD...

Imageshoppe
11-11-2018, 05:27 PM
When it comes to rendering in Lightwave I'm certain that the 2990WX would outdo the i9-7980XE, no problem. Watch this video https://www.youtube.com/watch?v=vmkLZcjFu2Y to see them compared at both base clock and overclocked speeds. He does a test using Cinebench later in the video.

I agree with you 100%, and those numbers for Cinebench in that video seem to validate my intuitive argument in my last post about clock speed multiplied by cores.

So, if your goal is the best times in your fully multi-threaded 3D app, the 2990WX is the clear winner. If your main work is editing with Premiere or Resolve, the i9-7980XE is the clear winner. Different results based on what you need the most productivity boost for. I live in both worlds, but find that what I get paid to do in life is 75% 3D animation and 25% video production/editing/coloring/etc. So the "winner" in my world would be the 2990WX. I'm sure the 4K editing experience in Resolve is vastly improved on the 2990WX over my simple 1800X, so it's still a significant "win" in a way, in my particular situation.

Would anyone disagree with that as a valid and fair comparison/evaluation of the 2990WX vs the i9-7980XE? And if so, why?

hrgiger
11-11-2018, 06:53 PM
I'm not talking about the i9; I don't care about Intel's overpriced CPU. I was talking about the choice between a 2950 and a 2990.

Imageshoppe
11-11-2018, 10:15 PM
I'm not talking about the i9; I don't care about Intel's overpriced CPU. I was talking about the choice between a 2950 and a 2990.

Are you doing any overclocking on your 2950? It sounded as if you may not be. If not, why not?

rustythe1
11-12-2018, 02:18 PM
So that is where I'm confused with aspects of this conversation... if the 2990WX is 32 cores at 4.0 and the Intel i9-7980XE is 18 cores at maybe 4.5, how is the 2990WX not the winner by a substantial amount? Again, both are overclocked to the limits of non-science-project 24/7 workload expectations. And again, I'm talking pure LW rendering, not how one measures up under single- or reduced-core situations with apps like After Effects or the like... just LW rendering.


Because, as mentioned before, 256-bit vs 512-bit FP, and also AMD were in a rush to compete and made a very serious design flaw in the chip. As soon as you use more than 8 cores it cuts the memory bandwidth by 75%, as it basically has two 16-core processors, but the memory from the first processor has to go through the second. Even PCWorld etc. are reporting it, so it must be serious, as the story damages their own sales! For rendering, the Intel is a much better choice; even in his tests above he was running the Intel below stock turbo.
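As a toy illustration of the bandwidth-sharing concern (the quad-channel DDR4-2933 figures are assumptions, and the real 2990WX issue also involves inter-die hops rather than simple sharing):

```python
# Toy model: a fixed pool of memory bandwidth shared by the active cores.
# Assumes quad-channel DDR4-2933 with 64-bit (8-byte) channels; the real
# behavior is more complex because half the dies have no direct memory access.
channels = 4
bytes_per_transfer = 8          # 64-bit channel width
transfers_per_sec = 2933e6      # DDR4-2933
total_bw = channels * bytes_per_transfer * transfers_per_sec  # ~93.9 GB/s

for cores in (8, 16, 32):
    per_core = total_bw / cores / 1e9
    print(f"{cores:2d} active cores -> {per_core:.1f} GB/s per core")
```

Going from 8 active cores to 32 quarters the per-core share even in this idealized model, which is one way to read the "75%" figure; actual measured bandwidth behavior is covered in the PCWorld article linked below.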

Imageshoppe
11-12-2018, 03:19 PM
Because, as mentioned before, 256-bit vs 512-bit FP, and also AMD were in a rush to compete and made a very serious design flaw in the chip. As soon as you use more than 8 cores it cuts the memory bandwidth by 75%, as it basically has two 16-core processors, but the memory from the first processor has to go through the second. Even PCWorld etc. are reporting it, so it must be serious, as the story damages their own sales! For rendering, the Intel is a much better choice; even in his tests above he was running the Intel below stock turbo.

Sure, I get and understand those issues. But will those defects and design "errors" impact Lightwave CPU rendering in any meaningful way? It most certainly will and does impact the volume of data moved around for video editing, but compared to that, moving memory to and from cores for 3D apps should be many factors less critical and "time-sensitive".

I grant and agree that the Intel chip is superior for video/compositing situations (and said so already), where memory needs and speed are at their highest. But if your PRIMARY intent is Lightwave rendering, won't the 2990WX be quite a bit faster? If not, why?

jwiede
11-12-2018, 04:07 PM
I grant and agree that the Intel chip is superior for video/compositing situations (and said so already), where memory needs and speed are at their highest. But if your PRIMARY intent is Lightwave rendering, won't the 2990WX be quite a bit faster? If not, why?

You're joking, right? What you've said is essentially a "lose a little per item, but make it up in volume" argument.

Having reduced bandwidth capacity doesn't mean it can run less-taxing loads faster than a higher-bandwidth system; that makes no sense. Even if we were to grant that video editing is somehow a more memory-bandwidth-taxing load than CPU rendering (I disagree), the higher-bandwidth system will still fulfill the memory access requests of any bandwidth-requiring task _faster_ than the lower-bandwidth system, ceteris paribus. That's precisely what "higher bandwidth" actually MEANS: it is capable of higher throughput.

BTW, CPU rendering's heavy dependency on FP (often DP) math also pounds on a specific 2990 shortcoming.

Imageshoppe
11-12-2018, 04:33 PM
Well, apologies to all for my ignorance. My poor feeble mind was trying to make an analogy back to the day I had a three computer Pentium D network, where the 100baseT was slow, but it didn't matter so much because there wasn't a lot of texture swapping and frames were so slow to render, the net speed of moving the final frames was almost irrelevant. Crappy analogy for a tiny brain, I guess.

My original and only question... which is the faster single CPU for running Lightwave, the AMD 2990 or the Intel i9-7980XE?

jwiede
11-12-2018, 05:34 PM
My original and only question... which is the faster single CPU for running Lightwave, the AMD 2990 or the Intel i9-7980XE?

I'd wait and see if anyone posts 2990WX numbers in the Marbles benchmark thread (ideally rendered in LW2015, otherwise difficult to compare directly), as there are some i9-7980XE numbers already posted. As the core counts get really high, LW appears to suffer from threading inefficiencies a bit more than other, more recently updated app infrastructures, so it's a bit difficult to call whether LW will favor the 2990WX the same way Cinebench (https://www.tomshardware.com/reviews/amd-ryzen-threadripper-2-vs-intel-skylake-x,5727.html) and the V-Ray Benchmark* (https://www.pugetsystems.com/labs/articles/V-Ray-CPU-Rendering-AMD-Threadripper-2990WX-Takes-the-Single-CPU-Performance-Crown-1214/) do over the i9-7980XE.

For full transparency, I used to be an employee of the AMD processor division years ago. I think they're both excellent CPU offerings, and you'd be fine with either. Personally, I tend to believe the 2990WX's greater thread count will offer better overall scalability for DCC-type work in the long run compared to the i9-7980XE, given modern software trending toward more efficient fine-grained multithreading. OTOH, the 2990WX is also newer, so the potential for undesirable "surprises" appears a bit higher based on history.

BTW, it would be really useful if Newtek could put together a new LW2018-based "benchmark scene", with similar focus w.r.t. taxing the LW2018 renderer and infrastructure, as was done w.r.t. prior engine and "Marbles" scene. Results obviously won't be comparable between the new benchmark scene and the prior "Marbles" results, but being forced to use LW2015 to generate currently-comparable results doesn't really give an accurate sense of how LW2018 will perform either. Customers need an effective (and lasting, as the "2018-updated Marbles" scene isn't heavyweight enough) way to evaluate LW2018 performance on modern hardware.

*: Non-"independent" testing, compared to sources like Tom's.

next_n00b
11-12-2018, 06:07 PM
Maybe this link can help? (see 9:00)
https://www.youtube.com/watch?v=YeXtTYAPzXU&feature=youtu.be

Strider_X
11-12-2018, 06:18 PM
I would go with the 2990WX for rendering with Lightwave and upgrade your current system with the drop-in replacement 2700 for video editing.

My reasons are:
- As far as I can tell, Lightwave doesn't utilize AVX.
- The 2990WX can use ECC RAM.
- If you wanted to, you could GPU-render with 4 graphics cards and have a fully enabled PCIe x16 slot for each, directly connected to the CPU.
- It costs less for better performance in CPU rendering.

Imageshoppe
11-12-2018, 06:49 PM
I would go with the 2990WX for rendering with Lightwave and upgrade your current system with the drop-in replacement 2700 for video editing.

My reasons are:
- As far as I can tell, Lightwave doesn't utilize AVX.
- The 2990WX can use ECC RAM.
- If you wanted to, you could GPU-render with 4 graphics cards and have a fully enabled PCIe x16 slot for each, directly connected to the CPU.
- It costs less for better performance in CPU rendering.

Thanks, and thanks to all for the positive replies. I'm sure it makes sense to wait to make a decision until there's at least ONE data point with the 2990WX on a LW benchmark (no pressure at all, Nicolas :)).

Also, I echo the need for an "official" 2018 version of the benchmark. LW 2018 is the bloody reason I need to "go faster" in the first place.

As background, I've built all my own systems in the post-Amiga era, going back to my first LW PC, a Pentium 100 (a build which cost MORE in 1996 or so than anything we're debating today, BTW). However, I don't live and breathe this stuff until shortly before I want to move; then I just stick my head up and see what's what with all the new gear. I'll be watching with interest over the coming weeks...

Regards

jwiede
11-12-2018, 08:37 PM
Thanks, and thanks to all for the positive replies. I'm sure it makes sense to wait to make a decision until there's at least ONE data point with the 2990WX on a LW benchmark (no pressure at all, Nicolas :)).

You might also find this article (https://techgage.com/article/amd-ryzen-threadripper-2990wx-32-core-workstation-processor-review/4/) interesting, as it shows the 2990WX versus the i9-7980XE across a few other rendering engines. In general, it does appear the additional cores of the 2990WX tend to push it ahead of the i9-7980XE in rendering tasks.

Nicolas Jordan
11-12-2018, 09:51 PM
Thanks, and thanks to all for the positive replies. I'm sure it makes sense to wait to make a decision until there's at least ONE data point with the 2990WX on a LW benchmark (no pressure at all, Nicolas :)).

Also, I echo the need for an "official" 2018 version of the benchmark. LW 2018 is the bloody reason I need to "go faster" in the first place.



I didn't really realize until now that I could be one of the first using Lightwave on the 2990WX. I will try to have some render tests out as soon as I can, which might be in the next week or two, but they will likely be at base clock speed, at least to start with. The longer render times in LW 2018 are also part of the reason I decided to get a 2990WX.

rustythe1
11-13-2018, 03:19 AM
I didn't really realize until now that I could be one of the first using Lightwave on the 2990WX. I will try to have some render tests out as soon as I can, which might be in the next week or two, but they will likely be at base clock speed, at least to start with. The longer render times in LW 2018 are also part of the reason I decided to get a 2990WX.

Well, I think both these processors put you back into the race with renderers like Octane, so I think you will be impressed whatever the shortcomings of either are. Like I said, my 7980 is 5 times as fast as my 5960, which was a hefty processor anyway. I'm currently rendering high-quality renders of a very detailed early-1900s battleship at 10,000 pixels wide, brute-force GI etc., in 3 or 4 minutes, so yes, it's a massive step forward with these new core counts.
I don't think you can rely on many of the online results to give you a fair picture, as most of these are done using test benchmarks, or run at stock, or use non-recommended hardware etc. For example, the 7980 has specific pins and features that are only available through Asus Rampage motherboards; out of the box there's a considerable difference between that and whatever desk rig they might be using for tests. With recommended hardware it will run at 4.5 turbo with no user tweaking on the standard XMP profile.

TheLexx
11-13-2018, 05:49 AM
These high-end processors are obviously being released due to increased demands, but I wonder what sort of performance could be expected in practice if a user took a "backward step" and rendered animation out at 480p. Would that be closer to realtime?

Imageshoppe
11-13-2018, 07:17 AM
These high-end processors are obviously being released due to increased demands, but I wonder what sort of performance could be expected in practice if a user took a "backward step" and rendered animation out at 480p. Would that be closer to realtime?

If only I could convince today's clients of that! I have un-archived old projects I did when freelancing for Computer Cafe back in the early 2000s, and they literally blaze through in SD, and those were jobs where I'd turn on all the bells and whistles, then send them to Cafe to render on their "massive" render farm for final output.

Ironically, some of the LW 2018 provided examples ARE at super-reduced SD rez, probably because you'd never finish them if they weren't. Even in SD it's slow, slow, slow! Granted, based on all the notes posted since the LW 2018 release about how to optimize, most of those scenes aren't very well set up, but there's no denying that LW 2018 is more demanding.

Some of my jobs have to be at UHD rez, which was workable for me with two two-year-old machines ScreamerNet'ed together under LW 2015, but despite all the optimizations and research I've done, LW 2018 sort of requires a magnitude jump. Of course there's the GPU approach, but I'm stubborn about wanting all the native LW features all the time, without discovering that some interesting look or trick I come up with can't be duplicated or rendered with Octane. Maybe I'm wrong on that, but in any event a blistering fast machine would seem to be a solid base for ANY direction you might have to go, GPU or CPU.

Nicolas Jordan
11-13-2018, 08:58 AM
Granted, based on all the notes posted since the LW 2018 release about how to optimize, most of those scenes aren't very well set up, but there's no denying that LW 2018 is more demanding.



Yes, LW 2018 is much more demanding for rendering no matter how you look at it. For me, in order to keep using Lightwave and move to LW 2018 in production, I made the decision to invest in a powerful CPU to make that work. I also want to be able to use all of Lightwave's native features without having to worry about what's compatible with Octane, so that is part of the reason I decided to stick with CPU rendering rather than GPU. I personally think the future of rendering is being able to use both CPU and GPU together, so you are not required to choose between one or the other.

Lemon Wolf
11-13-2018, 10:27 AM
I am really looking forward to the release of Zen 2.
Some very promising features, like the enhancements to the FP units and AVX performance.
Hopefully they will release a 64-core Threadripper. It would be an easy upgrade with the TR4 platform.


For rendering in certain apps it smashes the AMD Threadripper by nearly twice
That is a rare occasion. It would be interesting to find out why this is the case. If it's the Windows scheduler, perhaps AVX-256, or some other code optimization that simply doesn't run that well on Ryzen.
Lightwave 2015 was, after all, written before Ryzen was even on the horizon.



It's 5 times as fast as my old i7-5960X!
No it's not.
According to your data the 5960X took 56 minutes to finish and the 7980XE 17 minutes 16 seconds. That's 3.24 times faster.
Besides, you should post the clock speeds of both CPUs for better comparison.
Still, that kind of improvement isn't just the increase in core count; it really does look like the 7980XE can use AVX-512 for this.
It would be interesting to know "if" AVX-512 is only being used for certain parts of the render engine, or for the entire engine.
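The arithmetic behind that 3.24x figure, in a couple of lines:

```python
old = 56 * 60        # i7-5960X: 56 minutes, in seconds
new = 17 * 60 + 16   # i9-7980XE: 17 minutes 16 seconds
print(round(old / new, 2))  # 3.24 -- well short of the claimed 5x
```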


And because it's twice as powerful (maybe not in the numbers, but in any benchmarks it just smashes the Threadripper, as the Threadripper is not very good at multi-threaded rendering)
No it's not.
Your statement is quite wrong.
I read some of your other posts regarding CPU performance. You seem to rely solely on Passmark for your assumptions. Do you think it is a good idea to use just one source to form an opinion?
Besides, Passmark is very inconsistent with its results. It's obvious when looking at them.



In fact there is another recent thread that points to a benchmark where the old 8800K was rendering Lightwave, Arnold, etc. almost as fast as, if not faster than, the AMDs
I can't find this thread anywhere here. More information is required to put this into perspective.



Also the X/XE range is known for its longevity, as they are based on Xeon chips. I've used X/XE chips for the last 20 years or so, since the P3 Xeon, and I still have every single one working today
Where does that come from? Can you tell me where you got this information about longevity? It is "known" around where? Are there sources stating that other than yourself?
Remember, it doesn't matter if the CPU is a Xeon, an X or XE, or an i3; they are all manufactured in the same factories, all using the same silicon. All these CPUs have the same "longevity". There is no difference.



https://www.cpubenchmark.net/high_end_cpus.html
If you check this chart, the 2950 was running at 3.2 GHz while the Intel was only running at 2.6 stock; mine is running at 4.6 GHz. So if you're on a budget, yes, the Threadripper is very appealing
Look at the chart you posted. The Intel Xeon Platinum 8173M is just slightly above the 7980XE even though it has 10 more cores/20 more threads. Even the clock difference doesn't account for that. Passmark doesn't scale well.


the Intel was only running at 2.6 stock box
You don't know that.
PassMark shows the default stock values in its charts, not the actual speed the CPU was tested at.
Besides, look at the result I got just now. With that result it would be at the top of the chart.
Now how can that be? According to the chart it should score around 23,953.
By the way, when I did a second run all the sub-scores were slightly better, yet it showed a lower overall CPU result.

http://lemonwolf.de/passmark.png



if you're on tight deadlines and want quality (and the fact it's nearly twice as fast), then the much more expensive Intel wins hands down
Why are you insinuating that AMD CPUs are of lower quality? They are not.
And just because the 7980XE is faster in one application (we still don't know the cause) doesn't mean it's faster in general.


the Intel will also turbo on a decent air cooler, whereas the AMD needs a full water rig at minimum; at 4.6 GHz mine is running on a Corsair H150i Pro cooler at 60 degrees!
You should have watched some of the 7980XE reviews. When they did a moderate overclock, the 12 V power cables got hot! And the CPU needed serious cooling.
Oh, by the way, I am running my 2990WX with an air cooler. I can even overclock it with an AIR cooler and it reaches 68°C at maximum. I really don't understand why you make these false claims.


sorry, I got a bit confused about core counts; looking at this https://www.cpubenchmark.net/compare/AMD-Ryzen-Threadripper-2990WX-vs-Intel-Core-i9-7980XE/3309vs3092 it would seem the 2990WX is a fairly poor chip in comparison (it's actually slower than the 2950X; can that be correct?)
That's because PassMark is rubbish.
Again, I would highly recommend reading or watching some reviews, especially the more technical ones.



the 7980 smashes that even though it's got half the physical cores,
Well, not anymore. See the image above. Besides, it doesn't have "half" the physical cores.



and just look at the yearly running costs! I would expect it's more than 5 times faster than the 4930 as it's close to the 7980, and my 7980 is 5 times the speed of my 5960, which is quite a bit faster than the 4930,
I don't know where to begin. So many things mixed up, and values that don't reflect reality.


and also AMD were in a rush to compete and made a very serious design flaw in the chip. As soon as you use more than 8 cores it cuts the memory bandwidth by 75%, as it basically has two 16-core processors, but the memory from the first processor has to go through the second. Even PC World etc. are reporting it, so it must be serious, as the story damages their own sales! For rendering the Intel is a much better choice; even in his tests above he was running the Intel below stock turbo,
That is not quite accurate.
This issue occurs when more than two dies are being used (more than 16 cores) in applications that require a lot of memory bandwidth.
It's not a design flaw. They had to implement something to distinguish the 2990WX from the 32-core Epyc CPUs.
And no, Intel is not a much better choice for rendering in general, because this problem doesn't affect rendering programs; they don't require lots of memory bandwidth.


like I said, my 7980 is 5 times as fast as my 5960
Not according to what you posted in the benchmark thread: 3.24 times faster when doing the math on your results.



I don't think you can rely on many of the online results to give you a fair picture
Really?
But PassMark can?
The many online reviews, be they written text or videos, are done by people who know a lot more about hardware and test methodology than either of us.
They do indeed give you a fair picture.



as most of these are done using test benchmarks
Oh, I see. So PassMark (the one source you keep posting) isn't a test benchmark?


or run at stock
Seriously?
So you think they should all use overclocked settings?
That would create a huge mess, as results would no longer be comparable.
Besides, many reviews have an OC section with results.



or use unrecommended hardware etc.; for example the 7980 has specific pins and features that are only available through Asus Rampage motherboards. Out of the box there's a considerable difference between that and whatever desk rig they might be using for tests; with recommended hardware it will run at 4.5 GHz turbo with no user tweaking on the standard XMP profile,
What are you talking about?
Most if not all modern motherboards have overclocking features; that is not something exclusive to the Asus Rampage boards.
Did you ever consider applying for a job at Principled Technologies?

rustythe1
11-13-2018, 12:09 PM
I am really looking forward to the release of Zen 2.
Some very promising features, like the enhancements made to the FP units and AVX performance.
Hopefully they will release a 64-core Threadripper. It would be an easy upgrade on the TR4 platform.


That is a rare occasion. It would be interesting to find out why this is the case: if it's the Windows scheduler, perhaps AVX-256 or some other code optimization that simply doesn't run that well on Ryzen.
LightWave 2015 was, after all, written before Ryzen was even on the horizon.


you obviously didn't read anything I wrote in context, here or in the other threads, and you know nothing of what I do or am involved in, and many of your statements would be incorrect too,

I stated I clocked my PC down to match the TR; at full turbo it was 12 mins, that's a hair under 5 times!

Twice as fast as the TR, yes, at least in the marbles test; 28 to 32 mins was what the first users had posted in the thread. Simple maths if I'm doing it in 16 to 12 mins.

PassMark: I simply posted that as an example, and my later comment saying you can't trust online examples included PassMark

PassMark clearly indicates with an "@", meaning "running at", when they run it at a lower stock speed,

problem occurs with 16 cores: no, again the stories state 8 cores, just Google it. I stated PC World were one of the sites releasing the news, so I didn't state it without evidence

AMDs are not reliable, Intel X/XE are reliable; it may be an unbiased statement from me, but it's based on working in I.T. and knowing failure rates and what industries around the country use, news stories etc., knowing people in high places, and the fact

Why do I need to watch reviews when I work with the actual hardware? I.e. lots of my statements are based on the facts of what is in front of me, not the web or guesswork, and I merely point to the web to make a point

keep posting PassMark? I think I only posted it once or twice, and I've posted Cinebench and others more

can't find the thread: I already stated it may have been on FB and I couldn't find it myself, but it's there somewhere,

AAAAAnnnndddd!, if you had read my posts correctly, I was in fact referring to the 2950X for the first part of the thread, which is the more equivalent hardware,

Lemon Wolf
11-13-2018, 01:10 PM
you obviously didn't read anything I wrote in context, here or in the other threads, and you know nothing of what I do or am involved in, and many of your statements would be incorrect too,
I am pretty sure I read everything in context. What do you mean, I don't know what you do or what you are involved with? I never claimed I did.



I stated I clocked my PC down to match the TR; at full turbo it was 12 mins, that's a hair under 5 times!
Then you should post a screenshot of that result. Present clear evidence. It's still not 5 times.



Twice as fast as the TR, yes, at least in the marbles test; 28 to 32 mins was what the first users had posted in the thread. Simple maths if I'm doing it in 16 to 12 mins.
I never doubted that. I don't understand why you ignore all the other statements I made.



PassMark: I simply posted that as an example, and my later comment saying you can't trust online examples included PassMark
No, you didn't say that about PassMark. You used PassMark as an example several times of how great the 7980XE performs and how terrible AMD CPUs are.



PassMark clearly indicates with an "@", meaning "running at", when they run it at a lower stock speed
Oh my goodness. Please have a look at the CPUs in their charts and look up their specs.
They list the stock speeds with the "@". The PassMark results are averages of all submitted user results.
Not all users run their systems at stock, which means the "@" speeds should be all over the place, but they are not, because PassMark uses the default stock speeds in its charts.
It's obvious; you just have to take a closer look.



problem occurs with 16 cores: no, again the stories state 8 cores, just Google it. I stated PC World were one of the sites releasing the news, so I didn't state it without evidence
I am sorry, but then you can't interpret their results correctly.

http://lemonwolf.de/16.png

As you can see, the problem occurs when using more than 16 cores, not 8. It may have a slight impact in a handful of apps between 9 and 15 cores, but the main problem starts when more than two dies are being used.
Two dies have direct memory access; the other two have to go through the dies with memory access and the Infinity Fabric. That's one of the reasons why AMD recently implemented Dynamic Local Mode in their Ryzen Master software. With Dynamic Local Mode it is possible to pin the threads of applications that don't use all the cores to the dies that have direct memory access, which greatly improves things like gaming performance and lightly threaded memory-intensive tasks. You knew that, right?
Besides, it is still largely unknown whether the memory bandwidth "issue" is really just that. Tests on Linux suggest that there is some issue with Windows itself, as performance is a lot better on Linux in the same tests.
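The pinning idea behind Dynamic Local Mode can be sketched in a few lines. This is a hypothetical, Linux-only sketch (the real feature lives in Ryzen Master on Windows, and the CPU numbers below are an assumption for illustration, not the 2990WX's actual topology):

```python
import os

# ASSUMPTION for illustration: logical CPUs 0-15 sit on the dies with
# directly attached memory controllers. On a real system, check the
# topology first (e.g. `lscpu -e` or hwloc) before hard-coding anything.
MEMORY_ATTACHED_CPUS = set(range(16)) & os.sched_getaffinity(0)

def pin_to_memory_dies(pid: int = 0) -> None:
    """Restrict a process (0 = the calling process) to the CPUs on dies
    with direct memory access. Linux-only (sched_setaffinity)."""
    os.sched_setaffinity(pid, MEMORY_ATTACHED_CPUS)

pin_to_memory_dies()
print(sorted(os.sched_getaffinity(0)))  # now a subset of CPUs 0-15
```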



AMDs are not reliable, Intel X/XE are reliable; it may be an unbiased statement from me, but it's based on working in I.T. and knowing failure rates and what industries around the country use, news stories etc., knowing people in high places, and the fact
This is not an unbiased statement. I am sorry to say it like that, but I am calling this BS.
It is a decades-old sentiment that somehow persists in some people's minds. Both companies manufacture products that are equally reliable.



Why do I need to watch reviews when I work with the actual hardware? I.e. lots of my statements are based on the facts of what is in front of me, not the web or guesswork, and I merely point to the web to make a point
I almost can't believe that. Some of your statements suggest that you are something of a novice when it comes to computers:
lots of bias, focusing on single sources, misinterpreting charts and sources, not knowing the difference between cores and threads.
If you know everything so well, why didn't you know that you do not need liquid cooling for Ryzen products? As mentioned, I can use air cooling on the 2990WX and I can still overclock it.
Surely you must have known that, given that you know the industry so well, or your contacts in high places, or "the fact".
I still say that you should apply for a job at Principled Technologies.
We had a similar discussion over at F3D quite a while ago. It didn't go any better than this one.



AAAAAnnnndddd!, if you had read my posts correctly, I was in fact referring to the 2950X for the first part of the thread, which is the more equivalent hardware,
That's something I knew and never questioned in my response. Why do you bring this up?
First you talked about 32 "cores" and then you mentioned the 2950X somewhere in that post. That's exactly what I mean:
you keep mixing up cores and threads, among other things.

Nicolas Jordan
11-13-2018, 01:25 PM
Mr Linus seemed pretty upset with Intel recently regarding the review of the i9-9980XE, if you watch this: https://www.youtube.com/watch?v=s1Ww2vNAjN0

jwiede
11-13-2018, 01:27 PM
keep posting PassMark? I think I only posted it once or twice, and I've posted Cinebench and others more

I'm leaving the rest of this alone, but as I posted above, there are Cinebench and William George's V-Ray Benchmark tests all showing the 2990WX beating the i9-7980XE at rendering in C4D and V-Ray, respectively. Regardless of the 2990WX's DP issue or memory interconnect limitations, the sheer number of cores/threads appears to give it a significant advantage over the i9-7980XE on rendering-type tasks.

OFF
11-14-2018, 12:09 AM
Corona Render Benchmark. Results are sorted for AMD 2990WX and i9-7980XE:

https://corona-renderer.com/benchmark/results/cpu/2990WX/all

https://corona-renderer.com/benchmark/results/cpu/7980XE

In sum: the best 10 results for AMD are between 33 s and 36 s, and the best results for the i9-7980XE are between 37 s and 40 s. It is almost the same.

Imageshoppe
11-14-2018, 07:14 AM
In sum: the best 10 results for AMD are between 33 s and 36 s, and the best results for the i9-7980XE are between 37 s and 40 s. It is almost the same.

I know nothing about Corona, but I did see this at the site... perhaps it offers some insight on the results? I'm not a chip wiz, so I don't know whether this explains it...

"Corona Renderer uses the Intel Embree ray tracing kernels, the fastest CPU ray tracing primitives on the market. Since they mesh well with the Corona architecture, they are an important factor in its performance."

And from the Embree site...

"The kernels are optimized for the latest IntelŪ processors with support for SSE, AVX, AVX2, and AVX-512 instructions."

jwiede
11-14-2018, 11:54 AM
Corona Render Benchmark. Results are sorted for AMD 2990WX and i9-7980XE:

https://corona-renderer.com/benchmark/results/cpu/2990WX/all

https://corona-renderer.com/benchmark/results/cpu/7980XE

In sum: the best 10 results for AMD are between 33 s and 36 s, and the best results for the i9-7980XE are between 37 s and 40 s. It is almost the same.

Agreed; there too, sheer core/thread count seems to (marginally) win out over the better DP and AVX-512 benefits of the i9-7980XE, but for all intents and purposes most of these benchmarks show very similar results for the 2990WX versus the i9-7980XE. Either CPU choice is likely fine for LW use, and the results will likely be quite similar to the other render benchmarks.

Other than waiting for some kind of LW2018-specific benchmark scene run against both CPUs, I'm not sure there's likely to be a "clearer" result available any time soon. Even if there were direct LW2015 comparison results between the two CPUs, the reality is that LW2018's behavior is different enough from LW2015's that LW2015 results are a "general guideline" at best, little different from the situation with Cinebench, V-Ray Benchmark, or the Corona results above.

Nicolas Jordan
11-14-2018, 08:48 PM
I should probably start a new thread for this. I finally got my 2990WX rig up and running and ran a quick test with the LW 2015 marbles benchmark scene to start with.

My i7-4930K at base clock, left to auto-boost up to 3.8 GHz as it pleased, renders it in 2h 7m 57s.

The 2990WX at base clock, left to auto-boost up to 3.4 GHz as it pleased, rendered it in 22m 21s.

The 2990WX is rendering just as fast as I expected it to with Lightwave.

Nicolas Jordan
11-15-2018, 03:26 PM
I loaded up a fairly heavy and complex LW 2018 production scene and tested VPR and F9 render. Everything seems super fast and super stable so far with the 2990WX! VPR uses all 64 threads and it's super fast to preview things compared to my 4930K. Even with draft mode turned off it's super fast. The only thing I noticed is that the 2990WX runs warmer than my old Intel CPU. I'm using a Noctua NH-U14S with one fan and it sits around 60°C when rendering, where the 4930K would usually be around 40-50°C, but that was water-cooled. I think I'm going to pick up a second fan for the Noctua and maybe see if upgrading my case fans would help any. If I can get it running a bit cooler at base clock I might try some overclocking eventually, but it's plenty fast already.

Imageshoppe
11-16-2018, 08:42 AM
The 2990WX is rendering just as fast as I expected it to with Lightwave.

Awesome Nicolas, thanks for letting us know!

jwiede
11-16-2018, 05:01 PM
The 2990WX at base clock, left to auto-boost up to 3.4 GHz as it pleased, rendered it in 22m 21s.

Yep, that's right about where I'd expect it to perform (a little tweaking would likely get it into the under-20m range). It further reinforces the position that sheer core/thread count is the strongest weighting factor for CPU 3D rendering (as other tests mentioned here previously were demonstrating as well).

Lemon Wolf
11-16-2018, 05:19 PM
Yep, that's right about where I'd expect it to perform (a little tweaking would likely get it into the under-20m range). It further reinforces the position that sheer core/thread count is the strongest weighting factor for CPU 3D rendering (as other tests mentioned here previously were demonstrating as well).
With a little tweaking I got it to slightly under 18 mins. I could certainly go below that, but I am not too comfortable overclocking it to 4 GHz.

Nicolas Jordan
11-16-2018, 07:07 PM
With a little tweaking I got it to slightly under 18 mins. I could certainly go below that, but I am not too comfortable overclocking it to 4 GHz.

What are you using for cooling the 2990?

Lemon Wolf
11-16-2018, 07:29 PM
What are you using for cooling the 2990?
I am also using the Noctua NH-U14S, but with two fans.
It's working pretty well with PBO (Precision Boost Overdrive) enabled.

Nicolas Jordan
11-16-2018, 09:11 PM
I am also using the Noctua NH-U14S, but with two fans.
It's working pretty well with PBO (Precision Boost Overdrive) enabled.

I think I'm going to add a 2nd fan to mine as well.

Nicolas Jordan
11-20-2018, 08:50 PM
I finally installed my second Noctua CPU fan and ran Cinebench R15, getting a score of 4924. My 2990WX seems to stay around 3.4 GHz most of the time on auto boost. We need some kind of benchmark scene for LW 2018.

RPSchmidt
11-21-2018, 08:58 AM
I think a rendering shoot-out using LW2018 and top of the line competitive CPUs would make a superb and informative blog post.

RPSchmidt
11-21-2018, 09:16 AM
Oh, and thanks for posting up the Linus Tech Tips video....

"Threadripper, thanks to it's lower pricing and higher core counts, continues to either compete valiantly or make Intel look downright stupid ... "

BODY BLOW! BODY BLOW!!

Just kicking myself that I got a 1950x instead of waiting for the 2990WX.

Ah well.

Nicolas Jordan
11-21-2018, 10:49 AM
Just kicking myself that I got a 1950x instead of waiting for the 2990WX.

Ah well.

Maybe next year I'll be kicking myself if AMD comes out with a 48-core/96-thread or even a 64-core/128-thread Threadripper CPU. I figure if they do, it will cost a bit more, and I can only spend so much on a new machine anyway, so I thought the 2990WX was probably a good time to jump in. I was very close to getting a 1950X last year myself but decided my current machine would probably last one more year and then I would see what was available.

I'm assuming next year we will likely see a much improved version of the 2990WX at the very least, so maybe the 2990WX will come down a bit in price after that.

jwiede
11-21-2018, 12:28 PM
Maybe next year I'll be kicking myself if AMD comes out with a 48-core/96-thread or even a 64-core/128-thread Threadripper CPU.

Make sure you got a good deal out of what was available at the time of purchase, but beyond that, there's just no value in worrying about it. There will always be faster/bigger/better coming out; it's basically guaranteed to happen eventually, so worrying about that is like worrying that tomorrow might be a better day.

Personally, I'm just glad to finally see CPU/MB-combos available at non-ridiculous prices offering more cores/threads than my ancient 2012 MacPro5,1. For much of the work I do, sheer core/thread count matters most. It's nice to finally be able to buy single systems with more available cores/threads without having to spend many thousands on just the CPU & MB.

rustythe1
11-22-2018, 11:28 AM
Maybe next year I'll be kicking myself if AMD comes out with a 48-core/96-thread or even a 64-core/128-thread Threadripper CPU. I figure if they do, it will cost a bit more, and I can only spend so much on a new machine anyway, so I thought the 2990WX was probably a good time to jump in. I was very close to getting a 1950X last year myself but decided my current machine would probably last one more year and then I would see what was available.

I'm assuming next year we will likely see a much improved version of the 2990WX at the very least, so maybe the 2990WX will come down a bit in price after that.

problem is, once you go past 64 threads, most software including LW (as far as I last knew) can't use any more than 64 threads (look through the marbles thread for some of the dual-Xeon 72-core setups), so it won't help for now. Although that may be more to do with dual processors versus a single one, is this something that's affecting the 2990's speed in some software like LightWave? Like I said in the marbles thread, I'm rendering it in 11 to 12 mins (16 without turbo); that's almost twice as fast with almost half the thread count. Can you monitor all threads to see if they are all running at 100%?
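The 64-thread ceiling mentioned above most likely comes from Windows processor groups: an affinity mask is one 64-bit word, so Windows usually packs logical processors into groups of at most 64, and an application that isn't processor-group-aware gets scheduled within a single group. A rough sketch of that arithmetic (the function name is illustrative, and it ignores cases where Windows splits groups along NUMA boundaries):

```python
# Windows packs logical processors into "processor groups" of at most 64,
# because a KAFFINITY mask is one 64-bit word. A process that is not
# group-aware is scheduled within a single group, so it tops out at 64
# threads regardless of how many logical CPUs the machine has.
GROUP_SIZE = 64

def group_layout(logical_cpus: int) -> tuple[int, int]:
    """Return (number of groups, threads usable by a group-unaware app)."""
    groups = -(-logical_cpus // GROUP_SIZE)  # ceiling division
    return groups, min(logical_cpus, GROUP_SIZE)

print(group_layout(64))  # 2990WX: (1, 64) - one group, all threads usable
print(group_layout(72))  # dual-Xeon 72-thread box: (2, 64) - capped at 64
```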

Nicolas Jordan
11-22-2018, 01:05 PM
problem is, once you go past 64 threads, most software including LW (as far as I last knew) can't use any more than 64 threads (look through the marbles thread for some of the dual-Xeon 72-core setups), so it won't help for now. Although that may be more to do with dual processors versus a single one, is this something that's affecting the 2990's speed in some software like LightWave? Like I said in the marbles thread, I'm rendering it in 11 to 12 mins (16 without turbo); that's almost twice as fast with almost half the thread count. Can you monitor all threads to see if they are all running at 100%?

All threads run at 100% on the marbles scene, except for dropping for a second when transitioning between AA passes, but it also did that on my old machine. I do get the speeds that I expected from it, though, and I haven't seen any noticeable slowdowns or bottlenecks in anything I have rendered yet.

rustythe1
11-22-2018, 01:18 PM
Don't get me wrong, it's still crazy fast for a single processor compared to a stupidly priced dual Xeon that could set you back 10-20 grand, and for the most part it beats the Intel, but it's odd that there are a handful of engines that seem to suffer,

jwiede
11-22-2018, 02:21 PM
Don't get me wrong, it's still crazy fast for a single processor compared to a stupidly priced dual Xeon that could set you back 10-20 grand, and for the most part it beats the Intel, but it's odd that there are a handful of engines that seem to suffer,

Perhaps it's a bit more accurate to say that the (Intel) Embree kernels' adaptive use of wider AVX instructions allows the i9-7980XE to slightly offset the typical core-/thread-count dominance in rendering stats. Were it not for that offset, the relative results would likely resemble prior render stats where core-/thread-count was clearly the most dominant factor (which appears to stem from similar architectural structure across most CPU render engines). Complicating matters even further, while Embree kernels are widely used in render engines these days, the adaptive benefit of wider AVX only occurs for engines with relatively recent versions of the Embree kernels.

Put another way, it isn't so surprising that the 2990WX is doing great; high core-/thread-count CPUs generally dominate render benchmarks. It _is_ a little surprising that the i9-7980XE is doing as well as it is, likely due to advanced AVX (and its adaptive use in recent Embree kernels) somewhat compensating for the lower core-/thread-count.