
Thread: Intel vs AMD - Benchmark 2019

  1. #91
    Super Member
    Join Date
    Aug 2016
    Location
    a place
    Posts
    2,484
    2 min 25.7 s (145.7 seconds)

    Dual Xeon E5-2670 v3 @ 2.3 GHz
    64 GB RAM
    GTX Titan X
    Windows 10 Pro x64
    Last edited by gar26lw; 12-12-2019 at 02:31 AM.

  2. #92
    Super Member
    Join Date
    Aug 2016
    Location
    a place
    Posts
    2,484
    10 min 19.9 s (619.9 seconds)

    i7-6700 @ 4.0 GHz
    32 GB RAM
    RTX 2080
    Windows 10 Pro x64

  3. #93
    Mine with Intel i9-7980XE no OC
    [Attachment: Pixym_Screen.jpg]
    Last edited by pixym; 12-12-2019 at 09:04 AM.
    Eddy Marillat - PIXYM
    WS : MB ASUS X299-Pro/SE - i9 7980XE 2,6ghz 18C/36T - 32GB Ram- 2 GTX 1080 TI - Win 10 64
    GPU Net Rig : MB Biostar - Celeron 2,8 ghz 2C/2T - 8GB Ram - 2 RTX 2080 TI - Win 7 64

  4. #94
    Super Member
    Join Date
    Aug 2016
    Location
    a place
    Posts
    2,484
    CPU Speed in Seconds

    AMD 3970X 48.7
    AMD 3960X 75
    AMD Ryzen 9 3950X default settings 103
    AMD Ryzen 9 3950X tweaked settings 97
    AMD Threadripper 1950x 145.7
    AMD Threadripper 2990WX 80

    Intel i9-7980XE 111
    Intel i9 9900KF 213
    Intel i7-6950X 207
    Intel i9 9820X 177
    Intel i9 7890xe 85.8
    Intel Dual Xeon E5-2670 v3 @ 2.3Ghz 145.7
    Intel i7-6700 @ 4.0Ghz 619.9
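    For anyone who wants to line these up, here is a quick Python sketch that sorts the times and shows how much faster each entry is than the slowest result posted so far (the numbers are copied straight from the list above, labels as posted):

```python
# Posted render times in seconds (lower is better), copied from the list above.
times = {
    "AMD 3970X": 48.7,
    "AMD 3960X": 75.0,
    "AMD Ryzen 9 3950X (default)": 103.0,
    "AMD Ryzen 9 3950X (tweaked)": 97.0,
    "AMD Threadripper 1950X": 145.7,
    "AMD Threadripper 2990WX": 80.0,
    "Intel i9-7980XE": 111.0,
    "Intel i9-9900KF": 213.0,
    "Intel i7-6950X": 207.0,
    "Intel i9-9820X": 177.0,
    "Intel i9 7890xe": 85.8,            # corrected to 7980XE in post #95
    "Intel Dual Xeon E5-2670 v3": 145.7,
    "Intel i7-6700": 619.9,
}

slowest = max(times.values())
for cpu, seconds in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{cpu:30s} {seconds:6.1f} s   {slowest / seconds:4.1f}x faster than the slowest")
```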

  5. #95
    www.Digitawn.co.uk rustythe1
    Join Date
    Feb 2006
    Location
    england
    Posts
    1,430
    Quote Originally Posted by gar26lw View Post
    CPU Speed in Seconds

    AMD 3970X 48.7
    AMD 3960X 75
    AMD Ryzen 9 3950X default settings 103
    AMD Ryzen 9 3950X tweaked settings 97
    AMD Threadripper 1950x 145.7
    AMD Threadripper 2990WX 80

    Intel i9-7980XE 111
    Intel i9 9900KF 213
    Intel i7-6950X 207
    Intel i9 9820X 177
    Intel i9 7890xe 85.8
    Intel Dual Xeon E5-2670 v3 @ 2.3Ghz 145.7
    Intel i7-6700 @ 4.0Ghz 619.9
    Sorry, I mistyped mine; it's a 7980XE, not a 7890XE.
    Intel i9 7980xe, Asus Rampage Vi extreme, 2x NVIDIA GTX1070ti, 64GB DDR4 3200 corsair vengeance,
    http://digitawn.co.uk https://www.shapeways.com/shops/digi...ction=Cars&s=0

  6. #96
    Registered User
    Join Date
    Dec 2008
    Location
    Ghana West Africa
    Posts
    869
    Quote Originally Posted by gar26lw View Post
    CPU Speed in Seconds

    AMD 3970X 48.7
    AMD 3960X 75
    AMD Ryzen 9 3950X default settings 103
    AMD Ryzen 9 3950X tweaked settings 97
    AMD Threadripper 1950x 145.7
    AMD Threadripper 2990WX 80

    Intel i9-7980XE 111
    Intel i9 9900KF 213
    Intel i7-6950X 207
    Intel i9 9820X 177
    Intel i9 7890xe 85.8
    Intel Dual Xeon E5-2670 v3 @ 2.3Ghz 145.7
    Intel i7-6700 @ 4.0Ghz 619.9
    The new Threadrippers are killing it.
    Even the dual Xeons can't keep up.
    It would be interesting to see how the 64-core dual Epycs fare on this bench.
    They'd probably do it in less than 10 seconds.
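    A rough back-of-the-envelope check of that guess, assuming perfect core scaling from the 32-core 3970X result above (optimistic, since clocks and memory bandwidth don't scale like that):

```python
# Naive ideal-scaling estimate from the 32-core 3970X time to a dual 64-core Epyc box.
time_3970x = 48.7            # seconds, from the benchmark list (32-core 3970X)
cores_3970x = 32
cores_dual_epyc = 2 * 64     # two 64-core Epycs

estimate = time_3970x * cores_3970x / cores_dual_epyc
print(f"Ideal-scaling estimate: {estimate:.1f} s")   # ~12.2 s, so "under 10 s" is optimistic
```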

  7. #97
    Super Member vncnt
    Join Date
    Sep 2003
    Location
    Amsterdam
    Posts
    1,779
    Does this mean we don't really need GPU rendering?

  8. #98
    Registered User
    Join Date
    Dec 2008
    Location
    Ghana West Africa
    Posts
    869
    Quote Originally Posted by rustythe1 View Post
    Sorry, I mistyped mine; it's a 7980XE, not a 7890XE.
    There's a 25-second difference between your 7980XE and the other 7980XE.
    That's some pretty good overclocking you are running.
    What's your cooling solution and how stable is it?

    A mate of mine is looking to dispose of his old 7980XE.
    I might get to pick it up for cheap.
    Last edited by Hail; 12-14-2019 at 04:11 AM.

  9. #99
    Registered User
    Join Date
    Dec 2008
    Location
    Ghana West Africa
    Posts
    869
    Quote Originally Posted by vncnt View Post
    Does this mean we don't really need GPU rendering?
    Not quite yet, but maybe when we start seeing 500+ threaded mainstream CPUs, CPU renderers will start to give their GPU counterparts a good run.
    And at this rate, it won't be long before that happens.
    The next top-end Threadripper will probably sport 128 cores with 256 threads or more, and dual Epycs are going to come with 256 cores and 512 threads!
    Also, AMD is said to be working on 4-threads-per-core CPU technology; if that goes well, it's going to be a massive game changer for threaded workloads.
    Last edited by Hail; 12-14-2019 at 05:45 AM.

  10. #100
    Super Member vncnt
    Join Date
    Sep 2003
    Location
    Amsterdam
    Posts
    1,779
    Seems like Newtek should not invest resources into a GPU solution.

    A bridge to Blender would be logical for users who want to experiment or want to utilize their current hardware right now.
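    For users who want to try that route today, here is a minimal sketch of what the Blender side of such a bridge could look like: export the scene from LightWave to a format Blender imports (FBX/OBJ/Alembic), assemble it into a .blend file, then render it headless with Cycles from the command line. The file paths below are placeholder assumptions; only standard Blender command-line flags are used.

```python
import subprocess

# Placeholder paths; the LightWave export / Blender import step happens beforehand.
BLEND_FILE = "exported_scene.blend"   # scene already assembled in Blender
OUTPUT = "//renders/frame_"           # Blender-relative output prefix

# Render frame 1 with Cycles in background (headless) mode.
subprocess.run([
    "blender", "-b", BLEND_FILE,      # -b: run without the UI
    "-E", "CYCLES",                   # select the Cycles render engine
    "-o", OUTPUT,                     # output path prefix
    "-F", "PNG",                      # output file format
    "-f", "1",                        # frame number to render
], check=True)
```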

  11. #101
    www.Digitawn.co.uk rustythe1
    Join Date
    Feb 2006
    Location
    england
    Posts
    1,430
    Quote Originally Posted by Hail View Post
    There's a 25 second difference between your 7980xe and the other 7980xe.
    That's some pretty good overclocking you are running.
    What's your cooling solution and how stable is it?

    A mate of mine is looking to dispose off his old 7980xe.
    I might get to pick it up for cheap.
    As argued in another thread (I don't want to start that again), it's not a full overclock. I actually have the Asus AI overclock turned off; I'm using the Asus boost (Asus's version of turbo that allows all cores to turbo the same), so it's only running at around 4.1 to 4.2 GHz. It will run to 4.5 when using AI overclock.
    The boost part is very stable, and with the Corsair HX cooler it only gets up to about 60 deg with prolonged use; it won't even get hot in silent mode. The boost is running with the default profile that came with the Rampage motherboard, so I didn't tweak the settings myself; it works like this out of the box.
    Intel i9 7980xe, Asus Rampage Vi extreme, 2x NVIDIA GTX1070ti, 64GB DDR4 3200 corsair vengeance,
    http://digitawn.co.uk https://www.shapeways.com/shops/digi...ction=Cars&s=0

  12. #102

    Quote Originally Posted by vncnt View Post
    Seems like Newtek should not invest resources into a GPU solution.
    as of now, they definitely should add GPU render support, as long as it doesn't take 4ever to code.
    LW vidz   DPont donate   LW Facebook   IKBooster   My vidz

  13. #103
    Super Member vncnt
    Join Date
    Sep 2003
    Location
    Amsterdam
    Posts
    1,779
    Quote Originally Posted by erikals View Post
    as of now, they definitely should add GPU render support, as long as it doesn't take 4ever to code.
    Option A.
    I have the impression that GPU rendering still hasn't reached the point of stable results for (industry standard) large scenes that have to fit into limited GPU memory, even if you pay a lot for it.
    Even then, this expensive strategy is not feasible on a limited budget.
    If you can afford the time and/or money to buy an external renderer, you could just start using Blender or Maya, or send your scene files to a commercial renderfarm.

    Option B.
    If AMD continues to build on the Threadripper success by adding core counts that rival GPU designs, at some point in the future it won't matter whether you choose the CPU architecture or the GPU architecture. Realistically that point is at least 5 years from now; ten years from now it could be a different story.
    Buy Threadrippers at some point in the next 5 years to replace your desktop PC or build a render farm, and 70% of Lightwave users could survive.
    In this case we should invest in new hardware and Newtek could just wait.

    Option C.
    If Newtek still needs to start from scratch with GPU raytracing (we simply don't know that yet), their first usable implementation could be 5 to 10 years away.
    With this strategy we might end up with a GPU renderer that merely rivals a CPU renderer.
    For users it's nice to have a choice, but this choice doesn't make a lot of sense production-wise.
    Why build something for 5-10 years if it ends up only equally fast?
    Not to mention market acceptance and backward/side/forward compatibility. Will your beautiful software run on the hardware that we all use in 10 years' time? The CPU has a better track record in this regard. Does your GPU card still support NVIDIA stereo 3D? Does your NLE utilize all available GPU acceleration techniques in your hardware?

    Option D.
    If Newtek still needs to start from scratch with implementing NVIDIA technology (we already have the NVIDIA denoiser), maybe a basic system could be delivered in 1 to 3 years.
    In that case Newtek should probably support every modification that nvidia releases.
    I would be very surprised if this is going to happen. Too much competition. Not enough profit.
    Is Lightwave already 100 % current with the latest developments in Bullet dynamics?

    Option E.
    If Newtek still needs to start from scratch to build a uni-directional bridge between Lightwave and Blender, it might be a year from now before we're able to test it.
    The Open Source community has done a lot of work, and they continue to do so with a number of inexpensive resources that you cannot compete with.

    What should we wish for?
    I have the impression that Lightwave users typically run small to mid-sized companies, or are on their own.
    At least for the next 10 years, many of them could use (or may want to experiment with) GPU raytracing, especially if you're dealing with Global Illumination, realistic Materials, and realistic Lights. Let's not forget 4K/8K HDR frame sizes and the extra rendered frames caused by HFR.

    I can only hope for option D or E.

    Option E might be cheaper and the most clever choice.
    It enables Newtek to spend resources on other parts of the software, lets them learn from the mistakes that the Open Source community makes, and means they don't need to alienate their users for the time being; within 10 years the entire CPU vs GPU discussion could be over.

    If Newtek wants to guide users to stay away from Blender, it might be option D.

  14. #104
    Registered User
    Join Date
    Jun 2014
    Location
    Right here
    Posts
    1,905
    @vncnt

    Option A:

    Tiny GPU memory? That was in the past. There is out-of-core memory (geometry/textures), NVLink and proxies.

    Option B:

    So far, rendering on a Threadripper CPU is still way slower than an up-to-date multi-GPU rig with a fast GPU renderer (not Cycles). However, CPU rendering also has its advantages. To get more processing units / cores, the manufacturing process needs to shrink to even smaller sizes, but the same applies to GPUs: they keep getting faster, and renderers are only just starting to take advantage of RTX support.

    Option C + D:

    Yes, I agree, it could take NewTek a long time to develop a GPU solution. Maybe NVIDIA OptiX could help reduce that effort. However, there's another problem: they would need to develop a solution for macOS too (e.g. Metal). Not realistic with their limited skills and resources, imho. I use several render engines, some in beta form, and I can see how much it takes even experts to deliver a good and stable engine.

    Option E:

    Yes, that could be the best option in my opinion too. However, there are three issues I see:

    - It took them about 3 years to develop the current LW render engine. Future LW feature improvements also need to be supported in that engine, which requires development effort too. If they focus fully on a Cycles bridge, were the last couple of years a waste of time?
    - Cycles is a beautiful and versatile engine, however it's not a fast GPU renderer (though it is hybrid and multi-platform).
    - Blender is interesting (especially) because of Eevee. Future Blender development could focus more on that.

    Other Options:

    - NewTek / VizRT focuses on / relies on UE for RT graphics (would make sense with its new owner)
    - AMD ProRender 2.0. While the previous version was not that amazing, I have heard good things about version 2.

    Overall I must say that, with the speed and quality of Blender development and other strong applications, LightWave does not have an easy position now (less than ever) or in the future, so VizRT had better come up with a good strategy. Improving Modeler would be the worst choice in my opinion.

    Edit: I'm not against having a good CPU engine, and NewTek did a good job. CPU engines are more versatile and scale with additional nodes / a server farm / the cloud much better than GPU. And the new Ryzen 9 chips can mitigate the slow rendering quite a bit.

    However, by now the LW2020 code might already be finalized, and architectural / feature decisions must have been made long ago. Our discussion is pointless for LW development anyway.
    Last edited by Marander; 12-16-2019 at 10:46 AM.

  15. #105
    Super Member kolby
    Join Date
    Aug 2004
    Location
    CZ, EU
    Posts
    433
    I don't think Newtek should develop its own GPU renderer. It would take years, and I think it's a waste of time; I would prefer they spend that time optimizing their CPU renderer. If anyone wants a GPU renderer, there's Octane: years in development and perfectly integrated into LW through its plugin.
    System info: CPU: Ryzen 9 3950X RAM: 32GB DDR4/3200MHz MB: Asus Prime X570-P GFX: Asus GTX 750 Ti / 2GB OS: Win10, LW2019x64

