i7 4930k is it worth it for LW?



yoshiii
02-27-2014, 07:31 AM
Hello

I am deciding between a FX 8350 and a i7 4770k to use for LW and to probably game with too.

But I am thinking about buying the i7 4930K if I can hold out until next month. I need a new computer now but can't decide between the 8350 and the i7 4770K. So maybe I should just get the i7 4930K.

How much more expensive will this be with the motherboard to go along with it? Is it worth it? Or should I stick with one of my other choices?

3DGFXStudios
02-27-2014, 07:37 AM
Buy the 4930k. It's the fastest CPU you can get for the least amount of money.

http://www.cpubenchmark.net/high_end_cpus.html

Waves of light
02-27-2014, 09:43 AM
Buy the 4930k... it runs LW beautifully. VPR is very quick. Check out this thread for benchmarks and different setups using the 4930k:

http://forums.newtek.com/showthread.php?133251-11-5-s-BenchmarkMarbles-lws-share-your-machine-s-render-time-here

ianr
02-27-2014, 09:47 AM
Yep yep
4930K all the way, and watercool it if you want to overclock it.

Look up posts in this forum for a suitable NVidia card too.

Waves of light
02-27-2014, 10:06 AM
If you are not going the GPU render route, then don't go overboard on the graphics card... it won't make that much difference in LW. In fact, some of the GTX 580 series cards outperform the 600 series cards.

hrgiger
02-27-2014, 10:29 AM
It's what I went with. Definitely worth the investment.

AbnRanger
02-27-2014, 01:50 PM
If you are not going the GPU render route, then don't go overboard on the graphics card... it won't make that much difference in LW. In fact, some of the GTX 580 series cards outperform the 600 series cards.

Yeah... the GTX 580 STILL outperforms all but the $1k Titan, because the Fermi architecture is much more stout in terms of CUDA performance than the Kepler series. The Kepler cards (GTX 6xx-770) also have a much smaller memory bus. I can't figure out why NVidia thought it was a good idea to take a big step backwards in both of those departments.

yoshiii
02-27-2014, 05:20 PM
Yeah... the GTX 580 STILL outperforms all but the $1k Titan, because the Fermi architecture is much more stout in terms of CUDA performance than the Kepler series. The Kepler cards (GTX 6xx-770) also have a much smaller memory bus. I can't figure out why NVidia thought it was a good idea to take a big step backwards in both of those departments.

Does it matter if it is 1.5GB or 3GB?

JonW
02-27-2014, 05:23 PM
I don't know how much a whole box setup is for you. But with all things the same other than the CPU, let's say $2000 with a 4770K ($340) and $2240 with a 4930K ($580). That's a $240 difference for the dearer CPU.

$2000 / (4 cores x 3.5 GHz = 14 GHz) = $142.86 per GHz
$2240 / (6 cores x 3.4 GHz = 20.4 GHz) = $109.80 per GHz

The dearer CPU is in effect cheaper, as your renders will be done in approximately 69% of the time (14 GHz vs 20.4 GHz), and the whole-box cost per GHz drops to about 77%. The time saved will pay for the dearer CPU on the first job.

Just looking at the cost of a CPU only is not the way to price a computer.
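
As a rough sketch of that arithmetic in Python (the $2000 and $2240 whole-box prices are just the example figures above, not real quotes; swap in your own):

# Rough value comparison: whole-box cost per aggregate GHz.
# The prices are the example figures from the post above, not real quotes.

def cost_per_ghz(box_cost, cores, ghz):
    """Whole-box cost divided by total aggregate clock (cores x GHz)."""
    total_ghz = cores * ghz
    return box_cost / total_ghz, total_ghz

i7_4770k = cost_per_ghz(2000, cores=4, ghz=3.5)   # ~$142.86/GHz over 14.0 GHz
i7_4930k = cost_per_ghz(2240, cores=6, ghz=3.4)   # ~$109.80/GHz over 20.4 GHz

print(f"4770K box: ${i7_4770k[0]:.2f}/GHz ({i7_4770k[1]:.1f} GHz total)")
print(f"4930K box: ${i7_4930k[0]:.2f}/GHz ({i7_4930k[1]:.1f} GHz total)")

# If render time scales with aggregate GHz, the 4930K box finishes in
# roughly 14 / 20.4 = 69% of the time.
print(f"Relative render time: {i7_4770k[1] / i7_4930k[1]:.0%}")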

yoshiii
02-27-2014, 05:38 PM
I don't know how much a whole box setup is for you. But with all things the same other than the CPU, let's say $2000 with a 4770K ($340) and $2240 with a 4930K ($580). That's a $240 difference for the dearer CPU.

$2000 / (4 cores x 3.5 GHz = 14 GHz) = $142.86 per GHz
$2240 / (6 cores x 3.4 GHz = 20.4 GHz) = $109.80 per GHz

The dearer CPU is in effect cheaper, as your renders will be done in approximately 69% of the time (14 GHz vs 20.4 GHz), and the whole-box cost per GHz drops to about 77%. The time saved will pay for the dearer CPU on the first job.

Just looking at the cost of a CPU only is not the way to price a computer.

So the 4930K is a better value? I am looking at the FX 8350, but I feel that the i7 4770K is not worth the extra 130 dollars for the performance that it will give.
I am willing to save up longer and get the i7 4930K.

What's your opinion on that?

JonW
02-27-2014, 05:46 PM
Add up the cost of each whole box setup including the OS, then divide that by the total GHz to get the cost per GHz.

Better still, look at the frame rate (as not all GHz are equal) in the benchmark marbles thread and divide the frame rate by the whole box setup cost.
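
A quick sketch of that comparison in Python; the render times and box costs below are made-up placeholders, not figures from the marbles thread:

# Compare whole boxes on benchmark speed per dollar rather than GHz per dollar.
# The times and prices here are placeholders; plug in real figures from the
# Benchmark Marbles thread and your own build quotes.

def frames_per_hour_per_dollar(box_cost, seconds_per_frame):
    """Benchmark throughput (frames/hour) normalized by whole-box cost."""
    frames_per_hour = 3600.0 / seconds_per_frame
    return frames_per_hour / box_cost

fx8350_box = frames_per_hour_per_dollar(box_cost=1500, seconds_per_frame=300)
i4930k_box = frames_per_hour_per_dollar(box_cost=2240, seconds_per_frame=180)

print(f"FX 8350 box: {fx8350_box:.4f} frames/hour per dollar")
print(f"4930K box:   {i4930k_box:.4f} frames/hour per dollar")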

JonW
02-27-2014, 05:53 PM
not worth the extra 130 dollars for the performance that it will give.

This is why 5 years ago I bought 2 x W5580 CPUs. They were about $2000 each ($4000 for the two CPUs), but the whole box setup was very reasonable. It was so fast that even today, 5 years later, I am happy with the performance for the work I am doing at the moment. So in effect it has been the cheapest computer I have owned.

To me, putting off the fiddling around involved in upgrading has a very high value.

yoshiii
02-27-2014, 06:16 PM
Add up the cost of each whole box setup including the OS, then divide that by the total GHz to get the cost per GHz.

Better still, look at the frame rate (as not all GHz are equal) in the benchmark marbles thread and divide the frame rate by the whole box setup cost.

I tried finding the benchmarks, but there are so many pages of info.

JonW
02-27-2014, 08:31 PM
Post #96 has an 8350

Post #336 has a 4930

AbnRanger
02-28-2014, 12:56 AM
Here is a good comparison between the 8350 and i7 4930:
http://cpuboss.com/cpus/Intel-Core-i7-4930K-vs-AMD-FX-8350

As I have mentioned in another thread, there are a number of apps that benefit greatly, in everyday operation, from having an Intel CPU as opposed to an AMD one. My main workstation has an i7 970 (6 cores/12 threads), and I can see a world of difference just in 3D Coat alone. The main reason is that the app is multi-threaded using Intel's TBB (Thread Building Blocks) library... and obviously it is going to be heavily optimized for Intel CPUs. I previously used an AMD Phenom X6, and head to head with an i7 950 (4 cores/8 threads), there was a very noticeable improvement.

I don't know what NewTek uses for its multi-threading, but if it's Intel's libraries, there will be a distinct difference... regardless of some of the other rendering benchmarks. I'm not an Intel fanboy at all. I just buy whatever I think is the best solution within my budget. And for the past 2-3 years it's been Intel. I wish AMD would compete harder... but they don't spend the big development $$$ on streaming technology like NVidia does (CUDA), nor on thread-building (TBB) or raytracing (Embree) technologies like Intel does.

That's why I don't see myself buying anything else of theirs for some time. As for the GTX 580... 3GB will make a major difference if you want to use the GPU to render anything with it (Octane, Thea, Arion, FurryBall, Blender Cycles, etc.). I'm sure it also matters when working with large texture map sizes in apps like 3D Coat. Mudbox relies heavily on the GPU, and VRAM would be an important consideration if you use that app.

Waves of light
02-28-2014, 01:22 AM
Does it matter if it is 1.5GB or 3GB?

Only if you are using the card to do GPU rendering (like Octane). It will make no difference in LW, as LW is a CPU renderer.

@AbnRanger: Yep, you're the reason I bought a second-hand 580 off eBay, because of your expert views over on the 3DC forums. Cheers.

AbnRanger
02-28-2014, 02:10 AM
Only if you are using the card to do GPU rendering (like Octane). It will make no difference in LW, as LW is a CPU renderer.

@AbnRanger: Yep, you're the reason I bought a second-hand 580 off eBay, because of your expert views over on the 3DC forums. Cheers.

Thanks...so, are you using the Octane Plugin for LW?

Waves of light
02-28-2014, 02:18 AM
Thanks...so, are you using the Octane Plugin for LW?

No mate. Haven't made that jump yet, but I'm seeing some wonderful renders and very fast render times from other artists on here.