Intel Xeon Phi unveiled.



Ernest
11-12-2012, 09:20 PM
http://techreport.com/news/23884/intel-joins-the-data-parallel-computing-fraternity-with-xeon-phi


"Intel formally unveils its first Xeon Phi offerings. As you may recall, Xeon Phi is the brand name given to products based on Knight's Corner, the chip that evolved from the prior Knight's Ferry project, which itself was derived from Larrabee, Intel's aborted attempt at producing a graphics processor."

First details on what it finally turned out to be.

silviotoledo
11-13-2012, 10:06 AM
Available on 28 January 2013!

Sanchon
11-19-2012, 04:00 AM
As far as I know, LightWave's code would need a little rewriting and optimising to support this coprocessor unit - http://www.wykop.pl/ramka/1323243/xeon-phi-koprocesor-infografika/

silviotoledo
11-19-2012, 07:21 AM
Rewritten? So it sounds like this is not going to work with LightWave, then.

erikals
11-19-2012, 08:07 AM
No, the other way around...

I'm curious just what speed improvements it will bring, and at what price...

Sanchon
11-19-2012, 12:31 PM
$2650 and $2000 - http://news.softpedia.com/news/Intel-Xeon-Phi-Coprocessors-for-Supercomputers-Now-Shipping-306754.shtml. I read before that software must support this coprocessor to take advantage of it.

silviotoledo
11-19-2012, 07:00 PM
A bit more expensive than GPUs, and less powerful. I'm happy Octane is bringing GPU power to LightWave.

dsol
11-20-2012, 10:03 AM
I'm assuming it's going to support OpenCL and DirectCompute? Good luck to them if they expect to get software vendors to adopt a proprietary API at this stage for what is clearly an uber-niche product.

dsol
11-20-2012, 10:10 AM
Ah - this is interesting. Does it support QuickPath Interconnect? If so, that could make it a VERY interesting prospect for dedicated HPC systems. It would mean you could have a dual-socket Xeon system with the Phi connected directly to the main Xeon CPU, for fast access to main system RAM. This is what AMD had planned to offer with HyperTransport (a standard socket that could host different types of co-processors) but never followed through on. If Intel manages it, it will definitely be worth watching.

Wonder if Apple are thinking of using a Xeon/Phi combo in the 2013 Mac Pro? It's another reason why it might have been delayed.

allabulle
11-20-2012, 04:19 PM
A bit more expensive than GPUs, and less powerful. (...)

Why do you say that? Expensive or not, I'd like to know how powerful this toy is and, in particular, doing what. So why do you say it's less powerful, if you care to explain?

dsol
11-20-2012, 05:39 PM
Even Intel's own benchmarks only have it around 2-3 times as fast as a Xeon CPU - and that's with specific code examples. I'm not saying it's not worth examining, but let's not get carried away here. It's not revolutionary.

JonW
11-20-2012, 11:59 PM
It's going to be a bit slow with single-core programs - Layout & Modeler, for example.

dsol
11-21-2012, 10:12 AM
Layout isn't single-core? (I assume you mean "not multithreaded".) The most important parts (like the renderer - including VPR) scale up nicely with more cores.
Has anyone heard anything about Xeon Phi's memory model - can it share system RAM with the main CPU? From what I've read so far, the PCI-E card version is pretty much a self-contained system on a board, running its own OS (Linux). A bit like the original LW ScreamerNet cards were.

Ernest
11-21-2012, 12:24 PM
I take everything Charlie writes with a salt mine's worth of salt, but here's an article about how easy it is to convert code to be Phi-compatible, compared to, say, CUDA. If the code is already multithreaded, it seems to be a matter of adding a couple of lines and recompiling.
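For illustration, here's a minimal sketch of what that "couple of lines" claim looks like, assuming the Intel compiler's offload pragmas (the array names and sizes are made up for the example). The in/out clauses also touch on the memory-model question above: the card has its own on-board RAM, so inputs and results get copied over PCIe.

/* A minimal sketch, assuming the Intel C compiler's offload support
   (built with something like: icc -openmp example.c). An existing
   OpenMP loop is sent to the coprocessor by adding one pragma;
   the loop body itself is unchanged. */
#include <stdio.h>

#define N 1000000
static float a[N], b[N], c[N];

int main(void)
{
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* The added lines: offload the parallel loop to the first Phi card.
       in/out copy the arrays to and from the card's own memory. */
    #pragma offload target(mic:0) in(a, b) out(c)
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}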


Even Intel's own benchmarks only have it around 2-3 times as fast as a Xeon CPU - and that's with specific code examples. I'm not saying it's not worth examining, but let's not get carried away here. It's not revolutionary.
I'm pretty sure the benchmark said it was 2-3 times as fast as a dual Xeon workstation; not a Xeon CPU.


I'm assuming it's going to support OpenCL and DirectCompute? Good luck to them if they expect to get software vendors to adopt a proprietary API at this stage for what is clearly an uber-niche product.
It's not OpenCL/DC and it's not proprietary; it's x86 (with vector extensions).
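To make the x86 point concrete: in principle the same plain OpenMP source builds for the host or natively for the card by switching a compiler flag. A sketch below - the -mmic flag and the file name are assumptions based on Intel's toolchain, not something from the thread.

/* A sketch of the "it's just x86" point: no new API. The same source
   compiles for the host (icc -openmp saxpy.c) or natively for the Phi
   (icc -openmp -mmic saxpy.c), where the compiler's auto-vectorizer
   targets the wide 512-bit vector units on the inner loop. */
#include <omp.h>
#include <stdio.h>

#define N (1 << 20)
static float x[N], y[N];

int main(void)
{
    #pragma omp parallel for        /* spreads across all available cores */
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];  /* plain C; vectorized by the compiler */

    printf("up to %d OpenMP threads available\n", omp_get_max_threads());
    return 0;
}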

silviotoledo
11-21-2012, 05:41 PM
Why do you say that? Expensive or not, I'd like to know how powerful this toy is and, in particular, doing what. So why do you say it's less powerful, if you care to explain?

and that's the answer:


Even Intel's own benchmarks only have it around 2-3 times as fast as a Xeon CPU - and that's with specific code examples. I'm not saying it's not worth examining, but let's not get carried away here. It's not revolutionary.

And a GTX 580 plus Octane will give you almost 20x more render power than adding another high-end CPU would. That's why GPU is still better and cheaper.

dsol
11-26-2012, 09:04 AM
This is a good writeup on Xeon Phi for the HPC market - though it might be a little over-opinionated!
http://semiaccurate.com/2012/11/13/what-will-intel-xeon-phi-do-to-the-gpgpu-market/

Talks a lot more about the development side too - and how relatively easy it is to write code for, versus Nvidia and CUDA.

gerry_g
12-02-2012, 09:17 AM
This is the paragraph that got me. I'm thinking of Adobe and their lock-in to Quicksilver/Nvidia CUDA GPU acceleration, all the other GPU renderers doing the same, and how I'd love to see their downfall and a real single-standard solution replace them for good:


With the cost of GPGPU coding coming down at a snail's pace, and lock-in being prioritised over customer benefits, the viability of the whole market is now in question. From SemiAccurate's point of view it is already dead, but it may have a long tail as those locked in struggle to break free. Anyone trying to push a proprietary solution at this point is in full panic mode right now; several backchannel signs removed any doubt earlier this week.
Phi is having immediate effects on the market that are quite visible to any onlooker too. The high-end Nvidia K20 cards announced today have not been priced officially - that in itself is strange - but word on the street is that they are priced 20% less than expected. If that isn't a red flag that the market is in trouble and the players know it, I don't know what is. The end times for GPGPU are here; pity its purveyors. Intel doesn't take prisoners. S|A

erikals
12-02-2012, 09:22 AM
competition is good.

dsol
12-03-2012, 03:53 AM
This is the paragraph that got me. I'm thinking of Adobe and their lock-in to Quicksilver/Nvidia CUDA GPU acceleration, all the other GPU renderers doing the same, and how I'd love to see their downfall and a real single-standard solution replace them for good

I dunno if Adobe are "locked in" to CUDA. After Effects' 3D renderer is based on Nvidia code, but their other apps are now GPU-agnostic. Premiere Pro CS6 introduced support for OpenCL. It doesn't have 100% feature parity with the CUDA version, but it's very close - and in many benchmarks it appears to run faster (even on Nvidia hardware!)

Their other apps use OpenGL, so - of course - they're also not locked into one particular hardware vendor. And the AE CUDA 3D issue is being dealt with right now by third-party replacement 3D render engines.