Old 06-04-2013, 07:07 AM   #1
billpayer2005
Veteran
Ill Player
Los Angeles
 
Join Date: Dec 2005
Posts: 57
Intel Xeon Phi Coprocessor: 60 cores on a PCIe card?

Has anybody tried rendering on these?

http://www.tomshardware.com/news/In...-CPU,22700.html

60 cores at 1 GHz each, on a PCIe card...
If multiple cards are possible, this could be an awesome farm in one box...
 
Old 06-04-2013, 07:26 AM   #2
darthviper107
Expert
 
darthviper107's Avatar
portfolio
Zachary Brackin
3D Artist
Precocity LLC
Dallas, USA
 
Join Date: Feb 2004
Posts: 3,923
If it's anything like Nvidia's cards then it's a waste for rendering.
__________________
The Z-Axis
 
Old 06-04-2013, 07:51 AM   #3
billpayer2005
Veteran
Ill Player
Los Angeles
 
Join Date: Dec 2005
Posts: 57
It isn't like Nvidia cards; it uses x86 instructions (like an Intel processor).
 
Old 06-04-2013, 08:00 AM   #4
SheepFactory
Expert
CGSociety Member
portfolio
x y
Canada
 
Join Date: Dec 2001
Posts: 16,141
How much do these cost?
 
Old 06-04-2013, 08:23 AM   #5
tuna
|||||||||||||||||||||||||
 
tuna's Avatar
Thomas Cannell
String puller
Venice, USA
 
Join Date: Dec 2001
Posts: 1,094
Would these even be that fast for CG rendering? They have 8 GB of memory across all of their processors (if you have to share your regular system RAM, would there be a big performance hit?), they don't support SSE* instructions (would renderers/software even work on this?), and they cost $3k+.

[edit] Pretty sure they will require reprogramming to make use of; however, the learning curve is "easier" since they use more standard C++, OpenCL, etc., instead of just CUDA. This is Intel fighting back against GPGPU from Nvidia/AMD.
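For example, the "standard C++" path is supposed to look roughly like this (a sketch only, assuming Intel's compiler with its offload pragmas and a card actually installed; I haven't touched the hardware myself):

Code:
// Rough sketch: run an ordinary OpenMP loop on the coprocessor via Intel's
// offload pragmas. Build with something like: icpc -openmp offload_test.cpp
#include <cstdio>
#include <cstdlib>

int main()
{
    const int n = 1 << 20;
    float* a = (float*)std::malloc(n * sizeof(float));
    float* b = (float*)std::malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) a[i] = float(i);

    // Copy 'a' over the PCIe bus, run the loop on the card's ~60 cores,
    // copy 'b' back. The kernel itself stays plain C++, not a CUDA rewrite.
    #pragma offload target(mic) in(a : length(n)) out(b : length(n))
    {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            b[i] = a[i] * a[i] + 1.0f;
    }

    std::printf("b[10] = %f\n", b[10]);
    std::free(a);
    std::free(b);
    return 0;
}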
__________________
www.puppetstring.com

Last edited by tuna : 06-04-2013 at 08:28 AM.
 
Old 06-04-2013, 08:30 AM   #6
ThE_JacO
MOBerator-X
 
ThE_JacO's Avatar
CGSociety Member
portfolio
Raffaele Fragapane
That Creature Dude
Animal Logic
Sydney, Australia
 
Join Date: Jul 2002
Posts: 10,954
They'd be pretty awful for practically any kind of DCC work.
They are meant as massively dense platforms for low-footprint, highly parallelized farming, i.e. running gazillions of virtual machines, serving in a web farm, data management and other similar things.
They are of little interest on an artist/TD desktop.
__________________
"As an online CG discussion grows longer, the probability of the topic being shifted to subsidies approaches 1"

Free Maya Nodes
 
Old 06-04-2013, 10:35 AM   #7
SYmek
PRO
 
Join Date: Mar 2004
Posts: 82
The usefulness of Phi for rendering is still a wide-open question, obviously because we don't have applications to try it out. Nevertheless, as a starting point the situation seems much better than with GPU computing, so, speculating out loud: if Nvidia was able to sell the GPU as a raytracing/simulation device, why not Phi?

Optimization-wise, Phi is in a much better position than a GPU. It won't automatically run an x86 app at full speed, but we're talking about re-implementing low-level code (SSE math, data structures), not the entire renderer. Part of that work will be done by the compiler anyway (including specialized SPMD compilers). The main differences are the number of threads and the cache behaviour, which make a new design challenging.
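To be concrete about the "compiler does part of the work" bit, code written like this (a toy loop of my own, not from any real renderer), with no intrinsics in it, simply gets re-vectorized for whatever the target offers when you rebuild it, 4-wide SSE on a Xeon or 16-wide on the Phi:

Code:
// Width-agnostic kernel: nothing here is tied to 128-bit SSE registers.
// Rebuilding with Intel's compiler for the Phi (icc -mmic) lets the
// auto-vectorizer emit 16-wide 512-bit code instead of 4-wide SSE.
void scale(float* __restrict v, float s, int n)
{
    #pragma simd              // Intel-specific hint: vectorize this loop
    for (int i = 0; i < n; ++i)
        v[i] *= s;
}

It's the hand-written intrinsics, and the data layouts built around them, that don't come along for that ride.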

Obviously Phi wasn't designed with rendering in mind, but if I were facing the need to extend a render farm with a Tesla server, I would prefer to hold off for a couple more months to see the first DCC app (renderer or batch sim) running on Phi. It will happen.
 
Old 06-04-2013, 11:55 AM   #8
ThE_JacO
MOBerator-X
 
ThE_JacO's Avatar
CGSociety Member
portfolio
Raffaele Fragapane
That Creature Dude
Animal Logic
Sydney, Australia
 
Join Date: Jul 2002
Posts: 10,954
Phi might have a future, but in DCC it sure doesn't have a present.

Binary incompatibility means nothing (of what we use) runs on it yet, and despite Intel's claims to the contrary it's not as simple as a recompile.
It's not like you just swap the SSE intrinsics for IMCI ones and voilà, everything's done, like they insist.
A lot of SIMD is very, very neatly hand-tailored; it doesn't just "get recompiled".
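To illustrate for the non-coders what that means in practice (a made-up fragment, not from any actual renderer):

Code:
// Typical hand-tailored SSE: the width of 4, the unaligned loads and the
// scalar tail are all baked into the loop structure and, usually, into the
// data layout around it.
#include <xmmintrin.h>

void add_sse(const float* a, const float* b, float* out, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4)                        // 4 floats per step
        _mm_storeu_ps(out + i,
                      _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    for (; i < n; ++i)                                // scalar remainder
        out[i] = a[i] + b[i];
}

// On Phi (IMCI) the "same" routine is 16 floats wide, wants 64-byte-aligned
// data, and handles the remainder with mask registers rather than a scalar
// loop; the loop shape and the surrounding data layout get redesigned, not
// just the intrinsic names swapped.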

On top of that, for now and for the near future Phi will be wide and shallow, so even if they provided everybody with a magic translator to port across from SSE, no commonly available software is oriented to this level of wide-and-shallow parallelism. The tweak-and-recompile might get you back on the binary-compatibility train, but stuff would still run like a sloth, if it even fit in memory to begin with.

It's a glimpse of a potential future, but it's of little interest yet to Joe Average.

And I'm sure the chances that somebody was just about to kit up a Tesla render farm, other than for some Nvidia-sponsored rendering engine, were slim to none in the entertainment industry.
__________________
"As an online CG discussion grows longer, the probability of the topic being shifted to subsidies approaches 1"

Free Maya Nodes

Last edited by ThE_JacO : 06-04-2013 at 11:59 AM.
 
Old 06-04-2013, 02:08 PM   #9
Crushbomber
New Member
 
Crushbomber's Avatar
portfolio
Crush
RR-Selects
Stuttgart, Germany
 
Join Date: Mar 2012
Posts: 22
Cool

Adapteva's Epiphany supercomputer chip card with x86-compatible processors for your PC is now entering the market with its first 64-core chip for only $99 (the price of the complete PCIe card!), at the claimed speed of a 45 GHz current i7 CPU. Next year there should be a 1024-core version, and one year later a 4096-core one, they said in an interview. That's much faster and more cores than their currently posted roadmap.

This small company and their chip could get really interesting compared to what's coming from the big ones, especially for the price. I'm not quite sure whether Intel's Xeon Phi is capable of scaling up to 4096 cores, or at least 1024.
 
Old 06-04-2013, 02:19 PM   #10
Srek
Some guy
 
Srek's Avatar
CGSociety Member
portfolio
Björn Dirk Marlé
Technical Design
Maxon Computer GmbH
Friedrichsdorf, Germany
 
Join Date: Sep 2002
Posts: 11,268
Quote:
Originally Posted by Crushbomber
Adapteva's Epiphany supercomputer chip card with x86-compatible processors for your PC is now entering the market with its first 64-core chip for only $99 (the price of the complete PCIe card!), at the claimed speed of a 45 GHz current i7 CPU. Next year there should be a 1024-core version, and one year later a 4096-core one, they said in an interview. That's much faster and more cores than their currently posted roadmap.

This small company and their chip could get really interesting compared to what's coming from the big ones, especially for the price. I'm not quite sure whether Intel's Xeon Phi is capable of scaling up to 4096 cores, or at least 1024.

I don't think I will be overwhelmed.
Quote:
The Epiphany has a flat 32-bit address space split into 4096 1-MiB chunks. Each core is assigned its own 1-MiB chunk.
__________________
- www.bonkers.de -
The views expressed on this post are my personal opinions and do not represent the views of my employer.
 
Old 06-04-2013, 05:27 PM   #11
fablefox
Lord of the posts
portfolio
Azhar Mat Zin
Fable Fox
Kuala Lumpur, Malaysia
 
Join Date: Nov 2010
Posts: 1,162
The moment I heard about Adapteva and Srek's view on it, I knew immediately it's for parallel computing. So I visited their website:

Quote:
At Adapteva, we believe that the future of computing is parallel and heterogeneous and have set out to create a clean slate architecture optimized with these assumptions in mind.


General computing is moving into different realms now. Due to 'big data' and also physical limitations, parallel computing has its place. I know it's maybe too early for CGI to have one core working in parallel on each render bucket, but imagine 3D point transformation.
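Something as simple as this, say (just my own toy sketch, nothing tied to any particular chip): millions of points pushed through a matrix, with each core chewing on its own slice of the array.

Code:
// Toy sketch: transform a big batch of points by a 4x4 matrix, with the
// loop split across however many cores are available (OpenMP here).
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

void transform_points(std::vector<Vec3>& pts, const Mat4& M)
{
    #pragma omp parallel for               // one slice of points per core
    for (int i = 0; i < (int)pts.size(); ++i) {
        const Vec3 p = pts[i];
        pts[i].x = M.m[0][0]*p.x + M.m[0][1]*p.y + M.m[0][2]*p.z + M.m[0][3];
        pts[i].y = M.m[1][0]*p.x + M.m[1][1]*p.y + M.m[1][2]*p.z + M.m[1][3];
        pts[i].z = M.m[2][0]*p.x + M.m[2][1]*p.y + M.m[2][2]*p.z + M.m[2][3];
    }
}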

The other side is quantum computing. Recently it made headlines when Google bought the latest version and planned to share it with NASA. Some papers written about tests on that quantum computer show a speedup of 10,000x for certain algorithms. Search Ars Technica for the articles.

Anyway, I'm sure it will have its use in DCC. We'll see.
 
Old 06-04-2013, 06:45 PM   #12
SYmek
PRO
 
Join Date: Mar 2004
Posts: 82
Quote:
Originally Posted by ThE_JacO
Phi might have a future, but in DCC it sure doesn't have a present.

Binary incompatibility means nothing (of what we use) runs on it yet, and despite Intel's claims to the contrary it's not as simple as a recompile.
It's not like you just swap the SSE intrinsics for IMCI ones and voilà, everything's done, like they insist.
A lot of SIMD is very, very neatly hand-tailored; it doesn't just "get recompiled".

(...)


I spent last year mostly SIMD-optimizing my code, so I really understand what you're talking about, but as I said previously, the situation is much better than with CUDA, for example, so I'm also more enthusiastic about Phi.

The environment is changing too: there is less hand-tailored code out there as compilers and vector libraries become better and better. It's really expensive to maintain hand-optimized code, so developers try to rely on a middleman here.

Quote:
And I'm sure the chances that somebody was just about to kit up a Tesla render farm, other than for some Nvidia-sponsored rendering engine, were slim to none in the entertainment industry.


You would be surprised. Definitely not in the film industry, but in TV/commercials everyone looks greedily at GPU computing.
 
Old 06-04-2013, 07:22 PM   #13
billpayer2005
Veteran
Ill Player
Los Angeles
 
Join Date: Dec 2005
Posts: 57
Intel Phi Performance comparison with Nvidia Tesla and Dual X5680

http://goparallel.sourceforge.net/i...ocks-tesla-gpu/

The Phi compares very favorably to dual Xeons or an Nvidia Tesla (nearly double the performance).
Up to 1 TFLOPS... (100 GFLOPS)

The article indicates they will be sub-$2000...

Last edited by billpayer2005 : 06-05-2013 at 02:05 AM.
 
Old 06-04-2013, 08:11 PM   #14
techmage
living in maya
 
techmage's Avatar
portfolio
Ryan
USA
 
Join Date: Apr 2005
Posts: 1,076
The thing about rendering, with both this and the Tesla card, is that they need more RAM. These things are intended for scientific simulation and calculation and just don't have the RAM needed for serious CG rendering work. They need at least 16 GB, I think, and ideally 32 GB, before they can truly replace typical CPU rendering.

And no company is really aiming at that directly to make it a reality. Nvidia is just kind of half-assing it with Iray + Tesla, I think. Where it works, it's nice. But really, they should just drop the R&D cash to make it so you can spend $5,000-$6,000 on a 32 GB, double-powered Tesla card and use it with a version of Iray that any app can feed, which would then truly give you a multi-purpose, CPU-replacement render farm in a desktop, without having to jump through any GPGPU-only rendering hoops. I really just don't understand why Nvidia still only has its foot half in the water on this. It's like they're unsure whether it will actually be profitable, so they keep Tesla in the area of scientific simulation and "maybe" CG rendering.

I see the same issue happening with this. It 'might' be useful for CG, but who's going to drop serious cash on it until it's a well-refined package for rendering? Intel should be pairing up with V-Ray or Maxwell or Luxology or someone, to make their renderer directly and ideally supported by this, with the intent of it being a CPU replacement.
 
Old 06-04-2013, 11:12 PM   #15
ThE_JacO
MOBerator-X
 
ThE_JacO's Avatar
CGSociety Member
portfolio
Raffaele Fragapane
That Creature Dude
Animal Logic
Sydney, Australia
 
Join Date: Jul 2002
Posts: 10,954
Quote:
Originally Posted by SYmek
I spent last year mostly SIMD-optimizing my code, so I really understand what you're talking about, but as I said previously, the situation is much better than with CUDA, for example, so I'm also more enthusiastic about Phi.

The environment is changing too: there is less hand-tailored code out there as compilers and vector libraries become better and better. It's really expensive to maintain hand-optimized code, so developers try to rely on a middleman here.

Don't get me wrong, I wasn't arguing against your point. If anything, I thought it obvious enough from your post that you must have a programming background (mentions of intrinsics fall flat on 99.9% of the population otherwise).

I agree there's a potential future there, but for the benefit of non-programmers reading this and potentially getting excited, I was pointing out there's no present just yet.

If anything, the only good thing coming out of LRB and Phi isn't the majorly F'ed-up roadmap, the four years of delays, or the underwhelming performance. The good thing coming out of it is the amazing work by Intel's software team on their compiler, and that might cover more and more profitable ground in changing models than constantly crippling their computing depth in favour of width ever will.

Quote:
You would be surprised. Definitely not in the film industry, but in TV/commercials everyone looks greedily at GPU computing.

Oh, I know many do, but how many have actually bothered setting up a Fermi or Kepler farm of any description? So far I've only heard of a couple of mid-sized TVC shops, and one previz one. Right now it's a system that seems to run very, very fast, but it eventually always runs into a wall that's a few yards short of the quality line expected for delivery.
__________________
"As an online CG discussion grows longer, the probability of the topic being shifted to subsidies approaches 1"

Free Maya Nodes
 