Intel Xeon Phi Coprocessor - 60 cores on a PCI card?

  06 June 2013
Intel is porting (or has already ported) their own Embree raytracing kernel to Phi... I googled it months ago. There are definitely speedups, but not as dramatic as I'm sure people hope for.
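
For anyone wondering what those Embree kernels look like from the application side, here's a minimal sketch against the Embree 3 C API (a newer API than what the Phi port would have used at the time, and the one-triangle scene is purely illustrative):

[CODE]
// Minimal Embree 3 usage sketch: build a one-triangle scene and trace a single ray.
// Assumes Embree 3 is installed; compile with: g++ trace.cpp -lembree3
#include <embree3/rtcore.h>
#include <cstdio>
#include <limits>

int main() {
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene scene = rtcNewScene(device);

    // One triangle in the z = 0 plane.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* verts = (float*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    const float v[9] = { 0,0,0,  1,0,0,  0,1,0 };
    for (int i = 0; i < 9; ++i) verts[i] = v[i];
    for (unsigned i = 0; i < 3; ++i) idx[i] = i;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);

    // Trace one ray from z = 1 straight down onto the triangle.
    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    RTCRayHit rh = {};
    rh.ray.org_x = 0.25f; rh.ray.org_y = 0.25f; rh.ray.org_z = 1.0f;
    rh.ray.dir_x = 0.0f;  rh.ray.dir_y = 0.0f;  rh.ray.dir_z = -1.0f;
    rh.ray.tnear = 0.0f;
    rh.ray.tfar  = std::numeric_limits<float>::infinity();
    rh.ray.mask  = 0xFFFFFFFFu;
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rtcIntersect1(scene, &ctx, &rh);

    std::printf("hit: %s, t = %f\n",
                rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no", rh.ray.tfar);

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}
[/CODE]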

-Greg
 
  06 June 2013

Chaos Group has some results with V-Ray on the Xeon Phi card.
The V-Ray port to Xeon Phi was started several months ago, and the first image appeared about a month ago:

http://www.chaosgroup.com/forums/vb...&highlight=xeon
 
  06 June 2013
Chaos Group V-Ray with Phi? Yes!!! Awesome news!
 
  06 June 2013
Quote: no commonly available software is oriented to this level of wide and shallow parallelism


Many raytracing tasks could indeed be considered "wide and shallow parallelism".
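
A minimal sketch of the "wide" part, just to illustrate the point; the trace() function is a made-up placeholder, not any particular renderer's API:

[CODE]
// Illustration of the "wide" part of ray tracing: every pixel is an independent
// task, so the outer loop spreads trivially across many hardware threads.
// Compile with: g++ -O2 -fopenmp wide.cpp (runs serially without -fopenmp)
#include <cstddef>
#include <cstdio>
#include <vector>

struct Color { float r, g, b; };

// Placeholder "integrator": a real one would shoot a camera ray and shade the hit.
static Color trace(int x, int y) {
    return { x * 0.001f, y * 0.001f, 0.5f };
}

std::vector<Color> render(int width, int height) {
    std::vector<Color> image(static_cast<std::size_t>(width) * height);
    // No shared mutable state between iterations: this maps onto the 200+
    // hardware threads of a Xeon Phi (or any wide machine) with no locking.
    #pragma omp parallel for schedule(dynamic, 4)
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            image[static_cast<std::size_t>(y) * width + x] = trace(x, y);
    return image;
}

int main() {
    std::vector<Color> img = render(640, 480);
    std::printf("rendered %zu pixels, last = (%.2f, %.2f, %.2f)\n",
                img.size(), img.back().r, img.back().g, img.back().b);
    return 0;
}
[/CODE]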
__________________
www.yafaray.org
Free rays for the masses
 
  06 June 2013
Originally Posted by Samo: Many raytracing tasks could indeed be considered "wide and shallow parallelism"

They can deal with the wide part; raytracing is perfectly suited for that. The shallow part, not so much.
6-8GB of RAM for such width is really tight, videocard tight. It's fine for demos and procedural-heavy stuff, but it's not uncommon these days to need almost that much for the textures alone in some shots, and that leaves you with very little space for everything else, and a cramped footprint for the acceleration structure.
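
A rough back-of-the-envelope of how 6-8GB disappears; all of the figures below are illustrative assumptions rather than measurements from any real scene:

[CODE]
// Rough, illustrative footprint estimate for a mid-sized production frame.
// All figures are assumptions chosen for the example, not measured data.
#include <cstdio>

int main() {
    const double GiB = 1024.0 * 1024.0 * 1024.0;

    const double tris          = 50e6;       // unique triangles after tessellation
    const double bytes_per_tri = 48.0;       // vertices/normals/UVs amortized per triangle
    const double bvh_per_tri   = 40.0;       // acceleration-structure overhead per primitive
    const double texture_bytes = 4.0 * GiB;  // resident texture tiles / mip levels
    const double framebuffers  = 0.5 * GiB;  // AOVs, sample buffers, etc.

    const double geo_bvh = tris * (bytes_per_tri + bvh_per_tri);
    const double total   = geo_bvh + texture_bytes + framebuffers;
    std::printf("geometry + BVH: %.1f GiB\n", geo_bvh / GiB);
    std::printf("total:          %.1f GiB against a 6-8 GiB card\n", total / GiB);
    return 0;
}
[/CODE]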

So just as things are finally coming around to parallelism, and we've been liberated from memory constraints by 32-64GB being, relatively speaking, affordable, you want to kidney-punch it down to one eighth of what we currently have available?

I'm looking forward to the first or second model of the next generation; the price point is interesting for sure, but right now it's just too constrained.
__________________
Come, Join the Cult http://www.cultofrig.com - Rigging from First Principles
 
  06 June 2013
Originally Posted by techmage: Intel should be pairing up with VRay or Maxwell or Luxology or someone, to make their renderer directly and ideally supported by this, with the intent of it being a CPU replacement.


If anything, Intel will most likely (if they aren't already) be collaborating with SolidAngle. Back at SIGGRAPH 2012, Marcos Fajardo expressed interest in partnering with Intel in an interview published on Intel's YouTube channel. So take that for what it's worth.

Source: http://www.youtube.com/watch?v=ldwRpJP6ApA (1:47)

I've also noticed that Intel published some videos detailing the Phis a couple of months ago on that same channel.
 
  06 June 2013
Originally Posted by ThE_JacO: The good thing coming out of it is the amazing work by Intel's software team on their compiler, and that might cover more and more profitable ground in changing models than them constantly crippling their computing depth in favour of width ever will.


Totally agreed on the software enhancements; I'm a big fan of what Intel has done in recent years in expanding their software culture (probably learned from work on the GPGPU environment).

But when it comes to computing depth, it's not like 30+ stage instruction pipelines requiring complicated branch predictors and multi-stage caches are such a great thing every time. They are a consequence of choices made many years ago in the pursuit of scalar performance, which Intel is trying to eliminate one after another today. Slower, simpler, and wider CPU pipelines are considered the better idea (inside modern multicore, hyper-threaded chips, that is), especially since outside the entertainment industry, HPC people tend to treat FLOPS per watt or message-passing latency as the critical parameters.
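
For the FLOPS-per-watt point, the arithmetic behind "wide and simple" is easy to see from Intel's published Xeon Phi 5110P figures (60 cores, 1.053 GHz, 512-bit vectors with FMA, 225 W TDP); this is theoretical peak, of course, not sustained throughput:

[CODE]
// Peak double-precision throughput and FLOPS-per-watt for a Xeon Phi 5110P,
// using Intel's published figures (60 cores, 1.053 GHz, 512-bit SIMD, FMA, 225 W).
#include <cstdio>

int main() {
    const double cores     = 60;
    const double clock_hz  = 1.053e9;
    const double dp_lanes  = 512 / 64;  // 8 doubles per 512-bit vector
    const double fma       = 2;         // fused multiply-add = 2 flops per lane per cycle
    const double tdp_watts = 225;

    const double peak_flops = cores * clock_hz * dp_lanes * fma;  // ~1.01 TFLOPS
    std::printf("peak DP: %.2f TFLOPS, %.1f GFLOPS/W\n",
                peak_flops / 1e12, peak_flops / 1e9 / tdp_watts);
    return 0;
}
[/CODE]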


Quote: Oh, I know many do, but how many have actually bothered setting up a Fermi or Kepler farm of any description? So far I have only heard of a couple of mid-sized TVC shops, and one previz one. Right now it's a system that seems to run very, very fast, but eventually always runs into a wall that's a few yards before the quality line expected for delivery.


Yes, when asked, I also usually advise against investing in something studios will hit a wall with on the next real-life project. That said, I would love to be able to propose an HP 380 with 5 Phis as a renderfarm extension for average shops, once some renderers are ported. Keep in mind that Side Effects ported Houdini Batch and Mantra to the Cell architecture some years ago.


Originally Posted by ThE_JacO: 6-8GB of RAM for such width is really tight, videocard tight. (...) but it's not uncommon these days to need almost that much for the textures alone in some shots, and that leaves you with very little space for everything else, and a cramped footprint for the acceleration structure.


Surely RAM has to be considered. Note, though, that most decent renderers don't keep all textures in memory; they are streamed from disk. People have successfully rendered scenes with 15+GB of textures on 32-bit computers.
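
For anyone curious, the general idea behind texture streaming is a bounded tile cache: only the tiles rays actually touch stay resident, and an LRU policy evicts the rest. The sketch below is a simplified illustration of the technique, not any particular renderer's implementation; the tile size and the disk loader are made up for the example:

[CODE]
// Simplified LRU texture-tile cache: keeps the resident set bounded regardless
// of how many gigabytes of textures the scene references on disk.
// load_tile_from_disk() is a stand-in for a real tiled/mipmapped file reader.
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

struct TileKey {
    uint32_t texture_id, tile_x, tile_y, mip;
    bool operator==(const TileKey& o) const {
        return texture_id == o.texture_id && tile_x == o.tile_x &&
               tile_y == o.tile_y && mip == o.mip;
    }
};
struct TileKeyHash {
    size_t operator()(const TileKey& k) const {
        return (size_t(k.texture_id) * 73856093u) ^ (size_t(k.tile_x) * 19349663u) ^
               (size_t(k.tile_y) * 83492791u) ^ (size_t(k.mip) * 2654435761u);
    }
};

using Tile = std::vector<float>;  // e.g. 64x64 RGBA texels

Tile load_tile_from_disk(const TileKey&) { return Tile(64 * 64 * 4, 0.5f); }  // stub

class TileCache {
public:
    explicit TileCache(size_t max_tiles) : max_tiles_(max_tiles) {}

    const Tile& fetch(const TileKey& key) {
        auto it = map_.find(key);
        if (it != map_.end()) {                  // hit: move tile to front of LRU list
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->second;
        }
        if (map_.size() >= max_tiles_) {         // miss with full cache: evict LRU tile
            map_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(key, load_tile_from_disk(key));
        map_[key] = lru_.begin();
        return lru_.front().second;
    }

private:
    size_t max_tiles_;
    std::list<std::pair<TileKey, Tile>> lru_;
    std::unordered_map<TileKey, decltype(lru_)::iterator, TileKeyHash> map_;
};

int main() {
    TileCache cache(1024);                       // cap the resident set at 1024 tiles
    const Tile& t = cache.fetch({7, 3, 5, 0});   // texture 7, tile (3,5), mip 0
    return t.empty() ? 1 : 0;
}
[/CODE]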
 
  06 June 2013
Originally Posted by ThE_JacO: And I'm sure the chances somebody was just about to kit up a Tesla renderfarm, other than some nVIDIA-sponsored rendering engine, were slim to none in the entertainment industry


Hey the Jaco !

This may change fairly soon. I'm currently testing the alpha of Redshift Renderer. While I can't get into too many specifics, a lot of guys are using Titans (basically the same GPU/RAM config as a Tesla) and getting crazy render times with full GI, MB, DOF, SSS and caustics. My modest GTX 580 is getting exterior renders out in under 2 mins @ 2K with all the fruit (complex interiors @ about 8 mins). Still heaps of work to go, but the RS guys are coding REALLY fast. So the dawn of GPU render farms may be closer than you think.

But I have said too much!

Dr.
 
  06 June 2013
Originally Posted by DrDardis: Hey the Jaco !

This may change fairly soon. I'm currently testing the alpha of Redshift Renderer. While I can't get into too many specifics, a lot of guys are using Titans (basically the same GPU/RAM config as a Tesla) and getting crazy render times with full GI, MB, DOF, SSS and caustics. My modest GTX 580 is getting exterior renders out in under 2 mins @ 2K with all the fruit (complex interiors @ about 8 mins). Still heaps of work to go, but the RS guys are coding REALLY fast. So the dawn of GPU render farms may be closer than you think.

But I have said too much!

Dr.


This Redshift looks really promising.

I wonder why Chaos Group hasn't been sprinting towards a similar design pattern, i.e. streaming assets into GPU memory to get past VRAM limitations. It seems to me that if that works reliably, this will seriously be a game changer.

Has Chaos Group even shown any interest in creating a biased GPU renderer?
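
For what it's worth, here's a sketch of what "streaming assets into GPU memory" usually means in practice: keep the full dataset in host memory and cycle it through a fixed-size device window with asynchronous copies. This is just the general pattern using the CUDA runtime API, with made-up chunk sizes; it is not how Redshift or V-Ray actually do it:

[CODE]
// Host-side sketch of out-of-core streaming with the CUDA runtime API: the scene
// data lives in pinned host memory and is cycled through a fixed-size device
// window, so VRAM bounds the working set, not the scene size.
// Chunk size and buffer count are illustrative; error handling is omitted.
#include <cuda_runtime.h>
#include <cstddef>
#include <cstdio>

int main() {
    const size_t chunk_bytes = 256u << 20;          // 256 MiB device window (illustrative)
    const size_t total_bytes = 4ull * chunk_bytes;  // pretend the scene is 1 GiB

    // Pinned host memory allows truly asynchronous host-to-device copies.
    float* host_data = nullptr;
    cudaMallocHost(reinterpret_cast<void**>(&host_data), total_bytes);

    float* dev_buf[2] = { nullptr, nullptr };       // double buffer on the device
    cudaStream_t stream[2];
    for (int i = 0; i < 2; ++i) {
        cudaMalloc(reinterpret_cast<void**>(&dev_buf[i]), chunk_bytes);
        cudaStreamCreate(&stream[i]);
    }

    const size_t num_chunks = total_bytes / chunk_bytes;
    for (size_t c = 0; c < num_chunks; ++c) {
        const int b = static_cast<int>(c % 2);
        // Wait until earlier work that used this buffer has finished before overwriting it.
        cudaStreamSynchronize(stream[b]);
        cudaMemcpyAsync(dev_buf[b],
                        reinterpret_cast<char*>(host_data) + c * chunk_bytes,
                        chunk_bytes, cudaMemcpyHostToDevice, stream[b]);
        // A real renderer would launch its trace/shade kernel on stream[b] here,
        // overlapping with the upload queued on the other stream.
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < 2; ++i) {
        cudaStreamDestroy(stream[i]);
        cudaFree(dev_buf[i]);
    }
    cudaFreeHost(host_data);
    std::printf("streamed %zu chunks through a %zu MiB device window\n",
                num_chunks, chunk_bytes >> 20);
    return 0;
}
[/CODE]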
 
  06 June 2013
Originally Posted by techmage: This Redshift looks really promising.

I wonder why Chaos Group hasn't been sprinting towards a similar design pattern, i.e. streaming assets into GPU memory to get past VRAM limitations. It seems to me that if that works reliably, this will seriously be a game changer.

Has Chaos Group even shown any interest in creating a biased GPU renderer?


Howdy Techmage,

Someone asked this and Vlado said the following:

Quote: It can be done, obviously. If you are asking whether we will do it - I can't say for the moment.

Biased solutions will always have issues - artifacts, splotches, light leaks. I personally am looking forward to the day when we won't need them.


So short-term, I guess not. I agree to a point, but if you can use unified sampling and ray termination to speed things up, why not?
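
On the ray-termination side, the standard trick is Russian roulette: kill low-contribution paths probabilistically and boost the survivors so the estimate stays unbiased on average. A minimal sketch of the idea (the depth threshold and probability clamp are arbitrary example values):

[CODE]
// Russian-roulette path termination: terminate low-throughput paths with some
// probability and divide the survivors by that probability, keeping the
// estimator unbiased while cutting the average number of bounces traced.
// Requires C++17 for std::clamp.
#include <algorithm>
#include <cstdio>
#include <random>

struct Throughput { float r, g, b; };

// Returns true if the path survives; scales throughput to compensate if it does.
// min_depth and the survival clamp are arbitrary example choices.
bool russian_roulette(Throughput& beta, int depth, std::mt19937& rng, int min_depth = 3) {
    if (depth < min_depth)
        return true;  // always trace the first few bounces

    // Survival probability proportional to the path's remaining contribution.
    const float p = std::clamp(std::max({beta.r, beta.g, beta.b}), 0.05f, 0.95f);
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    if (u01(rng) >= p)
        return false;                           // path killed: contributes nothing further

    beta.r /= p; beta.g /= p; beta.b /= p;      // survivor boosted, so the mean is unchanged
    return true;
}

int main() {
    std::mt19937 rng(42);
    Throughput beta{0.4f, 0.3f, 0.2f};
    int depth = 0;
    // In a real integrator, each iteration would trace one more bounce.
    while (depth < 64 && russian_roulette(beta, depth, rng))
        ++depth;
    std::printf("path terminated after %d bounces\n", depth);
    return 0;
}
[/CODE]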

I just gotta say again, though: Redshift is really going places; some users are starting to use it in production already. There's still plenty of work to do on their roadmap (small team, very responsive to questions/bugs), but they are moving at lightning pace!

When it becomes available, buy it with monies!

Last edited by DrDardis : 06 June 2013 at 09:57 AM.
 
  06 June 2013
Originally Posted by darthviper107: If it's anything like Nvidia's cards then it's a waste for rendering.


I don't know if you saw this but at first glance I was very impressed:
http://www.youtube.com/watch?v=uRdSxZtUpFk

Also, just the demos here: http://www.youtube.com/watch?v=HYUOUMy-VDo

NVIDIA is beginning to sell its GRID server racks... Instead of multiple CPUs, they have multiple GPUs... Path tracing on GPUs is in its early days, but it really is the future of motion graphics and 3D software...
 