Haswell chips reviews/NDA lifted

Old 06 June 2013   #1

Well, so far not so good.
It might be interesting for the next generation of netbooks and slim PCs (Wacom tablets?), but for the desktop the verdict seems to be unanimous: not worth the upgrade.

http://www.rockpapershotgun.com/201...-haswell-chips/

http://www.bit-tech.net/hardware/20...0k-cpu-review/1

http://benchmarkreviews.com/index.p...=1159&Itemid=63

http://www.techradar.com/au/reviews...-1156062/review

Not that those are the best websites out there; they're just a good snapshot showing that practically nobody, not even the historically pro-Intel reviewers, can find the guts to say anything good about Haswell for desktop computing.
 
Old 06 June 2013   #2
Ya, I was worried I'd be envious of Haswell's extra oomph when I buy the Ivy Bridge Xeon Mac Pro once it comes out, but it's just a minor update performance-wise, mostly aimed at power consumption.
 
Old 06 June 2013   #3
I would like to see more of the power consumption reports. I think that was probably the focus for mobile, so I'm curious to see how it translates into the next cycle a year from now.

My fear is that with such a strong shift of focus to mobile, the long-term development of desktop computing is in jeopardy. I think they believe mobile computing is their future; they've been trying for a while now to get their architecture into that market.

Maybe now is the time for AMD.
 
Old 06 June 2013   #4
No, Intel makes too much money on servers and Xeons to abandon them. And the gaming PC market is fine, so you don't have to worry about higher-end PCs. Most of the growth is in laptops and tablets, though, so that focus is important.

I am looking forward to more conservative power usage on Xeons. It makes no sense to use 200-300 watts to browse the web and do email.
 
Old 06 June 2013   #5
More conservative?
Xeons are already extremely conservative on power; these days they're deliberately clocked well below i7s for how conservative they tend to be.
Mid-range ones are 90 W peak TDP, and the absolute top parts struggle to draw 130 W even under really bad, sustained conditions.
The Xeon hasn't been an HPC part for a while now; the i7 K-series is, and has been since the i7 9xx.

Intel has simply realized the MHz race is well and truly over, but at the same time they aren't doing that great at very wide scaling either (Phi still feels tentative and narrow).

They already killed the high-performance market entirely with Haswell; they never really made a secret of it not being their focus anyway.

They are converging things elsewhere, where mobility, high-density farms and everything else live together, so they can get their best bang for buck out of one main research branch.

Their compiler work these days is a lot more interesting than their hardware work.
 
Old 06 June 2013   #6
I feel like CPU hardware is looking a little gloomy over the next few years, specifically for the high-end market.

We're in a transition period with hardware, and it kinda sucks, since software is lagging behind the massively parallel hardware that keeps being expanded.

There's a lot of great potential in things like Phi and their compiler work, but it doesn't change the fact that a lot of workloads don't get faster with multithreading, and in some cases they even slow down because of the distribution overhead. Though maybe compiler development will solve that.
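
A minimal sketch of that overhead point, assuming Python and its standard multiprocessing module (the workload and numbers are made up for illustration): farming a trivial job out to worker processes can lose to just doing it serially, because the pool spawn and pickling cost dominates the actual work.

import time
from multiprocessing import Pool

def chunk_sum(chunk):
    # the actual work per worker: trivially small on purpose
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(200000))

    t0 = time.perf_counter()
    serial = sum(data)
    t1 = time.perf_counter()

    # split the list across 4 worker processes
    with Pool(4) as pool:
        chunks = [data[i::4] for i in range(4)]
        parallel = sum(pool.map(chunk_sum, chunks))
    t2 = time.perf_counter()

    assert serial == parallel
    print("serial:   %.4fs" % (t1 - t0))
    print("parallel: %.4fs (includes spawn/IPC overhead)" % (t2 - t1))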

Things like the mental ray software renderer using the GPU to output AO passes are neat, and it'll be nice to see more and more work offloaded onto PCI GPUs or coprocessors. It's also neat to see OpenSubdiv push all its calculations onto the GPU.

I guess it remains to be seen whether render farms full of PCI rack servers will render frames faster than dense dual-CPU blades, or whether they'll be more cost-efficient. Maybe renders will happen faster by adding more big GPUs, but right now they also go faster by adding another computer or another CPU socket.

I just wonder whether transitioning to mass parallelism will actually bring any real benefit, similar to how a really high-quality GPU render takes about as long as an identical-looking high-quality CPU render.
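
For what it's worth, Amdahl's law gives a rough ceiling here: if some fraction of the work is inherently serial, piling on cores stops paying off very quickly. A back-of-the-envelope sketch (the 90% parallel figure is just an illustrative assumption, not a measured number):

# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n the core count.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of a renderer parallelized, 64 cores buy only ~8.8x,
# and even 1024 cores stay under 10x.
for n in (4, 16, 64, 1024):
    print("%4d cores -> %5.2fx" % (n, speedup(0.9, n)))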
 
Old 06 June 2013   #7
I firmly believe we're in a transitional phase similar to the one we went through moving from 32 to 64 bit. Several things need to catch up all at once before things get better, but that's in stark contrast to the current global state of developer and user relationships.

Computationally it'll be two or three lean years for sure, where we move sideways before we go anywhere else.
 