Dual Opteron or Xeon?


orbitalpunk
10-19-2004, 08:47 PM
Hi people,

I am building a new modeling computer and am going crazy over whether I should get a dual Opteron or dual Xeon system. My budget is about $350 per CPU, so it looks like it would be between an AMD Opteron 244 or an Intel Xeon 3.0GHz 800FSB based on the Nocona core.

I did see the benchmark test at Tom's Hardware (http://www.tomshardware.com/cpu/20040927/index.html), but I feel it's an unfair test for the Xeon because, and I don't know why he did this, he used slower memory: DDR266 = 266MHz and DDR2-400 = 200MHz, while the Opterons used DDR400 = 400MHz. He should have used the Iwill DH800 motherboard with DDR400 memory to make it a fair test.

My other concern is what would be best for the future of 64-bit computing, and which would benefit most when Maya is released in 64-bit form running on the 64-bit version of Windows XP. Is Intel's EM64T really 64-bit computing? It seems the AMD64 in the Opterons is, but I'm still confused about Intel's EM64T.

Any opinions would be greatly appreciated. This is for Maya 6.

Thanks

lots
10-19-2004, 09:42 PM
From what I've seen, EM64T is almost verbatim what AMD64 is (even the spec sheets for the two technologies are almost identical). There are a few discrepancies (EM64T uses a 36-bit address bus as opposed to the Opteron/Athlon64's 40 bits, for example). I've also heard (as in rumors :P) that EM64T is not as well implemented on the Intel chips as AMD64 is on the AMD chips. Granted, only final versions of WinXP64, and a comparison of EM64T-capable chips from Intel against AMD64 chips on that OS, will settle this question...

There was an article I came across a few months ago featuring an interview with some MS people, and one of them said that "AMD has done a very good job" in reference to the x86-64 extensions found in AMD64 and Intel's EM64T. Take that for whatever it's worth... I'm just repeating what I've read (rumor-wise) around the net. There's a possibility that none of this is true :)

It is just rumor after all...

novadude
10-19-2004, 10:29 PM
DDR2-400 and DDR400 run at the same speed: 200MHz x2.
Now, if you want to debate latency, bring that argument up with Intel for choosing to use DDR2.

orbitalpunk
10-19-2004, 10:46 PM
Hi novadude, umm... maybe I misunderstood Tom's statement on the first page, where he explains the new chipset from Intel.

"With out a doubt, Intel pulled out all the stops to upgrade the new platform. As a result, significant ingredients have been added, which are already spiking the desktop platform based on socket 775, such as PCI Express graphics and DDR2 memory. However, the latter only works with 200 MHz (DDR2-400)"

If it's the case that DDR2-400 is effectively at 400MHz, I'd still consider the Intel. I just want what's best for rendering in Maya, as well as knowing who'd gain most from a 64-bit version of Maya and Windows.

ugh.. how did my text turn black? eh..

novadude
10-19-2004, 10:53 PM
DDR2 and DDR are both Double Data Rate RAM. They transfer data at twice their clock rating, so at a 200MHz clock the effective rate is 400.
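For the curious, that naming arithmetic is easy to sanity-check. A quick back-of-the-envelope sketch in Python (an editorial illustration, not anything from the thread):

```python
# DDR naming arithmetic: effective transfers/s is twice the base clock,
# and peak bandwidth is transfers/s times the 64-bit (8-byte) bus width.
def ddr_rating(base_clock_mhz):
    effective_mt_s = base_clock_mhz * 2    # double data rate
    peak_mb_s = effective_mt_s * 8         # 64-bit bus = 8 bytes/transfer
    return effective_mt_s, peak_mb_s

for name, clock in [("DDR400", 200), ("DDR2-400", 200), ("DDR266", 133)]:
    mt, mb = ddr_rating(clock)
    print(f"{name}: {clock} MHz clock -> {mt} MT/s, {mb} MB/s peak")
# DDR400 and DDR2-400 both come out at 400 MT/s and 3200 MB/s (the
# "PC3200" label), which is novadude's point: same 200 MHz clock,
# same effective rate.
```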

orbitalpunk
10-21-2004, 07:03 PM
Geez, I think this new review answers my question.

http://www.gamepc.com/labs/view_content.asp?id=noconaopteron&page=1

The Maya benchmark is on page 11. Looks like the Opterons are it. I kinda was rooting for Intel, though. But overall, I want what renders fastest in Maya, and if the Opterons are the ones, then that's where I'm headed.

RogueLion
10-21-2004, 07:16 PM
I would go with Xeon.

lots
10-21-2004, 07:42 PM
Really, DDR2 RAM's only advantage is that it can reach MUCH higher base clock speeds. DDR maxes out when you get to DDR speeds of around 400-550; DDR2 is designed to go beyond these speeds. However, clock for clock, DDR2 performs close to DDR, meaning at its current speeds (DDR2-400) you won't find much of a difference. In fact, DDR2 (at least right now) has fairly bad latencies. For example, if the Athlon/Opteron architecture took DDR2 (in its current form), the higher latency of the chips would probably cause a drop in performance on that platform, since AMD's new chips seem to be sensitive to timings.
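To put rough numbers on the latency point: CAS latency is counted in clock cycles, so absolute latency scales with the clock period. A minimal sketch, with the DDR2-400 CL4 figure assumed as a typical early-DDR2 timing:

```python
# Convert CAS latency (clock cycles) to absolute nanoseconds:
# ns = cycles / clock_MHz * 1000
def cas_ns(cas_cycles, base_clock_mhz):
    return cas_cycles / base_clock_mhz * 1000

for name, cas, clock in [
    ("DDR400 CL2",   2.0, 200),
    ("DDR400 CL2.5", 2.5, 200),
    ("DDR2-400 CL4", 4.0, 200),  # typical early DDR2 timing (assumed)
]:
    print(f"{name}: {cas_ns(cas, clock):.1f} ns")
# DDR400 CL2   -> 10.0 ns
# DDR400 CL2.5 -> 12.5 ns
# DDR2-400 CL4 -> 20.0 ns (same clock, twice the cycles: lots' point)
```

The same arithmetic applies to the CL2 vs. CL2.5 question that comes up later in the thread: a 2.5 ns difference in first-word latency per access; whether that shows up in render times is exactly the question asked there.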

elvis
10-21-2004, 09:41 PM
Personally speaking I've been using Opterons for 12 months now, and Xeons for much longer.

Xeons are starved for decent front-side-bus speeds. All the marketing numbers in the world can't hide the fact that two CPUs at 3GHz or more are being fed by effectively a 400MHz FSB to the system RAM. Where I come from, we call that "a garden hose into a stormwater drain".

Opterons have moved the memory controllers to the CPUs. Dual Opterons on Nforce3 hardware with 4 sticks of RAM and NUMA have a maximum theoretical throughput of 12.8GB/s. I've seen around 10GB/s real world. Compare this to the 2.5GB/s of a Xeon setup, and you see why my entire company has dumped Xeons in favour of Opterons.
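elvis's 12.8GB/s figure checks out as dual-channel DDR400 per socket times two sockets. A sketch of the arithmetic (theoretical peaks only; as he notes, real-world numbers land lower):

```python
# Peak memory bandwidth arithmetic for the two platforms discussed.
DDR400_MB_S = 400 * 8                 # 400 MT/s * 8 bytes = 3200 MB/s/channel

# Opteron: each CPU has its own dual-channel DDR400 controller (NUMA),
# so aggregate bandwidth scales with socket count.
opteron_dual = 2 * (2 * DDR400_MB_S)  # 2 sockets * 2 channels
print(f"Dual Opteron aggregate: {opteron_dual / 1000:.1f} GB/s")  # 12.8

# Xeon (Nocona era): both CPUs share one 800 MT/s front-side bus.
xeon_fsb = 800 * 8                    # 6400 MB/s, shared by both CPUs
print(f"Dual Xeon shared FSB:   {xeon_fsb / 1000:.1f} GB/s total")
# The FSB's theoretical peak is 6.4 GB/s shared; the ~2.5 GB/s quoted
# above is a measured real-world figure, well under that ceiling.
```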

Others' mileage may vary. I can tell you now, 3DSMax users will be sticking with Intel for some time. Regardless of how wonderful AMD's latest are, 3DMax benchmarks speak for themselves: Max loves Intel. Maya, Houdini, XSI, PRMan and Mental Ray users will definitely be wanting to check out AMD. The higher bus speeds and great FPU power make these things a pleasure to work with.

And not only that, but at 40-50 degrees, Opterons make for nice renderfarm nodes, compared to 70-80 degree Xeons. Stick 42 of those Xeons in a rackmount setup, and watch your air-conditioning bills go through the roof!

orbitalpunk
10-22-2004, 12:20 AM
See, that's one thing I hate about these reviews: the unequal latency in the memory (3 vs. I think 2.5). They should have used the Iwill DH800 board; it's the only one that supports the Nocona core and DDR (1) memory. But even with equal latency, I wonder if the Xeon could catch up in Maya. Man, that review sure nipped it in the bud. Another thing I was considering was longevity, and which would perform well in XP 64-bit and Maya 64-bit. It seems AMD is in the lead with their 64-bit implementation. For one, AMD64 is really the established standard for 64-bit. Also, AMD uses a 40-bit address bus vs. Intel's 36-bit. And AMD dual cores will be compatible with Socket 940. Woohoo.

Man, if only Maya used SSE3, it would probably be a different story. That's why Max kicks with Intel: Intel chips have SSE3, Opterons don't.

Thanks for all the input, guys. If anyone has more experience with these setups, please share.

lots
10-22-2004, 03:51 AM
Man, if only Maya used SSE3, it would probably be a different story. That's why Max kicks with Intel: Intel chips have SSE3, Opterons don't.

Opterons are going to support SSE3 sometime next year, I believe (possibly with the dual-core chips?). That is what I've read. Though I could be wrong ;)

leas5040
10-22-2004, 07:10 AM
The "Hammer" core is going to be the first one to be updated with SSE3. See here (http://www.techreport.com/onearticle.x/6363)

orbitalpunk
10-22-2004, 08:06 PM
Damn, damn, when, when... hopefully in their next bump. Opteron 252? Please!!!

lots
10-22-2004, 08:28 PM
Just to be nit-picky :)

The "Hammer" core as you refer to it is not what its called. Hammer simply refers to AMD's Athlon64 and Opteron architectures (the first cores in the series being the ClawHammer and the SledgeHammer, respectivly). The 90nm Athlon64 core is called Winchester, and the 90nm Opteron core is split into 3 different versions, the 1-8 way chips (Athens), the 1-2 way chips (Troy), and the 1 way opteron (Venus)

EDIT: The Opterons are scheduled for a 90nm shrink (IE: new core) very soon. I want to say before the year is up, but we'll see..

leas5040
10-22-2004, 11:18 PM
Ah, yes, thanks for the clarification, lots. I made the reference to "Hammer" as I wasn't sure what the latest cores were being called.

orbitalpunk
10-23-2004, 07:29 AM
Hamma Time!!...... (do the crab side step)

MadMax
10-24-2004, 06:38 PM
AMD is apparently having a great transition to the 90nm process, and several 90nm A64s are already available since their release last month.

They are cool and stable and have suffered none of the problems Intel had with their die shrink.

SSE3 is part of the 90nm die shrink for Opteron, so we should see it very soon. AMD is well on track for dual cores in the first half of next year; Intel has slid their dual-core plans back to sometime in 2006, with no prospect of faster chips from them in the near future.

AMD's 64-bit vs. Intel's: no comparison.

As has been stated previously, Nocona suffers from a starved FSB: two CPUs sharing a single slow bus.

AMD uses Hypertransport links, at 1GHz, to each CPU. Each CPU has its own bank of memory, except on the cheap boards.

Much of the Intel design emulates 64-bit; it isn't complete like AMD's chip is, and speed-wise it is a dog. AMD wipes the floor with Nocona.

The AMD chips also run much cooler these days, and apparently will be even cooler, as it looks as if AMD has plans to introduce a Peltier-type technology built into the chips in the future.

If you are looking for a dual, you definitely want a dual with nForce4. Tyan has a new board, Thunder something, nForce4 SLI and tons of extras.

When dual core arrives next year, you'll be able to build a reasonably priced quad, while Intel will still be some time off in the future.

Smart money is on AMD these days. They are the ones doing all the innovating.

lots
10-24-2004, 07:22 PM
Intel has slid their dual-core plans back to sometime in 2006, with no prospect of faster chips from them in the near future.

Not entirely true... Intel's plans show the dual-core P4 appearing on desktops in late '05, granted unforeseen problems could push this chip back to '06 (which, if you look at Intel's current trends, wouldn't surprise me). The dual-core Xeon is what is appearing in '06 (and where that rumor comes from). In fact, Intel's only for-sure dual-core chip, from what I've heard, will be the new Itaniums. Who can afford those ;). It's interesting to see where AMD and Intel are putting their money for dual core, though. If you look at it, AMD is starting its dual-core kick with the workstation/server market (where it is beneficial) and Intel is starting out on the desktop (where it won't make much of a performance impact, since most desktop software is NOT multi-threaded). Interesting ;). Also, from news around the web, Intel's dual-core P4 will not have HyperThreading, probably in an attempt to reduce the heat created by the chips, since in essence the dual-core P4 will be two Prescotts slapped onto one package. Which would explain their lower clock speeds (2.8 to 3.2GHz).

In either case, 2006 looks like it's stacking up to be the year of the dual core...

MadMax
10-24-2004, 07:51 PM
Not entirely true... Intel's plans show the dual-core P4 appearing on desktops in late '05, granted unforeseen problems could push this chip back to '06 (which, if you look at Intel's current trends, wouldn't surprise me).

Um, nope. Might want to read this article. Delayed till 2006.

Intel Dual core delayed to 2006 (http://news.com.com/Intels+dual-core+Xeon+due+in+2006/2100-1006_3-5416330.html?tag=nl)

Intel had said in September that dual-core processors for desktop, laptop and server computers would arrive in 2005. However, it now appears that the only dual-core server chips coming from the Santa Clara, Calif.-based company will be a "Montecito" member of the Itanium family, not a member of the vastly more widespread x86 family that includes Xeon.
From reading that article, it looks as if ANY dual core other than Itanium based is going to be even further out.

lots
10-24-2004, 08:05 PM
That article wasn't talking about the Pentium 4... It merely suggests that you won't see a dual-core P4 from Intel until '06. Which is probably true. But not what Intel says. The guys over at Anandtech compiled a bunch of info about Intel's dual-core roadmaps here. (http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2252)

But at this point I'd say most of what you see online about Intel's dual-core stuff is just rumor. Even the articles we just quoted to each other :P. I know I won't take any of it as fact until I see it.

leas5040
10-24-2004, 08:10 PM
The AMD chips also run much cooler these days, and apparently will be even cooler, as it looks as if AMD has plans to introduce a Peltier-type technology built into the chips in the future
I thought Peltier went out with the Pentium IIIs. Any idea if it is going to be a new type of Peltier system, or the same thing rehashed?

MadMax
10-24-2004, 08:19 PM
I thought Peltier went out with the Pentium IIIs. Any idea if it is going to be a new type of Peltier system, or the same thing rehashed?
Apparently it is integrated into the core of the chip, so it isn't really anything like the old Peltier.

dotTom
10-25-2004, 04:51 AM
Jeez, this thread does the rounds again. I honestly don't think it's half as big a deal as some people think. Now that you can get PCI Express for both platforms, I think you're better off worrying about other parts of the system. Too often I see people fretting over the CPUs. Workstations, more than "gaming PCs", benefit from being balanced. What I mean is that you should be worrying about how much memory you can afford (I would recommend a minimum of 2GB), how many hard disks, what high-capacity backup medium you can afford (get tape), etc. Raw CPU performance figures beyond a certain level tell you very little; what you're really interested in is how your workstation performs day to day once you open up all the applications that constitute your pipeline.

As for the dual-core thing, stop worrying about it. It'll happen in the fullness of time. You should buy the best balanced system today that supports your workflow for the next 18-24 months. The only thing I'd suggest is that you do get x64-capable CPU(s) ("x64" is Microsoft speak for AMD64/Intel EM64T).

Lastly, don't overlook the display question. It's almost always wrong (assuming your employer isn't paying) to go for the top-end CPU. You pay a huge premium for a tiny delta; better to spend that money on either more memory or the often overlooked display (get a couple of TFTs and a dual-DVI graphics card with at least 256MB on it).

For the record I dual boot Windows Server 2003 x64 and Windows XP x86 on my dual Nocona (by day I code, by night I learn Maya) and I have no trouble with this machine - I'm throwing very large compiles and rendering at the same time on it and I find the responsiveness to be excellent. I'd expect the same experience on a similar AMD box. Just get a balanced rig and enjoy.

orbitalpunk
10-25-2004, 06:11 AM
hi dotTom,

Well, the reason CPUs are so important here is because my main concern is rendering in Maya, and CPUs make the biggest impact there. I am doing low poly counts but complicated renderings. My Quadro FX video card is fine for that; my 3 SATA drives are fine as well (one is used just for backups, fine there too). I just really wanna rip through my renders now. I looked up Tyan's site and I don't see anything about an nForce4 board. Was it at tyan.com or another global site? I know Iwill mentions it already, but I was looking at Tyan; that's what most people seem to recommend. Also, I have an AGP card and don't wanna lose that to a PCI Express-only board, but I know nForce4 would be nice. If I could find an nForce4 with an AGP slot, I'd be really happy, but none can be found.

my new question is this,
I've been pricing memory and wondered: is a DDR400 stick with a CAS latency of 2 really gonna show an improvement compared to one with a CAS latency of 2.5 when it comes to rendering? For 1GB it's 450 compared to 250. Just wanna know if it's worth it.

Thanks again for everyones reply,
I've really enjoyed this discussion.

MadMax
10-25-2004, 06:26 AM
I looked up Tyan's site and I don't see anything about an nForce4 board. Was it at tyan.com or another global site?
Did someone say nForce 4???

Tyan nForce 4 SLI board (http://xtremesystems.org/forums/showthread.php?t=44016)

lv-88
10-25-2004, 11:20 AM
Yes, those nForce4 boards should be really good. I'm upgrading as soon as they come out ;)

lots
10-25-2004, 07:53 PM
Why not get an nForce3 250Gb if you want AGP? The main difference (aside from improved features on the NF4, e.g., advances in the onboard firewall) is the ability of the nForce4 to use PCIe. You won't find an nForce4 board with AGP unless some company does some fancy wiring. Which I doubt ;) but hey, it could happen. And due to the nature of the Athlon64/Opteron's onboard memory controller, there is hardly any difference in performance between the different chipsets that support these CPUs. At least in single-CPU terms.

orbitalpunk
10-25-2004, 09:32 PM
hi lots,

What do you know about performance differences between CAS 2 and CAS 2.5 memory sticks for rendering? Any improvement?

Also, what about dual-CPU setups: any speed improvements with an nForce4 compared to an nForce3?

thanks again

novadude
10-25-2004, 09:38 PM
Remember that all Socket 940 boards require registered RAM.

lots
10-26-2004, 03:41 AM
I doubt you'll find much difference in terms of CPU performance when it comes to nForce4 vs. nForce3. However, in a dual-CPU setup the memory performance can vary depending on how the manufacturer implemented the chipset.

If both boards (nForce3 and nForce4) are configured in a NUMA design, you won't see much (maybe just a little) of a performance difference, probably nothing real-world either. The reason for such small differences between Athlon64/Opteron motherboard chipsets is that, in the past, the northbridge (where the memory controller resides in traditional designs) was a pretty determining factor in performance. This is why nForce2 was faster than the VIA chipsets of the time. But with the onboard memory controller found on the Athlon64/Opteron CPUs themselves, this very big performance-changing aspect of a chipset was eliminated, and now the only real difference is feature sets. I know benchmarks are usually not that helpful when it comes to real-world apps, but if you look at the trends between the current VIA and nForce chipsets, you'll notice very little performance difference between them.

As for memory timings, I've never really done any personal testing of my own, but my guess would be you wouldn't notice much. You could always try it out and see how it does, and then you'll know :P

elvis
10-26-2004, 05:32 AM
Did someone say nForce 4???

Tyan nForce 4 SLI board (http://xtremesystems.org/forums/showthread.php?t=44016)
Iwill still have my vote for superior build quality, lower fault rates, and overall better performance.

The DK8ES is the Nforce4 SLI board of choice for me:

http://www.iwill.com.tw/sppage/2004_10/2004_10.htm

We're currently selling DK8N workstations, and people are loving them. We'll be introducing the DK8ES into our product line shortly. These are great boards under Windows and Linux, and some of our higher-end users have mentioned that these boards are speeding up their workflow several times over older Xeon setups, thanks to their bandwidth and to being able to do things like still model at full speed while rendering in the background on a single processor. Onboard memory controllers and NUMA mean that doing so doesn't kill your whole system like it did on Intel boxes.

Personally speaking I'm glad there's finally a decent alternative on the market to Intel's "clockspeed is king" offerings. For a long time I was considering migrating to Apple which would have been more costly (all my software would have to be relicensed). Now thankfully there's a decent platform that I can use all my x86 stuff on without being forced to put up with the sorry excuse for a system design that is the Xeon.

Opterons are currently outselling Xeons for high-end workstation and server users. I wonder if this means we'll see the big players like Dell start to consider "the dark side of the force"? :)

lots
10-26-2004, 06:05 AM
Doubt it. Most of Dell's customers probably care more about whether it works and does what they want than whether it does the MOST. Plus Dell has a pretty sweet deal with Intel... or so I hear. Just a thought...

PureFire
10-26-2004, 08:11 AM
If you do end up buying a dual Xeon system, make sure you don't get the standard CPU fans. I have a dual 2.8 Xeon system, and believe me, it's an ear-bleeder; the noise is atrocious. I'm currently looking at different fans or a water-cooled option.

maninflash
10-26-2004, 08:48 AM
Just to add my two cents: if you're going to do heavy rendering with MENTALRAY, Hyper-Threading will take a huge cut out of your rendering time. I tested the scene from zoorender on a dual Opteron 248 vs. a dual Xeon 3.2 with HT on, and the Xeons finished almost 8 seconds faster on a single-frame render.

See the detail results here:
http://www.highend3d.com/boards/showflat.php?Cat=&Board=hardware&Number=182199&page=0&view=collapsed&sb=5&o=&fpart=


This is due to the fact that MentalRay is a multi-threaded renderer and can use 4 CPUs (2 physical, 2 virtual via HT).

I sent my result to www.zoorender.com (http://www.zoorender.com), but oddly, they never put it in their chart. Maybe they WANT the Opterons to have the lead!

Thalaxis
10-26-2004, 01:52 PM
Opterons are currently outselling Xeons for high-end workstation and server users. I wonder if this means we'll see the big players like Dell start to consider "the dark side of the force"? :)
Logistics will probably prevent it. Dell still sells more computers than AMD can make processors in a quarter, so it wouldn't be practical, no matter what their deal is with Intel, and technical merit can't change that.

But AMD isn't stupid; they're not sitting on their thumbs waiting for Intel to get it together. They're busily working on getting more fab capacity online (a monster of a fab), and I wouldn't be surprised if they hold some of their product plans in reserve to counter Intel while they work on next-generation stuff, like the Geode, the K9, and bringing dual-core processors into the mainstream rather than just the high end.

Hopefully, they're also working on something to match up with Intel's dual-core mobile product plans, which it looks like Intel will be launching before dual-core Xeons are ready to go. A dual-core Athlon64-based laptop would be even sweeter than a dual-core Pentium M laptop :)

MadMax
10-26-2004, 03:22 PM
Just to add my two cents: if you're going to do heavy rendering with MENTALRAY, Hyper-Threading will take a huge cut out of your rendering time. I tested the scene from zoorender on a dual Opteron 248 vs. a dual Xeon 3.2 with HT on, and the Xeons finished almost 8 seconds faster on a single-frame render.

See the detail results here:
http://www.highend3d.com/boards/showflat.php?Cat=&Board=hardware&Number=182199&page=0&view=collapsed&sb=5&o=&fpart=


This is due to the fact that MentalRay is a multi-threaded renderer and can use 4 CPUs (2 physical, 2 virtual via HT).

I sent my result to www.zoorender.com (http://www.zoorender.com), but oddly, they never put it in their chart. Maybe they WANT the Opterons to have the lead!
Zoorender has an Opteron score that is only 2 seconds different from the one you posted for your dual Xeon with HT running.

Sorry.


Oh yeah, the Tyan boards there had a slightly limited Hypertransport bus (600MHz) due to a silicon problem.

The newer boards with faster HT buses (800MHz-1GHz) will shave a number of seconds off that score as well, so your Xeon comparison still isn't top dog. I could post results from a much faster Iwill board if it will make you feel better about your conspiracy theory...

maninflash
10-26-2004, 04:44 PM
That's right. But Xeons are still faster, aren't they? ;) And besides, the time you mentioned being only 2 seconds slower is from a dual Opteron 250, not a 248. You shouldn't compare a 250 with a Xeon 3.2GHz because they are not on the same level; a 250 should be compared to a Xeon 3.4GHz.

I'm not meaning to start a pointless "mine is better than yours" discussion, just to clarify the issue. And keep in mind, when we talk about a render node, even a mere 2-second cut on every frame is going to make a big difference in the overall production cost (for instance, if you're rendering a 15-min video, which is 27000 frames on NTSC, the difference would be 15 hours!).
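That 15-hour figure is straightforward to verify. A minimal sketch of the arithmetic:

```python
# Per-frame savings compound over a full render job.
frames = 15 * 60 * 30            # 15 minutes of NTSC at ~30 fps = 27000 frames
seconds_saved_per_frame = 2
total_hours = frames * seconds_saved_per_frame / 3600
print(f"{total_hours:.0f} hours saved")  # 15 hours
```

The same formula scales to the larger gaps quoted later in the thread: a 14-second-per-frame difference over the same 27000 frames comes to roughly 105 hours.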

But I agree that Opterons are better CPUs than Xeons overall. Comparing the two specifically for rendering with MentalRay, though, the fact that it is a multi-threaded renderer makes Xeons a better choice.

Peace out :)

-Michael

MadMax
10-26-2004, 05:23 PM
That's right. But Xeons are still faster, aren't they? ;) And besides, the time you mentioned being only 2 seconds slower is from a dual Opteron 250, not a 248. You shouldn't compare a 250 with a Xeon 3.2GHz because they are not on the same level; a 250 should be compared to a Xeon 3.4GHz.

Somewhat of a nit-pick there, but since 252s are supposed to ship soon, I think I'll let that stand.



I'm not meaning to start a pointless "mine is better than yours" discussion, just to clarify the issue. And keep in mind, when we talk about a render node, even a mere 2-second cut on every frame is going to make a big difference in the overall production cost (for instance, if you're rendering a 15-min video, which is 27000 frames on NTSC, the difference would be 15 hours!).

But I agree that Opterons are better CPUs than Xeons overall. Comparing the two specifically for rendering with MentalRay, though, the fact that it is a multi-threaded renderer makes Xeons a better choice.

First off, are you sure the other scores there are NOT using HT? Look at the various processors listed there.

I see at least one 3.6 Xeon setup there with a score of 56.11, which is substantially slower than the Opteron 248 score. Of course, there are some slower Xeons that have a faster score than the 56.11.

Builder skill in setting up and tweaking a system for optimum performance obviously makes a difference. If I use the results posted, the top Opteron speed grade is 13 seconds faster than the top-MHz-rated Intel at 3.6GHz. You quote 2 seconds as a 15-hour difference, but the posted results show the 3.6 dual Xeons listed would take almost 105 hours longer to render.

Your single result is substantially different from the AVERAGE score listed at Zoorender. Normally that would tend to make people skeptical. I'm not implying that you are making anything up, but it does raise a couple of questions.

Likewise, I have no idea of the actual building skill of the people listed at Zoorender who built ANY of the systems. Maybe the guy with the top 248 score isn't as skilled a builder as you? Maybe he doesn't know how to tweak for optimum performance?

There are a lot of variables and one result doesn't prove Xeon is faster.

maninflash
10-26-2004, 05:37 PM
Of course; how you build the rig makes all the difference.

About other Xeon scores not being as fast as mine on zoorender, there is a simple explanation. If you look at the mentalray page at zoorender, it says "Just open the scene with Maya 5.0 and hit the render button." The DEFAULT setting for mentalray within Maya 6.0 is "Render using 2 threads", so when you download the scene, load it in Maya, and hit the render button, even if you have Hyper-Threading on, mentalray will only use 2 CPUs, and the other 2 virtual CPUs created by HT will sit idle throughout the rendering job.

The rendering instruction on zoorender is obviously wrong. The correct way to do this is to set the rendering option within mentalray to use "4 threads" and THEN hit the render button. You can confirm this by watching the CPU activity in Windows Task Manager while mentalray uses 4 threads vs. 2 threads.
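The 2-thread vs. 4-thread effect is easy to reproduce outside Maya. A minimal sketch (plain Python, nothing mentalray-specific) that saturates a chosen number of worker processes with CPU-bound busywork, so you can watch logical-CPU utilization in Task Manager exactly as described above:

```python
# Spin N worker processes on CPU-bound busywork. On a 2-physical /
# 4-logical (HT) box, running with workers=2 leaves half the logical
# CPUs idle in Task Manager, mirroring the default 2-thread mentalray
# setting; workers=4 lights up all four.
import multiprocessing
import time

def busywork(_):
    end = time.time() + 30        # spin for 30 seconds
    x = 0
    while time.time() < end:
        x += 1
    return x

if __name__ == "__main__":
    workers = 4                   # try 2, then 4, and compare the CPU graphs
    with multiprocessing.Pool(workers) as pool:
        pool.map(busywork, range(workers))
```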

orbitalpunk
10-28-2004, 12:59 AM
question,

Can PCI cards fit into a PCI-X slot? I'm a bit confused. The only reason I ask is that a lot of the nicer Opteron boards have like one 32-bit PCI slot and the rest are PCI-X. I have 2 PCI cards I absolutely need, pro audio stuff; you just can't trade that kind of stuff in. I hear conflicting reports about this and how the notches work. I mean, the slots do look different. Someone at Fry's said no, no PCI card will work in a PCI-X slot; then I emailed Belkin about their USB 2.0 cards and they said they will all work in a PCI-X slot. The reason for a USB 2.0 card is that the Tyan Thunder only has USB 1.1, and I wanted to add a USB 2.0 card.

any info would be appreciated
Thanks again guys

Tarrbot
10-28-2004, 01:31 AM
maninflesh, I think you just proved the point about the fallibility of "community benchmarks": you absolutely have to follow the instructions or it throws everything off-kilter.

Ever wonder if that's not why they didn't include your score? That you didn't follow the actual instructions?

Your one result posted would invalidate the entire data set. Just because you found the "optimized" solution doesn't mean you should render using that solution and then post it.

maninflash
10-28-2004, 07:13 AM
Tarrbot, well, it happened to be a simple mail server error. The first mail I sent them bounced back to me for some unknown reason; I resent my result yesterday, and it's up there if you'd like to see.

"Your one result posted would invalidate the entire data set. Just because you found the "optimized" solution doesn't mean you should render using that solution and then post it."

I don't agree. The MAIN POINT here is to compare Opterons with Xeons FOR rendering with MENTALRAY, to find out which one renders faster. We would be fooling ourselves by not optimizing mentalray to take full advantage of the Xeons' Hyper-Threading technology and then saying "Oh look, Opterons render faster!"

Because, again, THE VERY strength of mentalray is its multi-threaded rendering capability!

Tarrbot
10-28-2004, 12:32 PM
I understand your reasoning. But from a standpoint of validity, your optimization undermines the entire data set of other users who posted their scores.

I hope you explained how you got this score so that whoever runs that site can fix the problem and bring the rest of the data from other users into compliance.

I agree that the reason for the render times being posted is for determining which is quicker doing what.

What I'm getting at is that the rest of the scores need to be redone.

Perhaps I'm too militant on methodology. I blame my background in torture testing hardware and software.

Thalaxis
10-28-2004, 01:55 PM
No, the flaw is not with maninflesh's submission, it is with the instruction. It's clearly flawed -- if you use a quad processor machine, the result will be hugely different from reality if you stick to the letter of the instruction.

In fact, if the test instructions are not updated, then a dual-core Opteron will come out exactly as fast as a single-core Opteron at the same clock speed when installed in a dual configuration, which would be a pretty wildly inaccurate result.

Tarrbot
10-28-2004, 02:24 PM
The flaw is with both actually. The instructions are wrong, clearly.

However, submission to a multi-user data set should always follow the directions, however wrong they may be.

Do I think this was wrong of maninflesh to submit via this method?

No.

Let me elaborate since this seems to be two-faced on my part.

It is clear the instructions are inaccurate. They need to be updated. But to get a "clear picture" of the data set, ALL of the results would have to be rerun.

maninflesh is absolutely correct in pointing this out. The data set is now "muddied" and isn't a *true* representation of processor power.

But then again, I said I was militant. I don't know why I'm even bickering over this. It doesn't really matter since all of the systems aren't standardized anyway.

As I mentioned, I spent way too much time with standards and methodology. In my mind, right now it's all inaccurate. But honestly, who cares if it's inaccurate in my mind? :p

Thalaxis
10-28-2004, 02:51 PM
I disagree. Maninflesh's submission gives you a different data point, by providing one using HT, while others have provided data that does not, which is useful information. It invalidates none of the previous data, because those results are still correct; if he had turned off HT, his result would have matched.

maninflash
10-28-2004, 04:25 PM
The instruction on zoorender has been updated, hopefully people will test their machines with the correct setting and would post them for all to see. :)

Thalaxis
10-28-2004, 05:24 PM
Quick response :)

I just realized that I've been misspelling your username. Oops! Sorry :eek:

leas5040
10-28-2004, 06:09 PM
question,

Can PCI cards fit into a PCI-X slot? I'm a bit confused. The only reason I ask is that a lot of the nicer Opteron boards have like one 32-bit PCI slot and the rest are PCI-X. I have 2 PCI cards I absolutely need, pro audio stuff; you just can't trade that kind of stuff in. I hear conflicting reports about this and how the notches work. I mean, the slots do look different. Someone at Fry's said no, no PCI card will work in a PCI-X slot; then I emailed Belkin about their USB 2.0 cards and they said they will all work in a PCI-X slot. The reason for a USB 2.0 card is that the Tyan Thunder only has USB 1.1, and I wanted to add a USB 2.0 card.

any info would be appreciated
Thanks again guys
To answer your question: yes, it is backward compatible. Here's a quote from the designers of PCI-X:

PCI Express architecture is a state-of-the-art serial interconnect technology that keeps pace with recent advances in processor and memory subsystems. From its initial release at 0.8V, 2.5GHz, the PCI Express technology roadmap will continue to evolve, while maintaining backward compatibility, well into the next decade with enhancements to its protocol, signaling, electromechanical and other specifications.

orbitalpunk
10-28-2004, 06:19 PM
Uruk-hai 1: "What is it?"

Uruk-hai 2: "Maninflesh"


Here's an interesting tidbit about rendering speeds: my Sony laptop's 1.7 Pentium M beats my desktop Pentium 4 2.6 with HT by 5 seconds using the Highend3D Maya scene test.

By the way, does anyone know for sure whether PCI cards work in a PCI-X slot?

Oh, another thing, about how this thread started: these are the things I finally settled on.

First off, I snagged 2 Opteron 248s brand new for $860 on eBay.
At the moment I can afford the MSI Master2-FAR, which beats out the Tyan Tiger board, but I will be getting the Iwill DK8X. That board looks awesome. Dunno why not many people reviewed it; they would always talk about the DK8N, which is not as nice as the DK8X. Nice, nice.
I also got a Lian Li V1200 case.

Thalaxis
10-28-2004, 06:27 PM
Sweet deal on the Opterons :)

The real winner is the DK8ES, based on the nForce4, and complete with SLI support :D

I suspect that the N model gets more attention than the X model because the N model uses an nForce3, which is a more feature rich chip(set) than the AMD chipset that the X model uses.

orbitalpunk
10-29-2004, 12:13 AM
Seems they're both very feature-rich.

http://www.iwillusa.com/products/ProductDetail.asp?vID=194&CID=92

http://www.iwillusa.com/products/ProductDetail.asp?vID=182&CID=92

I just think the DK8X is great 'cause it has 2 32-bit PCI slots. I just think it's so dumb that all these new motherboards are getting filled with PCI-X slots when there are very few PCI-X cards, let alone people who own them. Do they expect us to just toss out all our 32-bit cards and buy a new set of PCI-X ones, which by the way aren't even for sale? I mean, at least do half and half or something: 3 PCI and 2 PCI-X. The DK8X is the only one with more than 2 32-bit PCI slots plus some PCI-X. And the DK8X has 5 audio connectors including S/PDIF, 1 extra serial port, and an IEEE 1394 port; the DK8N doesn't. Not like I care about the IEEE 1394 or sound, I just want the extra PCI slot... anyways... yeah...

novadude
10-29-2004, 03:06 AM
PCI cards will work fine in PCI-X slots

Thalaxis
10-29-2004, 04:54 AM
seems there both very feature rich.
True. It's nearly impossible to find an x86 motherboard that isn't these days :)

I think their reasoning is probably that there is so much stuff already on the board that most people won't need the PCI slots.

Not that I agree with that logic, but then it's just a guess. It could just be that PCIe costs less to manufacture or something like that, too.

lots
10-29-2004, 05:48 AM
To answer your question: yes, it is backward compatible. Here's a quote from the designers of PCI-X:

PCI Express architecture is a state-of-the-art serial interconnect technology that keeps pace with recent advances in processor and memory subsystems. From its initial release at 0.8V, 2.5GHz, the PCI Express technology roadmap will continue to evolve, while maintaining backward compatibility, well into the next decade with enhancements to its protocol, signaling, electromechanical and other specifications.
Take a look at that for a second :)

PCI-X is NOT equal to PCI Express

As for PCI-X vs. normal 32-bit PCI slots: it's fairly common in this market segment to see lots of PCI-X. Why? Because these boards are aimed at high-end workstations (video editing) and low- to mid-range servers (with PCI-X SCSI Ultra320 RAID cards and other high-end drive systems). That's why you see PCI-X here instead of normal PCI: PCI-X offers much greater bandwidth. This means more data can flow off your RAID array, or you can install an InfiniBand controller, etc. Plus, almost all the high-end RAID controllers are PCI-X based. Also, in most cases the need for sound on these systems is fairly low (since most of these boards will get sold into the server market); it's also why they come with onboard video or sound.
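The bandwidth gap lots describes is large. A quick comparison of theoretical bus peaks (standard published figures, sketched here for illustration):

```python
# Theoretical peak bandwidth: bus width (bytes) * clock (MHz) ~= MB/s.
# PCI and PCI-X are parallel shared buses, so devices split this peak.
buses = [
    ("PCI 32-bit/33MHz",    4, 33),   # the common desktop slot
    ("PCI-X 64-bit/100MHz", 8, 100),
    ("PCI-X 64-bit/133MHz", 8, 133),
]
for name, width_bytes, clock_mhz in buses:
    print(f"{name}: {width_bytes * clock_mhz} MB/s peak")
# ~132 MB/s vs. 800 MB/s vs. ~1064 MB/s: why an Ultra320 SCSI RAID
# card (320 MB/s per channel) wants PCI-X rather than plain PCI.
```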

In most cases these boards just aren't designed for people like us. Sure, the NF4 SLI boards coming up are more tailored to us, but the number of PCI slots is still limited to one or two. Guess it would be fair to invest in some USB sound cards from Creative ;)

As for the DK8X (or any board based on the AMD chipset, from Tyan, etc.): watch out for ATI cards. The AGP tunnel, provided by an AMD chip, doesn't work with ATI in many cases. The DK8N also suffers from this, because for some unknown reason Iwill chose to go with the AMD chip for AGP as opposed to the one provided by the nForce3 chipset...

EDIT:


Not that I agree with that logic, but then it's just a guess. It could just be that PCIe costs less to manufacture or something like that, too.
PCIe is a serial interface. Thus it's cheaper to implement, requires fewer traces on the board, etc. That makes it easier to route, and all that fun stuff. So I could see it being cheaper...

Thalaxis
10-29-2004, 01:58 PM
PCIe is a serial interface. Thus it's cheaper to implement, requires fewer traces on the board, etc. That makes it easier to route, and all that fun stuff. So I could see it being cheaper...
That's what I was thinking. Most interconnects these days are being designed with narrower pipes now, because in addition to reducing the trace count, they also reduce the amount of interference they inflict upon each other. That allows them to drive up the clock speeds to make up for the narrower pipes.

That was the Rambus method also... but it's a shame that they killed it with litigation before it took off. It would have been far superior to DDR2, I think. It would also have made dual Opteron configs less costly, since the high-performance ones need two two-channel DDR buses, which is something like 600 wires on the board JUST for memory.

RogueLion
10-30-2004, 04:43 AM
I don't know if someone else already told you, but the Supermicro X6DAE-G uses DDR memory while the X6DAE-G2 uses DDR2 memory. That's the only difference between the two motherboards other than the price. If you're still thinking about a Xeon workstation, this is another option besides an Iwill Xeon board.

elvis
10-30-2004, 07:37 AM
Here's an interesting tidbit about rendering speeds: my Sony laptop's 1.7 Pentium M beats my desktop Pentium 4 2.6 with HT by 5 seconds using the Highend3D Maya scene test.
What you're seeing there is the Centrino core, at almost 1GHz "slower", with its superior floating-point design and more efficient internal bandwidth, beating the piss-poor Pentium 4.

Intel know how much their own P4 design sucks (this encompasses all of their P4-based products, including the current Xeon). They've known for years. The Pentium 3 and Pentium 4 are worlds apart in design. The Centrino/Dothan "Pentium M" is closer in design to a P3 than a P4. Intel stuck with the "Pentium" name for the simple reason that having a CPU in the market under the same marketing name for 5+ years gives them fake credibility. People don't like the "Opteron" because it's new, and people being people are always scared of new and uncharted territory. The sad fact is IT moves so fast that if you're one of these people, you'll be left behind.

The P4 design will not last forever. Intel have even mentioned that they will stop at 3.8GHz and not hit 4.0GHz (funny how I don't believe them), instead concentrating on migrating all CPUs to the more efficient Dothan core. Ironically, the Dothan has more in common with the Pentium 3 and AthlonXP than with any P4 on the market today. Intel of course will market it as something all new and all powerful, when in effect it's still last generation's technology.

It's unfortunate that most people can't see beyond marketing rubbish and understand what they're really buying. But at the end of the day AMD and IBM (with their PowerPC chips inside Apple's gear) are both proving that smart design outperforms raw MHz any day. Intel know it, but don't want to admit to it. And why should they if people are still stupid enough to spend billions of dollars a year on inferior technology thanks to crafty marketing?

At any rate, last quarter AMD outsold Intel for the first time ever. People are slowly starting to sit up and take notice, and I for one will be very interested to see this current quarter's financials and market share for both companies. Perhaps Intel will realise they need to stop taking their customers for granted and start improving their technology, rather than just re-launching the same gear year after year with a slight overclock.

Competition is good, mmkay?

orbitalpunk
10-31-2004, 01:04 AM
Well, I just did an early render test using Highend3D.com's render test file and clocked in at 33 seconds: 2 seconds slower than the dual 250 and 2 seconds faster than the dual 246. At least my system fits in with their results; some other scores there are way off.

Just waiting for my Lian Li V1200 case to come in. I was able to fit my Opterons on the Master2-FAR in my ATX case, so I got an early sample; the Master2-FAR is the only ATX dual Opteron board. Picked up an MX1000 laser mouse yesterday, liking it so far. Now just waiting for the Logitech diNovo cordless (not the Bluetooth version) to come out in a month. I had neglected my case and mouse and keyboard stuff for so long, and after buying a G5 (before all this AMD/Xeon stuff happened) and returning it, I learned one thing from Apple: I needed to respect my PC a bit more. Yeah, I went back to PC because of the sheer power PCs produce, but on the outside mine was looking a little janky. Always has, and most of my friends' PCs look janky compared to a Mac. But now, hehe, now that companies like Lian Li and Logitech are putting out great stuff, the quality gap between PCs and Macs is much closer than ever before. I mean, man, have you ever opened a new Mac before? Man oh man, when I opened that G5, it was like opening a computer from heaven. Every single little thing was wrapped, clear-film covered, capped. The keyboard alone was first shrink-wrapped, and then the entire bottom had a clear film you could peel off. Every single cable came with its own end cover. Just amazing. You could tell Apple respects their computers. I can't see Bill Gates doing that; he just wants it to move faster, and **** the case. But after some render tests, the G5 was slow as a cow, and I sold the thing. Hehe... bye, heavenly computer :D You're too slow... hehe...

This is Jerk Off and I approve this message.........

elvis
10-31-2004, 01:05 AM
Vincent Twice.

Vincent Twice.

:)

DevilHacker
10-31-2004, 01:19 AM
Hey all, it seems to me from the benchmark tests that AMD and Intel each work better for Maya or 3DS respectively. Now my question is: which works best for XSI? Anyone know? Also, does anyone know if XSI takes advantage of dual processors? :argh:

Thalaxis
10-31-2004, 07:00 AM
What you're seeing there is the Centrino core, at almost 1GHz "slower", with its superior floating-point design and more efficient internal bandwidth, beating the piss-poor Pentium 4.

The P4 has better floating point performance than the P-m. The P-m gives up quite a bit of ground on the latency front to save power. It kicks ass in integer arithmetic, though.

That does, of course, lead to the question of how much better could it be if they removed the power constraints?


Intel know how much their own P4 design sucks (this encompasses all of their P4-based products, including the current Xeon).

Bogus. The Northwood was an excellent processor. It still is... which only makes the Prescott look still more disappointing.


They've known for years. The Pentium 3 and Pentium 4 are worlds apart in design. The Centrino/Dothan "Pentium M" is closer in design to a P3 than a P4.

It's actually quite a bit farther from the P3 than you think it is. It borrows quite a bit from the P4's design, also.


Intel stuck with the "Pentium" name for the simple reason that having a CPU in the market under the same marketing name for 5+ years gives them fake credibility.

It's a brand. There's nothing fake about it. If it were fake credibility, AMD would have had a lot less trouble breaking into the Xeon market, yet they had a huge hill to climb to get to where they are simply because nobody ever got fired for buying Intel. That last part hasn't changed, and AMD hasn't reached that level of credibility yet (but they are definitely headed in the right direction).


The P4 design will not last forever. Intel have even mentioned that they will stop at 3.8GHz and not hit 4.0GHz (funny how I don't believe them), instead concentrating on migrating all CPUs to the more efficient Dothan core. Ironically, the Dothan has more in common with the Pentium 3 and AthlonXP than with any P4 on the market today. Intel of course will market it as something all new and all powerful, when in effect it's still last generation's technology.

You need to read more. The P-m is definitely not last-generation technology; it's very modern, borrows quite a bit from the P4, and adds quite a bit of new stuff to the mix.

I don't expect the Netburst architecture to last much longer either, though. My guess is 2007, because of the length of modern CPU design cycles.


It's unfortunate that most people can't see beyond marketing rubbish and understand what they're really buying. But at the end of the day AMD and IBM (with their PowerPC chips inside Apple's gear) are both proving that smart design outperforms raw MHz any day.

Intel's proving it also -- the Pentium M and the Itanium are evidence of that. And AMD is leading the pack in many ways; the primary reason that AMD is able to lead the desktop in performance is their memory subsystem. It has far lower latency than the G5's and the P4's. The P4 makes up for it partially with large caches, which is a particular strength of Intel's; in addition to having a ridiculous amount of capacity, they have an SRAM density and performance that AMD and IBM can't match.

And they're planning on doing something similar to AMD's memory subsystem in the future, so they obviously agree that it's a good approach. :)


Intel know it, but don't want to admit to it. And why should they if people are still stupid enough to spend billions of dollars a year on inferior technology thanks to crafty marketing?

I'm sure that they'll admit it when their new products are ready to roll. When they can launch an Itanium that can outperform a Xeon in emulation, they'll be set :)

Their own chipset division is probably a significant roadblock.


At any rate, last quarter AMD outsold Intel for the first time ever.

1) It wasn't the first time
2) It wasn't universal -- AMD can't fab nearly as many products as Intel sold.
3) It's far less significant than the fact that there are now almost as many OEMs for Opteron/Athlon64 as there are for Itanium. That's been AMD's biggest challenge to overcome for quite a while, so obviously they're doing more right than just producing an excellent processor.

They accepted major delays to their K8 launches, partly to ensure that the product would be rock solid at launch, partly to get more OEMs on board, and partly to ensure that there would be OS support available at launch. On top of all that, they got Microsoft on board. That's why the Athlon64 launch was so much more successful than the Athlon launch. It's not so much that they learned from their mistakes as that they made it very clear the Athlon wasn't just a lucky fluke.


People are slowly starting to sit up and take notice, and I for one will be very interested to see this current quarter's financials and market share for both companies. Perhaps Intel will realise they need to stop taking their customers for granted and start improving their technology, rather than just re-launching the same gear year after year with a slight overclock.

AMD is at around 15% overall. They will probably gain a bit of ground in the near term because the transition to 90nm will help them improve their capacity, but this is all just a warm up. The real competition will start when they get their new fab up and running, and have enough capacity to actually affect Intel's bottom line. What we're seeing now is just the warm up.

I imagine that Intel isn't too pleased about the fact that AMD basically lengthened the life of x86 by another decade or two ;)

elvis
10-31-2004, 07:37 AM
The P4 has better floating point performance than the P-m. The P-m gives up quite a bit of ground on the latency front to save power. It kicks ass in integer arithmetic, though.

Whoa... back up there, chief...

The P4 is one of the worst FPU performers per clock cycle on the market. Intel have been pimping their SSE2 technology for years to try and counter the fact that the P4 really does suck that bad in generic float instructions.

There's a very good reason why the majority of production renderers (Mental Ray and PRMan being the two most notable) run far better on AthlonXP and Opteron hardware. It's also the reason why folks like ILM use AthlonXP-Ms in their renderfarms rather than Intel gear. Renderers that are publicly known for their inclusion of SSE2 optimisations (Cinema4D, 3DSMax scanline, and Lightwave are three I know for a fact are optimised around SSE2 and the P4 core) do well on the P4 for this very reason.

Most of your post above seems to be arse about. I'd suggest you head over to arstechnica and read through some of the CPU praxis and blackpapers before making further posts about CPUs.

I agree the Northwood is the best P4 CPU on the market. But being the "best of a bad bunch" is nothing to brag about.

As I've mentioned a dozen times before, I don't favour one company over another. If Intel does pull their collective thumbs out of their arses and design a half decent CPU (which the Dothan is beginning to look like) then I'll buy their gear again. Currently the Opteron is my x86 CPU of choice, and the PowerPC my RISC CPU of choice. Itaniums are very nice, but too expensive for my liking. And the P4/Xeon core... well... I don't need to cover that again. :)

imashination
10-31-2004, 08:37 AM
Whoah... back up there cheif...

The P4 is one of the worst FPU performers per clock cycle on the market.

But this is completely irrelevant. If a chip gets its speed through engineering or clock cycles, who really cares? By that same logic, a G4 has more FPU performance per clock cycle.

Thalaxis
10-31-2004, 03:01 PM
The P4 is one of the worst FPU performers per clock cycle on the market. Intel have been pimping their SSE2 technology for years to try and counter the fact that the P4 really does suck that bad in generic float instructions.

It actually has better per-clock FP performance than the PIII.

And removing the x87 is a good thing -- AMD's doing the same thing with AMD64. They went a bit farther in that direction, which IMO is a good thing.


There's a very good reason why the majority of production renderers (Mental Ray and PRMan being the two most notable) run far better on AthlonXP and Opteron hardware. It's also the reason why folks like ILM use AthlonXP-Ms in their renderfarms rather than Intel gear. Renderers that are publicly known for their inclusion of SSE2 optimisations (Cinema4D, 3DSMax scanline, and Lightwave are three I know for a fact are optimised around SSE2 and the P4 core) do well on the P4 for this very reason.

Actually, the P4 platform is currently showing better performance on Mental Ray than even the Opteron. Even though I prefer the Opteron, that fact alone makes your conclusion bogus.


Most of your post above seems to be arse about. I'd suggest you head over to arstechnica and read through some of the CPU praxis and blackpapers before making further posts about CPUs.

I've obviously read a lot more than you have. You ought to try double-checking ArsTech's articles sometime; I like those guys, but accuracy hasn't been their strong suit.


I agree the Northwood is the best P4 CPU on the market. But being the "best of a bad bunch" is nothing to brag about.

It outperformed the Athlon. It scaled very well with clock speed and memory performance. It showed a non-trivial gain with HyperThreading. By any rational measure, it was a very good product.


As I've mentioned a dozen times before, I don't favour one company over another.

After reading your last two posts, in which you have replaced fact with opinion quite a bit, it's quite obvious that you do favor one company over another.

The irony of arguing against you in Intel's favor is that I am an AMD fan, but I won't make up facts or cherry-pick someone else's errors in order to support that preference.


If Intel does pull their collective thumbs out of their arses and design a half decent CPU (which the Dothan is beginning to look like) then I'll buy their gear again. Currently the Opteron is my x86 CPU of choice, and the PowerPC my RISC CPU of choice. Itaniums are very nice, but too expensive for my liking. And the P4/Xeon core... well... I don't need to cover that again. :)
I share your preference of the K8 over the P4. The Dothan is the dominant mobile processor these days, partly because it's an excellent product (IMO Intel's best x86 product to date), and partly because no one else had the resources to make a processor optimized for that market.

However, the fact that you prefer the competitor's chip is not a good reason to invent flaws that the P4 doesn't have... or rather, that it didn't have before the Prescott.

maxrelics
10-31-2004, 03:49 PM
If it were me, I'd go for the AMD system because it uses less expensive memory than the Intel does, and because I've heard it's better at 64-bit stuff. Honestly, it really doesn't matter what you get. Just get what you can afford and be done with it. You're going to get marginally different results and you'll be just as happy with either system.

imashination
10-31-2004, 04:28 PM
If it were me, I'd go for the AMD system because it uses less expensive memory than the Intel does

What?

AMD: 400MHz DDR
Intel: 400MHz DDR

lots
10-31-2004, 06:45 PM
Guess that depends on which Intel platform he's thinking of :)

On the new 9xx chipsets, sure, DDR2 will probably cost a bit more, but those are single-CPU Pentium 4 chipsets, NOT Xeon :). On most Xeon rigs, you're buying the same RAM as you are on the Opteron...

orbitalpunk
11-01-2004, 02:07 AM
What what what?

Isn't registered memory more expensive than non-registered memory? Opterons require registered memory only; Xeons don't. So AMD needs more expensive memory than the Xeons do.

Uh... right?

By the way, what's a good temp to keep the Opterons at? I have the MSI Master2-FAR board and the fans go full throttle; I feel like I'm in a printing press. The BIOS has a fan control setting that will slow the fans down quite a bit, and you can set them to kick into high gear, but not until the CPU hits 60C. Is 60C okay to wait for the fans to speed up? It doesn't break 59C until after 3 minutes of high load, though with one of the fans the temp sits at 55C even at idle. The Master2-FAR comes with its own fans you have to use, and one of them is not as good as the other.

But back to my point: what are some acceptable CPU temps? I mean healthy ones, something that won't shorten the life of the CPU at all.

thanks again guys

MadMax
11-01-2004, 03:41 AM
What what what?

Isn't registered memory more expensive than non-registered memory? Opterons require registered memory only; Xeons don't. So AMD needs more expensive memory than the Xeons do.

Uh... right?

Better go look up Xeon motherboards. Registered memory, just like the Opteron.

orbitalpunk
11-01-2004, 04:30 AM
You might wanna look up Xeon motherboards again as well...

Iwill DH800... unregistered memory only

http://www.iwillusa.com/products/ProductDetail.asp?vID=186

lots
11-01-2004, 05:24 AM
Take a look at that link you just put up.


Chipset
Intel i875P MCH
Intel 6300ESB (Hance Rapids) ICH
Winbond W83627THF Super I/O
See that? Intel i875P MCH? That is a Pentium 4 chipset, NOT a Xeon chipset; at least, the original design of the 875P was not intended for Xeon. IWill did some fancy footwork to get that chipset working with a Xeon socket. So my statement stands: Xeon is designed for ECC registered RAM.

Registered RAM is more expensive, but on Xeon mobos you'll find that they use registered RAM. This is done for stability, NOT speed -- and in this market, that's what counts.

Thalaxis
11-01-2004, 03:08 PM
See that? Intel i875P MCH? That is a Pentium 4 chipset, NOT a Xeon chipset; at least, the original design of the 875P was not intended for Xeon. IWill did some fancy footwork to get that chipset working with a Xeon socket. So my statement stands: Xeon is designed for ECC registered RAM.

No, your statement does not. IWill didn't do that footwork, Intel did. They specifically validated it for use with XeonDP (though not more than dual).

lots
11-01-2004, 04:42 PM
In the multi-CPU market Intel has a near-perfect track record for stability, so it makes sense to build Xeon chipsets for use with registered ECC RAM. Granted, the i875 was validated for Xeon, but the chipset was initially released for the Pentium 4 platform, and with it you lose the stability of registered ECC RAM. Maybe for most people here that's acceptable, but the companies buying systems in the market DP Xeons are aimed at generally prefer the stability. Registered ECC mobos DO exist for Xeon. You probably don't see them as much in the enthusiast/power-user space, but you do see them on the server side of things.

Thalaxis
11-01-2004, 04:52 PM
The idea was that with modern DRAM technology, you can have a reliable enough workstation without the cost and latency of ECC RAM. If you have a "five nines" uptime requirement it's necessary, which is why servers use it. For the vast majority of workstation users, it's overkill.

That doesn't make it a bad thing by any means, of course.
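
For anyone wondering what ECC actually does under the hood: each stored word carries a few extra check bits, so a single flipped bit can be located and flipped back on read. Here's a toy Hamming(7,4) encoder/corrector in C -- real ECC DIMMs use a SECDED code over 64-bit words, but the principle is the same:

    #include <stdio.h>

    /* Hamming(7,4): 4 data bits + 3 parity bits; any single flipped
       bit can be located and corrected. */
    static unsigned encode(unsigned d)  /* d = 4 data bits */
    {
        unsigned d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
        unsigned p1 = d0 ^ d1 ^ d3;   /* covers codeword positions 1,3,5,7 */
        unsigned p2 = d0 ^ d2 ^ d3;   /* covers positions 2,3,6,7 */
        unsigned p4 = d1 ^ d2 ^ d3;   /* covers positions 4,5,6,7 */
        /* codeword positions 1..7: p1 p2 d0 p4 d1 d2 d3 */
        return p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) | (d1 << 4) | (d2 << 5) | (d3 << 6);
    }

    static unsigned correct(unsigned c)  /* returns corrected codeword */
    {
        unsigned s1 = ((c >> 0) ^ (c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1;
        unsigned s2 = ((c >> 1) ^ (c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1;
        unsigned s4 = ((c >> 3) ^ (c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1;
        unsigned syndrome = s1 | (s2 << 1) | (s4 << 2); /* position of bad bit, 0 = clean */
        if (syndrome)
            c ^= 1u << (syndrome - 1);  /* flip it back */
        return c;
    }

    int main(void)
    {
        unsigned sent = encode(0xB);       /* 4 data bits: 1011 */
        unsigned hit  = sent ^ (1u << 4);  /* cosmic ray flips bit 5 */
        printf("sent %02X, corrupted %02X, corrected %02X\n",
               sent, hit, correct(hit));
        return 0;
    }

The extra check bits and the checking logic are exactly where ECC's cost and latency overhead come from.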

lots
11-01-2004, 06:38 PM
Yeah, in the workstation the stability that kind of RAM brings is probably not as useful ;) and that would be the target audience here, wouldn't it...

Thalaxis
11-01-2004, 07:23 PM
Exactly :)

Besides, it's not like having your workstation turned off for 24 hours over the course of a year will cause any loss in productivity, and that's about all you're going to lose to soft errors with modern DRAM.

One of the reasons that ECC is more important in larger systems is that expected error counts add up... so if each DIMM has a 1 in 10,000 chance of a soft error in a given hour and you have 10,000 DIMMs, you average a soft error every hour.

I suspect that people who don't work with stuff that big don't realize how much data they deal with. It's a bit like imagining the number of stars in the night sky, unless you live in a big city and can't see anything dimmer than Vega ;)
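
To put rough numbers on that -- treating the 1-in-10,000-per-DIMM-per-hour figure above purely as an example rate, not a measured one:

    #include <stdio.h>

    int main(void)
    {
        /* Example soft-error rate from the post above (illustrative only). */
        double p_per_dimm_per_hour = 1.0 / 10000.0;

        int workstation_dimms = 4;
        int big_system_dimms  = 10000;

        /* Expected error counts add across DIMMs: E = n * p. */
        double ws  = workstation_dimms * p_per_dimm_per_hour;
        double big = big_system_dimms  * p_per_dimm_per_hour;

        printf("workstation: %.4f errors/hour (about one every %.0f days)\n",
               ws, 1.0 / ws / 24.0);
        printf("big system:  %.1f errors/hour\n", big);
        return 0;
    }

Same per-DIMM rate, wildly different consequences: a 4-DIMM workstation averages one soft error every hundred-odd days, while the 10,000-DIMM system averages one per hour. That's exactly why ECC is mandatory at server scale and mostly overkill on a single workstation.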

orbitalpunk
11-01-2004, 08:08 PM
just got the Lian Li case today and all i have to say is

"oh my god"

It's not equal to a G5, but it surpasses anything else I've seen for a PC case. Just first class all the way. Reviews on websites really don't do it justice. It even comes with a tool for the motherboard studs, zip ties and wire brackets. Man... just an amazing case.

lots
11-01-2004, 08:21 PM
Is that the V1200?

Got any pics? I've been eyeing that case for my upcoming upgrade, and it seemed to be a pretty nice case. But that's 200 dollars on the case alone :P I don't know if that's worth it.

leas5040
11-01-2004, 09:59 PM
That's the case with the spot for the power supply on the bottom, correct? That case looked pretty nice, although, as lots pointed out, $200 is a bit much for a case.

orbitalpunk
11-02-2004, 06:22 AM
It's so freakin' worth the $200. I got the matching card reader as well; works like a charm. They offer a matching temp gauge plate, but I'll pass on that one. I also had to order an extra CD-ROM bezel. It's great 'cause they make brushed metal bezels for your extra drives, and also 5.25"-to-3.5" conversion bezels, all in the same brushed metal. Yes, it's the V1200. I'll post pics tomorrow.

I had Antec for the longest time; Lian Li is a totally different ball game. There really is no comparison. Even the fans have rubber bushings on the mounting points so no vibrations get out. Just love it. Yeah... it was worth it... finally a case that reflects my computing power and ability. No more toy boxes and sheet metal boxes... god, even the ones that look like Transformers... ugh... {{hurl}}

Tarrbot
11-02-2004, 10:18 AM
orbitalpunk states: no more toy boxes and sheet metal boxes...
For all intents and purposes, that case is a "sheet metal box". It's just aluminum sheet metal. :p

Thalaxis
11-02-2004, 01:17 PM
even the fans have rubber bushings on the mounting points
Obviously then, Lian Li has not been sitting idle. My older model does not have rubber bushings. It is, however, quieter than yours, by virtue of having a dead power supply (it's old) ;)

orbitalpunk
11-04-2004, 01:06 AM
This box does not flex at all on the sides like most PC cases do.

http://members.dslextreme.com/users/orbitalpunk/ll1.jpg

http://members.dslextreme.com/users/orbitalpunk/ll2.jpg

http://members.dslextreme.com/users/orbitalpunk/ll3.jpg

http://members.dslextreme.com/users/orbitalpunk/ll4.jpg

CGTalk Moderation
01-19-2006, 02:00 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.