View Full Version : Better cpu benchmarking with highend renders, where?

01 January 2004, 03:17 AM
I hope someone can help me. First, this is NOT to start a flamewar on which CPU is best. I write this because I'm sick and tired of seeing all these benchmarking tests on anandtech, tomshardware, xbitlabs, aceshardware and more.

All they tend to do is start 3dsmax (no, I don't hate 3dsmax) and load some 50k-poly scene that takes a few seconds or minutes to render in the scanline renderer. While that is just fine for some things, there are far more demanding workloads that never get tested.

Here is what I have yet to find in a CPU benchmark on the net... so please post URLs to better tests.

1. High-end renderers. (Pixar RenderMan / mental ray, Brazil, V-Ray... etc.)

2. Complex scenes. (how about some 1+ million polygon scenes.)

3. Complex shaders. (not just a 200kb jpg mapped on a few boxes/spheres.)

4. Raytracing. (we all know the 3dsmax scanline renderer isn't fast at this.)

5. GI/photons/DOF/motion blur/displacement

Why do I want benchmarks focusing on high-end renderers and complex scenes? Well... how do we know which CPU is faster per dollar if they aren't tested on real-world production scenes? And wouldn't it be cool to know which system would be best for the software you are actually using?
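If you want numbers like this for your own scenes, the simplest approach is to time a command-line render over several runs and keep the best wall-clock time. A minimal sketch (the renderer binary and scene file below are placeholders, not real paths):

```python
import subprocess
import time

def time_render(cmd, runs=3):
    """Run a command-line render `runs` times; return the best wall-clock seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)  # blocks until the render process exits
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage -- substitute your renderer and a heavy production scene:
# seconds = time_render(["prman", "heavy_scene.rib"])
```

Taking the best of several runs filters out one-off disk-cache and background-process noise, which matters far more on multi-minute production renders than on the usual 50k-poly test scenes.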

One of the biggest misunderstandings is that HyperThreading = 2 CPUs. Yes, I have heard people claim a 3 GHz Intel CPU would render twice as fast with HT enabled. I suspect AMD's 64-bit CPUs will do quite well on complex scenes, since the P4's FPU was cut down and the focus was more on video/gaming.

But heck... I might be completely wrong. So start posting. Hopefully it will lead to better benchmarks. :D

01 January 2004, 08:23 AM
This would be a general system benchmark and not a CPU benchmark. When raytracing a 1-million-poly scene with displacement in PRMan, it's probably not the FPU that's your bottleneck but RAM access, due to constant cache flushing. At some point you're likely to run out of physical RAM, and once the system is swapping, your CPU has enough time to go to the kitchen and get some coffee while waiting for the hard drive.
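To put rough numbers on the RAM point, here's a back-of-the-envelope estimate. The per-vertex layout and the displacement dicing factor are assumptions for illustration, not PRMan's actual figures:

```python
# Rough memory estimate for raytraced geometry that must stay resident.
# Assumed layout: 3 floats position + 3 floats normal + 2 floats UV = 32 bytes/vertex.
BYTES_PER_VERTEX = 32
VERTICES_PER_POLY = 3            # triangulated mesh
polys = 1_000_000
displacement_factor = 4          # assumed: displacement dices each poly ~4x

vertices = polys * VERTICES_PER_POLY * displacement_factor
geometry_mb = vertices * BYTES_PER_VERTEX / 1024**2
print(f"~{geometry_mb:.0f} MB of geometry alone")
```

Even with these modest assumptions you land in the hundreds of megabytes for geometry alone, before textures, acceleration structures, and framebuffers -- easily past what a 2004-era workstation keeps in physical RAM, let alone in cache.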

I'm also not sure you will see much, if any, difference when using different renderers to test CPUs (unless you use rare odd things like lucille that actually use SIMD extensions). All of these renderers use L·N to calculate your diffuse lighting, all of them use v0×v1 to calculate a face normal, and probably all of them were compiled with Intel's compiler or MSVC.
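Those two operations really are the same handful of multiplies and adds in every renderer. A plain-Python sketch of what's being computed:

```python
def dot(a, b):
    """The L.N term in diffuse shading: cosine of the angle between unit vectors."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    """v0 x v1: a vector perpendicular to both inputs."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def face_normal(p0, p1, p2):
    """Unnormalized face normal of triangle p0-p1-p2, via the cross of two edges."""
    e0 = tuple(b - a for a, b in zip(p0, p1))
    e1 = tuple(b - a for a, b in zip(p0, p2))
    return cross(e0, e1)
```

Since every renderer ultimately boils down to millions of these identical operations, the compiler and the memory system differentiate results more than the choice of renderer does.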

I'm not trying to dismiss your idea - if you want to know which computer is best for your rendering needs, your proposed methods give a lot better real-world results than rendering 3ds' tutorial scene 1 with a stopwatch. But you'd be running system benchmarks then, not CPU benchmarks any more.

I have seen that Cinebench, a benchmark from Maxon, has gained popularity lately. I don't know what scene and parameters it uses for its tests, but if you look for reviews that use Cinebench, that might come closer to what you want.
Sometimes the software makers themselves publish test results on their web pages; I think NewTek had some when HT came out.

CGTalk Moderation
01 January 2006, 04:00 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.