What am I doing wrong...

Old 04 April 2013   #1
What am I doing wrong...

Hey guys, I've been using 3ds Max 2013 on a Win7 PC, and my current project has some extremely heavy scene files. I know about layers, MeshSmooth render iterations, and all the basic techniques, but despite all that, Max's viewport has always run extremely slowly. Recently my boss gave me a new (much more powerful) PC to help with this problem; the only thing is, it runs Max the same, if not worse, and I don't know what to tell him. Here are the specs:

Old PC:

Intel Core i7-930 @ 2.80 GHz

8GB installed memory

64-bit OS (win7)

ATI Radeon HD 5700 Series

New PC:

Intel Xeon E5-2630 @ 2.30 GHz (2 processors)

32.0 GB installed memory

64-bit OS (win7)

Dual NVIDIA Quadro 5000

Am I doing something wrong? By all accounts this is a much faster and more powerful PC, but it runs 3ds Max horribly. I honestly have no idea what to tell him. Please help!
 
Old 04 April 2013   #2
Your "old" PC is an extremely capable machine! The clock speed itself is even higher then the Xeons. Forget Quadros! They're good, don't get me wrong, but they're not above some high end "gaming" (I hate this naming convention) cards. Well, if you intend to do some very limited GPU rendering then they might be worth something. Next time, go with GForce or Radeon, they're cheaper, easy to find and easy to replace/upgrade if you want to.

All in all, the only thing I see as being really better is the RAM (which, divided by two, since you have two processors, equals 16 GB per processor).

Don't get the wrong idea: both your machines are really good! Better than the ones I have at my disposal! But they seem quite equivalent in terms of viewport performance. Rendering-wise, the second one should smoke the first.
__________________
didali
 
Old 04 April 2013   #3
I'm really sorry to say this, but your new computer is a complete waste of money, especially as a Max workstation. A 2.3 GHz non-overclockable CPU is a bad idea considering the performance you can eke out of a 3930K for the same price. Dual Quadros give you about the same viewport performance as a single GTX 680 at six times the price, since Max's viewport doesn't even support dual-card setups. A pair of 3930K boxes overclocked to 4.5 GHz, one with a top-of-the-line "gamer" card and the other one (the render slave) with a cheaper card, would have cost you about the same as your dual-socket setup, but be roughly twice as fast in single-threaded processes and also twice as fast in rendering. You could also add your older 930 box as an additional render node...
 
Old 04 April 2013   #4
Originally Posted by davius: Your "old" PC is an extremely capable machine! Its clock speed is even higher than the Xeons'. Forget Quadros! They're good, don't get me wrong, but they're not above some high-end "gaming" cards (I hate this naming convention). Well, if you intend to do some very limited GPU rendering then they might be worth something. Next time, go with GeForce or Radeon; they're cheaper, easy to find, and easy to replace or upgrade if you want to.

All in all, the only thing I see as being really better is the RAM (which, divided by two, since you have two processors, equals 16 GB per processor).

Don't get the wrong idea: both your machines are really good! Better than the ones I have at my disposal! But they seem quite equivalent in terms of viewport performance. Rendering-wise, the second one should smoke the first.


Gaming cards would actually still do GPU rendering faster than a Quadro. A Quadro 5000 should run it pretty well, but I don't think it's worth it since you can get a better gaming card for a much lower price.

Still, you shouldn't be having really big problems. A Radeon 5700-series card probably wouldn't run that well, but a Quadro 5000 should be fine.
__________________
The Z-Axis
 
Old 04 April 2013   #5
Originally Posted by vlad: I'm really sorry to say this but your new computer is a complete waste of money, especially as a Max workstation. A 2.3ghz non oveclockable cpu is a bad idea considering the performance you can eek out of a 3930k for the same price. Dual Quadros give you about the same viewport performance as a single 680gtx at 6 times the price, since Max's viewports dont even support dual cards setups. A pair of 3930k boxes overclocked to 4.5 ghz, one with a top of line "gamer" card and the other one (the render slave) with a cheaper card, would have cost you about the same as your dual socket setup, but be roughly twice as fast in single threaded processes and also twice as fast in rendering. You could also add your older 930 box as an additional render node...


Exactly!

See if you can return the workstation, give Boxx a call, and ask for a ~4.5 GHz overclocked i7 3930K + GTX 680 or Titan setup. Or build it yourself if you're comfortable with that. ( http://www.boxxtech.com/solutions/3dsMax_MA )

This is not really your (or your boss's) fault. If you ask a normal PC vendor for a workstation, they'll give you a Xeon, ECC memory and Quadro setup, which is logical since those parts are labeled 'for workstation'. But it's actually quite the opposite of what you need.

Put simply, this is the performance ranking for graphics workstations:

1) $3000 high-end custom 'gaming' rig
2) $8500 workstation
3) $400 general office pc

But somehow a lot of people who are not really into all the nitty-gritty bits about hardware (and they shouldn't need to be; they are graphics specialists) are easily sold the most expensive option, since economics suggests it 'must be' the fastest one. Add a big layer of FUD about stability, label the alternative 'gaming' hardware (which sounds like the far opposite of 'professional'), while in reality gaming hardware is pushed far harder than you will ever manage with a Max viewport, except maybe for GPU rendering. So as a vendor you will have no problem selling option 2 and making more profit.

Now for some hard figures.
Have a look here: http://www.cpubenchmark.net/cpu.php...630+%40+2.30GHz

Performance:
Your workstation's dual Xeons score 8.4 × 2 = 16.8 points.
A single 3930K @ 4.5 GHz scores roughly 12 × (4.5 / 3.2) ≈ 16.9 points.

Price:
Xeons: $620 × 2 = $1,240
3930K: $550 × 1 = $550

In general, six faster cores are better than twelve cores at half the speed. A lot of stuff in Max is single-threaded; in general, actually, not everything is suited for parallel execution. So single-threaded tasks run twice as fast on the 3930K.
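To sanity-check those numbers (a minimal back-of-the-envelope sketch in Python; the linear scaling of the benchmark score with clock speed is an assumption, real chips usually scale a bit worse):

```python
# Rough scaling estimate, assuming the multi-threaded benchmark score scales
# linearly with clock speed (an approximation, not a measured result).
xeon_score_each = 8.4                      # quoted score per E5-2630
xeon_total = xeon_score_each * 2           # two sockets -> 16.8

i7_stock = 12.0                            # quoted score for a 3930K at 3.2 GHz
i7_oc = i7_stock * (4.5 / 3.2)             # ~16.9 at a 4.5 GHz overclock

print(f"Dual Xeon E5-2630 : {xeon_total:.1f} points, ${1240 / xeon_total:.0f}/point")
print(f"3930K @ 4.5 GHz   : {i7_oc:.1f} points, ${550 / i7_oc:.0f}/point")
```

Same multi-threaded score, give or take, at less than half the price per point.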

Xeons do have a few advantages over 'normal' processors:

- They can come with a lot of cache memory, which helps when you are running a lot of threads, like a web server would.

- They run cooler (due to lower clock speeds), so they are suited for use in crammed data centers where limits on heat output and power usage apply (95 W vs 130+ W in this case).

- You can put multiple of them on a single mainboard, which is great when you are renting rack space by volume.

- They support ECC memory that can detect/correct memory errors. Cosmic rays will flip a bit per GB of memory roughly every 9 years (about 4 times a year with 32 GB; see the sketch after this list). That's not a problem when you can press the reset button under your desk, but it's more trouble if you have to drive to your data center to reset things. Not every bit flip ends in a crash, either: if you have 32 GB of textures loaded, you might just see a pixel with the wrong color. It really comes into play when you run whole data centers with TBs of memory doing precise scientific calculations where the slightest error changes the outcome enormously, like weather simulations.

So there are cases where Xeons, Quadros, etc. are the right choice, but it's not under your desk!
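To put that bit-flip figure in numbers (a minimal sketch; the one-flip-per-GB-per-9-years rate is the rough rule of thumb quoted above, not a measured constant):

```python
# Soft-error rate quoted above: roughly one bit flip per GB every 9 years.
flips_per_gb_per_year = 1 / 9

for gb in (8, 32):
    print(f"{gb} GB: ~{gb * flips_per_gb_per_year:.1f} bit flips per year")
# 32 GB works out to ~3.6 flips/year, i.e. the "about 4 times a year" above.
```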

my 2$
__________________
The GPU revolution will not be rasterized! - http://www.jdbgraphics.nl

Last edited by jonadb : 04 April 2013 at 07:01 AM.
 
Old 04 April 2013   #6
Ray2: Hello, if you have problems with viewport performance, you should try 3ds Max 2014. For me, it has maybe 10 times better viewport performance, or more. 20,000 objects in 2013 is a headache, but when I opened the same scene in 2014, I was very surprised: there was no problem rotating or moving around the viewport.
But yes, the problem is that there are not many plugins for 2014 yet.

My specs:
System: Win 8 x64 Pro
Processor: i7 2600K
GPU: GTX590 Nvidia Gainward
RAM: 8GB 2133 Kingston Hyper X

But be careful: if you download the 2014 trial, open a 2013 project in it and save it, you will not be able to open it back in 2013, so make a backup.

Last edited by Karolina84 : 04 April 2013 at 08:19 PM.
 
Old 04 April 2013   #7
Originally Posted by Karolina84: Ray2: Hello, if you have problems with viewport performance, you should try 3ds Max 2014. For me, it has maybe 10 times better viewport performance, or more.
...
But be careful: if you download the 2014 trial, open a 2013 project in it and save it, you will not be able to open it back in 2013, so make a backup.


You can save back up to three Max versions: if you look at the file type dropdown when saving, it can be changed from the current .max format all the way back to 3ds Max 2011 .max.
__________________
The Z-Axis
 
Old 04 April 2013   #8
Oh yes, I see it now, thank you. When I tried to open a 2013 file in 2014, Max told me I wouldn't be able to open it in earlier versions if saved. So it's good to know it is possible.
 
Old 04 April 2013   #9
Originally Posted by Karolina84: ...
But yes, the problem is that there are not many plugins for 2014 yet.
...


Most 2013 plugins should work in 2014.
 
Old 04 April 2013   #10
Originally Posted by jonadb: Exactly!

See if you can return the workstation, give Boxx a call, and ask for a ~4.5 GHz overclocked i7 3930K + GTX 680 or Titan setup. Or build it yourself if you're comfortable with that. ( http://www.boxxtech.com/solutions/3dsMax_MA )

...

So there are cases where Xeons, Quadros, etc. are the right choice, but it's not under your desk!


Correct me if I'm wrong, but aren't the Xeons better? Even without multi-threading in Max, since there are two physical CPUs, wouldn't it work out to 2.3 GHz × 2 = 4.6 GHz, with 12 cores? I always thought that was the advantage of multiple processors.
 
Old 04 April 2013   #11
Originally Posted by Ian31R: Correct me if I'm wrong, but aren't the Xeons better? Even without multi-threading in Max, since there are two physical CPUs, wouldn't it work out to 2.3 GHz × 2 = 4.6 GHz, with 12 cores? I always thought that was the advantage of multiple processors.


Not really. The Xeons have 6 cores each, so having 12 cores at 2.3 GHz is like having a *single* processor with 6 cores at 4.6 GHz. Cores × frequency = processing power, roughly.
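For example (a rough sketch using cores × GHz as a first-order proxy for throughput; it ignores IPC and memory differences):

```python
# Aggregate vs single-core throughput, using cores * GHz as a crude proxy.
machines = {
    "dual Xeon E5-2630": {"cores": 12, "ghz": 2.3},
    "3930K @ 4.6 GHz":   {"cores": 6,  "ghz": 4.6},
}

for name, cpu in machines.items():
    aggregate = cpu["cores"] * cpu["ghz"]
    print(f"{name}: {aggregate:.1f} core-GHz aggregate, {cpu['ghz']} GHz per core")
# Both land at 27.6 core-GHz in total, but a single-threaded task only ever
# sees one core: 2.3 GHz on the Xeons vs 4.6 GHz on the 3930K.
```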
__________________
The GPU revolution will not be rasterized! - http://www.jdbgraphics.nl
 
Old 04 April 2013   #12
Originally Posted by jonadb: Not really. The Xeons have 6 cores each, so having 12 cores at 2.3 GHz is like having a *single* processor with 6 cores at 4.6 GHz. Cores × frequency = processing power, roughly.


Okay, so the frequency is doubled, but not the cores? Even if the cores aren't doubled, for Max it doesn't matter, since Max doesn't use multiple cores, so you're stuck with frequency, which means you would have to overclock the 3930K past 4.6 GHz to beat the Xeons, right?
 
Old 04 April 2013   #13
Originally Posted by Ian31R: Okay, so the frequency is doubled, but not the cores? Even if the cores aren't doubled, for Max it doesn't matter, since Max doesn't use multiple cores, so you're stuck with frequency, which means you would have to overclock the 3930K past 4.6 GHz to beat the Xeons, right?


When using all cores at 100%, when rendering for example, a 6-core 3930K @ 4.6 GHz performs about the same as the 2 × 6-core Xeons @ 2.3 GHz.

But things that run on a single core will run twice as fast on the 3930K, compared to a Xeon core running at half the frequency.

That's why you want a few faster cores over many slower ones.
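A small sketch of why that matters, assuming a job that is, say, 40% single-threaded and 60% perfectly parallel (the split is made up purely for illustration; this is essentially Amdahl's law):

```python
# Estimated completion time for a job that is partly single-threaded and
# partly parallel, using cores * GHz as a crude proxy for parallel throughput.
# The 40/60 split below is purely illustrative.
def job_time(ghz_per_core, cores, serial_fraction, work=100.0):
    serial_time = work * serial_fraction / ghz_per_core
    parallel_time = work * (1 - serial_fraction) / (ghz_per_core * cores)
    return serial_time + parallel_time

frac = 0.4  # fraction of the job that runs on a single core (assumed)
print("dual Xeon @ 2.3 GHz:", round(job_time(2.3, 12, frac), 1))  # ~19.6
print("3930K     @ 4.6 GHz:", round(job_time(4.6, 6, frac), 1))   # ~10.9
# The larger the single-threaded fraction, the bigger the 3930K's lead,
# even though both machines have the same aggregate core-GHz.
```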
__________________
The GPU revolution will not be rasterized! - http://www.jdbgraphics.nl
 
Old 04 April 2013   #14
Originally Posted by jonadb: When using all cores at 100%, when rendering for example, a 6-core 3930K @ 4.6 GHz performs about the same as the 2 × 6-core Xeons @ 2.3 GHz.

But things that run on a single core will run twice as fast on the 3930K, compared to a Xeon core running at half the frequency.

That's why you want a few faster cores over many slower ones.


Wait, so say, for example, the two Xeons were only one core each: other than for rendering, Max would only utilize one 2.3 GHz processor? I thought two processors could be combined to calculate one task in all situations, not just for multitasking. That blows; simulating anything with a single-core 2.3 GHz processor would be very slow...
 
Old 04 April 2013   #15
Originally Posted by Ian31R: Wait, so say, for example, the two Xeons were only one core each: other than for rendering, Max would only utilize one 2.3 GHz processor? I thought two processors could be combined to calculate one task in all situations, not just for multitasking. That blows; simulating anything with a single-core 2.3 GHz processor would be very slow...


Yep, that's why the topic starter isn't seeing any speed improvement.

And there is more to it than that... all those cores need access to the memory banks, so 12 cores trying to read a texture in memory are a bit more prone to congestion than 6 cores. The memory bandwidth has to be shared among cores; there are a lot of optimizations and tricks being used, so it's not really a 2:1 ratio, but it can be an issue.
__________________
The GPU revolution will not be rasterized! - http://www.jdbgraphics.nl
 