Reawakening this thread as I’ve found very little on the topic of actual EC2 render performance (3ds Max), so we did some testing ourselves to better understand whether it would be worth investing in over buying more kit for the office. Here’s what we found (based on a very basic setup and not being particularly scientific, so please consider this more anecdotal!)
Amazon Setup
1x Backburner 2012 running on m1.large (single socket, 4 core Xeon @ 2.5 GHz)
4x 3ds Max 2013 running on cc2.8xlarge (dual socket, 8 core Xeon @ 2.6 GHz)
In reality, the CPU specifications don’t give the performance we originally expected. I’m not entirely sure why, but I’m sure hypervisors, shared resources and other hidden stuff are getting in the way.
Compared with the local machines we have, these are rough render times for a test Backburner job:
Amazon EC2 cc2.8xlarge (Xeon E5, 2x 8 cores @ 2.6 GHz = 41.6 GHz aggregate): 5 mins
Local render node (Sandy Bridge i7, 6 cores @ 3.2 GHz = 19.2 GHz aggregate): 6.5 mins
Local workstation (Xeon E3, 4 cores @ 3.5 GHz = 14 GHz aggregate): 10 mins
We were hoping to see the Amazon instances knocking frames out 1.5x to 1.8x faster than a local render box, so we were a little disappointed when they came in at only around 1.3x, considering they supposedly had over twice the raw GHz available. There were very few maps in the scene, and actual scene translation time was fairly minimal (precached geometry, no FG/GI, no proxies) and kept to around 15 secs. Once rendering, the machines were using 100% of the available CPU resources (according to Task Manager).
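For anyone who wants to sanity-check that scaling, here’s a rough back-of-envelope sketch in Python using only the aggregate GHz and render times quoted above (purely illustrative, not a benchmark tool):

```python
# Back-of-envelope check: raw-GHz scaling vs observed render times from the test above.
machines = {
    # name: (aggregate GHz, render time in minutes)
    "EC2 cc2.8xlarge":   (41.6, 5.0),
    "Local render node": (19.2, 6.5),
    "Local workstation": (14.0, 10.0),
}

baseline_ghz, baseline_time = machines["Local render node"]

for name, (ghz, minutes) in machines.items():
    expected = ghz / baseline_ghz        # speed-up you'd hope for from raw GHz alone
    observed = baseline_time / minutes   # speed-up actually seen in the render times
    print(f"{name:18s} expected {expected:.2f}x, observed {observed:.2f}x")
```

For the cc2.8xlarge that works out as roughly 2.2x expected from GHz alone but only about 1.3x observed, which is where the disappointment comes from.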
In total it takes about 30 minutes to spin all 5 machines up from cold and get a file rendering via Backburner. We’ve not yet managed a successful VPN from an EC2 instance to our studio, so we’re having to transfer files off the EC2 instances via a WebDAV link, which means the bottleneck in our experience is our download speed. In tests we can shift approx 1 GB a minute off EC2 onto other cloud storage. Note: don’t use Dropbox - it’s hobbled to 400 KB/s (in the UK, it seems).
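To put that bottleneck into numbers, here’s a quick sketch comparing the ~1 GB/min we saw going EC2-to-cloud against pulling the same data down our 4 Mbps studio line (mentioned below); the 1 GB job size is just an example:

```python
# Rough transfer-time estimate: EC2 -> cloud storage vs EC2 -> studio over a 4 Mbps line.
# All rates are the ones quoted in the post; job size is an example figure.
job_size_gb = 1.0                      # rendered output to pull off EC2

cloud_to_cloud_gb_per_min = 1.0        # observed: ~1 GB/min EC2 -> other cloud storage
studio_link_mbps = 4.0                 # our EFM line, 4 Mbps up/down
studio_mb_per_min = studio_link_mbps / 8 * 60   # Mbps -> MB per minute (~30 MB/min)

print(f"EC2 -> cloud storage  : {job_size_gb / cloud_to_cloud_gb_per_min:.0f} min")
print(f"EC2 -> studio (4 Mbps): {job_size_gb * 1024 / studio_mb_per_min:.0f} min")
```

So the same gigabyte that leaves EC2 in about a minute takes over half an hour to land in the studio, which is why the internet connection hurts more than anything Amazon does.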
In summary - it could be very useful when needing extra juice during a panic (until we expand our local render kit). It probably works out to about £8/hour to employ 4 extra machines with minimal setup fuss. The downside is that performance isn’t quite what we expected/hoped. Our studio internet connection is the biggest immediate problem (a fixed, always-on EFM line to the green box, sadly limited to 4 Mbps up/down for nearly £6K/year - thanks BT for never digging up the road around here!). Plugin/node licensing is a big unknown - we’ll try Krakatoa or FumeFX next time we need it and see what headaches that creates.
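If it helps anyone budget, a very rough cost-per-frame estimate from the figures above (assuming the ~£8/hour for the whole setup and ~5 min/frame on each of the 4 render instances; purely illustrative):

```python
# Very rough cost-per-frame estimate from the figures in this post.
cost_per_hour_gbp = 8.0     # approx cost of the 4x cc2.8xlarge + m1.large setup
minutes_per_frame = 5.0     # observed render time per frame on a cc2.8xlarge
render_nodes = 4

frames_per_hour = render_nodes * 60 / minutes_per_frame   # ~48 frames/hour across the farm
print(f"~{frames_per_hour:.0f} frames/hour, ~£{cost_per_hour_gbp / frames_per_hour:.2f} per frame")
```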
The 3ds Max file we were testing with:
150 MB layout file referencing a 120 MB XRef Scene file
100 MB of bitmaps
Mental Ray, ray tracing, cached geometry, no render elements, no other caches/mr proxies
3840x1080 resolution, framebuffer off during render
No plugins or fancy stuff
Hope some of this was useful for other EC2 dabblers…