|1 Week Ago||#16|
Originally Posted by LukeLetellier: I wish we had an easy way to communicate rendering needs other than time - something akin to processor-hours, but in a way that accounts for different processor speeds, as a scene could take 50 processor hours on one machine but only 20 on another.
1 FLOP = 1 Floating Point Operation
1 FLOPS = 1 Floating Point Operation Per Second
1 GFLOPS = 1 Billion Floating Point Operations Per Second
1 TFLOPS = 1 Trillion Floating Point Operations Per Second
All rendering operations are essentially 16-bit, 32-bit or 64-bit precision floating point operations executed on the CPU or GPU.
If you go on Wikipedia, you'll find that every CPU, GPU, APU, SoC, FPGA and other electronic processor has a published 32-bit GFLOPS or TFLOPS rating.
So there is your rendering efficiency number and measure, basically.
The more 32-bit GFLOPS/TFLOPS a piece of hardware can sustain, the faster it will render a 3D scene.
You could ask Maxon to include an optional FLOPs overlay in Physical Render or AMD ProRender.
The renderer counts the floating point operations it used to render a test frame, then tells you: "Render finished. 144.37 TFLOPs used to render this frame."
Again, every floating point operation performed by a rendering engine counts as 1 FLOP.
The number of FLOPs a given piece of hardware can execute per second is a pretty accurate predictor of how fast it can render 3D frames.
If you had a FLOP counter overlay in C4D, you could literally communicate a figure like "this 3D scene takes about 1244 TFLOPs on average to render for each 1920 x 1080 pixel frame".
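To make that concrete, here's a minimal Python sketch of the idea: take a per-frame FLOP count (like the overlay would report) plus a GPU's spec-sheet TFLOPS rating, and estimate render time. The numbers and the 50% efficiency figure are illustrative assumptions, not measurements.

```python
# Hypothetical sketch: estimate seconds per frame from a renderer-reported
# FLOP count and a GPU's peak 32-bit FLOPS rating.
# All numbers are illustrative, not measured.

def estimate_seconds_per_frame(frame_tflops: float,
                               gpu_tflops_rating: float,
                               efficiency: float = 0.5) -> float:
    """Estimate seconds to render one frame.

    frame_tflops      -- trillions of floating point ops the frame needs
    gpu_tflops_rating -- the GPU's peak 32-bit TFLOPS spec
    efficiency        -- assumed fraction of peak the renderer sustains
    """
    sustained = gpu_tflops_rating * efficiency  # TFLOPs per second in practice
    return frame_tflops / sustained

# A frame reported at 1244 TFLOPs on a 10 TFLOPS GPU at 50% efficiency:
seconds = estimate_seconds_per_frame(1244, 10, 0.5)
print(f"{seconds:.0f} s/frame")  # 1244 / 5 = 248.8 -> "249 s/frame"
```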
Now it may be the case that not every FLOP a CPU or GPU is technically capable of executing goes 100% into rendering a frame.
But in my line of work - custom video processing - I use GFLOPs all the time to measure and communicate what hardware my algorithms need to run on.
My latest algorithm for example requires about 400 GFLOPs to process 1080HD @ 30 FPS video in realtime. And about 1600 GFLOPs or 4 times as much to do 4K UHD @ 30 FPS video in realtime.
When I communicate that number, people I'm talking to can work out what GPUs or other processors can or cannot run the algorithm in realtime.
So GFLOPS/TFLOPS is not a bad measure of the computing power of hardware, and FLOP counts are a precise measure of how much work a given operation, like rendering a 3D frame, actually took.
FLOP counts are not dependent on hardware. If a rendering operation takes exactly 4 GFLOPs (4 billion floating point operations) to finish, then it takes precisely that on any machine - 4 GFLOPs.
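The 1080p vs 4K figures above are mostly just pixel-count scaling (4K UHD has exactly 4x the pixels of 1080p). A quick sketch, backing an approximate per-pixel cost out of the 400 GFLOPS figure from the post:

```python
# Sketch of the resolution scaling implied by the post's figures:
# 400 GFLOPS for 1080p @ 30 fps, 4x that for 4K UHD @ 30 fps.

def gflops_required(width: int, height: int, fps: int,
                    flops_per_pixel: float) -> float:
    """Sustained GFLOPS needed to process video in realtime."""
    return width * height * fps * flops_per_pixel / 1e9

# Back out the approximate per-pixel cost from the 1080p figure:
flops_per_pixel = 400e9 / (1920 * 1080 * 30)   # ~6430 FLOPs per pixel

print(round(gflops_required(1920, 1080, 30, flops_per_pixel)))  # 400
print(round(gflops_required(3840, 2160, 30, flops_per_pixel)))  # 1600
```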
|1 Week Ago||#17|
Originally Posted by ThePriest: A recent job I did cost nearly 10k on Rebus. Granted that wasn't an out of pocket expense for me, but it was expensive. And that's my rationale when it comes to ownership: That I don't necessarily need a farm's insanely fast turnaround speed, but I could use 10-20x more power than I have now. On both the GPU and CPU side. Have you considered leasing a few current generation machines or is that not an option? It sounds like you have enough work from your agency to be able to cover the expense.
I work primarily in VRay, where it's not unusual for even a highly optimized scene to take 30 minutes or more per frame.
Between the workstation at work and the 2 I have at home, I might get back 3 seconds of footage per day.
If I wanted 10 seconds of footage back from Rebusfarm on a scene that takes 30 minutes per frame to render, you're looking at $1,049.51 (from their calculator).
As it stands, on a current VR job that I'm tackling, I have 3600 frames to render at 4K at around 1 hour per frame. Their estimate for this is $25,188.20 USD.
That would pay for my farm, C4D licenses and I'd still have spare change.
It would take 1 day on Rebus. That's the luxury. I'd be rendering for 15 days straight on a home farm. Or billing my client $1500
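The arithmetic behind those figures, as a quick sketch. The frame count and farm quote are from the post; the 10-machine home-farm size is an assumption chosen to match the 15-day figure:

```python
# Rough farm-vs-home comparison using the post's numbers:
# 3600 frames at ~1 hour per frame, $25,188.20 farm quote.

frames = 3600
hours_per_frame = 1.0
farm_quote_usd = 25188.20      # Rebusfarm estimate from their calculator
home_machines = 10             # ASSUMED home-farm size (matches "15 days straight")

total_hours = frames * hours_per_frame
home_days = total_hours / home_machines / 24

print(f"Total compute: {total_hours:.0f} machine-hours")            # 3600
print(f"Home farm of {home_machines} boxes: {home_days:.0f} days")  # 15
print(f"Farm cost per frame: ${farm_quote_usd / frames:.2f}")       # $7.00
```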
|1 Week Ago||#18|
I've said this in other threads before - GPUs have far more firepower in terms of TFLOPS ratings than even the beefiest CPUs, and they are getting even more powerful very quickly.
AMD is really tearing into Intel in the CPU space right now. Intel were basically sitting on their asses for the last decade, making easy cash from CPUs that got only 5-15% faster each generation.
Along came an aggressive AMD, and suddenly prices are coming down and core counts are shooting up.
If AMD tear into Nvidia - another somewhat lazy company - just as aggressively, we may see 15, 20, 25 TFLOPS GPUs with 16-24 GB of VRAM within 2 years.
3 of those GPUs would basically be a "renderfarm in 1 PC box".
In your case you want to do rendering for others in your office, which may understandably favor CPU over GPU, as your colleagues may be running CPU-based renderers rather than GPU-based ones.