Increasing the frame rate doesn’t necessarily mean a directly proportional increase in rendering times or costs. It’s the same as stereo: rendering two eyes usually averages out to 10-20% more computational work, not double.
First, denser temporal sampling means you can cut back elsewhere and get better interpolation out of the data you already have: you reduce motion blur and so on. Especially if you’re heavy on raytracing, fast shots are already costing you several times over the single temporal sample of a non-motion-blurred shot.
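Just to put illustrative numbers on that (the sample counts below are assumptions, not figures from any particular show): a heavily motion-blurred fast shot at 24 fps can already chew through roughly as many temporal samples per second as a much cleaner 60 fps version of the same shot.

```python
# Back-of-envelope: temporal samples per second of footage.
# All numbers are illustrative assumptions, not measured data.

fps_24, samples_24 = 24, 5   # heavy 3D motion blur, several time samples per frame
fps_60, samples_60 = 60, 2   # denser frame sampling, much lighter blur

print(fps_24 * samples_24)   # 120 time samples per second at 24 fps
print(fps_60 * samples_60)   # 120 time samples per second at 60 fps
```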
Secondly, a lot of rendering power already goes into partials: you render on 8s and 4s many, many times over before you ever try the whole sequence (not to mention the absurd number of stills and wedges).
Unless you’re working on something where motion and motion blur have a big impact, lighting can work at 12 fps for a relatively long stretch of the process.
All in all, the de-ressed and not-full-on renders during production outweigh the iterations at final-like settings several times over.
You’re probably looking at 30-40% more rendering across the span of a production to go from 24 to 60 fps.
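A rough sketch of where that 30-40% can come from, with assumed splits rather than measured ones: if only the final-like passes scale with frame rate, and dropping most of the motion blur claws back part of the per-frame cost, the production-wide increase stays well under the raw 2.5x jump in frame count.

```python
# Back-of-envelope farm cost for going 24 -> 60 fps.
# final_share and per_frame_cost are assumptions for illustration only.

final_share = 0.4        # fraction of farm hours spent on final-like passes at 24 fps
frame_factor = 60 / 24   # 2.5x more frames to render
per_frame_cost = 0.7     # relative per-frame cost once motion blur is reduced

final_factor = frame_factor * per_frame_cost           # 1.75x on the final passes
total = (1 - final_share) + final_share * final_factor
print(f"total farm cost vs 24 fps: {total:.2f}x")      # ~1.30x, i.e. ~30% more
```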
Nobody’s done anything quite like it yet, but on plenty of productions you end up doing a few sequences here and there at higher speed, especially if they want to play with ramping between the first iteration and the final baked-in-ramp delivery.
When so much goes into tests and tweaks, the farm impact of doing an entire sequence at 96 ends up, I think, at around time and a half what it would have been if the sequence had been 24 throughout.
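Same kind of sketch for a sequence done at 96 fps, again with assumed fractions: 4x the frames, but only the final-like share of the farm scales with it, and the much lighter motion blur takes some of the sting out of each frame.

```python
# Back-of-envelope farm cost for a sequence at 96 fps instead of 24.
# All fractions are assumptions for illustration only.

final_share = 0.4        # fraction of farm hours on final-like passes at 24 fps
frame_factor = 96 / 24   # 4x more frames
per_frame_cost = 0.6     # relative per-frame cost with motion blur mostly gone

total = (1 - final_share) + final_share * frame_factor * per_frame_cost
print(f"total farm cost vs 24 fps: {total:.2f}x")   # ~1.56x, roughly time and a half
```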