What blades actually share depends on the build, the type of chassis, and so on.
The PSU is usually shared, in the sense that you only need one plug per blade and internally it will draw and distribute whatever power it needs. There are plenty of offerings with two, though, and good blades often have redundant power supplies in case one fails (which is something you can put in any tailored server case too, if you need to).
NICs vary: some blades have multiple regardless of how many CPUs and motherboards they host, some have a single plug and handle splitting it and presenting multiple IDs in a managed way, and some will have Fibre Channel too.
Climate control is about keeping the inside of the chassis at reasonable temperatures. If you have an air-conditioned room comfortably hosting forty workstations, adding ten more won't be a problem, and whether you use them for distributed rendering only or seat someone in front of them matters absolutely nothing: if they don't overheat, they will keep churning out frames. That's the whole extent of climate control.
With racks you have to be more specific and careful because it’s a lot of heat in a small space, but the principles don’t change.
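To make that concrete, here's a back-of-envelope cooling check. The per-node wattage is an assumption you'd swap for your own hardware's figures; the only hard fact in it is that one watt of continuous draw works out to roughly 3.412 BTU/hr of heat the air conditioning has to remove.

```python
# Back-of-envelope cooling check: every watt a box draws ends up as heat
# the room's AC has to remove. The per-node draw is an illustrative
# assumption, not a measurement.

WATTS_PER_NODE = 400          # assumed average draw per render node under load
BTU_PER_WATT = 3.412          # 1 W of continuous load ~ 3.412 BTU/hr

def added_cooling_load(node_count, watts_per_node=WATTS_PER_NODE):
    """Extra heat (in watts and BTU/hr) the room's AC must handle."""
    watts = node_count * watts_per_node
    return watts, watts * BTU_PER_WATT

extra_w, extra_btu = added_cooling_load(10)
print(f"10 extra nodes ~ {extra_w} W ~ {extra_btu:,.0f} BTU/hr of cooling load")
```

If the room's AC has that much headroom left, the extra nodes are a non-issue; if not, that's your answer regardless of what shape the boxes come in.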
From a power and running-costs point of view, workstations are very rarely advantageous over more compact solutions.
They will be cheaper in terms of casing and management, but they usually aren't as optimized for heat and power draw as blades can be. That doesn't mean they aren't an option, though. Again, space and power are what separate a computational centre from a bunch of workstations.
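As a rough illustration of the running-cost point, a sketch like this is all it takes to compare the two. The wattages, utilisation, and electricity price below are assumptions, not measurements of any particular workstation or blade.

```python
# Rough running-cost comparison between tower workstations and denser blades.
# All figures (draw per node, electricity price, utilisation) are assumptions
# you would replace with your own numbers.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15          # assumed electricity price in your currency

def yearly_power_cost(node_count, watts_per_node, utilisation=0.8):
    """Approximate yearly electricity cost for a set of render nodes."""
    kwh = node_count * watts_per_node / 1000 * HOURS_PER_YEAR * utilisation
    return kwh * PRICE_PER_KWH

workstations = yearly_power_cost(10, watts_per_node=450)   # assumed tower draw
blades       = yearly_power_cost(10, watts_per_node=350)   # assumed blade draw
print(f"workstations: ~{workstations:,.0f}/yr   blades: ~{blades:,.0f}/yr")
```

Run it with your own draw figures and power rate; the gap per node is small, but it compounds over years of 24/7 rendering.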
There is nothing magical about the hardware inside blades. If it fits in one, the equivalent can fit in a case if you prefer that.
That’s why I was stressing those points.
People think of a renderfarm as if it’s some sort of magical, abstract entity… It’s not, it’s just a bunch of computers, end of story.
Power, space and computational needs and constraints dictate whether you need one in racks, or you can pile up some cases on a desk.
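If you want to turn that into an actual decision, the check is almost trivially simple. The thresholds below (how many cases fit on the desks, how much a wall circuit can feed) are purely illustrative assumptions.

```python
# Toy logistics check for "racks vs a pile of cases on a desk": does the node
# count fit in the space and on the circuit you actually have? The thresholds
# are illustrative assumptions, not recommendations.

def fits_on_desks(node_count, watts_per_node=400,
                  desk_slots=6, circuit_watts=2400):
    """True if a handful of tower cases covers the need without tripping a breaker."""
    return (node_count <= desk_slots
            and node_count * watts_per_node <= circuit_watts)

for n in (4, 12, 40):
    verdict = "desk cases will do" if fits_on_desks(n) else "time to think about racks"
    print(f"{n:>3} nodes: {verdict}")
```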
It’s all about logistics.