View Full Version : anyone using Multi-OS in production?


tswalk
08-29-2012, 01:26 AM
I'm curious to know if anyone has experience using Multi-OS in production for DCC work?

http://www.nvidia.com/object/sli_multi_os.html

To be specific, I'm looking at Maya with a Quadro (may get a K5000 when available). However, I doubt my budget will allow me to get two Quadros since I'm bootstrapping things at the moment, so I may need to use other GeForce cards along with it...

olson
08-29-2012, 02:51 AM
That feature is only available on Quadro products. If you want to run resource-intensive tasks in more than one operating system, the best option in my opinion is to dual-boot or have multiple workstations (and perhaps a KVM switch to save desk space). Also, why would Maya need this anyway? Maya is available for all of the major platforms, so you should be able to run it without virtualization unless you're running something unusual (BSD, Solaris, etc.).

tswalk
08-29-2012, 04:31 AM
Also, why would Maya need this anyway?

It doesn't.

It's the benefit of having a single host with easy access to multiple guest systems and minimal performance loss.


edit:

This was something I was looking at initially... not totally the same, but the cluster rendering is pretty cool...

http://youtu.be/SICwqm5oO5k

cgbeige
08-29-2012, 04:38 AM
Just curious why you're interested in it. Am I wrong in seeing this as a sort of thin client for GPUs?

tswalk
08-29-2012, 04:48 AM
Just curious why you're interested in it. Am I wrong in seeing this as a sort of thin client for GPUs?

It is, in a way: it gives the virtual guest operating systems direct I/O to the hardware on your host computer, which is why I'm seriously considering moving to Linux...

I've used VMs in the past, but generally only for servers providing specific network services... they're easy to manage and transfer from one host to another. The only thing that has prevented me from doing this at the workstation level is the inability to utilize hardware directly, such as the GPU... but now, this may not be the case. It's really got me wondering.

[edit]
may not be a problem to do... is what I meant to say
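
For anyone wondering what that direct I/O looks like on the Linux side, here's a rough sketch of the generic KVM equivalent (not Multi-OS's own tooling): VT-d PCI passthrough, handing the GPU straight to a guest. The PCI address, memory size, and disk image below are placeholders, and the card has to be bound to the host's vfio-pci driver first:

import subprocess

# Placeholder PCI address -- find the real one with: lspci | grep VGA
GPU_PCI_ADDR = "01:00.0"

# Hand the GPU to the guest via VFIO (requires VT-d/IOMMU enabled on
# the host and the card bound to the vfio-pci driver beforehand).
cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "8192",        # guest RAM in MiB
    "-smp", "4",         # guest CPU cores
    "-device", "vfio-pci,host=" + GPU_PCI_ADDR,  # direct GPU I/O
    "-drive", "file=guest.img,format=qcow2",     # placeholder disk image
]
subprocess.run(cmd, check=True)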

olson
08-29-2012, 05:25 AM
While a neat feature on paper, it's probably a huge pain in the ass. :curious:

Oh, you updated your kernel? Hopefully you're familiar with the terminal. Want to use the latest release of your favorite distribution? Maybe two years down the road. Support from Autodesk? Oh, I see you're running a virtual machine, so we can't help you. Want to use multiple monitors? Sorry, the guest will only see one. I'll put the cynical soapbox aside now, but you get the idea.

I think graphics hardware virtualization will improve and become standardized in time, but I wouldn't go anywhere near it right now unless you're just looking for something to throw money at. :shrug:

tswalk
08-29-2012, 05:28 AM
... I'll put the cynical soapbox aside now, but you get the idea. ...


Haha, ya, I can relate, and so true...

Check this out, though ( http://youtu.be/MhJxAi2jCeQ )

cgbeige
08-29-2012, 05:20 PM
Ya, olson's right. These have a nice way of falling apart on implementation. Nvidia's doing their best to get CUDA, Maximus, and a shit-ton of other proprietary technologies into pipelines while the industry is moving to OpenCL and open tech. You won't see support for such a niche product outside of some stuff that Nvidia writes themselves, and even that will eventually lose support as it fails and they lose interest. I have Parallels Workstation Extreme in CentOS and it promises GPGPU acceleration for Quadros and Windows clients, but I've never tried it. You might look into that instead, but it's probably going to be another abandonware thing in a few years.

tswalk
08-29-2012, 09:18 PM
.... I have Parallels Workstation Extreme in CentOS and it promises GPGPU acceleration for Quadros and Windows clients, but I've never tried it. ....

I'm curious to know: are you using it the way they demonstrate in this video, building a scalable render cluster? ( http://youtu.be/SICwqm5oO5k )

This was my original intent... I plan to allow for scalability in the long run, which will be difficult to set up initially, but once I have it running, VM management is simple, and I can expand my compute capacity from 6 to 12 to 64+ cores.

olson
08-29-2012, 10:03 PM
You don't need virtual machines to create a scalable render farm that utilizes workstations. There are some advantages to virtualizing and using their products, but they make it seem like it would be impossible without them. Just run the queue client on the workstation and use flags to set how many processors to use for each job upon submission. Amazing! :rolleyes:
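
Roughly, that's all a queue client has to do, something like the sketch below. The renderer name and its flags are made up for illustration; real queue managers expose the same per-job processor cap at submission time:

import subprocess

def run_job(scene_file, frame, num_procs):
    """Render one frame on this workstation, capped at num_procs CPUs."""
    cmd = [
        "renderer",                  # placeholder for Render/mayabatch/etc.
        "-threads", str(num_procs),  # hypothetical flag capping CPU usage
        "-frame", str(frame),
        scene_file,
    ]
    subprocess.run(cmd, check=True)

# e.g. leave two of eight cores free so the artist can keep working:
run_job("shot010.mb", frame=101, num_procs=6)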

CGTalk Moderation
08-29-2012, 10:03 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.