Old 05-06-2013, 07:33 AM   #1
sentry66
Expert
 
node crazy
USA
 
Join Date: May 2008
Posts: 2,106
who here runs 10 Gb ethernet and fast storage?

I've been asking my IT guys for a fast network storage solution to transfer 4K stereo projects to and from. It's mainly for me and one other person working with these large projects, and right now we have to wait 2+ hours to transfer a 400-600 GB folder to or from our current server. I'd like to cut that time down to 1/5 of what it is now. It's also way too slow to work directly from the server. I'd like to get performance across the network similar to what I get with my local SSD - around 420 MB/sec or higher.


So my question is: who here is running 10Gb Ethernet, and what transfer rates do you get when you copy project folders full of frames to or from your server?


Our IT guy says it's not possible to saturate even a gigabit network with folders that have hundreds of 80 MB files because there are so many files. Right now I get a 90-120 MB/sec transfer rate on gigabit to a single 5400rpm 2TB hard drive, and I always assumed that drive was the bottleneck. I've read reviews of 10Gb NASes that transfer 400-800 MB/sec in all sorts of file transfer benchmarks with large and small files - or even 2000+ MB/sec with multiple 10Gb links aggregated.

I don't have any 10Gb equipment to test. I've been thinking all you need is a couple of 10Gb network cards, a 10Gb-capable switch, and a server with a big RAID array.
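
As a rough back-of-the-envelope check (taking 500 GB and about 2.25 hours as midpoints of the numbers above - assumptions, not measurements), here's what the current rate works out to and what the 5x target implies:

```python
# Rough sanity-check numbers; folder size and duration are assumed midpoints.
folder_gb = 500          # midpoint of the 400-600 GB folder
current_hours = 2.25     # midpoint of "2+ hours"

current_mb_s = folder_gb * 1000 / (current_hours * 3600)   # ~62 MB/s effective
target_mb_s = current_mb_s * 5                             # the 1/5-the-time goal
ssd_mb_s = 420                                             # local SSD rate

print(f"current effective rate: {current_mb_s:.0f} MB/s")
print(f"5x target rate:         {target_mb_s:.0f} MB/s")
print(f"local SSD rate:         {ssd_mb_s} MB/s")

# Raw payload of the links in question, at 8 bits per byte (before overhead):
for name, gbit in [("1GbE", 1), ("10GbE", 10)]:
    print(f"{name} raw payload:     {gbit * 1000 / 8:.0f} MB/s")
```

So the 5x target (roughly 310 MB/s) is already well past what gigabit can carry, which is why I'm looking at 10Gb.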

Last edited by sentry66 : 05-06-2013 at 03:55 PM.
 
Old 05-06-2013, 05:22 PM   #2
olson
Houdini|Python|Linux
Luke Olson
Dallas, USA
 
Join Date: Jan 2007
Posts: 2,924
I don't have any 10GbE at the moment, but I have done extensive research and planning for implementing it at work. With 1GbE the bottleneck is definitely the network, because even a single modern hard disk can sustain more than 1Gb/s (125MB/s, at eight bits to a byte) when reading or writing continuously, let alone arrays with lots of disks.

With 10GbE the bottleneck will be the hard disk arrays unless the arrays are really big and use really fast drives. With a typical array of 7,200 RPM disks you can expect four or five times the performance you're getting now using 10GbE if the clients can keep up with the server. Maybe more depending on how much money can be thrown at the disk arrays.
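
To make that concrete, here's a rough sketch of where the bottleneck sits on each link speed - the disk throughput figures below are ballpark assumptions, not benchmarks:

```python
# Ballpark comparison of link capacity vs. storage throughput (assumed figures).

def usable_mb_s(gbit_per_s, efficiency=0.9):
    """Approximate usable MB/s for a link after protocol overhead."""
    return gbit_per_s * 1000 / 8 * efficiency

single_disk_mb_s = 150     # assumed: one modern 7,200 RPM disk, sequential
small_array_mb_s = 600     # assumed: conservative small RAID array, sequential

for name, gbit in [("1GbE", 1), ("10GbE", 10)]:
    link = usable_mb_s(gbit)
    bottleneck = "network link" if link < small_array_mb_s else "disk array"
    print(f"{name}: ~{link:.0f} MB/s usable vs. array ~{small_array_mb_s} MB/s "
          f"-> bottleneck is the {bottleneck}")
```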

It might be more complicated to set up than you're thinking, depending on how the network infrastructure is laid out. For example, if the server is more than 100 meters from the workstation you won't be able to use 10GBASE-T (copper wiring). Also, the main switch might not have any 10GbE ports, so another switch would be needed that uplinks to the main switch, and everything that needs 10GbE connectivity would sit on that separate switch. New cabling will likely be needed too (category 6a or 7 for 10GBASE-T, or fiber for runs longer than 100 meters).

Large backbone 10GbE switches have come down in price to around $140,000, like the Arista 7500 series, but that's still a lot of money for bandwidth most users don't need. Small switches that can uplink to the rest of the network have come way down in price recently, to less than $10,000, like the Netgear M7100 - but those are 10GBASE-T for use with copper cabling, so distance can be an issue in large facilities.

My planned install for a dozen workstations is about $35,000. That covers the Netgear switch, new cabling, all the 10GbE network adapters, and two SuperMicro barebones-based servers with 60TB each (one for production and one for backup). The workstations and data center are less than 100 meters apart, so it will be all copper. The main switch at work doesn't have 10GbE ports, so it will be on a separate subnet with a 1GbE uplink to the main switch. Hopefully this helps out.
__________________
http://www.whenpicsfly.com
 
Old 05-06-2013, 07:32 PM   #3
sentry66
Expert
 
node crazy
USA
 
Join Date: May 2008
Posts: 2,106
thanks olson, yeah we have cat6 cables here. I've been proposing putting the server in our department locally, though I know the IT dept (we're at a hospital) would rather keep it close to them.
 
Old 05-06-2013, 08:08 PM   #4
olson
Houdini|Python|Linux
Luke Olson
Dallas, USA
 
Join Date: Jan 2007
Posts: 2,924
Quote:
Originally Posted by sentry66
thanks olson, yeah we have cat6 cables here.


If they are the original category 6 specification they won't work; it's got to be 6a or 7. Ideally category 7, especially for longer distances.
__________________
http://www.whenpicsfly.com
 
Old 05-06-2013, 09:35 PM   #5
sentry66
Expert
 
node crazy
USA
 
Join Date: May 2008
Posts: 2,106
oh ok, yeah I'm not sure what they rigged into the walls, other than that I know it's cat6. The standalone patch cables they gave us are regular cat6, so we might need to put in some cat6a network drops - or fiber if we went that route.
 
Old 05-06-2013, 11:38 PM   #6
tswalk
Expert
 
Troy Walker
USA
 
Join Date: Jan 2012
Posts: 717
As a temporary boost you could implement NIC teaming on both ends and do a VLAN between your most active workstations and the existing file server... perhaps that could help reduce the transfer time a little until you can get the cabling overhaul and the fiber array in place?
__________________
-- LinkedIn Profile --
-- Blog --
-- Portfolio --
 
Old 05-07-2013, 12:37 PM   #7
dmeyer
raaaaaerrrrrrrrr!
 
designer
de-zy-nur
USA
 
Join Date: Nov 2002
Posts: 1,270
10GbE can be fast for large sequential file transfers, but you'll still have TCP latency issues if you're trying to work with many small files. It can work, but it's no panacea. If you're looking for shared storage that is fast, especially among just a few users, you may want to look at some of the switched mini-SAS style systems out there. The Jupiter system from OWC looks great and will get you 48Gb speeds without the latency of TCP, though they've been slow getting all the components of it to market. CalDigit used to have a similar system as well.
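
As a rough illustration of the small-file problem (the 10 ms per-file overhead here is an assumption for the sake of the model, not a measured figure):

```python
# Toy model: each file pays a fixed cost (open/close, metadata round trips,
# TCP ramp-up) on top of its transfer time, so small files drag the average down.

def effective_mb_s(file_mb, link_mb_s=1000, per_file_overhead_s=0.010):
    transfer_s = file_mb / link_mb_s
    return file_mb / (transfer_s + per_file_overhead_s)

for file_mb in (1, 8, 30, 80):
    print(f"{file_mb:3d} MB files -> roughly {effective_mb_s(file_mb):4.0f} MB/s effective")
```

With 80 MB frames the per-file cost is mostly noise; with thousands of single-digit-MB frames it starts to bite.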

Also, if you do go 10GbE, make sure you have a server room to put the switch in. They are actively cooled by the smallest, loudest fans ever designed.
__________________
Fix it in pre.
 
Old 05-07-2013, 02:07 PM   #8
ThE_JacO
MOBerator-X
 
Raffaele Fragapane
That Creature Dude
Animal Logic
Sydney, Australia
 
Join Date: Jul 2002
Posts: 10,954
You are aware that you're looking at a six-figure overhaul, with plenty of nagging problems to solve before seeing maybe a 3 to 5x improvement, right?
And not barely into six figures either; it's very easy to dig deep into them if you require a certain amount of online storage.

Most places work from small, locally set-up stashes and run pre-fetch jobs nightly to move the files needed for the following day. It's a helluva lot cheaper than being able to request terabytes and expect them to be available on demand within a few minutes.

In a many-files scenario you can't expect better than 400MB/s (so barely on par with your SSD) if you'll be sharing it with someone, and often less than that, down in the 200s. And that's with very well set up, expensive kit.
Several TB worth of fibre-attached online storage reliably hitting the GB/s mark (which would barely be in your expected 5x range) is a very expensive proposition.

Unless you have material that needs immediate, fast access as soon as it's obtained - if you have a few hours of buffer between picking up the sources and working on them - you could invest a fraction of that budget in fast local online storage accessible from the workstations, plus more than decent automation to pre-fetch overnight or on pick-up.
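
Something as simple as the following, kicked off from cron overnight, covers most of that pre-fetch automation. This is a minimal sketch assuming rsync is available; the hostname, project names and paths are made up for illustration:

```python
# Minimal pre-fetch sketch: pull tomorrow's material from the central server
# onto fast local storage overnight. All hostnames/paths below are hypothetical.
import os
import subprocess

JOBS = [
    # (source on the central server,          destination on local fast storage)
    ("fileserver:/projects/show_a/plates/",   "/mnt/fast_local/show_a/plates/"),
    ("fileserver:/projects/show_a/comps/",    "/mnt/fast_local/show_a/comps/"),
]

for src, dst in JOBS:
    os.makedirs(dst, exist_ok=True)
    # -a preserves attributes, --partial lets interrupted copies resume
    subprocess.run(["rsync", "-a", "--partial", src, dst], check=True)
```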
__________________
"As an online CG discussion grows longer, the probability of the topic being shifted to subsidies approaches 1"

Free Maya Nodes
 
Old 05-07-2013, 02:39 PM   #9
sentry66
Expert
 
node crazy
USA
 
Join Date: May 2008
Posts: 2,106
I was thinking 3 10Gb network cards - $350 each, for 2 clients and a server
a 48-port gigabit switch with 4 10Gb ports - $1900
a server - $3500
with a large RAID5 - $3000
2 cat6a wall drops - $150 each
= $9,750
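
Adding those up as a quick sanity check:

```python
# Quick sum of the line items quoted above.
items = {
    "3x 10Gb NICs @ $350":          3 * 350,
    "48-port switch w/ 4x 10Gb":    1900,
    "server":                       3500,
    "large RAID5":                  3000,
    "2x cat6a wall drops @ $150":   2 * 150,
}
print(sum(items.values()))   # 9750
```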


I'm sure it'd cost well over six figures to outfit our entire dept and everyone's computers with 10Gb. I'm just asking for a small local solution for 2, possibly 3 people later.



I've talked with a CG animator who has 10Gb at her studio, and she says they get between 250-400 MB/sec when they copy folders of thousands of 1-30 MB frames.

Last edited by sentry66 : 05-07-2013 at 02:59 PM.
 
Old 05-07-2013, 03:34 PM   #10
olson
Houdini|Python|Linux
Luke Olson
Dallas, USA
 
Join Date: Jan 2007
Posts: 2,924
Quote:
Originally Posted by tswalk
As a temporary boost you could implement NIC teaming on both ends and do a VLAN between your most active workstations and the existing file server... perhaps that could help reduce the transfer time a little until you can get the cabling overhaul and the fiber array in place?


That's not a bad idea if there are some spare cable runs and spare ports on the machines. If it would require new cable runs, get them to pull category 7 cable so the move to 10GbE will be easier later.

Quote:
Originally Posted by sentry66
I was thinking 3 10Gb network cards - $350 each, for 2 clients and a server
a 48-port gigabit switch with 4 10Gb ports - $1900
a server - $3500
with a large RAID5 - $3000
2 cat6a wall drops - $150 each
= $9,750


There are cheaper switches available if you literally need just three ports, like this 8-port switch from Netgear. It's about $1,000 from online shops.

http://www.netgear.com/business/pro...hes/XS708E.aspx
__________________
http://www.whenpicsfly.com
 
Old 05-07-2013, 06:06 PM   #11
sentry66
Expert
 
node crazy
USA
 
Join Date: May 2008
Posts: 2,106
nice, thanks olson. That's a better switch for our purposes. We have a ton of gigabit switches already so that switch would eliminate buying stuff we don't need.
 
Old 05-08-2013, 01:27 AM   #12
NicholasG
New Member
 
Nicholas Golden
New York, USA
 
Join Date: Jan 2013
Posts: 16
Are you and the person you're transferring files with local to one another? New Haswell motherboards will support full USB 3.0 speeds, meaning 10Gb/s transfers. If you can swap hard drives locally then this may not be that big of an issue for you starting next month.
 
Old 05-08-2013, 01:41 AM   #13
ThE_JacO
MOBerator-X
 
Raffaele Fragapane
That Creature Dude
Animal Logic
Sydney, Australia
 
Join Date: Jul 2002
Posts: 10,954
With a short stash of storage over a short cable run it's actually very easy, and a lot cheaper, to reach 4-6Gb speeds per user with only a couple of users, even without manually swapping cables.

That's close-by online storage though, not so much high-bandwidth networking from central data.
__________________
"As an online CG discussion grows longer, the probability of the topic being shifted to subsidies approaches 1"

Free Maya Nodes
 
Old 05-08-2013, 05:12 AM   #14
olson
Houdini|Python|Linux
Luke Olson
Dallas, USA
 
Join Date: Jan 2007
Posts: 2,924
Quote:
Originally Posted by NicholasG
Are you and the person you're transferring files with local to one another? New Haswell motherboards will support full USB 3.0 speeds, meaning 10Gb/s transfers. If you can swap hard drives locally then this may not be that big of an issue for you starting next month.


USB 3.0 has about 4Gb/s of usable bandwidth, not 10Gb/s. You might be thinking of Thunderbolt, which is 10Gb/s. For the price it would be great for getting stuff between two computers sitting next to each other (USB 3.0 or Thunderbolt), but it sounds like the original poster wants to transfer files to a centralized storage server located in the data center.
__________________
http://www.whenpicsfly.com
 
Old 05-08-2013, 05:23 PM   #15
sentry66
Expert
 
node crazy
USA
 
Join Date: May 2008
Posts: 2,106
Yeah, I've been using USB 3 for 3 years now, along with a USB-SATA adapter to plug SSDs into for fast portable storage.

We need a centralized server setup that we can transfer data to and from quickly. It sounds like we might be growing in staff over the next couple of years, and we need a fast networked server solution in place.

Just the other day I needed to make a correction to a finished 4K 60fps stereo composite. If the project were on my local machine I could output it in a couple of hours, but because my local SSD is full with another project right now, it takes 12 hours across the network. It's a pain.
 