who here runs 10 Gb ethernet and fast storage?


#1

I’ve been asking my IT guys for a fast network storage solution for transferring 4K stereo projects back and forth. It’s mainly for me and one other person working with these large projects, and right now we have to wait 2+ hours to transfer a 400-600 GB folder to or from our current server. I’d like to cut that time down to 1/5 of what it is now. It’s also way too slow to work directly from the server. I’d like to get performance across the network similar to my local SSD - around 420 MB/s or higher.
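
As a quick sanity check on those numbers, here’s the back-of-the-envelope math (a minimal sketch; the 500 GB folder size is just the midpoint of the 400-600 GB range above):

```python
# Rough throughput math for the transfer times described above.
# Assumes a 500 GB folder (midpoint of 400-600 GB) copied in 2 hours.

folder_gb = 500            # approximate project folder size in GB
current_hours = 2.0        # current transfer time
target_factor = 5          # want 1/5 of the current time

current_mb_s = folder_gb * 1000 / (current_hours * 3600)
target_mb_s = current_mb_s * target_factor

print(f"current effective rate: ~{current_mb_s:.0f} MB/s")       # ~69 MB/s
print(f"rate needed for 1/5 the time: ~{target_mb_s:.0f} MB/s")  # ~347 MB/s
```

Anything much past 125 MB/s is already more than a single gigabit link can carry, so hitting that target means a faster link (10Gb or aggregated) plus storage that can keep up.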

So my question is: who here is running 10Gb Ethernet, and what transfer rates are you getting when you copy project folders full of frames to or from your server?

Our IT guy says it’s not possible to saturate even a gigabit network with folders that hold hundreds of 80 MB files because there are so many files. Right now I get a 90-120 MB/s transfer rate over gigabit to a single 5400 RPM 2TB hard drive, and I always assumed that drive was the bottleneck. I’ve read reviews of 10Gb NASes that transfer 400-800 MB/s in all sorts of file transfer benchmarks with large and small files - or even 2000+ MB/s with multiple 10Gb links aggregated.

I don’t have any 10Gb equipment to test. I’ve been thinking all you need is a couple 10Gb network cards, a 10Gb capable switch, and a server with a big RAID array.


#2

I don’t have any 10GbE at the moment but have done extensive research and planning for implementing it at work. With 1GbE the bottleneck is definitely the network, because even a single modern hard disk can sustain more than 1Gb/s (125MB/s; eight bits to a byte) when reading or writing continuously, let alone arrays with lots of disks.
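
To put the unit conversion in concrete terms, here’s a small sketch of that arithmetic (the ~90% factor for protocol overhead is just a rough assumption, not a measured figure):

```python
# Line-rate ceilings for 1GbE and 10GbE, converted from bits to bytes.
# The overhead factor is a rough assumption for TCP/IP and SMB/NFS protocol cost.

def link_ceiling_mb_s(gigabits_per_second: float, overhead_factor: float = 0.9) -> float:
    """Approximate usable MB/s for a given link speed in Gb/s."""
    raw_mb_s = gigabits_per_second * 1000 / 8   # bits -> bytes
    return raw_mb_s * overhead_factor

print(f"1GbE:  ~{link_ceiling_mb_s(1):.0f} MB/s usable (125 MB/s raw)")
print(f"10GbE: ~{link_ceiling_mb_s(10):.0f} MB/s usable (1250 MB/s raw)")
```

Which lines up with the 90-120 MB/s you’re already seeing on gigabit - the wire, not the single drive, is the cap there.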

With 10GbE the bottleneck will be the hard disk arrays unless the arrays are really big and use really fast drives. With a typical array of 7,200 RPM disks on 10GbE you can expect four or five times the performance you’re getting now, provided the clients can keep up with the server. Maybe more, depending on how much money can be thrown at the disk arrays.

It might be more complicated to set up than you’re thinking, depending on how the network infrastructure is laid out. For example, if the server is more than 100 meters from the workstation you won’t be able to use 10GBASE-T (copper wiring). Also, the main switch might not have any 10GbE ports, so another switch would be needed that uplinks to the main switch, and everything that needs 10GbE connectivity would sit on that separate switch. New cabling will likely be needed too (Category 6a or 7 for 10GBASE-T, or fiber for runs longer than 100 meters).

Large backbone 10GbE switches like the Arista 7500 series have come down in price to around $140,000, but that’s still a lot of money for bandwidth most users don’t need. Small switches that can uplink to the rest of the network, like the Netgear M7100, have come way down recently to less than $10,000, but they’re 10GBASE-T for copper cabling, so distance can be an issue in large facilities.

My planned install for a dozen workstations is about $35,000. That covers the Netgear switch, new cabling, all the 10GbE network adapters, and two SuperMicro barebones-based servers with 60TB each (one for production and one for backup). The workstations and data center are less than 100 meters apart so it will be all copper. The main switch at work doesn’t have 10GbE ports, so it will be on a separate subnet with a 1GbE uplink to the main switch. Hopefully this helps out. :thumbsup:


#3

thanks olson, yeah we have cat6 cables here. I’ve been proposing putting the server in our department locally, though I know the IT dept (we’re at a hospital) would rather keep it close to them.


#4

If they are the original Category 6 specification they’re only rated for 10GBASE-T over short runs (about 55 meters at best), so for full-length runs it’s got to be 6a or 7. Ideally Category 7, especially for longer distances.


#5

oh ok, yeah I’m not sure what they rigged into the walls other than I know they’re cat6. The standalone patch cables they gave us are regular cat6, so we might need to put in some cat6a network drops - or fiber if we went that route.


#6

as a temporary boost you could implement NIC teaming on both ends and set up a VLAN between your most active workstations and the existing file server… perhaps that could help reduce the transfer time a little until you can get the cabling overhaul and the fiber array in place?


#7

10GbE can be fast for large sequential file transfers, but you’ll still have TCP latency issues if you’re trying to work with many small files. It can work but is no panacea. If you’re looking for shared storage that is fast, especially among just a few users, you may want to look at some of the switched mini-SAS style systems out there. The Jupiter system from OWC looks great and will get you 48Gb speeds without the latency of TCP, though they’ve been slow getting all the components of it to market. CalDigit used to have a similar system as well.

Also, if you do go 10GbE, make sure you have a server room to put the switch. They are actively cooled with the smallest, loudest fans ever designed.


#8

You are aware that you’re looking at a six-figure overhaul with many nagging things to solve before seeing, maybe, a 3 to 5x improvement, right?
And not barely into six figures - it’s very easy to dig deep into it if you need a serious amount of online storage.

Most places deal with locally set-up short-term stashes and run pre-fetch jobs nightly to move the files for the following day; it’s a helluva lot cheaper than being able to request terabytes and expect them to be available on demand in a few minutes :slight_smile:

In a many-files scenario you can’t expect better than 400 MB/s (so barely on par with your SSD) if you’re sharing it with someone - often less than that, down in the 200s. And that’s with very well set up, expensive kit.
Several TB worth of fibre-attached online storage reliably hitting the 1 GB/s mark (which would -barely- be in your expected 5x range) is a very expensive proposition.

Unless you have things that need immediate, fast access the moment they arrive - that is, if you have a few hours of buffer between picking up sources and working on them - you could invest a fraction of the budget in a fast local store accessible from the workstation plus a more than decent automation that pre-fetches overnight or on pick-up.
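
Something like the sketch below could be the skeleton of that pre-fetch automation. It’s a minimal example only: the server share, local stash path, and project list are hypothetical placeholders, and you’d schedule it overnight with cron or Task Scheduler.

```python
"""Overnight pre-fetch: copy tomorrow's project folders from the central
server to a fast local stash so they're staged before anyone needs them.

All paths and the project list are hypothetical placeholders.
"""
import shutil
from pathlib import Path

SERVER_SHARE = Path(r"\\fileserver\projects")   # hypothetical central share
LOCAL_STASH = Path(r"D:\local_stash")           # hypothetical fast local storage
PROJECTS_TO_FETCH = ["job_1234_4k_stereo"]      # folders needed tomorrow

def prefetch(project: str) -> None:
    src = SERVER_SHARE / project
    dst = LOCAL_STASH / project
    if dst.exists():
        print(f"{project}: already staged, skipping")
        return
    print(f"{project}: copying {src} -> {dst}")
    shutil.copytree(src, dst)   # plain copy; rsync/robocopy would add resume support

if __name__ == "__main__":
    for name in PROJECTS_TO_FETCH:
        prefetch(name)
```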


#9

I was thinking:
3 x 10Gb network cards - $350 each, for 2 clients and a server
a 48-port gigabit switch with 4 10Gb ports - $1,900
a server - $3,500
with a large RAID 5 array - $3,000
2 cat6a wall drops - $150 each
= about $9,750 (quick tally below)
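
Here’s that tally spelled out (just re-doing the arithmetic above; the prices are my rough estimates):

```python
# Sum of the parts listed above (prices are rough estimates, not quotes).
parts = {
    "10Gb network cards (3 x $350)": 3 * 350,
    "48-port gigabit switch with 4 x 10Gb ports": 1900,
    "server": 3500,
    "large RAID 5 array": 3000,
    "cat6a wall drops (2 x $150)": 2 * 150,
}
print(f"estimated total: ${sum(parts.values()):,}")   # $9,750
```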

I’m sure it’d cost well over six figures to outfit our entire dept and everyone’s computer with 10Gb. I’m just asking for a small local solution for 2 people, possibly 3 later.

I’ve talked with a CG animator who has 10Gb at their studio, and she says they get a 250-400 MB/s transfer rate when they copy folders of thousands of 1-30 MB frames.


#10

That’s not a bad idea if there are some spare cable runs and spare ports on the machines. If it would require new cable runs get them to do category 7 cables so the move to 10GbE will be easier later.

There are cheaper switches available if you literally need just three ports. Like this 8 port switch from Netgear. It’s about $1,000 from online shops.

http://www.netgear.com/business/products/switches/prosafe-plus-switches/XS708E.aspx


#11

nice, thanks olson. That’s a better switch for our purposes. We have a ton of gigabit switches already so that switch would eliminate buying stuff we don’t need.


#12

Are you and the person you’re transferring files with local to one another? New Haswell motherboards will support full USB 3.0 speeds, meaning 10Gb/s transfers. If you can swap hard drives locally then this may not be that big of an issue for you starting next month.


#13

A short-cable local stash of storage is actually a very easy and much cheaper way to reach 4-6Gb/s per user with only a couple of users, even without manually swapping cables.

That’s a near-line local storage thing though, not so much high-bandwidth networking from central data.


#14

USB 3.0 has about 4Gb/s of usable bandwidth, not 10Gb/s. You might be thinking of Thunderbolt which is 10Gb/s. For the price it would be great to get stuff between two computers next to each other (USB 3.0 or Thunderbolt) but it sounds like the original poster wants to transfer files to a centralized storage server located in the data center.
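
For the curious, the ~4Gb/s figure falls out of USB 3.0’s line coding; here’s the arithmetic (nominal spec numbers, before any protocol overhead):

```python
# USB 3.0 SuperSpeed: 5 Gb/s signaling, with 8b/10b line coding
# (8 data bits per 10 bits on the wire), before protocol overhead.

signal_gb_s = 5.0                      # SuperSpeed signaling rate
data_gb_s = signal_gb_s * 8 / 10       # usable data rate after 8b/10b coding
print(f"usable data rate: {data_gb_s} Gb/s (~{data_gb_s * 1000 / 8:.0f} MB/s)")
# -> usable data rate: 4.0 Gb/s (~500 MB/s); real-world is typically closer to 400 MB/s
```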


#15

Yeah, I’ve been using USB 3.0 for 3 years now along with a USB-SATA adaptor to plug SSDs into for fast portable storage.

We need a centralized server setup that we can transfer data to quickly. It’s sounding like we might be growing in staff over the next couple of years, and we need a fast networked server solution in place.

Just the other day I needed to make a correction to a finished 4K 60fps stereo composite. If the project were on my local machine I could output it in a couple of hours, but because my local SSD is full with another project right now, it takes 12 hours to do across the network. It’s a pain.


#16

Nope, not thinking of thunderbolt.

http://www.techspot.com/news/51246-usb-30-update-will-double-transfer-speeds-to-10gbps-in-2014.html


#17

That will be nice in a year or two but it doesn’t help this thread. :shrug:


#18

You have to keep in mind that our enterprise hospital IT team wouldn’t deploy a consumer Ivy Bridge system as a server. From their point of view, it has to be a certified enterprise solution or they won’t even consider it.

It’ll end up being some sort of rackmount enterprise NAS Xeon server. Right now they’re looking into whether their existing NetApp storage will be fast enough.

You can imagine the effort it took to argue for spending that $14k rendering budget efficiently: I wanted to buy 10 overclocked 3930K i7 Linux systems (the IT dept only supports Windows) that I maintain myself, as opposed to buying 2 enterprise rackmount Xeon servers running Windows Server 2008 that they’d maintain. I needed the most processing power possible from that money, and they would have had to fork out $85k (which they didn’t have) to reach the same overall rendering performance with enterprise rackmount servers.

I was able to win that one since rendering is my field and that particular funding was limited. For dedicated file servers, though, that’s their decision to make, and the funding comes from a different budget.


#19

oh brother how I do know…


#20

yeah it’s been interesting working with the IT dept. They see things from a bigger picture beyond just our dept. They’d prefer not to have every dept roll its own one-off solution because that costs more to support than having more unified systems across departments. They’d rather not buy a band-aid solution for us now and would rather wait for their big solution to be implemented system-wide.