thanks olson, yeah we have cat6 cables here. I’ve been proposing putting the server in our department locally, though I know the IT dept (we’re at a hospital) would rather keep it close to them.
who here runs 10 Gb ethernet and fast storage?
If they are the original category 6 specification they’re only rated for 10GBASE-T over short runs (roughly 55 m at best, less with alien crosstalk), so for anything longer it’s got to be 6a or 7. Ideally category 7, especially for longer distances.
oh ok, yeah I’m not sure what they rigged into the walls other than I know they’re cat6. The standalone patch cables they gave us are regular cat6, so we might need to put in some cat6a network drops - or fiber if we went that route.
as a temporary boost you could set up NIC teaming on both ends and a vlan between your most active workstations and the existing file server… perhaps that could help reduce the transfer time a little until you can get the cabling overhaul and the fiber array in place?
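Rough numbers on what teaming two gigabit links would actually buy, with an assumed ~5% protocol overhead (my estimate, not a measurement):

```python
# Back-of-envelope estimate of what teaming two gigabit links buys you.
# The ~5% framing/protocol overhead figure is an assumption, not a measurement.
LINK_GBPS = 1.0      # one gigabit link
LINKS = 2            # two teamed NICs on each end
OVERHEAD = 0.95      # rough allowance for Ethernet/IP/TCP framing

usable_mb_per_s = LINK_GBPS * LINKS * OVERHEAD * 1000 / 8
print(f"~{usable_mb_per_s:.0f} MB/s aggregate, best case")  # ~238 MB/s

# Caveat: a single TCP stream (one file copy) still hashes onto one link,
# so a lone transfer stays around ~119 MB/s; teaming mainly helps when
# two workstations hit the server at the same time.
```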
10GbE can be fast for large sequential file transfers, but you’ll still have TCP latency issues if you’re working with many small files. It can work but it’s no panacea. If you’re looking for fast shared storage, especially among just a few users, you may want to look at some of the switched mini-SAS style systems out there. The Jupiter system from OWC looks great and will get you 48Gb speeds without the latency of TCP, though they’ve been slow getting all the components of it to market. CalDigit used to have a similar system as well.
Also, if you do go 10GbE, make sure you have a server room to put the switch. They are actively cooled with the smallest, loudest fans ever designed.
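To put a rough number on the small-files point, here’s a simple sketch; the per-file overhead value is an assumption, not a benchmark:

```python
# Rough model of copying a folder of many small frames over 10GbE.
# The per-file overhead (open/close, metadata round trips) is an assumption.
LINK_MBPS = 1100             # ~usable 10GbE throughput for big sequential reads
PER_FILE_OVERHEAD_S = 0.004  # assumed ~4 ms of protocol/metadata cost per file

def effective_mb_per_s(file_mb: float) -> float:
    transfer_s = file_mb / LINK_MBPS
    return file_mb / (transfer_s + PER_FILE_OVERHEAD_S)

for size in (1, 5, 30, 500):
    print(f"{size:>4} MB files -> ~{effective_mb_per_s(size):.0f} MB/s")

# 1 MB frames come out around ~200 MB/s, 30 MB frames around ~960 MB/s,
# and only big sequential files approach line rate.
```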
You are aware that you’re looking at a six-figure overhaul with many nagging things to solve before seeing, maybe, a 3 to 5x improvement, right?
And not just barely into six figures; it’s very easy to dig deep into them if you require a certain amount of online storage.
Most places deal with locally set-up scratch stashes and run pre-fetch jobs to move the files nightly for the following day; it’s a helluva lot cheaper than being able to request terabytes and expect them to be available on demand in a few minutes.
In a many-files scenario you can’t expect better than 400 MB/s (so barely on par with your SSD) if you’re sharing it with someone, and often less than that, down in the 200s. And that’s with very well set up, expensive kit.
Several TB worth of fibered-up online storage reliably hitting the 1 GB/s mark (which would barely be in your expected 5x range) is a very expensive proposition.
Unless you have material that needs immediate, fast access the moment you get it - if you have a few hours of buffer between pick-up and when you actually work on the sources - you could invest a fraction of the budget in a fast local scratch pool accessible from the workstations, plus a more than decent bit of automation to pre-fetch overnight or on pick-up.
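Something like this is all that automation really needs to be - a minimal sketch of a nightly pre-fetch job, with the mount points, job list format, and folder layout all made up for illustration:

```python
#!/usr/bin/env python3
"""Minimal nightly pre-fetch sketch: stage tomorrow's source folders from the
central share onto the workstation's fast local scratch. Paths are examples."""
import shutil
from pathlib import Path

CENTRAL_SHARE = Path("/mnt/central/projects")      # hypothetical mount of the slow central storage
LOCAL_SCRATCH = Path("/scratch/prefetch")          # hypothetical fast local SSD/RAID pool
JOB_LIST = Path("/scratch/prefetch/tomorrow.txt")  # one project folder name per line

def prefetch():
    LOCAL_SCRATCH.mkdir(parents=True, exist_ok=True)
    for name in JOB_LIST.read_text().splitlines():
        name = name.strip()
        if not name:
            continue
        src = CENTRAL_SHARE / name
        dst = LOCAL_SCRATCH / name
        if src.is_dir() and not dst.exists():
            print(f"staging {src} -> {dst}")
            shutil.copytree(src, dst)

if __name__ == "__main__":
    prefetch()  # run overnight from cron / Task Scheduler
```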
I was thinking 3 10Gb network cards - $350 each, for 2 clients and a server
a 48-port gigabit switch with 4 10Gb ports - $1900
a server - $3500
with a large RAID5 - $3000
2 cat6a wall drops $150 each
= $9750
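Quick check of that parts list, just summing the prices as quoted (nothing added):

```python
# Adding up the parts list above, prices as quoted.
parts = {
    "10Gb NIC x3":               3 * 350,
    "48-port switch w/ 4x 10Gb": 1900,
    "server":                    3500,
    "RAID5 array":               3000,
    "cat6a wall drops x2":       2 * 150,
}
print(sum(parts.values()))  # 9750
```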
I’m sure it’d cost well over six figures to outfit our entire dept and everyone’s computer to 10 Gb. I’m just asking for a small local solution for 2, possibly 3 people later.
I’ve talked with a CG animator who has 10 Gb at their studio and she says they get between 250-400 MB/sec transfer rate when they copy their folders of thousands of 1-30 meg frames.
That’s not a bad idea if there are some spare cable runs and spare ports on the machines. If it would require new cable runs, get them to pull category 7 cable so the move to 10GbE will be easier later.
There are cheaper switches available if you literally need just three ports. Like this 8 port switch from Netgear. It’s about $1,000 from online shops.
http://www.netgear.com/business/products/switches/prosafe-plus-switches/XS708E.aspx
nice, thanks olson. That’s a better switch for our purposes. We have a ton of gigabit switches already so that switch would eliminate buying stuff we don’t need.
Are you and the person you’re transferring files with local to one another? New Haswell motherboards will support full USB 3.0 speeds, meaning 10 Gb/s transfers. If you can swap hard drives locally then this may not be that big of an issue for you starting next month.
A short-cable local stash of storage is actually a very easy and a lot cheaper way to reach 4-6 Gb speeds per user with only a couple of users, even without manually swapping cables.
That’s a local, close-to-the-workstation storage thing though, not so much high-bandwidth networking from central data.
USB 3.0 has about 4 Gb/s of usable bandwidth, not 10 Gb/s. You might be thinking of Thunderbolt, which is 10 Gb/s. For the price it would be a great way to get stuff between two computers sitting next to each other (USB 3.0 or Thunderbolt), but it sounds like the original poster wants to transfer files to a centralized storage server located in the data center.
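For reference, the rough math behind that ~4 Gb/s figure: USB 3.0 signals at 5 Gb/s, but 8b/10b line coding eats 20% of that before protocol overhead even starts.

```python
# USB 3.0 (SuperSpeed) signalling rate vs. what's actually usable for data.
SIGNAL_GBPS = 5.0  # USB 3.0 line rate
ENCODING = 8 / 10  # 8b/10b encoding: 8 data bits carried per 10 line bits

usable_gbps = SIGNAL_GBPS * ENCODING
print(f"{usable_gbps:.1f} Gb/s ~ {usable_gbps * 1000 / 8:.0f} MB/s")  # 4.0 Gb/s ~ 500 MB/s
# Real-world copies land a bit below that once protocol overhead
# and the drive itself are factored in.
```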
Yeah, I’ve been using USB 3 for 3 years now along with a USB-SATA adaptor to plug SSDs into for fast portable storage.
We need a centralized server setup that we can transfer data to quickly. It’s sounding like we might be growing in staff over the next couple of years and we need a fast networked server solution in place.
Just the other day I needed to make a correction to a finished 4K 60fps stereo composite. If the project were on my local machine, I could output it in a couple of hours, but because my local SSD is full with another project right now it has to go across the network and takes 12 hours. It’s a pain.
Nope, not thinking of thunderbolt.
http://www.techspot.com/news/51246-usb-30-update-will-double-transfer-speeds-to-10gbps-in-2014.html
You have to keep in mind that our enterprise hospital IT team wouldn’t deploy a consumer ivy bridge system as a server. From their point of view, it has to be a certified enterprise solution or they won’t even consider it.
It’ll end up being some sort of rackmount enterprise NAS xeon server. Right now they’re looking into if their existing NetApp storage will be fast enough.
You can imagine the effort it took to argue how to efficiently spend $14k on rendering: I wanted to buy 10 overclocked 3930K i7 Linux systems (the IT dept only supports Windows) that I’d maintain myself, as opposed to buying 2 enterprise rackmount Xeon servers running Windows Server 2008 that they’d maintain. I needed the most processing power from that money as possible, and they would have needed to fork out $85k (which they didn’t have) to reach the same overall rendering performance with enterprise rackmount servers.
I was able to win in that case since rendering is my job field and that particular funding was limited. For dedicated file servers though, that’s their area to make that decision and the funding comes from a different budget.
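For what it’s worth, the price/performance gap from those two quotes works out to roughly this (using only the figures above):

```python
# Rough price/performance comparison from the rendering purchase above.
diy_cost = 14_000         # 10 overclocked 3930K Linux boxes, self-maintained
enterprise_cost = 85_000  # quoted cost to match that render throughput with rackmount Xeons

print(f"~{enterprise_cost / diy_cost:.1f}x more money for the same render throughput")  # ~6.1x
```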
yeah it’s been interesting working with the IT dept. They see things from a bigger picture beyond just our dept. They’d prefer that not every dept does its own custom one-off solution, because it costs more to support vs having more unified systems across different depts. They’d rather not buy a band-aid solution for us now and would rather wait for their big solution to be implemented system-wide.
I really don’t get why they don’t call that USB 4. I guess since it’s not 10x faster than current USB 3? It’s going to get confusing if they don’t call it USB 3.1 or USB 3 v2 - something different from USB 3.