In this 3rd (and I’m thinking final) part of the return to the homelab series I want to write about storage. I’ve followed other people’s adventures over the years, and it seemed that the Synology series of NAS boxes keep coming up trumps. I did consider “building my own” with something like the Nexenta Community Edition – because there was the possibility of going 10Gbps with InfiniBand. However, all things considered, I know most of my storage traffic is generated by deploying VMs from templates and by moving VMs from one type of storage to another using Storage vMotion. I bet my running VMs barely touch even a 1Gbps interface…

The thing that impressed me about the Synology series is how they have implemented the VAAI (and Microsoft ODX) extensions in the firmware. I opted for the Synology DS1513+, which is their 5-bay model. I decided that I probably didn’t need the next level up to gain capacity or performance. For me the critical thing is performance, as I hate having to wait around for template clones to complete. My previous home lab used an IOMEGA NAS device with SATA drives. That was an earlier edition which didn’t support VAAI, because VAAI wasn’t even around then (I think), so it’s perhaps unfair to compare the two devices. The critical thing for me, having walked away from the colo where I enjoyed access to both NetApp and EqualLogic arrays (both SAS/SATA), was not to feel compromised on the storage performance front. So I took the radical decision to buy SSDs for the entire array, as well as paying for a 2GB RAM upgrade to the onboard cache. I’m pretty sure the only bottleneck in the Synology is the CPU, or maybe the network when I carry out non-VAAI-enabled tasks (like copying ISOs around…). By going down this route I hoped to remove any bottleneck from a storage perspective in the new homelab.
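As an aside, if you want to confirm which devices an ESXi host actually treats as VAAI-capable, you can ask the vSphere API rather than trusting the brochure. Here’s a minimal pyVmomi sketch, assuming a vCenter at vcenter.lab.local and lab credentials (both hypothetical names); vStorageSupport is the property behind the “Hardware Acceleration” status you see in the client.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # homelab self-signed certs
si = SmartConnect(host="vcenter.lab.local",       # hypothetical vCenter/creds
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    devices = host.configManager.storageSystem.storageDeviceInfo
    for lun in devices.scsiLun:
        # Only disk devices carry the vStorageSupport (VAAI) flag
        if isinstance(lun, vim.host.ScsiDisk):
            print(host.name, lun.displayName, lun.vStorageSupport)
view.DestroyView()
Disconnect(si)
```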

Adding the 2GB of memory to the Synology was pretty straightforward and cheap. A quick Google of the Synology forums ID’d a 2GB module that other users had found no compatibility problems with: the Hynix HMT325S6BFR8C-H9 N0 AA (2GB DDR3-10600, 204-pin SODIMM).

IMG_2980
Not sure if pink polythene counts as anti-static protection, but what do you expect for £15?

This extra RAM (in addition to the 2GB on board) was picked up perfectly fine – I could see the full 4096MB in System Information.

Screen Shot 2014-02-10 at 17.10.15

My next task was to fit the SSD drives. I picked the SanDisk Extreme II drive, which has 480GB raw capacity. It’s a desktop drive rather than an enterprise one, but still on the Synology HCL. Enterprise drives offer higher quality and resilience, but the price, considering the capacity they give you, is still hefty; desktop ones are cheaper, but last a shorter period. I felt the SanDisks offered a good compromise between price and capacity, plus they were on special offer on Amazon.

Fitting the drives meant popping the plastic lugs off the caddies (used to allow SATA disks to clip in without screws) and using the little screws provided to fix each SSD to its caddy.

fitSSD

I powered on the device and did an update. This is important because most of these “home” NAS boxes don’t support hot-updates.

My plan is to use vSphere Replication to replicate important VMs from the Synology to the IOmega. I’m also intending to use the IOmega as a backup target. Right now it’s holding my .ISOs and what I call my “legacy templates”. So on the Synology I have my current templates (Windows2012-R2, Windows7, Windows8.1, Windows2008-R2, along with some current Linux distros), whereas the IOmega is holding legacy templates I hardly use (Win-NT4, Win2000, Win2003, WinXP and so on). I think this is a good compromise: re-using the IOmega (which is perfectly serviceable for iSCSI/NFS/SMB sharing) for low-level storage, and keeping the Synology for “live” VMs and templates. In fact I’ve ended up with a tiered storage offering suitable for vCD or vCAC…

  • Platinum – Synology, iSCSI (SSD)
  • Gold – Synology, NFS (SSD)
  • Silver – IOMEGA, iSCSI (SATA)
  • Bronze – IOMEGA, NFS (SATA)
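If you wanted to drive placement decisions off those tiers programmatically (say, from a provisioning script rather than vCD/vCAC itself), the mapping is simple enough to express in code. A trivial Python sketch – the datastore names here are made up for illustration:

```python
# Hypothetical tier-to-datastore map for the homelab; datastore names
# are illustrative, not pulled from vCD/vCAC.
TIERS = {
    "Platinum": {"array": "Synology", "protocol": "iSCSI", "media": "SSD",
                 "datastore": "SYN-iSCSI-01"},
    "Gold":     {"array": "Synology", "protocol": "NFS",   "media": "SSD",
                 "datastore": "SYN-NFS-01"},
    "Silver":   {"array": "IOMEGA",   "protocol": "iSCSI", "media": "SATA",
                 "datastore": "IOM-iSCSI-01"},
    "Bronze":   {"array": "IOMEGA",   "protocol": "NFS",   "media": "SATA",
                 "datastore": "IOM-NFS-01"},
}

def datastore_for(tier: str) -> str:
    """Return the datastore name backing a given storage tier."""
    return TIERS[tier]["datastore"]

print(datastore_for("Platinum"))   # SYN-iSCSI-01
```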

Why is iSCSI before NFS? Two reasons. Firstly, the Synology DSM software currently only supports VAAI on iSCSI, not on NFS. Apparently the rumour is that the next drop of Synology’s firmware will add VAAI support for NFS, but I cannot confirm/deny these rumours. I found in my tests that iSCSI was much, much quicker as a consequence. Secondly, I can use iSCSI Port Binding in vSphere to maximise my throughput from a network perspective.
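For what it’s worth, port binding can be scripted as well as clicked through. Below is a hedged pyVmomi sketch: “vmhba33” (the software iSCSI adapter) and vmk1/vmk2 (vmkernel ports dedicated to iSCSI) are assumptions from my lab, so substitute your own names.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.lab.local", user="root",   # hypothetical host
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
iscsi = view.view[0].configManager.iscsiManager

# Bind two dedicated vmkernel ports to the software iSCSI adapter
for vmk in ("vmk1", "vmk2"):
    iscsi.BindVnic(iScsiHbaName="vmhba33", vnicDevice=vmk)

# Confirm what ended up bound
for info in iscsi.QueryBoundVnics(iScsiHbaName="vmhba33"):
    print(info.vnicDevice)
Disconnect(si)
```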

So how quick is this solid-state beast? Blistering. OM(f)G fast. So fast that the first time I did a clone I didn’t believe it could be that quick! 🙂

I put a 40GB (7GB on disk) Windows 2012 R2 VM on the iSCSI template store, and cloned it to my “InfrastructureNYC” iSCSI datastore (both on the Synology). It takes about 25 seconds to finish. With NFS it’s 7 mins; on the IOmega SATA array it’s 25 mins. My physical switch does support Link Aggregation, and so do VMware ESX and NFS. But I dare say the network has very little impact on these times – except for NFS, which isn’t VAAI-capable. It’s those SSD drives that are doing all the heavy lifting – with or without VAAI.
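If you want to reproduce the timing test yourself, a rough pyVmomi harness like the one below will do it. The template and datastore names are from my lab, and the find() helper is my own convenience function, not a pyVmomi call.

```python
import ssl, time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find(content, vimtype, name):
    """Convenience lookup: first inventory object of a type with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",            # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
template = find(content, vim.VirtualMachine, "w2012r2-template")  # my lab name
target_ds = find(content, vim.Datastore, "InfrastructureNYC")

spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(datastore=target_ds),
    powerOn=False, template=False)

start = time.time()
task = template.Clone(folder=template.parent, name="clone-timing-test", spec=spec)
while task.info.state not in (vim.TaskInfo.State.success,
                              vim.TaskInfo.State.error):
    time.sleep(1)
print(f"Clone took {time.time() - start:.0f}s")
Disconnect(si)
```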

Here’s a video that shows how quick it is…

The setup of iSCSI is a doddle. You create a target (one per cluster would make sense), ensure it’s enabled for “multiple sessions”, and configure some IQNs under the “Mapping” tab.

Screen Shot 2014-02-10 at 17.29.18

Screen Shot 2014-02-10 at 17.29.28

Next you create a volume and allocate it to the named target (in my case VMwareNYC-Cluster).

Screen Shot 2014-02-10 at 17.32.10

Screen Shot 2014-02-10 at 17.32.18

Configure the iSCSI Software Initiator on the host – and job done.
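The host side can also be scripted rather than clicked. Here’s a hedged pyVmomi sketch of those final steps: enable the software initiator, add the Synology as a send target, and rescan. The adapter name “vmhba33” and the Synology’s IP address are assumptions from my setup.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.lab.local", user="root",   # hypothetical host
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
ss = view.view[0].configManager.storageSystem

ss.UpdateSoftwareInternetScsiEnabled(True)   # turn on the software initiator

# Point dynamic discovery at the Synology (IP is a lab assumption)
target = vim.host.InternetScsiHba.SendTarget(address="172.16.1.10", port=3260)
ss.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba33", targets=[target])

ss.RescanAllHba()                            # pick up the new LUN
Disconnect(si)
```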

So far I’ve been really impressed with the Synology. It has LOADS of other functions and features that I’ve not even touched yet… Money well spent in my book. And I got this performance without any real tweaking of the storage – these were all out-of-the-box settings…

NOW. I dare say I might well have seen the SAME performance using SATA drives with 2xSSDs acting as a cache. That would have made the system considerably cheaper, with more capacity and a lower cost-per-GB. I just had some money to burn, and I felt like going down the all solid-state array route… As I understand it though, this type of NAS box doesn’t yet support SSD caching (something that only came to my attention from responses on Twitter after this article was published – with thanks to Shawn Bass). Again, I understand that functionality might waterfall down to these types of NAS devices in future DSM releases.