
Well, announced today after much speculation is EVO:RAIL (the project formerly known as “Marvin” or “Starburst”). It’s not to be confused with EVO:RACK, which is a Tech Preview developed by a different team down the corridor at HilltopB.

If you want to see EVO:RAIL in the flesh and you’re at VMworld – come down to the booth where you might end up talking to me, OR head over to the special EVO:RAIL Zone, where some of our hardware partners will be there to show you the tin and talk about their work.

So the basics. EVO is from Evolution, and RAIL is from the fact that the product is a 4-node box with vSphere 5.5 and Virtual SAN ready to rock and roll. You can see the way things are going when it comes to building out a new environment with software-defined everything. You can BYO (Build Your Own) by consulting the HCL and buying the supported hardware. You can approach VCE (VMware/Cisco/EMC) for a vBlock, or a NetApp partner for a reference architecture based on FlexPod (VMware/Cisco/NetApp). Now there’s a third option – an EVO:RAIL… and shortly after that, EVO:RACK.

You can see EVO:RAIL as being part of the new category we call “Hyper-converged”. That distinguishes it from converged architectures (vBlock/FlexPod) because there isn’t a storage array here – storage is provided by VMware Virtual SAN (VSAN).

Firstly, VMware is NOT getting into the hardware business – it’s working with its trusted OEM partners to create a competitive marketplace for EVO:RAIL. You can vote with your regular hardware vendor, or you can shop around. It’s your choice. The speeds and feeds will be broadly similar in the 1.0 release. It will really be up to the OEMs to compete with each other, and perhaps add additional services, support or whatever. That means the EVO:RAIL experience with OEM VendorA should be broadly the same as with OEM VendorB.

Key Features – it’s a 2U box which contains 4 nodes running VMware vSphere 5.5 U2. You can couple together 4 EVO:RAIL appliances to create a 16-node cluster (4×4). New EVO:RAIL appliances are discovered on the network, and are automatically configured and enrolled into the EVO:RAIL cluster using a Zero Network Configuration methodology. Actually, a significant amount of work has been done by our very skilled engineers to re-engineer this (something we call Loud Mouth), which has resulted in patents being filed. Each node in the EVO:RAIL presents 192GB of RAM and 6 cores, with two 10GbE network interfaces (used in an Active/Standby Standard vSwitch configuration) plus a secondary 1Gbps BMC interface – management, vMotion, Virtual SAN and VM networking are all driven by the 10GbE cards. One EVO:RAIL appliance (populated with 4 nodes) presents 16TB worth of storage, which is a combination of SSD (for use with Virtual SAN) and HDD. Based on some internal testing, we conservatively reckon one EVO:RAIL will support about 100 server VMs or about 250 virtual desktops. If you went all the way up to 4 EVO:RAIL appliances, you would be looking at 400 server VMs in total, or 1,000 virtual desktops. Finally, EVO:RAIL comes with its own patch management and upgrade process (which isn’t based on VMware Update Manager – you might be quite pleased about that?). So unlike some other solutions which come with their own unique upgrade/patch-management technologies, this should be a simpler model for upgrading.
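If you want to play with those numbers yourself, here’s a quick back-of-the-envelope sketch in Python. It uses only the figures quoted above (which are conservative estimates, not guarantees), and the constant and function names are my own, not anything from an EVO:RAIL tool:

```python
# Rough EVO:RAIL 1.0 capacity arithmetic, based on the figures in this post.
# All names here are illustrative, not part of any official tooling.

NODES_PER_APPLIANCE = 4
RAM_GB_PER_NODE = 192
CORES_PER_NODE = 6
STORAGE_TB_PER_APPLIANCE = 16   # mixed SSD (for VSAN) + HDD
SERVER_VMS_PER_APPLIANCE = 100  # conservative internal-testing estimate
DESKTOP_VMS_PER_APPLIANCE = 250
MAX_APPLIANCES = 4              # 4 x 4 nodes = 16-node cluster in 1.0

def capacity(appliances: int) -> dict:
    """Aggregate capacity for a given number of coupled appliances."""
    if not 1 <= appliances <= MAX_APPLIANCES:
        raise ValueError("EVO:RAIL 1.0 scales from 1 to 4 appliances")
    nodes = appliances * NODES_PER_APPLIANCE
    return {
        "nodes": nodes,
        "ram_gb": nodes * RAM_GB_PER_NODE,
        "cores": nodes * CORES_PER_NODE,
        "storage_tb": appliances * STORAGE_TB_PER_APPLIANCE,
        "server_vms": appliances * SERVER_VMS_PER_APPLIANCE,
        "desktop_vms": appliances * DESKTOP_VMS_PER_APPLIANCE,
    }

# A fully scaled-out 4-appliance cluster:
print(capacity(4))
```

Run with `capacity(1)` for a single appliance or `capacity(4)` for the full 16-node cluster; the 400-server-VM / 1,000-desktop figures above fall straight out of the multiplication.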


So what are the use cases? Well, conventional server consolidation is one, for perhaps a reasonably sized SMB (one that needs more than a two- or three-node cluster would provide, based on something like vSphere Foundation). I think ROBO, or perhaps the retail sector, may well be interested as well – and as EVO:RAIL is based on vSphere 5.5 U2, all the other technologies that integrate with vSphere come along too. So it could be used as a target for vCAC, or used in a colocation facility as a target for SRM/VR DR scenarios. Personally, I think the SMB/ROBO segment will be where the product gets its fastest adoption. But I also think that EVO:RAIL could sit alongside an existing environment for a specific project – such as Horizon View.

So in summary: 100% VMware, provided in a competitive marketplace delivered by OEMs you know. It’s not a single-source appliance from a single vendor based around a VSA model – the storage layer is embedded deep in the kernel. Although EVO:RAIL is a 1.0 product, it’s based on technologies that have been tried and tested by customers around the world (ESXi, vCenter, Log Insight). Get it up and running in minutes, and add additional EVO:RAIL appliances in a scale-out model in even less time via the autodiscovery process driven by Loud Mouth.

In my next blogpost I will be delving more into what the customer experience is like, and some of the requirements needed prior to setting up your first EVO:RAIL.