You're supposed to just know this by reading some obscure blogpost which links you to the PowerShell Gallery with positively NO instructions on how to set it up. Man, someone really needs to write the FM for this – to make it easy for new people to get hold of.
So… drum roll… here's how it's done.
1. Run PowerShell on your system – making sure to use Run as Administrator:
Install-Module -Name VMware.PowerCLI
3. Choose [Y] to download and install the NuGet provider
4. Choose [A] to download all of the PowerCLI modules
5. Make a cup of tea whilst stuff downloads and unzips itself
6. Once this completes, you're not done yet. Just because the modules have been downloaded and unzipped – that doesn't mean they have been loaded. That's something you'll need to do yourself. If you simply type Connect-VIServer you will get an error message like so:
There are lots of modules that contain lots of cmdlets (is that "CMD-lets" or "Command-Lets"?)
7. You cannot just run the command Import-Module VMware.VimAutomation.Core on a clean system UNTIL you set your Execution Policy like so:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
8. Followed by the import itself: Import-Module VMware.VimAutomation.Core
9. Followed by opting out of the Customer Experience Improvement Program with:
Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $false
10. Before you make your first Connect-VIServer connection to your vCenter, you need to decide if you're just going to accept the untrusted self-signed certificates that are generated during the install of VMware ESXi and VMware vCenter – or whether you want to go thru the ball-ache of issuing your own certificates. In a homelab environment you're probably just going to get rid of any warnings with:
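Pulling the steps above together, a first-run session might look like the sketch below. The final Set-PowerCLIConfiguration line is my assumption for the elided command – -InvalidCertificateAction Ignore is what suppresses the self-signed certificate warnings – and vcnj.corp.local is a placeholder vCenter name:

```powershell
# Run from an elevated (Run as Administrator) PowerShell session
Install-Module -Name VMware.PowerCLI

# Allow locally-run scripts so the modules can load
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

# Load the core cmdlets (Connect-VIServer lives here)
Import-Module VMware.VimAutomation.Core

# Opt out of the Customer Experience Improvement Program
Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $false

# Homelab only: ignore the self-signed certificate warnings
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false

Connect-VIServer -Server vcnj.corp.local
```

In a production environment you'd want proper certificates rather than the Ignore setting, but for a homelab this gets you connected in one sitting.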
Acknowledgement: I'd like to thank William Lam's work and his blog VirtualGhetto – without whom this work and blogpost would not have been possible. In many ways my work is just a minor adjunct to his efforts: Thanks William! 🙂
Update: Since publishing this – William has released a new version of his template. Given the time and effort I put into validating my work – I'm going to keep mine available too. I'm still testing this on vSphere 6.7, which is what held me back from releasing it sooner…
Note: This template will not work with VMware Fusion, and has NOT been tested with VMware Workstation
IMPORTANT: This configuration has been tested in vSphere 6.7, but I had to wait for the public drop. Despite working with VMware technologies since 2004, I was not invited to the preview. Additionally, I cannot do any verification work on vSphere 7.0 because I haven’t been approved for the beta.
As you might gather from my recent post on using VMware "ovftool", I've been looking at the process of importing and exporting VMs in an OVF/OVA format for a particular reason. One of my projects this year is to master VMware's VSAN technology. You might remember I previously had a role in VMware's Hyper-convergence Team, where I got more than plenty of exposure – but things have moved on considerably in the two years I've been away. And my goal is to write in depth on the topic – beyond the shorter blog format. More about that at a later date!
I've got pretty good hardware in my homelab – but no SSD in my servers – and I'm not about to go dipping even further into my savings to fix that. Not when I have hours' worth of vExpert time on the Oracle Ravello Cloud – as well as access to SSD storage on my Synology NAS and local storage on the VMware ESXi host. It makes much more sense to go for a nested vSphere 6.5U1 or vSphere 6.7 configuration, where a mere edit to the .VMX file marks a .VMDK as an SSD rather than an HDD.
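For reference, the .VMX edit in question is a one-line flag per virtual disk – a sketch, assuming the disk in question sits at scsi0:1:

```
scsi0:1.virtualSSD = "TRUE"
```

Set it to "FALSE" (or omit it) and the guest sees that disk as a spinning HDD – which is exactly the knob the ovftool --extraConfig options drive later in this post.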
After a tiny amount of googling it became clear that vSphere 6.5U1 and vSphere 6.7 offer many enhancements for the nested vSphere homelab, particularly if it is running on top of a physical VMware ESXi host or vSphere HA/DRS Cluster. More specifically, fellow vExpert William Lam's VirtuallyGhetto site rounds up these enhancements very neatly in a blogpost he wrote back in October, 2016:
So whether you're running VMware ESXi or in "Da Cloud" as the basis for your lab – it was time to take advantage of the improvements. I knew from memory that William had a range of different VSAN templates – which I thought I could use as the basis of a "new" Nested vSphere 6.x VSAN Cluster. Sadly, I've tried to contact William via Twitter and LinkedIn to discuss this – but I've not heard back from him. I think he must be busy with other projects or, like me, is having a sabbatical from work. If you're interested, the "source" for my OVF template came from William's blog here, which he posted back in Feb, 2015:
Note: I've only tested this on a physical VMware ESXi host. It won't work on VMware Fusion 10, which does not yet support the VMware Paravirtual SCSI Controller. I don't have access to a physical Windows PC with the necessary CPU attributes to test this with VMware Workstation. I'd recommend that Fusion/Workstation customers use the slightly older templates provided by William, as I know mine definitely won't work on Fusion; Workstation is a mystery.
OVF Virtual Hardware Specification:
Anyway, I've taken William's original 6-node nested VSAN OVF template – which he built to demonstrate "Fault Domains" – made some changes, and spat the thing back out for others to use.
My personal goal whenever I have done nesting has been to have something that resembles the customer's physical world as much as possible (I'm limited to two physical NICs in my physical homelab, which makes playing with Distributed Switches awkward…). As you might have gathered, I'm increasingly not someone who is naturally predisposed to compromises. I was even tempted to mirror the disk sub-system of the type of hyper-converged appliance that might have 24 disk slots divided over 4 nodes in a 2U chassis – like a VxRail. But I thought that was starting to get a bit silly!
Like me, you will want to deploy the OVA to the FASTEST storage you have. For me, my fastest storage happens to be my Synology NAS, which is backed by SSD drives.
Here’s an overview of my customisation so folks know what’s changed.
I've increased the number of nodes from 6 to 8. This allows for proper use of maintenance mode as well as allowing for enough nodes to test "Fault Domains". Remember you can dial down your number of nodes to suit your physical resources.
Upgraded to Virtual Hardware Level 13 (requires VMware ESXi 6.5 or VMware ESXi 6.7)
Changed the Guest Operating System type from vSphere 5.x to vSphere 6.x
Increased the number of nodes from 6 to 8 to allow for maintenance mode to work correctly, fault domains, and the emulation of a stretched VSAN configuration – aka two 4-node VSAN clusters that look like they are in two locations – remember you only need one vCenter to do this…
After some testing – I discovered the old RAM allocation of 6GB wasn't sufficient for VSAN to configure itself in vSphere 6.5 and vSphere 6.7. So I had to increase the nested ESXi RAM allocation to 10GB, and this seemed to fix most issues. I did try with 8GB of RAM, but with this configuration I found it was like having my teeth pulled. 10GB worked every time – smooth as a baby's bottom. 🙂
Removed the 2xE1000 NICs, and added 4xVMXNET3 – the additional NICs allow for easy vSwitch and DvSwitch play time! It means I can keep the "core" management networks on vSwitch0, and have virtual machine networking on a DvSwitch. Some might regard this as Old Skool and reminiscent of the 1Gb days when people thought 16 uplinks per host was a good idea. And they'd be right – and of course it's not an option in the physical world where the server might only have two 10Gb ports presented to it.
More Disks and Bigger Disks:
Replaced the LSI Logic controller with the VMware Paravirtual SCSI controller
Increased the boot disk from 2GB to 8GB – this means you now get a local VMFS volume with logs stored on the disk. Despite the ability to PXE/SD-Card/SATADOM boot VMware ESXi, it seems like many SysAdmins still install to 2xHDD mirrored or to an FC-SAN LUN/Volume. Initially with vSphere 6.5 U1 this was set to 6GB, but with the release of vSphere 6.7, I had to up this value to 8GB.
I've increased the size of the SSD and HDD disks and also their number. The sizes reflect the fact that unless you have All-Flash VSAN, you generally have more HDD-backed storage than SSD…
There are an extra 2xSSD and 2xHDD per virtual/nested ESXi node – this allows for the setup of more than one "VSAN Disk Group". However, as the designation of a disk as SSD or HDD in a nested environment is arbitrary, the template also supports all-flash and single VSAN Disk Group configurations. That's your call.
All VMDKs are marked to be thinly-provisioned to save on space…
All SSD drives are 10GB and all HDD drives are 15GB to make it even easier to identify them – although VSAN itself does a good job of identifying disk type.
Clearly this "Beefed Up" nested vSphere 6.x VSAN cluster is going to consume more resources than the one previously built by William Lam. But there's a couple of things to bear in mind.
Firstly, if you only need 3 or 4 VSAN nodes – then simply do not power up the other nodes, or delete them
Secondly, if you do not require the additional SSD/HDD disks, delete those (before you enable VSAN!)
Download and Import:
The ONLY way to import the .OVA below is using the OVFTOOL. This is because it uses settings that a standard GUI import reports as unsupported. Specifically, the vCenter "Deploy OVF" file options have a problem with the SCSI controller type called "VirtualSCSI". I'm not sure WHY this happens. Either it's broken, or it's only a partial implementation of the OVF import process and simply doesn't recognise the VMware Paravirtual SCSI Controller.
To import the OVA, first download and install the OVFTOOL for your workstation type.
Note: The virtualSSD setting is where all the excitement happens, and allows you to control the disk configuration of this single OVA. For instance:
This setting --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=0 --extraConfig:scsi0:2.virtualSSD=1 --extraConfig:scsi0:3.virtualSSD=0 --extraConfig:scsi0:4.virtualSSD=1 would allow for two disk groups – it creates a nested VSAN with one HDD and one SSD per disk group
This setting --extraConfig:scsi0:0.virtualSSD=1 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=1 --extraConfig:scsi0:3.virtualSSD=1 --extraConfig:scsi0:4.virtualSSD=1 would import the template as an All-Flash VSAN
This setting --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=0 --extraConfig:scsi0:3.virtualSSD=0 --extraConfig:scsi0:4.virtualSSD=0 would import the template for a single disk group with one SSD and 3xHDD.
Of course the disk sizes would be a bit odd, with some HDDs being 10GB or 15GB – and in the case of all-flash, a mix of SSDs that were a combination of 10GB and 15GB. This isn't the end of the world, and it still works.
For a Stand-Alone ESXi host using local storage only as an example:
Sample: To Deploy vApp to Stand-Alone Physical ESXi Host:
for /l %x in (1, 1, 8) do "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" --acceptAllEulas --noSSLVerify=true --skipManifestCheck --allowExtraConfig --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=0 --extraConfig:scsi0:3.virtualSSD=1 --extraConfig:scsi0:4.virtualSSD=0 -ds="esx01nyc_local" -dm="thin" --net:"VM Network"="VM Network" "C:\Users\Michelle Laverick\Downloads\Nested-ESXi-8-Node-VSAN-6.5-1-5-ESXi-%x.ova" "vi://root:VMware1email@example.com"
NOTE: In this example a FOR /L loop is used to run the command 8 times (the loop starts at 1, increments by 1, and ends when it reaches 8 – hence 1, 1, 8), each time processing an OVA file for each node. Stand-alone ESXi hosts have no understanding of the "vApp" construct, so this is a quick and dirty way to import a bundle of them using a consistent filename convention. I'm sure there MUST be a quicker, smarter and less dumb way to do this – and I will update this page if I find it…
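For anyone driving ovftool from a Mac or Linux box instead, the equivalent loop in bash might look like this – a sketch that only prints each per-node command so you can eyeball it first; the datastore name, OVA filename convention and host address are placeholders from the example above:

```shell
# Build one ovftool invocation per nested node (1..8), mirroring the FOR /L loop.
# (Append your --extraConfig:scsi0:N.virtualSSD flags as shown earlier.)
cmds=""
for i in $(seq 1 8); do
  cmds="${cmds}ovftool --acceptAllEulas --noSSLVerify=true --skipManifestCheck --allowExtraConfig -ds=esx01nyc_local -dm=thin Nested-ESXi-8-Node-VSAN-6.5-1-5-ESXi-${i}.ova vi://root@esx01nyc.corp.local
"
done
printf '%s' "$cmds"   # review the 8 commands, then: printf '%s' "$cmds" | sh
```

Printing before piping to sh is deliberate – it lets you sanity-check the filenames and target before eight long imports kick off.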
IMPORTANT: Don't bother doing the import without the OVFTOOL, otherwise you will see this error – it shows the vSphere 6.5U1 and vSphere 6.7 Web Client's inability to import a VM with the VMware Paravirtual SCSI Controller enabled, which it calls by the name "VirtualSCSI".
Update your vCenter/ESXi hosts: Make sure you do a VMware Update of the vCenter and hosts – even if you have a relatively recent version of vCenter/ESXi. Bugs exist, and tools exist to fix bugs. Use them. I like to do my VCSA updates from the Management UI that listens on 5480 – and then use Update Manager to remediate the hosts. The majority of this post was tested on vSphere 6.5U1, but it didn't play nicely until the hosts were upgraded from VMware ESXi, 6.5.0, 5146846 to
Later it was tested on the new release of vSphere 6.7. A number of changes were triggered by testing this nested VSAN setup on the new software, and this included:
Even more memory – up from 10GB to 12GB!
vSphere 6.7 requires you to set up VSAN using the new HTML5 client. More about this shift in a companion blogpost
Virtual disk sizes increased – without a larger disk, a spurious alarm called "VSAN max component size" was triggered. The cause comes from the fact that the disk size is small. Yes, a mini value triggers a "max" alarm. This is so wrong on so many levels it's hard to explain.
Confirm Networking: Make sure your physical switch supports Jumbo Frames, that the physical ESXi hosts are enabled for 9000 MTU, and that the vSwitch is set to Accept, Accept, Accept in the Security Settings
Build Order: I would recommend creating an empty cluster, adding the hosts, patching the hosts – and then enabling VSAN afterwards
Create TWO VSAN Disk Groups: Until ALL the disks are allocated to an existing disk group, or a second disk group is created, you will receive a warning from the VSAN "Configuration Assistant" that not all disks have been claimed
Enable other Cluster Features: Once VSAN is functional, enable DRS first, followed by HA
Finally use the "Configuration Assist" to validate all is well. By default a warning will appear for "Hardware Compatibility", as the VMware Paravirtual SCSI controller used in nested environments like this isn't on the HCL. And never will be!
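On the jumbo frames point, the host side can be checked and set from esxcli – a sketch, assuming vSwitch0 and vmk0 carry the VSAN traffic, with 10.0.0.2 as a placeholder peer address:

```shell
# Set 9000 MTU on the standard vSwitch and the vmkernel port
esxcli network vswitch standard set --vswitch-name vSwitch0 --mtu 9000
esxcli network ip interface set --interface-name vmk0 --mtu 9000

# Prove jumbo frames work end to end: 8972 = 9000 minus IP/ICMP headers,
# -d sets don't-fragment so an undersized path fails loudly
vmkping -d -s 8972 10.0.0.2
```

If the vmkping fails, something in the path (physical switch, vSwitch or vmkernel port) is still at 1500 MTU.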
If this "hardware compatibility" warning really offends you – it is possible to use the RVC console utility to disable tests that are false positives. For example, SSH into the vCenter that owns the VSAN cluster (in my case vcnj.corp.local) and run the RVC console – this command will add this test of the HCL to the silent list. You do this by SSHing into the VCSA as "root" using an IP/FQDN, typing "bash", running RVC and authenticating with your firstname.lastname@example.org account…. Phew. I've never come across a more obtuse way of gaining access to a command-line tool!
vsan.health.silent_health_check_configure '1/New Jersey/computers/Cluster1' -a 'controlleronhcl'
Successfully add check “SCSI controller is VMware certified” to silent health check list for VSAN Cluster
With vSphere 6.7 I had a number of other bogus alarms that needed to be cleared using:
vsan.health.silent_health_check_configure '1/New Jersey/computers/Cluster1' -a 'smalldiskstest'
vsan.health.silent_health_check_configure '1/New Jersey/computers/Cluster1' -a 'hostlatencycheck'
Here we can see that the question "SCSI controller is VMware certified" has been skipped…
What will Michelle do next? I dunno. I like to keep folks guessing. Something fun. Something useful to myself and others. Enjoy.
I did some experimentation with the performance of my nested vSAN. It’s important to know that the performance reflects the non-native performance created by nesting – and is not a reflection of physical VSAN performance characteristics.
Firstly, on my hosts this is how much a 4-node nested VSAN consumed at rest (just running ESXi and not doing very much).
So despite my 4xvESXi needing 10GB each – actually they consumed 28GB, leaving 35GB on my physical host (which has 64GB of RAM). They consumed about 45GB from a 458.25GB local VMFS drive. A clone of a Windows 2016 VM from my Synology SAN across the wire to this local "VSAN" storage was very slow. It took all night and by the morning still hadn't completed the boot process.
I tried the same process with a "MicroLinux" – and despite it being tiny, the clone process was a bit quicker, taking 5 mins for a 1GB thin virtual disk (350MB in use). Perhaps it would be quicker once the MicroLinux was initially copied to the VSAN – taking the source template out of the loop. I did think this made a difference – the copy time came down to 2 mins for the MicroLinux. The verdict is that storing the nested VSAN on local storage in this way is subpar for performance – that's because my local VMFS disk was just a SATA drive on a generic HP controller. Clearly, one would get better throughput with an SSD drive.
I decided to pivot my nested VSAN off local storage to my SSD-backed Synology NAS to see if performance improved. For instance, the Win2016 image took about 20-30 mins to clone a 40GB VM from the Synology's iSCSI layer into the nested VSAN layer – and this was also quite reliable. So it looks as if the cloning process was the source of the bottleneck.
You will get much better performance using a "super-skinny" version of Linux such as the MicroCore Linux available from my downloads page. This is the VM used in VMware Hands-on-Labs and contains a copy of "Open Source" Tools. It's not the skinniest of skinny Linuxes but it is very functional. Navigate to Download, scroll down to "Small Linux VMs for a Home Lab".
Category: vSphere | Comments Off on New Nested vSphere6.5 and vSphere 6.7 VSAN Cluster OVF
I've been using vSphere 6.5 for a few weeks and noticed a couple of oddities surrounding OVFs and OVAs. For those who may be unfamiliar, OVF/OVA is a packaging format that allows you to easily import and export VMs – and is a recognised format beyond VMware's boundaries that has achieved almost universal industry acceptance. An OVA is just a tar archive which contains the OVF file itself (which is merely a text descriptor) together with the VMDKs that make up the VM.
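For what it's worth, an OVA unpacks with plain tar (the OVF specification defines the OVA package as a tar archive, with the descriptor as the first member) – a quick sketch using dummy stand-in files; a real OVA would obviously carry a genuine descriptor, manifest and disks:

```shell
# An OVA is just a tar archive: build a dummy one to prove the point
mkdir -p /tmp/ova-demo && cd /tmp/ova-demo
printf '<Envelope/>' > demo.ovf          # stand-in for the OVF descriptor
printf 'fake-disk'   > demo-disk1.vmdk   # stand-in for a disk image
tar -cf demo.ova demo.ovf demo-disk1.vmdk   # descriptor goes in first
tar -tf demo.ova                            # list the archive members
```

The same `tar -tf`/`tar -xf` trick works on any OVA you download, which is handy when you just want to peek at the descriptor without importing anything.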
Generally, I've found importing OVAs/OVFs is pretty easy and relatively reliable – even though you often tussle initially with web-browser security settings and the uploading/downloading of files. Sadly, I've found the export process can be a bit 50:50, and the resulting OVF generated unusable…
Firstly, we appear to have lost the ability to export to an OVA from the vSphere Web Client altogether – that's a disappointment to me because I quite like the simplicity of the simple OVA bundle. It's not clear to me if this is deliberate or an oversight by the vSphere Web Client development team, OR if this hints at some policy of moving away from OVA as a companion to OVF.
Secondly, the resulting OVF file that's exported (there were the appropriate files) gave me an unpleasant error message 🙁
Getting Started Author: Michelle Laverick (@m_laverick)
Version: Tested on 4.5.1 (Soon to be released)
Note: I'm not overly happy with the way the graphics are behaving in terms of resolution. If you're finding this hard to read – I think in future I will have to crank up the font size in PuTTY – and I'm looking for a modern WordPress theme, as this is looking a little too much 2011 for my tastes.
My fellow vExpert Edward Haletky has a GitHub crammed full of useful stuff – and last week I spent most of my time getting to grips with his LinuxVSM. As ever I found this really interesting, and it's a bit of a distraction from my main personal project. But I like to follow wherever my passions and interests take me. I'm also chatting with Alastair Cooke about how Edward's work could be incorporated into his AutoLab project. Additionally, I'm looking at updating the UDA for the next release of vSphere – adding LinuxVSM to it, as well as the deployment of a nested homelab – that way the UDA can be used to deploy not just physical, but virtual ESXi hosts.
Edward's LinuxVSM is essentially a Linux version of VMware's Software Manager. This is a sadly neglected Windows tool which hasn't had any love from VMware since 2016. I tried downloading VMware's Windows version and using it, but it didn't work. In case you don't know, the "software manager" is meant to ease the pain of downloading software from vmware.com. You can see LinuxVSM as a text-based, scriptable version of VMware Software Manager, allowing access to VMware's main VSM metadata site.
So, using an ordinary personal MyVMware account LinuxVSM can:
Download practically any piece of software you need from vmware.com
You can "mark" a certain product suite as a "favourite" – and using cron, LinuxVSM will update your repository with new versions of software as they are released.
Your repository could be just a local .VMDK, or else you could mount a CIFS or NFS share/export from within LinuxVSM – and store your downloads on your NAS device.
Your account can be a personal MyVMware account, and does not need to be a customer account (although there are some bits that are not downloadable except by customers). For many of us this is a godsend. I've lost track of the number of "mailinator" accounts I've set up in an effort to get hold of software from VMware – something I've experienced since 2003. Don't forget you can use VMUG Advantage to gain access to 1-year NFRs if you are not in the vExpert club. VMUG Advantage is great – although sometimes its "downloads" lag behind what is available from the live site. So perhaps the real advantage of VMUG Advantage is the licensing keys rather than the access to the media.
Note: As ever, before you begin – make sure the FQDNs of your proposed PSC and vCenter are listed in DNS – and reserve your IP addresses accordingly. The vCenter install validates your IP/DNS configuration and won't let you proceed until it's correct.
WARNING: Please pay close, close attention to your FQDNs, as during the process built-in certificates are created which will be invalid if you subsequently correct/change the hostname.
In this scenario – I wanted the appearance of multiple vCenters across many sites – and wished to link them together for ease of administration – and the sharing of licensing repositories. This ensures licenses can be assigned freely around the organisation – and not be "locked" to a specific site location. This more distributed model is not supported with the "embedded" deployment type – where the vCenter and PSC services reside in the same instance – and seems to have been introduced with vSphere 6.5 U1. So I would have two PSCs and vCenters – one for New York and the other for New Jersey.
There are now 8 supported topologies for multiple vCenters and "Enhanced" Linked Mode – and 3 deprecated ones as well. Far too many possible permutations for me to cover – so I would seriously consider studying the documentation in full. I would recommend starting with https://kb.vmware.com/s/article/2147672 which gives a good round-up of them all.
VMware's "Linked Mode" feature has a number of names – from Linked Mode to Enhanced Linked Mode, to it now also being called "Hybrid Linked Mode". Most of the changes have come about as the company pivots away from vCenter's historical Microsoft Windows roots to being a purely Linux-based virtual appliance. However, in 2017, VMware announced a partnership with Amazon to extend vSphere functionality into Amazon datacenters and integrate with its Amazon Web Services (AWS) environment. This development prompted VMware to modify linked-mode functionality to also include management of assets in Amazon's cloud. Hence "Hybrid" mode is now the favoured term. Hybrid mode in its full functionality is only available to those who have both vSphere on-premises and a vSphere subscription with Amazon. Whatever its name – linked mode addresses a scenario where multiple vCenters persist for geographical or political reasons – and it has been decided to provide one login identity to both systems.
It's entirely possible that you may wish to install another vCenter at a different site or location. In this configuration I had a single PSC domain (vsphere.local) and a single Active Directory domain (corp.local) – but with two SSO sites – one called New York, and the other called New Jersey.
In our case I have two different vCenters and PSCs in two different sites – however, they will be part of the same SSO domain and linked together. The KB article referenced at the beginning of this section outlines this accordingly – although in my case there will, for the moment, be just one vCenter under each PSC.
1 Single Sign-On domain
1 Single Sign-On site
2 or more external Platform Services Controllers
This configuration is not without limitations:
In the event of a Platform Services Controller failure, the vCenter Servers will need to be manually repointed to a functioning Platform Services Controller.
vCenter Servers attached to a higher-latency Platform Services Controller may experience performance issues
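The manual repoint itself is a one-liner from the vCenter appliance's command line – a sketch, where psc02.corp.local is a hypothetical surviving PSC; check the cmsso-util syntax against the documentation for your exact build:

```shell
# On the affected vCenter Server Appliance (as root, from the bash shell):
cmsso-util repoint --repoint-psc psc02.corp.local
```

Worth knowing before a failure happens, rather than discovering it during one.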
This week I had a run-in with the PSC and vCenter in vSphere 6.5 U1. I'm ashamed to admit it was really all my fault – being a bit fat-fingered and hasty in my inputting – I put a bum name in DNS, and then a bum name in the installer as well. That resulted in SSL certificate mismatches and errors…
So I seriously needed to clean out the guff I'd created and try again. There are a couple of KB articles and blogposts that cover this scenario. I found I needed to do four steps. My life was made easier by enabling SSH on all the appliances along the way – and of course switching to the "Bash" prompt after logging in.
I started the process by logging on to one of my functional PSCs using SSH…
1.) Run the cmsso-util command on a functioning PSC to clean out the bum PSC and vCenter references
3.) Run vdcleavefed to really clean out the bum PSC and vCenter references. Despite running cmsso-util, the ghostly remains of the failed deployment haunted the web-client – indicating they were still there… vdcleavefed allowed me to remove them properly…
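For the record, steps 1 and 3 look roughly like this – a sketch, with bum-vc and bum-psc as hypothetical names for the failed deployments; double-check the exact flags against the KB for your build:

```shell
# Step 1 – from bash on a healthy PSC, unregister the dead node from SSO:
cmsso-util unregister --node-pnid bum-vc.corp.local --username administrator@vsphere.local

# Step 3 – evict the dead PSC from the vmdir federation:
/usr/lib/vmware-vmdir/bin/vdcleavefed -h bum-psc.corp.local -u Administrator
```

Both commands prompt for the SSO administrator password, and both are destructive – be very sure you're naming the dead node and not a live one.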
This week I had a need to download the official PDF guides for vSphere 6.5 U1. I like having the guides offline because Apple's Spotlight can index them and make them available for search queries – but also, if you're in a place where internet access is restricted, you can use the offline docs to look stuff up.
The official landing page for documentation around vSphere is located here:
Recently VMware has moved all its "administration guides" online in an HTML format called "VMware Docs Home" – https://docs.vmware.com/. It is still possible to download an "offline" PDF copy as a single .ZIP file. But they have rather "tucked" it away where it's tricky to find. If you need it – it can be found under a node called "Archive Packages". These links download a single .ZIP file containing all the PDFs.
You can download all the vSphere documentation as a single zip file using this link, which is current as of today, 14th Feb, 2018…
This Monday I had a briefing with Datrium. They have a tag line of "Open Convergence". I was grasping for a snappy title for this post as a lead-in to writing about what they do. As ever, my contrarian brain hit upon the opposite of convergence, which is divergence. I kind of like "hyper-divergence" because for me, in a way, it describes the fact that despite the massive growth in the "hyper-convergence" marketplace – there persist radically different approaches to "getting there". Both in the method of consumption (build your own VSAN Vs the "appliance" model) and also the architecture (shared storage accessible directly from a hypervisor kernel (VSAN), or a "controller" VM which shares out the storage back to the hypervisor (Nutanix)). I think Datrium and the recently announced NetApp HCI are delivering yet more options on both the consumption/architecture front.
A runestone is typically a raised stone with a runic inscription, but the term can also be applied to inscriptions on boulders and on bedrock. The tradition began in the 4th century and lasted into the 12th century, but most of the runestones date from the late Viking Age. Most runestones are located in Scandinavia, but there are also scattered runestones in locations that were visited by Norsemen during the Viking Age. Runestones are often memorials to dead men. Runestones were usually brightly coloured when erected, though this is no longer evident as the colour has worn off.
This week I was fortunate to have a briefing with Stan Markov (VCDX #74 and VCI), the CEO of Runecast. In case you don't know, Runecast Analyzer is a tool that gathers info from your vSphere environment and compares it to the VMware KB, Best Practices and the Security Hardening Guide. The idea is it makes you proactively act on what it discovers, to reduce the time spent reactively responding to events as they happen – in that typical "firefighting" manner.
Typically, we are so busy in the IT world we tend to respond to situations as they arise, and hope that by following design best practice we reduce these events to a minimum. In recent years a number of software vendors have been developing tools to break this cycle of behavior. Despite bold attempts to “automate all the things”, you’d be surprised how many people still are using a combination of Excel spreadsheets and Googling to both keep a track of changes, or respond to new issues as VMware finds them. And, of course, those pesky things called “default settings” that often are left as is, and never reviewed.
When the poop hits the fan, such admins are forced into "Cutting and Pasting" cryptic log entries into Google, in the hope that a narrowly defined string will reduce the long list of false positives – it's become a skill in its own right, scrolling through search results and translating the verbiage of KB articles to see if one answers your problem. And I can speak of situations first hand where I've had to "stitch together" KB articles to fix an issue. It's this sort of first-hand pain that the folks at Runecast are addressing.
I was given an NFR license for a year (thank you) and spent yesterday getting my lab environment up and running to ingest their offering. I spent most of my time making the lab work again, replacing my expired vSphere license! The Runecast Analyzer appliance (in an OVF format) took less time to set up than it did to download. I pointed it at my vCenter and I was up and running.
As you might gather, with the lab being down for more than a year, it's not been patched in ages, and also I've never bothered with any security hardening. So my results will not be reflective of most production environments (or will they?). As you've probably gathered, Runecast Analyzer is an on-premises appliance, and although it pulls data down from the Runecast Central Repository, which in turn keeps track of the VMware KB, nothing is pushed out of your environment. Runecast Analyzer does support offline patch-management for those people who require an air gap between themselves and the outside world for compliance purposes.
Hi there, and thanks for reading this blog post about Altaro VM Backup. I was asked by the guys at Altaro to take a look at their latest release. I said yes, and I also managed to persuade Altaro to make a donation to the charity (aquabox.org) I'm volunteering for whilst I look for a new role. So firstly, a big thank you goes out to Altaro for agreeing to this arrangement. I think it's a setup that works well for all. Altaro gets exposure for their new offering; I get stick time with a product that's new to me – and a good cause benefits as well. I managed to raise £280 for Aquabox. If you want to donate to Aquabox as well, click the logo!
Let's start with some basic facts. Altaro has won a number of plaudits from the reviewers on Spiceworks and VirtualizationAdmin.com. Their Altaro VM Backup software can back up both VMware vSphere as well as Microsoft Hyper-V, so it's handy for those people working in a hybrid environment. It's licensed on a per-host basis, not per-socket or CPU, so customers who go for high-density consolidation ratios (the number of VMs per host) are really going to benefit from a licensing perspective. It's chock-full of all the features you would normally expect from any enterprise backup system. Altaro VM Backup is fully compatible with Microsoft VSS, and that means you will get a consistent backup from those tricky customers like Microsoft SQL. The software is granular enough to restore individual files and emails from within a virtual machine backup. Finally, a number of backup targets are supported, including USB external drives, flash drives, eSATA external drives, file server network shares (via UNC), NAS devices (via UNC) and RDX cartridges – as well as the offsite Altaro Backup Server with WAN acceleration. In my own case I pointed my simple Altaro Server at my local NAS box that already had a backup share accessible to Microsoft Windows; the same NAS is visible to my VMware ESXi hosts on the same network using NFS.
As you might expect, the setup routine is a relatively trivial affair, and the software does a good job of walking you through the three-step routine to provide the core details needed for your first test backup – this means adding your VMware vCenter, individual VMware ESXi hosts or Microsoft Hyper-V hosts.
Each of these stages has a 'test connection' component before you proceed, which you can see in the screen grab below:
The next stage is adding your storage options for carrying out the backup itself. You can opt for a directly connected device, or for a remote location supported by UNC. In my case my Altaro VM Backup Server was a Windows 2012 R2 virtual machine, with access to my remote NAS.
As you can see, once a backup target has been added it's simply a case of dragging and dropping a VM onto that target. From this point onwards most of the admin tasks are of the drag-and-drop variety – dragging VMs onto predefined schedules and retention policies, so you can control the frequency of backups and how old backups are discarded. As my lab has been offline for a year, I don't really have that many VMs to back up, except of course the infrastructure VMs that make up the lab itself. So I decided to back up these VMs as a matter of course.
The V7 edition boasts a number of new features. The first is "Augmented Inline Deduplication". This decreases the time it takes both to take and to restore a backup. It creates the smallest backup size, and doesn't require you to group VMs together to get the benefits. The fact that it's inline means the deduplication isn't run as a post-backup process. This is important because the storage savings that deduplication brings mean little in real terms if you still need the temporary space required to carry out the backup. By definition, backups often mean backing up the same bits of data repeated across different VMs over and over again, and deduplication cancels out that bloat.
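Altaro don't publish the internals of their Augmented Inline Deduplication, but the underlying idea – hash each block as it streams in and store every unique block only once, with a per-VM manifest for restores – can be sketched generically. This is an illustration of content-hash deduplication in general, not Altaro's actual algorithm.

```python
import hashlib

def dedup_store(streams, block_size=4096):
    """Store unique blocks only, keyed by content hash (inline, no post-process)."""
    store = {}       # content hash -> block bytes (each unique block stored once)
    manifests = []   # per-stream ordered list of hashes, enough to restore later
    for data in streams:
        manifest = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)   # identical blocks land here only once
            manifest.append(digest)
        manifests.append(manifest)
    return store, manifests

# Two "VM disks" that share most of their content (think: the same guest OS image).
common = b"A" * 8192
vm1 = common + b"unique-to-vm1" * 100
vm2 = common + b"unique-to-vm2" * 100

store, manifests = dedup_store([vm1, vm2])
raw = len(vm1) + len(vm2)
stored = sum(len(b) for b in store.values())
print(f"raw {raw} bytes, stored {stored} bytes")  # shared blocks counted once
```

Because the hashing happens as the data arrives, there is no post-backup pass and no temporary staging copy – which is the "inline" property the paragraph above describes.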
Altaro have published blogs that explain this augmented deduplication process. One blogpost is centred around Hyper-V, and they have a very similar one for VMware as well. Calculating up front the exact savings any customer will get from any dedupe process is difficult. However, the Altaro VM Backup dashboard does a good job of showing those dedupe and compression savings.
Also new to V7 is "Boot from Backup": the ability to power on a VM directly from the source backup. Typically this means a network location, such as a CIFS/NFS share or export, is mounted directly to the hypervisor and the VM is powered on from it. That means the IO performance will be constrained by the disk capabilities of the system backing it. Remember, this is merely a way of getting the VM up and running in the shortest possible time. In most cases the availability issue trumps any short-term performance hit, because it's the clever stuff going on in the background that matters: while the VM runs, the restore process continues, and once it has completed all you need to do is schedule a small maintenance window to shut down the "boot from backup" VM and replace it with the restored copy. As you might expect, a reboot takes less time than waiting for a full VM restore.
The "boot from backup" feature has two modes – verification and recovery – and of course your performance mileage will vary depending on the qualities and capabilities of the storage backing that VM's backup target location.
Once you have gone through the usual suspects of selecting the mode, the backup location and the VM itself, you get granular control over the way the VM is brought up. This includes attributes such as renaming the VM and ensuring its network card is in a disconnected state – to avoid conflicts with the existing VM.
VM Backup V7 promises a forthcoming feature called the Cloud Management Console (CMC), which will allow administrators to remotely monitor and manage all their backup installations using a single tool accessible from any web browser – without a VPN or any requirement to be on-site. The CMC dashboard gives a more site-by-site or customer-by-customer point of view and is designed for a more multi-tenant approach to backup management.
Well, as I stated earlier, everything you'd expect from an enterprise backup solution is pretty much there. So alongside multi-hypervisor support you'll see an impressive list of features:
Drastically reduce backup storage requirements on both local and offsite locations, and therefore significantly speed up backups with Altaro’s unique Augmented Inline Deduplication process
Back up live VMs with zero downtime by leveraging Microsoft VSS
Full support for Cluster Shared Volumes & VMware vCenter
Offsite Backup Replication for disaster recovery protection
Compression and military-grade encryption
Schedule backups the way you want them (View video)
Specify backup retention policies for individual VMs (View video)
Back up VMs to multiple backup locations
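The scheduling and per-VM retention items above boil down to a simple pruning rule: keep the newest N backups (plus anything younger than a cutoff age) and discard the rest. Here is a generic sketch of such a retention policy – again an illustration of the concept, not Altaro's actual implementation:

```python
from datetime import datetime, timedelta

def prune(backups, keep_last=3, max_age_days=30, now=None):
    """Return (kept, discarded): keep the newest `keep_last` backup timestamps,
    plus anything newer than `max_age_days`; discard everything else."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    ordered = sorted(backups, reverse=True)  # newest first
    kept = [b for i, b in enumerate(ordered) if i < keep_last or b >= cutoff]
    discarded = [b for b in ordered if b not in kept]
    return kept, discarded

# Five backups taken 1, 5, 10, 40 and 90 days ago.
now = datetime(2017, 6, 1)
backups = [now - timedelta(days=d) for d in (1, 5, 10, 40, 90)]
kept, gone = prune(backups, keep_last=3, max_age_days=30, now=now)
print(len(kept), len(gone))  # the 40- and 90-day-old backups are discarded
```

Assigning different `keep_last` and `max_age_days` values per VM is essentially what the per-VM retention policies in the list above let you do through the GUI.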
So there are plenty of positives to be had, alongside a competitive licensing policy… but…
If there's one repeated criticism levelled at Altaro VM Backup, it's the lack of public cloud backup targets. So for offsite backup you're very much dependent on having another site in which to host the Altaro VM Backup Offsite Server. Now for many small businesses this might not be an issue, as many SMBs actually have more than one location – such as a main warehouse facility and the customer-facing location. However, for SMBs that literally only have one location this is tricky. Such customers might look to services like Amazon S3, Glacier or Azure as a way of getting their backups a distance from the core site. The alternative is transporting removable media to another location – and that feels decidedly 1990s for an era where data can and should be held anywhere.
I raised this issue with the guys at Altaro and they pointed me to blogposts they have showing the Altaro VM Backup Offsite Server in Azure. The first blogpost covers off the planning and pricing aspects of placing an Altaro Offsite Server in Microsoft Azure. The second blogpost explains the process of setting it up. This configuration is something Altaro intends to develop fully as part of an overall cloud strategy – but, understandably, they weren't able to give me an ETA, because it would be commercially sensitive to do so.
If you are familiar with virtualisation and have been following the backup space for a while, there are no surprises here. What's certainly true for me is that a new tier of backup vendors is entering an already crowded space. This is not dissimilar to the shake-up we saw in the storage space over the last five years. Features that were once unique and only available from premium vendors are now going mainstream. The question remains: if you are working with a premium mainstream vendor, what unique features are they offering you that you can't get from a relatively new player who is hitting the streets with very attractive pricing and licensing policies? So I see it as a mark of due diligence to scope out the alternatives, rather than simply disengaging the brain and signing the renewal contract. You wouldn't do that with any other insurance premium, so why do it with your backup insurance premium?
Finally, home labs and small environments that need only basic features can also use the free edition, which enables backup of up to two VMs – free, valid forever.
Category: Other, vSphere | Comments Off on Altaro VM Backup V7 Released