In the second of my vExpert briefings with VMware – we focused on a new edition of vSphere dubbed “Platinum”, and a drill-down into the new features of vSphere 6.7 U1. There was a little bit of overlap with the previous session (which was about VSAN). Once we were done with vSphere we moved on to improvements in VMware Cloud on AWS. There was a brief 101 session on AppDefense which I’ve opted not to cover in this blogpost…

vSphere Platinum Edition

For all intents and purposes this looks like a bundling exercise. The “Platinum” edition includes vSphere 6.7 (with all the bells and whistles expected in the full version – host profiles, distributed switches etc), together with a license for AppDefense and $10,000 worth of credits for VMware Cloud on AWS. The “pitch” for this was essentially that this is the most secure version of vSphere you can buy. The speakers flagged up the rise and rise in losses from security breaches, coupled with a flat-lining of IT budgets. So the usual tale of do more with less. I did ask specifically if the reason this SKU was developed was that customers have been crying out for it, and no other SKU in the portfolio addressed that need – I didn’t really get a positive response to that. I also asked for customer numbers for AppDefense. I was told that commonly these numbers weren’t in the public domain – despite the fact that VMware regularly publicises customer usage numbers for other products. My goal was to see if VMware was about to tap into a previously unaddressed market, or if existing AppDefense customers were suddenly going to benefit from a new way of licensing the existing product set.

Given the lack of data, I’m not able to measure the impact of Platinum, so anything here would be mere speculation. The other thing with the ‘security’ angle is the part the VMWonAWS credits play. It’s entirely possible that someone might be interested in the features that vSphere+AppDefense bring to their on-prem environment, without any interest in AWS. Perhaps those customers have a preference for Azure. So it’s unclear to me whether this blend of products will appeal. Vendors rarely develop SKUs for the fun of it, as it takes a LONG TIME to get new SKUs into the system. So I will bow to VMware’s greater knowledge in this respect – after all, they know their customer base and have the data – more than a mere blogger does.

vSphere 6.7 Update 1

Upgrade Paths: No news here. But this table is a handy overview of what’s possible and not possible (currently). Note that it’s still the case that whilst there’s an upgrade path for vSphere 6.5, there’s no upgrade path for 6.5 U1. I’ve been reassured that isn’t a problem.

vSphere Client (HTML5): Basically, it now has feature parity with all the other, slower clients. So there’s no reason NOT to use the new HTML5 client – unless you like having your teeth removed one-by-one by using Ye Olde Flash Client.

New Security for Appliance Management Interface (5480): Amongst the many UIs used to manage vSphere, this is the one that actually manages the appliance(s) itself, and commonly listens on TCP 5480. It now has a built-in firewall which allows you to accept, ignore, reject and return traffic. Additionally, it can now be tied to your SSO configuration – so rather than the root account password having to be shared/reset for multiple admins, there is a proper RBAC process by which access can be granted and, more importantly, audited.
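If you’d rather script the firewall than click through the VAMI, the appliance also exposes it via the 6.7 U1 appliance REST API. The sketch below is from memory and the hostname, password and rules are invented – verify the endpoint paths and field names against your appliance’s API explorer before relying on them:

```shell
# Authenticate to the appliance management API (listens on TCP 5480)
SESSION=$(curl -sk -X POST -u 'root:MyRootPassword' \
  https://vcsa01.lab.local:5480/rest/com/vmware/cis/session | jq -r .value)

# List the current inbound firewall rules
curl -sk -H "vmware-api-session-id: $SESSION" \
  https://vcsa01.lab.local:5480/rest/appliance/networking/firewall/inbound

# Replace the rule list: accept the management subnet, reject everything else
curl -sk -X PUT -H "vmware-api-session-id: $SESSION" \
  -H "Content-Type: application/json" \
  -d '{"rules": [
        {"address": "10.0.1.0", "prefix": 24, "policy": "ACCEPT"},
        {"address": "0.0.0.0",  "prefix": 0,  "policy": "REJECT"}
      ]}' \
  https://vcsa01.lab.local:5480/rest/appliance/networking/firewall/inbound
```

Note that the PUT replaces the whole rule list in one go, so keep your “accept” rules ahead of any catch-all reject.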

Note: Apologies for the grainy screengrab. Don’t worry, you’re not missing much if you can’t read it… In fact the screen grabs were so lousy I decided to dispense with them altogether. Just close your eyes, and imagine dialog boxes… 🙂

Converge JSON with vcsa-converge-cli: This utility and companion script handles the whole “embedded vs external” question. As you probably know, the SSO/Platform Services Controller has a pretty checkered past when it comes to upgrades – and changes of policy about whether “Linked Mode” or “Enhanced Linked Mode” supported either the embedded model or the external model (which requires a load-balancer if you want to provide service resiliency). I’m pleased to say that VMware have gone back to their previous policy/approach/support – the embedded model is recommended (it’s operationally simpler in my view) without compromises over functionality. Once you know this you’ll appreciate the need for this “Converge” utility. It allows you to converge multiple external PSCs into a single embedded PSC, and also decommission a PSC if it’s no longer needed. It does support those load-balancers, but critically it requires vSphere 6.7 Update 1. You may have come across the JSON file before, as it can be used to carry out a scripted install of vCenter – something I’ve tinkered with. It works well once you have all your IP addresses, FQDNs, usernames and passwords in a row. Finally, the converge is always from external to embedded…
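For reference, the converge template looks something like this. This is a minimal sketch from memory – the hostnames and credentials are obviously made up, and the exact field names should be checked against the sample template shipped in the vcsa-converge-cli directory of the installer ISO:

```json
{
  "__version": "2.11.0",
  "__comments": "Sample template to converge an external PSC into an embedded one",
  "vcenter": {
    "managing_esxi_or_vc": {
      "hostname": "esxi01.lab.local",
      "username": "root",
      "password": "VMware1!"
    },
    "vc_appliance": {
      "hostname": "vcsa01.lab.local",
      "username": "administrator@vsphere.local",
      "password": "VMware1!",
      "root_password": "VMware1!"
    }
  }
}
```

The file is then fed to the vcsa-util binary – something along the lines of `vcsa-util converge --no-ssl-certificate-verification converge.json` – though again, check the exact switches against the docs on the ISO.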

Content Library Uplift: From what I can gather this isn’t a widely used feature, mainly because so many services exist out there for getting files from A-to-B, and generally if one is in use for other reasons elsewhere in the business, people opt for it. There have also been some limitations which customers have found off-putting, and to some degree this release does much to address those. So the new version now supports OVA files, as well as VMTX (templates) using the context-sensitive “Deploy From…” and “Clone to…” menus. Content Library now supports .ISO and other ancillary files such as scripts.

PSC Embedded Re-Pointing (not to be confused with reporting!): The cmsso-util supports a “domain repoint” which allows you to point an existing PSC providing SSO to a different SSO domain, as well as splitting an existing single-domain SSO configuration. There are a number of scenarios where this might come in useful, not least the situation where a big company acquires a littler one (or, in this day and age, the other way round).
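For the curious, the repoint is driven from the appliance shell in two phases. A hedged sketch of the workflow follows – the FQDNs and domain name here are invented, and you should check `cmsso-util domain-repoint --help` for the exact switches on your build:

```shell
# Phase 1: pre-check – generates conflict reports (tags, licences etc.)
# without actually changing anything
cmsso-util domain-repoint -m pre-check \
  --src-emb-admin Administrator \
  --replication-partner-fqdn vcsa02.lab.local \
  --replication-partner-admin Administrator \
  --dest-domain-name vsphere.local

# Phase 2: execute – applies the (optionally hand-edited) conflict
# resolutions and repoints the embedded node to the new SSO domain
cmsso-util domain-repoint -m execute \
  --src-emb-admin Administrator \
  --replication-partner-fqdn vcsa02.lab.local \
  --replication-partner-admin Administrator \
  --dest-domain-name vsphere.local
```

Running the pre-check first is worth it – it surfaces licence and tag conflicts you’d otherwise discover halfway through the actual repoint.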

vCenter High Availability (VCHA): As you might know, availability is provided to vCenter by an Active and Passive configuration – together with a witness to arbitrate in a split-brain situation. Previously, this option was enabled during deployment, and it’s now been ported to the vSphere Client proper. It means a slicker, more automated monitoring process, together with an “auto-clone” of the passive node and witness node.

vMotion for NVIDIA GRID vGPU Virtual Machines: Does what it says on the tin. Extends functionality previously unavailable because the VM was accessing a device on the local host. It’s not just vMotion – but Storage vMotion, and the hot-add of other virtual devices and memory snapshots as well. This table shows you the delta of support between vSphere 6.7 and vSphere 6.7 U1:

VMware Cloud on AWS

There are a couple of highlights here. (1) Aside from US West, US East, London and Frankfurt, there’s a new region in the VMWonAWS offering: Asia Pacific (Tokyo).

(2) There is a new bulk migration service that allows you to get 1000s of VMs out of on-prem vSphere environments into VMWonAWS. This is being dubbed “VMware Cloud Motion with vSphere Replication“. Essentially, it replicates your on-prem VMs to the public cloud, and then uses vMotion to bring them in sync before a live switch-over. This is done over a secure, bi-directional link with auto-VPN setup. It could be used to carry out a datacenter evacuation, datacenter consolidation, or a datacenter extension. This is essentially an uplift of the existing VMware Hybrid Cloud Extension functionality.

(3) VMWonAWS Migration Assessment – is an engagement that deploys multiple tools in an on-premises vSphere environment (such as vRealize Network Insight) to inspect the customer’s pre-existing environment, and measure its suitability for migration into VMWonAWS. It will work out your capacity requirements in AWS before the migration.

(4) vCenter Cloud Gateway. Basically a way of giving a single vSphere Client view of both on-premises and VMWonAWS environments. This allows you to manage the VMWonAWS commitment as if it were merely an extension of an on-premises datacenter. During the deployment of the gateway it gathers the info to enable the “Hybrid Linked Mode” functionality (the term VMware co-opted to describe linked mode between vCenter on-premises and vCenter in AWS). The one limitation I picked up on is that only one on-premises vCenter can be paired with one VMWonAWS vCenter…

(5) Compute Policy Service. Basically this isn’t a feature you configure, but a managed service that’s SDDC-wide (rather than focused on a cluster or host). It allows the customer to define policies that will be familiar to anyone who has configured DRS – such as whether groups of VMs must be kept together or kept apart, or if they should be excluded from DRS-initiated vMotions altogether.

(6) VSAN Data-at-Rest Encryption. In VMWonAWS the VSAN datastore is enabled for encryption using AWS KMS.

(7) Storage Dense Bare Metal (Preview Only). A new offering using a diskless R5.Metal instance, to which EBS volumes are mapped. This allows for a more flexible storage backplane than is normally provided by the local NVMe devices. Those local NVMe devices give pretty good IOPS, so replacing them with EBS is a consideration. However, being EBS gives the customer more flexibility when it comes to sizing and capacity. There are some restrictions which affect this “preview” (so it’s not yet available). This type of cluster has to be added to an existing SDDC and cannot be the first cluster – and the minimum commitment is 4 ESXi hosts. There are some interesting configuration parameters around the VSAN cluster that gets created with this setup. Firstly, to improve throughput every host has 3 disk groups, and the EBS type used is GP2. Compression is automatically enabled, and the VSAN datastore can be increased in capacity in 5TB increments. This configuration provides for a current RAW capacity of anywhere between 15-35TB.
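To make the sizing rules concrete, here’s a back-of-the-envelope sketch using the figures quoted above (15-35TB raw, grown in 5TB steps). The function is my own illustration, not a VMware tool – and the briefing didn’t spell out whether the 15-35TB range is per host or per cluster, so this simply validates a requested raw size against the quoted limits:

```python
MIN_RAW_TB = 15    # quoted minimum raw capacity
MAX_RAW_TB = 35    # quoted maximum raw capacity (current preview limit)
INCREMENT_TB = 5   # the datastore grows in 5TB increments

def growth_steps(raw_tb: int) -> int:
    """Number of 5TB growth steps above the 15TB baseline for a requested raw size."""
    if not MIN_RAW_TB <= raw_tb <= MAX_RAW_TB:
        raise ValueError(f"raw capacity must be {MIN_RAW_TB}-{MAX_RAW_TB}TB")
    if (raw_tb - MIN_RAW_TB) % INCREMENT_TB:
        raise ValueError(f"capacity grows in {INCREMENT_TB}TB increments")
    return (raw_tb - MIN_RAW_TB) // INCREMENT_TB

print(growth_steps(15))  # 0 - the baseline deployment
print(growth_steps(35))  # 4 - fully grown: 15 + 4 x 5 = 35TB
```

Remember this is RAW capacity – usable space will differ once the automatic compression and your chosen storage policy (FTT/RAID) are factored in.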

One interesting outcome of using EBS is increased flexibility around the resync process for VSAN, should you lose a disk or a host. R5.Metal instances are quick to deploy, and EBS volumes can be unmapped from one instance and remapped to another. If a host fails, it is quick to create a new instance, and then remap the disks to the replacement node. A lost disk incurs the same rebuild/resync overhead as it would in an on-premises environment…