In the second of my vExpert briefings with VMware – we focused on a new edition of vSphere dubbed “Premium”, and a drill-down into the new features of vSphere 6.7 U1. There was a little bit of overlap with the previous session (which was about VSAN). Once we were done with vSphere we moved on to improvements in VMware Cloud on AWS. There was also a brief 101 session on AppDefense, which I’ve opted not to cover in this blogpost…
I’m back at the event this year, after taking a one-year sabbatical in 2017. I wasn’t working at the time, and didn’t think my bank balance could afford the “VMworld Hit”. Now I’m back in the saddle work-wise, I thought it would be good to catch up with my former colleagues at VMware, and say hello to friends within the community. Shortly before VMworld 2018 kicked off, my fellow vExperts and I were briefed on some of the key announcements surrounding VMware prior to the event. This is pretty typical of many of these programs, and the content was embargoed until closer to VMworld itself. One of these sessions was focused on the enhancements to VSAN in the vSphere 6.7 U1 release.
The improvements can be broken down into three categories – Simplified Operations; Efficient Infrastructure and Rapid Support Resolution. There is nothing jaw-dropping “wow” about these increments, and taken on their own they amount to a tying up of loose ends to some degree. However, loose ends do have a tendency of tripping people up, and you’d be surprised how often, from an operational perspective, these issues can collectively combine to undermine customer experience and satisfaction. So they are not to be underestimated. I would also say, from my experience, these fixes are often much harder to deliver than many customers really give credit for. Trust me, if there was an easy fix – everyone would leap on it. The fact that doesn’t happen immediately is often because once you pull back the lid of the tin, there’s a mass of complexity or politics to be resolved first.
Yes, we have heard that message before from many, many companies in the past – but how many companies have REALLY delivered on that promise? And I mean REALLY delivered on that promise…
The trouble has been that to achieve that laudable goal you have, in the past, needed a truckload of infrastructure to deploy either a server-based or virtual-desktop-based environment – and run those Apps from an expensive data center location, usually in the context of a Windows Operating System. I once said at a user group in the US that putting desktops in the data center (the most expensive computing environment known to human or beast) is unlikely to result in massive cost savings. As I recall, one person stood up and applauded my honesty. It doesn’t matter how many TCO calculators you throw at the approach, the data center business is an expensive one. And one that, according to our public cloud vendors, customers can’t wait to get out of.
The Technology – Droplet Computing
At Cloud Field Day in Silicon Valley back in April – Droplet Computing came out of “stealth” to reveal they have developed a client-side container technology that will allow practically any application to run in the context of a web-browser. Here’s a quick overview of that container technology:
So, there are a couple of components going on here. But in simple terms, the container sits inside a web-browser which essentially offers the runtime environment, surrounded by a series of supporting “libraries”. Possibly the most significant is the use of WebAssembly, which handles the job of intercepting the machine code generated by the app sitting in the container. A clunky analogy would have the web-browser as the virtualisation layer, and the container as a VM, with the user app sitting inside the VM. All this is done, however, without the bloatware of either a server-side hypervisor or client-side virtualisation and an “operating system” sitting inside that VM. That was tried in the past with technologies such as VM Player, ACE, Workstation or Fusion – or worse still, downloading an entire VM from the corporate data center – just to run a measly little Windows App. Now I’m not saying that VMware Horizon or Citrix XenApp are “wrong”, it’s just that for many they were sledgehammer technologies trying to crack a nut. Of course, the physical system the end-user sits at needs an operating system – but that could be almost anything you care to think of.
So, this is ultra-compatible. How compatible? You could easily “natively” run a Windows-based App inside a Droplet Container on an Intel-based Apple Mac. All without installing Windows or having to power-up or resume a Windows VM. In fact, the processor wouldn’t even have to be Intel-based, it could be an ARM-based processor if necessary. This opens the door to having those Windows Apps run on chipsets for which they were never designed.
That lack of a Windows requirement stems from the use of “Wine” from the world of Linux. Wine has had a sketchy history in the world of Linux as a way of natively running Windows Apps in a Linux context – but it has moved on and improved over the years.
In terms of the underlying physical system – all that’s really needed is a relatively modern web-browser with WebAssembly enabled – which is sufficient to support the Droplet Container. Currently that’s Chrome v60, Firefox v52, Safari v11, and Microsoft Edge v16. Mobile devices are covered too, such as Android-based phones/tablets (Android v6.2) and Apple iPhone/iPad (iOS v11.1).
So, in a nutshell: Droplet Computing does for desktops what container technology has done for server-side code development and code distribution – adding a layer without the overhead of a virtualization layer, operating system and other dependencies.
The Use Cases
ANY, ANY, ANY means MANY, MANY, MANY.
But let’s start from scratch. The end-user computing world is a very different one from the narrow world of Windows PCs, and the old days of “Virtual Desktop Initiatives”. Whilst it would be foolish to discount server-based computing and VDI, neither succeeded in going mainstream – or becoming the de-facto way that users got their apps. They remained corralled into a particular niche for Dilbert-style users who sat in their cube all day. That way of working is on the decline, with many of us being more mobile or working-from-home (WFH – or should that be WTF?) – plus we all now have at least three devices (if not more) in the shape of laptop, tablet, and smartphone.
Whilst attempts have been made to duplicate apps across those device types – this has resulted in compromises, either in the “app”-driven world of the iPad or in the web-driven world of Office365. It’s always meant some uneasy compromise of the experience – which support folks have to excuse or explain away. So, this new way of working has spawned attempts to bring everything under one house via things like VMware’s WorkspaceONE – a portal to a plethora of different ways of delivering apps (SaaS, Virtual Desktops, Application Packaging – like ThinApp, AppVolumes, Microsoft App-V and so on, and on and on…). For me the difference with Droplet Computing is they are offering a net-new method of delivering the App – in a container, executing on a device of the end-user’s choosing with their preferred web-browser. Incidentally, I still think these one-stop-shop “App Stores” will be needed for ID management, entitlement and security reasons – but I can see Droplet Computing being the Apps that are advertised there – to be downloaded and run on the end-user’s device. And of course, if you must have centralised VDI, they are a possible target too…
Clearly, legacy apps will be an important market – but I personally believe that this approach will pay dividends for new applications as well as older ones. Although, to be fair, it’s those older applications that often prove to be the bane of everyone’s life. Often they’ve been developed on an OS with lower security requirements – and that often means ‘breaking’ the rules and regulations about OS hardening – just to make them work. The older they become, the more likely they are to break as their dependencies themselves become incompatible or discontinued. This then has a knock-on effect on other important requirements, such as meeting compliance during an external audit, or merely ensuring the apps are as quick and reliable as they once were. The dizzying releases of Windows and their associated Apps means it’s really impossible for an enterprise to freeze its world based on a particular approved “build” and blend of OS/Apps. This just doesn’t sit well in a BYOD era where CorpIT has no clue or control over the end-point the user chooses – never mind that they may be using a form-factor such as a tablet. So, Droplet Computing’s container vision and technology offers a tantalising promise of escaping these limitations and restrictions.
There are some interesting parallels between the early days of virtualisation and what Droplet Computing is doing.
Firstly, there’s a low-hanging fruit market of legacy apps that are still used by business but won’t be supported or won’t work on new operating systems. That includes Apps developed by ISVs who may not actually be trading anymore. The cost of rewriting those legacy apps far outweighs their usefulness to the business, so a way of extending their lifetime beyond meaningful usage is appealing.
Secondly, although the software running in the container is unmodified and runs natively – customers should be aware that Droplet Computing is not responsible for the licensing policy or support agreement of the ISV. Of course, if the ISV has ceased to operate there’s a great deal of leeway there, but if the ISV is current – they might decide (as they did with 1st Gen virtualisation) simply not to support it, or to license it in such a way as to reduce its competitive value. For instance, a Droplet Computing user license allows the end-user to run 3 copies of the Droplet Computing software – enough to cover the 3 most popular devices a user might use (laptop, smartphone, and tablet).
However, the ISV might choose to charge the business 3 times for an application that has been “installed” on three different devices. Remember, many of these legacy applications that represent the low-hanging fruit have quite antiquated licensing policies that are often the bane of many a Citrix XenApp or Horizon View admin. That said, the potential cost savings from not having to run the older infrastructure – and having that app execute not on expensive compute (the data center) but on cheap compute (the laptop) – could outweigh that restriction. Put simply, it might be cheaper to suck up the additional license costs to save money elsewhere. Personally, my hope (the same hope I had with Gen1 Virtualisation) is that ISVs review their licensing policies with a view that anything that drives consumption also preserves market share – and that it’s not in their interests to corral their horses and carriages in a circle in order to “protect their revenue stream”.
Looking back on this paragraph, I’m perhaps over-egging the impact of these licensing considerations. Perhaps ISVs have woken up to the multi-device world we now reside in, and these antiquated licensing policies are a thing of the past?
So, it’s early days for Droplet Computing. They have secured their first round of VC funding and come out of stealth – and they are on the cusp of their 1.0 GA release. I hope to get some stick time with the technology, as I believe getting one’s hands dirty is the first step to learning the advantages, disadvantages and limits of a technology. I’ve waited a while for a truly new and innovative technology to catch my eye – and not just a rehash of existing bits and bytes. I think what Droplet Computing is doing is very, very interesting – and they are certainly a company to keep on your radar.
Category: TechFieldDays | Comments Off on Cloud Field Day: Droplet Computing – Any App, Any Where, Any Device
Well, it all seems such a long time ago since I was in the Bay Area at the beginning of April. And I have been meaning to blog about my experiences and insights at Cloud Field Day for weeks. Sadly, I became very ill on the way home from the event – I came down with bronchitis. If you’ve never had it – count yourself lucky. It took me weeks to recover. After getting over that – a perfect storm of events overtook me – both good and tragic.
Anyway, I’m feeling MUCH better now, and I’m finding some cycles to work thru that “to do list” we all perpetually have… This won’t be my only blog on Cloud Field Day, as I intend to write a blog about the start-up that came out of stealth when we were there – called Droplet Computing. They are worthy of a blog all to themselves…
So there were a number of more traditional shrink-wrapped software vendors at the event – and without fail each one endeavoured to show how they were pivoting their traditional stack to the cloud. It makes perfect sense to do that considering it was a “Cloud Field Day”. The clue is rather in the title. I want to be kind and say that all the vendors were at least trying to do that. But it did commonly feel at times as if ToysRUs were reacting to the onslaught of Amazon. And it’s not without irony that – just as some traditional retail operators are really struggling to deal with the competition that Amazon.com brings in the domestic space, some traditional software vendors are struggling to deal with the competition that a public cloud vendor like Amazon brings to the table. There is an element of “let’s wait and see” if this change really is happening – only to find that once the change has happened – you are on the back-foot from the get-go. It’s a much more dynamic and competitive landscape which is moving very quickly, and some of the traditional software vendors aren’t as “agile” (hateful word!) as their senior management might want to believe – or tell their shareholders and investors.
Now, this blogpost isn’t your usual – let’s bash the old ways, and slag off the traditional vendors because they are easy targets. I genuinely would like to see their cloudy efforts succeed. Because I believe in competition. And I believe that a market that’s sewn up by a tiny cartel of big players is not in the customer’s or anybody’s interest. Competition is good for the Big Evil Corporates too – because it stops them becoming lazy and complacent. So I would like to give credit where credit is due. NetApp for instance has gone down the route of effectively creating a brand-new unit to deal with the challenges they are facing. Essentially, putting all the storage goodness they have into Amazon AWS. This is native enterprise storage in the cloud, with all the features and functionality you used to enjoy on-prem.
[yes, I said on-prem. I’m bored already with the language police going on and on about on-prem(ises). Can we devote our mental energies to something that is actually IMPORTANT, for once?].
Note: This is the interesting Veritas presentation, and is worth a watch.
At the end of the week (when delegates are losing the will to live!), Veritas came in to talk about what they were doing. Sadly, the first part of the session was a bit of a bore fest. Until a separate team came on to talk – and the Old Skool Veritas guys had left the room. Now, what they showed us looked very much like a start-up’s “minimum viable product”, and I’ve always hated that term, especially when it’s deployed by huge, huge companies. Guys, you need to up your game (not Veritas specifically…). MVP is a term beloved of the Valley and the start-up culture. Its rightful place is there – in the start-up culture, where smaller companies need to spend a little, impress a lot, react quickly – and, critically, attract VC cash. But I’m afraid MVP has no place in a massive incumbent. People expect you to apply some spit and polish, and the expectations from customers are naturally higher. Meet them. Exceed them. Anyway, I digress.
There was some speculation amongst the delegates about why Veritas had chosen the route they had. I made the point that we were in Veritas CorpHQ – why don’t we, like, just ask them? Mercifully, we were spared the usual corporate flim-flam. The guy told us quite straight down the line – that they just could not get the features and coding built quickly enough from the existing team. So they built their own.
There’s a critical message here for all companies like NetApp and Veritas. There’s a right way to do this and a wrong way. Trying to build the new company from the people and processes that built the old is, by its nature, like rolling a stone up a hill. You nearly always need to build the new company by incubating a new BU from within. This is not just some organisational chart activity – critically, that group needs to be championed, and yes, “protected” from other BUs/silos that would quite cheerfully strangle it at birth. So a bigly beautiful wall needs to be built around the company-within-the-company – and then it needs stuffing with cash. The other thing that needs to happen is the sales folks who earned their commission from selling the “Old stuff on the truck” need to be “compensated” and “incentivised” to sell the “New stuff on the truck”. And if they can’t do that – you need to get new people who will, because they have no stake in the previous model. As a friend quoted to me:
“Change the people, or change the people”
Think about that for a while…….
Anyway, I name-check NetApp and Veritas as companies who I think may have woken up and smelt the coffee and bacon (an odd mix, but actually quite tasty – although as a Brit I prefer a cup of tea with my bacon butty). I think I would include Oracle Infrastructure Cloud (OIC) in that mix too. Whether NetApp, Veritas or Oracle are able to overcome the baggage that comes with their brand is anyone’s guess. But I do think they are at least trying, and if they put enough weight behind their respective projects – there is at least a chance they will be successful. And it feels churlish not to at least encourage and recognise effort, where effort is being made. Oddly enough, OIC didn’t go down as well at CFD as they did at the Ravello Bloggers Day (same venue, similar people, almost the same presenters). I’m not sure quite why that happened – as there’s lots of positive things to say about their offering. Perhaps that’s because they led with Ravello (for which there’s a lot of love in the vCommunity?). Oracle comes with a lot of baggage in the vCommunity – so there’s a “perception” issue to overcome. Sadly, the Ravello Bloggers Day wasn’t recorded, and I think embedding the CFD video might actually do them a disservice. So I recommend checking out my blog on that event:
I’m afraid I could not say the same about Riverbed. Sadly, their presentation leant too heavily on older sales plays that had merely been re-jigged for a cloud era. You could tell they weren’t really connecting with their audience by the rigor mortis that was settling into the group. That also kind of came across in the lack of vim and vigour in their presenters. I hate to say things like that, because it feels like a personal criticism, which I normally shy away from.
Things did brighten up with a presentation from one of their team members at the end of the session (Vivek Ganti) – who at least came across as someone who felt a passion for the work he and his team were doing. This guy didn’t feel like he was just “going through the motions”.
Sadly, however, what he was showing was some of the automation that Riverbed have put into deploying their appliances into an Amazon VPC. In fairness, if you were doing this all by hand using Amazon’s frankly horrid web UI you would have a piece of work on your hands. However, if you’re doing public cloud “right” you should be using your favourite toolset of utilities to leverage the API. That’s what public cloud is all about. Not Old Skool SysAdmins (like me!) clicking and filling in dialog boxes, but new-style DevOps SysAdmins who can stand up and tear down infrastructure with the flick of a script. One of the goals of this DevOps Public Cloud is to accept that nothing is really ever persistent in the old-style way – but should be by its nature volatile – so robustly automated that it can be destroyed and created in an instant. My point here is that this kind of automation is likely to make the average DevOps SysAdmin go “meh”.
Also, for me there’s a more important issue – the value in any software – whether it’s on-prem OR in Da Cloud – isn’t how easily it can be deployed and set up. That, really, should be a “given”. The real value is what that software allows the customer to do – which they couldn’t do before. Now, I guess you could say – standing up a multi-tier load-balanced layer that offers redundancy, and inspection of packets to ensure a smooth network experience, can be difficult. I actually think Riverbed have a fantastic suite of products (although folks tell me they can be quite pricey). But I wasn’t really convinced by their cloud play.
Why was that? After all, they aren’t really doing that much differently from, say, what NetApp were doing. But there was something about the vibe. It felt that Riverbed were sprinkling cloudiness over an existing product range, whereas I got the impression that NetApp had made a genuine attempt to Cloudify their existing product ranges, whilst at the same time acquiring and investing in something net-new. I don’t want to label Riverbed’s approach as “Cloud Washing”. I guess the difference is in the approach. Whereas it appears as if NetApp wanted to deliver NetApp-as-a-Service (NaaS!), with Riverbed it was more like Virtualization 1.0:
Hey, let’s put our existing stack into a Linux instance and spin that up in EC2/VPC.
I suspect what customers WANT (remember them? customers?) is all the features and functionality they used to enjoy on-prem, without any of the complexity of configuration, management, or having to deal with it all becoming rusty after 3-5 years, and having to lash out more cash to upgrade and forklift their unique bits. Shrink-wrapped software, without being tied up in cling film if you wish….
Anyway, I said when I started that this would be just one post, with another on Droplet Computing. But I feel like banging on about the companies who are getting this right from the get-go. And that feels like another post entirely (although it is related…)
Category: ThinkPiece | Comments Off on ThinkPiece: Cloud Field Day: Can an old dog, learn new tricks?
I recently decided to switch back to using USB media for booting my VMware ESXi hosts. My main thought was that I want to use the local HDD either for some kind of VSA testing (such as StorMagic’s SvSAN) or else, when I have the budget, to buy in SSDs to make a physical VSAN cluster, rather than being dependent and reliant on a virtual nested VSAN setup. I went out and got some 32GB USB media to make sure they would be big enough to create the scratch partition.
I’m often wiping my VMware ESXi hosts to try out new builds – or to roll back to previous builds for testing purposes. For that I use the UDA together with some kickstart scripts to do the bulk of the customisation. Sometimes I just lay down a clean build with no customisation and use PowerShell scripting to build up the environment to the level I want it. I have that in a modular way that allows me to lay down some, but not all, of the configuration depending on my development needs.
Anyway, to script an install to USB media you can use this in the kickstart:
install --firstdisk=usb --novmfsondisk
The --firstdisk flag can actually take a series of arguments separated with a comma. It is possible to specify an order of particular disk types (not just the first USB, HDD/SSD or SAN-based LUN) the installer searches for. And you can use this to set your own order for how disks are discovered. For instance, --firstdisk=usb,remote,local would first try to install to USB, then a LUN on a SAN, before lastly trying the local disk.
Also supported is specifying the model of the disk, like so: --firstdisk=ST3120814A. I used this in the past with hyper-converged appliances that I was doing ground-zero resets on – and it just so happened the bootdisk I was using had a unique model number. Of course this isn’t very helpful when a server contains disks that all have the same model number…
The other option is, instead of install --firstdisk, to use install --disk=mpx.vmhba1:C0:T0:L0. This allows you to indicate the disk by the adapter it’s configured for, and the controller/target used (note these values are often 0; it’s just the L number that changes). For many years we have referred to this as the vmhba syntax or “Runtime Name”. In the past it’s been hard to trust these values 100%, but I think they are more reliable nowadays, as the way the VMware ESXi host boots the vmkernel is VERY different now.
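Putting those options together – here’s a hedged sketch of what the disk-selection section of a kickstart script might look like. The model number and device path are examples from my own kit, and only one install line can be active at a time, hence the comments:

```shell
# Try the first USB device, then a SAN LUN, then a local disk - no VMFS on the boot device
install --firstdisk=usb,remote,local --novmfsondisk

# Alternative: pin the install to a specific disk model...
# install --firstdisk=ST3120814A --novmfsondisk

# ...or to an exact device using the vmhba "Runtime Name" path
# install --disk=mpx.vmhba1:C0:T0:L0 --novmfsondisk
```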
Probably the easiest way to find the MODEL name/number and vmhba syntax is using the esxcli commands like so:
esxcli storage core path list
Here’s a cut and paste of the USB output –
usb.vmhba32-usb.0:0-t10.SanDisk00Ultra00000000000000000000004C530001270222101242
UID: usb.vmhba32-usb.0:0-t10.SanDisk00Ultra00000000000000000000004C530001270222101242
Runtime Name: vmhba32:C0:T0:L0
Device Display Name: Local USB Direct-Access (t10.SanDisk00Ultra00000000000000000000004C530001270222101242)
Adapter: vmhba32 Channel: 0 Target: 0 LUN: 0 Plugin:
NMP State: active Transport: usb
Adapter Identifier: usb.vmhba32
Target Identifier: usb.0:0
Adapter Transport Details: Unavailable or path is unclaimed
Target Transport Details: Unavailable or path is unclaimed
Maximum IO Size: 32768
I imagined a system with multiple USB media (!??!) would report C0:T0:L1 or else C0:T1:L1.
Oddly enough, I found a second USB device added to my ESXi host doesn’t appear – not even when doing a manual install with an ISO attached via the HP iLO. I’m not sure why this is – but it may (or may not) be significant which USB slot the device gets inserted into, or that there are limits around the way the VMkernel enumerates USB devices – perhaps only enumerating the first USB memory stick found? This isn’t terribly important – but I will ask around my contacts and see what I can dig up. It’s a bit obscure, but I’m curious like that.
Category: vSphere | Comments Off on Kickstart Scripted VMware ESXi Install to USB Media
I’m running VSAN in a nested configuration, and I generally shut down my homelab in the evening each day. I grew tired of the manual process of entering maintenance mode on these nested nodes before shutting them down, and then shutting down the host that they run on. I did a bit of googling for the PowerShell that does that – and was seeing quite complicated scripts. I’m pleased I went back to the documentation for PowerCLI, which is the primary source for the cmdlets:
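For what it’s worth, here’s the kind of minimal sketch the PowerCLI documentation led me to – hedged heavily: the vCenter name and cluster name are placeholders, and it assumes the nested nodes can tolerate the “No data migration” maintenance mode (fine for a lab, not for production):

```powershell
# Connect to the vCenter that manages the nested VSAN cluster (placeholder name)
Connect-VIServer -Server "vcsa.lab.local"

# Put each nested node into maintenance mode without evacuating VSAN data,
# then shut it down cleanly
foreach ($VMHost in Get-Cluster "NestedVSAN" | Get-VMHost) {
    Set-VMHost -VMHost $VMHost -State Maintenance -VsanDataMigrationMode NoDataMigration -Confirm:$false
    Stop-VMHost -VMHost $VMHost -Confirm:$false
}
```

Once the nested nodes are down, the physical host they run on can be shut down in the usual way.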
Nesting VMware ESXi has become easier and easier as the years roll by. In case you don’t know, “nesting” is the term used for running VMware ESXi inside a VM for development purposes. It’s the basis of VMware’s popular “Hands-on-Lab”, as well as homelabs – and in recent years “nesting” has gone to the Cloud with services like Ravello on the Oracle Infrastructure Cloud (OIC).
In years gone by there were many settings needed at the physical and virtual layer – and hand-edits to configuration files were required to make this work. With the advent of vSphere 6.5 U1 a lot of that disappeared. HOWEVER, there’s still plenty of work that needs to be done at the physical and virtual layer to allow networking to work properly, as well as to successfully pass some of the “health checks” around technologies like VMware VSAN, which has specific networking requirements, for your web-client to light up with little green ticks of happiness.
Here’s a brief check list:
Physical Switch needs:
MTU of 9000,
with VLAN Tagging enabled as necessary
Physical ESXi host vSwitch needs
MTU of 9000
vSwitch security policy of Accept, Accept, Accept
Portgroups used by the virtual “nested ESX” set to VLAN 4095 to allow VLAN tagging in the nested layer to pass thru to the physical switch
Virtual Nested ESXi vSwitch needs
MTU of 9000
vSwitch security policy of Accept, Accept, Accept
Portgroups for VMs and VMkernel enabled for VLAN Tagging as necessary
Standard Switches work well with nesting, and have the added benefit of not tying the host to a vCenter. This makes blowing away the nested layer after your lab period ends a breeze. It’s a slower and more awkward “clean-up” process if you’re using DvSwitches. To use the special MacLearn VIBs that improve network performance, DvSwitches are needed at the physical layer – this isn’t an option if your main physical ESXi host is stand-alone and not managed by vCenter.
1. Physical Switch Needs:
In my case I have an HP ProCurve 1810G-24 switch – it’s not a bad unit, whisper quiet and simple to configure:
I have a simple VLAN configuration (mainly used to demonstrate/explain the VLAN Tagging concept) but I also VLAN off my VMotion traffic.
2. Physical ESXi Host vSwitch Needs:
I set my MTU and security policy on the properties of vSwitch0, which means all the portgroups inherit those settings. It’s simple, quick and easy.
To do that in the context of the VMware ESXi VMkernel you could use the ESXCLI commands – and these commands could be part of a kickstart install script:
## - Enabling Jumbo Frames on vSwitch0 to pass VSAN "Configuration Assistant" tests.
esxcli network vswitch standard set -m 9000 -v vSwitch0
## - Lower security on vSwitch0 to allow traffic to flow in a nested environment.
esxcli network vswitch standard policy security set -v vSwitch0 -f=true -m=true -p=true
The VLAN configuration can be set when the portgroup is being created as is the case with VMotion being on VLAN10 like so:
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 10 -p "VMotion" vSwitch0
esxcfg-vmknic -a -i [VMOT_IP] -n 255.255.0.0 "VMotion"
esxcli network ip interface tag add -i vmk1 -t VMotion
Note: In this case [VMOT_IP] is variable used as part of the UDA appliance.
For an existing portgroup, for example the “VM Network”, the VLAN value can be set to one that passes thru the VLAN Tagging from the nested layer to the physical layer:
esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 4095
Alternatively, if your physical ESXi hosts are managed by vCenter – you could do this with a PowerShell foreach loop like so:
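Something along these lines is what I mean – a sketch, assuming an existing Connect-VIServer session and that every host uses a standard vSwitch0 (adjust names to taste):

```powershell
# For every ESXi host in vCenter: enable jumbo frames and relax the
# security policy on vSwitch0 so nested traffic can flow
foreach ($VMHost in Get-VMHost) {
    $vSwitch = Get-VirtualSwitch -VMHost $VMHost -Name "vSwitch0" -Standard
    Set-VirtualSwitch -VirtualSwitch $vSwitch -Mtu 9000 -Confirm:$false
    Get-SecurityPolicy -VirtualSwitch $vSwitch |
        Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true -MacChanges $true
}
```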
The settings on vSwitch0 on the nested ESXi host mirror those of the physical host… so everything is aligned from vESX>pESX>pSwitch – everything is MTU 9000 with weakened security. The VLANs I have from 101-104 could be added at the physical and virtual level with the esxcli command like so:
esxcli network vswitch standard portgroup add --portgroup-name=VLAN101 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set -p VLAN101 --vlan-id 101
Or, if you wanted to do this via PowerShell, you could use the following method. This approach differs from my previous examples – it merely “gets” every ESXi host in a vCenter, and creates VLAN101, VLAN102, VLAN103 and VLAN104.
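A sketch of that method (again assuming an existing Connect-VIServer session; the vSwitch0 and portgroup names are from my own lab):

```powershell
# On every ESXi host in vCenter, create portgroups VLAN101 to VLAN104
# on vSwitch0, each tagged with its matching VLAN ID
foreach ($VMHost in Get-VMHost) {
    $vSwitch = Get-VirtualSwitch -VMHost $VMHost -Name "vSwitch0" -Standard
    foreach ($VlanId in 101..104) {
        New-VirtualPortGroup -VirtualSwitch $vSwitch -Name "VLAN$VlanId" -VLanId $VlanId
    }
}
```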
A very common use of nested vSphere is to create a virtual/nested vSphere VSAN cluster – this is because not everyone can afford the servers and storage to build out VSAN at a physical level (3 hosts with 1xSSD, 1xHDD). For some time it’s been possible to virtualise ESXi, as well as to mark specific VMDKs as being either HDD or SSD. Inside the nested ESXi environment you will need a VMkernel portgroup enabled for VSAN. A simple thing to do would be to enable either the Management Network or the VMotion network for dual usage. Alternatively, you could set up a net-new VMkernel portgroup whose sole purpose is VSAN communications. That’s what I do (even if the traffic goes over the EXACT same NICs) as it makes it clear what the usage of each portgroup/vmkernel port is.
You can do this from the CLI with the following commands:
Note: In my case, [VSAN_IP] is a variable used in my kickstart sub-template as part of the UDA. The main thing is to know the numbering of the vmk ports: vmk0 is always the default management port; in my script the next vmk port created is for VMotion, which therefore makes the VSAN port vmk2.
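Following the same pattern as the VMotion example earlier, a sketch of the commands would be as follows. [VSAN_IP] stays a placeholder for the UDA kickstart variable, and note that on older ESXi releases the second command uses the `ipv4` namespace (`esxcli vsan network ipv4 add`) instead:

```shell
# Create a "VSAN" portgroup and vmkernel port on vSwitch0, then tag
# vmk2 for VSAN traffic. [VSAN_IP] is a placeholder (a UDA variable here).
esxcfg-vmknic -a "VSAN" -i [VSAN_IP] -n 255.255.0.0 -p "VSAN" vSwitch0
esxcli vsan network ip add -i vmk2
```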
DISCLAIMER: Firstly, and most importantly: this issue is likely to affect a tiny proportion of customers. Despite what many think, once a new version of vSphere hits the streets, most large organisations take anything between 12-24 months to actually complete an upgrade from one flavour of vSphere to another. Therefore, by the time a customer who has rolled out vSphere 6.5 U2 is ready to move, the chances are that there WILL be an upgrade path to vSphere 6.7, probably in the shape of a U1 or U2 release. The important thing to remember, if your intention is to upgrade to vSphere 6.7 in the short term for whatever reason, is to be aware of the title of this blogpost. You may as well skip 6.5 U2 and head straight to vSphere 6.7 if that’s the case…
UPDATE (10th May, 2018): It looks like the VAMI interface on the VCSA is set to deploy vSphere 6.5 U2. It’s flagged as just a “bugfix”, and its severity is set to “critical”. I’m researching what the situation is with Update Manager.
WHY DOES THIS HAPPEN: This has happened on more than one occasion in recent memory. I don’t personally think that precedent can be used to justify it; a much simpler reason exists. The reality is that VMware now has a very complicated, richly interdependent and tightly-coupled series of software and services. Attempts have been made in the past to build a “train release”, by which the core platform of vCenter/ESXi is the “head” or “engine” of the train, and the related products that sit on top of it are coupled behind it on a release schedule that, if correctly managed, should arrive in the customer’s station at roughly the same time. Occasionally, a carriage gets de-coupled and is left in the sidings somewhere outside Colchester. For customers for whom that software package is critical, this can and will delay an upgrade process until the last part of the train gets into the station. It can be especially difficult where one SR calls for an upgrade to fix one problem, only to “break” a piece of software elsewhere.
This kind of sequencing of software releases is very, very difficult to do in a large multi-national software company where software is shrink-wrapped and installed on-prem. It’s actually a compelling argument for buying software like vSphere as-a-service, and heading off into the public cloud. You cease to have to worry about this kind of poop.
HAS THIS BEEN COMMUNICATED EFFECTIVELY? I’m sorry to say this, but no. Sadly, an almost Apple-like cloak of secrecy still envelopes VMware. There are good reasons for this in some cases, but in other situations, like this one, it actively works against good communication. As an ordinary mortal, I am supposed to read all the blogs, read all the release notes and listen to all the podcasts. I would say I’m pretty well connected to the VMware world, but I didn’t know this was happening. Heck, I didn’t know that vSphere 6.5 U2 was on its way, let alone that there wasn’t an upgrade path.
Apparently, to some, that’s my fault and my problem, and it isn’t the responsibility of VMware to shout from the rooftops as if the Four Horsemen of the Apocalypse were on their way. Before I rant on and lose the plot a little, I will point you back to my disclaimer. Look, I totally get it, and totally understand why and how this happens. And in many ways the vSphere 6.5 U2 release is a good thing, because it quickly deals with the issue described under “WHY DOES THIS HAPPEN”: the train becoming derailed.
For those outside of vendorland, much of this just seems bug-eyed weird. Although the “shit-sandwich” has become such a de facto standard in our industry, many seem quite happy to suck it up. Personally, I like to set my standards higher than that. Case in point: the download page for vSphere 6.5 U2 has one of those little yellow “read the KB” stickers, which incidentally disappears when you click “Go To Downloads”.
The downloads page of vSphere 6.7 (at the time of writing) has no notification at all.
People will say “buyer beware” and that you should read the release notes. Honestly, who has time for that? Release notes are like EULAs: practically no-one reads them (a claim I’m unable to quantify, I know!).
Of course, if you fail to read all the blogs, the release notes and these stickers, then it’s “your own fault, chum” according to some. Personally, I think vendors have to realise that in today’s busy world, with lots of competing channels searching for our eyeball time, something a little more noticeable is required.
With that. Read my disclaimer again…
Category: vSphere | Comments Off on CURRENTLY: There is no supported upgrade path from vSphere 6.5 Update 2 to vSphere 6.7
Executive Summary: There are a lot of methods. If at first you don’t succeed, give it another go. Second time lucky and all that.
I recently lost my Windows 10 jumpbox due to a power failure; it disappeared from my SAN array. I didn’t have a backup (which was stupid of me) and Windows wasn’t activated. In the meantime I realised I had an activated copy of Windows 10 running in VMware Fusion. So I thought, heck, I might as well import it into vSphere, and avoid re-installing and that annoying watermark in the right-hand corner of my RDP desktop.
Ironically, after completing the move, because Windows 10 was now sitting on a different virtual hardware platform, it de-activated itself and wouldn’t reactivate. So it was all a waste of time really; I could have just deployed a fresh copy of Windows 10.
Anyway, I’ve learned my lesson the hard way: always back up your jumpbox, and make sure any work in progress is held elsewhere, such as Dropbox, Google Drive or OneDrive. That way, if your jumpbox is toast, you don’t lose that PowerShell script you have been perfecting all week.
METHOD1: Connect to Server from VMware Fusion
This initially failed when the target was vSphere 6.5 U1, but worked with vSphere 6.7. It’s hard to tell if the vSphere uplift was the decisive difference, or whether it was just a case of try, try and try again.
VMware Fusion has a “Connect to Server” option
Once connected and authenticated to the vCenter or ESXi host, you can drag and drop a VMware Fusion VM like so:
METHOD2: Export/Import OVA
Another method is to export the VM from Fusion into the OVF/OVA format. I prefer OVA, as it’s pre-compressed and gives you a single file to deal with, whereas OVF gives you a text-based “descriptor” file and a whole bunch of VMDK files.
Select the VM in Fusion, and in the File menu, choose Export to OVF
Once exported, you can log in to either VMware vCenter or VMware ESXi to import it there. Personally, I’ve found the import/export process in vCenter/ESXi a bit 50:50… Again, this failed in vSphere 6.5 U1, but was successful in vSphere 6.7.
Select a host or cluster in vCenter, and choose Deploy OVF Template…
Watch out for the format of the disk – as it does not default to “Thin Provisioning”.
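As an alternative to the export/import wizards, VMware’s ovftool CLI can push a Fusion VM straight to vCenter in one step, and lets you force thin provisioning explicitly. This is only a sketch: the .vmx path, the user and the vCenter address below are made-up placeholders, not values from my lab:

```shell
# Sketch: deploy a Fusion VM directly to vCenter with thin-provisioned disks.
# Path, username and vCenter hostname are hypothetical placeholders.
ovftool --diskMode=thin --name=Win10-Jumpbox \
  "/path/to/Windows 10.vmwarevm/Windows 10.vmx" \
  "vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/Cluster"
```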
METHOD3: VMware Converter
I tried this a number of times, and it did not work: the VM would not boot. This is kind of odd, because I’ve had more success with this method for importing physical machines into VMware Fusion than I have had with other methods of “importing” OSes into VMware Fusion…
Category: vSphere | Comments Off on How to Import VMware Fusion VM into VMware vSphere
On last week’s VMTN podcast (Talkshoe / Facebook Live) the discussion turned to VMworld 2018. I asked our genial host, Eric Nielsen, what the official VMworld policy was towards the use of “booth babes” or “models” in the post-Weinstein, #MeToo era. It can hardly have escaped people’s attention that this is a touchstone topic of our era. Recent events at the UK’s “Presidents Club” have highlighted the dangers of employing young women in the murky world of so-called “corporate hospitality” and conference events.
There’s something implicitly wrong with it because it is nearly always gender-biased, in that the people employed to be “booth babes” are almost universally young women, and the implication is that such events are uniformly populated by heterosexual men who enjoy ogling women young enough to be their daughters, and in some cases grand-daughters. Of course, that doesn’t apply to everyone. #NotAllMen. However, the message sent out by such activities suggests that the event is not for gay men, or for women of any sexuality.
I’m not in the least politically correct, nor do I wish to hector or carp. Indeed, with a quick scour of the internet you’ll be able to find photos of me with “booth babes” at previous VMworlds. This was something I used to do to wind up my ex, who was convinced that I was living the high life at VMworld, rather than busting my butt and returning to my hotel room utterly shattered. I’m not especially proud of doing this, on more levels than just one…
I also recall the discomfort felt at the bloggers’ table one year when John Troyer raised the issue. I made the point that there were no muscle men in little gold metallic hot-pants for the delectation of the gay men or heterosexual women who attended VMworld. My assertion was that despite the event being heavily dominated by men, it does rather assume they are heterosexual.
Remember, it’s the sponsors who engage in this crass behaviour. And by doing so they demonstrate scant regard for their clients, customers and attendees, as they assume these people are as crass as their PR/marketing team. A little story might help illustrate the effect such actions by sponsors have on women attending an event. When I was employed at VMware, I attended an expo event in Docklands, where I was helping a very well-known partner pitch their VMware-related offering. As we were setting up, one of the partner’s employees sidled up to me and asked:
“So what about her, eh?”
He pointed in the general direction of a young woman, no more than 20, in an all-in-one lycra cycling outfit. Given her age and duties, I think it is safe to say she wasn’t an employee, and wasn’t there to engage delegates with the technical merits of her booth’s technology. This is classic alpha-male “signalling”, beloved of many a locker room in high school: a classic “are you one of us?” test. Given my new status, you can imagine I didn’t really feel I had anything in common with him. The young woman was younger than my step-daughter. I mumbled something and looked away. I felt awkward.
Things took a turn for the worse when the partner turned to a female member of his staff dressed in a simple corporate polo shirt, the standard attire of a regular booth staffer:
“Why don’t you wear something like that?” he said.
So there I am, stuck between them. The lady looks at me. And I think: what do I do? Do I just let this slide? Do I leave it to her to say something? Looking for a way out, I groped for my weapon of first defence: sarcasm.
“So… This must be Phase Two of the diversity and inclusivity training I have heard soooooo much about in your organisation?”
I hope my story gives you an idea of how the use of “booth babes” sanctions and condones a certain level of behaviour, and contributes to making everyone, regardless of gender or sexuality, feel incredibly awkward and, here’s the important part, unwelcome. This event is not for you, it says…
Anyway, back to my story. I asked Eric what VMware’s official policy on “booth babes” and “models” is… Here’s what he was able to dig out for me…
“Hi here is what the event team responded with. I know they have kicked non-compliant vendors off the floor in years past. Since those high profile events, vendors take it seriously as does VMware.
———- event team response ——
Here’s our booth staff policy. This is part of the Rules and Regulations document that all sponsors and exhibitors must accept in order to participate in the event.
* VMware discourages the use of professional talent for booth or demonstration staff. Show Management reserves the right to remove any person(s) from the Event or Mandalay Bay Convention Center that do not have an Event badge, are improperly badged, are unprofessionally or objectionably dressed or other persons that behave in a manner deemed unprofessional or objectionable by Show Management.
Amanda Johnson Sponsorship Manager, Global Events
I think it is great that VMware has a policy. But I have some concerns.
Firstly, I’m a little troubled by the use of the term “discourages”. A stronger term would be “prohibits”.
Secondly, the emphasis on appearance rather misses the point (whilst highlighting an issue with the way some “booth babes” are scantily clad and often sexualised). For me the bigger issue is not how someone is dressed (which, I might add, is a deeply subjective judgement call), but the fact that anyone, even someone “professionally attired”, is at a booth for how they look, not for what they know about a technology or what they contribute to the company.
I feel a better policy would prohibit professional “talent” or “demonstration staff”, and opt for an “employees only” policy. I recognise this might unduly hit contractors who work for these companies but aren’t on the payroll. However, unless a policy is 100% clear, it is open to “creative interpretation” and outright abuse. It could also impact entertainment that is harmless, such as the magic acts you sometimes see. Perhaps once we have rid the IT sector of “booth babes”, the policy could be relaxed to allow for more discretion.
Finally, any policy of any type is really only as good as its enforcement. Eric reassures me that VMware takes this seriously and that, due to its enforcement in the past, vendors have got the message. I wasn’t at last year’s VMworld, so I can’t testify to that. I would be interested to know what you think.