May 16

Droplet Computing: The Drip, Drip Effect

The last couple of weekends I’ve spent playing with Droplet Computing’s application container solution. As ever, getting “stick time” with any technology inevitably leads to thoughts about future functionality. I think the tricky thing for any new company is deciding how far they want to carve out new features of their own, and run the risk of re-inventing a wheel that’s already been invented and is in use elsewhere.

The dilemma, though, is that if you don’t spell out a management vision for a technology that’s essentially a platform, you run the risk of looking like you’re lacking “vision”. The danger is that you end up as yet another vendor, with yet another ‘single pane of glass’ to be added alongside all the other ‘multiple pains of glass’ that customers have to toggle between. I guess the short version of that long-winded statement is: what’s the best approach – to integrate or to innovate?

I guess some Smart Alec would say the ideal thing would be to do both: listen to customers and their needs and requirements, and come up with solutions and functionality that address those needs. The only downside of this customer-led approach is that customers can have a tendency to demand things because they think they “should” be there – only for the ISV to find out that this was a tick-box exercise, and that precious development time has been squandered for no real benefit. It used to infuriate me when I suggested areas for improvement to VMware, only to be asked to explain and justify my thinking. Couldn’t they just see it was obvious?! Of course, the reason I was asked to make my use case every time was to avoid the very situation I’ve just described. So featurism – the desire to add more features and options without a good use-case – is a crime I know I have been guilty of in the past. It’s a crime I’m likely to continue to commit until I retire.

So, that gets me to the explanation of the title of this blogpost. Drip. Drip. It strikes me that the best kind of improvements to a new technology take little coding effort and improve functionality for all customers. A rapid series of small updates is far easier to pivot from than a massive undertaking that might take months to complete.

I think there’s another distinction to make with this type of approach. Do those drip-drip enhancements improve the user’s experience, or the administrator’s life and the deployment experience? For that reason, I’m going to separate them – because at the end of the day, no end-user ever patted me on the back for a well-administered system.

User Experience:

Printing:

At the moment the only print support inside the container is for the native print driver, which needs to be installed and configured by the administrator. Of course, customers may already have a license for some kind of universal print solution such as UniPrint. I think it would be great if Droplet Computing had some type of redistribution deal with a company like UniPrint. Ideally, this feature would retrieve the print configuration from the device OS and then bubble it up into the container OS. This means the print configuration would ultimately be managed by the existing policies, profiles and/or scripts used to prepare the user’s environment on the device OS.

Unity-like Functionality:

If you’ve ever used something like VMware Fusion, you’ll have come across the so-called “unity”-style experience. This unity experience would essentially “hide” the Container environment as much as possible, leaving just the application floating on the screen, and perhaps also creating shortcuts to the Droplet Computing apps contained inside the Droplet Computing Container. The end-user doesn’t need to interact with the “tiles” view in the main application.

Instead, they gain access to their application from an icon on their desktop. That said, I’ve always been irritated by desktop icon clutter, where the entire desktop is covered with icons. Some kind of “Droplet Computing Apps” folder on the desktop that contains all the icons would be neat.

Content Redirection:

This is a Citrix feature that’s been around for some time. There are two types of “redirection”: one that takes you from the Device OS into the Container OS, and another that takes you out of the Container OS back to the Device OS. The former is, nine times out of ten, the more common.

It’s very common for a user to find a document in a folder and want to open it with a double-click. Historically, in Windows that process has been controlled by file associations held in the registry. It can be tricky to get right. After all, file extensions generally tend to have no version information in them. For instance, the .PDF extension doesn’t help me decide whether that application should run locally or within the Container OS. Nor does it help me in a situation where nine times out of ten it should open locally, but in particular application scenarios it should open in the Container OS.

However, it is possible to create program logic that says: for all Excel 2010 spreadsheets requiring macros, open them in Excel 2010 in the Container OS; and for all spreadsheets created in Office 2016, open them in Excel 2016. It would be possible to redirect double-clicks on all the legacy XLS, XLM and XLT files to the Container OS, and to redirect all XLSX, XLSM, XLTX, and XLW files to a newer version of Excel running on the Device OS.
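Just to make that idea concrete, here’s a rough sketch of the kind of plumbing such a feature could build on at the Device OS end (assuming a Windows Device OS). To be clear, this is not something Droplet Computing provides today – the “DropletLauncher.exe” helper and its install path are entirely invented for illustration – but Windows’ built-in assoc and ftype commands are the standard mechanism for routing a double-click on a given extension to a chosen handler:

  rem Hypothetical sketch only - DropletLauncher.exe is an invented helper,
  rem not something Droplet Computing ships. Run from an elevated Command
  rem Prompt on the Device OS.
  assoc .xls=Droplet.Excel2010
  assoc .xlm=Droplet.Excel2010
  assoc .xlt=Droplet.Excel2010
  ftype Droplet.Excel2010="C:\Program Files\Droplet\DropletLauncher.exe" "%1"
  rem The newer formats (.xlsx, .xlsm, .xltx) keep their existing association
  rem with the locally installed Excel on the Device OS, so nothing changes there.

The hard part isn’t the association itself – as the next paragraph explains, it’s getting the file that "%1" points at into the Container OS and authenticating access to it.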

This kind of redirection also requires a method of passing the location of the file to the Container OS, which can then be used to retrieve the file. It’s one thing to call an .EXE based on a file extension, and another to then retrieve the data. It will generally require a communication channel and a file-sharing protocol between the Device OS and the Container OS. And then there’s the tricky issue of authenticating the user’s access to the file – would that use the Device OS credentials passed through to the Container OS, or would it use the credentials of the Container OS?

The other type of redirection flows in the opposite direction. This is typically where a link to some multi-media format is found in a web page, or embedded as a link inside an application running in the container. In most cases that YouTube link, MP3 or MP4 video would be best played by a locally installed application within the domain of the Device OS. I see this as being less of an issue, because the action happens less frequently than a double-click to open a file – and because it’s easier to create and encode a simple “multi-media” rule to redirect all such links back to the Device OS.

Administrator Experience:

Multi-Container Support:

Currently the Container OS being used is Windows XP, and I understand a Windows 7 Container OS build is on its way. Although I think the number of users needing both Container OSes at the same time would be quite small, it would be nice if there were an easy way for administrators to give access to more than one container at a time – and for users to have an easy way to toggle between them. This isn’t a biggie, as I suspect the number of people doing this will be limited, and best practice will be to keep these images to a minimum. Right now, it’s up to the user to browse between the various image files – a task that’s probably beyond the abilities of the average user.

Tile Management – Clipboard and Browsing for .EXE:

Right now, the administrator has to type in paths to the .EXEs held within the Container OS. I would like to see the ability to browse for these paths, rather than needing to type them. Together with that, I’d like to see the ability to copy an existing “tile” and modify it, to reduce this sort of browsing to a minimum. It’s perhaps not a huge issue – it may be that many customers of Droplet Computing are using the Container OS to deliver a single strategic legacy application rather than multiple applications, after all.

Ideally, I’d like to see some sort of programmable or scriptable method of setting up this environment, even if it’s something as crude as copying down an .ini file. I’ve no idea where the tile data that makes up the granted applications is stored, but I understand it’s not held within the Droplet Computing image. So it must be held somewhere, probably in an ASCII file of some description. I assume the tile configuration is held in the user’s profile – so if they roam around the organization, their Droplet Computing Container configuration follows them.
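Purely to illustrate what I mean – and to stress, this is an invented format, not how Droplet Computing actually stores its tile data – a plain-text tile definition might look something like this:

  ; Hypothetical example only - the real storage format is undocumented
  ; and almost certainly different.
  [Microsoft Word 2010]
  path=C:\Program Files\Microsoft Office\Office14\winword.exe
  description=Legacy Word for macro-heavy documents
  icon=word2010.png

Something that simple could be dropped into the user’s profile by a login script, which would also take care of the roaming scenario I just described.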

Web-Browser Support:

Currently, the Windows XP version of the Container OS contains no web-browser. I think that’s intentional, to reduce the attack surface that legacy browsers bring, as well as keeping the Droplet Computing image slim. That does present challenges – such as downloading software into the Container OS. I was forced to do that via an external NAS device, which wasn’t too painful, but it does mean sourcing every piece of software from the Device OS and leaving it on a shared location.

It makes things like setting up the Container OS for Dropbox, Google Drive and OneDrive trickier. A lot of the download pages for these services just inspect the OS via the web-browser to detect which extension is required.

[Note: It’s perhaps worth noting that some of these extensions no longer work with Windows XP. For instance, Dropbox no longer supports Windows XP for the download that extends the Windows Explorer environment.]

In the end I installed the K-Meleon (http://kmeleonbrowser.org/) web-browser into the Container OS, which then allowed me to get to some of my online software providers and extensions.

Conclusions:

This is my third and final post on Droplet Computing, but I’m sure I will be back for more in future. It’s worth remembering that users care less about HOW they get their applications than about simply getting them. So, whether it’s local or remote VDI, server-based computing in the form of Citrix or Microsoft RDS, or a Droplet Computing Container – they really aren’t that bothered by the delivery mechanism. I know WE are, but they aren’t. And this is a reality that is often forgotten.

Ease of use is critical, and any delivery mechanism that “gets in the way” of the user being productive is a distraction by any other name. Any feature, or absence of a feature, that results in a help desk ticket needs to be addressed. And, ideally, the delivery mechanism must offer improved performance, ease of use and value for money to be able to unseat the incumbents. The ideal is all three – but if a company can deliver two of them in a solution, it will get traction in the marketplace.

It’s interesting how application delivery in the form of containers has come full circle. The much-loathed PC is a resilient little computing platform, one that has endured and survived huge changes in the last 40-odd years. Dismiss it at your peril. For me it’s testament to the economies of scale that the PC has honed over the years. Tablets and phones have certainly risen, but those devices have excelled at delivering simple, single-use applications.

But when it comes to more productive knowledge work, it’s still the Office suite of applications that users spend hours in front of – writing reports and presentations, and maintaining the dreaded spreadsheet. Alongside these business-critical end-user applications is a whole raft of applications that customers want to carry on using well beyond the vendor’s perception of end-of-life. It’s for these reasons I think the “Container” approach as delivered by Droplet Computing has legs. I can see customers adopting it not just as a method of delivering legacy applications to any type of device (Windows, Mac, Linux) but also as an application delivery method for new applications too.

Category: Droplet Computing, View/EUC | Comments Off on Droplet Computing: The Drip, Drip Effect
May 3

Droplet Computing Product Walk Thru: Not just another Droplet in the ocean

NOTE: Okay, I fess up – that’s a shameless nod to one of my favourite bands of the 80s, the indomitable “Echo and The Bunnymen”, and their ‘hit’ “The Cutter”. What can I say? I spent the latter part of the 80s dressed in black, moaning about how terrible life was… But hey, didn’t we all do that as teenagers?


This blogpost represents a walk thru of the Droplet Computing container application, and will attempt to describe the technology whilst keeping eyes focused on the practical side of life. I think I have one more Droplet Computing blogpost in me, and that post will be about where I think the next iteration of features should take them, and some of the challenges around achieving that. There are plenty of other things I want to learn about Droplet Computing, not least the process of authoring .droplet image files. So, whilst this blog does cover some very modest “administrator”-like functions, it’s still very much focused on the “user experience” rather than getting into the weeds of management.

Being a Mac user, I was supplied with the OS X version of Droplet – but Windows and Linux versions exist. The main thing to say is that the functionality is the same regardless of device type, and the image file that makes up the environment in which applications work is portable between the three supported device types. Remember, Droplet Computing is container-based technology; it’s not a virtual machine or application virtualization. Apps are installed “as is” without any special packaging or sequencing process. And I think this will appeal to customers who do not have the time or resources for a dedicated team of packaging experts.

On the Apple Mac, images are moved from the location from which they were downloaded to this path:

/Users/<username>/Library/Application Support/Droplet/Images

These .droplet image files basically hold the Container OS files – in my case Windows XP – together with any applications that have been pre-installed there. Droplet Computing provided me with a .zip file that contained both the core application and the image. When starting Droplet Computing for the first time you’re asked for the .droplet file, and if required it will move it to the images path. Currently, it’s Droplet Computing that provides the base .droplet file, from which you can make multiple copies and install software using the usual methods.

[Technical Bit: Early versions of Droplet Computing used a web-browser such as Google Chrome to provide the runtime environment. This approach has been dropped for the moment, as Google were pushing too many updates to Chrome to make this a reliable process that didn’t trigger endless regression testing. Droplet Computing has instead adopted a Chromium web-browser which they control, to side-step this issue. Needless to say, the end-user never sees this hidden web-browser.]

Start and Load a Droplet Computing Container/Application

When Droplet Computing is loaded for the first time, the user is asked to locate their image file, and there’s an option to control how much memory is assigned to the container, as well as to set the location of their “Shared Folder”. This allows for the easy transfer of small files between the host and the Droplet Computing Container. Personally, I didn’t use this shared folder feature – opting instead to connect the Container OS to my home NAS server, and find my software there.

I’ve an old 13” MacBook Pro, circa 2012, with 8GB of RAM and an SSD. So it’s by no means a powerful machine by modern standards (it suits my needs perfectly fine until it stops working!). Clicking the big orange “Start Container” button, erm, starts the container… That process took about 1 minute and 30 seconds on my machine, which I don’t think is too shabby considering the hardware involved, and the start-up time compares very favourably to a locally run virtual machine using VMware Fusion or VirtualBox.

Note: Currently the progress bar really only shows that the container is loading; it doesn’t show how far along the process is.

To make my life easier, Droplet Computing had pre-set-up an application as part of my sample image – Notepad. A simple “Launch” button loads the application from within the container.

Note: Ordinary users can remove application tiles from this UI at will.

Shutting down an application is the same in the container as it would be in Windows XP. Clicking the red X in the top right-hand corner will close the app, as will using File, Exit. It’s the Container OS’s job to check that files have been saved, and if the user chooses to save some data, it currently defaults to the user profile location. In the world of a generic, unmodified Windows XP installation, this is the “My Documents” folder. (Gee, remember when every folder in a Windows install had the word “My” prepended to it for no apparently useful reason!) One thing that’s worth noting is that it is currently possible to shut down the container and the Container OS, with open and unsaved files in the application, using the power button. This does not suspend the Container OS, and when you start the container again, that data will be lost.

Note: The power button shuts down applications and the Container OS.

The Edit options allow you to view/change the application’s configuration parameters, and these amount to simple variables such as name, .exe path, and description. Currently, there isn’t a browse option to locate the .exe; you have to type the path manually. Incidentally, this sort of browse functionality looks easier on paper than it is to implement in practice, especially if you want to make sure the Container remains a secure environment.

You can upload an icon to act as a friendly logo; this currently means browsing for .png, .jpeg, and .jpg files. This browsing is from within the Host OS, not the Container OS. So, I took a screen grab of the TextEdit application from the Apple Mac. I think in future Droplet Computing will inspect the .EXE and present icons from within it.

What can I say? I like pretty icons!

Adding a New Application

The Droplet Computing Container is password protected, and adding a new application requires the administrator password. Droplet Computing sets a default password for access to the Container. The important thing to say is that this is a password for Droplet Computing itself. It’s not the password for the Container OS admin account, nor is it the password for the local device. So this password sits independently of those, and is designed to secure access to the Droplet configuration – such as adding/removing tiles, and opening up the Container OS to install new software.

Remember, Droplet Computing passed recent PEN Testing with flying colours, and so I don’t imagine this is an issue.

Once the password is validated, the container unlocks the option to add more applications for the user, revealing a “+” application tile from where you can add the parameters.

Note: My sample .droplet image was a bare-bones install of Windows XP that didn’t even include MSPaint.exe! A situation that might bring a tear to the eye of old-timers out there who are fans of pixelated artwork. The configuration of the tiles view is user-specific and is not saved to the .droplet file. It means the .droplet file can be replaced with a new image/build without upsetting the user’s configuration.
Note: The first time I tried this it didn’t work because I’d missed the “r” in Program! There is no clipboard buffer between the device and the container, so you cannot copy and paste from the device into these fields, or copy them within the application. I assume the lack of a clipboard is a security decision. Copy and paste is allowed between applications within the same container.

There’s currently no “browse” option here to navigate into the container, nor does Droplet Computing validate these paths to make sure they are correct. I knew that a legacy version of Microsoft Office 2010 had been pre-installed in the Container OS, so the way I validated the path was by doing the following:

I asked Droplet Computing to “Show Container”

This reveals the administrator-only “spanner” icon, which allowed me to run Windows Explorer to check the path to winword.exe.

Once I’d found the install directory, I was able to make a note of the path (without speech marks, incidentally) and the .exe.

As you can see in the screengrab above, this administrator mode exposes aspects of the Container OS shell in the form of Control Panel, Explorer, Task Manager, Command Prompt, PowerShell (if installed), and the ability to send the Ctrl-Alt-Del keystroke to the Container OS.
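Incidentally, the Command Prompt in this administrator mode can do the same job as clicking around in Explorer. A minimal sketch, assuming Office was installed to the default location under Program Files:

  rem Search the Program Files tree and print the full path to the executable
  dir "C:\Program Files\winword.exe" /s /b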

When my work was done, I could see both Notepad and Microsoft Word.

There’s currently no method of copying a tile to set up a new application by quickly modifying the path to the .EXE and the icon file. That said, this is a one-off operation that doesn’t take that long to configure. I notice there’s no way to drag tiles around, and they are sorted by the application “name” field. So I used a naming convention of Microsoft Word 2010, Microsoft Excel 2010 and so on, to make sure they were grouped together.

Once multiple applications are available to the user, they can switch back to the list of tiles using the “Show Apps” option. This appears once the first application is loaded – in my case, Microsoft Excel 2010.

They can then load another application, such as Microsoft Word 2010. The taskbar shows the applications that are currently open, like so:

Installing new software to the Container OS

Adding new software to the Container OS involves a few steps:

  1. From the host, locate the source software
  2. Place it on a shared location, such as a NAS server
  3. Download and run the installer within the Container OS

As an example of doing this for Windows XP, I used PuTTY. By far the easiest thing to do is to put the .exe or .msi on a NAS server, and then map a network drive from the Container OS to the NAS device. I did this by entering the administrative mode and then running a Command Prompt.

From there I was able to carry out a net view against the IP address of the NAS server to see the shares on it:

And then a net use command to map a network drive:

From there I could use the copy command to copy putty.exe to C:\Program Files. Once that was done, I was able to add PuTTY as a tile in the main view.
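For the record, the whole sequence amounts to something like the sketch below, run from the Command Prompt inside the Container OS. The IP address, share name and drive letter are just examples from my own setup – substitute your own NAS details:

  rem List the shares available on the NAS (example IP address)
  net view \\192.168.1.50
  rem Map a drive letter to the share holding the installers
  rem (you may be prompted for the NAS credentials)
  net use Z: \\192.168.1.50\software
  rem Copy the PuTTY executable into the Container OS
  copy Z:\putty.exe "C:\Program Files"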

This work does beg the question of how networking is arranged inside the Container OS. In my case the Windows XP instance has a DHCP address from an internally supported IP range, so it did not get an IP address from my home router. DNS and routing queries get sent to an internal service, which then piggybacks off the host networking. This allows for outbound access to my network and the internet – but with no corresponding inbound access. This was just enough networking to get me to my NAS server. As an aside, I noticed Internet Explorer was not installed in my Windows XP image.

Conclusions:

From what I can see, Droplet Computing has put together a robust release 1.0 product with PLENTY of scope for future features and functionality. They have developed their own in-house approach that solves the issue in a unique way. Personally, I think that’s no small achievement. It’s perhaps worth restating what those issues entail. As with server virtualization, there’s always been the challenge of extending the lifetime of a legacy application beyond the life of the hardware and operating system for which it was first designed. Other attempts have been made using server-based computing (Citrix/MSFT-TS) and virtual desktop infrastructure (Horizon View and others). But these have been datacenter-focused solutions. I’ve championed both of these approaches in the past, and continue to do so – with the caveat that they place user workloads in the most expensive computing space known to the enterprise. So whilst VDI and SBC remain useful tools in a company’s armoury, we have to acknowledge that the much-vaunted “death of the PC” and “Year of VDI” haven’t happened. The PC remains a resilient and cost-effective method of delivering a compute experience, one that hasn’t been eclipsed by the tablet or the dumb terminal (I prefer to call them smart terminals, personally).

The bonding of hardware, operating system and application has been ‘loosened’ in the last couple of decades, but they are still closely coupled together. It’s only really containers on the server side, with technologies like Docker and Kubernetes, that have brought a significant challenge and change to the way applications are developed. I think the time is right for that “container” approach to be applied to desktops as well, creating a mirror image of what’s happening in the server-side application space and a new paradigm for how companies might deliver applications in the future. The issue of legacy application support isn’t going away, because it hasn’t gone away in the two decades (and a bit more) that I’ve been in the industry. So as companies see operating systems like Windows XP and Windows 7 fall off the support cliff, I suspect that the same situation will be faced with Windows 8/10. And there’s more. If I thought technologies like Droplet Computing were just about legacy applications, I’d be less excited by this space. The fact is, I think many companies, once they harness the power of container technology for legacy applications, will be thinking about using it as a method of deploying new applications.

 

 

Category: Droplet Computing, TechFieldDays, View/EUC | Comments Off on Droplet Computing Product Walk Thru: Not just another Droplet in the ocean
April 11

Droplet Computing: Drops Some NewsLets

Disclaimer: I’m not paid or engaged by Droplet Computing, and I wasn’t offered any trinkets or inducements for writing this series of posts. I’m just interested in what they are doing. The CTO is a former colleague of mine from VMware, and I admire anyone’s chutzpah in walking away from the corporate shekel to do their own thing.

 

This is going to be a series of blogposts about Droplet Computing. I’m trying to eschew my usual TL;DR approach to blogging in favour of an approach that better reflects the goldfish-style attention spans that scrolling news has engendered in the population at large.

In case you don’t know, Droplet Computing does “Containers for desktops”. This is the kind of typical “company-as-sound-bite” that is used as a shortcut to describe what a company does. If you want some more technical detail, check out the blogposts that will join this series.

The simple idea is delivering end-user applications for Windows, Mac and Linux in a container. This is NOT your 2006 desktop virtualization (so not the “Year of VDI” narrative that vendors have been flogging like a horse deader than Mrs May’s Brexit Deal), nor is it the application virtualization that involves “capturing” and “sequencing” applications into a special runtime (aka App-V, ThinApp and a dozen other wannabes).

With Droplet Computing, applications are installed natively to an OS library held within the container, in such a way that anyone who knows how to click “Next” could build the environment.

So, the newslets are as follows.

A Special Person joins Droplet Computing

No, not me. I’m not that special.

That very special person has joined Droplet Computing as a non-executive director.

Who?

Adam Denning.

Who’s he?

None other than Bill Gates’ former technical advisor. That’s who.

This is “big” for a number of reasons. It’s a vote of confidence in Droplet Computing. It’s big because Droplet Computing is tiny (I think there are fewer than 15 people currently engaged – I could be wrong about this figure), so the arrival of such an industry heavyweight is of relatively cosmic significance. Adam’s the kind of figure who would convince folks being paid hefty sums at some oil-tanker corporate to do something infinitely more interesting – and riskier… But there’s something more as well. It’s about sending a message that Droplet Computing is in it for the long game. I mean, who knows what the future brings, but when heavyweights like Adam join, there’s something to take note of…

This is what Adam has to say about himself on LinkedIn…

“Technical strategist and architect with proven software delivery skills. Over 25 years’ experience with Microsoft in varied technical roles, the last 22 in its corporate headquarters in Redmond, USA, and including a 3-year stint as assistant technical advisor to Bill Gates. Led teams of over 100 people with multi-million-dollar budgets and delivered products used by 100s of millions of people around the world. Deep technical knowledge, thorough strategic thinking capabilities and extremely quick learning. Significant customer-facing work, oriented around developer strategy, working to ensure customer success and gathering feedback to improve Microsoft’s products. Presented and communicated at CxO-level, at 5 to 5000+ attendee conferences, and published books, magazines and blogs. Recently led the evolution of Microsoft’s platform strategy around Windows and its derivatives.”

This is a chap who hasn’t just done management stuff. See, I told you it was a vote of confidence. That’s about as much as I could glean from Google, aside from…

The other thing that’s nice about Adam is his natty selection of bowties. Bowties are super cool, and have been ever since Dr Who announced that it is a truth universally acknowledged that a man in possession of a good fortune must be in want of a bowtie.

Photo courtesy of Linkedin. Bowtie source unknown.

 

Droplet Computing Security Testing

The 1,000-foot view of this NewsLet is that they passed with flying colours. Okay, case closed… Well, not quite. The whole point of this sort of testing is to shout it from the rooftops so folks are convinced your product is safe to use. This is especially true of Droplet Computing, since their first use case is about allowing legacy applications associated with legacy operating systems to continue to run on OSes that are still current and patched.

A couple of years ago the UK was hit by WannaCry, via a wave of Windows XP instances that could not be protected (because Microsoft saw fit to keep the patch to themselves). Our beloved National Health Service was perhaps the most impacted, as they have a LOT of applications still in use that are too expensive to refactor and rebuild for a new OS. Sadly, the whole thing got politicised by the media and others, and the narrative became dominated by wider concerns around underfunding of the NHS. The situation is somewhat more nuanced. Even if the government’s cup of money were overflowing, it would probably still have been decided that maintaining the older systems was the best use of resources.

Incidentally, some might say this use case is dangerous because it means Droplet Computing is chasing a diminishing market of legacy applications that will one day be so redundant they will be switched off. I think this thinking is a bit woolly. Firstly, what is current today will be legacy in five years’ time, and IT history has a habit of repeating itself – the first time as tragedy, the second time as farce. But secondly, I could easily see customers loving Droplet Computing so much they choose to make it their de facto method for deploying new and current applications. Okay, so I know that’s a grand claim, and it remains to be seen whether customers will bite the Droplet Computing cherry.

Anyway, Droplet Computing engaged the services of NCC Group to do the tests. The assessment was conducted from February 14 to February 18, 2019, on a Windows 10 laptop with two Droplet Computing containers: one containing Windows XP with a variety of outdated software, including Office 2010, and the other containing Kali Linux with a large number of malicious tools useful for trying to break out of the container. The main outcome of the report was that the container service was not accessible remotely – a huge advantage for organisations in securing enterprise applications. Here’s what NCC Group reported…

“The system being assessed allowed organizations to run existing applications within a secure containerized environment within a browser. The portability of running in a browser would allow these organizations to decommission unsupported and vulnerable operating systems in place of fully updated and supported versions, while still being able to use production software.”

Stop! Read that quote back again. Now read the bit in bold and italics again. Interesting, huh?

Droplet Computing is now using these results and working with NCC Group and public sector clients to achieve Cyber Essentials PLUS accreditation. Cyber Essentials is a UK Government-backed, industry-supported scheme to help organisations protect themselves against common online threats. The idea is to make the UK the safest cyber location on the planet. Assuming some civil servant doesn’t download everyone’s social security details to a USB stick and leave it on a commuter train to Northolt.

https://www.theguardian.com/uk/2008/oct/28/terrorism-security-secret-documents

https://www.huffingtonpost.co.uk/entry/government-staff-lost-more-than-600-laptops-phones-and-usbs-in-last-four-years_uk

http://news.bbc.co.uk/1/hi/uk/7449927.stm

Admittedly, a lot of these cases are more than a decade old now. Things have moved on, except for government ministers who persist in carrying important documents of state in full view of the media.

https://www.telegraph.co.uk/news/politics/8731143/Minister-accidentally-reveals-Afghanistan-documents.html

Summary

So, a senior Microsoft guy onboard, and PEN Testing complete. Pretty handy dandy. I think Droplet Computing is finally positioned to release their first 1.0 product, less than a year after showcasing a “minimum viable product” or proof-of-concept at last year’s Tech Field Day, when they came out of “stealth”. The PEN Testing is interesting. I figure it will be a constant balancing act between providing the features customers desire and maintaining the security credentials. However, as VMware demonstrated with ESX, it helps if you can set a good baseline of security from the get-go, rather than retro-fitting it once the horse and your credit card details have bolted.

Next up, a practical and technical hands-on walk thru of the product as it stands today.

 

 

Category: Announcements, Droplet Computing, TechFieldDays, View/EUC | Comments Off on Droplet Computing: Drops Some NewsLets
February 1

All Quiet on the Western Front

Well, I’ve been pretty quiet on the blogging front since VMworld, I guess. In case you don’t know, I started a new role in May 2018 and I’ve really been heads-down on that, trying to wrap my head around a whole series of new processes and procedures – and, as ever, learning from my mistakes. As ever in life, the lessons you learn the hard way are the ones that tend to stay.

So in May of last year I joined a company called SureSkills (sureskills.com). My relationship with SureSkills went back to the days when I was a freelance VMware Certified Instructor (VCI), pimping my training skills across Europe whilst writing books in hotel rooms and speaking at VMUGs. SureSkills was a big customer of mine – in the early days I’d be there at least one or two weeks every month. The role I have at SureSkills gets me involved in course development, with a primary focus on VMware Official Curriculum materials. The first two courses I worked on with colleagues were a “VMware Sales Professional” (VSP) course covering VMware’s EUC/Mobility story focused on Workspace ONE, and a “VMware Technical Sales Professional” (VTSP) course on Network Virtualisation – with a specific focus on NSX Data Center (both the “T” and “V” editions). That just came to a close this week. It’s quite something to hear one’s “script” voiced over by an American VO artist!

My new project is more familiar ground. This is an instructor-led, classroom-based course initially focused on the “Academy Program”, and it actually falls under VMware’s “Virtualize Africa” initiative. Expressed simply, “Virtualize Africa” is a kind of “leave no-one behind” project that aims to extend the benefits of VMware technology to every part of the globe. The course I’m developing, with the input of VMware staffers, is focused on supporting business-critical apps (Microsoft, Oracle, SAP) as well as nodding toward the next generation of application frameworks such as containers and Kubernetes – as well as vSphere Integrated Containers and VMware PKS. It’s a really exciting project because much of this content has never been available in a training course – it has usually resided in best-practice docs, VMworld sessions and VROOM!-style blog posts. Although the remit is focused on the Virtualize Africa project, much of this content should be of equal interest to other geos. Anyway, I hope to share my insights and learnings now that I’m getting back into the weeds of technical material.

Category: vSphere | Comments Off on All Quiet on the Western Front
August 27

VMworld 2018: vSphere Platinum Edition; vSphere 6.7 U1 Features; VMWonAWS

In the second of my vExpert briefings with VMware, we focused on a new edition of vSphere dubbed “Platinum”, and a drill-down into the new features of vSphere 6.7 U1. There was a little bit of overlap with the previous session (which was about VSAN). Once we were done with vSphere, we moved on to improvements in VMware Cloud on AWS. There was a brief 101 session on AppDefense, which I’ve opted not to cover in this blogpost…

vSphere Platinum Edition

Continue reading

Category: VMworld, vSphere | Comments Off on VMworld 2018: vSphere Platinum Edition; vSphere 6.7 U1 Features; VMWonAWS
August 27

VMworld 2018: What’s New in vSphere VSAN 6.7 U1

I’m back at the event this year, after taking a one-year sabbatical in 2017. I wasn’t working at the time, and didn’t think my bank balance could afford the “VMworld hit”. Now I’m back in the saddle work-wise, I thought it would be good to catch up with my former colleagues at VMware, and say hello to friends within the community. Shortly before VMworld 2018 kicked off, my fellow vExperts and I were briefed on some of the key announcements surrounding VMware prior to the event. This is pretty typical of many of these programs, and the content was embargoed until closer to VMworld itself. One of these sessions was focused on the enhancements to VSAN in the vSphere 6.7 U1 release.

The improvements can be broken down into three categories – Simplified Operations, Efficient Infrastructure and Rapid Support Resolution. There is nothing jaw-droppingly “wow” about these increments, and taken on their own they amount to a tying up of loose ends to some degree. However, loose ends do have a tendency to trip people up, and you’d be surprised how often, from an operational perspective, these issues can collectively combine to undermine customer experience and satisfaction. So they are not to be underestimated. I would also say, from my experience, that these issues are often much harder to resolve and deliver than many customers really give credit for. Trust me, if there were an easy fix, everyone would leap on it. The fact that it doesn’t happen immediately is often because once you pull back the lid of the tin, there’s a mass of complexity or politics to be resolved first.

Continue reading

Category: VMworld | Comments Off on VMworld 2018: What’s New in vSphere VSAN 6.7 U1
June 25

Cloud Field Day: Droplet Computing – Any App, Any Where, Any Device

Yes, we have heard that message before from many, many companies in the past – but how many companies have REALLY delivered on that promise? And I mean REALLY delivered on that promise…

The trouble has been that, in order to achieve that laudable goal in the past, you needed a truckload of infrastructure to deploy either a server-based or virtual-desktop-based environment – and run those apps from an expensive data center location, usually in the context of a Windows operating system. I once said at a user group in the US that putting desktops in the data center (the most expensive computing environment known to human or beast) is unlikely to result in massive cost savings. As I recall, one person stood up and clapped me for my honesty. It doesn’t matter how many TCO calculators you throw at the approach, the data center business is an expensive one. And one that, according to our public cloud vendors, customers can’t wait to get out of.

The Technology – Droplet Computing

At Cloud Field Day in Silicon Valley back in April, Droplet Computing came out of “stealth” to reveal they have developed a client-side container technology that will allow practically any application to run in the context of a web-browser. Here’s a quick overview of that container technology:

So, there are a couple of components going on here. But in simple terms, the container sits inside a web-browser which essentially offers the runtime environment, surrounded by a series of supporting “libraries”. Possibly the most significant is the use of WebAssembly, which handles the job of intercepting the machine code generated by the app sitting in the container. A clunky analogy would have the web-browser as the virtualisation layer and the container as a VM, with the user app sitting inside the VM. All this is done, however, without the bloat of either a server-side hypervisor or client-side virtualisation with an “operating system” sitting inside that VM. That was tried in the past with technologies such as VMware Player, ACE, Workstation or Fusion – or, worse still, downloading an entire VM from the corporate data center – just to run a measly little Windows app. Now, I’m not saying that VMware Horizon or Citrix XenApp are “wrong”; it’s just that for many they were sledgehammer technologies trying to crack a nut. Of course, the physical system the end-user sits at needs an operating system – but that could be almost anything you care to think of.

So, this is ultra-compatible. How compatible? You could easily “natively” run a Windows-based app inside a Droplet Container on an Intel-based Apple Mac, all without installing Windows or having to power up or resume a Windows VM. In fact, the processor wouldn’t even have to be Intel-based; it could be an ARM-based processor if necessary. This opens the door to having those Windows apps run on chipsets for which they were never designed.

The lack of a Windows requirement stems from the use of “Wine” from the world of Linux. Wine has had a sketchy history in the world of Linux as a way of natively running Windows apps in a Linux context – but it has moved on and improved over the years.

In terms of the underlying physical system, all that’s really needed is a relatively modern web-browser with WebAssembly enabled – which is sufficient to support the Droplet Container. Currently that means Chrome v60, Firefox v52, Safari v11, and Microsoft Edge v16, plus mobile devices such as Android-based phones/tablets (Android v6.2) and the Apple iPhone/iPad (iOS v11.1).

So, in a nutshell: Droplet Computing does for desktops what container technology has done for server-side code development and distribution – adding a layer of abstraction without the overhead of a virtualization layer, an operating system and other dependencies.

The Use Cases

ANY, ANY, ANY means MANY, MANY, MANY.

But let’s start from scratch. The end-user computing world is a very different one from the narrow world of Windows PCs and the old days of “Virtual Desktop Initiatives”. Whilst it would be foolish to discount server-based computing and VDI, neither succeeded in going mainstream – or becoming the de facto way that users got their apps. They remained corralled into a particular niche for Dilbert-style users who sat in their cube all day. That way of working is on the decline, with many of us being more mobile or working from home (WFH, or should that be WTF?) – plus we all now have at least three devices (if not more) in the shape of a laptop, tablet, and smartphone.

Whilst attempts have been made to duplicate apps across those device types, this has resulted in compromises – either in the “app”-driven world of the iPad or in the web-driven world of Office 365. It’s always meant some uneasy compromise of the experience, which support folks have to excuse or explain away. So, this new way of working has spawned attempts to bring everything under one roof via things like VMware’s Workspace ONE – a portal to a plethora of different ways of delivering apps (SaaS, virtual desktops, application packaging – like ThinApp, App Volumes, Microsoft App-V and so on, and on and on…). For me, the difference with Droplet Computing is that they are offering a net-new method of delivering the app – in a container, executing on a device of the end-user’s choosing with their preferred web-browser. Incidentally, I still think these one-stop-shop “app stores” will be needed for ID management, entitlement and security reasons – but I can see Droplet Computing apps being the apps that are advertised there, to be downloaded and run on the end-user’s device. And of course, if you must have centralised VDI, they are a possible target too…

Clearly, legacy apps will be an important market – but I personally believe that this approach will pay dividends for new applications as well as older ones. Although, to be fair, it’s those older applications that often prove to be the bane of everyone’s life. Often they’ve been developed on an OS with lower security requirements – and that often means “breaking” the rules and regulations about OS hardening just to make them work. The older they become, the more they are likely to break as their dependencies themselves become incompatible or discontinued. This then has a knock-on effect on other important requirements, such as meeting compliance during an external audit, or merely ensuring the apps are as quick and reliable as they once were. The dizzying release cadence of Windows and its associated apps means it’s really impossible for an enterprise to freeze its world based on a particular approved “build” and blend of OS/apps. This just doesn’t sit well in a BYOD era where corporate IT has no clue about, or control over, the end-point the user chooses – never mind that they may be using a form factor such as a tablet. So, Droplet Computing’s container vision and technology offers a tantalising promise of escaping these limitations and restrictions.

There are some interesting parallels between the early days of virtualisation and what Droplet Computing is doing.

Firstly, there’s a low-hanging-fruit market of legacy apps that are still used by the business but won’t be supported or won’t work on new operating systems. That includes apps developed by ISVs who may not actually be trading anymore. The cost of rewriting those legacy apps far outweighs their usefulness to the business, so a way of extending their lifetime is appealing.

Secondly, although the software running in the container is unmodified and runs natively, customers should be aware that Droplet Computing is not responsible for the licensing policy or support agreement of the ISV. Of course, if the ISV has ceased to operate there’s a great deal of leeway there, but if the ISV is still current, they might decide (as some did with first-gen virtualisation) simply not to support it, or to license it in such a way as to reduce its competitive value. For instance, a Droplet Computing user license allows the end-user to run three copies of the Droplet Computing software – enough to cover the three most popular devices a user might use (laptop, smartphone, and tablet).

However, the ISV might choose to charge the business three times for an application that has been “installed” on three different devices. Remember, many of these legacy applications that represent the low-hanging fruit have quite antiquated licensing policies, which are often the bane of many a Citrix XenApp or Horizon View admin. That said, the potential cost savings from not having to run the older infrastructure – and having that app execute not on expensive compute (the data center) but on cheap compute (the laptop) – could outweigh that restriction. Put simply, it might be cheaper to suck up the additional license costs to save money elsewhere. Personally, my hope (the same hope I had with Gen1 virtualisation) is that ISVs review their licensing policies with a view that anything that drives consumption also preserves market share – and that it’s not in their interests to corral their horses and carriages in a circle in order to “protect their revenue stream”.

Looking back on this paragraph, I’m perhaps over-egging the impact of these licensing considerations. Perhaps ISVs have woken up to the multi-device world we now reside in, and these antiquated licensing policies are a thing of the past?

Early Days…

So, it’s early days for Droplet Computing. They have secured their first round of VC funding and come out of stealth – and they are on the cusp of releasing their 1.0 GA release candidate. I hope to get some stick time with the technology, as I believe getting one’s hands dirty is the first step to learning the advantages, disadvantages and limits of a technology. I’ve waited a while for a truly new and innovative technology to grab my eye – not just a rehash of existing bits and bytes. I think what Droplet Computing is doing is very, very interesting – and they are certainly a company to keep on your radar.

Category: TechFieldDays | Comments Off on Cloud Field Day: Droplet Computing – Any App, Any Where, Any Device
May 18

ThinkPiece: Cloud Field Day: Can an old dog, learn new tricks?

A rather fuzzy picture of the CFD3 crew. That’s okay – I think I look my best when no-one has been to Specsavers recently…

Well, it all seems such a long time ago now since I was in the Bay Area at the beginning of April, and I have been meaning to blog about my experiences and insights from Cloud Field Day for weeks. Sadly, I became very ill on the way home from the event – I came down with bronchitis. If you’ve never had it, count yourself lucky. It took me weeks to recover. After getting over that, a perfect storm of events overtook me – both good and tragic.

Anyway, I’m feeling MUCH better now, and I’m finding some cycles to work thru that “to do list” we all perpetually have… This won’t be my only blog on Cloud Field Day, as I intend to write a blog about the start-up that came out of stealth when we were there – called Droplet Computing. They are worthy of a blog all to themselves…

So there were a number of more traditional shrink-wrapped software vendors at the event – and without fail each one endeavoured to show how they were pivoting their traditional stack to the cloud. It makes perfect sense to do that considering it was a “Cloud Field Day” – the clue is rather in the title. I want to be kind and say that all the vendors were at least trying to do that. But at times it did rather feel as if Toys R Us were reacting to the onslaught of Amazon. And it’s not without irony that, where some traditional retail operators are really struggling to deal with the competition that Amazon.com brings in the domestic space, some traditional software vendors are struggling to deal with the competition that a public cloud vendor like Amazon brings to the table. There is an element of “let’s wait and see if this change really is happening” – only to find that once the change has happened, you are on the back foot from the get-go. It’s a much more dynamic and competitive landscape which is moving very quickly, and some of the traditional software vendors aren’t as “agile” (hateful word!) as their senior management might want to believe – or tell their shareholders and investors.

Now, this blogpost isn’t your usual “let’s bash the old ways and slag off the traditional vendors because they are easy targets”. I genuinely would like to see their cloudy efforts succeed, because I believe in competition. And I believe that a market that’s sewn up by a tiny cartel of big players is not in the customer’s or anybody’s interest. Competition is good for the Big Evil Corporates too – because it stops them becoming lazy and complacent. So I would like to give credit where credit is due. NetApp, for instance, has gone down the route of effectively creating a brand-new unit to deal with the challenges they are facing – essentially, putting all the storage goodness they have into Amazon AWS. This is native enterprise storage in the cloud, with all the features and functionality you used to enjoy on-prem.

[yes, I said on-prem. I’m bored already with the language police going on and on about on-prem(ises). Can we devote our mental energies to something that is actually IMPORTANT, for once?].


Note: This is the interesting Veritas presentation, and is worth a watch.
At the end of the week (when delegates are losing the will to live!), Veritas came in to talk about what they were doing. Sadly, the first part of the session was a bit of a bore fest – until a separate team came on to talk, and the old-skool Veritas guys had left the room. Now, what they showed us looked very much like a start-up’s “minimum viable product”, and I’ve always hated that term, especially when it’s deployed by huge, huge companies. Guys, you need to up your game (not Veritas specifically…). MVP is a term beloved of the Valley and the start-up culture. Its rightful place is there – in the start-up culture, where smaller companies need to spend a little, impress a lot, react quickly – and, critically, attract VC cash. But I’m afraid MVP has no place in a massive incumbent. People expect you to apply some spit and polish, and the expectations from customers are naturally higher. Meet them. Exceed them. Anyway, I digress.

There was some speculation amongst the delegates about why Veritas had chosen the route they had. I made the point that we were in Veritas corporate HQ – why don’t we, like, just ask them? Mercifully, we were spared the usual corporate flim-flam. The guy told us quite straight down the line that they just could not get the features and code built quickly enough from the existing team. So they built their own.

There’s a critical message here for all companies like NetApp and Veritas. There’s a right way to do this and a wrong way. Trying to build the new company from the people and processes that built the old is, by its nature, like rolling a stone up a hill. You nearly always need to build the new company by incubating a new BU from within. This is not just some organisational-chart activity – critically, that group needs to be championed and, yes, “protected” from other BUs/silos that would quite cheerfully strangle it at birth. So a bigly beautiful wall needs to be built around the company-within-the-company – and then it needs stuffing with cash. The other thing that needs to happen is that the sales folks who earned their commission from selling the “old stuff on the truck” need to be “compensated” and “incentivised” to sell the “new stuff on the truck”. And if they can’t do that, you need to get new people who will, because they have no stake in the previous model. As a friend quoted to me:

“Change the people, or change the people”

Think about that for a while…….

Anyway, I name-check NetApp and Veritas as companies who I think may have woken up and smelt the coffee and bacon (an odd mix, but it can actually be quite tasty, although as a Brit I prefer a cup of tea with my bacon butty). I think I would include Oracle Cloud Infrastructure (OCI) in that mix too. Whether NetApp, Veritas or Oracle are able to overcome the baggage that comes with their brands is anyone’s guess. But I do think they are at least trying, and if they put enough weight behind their respective projects, there is at least a chance they will be successful. And it feels churlish not to at least encourage and recognise effort where effort is being made. Oddly enough, OCI didn’t go down as well at CFD as they did at the Ravello Bloggers Day (same venue, similar people, almost the same presenters). I’m not sure quite why that happened, as there are lots of positive things to say about their offering. Perhaps that’s because they led with Ravello (for which there’s a lot of love in the vCommunity)? Oracle comes with a lot of baggage in the vCommunity – so there’s a “perception” issue to overcome. Sadly, the Ravello Bloggers Day wasn’t recorded, and I think embedding the CFD video might actually do them a disservice. So I recommend checking out my blog on that event:

http://www.michellelaverick.com/2018/03/oracle-ravello-blogger-day-2018-aka-rbd2/

I’m afraid I could not say the same about Riverbed. Sadly, their presentation leant too heavily on older sales plays that had merely been re-jigged for a cloud era. You could tell they weren’t really connecting with their audience by the rigor mortis that was settling into the group. That also kind of came across in the lack of vim and vigour in their presenters. I hate to say things like that, because it feels like a personal criticism, which I normally shy away from.

Things did brighten up with a presentation from one of their team members at the end of the session (Vivek Ganti), who at least came across as someone who felt a passion for the work he and his team were doing. This guy didn't feel like he was just "going through the motions".

Sadly, however, what he was showing was some of the automation that Riverbed have put into deploying their appliances into an Amazon VPC. In fairness, if you were doing this all by hand using Amazon's frankly horrid web UI you would have a piece of work on your hands. However, if you're doing public cloud "right" you should be using your favourite toolset of utilities to leverage the API. That's what public cloud is all about. Not Old Skool SysAdmins (like me!) clicking and filling in dialog boxes, but new-style DevOps SysAdmins who can stand up and tear down infrastructure with the flick of a script. One of the goals of this DevOps public cloud approach is to accept that nothing is really persistent in the old-style way – everything should be, by its nature, volatile, and so robustly automated that it can be destroyed and recreated in an instant. My point here is that this kind of automation is likely to make the average DevOps SysAdmin go "meh".

Also, for me there is a more important issue here – the value in any software, whether it's on-prem OR in Da Cloud, isn't how easily it can be deployed and set up. That, really, should be a "given". The real value is what that software allows the customer to do which they couldn't do before. Now, I guess you could say that standing up a multi-tier, load-balanced layer that offers redundancy and inspection of packets to ensure a smooth network experience can be difficult. I actually think Riverbed have a fantastic suite of products (although folks tell me they can be quite pricey). But I wasn't really convinced by their cloud play.

Why was that? After all, they aren't really doing that much differently from, say, what NetApp were doing. But there was something about the vibe. It felt as if Riverbed were sprinkling cloudiness over an existing product range, whereas I got the impression that NetApp had made a genuine attempt to cloudify their existing product ranges, whilst at the same time acquiring and investing in something net-new. I don't want to label Riverbed's approach as "Cloud Washing"; I guess the difference is in the approach. It appears as if NetApp wanted to deliver NetApp-as-a-Service (NaaS!), whereas with Riverbed it was more like Virtualization 1.0:

Hey, let's put our existing stack into a Linux instance and spin that up in EC2/VPC.

I suspect what customers WANT (remember them? customers?) is all the features and functionality they used to enjoy on-prem, without any of the complexity of configuration, management, or having to deal with it all becoming rusty after 3-5 years, and having to lash out more cash to upgrade and forklift their unique bits. Shrink-wrapped software, without being tied up in cling film if you wish….

Anyway, I said when I started that this would be just one post, with another on Droplet Computing. But I feel like banging on about the companies who are getting this right from the get-go, and that feels like another post entirely (although it is related…)

Category: ThinkPiece | Comments Off on ThinkPiece: Cloud Field Day: Can an old dog, learn new tricks?
May 14

Kickstart Scripted VMware ESXi Install to USB Media

I recently decided to switch back to using USB media for booting my VMware ESXi hosts. My main thought was that I want to use the local HDD for either some kind of VSA testing (such as StorMagic's SvSAN) or else, when I have the budget, to buy in SSDs to make a physical VSAN cluster, rather than being dependent and reliant on a virtual nested VSAN setup. I went out and got some 32GB USB media to make sure they would be big enough to create the scratch partition.

I'm often wiping my VMware ESXi hosts to try out new builds, or to roll back to previous builds for testing purposes. For that I use the UDA together with some kickstart scripts to do the bulk of the customisation. Sometimes I just lay down a clean build with no customisation and use PowerShell scripting to build up the environment to the level I want it. I have that in a modular way that allows me to lay down some, but not all, of the configuration depending on my development needs.

Anyway, to script an install to USB media you can use this in the kickstart:

install --firstdisk=usb --novmfsondisk

The --firstdisk flag can actually take a series of arguments separated by commas. It is possible to specify an order of particular disk types (not just the first USB, HDD/SSD or SAN-based LUN) that the installer searches for, and you can use this to set your own order for how disks are discovered. For instance, --firstdisk=usb,remote,local would first try to install to USB, then a LUN on a SAN, before lastly trying the local disk.
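As a minimal sketch, a kickstart line using that custom search order (with the same --novmfsondisk flag as above, and a purely illustrative ordering) would look like this:

# Try USB first, then a SAN LUN, then local disk; don't create a VMFS volume on it
install --firstdisk=usb,remote,local --novmfsondisk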

Also supported is specifying the model of the disk, like so: --firstdisk=ST3120814A. I used this in the past with hyper-converged appliances that I was doing ground-zero resets on, and it just so happened that the boot disk I was using had a unique model number. Of course, this isn't very helpful when a server contains disks that all have the same model number…
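For completeness, here's a sketch of what the model-based line looks like, re-using the example model number above (swap in whatever esxcli reports for your own boot disk):

# Target the disk by its model number rather than by type
install --firstdisk=ST3120814A --novmfsondisk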

The other option is, instead of using install --firstdisk, to use install --disk=mpx.vmhba1:C0:T0:L0. This allows you to indicate the disk by the adapter it's configured for, and the controller/target used (note these values are often 0; it's usually just the L number that changes). For many years we have referred to this as the vmhba syntax or "Runtime Name". In the past it's been hard to trust these values 100%, but I think they are more reliable nowadays, as the way the VMware ESXi host boots the vmkernel is VERY different now.
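Again as a sketch, the device-path flavour of the install line would be along these lines, re-using the example Runtime Name above (yours will almost certainly differ):

# Target one specific disk by its vmhba "Runtime Name" device path
install --disk=mpx.vmhba1:C0:T0:L0 --novmfsondisk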

Probably the easiest way to find the MODEL name/number and the vmhba syntax is with an esxcli command like so:

esxcli storage core path list

Here’s a cut and paste of the USB output – 

usb.vmhba32-usb.0:0-t10.SanDisk00Ultra00000000000000000000004C530001270222101242
   UID: usb.vmhba32-usb.0:0-t10.SanDisk00Ultra00000000000000000000004C530001270222101242
   Runtime Name: vmhba32:C0:T0:L0
   Device: t10.SanDisk00Ultra00000000000000000000004C530001270222101242
   Device Display Name: Local USB Direct-Access (t10.SanDisk00Ultra00000000000000000000004C530001270222101242)
   Adapter: vmhba32
   Channel: 0
   Target: 0
   LUN: 0
   Plugin: NMP
   State: active
   Transport: usb
   Adapter Identifier: usb.vmhba32
   Target Identifier: usb.0:0
   Adapter Transport Details: Unavailable or path is unclaimed
   Target Transport Details: Unavailable or path is unclaimed
   Maximum IO Size: 32768

I imagined a system with multiple USB media (!??!) would report C0:T0:L1 or else C0:T1:L1.

Oddly enough, I found that a second USB device added to my ESXi host doesn't appear – not even when doing a manual install with the ISO attached via the HP iLO. I'm not sure why this is, but it may (or may not) be significant which USB slot the device gets inserted into, or there may be limits around the way the VMkernel enumerates USB devices – perhaps only enumerating the first USB memory stick found? This isn't terribly important, but I will ask around my contacts and see what I can dig up. It's a bit obscure, but I'm curious like that.

Category: vSphere | Comments Off on Kickstart Scripted VMware ESXi Install to USB Media
May 11

VSAN Maintenance Mode with PowerShell

I'm running VSAN in a nested configuration, and I generally shut down my homelab in the evening each day. I grew tired of manually putting these nested nodes into maintenance mode before shutting them down, and then shutting down the host that they run on. I did a bit of googling for the PowerShell that does that – and saw quite complicated scripts. I'm pleased I went back to the documentation for PowerCLI, which is the primary source for the cmdlets:

https://code.vmware.com/doc/preview?id=6702#/doc/Set-VMHost.html

I discovered that the Set-VMHost cmdlet supports a -VsanDataMigrationMode switch with three different options:

  • Full
  • EnsureAccessibility
  • NoDataMigration

# Put each nested node (esx01nj to esx04nj) into maintenance mode,
# without migrating any VSAN data off the host
1..4 | Foreach {
 $Num = "{0:00}" -f $_
 Set-VMHost -VMHost esx"$Num"nj.corp.local -State "Maintenance" -VsanDataMigrationMode NoDataMigration
 }

# Then shut down each host without prompting for confirmation
1..4 | Foreach {
 $Num = "{0:00}" -f $_
 Stop-VMHost esx"$Num"nj.corp.local -Confirm:$False
 }
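For what it's worth, here's a minimal sketch of the reverse trip the following morning, re-using the same esx01nj-to-esx04nj naming as above: once the nested nodes have been powered back on and have reconnected to vCenter, take them out of maintenance mode again.

# Bring each nested node back out of maintenance mode once it has booted
1..4 | Foreach {
 $Num = "{0:00}" -f $_
 Set-VMHost -VMHost esx"$Num"nj.corp.local -State "Connected"
 }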
Category: vSphere | Comments Off on VSAN Maintenance Mode with PowerShell