December 6

Too Bizzy To Blog / Intel NUC / ESXi7


My last blog here was in April of this year. And what a year it's been. They say hindsight is 20/20, and who would have thought it would be in the year 2020 that the saying would be made true. I take the view that if you are still working, healthy, and haven't lost a loved one to COVID, then you are blessed. I'm pleased to say I'm one of that group – and my love & best wishes go out to anyone affected by the events of the last 12 months.

One thing I didn't realize is how bizzy I would be in my first full year here at Droplet Computing – and it's not just with doing 101s with new prospective customers, jumping on Zoom to assist with trials of the software, or cranking out interoperability guides and KBs. What makes my job so enjoyable is the hours I get to spend testing, playing, and creating skunkworks in my lab, which I try to turn into products or product features. It's probably the most creative part of what I do – and it's often driven by customers bringing us challenges that you would only encounter in the real world. So one thing that is deeply rewarding is watching your technology become battle-hardened out there in the world of the production environment – forged in the fire of real-world use. Another thing I've learned in the last 18 months or so of being at "Droplet" is how flexible you need to be – able to turn your hand to any technology – which suits my character down to the ground. If variety is the spice of life, then working here is like a particularly hot vindaloo!

Anyway, I was inspired today to write a blog about my experiences of using an Intel NUC in my vSphere lab – a topic that would seem unrelated to running legacy or modern Windows apps in containerized format – but stick with me and you'll see where I'm going.

So in my time of being a VMware Instructor and then an employee of the mighty VMW, I've largely kept away from the NUC-style form factor. The resources always seemed so limited (volume of memory/number of NICs), and also because back in my ESX 2.x.x days I got burned by the HCL. I kid you not, I once tried to install ESX 2.x.x on a Dell Optiplex PC – and I'm proud to say that I'm one of a small number of people who ran ESX 2.x.x on Pentium III PowerEdge pizza boxes connected to SCSI (yes, SCSI!) JBOD! Home labs have come a long way since 2004. But I decided the time had come to take a punt on an Intel NUC, especially as I'd heard so many good and trusted people in the VMware Community speak highly of them (That's you, William Lam, if you're reading this 🙂 )

So this is what I added to my basket on Black Friday, with a plan to buy the gear – do some tests – do a demo to my management – and then claim it back through company expenses. In that “act now, and ask for forgiveness” kind of way [I’m happy to say my sneaky plot worked like a charm – thanks Barry…]

Parts-wise I played it safe, consulting Intel's site for the NUC10i7FNH and only buying RAM and an NVMe card that was on that list. I probably could have found cheaper RAM/storage from no-name brands coming out of the same factory somewhere in deepest darkest China – but I figured that was pointless considering I'd only save a couple of bucks (or pounds) here or there. I took the decision to just fill one bank of the memory slots – in case this approach didn't fly – but I've been so pleased with the experience, I will buy the other 32GB SODIMM. The only question is how to sneak that thru my expenses claim without the big boss man noticing (Sorry, Barry…)

I won't bore you with a 4hr unboxing video (WTF!) where I marvel at the quality packing materials (FFS!). Needless to say – four screws at the base and you're in – slot in your RAM. Unscrew the mounting screw on the NVMe slot, which is spring-loaded, and once the NVMe is in place, screw the mounting screw back in. Pop the lid back on and you're good to go. I did the whole thing on the living room carpet whilst watching Bing Crosby in High Society. Not quite taking ESD precautions there, but hey – you only live once – right?

Gotchas – none really except:

  • I had to go into the BIOS/EFI (F2) and disable the on-board Thunderbolt port when installing ESXi 7
  • You get a weird TPM warning – apparently, there's a BIOS update for that… and then you have to apply a firmware update – then disable Secure Boot – AND if that doesn't work – clear the TPM keys by installing Windows before installing ESXi. I did the firmware update (F7 and provide a USB stick with the .cap file on it) – but it still didn't clear the warning. I decided to live with the warning – and turned Secure Boot back on… Life's too short to worry about a false-positive warning in a homelab…
  • vSphere 6.7 wouldn't install – I got a weird EFI firmware message – apparently fixed by a flash update – it didn't work for me. I'm not bothered; who wants 6.7 when you can run 7 in your lab (but it would have been nice to have…)
  • Being old skool I burned a CD (how quaint) and plugged in a USB CD/DVD drive I have. I guess I could have put ESXi 7 on a USB thumb drive, but I couldn't be bothered – I got ESXi 7 on the box, which is the main thing. The build I used was ESXi 7.0.1 (1685050804), which apparently has the NIC drivers built into the build, so no need to faff about with VIBs and Image Builder, blah, blah, blah.

So overall I really rate the NUC. It's a great little ESXi home lab server for those who are operating so far up the stack that they really don't care about features – they just need a cheap & cheerful bundle of CPU/memory/disk to run the high-level stuff without burning precious hours trying to get some custom-shop thing working, and tearing their hair out to get network drivers going.

I've half a mind to sell on my aging HP ML350e servers – and use the money to replace them with NUCs. If I did that I could sell on the half-height rack I have in the garage, and bring the home lab back indoors, which is quite nice considering it's Winter. That said, I do rather like having the HP iLOs for videos. My plan had been to just keep on adding RAM to the HP ML350e's and sweat that asset – which I still could do. It turned out that in the end it wasn't memory I ran out of so much as my CPUs getting so old that they lacked the horsepower/feature set I needed to do my testing. That's a bit of a bummer, because you can always add more RAM, but I rather balk at the idea of ripping out CPUs. Server-class hardware is always more pricey than PC-class – and I spent a pretty penny upgrading the HP ML350e from the bare-bones spec (2nd CPU, extra DIMMs, and GPUs….). I guess that's another trade-off with the NUCs: no chance of adding a GPU. So, having discussed this thru the medium of a blog, I think I will stick with the HP ML350e's for now – add RAM as I run out, use them increasingly for management systems – and stick anything that needs performance on the NUC. The day the ML350s just won't install whatever version of ESXi I'm running is the day I put them on eBay.


Now for the Droplet part. For some time I've been concerned that some of my performance experiences on virtual platforms weren't very realistic. My old home lab was based on some Gen8 ML350e servers which are some 4 years old (maybe more…). Although they have 2 sockets, they only have 8 cores of Xeon, and whilst they work fine for workhorse VMs (AD, Citrix, RDS, VMware View, and management nodes like SCCM and so on) it became clear I couldn't use them to benchmark or get a feel for what our technology would do in the context of more modern hardware. So much so that I abandoned that altogether this year – and took to testing our technology on cloud-based environments provided by our partners, such as Amazon WorkSpaces/AppStream and Microsoft WVD (big shout and thank you to Toby & Co at Foundation-IT for standing up that environment for me!)

The downside (for me) of these cloud environments is that as a rule they don't expose the Intel-VT attribute to their instances (Disclaimer: some Azure Linux instances do have Intel-VT exposed), I assume mainly due to concerns about security and multi-tenancy. On physical Windows 10, Apple Mac and Linux/Chromebook, Droplet Computing uses a whole range of different hardware accelerators based on the container type, customer need and how easy it is to get change-management things approved (you know the score!)

So for me one question always was – if I expose hardware assistance to a vSphere VM running Windows 10, could I use a hardware accelerator to make our container "go faster"? I'd always failed to achieve this with my older hardware (BSODs and freezes a-go-go), and figured with a new chipset I might get a different outcome. I also wanted to see what ducks might need to be in a row to make this work – what combination of CPU/vSphere/Windows 10 version would be necessary. I'm pleased to say I was successful in getting it to work on the Intel NUC. In fact, it flies – and I figure with a proper enterprise-class server the result would most likely be even better.

The Setup

Firstly, I needed to enable "Expose hardware assisted virtualization to the guest OS" on the VM. You might find this is already in place if you're in the habit of enabling Microsoft Virtualization-Based Security (VBS), as VBS requires access to Intel-VT as well as other attributes such as Secure Boot and TPM in the BIOS.
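(For reference: the checkbox lives under the VM's CPU settings in the vSphere Client, and it corresponds to a single line in the VM's .vmx file – a sketch for those who prefer editing the file directly, with the VM powered off:)

```
vhv.enable = "TRUE"
```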


Secondly, I needed the right version of Windows 10. As you might know, there are two channels for Windows 10 – the semi-annual channel, which is updated twice a year, and the Long-Term Servicing Channel (LTSC). Whilst the LTSC still languishes on version 1809 of Windows 10, there have been at least 4 major releases on the semi-annual channel, and we are currently on 20H2.

I started my tests using 20H2, but I have successfully made this configuration work with 1909 and 2004.

Aside: BTW there appears to be a bug in Windows 10 1809. I suspect the bug is "well known" and the fix will be "upgrade your version of Windows 10" rather than an individual hotfix.

Thirdly, before installing our software I enabled a hardware accelerator inside the guest operating system.

And that's it… our software automatically picks up the hardware enablement inside the VM, and our container is nice and nippy. This particularly benefits our 32-bit container, which we use a LOT with legacy applications. We do have a 64-bit container (for customers who have legacy apps which only shipped as 64-bit and for whom there was no 32-bit version). And with our modern container (which needs more grunt to run) for modern applications, the hardware acceleration makes all the difference. So the breakthrough for us is being able to run all our container types in a Windows 10 VDI VM without concerns about performance.

As a side benefit, it should also mean myself and our partners can use our vSphere labs more flexibly for tests and demos.

Sweet! 🙂

My next step is to QA this with our partners on a wide range of hardware – before we consider giving the green light on this configuration and officially supporting it – and I also need to test RDSH guests running Windows Server 2016 and Windows Server 2019 for platforms like Citrix Virtual Apps, VMware Horizon View and Microsoft RDS.

Category: Droplet Computing, View/EUC | Comments Off on Too Bizzy To Blog / Intel NUC / ESXi7
January 15

Happy New Integration Guide: VMware Horizon, App Volumes and UEM/DEM

Happy New Year! 

Happy New Integration Guide!

Late last year I spent time showing how Droplet Computing can run in the context of VMware Horizon Virtual Desktops as well as VMware Horizon Application Pools. I went on to show how you can leverage VMware App Volumes to deploy our software to a virtualized environment and use VMware User/Dynamic Environment Manager. It was nice to refresh my knowledge of the VMware EUC offering, as it's been some time since I wrote my book about VMware View (that was back in version 4/5!). I was suitably impressed by the Instant Clones option, which is a marked improvement on the Linked Clones of old….

You can read our integration guide here:


Category: Droplet Computing, View/EUC | Comments Off on Happy New Integration Guide: VMware Horizon, App Volumes and UEM/DEM
August 1

Ubuntu Screen Sharing, Encryption Settings and a Masonry Drill

As part of my work at Droplet Computing, I need to test our software on 4 different platforms (Windows, Apple Mac, Ubuntu and RHEL… and if I'm feeling frisky, Chromebooks too!). As a physical Apple Mac user, it makes sense to use virtual machines for this functional kind of testing (performance testing is another matter and really has to be done on physical systems). Being a long-standing vSphere user, it was logical to crank my home lab back up and use that gear rather than buy 3 laptops.

That was a pretty easy affair, but I was tussling with Ubuntu even with VMware Tools installed, using a Windows 10 jump box to access the lab (which coincidentally resides in an enclosure in my garage – I'm using those horrid powerline adapters to get ethernet in there at the moment; needless to say, I hope to borrow a masonry drill in the next couple of days so I can be wired directly to the wifi router that sits behind my telly).

So I thought I would give Ubuntu "Screen Sharing" a bash to see if that was any better than the VMware Console. You can find Screen Sharing by simply typing "Sharing" and enabling it like so:

Trouble was I kept getting access refused, either using the free VNC Viewer or the Apple Mac's own Connect to Server option.

Turns out Screen Sharing has its own encryption settings which are incompatible with these viewers. The easiest way to fix this is to use a dconf editor to turn off the encryption – and Bob’s your close relative…

1. Open a terminal and install dconf with:

sudo apt install dconf-editor

2. Run dconf-editor with:

dconf-editor
3. Browse to org -> gnome -> desktop -> remote-access and turn require-encryption to OFF

I did find VNC to be more network-friendly, but not as effective as a masonry drill… 🙂

Category: View/EUC | Comments Off on Ubuntu Screen Sharing, Encryption Settings and a Masonry Drill
May 16

Droplet Computing: The Drip, Drip Effect

The last couple of weekends I've spent playing with Droplet Computing's application container solution. As ever, getting "stick time" with any technology inevitably leads to thoughts about future functionality. I think the tricky thing for any new company is how far they want to carve out new features of their own, and run the risk of re-inventing a wheel that's already been invented and is already in use.

The dilemma, though, is that if you don't spell out a management vision for a technology that's essentially a platform, do you run the risk of looking like you're lacking "vision"? The danger is you end up as yet another vendor, with yet another 'single pane of glass' to be added alongside all the other 'multiple pains of glass' that customers have to toggle between. I guess the short version of that long-winded statement is: what's the best approach? To integrate or innovate?

I guess some Smart Alec would say the ideal thing would be to do both. Listen to customers and their needs and requirements, and come up with solutions and functionality that address those needs. The only downside of this customer-led approach is that customers can have a tendency to demand things because they think they "should" be there. Only for the ISV to find out that this was a tick-box exercise, and you have squandered precious development time for no reasonable benefit. It used to infuriate me when I suggested areas for improvement to VMware, only to be asked to explain and justify my thinking. Couldn't they just see it was obvious! Of course, the reason I was asked to make my use case every time was to avoid the very situation I've just described. So featurism – the desire to add more features and options without a good use-case – is a crime I know I have been guilty of in the past. It's a crime I'm likely to continue to commit until I retire.

So, that gets me to the explanation of the title of this blogpost. Drip. Drip. It strikes me that the best kind of improvements to a new technology take little coding effort and improve functionality for all customers. A rapid series of quick updates is far easier to pivot with than massive undertakings that might take months to complete.

I think there's another distinction to make with this type of approach. Do those drip-drip enhancements improve the user's experience, or the administrator's life and the deployment experience? For that reason, I'm going to separate them, because at the end of the day no end-user ever patted me on the back for well-administered systems.

User Experience:


At the moment the only print support inside the container is for the native print driver, which needs to be installed and configured by the administrator. Of course, customers may already have a license for some kind of universal print solution such as UniPrint. I think it would be great if Droplet Computing had some type of redistribution deal with a company like UniPrint. Ideally, this feature would retrieve the print configuration from the device OS, and then bubble it up into the container OS. This means the print configuration would ultimately be managed by the existing policies, profiles and/or scripts used to prepare the user's environment on the device OS.

Unity-like Functionality:

If you've ever used something like VMware Fusion, you'll have come across the so-called "unity" style experience. This unity experience would essentially "hide" the Container environment as much as possible, leaving just the application floating on the screen, and perhaps also creating shortcuts to the Droplet Computing Apps contained inside the Droplet Computing Container. The end-user doesn't need to interact with the "tiles" view in the main application, but instead gains access to their application from an icon on their desktop.

That said, I've always been irritated by desktop icon clutter, where the entire desktop is covered with icons. Some kind of "Droplet Computing Apps" folder on the desktop that contains all the icons would be neat.

Content Redirection:

This is a Citrix feature that's been around for some time. There are two types of "redirection": one that takes you from the device OS into the Container OS, and another that takes you out of the Container OS back to the Device OS. The former is, 9 times out of 10, the more common.

It's very common for a user to find a document in a folder and want to open it with a double-click. Historically, in Windows that process has been controlled with MIME file associations held in the registry. It can be tricky to get right. After all, file extensions generally tend to have no version information in them. For instance, the .PDF extension doesn't help me decide if that application should run locally or within the Container OS. Nor does it help me in a situation where 9 times out of 10 it should open locally, but in particular application scenarios it should open in the Container OS.

However, it is possible to create program logic that says, for all Excel 2010 spreadsheets requiring macros, open them in Excel 2010 in the Container OS, and for all spreadsheets created in Office 2016, open them in Excel 2016. It would be possible to redirect double-clicks for all the legacy XLS, XLM and XLT files to the Container OS, and all XLSX, XLSM, XLTX, and XLW files to a newer version of Excel running in the Device OS.
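As a sketch of how crude that decision logic could be (purely illustrative – a shell function standing in for whatever file-association plumbing would actually do the work):

```shell
#!/bin/sh
# Hypothetical sketch: decide where a double-clicked spreadsheet should open.
# Legacy Excel formats go to the Container OS; newer formats stay on the Device OS.
route_spreadsheet() {
  ext="${1##*.}"                       # extension after the last dot
  case "$ext" in
    xls|xlm|xlt)        echo "container" ;;  # legacy -> Excel 2010 in the Container OS
    xlsx|xlsm|xltx|xlw) echo "device" ;;     # modern -> Excel 2016 on the Device OS
    *)                  echo "device" ;;     # default: open locally
  esac
}
```

In reality this decision would live in the file-association layer – and, as noted, it would still need a way to hand the file itself over to the Container OS.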

This kind of redirection also requires a method of passing the location of the file to the Container OS, which can be used to retrieve the file. It's one thing to call an .EXE using file extensions, and another to then retrieve the data. It will generally require a communication channel and a file-sharing protocol between the Device OS and Container OS. And then there's the tricky issue of authenticating the user's access to the file – would that use the Device OS credentials passed through to the Container OS, or would it use the credentials of the Container OS?

The other type of redirection flows in the opposite direction. This is typically where a link to some multi-media format is found in a web page or embedded as a link inside an application running in the container. In most cases that YouTube link, MP3 or MP4 video would be best played by a locally installed application within the domain of the Device OS. I see this as being less of an issue, because the action happens less frequently than a double-click to open a file – and because it's easier to create and encode a simple "multi-media" rule to redirect all such links back to the Device OS.

Administrator Experience:

Multi-Container Support:

Currently the Container OS being used is Windows XP, and I understand a Windows 7 Container OS build is on its way. Although I think the number of users needing both Container OSes at the same time would be quite small, it would be nice if there was an easy way for administrators to give access to more than one container at a time – and for users to have an easy way to toggle between them. This isn't a biggie, as I suspect the number of people doing this will be limited, and the best practice will be to keep these images to a minimum. Right now, it's up to the user to browse between various image files. A task that's probably beyond the abilities of the average user.

Tile Management – Clipboard and Browsing for .EXE:

Right now, the administrator has to type in the paths to the .EXEs held within the Container OS. I would like to see the ability to browse these paths, rather than needing to type them. Together with that, I'd like to see the ability to copy an existing "tile" and modify it, to reduce this sort of manual entry to a minimum. It's perhaps not a huge issue – it may be that many customers of Droplet Computing are using the Container OS to deliver a single strategic legacy application rather than multiple applications, after all.

Ideally, I'd like to see some sort of programmable or scriptable method of setting up this environment, even if it's something as crude as copying down an .ini file. I've no idea where the tile data that makes up granted applications is stored, but I understand it's not held within the Droplet Computing Image. So it must be held somewhere, probably in an ASCII file of some description. I assume the tile configuration is held in the user's profile – so that if they roam around the organization, their Droplet Computing Container configuration follows them.
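Purely as an illustration of what that crude approach might look like (every key name here is hypothetical – I've no idea of Droplet Computing's actual tile format):

```ini
; tiles.ini – a purely hypothetical tile definition file
[tile-winword]
name=Microsoft Word 2010
path=C:\Program Files\Microsoft Office\Office14\winword.exe
description=Legacy Word running in the Container OS
icon=winword.png
```

A file like that could be copied down by a logon script or dropped into the user's profile, side-stepping the GUI entirely.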

Web-Browser Support:

Currently, the Windows XP version of the Container OS contains no web-browser. I think that's intentional, to reduce the attack surface that legacy browsers bring, as well as keeping the Droplet Computing Image slim. That does present challenges – such as downloading software into the Container OS. I was forced to do that via an external NAS device, which wasn't too painful, but it does mean sourcing every piece of software from the Device OS and leaving it on shared locations.

It makes things like setting up the Container OS for Dropbox, Google Drive and OneDrive trickier. A lot of the download pages for these sites inspect the OS and web-browser to detect the version of the extension required.

[Note: It’s perhaps worth noting that some of these extensions no longer work with Windows XP. For instance, Dropbox no longer supports Window XP for their download that extends the Windows Explorer environment.]

In the end I installed the K-Meleon web-browser into the Container OS, which then allowed me to get to some of my online software providers and extensions.


This is my third and final post on Droplet Computing, but I'm sure I will be back for more in future. It's worth remembering that users care less about HOW they get their applications than that they just get them. So, whether it's local or remote VDI, server-based computing in the form of Citrix or Microsoft RDS, or a Droplet Computing Container – they really aren't that bothered by the delivery mechanisms. I know WE are, but they aren't. And this is a reality that is often forgotten.

Ease of use is critical, and any delivery mechanisms that “gets in the way” of the user being productive is a distraction by any other name. Any feature or absence of a feature that results in a help desk ticket needs to be addressed. And, ideally, the delivery mechanisms must offer improved performance, ease of use and value for money to be able to unseat the incumbents. The ideal should be all three – but if a company can deliver two of these in a solution they will get traction in the marketplace.

It's interesting how application delivery in the form of containers has come full-circle. The much-loathed PC is a resilient little computing platform, one that has endured and survived huge changes in the last 40-odd years. Dismiss it at your peril. For me it's a testament to the economies of scale that the PC has honed over the years. Tablets and phones have risen, but those devices have excelled at delivering simple single-use applications.

But when it comes to the more productive knowledge work, it's still the Office suite of applications that users spend hours in front of, writing reports and presentations and maintaining the dreaded spreadsheet. Alongside these business-critical end-user applications is a whole raft of applications that customers want to carry on using well beyond the vendor's perception of end-of-life. It's for these reasons I think the "Container" approach as delivered by Droplet Computing has legs. I can see customers adopting it not just as a method of delivering legacy applications to any type of device (Windows, Mac, Linux) but also as an application delivery method for new applications too.

Category: Droplet Computing, View/EUC | Comments Off on Droplet Computing: The Drip, Drip Effect
May 3

Droplet Computing Product Walk Thru: Not just another Droplet in the ocean

NOTE: Okay, I fess up to that shameless nod to one of my favourite bands of the 80s, the indomitable "Echo and The Bunnymen" and their 'hit' "The Cutter". What can I say? I spent the latter part of the 80s dressed in black moaning about how terrible life was… But hey, didn't we all do that as teenagers?

This blogpost represents a walk thru of the Droplet Computing container application, and will attempt to describe the technology whilst keeping eyes focused on the practical side of life. I think I have one more Droplet Computing blogpost in me, and that post will be about where I think the next iteration of features should take them, and some of the challenges around achieving that. There are plenty of other things I want to learn about Droplet Computing, not least the process of authoring .droplet image files. So, whilst this blog does cover some very modest "administrator"-like functions, it's still very much focused on the "user experience" rather than getting into the weeds of management.

Being a Mac user, I was supplied with the OS X version of Droplet – but Windows and Linux versions exist. The main thing to say is the functionality is the same regardless of device type, and the image file that makes up the environment in which applications work is portable between the three supported device types. Remember, Droplet Computing is container-based technology; it's not a virtual machine or application virtualization. Apps are installed "as is" without any special packaging or sequencing process. And I think this will appeal to customers who do not have the time or resources for a dedicated team of packaging experts.

On the Apple Mac system, images are moved from the location from which they were downloaded to this path:

/Users/<username>/Library/Application Support/Droplet/Images

These .droplet image files basically hold the Container OS files – in my case Windows XP – together with any applications that have been pre-installed there. Droplet Computing provided me with a .zip file that contained both the core application and the image. When starting Droplet Computing for the first time you're asked for the .droplet file, and if required it will move it to the images path. Currently, it's Droplet Computing that provides the base .droplet file, from which you can make multiple copies and install software using the usual methods.

[Technical Bit: Early versions of Droplet Computing used a web-browser such as Google Chrome to provide the runtime environment. This approach has been dropped for the moment, as Google were pushing too many updates to Chrome to make this a reliable process that didn't trigger endless regression testing. Droplet Computing has adopted a Chromium web-browser which they control, to side-step this issue. Needless to say, the end-user never sees this hidden web-browser.]

Start and Load a Droplet Computing Container/Application

When Droplet Computing is loaded for the first time, the user is asked to locate their image file, and there's an option to control how much memory is assigned to the container, as well as to set the location of their "Shared Folder". This allows for the easy transfer of small files to and from the host and the Droplet Computing Container. Personally, I didn't use this shared folder feature – opting instead to connect the Container OS to my home NAS server, and find my software there.

I've an old 13" MacBook Pro with 8GB of RAM and an SSD drive, circa 2012. So it's by no means a powerful machine by modern standards (it suits my needs perfectly fine until it stops working!). Clicking the big orange "Start Container" erm, starts the container…. That process took about 1 min 30 seconds on my machine, which I don't think is too shabby considering the hardware involved, and the start-up time compares very favourably to a locally run virtual machine using VMware Fusion or VirtualBox.

Note: Currently this progress bar really only shows that the container is loading; it doesn't really show how far along the process is.

To make my life easier, Droplet Computing had pre-set up an application – Notepad – as part of my sample image. A simple "launch" button option loads the application from within the container.

Note: Ordinary users can remove application tiles from this UI at will.

Shutting down an application is the same in the container as it would be in Windows XP. Clicking the red X in the top right-hand corner will close the app, or use File, Exit. It's the Container OS's job to check that files have been saved, and if the user chooses to save some data, it currently defaults to the user profile settings. In the world of a generic unmodified Windows XP installation, this is the "My Documents" folder. (Gee, remember when every folder in a Windows install had the word My prefixed to it for no apparent useful reason!). One thing worth noting is that it is currently possible to shut down the container and the Container OS with open and unsaved files in the application, using the power button. This does not suspend the Container OS, and when you start the container again, that data will be lost.

Note: The power button shuts down applications and the Container OS.

The Edit options allow you to view/change the application's configuration parameters, and these amount to simple variables such as name, .exe path, and description. Currently, there isn't a browse option to locate the .exe; you have to type the path manually. Incidentally, this sort of browse functionality looks easier on paper than it is to implement in practice, especially if you want to make sure the Container remains a secure environment.

You can upload an icon to act as a friendly logo; this currently means browsing for .png, .jpeg, and .jpg files. This browsing is from within the Host OS, not the Container OS. So, I took a screen grab of the TextEdit application on the Apple Mac. I think in future Droplet Computing will inspect the .EXE and present icons from within it.

What can I say I like pretty icons!

Adding a New Application

The Droplet Computing Container is password protected, and adding a new application requires the administrator password. Droplet Computing sets a default password for access to the Container. The important thing to say is this is a password for Droplet Computing. It's not the password for the Container OS admin account, nor is it the password for the local device. So this password sits independently of those, and is designed to secure access to the Droplet configuration – such as adding/removing tiles, and opening up the Container OS to install new software.

Remember, Droplet Computing passed its recent PEN testing with flying colours, so I don’t imagine this is an issue.

Once validated, the container unlocks the options to add more applications for the user, revealing a “+” application tile from which you can add the parameters.

Note: My sample .droplet image was a bare-bones install of Windows XP that didn’t even include MSPaint.exe! A situation that might bring a tear to the eye of the old-timers out there who are fans of pixelated artwork. This configuration of the tiles view is user-specific and is not saved to the .droplet file. It means the .droplet file can be replaced with a new image/build without upsetting the user’s configuration.
Note: The first time I tried this it didn’t work because I’d missed the “r” in “Program”! There is no clipboard buffer in the container, so you cannot copy and paste from the device to these fields, or copy them within the application. I assume the lack of a clipboard is a security decision. Copy and paste is allowed between applications within the same container.

There’s currently no “browse” option here to navigate into the container, nor does Droplet Computing validate these paths to make sure they are correct. I knew that a legacy version of Microsoft Office 2010 had been pre-installed to the Container OS, so the way I validated the path was by doing the following:

I asked Droplet Computing to “Show Container”

This reveals the administrator-only “spanner” icon, which allowed me to run Windows Explorer to check the path to winword.exe.

Once I’d found the install directory, I was able to make a note of the path (without speech marks, incidentally) and the .exe.
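As an alternative to poking around in Explorer, a quick check from the Container OS Command Prompt would also do the trick. (The Office14 path below is the default install location for Office 2010 – your directory may differ, so treat it as an example rather than gospel.)

```bat
REM Search the Program Files tree for the Word executable:
cd "C:\Program Files"
dir /s /b winword.exe

REM Or simply confirm the path you intend to type into the tile:
if exist "C:\Program Files\Microsoft Office\Office14\winword.exe" echo Path is good
```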

As you can see in the screengrab above, this administrator mode exposes aspects of the Container OS shell in the form of the Control Panel, Explorer, Task Manager, Command Prompt, PowerShell (if installed) and sending the Ctrl-Alt-Del keystroke to the Container OS.

When my work was done, I could see both Notepad and Microsoft Word.

There’s currently no method of copying a tile to be able to set up a new application quickly by modifying the path to the .exe and the icon file. That said, this is a one-off operation that doesn’t take that long to configure. I notice there’s no way to drag tiles around, and they are sorted by the application “name” field. So I used a naming convention of Microsoft Word 2010, Microsoft Excel 2010 and so on to make sure they were grouped together.

Once multiple applications are available to the user, they can switch back to the list of tiles using the “Show Apps” option. This appears once the first application is loaded – in my case, Microsoft Excel 2010.

They can then load another application, such as Microsoft Word 2010. The taskbar shows the applications that are currently open, like so:

Installing new software to the Container OS

Adding new software to the Container OS involves a couple of steps:

  1. From the host, locate the source software
  2. Place it in a shared location, such as a NAS server
  3. Download and run the installer within the Container OS

As an example of doing this for Windows XP, I used PuTTY. By far the easiest thing to do is to put the .exe or .msi on a NAS server, and then map a network drive from the Container OS to the NAS device. I did this by entering the administrative mode, and then running a Command Prompt.

From there I was able to carry out a net view on the IP address of the NAS server to see the shares on it:

And then a net use command to map a network drive:

From there I could use the copy command to copy the putty.exe file to C:\Program Files. Once that was done, I was able to add PuTTY as a tile to the main view.
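For what it’s worth, the whole sequence looked something like this. (The IP address and share name are made-up examples – substitute your own NAS details.)

```bat
REM From the Container OS Command Prompt (administrative mode):
REM List the shares on the NAS:
net view \\192.168.1.100

REM Map a network drive to the software share:
net use Z: \\192.168.1.100\software

REM Copy the installer locally, then tidy up the mapping:
copy Z:\putty.exe "C:\Program Files\"
net use Z: /delete
```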

This work does beg the question of how networking is arranged inside the Container OS. In my case the Windows XP instance has a DHCP address from an internally supplied IP range. So, it did not get an IP address from my home router. DNS and routing queries get sent to an internal service, which then piggybacks off the host networking. This allows for outbound access to my network, and the internet – but with no corresponding inbound access. This was just enough networking to get me to my NAS server. As an aside, I notice Internet Explorer was not installed in my Windows XP image.
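If you want to confirm this NAT-style arrangement for yourself, a couple of quick checks from inside the Container OS will show it. (The addresses you see will obviously depend on the internal range Droplet uses – these commands are just the standard Windows diagnostics.)

```bat
REM The address, gateway and DNS server come from the container's
REM internal DHCP service, not your home router:
ipconfig /all

REM Outbound access works by piggybacking off the host networking,
REM so a ping to an external host should succeed:
ping www.google.com
```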


From what I can see, Droplet Computing has put together a robust release 1.0 product with PLENTY of scope for future features and functionality. They have developed their own in-house approach that solves the issue in a unique way. Personally, I think that’s no small achievement. It’s perhaps worth restating what those issues entail. As with server virtualization, there has always been the challenge of extending the lifetime of a legacy application beyond the life of the hardware and operating system for which it was first designed. Other attempts have been made using server-based computing (Citrix/MSFT-TS) and virtual desktop infrastructure (Horizon View and others). But these have been datacenter-focused solutions. I’ve championed both of these approaches in the past, and continue to do so – with the caveat that they place user workloads in the most expensive computing space known to the Enterprise. So whilst VDI and SBC remain useful tools in a company’s armoury, we have to acknowledge that the much-vaunted “death of the PC” and “Year of VDI” hasn’t happened. The PC remains a resilient and cost-effective method of delivering a compute experience that hasn’t been eclipsed by the tablet or the dumb terminal (I prefer to call them smart terminals, personally).

The bonding of hardware, operating system, and application has been ‘loosened’ in the last couple of decades. But they are still closely coupled together. It’s only really containers on the server side, with technologies like Docker and Kubernetes, that have significantly challenged and changed the way applications are developed. I think the time is right for that “container” approach to be applied to desktops as well – creating a mirror image of what’s happening in the server-side application space, and a new paradigm for how companies might deliver applications in the future. The issue of legacy application support isn’t going away, because it hasn’t gone away for the last two decades (and a bit more) that I’ve been in the industry. So as companies see operating systems like Windows XP and Windows 7 fall off the support cliff, I suspect that the same situation will be faced with Windows 8/10. And there’s more. If I thought technologies like Droplet Computing were just about legacy applications, I’d be less excited by this space. The fact is, I think many companies, once they harness the power of container technology for legacy applications, will be thinking about using it as a method of deploying new applications.



Category: Droplet Computing, TechFieldDays, View/EUC | Comments Off on Droplet Computing Product Walk Thru: Not just another Droplet in the ocean
April 11

Droplet Computing: Drops Some NewsLets

Disclaimer: I’m not paid or engaged by Droplet Computing, and I wasn’t offered any trinkets or inducements for writing this series of posts. I’m just interested in what they are doing. The CTO is a former colleague of mine from VMware, and I admire anyone’s chutzpah in walking away from the corporate shekel to do their own thing.


This is going to be a series of blogposts about Droplet Computing. I’m trying to eschew my usual TL;DR approach to blogging in favour of an approach that better reflects the goldfish-style attention spans that scrolling news has engendered in the population at large.

In case you don’t know, Droplet Computing does “containers for desktops”. This is the kind of typical “company-as-sound-bite” that is used as a shortcut to describe what a company does. If you want some more technical detail, check out the blogposts that will join this series.

The simple idea is delivering end-user applications for Windows, Mac, and Linux in a container. This is NOT your 2006 desktop virtualization (so not the “Year of VDI” narrative that vendors have been flogging like a horse deader than Mrs May’s Brexit Deal), and nor is it the application virtualization that involves “capturing” and “sequencing” applications into a special runtime (aka App-V, ThinApp, and a dozen other wannabes).

With Droplet Computing, applications are installed natively to an OS library held within the container, in such a way that anyone who knows how to click “Next” could build the environment.

So, the newslets are this.

A Special Person joins Droplet Computing

No, not me. I’m not that special.

That very special person has joined Droplet Computing as a non-executive director.


Adam Denning.

Who’s he?

None other than Bill Gates’ former technical advisor. That’s who.

This is “big” for a number of reasons. It’s a vote of confidence in Droplet Computing. It’s big because Droplet Computing is tiny (I think there are fewer than 15 people currently engaged – I could be wrong about this figure), so the arrival of such an industry heavyweight is of relatively cosmic significance. Adam’s the kind of figure that would convince folks being paid hefty sums at some oil-tanker corporate to do something infinitely more interesting – and riskier… But there’s something more as well. It’s about sending a message that Droplet Computing is in it for the long game. I mean, who knows what the future brings, but when heavyweights like Adam join, there’s something to take note of…

This is what Adam has to say about himself on LinkedIn…

“Technical strategist and architect with proven software delivery skills. Over 25 years’ experience with Microsoft in varied technical roles, the last 22 in its corporate headquarters in Redmond, USA, and including a 3-year stint as assistant technical advisor to Bill Gates. Led teams of over 100 people with multi-million-dollar budgets and delivered products used by 100s of millions of people around the world. Deep technical knowledge, thorough strategic thinking capabilities and extremely quick learning. Significant customer-facing work, oriented around developer strategy, working to ensure customer success and gathering feedback to improve Microsoft’s products. Presented and communicated at CxO-level, at 5 to 5000+ attendee conferences, and published books, magazines and blogs. Recently led the evolution of Microsoft’s platform strategy around Windows and its derivatives.”

This is a chap who hasn’t just done management stuff. See, I told you it was a vote of confidence. That’s about as much as I could glean from Google, aside from…

The other thing that’s nice about Adam is his natty selection of bowties. Bowties are super cool, and have been ever since Dr Who announced that it is a truth universally acknowledged, that a man in possession of a good fortune must be in want of a bowtie.

Photo courtesy of LinkedIn. Bowtie source unknown.


Droplet Computing Security Testing

The 1,000-foot view of this NewsLet is that they passed with flying colours. Okay, case closed… Well, not quite. The whole point of this sort of testing is to shout it from the rooftops so folks are convinced your product is safe to use. This is especially true of Droplet Computing, since their first use case is about allowing legacy applications associated with legacy operating systems to continue to run on OSes that are still current and patched.

A couple of years ago the UK was hit by WannaCry, with a wave of Windows XP instances that could not be protected (because Microsoft saw fit to keep the patch to themselves). Our beloved National Health Service was perhaps the most impacted, as they have a LOT of applications still in use that are too expensive to refactor and rebuild for a new OS. Sadly, the whole thing got politicised by the media and others, and the narrative became dominated by wider concerns around the underfunding of our NHS. The situation is somewhat more nuanced. Even if the government’s cup of money were overflowing, it would probably still be decided that maintaining the older systems was the best use of resource.

Incidentally, some might say this use case is dangerous because it means Droplet Computing is chasing a diminishing market of legacy applications that will one day be so redundant they will be switched off. I think this thinking is a bit woolly. Firstly, what is current today will be legacy in 5 years’ time, and IT history has a habit of repeating itself – the first time as tragedy, the second time as farce. But secondly, I could easily see customers loving Droplet Computing so much they choose to make it their de facto method for deploying new and current applications. Okay, so I know that’s a grand claim, and it remains to be seen. We will have to see if customers bite the Droplet Computing cherry.

Anyway, Droplet Computing engaged the services of NCC Group to do the tests. The assessment was conducted from February 14 to February 18, 2019, on a Windows 10 laptop with two Droplet Computing containers: one containing Windows XP with a variety of outdated software, including Office 2010, and the other with Kali Linux containing a large number of malicious tools useful for attempting to break out of the container. The main outcome of the report was that the container service was not accessible remotely – a huge advantage for organisations in securing enterprise applications. Here’s what NCC Group reported…

“The system being assessed allowed organizations to run existing applications within a secure containerized environment within a browser. The portability of running in a browser would allow these organizations to decommission unsupported and vulnerable operating systems in place of fully updated and supported versions, while still being able to use production software.”

Stop! Read that quote back again. Now read the bit in bold and italics again. Interesting, huh?

Droplet Computing is now using these results and working with NCC Group and Public Sector clients to achieve Cyber Essentials PLUS accreditation. Cyber Essentials is a UK Government-backed, industry-supported scheme to help organisations protect themselves against common online threats. The idea is to make the UK the safest cyber location on the planet. Assuming some civil servant doesn’t download everyone’s social security details to a USB stick and leave it on a commuter train to Northolt.

Admittedly, a lot of these cases are more than a decade old now. Things have moved on – except for government ministers, who persist in carrying important documents of state in full view of the media.


So, a senior Microsoft guy on board, and PEN testing complete. Pretty handy dandy. I think Droplet Computing is finally positioning itself to release its first 1.0 product, less than a year after showcasing a “minimal viable product” or proof-of-concept at last year’s Tech Field Day when they came out of “stealth”. The PEN testing is interesting. I figure it will be a constant balancing act between providing the features customers desire and maintaining the security credentials. However, as VMware demonstrated with ESX, it helps if you can set a good baseline of security from the get-go, rather than retro-fitting it once the horse and your credit card details have bolted.

Next up, a practical and technical hands-on walk thru of the product as it stands today.



Category: Announcements, Droplet Computing, TechFieldDays, View/EUC | Comments Off on Droplet Computing: Drops Some NewsLets
March 20

Problems with Windows 10 (1703) and VMware Guest Customisation

This week I decided to try and help out a fellow vExpert who was having an issue with carrying out guest customisation on a Windows 10 system. I’ve not done much with Windows 10, and I’m keen to expose myself to new problems and issues, and I like trying to fix issues. So I offered to help. We went ALL around the houses looking at the usual suspects – DHCP, Administrator credentials, DNS and so on. Turned out it was a problem with Windows itself. In a nutshell, 1703 is “bad” for VMware Admins, but 1709 is fine.

I’d not seen the problem because my lab uses 1709… and once my fellow vExpert had ditched the 1703 build of Windows 10, the problem went away. To be honest, the build difference was the LAST thing I checked.

The problem was this – put simply – whenever guest customisation was taking place, it was stalling and triggering the Regional Settings/Keyboard screens and suchlike. It’s worth saying that Sysprep has always been a crock of poop. It’s primarily designed for OEMs who ship PCs with Windows pre-installed, and need to “depersonalise” the build ready for shipping to the customer. It was never really intended for customers deploying Windows NT/XP/7/8/10 en masse, least of all Windows Server.

It is, however, all we have – and so we have to work within its constraints – one reason to check whether your VDI broker (aka Horizon or XenDesktop) has its own “Sysprep” equivalent – they are MUCH more functional and 1,000 times faster to process.
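Incidentally, if you want to check which build you are dealing with before going round all the houses, the release ID is held in the registry – a quick check I’d suggest running inside the guest:

```bat
REM Query the Windows 10 release ID - 1703 is the problem build,
REM 1709 customises just fine:
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ReleaseId
```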

Category: View/EUC | Comments Off on Problems with Windows 10 (1703) and VMware Guest Customisation
February 19

Vendorwag Reloaded: Liquidware Labs


It’s back – Vendorwag! So why the 3-year hiatus? Well, I felt that whilst I was working at VMware, spending precious time working with other vendors wasn’t really very ethical. VMware employed me to promote VMware, not other vendors, after all. Additionally, my first role at VMware was in the competition team – helping VMware folks handle competitive situations. It seemed a bit hypocritical to be helping to defeat the competition in my day job, whilst at the same time being all lovey-dovey with the same competitors here on my personal blog. Anyway, now I’m in my Adult Gap Year and a free agent again, I felt it was finally OK to crank up the Vendorwag. Plus I see it as a good way to keep in touch with the wider industry whilst I’m off being creative.

For those of you who have been round the block for a while in the virtualization space like me, Liquidware Labs has an interesting pedigree. One of the co-founders is David Bieneman – who came to my attention when he founded the VizionCore company (it went on to be acquired by Quest, and then Dell). I’ve had the pleasure of meeting David at the London VMUG – and that was in the previous decade. Yes. We are really getting that old…

Liquidware Labs positions itself as a desktop transformation company (hence “Transforming the Desktop!”), and sees itself as solving the thorny problems of profile management (ProfileUnity), application layering (FlexApp), and monitoring and diagnostics (Stratusphere) of existing environments in pre-transformation assessments, as well as diagnosing typical problems once the desktop transformation has gone live to users. The technology works for physical, virtual, or cloud-hosted desktops – as well as being compatible with the popular VMware and Citrix EUC suites.

The Vendorwag is slipping comfortably back into its original format – with Elevator Pitch, Product Lowdown, and Techknowledgey Demo strands. Each video is really intended for a different audience or character.

You can watch the videos embedded here on my blog; alternatively, if you prefer, you can subscribe to the Chinwag/Vendorwag podcast. If you’re specifically interested in the Vendorwag, I would recommend the videos – because they complement the audio element substantially, and of course a demo without being able to see the screen will stretch your powers of imagination!

Podcast Version Links:

The Elevator Pitch

The Elevator Pitch is a high-level overview of the Liquidware Labs value proposition. It’s some elevator, because the video runs to 30 mins! That would be some skyscraper! You can see the Elevator Pitch as the kind of video discussion you point your senior management at – for those people who aren’t concerned with how the product works, or indeed what it even looks like – just what it can do for the business.

J. Tyler “T. Rex” Rohrer is one of Liquidware Labs’ co-founders, with a focus on strategic alliances. Tyler helped co-found Liquidware Labs after leaving a key role within VMware’s Enterprise Desktop Team. Previously, he was a partner at FOEDUS, which was acquired by VMware. Tyler heads up the Strategic Alliance program and is engaged in managing the company’s relationships with major platform and storage partners.

[Note: To see us in glorious TechnoColour and Panovision – make the video full-screen and make sure HD quality is selected]

The Product Lowdown

For the more technically minded, the Product Lowdown talks about the products – what they can do and what their capabilities are. This video will be of interest to those who will eventually press the buttons and knobs in the software, or the kind of person who manages a team that will do that for them. Increasingly, people are delegating tasks to others, but that can quickly lead to a “de-skilling” – or what I sometimes paraphrase as “I spend so much time doing high-level architecting, I don’t know how stuff works anymore!!!”. You know that anxiety we all have as we get older and “mature”, that our technical skills aren’t as sharp as they once were. Hopefully this video will help you understand the technical challenges, advantages, and disadvantages – and leave you able to talk the talk (and be credible), even if you’re through with walking the walk.


Jason E. Smith, VP, Product Marketing. Previously an owner of Entrigue Systems, which was acquired by Liquidware Labs after its founding in 2009, Jason E. Smith currently heads the company’s Product Marketing team. In this role, he works strategically with Product Management, and focuses on go-to-market strategies for the company. Jason’s previous experience includes external strategic product and marketing consulting for Citrix, Red Hat, Scriptlogic, UltraBac Software, Internet Security Systems, and RES Software. Jason is a frequent speaker on technology trends in desktop computing. Recent speaker engagements include Citrix Synergy, VMworld, and VMware Partner Exchange, as well as numerous local CUGC and VMUG events.

[Note: To see us in glorious TechnoColour and Panovision – make the video full-screen and make sure HD quality is selected]

The Techknowledgey Demo

Like me, you’re probably fine with a little theory, but all theory and no practical can, after a while, make you feel like you’re losing your grip on the real world. I don’t know about you – but once I have got the first principles of what a technology does and how it works – I need to see it in action. That’s where the Techknowledgey Demo comes in. See the technology in action on screen, often visually illustrating the theoretical principles outlined in the Product Lowdown.

Jason Mattox, CTO, is well-known in the virtualization community for his world-class knowledge and leadership in end-user computing. In his role as CTO, Jason actively drives product development and product roadmaps for the company. What I personally love is the fact that Liquidware Labs selected their CTO to deliver the demo. How often does that happen now in our industry? Nowadays most C-class executives of software companies are so far removed from the coal-face of technology that they would have to get one of their “minions” (the SE who actually does know how the product works!) to do it for them.

[Note: To see us in glorious TechnoColour and Panovision – make the video full-screen and make sure HD quality is selected]

Category: Vendorwag, View/EUC | Comments Off on Vendorwag Reloaded: Liquidware Labs
July 18

VMware Horizon View: A little Local Difficulty with Windows7 Error Recovery

Last night I softly rebooted (which sounds like I was very gentle and kind to Windows) my persistent Windows 7 virtual desktop (using Horizon View 5.2) to let Patch Tuesday do its work, and today I found I couldn’t connect to it. Why? In my View environment I run two of everything – two Security Servers, two Connection Servers and, yes, two Windows 7 virtual desktops in case one stops working inexplicably. Opening the VMware Remote Console on the VM in question, I discovered it was stuck in a loop, attempting and failing an unwanted repair job. This is caused by the default option, when you get one of those ugly black screens in Windows, being “Launch Start-up Repair”.


I’ve had this happen to me a couple of times in my time of using Windows, especially with the more recent Windows 7/8/2012 releases, which all display this functionality. 9 times out of 10 the repair is either unnecessary or doesn’t work. So I decided to consult the technical bible of the day – Twitter – for suggestions on how to stop this ever happening again. Heck, I might even consider using it in all my templates and parent VMs for linked clones.

Chris Neale came up trumps first:

[Tweet screenshot]

Followed quickly by Marcus Toovey:

[Tweet screenshot]

BCDEdit is a utility for modifying what Microsoft calls “Boot Configuration Data” (BCD) files, which provide a store used to describe boot applications and boot application settings. The objects and elements in the store effectively replace Boot.ini. Ahhh, boot.ini. That takes me back to when I was a lowly NT4 instructor teaching RISC paths to hopeful MCSE candidates.

Whether the change should be applied to “current” or “default” is perhaps a bit moot – I imagine the default is the current one, unless you selected a different boot option at start-up. What interested me was the different settings being applied – and which was the right one. Chris quickly pointed me to a Microsoft MSDN article that summarises the differences:

[MSDN article screenshot]

I was also interested to know what criteria trigger a Startup Repair job if one hasn’t been manually requested. I have a feeling there is some sort of integer in the Windows registry which accrues with each dirty shutdown, and once N dirty shutdowns have occurred, this triggers the repair option – perhaps regardless of whether there is anything to repair as such.

As for the settings, the general verdict is to turn them all on – in a belt-and-braces approach to try and stop this happening again.
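For the record, the BCDEdit incantations suggested amount to something like this – run from an elevated Command Prompt in the template or parent VM (apply them to {current} instead of {default} if you prefer):

```bat
REM Belt: tell Windows to ignore boot failures rather than flag them...
bcdedit /set {default} bootstatuspolicy ignoreallfailures

REM ...and braces: disable the automatic Start-up Repair sequence:
bcdedit /set {default} recoveryenabled No
```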

Category: Microsoft, View/EUC | Comments Off on VMware Horizon View: A little Local Difficulty with Windows7 Error Recovery
July 12

VMware Horizon View 5.2 Feature Pack 2 Released

I guess the blogpost title says it all. Here’s what’s new – for me the biggy is Flash URL Redirection – that’s something I used to be able to do in my ye olde MetaFrame/Presentation Server/XenApp days…

  • Flash URL Redirection – Customers can now use Adobe Media Server and multicast to deliver live video events in a virtual desktop infrastructure (VDI) environment. To deliver multicast live video streams within a VDI environment, the media stream should be sent directly from the media source to the endpoints, bypassing the virtual desktops. The Flash URL Redirection feature supports this capability by intercepting and redirecting the ShockWave Flash (SWF) file from the virtual desktop to the client endpoint.
  • Unity Touch improvements – You can now add a favorite application or file from a list of search results, and you can now use the Unity Touch sidebar to minimize a running application’s window. Requires users to connect to their desktops from VMware Horizon View Client for iOS 2.1 or later, or VMware Horizon View Client for Android 2.1 or later.
Category: View/EUC | Comments Off on VMware Horizon View 5.2 Feature Pack 2 Released