April 26

Droplet containers on Google Chromebook – Technical Requirements

One of the most common questions that comes up around Droplet containers on Google Chromebooks concerns the hardware specification. Whilst that is important, what I usually direct customers to validate first is which kernel generation the Chromebook is using. You see, it's entirely possible that a very modern and powerful Chromebook is using an older kernel, and that an old and underpowered Chromebook is using a modern kernel. There isn't any link between horsepower and the generation of the kernel in use.

Why is this significant? Well, kernels from 14.4 onwards were compiled by Google to allow grub (the bootloader) to expose Intel-VT attributes from the kernel to the wider ChromeOS. This attribute is passed as "VMX" in the grub bootloader. As you probably know, ChromeOS is a very lightly loaded and sealed environment – which drives great performance and security. For that reason, you can't just change the kernel or modify the bootloader. Well, you can – by using "Developer Mode" – but that essentially "breaks" the security model. For the uninitiated, "Developer Mode" essentially gives you "root" access to the device – I often say it's not unlike "jailbreaking" an iOS/Android phone or tablet. It's not so much the kernel that's the issue but the parameter passed to that kernel via grub.conf. It just so happens that checking the kernel version is the quickest route to determine the capabilities.

How does this impact Droplet containers? We leverage Intel-VT to deliver hardware acceleration for our popular 32-bit containers and some of our 64-bit containers too (depending on their generation). It's worth saying not all Droplet containers benefit from this hardware acceleration – such as our DCI-X container. That's because some of the kernels we use pre-date Intel-VT anyway – so there's no performance benefit to be had. I'm keen not to overstate the performance delta between accelerated and non-accelerated containers – often we are talking about a difference of seconds when the first Droplet containerized app loads.

So how does one work out the kernel type on an existing Chromebook? First, you need to launch the Google Chrome web browser, and open a terminal prompt using [CTRL]+[ALT]+[T] on the keyboard, and then issue the command:

uname -a

This will print out the data needed to identify the kernel in use – in my case, this is a Lenovo laptop running Neverware's CloudReady. This is an open-source flavor of ChromeOS used to repurpose laptops, PCs and Apple Mac devices for ChromeOS. Performance can be stellar, because these devices often have resources sized for much heavier OSes – with ChromeOS being so light-touch, they end up superfast. As you may know, Neverware was recently acquired by Google – and I think this is part of Google's strategy to get a foothold in the world of Enterprise IT. Neverware can be a great "get out of jail" card for devices that are powerful enough to run Droplet containers, but where the kernel can't be updated.
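If you'd rather script the check than eyeball the full uname output, here's a minimal sketch, assuming you have a standard Linux shell available (on ChromeOS that typically means the crosh `shell` command, which needs Developer Mode):

```shell
#!/bin/sh
# Sketch: report the kernel release, and whether the vmx (Intel VT)
# CPU flag is visible to the OS. Both checks are read-only.
kernel="$(uname -r)"
major="${kernel%%.*}"   # e.g. "4.19.113-08528-..." -> "4"
echo "Kernel release: ${kernel} (major version ${major})"

if grep -q -m1 -w vmx /proc/cpuinfo 2>/dev/null; then
  echo "vmx flag present - Intel VT is exposed to the OS"
else
  echo "vmx flag not visible - acceleration may be unavailable"
fi
```

The vmx flag in /proc/cpuinfo is the generic Linux indicator that Intel VT has been exposed; its absence on capable hardware usually points to the bootloader/kernel issue described above.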

As for new hardware – we would always recommend speaking to your preferred OEM provider, as their internal inventories should allow them to find the details of the kernel generation. Another source of information is the community sharing their experience – but you need to be careful, as some of these devices are only available in certain geographical regions – plus you're not just looking at the kernel type but also the specifications of CPU/memory and local storage.


My personal favorite (which is fantastically priced) is the ASUS C436 Flip 14in i5 8GB 256GB 2-in-1 Chromebook, which is retailing here in the UK at the £799 bracket including VAT (or Sales Tax, if you are based outside of the EU).


It's a super-powerful unit for its price. In terms of CPU, my ideal would be some kind of Intel Core i3/i5/i7 device. Memory-wise, I would say a minimum of 4GB, ideally 8GB. Storage-wise, capacity isn't the issue but IOPS is significant – so I would prefer an SSD or NVMe drive over eMMC. Fundamentally, an SSD or NVMe drive wipes the floor with an eMMC type – which is usually fine as simple boot media for ChromeOS, but I feel isn't up to the job when it comes to running a Droplet container.

So, the main takeaway – Droplet containers run on Chromebooks, but watch out for under-resourced Intel Celeron type devices and for powerful machines with old kernels. Once our Droplet software is installed, you should be looking to see "KVM" listed as the accelerator in the Help, About menus like so:
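As a side note – if you do have shell access to the device, the presence of the /dev/kvm device node is a quick sanity check. This is a generic Linux check rather than anything Droplet-specific:

```shell
#!/bin/sh
# Sketch: KVM is exposed to applications via the /dev/kvm device node.
check_kvm() {
  if [ -e /dev/kvm ]; then
    echo "kvm device present - hardware acceleration should be available"
  else
    echo "no kvm device - expect the non-accelerated code path"
  fi
}

check_kvm
```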


Category: Droplet Computing | Comments Off on Droplet containers on Google Chromebook – Technical Requirements
April 14

Using and Managing the Droplet File Share Service

There are many, many ways to get data in and out of the Droplet container – my own personal favorite is just to use my NAS-based storage on my network, as it gives me a centralized location for all my storage needs on a super-fast network. There are other ways, of course, including using the clipboard to copy & paste a file into a container, as well as using our redirected drives. For some time we have also had our own file replication service, or "Droplet Share". The term "Droplet Share" is more of a marketing term, because this does not leverage shared storage protocols such as NFS, SMB/CIFS, or iSCSI, but a built-in replication service. In fact, it was developed as a secure method to get data in/out of the container even if protocols like SMB/CIFS are blocked. That's the case in many highly secure environments where these Microsoft protocols are expressly blocked by the policies of our customers.

You can see if the file replication service is running if the orange cross appears in the bottom left-hand corner:

There are two main uses of this feature – the first is a guerrilla usage, as an easy way to copy install media into the context of the container. I'll often use this when a customer has already downloaded their install media to their desktop and started the container – and other methods are not available. A more legitimate use is in highly secure environments where such protocols as SMB are deliberately blocked, and security policies decree that redirected drives or the clipboard are not enabled. This is often the case in some banking and financial services environments where physical and virtual airgaps are enforced when dealing with legacy applications that handle sensitive data.

The “Open Device File Share” opens a File Explorer/Finder Window on the host device, and the Open Container File Share opens a Windows Explorer application on the container:

So to the “back” is the Windows 10 File Explorer which opens the default path for the file replication service which is:


And the Window to the “Front” is Windows Explorer inside the container which opens a default path for the file replication service which is:


Our replication is bidirectional, so any file/folder created in either window results in it being replicated to and from the container. So here I created folders in either direction – and they were replicated in both directions:

We can cope with very large files – double-digit GB files are not a problem. Of course, the bigger the volume of data, the longer it takes to complete the replication. Some actions are almost immediate – such as file deletes – because no network IOPS are generated.

If customers are not wanting to use this feature it can be disabled from our settings.json or from our core UI under the settings gear icon:
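As a sketch of the settings.json route – I'm using "enableFileShare" as a hypothetical placeholder key here, so check the product documentation for the exact name in your release:

```json
{
  "enableFileShare": false
}
```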

Once the Droplet Fileshare feature is disabled – the services that back it are stopped, the UI is redacted and the shortcuts are removed – so there's no orange plus in the bottom left-hand corner of the UI.

Finally, a word of warning about the Droplet Fileshare. When you are building your "master" image you might want to clean out the contents of C:\Shared and turn off the Fileshare feature – occasionally system administrators forget this and leave behind unwanted installers, scripts, and other such code – and that can end up being pushed out during the deployment phase.

Category: Droplet Computing | Comments Off on Using and Managing the Droplet File Share Service
April 12

Droplet containerized apps published with Citrix Virtual Apps

One of the common questions that comes up is what options exist for presenting Droplet containerized apps in platforms like Citrix Virtual Apps. I have a bit of pedigree with Citrix – before pivoting to VMware in the 2003/4 era, my main role was as a Citrix Certified Instructor (CCI) delivering their official curriculum – I started with Citrix MetaFrame 1.8 on the NT4 TSE platform. So, there are a number of ways that you can make Droplet containerized apps available. These include:

  • Full Application in Desktop
  • Full Application as a published app
  • Individual apps within the container

This guide handles that last one. The goal is to be able to present just a single application or .exe within the context of the container – so it is advertised in the StoreFront like so:

So here I'm publishing the core droplet.exe and also a copy of Excel 2003 and Notepad within the container. Let's walk through an example.

In the Citrix Studio we can use the Application node to make new definitions:

When adding the droplet.exe – you can change the icon by providing a .ico file:

We can modify the "Location" to add additional command-line arguments – in our case "launch winword.exe -minimised":

This syntax instructs droplet.exe to start Microsoft Word, and to launch droplet.exe in a minimized mode if it's not already started. The result is that Microsoft Word launches, and droplet.exe is "hidden" in the Windows System Tray.

This is advertised in the StoreFront like so:

There are a couple of other recommendations I would make if you're opting to use a third-party publishing system to serve up Droplet containerized apps…

Firstly, you don't need our publishing front-end, called "Application Tiles", in this environment. You could if you so wished "mirror" the Citrix StoreFront configuration in the Droplet client. Alternatively, you could remove all the "Application Tiles" – if you also disable our FileShare feature, this makes the client very much like an agent sitting in the system tray, waiting for Droplet containerized applications to be launched from the Droplet Windows Service:

This configuration is not uncommon in environments where organizations have a method of controlling access to applications which they have been using for some time, and prefer to create shortcuts to Droplet containerized apps in the standard way using:

droplet.exe launch “excel.exe”

Secondly, in our new release, we will be supporting the settings.json option called:

“hideSettingsMenu”: true

Setting this to true suppresses the "gears" icon in the UI, which makes the UI a little bit neater.

Category: Droplet Computing | Comments Off on Droplet containerized apps published with Citrix Virtual Apps
April 9

Joining Droplet containers to Active Directory Domains

For some time, we have supported joining our containers to Active Directory. It's a function that has always been built in to our technology from its inception. There are several reasons why a customer might want to go down this route. Firstly, I should say we do usually try to avoid it, mainly because it's an extra piece of configuration to consider before going into production – with that said, in many cases it's an unavoidable requirement of the containerized application.

AD authentication became endemic as soon as Microsoft won the Directory Services wars fought against Novell NDS. Those wars remain firmly in the past, and you'll find AD-based authentication is pretty much the default method for many legacy applications – whether that's some pass-through authentication where no pop-up boxes appear, or some proprietary UI that confronts the user – either way, these methods usually require a computer account in the domain. Of course, nothing stays still – and the rise and rise of non-Windows devices (iPhone/iPad/Android) and web-based applications is fueling authentication methods which, although tied to LDAP infrastructure, use a system of tokens such as SAML/OAuth as the protocol for the username/password. So, by a country mile, the primary reason for joining a container to a domain is to meet these authentication requirements – and the way those requirements are often found out is by testing.

The second reason for joining the domain is accessing other resources which may be backed by AD-based ACLs. In the workgroup model, this is going to create some kind of Windows Security box, where the user will need to type their domain credentials. We can store and pass those credentials, but if there's an excess of them – then sometimes the quickest route to avoid these pop-up box "annoyances" is to join the container to the domain – and of course, as soon as one application requires it, the rest of the environment benefits from it as well.

There are other use cases as well – leveraging other management constructs and tools that are AD domain-based, not least AD GPOs. Whilst the local security policy remains intact inside the container, many organizations would prefer to have an OU and policy settings pushed down to the container. Those organizations using endpoint management tools like SCCM to manage their conventional Windows 10 PCs may also wish to use the same tools to control the software stack. Our general line on this is – fill your boots. Droplet isn't about imposing restrictions on how you manage your stuff, nor are we about creating another "single pane of glass" for you to navigate through, but rather about empowering you to leverage your already very serviceable management infrastructure.

Finally, there's one other reason to join the container to the domain – and that's to take advantage of our multi-session capabilities. This is a scenario where the container runs on either a Remote Desktop Session Host (aka Terminal Services) or Windows WVD where Windows 10 multi-session is in use. It also applies to scenarios where more than one user logs into the same physical PC at the same time using the Microsoft "Switch User" feature – this is a common requirement in healthcare settings where staff on shift share the same Windows PC and toggle between users using "Switch User". Of course, in multi-session environments where one container provides applications for many users, we tend to use our 64-bit container type as it offers great memory scalability – and we need to separate the different users by their userID to prevent one user from getting the Droplet Seamless Apps of another. Whilst you could create users inside the container, that makes no sense logically when there is an AD environment in place.

The process of joining the domain is no different from the process you would use on a physical Windows PC. However, this is a manual process, so we provide customers with a series of complementary scripts that automate it. This includes an "arming" process, so that as a container is deployed, on first start-up it runs through these scripts, which automate changing the computer name, joining the domain itself (creating the computer accounts in a designated OU), and end by ensuring various AD-based groups allow access to the Droplet Seamless App. At the same time, the default logon process is modified by changing an option in the settings.json:

This setting essentially turns off our default login process using our service account and instead challenges the user to supply their domain password.
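For reference, the option is "enableDomainLogon" – a sketch of the change (the exact surrounding structure of settings.json may differ by release):

```json
{
  "enableDomainLogon": true
}
```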

By default, end-users will see the Windows Security dialog box connecting through to the container, which listens on an internal, non-routable IP address. This uses the discrete, encrypted private network channel created between the host device and the container itself. Some customers like this approach as they see it as an additional layer of security – especially those organizations that have acquired third-party software that augments these MSGINA dialog boxes with other functionality. As you can see, the domain/username is already pre-filled – all we are waiting for is the password.

If customers would rather pass through the credentials from the main Windows 10/2016/2019 logon, that's very easy to do – a couple of minor changes in a GPO focused on the Windows systems that run our software is all that's needed. Again, we supply the documentation to configure those options if desired.

Other Considerations:

Now that the container is joined to the domain, there are some other considerations and configurations to think about. By default, the Microsoft firewall inside the container is turned off. We do not need it – because we have our own firewall configuration. However, joining a container to the domain means there's a good chance that a domain GPO could turn the "Domain Profile" back on. This policy won't stop our core technologies – but it can stop the Droplet Fileshare service and our UsbService, both of which are network processes. This triggering of the Windows Firewall can be stopped by enabling this disabling policy (did you see what I did there?):

Computer Configuration > Administrative Templates > Network > Network Connections > Windows Defender Firewall > Domain Profile

The policy-setting “Windows Defender Firewall: Protect all network connections” can be set to disable for a GPO that filters just on the container computer object in Active Directory:

Another consideration is user profiles – as soon as multiple users are logging into the container, this will generate user profiles. In most cases, this isn't an issue because organizations already have a profile management system in place – not least Microsoft's own roaming and mandatory profiles. That said, it shouldn't be overlooked – not least because the creation of a profile isn't very quick, and the last thing you'd want is multiple containers with multiple different local profiles being created.

So that’s joining the container to the domain in a nutshell – mainly done for applications that require AD authentication and with our “multi-session” based containers used in RDSH and Windows WVD environments.

Category: Droplet Computing | Comments Off on Joining Droplet containers to Active Directory Domains
April 7

Droplet containers in a non-persistent mode

One important feature of Droplet containers is our non-persistent mode, as it's referred to in our configuration file. By default, Droplet containers are stateful, and changes are held in a single image file format called .droplet.

Non-persistent mode essentially makes the container stateless such that any changes that occur after the container is started are discarded at shutdown or reboot of the host device. It’s especially useful for kiosks – where a dedicated machine is the access point for a single application that is in a public arena. If something goes wrong with the application or device, staff can merely reboot the system – and it will go back to its “known good” state.

That principle also extends to the shared computer model – where a single PC is used by multiple staff over the day or week. Classic examples would be shift workers in a healthcare context, or call-center staff. They may be sharing the same physical PC and sharing the same container on that PC. It's unlikely you'd want changes to accrue in the application between shifts. So, a combination of, say, mandatory profiles and Droplet containers in non-persistent mode essentially makes the environment read-only and super easy to reset. That also works in other contexts, such as lab environments in education where students jump on any terminal to carry out their work. This is especially true in environments where students deliberately delight in making changes – you know the type!

There is also a security angle to non-persistent mode. A significant number of our customers use Droplet to run legacy applications that could be vulnerable and subject to attack. Alongside our existing security, which creates a secure bubble around these applications, non-persistent mode makes the container itself easy to reset to a clean state. Customers taking advantage of this approach often run Droplet as a Windows service – and schedule a rolling stop/start of the core Droplet service.

Finally, our old friend "bloatware". As you will have observed in virtualization environments – Windows still creates temporary files, which even if deleted still tattoo the file system with dirty/stale blocks. Generally, the more modern the Windows kernel, the quicker the image can grow. By first using persistent mode to install and configure the software – and then deploying it with non-persistent mode enabled – you can guarantee the container image file stays the same size.

Enabling non-persistent mode is as simple as editing the settings.json – which is in the user profile in the per-user model, or in C:\ProgramData\Droplet when running as a Windows service – and turning it on with a true statement. In my case, I did this on my Apple Mac using nano as the text editor.
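As a sketch – I'm using "nonPersistentMode" as a hypothetical placeholder key name here, so check your release's settings.json for the exact spelling:

```json
{
  "nonPersistentMode": true
}
```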

I use non-persistent mode on my Mac mainly because it's usually from there that I show modern Windows apps running natively, and therefore I tend to use a modern container image type – so I'm keen to make sure it doesn't grow unexpectedly.

So, caveats…

There are two main caveats. First, the obvious one. You should not store persistent data inside the container. This is something we recommend as a matter of course for the majority of customers. By using mapped network drives or our drive redirection feature, you can make sure that data is stored outside the context of the container. I would also recommend imposing either a local or GPO policy to hide access to the C: drive inside the container, such that only remote storage is accessible. Another common mistake is making changes such as installing software into the container, and being horrified that after a restart it has disappeared. If you are doing this quickly, whilst distracted by other tasks, you begin to wonder if you're losing your marbles. Non-persistent mode ALWAYS discards changes, and never asks – it just does it. The setting is seamless to the UI, so no big red traffic lights are warning you of this fact. Have my developers and I been caught out by this? To quote Ludwig Wittgenstein's Tractatus Logico-Philosophicus:

“That which we cannot speak of, we must pass over in silence…”


The other caveat concerns the subject of my next blog post – "enableDomainLogon". We do support joining the container to an Active Directory domain. This is primarily for containerized applications that mandate full Windows authentication. As you know, any domain-joined device wants to update its trust password every 30 days. Effectively, non-persistent mode would discard those password changes, and the container would end up "orphaned" from the domain. This is akin to having a laptop off the domain for several weeks, then bringing it back into the corporate network and discovering its trust relationship was broken.

So non-persistent mode is a great way to keep the container fresh and ensure no unauthorized changes are made to it – just make sure you don’t lose your marbles…



Category: Droplet Computing | Comments Off on Droplet containers in a non-persistent mode
April 6

Enabling Windows Performance Extensions for Droplet Container

Here at Droplet, we leverage several different performance extensions for our container technologies. These are very much tied to their respective platforms. On Microsoft Windows, we leverage WHPX, whereas on Apple macOS and Chromebook we leverage HVF and KVM respectively. All of these different hardware accelerators essentially do the same thing – they give us access to the underlying Intel VT or AMD-V extensions, which improve performance. Historically, these attributes on the chip were implemented to improve performance for virtualization – but they also drive performance improvements for emulators like Android Studio and container technology like Droplet Computing. Microsoft does have a couple of different names for WHPX – its full name being the "Windows Hypervisor Platform" extensions. These are a collection of APIs/SDKs that vendors such as us can use to get a performance boost. It's not to be confused with Hyper-V, which provides a full virtualization management layer and engine. All that WHPX provides is access to those Intel-VT/AMD-V attributes on the CPU chipset.

HVF and KVM are built in and enabled by default in the world of Apple macOS and Chromebook, whereas on Microsoft Windows, WHPX needs to be enabled by the administrator. Of course, Microsoft provides a range of different ways of doing that, using either the UI, command-prompt, or PowerShell. Enabling WHPX from the UI involves navigating to Control Panel > Programs and Features > Turn Windows features on and off, and placing a tick in the box next to "Windows Hypervisor Platform".

Please note that although the UI doesn't state it, we would recommend a reboot once applied.

So that's great for demo purposes and in simple trials – but no administrator worth their salt wants to be running around their environment doing this through the UI. So there are ways through the command-prompt or PowerShell by which this can be enabled. Usually, I include these steps at the end of a scripting process – because I can finish off the script with a reboot, leaving the Windows PC or virtual desktop in a state ready to be used for Droplet containerized apps.

From Command-Prompt:

You can use the dism command to enable the feature. This is a simple "one-liner" format that will appeal to those who prefer easy DOS-based batch (.bat / .cmd) scripts. In case you don't know, DISM stands for the "Deployment Image Servicing and Management" tool:

dism /online /enable-feature /featurename:HypervisorPlatform

Notice how in the command-line tool the name is different again – not WHPX or Windows Hypervisor Platform, but merely "HypervisorPlatform". DISM is clever enough to know if WHPX or any feature has already been enabled, and so won't try anything illogical like enabling the same thing twice – you can confirm the current state with dism /online /get-featureinfo /featurename:HypervisorPlatform.

From PowerShell:

PowerShell is my main CLI of choice, even if I might use a simple batch file to tie my PS scripts together as a single setup.cmd. What PowerShell brings to the table is the ability to include IF-statement logic in the script:


$featureName = 'HypervisorPlatform'

# Query the optional feature state via WMI (InstallState 1 = installed)
$feature = Get-WmiObject -Class Win32_OptionalFeature | Where-Object { $_.Name -eq $featureName }

If ($feature.InstallState -eq 1) {
    Write-Host "$featureName is installed"
}
Else {
    Write-Host "Not installed"
    Enable-WindowsOptionalFeature -Online -FeatureName $featureName -NoRestart
    shutdown /t 600 /r /c "Restarting Windows 10 within the next 10mins to enable Microsoft WHPX"
}


The nice thing about a script like the one above is that I can reuse it time and time again by simply changing the string for $featureName – and it uses WMI (Win32_OptionalFeature) to verify whether WHPX has already been enabled.

Once WHPX has been enabled, hardware acceleration can be enabled in our software, and you can verify its presence in the UI:

Now for some caveats….

When we talk about “hardware acceleration” or “performance extensions” there are some things to bear in mind.

Firstly, and most importantly, how significant are those performance differences? Well, they can be measured in seconds – with hardware extensions enabled, the login process to the container can be as little as 3-4 seconds, with subsequent applications opening in less than a second. Without hardware acceleration, the login process is more like 8-9 seconds, with subsequent applications still loading in less than a second. So, we are only talking about seconds' worth of difference, which is unlikely if ever to result in any "productivity gains" – it's more about delivering the best user experience.

Secondly, where it pays dividends is in our older 32-bit containers, rather than our 64-bit containers. We find that hardware acceleration is less impactful on our 64-bit containers, mainly because a 64-bit container on a modern 64-bit OS running on 64-bit hardware has all its performance ducks in a row. For this reason, we tend to use our 32-bit container in the traditional Windows PC model where one user runs one container, whereas we tend to use our 64-bit containers in Windows WVD and Citrix Virtual Apps, where one container services the needs of many users – and where the additional scalability of 64-bit works because of the larger addressable memory and CPU. Although we can run with hardware acceleration in environments like Windows WVD, often the scalability and resource efficiencies win out over raw performance, especially when you're talking about a couple of seconds' difference.

Thirdly, and finally – Windows 10 versions. We support WHPX on Windows 10 1909 or higher. There is a known issue with WHPX on 1809 and earlier, which has known conflicts with Microsoft's Device Guard and Windows Virtualization-based Security (VBS).

Category: Droplet Computing | Comments Off on Enabling Windows Performance Extensions for Droplet Container
March 31

Droplet External Device Support – USB (Part 2)

Previously, I explained our support for external devices in the shape of COM/Serial/LPT devices – which is a built-in component of our technology. This time I want to focus on our support for USB devices. Our USB support comes in a special USB edition, and it is secured so that only the administrator can connect and configure USB devices for the Droplet container. The configuration is "persistent" in the sense that, once configured, the USB device will always connect so long as it is plugged in and powered on.

Once the container is unlocked, our UI displays a USB icon together with an interface to select connected devices like so:

You'll notice in this screengrab I've not selected my USB-to-Serial dongle. There's no need to, because of our built-in COM/Serial/LPT redirection. What is selected is an HP Color LaserJet printer – again, if I just needed the print functionality it would be easier/simpler to use our built-in printer redirection feature, or indeed use the IP printing functionality of the printer to print across the network – especially as IP-based printing supports scanning as well.

Scanning is possibly the most common use case for USB support in sectors like healthcare – where often expensive medical scanners still operate, but the software stack around them is so dated it's based on a Windows XP or Windows 7 model.

Another use case we have seen is in engineering environments (aviation, process management, automotive) where USB devices control expensive pieces of hardware that need servicing, maintenance, and support – often to meet industry standards and auditing requirements.

Once the device is enabled, it becomes a permanent part of the configuration until it's removed, and the configuration is stored in the settings.json file.

Once we start up the container, we are in the hands of Windows "Plug and Pray" – in other words, will the Windows binaries inside the container have a driver for the connected hardware? The older the Windows version inside the container, the more likely you will need to install a driver.

Once the container has started and is unlocked, the administrator can investigate either Device Manager or, in this case, "Devices and Printers". Here the printer was detected by the container, but only partially so – there appears to be an issue with the scanner component:

This meant hunting down the drivers for this device (which I already had for this printer) – a 200MB+ bundle that I downloaded into the container before running the autorun.exe. This installer requires the USB device to be unplugged and re-plugged during the install, which our USB stack handles with aplomb. During the install, the setup program asked me to reconnect the device, and the installation completed.

At the end of the installation, we can see the yellow exclamation marks indicating there was a problem have gone… whoop!

Although there are many ways within the Windows UI to initiate a scan – I wanted to find a more direct method so it could be “published” as a shortcut to the user’s desktop as a Droplet shortcut. I found the main scanning application resided in “C:\Program Files\HP\HP Color LaserJet Pro MFP M177\bin\hpscan.exe” so I made an Application Tile to that:
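Such a shortcut uses the same “launch” syntax as any other Droplet shortcut, with the target path inside the container, for example:

```bat
"C:\Program Files\Droplet\Droplet.exe" launch "C:\Program Files\HP\HP Color LaserJet Pro MFP M177\bin\hpscan.exe"
```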

Category: Droplet Computing | Comments Off on Droplet External Device Support – USB (Part 2)
March 22

Droplet External Device Support – Serial/COM/LPT Ports (Part 1)

For some time, we have supported external devices, and this comes in a couple of flavors connected to a couple of different use cases. We support COM/Serial/LPT redirection, USB redirection, and PCI pass-through*.

Serial/COM/LPT Redirection

The most common use of our native COM/Serial/LPT port redirection seems to be in engineering environments connected with aviation, automotive, and process management. Frequently these dedicated systems need an engineer to run periodic diagnostic or maintenance tasks, where connectivity is provided by a COM/serial port or LPT port on the device itself. Of course, it’s been some years since many laptops even had COM ports, so in most cases a USB to Serial dongle is needed to provide the physical connectivity. For lab environments using fixed desktop or tower PCs, there is still the option to purchase PCI cards that fit to the riser and provide either single or dual-port serial interfaces. Some of our higher-end engineering customers have devices that need testing or servicing via dual serial/COM ports. These COM/Serial to USB dongles are largely manufactured in the Far East and are inexpensive; for that reason, however, their quality varies significantly. We have had a very good experience with the StarTech range of devices, and their backing documentation is reassuring – they appear to be aimed at the engineering sector.

The good news is Droplet 2.x natively supports these COM/Serial/LPT devices. The functionality is built-in and merely needs turning on from the settings.json file, which can be found in C:\Users\%username%\AppData\Roaming\Droplet in the per-user mode, and in C:\ProgramData\Droplet in the service mode. All you need to do is modify the settings.json in Notepad, and then restart the application or service for it to take effect:
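As a purely illustrative sketch of the sort of edit involved (the key name here is hypothetical – check the settings.json shipped with your release for the real one), it is a simple boolean toggle:

```json
{
  "serial": {
    "enabled": true
  }
}
```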

In terms of the Serial/COM/LPT port ID, which is usually represented by a number (COM1, COM2, and so on) – whatever that port number is on the Windows PC, it will be the same number in the container. Occasionally, the COM port number on the Windows PC will need to be adjusted if the application is hard-coded to only utilize COM1 or COM2.
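To see which COM port numbers Windows has actually assigned before starting the container, the built-in mode command gives a quick listing:

```bat
REM Lists the status of all configured devices, including any COMx ports present
mode
```

If the application is hard-coded to COM1 or COM2, the assigned number can be changed in Device Manager via the port’s Port Settings > Advanced > COM Port Number option.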

This can be tricky, especially if the remnants of “phantom devices” are already occupying those slots. I’ve found showing “hidden devices” in Device Manager is helpful when troubleshooting this.
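For reference, the classic way to surface these phantom ports is to launch Device Manager from an elevated command prompt with the non-present devices variable set, and then enable View > Show hidden devices:

```bat
REM From an elevated command prompt:
set devmgr_show_nonpresent_devices=1
start devmgmt.msc
REM Then in Device Manager: View > Show hidden devices
```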

When I demo this in my lab to customers, I use an old APC PDU which has a serial port on the rear, with PuTTY inside the container using its serial/COM port option to act as the terminal emulator:

Note: Just look at that – 1997. God, how young were you in 1997? Were you even born? Shudder…

So, the moral of this story is – you don’t need the USB edition of our software to connect a USB-to-Serial/COM/LPT dongle to the container – that’s a built-in feature of Droplet. The only thing you might have to worry about is the driver install on the Windows 10 PC, and tweaking the device ID to work with your software.

In the next episode, I will look at our USB support and how that works…

* PCI pass-through is currently only supported with the Linux edition of our software and requires a modest amount of free support to get it set up and running

Category: Droplet Computing | Comments Off on Droplet External Device Support – Serial/COM/LPT Ports (Part 1)
March 15

Droplet Containers with Hardware Acceleration in VMware vSphere Virtual Desktops

As a level set, you should know we support many hardware acceleration types for the container. For Windows 10 (1909 or later) we use the Microsoft WHPX API, whereas on the Apple Mac we use their native HVF acceleration. On Linux and Chromebook, we support the KVM extensions. One of our more recent advancements is the ability to enable our hardware acceleration within the context of on-premises virtual desktops. Up until this point, we had to turn off our hardware acceleration feature as it was incompatible with the combination of virtualization and Windows 10 running inside the guest OS. WHPX is compatible with security features dependent on Intel-VT, such as Device Guard and Microsoft Virtualization-based Security, so long as Windows 10 is version 1909 or higher.


Interestingly, we discovered in our labs that our legacy 64-bit container doesn’t benefit a huge amount from hardware acceleration – I think this is to do with having a 64-bit container on a 64-bit OS, together with a 64-bit hypervisor. This is especially true of our more popular container type, which our customers use to run legacy applications.

For our container types used to deliver modern applications, our internal tests show that hardware acceleration is still advantageous – I think this is to do with the kernel inside the container: it’s a modern kernel, and therefore more resource-hungry than the legacy container, so the hardware acceleration makes all the difference.

So, hardware acceleration inside a VM. What are the prereqs, or “ducks in a row”? Firstly, the version of vSphere is unimportant – this works on 6.x and 7.x without an issue. What really matters is the version of Windows 10. There’s a known Microsoft bug with version 1809 – basically, a combo of Device Guard/Microsoft Virtualization-based Security and WHPX creates a conflict, as these internal Windows components are incompatible with each other. If you’re on version 1909 or higher, this Microsoft issue has been resolved. If you’re unsure of your Windows 10 version, use the “System” option on the right-click of the Start Menu:
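If you prefer the command line, the release ID (1809, 1909, and so on) can also be read straight from the registry:

```bat
REM Prints the Windows 10 release, e.g. 1909
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ReleaseId
```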

Enabling hardware acceleration on the vSphere VM means changing a setting in the properties of the VM – under the CPU section, ticking “Expose hardware assisted virtualization to the guest OS”.

Once Microsoft WHPX has been enabled and our software installed, the container picks up on the hardware acceleration:


Category: Droplet Computing | Comments Off on Droplet Containers with Hardware Acceleration in VMware vSphere Virtual Desktops
March 8

Droplet As A Service Revisited

A couple of weeks ago I wrote about how we can now run a Droplet container as a Windows Service. At that time, I just wanted to explain the use cases around that, and didn’t peel back the lid to explain how it’s done and what the setup process is like. It’s actually very simple.

Just to refresh the little grey cells – we have two modes of operation: per-user or per-computer. In the per-user model, the end-user starts the container and waits a short while for it to load (30-40s on physical hardware), and then they get their “Droplet Seamless App”. The start of the container can be invoked by loading droplet.exe and hitting the big orange “Start Container” button (no prizes for guessing what that does).

This client can also be started when clicking a droplet shortcut backed by our “launch” syntax for example:

“C:\Program Files\Droplet\Droplet.exe” launch “C:\WINDOWS\NOTEPAD.EXE”

This model works well for containerized applications typically used only occasionally, such as once a week, month, or quarter, or on a completely ad-hoc, as-needed basis. The per-user mode is also handy for administrators, as it gives the admin an easy way to build, monitor, and switch containers while installing software and checking everything works as expected.

With the per-computer model, on the other hand, if the containerized apps are being used all day and every day, it makes sense to automatically start the container when the Windows PC or Windows 2016/2019 server first starts. This means there’s no need to wait for the container to load, and containerized applications are available immediately after login.

To set up the service, I need a copy of the settings.json file. This per-user file can be found in the current user’s profile at:


And copied to:


Next, we can open a command prompt with administrator rights, and navigate to this location:

C:\Program Files\Droplet\resources\container\service

In this location is a droplet-service.exe utility that can be used to install, start, and remove the service with the command:

“C:\Program Files\Droplet\resources\container\service\droplet-service.exe” install
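Based on the install/start/remove capabilities mentioned above (only the install verb is shown in the original command – the start and remove subcommand names are assumed to follow the same pattern), the full lifecycle would look like this:

```bat
REM "install" is documented above; "start" and "remove" are assumed subcommand names
"C:\Program Files\Droplet\resources\container\service\droplet-service.exe" install
"C:\Program Files\Droplet\resources\container\service\droplet-service.exe" start
"C:\Program Files\Droplet\resources\container\service\droplet-service.exe" remove
```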

Once the service is started it appears in the Windows Service MMC like so:

For the client-side configuration, I use a shortcut located in shell:common startup, using our flags to make sure the client end is always loaded but minimised to the System Tray – this means any changes to the entitlement are always updated at logon, when the user’s “apps.json” is read. We can use the syntax:

“C:\Program Files\Droplet\droplet.exe” launch --minimised

* That’s minimised with an S, not a Z… :-p

With everything in place, the client loads silently in the System Tray and connects to the Droplet container service.

As you can see, with this model the client-side app merely becomes a “broker” to the applications running inside the container. We believe that as time goes by, the per-computer model will become the de facto way most of our customers run Droplet software. Increasingly, we will become invisible to the user, who shouldn’t have to worry or concern themselves with how applications are being delivered – all they do is double-click a shortcut, without any real awareness that their software has been dropletised (and that’s with an S, not a Z…)


Category: Droplet Computing | Comments Off on Droplet As A Service Revisited