June 16

Droplet Container Support Scenarios

Back when I started with Droplet (it will be two years in Sept 2021!) we had quite a simple product. Back then we had just two container types and a limited number of places where we supported them. Since then we have added many different container types running in physical, virtual, and cloud environments. Additionally, there’s been an explosion of features and possible configuration options. I’m pleased to say that 100% of this development has been customer- and demand-led. That’s how I feel it should be, and how it should remain. I’ve seen software companies go adrift chasing featurism – the endless development of new features simply to have something “new” to bring to customers, lacking a strong, compelling use case driven by customer need.

Most software companies address this increased “flexibility” (what they mean is complexity!) with a series of tables or a matrix. I find these a turn-off, mainly because they are not focused on the organization and reduce a product to a series of “tick boxes”. I think they are difficult to navigate, often confusing in their own right – and don’t simplify the story in the way they intend. I prefer a series of scenarios that are firmly rooted in the real world, which makes it much easier for organizations, partners, and our staff to ask the right questions – and saves customers bags of time and energy in the process – especially now we have 4 core types of container, with two sub-types within each category.

Of course, even a scenario approach has its limitations – few organizations fit into a “one size fits all”, so multiple scenarios can and do exist – which lends itself to a more “blended” solution. The other thing I don’t like about matrices is that they lead to drag-race comparisons between vendors based on a feature list – how often is a missing or present feature latched onto as a deal-breaker, only to find that feature never gets enabled once in production!

So let’s look at some common scenarios and outline what my recommendations would be for the customer…

Scenario 1: Legacy Apps on Physical Hardware

This is probably our most common scenario – although I would say a significant minority of customers are using our technology to deliver a modern application stack.

Whether you’re running on Microsoft Windows 10, Apple macOS, or Google Chromebook, I would recommend our DCI-M7x32 container leveraging the native hardware acceleration provided by the local OS – WHPX on Microsoft Windows 10, HVF on Apple macOS, or the KVM extensions on Google Chromebook. The DCI-M7x32 container is a good all-rounder for legacy and some modern applications too – and is probably our most popular container type.

Scenario 2: Legacy Apps in On-premises VDI Environments

We support hardware acceleration in VDI environments where Windows 10 is the primary OS running inside the VM. For on-premises environments like VMware vSphere, the Intel-VT or AMD-V attributes need to be exposed to the VM, and we would recommend using Windows 10 Version 1909 or higher. Assuming you have all your ducks in a row, then we would recommend our DCI-M7x32. In short, physical and virtual environments are treated equally. In case you don’t know, exposing hardware acceleration to the VM is very easy to enable on the properties of the CPU in VMware vSphere:
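For those who prefer to work outside the UI, the same checkbox (“Expose hardware assisted virtualization to the guest OS”) corresponds to an entry in the VM’s .vmx file. A minimal sketch, based on vSphere’s documented nested-virtualization setting – always confirm against your vSphere version’s documentation:

```
vhv.enable = "TRUE"
```

The VM must be powered off before the setting is changed, and the guest will then see the Intel-VT/AMD-V attributes on its virtual CPU.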


Scenario 3: Legacy Apps on Multi-Session environments

In this case, a server OS is enabled with the Microsoft Remote Desktop Session Host role (aka Terminal Services) and multiple users connect to either a shared desktop or a shared application infrastructure. In this scenario, we would recommend our multi-session container, which is 64-bit enabled – the DCI-M7x64. Although this container type doesn’t currently support hardware acceleration, it does provide up to 4 cores and 192GB of memory. That offers fantastic scalability – one container image is accessible to multiple users, offering the same concurrency model as the RDSH host within which it runs.

In Microsoft WVD, whilst E-series-v4 and D-series-v4 instances do pass through hardware acceleration, the benefits are limited to a power-user style environment where there is a 1-2-1 relationship between the user and the desktop. In our literature, we refer to this model as the “Flexible Model”, as each user gets their own personal container accelerated by Intel-VT. In this case, the DCI-M7x32 container is the best type to go with.

In environments like Microsoft WVD we recommend the same configuration as we would with RDSH – essentially, Windows 10 Multi-session offers the same multi-user functionality as RDS-enabled Windows Server 2016/2019. The DCI-M7x64 container, which is multi-session enabled, offers greater consolidation and concurrency ratios. Laying the technical issues aside for a moment, economically the utility model allows for wins, as a multi-session environment is always going to offer consolidation and concurrency benefits. In our literature, we refer to this as the “Scalable Model”, as it is the most cost-effective method of serving up containerized apps to multiple users. There’s an implicit lack of scalability for a multi-session container on a 32-bit kernel: since that kernel is limited to using 4GB of memory, once you have around 9-10 users connected to the container you run the risk of ‘out of memory’ conditions and swap activity. On an x64 container that isn’t a problem, as the maximum amount of configurable RAM is 192GB.
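The 9-10 user figure follows from simple arithmetic. As a rough sketch – the per-user memory footprint below is an illustrative assumption, not a measured Droplet figure:

```shell
# 32-bit kernel: roughly 4GB of addressable memory in total
kernel_limit_mb=4096
# assumed average working set per connected user (illustrative only)
per_user_mb=400
# integer division gives the approximate user ceiling
echo $((kernel_limit_mb / per_user_mb))   # -> 10
```

On the DCI-M7x64, with up to 192GB configurable, the same arithmetic gives headroom for far more users before memory becomes the bottleneck.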

Scenario 4: Modern Apps on Physical or Virtual Hardware

Frequently we have organizations wanting to run modern apps on Windows 10 rather than installing those apps directly to the OS. The reasons for this can be many – but often it’s about wanting to decouple the apps from the OS to allow for portability and security, and to be able to ingest Windows 10 updates without fearing they will clobber a delicate blend of applications. Another motivation can be supporting applications across other platforms like Apple macOS or Google Chromebook. As the Droplet image is portable across all three platforms without modification, it’s often the best approach.

In a pure Windows 10 environment – whether physical or virtual – we would recommend the DCI-M8x32 or DCI-M8x64 container with hardware acceleration; the same container type would be used on the Google Chromebook. Apple macOS, on the other hand, benefits from the use of our DCI-M8x32, which we have been running for some time and which gives excellent performance. We do have a DCI-M10x64 container type, but you do need physical hardware for that (currently it’s simply too resource-intensive for a virtual/cloud environment – although that will change with improvements in software and hardware). We tend to reserve the DCI-M10x64 container for high-end devices (8-16GB RAM, SSD/NVMe) as this offsets the payload associated with this generation of the kernel.

Scenario 5: Really Jurassic Apps on Physical or Virtual

Occasionally, we come across an organization with really old applications. For these organizations, I recommend they give the DCI-M7 container type a try, as often we find even really old applications will install and run inside our container runtime. If that’s not the case, then I would recommend the DCI-X container type (hint: X refers to cross-compatibility). It offers the same “Droplet Seamless App” experience but contains an older set of application binaries and frameworks, often missing or deprecated in the DCI-M7 generation.


Category: Droplet Computing | Comments Off on Droplet Container Support Scenarios
June 9

Droplet Networking (Part 2 of 2) Walls Of Fire

Sorry, it just amuses me that we use the term “firewall”. Yup, I know it comes from the construction industry, as a way of saving people’s lives and the integrity of the building. I sometimes wish hackers and other bad actors did have to walk through fire as part of their punishment – but I guess that’s against some crazy health and safety laws. If you don’t know by now, I am joking. Besides which, I felt “Walls of Fire” was a suitably “clickbaity” blogpost title that might garner me some additional views – and if that is the case, I’m very sorry for that.

So anyway, in Droplet we have two firewalls – external inbound, and internal outbound. The important thing about the external inbound firewall is that it is turned on by default and blocks all inbound traffic. There is no API or SDK – which means there are no controls for a hacker to leverage to facilitate an attack. That clearly has implications for “push” based events, but so far in my experience the vast majority of networking activity is actually “pull” based – some software inside the container is responsible for initiating network activity. That traffic is what triggers the internal outbound firewall…

The internal outbound firewall is stateful by design – which is just firewall-speak for saying that if a client app opens a TCP/UDP port to the network, allow that to pass – and when the communication ends or times out, close that door. It has been the basis of many firewalls for decades. By default, our outbound firewall doesn’t block any traffic (remember ping and tracert do NOT work inside our container). The default configuration allows ANY:ANY. To a great degree, this is a deliberate choice on our part to deviate from our usual stance of “all the doors are closed until you open them”.

[Aside: It’s the response to the reality in our time-pressed world, that almost no one has the time to RTFM these days. Heck, I’m surprised you even have time to read this blog post – but here you are. Thanks for that 🙂 ]

So, if we made our default BLOCK:BLOCK, precisely zero packets would be able to leave the container, and we would spend hours explaining why that was the case… So, if you look at our default firewall configuration when the container is powered off, this is what you will see:

Changes to the firewall require the Droplet Administrator password, and require that the container is shut down or the Droplet service stopped. The changes made in this UI are permanent and survive reboots and shutdowns.

Note: Enabling block with no rules defined blocks ALL network traffic from the container. This is a viable configuration if you want to block all communications in and out of the container except those allowed by our redirection settings or other internal Droplet processes.

I can make this configuration very restrictive by only allowing port 80 traffic from inside the container to a specific list of addresses. This is common when a customer is running a legacy web browser, for example IE8, to connect to a legacy backend web service.
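Purely to illustrate the shape of such a restrictive policy (this is a hypothetical sketch, not Droplet’s actual rule syntax, and the destination is a placeholder):

```json
{
  "defaultAction": "block",
  "outboundRules": [
    { "action": "allow", "protocol": "tcp", "port": 80, "destination": "<legacy-web-service>" }
  ]
}
```

The principle is the same whatever the syntax: block by default, then punch out the one hole the legacy browser needs.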

In the screengrab below, the web service is accessible (incidentally, it’s running in a Droplet Server Container protected by a secure and encrypted link…) but www.dropletcomputing.com is not accessible – notice also how my mapped network drive to S: no longer works. The Droplet redirected drives still function – which goes to show that for every rule there’s an exception. Our firewall does not block our own trusted internal communications – such as those that drive our file replication service.


Category: Droplet Computing | Comments Off on Droplet Networking (Part 2 of 2) Walls Of Fire
May 11

Droplet Networking (Part 1 of 2)

So this blogpost is all about our networking and our built-in firewall. This is intended as a primer for geeks like me who like nitty-gritty details. You don’t need to know ANY of this stuff to use or deploy our product. In this first part, I will talk about the raw IP of the container and how that functions. In part 2 I will focus on our built-in firewall controls and how they work.

Firstly, let’s talk about the IP configuration of the container. The Droplet container is configured to be a DHCP client but critically does NOT get its IP address from your DHCP services on the network. Instead, a built-in service that requires zero configuration is responsible for assigning this IP. An ipconfig inside the Droplet container reveals the IP address assigned – incidentally, it is the same for every container.

Notice the Default Gateway address – any packet destined outside of this network will be directed to it. So, the thing to understand here is that by default we NAT all traffic out of the container across this IP address, which is assigned to the host device. Together with our firewall configuration, this effectively “hides” the container on the network, making it invisible.

Common admin tools to confirm network communications are utilities like tracert and ping. It is possible to ping the Default Gateway address and get a positive response from the host device – but all other ping/tracert tests will fail. We don’t allow ICMP (the protocol that drives ping/tracert) to escape the host/container network. In the screengrab below, the ping to the Default Gateway works, but all other ping tests fail.

The main takeaway is that if you are experiencing networking problems, ping can and does produce false positives indicating a problem – when the issue lies elsewhere. Incidentally, this does not impact other utilities such as nslookup, where the requests are driven by a different protocol. In case you don’t know, nslookup requests are driven by UDP port 53. So an nslookup of that address will give a positive response (assuming a DNS PTR record exists for the reverse lookup). In the screengrab below, you’ll see a positive answer both to that query and to nslookup dc.corp.local

If you look closely, you’ll see the address listed as the “unknown” server. So, DNS forwarding is taking place on another built-in IP address, which you can see listed if you run an ipconfig /all

So, if a container cannot get to \\dc.corp.local, it is more likely the case that the host device is misconfigured – or else there’s a problem with DNS, where that name either doesn’t exist or is incorrectly configured. An nslookup is probably the crudest and most basic test of connectivity, because if you get an answer it proves that the container is forwarding queries to the host device, which in turn is forwarding those requests on to the real DNS infrastructure.

Whilst I’m on the subject of names – one word about using names inside a container. The Droplet container is for the most part completely unaware of your network or domain (both DNS and AD) infrastructure – but it does understand IP addresses and DNS-based FQDNs.

[Aside: If you join the container to the domain, the above isn’t true. Our default behavior is that the container is a member of its own workgroup]

So opening an Explorer window on the IP address or \\dc.corp.local is highly likely to work – but using \\dc – the short or NetBIOS name – is less likely to be successful, and is more likely to be slow. Here’s why. Many years ago, we embraced DNS as the main name resolution system – and most organizations decommissioned the systems that supported the short NetBIOS names that go back to Microsoft’s NetBEUI protocol (commonly referred to as the less-than-good-old-days :p). NetBIOS names used to be resolved by either a hosts file, an lmhosts file, or a system called WINS (ah, remember the days of JetDB-driven name resolution systems!). In the absence of these systems, a broadcast packet is probably the only way these shorter names will be resolved. As broadcasts are not propagated across routers and are inhibited by network switches and VLANs, the chances of successful resolution are slim – unless you run a flat network like you would on a simple home Wi-Fi network.

What if you have a legacy application that is so old, decrepit, and poorly written that it only works with NetBIOS names, and not the fancy new DNS names that started to be adopted with Windows 2000? Simplez – simply tell the container the DNS suffix information required to complete the short name on the LAN adapter inside the container.
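Conceptually, suffix completion is trivial – the resolver appends the configured suffix to any name without a dot before querying DNS. A minimal sketch of the idea (complete_name is a hypothetical helper for illustration, not a Droplet tool):

```shell
# Append a DNS suffix to a short name; leave FQDNs untouched
complete_name() {
  case "$1" in
    *.*) echo "$1" ;;        # already fully qualified - pass through
    *)   echo "$1.$2" ;;     # short name: append the configured suffix
  esac
}

complete_name dc corp.local             # -> dc.corp.local
complete_name dc.corp.local corp.local  # -> dc.corp.local (unchanged)
```

That is all the suffix setting buys you: the short name the legacy app insists on gets quietly turned into an FQDN that real DNS can answer.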

A good way to illustrate this name resolution issue with NetBIOS names is to try an nslookup on a short name. Here you can see the query of “dc” using nslookup fails:

However, if I manually tell the Droplet container what my preferred domain name is (something it usually gets from domain membership or option 015 DNS Domain Name on your DHCP server) this “problem” goes away…

Of course, the simplest and recommended method of avoiding this issue altogether is to use FQDNs or IP addresses where possible, and avoid the use of NETBIOS names altogether…

So now we have a good idea of the networking inside the container – what are some practical uses? Here’s one practical example – suppose you have adopted cloud-based storage such as OneDrive, GoogleDrive, or Dropbox, and you wish to access those systems within the Droplet container. I wouldn’t recommend installing the client software for these systems INSIDE the container, as the synchronization process behind them would start to fill the container with redundant data. Better to use either our built-in drive redirection to access them, or share out the OneDrive/GoogleDrive folder and then map a network drive to it.

With our drive redirection enabled, any folder accessible on the host device is accessible to the end-user (based on their user login details…) and in this way, the File Manager of the container “mirrors” the File Manager of Windows 10.

So on the left we see the C, F, L and S drives of the Windows 10 PC, and on the right we see the redirected drives of the Droplet container.

Another approach would be to share the OneDrive folder on the Windows 10 PC, and then map a network drive using that share name. So after sharing out the folder, I can connect using SMB/CIFS by browsing to the host’s address, right-clicking the share, and mapping a drive letter using the standard tools:

Category: Droplet Computing | Comments Off on Droplet Networking (Part 1 of 2)
April 26

Droplet containers on Google Chromebook – Technical Requirements

One of the most common questions that comes up around Droplet containers on Google Chromebooks is the hardware specification. Whilst that is important and significant, what I usually direct customers to validate first is which kernel generation is being used by the Chromebook. You see, it’s entirely possible that a very modern and powerful Chromebook is using an older kernel, and that an old and underpowered Chromebook is using a modern kernel. There isn’t any link between horsepower and the generation of the kernel in use.

Why is this significant? Well, kernels from 4.14 onwards were compiled by Google to allow grub (the bootloader) to expose the Intel-VT attributes from the kernel to the wider ChromeOS. This attribute is passed as “VMX” in the grub bootloader. As you probably know, ChromeOS is a very lightly loaded and sealed environment – which drives great performance and security. For that reason, you can’t just change the kernel or modify the bootloader. Well, you can – by using “Developer Mode” – but that essentially “breaks” the security model. For the uninitiated, “Developer Mode” essentially gives you “root” access to the device – and I often say it’s not unlike “jailbreaking” an iOS/Android phone or tablet. It’s not so much the “kernel” that’s the issue but the parameter passed to that kernel via grub.conf. It just so happens that checking the kernel version is the quickest route to determining the capabilities.
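One way to confirm whether the VMX attribute actually made it through to the OS is to look for the vmx flag in the CPU flags reported by /proc/cpuinfo. A small sketch (has_vmx is a hypothetical helper; on a real device you would feed it the flags line from /proc/cpuinfo):

```shell
# Return success if a CPU flags line contains the vmx (Intel-VT) flag
has_vmx() { printf '%s\n' "$1" | grep -q -w vmx; }

flags="fpu vme de pse tsc msr pae vmx ssse3"   # sample flags line for illustration
if has_vmx "$flags"; then
  echo "VMX exposed to the OS"
else
  echo "VMX not visible - check the kernel/bootloader generation"
fi
```

If the flag is missing on hardware you know supports Intel-VT, the bootloader parameter – not the silicon – is the likely culprit.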

How does this impact Droplet containers? We leverage Intel-VT to deliver hardware acceleration for our popular 32-bit containers, and for some of our 64-bit containers too (depending on their generation). It’s worth saying not all Droplet containers benefit from this hardware acceleration – such as our DCI-X container. That’s because some of the kernels we use pre-date Intel-VT – so there’s no performance benefit to be had. I’m keen not to overstate the performance delta between accelerated and non-accelerated containers – often we are talking about a difference of seconds when the first Droplet containerized app loads.

So how does one work out the kernel version on an existing Chromebook? First, launch the Google Chrome web browser, open a terminal prompt using [CTRL]+[ALT]+[T] on the keyboard, and then issue the command:

uname -a

This will print out the data needed to identify the kernel in use – in my case, this is a Lenovo laptop running Neverware’s CloudReady. This is an open-source flavor of ChromeOS used to repurpose laptops, PCs, and Apple Mac devices for ChromeOS. Performance can be stellar, because these devices often have huge amounts of resources originally specified for much heavier OSes – with ChromeOS being so light-touch, they are superfast. As you may know, Neverware was recently acquired by Google – and I think this is part of Google’s strategy to get a foothold in the world of Enterprise IT. Neverware can be a great ‘get out of jail’ card for devices that are powerful enough to run Droplet containers, but where the kernel can’t be updated.
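If you’d rather script the check than eyeball the uname output, the comparison is just a major/minor version test against the 4.14 threshold. A sketch, assuming release strings of the usual “X.Y.Z-…” form:

```shell
# Return success if a kernel release string is 4.14 or later
version_ok() {
  major=${1%%.*}          # text before the first dot
  rest=${1#*.}            # strip "major."
  minor=${rest%%.*}       # text before the next dot
  [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 14 ]; }
}

if version_ok "$(uname -r)"; then
  echo "kernel generation can expose Intel-VT (VMX)"
else
  echo "kernel pre-dates the VMX-enabled builds"
fi
```

This is only a convenience sketch – the authoritative check remains whether Droplet’s Help, About screen reports KVM as the accelerator.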

As for new hardware – we would always recommend speaking to your preferred OEM provider, as their internal inventories should allow them to find the details of the kernel generation. Another source of information is the community sharing their experience – but you need to be careful, as some of these devices are only available in certain geographical regions – plus you’re not just looking at the kernel type but also the specifications of CPU/memory and local storage.


My personal favorite (which is fantastically priced) is the ASUS C436 Flip 14in i5 8GB 256GB 2-in-1 Chromebook, which is retailing here in the UK at the £799 bracket including VAT (or Sales Tax, if you are based outside of the EU).


It’s a super-powerful unit for its price. In terms of CPU, my ideal would be some kind of Intel Core i3/i5/i7 device. Memory-wise, I would say a minimum of 4GB, ideally 8GB. Storage-wise, capacity isn’t the issue but IOPS is significant – so I would prefer an SSD or NVMe drive over eMMC. Fundamentally, an SSD or NVMe drive wipes the floor with an eMMC type – which is usually fine as simple boot media for ChromeOS, but I feel isn’t up to the job when it comes to running a Droplet container.

So, the main takeaway – Droplet containers run on Chromebooks, but watch out for under-resourced Intel Celeron type devices and for powerful machines with old kernels. Once our Droplet software is installed, you should be looking to see “KVM” listed as the accelerator in the Help, About menus like so:


Category: Droplet Computing | Comments Off on Droplet containers on Google Chromebook – Technical Requirements
April 14

Using and Managing the Droplet File Share Service

There are many, many ways to get data in and out of the Droplet container – my own personal favorite is just to use my NAS-based storage on my network, as it gives me a centralized location for all my storage needs on a super-fast network. There are other ways, of course, and those include using the clipboard to copy & paste a file into a container, as well as using our redirected drives. For some time we have also had our own file replication service or “Droplet Share”. The term “Droplet Share” is more of a marketing term, because this does not leverage shared storage protocols such as NFS, SMB/CIFS, or iSCSI, but a built-in replication service. In fact, it was developed as a secure method to get data in/out of the container even if protocols like SMB/CIFS are blocked. That’s the case in many highly secure environments where these Microsoft protocols are expressly blocked by the policies of our customers.

You can see if the file replication service is running if the orange cross appears in the bottom left-hand corner:

There are two main uses of this feature – a guerrilla usage as an easy way to copy install media into the context of the container. I’ll often use this when a customer has already downloaded their install media to their desktop and started the container – and other methods are not available. A more legitimate use is in highly secure environments where protocols such as SMB are deliberately blocked, and security policies decree that redirected drives or the clipboard are not enabled. This is often the case in some banking and financial services environments where physical and virtual airgaps are enforced when dealing with legacy applications that handle sensitive data.

The “Open Device File Share” option opens a File Explorer/Finder window on the host device, and the “Open Container File Share” option opens a Windows Explorer application in the container:

So to the “back” is the Windows 10 File Explorer which opens the default path for the file replication service which is:


And the Window to the “Front” is Windows Explorer inside the container which opens a default path for the file replication service which is:


Our replication is bidirectional, so any file/folder created in either window is replicated to and from the container. Here I created folders on both sides – and they were replicated in both directions:

We can cope with very large files – double-digit GB files are not a problem. Of course, the bigger the volume of data, the longer it takes to complete the replication. Some actions are almost immediate – such as file deletes – because no network IOPS are generated.

If customers do not want to use this feature, it can be disabled from our settings.json or from our core UI under the settings gear icon:

Once the Droplet Fileshare feature is disabled, the services that back it are stopped, the UI options are removed, and the shortcuts removed – so there is no orange plus in the bottom left-hand corner of the UI.

Finally, a word of warning about the Droplet Fileshare. When you are building your “master” image, you might want to clean out the contents of C:\Shared and turn off the Fileshare feature – occasionally system administrators forget this and leave unwanted installers, scripts, and other such code behind – and that can end up being pushed out during the deployment phase.

Category: Droplet Computing | Comments Off on Using and Managing the Droplet File Share Service
April 12

Droplet containerized apps published with Citrix Virtual Apps

One of the common questions that comes up is what options exist for presenting Droplet containerized apps in platforms like Citrix Virtual Apps. I have a bit of pedigree with Citrix – before pivoting to VMware in the 2003/4 era, my main role was as a Citrix Certified Instructor (CCI) delivering their official curriculum – I started with Citrix MetaFrame 1.8 on the NT4 TSE platform. There are a number of ways that you can make Droplet containerized apps available; these include:

  • Full Application in Desktop
  • Full Application as a published app
  • Individual apps within the container

This guide handles that last one. The goal is to be able to present just a single application or .exe within the context of the container – so it is advertised in the StoreFront like so:

So here I’m publishing the core droplet.exe and also a copy of Excel 2003 and Notepad within the container. Let’s have a look at the Excel 2003 example.

In the Citrix Studio we can use the Application node to make new definitions:

When adding the droplet.exe – you can change the icon by providing a .ico file:

We can modify the “Location” field to add additional command-line arguments – in our case “launch winword.exe --minimised”

This syntax instructs droplet.exe to start Microsoft Word, and to launch droplet.exe in a minimized mode if it’s not already started. The result is that Microsoft Word launches, and droplet.exe is “hidden” in the Windows System Tray.

This is advertised in the StoreFront like so:

There are a couple of other recommendations I would make if you’re opting to use a third-party publishing system to serve up Droplet containerized apps…

Firstly, you don’t need our publishing front-end, called “Application Tiles”, in this environment. You could, if you so wished, “mirror” the configuration of Citrix StoreFront in the Droplet client. Alternatively, you could remove all the “Application Tiles” – if you also disable our FileShare feature, this makes the client very much like an agent sitting in the system tray, waiting for Droplet containerized applications to be launched via the Droplet Windows Service:

This configuration is not uncommon in environments where organizations have a method of controlling access to applications which they have been using for some time, and prefer to create shortcuts to Droplet containerized apps in the standard way using:

droplet.exe launch “excel.exe”

Secondly, in our new release, we will be supporting the settings.json option called:

“hideSettingsMenu”: false

Setting this to true has the effect of suppressing the “gears” icon, which makes the UI a little bit neater.

Category: Droplet Computing | Comments Off on Droplet containerized apps published with Citrix Virtual Apps
April 9

Joining Droplet containers to Active Directory Domains

For some time, we have supported joining our containers to Active Directory. It’s a function that has always been built into our technology from its inception. There are several reasons why a customer might want to go down this route. Firstly, I should say we usually try to avoid it, mainly because it’s an extra piece of configuration to consider before going into production – with that said, in many cases it’s an unavoidable requirement of the containerized application.

AD authentication became endemic as soon as Microsoft won the Directory Services wars fought against Novell NDS. Those wars remain firmly in the past, and you’ll find AD-based authentication is pretty much the default method for many legacy applications – whether that’s pass-through authentication where no pop-up boxes appear, or some proprietary UI that confronts the user – either way, these methods usually require a computer account in the domain. Of course, nothing stays still – the rise and rise of non-Windows devices (iPhone/iPad/Android) and web-based applications is fueling authentication methods which, although tied to LDAP infrastructure, use a system of tokens such as SAML/OAuth as the protocol for the username/password. So, by a country mile, the primary reason for joining a container to a domain is to meet these authentication requirements – and the way those requirements are often discovered is by testing.

The second reason for joining the domain is accessing other resources which may be backed by AD-based ACLs. In the workgroup model, this is going to create some kind of Windows Security box, where the user will need to type their domain credentials. We can store and pass those credentials, but if there’s an excess of them, then sometimes the quickest route to avoid these pop-up box “annoyances” is to join the container to the domain – and of course, as soon as one application requires it, the rest of the environment benefits from it as well.

There are other use cases as well – leveraging other management constructs and tools that are AD domain-based, not least AD GPOs. Whilst the local security policy remains intact inside the container, many organizations would prefer to have an OU and policy settings pushed down to the container. Those organizations using endpoint management tools like SCCM to manage their conventional Windows 10 PCs may also wish to use the same tools to control the software stack. Our general line on this is: fill your boots. Droplet isn’t about imposing restrictions on how you manage your stuff, nor are we about creating another “single pane of glass” for you to navigate through – but rather empowering you to leverage your already very serviceable management infrastructure.

Finally, there’s one other reason to join the container to the domain – to take advantage of our multi-session capabilities. This is a scenario where the container runs on either a Remote Desktop Session Host (aka Terminal Services) or in Microsoft WVD where Windows 10 multi-session is in use. It also applies to scenarios where more than one user logs into the same physical PC at the same time using the Microsoft “Switch User” feature – a common requirement in healthcare settings where staff on shift share the same Windows PC and toggle between users using “Switch User”. Of course, in multi-session environments where one container provides applications for many users, we tend to use our 64-bit container type as it offers great memory scalability – and we need to separate the different users by their userID to prevent one user from getting the Droplet Seamless Apps of another. Whilst you could create users inside the container, that makes no sense logically when there is an AD environment in place.

The process of joining the domain is no different from the process you would use on a physical Windows PC. However, it is a manual process, so we provide customers with a series of complimentary scripts that automate it. This includes an “arming” process, so that as a container is deployed, on first start-up it runs through these scripts, which automate changing the computer name, joining the domain itself (creating the computer account in a designated OU), and end by ensuring various AD-based groups allow access to the Droplet Seamless Apps. At the same time, the default logon process is modified by changing an option in the settings.json:

This setting essentially turns off our default login process using our service account and instead challenges the user to supply their domain password.
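For illustration, the change amounts to a single boolean in settings.json – the setting is called “enableDomainLogon”. The minimal fragment below is a sketch; the shipped file contains many other keys, omitted here:

```json
{
  "enableDomainLogon": true
}
```

With this set to true, the container prompts the user for their domain credentials rather than logging on with our service account.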

By default, end-users will see the Windows Security dialog box connecting through to the container, which listens on an internal, non-routable IP address. This uses the discrete, encrypted private network channel created between the host device and the container itself. Some customers like this approach as they see it as an additional layer of security – especially those organizations that have acquired third-party software that augments these MSGINA dialog boxes with other functionality. As you can see, the domain/username is already pre-filled; all we are waiting for is the password.

If customers would rather pass through the credentials from the main Windows 10/2016/2019 logon, that’s very easy to do – a couple of minor changes in a GPO focused on the Windows systems running our software is all that’s needed. Again, we supply the documentation to configure those options if desired.

Other Considerations:

Now that the container is joined to the domain, there are some other considerations and configurations to think about. By default, the Microsoft Firewall inside the container is turned off. We do not need it – because we have our own firewall configuration. However, joining a container to the domain means there’s a good chance that a domain GPO could turn the “Domain Profile” back on. This policy won’t stop our core technologies, but it can stop the Droplet Fileshare service and our UsbService, both of which are network processes. This triggering of the Windows Firewall can be stopped by enabling this disabling policy (did you see what I did there?):

Computer Configuration > Administrative Templates > Network > Network Connections > Windows Defender Firewall > Domain Profile

The policy setting “Windows Defender Firewall: Protect all network connections” can be set to Disabled in a GPO that filters just on the container computer objects in Active Directory:

Another consideration is user profiles – as soon as multiple users log in to the container, each will generate a user profile. In most cases, this isn’t an issue because organizations already have a profile management system in place – not least Microsoft’s own roaming and mandatory profiles. That said, it shouldn’t be overlooked – not least because the creation of a profile isn’t very quick, and the last thing you’d want is multiple containers with multiple different local profiles being created.

So that’s joining the container to the domain in a nutshell – mainly done for applications that require AD authentication, and for our “multi-session” containers used in RDSH and Windows WVD environments.

Category: Droplet Computing | Comments Off on Joining Droplet containers to Active Directory Domains
April 7

Droplet containers in a non-persistent mode

One important feature of Droplet containers is our non-persistent mode, enabled via a setting in our configuration file. By default, Droplet containers are stateful, and changes are held in a single image file with a .droplet extension.

Non-persistent mode essentially makes the container stateless such that any changes that occur after the container is started are discarded at shutdown or reboot of the host device. It’s especially useful for kiosks – where a dedicated machine is the access point for a single application that is in a public arena. If something goes wrong with the application or device, staff can merely reboot the system – and it will go back to its “known good” state.

That principle also extends to the shared computer model – where a single PC is used by multiple staff over the day or week. Classic examples would be shift workers in a healthcare context, or call-center staff. They may be sharing the same physical PC and sharing the same container on that PC, and it’s unlikely you’d want changes to accrue in the application between shifts. So a combination of, say, mandatory profiles and Droplet containers in non-persistent mode essentially makes the environment read-only, and super easy to reset. That works in other contexts too, such as lab environments in education, where students jump on any terminal to carry out their work. This is especially true in environments where students deliberately delight in making changes – you know the type!

There is also a security angle to non-persistent mode. A significant number of our customers use Droplet to run legacy applications that could be vulnerable and subject to attack. Alongside our existing security, which creates a secure bubble around these applications, non-persistent mode makes the container itself easy to reset to a clean state. Our customers taking advantage of this approach often run Droplet as a Windows service – and schedule a rolling stop/start of the core Droplet service.

Finally, our old friend “bloatware”. As you will have observed in virtualization environments, Windows still creates temporary files, which even if deleted still tattoo the file system with dirty/stale blocks. Generally, the more modern the Windows kernel, the faster the image can grow. By first using persistent mode to install and configure the software, and then deploying it with non-persistent mode enabled, you can guarantee the container image file stays the same size.

Enabling non-persistent mode is as simple as editing the settings.json – which lives in the user profile in the per-user model, or in C:\ProgramData\Droplet when running as a Windows service – and turning the setting on with a true value. In my case, I did this on my Apple Mac using nano as the text editor.

I use non-persistent mode on my Mac mainly because it’s usually from there that I show modern Windows apps running natively, and I therefore tend to use a modern container image type, so I’m keen to make sure it doesn’t grow unexpectedly.

So, caveats…

There are two main caveats. First, the obvious one: you should not store persistent data inside the container. This is something we recommend as a matter of course for the majority of customers. By using mapped network drives or our drive redirection feature, you can make sure that data is stored outside the context of the container. I would also recommend imposing either a local or GPO policy to hide access to the C: drive inside the container, such that only remote storage is accessible. Another common mistake is making changes, such as installing software into the container, and being horrified that after a restart it has disappeared. If you are doing this quickly whilst distracted by other tasks, you begin to wonder if you’re losing your marbles. Non-persistent mode ALWAYS discards changes, and never asks – it just does it. The setting isn’t surfaced in the UI, so there are no big red traffic lights warning you of this fact. Have my developers and I been caught out by this? To quote Ludwig Wittgenstein’s Tractatus Logico-Philosophicus:

“That which we cannot speak of, we must pass over in silence…”


The other caveat concerns the subject of my next blog post – “enableDomainLogon”. We do support joining the container to an Active Directory domain, primarily for containerized applications that mandate full Windows authentication. As you know, any domain-joined device wants to update its machine trust password every 30 days. Effectively, non-persistent mode would discard those password changes, and the container could end up “orphaned” from the domain. This is akin to having a laptop off the domain for several weeks, then bringing it back into the corporate network and discovering its trust relationship is broken.
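If a container does end up orphaned, the standard Windows remedies apply. A sketch, run from an elevated PowerShell inside the container (the credential prompt is for a domain account permitted to reset computer accounts):

```powershell
# Check whether the secure channel to the domain is still healthy
if (-not (Test-ComputerSecureChannel)) {
    # Repair the broken trust relationship without a full domain re-join
    Test-ComputerSecureChannel -Repair -Credential (Get-Credential)
}
```

Of course, if the repair happens while non-persistent mode is still enabled, it too will be discarded at the next restart – so fix the trust in persistent mode first.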

So non-persistent mode is a great way to keep the container fresh and ensure no unauthorized changes are made to it – just make sure you don’t lose your marbles…



Category: Droplet Computing | Comments Off on Droplet containers in a non-persistent mode
April 6

Enabling Windows Performance Extensions for Droplet Container

Here at Droplet, we leverage several different performance extensions for our container technologies. These are very much tied to their respective platforms: on Microsoft Windows we leverage WHPX, whereas on Apple macOS and Chromebook we leverage HVF and KVM respectively. All of these different hardware accelerators essentially do the same thing – they give us access to the underlying Intel VT or AMD-V extensions, which improve performance. Historically, these attributes on the chip were implemented to improve performance for virtualization – but they also drive performance improvements for emulators like Android Studio and container technology like Droplet Computing. Microsoft does have a couple of different names for WHPX – whose full name is “Windows Hypervisor Platform”. This is a collection of APIs/SDKs that vendors such as us can use to get a performance boost. It’s not to be confused with Hyper-V, which provides a full virtualization management layer and engine. All that WHPX provides is access to those Intel VT/AMD-V attributes on the CPU chipset.

HVF and KVM are built-in and enabled by default in the world of Apple macOS and Chromebook, whereas on Microsoft Windows, WHPX needs to be enabled by the administrator. Of course, Microsoft provides a range of different ways of doing that, using either the UI, the command prompt, or PowerShell. Enabling WHPX from the UI involves navigating to Control Panel > Programs and Features > Turn Windows features on and off, and placing a tick in the box next to “Windows Hypervisor Platform”.

Please note that although the UI doesn’t state it, we would recommend a reboot once applied.

So that’s great for demo purposes and simple trials – but no administrator worth their salt wants to be running around their environment doing this through the UI. So there are ways through the command prompt or PowerShell by which this feature can be enabled. Usually, I include these steps at the end of my scripting process – because I can finish off the script with a reboot, leaving the Windows PC or virtual desktop in a state ready to be used for Droplet containerized apps.

From Command-Prompt:

You can use the dism command to enable the feature. This is a simple “one-liner” that will appeal to those who prefer easy DOS-based batch (.bat / .cmd) scripts. In case you don’t know, DISM stands for the “Deployment Image Servicing and Management” tool:

dism /online /enable-feature /featurename:HypervisorPlatform

Notice how in the command-line tool the name is different again – not WHPX or “Windows Hypervisor Platform”, but merely “HypervisorPlatform”. DISM is clever enough to know if WHPX or any feature has already been enabled, and so won’t try anything illogical like enabling the same thing twice.

From PowerShell:

PowerShell is my main CLI of choice, even if I might use a simple batch file to stitch my PS scripts together as a single setup.cmd. What PowerShell brings to the table is the ability to include IF-statement logic in the script:


$featureName = 'HypervisorPlatform'

$feature = Get-WmiObject -Class Win32_OptionalFeature | Where-Object {$_.Name -eq $featureName}

If ($feature.InstallState -eq 1) {
    Write-Host "$featureName is installed"
}
Else {
    Write-Host "$featureName is not installed"
    Enable-WindowsOptionalFeature -Online -FeatureName $featureName
    shutdown /t 600 /r /c "Restarting Windows 10 within the next 10mins to enable Microsoft WHPX"
}

The nice thing about a script like the one above is that I can reuse it time and time again by simply changing the string for $featureName – and also use Get-WmiObject to verify whether WHPX has already been enabled.
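That reuse idea can be taken a step further by wrapping the logic in a function. A sketch – here using the DISM PowerShell module’s Get-WindowsOptionalFeature as an alternative to the WMI query; the function name is my own invention:

```powershell
function Enable-FeatureIfMissing {
    param([string]$FeatureName)

    # Query the current state of the optional feature
    $feature = Get-WindowsOptionalFeature -Online -FeatureName $FeatureName

    if ($feature.State -ne 'Enabled') {
        # Enable it, suppressing the automatic restart so the
        # wider script can schedule its own reboot at the end
        Enable-WindowsOptionalFeature -Online -FeatureName $FeatureName -NoRestart
    }
}

Enable-FeatureIfMissing -FeatureName 'HypervisorPlatform'
```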

Once WHPX has been enabled, hardware acceleration can be enabled in our software, and you can verify its presence in the UI:

Now for some caveats….

When we talk about “hardware acceleration” or “performance extensions” there are some things to bear in mind.

Firstly, and most importantly, how significant are those performance differences? Well, they can be measured in seconds – with hardware extensions enabled, the login process to the container can be as little as 3-4 seconds, with subsequent applications opening in less than a second. Without hardware acceleration, the login process is more like 8-9 seconds, with subsequent applications still loading in less than a second. So we are only talking about a few seconds’ worth of difference, which is unlikely if ever to result in any “productivity gains” – it’s more about delivering the best user experience.

Secondly, where it pays dividends are in our older 32-bit containers, rather than our 64-bit containers. We find that hardware acceleration is less impactful on our 64-bit containers mainly because a 64-bit container on a modern 64-bit OS running on 64-bit hardware has all its performance ducks in a row. For this reason, we tend to use our 32-bit container in a traditional Windows PC model where one user runs one container. Whereas we tend to use our 64-bit containers in Windows WVD and Citrix Virtual Apps where one container services the needs of many users – and where the additional scalability of 64-bit works – because of the larger addressable memory and CPU. Although we can run with hardware acceleration in environments like Windows WVD, often the scalability and resource efficiencies win out over raw performance, especially when you’re talking about a couple of second differences.

Thirdly, and finally – Windows 10 versions. We support WHPX on Windows 10 1909 or higher. There is a known issue with WHPX on 1809 and earlier, which has known conflicts with Microsoft’s Device Guard and Windows Virtualization-Based Security (VBS).

Category: Droplet Computing | Comments Off on Enabling Windows Performance Extensions for Droplet Container
March 31

Droplet External Device Support – USB (Part 2)

Previously, I explained our support for external devices in the shape of COM/Serial/LPT devices – which is a built-in component of our technology. This time I want to focus on our support for USB devices. Our USB support comes in a special USB edition and is secured so that only the administrator can connect and configure USB devices for the Droplet container. The configuration is “persistent” in the sense that, once configured, the USB device will always connect so long as it is plugged in and powered on.

Once the container is unlocked, our UI displays a USB icon together with an interface to select connected devices like so:

You’ll notice in this screengrab that I’ve not selected my USB-to-Serial dongle. There’s no need, because of our built-in COM/Serial/LPT redirection. What is selected is the HP Color LaserJet printer – again, if I just needed the print functionality, it would be easier/simpler to use our built-in printer redirection feature, or indeed the IP printing functionality of the printer to print across the network – especially as IP-based printing supports scanning as well.

Scanning is possibly the most common use case for USB support in settings like healthcare – where often expensive medical scanners still operate, but the software stack around them is so dated it’s based on a Windows XP or Windows 7 model.

Another use case we have seen is in engineering environments (aviation, process management, automotive) where USB devices control expensive pieces of hardware that need servicing, maintenance, and support – often to meet industry standards and auditing requirements.

Once the device is enabled, it becomes a permanent part of the configuration until it’s removed, and the configuration is stored in the settings.json file.

Once we start up the container, we are in the hands of Windows “Plug and Pray” – in other words, will the Windows binaries inside the container have a driver for the connected device? The older the device, the more likely you will need to install a driver.

Once the container has started and is unlocked, the administrator can investigate either Device Manager or, in this case, “Devices and Printers”. Here, the printer was detected by the container, but only partially – there appears to be an issue with the scanner component:

This meant hunting down the drivers for this device (which I already had for this printer) – a 200MB+ bundle that I downloaded into the container before running the autorun.exe. This installer requires the USB device to be unplugged and re-plugged during the install, which our USB stack handles with aplomb. When the setup program asked me to reconnect the device, the installation carried on through.

At the end of the installation, we can see the yellow exclamation marks indicating there was a problem have gone… whoop!

Although there are many ways within the Windows UI to initiate a scan – I wanted to find a more direct method so it could be “published” as a shortcut to the user’s desktop as a Droplet shortcut. I found the main scanning application resided in “C:\Program Files\HP\HP Color LaserJet Pro MFP M177\bin\hpscan.exe” so I made an Application Tile to that:

Category: Droplet Computing | Comments Off on Droplet External Device Support – USB (Part 2)