For some time, we have supported external devices, and this support comes in a couple of flavors tied to different use cases. We support COM/Serial/LPT redirection, USB redirection, and PCI pass-through*.
The most common use of our native COM/Serial/LPT port redirection seems to be in engineering environments connected with aviation, automotive, and process management. Frequently these dedicated systems need an engineer to run periodic diagnostic or maintenance tasks, where connectivity is provided by a COM/serial or LPT port on the device itself. Of course, it’s been some years since many laptops even had COM ports, so in most cases a USB-to-Serial dongle is needed to provide the physical connectivity. For lab environments using fixed desktop or tower PCs, there is still the option to purchase PCI cards that fit the riser and provide either single or dual-port serial interfaces. In some of our higher-end engineering customers, the device that needs to be tested or serviced has dual serial/COM ports. These COM/Serial-to-USB devices are largely manufactured in the Far East and are inexpensive; however, for that reason, their quality varies significantly. We have had a very good experience with the StarTech range of devices, and their backup documentation is reassuring, as they appear to be aimed at the engineering sector.
The good news is Droplet 2.x natively supports these COM/Serial/LPT devices. The functionality is built in and merely needs turning on from the settings.json file, which can be found in C:\Users\%username%\AppData\Roaming\Droplet in the per-user mode, and in C:\ProgramData\Droplet in the service mode. All you need to do is modify settings.json in Notepad, and then restart the application or service for it to take effect:
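As a sketch – the real file contains other settings, which should be left untouched, and the key casing here follows the RedirectComPorts naming our settings.json uses – the change looks something like:

```json
{
  "RedirectComPorts": true
}
```

After saving, restart the droplet.exe application (per-user mode) or the Droplet service (service mode) for the change to take effect.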
The Serial/COM/LPT port ID is usually represented by a number (COM1, COM2, and so on); whatever that port number is on the Windows PC, it will be the same number in the container. Occasionally, the COM port number on the Windows PC will need to be adjusted if the application is hard-coded to use only COM1 or COM2.
This can be tricky, especially if the remnants of “phantom devices” are already occupying those slots. I’ve found that showing “hidden devices” in Device Manager is helpful when troubleshooting this.
When I demo this in my lab to customers, I use an old APC PDU which has a serial port on the rear, with PuTTY inside the container using its serial/COM port option to act as the terminal emulator:
Note: Just look at that – 1997. God, how young were you in 1997? Were you even born? Shudder…
So, the moral of this story is – you don’t need the USB edition of our software to connect a USB-to-Serial/COM/LPT dongle to the container – that’s built into Droplet. The only thing you might have to worry about is the driver install on the Windows 10 PC and adjusting the device ID to work with your software.
In the next episode, I will look at our USB support and how that works…
* PCI pass-through is currently only supported with the Linux edition of our software and requires a modest amount of free support to get it set up and running
Category: Droplet Computing |
Comments Off on Droplet External Device Support – Serial/COM/LPT Ports (Part 1)
As a level set, you should know we support several hardware acceleration types for the container. On Windows 10 (1909 or later) we use the Microsoft WHPX API, whereas on the Apple Mac we use Apple’s native HVF acceleration. On Linux and Chromebook, we support the KVM extensions. One of our more recent advancements is the ability to enable our hardware acceleration within the context of on-premises virtual desktops. Up until this point, we had to turn off our hardware acceleration feature as it was incompatible with the combination of virtualization and Windows 10 running inside the guest OS. Using WHPX is compatible with security features dependent on Intel VT, such as Device Guard and Microsoft Virtualization-Based Security, so long as Windows 10 is version 1909 or higher.
Interestingly, we discovered in our labs that our legacy 64-bit container doesn’t benefit a huge amount from hardware acceleration – I think this is to do with having a 64-bit container on a 64-bit OS, together with a 64-bit hypervisor. This is especially true of our more popular container type, which our customers use to run legacy applications.
For our container types used to deliver modern applications, our internal tests show that hardware acceleration is still advantageous – I think this is to do with the kernel inside the container. It’s a modern kernel and therefore more resource-hungry than the legacy container, so hardware acceleration makes all the difference.
So, hardware acceleration inside a VM – what are the prereqs or “ducks in a row”? Firstly, the version of vSphere is unimportant – this works on 6.x and 7.x without an issue. What really matters is the “version” of Windows 10. There’s a known Microsoft bug with version 1809 – basically, a combo of Device Guard/Microsoft Virtualization-Based Security and WHPX creates a conflict; these internal Windows components are incompatible with each other. If you’re on version 1909 or higher, this Microsoft issue has been resolved. If you’re unsure of your Windows 10 version, use the “System” option on the right-click of the Start Menu:
Enabling hardware acceleration on the vSphere VM means changing a setting on the properties of the VM.
Once Microsoft WHPX has been enabled and our software installed, the container picks up on the hardware acceleration:
Category: Droplet Computing |
Comments Off on Droplet Containers with Hardware Acceleration in VMware vSphere Virtual Desktops
A couple of weeks ago I wrote about how we can now run a Droplet container as a Windows Service. At that time, I just wanted to explain the use cases around that, and didn’t peel back the lid to explain how it’s done and what the setup process is like. It’s actually very simple.
Just to refresh the little grey cells – we have two modes of operation: per-user and per-computer. In the per-user model, the end-user starts the container and waits a short while for it to load (30-40s on physical), and then they get their “Droplet Seamless App”. The start of the container can be invoked by loading droplet.exe and hitting the big orange “Start Container” button (no prizes for guessing what that does).
This client can also be started when clicking a droplet shortcut backed by our “launch” syntax for example:
This model works well for containerized applications typically used only occasionally – once a week, month, or quarter, or on a completely ad-hoc, as-needed basis. The per-user mode is also handy for administrators, as it gives the admin an easy way to build, monitor, and switch containers while installing software and checking everything works as expected.
With the per-computer model, on the other hand, if the containerized apps are being used all day, every day, it makes sense to automatically start the container when the Windows PC or Windows 2016/2019 server first starts. This means there is no need to wait for the container to load, and containerized applications are available immediately after login.
To set up the service, I need a copy of the settings.json. This per-user file can be found in the current user’s profile at:
And copied to:
Next, we can open a command prompt with administrator rights, and navigate to this location:
Once the service is started it appears in the Windows Service MMC like so:
For the client-side configuration, I use a shortcut located in shell:common startup, using our flags to make sure the client end is always loaded but minimised to the System Tray – this means any changes to the entitlement are always picked up at logon when the user’s “apps.json” is read. We can use the syntax:
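As a sketch, the shortcut target would look something like the line below – the install path is an assumption for illustration, while the launch and minimised flags are the ones our client uses:

```
"C:\Program Files\Droplet\droplet.exe" launch --minimised
```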
With everything in place, the client loads silently in the System Tray and connects to the Droplet container service.
As you can see, with this model the client-side app merely becomes a “broker” to the applications running inside the container. We believe that as time goes by the per-computer model will become the de-facto way most of our customers run Droplet software. Increasingly, we will become invisible to the user, who shouldn’t have to worry or concern themselves with how applications are being delivered – all they do is double-click a shortcut, without any real awareness that their software has been dropletised (and that’s with an S, not a Z…).
One cool new aspect of Droplet 2.0 is our redirection features – by default, we take a security posture that all the doors are closed until the System Administrator opens them. As a result, all the redirection features are disabled – in a state of FALSE – in the configuration until they are enabled. The redirection features are not exposed in the UI of the Droplet software and are held in the settings.json.
This is a plain text file in the .json format and is stored in the User Profile in the per-user mode of our software:
C:\Users\%username%\AppData\Roaming\Droplet – easily accessed using Start, Run, and shell:AppData
For the Windows system service mode of our software the same file is held in:
As you can see, I’ve highlighted the redirection settings of the settings.json in the file – and hopefully, they make sense without too much explanation.
RedirectPrinters looks at the local printers of the Windows 10 or Apple Mac system and makes them available inside the container. By default, we use a compatibility driver, which means no printer driver needs to be installed inside the container. This gives zero-configuration printing capability inside the container.
This compatibility driver can be disabled using a Local Security Policy setting (accessed using gpedit) – and this enables customers to install a native printer driver into the container. That can be helpful for specialist multi-function printers where a compatibility driver simply doesn’t offer all the functionality the user expects.
For USB-enabled scanners, we would recommend our USB edition to provide a native scanning application from the hardware vendor. Incidentally, you can see I’m using the USB edition in my settings.json to capture my HP LaserJet printer so I can demo scanning from within the container. More about our USB support in future posts.
RedirectClipboard. Enables bi-directional copy and paste functionality between apps running locally in Microsoft Windows, Apple macOS, Chromebook, or Linux and the containerized app. You might think it a bit weird that this is turned off by default – but some customers are concerned about the ease with which data from a legacy application can move from the containerized app to the host device – that anxiety seems most common in remote desktop environments like Microsoft RDSH, Citrix Virtual Apps, VMware Horizon View, and Amazon WorkSpaces/AppStream. For me, it’s the first redirection feature I turn on – life without a clipboard is like having one arm tied behind your back, or like losing your phone and realizing how it’s now actually an extension of your body.
RedirectDrives. As with printer redirection, this picks up the host device drive letters – both local and network mapped drives and exposes them into the container. This makes it very easy for end-users to retrieve data and save files. When used together with RedirectPrinters it means pretty much the ‘core’ aspects of the user environment are handled programmatically – without the need for policies, login scripts, or having to join containers to the domain (which incidentally we do support if needed…).
You should also know we have our own File Synchronization service – which allows for a complete out-of-band, network-less method of transferring data in and out of the container – for customers who, for security reasons, might block SMB/CIFS traffic or find RedirectDrives exposes too much functionality. Remember that the local policy always wins – so you can easily block or hide specific drives using the gpedit tool.
Finally, whilst RedirectDrives gives excellent performance, if you are looking to move large amounts of data out of the container I would still recommend standard mapped network drives driven by the SMB/CIFS protocol – pound for pound it offers better IOPS. Configuring that is relatively easy, but don’t forget we now support joining the container to an Active Directory domain, which means you can leverage existing GPOs/scripts to manage the setup of the environment, as well as using your profile management system if you choose to go down that route.
RedirectComPorts. Yes, I know – COM/LPT ports, who has those these days, right? Well, you’d be surprised how many service and support centers in engineering environments (automotive, aviation, and healthcare) still use COM/serial ports. Some of that stems from the nature of the business – after all, a train, plane, or expensive piece of medical equipment can have a 10/20/30/40-year life span. In the main, most customers plug in an inexpensive USB-to-Serial dongle (PCI cards do exist for desktop PCs as well). We automatically pick up on these interfaces and redirect them to the container – so if the COM port is COM1 on the physical device, it will be COM1 in the container.
RedirectPNPDevices. I wouldn’t say this setting does nothing, but its usefulness has proved to be limited. It was originally developed for a customer problem with legacy PCI devices, for which in the end we found a better way of handling things; it has stayed in the settings.json just in case we face a similar issue in the future. The problem was a customer with an expensive piece of scanning equipment managed by an ancient Windows PC. The scanner was still good, and it wasn’t economical to replace it for want of a Windows PC. In the end, we used Linux on the physical machine and used PCI redirection to run the legacy Windows software inside the container. That was more robust and stable, and drove better IOPS.
RedirectAudio. This allows for bidirectional audio – both playback and record – and is in use with some of our healthcare customers, where health professionals give audio descriptions of patient scans which are later typed up as notes for patient records. So, you won’t be using this to watch Netflix or listen to Spotify, but the quality is more than good enough for the applications where we see it used.
As you can see, the settings.json hosts a whole series of features and options not exposed in our UI – but by a country mile, it’s the redirection settings that impact the user experience the most.
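Pulling those six settings together, the redirection block of a default settings.json would look something like the sketch below – everything FALSE out of the box, per our closed-doors security posture (the other keys in the real file are omitted here):

```json
{
  "RedirectPrinters": false,
  "RedirectClipboard": false,
  "RedirectDrives": false,
  "RedirectComPorts": false,
  "RedirectPNPDevices": false,
  "RedirectAudio": false
}
```

Flip any of these to true, save the file, and restart the application or service.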
Can’t be bothered to read – then watch the video instead… 🙂
One of the most common questions we get asked about our software is how you deploy it “en masse”. When our technology was developed, we knew we didn’t want to build a “Yet-Another-Single-Pain-of-Glass” management frontend to go with the other 100+ management front-ends administrators contend with on a daily or weekly basis. So, our response to the question is usually – how would you currently deploy a piece of software into your environment – and take it from there. Usually, once we’re through the trial and UAT phase, I set up a workshop where I work as a facilitator to help the organization work out its preferred approach.
The workshop starts by explaining that everything we do is just a file. What we usually recommend is building the Droplet environment on a ‘reference’ machine and validating everything works, before copying the files needed to a central location ready for deployment.
There are small metadata files that reside in the user’s profile – these are generic and can be pushed to the profile of the destination user(s).
C:\Users\%username%\AppData\Roaming which can be easily accessed using shell:appdata from the run dialog box.
The two critical files are apps.json and settings.json. Apps.json basically controls the name of the containerized app, its description, and its icon (imported using the .ico format, but converted into a base64 string in the .json file). You can see apps.json as the publishing end of Droplet, as it controls what .exe the user can load from the container – and it is protected by the Droplet Administrator password.
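To make that concrete, here is a hypothetical apps.json entry – the field names and nesting are my own illustration of what the file controls (app name, description, base64-encoded icon, and the .exe the user may load), not the exact shipping schema:

```json
{
  "apps": [
    {
      "name": "Legacy Accounts App",
      "description": "Finance package running in the Droplet container",
      "icon": "AAABAAEAEBAAAAEAIABoBA...",
      "exe": "C:\\Program Files\\Accounts\\accounts.exe"
    }
  ]
}
```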
On the left, we have the contents of the apps.json and on the right UI in the client…
The settings.json holds the core configuration of the application itself – this more or less equates to the Settings tab held behind the gear icon on the client. Notice how we don’t expose ALL the settings in the UI. That’s to inhibit casual changes – such as turning on a feature – without understanding the implications or the dependencies.
The other metadata files are important but very easy to explain. The “credentials” file stores the Droplet Admin’s password in an obfuscated format. That’s fancy talk to say the password isn’t stored in plain text but is scrambled into a non-human-readable format. The droplet.lic is the per-user license file. The eula_accept is a logical file – its absence causes our EULA dialog box to appear; its presence suppresses the dialog box.
Clearly, the jewel in the crown is the .droplet image file. One question is where to store it? The answer is really up to the customer, and also what the model is in terms of delivering the user desktop. For users sat at a regular Windows PC or laptop, it makes sense to leverage the SSD drive on the endpoint, which is also a requirement for our promise of true “offline” access to the apps. I tend to dump mine in C:\DropletImages.
Most customers zip up the .droplet file and then extract it on the endpoint. Of course, compression algorithms vary, and the size of the .droplet file is very much dependent on the disk payload of your apps – but we have seen 50% compression rates on our image format. So, the zip/unzip process can be beneficial in terms of reducing any network hit during a deployment. A smaller subset of customers has taken to storing the .droplet image on the end-user’s OneDrive or GoogleDrive, and thus leveraging the background synchronization threads these modern cloud storage systems provide.
If, on the other hand, the desktop is delivered remotely, such as in a VDI, the .droplet file could be stored on the C: or D: drive of the virtual desktop itself. Much depends on whether the desktop is personal and persistent, or composable/disposable and destroyed at log-off. If you don’t care about the persistence of the container, it could remain on the C:/D: drive – but if you do care about persistence, some customers store the container with the user’s home directory. Thus, it doesn’t matter which desktop the user connects to, and the container “follows” the user around the network.
So, with everything just being a file, the question is how to handle those files. Some customers opt to handle the metadata files using AD GPOs, as these can easily be associated with an OU/group membership:
Others prefer to script the entire thing using PowerShell and a combination of msiexec, icacls, xcopy, and 7zip – perhaps using ‘if exists’ logic in the script to check for the presence of files, and aborting the script early if the application and files are already there. Other customers will leverage something like Microsoft System Center Configuration Manager, now called Microsoft Endpoint Configuration Manager.
In this case, we created a Device Collection to target the endpoint, and the software is installed in a modular way. The core “Application” installs the software, and then a series of “Programs” call scripts that each have their own function. So, one script downloads and unzips the .droplet image, another program copies down the smaller metadata files, and other programs make sure droplet.exe is started at login – together with stopping any pop-ups. We also have customers deploying our software with Microsoft Intune and MobileIron. I’ve no experience of these platforms – but as Intune grows, it’s only a matter of time before I’ll have to write a guide to that as well. So many management systems, and so little time… 🙂
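The ‘if exists’ guard that both the PowerShell scripts and the SCCM program scripts rely on can be sketched in a few lines – this is generic POSIX shell with illustrative paths, not our shipping deployment script:

```shell
#!/bin/sh
# Idempotent deployment step: copy the .droplet image into place only if it
# is not already there, so re-running the script is harmless.
deploy_droplet() {
  src="$1"
  dest_dir="$2"
  mkdir -p "$dest_dir"
  if [ -f "$dest_dir/$(basename "$src")" ]; then
    echo "already deployed - skipping"
    return 0
  fi
  cp "$src" "$dest_dir/"
  echo "deployed $(basename "$src")"
}

# Demo with temporary paths standing in for the endpoint's C:\DropletImages
tmp=$(mktemp -d)
printf 'fake image payload' > "$tmp/legacy.droplet"
deploy_droplet "$tmp/legacy.droplet" "$tmp/DropletImages"   # copies the file
deploy_droplet "$tmp/legacy.droplet" "$tmp/DropletImages"   # skips - already there
```

In production the same check-then-copy logic would be PowerShell (a Test-Path before a Copy-Item) against the real image and metadata paths.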
So the main take-away is – everything we do is a file. The same files can be used seamlessly across Windows 10 (physical or virtual), Windows 2016/2019 RDSH, Apple macOS, Chromebook, or Linux. Droplet Computing doesn’t introduce another single pain in the ass of management – after all, you have plenty of those already, right?
Category: Droplet Computing |
Comments Off on Deploying Droplet – Everything is just a file
Two videos – one using a Microsoft RDSH host running on VMware vSphere… and the other demoing the “Switch User” feature for kiosks in environments like health care.
Note: In the video, I’m running our software in vSphere VM because it made it super easy to show the “Switch User” feature – it’s also a great illustration of how Droplet containers run on physical or virtual environments in equal measure.
Okay, I’ll admit that was a complete tease of a title, designed purely to grab your attention and suck you in. Please forgive me. It should really be Droplet-as-a-Windows-Service (DaaWS).
One new thing about the 2.0 release is two “modes” of operation. You can run the Droplet container as our standard per-user application or as a Windows computer service. In the per-user mode, the user starts the Droplet container manually, or else the administrator sets the software to run using the flags droplet.exe launch –minimised.
(Yes, that’s minimised with s, not a z! Just like virtualisation is spelt with is S not a Z :p)
This per-user mode of operation works best for containerised (with an S!) apps that the user only needs occasionally – say once a day, every other day, or once a week/month – and where you want to maintain the “each user gets their personal container” model. From a resource perspective, droplet.exe only consumes memory and CPU allocations when needed, and when the user is finished, they can shut down both the application and the container. This model is commonly used by customers who want to deliver a containerized (okay, with a zed this time, just to keep the Yanks happy :p) app on an as-needed basis for a subset of users. The start-up time of the container is very good (around the 30-40s mark), and so for many users that’s acceptable.
Alternatively, our software can be configured to run as a Windows computer service – which means the container is started when the Windows PC starts and is always running silently in the background. In this case, one Droplet container file services the needs of many end-users. It means the container is ready and waiting to serve up containerized apps as soon as the user logs in to the Windows PC. In this model, our “client” application becomes just that – an “agent” that sits in the System Tray waiting for the user to run the Droplet Seamless App.
This was the first model I developed for Droplet Seamless Apps back when it wasn’t a product, but an in-house skunkworks. So, it’s been around in our labs for more than a year as we tested and validated the approach with our core customers.
So, there are a couple of core use cases.
Firstly, for customers who want to run their containerized app most of the time. In this case, whether the app is legacy or modern, it’s a key line-of-business app that is used by the vast majority of users, the vast majority of the time.
Secondly, we have many customers who are using the “Switch User” feature on Windows 10 PCs. A good example of that is health care (our own heroic NHS, for example), where in a given shift 4-6 different members of staff use the same physical machine and want to quickly toggle between different users in the same kind of kiosk-style mode. Our container in this model allows multiple users to access the same Droplet container on the same Windows PC.
Finally, the service mode is ideal for Microsoft Remote Desktop Session Hosts (RDSH) on its own, or augmented by VMware Application Pools or Citrix Virtual Apps. This means one container can service many user contexts on the server. For this model, we leverage our 64-bit container, which allows 1-4 CPU cores to be allocated. It also supports allocating as little as 2GB of memory up to a maximum of 192GB. This means the System Administrator can allocate a subset of resources to the container based on the concurrency it needs to support. This is a much more efficient model for running the container – more efficient from a storage, CPU, and memory allocation perspective – than multiple users running multiple containers in the context of their remote user sessions.
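As a purely hypothetical sketch of what that allocation could look like in a service-mode settings.json – the key names here are my own invention for illustration, and only the documented bounds (1-4 cores, 2GB-192GB of memory) come from the paragraph above:

```json
{
  "cpuCores": 4,
  "memoryMB": 16384
}
```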
In my last blog post, I talked about our 2.0 release and support for Droplet Seamless Apps, and the story is similar on Chromebook/Linux. We now have a brand-new way of presenting applications to the end-user that we call “Droplet Chelle”. Now, I will fess up that the naming is a total vanity project on my part (Michelle/Chelle, geddit?), as well as a pun on the commonly used term “shell” to describe any operating environment that provides an interface to the user.
All this runs under Google’s ChromeOS Project Crostini. In case you don’t know, that’s Google’s code name for allowing Linux apps to run natively on Chromebooks, and thus extending the usability and usefulness of ChromeOS – known as Linux (Beta) on the Chromebook. So essentially, we are running a modified and tuned version of our Linux-based product under ChromeOS. Of course, you do need a good Chromebook to run this stuff – an Intel Celeron with 2-4GB of RAM won’t cut the mustard. Fortunately, there is a whole raft of modern Chromebooks out there that easily rival a Windows PC or Apple Mac for durability and performance. My personal favorite is the ASUS C436 Flip 14in i5 8GB 256GB 2-in-1 Chromebook, which was doing the rounds last year at the £799 price mark.
So below I have the latest version of Microsoft Office 365 running natively on the Chromebook. I’m actually running Neverware CloudReady (a commercial and community-based open-source version of ChromeOS), which was recently acquired by Google. CloudReady is a great way to repurpose Windows PCs and turn them into ChromeOS devices.
In the next example, I switched containers to show legacy applications. The Droplet Chelle allows direct access to the Google Drive and the local storage of the container, which allows users to store files in the cloud or locally without needing to set up or install Google Drive in the container itself.