November 17

Debut Album: Songs from a cold conservatory

I'm currently working on my debut album, recording at Cobbler Cottage Studios in Crich, Derbyshire. The working title is "Songs from a Cold Conservatory" – a nod to my old practice space, a UPVC lean-to conservatory which is freezing in winter and boiling hot in summer. For logistical reasons I recently moved out of the conservatory into my living room – so my joke is that the follow-up album should be called "Songs from a Cosy Living Room". That might actually work – that's an album title, as Lou Reed said. Anyway, many thanks to both David and Mel at Cobbler Cottage Studios.

The songs on the album are in the confessional, heartfelt singer/songwriter mode. I'd be hesitant to call myself a "folk musician" – although I appreciate the sentiment behind Louis Armstrong's now-legendary quote. There are 9 tracks on the album, which are all just me and a guitar. Rhythm, bass and cello join me for the last track. The last song is optimistic from a "you can get through this" perspective – and might mark my next direction both musically and lyrically: a more collaborative approach with fellow musicians, and songs that give people encouragement to keep on keeping on [dig the Curtis Mayfield reference there, I hope!]

We have one last song to record in January 2023. The remaining tracks are, as I type, being mixed and mastered – so with luck, the album should be out in the first quarter of 2023. It will ship as a CD (how quaint) and as a digital download on Bandcamp. It won't be on Amazon Music, Apple Music or Spotify – or any of those sites that rip off musicians in a way that would make an A&R person at an established recording company blush. There will be a limited on-demand vinyl pressing for those people who prefer that medium. On-demand means the records are pressed and produced as single entities. I think I will offer these with the option of them being signed, perhaps with some kind of bonus element – purchasing the vinyl will come with a download code for a digital copy included in the price.

These are very expensive and I won't make a single cent on the vinyl copies – but I quite like the idea of having a vinyl copy.

Category: Michelle's Music | Comments Off on Debut Album: Songs from a cold conservatory
November 15

Mastodon and the Miocene Era of the Internet

It's no irony to me that the up-and-coming internet platform is named after a long-extinct animal which first roamed the earth in the Miocene, some 5 million years ago. For many, it's a refreshing reminder of the idealism of the early days of mass internet use ushered in during the 90s. Oh, how sweetly naive and innocent we all were back then – with our talk of "information superhighways". Little did we know what kind of virtual world we would construct over the following 30 years. Like the mastodon of yore, those heady days seem a lifetime ago, but the reality is those changes happened within our lifetimes. So in this post, I want to reflect on where we came from, where we have been and where we might be going.

For some, the woes of Twitter started a couple of weeks ago with the acquisition by Musk, and a series of missteps that would make even Liz Truss blush. The reality is the rot had set in years ago – and had always been there, hiding in plain sight. There's been a lot of analysis of Twitter and its impact on our culture – which I personally find a bit troublesome. Twitter exerts influence far in excess of its subscriber base. Compared to other platforms like Facebook, it's a relative minnow – and those on it who have far more followers than they follow exert an influence on the public discourse which reveals the vast inequalities of our societies. Twitter was meant to be the great leveller – giving ordinary folks access to persons of note, and making persons of note more accessible. The reality is Twitter merely reflects the power differential at play in our society and culture. Why should a site of some 200m users have the influence it has, compared to the 8 billion people this earth currently houses? The vast majority of my friends and family have no interest in Twitter, never have and never will.

Sometimes I've been a bit embarrassed by the world of "Social Media". It's an environment my generation helped in part to build and sanction – and that we let loose on our kids. We assumed that people would happily give up their data in exchange for free stuff – and we naively and optimistically assumed it wouldn't be used to groom people for terrorist organisations or promote suicide as a viable option for teenage girls. And to this day we allow unregulated algorithms to be bought and sold by political parties – so they can push a hateful agenda to an unsuspecting public. If Twitter and social media are a cesspit, it is in part because we and our politicians have allowed it to become so. And because we said they were platforms, not publications – where free speech absolutists can say what they like, to whom they like, without consequence. This is not a great message to send to the young: that what you say doesn't matter, and therefore you can say what you like.

It's worth reminding ourselves that Twitter started out with the same idealism that Mastodon currently enjoys. Over time it became an engine by which people with polarised views took lumps out of each other. I joined Twitter in 2009 when friends told me there was a discussion about a blog post I'd written. I challenge anyone not to be interested in a conversation where they or their content is the primary subject matter 🙂 . Reader, I offered no resistance. Back then I had a small Twitter community of some 100 followers and 100 followed. It was like an early Slack channel (I remember IRC) – a real-time extension of the VMware Communities forums where I'd first met people who I would later meet in real life, and come to call friends.

It didn't stay that way. Pretty quickly Twitter evolved – or our use of it evolved – into pushing "content" to our "followers" and building status as "influencers". Of course, that was limited to the mainly esoteric world of enterprise IT. I think that's when I started to lose interest – when it was less about shoutouts for help, advice and opinions, and more of a one-way broadcasting system. As the number of people I followed increased, I found I couldn't keep up with the volume of tweets. In short, I discovered I could not read the whole of the internet.

Can Twitter ever make money? I would argue not. It turns out it's really difficult to turn a micro-blogging site of 200m users into a viable platform for advertisers. Unlike Facebook – primarily used by a generation of users who seem to relish filling out profile pages with valuable metadata – Twitter doesn't hold that information so readily. I guess that's one fear folks moving to Mastodon have: that in a desperate bid to monetize that which has been resistant to monetization, the privacy and protections previously "enjoyed" will be opened up to ML/AI engines to be mined for advertising-friendly metadata. It's one of the reasons I archived all my tweets and then used TweetDelete to remove them. Clearly, the attempts in the shape of Twitter Blue have been so bad that the "product" was withdrawn within 48 hours. That to me demonstrates the unwillingness of the user base to pay for a system they have previously enjoyed for free. Maybe the only viable and stable model for micro-blogging is minimal-cost Mastodon instances – run like one's personal blog – which allow a micro-blogging presence in a non-commercial environment. If that is the case, then don't expect it to look anything like Twitter in practice.

Can Mastodon avoid becoming yet another cesspit? I would hope so – and some would say that a system NOT built on an algorithm that rewards conflict and hate means that it will. I'm more sceptical. Sadly, I believe there isn't any human invention or innovation that doesn't come with negatives, and I fail to see why Mastodon will be any different. The truth is there is light and darkness in the human population, and people bring that light and darkness into everything they do. There's nothing really in Mastodon that would stop darkness – except for well-expressed server rules and good moderators. And that assumes everyone doesn't set up their own Mastodon host and become their own self-policing person. Bad actors and bad servers can be blocked – but won't that merely create a series of different echo chambers, creating a false impression of unity and harmony which could be quite different from the world outside your door?

This week feels less insane than the last couple of weeks in the Birdsphere. Whether that continues is anyone's guess. Right now, although lots of people have moved, it's far from the mass exodus many predicted. Musk is calculating that his users are emotionally tied to their followers and unwilling to abandon the network they have created there. As for myself, this feels like a good time to move on. I recently renamed my Twitter account to "Michelle Laverick Music" and use it to promote my music, videos, gigs and recordings. The truth is I felt queasy doing that. My "audience" (if that doesn't sound too egotistical) followed me for my contribution to IT, not music. It did feel like I was grifting my Twitter followers for entirely different purposes. The move to Mastodon seems a good time to start afresh with a tabula rasa. Also, I was cautious about "pushing" issues that my original Twitter followers might not be interested in, such as my politics or my views on trans issues. With a new ID on a new platform, I feel more at liberty to express those views.

Finally, no one knows what the future will bring. Musk himself has warned about bankruptcy, and introduced the working culture of presenteeism and long hours common in a start-up. Twitter isn't a start-up, and it's unclear to me why any experienced member of staff would hang around for Foxconn-style terms and conditions of employment. Some predict that Twitter will become unstable, unreliable and unusable within 6 months – personally, that seems a bit pre-emptive, but who am I to predict the future? My concern is that if the mass exodus does occur, and at a swift pace, Mastodon's distributed model isn't yet able to absorb that demand – and there needs to be a method of distributing new sign-ups beyond people picking the most popular hosts within the system. I'd love to see the big WordPress hosters start to offer a personal Mastodon instance. I'd love to see vendors like Synology offer native Mastodon instances on their NAS arrays. I'd also love to see a more enhanced method of finding and following individuals than the process we currently enjoy.

And for all our sakes, I hope it marks a return to civil discourse not based on character assassination and smear – a space that allows the curious and kind to debate and discuss real issues in good faith. You see, forever the idealist.

Category: Other | Comments Off on Mastodon and the Miocene Era of the Internet
November 10

So I don’t blog much – Mastodon & Twitter

So I realise I've NOT blogged much recently – to tell you the truth, I'm so busy working at Droplet that I don't actually have much time on my hands to do so. Maybe that will change in the next year – I don't know.

So the subject of this blog is Mastodon and Twitter. I don’t want to get into the whys and wherefores of recent developments – although I have an interest in those – I assume those opinions are better expressed by other people elsewhere – and I’m not sure whether my commentary would add much. I’m more interested in the technical side of things…

Setting Up your Mastodon Server?

Firstly, you have to ask yourself the question: why? My reason was I wanted a friendly handle – and with the volume of folks bailing from Twitter, some of the Mastodon servers are quite heavily loaded and taking 24hrs to process sign-ups. Anyone who isn't tech savvy is going to pick a handle and run with it. This is like the early days of the internet, when there was a plethora of websites offering free email accounts to anyone who signed up. If you want your own Mastodon server, chances are you'll be serving not just yourself but a community of folks who share stuff in common. Consider that this is going to cost you money – whether you sign up for a SaaS service or have a more VM-based configuration. Clearly, you could set up Mastodon on your homelab, where you may have already paid for CPU/memory/network resources – but you have to consider availability and upstream/downstream bandwidth. Being in a rural location, power reliability is a bit of an issue for me – and my bandwidth is currently domestic broadband. All in all, having this thing hosted and managed, like my WordPress blog, made more sense.

Mastodon is based on Linux and isn't easy to set up from scratch. Initially, I tried a free-tier Azure VM and started installing modules. I hit a roadblock and gave up. I later found out there was an Ansible playbook (based on YAML) I could have used. In the end, I found a few providers offering a dedicated VM with a Mastodon installer service. Unlike WordPress hosters, this is a bit more niche, so I went with running off the Hetzner cloud. Elestio acts as the orchestration engine overlay across a series of different cloud providers, which creates a kind of "marketplace". Once the service was provisioned I used my existing subscription to register a friendly domain –

[I've become increasingly interested in the folk music of England, Scotland, Ireland and Wales – apologies to my independence chums – in haste I registered a .uk domain; in hindsight, I should have picked something more neutral!]

As you might expect, it took a while for DNS to propagate – but once done, Elestio has a handy SSL registration system where you put in your FQDN and the domain, and they handle all the SSL stuff for you – pretty neat, as SSL can be a bit of a nightmare on any platform. Many thanks to the support folks at Elestio who fixed my SSL issues because I didn't RTFM…
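While waiting for DNS to propagate, a quick script beats refreshing the browser. This is just a sketch; the domain in the comment is a placeholder, not my actual instance name:

```python
import socket

def resolved_addresses(hostname):
    """Return the set of IP addresses a hostname currently resolves to,
    or an empty set if the record hasn't propagated to this resolver yet."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except socket.gaierror:
        return set()

# Poll a hypothetical instance name until the new record appears:
# while not resolved_addresses("mastodon.example.org"):
#     time.sleep(300)  # check again every 5 minutes
```

Bear in mind this only checks your local resolver – other resolvers around the world may pick up the record earlier or later.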

There's some work that needs to be done post-config in the Mastodon "administration" pages once you're logged into Mastodon with your admin credentials – metadata needs to be included such as:

  • Site Settings – you need to fill out stuff like the admin username, the email used to log in as admin, a short server description (visible to subscribers), an extended server description (I put in things like where it's hosted and the hardware specification), and a welcome logo
  • Server Rules – these are the "Acceptable Usage"-style rules you see when you sign up, such as zero tolerance for hate speech…

Once done, you can submit the URL of the Mastodon server to an email address – which feels quaint – but I haven't discovered a web-based API method for this task. I recall seeing one but couldn't find it again…

Followers & The Followed

If you have more followers than you follow on Twitter, there is no easy programmatic way to port that across – and why should there be? It's your decision to move to another platform, and you can't make free people follow you! There are tools that use the Twitter API to scan the people you follow, look for a Mastodon ID in their Twitter info, and then follow them on Mastodon. This is NOT pretty, but it can be done. This is how it works:

  • Run the tool and authorise it against your Twitter account
  • Let it search your lists and the folks you follow…
  • Export to CSV
  • Then import the CSV into Mastodon using its import and export functions
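The middle steps can be sketched in a few lines. To be clear, this is an illustrative reconstruction, not the actual tool: it scrapes fediverse-style addresses out of bio text and writes the CSV shape Mastodon's follow-import accepts. The single "Account address" header column is my assumption – check it against a follows export from your own instance:

```python
import csv
import io
import re

# Matches fediverse addresses like @user@instance.tld in free-form bio text.
# Note: plain email addresses can false-positive with this pattern.
MASTODON_RE = re.compile(r"@?\b([A-Za-z0-9_]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})\b")

def extract_mastodon_addresses(bios):
    """Scan a list of profile-bio strings and return user@domain addresses."""
    found = []
    for bio in bios:
        for user, domain in MASTODON_RE.findall(bio):
            found.append(f"{user}@{domain}")
    return found

def to_import_csv(addresses):
    """Write addresses in the shape Mastodon's follow-import expects."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Account address"])  # assumed header for the import file
    for addr in addresses:
        writer.writerow([addr])
    return buf.getvalue()
```

Running `extract_mastodon_addresses(["Find me at @alice@mastodon.social"])` picks out `alice@mastodon.social`, ready for the import file.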

Did this find everyone? No.

Not everyone is on Mastodon.

Not everyone has put Mastodon in their profile.

So I'm thinking once a month I will re-run this process, until I decide to give up. It's worth saying there's nothing I can do about the 11k people who followed me on Twitter. My view is I'm not walking away from those people – many of whom are my friends – but I'm not for the moment "Active on Twitter" until we find out what the heck is going on. [See what I did there?]

A Safe and Reliable Exit Strategy:

I like to see the decision to leave Twitter and go elsewhere as if you were migrating from version 5.0 of one product to version 9.0 of another. This isn’t an upgrade but a migration to a whole new platform. Don’t burn your bridges to the old platform overnight. It’s going to take time to build up the new presence and time to back out of the old commitment. Do it at your pace, at your own schedule.

  1. I'm not leaving Twitter – but I'm going to archive my tweets and delete my history. I've done this once before, when Michelle Laverick became Michelle Laverick Music. I decided my old tweets needed to go, as I was going to focus on music, not technology or ranting about politics. (I singularly failed to stay focused purely on music!) If you want to do this, request a download of your Twitter activity – the file is very small as it's just metadata (it's also a great example of great coding by Twitter engineers – it's very storage efficient).
  2. This takes a while (about a day to generate and download). Then use TweetDelete to upload the .zip file and purge your tweets…
  3. It is recommended NOT to deactivate your Twitter account, but to park it – to avoid someone taking over your ID, buying a blue tick for $8 and passing themselves off as you. This is like the Facebook scam where people clone accounts to impersonate others…
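For the curious, the tweet.js file inside that archive is just JSON wrapped in a JavaScript assignment, so it's easy to peek inside before purging. A sketch – the exact wrapper prefix (e.g. `window.YTD.tweet.part0 = `) varies between archive versions, so treat that as an assumption:

```python
import json

def load_tweet_archive(js_text):
    """Parse the tweet.js file from a Twitter archive download.

    The file is JSON wrapped in a JS assignment such as
    'window.YTD.tweet.part0 = [...]' (prefix varies by archive version),
    so strip everything before the first '[' and parse the rest."""
    payload = js_text[js_text.index("["):]
    return json.loads(payload)

def tweet_ids(archive):
    """Collect tweet IDs; newer archives nest each entry under a 'tweet' key."""
    ids = []
    for entry in archive:
        tweet = entry.get("tweet", entry)
        ids.append(tweet["id_str"])
    return ids
```

Handy for a quick count of what you're about to delete, or for keeping a plain list of IDs as a personal record.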

So I'm keeping my Twitter account, but it's going to run VERY much like my TikTok and Instagram accounts – I just use them as "publishing" platforms to post videos, gigs and recordings when they become available. My "active" platforms will be Facebook (where my friends live) and Mastodon, where I'll subject people to my tofu-eating, Guardian-reading Mastodonarati views – like respect for all and human rights – you know, those seemingly outdated notions in the world of free speech snowflakes who complain about cancel culture when their precious opinions are questioned. You know where I'm going with this, right?

What's Mastodon Like?

Well, Twitter it is not. This is open-sorcery, and it kind of feels like the early days of the internet circa the 1990s – which is actually kind of refreshing and thrilling. People who expect the slickness of commercial products backed by millions or billions of VC dollars are going to find that a scalable, distributed micro-blogging service takes time to stand up – especially when, within days, it suddenly got a surge of new sign-ups.

Personally, I'm worried about how easy it is for anyone to stand up a Mastodon server and advertise it on other platforms. What are you connecting to? Who is running the service? Are they trustworthy? If they have root access, what is stopping a rogue admin from using a Mastodon server as a glorified honeypot for collecting emails for phishing or scamming purposes? AFAIK there's no end-to-end encryption in Mastodon, and the admin has rights to the backend database and can dump the contents of DMs. That doesn't seem terribly secure to me. The history of the internet is, to a degree, one of naive idealists setting up services with little or no security, and then having to retrofit security. This is not a good model – it generally means closing the stable door after the horse has bolted.

There are folks saying you must delete and leave Twitter, for fear it could be leveraged as part of some kind of bizarre witch-hunt reminiscent of the McCarthy era. Personally, I find this to be scaremongering and borderline conspiracy theory [why do I feel like I'm typing famous last words!]. But I suspect if anyone wanted to target you, it would be just as easy to do so with Mastodon. Where there's a will, there's always a way.

Final tips: use Mastodon through a web browser – it currently feels so much better than the official Mastodon app (sorry, guys). There are other Mastodon clients out there, and I'll be testing those until I find one that works how I like on my Mac, iPhone and iPad. You see, I'm still in bed with evil corporates – and my favourite web browser, photos and maps come from, yes, our friends at Google.

But hey, at least the CEOs of Apple and Google have the good sense not to go around telling their customers how to vote…


Category: Other | Comments Off on So I don’t blog much – Mastodon & Twitter
June 16

Droplet Container Support Scenarios

Back when I started with Droplet (it will be two years in Sept 2021!) we had quite a simple product. Back then we had just two container types and a limited number of places where we supported them. Since then we have added many different container types running in physical, virtual and cloud environments. Additionally, there's been an explosion of features and possible configuration options. I'm pleased to say that 100% of this development has been customer- and demand-led. That's how I feel it should be, and how it should remain. I've seen software companies go adrift chasing featurism – the endless development of new features simply to have something "new" to bring to customers, lacking a strong, compelling use case driven by customer need.

Most software companies address this increased "flexibility" (what they mean is complexity!) with a series of tables or matrices. I find these a turn-off, mainly because they are not focused on the organization and reduce a product to a series of "tick boxes". I think they are difficult to navigate, often confusing in their own right – and don't simplify the story in the way they intend. I prefer a series of scenarios firmly rooted in the real world, which makes it much easier for organizations, partners and our staff to ask the right questions – and in the process saves customers bags of time and energy, especially now we have 4 core types of container, with two sub-types within each category. Of course, even a scenario approach has its limitations, in that few organizations fit a "one size fits all" mould – so multiple scenarios can and do exist, which lends itself to a more "blended" solution. The other thing I don't like about matrices is they lead to drag-race comparisons between vendors based on a feature list. How often is a missing or present feature latched onto as a deal-breaker, only for that feature never to be enabled once in production!

So let’s look at some common scenarios and outline what my recommendations would be for the customer…

Scenario 1: Legacy Apps on Physical Hardware

This is probably our most common scenario – although I would say a significant minority of customers are using our technology to deliver a modern application stack.

Whether you're running Microsoft Windows 10, Apple macOS or Google Chromebook, I would recommend our DCI-M7x32 container leveraging the native hardware acceleration provided by the local OS. On Microsoft Windows 10 that is WHPX, on Apple macOS it is HVF, and on Google Chromebook it is the KVM extensions. The DCI-M7x32 container is a good all-rounder for both legacy and some modern applications too – and is probably our most popular container type.

Scenario 2: Legacy Apps On-premises VDI environments

We support hardware acceleration in VDI environments where Windows 10 is the primary OS running inside the VM. For on-premises environments like VMware vSphere, the Intel VT or AMD-V attributes need to be exposed to the VM, and we would recommend using Windows 10 Version 1909 or higher. Assuming you have all your ducks in a row, we would recommend our DCI-M7x32. In short, physical and virtual environments are treated equally. In case you don't know, exposing hardware acceleration to the VM is very easy to enable on the properties of the CPU in VMware vSphere:


Scenario 3: Legacy Apps on Multi-Session environments

In this case, a server OS is enabled with Microsoft Remote Desktop Session Host (aka Terminal Services), and multiple users connect either to a shared desktop or shared application infrastructure. In this scenario, we would recommend our multi-session container, which is 64-bit enabled: the DCI-M7x64. Although this container type doesn't currently support hardware acceleration, it does provide up to 4 cores and 192GB of memory. That offers fantastic scalability – one container image is accessible to multiple users, offering the same concurrency model as the RDSH host within which it runs.

In Microsoft WVD, whilst E-series-v4 and D-series-v4 instances do pass through hardware acceleration, the benefits are limited to a power-user style environment where there is a 1-2-1 relationship between the user and the desktop. In our literature, we refer to this offering as the "Flexible Model", as each user gets their own personal container accelerated by Intel VT. In this case, the DCI-M7x32 container is the best type to go with.

In environments like Microsoft WVD we recommend the same configuration as we would with RDSH – essentially, Windows 10 multi-session offers the same multi-user functionality as an RDS-enabled Windows 2016/2019 server. The DCI-M7x64 container, which is multi-session enabled, offers greater consolidation and concurrency ratios. Laying the technical issues aside for a moment, economically the utility model seems to win, as a multi-session environment is always going to offer consolidation and concurrency benefits. In our literature, we refer to this as the "Scalable Model", as it is the most cost-effective method of serving up containerized apps to multiple users. There's an implicit lack of scalability in a multi-session container on a 32-bit kernel: since that kernel is limited to using 4GB of memory, once you have around 9-10 users connected to the container you run the risk of out-of-memory conditions and swap activity. On an x64 container that isn't a problem, as the maximum amount of configurable RAM is 192GB.
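The back-of-the-envelope arithmetic behind that 9-10 user ceiling looks roughly like this. The system reserve and per-user footprint here are my illustrative assumptions, not measured Droplet figures:

```python
def max_users(total_mb, system_reserve_mb, per_user_mb):
    """Rough concurrency ceiling before out-of-memory and swap kick in."""
    return (total_mb - system_reserve_mb) // per_user_mb

# 32-bit kernel: ~4 GB addressable, minus an assumed 1 GB for the
# kernel and system processes, at an assumed ~320 MB per connected user
print(max_users(4096, 1024, 320))        # roughly 9 users

# x64 container: up to 192 GB of configurable RAM, same assumed footprint
print(max_users(192 * 1024, 4096, 320))  # hundreds of users
```

The exact numbers depend entirely on the application workload, but the shape of the constraint is the point: the 32-bit address space is the hard ceiling, not CPU.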

Scenario 4: Modern Apps on Physical or Virtual Hardware

Frequently we have organizations wanting to run modern apps on Windows 10 in a container rather than installing those apps directly into the OS. The reasons for this can be multiple – but often it's about wanting to decouple the apps from the OS to allow for portability and security, and to be able to ingest Windows 10 updates without fearing they will clobber a delicate blend of applications. Another motivation can be trying to support applications across other platforms like Apple macOS or Google Chromebook. As the Droplet image is portable across all three platforms without modification, it's often the best approach.

In a pure Windows 10 environment – whether physical or virtual – we would recommend the DCI-M8x32 or DCI-M8x64 container with hardware acceleration; the same container type would be used on the Google Chromebook. Apple macOS, on the other hand, benefits from our DCI-M8x32, which we have been running for some time and which gives excellent performance. We do have a DCI-M10x64 container type, but you do need physical hardware for that (currently it's simply too resource-intensive for a virtual/cloud environment – although that will change with improvements in software and hardware). We tend to reserve the DCI-M10x64 container for high-end devices (8-16GB RAM, SSD/NVMe) as this offsets the payload associated with this generation of the kernel.

Scenario 5: Really Jurassic Apps on Physical or Virtual

Occasionally, we come across an organization with really old applications. For these organizations I recommend they give the DCI-M7 container type a try, as we often find even really old applications will install and run inside our container runtime. If that's not the case, then I would recommend the DCI-X container type (hint: X refers to cross-compatibility). It offers the same "Droplet Seamless App" experience but contains an older set of application binaries and frameworks, often missing or deprecated in the DCI-M7 generation.


Category: Droplet Computing | Comments Off on Droplet Container Support Scenarios
June 9

Droplet Networking (Part 2 of 2) Walls Of Fire

Sorry, it just amuses me that we use the term "firewall". Yup, I know it comes from the construction industry, as a way of saving people's lives and preserving the integrity of a building. I sometimes wish hackers and other bad actors did have to walk through fire as part of their punishment – but I guess that's against some crazy health and safety laws. If you don't know by now, I am joking. Besides which, I felt "Walls of Fire" was a suitably "clickbaity" blogpost title that might garner me some additional views – and if that is the case, I'm very sorry for that.

So anyway, in the Droplet container we have two firewalls: external inbound, and internal outbound. The important thing about the external inbound firewall is that it is turned on by default and blocks all inbound traffic. There is no API or SDK – which means there are no controls for a hacker to leverage to facilitate an attack. That does have implications for "push"-based events, but in my experience the vast majority of networking activity is actually "pull"-based, in that some software inside the container is responsible for initiating network activity. In that case, it triggers the internal outbound firewall…

The internal outbound firewall is stateful by design – which is just firewall-speak for saying that if a client app opens a TCP/UDP port to the network, that traffic is allowed to pass, and when the communication ends or times out, that door is closed again. It's been the basis of many firewalls for decades. By default, our outbound firewall doesn't block any traffic (remember, ping and tracert do NOT work inside our container). The default configuration allows ANY:ANY. To a great degree, this is a deliberate choice on our part to deviate from our usual stance of "all the doors are closed until you open them".
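The stateful behaviour described above can be modelled in a few lines. This is a toy sketch of the concept only – not Droplet's implementation:

```python
# Toy model of a stateful outbound firewall: an outbound flow opens a
# "door"; inbound packets matching an open flow are allowed back in;
# anything else inbound is dropped.

class StatefulFirewall:
    def __init__(self):
        # Each open door is a (proto, local_port, remote_host, remote_port) flow
        self.flows = set()

    def outbound(self, proto, local_port, remote_host, remote_port):
        # Default policy is ANY:ANY, so record the flow and let it pass
        self.flows.add((proto, local_port, remote_host, remote_port))
        return True

    def inbound(self, proto, local_port, remote_host, remote_port):
        # Only replies to an existing outbound flow may pass
        return (proto, local_port, remote_host, remote_port) in self.flows

    def close(self, proto, local_port, remote_host, remote_port):
        # Communication ended or timed out: shut the door again
        self.flows.discard((proto, local_port, remote_host, remote_port))

fw = StatefulFirewall()
fw.outbound("tcp", 50000, "example.org", 443)
assert fw.inbound("tcp", 50000, "example.org", 443)      # reply allowed in
assert not fw.inbound("tcp", 50001, "example.org", 443)  # unsolicited: dropped
fw.close("tcp", 50000, "example.org", 443)
assert not fw.inbound("tcp", 50000, "example.org", 443)  # door closed again
```

Real firewalls add timeouts, sequence tracking and protocol awareness on top, but the open-door/close-door state table is the core idea.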

[Aside: It’s the response to the reality in our time-pressed world, that almost no one has the time to RTFM these days. Heck, I’m surprised you even have time to read this blog post – but here you are. Thanks for that 🙂 ]

So, if we made our default BLOCK:BLOCK, precisely zero packets would be able to leave the container, and we would spend hours explaining why that was the case… If you look at our default firewall configuration when the container is powered off, this is what you will see:

Changes to the firewall require access to the Droplet Administrator password, and that the container is shut down or the droplet service stopped. The changes made in this UI are permanent and survive reboots and shutdowns.

Note: Enabling block with no rules defined blocks ALL network traffic from the container. This is a viable configuration if you want to block all communications in and out of the container except those allowed by our redirection settings or other internal Droplet processes.

I can make this configuration very restrictive by only allowing port 80 traffic inside the container for a defined list of sites. This is common when a customer is running a legacy web browser – for example, IE8 – to connect to a legacy backend web service.

In the screengrab below, the permitted web service is accessible (incidentally, it's running in a Droplet Server Container protected by a secure and encrypted link…) while a non-permitted site is not. Notice also how my mapped network drive to S: no longer works. The Droplet redirected drives still function – which goes to show that for every rule there's an exception: our firewall does not block our own trusted internal communications, such as those that drive our file replication service.


Category: Droplet Computing | Comments Off on Droplet Networking (Part 2 of 2) Walls Of Fire
May 11

Droplet Networking (Part 1 of 2)

So this blogpost is all about our networking and our built-in firewall. This is intended as a primer for geeks like me who like nitty-gritty details. You don’t need to know ANY of this stuff to use our product or deploy our product. In this first part, I will talk about the raw IP of the container and how that functions. In part 2 I will focus on our built-in firewall controls and how they work.

Firstly, let’s talk about the IP configuration of the container. The Droplet container is configured to be a DHCP client but critically does NOT get its IP address from your DHCP services on the network. Instead, a built-in service that requires zero configuration is responsible for assigning this IP. Running ipconfig inside the Droplet container reveals the IP address assigned – incidentally, it is always the same for every container.

Notice the Default Gateway address – any packet destined outside of this network will be directed to it. So, the thing to understand here is that, by default, we NAT all traffic out of the container across this IP address, which is assigned to the host device. Together with our firewall configuration, this effectively “hides” the container on the network, making it invisible.

Common admin tools to confirm network communications are utilities like tracert and ping. It is possible to ping the Default Gateway address and get a positive response from the host device – but all other ping/tracert attempts will fail. So, we don’t allow ICMP (the protocol that drives ping/tracert) to escape this host/container network. In the screengrab below, the ping to the Default Gateway works, but all other ping tests fail.

The main takeaway is that if you are experiencing networking problems, ping can and does result in false positives indicating a problem – when the issue lies elsewhere. Incidentally, this does not impact other utilities such as nslookup, where the requests are driven by a different protocol – in case you don’t know, nslookup requests are carried over UDP port 53. So an nslookup will give a positive response (assuming a DNS PTR record exists for the reverse lookup). In the screengrab below, you’ll see a positive answer to the reverse lookup and to the query nslookup dc.corp.local.

If you look closely, you’ll see the address listed as the “unknown” server. So, DNS forwarding is taking place on another built-in IP address, which you can see listed if you run ipconfig /all

So, if a container cannot get to \\dc.corp.local, it is more likely the case that the host device is misconfigured – or else there’s a problem with DNS, where that name either doesn’t exist or is incorrectly configured. An nslookup is probably the crudest and most basic test of connectivity, because if you get an answer it proves that the container is forwarding queries to the host device, which in turn is forwarding those requests on to the real DNS infrastructure.
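As a rough sketch of that crude test, the helper below wraps nslookup and pulls out the last resolved address (the first “Address” line in nslookup output is the DNS server itself, the last is the answer). The function names and parsing here are my own illustration, not Droplet tooling – dc.corp.local is simply the example name used above.

```shell
# parse_addr: read nslookup output on stdin and print the last
# "Address" value - the last one is the answer; the first is the
# DNS server that responded.
parse_addr() {
  awk '/^Address/ { addr = $2 } END { print addr }'
}

# resolve: a hypothetical wrapper - if this prints an address, the
# container is forwarding queries to the host device, which forwards
# them on to the real DNS infrastructure.
resolve() {
  nslookup "$1" 2>/dev/null | parse_addr
}

# Usage inside the container, e.g.:  resolve dc.corp.local
```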

Whilst I’m on the subject of names – one word about using names inside a container. The Droplet container is, for the most part, completely unaware of your network or domain (both DNS and AD) infrastructure – but it does understand IP addresses and DNS-based FQDNs.

[Aside: If you join the container to the domain – the above isn’t true. Our default behavior is that the container is a member of its own workgroup]

So opening an Explorer window on \\ or \\dc.corp.local is highly likely to work – but using \\dc – the short or NETBIOS name – is less likely to be successful, and is more likely to be slow. Here’s why. Many years ago, we embraced DNS as the main name resolution system – and most organizations decommissioned the systems that supported the short NETBIOS names that go back to Microsoft’s NetBEUI protocol (commonly referred to as the less-than-good-old-days :p). NETBIOS names used to be resolved by either a hosts file, an lmhosts file, or a system called WINS (ah, remember the days of JetDB-driven name resolution systems!). In the absence of these systems, a broadcast packet is probably the only way these shorter names will be resolved. As broadcasts are not propagated across routers and are inhibited by network switches and VLANs, the chances of successful resolution are slim – unless you run a flat network like you would on a simple home-based WiFi network.

What if you have a legacy application that is so old, decrepit, and poorly written that it only works with NETBIOS names, and not the fancy new DNS names that started to be adopted with Windows 2000? Simples – simply supply the DNS suffix information required to complete the short name on the LAN adapter inside the container.

A good way to illustrate this name resolution issue with NETBIOS names is to try an nslookup on a short name. Here you can see the query for “dc” using nslookup fails:

However, if I manually tell the Droplet container what my preferred domain name is (something it would usually get from domain membership or option 015 “DNS Domain Name” on your DHCP server), this “problem” goes away…

Of course, the simplest and recommended method of avoiding this issue altogether is to use FQDNs or IP addresses wherever possible, and avoid NETBIOS names entirely…
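The short-name workaround above can be sketched in a few lines: qualify a dotless name with a DNS suffix before resolving it. This helper is a hypothetical illustration (corp.local is the example domain from this post), not anything the product ships:

```shell
# qualify: append a DNS suffix to a short (NETBIOS-style) name;
# names that already contain a dot are treated as FQDNs and left alone.
qualify() {
  name="$1"; suffix="$2"
  case "$name" in
    *.*) printf '%s\n' "$name" ;;          # already an FQDN
    *)   printf '%s.%s\n' "$name" "$suffix" ;;
  esac
}

# e.g. qualify dc corp.local   prints   dc.corp.local
```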

So now we have a good idea of the networking inside the container – what are some practical uses? Here’s one example – suppose you have adopted cloud-based solutions such as OneDrive, GoogleDrive, or Dropbox – and you wish to access those systems within the Droplet container. I wouldn’t recommend installing the software for these systems INSIDE the container, as the synchronization process behind them would start to fill the container with redundant data. Better to use our built-in drive redirection to access these, or share out the OneDrive/GoogleDrive folder and then map a network drive to it.

With our drive redirection enabled, any folder accessible on the host device is accessible to the end-user (based on their user login details…) and in this way, the File Manager of the container “mirrors” the File Manager of Windows 10.

So on the left we see the C, F, L and S drives of the Windows 10 PC, and on the right we see the redirected drives of the Droplet container.

Another approach would be to share the OneDrive folder on the Windows 10 PC, and then map a network drive to the address using that share name. So, after sharing out the folder, I can connect using SMB/CIFS by browsing to \\ and right-clicking the share, mapping a drive letter to it using the standard tools:
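For those who prefer the command line to right-clicking, the mapping step might be sketched like this. The helper below just builds a UNC path – the host address, share name, and drive letter are hypothetical examples for illustration, not the values from the screengrabs:

```shell
# unc_path: build the UNC path for a share exposed on the host device.
# Both arguments are hypothetical example values.
unc_path() { printf '\\\\%s\\%s\n' "$1" "$2"; }

# e.g. unc_path 192.168.56.1 OneDrive   prints   \\192.168.56.1\OneDrive
#
# Inside the container (Windows), the mapping itself would then be the
# standard command, e.g.:
#   net use S: \\<host-address>\OneDrive /persistent:yes
```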

Category: Droplet Computing | Comments Off on Droplet Networking (Part 1 of 2)
April 26

Droplet containers on Google Chromebook – Technical Requirements

One of the most common questions that comes up around Droplet containers on Google Chromebooks is the hardware specification. Whilst that is important, what I usually direct customers to validate first is which kernel generation is being used by the Chromebook. You see, it’s entirely possible that a very modern and powerful Chromebook is using an older kernel, and that an old and underpowered Chromebook is using a modern kernel. There isn’t any link between horsepower and the generation of the kernel in use.

Why is this significant? Well, kernels from 4.14 onwards were compiled by Google to allow grub (the bootloader) to expose the Intel-VT attributes from the kernel to the wider ChromeOS. This attribute is passed as “VMX” in the grub bootloader. As you probably know, ChromeOS is a very lightly loaded and sealed environment – which drives great performance and security. For that reason, you can’t just change the kernel or modify the bootloader. Well, you can – by using “Developer Mode” – but that essentially “breaks” the security model. For the uninitiated, “Developer Mode” essentially gives you “root” access to the device – and I often say it’s not unlike “jailbreaking” an iOS/Android phone or tablet. It’s not so much the “kernel” that’s the issue but the parameter passed to that kernel via grub.conf. It just so happens that checking the kernel version is the quickest route to determining the capabilities.

How does this impact Droplet containers? We leverage Intel-VT to deliver hardware acceleration for our popular 32-bit containers, and for some of our 64-bit containers too (depending on their generation). It’s worth saying not all Droplet containers benefit from this hardware acceleration – such as our DCI-X container. That’s because some of the kernels we use pre-date Intel-VT anyway – so there’s no performance benefit to be had. I’m keen not to overstate the performance delta between accelerated and non-accelerated containers – often we are talking about a difference of seconds when the first Droplet containerized app is loaded.

So how does one work out the kernel type on an existing Chromebook? First, launch the Google Chrome web browser, open a terminal prompt using [CTRL]+[ALT]+[T] on the keyboard, and then issue the command:

uname -a
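If you’d rather script the check than eyeball the output, something like the sketch below compares the kernel’s major.minor version against a minimum – here assumed to be 4.14, per the VMX discussion above. This is an illustration of the comparison logic, not a Droplet-supplied tool:

```shell
# kernel_ok: succeed if a kernel version string (e.g. "4.19.123-gabc")
# meets a minimum major.minor. The 4.14 minimum is an assumption based
# on the VMX/grub discussion above.
kernel_ok() {
  ver="$1" min_major=4 min_minor=14
  major=${ver%%.*}          # text before the first dot
  rest=${ver#*.}            # text after the first dot
  minor=${rest%%.*}         # text before the next dot
  [ "$major" -gt "$min_major" ] ||
    { [ "$major" -eq "$min_major" ] && [ "$minor" -ge "$min_minor" ]; }
}

# e.g. on the Chromebook itself:
#   kernel_ok "$(uname -r)" && echo "VMX-capable kernel generation"
```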

This will print out the data needed to ID the kernel in use – in my case, on a Lenovo laptop running Neverware’s CloudReady. This is an open-source flavor of ChromeOS used to repurpose laptops, PCs, and Apple Mac devices for ChromeOS. Performance can be stellar, because often these devices have huge amounts of resources intended for much heavier OSes – and with ChromeOS being so light-touch, they are superfast. As you may know, Neverware was recently acquired by Google – and I think this is part of Google’s strategy to get a foothold in the world of Enterprise IT. Neverware can be a great “get out of jail” card for devices that are powerful enough to run Droplet containers, but where the kernel can’t be updated.

As for new hardware – we would always recommend speaking to your preferred OEM provider, as their internal inventories should allow them to find the details of the kernel generation. Another source of information is the community sharing their experiences – but you need to be careful, as some of these devices are only available in certain geographical regions – plus you’re not just looking at the kernel type, but also the specifications of CPU/memory and local storage.

My personal favorite (which is fantastically priced) is the ASUS C436 Flip 14in i5 8GB 256GB 2-in-1 Chromebook, which is retailing here in the UK in the £799 bracket including VAT (or sales tax, if you are based outside of the EU).

It’s a super-powerful unit for its price. In terms of CPU, my ideal would be some kind of Intel Core i3/i5/i7 device. For memory, I would say a minimum of 4GB, ideally 8GB. Storage-wise, capacity isn’t the issue, but IOPS is significant – so I would prefer an SSD or NVMe drive over eMMC. Fundamentally, an SSD or NVMe drive wipes the floor with an eMMC type – which is usually fine as simple boot media for ChromeOS, but I feel isn’t up to the job when it comes to running a Droplet container.

So, the main takeaway – Droplet containers run on Chromebooks, but watch out for under-resourced Intel Celeron type devices and for powerful machines with old kernels. Once our Droplet software is installed, you should be looking to see “KVM” listed as the accelerator in the Help > About menu, like so:


Category: Droplet Computing | Comments Off on Droplet containers on Google Chromebook – Technical Requirements
April 14

Using and Managing the Droplet File Share Service

There are many, many ways to get data in and out of the Droplet container – my own personal favorite is just to use my NAS-based storage, as it gives me a centralized location for all my storage needs on a super-fast network. There are other ways, of course, including using the clipboard to copy & paste a file into a container, as well as using our redirected drives. For some time we have also had our own file replication service, or “Droplet Share”. The term “Droplet Share” is more of a marketing term, because this does not leverage shared storage protocols such as NFS, SMB/CIFS, or iSCSI, but a built-in replication service. In fact, it was developed as a secure method to get data in/out of the container even if protocols like SMB/CIFS are blocked. That’s the case in many highly secure environments where these Microsoft protocols are expressly blocked by the policies of our customers.

You can see if the file replication service is running if the orange cross appears in the bottom left-hand corner:

There are two main uses of this feature. One is a guerrilla usage: an easy way to copy install media into the context of the container. I’ll often use this when a customer has already downloaded their install media to their desktop and started the container – and other methods are not available. A more legitimate use is in highly secure environments where protocols such as SMB are deliberately blocked, and security policies decree that redirected drives or the clipboard are not enabled. This is often the case in some banking and financial services environments, where physical and virtual airgaps are enforced when dealing with legacy applications that handle sensitive data.

The “Open Device File Share” option opens a File Explorer/Finder window on the host device, and “Open Container File Share” opens a Windows Explorer window in the container:

So, to the “back” is the Windows 10 File Explorer, which opens the default path for the file replication service, which is:


And the window to the “front” is Windows Explorer inside the container, which opens the default path for the file replication service, which is:


Our replication is bidirectional, so any file/folder created in either window is replicated to or from the container. Here I created folders on both sides – and they were replicated in both directions:

We can cope with very large files – double-digit GB files are not a problem. Of course, the bigger the volume of data, the longer the replication takes to complete. Some actions are almost immediate – such as file deletes – because no network IOPS are generated.

If customers do not want to use this feature, it can be disabled from our settings.json or from our core UI under the settings gear icon:

Once the Droplet Fileshare feature is disabled, the services that back it are stopped, the UI is redacted, and the shortcuts are removed – so there is no orange plus in the bottom left-hand corner of the UI.

Finally, a word of warning about the Droplet Fileshare. When you are building your “master” image, you might want to clean out the contents of C:\Shared and turn off the Fileshare feature – occasionally system administrators forget this and leave unwanted installers, scripts, and other such code behind – and that can end up being pushed out during the deployment phase.

Category: Droplet Computing | Comments Off on Using and Managing the Droplet File Share Service
April 12

Droplet containerized apps published with Citrix Virtual Apps

One of the common questions that comes up is what options exist for presenting Droplet containerized apps in platforms like Citrix Virtual Apps. I have a bit of pedigree with Citrix – before pivoting to VMware in the 2003/4 era, my main role was as a Citrix Certified Instructor (CCI) delivering their official curriculum – I started with Citrix MetaFrame 1.8 on the NT4 TSE platform. So, there are a number of ways that you can make Droplet containerized apps available; these include:

  • Full Application in Desktop
  • Full Application as a published app
  • Individual apps within the container

This guide handles that last one. The goal is to be able to present just a single application or .exe within the context of the container – so it is advertised in the StoreFront like so:

So here I’m publishing the core droplet.exe and also a copy of Excel 2003 and Notepad within the container. Let’s have a look at the Excel 2003 example.

In Citrix Studio, we can use the Application node to make new definitions:

When adding the droplet.exe – you can change the icon by providing a .ico file:

We can modify the “Location” field to add additional command-line arguments – in our case, “launch winword.exe –minimised”

This syntax instructs droplet.exe to start Microsoft Word, and to launch droplet.exe in a minimized mode if it’s not already started. The result is that Microsoft Word launches, and droplet.exe is “hidden” in the Windows system tray.

This is advertised in the StoreFront like so:

There are a couple of other recommendations I would make if you’re opting to use a third-party publishing system to serve up Droplet containerized apps…

Firstly, you don’t need our publishing front-end called “Application Tiles” in this environment. You could, if you so wished, “mirror” the configuration in Citrix StoreFront in the Droplet client. Alternatively, you could remove all the “Application Tiles” – if you also disable our FileShare feature, this makes the client very much like an agent sitting in the system tray, waiting for Droplet containerized applications to be launched via the Droplet Windows Service:

This configuration is not uncommon in environments where organizations have a method of controlling access to applications which they have been using for some time, and prefer to create shortcuts to Droplet containerized apps in the standard way using:

droplet.exe launch “excel.exe”

Secondly, in our new release, we will be supporting the settings.json option called:

“hideSettingsMenu”: false

When set to true, this suppresses the “gears” icon in the UI, which makes the UI a little bit neater.
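As a sketch, the relevant settings.json fragment might look like the following – hideSettingsMenu is the option named above, the true value is my assumption for the suppressed state, and any other keys in the file are omitted:

```json
{
  "hideSettingsMenu": true
}
```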

Category: Droplet Computing | Comments Off on Droplet containerized apps published with Citrix Virtual Apps
April 9

Joining Droplet containers to Active Directory Domains

For some time, we have supported joining our containers to Active Directory. It’s a function that’s been built into our technology from its inception. There are several reasons why a customer might want to go down this route. Firstly, I should say we usually try to avoid it, mainly because it’s an extra piece of configuration to consider before going into production – with that said, in many cases it’s an unavoidable requirement of the containerized application.

AD authentication became endemic as soon as Microsoft won the Directory Services wars fought against Novell NDS. Those wars remain firmly in the past, and you’ll find AD-based authentication is pretty much the default method for many legacy applications – whether that’s pass-through authentication where no pop-up boxes appear, or some proprietary UI that confronts the user – either way, these methods usually require a computer account in the domain. Of course, nothing stays still – the rise and rise of non-Windows devices (iPhone/iPad/Android) and web-based applications is fueling authentication methods which, although tied to LDAP infrastructure, use a system of tokens such as SAML/OAuth to carry the username/password. So, by a country mile, the primary reason for joining a container to a domain is to meet these authentication requirements – and the way those requirements are often found out is by testing.

The second reason for joining the domain is accessing other resources which may be backed by AD-based ACLs. In the workgroup model, this is going to create some kind of Windows Security box where the user will need to type their domain credentials. We can store and pass those credentials, but if there’s an excess of them, then sometimes the quickest route to avoid these pop-up “annoyances” is to join the container to the domain – and of course, as soon as one application requires it, the rest of the environment benefits from it as well.

There are other use-cases as well – leveraging other management constructs and tools that are AD domain-based, not least AD GPOs. Whilst the local security policy remains intact inside the container, many organizations would prefer to have an OU and policy settings pushed down to the container. Organizations using end-point management tools like SCCM to manage their conventional Windows 10 PCs may also wish to use the same tools to control the software stack. Our general line on this is: fill your boots. Droplet isn’t about imposing restrictions on how you manage your stuff, nor are we about creating another “single pane of glass” for you to navigate through. Rather, we empower you to leverage your already very serviceable management infrastructure.

Finally, there’s one other reason to join the container to the domain – and that’s to take advantage of our multi-session capabilities. This is a scenario where the container runs in either a Remote Desktop Session Host (aka Terminal Services) or Windows WVD where Windows 10 multi-session is in use. It also applies to scenarios where more than one user logs into the same physical PC at the same time using the Microsoft “Switch User” feature – this is a common requirement in healthcare settings, where staff on shift share the same Windows PC and toggle between users using “Switch User”. Of course, in multi-session environments where one container provides applications for many users, we tend to use our 64-bit container type, as it offers great memory scalability – and we need to separate the different users by their userID to prevent one user from getting the Droplet Seamless Apps of another. Whilst you could create users inside the container, that makes no sense logically when there is an AD environment in place.

The process of joining the domain is no different from the process you would use on a physical Windows PC. However, this is a manual process, so we provide customers with a series of complementary scripts that automate it. This includes our “arming” process: as a container is deployed, on first start-up it runs through these scripts, which automate changing the computer name, joining the domain itself (creating the computer account in a designated OU), and end by ensuring various AD-based groups allow access to the Droplet Seamless Apps. At the same time, the default logon process is modified by changing an option in the settings.json:

This setting essentially turns off our default login process using our service account, and instead challenges the user to supply their domain password.

By default, end-users will see the Windows Security dialog box connecting through to the container, which listens on an internal, non-routable IP address. This uses the discrete, encrypted private network channel created between the host device and the container itself. Some customers like this approach, as they see it as an additional layer of security – especially those organizations that have acquired third-party software that augments these MSGINA dialog boxes with other functionality. As you can see, the domain/username is already pre-filled; all we are waiting for is the user’s password.

If customers would rather pass through the credentials from the main Windows 10/2016/2019 logon, that’s very easy to do – a couple of minor changes in a GPO focused on the Windows systems running our software is all that’s needed. Again, we supply the documentation to configure those options if desired.

Other Considerations:

Now that the container is joined to the domain, there are some other considerations and configurations to think about. By default, the Microsoft firewall inside the container is turned off. We do not need it – because we have our own firewall configuration. However, joining a container to the domain means there’s a good chance that a domain GPO could turn the “Domain Profile” back on. This policy won’t stop our core technologies – but it can stop the Droplet Fileshare service and our UsbService, both of which are network processes. This triggering of the Windows Firewall can be stopped by enabling this disabling policy (did you see what I did there?):

Computer Configuration > Administrative Templates > Network > Network Connections > Windows Defender Firewall > Domain Profile

The policy setting “Windows Defender Firewall: Protect all network connections” can be set to Disabled in a GPO that filters just on the container computer object in Active Directory:

Another consideration is user profiles – as soon as multiple users are logging into the container, this will generate user profiles. In most cases this isn’t an issue, because organizations already have a profile management system in place – not least Microsoft’s own roaming and mandatory profiles. That said, it shouldn’t be overlooked – not least because profile creation isn’t very quick, and the last thing you’d want is multiple containers with multiple different local profiles being created.

So that’s joining the container to the domain in a nutshell – mainly done for applications that require AD authentication, and for our “multi-session” containers used in RDSH and Windows WVD environments.

Category: Droplet Computing | Comments Off on Joining Droplet containers to Active Directory Domains