It makes me WannaCry….

You don’t know how to ease my pain
You don’t know…
You don’t know how to ease my pain
Don’t you hear any voices cryin’?
You don’t know how to play the game
You cheat…
You lie…
You don’t even know how to say goodbye…
You make me want to cry….

It’s rare that the world of IT impinges on my friends’ day-to-day lives on the scale it has in recent days, and rarer still that I feel compelled to address political issues on my tech-based blog. That’s mainly because I think people visit to learn something new about tech, or to read one of my blogposts where I got something to work and they are looking to find out how to do the same. I do have a political blog called “The Age of Rage” and I offload my venom there – I only wish more people did this instead of filling LinkedIn, Twitter and Facebook with political opinions they think everyone else will agree with, only to be upset, offended or abusive when they are shocked to discover the world doesn’t uniformly agree with them. However, the outbreak of the “WannaCry” ransomware represents for me a unique situation where these worlds do collide. I want to talk about these issues in a non-partisan, non-party-political way, because frankly there’s enough of that guff around already from our political class.

Before I “go positive” and speak about the constructive steps that can be taken by all stakeholders (users, vendors, governments, agencies of the state), I feel compelled to draw your attention to some artful media management and outright charlatanism that typifies how this averted crisis is playing out in the media, especially here in the UK. From this I hope to outline how we can collectively take responsibility – while noting that some organisations have more responsibility than others, because of their power and/or financial muscle.



Posted on May 15, 2017 in ThinkPiece


Using Amazon Route53 and Google Apps Together using Domain Aliases to complete SSL Certificate Requests!


I’ve got nearly 25 years’ experience in the IT game, with a range of skills that take in this task – DNS, email, web servers. However, for the last 15 years or more I’ve more or less outsourced the management of this to a third party, or it simply hasn’t been my job. Once I used to teach Active Directory DNS to students when I was a Microsoft Certified Trainer, but that was way, way, way back in 1996-2003. Of course, there’s nothing new under the sun, as the Great Bard once said – and so I have gotten by ever since on core fundamentals. So this is both old and new to me, and if I’m honest I’m not sure if my solution to the problem was the best or easiest. I might have just taken a sledgehammer to drive home a thumb tack. I’d be interested to hear if this process could be made simpler.

I think the ‘order’ of my process is good – especially as you need valid emails to confirm the transfer and setup of certain domains. But I’d also be interested to know whether this is the best way of doing it – could it have been done more efficiently, in fewer steps? Finally, I’d be interested to know if this is the ‘right’ way from a security and best-practice perspective as well.


I would have liked a more exciting title for this blogpost – and one infinitely shorter! Being a Hunter S Thompson fan, I had thought of adding “A Strange and Terrible Saga”. But actually I want to avoid the rabbit hole of an extended rant, and the convoluted shaggy-dog story of my experiences on Friday. It took me 6hrs to get this working, and I’m still mopping up the blood spatter today. This should have taken 30-60mins tops, including waiting for DNS caches to expire and DNS records to be propagated on the interweb. However, I will spare you my personal grief this time, and just focus on the back-story, use-case, solution and workarounds – in the hope that anyone facing similar heartache in the future will stumble upon this post, and I will save them a bag of time. I’m just nice like that – after all, I first got started with VMware by just trying to be helpful. It takes you a long way in life, I think.

Advice: If you are a budding wannabe blogger who just wants your own domain, linked to Google Apps for email etc., together with your own WordPress setup – don’t bother with this approach. It’s overkill. I would sign up to any number of hosted WordPress packages online that will handle all of this for you in a nice, simple enrollment process. This blog is hosted with Dreamhost.

The Problem – Back-story/Use-case:

As part of my endeavours to learn more about public cloud I’ve been looking at Amazon AWS. I’ve already put together an environment that leverages Amazon Route 53 (DNS) together with a multi-region Elastic Load Balancer (ELB) and IIS web-based instances running on ‘public’ subnets. I thought it would be good experience to do this using SSL certificates. I established a new DNS domain, registered and hosted with Amazon Route 53, and opted for a .net domain because that allows for the possibility of making my WHOIS information private, whereas this option did not exist for the alternative I considered. Privacy is important to me, and I don’t think my postal address should be online for all and sundry to see. This is important to note, as it impacts on the SSL certificate enrollment. Registering the domain with Amazon Route 53 and requesting an SSL certificate was relatively easy.

Where I became unstuck, however, was that in order for my SSL provider to verify me and send me the certificate, they needed a valid email listed under WHOIS. This became tricky because that information was a.) private and b.) the email used under the WHOIS information did not match the emails they would usually “expect” to use. That was tricky for me to easily provide, because all I have is the raw DNS domain name, with none of the ancillary services that would normally surround it – such as a web server for it to resolve to, or any email infrastructure. Nor did I feel inclined to waste precious time putting together such services merely for a one-off email and verification process.

This process would have been relatively simple had I been requesting a certificate for a domain where those pieces of the puzzle were already in place, and where much of the verification process had already been undertaken. However, I specifically wanted to use SSL with Amazon AWS and have it all in that environment, rather than doing the DNS work through Dreamhost. Dreamhost is the company that hosts this blog. They are very good, by the way.


So I hit upon the idea of associating my existing Google Apps subscription, which supports my existing domain, to also provide email services to my new domain. It is possible to register the new domain as an “alias” of the one already covered by the subscription. Once recognised by Google, I would be able to create a user within my Google subscription. After that I could update my WHOIS information at Amazon Route 53, and then contact my SSL provider to complete the verification process. Of course, working out HOW to do this took time. I’m pretty tech savvy – but this requires a range of skills, often using interfaces and procedures which are different to ones I’ve used in the past. So you need:

  • DNS knowledge (with Amazon Route 53)
  • Certificate request knowledge (many routes – I used IIS 10 to create a CSR)
  • An account with Google, and knowledge of their domain registration/validation process
  • Further updates to Route 53 and the WHOIS information to change default settings
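The Route 53 piece of this, for example, boils down to adding two record sets: the CNAME Google uses to verify the domain alias, and the MX records that route mail to Google. Here is a minimal Python sketch in the boto3 style – the domain, CNAME host and verification token are hypothetical placeholders (Google gives you the real ones during verification), the MX hosts are Google’s long-published mail servers, and the actual AWS call is shown commented out because it needs real credentials and a real hosted-zone ID:

```python
# Sketch: build a Route 53 ChangeBatch for Google domain-alias verification
# (CNAME) plus Google mail routing (MX). All names below are placeholders.

def google_change_batch(domain, cname_host, cname_target):
    """Return a Route 53 ChangeBatch adding Google's verification CNAME and MX records."""
    mx_hosts = [  # Google's classic MX set, "priority host" format
        "1 ASPMX.L.GOOGLE.COM.",
        "5 ALT1.ASPMX.L.GOOGLE.COM.",
        "5 ALT2.ASPMX.L.GOOGLE.COM.",
        "10 ASPMX2.GOOGLEMAIL.COM.",
        "10 ASPMX3.GOOGLEMAIL.COM.",
    ]
    return {
        "Comment": "Google Apps domain alias: verification + mail routing",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": f"{cname_host}.{domain}.",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": cname_target}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": f"{domain}.",
                    "Type": "MX",
                    "TTL": 3600,
                    "ResourceRecords": [{"Value": v} for v in mx_hosts],
                },
            },
        ],
    }


batch = google_change_batch("example.net", "abc123", "dv.googlehosted.com.")
print(len(batch["Changes"]))  # 2 record sets: one CNAME, one MX

# With AWS credentials configured, the actual call would be:
#   import boto3
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="ZEXAMPLE12345", ChangeBatch=batch)
```

The same changes can of course be made by hand in the Route 53 console – the sketch just shows how little data is actually involved.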

I don’t intend to write something step-by-step, because as soon as I do, the UIs will change. I’ve often found that Google’s help does NOT keep up with their many changes. Amazon, on the other hand, appear to have a better handle on documentation – so there is no point in me trying to compete with Amazon or Google in the documentation stakes. It does illustrate the challenge of managing such an “agile” environment compared to a conventional shrink-wrapped software company: the documentation gets out of sync with the product… To be honest, I still don’t know WHY some processes provided by Google DID NOT work. And I still don’t really know if the WAY I have done it is the best or most efficient. It does, HOWEVER, work. And that to me is what counts. BUT, if anyone can figure out what went wrong, or suggest a simpler/easier way, I would be indebted to them for that guidance.

Finally, I dare say Google Domains/Apps could be replaced with a different vendor if your subscription is with some email supplier other than Gmail. For instance, I’m sure such a configuration could be achieved with Office 365. Of course, any ordinary mortal just wanting a blog with their own domain, and a bit of SSL to protect the login, would be better off getting a hosting company to orchestrate all this – it’s much less heartache!

1,000 Foot View:

This is a simple numbered list that serves as a checklist for anyone (well, mainly me) wanting to do this style of configuration…

  1. Register the new domain with Amazon Route 53
  2. Login to Google Domains and create a new Domain Alias
  3. Use the CNAME record method to verify your domain
  4. Populate Route 53 with the MX records for the Google mail servers
  5. Create a new user in the Google Console as your preferred contact for the new domain
  6. Login to the new account, and (optionally) forward all email to an email address you do actually use!
  7. In Amazon Route 53, update your WHOIS information with the new ‘admin’ email – and receive a flurry of confirmation and validation emails!
  8. Generate a CSR for your domain (various methods)
  9. Submit the CSR for your single-host certificate or a domain wildcard certificate
  10. Use your new certificate as you see fit. In my case it is attached to two region-specific ELBs which act as the SSL endpoint for inbound HTTPS requests – thus offloading the SSL processing to the ELBs and away from your web servers.
  11. Punch the air – and say “wow, did I really do that? I must be some sort of Cloud God lording over the Olympus of the Internet.” Sit back. Have a cup of tea. Feel a little less full of yourself. It’s only software, you know… 😉
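As a hedged aside on step 8: one of the “various methods” besides IIS is the third-party Python cryptography library. This is just a sketch – the hostname is a placeholder for whatever common name your SSL provider expects, and a wildcard request would use *.example.net instead:

```python
# Sketch: generate an RSA key and a CSR using the third-party "cryptography"
# library. "www.example.net" is a placeholder hostname.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

hostname = "www.example.net"  # placeholder; "*.example.net" for a wildcard

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
    .sign(key, hashes.SHA256())
)

# The PEM text is what you paste into the SSL provider's enrollment form
csr_pem = csr.public_bytes(serialization.Encoding.PEM).decode()
print(csr_pem.splitlines()[0])  # -----BEGIN CERTIFICATE REQUEST-----

# Keep the private key safe - you'll need it when installing the issued cert
key_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption(),
)
```

The end result is the same PEM-encoded CSR that IIS or OpenSSL would produce, so the rest of the enrollment process is unchanged.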

NOTE: I won’t be covering steps 8-11 in detail, as they are specific to your environment and will vary from vendor to vendor – and mainly because this post will be LONG enough without adding that level of detail. My main interest is the interoperability between Amazon Route 53 and Google Apps to get this working.

Now in a LOT more detail…



Posted on May 15, 2017 in Amazon


Fluffy Cloudy Amazon Web Services Thoughts (Part N of N)

Disclaimer: I’m not an AWS expert. I’m learning. I regard myself as a novice. Therefore I reserve the right to make idiotic statements now, which I will later retract. My thoughts on AWS are very much a work in progress. So please don’t beat me up if you don’t agree with me. I’m just as likely to respond with “Gee, I hadn’t thought of that – you have a point!”

Well, okay, the title of this post is a bit of a joke at my expense. Just before I joined VMware in 2012, I embarked on a series of blogposts about vCloud Director [yes, just as the company changed strategy towards vRealize Automation!]. It became quite a series of posts. I dubbed it my “vCloud Journey Journal”, and it ended up at a whopping 73 posts, in what almost became writing a book through the medium of a blog. Just so you know, this is NOT a good idea, as the two formats are totally incompatible with each other. So anyway, I don’t want to make the same mistake this time around. My intention is to write stuff as I learn.

After vCD, I dabbled with vRealize Automation (which was once the vCloud Automation Center product, if you remember – acquired via DynamicOps). That product was fine, but it was very much about creating and powering up VMs (or Instances, as AWS likes to call them). I didn’t feel I was really using the public cloud “properly”, but merely extending virtualization features up into the public cloud rather than consuming stuff in the -as-a-service kind of way. Sorry to my former VMware colleagues if this is a massive misconception on my behalf – the last time I touched vRealize Automation was nearly four years ago, and things can and do move on. Plus I’ve been out of the loop for 12 months.

The last couple of weeks have broadened my experience, and as a consequence got me thinking all over again about what public cloud is, what it means, and how it’s defined. Sadly, this became a very boring and tired parlour game in the industry many years ago. I personally think the game of “definitions” – “What is public, private, cloud?” – is a bit moot for the community. But it kind of matters to me, as the typical in-house, on-premises type who made a name for herself by helping others set up, configure and troubleshoot the virtualization stack from around 2003-2015. But even I feel that the debate moved on long, long ago – and this is me playing catch-up.



Posted on May 9, 2017 in Amazon


VMware {code} Briefing: What’s New with VMware PowerCLI 6.5.1

VMware PowerCLI 6.5.1 was released on April 20th and it contained some significant improvements and changes! Whether you’re an occasional PowerCLI user or a power user, you’re not going to want to miss this special briefing!


Posted on May 8, 2017 in Announcements


My Amazon AWS Certification Plan with @pluralsight and @ekhnaser (Part God Knows!)

So I’ve played about with AWS in my time at VMware, but really only dipped my toes. Like many people, I like to have a goal to work towards – so it felt reasonable to think about going through the steps to prepare for certification. For me, the important thing is the learning process and getting the old IT Brain working again. So I may or may not end up doing the eggzams for AWS, but I thought the structure around that prep could help frame my learning. I took a look at the certs on Amazon’s website:

The above link is pretty good for generic info – if you want more detail on the AWS Certified Solutions Architect – Associate certification, this is a much better location:

And I can tell I need to do the ‘associate’ stuff before I do anything ‘professional’ – and given my background, the Administrator/Architect path is the one that suits me. I’ve spent most of my career training, educating and teaching sysadmins how to manage systems – and AWS isn’t going to be any different. I’m not about to morph into a developer at my advanced age. You can teach a dog new tricks, but you can’t teach an old dog to be a cat.

According to Amazon, Step 1 is to take a training class. As I understand it, authorised training is not a requirement, only a recommendation. So unlike some (ahem) certification tracks that mandate authorised training, that’s NOT the case with AWS. Yippee. That means I can spend my plentiful time instead of my limited cash on training.

As a vExpert (2009-2017) I bagged a free 1-year subscription to Pluralsight, so it makes sense to use it as an alternative to authorised training from a recognised training partner. As a rule I prefer classroom training with an instructor who is alive (as opposed to dead). But given the finances, I will make do with the passivity that is online training. Pluralsight does have a course entitled “AWS Certified Solutions Architect – Associate” which fits the bill. It’s created by Elias Khnaser. I know Elias through LinkedIn and Twitter, so I intend to be a little cheeky monkey and ask him questions directly. Although, to be kind, I’ll probably store them up until the end of the course. There’s nothing worse for an instructor than to be asked questions in Module 1 that are answered in Module 2, right?

Right off the bat, Elias recommends attending another course before the one above if you’re a novice. I’ve never been one to skip steps in the learning process, so I opted to do that first.

If you are going to do the fundamentals course first, I would recommend skipping to Module 3: Introduction to AWS Global Infrastructure if, like me, you have been in the industry a while. The course itself feels pretty up to date (I notice there’s no date of creation) and isn’t going to date that much, because it’s fundamentals. But you will spot little changes – for instance, the course states that there are 10 Regions plus GovCloud. Actually, it now stands at 16 regions, with another 3 planned. So long as you follow the URLs in the course you should be able to see these differences. For a more up-to-date list of the Global Infrastructure, you need this page:

My plan, once I’ve gone through both courses, is to double back to Amazon’s 8-step program outlined on their webpages. Both courses are about 8hrs in duration… and I would recommend perhaps going through each one twice. One of the decided benefits of online training like this is the “rewind button” – something that is decidedly lacking in instructor-led training, although I believe some vendors do allow access to online versions of their training material AFTER you have passed the exam. In my personal opinion, I imagine few people can spare the time out of their busy schedules to re-do a course all over again. The benefit, I think, is in “refreshing” yourself on a particular topic or subject you found tough.



Posted on April 11, 2017 in Amazon



What Next?

So I’m back from my family holiday in Wales with my Mum and Big Brother (no relation to George Orwell). And my thoughts have been turning to what I do next with my time, now that my grown-up gap year feels properly over. I’m not the kind of person who likes to sit on my big fat butt waiting for opportunities to wash up on my shore. So I’ve been thinking about what I can do to ease my way back into the world of work after my time away. I guess this is always a concern or anxiety that anyone would have during time away from gainful employment. So it’s not just finances – and those other commitments, usually mortgage and family! – that stop people from taking time out from work.

For some months I’ve been volunteering in my local area. Volunteering is a great way to give back to wider society whilst giving your week a focus, not least getting you out and about in the big wide world. I currently volunteer at Derby Museum as well as a local National Trust site called Eyam Hall. I’ve been asked by some what this work is like. The work at the museum started with supporting their recent exhibition on the History of Children’s TV. That was a fun exhibition, as we got all age groups coming through, and it really was a little snapshot of how British culture has changed. My role there as a “Volunteer Ambassador” was just to meet and greet people, and ideally engage with them about the exhibits. It makes such a difference to a person’s visit to have a chat with someone, rather than walking silently through a gallery speaking to no-one.

Eyam Hall, on the other hand, is a different kettle of fish. It’s a National Trust property, built around the 16th century in a village that cut itself off from the world when the plague hit the country. The NT’s approach is to let people wander and discover, and not ‘impose’ an interpretation on visitors – but it’s great when folks do ask questions, as that means I get the chance to do my best Lucy Worsley impersonation!

My last piece of volunteer work is for a local charity called Aquabox. My role there is more work-from-home – finding new sources of fund-raising. So far I’ve managed to get Aquabox listed with the VMware Foundation (and I’m on the lookout for other corporate-style foundations to add to the list), and I’m applying to official bodies like UK-AID. Anyway, the moral is a simple one. If you’re seeking new employment after being out of the circuit for a while – get volunteering. There is no shortage of areas or opportunities. When I do find employment again, I will probably reduce the time I spend volunteering and move it to the weekend.
If you are an employee of a big company, remember that lots of these businesses now have programs that encourage you to take ‘service hours’ to help good causes. For instance, VMware calls this “Service Learning”. For the moment, my plan is to ring-fence Thursday and Friday as my volunteering days (these are always times when there is a shortage of people), and use the remainder of the week doing something more IT-related.

So one question I’ve been asking myself is what to do on the technical front. Things have moved on since I’ve been away, but they also moved on whilst I was at VMware. If you have a full-time job with a large software vendor, it’s a full-time job just keeping up to date with your own responsibilities, never mind peeking over the cube to look at what the rest of the company is doing. So the question has been: do I throw myself into learning more VMware stuff and refreshing existing knowledge, OR do I branch out and do something totally different, giving myself an entirely virgin field to explore? I mean, I don’t want to lose my connections with VMware, because that’s been such an important technology and company to me over the last 14 years (2003 is when I opened my first VMTN communities account!). But if I’m going to learn, it’s important to learn something brand new to me. The other consideration, as ever for someone who is on their own and learning without the backing of an employer, is what pre-reqs (physical, virtual, software, knowledge) are needed. Do you play to your strengths, or try to plug gaps in your knowledge in areas that may not be your strengths?

One thing I’ve noticed in the community is a significant rise in folks working towards the AWS certifications. I guess that’s testament to Amazon’s dominance in the public cloud space, but it also reflects the fact that many in the enterprise world are users of VMware on-premises and Amazon off-premises (is that actually a word? it feels so odd to type it!). The other interesting thing to me was the collaboration between VMware and Amazon that was announced last year. This is currently in a tech-preview format, and I think it’s an interesting pivot. There have been lots of different partnerships of this ilk over the years, but I do think this one is significant. The appeal to me is the possibility of a cross-over of skills. As we all know, finding someone who is equally strong in two areas is tricky – and being someone who can comfortably talk about VMware and Amazon with equal authority could be an interesting area.

Right now my knowledge of Amazon is pretty thin. Like many, I had an account for testing purposes – usually for things like VMware vRealize Automation, but also to test products that leverage AWS as it relates to VMware technologies, such as Ravello (now owned by Oracle) and Velostrata. On the plus side, as a recent vExpert I have, as a benefit, access to Pluralsight’s library of courses. So the plan is to use my Mon/Tue/Wed to work through these courses, and maybe do the exams associated with Amazon certification. I don’t suspect that this will lead, or even relate directly, to finding a new role – but what’s important to me is getting my “IT Brain” moving again. The other thought I had is that learning something new will inspire some blogging on my part as well, and that blogging will help (re)build my presence in the community. But also – learning something new can never hurt….


Posted on April 10, 2017 in Amazon, Announcements


Retail Software Update/Upgrades in the era of the Silver Surfer….

Old YouView

New YouView

So I have this PVR box here in the UK called “YouView”, which is now pretty much standard fare – you know, Series Link, pause/rewind live TV, etc. This week they did a software update/upgrade which reskinned the thing with quite a shift in the UI. The UI change is pretty typical of what’s in fashion nowadays, and you see it on modern-day websites designed for tablets: reduce the detail and the menus, and opt for the more stripped-down ’tile’ view, along the lines of OSes like Windows 8/10. The kind of less-is-more approach.

Of course, this raises the thorny question of when a software change is a patch, an update or an upgrade. This old category question has got even more blurry, as stuff that was meant to just fix things is now generally sweetened up with additional features or a new look. The other thing SW vendors are doing is “deprecating” features. This is a clever use of language for what is effectively an arbitrary removal of functionality without notice. Finally, with domestic retail software we are seeing an increasing use of over-the-air updates which are mandatory, not optional – and happen automagically without you triggering them. I guess this is a requirement nowadays: as more and more devices are web-connected, when vulnerabilities are discovered those fixes need to be pushed out quickly in order to gain ‘herd immunity‘ from potential viruses or exploits in badly patched or managed environments.

I guess my generation is probably going to be the last to be irritated by this, as the younger generation will be able to absorb software changes at a fast rate, and have more important things to do – like curating an interesting image of themselves on social media platforms, and wondering why their Uber hasn’t arrived yet.

But I think the retail software people are forgetting a core demographic: the baby-boomer generation, or “silver surfers”, who react badly to any change, of any type. I’ve seen this happen loads with my Mum, as Microsoft ceaselessly make changes at an almost weekly cadence, for almost negligible benefit – unless they define “benefit” as confusing the shit out of my elderly parents. So how to manage these radically divergent user types? Well, I think these vendors should be going back to a very simple Q&A: “Do you want our radical new update that makes everything bright and shiny, or would you rather keep the good classic look?” At the very least, the ability to go back to a classic look and feel should be offered. With the rise in the aged population, there’s going to be a rise in people who struggle to adapt to change, and need to make notes on ‘how to do stuff’.

Of course, the silly thing is that this ’tile’ UI is itself quite old-hat now. I mean, it’s been around for donkey’s years, and I think the first time I saw it was on an early AppleTV. Personally, I prefer the good old-fashioned list – when you could see more on a single screen, navigate through more content in a single page, and also see what shows I’d partly watched… Finally, with every mass software update there is always a percentage of DOA updates. Mine went through perfectly fine; others less so. I assume retail software vendors budget for this, and have the PR chaps ready for any blowback…


Posted on March 18, 2017 in Other


Altaro VM Backup V7 Released

Download the 30-day trial:
Product Info:

Hi there, and thanks for reading this blog post about Altaro VM Backup. I was asked by the guys at Altaro to take a look at their latest release. I said yes, and I also managed to persuade Altaro to make a donation to the charity (Aquabox) who I’m volunteering for whilst I look for a new role. So firstly, a big thank you goes out to Altaro for agreeing to this arrangement. I think it’s a setup that works well for all: Altaro gets exposure for their new offering; I get stick time with a product that’s new to me – and a good cause benefits as well. I managed to raise £280 for Aquabox. If you want to donate to Aquabox as well, click the logo!

Let’s start with some basic facts. Altaro has won a number of plaudits from reviewers on Spiceworks and elsewhere. Their Altaro VM Backup software can back up both VMware vSphere and Microsoft Hyper-V, so it is handy for those people working in a hybrid environment. It’s licensed on a per-host basis, not per-socket or per-CPU, so customers who go for high-density consolidation ratios (the number of VMs per host) really benefit from a licensing perspective. It’s chock-full of all the features you would normally expect from an enterprise backup system. Altaro VM Backup is fully compatible with Microsoft VSS, and that means you will get a consistent backup from tricky customers like Microsoft SQL Server. The software is granular enough to restore individual files and emails from within a virtual machine backup. Finally, a number of backup targets are supported, including USB external drives and flash drives, eSATA external drives, file-server network shares (via UNC), NAS devices (via UNC) and RDX cartridges – as well as the offsite Altaro Backup Server with WAN acceleration. In my own case, I pointed my simple Altaro server at my local NAS box, which already had a backup share accessible to Microsoft Windows; the same NAS is visible to my VMware ESXi hosts on the same network using NFS.

The Setup

As you might expect, the setup routine was a relatively trivial affair, and indeed the software itself does a good job of walking you through the 3-step routine to provide the core details needed to do your first test backup – this means adding your VMware vCenter, individual VMware ESXi hosts or Microsoft Hyper-V hosts.

Each of these stages has a ‘test connection’ component before you proceed, which you can see in the screen grab below:

The next stage is adding your storage options for carrying out the backup itself. You can opt for a directly connected device, or for a remote location supported by UNC. In my case my Altaro VM Backup Server was a Windows 2012 R2 virtual machine, with access to my remote NAS.

As you can see, once a backup target has been added it’s simply a case of dragging and dropping a VM onto that target. From this point onwards, most of the admin tasks are of the drag-and-drop variety – dragging VMs onto predefined schedules and retention policies, so you can control the frequency of backups and how old backups are discarded. As my lab has been offline for a year, I don’t really have that many VMs to back up, except of course the infrastructure VMs that make up the lab itself. So I decided to back up these VMs as a matter of course.

What’s New

The V7 edition boasts a number of new features. The first is “Augmented Inline Deduplication”. This decreases the time it takes both to take and to restore a backup. It creates the smallest backup size, and doesn’t require you to group VMs together to get the benefits. The fact that it’s inline means the deduplication process isn’t run as a post-backup process. This is important, because the storage savings that deduplication brings mean little in real terms if you still need the temporary space required to carry out the backup. By definition, backups often mean backing up the same bit of data repeated across different VMs over and over again, and this deduplication cancels out the bloat in backups.
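As a toy illustration of why inline dedupe pays off – this is a generic sketch, NOT Altaro’s actual implementation – the idea is to hash each chunk of data as it streams in, store identical chunks only once, and keep a per-backup “recipe” of hashes from which the original stream can be rebuilt:

```python
# Toy inline deduplication: hash chunks as they arrive, store each unique
# chunk once, and record a recipe of hashes for later restore.
import hashlib

def dedupe_store(chunks):
    """Store unique chunks keyed by content hash; return (store, recipe)."""
    store, recipe = {}, []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # identical data is stored only once
        recipe.append(digest)
    return store, recipe

# Two "VMs" worth of blocks that share mostly identical OS data
vm_blocks = [b"os-block", b"os-block", b"app-A", b"os-block", b"app-B"]
store, recipe = dedupe_store(vm_blocks)
print(len(store), len(recipe))  # 3 unique chunks backing 5 logical blocks

# Restore: rebuild the original stream from the store using the recipe
restored = [store[d] for d in recipe]
assert restored == vm_blocks
```

Because the hashing happens as the data streams in, there is never a moment when the full, un-deduplicated copy has to sit on disk – which is exactly the temporary-space point above.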

Altaro have published blogs that explain this augmented deduplication process. One blogpost is centred around Hyper-V, and they have a very similar one for VMware as well. Calculating up front the exact amount of potential savings any customer will get from any dedupe process is difficult. However, the Altaro VM Backup dashboard does a good job of showing those dedupe and compression savings.

Also new to V7 is “Boot from Backup” – the ability to power on a VM directly from the source backup. Typically, this means a network location like a CIFS/NFS server share/export is mounted directly to the hypervisor, and the VM is powered on from there. That means the IO performance will be constrained by the disk capabilities of the system backing it. Remember, this is merely a way of getting the VM up and running in the shortest possible time. In most cases the availability issue trumps any short-term performance hit, because it’s the clever stuff going on in the background that matters: the restore process continues in the background, and once it has completed, all you need to do is schedule a small maintenance window to shut down the “boot from backup” VM and replace it with the restored copy. As you might expect, a reboot takes less time than waiting for a full VM restore.

The “boot from backup” feature has two modes – verification and recovery – and of course the performance mileage will vary depending on the qualities and capabilities of the storage backing that VM’s backup target location.

Once you have gone through the usual suspects of selecting the mode, backup location and the VM itself – you get granular control over the way the VM is brought up. This includes attributes such as renaming the VM and ensuring its network card is in a disconnected state – to avoid conflicts with the existing VM.

What’s Next?

Altaro promises that VM Backup V7 will soon gain a feature called the Cloud Management Console (CMC), which will allow administrators to remotely monitor and manage all their backup installations using a single tool that can be accessed from any web browser – without a VPN or any requirement to be on-site. The CMC dashboard gives a more site-by-site or customer-by-customer point of view and is designed for a more multi-tenant approach to backup management.

What’s There?

Well, as I stated earlier, everything you’d expect from an enterprise backup solution is pretty much there. So alongside multi-hypervisor support you’ll see an impressive list of features:

  • Drastically reduce backup storage requirements on both local and offsite locations, and therefore significantly speed up backups, with Altaro’s unique Augmented Inline Deduplication process
  • Back up live VMs by leveraging Microsoft VSS, with zero downtime
  • Full support for Cluster Shared Volumes & VMware vCenter
  • Offsite backup replication for disaster recovery protection
  • Compression and military-grade encryption
  • Schedule backups the way you want them (View video)
  • Specify backup retention policies for individual VMs (View video)
  • Back up VMs to multiple backup locations

So there are plenty of positives to be had, alongside a competitive licensing policy… but….

What’s Missing?

If there’s one repeated criticism levelled at Altaro VM Backup, it’s the lack of public cloud backup targets. For offsite backup you’re very much dependent on having another site in which to host the Altaro VM Backup Offsite Server. Now for many small businesses this might not be an issue, as many SMBs actually have more than one location – such as their main warehouse facility and the customer-facing location. However, for SMBs that literally only have one location this is tricky. Such customers might look to services like Amazon S3, Glacier or Azure as a way of getting their backups a distance from the core site. The alternative is transporting removable media to another location – and that feels decidedly 1990s for an era where data can and should be held anywhere.

I raised this issue with the guys at Altaro and they pointed me to blogposts they have which show the Altaro VM Backup Offsite Server being used in Azure. The first blogpost covers off the planning and pricing aspects of placing an Altaro Offsite Server in Microsoft Azure. The second blogpost explains the process of how to set it up. This configuration is something Altaro intends to develop fully – it is in the pipeline, and part of an overall cloud strategy – but, understandably, they weren’t able to give me an ETA, because it would be commercially sensitive to do so.

In Conclusion

If you are familiar with virtualisation and have been following the backup space for a while – there are no surprises here. What’s certainly true for me is that a new tier of backup vendors is entering an already crowded space. This is not dissimilar to the shake-up we saw in the storage space over the last 5 years. Features that were once unique and only available from premium vendors are now going mainstream. The question remains – if you are working with a premium mainstream vendor, what unique features are they offering that you can’t get elsewhere from a relatively new player in the market who is hitting the streets with very attractive pricing and licensing policies? So I see it as a mark of ‘due diligence’ to scope out the alternatives, rather than simply disengaging the brain and signing the renewal contract. You wouldn’t do that with any other insurance premium, so why do it with your backup insurance premium?

Finally, home labs and small environments that need only basic features can use the free edition, which enables backup of up to two VMs for free, valid forever.

Posted by on January 30, 2017 in Other, vSphere

Comments Off on Altaro VM Backup V7 Released

@LastPass and Password Management


This blogpost is about my recent escapades in password reset and password management. Before I dive in I need to fess up. Despite decades of experience, I have over time seriously mismanaged my passwords. That’s despite having used tools like LastPass for a couple of years. I haven’t been naughty in the sense of writing down passwords on Post-It notes, but I have re-used similar or identical passwords across multiple websites – even though I knew this exposed me to so-called “weaker sister” style breaches. That is to say, if you use the same password across multiple sites, the site most vulnerable to attack is the one that then allows access (assuming the same user ID is in use) to all the rest. So this New Year I decided to put a stop to this bad practice once and for all. What follows is a description of what that was like, how bad/easy it was, and some general thoughts about the nature of security in the modern world. I might add that the recent breach of 1B Yahoo user IDs was a wake-up call. I wasn’t personally hacked and I believe my account was secure (after all, 1B accounts takes some going through even by modern computing standards). I guess the operative word there is ‘believe’.

Firstly, if you’re a LastPass user – check how many websites you have listed, and run the Security Challenge. This does a good job of flagging up how bad your situation is: compromised passwords, weak passwords, reused passwords and old passwords. You can see the result of my score above. Actually, this was in a terrible state until I set about resetting the passwords. I had bad reports for Steps 1/2/3/4. My master password (the one that allows access to the LastPass vault) was the same as one of the websites I had saved. LastPass does warn you about doing this – but I foolishly ignored it and never got round to resetting it…


Secondly, where possible use the LastPass ‘Change Password Automatically’ feature to reset bum entries. This feature works well with the websites it supports (PayPal, Twitter, Amazon). However, it DOES NOT work with the vast majority of other websites. This is NOT LastPass’s fault, but because we have no uniform standard for how password reset webpages should be constructed and formatted. This means authenticating individually to each and every site, and doing the password reset manually. I had over 240 sites. A follower on Twitter had over 600 (admittedly he said he was okay as every one was unique).

Note: Incidentally, although I found “Change Password Automatically” listed as available for Yahoo, it didn’t work. I also found it got confused with the multiple Google accounts I have. I think this is because both Yahoo and Google have their own special UI and method of handling logins. I found LastPass would reset the wrong account’s password.


Thirdly, let LastPass generate new passwords for you. But beware that not all websites support special characters (!@£%^&*_), and some require things like two numbers and two upper-case letters. Also, I found occasionally that LastPass would not ‘see’ the password reset, and wouldn’t prompt to update the username/password stored in the vault. I took to copying the password to the clipboard just in case – and doing manual updates. Again, this is because there are really no standards for how password resets are managed on web pages.

Also, LastPass creates a little icon in the username and password fields – this works on Yamaha’s website, for example, but not on Hertz’s.



Note: You can right click in these fields, and select Lastpass, and Generate Secure Password

I also spent many minutes trying to find the place to reset my password on some websites, which slowed the process down. This is because there is no real standardisation for where this setting lives. Sometimes it’s easier to pretend you’ve forgotten your password, to get an easy-to-click reset link. However, this isn’t standardised either – some websites reset your password to a value which you then have to change (which means you wind up having to locate and work with their password reset feature anyway).

Fourthly, rinse and repeat for every single login ID – I ended up running my 240 stored usernames/passwords down to about 160. This is because some of the websites no longer exist or I couldn’t access them. For instance, I had username/password combos for internal systems stored behind a VPN-accessible firewall. This does raise the spectre of bad username/password combinations that can never be fixed. However, I take the view that if ALL the websites I do still have access to each have their own unique password – I’m as safe as I could ever be. And in comparison to my poor rating before, I now have a much better situation. It does raise the issue of remembering to delete accounts or reset passwords on systems you are not using anymore. The Yahoo warning was about an email address I have not used in years….

Now for some closing observations. Firstly, you will notice that the word ‘standardisation’ comes up a number of times. It’s my belief that this lack of standardisation in the industry concerning password management significantly reduces the value of tools like LastPass. This isn’t LastPass’s fault; they must work with the reality they find. However, given recent breaches, I think pressure should be put on the large stakeholders to adopt uniform standards.

Secondly, it shocks me that today, in 2017, many websites use your email address as the username. I doubt very much that the average Joe/Josephine creates a bogus email address simply for the purpose of logins. This means the very means by which people request password resets can be hacked. I see no reason why folks can’t have a user ID that is distinct and separate from their email. It would make swapping out an email address when it changes infinitely easier. If I change my email address, many hundreds of entries in my LastPass vault become stale or invalid.

Thirdly, given this is a manual process carried out by me – a monkey with an oversized wet brain – mistakes can and do happen. There are a couple of websites where I screwed up their password reset process and found myself locked out. This meant I had to request a password reset email (or, in some cases, get codes sent to other email addresses or my phone).

Finally, although LastPass has an automatic password reset feature, it’s not supported uniformly. This makes the process very laborious, and is a disincentive to fix the problem – let alone to keep resetting passwords. It’s a common standard in enterprise environments to change passwords on a 30/60/90-day cycle. No such standard exists in the private internet space. It took me ALL DAY to fix my problem – starting at 9am and finishing at nearly 11pm. It’s unacceptable to me to have to carve out a whole day annually, quarterly or monthly to reset all 160 entries. The only reasonable approach is to do a block of 10 once a week, or alternatively make a folder of the MOST sensitive accounts (email, banking and anything that processes money – PayPal and eBay, for instance) and put those on a more frequent cadence of resets.


Posted by on January 2, 2017 in Other

Comments Off on @LastPass and Password Management

Employee Alert: VMware Foundation Charity Listing – AQUABOX


Hello my fellow VMwareans. (Yes, I know that makes people sound like they’re some kind of alien species that has just landed on planet Earth.)  Although this post is public on my blog, it’s actually directed at all the folks who work at VMware. I’m currently on my gap year, which officially ends at midnight on the 31st Dec, but will most likely carry on until such time as I find gainful employment. One of the things I’ll be doing in the meantime is volunteering. I had thought of starting in the New Year, to mark the end of this time. But after attending September’s VMworld in Vegas – I realised that there was no time like the present.

If you are searching for Aquabox in the VMware Foundation – change the filter to “UK” and you can locate it by its Registered Charity Number, which is 1098409.  This year the company has allowed you to donate a fixed sum to a good cause, and if you donate more, this triggers a matching donation from VMware.

What follows below is a description of Aquabox and what we do. I realise many of you are time-poor, so if you prefer videos: grab yourself a brew, some M&Ms, and watch this 8-minute YouTube video. It will tell you why Aquabox is so important, and how the technology works.

For the rest of you who enjoy reading my excessively verbose blogposts… Hello!

One of my activities is volunteering at a charity local to me called Aquabox. I say local to me because, although the technology and concept were developed in the town I now call home, its remit is global. So what is Aquabox? At the heart of it is a unique and innovative water filter that’s gone through a number of iterations over the years. When a disaster strikes, the first thing that goes to pot is the water supply. You can survive for many weeks without food (if you’re well nourished), but without clean and non-polluted water you will die in days (and in some cases hours). Historically, the big charities have distributed chlorine tablets to kill off waterborne bugs such as cholera. Have you ever taken a gulp of water in a swimming pool? Think of that, but 100 times worse. So what happens is people in dire straits (and this often includes children who know no better) drink dirty and polluted water – and die not of starvation or thirst, but from the diseases that water contains.

There are two types of AquaFilter – Community and Family. As you might imagine, the big daddy serves a large number of people, whereas the Family is intended for a group of five. As for the Aquabox itself, some of the filters have been running for 4 years in Africa. The technology is robust, simple and easy to maintain. As a piece of technology it’s a thing of beauty to any engineer worth their salt, and it’s perfectly fit for its purpose. And of course, it needs to be – given the hostile environment it has to function in. Aquabox has been operating for 20 years – and employs just one part-time manager; the rest of us are volunteers. So you can rest assured that the vast majority of your donation will go to the end-user. Aquabox started its life as part of the Rotary Club organisation, which has a global reach and a good reputation for trustworthiness. So the supply chain of getting the boxes to the family is one that comes with high integrity.

As for myself, I’ve been packing the boxes, which include not just the AquaFilter but a whole host of items a family would need in the first hours and days of a humanitarian crisis. The other thing I’ve been doing is trying to establish other methods of raising funds. As a former employee I thought of VMware and you, my former colleagues – and the VMware Foundation. I’m exceedingly grateful to the folks within the VMware Foundation who have expedited this new beneficiary so swiftly and efficiently. And I’m very grateful to my good friend Hans Bernhardt (who many will know as Chicken Man!) for helping get the word out internally.

By its nature Aquabox goes everywhere, and we operate in the most extreme of situations because that is where the greatest need exists. Aquabox has been helping the many tens of thousands of people who remain trapped inside the war zone in Aleppo. These include incredibly brave team members from our aid distribution partner Hand in Hand for Syria يدا بيد لنبني سوريا. Hand in Hand for Syria is a UK-registered charity, and the team at Aquabox have been sending shipments for distribution to Syrian refugee camps; they have 1,000 Aquaboxes ready to give out to families once they have been evacuated from Aleppo. Very few aid organisations are able to operate in Syria because of the nature of the conflict, and the only way to achieve this is with trusted partners whose only concern is life and limb.

Of course, Syria is not Aquabox’s only recent recipient of aid. In fact, the main focus has been Haiti. We are continuing to send emergency disaster relief to the people of Haiti, whose lives have been thrown into chaos following the devastation of Hurricane Matthew. We are sending a further 250 family-sized Aquaboxes, to add to our previous shipments of 500 Aquaboxes and 18 Community AquaFilters.

That timeliness is even more acute today. As you will have seen, Aleppo in Syria is about to fall, sparking yet another massive humanitarian crisis – with a mass exodus from the city of almost biblical proportions. I want to put aside any political analysis or opinions, and ask you to think of those people this Christmas time – the vast majority are innocent civilians just caught in the crossfire. People just like you and me, caught in the wrong place at the wrong time. All too often in our modern, media-saturated world, tragedy spills out onto our screens. The scale of the suffering can leave you feeling numb at times. It’s so overwhelming it makes you wonder what can be done. Well, something can be done. An Aquabox can be sent. You can make that happen. Today.

Please think of Aquabox if you have the opportunity to donate.

And if you’re reading this and you’re not a VMware employee, there’s nothing stopping you donating from your own pocket. Think of it this way: how much do you spend in coffee shops in a week? Why not give that amount?


Posted by on December 16, 2016 in Announcements

Comments Off on Employee Alert: VMware Foundation Charity Listing – AQUABOX