Fluffy Cloudy Amazon Web Services Thoughts (Part N of N)
Disclaimer: I’m not an AWS expert. I’m learning. I regard myself as a novice. Therefore I reserve the right to make idiotic statements now, which I will later retract. My thoughts on AWS are very much a work in progress. So please don’t beat me up if you don’t agree with me. I’m just as likely to respond with “Gee, I hadn’t thought of that – you have a point!”
Well, okay, the title of this post is a bit of a joke at my expense. Just before I joined VMware in 2012, I embarked on a series of blogposts about vCloud Director [yes, just as the company changed strategy towards vRealize Automation!]. It became quite a series of posts. I dubbed it my “vCloud Journey Journal”, and it ended up with a whopping 73 posts, in what almost became like writing a book through the medium of a blog. Just so you know, this is NOT a good idea, as the two formats are totally incompatible with each other. So anyway, I don’t want to make the same mistake this time around. My intention is to write stuff as I learn.
After vCD, I dabbled with vRealize Automation (which was once the vCloud Automation Center product, if you remember, acquired via DynamicOps). That product was fine, but it was very much about creating and powering up VMs (or Instances, as AWS likes to call them). I didn’t feel I was really using the public cloud “properly”, but merely extending virtualization features up into the public cloud rather than consuming stuff in the as-a-service kind of way. Sorry to my former VMware colleagues if this is a massive misconception on my behalf – the last time I touched vRealize Automation was nearly four years ago, and things can and do move on. Plus I’ve been out of the loop for 12 months.
The last couple of weeks have modified my experience, and as a consequence got me thinking all over again about what public cloud is, what it means, and how it’s defined. Sadly, this became a very boring and tired parlour game in the industry many years ago. I personally think the game of “definitions” – “What is public, private, cloud?” – is a bit moot for the community. But it kind of matters to me, as the typical in-house, on-premises type who made a name for herself by helping others set up, configure and troubleshoot the virtualization stack from around 2003-2015. But even I feel that the debate moved on long, long ago – and this is me playing catch-up.
The bottom line for me is that whilst it’s easy to grasp a concept in words and PR marketing terms, it alters once you properly immerse yourself in it – rather than merely dipping your toe into the waters before heading back to the reassuring comfort of an in-house, on-premises system. I’m perhaps being a bit over-dramatic – it’s not as radical as a “Road to Damascus” conversion. After all, I was for a time (in name only) VMware’s Senior Cloud Infrastructure Evangelist. A job title I always gently took the pee out of – because heck, four-word job titles almost always have nothing to do with what you do! And if I’m honest, that was true for me. In the end I was an “evangelist” in name only – my main work being within the Competition Team at the time, I ended up spending more time evaluating other vendors’ products and communicating internally to sales folks. This isn’t me bellyaching, by the way – it was a perfectly interesting role. I’m just trying to be honest about my credentials (or the lack of them, as the case might be!).
I’ve been beginning to wonder whether there are shades of grey within public cloud. You could be 99% on-premises but subscribing to Amazon Glacier for off-site data backup purposes (I’m not sure if that’s a great idea, but hey-ho). Would that mean you were running a hybrid cloud? For me, for it to be truly hybrid you would have comms going back and forth seamlessly between your off- and on-premises infrastructure – most likely with a site-to-site VPN or, if needed, a “Direct Connect” from an AWS partner. Anything else is just kind of “playing at it” and stretching the term “hybrid cloud” beyond its elastic limits. Also, for me hybrid cloud could mean that some of your stuff runs in AWS, Google, Azure and Virtustream. If it is really about matching the application’s needs to the correct provider, then why be “ideological” about which provider you use? That said, I’ve always felt that there are operational benefits to keeping-it-simple-stupid (KISS).
So with this brief and pithy intro. #irony.
Let me get to the meat and potatoes of my fluffy cloudy thoughts. I was trying to think of a personal analogy for the difference between on-premises virtualization and public cloud. You’ll notice I don’t use “Private Cloud”, because I don’t personally think it exists in any significant amount. People say they have a “Private Cloud” often because it gives Senior Management the reassurance that they are still innovating and relevant. Cloud is such an amorphous concept that anyone can make the claim to having it… In my limited experience very few businesses truly have a private cloud – what they have bears virtually no resemblance to the features, functionality and feel of AWS, Google or Azure. You could very fairly say:
You: Michelle – that’s not a problem. Private Cloud isn’t trying to be Amazon.
Me: Fair Point
There’s another reason I prefer to compare on-premises virtualization and public cloud – because the vast majority of customers are still managing a virtualization environment with the same moribund silos of the past (server team, storage team, network team). So I personally feel we are not (yet) living in a true Hybrid Cloud environment, and what we actually have is more of a Private Virtualization and Public Cloud schism.
One thing that has come across strongly in my last 4 weeks or so immersed in AWS is that this thing is so NOT just about VMs/instances and replicating the plumbing you have on-premises. Of course, I’ve spent a lot of the last couple of weeks doing precisely that, by following Pluralsight’s SysOps Admin course. And without sounding superior – more plumbing work isn’t frankly that interesting. Although occasionally there have been interesting twists on what were previously well-understood technologies. For instance (if you forgive the pun), AWS Route 53 DNS is interesting, as it has functionality I’ve not seen in your typical AD DNS implementations. Yes, I know DNS is as boring as hell, but it kept my interest going through modules that could have been quite tiresome. For me the interesting modules were things like Relational Database Service (RDS), AutoScaling and Elastic Load Balancing (ELB) – the latter because they are so “easy” to set up compared to rigging together fully-blown LB services, which I’ve done in the past with Microsoft NLB (YUK!) and F5 (YUMMY!).
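To give a flavour of the sort of thing Route 53 can do that your average AD DNS box can’t, here’s a minimal sketch of a weighted routing policy via the AWS CLI. The hosted zone ID, domain and IP addresses are all made-up placeholders, not from any real environment:

```shell
# Weighted routing: send roughly 80% of traffic to one endpoint and
# 20% to another. The zone ID, name and IPs below are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "www.example.com", "Type": "A", "TTL": 60,
        "SetIdentifier": "primary", "Weight": 80,
        "ResourceRecords": [{"Value": "203.0.113.10"}]}},
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "www.example.com", "Type": "A", "TTL": 60,
        "SetIdentifier": "secondary", "Weight": 20,
        "ResourceRecords": [{"Value": "203.0.113.20"}]}}
    ]
  }'
```

Try doing *that* with a bog-standard AD DNS zone.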
The “easy” is with heavy Stephen Fry-like speech marks, because we all know everything is “easy” when you know what you’re doing, but much less so when you don’t. BUT, one thing I’ve also found is how incredibly easy it is to miss the pre-reqs required to make a feature work. I’m trying to think of a good example where I was repeatedly caught out by the “more haste, less speed” principle, as it happened LOADS of times. I have about 10-20 blogposts about where I screwed up, and how I fixed my mistakes. I guess one example is how you wire instances for use behind an ELB. For an ELB to start up properly you must have TWO instances, each on a different subnet in a different availability zone. The documentation states this quite clearly, and it was duly noted – but I was forever forgetting to set this correctly when creating an instance.
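For what it’s worth, the subnet/AZ requirement is easier to see from the CLI than from the console. A minimal sketch of standing up a (classic) ELB across two availability zones – all the IDs here are placeholders you’d swap for your own:

```shell
# A classic ELB spanning two subnets, each in a DIFFERENT availability
# zone. Every ID below is a placeholder.
aws elb create-load-balancer \
  --load-balancer-name my-web-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-01234567

# Then register one instance from each subnet/AZ:
aws elb register-instances-with-load-balancer \
  --load-balancer-name my-web-elb \
  --instances i-0aaaa1111 i-0bbbb2222
```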
I find this UI/page difficult to read – perhaps that’s my very mild dyslexia kicking in. I could make a case here for a better web-based UI, but I wonder how many AWS SysOps people actually spend time in front of this interface? I don’t know – perhaps they do – or are they using some other 3rd-party front-end (such as vRealize Automation) because they want a single-pain-in-the-ass from which to do their management, with a consistent UI. (Aside: I just hate the phrase single-pain-of-glass. It’s repeated as much as “strong and stable government”, and each time I hear it a synapse dies and goes to heaven, and a little part of my soul cries inside.) Perhaps a good SysOps person is using PowerShell for AWS or the CLI tools for the bulk of their “admin”.
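If they are, the sort of thing they’re probably typing looks like this – a one-liner I’ve found handy for a quick inventory, which arguably reads better than the console page I was moaning about (no placeholders needed here, it just reports what’s there):

```shell
# Quick inventory of instances: ID, type, AZ and the Name tag,
# rendered as an ASCII table.
aws ec2 describe-instances \
  --query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,AZ:Placement.AvailabilityZone,Name:Tags[?Key==`Name`]|[0].Value}' \
  --output table
```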
One thing I’ve noticed about AWS is how absolutely intolerant it is to renaming things. There are circumstances I’ve found where you can’t even modify the description, never mind the object name itself. This starts to get a bit irritating after a while. And it’s the most compelling argument for planning and architecting your environment before you leap in with both feet and start creating stuff. A part of me actually thinks the AWS console encourages really bad ad-hoc administration. In an effort to get you up and running and creating instances as soon as possible, all manner of objects (security groups, for instance) are created, some with really bad defaults (new subnets pick up the default network ACL, which permits everything – ANY/ANY is the rule!).
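The one saving grace is that the “Name” you see in the console for EC2 objects is really just a tag, and tags CAN be changed after the fact. A minimal sketch (the instance ID and tag values are placeholders):

```shell
# The console "Name" column is just a tag called Name – unlike many
# object names and descriptions, tags can be added or changed later.
aws ec2 create-tags \
  --resources i-0abc12345def67890 \
  --tags Key=Name,Value=web-server-01 Key=Environment,Value=dev
```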
The other thing you will see is a fraction of the knobs and whistles that make up a virtualization environment. This is true of all cloud environments, even with something as basic as an IaaS product like vCloud Director (R.I.P.). There’s always pressure from SysAdmins like me to have some widget bubbled up from the underlying virtual infrastructure layer – or, to use an old-fashioned term, “The Hypervisor” – whether that be Xen, ESXi or Hyper-V. Of course, the WHOLE POINT of cloud is to mask that complexity away, to make us all Happy Little Consumers of the Cloud. As a virtualization admin I was sometimes a bit horrified that neither vCloud Director nor AWS allows a slick and easy way to, say, increase the size of a virtual disk or volume. In both environments it is a total bore and chore. Some people would say (and I would agree with them) that in the world of the cloud, if you find yourself getting in a pickle because of a lack of free space on a boot disk or data volume, that is your stupid operational fault – and you should really be looking to fix this problem so it never happens EVER AGAIN! However, I would argue that the road to hell is paved with good intentions – things can and do go wrong. Of course, the problem with this argument is that every customer is going to have their favourite knob or button they wish was bubbled up to the cloud layer. If the goal is reduced functionality for greater operational efficiencies, then some compromise is going to have to happen – OR, better still, an improved operational model is developed so this sort of annoying <another expletive deleted> becomes a thing of the past…
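For the record, growing an EBS volume from the CLI looks something like the sketch below. The volume ID and device names are placeholders, and the second half assumes a Linux instance with an ext4 filesystem – note that even after AWS has grown the volume, the guest OS still has to be told, which is exactly the bore-and-chore I’m complaining about:

```shell
# Step 1: ask AWS to grow the volume (the ID is a placeholder).
aws ec2 modify-volume --volume-id vol-0abc1234def567890 --size 100

# Step 2: inside the instance, grow the partition and the filesystem
# to use the new space (assumes ext4 on /dev/xvda1).
sudo growpart /dev/xvda 1     # extend partition 1 to fill the disk
sudo resize2fs /dev/xvda1     # extend the ext4 filesystem
```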
The analogy/anecdote. I’ve got an analogy for the difference between Private Virtualization and Public Cloud, and it goes all the way back to my first job in IT in the early 1990s. I’m not sure if this is a good analogy, but it works for me. Back then I was an “Applications Trainer”, teaching numb-nuts how to move the mouse and what the [B] button did in Word (what can I say – I was in my early 20s, knew nowt about IT really, and straight outta college). Occasionally I would get more challenging work to do, such as Advanced Excel or a database application (I still have nightmares about teaching FoxPro for Windows!)
Anyway, one project had me helping a local authority with a spreadsheet that was getting slower and slower. When I did the analysis I could see what they had done – through a combination of linked sheets, linked files and macros, they had effectively built a database using a spreadsheet app. Of course, as soon as the number of records started to mount, this thing had begun to slow down – to the point where they bought a server to run Microsoft Excel (I kid you not!). So I asked them why they had used a spreadsheet and not a database to store this info, which would have been much more appropriate (I was NOT quite as blunt as I am in this blogpost – to use modern parlance, I had to take the customer on a journey away from their rank stupidity).
They told me they were scared of DBs because they required so much upfront knowledge. The thing they loved about Excel was there were no rules (no fields, field types, field lengths) to worry about – they could walk down to the deep end and jump right in. What they loved about spreadsheets was they could get started quickly and gain benefits right away…
I think this isn’t a bad analogy for Private Virtualization vs Public Cloud. I think virtualization drove such immediate benefits that even if you got it wrong and built it badly, you still got benefits (and gawd, have I seen some appalling VMware deployments in my time!). Cloud is like the database. It takes time, planning and architecting – even if AWS think they are being “helpful” by making it easy to create instances, and thus drive a larger credit card bill…
Because if you do just jump in and start creating stuff, it ain’t easy to change it later. It’s a bit like driving down the freeway and realising you have just missed your exit. Sure, you can come off at the next exit and do a U-turn… but wouldn’t it have been better if you hadn’t screwed up your journey from the get-go? You’d be further along to your final destination…