May 18

ThinkPiece: Cloud Field Day: Can an old dog learn new tricks?

A rather fuzzy picture of the CFD3 crew. That’s okay – I think I look my best when everyone hasn’t been to SpecSavers recently…

Well, it seems such a long time since I was in the Bay Area at the beginning of April, and I have been meaning to blog about my experiences and insights from Cloud Field Day for weeks. Sadly, I became very ill on the way home from the event – I came down with bronchitis. If you’ve never had it – count yourself lucky. It took me weeks to recover. After getting over that, a perfect storm of events overtook me – both good and tragic.

Anyway, I’m feeling MUCH better now, and I’m finding some cycles to work thru that “to do list” we all perpetually have… This won’t be my only blog on Cloud Field Day, as I intend to write a blog about the start-up that came out of stealth while we were there – called Droplet Computing. They are worthy of a blog all to themselves…

So there were a number of more traditional shrink-wrapped software vendors at the event – and without fail each one endeavoured to show how they were pivoting their traditional stack to the cloud. It makes perfect sense to do that considering it was a “Cloud Field Day”. The clue is rather in the title. I want to be kind and say that all the vendors were at least trying to do that. But at times it did rather feel as if Toys R Us were reacting to the onslaught of Amazon. And it’s not without irony that, just as some traditional retail operators are really struggling to deal with the competition Amazon brings in the domestic space, some traditional software vendors are struggling to deal with the competition that a public cloud vendor like Amazon brings to the table. There is an element of “let’s wait and see if this change really is happening” – only to find that once the change has happened, you are on the back foot from the get-go. It’s a much more dynamic and competitive landscape which is moving very quickly, and some of the traditional software vendors aren’t as “agile” (hateful word!) as their senior management might want to believe – or tell their shareholders and investors.

Now, this blogpost isn’t your usual “let’s bash the old ways, and slag off the traditional vendors because they are easy targets”. I genuinely would like to see their cloudy efforts succeed. Because I believe in competition. And I believe that a market that’s sewn up by a tiny cartel of big players is not in the customer’s or anybody’s interest. Competition is good for the Big Evil Corporates too – because it stops them becoming lazy and complacent. So I would like to give credit where credit is due. NetApp, for instance, has gone down the route of effectively creating a brand new unit to deal with the challenges they are facing. Essentially, putting all the storage goodness they have into Amazon AWS. This is native enterprise storage in the cloud, with all the features and functionality you used to enjoy on-prem.

[yes, I said on-prem. I’m bored already with the language police going on and on about on-prem(ises). Can we devote our mental energies to something that is actually IMPORTANT, for once?].

Note: This is the interesting Veritas presentation, and is worth a watch.
At the end of the week (when delegates are losing the will to live!), Veritas came in to talk about what they were doing. Sadly, the first part of the session was a bit of a bore fest – until a separate team came on to talk, and the Old Skool Veritas guys had left the room. Now, what they showed us looked very much like a start-up’s “minimum viable product”, and I’ve always hated that term, especially when it’s deployed by huge, huge companies. Guys, you need to up your game (not Veritas specifically…). MVP is a term beloved of the Valley and the start-up culture. Its rightful place is there – in the start-up culture, where smaller companies need to spend a little, impress a lot, react quickly – and, critically, attract VC cash. But I’m afraid MVP has no place in a massive incumbent. People expect you to apply some spit and polish, and the expectations from customers are naturally higher. Meet them. Exceed them. Anyway, I digress.

There was some speculation amongst the delegates about why Veritas had chosen the route they had. I made the point that we were in Veritas CorpHQ – why don’t we, like, just ask them? Mercifully, we were spared the usual corporate flim-flam. The guy told us quite straight down the line that they just could not get the features and code built quickly enough from the existing team. So they built their own.

There’s a critical message here for all companies like NetApp and Veritas. There’s a right way to do this and a wrong way. Trying to build the new company from the people and processes that built the old is by its nature like rolling a stone up a hill. You nearly always need to build the new company by incubating a new BU from within. This is not just some organisational chart activity – critically, that group needs to be championed, and yes, “protected” from other BUs/silos that would quite cheerfully strangle it at birth. So a bigly beautiful wall needs to be built around the company-within-the-company – and then it needs stuffing with cash. The other thing that needs to happen is that the sales folks who earned their commission from selling the “old stuff on the truck” need to be “compensated” and “incentivised” to sell the “new stuff on the truck”. And if they can’t do that – you need to get new people who will, because they have no stake in the previous model. As a friend quoted to me:

“Change the people, or change the people”

Think about that for a while…….

Anyway, I name-check NetApp and Veritas as companies who I think may have woken up and smelt the coffee and bacon (an odd mix, but actually it can be quite tasty, although as a Brit I prefer a cup of tea with my bacon butty). I think I would include Oracle Infrastructure Cloud (OIC) in that mix too. Whether NetApp, Veritas or Oracle are able to overcome the baggage that comes with their brands is anyone’s guess. But I do think they are at least trying, and if they put enough weight behind their respective projects there is at least a chance they will be successful. And it feels churlish not to at least encourage and recognise effort where effort is being made. Oddly enough, OIC didn’t go down as well at CFD as they did at the Ravello Bloggers Day (same venue, similar people, almost the same presenters). I’m not sure quite why that happened – as there are lots of positive things to say about their offering. Perhaps that’s because they led with Ravello (for which there’s a lot of love in the vCommunity?). Oracle comes with a lot of baggage in the vCommunity – so there’s a “perception” issue to overcome. Sadly, the Ravello Bloggers Day wasn’t recorded, and I think embedding the CFD video might actually do them a disservice. So I recommend checking out my blog on that event.

I’m afraid I could not say the same about Riverbed. Sadly, their presentation leant too heavily on older sales plays that had merely been re-jigged for a cloud era. You could tell they weren’t really connecting with their audience by the rigor mortis that was settling into the group. That also rather came across in the lack of vim and vigour in their presenters. I hate to say things like that, because it feels like a personal criticism, which I normally shy away from.

Things did brighten up with a presentation from one of their team members at the end of the session (Vivek Ganti) – who at least came across as someone who felt a passion for the work he and his team were doing. This guy didn’t feel like he was just “going through the motions”.

Sadly, however, what he was showing was some of the automation that Riverbed have put into deploying their appliances into an Amazon VPC. In fairness, if you were doing this all by hand using Amazon’s frankly horrid web UI you would have a piece of work on your hands. However, if you’re doing public cloud “right” you should be using your favourite toolset of utilities to leverage the API. That’s what public cloud is all about. Not Old Skool SysAdmins (like me!) clicking and filling in dialog boxes, but new-style DevOps SysAdmins who can stand up and tear down infrastructure with the flick of a script. One of the goals of this DevOps Public Cloud is to accept that nothing is really ever persistent in the old-style way – it should be by its nature volatile, so robustly automated that it can be destroyed and created in an instant. My point here is that this kind of automation is likely to make the average DevOps SysAdmin go “meh”.
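That “flick of a script” idea boils down to treating infrastructure declaratively: describe the desired end state in code, and let a script work out what to create and what to tear down. A minimal, vendor-neutral sketch in Python – the resource names are invented for illustration, and this is not Riverbed’s or Amazon’s actual API:

```python
# Toy illustration of declarative, API-driven infrastructure: describe the
# desired state, then converge reality onto it. Anything missing is stood up;
# anything not in the desired state is torn down - no dialog boxes required.

def reconcile(desired: set, current: set):
    """Return (to_create, to_destroy) so that current converges on desired."""
    return desired - current, current - desired

desired = {"vpc", "subnet-a", "subnet-b", "appliance"}
current = {"vpc", "subnet-a", "orphaned-test-box"}

to_create, to_destroy = reconcile(desired, current)
print(sorted(to_create))   # ['appliance', 'subnet-b']
print(sorted(to_destroy))  # ['orphaned-test-box']
```

This is the same mental model tools like Terraform or CloudFormation apply at scale: the script is the source of truth, so destroying and recreating the whole estate is routine rather than terrifying.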

Also, for me there is a more important issue here – the value in any software, whether it’s on-prem OR in Da Cloud, isn’t how easily it can be deployed and set up. That really should be a “given”. The real value is what that software allows the customer to do – which they couldn’t do before. Now, I guess you could say that standing up a multi-tier, load-balanced layer that offers redundancy and inspection of packets to ensure a smooth network experience can be difficult. I actually think Riverbed have a fantastic suite of products (although folks tell me they can be quite pricey). But I wasn’t really convinced by their cloud play.

Why was that? After all, they aren’t really doing that much differently from, say, what NetApp were doing. But there was something about the vibe. Whereas it felt as if Riverbed were sprinkling cloudiness over an existing product range, I got the impression that NetApp had made a genuine attempt to cloudify their existing product ranges, whilst at the same time acquiring and investing in something net-new. I don’t want to label Riverbed’s approach as “Cloud Washing”. I guess the difference is in the approach. Whereas it appears as if NetApp wanted to deliver NetApp-as-a-Service (NaaS!), with Riverbed it was more like Virtualization 1.0:

Hey, let’s put our existing stack into a Linux instance and spin that up in EC2/VPC.

I suspect what customers WANT (remember them? customers?) is all the features and functionality they used to enjoy on-prem, without any of the complexity of configuration, management, or having to deal with it all becoming rusty after 3-5 years, and having to lash out more cash to upgrade and forklift their unique bits. Shrink-wrapped software, without being tied up in cling film if you wish….

Anyway, I said when I started that this would be just one post, with another on Droplet Computing. But I feel like banging on about the companies who are getting this right from the get-go. And that feels like another post entirely (although it is related…)

May 15

It makes me WannaCry….

You don’t know how to ease my pain
You don’t know…
You don’t know how to ease my pain
Don’t you hear any voices cryin’?
You don’t know how to play the game
You cheat…
You lie…
You don’t even know how to say goodbye…
You make me want to cry….

It’s rare that the world of IT impinges on my friends’ day-to-day lives on the scale it has in recent days, and rarer still that I feel compelled to address political issues on my tech-based blog. That’s mainly because I think people visit to learn something new about tech, or to read one of my blogposts where I got something to work and they are looking to find out how to do the same. I do have a political blog called “The Age of Rage” and I offload my venom there – I only wish more people did this instead of filling LinkedIn, Twitter and Facebook with political opinions they think everyone else will agree with – only to be upset, offended or abusive when they are shocked to discover the world doesn’t uniformly agree with them. However, the outbreak of the “WannaCry” ransomware represents for me a unique situation where these worlds do collide. That said, I want to talk about these issues in a non-partisan, non-party-political way, because frankly there’s enough of that guff around already from our political class.

Before I “go positive” and speak about the positive steps that can be taken by all stakeholders (users, vendors, governments, agencies of the state), I feel compelled to draw your attention to some artful media management and outright charlatanism that typifies how this averted crisis is playing out in the media, especially here in the UK. It’s from this that I hope to outline how we can collectively take responsibility, but also that some organisations have more responsibility than others because of their power and/or financial muscle.


February 5

ThinkPiece: The Good Enough Delusion


The new Volkswagen Tiguan advert sums up the idea – the tatty old rope that’s “almost as good” as the standard rope…


After publishing this article a follower on Twitter (@IanNoble) sent me this infographic from the Bill & Melinda Gates Foundation. Need I say more…

Because “Good Enough” just isn’t Good Enough

[My main title is inspired by my current reading – The God Delusion by Richard Dawkins]

I recently took a trip to Palo Alto to VMware’s corporate head office. It was a chance for my team to get together, looking back on the last year and looking forward to the new one. It was such a treat to put faces to names that I’ve heard on the end of a concall for the last 12 months. On our second day we had a bunch of people from across VMware present to us, and in one of those sessions I had one of those “light bulb” moments. Like all light bulb moments, or moments of epiphany, the insight feels like it has always been there. In truth I’ve had this blogpost in my draft folder for more than a year – but it took the presentation last week to crystallise it for me. The talk was about how products are selected by large corporates, and the dynamics that fuel the process of implementing a C-Class executive initiative.

The process begins with the C-Class starting an Executive Initiative identifying a business need – this will hopefully result in increased profits by increasing revenue, decreasing cost or decreasing risk. In the best of all possible worlds (to quote Voltaire for a moment) you would hope to achieve all three. The likelihood is the executive team have balanced risks against rewards. Further down the management chain these Executive Initiatives trigger the creation of programs, and then projects – which ultimately lead to tasks being carried out by the “doers” in the organization. Budgets are aligned from the Executive Initiative to various Programs and Projects. So even if a particular Executive Initiative comes with a million dollars’ worth of budget, by the time it is diced and spliced, and trickled down to the Project Team, it has been shared out amongst a number of Projects competing for funding. It’s from this that the pressure to save money is driven – either to keep on budget, or to qualify for a bonus by bringing the Project in on time and on budget. And this is precisely why and where the “Good Enough” delusion starts to rear its ugly head.

The term “Good Enough” is often used to describe a technology or process that’s adopted because it meets the perceived bare minimum requirements for the problem or task at hand. Quite often there is a much more functional, flexible and more advanced technology available. The trouble is that the superior product is sometimes slightly more expensive than the “good enough” solution. This was something I used to see a lot of back in the previous decade. Customers would complain about the cost of VMware or the cost of Microsoft SQL licenses for vCenter. But when looked at in the broader picture of their virtualization initiative, the customer was spending four times as much on the acquisition of storage. I’m sure you’ve heard of the adage that for every $1 spent on VMware, $4 is spent on storage.

Sadly, that appeal to look at costs within the wider context of a massive Executive Initiative often fell on deaf ears. Despite the fact the technology might contribute just 1% of the overall initiative’s budget, by the time the folks who are running the Project are given their stipend, it has become 25% of their budget. The issue is often distorted by the way budgets are aligned to various silos within the organization, where the cost of a technology isn’t seen because it doesn’t come out of that business unit’s cost center. This contributes to resources being “free”, because the burden of paying for them falls on someone else’s balance sheet. Increasingly, I feel our IT structures or silos (InFRUSTructures as I like to call them) create a dysfunctional culture in many organizations. To the server team, I’ve often seen/heard that the network and storage are “free” because that “spend” comes out of someone else’s budget. I know this is an over-simplification, but bear with me.
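To put rough numbers on that distortion (the figures below are purely illustrative, not taken from any real initiative):

```python
# Worked example of the 1%-becomes-25% budget distortion described above.
initiative_budget = 1_000_000            # the Executive Initiative's overall pot
tech_cost = initiative_budget * 0.01     # the technology: just 1% of the whole

project_budget = 40_000                  # the stipend one Project team receives
share_of_project = tech_cost / project_budget

print(f"${tech_cost:,.0f} is {share_of_project:.0%} of the project's budget")
# -> $10,000 is 25% of the project's budget
```

The absolute spend hasn’t changed; only the denominator has. Which is exactly why the Project team ends up haggling over a line item the Executive sponsor would consider a rounding error.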

It’s within this context that the “Good Enough Delusion” evolves. I can see its seductive appeal. In fairness, on paper the “Good Enough” delusion is a totally understandable and logical response to these business pressures to bear down on costs – and I have every sympathy for those who are convinced by its seductive message. Especially as in my career I’ve often witnessed customers being “over-sold” on a technology. In other words, they have been sold a product that’s far in excess of their needs and requirements. I’ve sometimes used the metaphor of the car salesman selling a little old lady a Bugatti Veyron to take her to the shops and back, and to see her grand-daughter on a Sunday. The cause of this can be merely the salesman mis-selling a product in order to fuel their short-term commission. A perhaps less provocative reason is the engineer over-engineering a solution, both to cover their own back (called CYA in the trade!) and to protect the customer from blowback. In this context the “Good Enough Delusion” is an attractive strategy to protect the business. It drives down costs, and keeps your budget on track. So long as you spend time carrying out due diligence testing to make 100% sure that your needs are being met by the solution. Right?

Wrong. The problem with “Good Enough” lies in those very words. You can soft-soap it and couch it in terms of “but it meets our needs and requirements”. But to admit something is “good enough” is to tacitly admit that you have selected a less than “ideal” solution when something superior was available. That’s the hidden kernel of truth at the heart of the phrase. And it’s for this reason you often see “good enough” solutions deployed in scenarios of low risk, as a tactical play rather than a strategic one. The classic example of this is using a “Good Enough” technology in test/dev and “Best of Breed” in production. I think a good way to undermine this bifurcation is to ask a simple question. You know that “Good Enough” solution you’re using in test/dev – would you trust it in production? Would you risk your organization’s operations, or stake your career or the success of the overall initiative, by using a “Good Enough” technology in a production scenario? Nine times out of ten, the answer that comes back is NO! At this point we are admitting that good enough just isn’t good enough.

In our community there’s been a lot of chatter about the adoption of a “multi-hypervisor” strategy. Personally, I hate this phrase, because in our current period the game has nothing to do with the hypervisor – and everything to do with the comprehensive platform, including a control plane of management technologies that cluster around the hypervisor and the VM. Customers often flag up wanting to avoid “vendor lock-in” – which is always pretty laughable considering how over-committed and “in bed” they are with other dominant vendors (Cisco, HP, IBM, Microsoft, etc.). For me the multi-hypervisor strategy just increases cost and complexity, by requiring multiple management systems and staff capable of driving both. Many of us have spent years developing automation and business process around VMware technologies – are we going to jump into the TARDIS and redo all that work again? Additionally, I perceive a very strong risk associated with running test/dev on one system, and production on another. This is particularly true if that test/dev environment serves as a sandbox for QA testing, and for staging up solutions ready for production use. Ever since I was a SysAdmin in the 90s, the adage of “reduce the differences in your environment” is one I’ve held to – it reduces complexity, costs and risks. For me the “Good Enough” approach in test/dev introduces the unnecessary risk of undermining the overall success of the CEO initiative. It drives up the complexity and cost of maintaining multiple systems that duplicate each other’s functionality, under the false notion that it will save money – especially at such a business-critical layer. The Good Enough Delusion.

There are many different ways of taking this rather abstract argument and translating it into something an ordinary human can understand. I often begin with famous quotes that wrap up the idea, in a pithy way, with a neat bow. Try these on for size:

“Penny wise, pound foolish” – from ‘The Historie of Foure-footed Beastes’ by Edward Topsell

“Robbing Peter to pay Paul” – Biblical in origin, first quoted by John Heywood, 1546

“An ounce of prevention is worth a pound of cure” – Ben Franklin, 1736

All three of these quotes essentially make the same vital point. Trying to save money in the short term can cost you money in the long term. A more academic way of expressing this is the term “a false economy”, where a perceived cost saving actually results in greater spending later on. By overly focusing on nominal savings, and squeezing the pips on those, you lose sight of the bigger picture – the bigger Executive Initiative. It’s like my customers belly-aching about the cost of a Microsoft SQL license for the vCenter database, whilst simultaneously writing an enormous cheque with lots of zeros to their storage vendor. To use a more American reference, I’ve seen plenty of solutions undermined by people “nickel-and-diming” their customers.

For me, another way of illustrating the limitations of the “Good Enough” delusion is to think back in my career for cases where it was used. In the 90s the company I worked for needed a disk imaging solution for rolling out a large number of PCs. The market and technical leader was Symantec Ghost. The company didn’t buy it. Instead they bought a competing product that was cheaper. Everyone in the company knew that Symantec Ghost was the more reliable and better-performing technology. Instead the decision was made to acquire a product almost no one had heard of [I’m protecting the guilty by not mentioning it by name!]. Sure enough, the “good enough” technology was okay. But 2 out of every 10 operations would fail. This meant the SysAdmin had to babysit and nanny the “Good Enough” solution, because it could be relied upon to fail every few clones. This meant many long hours staying behind after business hours, and coming in at the weekend to mop up the failed jobs. This time could have been better spent doing some real, productive work, rather than nursemaiding this technology. The situation got so bad that many employees bought a personal license for Symantec Ghost and used it without senior management’s knowledge. Of course, now we would call this “Shadow IT”…
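A quick back-of-the-envelope calculation shows why a failure rate like that meant lost evenings and weekends. Assuming (simplistically) that each clone fails independently at 2-in-10:

```python
# Why a 2-in-10 failure rate means babysitting: the chance that a whole batch
# of clones completes cleanly collapses fast as the batch grows.
per_clone_success = 0.8   # 2 out of every 10 operations fail

def clean_batch_probability(n_clones: int) -> float:
    """Probability that every clone in a batch of n succeeds."""
    return per_clone_success ** n_clones

print(round(clean_batch_probability(10), 3))   # 0.107 - roughly 1 batch in 10
```

In other words, a SysAdmin imaging ten PCs could expect a completely clean run only about once in every ten attempts – hence the weekend mop-up sessions.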

Another way of looking at the “Good Enough Delusion” is to be honest about how we behave as consumers and as employees in the economy. I’m going to be changing my car in the next couple of weeks. I’m currently driving a rear-wheel-drive, two-seat roadster. I bought it in 2007 as a “pre-midlife crisis” car. That was my joke with the wife. I needed this totally impractical car to pre-empt a fully blown midlife crisis, which is usually accompanied by dyeing one’s hair, buying a red Ferrari – and acquiring an 18-year-old girlfriend. Where I live now this car is totally impractical; when the snow sets in I won’t be able to get through the country roads and hills. So I’m switching to an SUV – I’m looking for a combination of good fuel consumption, engine size and reliability. The way I see it, I have a series of attributes – just as we would have with an IT purchase, such as security, stability, availability and performance. I’m going to work my butt off to get the best vehicle with all the features, at the price point I can reasonably afford. I won’t be settling for a good enough solution to save myself money – only to discover, when I get up in the morning in the ice and cold, that the SUV won’t start. Similarly, when I turn up for work, do I aspire to be just a “Good Enough” employee – or do I try to exceed expectations and deliver a quality of service above and beyond clocking in and clocking off at the end of the day? If good enough is not good enough for my personal purchases, or the services I deliver to my company – why is “Good Enough” an acceptable approach for enterprise technologies that provide mission-critical applications to thousands of users?

Here’s where I see the gaps in “good enough”:

Superior Solution | Good Enough Solution | Risk
Proven technology / application in “real world” environments | The vendor only says it’s proven or good enough | Lost productivity dealing with unforeseen shortfalls
Established skilled workforce | Training to include another solution | Cost of training and exploring the “unknowns”
Proven in production | Good enough for test/dev | Cost of migrating workloads / buying additional tools – more skillsets required; reinventing the wheel in operations…
Managed workloads as part of the platform | Incomplete management | Increased cost & complexity

For me the whole problem with the “Good Enough Delusion” is that it just isn’t Aspirational Enough. It’s an admission that the design is a subpar configuration of cobbled-together technologies that could well undermine the overall success of the solution, and as a consequence the initiative itself. We should be striving to make things better – exceeding business expectations whilst remaining on time and on budget – instead of nickel-and-diming the business’s investment in its infrastructure. Without a proper investment in our physical and virtual infrastructure we run the risk of the levees breaking…

Now, of course that doesn’t mean we at VMware are complacent about cost. It should be no surprise to you that a company which drove massive efficiencies and cost savings in the Virtualization 1.0 era is still focused on driving even greater efficiencies beyond the compute layer. Remember it was VMware who first brought that innovation to the mainstream x86 marketplace, not the also-ran vendors who were content to allow the status quo to prop up their revenue streams. That’s the message at the heart of the SDDC story – the efficiencies and cost savings VMware brought to the compute layer now need to be extended to the network, storage and administration. vSphere isn’t the end of the story – we have only just begun…

January 21

ThinkPiece: Beyond the Matrix

VirtualMatrix has one of the best comparison tools about…

This is a topic I’ve been wanting to write about for some months, but never got round to it. Why the title “Beyond The Matrix”, you ask? Well, this article concerns the habit we have in the industry of making tables or matrices that compare one vendor’s product to another. You know the sort of thing I’m talking about – those that have the company names at the top, followed by a big list of features going down the left-hand column. Some of these will merely use an X to indicate a feature is present, and N/A to indicate that a feature doesn’t apply.

Now, in principle I have nothing against these sorts of comparisons, and in a crude way they do help technical people quickly see who has what features and who doesn’t. Of course, on the vendor side we love it when VendorA doesn’t have feature X, but of course VendorA will endeavour to show that they have feature Y. Customers, however, must look at this range of features and ask themselves if feature X or Y is a show-stopper. If they don’t use the feature, it’s not important to them. Right?

Well, yes. Except I have a problem with this. Firstly, it assumes that the product will always be used in the same way for all time, and that nothing will change in the next 6/12/18 months that would make the absence of a feature important. This, I think, is an argument for opting for the “top of the range” or “best of breed” approach, where the customer CYAs their organisation by ensuring that they don’t hit a brick wall, or an upgrade wall, to access a feature they later need. Secondly, my problem with these sorts of matrices is how they are used by people (not the folks who generate them – who I think in the main do sterling work, often for free…) – there’s a tendency to treat the Yes/No/X box as if it’s some definitive statement. Just because VendorA and VendorB have the SAME feature, that doesn’t mean it’s been equally well implemented, or is as easy to configure, or as reliable. In short, the problem with the Yes/X is that it doesn’t go beyond the matrix to tell you how good the technology is.

I think this is one of the classic scenarios we see in the world of technology. VendorA steals a march on its competition – and then there’s this mad “Me Too” scramble by the competition to close the gap. Thus the competitors’ sales people have some ammo in meetings with customers, to convince the customer that they have the same or better features. The same happens when features become “free” or are “waterfalled” into lower SKUs. This has the disruptive effect of triggering competitors to match on price/function. That often works to the customer’s advantage – it’s called competition.

If I had my way I’d dispense with the X/Yes – and have it replaced by a pint glass. The VMware pint glass would be full or almost full; the also-ran competitors’ glasses would look half-empty, or show some dregs at the bottom. 😉


Of course, I’m joking – but only slightly. I would like to see these matrices go beyond a comparison of features. Perhaps adopting a scoring system (9/10, etc.) or a traffic-light approach – where green is an outstandingly implemented feature, amber a medium-quality one, and red a feature that is less good than a competitor’s. To be fair, I think the virtualization matrix is quite possibly the closest to this ideal I’ve seen so far.
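As a rough sketch of what a scored, traffic-light matrix might look like in data terms – the vendors, features and scores below are all invented for illustration:

```python
# Going "beyond the X": scores instead of booleans, mapped to traffic lights.
# A boolean matrix would show both vendors as a flat "Yes" for both features,
# hiding the difference in implementation quality.
scored_matrix = {
    "VendorA": {"snapshots": 9, "replication": 8},
    "VendorB": {"snapshots": 6, "replication": 3},
}

def traffic_light(score: int) -> str:
    """Map a 0-10 implementation-quality score to a traffic light."""
    return "green" if score >= 8 else "amber" if score >= 5 else "red"

for vendor, features in scored_matrix.items():
    for feature, score in features.items():
        print(vendor, feature, traffic_light(score))
```

The point isn’t the exact thresholds – it’s that the data model carries “how well”, not just “whether”.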

But more than that, I would like to see people really kicking the tyres on the various products. The problem as I see it is that a lot of people don’t have the time/money to devote to their own comparisons, and so they are left to use resources on the net.

Perhaps there’s a bigger problem/challenge with the matrix approach. Does it actually influence people’s decisions to adopt one technology over another? Probably not. Here’s why: the folks who make the decision to adopt one strategy over another are NOT techies. They are extremely senior management people, whose key concern is to drive down costs (to drive up profit). That, of course, is always filtered thru the prism of risk. Selecting one solution over another may indeed save money – but it does so at the risk of introducing a potentially substandard product into the infrastructure. In my view any potential cost savings have to be balanced against introducing a new piece into the infrastructure which could buckle under the weight of demands it was never designed for…

January 13

ThinkPiece: Dependency Culture…

Everyone loves a good architecture diagram!

As a former student of literature, I’ve always been into language (specifically the English one!), the shifting tides of meaning, and the linkage that language carries from one area of human activity to another. One word that resonates for me is “dependency”. My title comes from work I did when I was doing my Master’s in American Studies. As part of our sessions on US domestic policy we looked at the concept of the “Dependency Culture”. The basic premise is that overly generous welfare programs leave people “dependent” on the state for handouts – and make them unlikely or unwilling to search for work. The book we read at the time was “Losing Ground” by Charles Murray.

It was the term “dependency culture” that started me thinking about the “dependencies” we have in corporate IT systems – and the culture that causes those systems to be created and maintained.

Clearly, we’re “dependent” on such systems just to make the business function – but I was also thinking about the complex web of service dependencies a true multi-tier application possesses. This goes WELL beyond just “VM1 must start before VM2”. Often each tier of an application (Web, DB, Application) has many nodes within it to offer redundancy (web01/web02/web03; DB01/DB02/DB03; App01/App02/App03). If the TCP sessions between each layer are not discrete, there’s likely to be a TCP load-balancer – at the very least at the web tier. And beyond the application itself there are dependencies on other infrastructure – the virtual infrastructure; server infrastructure; network infrastructure; security infrastructure (firewall, IDS etc).
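Even just the start-order side of those dependencies can be sketched in a few lines. This is a toy example – hypothetical node names, Python’s standard graphlib module – showing how the tiers resolve into a safe boot order:

```python
from graphlib import TopologicalSorter

# Each node maps to the set of things that must be up before it starts.
# (Hypothetical three-tier app: one DB node, three app nodes, three web
# nodes, and a load-balancer in front of the web tier.)
deps = {
    "lb":    {"web01", "web02", "web03"},
    "web01": {"app01"}, "web02": {"app02"}, "web03": {"app03"},
    "app01": {"db01"},  "app02": {"db01"},  "app03": {"db01"},
    "db01":  set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # db01 first, lb last; the middle tiers in a dependency-safe order
```

Seven nodes and a load-balancer already need an eight-step ordered start-up – and this toy version ignores the firewalls, IDS and virtual infrastructure underneath it all.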

In the ideal world these dependencies should be loosely coupled – to allow for changes to take place at one layer, without a complete reconfiguration of another. My feeling is that if the dependencies are too tightly coupled this leads to big problems. What do I mean by tightly coupled? Well, I mean within a tier where an application has a series of interconnected features – SettingA cannot be changed without affecting SettingB, C, D, E, and F… For me there are two outcomes from this. Firstly, the adoption of a technology becomes stymied by “Settings Anxiety”: the organization realises that settings A, B, C, D, E and F should be configured in such a way that they NEVER need to change, because the change is so difficult and so catastrophic that no one would want to undertake it. Secondly, an excess of time must be spent designing the configuration of the application – and agreeing with the various stakeholders on what those settings should be.

Now you could argue that our dependency culture is a fact of life, and it’s our job to manage, maintain and protect those dependencies. Agreed – I have no real argument with that. My contention is that the more dependencies there are, the greater the complexity; and the greater the complexity, the greater the chance that the application or service will fail. I see it as a probability factor – an application with 21 dependencies is more likely to fail than one with only 3. The law of averages would decree that the greater the number of parts, the greater the number of opportunities for failure or misconfiguration by the operator/administrator. Fundamentally, I think our industry is still building applications/services that are just too complicated – in an effort to pack in more features on an annual basis, or as a method of driving scale by scaling out and adding more nodes. Invariably the vendors who build such applications crow about their scalability, whilst at the same time ignoring the administrative burden introduced by a scale-out model.
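That probability factor is easy to put a number on. A back-of-an-envelope sketch – assuming, purely for illustration, that each dependency is independently available 99% of the time:

```python
# If every dependency works 99% of the time, independently, then the
# chance that *all* of them are working at once is 0.99 ** n.
def overall_availability(n, per_component=0.99):
    return per_component ** n

for n in (3, 21):
    print(f"{n:2} dependencies -> {overall_availability(n):.1%} overall availability")
```

Three dependencies leaves you at roughly 97% overall; twenty-one drags you down to about 81% – an application that is unavailable nearly a fifth of the time, built entirely from “99% reliable” parts.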

Why is this important? Well, at the moment some say that the barrier to getting “enterprise applications” into the public cloud is that they are not “designed for failure” – designed so that each component can be treated as cattle rather than pets, in terms of the impact if it fails or dies. It’s part of many developers’ DNA to create systems that have complicated inter-connected dependencies. Even if they don’t start off that way, there’s a tendency for it to grow so over time – as the product “matures” and grows in popularity, more modules and parts are added, increasing dependencies and complexities. Of course the solution to this is to build enterprise applications that are “cloud ready”.

BUT, and it’s a big but (hence the capitals), as with life, that’s easier said than done. It’s a bit like saying the solution to obesity is that people should eat less, or the solution to binge drinking is that folks should drink less. It’s tautological in the sense that the solution to the problem is not to create the problem in the first place. This is where I feel our “dependency culture” kicks in. The important aspect is the word “culture”. It’s very hard to change culture, as any new CEO will tell you. You can tinker around the edges with organisational structures – but trying to change the way people think is the hardest thing to achieve. Personally, I don’t think people change the way they think or behave very often – unless confronted with a catastrophic paradigm shift – say, when scientists demonstrated the earth revolves around the sun. Even when the paradigm shift is obvious, the response of many humans is to merely pretend it hasn’t happened. That’s especially easy in our technology world where shifts are happening constantly. I personally think the next generation of “cloud aware applications” can’t be created by the current generation. It needs a new generation unfettered from the mental constraints of doing things “the way they have always been done”. In the meantime we will be lumbered with supporting complicated multi-tier applications that are resistant to cloudification.

Call me a pessimist if you like.

Category: ThinkPiece | Comments Off on ThinkPiece: Dependency Culture…
August 16

ThinkPiece: PRISM/Pandora, NSA, GCHQ, Edward Snowden

One thing I want to do more on the blog is write an opinion or op-ed (opinion opposite the editorial) on various subjects close to my heart. I think it might be time for me to do as much of this type of writing as my more usual tech-focused “how do I make this work” material.

In recent days and weeks the tech media and bloggers generally have started to talk about the PRISM/NSA/GCHQ/Snowden debacle, and its impact on cloud adoption. Before I talk about the impact on cloud I want to get my position clear first. I don’t think Edward Snowden is a traitor. I think he’s a whistleblower. The tactics of the US Govt against Snowden (with the complicity of the UK and the EU) in smearing him as a threat to “National Security” are essentially a media smoke screen, a diversion – to distract the general population (the voters who elected the politicians) from the fact that in some cases the legal protections that are in place to protect citizens from the covert actions of the security community have at best been sidestepped, and at worst breached. Don’t get me wrong – I believe we do need security services. But they need to be tightly controlled by proper oversight that is democratically accountable…

Mr Snowden hasn’t disclosed the names, locations, or activities of any CIA/MI5/6 operative. Nor has he spilt the beans on any security programs such as military technology or covert black-operations elsewhere. Nor has he exposed the secret cables that pass between embassies and diplomats – as was the case in the Bradley Manning/WikiLeaks situation.

What I personally find very worrying is that the response has largely been one of acquiescence. The big tech companies involved have basically stated they will roll over and have their tummies tickled whenever govts make these requests; last week two big secure email companies shut down their operations rather than hand over the keys to the kingdom; Europe, on the other hand, worked hand in glove with the US authorities to make Snowden’s situation as hard as possible.

Then there’s the response of ordinary people: “I’m not a bad guy, so it’s okay for the government to snoop on me – because they’re only after the evil terrorists”. I’m sorry, but people who espouse this view are essentially accepting a George Orwell “Big Brother” view of the world. If you’re not a criminal, it’s fine for the govt to install CCTV cameras in your home, right? If I hacked your email that would be a crime, but if the government does it without your knowledge that’s called “National Security”, right?

ANYWAY. Less of my political ranting. What impact does this have on cloud? Answer: None. Why? Well, if you thought before the Snowden revelations that your data was impermeable and secure once it had left your building – you were quite clearly deluded. Few online systems are protected from the sysadmin with rights. In a way Snowden’s revelations prove this point: without his level of sysadmin rights this information might never have reached the public domain. So if you put your data in the cloud – what protects it from a rogue sysadmin? The answer to this should be proper security that’s designed for multi-tenancy. I would like to see some sort of delegation of rights where tokens are used to allow access to systems for temporary troubleshooting assistance, and encryption of data in flight and data at rest that allows access only to the tenant. Assuming, that is, your cloud provider doesn’t hand over those keys. Right now the model is based on trust, and trust alone. We have to blindly trust the govts and intelligence services – and we have to blindly trust cloud providers. And sometimes it’s not even malicious intent that’s the source of over-stepping the line. In today’s revelations this choice anecdote was flagged up by the BBC News website:
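To be clearer about what I mean by token-based delegation, here’s a rough sketch – Python standard library only, hypothetical names throughout – of a time-limited access token a tenant could mint for a support engineer, signed with a key the provider never sees:

```python
import hashlib
import hmac
import time

# Hypothetical: the *tenant* holds this key, not the cloud provider.
SECRET = b"tenant-held-key"

def issue_token(user, ttl=900):
    """Grant temporary troubleshooting access; the expiry is baked into the token."""
    expiry = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{expiry}:{sig}"

def check_token(token):
    """Valid only if the signature matches AND the expiry hasn't passed."""
    user, expiry, sig = token.split(":")
    expected = hmac.new(SECRET, f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expiry)

tok = issue_token("support-engineer")
print(check_token(tok))  # True while the token is fresh; False once it expires
```

The point of the sketch isn’t the crypto (a real system would need rather more than this); it’s the trust model – access expires on its own, and the ability to grant it stays with the tenant rather than resting on blind trust in the provider.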

In one instance in 2008, a “large number” of calls placed from Washington DC were intercepted after an error in a computer program entered “202” – the telephone area code for Washington DC – into a data query instead of “20”, the country code for Egypt.

Honestly, this is the stuff of “The Thick of It” satire. You can’t make this sort of material up. Although I suspect a computer program didn’t enter the numbers incorrectly – a human (a programmer?) made a human mistake. Remember, computers only do what we tell them to; they haven’t become sentient beings. Yet… 😉
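It’s easy to see how that class of mistake happens when a “data query” is just a prefix match over bare digit strings. A toy illustration, with entirely made-up numbers:

```python
# Two hypothetical intercepted numbers, stored as undifferentiated digits:
numbers = {
    "2025550199":  "Washington DC (area code 202)",
    "20212345678": "Cairo, Egypt (country code 20, city code 2)",
}

def query(prefix):
    """Naive prefix match - exactly the sort of filter that went wrong in 2008."""
    return sorted(who for num, who in numbers.items() if num.startswith(prefix))

print(query("202"))  # the typo: DC numbers AND Cairo numbers both match
```

Without something marking where the country code ends and the area code begins, “202” matches a Washington number and an Egyptian one equally well – the error wasn’t exotic, it was structural.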

For me one of the rich ironies of the whole sorry tale is the use of the project name of Pandora for the UK end of the internet hoovering machine. As you might know Pandora was the first human created by the Gods. According to the Hesiodic myth,  Pandora opened a jar, in modern accounts sometimes mistranslated as “Pandora’s box”, releasing all the evils of humanity —  leaving only Hope inside once she had closed it again. She opened the jar out of simple curiosity and not as a malicious act.

[I’ve reworked this from a wikipedia entry…]

You could say Mr Snowden has opened the intelligence services’ “Pandora’s box”. All that remains is the hope that our individual freedoms will be respected. I think Ben Franklin had it right when he said “They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.”

Category: ThinkPiece | Comments Off on ThinkPiece: PRISM/Pandora, NSA, GCHQ, Edward Snowden