Acknowledgement: I’d like to thank fellow vExpert Ed Grigson for proofing this and giving me valuable feedback, which helped inspire a better conclusion than this piece originally had. You can find Ed’s own blog here, and he also tweets!

One thing I’ve learned pretty quickly using Amazon AWS, whilst following the PluralSight SysOps Admin course, is how resistant to change the platform is. Now, this shouldn’t really come as a surprise to anyone who has interfaced with a virtualization layer as mediated through a cloud UI. As I’ve said in previous posts – the layer of abstraction added by cloud means a great deal of the knobs and buttons you’re used to as a virtualization admin are by necessity redacted and not exposed. Remember, you’re meant to be the Little Happy Consumers of the Cloud now.

We’re all used to the experience where “dependencies” between one service or object and another prevent the arbitrary, ad-hoc administration changes we haven’t properly thought through. So it becomes impossible to change the “D” setting because of restrictions upstream in A, B, and C, or without affecting downstream dependencies in E, F, and G. I can pretty much live with this – although it does mean you REALLY, REALLY need to think things through before you start creating stuff.

This is why I think a cloud architect is probably more valuable or useful to an organization than a SysOps Admin. However, I think where you learn the consequences of not architecting or pre-planning is by leaping in as a SysOps Admin, creating and changing stuff, and then having to deal with the often painful consequences. Often the best lessons are learnt the hard way, after all.

What I would say is that this consideration often extends to even the most trivial admin tasks, which you would assume would be unrestricted. I don’t intend this as a criticism of Amazon AWS as such, but as an observation that many public and private cloud solutions behave in precisely the same way – some are just more “restrictive” about it than others. For instance:

Creating CIDR for Virtual Private Cloud (VPC)

A VPC is a pretty common object type in most cloud environments – it’s often a logical construct that acts as a partition between one part of your environment and another. So one “root” account could contain many VPCs (Production and Test/Dev for instance), and by default no communication is allowed between VPCs, either in different regions or within a region – even within the same Availability Zones (AZ).

When you define a VPC you can use a handy wizard to set things up. That’s fine, but as with all wizards the more you ask them to do in one big sweep, the more likely they will barf on you because some variable is injudicious or, worse, just plain wrong. The VPC carves out a CIDR range of addresses for use by all your subnets within the VPC. All subnets are routed within the SAME VPC, and those subnets must live within the range of the CIDR selected.
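That “subnets must live within the VPC’s CIDR” rule is easy to sanity-check offline before you ever touch a wizard. A minimal sketch using Python’s standard `ipaddress` module – the VPC block and candidate subnets below are purely illustrative, chosen to match the 10.0.0.0/16 example used in this post:

```python
import ipaddress

# Hypothetical VPC CIDR, for illustration only.
vpc = ipaddress.ip_network("10.0.0.0/16")

candidates = [
    ipaddress.ip_network("10.0.1.0/24"),   # fits inside the VPC block
    ipaddress.ip_network("10.0.4.0/24"),   # also fits
    ipaddress.ip_network("10.1.0.0/24"),   # outside 10.0.0.0/16 -- AWS would reject this
]

for subnet in candidates:
    # subnet_of() is True only when the subnet falls wholly inside the VPC range
    print(subnet, "fits" if subnet.subnet_of(vpc) else "does NOT fit")
```

Running a quick check like this before you feed ranges into the console can save you from the wizard barfing halfway through.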

The only difference between a so-called “public” subnet and a “private” subnet is that the first is rigged to an “Internet Gateway” ‘device’ directly, whereas a private subnet hits the outside world via routing tables which eventually make their way to the internet. As far as I can tell, the functional difference is that an instance connected to a “public” subnet can either have an “Elastic IP” assigned to it for direct access on the Internet, OR be referenced as a “target” behind an “Elastic Load Balancer” (ELB) for inbound access. Visually, it’s like a physical machine being patched into your DMZ, or else sitting behind an internal firewall. That’s probably hugely inaccurate and old-fashioned, but as a “Jacqueline-of-all-trades” admin, it’s a metaphor that works within the limits of my knowledge.
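The public/private distinction really comes down to where each subnet’s default route points. A toy model of the two route tables, sketched in Python – the gateway IDs (`igw-12345`, `nat-67890`) are made up for illustration, and the lookup is a plain longest-prefix match:

```python
import ipaddress

# Hypothetical route tables; the gateway IDs are invented for this sketch.
public_routes = {
    "10.0.0.0/16": "local",        # intra-VPC traffic stays local
    "0.0.0.0/0":   "igw-12345",    # default route straight to the Internet Gateway
}
private_routes = {
    "10.0.0.0/16": "local",
    "0.0.0.0/0":   "nat-67890",    # default route via a NAT device instead
}

def next_hop(routes, dest):
    """Longest-prefix match, the way a route table resolves a destination."""
    ip = ipaddress.ip_address(dest)
    matches = [ipaddress.ip_network(cidr) for cidr in routes
               if ip in ipaddress.ip_network(cidr)]
    best = max(matches, key=lambda n: n.prefixlen)
    return routes[str(best)]

print(next_hop(public_routes, "8.8.8.8"))     # igw-12345 -- directly Internet-facing
print(next_hop(private_routes, "8.8.8.8"))    # nat-67890 -- outbound only, via NAT
print(next_hop(public_routes, "10.0.1.5"))    # local -- never leaves the VPC
```

Same CIDRs, same instances – the only thing that makes a subnet “public” is that 0.0.0.0/0 entry pointing at an Internet Gateway.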

This diagram shows one VPC in a Region – with multiple subnets (all connected together) residing in different “Availability Zones” (AZ). Notice the subnets ‘live’ within the same contiguous block of IP CIDR…

You can see my subnets are a bit of a mess. The CIDR for my Production VPC is 10.0.x.y/16 (or what we used to call in the olden days a Class B address), and my subnets are using 10.0.N.x/24 (or what we used to call a Class C address scheme). You can do classless CIDR in Amazon AWS if that floats your boat, but for the rest of us mere mortals it tends to make the brain ache when looking at routing tables.
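To put numbers on that /16-into-/24s scheme: an 8-bit subnet ID gives you 256 possible subnets, and the “network ID” I keep referring to is just the third octet. A quick sketch with the standard `ipaddress` module:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")       # the "Class B"-style VPC block
subnets = list(vpc.subnets(new_prefix=24))      # every possible "Class C"-style /24 inside it

print(len(subnets))     # 256 subnets to play with
print(subnets[1])       # 10.0.1.0/24 -- the network ID is the third octet
```

Plenty of room, in other words, which is exactly why my non-serialized numbering looked sloppy rather than necessary.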

Initially, I created just one public subnet. This was a mistake because ELB requires at least 2 public subnets in different AZs (incidentally, this is true of highly available RDS deployments as well). So you’ll notice Public Subnet01 is in US-EAST-1a whereas Public Subnet04 is in US-EAST-1b. The fact I haven’t serialized the assignment of the network ID (10.0.1, 10.0.2 and so on) looks a bit sloppy, as if I was making it up as I went along. Erm, that’s because I was.

In hindsight I might consider pre-populating the VPC with a public subnet for every AZ within a given region. That might feel a little overkill, but for me it feels a bit more ‘belt and braces’. Perhaps I’m over-egging it – after all, Amazon could see fit to add another AZ, and to keep my ‘design’ (which is a rather generous term!) consistent, I would have to add another public subnet to the bottom of the deck of cards.

So anyway. So far so good.

Can you change the CIDR?

Well, no you can’t, as that would require radical surgery across the whole of the network – negatively impacting subnets, routes and other ‘network devices’ such as IGWs.

Can you have multiple VPC’s within your account with the same CIDR?

Yes, you can.

So here VPCs 37a9a151, 88fb02f1 and 17fd046e all have the same CIDR of 10.0.x.y/16.

Would they ever be able to communicate with each other?

No, they would not.


Why not?

In order to enable “VPC Peer Networking” between two different VPCs within the same account, they each need a unique CIDR. Otherwise, how would packets destined for 10.0.x.y get routed if the two networks think they are the same – when actually they aren’t connected at all? This is like your neighbor having 192.168.2.x on their Wi-Fi, and you having 192.168.2.x on your Wi-Fi – and thinking you could use a router to link the two networks – the router would be upset and very confused. Incidentally, there are ways of achieving this configuration with bridging technologies – but that’s not an option in Amazon AWS.
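You can check for exactly this clash yourself before requesting a peering connection. The `ipaddress` module’s `overlaps()` does the test – the CIDRs below just mirror the example VPCs above:

```python
import ipaddress

vpc_a = ipaddress.ip_network("10.0.0.0/16")
vpc_b = ipaddress.ip_network("10.0.0.0/16")   # identical block -- peering impossible
vpc_c = ipaddress.ip_network("10.1.0.0/16")   # distinct block -- peering would work

print(vpc_a.overlaps(vpc_b))   # True: routes to 10.0.x.y would be ambiguous
print(vpc_a.overlaps(vpc_c))   # False: every destination has one unambiguous home
```

It’s the programmatic version of the confused router in the analogy: if `overlaps()` returns True, there’s no way to write a routing table that knows which side a 10.0.x.y packet belongs to.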

I don’t think this is something you should “worry” about – but beware of the various wizards that run when you first go to set up Amazon AWS/EC2, which pre-populate fields with IP ranges and such like.


In hindsight I chose a really bad example of stuff you cannot ch-ch-ch-change, although it was helpful to me to summarize the rules and regulations in my head. As ever, blogging actually helps the blogger crystallize their thoughts. So I need to grasp at some other settings you cannot change to make my point. Here’s a couple to chew on…

  • If you patch an instance to the wrong subnet, you cannot merely right-click it and patch it somewhere else. You have to clone it off somewhere, and redeploy it to the right network. That’s right – a simple ‘re-wire’ of the designated network generates a cloning process!
  • Security Groups are assigned to all manner of resources to control access. Although you can give them “Name” tags, neither the group name nor the description (oh, come on Amazon, what gives!) can be modified. The best you can do is copy the Security Group, rename the copy – and then reassign ALL the affected objects to the new group. I kid you not…
  • There are others I knew of, and Ed Grigson helped refresh my memory. These include:
    • Launch Configurations (used by Auto Scaling) cannot be changed after their creation. You have to create a new one, and re-assign it
    • S3 Bucket names are fixed. If you want a different name you have to empty your existing bucket into a new bucket with the desired name
    • Until this year you couldn’t resize an EBS volume without downtime (this was something I mentioned in a previous post). You can now resize an EBS volume without downtime, but the process is convoluted compared to most virtualization vendors

So I guess you will be architecting this thing upfront before you log on and start creating stuff in a make-it-up-as-you-go-along SysOps Admin way…

And that’s how this blogpost was going to end, until Ed Grigson emailed me some responses and reminded me of stuff that I’ve known for a while but utterly failed to mention. In our chat Ed told me he sometimes struggles with conclusions for his blogposts. I think Ed’s underselling himself, because his email response could be cut and pasted onto the end of this blogpost almost unedited!

You know as they say, good artists borrow, but great artists steal… 😉

(Incidentally, the sentiment has been repeated many times. Check out this URL if you’re remotely interested in this kind of thing – )

Some people might say… that Amazon AWS was designed for “cloud native” workloads: it’s all about automating things, infrastructure-as-code and disposable infrastructure, so the idea of binning an instance in order to change its IP isn’t a problem. It’s been said by some that you should redeploy Amazon AWS infrastructure frequently as a best practice. It’s beneficial to redeploy frequently from an infrastructure perspective as it tests resilience and build processes. It will, for example, guarantee you get the latest instance types and that any new features (such as dynamic EBS volumes) will be available, while aging out old instances and processes. Without this, your elastic environment is likely to become as inflexible and antiquated as some on-premises solutions that don’t change in years…

What it means is that Amazon AWS is infrastructure, but not as you know it. So comparing “Public Cloud” to “Private Virtualization” is an apples-to-oranges comparison: they have different goals, different requirements and, as a consequence, different outcomes. You’re best “level setting” your expectations for both – rather than merely expecting one to mirror the other. And yet the architectural difference is the key reason why “lift and shift” of legacy/monolithic apps isn’t always desirable or an effective outcome. That’s not to naysay either public or private – but to fall back on the old adage that is still as true today as it was yesterday – it depends. I think one natural outcome of being a virtualization admin is the desire to port all your skills from the private world to the public. That’s not a bad goal – more knowledge never hurt anyone. But perhaps you need to leave your expectations at the door when you walk through the door of public cloud…