October 11

Droplet Computing containers on Amazon AppStream

Amazon WorkSpaces was a doddle to get up and running, as I am very familiar with VDI and workspace concepts. I’d not touched AppStream before, so I needed a bit of a primer. I found this video by Thorr Giddings to be excellent – straight to the point, and as someone with quite a bit of experience I was able to pause the video at the various steps and get the process down. The best 8 mins of my time this week!

In case you don’t know, Amazon AppStream is a fully managed application streaming service. You centrally manage your desktop applications on AppStream and securely deliver them to any computer. You can easily scale to any number of users across the globe without acquiring, provisioning, and operating hardware or infrastructure. AppStream is built on AWS, so you benefit from a data center and network architecture designed for the most security-sensitive organizations. Each user has a fluid and responsive experience with your applications, including GPU-intensive 3D design and engineering ones, because your applications run on virtual machines (VMs) optimized for specific use cases and each streaming session automatically adjusts to network conditions.

Read on…

Category: Amazon, Droplet Computing | Comments Off on Droplet Computing containers on Amazon AppStream
October 2

Droplet Computing containers on Amazon WorkSpaces

This week I spent time working with Amazon WorkSpaces. In case you don’t know, Amazon WorkSpaces is a managed, secure cloud desktop service. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe. You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises VDI solutions. Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

Read on…


Category: Amazon, Droplet Computing | Comments Off on Droplet Computing containers on Amazon WorkSpaces
August 27

Amazon AWS: To NAT or not to NAT, That is the Question

Yes, I know. When Hamlet holds the skull, it’s not the “To be or not to be” speech… but the one about Yorick. 🙂


I’d like to thank Tim Hynes for reviewing this blog post and giving me valuable feedback. Tim is a fellow vExpert; he is @railroadmanuk on Twitter and blogs at http://virtualbrakeman.wordpress.com/

The Conceptual Stuff

I was curious about Amazon’s options for using NAT inside the VPC construct, so I decided to do some research into its merits. Before I delve into the practicalities – here’s the whys and wherefores.

Amazon recommend a NAT configuration if you have Internet-facing web servers with backend servers that they communicate with. That statement shows how much AWS is geared around “Web Services”, although it’s fair to say that most applications these days have a web-based front-end with an application server/database server back-end. The alternative to this NAT configuration is to merely have public/private subnets protected with Security Groups – with no NAT. In this setup a heavily secured “jumpbox” or “bastion” instance is used as the access point for those environments – this would be a very typical setup for a test/dev environment where only developers need access to whatever Amazon AWS is hosting…

To get a NAT system up and running you have two main options:

  • “NAT Instance” – The NAT runs as just another instance amongst your other instances. You can use a number of differently sized instance types provided by Amazon.
  • “NAT Gateway” – This service is configured in the VPC, and has features such as high availability, higher bandwidth capabilities, and less administrative overhead (this method is recommended by Amazon).

I found the NAT Instance method very easy to set up, and the VPC wizard does a good job of updating the VPC “Routing Tables” to make sure traffic flows in the right directions. You do, however, have to update the Security Groups around the “NAT Instance” to allow it to send and receive traffic – just like any other instance really.

The NAT Gateway method is a tiny bit trickier to set up, and critically is not a freemium service (remember, neither is the NAT Instance really). When you create a NAT Gateway you associate it with one of the public subnets inside a VPC, and assign an Elastic IP to it. You do have to manually update the routing tables for the affected (or should that be afflicted?) subnets before traffic flows. The easiest thing is to set up the VPC first, so you can then attach the NAT Gateway to the appropriate public subnet. There are other ways (in terms of the order of the process) to do this, but I found this the easiest way and the most logical for my brain to wrap itself around. The NAT Gateway is created within a particular “Availability Zone” (AZ) and is implemented with redundancy in mind, and I think it’s for this reason that Amazon recommends it. The NAT Gateway’s availability is set by which public subnet it’s associated with – so it is possible to create more than one NAT Gateway, associated with multiple public subnets in different AZs. This web page contains this statement:
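For anyone who prefers the scripting route, the same create-and-route steps can be sketched with the AWS CLI. All the resource IDs below are placeholders – substitute your own subnet, route table and allocation IDs:

```shell
# Allocate an Elastic IP for the NAT Gateway (note the AllocationId in the output)
aws ec2 allocate-address --domain vpc

# Create the NAT Gateway in a PUBLIC subnet, using that Elastic IP
aws ec2 create-nat-gateway \
    --subnet-id subnet-0pub1234 \
    --allocation-id eipalloc-0abc1234

# Update the PRIVATE subnet's route table so Internet-bound traffic
# (0.0.0.0/0) goes via the NAT Gateway
aws ec2 create-route \
    --route-table-id rtb-0priv5678 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0gw9abcd
```

This is the manual routing-table update mentioned above – the wizard does the equivalent for you, but seeing the raw calls makes it clearer what is actually being automated.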

“If you have resources in multiple Availability Zones and they share one NAT gateway, in the event that the NAT gateway’s Availability Zone is down, resources in the other Availability Zones lose Internet access. To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.”

And here’s some other nuggets and facts worth highlighting:

  • A NAT Gateway supports 10Gbps of bandwidth;
  • You can’t swap the Elastic IP on an existing NAT Gateway – you have to destroy and re-create it to change the IP;
  • Although you can’t wrap a Security Group around a NAT Gateway, it does support network ACLs to restrict the traffic it will pass;
  • Finally, NAT Gateways cannot be used with EC2 ClassicLink. However, this is really a legacy issue and would only impact customers who have been using Amazon AWS for some time.

The Practical Stuff

Continue reading

Category: Amazon | Comments Off on Amazon AWS: To NAT or not to NAT, That is the Question
August 20

Amazon AWS and VPC Peering Connections

VPC Peering is the way that two VPCs with distinct CIDR spaces within the same REGION can be linked together. Whether you actually need to do this could be moot – but I can imagine a scenario where each VPC belonged to a different company within a holding group, or where you were using VPCs on a departmental basis. You could still maintain separate “root” accounts for billing purposes, as VPC peering can be set up across multiple “root” AWS user accounts. For legal reasons the VPCs might need to be separated, but there may be “natural synergies” between companies within the same group, or between departments, where communication is desirable or needed.

Aside: You should normally be VERY worried when management uses the term “natural synergies”, as it is a term that normally suggests two companies merging and job redundancies. Such are the euphemisms of modern employee relations!

Note: I found this Rackspace article useful especially as it outlined some of the limits around using VPC connections and some of the pitfalls of excessive VPC and VPC Peer Connections – https://blog.rackspace.com/vpc-peering-architecture-use-cases-guidance

There are two main “rules” around VPC Peering Connections in Amazon AWS. Firstly, the two VPCs to be connected together must each have their own unique CIDR. It’s not possible to peer two VPCs that both have the same CIDR, such as 10.0.x.y/16. Secondly, the VPCs can be managed by the SAME Amazon “root” account or – as I said a moment ago – by DIFFERENT Amazon “root” accounts. If it’s the latter, then the two “root” administrators of the VPCs have to work together, as credentials are needed on both sides.
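The unique-CIDR rule is easy to sanity-check offline with Python’s standard ipaddress module – a quick sketch (the CIDRs and the helper name are just examples of mine, not anything AWS provides):

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """VPC peering requires the two VPCs' CIDR blocks not to overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True  - distinct CIDRs, peering OK
print(can_peer("10.0.0.0/16", "10.0.0.0/16"))  # False - identical CIDRs, not allowed
```

Worth running against your own address plan before you raise the peering request – AWS will simply reject an overlapping pair.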

I see this as being a lot like the “trust” relationships we used to make manually in the not-so-good old days of Windows NT4 (God, how that ages me!). However, if you’re of my generation you might remember that before “Active Directory” those trust relationships were not transitive. So just because VPC1 connects to VPC2 and VPC2 connects to VPC3, it does NOT follow that VPC1 can communicate with VPC3. VPC Peering Connections do not flow seamlessly from one VPC to another.
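The non-transitive behaviour can be illustrated with a tiny sketch – peering is a set of direct point-to-point links, and traffic only flows over a link that actually exists (the VPC names and helper are my own invention for illustration):

```python
# Peering connections are point-to-point links; reachability is NOT transitive.
peerings = {frozenset(p) for p in [("VPC1", "VPC2"), ("VPC2", "VPC3")]}

def can_route(a, b):
    """Traffic only flows if a DIRECT peering connection exists between a and b."""
    return frozenset((a, b)) in peerings

print(can_route("VPC1", "VPC2"))  # True  - direct peer
print(can_route("VPC1", "VPC3"))  # False - no transit through VPC2
```

If VPC1 and VPC3 need to talk, you create a third peering connection between them directly – just as you once created each NT4 trust by hand.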

The VPC Peering wizard creates a “PCX” target that can be referenced in the routing tables to allow communication to pass from one VPC to another. When using the wizard, one side of the relationship acts as the “Requester”, and the opposite side acts as the “Acceptor”. The communication is automatically two-way, so there’s no need to create the VPC Peering Connection twice. If you’re making the VPC Peering Connection between two VPCs under the SAME Amazon “root” account, you merely select the two different VPCs – as you are both the “Requester” and the “Acceptor” at the same time.

So in the screen grab below the “Requester” is my VPC called “Prod” using 10.0.x.y/16 as the CIDR, and the “Acceptor” is my VPC called “Dev” with the CIDR of 10.1.x.y/16. The fields are completed by merely browsing the VPC metadata queried using the currently used “root” account.
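The same Requester/Acceptor flow can be sketched with the AWS CLI – all the resource IDs below are placeholders, and in a same-account setup the same credentials run all three calls:

```shell
# Request the peering from the "Requester" side (Prod)
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0prod1111 \
    --peer-vpc-id vpc-0dev2222

# Accept on the "Acceptor" side (Dev) - the output of the previous
# command includes the pcx- ID used here
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0abc3333

# Each VPC's route table then needs a route pointing at the PCX target;
# here Prod learns the path to Dev's 10.1.0.0/16 space
aws ec2 create-route \
    --route-table-id rtb-0prod4444 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-0abc3333
```

Note the final step has to be repeated on the Dev side, pointing back at 10.0.0.0/16 – the PCX is two-way, but the routes are not created for you.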

Continue reading

Category: Amazon | Comments Off on Amazon AWS and VPC Peering Connections
August 1

We’re off to see the Wizard, the Wonderful Wizard of AWS

Note: Just to say this title is meant to be a humorous and silly pun. I actually think the Amazon wizards in the main are pretty good, and in fact pretty invaluable.

Acknowledgement: I’d like to thank vExpert, James Kilby for reviewing this blog post prior to publication. You can follow James on Twitter at https://twitter.com/jameskilbynet and he blogs at https://www.jameskilby.co.uk/

In my previous blog post I wrote about how important planning stuff upfront is in any cloud environment. Not just because this is good practice in system design, but because so many cloud environments are resistant to the kind of arbitrary, ad-hoc SysAdmin changes that could be so easily made to fix a problem on an on-premises virtualization platform. In this post I’m turning my attention to something less high-falutin and more down in the weeds.

When I was working my way through the Pluralsight SysOps Admin training I was following the demos with my Amazon AWS Console open – mainly playing “spot the differences”. Let me make something clear – the Pluralsight training is pretty good and an excellent foundation to getting stuck in and learning more. I believe it’s going to get harder and harder to keep ALL training materials up to date and current. Cloud environments are almost naturally more “agile” (hateful word – sorry, I have a thing against the way our industry brutalizes my native tongue). This means it’s really hard for training materials and guides to keep track. It’s partly the reason I’ve abandoned the whole step-by-step tutorials that I did in the past. I will leave that work to the big boys – like Amazon/Microsoft/Google – as they have far more resources and time. But my plan was always to go back through my notes on the course (48 pages!) to revise what I learned, inspire new blogging content, and also research those differences I’d noted. I didn’t do that there and then whilst the video rolled – it would have slowed the pace of my training. But now I feel I have the time to check those out.

To wit: one thing I noticed is that when you create a VPC in Amazon AWS using the wizard you get some new options that the Pluralsight videos didn’t dwell on or mention. Incidentally, as a rule I despise wizards; however, in the context of Amazon AWS I would recommend them. They often automate many tasks, and thus meet certain dependencies – and speed up the process of setup (unless you decide to go down the scripting route). I think the key with the Amazon AWS wizard is understanding exactly what is being automated, and where those settings reside. This reduces the feeling that it’s the “Wizard of Oz” pulling strings behind a curtain, with you being clueless about what he’s up to. The other thing I would recommend is that if there are four different routes through a wizard – go through it four times. The best way to learn a technology is to expose yourself to the reality, rather than the theory. When I was a Microsoft Certified Trainer in the ‘90s, there was an awful lot of “you can do this configuration”, but then it was never gone through. One way I expanded my knowledge at the time was actually trying these “theoretical configurations” – you certainly learned that while you often can do something, it often comes with major dental work, to replace all the teeth you lost putting it together…

So… less pre-amble, more amble. Here’s a screengrab of the VPC wizard from PluralSight…

Continue reading

Category: Amazon | Comments Off on We’re off to see the Wizard, the Wonderful Wizard of AWS
July 29

Amazon AWS and Ch-Ch-Ch-Changes

Acknowledgement: I’d like to thank fellow vExpert, Ed Grigson for proofing this and giving me valuable feedback. He helped inspire a better conclusion than this piece originally had. You can find Ed’s own blog here, and he also tweets!



One thing I’ve learned pretty quickly using Amazon AWS, whilst following the Pluralsight SysOps Admin course, is how resistant to change the platform is. Now, this shouldn’t really come as a surprise to anyone who has interfaced with a virtualization layer, as mediated through a cloud UI. As I’ve said in previous posts – the layer of abstraction added by cloud means a great deal of the knobs and buttons you’re used to as a virtualization admin are by necessity redacted and not exposed. Remember, you’re meant to be the Little Happy Consumers of the Cloud now.

We’re all used to the experience where “dependencies” between one service or object and another prevent our arbitrary, ad-hoc administration changes which haven’t been properly thought through. So it becomes impossible to change the “D” setting because of restrictions upstream in A, B, and C, or without it affecting downstream dependencies in E, F, and G. I can pretty much live with this – although it does mean you REALLY, REALLY need to think things through before you start creating stuff.

This is why I think a cloud architect is probably more valuable or useful to an organization than a SysOps Admin. However, I think where you learn the consequence of not architecting or pre-planning your development is leaping in as a SysOps Admin creating/changing stuff and then having to deal with the often painful consequences. Often the best lessons are learnt the hard way after all.

What I would say is that this consideration often extends to even some of the most trivial admin tasks, which you would assume would be unrestricted. I don’t intend this as a criticism of Amazon AWS as such, but an observation that many public and private cloud solutions behave in precisely the same way – some are just more “restrictive” about it than others. For instance:

Continue reading

Category: Amazon | Comments Off on Amazon AWS and Ch-Ch-Ch-Changes
May 23

Amazon AWS Summit – London, ExCel – 28th June

I’ve bitten the bullet and decided to attend the Amazon AWS Summit in London on the 28th June. Both the London VMUG and this event are “FREE” – the only cost is getting there and back. I’ve spent the money on the train ticket and that pretty much commits me to going! It’s funny with free events – your commitment can vary depending on the mood. But once you put money down it rather clarifies the situation!

If you live in London I guess these events are ‘easier’ to do from a financial perspective; it’s more whether you have the time to do them. There’s precious little in terms of an agenda – but I hope it will be technical and learning-oriented, and less on the old marketing side. The keynote looks mercifully short – so it’s not 2.5hrs sat on your butt with your mind being numbed, just 1hr.



Category: Amazon | Comments Off on Amazon AWS Summit – London, ExCel – 28th June
May 15

Using Amazon Route53 and Google Apps Together using Domain Aliases to complete SSL Certificate Requests!


I’ve got nearly 25 years’ experience in the IT game, with a range of skills that take in this task – DNS, email, web servers. However, for the last 15 years or more I’ve more or less outsourced the management of this to a third party, or it simply hasn’t been my job. Once I used to teach Active Directory DNS to students when I was a Microsoft Certified Trainer, but that was way, way, way back in 1996-2003. Of course, there’s nothing new under the sun, as the Great Bard once said – and so I have gotten by ever since on core fundamentals. So this is both old and new to me, and if I’m honest I’m not sure if the solution to my problem was the best or easiest. I might have just taken a sledgehammer to drive home a thumb tack. I’d be interested to hear if this process could be made simpler.

I think the ‘order’ of my process is good – especially as you need valid emails to confirm the transfer and setup of certain domains. But I’d also be interested to know whether this is the best way of doing it – could it have been done more efficiently, in fewer steps? Finally, I’d be interested to know if this is the ‘right’ way from a security and best-practice perspective as well.


I would have liked a more exciting title for this blogpost – and one infinitely shorter!  Being a Hunter S Thompson fan, I had thought of adding “A Strange and Terrible Saga”. But actually I want to avoid the rabbit hole of an extended rant, and the convoluted shaggy-dog story of my experiences on Friday. It took me 6hrs to get this working, and I’m still mopping up the blood spatter today. This should have taken 30-60mins tops, including waiting for DNS caches to expire and DNS records to be propagated on the interweb. However, I will spare you my personal grief this time, and just focus on the back-story, use-case, solution and workarounds – in the hope that anyone facing similar heartache in the future will stumble upon this post and I will save them a bag of time. I’m just nice like that – after all, I first got started with VMware by just trying to be helpful. It takes you a long way in life, I think.

Advice: If you are a budding wannabe blogger who just wants your own domain, linked to Google Apps for email etc. – together with your own WordPress setup – don’t bother with this approach. It’s overkill. I would sign up to any number of hosted WordPress packages online that will handle all of this for you in a nice, simple, easy enrollment process. This blog is hosted with Dreamhost.


The Problem – Back-story/Use-case:

As part of my endeavours to learn more about public cloud I’ve been looking at Amazon AWS. I’ve already put together an environment that leverages Amazon Route 53 (DNS) together with multi-region Elastic Load Balancers (ELB) and IIS web-based instances running on ‘public’ subnets. I thought it would be good experience to do this using SSL certificates. I established a new DNS domain, registered and hosted with Amazon Route 53, and opted for a .net domain because that allows for the possibility of making my WHOIS information private, whereas this option did not exist for a .co.uk domain. Privacy is important to me, and I don’t think my postal address should be online for all and sundry to see. This is important to note, as it impacts on the SSL certificate enrollment. Registering the domain with Amazon Route 53 and requesting an SSL certificate was relatively easy.

Where I came unstuck, however, was that in order for my SSL provider to verify me and send me the certificate, they needed a valid email listed under WHOIS. This became tricky because that information was a.) private and b.) the email used under the WHOIS information did not match the emails they would usually “expect” to use. That was tricky for me to easily provide, because all I had was the raw DNS domain name, with none of the ancillary services that would normally surround it – such as web servers resolving to www.domain.net or any email infrastructure. Nor did I feel inclined to waste precious time putting together such services merely for a one-off email and verification process.

This process would have been relatively simple had I been requesting a certificate for www.michelle.com, where those pieces of the puzzle were already in place, and much of the verification process had already been undertaken. However, I specifically wanted to use SSL with Amazon AWS and have it all in that environment, rather than doing the DNS work through Dreamhost. Dreamhost is the company that hosts this blog. They are very good, by the way.


So I hit upon the idea of associating my existing Google Apps subscription, which supports my michelle.com domain, to also provide email services to my new mydomain.net domain. It is possible to register the mydomain.net domain as an “alias” of mydomain.com. Once recognised by Google, I would be able to create an admin@mydomain.net user within my mydomain.com subscription with Google. After that I could update my WHOIS information at Amazon Route 53, and then contact my SSL provider to complete the verification process. Of course, working out HOW to do this took time. I’m pretty tech-savvy – but this requires a range of skills, often using interfaces and procedures which are different to ones I’ve used in the past. So you need:

  • DNS knowledge (with Amazon Route 53)
  • Certificate Request Knowledge (Many routes – I used IIS 10 to create a CSR request)
  • An account with Google, and knowledge of their Domain Registration/Validation process
  • Further updates to Route 53 and the WHOIS information to change default settings

I don’t intend to write something step-by-step, because as soon as I do – the UIs will change. I’ve often found that Google help does NOT keep up with their many changes. Amazon, on the other hand, appear to have a better handle on documentation – so there is no point in me trying to compete with Amazon or Google in the documentation stakes. It does illustrate the challenges of managing such an “agile” environment compared to a conventional shrink-wrapped software company: the documentation gets out of sync with the product… To be honest, I still don’t know WHY some processes provided by Google DID NOT work. And I still don’t really know if the WAY I have done it is the best or most efficient. It does, HOWEVER, work. And that to me is what counts. BUT, if anyone can figure out what went wrong, or suggest a simpler/easier way, I would be indebted to them for that guidance.

Finally, I dare say Google Domains/Apps could be replaced with a different vendor if your subscription is with some email supplier other than Gmail. For instance, I’m sure such a configuration could be achieved with Office 365. Of course, any ordinary mortal just wanting a blog with their own domain, and a bit of SSL to protect the login, would be better off getting a hosting company to orchestrate all this – it’s much less heartache!

1,000 Foot View:

This is a simple numbered list that serves as a check-list for anyone (well, mainly me) wanting to do this style of configuration…

  1. Register new domain with Amazon Route 53
  2. Login to Google Domains and create a New Domain Alias
  3. Use the cname record method to verify your domain
  4. Populate the Route 53 zone with the MX records for the Google Mail servers
  5. Create a new user in Google Console for your preferred contact for the new domain
  6. Login to the new account, and (optionally) forward all email to an email address you do actually use!
  7. In Amazon Route 53, update your WHOIS information with the new ‘admin’ email, and receive a flurry of confirmation and validation emails!
  8. Generate a CSR for your domain (various methods)
  9. Submit CSR for your single host certificate (aka www.mydomain.net) or domain wild card certificate *.mydomain.net
  10. Use your new certificate as you see fit. In my case attached to two region specific ELB’s which act the SSL endpoint for inbound https requests – thus offloading the SSL process to ELB and away from your web-servers.
  11. Punch the air – and say “wow, did I really do that? I must be some sort of Cloud God lording over the Olympus of the Internet.” Sit back. Have a cup of tea. Feel a little less full of yourself. It’s only software, you know… 😉

NOTE: I won’t be covering steps 8-11, as they are specific to your environment and will vary from vendor to vendor – and mainly because this post will be LONG enough without adding that level of detail. My main interest is the interoperability between Amazon Route 53 and Google Apps to get this working.
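For step 4 of the checklist, a small sketch of the Route 53 “ChangeBatch” you would submit (via the console, boto3, or `aws route53 change-resource-record-sets`) may help. The Google MX hosts and priorities below were the well-known Google Apps set at the time of writing – check Google’s current documentation before using them, and the helper function itself is just my own illustration:

```python
def google_mx_change_batch(domain):
    """Build a Route 53 ChangeBatch that UPSERTs the classic Google Apps
    MX record set for the given domain (verify hosts/priorities first)."""
    mx_values = [
        "1 ASPMX.L.GOOGLE.COM.",
        "5 ALT1.ASPMX.L.GOOGLE.COM.",
        "5 ALT2.ASPMX.L.GOOGLE.COM.",
        "10 ALT3.ASPMX.L.GOOGLE.COM.",
        "10 ALT4.ASPMX.L.GOOGLE.COM.",
    ]
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain + ".",     # Route 53 names are fully qualified
                "Type": "MX",
                "TTL": 3600,
                "ResourceRecords": [{"Value": v} for v in mx_values],
            },
        }]
    }

batch = google_mx_change_batch("mydomain.net")
print(batch["Changes"][0]["ResourceRecordSet"]["Type"])  # MX
```

Saved to a file, this is the shape the CLI expects for its `--change-batch file://…` argument – one MX record set carrying all five priority/host values.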

Now in a LOT more detail…

Continue reading

Category: Amazon | Comments Off on Using Amazon Route53 and Google Apps Together using Domain Aliases to complete SSL Certificate Requests!
May 9

Fluffy Cloudy Amazon Web Services Thoughts (Part N of N)

Disclaimer: I’m not an AWS Expert. I’m learning. I regard myself as a novice. Therefore I reserve the right to make idiotic statements now, which I will later retract. My thoughts on AWS are very much a work in progress. So please don’t beat me up if you don’t agree with me. I’m just as likely to respond with “Gee, I hadn’t thought of that – you have a point!”

Well, okay, the title of this post is a bit of a joke at my expense. Just before I joined VMware in 2012, I embarked on a series of blogposts about vCloud Director [yes, just as the company changed strategy towards vRealize Automation!]. It became quite a series of posts. I dubbed it my “vCloud Journey Journal”, and it ended up as a whopping 73 posts, in what almost became like writing a book through the medium of a blog. Just so you know, this is NOT a good idea, as the two formats are totally incompatible with each other. So anyway, I don’t want to make the same mistake this time around. And my intention is to write stuff as I learn.

After vCD, I dabbled with vRealize Automation (which was once the vCloud Automation product, if you remember, acquired via DynamicOps). That product was fine, but it was very much about creating and powering up VMs (or Instances, as AWS likes to call them). I didn’t feel I was really using the public cloud “properly”, but merely extending virtualization features up into the public cloud rather than consuming stuff in the -as-a-service kind of way. Sorry to my former VMware colleagues if this is a massive misconception on my behalf – the last time I touched vRealize Automation was nearly four years ago – and things can and do move on. Plus I’ve been out of the loop for 12 months.

The last couple of weeks have modified my experience, and as a consequence got me thinking all over again about what public cloud is, what it means, and how it is defined. Sadly, this became a very boring and tired parlour game in the industry many years ago. I personally think the game of “definitions” of “What is public, private, cloud?” is a bit moot for the community. But it kind of matters to me as the typical in-house, on-premises type who made a name for herself by helping others setup, configure and troubleshoot the virtualization stack from around 2003-2015. But even I feel that the debate moved on long, long ago – and this is me playing catch-up.

Continue reading

Category: Amazon | Comments Off on Fluffy Cloudy Amazon Web Services Thoughts (Part N of N)
April 11

My Amazon AWS Certification Plan with @pluralsight and @ekhnaser (Part God Knows!)

So I’ve played about with AWS in my time at VMware, but really only dipped my toes. Like many people I like to have a goal to work towards – so it felt reasonable to think about going through the steps to prepare for certification. For me the important thing is the learning process and getting the old IT brain working again. So I may or may not end up doing the eggzams for AWS, but I thought the structure around that prep could help frame my learning. I took a look at the certs on Amazon’s website:


The above link is pretty good for generic info – if you want more detail on the AWS Certified Solutions Architect – Associate certification, this is a much better location – https://aws.amazon.com/certification/certified-solutions-architect-associate/

And I can tell I need to do the ‘associate’ stuff before I do anything ‘professional’ – and given my background, the Administrator/Architect path is the one that suits me. I’ve spent most of my career training, educating and teaching sysadmins how to manage systems – and AWS isn’t going to be any different to that. I’m not about to morph into a developer at my advanced age. You can teach a dog new tricks, but you can’t teach an old dog to be a cat.

According to Amazon, Step 1 is to take a training class. As I understand it, authorised training is not a requirement, only a recommendation. So unlike some (ahem) certification tracks that mandate authorised training, that’s NOT the case with AWS. Yippee. That means I can spend my plentiful time instead of my limited cash on training.

As a vExpert (2009-2017) I bagged a free 1-year subscription to Pluralsight, so it makes sense to use it as an alternative to authorised training from a recognised training partner. As a rule I prefer classroom training with an instructor who is alive (as opposed to dead). But given the finances, I will make do with the passivity that is online training. Pluralsight does have a course entitled “AWS Certified Solutions Architect – Associate” which fits the bill. It’s created by Elias Khnaser. I know Elias through LinkedIn and Twitter, so I intend to be a little cheeky monkey and ask him questions directly. Although, to be kind, I’ll probably store them up until the end of the course. There’s nothing worse for an instructor than to be asked a question in Module 1 that is answered in Module 2, right?


Right off the bat, Elias recommends attending another course before the above if you’re a novice. I’ve never been one to skip steps in the learning process, so I opted to do that first.


If you are going to do the fundamentals course first – I would recommend skipping to Module 3: Introduction to AWS Global Infrastructure, if you have been in the industry a while like myself. The course itself feels pretty up to date (I notice there’s no date of creation) and isn’t going to date that much, because it’s fundamentals. But you will spot little changes – for instance, the course states that there are 10 Regions plus GovCloud. Actually, it now stands at 16 Regions with another 3 planned. So long as you follow the URLs in the course you should be able to see these differences. For a more up-to-date list of the Global Infrastructure – you need this page:


My plan once I’ve gone through both courses is to double back to Amazon’s 8-step program outlined on their webpages. Both courses are about 8hrs in duration… and I would recommend perhaps going through each one twice. One of the decided benefits of online training like this is the “rewind button” – something that is decidedly lacking in instructor-led training, although I believe some vendors do allow access to online versions of their training material AFTER you have passed the exam. In my personal opinion, I imagine few people can spare the time out of their busy schedules to re-do a course all over again. The benefit, I think, is in “refreshing” yourself on a particular topic or subject you found tough.


Category: Amazon | Comments Off on My Amazon AWS Certification Plan with @pluralsight and @ekhnaser (Part God Knows!)