September 11

The 1st Inaugural – VendorSwagVMworldBag Awards 2013

keep-calm-and-turn-your-swag-on

Before I begin I have a number of people to thank for making this possible. Firstly, I’d like to thank the sponsors for supporting the VMUG – without them there would be no VendorSwag. Secondly, I’d like to thank Tim Gleed (@timgleed) for donating his VMworld Bag at VMworld US. Tim works for VCE and has helped on the vBrownbags as well.

Thanks Tim!

I’d also like to thank that guy on twitter who gave me the idea. I’ve tried in vain to re-locate you, but I’m afraid I forgot your name. Whoever you are – you’re responsible for this cosmic karma.

Well, I say this is the 1st, but there was a VMworld SwagBag award a couple of years ago that David Davis and I jointly ran. During my travels around various VMUGs I’m often attracted to the free swag on offer in the solutions exchange. Of course, there is a lot of rubbish to be sifted through – pens, crappy 2GB memory sticks stuffed with docs (but handy for giving to folks when you have a lot of photos to share with family, or as boot devices for ESXi!), mints and so on.

What I have done is collect the very best of the VendorSwag I could find and stuff it into the anniversary edition of the VMworld 2013 Bag. The bag and its contents can be won if you:

a.) Attend the UK VMUG on the 21st November at the Motorcycle Museum in Birmingham. REGISTER HERE! Right now there’s a special offer on the hotel for those coming to stay the night before. There will be lots of very interesting speakers and it’s a great chance to network with your peers. Confirmed speakers include:

Joe Baguley (Fresh from being on stage at the VMworld Keynote!)
Duncan Epping
Massimo Re Ferre
Brian Gammage
Ray Heffer
Cormac Hogan
Scott Lowe
Matthew Steiner

and…

b.) buy a strip of tickets from me on the day…

and

c.) your ticket gets called out… You must be there to win it – and you MUST wear my stupid sunglasses for a grip-and-grin photograph.

All monies raised will be donated to charity – possibly UNICEF or perhaps a UK-only charity. Haven’t decided yet.

ANYWAY…

Drumroll. The audience has settled. The lights have been dimmed. The 1st Annual VendorSwagBag Awards are about to begin. Who will win best luggage tag? Who will win best supporting mints? Who will win best stress ball? Watch the video to find out…

September 3

VMworld 2013: What’s new in vSphere 5.5: Storage

Note: Special thanks to my esteemed colleague Cormac Hogan for answering some of my questions when I was writing this post. To learn more about VAAI unmap check out Cormac’s rather excellent blogpost: VAAI Thin Provisioning Block Reclaim/UNMAP In Action

There’s lots of new stuff storage-wise in vSphere 5.5. I could use this space to talk about VSAN (apparently the v is capitalized?!?!) and vFRC (which stands for vSphere Flash Read Cache – apparently the V is not capitalized, and vFlash was regarded as too snappy and easy to remember :-p ) – but I want to keep those separate for now, and focus on the core platform.

16Gb Fibre Channel Support

vSphere 5.5 now completely supports 16Gb FC end-to-end. Previously, 16Gb was supported natively at the vSphere host, but once you got to the ports connecting to the storage array this was teamed as multiple 8Gb connections. It looked like this:

vsphere-16gb-compromised

Now with vSphere 5.5 the 16Gb support is native to the array, and looks like this:

vsphere-16gb-end-to-end

I imagine that this was largely a QA effort, with VMware working with its storage partners to get the required gear qualified for the HCL.

Microsoft Clustering Services (MSCS)

Yeah, I know it’s mad. Folks still run MSCS in the guest. I can’t believe it myself that this junk is still used, never mind that Microsoft persists in retro-fitting it to offer availability to VMs on Hyper-V. Talk about if all you have is a hammer, every problem looks like a nail. In fairness though, a lot of people do still use MSCS to help with patching critical systems. You can’t help thinking that if Windows was more robust as an OS, the uptime of a single instance would be good enough for most people. Anyway, the news is that Windows 2012 clustering is supported. It took some time, and the process was complicated by funkiness involved in the Microsoft EULA – the less said about that the better. The support has been extended not just to include FC-backed storage, but iSCSI as well. iSCSI always worked for me in my lab, but it wasn’t QA’d and not officially supported. There’s also now support for iSCSI inside the guest OS as well as the iSCSI initiator in vSphere. I don’t see much advantage to using the Microsoft iSCSI initiator as it gives you more admin work to do… The other good news is that 5-node MSCS clusters are now supported on FC, iSCSI and FCoE. Sadly, MSCS will not be supported on ToE (Token Ring over Ethernet) 😉

Permanent Device Loss/All Paths Down (PDL & APD)

In rare cases you can experience PDL/APD, where access to storage is lost completely. That can be due to hardware failures or major power loss to the rack. APD and PDL are terms used in SCSI sense codes to indicate that storage has become unavailable. This starts life as an indication of APD, and then moves to PDL status after a period of time. The difference with PDL is that the host stops trying to reach storage it has been told it cannot reach. The logic behind the APD/PDL process has been improved, and should mean that dead devices do not lurk around in the storage device list. Remember that a vSphere host sees a maximum of 256 LUNs (0-255), so cleaning up the list can be important. Of course, if the device comes back online, all that’s needed is a rescan for it to be discovered, and the VMFS mounted.
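If you prefer the command line, the rescan and remount can be driven from the ESXi shell. Here’s a minimal sketch – the datastore label is purely illustrative:

  # Rescan all storage adapters so the returning device is rediscovered
  esxcli storage core adapter rescan --all

  # Check whether the VMFS volume is visible but not yet mounted
  esxcli storage filesystem list

  # Mount it again by its label (example name only)
  esxcli storage filesystem mount --volume-label=Datastore01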

VAAI Unmap

As you may (or may not) know, VAAI Unmap concerns the process by which thinly-provisioned data/vmdk disk space is recouped by the overall system. As you might know, when you delete stuff in the world of computers it doesn’t actually get deleted – it’s just marked for deletion. That can cause problems in the thin world because disk space isn’t restored back to the overall storage system. It’s a particular pain in environments where large amounts of temp data are created and then destroyed again. VAAI introduced the SCSI unmap primitive to fix this issue. Initially, it was enabled by default in vSphere 5.0, but then it was made a manual process with subsequent updates – that’s because of the law of unintended consequences – the unmap process could negatively affect performance on some storage arrays.

There have been some improvements made here. It’s still a manual process using the vmkfstools -y command (currently this is executed in the path of the datastore where you want to free up space, and uses a % value to indicate how much free space to reclaim – it creates a temp file to handle the process, and care must be taken not to specify too high a percentage value and accidentally fill the volume). With vSphere 5.5 the % value has been replaced with a value expressed in blocks instead. vmkfstools -y is joined by a new addition to the esxcli command set – esxcli storage vmfs unmap. The esxcli method does the unmap exactly the same as vmkfstools -y; I think its addition is to make sure esxcli remains a complete toolset rather than having different commands for different functions. It’s taken quite a feat of engineering to maintain the unmap functionality whilst accommodating the new 62TB disk size.
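To make that concrete, here’s roughly what the old and new methods look like from the ESXi shell – a sketch only, with an illustrative datastore name and reclaim values:

  # Old method (vSphere 5.0/5.1): run from inside the datastore, reclaim 60% of the free space
  cd /vmfs/volumes/Datastore01
  vmkfstools -y 60

  # New method (vSphere 5.5): reclaim in units of VMFS blocks rather than a percentage
  esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200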

VMFS Heap Size

VMFS used to have a “heap size” limit that effectively capped the amount of open files at around 30TB per host, despite the platform supporting much larger virtual disks. Very rarely it was possible to run out of heap space; essentially, the issue was VMFS not evicting stale file system pointers in a timely enough fashion. Previous updates worked around the issue by merely increasing the heap size value. In vSphere 5.5 it has been properly addressed by improving the reclamation of heap space. The heap size allocation has returned to a more acceptable level, so the 256MB heap size now allows access to all of the 62TB available to a VM.
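For reference, on the older releases (5.1 and earlier) the workaround was to raise the heap via an advanced setting – something along these lines, assuming the VMFS3.MaxHeapSizeMB parameter is present on your build (check the relevant KB before touching it):

  # Inspect the current VMFS heap maximum (pre-5.5 hosts)
  esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

  # Raise it to 256MB – a host reboot is needed for it to take effect
  esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256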

September 2

VMworld 2013: What’s new in vSphere 5.5: Networking

Well, I’m back in the UK after my trip to the USofA last week for the 10th annual VMworld Fest. I was rushed off my feet last week, so I wasn’t able to write up all of my “What’s New” content. Today’s post is all about networking in vSphere. As ever there are big and small improvements to the overall networking stack. So firstly, vSphere 5.5 ships with support for 40Gbps network interfaces. Support is limited to Mellanox ConnectX-3 VPI adapters, but I’m sure others will follow on the HCL in due course.

Improved LACP

LACP support has been around for some time in VMware’s DvSwitches, but previously only one LACP configuration was allowed per DvSwitch. That meant you needed to set up more than one DvSwitch (and have the pNICs to do that) if you wanted more. vSphere 5.5 introduces support for LACP Link Aggregation Groups (LAG – an unfortunate industry acronym, after all who wants lags in their networking!). You can have up to 64 LAGs per ESX host, and 64 per DvSwitch (1 x DvSwitch with 64 LAGs, or 64 x DvSwitch with 1 LAG, or any combination therein). It now supports up to 22 different load-balancing algorithms (not just IP hash).

Some Grabs!

In this example LACP deployment there are two portgroups which are associated with two LAGs backed by one DvSwitch – each portgroup/LAG is backed by two different pNICs.

vpshere-niclb-lacp-lags

Configuration is a two-step process. First add the LAGs, and then assign them to a portgroup…

vsphere-lags-step1-definelags

vsphere-lags-step2-assigntoportgroups
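Once the LAGs are defined you can sanity-check the negotiation from the host side with esxcli. This is a sketch from memory of the 5.5 lacp namespace, so treat the exact syntax as an assumption and lean on esxcli’s built-in help:

  # Show the LACP configuration the host has picked up from the DvSwitch
  esxcli network vswitch dvs vmware lacp config get

  # Show the negotiated state of each LAG and its member uplinks
  esxcli network vswitch dvs vmware lacp status get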

New Packet Capture Tool

vSphere 5.5 introduces a new packet capture tool. In previous editions of ESX there was a fully-fledged console based on RHEL, and folks would use tools such as tcpdump to capture packets. We used to use this in training courses to show how security could be weakened on the vSwitch to allow promiscuous-mode captures. Don’t forget that a DvSwitch supports NetFlow and Port Mirroring methods of gathering network information as well. The new utility can capture at three different levels: the vmnic, the vSwitch, and the DvUplink.
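For the curious, the new utility appears in the 5.5 builds as pktcap-uw. A couple of hedged examples of the sort of captures it can take – the device names, port ID and output paths are illustrative:

  # Capture traffic at the physical uplink (vmnic) level and write it to a pcap file
  pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap

  # Capture traffic on a particular vSwitch port (port ID taken from esxtop or net-stats)
  pktcap-uw --switchport 50331660 -o /tmp/port50331660.pcap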

Port Security via Traffic Filtering

This is sometimes referred to as ACLs on physical switches. What it allows you to do is drop packets based on Ethernet header information, such that traffic that matches the rule never even gets to leave the switch. You can drop based on source/destination MAC address, TCP/IP port, or source/destination IP address. It’s even possible to drop internal vmkernel communications such as vMotion, although the usage case for doing so has yet to be found. The traffic can be dropped whether it’s ingress or egress.

QoS Tagging for end-to-end SLA

vSphere 5.5 introduces support for Differentiated Services Code Point (DSCP) marking or tagging. DvSwitches have supported vSphere Network I/O Control (NIOC) for some time, and for many customers these per-portgroup priority/bandwidth allocations fit their needs. However, they only go so far – and so it’s been decided to offer a greater level of granularity, which allows tagging of traffic based on its type. For example we can tag and prioritise traffic to a database server based on HTTP or HTTPS. A combination of NIOC and tagging should allow customers to meet their SLA needs. DSCP tagging works at L3, and adds a 6-bit field to define the type of traffic, supporting up to 64 different traffic classes. Whether folks will actually configure this feature will vary upon need, but it’s likely to keep the network teams happy from a “tick box” perspective. As ever, virtualization admins often experience resistance to change merely because networking guys expect a virtual switch to have ALL the features and functionality of a physical switch.

 vsphere-network-tags-forQoA

August 27

VMworld 2013: What’s New in vCenter 5.5

I guess there’s going to be a bit of “mea culpa” to be had around vCenter and SSO. I guess we were a little caught out on this feature. Hopefully that’s been addressed with updates and plenty (some would say too many) of KB articles. It’s hoped that SSO in vSphere 5.5 goes a long way to repairing that trust relationship we have with customers that VMware technologies just work. For my own part I had a rather pleasant vSphere 5.1 roll-out: I started with a blank slate, and deployed the vCSA with SSO on board. The new version of SSO should support multi-domain, multi-forest configurations with ease.

vCenter also offers improvements on the connections to the DB backend. Firstly, support for clustered DB backends with Oracle and MS-SQL has been re-introduced. And for MS-SQL there’s now support for secure connectivity together with Windows authentication.

With the vCenter Server Appliance, the internal Postgres database now offers vastly improved scalability – up to 100 hosts and 3,000 VMs. It’s hoped this improved scalability will entice more customers to consider a move away from Ye Olde Windows vCenter. With that said, the vCSA still only supports Oracle as an external database, which I know will be a concern for customers. But it’s felt that the new scalability of the Postgres database means the demand for an external database may decline. I remember back in my instructor days people moaning on about needing Microsoft SQL and licenses…

The web-client gets a bit of an overhaul. I know some folks are still using Ye Olde vSphere Client, and admittedly I’ve noticed in vSphere 5.1 those “gears” did take some time to turn before a refresh or a menu opened. In my experience of the beta I found the web-client performance vastly improved, with the wait time for opening menus or tabs so short I had to take a video to capture the gears for a recent blogpost! As a Mac user I’m pleased to hear the web-client fully supports OS X. Previously the plug-in to the web-client was Windows only – and that meant I needed to run a Windows instance in VMware Fusion on my Mac. With the new web-client I won’t need to do that – and the lost functionality (VM Console, Deploy OVF Templates and Client Devices) is now fully available.

Finally, there are improvements to the UI including the ability to do drag & drop, see a list of recently used items to speed up navigation – and filters to clear views down to just the items you want to see.

vsphereplatvorm-filters-in-vcenter

August 26

VMworld 2013: What’s New in 5.5 – vSAN

Yes, I know it sounds a bit weird to have a “what’s new” post on a new product – but in an effort to keep these posts together it seemed to make sense. Besides which, this post is more than just a round-up of features – it’s more a discussion about what vSAN is, what it is capable of, and what it is not capable of…

vSAN is a brand new product from VMware, although it got its first tech preview back at VMworld last year. That’s why I always think if you’re attending VMworld you should always search for and attend the “Tech Preview” sessions. We tend not to crow on about futures stuff outside of a product roadmap and an NDA session – so the Tech Previews are useful for those people outside of that process to get a feel for the blue-sky future.

So what is vSAN? Well, it addresses a long-time challenge of all virtualization projects – how to get the right type of storage to allow for advanced features such as DRS/HA, and deliver the right IOPS – at the right price point. In the early days of VMware (in the now Jurassic period of ESX 2.x/2003/4) the only option was FC-SAN. That perhaps wasn’t a big ask for early adopters of virtualization in the corporate domain, but it rather excluded medium/small businesses. Thankfully, Virtual Infrastructure 3.x introduced support for both NFS and iSCSI, and those customers were able to source storage that was more competitive. However, even with those enhancements businesses were still left with storage challenges dependent on the application. How to deliver cost-effective storage to test/dev or VDI projects, whilst keeping the price point low? Of course, you could always buy an entry-level array to keep the costs down, but would it offer the performance required? In recent years we’ve seen a host of new appliance-led start-ups (Nutanix, SimpliVity, and Pivot3) offer bundles of hardware with combos of local storage (both HDD and SSD, and in some cases Fusion-io cards) in an effort to bring the IOPS back to the PCI bus, and allow the use of commodity-based hardware. You could say that VMware vSAN is a software version of this approach. So there’s a definite Y in the road when it comes to this model – do you buy into a physical appliance, or do you “roll your own” and stick with your existing hardware supplier?

You could say vSAN and its competitors are attempts to deliver “software-defined storage”. I’ve always felt a bit ambivalent about the SDS acronym. Why? Well, because every storage vendor I’ve met since 2003 has said to me “We’re not really hardware vendors, what we’re really selling you is software”. Perhaps I’m naïve and gullible, and have too readily accepted this at face value, I don’t know. I’m no storage guru after all. In fact I’m not a guru in anything really. But I see vSAN as an attempt to get away from the old storage constructs that I started to learn more and more about in 2003, when I was learning VMware for the first time. So with vSAN (and technologies like Tintri) there are no “LUNs” or “Volumes” to manage, mask, present and zone. vSAN presents a single datastore to all the members of the cluster. And the idea of using something like Storage vMotion to move VMs around to free up space or improve their IOPS (by moving a VM to a bigger or faster datastore) is largely irrelevant. That’s not to say Storage vMotion is a dead-in-the-water feature. After all you may still want to move VMs from legacy storage arrays to vSAN, or move a test/dev VM from vSAN to your state-of-the-art storage arrays. As an aside, it’s worth saying that Storage vMotion from vSAN-to-array would be slightly quicker than from array-to-vSAN. That’s because the architecture of vSAN is so different from conventional shared storage.

vSAN has a number of hardware requirements – you need at least one SSD drive, and you cannot use the ESX boot disk as a datastore. I imagine a lot of homelabbers will choose to boot from USB to free up a local HDD. You need not buy an SSD drive to make vSAN run on your home rig – you might have noticed both William Lam and Duncan Epping have shown ways of fooling ESX into thinking an HDD drive is SSD-based. Of course, if you want to enjoy the performance that vSAN delivers you will need the real deal. The SSD portion of vSAN is used purely to address the IOPS demands – it acts as a cache-only storage layer, with data written to disk first before it’s cached, to improve performance and to reduce the SSD component as a single point of failure.
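For the curious, the trick William and Duncan describe boils down to a SATP claim rule that tags the local disk with the enable_ssd option – roughly like this, with a made-up device ID:

  # Tag a local HDD so ESXi treats it as an SSD (device ID is illustrative)
  esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.600508b1001c577e --option "enable_ssd"

  # Reclaim the device so the new claim rule takes effect
  esxcli storage core claiming reclaim -d naa.600508b1001c577e

  # Confirm the device now reports itself as SSD
  esxcli storage core device list -d naa.600508b1001c577e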

I don’t want to use this blogpost to explain how to set up or configure vSAN, but to highlight some of the design requirements and gotchas associated with it. So with that, let’s start with use cases: what it is good for, and what it is not good for.

vSAN Use Cases

vsan

Because vSAN uses commodity-based hardware (local storage), one place where vSAN sings is in the area of virtual desktops. There’s been a lot of progress in storage over the last couple of years to reduce the performance and cost penalty of virtual desktops, both from the regular storage players (EMC, NetApp, Dell and so on) as well as a host of SSD or hybrid storage start-ups (Tintri, Nimble, Pure Storage, WhipTail etc). All the storage vendors have cottoned on that the biggest challenge of virtual desktops (apart from having quality images and a good application delivery story!) is storage. They’ve successfully changed that penalty into an opportunity to sell more storage. I’ve often felt that a lot of engineering dollars and time have been thrown at this problem, which is largely by design. Even before VDI took off, storage has been a systemic/endemic issue. The hope is that the new architecture will allow genuine economies of scale. I’m not alone in this view. In fact even avowed VDI sceptics are becoming VDI converts (well, kind of. Folks do like a good news headline that drives traffic to their blogs, don’t they? I guess that’s the media for you. Never let the truth get in the way of a good story, eh?)

The second big area is test/dev. The accepted wisdom is that test/dev doesn’t need high-end performance: good enough will do, and we should save our hardware dollars for production. There is some merit in this, but it’s also the case that developers are no less demanding as consumers, and there are some test/dev environments that experience more disk IOPS churn than production platforms. There’s also a confidence factor – folks who experience poor responsiveness in a test/dev environment are likely to express scepticism about that platform in production. Finally, there are the ever-present public cloud challenges. Developers turn to the public cloud because enterprise platforms using shared storage require more due diligence when it comes to the provisioning process. Imagine a situation where developers are siloed in a sandbox using commodity storage, miles away from your enterprise-class storage arrays demarcated for production use. The goal of vSAN is to generate a cache hit-rate of almost 96%. That means 96% of the time the reads are coming off solid-state drives with no moving parts.

Finally, there’s DR. vSAN is fully compatible with VMware’s vSphere Replication (in truth VR sits so high in the stack it has no clue what the underlying storage platform is – it will replicate VMs from one type of storage (FC) to another (NFS) without a care). So your DR location could be commodity-based servers using commodity-based storage.

So it’s all brilliant and wonderful, and evangelists like me will be able to stare into people’s foreheads for the next couple of years – brainwashing people that VMware is perfect, and that vSAN is an out-of-the-box solution with no best practises or gotchas to think of. Right? Erm, not quite. Like any tech, vSAN comes with some settings you may or may not need to change. In most cases you won’t want to change these settings – if you do, make sure you’re fully aware of the consequences….

Important vSAN Settings

VSANpolicies

Firstly, there’s a setting that controls the “Read Cache Reservation”. This is turned on by default, and the vSAN scheduler will take care of what’s called the “Fair Cache Allocation”. By default vSAN makes a reservation on the SSD – 70% for reads, and the rest for writes. The algorithms behind vSAN are written to expect this distribution. Changing this reservation is possible, but it can include files that have nothing to do with the VM’s workload – such as the .vmx file, log files and so on. The reservation is set per-VM, and when changed it includes all the files that make up a VM. Ask yourself this question – do you really want to cache log files, and waste valuable SSD space as a consequence? So you should really know the IO profile of a VM before tinkering with this setting. Although the option is there, I suspect many people are best advised to leave it alone.

Secondly, there’s a setting called “Space Reservation”. The default is that it’s set to 0, and as a consequence all the virtual disks provisioned on the vSAN datastore are thinly provisioned. The important thing to note is that from a vSAN perspective virtual disk formats are largely irrelevant – unless the application requires them (remember guest clustering and VMware Fault Tolerance require the eagerzeroedthick format). There’s absolutely no performance benefit to using thick disks. That’s mainly because of the use of SSD drives, but it’s also a grossly wasteful use of precious SSD capacity. What’s the point of zeroing out blocks on an SSD drive, unless you’re a fan of burning money?

In fairness, you might be the sort of shop that isn’t a fan of monitoring disk capacity, and you’re paranoid about massively over-committing your storage. At the back of your mind you picture Wile E. Coyote from the cartoons – running off the end of a cliff. My view is that if you’re not monitoring your storage, the whole thin/thick debate is largely superfluous. The scariest thing is that you’re not monitoring your free space! You must be really tired from all those sleepless nights you’re having, worrying whether your VMs are about to fill up a datastore!

Finally, there’s a setting called “Force Provisioning”. At the heart of vSAN are storage policies. These control the number of failures tolerated, and the settings I’ve discussed above. What happens if a provisioning request to create a new VM is made, but it can’t be matched by the storage policy? Should it fail, or should it be allowed to continue regardless? Ostensibly this setting is there for VDI environments, where a large number of maintenance tasks (refresh and recompose) or the deployment of a new desktop pool could unintentionally generate a burst of storage IOPS. There are situations where the storage policy settings mean that these tasks would not be allowed to proceed. So see it as a fallback position. It allows you to complete your management tasks, and once the tasks have completed vSAN responds to the decline in disk IOPS.
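If you want to see what the host considers its defaults before tinkering with any of the above, there’s an esxcli namespace for it in 5.5. Treat this as a sketch – double-check the exact syntax against esxcli’s help on a GA build:

  # Show the default vSAN storage policy per object class (vdisk, vmnamespace and so on)
  esxcli vsan policy getdefault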

Gotchas & Best Practises

Is vSAN ready for Tier 1 production applications? As ever with VMware technologies – so long as you stay within the parameters and definition of the design you should be fine. Stray out of those, and start using it as a wrench to drive home a nail, and you could experience unexpected outcomes. First of all, usage cases – although big data is listed on the graphics, I don’t think VMware is really expecting customers to run Tier 1 applications in production on vSAN. It’s fine for Oracle, SAP and Exchange on vSAN within the context of a test/dev environment – and of course, our long-term goal is to do precisely that. But you must remember that vSAN is a 1.0 release, and Rome wasn’t built in a day. Whenever I’ve seen customers come a cropper with VMware technologies (or any technology for that matter), it’s when they take something that was designed for Y, and stretch it to do X. Oddly enough, when you take an elastic band and use it to lift a bowling ball it has a tendency to snap under the load….

Don’t Make a SAN out of vSAN: The other thing that came out of beta testing was a misunderstanding in the way customers design their vSAN implementation. Despite the name, vSAN isn’t a SAN. It’s a distributed storage system, and designed as such. So what’s a bad idea is this: buying a number of monster servers, packing them with SSD – dedicating them to this task – and then presenting the storage to a bunch of diskless ESX hosts. In other words, building a conventional SAN out of vSAN. Perhaps vSAN isn’t the best of names, but if I remember rightly the original name was VMware Distributed Storage. I guess vSAN is catchier as a product name than vDS! Now it may be that in the future this is a direction vSAN could (not will) take, but at the moment it is not a good idea. vSAN is designed to be distributed, with a small amount of SSD used as cache and a large amount of HDD as conventional capacity. It’s also been designed for HDDs that excel in storage capacity rather than spindle speed – so it’s 7K disks, not 15K disks, for which it’s been optimized. So a vSAN made only from SSD won’t give you the performance improvements you expect – but it will give you an invoice that will make you wince!

VMware HA. Once again a new innovation from VMware has necessitated an overhaul in VMware’s clustering technology. A VM resides on an ESX host, and so do its files. The whole point is keeping the resources of the VM close to each other – memory, CPU, network and now disk are all within the form-factor of a server or blade. But what if a server dies? What then? If an ESX host fails or is put into maintenance mode, that will trigger either a graceful evacuation of the host or a disgraceful one. When the host comes back online it not only re-joins the HA/DRS cluster, it also re-joins the vSAN as a member. Now if it’s a maintenance mode event, a rebuild begins. In the beta this was delayed for 30 minutes, but under testing it has been extended to an hour. This is to avoid spurious rebuilds that were not required – by rebuild we mean the metadata/data that backs an individual node that has been down for a period receives delta updates. I guess the analogy would be that if you shut down a Microsoft Active Directory Domain Controller for an hour or so, when it came back up it would trigger a directory services sync. The important thing from a virtualization perspective is we want the ESX host to complete this sync successfully before DRS starts repopulating the host with VMs. Now think about that for a second. After a reboot (for whatever reason) an ESX host now takes an hour before it rejoins the cluster. Therefore you may need to factor in additional ESX host resources to cover this period. The operative word is “may”, not “must”. I think much depends on the spare capacity you have left over once a server is unavailable due to an outage or maintenance. The situation is a bit different if the problem is detected as a component failure. If there is a disk failure or read error then vSAN doesn’t wait an hour – the rebuild process begins immediately.

When is commodity hardware, commodity hardware? This is one for labbers who might want to run vSAN at home, but it could be relevant to a vSAN configured at work too. I’ve been looking into moving back to a home lab. That means buying commodity hardware. Right now I’m very attracted to the HP ML350e series. It supports a truckload of RAM (192GB max with two CPUs), although it’s big and expensive compared to white boxes and Shuttle XPCs. The reseller offered me a choice of disks. The hot-pluggable ones from HP are proprietary and pricey; the more generic SATA drives are not hot-pluggable, and are much cheaper. For my home lab I know which I will be choosing. The other thing I need to think about is what capacity and ratio of HDD and SSD I need for my lab. There could be a tendency to overspend. After all, my plan calls for the use of a Synology NAS which I hope to pack with high-capacity SSD. Although I want to use vSAN, I can imagine in my very volatile lab environment (where it’s built and destroyed very frequently) having my “core” VMs (domain controller, MS-SQL, View, management virtual desktops) on external storage might give me peace of mind should I do something stupid….

August 26

VMworld 2013: What’s New in ESX 5.5

So I guess you’re beginning to detect a theme in my recent posts. Please bear with me, normal service will be resumed shortly.

You may not be surprised to hear that the configuration maximums in ESX have gone up even further. So you can now have up to:

  • 320 Logical CPUs
  • 4TB of RAM
  • 16 NUMA nodes
  • 4096 vCPUs

Per ESX host. That’s more or less a doubling of capacity over ESX 5.0/5.1. Now I doubt many folks will actually configure such a system, mainly because filling a physical box with that much memory is cost-prohibitive for most people. But we have seen the “standard” for how much memory an ESX host has grow as physical boxes get beefier and memory prices go down. I remember when 32GB/64GB was the sweet spot; I guess now it’s more the 96-128GB range. So there’s a bit of future-proofing here, but also making sure that if any other virtualization vendor wants to get into a pissing contest we can more than deal with that scenario. 🙂

There are also new ESX host features, such as support for hot-plug of SSD drives – and the ability to leverage new physical memory that supports “reliable memory” information. It means the ESX host can pick up information from the RAM chips about portions of the memory that are marked as being “reliable”. It will then make resident there the parts of ESX that are critical to its uptime, such as the VMkernel itself, user worlds, init threads, hostd and watchdog processes. It should mean the chances of a PSOD due to bad blocks of memory are minimized. I guess the days of burning in your RAM with memtest tools are increasingly unfeasible due to the quantity of memory we now have, and the time it takes to do a couple of passes.

August 25

VMworld 2013: What’s New in vSphere 5.5: Virtual Machines

Note: Sorry for the poor quality of graphics in my post. Some of them were taken from videos and powerpoints (so I could get this content to you quickly), and not from a live system. I will probably set a reminder to myself to update them once I’ve got my paws on the GA release.

vSphere 5.5 introduces a new virtual machine compatibility version, which is now 10. This is sometimes abbreviated to vHW 10. You might recall we dispensed with “Virtual Hardware Level” values for something a bit more user-friendly: compatibility levels. It allows us to express compatibility based on both hardware and VMware Tools levels as a single entity.

62TB Virtual Disks (yes, 62TB, not 64TB!)

vHW 10 introduces support for 62TB virtual disks. That’s something folks have been wanting VMware to do for some time. Up until now only physical mode RDMs supported 64TB volumes/LUNs. That sounds okay, until you remember that there are some products that are incompatible with RDMs, such as vCloud Director. So 62TB virtual disks are now supported. Why 62TB and not 64TB? Well, we reserve 2TB of disk space for features like VM snapshots. The last thing you want to do is create a 64TB LUN, fill it with a 64TB virtual disk, and find you have no space left for snapshots. This release also introduces support for 62TB virtual RDMs as well.

Now there are some limitations. We currently don’t support extending an existing <2TB disk into the >2TB range whilst the VM is powered on. The important point is that there is no onerous conversion process required to get to the 62TB virtual disk, unlike some other foreign virtualization vendor who shall remain unmentioned. 🙂 There are, however, some requirements and incompatibilities – some of them are not within VMware’s control, and some are. Here’s a quick hit list.

1. BusLogic Controllers (commonly used by default by Windows NT/2000) are not supported

2. The partition(s) within the disk need to be GPT, not MBR, formatted. There are tools that will convert this for you, but not for boot disks. Also beware of using small cluster sizes on partitions: anything <16K will mean you won’t be able to take a partition and increase it to the maximum 62TB size. So in a nutshell there are some guest operating system limits.

3. vSAN is not supported

4. VMware FT is not supported

5. You must use the new web-client, as Ye Olde vSphere C# client will spit back errors.

vsphereclient-64tb-must-use-the-web-client

Stay tuned to the blog, as I have a much longer blog post on 62TB support. But I need to verify it against the GA release before I click the publish button.

New SATA Controller – Advanced Host Controller Interface (AHCI)

vHW 10 introduces a new device type: the AHCI SATA controller. It allows for up to 30 devices per controller, and we support 4 of them – that’s a max of 120 devices (just in case you need to take out a calculator to work out 4×30). Compatibility is pretty good – and there’s still an IDE controller if needed. One anomaly is if you’re running Mac OS X on ESX (now there’s a popular configuration!), which requires a CDROM on the AHCI controller, because Apple dropped support for IDE some years ago.

vsphereplatvorm-sata-disk

GPU Support:

I’m embarrassed to say I don’t know much about GPU support, mainly because my ESX hosts are running on Jurassic hardware and none of my GPUs are supported. I’m also embarrassed to say I didn’t delve into it in my last outing with Horizon View. The good news is the hardware support is improving: not just NVIDIA cards, but Intel/AMD as well. If you have looked at this, GPU support has three modes – software, hardware and automatic. Automatic seems the way to go, as it tries the hardware-assisted method first, but falls back to software mode if the GPU isn’t supported. The hardware mode seems risky to me, as it can break vMotion if the destination host doesn’t have the supported hardware. The other interesting news is we now support Linux device drivers as well. That rather sets us apart as the only virtualization vendor with a complete set of enhanced drivers for Linux.

August 13

The Defy Convention Convention: My VMworld 2013

Well, in a few days Carmel and I will be jetting off to San Francisco. It hardly feels like a minute has elapsed since I was last in Palo Alto at CorpHQ before travelling on to the Indy VMUG. Since I got back I’ve been in the thick of a home move. I’m sat in my home office surrounded by cardboard boxes as I type. Fortunately, they’re not mine but my wife’s office boxes. We are trying to share an office for the first time in years, so wish us luck!

We plan to spend the weekend before VMworld in Half Moon Bay visiting some Brit and Yank friends – before heading downtown to San Francisco on Sunday afternoon. Carmel has signed up for some Spousetivities during the day… Meanwhile, during the week you will mainly find me in the Solutions Exchange at the VMUG booth. I’ve agreed to base myself there most of the time, ducking out occasionally for the odd breakout session or meeting. I have some designated time slots where you can be 100% sure to find me there:

Monday, 1:30- 2:00 pm
Tuesday, 3:30 – 4:00 pm
Wednesday, 3:30 – 4:00 pm

On top of that I have a session at VMworld where I will be presenting. I submitted 3 sessions this year, but sadly none of them got approved. Alan Renouf suggested that if I’d put “Software-Defined Datacenter” into the title it might have upped my chances! Fortunately, the vCloud Hybrid Service team approached me to speak on the subject of vCHS. More specifically, the session is about how to get your VMs in and out of vCHS using vCloud Connector, and also how the new “Data Protection Services” (DPS) works (currently a freemium service).

Session Title: PHC5752 – Data In, Data Out and Data Protected

Session Time: Wednesday, Aug 28, 2:00 PM – 3:00 PM

Session HashTag: #PHC5752 (yes I know, catchy huh? 😉 )

I will be joined on stage by Roshni Pary, Product Marketing Manager.

As for parties and suchlike: I have an invite for the CXI party and the VMUG party, and I’m hoping to get one for VMunderground. This year, Carmel and I are uninspired by the VMworld bands. Sorry, not a patch on seeing The Killers in Vegas (Carmel’s first VMworld). So we will probably just have dinner and drinks, and go to the VMworld Unparty instead.

June 26

Extreme Hands-on-Labs Competition

DEFY-CONVENTION

We are running a fun contest to help raise awareness around Hands on Labs at VMworld. It is a pretty simple contest to participate in – all you have to do is send in a picture of yourself taking Hands on Labs in a non-traditional way – for example photos of you taking labs on a train, or on a plane – you could call it  “Defying Convention”

I see it a bit like those photos of folks doing “extreme ironing”

tumblr_ll9iuqghsr1qbp781o1_500

The prize – a free ticket to VMworld!

More Info…

April 24

VMworld Voting is now open…

I got this from Eric Sloof this morning – who I follow on twitter. Seems like the public voting on VMworld Sessions has opened. I’m pleased to see that all 3 of my sessions were approved as candidates for the voting process.

Mike Laverick’s Sessions:

4575 – Zero to Colo: vCloud Director in my Lab
This session has been toured and well received at various VMUGs around Europe this year – it covers my journey through learning vCloud Director, and how thinking about the cloud has shaped and continues to re-shape the way I do virtualization. It also has a strong nod towards homelabs, and how the vCommunity can scale up their homelabs to be able to take on the next generation of VMware technologies.

4576 – VMware Horizon View 5.2 meets vCNS Edge Gateway…
In this hands-on and practical session, Mike Laverick (VMware’s Senior Cloud Infrastructure Evangelist) explains how to enable access to the VMware Horizon View Security Server and Connection Server – using VMware’s very own Edge Gateway for both firewall and load-balancing. This session will be of interest both to people who support VMware View and are looking for a cost-effective way of replacing hardware-based load-balancers/firewalls – and to those who use vCloud Director and are looking for an example of integration across the product suite.

4843 – Rapid VMware ESX Deployment with the Ultimate Deployment Appliance
The Ultimate Deployment Appliance (UDA) is a community-backed solution (designed by Carl Thijssen and promoted by Mike Laverick) that automates the deployment of ESX (as well as other OSes) and is an all-in-one PXE/TFTP/DHCP appliance. In this session you can learn all about its setup and configuration, with a live demo…

To vote you need to have an account on vmworld.com – and then once logged in, click this link. The easiest way to find my sessions is to filter on my name as a keyword:

Screen Shot 2013-04-24 at 10.15.37

 

And… whilst you’re there you might want to search for Eric as well 🙂 If you’re an independent guy from the vCommunity like Eric (in other words not backed by some megalithic corporate!), let me know what your sessions are and I will promote them as I have Eric’s sessions:

Eric Sloof’s Sessions:

4882 – How to create professional video tutorials

In this session you will learn how to create professional videos easily. You will get a walkthrough of some basic steps like setting up your lab, using the right microphone and disabling other interfering software. Besides a lot of great tips for recording an instructional video, you will also learn how to edit the videos and use special effects to make the tutorials more attractive to watch. You will also learn how to import other content. Once the video is ready, you will get a heads up about all the different possibilities for sharing the video.

4943 – VMware High Availability Failover Capacity Demystified

VMware High Availability offers several options to configure failover capacity for a cluster. This session will show you the different options and their pitfalls. You will learn how to configure vSphere HA to tolerate a specified number of host failures. With the Host Failures Cluster Tolerates admission control policy, vSphere HA ensures that a specified number of hosts can fail and sufficient resources remain in the cluster to fail over all the virtual machines from those hosts.

5008 – vCenter Operations and the quest for the Missing Metrics (with Duco Jaspars)

This session will teach you how to customize vCenter Operations to provide you the information you really need for your business. You will get an insight into very useful vCenter Operations Manager customizations. We do this by giving you some real-life examples from the field where we use Custom Dashboards, Super Metrics, Adapters and Alerts in order to give you the best possible view into the well-being of your environment.

Chas Setchell – Independent Consultant:

 5631 – Got .50 Cents Build a Lab 

So you want to be a VMware Ninja, and have no access to real servers to build a home lab and learn on your own time. In this session you will learn how to build a production like lab environment for as little as .38 cents per hour that will enable you to learn the latest and greatest from VMware. We will take you on a step by step journey and in less than 30min from sign up to actually spinning up your very own dedicated server with full IP KVM access and have fully deployed AutoLab running vSphere, vCenter and some virtual machines and perform a vMotion. You will also learn how to create an image of your lab that will save the current state and allow you to restore at a later date, since servers are leased by the hour, no need to keep paying if you are not using your lab.

 
