What's New in vSphere 5.5

ESXi Hypervisor enhancements

  • Hot-Pluggable SSD devices
PCIe SSDs are becoming quite common due to their improved performance.
In vSphere 5.5, we can now hot-swap (hot-add/hot-remove) PCIe SSD devices on a running vSphere host without downtime. This capability was already available for SATA and SAS disks in previous versions.


  • Support for Reliable Memory Technology
The ESXi hypervisor runs directly from memory, so an uncorrected memory error can crash it. In vSphere 5.5, this risk is addressed using a new technology called Reliable Memory Technology. The system firmware marks a region of system memory as fault resilient (reliable memory) and communicates this information to the operating system. The ESXi hypervisor uses it to place the VMkernel and other critical components in that fault-resilient region, making the hypervisor more robust against memory errors.
On Dell PowerEdge servers this is also known as Fault Resilient Memory.

  • Enhancements to CPU C-States (Operating States)
Prior to 5.5, vSphere used only CPU P-states (performance states) for power saving. Higher P-state numbers represent slower processor speeds, and power consumption is lower at higher P-states. To operate at any P-state, the processor must be in the C0 operational state, where the processor is working and not idling.
A CPU enters a C-state (or "C-mode") to save energy when it is idle. The basic idea is to cut clock signals. The states are numbered starting at C0, the normal operational mode in which the CPU is 100% active. The higher the C-number, the deeper the CPU sleep mode: at higher C-states, more components shut down to save power. The disadvantage is that deeper sleep states have slower wake-up times. In 5.5, VMware leverages both P-states and C-states to balance power savings and performance.
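To make that trade-off concrete, here is a minimal, purely illustrative sketch (not VMware code; the states, power figures, and latencies are invented for the example) of how an idle governor might pick the deepest C-state whose wake-up latency still fits the expected idle period:

    # Illustrative only: the power figures and latencies below are
    # invented for the example, not real CPU data.
    C_STATES = [
        # (name, relative_power, wakeup_latency_us)
        ("C0", 1.00, 0),     # fully operational
        ("C1", 0.70, 2),     # clocks gated
        ("C3", 0.40, 50),    # caches flushed
        ("C6", 0.10, 200),   # core powered down
    ]

    def pick_c_state(expected_idle_us):
        """Pick the deepest (lowest-power) state whose wake-up latency
        is small relative to the expected idle period."""
        best = C_STATES[0]
        for state in C_STATES:
            name, power, latency = state
            if latency * 10 <= expected_idle_us and power < best[1]:
                best = state
        return best[0]

    print(pick_c_state(30))      # short idle -> C1
    print(pick_c_state(5000))    # long idle  -> C6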

Virtual Machine enhancements

  • vSphere 5.5 introduces a new VM hardware version (version 10)
  • Expanded vGPU Support
One of the main additions in vSphere 5.1 was hardware-accelerated 3D graphics: virtual graphics processing unit (vGPU) support for a VM. This allows VMs with graphics-intensive applications to take advantage of hardware GPUs. However, this support was limited to NVIDIA-based GPUs.
With vSphere 5.5, the support has been extended to Intel and AMD-based GPUs as well.

There are three rendering modes for a VM with vGPU:
o   Automatic
o   Software
o   Hardware

If automatic mode is enabled and the VM is migrated with vMotion to a destination host where no GPU is available, software rendering is enabled automatically.
If hardware mode is enabled and no GPU is available at the destination, the vMotion is not attempted.
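As a rough illustration of how the mode is applied, the vSphere API exposes it on the VM's video-card device. The following pyVmomi sketch assumes a connected session and an already looked-up vm object (the lookup code is omitted); treat it as a sketch under those assumptions, not a tested recipe:

    # Sketch: set the 3D renderer on a VM's video card via pyVmomi.
    # Assumes `vm` is a vim.VirtualMachine obtained from a live session.
    from pyVmomi import vim

    def set_renderer(vm, mode):
        """mode is one of 'automatic', 'software' or 'hardware'."""
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualVideoCard):
                dev.enable3DSupport = True
                dev.use3dRenderer = mode
                change = vim.vm.device.VirtualDeviceSpec(
                    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                    device=dev)
                return vm.ReconfigVM_Task(
                    vim.vm.ConfigSpec(deviceChange=[change]))
        raise RuntimeError("VM has no video card device")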

  • Graphics acceleration is now also available for Linux guests

VMware vCenter enhancements

  • Enhanced vCenter Single Sign-On
  • Enhanced vSphere Web Client
  • vSphere App HA
Prior to vSphere 5.5, third-party applications in a VM were monitored using the VMware vSphere Guest SDK. Alongside this, virtual machine monitoring uses heartbeats from VMware Tools, as well as I/O activity, to verify that a VM is running; if neither is detected, the VM is restarted by vSphere HA.

In vSphere 5.5, VMware has simplified application monitoring for vSphere HA with the introduction of vSphere App HA. This new feature works with vSphere HA host monitoring and VM monitoring to improve application uptime. vSphere App HA can be configured to restart an application service when an issue is detected, and if the application does not restart properly, it can be configured to restart the VM. App HA uses VMware vFabric Hyperic for monitoring: to use this feature, vSphere App HA and vFabric Hyperic are installed alongside vCenter. vFabric Hyperic monitors the applications and acts according to the policies set in vSphere App HA.

  • Enhanced VMware HA
Prior to 5.5, during a host failure, HA restarted VMs on the surviving hosts without considering any VM anti-affinity rules. It was then up to DRS to vMotion VMs after the HA failover so that placement again complied with the VM anti-affinity rules.

In vSphere 5.5, vSphere HA has been enhanced to conform to virtual machine anti-affinity rules during failover.
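For reference, the anti-affinity rules themselves are defined at the cluster level (in 5.5, HA's respect for them is controlled through the HA advanced option das.respectVmVmAntiAffinityRules). A pyVmomi sketch of creating such a rule, assuming the cluster and VM objects have already been looked up from a live session:

    # Sketch: create a VM-VM anti-affinity rule with pyVmomi.
    # Assumes `cluster` (vim.ClusterComputeResource) and `vms`
    # (a list of vim.VirtualMachine) come from a live session.
    from pyVmomi import vim

    def add_anti_affinity_rule(cluster, vms, name="separate-app-nodes"):
        rule = vim.cluster.AntiAffinityRuleSpec(
            name=name, enabled=True, vm=vms)   # keep these VMs apart
        spec = vim.cluster.ConfigSpecEx(
            rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
        return cluster.ReconfigureComputeResource_Task(spec, modify=True)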

vSphere Storage enhancements

  • Support for 62TB VMDK
VMware has increased the maximum VMDK and RDM size for a VM from 2 TB minus 512 bytes to 62 TB!
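The old ceiling falls out of 32-bit sector addressing with 512-byte sectors; a quick back-of-the-envelope check (capacityInKB below is simply the unit the vSphere API uses for disk sizes):

    SECTOR = 512                         # bytes per 512-byte sector
    old_limit = (2**32 - 1) * SECTOR     # 32-bit sector count limit
    print(old_limit)                     # 2199023255040 -> 2 TB minus 512 bytes

    capacityInKB = 62 * 1024**3          # 62 TB expressed in KB
    print(capacityInKB)                  # 66571993088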

  • MSCS Updates
Prior to 5.5, shared storage in a Microsoft Cluster Service (MSCS) setup was supported only if the protocol used was Fibre Channel (FC). This has now been extended to FCoE and iSCSI.

In addition to this, VMware now supports the following MSCS-related features:
o   Microsoft Windows Server 2012
o   Round-robin path policy for shared storage

  • 16Gb E2E Support
In vSphere 5.0, VMware introduced support for 16Gb FC HBAs; however, these HBAs were throttled down to run at 8Gb.

In vSphere 5.1, VMware added support to run these HBAs at 16Gb. However, there was no support for full, end-to-end 16Gb connectivity from host to array: to get full bandwidth, multiple 8Gb connections had to be created from the switch to the storage array.

In vSphere 5.5, VMware introduces 16Gb end-to-end FC support. Both the HBAs and the array controllers can run at 16Gb, as long as the FC switch between the initiator and target supports it.
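In round numbers (a nominal FC throughput of roughly 800 MB/s per 8Gb link and 1600 MB/s per 16Gb link, per direction), a quick sanity check of that fan-out:

    MBPS_8G, MBPS_16G = 800, 1600        # nominal FC throughput per direction

    host_side = 1 * MBPS_16G             # one 16Gb HBA port
    array_side = 2 * MBPS_8G             # two 8Gb switch-to-array links
    print(host_side == array_side)       # True: two 8Gb links match one 16Gb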

  • PDL AutoRemove
Permanent device loss (PDL) is a situation that can occur when a disk device either fails or is removed from the vSphere host in an uncontrolled fashion.

PDL detects if a disk device has been permanently removed—that is, the device will not return—based on SCSI sense codes. When the device enters this PDL state, the vSphere host can take action to prevent directing any further, unnecessary I/O to this device. This alleviates other conditions that might arise on the host as a result of this unnecessary I/O.

With vSphere 5.5, a new feature called PDL AutoRemove is introduced. This feature automatically removes a device from a host when it enters a PDL state.
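As an illustration, one commonly cited PDL sense combination is ILLEGAL REQUEST / LOGICAL UNIT NOT SUPPORTED (sense key 0x5, ASC 0x25, ASCQ 0x0). Below is a toy classifier built on just that one code; a real host checks a longer list:

    # Toy illustration only: classify a SCSI sense triple as PDL.
    PDL_SENSE_CODES = {
        (0x5, 0x25, 0x0),   # ILLEGAL REQUEST / LOGICAL UNIT NOT SUPPORTED
    }

    def is_pdl(sense_key, asc, ascq):
        return (sense_key, asc, ascq) in PDL_SENSE_CODES

    print(is_pdl(0x5, 0x25, 0x0))    # True  -> stop issuing I/O to the device
    print(is_pdl(0x2, 0x3A, 0x0))    # False -> transient; treat as APD instead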

  • vSphere Flash Read Cache
In vSphere 5.5, a new flash-based storage solution called vSphere Flash Read Cache has been introduced.
vSphere Flash Read Cache enables the pooling of multiple flash-based devices into a single consumable resource that can be assigned to VMs as a read cache.
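Conceptually it behaves like a write-through read cache: reads are served from flash when possible, while writes go straight to the backing datastore. A minimal toy model (my own illustration, not VMware's implementation):

    from collections import OrderedDict

    class ReadCache:
        """Toy write-through read cache with LRU eviction; a sketch of
        the Flash Read Cache idea, not VMware's implementation."""
        def __init__(self, backing, capacity=1024):
            self.backing = backing           # the (slow) datastore
            self.capacity = capacity         # blocks that fit in flash
            self.cache = OrderedDict()       # block -> data, LRU order

        def read(self, block):
            if block in self.cache:          # hit: serve from flash
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.backing[block]       # miss: read the datastore
            self.cache[block] = data         # ...and populate the cache
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
            return data

        def write(self, block, data):
            self.backing[block] = data       # write-through to datastore
            self.cache.pop(block, None)      # drop any stale cached copy

    # Usage: cache = ReadCache(backing={0: b"data"}); cache.read(0)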

vSphere Networking enhancements

  • Traffic Filtering
In 5.5, vSphere Distributed Switch supports traffic filtering based on the qualifiers below (a conceptual sketch follows this list):

o   MAC qualifiers – source and destination MAC address
o   System traffic qualifiers – vSphere vMotion, vSphere management, vSphere FT, and so on
o   IP qualifiers – protocol type, source and destination IP address, port number
  • QoS tagging
  • 40Gb NIC support
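Returning to the traffic filtering feature referenced above: conceptually, each rule is a set of qualifiers that a packet's metadata must match, plus an action. A toy model of the matching logic (the rule fields and actions here are invented for the example, not the actual DVS API):

    import ipaddress

    # Toy model of qualifier-based filtering; fields/actions are invented.
    RULES = [
        {"src_ip": "10.0.0.0/8", "dst_port": 22, "action": "drop"},
        {"traffic_type": "vmotion", "action": "tag"},   # e.g. QoS-tag vMotion
    ]

    def matches(rule, pkt):
        """A packet matches a rule only if every qualifier agrees."""
        for key, want in rule.items():
            if key == "action":
                continue
            if key == "src_ip":
                if ipaddress.ip_address(pkt.get("src_ip", "0.0.0.0")) \
                        not in ipaddress.ip_network(want):
                    return False
            elif pkt.get(key) != want:
                return False
        return True

    def apply_rules(pkt):
        for rule in RULES:                   # first matching rule wins
            if matches(rule, pkt):
                return rule["action"]
        return "accept"                      # default when nothing matches

    print(apply_rules({"src_ip": "10.1.2.3", "dst_port": 22}))  # drop
    print(apply_rules({"traffic_type": "vmotion"}))             # tag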


