
Showing posts with the label VMware Overview

ESXi Maintenance mode improvements | VMware

Normally when an ESXi host is placed into maintenance mode, all the powered-on VMs are migrated first to the other hosts in the cluster using DRS, then the powered-off VMs, and finally the VM templates. Starting with vSphere 6.0 U2, VMware changed this order: when a host is placed into maintenance mode, the powered-off VMs are moved first, then the VM templates, and finally the powered-on VMs.

Reason for the change: prior to 6.0 U2, when users initiated maintenance mode on an ESXi host, they lost the ability to deploy VMs from its templates until the migration completed. Even though the templates were migrated last, the ESXi host would already have queued the VM templates and powered-off VMs in its migration process. On hosts with 40-50 VMs, users might have had to wait quite a while before they could work with the templates or power on a VM. VMware therefore made this small change, which may prove useful to administrators at least occasionally.
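
For reference, maintenance mode can also be driven from the ESXi shell; a minimal sketch (the host itself only changes state here, the VM evacuation is still orchestrated by DRS/vCenter):

    # Enter maintenance mode
    esxcli system maintenanceMode set --enable true
    # Check the current state
    esxcli system maintenanceMode get
    # Exit maintenance mode once the work is done
    esxcli system maintenanceMode set --enable false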

VMware ESXi Memory State

Memory state of an ESXi host shows how memory-constrained the host is. There are five memory states, and depending on the state, ESXi engages its memory reclamation techniques (TPS, ballooning, compression and swapping):

- High state: no reclamation
- Clear state: TPS
- Soft state: TPS + ballooning
- Hard state: TPS + compression + swapping
- Low state: TPS + compression + swapping + blocking

How does ESXi calculate its memory state? The state is not derived from overall utilization, but by comparing the current free memory with a dynamic minFree value. Based on this comparison, ESXi changes its state:

- High state: enough free memory available
- Clear state: free memory < 100% of minFree
- Soft state: free memory < 64% of minFree
- Hard state: free memory < 32% of minFree
- Low state: free memory < 16% of minFree

What is the minFree value? minFree is calculated from the configured memory of the ESXi host. An ESXi host configured with 28GB RAM wi
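
The excerpt cuts off before the calculation; as a hedged illustration of the sliding scale commonly described for minFree (6% of the first 4 GB, 4% of the next 8 GB, 2% of the next 16 GB, 1% of the rest - treat these percentages as an assumption to verify against your ESXi version), here is a small shell sketch that estimates minFree and the state thresholds for a given amount of configured memory:

    # Hedged sketch: estimate minFree for a host with MEM_GB of configured RAM
    MEM_GB=28
    awk -v m="$MEM_GB" 'BEGIN {
        mf  = (m > 4  ? 4      : m) * 0.06                        # 6% of the first 4 GB
        mf += (m > 12 ? 8      : (m > 4  ? m - 4  : 0)) * 0.04    # 4% of the next 8 GB
        mf += (m > 28 ? 16     : (m > 12 ? m - 12 : 0)) * 0.02    # 2% of the next 16 GB
        mf += (m > 28 ? m - 28 : 0) * 0.01                        # 1% of anything above 28 GB
        printf "estimated minFree : %4.0f MB\n", mf * 1024
        printf "soft state (<64%%) : %4.0f MB free\n", mf * 1024 * 0.64
        printf "hard state (<32%%) : %4.0f MB free\n", mf * 1024 * 0.32
        printf "low state  (<16%%) : %4.0f MB free\n", mf * 1024 * 0.16
    }'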

ALUA & VMware multipathing

What is the appropriate Path Selection Plugin (PSP) for your ESXi host? Do not wait... check with your storage vendor. As a VMware administrator, in the past I had my own reasons to choose Round Robin ahead of MRU and Fixed, but that may not always be a good idea, for the following reason. Each storage vendor has its own method of handling I/O. For all recent Active/Active storage processors (SPs), you will see two paths to a given LUN in VMware, but only one processor actually owns the LUN. The path to this storage processor is called the Optimized path, and the path to the other SP is the Non-optimized path. PSPs such as MRU and Fixed keep sending I/O down a single path; if that path leads to the non-owning SP, the request is transferred internally over the interconnect to the owning SP. In short, ALUA is what helps the array service such I/O requests using the interconnects between the SPs. In scenarios where the Optimized path fails, VMware vSphere PSP will choose the PSP and failback based on
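
To see which SATP and PSP the host currently uses for a device, and to change the PSP if your storage vendor recommends it, the standard esxcli namespaces can be used; naa.xxxx is a placeholder for your LUN's identifier:

    # List claimed devices with their SATP and current path selection policy
    esxcli storage nmp device list
    # Show the path selection plugins available on the host
    esxcli storage nmp psp list
    # Change the PSP for a single device
    esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
    # Or change the default PSP for everything claimed by the ALUA SATP
    esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR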

VMFS locking mechanisms | VMware

In a multi-host storage access environment like vSphere, a locking mechanism is required on the datastore/LUN to ensure data integrity. For VMFS there are two types of locking mechanisms:

- SCSI reservations
- Atomic Test and Set (ATS)

SCSI reservations: this method is used by storage that does not support Hardware Acceleration. In this method, the host locks the datastore when it executes operations that require metadata protection and releases the lock once it completes the activity. A SCSI reservation does not lock a LUN; it reserves the LUN in order to obtain the on-disk lock. Whenever a host holds a lock, it must renew the lease on the lock to indicate that it still holds it and has not crashed. When another host needs to access a file, it checks whether the lease has been renewed; if it has not, that host can break the lock and access the file. In a multi-host environment, excessive SCSI reservations can degrade storage performance. The operations that require reservat
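
Whether the device behind a datastore supports ATS (hardware-assisted locking) can be checked from the ESXi shell; naa.xxxx is a placeholder for your device identifier:

    # List VMFS datastores and the devices backing them
    esxcli storage vmfs extent list
    # Show VAAI primitive support, including ATS, for a specific device
    esxcli storage core device vaai status get --device naa.xxxx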

Virtual NUMA | The VMware story

NUMA - Non Uniform Memory Access. NUMA is a term that comes along with Symmetric Multi-Processing, or SMP. In the traditional processor architecture, all memory accesses go over the same shared memory bus. This works fine with a small number of CPUs, but as the number of CPUs grows (beyond 8 or 12), the CPUs compete for control of the shared bus and create serious performance issues. NUMA was introduced to overcome this problem. In a NUMA architecture, the system is divided into nodes connected by advanced memory controllers and high-speed buses. Nodes are basically groups of CPUs that own their own memory and I/O, much like an SMP system. If a process runs in one node, the memory used from the same node is referred to as Local Memory. In some scenarios a node may not be able to cater to the needs of all its processes; at that point the node makes use of memory from other nodes, which is referred to as Remote Memory. This access is slower, and the latency depends on the location of th
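
On an ESXi host you can quickly see how many NUMA nodes the hardware exposes, and esxtop shows per-VM NUMA locality; a brief sketch:

    # Show physical memory and the number of NUMA nodes the host reports
    esxcli hardware memory get
    # In esxtop, press 'm' for the memory view, then 'f' to enable the NUMA statistics
    # fields; N%L shows how much of a VM's memory is local to its home node
    esxtop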

VMware Storage I/O Control - VMware SIOC

SIOC is considered to be one of the finest additions to the VMware feature stack. In this blog we will discuss SIOC and its impact on disk usage. Before discussing SIOC, we should understand the relevance of per-VM 'Disk Shares' in VMware. The disk share concept is simple and much like memory and CPU shares: when there is a resource constraint, the host throttles the disk usage of VMs by adjusting their disk queue depth based on their share values. If all the VMs have the same disk share, control over the disk is shared equally among them; if a VM's share value is higher, it gets precedence over other VMs.
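
Disk shares are normally set per virtual disk in the vSphere Client (Edit Settings > Resources > Disk); the value ends up as a per-disk scheduler entry in the VM's .vmx file. A hedged sketch of what such an entry looks like; the scsi0:0 label and the value are illustrative, and hand-editing the .vmx should only be done with the VM powered off:

    # Illustrative .vmx entry for the first virtual disk (scsi0:0); valid share values
    # are low, normal, high or a custom number
    sched.scsi0:0.shares = "high"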

Interpreting VMware CPU performance metrics - RUN, WAIT, RDY, CSTP

Quick reference guide from VMware. %RUN: this value represents the percentage of absolute time the virtual machine was running on the system. If the virtual machine is unresponsive, %RUN may indicate that the guest operating system is busy conducting an operation.
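
These counters are easiest to read live in esxtop; a quick sketch of how to reach them (key bindings as in ESXi 5.x/6.x):

    # Start esxtop from the ESXi shell (or resxtop via the vMA / vCLI)
    esxtop
    # Press 'c' for the CPU view and 'V' to show only virtual machine worlds.
    # Watch %RUN, %RDY, %WAIT and %CSTP; press 'e' to expand a group into its worlds.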

VMware VLAN Tagging

There are three methods for configuring VLAN tagging. EST - External Switch VLAN Tagging: in this method, the physical switch has a 1:1 relationship with the VLAN/port group; in other words, for every port group there must be a physical NIC. ESXi and the vSwitch are unaware of the tagging operation. Since the ports connected to ESXi need to handle only one VLAN, they are configured as access ports. All the port groups connected to the virtual switch must have their VLAN ID set to 0. And what would be the drawback of this method?
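
For EST the port group VLAN ID stays at 0 because the physical access port handles the tag; a small sketch of checking and setting this on a standard vSwitch ("VM Network" is a placeholder port group name):

    # List standard vSwitch port groups with their current VLAN IDs
    esxcli network vswitch standard portgroup list
    # For EST, leave the VLAN ID at 0 so the physical switch does the tagging
    esxcli network vswitch standard portgroup set --portgroup-name "VM Network" --vlan-id 0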

VMware vSAN in a nutshell

What is vSAN? Virtual SAN is a new feature introduced in vSphere 5.5. It is a hypervisor-converged storage solution built by aggregating the local storage attached to the ESXi hosts managed by a vCenter Server.
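
Once Virtual SAN is enabled on the cluster, each host's membership and contributed disks can be checked from the ESXi shell (the vsan esxcli namespace appears in 5.5):

    # Show whether this host is part of a Virtual SAN cluster
    esxcli vsan cluster get
    # List the local disks this host has claimed for the Virtual SAN datastore
    esxcli vsan storage list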

VMware Virtual Machine Optimization tips

Focus: How can we optimize the performance of a virtual machine? Below are a few points you can consider at virtual machine deployment time:

VMware Prerequisites : Quick reference

vMotion
- Host must be licensed for vMotion
- Configure the host with at least one vMotion network interface (VMkernel port group)
- Shared storage (this requirement was relaxed in 5.1)
- Same VLAN and VLAN label
- Gigabit Ethernet network required between hosts
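
To confirm the vMotion VMkernel setup from the ESXi shell, something like the following can be used on 5.1 and later; vmk1 is a placeholder for whichever VMkernel interface carries vMotion:

    # List VMkernel interfaces and the port groups they sit on
    esxcli network ip interface list
    # Tag a VMkernel interface for vMotion (vmk1 is a placeholder)
    esxcli network ip interface tag add -i vmk1 -t VMotion
    # Show the tags currently assigned to the interface
    esxcli network ip interface tag get -i vmk1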

Memory handling techniques in VMware

The way VMware handles its memory has always amazed me. How can a 4 GB ESXi hypervisor give 3 VMs 2 GB of vRAM each? What happens if there is a resource crunch? I will try to answer these questions in this blog.

What's New in vSphere 5.5

ESXi Hypervisor enhancements
Hot-pluggable SSD devices: as we know, PCIe SSDs are becoming quite common due to their improved performance. In vSphere 5.5, we can now hot-swap (hot-add/hot-remove) PCIe SSDs on a running vSphere host without downtime. This capability was already available for SATA and SAS disks in previous versions.

How to install VMware tools on Linux

Installing VMware Tools from the command line with the tar installer. The first steps are performed on the ESXi host, within the VMware Infrastructure Client menus:

1. Power on the virtual machine.
2. After the guest operating system has started, prepare your virtual machine to install VMware Tools: right-click the VM, then Guest --> Install/Upgrade VMware Tools.

Open the Linux console using PuTTY:

3. Mount the contents of the CD using the commands below:
       mkdir /mnt/cdrom              --> create a new directory to mount the contents
       mount /dev/cdrom /mnt/cdrom   --> mount the CD contents into the new folder
   Change the working directory:
       cd /tmp
   Note: if you have a previous installation, delete the previous vmware-tools-distrib directory before installing. Use the command below for this operation:
       rm -rf /tmp/vmware-tools-distrib
4. Untar the VMware Tools tar file:
       tar zxf /mnt/cdrom/VMwareTools<xxxx>.tar.gz
   where <xxxx> is the build/revision
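
The excerpt stops at the untar step; for reference, the usual continuation (assuming the archive extracts to vmware-tools-distrib in the current directory) is to run the Perl installer and unmount the CD:

    cd vmware-tools-distrib       # directory created by the tar command above
    ./vmware-install.pl           # accept the defaults unless you have a reason not to
    umount /mnt/cdrom             # unmount the VMware Tools ISO when the installer finishes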

Disable warnings when SSH enabled in vSphere ESXi 5.0

The security feature of ESXi 5.0 displays a warning on ESXi hosts when SSH is enabled. This can be annoying for system admins. The following steps explain how to disable this warning:

- Select the ESXi host from the Inventory.
- Select Advanced Settings from the Software menu.
- Navigate to UserVars > UserVars.SuppressShellWarning.
- Change the value from 0 to 1.
- Click OK.
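
The same setting can also be changed from the ESXi shell or vCLI; a hedged equivalent using esxcli:

    # Suppress the SSH/shell warning (set the value back to 0 to re-enable it)
    esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1
    # Verify the current value
    esxcli system settings advanced list -o /UserVars/SuppressShellWarning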

CPU ready time in VMware

In simple words, CPU ready time is the time a VM waits for a physical CPU. A VM's vCPUs are scheduled onto the physical CPUs of the host, which are time-shared among all VMs. Higher CPU ready time implies poorer VM performance. CPU ready time can be found on the Performance tab available for each VM, where it is reported in milliseconds. To find the ready time as a percentage, you can use esxtop. A CPU ready time of 1000 ms is considered equivalent to 5% ready time. Ready time up to 5% for a VM is considered normal; if it lies between 5-10%, the VM needs to be monitored; and anything above 10% (above 2000 ms) demands action from the administrator.
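
The 1000 ms ≈ 5% rule of thumb comes from the 20-second sample interval used by the real-time performance chart; a quick conversion you can run in any shell:

    # Convert a real-time CPU ready value (milliseconds per 20 s sample) to a percentage
    READY_MS=1000
    awk -v r="$READY_MS" 'BEGIN { printf "CPU ready: %.1f%%\n", r / 20000 * 100 }'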

How to run Hyper-V as a VM in VMware ESXi

This article explains the process of nesting the Hyper-V virtualization solution inside VMware ESXi 5. It will be useful for those who want to try out the features of Hyper-V in a test environment.

ESXi prerequisite: before we start, we need to ensure that ESXi 5 allows nested hypervisors to be installed. For this you have to edit the /etc/vmware/config file. Steps given below:
- Enable SSH through the Security Profile in the vSphere Client
- SSH to the ESXi system using PuTTY
- Execute the following command, which updates the config file to allow nested hypervisors:
       echo 'vhv.allow = "TRUE" ' >> /etc/vmware/config

VM preparation: now that the ESXi host is configured to allow nested VMs, create a new virtual machine using version 8 hardware, 4 GB of RAM (or as much as you can spare), 2 vCPUs, 2 or more vNICs and a 100 GB virtual disk. Before booting the VM, we need to modify the virtual machine config file (.vmx):
- Access the VM through the vSphere Client. Go to vi
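
For reference, the .vmx additions commonly quoted for nesting Hyper-V on ESXi 5 are shown below; treat them as assumptions to validate against your build rather than the post's exact values:

    # Commonly cited .vmx entries for a nested Hyper-V guest (values illustrative)
    hypervisor.cpuid.v0 = "FALSE"    # hide the "running under a hypervisor" CPUID bit
    mce.enable = "TRUE"              # machine-check setting often recommended in nesting guides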

VMware interview questions

1. What is the hex code for a VMFS volume? 0xFB
2. What are the differences between ESXi 4.1 and ESXi 5.0?
3. What are affinity and anti-affinity DRS rules?
4. What is host monitoring?
5. What is VM monitoring?
6. Is Mac OS X VM deployment supported in ESXi 5.0? If yes, how?
7. Do we need to trunk the switch port connected to ESXi? If yes, why?
8. What is a private VLAN? How is it different from a normal VLAN? Refer to this post
9. What is the difference between the VMFS 3 and VMFS 5 file formats?
10. What is the starting sector for VMFS 3? 128
11. What is the starting sector for VMFS 5? 2048
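
The starting-sector answers in questions 10 and 11 can be verified on a host with partedUtil; the device path below is a placeholder:

    # Print the partition table of a datastore device; the VMFS partition typically
    # starts at sector 128 on VMFS-3 (MBR) and sector 2048 on VMFS-5 (GPT) volumes
    partedUtil getptbl /vmfs/devices/disks/naa.xxxx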