
Posts Tagged ‘ESX’

I often run into situations where I want to VMotion more than two VMs per host at a time, which is the default limit. Here is how you can increase that default of two concurrent VMotions per host, but before we make the change, let me explain which values make sense:

Each cold migration has a cost value of 1, whereas a hot migration has a cost of 4. The default limit is 8, and hence it allows only two hot migrations at a time. So, in order to hot migrate 8 VMs at a time, I need to set a cost limit of 8 x 4 = 32. Since this is a per-host value set at the vCenter level, if you have two ESX hosts connected you can perform 8 hot migrations per host, i.e., 16 hot migrations between the two hosts. Figure out and test how many hot migrations you want to perform at a time, and what load that puts on the ESX hosts, before you set this value any higher.

Now here are the steps to make the changes:

  • Log in to the vCenter Server with administrative privileges
  • Make a backup copy of “C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\vpxd.cfg”
  • Open the vpxd.cfg file using WordPad and insert the following lines anywhere between the <vpxd> and </vpxd> tags:
        <ResourceManager>
            <maxCostPerHost>32</maxCostPerHost>
        </ResourceManager>
  • Save the vpxd.cfg file
  • Restart the “VMware vCenter Server” service to apply the changes.
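
For reference, the relevant portion of vpxd.cfg should end up looking roughly like this (a sketch only: I am assuming the snippet sits inside the existing <vpxd> section of the file, so the surrounding <config> and <vpxd> tags are shown purely for context and should not be added a second time):

    <config>
        <vpxd>
            <ResourceManager>
                <maxCostPerHost>32</maxCostPerHost>
            </ResourceManager>
        </vpxd>
    </config>

Setting the value back to 8 restores the default of two concurrent hot migrations per host.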


I often run into situations in my lab (r@manlabs) where I need to move my vCenter Server onto a different host and I don’t have a SAN/NAS handy. In these situations, I log into the console OS using PuTTY and use the SCP utility to copy the vCenter Server files. Then, on the destination ESX host, I register the VM and add the vCenter Server back to the inventory.

  • Enable SSH on both source and destination ESX hosts
    • Log into the vCenter Server
    • Select the source ESX host and click on Configuration tab
    • Select Security Profile under Software
    • Click on Properties
    • Tick the “SSH Client” check box, and verify that “SSH Server” is ticked as well (tick it if it is not)
    • Click OK
    • Repeat steps 2 through 6 on the destination ESX host.
    • Shut down the virtual machine you are about to copy (the vCenter Server VM in my case).
  • Log into the destination ESX host using Putty
    • # cd /vmfs/volumes
    • # ls
    • Make a note of the full path of the datastore you want to migrate the VM onto (see the example below):

(Screenshot: listing the datastores on the destination ESX host)
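
On my hosts the listing looks something like this (the datastore name and the UUID are placeholders, not real values; each datastore appears as a symlink pointing to a UUID directory):

    # cd /vmfs/volumes
    # ls -l
    datastore2 -> 4b1e8c2a-5f3d9e10-...

In this case the full path to note would be /vmfs/volumes/datastore2.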

  • Log into the source ESX host using PuTTY
    • # cd /vmfs/volumes/datastore (the datastore where the VM is located)
    • Copy the VM folder to the destination ESX host with scp (see the example below)

(Screenshot: scp copy running on the source ESX host)
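
The copy command itself looks roughly like this (the datastore names, the VM folder name and the destination host name are placeholders for your own; -r copies the whole VM folder and -C enables compression):

    # cd /vmfs/volumes/datastore1
    # scp -r -C MyVM root@destination-esx:/vmfs/volumes/datastore2/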

  • Compression is optional, but it will speed things up a bit.
  • If you receive a message about the host’s authenticity and whether to continue, type yes and hit Enter.
  • Type the root password of the destination ESX host when prompted.
  • Once the copy is complete, connect to the destination ESX host using the VI Client and register the .vmx file (a console alternative is shown below).
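
If you prefer to stay in the service console instead of the VI Client, ESX classic can also register the VM from there; something like the following should work (the path is a placeholder for wherever the .vmx file landed):

    # vmware-cmd -s register /vmfs/volumes/datastore2/MyVM/MyVM.vmx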


esxtop is a great tool that provides a complete view of the storage profile (disk performance information between ESX and the hardware); however, when there are disk latency issues it doesn’t tell you which particular VM is suffering from, or causing, the latency. vscsiStats is the right tool for monitoring latency issues between the VM and ESX.

The following step-by-step process details the procedure for monitoring the VM SCSI counters. The vscsiStats command can be found in /usr/lib/vmware/bin and is not part of the PATH environment variable, which means that you must either change to this directory first or use the full path every time you run the command: /usr/lib/vmware/bin/vscsiStats. This tool is subject to change between vSphere/ESX 3.5 build numbers.
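
For example, listing the running VMs with the full path (no need to change directories first):

    # /usr/lib/vmware/bin/vscsiStats -l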

•    Log in to any of the vSphere 4 hosts that has access to the datastore on which you would like to monitor the statistics, using PuTTY or any other SSH client.
•    Make sure that /tmp has enough free space by using the vdf -h command.
•    Obtain the correct version of the vscsiStats tool from VMware and copy it under /tmp using WinSCP or any other SCP tool.
•    Type “vscsiStats -l” (that is a lowercase L) to list all the running VMs and determine the WorldID (a four-digit number) of the VM for which you would like to measure latency or any other histo_type (IOSize, seek distance, outstanding I/Os, etc.).
•    Once the WorldID is identified, type “vscsiStats -r” to reset the statistics so no old data is kept.
•    Type “vscsiStats -w XXXX -s” to start collecting VM SCSI statistics for the VM we identified.
•    Wait a few seconds and type “vscsiStats -w XXXX -p latency”, which should display the latency statistics for WorldID XXXX. Verify the numbers for the latency I/O statistics (you can safely ignore the separate read and write I/O statistics) for each VMDK, each of which is identified by another four-digit number. Note that the initial VMDK (the C: drive) starts with one four-digit number and each subsequent VMDK is incremented by 1; for example, if the first VMDK is 8227, the next VMDK will be 8228, and so on.
•    By default the collection stops after 30 minutes, and remember that if the process restarts it resets all the counters, so it is important to save them to a CSV file: type “vscsiStats -w XXXX -p latency -c > /tmp/VMXXXX-vscsiStats.csv” and hit Enter.
•    Now download the VMXXXX-vscsiStats.csv file onto a Windows workstation that has Microsoft Excel, using WinSCP or any other SCP tool.
•    Open the .csv file in Microsoft Excel and expand the first two columns:
Figure 1: vscsiStats Results
•    In Figure 1, if you add up all the rows under Frequency whose Histogram Bucket Limit is greater than 30000 (30 ms, since the buckets are in microseconds) and divide that sum by the mean value (highlighted in pink), the result indicates the percentage of time that particular VMDK experienced extraordinary latency.
•    Create a chart with Frequency as the Legend Entries (Series) and Histogram Bucket Limit as the Horizontal (Category) Axis Labels.
Figure 2: vscsiStats Results Chart
•    If, from the chart or from the .csv file, you see latencies above 30000 (30 milliseconds) for a considerable amount of time, it indicates a bottleneck in the disk array for that particular VM.
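
For quick reference, the whole collection sequence on the console looks roughly like this (XXXX stands in for the WorldID that -l reports, exactly as in the steps above; the final -x option stops the collection early and is the one command not covered above, so treat it as an assumption and check it against your build):

    # cd /usr/lib/vmware/bin
    # ./vscsiStats -l
    # ./vscsiStats -r
    # ./vscsiStats -w XXXX -s
    # ./vscsiStats -w XXXX -p latency
    # ./vscsiStats -w XXXX -p latency -c > /tmp/VMXXXX-vscsiStats.csv
    # ./vscsiStats -w XXXX -x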


A widespread myth about RDMs is that “virtual machines, and the applications running on them, get a performance boost when using RDMs”. Let us see whether that is really a reason to use an RDM:

As per VMware’s “Performance Characterization of VMFS and RDM Using a SAN”, the main conclusions were as follows:

  • For random reads and writes, VMFS and RDM yield a similar number of I/O operations per second.
  • For sequential reads and writes, performance of VMFS is very close to that of RDM (except on sequential reads with an I/O block size of 4K). Both RDM and VMFS yield a very high throughput in excess of 300 megabytes per second depending on the I/O block size.
  • For random reads and writes, VMFS requires 5% more CPU cycles per I/O operation compared to RDM.
  • For sequential reads and writes, VMFS requires about 8% more CPU cycles per I/O operation compared to RDM.

Here are the most common reasons to use an RDM over VMFS:

  • First, the conclusions above show only a very slight, or negligible, performance gain from using RDMs, so performance by itself is not a reason to choose them.
  • Use RDMs when the alternative would be a huge VMFS volume, for example greater than 500 GB, so that if you ever have to move the VM to another cluster or LUN it doesn’t take as long.
  • Use RDMs when implementing MSCS in VMware Infrastructure – virtual RDMs in the case of VM-VM Microsoft Cluster across two ESX Hosts and physical RDMs in the case of VM-Physical Server Microsoft Cluster.
  • Use RDMs when you would like to leverage native SAN tools mostly for SAN-based snapshots, performance monitoring or for other SAN management tasks.

Don’t forget:

  • If you are using a virtual RDM, you still need 15-20% free space somewhere on a VMFS volume for VMware snapshots


This isn’t always a simple topic. Somebody once told me that their network is very secure because they have a “firewall” that protects it well, so they don’t need to perform any additional security tasks; well, we all know the saying that 80% of network attacks originate from inside the firewall. On the other side, somebody asked me whether they can fiddle with the IPv4 stack and tune kernel sysctl parameters to prevent SYN floods, DoS attacks and so on. So, my point is that the decisions we make really depend on the person or customer you are dealing with, and on what level of configuration change they consider secure.

Let us talk about some general design considerations for a virtual infrastructure that can provide security functionality, based on defined business needs, across all the components that need to be secured.

  • Storage security: understand the various storage types, how VMware ensures secure access to the LUNs, and what prevents VMware virtual machines from accessing the LUNs directly.
  • Network security: understand security considerations for VLANs, whether securing VMware virtual machines with VLANs is an option, or whether the client ……. to be continued…

