Ajcody-Virtualization

From Zimbra :: Wiki

This article is NOT official Zimbra documentation. It is a user contribution and may include unsupported customizations, references, suggestions, or information.


Virtualization Issues

Actual Virtualization Issues Homepage


Please see Ajcody-Virtualization

Vmware - ESX - Performance & Support Resources

Internal Resources:


External Resources:

Vmware And Clustering

Please see Ajcody-Clustering#Vmware_Virtualization_and_Clustering

Introduction To Using VMWare ESX For ZCS Test Servers

References

References I ended up consulting while developing my notes below. There appear to be many variations on how to do this depending on the version of the VMware software you are using.

Introduction To Notes

I just did this and I plan on going through it and improving it once I have time.

  • I believe the hard-coded references to the volumes could use the symlink for the path:
    • /vmfs/volumes/Storage vs. /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af
  • Setup /etc/hosts file on first vmimage to have entries for other machines
  • Configure first vmimage to use ESX server for DNS [/etc/resolv.conf] , NTP , and other network services
  • Configure the vm configuration files, *.vmx , to use manually set MAC addresses, with the last number matching the number used in the hostname.
  • Configure the ESX server to act as a DNS & DHCP server, auto-allocating IP addresses and hostnames, and set each hostname to have proper A, PTR, and MX records, with the hostname of the image doubling as the domain name for mail.
    • for example:
    • I would create three groups of images, if possible:
      • host.dev.DOMAIN.com
        • DEV is for pure testing
      • host.qa.DOMAIN.com
        • QA is for testing of changes prior to roll out to production
      • host.prod.DOMAIN.com
        • PROD would be replicas of production and used to test against production issues
    • Images configured as:
      • image hostname = centos5-30.dev.DOMAIN.com
      • server centos5-31.dev.DOMAIN.com has ip address of 192.168.0.31 - set A and PTR record
      • MX equals centos5-31.dev.DOMAIN.com / 192.168.0.31
    • Use more expansive descriptions in hostnames for standards and predictability.
      • Note, underscores ARE NOT valid in DNS hostnames per RFC 1034 - http://www.ietf.org/rfc/rfc1034.txt
      • Add the DEV - QA - PROD label if you're using that as well.
      • Primary Domain = example.com
        • LDAP Master hostname = ldap-1.DOMAIN.com
        • LDAP Replica hostnames = ldap-2.DOMAIN.com , ldap-3.DOMAIN.com , etc.
        • First MTA hostname = mta-1.DOMAIN.com
        • Other MTA hostnames = mta-2.DOMAIN.com , mta-3.DOMAIN.com , etc.
        • First Mailstore hostname = mailstore-1.DOMAIN.com
        • Other Mailstore hostnames = mailstore-2.DOMAIN.com , mailstore-3.DOMAIN.com , etc.
      • Proxy server hostnames:
        • Proxy on MTA's: mta-1.DOMAIN.com , mta-2.DOMAIN.com , etc.
          • Though the proxy package can be installed on other servers, it is usually installed on the MTAs or on its own server.
        • Proxy on its own: proxy-1.DOMAIN.com , proxy-2.DOMAIN.com , etc.
      • Archive [A&D] hostnames:
        • First Archive hostname = archive-1.DOMAIN.com
        • Other Archive hostnames = archive-2.DOMAIN.com , archive-3.DOMAIN.com , etc.
  • Configure the proxy/firewall to be the route point for the primary domain and then pass mail to the appropriate server for each subdomain.
    • All external requests for *.zimbra.DOMAIN.com and zimbra.DOMAIN.com route to the ESX server, which in turn routes to the appropriate vm server for the subdomain.
  • Open ports on firewalls to allow a vSphere connection to the ESX server.
  • Add post-clone script to the "gold image" to reconfigure the cloned image for things like hostname & network information. [Thanks Tony for the idea]
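Here is a minimal sketch of such a post-clone script, assuming a CentOS/RHEL 5 guest and the hostname/IP convention above (the clone number doubles as the last octet). The file paths are the stock RHEL locations, but everything else (names, defaults, the ROOT scratch-dir trick) is illustrative, not an official tool. ROOT defaults to a scratch directory so the sketch is safe to dry-run; set ROOT=/ inside the guest to apply it for real.

```shell
#!/bin/sh
# Hypothetical post-clone fixup for a CentOS/RHEL 5 guest.
# $1 = clone number (also the last octet of the IP), e.g. 31.
N=${1:-31}
ROOT=${ROOT:-/tmp/postclone-demo}   # set ROOT=/ to modify the real guest
HOSTNAME="centos5-${N}.dev.DOMAIN.com"
IP="192.168.0.${N}"

mkdir -p "$ROOT/etc/sysconfig/network-scripts"

# Hostname
cat > "$ROOT/etc/sysconfig/network" <<EOF
NETWORKING=yes
HOSTNAME=${HOSTNAME}
EOF

# Static IP for eth0 (skip this file if DHCP hands out addresses)
cat > "$ROOT/etc/sysconfig/network-scripts/ifcfg-eth0" <<EOF
DEVICE=eth0
BOOTPROTO=static
IPADDR=${IP}
NETMASK=255.255.255.0
ONBOOT=yes
EOF

echo "configured $HOSTNAME ($IP) under $ROOT"
```

Run it once per clone, e.g. `sh postclone.sh 31`, then reboot the guest so the new hostname takes effect.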

How-To Setup ESX For ZCS Test Servers

Creating Initial x64 VM

Initial Setup Of Guest Image In vSphere
  1. Right Click on ESX server name in left column listing in vSphere Client.
    1. Select "New Virtual Machine"
      1. Select "Typical"
      2. Name - ex. "Centos5-x64-30"
        1. Format being - Distro & Distro Version - platform x32 or x64 - last octet of ip address
      3. Datastore - your local esx data storage you'll be using
      4. Guest OS > Linux > RHEL5 64bit - RHEL for Centos [using my example]
      5. Create a disk
        1. Defaults to 8GB, but this isn't enough for ZCS, which requires at least 5GB free. The OS will take about 4-6+GB between the swap partition, /boot , and /. Using installer defaults, that is.
        2. I make the root image 12GB. This will be enough to get a basic ZCS install done. One can add more later if needed.
          1. For a 'test' environment, I would recommend leaving the other options UNCHECKED.
            1. Leave UNCHECKED - Allocate and commit space on demand
            2. Leave UNCHECKED - Support clustering features such as Fault Tolerance.
      6. Ready to Complete - Check the box that says:
        1. check - "Edit the virtual machine settings before completion"
          1. Memory > Adjust Memory to be 1024 MB or 1 GB
          2. CD/DVD Drive 1 >
            1. Check "Connected at power on"
            2. Check "Datastore ISO File" and select the iso image for your OS - ex. CentOS-5.4-x86_64-bin-DVD.iso.
              1. Centos default disk partitioning will do:
              2. Also, see - need wiki section - about how to prep your server to host iso images via the datastore.
          3. The rest can be left as the defaults for now.
Initial Setup Of Guest Image Operating System - CentOS/RHEL Example
  1. In the vSphere Client and right click on Guest image, ex. "Centos5-x64-30", and select power on.
    1. Prep our 'base image' prior to cloning it.
    2. The image should give you your Distro's installer screen. Go through the OS installation.
      1. Note about CentOS example
        1. First stage of installer
          1. The partitioning example if using the defaults will give you:
            • /dev/sda1  /boot       ext3      101 MB
            • /dev/sda2  VolGroup00  LVM PV  12182 MB
              • LogVol00  /  ext3  10144 MB
              • LogVol01     swap   2016 MB
          2. The Network Devices
            1. Network Devices > click on edit
              1. Leave CHECKED - "Enabled IPv4 support"
              2. UNCHECKED "Enable IPv6 support"
          3. Package Selection
            1. Check the "Server" option and then click "Next"
              1. Installer will now finish the first stage of the installation and reboot.
        2. Second stage of installation.
          1. Firewall > Disable firewall.
          2. SELinux > Disable SELinux
          3. Date and Time
            1. Network Time Protocol - Enable NTP if you'll be able to reach the NTP servers.
              1. Check the "Synchronize system clock before starting service"
          4. Create User > No need to create additional users for ZCS purposes.
          5. Finish installation.
        3. Post-Installation
          1. Note - Ctrl+Alt allows you to switch the focus of the mouse in and out of the VM guest image showing in vSphere's client.
          2. Note - The client 'screen' is under the "Console" tab.
          3. Install VMwareTools
            1. In the vSphere Client, right click on the Guest image, ex. "Centos5-x64-30", and select Guest > Install/Upgrade VMware Tools.
            2. Double click on the VMwareTools rpm or install it via CLI.
            3. You can also run the vmware-tool configuration script
            4. vmware-config-tools.pl
            5. This allows you to adjust the screen resolution and enables cut-n-paste between the workstation and the guest vm.
          4. Launch a terminal. Applications > Accessories > Terminal
            1. Disable sendmail from being used. Note - CentOS/RHEL uses sendmail by default; for another distro you might need to disable postfix instead.
              1. In terminal, paste [if you installed vmware tools] or type the following command.
              2.  chkconfig sendmail off ; /etc/init.d/sendmail stop
            2. Install some prereq packages that you might not already have installed - CentOS example:
              1. yum install compat-libstdc++-33 compat-libstdc++-296 sysstat
            3. Unmount the vmware-tools mount - example
              1.  umount /media/VMware\ Tools/ 
              2. confirm with df -h
            4. Reboot server now - the guest OS image.
          5. See also Ajcody-Virtualization#RHEL_or_CentOS for adding 'disk/partitions' to an image.

Manual Cloning of x32 Setup

My version of VMware ESX does not have a cloning option - I was actually given the ESX box rebuilt. So here's what I did to 'clone' an image the manual way. See Ajcody-Virtualization-Named-DNS about setting up BIND/DNS for this example.

# Vmware ESX 4.0.0 Build 208167
# vSphere Client 4.0.0. Build 208111
#
# Copy some iso files to the server; here's what I used.
# Please note that the vmx files in my examples below reference my iso name for the centos dvd.
#   [ /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/iso/CentOS/CentOS-5.4-i386-bin-DVD.iso ]
# Create your first VM - I called mine Centos5-30, and that naming convention is used throughout.
# I left the hostname as localhost and installed the vmware tools.
# I then shutdown the image and did a snapshot of it.

pwd
  /vmfs/volumes

ls -la
  total 1024
  drwxr-xr-x 1 root root  512 Apr  9 23:59 .
  drwxrwxrwt 1 root root  512 Apr  9 00:39 ..
  drwxr-xr-t 1 root root 4200 Apr  9 23:20 4bbaf57f-6230127f-d432-00101849e4af
  lrwxr-xr-x 1 root root   35 Apr  9 23:59 Storage1 -> 4bbaf57f-6230127f-d432-00101849e4af

cd Storage1/

ls
  Centos5-30/      esxconsole-4bbb574e-eae0-2eb3-9d5b-00101849e4af/ 	iso/

# Download and copy the various OS iso's you'll want and copy them underneath your storage volume, 
# I made an iso directory for them all.
# Also, I used Firefox and the add-on DownThemAll and went to the Zimbra download pages
# http://www.zimbra.com/downloads/ne-downloads.html
# http://www.zimbra.com/downloads/ne-downloads-previous.html
# And downloaded all the versions of ZCS that I wanted to test against. I then used K3b under Linux
# to make iso files of the tarballs. Copy the iso files to your ESX server under the iso directory
# you made. If you want to mount them remotely to confirm they are OK or to review what they contain:
# ex.  mount -t iso9660 ./zcs-x32-installs.iso /mnt/cdrom -o ro,loop
# Having the Zimbra iso files like that will allow your images to easily install Zimbra.

ls -R iso/
  CentOS/  openSUSE/  SLES/  Ubuntu/  ZCS/
  ./CentOS:
    CentOS-5.4-i386-bin-DVD.iso  CentOS-5.4-x86_64-bin-DVD.iso
  ./openSUSE:
    openSUSE-11.2-Addon-NonOss-BiArch-i586-x86_64.iso  openSUSE-11.2-DVD-i586.iso  
    openSUSE-11.2-DVD-x86_64.iso
  ./SLES:
    SLES-10-SP3-DVD-i386-GM-DVD1.iso  SLES-10-SP3-DVD-x86_64-GM-DVD1.iso  SLES-11-DVD-i586-GM-DVD1.iso
    SLES-10-SP3-DVD-i386-GM-DVD2.iso  SLES-10-SP3-DVD-x86_64-GM-DVD2.iso  SLES-11-DVD-i586-GM-DVD2.iso
  ./Ubuntu:
    ubuntu-8.04.4-desktop-amd64.iso  ubuntu-8.04.4-desktop-i386.iso  
    ubuntu-8.04.4-server-amd64.iso  ubuntu-8.04.4-server-i386.iso
  ./ZCS:
    zcs-x32-606-02_5023-018.iso  zcs-x64-5018-23.iso  zcs-x64-603-06.iso

vmware-cmd -l
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-30/Centos5-30.vmx

pwd
  /vmfs/volumes/Storage1

# Make more directories to copy the initial image's virtual disks to

mkdir Centos5-{31..49}

# Now you'll do a copy operation of your first image to the new directories you made

pwd
  /vmfs/volumes/Storage1

vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-31/Centos5-31.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-32/Centos5-32.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-33/Centos5-33.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-34/Centos5-34.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-35/Centos5-35.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-36/Centos5-36.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-37/Centos5-37.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-38/Centos5-38.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-39/Centos5-39.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-40/Centos5-40.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-41/Centos5-41.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-42/Centos5-42.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-43/Centos5-43.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-44/Centos5-44.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-45/Centos5-45.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-46/Centos5-46.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-47/Centos5-47.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-48/Centos5-48.vmdk
vmkfstools -i ./Centos5-30/Centos5-30.vmdk ./Centos5-49/Centos5-49.vmdk
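The nineteen copies above can also be generated with one loop. Sketched here as a dry run that just prints the commands (assuming seq is available in the service console, as it was on my ESX 4 box); remove the echo to actually execute them.

```shell
# Dry run: print the vmkfstools clone command for images 31-49.
SRC=Centos5-30
CLONES=$(for i in $(seq 31 49); do
  echo "vmkfstools -i ./${SRC}/${SRC}.vmdk ./Centos5-${i}/Centos5-${i}.vmdk"
done)
echo "$CLONES"
```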

pwd
  /vmfs/volumes/Storage1

# Now you'll copy the initial image's configuration file to the directory above the image directories
# so we can modify it and make it a template file

cp Centos5-30/Centos5-30.vmx ./Centos5-31.vmx

vi Centos5-31.vmx
  change variables to be more generic

cat Centos5-31.vmx 
  #!/usr/bin/vmware
  .encoding = "UTF-8"
  config.version = "8"
  virtualHW.version = "7"
  pciBridge0.present = "TRUE"
  pciBridge4.present = "TRUE"
  pciBridge4.virtualDev = "pcieRootPort"
  pciBridge4.functions = "8"
  pciBridge5.present = "TRUE"
  pciBridge5.virtualDev = "pcieRootPort"
  pciBridge5.functions = "8"
  pciBridge6.present = "TRUE"
  pciBridge6.virtualDev = "pcieRootPort"
  pciBridge6.functions = "8"
  pciBridge7.present = "TRUE"
  pciBridge7.virtualDev = "pcieRootPort"
  pciBridge7.functions = "8"
  vmci0.present = "TRUE"
  nvram = "Centos5-31.nvram"
  deploymentPlatform = "windows"
  virtualHW.productCompatibility = "hosted"
  unity.customColor = "|23C0C0C0"
  tools.upgrade.policy = "useGlobal"
  powerType.powerOff = "soft"
  powerType.powerOn = "default"
  powerType.suspend = "hard"
  powerType.reset = "soft"
  displayName = "Centos5-31"
  extendedConfigFile = "Centos5-31.vmxf"
  floppy0.present = "TRUE"
  scsi0.present = "TRUE"
  scsi0.sharedBus = "none"
  scsi0.virtualDev = "lsilogic"
  memsize = "1024"
  scsi0:0.present = "TRUE"
  scsi0:0.fileName = "Centos5-31.vmdk"
  scsi0:0.deviceType = "scsi-hardDisk"
  ide1:0.present = "TRUE"
  ide1:0.clientDevice = "FALSE"
  ide1:0.deviceType = "cdrom-image"
  ide1:0.startConnected = "TRUE"
  floppy0.startConnected = "FALSE"
  floppy0.clientDevice = "TRUE"
  ethernet0.present = "TRUE"
  ethernet0.networkName = "VM Network"
  ethernet0.addressType = "generated"
  guestOSAltName = "Red Hat Enterprise Linux 5 (32-bit)"
  guestOS = "rhel5"
  uuid.location = "56 4d 41 ed 87 cc a2 03-77 97 48 8c 65 16 7e ed"
  uuid.bios = "56 4d 41 ed 87 cc a2 03-77 97 48 8c 65 16 7e ed"
  vc.uuid = "52 9e a1 5c 02 25 0f c2-20 03 f8 bb 21 93 74 c7"
  ethernet0.generatedAddress = ""
  tools.syncTime = "FALSE"
  cleanShutdown = "TRUE"
  replay.supported = "FALSE"
  sched.swap.derivedName = "/vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-31/Centos5-31-d6823091.vswp"
  scsi0:0.redo = ""
  vmotion.checkpointFBSize = "4194304"
  pciBridge0.pciSlotNumber = "17"
  pciBridge4.pciSlotNumber = "21"
  pciBridge5.pciSlotNumber = "22"
  pciBridge6.pciSlotNumber = "23"
  pciBridge7.pciSlotNumber = "24"
  scsi0.pciSlotNumber = "16"
  ethernet0.pciSlotNumber = "32"
  vmci0.pciSlotNumber = "33"
  ethernet0.generatedAddressOffset = "0"
  vmci0.id = "-876333085"
  hostCPUID.0 = "0000000b756e65476c65746e49656e69"
  hostCPUID.1 = "000106a500100800009ce3bdbfebfbff"
  hostCPUID.80000001 = "00000000000000000000000128100800"
  guestCPUID.0 = "0000000b756e65476c65746e49656e69"
  guestCPUID.1 = "000106a500010800809822010febfbff"
  guestCPUID.80000001 = "00000000000000000000000128100800"
  userCPUID.0 = "0000000b756e65476c65746e49656e69"
  userCPUID.1 = "000106a500100800009822010febfbff"
  userCPUID.80000001 = "00000000000000000000000128100800"
  evcCompatibilityMode = "FALSE"
  ide1:0.fileName = "/vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/iso/CentOS/CentOS-5.4-i386-bin-DVD.iso"
  floppy0.fileName = "/dev/fd0"
  debugStub.linuxOffsets = "0xa356145,0xffffffff,0xfc052084,0xffffffff,0x0,0x0,0xa356150,0x0,0xa35616f,0x0,0xfc052088,0xffffffff,0x0,0x0"
  tools.remindInstall = "TRUE"
  sched.cpu.affinity = "all"
  sched.swap.hostLocal = "disabled"

# Now we'll copy this template to the other image directories

cp Centos5-31.vmx Centos5-31/Centos5-31.vmx
cp Centos5-31.vmx Centos5-32/Centos5-32.vmx
cp Centos5-31.vmx Centos5-33/Centos5-33.vmx
cp Centos5-31.vmx Centos5-34/Centos5-34.vmx
cp Centos5-31.vmx Centos5-35/Centos5-35.vmx
cp Centos5-31.vmx Centos5-36/Centos5-36.vmx
cp Centos5-31.vmx Centos5-37/Centos5-37.vmx
cp Centos5-31.vmx Centos5-38/Centos5-38.vmx
cp Centos5-31.vmx Centos5-39/Centos5-39.vmx
cp Centos5-31.vmx Centos5-40/Centos5-40.vmx
cp Centos5-31.vmx Centos5-41/Centos5-41.vmx
cp Centos5-31.vmx Centos5-42/Centos5-42.vmx
cp Centos5-31.vmx Centos5-43/Centos5-43.vmx
cp Centos5-31.vmx Centos5-44/Centos5-44.vmx
cp Centos5-31.vmx Centos5-45/Centos5-45.vmx
cp Centos5-31.vmx Centos5-46/Centos5-46.vmx
cp Centos5-31.vmx Centos5-47/Centos5-47.vmx
cp Centos5-31.vmx Centos5-48/Centos5-48.vmx
cp Centos5-31.vmx Centos5-49/Centos5-49.vmx

# You'll want to swap the references to the initial image name [ Centos5-30 ] to the directory's image name

vi Centos5-31/Centos5-31.vmx 
vi Centos5-32/Centos5-32.vmx 
vi Centos5-33/Centos5-33.vmx 
vi Centos5-34/Centos5-34.vmx 
vi Centos5-35/Centos5-35.vmx 
vi Centos5-36/Centos5-36.vmx 
vi Centos5-37/Centos5-37.vmx 
vi Centos5-38/Centos5-38.vmx 
vi Centos5-39/Centos5-39.vmx 
vi Centos5-40/Centos5-40.vmx 
vi Centos5-41/Centos5-41.vmx 
vi Centos5-42/Centos5-42.vmx 
vi Centos5-43/Centos5-43.vmx 
vi Centos5-44/Centos5-44.vmx 
vi Centos5-45/Centos5-45.vmx 
vi Centos5-46/Centos5-46.vmx
vi Centos5-47/Centos5-47.vmx
vi Centos5-48/Centos5-48.vmx
vi Centos5-49/Centos5-49.vmx
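All of that per-file vi work can be scripted with sed. Demoed below in a scratch directory against a two-line stand-in template so it is safe to run anywhere; on the ESX box you would run the same loop from /vmfs/volumes/Storage1 against the real Centos5-31.vmx (and the MAC/uuid lines still need to be blanked, as in the template listing above).

```shell
# Demo: stamp a per-image .vmx out of a template with sed.
# The two-line template here stands in for the real Centos5-31.vmx.
BASE=$(mktemp -d)
printf 'displayName = "Centos5-31"\nscsi0:0.fileName = "Centos5-31.vmdk"\n' \
  > "$BASE/Centos5-31.vmx"

for i in $(seq 31 49); do
  mkdir -p "$BASE/Centos5-$i"
  sed "s/Centos5-31/Centos5-$i/g" "$BASE/Centos5-31.vmx" \
    > "$BASE/Centos5-$i/Centos5-$i.vmx"
done

# Spot-check the last one.
grep displayName "$BASE/Centos5-49/Centos5-49.vmx"
```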

# Now we can register those new images and they'll show up in your vSphere Client

vmware-cmd Centos5-31/Centos5-31.vmx register
vmware-cmd Centos5-32/Centos5-32.vmx register
vmware-cmd Centos5-33/Centos5-33.vmx register
vmware-cmd Centos5-34/Centos5-34.vmx register
vmware-cmd Centos5-35/Centos5-35.vmx register
vmware-cmd Centos5-36/Centos5-36.vmx register
vmware-cmd Centos5-37/Centos5-37.vmx register
vmware-cmd Centos5-38/Centos5-38.vmx register
vmware-cmd Centos5-39/Centos5-39.vmx register
vmware-cmd Centos5-40/Centos5-40.vmx register
vmware-cmd Centos5-41/Centos5-41.vmx register
vmware-cmd Centos5-42/Centos5-42.vmx register
vmware-cmd Centos5-43/Centos5-43.vmx register
vmware-cmd Centos5-44/Centos5-44.vmx register
vmware-cmd Centos5-45/Centos5-45.vmx register
vmware-cmd Centos5-46/Centos5-46.vmx register
vmware-cmd Centos5-47/Centos5-47.vmx register
vmware-cmd Centos5-48/Centos5-48.vmx register
vmware-cmd Centos5-49/Centos5-49.vmx register
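Same idea for the registrations - a dry run that prints the nineteen vmware-cmd calls; drop the echo to run them for real on the ESX service console.

```shell
# Dry run: print the register command for each cloned image.
REG=$(for i in $(seq 31 49); do
  echo "vmware-cmd Centos5-$i/Centos5-$i.vmx register"
done)
echo "$REG"
```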

# Confirm they all are registered

vmware-cmd -l | sort
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-30/Centos5-30.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-31/Centos5-31.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-32/Centos5-32.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-33/Centos5-33.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-34/Centos5-34.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-35/Centos5-35.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-36/Centos5-36.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-37/Centos5-37.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-38/Centos5-38.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-39/Centos5-39.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-40/Centos5-40.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-41/Centos5-41.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-42/Centos5-42.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-43/Centos5-43.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-44/Centos5-44.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-45/Centos5-45.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-46/Centos5-46.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-47/Centos5-47.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-48/Centos5-48.vmx
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af/Centos5-49/Centos5-49.vmx

# You'll now see your images in your vSphere Client.
# When you first start them, open a console view, because in that view
# you'll be prompted to select an option before the image will start up.
# Choose :  "I copied it"
# Once the image is running, login as root and within a shell confirm your network interface has
# a unique IP address as well as a unique MAC address for the interface.
# You then might want to suspend the image at this point.

Manual Cloning of x64 Setup

If you can do x64 with your VMware, here are the additional steps for that [cut-n-paste]

Make a new image for x64 rhel5 called Centos5-x64-50 and once it's all done, shut it down.

Short and sweet update for ESXi 5.5

[ Working Dir is /vmfs/volumes/datastore1 ]

# ls -la /vmfs/volumes/ | grep data
lrwxr-xr-x    1 root     root            35 Aug 24 21:35 datastore1 -> 53f864a6-1d3302c1-1b97-00101849e4af

# mkdir linuxsrv

# vmkfstools -i Centos\ 6.5\ Base/Centos\ 6.5\ Base.vmdk linuxsrv/linuxsrv.vmdk

# cp Centos\ 6.5\ Base/Centos\ 6.5\ Base.vmx linuxsrv/linuxsrv.vmx

# vi linuxsrv/linuxsrv.vmx
   ESC Key
   :%s/Centos 6.5 Base/linuxsrv/g
   ESC Key
  # make the following variables blank , ethernet0.generatedAddress = "00:0c:29:44:16:8d"
   ethernet0.generatedAddress = ""
   :wq!

# Note - if you have snapshots against your source image, you might need to adjust this var
# scsi0:0.fileName = "850-Wapp2-0000104.vmdk" , for example. The numbers after the hostname get removed.
# 

# vim-cmd solo/registervm /vmfs/volumes/datastore1/linuxsrv/linuxsrv.vmx
10
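The vi session above can be done non-interactively with sed - same substitutions, shown here against a small stand-in file so it can be tested anywhere. Busybox sed on ESXi handles these basic expressions; on the real box you would point it at the source and target .vmx paths instead.

```shell
# Stand-in for the real "Centos 6.5 Base.vmx".
TMP=$(mktemp -d)
printf 'displayName = "Centos 6.5 Base"\nethernet0.generatedAddress = "00:0c:29:44:16:8d"\n' \
  > "$TMP/base.vmx"

# Rename every "Centos 6.5 Base" reference and blank the generated MAC.
sed -e 's/Centos 6.5 Base/linuxsrv/g' \
    -e 's/^ethernet0.generatedAddress = .*/ethernet0.generatedAddress = ""/' \
    "$TMP/base.vmx" > "$TMP/linuxsrv.vmx"

cat "$TMP/linuxsrv.vmx"
```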

For ESX 4


pwd
  /vmfs/volumes/4bbaf57f-6230127f-d432-00101849e4af

mkdir Centos5-x64-{51..69}

vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-51/Centos5-x64-51.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-52/Centos5-x64-52.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-53/Centos5-x64-53.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-54/Centos5-x64-54.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-55/Centos5-x64-55.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-56/Centos5-x64-56.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-57/Centos5-x64-57.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-58/Centos5-x64-58.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-59/Centos5-x64-59.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-60/Centos5-x64-60.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-61/Centos5-x64-61.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-62/Centos5-x64-62.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-63/Centos5-x64-63.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-64/Centos5-x64-64.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-65/Centos5-x64-65.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-66/Centos5-x64-66.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-67/Centos5-x64-67.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-68/Centos5-x64-68.vmdk
vmkfstools -i Centos5-x64-50/Centos5-x64-50.vmdk Centos5-x64-69/Centos5-x64-69.vmdk

# Set up the vmx file as a template, like you did for the 32bit ones, and then copy it to the other x64 directories.

cp Centos5-x64-50/Centos5-x64-50.vmx ./
cp Centos5-x64-50/Centos5-x64-50.vmx ./Centos5-x64-60.vmx

# make this variable empty-quoted - ethernet0.generatedAddress = ""

vi Centos5-x64-50.vmx

# Edit this for the ethernet and change the 50 to 60 -- makes editing easier for the 60 dir's

vi Centos5-x64-60.vmx

# copy to other directories.

cp Centos5-x64-50.vmx Centos5-x64-51/Centos5-x64-51.vmx
cp Centos5-x64-50.vmx Centos5-x64-52/Centos5-x64-52.vmx
cp Centos5-x64-50.vmx Centos5-x64-53/Centos5-x64-53.vmx
cp Centos5-x64-50.vmx Centos5-x64-54/Centos5-x64-54.vmx
cp Centos5-x64-50.vmx Centos5-x64-55/Centos5-x64-55.vmx
cp Centos5-x64-50.vmx Centos5-x64-56/Centos5-x64-56.vmx
cp Centos5-x64-50.vmx Centos5-x64-57/Centos5-x64-57.vmx
cp Centos5-x64-50.vmx Centos5-x64-58/Centos5-x64-58.vmx
cp Centos5-x64-50.vmx Centos5-x64-59/Centos5-x64-59.vmx

cp Centos5-x64-60.vmx Centos5-x64-60/Centos5-x64-60.vmx
cp Centos5-x64-60.vmx Centos5-x64-61/Centos5-x64-61.vmx
cp Centos5-x64-60.vmx Centos5-x64-62/Centos5-x64-62.vmx
cp Centos5-x64-60.vmx Centos5-x64-63/Centos5-x64-63.vmx
cp Centos5-x64-60.vmx Centos5-x64-64/Centos5-x64-64.vmx
cp Centos5-x64-60.vmx Centos5-x64-65/Centos5-x64-65.vmx
cp Centos5-x64-60.vmx Centos5-x64-66/Centos5-x64-66.vmx
cp Centos5-x64-60.vmx Centos5-x64-67/Centos5-x64-67.vmx
cp Centos5-x64-60.vmx Centos5-x64-68/Centos5-x64-68.vmx
cp Centos5-x64-60.vmx Centos5-x64-69/Centos5-x64-69.vmx

# Now edit them, replacing the 50 in the hostname reference to match the ## of the directory.

vi Centos5-x64-51/Centos5-x64-51.vmx
vi Centos5-x64-52/Centos5-x64-52.vmx
vi Centos5-x64-53/Centos5-x64-53.vmx
vi Centos5-x64-54/Centos5-x64-54.vmx
vi Centos5-x64-55/Centos5-x64-55.vmx
vi Centos5-x64-56/Centos5-x64-56.vmx
vi Centos5-x64-57/Centos5-x64-57.vmx
vi Centos5-x64-58/Centos5-x64-58.vmx
vi Centos5-x64-59/Centos5-x64-59.vmx

# Now edit them, replacing the 60 in the hostname reference to match the ## of the directory.

vi Centos5-x64-61/Centos5-x64-61.vmx
vi Centos5-x64-62/Centos5-x64-62.vmx
vi Centos5-x64-63/Centos5-x64-63.vmx
vi Centos5-x64-64/Centos5-x64-64.vmx
vi Centos5-x64-65/Centos5-x64-65.vmx
vi Centos5-x64-66/Centos5-x64-66.vmx
vi Centos5-x64-67/Centos5-x64-67.vmx
vi Centos5-x64-68/Centos5-x64-68.vmx
vi Centos5-x64-69/Centos5-x64-69.vmx

# Register the x64 images

vmware-cmd Centos5-x64-51/Centos5-x64-51.vmx register
vmware-cmd Centos5-x64-52/Centos5-x64-52.vmx register
vmware-cmd Centos5-x64-53/Centos5-x64-53.vmx register
vmware-cmd Centos5-x64-54/Centos5-x64-54.vmx register
vmware-cmd Centos5-x64-55/Centos5-x64-55.vmx register
vmware-cmd Centos5-x64-56/Centos5-x64-56.vmx register
vmware-cmd Centos5-x64-57/Centos5-x64-57.vmx register
vmware-cmd Centos5-x64-58/Centos5-x64-58.vmx register
vmware-cmd Centos5-x64-59/Centos5-x64-59.vmx register

vmware-cmd Centos5-x64-60/Centos5-x64-60.vmx register
vmware-cmd Centos5-x64-61/Centos5-x64-61.vmx register
vmware-cmd Centos5-x64-62/Centos5-x64-62.vmx register
vmware-cmd Centos5-x64-63/Centos5-x64-63.vmx register
vmware-cmd Centos5-x64-64/Centos5-x64-64.vmx register
vmware-cmd Centos5-x64-65/Centos5-x64-65.vmx register
vmware-cmd Centos5-x64-66/Centos5-x64-66.vmx register
vmware-cmd Centos5-x64-67/Centos5-x64-67.vmx register
vmware-cmd Centos5-x64-68/Centos5-x64-68.vmx register
vmware-cmd Centos5-x64-69/Centos5-x64-69.vmx register

# You'll now see your images in your vSphere Client.
# When you first start them, open a console view, because in that view
# you'll be prompted to select an option before the image will start up.
# Choose :  "I copied it"
# Once the image is running, login as root and within a shell confirm your network interface has
# a unique IP address as well as a unique MAC address for the interface.
# You then might want to suspend the image at this point.

Other Useful Steps To Get Zimbra Installed And Running

RHEL or CentOS
chkconfig sendmail off
/etc/init.d/sendmail stop

Some prereq packages that you might not already have installed:

yum install compat-libstdc++-33 compat-libstdc++-296 sysstat
Adding Additional Storage

Adding additional storage for Zimbra installation requirement

  • Add new virtual disk - ZCS requires at least 5GB of free space on an available partition
  • Power Off the VM and then create a new disk, giving it at least 5GB of space
  • Power On the VM
  • You can then run this to see/confirm the new "disk"
    •  fdisk -l
    • In my example here, my new disk is /dev/sdb
      • Partition the new disk
        •  fdisk /dev/sdb 
        • Select "n" for new partition
        • Select "p" for primary partition
        • Select "1" for partition number
        • Select default of "1" for first cylinder
        • Select default, which should be the highest number given in range. This will change based upon the size of the virtual disk you made.
        • Your new disk is now partitioned.
        • Hit "p" to print out the partition table to confirm.
        • Hit "w" to write table to disk and exit fdisk.
      • Create new filesystem for the new disk/partition
        • This example uses ext3 and the example partition path of /dev/sdb1
        •  mkfs.ext3 /dev/sdb1 
      • Setup /etc/fstab to mount the new partition for zimbra use
        •  mkdir /opt/zimbra 
        •  vi /etc/fstab 
        • And now add a line like the following:
        • /dev/sdb1       /opt/zimbra        ext3   defaults  1 1
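The fdisk answers above can be fed in as a here-document instead of typed interactively. Shown with cat standing in for fdisk so the keystroke stream itself can be inspected; swap in fdisk /dev/sdb (as root, on the right disk!) to run it for real. The two blank lines accept the default first and last cylinders.

```shell
# Keystrokes for: new (n) primary (p) partition 1, whole disk, write (w).
KEYS=$(cat <<'EOF'
n
p
1


w
EOF
)
echo "$KEYS" | cat        # replace "cat" with: fdisk /dev/sdb
```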
Moving Zimbra To New Partitions For zimbra and backups

Adding additional storage for Zimbra installation requirement

  • Disable zimbra from starting via init.d - how this is done depends on your distro.
  • Power Off the VM and then create two new virtual disks. Sizing is based upon your current usage and what you expect you'll need going forward. I recommend the disk space is allocated now vs allowing it to dynamically grow.
  • Power On the VM
  • Confirm zimbra isn't running, stop it if it is running:
    •  su - zimbra ; zmcontrol stop 
  • You should now confirm the new "disks" exist
    •  fdisk -l
    • In my example here, my new disks are /dev/sdb and /dev/sdc
      • Partition the new sdb disk
        •  fdisk /dev/sdb 
        • Select "n" for new partition
        • Select "p" for primary partition
        • Select "1" for partition number
        • Select default of "1" for first cylinder
        • Select default, which should be the highest number given in range. This will change based upon the size of the virtual disk you made.
        • Your new disk is now partitioned.
        • Hit "p" to print out the partition table to confirm.
        • Hit "w" to write table to disk and exit fdisk.
      • Partition the new sdc disk
        •  fdisk /dev/sdc 
        • Select "n" for new partition
        • Select "p" for primary partition
        • Select "1" for partition number
        • Select default of "1" for first cylinder
        • Select default, which should be the highest number given in range. This will change based upon the size of the virtual disk you made.
        • Your new disk is now partitioned.
        • Hit "p" to print out the partition table to confirm.
        • Hit "w" to write table to disk and exit fdisk.
      • Create new filesystem for the new disks/partitions
        • This example uses ext3 and the example partition path of /dev/sdb1 and /dev/sdc1
        •  mkfs.ext3 /dev/sdb1 
        •  mkfs.ext3 /dev/sdc1 
      • Move zimbra data to a temporary location
        • as root
        •  mv /opt/zimbra /opt/zimbra_old 
      • Setup /etc/fstab to mount the new partition for zimbra use
        •  vi /etc/fstab 
        • And now add a line like the following, they must be below the entry for your / partition:
        • /dev/sdb1       /opt/zimbra        ext3   defaults  1 1
        • /dev/sdc1       /opt/zimbra/backup ext3   defaults  1 1
        • Save the file.
        • We first need to get /opt/zimbra mounted before the backup partition.
        •  mkdir /opt/zimbra 
        •  mount /opt/zimbra 
        • Now we can do the backup one.
        •  mkdir /opt/zimbra/backup 
        •  chown zimbra:zimbra /opt/zimbra/backup 
        •  mount /opt/zimbra/backup 
      • Now we sync the data over onto our new partitions.
        •  rsync -avzHS --progress /opt/zimbra_old/ /opt/zimbra 
        • Once you've confirmed the rsync completed correctly, you can remove the old data.
        •  rm -rf /opt/zimbra_old 
      • You should now be able to start zimbra.
        • su - zimbra ; zmcontrol start
      • Remember to re-enable the zimbra init.d service if you disabled it at the beginning.
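The disk-migration steps above can be sketched as a single script. This is a dry-run sketch only: it prints each command instead of executing it, since mkfs and moving /opt/zimbra are destructive. The device names /dev/sdb1 and /dev/sdc1 are the example values from this walkthrough; verify yours with fdisk -l first.

```shell
# Dry-run sketch of the /opt/zimbra migration above.
# Each command is echoed rather than executed; change run() to really
# execute only after double-checking the device names with `fdisk -l`.
DATA_DEV=/dev/sdb1      # example: new /opt/zimbra partition
BACKUP_DEV=/dev/sdc1    # example: new /opt/zimbra/backup partition

CMDS=""
run() { CMDS="$CMDS$*;"; echo "$@"; }   # record and print, don't execute

run mkfs.ext3 "$DATA_DEV"
run mkfs.ext3 "$BACKUP_DEV"
run mv /opt/zimbra /opt/zimbra_old
run mkdir /opt/zimbra
run mount /opt/zimbra                   # assumes the fstab entries are in place
run mkdir /opt/zimbra/backup
run chown zimbra:zimbra /opt/zimbra/backup
run mount /opt/zimbra/backup
run rsync -avzHS --progress /opt/zimbra_old/ /opt/zimbra
```

Reading the printed commands top to bottom is a quick sanity check that the ordering matches the steps above (filesystems first, then the move, the mounts, and finally the rsync).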

Final Look

Comment About Hyper-threading And CPU Performance

The E5520 has 4 cores, which do show up in vSphere as 4 available procs, but cpuinfo shows just the one. The difference comes when Intel VT [vmx flag in cpuinfo] or AMD-V [svm flag in cpuinfo] is enabled while hyper-threading [ht flag in cpuinfo] is turned off. Turning on HT in the BIOS would, I think, show 8 procs in the VM and report 4 cores in the cpuinfo output from the base OS [I don't want to reboot and reconfigure at the moment to double-check]. Note that cpuinfo will show the ht flag if the CPUs support hyper-threading, not just if it's enabled. You still need to enable it, and confirm ACPI is enabled in the BIOS, if you're not seeing the 'virtual CPUs'. VT is about exposing the physical cores and individual CPUs; HT is about splitting each CPU or core into 2 virtual CPUs.
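You can check for these flags directly. A small sketch using only the standard Linux /proc/cpuinfo interface (remember that flag presence means the CPU supports the feature; HT and VT still have to be enabled in the BIOS):

```shell
# Report whether the virtualization-related CPU flags discussed above
# are present in /proc/cpuinfo on this machine.
for flag in vmx svm ht; do
  if grep -qw "$flag" /proc/cpuinfo 2>/dev/null; then
    echo "$flag: present"
  else
    echo "$flag: absent"
  fi
done
```

On the E5520 host above you would expect vmx and ht present and svm absent (svm is the AMD equivalent of vmx).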

Spec

ESX Server

[root@vmware-server ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
stepping : 5
cpu MHz : 2266.688
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi
 mmx fxsr sse sse2 ss ht tm syscall nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl 
 vmx est tm2 cx16 xtpr popcnt lahf_lm
bogomips : 4535.95
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]

[root@vmware-server ~]# cat /proc/meminfo
MemTotal: 356684 kB
MemFree: 30392 kB
Buffers: 9068 kB
Cached: 127804 kB
SwapCached: 14180 kB
Active: 264444 kB
Inactive: 41320 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 356684 kB
LowFree: 30392 kB
SwapTotal: 730916 kB
SwapFree: 671960 kB
Dirty: 208 kB
Writeback: 0 kB
AnonPages: 168356 kB
Mapped: 36188 kB
Slab: 12448 kB
PageTables: 3160 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 909256 kB
Committed_AS: 533636 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 25036 kB
VmallocChunk: 34359705099 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
MachineMem: 12580415 kB

[root@vmware-server ~]# uname -a
Linux vmware-server.zimbra.homunix.com 2.6.18-128.ESX #1 
 Thu Oct 15 16:11:16 PDT 2009 x86_64 x86_64 x86_64 GNU/Linux

[root@vmware-server ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.1 (Tikanga)

[root@vmware-server ~]# fdisk -l

Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 140 1124518+ 83 Linux
/dev/sda2 141 154 112455 fc VMware VMKCORE
/dev/sda3 155 91201 731335027+ 5 Extended
/dev/sda5 155 91201 731334996 fb VMware VMFS

Disk /dev/sdb: 8095 MB, 8095006720 bytes
255 heads, 63 sectors/track, 984 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 91 730926 82 Linux swap / Solaris
/dev/sdb2 92 346 2048287+ 83 Linux
/dev/sdb3 347 984 5124735 5 Extended
/dev/sdb5 347 984 5124703+ 83 Linux


I've set up the VMs with 1 CPU and 1024 MB RAM each. I have 9 running right now. Here are the details from one of my 64-bit images:

[root@mail59 ~]# cat /proc/meminfo
MemTotal: 1026932 kB
MemFree: 85816 kB
Buffers: 14884 kB
Cached: 87132 kB
SwapCached: 117444 kB
Active: 705916 kB
Inactive: 143664 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 1026932 kB
LowFree: 85816 kB
SwapTotal: 2064376 kB
SwapFree: 1453736 kB
Dirty: 644 kB
Writeback: 0 kB
AnonPages: 734740 kB
Mapped: 34284 kB
Slab: 38884 kB
PageTables: 33304 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 2577840 kB
Committed_AS: 3257840 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 263932 kB
VmallocChunk: 34359473927 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
[root@mail59 ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
stepping : 5
cpu MHz : 2266.631
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 
 clflush dts acpi mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc up ida 
 nonstop_tsc pni cx16 popcnt lahf_lm
bogomips : 4533.26
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]

[root@mail59 ~]# uname -a
Linux mail59.zimbra.homeunix.com 2.6.18-164.el5 #1 SMP Thu 
 Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

[root@mail59 ~]# cat /etc/redhat-release
CentOS release 5.4 (Final)

[root@mail59 ~]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1305 10377990 8e Linux LVM

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 1044 8385898+ 83 Linux

[root@mail59 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
7.7G 3.5G 3.9G 48% /
/dev/sda1 99M 13M 82M 14% /boot
tmpfs 502M 0 502M 0% /dev/shm
/dev/hdc 6.8G 6.8G 0 100% /media/zcs-x64-603-06
/dev/sdb1 7.9G 1.8G 5.8G 24% /opt/zimbra

Here's my esxtop output, which is run on the base OS of the ESX server. You'll see the running VMs I mentioned.

[root@vmware-server ~]# esxtop

 9:20:07pm up 32 days 20:43, 139 worlds; CPU load average: 0.14, 0.15, 0.16
PCPU USED(%):   7.7   7.3   6.8   6.8 AVG:   7.1
PCPU UTIL(%):  10.6  10.4  10.0   9.9 AVG:  10.2
CCPU(%):   0 us,   2 sy,  98 id,   0 wa ;       cs/sec:    229

     ID    GID NAME             NWLD   %USED    %RUN    %SYS   %WAIT    %RDY   %IDLE  %OVRLP   %CSTP  %MLMTD  %SWPWT
      1      1 idle                4  367.57  370.04    0.00    0.00   34.02    0.00    0.00    0.00    0.00    0.00
      2      2 system              6    0.01    0.01    0.00  600.00    0.00    0.00    0.00    0.00    0.00    0.00
      6      6 helper             58    0.01    0.01    0.00 5800.00    0.00    0.00    0.00    0.00    0.00    0.00
      7      7 drivers             9    0.00    0.00    0.00  900.00    0.00    0.00    0.00    0.00    0.00    0.00
      8      8 vmotion             4    0.00    0.00    0.00  400.00    0.00    0.00    0.00    0.00    0.00    0.00
     10     10 console             2    1.93    1.89    0.01  200.00    0.08   99.04    0.03    0.00    0.00    0.00
     15     15 vmkapimod           9    0.00    0.00    0.00  900.00    0.00    0.00    0.00    0.00    0.00    0.00
     17     17 FT                  1    0.00    0.00    0.00  100.00    0.00    0.00    0.00    0.00    0.00    0.00
     18     18 vobd.4231           8    0.00    0.00    0.00  800.00    0.00    0.00    0.00    0.00    0.00    0.00
     19     19 net-cdp.4239        1    0.00    0.00    0.00  100.00    0.00    0.00    0.00    0.00    0.00    0.00
     20     20 vmware-vmkauthd     1    0.00    0.00    0.00  100.00    0.00    0.00    0.00    0.00    0.00    0.00
    114    114 Centos5-x64-59      4    4.43    5.17    0.01  398.60    0.29   95.33    0.29    0.00    0.00    0.00
    125    125 Centos5-x64-50      4    2.64    3.49    0.00  400.00    0.30   97.30    0.27    0.00    0.00    0.00
    126    126 Centos5-x64-51      4    2.38    3.13    0.00  400.00    0.31   97.63    0.28    0.00    0.00    0.00
    127    127 Centos5-x64-52      4    2.70    3.54    0.00  400.00    0.31   97.21    0.28    0.00    0.00    0.00
    128    128 Centos5-x64-53      4    2.65    3.50    0.00  400.00    0.32   97.25    0.28    0.00    0.00    0.00
    139    139 Centos5-x64-54      4    2.36    3.12    0.00  400.00    0.29   97.63    0.27    0.00    0.00    0.00
    140    140 Centos5-x64-55      4    2.36    3.13    0.00  400.00    0.28   97.65    0.29    0.00    0.00    0.00
    141    141 Centos5-x64-56      4    2.37    3.11    0.00  400.00    0.29   97.66    0.27    0.00    0.00    0.00
    142    142 Centos5-x64-57      4    2.44    3.21    0.00  400.00    0.29   97.57    0.29    0.00    0.00    0.00
Screenshot

Esx-screenshot.jpg

Optional Setups To Enhance Your Test Environment

Using RINETD To Redirect To Your Various VM For External HTTP Access

RINETD Setup

rinetd redirects TCP connections from one IP address and port to another. rinetd is a single-process server which handles any number of connections to the address/port pairs specified in the file /etc/rinetd.conf.


Here's an example /etc/rinetd.conf on an internal server [192.168.0.16] that redirects to the various VMs on port 80. The 'firewall', a cheap one like mine, only allows very basic port redirection to an IP address -- NOT to an internal IP address AND a different port. So the cheap firewall is set up to redirect port 80## to the rinetd server [192.168.0.16], and that server then redirects to the right VM on port 80.


If you're using a DynDNS service this works out very well, as I can now use the DynDNS domain name and simply add a port number to get redirected to the various ZCS servers. For example:

  • http://EXTERNAL-DOMAIN:8059 would get redirected to my vm server that is using 192.168.0.59 and to port 80 for the http ZWC login page.

Here's the /etc/rinetd.conf I have setup:

192.168.0.16 8030 192.168.0.30 80                               
192.168.0.16 8031 192.168.0.31 80                               
192.168.0.16 8032 192.168.0.32 80                               
192.168.0.16 8033 192.168.0.33 80                               
192.168.0.16 8034 192.168.0.34 80
192.168.0.16 8035 192.168.0.35 80
192.168.0.16 8036 192.168.0.36 80
192.168.0.16 8037 192.168.0.37 80
192.168.0.16 8038 192.168.0.38 80
192.168.0.16 8039 192.168.0.39 80
192.168.0.16 8040 192.168.0.40 80
192.168.0.16 8041 192.168.0.41 80
192.168.0.16 8042 192.168.0.42 80
192.168.0.16 8043 192.168.0.43 80
192.168.0.16 8044 192.168.0.44 80
192.168.0.16 8045 192.168.0.45 80
192.168.0.16 8046 192.168.0.46 80
192.168.0.16 8047 192.168.0.47 80
192.168.0.16 8048 192.168.0.48 80
192.168.0.16 8049 192.168.0.49 80
192.168.0.16 8050 192.168.0.50 80
192.168.0.16 8051 192.168.0.51 80
192.168.0.16 8052 192.168.0.52 80
192.168.0.16 8053 192.168.0.53 80
192.168.0.16 8054 192.168.0.54 80
192.168.0.16 8055 192.168.0.55 80
192.168.0.16 8056 192.168.0.56 80
192.168.0.16 8057 192.168.0.57 80
192.168.0.16 8058 192.168.0.58 80
192.168.0.16 8059 192.168.0.59 80
192.168.0.16 8060 192.168.0.60 80
192.168.0.16 8061 192.168.0.61 80
192.168.0.16 8062 192.168.0.62 80
192.168.0.16 8063 192.168.0.63 80
192.168.0.16 8064 192.168.0.64 80
192.168.0.16 8065 192.168.0.65 80
192.168.0.16 8066 192.168.0.66 80
192.168.0.16 8067 192.168.0.67 80
192.168.0.16 8068 192.168.0.68 80
192.168.0.16 8069 192.168.0.69 80
logcommon
logfile /var/log/rinetd.log
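The 40 near-identical forwarding rules above can be generated with a short loop. A sketch assuming the same 192.168.0.16 listener and the 30–69 host range used throughout this page:

```shell
# Generate the per-VM rinetd rules shown above: listen on
# 192.168.0.16 port 80NN and forward to 192.168.0.NN port 80.
for n in $(seq 30 69); do
  echo "192.168.0.16 80$n 192.168.0.$n 80"
done
echo "logcommon"
echo "logfile /var/log/rinetd.log"
# Redirect this output into /etc/rinetd.conf once it looks right.
```

This makes it easy to change the VM range or listener address in one place instead of editing 40 lines by hand.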
Setting Up A Mailhub

Continuing with the example: 192.168.0.16, with the hostname mail3.[SUB].[DOMAIN].com, is my mail hub; the firewall routes all SMTP traffic to it, and it also runs rinetd. This assumes you've set up DNS as I outlined in Ajcody-Virtualization-Named-DNS, which I mentioned at the top of this wiki page.


Install postfix [centos5 example]

yum install postfix
rpm -q postfix
 postfix-2.3.3-2.1.el5_2

Configure /etc/postfix/main.cf

/etc/init.d/postfix stop
cd /etc/postfix/
mv main.cf main.cf-backup

Now paste in the contents:

## paste the below into file and adjust for your setup the following variables
## myhostname & mydomain with my example of mail3.[SUB].[DOMAIN].com
## might need to also adjust : mynetworks = 192.168.0.0/24
## You might want to compare our default main.cf below and adjust for different paths, etc
## that your distro or postfix version might be setup to use
## and then save -- :wq!

vi main.cf

queue_directory = /var/spool/postfix
command_directory = /usr/sbin
daemon_directory = /usr/libexec/postfix
mail_owner = postfix

myhostname = mail3.[SUB].[DOMAIN].com
mydomain = mail3.[SUB].[DOMAIN].com
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
mynetworks = 192.168.0.0/24, 127.0.0.0/8
relay_domains = $mydestination, $mynetworks, hash:/etc/postfix/relay-domains

transport_maps = hash:/etc/postfix/transport
local_transport = local

alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases

debug_peer_level = 2
debugger_command =
         PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
         xxgdb $daemon_directory/$process_name $process_id & sleep 5

sendmail_path = /usr/sbin/sendmail.postfix
newaliases_path = /usr/bin/newaliases.postfix
mailq_path = /usr/bin/mailq.postfix
setgid_group = postdrop
html_directory = no
manpage_directory = /usr/share/man
sample_directory = /usr/share/doc/postfix-2.3.3/samples
readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES

You might need to build the aliases db; the path is referenced in the main.cf above. Note that alias files are built with postalias rather than postmap. The command below creates /etc/aliases.db

postalias /etc/aliases

Now we'll set up the relay-domains file. Adjust my example for your domain with a vi substitution [hit ESC, then type :%s/SUB.DOMAIN/your.domain/g and press Enter].

vi /etc/postfix/relay-domains

mail30.SUB.DOMAIN.com OK                     
mail31.SUB.DOMAIN.com OK
mail32.SUB.DOMAIN.com OK
mail33.SUB.DOMAIN.com OK
mail34.SUB.DOMAIN.com OK
mail35.SUB.DOMAIN.com OK
mail36.SUB.DOMAIN.com OK
mail37.SUB.DOMAIN.com OK
mail38.SUB.DOMAIN.com OK
mail39.SUB.DOMAIN.com OK
mail40.SUB.DOMAIN.com OK
mail41.SUB.DOMAIN.com OK
mail42.SUB.DOMAIN.com OK
mail43.SUB.DOMAIN.com OK
mail44.SUB.DOMAIN.com OK
mail45.SUB.DOMAIN.com OK
mail46.SUB.DOMAIN.com OK
mail47.SUB.DOMAIN.com OK
mail48.SUB.DOMAIN.com OK
mail49.SUB.DOMAIN.com OK
mail50.SUB.DOMAIN.com OK
mail51.SUB.DOMAIN.com OK
mail52.SUB.DOMAIN.com OK
mail53.SUB.DOMAIN.com OK
mail54.SUB.DOMAIN.com OK
mail55.SUB.DOMAIN.com OK
mail56.SUB.DOMAIN.com OK
mail57.SUB.DOMAIN.com OK
mail58.SUB.DOMAIN.com OK
mail59.SUB.DOMAIN.com OK
mail61.SUB.DOMAIN.com OK
mail62.SUB.DOMAIN.com OK
mail63.SUB.DOMAIN.com OK
mail64.SUB.DOMAIN.com OK
mail65.SUB.DOMAIN.com OK
mail66.SUB.DOMAIN.com OK
mail67.SUB.DOMAIN.com OK
mail68.SUB.DOMAIN.com OK
mail69.SUB.DOMAIN.com OK

Now build the db; the command will create /etc/postfix/relay-domains.db

postmap /etc/postfix/relay-domains
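Like the rinetd.conf earlier, the relay-domains file can be generated with a loop instead of typed by hand. A sketch covering mail30 through mail69 inclusive; SUB.DOMAIN is the same placeholder used above, so substitute your real domain:

```shell
# Generate the relay-domains entries above for hosts mail30..mail69.
# SUB.DOMAIN is a placeholder; substitute your real subdomain/domain.
for n in $(seq 30 69); do
  echo "mail$n.SUB.DOMAIN.com OK"
done
# Redirect this output into /etc/postfix/relay-domains,
# then run: postmap /etc/postfix/relay-domains
```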

Now we'll create a transport map for each domain. Adjust my example for your domain with a vi substitution [hit ESC, then type :%s/SUB.DOMAIN/your.domain/g and press Enter].

vi /etc/postfix/transport

mail30.SUB.DOMAIN.com      smtp:mail30.SUB.DOMAIN.com
mail31.SUB.DOMAIN.com      smtp:mail31.SUB.DOMAIN.com
mail32.SUB.DOMAIN.com      smtp:mail32.SUB.DOMAIN.com
mail33.SUB.DOMAIN.com      smtp:mail33.SUB.DOMAIN.com
mail34.SUB.DOMAIN.com      smtp:mail34.SUB.DOMAIN.com
mail35.SUB.DOMAIN.com      smtp:mail35.SUB.DOMAIN.com
mail36.SUB.DOMAIN.com      smtp:mail36.SUB.DOMAIN.com
mail37.SUB.DOMAIN.com      smtp:mail37.SUB.DOMAIN.com
mail38.SUB.DOMAIN.com      smtp:mail38.SUB.DOMAIN.com
mail39.SUB.DOMAIN.com      smtp:mail39.SUB.DOMAIN.com
mail40.SUB.DOMAIN.com      smtp:mail40.SUB.DOMAIN.com
mail41.SUB.DOMAIN.com      smtp:mail41.SUB.DOMAIN.com
mail42.SUB.DOMAIN.com      smtp:mail42.SUB.DOMAIN.com
mail43.SUB.DOMAIN.com      smtp:mail43.SUB.DOMAIN.com
mail44.SUB.DOMAIN.com      smtp:mail44.SUB.DOMAIN.com
mail45.SUB.DOMAIN.com      smtp:mail45.SUB.DOMAIN.com
mail46.SUB.DOMAIN.com      smtp:mail46.SUB.DOMAIN.com
mail47.SUB.DOMAIN.com      smtp:mail47.SUB.DOMAIN.com
mail48.SUB.DOMAIN.com      smtp:mail48.SUB.DOMAIN.com
mail49.SUB.DOMAIN.com      smtp:mail49.SUB.DOMAIN.com
mail50.SUB.DOMAIN.com      smtp:mail50.SUB.DOMAIN.com
mail51.SUB.DOMAIN.com      smtp:mail51.SUB.DOMAIN.com
mail52.SUB.DOMAIN.com      smtp:mail52.SUB.DOMAIN.com
mail53.SUB.DOMAIN.com      smtp:mail53.SUB.DOMAIN.com
mail54.SUB.DOMAIN.com      smtp:mail54.SUB.DOMAIN.com
mail55.SUB.DOMAIN.com      smtp:mail55.SUB.DOMAIN.com
mail56.SUB.DOMAIN.com      smtp:mail56.SUB.DOMAIN.com
mail57.SUB.DOMAIN.com      smtp:mail57.SUB.DOMAIN.com
mail58.SUB.DOMAIN.com      smtp:mail58.SUB.DOMAIN.com
mail59.SUB.DOMAIN.com      smtp:mail59.SUB.DOMAIN.com
mail60.SUB.DOMAIN.com      smtp:mail60.SUB.DOMAIN.com
mail61.SUB.DOMAIN.com      smtp:mail61.SUB.DOMAIN.com
mail62.SUB.DOMAIN.com      smtp:mail62.SUB.DOMAIN.com
mail63.SUB.DOMAIN.com      smtp:mail63.SUB.DOMAIN.com
mail64.SUB.DOMAIN.com      smtp:mail64.SUB.DOMAIN.com
mail65.SUB.DOMAIN.com      smtp:mail65.SUB.DOMAIN.com
mail66.SUB.DOMAIN.com      smtp:mail66.SUB.DOMAIN.com
mail67.SUB.DOMAIN.com      smtp:mail67.SUB.DOMAIN.com
mail68.SUB.DOMAIN.com      smtp:mail68.SUB.DOMAIN.com
mail69.SUB.DOMAIN.com      smtp:mail69.SUB.DOMAIN.com

Now build the db; the command will create /etc/postfix/transport.db

postmap /etc/postfix/transport
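The transport map can be generated the same way. A sketch for the mail30–mail69 range, again with SUB.DOMAIN as the placeholder to substitute:

```shell
# Generate the transport map entries above: route mail for each
# mailNN.SUB.DOMAIN.com domain directly to that host over SMTP.
# SUB.DOMAIN is a placeholder; substitute your real subdomain/domain.
for n in $(seq 30 69); do
  printf 'mail%s.SUB.DOMAIN.com      smtp:mail%s.SUB.DOMAIN.com\n' "$n" "$n"
done
# Redirect this output into /etc/postfix/transport,
# then run: postmap /etc/postfix/transport
```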

We should be all set now. Start postfix and monitor the log file as you do your testing.

/etc/init.d/postfix start
tail -f /var/log/maillog
Configuring Your ZCS VM's To Use The Mailhub

Note: another 'trick' you might need on your VMs to get them working from the outside in this type of situation is to set up one server [I used my 192.168.0.16 machine] to relay email to and from the other ZCS servers. This assumes you're using a DynDNS setup where you can wildcard a base domain and also wildcard it for MX requests. It's also necessary when your firewall device can only route all port 25 requests to a single internal IP address. This setup assumes that server is running an MTA that can then do internal MX lookups on the domain name to find routing information. The adjustments below set up the ZCS VMs to accept mail relayed from that internal server.


On the other ZCS servers, adjust the Configuration > Global Settings > MTA tab and the Configuration > Servers > [your server] > MTA tab to use the one server for:

  • Web mail MTA Port 25
  • Relay MTA for external delivery
  • Inbound SMTP Host name
  • Then run zmcontrol stop followed by zmcontrol start on the server(s)