Ajcody-ZCA Appliance
ZCA Appliance Topics
Actual ZCA Appliance Topics Homepage
Please see Ajcody-ZCA_Appliance
Where Is The Installer Located On ZCA After Download
Here is an example of the ZCA 8.0.3 Release:
/opt/vmware-zca-installer/packages/zcs-NETWORK-8.0.3_GA_5664.UBUNTU10_64.20130305090216.tgz
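If you are not sure which release tarball your appliance shipped with, listing the packages directory will show it. This uses the same path as the 8.0.3 example above; the filename will vary by ZCA release:

ls -lh /opt/vmware-zca-installer/packages/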
Mailboxd Not Running After ZCS 8.0.3 Upgrade
Check /opt/zimbra/log/zmmailboxd.out for the following:
2013-03-22 09:44:50.496:INFO:oejpw.PlusConfiguration:No Transaction manager found - if your webapp requires one, please configure one.
Total time for which application threads were stopped: 0.0002020 seconds
2013-03-22 09:44:50.788:INFO:oejsh.ContextHandler:started o.e.j.w.WebAppContext{/service,file:/opt/zimbra/jetty-distribution-7.6.2.z4/webapps/service/},/opt/zimbra/jetty-distribution-7.6.2.z4/webapps/service
2013-03-22 09:44:52.458:WARN:oejuc.AbstractLifeCycle:FAILED ZimbraQoSFilter: java.lang.NoSuchMethodError: com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
java.lang.NoSuchMethodError: com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
This should be getting fixed ASAP, so hopefully no one else hits it.
- "mailboxd stops after upgrade to 8.0.3"
Hostname Keeps Being Set To Localhost
This is a 'bug' of sorts in the vmware scripts. They require the hostname to have a valid PTR record; if the DNS query for one fails, the hostname falls back to localhost.
- "vami_set_hostname only valids DNS via TCP resulting in localhost if not available needs workaround"
- https://bugzilla.zimbra.com/show_bug.cgi?id=81262
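Before relying on the workaround below, you can confirm whether a PTR record actually exists for your server's IP. A quick check with dig (XXX.XXX.X.X is a placeholder for your server's IP; dig ships in the dnsutils package on Ubuntu):

dig -x XXX.XXX.X.X +short

Since the bug above is that vami_set_hostname only validates DNS over TCP, it is worth repeating the query with +tcp appended; if the UDP query answers but the TCP one does not, you will still see the localhost fallback.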
Steps That Should Work, But Don't Because Of PTR Not Being Set
Edit /etc/hosts as root:
127.0.0.1      localhost.localdomain localhost
XXX.XXX.X.X    HOSTNAME.DOMAIN.COM HOSTNAME
Edit /etc/hostname as root to have your FQDN:
HOSTNAME.DOMAIN.COM
Push out changes, as root:
/etc/init.d/hostname start
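To verify the change took effect, check both the short and fully qualified name. hostname --fqdn resolves the FQDN through /etc/hosts (or DNS), so it also confirms the /etc/hosts edit above:

hostname
hostname --fqdn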
The vmware tool also sets the hostname via this menu:
# /opt/vmware/share/vami/vami_config_net

Main Menu

0)  Show Current Configuration (scroll with Shift-PgUp/PgDown)
1)  Exit this program
2)  Default Gateway
3)  Hostname
4)  DNS
5)  Proxy Server
6)  IP Address Allocation for eth0
Enter a menu number [0]:
Time Is Not Set Right
Postfix might not start because of this.
The following RFE asks for the appliance to be configured so that, on startup, time is pulled from NTP.
- "ntpdate error in syslog after appliance single node install"
To manually update the time from an NTP server:
ntpdate ntp.ubuntu.com pool.ntp.org
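The RFE above is about making this automatic. Until then, one way to keep the clock in sync is a root cron entry. This is only a sketch - it assumes cron is running on the appliance and that outbound NTP (UDP port 123) is allowed through your firewall. As root, run crontab -e and add a line like:

0 * * * * /usr/sbin/ntpdate -s ntp.ubuntu.com pool.ntp.org

The -s flag sends ntpdate's output to syslog instead of stdout, which is friendlier for cron.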
Increasing Disk - Partition Space
General References On LVM And Commands For It
You should be comfortable with LVM and working with it. Zimbra Support does not support this directly; it is a normal Linux administrative task, and troubleshooting and training issues should be directed to your Linux OS vendor.
References:
- https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Logical_Volume_Manager_Administration/VG_display.html
- https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Logical_Volume_Manager_Administration/custom_report.html
- A nice walk through on an LVM setup:
- http://www.idevelopment.info/data/Unix/Linux/LINUX_ManagingPhysicalLogicalVolumes.shtml
Increase - Adding Disk: Add Storage First In vSphere
This wiki applies to ZCA 8.0.0 and newer.
Adding additional VMDKs for storage is done via the normal vSphere client process.
- For a single server installation, select the Zimbra Appliance; for a multi server installation, select the mailstore server to add storage.
- From Edit Settings, click Add.
- Select Hard Disk and click Next.
- Select Create a new virtual disk. If you are using Raw Device Mapping (RDM), select Raw Device Mappings.
- Select the same provisioning format as the Zimbra Appliance virtual machine. Thick Provision Eager Zeroed format is recommended; if you are deploying on Fibre Channel storage, this provides the best performance for the appliance.
- Select Specify a datastore or datastore cluster.
- Click Browse to select a datastore to create the virtual disk on.
- Once you have added the appropriately sized VMDKs for your deployment, restart your Zimbra appliance virtual machine. The virtual appliance console indicates the volume group for the virtual appliance that is being increased.
- Log in to the storage virtual appliance and, as root, enter:
- vgdisplay data_vg
- This displays the volume group properties for the Zimbra mailstore server. Verify that the volume group size reflects the size of the added storage plus 12GB.
If you do not see the increased disk space for data_vg, proceed with the section below.
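A quicker check than reading the full vgdisplay output is vgs, which prints a one-line summary per volume group; --units g forces the sizes into gigabytes for easy comparison:

vgs --units g data_vg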
Manually Adding - Expanding the Disk - Partition In The OS
Note - Ideally the process described below should just work for you and be the best method for adding/increasing disk space on ZCA.
Keyword - zca partition disk full.
All you should have to do is add the new disks to the ZCS virtual machine and then reboot the zimbra server. A script should detect them and auto-expand the partition to give you the newly available space. If this does not happen, you'll be able to confirm that the volume group [vg] is still the same size, but you will see the new disks in the output of fdisk.
To see the current size of the data_vg volume group:
root@localhost:~# vgdisplay data_vg
  --- Volume group ---
  VG Name               data_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               12.00 GiB
  PE Size               4.00 MiB
  Total PE              3071
  Alloc PE / Size       3071 / 12.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               BDfVzQ-zVEP-jiAm-DI1J-2sje-01cc-1rMJ9K
To see what disks are available to you, run fdisk -l as root. Example below:
root@localhost:~# fdisk -l

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a46dc

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          16      123904   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              16        1045     8261633    5  Extended
/dev/sda5              16          32      123904   82  Linux swap / Solaris
/dev/sda6              32        1045     8136704   83  Linux

Disk /dev/sdb: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdd doesn't contain a valid partition table
Warning - What you're about to do is irreversible. You should confirm and double check that you have your data backed up and are knowledgeable about your DR options if things go wrong beyond this point.
To manually add the new disks to the LVM volume group and expand the partition that's available to zimbra, reusing the example from above, run the following as root:
root@localhost:~# pvcreate /dev/sdc
root@localhost:~# pvcreate /dev/sdd
root@localhost:~# vgextend /dev/data_vg /dev/sdc
root@localhost:~# vgextend /dev/data_vg /dev/sdd
root@localhost:~# lvextend -L +79.99g /dev/data_vg/zimbra
root@localhost:~# resize2fs /dev/data_vg/zimbra
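To confirm the filesystem actually picked up the new space, check the mount and the volume group afterwards. This assumes the zimbra logical volume is mounted under /opt/zimbra, as on a stock ZCA deployment; adjust the path if yours differs:

root@localhost:~# df -h /opt/zimbra
root@localhost:~# vgdisplay data_vg

As an alternative to calculating '+79.99g' by hand, lvextend can be told to consume all remaining free extents in the volume group, which avoids rounding errors: lvextend -l +100%FREE /dev/data_vg/zimbra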
Additional Information
Very useful commands that will show information about your PVs, LVs, and VGs:
- pvs -v
- lvs -v
- vgs -v
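If you specifically want to confirm that the new disks joined data_vg (for example /dev/sdc and /dev/sdd from the walkthrough above), pvs also accepts an explicit field list:

pvs -o pv_name,vg_name,pv_size,pv_free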
References about LVM:
- https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Logical_Volume_Manager_Administration/VG_display.html
- https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Logical_Volume_Manager_Administration/custom_report.html
- Nice walk through:
- http://www.idevelopment.info/data/Unix/Linux/LINUX_ManagingPhysicalLogicalVolumes.shtml