Ajcody-Disk-Full-Issues
Disk Full Issues
Read First - Zimbra Support Is Not Your Appropriate Support Contact For Storage Modifications
The information on this wiki page is provided for those who are comfortable doing this work themselves and who will use proper testing and DR strategies before making changes on a production machine.
Modifying your disk/partitions/storage is outside of the supported responsibilities of the Zimbra Support team. Your proper vendor support contact for this activity is your OS, Virtualization, and Storage vendors and any other contractor or consultant resources you use.
The only exception to the above rule is for customers running our EOL [End of Life] ZCA product. Those customers should upgrade to the Network Edition of ZCS; after that upgrade, their support contacts for storage modifications fall to VMware or the OS vendor.
as root:
cat /etc/fstab
cat /proc/mounts
df -hT
Let's identify your larger directories, as root:
for i in `find /opt/zimbra -maxdepth 1 -type d`; \
do export sum=`find $i -printf %k"\n" | awk '{ sum += $1 } END { print sum kb }'`; \
echo -e "$sum kb\t$i"; export sum=; done | sort -rn | head -n 20

[example output below]

6007764 kb	/opt/zimbra
1966620 kb	/opt/zimbra/db
837160 kb	/opt/zimbra/backup
680932 kb	/opt/zimbra/jetty-distribution-7.6.12.v20130726
387140 kb	/opt/zimbra/data
286160 kb	/opt/zimbra/jdk-1.7.0_45
211080 kb	/opt/zimbra/store
207172 kb	/opt/zimbra/zmstat
178628 kb	/opt/zimbra/logger
162280 kb	/opt/zimbra/mta
155700 kb	/opt/zimbra/bdb-5.2.36
116520 kb	/opt/zimbra/aspell-0.60.6.1
98820 kb	/opt/zimbra/mysql-standard-5.5.32-pc-linux-gnu-i686-glibc23
79408 kb	/opt/zimbra/zimbramon
72608 kb	/opt/zimbra/lib
66940 kb	/opt/zimbra/keyview-10.13.0.0
66488 kb	/opt/zimbra/clamav-0.97.8
64676 kb	/opt/zimbra/httpd-2.4.4
47408 kb	/opt/zimbra/store2
47164 kb	/opt/zimbra/index
If you have stats for the server, this will give us trending data:
[zimbra@]$ tar cvf /tmp/df.tar `find /opt/zimbra/zmstat -name df.csv\* -print | sort -r | head -n 20`
tar: Removing leading `/' from member names
/opt/zimbra/zmstat/df.csv
/opt/zimbra/zmstat/2014-04-06/df.csv.gz
/opt/zimbra/zmstat/2014-04-05/df.csv.gz
/opt/zimbra/zmstat/2014-04-04/df.csv.gz
/opt/zimbra/zmstat/2014-04-03/df.csv.gz
/opt/zimbra/zmstat/2014-04-02/df.csv.gz
/opt/zimbra/zmstat/2014-04-01/df.csv.gz
/opt/zimbra/zmstat/2014-03-31/df.csv.gz
/opt/zimbra/zmstat/2014-03-30/df.csv.gz
/opt/zimbra/zmstat/2014-03-29/df.csv.gz
/opt/zimbra/zmstat/2014-03-28/df.csv.gz
/opt/zimbra/zmstat/2014-03-27/df.csv.gz
/opt/zimbra/zmstat/2014-03-26/df.csv.gz
/opt/zimbra/zmstat/2014-03-25/df.csv.gz
/opt/zimbra/zmstat/2014-03-24/df.csv.gz
/opt/zimbra/zmstat/2014-03-23/df.csv.gz
/opt/zimbra/zmstat/2014-03-22/df.csv.gz
/opt/zimbra/zmstat/2014-03-21/df.csv.gz
/opt/zimbra/zmstat/2014-03-20/df.csv.gz
/opt/zimbra/zmstat/2014-03-19/df.csv.gz

[zimbra@zcs806 ~]$ ls -ls /tmp/df.tar
80 -rw-r----- 1 zimbra zimbra 81920 Apr 7 11:40 /tmp/df.tar
As the zimbra user. Note - if your zimbraBackupTarget variable points somewhere other than /opt/zimbra/backup, ALSO run the three ls commands below against that path:
su - zimbra
zmprov -l gs `zmhostname` | egrep 'Back|Redo'
du -sh /opt/zimbra/redolog
ls -latr /opt/zimbra/backup
ls -latr /opt/zimbra/backup/tmp
ls -latr /opt/zimbra/backup/sessions
crontab -l | grep -i back
zmbackupquery
and then with a user account, replacing user@domain.com below with a valid account:
zmprov ga user@domain.com |grep -i Lifetime
you can also do these with the COS you use:
zmprov gac
and then:
zmprov -l gc [cos name] | grep Lifetime
You probably are only using the default COS, so:
zmprov -l gc default | grep Lifetime
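If you have more than one COS, a small loop can sweep them all rather than checking each by hand. A minimal sketch, assuming zmprov is on the zimbra user's PATH:

```shell
# As the zimbra user: list every COS and show its Lifetime attributes.
for cos in $(zmprov gac); do
  echo "== $cos =="
  zmprov -l gc "$cos" | grep -i Lifetime
done
```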
Notable Bugs In ZCS That Cause Unnecessary Disk Growth Or Consumption
Large /opt/zimbra/logger/db/data/rrds Directory
- This bug was reported in 8.0.5 and is slated to be resolved in 8.0.7
- zmlogger causes extreme rrd file growth
Large /opt/zimbra/data/amavisd/.spamassassin Directory
You may find you have a large /opt/zimbra/data/amavisd/.spamassassin directory because bayes_toks.expire* files are not being purged via the cronjob. To check your crontab:
su - zimbra
crontab -l | grep sa-learn
You should have something like this in your crontab [It's all one line below if you want to manually run it from the CLI as the zimbra user]:
/opt/zimbra/libexec/sa-learn -p /opt/zimbra/conf/salocal.cf --dbpath \
  /opt/zimbra/data/amavisd/.spamassassin --siteconfigpath \
  /opt/zimbra/conf/spamassassin --force-expire --sync > /dev/null 2>&1
That should be cleaning up those files. If it isn't, and you have anti-spam off, I would recommend moving the files to a temporary location or compressing them [just in case]. Give it a night and, if nothing is amiss, remove them from your filesystem.
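A cautious way to do that move; the /opt/zimbra/tmp-bayes staging path below is just an example, pick any location with free space:

```shell
# As root: stage the expire leftovers instead of deleting them outright.
mkdir -p /opt/zimbra/tmp-bayes
mv /opt/zimbra/data/amavisd/.spamassassin/bayes_toks.expire* /opt/zimbra/tmp-bayes/
# After a night with no issues, remove the staged copies:
# rm -rf /opt/zimbra/tmp-bayes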
Large /opt/zimbra/data/ldap/mdb/db/ Directory Because Of The data.mdb File
If your data.mdb file is actually consuming GBs of space and is no longer a sparse file, you most likely did a move, cp, or rsync of this data/directory improperly. With ZCS 8, this is a sparse file and has to be treated differently.
For example:
[root@zcs806 db]# pwd
/opt/zimbra/data/ldap/mdb/db
[root@zcs806 db]# ls -lh
total 1.5M
-rw------- 1 zimbra zimbra  15G Apr 7 10:10 data.mdb
-rw------- 1 zimbra zimbra 8.0K Apr 7 13:03 lock.mdb
[root@zcs806 db]# du -c -h data.mdb
1.5M	data.mdb
1.5M	total
Notice that the data.mdb file shows 15G in the ls output but is actually only 1.5M per the du output.
Reference:
- About the changes to data.mdb in ZCS 8, see:
- To correct a problem like this where it's no longer a sparse file, see the following:
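One common fix is to re-copy the file with sparse handling while LDAP is stopped; GNU cp's --sparse=always re-creates holes for zero-filled blocks. This is a minimal sketch of that approach, not an official procedure, so take a backup and test on a non-production copy first:

```shell
# Stop ldap before touching data.mdb (as zimbra: ldap stop), then as root:
cd /opt/zimbra/data/ldap/mdb/db
cp --sparse=always data.mdb data.mdb.sparse   # writes zero blocks as holes
mv data.mdb data.mdb.fat && mv data.mdb.sparse data.mdb
chown zimbra:zimbra data.mdb
# Start ldap again (as zimbra: ldap start) and verify with: du -h data.mdb
# Remove data.mdb.fat once everything checks out.
```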
Very Fast Growing zimbra.log And mail.* in /var/log Directory
If you find, especially after an upgrade, that zimbra.log and the mail.* logs in /var/log are growing extremely fast, please check your syslog/rsyslog configuration files.
- Endless loop of logging from rsyslog 60-zimbra.conf
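A quick way to spot duplicated or looping Zimbra entries is to grep every syslog config on the box; paths vary by distro, so some of the ones below may not exist on your system:

```shell
grep -rn -i zimbra /etc/rsyslog.conf /etc/rsyslog.d/ /etc/syslog.conf 2>/dev/null
# Duplicate rules writing the same facility to the same file are the usual culprit.
```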
Dumpster Issues
Confirm if you have dumpster enabled and then if it's actually purging messages like it should, see the following:
Adding A New Primary Store Volume How-To
Your existing primary volume is using the default path of /opt/zimbra/store. You'll create a new ext3/ext4 partition and mount it, for example, at /opt/zimbra/store2. Make sure it's also properly placed in /etc/fstab - at the end of the file will probably work just fine.
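For example, an /etc/fstab entry for the new partition might look like this (the device name and filesystem type are assumptions - match them to your setup):

```
/dev/sdd1   /opt/zimbra/store2   ext4   defaults   1 2
```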
[as root]
mkdir /opt/zimbra/store2
chown zimbra:zimbra /opt/zimbra/store2
chmod 755 /opt/zimbra/store2
mount /opt/zimbra/store2

[Now confirm a write/delete test]

su - zimbra
touch /opt/zimbra/store2/testfile
rm /opt/zimbra/store2/testfile
You can then use the admin console to add the new zimbra message volume for /opt/zimbra/store2. Assuming you set it to be the active one, the transition to using that volume for new blobs will be immediate. The old blobs will stay where they are [/opt/zimbra/store]. HSM or a secondary volume is different in that it runs a job, which you set up in the crontab, that moves messages X days/weeks/months/etc. old for all accounts from the primary message volumes into its own volume path.
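If you prefer the CLI over the admin console, the same steps can be done with zmvolume; the volume name message2 below is just an example:

```shell
# As the zimbra user:
zmvolume -a -n message2 -t primaryMessage -p /opt/zimbra/store2
zmvolume -l                 # note the id of the new volume
zmvolume -sc -id ID         # replace ID: makes the new volume the current primary
```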
You can monitor the /opt/zimbra/store2/ directory and you'll see sub-directories being made as the new blobs/messages come in. Please note, the sub-directories will not have 751 perms. They will be like drwxr-x--- [750] .
- References
- Using zmvolume from the command line
Adding A New HSM Volume How-To - 1 Total
If this is your first time using HSM , please review the complete table of contents at Ajcody-HSM-Notes .
See the following:
Adding A Second HSM Volume How-To - Having One Active HSM And One Inactive HSM Volume - 2 Total
You currently have an HSM volume but it's getting close to full.
/dev/sdc1 2.0T 1.6T 292G 85% /opt/zimbra/hsm
Let's say you create a new partition for HSM and mount it as /opt/zimbra/hsm2. You'll need to have it owned by zimbra [chown zimbra:zimbra /opt/zimbra/hsm2].
You would then:
su - zimbra
zmhsm -u
  [Confirm hsm is not currently running]
crontab -e
  [Comment out the hsm run in cron if that's how you have it set up]
  [We don't want hsm running during the change]
zmvolume -a -n hsm2-volume -t secondaryMessage -p /opt/zimbra/hsm2
  [You might want to adjust -n hsm2-volume depending on how you named the other hsm volume]
  [-p /opt/zimbra/hsm2 is the path for the new volume; adjust if needed]
zmvolume -l
  [Get the volume id of the new volume]
  [You should see that the new volume isn't listed as current; the old one still is]
zmvolume -sc -id ##
  [Replace ## with the volume id for the hsm2 volume]
  [This will set the new hsm volume to be the current one; msgs will go there on the next hsm run]
  [There can only be one "current" volume for each volume type [index, primary, secondary]]
zmvolume -l
  [Confirm the volumes]
crontab -e
  [Uncomment the hsm job in cron if that's how you ran it.]
- See the following for zmvolume options:
- Other references:
- http://wiki.zimbra.com/wiki/Ajcody-HSM-Notes#Create_The_HSM_Volume
- 1.3.1.1 Create The HSM Volume
- 1.3.1.2 Set HSM Volume To Current
- 1.3.1.3 Starting HSM For First Time
- http://wiki.zimbra.com/wiki/Ajcody-Server-Misc-Topics#Volumes_.26_zmvolume
- 1.9 Volumes & zmvolume
- 1.9.1 Basic Concepts
- 1.9.2 Notable RFEs
- 1.9.3 How To Move A User's Data To Another Volume
- 1.9.3.1 Using zmsoap Example
- 1.9.4 How To Go About Changing Volume Paths
- 1.9.4.1 To Modify Volume From CLI After Data Move
Adding Additional Storage - VMware Virtual Machine Example
See the following:
Growing The vmdk Disk And Expanding The LVM Filesystem
See the following references and contact VMware Support for additional help [Note - taking a snapshot of the VM prior to this seems wise]:
- http://v-reality.info/2010/06/working-with-linux-volumes-n-vsphere/
- http://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-expanding-the-virtual-machine-disk/
Moving Zimbra To New Partitions For The /opt/zimbra And Backups Directory - VMware Virtual Machine Example
See the following:
ZCA - Zimbra Appliance - Manually Adding And Expanding The Disk Partitions
Note - Ideally the link below should work for you and be the best method for adding/increasing disk space on ZCA.
See the following also for a more manual way if needed:
Checking Your Dumpster Settings And Purging
If the user empties trash, those messages are deleted from the Trash folder. If you have dumpster enabled, though, they will still reside on the filesystem until they hit the DumpsterLifetime of 30 days. See Ajcody-Server-Misc-Topics#Dumpster_Specific for more details on dumpster variables and how they work.
To see if you have dumpster enabled, you have to check your COS. This example below is against the COS named default:
su - zimbra
zmprov getCos default | grep -i dumpster

zimbraDumpsterEnabled: TRUE        # it must be TRUE
zimbraMailDumpsterLifetime: 30d
Notice the Enabled flag.
The below command should empty the dumpster data regardless of the lifetime variable:
zmmailbox -z -A -m user@domain.com emptyDumpster
- Reference
- Some default values
- zimbraMailDumpsterLifetime: 30d
- zimbraMailMessageLifetime: 0
- zimbraMailSpamLifetime: 30d
- zimbraMailTrashLifetime: 30d
- Some default values