Ajcody-Disk-Full-Issues
This article is NOT official Zimbra documentation. It is a user contribution and may include unsupported customizations, references, suggestions, or information.

Disk Full Issues

Actual Disk Full Issues Homepage


Please see Ajcody-Disk-Full-Issues

Running Out Of Disk Space - Information To Share With Support


as root:

cat /etc/fstab
cat /proc/mounts
df -hT 

Let's identify your larger directories, as root:

for i in `find /opt/zimbra -maxdepth 1 -type d`; \
do export sum=`find $i -printf %k"\n" | awk '{  sum += $1 } END { print sum kb }'`; \
echo -e "$sum kb\t$i"; export sum=; done | sort -rn | head -n 20

 [example output below]
6007764 kb      /opt/zimbra
1966620 kb      /opt/zimbra/db
837160 kb       /opt/zimbra/backup
680932 kb       /opt/zimbra/jetty-distribution-7.6.12.v20130726
387140 kb       /opt/zimbra/data
286160 kb       /opt/zimbra/jdk-1.7.0_45
211080 kb       /opt/zimbra/store
207172 kb       /opt/zimbra/zmstat
178628 kb       /opt/zimbra/logger
162280 kb       /opt/zimbra/mta
155700 kb       /opt/zimbra/bdb-5.2.36
116520 kb       /opt/zimbra/aspell-0.60.6.1
98820 kb        /opt/zimbra/mysql-standard-5.5.32-pc-linux-gnu-i686-glibc23
79408 kb        /opt/zimbra/zimbramon
72608 kb        /opt/zimbra/lib
66940 kb        /opt/zimbra/keyview-10.13.0.0
66488 kb        /opt/zimbra/clamav-0.97.8
64676 kb        /opt/zimbra/httpd-2.4.4
47408 kb        /opt/zimbra/store2
47164 kb        /opt/zimbra/index

If you have stats for the server, this will give us trending data:

[zimbra@]$ tar cvf /tmp/df.tar `find /opt/zimbra/zmstat -name df.csv\* -print | sort -r | head -n 20`
tar: Removing leading `/' from member names
/opt/zimbra/zmstat/df.csv
/opt/zimbra/zmstat/2014-04-06/df.csv.gz
/opt/zimbra/zmstat/2014-04-05/df.csv.gz
/opt/zimbra/zmstat/2014-04-04/df.csv.gz
/opt/zimbra/zmstat/2014-04-03/df.csv.gz
/opt/zimbra/zmstat/2014-04-02/df.csv.gz
/opt/zimbra/zmstat/2014-04-01/df.csv.gz
/opt/zimbra/zmstat/2014-03-31/df.csv.gz
/opt/zimbra/zmstat/2014-03-30/df.csv.gz
/opt/zimbra/zmstat/2014-03-29/df.csv.gz
/opt/zimbra/zmstat/2014-03-28/df.csv.gz
/opt/zimbra/zmstat/2014-03-27/df.csv.gz
/opt/zimbra/zmstat/2014-03-26/df.csv.gz
/opt/zimbra/zmstat/2014-03-25/df.csv.gz
/opt/zimbra/zmstat/2014-03-24/df.csv.gz
/opt/zimbra/zmstat/2014-03-23/df.csv.gz
/opt/zimbra/zmstat/2014-03-22/df.csv.gz
/opt/zimbra/zmstat/2014-03-21/df.csv.gz
/opt/zimbra/zmstat/2014-03-20/df.csv.gz
/opt/zimbra/zmstat/2014-03-19/df.csv.gz
[zimbra@zcs806 ~]$ ls -ls /tmp/df.tar
80 -rw-r----- 1 zimbra zimbra 81920 Apr  7 11:40 /tmp/df.tar

As the zimbra user [Note - if your zimbraBackupTarget variable is set to something other than /opt/zimbra/backup, ALSO run the three ls commands below against that path]:

su - zimbra
zmprov -l gs `zmhostname` | egrep 'Back|Redo'
du -sh /opt/zimbra/redolog
ls -latr /opt/zimbra/backup
ls -latr /opt/zimbra/backup/tmp
ls -latr /opt/zimbra/backup/sessions
crontab -l | grep -i back
zmbackupquery 

and then with a user account, replacing user@domain.com below with a valid account:

zmprov ga user@domain.com |grep -i Lifetime

you can also do these with the COS you use:

zmprov gac

and then:

zmprov -l gc [cos name] | grep Lifetime

You probably are only using the default COS, so:

zmprov -l gc default | grep Lifetime
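
If one of those lifetimes is keeping data around longer than you intend, it can be changed on the COS. A hedged example against the default COS [the attribute and value below are only illustrative - adjust to your needs]:

su - zimbra
zmprov mc default zimbraMailTrashLifetime 30d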

Notable Bugs In ZCS That Cause Unnecessary Disk Growth Or Consumption


Large /opt/zimbra/logger/db/data/rrds Directory

This bug was reported in 8.0.5 and is slated to be resolved with 8.0.7.

 https://bugzilla.zimbra.com/show_bug.cgi?id=85222

Large /opt/zimbra/data/amavisd/.spamassassin Directory

You may find you have a large /opt/zimbra/data/amavisd/.spamassassin directory because the bayes_toks.expire* files are not being purged by the cronjob. To check your crontab:

su - zimbra
crontab -l | grep sa-learn

You should have something like this in your crontab [It's all one line below if you want to manually run it from the CLI as the zimbra user]:

/opt/zimbra/libexec/sa-learn -p /opt/zimbra/conf/salocal.cf --dbpath \
 /opt/zimbra/data/amavisd/.spamassassin --siteconfigpath \
 /opt/zimbra/conf/spamassassin --force-expire --sync > /dev/null 2>&1

That should be cleaning up those files. If not, and you have anti-spam off, I would recommend moving them to a temp location or compressing them [just in case]. Give it a night and if nothing is amiss, then remove them from your filesystem.
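
A sketch of the "move them to a temp location" option mentioned above [the temp path is only an example; make sure anti-spam is off or the sa-learn job isn't running first]:

su - zimbra
mkdir /tmp/bayes-hold
mv /opt/zimbra/data/amavisd/.spamassassin/bayes_toks.expire* /tmp/bayes-hold/
 [or compress them in place instead]
gzip /opt/zimbra/data/amavisd/.spamassassin/bayes_toks.expire*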

Large /opt/zimbra/data/ldap/mdb/db/ Directory Because Of The data.mdb File

If your data.mdb file is actually consuming gigabytes of space and is no longer a sparse file, you most likely did a move, cp, or rsync of this data directory improperly. With ZCS 8, this is a sparse file and has to be treated accordingly.

For example:

[root@zcs806 db]# pwd
/opt/zimbra/data/ldap/mdb/db

[root@zcs806 db]# ls -lh
total 1.5M
-rw------- 1 zimbra zimbra  15G Apr  7 10:10 data.mdb
-rw------- 1 zimbra zimbra 8.0K Apr  7 13:03 lock.mdb

[root@zcs806 db]# du -c -h data.mdb
1.5M    data.mdb
1.5M    total

Notice that the data.mdb file says 15G on the ls but is actually only 1.5M in size with the du output.
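
If you ever need to relocate that directory yourself, use a sparse-aware copy so the holes in data.mdb are preserved. A minimal sketch [the destination path is only a placeholder; stop ldap first]:

su - zimbra -c "ldap stop"
cp --sparse=always /opt/zimbra/data/ldap/mdb/db/data.mdb /new/path/db/data.mdb
 [or, for the whole directory]
rsync -a --sparse /opt/zimbra/data/ldap/mdb/db/ /new/path/db/
su - zimbra -c "ldap start"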

Reference:

 http://wiki.zimbra.com/wiki/Importing_LDAP_data_from_provider_to_replica

Adding A New Primary Store Volume How-To


Your existing primary volume uses the default path of /opt/zimbra/store. You'll create a new ext3/4 partition and mount it, for example, at /opt/zimbra/store2. Make sure it's also properly placed in /etc/fstab - at the end of the file will probably work just fine.
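
An example fstab entry for the new partition [the device name and filesystem type below are only placeholders - use whatever matches your system]:

/dev/sdd1    /opt/zimbra/store2    ext4    defaults    0 0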

[as root]

mkdir /opt/zimbra/store2
chown zimbra:zimbra /opt/zimbra/store2
chmod 755 /opt/zimbra/store2
mount /opt/zimbra/store2
 [Now confirm a write/delete test]
su - zimbra
touch /opt/zimbra/store2/testfile
rm /opt/zimbra/store2/testfile

You can then use the admin console to add the new zimbra message volume for /opt/zimbra/store2. Assuming you set it to be the active one, the transition to using that volume for new blobs is immediate. The old blobs will stay where they are [/opt/zimbra/store]. HSM or a secondary volume is different in that it runs a job, which you set up in the crontab, that moves messages X days/weeks/months/etc. old for all accounts from the primary message volumes into its own volume path.

You can monitor the /opt/zimbra/store2/ directory and you'll see sub-directories being made as the new blobs/messages come in. Please note, the sub-directories will not have 751 perms. They will be like drwxr-x--- [750] .
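
If you prefer the CLI over the admin console, the same can be done with zmvolume - see http://wiki.zimbra.com/index.php?title=CLI_zmvolume . A sketch, as the zimbra user [the volume name below is only an example]:

su - zimbra
zmvolume -a -n store2-volume -t primaryMessage -p /opt/zimbra/store2
zmvolume -l
 [note the id of the new volume]
zmvolume -sc -id ##
 [replace ## with that id to make it the current primary volume]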

Adding A New HSM Volume How-To - 1 Total


If this is your first time using HSM, please review the complete table of contents at Ajcody-HSM-Notes.

See the following:

 https://wiki.zimbra.com/wiki/Ajcody-HSM-Notes#A_How-To_Example_-_CLI

Adding A Second HSM Volume How-To - Having One Active HSM And One Inactive HSM Volume - 2 Total


You currently have an HSM volume, but it's getting close to being full.

/dev/sdc1 2.0T 1.6T 292G 85% /opt/zimbra/hsm

Let's say you create a new partition for HSM and mount it as /opt/zimbra/hsm2. You'll need to have it owned by zimbra [chown zimbra:zimbra /opt/zimbra/hsm2].

You would then:

su - zimbra

zmhsm -u
   [Confirm hsm is not currently running]

crontab -e 
   [Comment out the hsm run in cron if that's how you have it setup]
   [We don't want hsm running during the change]

zmvolume -a -n hsm2-volume -t secondaryMessage -p /opt/zimbra/hsm2
   [You might want to adjust -n hsm2-volume depending on how you named the other hsm volume]
   [ -p /opt/zimbra/hsm2 is the path for the new volume, adjust if needed]

zmvolume -l
   [Get the volume id of the new volume]
   [You should see that the new volume isn't listed as current, the old one still is]

zmvolume -sc -id ##  
   [Replace ## with the volume id for the hsm2 volume]
   [This will set the new hsm volume to be the current one, messages will go there on the next hsm run]
   [There can only be one "current" volume for each volume type [index, primary, secondary]]

zmvolume -l
   [confirm the volumes] 

crontab -e
   [Uncomment the hsm job in cron if that's how you ran it.]
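
If you drive HSM from cron rather than the admin console schedule, the entry is just a zmhsm start. A sketch of what that line might look like [the timing below is only an example]:

0 2 * * 0 /opt/zimbra/bin/zmhsm -s > /dev/null 2>&1
 [starts an HSM session at 2am every Sunday]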

Adding Additional Storage - Vmware Virtual Machine Example


See the following:

 http://wiki.zimbra.com/wiki/Ajcody-Virtualization#Adding_Additional_Storage

Moving Zimbra To New Partitions For the /opt/zimbra And backups Directory - Vmware Virtual Machine Example


See the following:

 http://wiki.zimbra.com/wiki/Ajcody-Virtualization#Moving_Zimbra_To_New_Partitions_For_zimbra_and_backups

ZCA - Zimbra Appliance - Manually Adding And Expanding The Disk Partitions


Note - Ideally the link below should work for you and be the best method for adding/increasing disk space on ZCA.

See the following also for a more manual way if needed:

 http://wiki.zimbra.com/wiki/Ajcody-ZCA_Appliance#Manually_Adding_-_Expanding_the_Disk_-_Partition_In_The_OS

Checking Your Dumpster Settings And Purging


If a user empties their trash, those items are deleted from the Trash folder. If you have dumpster enabled, though, they will still reside on the file system until they hit the DumpsterLifetime of 30 days, counted from the message delivery date.

To see if you have dumpster enabled, you have to check your COS. This example below is against the COS named default:

 su - zimbra
 zmprov getCos default|grep -i dumpster
    zimbraDumpsterEnabled: TRUE          # must be TRUE for the dumpster to be in use
    zimbraMailDumpsterLifetime: 30d      # how long dumpster items are kept

Notice the Enabled flag.
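
If disk consumption is the concern, you can also shorten the lifetime on the COS rather than emptying the dumpster by hand. A hedged example against the default COS [the 7d value is only illustrative]:

su - zimbra
zmprov mc default zimbraMailDumpsterLifetime 7d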

The below command should empty the dumpster data regardless of the lifetime variable:

zmmailbox -z -A -m user@domain.com emptyDumpster

Reference - some default values:

 zimbraMailDumpsterLifetime: 30d
 zimbraMailMessageLifetime: 0
 zimbraMailSpamLifetime: 30d
 zimbraMailTrashLifetime: 30d
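
And a sketch to run the same emptyDumpster command across every account on the system [assumes the single-account command shown above works in your environment; test on one account first]:

su - zimbra
for acct in `zmprov -l gaa`; do
  zmmailbox -z -A -m $acct emptyDumpster
done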