Revision as of 18:52, 7 April 2014
- This article is NOT official Zimbra documentation. It is a user contribution and may include unsupported customizations, references, suggestions, or information. |
Disk Full Issues
Actual Disk Full Issues Homepage
Please see Ajcody-Disk-Full-Issues
as root:
cat /etc/fstab
cat /proc/mounts
df -hT
Let's identify your larger directories, as root:

for i in `find /opt/zimbra -maxdepth 1 -type d`; \
  do export sum=`find $i -printf %k"\n" | awk '{ sum += $1 } END { print sum }'`; \
  echo -e "$sum kb\t$i"; export sum=; done | sort -rn | head -n 20

[example output below]
6007764 kb    /opt/zimbra
1966620 kb    /opt/zimbra/db
837160 kb     /opt/zimbra/backup
680932 kb     /opt/zimbra/jetty-distribution-7.6.12.v20130726
387140 kb     /opt/zimbra/data
286160 kb     /opt/zimbra/jdk-1.7.0_45
211080 kb     /opt/zimbra/store
207172 kb     /opt/zimbra/zmstat
178628 kb     /opt/zimbra/logger
162280 kb     /opt/zimbra/mta
155700 kb     /opt/zimbra/bdb-5.2.36
116520 kb     /opt/zimbra/aspell-0.60.6.1
98820 kb      /opt/zimbra/mysql-standard-5.5.32-pc-linux-gnu-i686-glibc23
79408 kb      /opt/zimbra/zimbramon
72608 kb      /opt/zimbra/lib
66940 kb      /opt/zimbra/keyview-10.13.0.0
66488 kb      /opt/zimbra/clamav-0.97.8
64676 kb      /opt/zimbra/httpd-2.4.4
47408 kb      /opt/zimbra/store2
47164 kb      /opt/zimbra/index
If you have stats for the server, this will give us trending data:
[zimbra@]$ tar cvf /tmp/df.tar `find /opt/zimbra/zmstat -name df.csv\* -print | sort -r | head -n 20`
tar: Removing leading `/' from member names
/opt/zimbra/zmstat/df.csv
/opt/zimbra/zmstat/2014-04-06/df.csv.gz
/opt/zimbra/zmstat/2014-04-05/df.csv.gz
/opt/zimbra/zmstat/2014-04-04/df.csv.gz
/opt/zimbra/zmstat/2014-04-03/df.csv.gz
/opt/zimbra/zmstat/2014-04-02/df.csv.gz
/opt/zimbra/zmstat/2014-04-01/df.csv.gz
/opt/zimbra/zmstat/2014-03-31/df.csv.gz
/opt/zimbra/zmstat/2014-03-30/df.csv.gz
/opt/zimbra/zmstat/2014-03-29/df.csv.gz
/opt/zimbra/zmstat/2014-03-28/df.csv.gz
/opt/zimbra/zmstat/2014-03-27/df.csv.gz
/opt/zimbra/zmstat/2014-03-26/df.csv.gz
/opt/zimbra/zmstat/2014-03-25/df.csv.gz
/opt/zimbra/zmstat/2014-03-24/df.csv.gz
/opt/zimbra/zmstat/2014-03-23/df.csv.gz
/opt/zimbra/zmstat/2014-03-22/df.csv.gz
/opt/zimbra/zmstat/2014-03-21/df.csv.gz
/opt/zimbra/zmstat/2014-03-20/df.csv.gz
/opt/zimbra/zmstat/2014-03-19/df.csv.gz
[zimbra@zcs806 ~]$ ls -ls /tmp/df.tar
80 -rw-r----- 1 zimbra zimbra 81920 Apr  7 11:40 /tmp/df.tar
As the zimbra user. Note: if your zimbraBackupTarget variable uses a path other than /opt/zimbra/backup, then ALSO run the three ls commands below against that path:

su - zimbra
zmprov -l gs `zmhostname` | egrep 'Back|Redo'
du -sh /opt/zimbra/redolog
ls -latr /opt/zimbra/backup
ls -latr /opt/zimbra/backup/tmp
ls -latr /opt/zimbra/backup/sessions
crontab -l | grep -i back
zmbackupquery
And then with a user account, replacing user@domain.com below with a valid account:
zmprov ga user@domain.com |grep -i Lifetime
You can also do these with the COS you use:
zmprov gac
and then:
zmprov -l gc [cos name] | grep Lifetime
You probably are only using the default COS, so:
zmprov -l gc default | grep Lifetime
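If any of these lifetimes are longer than your retention policy calls for, they can be shortened with zmprov modifyCos. This is a sketch; the attribute comes from the checks above, but the 30d value is only an illustration, so pick whatever matches your own policy:

```shell
# As the zimbra user. Hypothetical example: cap trash retention
# on the default COS at 30 days (30d is an example value).
zmprov mc default zimbraMailTrashLifetime 30d

# Verify the change took effect:
zmprov -l gc default | grep -i TrashLifetime
```

The actual purge of expired items happens on the mailbox purge sweep, so the change does not free disk space instantly.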
Adding A New Primary Store Volume How-To
Your existing primary volume uses the default path of /opt/zimbra/store. You'll create and mount a new ext3/ext4 partition, for example at /opt/zimbra/store2. Make sure it also has a proper entry in /etc/fstab; placing it at the end of the file will probably work just fine.
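A minimal /etc/fstab entry might look like the line below. The device name /dev/sdb1 is a placeholder for whatever partition you actually created, and ext4 assumes that's the filesystem you chose:

```
# Hypothetical fstab entry -- /dev/sdb1 and ext4 are placeholders.
/dev/sdb1    /opt/zimbra/store2    ext4    defaults    1 2
```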
[as root]
mkdir /opt/zimbra/store2
chown zimbra:zimbra /opt/zimbra/store2
chmod 755 /opt/zimbra/store2
mount /opt/zimbra/store2
[Now confirm a write/delete test]
su - zimbra
touch /opt/zimbra/store2/testfile
rm /opt/zimbra/store2/testfile
You can then use the admin console to add the new Zimbra message volume for /opt/zimbra/store2. Assuming you set it to be the active one, the transition to using that volume for new blobs is immediate. The old blobs stay where they are [/opt/zimbra/store]. HSM (a secondary volume) is different: a job you set up in the crontab moves messages older than X days/weeks/months for all accounts from the primary message volumes into its own volume path.
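If you prefer the command line over the admin console, the same steps can be done with zmvolume (the same tool used in the HSM section below). The volume name store2-volume is just an example, and ## is a placeholder for the id that zmvolume -l reports:

```shell
# As the zimbra user. Add the new primary message volume:
zmvolume -a -n store2-volume -t primaryMessage -p /opt/zimbra/store2

# List volumes and note the id of the new one:
zmvolume -l

# Set it as the current (active) primary volume -- replace ## with that id:
zmvolume -sc -id ##
```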
You can monitor the /opt/zimbra/store2/ directory and you'll see sub-directories being created as the new blobs/messages come in. Please note, the sub-directories will not have 751 perms; they will be like drwxr-x--- [750].
- References
- Using zmvolume from the command line
Adding A New HSM Volume How-To
You currently have an HSM volume, but it's getting close to full:
/dev/sdc1 2.0T 1.6T 292G 85% /opt/zimbra/hsm
Let's say you create a new partition for HSM and mount it as /opt/zimbra/hsm2. You'll need to have it owned by zimbra [chown zimbra:zimbra /opt/zimbra/hsm2].
You would then:
su - zimbra
zmhsm -u
  [Confirm hsm is not currently running]
crontab -e
  [Comment out the hsm run in cron if that's how you have it setup]
  [We don't want hsm running during the change]
zmvolume -a -n hsm2-volume -t secondaryMessage -p /opt/zimbra/hsm2
  [You might want to adjust -n hsm2-volume depending on how you named the other hsm volume]
  [-p /opt/zimbra/hsm2 is the path for the new volume, adjust if needed]
zmvolume -l
  [Get the volume id of the new volume]
  [You should see that the new volume isn't listed as current, the old one still is]
zmvolume -sc -id ##
  [Replace ## with the volume id for the hsm2 volume]
  [This will set the new hsm volume to be the current one, msg's will go there on the next hsm run]
  [There can only be one "current" volume for each volume type {index, primary, secondary}]
zmvolume -l
  [Confirm the volumes]
crontab -e
  [Uncomment the hsm job in cron if that's how you ran it.]
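For reference, the cron entry you're commenting out and back in might look something like the line below. Both the schedule and the exact zmhsm invocation here are assumptions, so check your own crontab and zmhsm -h for the real flags rather than copying this verbatim:

```
# Hypothetical crontab line for a weekly HSM run (Sundays at 02:00).
# The -s (start) flag and the schedule are assumptions -- verify with zmhsm -h.
0 2 * * 0 /opt/zimbra/bin/zmhsm -s > /dev/null 2>&1
```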
- See the following for zmvolume options:
- Other references:
- http://wiki.zimbra.com/wiki/Ajcody-HSM-Notes#Create_The_HSM_Volume
- 1.3.1.1 Create The HSM Volume
- 1.3.1.2 Set HSM Volume To Current
- 1.3.1.3 Starting HSM For First Time
- http://wiki.zimbra.com/wiki/Ajcody-Server-Misc-Topics#Volumes_.26_zmvolume
- 1.9 Volumes & zmvolume
- 1.9.1 Basic Concepts
- 1.9.2 Notable RFEs
- 1.9.3 How To Move A User's Data To Another Volume
- 1.9.3.1 Using zmsoap Example
- 1.9.4 How To Go About Changing Volume Paths
- 1.9.4.1 To Modify Volume From CLI After Data Move
Checking Your Dumpster Settings And Purging
If the user empties the trash, those messages are deleted from the Trash folder. If you have dumpster enabled, though, they will still reside on the file system until they hit the DumpsterLifetime of 30 days, counted from the message delivery date.
To see if you have dumpster enabled, you have to check your COS. This example below is against the COS named default:
su - zimbra
zmprov getCos default | grep -i dumpster
  zimbraDumpsterEnabled: TRUE       # it must be TRUE
  zimbraMailDumpsterLifetime: 30d
Notice the Enabled flag.
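If dumpster isn't enabled, or you want a different retention window, both attributes can be changed on the COS with zmprov modifyCos. This is a sketch; the 14d value is only an illustration of shortening the lifetime:

```shell
# As the zimbra user. Enable dumpster on the default COS:
zmprov mc default zimbraDumpsterEnabled TRUE

# Hypothetical: shorten dumpster retention to 14 days (14d is an example value):
zmprov mc default zimbraMailDumpsterLifetime 14d
```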
The below command should empty the dumpster data regardless of the lifetime variable:
zmmailbox -z -A -m user@domain.com emptyDumpster
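To run that purge across every mailbox rather than a single account, a small loop over zmprov getAllAccounts works. This is a sketch: on a large system you may want to limit it to one domain or batch it, since it runs serially:

```shell
# Sketch: empty the dumpster for every account on the system.
# zmprov -l gaa lists all accounts; narrow the list if you only want one domain.
for acct in $(zmprov -l gaa); do
    echo "Emptying dumpster for $acct"
    zmmailbox -z -m "$acct" emptyDumpster
done
```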
- Reference
- Some default values
- zimbraMailDumpsterLifetime: 30d
- zimbraMailMessageLifetime: 0
- zimbraMailSpamLifetime: 30d
- zimbraMailTrashLifetime: 30d