Cluster troubleshooting

KB 2962 · Last updated on 2015-07-11





This document is intended to provide solutions to some common problems encountered on ZCS systems using Red Hat Cluster Suite. It is not intended as a substitute for RHCS's documentation or the assistance of Red Hat Technical Support in cases related to direct failure of the RHCS software itself.

Common Scenarios

This section describes some common problems encountered by administrators of ZCS systems clustered with RHCS, along with their resolutions.

ZCS Software Fails to Start

In this situation, running 'clustat' (as root) will show all services and cluster nodes present, but the state of the service will be 'disabled' or 'failed'. Attempts to start the service using 'clusvcadm -e' will not succeed. Usually this type of problem is not related to the RHCS software itself; it is caused by a misconfiguration or other error preventing ZCS from starting. To correct the problem, the cluster software must be taken out of the picture. To do this, the admin will need to disable the clustered service, manually mount the disk volumes and bring up the virtual IP addresses associated with it, and then directly repair the ZCS installation.

(as root):
clusvcadm -d <service_name>
ip addr
(confirm that the virtual IP is not enabled)
mount
(confirm that the cluster mountpoints are not mounted)
ps -ef | grep zimbra
(confirm that ZCS services are not running)
ip addr add <cluster_service_virtual_ip> dev <device>
mount <physical_disk_location> <mountpoint>
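As a concrete illustration, a minimal sketch of that sequence on a hypothetical cluster might look like the following. The service name 'zimbra', the VIP 192.168.1.100 on eth0, and the volume /dev/mapper/vg_zimbra-lv_zimbra mounted on /opt/zimbra are placeholder values; substitute the names defined in your own cluster.conf, and run the same confirmation checks shown above between steps.

(as root):
clusvcadm -d zimbra
ip addr add 192.168.1.100/24 dev eth0
mount /dev/mapper/vg_zimbra-lv_zimbra /opt/zimbra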

At this point, the ZCS system is ready to be started. An admin can use a standard 'zmcontrol start' to attempt to bring services up. This will likely fail, but the error message will give an indication of the service experiencing the problem, and the service may be repaired. When the service is able to start cleanly, then the Virtual IP and disk mountpoints may be removed, and the service may be started again under cluster control:

(as zimbra):
zmcontrol stop
(as root):
ip addr delete <cluster_service_virtual_ip> dev <device>
umount <mountpoint>
clusvcadm -e <service_name> -m <host>

Adding the -m option to clusvcadm is not essential, but it is good practice to ensure that the service is coming up on the expected server. If the underlying ZCS problem has been repaired, startup should complete correctly.
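For example, assuming a service named 'zimbra' and a preferred node named mail01.example.com (both hypothetical values), the service would be re-enabled and verified with:

(as root):
clusvcadm -e zimbra -m mail01.example.com
clustat
(confirm the service shows 'started' on the expected member)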

Clustered Service Repeatedly Fails and Restarts

In this situation, the 'clusvcadm -e' command works correctly and the service starts up, but shortly after startup completes, the service fails and the cluster software attempts to fail it over, either to another node or to restart it in place. This happens because the ZCS startup completed normally, but at some point after startup, an essential service crashed. The repair procedure for this problem is the same as for cases in which ZCS does not start at all: the clustered service must be disabled and all mountpoints and Virtual IPs brought online manually. ZCS can then be started by hand and the failing service identified and repaired.
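Once ZCS has been started outside of cluster control, a rough sketch of the diagnosis might look like the following. The exact log to examine depends on which component is failing; the paths shown are the ZCS and syslog defaults and may differ on a given system.

(as zimbra):
zmcontrol status
(identify which ZCS service is stopped or keeps crashing)
(as root):
grep clurgmgrd /var/log/messages
(review the cluster resource manager's status-check failures)
tail -f /opt/zimbra/log/mailbox.log
(watch the mailbox service log while reproducing the failure)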

Verified Against: Zimbra Collaboration 6.0, 5.0
Date Created: 7/21/2009
Article ID: https://wiki.zimbra.com/index.php?title=Cluster_troubleshooting
Date Modified: 2015-07-11


