Ajcody-Clustering
Revision as of 05:01, 17 November 2008
This article is NOT official Zimbra documentation. It is a user contribution and may include unsupported customizations, references, suggestions, or information.
- 1 Clustering Topics
- 1.1 Actual Clustering Topics Homepage
- 1.2 My Other Clustering Pages
- 1.3 Good Summary For RHEL Clustering
- 1.4 Active-Active Clustering
- 1.5 Non-SAN Based Fail-Over HA/Cluster Type Configuration
- 1.6 RFEs/Bugs Related To Supporting Clustering Options
- 1.7 HA-Linux (Heartbeat)
Actual Clustering Topics Homepage
Please see Ajcody-Clustering
My Other Clustering Pages
Good Summary For RHEL Clustering
This is a solid summary of RHEL clustering:
Active-Active Clustering
There is a bug (RFE) open for an active-active configuration. Please see:
Non-SAN Based Fail-Over HA/Cluster Type Configuration
This RFE covers the case where you want a "copy" of the data to reside on an independent server across the LAN/WAN.
- "Disaster recovery through server to server sync (beta)"
RFEs/Bugs Related To Supporting Clustering Options
- "Add VCS cluster support for Suse ES 10"
- SuSE Clustering resources - These might be useful.
- "Cluster Configuration on SLES"
- "Clustering Your Novell Groupwise Servers - Xen & Heartbeat2 on SLES"
- "SuSE paper about HA and Virtual Servers - PDF"
- What SLES has for clustering - Linux Virtual Server
- "Add support for other cluster software - LifeKeeper and MC/ServiceGuard"
- "Add clustering support for Mac OS X"
HA-Linux How-To For Testing And Educational Use
- HA-Linux Project Homepage
- Howto: Highly available Zimbra cluster using Heartbeat and DRBD
- DRBD Homepage
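As a taste of what the DRBD piece of the how-to above involves, a minimal two-node resource definition might look like the sketch below (DRBD 8.x-style syntax). The hostnames, backing partitions, and addresses are invented for illustration, not values from this article or the how-to:

```
# /etc/drbd.conf -- minimal two-node resource (illustrative values)
resource r0 {
    protocol C;                      # fully synchronous replication
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;         # local backing partition
        address   192.168.1.11:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.12:7788;
        meta-disk internal;
    }
}
```

Once both nodes have loaded the resource, one side is promoted to primary and /dev/drbd0 is mounted like an ordinary block device; the secondary receives every write over the network.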
Actual HA-Linux How-To For Testing And Educational Use Homepage
Please see Ajcody-Notes-HA-Linux-How-To
Motive Behind How-To
I hope this gives an administrator with no real-world clustering experience an easy way to work through the concepts. I plan on walking through each "function" behind clustering rather than jumping straight to an end setup (Linux-HA, shared storage, and Zimbra).
The structure will be:
- Set up two machines (physical or virtual).
- Emphasize the physical hostname/IP vs. the hostname and IP address that will be used for HA.
- Set up the virtual hostname and IP address for HA.
- Explain and perform IP failover between the two machines.
- Set up a disk mount; we'll probably use an NFS export from a third machine.
- This will give us an example of expanding the HA conf files to move beyond IP address failover.
- Adjust the HA confs to export a local directory from each server via NFS. This will not be a shared physical disk, of course.
- Set up a shared disk between the two servers and include it in the HA conf files.
- We can use DRBD, or maybe figure out a way to share a virtual disk between the two VMs.
- Set up a very simple application to fail over between the two machines, something like Apache or CUPS.
- Go back and readjust all variables between monitoring-type (automatic) failover and simple manually initiated failover.
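For the NFS step, the third machine's export and the cluster nodes' mount might look like the following; the network, hostnames, and paths are placeholders, not values from the article:

```
# /etc/exports on the third (NFS server) machine:
/export/ha-data  192.168.1.0/24(rw,sync,no_root_squash)

# Apply the export table:
#   exportfs -ra

# On whichever cluster node currently owns the service:
#   mount -t nfs nfs-server:/export/ha-data /data
```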
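Tying the steps together, a minimal Heartbeat (v1-style, ha.cf plus haresources) configuration for the two nodes could look like this sketch; the node names, virtual IP, DRBD resource, mount point, and service are assumptions for illustration:

```
# /etc/ha.d/ha.cf -- basic two-node settings
logfacility local0
keepalive 2            # heartbeat interval, seconds
deadtime 30            # declare the peer dead after 30s of silence
bcast eth0             # heartbeats via broadcast on eth0
auto_failback off      # don't automatically fail back to the old node
node node1 node2

# /etc/ha.d/haresources -- node1 normally owns the virtual IP,
# the DRBD-backed filesystem, and the test service:
node1 192.168.1.100 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 apache
```

The haresources line is the part you grow step by step, exactly as the outline suggests: first just the IP, then the filesystem, then the application.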
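The IP failover step above boils down to moving a virtual address between hosts. A manual sketch, assuming eth0 and a made-up service address of 192.168.1.100 (Heartbeat automates exactly this sequence); these commands need root:

```shell
# On the node taking over the service address (example values):
ip addr add 192.168.1.100/24 dev eth0

# Send gratuitous ARP so switches and neighbors learn the new MAC
# behind the service address:
arping -U -I eth0 -c 3 192.168.1.100

# On the node releasing the address:
ip addr del 192.168.1.100/24 dev eth0
```

Doing this by hand once makes it much clearer what the HA software is actually managing for you.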