{{NotOfficial}}
{{BC|Zeta Alliance}}                         <!-- Note, this will also add [[Category: Zeta Alliance]] to bottom of wiki page. -->
__FORCETOC__                              <!-- Will force a TOC regardless of the size of the article. __NOTOC__ if no TOC is wanted. -->
<div class="col-md-12 ibox-content">
{{WIP}}                                                <!-- For pages that are "work in progress". -->
  
 
====HA-Linux How-To For Testing And Educational Use====
 
References:

* [http://www.linux-ha.org/ HA-Linux Project Homepage]
* [http://greenbeedigital.com.au/content/howto-highly-available-zimbra-cluster-using-heartbeat-and-drbd Howto: Highly available Zimbra cluster using Heartbeat and DRBD]
* [http://www.drbd.org/ DRBD Homepage]
** DRBD is currently unsupported. Related RFEs:
*** "disaster recovery through server to server sync (beta)" - http://bugzilla.zimbra.com/show_bug.cgi?id=11423
*** "add active-active support to zcs" (marked as a duplicate of the above) - http://bugzilla.zimbra.com/show_bug.cgi?id=28150

----
  
 
=====Actual HA-Linux How-To For Testing And Educational Use Homepage=====
 
----
  
 
Please see [[Ajcody-Notes-HA-Linux-How-To]]
 
  
 
=====Motive Behind How-To=====
 
----
  
 
I hope this gives administrators who currently have no clustering experience an easy way to work through the concepts and gain some real-world experience. I plan on walking through each "function" behind clustering rather than jumping straight to an end setup (Linux-HA, shared storage, and Zimbra).

The structure will be:

* Setup two machines (physical or virtual).
** Emphasize the physical hostname / IP address vs. the hostname and IP address that will be used for HA.
* Setup the virtual hostname and IP address for HA.
** Explain and do IP failover between the two machines.
* Setup a disk mount; we'll probably use an nfs export from a third machine.
** This will give us an example of expanding the HA conf files to move beyond the IP address failover.
** Adjust the HA confs to export a local directory from each server via nfs. This will not be a shared physical disk, of course.
* Setup a shared disk between the two servers and include it in the HA conf files.
** Can use drbd, or maybe figure out a way to share a virtual disk between the two VMs.
* Setup a very simple application to include between the two machines. Something like apache or cups.
* Go back and readjust all the variables between monitoring-type (automatic) failover and simple manually-initiated failover.
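To make the steps above concrete, here is a minimal sketch of what the Heartbeat (v1 style) conf files might eventually look like once everything is in place. The node names, interface, virtual IP address, and DRBD device below are hypothetical placeholders for this walk-through, not a tested configuration:

<pre>
# /etc/ha.d/ha.cf (same on both nodes; node names must match `uname -n`)
keepalive 2            # seconds between heartbeats
deadtime 30            # declare the peer dead after 30 seconds of silence
bcast eth0             # send heartbeats via broadcast on eth0
auto_failback on       # move resources back when the preferred node returns
node node1.example.com
node node2.example.com

# /etc/ha.d/haresources (same on both nodes)
# node1 is the preferred owner of the virtual IP, the DRBD-backed
# filesystem, and the apache init script, started left to right.
node1.example.com 192.168.1.100 Filesystem::/dev/drbd0::/data::ext3 apache
</pre>

Heartbeat also needs a matching /etc/ha.d/authkeys file (mode 600) on both nodes. The idea of the walk-through is to start with only the virtual IP in haresources, then append the Filesystem and apache entries as the later steps add those pieces.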
----
  
 
[[Category: Community Sandbox]]
 
[[Category:Cluster]]
[[Category: Author:Ajcody]]
[[Category: Zeta Alliance]]
</div>

Latest revision as of 01:07, 21 June 2016

