HA-Linux How-To For Testing And Educational Use
Actual HA-Linux How-To For Testing And Educational Use Homepage
Please see Ajcody-Notes-HA-Linux-How-To
Motive Behind How-To
I hope this gives an administrator with no current clustering experience an easy way to step through some clustering concepts and gain some real-world experience. I plan on walking through each "function" behind clustering rather than jumping straight to a finished setup (Linux-HA, shared storage, and Zimbra).
The structure will be as follows (rough configuration sketches for each step follow the list):
- Set up two machines (physical or virtual).
  - Emphasize each machine's physical hostname/IP vs. the virtual hostname and IP address that will be used for HA.
- Set up the virtual hostname and IP address for HA.
  - Explain and perform IP failover between the two machines.
- Set up a disk mount; we'll probably use an NFS export from a third machine.
  - This gives us an example of expanding the HA conf files beyond plain IP address failover.
  - Adjust the HA confs to export a local directory from each server via NFS. This will not be a shared physical disk, of course.
- Set up a shared disk between the two servers and include it in the HA conf files.
  - We can use DRBD, or maybe figure out a way to share a virtual disk between the two VMs.
- Set up a very simple application to fail over between the two machines, something like Apache or CUPS.
- Go back and readjust the configuration to compare monitored (automatic) failover with simple, manually initiated failover.
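To make the physical vs. HA naming distinction concrete, here is a minimal /etc/hosts sketch for the two test machines. All hostnames and addresses are made-up examples for illustration, not values defined anywhere in this how-to; put the same entries on both machines.

 # Physical (permanent) identities of the two cluster members
 192.168.1.11   node1.example.com   node1
 192.168.1.12   node2.example.com   node2
 # Virtual (floating) identity that will follow the HA resources around
 192.168.1.100  ha.example.com      ha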
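For the IP failover step, a minimal Heartbeat (Linux-HA, v1 haresources style) configuration might look like the sketch below. The node names, interface, and addresses are assumptions carried over from the /etc/hosts example above; the same three files live in /etc/ha.d/ on both machines, and authkeys must be chmod 600.

 # /etc/ha.d/ha.cf -- cluster membership and heartbeat timing
 logfile /var/log/ha-log
 keepalive 2            # seconds between heartbeat packets
 deadtime 30            # declare the peer dead after 30s of silence
 bcast eth0             # send heartbeats over eth0
 auto_failback on       # resources return to node1 when it recovers
 node node1.example.com
 node node2.example.com

 # /etc/ha.d/authkeys -- shared secret for the heartbeat channel
 auth 1
 1 sha1 SomeSharedSecret

 # /etc/ha.d/haresources -- node1 is the preferred owner of the floating IP
 node1.example.com IPaddr::192.168.1.100/24/eth0

With heartbeat started on both nodes, stopping it on node1 (or pulling the plug) should move 192.168.1.100 over to node2.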
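For the NFS steps, the haresources line simply grows: the Filesystem resource script can mount an NFS export from the third machine alongside the floating IP, and a local NFS export can be handled by starting the NFS init script as a cluster resource. The server name nfsserver.example.com, the paths, and the Debian-style nfs-kernel-server script name are assumptions; adjust to your distribution.

 # /etc/ha.d/haresources, variant A: mount an export from the third machine
 node1.example.com IPaddr::192.168.1.100/24/eth0 Filesystem::nfsserver.example.com:/export/data::/mnt/data::nfs

 # /etc/ha.d/haresources, variant B: each server exports a local directory
 # (listed in its own /etc/exports) -- not shared physical storage
 node1.example.com IPaddr::192.168.1.100/24/eth0 nfs-kernel-server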
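For the shared-disk step with DRBD, a minimal resource definition plus the matching haresources line might look like this. The resource name r0, the backing partition /dev/sdb1, and the replication addresses are assumptions; DRBD needs a spare partition of the same size on both nodes.

 # /etc/drbd.conf (or /etc/drbd.d/r0.res) -- one replicated resource
 resource r0 {
   protocol C;                     # synchronous replication
   on node1.example.com {
     device    /dev/drbd0;
     disk      /dev/sdb1;
     address   192.168.1.11:7788;
     meta-disk internal;
   }
   on node2.example.com {
     device    /dev/drbd0;
     disk      /dev/sdb1;
     address   192.168.1.12:7788;
     meta-disk internal;
   }
 }

 # /etc/ha.d/haresources -- promote the DRBD device, mount it, then add the IP
 node1.example.com drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 IPaddr::192.168.1.100/24/eth0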
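For the simple application step, Heartbeat's v1 haresources can start any init script (or resource script in /etc/ha.d/resource.d/) listed after the disk and IP resources. Assuming a Debian-style apache2 init script (httpd on Red Hat style systems):

 # /etc/ha.d/haresources -- disk, then IP, then the service itself
 node1.example.com drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 IPaddr::192.168.1.100/24/eth0 apache2

Remember to remove apache2 from the normal runlevels (update-rc.d / chkconfig) so that only Heartbeat starts and stops it.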
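For the last step, the main knobs are the auto_failback setting and Heartbeat's helper scripts for manually initiated failover. The install path of the helpers varies by distribution (often /usr/share/heartbeat/ or /usr/lib/heartbeat/), and note that plain v1 Heartbeat only reacts to a node dying; monitoring an individual service and failing over on its failure needs something extra (for example mon, or Heartbeat 2's CRM mode).

 # /etc/ha.d/ha.cf -- failback behaviour
 auto_failback off                      # "on" returns resources automatically

 # Manually initiated failover, run on the node that currently holds the resources
 /usr/share/heartbeat/hb_standby        # hand the resources over to the peer
 /usr/share/heartbeat/hb_takeover       # pull the resources back to this node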