Johnm wrote: Well,
We were looking at the HA Cluster, but due to its requirements I am now looking at the LVS option as a temporary solution. I am sure we will eventually move to the HA Cluster once I have more time to figure out the logistics of the quorum device. It will have to be a device with a USB connection because of how the servers are configured. I am right there with you on the quorum thing; I had to do some reading to learn and understand it. You are correct, there can be no other data on the quorum partition.
I think having a separate device will be the way to go for the quorum, although I am also considering an NBD (Network Block Device) as a solution. I am concerned about its stability and reliability, though. It appears that NBD could stand in for the necessary partition, but the install looks to be a bit dicey.
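If you do end up experimenting with the NBD route, the plumbing itself is small. A rough sketch, assuming the classic nbd-server port/file invocation; the partition path, port, and host name here are placeholders, not a tested quorum setup:

```shell
# On the box exporting the raw quorum partition
# (2000 is an arbitrary port, /dev/sdb3 is an example partition):
nbd-server 2000 /dev/sdb3

# On each cluster member, attach the export as a local block device:
modprobe nbd
nbd-client quorumhost 2000 /dev/nbd0

# /dev/nbd0 can then be offered to the cluster software as if it
# were a local quorum partition -- which is exactly the part I'd
# want to soak-test before trusting it.
```

The stability worry is real: if the network or the exporting box hiccups, every member loses its quorum device at once.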
Any hints or tips on LVS?
John M
<rant mode>
I believe Red Hat's decision to treat these two (LVS and HA) as separate things is ridiculous and confusing.
HA is the state of being able to ensure availability of a given service regardless of the state of any one box in the cluster. Therefore, an LVS setup where each service has more than one box is also "Highly Available".
Now if the service we are talking about is transaction processing for a POS-type system, and you have more than one box doing the processing, is the service still available if one of those boxes takes one in the gut? This is the setup we have now. One service is transaction processing and the other is web-based administration. We have multiple firewalls and two routers, multiple transaction and web servers, and of course more than one box for the db.
Our setup is highly available, but it's an LVS-type cluster.
I don't know what Red Hat is getting at.
</rant mode>
With our setup, each node on the network is pinged at a regular interval by the cluster managers. Also, something called an "application stability agent" checks the servers (as in the server software) at a regular interval. It doesn't use a quorum or anything like that.
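The "application stability agent" idea is basically a service-level check rather than a bare ping: it talks to the server software, not just the box. A minimal sketch in Python of that kind of check, assuming a plain TCP connect to the service port is a good enough liveness test (the hosts and ports are placeholders):

```python
import socket

def service_up(host, port, timeout=1.0):
    """Try a TCP connect to the service port; True only if it accepts.

    A box that answers ping but whose server process is dead fails this
    check, which is the point of checking at the application level.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# One polling pass over the nodes; a real agent would loop on an
# interval and report failures to the cluster manager.
nodes = [("127.0.0.1", 80), ("127.0.0.1", 8080)]
status = {(host, port): service_up(host, port) for (host, port) in nodes}
```

A fancier agent would speak the actual protocol (an HTTP GET, a test transaction), but the structure is the same: poll, compare against expected behavior, pull the node out of rotation on failure.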
From the outside world, all of our services sit behind one IP address. From the firewalls, traffic is routed by port to a virtual IP at the cluster manager, and the cluster manager then routes each packet to the correct server. In this way, with an LVS setup, someone hitting your IP at port xxxx will reach the service you're providing there. If you are also providing a web service, then your IP at port 80 will do the trick. Ultimately, it's all behind one IP address.
Now those individual services can be load-balanced clusters, with the load balancing handled by the cluster managers. The cluster manager knows when a box has been removed from the cluster, but the service itself keeps running on the remaining boxes. We have a transaction cluster and a web cluster behind our cluster managers.
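That port-to-virtual-IP routing is what ipvsadm configures on the director. A rough sketch with example addresses (the VIP and real-server IPs are placeholders, and I'm assuming NAT forwarding and round-robin scheduling here):

```shell
# Define the virtual service: TCP on the VIP, port 80, round-robin
ipvsadm -A -t 192.168.0.10:80 -s rr

# Put two real web servers behind it (-m = masquerading/NAT)
ipvsadm -a -t 192.168.0.10:80 -r 192.168.0.21:80 -m
ipvsadm -a -t 192.168.0.10:80 -r 192.168.0.22:80 -m

# When a box is pulled from the cluster, remove just that real server;
# the virtual service stays up on the remaining box
ipvsadm -d -t 192.168.0.10:80 -r 192.168.0.21:80
```

You'd have a similar set of rules per service port, which is how everything ends up behind the one outside IP.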
That's really it in a nutshell. Tomorrow morning I'll put together a kind of text diagram for you. For now, I've got to finish testing some stuff and then head home.
Cheers,
BDKR