Redhat Advanced Server Cluster

Ye' old general discussion board. Basically, for everything that isn't covered elsewhere. Come here to shoot the breeze, shoot your mouth off, or whatever suits your fancy.
This forum is not for asking programming related questions.

Moderator: General Moderators

Post Reply
User avatar
Johnm
Forum Contributor
Posts: 344
Joined: Mon May 13, 2002 12:05 pm
Location: Michigan, USA
Contact:

Redhat Advanced Server Cluster

Post by Johnm »

Hi all,
Anyone out there have any experience clustering Redhat 2.1 Advanced Servers?

Direwolf
User avatar
BDKR
DevNet Resident
Posts: 1207
Joined: Sat Jun 08, 2002 1:24 pm
Location: Florida
Contact:

Post by BDKR »

Hey,

I'm using Turbo Cluster 6 on Red Hat 6.2. A very good product.

Give us some details on what you're doing. I always like to hear what people are doing
with clusters.

Cheers,
BDKR
User avatar
Johnm
Forum Contributor
Posts: 344
Joined: Mon May 13, 2002 12:05 pm
Location: Michigan, USA
Contact:

Post by Johnm »

More on what I am TRYING to do so far: we just bought two Pogo-Linux machines with Red Hat Advanced Server installed, and I am wading my way through the muck trying to get them to work. Both are successfully on the network, and I am now working on setting up the quorum, which has proved to be a bigger task than anticipated. Our NetApp won't work because we would have to reach it over the network, and as soon as you access the quorum's raw storage via NFS, that storage is no longer truly raw; the cluster requires that there be no file system on the quorum. I think we may have to buy a separate device just to set up the quorum properly. That stinks, because it only needs to be about 10 megs in size.

Am I making sense? I am very new to this.
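For reference, Red Hat Cluster Manager expects the quorum partitions to be bound as raw character devices on each cluster member, with no file system on them. A rough sketch of that binding follows; the partition names are placeholders, and your actual devices will differ.

```shell
# Bind two small partitions (~10 MB each) as the primary and shadow
# quorum partitions on each cluster member. /dev/sda2 and /dev/sda3
# below are placeholders -- use the partitions that exist on your boxes.
cat >> /etc/sysconfig/rawdevices <<'EOF'
/dev/raw/raw1  /dev/sda2
/dev/raw/raw2  /dev/sda3
EOF

service rawdevices restart   # apply the raw-device bindings
raw -qa                      # query and verify the current bindings
```

This is a configuration sketch, not something to paste verbatim; the key point is that the quorum storage is accessed through the raw device layer, never through a file system.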

Direwolf
User avatar
BDKR
DevNet Resident
Posts: 1207
Joined: Sat Jun 08, 2002 1:24 pm
Location: Florida
Contact:

Post by BDKR »

Hey Johnm,

Wow! This term "quorum" is new to me. I'm looking at some of the Red Hat documentation now. One thing is for sure: they made it a lot more difficult than it has to be.

Anyways, searching on Google, the text associated with one of the links is...
Quorum partitions are small raw devices used by each node
in Red Hat Cluster Manager to check the health of the other node. ...
This seems to imply that those quorum partitions are not to have any other data on them.

A question: is this an LVS or a High Availability cluster? What we are using is what Red Hat would call an LVS cluster, but it's my opinion that HA is the favorable result if you are using more than one machine per service.

Let me know.

Cheers,
BDKR
User avatar
Johnm
Forum Contributor
Posts: 344
Joined: Mon May 13, 2002 12:05 pm
Location: Michigan, USA
Contact:

Post by Johnm »

Well,
We were looking at the HA cluster, but due to its requirements I am now looking at the LVS option as a temporary solution. I am sure we will eventually move to the HA cluster once I have more time to figure out the logistics of the quorum device; it will have to be a USB-attached device due to the configuration of the servers. I am right there with you on the quorum thing. I had to do some reading to learn and understand it. You are correct: there can be no other data on the quorum partition.
I think a separate device will be the way to go for creating the quorum, although I am also considering an NBD (Network Block Device) as a solution; I am concerned about its stability and reliability, though. It appears that NBD could stand in as the necessary partition, but the install looks to be a bit dicey.
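If you do experiment with NBD, the rough shape is something like the following. The hostname, port, and file path are made-up placeholders, and older nbd-client versions used `/dev/nb0` rather than `/dev/nbd0`, so check your version's docs.

```shell
# On the machine exporting the storage: serve a small backing file as a
# block device over TCP. 'quorum.img' and port 2000 are placeholders.
dd if=/dev/zero of=/var/quorum.img bs=1M count=10   # ~10 MB backing file
nbd-server 2000 /var/quorum.img

# On each cluster node: attach the export as a local block device.
# 'storagehost' is a placeholder for the exporting machine.
nbd-client storagehost 2000 /dev/nb0
```

That said, the stability concern is well founded: this puts the quorum's availability back on the network, which somewhat undercuts its role as a tie-breaker.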
Any hints or tips on LVS?

John M
User avatar
BDKR
DevNet Resident
Posts: 1207
Joined: Sat Jun 08, 2002 1:24 pm
Location: Florida
Contact:

Post by BDKR »

Johnm wrote:Well,
We were looking at the HA cluster, but due to its requirements I am now looking at the LVS option as a temporary solution. I am sure we will eventually move to the HA cluster once I have more time to figure out the logistics of the quorum device; it will have to be a USB-attached device due to the configuration of the servers. I am right there with you on the quorum thing. I had to do some reading to learn and understand it. You are correct: there can be no other data on the quorum partition.
I think a separate device will be the way to go for creating the quorum, although I am also considering an NBD (Network Block Device) as a solution; I am concerned about its stability and reliability, though. It appears that NBD could stand in as the necessary partition, but the install looks to be a bit dicey.
Any hints or tips on LVS?

John M
<rant mode>
I believe that Red Hat's decision to treat these (LVS and HA) as two separate things is ridiculous and confusing.

HA is the state of being able to ensure availability of a given service regardless of the state of any one box in the cluster. Therefore, an LVS setup where each service has more than one box is also "Highly Available".

Now if the service we are talking about is transaction processing for a POS-type system, and you have more than one box doing the processing, is the service still available if one of those boxes takes one in the gut? That's the setup we have now: one service is transaction processing and the other is web-based administration. We have multiple firewalls and two routers, multiple transaction and web servers, and of course more than one box for the db.

Our setup is highly available, but it's an LVS-type cluster.

I don't know what Red Hat is getting at.
</rant mode>

With our setup, each node on the network is pinged at a regular interval by the cluster managers. Also, something called an "application stability agent" checks the servers (as in the server software) at a regular interval. It doesn't use a quorum or anything like that.

From the outside world, all of our services sit behind one IP address. From the firewalls, traffic is routed based on port to a virtual IP at the cluster manager, which then routes each packet to the correct server. This way, with an LVS setup, someone hitting your IP at port xxxx gets the service you're providing there; if you are also providing a web service, then your IP at port 80 will do the trick. Ultimately, it's all behind one IP address.

Now those individual services can be load-balanced clusters, with the load balancing handled by the cluster managers. The cluster manager knows when a box is removed from the cluster, but the service itself keeps running. We have a transaction cluster and a web cluster behind our cluster managers.
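That port-based routing and load balancing can be sketched with LVS's `ipvsadm` tool. The virtual IP and real-server addresses below are placeholders for illustration, not anyone's actual setup.

```shell
# Define a virtual web service on the cluster manager's virtual IP,
# balancing round-robin ("-s rr") across two real web servers via
# NAT/masquerading ("-m"). All addresses here are placeholders.
ipvsadm -A -t 192.168.0.100:80 -s rr
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -m

ipvsadm -L -n   # list virtual services and their real servers
```

A second service (say, transaction processing on another port) would get its own `-A` virtual service on the same VIP, which is how everything ends up behind one outward-facing address.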

That's really it in a nutshell. Tomorrow morning I'll do a kind of text diagram for you. For now, I've got to finish testing some stuff and then head home.

Cheers,
BDKR
Post Reply