Clustering comes in extremely useful when you're working with large databases under heavy server load. You can either use MySQL replication across all servers (effectively a constantly running backup), or you can use the NDB storage engine to build a cluster farm: each server acts as a "node" in the cluster, and a management server dishes out requests to each node so no individual server has to carry the whole load. NDB also keeps its data in memory.
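Just as a sketch of what that looks like in practice (the hostnames and paths here are made up, and a real setup needs more tuning), the management node's config for a small NDB cluster might be along these lines:

```ini
# config.ini on the management node -- hypothetical hosts/paths, illustration only
[ndbd default]
NoOfReplicas=2              ; each data fragment is held by two data nodes
DataMemory=512M             ; NDB stores data in RAM, so size this to your dataset

[ndb_mgmd]
HostName=mgmt.example.com   ; the "parent" server that manages the cluster

[ndbd]
HostName=node1.example.com  ; first data node
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=node2.example.com  ; second data node
DataDir=/var/lib/mysql-cluster

[mysqld]
HostName=sql1.example.com   ; SQL node that clients actually connect to
```

Tables then opt in with `ENGINE=NDB` (or `NDBCLUSTER`) in their `CREATE TABLE`, and the SQL node transparently spreads them across the data nodes.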
I'm not sure you can do it very easily with Windows, but if you have access to more than one Linux server I'd strongly recommend playing around with clustering if you have an interest in that sort of thing... it comes in very, very useful on large-scale applications/websites.
Last edited by Chris Corbyn on Sun Mar 05, 2006 1:28 pm, edited 1 time in total.
The real question is how all those available CPU cycles will actually be used... if you distribute processes, you also need cycles to handle all the management overhead that arises. If I'm not mistaken you'll find a lot of good info in the openMosix FAQ...
nickman013 wrote: That's cool. So if I hooked up, like, 100 computers with 2.0 GHz processors, would that mean that the main computer would have 200 GHz?
Well... not exactly. There's the overhead of the network transactions, and that's not really the point anyway. It's not about increasing raw processing power; it's about reducing the load on any one server. It's also a way to provide some redundancy and failover... if one of the servers goes down, the cluster just keeps running without it.
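A quick back-of-the-envelope model shows why 100 nodes don't add up to 100x the speed. This is a hypothetical sketch (the fractions and the per-node overhead figure are made-up illustration values, not measurements): Amdahl's law with an extra term charging a small coordination cost for every node you add.

```python
def effective_speedup(nodes, parallel_fraction=0.9, overhead_per_node=0.002):
    """Speedup over a single node for a job that is only partly
    parallelisable, where every extra node adds a little network
    and management overhead. Illustrative model, not a benchmark."""
    serial = 1.0 - parallel_fraction          # part that can't be distributed
    parallel = parallel_fraction / nodes      # part that shrinks with more nodes
    overhead = overhead_per_node * nodes      # coordination cost grows with nodes
    return 1.0 / (serial + parallel + overhead)

for n in (1, 10, 100):
    print(f"{n:3d} nodes -> {effective_speedup(n):.2f}x")
```

With these toy numbers, 100 nodes actually come out *slower* than 10, because the overhead term eventually outgrows the parallel gains; that's the "cycles to take care of the management overhead" point in a nutshell.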