public marks

PUBLIC MARKS from camel with tag cluster

December 2008

severalnines.com

CMON is a daemon that aggregates information from MySQL Cluster that was previously accessible only from the cluster log or the management client, such as:

* cluster state
* node state
* backup statistics
* statistics
* cluster events (basically the cluster log)

It lets you access the information using SQL, because CMON logs the information into ordinary MyISAM tables, so it is really easy to use. The package also includes PHP scripts that you can put on your web server to generate graphs and get a web interface to CMON. CMON can also start ndbd nodes and decide how they should be started (with or without --initial). CMON starts as a daemon and automatically creates the cmon database and installs the necessary tables.
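Because the data lands in ordinary tables, any MySQL client can read the cluster state. A minimal sketch in Python, assuming the pymysql library and a hypothetical cluster_state table; check the tables CMON actually creates in the cmon schema:

    # Minimal sketch: read CMON's status tables with an ordinary MySQL
    # client library. Table and column names below are assumptions for
    # illustration, not CMON's documented schema.
    import pymysql

    conn = pymysql.connect(host="127.0.0.1", user="cmon",
                           password="secret", database="cmon")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM cluster_state")  # hypothetical table
            for row in cur.fetchall():
                print(row)
    finally:
        conn.close()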

November 2008

xen:live-migration_infrastructure [docs]

In order to be able to do a live migration of a Xen guest from one cluster member to another, some sort of shared storage is required. As the Xen guest won’t run on more than one cluster member at a time, a cluster filesystem is not required. That is, as long as you configure Xen to access the Xen guest by a physical device, not a file.

Tutoriel mysql-proxy rw-splitting Réplication MySQL « Sangokode

When a dynamic website starts getting significant traffic, the usual first step is to multiply the web servers that host the files. Multiplying web servers is not the hard part: you just replicate the data on each update of your site and multiply the subdomains. Things get complicated when you want several database servers. The database we study here is MySQL. We will not cover MySQL Cluster in this article. The goal of the article is to show how to split one MySQL server into three distinct servers: one master and two slaves, with a single entry point (the MySQL proxy). The proxy must send read queries to the slave servers and all other queries to the write server.
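The routing rule the proxy implements is easy to state: plain reads can go to any slave, everything else must reach the master. The tutorial does this with mysql-proxy's rw-splitting script; the sketch below restates the decision logic in Python, with made-up host names, just to make the rule concrete:

    import random

    MASTER = "db-master.example.com"   # hypothetical hosts
    SLAVES = ["db-slave1.example.com", "db-slave2.example.com"]

    def route(query: str) -> str:
        """Send plain SELECTs to a slave, everything else to the master."""
        q = query.lstrip().lower()
        # Writes, transactions, and SELECT ... FOR UPDATE must see the master.
        if q.startswith("select") and "for update" not in q:
            return random.choice(SLAVES)
        return MASTER

    print(route("SELECT * FROM users"))        # one of the slaves
    print(route("UPDATE users SET name='x'"))  # the master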

Linux.com :: Parallel SSH execution and a single shell to control them all

Many people use SSH to log in to remote machines, copy files around, and perform general system administration. If you want to increase your productivity with SSH, you can try a tool that lets you run commands on more than one remote machine at the same time. Parallel ssh, Cluster SSH, and ClusterIt let you specify commands in a single terminal window and send them to a collection of remote machines where they can be executed.

Why would you need a utility like this when, using OpenSSH, you can create a file containing your commands and use a bash for loop to run it on a list of remote hosts, one at a time? One advantage of a parallel SSH utility is that commands can be run on several hosts at the same time. For a short-running task this might not matter much, but if a task needs an hour to complete and you need to run it on 20 hosts, parallel execution beats serial by a mile. Also, if you want to interactively edit the same file on multiple machines, it might be quicker to use a parallel SSH utility and edit the file on all nodes with vi rather than concoct a script to do the same edit.

Many of these parallel SSH tools include support for copying to many hosts at once (a parallel version of scp) or using rsync on a collection of hosts at once. Because the parallel SSH implementations know about all the hosts in a group, some of them also offer the ability to execute a command "on one host" and will work out which host to pick using load balancing. Finally, some parallel SSH projects let you use barriers so that you can execute a collection of commands and explicitly have each node in the group wait until all the nodes have completed a stage before moving on to the next stage of processing.
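The core trick behind all three tools can be sketched in a few lines: fan the same command out over plain ssh from a thread pool. A rough illustration (assumes key-based authentication is already set up; host names are placeholders):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["node01", "node02", "node03"]  # placeholder host names

    def run(host, command):
        # BatchMode makes ssh fail instead of prompting for a password.
        proc = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host, command],
            capture_output=True, text=True, timeout=60,
        )
        return host, proc.returncode, proc.stdout.strip()

    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        for host, rc, out in pool.map(lambda h: run(h, "uptime"), HOSTS):
            print(f"{host} (exit {rc}): {out}")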


September 2008

XEN Cluster HowTo

I have tried to run both Debian Etch and Ubuntu 8.04 Server on the cluster nodes, in Dom0. I started my tests with Debian, but I had some issues with slow Samba performance in one VM that I couldn't fix, so I decided to try Ubuntu Server for the first time. Both installations went OK; the main difference was that I built mainly from source code on Debian but used only packages on Ubuntu. I actually ran into more problems with Ubuntu due to some early bugs in the 8.04 release; I will describe them below as I go along. And I have yet to prove that running this setup on Ubuntu is stable.

August 2008

smtp-delay plug-in for qmail

smtp-delay is an add-on/plug-in intended for use with qmail. It was written primarily to add banner delays and anti-pipelining to qmail. These two features are known to block certain types of spam and virus mail sent through non-RFC-compliant SMTP engines. When I looked around for programs to add this functionality to qmail, I found only one such program, and didn't like the way it was done. BTW, I have the same objections to the way it's done in sendmail 8.13.x.

Since banner delays (the server pausing for some time before issuing an SMTP banner) cause every SMTP connection to take longer, I thought it would be a good idea to somehow exempt "legitimate" mail servers, or at least not subject them to long banner delays. So I decided to tune the banner delay time based on the connecting IP's reverse DNS. IPs with no rDNS get treated the worst (longest banner delay). IPs with rDNS matching a regex intended to detect dynamic/end-user IPs get a moderate delay. All other IPs get a very short banner delay, just long enough to see if they immediately pipeline (send SMTP commands before the banner has been sent).

The original intent for smtp-delay was that it should run before rblsmtpd and simply set the RBLSMTPD environment variable if applicable, letting rblsmtpd issue the 4xx response. Pretty early on, I realized smtp-delay should be able to run standalone (without depending on rblsmtpd to do its talking) and issue a 4xx response on its own. Lately, the spam load against our mail cluster has gotten so bad that I've started running smtp-delay after rblsmtpd, based on the idea that there's no point waiting out a long banner delay holding an open socket to an IP we have no intention of accepting mail from anyway. This reduced our concurrency by about 20%.
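The tiered policy is simple to restate: no rDNS gets the longest wait, dynamic-looking rDNS a moderate one, everything else a token delay. A sketch of that tiering in Python; the regex and the timings are illustrative guesses, not smtp-delay's actual values:

    import re
    import socket

    # Crude pattern for dynamic/end-user reverse DNS; illustrative only.
    DYNAMIC_RE = re.compile(r"(dsl|dial|dyn|ppp|cable|pool)", re.IGNORECASE)

    def banner_delay(ip: str) -> float:
        try:
            rdns = socket.gethostbyaddr(ip)[0]
        except socket.herror:
            return 60.0      # no rDNS: treated the worst
        if DYNAMIC_RE.search(rdns):
            return 20.0      # looks like a dynamic/end-user IP
        return 2.0           # just long enough to catch immediate pipelining

    print(banner_delay("192.0.2.10"))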

July 2008

OpenNebula :: about

OpenNebula transforms a physical cluster into a flexible virtual infrastructure which dynamically adapts to the changing demands of a service workload. OpenNebula leverages existing virtualization platforms to create a new virtualization layer between the service and the physical infrastructure. This new layer supports the execution of the services on a physical cluster, extending the benefits of VMMs (Virtual Machine Monitors) from a single physical resource to a cluster of resources. OpenNebula effectively decouples a server (deployed as a pre-configured VM) not only from the physical infrastructure but also from its physical location.

Cool Solutions: Configuring a Xen VM for Live Migration within a Cluster

By default, migrating a Xen Virtual Machine (VM) resource causes it to shut down on the current node and restart on the new one. Once you configure a Xen VM within the High Availability Storage Infrastructure (HASI), how do you configure the Xen VM resource to live migrate among the cluster nodes?
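Whatever resource agent sits on top, the underlying operation is Xen's own live migration. A minimal sketch driving it with the classic xm tool; the domain and host names are placeholders, and xend's relocation server must be enabled on the target node:

    import subprocess

    def live_migrate(domain: str, target_host: str) -> None:
        # Requires shared storage reachable from both nodes and the
        # xend relocation server listening on the target.
        subprocess.run(["xm", "migrate", "--live", domain, target_host],
                       check=True)

    live_migrate("vm01", "node2")  # placeholder names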

June 2008

How To Set Up A Load-Balanced MySQL Cluster With MySQL 5.1 | HowtoForge - Linux Howtos and Tutorials

This tutorial is based on Falko Timme's tutorial for MySQL Cluster 5.0. It shows how to configure a MySQL 5.1 cluster with five nodes: one management node, two storage nodes, and two load balancer nodes. The cluster is load balanced by Ultra Monkey, a package which provides heartbeat (for checking if the other node is still alive) and ldirectord (to split up the requests to the nodes of the MySQL cluster). In this document I use Debian Etch 4.0 for all nodes, so the setup might differ a bit for other distributions. The two data nodes are x64 in order to use all of the 8 GB of RAM. The servers were compiled from source, so you should be able to get this running on any platform. The MySQL version I use in this setup is 5.1.24-rc. It's a release candidate, but I wanted to use 5.1 to take advantage of disk-based tables: beginning with MySQL 5.1.6, it is possible to store the non-indexed columns of NDB tables on disk, rather than in RAM as with previous versions of MySQL Cluster.
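For the disk-based tables mentioned above, NDB wants a log file group and a tablespace before a table can declare STORAGE DISK. A hedged sketch of that DDL, sent through a MySQL client library from Python; the object names and sizes are illustrative, not the tutorial's:

    import pymysql

    DDL = [
        "CREATE LOGFILE GROUP lg1 ADD UNDOFILE 'undo1.log'"
        " INITIAL_SIZE 64M ENGINE NDBCLUSTER",
        "CREATE TABLESPACE ts1 ADD DATAFILE 'data1.dat'"
        " USE LOGFILE GROUP lg1 INITIAL_SIZE 256M ENGINE NDBCLUSTER",
        # Non-indexed columns of this table live on disk, not in RAM.
        "CREATE TABLE t (id INT PRIMARY KEY, payload VARCHAR(255))"
        " TABLESPACE ts1 STORAGE DISK ENGINE NDBCLUSTER",
    ]

    conn = pymysql.connect(host="127.0.0.1", user="root",
                           password="secret", database="test")
    with conn.cursor() as cur:
        for stmt in DDL:
            cur.execute(stmt)
    conn.close()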

How To Set Up A Loadbalanced High-Availability Apache Cluster Based On Ubuntu 8.04 LTS | HowtoForge - Linux Howtos and Tutorials

This tutorial shows how to set up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "Single Point Of Failure", we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat; if one load balancer fails, the other takes over silently.
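The failover idea boils down to: each balancer watches its peer, and on silence claims the shared virtual IP. A toy sketch of that loop; the real tutorial uses the heartbeat daemon and ldirectord, and the addresses and interface below are placeholders:

    import subprocess
    import time

    PEER = "10.0.0.2"     # the other load balancer (placeholder)
    VIP = "10.0.0.100"    # shared virtual IP (placeholder)

    def peer_alive() -> bool:
        return subprocess.run(["ping", "-c", "1", "-W", "2", PEER],
                              capture_output=True).returncode == 0

    while True:
        if not peer_alive():
            # Claim the virtual IP (iproute2; requires root).
            subprocess.run(["ip", "addr", "add", f"{VIP}/24", "dev", "eth0"])
            break
        time.sleep(5)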

April 2008

Vdoop - Manage your virtual cluster - Vdoop

Every day, search companies like Google download terabytes of data from the Internet, store it on clusters of thousands of machines, and process it so that it can be easily searched. To make this possible, these companies need sophisticated distributed file systems and parallel programming architectures. Have you ever heard of the Map/Reduce distributed parallel programming paradigm? If you are a computer scientist, you should have, because every time you submit a Google search, you are using Map/Reduce. Despite growing demand from companies like Google, Yahoo, and Microsoft, few computer science majors have even heard of Map/Reduce, let alone graduate well versed in its use. Unfortunately, several barriers exist to integrating Map/Reduce into computer science curricula. Obtaining a large cluster, configuring it, and installing complicated distributed file system and parallel programming software is difficult, time-consuming, and expensive. In the past, Google's solution to this problem has been to ship entire clusters pre-configured with Map/Reduce software to select universities. In essence, Vdoop does the same thing, with exactly the same software, except that our clusters are virtual, and hence free.

scalr - Google Code

Scalr is a fully redundant, self-curing and self-scaling hosting environment built on Amazon's EC2. It allows you to create server farms through a web-based interface using prebuilt AMIs for load balancers (pound or nginx), app servers (Apache, others), databases (MySQL master-slave, others), and a generic AMI to build on top of. The health of the farm is continuously monitored and maintained. When the load average on a type of node goes above a configurable threshold, a new node is inserted into the farm to spread the load and the cluster is reconfigured. When a node crashes, a new machine of that type is inserted into the farm to replace it. Four AMIs are provided: load balancers, MySQL databases, application servers, and a generic base image to customize. Scalr allows you to further customize each image, bundle it, and use it for future nodes inserted into the farm. You can make changes to one machine and use that for a specific type of node; new machines of this type will be brought online to meet current levels, and the old machines are terminated one by one. The project is still very young, but we're hoping that by open sourcing it the AWS development community can turn this into a robust hosting platform and give users an alternative to the current fee-based services available.
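The scaling rule described above fits in a few lines: replace dead nodes, and add a node to any role whose average load crosses the threshold. A sketch with a hypothetical launch_node standing in for the EC2 API call:

    LOAD_THRESHOLD = 4.0  # illustrative; Scalr's threshold is configurable

    def launch_node(role):
        # Hypothetical stand-in for an EC2 RunInstances call.
        return {"role": role, "alive": True, "load": 0.0}

    def rebalance(nodes):
        # Replace crashed nodes with fresh ones of the same role.
        for node in list(nodes):
            if not node["alive"]:
                nodes.remove(node)
                nodes.append(launch_node(node["role"]))
        # Add a node wherever the average load crosses the threshold.
        for role in {n["role"] for n in nodes}:
            members = [n for n in nodes if n["role"] == role]
            if sum(n["load"] for n in members) / len(members) > LOAD_THRESHOLD:
                nodes.append(launch_node(role))

    farm = [{"role": "app", "alive": True, "load": 6.5}]
    rebalance(farm)
    print(len(farm))  # 2: a node was added to spread the load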

Loadbalanced High-Availability Apache Cluster Using Ultramonkey

How to set up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "Single Point Of Failure", we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat; if one load balancer fails, the other takes over silently.

March 2008

Xen-AoE — xenaoe.org

Xen-AoE is a cluster server architecture that makes more efficient use of CPU, disk, and memory resources than traditional servers while providing no single point of failure, ensuring high availability and greater server maintainability. It does this through server virtualization, use of SAN technologies, decoupling of disk from CPU, and taking advantage of commodity hardware such as x86 CPUs and readily available gigabit Ethernet infrastructure. Xen-AoE can save your business money by better utilizing hardware resources, simplifying management, and decreasing losses due to downtime.

Le travail à la chaîne, c'est has been ! - Linux attitude

Here are a few tools for working efficiently on a group of machines.

February 2008

Heartbeat2 Xen cluster with drbd8 and OCFS2

The idea behind the whole setup is to get a high-availability two-node cluster with redundant data. The two identical servers are installed with the Xen hypervisor and almost identical configurations as cluster nodes. The configuration and image files of the Xen virtual machines are stored on a DRBD device for redundancy. DRBD8 and OCFS2 allow simultaneous mounting on both nodes, which is required for live migration of Xen virtual machines. This article describes a Heartbeat2 Xen cluster using Ubuntu (7.10), DRBD8, and the OCFS2 (v1.39) filesystem. Although Ubuntu is used here, it can be done in almost the same way with Debian.

puppet - Trac

Put simply, Puppet is a system for automating system administration tasks. To learn more, read our big picture overview of Puppet, or take a deeper look at what Puppet can do with the Puppet Introduction. There's also a Puppet Brochure which gives the highlights of Puppet's functionality.

ONLamp.com -- Using Xen for High Availability Clusters

The idea of using virtual machines to build highly available clusters is not new. Some software companies claim that virtualization is the answer to your HA problems; of course, that's not true. Yes, you can reduce downtime by migrating virtual machines to another physical machine for maintenance purposes or when you think hardware is about to fail, but if an application crashes you still need to make sure another application instance takes over the service. And by the time your hardware fails, it's usually already too late to initiate the migration.

Xen Virtualization and Linux Clustering, Part 1

In this article, I briefly introduce the concepts of Xen virtualization and Linux clustering. From there, I show you how to set up multiple operating systems on a single computer using Xen and how to configure them for use with clustering. I should point out that a cluster implemented in this manner does not provide the computational power of multiple physical computers. It does, however, offer a way to prototype a cluster as well as provide a cost-effective development environment for cluster-based software. Even if you're not interested in clustering, this article gives you hands-on experience using Xen virtualization.

Cluster haute-disponibilité avec équilibrage de charge » UNIX Garden

Through a concrete example, we show you how to avoid the pitfalls of building an inexpensive high-availability cluster with load balancing, made up of only two machines!

January 2008

Heartbeat2 Xen cluster with drbd8 and OCFS2 -- Ubuntu Geek

This article describes a Heartbeat2 Xen cluster using Ubuntu (7.10), DRBD8, and the OCFS2 (v1.39) filesystem. Although Ubuntu is used here, it can be done in almost the same way with Debian. The idea behind the whole setup is to get a high-availability two-node cluster with redundant data. The two identical servers are installed with the Xen hypervisor and almost identical configurations as cluster nodes. The configuration and image files of the Xen virtual machines are stored on a DRBD device for redundancy. DRBD8 and OCFS2 allow simultaneous mounting on both nodes, which is required for live migration of Xen virtual machines.

ClusterMonkey - Building A Virtual Cluster with Xen (Part One)

This guide is the first of a series in which I give you detailed step-by-step instructions on how to build a virtual cluster with Xen. The cluster thus built might not be appropriate for your case, and it does reflect the author's preferences and/or needs, but if you are new to clusters or Xen, it will hopefully help you get started with both. The goal is to start simple and then add complexity as we progress, so in this first guide I show you how to do the basics:

* a Xen installation and the creation of 5 virtual machines (one to act as the master and four as slaves),
* shared storage through NFS,
* the network configuration on which to build the virtual cluster.

The network structure of this first attempt will be very simple: the master has two network cards, one to the outside world and the other connected through a switch to the slaves.
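To make the "one master, four slaves" step concrete, here is a sketch that stamps out five Xen guest config files from a template. The paths, memory size, and naming are placeholders, not the article's actual values:

    # Stamp out config files for one master and four slave guests.
    TEMPLATE = """name = "{name}"
    memory = 512
    disk = ['file:/srv/xen/{name}.img,xvda,w']
    vif = ['bridge=xenbr0']
    """

    for name in ["master"] + [f"slave{i}" for i in range(1, 5)]:
        with open(f"/tmp/{name}.cfg", "w") as f:
            f.write(TEMPLATE.format(name=name))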
