public marks

PUBLIC MARKS with tags cluster & debian

2008

XEN Cluster HowTo

by camel
I have tried running both Debian Etch and Ubuntu 8.04 Server in Dom0 on the cluster nodes. I started my tests with Debian, but I had issues with slow Samba performance in one VM that I couldn't fix, so I decided to try Ubuntu Server for the first time. Both installations went OK; the main difference was that I built mostly from source on Debian but used only packages on Ubuntu. I actually ran into more problems with Ubuntu due to some early bugs in the 8.04 release, which I will describe below as I go along. And I have yet to prove that this setup runs stably on Ubuntu.

How To Set Up A Load-Balanced MySQL Cluster With MySQL 5.1 | HowtoForge - Linux Howtos and Tutorials

by camel
This tutorial is based on Falko Timme's tutorial for MySQL Cluster 5.0. It shows how to configure a MySQL 5.1 cluster with five nodes: one management node, two storage nodes and two load-balancer nodes. The cluster is load-balanced with the Ultra Monkey package, which provides heartbeat (to check whether the other node is still alive) and ldirectord (to split requests across the nodes of the MySQL cluster). In this document I use Debian Etch 4.0 on all nodes, so the setup might differ a bit for other distributions. The two data nodes were x64 machines in order to use all of the 8 GB of RAM. The servers were compiled from source, so you should be able to get this running on any platform. The MySQL version used in this setup is 5.1.24-rc. It is a release candidate, but I wanted to use 5.1 to take advantage of disk-based tables: beginning with MySQL 5.1.6, it is possible to store the non-indexed columns of NDB tables on disk rather than in RAM, as was required in previous versions of MySQL Cluster.
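
As a quick illustration of the disk-based NDB feature mentioned above, the following sketch creates a disk-backed table through the load-balanced virtual IP once the cluster is running. The IP address, credentials, log file group, tablespace and table names are illustrative assumptions, not values from the tutorial; it uses the mysql-connector-python driver.

    # Sketch only: create a disk-based NDB table via the cluster's virtual IP.
    # Host, credentials, log file group, tablespace and table are assumptions.
    import mysql.connector

    conn = mysql.connector.connect(host="192.168.0.105",  # hypothetical ldirectord VIP
                                   user="root", password="secret", database="test")
    cur = conn.cursor()

    # NDB needs a log file group and a tablespace before non-indexed columns
    # can be stored on disk (possible since MySQL 5.1.6).
    cur.execute("CREATE LOGFILE GROUP lg1 ADD UNDOFILE 'undo1.log' "
                "INITIAL_SIZE 16M ENGINE NDBCLUSTER")
    cur.execute("CREATE TABLESPACE ts1 ADD DATAFILE 'data1.dat' "
                "USE LOGFILE GROUP lg1 INITIAL_SIZE 64M ENGINE NDBCLUSTER")

    # Non-indexed columns of this table live on disk; the indexed id stays in RAM.
    cur.execute("CREATE TABLE archive (id INT PRIMARY KEY, payload VARCHAR(255)) "
                "TABLESPACE ts1 STORAGE DISK ENGINE NDBCLUSTER")

    cur.close()
    conn.close()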

Heartbeat2 Xen cluster with drbd8 and OCFS2

by camel
The idea behind the whole setup is to get a highly available two-node cluster with redundant data. The two identical servers are installed with the Xen hypervisor and almost identical configurations as cluster nodes. The configuration and image files of the Xen virtual machines are stored on a DRBD device for redundancy. DRBD 8 and OCFS2 allow simultaneous mounting on both nodes, which is required for live migration of Xen virtual machines. This article describes a Heartbeat 2 Xen cluster using Ubuntu 7.10, DRBD 8 and the OCFS2 (version 1.39) file system. Although Ubuntu is used here, it can be done in almost the same way with Debian.
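
To make the dual-primary requirement concrete, here is a minimal sketch (not from the article) that checks on one node whether the DRBD resource is Primary on both sides and whether the OCFS2 volume is mounted, i.e. the preconditions for live migration. The mount point is an assumed example, and the /proc/drbd format is that of drbd8.

    # Sketch only: check the live-migration preconditions described above.
    # The mount point is an assumption; /proc/drbd format is that of drbd8.
    import re
    import sys

    def drbd_dual_primary(proc="/proc/drbd"):
        """True if any DRBD resource reports ro:Primary/Primary."""
        with open(proc) as f:
            return bool(re.search(r"ro:Primary/Primary", f.read()))

    def ocfs2_mounted(mountpoint="/xen", mounts="/proc/mounts"):
        """True if an OCFS2 filesystem is mounted at the given mount point."""
        with open(mounts) as f:
            return any(line.split()[1] == mountpoint and line.split()[2] == "ocfs2"
                       for line in f)

    if drbd_dual_primary() and ocfs2_mounted():
        print("dual-primary DRBD and OCFS2 mounted: live migration is possible")
    else:
        sys.exit("cluster storage is not ready for live migration")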

2007

HA Xen Cluster with DRBD, LVM and heartbeat

by camel
We have implemented a two-node HA Xen cluster consisting of two physical machines (hosts), each running several virtual servers (guests) for our company's internal services (mail, web applications, development, etc.). When one host goes down unexpectedly, the other host physically kills it (STONITH: power down or reset) and then takes over all the guests the failed host was running. When we want to shut down a host machine for maintenance (to replace a fan, add a disk or memory, etc.), we just type the usual shutdown command and the guests are automatically live-migrated to the other host. Since the guest servers keep running throughout the migration, apart from a pause of less than a second, users never even notice the event.
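
The maintenance workflow described above (evacuating all guests before a shutdown) can be sketched roughly as follows; the peer hostname is a made-up example, and plain xm commands are assumed rather than the authors' heartbeat integration.

    # Sketch only: live-migrate every running Xen guest to the peer host before
    # a maintenance shutdown. The peer hostname is an assumption; the authors'
    # actual setup drives this through heartbeat rather than a hand-run script.
    import subprocess

    PEER = "host2"  # hypothetical name of the surviving host

    def running_guests():
        """Parse 'xm list' output: skip the header line and Domain-0."""
        out = subprocess.check_output(["xm", "list"]).decode()
        names = [line.split()[0] for line in out.splitlines()[1:] if line.strip()]
        return [name for name in names if name != "Domain-0"]

    for guest in running_guests():
        # --live keeps the guest running during the move (sub-second pause).
        subprocess.check_call(["xm", "migrate", "--live", guest, PEER])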

Xen Cluster Management With Ganeti On Debian Etch | HowtoForge - Linux Howtos and Tutorials

by camel & 1 other
Ganeti is a cluster virtualization management system based on Xen. In this tutorial I will explain how to create one virtual Xen machine (called an instance) on a cluster of two physical nodes, and how to manage and fail over this instance between the two physical nodes.
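
For flavour, failing such an instance over to its secondary node can be driven from the cluster master roughly as below; the instance name is a hypothetical example, not one from the tutorial.

    # Sketch only: fail a Ganeti instance over to its secondary node. The
    # instance name is hypothetical; run this on the cluster master node.
    import subprocess

    INSTANCE = "inst1.example.com"  # hypothetical instance name

    # Show the instances and the nodes they currently run on.
    subprocess.check_call(["gnt-instance", "list"])

    # Shut the instance down and restart it on its secondary node; Ganeti
    # asks for confirmation before doing so.
    subprocess.check_call(["gnt-instance", "failover", INSTANCE])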

ssh on multiple servers Using cluster ssh -- Debian Admin

by camel
Ever had to make the same change on more than one Linux/Unix server? Find it annoyingly painful to keep repeating exactly the same commands again and again? This tool addresses exactly that problem. You run a utility (cssh), giving it a number of server names as parameters, and an xterm opens for each server along with an extra “console” window. Anything typed into the console is replicated into each server window (so, for example, you can edit the same file on N machines at the same time, or run the same command with the same parameters across those servers). It is also possible to type into the server windows directly, or to temporarily disable replication to one or more of the servers through the “Hosts” menu.
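
cssh itself is interactive (it replicates keystrokes into xterms), but the underlying idea of running one command across many servers can be sketched non-interactively like this; the hostnames are placeholders and key-based SSH login is assumed.

    # Sketch only: run the same command on several servers over ssh and print
    # each host's output. Hostnames are placeholders; key-based login assumed.
    import subprocess

    SERVERS = ["web1", "web2", "db1"]   # hypothetical hosts
    COMMAND = "uptime"                  # same command sent to every host

    for host in SERVERS:
        # BatchMode prevents ssh from hanging on a password prompt.
        result = subprocess.run(["ssh", "-o", "BatchMode=yes", host, COMMAND],
                                capture_output=True, text=True)
        print("--- " + host + " ---")
        print(result.stdout or result.stderr)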

Setting Up An iSCSI Environment On Linux | HowtoForge - Linux Howtos and Tutorials

by rodo & 1 other
Nowadays, iSCSI technology is quite popular in the storage world. This article shows an iSCSI demo environment consisting of one Debian Linux host and one NetApp filer, and tries to show the most important features of the protocol. 1. What is iSCSI? It is a network storage protocol on top of TCP/IP that encapsulates SCSI data in TCP packets. iSCSI allows us to connect a host to a storage array (or a tape drive) over a simple Ethernet connection. This solution is cheaper than a Fibre Channel SAN (Fibre Channel HBAs and switches are expensive). From the host's point of view, the storage array's LUNs look like local disks. iSCSI devices should not be confused with NAS devices (for example NFS). The most important difference is that an NFS volume can be accessed by multiple hosts, whereas an iSCSI volume can be accessed by only one host. This is similar to the SCSI protocol, where usually only one host has access to a given SCSI disk (cluster environments being the exception). The iSCSI protocol is defined in RFC 3720 by the IETF (Internet Engineering Task Force).
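
On the initiator side, the two basic steps (target discovery and login) look roughly like this with open-iscsi's iscsiadm, one common Linux initiator; the portal address is a placeholder and the article's filer and tooling may differ.

    # Sketch only: discover and log in to an iSCSI target with open-iscsi's
    # iscsiadm. The portal address is a placeholder; run as root with
    # open-iscsi installed.
    import subprocess

    PORTAL = "192.168.1.10"  # hypothetical address of the target (e.g. the filer)

    # SendTargets discovery lists the target IQNs the portal exports.
    out = subprocess.check_output(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]).decode()
    print(out)

    # Log in to the first discovered target; its LUNs then appear as local
    # /dev/sd* disks, as described above.
    target_iqn = out.split()[1]
    subprocess.check_call(
        ["iscsiadm", "-m", "node", "-T", target_iqn, "-p", PORTAL, "--login"])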

2006

PUBLIC TAGS related to tag cluster

architecture, beowulf, blade, computer, cool, database, delicious, diy, geek, google, hardware, imported:del.icio.us, infrastructure, linux, lowcost, performance, scalability, supercomputer, vxtech

Active users

camel
last mark : 10/09/2008 07:52

rodo
last mark : 20/09/2007 14:31

alexf10
last mark : 08/05/2006 07:18

jsboyer
last mark : 10/02/2006 10:56