Openfiler – Monitoring With Cacti

Introduction

This article explains how to configure an Openfiler server for SNMP monitoring through Cacti.

The infrastructure used for this article was an Openfiler 2.3 virtual machine being monitored by a Fedora Core 10 64-bit server running Cacti.  Both virtual machines were hosted on a Fedora Core 8 64-bit server running VMware Server 1.0.8.

Pre-requisites

In order to configure an Openfiler 2.3 server for SNMP monitoring in Cacti the following pre-requisites must be met :-

  • An installed Openfiler 2.3 server / appliance with SNMP configured
  • An existing Cacti installation

Adding a device

The following section provides the configuration settings to select when adding an Openfiler server as a new device in Cacti.  In order to add an Openfiler server in to Cacti as a new device perform the following steps :-

  1. Logon to the Cacti web site as an administrative user
  2. Click on the Console tab if necessary
  3. Click on the Create devices link in the main pane
  4. Set the Description to the friendly name you wish to use within Cacti – E.G. Openfiler Server
  5. Set the Hostname to the FQDN if DNS is configured or the IP Address of the server
  6. Select the Generic SNMP-enabled Host template
  7. Set the SNMP version to Version 2
  8. Set the community string to the required string for your infrastructure – E.G. public
  9. Set the Downed device detection method to Ping and SNMP
  10. Scroll down to the bottom of the page and click on the Save button
  11. In the Devices list click on the device name and scroll down to the bottom of the page
  12. Confirm that the general SNMP information for the server is displayed at the top of the page
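
If the SNMP information does not appear, it can help to test SNMP connectivity from the Cacti server's command line first. The following is a minimal sketch using the net-snmp command line tools, assuming they are installed on the Cacti server and that the community string is public as used in this example :-

snmpget -v2c -c public {Openfiler server name or IP address} .1.3.6.1.2.1.1.1.0

The command queries sysDescr.0 and should return the system description of the Openfiler server if SNMP is reachable.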

Associate data queries to the device

Once the device has been added to Cacti data queries need to be associated to it so that graphs can be produced.  To associate data queries to the device perform the following steps :-

  1. In the Devices list click on the device name and scroll down to the bottom of the page
  2. In the Associated data queries section change the Add data query drop down box to SNMP – Get Mounted Partitions
  3. Click on the Add button to add the new data query
  4. Once added the query will be shown with a status of Success [xx Items, x Rows] (The number of Items and Rows will differ depending on how many partitions exist on the server)
  5. Change the Add data query drop down box to SNMP – Get Processor Information
  6. Click on the Add button to add the new data query
  7. Once added the query will be shown with a status of Success [x Items, x Rows] (The number of Items and Rows will differ depending on how many CPUs exist in the server)
  8. Change the Add data query drop down box to SNMP – Get Interface Statistics
  9. Click on the Add button to add the new data query
  10. Once added the query will be shown with a status of Success [x Items, x Rows] (The number of Items and Rows will differ depending on how many network interfaces exist in the server)
  11. Click on the Save button to associate the queries to the device.

Create graphs for the device

The next step in configuring an Openfiler server for monitoring through Cacti is to add the graphs required.  To add new graphs for the Openfiler server perform the following steps :-

  1. In the Devices list click on the device name and scroll down to the bottom of the page
  2. Click on the Create graphs for this host
  3. The graph creation page for the device will be shown and will list all of the items which have been discovered on the server by the data queries associated in the previous section
  4. In the SNMP – Get Mounted Partitions section of the screen all the partitions mounted on the server will be listed including the root partition, swap partition, and physical memory.
  5. Enable the tick box on the right hand side of the screen next to each partition you wish to graph
  6. In the SNMP – Get Processor Information section of the screen all of the CPUs installed in the server will be listed
  7. Enable the tick box on the right hand side of the screen next to each CPU
  8. In the SNMP – Get Interface Statistics section of the screen all of the network interfaces installed in the server will be listed identified by the IP address associated.
  9. Enable the tick box on the right hand side of the screen next to each network interface you wish to graph
  10. Change the graph type drop down box to In/Out Bytes with Total Bandwidth
  11. Click on the Create button to create the graphs

Openfiler – Integrating in to an Active Directory infrastructure

Introduction

Openfiler can be integrated in to an Active Directory infrastructure and this article explains the steps required to perform the integration.

The infrastructure used to perform this article is running on a Fedora Core 8 64-bit server running VMware Server 1.0.8 hosting a Windows 2003 Domain on an internal network.

The installation of the Fedora Core host server and Windows 2003 Domain are outside the scope of this article and it is assumed that you will have the necessary infrastructure available (Physical or Virtual) to perform the steps in this article.

Pre-requisites

In order to integrate Openfiler in to an Active Directory infrastructure the following pre-requisites must be met :-

  • An Openfiler server with Network Access configured
  • A Windows Domain Controller
  • DNS configured for the Domain

Enabling the Services

The first thing to check is that the necessary Services are started and to do this through the web administration console perform the following steps :-

  1. Click on the Services option
  2. Click on Enable next to the SMB / CIFS service to enable access to Windows file share resources.
  3. Click on the SMB / CIFS Setup option in the Services section
  4. Configure the Server string to the name required for the server in Windows
  5. Set the NetBIOS name to the name required for the server
  6. Set the WINS server to the appropriate address if required

Joining Active Directory

The next step is to configure the Accounts section of the Openfiler server so that it can utilise Active Directory for its access control.  To configure account access perform the following steps :-

  1. Click on the Accounts option
  2. Scroll down if necessary and tick the Use Windows domain controller and authentication option
  3. Set the security model to Active Directory
  4. Set the Domain / Workgroup to the NetBIOS Domain Name you wish to join
  5. Set the Domain Controllers to the FQDN of your Domain Controller
  6. Set the ADS Realm to the FQDN of your Domain
  7. Tick the Join to Domain option
  8. Set the Administrator Username and Password to an account with privileges to add a machine to the Domain
  9. Click on Submit to join the Domain

It may take a while for the web administration page to refresh. Once it has refreshed, reboot the Openfiler server by clicking on the Shutdown option in the top right of the screen and selecting the Shutdown and reboot option.

Confirming the server has joined successfully

Once the server has rebooted log back on to the web administration page and click on the Accounts tab.  Click on the Group List option in the Accounts section and confirm that your Active Directory Groups are listed.
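
As an additional check from the Openfiler console, the winbind tools which Openfiler's Samba based Active Directory integration typically relies on can be queried directly. The commands below are a minimal sketch and assume the winbind service is running following the domain join :-

wbinfo -t
wbinfo -u
wbinfo -g

wbinfo -t checks the trust secret with the Domain Controller, while -u and -g should list the Active Directory users and groups respectively.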

Openfiler – Installing via PXE

Introduction

This article explains how to install Openfiler version 2.3 over the network using PXE.

The infrastructure used for this article is a CentOS 5.3 server with TFTP installed to provide the PXE server, and a DD-WRT router running as the DHCP server with PXE options configured through DNSMasq.

Pre-requisites

In order to install Openfiler 2.3 via PXE the following pre-requisites must be met :-

  • A copy of the Openfiler 2.3 installation ISO
  • A DHCP server configured to provide the relevant PXE options
  • A TFTP server to provide the installation media

Copying the installation files to your PXE server

Once you’ve got the installation ISO mount the image, navigate to the images/pxeboot directory and copy the contents to your PXE Server directory (on CentOS the /var/lib/tftpboot folder). The following files should now be in your PXE Server directory :-

initrd.img
vmlinuz

Modify the PXE Configuration file

Once the files have been copied to the PXE Server directory edit the PXE configuration file (on CentOS /var/lib/tftpboot/pxelinux.cfg/default) and add the following lines :-

label Install Openfiler 2.3
kernel vmlinuz
append vga=normal initrd=initrd.img

Configuring for http delivery

As with most PXE installation methods of Linux based systems you can provide the files over HTTP or NFS.  The infrastructure used for this article was configured for HTTP delivery due to subnet restrictions on the network.

To provide the remaining files to the PXE client copy the rPath folder from the installation ISO to the web-site folder of your choice.

During the installation of Openfiler select HTTP for the location of the files and when prompted enter the web server name or IP address and the folder where the rPath folder was copied.
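
As a rough sketch of the copy step, the commands below mount the ISO on the web server and copy the rPath folder into an Apache document root. The ISO filename and the /var/www/html path are assumptions and should be adjusted to match your environment :-

mkdir /mnt/openfiler-iso
mount -o loop openfiler-2.3-x86_64-disc1.iso /mnt/openfiler-iso
cp -a /mnt/openfiler-iso/rPath /var/www/html/openfiler
umount /mnt/openfiler-iso

With this layout the folder to enter during the installation would be /openfiler, the directory containing the copied rPath folder.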

Test the implementation

Once the steps above have been performed test the installation by network booting the target machine and the Openfiler 2.3 installation should start.

Openfiler – Installing The Nagios NRPE Add-on

Introduction

This article explains how to configure NRPE and the Nagios Plugins on an Openfiler server for monitoring through Nagios.

The infrastructure used for this article was an Openfiler 2.3 virtual machine being monitored by a Fedora Core 10 64-bit server running Nagios.  Both virtual machines were hosted on a Fedora Core 8 64-bit server running VMware Server 1.0.8.

The version of NRPE used for this test was v2.12 and can be downloaded from http://www.nagios.org/download/addons

The version of Nagios Plugins used for this test was 1.4.14 and can be downloaded from http://prdownloads.sourceforge.net/sourceforge/nagiosplug

Pre-requisites

In order to configure NRPE and the Nagios Plugins on an Openfiler 2.3 server for monitoring in Nagios the following pre-requisites must be met :-

  • An installed Openfiler 2.3 server / appliance
  • An existing Nagios installation
  • Openssl libraries installed on the Openfiler 2.3 server / appliance
  • The NRPE source tarball
  • The Nagios-Plugins source tarball

Installing the Openssl libraries

In order to compile and install the NRPE source the additional Openssl library package is required on the server.  To install the Openssl library on to the server perform the following steps :-

  1. Logon to the console as the root user
  2. At the command prompt execute the command below :-
     conary update openssl:devel

Create the nagios user

Before installing NRPE and the Nagios Plugins we must first create the nagios user on the system.  To create the nagios user perform the following steps :-

  1. Logon to the console as the root user
  2. At the command prompt execute the command below :-
     useradd nagios
  3. Set the password for the user by executing the command below and entering the password desired when prompted :-
     passwd nagios

Compiling and installing the Nagios Plugins

Once the Openssl library has been installed on the server the next step is to compile and install the Nagios Plugins. In order to compile and install the Nagios Plugins on the server perform the following steps :-

  1. Logon to the console as the root user
  2. Copy the Nagios Plugins source tarball to the /tmp folder and then unpack it using the command below :-
     tar -zvxf nagios-plugins-1.4.14.tar.gz
  3. Change directory in to the nagios-plugins-1.4.14 folder
  4. To compile and install the software execute the following commands :-
     ./configure
     make
     make install
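
A quick way to confirm the plugins compiled and installed correctly is to run one of them locally on the Openfiler server. This is a minimal sketch; /usr/local/nagios/libexec is the default install location for a source build of the plugins and the thresholds are only examples :-

/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /

The command should return an OK, WARNING, or CRITICAL status line for the root filesystem.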

Compiling and installing NRPE

Once the Nagios Plugins have been installed on the server the next step is to compile and install the NRPE source code.  In order to compile and install NRPE on the server perform the following steps :-

  1. Logon to the console as the root user
  2. Copy the NRPE source tarball to the /tmp folder and then unpack it using the command below :-
     tar -zvxf nrpe-2.12.tar.gz
  3. Change directory in to the nrpe-2.12 folder
  4. To compile and install the software execute the following commands :-
     ./configure
     make
     make install
     make install-plugin
     make install-daemon
     make install-daemon-config
     make install-xinetd
  5. Edit the /etc/services file and append the following line to the bottom of the file :-
     nrpe      5666/tcp
  6. Save and exit the file
  7. Edit the file /etc/xinetd.d/nrpe and change the only_from line from 127.0.0.1 to the IP address of your Nagios server as shown below using 192.168.1.1 as an example :-
     only_from = 192.168.1.1
  8. Save and exit the file
  9. Restart the xinetd service on the Openfiler server by executing the following command :-
     service xinetd restart
  10. To confirm that the NRPE service has been started as part of the xinetd service execute the following command and confirm that it shows nrpe as LISTENING :-
     netstat -l | grep nrpe
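
Once xinetd is listening on port 5666 the NRPE daemon can also be tested remotely from the Nagios server. The sketch below assumes check_nrpe was installed to the default /usr/local/nagios/libexec location on the Nagios server :-

/usr/local/nagios/libexec/check_nrpe -H {Openfiler server name or IP address}

If the daemon is reachable and the Nagios server's IP address is allowed in /etc/xinetd.d/nrpe, the command returns the NRPE version string.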

Openfiler – Installing Amanda Backup Client

Introduction

This article explains the steps to install the Amanda backup client on to an Openfiler 2.3 server.

The infrastructure used for this article was an Openfiler 2.3 64-bit Virtual Machine running on a Fedora Core 8 64-bit host running VMware Server 1.0.8

The version of Amanda used for this test was version 2.6.1.p1 downloaded from http://sourceforge.net/projects/amanda/files

Pre-requisites

In order to install Amanda on to an Openfiler server the following pre-requisites must be met :-

  • An Openfiler server built using the standard installation
  • The latest Amanda source tarball downloaded to the Openfiler server
  • The GCC compiler

Installing the GCC compiler

The first step to install Amanda on an Openfiler server is to install GCC and its dependencies as Openfiler does not come with it pre-installed.  To install GCC and its dependencies follow the steps below :-

  1. Logon to the console of the Openfiler server as root
  2. At the command prompt execute the commands below :-
     conary update gcc
    conary update gcc:runtime
    conary update libtool
    conary update glib
    conary update glib:devel
    conary update glibc
    conary update glibc:devel
    conary update automake
    conary update autoconf
    conary update pkgconfig

Create the amandabackup user

The Amanda software runs using an account on the system which needs to be created before the software is compiled and installed. To create the Amanda user perform the following steps :-

  1. Logon to the console of the Openfiler server as root
  2. At the command prompt execute the commands below :-
     mkdir /var/lib/amanda
     useradd -d /var/lib/amanda -s /bin/bash -G disk amandabackup
     chown amandabackup:disk /var/lib/amanda
     chmod 700 /var/lib/amanda
     passwd -u -f amandabackup
  3. At the command prompt execute the commands below :-
     mkdir /etc/amanda
     chown amandabackup:disk /etc/amanda
     chmod 700 /etc/amanda

Compiling and installing Amanda

Once the GCC Compiler has been installed the next step is to compile and install the Amanda software.  To install the Amanda software perform the following steps :-

  1. Unpack the source tarball in to the /tmp folder :-
     tar -zvxf amanda-2.6.1.p1.tar.gz
  2. Change directory in to the unpacked folder in /tmp :-
     cd /tmp/amanda-2.6.1.p1
  3. To compile and install the software execute the commands below :-
     ./configure --with-user=amandabackup --with-group=disk --with-gnutar-listdir=/var/lib/amanda/gnutar-lists
     make
     make install

Create the xinetd amanda config file

Amanda runs its client daemon through the xinetd service and a configuration file needs to be created in order for this to work.  To create the xinetd Amanda service file perform the following steps :-

  1. At the command prompt execute the following command :-
     vi /etc/xinetd.d/amandaclient
  2. In the file insert the following lines :-
     service amanda
    {
    disable         = no
    flags             = IPv6
    socket_type   = stream
    protocol         = tcp
    wait               = no
    user              = amandabackup
    group            = disk
    groups          = yes
    server           = /usr/local/libexec/amanda/amandad
    server_args    = -auth=bsdtcp amdump
    }
  3. Save and exit the file
  4. Restart the xinetd service by executing the command below :-
     service xinetd restart
  5. At the command prompt execute the command below and confirm that the amanda service is shown :-
     netstat -l | grep amanda

Test server to client connectivity

Once the client is installed logon to the Amanda backup server and add the Openfiler to the disklist file in the required Backup Set.

As the Amanda user execute amcheck {Backup Set Name} and confirm that no issues were found with the Openfiler server.
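
As a hypothetical example, assuming a backup set named DailySet1 and a backup user named amandabackup on the Amanda server, the disklist entry and check might look like the following; the hostname, path, and dumptype are placeholders to adjust for your environment :-

# entry appended to /etc/amanda/DailySet1/disklist on the Amanda server
openfiler01.example.com  /data  comp-user-tar

# run the check as the backup user on the Amanda server
su - amandabackup -c "amcheck DailySet1"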

Openfiler – Initial configuration steps

Introduction

This article explains the initial basic configuration changes which need to be made to an Openfiler version 2.3 server once it has been installed.

The network topology for this article uses three different internal subnets using 192.168.0.x, 192.168.200.x, and 192.168.250.x address ranges.

The basic steps required to configure an Openfiler server following installation are as follows and are explained in the following sections :-

  • Configuring network access control
  • Configuring physical volumes
  • Configuring volume groups

Logon to the Openfiler management web-site

Openfiler comes with a management web-site which can be used to configure the server and is accessible by the url https://{server name or IP Address}:446

The logon credentials for the web-site are initially set to the username openfiler with a password of password, although the password can then be changed through the web-site to one of your choice.

Configuring network access control

The first step to perform once the installation has been completed is to configure the network access to the server which is used to control where the server is accessible from.  The network access control options are very granular and can be configured for specific IP addresses, ranges, or subnets.  To configure access for a subnet perform the following steps :-

  1. Click on the System option and then scroll down if necessary to the bottom of the screen
  2. In the Network Access Configuration section enter the friendly name for the subnet – E.G. Local Network
  3. Next set the Network/Host to the subnet desired – E.G. 192.168.0.0
  4. Change the Netmask to the desired mask value – E.G. 255.255.255.0
  5. Configure the Type to Share and click on the Update button to add the configuration.
  6. Repeat step 1 – 5 for each subnet to be added

Creating physical volumes

Once the network access controls have been configured the next step is to create the physical volumes which will be used to configure the volume groups required.  To create the physical volumes perform the following steps :-

  1. Click on the Volumes option and then click on the Block Devices option in the Volumes section
  2. When presented with a list of disks available on the server click on the disk where the physical volume is to be configured – E.G. /dev/hda
  3. In the Edit Partitions page scroll down to the bottom if necessary
  4. Set the Mode to Primary and set the Partition Type to Physical volume
  5. Configure the Starting and Ending cylinders to the desired size and then click on the Create button.

Creating volume groups

The next step is to configure a volume group which contains the physical volume(s) which have been created. To create a volume group perform the following steps :-

  1. Click on the Volumes option and then click on the Volume groups option in the Volumes section
  2. Set the Volume Group name to the desired name without any spaces – E.G. Lost-it-vg
  3. Select the physical volume(s) created above which you wish to be part of the volume group
  4. Click on the Add volume group button to create the volume group
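
If you have console or SSH access to the Openfiler server, the underlying LVM objects created through the web interface can be confirmed from the command line. This is an optional check using the standard LVM tools; Lost-it-vg is the example volume group name used above :-

pvdisplay
vgdisplay Lost-it-vg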

These steps will provide a basically configured Openfiler server which is then ready to deploy iSCSI resources, file shares etc.  For further information on these topics see the related articles below.

Openfiler – Creating a HA Cluster

Introduction

Openfiler can be configured as a High Availability Active / Passive cluster and in this article the steps to implement this solution are explained. The infrastructure used for this article consisted of two virtual machines running on a Fedora Core 8 64-bit server running VMware Server 1.0.8.

Important Notes

After seeing that this article gets a few hits I have re-run through the installation routine and found that there were some issues. The first issue is that the drbd.conf file was written incorrectly and the IP Addresses:Ports in the vg0drbd resource were a duplicate of the cluster_metadata IP Addresses:Ports.  This has now been rectified in the article and I apologise for any trouble caused.

Pre-requisites

In order to implement an Openfiler HA cluster the following pre-requisites must be met :-

  • 2 x Servers with the same hardware configurations and drive sizes
  • 2 x Ethernet cards in each server

Node build configuration

Both nodes used in the article were built the same and the configuration settings used are provided in the following sub-sections.

Hardware Configuration

  • 2 x 5Gb Hard drives – /dev/sda and /dev/sdb
  • 2 x Ethernet Cards
  • 256Mb of RAM

Disk Layout

  • Partition Name = /boot – Filesystem = ext3 – Partition Size = 100Mb – Disk = /dev/sda1
  • Partition Name = /swap – Filesystem = swap – Partition Size = 1024Mb – Disk = /dev/sda2
  • Partition Name = / – Filesystem = ext3 – Partition Size = 3992Mb – Disk = /dev/sda3
  • Partition Name = /meta – Filesystem = ext3 – Partition Size = 512Mb – Disk = /dev/sdb1
  • Partition Name = /data – Filesystem = lvm – Partition Size = 4608Mb – Disk = /dev/sdb2

Network Configuration – Node1

  • Network Card = Eth0 – LAN – Production LAN – IP Address = 192.168.0.50
  • Network Card = Eth1 – LAN – Replication LAN – IP Address = 192.168.211.50

Network Configuration – Node2

  • Network Card = Eth0 – LAN – Production LAN – IP Address = 192.168.0.60
  • Network Card = Eth1 – LAN – Replication LAN – IP Address = 192.168.211.60

Adding the host file entries

In order to communicate correctly and not rely upon DNS it is necessary to configure the hostnames for the cluster nodes in each server's hosts file.  To add the required entries in to the servers' hosts files perform the following steps :-

  1. Logon to the Openfiler server using SSH
  2. Navigate to /etc and edit the file named hosts
  3. Add a new line to the bottom of the hosts file as shown below

{Other Node's Replication Address} {Node Name}

E.G. on Node1 add a line as shown below, substituting the second node's replication IP address and name :-

192.168.211.60  Node2

Generating the ssh shared keys

To allow the two Openfilers to communicate it is necessary to configure ssh access using shared keys.  To generate and configure this perform the following steps on each server :-

  1. Logon to the Openfiler server using SSH
  2. At the command prompt execute the command ssh-keygen -t dsa
  3. Accept the Default Location to save the file
  4. When prompted, press Enter to set no passphrase
  5. Confirm no passphrase by pressing Enter again
  6. Once created, copy the file /root/.ssh/id_dsa.pub to the /root/.ssh folder on the other node and name the file authorized_keys2 (see the example below)
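
A minimal sketch of the copy in step 6, run from Node1 and using the example replication address of Node2 from the configuration above, is shown below; repeat in the opposite direction from Node2 :-

scp /root/.ssh/id_dsa.pub root@192.168.211.60:/root/.ssh/authorized_keys2
ssh root@192.168.211.60 hostname

The second command should return the other node's hostname without prompting for a password once the key is in place.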

Configuring DRBD (Distributed Replicated Block Device)

The next step in implementing an Openfiler HA cluster is to configure the DRBD to provide a distributed filesystem over the network.  In order to configure DRBD perform the following steps :-

  1. If the file exists move the /etc/drbd.conf file to /etc/drbd.conf.old
  2. Create a new /etc/drbd.conf file and add the text shown below these steps, substituting the {Node 1} and {Node 2} markers for the two node names in the cluster, the {Replication IP} markers for the replication IP Addresses, and the {partition number} markers for the partition to be used, E.G. /dev/sdb1.
  3. Save the new /etc/drbd.conf file and copy it to the second node
global {
    # minor-count 64;
    # dialog-refresh 5; # 5 seconds
    # disable-ip-verification;
    usage-count ask;
 }
 
 common {
    syncer { rate 100M; }
 }
 
 resource cluster_metadata {
    protocol C;
    handlers {
    pri-on-incon-degr "echo O > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo O > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo O > /proc/sysrq-trigger ; halt -f";
    # outdate-peer "/usr/sbin/drbd-peer-outdater";
  }
  
  startup {
    # wfc-timeout 0;
    degr-wfc-timeout 120; # 2 minutes.
  }
 
  disk {
    on-io-error detach;
  }
 
  net {
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
 
  syncer {
    # rate 10M;
    # after "r2";
    al-extents 257;
  }
 
  on {Node 1} {
    device /dev/drbd0;
    disk {partition number};
    address {Replication IP}:7788;
    meta-disk internal;
  }
 
  on {Node 2} {
    device /dev/drbd0;
    disk {partition number};
    address {Replication IP}:7788;
    meta-disk internal;
  }
 }
 
 resource vg0drbd {
    protocol C;
    startup {
    wfc-timeout 0; ## Infinite!
    degr-wfc-timeout 120; ## 2 minutes.
  }
 
  disk {
    on-io-error detach;
  }
 
  net {
    # timeout 60;
    # connect-int 10;
    # ping-int 10;
    # max-buffers 2048;
    # max-epoch-size 2048;
  }
 
  syncer {
    after "cluster_metadata";
  }
 
  on {Node 1} {
    device /dev/drbd1;
    disk {partition number};
    address {Replication IP}:7789; 
    meta-disk internal;
  }
 
  on {Node 2} {
    device /dev/drbd1;
    disk {partition number};
    address {Replication IP}:7789;
    meta-disk internal;
  }
 }

Create the partitions for Drbd

The next step is to create the partitions for the Cluster Metadata and the Data partition you wish to be part of the cluster.  To create the required partitions perform the following steps on the Primary Node if the Cluster Disk is a shared disk or on both Nodes if they are locally attached :-

  1. At the command prompt execute fdisk {disk to be used}, E.G. fdisk /dev/sdb
  2. Within fdisk execute n to create a new partition
  3. When prompted select the type of Partition to be created, E.G. p for a primary partition
  4. When prompted select the Partition number to be created, E.G. 1 for the first on a second disk
  5. When prompted accept the First Cylinder value and then set the Last Cylinder to +512M
  6. The partition is created and you are returned to the main Command menu of fdisk
  7. Within fdisk execute n to create a new partition
  8. When prompted select the type of Partition to be created, E.G. p for a primary partition
  9. When prompted select the Partition number to be created, E.G. 2 for the second on a second disk
  10. When prompted accept the First Cylinder value and then set the Last Cylinder to the size of partition desired in Mb
  11. The partition is created and you are returned to the main Command menu of fdisk
  12. Within fdisk execute w to save the partition changes to the disk and exit fdisk

Initialise the metadata

Once the configuration file has been created on both nodes perform the following steps on both nodes to initialise the metadata :-

  1. At the command prompt execute drbdadm create-md cluster_metadata
  2. At the command prompt execute drbdadm create-md vg0drbd

Start the drbd service

Once the drbd metadata has been initialised on both nodes of the cluster start the drbd service on both nodes by performing the steps below :-

  1. Confirm that the partitions to be used are not mounted
  2. In the command prompt on both nodes execute the command service drbd start

Configure the primary drbd node

Once the drbd service has started on both nodes of the cluster perform the steps below on the first node to configure it as the Primary node :-

  1. At the command prompt on the first node execute the command drbdsetup /dev/drbd0 primary -o
  2. At the command prompt on the first node execute the command drbdsetup /dev/drbd1 primary -o
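
To confirm that both resources are now replicating, the DRBD status can be checked on either node using the standard DRBD tools shown below; the cluster_metadata and vg0drbd resources should show the first node as Primary and report a connected or synchronising state :-

cat /proc/drbd
service drbd status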

Configure the drbd service to start at boot

To enable the drbd service to start at boot time perform the steps below on both nodes of the cluster :-

  1. At the command prompt execute the command chkconfig --level 2345 drbd on

Create the metadata partition filesystem

On the primary node of the cluster perform the step below to create the filesystem for cluster metadata partition :-

  1. At the command prompt on the first node execute the command mkfs.ext3 /dev/drbd0

Configure the LVM partition

To configure the second partition as an LVM partition perform the steps shown below :-

  1. Edit the file /etc/lvm/lvm.conf on both nodes
  2. Find the line below

filter = [ "a/.*/" ]

and change it to

filter = [ "r|{Second drive allocated in to the cluster}|" ] E.G. filter = [ "r|/dev/sdb2|" ]

  3. Create the LVM partition on the first node only as it will be replicated to the second by executing the command pvcreate /dev/drbd1

Configure heartbeat

The heartbeat service controls failover between the two nodes and is run over the second network card configured with the Replication IP address.  To configure the heartbeat service perform the steps below on both nodes :-

  1. Create the file /etc/ha.d/authkeys and add the text below
  2. Edit the file /etc/ha.d/ha.cf and add the text below
  3. Restrict access to the authkeys file by executing the command chmod 600 /etc/ha.d/authkeys

/etc/ha.d/authkeys :-

auth 2
2 crc

/etc/ha.d/ha.cf :-

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
bcast eth1
keepalive 5
warntime 10
deadtime 120
initdead 120
udpport 694
auto_failback off
node filer01
node filer02

Configure the heartbeat service to start at boot

To enable the heartbeat service to start at boot perform the following step on both nodes :-

  1. At the command prompt execute the command chkconfig --level 2345 heartbeat on

Openfiler configuration changes

To ensure that the HA Services are able to run as expected and are available during failover perform the following steps on the nodes as directed :-

  1. At the command prompt on the primary node of the cluster execute the following commands :-
     mkdir /cluster_metadata
     mount /dev/drbd0 /cluster_metadata
     mv /opt/openfiler/ /opt/openfiler.local
     mkdir /cluster_metadata/opt
     cp -a /opt/openfiler.local /cluster_metadata/opt/openfiler
     ln -s /cluster_metadata/opt/openfiler /opt/openfiler
     rm /cluster_metadata/opt/openfiler/sbin/openfiler
     ln -s /usr/sbin/httpd /cluster_metadata/opt/openfiler/sbin/openfiler
     rm /cluster_metadata/opt/openfiler/etc/rsync.xml
     ln -s /opt/openfiler.local/etc/rsync.xml /cluster_metadata/opt/openfiler/etc/
     mkdir -p /cluster_metadata/etc/httpd/conf.d
     cp /etc/httpd/conf.d/* /cluster_metadata/etc/httpd/conf.d
  2. Next edit the file /opt/openfiler.local/etc/rsync.xml and populate it with the following text :-
     <?xml version="1.0" ?>
     <rsync>
     <remote hostname="{Replication IP of secondary node}"/>
     <item path="/etc/ha.d/haresources"/>
     <item path="/etc/ha.d/ha.cf"/>
     <item path="/etc/ldap.conf"/>
     <item path="/etc/openldap/ldap.conf"/>
     <item path="/etc/ldap.secret"/>
     <item path="/etc/nsswitch.conf"/>
     <item path="/etc/krb5.conf"/>
     </rsync>
  3. At the command prompt on the secondary node of the cluster execute the following commands :-
     mkdir /cluster_metadata
     mv /opt/openfiler/ /opt/openfiler.local
     ln -s /cluster_metadata/opt/openfiler /opt/openfiler
  4. Next edit the file /opt/openfiler.local/etc/rsync.xml on the secondary node and populate it with the following text :-
     <?xml version="1.0" ?>
     <rsync>
     <remote hostname="{Replication IP of the primary node}"/>
     <item path="/etc/ha.d/haresources"/>
     <item path="/etc/ha.d/ha.cf"/>
     <item path="/etc/ldap.conf"/>
     <item path="/etc/openldap/ldap.conf"/>
     <item path="/etc/ldap.secret"/>
     <item path="/etc/nsswitch.conf"/>
     <item path="/etc/krb5.conf"/>
     </rsync>

Heartbeat cluster configuration

To configure the heartbeat services haresources file it is necessary to modify the Openfiler cluster.xml file.  To modify the Openfiler cluster.xml file perform the following steps :-

  1. On the primary node of the cluster edit the file /cluster_metadata/etc/cluster.xml and populate it with the following text :-
     <?xml version="1.0" ?>
     <cluster>
     <clustering state="on" />
     <nodename value="{Primary Node Name}" />
     <resource value="MailTo::it@company.com::ClusterFailover"/>
     <resource value="IPaddr::{Cluster IP Address Required}/24" />
     <resource value="drbddisk::">
     <resource value="LVM::vg0drbd">
     <resource value="Filesystem::/dev/drbd0::/cluster_metadata::ext3::defaults,noatime">
     <resource value="MakeMounts"/>
     </cluster>

Configuring Samba and NFS services support

In order for the Samba service to be part of the HA cluster perform the following steps :-

  1. At the command prompt on the primary node of the cluster execute the following commands :-
     mkdir /cluster_metadata/etc
     mv /etc/samba/ /cluster_metadata/etc/
     ln -s /cluster_metadata/etc/samba/ /etc/samba
     mkdir -p /cluster_metadata/var/spool
     mv /var/spool/samba/ /cluster_metadata/var/spool/
     ln -s /cluster_metadata/var/spool/samba/ /var/spool/samba
     mkdir -p /cluster_metadata/var/lib
     mv /var/lib/nfs/ /cluster_metadata/var/lib/
     ln -s /cluster_metadata/var/lib/nfs/ /var/lib/nfs
     mv /etc/exports /cluster_metadata/etc/
     ln -s /cluster_metadata/etc/exports /etc/exports
  2. At the command prompt on the secondary node of the cluster execute the following commands :-
     rm -rf /etc/samba/
     ln -s /cluster_metadata/etc/samba/ /etc/samba
     rm -rf /var/spool/samba/
     ln -s /cluster_metadata/var/spool/samba/ /var/spool/samba
     rm -rf /var/lib/nfs/
     ln -s /cluster_metadata/var/lib/nfs/ /var/lib/nfs
     rm -rf /etc/exports
     ln -s /cluster_metadata/etc/exports /etc/exports

Configuring iSCSI services support

In order for the iSCSI service to be part of the HA cluster perform the following steps :-

  1. At the command prompt on the primary node of the cluster execute the following commands :-
     mv /etc/ietd.conf /cluster_metadata/etc/
     ln -s /cluster_metadata/etc/ietd.conf /etc/ietd.conf
     mv /etc/initiators.allow /cluster_metadata/etc/
     ln -s /cluster_metadata/etc/initiators.allow /etc/initiators.allow
     mv /etc/initiators.deny /cluster_metadata/etc/
     ln -s /cluster_metadata/etc/initiators.deny /etc/initiators.deny
  2. At the command prompt on the secondary node of the cluster execute the following commands :-
     rm /etc/ietd.conf
     ln -s /cluster_metadata/etc/ietd.conf /etc/ietd.conf
     rm /etc/initiators.allow
     ln -s /cluster_metadata/etc/initiators.allow /etc/initiators.allow
     rm /etc/initiators.deny
     ln -s /cluster_metadata/etc/initiators.deny /etc/initiators.deny

Configuring a volume group

The next stage is to create a volume group which must be done before the heartbeat service is started.  To create the volume group perform the following steps :-

  1. At the command prompt on the primary node of the cluster execute the following command :-
     vgcreate vg0drbd /dev/drbd1

Configuring and starting the heartbeat service for the first time

In order to get Openfiler to write the /etc/ha.d/haresources file based on the cluster.xml config file perform the following steps :-

  1. At the command prompt on the primary node of the cluster execute the following commands :-

rm /opt/openfiler/etc/httpd/modules
ln -s /usr/lib64/httpd/modules /opt/openfiler/etc/httpd/modules

  2. Restart the Openfiler service on the primary node
  3. Log onto the primary node web interface
  4. Click on Services and enable SMB / CIFS and iSCSI
  5. At the command prompt on the primary node execute lvcreate -L 400M -n filer vg0drbd
  6. Copy the file /etc/ha.d/haresources from the primary node to the secondary node (see the example below)
  7. To enable the heartbeat configuration on the servers reboot the primary node first and then the secondary node
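
A minimal sketch of the copy in step 6 is shown below, assuming the secondary node is reachable as filer02 via the host file entries added earlier :-

scp /etc/ha.d/haresources root@filer02:/etc/ha.d/haresources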

Testing the configuration

Once the two nodes have rebooted test the configuration by performing the following steps :-

  1. Go to the cluster administration web page in a browser using the url https://{Cluster IP Address}:446
  2. When prompted logon to the Openfiler cluster
  3. Check under the System tab that the Hostname is showing as the Primary Node Name.
  4. Close the web administration page and shutdown the primary node
  5. Once shutdown re-open the cluster administration web page and logon
  6. Check under the System tab that the Hostname is showing as the Secondary Node Name.
  7. Close the web administration page and shutdown the secondary node
  8. Boot up the primary node and then the secondary node

Finally to clean up the configuration remove the temporary volume created earlier by performing the following step on the primary node :-

  1. At the command prompt on the primary node execute the command lvremove vg0drbd/filer

Conclusion

The Openfiler cluster is now configured and you can create new volumes for the provision of Samba shares or iSCSI disks. N.B. When creating new resources or enabling new services you must copy the /etc/ha.d/haresources file from the active node of the cluster to the passive node.

Useful links

http://www.howtoforge.com/installing-and-configuring-openfiler-with-drbd-and-heartbeat

Openfiler – Configuring SNMP

Introduction

This article explains how to configure the SNMP server included in Openfiler 2.3 in order to allow network monitoring of the server.

Editing the SNMP configuration file

The first step to allow monitoring of the Openfiler server by SNMP is to edit the SNMP configuration file.  In order to edit the configuration file perform the following steps :-

  1. Logon to the Openfiler server through SSH as an administrative user
  2. Navigate to the /etc/snmp folder
  3. Edit the snmpd.conf file and append the following to the bottom of the file :-
     rocommunity public
     com2sec local {IP address of the monitoring server} public
     com2sec network_1 {Network address allowed to access the server}/24 public
     group MyROGroup_1 v2c local
     group MyROGroup_1 v2c network_1
     view all-mibs included .1 80
     access MyROGroup_1 "" v1 noauth exact all-mibs none none
     access MyROGroup_1 "" v2c noauth exact all-mibs none none
  4. Save and exit the file

Restart the SNMP service

Once the snmpd.conf file has been modified restart the SNMP service by executing the command below :-

service snmpd restart

Confirm SNMP access from the monitoring server

Following the service restart of SNMP on the Openfiler server confirm access to it via SNMP from the monitoring server, as shown in the example below.
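
A minimal sketch of such a check, run from the monitoring server using the net-snmp tools and the public community string configured above, is shown below :-

snmpwalk -v2c -c public {Openfiler server name or IP address} .1.3.6.1.2.1.1

If the access control lines in snmpd.conf are correct the walk of the system subtree should return the system description, uptime, and contact details for the Openfiler server.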

Openfiler – Configuring an iSCSI resource

Introduction

Openfiler can be used to provide iSCSI disks across the network to other hosts and in this article the steps to provide an iSCSI resource on the server are explained.

Creating iSCSI volumes

The final step in configuring the storage on the server is to create the volumes required.  These will be the volumes used to present iSCSI disks to the network.

To create an iSCSI volume perform the following steps :-

  1. Click on the Volumes option and then click on the Add Volume option in the Volumes section
  2. Choose the Volume Group name desired from the dropdown – E.G. Lost-it-vg
  3. Scroll down to the bottom of the screen if necessary and then enter the volume name required – E.G. iSCSI-Disk-1
  4. Set the description to a meaningful description – E.G. iSCSI based disk 1
  5. Set the required space to the desired size of the volume – E.G. 5120 MB for 5GB
  6. Set the Filesystem / Volume Type to iSCSI and then click on the Create button to create the volume.

Enabling the required Services

Once the volumes and volume groups have been configured the next step is to enable the Services required to provide iSCSI resource access on the server.  To configure the services perform the following steps :-

  1. Click on the Services option
  2. Click on Enable next to the iSCSI Target Server service to enable access to iSCSI resources

Configuring an iSCSI target

To configure an iSCSI target perform the following steps :-

  1. Click on the Volumes option and then click on the iSCSI Targets option in the Volumes section
  2. When prompted either configure the IQN for the iSCSI target or click on the Add button to accept the default generated address.
  3. Click on the LUN Mapping tab and click on the Map button to add the LUN mapping.
  4. Next click on the Network ACL tab and set the networks required for access to Allow.
  5. Click on the Update button to configure the Network ACL for the iSCSI target.
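
From a Linux client with the open-iscsi initiator installed, the new target can be checked with the commands below. This is a minimal sketch; the target IQN is whatever was configured or generated in step 2, and the client must be in a network range allowed by the Network ACL :-

iscsiadm -m discovery -t sendtargets -p {Openfiler server IP address}
iscsiadm -m node -T {Target IQN} -p {Openfiler server IP address} --login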

Openfiler – Configuring a Windows file share resource

Introduction

Openfiler can be used to provide both Windows SMB / CIFS and NFS file shares across the network to other hosts and in this article the steps to provide a Windows file share resource on the server are explained.

Pre-requisites

In order to configure Windows file shares on Openfiler the following pre-requisites must be met :-

  • Account access configured in Openfiler
  • A Physical volume configured on the Openfiler server

Creating a filesystem volume

To create a filesystem volume to deliver network shares perform the following steps :-

  1. Click on the Volumes option and then click on the Add Volume option in the Volumes section
  2. Choose the Volume Group name desired from the dropdown – E.G. Lost-it-vg
  3. Scroll down to the bottom of the screen if necessary and then enter the volume name required – E.G. Filesystem-Volume-1
  4. Set the description to a meaningful description – E.G. Filesystem Volume 1
  5. Set the required space to the desired size of the volume – E.G. 5120 MB for 5GB
  6. Set the Filesystem / Volume Type to the desired type – E.G. XFS
  7. Click on the Create button to create the volume.

Enabling the required Services

Once the volumes and volume groups have been configured the next step is to enable the Services required to provide Windows Share resource access on the server.  To configure the services perform the following steps :-

  1. Click on the Services option
  2. Click on Enable next to the SMB / CIFS service to enable access to Windows file share resources.

Creating a sub-folder to be shared

Following the creation of a filesystem volume and enabling the SMB / CIFS service a sub-folder needs to be created which will be used as the share point.  To create a sub-folder to be shared perform the following steps :-

  1. Click on the Shares option and a list of the volume groups and filesystem volumes available on the server will be displayed
  2. Click on the filesystem volume where you wish to create the new share and you will be presented with a dialog box to create a new sub-folder
  3. Enter the desired name of the folder and then click on the Create sub-folder button to create it

Sharing a sub-folder

To share the created sub-folder click on the sub-folder and perform the steps below :-

  1. Click on the Make share button
  2. Set the Share Name and Description to the desired settings
  3. Set the Override SMB/Rsync share name to the same as the Share Name and click on the Change button
  4. Configure the Share access control mode to Controlled access and click on the Update button
  5. Under the Group access configuration section grant access to the necessary Groups as required
  6. Mark the Primary Group you wish to set for the share by Enabling the radio button in the PG column next to the group name
  7. Click on the Update button
  8. Configure access to the Share under the Host access configuration section by enabling the relevant tick box for the SMB/CIFS service against the host or network range you wish to allow access from
  9. Click on the Update button to create the new share
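
To confirm the share is visible it can be listed from a Linux client with the Samba client tools installed, or mapped from a Windows machine. The smbclient sketch below assumes a domain account that was granted access in the steps above :-

smbclient -L //{Openfiler server name or IP address} -U {Domain}\\{Username}

The new share should appear in the returned share list, and from Windows it can be mapped with net use using the same credentials.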