004. Configure iSCSI Initiator In Linux

Introduction

Open-iSCSI is an open source implementation of the iSCSI protocol for the Linux operating system.
Here I will show how to configure a simple iSCSI initiator on a Linux server.

Tested On

OS: CentOS 6.3 i386
open-iscsi version: 2.0-872
Hardware: Virtual Machine (VirtualBox 4.1.22)

Prerequisite

In order to configure an iSCSI initiator you need an iSCSI target device that will provide a disk for your server.
You can use the guide below (003. Create iSCSI Target in Linux) to create an iSCSI target using Linux.

Procedure

  • Install the open-iscsi and lsscsi software
yum install iscsi-initiator-utils lsscsi -y
  • Discover LUNs on the iSCSI target server (target_srv is the target server's hostname or IP address)
iscsiadm --mode discovery -t st -p target_srv
  • Log in to the discovered target to configure the new LUN
iscsiadm --mode discovery -t st -p target_srv --login
  • Check for your new SCSI disk device using fdisk
fdisk -l
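  • The lsscsi utility installed in the first step can also list the new disk along with its vendor and model
lsscsi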
  • Create new partition on your new disk and format it
fdisk /dev/sdc
n
p
1
(press Enter twice to accept the default first and last cylinders)
w
mkfs.ext4 /dev/sdc1
  • Configure your newly created filesystem to mount at boot. Do not use the device name (/dev/sdc1) in /etc/fstab because it may not be persistent across reboots
mkdir /iscsi
vi /etc/fstab
...
/dev/disk/by-id/scsi-1IET_00010001-part1        /iscsi  ext4    defaults 0 0
...
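  • To find the persistent by-id name for your disk, list /dev/disk/by-id (the scsi-1IET_... ID above comes from the IET target and will differ on your system)
ls -l /dev/disk/by-id/ | grep sdc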
  • Mount the newly created file system
mount /iscsi
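  • Verify that the file system is mounted
df -h /iscsi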

Useful Commands

  • Show active sessions (-P 3 prints full details, including the attached SCSI devices)
iscsiadm -m session
iscsiadm -m session -P 3
  • Check node configuration
iscsiadm -m node -p target_srv
  • Change the device startup mode; for example, set it to automatic so the session is restored at boot
iscsiadm -m node -p target_srv -o update -n node.startup -v automatic

Remove iSCSI Device

First make sure the device is not in use and remove any reference to it.
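Assuming the /iscsi mount point from the procedure above, that means unmounting the file system and deleting its line from /etc/fstab:
umount /iscsi
vi /etc/fstab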

  • Log out from the configured iSCSI device
iscsiadm --mode node --targetname iqn.2012-10.com.nachum234:server.target1 --portal target_srv --logout
  • Delete the discovery record of the target
iscsiadm -m discovery -t send_targets -p target_srv -o delete
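  • Delete the node record itself (shown with the example IQN from the logout step)
iscsiadm -m node -T iqn.2012-10.com.nachum234:server.target1 -p target_srv -o delete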

003. Create iSCSI Target in Linux

Introduction

tgt is a SCSI target framework for Linux. Using tgt you can configure SCSI target drivers such as iSCSI, FCP, iSER and more.

Here I will show how to configure a simple iSCSI target on a Linux server.

Tested On

OS: CentOS 6.3 i386
tgtd version: 1.0.4
Hardware: Virtual Machine (VirtualBox 4.1.22)

Procedure

  • Install tgt software
yum install scsi-target-utils -y
  • Configure a new target device in /etc/tgt/targets.conf
vi /etc/tgt/targets.conf
<target iqn.2012-10.com.nachum234:server.target1>
    backing-store /dev/sdb2
</target>
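  • Optionally restrict access to the target with an initiator-address line in targets.conf; a minimal sketch, assuming your initiators live on the 192.168.1.0/24 network
<target iqn.2012-10.com.nachum234:server.target1>
    backing-store /dev/sdb2
    initiator-address 192.168.1.0/24
</target>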
  • Start the tgtd daemon and configure it to run at boot
service tgtd start
chkconfig tgtd on
  • Show your configured targets
tgt-admin -s

Please visit http://stgt.sourceforge.net/ for more information about Linux tgt configuration and usage.

002. Create Software RAID 10 With mdadm

Tested On

OS: CentOS 6.3 i386
mdadm version: v3.2.3
Hardware: Virtual Machine (VirtualBox 4.1.22)

Introduction

mdadm is the tool for managing Linux software RAID. With mdadm you can build software RAID arrays of different levels on your Linux server. In this post I will show how to create a RAID 10 array using 4 disks.

RAID 10 is a stripe of mirrored disks: it uses an even number of disks (4 or more), creates mirror sets from disk pairs and then combines them all together with a stripe. RAID 10 has great fault tolerance because of its mirror pairs and very good performance because of the striping. For example, with the four 3 GB partitions used below and the default layout, sdb1+sdc1 and sdd1+sde1 form the two mirror pairs, giving about 6 GB of usable striped capacity.

Creating Partitions

  • First we will create Linux RAID partitions on our disks
fdisk /dev/sdb
Command (m for help): n -> enter
Command action
   e   extended
   p   primary partition (1-4)
p -> enter
Partition number (1-4): 1 -> enter
First cylinder (1-1044, default 1): enter
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1044, default 1044): +3G enter

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
  • Continue creating partitions by repeating the above procedure on all the other disks that will participate in the RAID, or clone the partition table as shown below
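  • Alternatively, copy the partition table with sfdisk instead of repeating the interactive fdisk session (assuming sdc, sdd and sde are the remaining RAID disks)
sfdisk -d /dev/sdb | sfdisk /dev/sdc
sfdisk -d /dev/sdb | sfdisk /dev/sdd
sfdisk -d /dev/sdb | sfdisk /dev/sde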
  • Create a new RAID 10 array using the partitions we created
yum install mdadm -y
mdadm --create /dev/md0 --level raid10 --name data --raid-disks 4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
echo "MAILADDR [email protected]" >> /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
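  • The new array starts an initial resync in the background; you can follow its progress before continuing
cat /proc/mdstat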
  • Create a new file system on the new RAID device
mkfs.ext4 /dev/disk/by-id/md-name-centos-62-1.localdomain:data
mkdir /data
mount /dev/disk/by-id/md-name-centos-62-1.localdomain:data /data
  • Change /etc/fstab if you want to automatically mount the RAID device. Use the device ID in fstab and not /dev/md0 because it may not be persistent across reboots
vi /etc/fstab
...
/dev/disk/by-id/md-name-centos-62-1.localdomain:data        /data                     ext4    defaults        1 1
  • Reboot the system to check that the RAID 10 array is automatically started and mounted after a reboot
reboot

Testing Our New RAID 10

Here we will test two disk failures in our RAID. Because RAID 10 uses mirror sets, the array continues to function as long as the faulty disks belong to different mirror sets.

  • Check the array status
mdadm --detail /dev/disk/by-id/md-name-centos-62-1.localdomain\:data
  • Simulate disk sdb1 failure
mdadm --manage --set-faulty /dev/disk/by-id/md-name-centos-62-1.localdomain\:data /dev/sdb1
  • Check syslog for new failure messages
tail /var/log/messages

Oct  3 16:43:42 centos-62-1 kernel: md/raid10:md0: Disk failure on sdb1, disabling device.
Oct  3 16:43:42 centos-62-1 kernel: md/raid10:md0: Operation continuing on 3 devices.
  • Check array status
mdadm --detail /dev/disk/by-id/md-name-centos-62-1.localdomain\:data
cat /proc/mdstat
  • Simulate disk sdd1 failure
mdadm --manage --set-faulty /dev/disk/by-id/md-name-centos-62-1.localdomain\:data /dev/sdd1
  • Check syslog for new failure messages
tail /var/log/messages

Oct  3 16:45:01 centos-62-1 kernel: md/raid10:md0: Disk failure on sdd1, disabling device.
Oct  3 16:45:01 centos-62-1 kernel: md/raid10:md0: Operation continuing on 2 devices.
  • Check array status
mdadm --detail /dev/disk/by-id/md-name-centos-62-1.localdomain\:data
cat /proc/mdstat
  • Remove sdb1 from the array and re-add it
mdadm /dev/disk/by-id/md-name-centos-62-1.localdomain\:data -r /dev/sdb1
mdadm /dev/disk/by-id/md-name-centos-62-1.localdomain\:data -a /dev/sdb1
  • Check array status
mdadm --detail /dev/disk/by-id/md-name-centos-62-1.localdomain\:data
cat /proc/mdstat
  • Remove sdd1 from the array and re-add it
mdadm /dev/disk/by-id/md-name-centos-62-1.localdomain\:data -r /dev/sdd1
mdadm /dev/disk/by-id/md-name-centos-62-1.localdomain\:data -a /dev/sdd1
  • Check array status
mdadm --detail /dev/disk/by-id/md-name-centos-62-1.localdomain\:data
cat /proc/mdstat
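  • The MAILADDR line we added to /etc/mdadm.conf is used by the mdadm monitor; on CentOS 6 enable the mdmonitor service so real disk failures are mailed to you
service mdmonitor start
chkconfig mdmonitor on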

Please visit https://raid.wiki.kernel.org for more information about Linux software RAID configuration and usage.

001. Converting Linux System From Non-RAID Disk To Software RAID (mdadm)

Tested On

OS: CentOS 6.3 i386
mdadm version: v3.2.3
Hardware: Virtual Machine (VirtualBox 4.1.22)

Introduction

mdadm is the tool for managing Linux software RAID. With mdadm you can build software RAID arrays of different levels on your Linux server. In this post I will show how to convert an existing OS installation to RAID 1 for fault tolerance.

Pre-Conversion System

  • Partition sda1 is used for /boot and one logical volume is used for /
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      2.5G  812M  1.6G  34% /
tmpfs                 250M     0  250M   0% /dev/shm
/dev/sda1             485M   85M  375M  19% /boot

Converting to RAID 1

  • Insert a new disk into the system and create a Linux RAID partition on it using fdisk
fdisk /dev/sdb
Command (m for help): n -> enter
Command action
   e   extended
   p   primary partition (1-4)
p -> enter
Partition number (1-4): 1 -> enter
First cylinder (1-1044, default 1): enter
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1044, default 1044): +3G enter

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): a
Partition number (1-4): 1

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
  • Create a new RAID 1 array in degraded mode using one disk (sdb1). Metadata version 1.0 keeps the RAID superblock at the end of the partition, so the boot loader can read the filesystem as if it were on a plain partition
yum install mdadm -y
mdadm --create /dev/md0 --level raid1 --raid-disks 2 missing /dev/sdb1 --metadata 1.0
echo "MAILADDR [email protected]" >> /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
  • Create a new file system on the new RAID device and copy the data from the original disk to the RAID
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
cd /
find . -xdev | cpio -pdm /mnt/
cd /boot
find . -xdev | cpio -pdm /mnt/boot/
  • Change /etc/fstab on the RAID device
vi /mnt/etc/fstab
/dev/md0        /                       ext4    defaults        1 1
tmpfs           /dev/shm                tmpfs   defaults        0 0
devpts          /dev/pts                devpts  gid=5,mode=620  0 0
sysfs           /sys                    sysfs   defaults        0 0
proc            /proc                   proc    defaults        0 0
  • Here is the old fstab file
cat /etc/fstab
/dev/mapper/VolGroup-lv_root                    /                       ext4    defaults        1 1
UUID=d898d82b-bb81-4ff5-9261-681b66bc9c7d       /boot                   ext4    defaults        1 2
/dev/mapper/VolGroup-lv_swap                    swap                    swap    defaults        0 0
tmpfs                                           /dev/shm                tmpfs   defaults        0 0
devpts                                          /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                                           /sys                    sysfs   defaults        0 0
proc                                            /proc                   proc    defaults        0 0
  • Setup grub on /dev/sdb (the device line maps /dev/sdb to hd0 so the boot code is written to sdb's MBR)
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
  • Change the grub.conf file on the new RAID device
vi /mnt/boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-279.5.2.el6.i686)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-279.5.2.el6.i686 ro root=/dev/md0 rd_NO_LUKS LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto quiet KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
        initrd /boot/initramfs-2.6.32-279.5.2.el6.i686.img
  • I added the /boot prefix because I removed the separate /boot partition. Here is the old grub.conf file
cat /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-279.5.2.el6.i686)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-279.5.2.el6.i686 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto quiet rd_LVM_LV=VolGroup/lv_root  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
        initrd /initramfs-2.6.32-279.5.2.el6.i686.img
  • Create a new initrd file with RAID 1 support
cd /mnt/boot
mkinitrd --preload raid1 --with=raid1 raid-ramdisk 2.6.32-279.5.2.el6.i686
mv initramfs-2.6.32-279.5.2.el6.i686.img initramfs-2.6.32-279.5.2.el6.i686.img.bck
mv raid-ramdisk initramfs-2.6.32-279.5.2.el6.i686.img
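  • Check that the raid1 module made it into the new image using lsinitrd (part of dracut on CentOS 6)
lsinitrd initramfs-2.6.32-279.5.2.el6.i686.img | grep raid1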
  • Reboot the system from the new drive (sdb); you may need to change the boot order in the BIOS or VM settings
reboot
  • After the system is up with the root filesystem mounted from the RAID device, we can destroy the original disk layout and prepare it for the RAID
fdisk /dev/sda
Command (m for help): d -> enter
Partition number (1-4): 2 -> enter

Command (m for help): d -> enter
Selected partition 1

Command (m for help): n -> enter
Command action
   e   extended
   p   primary partition (1-4)
p -> enter
Partition number (1-4): 1 -> enter
First cylinder (1-522, default 1): enter
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-522, default 522): +3G -> enter

Command (m for help): t -> enter
Selected partition 1
Hex code (type L to list codes): fd -> enter
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): a -> enter
Partition number (1-4): 1 -> enter

Command (m for help): w -> enter
The partition table has been altered!

Calling ioctl() to re-read partition table. 
Syncing disks.
  • Reread the partition table of sda using partprobe
yum install parted -y
partprobe /dev/sda
  • Add /dev/sda1 to the array
mdadm /dev/md0 --add /dev/sda1
  • Wait for the rebuild process to complete. Check the status using the /proc/mdstat file
cat /proc/mdstat
  • Setup grub on /dev/sda
grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
  • Reboot to check that the system comes up from sda
reboot

Testing Our New RAID 1

  • Check the array status
mdadm --detail /dev/md0
  • Simulate disk sda1 failure
mdadm --manage --set-faulty /dev/md0 /dev/sda1
  • Check syslog for new failure messages
tail /var/log/messages

Sep 23 21:27:36 centos-62-1 kernel: md/raid1:md0: Disk failure on sda1, disabling device.
Sep 23 21:27:36 centos-62-1 kernel: md/raid1:md0: Operation continuing on 1 devices.
  • Check array status
mdadm --detail /dev/md0
cat /proc/mdstat
  • Remove sda1 from the array and re-add it
mdadm /dev/md0 -r /dev/sda1
mdadm /dev/md0 -a /dev/sda1
  • Check array status
mdadm --detail /dev/md0
cat /proc/mdstat

Please visit https://raid.wiki.kernel.org for more information about Linux software RAID configuration and usage.