RAID Part-1


Redundant Array of Independent Disks


RAID (originally redundant array of inexpensive disks; now commonly redundant array of independent disks) is a data storage virtualization technology that combines multiple disk drive components into a logical unit for the purposes of data redundancy or performance improvement.

RAID can be configured at a number of different levels:

1.    Level 0           -           striped disk array without fault tolerance
2.    Level 1           -           mirroring and duplexing
3.    Level 2           -           error-correcting coding
4.    Level 3           -           bit-interleaved parity
5.    Level 4           -           dedicated parity drive
6.    Level 5           -           block interleaved distributed parity
7.    Level 6           -           independent data disks with double parity
8.    Level 10        -           a stripe of mirrors

RAID Level 0: It is simply striping, with no redundancy. RAID Level 0 requires a minimum of 2 drives to implement; a creation sketch follows the list below.
  • RAID 0 implements a striped disk array: the data is broken down into blocks and each block is written to a separate disk drive
  • I/O performance is greatly improved by spreading the I/O load across many channels and drives
  • Best performance is achieved when data is striped across multiple controllers with only one drive per controller
  • No parity calculation overhead is involved
  • Very simple design
  • Easy to implement
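As an illustration, a striped array like this can be created with Linux mdadm. This is only a sketch: the member partitions /dev/sdb1 and /dev/sdc1 and the 64 KB chunk size are assumed values, not part of the original article.

# mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdb1 /dev/sdc1      - stripe two members into one array
# cat /proc/mdstat                                                                       - confirm the array is active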
RAID Level 1: For highest performance, the controller must be able to perform two concurrent separate Reads per mirrored pair or two duplicate Writes per mirrored pair. RAID Level 1 requires a minimum of 2 drives to implement; see the mirroring example after the list.


  • One Write or two Reads possible per mirrored pair
  • Twice the Read transaction rate of single disks, same Write transaction rate as single disks
  • 100% redundancy of data means no rebuild is necessary in case of a disk failure, just a copy to the replacement disk
  • Transfer rate per block is equal to that of a single disk
  • Under certain circumstances, RAID 1 can sustain multiple simultaneous drive failures
  • Simplest RAID storage subsystem design
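For illustration, the same tool builds a two-disk mirror; the member names /dev/sdb1 and /dev/sdc1 are placeholders only.

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1      - mirror the two members
# mdadm --detail /dev/md1                                                     - both members should show as active and in sync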

RAID Level 2: Each bit of a data word is written to a data disk drive, and each data word has its Hamming Code ECC word recorded on the ECC disks. On Reads, the ECC code verifies correct data or corrects single-disk errors.


  • "On the fly" data error correction
  • Extremely high data transfer rates possible
  • The higher the data transfer rate required, the better the ratio of data disks to ECC disks
  • Relatively simple controller design compared to RAID levels 3, 4 & 5

RAID Level 3: Byte-level striping with dedicated parity. The data block is subdivided ("striped") and written across the data disks; stripe parity is generated on Writes, recorded on the parity disk and checked on Reads. Requires a minimum of 3 disks to implement; a short parity example follows the list.

 


  • Very high Read data transfer rate
  • Very high Write data transfer rate
  • Disk failure has an insignificant impact on throughput
  • Low ratio of ECC (parity) disks to data disks means high efficiency
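The parity used in RAID 3 (and later in RAID 4 and 5) is a simple bitwise XOR across the data disks, so any single lost disk can be rebuilt by XOR-ing the survivors with the parity. A one-byte toy example with made-up values 0xA5 and 0x3C on two data disks (note that the Linux md driver does not offer a RAID 3 level, so no mdadm command is shown here):

# printf '0x%X\n' $(( 0xA5 ^ 0x3C ))      - parity byte stored on the parity disk: prints 0x99
# printf '0x%X\n' $(( 0xA5 ^ 0x99 ))      - rebuilding the lost second byte from the survivor and parity: prints 0x3C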
RAID Level 4: Block-level striping with dedicated parity. Each entire block is written onto a data disk. Parity for same-rank blocks is generated on Writes, recorded on the parity disk and checked on Reads. Requires a minimum of 3 disks to implement; see the mdadm sketch after the list.
  • Very high Read data transaction rate
  • Low ratio of ECC (parity) disks to data disks means high efficiency
  • High aggregate Read transfer rate
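Linux md does support a dedicated-parity level 4, so a minimal creation sketch (member names are placeholders) would look like this:

# mdadm --create /dev/md4 --level=4 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1      - two data members plus one dedicated parity member
# mdadm --detail /dev/md4                                                               - verify the level, members and state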

RAID Level 5: Block-level striping with distributed parity. Each entire data block is written on a data disk; parity for blocks in the same rank is generated on Writes, recorded in a distributed location and checked on Reads. Requires a minimum of 3 disks to implement; a failure-and-rebuild sketch follows the list.


  • Highest Read data transaction rate
  • Medium Write data transaction rate
  • Low ratio of ECC (parity) disks to data disks means high efficiency
  • Good aggregate transfer rate
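Distributed parity is what lets a RAID 5 array survive one failed member and rebuild onto a replacement. A hedged sketch of that cycle, assuming the three-member /dev/md0 array created later in this article and a spare partition /dev/sde1:

# mdadm /dev/md0 --fail /dev/sdb1         - simulate a member failure
# mdadm /dev/md0 --remove /dev/sdb1       - remove the failed member from the array
# mdadm /dev/md0 --add /dev/sde1          - add the replacement; parity drives the rebuild
# cat /proc/mdstat                        - shows the recovery progress while data is reconstructed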
RAID Level 6: Block-level striping with double distributed parity. Two independent parity computations are used to provide protection against double disk failure, and two different algorithms are employed to achieve this. Requires a minimum of 4 disks to implement; a creation sketch follows the list.


  • RAID 6 is essentially an extension of RAID level 5 which allows additional fault tolerance by using a second independent distributed parity scheme (dual parity)
  • Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives; RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures
  • RAID 6 protects against multiple bad-block failures while non-degraded
  • RAID 6 protects against a single bad-block failure while operating in a degraded mode
  • Perfect solution for mission-critical applications
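A dual-parity array needs at least four members; a minimal mdadm sketch with placeholder device names:

# mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1      - the capacity of two members is used for the two parity sets
# mdadm --detail /dev/md6                                                                         - any two members can fail without data loss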
RAID Level 10: Disks within each group are mirrored and the groups are then striped. Requires a minimum of 4 disks to implement; see the example after the list.



  • RAID 10 is implemented as a striped array whose segments are RAID 1 arrays
  • RAID 10 has the same fault tolerance as RAID level 1
  • RAID 10 has the same overhead for fault tolerance as mirroring alone
  • High I/O rates are achieved by striping RAID 1 segments
  • Under certain circumstances, a RAID 10 array can sustain multiple simultaneous drive failures
  • Excellent solution for sites that would have otherwise gone with RAID 1 but need an additional performance boost
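mdadm can build this mirrored-then-striped layout in a single step as level 10; the four member names below are placeholders:

# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1      - two mirrored pairs, striped together
# cat /proc/mdstat                                                                                  - confirm the array and its layout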

RAID can be implemented in two ways: 1. software RAID and 2. hardware RAID.
Let's look at the differences between them; a quick way to check which kind a system is running follows the comparison below.

SOFTWARE RAID vs HARDWARE RAID

1. CPU: software RAID uses the host system's CPU; hardware RAID has its own dedicated processor.
2. Cost: software RAID is low cost; hardware RAID costs more.
3. Data integrity: software RAID can suffer data-integrity issues after a system crash; hardware RAID does not have this problem.
4. Write-back cache: software RAID has none; a hardware RAID controller is capable of write-back caching.
5. OS migration: software RAID allows only limited operating-system migrations; a hardware RAID set can be moved to any OS type.
6. Boot protection: software RAID is unprotected at boot (a drive failure or corrupted data before the RAID software becomes active leaves the system inoperable); hardware RAID is protected at boot, with no impact on data availability when the boot drive has medium errors or fails completely.
7. Performance: software RAID adds load to the host, so performance issues are possible; hardware RAID offloads that work and has no such performance impact.
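A quick, non-authoritative way to check which kind of RAID a Linux system is using: software (md) arrays appear in /proc/mdstat, while a hardware controller is normally visible on the PCI bus.

# cat /proc/mdstat              - lists any Linux software (md) RAID arrays
# lspci | grep -i raid          - lists PCI RAID controller cards, if hardware RAID is present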

Add new disks for RAID Creation
 

Create partitions on the new disks; the partition type should be Linux raid autodetect (fd). A sketch of this step is shown below.
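Here is that step with fdisk on a hypothetical new disk /dev/sdb (the keystrokes are entered at fdisk's interactive prompt):

# fdisk /dev/sdb                 - n (new partition), t (change type), fd (Linux raid autodetect), w (write and exit)
# partprobe /dev/sdb             - re-read the partition table
# fdisk -l /dev/sdb              - the new partition should now show type Linux raid autodetect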



Creating RAID Device
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdb2 /dev/sdb3
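After creation it is worth verifying the array before putting a file system on it (simple checks; output will vary per system):

# cat /proc/mdstat               - shows md0 with its member partitions and sync status
# mdadm --detail /dev/md0        - shows the level, number of devices, state and chunk size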
 

# mkfs.ext3 /dev/md0         - create an ext3 file system on the RAID device
 

Mounting and using the RAID device
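A minimal sketch of this last step, with /raid5 as an assumed mount point:

# mkdir /raid5                   - create a mount point
# mount /dev/md0 /raid5          - mount the RAID device
# df -h /raid5                   - confirm it is mounted and check the usable size
# echo "/dev/md0 /raid5 ext3 defaults 0 0" >> /etc/fstab      - optional: mount it automatically at every boot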
 
