RAID Controller Features
With RAID technology, data is striped across an array
(a group) of hard disk drives. Striping is the process of storing
data across all the disk drives that are grouped in an array. This
data distribution scheme complements the way the operating system requests data.
The following features further enhance the performance of your PS/2 server:
Overlapped Input/Output Operation
Cache Size Factors
Cache Size and Diminishing Returns
Hot-Spare Drive and Replacement
Data Protection (Takes you to RAID Levels)
Because the IBM RAID Controller provides multiple data
paths to and from arrayed drives, your server can respond to requests from
several users simultaneously. With its overlapped input/output operation,
if one user requests data that resides on the first drive of the array
and a second user requests data that resides on the second drive, the controller
can simultaneously deliver both pieces of information.
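As a rough illustration of the benefit (a toy model, not the controller's actual firmware), overlapped I/O can be thought of as per-drive parallelism: requests to distinct drives are serviced concurrently, while requests to the same drive must queue.

```python
from collections import Counter

def total_service_time(request_drives, per_io_time=1.0):
    """Toy model of overlapped I/O: each drive works independently,
    so the elapsed time is set by the busiest drive, not by the
    total number of requests. request_drives lists the target
    drive index of each pending request."""
    if not request_drives:
        return 0.0
    load = Counter(request_drives)           # requests queued per drive
    return max(load.values()) * per_io_time  # drives run in parallel

# Two requests to different drives finish in one I/O time...
print(total_service_time([0, 1]))  # 1.0
# ...but two requests to the same drive are serialized.
print(total_service_time([0, 0]))  # 2.0
```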
The granularity at which data from one file is stored
on one drive of the array before subsequent data is stored on the next
drive of the array is called the interleave depth. For the IBM RAID
Controller in your server, the interleave depth is set at 16 sectors to
maximize system performance.
The collection, in logical order, of these 16-sector blocks,
from the first drive of the array to the last drive of the array, is called
a stripe.
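The mapping implied by a fixed interleave depth can be sketched as follows (a simplified model of plain striping; parity placement is ignored, and the drive count and function name are illustrative):

```python
INTERLEAVE_DEPTH = 16  # sectors written to one drive before moving on

def locate(logical_sector, num_drives, depth=INTERLEAVE_DEPTH):
    """Map a logical sector number to (drive index, sector on that
    drive) for a simple striped array with the given interleave depth."""
    stripe, within_stripe = divmod(logical_sector, depth * num_drives)
    drive, offset = divmod(within_stripe, depth)
    return drive, stripe * depth + offset

# With 4 drives, the first 16 sectors land on drive 0, the next 16
# on drive 1, and so on; sector 64 wraps back to drive 0.
print(locate(0, 4))   # (0, 0)
print(locate(16, 4))  # (1, 0)
print(locate(64, 4))  # (0, 16)
```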
Commands are queued in the controller with a queue depth
of 61. To obtain better performance, the commands in the queue will
be reordered and coalesced on a hard disk drive basis. That is, the
controller organizes the commands according to which drive will be responding,
and then orders and combines two or more commands, when possible, before
sending them off to the drives.
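The reorder-and-coalesce step can be sketched like this (a simplification; the real controller firmware is not public, and the command representation here is assumed):

```python
def reorder_and_coalesce(commands):
    """commands: list of (drive, start_sector, sector_count) tuples.
    Group by target drive, sort by start sector, and merge commands
    that touch adjacent or overlapping sector ranges, mimicking what
    a controller with a deep command queue can do."""
    per_drive = {}
    for drive, start, count in commands:
        per_drive.setdefault(drive, []).append((start, count))
    result = {}
    for drive, cmds in per_drive.items():
        cmds.sort()
        merged = [cmds[0]]
        for start, count in cmds[1:]:
            last_start, last_count = merged[-1]
            if last_start + last_count >= start:   # adjacent/overlapping
                new_end = max(last_start + last_count, start + count)
                merged[-1] = (last_start, new_end - last_start)
            else:
                merged.append((start, count))
        result[drive] = merged
    return result

# Three commands for drive 0 collapse into one sequential transfer.
print(reorder_and_coalesce([(0, 32, 16), (1, 0, 16), (0, 0, 16), (0, 16, 16)]))
# {0: [(0, 48)], 1: [(0, 16)]}
```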
The IBM RAID Controller has 4MB of cache memory, which
can be configured to operate in a write-through or write-back mode on a
logical drive basis. (Refer to Changing the Write Policy for more information
about write-through and write-back modes.) Cache memory has parity to detect
memory errors and retry algorithms to recover from errors that appear sporadically.
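The difference between the two write policies can be sketched as follows (a toy model, not the adapter's firmware): write-through commits every write to the drives immediately, while write-back holds data in cache until it is flushed.

```python
class CachedDisk:
    """Toy model of write-through vs. write-back caching."""
    def __init__(self, policy="write-through"):
        self.policy = policy
        self.cache = {}   # sector -> data
        self.disk = {}    # the "persistent" medium

    def write(self, sector, data):
        self.cache[sector] = data
        if self.policy == "write-through":
            self.disk[sector] = data      # hits the platter immediately

    def flush(self):
        self.disk.update(self.cache)      # write-back: deferred until flush

wt = CachedDisk("write-through")
wt.write(0, b"abc")
print(0 in wt.disk)   # True - already on disk

wb = CachedDisk("write-back")
wb.write(0, b"abc")
print(0 in wb.disk)   # False - lost if power fails now
wb.flush()
print(0 in wb.disk)   # True
```

Write-back is faster because the application sees the write complete as soon as it lands in cache, at the cost of the data-loss window the flush closes.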
How important is the amount of cache
RAM on the PassPlay RAID adapter--4 MB, 16 MB, or 64 MB? Under what
circumstances will a cache increase pay off? (The system in question is
running NetWare 4.1, but I'm interested in general info on this subject.)
I notice that the more recent Cheetah RAID adapter
has only 4 MB with no upgrade possible. It seems counterintuitive,
but I seem to remember reading somewhere that large amounts of controller
cache aren't really that useful with modern drives and operating systems.
Having a large cache is only half the truth. A bigger
cache means more damage if the controller chokes and cannot write data
back to the drives. Large caches on Raid controllers make sense only if
they are battery-backed (Ed. I have seen battery-backed 72 pin SIMMs)
and if there is a mechanism that allows you to remove the cache (with the
data), replace the adapter, plug the cache back in, and restart the system
at the point where operation stopped, so that the cached data can be
written down to the drives and the integrity of the data / array maintained.
The older Raid controllers (Server-95 Raid
"Passplay", Fast/Wide Streaming Raid /A "Cheetah" and Fast/Wide Raid PCI
"DAC960") don't have battery-backed cache. Even 4MB of cache memory contains
a large number of "data stripes" (usually 8K blocks).
These data stripes will be lost if the machine
powers down for any reason, the controller fails, or the operating system
hangs. Calculate how many sectors fit in 4MB - the higher the number
of missing sectors, the lower the chance that the Raid utility will be able
to restore the missing data.
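That calculation is simple. Assuming the usual 512-byte SCSI sector size, 4 MB of unflushed cache is 8192 sectors (or 512 of the 8K data stripes mentioned above) that can be in flight when power is lost:

```python
CACHE_BYTES = 4 * 1024 * 1024   # 4 MB of controller cache
SECTOR_BYTES = 512              # typical SCSI sector size (assumption)
STRIPE_BYTES = 8 * 1024         # the usual 8K data stripe

sectors_at_risk = CACHE_BYTES // SECTOR_BYTES
stripes_at_risk = CACHE_BYTES // STRIPE_BYTES
print(sectors_at_risk)  # 8192 sectors of unwritten data
print(stripes_at_risk)  # 512 data stripes
```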
Whether a cache increase pays off depends on several factors:
a) Overall drive data throughput (the cache buffers accesses while
the drives are *mechanically* busy - seek delay / dead zone / recalibration)
b) Data-stripe size (8K normally - 64K under WinNT might be better)
c) Operating system (WinNT and OS/2 are very "swap active")
d) Structure of the RAID itself (Raid-5 uses the cache much more than
Raid-1, because mirroring is inherently fast, while Raid-5 must buffer
data for the parity calculation)
e) Nature of the data blocks. High internal redundancy
of the data will cause higher "hit rates" within the cache than permanent
data streaming with new data, which voids the cached data and only "passes
through" the cache.
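Factor e) can be demonstrated with a small LRU cache simulation (purely illustrative; the sizes and names are made up): repeatedly re-reading a small working set yields a high hit rate, while streaming fresh data never hits.

```python
from collections import OrderedDict

def hit_rate(accesses, capacity):
    """Simulate an LRU block cache and return the fraction of hits."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return hits / len(accesses)

# Re-reading 4 blocks over and over: almost every access hits.
print(hit_rate([0, 1, 2, 3] * 25, capacity=8))   # 0.96
# Streaming 100 distinct blocks: the cache never helps.
print(hit_rate(list(range(100)), capacity=8))    # 0.0
```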
As with all caches, there is a limit beyond which enlarging
the cache any further makes no sense. I think this limit is at around
4MB on a five-drive Raid-5 system running under OS/2 or WinNT. Content
redundancy in the data is mostly absent - so the cache is mostly used to
buffer the Raid data overhead between the drives (during reading / writing /
synchronizing the Raid structure); on the transfer between the drive
subsystem and the processor the cache does not play a major role.
A larger cache here costs only money and bears
the above-mentioned risk of rendering the entire array useless if something
fails.
Cache Size and Diminishing Returns
Generally speaking, increasing
the amount of cache will improve performance. The performance gain
will be greater for sequential-access applications than for random-access
applications.
Typically, increasing the cache from 2 to 4 MB will see
a bigger percentage gain than 4 to 16 MB, and that will see a bigger
percentage gain than 16 to 32 MB, and so on.
The IBM RAID Controller provides adaptive RAID algorithms
for improved performance.
The hot-spare drive is a hard disk drive that is installed
in your server and is defined for automatic use in the event of a drive
failure. The hot-spare drive must be of equal capacity or larger
than the drives in the array. You can define as many hot-spare drives
as you want.
If a drive fails, the system automatically switches to
the hot-spare drive, and a rebuild operation automatically recreates the
data from the defunct drive on the hot-spare drive. When you replace the
failed drive, the system automatically defines the replacement drive as
a hot spare.
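The rebuild works for parity-protected levels because the missing drive's data is the XOR of all the surviving blocks in each stripe. A minimal sketch (layout simplified; RAID-5 parity rotation is ignored):

```python
from functools import reduce

def parity(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = parity(data)               # parity block stored alongside the data

# Drive 1 fails: rebuild its block from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])      # True
```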
Note: A hot-spare drive is
effective only in an array in which no logical drives are defined as RAID
level 0.
No data loss occurs in arrays with logical drives assigned
only RAID level 5 or 1 or a combination of these two levels. Data is lost
in an array with any logical drive assigned RAID level 0.
You must have at least four hard disk drives if you want a hot-spare
drive and RAID level 5. To maintain capacity, the size of the additional
drive can be larger but must be no smaller than the size of the drives
that came with your server. All the drives in an array are configured
to the capacity of the smallest drive.
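The capacity rule can be expressed directly (a sketch under the assumption of RAID level 5, which gives up one drive's worth of space to parity; the function name is illustrative):

```python
def raid5_usable_capacity(drive_sizes_gb):
    """Every drive is truncated to the smallest member's size, and
    one drive's worth of the result holds parity."""
    effective = min(drive_sizes_gb)
    return (len(drive_sizes_gb) - 1) * effective

# Three 2 GB drives plus a 4 GB replacement: the extra 2 GB on the
# big drive is simply unused.
print(raid5_usable_capacity([2, 2, 2, 4]))  # 6
```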