Choosing a RAID level is an exercise in balancing many factors, including cost, reliability, capacity, and performance. RAID performance can be challenging to understand, mainly because distinct RAID levels use varying techniques and behave somewhat differently in practice. In this article, we will explore the standard RAID levels of RAID 0, 5, 6, and 10 to see how their performance differs. For this article, RAID 1 will be assumed to be a subset of RAID 10. Put simply, a RAID 1 is the same as a RAID 10 array, except that it includes only a single mirrored pair member. As RAID 1 is genuinely a single-pair RAID 10 and behaves as such, this works wonderfully for making RAID performance easy to understand: it simply maps onto the RAID 10 performance curve.

There are two types of performance to look at with all storage: reading and writing. Regarding RAID, reading is straightforward and writing is rather complex. Read performance is effectively stable across all RAID types; it is in write performance that the levels diverge.

To make discussing performance easier, we need to define a few terms, as we will work with some equations. In our discussions, we will use "N" to represent the total number of drives in our array, often referred to as spindles, and "X" to refer to the performance of each drive individually. This allows us to abstract away the RAID array and talk about relative performance as a factor of drive performance, without thinking about raw IOPS (input/output operations per second). This is important, as raw IOPS are often very hard to define, but we can compare performance in a meaningful way by speaking of it in relation to the individual drives within the array.

It is also important to remember that we are talking only about the array's performance, not an entire storage subsystem. Artifacts such as memory caches and solid-state caches can do amazing things to alter the overall performance of a storage subsystem, but they will not fundamentally change the array's performance under the hood. There is no simple formula for determining how different cache options will impact overall performance; suffice it to say that the effect can be very dramatic, depending heavily on the cache choices and the workload. But even the biggest, fastest, most robust cache options cannot change an array's long-term, sustained performance.

RAID is complex, and many factors influence the final performance. One is the implementation of the system itself. A poor implementation might introduce latency, or it may fail to use the available spindles (such as having a RAID 1 array read from only a single disk instead of from both simultaneously). There is no easy way to account for deficiencies in specific implementations, and it is primarily hobby and consumer RAID systems that fail in this respect, so we must assume that all implementations are working to the limits of the specification.

Some types of RAID also carry surprising amounts of computational overhead, while others do not. Primarily, the parity RAID levels require heavy processing to handle write operations, with different levels requiring different amounts of computation per operation. This introduces latency but does not curtail throughput. The latency will vary, however, based on the implementation of the RAID level as well as on the processing capability of the system. Hardware RAID uses a general-purpose CPU (often a Power or ARM RISC processor) or a custom ASIC to handle this computation; ASICs can be very fast but are expensive to produce. Software RAID hands this work off to the server's CPU; usually the server CPU is faster here, but it consumes system resources.
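The heavy write-side processing for parity RAID comes from the read-modify-write cycle. This excerpt does not spell out the mechanics, so the following is a sketch based on the commonly described behavior: for a small write, a parity level must read the old data block and each old parity block, then write the new data and each new parity block.

```python
# Hypothetical sketch of the read-modify-write cycle behind parity
# write overhead. The IO counts are the commonly cited figures, not
# values stated in this article.
def small_write_ios(parity_blocks: int) -> int:
    """IOs needed for one small write on an array with the given
    number of parity blocks per stripe (RAID 5: 1, RAID 6: 2)."""
    reads = 1 + parity_blocks   # read old data + each old parity block
    writes = 1 + parity_blocks  # write new data + each new parity block
    return reads + writes

print(small_write_ios(1))  # RAID 5: 4 IOs per small write
print(small_write_ios(2))  # RAID 6: 6 IOs per small write
```

This is why parity levels pay a per-write cost that mirrored and striped levels do not, independent of how fast the parity math itself runs.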
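The N and X notation can be put to work in a short sketch. The write multipliers used here (1 for RAID 0, 2 for RAID 10, 4 for RAID 5, 6 for RAID 6) are the commonly cited steady-state write penalties; they are an assumption on my part, as this excerpt only defines the notation itself.

```python
# Hypothetical sketch: relative array performance in terms of
# N (spindle count) and X (per-drive IOPS). Write penalties are
# the commonly cited figures, not values taken from this article.
WRITE_PENALTY = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def relative_performance(level: str, n: int, x: float):
    """Return (read IOPS, write IOPS) as NX and NX / write penalty."""
    read = n * x
    write = n * x / WRITE_PENALTY[level]
    return read, write

# Example: a six-drive array of 150-IOPS disks.
for level in WRITE_PENALTY:
    r, w = relative_performance(level, 6, 150)
    print(f"{level}: read {r:.0f} IOPS, write {w:.0f} IOPS")
```

For instance, six 150-IOPS drives give 900 relative read IOPS under every level, but RAID 5's write figure drops to 225 while RAID 10 keeps 450, which is the kind of comparison the N and X notation is meant to make easy.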