After fighting with the problem detailed in my Btrfs:RAID_Setup[last
post] about this, I decided to go hunting for information about the RAID 5
implementation in Btrfs. It turns out that it hasn't been completely
implemented yet. Given the status verbiage on the project wiki, I'm
surprised it works at all. I suspect the wiki isn't entirely up to date,
though, since RAID 5 does seem to work to a certain extent. I still need to
do more research to pin this down.

You can find that wiki page
https://btrfs.wiki.kernel.org/index.php/Project_ideas#Raid5.2F6[here].

[[the-new-new-solution]]
== The NEW New Solution

Since RAID 5/6 is not yet completely implemented in Btrfs, I need to find
another solution. Given that I still want redundancy, the only other obvious
option I thought I had was a
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_1[RAID 1] configuration.
However, as so often happens with Google searches, looking for one thing led me
to something else very interesting. In this case, my search for Linux RAID
setups sent me over to the official kernel.org
https://raid.wiki.kernel.org/index.php/Linux_Raid[RAID page], which details how
to use http://en.wikipedia.org/wiki/Mdadm[mdadm]. This might be a better option
for any RAID level, regardless of Btrfs support, since it detaches the RAID
functionality from the filesystem. Everyone loves a layer of abstraction.

[[setup---raid-5]]
=== Setup - RAID 5

Let's get the RAID array set up.

----
mdadm -C /dev/md0 -l raid5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Or the long version, which makes a little more sense...
mdadm --create /dev/md0 --level raid5 --raid-devices 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
----


[[setup---raid-1]]
=== Setup - RAID 1

----
mdadm -C /dev/md0 -l raid1 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Or the long version, which makes a little more sense...
mdadm --create /dev/md0 --level raid1 --raid-devices 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
----


[[what-just-happened]]
=== What Just Happened?

[cols=",",options="header",]
|=======================================================================
|Option |Meaning
|-C, --create /dev/md0 |Create a virtual block device at /dev/md0
|-l, --level raid5 |Set the RAID level to RAID 5 for our new device
|-n, --raid-devices 3 /dev/sdb1 /dev/sdc1 /dev/sdd1 |Use three member devices: /dev/sdb1, /dev/sdc1, and /dev/sdd1
|=======================================================================
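
Since RAID 5 costs us one drive's worth of space, here's a quick sketch (plain
shell arithmetic, using the 1TB drives from the benchmarks below) of where that
space goes and how XOR parity lets the array survive a dead drive. The byte
values are made up purely for illustration:

----
#!/bin/sh
# With n drives, RAID 5 gives (n - 1) drives' worth of usable space;
# one drive's worth is consumed by parity, striped across all members.
drives=3
size_gb=1000
echo "usable: $(( (drives - 1) * size_gb )) GB"   # usable: 2000 GB

# Parity is the bitwise XOR of the data blocks in a stripe. If one
# block is lost, XOR the parity with the survivors to rebuild it.
d1=202; d2=254                  # two data blocks (made-up values)
parity=$(( d1 ^ d2 ))           # stored on the third drive
recovered=$(( parity ^ d2 ))    # drive holding d1 dies; rebuild it
echo "recovered: $recovered"    # prints 202, matching d1
----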


[[the-rest]]
=== The Rest

We did just create a RAID array and a virtual device mapped to it, but that's
all. We still need a filesystem. Given that this whole series of posts has been
about using Btrfs, we'll create one of those. You can still use whatever
filesystem you want, though.

----
mkfs.btrfs /dev/md0
mount /dev/md0 /mnt/home/
----


[[mounting-at-boot]]
=== Mounting at Boot

Mounting at boot with mdadm is a tad more complicated than mounting a typical
block device. Since an array is just that, an array, it must be assembled on
each boot. Thankfully, this isn't hard to do. Simply run the following command
and it will be assembled automatically.

----
mdadm -D --scan >> /etc/mdadm.conf
----

That will append your current mdadm setup to the mdadm config file in /etc/.
Once that's done, you can just add /dev/md0 (or your selected md device) to
/etc/fstab like you normally would.
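
For reference, the two files end up looking something like this. The UUID and
name below are made up; yours will be whatever `mdadm -D --scan` actually
spits out:

----
# Appended to /etc/mdadm.conf by the command above
ARRAY /dev/md0 metadata=1.2 name=zion:0 UUID=f9c2e1aa:6b3d0c41:8a27d3fe:1b4c5d6e

# /etc/fstab entry mounting the array
/dev/md0    /mnt/home    btrfs    defaults    0 0
----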


[[simple-benchmarks]]
== Simple Benchmarks

Here are some simple benchmarks of my RAID setup. For these I used three
1TB Western Digital Green drives with 64MB of cache each.


[[single-drive-baseline]]
=== Single Drive Baseline

[[ext4]]
==== Ext4

1GB - 1M block size (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 4.26806 s, 246 MB/s
----

1GB - 1K block size (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 6.93657 s, 148 MB/s
----
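
As an aside, the MB/s figure dd reports is nothing magic, just bytes divided by
elapsed seconds (in decimal megabytes), so you can sanity-check any of these
numbers yourself:

----
# 1048576000 bytes / 4.26806 s, in decimal MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 1048576000 / 4.26806 / 1000000 }'
# prints: 246 MB/s
----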


[[raid-5]]
=== RAID 5

[[btrfs]]
==== Btrfs

1GB - 1M block size (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.33709 s, 314 MB/s
----

1GB - 1K block size (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 7.99295 s, 128 MB/s
----

[[ext4-1]]
==== Ext4

1GB - 1M block size (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 12.4808 s, 84.0 MB/s
----

1GB - 1K block size (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 13.767 s, 74.4 MB/s
----

[[raid-1]]
=== RAID 1

[[btrfs-1]]
==== Btrfs

1GB - 1M block size (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.61043 s, 290 MB/s
----

1GB - 1K block size (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 9.35171 s, 109 MB/s
----


[[ext4-2]]
==== Ext4

1GB - 1M block size (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.00056 s, 131 MB/s
----

1GB - 1K block size (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 9.3704 s, 109 MB/s
----


Those aren't exactly dazzling write speeds, but they're not too bad either,
given what's happening in the background and that I'm using three standard
7200 rpm desktop drives with 64MB of cache apiece. Later down the line I might
test this with RAID 0 to see what the maximum speed of these drives is (though
it should predictably be about three times the single-drive speed).
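
One caveat on my numbers: a plain dd from /dev/zero can partly measure the page
cache rather than the drives. If I redo these benchmarks, adding conv=fdatasync
(which makes dd flush the file to disk before reporting) should give more
honest figures:

----
# Same 1GB write, but flush to disk before dd computes the rate
dd if=/dev/zero of=./test.img bs=1M count=1000 conv=fdatasync
----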


[[final-thoughts]]
== Final Thoughts

My favorite thing about this at this point is the layer of abstraction that
doing RAID through mdadm provides (we all know how much Linux folks love
modularity). Using the RAID functionality in Btrfs means I am tied to that
filesystem. If I ever want to use anything else, I'm stuck unless whatever I
want to move to has its own implementation of RAID. With mdadm, however, I can
use any filesystem I want, whether it supports RAID or not. The setup wasn't
too difficult either. Overall, I think (like anyone cares what I think) that
they've done a pretty great job with this.

Many thanks to the folks who contributed to mdadm and the Linux kernel that
runs it all (all 20,000-ish of you). I and many, many other people really
appreciate the great work you do.

With that, I'm going to sign off and continue watching my cat play with/attack
the little foil ball I just gave her.



Category:Linux
Category:Btrfs
Category:Ext4
Category:Storage
Category:RAID


// vim: set syntax=asciidoc:
