After fighting with the problem detailed in my Btrfs:RAID_Setup[last
post] about this, I decided to go hunting for information about the RAID 5
implementation in Btrfs. It turns out that it hasn't been completely
implemented yet. Given the status verbiage on their wiki page, I'm
surprised it works at all. I suspect the wiki isn't entirely up to date,
though, since RAID 5 does seem to work to a certain extent. I still need to
do more research to hunt this down.

You can find that wiki page
https://btrfs.wiki.kernel.org/index.php/Project_ideas#Raid5.2F6[here].

[[the-new-new-solution]]
== The NEW New Solution

Since RAID 5/6 is not yet completely implemented in Btrfs, I need to find
another solution. Given that I still want redundancy, the only other obvious
option I thought I had here was a
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_1[RAID 1] configuration.
However, as so often happens with Google searches, looking for one thing led me
to something else very interesting. In this case, my search for Linux RAID
setups sent me over to the official kernel.org
https://raid.wiki.kernel.org/index.php/Linux_Raid[RAID page], which details how
to use http://en.wikipedia.org/wiki/Mdadm[mdadm]. This might be a better option
for any RAID level, regardless of Btrfs support, since it detaches the
dependency on the filesystem for that support. Everyone loves a layer of
abstraction.

[[setup---raid-5]]
=== Setup - RAID 5

Let's get the RAID array set up.

----
mdadm -C /dev/md0 -l raid5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Or the long version, which makes a little more sense...
mdadm --create /dev/md0 --level raid5 --raid-devices 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
----
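After creation, md assembles the array and begins an initial sync in the
background, which you can watch in /proc/mdstat (or query with `mdadm -D
/dev/md0`). The sketch below uses a fabricated mdstat snapshot as sample
input, since the real file only exists on a system with a running array:

```shell
# Fabricated /proc/mdstat snapshot (sample data, not real output).
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      1953260544 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 12.6% (123456789/976630272) finish=78.4min speed=181000K/sec
EOF

# On a live system, point this at /proc/mdstat instead of the sample.
grep -E 'recovery|resync' /tmp/mdstat.sample
```

The array is usable while it syncs, though performance will be reduced until
the initial build finishes.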


[[setup---raid-1]]
=== Setup - RAID 1

----
mdadm -C /dev/md0 -l raid1 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Or the long version, which makes a little more sense...
mdadm --create /dev/md0 --level raid1 --raid-devices 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
----


[[what-just-happened]]
=== What Just Happened?

[cols=",,,",options="header",]
|=======================================================================
|mdadm |-C,--create /dev/md0 |-l,--level raid5 |-n,--raid-devices 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
|The mdadm utility itself
|Create a virtual block device at /dev/md0
|Set the RAID level to RAID 5 for our new device
|Use 3 RAID devices: /dev/sdb1, /dev/sdc1, and /dev/sdd1
|=======================================================================
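One practical difference between the two levels is usable capacity: RAID 5
gives you (n - 1) drives' worth of space, while RAID 1 mirrors everything, so
you get one drive's worth no matter how many mirrors you add. A quick
back-of-the-envelope check using my three 1TB drives:

```shell
# Usable capacity, assuming 3 identical 1 TB drives.
drives=3
size_tb=1

# RAID 5: one drive's worth of space goes to parity.
echo "RAID 5: $(( (drives - 1) * size_tb )) TB usable"

# RAID 1: every drive holds a full copy of the data.
echo "RAID 1: $(( size_tb )) TB usable"
```

So with these three drives, RAID 5 buys an extra terabyte over RAID 1, at the
cost of computing parity on every write.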


[[the-rest]]
=== The Rest

We did just create a RAID array and a virtual device to map to it, but that's
all. We still need a filesystem. Given that this whole series of posts has been
about using Btrfs, we'll create one of those. You can still use whatever
filesystem you want, though.

----
mkfs.btrfs /dev/md0
mount /dev/md0 /mnt/home/
----


[[mounting-at-boot]]
=== Mounting at Boot

Mounting at boot with mdadm is a tad more complicated than mounting a typical
block device. Since an array is just that, an array, it must be assembled on
each boot. Thankfully, this isn't hard to do. Simply run the following command
and it will be assembled automatically.

----
mdadm -D --scan >> /etc/mdadm.conf
----

That will append your current mdadm setup to the mdadm config file in /etc/.
Once that's done, you can just add /dev/md0 (or your selected md device) to
/etc/fstab like you normally would.
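For reference, the fstab entry itself is nothing special. The mount point and
options below are just an example; adjust them to your setup:

```
# /etc/fstab - example entry for the array (mount point is illustrative)
/dev/md0    /mnt/home    btrfs    defaults    0 0
```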


[[simple-benchmarks]]
== Simple Benchmarks

Here are some simple benchmarks on my RAID setup. For these I have three
1TB Western Digital Green drives with 64MB cache each.
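A caveat on methodology before the numbers: dd reports its rate as soon as the
writes are handed off, so the page cache can inflate the figures. Adding
conv=fdatasync makes dd flush to disk before reporting, which gives a more
conservative number. A quick sketch (the path and the small 16MB size are
arbitrary choices for illustration; the runs below used plain dd):

```shell
# Write 16 MB of zeros, forcing a flush to disk before dd reports its rate.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=16 conv=fdatasync

# Clean up the scratch file.
rm -f /tmp/ddtest.img
```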


[[single-drive-baseline]]
=== Single Drive Baseline

[[ext4]]
==== Ext4

1GB, block size 1M (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 4.26806 s, 246 MB/s
----

1GB, block size 1K (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 6.93657 s, 148 MB/s
----


[[raid-5]]
=== RAID 5

[[btrfs]]
==== Btrfs

1GB, block size 1M (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.33709 s, 314 MB/s
----

1GB, block size 1K (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 7.99295 s, 128 MB/s
----

[[ext4-1]]
==== Ext4

1GB, block size 1M (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 12.4808 s, 84.0 MB/s
----

1GB, block size 1K (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 13.767 s, 74.4 MB/s
----

[[raid-1]]
=== RAID 1

[[btrfs-1]]
==== Btrfs

1GB, block size 1M (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.61043 s, 290 MB/s
----

1GB, block size 1K (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 9.35171 s, 109 MB/s
----

[[ext4-2]]
==== Ext4

1GB, block size 1M (1000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.00056 s, 131 MB/s
----

1GB, block size 1K (1000000 blocks)

----
[root@zion home]# dd if=/dev/zero of=./test2.img bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 9.3704 s, 109 MB/s
----


Those aren't exactly dazzling write speeds, but they're also not too bad, given
what's happening in the background and that I'm using three standard 7200 RPM
desktop drives with 64MB of cache apiece. Later down the line I might test
this with a RAID 0 to see what the max speed of these drives is (though it
should predictably be about three times the single-drive speed).


[[final-thoughts]]
== Final Thoughts

My favorite thing about this at this point is the layer of abstraction that
doing RAID through mdadm provides (we all know how much Linux folk love
modularity). Using the RAID functionality in Btrfs means I am tied to using
that filesystem. If I ever want to use anything else, I'm stuck unless what I
want to move to has its own implementation of RAID. However, using mdadm, I
can use any filesystem I want, whether it supports RAID or not. Additionally,
the setup wasn't too difficult. Overall, I think (like anyone cares what I
think) that they've done a pretty great job with this.

Many thanks to the folks who contributed to mdadm and the Linux kernel that
runs it all (all 20,000-ish of you). I and many, many other people really
appreciate the great work you do.

With that, I'm going to sign off and continue watching my cat play with/attack
the little foil ball I just gave her.


Category:Linux
Category:Btrfs
Category:Ext4
Category:Storage
Category:RAID


// vim: set syntax=asciidoc:
