<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwt.com.de/index.php?action=history&amp;feed=atom&amp;title=Manual_Software_RAID_Creation</id>
	<title>Manual Software RAID Creation - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwt.com.de/index.php?action=history&amp;feed=atom&amp;title=Manual_Software_RAID_Creation"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwt.com.de/index.php?title=Manual_Software_RAID_Creation&amp;action=history"/>
	<updated>2026-05-13T20:47:53Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://wiki.bwt.com.de/index.php?title=Manual_Software_RAID_Creation&amp;diff=21&amp;oldid=prev</id>
		<title>BrainwreckedTech: 1 revision</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwt.com.de/index.php?title=Manual_Software_RAID_Creation&amp;diff=21&amp;oldid=prev"/>
		<updated>2014-01-06T01:52:16Z</updated>

		<summary type="html">&lt;p&gt;1 revision&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;Manual software RAID configuration from the command line can be easier than using a UI-guided configuration tool.&lt;br /&gt;
&lt;br /&gt;
=RAID At Boot Time=&lt;br /&gt;
&lt;br /&gt;
The best practice is to avoid placing boot loaders and &amp;lt;tt&amp;gt;/boot&amp;lt;/tt&amp;gt; on disks that will be used in software RAID arrays.  If this isn&amp;#039;t an option, the next-best step is to partition off space at the beginning of each drive in the RAID array for use as &amp;lt;tt&amp;gt;/boot&amp;lt;/tt&amp;gt;.  The space required varies between distributions.  Arch can create very small initramfs images that help &amp;lt;tt&amp;gt;/boot&amp;lt;/tt&amp;gt; fit inside 10MB.  Ubuntu likes to keep older versions of the kernel until you specifically delete them, which can make &amp;lt;tt&amp;gt;/boot&amp;lt;/tt&amp;gt; weigh in at over 128MB during the LTS life span.&lt;br /&gt;
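Carving off that space could look like the following parted sketch. The device name, the 128MiB size, and the MBR label are assumptions for illustration:

```shell
parted -s /dev/sda mklabel msdos                      # fresh MBR partition table
parted -s /dev/sda mkpart primary ext2 1MiB 129MiB    # small partition for /boot
parted -s /dev/sda set 1 boot on                      # mark it bootable
parted -s /dev/sda mkpart primary 129MiB 100%         # remainder becomes the RAID member
```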
&lt;br /&gt;
As for root-on-RAID, you need two things:&lt;br /&gt;
&lt;br /&gt;
* The initramdisk must include the RAID tools necessary to discover and assemble RAID arrays.  Arch needs &amp;lt;tt&amp;gt;mdadm&amp;lt;/tt&amp;gt; specified as a &amp;lt;tt&amp;gt;HOOK&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/etc/mkinitcpio.conf&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* You may need to specify md=[x],[device],[device]... for each RAID array at the end of the kernel line in your boot loader.&lt;br /&gt;
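For example, on a legacy GRUB kernel line, assembling /dev/md0 from two partitions might look like this (device names and the root= value are assumptions):

```shell
# kernel line in menu.lst / grub.conf (legacy GRUB); md= follows the
# kernel's md=[array-number],[device],[device]... persistent-superblock form
kernel /vmlinuz-linux root=/dev/md0 md=0,/dev/sda2,/dev/sdb2
```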
&lt;br /&gt;
=Stopping Existing Arrays=&lt;br /&gt;
&lt;br /&gt;
The Arch Linux installer automatically detects and starts RAID arrays.  If you want to change the partition setup, you have to stop the arrays first.&lt;br /&gt;
&lt;br /&gt;
 mdadm --stop /dev/md0&lt;br /&gt;
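To see which arrays are currently assembled before (and after) stopping them, the kernel exposes md status via procfs:

```shell
cat /proc/mdstat          # lists active md arrays and their member devices
mdadm --detail /dev/md0   # per-array detail, assuming /dev/md0 is assembled
```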
&lt;br /&gt;
The Ubuntu Linux installer does not load RAID arrays unless you specify that you want to set up RAID arrays during the partitioning section.&lt;br /&gt;
&lt;br /&gt;
=Clearing Existing Superblocks=&lt;br /&gt;
&lt;br /&gt;
If you&amp;#039;re re-using a drive that was already part of a software RAID array, most UI tools will choke (in that the UI won&amp;#039;t let you do what you want) when they start &amp;lt;tt&amp;gt;mdadm&amp;lt;/tt&amp;gt; and find old superblocks.  Keep in mind that simple formatting only overwrites the beginning of the disk with filesystem data, plus special blocks spread throughout the device, so old RAID superblocks can survive a reformat.&lt;br /&gt;
&lt;br /&gt;
 mdadm --zero-superblock [device]&lt;br /&gt;
&lt;br /&gt;
If you used the entire drive, &amp;lt;tt&amp;gt;[device]&amp;lt;/tt&amp;gt; will be something like &amp;lt;tt&amp;gt;/dev/sda&amp;lt;/tt&amp;gt;.  If you used partitions, &amp;lt;tt&amp;gt;[device]&amp;lt;/tt&amp;gt; will be something like &amp;lt;tt&amp;gt;/dev/sda1&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;tt&amp;gt;mdadm&amp;lt;/tt&amp;gt; fails to find the superblock, but you know it&amp;#039;s there, you can use &amp;lt;tt&amp;gt;dd&amp;lt;/tt&amp;gt; to zero out all possible locations:&lt;br /&gt;
&lt;br /&gt;
 dd if=/dev/zero of=[device] bs=512 count=265&lt;br /&gt;
 dd if=/dev/zero of=[device] bs=512 count=265 seek=[device-sectors-less-265]&lt;br /&gt;
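A sketch of filling in [device-sectors-less-265], assuming /dev/sda is the member device; blockdev ships with util-linux and reports size in 512-byte sectors:

```shell
SECTORS=$(blockdev --getsz /dev/sda)            # total 512-byte sectors
dd if=/dev/zero of=/dev/sda bs=512 count=265    # zero the start of the disk
dd if=/dev/zero of=/dev/sda bs=512 count=265 \
   seek=$((SECTORS - 265))                      # zero the tail as well
```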
&lt;br /&gt;
=Whole Device vs Partitions=&lt;br /&gt;
&lt;br /&gt;
There&amp;#039;s no difference in performance -- Linux software RAID doesn&amp;#039;t care.  You, on the other hand, may get annoyed when fdisk and parted report that member devices and/or RAID arrays themselves don&amp;#039;t have valid partition tables.  Then again, it may be a trade-off you&amp;#039;re willing to make to not have to worry about partition alignment.&lt;br /&gt;
&lt;br /&gt;
Of course, you&amp;#039;ll have no choice but to use partitions if you can&amp;#039;t put &amp;lt;tt&amp;gt;/boot&amp;lt;/tt&amp;gt; on a non-RAID device.&lt;br /&gt;
&lt;br /&gt;
=Chunk Sizes=&lt;br /&gt;
&lt;br /&gt;
With RAID0 and RAID1, chunk size doesn&amp;#039;t really make a difference, so you can use any chunk size you want.  With RAID4 through RAID6, chunk size does make a difference, and you&amp;#039;ll most likely want a 32K chunk size.&lt;br /&gt;
&lt;br /&gt;
Less typing version:&lt;br /&gt;
&lt;br /&gt;
 mdadm -Cve 1.2 /dev/md0 -l [0|1|4|5|6|10] -c [chunk-size] -n [num-raid-devs] \&lt;br /&gt;
 [raiddev1] [raiddev2] etc. {-x [num-spare-devs] [sparedev1] etc.}&lt;br /&gt;
&lt;br /&gt;
Easier to remember version:&lt;br /&gt;
&lt;br /&gt;
 mdadm --create --verbose --metadata 1.2 /dev/md0 --level=[0|1|4|5|6|10] \&lt;br /&gt;
 --chunk=[chunk-size] --raid-devices=[num-raid-devs] [device1] [device2] etc. \&lt;br /&gt;
 {--spare-devices [num-spare-devs] [sparedev1] etc.}&lt;br /&gt;
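As a filled-in sketch, a three-disk RAID5 with one hot spare and a 32K chunk; the partition names are assumptions:

```shell
mdadm --create --verbose --metadata 1.2 /dev/md0 --level=5 --chunk=32 \
      --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
      --spare-devices=1 /dev/sde1
```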
&lt;br /&gt;
=Block Sizes=&lt;br /&gt;
&lt;br /&gt;
RAID or not, you&amp;#039;ll most likely want 4K block sizes, which is the default for most filesystems.  (Some default to the architecture&amp;#039;s page size, which works out to 4K on most platforms; the exact size can be obtained with &amp;lt;tt&amp;gt;getconf PAGESIZE&amp;lt;/tt&amp;gt;.)&lt;br /&gt;
&lt;br /&gt;
On modern Linux distributions, EXT and XFS filesystems should automatically detect and set proper stride and stripe parameters upon creation to help optimize placement of special blocks.  You can see exactly how a filesystem was mounted by checking the output from &amp;lt;tt&amp;gt;cat /proc/mounts&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
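The stride and stripe-width arithmetic can be worked by hand. A sketch for an assumed 4-disk RAID5 with 32K chunks and 4K filesystem blocks:

```shell
CHUNK_KB=32
BLOCK_KB=4
DATA_DISKS=3                       # 4 disks minus 1 parity, no spares
STRIDE=$((CHUNK_KB / BLOCK_KB))    # chunk size divided by block size
STRIPE=$((STRIDE * DATA_DISKS))    # stride times non-parity non-spare disks
echo "stride=$STRIDE stripe-width=$STRIPE"
```

With these values, the mkfs extended option would be -E stride=8,stripe-width=24.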
*EXT &amp;lt;pre&amp;gt;mkfs -t ext[2|3|4] -L [label] -b 4096 -E stride=[stride],stripe-width=[stripe] [device]&amp;lt;/pre&amp;gt; &amp;lt;pre&amp;gt;tune2fs -E stride=[stride],stripe-width=[stripe] [device]&amp;lt;/pre&amp;gt;&lt;br /&gt;
**stride = chunk size &amp;amp;divide; block size&lt;br /&gt;
**stripe = stride &amp;amp;times; non-parity non-spare disks&lt;br /&gt;
* XFS &amp;lt;pre&amp;gt;mkfs -t xfs -b 4096 -d su=[chunk-size],sw=[non-parity non-spare devices] /dev/md0&amp;lt;/pre&amp;gt; &amp;lt;pre&amp;gt;xfs_db -c unit=[chunk-size] width=[non-parity non-spare disks]&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>BrainwreckedTech</name></author>
	</entry>
</feed>