RAID setup



Back to Hardware issues | Forward to Detecting, querying and testing

General setup


This is what you need for any of the RAID levels:

  • A kernel with the appropriate md support, either as modules or built-in. Preferably a kernel from the 4.x series, although most of this should work fine with later 3.x kernels, too.
  • The mdadm tool
  • Patience, Pizza, and your favorite caffeinated beverage.


The first two items are included as standard in most GNU/Linux distributions today.

If your system has RAID support, you should have a file called /proc/mdstat. Remember it, that file is your friend. If you do not have that file, maybe your kernel does not have RAID support.

If you're sure your kernel has RAID support, you may need to run modprobe raid[RAID mode] to load RAID support into your kernel, e.g. to support raid5:
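  # the raid456 module provides the raid4/5/6 personalities on current kernels
  modprobe raid456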

See what the file contains, by doing a
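  cat /proc/mdstat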

It should tell you that you have the right RAID personality (e.g. RAID mode) registered, and that no RAID devices are currently active. See the /proc/mdstat page for more details.

Preparing and partitioning your disk devices

Arrays can be built on top of entire disks or on partitions.

This leads to 2 frequent questions:

  • Should I use entire device or a partition?
  • What partition type?

These are discussed in Partition Types.

Downloading and installing mdadm - the RAID management tool

mdadm is now the standard RAID management tool and should be found in any modern distribution.

You can retrieve the most recent version of mdadm with
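  # one option, assuming the upstream repository hosted on kernel.org
  git clone https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git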

In the absence of any other preferences, do that in the /usr/local/src directory. As a linux-specific program there is none of this autoconf stuff - just follow the instructions as per the INSTALL file.

Alternatively just use the normal distribution method for obtaining the package:

Debian, Ubuntu:
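  apt-get install mdadm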

Gentoo:
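  emerge sys-fs/mdadm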

RedHat:
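  yum install mdadm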

[open]SUSE:
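  zypper install mdadm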

Mdadm modes of operation

mdadm is well documented in its manpage - well worth a read.

mdadm has 7 major modes of operation. Normal operation just uses the 'Create', 'Assemble' and 'Monitor' commands - the rest come in handy when you're messing with your array; typically fixing it or changing it.

1. Create

Create a new array with per-device superblocks (normal creation).

2. Assemble

Assemble the parts of a previously created array into an active array. Components can be explicitly given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on request, fiddle superblock information so as to assemble a faulty array. Typically you do this in the init scripts after rebooting.

3. Follow or Monitor

Monitor one or more md devices and act on any state changes. This is only meaningful for raid1, 4, 5, 6, 10 or multipath arrays, as only these have interesting state. raid0 or linear never have missing, spare, or failed drives, so there is nothing to monitor. Typically you do this after rebooting too.

4. Build

Build an array that doesn't have per-device superblocks. For these sorts of arrays, mdadm cannot differentiate between initial creation and subsequent assembly of an array. It also cannot perform any checks that appropriate devices have been requested. Because of this, the Build mode should only be used together with a complete understanding of what you are doing.

5. Grow

Grow, shrink or otherwise reshape an array in some way. Currently supported growth options include changing the active size of component devices in RAID levels 1/4/5/6 and changing the number of active devices in RAID-1.

6. Manage

This is for doing things to specific components of an array, such as adding new spares and removing faulty devices.

7. Misc

This is an 'everything else' mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.



Create RAID device

Below we'll see how to create arrays of various types; the basic approach is:
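  # general form - substitute your own RAID level, device count and devices
  mdadm --create /dev/md0 --level=<raid level> --raid-devices=<number of devices> <list of devices>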

If you want to access all the latest and upcoming features such as fully named RAID arrays so you no longer have to memorize which partition goes where, you'll want to make sure to use persistent metadata in the version 1.0 or higher format, as there is no way (currently or planned) to convert an array to a different metadata version. Current recommendations are to use metadata version 1.2 except when creating a boot partition, in which case use version 1.0 metadata and RAID-1.[1]

Booting from a 1.2 raid is only supported when booting with an initramfs, as the kernel can no longer assemble or recognise an array - it relies on userspace tools. Booting directly from 1.0 is supported because the metadata is at the end of the array, and the start of a mirrored 1.0 array just looks like a normal partition to the kernel.

NOTE: A work-around to upgrade metadata from version 0.90 to 1.0 is contained in the section RAID superblock formats.

To change the metadata version (the default is now version 1.2 metadata) add the --metadata option after the switch stating what you're doing in the first place. This will work:
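  # example only - level and device names are illustrative
  mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1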

This, however, will not work:
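  # --metadata given before the mode switch is rejected
  mdadm --metadata=1.0 --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1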

Linear mode

Ok, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other.

Spare-disks are not supported here. If a disk dies, the array dies with it. There's no information to put on a spare disk.

Using mdadm, a single command like
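  # device names are examples
  mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1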

should create the array. The parameters speak for themselves.

Have a look in /proc/mdstat. You should see that the array is running.

Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab and so on.

RAID-0

You have two or more devices, of approximately the same size, and you want to combine their storage capacity and also combine their performance by accessing them in parallel.

Like in Linear mode, spare disks are not supported here either. RAID-0has no redundancy, so when a disk dies, the array goes with it.
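
A minimal RAID-0 creation command looks like this (the device names are only examples):

  mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1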

Having run mdadm you have initialised the superblocks and started the raid device. Have a look in /proc/mdstat to see what's going on. You should see that your device is now running.

/dev/md0 is now ready to be formatted, mounted, used and abused.

RAID-1

You have two devices of approximately the same size, and you want the two to be mirrors of each other. Eventually you have more devices, which you want to keep as stand-by spare-disks, that will automatically become a part of the mirror if one of the active devices breaks.
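
A basic two-disk mirror can be created like this (device names are examples):

  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1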

If you have spare disks, you can add them to the end of the device specification like
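  # same example as above, with one spare disk added
  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1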

Ok, now we're all set to start initializing the RAID. The mirror must be constructed, i.e. the contents (however unimportant now, since the device is still not formatted) of the two devices must be synchronized.

Check out the /proc/mdstat file. It should tell you that the /dev/md0 device has been started, that the mirror is being reconstructed, and an ETA of the completion of the reconstruction.

Reconstruction is done using idle I/O bandwidth. So, your system should still be fairly responsive, although your disk LEDs should be glowing nicely.

The reconstruction process is transparent, so you can actually use thedevice even though the mirror is currently under reconstruction.

Try formatting the device while the reconstruction is running. It will work. You can also mount it and use it while reconstruction is running. Of course, if the wrong disk breaks while the reconstruction is running, you're out of luck.

RAID-4/5/6

You have three or more devices (four or more for RAID-6) of roughly the same size, you want to combine them into a larger device, but still maintain a degree of redundancy for data safety. Eventually you have a number of devices to use as spare-disks, that will not take part in the array before another device fails.

If you use N devices where the smallest has size S, the size of the entire raid-5 array will be (N-1)*S, or (N-2)*S for raid-6. This 'missing' space is used for parity (redundancy) information. Thus, if any disk fails, all the data stays intact. But if two disks fail on raid-5, or three on raid-6, all data is lost.

The default chunk-size is 128 kB. That's the default I/O size on a spindle.

Ok, enough talking. Let's see if raid-5 works. Run your command:
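  # example: a three-disk raid-5; device names are illustrative
  # (add --spare-devices=1 /dev/sde1 for a hot spare)
  mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1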

and see what happens. Hopefully your disks start working like mad, as they begin the reconstruction of your array. Have a look in /proc/mdstat to see what's going on.

If the device was successfully created, the reconstruction process has now begun. Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures of course), and you can format it and use it even while it is reconstructing.

The initial reconstruction will always appear as though the array is degraded and is being reconstructed onto a spare, even if you added exactly enough devices and no spares. This is done to optimize the initial reconstruction process. It may be confusing or worrying, but it is intentional and done for good reason. For more information, please check this source, directly from Neil Brown.

Now, you can create a filesystem. See the section on special options to mke2fs before formatting the filesystem. You can now mount it, include it in your /etc/fstab and so on.

Saving your RAID configuration (2011)

After you've created your array, it's important to save the configuration in the proper mdadm configuration file. In Ubuntu, this is file /etc/mdadm/mdadm.conf. In some other distributions, this is file /etc/mdadm.conf. Check your distribution's documentation, or look at man mdadm.conf, to see what applies to your distribution.

To save the configuration information:

Ubuntu:
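  mdadm --detail --scan >> /etc/mdadm/mdadm.conf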

Others (check your distribution's documentation):
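  mdadm --detail --scan >> /etc/mdadm.conf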

Note carefully that if you do this before your array has finished initialization, you may have an inaccurate spares= clause.

In Ubuntu, if you neglect to save the RAID creation information, you will get peculiar errors when you try to assemble the RAID device (described below). There will be errors generated that the hard drive is busy, even though it seems to be unused. For example, the error might be similar to this: 'mdadm: Cannot open /dev/sdd1: Device or resource busy'. This happens because if there is no RAID configuration information in the mdadm.conf file, the system may create a RAID device from one disk in the array, activate it, and leave it unmounted. You can identify this problem by looking at the output of 'cat /proc/mdstat'. If it lists devices such as 'md_d0' that are not part of your RAID setup, then first stop the extraneous device (for example: 'mdadm --stop /dev/md_d0') and then try to assemble your RAID array as described below.

Create and mount filesystem

Have a look in /proc/mdstat. You should see that the array is running.

Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab, and so on.

Common filesystem creation commands are mke2fs and mkfs.ext3. Please see options for mke2fs for an example and details.


Using the Array

Stopping a running RAID device is easy:
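  mdadm --stop /dev/md0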

Starting is a little more complex; you may think that:

would work - but it doesn't.

Linux raid devices don't really exist on their own; they have to be assembled each time you want to use them. Assembly is like creation insofar as it pulls together the component devices.

If you earlier ran:
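  # or /etc/mdadm/mdadm.conf on Ubuntu, as described above
  mdadm --detail --scan >> /etc/mdadm.conf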

then
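  mdadm --assemble /dev/md0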

would work.

However, the easy way to do this if you have a nice simple setup is:
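  mdadm --assemble --scan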

For complex cases (i.e. you pull in disks from other machines that you're trying to repair) this has the potential to start arrays you don't really want started. A safer mechanism is to use the uuid parameter and run:
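  # substitute the UUID reported by mdadm --detail for your array
  mdadm --assemble --scan --uuid=<your-array-uuid>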

This will only assemble the array that you want - but it will work no matter what has happened to the device names. This is particularly cool if, for example, you add in a new SATA controller card and all of a sudden /dev/sda becomes /dev/sde!!!

The Persistent Superblock (2011)

Back in 'The Good Old Days' (TM), the raidtools would read your /etc/raidtab file, and then initialize the array. However, this would require that the filesystem on which /etc/raidtab resided was mounted. This was unfortunate if you wanted to boot on a RAID.

Also, the old approach led to complications when mounting filesystems on RAID devices. They could not be put in the /etc/fstab file as usual, but would have to be mounted from the init-scripts.

The persistent superblocks solve these problems. When an array is created with the persistent-superblock option (the default now), a special superblock is written to a location (different for different superblock versions) on all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.

It's not a bad idea to maintain a consistent /etc/mdadm.conf file,since you may need this file for later recovery of the array, although this is pretty much totally unnecessary today.

A persistent superblock is mandatory for auto-assembly of your RAID devices upon system boot.

NOTE: Were persistent superblocks necessary for kernel raid support? This support has been moved into user space so this section may (or may not) be seriously out of date.

Superblock physical layouts are listed on RAID superblock formats.

External Metadata (2011)

MDRAID has always used its own metadata format. There are two different major formats for the MDRAID native metadata, the 0.90 and the version-1. The old 0.90 format limits the arrays to 28 components and 2 terabytes. With the latest mdadm, version 1.2 is the default.

Starting with Linux kernel v2.6.27 and mdadm v3.0, external metadata are supported. These formats have long been supported by DMRAID and allow the booting of RAID volumes from the Option ROM, depending on the vendor.

The first format is the DDF (Disk Data Format) defined by SNIA as the 'Industry Standard' RAID metadata format. When a DDF array is constructed, a container is created within which normal RAID arrays can be created.


The second format is the Intel(R) Matrix Storage Manager metadata format. This also creates a container that is managed similarly to DDF. On some platforms (depending on the vendor), this format is supported by the option-ROM in order to allow booting.[2]


To report the RAID information from the Option ROM:
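  mdadm --detail-platform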

To create RAID volumes that use external metadata, we must first create a container:
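  # example: an IMSM container over four disks (device names are illustrative)
  mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde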

In this example we created an IMSM based container for 4 RAID devices. Now we can create volumes within the container.
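  # example: a RAID5 volume using all four disks in the container created above
  mdadm --create /dev/md/vol0 --level=5 --raid-devices=4 /dev/md/imsm0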

Of course, the --size option can be used to limit the size of the disk space used in the volume during creation, in order to create multiple volumes within the container. One important note is that the various volumes within the container MUST span the same disks, e.g. a RAID10 volume and a RAID5 volume must be built across the same set of disks.

Chunk sizes

The chunk-size deserves an explanation. You can never write completely parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk. Actually, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest 'atomic' mass of data that can be written to the devices. A write of 16 kB with a chunk size of 4 kB will cause the first and the third 4 kB chunks to be written to the first disk and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that are primarily holding small files may benefit more from a smaller chunk size.

Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode.

For optimal performance, you should experiment with the chunk-size, as well as with the block-size of the filesystem you put on the array. For other experiments and performance charts, check out our Performance page. You can get chunk-size graphs galore.

RAID-0

Data is written 'almost' in parallel to the disks in the array. Actually, chunk-size bytes are written to each disk, serially.

If you specify a 4 kB chunk size, and write 16 kB to an array of three disks, the RAID system will write 4 kB to disks 0, 1 and 2, in parallel, then the remaining 4 kB to disk 0.

A 32 kB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the content of the file system you put on it, and many other factors. Experiment with it, to get the best performance.


RAID-0 with ext2

The following tip was contributed by michael@freenet-ag.de:

NOTE: this tip is no longer needed since the ext2 fs supports dedicated options: see 'Options for mke2fs' below

There is more disk activity at the beginning of ext2fs block groups. On a single disk, that does not matter, but it can hurt RAID0, if all block groups happen to begin on the same disk.

Example:

With a raid using a chunk size of 4k (also called stride-size), and a filesystem using a block size of 4k, each block occupies one stride. With two disks, the #disk * stride-size product (also called stripe-width) is 2*4k=8k. The default block group size is 32768 blocks, which is a multiple of the stripe-width of 2 blocks, so all block groups start on disk 0, which can easily become a hot spot, thus reducing overall performance. Unfortunately, the block group size can only be set in steps of 8 blocks (32k when using 4k blocks), which also happens to be a multiple of the stripe-width, so you can not avoid the problem by adjusting the blocks per group with the -g option of mkfs(8).

If you add a disk, the stripe-width (#disk * stride-size product) is 12k, so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1. The load caused by disk activity at the block group beginnings spreads over all disks.

In case you can not add a disk, try a stride size of 32k. The stripe-width (#disk * stride-size product) is then 64k. Since you can change the block group size in steps of 8 blocks (32k), using 32760 blocks per group solves the problem.

Additionally, the block group boundaries should fall on stride boundaries. The examples above get this right.

RAID-1

For writes, the chunk-size doesn't affect the array, since all data must be written to all disks no matter what. For reads however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, the RAID layer has complete freedom in choosing from which disk information is read - this is used by the RAID code to improve average seek times by picking the disk best suited for any given read operation.

RAID-4

When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well.

The chunk-size affects read performance in the same way as in RAID-0,since reads from RAID-4 are done in the same way.


RAID-5


On RAID-5, the chunk size has the same meaning for reads as for RAID-0. Writing on RAID-5 is a little more complicated: when a chunk is written on a RAID-5 array, the corresponding parity chunk must be updated as well. Updating a parity chunk requires either

  • The original chunk, the new chunk, and the old parity block
  • Or, all chunks (except for the parity chunk) in the stripe

The RAID code will pick the easiest way to update each parity chunk as the write progresses. Naturally, if your server has lots of memory and/or if the writes are nice and linear, updating the parity chunks will only impose the overhead of one extra write going over the bus (just like RAID-1). The parity calculation itself is extremely efficient, so while it does of course load the main CPU of the system, this impact is negligible. If the writes are small and scattered all over the array, the RAID layer will almost always need to read in all the untouched chunks from each stripe that is written to, in order to calculate the parity chunk. This will impose extra bus-overhead and latency due to extra reads.

A reasonable chunk-size for RAID-5 is 128 kB. A study showed that with 4 drives (even-number-of-drives might make a difference) that large chunk sizes of 512-2048 kB gave superior results [3]. As always, you may want to experiment with this or check out our Performance page.

Also see the section on special options to mke2fs. This affects RAID-5 performance.


ext2, ext3, and ext4 (2011)

There are special options available when formatting RAID-4 or -5 devices with mke2fs or mkfs. The -E stride=nn,stripe-width=mm options allow mke2fs to better place the various ext2/ext3-specific data structures on the RAID device. For these options, 'data disks' means the number of disks that store data, not disks used for parity or spares. For example (a worked mke2fs command is shown after the list):

  • RAID 0 with 2 disks: 2 data disks (n)
  • RAID 1 with 2 disks: 1 data disk (n/2)
  • RAID 10 with 10 disks: 5 data disks (n/2)
  • RAID 5 with 6 disks (no spares): 5 data disks (n-1)
  • RAID 6 with 6 disks (no spares): 4 data disks (n-2)
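
As a worked example, assuming a 4-disk RAID-5 with a 64 KiB chunk and a 4 KiB filesystem block size: stride = 64/4 = 16, and stripe-width = 16 × 3 data disks = 48.

  mke2fs -b 4096 -E stride=16,stripe-width=48 /dev/md0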

With these numbers in hand, you then want to use mkfs.xfs's su and sw parameters when creating your filesystem.

  • su: Stripe unit, which is the RAID chunk size, in bytes
  • sw: Multiplier of the stripe unit, i.e. number of data disks

If you've a 4-disk RAID 5 and are using a chunk size of 64 KiB, the command to use is:
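  # /dev/md0 is an example device; 4-disk RAID 5 means 3 data disks
  mkfs.xfs -d su=64k,sw=3 /dev/md0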

Alternately, you may use the sunit/swidth mkfs options to specify stripe unit and width in 512-byte-block units. For the array above, it could also be specified as:
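  # 64 KiB / 512 bytes = 128 sectors; 128 * 3 data disks = 384
  mkfs.xfs -d sunit=128,swidth=384 /dev/md0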

The result is exactly the same; however, the su/sw combination is often simpler to remember. Beware that sunit/swidth are inconsistently used throughout XFS' utilities (see xfs_info below).

To check the parameters in use for an XFS filesystem, use xfs_info.
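  # run against the mounted filesystem's device or mount point, e.g.:
  xfs_info /dev/md0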

Here, rather than displaying 512-byte units as used in mkfs.xfs, sunit and swidth are shown as multiples of the filesystem block size (bsize), another file system tunable. This inconsistency is for legacy reasons, and is not well-documented.

For the above example, sunit (sunit×bsize = su, 16×4096 = 64 KiB) and swidth (swidth×bsize = sw, 48×4096 = 192 KiB) are optimal and correctly reported.

While the stripe unit and stripe width cannot be changed after an XFS file system has been created, they can be overridden at mount time with the sunit/swidth options, similar to ones used by mkfs.xfs.


From Documentation/filesystems/xfs.txt in the kernel tree (paraphrased): the sunit=value and swidth=value mount options set the stripe unit and stripe width for a RAID device or stripe volume, specified in units of 512-byte blocks, and can be used to override the values stored in the filesystem superblock.


Source: Samat Says: Tuning XFS for RAID


Back to Hardware issues | Forward to Detecting, querying and testing


Retrieved from 'https://raid.wiki.kernel.org/index.php?title=RAID_setup&oldid=6299'