<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>Chapter 2. Advanced Disk Setup</title><link rel="stylesheet" href="susebooks.css" type="text/css"><meta name="generator" content="DocBook XSL Stylesheets V1.75.2"><link rel="home" href="index.html" title="Documentation"><link rel="up" href="part.reference.install.html" title="Part I. Advanced Deployment Scenarios"><link rel="prev" href="cha.deployment.remoteinst.html" title="Chapter 1. Remote Installation"><link rel="next" href="part.reference.software.html" title="Part II. Managing and Updating Software"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="navheader"><table width="100%" summary="Navigation header" border="0" class="bctable"><tr><td width="80%"><div class="breadcrumbs"><p><a href="index.html"> Documentation</a><span class="breadcrumbs-sep"> &gt; </span><a href="book.opensuse.reference.html">Reference</a><span class="breadcrumbs-sep"> &gt; </span><a href="part.reference.install.html">Advanced Deployment Scenarios</a><span class="breadcrumbs-sep"> &gt; </span><strong><a accesskey="p" title="Chapter 1. Remote Installation" href="cha.deployment.remoteinst.html"><span>&#9664;</span></a> </strong></p></div></td></tr></table></div><div class="chapter" title="Chapter 2. Advanced Disk Setup"><div class="titlepage"><div><div><h2 class="title"><a name="cha.advdisk"></a>Chapter 2. Advanced Disk Setup<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#cha.advdisk">¶</a></span></h2></div></div></div><div class="toc"><p><b>Contents</b></p><dl><dt><span class="sect1"><a href="cha.advdisk.html#sec.yast2.i_y2_part_expert">2.1. Using the YaST Partitioner</a></span></dt><dt><span class="sect1"><a href="cha.advdisk.html#sec.yast2.system.lvm">2.2. LVM Configuration</a></span></dt><dt><span class="sect1"><a href="cha.advdisk.html#sec.yast2.system.raid">2.3. Soft RAID Configuration</a></span></dt></dl></div><p>
  Sophisticated system configurations require specific disk setups. All
  common partitioning tasks can be done with YaST. To get persistent
  device naming with block devices, use the device nodes below
  <code class="filename">/dev/disk/by-id</code> or
  <code class="filename">/dev/disk/by-uuid</code>. Logical Volume Management (LVM) is
  a disk partitioning scheme that is designed to be much more flexible than
  the physical partitioning used in standard setups. Its snapshot
  functionality enables easy creation of data backups. Redundant Array of
  Independent Disks (RAID) offers increased data integrity, performance, and
  fault tolerance. openSUSE also supports multipath I/O (see the
  chapter about multipath I/O in Storage Administration Guide), and there is also the option to
  use iSCSI as a networked disk.
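</p><p>
  The names below <code class="filename">/dev/disk/by-id</code> and
  <code class="filename">/dev/disk/by-uuid</code> are symbolic links maintained by
  udev, each resolving to whichever kernel device name (such as
  <code class="filename">/dev/sda1</code>) currently carries that ID. As a minimal
  sketch of the mechanism, the same resolution can be reproduced with an
  ordinary symbolic link in a scratch directory (the directory and UUID below
  are invented for the illustration; on a real system, simply list
  <code class="filename">/dev/disk/by-uuid</code>):
</p>

```shell
# Illustration only: mimic the /dev/disk/by-uuid layout in a scratch directory.
mkdir -p /tmp/disk-demo/by-uuid
touch /tmp/disk-demo/sda1                        # stand-in for the kernel device node
ln -sf ../sda1 /tmp/disk-demo/by-uuid/1234-ABCD  # stand-in for a udev-created UUID link
# readlink -f resolves the persistent name to the current kernel device name:
readlink -f /tmp/disk-demo/by-uuid/1234-ABCD     # prints /tmp/disk-demo/sda1
```
<p>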

 </p><div class="sect1" title="2.1. Using the YaST Partitioner"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="sec.yast2.i_y2_part_expert"></a>2.1. Using the YaST Partitioner<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.i_y2_part_expert">¶</a></span></h2></div></div></div><a class="indexterm" name="id414593"></a><a class="indexterm" name="id436033"></a><p>
   With the expert partitioner, shown in
   <a class="xref" href="cha.advdisk.html#fig.yast2.i_y2_disk_part" title="Figure 2.1. The YaST Partitioner">Figure 2.1, &#8220;The YaST Partitioner&#8221;</a>, you can manually modify the
   partitioning of one or several hard disks. Partitions can be added,
   deleted, resized, and edited. You can also access the soft RAID and LVM
   configuration from this YaST module.
 </p><div class="warning"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Warning: Repartitioning the Running System"><tr class="head"><td width="32"><img alt="[Warning]" src="admon/warning.png"></td><th align="left">Repartitioning the Running System</th></tr><tr><td colspan="2" align="left" valign="top"><p>
   Although it is possible to repartition your system while it is running,
   the risk of making a mistake that causes data loss is very high. Try to
   avoid repartitioning your installed system and always do a complete
   backup of your data before attempting to do so.
  </p></td></tr></table></div><div class="figure"><a name="fig.yast2.i_y2_disk_part"></a><p class="title"><b>Figure 2.1. The YaST Partitioner</b><span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#fig.yast2.i_y2_disk_part">¶</a></span></p><div class="figure-contents"><div class="mediaobject"><table border="0" summary="manufactured viewport for HTML img" cellspacing="0" cellpadding="0" width="75%"><tr><td><img src="images/i_y2_disk_part.png" width="100%" alt="The YaST Partitioner"></td></tr></table></div></div></div><br class="figure-break"><p>
  All existing or suggested partitions on all connected hard disks are
  displayed in the list of <span class="guimenu">Available Storage</span> in the
  YaST <span class="guimenu">Expert Partitioner</span> dialog. Entire hard disks are
  listed as devices without numbers, such as
  <code class="filename">/dev/sda</code>. Partitions are listed as parts
  of these devices, such as
  <code class="filename">/dev/sda1</code>. The size, type,
  encryption status, file system, and mount point of the hard disks and
  their partitions are also displayed. The mount point describes where the
  partition appears in the Linux file system tree.
 </p><p>
   Several functional views are available in the left-hand <span class="guimenu">System
  View</span>. Use these views to gather information about existing
  storage configurations, or to configure functions like
  <code class="literal">RAID</code>, <code class="literal">Volume Management</code>,
  <code class="literal">Crypt Files</code>, or <code class="literal">NFS</code>.
 </p><p>
  If you run the expert dialog during installation, any free hard disk space
  is also listed and automatically selected. To provide more disk space to
  openSUSE®, free the needed space starting from the bottom toward
  the top of the list (starting from the last partition of a hard disk
  toward the first). For example, if you have three partitions, you cannot
  use the second exclusively for openSUSE and retain the third and
  first for other operating systems.
 </p><div class="sect2" title="2.1.1. Partition Types"><div class="titlepage"><div><div><h3 class="title"><a name="sec.IB.part.typen"></a>2.1.1. Partition Types<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.IB.part.typen">¶</a></span></h3></div></div></div><a class="indexterm" name="id423223"></a><p>
   Every hard disk has a partition table with space for four entries. Every
   entry in the partition table corresponds to a primary partition or an
   extended partition. Only one extended partition entry is allowed,
   however.
  </p><p>
   A primary partition simply consists of a continuous range of cylinders
   (physical disk areas) assigned to a particular operating system. With
   primary partitions you would be limited to four partitions per hard disk,
   because more do not fit in the partition table. This is why extended
   partitions are used. Extended partitions are also continuous ranges of
   disk cylinders, but an extended partition may be divided into
   <span class="emphasis"><em>logical partitions</em></span> itself. Logical partitions do not
   require entries in the partition table. In other words, an extended
   partition is a container for logical partitions.
  </p><p>
   If you need more than four partitions, create an extended partition as
   the fourth partition (or earlier). This extended partition should occupy
   the entire remaining free cylinder range. Then create multiple logical
   partitions within the extended partition. The maximum number of logical
    partitions is 15 on SCSI, SATA, and FireWire disks and 63 on (E)IDE
   disks. It does not matter which types of partitions are used for Linux.
   Primary and logical partitions both function normally.
  </p></div><div class="sect2" title="2.1.2. Creating a Partition"><div class="titlepage"><div><div><h3 class="title"><a name="sec.yast2.i_y2_part_expert.newpart"></a>2.1.2. Creating a Partition<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.i_y2_part_expert.newpart">¶</a></span></h3></div></div></div><a class="indexterm" name="id435907"></a><p>
   To create a partition from scratch select <span class="guimenu">Hard Disks</span>
   and then a hard disk with free space. The actual modification can be done
   in the <span class="guimenu">Partitions</span> tab:
  </p><div class="procedure"><ol class="procedure" type="1"><li><p>
     Select <span class="guimenu">Add</span>. If several hard disks are connected, a
      selection dialog appears in which to select a hard disk for the new
     partition.
    </p></li><li><p>
     Specify the partition type (primary or extended). Create up to four
     primary partitions or up to three primary partitions and one extended
     partition. Within the extended partition, create several logical
     partitions (see <a class="xref" href="cha.advdisk.html#sec.IB.part.typen" title="2.1.1. Partition Types">Section 2.1.1, &#8220;Partition Types&#8221;</a>).
    </p></li><li><p>
     Select the file system to use and a mount point. YaST suggests a
     mount point for each partition created. To use a different mount
     method, like mount by label, select <span class="guimenu">Fstab Options</span>.

    </p></li><li><p>
     Specify additional file system options if your setup requires them.
     This is necessary, for example, if you need persistent device names.
     For details on the available options, refer to
     <a class="xref" href="cha.advdisk.html#sec.yast2.i_y2_part_expert.options" title="2.1.3. Editing a Partition">Section 2.1.3, &#8220;Editing a Partition&#8221;</a>.
    </p></li><li><p>
     Click <span class="guimenu">Finish</span> to apply your partitioning setup and
     leave the partitioning module.
    </p><p>
     If you created the partition during installation, you are returned to
     the installation overview screen.
    </p></li></ol></div></div><div class="sect2" title="2.1.3. Editing a Partition"><div class="titlepage"><div><div><h3 class="title"><a name="sec.yast2.i_y2_part_expert.options"></a>2.1.3. Editing a Partition<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.i_y2_part_expert.options">¶</a></span></h3></div></div></div><a class="indexterm" name="id437381"></a><p>
   When you create a new partition or modify an existing partition, you can
   set various parameters. For new partitions, the default parameters set by
   YaST are usually sufficient and do not require any modification. To
   edit your partition setup manually, proceed as follows:
  </p><div class="procedure"><ol class="procedure" type="1"><li><p>
     Select the partition.
    </p></li><li><p>
     Click <span class="guimenu">Edit</span> to edit the partition and set the
     parameters:
    </p><div class="variablelist"><dl><dt><span class="term">File System ID</span></dt><dd><p>
        <a class="indexterm" name="id437426"></a>  <a class="indexterm" name="id437436"></a> Even if you do not want to format the partition at this
        stage, assign it a file system ID to ensure that the partition is
        registered correctly. Possible values include
        <span class="guimenu">Linux</span>, <span class="guimenu">Linux swap</span>,
        <span class="guimenu">Linux LVM</span>, and <span class="guimenu">Linux RAID</span>.

       </p></dd><dt><span class="term">
       File System
      </span></dt><dd><p>
        <a class="indexterm" name="id437473"></a> <a class="indexterm" name="id437482"></a> To change the partition file system, click
         <span class="guimenu">Format Partition</span> and select the file system type in
        the <span class="guimenu">File System</span> list.
       </p><div class="warning"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Warning: Changing the file system"><tr class="head"><td width="32"><img alt="[Warning]" src="admon/warning.png"></td><th align="left">Changing the file system</th></tr><tr><td colspan="2" align="left" valign="top"><p>
         Changing the file system and reformatting partitions irreversibly
         deletes all data from the partition.
        </p></td></tr></table></div><p>
        For details on the various file systems, refer to Storage Administration Guide.
       </p></dd><dt><span class="term">
       Encrypt Device 
      </span></dt><dd><p>
        If you activate the encryption, all data is written to the hard disk
        in encrypted form. This increases the security of sensitive data,
        but reduces the system speed, as the encryption takes some time to
        process. More information about the encryption of file systems is
        provided in Chapter <i>Encrypting Partitions and Files</i> (&#8593;Security Guide).
       </p></dd><dt><span class="term">
       Fstab Options
      </span></dt><dd><p>
        Specify various parameters contained in the global file system
        administration file (<code class="filename">/etc/fstab</code>). The default
        settings should suffice for most setups. You can, for example,
        change the file system identification from the device name to a
         volume label. A volume label may contain all characters except
         <code class="literal">/</code> and the space character.
       </p><p>
         To get persistent device names, use the mount option
        <span class="guimenu">Device ID</span>, <span class="guimenu">UUID</span> or
        <span class="guimenu">LABEL</span>. In openSUSE, persistent device names
        are enabled by default.
       </p><p>
        When using the mount option <span class="guimenu">LABEL</span> to mount a
        partition, define an appropriate label for the selected partition.
        For example, you could use the partition label
        <code class="literal">HOME</code> for a partition intended to mount to
        <code class="filename">/home</code>.
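</p><p>
 As a sketch, labeling an ext file system and then mounting it by that label
 could look as follows (the device name and label are examples only;
 <span class="command"><strong>e2label</strong></span> is the e2fsprogs tool
 for ext2/ext3/ext4 and requires root):
</p>

```shell
# Illustrative fragment -- do not run against a device holding data untested.
# Assign the label to the partition (example device):
#   e2label /dev/sda5 HOME
# Matching /etc/fstab entry, mounting by label instead of by device name:
#   LABEL=HOME  /home  ext4  defaults  1 2
```
<p>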
       </p><p>
        If you intend to use quotas on the file system, use the mount option
        <span class="guimenu">Enable Quota Support</span>. This must be done before
        you can define quotas for users in the YaST <span class="guimenu">User
        Management</span> module. For further information on how to
        configure user quota, refer to
        <a class="xref" href="cha.y2.userman.html#sec.y2.userman.adv.quota" title="8.3.5. Managing Quotas">Section 8.3.5, &#8220;Managing Quotas&#8221;</a>.
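</p><p>
 As an illustrative fragment, quota support on a mounted file system
 corresponds to mount options like the following in
 <code class="filename">/etc/fstab</code> (the device and mount point are
 examples; that YaST writes exactly these options is an assumption):
</p>

```shell
# Illustrative /etc/fstab fragment: user and group quotas enabled on /home
#   /dev/sda5  /home  ext4  defaults,usrquota,grpquota  1 2
# After remounting, initialize the quota files once (as root):
#   quotacheck -avug
```
<p>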
       </p></dd><dt><span class="term">
       Mount Point
      </span></dt><dd><p>
        Specify the directory where the partition should be mounted in the
        file system tree. Select from YaST suggestions or enter any other
        name.
       </p></dd></dl></div></li><li><p>
     Select <span class="guimenu">Finish</span> to save the changes.
    </p></li></ol></div><div class="note"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Note: Resize Filesystems"><tr class="head"><td width="32"><img alt="[Note]" src="admon/note.png"></td><th align="left">Resize Filesystems</th></tr><tr><td colspan="2" align="left" valign="top"><p>
    To resize an existing file system, select the partition and use
     <span class="guimenu">Resize</span>. Note that it is not possible to resize
     partitions while they are mounted. To resize a partition, unmount it
     before running the partitioner.
   </p></td></tr></table></div></div><div class="sect2" title="2.1.4. Expert Options"><div class="titlepage"><div><div><h3 class="title"><a name="sec.yast2.i_y2_part_expert.options2"></a>2.1.4. Expert Options<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.i_y2_part_expert.options2">¶</a></span></h3></div></div></div><p>
   After you select a hard disk device (like <span class="guimenu">sda</span>) in the
   <span class="guimenu">System View</span> pane, you can access the
   <span class="guimenu">Expert...</span> menu in the lower right part of the
   <span class="guimenu">Expert Partitioner</span> window. The menu contains the
   following commands:
  </p><div class="variablelist"><dl><dt><span class="term">Create New Partition Table</span></dt><dd><p>
      This option helps you create a new partition table on the selected
      device.
     </p><div class="warning"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Warning: Creating a New Partition Table"><tr class="head"><td width="32"><img alt="[Warning]" src="admon/warning.png"></td><th align="left">Creating a New Partition Table</th></tr><tr><td colspan="2" align="left" valign="top"><p>
       Creating a new partition table on a device irreversibly removes all
       the partitions and their data from that device.
      </p></td></tr></table></div></dd><dt><span class="term">Clone This Disk</span></dt><dd><p>
      This option helps you clone the device partition layout and its data
      to other available disk devices.
     </p></dd></dl></div></div><div class="sect2" title="2.1.5. Advanced Options"><div class="titlepage"><div><div><h3 class="title"><a name="sec.yast2.i_y2_part_expert.configure_options"></a>2.1.5. Advanced Options<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.i_y2_part_expert.configure_options">¶</a></span></h3></div></div></div><p>
    After you select the hostname of the computer (the top level of the tree
   in the <span class="guimenu">System View</span> pane), you can access the
   <span class="guimenu">Configure...</span> menu in the lower right part of the
   <span class="guimenu">Expert Partitioner</span> window. The menu contains the
   following commands:
  </p><div class="variablelist"><dl><dt><span class="term">Configure iSCSI</span></dt><dd><p>
      To access SCSI over IP block devices, you first have to configure
      iSCSI. This results in additionally available devices in the main
      partition list.
     </p></dd><dt><span class="term">Configure Multipath</span></dt><dd><p>
      Selecting this option helps you configure the multipath enhancement to
      the supported mass storage devices.
     </p></dd></dl></div></div><div class="sect2" title="2.1.6. More Partitioning Tips"><div class="titlepage"><div><div><h3 class="title"><a name="sec.yast2.i_y2_part_expert.info"></a>2.1.6. More Partitioning Tips<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.i_y2_part_expert.info">¶</a></span></h3></div></div></div><p>
   The following section includes a few hints and tips on partitioning that
   should help you make the right decisions when setting up your system.
  </p><div class="tip"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Tip: Cylinder Numbers"><tr class="head"><td width="32"><img alt="[Tip]" src="admon/tip.png"></td><th align="left">Cylinder Numbers</th></tr><tr><td colspan="2" align="left" valign="top"><p>
     Note that different partitioning tools may start counting the cylinders
    of a partition with <code class="literal">0</code> or with <code class="literal">1</code>.
    When calculating the number of cylinders, you should always use the
    difference between the last and the first cylinder number and add one.
   </p></td></tr></table></div><div class="sect3" title="2.1.6.1. Using swap"><div class="titlepage"><div><div><h4 class="title"><a name="id437787"></a>2.1.6.1. Using <code class="literal">swap</code><span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#id437787">¶</a></span></h4></div></div></div><p>
    Swap is used to extend the available physical memory. It is then
     possible to use more memory than the physical RAM available. The memory
    management system of kernels before 2.4.10 needed swap as a safety
    measure. Then, if you did not have twice the size of your RAM in swap,
    the performance of the system suffered. These limitations no longer
    exist.
   </p><p>
     Linux uses a page replacement strategy called <span class="quote">&#8220;<span class="quote">Least Recently Used</span>&#8221;</span> (LRU) to
    select pages that might be moved from memory to disk. Therefore, running
    applications have more memory available and caching works more smoothly.
   </p><p>
    If an application tries to allocate the maximum allowed memory, problems
    with swap can arise. There are three major scenarios to look at:
   </p><div class="variablelist"><dl><dt><span class="term">System with no swap</span></dt><dd><p>
       The application gets the maximum allowed memory. All caches are
       freed, and thus all other running applications are slowed. After a
       few minutes, the kernel's out-of-memory kill mechanism activates and
       kills the process.
       </p></dd><dt><span class="term">System with medium-sized swap (128 MB&#8211;512 MB)</span></dt><dd><p>
       At first, the system slows like a system without swap. After all
       physical RAM has been allocated, swap space is used as well. At this
       point, the system becomes very slow and it becomes impossible to run
        commands remotely. Depending on the speed of the hard disks that
       run the swap space, the system stays in this condition for about 10
       to 15 minutes until the out-of-memory kill mechanism resolves the
       issue. Note that you will need a certain amount of swap if the
       computer needs to perform a <span class="quote">&#8220;<span class="quote">suspend to disk</span>&#8221;</span>. In that
       case, the swap size should be large enough to contain the necessary
        data from memory (512 MB&#8211;1 GB).
      </p></dd><dt><span class="term">System with lots of swap (several GB)</span></dt><dd><p>
        In this case, it is better not to have an application that is out of
        control and swapping excessively. If you use such an application, the
       system will need many hours to recover. In the process, it is likely
       that other processes get timeouts and faults, leaving the system in
       an undefined state, even after killing the faulty process. In this
       case, do a hard machine reboot and try to get it running again. Lots
       of swap is only useful if you have an application that relies on this
       feature. Such applications (like databases or graphics manipulation
       programs) often have an option to directly use hard disk space for
       their needs. It is advisable to use this option instead of using lots
       of swap space.
      </p></dd></dl></div><p>
    If your system is not out of control, but needs more swap after some
    time, it is possible to extend the swap space online. If you prepared a
    partition for swap space, just add this partition with YaST. If you do
    not have a partition available, you may also just use a swap file to
    extend the swap. Swap files are generally slower than partitions, but
     compared to physical RAM, both are extremely slow, so the actual
    difference is negligible.
   </p><div class="procedure" title="Procedure 2.1. Adding a Swap File Manually"><a name="id437880"></a><p class="title"><b>Procedure 2.1. Adding a Swap File Manually</b></p><p>
     To add a swap file in the running system, proceed as follows:
    </p><ol class="procedure" type="1"><li><p>
      Create an empty file in your system. For example, if you want to add a
      swap file with 128 MB swap at
      <code class="filename">/var/lib/swap/swapfile</code>, use the commands:
     </p><pre class="screen">mkdir -p /var/lib/swap
dd if=/dev/zero of=/var/lib/swap/swapfile bs=1M count=128</pre></li><li><p>
      Initialize this swap file with the command
     </p><pre class="screen">mkswap /var/lib/swap/swapfile</pre></li><li><p>
      Activate the swap with the command
     </p><pre class="screen">swapon /var/lib/swap/swapfile</pre><p>
      To disable this swap file, use the command
     </p><pre class="screen">swapoff /var/lib/swap/swapfile</pre></li><li><p>
      Check the current available swap spaces with the command
     </p><pre class="screen">cat /proc/swaps</pre><p>
       Note that at this point, this is only temporary swap space. After the
       next reboot, it is no longer used.
     </p></li><li><p>
      To enable this swap file permanently, add the following line to
      <code class="filename">/etc/fstab</code>:
     </p><pre class="screen">/var/lib/swap/swapfile swap swap defaults 0 0</pre></li></ol></div></div></div><div class="sect2" title="2.1.7. Partitioning and LVM"><div class="titlepage"><div><div><h3 class="title"><a name="id437968"></a>2.1.7. Partitioning and LVM<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#id437968">¶</a></span></h3></div></div></div><p>
   From the <span class="guimenu">Expert partitioner</span>, access the LVM
   configuration by clicking the <span class="guimenu">Volume Management</span> item
   in the <span class="guimenu">System View</span> pane.

    If a working LVM configuration already exists on your system, it
   is automatically activated upon entering the initial LVM configuration of
   a session. In this case, all disks containing a partition (belonging to
   an activated volume group) cannot be repartitioned. The Linux kernel
   cannot reread the modified partition table of a hard disk when any
   partition on this disk is in use. However, if you already have a working
   LVM configuration on your system, physical repartitioning should not be
   necessary. Instead, change the configuration of the logical volumes.
  </p><p>
   At the beginning of the physical volumes (PVs), information about the
   volume is written to the partition. To reuse such a partition for other
    non-LVM purposes, it is advisable to overwrite the beginning of this volume.
   For example, in the VG <code class="literal">system</code> and PV
   <code class="filename">/dev/sda2</code>, do this with the command
   <span class="command"><strong>dd</strong></span> <code class="option">if=/dev/zero of=/dev/sda2 bs=512
   count=1</code>.
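</p><p>
 The effect of that <span class="command"><strong>dd</strong></span>
 invocation can be demonstrated safely on a scratch file instead of a real
 physical volume (the file path is invented for the illustration;
 <code class="option">conv=notrunc</code> is only needed because the target
 here is a regular file, not a block device):
</p>

```shell
# Sketch: zero the first 512 bytes, as the text does for an LVM PV.
printf 'LABELONE-metadata-stand-in' > /tmp/pv-demo   # fake metadata at the start
dd if=/dev/zero of=/tmp/pv-demo bs=512 count=1 conv=notrunc 2>/dev/null
od -An -tx1 -N 8 /tmp/pv-demo                        # first bytes now read as zeros
```
<p>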
  </p><div class="warning"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Warning: File System for Booting"><tr class="head"><td width="32"><img alt="[Warning]" src="admon/warning.png"></td><th align="left">File System for Booting</th></tr><tr><td colspan="2" align="left" valign="top"><p>
    The file system used for booting (the root file system or
    <code class="filename">/boot</code>) must not be stored on an LVM logical volume.
    Instead, store it on a normal physical partition.
   </p></td></tr></table></div><p>
   For more details about LVM, see the Storage Administration Guide.
  </p></div></div><div class="sect1" title="2.2. LVM Configuration"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="sec.yast2.system.lvm"></a>2.2. LVM Configuration<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.system.lvm">¶</a></span></h2></div></div></div><a class="indexterm" name="id438044"></a><a class="indexterm" name="id438051"></a><a class="indexterm" name="id438059"></a><p>
  This section briefly describes the principles behind the Logical Volume
  Manager (LVM) and its multipurpose features. In
  <a class="xref" href="cha.advdisk.html#sec.yast2.system.lvm.yast" title="2.2.2. LVM Configuration with YaST">Section 2.2.2, &#8220;LVM Configuration with YaST&#8221;</a>, learn how to set up LVM with
  YaST.
 </p><div class="warning"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Warning"><tr class="head"><td width="32"><img alt="[Warning]" src="admon/warning.png"></td><th align="left"></th></tr><tr><td colspan="2" align="left" valign="top"><p>
    Using LVM is sometimes associated with increased risk, such as data loss.
   Risks also include application crashes, power failures, and faulty
   commands. Save your data before implementing LVM or reconfiguring
   volumes. Never work without a backup.
  </p></td></tr></table></div><div class="sect2" title="2.2.1. The Logical Volume Manager"><div class="titlepage"><div><div><h3 class="title"><a name="sec.yast2.system.lvm.explained"></a>2.2.1. The Logical Volume Manager<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.system.lvm.explained">¶</a></span></h3></div></div></div><p>
   The LVM enables flexible distribution of hard disk space over several
   file systems. It was developed because sometimes the need to change the
   segmenting of hard disk space arises just after the initial partitioning
   has been done. Because it is difficult to modify partitions on a running
   system, LVM provides a virtual pool (volume group, VG for short) of
    disk space from which logical volumes (LVs) can be created as needed.
   The operating system accesses these LVs instead of the physical
   partitions. Volume groups can occupy more than one disk, so that several
   disks or parts of them may constitute one single VG. This way, LVM
   provides a kind of abstraction from the physical disk space that allows
   its segmentation to be changed in a much easier and safer way than with
   physical repartitioning. Background information regarding physical
   partitioning can be found in <a class="xref" href="cha.advdisk.html#sec.IB.part.typen" title="2.1.1. Partition Types">Section 2.1.1, &#8220;Partition Types&#8221;</a> and
   <a class="xref" href="cha.advdisk.html#sec.yast2.i_y2_part_expert" title="2.1. Using the YaST Partitioner">Section 2.1, &#8220;Using the YaST Partitioner&#8221;</a>.
  </p><div class="figure"><a name="fig.lvm.explained.schematic"></a><p class="title"><b>Figure 2.2. Physical Partitioning versus LVM</b><span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#fig.lvm.explained.schematic">¶</a></span></p><div class="figure-contents"><div class="mediaobject"><table border="0" summary="manufactured viewport for HTML img" cellspacing="0" cellpadding="0" width="75%"><tr><td><img src="images/lvm.png" width="100%" alt="Physical Partitioning versus LVM"></td></tr></table></div></div></div><br class="figure-break"><p>
   <a class="xref" href="cha.advdisk.html#fig.lvm.explained.schematic" title="Figure 2.2. Physical Partitioning versus LVM">Figure 2.2, &#8220;Physical Partitioning versus LVM&#8221;</a> compares physical
   partitioning (left) with LVM segmentation (right). On the left side, one
   single disk has been divided into three physical partitions (PART), each
   with a mount point (MP) assigned so that the operating system can gain
   access. On the right side, two disks have been divided into two and three
   physical partitions each. Two LVM volume groups (VG 1 and VG 2)
   have been defined. VG 1 contains two partitions from DISK 1 and
   one from DISK 2. VG 2 contains the remaining two partitions
   from DISK 2. In LVM, the physical disk partitions that are
   incorporated in a volume group are called physical volumes (PVs). Within
   the volume groups, four LVs (LV 1 through LV 4) have been
   defined. They can be used by the operating system via the associated
    mount points. The borders between different LVs do not need to be aligned with
   any partition border. See the border between LV 1 and LV 2 in
   this example.
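</p><p>
 On the command line, a layout like the right-hand side of the figure would
 be built roughly as follows. This is a hedged sketch only: the commands
 require root, destroy data on the named partitions, and the device names
 and sizes are examples, not a recipe:
</p>

```shell
# Sketch only -- run against dedicated, empty partitions.
pvcreate /dev/sda2 /dev/sda3 /dev/sdb1       # mark the partitions as physical volumes
vgcreate vg1 /dev/sda2 /dev/sda3 /dev/sdb1   # pool them into volume group "vg1"
lvcreate -n lv1 -L 20G vg1                   # carve a logical volume out of the pool
mkfs.ext4 /dev/vg1/lv1                       # create a file system on the LV
mount /dev/vg1/lv1 /mnt                      # mount it like any block device
```
<p>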
  </p><p>
   LVM features:
  </p><div class="itemizedlist"><ul class="itemizedlist" type="bullet"><li class="listitem" style="list-style-type: disc"><p>
     Several hard disks or partitions can be combined in a large logical
     volume.
    </p></li><li class="listitem" style="list-style-type: disc"><p>
     Provided the configuration is suitable, an LV (such as
     <code class="filename">/usr</code>) can be enlarged if free space is exhausted.
    </p></li><li class="listitem" style="list-style-type: disc"><p>
      With LVM, it is possible to add hard disks or LVs to a running system.
      Adding hard disks without a reboot, however, requires hot-swappable
      hardware.
    </p></li><li class="listitem" style="list-style-type: disc"><p>
     It is possible to activate a "striping mode" that distributes the data
      stream of an LV over several PVs. If these PVs reside on different
     disks, the read and write performance is enhanced, as with RAID 0.
    </p></li><li class="listitem" style="list-style-type: disc"><p>
     The snapshot feature enables consistent backups (especially for
     servers) of the running system.
    </p></li></ul></div><p>
    With these features, LVM is ready for heavily used home PCs or small
    servers. LVM is well suited for users with a growing data stock (as in
    the case of databases, music archives, or user directories), because it
    allows file systems that are larger than any single physical hard disk.
    Another advantage of LVM is that up to 256 LVs can be added. However,
    working with LVM is different from working with conventional partitions.
    Instructions and further information about configuring LVM are available
    in the official LVM HOWTO at
   <a class="ulink" href="http://tldp.org/HOWTO/LVM-HOWTO/" target="_top">http://tldp.org/HOWTO/LVM-HOWTO/</a>.
  </p><p>
   Starting from Kernel version 2.6, LVM version 2 is available,
   which is backward-compatible with the previous LVM and enables the
   continued management of old volume groups. When creating new volume
   groups, decide whether to use the new format or the backward-compatible
   version. LVM 2 does not require any kernel patches. It makes use of
   the device mapper integrated in kernel 2.6. This kernel only supports LVM
   version 2. Therefore, when talking about LVM, this section always
   refers to LVM version 2.
  </p></div><div class="sect2" title="2.2.2. LVM Configuration with YaST"><div class="titlepage"><div><div><h3 class="title"><a name="sec.yast2.system.lvm.yast"></a>2.2.2. LVM Configuration with YaST<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.system.lvm.yast">¶</a></span></h3></div></div></div><p>
   The YaST LVM configuration can be reached from the YaST Expert
   Partitioner (see <a class="xref" href="cha.advdisk.html#sec.yast2.i_y2_part_expert" title="2.1. Using the YaST Partitioner">Section 2.1, &#8220;Using the YaST Partitioner&#8221;</a>) within the
   <span class="guimenu">Volume Management</span> item in the <span class="guimenu">System
   View</span> pane. The Expert Partitioner allows you to edit and delete
   existing partitions and also create new ones that need to be used with
   LVM. The first task is to create PVs that provide space to a volume
   group:
  </p><div class="procedure"><ol class="procedure" type="1"><li><p>
     Select a hard disk from <span class="guimenu">Hard Disks</span>.
    </p></li><li><p>
     Change to the <span class="guimenu">Partitions</span> tab.
    </p></li><li><p>
     Click <span class="guimenu">Add</span> and enter the desired size of the PV on
     this disk.
    </p></li><li><p>
     Use <span class="guimenu">Do not format partition</span> and change the
     <span class="guimenu">File System ID</span> to <span class="guimenu">0x8E Linux LVM</span>.
     Do not mount this partition.
    </p></li><li><p>
     Repeat this procedure until you have defined all the desired physical
     volumes on the available disks.
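      The same preparation can be sketched from a shell; the disk name
      <code class="filename">/dev/sdb</code> and the partition bounds are
      hypothetical, and the commands require root privileges:

```shell
# Create a partition and tag it for LVM use (hypothetical disk)
parted -s /dev/sdb mkpart primary 1MiB 50%
parted -s /dev/sdb set 1 lvm on

# Initialize the new partition as a physical volume
pvcreate /dev/sdb1
pvs    # list the PVs now available to volume groups
```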
    </p></li></ol></div><div class="sect3" title="2.2.2.1. Creating Volume Groups"><div class="titlepage"><div><div><h4 class="title"><a name="id438325"></a>2.2.2.1. Creating Volume Groups<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#id438325">¶</a></span></h4></div></div></div><p>
    If no volume group exists on your system, you must add one (see
    <a class="xref" href="cha.advdisk.html#fig.lvm.yast.volgrp" title="Figure 2.3. Creating a Volume Group">Figure 2.3, &#8220;Creating a Volume Group&#8221;</a>). It is possible to create
    additional groups by clicking on <span class="guimenu">Volume Management</span> in
    the <span class="guimenu">System View</span> pane, and then on <span class="guimenu">Add Volume
    Group</span>. One single volume group is usually sufficient.
   </p><div class="procedure"><ol class="procedure" type="1"><li><p>
      Enter a name for the VG, for example, <code class="literal">system</code>.
     </p></li><li><p>
       Select the desired <span class="guimenu">Physical Extent Size</span>. This value
      defines the size of a physical block in the volume group. All the disk
      space in a volume group is handled in blocks of this size.
     </p><div class="tip"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Tip: Logical Volumes and Block Sizes"><tr class="head"><td width="32"><img alt="[Tip]" src="admon/tip.png"></td><th align="left">Logical Volumes and Block Sizes</th></tr><tr><td colspan="2" align="left" valign="top"><p>
       The possible size of an LV depends on the block size used in the
       volume group. The default is 4 MB and allows for a maximum size
        of 256 GB for physical and logical volumes. Increase the block size,
        for example, to 8, 16, or 32 MB, if you need LVs
       larger than 256 GB.
      </p></td></tr></table></div></li><li><p>
      Add the prepared PVs to the VG by selecting the device and clicking on
      <span class="guimenu">Add</span>. Selecting several devices is possible by
      holding <span class="keycap">Ctrl</span> while selecting the devices.
     </p></li><li><p>
      Select <span class="guimenu">Finish</span> to make the VG available to further
      configuration steps.
     </p></li></ol></div><div class="figure"><a name="fig.lvm.yast.volgrp"></a><p class="title"><b>Figure 2.3. Creating a Volume Group</b><span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#fig.lvm.yast.volgrp">¶</a></span></p><div class="figure-contents"><div class="mediaobject"><table border="0" summary="manufactured viewport for HTML img" cellspacing="0" cellpadding="0" width="50%"><tr><td><img src="images/yast2_lvm4.png" width="100%" alt="Creating a Volume Group"></td></tr></table></div></div></div><br class="figure-break"><p>
    If you have multiple volume groups defined and want to add or remove
    PVs, select the volume group in the <span class="guimenu">Volume Management</span>
    list. Then change to the <span class="guimenu">Overview</span> tab and select
    <span class="guimenu">Resize</span>. In the following window, you can add or
    remove PVs to the selected volume group.
   </p></div><div class="sect3" title="2.2.2.2. Configuring Logical Volumes"><div class="titlepage"><div><div><h4 class="title"><a name="id438468"></a>2.2.2.2. Configuring Logical Volumes<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#id438468">¶</a></span></h4></div></div></div><p>
     After the volume group has been filled with PVs, use the next dialog to
     define the LVs the operating system should use. Choose the current
    volume group and change to the <span class="guimenu">Logical Volumes</span> tab.
    <span class="guimenu">Add</span>, <span class="guimenu">Edit</span>,
    <span class="guimenu">Resize</span>, and <span class="guimenu">Delete</span> LVs as needed
    until all space in the volume group has been occupied. Assign at least
    one LV to each volume group.
   </p><div class="figure"><a name="fig.lvm.yast.mgmt"></a><p class="title"><b>Figure 2.4. Logical Volume Management</b><span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#fig.lvm.yast.mgmt">¶</a></span></p><div class="figure-contents"><div class="mediaobject"><table border="0" summary="manufactured viewport for HTML img" cellspacing="0" cellpadding="0" width="75%"><tr><td><img src="images/yast2_lvm6.png" width="100%" alt="Logical Volume Management"></td></tr></table></div></div></div><br class="figure-break"><p>
    Click <span class="guimenu">Add</span> and go through the wizard-like popup that
    opens:
   </p><div class="orderedlist"><ol class="orderedlist" type="1"><li><p>
      Enter the name of the LV. For a partition that should be mounted to
      <code class="filename">/home</code>, a self-explanatory name like
      <code class="literal">HOME</code> could be used.
     </p></li><li><p>
      Select the size and the number of stripes of the LV. If you have only
      one PV, selecting more than one stripe is not useful.
     </p></li><li><p>
       Choose the file system to use on the LV as well as the mount point.
     </p></li></ol></div><p>
     By using stripes it is possible to distribute the data stream in the LV
     among several PVs (striping). However, striping can only be done over
     different PVs, each providing at least its share of the volume's space.
     The maximum number of stripes equals the number of PVs; a stripe count
     of 1 means no striping. Striping only makes sense with PVs on
     different hard disks, otherwise performance decreases.
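     On the command line, a striped LV could be requested as in the following
     sketch; the volume group name, size, and stripe parameters are
     hypothetical, and the command requires root privileges:

```shell
# Stripe an LV across 2 PVs with a 64 KiB stripe size
# (the VG must contain at least 2 PVs with enough free space each)
lvcreate --name data --size 20G --stripes 2 --stripesize 64 system
```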
   </p><div class="warning"><table border="0" cellpadding="3" cellspacing="0" width="100%" summary="Warning: Striping"><tr class="head"><td width="32"><img alt="[Warning]" src="admon/warning.png"></td><th align="left">Striping</th></tr><tr><td colspan="2" align="left" valign="top"><p>
     YaST cannot, at this point, verify the correctness of your entries
      concerning striping. Any mistake made here becomes apparent only later,
      when
     the LVM is implemented on disk.
    </p></td></tr></table></div><p>
    If you have already configured LVM on your system, the existing logical
    volumes can also be used. Before continuing, assign appropriate mount
    points to these LVs. With <span class="guimenu">Finish</span>, return to the
    YaST Expert Partitioner and finish your work there.
   </p></div></div></div><div class="sect1" title="2.3. Soft RAID Configuration"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="sec.yast2.system.raid"></a>2.3. Soft RAID Configuration<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.system.raid">¶</a></span></h2></div></div></div><a class="indexterm" name="id438617"></a><a class="indexterm" name="id438624"></a><a class="indexterm" name="id438632"></a><p>
  The purpose of RAID (redundant array of independent disks) is to combine
  several hard disk partitions into one large <span class="emphasis"><em>virtual</em></span>
  hard disk to optimize performance and/or data security. Most RAID
  controllers use the SCSI protocol because it can address a larger number
   of hard disks more effectively than the IDE protocol and is better
   suited to parallel command processing. Some RAID
   controllers support IDE or SATA hard disks instead. Soft RAID provides the
  advantages of RAID systems without the additional cost of hardware RAID
  controllers. However, this requires some CPU time and has memory
  requirements that make it unsuitable for high performance computers.
 </p><p>
   With openSUSE&#174;, you can combine several hard disks into one soft
   RAID system. RAID offers several strategies for combining hard
   disks in a RAID system, each with different goals, advantages, and
  characteristics. These variations are commonly known as <span class="emphasis"><em>RAID
  levels</em></span>.
 </p><p>
  Common RAID levels are:
 </p><div class="variablelist"><dl><dt><span class="term">RAID 0</span></dt><dd><p>
     This level improves the performance of your data access by spreading
      out blocks of each file across multiple disk drives. Strictly speaking,
      this is not really a RAID, because it provides no redundancy, but the
     name <span class="emphasis"><em>RAID 0</em></span> for this type of system is
     commonly used. With RAID 0, two or more hard disks are pooled
     together. Performance is enhanced, but the RAID system is destroyed and
     your data lost if even one hard disk fails.
    </p></dd><dt><span class="term">RAID 1</span></dt><dd><p>
     This level provides adequate security for your data, because the data
     is copied to another hard disk 1:1. This is known as <span class="emphasis"><em>hard
     disk mirroring</em></span>. If one disk is destroyed, a copy of its
     contents is available on the other one. All disks but one could be
      damaged without endangering your data. However, if the damage is not
      detected, damaged data can be mirrored to the undamaged disk,
      corrupting both copies. The writing performance suffers
      in the copying process compared to using single disk access (10 to 20 %
      slower), but read access is significantly faster than from any
      single physical hard disk, because the duplicate
      data can be read in parallel. Generally,
     Level 1 provides nearly twice the read transfer rate of single
     disks and almost the same write transfer rate as single disks.
    </p></dd><dt><span class="term">RAID 2 and RAID 3</span></dt><dd><p>
     These are not typical RAID implementations. Level 2 stripes data
     at the bit level rather than the block level. Level 3 provides
     byte-level striping with a dedicated parity disk, and cannot service
     simultaneous multiple requests. These levels are rarely used.
    </p></dd><dt><span class="term">RAID 4</span></dt><dd><p>
     Level 4 provides block-level striping just like Level 0
     combined with a dedicated parity disk. In the case of data disk
     failure, the parity data is used to create a replacement disk. However,
      the dedicated parity disk may create a bottleneck for write access.
    </p></dd><dt><span class="term">RAID 5</span></dt><dd><p>
     RAID 5 is an optimized compromise between Level 0 and
      Level 1, in terms of performance and redundancy. The usable hard disk
      space equals the number of disks used minus one. The data is
      distributed over the hard disks as with RAID 0. <span class="emphasis"><em>Parity
      blocks</em></span>, distributed across the partitions, provide the
      redundancy. They are computed from the data blocks with XOR, so the
      contents of a failed disk can be reconstructed from the remaining data
      and the corresponding parity blocks. With RAID 5, no more than one hard disk can fail at the
     same time. If one hard disk fails, it must be replaced as soon as
     possible to avoid the risk of losing data.
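      The XOR reconstruction described above can be demonstrated with shell
      arithmetic on three hypothetical one-byte data blocks:

```shell
# Three data blocks of one stripe (one byte each, for illustration)
d1=23; d2=87; d3=142

# The parity block is the XOR of all data blocks in the stripe
parity=$(( d1 ^ d2 ^ d3 ))

# If the disk holding d2 fails, XOR of the survivors restores it
rebuilt=$(( d1 ^ d3 ^ parity ))
echo "$rebuilt"
```

      The usable capacity follows the same logic: each stripe stores one
      parity block, so the space of one disk out of <em>n</em> is spent on
      redundancy.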
    </p></dd><dt><span class="term">Other RAID Levels</span></dt><dd><p>
     Several other RAID levels have been developed (RAIDn, RAID 10,
     RAID 0+1, RAID 30, RAID 50, etc.), some of them being
     proprietary implementations created by hardware vendors. These levels
     are not very common and therefore are not explained here.
    </p></dd></dl></div><div class="sect2" title="2.3.1. Soft RAID Configuration with YaST"><div class="titlepage"><div><div><h3 class="title"><a name="sec.yast2.system.raid.conf"></a>2.3.1. Soft RAID Configuration with YaST<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#sec.yast2.system.raid.conf">¶</a></span></h3></div></div></div><p>
   The YaST <span class="guimenu">RAID</span> configuration can be reached from the
   YaST Expert Partitioner, described in
   <a class="xref" href="cha.advdisk.html#sec.yast2.i_y2_part_expert" title="2.1. Using the YaST Partitioner">Section 2.1, &#8220;Using the YaST Partitioner&#8221;</a>. This partitioning tool
   enables you to edit and delete existing partitions and create new ones to
   be used with soft RAID:
  </p><div class="procedure"><ol class="procedure" type="1"><li><p>
     Select a hard disk from <span class="guimenu">Hard Disks</span>.
    </p></li><li><p>
     Change to the <span class="guimenu">Partitions</span> tab.
    </p></li><li><p>
      Click <span class="guimenu">Add</span> and enter the desired size of the RAID
     partition on this disk.
    </p></li><li><p>
     Use <span class="guimenu">Do not Format the Partition</span> and change the
     <span class="guimenu">File System ID</span> to <span class="guimenu">0xFD Linux
     RAID</span>. Do not mount this partition.
    </p></li><li><p>
      Repeat this procedure until you have defined all the desired RAID
      partitions on the available disks.
    </p></li></ol></div><p>
   For RAID 0 and RAID 1, at least two partitions are
   needed&#8212;for RAID 1, usually exactly two and no more. If
   RAID 5 is used, at least three partitions are required. It is
   recommended to utilize partitions of the same size only. The RAID
   partitions should be located on different hard disks to decrease the risk
   of losing data if one is defective (RAID 1 and 5) and to optimize
   the performance of RAID 0. After creating all the partitions to use
   with RAID, click <span class="guimenu">RAID</span>+<span class="guimenu">Add
   RAID</span> to start the RAID configuration.
  </p><p>
    In the next dialog, choose between RAID levels 0, 1, 5, 6, and 10. Then,
   select all partitions with either the <span class="quote">&#8220;<span class="quote">Linux RAID</span>&#8221;</span> or
   <span class="quote">&#8220;<span class="quote">Linux native</span>&#8221;</span> type that should be used by the RAID system.
   No swap or DOS partitions are shown.
  </p><div class="figure"><a name="fig.yast2.system.raid.conf"></a><p class="title"><b>Figure 2.5. RAID Partitions</b><span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#fig.yast2.system.raid.conf">¶</a></span></p><div class="figure-contents"><div class="mediaobject"><table border="0" summary="manufactured viewport for HTML img" cellspacing="0" cellpadding="0" width="75%"><tr><td><img src="images/yast2_raid4.png" width="100%" alt="RAID Partitions"></td></tr></table></div></div></div><br class="figure-break"><p>
   To add a previously unassigned partition to the selected RAID volume,
   first click the partition then <span class="guimenu">Add</span>. Assign all
   partitions reserved for RAID. Otherwise, the space on the partition
   remains unused. After assigning all partitions, click
   <span class="guimenu">Next</span> to select the available <span class="guimenu">RAID
   Options</span>.
  </p><p>
   In this last step, set the file system to use as well as encryption and
    the mount point for the RAID volume. After completing the configuration
    with <span class="guimenu">Finish</span>, the <code class="filename">/dev/md0</code>
    device and others appear marked with <span class="emphasis"><em>RAID</em></span> in the Expert
    Partitioner.
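    Outside YaST, an equivalent array could be created with
    <span class="command"><strong>mdadm</strong></span>; the device names below are
    hypothetical, and the commands require root privileges:

```shell
# Combine two prepared 0xFD partitions into a RAID 1 array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Then create a file system on the array and mount it as usual
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
```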
  </p></div><div class="sect2" title="2.3.2. Troubleshooting"><div class="titlepage"><div><div><h3 class="title"><a name="id438960"></a>2.3.2. Troubleshooting<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#id438960">¶</a></span></h3></div></div></div><p>
   Check the file <code class="filename">/proc/mdstat</code> to find out whether a
   RAID partition has been damaged. In the event of a system failure, shut
   down your Linux system and replace the defective hard disk with a new one
   partitioned the same way. Then restart your system and enter the command
   <span class="command"><strong>mdadm /dev/mdX --add /dev/sdX</strong></span>. Replace 'X' with your
   particular device identifiers. This integrates the hard disk
   automatically into the RAID system and fully reconstructs it.
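    In the <code class="filename">/proc/mdstat</code> status field, each healthy
    member shows as <code class="literal">U</code> and each failed or missing member
    as an underscore. A quick check can be scripted; the excerpt below is a
    hypothetical sample of a degraded two-disk RAID 1:

```shell
# Hypothetical /proc/mdstat excerpt for a degraded array
mdstat='md0 : active raid1 sdb1[1] sda1[0](F)
      1048512 blocks [2/1] [U_]'

# An underscore in the [..] status field marks a missing member
if printf '%s\n' "$mdstat" | grep -q '\[U*_'; then
  state=degraded
else
  state=clean
fi
echo "$state"
```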
  </p><p>
   Note that although you can access all data during the rebuild, you may
   encounter some performance issues until the RAID has been fully rebuilt.
  </p></div><div class="sect2" title="2.3.3. For More Information"><div class="titlepage"><div><div><h3 class="title"><a name="id438986"></a>2.3.3. For More Information<span class="permalink"><a alt="Permalink" title="Copy Permalink" href="#id438986">¶</a></span></h3></div></div></div><p>
   Configuration instructions and more details for soft RAID can be found in
   the HOWTOs at:
  </p><div class="itemizedlist"><ul class="itemizedlist" type="bullet"><li class="listitem" style="list-style-type: disc"><p>
     <code class="filename">/usr/share/doc/packages/mdadm/Software-RAID.HOWTO.html</code>
    </p></li><li class="listitem" style="list-style-type: disc"><p>
     <a class="ulink" href="http://raid.wiki.kernel.org" target="_top">http://raid.wiki.kernel.org</a>
    </p></li></ul></div><p>
   Linux RAID mailing lists are available, such as
   <a class="ulink" href="http://marc.theaimsgroup.com/?l=linux-raid" target="_top">http://marc.theaimsgroup.com/?l=linux-raid</a>.
  </p></div></div></div><div class="navfooter"><table width="100%" summary="Navigation footer" border="0" class="bctable"><tr><td width="80%"><div class="breadcrumbs"><p><a href="index.html"> Documentation</a><span class="breadcrumbs-sep"> &gt; </span><a href="book.opensuse.reference.html">Reference</a><span class="breadcrumbs-sep"> &gt; </span><a href="part.reference.install.html">Advanced Deployment Scenarios</a><span class="breadcrumbs-sep"> &gt; </span><strong><a accesskey="p" title="Chapter 1. Remote Installation" href="cha.deployment.remoteinst.html"><span>&#9664;</span></a> </strong></p></div></td></tr></table></div></body></html>