LVM Boot Failures
An LVM configuration can fail to boot for several reasons. In addition to the problems associated with boots from non-LVM disks, the following problems can prevent an LVM-based system from booting.
Insufficient Quorum
In this scenario, not enough disks are present in the root volume group to meet the quorum requirements. At boot time, a message indicating that not enough physical volumes are available appears:
panic: LVM: Configuration failure
To activate the root volume group and successfully boot the system, the number of available LVM disks must be more than half the number of LVM disks that were attached when the volume group was last active. Thus, if during the last activation there were two disks attached in the root volume group, the “more than half” requirement means that both must be available. For information on how to deal with quorum failures, see “Volume Group Activation Failures” (page 111).
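For example, on a PA-RISC system you can override the quorum check from the boot loader and then reactivate the affected volume group once the system is up. This is a sketch; the volume group name is illustrative, and you should restore or replace the missing disks as soon as possible:

   ISL> hpux -lq                     Boot the kernel without the quorum check
   # vgchange -a y -q n /dev/vg01    Activate a non-root volume group,
                                     overriding its quorum requirement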
Corrupted LVM Data Structures on Disk
The LVM bootable disks contain vital boot information in the BDRA. This information can become corrupted, out of date, or missing. Because of the importance of maintaining up-to-date information within the BDRA, use the lvrmboot or lvlnboot commands whenever you make a change that affects the location of the root, boot, primary swap, or dump logical volumes.
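For example, after moving primary swap to a different logical volume, you might update and then verify the BDRA as follows. This is a sketch; the volume group and logical volume names are illustrative:

   # lvlnboot -s /dev/vg00/lvol2     Record lvol2 as primary swap in the BDRA
   # lvlnboot -v /dev/vg00           Display the root, boot, swap, and dump
                                     entries to confirm they are current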
To correct this problem, boot the system in maintenance mode as described in “Maintenance Mode Boot” (page 108), then repair the damage to the system LVM data structures by using vgcfgrestore on the boot disk.
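For example, assuming the boot disk is at c0t0d0 and a configuration backup for vg00 exists under /etc/lvmconf (both names are illustrative), the repair might look like this:

   ISL> hpux -lm                     Boot in LVM maintenance mode
   # vgcfgrestore -n /dev/vg00 /dev/rdsk/c0t0d0
                                     Restore the LVM configuration data to
                                     the boot disk from the backup file
   # reboot                          Reboot normally after the repair

vgcfgrestore reads the backup file written by vgcfgbackup, which runs automatically whenever most LVM commands change a volume group's configuration.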
Corrupted LVM Configuration File
Another problem with activation of a volume group is a missing or corrupted
/etc/lvmtab
or
/etc/lvmtab_p
file. After booting in maintenance mode, you can use the
vgscan
command
to re-create the
/etc/lvmtab
and
/etc/lvmtab_p
files. For more information, see vgscan(1M).
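For example, after booting in maintenance mode (a sketch; on releases that support it, the -p option previews vgscan's actions without modifying the files):

   # vgscan -v                       Scan all disks for LVM volume groups and
                                     re-create /etc/lvmtab, reporting progress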
Problems After Reducing the Size of a Logical Volume
When a file system is first created within a logical volume, it is made as large as the logical volume
permits.
If you extend the logical volume without extending its file system, you can subsequently safely reduce the logical volume size, as long as it remains at least as big as its file system. (Use the bdf command to determine the size of your file system.) After you expand the file system, you can no longer safely reduce the size of the associated logical volume.
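To check whether a reduction is safe, compare the two sizes first. This is a sketch with illustrative names (lvol5 mounted at /data); note that lvdisplay reports the logical volume size in megabytes, while bdf reports file system capacity in kilobytes:

   # lvdisplay /dev/vg00/lvol5 | grep 'LV Size'
   # bdf /data

Reduce the logical volume only if its size, converted to the same units, is at least as large as the file system size that bdf reports.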
If you use the lvreduce command to reduce a logical volume to a size smaller than that of the file system within it, you corrupt the file system. If you subsequently attempt to mount the corrupt file system, you might crash the system. If this occurs, follow these steps:
1. Reboot your system in single-user mode.
2. If you already have a good current backup of the data in the now corrupt file system, skip this step. If you do not have backup data and that data is critical, try to recover whatever part of the data might remain intact by attempting to back up the files on the file system. Before you attempt any current backup, consider the following:
• When your backup program accesses the corrupt part of the file system, your system will crash again. You must reboot your system again to continue with the next step.
• There is no guarantee that all (or any) of your data on that file system will be intact or recoverable. This step is an attempt to save as much as possible. That is, any data