I don't work with LVM all that frequently; it's one of those things I set up and then tend to forget about, so when I do need to make changes I find myself reading the same old man pages again and again. I thought I could make my life easier by creating a single post with most of the information I need in one place.
What makes this blog post different to any of the other millions already out there, I hear you say? Nothing, except that I'll find this one easier to find and hopefully writing it all up will cement it in my own head. So, here goes:
My lab has the following layout, all created in a VM:
HDD1 (8GB): 250MB /boot | LVM PV1 | 1GB swap
HDD2 (8GB): LVM PV2
HDD3 (15GB): MDRAID disk 1
HDD4 (15GB): MDRAID disk 2
HDD5 (15GB): MDRAID disk 3
The three mdraid disks are combined into a 30GB RAID5 volume, with a single LVM PV (LVM PV3) created on top.
PV1 and PV3 are added to a single volume group, "lab", with separate LVs for /, /var, /home and /var/log, and Ubuntu Server Edition 12.04 installed on top. LVM PV2 is unused for now, and not all of the available space in the VG is used.
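In my case the installer set all of this up, but purely for reference, a stack like this could be put together by hand with roughly the following commands. This is only a sketch: the device names and sizes are taken from the layout above, so treat them as illustrative rather than a recipe.

# assemble the RAID5 array from the three 15GB disks (device names assumed)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde
# flag the array and the spare partition on the first disk as LVM physical volumes
sudo pvcreate /dev/md0 /dev/sda3
# create the volume group from both PVs
sudo vgcreate lab /dev/md0 /dev/sda3
# carve out the logical volumes (sizes approximate)
sudo lvcreate -L 4.7G -n os lab
sudo lvcreate -L 4.7G -n home lab
sudo lvcreate -L 7.5G -n log lab
sudo lvcreate -L 14G -n var lab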
So we have a server with the following output from df -h after a clean install:
stefan@lvm-lab:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/lab-os 4.7G 683M 3.8G 16% /
udev 241M 4.0K 241M 1% /dev
tmpfs 100M 292K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 248M 0 248M 0% /run/shm
/dev/sda2 223M 25M 187M 12% /boot
/dev/mapper/lab-home 4.7G 198M 4.3G 5% /home
/dev/mapper/lab-var 14G 473M 13G 4% /var
/dev/mapper/lab-log 7.5G 257M 6.9G 4% /var/log
stefan@lvm-lab:~$
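Note that df only shows the mounted filesystems, so it hides the mdraid and LVM layers underneath. If your util-linux is recent enough to have it, lsblk prints the block devices as a tree, which gives a quick picture of how each LV sits on top of its PVs and the RAID device:
stefan@lvm-lab:~$ lsblk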
So we can then start using some of the commands to see what's going on and to make some changes. The first useful command is pvdisplay, which lists the physical volumes present on the machine. Bear in mind that this will list any volume, whether a plain partition or a RAID device, which has been flagged as an LVM physical volume. On my lab, I get the following output:
stefan@lvm-lab:~$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name lab
PV Size 29.99 GiB / not usable 0
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 7678
Free PE 0
Allocated PE 7678
PV UUID Marv0q-qLZJ-HWOt-mVUC-FKoS-zqT8-K5C214
--- Physical volume ---
PV Name /dev/sda3
VG Name lab
PV Size 6.84 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1749
Free PE 1560
Allocated PE 189
PV UUID rIZb3g-GZdJ-yuGY-UC2k-iG0G-Q4f3-2TmS9R
"/dev/sdb1" is a new physical volume of "8.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 8.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID jgePi7-SkVo-7kp2-KUdQ-Ajel-MQFU-Ubs4PH
stefan@lvm-lab:~$
You'll note that the first entry is the RAID array and the second is the LVM section of the first disk. You'll also note that the other 8GB disk is listed, even though it has not been added to a VG yet.
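If you only need a quick overview rather than the full breakdown, the terser pvs command prints a one-line summary per PV (name, VG, size and free space):
stefan@lvm-lab:~$ sudo pvs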
A similar command, vgdisplay, can be used to list all volume groups. In this lab I only have a single VG, but this would list all VGs present:
stefan@lvm-lab:~$ sudo vgdisplay
--- Volume group ---
VG Name lab
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 2
Act PV 2
VG Size 36.82 GiB
PE Size 4.00 MiB
Total PE 9427
Alloc PE / Size 7867 / 30.73 GiB
Free PE / Size 1560 / 6.09 GiB
VG UUID BgXqr1-gDCJ-OLzC-dYgV-2qa1-nzL2-95jpZx
stefan@lvm-lab:~$
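Again, if you just want the summary view, vgs prints one line per volume group with its PV and LV counts, size and free space:
stefan@lvm-lab:~$ sudo vgs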
As expected, there is also a similar command to view the logical volumes in the VG: lvdisplay.
stefan@lvm-lab:~$ sudo lvdisplay
--- Logical volume ---
LV Name /dev/lab/os
VG Name lab
LV UUID fYXl5R-NlHb-44Gs-rQWh-IiRE-3aaQ-fHusdk
LV Write Access read/write
LV Status available
# open 1
LV Size 4.66 GiB
Current LE 1192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 252:0
--- Logical volume ---
LV Name /dev/lab/home
VG Name lab
LV UUID GUGN3K-ICHB-VnvE-GvcK-16nP-Q20e-pyAqse
LV Write Access read/write
LV Status available
# open 1
LV Size 4.66 GiB
Current LE 1192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 252:1
--- Logical volume ---
LV Name /dev/lab/log
VG Name lab
LV UUID OagOZY-fvan-ji02-92UA-bv2r-EQuS-SEebSH
LV Write Access read/write
LV Status available
# open 1
LV Size 7.45 GiB
Current LE 1907
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 252:2
--- Logical volume ---
LV Name /dev/lab/var
VG Name lab
LV UUID 0wztzb-mQOj-5Gp8-L21T-TIKP-1gcT-cermwT
LV Write Access read/write
LV Status available
# open 1
LV Size 13.97 GiB
Current LE 3576
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 252:3
stefan@lvm-lab:~$
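And for completeness, lvs gives the corresponding one-line-per-LV summary:
stefan@lvm-lab:~$ sudo lvs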
So there is a pattern emerging here: the commands are generally the same, just the prefix changes between pv-, vg- and lv-. There are also a number of switches which can be used to shape how each command runs. For example, to see information on only a single logical volume, with sizes reported in kilobytes, you would run the following:
stefan@lvm-lab:~$ sudo lvdisplay --units K /dev/lab/var
The man page for each command does a good job of explaining these various switches and options.
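The terse pvs, vgs and lvs variants take similar options; for example, you can choose exactly which columns are shown with -o and combine that with --units. Something like the following should list just the name, VG and size of each LV in megabytes (the column names here are the standard LVM field names; lvs -o help lists what your version supports):
stefan@lvm-lab:~$ sudo lvs -o lv_name,vg_name,lv_size --units m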
In the next installment, I'll look at the commands used to make changes to the LVM elements, such as adding PVs to an existing or new VG, altering the size of a VG, and altering the size and/or number of LVs in a VG. I'll also look at some gotchas when it comes to the filesystems on the logical volumes.