From eBower Wiki

What is LVM?[edit]

LVM is Logical Volume Manager. Make sense?

What, you need even more detail? You can think of it as an abstraction layer between a physical storage device, like a hard drive, and a logical partition. I'm sure it's a lot more powerful than what I'm about to describe, but this is my use case:

I have a video server. I've taken my DVD collection and converted them to video files which I store on my server. Now instead of needing to have boxes of DVDs, finding the one I want, bringing it to the room where I want to watch it, and putting it into a DVD player I can access the video on the server from just about any device in my house. But that doesn't matter, what does matter is that I've got a lot of stuff I need to store.

I started with 750GB disks. Then I found 1TB disks. Then 2TB, and finally I'm working with an old 2TB drive and a newer 3TB drive. And I have a redundant pair of each. Here's the problem. I started with my videos on the 3TB drive and used the 2TB drive for everything else, like documents, photos, and music. However, the 3TB drive filled up, so I needed to move some videos to the 2TB drive. This created a big mess because one drive was mounted as "homes" and one as "media", and now there was media on the homes drive and some VM backups had migrated to the media drive - it got confusing.

Enter LVM. With LVM I can create a logical volume to merge the two drives together into a single 5TB logical device. And when I run out of space, I can get whatever size drive is the most cost-effective and simply add that to the logical device.

Why not RAID?[edit]

RAID and I don't get along. I had a hardware RAID1 solution, two drives that were kept in sync. One drive failed and, because there was no OS reporting, it failed silently until the second drive failed as well.

OK, software RAID should be better since I can tie it into the OS and create nice alerts. And it works great, except my drives never really had a hard failure. Sure, I may have silently recovered from a bad sector or two, but I still ended up with some files getting corrupted somehow. And my biggest use case wasn't redundancy, it was backup (people who think RAID is a backup mechanism will be sorely disappointed at some point in their lives).

So I just use rsync in a cron job to keep my stuff backed up. The best part is that the cron job also backs up critical files to an offsite location. My videos are just too huge - and there's no point in an off-site backup of a $10 DVD if it costs me $5/year to keep it there. It's much cheaper to just keep the physical disc at a relative's house if I really cared that much about it.
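The cron job can be a small script along these lines - a minimal sketch, assuming the primary volume is mounted at /mnt/storage and its backup twin at /mnt/backup (both paths are placeholders, not the exact layout from this article):

```shell
#!/bin/sh
# Nightly mirror of the primary volume to its backup twin.
# Both paths are example placeholders.
SRC="/mnt/storage/"
DEST="/mnt/backup/"

# Guard so the script is a no-op if the volume isn't mounted
# (you may prefer to fail loudly instead).
if [ -d "$SRC" ] && [ -d "$DEST" ]; then
    # -a preserves permissions and timestamps; --delete mirrors deletions,
    # so an accidental rm is only recoverable until the next nightly run.
    # Any error output lands in cron's mail, which is how a dying drive
    # announces itself.
    rsync -a --delete "$SRC" "$DEST"
fi
```

Dropped into cron with something like `0 3 * * * /usr/local/bin/storage-backup.sh`, a failure shows up in cron's mail telling you why.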

The net effect is that if a backup drive fails, the cron job fails and tells me why, and I can recover. If I delete a file by accident, I can recover last night's backup. It's only if the primary drive fails that I lose a day's worth of work, and there isn't a whole lot there that didn't originate on my laptop first.

Configuring Encrypted LVM[edit]

First we need the dependencies. Assuming you haven't set up a drive yet, you'll need cryptsetup for the encryption and lvm2 for the LVM tools:

sudo apt-get install cryptsetup-bin lvm2

I'm going to assume you're running in a VM and need to deal with /dev/vdb.

Officially you'll probably want to do a few other things. Check it for bad blocks:

sudo badblocks -c 10240 -s -w -t random -v /dev/vdb

Fill it with random data to help fool the bad guys into thinking it's completely full:

sudo shred -v -n 1 /dev/vdb

Now create the partition (I'm assuming a blank drive):

sudo parted /dev/vdb
(parted) mklabel gpt
(parted) mkpart pri ext4 1 -1
(parted) set 1 lvm on
(parted) quit

Now we encrypt the drive (make sure to write down your passphrase!):

sudo cryptsetup --verbose --verify-passphrase luksFormat /dev/vdb1

Unlock the drive. Note that I'm calling it 0-3TB (it's the primary 3TB drive in my system; 1-3TB is the backup):

sudo cryptsetup luksOpen /dev/vdb1 0-3TB

Create the Physical Volume:

sudo pvcreate /dev/mapper/0-3TB

Create the volume group storage-volume:

sudo vgcreate storage-volume /dev/mapper/0-3TB

Create the logical volume with 100% of the space:

sudo lvcreate -l 100%VG -n storage storage-volume

Format the volume:

sudo mkfs.ext4 /dev/storage-volume/storage

Mount the volume:

sudo mkdir /mnt/storage
sudo mount /dev/storage-volume/storage /mnt/storage

Now, here's the problem. The drive is locked again after every reboot, which means we need to run the following commands all over again:

sudo cryptsetup luksOpen /dev/vdb1 0-3TB
sudo mount /dev/storage-volume/storage /mnt/storage
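To save typing those commands by hand every boot, they can live in a small helper script. This is just a sketch using the names from the example above; the run() wrapper lets you preview the commands with DRYRUN=1 before trusting it with root:

```shell
#!/bin/sh
# Sketch of a post-reboot unlock-and-mount helper (device and volume
# names match the example above; adjust for your drives). Set DRYRUN=1
# to print the commands instead of executing them.
run() { echo "+ $*"; [ -n "$DRYRUN" ] || "$@"; }

unlock_storage() {
    run cryptsetup luksOpen /dev/vdb1 0-3TB      # prompts for the passphrase
    run vgchange -ay storage-volume              # activate the VG if udev hasn't already
    run mount /dev/storage-volume/storage /mnt/storage
}

# To use for real, uncomment the call and run the script with sudo:
# unlock_storage
```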

This means you need to enter a passphrase at every boot. Now you could set up a key file, even one on a USB drive, which is pretty clever, but either way you've got the key sitting on your system ready for someone to log in and grab it.
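For what it's worth, if you did go the key-file route, the usual mechanics are an extra LUKS key slot plus an /etc/crypttab entry - roughly like this, with the key path being a made-up example:

```
# Add the key file as an extra LUKS key (prompts for the existing passphrase):
#   sudo cryptsetup luksAddKey /dev/vdb1 /root/keys/0-3TB.key
#
# Then this line in /etc/crypttab unlocks the drive at boot:
# <target name>  <source device>  <key file>              <options>
0-3TB            /dev/vdb1        /root/keys/0-3TB.key    luks
```

But that key is now sitting on the very box it's supposed to protect, which is the whole objection.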

So, what are we protecting in a headless server? Well, when someone hacks your system they have access to the data because it's already nicely unlocked. If someone steals your computer, they've also got access to the data and the key. You could put the key on a thumb drive and remove it every time you need to reboot, but then you can't remotely manage your system. The one case that helps is if your hard drive is stolen but your computer is not. This seems like an odd corner case.

The net result is that an encrypted server seems like it may not be all it's cracked up to be. You should probably encrypt at the end user instead - if the clients encrypt using their own passwords to generate keys, then the server only ever stores random data.

That said, security is an onion. The more layers you have the better so if you like the idea of encrypting your data, have at it. But just understand this is a weak layer with a lot of pain associated with it - encrypting with a key present in the system is for the birds.

Unencrypted LVMs[edit]

Since it's a bit painful for little real gain, I'm going to ignore the encrypted LVM for now. If you're using external drives it may be worthwhile, but personally I like the idea of end-to-end encryption best - never store anything the server can decrypt unless you want the hackers to decrypt it as well.

Create a Single-Disk LVM[edit]

We all start with one hard drive. I'm going to use /dev/vdb, create a single partition on it, call the disk 0-3TB (the primary 3TB drive; 1-3TB is the backup drive), and add it to a volume for storage. There will be a second volume for backup, created with a similar set of commands.

First, let's partition the bad boy as ext4. Normally you'd probably use fdisk. Normally you'd be wrong. Small drives can use fdisk without an issue, but for anything >2TB you'll want to use parted. Which means you'll ALWAYS want to use parted and can forget fdisk even exists:

sudo parted /dev/vdb
(parted) mklabel gpt
(parted) mkpart pri ext4 1 -1
(parted) set 1 lvm on
(parted) quit
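If you'd rather not type into the interactive prompt, the same steps work non-interactively with parted's -s flag. And since parted happily treats an ordinary file as a disk, you can rehearse the whole sequence on a throwaway image first - no root or spare drive required (the image path here is arbitrary):

```shell
# Rehearse the partitioning on a throwaway image file before touching
# the real /dev/vdb.
truncate -s 100M /tmp/fake-disk.img
parted -s /tmp/fake-disk.img mklabel gpt
parted -s /tmp/fake-disk.img mkpart pri ext4 1MiB 100%
parted -s /tmp/fake-disk.img set 1 lvm on
parted -s /tmp/fake-disk.img print    # confirm: gpt label, one partition, lvm flag
rm /tmp/fake-disk.img
```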

I'm going to assume that you'll want to keep the OS separate from the data store. I've got an 8GB LVM that I run all of my VMs in, it's plenty of space for my needs and making sure filling up the storage doesn't affect the VM operation is well worth a little lost space.

Now we'll create the physical volume:

$ sudo pvcreate /dev/vdb1
  Physical volume "/dev/vdb1" successfully created

Now I'll create the volume group:

$ sudo vgcreate storage-vg /dev/vdb1
  Volume group "storage-vg" successfully created

Now for the logical volume:

$ sudo lvcreate -l 100%VG -n storage-lv storage-vg
  Logical volume "storage-lv" created

Now we format the drive:

$ sudo mkfs.ext4 /dev/storage-vg/storage-lv

And mount it to see if everything looks good:

$ sudo mkdir /mnt/storage
$ sudo mount /dev/storage-vg/storage-lv /mnt/storage

If it works, edit /etc/fstab and add the following:

/dev/storage-vg/storage-lv /mnt/storage ext4 defaults 0 1

You should now be rejoicing, you've just mounted your very own hard drive as an LVM.

Adding Another Drive to an Existing LVM[edit]

I've got a 3TB drive mounted in my LVM now. It's a brand new server so I have no data on it except for a test.txt file to make sure I preserve it as I add a second drive in /dev/vdc. First I check that I've got 3TB:

$ df -BG
Filesystem                          1G-blocks  Used Available Use% Mounted on
/dev/mapper/bower--storage-root            7G    2G        5G  21% /
udev                                       1G    1G        1G   1% /dev
tmpfs                                      1G    1G        1G   1% /run
none                                       1G    0G        1G   0% /run/lock
none                                       1G    0G        1G   0% /run/shm
/dev/vda1                                  1G    1G        1G  24% /boot
/dev/mapper/storage--vg-storage--lv     2751G    1G     2611G   1% /mnt/storage

Wow, that's a lot of wasted space. But 2.7TB is close enough to 3TB in HD speak.
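The gap is mostly an accounting difference, not lost space. Drive makers count decimal bytes while df -BG counts binary GiB, which a quick bit of shell arithmetic makes concrete:

```shell
# "3TB" on the box is decimal: 3 * 10^12 bytes.
# df -BG counts binary GiB: 2^30 = 1073741824 bytes each.
marketing_bytes=3000000000000
gib=$((marketing_bytes / 1073741824))
echo "$gib GiB"   # 2793 GiB before any filesystem overhead
# The remaining 2793 -> 2751 gap is ext4 metadata: inode tables,
# journal, and reserved superblock copies claim a slice up front.
```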

Now run lsblk to see which disks are available:

lsblk
Now we'll create the partition in /dev/vdc:

sudo parted /dev/vdc
(parted) mklabel gpt
(parted) mkpart pri ext4 1 -1
(parted) set 1 lvm on
(parted) quit

Then we need to extend the existing Volume Group. If you can't remember the name, try "sudo vgdisplay":

$ sudo vgextend storage-vg /dev/vdc1
  No physical volume label read from /dev/vdc1
  Physical volume "/dev/vdc1" successfully created
  Volume group "storage-vg" successfully extended

Now we extend the Logical Volume (again, sudo lvdisplay can help here):

$ sudo lvextend -l100%VG /dev/storage-vg/storage-lv
  Extending logical volume storage-lv to 4.55 TiB
  Logical volume storage-lv successfully resized

Now we resize the filesystem:

$ sudo resize2fs /dev/storage-vg/storage-lv 
resize2fs 1.42 (29-Nov-2011)
Filesystem at /dev/storage-vg/storage-lv is mounted on /mnt/storage; on-line resizing required
old_desc_blocks = 175, new_desc_blocks = 292
The filesystem on /dev/storage-vg/storage-lv is now 1220942848 blocks long.
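As a sanity check, that block count lines up with the df output below: resize2fs reports the size in 4KiB blocks, so converting to GiB gives

```shell
# 1220942848 blocks of 4096 bytes each, converted to GiB (2^30 bytes):
blocks=1220942848
gib=$((blocks * 4096 / 1073741824))
echo "$gib GiB"   # 4657 GiB of raw filesystem
# df reports 4585G because ext4 metadata is carved out of that total.
```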

Finally, we check on the size of the mount:

$ df -BG
Filesystem                          1G-blocks  Used Available Use% Mounted on
/dev/mapper/bower--storage-root            7G    2G        5G  21% /
udev                                       1G    1G        1G   1% /dev
tmpfs                                      1G    1G        1G   1% /run
none                                       1G    0G        1G   0% /run/lock
none                                       1G    0G        1G   0% /run/shm
/dev/vda1                                  1G    1G        1G  24% /boot
/dev/mapper/storage--vg-storage--lv     4585G    1G     4371G   1% /mnt/storage

We're up to the 5TB range (yes, I hate hard drive accounting as well, but my non-LVM machine has one volume at 2751G and another at 1834G, which add up to 4585G, so we're not really losing anything). More importantly:

$ ls /mnt/storage/
lost+found  test.txt

My test file is still there and this was all done seamlessly.


Ultimately I expect to need to retire a drive; when that day comes, this is where I'll document how to remove a drive from the LVM.