Linux LVM: I’m Already Falling Asleep

I have been working on a P2V of some Red Hat nodes and doing Logical Volume Management updates post-conversion. It got me thinking about how I wish someone had taught me LVM when I was first starting out. The abstraction is a lot to take in, but like most things, once you do it a few times you will develop your own style and ways to remember it. We are going to walk through creating an LVM setup with little insights that I hope help you pick up the concepts faster. What could be more fun? 😉

It's All About Abstraction

The four main components of LVM are:

  1. The physical storage devices themselves
  2. The physical volumes
  3. The volume group
  4. The logical volumes

Image Source

The Physical Devices and the Physical Volumes

This idea was initially the hardest part for me. The physical volume is something you create from the physical device. It's the first logical abstraction. Think of these hard drives as unaltered in any way: each is just a device seen by the OS. That's it. You could create a partition, format that partition with a file system, and mount it. Instead, with LVM, we partition it in preparation for it to become a physical volume. Technically, you don't have to partition the disk; you can create a physical volume from the whole disk. But it is best practice to partition the disk, according to The Linux Documentation Project (of which I'm a fan).

I'll be partitioning three disks and creating physical volumes from them. The disks are /dev/sdb, /dev/sdc, and /dev/sdd. The command we will be using for partitioning is fdisk. The commands we will be using for physical volume creation are pvs (physical volume show) and pvcreate (physical volume create).

Partitioning with defaults:
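Roughly, the fdisk session for the first disk looks like this; repeat for /dev/sdc and /dev/sdd (prompts vary a bit between fdisk versions):

```shell
# Open the first disk in fdisk (requires root)
sudo fdisk /dev/sdb
# Inside fdisk, the usual keystrokes for one LVM partition are:
#   n        - new partition
#   p        - primary
#   1        - partition number
#   <Enter>  - accept default first sector
#   <Enter>  - accept default last sector (use the whole disk)
#   t        - change the partition type
#   8e       - Linux LVM
#   w        - write the table and exit
```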

Physical volume creation with defaults:
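With the partitions in place, pvcreate initializes each one as a physical volume. Something like:

```shell
# Initialize each LVM partition as a physical volume (requires root)
sudo pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
```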

The fruits of our labor:
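pvs gives the summary view; at this point each physical volume shows up with an empty VG column, since no volume group exists yet:

```shell
# List physical volumes; the VG column stays empty until a volume group claims them
sudo pvs
```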

Looking back at the picture above, we are now on the light blue line.

The Volume Group

The volume group can be thought of as a pool. It is one logical resource. The logical volumes we create on it later are not aware of the physical volumes we created in the previous section, so in that way I always think of the volume group as the middle man in this setup. This is also the cool part of LVM: we can add more physical disks/physical volumes later and expand the volume group. It's what gives LVM the ability to expand and shrink compared to traditional storage. I'll be creating one volume group called volume_group. The commands we will be using for volume group creation are vgs (volume group show), vgcreate (volume group create), and vgdisplay to see details of the volume group.

Volume group creation with defaults:
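Assuming the three physical volumes from the previous section, the command looks like:

```shell
# Pool the three physical volumes into a single volume group named volume_group
sudo vgcreate volume_group /dev/sdb1 /dev/sdc1 /dev/sdd1
```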

Volume group details. Notice the VG size. It is the sum of our three disks:
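The details come from vgdisplay:

```shell
# Show volume group details, including the total VG Size
sudo vgdisplay volume_group
```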

The fruits of our labor:

Looking again at the picture above, we are on the darker blue line.

The Logical Volume

Now we will carve the volume group up into logical volumes. These volumes, like traditional disks, will need to be formatted and mounted. The commands we will be using for logical volume creation are lvs (logical volume show) and lvcreate (logical volume create). I'll be creating two logical volumes and naming them logical_volume_1 and logical_volume_2. I highlighted the volume_group in an effort to show how the command creates logical_volume_1 and logical_volume_2 from the "pool."

Logical volume creation with defaults:
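A sketch of the two create commands; the sizes here are placeholders rather than what I actually used, so pick whatever fits your pool:

```shell
# Carve a fixed-size logical volume out of volume_group (size is a placeholder)
sudo lvcreate -L 10G -n logical_volume_1 volume_group
# Give the second volume everything that's left in the pool
sudo lvcreate -l 100%FREE -n logical_volume_2 volume_group
```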

The fruits of our labor, two logical volumes from one volume group:
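lvs confirms it:

```shell
# List logical volumes and the volume group they came from
sudo lvs
```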

Looking back at the picture we are now on the pink line.

The Filesystem and Mount

Now if you look in /dev you will find a directory named after your volume group (mine is /dev/volume_group) containing your logical volumes. We have arrived. We now have something we can format, mount, and save files to.
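A sketch of the format-and-mount steps; /mnt/lv1 and /mnt/lv2 are just example mount points:

```shell
# Format each logical volume with ext4 (requires root)
sudo mkfs.ext4 /dev/volume_group/logical_volume_1
sudo mkfs.ext4 /dev/volume_group/logical_volume_2

# Create example mount points and mount the volumes
sudo mkdir -p /mnt/lv1 /mnt/lv2
sudo mount /dev/volume_group/logical_volume_1 /mnt/lv1
sudo mount /dev/volume_group/logical_volume_2 /mnt/lv2
```

Add entries to /etc/fstab if you want the volumes back after a reboot.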

Looking back on the picture we are now on the yellow line.

Wrapping Up:

So to summarize: Physical Device -> Physical Volume -> Volume Group -> Logical Volume -> Filesystem Mount

Check out this post for more detail than you ever thought possible on the subject:

Setup Flexible Disk Storage with Logical Volume Management (LVM) in Linux – PART 1



Splunk in My Homelab: Part One

Over time my homelab has grown to over 40 virtual machines spread across three subnets. All the boxes are active, and I want more insight into what is going on, so I'm adding a Splunk server. Over the next three blog posts, I'll walk you through setting up and searching Splunk. This first post covers registering for, downloading, and installing the Splunk software. The next post will cover installing the forwarders, giving Splunk something to index. Lastly, the third post will cover adding apps and querying data.

Before we begin, let's quickly look at the parts of Splunk. The layer closest to us is the search head. This is the front end that we interact with via search and the Splunk web UI. The indexer is the heart of Splunk. The indexer receives data, compresses it, and then indexes it. The last part is the forwarder. This is a Splunk instance on an endpoint that generates machine data. The forwarder forwards the data to the indexer. Typically you would separate the roles, but because of the small amount of data, and for simplicity's sake, I'm putting the search head and indexer on one box.


First, go to Splunk's website and register an account. It's free, but be aware they may ask to validate your email, in case you like to use throwaway ones. Once logged in, click the big green Free Splunk button. I chose the Splunk Enterprise free download.


I'm using Ubuntu 16.04 for my Splunk server, so I will choose Linux. At this point you can download the .deb file, but as the second screenshot shows, you can also use wget to pull the bits right down to your box. That is what I will be doing. You'll also notice that I pointed out the data limitation. This is not an issue for me since I'm just curious about my traffic and don't actually need more than that. But if you want more and are willing to follow up, Google "Splunk Developer License."

After running the wget command from the screenshot above, run the following:

#Install the Splunk Enterprise package
sudo dpkg -i splunk-6.6.2-4b804538c686-linux-2.6-amd64.deb
#If dpkg complains about missing dependencies, run this
sudo apt-get install -f
#Now start the Splunk server and accept the license
sudo /opt/splunk/bin/splunk start --accept-license
#Finally, set Splunk to start on boot
sudo /opt/splunk/bin/splunk enable boot-start

Now navigate to http://splunkserver:8000.  Success!!!

In the next post, I will be installing a universal forwarder which will provide some data that I will later query in part three of this series.

PuTTYgen and an Ubuntu Server

Recently I used PuTTYgen to create a key pair which I intended to use to connect to one of the Ubuntu servers I have in my home lab. Not paying attention to detail, I uploaded the public key, ran

cat >> .ssh/authorized_keys

and was surprised to find that I was not able to connect. It turns out the formatting of the key is different in subtle ways. I’m going to cover how I manually modified the key so it would work, and then later found an easier way via ssh-keygen.

Manual Process

First I opened the key in Atom, an alternative to Sublime Text which I favor (when not using Vim, that is 😉).

Turns out *nix systems expect nothing other than the key itself: no extra formatting or characters. So I had to remove whitespace, comments, and control characters. I manually removed all the words and comments. Then I hit find and searched via regex for carriage return and line feed, \r\n in the picture. I left the replace field blank since I wanted to remove everything except the key and replace it with nothing. You can see where Atom highlighted the space we would be removing.
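The same cleanup can be scripted instead of done by hand in an editor. A sketch using grep and tr; the key material below is fake, just to show the shape of the conversion:

```shell
# Create a fake PuTTY-style public key file, for illustration only
printf -- '---- BEGIN SSH2 PUBLIC KEY ----\r\nComment: "rsa-key"\r\nAAAAB3Nza\r\nC1yc2EA\r\n---- END SSH2 PUBLIC KEY ----\r\n' > putty_key.pub

# Drop the header/footer/comment lines, strip every CR and LF,
# then prepend the key type OpenSSH expects
body=$(grep -v -e '^----' -e '^Comment' putty_key.pub | tr -d '\r\n')
echo "ssh-rsa $body" > openssh_key.pub
cat openssh_key.pub
# -> ssh-rsa AAAAB3NzaC1yc2EA
```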

Then I added ssh-rsa followed by a space at the start of the file. This signals the public key algorithm being used. Check RFC 4253 section 6.6 for a detailed, albeit boring, read about why we use ssh-rsa. If I want a comment, I can add a space followed by a comment at the end of the line.

Finally, just to be sure there were no issues with EOL between Windows and Linux, I installed an Atom package called line ending converter. To do this in Atom, I went to settings, chose install and searched community packages for the package. Once it is installed, it can be found under packages at Convert Line Endings to.

So now I have an acceptable key. I ssh to the box via username/password, open .ssh/authorized_keys, paste the key, and I’m good to go.

Or, if you're the more streamlined type, this whole blog post can be done in one step with ssh-keygen. But then you wouldn't learn what's going on under the hood, and that's the fun part.

ssh-keygen -i -f putty_key > new_key

Proxy Variables

While setting up Squid proxy on my pfSense home lab gateway, I had trouble getting apt-get update to work on my Ubuntu/Snort box, which was behind the proxy. After some quick Googling, I tried the first result (because the first result is always the best, right? 😉), and it failed. After reading a few more blogs, I noticed there are many different ways to set up a proxy properly. Here's what I found.

Setting up a proxy on the command line starts by declaring the proxy environment variable. Applicable variable options are:

http_proxy / HTTP_PROXY
https_proxy / HTTPS_PROXY
ftp_proxy / FTP_PROXY
no_proxy / NO_PROXY (comma-separated hosts that should bypass the proxy)

Next, check if you currently have a proxy already set.

$ env | grep -i proxy 

If you get nothing back from the command above, you know you don't have a proxy set yet. If you only want HTTP and FTP to go through your proxy, export just those variables instead of all the options above.

$ export {http,ftp}_proxy="http://proxy_name_or_ip:port_number"

This will only export the variables for the current session. If you log out or restart the computer, you will lose the proxy setting. If you need to make it permanent, use /etc/environment, which is Ubuntu's system-wide location for environment variables. You could also put it in a file under /etc/profile.d, since that directory is ultimately read by /etc/profile. It's not best practice to set it in /etc/bash.bashrc because variables in that file are specific to shells. Finally, if you want only a specific user to receive the proxy, set it in ~/.bashrc.

If you want to read more about proper placement of environment variables, read the Ubuntu Environment Variables page.

Now add your settings to /etc/environment:

 echo "http_proxy=http://proxy_name_or_ip:port_number" >> /etc/environment
 echo "ftp_proxy=http://proxy_name_or_ip:port_number" >> /etc/environment

If your proxy requires a username and password, the following format is often used:

echo "http_proxy=http://username:password@proxy_name_or_ip:port_number" >> /etc/environment

Most of the time you would be done at this point. But I had an issue with APT where I had to set the proxy in the APT configuration file.

I had to edit /etc/apt/apt.conf and add:

Acquire::http::proxy "http://proxy_name_or_ip:port_number";

If you are curious how to configure YUM similarly to APT, edit /etc/yum.conf.

Once the file is open, add these lines to the [main] section:

proxy=http://proxy_name_or_ip:port_number
proxy_username=username
proxy_password=password

The username and password lines are only needed if your proxy requires authentication.

While we're talking about proxies, CNTLM is another proxy that you install locally, pointing your proxy variables at localhost. It is a middle-man proxy that sits between you and a proxy that requires NTLM authentication. I have found this incredibly helpful when using Linux in a Windows environment. It's a really cool piece of software and really easy to set up.

It’s in the Ubuntu repositories.

apt-get install cntlm

Its configuration file is found at /etc/cntlm.conf

Add the following:

Username        jamey
Proxy           proxy_ip:proxy_port

Next we create password hashes.

cntlm -H

The output should look like this

PassLM          ACF337F47B2E22ED552C4BCA4AEBFB11
PassNT          2A22CC95E275BE3150326D0C1E86A58E
PassNTLMv2      F001B46C503A3A01611D2859EBEA8762    # Only for user 'jamey', domain ''   

Copy/paste your output into /etc/cntlm.conf.

Finally, configure the local proxy variable as we did above, pointing to localhost instead of the external proxy. CNTLM listens on port 3128 by default:

export http_proxy=http://127.0.0.1:3128