This is a large project, so I’ll be breaking this into multiple parts. Some of this information was previously published, here and on other blogs I’ve operated in the past. Where possible, when information comes from external sources, I will credit it here.
I had budgetary reasons to migrate from my old cPanel server (I’ve been using cPanel to host my websites since, I dunno, 2005-2006, and always stuck with it for legacy/can’t-be-arsed reasons) to new infrastructure. As technology improved, I moved from a cPanel shared hosting account, to cPanel reseller accounts, to cPanel on a VPS, to cPanel on a dedicated server, and back to cPanel as a virtual machine (first on a custom KVM hypervisor/public cloud product provided by an old employer, and then as a VMware VM on ESXi). But it was time to stop paying for a cPanel license when I know how to admin servers and no longer need it, and time to find a cheaper, more efficient way to manage my various projects, websites, e-mail infrastructure, and so on. This is that story. Are there better ways to do most of these things? Probably. I will highlight specifics that I would do differently, were I to do this again, when we get to them. Otherwise, I feel this is a good overview, not only of the specific topic in the post title, but in general of how to set up a home lab, or a “private cloud” of sorts.
The pieces and parts
A very brief overview of what comprises my infrastructure.
- A cheap, dedicated server; or, a workstation in your closet, depending on your public-facing needs.
- Don’t pay more than $75/mo for a hosted dedicated server.
- If you want to self-host, pick up an old tower-style IBM ThinkServer on eBay. A friend of mine picked one up at the current going rate of $200-$300.
- For the purposes of this guide, you’ll need something no older than a Nehalem-class Xeon CPU. That’s something like an L/X/E5520.
- Aim for >=16GB RAM. Prefer at least a 1TB spinning disk and a >=250GB SSD; RAID1 at a minimum is preferred. I recommend it, but we’re going budget here, so honestly I’m not actually using it myself.
- The OS on the bare metal depends on which virtualization system you choose (the next item).
- A virtualization system of your choice. Because of my experience with it, and the fact that it provides a WUI, I prefer oVirt. You may prefer the free version of VMware ESXi; apparently it also comes with a WUI appliance now. This guide assumes you will use oVirt (don’t worry, I’ll walk you through it).
- opnSense as a NATing firewall.
- Ajenti and Ajenti V to manage vhosts. You can create all the nginx config files by hand, if you prefer, but I had a large number to manage, and I prefer Ajenti.
Part 1 – Installing oVirt
While I call this product a “private cloud,” we are going to start with a single management/compute node, because that’s what fits the budget. Everything here can be seamlessly expanded later by simply repeating Part 1 for each additional node. The technologies in oVirt and opnSense will bridge across any nodes you add, so this really can be deployed as a private cloud; you can even span multiple data centers and geographic locations if you like, although have fun with the latency.
This part of the guide comes from an old oVirt blog I used to run when I was running an oVirt infrastructure full-time. Modifications are made to the previous post to allow for oVirt to properly handle the NATing of the opnSense firewall.
This guide assumes that you will be using Intel-based servers with the following BIOS settings enabled: Virtualization (VT-x) and Disable Execute (the XD bit). Dig around in your BIOS settings before installing oVirt to make sure both of these options are enabled.
This guide will also be using CentOS rather than Fedora or oVirt Node for all host installations. I found oVirt Node very difficult to deal with, and you will not be able to set up “shared local” storage on oVirt Node, because that OS does not give you root read/write access to the filesystem. Be warned, though, that CentOS carries a much greater memory overhead.
This guide assumes a basic familiarity with CentOS, NFS, SSH, and to a lesser extent, oVirt itself. Further explanation and documentation on these technologies exists on their developers’ websites.
First, install CentOS on your first machine. This should be fairly simple. Install the basic minimum server distribution. When the OS starts for the first time, make sure sshd is up, iptables is empty, and that you save the sshd and iptables configurations. If you are using a dedicated server provider, you may want to uninstall some packages from their default OS deployment. I would strongly recommend going with a dedicated server provider that offers some form of KVM-over-IP, so that you can ensure you are running as lean as possible.
Once the OS is deployed, prepare the environment by wiping out the iptables rules.
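On CentOS 6 (the chkconfig/sysconfig era this guide targets), one way to wipe the rules is:

```shell
# Flush all existing iptables rules...
iptables -F
# ...and persist the now-empty ruleset so it survives a restart
service iptables save
```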
Install the oVirt repository
yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
Install oVirt Engine
yum install -y ovirt-engine
Okay, so you’ve got all the packages installed; it’s time to set up the oVirt Engine. This is done via a simple command-line tool:
engine-setup
This will guide you through the installation of the oVirt Engine. Historically, I’ve had issues with the firewall configuration, so I do not let the engine setup perform that step for me. For security, you should have engine-setup configure a firewall, but make sure to manually edit /etc/sysconfig/iptables to punch holes in it for ports 80, 443, and the NFS/portmapper ports, as you’ll need them later. Accept all the other defaults, but don’t configure an ISO domain at this time.
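As a sketch, the holes I mean look something like this in /etc/sysconfig/iptables — these lines must come before the final REJECT rule, and any NFS ports beyond the well-known 111/2049 depend on your rpcbind configuration:

```
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp --dport 2049 -j ACCEPT
```

After editing, `service iptables restart` applies the changes.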
Set up NFS
yum -y install nfs-utils
chkconfig nfs on
chkconfig rpcbind on
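chkconfig only enables the services at boot; to bring them up now as well:

```shell
# Start the portmapper first, then the NFS server
service rpcbind start
service nfs start
```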
In addition to starting the NFS services, you need to configure /etc/exports. You should have lines in it for the loopback and public IPs of your current server, as well as any other servers you will be adding to the cluster. oVirt also requires very specific NFS settings. If you’re running with separate spinning disk and SSD volumes, I would recommend setting these up as separate exports in NFS. Make sure these mount points exist and you have your filesystems set up the way you want them.
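For example, using the mount points from this guide and the example IP from later sections, my /etc/exports looked roughly like this — the all_squash/anonuid/anongid options squash everything to UID/GID 36 (the vdsm user and kvm group), which oVirt expects; run `exportfs -r` afterwards to reload the exports:

```
/isos   127.0.0.1(rw,all_squash,anonuid=36,anongid=36) 192.168.1.2(rw,all_squash,anonuid=36,anongid=36)
/vmhdd  127.0.0.1(rw,all_squash,anonuid=36,anongid=36) 192.168.1.2(rw,all_squash,anonuid=36,anongid=36)
/vmssd  127.0.0.1(rw,all_squash,anonuid=36,anongid=36) 192.168.1.2(rw,all_squash,anonuid=36,anongid=36)
```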
Set up your NFS storage domain
Set the appropriate permissions on your NFS mount points (UID/GID 36 is the vdsm user and kvm group that oVirt runs as).
chown -R 36:36 /isos /vmhdd /vmssd
Add the host in oVirt
Now, it’s time to use the web browser. Log into your server via HTTPS; for example, https://192.168.1.2, and click on the Administration Portal link. Enter the username “admin” and the password you specified during engine-setup.
Now go to the Hosts tab and click New. Put in the IP address and root password of your first node. Warning: click Advanced Settings and disable the automatic firewall configuration, or ports 80 and 443 will be blocked and you won’t be able to access the web admin any more. Wait for the node installation to finish (several yum packages will be installed). If you’re setting up a true private cloud and have been running these steps on multiple servers, you can add all your hosts at the same time.
Add the storage domain
Now it’s time to add your storage. Click the Storage tab and click New Storage. Select your new data center and give your new storage a name; for example, VM-HDD. Spaces are not allowed in storage domain names. For Use Host, select the node that contains the directory you created; i.e., for your first node, select your first node from the drop-down list. This is important; if the node and directory don’t match, this will fail. Finally, for the location, specify the bound IP address of your node, followed by the path of your storage domain. For example:
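Using the example IP from earlier and one of the mount points created above:

```
192.168.1.2:/vmhdd
```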
It will take several minutes for the cluster to come online and your storage to become usable. Add all your exports except your ISO domain in this step.
Now you can create an ISO domain. This has the same setup as a storage domain, except under Domain Function / Storage Type, select ISO/NFS. Point to your ISO folder:
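Again using the example IP and the ISO mount point from earlier:

```
192.168.1.2:/isos
```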
P.S.: Uploading to ISO domains
I found it very difficult to parse how to upload ISOs to storage domains. The easiest way is to download your files locally on the same host that your ISO domain exists on, and then use the ovirt-iso-uploader.
ovirt-iso-uploader -r 192.168.1.2 --iso-domain=ISOs upload Fedora-Live-KDE-x86_64-22-3.iso
This will copy the file from your “local” filesystem to your “NFS” filesystem. Since it’s a local copy, it won’t take long.
Tweaks for opnSense
oVirt networking is not set up to pass NAT traffic properly. Make the following tweaks to fix this.
First, look at all of your networks:
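I just listed the interfaces on the host; on an oVirt node you’ll see the physical NICs, the ovirtmgmt bridge, and a vnetN device for each running VM:

```shell
# List every network interface oVirt and libvirt have created on this host
ip link show
```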
Here’s the first caveat (I did say there are probably better ways to do this): I don’t know which of these networks actually require these settings, so I disabled offloading on every network I could. What follows is a trial-and-error dump of getting the NAT to pass traffic properly.
ethtool -K vnet3 tx off
ethtool -K vnet3 gso off
ethtool -K vnet2 gso off
ethtool -K vnet1 gso off
ethtool -K vnet0 gso off
ethtool -K ovirtmgmt gso off
ethtool -K bond0 gso off
ethtool -K ovirtmgmt tx off
ethtool --offload ovirtmgmt rx off tx off
ethtool --offload eth0 rx off tx off
ethtool -k eth0
ethtool -K eth0 tso off
ethtool -K eth1 tso off
ethtool --offload eth1 rx off tx off
ethtool --offload bond0 rx off tx off
ethtool -K bond0 tso off
That’s as far as we’ll cover in part 1. Take a look around your first node (or nodes). Explore oVirt a bit. We’ll cover adding a VM and installing opnSense next.