Ok, full disclosure: I’m pretty sure I wrote this way back in 2014 or so. I’m not sure if it is still applicable to current versions of Ceph and OpenStack (or even if this procedure was correct at all). I’m going to go step by step through this at some point to vet this. I grabbed this from my old website; it’s weird to read stuff you’ve written and forgotten about.
This page exists primarily as my cheat sheet and notes to deploy OpenStack with a Rados block device storage backend provided by Ceph. The base OS used is RHEL 7 in order to use the OpenStack Platform (OSP) deployment tools. I used the packstack installer rather than the Foreman-based installer as this was simply a POC system consisting of a single controller node and two compute nodes — plus it was much easier.
Install and configure RHEL 7
On all nodes except where indicated:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub | ssh [compute_node_IP] "cat >> ~/.ssh/authorized_keys"
subscription-manager register
Username: [your_redhat_account_username]
Password: [your password]
subscription-manager subscribe --auto
subscription-manager repos --disable='*'
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-openstack-6.0-rpms
subscription-manager repos --enable=rhel-7-server-rh-common-rpms
subscription-manager repos --enable=rhel-7-server-optional-rpms
subscription-manager repos --enable=rhel-7-server-openstack-6.0-installer-rpms
subscription-manager repos --enable=rhel-server-rhscl-7-rpms
yum -y install ntp lsscsi sg3_utils vim net-tools tcpdump
yum -y update
yum repolist
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
systemctl disable firewalld.service
systemctl stop firewalld.service
setenforce 0
echo "SELINUX=disabled" > /etc/sysconfig/selinux
echo "NETWORKING=yes" >> /etc/sysconfig/network
echo "GATEWAY=[your_gateway_IP]" >> /etc/sysconfig/network
systemctl start network.service
systemctl enable network.service
systemctl start ntpd.service
systemctl enable ntpd.service
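Before moving on, a quick sanity check that the prep actually stuck (nothing official, just what I eyeball):

getenforce                      # should report Permissive now (Disabled after a reboot)
systemctl is-active ntpd        # active
systemctl is-active firewalld   # inactive
yum repolist                    # only the repos enabled above should show up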
Install OpenStack
On the controller node:
sudo yum -y install openstack-packstack
packstack --install-hosts=[CONTROLLER_IP,NODE_IP_ADDRESSES]
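Packstack drops a keystonerc_admin file in the home directory of whoever ran it on the controller. If you want to verify the deployment came up before touching anything else, something like this works (openstack-status comes from the openstack-utils package, which may or may not already be installed):

source ~/keystonerc_admin
openstack-status
nova service-list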
Compile and Install 3.14.33 Kernel
On all nodes:
yum -y groupinstall "Development Tools"
yum -y install ncurses-devel bc wget
wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.14.33.tar.xz
tar xf linux-3.14.33.tar.xz
cp -r linux-3.14.33 /usr/src/
cd /usr/src/
ln -s linux-3.14.33 linux
cd linux
make mrproper
make menuconfig
make bzImage
make modules
make modules_install
make install
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
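Once a node comes back up, confirm it actually booted into the new kernel before going any further:

uname -r    # should report 3.14.33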
Configuring Ceph
On all nodes, create a ceph.repo file in /etc/yum.repos.d/:
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-giant/rhel7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-giant/rhel7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-giant/rhel7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
Then install the release.asc key:
sudo rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
On the controller node, install the following:
sudo yum -y install python-ceph ceph
On each compute node, install the following:
sudo yum -y install ceph
On your Ceph administrator node or monitor node:
ssh 10.0.1.9[1-3] sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-cinder-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder | ssh {your-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-compute-server} sudo chown nova:nova /etc/ceph/ceph.client.cinder.keyring
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
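Before wiring OpenStack up to these pools, I like to double-check from the monitor/admin node that the pools and client keys actually exist:

ceph -s
ceph osd lspools
ceph auth get client.glance
ceph auth get client.cinder
ceph auth get client.cinder-backup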
On your compute nodes:
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created

sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
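To confirm libvirt actually stored the key, you can ask it back (just a quick check):

sudo virsh secret-list
sudo virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337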
On your controller node, edit /etc/glance/glance-api.conf and add or adjust the following under the [DEFAULT], [glance_store], and [paste_deploy] sections:
[DEFAULT]
...
default_store = rbd
show_image_direct_url = True
...

[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
...

[paste_deploy]
flavor = keystone
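Once glance-api has been restarted (the restart commands are at the end of this section), you can confirm images really land in Ceph by uploading something small and then listing the images pool. A rough sketch; the CirrOS file and image name are just placeholders, and the flags are the old v1-style glance CLI from this era:

glance image-create --name cirros-test --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.3-x86_64-disk.img
rbd -p images ls --id glance    # the new image's UUID should show up here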
I hit a bit of a snag here. The official Ceph documentation seems to indicate that you want to put all your rbd_ options in the [DEFAULT] section, and there are commented-out rbd_ options there that would bear that out. But when I initially did that, I wasn't able to create volumes on Ceph. I then found another source that has you put the rbd_ options in a separate [ceph] section; I did that and it worked. So, under pressure, I put all the rbd_ options under both [DEFAULT] and [ceph]. I suspect that if lvm were disabled as a backend, everything would be read from the [DEFAULT] section. Anyway, I'll do more research on that later. This configuration seems to work, as illogical, redundant, and messy as it is.

On your controller node, edit /etc/cinder/cinder.conf:
[DEFAULT]
enabled_backends = lvm,ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
...

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
In the same file add this for Cinder backup:
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
On your compute nodes edit /etc/ceph/ceph.conf:
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true
admin_socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
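One thing to watch with the admin_socket line: the directory has to exist and be writable by whatever user qemu runs as, or the sockets never show up. Something along these lines should cover it; qemu:qemu is my assumption for a stock RHEL 7 libvirt setup, so adjust if your hypervisor runs qemu as something else:

mkdir -p /var/run/ceph
chown qemu:qemu /var/run/ceph    # assumption: qemu processes run as qemu:qemu here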
On your compute nodes edit /etc/nova/nova.conf under the [libvirt] section:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
At this point, you are basically finished installing OpenStack with Ceph as a storage backend. Reboot your nodes, or, if you want to be all Unix proper, just restart the affected services:
sudo systemctl restart openstack-glance-api.service
sudo systemctl restart openstack-nova-compute.service
sudo systemctl restart openstack-cinder-volume.service
sudo systemctl restart openstack-cinder-backup.service
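A quick end-to-end test to prove volumes are landing on Ceph rather than the lvm backend. This is only a sketch; the type name, volume name, and size are arbitrary, and it leans on the volume_backend_name = ceph set in cinder.conf above:

source ~/keystonerc_admin
cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph
cinder create --volume-type ceph --display-name ceph-test 1
cinder list                      # wait for the volume to go "available"
rbd -p volumes ls --id cinder    # a volume-<uuid> object should appear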
Configure a Simple Neutron Network
While not technically part of a Ceph configuration, I had to configure a working network so that the instances would be more interesting to work with, and heck, accessible. Note that I had to use Linux kernel 3.14.33 in order to get the OVS bridging set up by the packstack installer to work. It absolutely does NOT work under the current stable 3.19 kernel, like the one available from ELRepo. OVS bridging may work with the 3.19 kernel if you source the newest version of OVS, but I'm not doing that here. Anyway, go and destroy the default public network.
cd ~
source keystonerc_admin
neutron router-gateway-clear router1
neutron subnet-delete public_subnet
neutron net-delete public
Set your physical device to be an OVS port:
# vi /etc/sysconfig/network-scripts/ifcfg-enp8s0f0
DEVICE=enp8s0f0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
Configure your OVS bridge as you would a physical device:
# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.120.10
NETMASK=255.255.255.0
GATEWAY=192.168.120.1
DNS1=192.168.120.1
ONBOOT=yes
Reboot that shit!
reboot
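After the reboot, check that the physical NIC really ended up as a port on br-ex and that the bridge picked up its IP (interface names and addresses match the examples above):

ovs-vsctl show              # enp8s0f0 should be listed as a port on br-ex
ip addr show br-ex          # should carry 192.168.120.10
ping -c 3 192.168.120.1     # gateway should answer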
Really, you can do this entire part in the GUI. But here it is because I copied it from the docs. Change your public network settings to match your LAN.
# cd ~
# source keystonerc_admin
# neutron net-create public --router:external=True
# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=192.168.100.20,end=192.168.100.100 --gateway=192.168.100.1 public 192.168.100.0/24
# neutron router-gateway-set router1 public
# neutron net-create private_network
# neutron subnet-create private_network 192.168.200.0/24 --name private_vmsubnet
# neutron router-create router2
# neutron router-gateway-set router2 public
# neutron router-interface-add router2 private_vmsubnet
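To actually reach an instance from the LAN, you still need to open up the default security group and hand the instance a floating IP. A minimal sketch; the instance name is a placeholder, and these are the old nova-CLI proxy commands that still worked against neutron at the time:

# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova floating-ip-create public
# nova floating-ip-associate [instance_name_or_id] [floating_ip_from_previous_command]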