VDI: looking forward to oVirt 3.6 and its RBD support

Introduction

I am working on a project to deploy an open-source VDI solution as the access infrastructure for an MPI cluster. In the middle of April I read the news about oVirt 3.6 and its Cinder integration. This feature gives oVirt/Red Hat Virtualization capabilities similar to what VMware/Citrix/Hyper-V have with ScaleIO.
This article describes a test installation of the oVirt 3.6 beta (a master snapshot from the 3.6 road map) with Cinder and RBD integration, a Hosted Engine, and Active Directory integration.

Task details

Simple scheme.

The installation includes:

*Installing Ceph RBD on all nodes.
*Installing HA OpenStack components, which includes:
*Installing a MariaDB Galera cluster
*Installing a RabbitMQ cluster
*Installing an HA cluster
*Installing HA OpenStack Identity
*Installing HA Glance
*Installing HA Cinder
*Installing oVirt 3.6 nightly with Hosted Engine.

1. Nodes configuration

This article does not cover OS tuning or detailed network configuration, since it is too early for that with a beta release.
Hardware configuration of each node:
One HDD for the OS, three HDDs for data, 2x Xeon 56XX, 96 GB RAM, 2x 1 Gbit/s Ethernet

Network configuration. Bond mode=0 and three VLANs were configured.

VLAN12 - 172.25.0.0/16 - VM network
VLAN13 - 172.26.2.0/24 - Management Network 
VLAN14 - 172.26.1.0/24 - Storage and message Network 

Example for IMM10:

cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:1e:67:0c:41:f8
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
MTU=1500
NM_CONTROLLED=no
IPV6INIT=no
cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=00:1e:67:0c:41:f9
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS='mode=0 miimon=100'
ONBOOT=yes
BOOTPROTO=none
MTU=1500
NM_CONTROLLED=no
IPV6INIT=no
HOTPLUG=no
cat /etc/sysconfig/network-scripts/ifcfg-bond0.12
DEVICE=bond0.12
VLAN=yes
BRIDGE=virtlocal
ONBOOT=yes
MTU=1500
NM_CONTROLLED=no
IPV6INIT=no
HOTPLUG=no
cat /etc/sysconfig/network-scripts/ifcfg-virtlocal
DEVICE=virtlocal
NM_CONTROLLED=no
ONBOOT=yes
TYPE=bridge
BOOTPROTO=none
IPADDR=172.25.0.210
PREFIX=16
MTU=1500
cat /etc/sysconfig/network-scripts/ifcfg-bond0.13
DEVICE=bond0.13
VLAN=yes
BRIDGE=ovirtmgmt
ONBOOT=yes
MTU=1500
NM_CONTROLLED=no
IPV6INIT=no
HOTPLUG=no
cat /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
NM_CONTROLLED=no
ONBOOT=yes
TYPE=bridge
BOOTPROTO=none
IPADDR=172.26.2.210
PREFIX=24
IPV4_FAILURE_FATAL=no
MTU=1500
GATEWAY=172.26.2.205 # GATEWAY MUST BE CONFIGURED on the ovirtmgmt interface to install Hosted Engine correctly.
cat /etc/sysconfig/network-scripts/ifcfg-bond0.14
DEVICE=bond0.14
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
NETWORK=172.26.1.0
NETMASK=255.255.255.0
IPADDR=172.26.1.210

Software configuration.
OS: CentOS 7.1. SELinux and the firewall were disabled.

Passwordless SSH access was configured on all nodes:

ssh-keygen -t dsa    # create a key without a passphrase
cd /root/.ssh
cat id_dsa.pub >> authorized_keys
chown root.root authorized_keys
chmod 600 authorized_keys
echo "StrictHostKeyChecking no" > config

The data disks were assembled into a RAIDZ pool based on a ZFS-on-Linux zvol, with compression switched on. The vdev /dev/zvol/rzfs/cvol was created on each node.
On every node, install ZFS on Linux:

yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
yum install kernel-devel 
yum install zfs 

Check that the zfs.ko kernel module is present:

ls /usr/lib/modules/`uname -r`/extra/

On every node, create the zpool rzfs:

zpool create rzfs raidz /dev/sdb /dev/sdc /dev/sdd

Create a zvol named cvol:

zfs create -V 1700G rzfs/cvol

Switch on compression:

 
zfs set compression=lz4 rzfs

All nodes should have the same /etc/hosts file:

172.26.1.210 imm10
172.26.1.211 imm11
172.26.1.212 imm12
172.26.1.213 imm13
172.26.2.210 imm10.virt
172.26.2.211 imm11.virt
172.26.2.212 imm12.virt
172.26.2.213 imm13.virt
172.26.2.250 manager #HOSTED_ENGINE VM
172.26.2.250 manager.virt #HOSTED_ENGINE VM
172.26.2.254 controller #ShareIP for Openstack  

2. Installing ceph rbd on all nodes

2.1. Setup repo on all nodes.

Set up repository: (on all nodes)

 cat << EOT > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for \$basearch
baseurl=http://ceph.com/rpm-hammer/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-hammer/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

EOT

Import gpgkey: (on all nodes)

 rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'

2.2. Installing ceph.

Don't forget to configure ntp server on all nodes.

For all nodes:

yum install ceph-common ceph ceph-fuse ceph-deploy ntp -y  
systemctl enable ntpd
systemctl start ntpd
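
If the nodes should synchronize against a local time source rather than the default pool servers, ntpd can be pointed at it; a sketch, where 172.26.2.1 is a hypothetical local NTP server:

sed -i 's/^server /#server /' /etc/ntp.conf
echo "server 172.26.2.1 iburst" >> /etc/ntp.conf
systemctl restart ntpd
ntpq -p    # verify the peer is reachable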

2.3. Deploying monitors.

On the first node:

cd /etc/ceph
ceph-deploy new imm10 imm11 imm12 imm13
ceph-deploy mon create-initial

Check for running monitors:

ceph -s
    cluster 9185ec98-3dec-4ba8-ab7d-f4589d6c60e7
    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
  monmap e1: 4 mons at {imm10=172.26.1.210:6789/0,imm11=172.26.1.211:6789/0,imm12=172.26.1.212:6789/0,imm13=172.26.1.213:6789/0}
          election epoch 116, quorum 0,1,2,3 imm10,imm11,imm12,imm13
  osdmap e1: 0 osds: 0 up, 0 in
  pgmap v2: 192 pgs, 4 pools, 0 bytes data, 0 objects
        0 kB used, 0 kB / 0 kB avail
             192 creating

2.4. Deploying osd.

For every node:

cd /etc/ceph
ceph-deploy gatherkeys immX
ceph-deploy disk zap immX:zvol/rzfs/cvol
ceph-deploy osd prepare immX:zvol/rzfs/cvol

where X is the node number.
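
Since the commands differ only in the node name, they can also be run from one node in a loop; a sketch using the same /etc/ceph working directory:

cd /etc/ceph
for X in 10 11 12 13; do
    ceph-deploy gatherkeys imm$X
    ceph-deploy disk zap imm$X:zvol/rzfs/cvol
    ceph-deploy osd prepare imm$X:zvol/rzfs/cvol
done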

Check for all osd running:

ceph -s
      cluster 9185ec98-3dec-4ba8-ab7d-f4589d6c60e7
   health HEALTH_OK
   monmap e1: 4 mons at {imm10=172.26.1.210:6789/0,imm11=172.26.1.211:6789/0,imm12=172.26.1.212:6789/0,imm13=172.26.1.213:6789/0}
          election epoch 116, quorum 0,1,2,3 imm10,imm11,imm12,imm13
   osdmap e337: 4 osds: 4 up, 4 in
    pgmap v1059868: 512 pgs, 4 pools, 108 GB data, 27775 objects
          216 GB used, 6560 GB / 6776 GB avail
               512 active+clean
client io 0 B/s rd, 2080 B/s wr, 1 op/s

For cloning purposes, add the following line to /etc/ceph/ceph.conf on every node:

rbd default format = 2

Restart Ceph on every node.
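
To confirm that new images really get format 2 after the restart, you can create and inspect a throwaway image in the default rbd pool (which is deleted in the next step anyway); a quick check:

rbd create test-fmt --size 128
rbd info test-fmt | grep format    # should report "format: 2"
rbd rm test-fmt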

2.5. Preparing Ceph for Cinder and Glance (OpenStack).

From any node.

Create pools:

ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
ceph osd pool create volumes 128
ceph osd pool set volumes min_size 1
ceph osd pool set volumes size 2
ceph osd pool create images 128
ceph osd pool set images min_size 1
ceph osd pool set images size 2
ceph osd pool create backups 128
ceph osd pool set backups min_size 1
ceph osd pool set backups size 2
ceph osd pool create vms 128
ceph osd pool set vms min_size 1
ceph osd pool set vms size 2

For guidance on choosing the pg_num values used above, see the Ceph documentation section "A preselection of pg_num".
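
As a rough rule of thumb from that documentation (the numbers here are only an illustration for this four-OSD cluster):

# total PGs ≈ (number of OSDs * 100) / replica size
# here: (4 * 100) / 2 = 200 PGs for the whole cluster,
# spread over 4 pools ≈ 50 PGs each, rounded up to a power of two,
# so 64-128 PGs per pool is a reasonable choice.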

Set up cephx authentication (from any node):

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' >> /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' >> /etc/ceph/ceph.client.glance.keyring

Copy /etc/ceph/ceph.client.cinder.keyring and /etc/ceph/ceph.client.glance.keyring to other nodes.
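
A sketch of that copy step, assuming the keyrings were created on imm10 and passwordless SSH is in place:

for host in imm11 imm12 imm13; do
    scp /etc/ceph/ceph.client.cinder.keyring /etc/ceph/ceph.client.glance.keyring root@$host:/etc/ceph/
done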

3. Installing and configuring HA cluster

3.1. Install pacemaker, corosync and pcs on all nodes.
yum install pacemaker corosync resource-agents pcs -y

3.2. Configure HA cluster.

Set the same password for the hacluster user on all nodes:

passwd hacluster

Enable and start pcsd service:

systemctl enable pcsd.service
systemctl start pcsd.service

Configure the cluster from any node:

pcs cluster auth imm10 imm11 imm12 imm13
pcs cluster setup --name ovirt  imm10 imm11 imm12 imm13 --force
pcs cluster start --all

Check:

pcs status

Cluster name: ovirt
Last updated: Thu Jul 16 01:45:10 2015
Last change: Thu Jul 16 01:43:18 2015
Stack: corosync
Current DC: imm11.virt (2) - partition with quorum
Version: 1.1.12-a14efad
4 Nodes configured
0 Resources configured
Online: [ imm10 imm11 imm12 imm13 ]
Full list of resources:
PCSD Status:
imm10: Online
imm11: Online
imm12: Online
imm13: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

Setup properties:

pcs property set stonith-enabled=false
pcs property set no-quorum-policy=stop 

Create HA resource. Configure share IP address:

pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.26.2.254 cidr_netmask=24 op monitor interval=30s

If the check is OK, the HA cluster was created successfully.
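
A quick way to confirm the shared IP is actually running (not part of the official procedure, just a sanity check):

pcs status resources          # ClusterIP should be Started on one of the nodes
ping -c 3 172.26.2.254        # the shared address should answer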

4. Installing infrastructure for OpenStack components

4.1. Installing and configuring the MariaDB Galera cluster.

On all nodes

Setup repo:

 cat << EOT > /etc/yum.repos.d/mariadb.repo
 [mariadb]
 name = MariaDB
 baseurl = http://yum.mariadb.org/10.0/centos7-amd64
 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck=1
 EOT

Installing:

 yum install MariaDB-Galera-server MariaDB-client rsync galera

Start the service:

 service mysql start
 chkconfig mysql on
 mysql_secure_installation

Prepare for the cluster:

 mysql -p
 GRANT USAGE ON *.* to sst_user@'%' IDENTIFIED BY 'PASS';
 GRANT ALL PRIVILEGES on *.* to sst_user@'%';
 FLUSH PRIVILEGES;
 exit
 service mysql stop

Configure the cluster on each node immX (see the filled-in example for imm10 after the template):

 cat << EOT > /etc/my.cnf
 [mysqld]
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 binlog_format=ROW
 default-storage-engine=innodb
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1
 query_cache_size=0
 query_cache_type=0
 bind-address=172.26.1.X # setup real node ip
 datadir=/var/lib/mysql
 innodb_log_file_size=100M
 innodb_file_per_table
 innodb_flush_log_at_trx_commit=2
 wsrep_provider=/usr/lib64/galera/libgalera_smm.so
 wsrep_cluster_address="gcomm://172.26.1.210,172.26.1.211,172.26.1.212,172.26.1.213"
 wsrep_cluster_name='imm_galera_cluster'
 wsrep_node_address='172.26.1.X' # setup real node ip
 wsrep_node_name='immX' #  setup real node name
 wsrep_sst_method=rsync
 wsrep_sst_auth=sst_user:PASS
 
 EOT
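
For example, on imm10 the node-specific lines of the template above would read (the rest of the file is identical on every node):

 bind-address=172.26.1.210
 wsrep_node_address='172.26.1.210'
 wsrep_node_name='imm10'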

(on imm10)

 /etc/init.d/mysql bootstrap

(on other nodes)

 /etc/init.d/mysql start

Check on all nodes:

 mysql -p
 show status like 'wsrep%';
+-------------------------------+-------------------------------------------------------------------------+
| Variable_name                 | Value                                                                   |
+-------------------------------+-------------------------------------------------------------------------+
| wsrep_local_state_uuid       | 9d6a1186-2b82-11e5-8b9a-52fea1194a47                                    |
| wsrep_protocol_version       | 7                                                                       |
| wsrep_last_committed         | 900212                                                                  |
| wsrep_replicated             | 92767                                                                   |
| wsrep_replicated_bytes       | 37905769                                                                |
| wsrep_repl_keys              | 278607                                                                  |
| wsrep_repl_keys_bytes        | 4362508                                                                 |
| wsrep_repl_data_bytes        | 27606173                                                                |
| wsrep_repl_other_bytes       | 0                                                                       |
| wsrep_received               | 741                                                                     |
| wsrep_received_bytes         | 6842                                                                    |
| wsrep_local_commits          | 92767                                                                   |
| wsrep_local_cert_failures    | 0                                                                       |
| wsrep_local_replays          | 0                                                                       |
| wsrep_local_send_queue       | 0                                                                       |
| wsrep_local_send_queue_max   | 2                                                                       |
| wsrep_local_send_queue_min   | 0                                                                       |
| wsrep_local_send_queue_avg   | 0.000460                                                                |
| wsrep_local_recv_queue       | 0                                                                       |
| wsrep_local_recv_queue_max   | 1                                                                       |
| wsrep_local_recv_queue_min   | 0                                                                       |
| wsrep_local_recv_queue_avg   | 0.000000                                                                |
| wsrep_local_cached_downto    | 807446                                                                  |
| wsrep_flow_control_paused_ns | 0                                                                       |
| wsrep_flow_control_paused    | 0.000000                                                                |
| wsrep_flow_control_sent      | 0                                                                       |
| wsrep_flow_control_recv      | 0                                                                       |
| wsrep_cert_deps_distance     | 5.293391                                                                |
| wsrep_apply_oooe             | 0.001347                                                                |
| wsrep_apply_oool             | 0.000000                                                                |
| wsrep_apply_window           | 1.001347                                                                |
| wsrep_commit_oooe            | 0.000000                                                                |
| wsrep_commit_oool            | 0.000000                                                                |
| wsrep_commit_window          | 1.000000                                                                |
| wsrep_local_state            | 4                                                                       |
| wsrep_local_state_comment    | Synced                                                                  |
| wsrep_cert_index_size        | 7                                                                       |
| wsrep_causal_reads           | 0                                                                       |
| wsrep_cert_interval          | 0.003342                                                                |
| wsrep_incoming_addresses     | 172.26.1.210:3306,172.26.1.211:3306,172.26.1.212:3306,172.26.1.213:3306 |
| wsrep_evs_delayed            |                                                                         |
| wsrep_evs_evict_list         |                                                                         |
| wsrep_evs_repl_latency       | 0.000441713/0.000505823/0.000570299/4.58587e-05/9                       |
| wsrep_evs_state              | OPERATIONAL                                                             |
| wsrep_gcomm_uuid             | 09ffddfe-4c9b-11e5-a6fa-aef3ff75a813                                    |
| wsrep_cluster_conf_id        | 4                                                                       |
| wsrep_cluster_size           | 4                                                                       |
| wsrep_cluster_state_uuid     | 9d6a1186-2b82-11e5-8b9a-52fea1194a47                                    |
| wsrep_cluster_status         | Primary                                                                 |
| wsrep_connected              | ON                                                                      |
| wsrep_local_bf_aborts        | 0                                                                       |
| wsrep_local_index            | 0                                                                       |
| wsrep_provider_name          | Galera                                                                  |
| wsrep_provider_vendor        | Codership Oy <info@codership.com>                                       |
| wsrep_provider_version       | 25.3.9(r3387)                                                           |
| wsrep_ready                  | ON                                                                      |
| wsrep_thread_count           | 2                                                                       |
+------------------------------+-------------------------------------------------------------------------+
57 rows in set (0.00 sec)

Remember: if all nodes go down, the node with the most recent data must be started with /etc/init.d/mysql start --wsrep-new-cluster. You have to find that node first. If you bootstrap the cluster from a node with an outdated state, the other nodes will report an error in their logs like: [ERROR] WSREP: gcs/src/gcs_group.cpp:void group_post_state_exchange(gcs_group_t*)():319: Reversing history: 0 → 0, this member has applied 140536161751824 more events than the primary component. Data loss is possible. Aborting.
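
To find the most advanced node after a full outage, compare the Galera state file on each node; a sketch (if seqno shows -1 after an unclean shutdown, it has to be recovered first):

cat /var/lib/mysql/grastate.dat    # bootstrap from the node with the highest seqno
mysqld_safe --wsrep-recover        # prints the recovered position when seqno is -1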

4.2. Installing and configuring the RabbitMQ messaging cluster.

On all nodes install rabbitmq:

yum install rabbitmq-server -y
systemctl enable rabbitmq-server

Configure the RabbitMQ cluster. On the first node:

systemctl start rabbitmq-server
systemctl stop rabbitmq-server

Copy .erlang.cookie from imm10 to the other nodes:

scp /var/lib/rabbitmq/.erlang.cookie root@imm11:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@imm12:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@imm13:/var/lib/rabbitmq/.erlang.cookie

On the target nodes ensure the correct owner, group, and permissions of the .erlang.cookie file:

chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie

Start RabbitMQ on all nodes and verify the nodes are running:

systemctl start rabbitmq-server
rabbitmqctl cluster_status
Cluster status of node rabbit@immX ...
[{nodes,[{disc,[rabbit@immX]}]},
{running_nodes,[rabbit@immX]},
{cluster_name,<<"rabbit@immX">>},
{partitions,[]}]
...done.

Run the following commands on all nodes except the first one:

rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@imm10
rabbitmqctl start_app

To verify the cluster status:

rabbitmqctl cluster_status
Cluster status of node rabbit@imm10 ...
[{nodes,[{disc,[rabbit@imm10,rabbit@imm11,rabbit@imm12,rabbit@imm13]}]},
 {running_nodes,[rabbit@imm11,rabbit@imm12,rabbit@imm10]},
 {cluster_name,<<"rabbit@imm10">>},
 {partitions,[]}]
...done.

The message broker creates a default account that uses guest for the username and password. To simplify installation of your test environment, we recommend that you use this account, but change the password for it. Run the following command:
Replace RABBIT_PASS with a suitable password.

rabbitmqctl change_password guest RABBIT_PASS

To ensure that all queues, except those with auto-generated names, are mirrored across all running nodes it is necessary to set the policy key ha-mode to all. Run the following command on one of the nodes:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'
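
To verify that the policy took effect (the output format varies slightly between RabbitMQ versions):

rabbitmqctl list_policies
# should list ha-all with pattern ^(?!amq\.).* and definition {"ha-mode":"all"}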

5. Installing oVirt 3.6 (at the time of testing it was a beta version)

5.1. Preparing

Install bridge utilities and net-tools on all nodes:

yum install bridge-utils net-tools

The ovirtmgmt bridge interface should already be configured as described in the node configuration section.

5.2. Installing VDSM

Install gluster repo for all nodes:

cd /etc/yum.repos.d
wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

Install ovirt-3.6 repository for all nodes:

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36-snapshot.rpm

Install vdsm on all nodes

yum install vdsm 

Check for ceph support.

 qemu-img -h | grep rbd
 Supported formats: blkdebug blkverify bochs cloop dmg file ftp ftps gluster host_cdrom host_device host_floppy http https iscsi nbd parallels qcow qcow2 qed quorum raw rbd sheepdog ssh tftp vdi vhdx vmdk vpc vvfat
 /usr/libexec/qemu-kvm --drive format=? | grep rbd
 Supported formats: ssh gluster rbd tftp ftps ftp https http iscsi sheepdog nbd host_cdrom host_floppy host_device file blkverify blkdebug parallels quorum vhdx qed qcow2 vvfat vpc bochs dmg cloop vmdk vdi qcow raw

Authorize VDSM to use Ceph storage.

Generate uuid on any of nodes:

uuidgen
52665bbc-5e41-4938-9225-0506dceaf615

Create a temporary copy of the cinder secret key on all nodes:

ceph auth get-key client.cinder >> /etc/ceph/client.cinder.key

Create secret.xml. Take the client.cinder key from /etc/ceph/client.cinder.key.

cat > /root/secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>52665bbc-5e41-4938-9225-0506dceaf615</uuid>
<usage type='ceph'>
<name>client.cinder AQCjVLdVNvosLxAAVyEjBEQOFFYqBXiIIghZqA==</name>
</usage>
</secret>

EOF

Copy secret.xml to other nodes.

On all nodes, add the key to libvirt. VDSM has already added a SASL password to libvirt:
you can use the default vdsm libvirt user, vdsm@ovirt, with the password found in /etc/pki/vdsm/keys/libvirt_password.
Alternatively, you can add a new user:

saslpasswd2 -a libvirt USERNAME
Password:
Again (for verification)
virsh secret-define --file /root/secret.xml
virsh secret-set-value --secret 52665bbc-5e41-4938-9225-0506dceaf615 --base64 $(cat /etc/ceph/client.cinder.key)
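
To check that libvirt actually stored the key (run on each node):

virsh secret-list
virsh secret-get-value 52665bbc-5e41-4938-9225-0506dceaf615    # should print the client.cinder key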

5.3. Installing and configuring glusterfs

Installing glusterfs for all nodes:

yum install glusterfs-server 

Start glusterfs on all nodes:

systemctl start glusterd
systemctl enable glusterd

On every node, create the directories below:

mkdir /VIRT
mkdir /ISO
mkdir /STOR

On imm10:

gluster
peer probe imm11
peer probe imm12
peer probe imm13
volume create ISO replica 2 transport tcp imm10:/ISO imm11:/ISO imm12:/ISO imm13:/ISO force
volume create STOR replica 2 transport tcp imm10:/STOR imm11:/STOR imm12:/STOR imm13:/STOR force
# by default hosted engine requires replica 3
volume create VIRT replica 3 transport tcp imm10:/VIRT imm11:/VIRT imm12:/VIRT force
volume start VIRT
volume start ISO
volume start STOR

volume set VIRT storage.owner-uid 36
volume set VIRT storage.owner-gid 36
volume set VIRT server.allow-insecure on

volume set ISO storage.owner-uid 36 
volume set ISO storage.owner-gid 36
volume set ISO server.allow-insecure on

volume set STOR storage.owner-uid 36
volume set STOR storage.owner-gid 36
volume set STOR server.allow-insecure on
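
You can confirm the peers and volumes from any node before moving on:

gluster peer status
gluster volume info
gluster volume status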

5.4. Installing Self-Hosted Engine

For imm10,11,12:

yum install ovirt-hosted-engine-setup

Download the CentOS 7 ISO, for example on imm10:

wget http://mirror.yandex.ru/centos/7.1.1503/isos/x86_64/CentOS-7-x86_64-DVD-1503-01.iso

Setup Hosted Engine on imm10 and add imm11 and imm12.

Use the following answers:

*glusterfs as the storage type
*immX:/VIRT as the path to storage (X is the node where hosted-engine --deploy was started)
*manager as the FQDN and VM hostname
*immX.virt as the name of the host
*GATEWAY_IN_OVIRTMGMT_NETWORK as the gateway

Use the VIRT volume for the hosted_engine VM, ISO for the ISO_DOMAIN, and STOR for the MAIN data domain.

After a successful hosted-engine installation, log in to the engine web portal and add the host imm13.virt.

If the installation was interrupted, you need to destroy the VM and start the hosted-engine installation again. For example:

vdsClient -s 0 list

 vdsClient -s 0 destroy 0a461770-ef3c-4d99-b606-c7836682d75f

Don't forget to copy /etc/hosts from any of the nodes to the hosted-engine VM.
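
For example, assuming root SSH access to the engine VM (manager is its name from /etc/hosts):

scp /etc/hosts root@manager:/etc/hosts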

5.5. Configuring VM networks

Configure two networks from web-portal:
*virtlocal
*ovirtmgmt


Configure the networks on all hosts and activate the hosts. Then add the ISO and STOR storage domains.

6. Installing OpenStack components

6.1. Install and configure HA OpenStack Identity service.

Create the keystone database:

mysql -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Enable the OpenStack repository:

yum install yum-plugin-priorities
rpm -ivh https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm

Install components for all nodes:

yum install openstack-keystone python-keystoneclient -y

Generate a random value to use as the administration token during initial configuration (on one node):

openssl rand -hex 10

Edit the /etc/keystone/keystone.conf file and complete the following actions on all nodes.

In the [DEFAULT] section, define the value of the initial administration token:

[DEFAULT]
...
admin_token = ADMIN_TOKEN
Replace ADMIN_TOKEN with the random value that you generated in a previous step.

In the [database] section, configure database access.

[database]
...
connection = mysql://keystone:KEYSTONE_DBPASS@localhost/keystone
Replace KEYSTONE_DBPASS with the password you chose for the database.

In the [token] section, configure the UUID token provider and SQL driver:

[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token

In the [revoke] section, configure the SQL revocation driver:

[revoke]
...
driver = keystone.contrib.revoke.backends.sql.Revoke

(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True

Create generic certificates and keys and restrict access to the associated files for imm10:

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl

Then copy /etc/keystone/ssl and its files to the other nodes. After copying, set the permissions:

chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl

Populate the Identity service database for imm10:

su -s /bin/sh -c "keystone-manage db_sync" keystone

Check that database was populated:

mysql -p 
\r keystone
show tables;
+-----------------------+
| Tables_in_keystone    |
+-----------------------+
| assignment            |
| credential            |
| domain                |
| endpoint              |
| group                 |
| id_mapping            |
| migrate_version       |
| policy                |
| project               |
| region                |
| revocation_event      |
| role                  |
| service               |
| token                 |
| trust                 |
| trust_role            |
| user                  |
| user_group_membership |
+-----------------------+
18 rows in set (0.00 sec)

Copy /etc/keystone/keystone.conf to other nodes.

Try to start and stop the Identity service on every node. If the service starts successfully, then run the Identity service as an HA resource:

pcs resource create  ovirt_keystone systemd:openstack-keystone \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s
pcs resource group add ovirt_gp  ClusterIP ovirt_keystone
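
A quick check that Keystone now answers on the shared address (the version document does not require authentication):

pcs status resources
curl http://controller:35357/v2.0/    # should return a small JSON version document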

Create tenants, users, and roles.
Configure the administration token and endpoint for every node:

export OS_SERVICE_TOKEN=6ccecc08c5f661c2bddd
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

keystone tenant-create --name admin --description "Admin Tenant"

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | 724adf6fdb324ba49385dba3a307d4d1 |
|     name    |              admin               |
+-------------+----------------------------------+  
keystone user-create --name admin --pass ADMIN_PASS --email test@test.ru
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |           test@test.ru           |
| enabled  |               True               |
|    id    | d46ea880c23646b6858342eb98d32651 |
|   name   |              admin               |
| username |              admin               |
+----------+----------------------------------+
keystone role-create --name admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 81e202c2d28c41fdb1b6a45c772b4479 |
|   name   |              admin               |
+----------+----------------------------------+

keystone user-role-add --user admin --tenant admin --role admin
keystone tenant-create --name service --description "Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | 2250c77380534c3baaf5aa8cb1af0a05 |
|     name    |             service              |
+-------------+----------------------------------+
keystone service-create --name keystone --type identity --description "OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | 8e147209037a4071a8e04e0493c40a79 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ identity / {print $2}') \
--publicurl http://controller:5000/v2.0 \
--internalurl http://controller:5000/v2.0 \
--adminurl http://controller:35357/v2.0 \
--region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |    http://controller:35357/v2.0     |
|      id     | 66d2cc8c0f4a4f73ba083286d4704889 |
| internalurl |     http://controller:5000/v2.0     |
|  publicurl  |     http://controller:5000/v2.0     |
|    region   |            regionOne             |
|  service_id | 8e147209037a4071a8e04e0493c40a79 |
+-------------+----------------------------------+

Create OpenStack client environment scripts.
Create client environment scripts for the admin tenant and user. Future portions of this guide reference these scripts to load appropriate credentials for client operations.

Edit the admin-openrc.sh file and add the following content:

export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v2.0

Now use the script above to export the environment variables:

source admin-openrc.sh

6.2. Install and configure HA Block Storage service Cinder.

Before you install and configure the Block Storage service, you must create a database, service credentials, and API endpoints. Create the cinder database and grant proper access to the cinder database:

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

To create the service credentials, complete these steps:

keystone user-create --name cinder --pass CINDER_PASS
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 3fc1ae9c723143fba59ac7795ffc039c |
|   name   |              cinder              |
| username |              cinder              |
+----------+----------------------------------+
keystone user-role-add --user cinder --tenant service --role admin
keystone service-create --name cinder --type volume  --description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 50c31611cc27447ea8c39fca8f7b86bf |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+

keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 52002528a42b41ffa2abf8b8bb799f08 |
|     name    |             cinderv2             |
|     type    |             volumev2             |
+-------------+----------------------------------+
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volume / {print $2}') \
--publicurl http://controller:8776/v1/%\(tenant_id\)s \
--internalurl http://controller:8776/v1/%\(tenant_id\)s \
--adminurl http://controller:8776/v1/%\(tenant_id\)s \
--region regionOne
+-------------+--------------------------------------+
|   Property  |                Value                 |
+-------------+--------------------------------------+
|   adminurl  | http://controller:8776/v1/%(tenant_id)s |
|      id     |   140f2f2a3f7c4f31b7c3308660c1fe31   |
| internalurl | http://controller:8776/v1/%(tenant_id)s |
|  publicurl  | http://controller:8776/v1/%(tenant_id)s |
|    region   |              regionOne               |
|  service_id |   50c31611cc27447ea8c39fca8f7b86bf   |
+-------------+--------------------------------------+
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region regionOne
+-------------+--------------------------------------+
|   Property  |                Value                 |
+-------------+--------------------------------------+
|   adminurl  | http://controller:8776/v2/%(tenant_id)s |
|      id     |   233dcfa7ff6048a4ad72bec9c962df60   |
| internalurl | http://controller:8776/v2/%(tenant_id)s |
|  publicurl  | http://controller:8776/v2/%(tenant_id)s |
|    region   |              regionOne               |
|  service_id |   52002528a42b41ffa2abf8b8bb799f08   |
+-------------+--------------------------------------+

Install and configure Block Storage controller components on all nodes:

yum install openstack-cinder python-cinderclient python-oslo-db targetcli MySQL-python -y

Edit the /etc/cinder/cinder.conf file and complete the following actions for all nodes.

In the [database] section, configure database access:

[database]
...
connection = mysql://cinder:CINDER_DBPASS@localhost/cinder
Replace CINDER_DBPASS with the password you chose for the Block Storage database.

In the [DEFAULT] section, configure RabbitMQ message broker access:

[DEFAULT]
...
rpc_backend = rabbit
rabbit_hosts = imm10:5672,imm11:5672,imm12:5672,imm13:5672
rabbit_password = RABBIT_PASS #Replace RABBIT_PASS with the password you chose for the guest account in RabbitMQ.
rabbit_userid = guest
rabbit_retry_interval=1
rabbit_retry_backoff=2
rabbit_max_retries=0
rabbit_durable_queues=true
rabbit_ha_queues=true

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS #Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.

Quotas (all values at your discretion):

# Number of volumes allowed per project (integer value)
quota_volumes=100 
# Number of volume snapshots allowed per project (integer
# value)
quota_snapshots=100
# Number of consistencygroups allowed per project (integer
# value)
quota_consistencygroups=100

# Total amount of storage, in gigabytes, allowed for volumes
# and snapshots per project (integer value)
quota_gigabytes=3300

For rbd support:

[DEFAULT]
...
enabled_backends=rbd 
...
[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 255
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 52665bbc-5e41-4938-9225-0506dceaf615
volume_backend_name = RBD
Note: comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
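
Depending on packaging, the cinder-volume service runs as the cinder user, which must be able to read the client.cinder keyring copied earlier; adjusting the permissions is a reasonable precaution (verify the user and group names in your environment):

chgrp cinder /etc/ceph/ceph.client.cinder.keyring
chmod 0640 /etc/ceph/ceph.client.cinder.keyring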

In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:

[DEFAULT]
...
my_ip = 172.26.2.254 #controller

(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True

Populate the Block Storage database:

su -s /bin/sh -c "cinder-manage db sync" cinder
2015-04-16 09:58:32.119 26968 INFO migrate.versioning.api [-] 0 -> 1...
2015-04-16 09:58:33.761 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:33.762 26968 INFO migrate.versioning.api [-] 1 -> 2...
2015-04-16 09:58:34.303 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:34.304 26968 INFO migrate.versioning.api [-] 2 -> 3...
2015-04-16 09:58:34.470 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:34.470 26968 INFO migrate.versioning.api [-] 3 -> 4...
2015-04-16 09:58:35.167 26968 INFO 004_volume_type_to_uuid [-] Created foreign key volume_type_extra_specs_ibfk_1
2015-04-16 09:58:35.173 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.173 26968 INFO migrate.versioning.api [-] 4 -> 5...
2015-04-16 09:58:35.308 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.309 26968 INFO migrate.versioning.api [-] 5 -> 6...
2015-04-16 09:58:35.492 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.492 26968 INFO migrate.versioning.api [-] 6 -> 7...
2015-04-16 09:58:35.666 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.666 26968 INFO migrate.versioning.api [-] 7 -> 8...
2015-04-16 09:58:35.757 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.757 26968 INFO migrate.versioning.api [-] 8 -> 9...
2015-04-16 09:58:35.857 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.857 26968 INFO migrate.versioning.api [-] 9 -> 10...
2015-04-16 09:58:35.957 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.958 26968 INFO migrate.versioning.api [-] 10 -> 11...
2015-04-16 09:58:36.117 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.119 26968 INFO migrate.versioning.api [-] 11 -> 12...
2015-04-16 09:58:36.253 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.253 26968 INFO migrate.versioning.api [-] 12 -> 13...
2015-04-16 09:58:36.419 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.419 26968 INFO migrate.versioning.api [-] 13 -> 14...
2015-04-16 09:58:36.544 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.545 26968 INFO migrate.versioning.api [-] 14 -> 15...
2015-04-16 09:58:36.600 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.601 26968 INFO migrate.versioning.api [-] 15 -> 16...
2015-04-16 09:58:36.751 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.751 26968 INFO migrate.versioning.api [-] 16 -> 17...
2015-04-16 09:58:37.294 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:37.294 26968 INFO migrate.versioning.api [-] 17 -> 18...
2015-04-16 09:58:37.773 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:37.773 26968 INFO migrate.versioning.api [-] 18 -> 19...
2015-04-16 09:58:37.896 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:37.896 26968 INFO migrate.versioning.api [-] 19 -> 20...
2015-04-16 09:58:38.030 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.030 26968 INFO migrate.versioning.api [-] 20 -> 21...
2015-04-16 09:58:38.053 26968 INFO 021_add_default_quota_class [-] Added default quota class data into the DB.
2015-04-16 09:58:38.058 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.058 26968 INFO migrate.versioning.api [-] 21 -> 22...
2015-04-16 09:58:38.173 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.174 26968 INFO migrate.versioning.api [-] 22 -> 23...
2015-04-16 09:58:38.263 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.264 26968 INFO migrate.versioning.api [-] 23 -> 24...
2015-04-16 09:58:38.642 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.642 26968 INFO migrate.versioning.api [-] 24 -> 25...
2015-04-16 09:58:39.443 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:39.444 26968 INFO migrate.versioning.api [-] 25 -> 26...
2015-04-16 09:58:39.455 26968 INFO 026_add_consistencygroup_quota_class [-] Added default consistencygroups quota class data into the DB.
2015-04-16 09:58:39.461 26968 INFO migrate.versioning.api [-] done

Try to start the services:

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

Check the logs:

tail -f /var/log/cinder/api.log /var/log/cinder/scheduler.log
==> /var/log/cinder/api.log <==
2015-04-16 10:34:30.442 30784 INFO cinder.openstack.common.service [-] Started child 30818
2015-04-16 10:34:30.443 30818 INFO eventlet.wsgi.server [-] (30818) wsgi starting up on http://0.0.0.0:8776/
2015-04-16 10:34:30.445 30784 INFO cinder.openstack.common.service [-] Started child 30819
2015-04-16 10:34:30.447 30819 INFO eventlet.wsgi.server [-] (30819) wsgi starting up on http://0.0.0.0:8776/
2015-04-16 10:34:30.448 30784 INFO cinder.openstack.common.service [-] Started child 30820
2015-04-16 10:34:30.449 30820 INFO eventlet.wsgi.server [-] (30820) wsgi starting up on http://0.0.0.0:8776/
2015-04-16 10:34:30.450 30784 INFO cinder.openstack.common.service [-] Started child 30821
2015-04-16 10:34:30.452 30821 INFO eventlet.wsgi.server [-] (30821) wsgi starting up on http://0.0.0.0:8776/
2015-04-16 10:34:30.453 30784 INFO cinder.openstack.common.service [-] Started child 30822
2015-04-16 10:34:30.454 30822 INFO eventlet.wsgi.server [-] (30822) wsgi starting up on http://0.0.0.0:8776/
==> /var/log/cinder/scheduler.log <==
2015-04-16 10:34:29.940 30785 INFO oslo.messaging._drivers.impl_rabbit [req-80c27067-712a-47c3-a8e3-a448e1677e8a - - - - -] Connecting to AMQP server on imm13:5672
2015-04-16 10:34:29.961 30785 INFO oslo.messaging._drivers.impl_rabbit [req-80c27067-712a-47c3-a8e3-a448e1677e8a - - - - -] Connected to AMQP server on imm13:5672
2015-04-16 10:34:30.310 30785 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on imm13:5672
2015-04-16 10:34:30.322 30785 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on imm13:5672
2015-04-16 10:37:14.587 30785 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
2015-04-16 10:37:43.146 31144 INFO cinder.service [-] Starting cinder-scheduler node (version 2014.2.2)
2015-04-16 10:37:43.148 31144 INFO oslo.messaging._drivers.impl_rabbit [req-6e41c1e3-1ffe-4918-bdb2-b99e665713a6 - - - - -] Connecting to AMQP server on imm11:5672
2015-04-16 10:37:43.167 31144 INFO oslo.messaging._drivers.impl_rabbit [req-6e41c1e3-1ffe-4918-bdb2-b99e665713a6 - - - - -] Connected to AMQP server on imm11:5672
2015-04-16 10:37:43.517 31144 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on imm10:5672
2015-04-16 10:37:43.530 31144 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on imm10:5672

Check the service:

source admin-openrc.sh
cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |      Host      | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   imm10.virt   | nova | enabled |   up  | 2015-08-30T13:39:11.000000 |       None      |
|  cinder-volume   | imm10.virt@rbd | nova | enabled |   up  | 2015-08-05T19:31:09.000000 |       None      |
|  cinder-volume   | imm11.virt@rbd | nova | enabled |   up  | 2015-08-30T13:39:12.000000 |       None      |
|  cinder-volume   | imm12.virt@rbd | nova | enabled |   up  | 2015-08-28T08:49:45.000000 |       None      |
|  cinder-volume   | imm13.virt@rbd | nova | enabled |   up  | 2015-08-30T13:39:12.000000 |       None      |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+

Create a test volume:

cinder create --display-name demo-volume2 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-04-16T15:41:01.202976      |
| display_description |                 None                 |
|     display_name    |             demo-volume2             |
|      encrypted      |                False                 |
|          id         | 5c7d35e8-ea55-4aa9-8a57-bb0de94552e7 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

Check that the volume was created, both in Cinder and in the Ceph volumes pool:

cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5c7d35e8-ea55-4aa9-8a57-bb0de94552e7 | available | demo-volume2 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
rbd -p volumes ls -l
NAME                                         SIZE PARENT FMT PROT LOCK
volume-5c7d35e8-ea55-4aa9-8a57-bb0de94552e7 1024M          2

Stop the services and start them as HA resources:

systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
pcs resource create  ovirt_cinder_api systemd:openstack-cinder-api \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s

pcs resource create  ovirt_cinder_scheduler systemd:openstack-cinder-scheduler \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s

pcs resource group add ovirt_gp ovirt_cinder_api ovirt_cinder_scheduler

For all nodes:

systemctl enable openstack-cinder-volume 
systemctl start openstack-cinder-volume

6.3. Install and configure HA Image service Glance.

This section describes how to install and configure the Image Service, code-named glance. In this setup, Glance stores images in the Ceph images pool via RBD.
This section assumes proper installation, configuration, and operation of the Identity service as described in section 6.1.

To configure prerequisites.
Before you install and configure the Image Service, you must create a database, service credentials, and API endpoints.
To create the database, complete these steps:

Use the database access client to connect to the database server as the root user:

mysql -u root -p

Create the glance database:

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

Replace GLANCE_DBPASS with a suitable password.

Exit the database access client. Source the admin credentials to gain access to admin-only CLI commands. Create the glance user:

keystone user-create --name glance --pass GLANCE_PASS
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |  
| enabled  |               True               |
|    id    | f89cca5865dc42b18e2421fa5f5cce66 |
|   name   |              glance              |
| username |              glance              |
+----------+----------------------------------+

Replace GLANCE_PASS with a suitable password.

Add the admin role to the glance user:

keystone user-role-add --user glance --tenant service --role admin
Note: this command provides no output.

Create the glance service entity:

keystone service-create --name glance --type image --description "OpenStack Image Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|   enabled   |               True               |
|      id     | 23f409c4e79f4c9e9d23d809c50fbacf |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+

Create the Image Service API endpoints:

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ image / {print $2}') \
--publicurl http://controller:9292 \
--internalurl http://controller:9292 \
--adminurl http://controller:9292 \
--region regionOne
+-------------+----------------------------------+
|   Property  |             Value                |
+-------------+----------------------------------+
|   adminurl  |   http://controller:9292         |
|      id     | a2ee818c69cb475199a1ca108332eb35 |
| internalurl |   http://controller:9292         |
|  publicurl  |   http://controller:9292         |
|    region   |           regionOne              |
|  service_id | 23f409c4e79f4c9e9d23d809c50fbacf |
+-------------+----------------------------------+

To install and configure the Image Service components, install the packages on all nodes:

yum install openstack-glance python-glanceclient

Edit the /etc/glance/glance-api.conf file and complete the following actions:

In the [database] section, configure database access:

[database]
...
connection = mysql://glance:GLANCE_DBPASS@controller/glance

Replace GLANCE_DBPASS with the password you chose for the Image Service database.

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]

...
flavor = keystone

Replace GLANCE_PASS with the password you chose for the glance user in the Identity service.
Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
In the [glance_store] section, configure the rbd system store and location of image files:

[glance_store]
...
stores=glance.store.rbd.Store
rbd_store_ceph_conf=/etc/ceph/ceph.conf
rbd_store_user=glance
rbd_store_pool=images
rbd_store_chunk_size=8
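
The Ceph documentation also recommends exposing image locations so that Cinder can create volumes as copy-on-write clones of Glance images rather than full copies; whether to enable this is a deployment choice (sketch):

[DEFAULT]
...
show_image_direct_url = True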

In the [DEFAULT] section, configure the noop notification driver to disable notifications because they only pertain to the optional Telemetry service:

[DEFAULT]
...
notification_driver = noop

The Telemetry chapter provides an Image Service configuration that enables notifications. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True

Edit the /etc/glance/glance-registry.conf file and complete the following actions. In the [database] section, configure database access:

[database]
...
connection = mysql://glance:GLANCE_DBPASS@localhost/glance
#Replace GLANCE_DBPASS with the password you chose for the Image Service database.

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS
...
[paste_deploy]
...
flavor = keystone
Replace GLANCE_PASS with the password you chose for the glance user in the Identity service.

Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
In the [DEFAULT] section, configure the noop notification driver to disable notifications because they only pertain to the optional Telemetry service:

[DEFAULT]
...
notification_driver = noop
The Telemetry chapter provides an Image Service configuration that enables notifications.

(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True

Populate the Image Service database:

su -s /bin/sh -c "glance-manage db_sync" glance

To finalize installation

Start the Image Service services:

systemctl start openstack-glance-api.service openstack-glance-registry.service

Check operation. If everything is OK, stop the services before adding them to the cluster:

systemctl stop openstack-glance-api.service openstack-glance-registry.service

Add the services to HA:

pcs resource create  ovirt_glance_api systemd:openstack-glance-api \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s

pcs resource create  ovirt_glance_registry systemd:openstack-glance-registry \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s
pcs resource group add ovirt_gp ovirt_glance_api ovirt_glance_registry  
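
Once the group is running, you can upload a small test image and confirm it lands in the Ceph images pool. The cirros image below is only an example; adjust the URL and version as needed:

source admin-openrc.sh
wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name "cirros-0.3.3" --file cirros-0.3.3-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --is-public True
rbd -p images ls    # the new image should appear under its Glance ID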

6.4. Add HA OpenStack Dashboard service.

Install the packages for all nodes:

yum install openstack-dashboard httpd mod_wsgi memcached python-memcached

To configure the dashboard: Edit the /etc/openstack-dashboard/local_settings file and complete the following actions.
Configure the dashboard to use OpenStack services on the controller node:

OPENSTACK_HOST = "controller"

Allow all hosts to access the dashboard:

ALLOWED_HOSTS = ['*']

Configure the memcached session storage service:

CACHES = {
   'default': {
       'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
       'LOCATION': '127.0.0.1:11211',
   }
}

Comment out any other session storage configuration.
Optionally, configure the time zone:

TIME_ZONE = "TIME_ZONE"

Replace TIME_ZONE with an appropriate time zone identifier. For more information, see the list of time zones.

To finalize installation

chown -R apache:apache /usr/share/openstack-dashboard/static

The chown above works around a known packaging issue. Start the web server and session storage service:

systemctl start httpd.service memcached.service

Verify operation.
This section describes how to verify operation of the dashboard.
Access the dashboard using a web browser: http://controller/dashboard.
Authenticate using the admin user.

Add services to HA:

systemctl stop httpd.service memcached.service
pcs resource create  ovirt_memcached systemd:memcached \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s

pcs resource create  ovirt_httpd systemd:httpd \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s
pcs resource group add ovirt_gp ovirt_memcached ovirt_httpd 

7. Configuring oVirt integration with Cinder and Glance

7.1. Configuring volume type

First create a volume type and associate it with the Ceph backend using the OpenStack dashboard (a CLI equivalent is sketched after the list):

  • Admin → Volumes → Volume Types → Create Volume Type
  • On the new Volume Type → View Extra Specs → Create Key/Value → volume_backend_name/RBD
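
The same can be done from the command line with the cinder client and the admin credentials sourced earlier; "ceph" below is just an example type name:

source admin-openrc.sh
cinder type-create ceph
cinder type-key ceph set volume_backend_name=RBD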


7.2 Connecting to cinder

Configuring from web-portal:


Check that the configuration completes successfully:


Now you can create virtual disks in Cinder:


7.3 Connecting to glance

Configuring from web-portal:


8 Joining to AD

8.1 Joining hosted_engine VM to AD

Create an AD user, for example ovirt_ad.

Install packages:

yum install krb5-workstation samba-winbind

Edit /etc/krb5.conf:

[libdefaults]
      default_realm = IMM.RU
      dns_lookup_realm = true
      dns_lookup_kdc = true
[appdefaults]
      proxiable = true
      ticket_lifetime = 36000
      renew_lifetime = 36000
      forwardable = true
[realms]
      IMM.RU = {
          kdc =  172.26.2.231 172.26.2.232
          admin_server =  172.26.2.231 172.26.2.232
          kpasswd_server =  172.26.2.231 172.26.2.232
          default_domain = imm.ru
      }
[domain_realm]
      .imm.ru = IMM.RU
[kdc]
      enable-kerberos4 = false

Init:

kinit USER_WITH_AD_RIGHTS_TO_ENTER_PCs_INTO_DOMAIN

Edit /etc/samba/smb.conf:

##################
# global settings
##################
[global]
 # access restrictions
 bind interfaces only = no
 hosts allow = 172.26.2 127.
 # server name
 workgroup = IMM
 netbios name = manager
 server string = manager
 realm = IMM.RU
 domain master = No
 domain logons = No
 wins support = No
 wins proxy = no
 dns proxy = no
 template homedir = /home/%D/%U
 template shell = /bin/bash
 # acl settings
 nt acl support = yes
 inherit acls = yes
 map acl inherit = yes
 # security
 security = ads
 # AD domain controllers
 password server = 172.26.2.231 172.26.2.232
 encrypt passwords = true
 idmap config IMM : backend = ad
 idmap config IMM : schema_mode = rfc2307
 idmap config IMM : range = 10000-50000
 idmap config * : backend = tdb
 idmap config * : range = 50000-59999
 # logs
 log file = /var/log/samba/log.%m
 max log size = 50
 log level = 1 all:2
 syslog = 0
 # password store
 passdb backend = tdbsam
 # winbind
 winbind nss info = rfc2307
 winbind cache time = 120
 winbind separator = +

Join VM to IMM domain:

 net ads join -U USER_WITH_AD_RIGHTS_TO_ENTER_PCs_INTO_DOMAIN@imm.ru

Checking:

wbinfo -t
checking the trust secret for domain IMM via RPC calls succeeded

Enable AD authentication:

authconfig --enablewinbind --enablewinbindauth --update

8.2 Joining the management portal to AD

engine-manage-domains add --domain=imm.ru --provider=ad --user=ovirt_ad --interactive
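
You can check the result before restarting the engine (action names may differ slightly between oVirt versions):

engine-manage-domains list
engine-manage-domains validate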

Restart engine:

systemctl stop ovirt-engine.service
systemctl start ovirt-engine.service

Enjoy!

About author

Profile of the author
