OpenStack HA with Ceph RBD over ZVOL and backup on a ZFS server.

Introduction

ZFS deduplication is a very attractive feature that reduces the space needed to store VMs. But how can we use it with OpenStack?
The answer is Ceph RBD over a ZVOL.
This article walks through building a test bed for HA OpenStack with RBD over ZVOL.

Task details

The installation consists of:

1. Installing ceph rbd on all nodes.

2. Installing the HA OpenStack components, which includes:

2.1. Installing the MySQL (MariaDB Galera) cluster
2.2. Installing the RabbitMQ cluster
2.3. Installing HA OpenStack Identity*
2.4. Installing HA Cinder*

* A pcs cluster is required for these components.

This article does not cover OS tuning or network configuration details, since the setup is still at an alpha stage.
Hardware configuration.
Two HDDs for the OS (RAID1), three HDDs for data, 2x Xeon 56XX, 48 GB RAM.
Software configuration.
OS: CentOS 7.1. SELinux and the firewall were disabled. Passwordless SSH access was configured between all nodes. The data disks were assembled into a RAIDZ pool (ZFS on Linux) with a ZVOL on top of it, and deduplication was switched on. The resulting block device is /dev/zvol/data/vdata.
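A minimal sketch of that ZFS layout, assuming the three data disks are /dev/sdc, /dev/sdd and /dev/sde (adjust device names and the ZVOL size to your hardware):

 zpool create data raidz /dev/sdc /dev/sdd /dev/sde   # RAIDZ pool named "data"
 zfs set dedup=on data                                # enable deduplication on the pool
 zfs create -V 1700G data/vdata                       # ZVOL exposed as /dev/zvol/data/vdata; size here is just an example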
All nodes have the same /etc/hosts file:

 192.168.6.214 node1
 192.168.6.215 node2
 192.168.6.216 node3
 192.168.6.240 shareIP

1. Installing ceph rbd on all nodes

1.1. Setup repo on all nodes.

The repo below should already have been installed together with ZFS on Linux. In any case:

yum localinstall --nogpgcheck http://mirror.yandex.ru/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
1.2. Installing ceph.

Don't forget to configure an NTP server on all nodes.

For all nodes:

yum install ceph-common ceph ceph-fuse ceph-deploy ntp -y  
systemctl enable ntpd
systemctl start ntpd
1.3. Deploying monitors.

On the first node:

cd /etc/ceph
ceph-deploy new node1 node2 node3
ceph-deploy mon create-initial

Check for running monitors:

ceph -s
  cluster 3a707493-a724-44f3-9ac6-9076b5d70a6c
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 3 mons at {node1=192.168.6.214:6789/0,node2=192.168.6.215:6789/0,node3=192.168.6.216:6789/0}, election epoch 6, quorum 0,1,2 node1,node2,node3
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
          0 kB used, 0 kB / 0 kB avail
               192 creating
1.4. Deploying OSDs.

For every node:

cd /etc/ceph
ceph-deploy gatherkeys nodeX
ceph-deploy disk zap nodeX:zvol/data/vdata
ceph-deploy osd prepare nodeX:zvol/data/vdata

where X is the node number.

Check that all OSDs are up and in (ceph -s):

  cluster 3a707493-a724-44f3-9ac6-9076b5d70a6c
   health HEALTH_WARN 982 pgs peering; 982 pgs stuck inactive; 985 pgs stuck unclean
   monmap e1: 3 mons at {node1=192.168.6.214:6789/0,node2=192.168.6.215:6789/0,node3=192.168.6.216:6789/0}, election epoch 6, quorum 0,1,2 node1,node2,node3
   osdmap e20: 3 osds: 3 up, 3 in
    pgmap v43: 1152 pgs, 3 pools, 0 bytes data, 0 objects
          115 MB used, 5082 GB / 5082 GB avail
                 3 active
               982 creating+peering
               167 active+clean
1.5. Preparing Ceph for OpenStack.

From any of the nodes.

Create pools:

ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
ceph osd pool create volumes 1024
ceph osd pool set volumes min_size 1
ceph osd pool set volumes size 2
ceph osd pool create images 1024
ceph osd pool set images min_size 1
ceph osd pool set images size 2
ceph osd pool create backups 1024
ceph osd pool set backups min_size 1
ceph osd pool set backups size 2
ceph osd pool create vms 1024
ceph osd pool set vms min_size 1
ceph osd pool set vms size 2
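
Optionally, verify that the pools were created:

ceph osd lspools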

Set up cephx authentication:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' >> /etc/ceph/ceph.client.cinder.keyring

Copy /etc/ceph/ceph.client.cinder.keyring to the other nodes.
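
For example, assuming the same passwordless root SSH access used elsewhere in this guide:

scp /etc/ceph/ceph.client.cinder.keyring root@node2:/etc/ceph/
scp /etc/ceph/ceph.client.cinder.keyring root@node3:/etc/ceph/

A quick check that the cinder key works (ceph picks up /etc/ceph/ceph.client.cinder.keyring automatically):

ceph --id cinder -s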

2. Installing and configuring HA cluster

2.1. Install pacemaker, corosync and pcs on all nodes.
yum install pacemaker corosync  resource-agents pcs -y
2.2. Configure HA cluster.

Set the same password for the hacluster user on all nodes:

passwd hacluster

Enable and start pcsd service:

systemctl enable pcsd.service
systemctl start pcsd.service

Configure cluster from any of node:

pcs cluster auth node1 node2 node3
pcs cluster setup --name ovirt_alfa node1 node2 node3 --force
pcs cluster start --all

Check:

pcs status

Cluster name: ovirt_alfa
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Thu Apr 16 05:51:53 2015
Last change: Thu Apr 16 05:50:44 2015
Stack: corosync
Current DC: node1 (1) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
0 Resources configured
Online: [ node1 node2 node3 ]
Full list of resources:
PCSD Status:
node1: Online
node2: Online
node3: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

Set cluster properties:

pcs property set stonith-enabled=false
pcs property set no-quorum-policy=stop 

Create the first HA resource, the shared IP address:

pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.6.240 cidr_netmask=24 op monitor interval=30s
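
A quick check that the resource is running (the shared IP should now answer on one of the nodes):

pcs status resources
ping -c 1 192.168.6.240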

If the check is OK, the HA cluster was created successfully.

3. Installing OpenStack components

3.1. Install and configure the MariaDB Galera cluster.

On all nodes

Setup repo:

 cat << EOT > /etc/yum.repos.d/mariadb.repo
 [mariadb]
 name = MariaDB
 baseurl = http://yum.mariadb.org/10.0/centos7-amd64
 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck=1
 EOT

Installing:

 yum install MariaDB-Galera-server MariaDB-client rsync galera

Start the service:

 service mysql start
 chkconfig mysql on
 mysql_secure_installation

Prepare for clustering (create the SST user):

 mysql -p
 GRANT USAGE ON *.* to sst_user@'%' IDENTIFIED BY 'PASS';
 GRANT ALL PRIVILEGES on *.* to sst_user@'%';
 FLUSH PRIVILEGES;
 exit
 service mysql stop

Configure the cluster (on each nodeX):

 cat << EOT > /etc/my.cnf
 [mysqld]
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 binlog_format=ROW
 default-storage-engine=innodb
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1
 query_cache_size=0
 query_cache_type=0
 bind-address=0.0.0.0
 datadir=/var/lib/mysql
 innodb_log_file_size=100M
 innodb_file_per_table
 innodb_flush_log_at_trx_commit=2
 wsrep_provider=/usr/lib64/galera/libgalera_smm.so
 wsrep_cluster_address="gcomm://192.168.6.214,192.168.6.215,192.168.6.216"
 wsrep_cluster_name='scanex_galera_cluster'
 wsrep_node_address='192.168.6.X' # setup real node ip
 wsrep_node_name='nodeX' #  setup real node name
 wsrep_sst_method=rsync
 wsrep_sst_auth=sst_user:PASS
 EOT

(on node1)

 /etc/init.d/mysql start --wsrep-new-cluster

(on other nodes)

 /etc/init.d/mysql start

Check on all nodes:

 mysql -p
 show status like 'wsrep%';

Variable_name                          Value

wsrep_local_state_uuid 739895d5-d6de-11e4-87f6-3a3244f26574
wsrep_protocol_version 7
wsrep_last_committed 0
wsrep_replicated 0
wsrep_replicated_bytes 0
wsrep_repl_keys 0
wsrep_repl_keys_bytes 0
wsrep_repl_data_bytes 0
wsrep_repl_other_bytes 0
wsrep_received 6
wsrep_received_bytes 425
wsrep_local_commits 0
wsrep_local_cert_failures 0
wsrep_local_replays 0
wsrep_local_send_queue 0
wsrep_local_send_queue_max 1
wsrep_local_send_queue_min 0
wsrep_local_send_queue_avg 0.000000
wsrep_local_recv_queue 0
wsrep_local_recv_queue_max 1
wsrep_local_recv_queue_min 0
wsrep_local_recv_queue_avg 0.000000
wsrep_local_cached_downto 18446744073709551615
wsrep_flow_control_paused_ns 0
wsrep_flow_control_paused 0.000000
wsrep_flow_control_sent 0
wsrep_flow_control_recv 0
wsrep_cert_deps_distance 0.000000
wsrep_apply_oooe 0.000000
wsrep_apply_oool 0.000000
wsrep_apply_window 0.000000
wsrep_commit_oooe 0.000000
wsrep_commit_oool 0.000000
wsrep_commit_window 0.000000
wsrep_local_state 4
wsrep_local_state_comment Synced
wsrep_cert_index_size 0
wsrep_causal_reads 0
wsrep_cert_interval 0.000000
wsrep_incoming_addresses 192.168.6.214:3306,192.168.6.216:3306,192.168.6.215:3306
wsrep_evs_delayed
wsrep_evs_evict_list
wsrep_evs_repl_latency 0/0/0/0/0
wsrep_evs_state OPERATIONAL
wsrep_gcomm_uuid 7397d6d6-d6de-11e4-a515-d3302a8c2342
wsrep_cluster_conf_id 2
wsrep_cluster_size 2
wsrep_cluster_state_uuid 739895d5-d6de-11e4-87f6-3a3244f26574
wsrep_cluster_status Primary
wsrep_connected ON
wsrep_local_bf_aborts 0
wsrep_local_index 0
wsrep_provider_name Galera
wsrep_provider_vendor Codership Oy info@codership.com
wsrep_provider_version 25.3.9(r3387)
wsrep_ready ON
wsrep_thread_count 2


Remember: if all nodes go down, the most up-to-date node must be started with /etc/init.d/mysql start --wsrep-new-cluster. You have to identify that node yourself. If you bootstrap from a node that is not the most up-to-date one, the other nodes will report an error (see the logs): [ERROR] WSREP: gcs/src/gcs_group.cpp:void group_post_state_exchange(gcs_group_t*)():319: Reversing history: 0 → 0, this member has applied 140536161751824 more events than the primary component. Data loss is possible. Aborting.
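A minimal way to find the most up-to-date node, assuming the default MariaDB datadir /var/lib/mysql:

# On every node, inspect the Galera state file; the node with the highest seqno
# is the most up-to-date one.
cat /var/lib/mysql/grastate.dat
# Bootstrap the new cluster from that node only:
/etc/init.d/mysql start --wsrep-new-cluster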

3.2. Install and configure the RabbitMQ messaging cluster.

On all nodes install rabbitmq:

yum install rabbitmq-server -y
systemctl enable rabbitmq-server

Configure the RabbitMQ cluster. On the first node (this generates the Erlang cookie):

systemctl start rabbitmq-server
systemctl stop rabbitmq-server

Copy .erlang.cookie from node1 to the other nodes:

scp /var/lib/rabbitmq/.erlang.cookie root@node2:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@node3:/var/lib/rabbitmq/.erlang.cookie

On the target nodes ensure the correct owner, group, and permissions of the .erlang.cookie file:

chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie

Start RabbitMQ on all nodes and verify the nodes are running:

systemctl start rabbitmq-server
rabbitmqctl cluster_status
Cluster status of node rabbit@nodeX ...
[{nodes,[{disc,[rabbit@nodeX]}]},
{running_nodes,[rabbit@nodeX]},
{cluster_name,<<"rabbit@nodeX">>},
{partitions,[]}]
...done.

Run the following commands on all nodes except the first one:

rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@node1
rabbitmqctl start_app

To verify the cluster status:

rabbitmqctl cluster_status
Cluster status of node rabbit@nodeX ...
[{nodes,[{disc,[rabbit@node1,rabbit@node2,rabbit@node3]}]},
{running_nodes,[rabbit@node3,rabbit@node2,rabbit@node1]},
{cluster_name,<<"rabbit@nodeX">>},
{partitions,[]}]
...done.

The message broker creates a default account that uses guest for the username and password. To simplify installation of your test environment, we recommend that you use this account, but change the password for it. Run the following command:
Replace RABBIT_PASS with a suitable password.

rabbitmqctl change_password guest RABBIT_PASS

To ensure that all queues, except those with auto-generated names, are mirrored across all running nodes it is necessary to set the policy key ha-mode to all. Run the following command on one of the nodes:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'
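
Optionally, verify that the policy was applied:

rabbitmqctl list_policies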
3.3. Install and configure HA OpenStack Identity service.

Create the keystone database:

mysql -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Enable the OpenStack repository:

yum install yum-plugin-priorities
rpm -ivh https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm

Install components for all nodes:

yum install openstack-keystone python-keystoneclient -y

Generate a random value to use as the administration token during initial configuration (on one node):

openssl rand -hex 10

Edit the /etc/keystone/keystone.conf file and complete the following actions on all nodes.

In the [DEFAULT] section, define the value of the initial administration token:

[DEFAULT]
...
admin_token = ADMIN_TOKEN
Replace ADMIN_TOKEN with the random value that you generated in a previous step.

In the [database] section, configure database access.

[database]
...
connection = mysql://keystone:KEYSTONE_DBPASS@shareIP/keystone
Replace KEYSTONE_DBPASS with the password you chose for the database.

In the [token] section, configure the UUID token provider and SQL driver:

[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token

In the [revoke] section, configure the SQL revocation driver:

[revoke]
...
driver = keystone.contrib.revoke.backends.sql.Revoke

(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True

Create generic certificates and keys and restrict access to the associated files, on node1:

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl

Then copy the /etc/keystone/ssl directory and its contents to the other nodes.
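A copy sketch, assuming the same passwordless root SSH access used elsewhere in this guide:

scp -r /etc/keystone/ssl root@node2:/etc/keystone/
scp -r /etc/keystone/ssl root@node3:/etc/keystone/

After copying, set the permissions on those nodes: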

chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl

Populate the Identity service database, on node1:

su -s /bin/sh -c "keystone-manage db_sync" keystone

Check that the database was populated:

mysql -p 
\r keystone
show tables;
+-----------------------+
| Tables_in_keystone    |
+-----------------------+
| assignment            |
| credential            |
| domain                |
| endpoint              |
| group                 |
| id_mapping            |
| migrate_version       |
| policy                |
| project               |
| region                |
| revocation_event      |
| role                  |
| service               |
| token                 |
| trust                 |
| trust_role            |
| user                  |
| user_group_membership |
+-----------------------+
18 rows in set (0.00 sec)

Copy /etc/keystone/keystone.conf to the other nodes.

Try to start and stop the Identity service (openstack-keystone) on every node. If the service starts successfully, set it up as an HA resource:

pcs resource create  ovirt_keystone systemd:openstack-keystone \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s
pcs resource group add ovirt_gp  ClusterIP ovirt_keystone

Create tenants, users, and roles.
Configure the administration token and endpoint for every node:

export OS_SERVICE_TOKEN=6ccecc08c5f661c2bddd
export OS_SERVICE_ENDPOINT=http://shareIP:35357/v2.0

keystone tenant-create --name admin --description "Admin Tenant"

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | 724adf6fdb324ba49385dba3a307d4d1 |
|     name    |              admin               |
+-------------+----------------------------------+  
keystone user-create --name admin --pass ADMIN_PASS --email test@test.ru
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |           test@test.ru           |
| enabled  |               True               |
|    id    | d46ea880c23646b6858342eb98d32651 |
|   name   |              admin               |
| username |              admin               |
+----------+----------------------------------+
keystone role-create --name admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 81e202c2d28c41fdb1b6a45c772b4479 |
|   name   |              admin               |
+----------+----------------------------------+

keystone user-role-add --user admin --tenant admin --role admin
keystone tenant-create --name service --description "Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | 2250c77380534c3baaf5aa8cb1af0a05 |
|     name    |             service              |
+-------------+----------------------------------+
keystone service-create --name keystone --type identity --description "OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | 8e147209037a4071a8e04e0493c40a79 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ identity / {print $2}') \
--publicurl http://shareIP:5000/v2.0 \
--internalurl http://shareIP:5000/v2.0 \
--adminurl http://shareIP:35357/v2.0 \
--region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |    http://shareIP:35357/v2.0     |
|      id     | 66d2cc8c0f4a4f73ba083286d4704889 |
| internalurl |     http://shareIP:5000/v2.0     |
|  publicurl  |     http://shareIP:5000/v2.0     |
|    region   |            regionOne             |
|  service_id | 8e147209037a4071a8e04e0493c40a79 |
+-------------+----------------------------------+

Create OpenStack client environment scripts.
Create a client environment script for the admin tenant and user. Later portions of this guide reference this script to load the appropriate credentials for client operations. Edit the admin-openrc.sh file and add the following content:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://shareIP:35357/v2.0
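
A quick usage check (keystone user-list should show the admin user created earlier):

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
source admin-openrc.sh
keystone user-list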

3.4. Install and configure the HA Block Storage service (Cinder).

Before you install and configure the Block Storage service, you must create a database, service credentials, and API endpoints. Create the cinder database and grant proper access to it (in mysql -p):

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

To create the service credentials, complete these steps:

keystone user-create --name cinder --pass CINDER_PASS
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 3fc1ae9c723143fba59ac7795ffc039c |
|   name   |              cinder              |
| username |              cinder              |
+----------+----------------------------------+
keystone user-role-add --user cinder --tenant service --role admin
keystone service-create --name cinder --type volume  --description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 50c31611cc27447ea8c39fca8f7b86bf |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+

keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 52002528a42b41ffa2abf8b8bb799f08 |
|     name    |             cinderv2             |
|     type    |             volumev2             |
+-------------+----------------------------------+
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volume / {print $2}') \
--publicurl http://shareIP:8776/v1/%\(tenant_id\)s \
--internalurl http://shareIP:8776/v1/%\(tenant_id\)s \
--adminurl http://shareIP:8776/v1/%\(tenant_id\)s \
--region regionOne
+-------------+--------------------------------------+
|   Property  |                Value                 |
+-------------+--------------------------------------+
|   adminurl  | http://shareIP:8776/v1/%(tenant_id)s |
|      id     |   140f2f2a3f7c4f31b7c3308660c1fe31   |
| internalurl | http://shareIP:8776/v1/%(tenant_id)s |
|  publicurl  | http://shareIP:8776/v1/%(tenant_id)s |
|    region   |              regionOne               |
|  service_id |   50c31611cc27447ea8c39fca8f7b86bf   |
+-------------+--------------------------------------+
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl http://shareIP:8776/v2/%\(tenant_id\)s \
--internalurl http://shareIP:8776/v2/%\(tenant_id\)s \
--adminurl http://shareIP:8776/v2/%\(tenant_id\)s \
--region regionOne
+-------------+--------------------------------------+
|   Property  |                Value                 |
+-------------+--------------------------------------+
|   adminurl  | http://shareIP:8776/v2/%(tenant_id)s |
|      id     |   233dcfa7ff6048a4ad72bec9c962df60   |
| internalurl | http://shareIP:8776/v2/%(tenant_id)s |
|  publicurl  | http://shareIP:8776/v2/%(tenant_id)s |
|    region   |              regionOne               |
|  service_id |   52002528a42b41ffa2abf8b8bb799f08   |
+-------------+--------------------------------------+

Install and configure Block Storage controller components on all nodes:

yum install openstack-cinder python-cinderclient python-oslo-db targetcli MySQL-python -y

Edit the /etc/cinder/cinder.conf file and complete the following actions for all nodes.

In the [database] section, configure database access:

[database]
...
connection = mysql://cinder:CINDER_DBPASS@shareIP/cinder
Replace CINDER_DBPASS with the password you chose for the Block Storage database.

In the [DEFAULT] section, configure RabbitMQ message broker access:

[DEFAULT]
...
rpc_backend = rabbit
rabbit_hosts = node1:5672,node2:5672,node3:5672
rabbit_password = RABBIT_PASS
rabbit_retry_interval=1
rabbit_retry_backoff=2
rabbit_max_retries=0
rabbit_durable_queues=true
rabbit_ha_queues=true
Replace RABBIT_PASS with the password you chose for the guest account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://shareIP:5000/v2.0
identity_uri = http://shareIP:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS
Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.

For RBD support, add the following options as well:

rbd_pool=volumes  
# The RADOS client name for accessing rbd volumes - only set when using cephx authentication
rbd_user=cinder # we set up cephx for client.cinder, see section 1.5
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
# The libvirt uuid of the secret for the rbd_user volumes. Generate it with uuidgen.
rbd_secret_uuid=d668041f-d3c1-417b-b587-48261ad2abf3
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
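
The rbd_secret_uuid shown above was generated once with uuidgen; the same value must be reused later when the secret is defined in libvirt on the compute nodes (not covered in this article):

uuidgen   # generate a UUID once and paste it into cinder.conf as rbd_secret_uuid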

Note: comment out any auth_host, auth_port, and auth_protocol options, because the identity_uri option replaces them.

In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:

[DEFAULT]
...
my_ip = 192.168.6.240 #shareIP

(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True

Populate the Block Storage database:

su -s /bin/sh -c "cinder-manage db sync" cinder
2015-04-16 09:58:32.119 26968 INFO migrate.versioning.api [-] 0 -> 1...
2015-04-16 09:58:33.761 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:33.762 26968 INFO migrate.versioning.api [-] 1 -> 2...
2015-04-16 09:58:34.303 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:34.304 26968 INFO migrate.versioning.api [-] 2 -> 3...
2015-04-16 09:58:34.470 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:34.470 26968 INFO migrate.versioning.api [-] 3 -> 4...
2015-04-16 09:58:35.167 26968 INFO 004_volume_type_to_uuid [-] Created foreign key volume_type_extra_specs_ibfk_1
2015-04-16 09:58:35.173 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.173 26968 INFO migrate.versioning.api [-] 4 -> 5...
2015-04-16 09:58:35.308 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.309 26968 INFO migrate.versioning.api [-] 5 -> 6...
2015-04-16 09:58:35.492 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.492 26968 INFO migrate.versioning.api [-] 6 -> 7...
2015-04-16 09:58:35.666 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.666 26968 INFO migrate.versioning.api [-] 7 -> 8...
2015-04-16 09:58:35.757 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.757 26968 INFO migrate.versioning.api [-] 8 -> 9...
2015-04-16 09:58:35.857 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.857 26968 INFO migrate.versioning.api [-] 9 -> 10...
2015-04-16 09:58:35.957 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:35.958 26968 INFO migrate.versioning.api [-] 10 -> 11...
2015-04-16 09:58:36.117 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.119 26968 INFO migrate.versioning.api [-] 11 -> 12...
2015-04-16 09:58:36.253 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.253 26968 INFO migrate.versioning.api [-] 12 -> 13...
2015-04-16 09:58:36.419 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.419 26968 INFO migrate.versioning.api [-] 13 -> 14...
2015-04-16 09:58:36.544 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.545 26968 INFO migrate.versioning.api [-] 14 -> 15...
2015-04-16 09:58:36.600 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.601 26968 INFO migrate.versioning.api [-] 15 -> 16...
2015-04-16 09:58:36.751 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:36.751 26968 INFO migrate.versioning.api [-] 16 -> 17...
2015-04-16 09:58:37.294 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:37.294 26968 INFO migrate.versioning.api [-] 17 -> 18...
2015-04-16 09:58:37.773 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:37.773 26968 INFO migrate.versioning.api [-] 18 -> 19...
2015-04-16 09:58:37.896 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:37.896 26968 INFO migrate.versioning.api [-] 19 -> 20...
2015-04-16 09:58:38.030 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.030 26968 INFO migrate.versioning.api [-] 20 -> 21...
2015-04-16 09:58:38.053 26968 INFO 021_add_default_quota_class [-] Added default quota class data into the DB.
2015-04-16 09:58:38.058 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.058 26968 INFO migrate.versioning.api [-] 21 -> 22...
2015-04-16 09:58:38.173 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.174 26968 INFO migrate.versioning.api [-] 22 -> 23...
2015-04-16 09:58:38.263 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.264 26968 INFO migrate.versioning.api [-] 23 -> 24...
2015-04-16 09:58:38.642 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:38.642 26968 INFO migrate.versioning.api [-] 24 -> 25...
2015-04-16 09:58:39.443 26968 INFO migrate.versioning.api [-] done
2015-04-16 09:58:39.444 26968 INFO migrate.versioning.api [-] 25 -> 26...
2015-04-16 09:58:39.455 26968 INFO 026_add_consistencygroup_quota_class [-] Added default consistencygroups quota class data into the DB.
2015-04-16 09:58:39.461 26968 INFO migrate.versioning.api [-] done

Try to start the services:

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

Check the logs:

tail -f /var/log/cinder/api.log /var/log/cinder/scheduler.log
==> /var/log/cinder/api.log <==
2015-04-16 10:34:30.442 30784 INFO cinder.openstack.common.service [-] Started child 30818
2015-04-16 10:34:30.443 30818 INFO eventlet.wsgi.server [-] (30818) wsgi starting up on http://0.0.0.0:8776/
2015-04-16 10:34:30.445 30784 INFO cinder.openstack.common.service [-] Started child 30819
2015-04-16 10:34:30.447 30819 INFO eventlet.wsgi.server [-] (30819) wsgi starting up on http://0.0.0.0:8776/
2015-04-16 10:34:30.448 30784 INFO cinder.openstack.common.service [-] Started child 30820
2015-04-16 10:34:30.449 30820 INFO eventlet.wsgi.server [-] (30820) wsgi starting up on http://0.0.0.0:8776/
2015-04-16 10:34:30.450 30784 INFO cinder.openstack.common.service [-] Started child 30821
2015-04-16 10:34:30.452 30821 INFO eventlet.wsgi.server [-] (30821) wsgi starting up on http://0.0.0.0:8776/
2015-04-16 10:34:30.453 30784 INFO cinder.openstack.common.service [-] Started child 30822
2015-04-16 10:34:30.454 30822 INFO eventlet.wsgi.server [-] (30822) wsgi starting up on http://0.0.0.0:8776/
==> /var/log/cinder/scheduler.log <==
2015-04-16 10:34:29.940 30785 INFO oslo.messaging._drivers.impl_rabbit [req-80c27067-712a-47c3-a8e3-a448e1677e8a - - - - -] Connecting to AMQP server on node3:5672
2015-04-16 10:34:29.961 30785 INFO oslo.messaging._drivers.impl_rabbit [req-80c27067-712a-47c3-a8e3-a448e1677e8a - - - - -] Connected to AMQP server on node3:5672
2015-04-16 10:34:30.310 30785 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on node3:5672
2015-04-16 10:34:30.322 30785 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on node3:5672
2015-04-16 10:37:14.587 30785 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
2015-04-16 10:37:43.146 31144 INFO cinder.service [-] Starting cinder-scheduler node (version 2014.2.2)
2015-04-16 10:37:43.148 31144 INFO oslo.messaging._drivers.impl_rabbit [req-6e41c1e3-1ffe-4918-bdb2-b99e665713a6 - - - - -] Connecting to AMQP server on node2:5672
2015-04-16 10:37:43.167 31144 INFO oslo.messaging._drivers.impl_rabbit [req-6e41c1e3-1ffe-4918-bdb2-b99e665713a6 - - - - -] Connected to AMQP server on node2:5672
2015-04-16 10:37:43.517 31144 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on node1:5672
2015-04-16 10:37:43.530 31144 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on node1:5672

Check the services:

source admin-openrc.sh
cinder service-list
+------------------+-------+------+---------+-------+----------------------------+-----------------+
|      Binary      |  Host | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | imm14 | nova | enabled |   up  | 2015-04-16T14:52:09.000000 |       None      |
+------------------+-------+------+---------+-------+----------------------------+-----------------+
|  cinder-volume   | imm14 | nova | enabled |   up  | 2015-04-16T15:13:37.000000 |       None      |
+------------------+-------+------+---------+-------+----------------------------+-----------------+

Create a test volume:

cinder create --display-name demo-volume2 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-04-16T15:41:01.202976      |
| display_description |                 None                 |
|     display_name    |             demo-volume2             |
|      encrypted      |                False                 |
|          id         | 5c7d35e8-ea55-4aa9-8a57-bb0de94552e7 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

Check that the volume was created and that the backing RBD image exists:

cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5c7d35e8-ea55-4aa9-8a57-bb0de94552e7 | available | demo-volume2 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
rbd -p volumes ls -l
NAME                                         SIZE PARENT FMT PROT LOCK
volume-5c7d35e8-ea55-4aa9-8a57-bb0de94552e7 1024M          2

Stop the services and start them as HA resources:

systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
pcs resource create  ovirt_cinder_api systemd:openstack-cinder-api \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s

pcs resource create  ovirt_cinder_scheduler systemd:openstack-cinder-scheduler \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s
pcs resource create  ovirt_cinder_volume systemd:openstack-cinder-volume \
op monitor interval=30s timeout=20s \
op start timeout=120s \
op stop timeout=120s
pcs resource group add ovirt_gp ovirt_cinder_api ovirt_cinder_scheduler ovirt_cinder_volume
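
Finally, a quick check that everything is running: all resources in the ovirt_gp group should show as Started on the same node, since group members are colocated.

pcs status resources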

About the author

Profile of the author
