*Installing ceph rbd on one node with %%XenServer%% Dundee Beta 1.\\
*Creating image and mapping to node.\\
*Activating the mapped image in %%XenCenter%%.\\
| |
| |
| |
It doesn't matter. Only one Ethernet port was used.\\
I am using 192.168.5.119/23 for the node and 192.168.4.197 for %%XenServer%%.\\
| |
**Installing %%XenCenter%% and %%XenServer%%**

Simply install the Windows Server 2012 R2 OS on the %%XenCenter%% management server. Configure the network with an IP address and then install [[http://downloadns.citrix.com.edgesuite.net/10760/XenServer-6.6.90-XenCenterSetup.exe|XenCenter]].

Also install %%XenServer%% to the first HDD using the following [[http://downloadns.citrix.com.edgesuite.net/10759/XenServer-6.6.90-install-cd.iso|ISO]] and configure the network with an IP address. The hostname is xenserver-test.
| |
==== 2. Installing ceph rbd on XenServer node ====
| |
==2.1. Setup CentOS repo.==
Set up the CentOS-Base, CentOS-Updates and CentOS-Extras repositories using their real base URLs.
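The page does not show the repo file itself; a minimal sketch of /etc/yum.repos.d/CentOS-Base.repo follows, where the mirror URLs are assumptions and should be adjusted to the CentOS release the %%XenServer%% dom0 is based on:

  # /etc/yum.repos.d/CentOS-Base.repo -- assumed mirror URLs, adjust the release if needed
  [base]
  name=CentOS-7 - Base
  baseurl=http://mirror.centos.org/centos/7/os/$basearch/
  gpgcheck=0
  enabled=1
  [updates]
  name=CentOS-7 - Updates
  baseurl=http://mirror.centos.org/centos/7/updates/$basearch/
  gpgcheck=0
  enabled=1
  [extras]
  name=CentOS-7 - Extras
  baseurl=http://mirror.centos.org/centos/7/extras/$basearch/
  gpgcheck=0
  enabled=1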
| |
==2.2. Setup ceph repo.==
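Again the actual repo file is not shown; a minimal sketch of /etc/yum.repos.d/ceph.repo, assuming the Hammer release from the official download server:

  # /etc/yum.repos.d/ceph.repo -- assumed release (hammer); pick the release matching your setup
  [ceph]
  name=Ceph packages for $basearch
  baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc
  enabled=1
  [ceph-noarch]
  name=Ceph noarch packages
  baseurl=http://download.ceph.com/rpm-hammer/el7/noarch
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc
  enabled=1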
==2.3. Installing ceph.==
| |
  yum install ceph-common ceph ceph-deploy ntp -y
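Ceph monitors are sensitive to clock skew, so enabling the NTP daemon that was just installed is a sensible extra step:

  # keep the clock in sync for the ceph monitor
  systemctl enable ntpd
  systemctl start ntpd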
| |
==2.4. Deploying monitors.==
| |
Temporarily edit /etc/centos-release for the deployment, because ceph-deploy checks the distribution name there. See [[https://github.com/ceph/ceph-deploy/blob/master/docs/source/install.rst|Supported distributions]]:

  cp /etc/centos-release /etc/centos-release.old
  echo "CentOS Linux release 7.1.1503 (Core)" > /etc/centos-release

Deploying the monitor:
| |
  cd /etc/ceph
  ceph-deploy new xenserver-test
  ceph-deploy mon create-initial
| |
| |
  ceph -s
      cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
       health HEALTH_ERR
              64 pgs stuck inactive
              64 pgs stuck unclean
              no osds
       monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
              election epoch 2, quorum 0 xenserver-test
       osdmap e1: 0 osds: 0 up, 0 in
        pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
              0 kB used, 0 kB / 0 kB avail
                    64 creating

HEALTH_ERR is expected at this point: the placement groups cannot become active until at least one OSD has been deployed.
| |
| |
==2.5. Deploying osd.==
| |
  cd /etc/ceph
  ceph-deploy gatherkeys xenserver-test
  ceph-deploy disk zap xenserver-test:sdb
  ceph-deploy osd prepare xenserver-test:sdb
| |
Recreate the default rbd pool. With only one OSD in the cluster, min_size and size must be set to 1 so that the placement groups can become active+clean:
| |
  ceph osd pool delete rbd rbd --yes-i-really-really-mean-it

  ceph osd pool create rbd 128
  ceph osd pool set rbd min_size 1
  ceph osd pool set rbd size 1
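A quick way to confirm the new pool settings:

  # both should report the value 1
  ceph osd pool get rbd size
  ceph osd pool get rbd min_size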
| |
| |
Check that all OSDs are running:
| |
  ceph -s
      cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
       health HEALTH_OK
       monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
              election epoch 2, quorum 0 xenserver-test
       osdmap e12: 1 osds: 1 up, 1 in
        pgmap v19: 128 pgs, 1 pools, 0 bytes data, 0 objects
              36268 kB used, 413 GB / 413 GB avail
                   128 active+clean
| |
| |
Restore the original /etc/centos-release file:
| |
  cp /etc/centos-release.old /etc/centos-release
| |
==== 3. Creating image and mapping to node ====
| |
Create a 50 GB image in the rbd pool (--size is given in megabytes):
| |
  rbd -p rbd create testimage --size 50000
| |
Map the image to the host:
| |
  rbd map rbd/testimage --id admin --key AQBSJlNWtX/BHxAAcJ/yNe31rXjzmbX+Uxikug==
| |
The key was taken from /etc/ceph/ceph.client.admin.keyring.
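The same key can also be printed with the standard ceph tooling instead of reading the keyring file by hand:

  # prints only the admin key, suitable for pasting into rbd map
  ceph auth get-key client.admin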
| |
The following device appeared:

  /dev/rbd1
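To double-check the mapping, rbd showmapped lists all images currently mapped on the host:

  # should show testimage from pool rbd mapped to /dev/rbd1
  rbd showmapped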
| |