*Installing ceph rbd on one node with %%XenServer%% Dundee Beta 1.\\
*Creating an image and mapping it to the node.\\
*Activating the mapped image in %%XenCenter%%.\\
  
  
  
Doesn't matter. Only one ethernet port was used.\\
I am using 192.168.5.119/23 for the node and 192.168.4.197 for %%XenServer%%.\\
  
**Installing %%XenCenter%% and %%XenServer%%**
Simply install the Windows 2012 R2 OS on the %%XenCenter%% management server. Configure the network with an IP address and then install [[http://downloadns.citrix.com.edgesuite.net/10760/XenServer-6.6.90-XenCenterSetup.exe|XenCenter]].
  
Also install %%XenServer%% to the first HDD using the following [[http://downloadns.citrix.com.edgesuite.net/10759/XenServer-6.6.90-install-cd.iso|ISO]] and configure the network with an IP address. The hostname is xenserver-test.
==2.3. Deploying monitors.==
  
Temporarily edit /etc/centos-release before deploying, because ceph-deploy checks it against its list of [[https://github.com/ceph/ceph-deploy/blob/master/docs/source/install.rst|supported distributions]]:

  cp /etc/centos-release /etc/centos-release.old
  echo "CentOS Linux release 7.1.1503 (Core)" > /etc/centos-release
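
A quick optional sanity check that the override took effect; the expected output is simply the string written above:

  cat /etc/centos-release
  # CentOS Linux release 7.1.1503 (Core)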

Deploying the monitor:
  
  cd /etc/ceph
  ceph-deploy new xenserver-test
  ceph-deploy mon create-initial
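
As an optional check that the monitor is up and has formed quorum (the monmap values match this setup; the exact output format can vary between ceph versions):

  ceph mon stat
  # e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}, election epoch 2, quorum 0 xenserver-test

The full ceph -s output at this stage is shown below.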
  
  
  ceph -s
      cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
       health HEALTH_ERR
              64 pgs stuck inactive
              64 pgs stuck unclean
              no osds
       monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
              election epoch 2, quorum 0 xenserver-test
       osdmap e1: 0 osds: 0 up, 0 in
        pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
              0 kB used, 0 kB / 0 kB avail
                    64 creating
  
  
==2.4. Deploying osd.==
  
On xenserver-test:
  cd /etc/ceph
  ceph-deploy gatherkeys xenserver-test
  ceph-deploy disk zap xenserver-test:sdb
  ceph-deploy osd prepare xenserver-test:sdb
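
If prepare succeeded and the OSD was activated automatically, it should appear in the CRUSH tree; the weight shown below is illustrative and depends on the disk size. If the OSD stays down, explicitly activating the data partition that prepare created (presumably sdb1 here) with ceph-deploy osd activate may be needed:

  ceph osd tree
  # ID  WEIGHT  TYPE NAME                UP/DOWN  REWEIGHT
  # -1  0.36    root default
  # -2  0.36        host xenserver-test
  #  0  0.36            osd.0            up       1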

Recreate the default rbd pool with settings suited to a single OSD (replica count of 1, since there is nowhere to place a second copy):

  ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
  ceph osd pool create rbd 128
  ceph osd pool set rbd min_size 1
  ceph osd pool set rbd size 1
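
To verify that the pool settings took effect (standard ceph CLI queries; expected values shown as comments):

  ceph osd pool get rbd size
  # size: 1
  ceph osd pool get rbd min_size
  # min_size: 1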
  
  
Check that all OSDs are running:
  
  ceph -s
      cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
       health HEALTH_OK
       monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
              election epoch 2, quorum 0 xenserver-test
       osdmap e12: 1 osds: 1 up, 1 in
        pgmap v19: 128 pgs, 1 pools, 0 bytes data, 0 objects
              36268 kB used, 413 GB / 413 GB avail
                   128 active+clean
  
  
Restore the original /etc/centos-release file:

  cp /etc/centos-release.old /etc/centos-release
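
As a final check, the restored file should now match the backup; diff prints nothing when the two are identical:

  diff /etc/centos-release /etc/centos-release.old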
  