  
Doesn't matter. Only one Ethernet port was used.\\
I am using 192.168.5.119/23 for the node and 192.168.4.197 for %%XenServer%%\\
  
**Installing %%XenCenter%% and %%XenServer%%**

Install the Windows 2012 R2 OS on the manager (%%XenCenter%%) server, configure the network with an IP address, and then install [[http://downloadns.citrix.com.edgesuite.net/10760/XenServer-6.6.90-XenCenterSetup.exe|XenCenter]].

Also install %%XenServer%% to the first HDD using the following [[http://downloadns.citrix.com.edgesuite.net/10759/XenServer-6.6.90-install-cd.iso|ISO]] and configure the network with an IP address. The hostname is xenserver-test.
  
==== 2. Installing ceph rbd on the XenServer node ====
  
==2.1. Setup CentOS repo.==
Set up the CentOS-Base, CentOS-Updates and CentOS-Extras repositories, using the real upstream base URLs.
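The XenServer dom0 typically does not ship with these CentOS repos enabled, so they have to be defined by hand. A minimal sketch of such a repo file follows; the file name and the mirrorlist URLs are our assumptions, matching stock CentOS 7 defaults, and should be checked against a real CentOS 7 installation:

```shell
# Sketch: recreate the base CentOS 7 repos in the XenServer dom0.
# Mirror URLs below are the stock CentOS 7 ones (an assumption; verify them).
cat << 'EOT' > /etc/yum.repos.d/CentOS-Base.repo
[base]
name=CentOS-7 - Base
mirrorlist=http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7
enabled=1

[updates]
name=CentOS-7 - Updates
mirrorlist=http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7
enabled=1

[extras]
name=CentOS-7 - Extras
mirrorlist=http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=extras
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7
enabled=1
EOT
```

The quoted heredoc delimiter (`'EOT'`) keeps yum variables like `$basearch` usable later without shell expansion, should you prefer them over hard-coded arch paths.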
  
==2.2. Setup ceph repo.==
Set up the repository:
  
  cat << EOT > /etc/yum.repos.d/ceph.repo
  [ceph]
  name=Ceph packages for Citrix
  baseurl=http://download.ceph.com/rpm/el7/x86_64/
  enabled=1
  gpgcheck=1
  type=rpm-md
  gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
  
  [ceph-noarch]
  name=Ceph noarch packages
  baseurl=http://download.ceph.com/rpm/el7/noarch/
  enabled=1
  gpgcheck=1
  type=rpm-md
  gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
  EOT
  
Import the gpg key:

  rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
  
==2.3. Installing ceph.==

  yum install ceph-common ceph ceph-deploy ntp -y
  
==2.4. Deploying monitors.==
  
Temporarily edit /etc/centos-release before deploying, because ceph-deploy checks this distribution string. See [[https://github.com/ceph/ceph-deploy/blob/master/docs/source/install.rst|Supported distributions]]:

  cp /etc/centos-release /etc/centos-release.old
  echo "CentOS Linux release 7.1.1503 (Core)" > /etc/centos-release

Deploying the monitor:

  cd /etc/ceph
  ceph-deploy new xenserver-test
  ceph-deploy mon create-initial
  
  
  ceph -s
      cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
       health HEALTH_ERR
              64 pgs stuck inactive
              64 pgs stuck unclean
              no osds
       monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
              election epoch 2, quorum 0 xenserver-test
       osdmap e1: 0 osds: 0 up, 0 in
        pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
              0 kB used, 0 kB / 0 kB avail
                    64 creating
  
  
==2.5. Deploying osd.==
  
  cd /etc/ceph
  ceph-deploy gatherkeys xenserver-test
  ceph-deploy disk zap xenserver-test:sdb
  ceph-deploy osd prepare xenserver-test:sdb

Recreating the rbd pool:

  ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
  ceph osd pool create rbd 128
  ceph osd pool set rbd min_size 1
  ceph osd pool set rbd size 1
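The pool is created with 128 placement groups, which matches the usual rule of thumb from the Ceph docs: total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. A small helper function (ours, purely illustrative) shows the arithmetic:

```shell
# Rule-of-thumb PG count: (osds * 100) / replicas, rounded up to a power of two.
suggested_pg_count() {
    local osds=$1 replicas=$2
    # ceiling of osds*100/replicas using integer arithmetic
    local raw=$(( (osds * 100 + replicas - 1) / replicas ))
    local pgs=1
    while [ "$pgs" -lt "$raw" ]; do
        pgs=$(( pgs * 2 ))
    done
    echo "$pgs"
}

suggested_pg_count 1 1   # single OSD, pool size 1 -> prints 128
```

With one OSD and a pool size of 1 the formula yields the 128 used above. Note that size 1 / min_size 1 means the data is unreplicated, which is only acceptable for a throwaway test cluster like this one.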
  
Check that all OSDs are running:
  
  ceph -s
      cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
       health HEALTH_OK
       monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
              election epoch 2, quorum 0 xenserver-test
       osdmap e12: 1 osds: 1 up, 1 in
        pgmap v19: 128 pgs, 1 pools, 0 bytes data, 0 objects
              36268 kB used, 413 GB / 413 GB avail
                   128 active+clean
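When scripting around this check, the health field can be pulled out of saved `ceph -s` output with standard text tools. A sketch, assuming the output format shown above (`health_of` is our own helper name, not a ceph command):

```shell
# Print the health field (e.g. HEALTH_OK) from a file holding `ceph -s` output.
health_of() { awk '$1 == "health" { print $2; exit }' "$1"; }

# Demo against a saved copy of the status shown above.
cat > /tmp/ceph-status.txt << 'EOT'
    cluster 37a9abb2-c7ba-45e9-aaf2-6486d7099819
     health HEALTH_OK
     monmap e1: 1 mons at {xenserver-test=192.168.5.119:6789/0}
EOT
health_of /tmp/ceph-status.txt   # prints HEALTH_OK
```

Newer Ceph releases format `ceph -s` differently (e.g. `health:` with a colon), so the awk pattern would need adjusting there.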
  
Edit back the /etc/centos-release file:

  cp /etc/centos-release.old /etc/centos-release
  
en/jobs/xenserverdundeebetaceph.txt · Last modified: 2015/11/23 19:01 by admin