

Ceph + XenServer: looking forward with the Dundee Beta 1 release

Introduction

One month ago I read the news that Beta 1 of XenServer Dundee had been released.
This release is based on Centos 7.

Task details

*Install ceph rbd on one node running XenServer Dundee Beta 1.
*Create an image and map it to the node.
*Activate the mapped image in XenCenter.

1. Installing XenServer and XenCenter

Hardware configuration.
Node: one HDD for the OS, one HDD for ceph storage, 2x Xeon 56XX, 24 GB RAM, 2x 1 Gbit/s Ethernet
Management server: one HDD, 8 GB RAM, 1 Gbit/s Ethernet, Windows 2012 R2

Network configuration.

Doesn't matter; only one Ethernet port was used.
I am using 192.168.5.119/23 for the node and 192.168.4.197 for the XenCenter machine.

Installing XenCenter and XenServer. Simply install the Windows 2012 R2 OS on the management server, configure the network with an IP address, and then install XenCenter.

Also install XenServer to the first HDD from the ISO and configure the network with an IP address. The hostname is xenserver-test.

2. Installing ceph rbd on XenServer node

2.1. Setup Centos repo.

Set up the CentOS-Base, CentOS-Updates, and CentOS-Extras repositories using real base URLs.
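As a sketch, the three repositories could be written into a single repo file; the mirror.centos.org URLs below are assumptions, so substitute the base links you actually use:

```shell
# Sketch: create a combined CentOS-Base/Updates/Extras repo file.
# The mirror.centos.org URLs are assumptions; point them at your mirror.
cat << 'EOT' > /etc/yum.repos.d/CentOS-Base.repo
[base]
name=CentOS-7 - Base
baseurl=http://mirror.centos.org/centos/7/os/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7

[updates]
name=CentOS-7 - Updates
baseurl=http://mirror.centos.org/centos/7/updates/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7

[extras]
name=CentOS-7 - Extras
baseurl=http://mirror.centos.org/centos/7/extras/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7
EOT
```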

2.2. Setup ceph repo.

Set up repository:

 cat << EOT > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for Citrix
baseurl=http://download.ceph.com/rpm/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

EOT

Import gpgkey:

 rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
2.3. Installing ceph.

 yum install ceph-common ceph ceph-deploy ntp -y

2.4. Deploying monitors.

On the first node:

cd /etc/ceph
ceph-deploy new imm10 imm11 imm12 imm13
ceph-deploy mon create-initial

Check for running monitors:

ceph -s
    cluster 9185ec98-3dec-4ba8-ab7d-f4589d6c60e7
    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
  monmap e1: 4 mons at {imm10=172.26.1.210:6789/0,imm11=172.26.1.211:6789/0,imm12=172.26.1.212:6789/0,imm13=172.26.1.213:6789/0}
          election epoch 116, quorum 0,1,2,3 imm10,imm11,imm12,imm13
  osdmap e1: 0 osds: 0 up, 0 in
  pgmap v2: 192 pgs, 4 pools, 0 bytes data, 0 objects
        0 kB used, 0 kB / 0 kB avail
             192 creating
2.5. Deploying OSDs.

For every node:

cd /etc/ceph
ceph-deploy gatherkeys immX
ceph-deploy disk zap immX:zvol/rzfs/cvol
ceph-deploy osd prepare immX:zvol/rzfs/cvol

where X is the node number.
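Since the same three commands run on each node, they can be wrapped in a loop; this is just a sketch reusing the imm10–imm13 node names and the zvol/rzfs/cvol disk path from this setup:

```shell
# Sketch: run the gatherkeys/zap/prepare sequence for every node.
cd /etc/ceph
for X in 10 11 12 13; do
    ceph-deploy gatherkeys imm$X
    ceph-deploy disk zap imm$X:zvol/rzfs/cvol
    ceph-deploy osd prepare imm$X:zvol/rzfs/cvol
done
```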

Check for all osd running:

ceph -s
      cluster 9185ec98-3dec-4ba8-ab7d-f4589d6c60e7
   health HEALTH_OK
   monmap e1: 4 mons at {imm10=172.26.1.210:6789/0,imm11=172.26.1.211:6789/0,imm12=172.26.1.212:6789/0,imm13=172.26.1.213:6789/0}
          election epoch 116, quorum 0,1,2,3 imm10,imm11,imm12,imm13
   osdmap e337: 4 osds: 4 up, 4 in
    pgmap v1059868: 512 pgs, 4 pools, 108 GB data, 27775 objects
          216 GB used, 6560 GB / 6776 GB avail
               512 active+clean
client io 0 B/s rd, 2080 B/s wr, 1 op/s

For cloning purposes, change /etc/ceph/ceph.conf on every node by adding the following line:

rbd default format = 2

Restart ceph on every node.
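The config change and restart can be scripted per node; this is a sketch, and the service names are assumptions that depend on the installed ceph release (sysvinit service for hammer-era packages, systemd target for later releases):

```shell
# Sketch: append the default image format (if missing) and restart ceph.
# The service names below are assumptions depending on the ceph release.
grep -q 'rbd default format' /etc/ceph/ceph.conf || \
    echo 'rbd default format = 2' >> /etc/ceph/ceph.conf

service ceph restart             # hammer-era sysvinit packages
# systemctl restart ceph.target  # later, systemd-based releases
```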

en/jobs/xenserverdundeebetaceph.1448292711.txt.gz · Last modified: 2015/11/23 18:31 by admin