We are now ready to install Ceph; we will install the “infernalis” release.


Log on as “root” on the Ceph admin node and create this repository configuration file:

[root@ceph-admin ~]# cat /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-infernalis/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
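
As an optional quick check (not in the original procedure), you can verify that yum now sees the new repository before updating:

[root@ceph-admin ~]# yum repolist enabled | grep -i ceph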

Launch the update procedure:

[root@ceph-admin ~]# yum -y update

and install “ceph-deploy”:

[root@ceph-admin ~]# yum -y install ceph-deploy
...
...
...
Installed:
  ceph-deploy.noarch 0:1.5.31-0

Dependency Installed:
  python-backports.x86_64 0:1.0-8.el7                          python-backports-ssl_match_hostname.noarch 0:3.4.0.2-4.el7                          python-setuptools.noarch 0:0.9.8-4.el7

Complete!
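
As an optional sanity check, the tool can report its own version, which should match the package version installed above:

[root@ceph-admin ~]# ceph-deploy --version
1.5.31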

Log on as “cephuser”:

[root@ceph-admin ~]# su - cephuser
Last login: Fri Feb 5 17:27:28 CET 2016 from ceph-monitor1.argonay.wou on pts/0

Create a directory to collect the installation files and logs:

[cephuser@ceph-admin ~]$ mkdir ceph-deploy && cd ceph-deploy

Initialize the new cluster, declaring the two monitor nodes:

[cephuser@ceph-admin ceph-deploy]$ ceph-deploy new ceph-monitor1 ceph-monitor2

...
...
...
 [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
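
Besides “ceph.conf”, “ceph-deploy new” also drops a monitor keyring and its own log file in the working directory; a listing should show something like:

[cephuser@ceph-admin ceph-deploy]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring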

This configuration file has been created:

[cephuser@ceph-admin ceph-deploy]$ cat ceph.conf
[global]
fsid = 0c5c587a-ba48-49a9-99f5-b8475add5053
mon_initial_members = ceph-monitor1, ceph-monitor2
mon_host = 192.168.1.121,192.168.1.122
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

We add some more settings to it:

[cephuser@ceph-admin ceph-deploy]$ cat ceph.conf
[global]
fsid = 0c5c587a-ba48-49a9-99f5-b8475add5053
mon_initial_members = ceph-monitor1, ceph-monitor2
mon_host = 192.168.1.121,192.168.1.122
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

# my network
public network = 192.168.1.0/24
cluster network = 192.168.1.0/24

# replicas and placement groups
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 256
osd pool default pgp num = 256

# crush leaf type
osd crush chooseleaf type = 1
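
A side note on the placement group count: a common rule of thumb is (number of OSDs x 100) / replicas, rounded to a nearby power of two. Assuming, for example, that this lab ends up with 6 OSDs (an assumption, the OSDs are created in a later step) and the pool size of 2 set above:

[cephuser@ceph-admin ceph-deploy]$ echo $(( 6 * 100 / 2 ))   # 6 OSDs (assumed) x 100 / 2 replicas
300

which, rounded to a nearby power of two, gives the 256 used here.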

Logged in as “root”, add more sections to the “/etc/yum.repos.d/ceph.repo” repository definition file:

[root@ceph-admin ~]# cat /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-infernalis/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-infernalis/el7/$basearch
enabled=1
priority=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-infernalis/el7/SRPMS
enabled=0
priority=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
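
One detail worth noting: the “priority=1” lines only take effect when the yum priorities plugin is present; on CentOS 7 it can be installed (if it is not already there) with:

[root@ceph-admin ~]# yum -y install yum-plugin-priorities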

Now, let’s go for the installation (be warned: it will take a while!):

[cephuser@ceph-admin ceph-deploy]$ ceph-deploy install ceph-admin ceph-monitor1 ceph-monitor2 ceph-node1 ceph-node2 ceph-node3
...
...
...
[ceph-node3][DEBUG ] Complete!
[ceph-node3][INFO  ] Running command: sudo ceph --version
[ceph-node3][DEBUG ] ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
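
As a final optional check, the installed version can be queried on every node from the admin host, assuming the passwordless ssh access prepared for “ceph-deploy” is in place:

[cephuser@ceph-admin ceph-deploy]$ for node in ceph-monitor1 ceph-monitor2 ceph-node1 ceph-node2 ceph-node3; do ssh $node ceph --version; done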
