
CEPH :: Integration

As a general rule, we recommend deploying Ceph on newer releases of Linux. We also recommend deploying on releases with long-term support.

CEPH :: Integration :: Quick Start Preflight

http://docs.ceph.com/docs/master/start/quick-start-preflight/

[xe1gyq@server ~]$ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[xe1gyq@server ~]$ sudo vi /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
#baseurl=https://download.ceph.com/rpm/el7/noarch
baseurl=https://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[xe1gyq@server ~]$ sudo yum update
[xe1gyq@server ~]$ sudo yum install ceph-deploy

CEPH :: Integration :: How to build a Ceph Distributed Storage Cluster on CentOS 7

https://www.howtoforge.com/tutorial/how-to-build-a-ceph-cluster-on-centos-7/

  1. [OK] Step 1 - Configure All Nodes

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.74     ceph-admin
192.168.1.74     mon1
192.168.1.74     osd1
#192.168.1.74     osd2
#192.168.1.74     osd3
192.168.1.74     client
  2. [OK] Step 2 - Configure the SSH Server

  3. [OK] Step 3 - Configure Firewalld

  4. [ ] Step 4 - Configure the Ceph OSD Nodes

Attempt 1: create a loop device

[xe1gyq@server ~]$ sudo mknod -m 0660 /dev/loop2 b 7 2

Attempt 2: tmpfs and an ext2 ram disk

[xe1gyq@server ~]$ sudo mount -t tmpfs -o size=1G tmpfs /mnt
[xe1gyq@server ~]$ sudo mkfs -t ext2 -q /dev/ram1 8192

Attempt 3 (OK): ram disk formatted as XFS

[xe1gyq@server ~]$ sudo mknod -m 660 /dev/ram0 b 1 1
[xe1gyq@server ~]$ sudo chown root.disk /dev/ram0
[xe1gyq@server ~]$ ls -l /dev/ram*
brw-rw----. 1 root disk 1, 1 Feb 27 07:27 /dev/ram0
#mkdir -p /mnt/ramdisk
#mount /dev/ram0 /mnt/ramdisk
[xe1gyq@server ~]$ sudo mkfs.xfs /dev/ram0 -f
meta-data=/dev/ram0              isize=512    agcount=1, agsize=4096 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=4096, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[xe1gyq@server ~]$
[xe1gyq@server ~]$ sudo blkid -o value -s TYPE /dev/ram0
xfs
[xe1gyq@server ~]$
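
If the XFS-formatted ram disk is to back an OSD, a sketch of how it could be handed to the jewel-era ceph-deploy as a directory-backed OSD, picking up the commented-out mkdir/mount lines above (1.5.x host:path syntax; the /mnt/ramdisk mount point is an assumption, path ownership may need adjusting for the ceph user, and a ram disk is wiped on reboot, so this only suits experiments):

[xe1gyq@server ~]$ sudo mkdir -p /mnt/ramdisk
[xe1gyq@server ~]$ sudo mount /dev/ram0 /mnt/ramdisk
[cephuser@server cluster]$ ceph-deploy osd prepare osd1:/mnt/ramdisk
[cephuser@server cluster]$ ceph-deploy osd activate osd1:/mnt/ramdisk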

CEPH Deploy

[cephuser@server ~]$ sudo rpm -Uhv http://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm
[cephuser@server ~]$ sudo yum update -y && sudo yum install ceph-deploy -y
--> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.39-0.noarch (ceph-noarch)
           Requires: python-distribute
           Available: python-setuptools-0.9.8-7.el7.noarch (base)
               python-distribute = 0.9.8-7.el7
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
[cephuser@server ~]$

Issue

https://tracker.ceph.com/issues/16399

[cephuser@server ~]$ wget http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.39-0.noarch.rpm
[cephuser@server ~]$ sudo rpm -Uvh ceph-deploy-1.5.39-0.noarch.rpm --nodeps
[cephuser@server ~]$
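A quick sanity check that the --nodeps install left a working tool; the version should match the rpm:

[cephuser@server ~]$ ceph-deploy --version
1.5.39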
[cephuser@server cluster]$ ceph-deploy new mon1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy new mon1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x17a5320>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x17fed88>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['mon1']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[mon1][DEBUG ] connected to host: server 
[mon1][INFO  ] Running command: ssh -CT -o BatchMode=yes mon1
[mon1][DEBUG ] connection detected need for sudo
[mon1][DEBUG ] connected to host: mon1 
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] find the location of an executable
[mon1][INFO  ] Running command: sudo /usr/sbin/ip link show
[mon1][INFO  ] Running command: sudo /usr/sbin/ip addr show
[mon1][DEBUG ] IP addresses found: [u'192.168.1.74', u'192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host mon1
[ceph_deploy.new][DEBUG ] Monitor mon1 at 192.168.1.74
[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.74']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[cephuser@server cluster]$
[cephuser@server cluster]$ ceph-deploy install ceph-admin mon1 osd1
[cephuser@server cluster]$ ceph-deploy mon create-initial

!!! Error: ceph-deploy mon create-initial failed here; this CentOS attempt was not completed.

CEPH :: Integration :: CEPH-DEPLOY – DEPLOY CEPH WITH MINIMAL INFRASTRUCTURE

http://docs.ceph.com/ceph-deploy/docs/

CEPH :: Integration :: LXC

https://www.howtoforge.com/tutorial/how-to-install-a-ceph-cluster-on-ubuntu-16-04/

xe1gyq@kali:~/go/bin$ ./lxc launch ubuntu:16.04 ceph-admin
xe1gyq@kali:~/go/bin$ ./lxc launch ubuntu:16.04 mon1
xe1gyq@kali:~/go/bin$ ./lxc launch ubuntu:16.04 osd1
xe1gyq@kali:~/go/bin$ ./lxc launch ubuntu:16.04 osd2
xe1gyq@kali:~/go/bin$ ./lxc launch ubuntu:16.04 osd3
xe1gyq@kali:~/go/bin$ ./lxc launch ubuntu:16.04 client
xe1gyq@kali:~/go/bin$ ./lxc list
+------------+---------+-----------------------+------+------------+-----------+
|    NAME    |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------------+---------+-----------------------+------+------------+-----------+
| ceph-admin | RUNNING | 10.189.206.217 (eth0) |      | PERSISTENT | 0         |
+------------+---------+-----------------------+------+------------+-----------+
| client     | RUNNING | 10.189.206.74 (eth0)  |      | PERSISTENT | 0         |
+------------+---------+-----------------------+------+------------+-----------+
| mon1       | RUNNING | 10.189.206.18 (eth0)  |      | PERSISTENT | 0         |
+------------+---------+-----------------------+------+------------+-----------+
| osd1       | RUNNING | 10.189.206.98 (eth0)  |      | PERSISTENT | 0         |
+------------+---------+-----------------------+------+------------+-----------+
| osd2       | RUNNING | 10.189.206.76 (eth0)  |      | PERSISTENT | 0         |
+------------+---------+-----------------------+------+------------+-----------+
| osd3       | RUNNING | 10.189.206.202 (eth0) |      | PERSISTENT | 0         |
+------------+---------+-----------------------+------+------------+-----------+
xe1gyq@kali:~/go/bin$
xe1gyq@kali:~/go/bin$ ./lxc exec ceph-admin -- /bin/bash
root@ceph-admin:~# 
xe1gyq@kali:~/go/bin$ ./lxc exec mon1 -- /bin/bash
root@mon1:~# 
xe1gyq@kali:~/go/bin$ ./lxc exec osd1 -- /bin/bash
root@osd1:~# 
xe1gyq@kali:~/go/bin$ ./lxc exec osd2 -- /bin/bash
root@osd2:~# 
xe1gyq@kali:~/go/bin$ ./lxc exec osd3 -- /bin/bash
root@osd3:~# 
xe1gyq@kali:~/go/bin$ ./lxc exec client -- /bin/bash
root@client:~#
root@client:~# apt update
root@client:~# useradd -m -s /bin/bash cephuser
root@client:~# passwd cephuser
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
root@client:~# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
cephuser ALL = (root) NOPASSWD:ALL
root@client:~# chmod 0440 /etc/sudoers.d/cephuser
root@client:~# sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
root@client:~# sudo apt-get install -y ntp ntpdate ntp-doc
root@client:~# ntpdate 0.us.pool.ntp.org
root@client:~# hwclock --systohc
root@client:~# systemctl enable ntp
root@client:~# systemctl start ntp
root@client:~# sudo apt-get install -y python python-pip parted
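
The same prep (cephuser account, sudoers entry, NTP, python/parted) has to be repeated on every container, not just client; one way is to collect the commands above into a script on the LXD host and fan it out (a sketch; prep.sh is a hypothetical file holding those commands):

xe1gyq@kali:~/go/bin$ for c in ceph-admin mon1 osd1 osd2 osd3; do ./lxc file push prep.sh $c/root/prep.sh && ./lxc exec $c -- bash /root/prep.sh; done
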
root@client:~# vim /etc/hosts
10.189.206.217        ceph-admin
10.189.206.18         mon1
10.189.206.98         osd1
10.189.206.76         osd2
10.189.206.202        osd3
10.189.206.74         client
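
These host entries likewise need to land on every node; the file edited on client can be fanned out from the LXD host (a sketch using lxc file pull/push):

xe1gyq@kali:~/go/bin$ ./lxc file pull client/etc/hosts ./hosts
xe1gyq@kali:~/go/bin$ for c in ceph-admin mon1 osd1 osd2 osd3; do ./lxc file push ./hosts $c/etc/hosts; done
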
root@mon1:~# nano /etc/ssh/sshd_config
From:
PasswordAuthentication no
To:
PasswordAuthentication yes
root@mon1:~# sudo systemctl restart sshd

In Ceph-Admin

cephuser@ceph-admin:~$ ssh-keygen
cephuser@ceph-admin:~$ vim ~/.ssh/config
Host ceph-admin
        Hostname ceph-admin
        User cephuser

Host mon1
        Hostname mon1
        User cephuser

Host osd1
        Hostname osd1
        User cephuser

Host osd2
        Hostname osd2
        User cephuser

Host osd3
        Hostname osd3
        User cephuser

Host client
        Hostname client
        User cephuser
cephuser@ceph-admin:~$ chmod 644 ~/.ssh/config
cephuser@ceph-admin:~$ ssh-keyscan osd1 osd2 osd3 client mon1 >> ~/.ssh/known_hosts
# client:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd1:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd1:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd1:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd3:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# client:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# mon1:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# mon1:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd2:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd2:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd2:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd3:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# osd3:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# client:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
# mon1:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4
cephuser@ceph-admin:~$
cephuser@ceph-admin:~$ ssh-copy-id mon1
cephuser@ceph-admin:~$ ssh-copy-id osd1
cephuser@ceph-admin:~$ ssh-copy-id osd2
cephuser@ceph-admin:~$ ssh-copy-id osd3
cephuser@ceph-admin:~$ ssh-copy-id client
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephuser/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephuser@client's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'client'"
and check to make sure that only the key(s) you wanted were added.
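
With the keys distributed, passwordless access can be spot-checked before handing the nodes over to ceph-deploy; the command should return the hostname without a password prompt:

cephuser@ceph-admin:~$ ssh mon1 hostname
mon1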

In all servers

cephuser@ceph-admin:~$ ssh root@ceph-admin
cephuser@ceph-admin:~$ sudo apt-get install -y ufw
cephuser@ceph-admin:~$ sudo ufw allow 22/tcp
Rules updated
Rules updated (v6)
cephuser@ceph-admin:~$ sudo ufw allow 80/tcp
Rules updated
Rules updated (v6)
cephuser@ceph-admin:~$ sudo ufw allow 2003/tcp
Rules updated
Rules updated (v6)
cephuser@ceph-admin:~$ sudo ufw allow 4505:4506/tcp
Rules updated
Rules updated (v6)
cephuser@ceph-admin:~$
root@client:~# sudo ufw enable
Firewall is active and enabled on system startup
root@client:~#

In all OSDs
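
The OSD commands themselves were not captured here; mirroring the firewalld rules used on the physical OSD host further below, the likely fill is the OSD traffic range on each OSD node, plus 6789/tcp on the monitor:

root@osd1:~# sudo ufw allow 6800:7300/tcp
root@osd1:~# sudo ufw enable
root@mon1:~# sudo ufw allow 6789/tcp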

In Ceph-Admin

cephuser@ceph-admin:~/cluster$ vim ceph.conf
# Your network address
public network = 10.189.206.0/24
osd pool default size = 2
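
Note these two lines are appended to the ceph.conf that ceph-deploy new mon1 generates below. For reference, a sketch of how the full file might look afterwards; the fsid and monitor address are taken from the transcript further down, while the exact auth lines depend on the ceph-deploy release:

[global]
fsid = 93a3e4ab-3884-4514-9a95-a5a67c071f42
mon_initial_members = mon1
mon_host = 10.189.206.18
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.189.206.0/24
osd pool default size = 2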
cephuser@ceph-admin:~$ sudo pip install ceph-deploy
The directory '/home/cephuser/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/cephuser/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting ceph-deploy
  Downloading ceph-deploy-2.0.0.tar.gz (113kB)
    100% |████████████████████████████████| 122kB 359kB/s 
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/dist-packages (from ceph-deploy)
Installing collected packages: ceph-deploy
  Running setup.py install for ceph-deploy ... done
Successfully installed ceph-deploy-2.0.0
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
cephuser@ceph-admin:~$ mkdir cluster
cephuser@ceph-admin:~$ cd cluster/
cephuser@ceph-admin:~/cluster$ ceph-deploy new mon1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /usr/local/bin/ceph-deploy new mon1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb205a26dd0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['mon1']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fb205a0a668>
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[mon1][DEBUG ] connected to host: ceph-admin 
[mon1][INFO  ] Running command: ssh -CT -o BatchMode=yes mon1
[mon1][DEBUG ] connection detected need for sudo
[mon1][DEBUG ] connected to host: mon1 
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] find the location of an executable
[mon1][INFO  ] Running command: sudo /bin/ip link show
[mon1][INFO  ] Running command: sudo /bin/ip addr show
[mon1][DEBUG ] IP addresses found: [u'10.189.206.18']
[ceph_deploy.new][DEBUG ] Resolving host mon1
[ceph_deploy.new][DEBUG ] Monitor mon1 at 10.189.206.18
[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.189.206.18']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
cephuser@ceph-admin:~/cluster$
cephuser@ceph-admin:~/cluster$ ceph-deploy install ceph-admin mon1 osd1 osd2 osd3
cephuser@ceph-admin:~/cluster$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /usr/local/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1d9869bfc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f1d98b098c0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts mon1
[ceph_deploy.mon][DEBUG ] detecting platform for host mon1 ...
[mon1][DEBUG ] connection detected need for sudo
[mon1][DEBUG ] connected to host: mon1 
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 16.04 xenial
[mon1][DEBUG ] determining if provided host has same hostname in remote
[mon1][DEBUG ] get remote short hostname
[mon1][DEBUG ] deploying mon to mon1
[mon1][DEBUG ] get remote short hostname
[mon1][DEBUG ] remote hostname: mon1
[mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[mon1][DEBUG ] create the mon path if it does not exist
[mon1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-mon1/done
[mon1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-mon1/done
[mon1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-mon1.mon.keyring
[mon1][DEBUG ] create the monitor keyring file
[mon1][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i mon1 --keyring /var/lib/ceph/tmp/ceph-mon1.mon.keyring --setuser 64045 --setgroup 64045
[mon1][DEBUG ] ceph-mon: renaming mon.noname-a 10.189.206.18:6789/0 to mon.mon1
[mon1][DEBUG ] ceph-mon: set fsid to 93a3e4ab-3884-4514-9a95-a5a67c071f42
[mon1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-mon1 for mon.mon1
[mon1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-mon1.mon.keyring
[mon1][DEBUG ] create a done file to avoid re-doing the mon deployment
[mon1][DEBUG ] create the init path if it does not exist
[mon1][INFO  ] Running command: sudo systemctl enable ceph.target
[mon1][INFO  ] Running command: sudo systemctl enable ceph-mon@mon1
[mon1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@mon1.service to /lib/systemd/system/ceph-mon@.service.
[mon1][INFO  ] Running command: sudo systemctl start ceph-mon@mon1
[mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok mon_status
[mon1][DEBUG ] ********************************************************************************
[mon1][DEBUG ] status for monitor: mon.mon1
[mon1][DEBUG ] {
[mon1][DEBUG ]   "election_epoch": 3, 
[mon1][DEBUG ]   "extra_probe_peers": [], 
[mon1][DEBUG ]   "monmap": {
[mon1][DEBUG ]     "created": "2018-02-28 11:56:34.235890", 
[mon1][DEBUG ]     "epoch": 1, 
[mon1][DEBUG ]     "fsid": "93a3e4ab-3884-4514-9a95-a5a67c071f42", 
[mon1][DEBUG ]     "modified": "2018-02-28 11:56:34.235890", 
[mon1][DEBUG ]     "mons": [
[mon1][DEBUG ]       {
[mon1][DEBUG ]         "addr": "10.189.206.18:6789/0", 
[mon1][DEBUG ]         "name": "mon1", 
[mon1][DEBUG ]         "rank": 0
[mon1][DEBUG ]       }
[mon1][DEBUG ]     ]
[mon1][DEBUG ]   }, 
[mon1][DEBUG ]   "name": "mon1", 
[mon1][DEBUG ]   "outside_quorum": [], 
[mon1][DEBUG ]   "quorum": [
[mon1][DEBUG ]     0
[mon1][DEBUG ]   ], 
[mon1][DEBUG ]   "rank": 0, 
[mon1][DEBUG ]   "state": "leader", 
[mon1][DEBUG ]   "sync_provider": []
[mon1][DEBUG ] }
[mon1][DEBUG ] ********************************************************************************
[mon1][INFO  ] monitor: mon.mon1 is running
[mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.mon1
[mon1][DEBUG ] connection detected need for sudo
[mon1][DEBUG ] connected to host: mon1 
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] find the location of an executable
[mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.mon1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpbBe8Qo
[mon1][DEBUG ] connection detected need for sudo
[mon1][DEBUG ] connected to host: mon1 
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] get remote short hostname
[mon1][DEBUG ] fetch remote file
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.mon1.asok mon_status
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.admin
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.bootstrap-mds
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.bootstrap-mgr
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.bootstrap-osd
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpbBe8Qo
cephuser@ceph-admin:~/cluster$
cephuser@ceph-admin:~/cluster$ ceph-deploy gatherkeys mon1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /usr/local/bin/ceph-deploy gatherkeys mon1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f111fbea440>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['mon1']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x7f11200462a8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpkVjJJ0
[mon1][DEBUG ] connection detected need for sudo
[mon1][DEBUG ] connected to host: mon1 
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] get remote short hostname
[mon1][DEBUG ] fetch remote file
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.mon1.asok mon_status
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.admin
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.bootstrap-mds
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.bootstrap-mgr
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.bootstrap-osd
[mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.client.admin.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-mds.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-mgr.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-osd.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-rgw.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpkVjJJ0
cephuser@ceph-admin:~/cluster$
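
The notes stop before the OSD deployment itself. With ceph-deploy 2.0.0 the syntax changed from host:disk to --data, so the next steps would look roughly like this (a sketch, assuming /dev/sdb is the data device on each OSD container):

cephuser@ceph-admin:~/cluster$ ceph-deploy disk zap osd1 /dev/sdb
cephuser@ceph-admin:~/cluster$ ceph-deploy osd create --data /dev/sdb osd1
cephuser@ceph-admin:~/cluster$ ceph-deploy admin ceph-admin mon1
cephuser@ceph-admin:~/cluster$ sudo ceph -s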

In OSD Physical Computer

[root@localhost xe1gyq]# sudo systemctl start firewalld
[root@localhost xe1gyq]# sudo systemctl enable firewalld
Created symlink from /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service to /usr/lib/systemd/system/firewalld.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/firewalld.service to /usr/lib/systemd/system/firewalld.service.
[root@localhost xe1gyq]# sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
success
[root@localhost xe1gyq]# sudo firewall-cmd --zone=public --add-port=2003/tcp --permanent
success
[root@localhost xe1gyq]# sudo firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
success
[root@localhost xe1gyq]# sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
success
[root@localhost xe1gyq]# sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
success
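
The permanent rules only take effect after a reload:

[root@localhost xe1gyq]# sudo firewall-cmd --reload
success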

Sandbox

[xe1gyq@server ~]$ sudo wget -O /etc/yum.repos.d/epel-rhsm.repo http://repos.fedorapeople.org/repos/candlepin/subscription-manager/epel-subscription-manager.repo
[xe1gyq@server ~]$ sudo yum install subscription-manager -y
[xe1gyq@server ~]$ sudo yum remove python2-uri-templates-0.6-5.el7.noarch
  Verifying  : 1:python-cinder-11.1.0-1.el7.noarch                          1/4 
  Verifying  : python2-google-api-client-1.4.2-4.el7.noarch                 2/4 
  Verifying  : 1:openstack-cinder-11.1.0-1.el7.noarch                       3/4 
  Verifying  : python2-uri-templates-0.6-5.el7.noarch                       4/4 
[xe1gyq@server ~]$ sudo yum install python2-uritemplate

Issue Jewel Release

Same python-distribute dependency error as in the CEPH Deploy section above (https://tracker.ceph.com/issues/16399); worked around by installing the luminous ceph-deploy rpm with --nodeps.
