CEPH is a free and open-source storage system built around a distributed object store. This article describes the basic terminology, installation steps and configuration parameters required to build your own CEPH environment.
CEPH is designed to be a distributed storage system that is highly fault tolerant, scalable and configurable. It runs across a large number of commodity machines, eliminating the need for very large central storage appliances.
This post aims to be a basic, short and self-contained article that explains all the key details needed to understand and play with CEPH.
Environment
I've set up CEPH on my laptop with a few LXC containers. My setup has –
- Host: Ubuntu 16.04 xenial 64-bit OS
- LXC version: 2.0.6
- A 40G disk partition that is free to use for this experiment.
- CEPH Release – Jewel
The Ubuntu site has documentation on LXC configuration here, but it covers creating unprivileged containers, and we'll need privileged containers. Privileged containers aren't considered secure, since container processes map to the root user on the host, so they're used here purely for experimentation.
See the documentation on the LXC site here to understand how to create privileged LXC containers.
Now, create these containers (as privileged) with the following names –
- cephadmin
- cephmon
- cephosd0
- cephosd1
- cephosd2
- cephradosgw
Here's a command to create an Ubuntu Xenial 64-bit container. You could choose another distro, in which case some of the instructions may not apply. Using the names above, run the command once for each container –
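A minimal sketch of what that looks like with the LXC 'download' template (running it as root keeps the container privileged):

```
# Run as root so the container is created as a privileged container.
# Repeat for cephadmin, cephmon, cephosd0, cephosd1, cephosd2 and cephradosgw.
sudo lxc-create -n cephadmin -t download -- --dist ubuntu --release xenial --arch amd64
```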
CEPH installations usually talk about separate private and public networks, but for testing purposes the 'lxcbr0' bridge available from LXC is sufficient to work with CEPH.
Start the containers and install SSH servers on all of them –
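For example (a sketch; the package name assumes the Ubuntu Xenial image used above):

```
# Start the container and install an SSH server inside it.
sudo lxc-start -n cephadmin
sudo lxc-attach -n cephadmin -- apt-get update
sudo lxc-attach -n cephadmin -- apt-get install -y openssh-server
```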
Repeat the above commands for all the containers you've created, then restart them all. Register the container names in your /etc/hosts so you can log in to them easily.
Disk Setup
On the 40G disk partition you've allocated for this exercise, perform the following –
- Delete the existing partition
- Create 3 new partitions
- Format each of them with the XFS filesystem.
CEPH recommends using XFS / BTRFS / EXT4. I've used XFS in my tests. I tried with EXT4 but received warnings related to limited xattr sizes while CEPH was being deployed.
Note down the device major and minor numbers of the partitions you've just created; they'll be needed for the LXC configuration below.
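Here's a sketch of the whole disk setup, assuming the new partitions come out as /dev/sda8, /dev/sda9 and /dev/sda10 (yours may be named differently):

```
# Repartition interactively: delete the spare 40G partition and create three new ones in its place.
sudo fdisk /dev/sda

# Format each new partition with XFS.
sudo mkfs.xfs -f /dev/sda8
sudo mkfs.xfs -f /dev/sda9
sudo mkfs.xfs -f /dev/sda10

# Note the major:minor numbers of each block device (e.g. "8, 8" for /dev/sda8);
# they are needed for the LXC configuration below.
ls -l /dev/sda8 /dev/sda9 /dev/sda10
```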
Make each partition you've created available to one of the cephosd0, cephosd1 and cephosd2 containers. If the above operation resulted in the devices '/dev/sda8', '/dev/sda9' and '/dev/sda10', assign each device's major and minor numbers to the corresponding cephosd container. To do that, edit the LXC configuration of each cephosd* container by performing these steps –
- Log in to each cephosd node and create the mount point for its partition (the rest of this article expects the XFS disk under /mnt/xfsdisk).
- Stop your cephosd containers (cephosd0, cephosd1, cephosd2) –
- On the host, create a corresponding 'fstab' file for each container. You'd assign /dev/sda8 to cephosd0, /dev/sda9 to cephosd1 and so on. The fstab has one line like the following per disk / partition you intend to share –
Example –
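A sketch of what the fstab for cephosd0 could look like (the /var/lib/lxc/cephosd0/fstab location and the xfs/defaults options are assumptions; mnt/xfsdisk matches the mount point used later in this article and must exist inside the container):

```
# /var/lib/lxc/cephosd0/fstab
# device     mount point (relative, no leading slash)   fs    options    dump pass
/dev/sda8    mnt/xfsdisk                                xfs   defaults   0    0
```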
IMPORTANT NOTE
The mount point inside the container has no preceding forward slash; the leading slash is not required. Refer to this post on askubuntu.com for more details.
- For each cephosd node, edit the container configuration file 'config'. The lines below give the container permission to access the device inside LXC and to mount it at the mount point defined in the fstab file. Add the following lines –
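A sketch of the kind of lines involved, using cephosd0 and the 8:8 major:minor pair of /dev/sda8 as placeholders:

```
# /var/lib/lxc/cephosd0/config

# Allow the container to access the block device; replace 8:8 with the
# major:minor numbers you noted down for the partition assigned to this container.
lxc.cgroup.devices.allow = b 8:8 rwm

# Point the container at the fstab file created above.
# (On LXC 2.1 and later this key is spelled lxc.mount.fstab.)
lxc.mount = /var/lib/lxc/cephosd0/fstab
```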
Now start the containers again. With the above set of steps, the containers are ready for us to install CEPH.
CEPH Installation
Complete the pre-flight steps of the CEPH quick install from here. The pre-flight steps –
- Set up the 'cephadmin' node with the ceph-deploy package.
- Install 'ntp' on all the nodes (required wherever an OSD or MON runs).
- Create a common ceph deploy user (with password-less ssh and sudo access) that will be used for the CEPH installation on all nodes; a sketch of this user setup follows the list.
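A sketch of the deploy-user setup from the pre-flight guide (the name 'cephdeploy' is just an example; the quick-start guide advises against using 'ceph' itself):

```
# On every node: create the deploy user with password-less sudo.
sudo useradd -m -s /bin/bash cephdeploy
sudo passwd cephdeploy
echo "cephdeploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephdeploy
sudo chmod 0440 /etc/sudoers.d/cephdeploy

# On cephadmin only, as the deploy user: set up password-less SSH to every node.
ssh-keygen
for node in cephmon cephosd0 cephosd1 cephosd2 cephradosgw; do
    ssh-copy-id cephdeploy@$node
done
```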
After the pre-flight steps are complete, check that –
- Password-less access works from 'cephadmin' to all the other nodes in your cluster via the ceph deploy user you have created.
- The XFS partition created earlier is available on all OSDs under /mnt/xfsdisk.
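A quick way to verify both (a sketch; 'cephdeploy' stands in for whatever deploy user you created):

```
# From cephadmin: should log you in without prompting for a password.
ssh cephdeploy@cephosd0

# Inside each cephosd container: the XFS partition should be mounted.
df -h /mnt/xfsdisk
```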
Next, complete the CEPH deployment. The steps, with all the illustrations, are available here; read the descriptions at that link to get a good understanding of each command. Our experiment has 3 OSDs and 3 monitor daemons. Run these commands from the 'cephadmin' node as the ceph deploy user created in pre-flight (a consolidated sketch of the sequence follows the list) –
- Create a directory and run the commands below from within it.
- Designate the following nodes as CEPH monitors
- Install CEPH on all nodes including admin node.
- Add the initial monitors and gather their keys
- Prepare the disk on each OSD
- Activate each OSD
- Run this command to ensure cephadmin can perform administrative activities on all the nodes in the CEPH cluster
- Check the health of the cluster (you should see HEALTH_OK if you've performed all the steps correctly)
- Install and deploy the instance of RADOS gateway
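Pulling the above together, here is a sketch of the full sequence for the Jewel release, run as the deploy user on cephadmin. The placement of the three monitors on cephmon, cephosd0 and cephosd1, the cluster directory name and the --release flag are assumptions; adjust them to your layout:

```
# 1. Create a cluster directory and work from inside it.
mkdir my-cluster && cd my-cluster

# 2. Designate the monitor nodes (three monitors assumed here).
ceph-deploy new cephmon cephosd0 cephosd1

# 3. Install CEPH (Jewel) on all nodes, including the admin node.
ceph-deploy install --release jewel cephadmin cephmon cephosd0 cephosd1 cephosd2 cephradosgw

# 4. Add the initial monitors and gather their keys.
ceph-deploy mon create-initial

# 5. Prepare and activate the disk on each OSD (mounted at /mnt/xfsdisk in each container).
ceph-deploy osd prepare cephosd0:/mnt/xfsdisk cephosd1:/mnt/xfsdisk cephosd2:/mnt/xfsdisk
ceph-deploy osd activate cephosd0:/mnt/xfsdisk cephosd1:/mnt/xfsdisk cephosd2:/mnt/xfsdisk

# 6. Push the config and admin keyring so these nodes can run admin commands.
ceph-deploy admin cephadmin cephmon cephosd0 cephosd1 cephosd2
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

# 7. Check the health of the cluster - expect HEALTH_OK.
ceph health

# 8. Install and deploy the RADOS gateway instance.
ceph-deploy rgw create cephradosgw
```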
NOTE
If you haven't configured the OSD daemons to start automatically, they won't come up after an LXC container restart, and running 'ceph -s', 'ceph -w' or 'ceph health' on the admin node will show HEALTH_ERR and a degraded cluster since the OSDs are down. In that case, log in to each OSD node and start the OSD instance manually with sudo systemctl start ceph-osd@<id>; <id> in our case is 0, 1 or 2.
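For example, on cephosd0 (use IDs 1 and 2 on the other two containers):

```
# Start the OSD now and enable it so it comes up on the next container restart.
sudo systemctl start ceph-osd@0
sudo systemctl enable ceph-osd@0
```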
With the above steps, the cluster should be up with all PGs showing the 'active+clean' status.
Access data on the cluster
To run these commands you'll need a ceph.conf containing the monitor node information, along with the admin keyring file. For test purposes you can run them on the cephadmin node from the cluster directory you created in the first step of the CEPH Installation section.
- To list all available data pools
- To create a pool
- To insert the data in a file into a pool
- To list all objects in a pool
- To get a data object from the pool
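A consolidated sketch of the corresponding commands; the pool name 'testpool', the object name 'obj1', the file names and the placement-group count are all placeholders:

```
# List all available data pools.
rados lspools

# Create a pool (128 placement groups is just an example value).
ceph osd pool create testpool 128

# Insert the data in a file into the pool as an object.
rados put obj1 ./myfile.txt --pool=testpool

# List all objects in the pool.
rados ls --pool=testpool

# Get a data object back from the pool into a local file.
rados get obj1 ./myfile.out --pool=testpool
```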