Airzero Cloud

Next Generation Cloud!

What is Ceph?

Ceph is a compelling open-source alternative to traditional vendors' proprietary software-defined storage solutions, with a thriving community working on the technology. Ubuntu has been a strong supporter of Ceph and its community from the beginning, and Canonical remains a premier member of the Ceph Foundation and serves on its governing board.

Many global enterprises and telco operators are running Ceph on Ubuntu, allowing them to combine block and object storage at scale while reaping the economic and upstream benefits of open source.

Why use Ceph?

Ceph is unique in that it provides data in three ways: as a POSIX-compliant filesystem via CephFS, as block storage volumes via the RBD driver, and as an object store compatible with both the S3 and Swift protocols via the RADOS Gateway.
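As a quick illustration of the object interface, the S3-compatible endpoint exposed by the RADOS Gateway works with any standard S3 client. Below is a minimal sketch using boto3; the endpoint URL, bucket name, and credentials are placeholders, not values from any particular deployment.

```python
# Minimal sketch: talking to Ceph's S3-compatible object interface (RADOS Gateway)
# with boto3. Endpoint, bucket name, and credentials are assumed placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # assumed RADOS Gateway endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")                    # create a bucket
s3.put_object(Bucket="demo-bucket", Key="hello.txt",
              Body=b"Hello from Ceph object storage")     # upload an object
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```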

Ceph is often used to provide block and object storage to OpenStack clouds, via Cinder and as a Swift replacement. Ceph has also become a popular way to back Kubernetes persistent volumes (PVs) through a Container Storage Interface (CSI) plugin.

Even as a stand-alone solution, Ceph is a compelling open-source alternative to closed-source, proprietary storage because it reduces the OpEx costs that organizations typically incur through licensing, upgrades, and potential vendor lock-in fees.

How does Ceph work?

Ceph stores data in pools, which users or other services access to consume block, file, or object storage. Each of these mechanisms is backed by a Ceph pool, and replication level, access rights, and other internal characteristics such as data placement and ownership are configured per pool.
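To make the pool concept concrete, here is a minimal sketch using the librados Python bindings (python3-rados) that connects to a cluster and stores an object in a pool; the pool name mypool and the default ceph.conf path are assumptions for illustration.

```python
# Minimal sketch: connect to a Ceph cluster and write/read an object in a pool
# via librados. The pool "mypool" and the ceph.conf path are assumptions.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

ioctx = cluster.open_ioctx("mypool")          # per-pool I/O context
ioctx.write_full("greeting", b"hello ceph")   # store an object in the pool
print(ioctx.read("greeting"))                 # read it back

ioctx.close()
cluster.shutdown()
```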

The Ceph Monitors (MONs) are in charge of keeping the cluster in good working order and use the CRUSH map to manage data placement. MONs run as a quorum-based, highly available cluster, while the data itself is stored on and retrieved from Object Storage Devices (OSDs).

Each storage device is mapped 1:1 to a running OSD daemon process. OSDs make extensive use of the CPU and RAM of the host they run on, which is why, when designing a Ceph cluster, it is critical to carefully balance the number of OSDs against the available CPU cores and memory. This is particularly true when targeting a hyper-converged architecture.
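The cluster state maintained by the MONs and the aggregate capacity of the OSDs can also be inspected programmatically. A minimal sketch with the librados Python bindings, assuming a readable /etc/ceph/ceph.conf and client keyring:

```python
# Minimal sketch: inspect cluster identity, pools, and raw capacity via librados.
# get_cluster_stats() reports usage aggregated across all OSDs.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

print("fsid:", cluster.get_fsid())        # cluster identifier held by the MONs
print("pools:", cluster.list_pools())     # pools defined in the cluster
stats = cluster.get_cluster_stats()       # dict with kb, kb_used, kb_avail, num_objects
print("used / total (kB):", stats["kb_used"], "/", stats["kb"])

cluster.shutdown()
```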

Using LXD as a container hypervisor helps enforce resource limits on most of the processes running on a given node. LXD is used extensively in Canonical's Charmed OpenStack distribution to isolate the Ceph MONs and deliver the best possible economics. Containerizing the Ceph OSDs is not currently recommended.

Ceph storage mechanisms

The mechanism used to access data is chosen per pool: one pool, for example, may store block volumes, while another serves as the backend for object storage or filesystems. In the case of volumes, the host attempting to mount a volume must first load the RBD kernel module, after which Ceph volumes can be mounted in the same way as local volumes.
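As an illustration of the block path, a volume (RBD image) can be created in a pool with the rbd Python bindings before a client maps and mounts it. A minimal sketch, assuming a pool named volumes:

```python
# Minimal sketch: create and list RBD block images in a pool (python3-rbd).
# The pool name "volumes" and the image size are assumptions; a client host
# would then map the image through the RBD kernel module and format/mount it
# like a local disk.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("volumes")

rbd.RBD().create(ioctx, "demo-image", 4 * 1024**3)   # 4 GiB block image
print(rbd.RBD().list(ioctx))                         # images in the pool

ioctx.close()
cluster.shutdown()
```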

Object buckets are not normally mounted. Client-side applications can use overlay filesystems to simulate a 'drive', but no actual volume is being mounted; instead, object buckets are accessed through the RADOS Gateway, which provides a REST API for reaching objects over the S3 or Swift protocols. CephFS is used to create and format filesystems, which are then exported and made available to local networks much like NFS mounts.
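For completeness, CephFS can also be reached from userspace through the CephFS Python bindings (python3-cephfs), although in practice it is more commonly mounted with the kernel client or ceph-fuse and then shared out. A minimal sketch, assuming the default filesystem and a readable ceph.conf:

```python
# Minimal sketch: mount CephFS in userspace and write a file (python3-cephfs).
# The ceph.conf path and the file path are assumptions for illustration.
import cephfs

fs = cephfs.LibCephFS(conffile="/etc/ceph/ceph.conf")
fs.mount()                                # mount the filesystem root

fd = fs.open("/hello.txt", "w", 0o644)    # create a file at the root
fs.write(fd, b"hello from CephFS", 0)     # write at offset 0
fs.close(fd)

fs.unmount()
fs.shutdown()
```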

Volume and object store use cases have been in large-scale production for quite some time, and combining volume and object storage in a single Ceph cluster provides numerous advantages to operators.

Ceph storage support with Canonical

Canonical offers Ceph support as part of Ubuntu Advantage for Infrastructure, with Standard and Advanced SLAs, which correspond to business hours and 24x7 support, respectively. Each covered node supports up to 48TB of raw storage in a Ceph cluster.

This coverage is based on our reference hardware recommendation for OpenStack and Kubernetes in a hyper-converged architecture, at an optimal price per TB while retaining the best performance across compute and network in an on-premise cloud. If the node-to-TB ratio does not match this recommendation and exceeds the limit, Canonical also offers per-TB pricing to accommodate scale-out storage customers.

Ceph is available in the Ubuntu main repository, and as such users get free security updates for up to five years when using an LTS version. Beyond the standard support cycle, an additional five years of paid commercial support are available.

If you have any questions about Ceph storage on Ubuntu, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is one of the top web hosting service providers, offering an array of the most effective tools. We are here to help you support your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/