Airzero Cloud

Next Generation Cloud!

What is a RAID Controller?

A RAID controller is a hardware device or software program that manages the hard disk drives or solid-state drives in a computer or storage array so that they function as a logical unit. A RAID controller protects stored data while also potentially improving performance by speeding access to that data.

  • A RAID controller acts as a bridge between an operating system and the physical drives.

A RAID controller presents groups or portions of drives to applications and operating systems as logical units for which data protection schemes can be defined. Even though they may span parts of multiple drives, the logical units appear to applications and operating systems as single drives. Because the controller can access multiple copies of data across multiple physical devices, it can improve performance and protect data in the event of a system crash.

There are approximately ten different standard RAID configurations available, as well as numerous proprietary variations of the standard set of RAID levels. A RAID controller will support either a single RAID level or a group of related levels.
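The capacity trade-off between those levels comes down to simple arithmetic. As a minimal sketch (using the standard textbook formulas, not any particular controller's behavior), usable capacity for an array of identical drives can be computed like this:

```shell
#!/bin/sh
# Usable capacity for an array of N identical drives of SIZE (in TB).
# Standard formulas: RAID 0 stripes all drives, RAID 1 mirrors them,
# RAID 5 gives one drive's worth to parity, RAID 6 gives two,
# RAID 10 mirrors pairs of striped drives.
usable_capacity() {
    level=$1 n=$2 size=$3
    case $level in
        0)  echo $(( n * size )) ;;
        1)  echo "$size" ;;
        5)  echo $(( (n - 1) * size )) ;;
        6)  echo $(( (n - 2) * size )) ;;
        10) echo $(( n / 2 * size )) ;;
        *)  echo "unsupported level" >&2; return 1 ;;
    esac
}

usable_capacity 5 4 2   # four 2 TB drives in RAID 5 -> 6 TB usable
```

The same arithmetic explains why RAID 6 only becomes attractive at larger drive counts: with four drives it costs half the raw capacity, with ten drives only a fifth.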

  • Hardware vs. software RAID controllers

A physical controller is used to manage the array in hardware-based RAID. The controller can be a PCI or PCI Express card that is designed to support a specific drive format such as SATA or SCSI. Hardware RAID controllers are also known as RAID adapters.

Hardware controller prices vary significantly, with desktop-capable cards available for less than $50. More sophisticated hardware controllers capable of supporting shared networked storage are considerably more expensive, typically ranging from a few hundred dollars to more than a thousand dollars.

LSI, Microsemi Adaptec, Intel, IBM, Dell, and Cisco are just a few of the companies that currently provide hardware RAID controllers.

When choosing a hardware RAID controller, you should consider the following key features:

  • Interfaces for SATA and/or SAS (and related throughput speeds)
  • Supported RAID levels

  • Compatibility with operating systems

  • Supported device count

  • Read performance

  • IOPS rating

  • PCIe interface and cache size

  • Capabilities for encryption

  • Consumption of energy

A controller can also be software-only, using the host system's hardware resources, especially the CPU and DRAM. Although software-based RAID delivers the same functionality as hardware-based RAID, its performance is typically inferior to that of the hardware versions.

Because no special hardware is needed, the main benefits of using a software controller are flexibility and low cost. However, it is crucial to ensure that the host system's processor is powerful enough to run the software without negatively impacting the performance of other applications running on the host.

Some operating systems include RAID controller software. For example, Windows Server's Storage Spaces facility provides RAID capabilities. Most enterprise-class Linux servers include software RAID support in the form of the mdadm utility.
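To give a feel for what software RAID setup with mdadm involves, the sketch below would assemble two disks into a RAID 1 mirror. The member devices /dev/sdb and /dev/sdc are assumptions for illustration only; because the real commands require root and destroy any data on the member disks, the script just prints them unless DRY_RUN is set to 0.

```shell
#!/bin/sh
# Sketch: build a two-disk RAID 1 mirror with mdadm.
# /dev/sdb and /dev/sdc are hypothetical member disks -- adjust for your system.
# With DRY_RUN=1 (the default) the commands are only printed, never executed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$*"          # preview the command
    else
        sudo "$@"          # actually execute (requires root, destroys data)
    fi
}

run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
run mkfs.ext4 /dev/md0        # put a filesystem on the new array
run mdadm --detail --scan     # prints config lines for /etc/mdadm/mdadm.conf
```

Running it with DRY_RUN=0 would perform the actual array creation; the final command emits the line you would append to mdadm.conf so the array is assembled at boot.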

Third-party software RAID controllers, such as SnapRAID, Stablebit DrivePool, SoftRAID, and FlexRAID, are also available. These programs are typically adequate for small installations but may not meet the storage performance and capacity requirements of business environments.

Some commercially available storage arrays use software RAID controllers, but the software is typically developed and enhanced by the storage vendor to provide adequate performance. Furthermore, storage systems with software controllers are typically built around powerful processors dedicated to controlling and managing the shared storage system.

Airzero cloud is a cloud hosting service that provides compute power, database storage, content delivery, and a variety of other functions that aid in business integration.

If you have any doubts about RAID controllers, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:


Backing up the server you manage is critical if you are a system administrator and want to avoid losing important data. Setting up periodic backups allows you to restore the system in the event of an unexpected event, such as a hardware component failure, incorrect system configuration, or the presence of viruses. Use Microsoft's Windows Server Backup (WSB) solution to quickly and easily schedule backups of both the entire server and specific storage volumes, files, or folders if you use Windows Server. This blog will walk you through the steps needed to perform an automatic backup of your Cloud Server using Windows Server 2019.

  • Installing Windows Server Backup

Windows Server Backup is a Microsoft feature that allows you to create a backup copy of your server.

To begin, open the Windows Server Management Panel Dashboard, click "Add roles and features," and then install this backup feature.

On the left, a window with several sections will be displayed. You may proceed without providing the information requested in the first section "Before You Begin." Then, in the second window, "Installation Type," select "Role-based or feature-based installation" and continue.

Select the server where you want to install Windows Server Backup in the "Server Selection" section and proceed. Continue by clicking "Next" in the "Server Roles" section. Open the "Features" window, then scroll down and select the "Windows Server Backup" item.

Select "Restart the destination server automatically if required" in the "Confirmation" section and click "Install." Then, after the installation is complete, click "Close."

As a result, Windows Server Backup (WSB) is correctly installed. Start it now and configure it. The tool is accessible via the "Tools" menu in the Server Manager.

  • Configuring automatic backup

Once Windows Server Backup is open, select Local Backup on the left and then Backup Schedule on the right to configure the automatic backup rules.

A window with several sections will appear. To begin, simply click "Next" in the "Getting Started" section. Then, in "Select Backup Configuration," leave "Full server" selected if you want to back up the entire system; otherwise, select "Custom" to back up only a subset of volumes, files, or folders. Finally, click "Next" to proceed to the following section.

In the "Specify Backup Time" section, specify whether to back up once a day at a specific time or to perform multiple backups at different times on the same day.

If you selected "More than once a day," simply select the desired time in the left column and click "Add." To delete an incorrect time, simply click on the time in the right column and select "Remove." Once the backup frequency has been determined, click "Next."

You will be asked where you want to save your backup file in the "Specify Destination Type" section. Each storage option has advantages and disadvantages, which are detailed below. So think carefully about where you'll keep your backup.

There are three possibilities:

  • Saving to local hard disc: If this option is selected, the backup will be performed on a local hard disc installed on the server itself. Please keep in mind that once selected, the hard disc in question will be formatted, so remember to back up any existing data to another hard disc.

  • Saving to volume: By selecting this option, you can use a portion of your hard disc as backup storage. However, keep in mind that if you choose this option, the hard disk's read/write speed may slow significantly during the backup phase. If you intend to use this method, it may be a good idea to schedule the backup during times when your server receives fewer requests.

  • Saving to a network folder: By selecting this option, your server can be backed up to a network hard disc. This will allow you to store your data on a NAS or another Cloud Server that is available. However, because it is overwritten each time, only one backup can be saved at a time in this case.

After you've chosen the best option for you, click "Next." The option "Saving to volume" is selected in this example.

The "Confirmation" section now displays a summary of the backup settings you've selected. To schedule the backup, click "Finish." When you're finished, click "Close."


You have now successfully scheduled your first automatic backup on Windows Server 2019. Windows Server Backup will back up your data based on the frequency and storage options you specify, preserving a copy of the files on your server.
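The same schedule can also be created without the GUI, using Windows Server's built-in wbadmin tool from an elevated prompt. As a rough sketch, the helper below assembles the equivalent `wbadmin enable backup` command line; the backup target `E:` and the 21:00 start time are example values, not ones taken from the steps above.

```shell
#!/bin/sh
# Assemble the wbadmin command line that schedules a daily backup of all
# critical volumes plus the system state. TARGET and TIME are example values;
# the function only builds the string -- the resulting command must be run
# in an elevated prompt on the Windows Server itself.
build_wbadmin_cmd() {
    target=$1   # backup destination volume, e.g. E:
    time=$2     # daily start time, 24-hour HH:MM
    echo "wbadmin enable backup -addtarget:$target -schedule:$time -allCritical -systemState -quiet"
}

build_wbadmin_cmd "E:" "21:00"
```

`-allCritical` covers everything needed for a bare-metal restore, and `-quiet` suppresses the confirmation prompt so the command can be scripted.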

Backups of your Windows server should be scheduled at all times because they allow you to restore data and settings if something goes wrong, such as defective hardware, incorrect system configuration, or the presence of malware. Also, keep in mind the benefits and drawbacks of the various backup methods available to avoid file inaccessibility or backup overwrites.

Airzero cloud is a cloud hosting service that offers services such as compute power, database storage, content delivery, and a variety of other functions that will aid in the integration of a business.

If you have any doubts about how to schedule automatic backups on Windows Server 2019, don't hesitate to contact us through the email address given below.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:


MongoDB is a document database that is widely used in modern web applications. It is classified as a NoSQL database because it does not rely on a traditional table-based relational structure.

Instead, it uses JSON-like documents with dynamic schemas, which means that, unlike relational databases, MongoDB does not require a predefined schema before data is added to a database. You can change the schema at any time, as often as you like, without having to set up a new database with a new schema. In this blog, you will install and test MongoDB on an Ubuntu 20.04 server and learn how to manage it as a system service.


You will need the following items to follow this blog:

One Ubuntu 20.04 server. This server should have a non-root user with administrative privileges and a firewall configured with UFW.

Step 1 — Installing MongoDB

A stable version of MongoDB is available in Ubuntu's official package repositories. However, as of this writing, the latest stable release of MongoDB is 4.4, while the version available from the default Ubuntu repositories is 3.6. To get the most up-to-date version of this software, add MongoDB's dedicated package repository to your APT sources. Then you can install mongodb-org, a meta-package that always points to the most recent version of MongoDB. To begin, run the following command to import the public GPG key for the latest stable version of MongoDB. If you intend to use a version of MongoDB other than 4.4, be sure to change 4.4 in the URL portion of this command to match the version you want to install:

curl -fsSL | sudo apt-key add -
  • cURL is a command-line tool, available on many operating systems, that is used to transfer data.
  • It reads the data stored at the URL passed to it and prints it to the system's output. It's also worth noting that this curl command includes the -fsSL flags, which, among other things, tell cURL to fail silently. This means that if cURL is unable to contact the GPG server, or if the GPG server is down, it will not accidentally add the resulting error code to your list of trusted keys.

If the key was added successfully, this command will return OK.

  • If you want to double-check that the key was correctly added, use the following command:

apt-key list

  • The MongoDB key will be returned somewhere in the output:


pub   rsa4096 2019-05-28 [SC] [expires: 2024-05-26]
      2069 1EEC 3521 6C63 CAF6  6CE1 6564 08E3 90CF B1F5
uid           [ unknown] MongoDB 4.4 Release Signing Key 
. . .

At this point, your APT installation still doesn't know where to find the mongodb-org package you need in order to install the latest version of MongoDB. APT looks for online sources of packages to download and install in two places on your server: the sources.list file and the sources.list.d directory. sources.list is a file that lists active APT data sources, one per line, with the most preferred sources listed first. The sources.list.d directory lets you add such sources as separate files. Run the following command, which creates a file named mongodb-org-4.4.list in the sources.list.d directory. This file contains a single line of text:

echo "deb [ arch=amd64,arm64 ] focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list

This single line tells APT everything it needs to know about what the source is and where to find it:

  • deb: This means the source entry references a regular Debian architecture. In other cases, this part of the line might read deb-src, which means the source entry represents a Debian distribution's source code.

  • [ arch=amd64,arm64 ]: This specifies which architectures the APT data should be downloaded for. In this case, it specifies the amd64 and arm64 architectures.

  • This is a URI representing the location where the APT data can be found. In this case, the URI points to the HTTPS address of the official MongoDB repository.

  • focal/mongodb-org/4.4: Ubuntu repositories can contain several different releases. This specifies that you only want version 4.4 of the mongodb-org package, made available for the focal release of Ubuntu.

  • multiverse: This points APT to the multiverse component of the repository.

After running the tee command, refresh your server's local package index so APT knows where to find the mongodb-org package:

sudo apt update

Following that, you can install MongoDB:

sudo apt install mongodb-org

When prompted, press Y followed by ENTER to confirm that you want to install the package. When the command finishes, MongoDB will be installed on your system. However, it is not yet ready to use. Next, you'll start MongoDB and verify that it's operational.

Step 2: Launch the MongoDB Service and Test the Database

The previous step's installation automatically configures MongoDB to run as a daemon controlled by systemd, which means you can manage MongoDB using the various systemctl commands. This installation procedure, however, does not start the service automatically. To start the MongoDB service, use the following systemctl command:

sudo systemctl start mongod.service

Then, check the service's status. Note that this command omits the .service suffix from the service file's name; systemctl appends the suffix automatically when it isn't included, so it's not necessary to add it:

sudo systemctl status mongod

This command will produce the following output, indicating that the service is operational:


● mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
     Active: active (running) since Tue 2020-06-09 12:57:06 UTC; 2s ago
   Main PID: 37128 (mongod)
     Memory: 64.8M
     CGroup: /system.slice/mongod.service
             └─37128 /usr/bin/mongod --config /etc/mongod.conf

Next, enable the MongoDB service to start automatically at boot:

sudo systemctl enable mongod

You can confirm that the database is operational further by connecting to the database server and running a diagnostic command. The command given will connect to the database and output its current version, server address, and port. It will also return the outcome of the MongoDB internal connectionStatus command:

mongo --eval 'db.runCommand({ connectionStatus: 1 })'


A value of 1 in the response's ok field indicates that the server is functioning normally:

MongoDB shell version v4.4.0
connecting to: mongodb://
Implicit session: session { "id" : UUID("1dc7d67a-0af5-4394-b9c4-8a6db3ff7e64") }
MongoDB server version: 4.4.0
    "authInfo" : {
        "authenticatedUsers" : [ ],
        "authenticatedUserRoles" : [ ]
    "ok" : 1

Also, keep in mind that the database is running on port 27017 on, the local loopback address for localhost. Next, we'll look at how to manage the MongoDB server instance with systemd.
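For scripting, the check above can be reduced to a pass/fail test by grepping the command's output for the ok field. A minimal sketch (here a sample response is inlined as a stand-in for what `mongo --eval` would print on a live server):

```shell
#!/bin/sh
# Health-check sketch: a MongoDB server is healthy when connectionStatus
# reports "ok" : 1. On a live server the JSON would come from:
#   mongo --quiet --eval 'db.runCommand({ connectionStatus: 1 })'
# Here a sample response stands in for that output so the sketch is runnable.
response='{ "authInfo" : { "authenticatedUsers" : [ ] }, "ok" : 1 }'

if echo "$response" | grep -q '"ok" : 1'; then
    echo "mongodb healthy"
else
    echo "mongodb NOT healthy" >&2
fi
```

A check like this can be dropped into a cron job or monitoring script; the exit status of grep alone is enough to drive an alert.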

Step 3: Overseeing the MongoDB Service

As with other Ubuntu system services, you can manage the MongoDB service using standard systemctl commands.

The systemctl status command, as previously mentioned, checks the status of the MongoDB service:

sudo systemctl status mongod
  • You can stop the service at any time by typing:
sudo systemctl stop mongod
  • To restart the service after it has been stopped, type:
sudo systemctl start mongod
  • When the server is already running, you can restart it:
sudo systemctl restart mongod

In Step 2, you enabled MongoDB to start with the server automatically. If you ever want to turn off the automatic startup, type:

sudo systemctl disable mongod

Then, to re-enable it to start up at boot, run the enable command again:

sudo systemctl enable mongod

Systemd Essentials: Working with Services, Units, and the Journal contains more information on how to manage systemd services.


You added the official MongoDB repository to your APT instance and installed the latest version of MongoDB in this blog. You then practised some systemctl commands and tested Mongo's functionality.

If you have any questions about installing MongoDB on Ubuntu 20.04, please do not hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is a leading web hosting company with a variety of useful tools. We will help you expand your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

In this blog, we will look at how to enable remote access to a MySQL database on an Ubuntu machine.

  • In the MySQL Configuration file, allow connections from clients other than localhost.

In the configuration file, allow MySQL connections from other clients. The MySQL configuration file is named mysqld.cnf and is located in the /etc/mysql/mysql.conf.d directory. By default, MySQL is configured to accept connections only from localhost (, so we must change the bind address to 0.0.0.0 to allow connections from other clients.

Before:

bind-address =

After:

bind-address = 0.0.0.0
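The edit can also be scripted. The sed one-liner below rewrites the bind-address line; it is demonstrated here on a temporary scratch copy, so nothing changes until you point the same command (with sudo) at the real /etc/mysql/mysql.conf.d/mysqld.cnf and restart MySQL.

```shell
#!/bin/sh
# Rewrite bind-address so MySQL listens on all interfaces.
# Demonstrated on a scratch copy; for the real file, run the same sed with
# sudo against /etc/mysql/mysql.conf.d/mysqld.cnf and then restart MySQL:
#   sudo systemctl restart mysql
cnf=$(mktemp)
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$cnf"

sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' "$cnf"

grep '^bind-address' "$cnf"   # -> bind-address = 0.0.0.0
rm -f "$cnf"
```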
  • In the Ubuntu Machine's firewall, whitelist the client’s IP address.

The Ubuntu machine includes the Ubuntu Firewall, which by default does not permit incoming connections to MySQL port 3306. As a result, we must open the port for the client's specific IP address or, if your client does not have a fixed IP address, for all IP addresses.

Assume your client has a fixed IP address, for example (a placeholder documentation address). The following command will enable incoming connections to port 3306 from that address:

sudo ufw allow from to any port 3306

If your client does not have a fixed IP address or if you need to allow all IP addresses (not recommended as anyone can attempt to connect to 3306),

sudo ufw allow 3306

If you have any questions about how to enable remote MySQL database access on an Ubuntu machine, please do not hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is a fantastic web hosting service provider with an array of powerful tools. We will assist you in growing your business.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

Cloud hosting for enterprise-level deployments in today's competitive business environment necessitates a highly scalable storage solution to streamline and manage critical business data.

With technology and best practices rapidly moving to cloud-based services in order to keep up with growing businesses, Ceph emerged as a software storage solution that supports a highly sustainable growth model.

In this blog, we will go over various aspects of Ceph storage and how it can meet the demanding storage requirements of businesses.

What exactly is Ceph Storage?

Red Hat Ceph is open-source software that aims to provide highly scalable object, block, and file-based storage in a single system. Ceph is a powerful storage solution that uses its own Ceph file system (CephFS) and is self-managing and self-healing. It is capable of dealing with outages on its own and is constantly working to reduce administrative costs.

Another advantage of Ceph storage is that it is highly fault-tolerant and replicates data seamlessly. This means that there are no bottlenecks in the process while Ceph is running.

There have been more than 15 Ceph releases since its initial release, with Red Hat recently announcing a major update as Red Hat Storage 4, which brings an array of improvements in monitoring, scalability, management, and security, making it easier for enterprises to get started with Ceph.

Ceph's most recent features include:

  • High scalability
  • Open-source
  • High reliability via distributed data storage
  • Robust data security via redundant storage
  • Benefits of continuous memory allocation
  • Convenient software-based increase in availability via an integrated data-location algorithm

Understanding How Ceph Block Storage Works

Ceph's primary block storage medium is the Ceph Block Device, a virtual disk that can be attached to virtual machines or bare-metal Linux-based servers. RADOS (Reliable Autonomic Distributed Object Store), a key component of Ceph, provides powerful block storage capabilities such as replication and snapshots, and can be integrated with OpenStack Block Storage.

Ceph also provides CephFS, a robust, POSIX-compliant (Portable Operating System Interface) file system for storing data in its storage clusters. The file system has the advantage of using the same clustered system as Ceph block storage and object storage to store massive amounts of data.

The architecture of Ceph Storage

Ceph requires several computers to be linked together in what is known as a cluster. Each of these networked computers is referred to as a node.

The following are some of the tasks that must be distributed among the network's nodes:

  • Monitor nodes (ceph-mon): These cluster monitors are primarily responsible for monitoring the status of individual cluster nodes, particularly object storage devices, managers, and metadata servers. It is recommended that at least three monitor nodes be used to ensure maximum reliability.

  • Object Storage Devices (ceph-osd): OSDs are background processes that manage the actual data and are in charge of storage, replication, and data restoration. It is recommended that a cluster have at least three OSDs.

  • Managers (ceph-mgr): They collaborate with ceph monitors to manage the status of system load, storage usage, and node capacity.

  • Metadata servers (ceph-mds): They store metadata such as file names, storage paths, and timestamps of CephFS files for a variety of performance reasons.

The heart of Ceph data storage is an algorithm called CRUSH (Controlled Replication Under Scalable Hashing), which uses the CRUSH Map—an allocation table—to locate an OSD with the requested file. CRUSH selects the best storage location based on predefined criteria, determines which files are duplicated, and then saves them on physically separate media. The relevant criteria can be set by the network administrator.
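The key idea is that placement is computed, not looked up: any client can independently derive an object's location from its name. The sketch below is a toy hash-based placement function meant only to illustrate that determinism; it is not the real CRUSH algorithm, which also weighs failure domains, device capacities, and replication rules from the CRUSH map.

```shell
#!/bin/sh
# Toy illustration of computed placement (NOT the real CRUSH algorithm):
# hash the object name and map it onto one of N OSDs. Every client running
# the same function derives the same location without consulting any
# central lookup table.
place_object() {
    name=$1 num_osds=$2
    hash=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
    echo "osd.$(( hash % num_osds ))"
}

place_object "volume-42/chunk-7" 6
place_object "volume-42/chunk-7" 6   # same name -> same OSD, deterministically
```

Because every client computes the same answer, there is no central directory to query or keep consistent, which is what lets Ceph scale out without a metadata bottleneck for object placement.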

RADOS, a completely reliable, distributed object store composed of self-mapping, intelligent storage nodes, serves as the foundation of the Ceph data storage architecture.

Some of the methods for accessing Ceph-stored data include:

  • radosgw: Through this gateway, data can be read or written using the HTTP protocol.
  • librados: Native access to stored data is possible via APIs in programming and scripting languages such as Python, Java, C/C++, and PHP when using the librados software libraries.
  • RADOS Block Device: Access here requires either a virtualization system such as QEMU/KVM or block storage via a kernel module.

Ceph Storage Capability

Ceph adds a number of advantages to OpenStack-based private clouds. Here are a few examples to help you better understand Ceph storage performance.

  • High availability and improved performance

Ceph's erasure coding feature vastly improves data availability by adding resiliency and durability. At times, write speeds can be nearly twice as fast as with the previous backend.

  • Strong security

Active directory integration, encryption features, LDAP, and other features in place with Ceph can help to limit unauthorized access to the system.

  • Adoption without a hitch

Making the switch to software-defined storage platforms can be difficult at times. Ceph solves the problem by allowing block and object storage in the same cluster without requiring you to manage separate storage services via other APIs.

  • Cost-effectiveness

Ceph runs on commodity hardware, making it a low-cost solution that does not require any expensive or additional hardware.

Ceph Block and Ceph Object Storage Use Cases

Ceph was designed primarily to run smoothly on general-purpose server hardware. It supports elastic provisioning, making petabyte-to-exabyte scale data clusters economically feasible to build and maintain.

Unlike other mass storage systems that are great at storage but quickly run out of throughput or IOPS before they run out of capacity, Ceph scales performance and capacity independently, allowing it to support a variety of deployments optimized for a specific use case.

The following are some of the most common use cases for Ceph Block & Object Storage:

  • Ceph Block use cases

– Deploy elastic block storage with on-premise cloud

– Storage for VM disc volumes that run smoothly

– SharePoint, Skype, and other collaboration applications storage

– Primary storage for MySQL and other similar SQL database applications

– Dev/Test Systems storage

– IT management apps storage

  • Ceph Object Storage Use Cases

– Snapshots of VM disc volumes

– Video/audio/image repositories

– ISO image storage and repositories

– Archive and backup

– Deploy Dropbox-like services within the enterprise

– Deploy Amazon S3-like object store services with on-premise cloud

The Benefits and Drawbacks of Ceph Storage

While Ceph storage is a good option in many situations, it does have some drawbacks. In this section, we'll go over both of them.


Pros:

– Despite its relatively short history, Ceph is a free and well-established storage method.

– The application is extensively and thoroughly documented, and a wealth of useful information on Ceph setup and maintenance is available online.

– Ceph storage's scalability and integrated redundancy ensure data security and network flexibility.

– Ceph's CRUSH algorithm ensures high availability.


Cons:

– Due to the variety of components provided, a comprehensive network is required to fully utilize all of Ceph's functionality.

– Setting up Ceph storage takes some time, and the user is not always sure where the data is physically stored.

– It necessitates more engineering oversight to implement and manage.

Ceph Storage vs AWS S3: Key Differences and Features

In this section, we will compare two popular object stores: Amazon S3 and the Ceph Object Gateway (RadosGW). We'll focus on the similarities and some of the key differences.

While Amazon S3 (released in 2006) is primarily an AWS public object store that guarantees 99.9 percent object availability, Ceph storage is open-source software that provides distributed object, block, and file storage.

The Ceph Object Gateway daemon operates under the LGPL 2.1 license and provides two sets of APIs:

  • One is compatible with a subset of the Amazon S3 RESTful APIs.

  • The other is compatible with a subset of the OpenStack Swift API.

One key distinction between Amazon S3 and Ceph Storage is that, whereas Amazon S3 is a proprietary solution available only on Amazon's commercial public cloud (AWS), Ceph is an open-source product that can be easily installed on-premises as part of a private cloud.

Another distinction between the two is that Ceph offers strong consistency, which means that new objects and changes to existing objects are guaranteed to be visible to all clients. Amazon S3, on the other hand, provides read-after-write consistency when creating new objects and eventual consistency when updating and deleting objects.

Why is Ceph Storage insufficient for Modern Workloads?

While there is no denying that Ceph storage is highly scalable and a one-size-fits-all solution, it does have some architectural flaws, primarily because it was not designed for today's fast storage media: NAND flash and NVMe® flash.

For the following reasons, Ceph storage is unsuitable for modern workloads:

  • Enterprises that use the public cloud, their own private cloud, or are transitioning to modern applications require low latency and consistent response times. While BlueStore (a back-end object store for Ceph OSDs) can help to improve average and tail latency, it cannot always take advantage of the benefits of NVMe® flash.
  • To achieve the best possible performance, modern workloads typically deploy local flash (local NVMe® flash) on bare metal, and Ceph is not equipped to realize the optimized performance of this new media. In fact, Ceph can be an order of magnitude slower than a local flash in a Kubernetes environment where local flash is recommended.

  • Ceph has a low flash utilization rate (15-25 percent). In the event of a Ceph or host failure, rebuild times for shared storage can be extremely slow because massive amounts of traffic flow over the network for an extended period.


Choosing the right storage platform is becoming increasingly important as data takes center stage in almost every business. Ceph storage is intended to increase the accessibility of your data to you and your business applications.

Despite being a good choice for applications that don't require spinning drive performance, Ceph has architectural flaws that make it unsuitable for high-performance, scale-out databases, and other similar web-scale software infrastructure solutions.

If you have any doubts about Ceph storage, do not hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

What is Ceph?

Ceph is a compelling open-source alternative to traditional vendors' proprietary software-defined storage solutions, with a thriving community working on the technology. Ubuntu was a strong supporter of Ceph and its community from the beginning, and Canonical remains a premier member and serves on the Ceph Foundation's governing board.

Many global enterprises and telco operators are running Ceph on Ubuntu, allowing them to combine block and object storage at scale while reaping the economic and upstream benefits of open source.

Why use Ceph?

Ceph is unique in that it provides data in three ways: as a POSIX compliant filesystem via CephFS, like block storage volumes via the RBD driver, and as an object store compatible with both the S3 and Swift protocols via the RADOS gateway.

Ceph is often used to provide block and object storage to OpenStack clouds, via Cinder and as a Swift replacement. Ceph has also been adopted by Kubernetes as a popular way to provide physical volumes (PVs) through a Container Storage Interface (CSI) plugin.

Even as a stand-alone solution, Ceph is a compelling open-source storage alternative to closed-source, proprietary solutions because it reduces the OpEx costs that organizations typically incur with storage due to licensing, upgrades, and potential vendor lock-in fees.

How does Ceph work?

Ceph stores data in pools that users or other services can access to deliver block, file, or object storage. Each of these delivery mechanisms is backed by a Ceph pool. Replication, access rights, and other internal characteristics (such as data placement and ownership) are defined per pool.

The Ceph Monitors (MONs) are in charge of keeping the cluster in good working order. They use the CRUSH map to manage data placement. MONs work in a cluster with quorum-based HA, and data is stored and retrieved using Object Storage Devices (OSDs).

Each storage device maps 1:1 to a running OSD daemon process. OSDs make extensive use of the CPU and RAM of the cluster member host. This is why, when designing a Ceph cluster, it is critical to carefully balance the number of OSDs against the number of CPU cores and the amount of memory. This is particularly true when trying to achieve a hyper-converged architecture.

Using LXD as a container hypervisor helps properly enforce resource limits on most processes running on a given node. LXD is used broadly in Canonical's Charmed OpenStack distribution to separate the Ceph MONs and deliver the best economics. It is not currently recommended to containerize the Ceph OSDs.

Ceph storage mechanisms

The access mechanism is chosen per data pool. One pool, for example, may be used to store block volumes, while another serves as the storage backend for object storage or filesystems. In the case of volumes, the host attempting to mount the volume must first load the RBD kernel module, after which Ceph volumes can be mounted in the same way that local volumes are.
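To make the volume flow above concrete, here is a sketch that prints the commands for mapping an RBD image and mounting it like a local disk. The pool and image names are hypothetical, and the helper only prints the commands rather than running them, since actually executing them requires root and a reachable Ceph cluster:

```shell
# Print the commands needed to map an RBD image and mount it locally.
# The /dev/rbd/<pool>/<image> path is the udev-created symlink for a
# mapped image.
rbd_mount_cmds() {
  pool="$1"; image="$2"; mnt="$3"
  echo "modprobe rbd"
  echo "rbd map $pool/$image"
  echo "mount /dev/rbd/$pool/$image $mnt"
}

rbd_mount_cmds volumes data01 /mnt/data01
```

Running the printed commands as root on a Ceph client host should attach the image; unmap it with `rbd unmap` when done.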

Object buckets are not normally mounted. Client-side applications can use overlay filesystems to simulate a 'drive,' but it is not an actual volume that is being mounted. Access to object buckets goes through the RADOS Gateway instead: RADOSGW provides a REST API for accessing objects using the S3 or Swift protocols. CephFS is used to create and format filesystems, which are then exported and made available to local networks in the same way that NFS mounts are.

Volume and object store use cases have been in large-scale production for quite some time. The use of Ceph to combine volume and object storage provides numerous advantages to operators.

Ceph storage support with Canonical

Canonical offers Ceph support as part of Ubuntu Advantage for Infrastructure, with Standard and Advanced SLAs, which correspond to business hours and 24x7 support, respectively. Each covered node supports up to 48TB of raw storage in a Ceph cluster.

This coverage is based on our reference hardware recommendation for OpenStack and Kubernetes in a hyper-converged architecture at an optimal price per TB range while retaining the best performance across computers and networks in an on-premise cloud. If the number of nodes to TB ratio does not correspond with our recommendation and exceeds this limit, Canonical provides per TB pricing in addition to accommodating our scale-out storage customers.

Ceph is available in the Ubuntu main repository, and as such, users receive free security updates for up to five years if they use an LTS release. Beyond the standard support cycle, an additional five years of paid commercial support is available.

If you've got any doubts regarding the above topic of Ceph storage in Ubuntu, don't hesitate to contact us. Airzero Cloud is going to be your digital partner.

Airzero Cloud is one of the top web hosting service providers, offering an array of the most effective tools. We are ready to help you support your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

Automatic Virtual Machine Activation (AVMA) acts as a proof-of-purchase mechanism, helping to confirm that Windows products are used in accordance with the Product Use Rights and Microsoft Software License Terms.

AVMA lets you activate Windows Server virtual machines on a correctly activated Windows Server Hyper-V host, even in disconnected environments. AVMA binds the virtual machine activation to the licensed virtualization host and activates the virtual machine when it starts up. Reporting and tracking data is available on the virtualization host.

Practical applications

On virtualization hosts, AVMA offers several benefits.

Server data center administrators can utilize AVMA to do the following:

  • Activate virtual machines in remote locations
  • Activate virtual machines with or without an internet connection
  • Track virtual machine usage and licenses from the virtualization host, without requiring any access rights on the virtualized systems

Service Provider License Agreement partners and other hosting providers do not have to share product keys with tenants or access a tenant's virtual machine to activate it. Virtual machine activation is transparent to the tenant when AVMA is used. Hosting providers can use the server logs to verify license compliance and to track client usage history.

System requirements

The virtualization host that will run virtual machines must be activated. Keys can be obtained through the Volume Licensing Service Center or your OEM provider.

AVMA requires Windows Server Datacenter edition with the Hyper-V host role installed. The operating system version of the Hyper-V host determines which guest operating system versions can be activated in a virtual machine; in general, a host can activate guests running the same or an earlier version of Windows Server.

How to implement AVMA?

To activate VMs with AVMA, you use a generic AVMA key that corresponds to the version of Windows Server that you want to activate. To create a VM and activate it with an AVMA key, do the following:

  • On the server that will host the virtual machines, install and configure the Microsoft Hyper-V Server role. For more details, see Install Hyper-V Server. Make sure that the server is successfully activated.
  • Create a virtual machine and install a supported Windows Server operating system on it.
  • Once Windows Server is installed on the VM, install the AVMA key in the VM. From PowerShell or an elevated Command Prompt, execute the following command:
slmgr /ipk <AVMA key>

Reporting and tracking

The Key-Value Pair exchange between the virtualization host and the VM carries real-time tracking data for the guest operating systems, including activation data. This activation data is stored in the Windows registry of the virtual machine. Historical data about AVMA requests is logged in Event Viewer on the virtualization host.

If you've got any doubts regarding Automatic Virtual Machine Activation in Windows Server, don't hesitate to contact us. Airzero Cloud is going to be your digital partner.

Airzero Cloud is a leading web hosting service provider that offers an array of the most powerful tools. We are going to help you reinforce your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

Most high-end systems nowadays come with the Windows 10 operating system pre-installed. However, if you've built your own PC with a graphics card and processor, or upgraded your computer for faster speeds, you may need to install and activate Windows 10 yourself.

Windows 10 can be activated either by entering a product key or, if you previously linked your Windows 10 license to your Microsoft account, with the digital license tied to that account on the same system.

Follow the steps below to activate Windows 10, either with a product key or with the digital license linked to your Microsoft account.

What are the steps to activate Windows 10 with a product key?

  • To activate Windows 10, first have your product license key ready.
  • Press the Windows key, then go to Settings > Update & Security > Activation.
  • Select Change product key.
  • Enter your product key into the pop-up box and select Next.
  • Choose Activate.

What are the steps to activate Windows 10 with a digital license?

  • While starting activation, choose the “I do not have a product key” option.
  • Set up and log into Windows 10 with your linked Microsoft account.

Windows 10 will be automatically activated at this point. If you have made hardware changes, follow the steps below:

  • Press the Windows key, then go to Settings > Update & Security > Activation.
  • If Windows is not activated, find and select 'Troubleshoot'.
  • Choose 'Activate Windows' in the new window and then Activate. Or, select “I changed hardware on this device,” if applicable.
  • If you get sign-in prompts, follow them using the Microsoft account linked to your digital license.
  • Choose the machine you are using and check 'This is the device I am using right now' next to it.
  • Select Activate.

If you've got any doubts regarding the installation of Windows 10, don't hesitate to contact us. Airzero Cloud is going to be your digital partner.

Airzero Cloud is an excellent internet hosting service provider that offers an array of powerful tools. We are going to assist you in reinforcing your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

In this blog, we are looking at how to extend a Linux disk on VMware or Hyper-V for CentOS/Red Hat, on the fly, live.

Check whether you can expand the current disk or need to add a new one.

This is an important step, because a disk that has already been partitioned into 4 primary partitions cannot be extended any further. To check, log into your server and run fdisk -l at the command line.

# fdisk -l

Disk /dev/sda: 187.9 GB, 187904819200 bytes
255 heads, 63 sectors/track, 22844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot     Start       End     Blocks   Id  System
/dev/sda1   *           1       25      200781   83  Linux
/dev/sda2              26       2636    20972857+  8e  Linux LVM

If it looks like that, with only 2 partitions, you can safely expand the current hard disk in the Virtual Machine.

However, if it displays like this:

~# fdisk -l

Disk /dev/sda: 187.9 GB, 187904819200 bytes
255 heads, 63 sectors/track, 22844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot     Start       End     Blocks   Id  System
/dev/sda1   *           1       25      200781   83  Linux
/dev/sda2              26       2636    20972857+  8e  Linux LVM
/dev/sda3            2637       19581   136110712+  8e  Linux LVM
/dev/sda4           19582       22844   26210047+  8e  Linux LVM

This tells you that there are already 4 primary partitions on the system, and that you should add a new Virtual Disk to your Virtual Machine. You can use that extra Virtual Disk to extend your LVM, so don't worry.

The “hardware” part, “physically” adding disk space to your VM

Do that in your VMware / ESX / Hyper-V console. Is the option to expand greyed out? Add a new disk, or shut down the VM and extend the disk that way.

Partitioning the unallocated space: if you've increased the disk size

Once you've modified the disk's size in VMware, boot up your VM again if you had to shut it down to expand the disk size in vSphere. If you've rebooted the server, you won't have to rescan your SCSI devices, as that happens on boot. If you did not reboot your server, rescan your SCSI devices as follows.

First, check the names of your SCSI devices.

$ ls /sys/class/scsi_device/
0:0:0:0 1:0:0:0  2:0:0:0

Then rescan the SCSI bus. Below, replace the '0:0:0:0' with the actual SCSI bus name found with the last command. Each colon is prefixed with a backslash, which is what makes it look odd.

$ echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
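The single-device rescan above can be wrapped in a small loop that covers every SCSI device, avoiding the backslash escaping entirely. This is a sketch: the sysfs base path is parameterized only so the logic can be exercised outside a live host; on a real system it is /sys/class/scsi_device, and writing to the rescan files requires root.

```shell
# Rescan every SCSI device so the kernel notices a resized disk.
rescan_all_scsi() {
  base="${1:-/sys/class/scsi_device}"
  for dev in "$base"/*; do
    # Skip anything without a rescan attribute (or an empty glob match).
    [ -e "$dev/device/rescan" ] || continue
    echo 1 > "$dev/device/rescan"
  done
}

# On a live system (as root):
# rescan_all_scsi
```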

Partitioning the unallocated space: if you've added a new disk

If you've added a new disk to the server, the steps are similar to those described above. Instead of rescanning an already existing SCSI bus as shown earlier, you have to rescan the host to detect the new SCSI bus, since you've added a new disk.

$ ls  /sys/class/scsi_host/
total 0
drwxr-xr-x  3 root root 0 Feb 13 02:55 .
drwxr-xr-x 39 root root 0 Feb 13 02:57 ..
drwxr-xr-x  2 root root 0 Feb 13 02:57 host0

Your host bus adapter is called 'host0'.

$ echo "- - -" > /sys/class/scsi_host/host0/scan

It won’t show any output, but running ‘fdisk -l’ will display the new disk.

Create the new partition

Once the rescan is done, check whether the new space is visible on the disk.

~$  fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot     Start       End     Blocks   Id  System
/dev/sda1   *           1       13      104391   83  Linux
/dev/sda2           14      391     3036285   8e  Linux LVM

So the host can now see the 10GB hard disk.

~$  fdisk /dev/sda

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with:

Command (m for help): n

Now type 'n' to create a new partition.

Command action
e   extended
p   primary partition (1-4)

Now select "p" to create another primary partition.

Partition number (1-4): 3

Select your partition number. Since the disk already has /dev/sda1 and /dev/sda2, the logical number is 3.

First cylinder (392-1305, default 392): 
Using default value 392
Last cylinder or +size or +sizeM or +sizeK (392-1305, default 1305): 
Using default value 1305

The cylinder numbers will vary depending on your setup. It should be safe to just hit enter, as fdisk provides default values for the first and last cylinder.

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)
Command (m for help): w

Once you are back at the main prompt within fdisk, type w to write your partitions to the disk. You'll get a notification about the kernel still using the old partition table, and a suggestion to reboot to use the new table. The reboot is not required, as you can rescan for those partitions using partprobe. Execute the command below to scan for the newly created partition.
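The interactive session above can also be scripted by piping the same keystrokes into fdisk. The helper below only emits the keystroke sequence (new primary partition, type 8e for Linux LVM, write); the partition number is a parameter, and piping it into fdisk is destructive, so review carefully before using it on a real disk:

```shell
# Emit the fdisk keystrokes from the walkthrough: n, p, <number>,
# default first/last cylinder, then t, <number>, 8e, and w to write.
fdisk_keys() {
  num="$1"
  printf 'n\np\n%s\n\n\nt\n%s\n8e\nw\n' "$num" "$num"
}

# Destructive! Equivalent to the interactive session above:
# fdisk_keys 3 | fdisk /dev/sda
```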

~$ partprobe -s

If that does not work for you, you can try "partx" to rescan the device and add the new partitions. In the command below, change /dev/sda to the disk on which you've just added a new partition.

~$ partx -v -a /dev/sda

If even that does not show you the newly created partition, you will have to reboot the server.

~$  fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot     Start       End     Blocks   Id  System
/dev/sda1   *           1       13      104391   83  Linux
/dev/sda2           14      391     3036285   8e  Linux LVM
/dev/sda3           392     1305    7341705   8e  Linux LVM

Extend your Logical Volume with the new partition.

Now, create the physical volume as a basis for your LVM. Replace /dev/sda3 with your newly created partition.

~$  pvcreate /dev/sda3
Physical volume "/dev/sda3" successfully created

Now find out what your Volume Group is called.

~$  vgdisplay
--- Volume group ---
VG Name             VolGroup00

Let's extend that Volume Group by adding the newly created physical volume to it.

$  vgextend VolGroup00 /dev/sda3
Volume group "VolGroup00" successfully extended
~$  pvscan
PV /dev/sda2   VG VolGroup00   lvm2 [2.88 GB / 0    free]
PV /dev/sda3   VG VolGroup00   lvm2 [7.00 GB / 7.00 GB free]
Total: 2 [9.88 GB] / in use: 2 [9.88 GB] / in no VG: 0 [0   ]

Now we can extend the Logical Volume.

~$  lvextend /dev/VolGroup00/LogVol00 /dev/sda3
Extending logical volume LogVol00 to 9.38 GB
Logical volume LogVol00 successfully resized

If you’re executing this on Ubuntu, use the following.

~$  lvextend /dev/mapper/vg-name /dev/sda3

All that remains now is to resize the filesystem on the extended volume, so we can use the space. Substitute the path to the correct /dev device if you're on Ubuntu/Debian systems.

~$  resize2fs /dev/VolGroup00/LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/LogVol00 to 2457600 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 2457600 blocks long.
$ resize2fs /dev/mapper/centos_sql01-root
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos_sql01-root
Couldn't find valid filesystem superblock.

In that case, the filesystem is XFS and resize2fs will not work; you'll need to grow the XFS filesystem instead.
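The rule of thumb: ext2/3/4 filesystems grow with resize2fs (which takes the device path), while XFS grows with xfs_growfs (which takes the mount point). A small chooser, as a sketch with illustrative paths:

```shell
# Print the right grow command for a given filesystem type.
grow_fs_cmd() {
  fstype="$1"; dev="$2"; mnt="$3"
  case "$fstype" in
    ext2|ext3|ext4) echo "resize2fs $dev" ;;
    xfs)            echo "xfs_growfs $mnt" ;;
    *)              echo "unsupported filesystem: $fstype" >&2; return 1 ;;
  esac
}

grow_fs_cmd xfs /dev/mapper/centos_sql01-root /
```

For the XFS case here, running `xfs_growfs /` as root grows the root filesystem in place while it is mounted.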

~$  df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 9.1G 1.8G  6.9G  21% /
/dev/sda1           99M   18M   77M  19% /boot
tmpfs               125M    0  125M   0% /dev/shm

If you have any doubts about the above topic, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is an excellent web hosting service that offers an array of powerful tools. We will help you to enhance your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

Deploying to a DigitalOcean Droplet

- Posted in Cloud by

This blog steps through how to deploy and host your Gatsby site on a DigitalOcean Droplet with Ubuntu and Nginx.

DigitalOcean delivers a cloud platform to deploy, manage, and scale applications of any size, removing infrastructure friction and delivering predictability so developers and their teams can spend more time building better software.

DigitalOcean's compute Droplets are scalable IaaS compute instances, or a VPS in the cloud, with excellent reliability and scalability. They come in various price ranges, suitable for everything from small apps to giant enterprise-level apps.

They let you choose from different Unix-based distributions and set up your technology platform with preinstalled prerequisites from the marketplace. This blog will step through the exact options that work best for deploying a Gatsby site with DigitalOcean.


Prerequisites:

  • A Gatsby site in a Git repository
  • A DigitalOcean Droplet configured with a non-root sudo user
  • A custom domain name for your Gatsby site, to use when configuring HTTPS

How to deploy the Gatsby site to DigitalOcean

Install Node.js, npm and Gatsby-CLI

Log in to your droplet as a non-root user.

Install Node.js

sudo apt-get update
sudo apt-get install nodejs

Install npm

sudo apt-get install npm

Verify the installed versions:

nodejs -v
npm -v

To install the latest stable Node.js release using the n package:

sudo npm install -g n
sudo n stable
hash nodejs
hash npm

Install the Gatsby CLI globally. This will be useful later for building the Gatsby site for production.

sudo npm install -g gatsby-cli

Clone your repository to the droplet

The next step is to clone the repository containing your Gatsby app:

git clone <your-github-repo-site>

Note the path where <your-github-repo-site> is cloned, for future reference.


If you get a "Permission denied" error, check whether <your non-root user> has sudo rights. Alternatively, before cloning your repository, change permissions so that <your non-root user> can access the .config directory under /home/<your non-root user>/:

cd ~/
sudo chown -R $(whoami) .config

Generate your Gatsby site for production

The static files will be served publicly from the droplet. The

gatsby build

command creates the site and outputs the static files into the public directory.

Go to the path where <my-gatsby-app> is cloned. You can use the path noted in the previous step. Then install the dependencies:

sudo npm install

Run build to generate static files.

sudo gatsby build

Install Nginx Web Server and open firewall to accept HTTP and HTTPS requests

To host a website or static files on a Linux-based server, a web server like Apache or Nginx is needed.

Nginx is a web server that handles client requests from the World Wide Web, and it also provides components like a load balancer, mail proxy, and HTTP cache.

Install Nginx.

sudo apt-get install nginx

Configure the droplet's firewall rules to listen for HTTP and HTTPS requests on ports 80 and 443.

sudo ufw allow 'Nginx HTTP'
sudo ufw allow 'Nginx HTTPS'

To check the available application profiles,

sudo ufw app list

If the ufw status is disabled, you can enable it with the following command:

sudo ufw enable

Allow OpenSSH if you have not already, so that you don't get disconnected from your droplet.

sudo ufw allow 'OpenSSH'

Configure Nginx to point to your Gatsby site’s directory and include your domain

Change the root configuration in Nginx's default server block.

Go to

cd /etc/nginx/sites-available/

Open the default file in Vim:

sudo vim default

Edit the file and make the following modifications to the fields shown below, leaving the rest of the fields as-is. Your actual root path may vary, but it will look something like this:

server {
 root /home/<your non-root user>/<my-gatsby-app>/public;
 index index.html index.htm index.nginx-debian.html;
 server_name <your-domain>;
 location / {
   try_files $uri $uri/ =404;
 }
}

Restart the Nginx

sudo systemctl restart nginx

You should now be able to view your built Gatsby site at your DigitalOcean IP address, before configuring a domain. To configure a domain, go to the Advanced DNS settings in your domain provider's console and add an A record that points to the IP address of the droplet.

Configuring HTTPS for your Gatsby site

Follow the steps below to configure your site with a free SSL/TLS certificate from Let's Encrypt, using their Certbot CLI tool. Install Certbot onto your droplet; you'll need to install Certbot using snapd.

sudo snap install core; sudo snap refresh core
sudo apt-get remove certbot

Execute the below command to install Certbot.

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot

Generate the certificate. Certbot will automatically update the Nginx config file to point to the certificate files. Execute the command below:

sudo certbot --nginx

If you are running Certbot for the first time on this droplet, you will be prompted to enter your email for notification purposes.

Agree to the license agreement when prompted, then restart the Nginx service.

sudo systemctl restart nginx

Now, you can access your site at <your-domain> over a secure connection.

View your Gatsby site live

Once you've followed all the steps and configured everything correctly, you can visit the live site at <your-domain>.

Whenever there's an update to your site, run sudo gatsby build in the root of your <my-gatsby-app>.

You've now deployed your Gatsby app on a DigitalOcean droplet, with HTTPS configured for it.
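The update steps can be collected into a small helper that prints the redeploy commands in order. The application path here is an assumption, so adjust it to wherever your repository is cloned:

```shell
# Print the commands to refresh the live site after a change.
redeploy_cmds() {
  app_dir="$1"
  echo "cd $app_dir"
  echo "git pull"
  echo "sudo gatsby build"
}

redeploy_cmds /home/deploy/my-gatsby-app
```

Since Nginx serves the static public directory directly, no restart is needed after a rebuild.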

If you have any doubts about the above topic, don't hesitate to contact us through the email below. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: