Airzero Cloud

Next Generation Cloud!

Introduction

MongoDB is a document database that is widely used in modern web applications.

Because it does not rely on a traditional table-based relational database structure, it is classified as a NoSQL database.

Instead, it uses JSON-like documents with dynamic schemas, meaning that, unlike relational databases, MongoDB does not require a predefined schema before you add data to a database. You can alter the schema at any time and as often as necessary without having to set up a new database with a new schema. In this blog, you will install MongoDB on an Ubuntu 20.04 server, test it, and learn how to manage it as a systemd service.

Prerequisites

You will need the following items to follow this blog:

One Ubuntu 20.04 server. This server should have a non-root administrative user and a firewall configured with UFW.

Step 1 — Installing MongoDB

A stable version of MongoDB is available in Ubuntu's official package repositories. However, as of this writing, the latest stable release of MongoDB is 4.4, while the version available from the default Ubuntu repositories is 3.6. To get the most up-to-date version of this software, add MongoDB's dedicated package repository to your APT sources. Then you can install mongodb-org, a meta-package that always points to the most recent version of MongoDB. To begin, run the following command to import the public GPG key for the most recent stable version of MongoDB. If you want to use a version of MongoDB other than 4.4, make sure to change 4.4 in the URL portion of this command to match the version you want to install:

curl -fsSL https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -
  • cURL is a command-line tool available on many operating systems and used to transfer data.
  • It reads the data stored at the URL passed to it and prints the content to the system's output. Also note that this curl command includes the -fsSL flags which, together, essentially tell cURL to fail silently. This means that if cURL is unable to contact the GPG server, or the GPG server is down, it will not inadvertently add the resulting error code to your list of trusted keys.

If the key was successfully added, this command will return OK:

Output
OK
  • If you want to double-check that the key was correctly added, use the following command:

apt-key list

  • The MongoDB key will be returned somewhere in the output:

Output

/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2019-05-28 [SC] [expires: 2024-05-26]
      2069 1EEC 3521 6C63 CAF6  6CE1 6564 08E3 90CF B1F5
uid           [ unknown] MongoDB 4.4 Release Signing Key 
. . .

At this point, your APT installation still doesn't know where to find the mongodb-org package you need in order to install the latest version of MongoDB. APT looks for online sources of packages to download and install on your server in two places: the sources.list file and the sources.list.d directory. sources.list is a file that lists active APT data sources, one per line, with the most preferred sources listed first. The sources.list.d directory lets you add such sources as separate files. Run the echo command below to create a file called mongodb-org-4.4.list in the sources.list.d directory. The only content in this file is the following single line:

deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse

echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list

This single line tells APT everything it needs to know about the source and where to find it:

deb: This indicates that the source entry references a regular Debian architecture. In other cases, this part of the line may read deb-src, which indicates that the source entry represents a Debian distribution's source code.

[arch=amd64,arm64]: 

This specifies which architectures the APT data should be downloaded for. In this case, it specifies the amd64 and arm64 architectures.

https://repo.mongodb.org/apt/ubuntu:

This is a URI that represents the location of the APT data.

In this case, the URI refers to the HTTPS address of the official MongoDB repository.

focal/mongodb-org/4.4: 

Ubuntu repositories can contain several different releases. This specifies that you only want version 4.4 of the mongodb-org package to be available for the focal release of Ubuntu.

After running the tee command above, refresh your server's local package index so APT knows where to find the mongodb-org package:

sudo apt update

Following that, you can install MongoDB:

sudo apt install mongodb-org

When prompted, enter Y and then ENTER to confirm that you want to install the package. Once the command finishes, MongoDB will be installed on your system. It is, however, not yet ready for use. Next, you'll start MongoDB and verify that it's operational.

Step 2: Launch the MongoDB Service and Test the Database

The previous step's installation automatically configures MongoDB to run as a daemon controlled by systemd, which means you can manage MongoDB using the various systemctl commands. This installation procedure, however, does not start the service automatically. To start the service, use the following systemctl command:

sudo systemctl start mongod.service

Then, check the status of the service. Note that this command omits .service from the service file's name; systemctl will automatically append this suffix to whatever argument you pass, so it's not necessary to include it:

sudo systemctl status mongod

This command will produce the following output, indicating that the service is operational:

Output

● mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
     Active: active (running) since Tue 2020-06-09 12:57:06 UTC; 2s ago
       Docs: https://docs.mongodb.org/manual
   Main PID: 37128 (mongod)
     Memory: 64.8M
     CGroup: /system.slice/mongod.service
             └─37128 /usr/bin/mongod --config /etc/mongod.conf

After confirming that the service is running as expected, enable the MongoDB service to start automatically at boot:

sudo systemctl enable mongod

You can confirm further that the database is operational by connecting to the database server and running a diagnostic command. The following command will connect to the database and output its current version, server address, and port. It will also return the result of MongoDB's internal connectionStatus command:

mongo --eval 'db.runCommand({ connectionStatus: 1 })'

connectionStatus will check and return the status of the database connection.

A value of 1 in the response's ok field indicates that the server is functioning normally:

Output
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("1dc7d67a-0af5-4394-b9c4-8a6db3ff7e64") }
MongoDB server version: 4.4.0
{
    "authInfo" : {
        "authenticatedUsers" : [ ],
        "authenticatedUserRoles" : [ ]
    },
    "ok" : 1
}

Also, keep in mind that the database is running on port 27017 on 127.0.0.1, which is the local loopback address for localhost. Following that, we'll look at how to use systemd to manage the MongoDB server instance.
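As an extra, optional sanity check that is not part of the original steps, you could insert and read back a test document with the mongo shell; testCollection here is a hypothetical collection name in the default test database:

mongo --eval 'db.testCollection.insertOne({ greeting: "hello" }); printjson(db.testCollection.findOne())'

You can remove the test data again afterwards with db.testCollection.drop().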

Step 3: Managing the MongoDB Service

You can manage the MongoDB service using standard systemctl commands, just as you would other Ubuntu system services.

The systemctl status command, as previously mentioned, checks the status of the MongoDB service:

sudo systemctl status mongod
  • You can stop the service at any time by typing:
sudo systemctl stop mongod
  • To restart the service after it has been stopped, type:
sudo systemctl start mongod
  • You can also restart the server while it is already running:
sudo systemctl restart mongod

In Step 2, you enabled MongoDB to start with the server automatically. If you ever want to turn off the automatic startup, type:

sudo systemctl disable mongod

Then, to re-enable it to start up at boot, run the enable command again:

sudo systemctl enable mongod

Systemd Essentials: Working with Services, Units, and the Journal contains more information on how to manage systemd services.

Conclusion

In this blog, you added the official MongoDB repository to your APT instance and installed the latest version of MongoDB. You then practised some systemctl commands and tested Mongo's functionality.

If you have any questions about installing MongoDB on Ubuntu 20.04, please do not hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is a leading web hosting company with a variety of useful tools. We will help you expand your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

In this blog, we will look at how to enable remote access to a MySQL database on an Ubuntu machine.

  • In the MySQL Configuration file, allow connections from clients other than localhost.

In the configuration file, allow MySQL connections from other clients. The MySQL configuration file will be located in the /etc/mysql/mysql.conf.d directory and will be named mysqld.cnf. MySQL is configured by default to accept connections only from localhost, i.e. 127.0.0.1; we must change this to 0.0.0.0 to allow connections from other clients.

Change,

bind-address = 127.0.0.1

to

bind-address = 0.0.0.0
  • In the Ubuntu Machine's firewall, whitelist the client’s IP address.

The Ubuntu machine includes the Ubuntu Firewall, which by default does not permit incoming connections to MySQL port 3306. As a result, we must open the port for the client's specific IP address or, if your client does not have a fixed IP address, for all IP addresses.

Assume your client has the IP address 50.75.120.81. Running the following command on your terminal will allow incoming connections to port 3306 from a client with the IP address 50.75.120.81:

sudo ufw allow from 50.75.120.81 to any port 3306

If your client does not have a fixed IP address or if you need to allow all IP addresses (not recommended as anyone can attempt to connect to 3306),

sudo ufw allow 3306
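Opening the firewall port is only half of the picture: after changing bind-address you also have to restart MySQL, and MySQL itself must have a user that is allowed to connect from the remote host. A minimal sketch, using the client IP from above and hypothetical names (remote_user, example_db, and the password are placeholders, not values from this article):

sudo systemctl restart mysql
sudo mysql -u root -p

Then, inside the MySQL shell:

CREATE USER 'remote_user'@'50.75.120.81' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON example_db.* TO 'remote_user'@'50.75.120.81';
FLUSH PRIVILEGES;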

If you have any questions about how to enable remote MySQL database access on an Ubuntu machine, please do not hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is a fantastic web hosting service provider with an array of powerful tools. We will assist you in growing your business.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Cloud hosting for enterprise-level deployments in today's competitive business environment necessitates a highly scalable storage solution to streamline and manage critical business data.

With technology and best practices rapidly moving to cloud-based services in order to keep up with growing businesses, Ceph emerged as a solution to meet the need for a software storage solution that supports a very sustainable growth model.

In this blog, we will go over various aspects of Ceph storage and how it can meet the demanding storage requirements of businesses.

What exactly is Ceph Storage?

Red Hat Ceph is open-source software that aims to provide highly scalable object, block, and file-based storage in a single system. Ceph is a powerful storage solution that uses its own Ceph file system (CephFS) and is self-managing and self-healing. It is capable of dealing with outages on its own and is constantly working to reduce administrative costs.

Another advantage of Ceph storage is that it is highly fault-tolerant and seamlessly replicates data. This means that there are no bottlenecks in the process while Ceph is running.

There have been more than 15 Ceph releases since its initial release, with Red Hat recently announcing a major update as Red Hat Storage 4, which brings an array of improvements in monitoring, scalability, management, and security, making it easier for enterprises to get started with Ceph.

Ceph's most recent features include:

  • High scalability
  • Open-source
  • High reliability via distributed data storage
  • Robust data security via redundant storage
  • Benefits of continuous memory allocation
  • Convenient software-based increase in availability via an integrated data-location algorithm

Understanding How Ceph Block Storage Works

Ceph's primary storage unit is the Ceph block device, a virtual disk that can be attached to virtual machines or bare-metal Linux-based servers. RADOS (Reliable Autonomic Distributed Object Store) is a key component of Ceph that provides powerful block storage capabilities such as replication and snapshots, and it can be integrated with OpenStack Block Storage.

Ceph also provides CephFS, a robust, POSIX-compliant (Portable Operating System Interface) file system for storing data in its storage clusters. The file system has the advantage of using the same clustered system as Ceph block storage and object storage to store massive amounts of data.

The architecture of Ceph Storage

Ceph requires several computers to be linked together in what is known as a cluster. Each of these networked computers is referred to as a node.

The following are some of the tasks that must be distributed among the network's nodes:

  • Monitor nodes (ceph-mon): These cluster monitors are primarily responsible for monitoring the status of individual cluster nodes, particularly object storage devices, managers, and metadata servers. It is recommended that at least three monitor nodes be used to ensure maximum reliability.

  • Object Storage Devices (ceph-osd): Ceph OSDs are background daemons that manage the actual data and are in charge of storage, replication, and data restoration. It is recommended that a cluster have at least three OSDs.

  • Managers (ceph-mgr): They collaborate with ceph monitors to manage the status of system load, storage usage, and node capacity.

  • Metadata servers (ceph-mds): They store metadata such as file names, storage paths, and timestamps of CephFS files, for a variety of performance reasons.
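On a running cluster, the state of these daemons and the overall data distribution can be inspected with a few standard Ceph commands; this is only a sketch and assumes you run them on a node with an admin keyring:

ceph -s
ceph osd tree
ceph df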

The heart of Ceph data storage is an algorithm called CRUSH (Controlled Replication Under Scalable Hashing), which uses the CRUSH Map—an allocation table—to locate an OSD with the requested file. CRUSH selects the best storage location based on predefined criteria, determines which files are duplicated, and then saves them on physically separate media. The relevant criteria can be set by the network administrator.

RADOS, a completely reliable, distributed object store composed of self-mapping, intelligent storage nodes, serves as the foundation of the Ceph data storage architecture.

Some of the methods for accessing Ceph-stored data include:

  • radosgw: Using the HTTP Internet protocol, data can be read or written in this gateway.
  • librados: Native access to stored data is possible via APIs in programming and scripting languages such as Python, Java, C/C++, and PHP when using the librados software libraries.
  • RADOS Block Device: Access here requires either a virtualization system such as QEMU/KVM or block storage via a kernel module.
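As a small illustration of the RADOS layer that all of these interfaces build on, the rados command-line tool can write and read an object directly; mypool is a hypothetical pool name, not one from this article:

echo "hello ceph" > hello.txt
rados -p mypool put hello-object hello.txt
rados -p mypool get hello-object /tmp/hello-copy.txt
rados -p mypool ls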

Ceph Storage Capability

Ceph adds a number of advantages to OpenStack-based private clouds. Here are a few examples to help you better understand Ceph storage performance.

  • High availability and improved performance

Ceph's erasure coding feature vastly improves data availability by adding resiliency and durability. At times, write speeds can be nearly twice as fast as with the previous backend.

  • Strong security

Active directory integration, encryption features, LDAP, and other features in place with Ceph can help to limit unauthorized access to the system.

  • Adoption without a hitch

Making the switch to software-defined storage platforms can be difficult at times. Ceph solves the problem by allowing block and object storage in the same cluster without requiring you to manage separate storage services via other APIs.

  • Cost-effectiveness

Ceph runs on commodity hardware, making it a low-cost solution that does not require any expensive or additional hardware.

Ceph Block and Ceph Object Storage Use Cases

Ceph was designed primarily to run smoothly on general-purpose server hardware. It supports elastic provisioning, making petabyte-to-exabyte scale data clusters economically feasible to build and maintain.

Unlike other mass storage systems that are great at storage but quickly run out of throughput or IOPS before they run out of capacity, Ceph scales performance and capacity independently, allowing it to support a variety of deployments optimized for a specific use case.

The following are some of the most common use cases for Ceph Block & Object Storage:

  • Ceph Block use cases

– Deploy elastic block storage with on-premise cloud

– Storage for VM disc volumes that run smoothly

– SharePoint, Skype, and other collaboration applications storage

– Primary storage for MySQL and other similar SQL database apps

– Dev/Test Systems storage

– IT management apps storage

  • Ceph Object Storage Use Cases

– Snapshots of VM disc volumes

– Video/audio/image repositories

– ISO image storage and repositories

– Archive and backup

– Deploy Dropbox-like services within the enterprise

– Deploy Amazon S3-like object store services with on-premise cloud

The Benefits and Drawbacks of Ceph Storage

While Ceph storage is a good option in many situations, it does have some drawbacks. In this section, we'll go over both of them.

Advantages

– Despite its short development history, Ceph is a free and well-established storage method.

– The application is extensively and well documented by its developers. There is a wealth of useful information available online for Ceph setup and maintenance.

– Ceph storage's scalability and integrated redundancy ensure data security and network flexibility.

– Ceph's CRUSH algorithm ensures high availability.

Disadvantages

– Due to the variety of components provided, a comprehensive network is required to fully utilize all of Ceph's functionalities.

– The installation of Ceph storage takes some time, and the user is not always sure where the data is physically stored.

– It necessitates more engineering oversight to implement and manage.

Ceph Storage vs AWS S3: Key Differences and Features

In this section, we will compare two popular object stores: AWS S3 and Ceph Object Gateway (RadosGW). We'll keep the focus on the similarities and some of the key differences.

While Amazon S3 (released in 2006) is primarily an AWS public object store that guarantees 99.9 percent object availability, Ceph storage is open-source software that provides distributed object, block, and file storage.

The Ceph Object Gateway daemon (released in 2006) operates under the LGPL 2.1 license and provides two sets of APIs:

  • One that is compatible with a subset of the Amazon S3 RESTful APIs.

  • One that is compatible with a subset of the OpenStack Swift API.

One key distinction between Amazon S3 and Ceph Storage is that, whereas Amazon S3 is a proprietary solution available only on Amazon's commercial public cloud (AWS), Ceph is an open-source product that can be easily installed on-premises as part of a private cloud.

Another distinction between the two is that Ceph offers strong consistency, which means that new objects and changes to existing objects are guaranteed to be visible to all clients. Amazon S3, on the other hand, provides read-after-write consistency when creating new objects and eventual consistency when updating and deleting objects.

Why is Ceph Storage insufficient for Modern Workloads?

While there is no denying that Ceph storage is highly scalable and a one-size-fits-all solution, it does have some architectural flaws, primarily because it was not designed for today's fast storage media–NAND flash and NVMe® flash.

For the following reasons, Ceph storage is unsuitable for modern workloads:

  • Enterprises that use the public cloud, their own private cloud, or are transitioning to modern applications require low latency and consistent response times. While BlueStore (a back-end object store for Ceph OSDs) can help to improve average and tail latency, it cannot always take advantage of the benefits of NVMe® flash.
  • To achieve the best possible performance, modern workloads typically deploy local flash (local NVMe® flash) on bare metal, and Ceph is not equipped to realize the optimized performance of this new media. In fact, Ceph can be an order of magnitude slower than a local flash in a Kubernetes environment where local flash is recommended.

  • Ceph has a low flash utilization rate (15-25 percent). In the event of a Ceph or host failure, the rebuild time for shared storage can be extremely slow, due to massive traffic flowing over the network for an extended period of time.

Conclusion

Choosing the right storage platform is becoming increasingly important as data takes center stage in almost every business. Ceph storage is intended to increase the accessibility of your data to you and your business applications.

Despite being a good choice for applications that don't need more than spinning-drive performance, Ceph has architectural flaws that make it unsuitable for high-performance, scale-out databases and other similar web-scale software infrastructure solutions.

If you have any doubts about Ceph storage, do not hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

What is Ceph?

Ceph is a compelling open-source alternative to traditional vendors' proprietary software-defined storage solutions, with a thriving community working on the technology. Ubuntu was a strong supporter of Ceph and its community from the beginning. Canonical remains a premier member and serves on the Ceph Foundation's governing board.

Many global enterprises and telco operators are running Ceph on Ubuntu, allowing them to combine block and object storage at scale while reaping the economic and upstream benefits of open source.

Why use Ceph?

Ceph is unique in that it provides data in three ways: as a POSIX-compliant filesystem via CephFS, as block storage volumes via the RBD driver, and as an object store compatible with both the S3 and Swift protocols via the RADOS gateway.

Ceph is often used to provide block and object storage to OpenStack clouds via Cinder and as a Swift replacement. Ceph has also been embraced by Kubernetes as a popular way to provide persistent volumes (PV) via a Container Storage Interface (CSI) plugin.

Even as a stand-alone solution, Ceph is a compelling open-source storage alternative to closed-source, proprietary solutions because it reduces the OpEx costs that organizations typically incur with storage due to licensing, upgrades, and potential vendor lock-in fees.

How does Ceph work?

Ceph stores data in pools that users or other services can access to deliver block, file, or object storage. Each of these mechanisms is backed by a Ceph pool. Replication, access rights, and other internal characteristics (such as data placement, ownership, and access, among others) are expressed per pool.
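As a rough sketch of how a pool is created and tagged for a particular use, the commands below create a pool and enable it for RBD; the pool name and placement-group count are illustrative assumptions, not values from this article:

ceph osd pool create mypool 64
ceph osd pool application enable mypool rbd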

The Ceph Monitors (MONs) are in charge of keeping the cluster in good working order. They use the CRUSH map to manage data location. They run in a cluster with quorum-based HA, and data is stored and retrieved via Object Storage Devices (OSDs).

A storage device and a running OSD daemon process are mapped 1:1. OSDs make extensive use of the CPU and RAM of the cluster member host. This is why, when designing a Ceph cluster, it is critical to carefully balance the number of OSDs with the number of CPU cores and the amount of memory. This is particularly true when trying to achieve a hyper-converged architecture.

Using LXD as a container hypervisor helps properly enforce resource limits on most processes running on a given node. LXD is used extensively to separate out the Ceph MONs and deliver the best economics in Canonical's Charmed OpenStack distribution. It is not currently recommended to containerize the Ceph OSDs.

Ceph storage mechanisms

The mechanism used to access the data is chosen per pool. One pool, for example, may be used to store block volumes, while another serves as the storage backend for object storage or filesystems. In the case of volumes, the host attempting to mount the volume must first load the RBD kernel module, after which Ceph volumes can be mounted in the same way that local volumes are.

Object buckets are not normally mounted – client-side applications can use overlay filesystems to simulate a 'drive,' but it is not an actual volume that is being mounted. Instead, the RADOS Gateway provides access to object buckets. To access objects using the S3 or Swift protocols, RADOSGW provides a REST API. CephFS is used to create and format filesystems, which are then exported and made available to local networks in the same way that NFS mounts are.
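For the volume case described above, a minimal sketch of creating, mapping, and mounting an RBD image could look like the following; the pool name, image name, and mount point are assumptions, and rbd map prints the actual device name (usually /dev/rbd0 on a fresh host):

sudo modprobe rbd
rbd create mypool/myvolume --size 10240
sudo rbd map mypool/myvolume
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/ceph-volume
sudo mount /dev/rbd0 /mnt/ceph-volume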

Volume and object store use cases have been in large-scale production for quite some time. The use of Ceph to combine volume and object storage provides numerous advantages to operators.

Ceph storage support with Canonical

Canonical offers Ceph support as part of Ubuntu Advantage for Infrastructure, with Standard and Advanced SLAs, which correspond to business hours and 24x7 support, respectively. Each covered node supports up to 48TB of raw storage in a Ceph cluster.

This coverage is based on our reference hardware recommendation for OpenStack and Kubernetes in a hyper-converged architecture, at an optimal price-per-TB range while retaining the best performance across compute and network in an on-premise cloud. If the node-to-TB ratio does not correspond with our recommendation and exceeds this limit, Canonical offers per-TB pricing in addition, to accommodate scale-out storage customers.

Ceph is available in the Ubuntu main repository, and as such, users receive free security updates for up to five years if they use an LTS version. Beyond the standard support cycle, an additional five years of paid commercial support are available.

If you have any doubts regarding the above topic of Ceph storage on Ubuntu, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is one of the top web hosting service providers, offering an array of the most effective tools. We are ready to help you support your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Automatic Virtual Machine Activation (AVMA) acts as a proof-of-purchase mechanism, allowing you to confirm that Windows products are used in accordance with the Product Use Rights and Microsoft Software License Terms.

AVMA lets you activate Windows Server virtual machines on a Windows Server Hyper-V host that is properly activated, even in disconnected environments. AVMA binds the virtual machine activation to the licensed virtualization host and activates the virtual machine when it starts up. Reporting and tracking data is available on the virtualization host.

Practical applications

On virtualization hosts, AVMA offers several benefits.

Server data center administrators can utilize AVMA to do the following:

  • Activate virtual machines in remote locations
  • Activate virtual machines with or without an internet connection
  • Track virtual machine usage and licenses from the virtualization host, without requiring any access rights to the virtualized systems

Service Provider License Agreement partners and other hosting providers do not have to share product keys with tenants or access a tenant's virtual machine to activate it. Virtual machine activation is transparent to the tenant when AVMA is used. Hosting providers can use the server logs to verify license compliance and to track client usage history.

System requirements

The virtualization host that will run the virtual machines must be activated. Keys can be obtained through the Volume Licensing Service Center or your OEM provider.

AVMA requires Windows Server Datacenter edition with the Hyper-V host role installed. The operating system version of the Hyper-V host determines which versions of the operating system can be activated in a virtual machine. Here are the guests that hosts of each version can activate:

How to implement AVMA?

To activate VMs with AVMA, you use a generic AVMA key that corresponds to the version of Windows Server that you want to activate. To create a VM and activate it with an AVMA key, do the following:

  • On the server that will host the virtual machines, install and configure the Microsoft Hyper-V Server role. For more details, see Install Hyper-V Server. Make sure that the server is successfully activated.
  • Create a virtual machine and install a supported Windows Server operating system on it.
  • Once Windows Server is installed on the VM, install the AVMA key in the VM. From PowerShell or an elevated Command Prompt, run the following command:
slmgr /ipk 
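After the generic AVMA key is installed, activation can be verified from inside the guest with standard slmgr options; these verification commands are an addition to the original steps, not part of them:

slmgr /ato
slmgr /dlv

slmgr /ato attempts activation immediately, and slmgr /dlv displays detailed license information, including the activation status.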

Reporting and tracking

The Key-Value Pair exchange between the virtualization host and the VM provides real-time tracking data for the guest operating systems, including activation data. This activation data is stored in the Windows registry of the virtual machine. Historical data about AVMA requests is logged in Event Viewer on the virtualization host.

If you have any doubts regarding automatic virtual machine activation in Windows Server, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is a leading web hosting service provider that provides an array of the most powerful tools. We are going to help you to reinforce your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Most top-of-the-range systems nowadays come with the Windows 10 operating system pre-installed. However, if you've built your own PC with a graphics card and processor, or have upgraded your computer for more speed, you may need to install and activate Windows 10 yourself.

Windows 10 can be activated either with a product key or, if you have previously activated a Windows 10 license and linked it to your Microsoft account, with the digital license on the same system.

Follow these easy steps to activate Windows 10, either with a product key or with a linked digital license.

What are the steps to activate Windows 10 with a product key?

  • For installation of Windows 10, first, enter your product license key.
  • Select the Windows key, go to Settings > Update and Security > Activation.
  • Select the Change Product key.
  • Enter your product key into the pop-up box and select Next
  • Choose Activate.

What are the steps to activate Windows 10 with a digital license?

  • While starting activation, choose the “I do not have a product key” option.
  • Set up and log in to Windows 10 with your connected Microsoft account.

Windows 10 will be activated automatically at this point. If you have made hardware changes, follow the steps below:

  • Choose the Windows key, then go to Settings > Update and Security > Activation.
  • If Windows is not activated, find and select 'Troubleshoot'.
  • Choose 'Activate Windows' in the new window and then Activate. Or, select “I changed hardware on this device,” if applicable.
  • If you get sign-in prompts, follow them using the Microsoft account linked to your digital license.
  • Choose the device you are using and check 'This is the device I am using right now' next to it.
  • Select Activate.

If you have any doubts regarding the installation and activation of Windows 10, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is an excellent internet hosting service provider with an array of powerful tools. We will assist you in reinforcing your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

In this blog, we are looking at how to extend a Linux disk on VMware or Hyper-V for CentOS/Red Hat on the fly, live.

Check whether you can expand the current disk or need to add a new one.

This is a rather important step, because a disk that has already been partitioned into 4 primary partitions cannot be extended anymore. To check this, log into your server and run fdisk -l at the command line.

# fdisk -l

Disk /dev/sda: 187.9 GB, 187904819200 bytes
255 heads, 63 sectors/track, 22844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot     Start       End     Blocks   Id  System
/dev/sda1   *           1       25      200781   83  Linux
/dev/sda2              26       2636    20972857+  8e  Linux LVM

If it looks like that, with only 2 partitions, you can safely expand the current hard disk in the Virtual Machine.

However, if it looks like this:

~# fdisk -l

Disk /dev/sda: 187.9 GB, 187904819200 bytes
255 heads, 63 sectors/track, 22844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot     Start       End     Blocks   Id  System
/dev/sda1   *           1       25      200781   83  Linux
/dev/sda2              26       2636    20972857+  8e  Linux LVM
/dev/sda3            2637       19581   136110712+  8e  Linux LVM
/dev/sda4           19582       22844   26210047+  8e  Linux LVM

This shows that there are already 4 primary partitions on the system, and you will have to add a new Virtual Disk to your Virtual Machine. You can still use that extra Virtual Disk to increase your LVM size, so don't worry.

The “hardware” part, “physically” adding disk space to your VM

Do that in your VMware / ESXi / Hyper-V console. Is the option to expand greyed out? Then add a new disk, or shut down the VM and extend the disk while it is powered off.

Partitioning the unallocated space: if you've increased the disk size

Once you’ve modified the disk’s size in VMware, bounce up your VM too if you had to shut it down to expand the disk size in vSphere. If you’ve rebooted the server, you won’t have to rescan your SCSI machines as that occurs on boot. If you did not reboot your server, rescan your SCSI systems as such.

First, check the name of your scsi devices.

$ ls /sys/class/scsi_device/
0:0:0:0 1:0:0:0  2:0:0:0

Then rescan the SCSI bus. Below, replace the '0:0:0:0' with the actual SCSI bus name found with the previous command. Each colon is prefixed with a backslash, which is what makes it look weird.

$ echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan

Partitioning the unallocated space: if you’ve included a new disk

If you’ve included a new disk on the server, the activities are the same to those described above. Either of rescanning an already existing scsi bus like shown earlier, you have to rescan the server to detect the new scsi bus as you’ve included a new disk.

$ ls  /sys/class/scsi_host/
total 0
drwxr-xr-x  3 root root 0 Feb 13 02:55 .
drwxr-xr-x 39 root root 0 Feb 13 02:57 ..
drwxr-xr-x  2 root root 0 Feb 13 02:57 host0

In this example, the SCSI host is called 'host0'.

$ echo "- - -" > /sys/class/scsi_host/host0/scan

It won’t show any output, but running ‘fdisk -l’ will display the new disk.

Create the new partition

Once the rescan is done, you can check whether the extra space is visible on the disk.

~$  fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot     Start       End     Blocks   Id  System
/dev/sda1   *           1       13      104391   83  Linux
/dev/sda2           14      391     3036285   8e  Linux LVM

So the host can now see the 10GB hard disk.

~$  fdisk /dev/sda

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n

Now type ‘n’, to build a new partition.

Command action
e   extended
p   primary partition (1-4)
 p

Now select “p” to create a further direct partition.

Partition number (1-4): 3

Choose your partition number. Since the disk already has /dev/sda1 and /dev/sda2, the logical number is 3.

First cylinder (392-1305, default 392): 
Using default value 392
Last cylinder or +size or +sizeM or +sizeK (392-1305, default 1305): 
Using default value 1305

The cylinder values will vary depending on your system. It should be safe to just press Enter, as fdisk will supply a default value for the first and last cylinder.

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)
Command (m for help): w

Once you are back at the main command prompt within fdisk, type w to write your partitions to disk. You'll get a warning that the kernel is still using the old partition table and that you should reboot to use the new table. The reboot is not required, as you can also rescan for those partitions using partprobe. Run the following to scan for the newly created partition.

~$ partprobe -s

If that does not work for you, you can try to use "partx" to rescan the device and add the new partitions. In the command below, change /dev/sda to the disk on which you've just added a new partition.

~$ partx -v -a /dev/sda

If that still does not show the newly created partition for you to use, you will have to reboot the server.

~$  fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot     Start       End     Blocks   Id  System
/dev/sda1   *           1       13      104391   83  Linux
/dev/sda2           14      391     3036285   8e  Linux LVM
/dev/sda3           392     1305    7341705   8e  Linux LVM

Extend your Logical Volume with the new partition.

Now, create the physical volume as a basis for your LVM. Replace /dev/sda3 with the newly created partition.

~$  pvcreate /dev/sda3
Physical volume "/dev/sda3" successfully created

Now find out what your Volume Group is called.

~$  vgdisplay
--- Volume group ---
VG Name             VolGroup00
...

Let's extend that Volume Group by adding the newly created physical volume to it.

$  vgextend VolGroup00 /dev/sda3
Volume group "VolGroup00" successfully extended
~$  pvscan
PV /dev/sda2   VG VolGroup00   lvm2 [2.88 GB / 0    free]
PV /dev/sda3   VG VolGroup00   lvm2 [7.00 GB / 7.00 GB free]
Total: 2 [9.88 GB] / in use: 2 [9.88 GB] / in no VG: 0 [0   ]

Now we can extend the Logical Volume.

~$  lvextend /dev/VolGroup00/LogVol00 /dev/sda3
Extending logical volume LogVol00 to 9.38 GB
Logical volume LogVol00 successfully resized

If you're doing this on Ubuntu, use the following (vg-name is the name of your volume group).

~$  lvextend /dev/mapper/vg-name /dev/sda3

All that remains now is to resize the filesystem on the volume group so we can use the extra space. Substitute the correct /dev device path if you're on Ubuntu/Debian systems.

~$  resize2fs /dev/VolGroup00/LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/LogVol00 to 2457600 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 2457600 blocks long.
On systems where the filesystem is XFS (such as a default CentOS 7 install), resize2fs will fail with an error like this:

$ resize2fs /dev/mapper/centos_sql01-root
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos_sql01-root
Couldn't find valid filesystem superblock.

In that case, you'll need to grow the XFS filesystem with xfs_growfs instead of resize2fs.
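A minimal sketch for that case, assuming the XFS filesystem from the error above is mounted at /:

sudo xfs_growfs /

xfs_growfs takes the mount point of the filesystem to grow and expands it to fill the underlying logical volume.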

~$  df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 9.1G 1.8G  6.9G  21% /
/dev/sda1           99M   18M   77M  19% /boot
tmpfs               125M    0  125M   0% /dev/shm

If you have any doubts about the above topic, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is an excellent web hosting service that offers an array of powerful tools. We will help you to enhance your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Deploying to a DigitalOcean Droplet


This blog steps through how to deploy and host your Gatsby site on a DigitalOcean Droplet with Ubuntu and Nginx.

DigitalOcean provides a cloud platform to deploy, manage, and scale applications of any size, removing infrastructure friction and providing predictability so developers and their teams can spend more time building better software.

DigitalOcean's compute Droplets are scalable compute IaaS, or a VPS in the cloud, with excellent reliability and scalability. They come in various price ranges suitable for anything from small apps to giant enterprise-level apps.

They let you choose from different Unix-based distributions and set up your technology platform with preinstalled prerequisites from their marketplace. This blog will step through the exact options that work best for deploying a Gatsby site with DigitalOcean.

Prerequisites

  • A Gatsby site in a Git repository
  • A DigitalOcean Droplet configured with a non-root user in the sudo group
  • A custom domain name for your Gatsby site, to be used when configuring HTTPS

How to deploy the Gatsby site to DigitalOcean

Install Node.js, npm and Gatsby-CLI

Log in to your droplet as a non-root user.

Install Node.js

sudo apt-get update
sudo apt-get install nodejs

Install npm

sudo apt-get install npm

Run the code

nodejs -v
npm -v

To install the latest stable Node.js release using the n package, run:

sudo npm install -g n
sudo n stable
hash nodejs
hash npm

Install the Gatsby CLI globally. This will be useful later for building the Gatsby site for production.

sudo npm install -g gatsby-cli

Clone your repository to the droplet

The next step is to clone the repository containing your Gatsby app:

git clone <your-github-repo-site>

Copy the path where your <your-github-repo-site> is cloned, for future reference.

pwd

If you see an error related to "Permission denied", check whether <your non-root user> has sudo rights. Or, before cloning your repository, change permissions so that <your non-root user> can access the .config directory under /home/<your non-root user>/:

cd ~/
sudo chown -R $(whoami) .config

Generate your Gatsby site for production

The static files will be served publicly from the droplet. The gatsby build command builds the site and places the static files in /public.

Go to the path where <my-gatsby-app> is located. You can use the path you copied in a previous step for reference.

cd <my-gatsby-app>
sudo npm install

Run build to generate static files.

sudo gatsby build

Install Nginx Web Server and open firewall to accept HTTP and HTTPS requests

To host a website or static files on a Linux-based server, a web server like Apache or Nginx is needed.

Nginx is a web server. It provides the infrastructure for handling client requests from the web, along with features such as load balancing, mail proxying, and HTTP caching.

Install Nginx.

sudo apt-get install nginx

Configure the droplet's firewall rules to listen for HTTP and HTTPS requests on ports 80 and 443.

sudo ufw allow 'Nginx HTTP'
sudo ufw allow 'Nginx HTTPS'

To verify the available application profiles:

sudo ufw app list

If the ufw status is disabled, you can enable it with the following command:

sudo ufw enable

Allow OpenSSH if you haven't already, so that you don't get disconnected from your droplet.

sudo ufw allow 'OpenSSH'

Configure Nginx to point to your Gatsby site’s directory and include your domain

Change the root configuration in Nginx's default server block

Go to /etc/nginx/sites-available/:

cd /etc/nginx/sites-available/

Open the default file in Vim:

sudo vim default

Edit the file and make the following changes to the fields listed below, leaving the rest of the fields as they are. Your actual path may vary, but it will look something like /home/<your non-root user>/<my-gatsby-app>/public.

default

server {
  root /home/<your non-root user>/<my-gatsby-app>/public;
  index index.html index.htm index.nginx-debian.html;
  server_name <your-domain>;
  location / {
    try_files $uri $uri/ =404;
  }
}
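Before restarting, it can be worth validating the edited configuration; this extra check is not part of the original steps:

sudo nginx -t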

Restart Nginx:

sudo systemctl restart nginx

You should now be able to see your built Gatsby site at your DigitalOcean IP address, before configuring a domain. Go to the Advanced DNS settings in your domain provider's console and add an A record that points to the IP address of the droplet.
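Once the A record has been added, you can optionally confirm that it has propagated before moving on to HTTPS; this check assumes the dig utility is available and uses <your-domain> as a placeholder:

dig +short <your-domain>

The command should print the droplet's IP address.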

Configuring HTTPS for your Gatsby site

Follow the steps below to configure your site with a free SSL/TLS certificate from Let's Encrypt using their Certbot CLI tool. You'll need to install Certbot onto your droplet using snapd.

sudo snap install core; sudo snap refresh core
sudo apt-get remove certbot

Execute the below command to install Certbot.

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot

Generate the certificate. Certbot will automatically configure the Nginx config file and point it to the certificate file. Run the following command:

sudo certbot --nginx

If you are running Certbot for the first time on this droplet, you will be prompted to enter your email address.

Agree to the license agreement when prompted. Then restart the Nginx service.

sudo systemctl restart nginx
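Let's Encrypt certificates are valid for 90 days, and the Certbot package sets up automatic renewal for you. As an optional check that is not part of the original steps, you can simulate a renewal with a dry run:

sudo certbot renew --dry-run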

Now, you can access your site at <your-domain> over a secure connection.

View your Gatsby site live

Once you've followed all the steps and configuration correctly, you can visit the site live at <your-domain>.

Whenever there's an update to your site, run sudo gatsby build in the root of your <my-gatsby-app>.

You’ve deployed your Gatsby App on a DigitalOcean droplet along with HTTPS for it.

If you have any doubts about the above topic, don't hesitate to contact us through the email below. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Researchers at Recorded Future and MalwareHunterTeam have uncovered a new, highly sophisticated ransomware strain called ALPHV (aka BlackCat), written in the Rust programming language.

What has happened?

ALPHV is one of the first professional ransomware crews to use Rust. This dangerous threat targets Windows, Linux, and VMware ESXi systems.

  • Researchers claim that the author of BlackCat ransomware was previously involved in REvil ransomware operations.
  • ALPHV was spotted being offered as RaaS on two cybercrime forums, Exploit and XSS.
  • The threat group uses a double extortion model.
  • It is recruiting affiliates and offering up to 80%–90% of the ransom, based on the target value.

The targets

So far, the ransomware operators have claimed a few victims in the U.S., India, and Australia. The ransom demands range from a few hundred thousand dollars up to $3 million worth of Bitcoin/Monero.

Additional insights

At present, the ALPHV ransomware group employs more than one leak site, with each site hosting data of only one or two victims.

  • It is thought that these leak sites may be hosted by different ALPHV affiliates, which explains the use of different leak URLs.
  • The exact initial entry vector is unknown. The attackers focus on stealing sensitive files and encrypting systems.

Conclusion

BlackCat is the first professional ransomware strain to use Rust and is a powerful threat. With its double extortion capabilities, professionals believe that BlackCat could be a worthy successor to DarkSide and REvil. While the group is still in the early stages of its growth, given its aggressive nature, companies ought to be aware of the threat and implement proper defenses.

If you have any doubts about ALPHV (aka BlackCat), don't hesitate to contact us through the email below. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

One year after the disruptive supply-chain attacks, researchers have observed two new clusters of activity from Russia-based actors that signal substantial trouble may be brewing.

One year after the infamous and far-reaching SolarWinds supply-chain attacks, their authors are on the move again. Researchers said they've caught the threat group – which Microsoft refers to as "Nobelium" and which is linked to Russia's intelligence agency – compromising multinational business and government targets with new tactics and custom malware, stealing data and moving laterally across networks.

Researchers from Mandiant have identified two distinct clusters of activity that can be "plausibly" attributed to the threat group, which they track as UNC2452, they said in a report published Monday.

Mandiant has tracked the latest activity as UNC3004 and UNC2652 since last year and throughout 2021, following the compromise of a range of companies that provide technology solutions, cloud services, and resellers, they said.


Indeed, resellers were the target of a campaign by Nobelium that Microsoft flagged in October, in which the group was caught using credential stuffing and phishing, as well as API abuse and token theft, to gather legitimate account credentials and privileged access to reseller networks. The ultimate goal of this campaign appeared to be to reach downstream customer networks, researchers said at the time.

Nobelium also engaged in credential theft in April, using a backdoor dubbed FoggyWeb to attack Active Directory servers, Microsoft announced in September. In the new clusters observed by Mandiant, stolen credentials also facilitated initial access to the targeted organizations. However, researchers believe the threat actors obtained the credentials from a third party's password-stealer malware campaign rather than one of their own, they said.

New Malware and Activity

The attackers have added a number of new tactics, techniques and procedures (TTPs) to bypass security restrictions within environments, including the extraction of virtual machines to determine internal routing configurations, researchers wrote. They also have new malware in their arsenal: a unique, custom downloader that researchers have called Ceeloader. The malware, which is heavily obfuscated, is written in C and can execute shellcode payloads directly in memory, they wrote. A Cobalt Strike beacon installs and runs Ceeloader, which itself does not have persistence and so can't execute automatically when Windows is started. The malware can bypass security protections, however, by mixing calls to the Windows API with large blocks of junk code, researchers said.

Other activity observed in the attacks includes the use of accounts with application impersonation rights to harvest sensitive email data, the use of residential IP proxy services and newly provisioned geo-located infrastructure to communicate with compromised victims, and abuse of multi-factor authentication by leveraging "push" notifications on smartphones, researchers said.

As with other Nobelium campaigns, the motive for the clusters appears to be cyber espionage, as the incidents show the actors targeting businesses to steal data "relevant to Russian interests," according to Mandiant. "In some instances, the data theft appears to be obtained primarily to create new routes to access other victim environments," researchers wrote.

Potential for Downstream Compromise

The so-called SolarWinds "Solorigate" threat that was discovered last December is now the stuff of legend. It became a cautionary tale for how fast and how far a cyberattack can spread through a global supply chain. In those incidents, which affected numerous organizations – including Microsoft and the Department of Homeland Security – Nobelium used a malicious binary called "Sunburst" as a backdoor into SolarWinds.Orion.Core.BusinessLayer.dll, a digitally signed SolarWinds component of the Orion software framework. The component is a plugin that communicates via HTTP with third-party servers, which allowed the attack to spread quickly.

There's a similar potential for widespread attack in the new clusters observed by Mandiant, researchers said. They tracked "multiple instances where the threat actor compromised service providers and used the privileged access and credentials belonging to these providers to compromise downstream customers," they said.

Attackers also used credentials that they appear to have obtained from the third-party password-stealer campaign to gain entry to an organization's Microsoft 365 environment via a stolen session token. Researchers identified the password-stealer CRYPTBOT on some of the systems shortly before the token was generated, researchers said. "Mandiant assesses with confidence that the threat actor obtained the session token from the operators of the password-stealer malware," researchers wrote. "These tokens were used by the actor via public VPN providers to authenticate to the target's Microsoft 365 environment."

MFA Push Abuse

One novel and rather innovative technique researchers observed Nobelium using in the attacks is the abuse of repeated MFA push notifications to gain access to corporate accounts, researchers wrote.

Many MFA providers allow users to accept a phone-app push notification, or to receive a phone call and press a key, as a second factor to authenticate access to an account.

Using a valid username and password combination, the investigators said, the attackers issued repeated MFA requests to an end user's legitimate device until the target accepted the authentication. This ultimately granted the threat actor access to the account, they said.

All in all, the new clusters show that Nobelium's capacity for dangerous threat activity appears to be increasing in both complexity and intensity, signaling the potential for another SolarWinds-style attack on the horizon, observed one security professional.

"Cyberwarfare is now absolutely a part of modern geopolitical dynamics, so we cannot expect these attacks to let up any time soon, especially from state-sponsored actors," noted Erich Kron, security awareness advocate at security firm KnowBe4, in an email to Threatpost. "These attacks will continue to escalate as methods improve and more resources are allocated to cyberwarfare."

If you have any doubts about the above topic, don't hesitate to contact us through the email below. Airzero Cloud will be your digital partner.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/