Airzero Cloud

Next Generation Cloud!

If you own a business, deciding between a cloud server and a dedicated server hosting solution is a critical decision. Many businesses used to start on a shared Linux server when they first learned about web hosting, and then they upgraded to dedicated servers to handle the increased web traffic.

Today, cloud hosting is the new paradigm, and businesses are no longer required to follow this traditional path, but can instead begin building a website on a managed cloud plan. Previously, it would have taken more time for the developer to build on a dedicated server independently, but thanks to new technology, this is no longer the case.

What is cloud hosting?

Cloud hosting is a method of hosting websites that distributes data across multiple machines. Users manage their data across the various servers in the cloud through virtual machines. A significant difference between a cloud server and a dedicated server is that cloud hosting uses the computing power and services of multiple machines.

How does cloud hosting work?

When comparing a cloud server to a dedicated server, it is necessary to first understand how cloud servers work. Airzero Cloud, the best cloud hosting service, uses a virtual server that uses cloud computing technology to distribute data among connected servers located in different areas.
It is critical to understand the distinctions between public, private, hybrid, and managed cloud hosting frameworks when it comes to web hosting. You should also understand how these services relate to the specific web hosting requirements of Kerala's small businesses and other start-up software companies. Even business website owners must be aware of the distinctions between software as a service, platform as a service, and infrastructure as a service plan.

Advantages of cloud hosting

When it comes to cloud hosting, there are numerous plans, platforms, and services to choose from. These are exclusive to the company and the programming team that created them for the market. One of the primary benefits of cloud hosting plans is that they include a preinstalled elastic web server that supports custom stack software. This hosting provides more RAM and CPU cores, allowing each site to scale to consume more resources as needed.

What is dedicated server hosting?

Dedicated server hosting supports a single client on a physical server. A single client may instead require a cluster of servers, which is referred to as a private cloud. This is based on virtual technology, with many dedicated servers contributing to the virtual location; only resources in that virtual space are delivered to the client.

How does dedicated server hosting work?

The traditional benefit of such servers is that system administrators can easily configure them for the precise level of web traffic required to support online operations. Website owners also need to provision dedicated servers with excess capacity to absorb traffic peaks, which provides better performance during periods of lower-than-peak activity.

Advantages of dedicated server hosting

When it comes to dedicated servers, the key advantage is customization. Most Kerala businesses must handle a high volume of traffic or run complex applications on a regular basis. Dedicated server hardware is required by web developers and programmers in order to create custom web server environments for complex application support. To build new applications or support legacy software, most developers will require dedicated server hardware that can be fully customized. The dedicated servers can be easily optimized to support high levels of web traffic for media, publishing, promotions, and other purposes.

Choosing the right server for your business

Dedicated servers were the traditional way to go, but cloud computing options are becoming increasingly competitive alternatives for developing small business software solutions at a low cost. All of this adds to the complexity of the cloud server vs. dedicated server debate.

You can also use a hybrid cloud, which may be a viable solution because it combines the benefits of both cloud and dedicated hosting.

It is up to you to select the best server. There is nothing wrong with either dedicated server hosting or cloud hosting, because each has advantages. With all of the points mentioned above, select the one that best suits your company's needs. If you have any doubts about the aforementioned issue, please do not hesitate to get in touch with us. Airzero Cloud will be your digital partner.

Email id: [email protected]

A security group controls inbound and outbound traffic for your EC2 instances by acting as a virtual firewall. When you launch an EC2 instance in a VPC, you can assign the instance up to five security groups. Security groups operate at the instance level rather than the subnet level. As a result, each instance in a VPC subnet can be assigned to a different set of security groups.

If you launch an instance using the Amazon EC2 API or a command-line tool without specifying a security group, the instance is assigned to the VPC's default security group. When you launch an instance through the Amazon EC2 console, you can create a new security group for the instance.

You add one set of rules to each security group to control inbound traffic to instances, and a separate set of rules to control outbound traffic. This section explains the fundamentals of security groups and their rules for your VPC.

Set up network ACLs with rules similar to your security groups to add an extra layer of security to your VPC. See Compare security groups and network ACLs for more information on the distinctions between the two.

Security group basics

Security groups have the following characteristics:

  • Allow rules can be specified, but not deny rules.
  • You can define different rules for inbound and outbound traffic.
  • You can use security group rules to filter traffic based on protocols and port numbers.
  • Security groups are stateful, which means that if you send a request from your instance, response traffic for that request is allowed to enter regardless of inbound security group rules. Regardless of outbound rules, responses to allowed inbound traffic are allowed to flow out.
  • When you create a security group for the first time, it has no inbound rules. As a result, until you add inbound rules to the security group, no inbound traffic from another host to your instance is allowed.
  • A security group includes an outbound rule by default that allows all outbound traffic. You can remove the rule and replace it with outbound rules that allow only specific outbound traffic. If your security group does not have any outbound rules, no outbound traffic from your instance is permitted.
  • There are limits to how many security groups you can create per VPC, how many rules you can add to each security group, and how many security groups you can associate with a network interface.
  • Instances in a security group cannot communicate with one another unless you add rules allowing the traffic (exception: the default security group has these rules by default).
  • Network interfaces are linked to security groups. You can change the security groups associated with an instance after it has been launched, which changes the security groups associated with the primary network interface (eth0). Any other network interface's security groups can also be specified or changed. When you create a network interface, it is automatically assigned to the VPC's default security group, unless you specify a different security group. See Elastic network interfaces for more information on network interfaces.
  • A security group can only be used within the VPC that you specify when you create it.

Your VPC's default security group

Your VPC comes with a default security group. If you do not specify a different security group when you launch the instance, we use the default security group.

You can modify the default security group's rules.

A default security group cannot be deleted. When you attempt to delete the default security group, you will receive the following error:

 Client.CannotDelete: the specified group: "sg-51530134" name: "default" cannot be deleted by a user.

If you have modified your security group's outbound rules, we do not automatically add an outbound rule for IPv6 traffic when you associate an IPv6 block with your VPC.

Security group rules

A security group's rules can be added or removed (also referred to as authorizing or revoking inbound or outbound access). A rule can be applied to either inbound (ingress) or outbound (egress) traffic. You can grant access to a specific CIDR range, another security group in your VPC, or a security group in a peer VPC (requires a VPC peering connection).

A security group's rules govern the inbound traffic allowed to reach the instances associated with the security group. The rules also govern the outbound traffic that is permitted to leave them.

Security group rules have the following characteristics:

  • Security groups are configured to allow all outbound traffic by default.
  • Security group rules are always permissive; rules that deny access cannot be created.
  • You can use security group rules to filter traffic based on protocols and port numbers.
  • Security groups are stateful, which means that if you send a request from your instance, the response traffic is allowed to flow in regardless of the inbound rules. This also implies that responses to permitted inbound traffic are permitted to flow out, regardless of the outbound rules.
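As a rough mental model, the stateful behavior can be sketched as a toy connection tracker. This is purely illustrative, an assumption for teaching; AWS does not expose or implement it this way:

```javascript
// Toy model of stateful rule evaluation. Illustrative only, not how
// AWS implements security groups internally.
function makeSecurityGroup(inboundRules) {
  const tracked = new Set(); // remotes we have sent requests to
  return {
    sendRequest(remote) {
      tracked.add(remote); // outbound request is tracked
    },
    allowInbound(remote, port) {
      if (tracked.has(remote)) return true; // response traffic: always allowed
      return inboundRules.some((rule) => rule.port === port); // else check rules
    },
  };
}

const sg = makeSecurityGroup([{ port: 443 }]);
sg.sendRequest('203.0.113.9');
console.log(sg.allowInbound('203.0.113.9', 8080)); // true: response to our request
console.log(sg.allowInbound('198.51.100.1', 443)); // true: matches an inbound rule
console.log(sg.allowInbound('198.51.100.1', 22)); // false: no rule, no tracked request
```

The key point the model captures: the inbound rule list is only consulted for traffic that is not a response to a tracked outbound request.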

For each rule, you specify the following:

  • Name: The security group's name. A name can have up to 255 characters. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;!$*. When we save a name that contains trailing spaces, we trim the spaces. For instance, if you enter "Test Security Group " with a trailing space, we save it as "Test Security Group".
  • Protocol: The protocol to allow. The most common protocols are 6 (TCP), 17 (UDP), and 1 (ICMP).
  • Port range: The range of ports to allow for TCP, UDP, or a custom protocol. You can specify a single port number or a range of port numbers.
  • Source or destination: The origin or destination of the traffic. This can be:
  • The current security group
  • A different security group in the same VPC
  • A security group in a peer VPC (via a VPC peering connection)
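Taken together, a single rule bundles the fields above. A minimal sketch follows; the field names are illustrative for readability, not the actual EC2 API schema:

```javascript
// Illustrative representation of a single inbound rule. Field names
// are assumptions, not the EC2 API shape.
const webInboundRule = {
  name: 'allow-https', // up to 255 characters
  protocol: 'tcp', // IANA number 6; udp is 17, icmp is 1
  portRange: [443, 443], // a single port or a range
  source: '0.0.0.0/0', // a CIDR range or another security group
};
console.log(`${webInboundRule.protocol} ${webInboundRule.portRange[0]} from ${webInboundRule.source}`);
```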

When you create a security group rule, AWS assigns the rule a unique ID. When you use the API or CLI to modify or delete a rule, you can use its ID.

When you specify a security group as the source or destination of a rule, the rule applies to all instances associated with that security group.

When you specify a security group as the source for a rule, traffic from network interfaces associated with the source security group is allowed for the specified protocol and port. Incoming traffic is permitted based on the private IP addresses of network interfaces linked to the source security group. If you configure routers to forward traffic between two instances in different subnets via a middlebox appliance, you must ensure that both instances' security groups allow traffic to flow between them.

Some firewall configuration systems allow you to filter on source ports. You can use security groups to filter only on destination ports.

When you add, update, or delete rules, the changes are applied to all instances associated with the security group.

The type of rules you add is frequently determined by the purpose of the security group. For example, a security group associated with web servers might allow the web servers to accept HTTP and HTTPS traffic from any IPv4 or IPv6 address and send SQL Server or MySQL traffic to a database server.

A different set of rules is required for a database server. Instead of inbound HTTP and HTTPS traffic, for example, you could add a rule that allows inbound MySQL or Microsoft SQL Server access.

Stale security group rules

If your VPC has a VPC peering connection to another VPC, or uses a VPC shared by another account, your security group rules can reference security groups in that other VPC. This enables instances belonging to the referenced security group and those belonging to the referencing security group to communicate with one another.

The security group rule is marked as stale if the security group in the shared VPC is deleted or if the VPC peering connection is deleted. Stale security group rules can be deleted just like any other security group rule.

Work with security groups

Change the default security group.

A default security group is included in your VPC. This group cannot be deleted; however, the rules of the group can be changed. The procedure is the same as for any other security group modification.

Make a security group.

Although the default security group can be used for your instances, you may want to create your own groups to reflect the various roles that instances play in your system.

New security groups are created with only an outbound rule, which allows all traffic to leave the instances.

  • Open the Amazon VPC console.
  • Select Security Groups from the navigation pane.
  • There is a list of your security groups. Select a security group to view its details, including its inbound and outbound rules.
  • To enable any inbound traffic or to restrict outbound traffic, you must add rules.

To create using the console

  • Navigate to the Amazon VPC console.

  • Select Security Groups from the navigation pane.

  • Select Create security group.

  • Enter a name and description for the security group. A security group's name and description cannot be changed after it has been created.

  • Select the VPC from the list.

You have the option of adding security group rules now or later. See Add rules to a security group for more information.

You have the option of adding tags now or later. Choose to add a new tag and enter the tag key and value to add a tag.

Select Create a security group.

To create it using the command line

  • create-security-group
  • New-EC2SecurityGroup

View your security groups

  • To view your security groups using the console, open the Amazon VPC console and select Security Groups from the navigation pane.

  • To view your security groups by command line

  • describe-security-groups and describe-security-group-rules (AWS CLI)

  • Get-EC2SecurityGroup and Get-EC2SecurityGroupRules commands (AWS Tools for Windows PowerShell)

  • To view all of your security groups across Regions

  • Open the Amazon EC2 Global View console

Tag the security groups

Use tags to help organize and identify your resources, such as by purpose, owner, or environment. You can add tags to security groups. Tag keys must be unique for each security group. If you add a tag with a key that is already associated with the security group, the value of that tag is updated.

To tag a security group by console

  • Navigate to the Amazon VPC console.
  • Select Security Groups from the navigation pane.
  • Select Actions, then Tag Management.
  • The Manage Tags page lists all of the tags that have been assigned to the security group. To add a tag, select Add tag and fill in the tag key and value. To delete a tag, select Remove next to the tag you want to remove.
  • Select Save changes.

To tag a security group by command line

  • create-tags

  • New-EC2Tag

Add rules to the security group

When you add a rule to a security group, it is automatically applied to all instances associated with the security group. You can use security groups from the peer VPC as the source or destination in your security group rules if you have a VPC peering connection.

To add a rule using a console

  • Navigate to the Amazon VPC console.
  • Select Security Groups from the navigation pane.
  • Choose a security group.
  • Actions, Edit inbound rules or Actions, Edit outbound rules are available.

Choose Add rule for each rule and then do the following.

a. Select the type of protocol to allow under Type.

  • You must enter the port range to allow for TCP or UDP.
  • For custom ICMP, you must select the ICMP type name from Protocol and, if applicable, the code name from Port range.
  • The protocol and port range are automatically configured for any other type.

b. To allow traffic, do one of the following for the Source (inbound rules) or Destination (outbound rules):

  • Select Custom and then enter an IP address in CIDR notation, a CIDR block, another security group, or a prefix list.
  • Select Anywhere to allow traffic from any IP address to reach your instances (inbound rules) or All IP addresses to allow traffic from your instances to reach all IP addresses (outbound rules). This option adds the 0.0.0.0/0 IPv4 CIDR block automatically.
  • If your security group is in an IPv6-enabled VPC, this option adds a rule for the ::/0 IPv6 CIDR block automatically.
  • This option is acceptable for inbound rules for a short period of time in a test environment but is unsafe for production environments. Only allow a specific IP address or range of IP addresses to access your instances in production.
  • Choose Save rules.

To add a rule using the command line

  • authorize-security-group-ingress and authorize-security-group-egress (AWS CLI)

  • Grant-EC2SecurityGroupIngress and Grant-EC2SecurityGroupEgress (AWS Tools for Windows PowerShell)

Update Rules

When you update a rule, it is automatically applied to all instances associated with the security group.

To update a rule by console

  • Navigate to the Amazon VPC console.
  • Select Security Groups from the navigation pane.
  • Choose a security group.
  • Actions, Edit inbound rules or Actions, Edit outbound rules are available.
  • As needed, revise the rule.
  • Select Save rules.

Tag Rules

Use tags to help organize and identify your resources, such as by purpose, owner, or environment. Tags can be added to security group rules. For each security group rule, tag keys must be unique. When you add a tag with a key that is already associated with a security group rule, the value of that tag is updated.

To tag a rule using the console

  • Navigate to the Amazon VPC console.
  • Select Security Groups from the navigation pane.
  • Choose a security group.
  • Select the check box for the rule on the Inbound Rules or Outbound Rules tab, then choose Manage tags.
  • The Manage Tags page displays any tags associated with the rule. To add a tag, select Add tag and fill in the tag key and value. To delete a tag, select Remove next to the tag you want to remove.
  • Select Save changes.

To tag a rule by command line

  • create-tags (AWS CLI)
  • New-EC2Tag (AWS Tools for Windows PowerShell)

Delete Rules

When you remove a rule from a security group, the change is applied to all instances that are associated with the security group.

To delete a rule using a console

  • Navigate to the Amazon VPC console.
  • Select Security Groups from the navigation pane.
  • Choose a security group.
  • Select Actions, then Edit inbound rules to delete an inbound rule or Edit outbound rules to delete an outbound rule.
  • Select the Delete button next to the rule you want to remove.
  • Select Save rules.

To delete a rule using the command line

  • revoke-security-group-ingress and revoke-security-group-egress (AWS CLI)
  • Revoke-EC2SecurityGroupIngress and Revoke-EC2SecurityGroupEgress (AWS Tools for Windows PowerShell)

Change the security groups

When an instance is running or stopped after being launched into a VPC, you can change the security groups that are associated with it.

Delete the security group

A security group can only be deleted if it is not associated with any instances (either running or stopped). You can change the security groups associated with a running or stopped instance, and you can delete multiple security groups at once if you're using the console. When using the command line or the API, you can only delete one security group at a time.

To delete it using the console

  • Navigate to the Amazon VPC console.
  • Select Security Groups from the navigation pane.
  • Choose Actions, Delete security groups after selecting one or more security groups.
  • When prompted for confirmation, enter delete and then choose Delete.

To delete it using the command line

  • delete-security-group (AWS CLI)

  • Remove-EC2SecurityGroup (AWS Tools for Windows PowerShell)

AWS Firewall Manager can be used to centrally manage VPC security groups.

AWS Firewall Manager streamlines the administration and maintenance of your VPC security groups across multiple accounts and resources. With Firewall Manager, you can configure and audit your organization's security groups from a single central administrator account. Firewall Manager applies the rules and protections to all of your accounts and resources automatically, even as you add new ones. Firewall Manager is especially useful if you want to protect your entire organization or if you frequently add new resources that need to be protected from a central administrator account.

  • Configure your organization's common baseline security groups as follows: A common security group policy can be used to provide centralized control over the association of security groups to accounts and resources across your organization. You specify where and how the policy will be implemented in your organization.
  • Audit existing security groups in your organization: An audit security group policy can be used to check the existing rules in your organization's security groups. The policy can be configured to audit all accounts, specific accounts, or resources tagged within your organization. Firewall Manager automatically detects and audits new accounts and resources. You can create audit rules to specify which security group rules to allow or disallow within your organization, as well as to look for unused or redundant security groups.
  • Obtain reports on non-compliant resources and correct them: For your baseline and audit policies, you can receive reports and alerts for non-compliant resources. You can also configure auto-remediation workflows to remediate any non-compliant resources detected by the Firewall Manager.

If you have any doubts about the above topic, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

One of the most common system design interview questions in software engineering is how to create a URL shortener like TinyURL or Bitly.

While tinkering with Cloudflare Workers to sync the Daily LeetCode Challenge to my to-do list, I had the idea to create an actual URL shortener that anyone could use.

What follows is my thought process, along with code examples, for creating a URL shortener using Cloudflare Worker. If you want to proceed, you'll need a Cloudflare account and the Wrangler CLI.

Requirements

Let's begin, as with any System Design interview, by defining some functional and non-functional requirements.

Functional

  • When given a URL, our service should return a unique and short URL for it. For instance, https://betterprogramming.pub/how-to-write-clean-code-in-python-5d67746133f2 → s.jerrynsh.com/FpS0a2LU
  • When a user attempts to access s.jerrynsh.com/FpS0a2LU, he or she is redirected to the original URL.
  • The UUID should be encoded using the Base62 encoding scheme (26 + 26 + 10):
  • A lowercase alphabet ranging from 'a' to 'z', a total of 26 characters
  • An uppercase alphabet ranging from 'A' to 'Z', a total of 26 characters
  • A digit from '0' to '9', a total of 10 characters

We will not support custom short links in this POC. Our UUID should be 8 characters long because 62^8 would give us approximately 218 trillion possibilities.
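As a quick sanity check on that figure, the arithmetic takes two lines (a sketch, assuming an 8-character key over the 62-character alphabet):

```javascript
// Number of distinct 8-character IDs over a 62-character alphabet.
const alphabetSize = 26 + 26 + 10; // a-z, A-Z, 0-9
const combinations = alphabetSize ** 8;
console.log(combinations); // 218340105584896, roughly 218 trillion
```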

The generated short URL should never expire.

Non-Functional

  • Low latency
  • High availability

Budget, Capacity, and Restrictions Planning

The goal is straightforward: I want to be able to host this service for free. As a result, our constraints are heavily reliant on Cloudflare Worker pricing and platform limitations.

At the time of writing, the constraints per account for hosting our service for free are as follows:

  • 100k requests per day at 1k requests per minute

  • CPU runtime of no more than 10ms

Our application, like most URL shorteners, is expected to have a high read rate but a low write rate. Cloudflare KV, a key-value data store that supports high read with low latency — ideal for our use case — will be used to store our data.

Continuing from our previous limitations, the free tier of KV and limit allows us to have:

  • 100k reads per day
  • 1k writes per day
  • 1 GB of data storage

What is the maximum number of short URLs that we can store?

With a free maximum stored data limit of 1 GB in mind, let's try to estimate how many URLs we can store. I'm using this tool to estimate the byte size of the URL in this case:

  • One character equals one byte.
  • We have no problem with the key size limit because our UUID should only be a maximum of 8 characters.
  • The value size limit, on the other hand — I'm guessing that the maximum URL size should be around 200 characters. As a result, I believe it is safe to assume that each stored object should be an average of 400 bytes, which is significantly less than 25 MiB.
  • Finally, with 1 GB available, our URL shortener can support up to 2,500,000 short URLs.
  • I understand. 2.5 million URLs is not a large number.
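The arithmetic behind that estimate can be written out, using the assumed 400-byte average object size from above:

```javascript
// How many ~400-byte objects fit in the 1 GB free KV storage tier.
const freeStorageBytes = 1_000_000_000; // 1 GB, decimal
const avgObjectBytes = 400; // ~8-byte key plus a ~200-character URL, padded
const maxUrls = freeStorageBytes / avgObjectBytes;
console.log(maxUrls); // 2500000
```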

In retrospect, we could have made our UUID 4 characters instead of 8, as 62^4 possibilities are far more than 2.5 million. Having said that, let's stick with an 8-character UUID.

Overall, I would say that the free tier for Cloudflare Worker and KV is quite generous and more than adequate for our proof of concept. Please keep in mind that the limits are applied per account.

Storage

As I previously stated, we will use Cloudflare KV as the database to store our shortened URLs because we anticipate more reads than writes.

Eventually Consistent

One thing to keep in mind is that, while KV can support extremely high read rates globally, it is not a strongly consistent storage solution. In other words, any write may take up to 60 seconds to propagate globally, a drawback we can live with.

I've yet to come across anything that lasted more than a couple of seconds in my experiments.

Atomic Operation

Based on what I've learned about how KV works, it's clear that it's not ideal for situations that necessitate atomic operations. Fortunately for us, this is not an issue.

For our proof of concept, the key of our KV would be a UUID that comes after our domain name, and the value would be the long URL provided by the users.

Creating a KV

Simply run the following commands in Wrangler CLI to create a KV.

# Production namespace:
wrangler kv:namespace create "URL_DB"
# This namespace is used for `wrangler dev` local testing:
wrangler kv:namespace create "URL_DB" --preview

In order to use these KV namespaces, we must also update our wrangler.toml file to include the appropriate namespace bindings. To access your KV dashboard, go to https://dash.cloudflare.com/<your Cloudflare account id>/workers/kv/namespaces.

Short URL UUID Generation Logic

This is most likely the most crucial aspect of our entire application.

  • The goal, based on our requirements, is to generate an alphanumeric UUID for each URL, with the length of our key being no more than 8 characters.
  • In an ideal world, the UUID of the generated short link should be unique. Another critical factor to consider is what happens if multiple users shorten the same URL. We should ideally also check for duplicates.

Consider the following alternatives:

  • Using a UUID generator

     https://betterprogramming.pub/stop-using-exceptions-like-this-in-python-2bd8ba7d8841 → UUID Generator → Yyf6AJ39 → s.jerrynsh.com/Yyf6AJ39

This solution is relatively simple to implement. We simply call our UUID generator to generate a new UUID for each new URL we encounter. We'd then use the generated UUID as our key to assign the new URL.

If the UUID already exists (collision) in our KV, we can continue retrying. However, we should be cautious about retrying because it can be costly.

Furthermore, using a UUID generator would not assist us in dealing with duplicates in our KV. It would be relatively slow to look up the long URL value within our KV.

  • Hashing the URL

    https://betterprogramming.pub/how-to-write-clean-code-in-python-5d67746133f2 → MD5 Hash → 99d641e9923e135bd5f3a19dca1afbfa → 99d641e9 → s.jerrynsh.com/99d641e9

Hashing a URL, on the other hand, allows us to check for duplicated URLs because passing a string (URL) through a hashing function always produces the same result. We can then use the result (key) to check for duplication in our KV.

Assuming that we use MD5, our key would be 8 characters long. So, what if we just took the first 8 characters of the generated MD5 hash? Isn't the problem solved?

No, not exactly. Collisions would always be produced by the hash function. We could generate a longer hash to reduce the likelihood of a collision. However, it would be inconvenient for users. In addition, we want to keep our UUID 8 characters.
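To put a number on how quickly a truncated hash collides, assume we keep only the first 8 hex characters (32 bits of entropy); the birthday bound says a 50% collision chance arrives at about 1.1774 * sqrt(N) hashes:

```javascript
// Birthday bound: ~50% collision chance after 1.1774 * sqrt(N) draws,
// where N is the size of the key space (2^32 for 8 hex characters).
const keySpace = 2 ** 32;
const n50 = 1.1774 * Math.sqrt(keySpace);
console.log(Math.round(n50)); // about 77,000 URLs before a collision is likely
```

So a truncated-hash scheme would likely break well before we reach our 2.5 million URL budget.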

  • Using an incremental counter

    https://betterprogramming.pub/3-useful-python-f-string-tricks-you-probably-dont-know-f908f7ed6cf5 → Counter → s.jerrynsh.com/12345678

In my opinion, this is the simplest yet most scalable solution. We will not have any collision problems if we use this solution. We can simply increment the number of characters in our UUID whenever we consume the entire set.

However, I do not want users to be able to guess a short URL at random by visiting s.jerrynsh.com/12345678. As a result, this option is out of the question.

There are numerous other solutions (for example, pre-generate a list of keys and assign an unused key when a new request comes in) available, each with its own set of advantages and disadvantages.

We're going with solution 1 for our proof of concept because it's simple to implement and I'm fine with duplicates. To avoid duplicates, we could cache our users' requests in order to shorten URLs.

Nano ID

The nanoid package is used to generate a UUID. We can use the Nano ID collision calculator to estimate our collision rate:
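The collision calculator is interactive, but the same birthday-problem approximation can be done inline. The numbers below are the assumptions from our capacity planning, not measured values:

```javascript
// Probability of at least one collision among n random IDs drawn
// from N possibilities: p ≈ 1 - exp(-n^2 / (2N)).
const N = 62 ** 8; // ~2.18e14 possible 8-character IDs
const n = 2_500_000; // the most URLs our storage budget allows
const p = 1 - Math.exp(-(n * n) / (2 * N));
console.log(p.toFixed(4)); // about 0.014, i.e. a ~1.4% chance overall
```

A roughly 1.4% chance of ever hitting a single collision across the full 2.5 million URLs is low enough that a simple retry loop handles it.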

Okay, enough chit-chat; let's get to work on some code!

To deal with the possibility of collision, we simply keep retrying:

// utils/urlKey.js
import { customAlphabet } from 'nanoid'
const ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
/*
Generate a unique `urlKey` using `nanoid` package.
Keep retrying until a unique urlKey does not exist in the URL_DB.
*/
export const generateUniqueUrlKey = async () => {
   const nanoId = customAlphabet(ALPHABET, 8)
   let urlKey = nanoId()
   while ((await URL_DB.get(urlKey)) !== null) {
       urlKey = nanoId()
   }
   return urlKey
}

API

We will define the API endpoints that we want to support in this section. This project is started with the itty-router worker template, which assists us with all of the routing logic:

wrangler generate https://github.com/cloudflare/worker-template-router

The beginning of our project lies in the index.js:

// index.js
import { Router } from 'itty-router'
import { createShortUrl } from './src/handlers/createShortUrl'
import { redirectShortUrl } from './src/handlers/redirectShortUrl'
import { LANDING_PAGE_HTML } from './src/utils/constants'
const router = Router()
// GET landing page html
router.get('/', () => {
   return new Response(LANDING_PAGE_HTML, {
       headers: {
           'content-type': 'text/html;charset=UTF-8',
       },
   })
})
// GET redirects the short URL to its original URL.
router.get('/:text', redirectShortUrl)
// POST creates a short URL that is associated with its original URL.
router.post('/api/url', createShortUrl)
// 404 for everything else.
router.all('*', () => new Response('Not Found', { status: 404 }))
// All incoming requests are passed to the router where your routes are called and the response is sent.
addEventListener('fetch', (e) => {
   e.respondWith(router.handle(e.request))
})

In the interest of providing a better user experience, I created a simple HTML landing page that anyone can use.

Creating short URL

To begin, we'll need a POST endpoint (/api/url) that calls createShortUrl, which parses the originalUrl from the body and returns a short URL.

Example Code:

// handlers/createShortUrl.js
import { generateUniqueUrlKey } from '../utils/urlKey'
export const createShortUrl = async (request, event) => {
   try {
       const urlKey = await generateUniqueUrlKey()
       const { host } = new URL(request.url)
       const shortUrl = `https://${host}/${urlKey}`
       const { originalUrl } = await request.json()
       const response = new Response(
           JSON.stringify({
               urlKey,
               shortUrl,
               originalUrl,
           }),
           { headers: { 'Content-Type': 'application/json' } }
       )
       event.waitUntil(URL_DB.put(urlKey, originalUrl))
       return response
   } catch (error) {
       console.error(error, error.stack)
       return new Response('Unexpected Error', { status: 500 })
   }
}

To test this locally, run the following Curl command:

curl --request POST \
 --url http://127.0.0.1:8787/api/url \
 --header 'Content-Type: application/json' \
 --data '{
    "originalUrl": "https://www.google.com/"
}'

Redirecting short URL

As a URL shortening service, we want users to be able to visit a short URL and be redirected to their original URL:

// handlers/redirectShortUrl.js
export const redirectShortUrl = async ({ params }) => {
   const urlKey = decodeURIComponent(params.text)
   const originalUrl = await URL_DB.get(urlKey)
   if (originalUrl) {
       return Response.redirect(originalUrl, 301)
   }
   return new Response('Invalid Short URL', { status: 404 })
}
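The handler relies on the router's params and the URL_DB KV binding. Stripped of those, the core lookup-and-redirect logic can be sketched and run in Node 18+ (which provides the global Response; a Map stands in for URL_DB; the key and URL below are illustrative assumptions):

```javascript
// Map standing in for the URL_DB KV namespace (illustrative assumption).
const URL_DB = new Map([['abc12345', 'https://www.google.com/']]);

const redirectShortUrl = async (urlKey) => {
  const originalUrl = URL_DB.get(urlKey) ?? null;
  if (originalUrl) {
    // 301 tells clients the short URL permanently points at the original URL.
    return Response.redirect(originalUrl, 301);
  }
  return new Response('Invalid Short URL', { status: 404 });
};
```

A known key yields a 301 with a Location header; an unknown key yields a 404.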

What about deletion? Since users do not need to be authorized to shorten a URL, we decided to skip a deletion API: it makes no sense for one user to be able to delete another user's short URL.

Simply run wrangler dev to test our URL shortener locally.

What happens if a user decides to shorten the same URL multiple times? We don't want our KV to end up with duplicated URLs with unique UUIDs, do we? To address this, we could use a cache middleware that caches the originalUrl submitted by users via the Cache API:

import { URL_CACHE } from '../utils/constants'

export const shortUrlCacheMiddleware = async (request) => {
   const { originalUrl } = await request.clone().json()
   if (!originalUrl) {
       return new Response('Invalid Request Body', {
           status: 400,
       })
   }
   const cache = await caches.open(URL_CACHE)
   const response = await cache.match(originalUrl)
   if (response) {
       console.log('Serving response from cache.')
       return response
   }
   // Returning nothing lets itty-router fall through to the next handler.
}

Update our index.js accordingly:

// index.js
...
router.post('/api/url', shortUrlCacheMiddleware, createShortUrl)
...

Finally, after shortening the URL, we must ensure that our cache instance is updated with the original URL:

// handlers/createShortUrl.js
import { URL_CACHE } from '../utils/constants'
import { generateUniqueUrlKey } from '../utils/urlKey'

export const createShortUrl = async (request, event) => {
   try {
       const urlKey = await generateUniqueUrlKey()
       const { host } = new URL(request.url)
       const shortUrl = `https://${host}/${urlKey}`
       const { originalUrl } = await request.json()
       const response = new Response(
           JSON.stringify({
               urlKey,
               shortUrl,
               originalUrl,
           }),
           { headers: { 'Content-Type': 'application/json' } }
       )
       const cache = await caches.open(URL_CACHE) // Access our API cache instance
       event.waitUntil(URL_DB.put(urlKey, originalUrl))
       event.waitUntil(cache.put(originalUrl, response.clone())) // Update our cache here
       return response
   } catch (error) {
       console.error(error, error.stack)
       return new Response('Unexpected Error', { status: 500 })
   }
}

During my testing with wrangler dev, I discovered that the Worker Cache API does not function locally or on any workers.dev domain.

To test this, run wrangler publish to publish the application on a custom domain. You can then test the changes by sending a request to the /api/url endpoint while using wrangler tail to view the logs.

Deployment

No side project is ever complete without hosting, is it?

Before you publish your code, you must edit the wrangler.toml file and include your Cloudflare account id. More information about configuring and publishing your code is available in the official documentation. Simply run wrangler publish to deploy and publish any new changes to your Cloudflare Worker.

Conclusion

To be honest, researching, writing, and building this POC at the same time is the most fun I've had in a long time. There are numerous other things that come to mind that we could have done for our URL shortener; to name a few:

  • Saving metadata such as creation date and number of visits
  • Adding authentication
  • Handling deletion and expiration of short URLs
  • User analytics
  • Personalized links

One issue that most URL shortening services face is that short URLs are frequently used to redirect users to malicious websites. I believe it would be an interesting topic to investigate further. If you have any questions about the preceding topic, please do not hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Creating cloud resources is as simple as going to the candy store. It only takes a few clicks for an organization to create an account with a public cloud provider and eventually create resources that may include complex infrastructure to set up a distributed environment.

As time passes, "clutter," meaning unused or unwanted resources, accumulates. This clutter is not limited to categories such as compute and storage; it can also include unused roles, over-privileged policies, unused tags, and more. In particular, this cloud clutter can result in:

  • An increase in wasteful cloud spending
  • An increased attack surface area that exposes security vulnerabilities

I recently faced a similar challenge, and this post summarises my approach to cleaning the clutter in an AWS environment. This sanitization effort will eventually provide more control over the resources being used, reducing the attack surface area, increasing security posture, and lowering operating costs. A summary of various approaches to decluttering AWS environments is provided below.

Using Trusted Advisor — Cost Optimization, identify idle or underutilized resources.

The 'Cost Optimization' feature in AWS Trusted Advisor not only recommends cost-cutting measures but also lists unused or idle resources that could be deleted. This is a very useful service and a good place to start the journey to clean up the cloud clutter, but it is not a one-size-fits-all solution because the inspection of resource utilization is limited to:

  • Idle RDS DB instances
  • Idle load balancers
  • Low-utilization Amazon EC2 instances
  • Unassociated Elastic IP addresses
  • Underutilized EBS volumes

AWS Security Hub Findings can help you identify unnecessary resources.

The primary goal of AWS Security Hub is to detect deviations from security best practices and reduce mean time to resolution through automated response and remediation actions. However, AWS Security Hub's ability to aggregate security findings from various AWS integrations and partner services is a critical feature.

Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS IAM Access Analyzer, AWS Systems Manager Patch Manager, AWS Config, and AWS Firewall Manager are among the AWS integrations. This means that security findings will concentrate on a broad range of AWS resources, including IAM roles and policies.

Remediating each security finding is a time-consuming process, but it will aid in understanding an organization's resource posture. For example, the remediation process will identify unused SNS topics, SQS queues, Secrets, KMS keys, over-provisioned policies, users with console access but no MFA setup, and so on.

Determine appropriate tags and tag the necessary resources.

After cleaning up the idle, underutilized, and unnecessary resources, it's critical to identify a clear set of tags that can be used to group resources and tag appropriately.

For resource tagging, it is strongly advised to use Infrastructure as Code (IaC) approaches (for example, Terraform or AWS CloudFormation). An alternative, but time-consuming, method of tagging resources is to use the 'Tag Editor' feature of the 'AWS Resource Groups' service.

It is also recommended that the required tags be activated as 'Cost Allocation Tags.' This can be accomplished by using the 'Cost allocation tags' option in AWS Billing.

Identify resources with tags that are no longer required, and clean up either the resources or the tags attached to them.

Tags that existed prior to this sanitization effort could exist. These tags can be applied to resources that are either required or must be deleted. The EC2 service's 'Tags' section will list all used tags and associated resources. A review of the non-required tags will help to declutter the environment even more.

Maintain environmental sanity by using an automated solution such as Cloud Custodian

Cloud Custodian is an open-source tool that can automate cloud environment management by enforcing security policies, tag policies, unused-resource garbage collection, and cost management. The tool is simple to use and lets you define policies in an easy-to-read DSL.
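As an illustrative sketch only (the policy name and filter below are hypothetical, and you should check the Cloud Custodian documentation for the exact schema of each resource type), a policy that stops EC2 instances missing an Owner tag might look like:

```yaml
policies:
  - name: ec2-missing-owner-tag   # hypothetical policy name
    resource: aws.ec2
    filters:
      - "tag:Owner": absent       # instances with no Owner tag
    actions:
      - stop
```

Running such policies on a schedule keeps the tagging discipline from eroding after the one-time cleanup.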

The tool is highly recommended because it provides the necessary automation for removing cloud clutter.

Conclusion

To summarise, cleaning up cloud environments is a frequently overlooked task. However, neglecting it can be extremely costly in terms of actual spending while also posing a significant risk to the security posture. A one-time cleanup followed by an automation setup using tools like Cloud Custodian will be extremely beneficial in the long run. If you have any doubts about this topic, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

How To Install WordPress?

- Posted in WordPress by

WordPress is well-known to everyone. Is it necessary for me to tell you about it?

Let me explain because some of you may still be unaware. WordPress is the most popular open-source CMS for creating websites and blogs. Believe me, it is the simplest way to create a website. This is due to the availability of various plugins and themes.

So, if you've already downloaded and installed it, you're a pro.

For those who are new to WordPress, this blog will assist them in learning the steps to download and install.

WordPress has enabled over 30% of the internet to self-host and design their own websites. That said, inexperienced users may take longer than five minutes to figure out how to download and install WordPress on their own. WordPress can be installed in two ways. One is the long route, which lets you tailor your installation by understanding your exact needs from the start. The alternative is a one-click install, which is quick, but you may need to work on it later. This blog will walk you through the steps of downloading and installing WordPress.

Before we get there, let's take a look at your specifications!

You'll need the following items to download and install WordPress.

Before you begin installing WordPress, you will need the following items:

Access to your server. You will be unable to host your website without it.

A suitable text editor.

An FTP (File Transfer Protocol) client such as FileZilla. Within the MilesWeb dashboard, you will also have quick file access.

When you've met all of these requirements, you'll have everything you need to get started!

5 Steps to Download WordPress and Install the Software

Are you ready to install WordPress? Let's get started. It's not difficult, but you should pay close attention to this.

  • Get the WordPress.zip file.
  • Set up a WordPress database and user account.
  • Create the wp-config.php file.
  • Use FTP to upload your WordPress files.
  • Launch the WordPress installation program.

You may take longer than others to implement these steps based on your expertise. However, the first step should be simple.

  • Download the WordPress .zip File

To begin, you will need to download WordPress. Those who regularly use the internet will, thankfully, find this step simple. Navigate to the Download WordPress page and then click the blue button on the right. You'll see a Download .tar.gz link below, but ignore it; you only need the .zip file. Save it to your computer, then double-click it to access the files contained within.

  • Create a WordPress Database and User

You must now decide whether or not to create a WordPress database and user. You won't have to do this if your host takes care of it for you, so it's worth investigating first. You might be able to find the answer in your host's documentation, or you can ask your host directly. If you need to manually create a database and user, you should also know which web hosting control panel you're using; the two most common options are Plesk and cPanel.

You can create a database and user by following a few installation steps. You will need to make changes to your WordPress core files here.

  • Set Up wp-config.php

The next step is to open a core WordPress file called wp-config.php, which will allow WordPress to connect to your database. You can do this later while running the WordPress installer, but if that does not work you will need to repeat your steps, so it is best to configure the file now. Begin by going to your computer's WordPress files and renaming the wp-config-sample.php file to wp-config.php. Then, in your text editor, open the file and find the following line: // ** MySQL settings - You can get this info from your web host ** //.

You'll find a list of options below:

Leave the DB_CHARSET and DB_COLLATE options unchanged. Simply modify the following using the credentials you created in step two:

DB_NAME — Your database name
DB_USER — Your username
DB_PASSWORD — Your password
DB_HOST — Your hostname (usually localhost)
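Filled in, that portion of wp-config.php might look like the following (the values shown here are placeholders; substitute the credentials you created in step two):

```php
define( 'DB_NAME', 'wordpress_db' );      // placeholder database name
define( 'DB_USER', 'wp_user' );           // placeholder username
define( 'DB_PASSWORD', 'your-password' ); // placeholder password
define( 'DB_HOST', 'localhost' );         // usually localhost
```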

Then, look for the section labeled Authentication Unique Keys and Salts:

Simply generate a set of secret keys and paste them into this field. These keys will protect and fortify your WordPress installation. Once finished, save your changes and prepare to upload the files.

  • Upload your WordPress files via FTP

You are now ready to install WordPress on your server. Grab your FTP credentials, which can be found in your hosting control panel. Then open FileZilla, log in to your server, and navigate to your root directory in the right panel. It's usually called www or public_html.

Navigate to the WordPress folder on your computer in the left panel. Follow the steps below depending on whether you're uploading it to your root directory or a subdirectory:

Root directory — Upload the files directly into the root directory, skipping the WordPress folder itself.

Subdirectory — Rename the WordPress folder to something unique before uploading the folder and its contents to your server.

Everything is now finished except for the actual installation.

  • Run the WordPress Installer

The WordPress installer must be run as the final step. Open your preferred browser and follow one of the following steps, depending on where you installed WordPress:

Root directory — Navigate to http://example.com/wp-admin/install.php. Subdirectory — Go to http://example.com/blogname/wp-admin/install.php, where "blogname" is the folder name you chose in the previous step. You will see the WordPress logo as well as a screen with settings that may differ depending on your host:

Several of these options can be changed later from the General Settings screen. However, make a note of your username and password. Finally, click the Install WordPress button. When it's finished, you'll be able to access your brand-new website.

In Conclusion

If you haven't learned how to download and install WordPress, the process may appear to be difficult. But, believe me, even if you aren't technically savvy, you can get your WordPress up and running in less than five minutes. Choose a suitable web host to get faster results.

If you have any doubts about how to install WordPress, don't hesitate to contact us. Airzero Cloud will be your digital partner.

[email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

What is a Web Server?

A web server is a program that uses HTTP to serve files that create web pages for users in response to requests sent by their computer's HTTP clients.

A web server can be any server that delivers documents such as HTML or XML to another device. You type a URL into your browser and hit enter. That's all!

The location of your website's web server in the world makes no difference because the page you've browsed appears on your computer screen immediately.

Also look at Apache, IIS, NGINX, and GWS as examples of web servers. Which option do you prefer?

A web server's connection to the internet is never lost. Each web server has its own IP address, which is made up of four numbers ranging from 0 to 255, separated by periods.

Hosting providers can manage multiple domains (users) on a single server using a web server.

A web hosting service provider rents out space on a server or cluster of servers to people so that they can create their own websites.

What are the Types of Web Servers?

Web servers are classified into four types:

  • Apache Web Server
  • IIS Web Server
  • Nginx Web Server
  • LiteSpeed Web Server

Apache Web Server

The Apache Software Foundation's Apache web server is one of the most popular web servers. Apache runs on every major operating system, including Linux, Windows, Unix, FreeBSD, Mac OS X, and others. Apache is used by approximately 60% of web server machines.

Because of its modular structure, an Apache web server can be easily customized. Because it is open source, you can add your own modules to the server to suit your needs.

It is extremely stable in comparison to other web servers, and administrative issues on it are easily resolved. Apache can be successfully installed on multiple platforms.

When compared to earlier versions, Apache's latest versions give you the ability to handle more requests.

IIS Web Server

IIS, a Microsoft product, is a server that includes all of the features found in Apache. Because it is not open-source, adding and modifying personal modules is more difficult.

It is compatible with all platforms that run the Windows operating system.

Nginx Web Server

After Apache, Nginx is the next open-source web server. It includes an IMAP/POP3 proxy server. Nginx's notable features include high performance, stability, ease of configuration, and low resource usage. Instead, it employs a highly scalable event-driven architecture that uses a small and predictable amount of memory under load. It has recently gained popularity and now hosts approximately 7.5 percent of all domains globally.

LiteSpeed Web Server

LiteSpeed (LSWS), a commercial web server that is a high-performance Apache drop-in replacement, is the fourth most popular web server on the internet. When you upgrade your webserver to LiteSpeed, you will notice improved performance at a low cost.

This service works with the most common Apache features, including .htaccess, mod_rewrite, and mod_security. It can replace Apache in less than 15 minutes with no downtime. To simplify use and make the transition from Apache smooth and easy, LSWS replicates Apache functions that other front-end proxy solutions cannot.

Apache Tomcat

Apache Tomcat is an open-source Java servlet container that also serves as a web server. A Java servlet is a Java program that extends the capabilities of a server. Servlets can respond to any type of request, but they are most commonly used to implement web-based applications. Sun Microsystems donated Tomcat's codebase to the Apache Software Foundation in 1999, and the project was promoted to top-level Apache status in 2005. It currently powers just under 1% of all websites.

Apache Tomcat, released under the Apache License version 2, is commonly used to run Java applications. However, it can be extended with Coyote to function as a standard web server, serving local files as HTTP documents.

Apache Tomcat is frequently listed alongside other open-source Java application servers. Wildfly, JBoss, and Glassfish are a few examples.

Node.js

Node.js is essentially a server-side JavaScript environment for network applications like web servers. Ryan Dahl originally wrote it in 2009. Despite its smaller market share, Node.js powers 0.2 percent of all websites. The Linux Foundation's Collaborative Projects program assists the Node.js project, which is managed by the Node.js Foundation.

Node.js employs an event-driven architecture that supports asynchronous I/O. Because of these design choices, throughput and scalability in web applications are optimized, allowing them to run real-time communication and browser games.

Lighttpd

Lighttpd, which is pronounced "lighty," was first released in March 2003. It currently powers about 0.1 percent of all websites and is available under a BSD license. It uses an event-driven architecture that is optimized for a large number of parallel connections and supports FastCGI, Auth, output compression, SCGI, URL rewriting, and a variety of other features. It's a popular web server for web frameworks like Catalyst and Ruby on Rails.

There are also some other types of servers, which are listed below:

  • Mail Server:
    A mail server has a centrally located pool of disk space for network users to store and share various documents in the form of emails. Because all data is stored in a single location, administrators only need to back up files from one computer.
  • Application Server:
    It is a collection of features that a software developer can reach through an API defined by the platform itself. These components are typically run in an environment similar to that of the web server for web applications. Their primary responsibility is to assist in the creation of dynamic pages.
  • File Transfer Protocol (FTP) Server:
    FTP uses separate control and data connections between the client and the server. Users can connect anonymously if the server is configured to allow it. For transmission security, the username and password should be encrypted using FTP over SSL.
  • Database Server:
    A database server is a computer program that provides database services to other computer programs through the use of client-server functionality. Some DBMSs rely on the client-server model for database access. This type of server can be accessed via a "front end" that runs on the user's computer where the request is made, or a "back end" where services such as data analysis and storage are provided.
  • DNS (Domain Name System) Server:
    A name server is a computer server that hosts a network service that provides responses to queries. It maps a human-readable addressing component to a numerical identifier. DNS also helps define the Internet namespace, which is used to identify and locate computer systems and resources on the Internet.

Conclusion

Web hosting providers primarily select web servers based on client needs, the number of clients on a single server, the applications or software clients use, and the amount of client-generated traffic a web server can handle. So, when selecting a web server, consider all of these factors first, and then choose one. If you have any doubts about web servers and their types, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

When you select the Microsoft Outlook program from the Start menu or a shortcut icon but Outlook does not open, the correct solution depends on what you're experiencing and the version of Microsoft Outlook you're using.

  • Reasons for Outlook Not Opening

Outlook may open incorrectly or not at all due to a variety of issues. Problematic add-ins are one of the most common culprits. Others include:

  • Damaged files.
  • A corrupted profile.
  • Problems with the navigation pane.

How to Resolve Outlook Not Opening Issues in Windows?

If you use Outlook on a Windows computer and it won't open, or opens with errors, work through the troubleshooting steps below in order, from simple to more complicated.

If you use Microsoft 365 on a PC or a Mac, the automated Support and Recovery Assistant tool can diagnose and resolve a variety of issues, including Microsoft Outlook not starting.

Launch Outlook in Safe Mode. If Outlook opens normally in Safe Mode, the problem is likely caused by an add-in or toolbar extension.

Disable add-ins. One or more add-ins may be incompatible with Outlook and cause the issue.

  • Disable all add-ins and see if that resolves the problem.

    • Navigate to File > Options > Add-ins.
    • Select Go in the Manage section.
    • Remove the checkmarks from the checkboxes next to the add-ins you want to disable.
    • Choose OK.
  • Repair Outlook. The Outlook application itself could be damaged. To repair it, use the built-in Microsoft Office repair utility.

    • Close all Office programs.
    • Navigate to Start > Control Panel.
    • Choose Category View.
    • Select Uninstall a Program in the Programs area.
    • Change can be accessed by right-clicking Microsoft Office.

Choose either Online Repair or Repair. If a user account control prompt occurs, choose Yes.

After the process is finished, restart Outlook.

Repair your Outlook profile. Outlook profiles can become corrupted, resulting in a variety of issues, including Outlook not opening.

  • Account Settings can be accessed by going to File > Account Settings > Account Settings.
  • Navigate to the Email tab.
  • Select Repair to launch the Repair wizard
  • Follow the on-screen instructions to finish the wizard and restart Outlook.

  • Outlook data files must be repaired. If Outlook still does not open, use the Inbox Repair tool to identify and possibly resolve the issue.

    • Exit Outlook.
    • Download and run Microsoft's Inbox Repair tool.
    • Select Browse, then navigate to your personal folders (.pst) file, and then press the Start button.
    • If the scan reveals any errors, select Repair.
    • Restart Outlook once the repair is finished.

Reset the navigation pane. Outlook may not open properly if there is a problem with the navigation pane during startup. Resetting the navigation pane may resolve the problem.

  • Exit Outlook.
  • Go to Start > Run, or press the Windows Key+R combination.
  • Select OK after typing or pasting outlook.exe /resetnavpane.
  • Launch Outlook. The navigation pane will be re-created.

How to Resolve Outlook Not Opening on a Mac?

The troubleshooting techniques listed below are applicable to Outlook 2016 for Mac and Outlook 2011 for Mac.

Check back for updates. A recent update may have included a fix for the problem of Outlook not starting. Even if you can't open Outlook, check for and install any available updates.

  • Check for updates by going to Help > Check for Updates.
  • Select Update to download and install any updates that are available.

Rebuild Outlook's database. Using the Microsoft Database Utility to rebuild a corrupted database may resolve the Outlook-not-opening issue on a Mac.

  • Close all Office programs.

  • To open the Microsoft Database Utility, press the Option key and select the Outlook icon in the Dock.

  • Choose the database whose identity you want to rebuild.

  • Select Rebuild.

Restart Outlook once the process is finished.

Airzero Cloud is a cloud hosting service that offers compute power, database storage, content delivery, and various business integration tools.

If you have any questions about what to do when Microsoft Outlook won't open, please do not hesitate to contact us. Airzero Cloud will be your digital companion.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Today, we'll look at how to create a Web app on Azure. We're doing this with the most recent Azure Cloud portal view. To create a Web app on Azure, follow the steps outlined below.

  • Log in to the site using the user credentials.
  • You will be directed to a portal page that will display the dashboard. From the left side of the page, we must select the New option.
  • When you select it, the Marketplace options will appear. In this case, we must select the Web + Mobile option. When we select it, we will be taken to the Web app dashboard.
  • In this case, we must select a Web app from the bottom of the page.

On this page, we will find a description of the Web app, read it to learn about its use and features, then click the Create button.

Here, we must assign an app name and subscription, as well as a Resource Group and an App Service plan/location. We must also select Application insight and then click the Create option.

After you click Create, your Web app will be created after some time.

Your first Web app on Azure has been successfully created and deployed. We can now only manage app services on Azure. We can use Visual Studio to open published profiles that we have downloaded.

Airzero Cloud is a cloud hosting service that provides compute power, database storage, content delivery, and other business integration tools.

Please do not hesitate to contact us if you have any questions about how to create a web app in the Azure portal. Airzero Cloud will be your digital companion.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

What is a RAID Controller?

A RAID controller is a hardware device or software program that manages hard disk drives or solid-state drives in a computer or storage array so that they can function as a logical unit. A RAID controller protects stored data while also potentially improving computing performance by speeding up access to stored data.

  • A RAID controller acts as a bridge between an operating system and the physical drives.

A RAID controller presents groups or portions of drives to applications and operating systems as logical units for which data protection schemes can be defined. Even though they may consist of parts of multiple drives, the logical units appear to applications and operating systems as single drives. Because the controller can access multiple copies of data across multiple physical devices, it can improve performance and protect data in the event of a drive failure.

There are approximately ten different RAID configurations available, as well as numerous proprietary variations of the standard set of RAID levels. A RAID controller will support either a single RAID level or a group of related levels.
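The data-protection idea behind parity-based levels such as RAID 5 can be illustrated in a few lines of Python. This is only a conceptual sketch, not how a real controller is implemented: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the rest.

```python
from functools import reduce

def parity_block(blocks):
    """Compute the XOR parity of equal-sized data blocks (the RAID 5 idea)."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

def rebuild(surviving_blocks, parity):
    """Recover a lost block by XOR-ing the parity with the surviving blocks."""
    return parity_block(surviving_blocks + [parity])

# Three data "drives" plus one parity drive
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = parity_block(data)

# Simulate losing the second drive and rebuilding it from the others
lost = data[1]
recovered = rebuild([data[0], data[2]], parity)
print(recovered == lost)  # True: the lost block is reconstructed
```

A real controller does this per stripe across the drives (and rotates the parity location), but the arithmetic is the same.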

  • Hardware vs. software RAID controllers

A physical controller is used to manage the array in hardware-based RAID. The controller can be a PCI or PCI Express card that is designed to support a specific drive format such as SATA or SCSI. Hardware RAID controllers are also known as RAID adapters.

Hardware controller prices vary significantly, with desktop-capable cards available for less than $50. More sophisticated hardware controllers capable of supporting shared networked storage are considerably more expensive, typically ranging from a few hundred dollars to more than a thousand dollars.

LSI, Microsemi Adaptec, Intel, IBM, Dell, and Cisco are just a few of the companies that currently provide hardware RAID controllers.

When choosing a hardware RAID controller, you should consider the following key features:

  • SATA and/or SAS interfaces (and their related throughput speeds)
  • Supported RAID levels
  • Operating system compatibility
  • Supported device count
  • Read performance
  • IOPS rating
  • PCIe interface and cache size
  • Encryption capabilities
  • Energy consumption

A controller can also be software-only, using the host system's hardware resources, especially the CPU and DRAM. Although software-based RAID delivers the same functionality as hardware-based RAID, its performance is typically inferior to that of the hardware versions.

Because no special hardware is needed, the main benefits of using a software controller are flexibility and low cost. However, it is crucial to ensure that the host system's processor is powerful enough to run the software without negatively impacting the performance of other applications running on the host.

RAID controller software is contained in some operating systems. For example, RAID capabilities are provided by Windows Server's Storage Spaces facility. Most enterprise-class Linux servers include RAID controller software in the form of the Linux mdadm utility.
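As an illustrative sketch of Linux software RAID, a mirrored array can be created with mdadm as follows. The device names are placeholders, and these commands require root on a system with two spare partitions:

```shell
# Create a RAID 1 (mirrored) array from two partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Watch the array being built and check its status
cat /proc/mdstat
mdadm --detail /dev/md0
```

Once assembled, /dev/md0 can be formatted and mounted like any single drive, which is exactly the "logical unit" behavior described above.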

Third-party software RAID controllers, such as SnapRAID, Stablebit DrivePool, SoftRAID, and FlexRAID, are also available. These programs are typically adequate for small installations but may not meet the storage performance and capacity requirements of business environments.

Some commercially available storage arrays use software RAID controllers, but the software is typically developed and enhanced by the storage vendor to provide adequate performance. Furthermore, storage systems with software controllers are typically built around powerful processors dedicated to controlling and managing the shared storage system.

Airzero Cloud is a cloud hosting service that provides compute power, database storage, content delivery, and a variety of other functions that aid in business integration.

If you have any doubts about RAID controllers, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Introduction

Backing up the server you manage is critical if you are a system administrator and want to avoid losing important data. Setting up periodic backups allows you to restore the system in the event of an unexpected event, such as a hardware component failure, incorrect system configuration, or the presence of viruses. Use Microsoft's Windows Server Backup (WSB) solution to quickly and easily schedule backups of both the entire server and specific storage volumes, files, or folders if you use Windows Server. This blog will walk you through the steps needed to perform an automatic backup of your Cloud Server using Windows Server 2019.

  • Installing Windows Server Backup

Windows Server Backup is a Microsoft feature that allows you to create a backup copy of your server.

To begin, open the Windows Server Management Panel Dashboard, click "Add roles and features," and then install this backup feature.

On the left, a window with several sections will be displayed. You may proceed without providing the information requested in the first section "Before You Begin." Then, in the second window, "Installation Type," select "Role-based or feature-based installation" and continue.

Select the server where you want to install Windows Server Backup in the "Server Selection" section and proceed. Continue by clicking "Next" in the "Server Roles" section. Open the "Features" window, then scroll down and select the "Windows Server Backup" item.

Select "Restart the destination server automatically if required" in the "Confirmation" section and click "Install." Then, after the installation is complete, click "Close."

As a result, Windows Server Backup (WSB) is correctly installed. Start it now and configure it. The tool is accessible via the "Tools" menu in the Server Manager.

  • Configuring automatic backup

Once Windows Server Backup is open, select Local Backup on the left and then Backup Schedule on the right to configure the automatic backup rules.

A window with several sections will appear. To begin, simply click "Next" in the "Getting Started" section. Then, in "Select Backup Configuration," leave the entry "Full server" selected if you want to back up the entire system. Otherwise, select "Custom" to back up only a subset of volumes, files, or folders. Finally, click "Next" to proceed to the following section.

In the "Specify Backup Time" section, specify whether to back up once a day at a specific time or to perform multiple backups at different times on the same day.

If you selected "More than once a day," simply select the desired time in the left column and click "Add." To delete an incorrect time, simply click on the time in the right column and select "Remove." Once the backup frequency has been determined, click "Next."

You will be asked where you want to save your backup file in the "Specify Destination Type" section. Each storage option has advantages and disadvantages, which are detailed in its own section. As a result, think carefully about where you'll keep your backup.

There are three possibilities:

  • Saving to local hard disc: If this option is selected, the backup will be performed on a local hard disc installed on the server itself. Please keep in mind that once selected, the hard disc in question will be formatted, so remember to back up any existing data to another hard disc.

  • Saving to volume: By selecting this option, you can use a portion of your hard disc as backup storage. However, keep in mind that if you choose this option, the hard disc's read/write speed may slow significantly during the backup phase. If you intend to use this method, it may be a good idea to schedule the backup during times when your server receives fewer requests.

  • Saving to a network folder: By selecting this option, your server can be backed up to a network hard disc. This will allow you to store your data on a NAS or another Cloud Server that is available. However, because it is overwritten each time, only one backup can be saved at a time in this case.

After you've chosen the best option for you, click "Next." The option "Saving to volume" is selected in this example.

The "Confirmation" section now displays a summary of the backup settings you've selected. To schedule the backup, click "Finish." When you're finished, click "Close."
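For unattended setups, the same schedule can also be created from the command line with the built-in wbadmin tool. This is a minimal sketch assuming E: is the backup target volume and a 21:00 daily schedule; adjust the drives and time to your own configuration:

```
REM Schedule a daily backup of C: to volume E: at 21:00
wbadmin enable backup -addtarget:E: -include:C: -schedule:21:00 -quiet

REM Check the configured schedule and list existing backups
wbadmin get status
wbadmin get versions
```

This is handy when configuring many Cloud Servers, since the command can be scripted instead of repeating the wizard on each machine.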

Conclusion

You have now successfully scheduled your first automatic backup on Windows Server 2019. Windows Server Backup will back up your data based on the frequency and storage options you specify, preserving a copy of the files on your server.

Backups of your Windows server should be scheduled at all times because they allow you to restore data and settings if something goes wrong, such as defective hardware, incorrect system configuration, or the presence of malware. Also, keep in mind the benefits and drawbacks of the various backup methods available to avoid file inaccessibility or backup overwrites.

Airzero Cloud is a cloud hosting service that offers compute power, database storage, content delivery, and a variety of other functions that will aid in the integration of a business.

If you have any doubts about how to schedule automatic backups on Windows Server 2019, don't hesitate to contact us through the given email.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/