
One of the most common system design interview questions in software engineering is how to create a URL shortener like TinyURL or Bitly.

While tinkering with Cloudflare Workers to sync the Daily LeetCode Challenge to my to-do list, I had the idea to create an actual URL shortener that anyone could use.

What follows is my thought process, along with code examples, for creating a URL shortener using Cloudflare Workers. If you want to follow along, you'll need a Cloudflare account and the Wrangler CLI.

Requirements

Let's begin, as with any System Design interview, by defining some functional and non-functional requirements.

Functional

  • When given a URL, our service should return a unique and short URL for it. For instance, https://betterprogramming.pub/how-to-write-clean-code-in-python-5d67746133f2 → s.jerrynsh.com/FpS0a2LU
  • When a user attempts to access s.jerrynsh.com/FpS0a2LU, he or she is redirected to the original URL.
  • The UUID should be encoded using the Base62 encoding scheme (26 + 26 + 10):
  • Lowercase letters from 'a' to 'z', a total of 26 characters
  • Uppercase letters from 'A' to 'Z', a total of 26 characters
  • Digits from '0' to '9', a total of 10 characters

We will not support custom short links in this POC. Our UUID should be 8 characters long, as 62^8 would give us approximately 218 trillion possibilities.

The generated short URL should never expire.

Non-Functional

  • Low latency
  • High availability

Budget, Capacity, and Restrictions Planning

The goal is straightforward: I want to be able to host this service for free. As a result, our constraints are heavily reliant on Cloudflare Worker pricing and platform limitations.

At the time of writing, the constraints per account for hosting our service for free are as follows:

  • 100k requests per day at 1k requests per minute

  • CPU runtime of no more than 10ms

Our application, like most URL shorteners, is expected to have a high read rate but a low write rate. Cloudflare KV, a key-value data store that supports high read with low latency — ideal for our use case — will be used to store our data.

Continuing from our previous limitations, the free tier of KV and limit allows us to have:

  • 1k writes/day 100k reads/day
  • 1 GB of data storage

What is the maximum number of short URLs that we can store?

With a free maximum stored data limit of 1 GB in mind, let's try to estimate how many URLs we can store. I'm using this tool to estimate the byte size of the URL in this case:

  • One character equals one byte.
  • We have no problem with the key size limit because our UUID should only be a maximum of 8 characters.
  • The value size limit, on the other hand — I'm guessing that the maximum URL size should be around 200 characters. As a result, I believe it is safe to assume that each stored object should be an average of 400 bytes, which is significantly less than 25 MiB.
  • Finally, with 1 GB available, our URL shortener can support up to 2,500,000 short URLs.
  • Admittedly, 2.5 million URLs is not a large number.

In retrospect, we could have made our UUID 4 characters long instead of 8, as 62^4 (about 14.7 million) possibilities are already far more than 2.5 million. Having said that, let's stick with an 8-character UUID.

Overall, I would say that the free tier for Cloudflare Worker and KV is quite generous and more than adequate for our proof of concept. Please keep in mind that the limits are applied per account.

Storage

As I previously stated, we will use Cloudflare KV as the database to store our shortened URLs because we anticipate more reads than writes.

Eventually Consistent

One thing to keep in mind is that, while KV can support extremely high read rates globally, it is not a strongly consistent storage solution. In other words, any writes may take up to 60 seconds to propagate globally — a drawback we can live with.

In my experiments, I have yet to see propagation take more than a couple of seconds.

Atomic Operation

Based on what I've learned about how KV works, it's clear that it's not ideal for situations that necessitate atomic operations. Fortunately for us, this is not an issue.

For our proof of concept, the key of our KV would be a UUID that comes after our domain name, and the value would be the long URL provided by the users.

Creating a KV

Simply run the following commands in Wrangler CLI to create a KV.

# Production namespace:
wrangler kv:namespace create "URL_DB"
# This namespace is used for `wrangler dev` local testing:
wrangler kv:namespace create "URL_DB" --preview

In order to use these KV namespaces from our Worker, we must also update our wrangler.toml file to include the appropriate namespace bindings. To access your KV dashboard, go to https://dash.cloudflare.com/<your Cloudflare account id>/workers/kv/namespaces.
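For reference, the resulting binding in wrangler.toml looks roughly like the snippet below. This is a minimal sketch: the id and preview_id values are placeholders for the namespace IDs printed by the commands above.

# wrangler.toml (excerpt)
kv_namespaces = [
    { binding = "URL_DB", id = "<production_namespace_id>", preview_id = "<preview_namespace_id>" }
]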

Short URL UUID Generation Logic

This is most likely the most crucial aspect of our entire application.

  • The goal, based on our requirements, is to generate an alphanumeric UUID for each URL, with the length of our key being no more than 8 characters.
  • In an ideal world, the UUID of the generated short link should be unique. Another critical factor to consider is what happens if multiple users shorten the same URL. We should ideally also check for duplicates.

Consider the following alternatives:

  • Using a UUID generator

     https://betterprogramming.pub/stop-using-exceptions-like-this-in-python-2bd8ba7d8841 → UUID Generator → Yyf6AJ39 → s.jerrynsh.com/Yyf6AJ39

This solution is relatively simple to implement. We simply call our UUID generator to generate a new UUID for each new URL we encounter. We'd then use the generated UUID as our key to assign the new URL.

If the UUID already exists (collision) in our KV, we can continue retrying. However, we should be cautious about retrying because it can be costly.

Furthermore, using a UUID generator would not assist us in dealing with duplicates in our KV. It would be relatively slow to look up the long URL value within our KV.

  • Hashing the URL

    https://betterprogramming.pub/how-to-write-clean-code-in-python-5d67746133f2 → MD5 Hash → 99d641e9923e135bd5f3a19dca1afbfa → 99d641e9 → s.jerrynsh.com/99d641e9

Hashing a URL, on the other hand, allows us to check for duplicated URLs because passing a string (URL) through a hashing function always produces the same result. We can then use the result (key) to check for duplication in our KV.

Assuming that we use MD5, the generated hash would be 32 characters long, which is far more than our 8-character limit. So, what if we just took the first 8 characters of the generated MD5 hash? Wouldn't that solve the problem?

No, not exactly. Hash functions will always produce collisions. We could generate a longer hash to reduce the likelihood of a collision, but that would be inconvenient for users, and we want to keep our UUID at 8 characters.

  • Using an incremental counter

    https://betterprogramming.pub/3-useful-python-f-string-tricks-you-probably-dont-know-f908f7ed6cf5 → Counter → s.jerrynsh.com/12345678

In my opinion, this is the simplest yet most scalable solution. We will not have any collision problems if we use this solution. We can simply increment the number of characters in our UUID whenever we consume the entire set.

However, I do not want users to be able to guess a short URL at random by visiting s.jerrynsh.com/12345678. As a result, this option is out of the question.

There are numerous other solutions (for example, pre-generate a list of keys and assign an unused key when a new request comes in) available, each with its own set of advantages and disadvantages.

We're going with solution 1 for our proof of concept because it's simple to implement and I'm fine with duplicates. To avoid duplicates, we could cache our users' requests in order to shorten URLs.

Nano ID

The nanoid package is used to generate the UUIDs. We can use the Nano ID collision calculator to estimate our collision rate.
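As a rough sanity check, we can also apply the birthday-bound approximation ourselves (assuming we stay within the KV free tier's 1k writes per day):

p ≈ n² / (2 × 62^8) ≈ n² / (4.4 × 10^14)

Solving for p = 1% gives n ≈ 2.1 million IDs, i.e. roughly 5 to 6 years of writes at 1,000 new URLs per day before there is even a 1% chance of a single collision.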

Okay, enough chit-chat; let's get to work on some code!

To deal with the possibility of collision, we simply keep retrying:

// utils/urlKey.js
import { customAlphabet } from 'nanoid'
const ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
/*
Generate a unique `urlKey` using `nanoid` package.
Keep retrying until a unique urlKey does not exist in the URL_DB.
*/
export const generateUniqueUrlKey = async () => {
   const nanoId = customAlphabet(ALPHABET, 8)
   let urlKey = nanoId()
   while ((await URL_DB.get(urlKey)) !== null) {
       urlKey = nanoId()
   }
   return urlKey
}

API

We will define the API endpoints that we want to support in this section. This project is started with the itty-router worker template, which assists us with all of the routing logic:

wrangler generate https://github.com/cloudflare/worker-template-router

The beginning of our project lies in the index.js:

// index.js
import { Router } from 'itty-router'
import { createShortUrl } from './src/handlers/createShortUrl'
import { redirectShortUrl } from './src/handlers/redirectShortUrl'
import { LANDING_PAGE_HTML } from './src/utils/constants'
const router = Router()
// GET landing page html
router.get('/', () => {
   return new Response(LANDING_PAGE_HTML, {
       headers: {
           'content-type': 'text/html;charset=UTF-8',
       },
   })
})
// GET redirects the short URL to its original URL.
router.get('/:text', redirectShortUrl)
// POST creates a short URL that is associated with its original URL.
router.post('/api/url', createShortUrl)
// 404 for everything else.
router.all('*', () => new Response('Not Found', { status: 404 }))
// All incoming requests are passed to the router where your routes are called and the response is sent.
addEventListener('fetch', (e) => {
   e.respondWith(router.handle(e.request))
})

In the interest of providing a better user experience, I created a simple HTML landing page that anyone can use.

Creating short URL

To begin, we'll need a POST endpoint (/api/url) that calls createShortUrl, which parses the originalUrl from the body and returns a short URL.

Example Code:

// handlers/createShortUrl.js
import { generateUniqueUrlKey } from '../utils/urlKey'
export const createShortUrl = async (request, event) => {
   try {
       const urlKey = await generateUniqueUrlKey()
       const { host } = new URL(request.url)
       const shortUrl = `https://${host}/${urlKey}`
       const { originalUrl } = await request.json()
       const response = new Response(
           JSON.stringify({
               urlKey,
               shortUrl,
               originalUrl,
           }),
           { headers: { 'Content-Type': 'application/json' } }
       )
       event.waitUntil(URL_DB.put(urlKey, originalUrl))
       return response
   } catch (error) {
       console.error(error, error.stack)
       return new Response('Unexpected Error', { status: 500 })
   }
}

To test this locally, run the following Curl command:

curl --request POST \
 --url http://127.0.0.1:8787/api/url \
 --header 'Content-Type: application/json' \
 --data '{
    "originalUrl": "https://www.google.com/"
}'

Redirecting short URL

As a URL shortening service, we want users to be able to visit a short URL and be redirected to their original URL:

// handlers/redirectShortUrl.js
export const redirectShortUrl = async ({ params }) => {
   const urlKey = decodeURIComponent(params.text)
   const originalUrl = await URL_DB.get(urlKey)
   if (originalUrl) {
       return Response.redirect(originalUrl, 301)
   }
   return new Response('Invalid Short URL', { status: 404 })
}

What about deletion? Because users do not need to be authenticated to shorten a URL, the decision was made to proceed without a deletion API; it makes no sense for one user to be able to delete another user's short URL.

Simply run wrangler dev to test our URL shortener locally.

What happens if a user decides to shorten the same URL multiple times? We don't want our KV to end up with duplicated URLs with unique UUIDs, do we? To address this, we could use a cache middleware that caches the originalUrl submitted by users via the Cache API:

import { URL_CACHE } from '../utils/constants'

export const shortUrlCacheMiddleware = async (request) => {
   const { originalUrl } = await request.clone().json()
   if (!originalUrl) {
       return new Response('Invalid Request Body', {
           status: 400,
       })
   }
   const cache = await caches.open(URL_CACHE)
   const response = await cache.match(originalUrl)
   if (response) {
       console.log('Serving response from cache.')
       return response
   }
}

Update our index.js accordingly:

// index.js
...
router.post('/api/url', shortUrlCacheMiddleware, createShortUrl)
...

Finally, after shortening the URL, we must ensure that our cache instance is updated with the original URL:

// handlers/createShortUrl.js
import { URL_CACHE } from '../utils/constants'
import { generateUniqueUrlKey } from '../utils/urlKey'

export const createShortUrl = async (request, event) => {
   try {
       const urlKey = await generateUniqueUrlKey()
       const { host } = new URL(request.url)
       const shortUrl = `https://${host}/${urlKey}`
       const { originalUrl } = await request.json()
       const response = new Response(
           JSON.stringify({
               urlKey,
               shortUrl,
               originalUrl,
           }),
           { headers: { 'Content-Type': 'application/json' } }
       )
       const cache = await caches.open(URL_CACHE) // Access our API cache instance
       event.waitUntil(URL_DB.put(urlKey, originalUrl))
       event.waitUntil(cache.put(originalUrl, response.clone())) // Update our cache here
       return response
   } catch (error) {
       console.error(error, error.stack)
       return new Response('Unexpected Error', { status: 500 })
   }
}

During my testing with wrangler dev, I discovered that the Worker cache does not function locally or on any workers.dev domain.

To test this, run wrangler publish to publish the application on a custom domain. You can then test the changes by sending a request to the /api/url endpoint while using wrangler tail to view the logs.
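A rough sketch of that workflow, where s.example.com stands in for whatever custom domain you have configured:

# Publish to the route/custom domain configured in wrangler.toml
wrangler publish

# In a separate terminal, stream the production logs
wrangler tail

# Then exercise the endpoint on your custom domain
curl --request POST \
 --url https://s.example.com/api/url \
 --header 'Content-Type: application/json' \
 --data '{ "originalUrl": "https://www.google.com/" }'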

Deployment

No side project is ever truly complete until it's hosted, is it?

Before you publish your code, you must edit the wrangler.toml file and include your Cloudflare account id. More information about configuring and publishing your code is available in the official documentation. Simply run wrangler publish to deploy and publish any new changes to your Cloudflare Worker.

Conclusion

To be honest, researching, writing, and building this POC at the same time is the most fun I've had in a long time. There are numerous other things that come to mind that we could have done for our URL shortener; to name a few:

  • Saving metadata such as the creation date and the number of visits
  • Adding authentication
  • Handling deletion and expiration of short URLs
  • User analytics
  • Personalized (custom) links

One issue that most URL shortening services face is that short URLs are frequently used to redirect users to malicious websites. I believe it would be an interesting topic to investigate further. If you have any questions about the preceding topic, please do not hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Creating cloud resources is as easy as a trip to the candy store. It only takes a few clicks for an organization to create an account with a public cloud provider and start creating resources, which may eventually grow into complex infrastructure for a distributed environment.

As time passes, "clutter," which includes unused or unwanted resources, accumulates. This clutter is not limited to categories such as compute, storage, and so on, but can also include unused roles, over-privileged policies, unused tags, and so on. This cloud clutter can, in particular, result in:

  • Increase in wasteful cloud spending
  • An increase in attack surface area that exposes security vulnerabilities

I recently faced a similar challenge, and this post summarises my approach to cleaning the clutter in an AWS environment. This sanitization effort will eventually provide more control over the resources being used, reducing the attack surface area, increasing security posture, and lowering operating costs. A summary of various approaches to decluttering AWS environments is provided below.

Identify idle or underutilized resources using Trusted Advisor — Cost Optimization

The 'Cost Optimization' feature in AWS Trusted Advisor not only recommends cost-cutting measures but also lists unused or idle resources that could be deleted. This is a very useful service and a good place to start the journey to clean up the cloud clutter, but it is not a one-size-fits-all solution because the inspection of resource utilization is limited to:

  • Idle RDS DB instances
  • Idle load balancers
  • Low-utilization Amazon EC2 instances
  • Unassociated Elastic IP addresses
  • Underutilized Amazon EBS volumes

AWS Security Hub Findings can help you identify unnecessary resources.

The primary goal of AWS Security Hub is to detect deviations from security best practices and reduce mean time to resolution through automated response and remediation actions. However, AWS Security Hub's ability to aggregate security findings from various AWS integrations and partner services is a critical feature.

Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS IAM Access Analyzer, AWS Systems Manager Patch Manager, AWS Config, and AWS Firewall Manager are among the AWS integrations. This means that security findings will concentrate on a broad range of AWS resources, including IAM roles and policies.

Remediating each security finding is a time-consuming process, but it will aid in understanding an organization's resource posture. For example, the remediation process will identify unused SNS topics, SQS queues, Secrets, KMS keys, over-provisioned policies, users with console access but no MFA setup, and so on.

Determine appropriate tags and tag the necessary resources.

After cleaning up the idle, underutilized, and unnecessary resources, it's critical to identify a clear set of tags that can be used to group resources and tag appropriately.

For resource tagging, it is strongly advised to use Infrastructure as Code (IaC) approaches (for example, Terraform or AWS CloudFormation). An alternative, but time-consuming, method of tagging resources is to use the 'Tag Editor' feature of the 'AWS Resource Groups' service.

It is also recommended that the required tags be activated as 'Cost Allocation Tags.' This can be accomplished by using the 'Cost allocation tags' option in AWS Billing.

Identify resources that carry tags that are no longer required, and clean up either the resource or the tags attached to it.

Tags created before this sanitization effort may still exist, attached either to resources that are still required or to resources that should be deleted. The EC2 service's 'Tags' section lists all used tags and their associated resources, and a review of the non-required tags will help declutter the environment even more.

Maintain environmental sanity by using an automated solution such as Cloud Custodian

Cloud Custodian is an open-source tool that can automate cloud environment management by enforcing security policies, tag policies, garbage collection of unused resources, and cost management. The tool is simple to use and lets you define policies using an easy-to-read DSL.

The tool is highly recommended because it provides the necessary automation for removing cloud clutter.

Conclusion

To summarise, cleaning up cloud environments is a frequently overlooked task. However, this clutter can be extremely costly in terms of actual spending while also posing a significant risk to the security posture. A one-time cleanup followed by an automation setup using tools like Cloud Custodian will be extremely beneficial in the long run. If you have any doubt about this topic, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

How To Install WordPress?


WordPress is well-known to everyone. Is it necessary for me to tell you about it?

Let me explain because some of you may still be unaware. WordPress is the most popular open-source CMS for creating websites and blogs. Believe me, it is the simplest way to create a website. This is due to the availability of various plugins and themes.

So, if you've already downloaded and installed it, you're a pro.

For those who are new to WordPress, this blog will assist them in learning the steps to download and install.

WordPress powers over 30% of the websites on the internet, enabling people to self-host and design their own sites. That said, inexperienced users may take longer than five minutes to figure out how to download and install WordPress on their own. WordPress can be installed in two ways. One is the long route, which lets you tailor the installation to your exact needs from the start. The alternative is a one-click install, which is quick but may require more work later. This blog will walk you through the steps of downloading and installing WordPress.

Before we get there, let's take a look at your specifications!

You'll need the following items to download and install WordPress.

Before you begin installing WordPress, you will need the following items:

Access to your server. You will be unable to host your website without it.

A suitable text editor.

An FTP (File Transfer Protocol) client such as FileZilla. Your hosting dashboard will usually also give you quick file access.

When you've met all of these requirements, you'll have everything you need to get started!

5 Steps to Download WordPress and Install the Software

Are you ready to install WordPress? Let's get started. It's not difficult, but you should pay close attention to this.

  • Get the WordPress.zip file.
  • Set up a WordPress database and user account.
  • Create the wp-config.php file.
  • Use FTP to upload your WordPress files.
  • Launch the WordPress installation program.

You may take longer than others to implement these steps based on your expertise. However, the first step should be simple.

  • Download the WordPress .zip File

To begin, you will need to download WordPress. Those who regularly use the internet will, thankfully, find this step simple. Navigate to the Download WordPress page and click the blue download button on the right. You'll see a .tar.gz download link below it, but ignore it — you only need the .zip file. Save it to your computer, then double-click it to access the files contained within.

  • Create a WordPress Database and User

You must now decide whether or not to create a WordPress database and user. You won't have to do this if your host takes care of it for you, so it's worth investigating further. You might be able to find the answer in your host's documentation, or you can ask your host directly. If you need to manually create a database and user, you should also find out which web hosting control panel you're using. The two most common options are Plesk and cPanel.

You can create a database and user by following a few installation steps. You will need to make changes to your WordPress core files here.

  • Set Up wp-config.php

The next step is to open a core WordPress file called wp-config.php, which will allow WordPress to connect to your database. You can also do this later while running the WordPress installer, but if that does not work you will need to retrace your steps, so it is best to configure the file now. Begin by going to your computer's WordPress files and renaming the wp-config-sample.php file to wp-config.php. Then, open the file in your text editor and find the following line: /** MySQL settings – This information can be obtained from your web host **/.

You'll find a list of options below:

Change nothing about the DB_CHARSET and DB_COLLATE options. Simply modify the following using the credentials you created in step two:

DB_NAME — Your database name
DB_USER — Your username
DB_PASSWORD — Your password
DB_HOST — Your hostname (usually localhost)

Then, look for the 'Authentication unique keys and salts' section:

Simply generate a set of secret keys and paste them into this field. These keys will protect and fortify your WordPress installation. Once finished, save your changes and prepare to upload the files.
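If you prefer the command line, the same two steps, creating wp-config.php from the sample file and fetching fresh secret keys, can be sketched as follows (run from inside the extracted wordpress folder; WordPress generates new salts at api.wordpress.org/secret-key/1.1/salt/):

# Create wp-config.php from the bundled sample file
mv wp-config-sample.php wp-config.php

# Fetch a freshly generated set of authentication keys and salts
curl https://api.wordpress.org/secret-key/1.1/salt/

# Paste the output over the placeholder define('AUTH_KEY', ...) lines in wp-config.php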

  • Upload your WordPress files via FTP

You are now ready to install WordPress on your server. Get your FTP credentials from your hosting control panel. Then open FileZilla, log in to your server, and navigate to your root directory in the right panel. It's usually called www or public_html.

Navigate to the WordPress folder on your computer in the left panel. Follow the steps below depending on whether you're uploading it to your root directory or a subdirectory:

Root directory — Upload the files directly into the root directory, not the enclosing wordpress folder itself.

Subdirectory — Rename the WordPress folder to something unique before uploading the folder and its contents to your server.

Everything is now finished except for the actual installation.

  • Run the WordPress Installer

The WordPress installer must be run as the final step. Open your preferred browser and follow one of the following steps, depending on where you installed WordPress:

Root directory — Navigate to http://example.com/wp-admin/install.php. Subdirectory — Go to http://example.com/blogname/wp-admin/install.php, where "blogname" is the folder name you chose in the previous step.

You will see the WordPress logo as well as a screen with settings that may differ depending on your host:

Several of these options can be changed later from the General Settings screen. However, make a note of your username and password. Finally, click the Install WordPress button. When it's finished, you'll be able to access your brand-new website.

In Conclusion

If you haven't learned how to download and install WordPress, the process may appear to be difficult. But, believe me, even if you aren't technically savvy, you can get your WordPress up and running in less than five minutes. Choose a suitable web host to get faster results.

If you have any doubts about how to install WordPress, don't hesitate to contact us. Airzero Cloud will be your digital partner.

[email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

What is a Web Server?

A web server is a program that uses HTTP to serve files that create web pages for users in response to requests sent by their computer's HTTP clients.

Broadly speaking, a web server can be any server that delivers documents (HTML, XML, and other formats) to another device. So you'll type a URL into your browser and hit enter. That's all!

The location of your website's web server in the world makes no difference because the page you've browsed appears on your computer screen immediately.

Also look at Apache, IIS, NGINX, and GWS as examples of web servers. Which option do you prefer?

A web server's connection to the internet is never lost. Each web server has its own IP address, which is made up of four numbers ranging from 0 to 255, separated by periods.

Hosting providers can manage multiple domains (users) on a single server using the webserver.

A web hosting service provider rents out space on a server or cluster of servers to people so that they can create their own websites.

What are the Types of Web Servers?

Web servers are classified into four types:

  • Apache Web Server
  • IIS Web Server
  • Nginx Web Server
  • LiteSpeed Web Server

Apache Web Server

The Apache Software Foundation's Apache web server is one of the most popular web servers. Apache is software that runs on almost every operating system, including Linux, Windows, Unix, FreeBSD, Mac OS X, and others. Apache Web Server is used by approximately 60% of machines.

Because of its modular structure, an Apache web server can be easily customized. Because it is open-source, you can add your own modules to the server to make changes to suit your needs.

It is extremely stable in comparison to other web servers, and administrative issues on it are easily resolved. Apache can be successfully installed on multiple platforms.

When compared to earlier versions, Apache's latest versions give you the ability to handle more requests.

IIS Web Server

IIS, a Microsoft product, is a server that includes all of the features found in Apache. Because it is not open-source, adding and modifying personal modules is more difficult.

It is compatible with all platforms that run the Windows operating system.

Nginx Web Server

After Apache, Nginx is the next most popular open-source web server. It includes an IMAP/POP3 proxy server. Nginx's notable features include high performance, stability, ease of configuration, and low resource usage. Rather than using threads to handle requests, it employs a highly scalable event-driven architecture that uses a small and predictable amount of memory under load. It has recently gained popularity and now hosts approximately 7.5 percent of all domains globally.

LiteSpeed Web Server

LiteSpeed (LSWS), a commercial web server that is a high-performance Apache drop-in replacement, is the fourth most popular web server on the internet. When you upgrade your webserver to LiteSpeed, you will notice improved performance at a low cost.

This service works with the most common Apache features, including .htaccess, mod_rewrite, and mod_security. It can replace Apache in less than 15 minutes with no downtime. To simplify use and make the transition from Apache smooth and easy, LSWS replaces all Apache functions, which other front-end proxy solutions cannot do.

Apache Tomcat

Apache Tomcat is an open-source Java servlet container that also serves as a web server. A Java servlet is a Java program that extends the capabilities of a server. Servlets can respond to any type of request, but they are most commonly used to implement web-based applications. Sun Microsystems donated Tomcat's codebase to the Apache Software Foundation in 1999, and the project was promoted to top-level Apache status in 2005. It currently powers just under 1% of all websites.

Apache Tomcat, released under the Apache License version 2, is commonly used to run Java applications. However, it can be extended with Coyote to function as a standard web server, serving local files as HTTP documents.

Apache Tomcat is frequently listed alongside other open-source Java application servers. Wildfly, JBoss, and Glassfish are a few examples.

Node.js

Node.js is essentially a server-side JavaScript environment for network applications like web servers. Ryan Dahl originally wrote it in 2009. Despite its smaller market share, Node.js powers 0.2 percent of all websites. The Linux Foundation's Collaborative Projects program assists the Node.js project, which is managed by the Node.js Foundation.

Node.js employs an event-driven architecture that supports asynchronous I/O. Because of these design choices, throughput and scalability in web applications are optimized, allowing them to run real-time communication and browser games.

Lighttpd

Lighttpd, which is pronounced "lighty," was first released in March 2003. It currently powers about 0.1 percent of all websites and is available under a BSD license. It uses an event-driven architecture that is optimized for a large number of parallel connections and supports FastCGI, Auth, output compression, SCGI, URL rewriting, and a variety of other features. It's a popular web server for web frameworks like Catalyst and Ruby on Rails.

There are also some other types of servers, which are listed below:

  • Mail Server:
    A mail server has a centrally located pool of disk space for network users to store and distribute various documents in the form of emails. Because all data is stored in a single location, administrators only need to back up files from one computer.
  • Application Server:
    It is a collection of components that a software developer can access through an API defined by the platform itself. These components typically run in an environment similar to that of a web server for web applications. Their primary responsibility is to assist in the creation of dynamic pages.
  • File Transfer Protocol (FTP) Server:
    FTP uses separate control and data connections between the client and the server. Users can connect anonymously if the server is configured to allow it. For transmission security, the username and password should be encrypted using FTP over SSL.
  • Database Server:
    A database server is a computer program that provides database services to other computer programs through client-server functionality. Some DBMSs rely on the client-server model for database access. This type of server is accessed via a "front end" that runs on the user's computer, where the request is made, or a "back end," where services such as data analysis and storage are provided.
  • DNS (Domain Name System) Server:
    A name server is a computer server that hosts a network service that provides responses to queries. It maps a human-readable name to an addressing component, usually a numerical identification. DNS also helps define the Internet namespace, which is used to identify and locate computer systems and resources on the Internet.

Conclusion

Web hosting providers primarily select web servers based on client needs, the number of clients on a single server, the applications/software clients use, and the amount of client-generated traffic a web server can handle. So, when selecting a web server, consider all of these factors first, and then choose one. If you have any doubt about web servers and web server types, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

When you choose the Microsoft Outlook program from the Start menu or a shortcut icon but Outlook does not open, the correct solution depends on what you're experiencing and the version of Microsoft Outlook you're using.

  • Reasons for Outlook Not Opening

Outlook may open incorrectly or not at all due to a variety of issues. Problematic add-ins are one of the most common culprits. Other causes include:

  • Damaged files.
  • A corrupted profile.
  • Problems with the navigation pane.

How to Resolve Outlook Not Opening Issues in Windows?

If you use Outlook on a Windows computer and it won't open or opens with errors, try the troubleshooting steps below in the order presented, from simplest to most complicated.

If you use Microsoft 365 on a PC or a Mac, the automated Support and Recovery Assistant tool can diagnose and resolve a variety of issues, including Microsoft Outlook not starting.

Launch Outlook in Safe Mode. If Outlook opens normally in Safe Mode, the problem is likely caused by an add-in or toolbar extension.
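One quick way to do this on a default installation is the /safe startup switch: press Windows Key+R, type the following, and select OK:

outlook.exe /safe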

Disable any add-ins. One or more add-ins may be incompatible with Outlook and cause the issue.

  • Disable all add-ins and see if that resolves the problem.

    • Navigate to File > Options > Add-ins.
    • Select Go in the Manage section.
    • Remove the checkmarks from the checkboxes next to the add-ins you want to disable.
    • Choose OK.
  • Outlook should be repaired. The Outlook application could be damaged. To repair it, use the built-in Microsoft Office repair utility.

    • Close all Office programs.
    • Navigate to Start > Control Panel.
    • Choose Category View.
    • Select Uninstall a Program in the Programs area.
    • Change can be accessed by right-clicking Microsoft Office.

Choose either Online Repair or Repair. If a user account control prompt occurs, choose Yes.

After the process is finished, restart Outlook.

You should repair your Outlook profile. Outlook profiles can become corrupted, resulting in a variety of issues, including Outlook not opening.

  • Account Settings can be accessed by going to File > Account Settings > Account Settings.
  • Navigate to the Email tab.
  • Select Repair to launch the Repair wizard
  • Follow the on-screen instructions to finish the wizard and restart Outlook.

  • Outlook data files must be repaired. If Outlook still does not open, use the Inbox Repair tool to identify and possibly resolve the issue.

    • Exit Outlook.
    • Download and run Microsoft's Inbox Repair tool.
    • Select Browse, then navigate to your personal folders (.pst) file, and then press the Start button.
    • If the scan reveals any errors, select Repair.
    • Restart Outlook once the repair is finished.

The navigation pane should be reset. Outlook may not open properly if there is a problem with the navigation pane during startup. Resetting the navigation pane may help to resolve the problem.

  • Exit Outlook.
  • Go to Start > Run, or press the Windows Key+R combination.
  • Select OK after typing or pasting outlook.exe /resetnavpane.
  • Launch Outlook. The navigation pane will be re-created.

How to Resolve Outlook Not Opening on a Mac?

The troubleshooting techniques listed below are applicable to Outlook 2016 for Mac and Outlook 2011 for Mac.

Check for updates. A recent update may have included a fix for the problem of Outlook not starting. Even if you can't open Outlook, check for and install any available updates.

  • Check for updates by going to Help > Check for Updates.
  • Select Update to download and install any updates that are available.

Rebuild Outlook's database. Using the Microsoft Database Utility to rebuild a corrupted database may resolve the issue of Outlook not opening on a Mac.

  • Close all Office programs.

  • To open the Microsoft Database Utility, press the Option key and select the Outlook icon in the Dock.

  • Choose the database whose identity you want to rebuild.

  • Select Rebuild.

Restart Outlook once the process is finished.

Airzero Cloud is a cloud hosting service that offers compute power, database storage, content delivery, and various business integration tools.

If you have any questions about what to do when Microsoft Outlook won't open, please let us know. If you have any doubts, please do not hesitate to contact us. Airzero Cloud will be your digital companion.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Today, we'll look at how to create a Web app on Azure. We're doing this with the most recent Azure Cloud portal view. To create a Web app on Azure, follow the steps outlined below.

  • Log in to the Azure portal using your credentials.
  • You will be directed to the portal page, which displays the dashboard. From the left side of the page, select the New option.
  • When you select it, the Marketplace options will open. In this case, we must select the Web + Mobile option. When we select it, we will be taken to the Web App dashboard.
  • Here, we must select Web App from the bottom of the page.

On this page, we will find a description of the Web app, read it to learn about its use and features, then click the Create button.

Here, we must assign an app name and subscription, as well as a Resource Group and an App Service plan/location. We must also select Application Insights and then click the Create option.

After you click Create, your Web app will be created after some time.

Your first Web app on Azure has been successfully created and deployed. We can now manage the app service on Azure, and we can use Visual Studio to open the publish profile that we have downloaded.
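If you prefer scripting over the portal, roughly the same result can be achieved with the Azure CLI. The sketch below is only illustrative; the resource group, plan, and app names are placeholders you would replace with your own:

# Create a resource group, an App Service plan, and the web app itself
az group create --name MyResourceGroup --location westeurope
az appservice plan create --name MyPlan --resource-group MyResourceGroup --sku FREE
az webapp create --name my-unique-webapp-name --resource-group MyResourceGroup --plan MyPlan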

Airzero Cloud is a cloud hosting service that provides compute power, database storage, content delivery, and other business integration tools.

Please let us know if you have any questions about how to create a web app in the Azure portal; do not hesitate to contact us. Airzero Cloud will be your digital companion.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

What is a RAID Controller?

A RAID controller is a hardware device or software program that manages hard disk drives or solid-state drives in a computer or storage array so that they can function as a logical unit. A RAID controller protects stored data while also potentially improving computing performance by speeding up access to stored data.

  • A RAID controller acts as a bridge between an operating system and the physical drives.

A RAID controller presents groups or portions of drives to applications and operating systems as logical units for which data protection schemes can be defined. Even though they may consist of parts of multiple drives, the logical units appear to applications and operating systems as single drives. Because the controller can access multiple copies of data across multiple physical devices, it can improve performance and protect data in the event of a drive failure.

There are approximately ten different RAID configurations available, as well as numerous proprietary variations of the standard set of RAID levels. A RAID controller will support a single RAID level or a group of related levels.

  • Hardware vs. software RAID controllers

A physical controller is used to manage the array in hardware-based RAID. The controller can be a PCI or PCI Express card designed to support a specific drive format such as SATA or SCSI. Hardware RAID controllers are also known as RAID adapters.

Hardware controller prices vary significantly, with desktop-capable cards available for less than $50. More sophisticated hardware controllers capable of supporting shared networked storage are considerably more expensive, typically ranging from a few hundred dollars to more than a thousand dollars.

LSI, Microsemi Adaptec, Intel, IBM, Dell, and Cisco are just a few of the companies that currently provide hardware RAID controllers.

When choosing a hardware RAID controller, you should consider the following key features:

  • Interfaces for SATA and/or SAS (and their related throughput speeds)
  • Supported RAID levels
  • Compatibility with operating systems
  • Supported device count
  • Read performance
  • IOPS rating
  • PCIe interface and cache size
  • Encryption capabilities
  • Energy consumption

A controller can also be software-only, using the host system's hardware resources, especially the CPU and DRAM. Although software-based RAID delivers the same functionality as hardware-based RAID, its performance is typically inferior to that of the hardware versions.

Because no special hardware is needed, the main benefits of using a software controller are flexibility and low cost. However, it is crucial to ensure that the host system's processor is powerful enough to run the software without negatively impacting the performance of other applications running on the host.

RAID controller software is contained in some operating systems. For example, RAID capabilities are provided by Windows Server's Storage Spaces facility. Most enterprise-class Linux servers include RAID controller software in the form of the Linux mdadm utility.
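As an illustration of software RAID on Linux, a mirrored (RAID 1) array can be created with mdadm roughly as follows. This is only a sketch and assumes two spare disks, /dev/sdb and /dev/sdc, that you can safely overwrite:

# Create a RAID 1 (mirror) array from two disks
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Check the status of the array
cat /proc/mdstat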

Third-party software RAID controllers, such as SnapRAID, Stablebit DrivePool, SoftRAID, and FlexRAID, are also available. These programs are typically adequate for small installations but may not meet the storage performance and capacity requirements of business environments.

Some commercially available storage arrays use software RAID controllers, but the software is typically developed and enhanced by the storage vendor to provide adequate performance. Furthermore, storage systems with software controllers are typically built around powerful processors dedicated to controlling and managing the shared storage system.

Airzero cloud is a cloud hosting service that provides compute power, database storage, content delivery, and a variety of other functions that aid in business integration.

If you have any doubt about RAID controllers, don't hesitate to contact us. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Introduction

Backing up the server you manage is critical if you are a system administrator and want to avoid losing important data. Setting up periodic backups allows you to restore the system in the event of an unexpected event, such as a hardware component failure, incorrect system configuration, or the presence of viruses. Use Microsoft's Windows Server Backup (WSB) solution to quickly and easily schedule backups of both the entire server and specific storage volumes, files, or folders if you use Windows Server. This blog will walk you through the steps needed to perform an automatic backup of your Cloud Server using Windows Server 2019.

  • Installing Windows Server Backup

Windows Server Backup is a Microsoft feature that allows you to create a backup copy of your server.

To begin, open the Windows Server Management Panel Dashboard, click "Add roles and features," and then install this backup feature.

On the left, a window with several sections will be displayed. You may proceed without providing the information requested in the first section "Before You Begin." Then, in the second window, "Installation Type," select "Role-based or feature-based installation" and continue.

Select the server where you want to install Windows Server Backup in the "Server Selection" section and proceed. Continue by clicking "Next" in the "Server Roles" section. Open the "Features" window, then scroll down and select the "Windows Server Backup" item.

Select "Restart the destination server automatically if required" in the "Confirmation" section and click "Install." Then, after the installation is complete, click "Close."

As a result, Windows Server Backup (WSB) is correctly installed. Start it now and configure it. The tool is accessible via the "Tools" menu in the Server Manager.

  • Configuring automatic backup

Once Windows Server Backup is open, select Local Backup on the left and then Backup Schedule on the right to configure the automatic backup rules.

A window with several sections will appear. To begin, simply click "Next" in the "Getting Started" section. Then, in "Select Backup Configuration," leave the entry "Full server" unchecked if you want to back up the entire system. Otherwise, select "Custom " to back up only a subset of volumes, files, or folders. Finally, click "Next" to proceed to the following section.

In the "Specify Backup Time" section, specify whether to back up once a day at a specific time or to perform multiple backups at different times on the same day.

If you selected "More than once a day," simply select the desired time in the left column and click "Add." To delete an incorrect time, simply click on the time in the right column and select "Remove." Once the backup frequency has been determined, click "Next."

You will be asked where you want to save your backup file in the "Specify Destination Type" section. Each storage option has advantages and disadvantages, which are detailed in its own section. As a result, think carefully about where you'll keep your backup.

There are three possibilities:

  • Saving to a local hard disk: If this option is selected, the backup will be performed on a local hard disk installed on the server itself. Please keep in mind that once selected, the hard disk in question will be formatted, so remember to back up any existing data to another hard disk.

  • Saving to a volume: By selecting this option, you can use a portion of your hard disk as backup storage. However, keep in mind that if you choose this option, the hard disk's read/write speed may slow significantly during the backup phase. If you intend to use this method, it may be a good idea to schedule the backup during times when your server receives fewer requests.

  • Saving to a network folder: By selecting this option, your server can be backed up to a network hard disk. This will allow you to store your data on a NAS or another available Cloud Server. However, because it is overwritten each time, only one backup can be saved at a time in this case.

After you've chosen the best option for you, click "Next." The option "Saving to volume" is selected in this example.

The "Confirmation" section now displays a summary of the backup settings you've selected. To schedule the backup, click "Finish." When you're finished, click "Close."

Conclusion

You have now successfully scheduled your first automatic backup on Windows Server 2019. Windows Server Backup will back up your data based on the frequency and storage options you specify, preserving a copy of the files on your server.

Backups of your Windows server should be scheduled at all times because they allow you to restore data and settings if something goes wrong, such as defective hardware, incorrect system configuration, or the presence of malware. Also, keep in mind the benefits and drawbacks of the various backup methods available to avoid file inaccessibility or backup overwrites.

Airzero cloud is a cloud hosting service that offers services such as compute power, database storage, content delivery, and a variety of other functions that will aid in the integration of a business.

If you have any doubt about how to schedule automatic backups on Windows Server 2019, don't hesitate to contact us through the given email.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Introduction

MongoDB is a document database that is widely used in modern web applications. It is classified as a NoSQL database because it does not rely on a traditional table-based relational structure.

Instead, it uses JSON-like documents with dynamic schemas, which means that, unlike relational databases, MongoDB does not require a predefined schema before data is added to a database. You can change the schema at any time and as often as you like without having to set up a new database with a new schema. In this blog, you will install and test MongoDB on an Ubuntu 20.04 server and learn how to manage it as a system service.

Prerequisites

You will need the following items to follow this blog:

One Ubuntu 20.04 server. This server should have a non-root administrative user and a UFW-enabled firewall.

Step 1 — Installing MongoDB

A stable version of MongoDB is available in Ubuntu's official package repositories. However, as of this writing, the latest stable release of MongoDB is 4.4, and the version available from the default Ubuntu repositories is 3.6. To get the most up-to-date version of this software, add MongoDB's dedicated package repository to your APT sources. Then you can install mongodb-org, a meta-package that always points to the most recent version of MongoDB. To begin, run the following command to import the public GPG key for the most recent stable version of MongoDB. If you would like to use a version of MongoDB other than 4.4, make sure to change 4.4 in the URL portion of this command to match the version you want to install:

curl -fsSL https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -
  • cURL is a command-line tool, available on many operating systems, that is used to transfer data.
  • It reads the data stored at the URL passed to it and prints it to the system's output. It's also worth noting that this curl command includes the -fsSL flags, which tell cURL to fail silently. This means that if cURL is unable to contact the GPG server, or the GPG server is down, it will not inadvertently add the resulting error code to your list of trusted keys.

If the key was successfully added, this command will return OK:

Output
OK
  • If you want to double-check that the key was correctly added, use the following command:

apt-key list

  • The MongoDB key will be returned somewhere in the output:

Output

/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2019-05-28 [SC] [expires: 2024-05-26]
      2069 1EEC 3521 6C63 CAF6  6CE1 6564 08E3 90CF B1F5
uid           [ unknown] MongoDB 4.4 Release Signing Key 
. . .

At this point, your APT installation still doesn't know where to find the mongodb-org package you need in order to install the latest version of MongoDB. APT looks for online sources of packages to download and install on your server in two places: the sources.list file and the sources.list.d directory. sources.list is a file that lists active APT data sources, one per line, with the most preferred sources listed first. The sources.list.d directory lets you add such sources as separate files. Run the following command to create a file called mongodb-org-4.4.list in the sources.list.d directory. This file contains only one line of text:

echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list

This single line informs APT of everything it needs to know about the source and where to look for it: deb: This indicates that the source entry refers to a standard Debian architecture.

In other cases, this part of the line may read deb-src, indicating that the source entry represents the source code of a Debian distribution

[ arch=amd64,arm64 ]:

This specifies which architectures should receive APT data. It specifies the amd64 and arm64 architectures in this case.

https://repo.mongodb.org/apt/ubuntu:

This is a URI that represents the location of the APT data.

In this case, the URI refers to the HTTPS address of the official MongoDB repository.

focal/mongodb-org/4.4: 

Ubuntu repositories may include multiple releases. This tells APT that you want version 4.4 of the mongodb-org package for the focal release.

After adding the repository, update your server's local package index so APT knows where to find the mongodb-org package:

sudo apt update

Following that, you can install MongoDB:

sudo apt install mongodb-org

When prompted, enter Y followed by ENTER to confirm that you want to install the package. MongoDB will be installed on your system once the command is completed. It is, however, not yet ready for use. After that, you'll start MongoDB and verify that it's operational.

Step 2: Launch the MongoDB Service and Test the Database

The previous step's installation automatically configures MongoDB to run as a daemon controlled by systemd, which means you can manage MongoDB using the various systemctl commands. This installation procedure, however, does not start the service automatically. To start it, use the systemctl command:

sudo systemctl start mongod.service

Then, check the status of the service. It's worth noting that this command omits the .service suffix from the service file name; systemctl infers it automatically, so it's not necessary to include it:

sudo systemctl status mongod

This command will produce the following output, indicating that the service is operational:

Output

● mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
     Active: active (running) since Tue 2020-06-09 12:57:06 UTC; 2s ago
       Docs: https://docs.mongodb.org/manual
   Main PID: 37128 (mongod)
     Memory: 64.8M
     CGroup: /system.slice/mongod.service
             └─37128 /usr/bin/mongod --config /etc/mongod.conf

Next, enable the MongoDB service to start automatically at boot:

sudo systemctl enable mongod

You can confirm that the database is operational further by connecting to the database server and running a diagnostic command. The command given will connect to the database and output its current version, server address, and port. It will also return the outcome of the MongoDB internal connectionStatus command:

mongo --eval 'db.runCommand({ connectionStatus: 1 })'
The connectionStatus command will check and return the status of the database connection. A value of 1 in the ok field of the response indicates that the server is functioning normally:

Output
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("1dc7d67a-0af5-4394-b9c4-8a6db3ff7e64") }
MongoDB server version: 4.4.0
{
    "authInfo" : {
        "authenticatedUsers" : [ ],
        "authenticatedUserRoles" : [ ]
    },
    "ok" : 1
}

Also, keep in mind that the database is running on port 27017 on 127.0.0.1, which is the local loopback address for localhost. Following that, we'll look at how to use systemd to manage the MongoDB server instance.
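As an optional extra check, you can confirm from the shell that mongod is listening on that address and port (ss ships with Ubuntu 20.04 as part of iproute2):

sudo ss -lntp | grep 27017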

Step 3: Managing the MongoDB Service

You can manage the MongoDB service using standard systemctl commands, just like you would other Ubuntu system services.

The systemctl status command, as previously mentioned, checks the status of the MongoDB service:

sudo systemctl status mongod
  • You can stop the service at any time by typing:
sudo systemctl stop mongod
  • To restart the service after it has been stopped, type:
sudo systemctl start mongod
  • When the server is already running, you can restart it:
sudo systemctl restart mongod

In Step 2, you enabled MongoDB to start with the server automatically. If you ever want to turn off the automatic startup, type:

sudo systemctl disable mongod

Then, to re-enable it to start up at boot, use the enable command again:

sudo systemctl enable mongod

Systemd Essentials: Working with Services, Units, and the Journal contains more information on how to manage systemd services.

Conclusion

You added the official MongoDB repository to your APT instance and installed the latest version of MongoDB in this blog. You then practised some systemctl commands and tested Mongo's functionality.

If you have any questions about installing MongoDB on Ubuntu 20.04, please do not hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is a leading web hosting company with a variety of useful tools. We will help you expand your business.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

In this blog, we will look at how to enable remote access to a MySQL database on an Ubuntu machine.

  • In the MySQL Configuration file, allow connections from clients other than localhost.

In the configuration file, allow MySQL connections from other clients. The MySQL configuration file will be located in the /etc/mysql/mysql.conf.d directory and will be named mysqld.cnf. MySQL is configured by default to accept connections only from localhost, i.e. 127.0.0.1; we must change this to 0.0.0.0 to allow connections from other clients.

Change,

bind-address = 127.0.0.1

to

bind-address = 0.0.0.0
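A minimal sketch of this step from the terminal, assuming the default file path mentioned above (on Ubuntu the service is named mysql):

# Edit the bind-address line in the MySQL configuration file
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

# Restart MySQL so the new bind-address takes effect
sudo systemctl restart mysql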
  • In the Ubuntu Machine's firewall, whitelist the client’s IP address.

The Ubuntu machine includes the Ubuntu Firewall, which by default does not permit incoming connections to MySQL port 3306. As a result, we must open the port for the client's specific IP address or, if your client does not have a fixed IP address, for all IP addresses.

Assume your client has the IP address 50.75.120.81. The following line on your terminal will enable incoming connections to port 3306 from a client with the IP address 50.75.120.81:

sudo ufw allow from 50.75.120.81 to any port 3306

If your client does not have a fixed IP address or if you need to allow all IP addresses (not recommended as anyone can attempt to connect to 3306),

sudo ufw allow 3306
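One related point worth checking, although it is outside the firewall step itself: MySQL accounts are tied to a host, so a user created as 'user'@'localhost' still cannot log in remotely even with the port open. A hedged sketch of creating an account that is allowed to connect from the client's IP (the user, password, and database names below are placeholders):

sudo mysql -e "CREATE USER 'remoteuser'@'50.75.120.81' IDENTIFIED BY 'a-strong-password';"
sudo mysql -e "GRANT ALL PRIVILEGES ON exampledb.* TO 'remoteuser'@'50.75.120.81';"
sudo mysql -e "FLUSH PRIVILEGES;"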

If you have any questions about how to enable remote MySQL database access on an Ubuntu machine, please do not hesitate to contact us. Airzero Cloud will be your digital partner.

Airzero Cloud is a fantastic web hosting service provider with an array of powerful tools. We will assist you in growing your business.

Email id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/