Airzero Cloud

Next Generation Cloud!


Hosting is a service through which storage and computing resources are provided to an individual or organization for the accommodation and maintenance of one or more websites and related services.

You may have heard about the mining fever in the cryptocurrency world – especially Bitcoin mining with your PC – but how does one become a miner, and precisely what kind of tools do you need to mine?

Well-set-up mining rigs, and the miners who got in early, have been paid significantly. Just like in the gold rush days, you could become very wealthy without needing much luck at all.

But as of right now, is it too late to mine?

In this blog, we’ll give you all the details you need, plus tips on how to set up an excellent rig to mine your very own Bitcoin.

What are the system requirements for Bitcoin mining?

Hardware : Model

Motherboard : Asus B250 Mining Expert

CPU : Intel Celeron / Pentium / Ivy Bridge

Graphics Card : Nvidia GTX 970 / AMD Vega Frontier Edition

Power Supply : Corsair HX1200i / EVGA SuperNOVA 1600

RAM and Storage : Newegg Patriot Memory 4GB DDR

Mining Bitcoin is a highly competitive business… So keep in mind that if you want to run your own Bitcoin mining operation, these specs are not adequate. But if you are planning to join a Bitcoin mining pool, then this setup can be profitable.

What is bitcoin mining?

Now before we grab our shovels to begin mining digital gold, let’s understand how new bitcoin is created and how you can earn bitcoin as a reward in the first place. To explain simply, let’s use Alice and Bob, and we’ll refer to them as A and B for short. When A sends B a transaction, Alice’s transaction gets placed in a ‘block’ on the blockchain. A blockchain is simply “a chain of blocks”, one right after the other. Literally, visualize blocks bound together by a chain. To get a block ‘confirmed’ by the blockchain, miners are required to solve complex cryptographic puzzles to secure these blocks.
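The “puzzle” is essentially a brute-force search for a lucky hash. Here is a toy Python sketch of the idea; real Bitcoin hashes block headers with double SHA-256 at astronomically higher difficulty, so treat this only as an illustration:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce such that sha256(block_data + nonce) starts with
    `difficulty` zero hex digits: a toy version of Bitcoin's puzzle."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding 4 leading zero hex digits takes ~65,000 attempts on average.
nonce = mine("A pays B 1 BTC", 4)
print(nonce)
```

The more leading zeros you demand, the exponentially longer the search takes, which is exactly how the network tunes its difficulty.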

This is where miners come in: when they are able to solve the puzzles, they get rewarded in two ways.

  • The Cryptocurrency itself
  • Transaction Fees

Every block carries a fixed amount of crypto for the miner to take, plus the transaction fees of all the tx’s in the block. When Alice sends money to Bob, she also sets an amount as a fee for the miner to take.

And each block can fit many tx’s.

So per block mined, the miner earns the set block reward plus the sum of all the tx fees in the block.

Pretty nice right?

Presently, the block reward for Bitcoin is 6.25 BTC plus the tx fees per block. Every four years, the reward halves; it will be decreased to 3.125 BTC per block in 2024. All of this helps the blockchain network stay secure and able to process trades and money anywhere in the world.
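As a quick sanity check, the subsidy schedule above can be computed in a few lines of Python. This is a rough sketch: in reality the halving is tied to block height rather than calendar years, and fee totals vary per block:

```python
def block_reward(year: int) -> float:
    """Approximate Bitcoin block subsidy (BTC) for a given year.
    Halvings land roughly every four years: 2012, 2016, 2020, 2024..."""
    halvings = max(0, (year - 2008) // 4)
    return 50 / (2 ** halvings)

def miner_revenue(year: int, total_fees: float) -> float:
    """A miner's take for one block: subsidy plus all transaction fees."""
    return block_reward(year) + total_fees

print(block_reward(2021))         # 6.25
print(block_reward(2024))         # 3.125
print(miner_revenue(2021, 0.35))  # 6.6 (0.35 BTC of fees is illustrative)
```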

What do you need to become a Bitcoin miner?

There are two separate ways to become a good Bitcoin miner. The first is to have the hardware capability to compete with the massive data centers that use the latest ASIC hardware to mine. To compete with them, you’ll need to somehow source ASIC cards, which are extremely hard to come by as the supply is limited. If you can, then you may be able to out-mine them. The major areas where miners operate are China, Europe, the USA, and Canada.

The second method is joining a mining pool. Mining pools are essentially made up of thousands of individual miners who combine their hash power to mine. Rewards are then split across the pool based on the hash power contributed. Either way, you’ll need to first set up your own mining rig to mine the digital gold. Keep in mind the computer will be running 24/7, so depreciation of the GPU and electricity prices need to be factored in. Getting a good power supply is important, as you want to maximize how efficiently your PC uses electricity. The next thing is getting the highest tier of graphics card you can. If you can go for the new Nvidia RTX 3000 series, then do it. But you won’t need it if you’re planning to join a mining pool. Because mining is a graphics-card-intensive activity, you won’t need a high-tier CPU.

A basic one is more than enough. As for RAM, 4GB is also fine for the rig. Also, BTC is not the only thing you can mine. There are many other digital currencies that can be mined for profit on a smaller scale. Coins such as Monero and Litecoin are great examples of digital currencies that are still profitable to mine with your PC. You can examine the mining data on coins you’re curious about, such as the hash rate and the current difficulty, and with that data you will be able to make the right decision.
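The pool payout mentioned above, splitting the block reward in proportion to contributed hash power, can be sketched like this (a simplified model; real pools use payout schemes such as PPS or PPLNS and deduct a pool fee, and the names and hash rates here are made up):

```python
def split_reward(block_reward: float, hashrates: dict) -> dict:
    """Split a block reward across pool members in proportion to
    each member's share of the pool's total hash power."""
    total = sum(hashrates.values())
    return {miner: block_reward * rate / total
            for miner, rate in hashrates.items()}

# Illustrative hash rates in MH/s for a 6.25 BTC block.
payouts = split_reward(6.25, {"alice": 300, "bob": 100, "carol": 100})
print(payouts)  # alice: 3.75, bob: 1.25, carol: 1.25
```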

Bitcoin mining may seem like something you can do on the side, but if you are considering starting, we suggest you run it like a real business. Keep track of all profits and expenses. Once you have everything set up and ready to go, it can still be profitable if you have accounted for the depreciation of your assets and your electricity costs.

If you have any doubts about the requirements for Bitcoin mining, don’t hesitate to contact us through the email below. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

An NFT marketplace is a dedicated place to display creative works, collectibles, or any other form of digital asset, making them available for effective asset management and crypto trading along with improved token utility. With non-fungible tokens giving individual creators an efficient stage to showcase their significance and value, expressing and showcasing rare collectibles and digital assets is made easy.

What is NFT?

Non-fungible tokens, or NFTs, are pieces of digital content linked to the blockchain, the digital database underpinning cryptocurrencies such as Bitcoin and Ethereum. Unlike NFTs, those cryptocurrencies are fungible, meaning they can be substituted or traded for another equivalent one of the same value, much like a dollar bill.

How to Know When You're Prepared to Deploy to a Live Site

  • You are ready to share your NFT with the world!

By now, you have likely already made an NFT on your local machine; however, you may now want to share your creation with family, friends, or potential customers. To take the next step, this usually means hosting your minting scripts/frontend apps on an online web hosting service so that anyone can take part in your NFT project! Many of these services provide a public URL so that anyone can explore and visit your website online.

  • Infrastructure constraints on your local computing environment

For users whose operating systems do not natively support the web packages required to run your app, deploying to a cloud computing provider lets them build and run code anyway. Cloud services often provide an option for developers that will not require you to modify your hardware, configure VMs, or add more infrastructure to your local environment.

Deploying Your Code to a Live Website

Pick a Web Hosting Service!

When deploying your code online, developers first need to select an online web hosting service that sufficiently suits their requirements! For this stage, you have many options.

Here are a few services that are typically used for consumer-grade web applications:

  • Heroku
  • DigitalOcean
  • PythonAnywhere

Hints & Tips on Web Hosting!

Each web hosting service has its own design parameters and quirks, so after picking a web hosting provider, you should refer to their official documentation to get the latest and most up-to-date info on getting started! However, there are a few areas that will likely differ from your experience deploying on your local machine: managing environment variables and maintaining uptime.

Creating Environment Variables

Normally, environment variables are stored in a .env file on our local machine. With some online web hosting services, this is not the case. For example, on Heroku, we define Heroku-specific environment variables through the Heroku command-line interface. To set an environment variable on Heroku for your Alchemy key, for instance, we would execute the following command:

heroku config:set KEY="< YOUR ALCHEMY KEY >"

Then, to ensure that it is correctly configured, you can view environment variables on Heroku with: heroku config

If configured correctly, your Heroku environment variables should look similar to this:

For other web hosting services, this setup procedure might look different. For instance, with DigitalOcean, you can even set environment variables within your account's dashboard UI!
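However the variable is set, whether via a local .env file, `heroku config:set`, or DigitalOcean's dashboard, your application reads it the same way at runtime. A minimal Python sketch (the variable name ALCHEMY_KEY is just an example, and the setdefault call below only simulates the hosting platform's configuration):

```python
import os

def get_required_env(name: str) -> str:
    """Read a configuration value from the environment, failing loudly
    if it was never set on the hosting platform."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(
            f"Environment variable {name} is not set; "
            "configure it on your hosting provider."
        )
    return value

# Simulate the platform-provided variable for this demo only.
os.environ.setdefault("ALCHEMY_KEY", "demo-key")
print(get_required_env("ALCHEMY_KEY"))
```

Failing loudly at startup is usually better than letting a missing key surface later as a confusing API error.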

Maintaining Uptime

While many web hosting services offer good uptime for dashboards/scripts, free-tier versions may not deliver enough capacity for production-grade applications. On some services, applications that are not accessed within a specific time frame are put into a "sleeping state" and may not be able to serve content when hit with a POST or GET request.

If you want your cloud-hosted dashboards/scripts to stay awake for a longer period of time, you may need to pay for more computational budget or run scheduled jobs at regular intervals to ensure you have full uptime.
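One common workaround is a small scheduled job that pings the app's public URL every few minutes so a free-tier instance never idles. Here is a sketch of the scheduling loop; the ping is injected as a callback so the example runs without a network, but in practice it would be an HTTP GET against your app's URL (for example with urllib.request.urlopen):

```python
import time

def keep_alive(ping, interval_seconds: float, max_pings: int) -> int:
    """Call ping() every interval_seconds, up to max_pings times.
    A real keep-alive would loop indefinitely from a cron job or
    scheduler, with interval_seconds set to a few minutes."""
    count = 0
    while count < max_pings:
        ping()  # in production: an HTTP GET to your hosted app
        count += 1
        time.sleep(interval_seconds)
    return count

hits = []
total = keep_alive(lambda: hits.append(time.time()), 0.01, 3)
print(total)  # 3
```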

If you have any doubts about NFT hosting, don’t hesitate to contact us through the email below. Airzero Cloud will be your digital partner.

Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

In this blog, you will use Certbot to obtain an SSL certificate for Apache on Ubuntu 18.04 and set your certificate to renew automatically.

This blog will use an Apache virtual host file instead of the default configuration file.

We recommend creating Apache virtual host files for each domain because it helps avoid common mistakes and keeps the default files as a fallback configuration.

What are the prerequisites needed?

To follow this blog, you will need:

  • One Ubuntu 18.04 server set up by following the initial server setup for Ubuntu 18.04 tutorial, including a sudo non-root user and a firewall.
  • A fully registered domain name. This tutorial will use your_domain as an example throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
  • Both of the following DNS records set up for your server: an A record with your_domain pointing to your server’s IP address, and an A record with www.your_domain pointing to your server’s IP address.

What are the steps to secure apache?

  • Step 1 — Installing Certbot: The first step in using Let’s Encrypt to obtain an SSL certificate is to install the Certbot software on your server.

Certbot is under very active development, so the Certbot packages provided by Ubuntu tend to be outdated. However, the Certbot developers maintain an Ubuntu software repository with up-to-date versions, so use that repository instead.

First, add the repository:

sudo add-apt-repository ppa:certbot/certbot

You’ll be prompted to press ENTER to accept. Then install Certbot’s Apache package with apt:

sudo apt install python-certbot-apache

Certbot is now ready to use, but in order for it to configure SSL for Apache, we need to verify some of Apache’s configuration.

Step 2 — Setting Up the SSL Certificate: Certbot needs to be able to find the correct virtual host in your Apache configuration in order to automatically configure SSL. Specifically, it does this by looking for a ServerName directive that matches the domain.

You should have a VirtualHost block for your domain at /etc/apache2/sites-available/ with the ServerName directive already set appropriately. Open the virtual host file for your domain in your text editor.

Identify the ServerName line. It should look like this:

ServerName your_domain

If it does, exit your editor and move on to the next step. Then verify the syntax of your configuration edits:

sudo apache2ctl configtest

If you get an error, reopen the virtual host file and check for any typos. Once your configuration file’s syntax is correct, reload Apache to apply the new configuration:

sudo systemctl reload apache2

Certbot can now find the correct VirtualHost block and update it. Next, update the firewall to allow HTTPS traffic.

Step 3 — Enabling HTTPS Through the Firewall

If you have the ufw firewall enabled, you’ll need to adjust the settings to allow HTTPS traffic.

Apache registers numerous profiles with ufw upon installation. You can see the current setting by entering:

sudo ufw status

It will look like this, meaning that only HTTP traffic is allowed to the web server:

Status: active
To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Apache                     ALLOW       Anywhere  
OpenSSH (v6)               ALLOW       Anywhere (v6)
Apache (v6)                ALLOW       Anywhere (v6)

To let in HTTPS traffic, allow the Apache Full profile and delete the redundant Apache profile allowance:

sudo ufw allow 'Apache Full'
sudo ufw delete allow 'Apache'

Your status should now look like this:

sudo ufw status

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Apache Full                ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Apache Full (v6)           ALLOW       Anywhere (v6)

Next, let’s execute Certbot and fetch our certificates.

Step 4 — Obtaining an SSL Certificate: Certbot provides multiple ways to obtain SSL certificates through plugins. The Apache plugin will reconfigure Apache and reload the configuration whenever necessary. To use this plugin, enter the following:

sudo certbot --apache -d your_domain -d www.your_domain

This runs certbot with the --apache plugin, using -d to specify the names you’d like the certificate to be valid for. While executing certbot, you will be prompted to enter an email address and agree to the terms of service. After that, certbot will ask how you’d like to configure your HTTPS settings:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for

new sites, or if you're confident your site works on HTTPS. You can undo this change by editing your web server's configuration.

Select the correct number [1-2] then [enter] (press 'c' to cancel):

Select your choice, then hit ENTER. The configuration will be updated, and Apache will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored.


Your certificates are now downloaded, installed, and loaded. Try reloading your website using https:// and notice your browser’s security indicator. It should indicate that the site is properly secured, usually with a green lock icon.

Step 5 — Verifying Certbot Auto-Renewal

The certbot package we installed takes care of renewals by adding a renew script to /etc/cron.d, which is managed by a systemd timer called certbot.timer.

To check the status of this service and make sure it’s active and running, you can use:

sudo systemctl status certbot.timer

You’ll get output similar to this:

certbot.timer - Run certbot twice daily
   Loaded: loaded (/lib/systemd/system/certbot.timer; enabled; vendor preset: enabled)
   Active: active (waiting) since Tue 2020-04-28 17:57:48 UTC; 17h ago
  Trigger: Wed 2020-04-29 23:50:31 UTC; 12h left
 Triggers: certbot.service

Apr 28 17:57:48 fine-turtle systemd[1]: Started Run certbot twice daily.

To test the renewal process, you can do a dry run with certbot:

sudo certbot renew --dry-run
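Under the hood, certbot only renews certificates that are within 30 days of expiry (Let’s Encrypt certificates last 90 days). That decision can be sketched with the Python standard library; the date strings below are illustrative, not taken from a real certificate:

```python
from datetime import datetime, timedelta

RENEW_WINDOW_DAYS = 30  # certbot renews certs with <30 days remaining

def should_renew(not_after: str, now: datetime) -> bool:
    """Decide whether a certificate expiring at `not_after`
    (OpenSSL's 'Jun  1 12:00:00 2021 GMT' format) is due for renewal."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return expires - now < timedelta(days=RENEW_WINDOW_DAYS)

now = datetime(2021, 5, 10)
print(should_renew("Jun  1 12:00:00 2021 GMT", now))  # True: ~22 days left
print(should_renew("Aug  1 12:00:00 2021 GMT", now))  # False: ~83 days left
```

Because renewal is a no-op until the window opens, running the timer twice daily is safe.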


In this blog, you installed the Let’s Encrypt client certbot, downloaded SSL certificates for your domain, configured Apache to use these certificates, and set up automatic certificate renewal.

If you have any doubts about how to secure Apache, don’t hesitate to contact us through the email below. Airzero Cloud will be your digital partner. Email: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

Explain About Tiny PHP File Manager

- Posted in Hosting by


What is a Tiny File Manager?

Tiny File Manager is a web-based file manager: a simple, secure, and small file manager distributed as a single file, a multi-language web application for storing, uploading, editing, and managing files and folders online via a web browser. The app runs on PHP 5.5+. It supports the creation of multiple users, each of whom can have their own directory, and has a built-in editor for managing text files with the Cloud9 IDE, with syntax highlighting for over 150 languages and over 35 themes.

How to use Tiny PHP file manager?

  • Download the ZIP with the latest version from the master branch.
  • Copy tinyfilemanager.php to your website directory and open it with a web browser.
  • Default username/password: admin/admin. Passwords are hashed with MD5.
  • Warning: please set your own username and password in $auth_users before use.
  • To enable/disable authentication, set $use_auth to true or false.

What are the features of the Tiny PHP file manager?

  • Open source, lightweight, and very simple
  • Basic features such as create, delete, modify, view, download, copy, and move files
  • Ability to upload multiple files
  • Ability to create folders and files
  • Ability to compress and extract files
  • User permissions support, based on session
  • Copy direct file URL
  • Edit text file formats using an advanced editor with syntax highlighting
  • Backup files
  • Advanced search

If you have any doubts, don’t hesitate to ask. Share your doubts with us through the given email. E-mail id: [email protected]


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

Explain About Amazon Versioning

- Posted in Hosting by


How can you preserve valuable assets and data when using Amazon S3? A feature called versioning serves as a unique answer to this question. When you upload an object to S3, that object is redundantly stored to provide excellent durability. This means that for 10,000 objects stored on S3, you can expect to lose a single object once every 10,000,000 years. Those are some pretty great odds, so why do we even need to answer this question? Because while the underlying infrastructure powering S3 provides serious durability, it does not protect you from overwriting your objects or even deleting them. Or does it? Not by default, but it does if we enable versioning.

What is Versioning?

Versioning automatically keeps track of several versions of the same object. For example, say that you have object1 currently stored in a bucket. With default settings, if you upload a new version of object1 to that bucket, object1 will be replaced by the new version. Then, if you realize that you messed up and need the previous version back, you are out of luck unless you have a backup on your local computer. With versioning enabled, the old version is still saved in your bucket, and it has a unique Version ID so that you can still view it, download it, or use it in your app.

How to Enable Versioning?

When we set up versioning, we do it at the bucket level. So instead of enabling it for individual objects, we turn it on for a bucket, and all objects in that bucket automatically use versioning from that point onward. We can enable versioning at the bucket level from the AWS console, or from SDKs and API calls. Once we enable versioning, any new object uploaded to that bucket will get a Version ID. This ID is used to identify that version uniquely, and it is what we can use to access that object at any point in time. If we already had objects in that bucket before enabling versioning, those objects will simply have a Version ID of “null.”

What about deleting an object? What happens when we do that with versioning? If we try to delete the object, all versions will remain in the bucket, but S3 will insert a delete marker as the latest version of that object. That means that if we try to retrieve the object, we will get a 404 Not Found error. However, we can still retrieve earlier versions by specifying their IDs, so they are not totally gone. If we want to, we also have the ability to delete specific versions by specifying the Version ID. If we do that with the latest version, then S3 will automatically promote the next most recent version as the default version, instead of giving us a 404 error.

That is only one option you have to restore a previous version of an object. Say that you upload an object to S3 that already exists. That newest upload will become the default version. Then say you want to set the previous version as the default. You can delete the specific Version ID of the newest version (because remember, that will not give us a 404, whereas deleting the object itself will). Alternatively, you can also COPY the version that you want back into that same bucket. Copying an object performs a GET request and then a PUT request. Any time you issue a PUT request in an S3 bucket that has versioning enabled, the uploaded object becomes the latest version because it receives a new Version ID.

So those are some of the advantages we get by enabling versioning. We can protect our data from being deleted and also from being overwritten unintentionally. We can also use this to store multiple versions of logs for our own records.

What Else?

There are several things you should know before enabling versioning. First of all, once you enable versioning, you cannot fully disable it. However, you can put the bucket in a “versioning-suspended” state. If you do that, new objects will get Version IDs of null, while objects that already have versions will keep those versions.

Secondly, because you are storing multiple versions of the same object, your bill might go up. Those versions take space, and S3 charges a certain amount per GB. There is a feature to help control this, called Lifecycle Management. Lifecycle Management lets you choose what happens to versions after a set amount of time. For example, if you’re collecting valuable logs and you have multiple versions of those logs, then depending on how much information is stored in them, they could take up a lot of space. Instead of keeping all of those versions, you can keep logs up to a certain age and then transfer them to Amazon Glacier. Amazon Glacier is much more affordable but limits how fast you can access data, so it’s best used for data that you’re probably not going to use, but still need to store in case you need it one day. By applying this sort of policy, you can really cut back on costs if you have a lot of objects.

Also, different versions of the same object can have different properties. For example, by specifying a Version ID, we could make that version openly available to anyone on the Internet, while the other versions would remain private.
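The semantics described above, a new Version ID on every PUT, delete markers on plain deletes, and restoring an old version by deleting the newest one, can be modeled with a toy in-memory store. This is only an illustration of the behavior, not the real S3 or boto3 API:

```python
import itertools

class VersionedBucket:
    """Toy model of S3 versioning semantics; not the real S3 API."""
    _ids = itertools.count(1)

    def __init__(self):
        self._versions = {}  # key -> list of (version_id, value), newest last

    def put(self, key, value):
        vid = f"v{next(self._ids)}"  # every PUT gets a fresh Version ID
        self._versions.setdefault(key, []).append((vid, value))
        return vid

    def delete(self, key):
        # A plain delete adds a delete marker instead of removing data.
        self._versions.setdefault(key, []).append((f"v{next(self._ids)}", None))

    def delete_version(self, key, version_id):
        # Deleting a specific version really removes it; the next-newest
        # version becomes the default again.
        self._versions[key] = [(v, d) for v, d in self._versions[key]
                               if v != version_id]

    def get(self, key, version_id=None):
        versions = self._versions.get(key, [])
        if version_id is None:
            if not versions or versions[-1][1] is None:
                raise KeyError("404 Not Found (delete marker or missing)")
            return versions[-1][1]
        return dict(versions)[version_id]

bucket = VersionedBucket()
v1 = bucket.put("object1", "first draft")
v2 = bucket.put("object1", "second draft")
print(bucket.get("object1"))          # "second draft" is the default version
print(bucket.get("object1", v1))      # the old version is still retrievable
bucket.delete_version("object1", v2)  # removing the newest version...
print(bucket.get("object1"))          # ...restores "first draft" as default
```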

At this point, if you have any problems with S3 versioning, feel free to ask through the email id given below. AIRZERO CLOUD will always be your digital partner.


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:


In this blog, you will discover how to build an automated software release pipeline that deploys a live sample app. You will build the pipeline using AWS CodePipeline, a service that builds, tests, and deploys your code every time there is a code change. You will use your GitHub account, an Amazon S3 bucket, or an AWS CodeCommit repository as the source location for the sample app’s code. You will also use AWS Elastic Beanstalk as the deployment target for the sample app. Your finished pipeline will be able to detect changes made to the source repository containing the sample app and then automatically update your live sample app.

Continuous deployment lets you deploy revisions to a production environment automatically without explicit approval from a developer, making the complete software release process automated.

How to Create a deployment environment?

Your continuous deployment pipeline will need a target environment containing virtual servers, or Amazon EC2 instances, where it will deploy sample code. You will create this environment before creating the pipeline.

  • To simplify the process of setting up and configuring EC2 instances for this blog, you will spin up a sample environment using AWS Elastic Beanstalk. Elastic Beanstalk lets you easily host web applications without needing to launch, configure, or operate virtual servers on your own. It automatically provisions and runs the infrastructure and provides the application stack for you.
  • Choose PHP from the drop-down list and then select Launch Now.
  • Elastic Beanstalk will begin creating a sample environment for you to deploy your application to. It will create an Amazon EC2 instance, a security group, an Auto Scaling group, an Amazon S3 bucket, Amazon CloudWatch alarms, and a domain name for your app.

How to Get a copy of the sample code?

In this step, you will retrieve a copy of the sample app’s code and choose a source to host the code. The pipeline takes code from the source and then performs actions on it. You can use one of three options as your source: a GitHub repository, an Amazon S3 bucket, or an AWS CodeCommit repository.

How to Create your pipeline?

In this step, you will create and configure a simple pipeline with two stages: Source and Deploy. You will provide CodePipeline with the locations of your source repository and deployment environment.

  • On the Welcome page, select Create pipeline.
  • If this is your first time using AWS CodePipeline, an introductory page appears instead of Welcome. Select Get Started.

Go through the below steps:

  • On the Step 1: Name page, enter the name for your pipeline, DemoPipeline. Select Next step.
  • On the Step 2: Source page, choose the source location you decided on and follow the steps below:

Source Provider: GitHub

  • In the Connect to GitHub section, select Connect to GitHub.

  • A new browser window will open to connect you to GitHub. If prompted to sign in, provide your GitHub credentials.

  • You will be asked to authorize application access to your account. Choose Authorize application.

Specify the repository and branch:

  • Repository: In the drop-down list, choose the GitHub repository you want to use as the source location for your pipeline. Select the forked repository in your GitHub account containing the sample code, called aws-codepipeline-s3-aws-codedeploy_linux.

  • Branch: In the drop-down list, choose the branch you want to use, master.

  • Click Next step.

  • A true continuous deployment pipeline needs a build stage, where code is compiled and unit tested. CodePipeline lets you plug your preferred build provider into your pipeline. However, in this blog, you will skip the build stage.

  • On the Step 3: Build page, choose No Build.

  • Click Next step.

  • On the Step 4: Beta page:

  • Deployment provider: Select AWS Elastic Beanstalk.

  • Application name: Select My First Elastic Beanstalk Application.

  • Environment name: Select Default-Environment.

  • Select Next step.

  • On the Step 5: Service Role page:

  • Service Role: Select Create role.

  • You will be redirected to an IAM console page that describes the AWS-CodePipeline-Service role that will be created for you. Click Allow.

  • After you create the role, you are returned to the Step 5: Service Role page, where AWS-CodePipeline-Service appears in the Role name. Click Next step.

How to Activate your pipeline to deploy your code?

In this step, you will start your pipeline. Once your pipeline has been created, it will begin to run automatically. First, it detects the sample app code in your source location, packages up the files, and then moves them to the second stage that you defined. During this stage, it hands the code to Elastic Beanstalk, which manages the EC2 instance that will host your code. Elastic Beanstalk handles deploying the code to the EC2 instance.

  • On the Step 6: Review page, review the information and select Create pipeline.
  • After your pipeline is created, the pipeline status page appears and the pipeline automatically starts to run. You can watch the progress, as well as success and failure messages, as the pipeline performs each action. To verify that your pipeline ran successfully, monitor its progress as it moves through each stage. The status of each stage will change from No executions yet to In Progress, and then to either Succeeded or Failed. The pipeline should complete its first run within a few minutes.
  • In the status area for the Beta stage, select AWS Elastic Beanstalk.
  • The AWS Elastic Beanstalk console opens with the details of the deployment. Click the environment you created earlier, called Default-Environment.
  • Click the URL that appears in the upper-right section of the page to view the sample website you deployed.
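The stage statuses described above can be thought of as rolling up into a single pipeline status. A toy sketch of that roll-up logic (this mirrors what the console displays, not the actual CodePipeline implementation):

```python
def pipeline_status(stage_statuses):
    """Aggregate per-stage statuses into one overall pipeline status.
    Toy logic mirroring the console display; not the real service."""
    if any(s == "Failed" for s in stage_statuses):
        return "Failed"
    if any(s == "In Progress" for s in stage_statuses):
        return "In Progress"
    if all(s == "Succeeded" for s in stage_statuses):
        return "Succeeded"
    return "No executions yet"

print(pipeline_status(["Succeeded", "In Progress"]))  # In Progress
print(pipeline_status(["Succeeded", "Succeeded"]))    # Succeeded
print(pipeline_status(["Succeeded", "Failed"]))       # Failed
```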

How to Commit a change and then update your app?

In this step, you will revise the sample code and commit the change to your repository. CodePipeline will detect your updated sample code and then automatically begin deploying it to your EC2 instance via Elastic Beanstalk. Note that the sample web page you deployed refers to AWS CodeDeploy, a service that automates code deployments. In CodePipeline, CodeDeploy is an alternative to Elastic Beanstalk for deployment actions. Let’s update the sample code so that it accurately states that you deployed the sample using Elastic Beanstalk.

Browse to your own copy of the repository that you forked in GitHub.

  • Open index.html.
  • Choose the Edit icon.
  • Update the webpage text.
  • Commit the change to your repository.
  • Go back to your pipeline in the CodePipeline console. In a few minutes, you should see the Source stage change to blue, indicating that the pipeline has detected the changes you made to your source repository. Once this happens, it will automatically move the updated code to Elastic Beanstalk.
  • After the pipeline status indicates Succeeded, in the status area for the Beta stage, select AWS Elastic Beanstalk.
  • The AWS Elastic Beanstalk console opens with the details of the deployment. Select the environment you created earlier, called Default-Environment.
  • Click the URL that appears in the upper-right part of the page to see the sample website again. Your page has been updated automatically through the continuous deployment pipeline!

How to Clean up your resources?

To avoid future charges, you will delete all the resources you launched throughout this blog, which include the pipeline, the Elastic Beanstalk application, and the source you set up to host the code.

First, you will remove your pipeline:

  • In the pipeline view, select Edit.
  • Select Delete.
  • Type in the name of the pipeline and select Delete.

Second, remove your Elastic Beanstalk application:

  • Open the Elastic Beanstalk console.

  • Select Actions.

  • Then click Terminate Environment.

If you built an S3 bucket for this blog, remove the bucket you built:

  • Enter the S3 console.

  • Right-click the bucket you created and select Delete Bucket.

  • When a verification message arrives, type the bucket name and then select Delete.
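The same cleanup can be sketched from the command line. This is a hedged sketch, not the tutorial's own method — all names below are placeholders for the pipeline, environment, and bucket you actually created:

```shell
# Delete the pipeline
aws codepipeline delete-pipeline --name MyFirstPipeline

# Terminate the Elastic Beanstalk environment
aws elasticbeanstalk terminate-environment \
  --environment-name Default-Environment

# Remove the S3 bucket and everything in it
aws s3 rb s3://my-pipeline-artifacts-bucket --force
```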

You have successfully created an automated software release pipeline using AWS CodePipeline. Using CodePipeline, you built a pipeline that uses GitHub, Amazon S3, or AWS CodeCommit as the source location for your application code and then deploys the code to an Amazon EC2 instance managed by AWS Elastic Beanstalk. Your pipeline will automatically deploy your code every time there is a code change. You are one step closer to practicing continuous deployment!

If you have any questions about the above topic or need consultation and the best AWS services, feel free to contact us. AIRO ZERO CLOUD will be your strong digital partner. E-mail id: [email protected]


Jenkins is an open-source automation server that integrates with a number of AWS services, such as AWS CodeCommit, AWS CodeDeploy, Amazon EC2 Spot, and Amazon EC2 Fleet. You can use Amazon Elastic Compute Cloud (Amazon EC2) to deploy a Jenkins application on AWS in a matter of minutes.

This tutorial steps you through the process of deploying a Jenkins application. You will launch an EC2 instance, install Jenkins on that instance, and configure Jenkins to automatically spin up Jenkins agents when additional build capacity is needed.

What are the prerequisites needed?

  1. An AWS account. If you don’t have one, please register.
  2. An Amazon EC2 key pair. If you don’t have one, see the next section.

How to create a key pair?

To create your key pair:

  1. Open the Amazon EC2 console and sign in.
  2. In the navigation pane, under NETWORK & SECURITY, choose Key Pairs.
  3. Choose Create key pair.
  4. For Name, enter a descriptive name for the key pair. Amazon EC2 associates the public key with the name that you specify as the key name. A key name can include up to 255 ASCII characters. It can’t include leading or trailing spaces.
  5. For File format, choose the format in which to save the private key. To save the private key in a format that can be used with OpenSSH, choose pem. To save the private key in a format that can be used with PuTTY, choose ppk.
  6. Choose Create key pair.
  7. The private key file is automatically downloaded by your browser. The base file name is the name you specified for your key pair, and the file name extension is determined by the file format you chose. Save the private key file in a safe place.
  8. If you will use an SSH client on a macOS or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file so that only you can read it.

    $ chmod 400 <key_pair_name>.pem

How to create a security group?

A security group acts as a firewall that controls the traffic allowed to reach one or more EC2 instances. When you launch an instance, you can assign it one or more security groups. You add rules to each security group that control the traffic permitted to reach the instances in the security group. Note that you can modify the rules for a security group at any time.

For this blog, you will create a security group and add the following rules.

To make and configure your security group:

  • Decide who may access your instance, for example, a single computer or all trusted computers on a network. In this blog, you can use the public IP address of your computer. To find your IP address, use the check ip service from AWS or search for the phrase "what is my IP address" in any Internet search engine. If you are connecting through an ISP or from behind your firewall without a static IP address, you will need to find the range of IP addresses used by client computers. If you don’t know this address range, you can use 0.0.0.0/0 for this blog. However, this is unsafe for production environments because it allows everyone to access your instance using SSH.
  • Sign in to the AWS Management Console.
  • Open the Amazon EC2 console by choosing EC2 under Compute.
  • In the left-hand navigation bar, choose Security Groups, and then click Create Security Group.
  • In the Security group name field, enter WebServerSG or any preferred name of your choosing and provide a description.
  • Choose your VPC from the list; you can use the default VPC.
  • On the Inbound tab, add the rules as follows:
  • Click Add Rule, and then select SSH from the Type list. Under Source, select Custom and in the text box enter your public IP address followed by /32.
  • Click Add Rule, and then select HTTP from the Type list.
  • Click Add Rule, and then choose Custom TCP Rule from the Type list. Under Port Range enter 8080.
  • Select Create.
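The same security group can be sketched with the AWS CLI. This is a sketch under assumptions — the group is created in your default VPC, and the SSH source address below is a placeholder for your own public IP:

```shell
# Create the security group (in the default VPC)
aws ec2 create-security-group \
  --group-name WebServerSG \
  --description "Jenkins web server security group"

# SSH only from your own IP (placeholder address - substitute yours)
aws ec2 authorize-security-group-ingress --group-name WebServerSG \
  --protocol tcp --port 22 --cidr 203.0.113.25/32

# HTTP from anywhere
aws ec2 authorize-security-group-ingress --group-name WebServerSG \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# The Jenkins web UI on port 8080
aws ec2 authorize-security-group-ingress --group-name WebServerSG \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0
```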

How to launch an Amazon EC2 instance?

To launch an EC2 instance:

  • Sign in to the AWS Management Console.
  • Open the Amazon EC2 console by choosing EC2 under Compute.
  • From the Amazon EC2 dashboard, select Launch Instance.
  • The Choose an Amazon Machine Image page displays a list of basic configurations called Amazon Machine Images that serve as templates for your instance. Select the HVM edition of the Amazon Linux AMI. Notice that this configuration is marked Free tier eligible.
  • On the Choose an Instance Type page, the t2.micro instance type is selected by default. Keep this instance type to stay within the free tier. Choose Review and Launch.
  • On the Review Instance Launch page, click Edit security groups.
  • On the Configure Security Group page:
  • Choose Select an existing security group.
  • Select the WebServerSG security group that you created.
  • Click Review and Launch.
  • On the Review Instance Launch page, click Launch.
  • In the Select an existing key pair or create a new key pair dialogue box, choose Select an existing key pair and then choose the key pair you created in the section above or any existing key pair you intend to use.
  • In the left-hand navigation bar, choose Instances to view the status of your instance. Initially, the status of your instance is pending. After the status changes to running, your instance is ready for use.
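For reference, the same launch can be sketched with the AWS CLI. The AMI ID and key pair name below are placeholders — look up the current Amazon Linux AMI ID for your region before running anything like this:

```shell
# Launch one t2.micro instance into the WebServerSG security group
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-groups WebServerSG \
  --count 1

# Check when it reaches the "running" state and grab its public DNS name
aws ec2 describe-instances \
  --filters Name=instance-state-name,Values=running \
  --query 'Reservations[].Instances[].[InstanceId,PublicDnsName]'
```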

How to install and configure Jenkins?

  • Connect to your Linux instance.
  • Download and install Jenkins.
  • Configure Jenkins.

How to use PuTTy to connect to your instance?

  • From the Start menu, choose All Programs > PuTTY > PuTTY.
  • In the Category pane, select Session, and complete the following fields:
  • In Host Name, enter ec2-user@public_dns_name.
  • Ensure that Port is 22.
  • In the Category pane, expand Connection, expand SSH, and then choose Auth. Do the following:
  • Click Browse.
  • Select the .ppk file that you generated for your key pair, and then click Open.
  • Click Open to start the PuTTY session.

How to use SSH to connect to your instance?

Use the ssh command to connect to the instance. You will specify the private key (.pem) file and ec2-user@public_dns_name.

$ ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-

You will see a response like the following:

The authenticity of host '' can't be established.
RSA key fingerprint is 1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f.

Type yes.

You will see a message like the following:

Warning: Permanently added '' (RSA) to the list of known hosts.

To download and install Jenkins:

  • To ensure that your software packages are up to date on your instance, use the following command to perform a quick software update:

    [ec2-user ~]$ sudo yum update -y

  • Add the Jenkins repo using the command:

    [ec2-user ~]$ sudo wget -O /etc/yum.repos.d/jenkins.repo \

  • Import a key file from Jenkins-CI to enable installation from the package, then upgrade:

    [ec2-user ~]$ sudo rpm --import
    [ec2-user ~]$ sudo yum upgrade

  • Install Jenkins:

    [ec2-user ~]$ sudo yum install jenkins java-1.8.0-openjdk-devel -y
    [ec2-user ~]$ sudo systemctl daemon-reload

  • Start Jenkins as a service:

    [ec2-user ~]$ sudo systemctl start jenkins

  • You can check the status of the Jenkins service using the below command:

    [ec2-user ~]$ sudo systemctl status jenkins

How to configure Jenkins?

Jenkins is now installed and running on your EC2 instance. To configure Jenkins:

  • Connect to http://:8080 from your favourite browser.
  • When prompted, enter the password found in /var/lib/jenkins/secrets/initialAdminPassword.

Use the below command to display this password:

[ec2-user ~]$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword
  • The Jenkins setup wizard directs you to the Customize Jenkins page. Click Install suggested plugins.
  • Once the installation is complete, create the First Admin User and click Save and Continue.
  • On the left-hand side, click Manage Jenkins, and then select Manage Plugins.
  • Click the Available tab, and then search for the Amazon EC2 plugin at the top right.
  • Select the checkbox next to the Amazon EC2 plugin, and then click Install without restart.
  • Once the installation is done, click Back to Dashboard.
  • Click Configure a cloud.
  • Click Add a new cloud, and select Amazon EC2. A collection of new fields appears.
  • Fill out all the fields. You are now ready to use EC2 instances as Jenkins agents.

If you have any questions about the above topic or need consultation and the best Jenkins application services, feel free to contact us. AIRO ZERO CLOUD will be your strong digital partner. E-mail id: [email protected]

What is AWS EC2 Autoscaling?

- Posted in Hosting by


Auto-scaling is a capability built into AWS to ensure you have the right number of EC2 instances provisioned to handle the load of your app. Using auto-scaling, you can remove the guesswork in deciding how many EC2 instances are needed to deliver an acceptable level of performance for your app without over-provisioning resources and incurring unnecessary costs.

When you are running workloads in production, it is a good idea to use Amazon CloudWatch to monitor resource usage such as CPU utilization; however, when defined thresholds are exceeded, CloudWatch will not by default provision more resources to handle the increased load, which is where auto-scaling comes into play.

Depending on the nature of your app, it is normal for traffic loads to vary depending on the time of day, or the day of the week.

If you provision enough EC2 instances to cope with peak demand, then there will be plenty of other days and time periods where you have lots of capacity that remains unused, which means you are paying for instances that are sitting idle.

Conversely, if you do not provision enough capacity, then at peak times, when demand exceeds the processing power needed to deliver acceptable performance, your app’s performance will degrade and users may experience severe slowness or even timeouts due to a lack of available CPU resources.

Auto-scaling is the solution: it lets you automate the addition and removal of EC2 instances based on monitored metrics such as CPU usage. This allows you to minimise costs during times of low demand, yet ramp up capacity during peak load times so app performance is not affected.

What are Autoscaling components?

There are three components needed for auto-scaling.

Launch Configuration

This component defines what will be launched by your autoscaler. Just as when launching an EC2 instance from the console, you specify which AMI to use, which instance types to launch, and which security groups and roles the instances should have.

Auto Scaling Group

This component of autoscaling defines where the autoscaling should act: which VPC and subnets to use, which load balancer to attach, the minimum and maximum number of EC2 instances to scale out and in, and the desired number.

If you set the minimum instance count to 2, then should the instance count drop below 2 for any reason, the autoscaler will add instances back until the minimum number of EC2 instances is running.

If you set the maximum number of instances to 10, then the autoscaler will keep adding EC2 instances when CPU load warrants it until you reach 10, at which point no additional instances will be added even if CPU load is maxed out.
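The min/max behaviour described above amounts to clamping the desired instance count between the two bounds. As a plain illustration (this is not anything AWS runs, just the rule expressed in a few lines of shell):

```shell
# Clamp a desired instance count between the ASG minimum and maximum.
clamp_desired() {
  desired=$1; min=$2; max=$3
  if [ "$desired" -lt "$min" ]; then
    echo "$min"        # below minimum: scale back up to the floor
  elif [ "$desired" -gt "$max" ]; then
    echo "$max"        # above maximum: cap at the ceiling
  else
    echo "$desired"    # within bounds: leave as requested
  fi
}

clamp_desired 1 2 10   # prints 2  (below the minimum of 2)
clamp_desired 15 2 10  # prints 10 (capped at the maximum of 10)
clamp_desired 5 2 10   # prints 5  (already within bounds)
```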

Auto Scaling Policy

This third component of autoscaling defines when auto-scaling is invoked. This can be scheduled, such as at a specific day and time, or on-demand based on monitored metrics that will invoke the addition and removal of EC2 instances from your workload.

What about Dynamic AWS Ec2 Autoscaling?

One method of dynamic auto-scaling is to allow Amazon CloudWatch to trigger auto-scaling when thresholds are exceeded.

You can trigger an action from a CloudWatch alarm when CPU utilization exceeds or falls below a predefined threshold, and you can also define the time period for which the out-of-bounds condition must persist. So, for instance, if CPU utilization is above 80% for 5 minutes, an auto-scaling action occurs. You can also create a Dynamic Scaling Policy when building the ASG to scale instances in and out based on such thresholds.

How to set up an AWS EC2 Autoscaling group?

  • To set up EC2 Autoscaling, you first need to create a new ASG, which can be found in the EC2 dashboard.
  • The first step when creating the new ASG is to name the group and optionally select a previously saved launch template.
  • To create a launch template, open the new template dialogue. First, you will need to name the template and describe the version.
  • Next, you will need to choose which Amazon Machine Image to use, which determines the OS and architecture to provision.
  • Now you should create or choose a key pair to use to access the instances provisioned within the ASG, and choose whether you intend to create the resources within a VPC or not.
  • Next, you can configure volumes and resource tags, and then create the template.
  • Now we can use the template to create the ASG by entering a new ASG name, choosing the template, and advancing to the next page.
  • The next step is the “Configure settings” step, where you can stick with the launch template config.
  • The next step, “Advanced Options”, lets you attach or create a load balancer and set up optional health check monitoring.
  • Once created, the auto-scaling group will provision the desired number of instances and then respond to load, scaling out and in as needed.
  • To control the auto-scaling policies, you can open the Automatic Scaling tab and create a dynamic scaling policy.
  • To remove the ASG, choose the ASG from the EC2 Auto Scaling groups dashboard and choose delete.
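The console steps above can also be sketched with the AWS CLI. This is a hedged sketch — the template name, group name, AMI ID, and subnet ID are all placeholders to replace with your own values:

```shell
# Launch template: what to launch (AMI, instance type, key pair)
aws ec2 create-launch-template \
  --launch-template-name web-template \
  --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"t2.micro","KeyName":"my-key-pair"}'

# Auto Scaling group: where to launch, and the min/max/desired counts
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --vpc-zone-identifier subnet-0123456789abcdef0

# Policy: when to scale - track 50% average CPU across the group
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'
```

Note how the three commands map onto the three components described earlier: the launch template (what), the group (where), and the policy (when).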

If you have any questions about this subject or need services and the best Auto Scaling EC2 services, feel free to contact us. AIR ZERO CLOUD will be your digital friend. Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:


The Remote Desktop app is straightforward, but depending on how you need to connect, the app is only one piece of the puzzle, since you must also configure additional settings and forward the correct port on the router to connect to other Windows 10 machines remotely. Although you can use the Remote Desktop application on any version of Windows 10, the remote desktop protocol that enables connections to a device is only available on Windows 10 Pro and business variants of the operating system. Windows 10 Home doesn't allow remote connections.

How to enable remote connection on Windows 10?

When trying to enable a remote connection from within the LAN, you only need to make sure the computer you're trying to access has the option to allow remote desktop connections enabled.

Use these steps to enable remote connection on Windows 10:

  • Open Control Panel.
  • Click on System and Security.
  • Click the Allow remote access option.
  • Click the Remote tab.
  • Check the Allow remote connections to this computer option.
  • Check the Allow connections only from computers running Remote Desktop with Network Level Authentication option.
  • Click the OK button.
  • Click the Apply button.
  • Click the OK button.

How to set up the app?

  • Open Settings.
  • Click on System.
  • Click on Remote Desktop.
  • Turn on the Enable Remote Desktop toggle switch.
  • Click the Confirm button.

How to enable remote connections on the router?

Before enabling remote connections on the router, you first have to configure a static IP address on Windows 10. The steps are:

  • Open Control Panel.
  • Click on Network and Internet.
  • Click on Network and Sharing Center.
  • Click Change adapter settings.
  • Right-click the active adapter and select Properties.
  • Select the Internet Protocol Version 4 option.
  • Click the Properties button.
  • Click the General tab.
  • Click the Use the following IP address option.
  • Specify a local IP address outside the local DHCP scope to prevent address conflicts.
  • Specify a subnet mask for the network.
  • Specify the default gateway address, which is the router's address.
  • Under the "Use the following DNS server addresses" section, in the "Preferred DNS server" field, specify the IP address of your DNS server, which in most cases is also the address of the router.
  • Click the OK button.
  • Click the Close button.

How to determine your network's public IP address?

To find your public IP address, use the steps below:

  • Open a web browser.
  • Search for "What's my IP."
  • Confirm your public IP address in the result.
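If you have a terminal handy, you can also get this from the check-ip service AWS provides (this one-liner is an alternative to the browser steps, and it needs Internet access to work):

```shell
# Print your network's public IP address
curl -s https://checkip.amazonaws.com
```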

How to forward a port on your router?

Steps to forward a port on your router:

  • Click Start.
  • Search for Command Prompt and click the top result to open the console.
  • Type the following command to display the current TCP configuration and press Enter: ipconfig
  • Confirm the device's IP address.
  • Under the "Default Gateway" field, confirm the device's gateway address.
  • Open a web browser.
  • Enter the IP address of the router in the address bar.
  • Sign in to the router using its username and password.
  • Browse to the Port Forwarding settings page.
  • Confirm that the Port Forwarding service is enabled.
  • Open the port forwarding list and click the Add profile button.
  • Create a new port forward with the needed information.
  • Click the OK button.

How to enable remote desktop connection?

First, you have to install the Remote Desktop app. To do this, follow the steps below:

  • Open the Microsoft Remote Desktop app page.
  • Click the Get button.
  • Click the Open Microsoft Store button.
  • Click the Get button.

Next, you need to start a remote desktop connection:

  • Open the Remote Desktop app.
  • Click the + Add button in the top right.
  • Click the PCs option.
  • Specify the TCP/IP address of the computer you're trying to connect to.
  • Under the "User account" section, click the + button in the top right.
  • Confirm the account details to sign in to the remote computer.
  • Select the Next button.
  • Select the Next option.
  • Press the Save button.
  • Press the connection to start a remote session.
  • Check the Don't ask about this certificate again option.
  • Click the Connect button.
  • Change the app connection settings as needed.
  • Change the session settings.
  • Change the settings for the next connection.

If you have any questions about this topic or need services and the best remote desktop application services, feel free to contact us. AIR ZERO CLOUD will be your digital solution. Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:

How To Enable cPanel On Centos 7?

- Posted in Hosting by


When creating a new CentOS 7 server, you may find yourself looking for control panel software that will allow you to manage your websites and web applications in a graphical user interface. One of the most popular web hosting control panel solutions is cPanel. This software gives you a terrific control panel interface that allows you to manage and personalize many different aspects of your server in a user-friendly environment. In this blog, we will show how to prepare your CentOS 7 server and install cPanel on CentOS 7 using the command-line interface. Before running the steps in this blog, please ensure that you have set up SSH access on your server.

What is cPanel?

cPanel is a Linux control panel used to conveniently manage your hosting. The system operates much like a desktop application. With cPanel, you can perform actions from a user-friendly dashboard instead of running complex commands. You should be careful while selecting cPanel services; choose the best cPanel service for your needs.

What are the steps to prepare for installation?

Before you can enable cPanel on CentOS, you will first need to disable your firewall, the network manager, and SELinux.

  • The first step is to stop the firewall service using the below command:

    systemctl stop firewalld.service

  • The next step is to disable the firewall service using the below command.

    systemctl disable firewalld.service

  • After disabling the firewall, you will need to stop the network manager service using the following command.

    systemctl stop NetworkManager

  • Once the service is stopped, you can disable the network manager using the below command.

    systemctl disable NetworkManager

  • The next step is to disable SELinux by editing the following file with the below nano command.

    nano /etc/selinux/config
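If you would rather not edit the file interactively in nano, the same change can be sketched with sed. The demonstration below runs against a sample copy of the file; on the real server you would run the sed command with sudo directly on /etc/selinux/config and then reboot for the change to take effect:

```shell
# Sample copy standing in for /etc/selinux/config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux-config.sample

# Replace the SELINUX= line with SELINUX=disabled
# (the pattern does not touch the SELINUXTYPE= line)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux-config.sample

grep '^SELINUX=' selinux-config.sample   # now reads SELINUX=disabled
```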

How to install cPanel?

  • The first step is to change directory into the /home folder with the following command.

    cd /home

  • The next step is to download the latest release of cPanel using the below command.

    curl -o latest -L

  • The next step is to run the installer using the below command. After the process finishes, cPanel will be installed on your system.

    sh latest

Congratulations, you have successfully installed cPanel on CentOS 7.

If you have any doubts about this topic or need services and the best cPanel hosting services, feel free to contact us. AIR ZERO CLOUD will be your digital solution.


Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile: