Airzero Cloud

Next Generation Cloud!


Jenkins is an open-source automation server that integrates with a number of AWS services, such as AWS CodeCommit, AWS CodeDeploy, Amazon EC2 Spot, and Amazon EC2 Fleet. You can use Amazon Elastic Compute Cloud (Amazon EC2) to deploy a Jenkins application on AWS in a matter of minutes.

This tutorial steps you through the process of deploying a Jenkins application. You will launch an EC2 instance, install Jenkins on that instance, and configure Jenkins to automatically spin up Jenkins agents when build capacity needs to be augmented on the instance.

What are the prerequisites?

  1. An AWS account. If you don’t have one, please register.
  2. An Amazon EC2 key pair. If you don’t have one, see the next section.

How to create a key pair?

To create your key pair:

  1. Open the Amazon EC2 console and sign in.
  2. In the navigation pane, under NETWORK & SECURITY, choose Key Pairs.
  3. Choose Create key pair.
  4. For Name, enter a descriptive name for the key pair. Amazon EC2 associates the public key with the name that you specify as the key name. A key name can include up to 255 ASCII characters. It can’t include leading or trailing spaces.
  5. For File format, choose the format in which to save the private key. To save the private key in a format that can be used with OpenSSH, choose pem. To save the private key in a format that can be used with PuTTY, choose ppk.
  6. Choose Create key pair.
  7. The private key file is automatically downloaded by your browser. The base file name is the name you specified as the name of your key pair, and the file name extension is determined by the file format you chose. Save the private key file in a safe place.
  8. If you will use an SSH client on a macOS or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file so that only you can read it.

    $ chmod 400 <key_pair_name>.pem
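
If you prefer the command line, the same key pair can be created with the AWS CLI. This is a sketch, assuming the CLI is installed and configured with your credentials; the key name my-key-pair is a placeholder:

```shell
# Create the key pair and save the private key locally
# (my-key-pair is a placeholder name)
aws ec2 create-key-pair \
    --key-name my-key-pair \
    --query 'KeyMaterial' \
    --output text > my-key-pair.pem

# Restrict the file so that only you can read it
chmod 400 my-key-pair.pem
```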

How to create a security group?

A security group acts as a firewall that controls the traffic allowed to reach one or more EC2 instances. When you launch an instance, you can assign it one or more security groups. You add rules to each security group that control the traffic allowed to reach the instances in the security group. Note that you can modify the rules for a security group at any time.

For this blog, you will create a security group and add the following rules.

To create and configure your security group:

  • Decide who may access your instance, for example, a single computer or all trusted computers on a network. In this blog, you can use the public IP address of your computer. To find your IP address, use the checkip service from AWS or search for the phrase "what is my IP address" in any Internet search engine. If you are connecting through an ISP or from behind your firewall without a static IP address, you will need to find the range of IP addresses used by client computers. If you don’t know this address range, you can use 0.0.0.0/0 for this blog. However, this is unsafe for production environments because it allows everyone to access your instance using SSH.
  • Sign in to the AWS Management Console.
  • Open the Amazon EC2 console by choosing EC2 under Compute.
  • In the left-hand navigation bar, choose Security Groups, and then click Create Security Group.
  • In Security group name, enter WebServerSG or any preferred name of your choosing, and provide a description.
  • Select your VPC from the list; you can use the default VPC.
  • On the Inbound tab, add the rules as follows:
  • Click Add Rule, and then select SSH from the Type list. Under Source, select Custom, and in the text box enter the public IP address of your computer followed by /32.
  • Click Add Rule, and then select HTTP from the Type list.
  • Click Add Rule, and then choose Custom TCP Rule from the Type list. Under Port Range enter 8080.
  • Click Create.
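
The same group and rules can be sketched with the AWS CLI, assuming a default VPC and a configured CLI; WebServerSG matches the name above, and 203.0.113.25 is a placeholder for your own public IP:

```shell
# Create the security group (WebServerSG) in your default VPC
aws ec2 create-security-group \
    --group-name WebServerSG \
    --description "Security group for the Jenkins web server"

# Allow SSH only from your own IP (203.0.113.25 is a placeholder)
aws ec2 authorize-security-group-ingress --group-name WebServerSG \
    --protocol tcp --port 22 --cidr 203.0.113.25/32

# Allow HTTP from anywhere
aws ec2 authorize-security-group-ingress --group-name WebServerSG \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# Allow the Jenkins web UI on port 8080
aws ec2 authorize-security-group-ingress --group-name WebServerSG \
    --protocol tcp --port 8080 --cidr 0.0.0.0/0
```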

How to launch an Amazon EC2 instance?

To launch an EC2 instance:

  • Sign in to the AWS Management Console.
  • Open the Amazon EC2 console by choosing EC2 under Compute.
  • From the Amazon EC2 dashboard, select Launch Instance.
  • The Choose an Amazon Machine Image page presents a list of basic configurations called Amazon Machine Images (AMIs) that serve as templates for your instance. Select the HVM edition of the Amazon Linux AMI. Notice that this configuration is marked Free tier eligible.
  • On the Choose an Instance Type page, the t2.micro instance type is selected by default. Keep this instance type to stay within the free tier. Choose Review and Launch.
  • On the Review Instance Launch page, click Edit security groups.
  • On the Configure Security Group page:
  • Choose Select an existing security group.
  • Choose the WebServerSG security group that you created.
  • Click Review and Launch.
  • On the Review Instance Launch page, click Launch.
  • In the Select an existing key pair or create a new key pair dialogue box, select Choose an existing key pair, and then select the key pair you created in the section above or any existing key pair you intend to use.
  • In the left-hand navigation bar, choose Instances to see the status of your instance. Initially, the status of your instance is pending. After the status changes to running, your instance is ready for use.
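
For reference, the console steps above roughly correspond to this AWS CLI sketch; the AMI ID is a placeholder, since Amazon Linux AMI IDs vary by region:

```shell
# Launch a t2.micro instance from an Amazon Linux AMI
# (ami-0abcdef1234567890 is a placeholder; look up the current
# Amazon Linux AMI ID for your region)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-groups WebServerSG

# Watch for the instance state to change from pending to running
aws ec2 describe-instances \
    --query 'Reservations[].Instances[].[InstanceId,State.Name,PublicDnsName]' \
    --output table
```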

How to install and configure Jenkins?

  • Connect to your Linux instance.
  • Download and install Jenkins.
  • Configure Jenkins.

How to use PuTTY to connect to your instance?

  • From the Start menu, choose All Programs > PuTTY > PuTTY.
  • In the Category pane, select Session, and complete the following fields:
  • In Host Name, enter ec2-user@public_dns_name.
  • Ensure that Port is 22.
  • In the Category pane, expand Connection, expand SSH, and then choose Auth. Do the following:
  • Click Browse.
  • Select the .ppk file that you created for your key pair, and then click Open.
  • Click Open to start the PuTTY session.

How to use SSH to connect to your instance?

Use the ssh command to connect to the instance. You will specify the private key (.pem) file and ec2-user@public_dns_name.

$ ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-

You will see a response like the following:

    The authenticity of host '' can't be established.
    RSA key fingerprint is 1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f.

Type yes.

You will see a response like the following:

    Warning: Permanently added '' (RSA) to the list of known hosts.

To download and install Jenkins:

  • To ensure that your software packages are up to date on your instance, use the following command to perform a quick software update:

    [ec2-user ~]$ sudo yum update -y

  • Add the Jenkins repo using the following command:

    [ec2-user ~]$ sudo wget -O /etc/yum.repos.d/jenkins.repo \

  • Import a key file from Jenkins-CI to enable installation from the package:

    [ec2-user ~]$ sudo rpm --import
    [ec2-user ~]$ sudo yum upgrade

  • Install Jenkins:

    [ec2-user ~]$ sudo yum install jenkins java-1.8.0-openjdk-devel -y
    [ec2-user ~]$ sudo systemctl daemon-reload

  • Start Jenkins as a service:

    [ec2-user ~]$ sudo systemctl start jenkins

  • You can check the status of the Jenkins service using the following command:

    [ec2-user ~]$ sudo systemctl status jenkins

How to configure Jenkins?

Jenkins is now installed and running on your EC2 instance. To configure Jenkins:

  • Connect to http://public_dns_name:8080 from your favourite browser.
  • As prompted, enter the password found in /var/lib/jenkins/secrets/initialAdminPassword.

Use the following command to display this password:

    [ec2-user ~]$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword

  • The Jenkins installation wizard directs you to the Customize Jenkins page. Click Install suggested plugins.
  • Once the installation is complete, create your First Admin User and click Save and Continue.
  • On the left-hand side, click Manage Jenkins, and then select Manage Plugins.
  • Click the Available tab, and then search for the Amazon EC2 plugin at the top right.
  • Select the checkbox next to the Amazon EC2 plugin, and then click Install without restart.
  • Once the installation is done, click Back to Dashboard.
  • Click Configure a cloud.
  • Click Add a new cloud, and choose Amazon EC2. A collection of new fields appears.
  • Fill out all the fields. You are now ready to use EC2 instances as Jenkins agents.

If you have any questions about the above topic, or need services and consultations to get the best Jenkins application services, feel free to contact us. AIRO ZERO CLOUD will be your strong digital partner. E-mail id: [email protected]


What is DevOps?

DevOps is a set of practices and tools that automate and integrate the processes between software development and IT teams. It promotes team empowerment, cross-team communication, and collaboration. The term DevOps, a combination of the words development and operations, reflects the process of integrating these disciplines into a single, continuous process.

How does DevOps work?

A DevOps culture involves developers and IT operations working collaboratively throughout the product lifecycle to increase the speed and quality of software development. It’s a new way of working, a cultural shift, that has significant implications for teams and the organizations they work for. Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams merge into a single team where the engineers work across the entire application lifecycle, from development and test to deployment and operations, and have a range of multidisciplinary skills.

DevOps tools are used to automate and accelerate processes, which helps to increase reliability. A DevOps toolchain helps teams tackle crucial DevOps fundamentals including continuous integration, continuous delivery, automation, and collaboration.

What is the DevOps life cycle?

Because of the continuous nature of DevOps, practitioners use the infinity loop to show how the phases of the DevOps lifecycle relate to each other. Despite appearing to flow sequentially, the loop symbolizes the need for constant collaboration and iterative improvement throughout the entire lifecycle.

The DevOps lifecycle consists of six phases representing the processes, capabilities, and tools needed for development and operations. Throughout each phase, teams collaborate and communicate to maintain alignment, velocity, and quality.

What are DevOps tools?

DevOps tools address the key phases of the DevOps lifecycle. They enable DevOps practices by helping to improve collaboration, reduce context-switching, introduce automation, and enable observability and monitoring. DevOps toolchains generally follow one of two approaches: an all-in-one or an open toolchain. An all-in-one toolchain offers a complete solution that usually doesn’t integrate with other third-party tools, while an open toolchain allows for customization with different tools.

What are the benefits of DevOps?

The benefits of DevOps are:

  • Speed:
    Companies that practice DevOps release deliverables more frequently, with higher quality and stability. Some researchers found that elite organizations deploy 208 times more frequently and 106 times faster than low-performing teams. Continuous delivery enables teams to build, test, and release software with automated tools.
  • Improved collaboration:
    The foundation of DevOps is a culture of collaboration between developers and operations teams, who share responsibilities and combine work. This makes teams more efficient and saves the time spent managing handoffs and creating code that is tailored to the environment where it runs.
  • Rapid deployment:
    By increasing the frequency and velocity of releases, DevOps teams improve products quickly. A competitive advantage can be gained by quickly releasing new features and fixing bugs.
  • Quality and reliability:
    Practices like continuous integration and continuous delivery ensure changes are functional and safe, which improves the quality of a software product. Monitoring helps teams stay informed of performance in real time.
  • Security:
    By integrating security into a continuous integration, continuous delivery, and continuous deployment pipeline, DevSecOps makes security an active, integrated part of the development process. Security is built into the product by integrating active security audits and security testing into agile development and DevOps workflows.

What are the challenges of adopting DevOps?

Habits are hard to break. Teams entrenched in siloed ways of working can struggle with, or even be resistant to, overhauling team structures to adopt DevOps practices. Some teams may mistakenly believe new tools are sufficient to adopt DevOps. Yet DevOps is a combination of people, tools, and culture. Everyone on a DevOps team must understand the entire value stream, from ideation to development to the end-user experience. It requires breaking down silos in order to collaborate throughout the lifecycle of the product.

Moving from a legacy infrastructure to using Infrastructure as Code (IaC) and microservices can enable faster development and innovation, but the increased operational workload can be challenging. It’s best to build out a strong foundation of automation, configuration management, and continuous delivery practices to help ease the load.

An over-reliance on tools can distract teams from the essential foundations of DevOps: the team and organization structure. Once a structure is established, the processes and team should come next, and the tools should support them.

How to adopt DevOps?

Adopting DevOps first requires a commitment to evaluate and possibly change or remove any teams, tools, or processes your organization currently uses. It means building the necessary infrastructure to give teams the autonomy to build, deploy, and manage their products without having to rely too heavily on external teams.

How to start with DevOps?

The simplest way to get started with DevOps is to identify a small value stream and begin experimenting with some DevOps practices. As with software development, it is far easier to transform a single stream with a small group of stakeholders than to attempt an all-at-once organizational transition to a new way of working.

If you have any questions about the above topic, or need services and consultations to get the best DevOps services, feel free to contact us. AIRO ZERO CLOUD will be your strong digital partner. E-mail id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:


How does AWS Auto Scaling ensure you always have enough instances?

AWS Auto Scaling uses automation to dynamically scale resources to fit demand and server load. By using AWS Auto Scaling’s tools, you can be confident that you’ll always have enough instances to handle the application load, no matter how sharply traffic may spike. And not only does it adjust capacity to maintain consistent performance, it does so at a low price.

If AWS Auto Scaling sounds like a great option for controlling costs and automating resources, it is. But if you’re new to the service, you should work with an experienced AWS specialist. They can explain the many auto-scaling options, and design and implement an auto-scaling plan for your business’s requirements.

AWS Auto Scaling offers multiple features and advantages:

  • AWS Auto Scaling gives a single user interface.
  • The auto-scaling adds computing power to handle the rising application load.
  • Auto-scaling works for EC2 instances.
  • Resource scaling is configured and monitored according to your specific scaling plan.
  • Custom scaling plans are predictive and can help you with load forecasting.
  • An AWS consultant can help you customize your auto-scaling plan.

How do AWS Auto Scaling options meet your requirements?

Not all AWS Auto Scaling options are created equal, and it’s very important to carefully choose the plan you go with.

  • Maintain Existing Instance Levels Indefinitely

The first auto-scaling plan is simply to configure auto-scaling to maintain a set number of instances. Amazon EC2 Auto Scaling routinely scans instances to determine their health. If it detects an unhealthy instance, it terminates it and launches a replacement. This gives you a predefined number of instances running at all times.

  • Implement Manual Scaling

You can also fall back to manual scaling, which is the most basic way of scaling resources. You specify the maximum and minimum capacity of your Auto Scaling group, and Amazon EC2 Auto Scaling manages instance creation and termination to maintain the capacity you’ve specified.

  • Scale in Accordance with a Schedule

Scaling actions can be set to trigger automatically at a certain date and time. This is really helpful in situations where you can accurately forecast demand. What’s different about this plan is that following a schedule sets the number of available resources for a given period in advance, rather than using automation to make adjustments in real time.

  • Scale Along with Demand

While AWS Auto Scaling can perform all of the more traditional scaling methods mentioned in strategies one through three, scaling along with demand is where AWS’s special capabilities start to shine. The ability to shift seamlessly between the older strategies and those discussed in strategies four and five is another nice feature of AWS Auto Scaling in and of itself.

Demand-based scaling is more responsive to fluctuating traffic and helps accommodate traffic spikes you cannot predict. That makes it a better all-around, “cover all your bases” option for all your needs. And it has various settings, too.

  • Use Predictive Scaling:
    Finally, you can combine AWS Auto Scaling with Amazon EC2 Auto Scaling to scale resources across multiple applications with predictive scaling. This includes sub-options such as:
  • Load Forecasting:
    This feature analyzes up to 14 days of history to forecast demand for the next two days. Updated daily, the forecast data is created in one-hour intervals.
  • Maximum Capacity Behavior:
    Designate a minimum and maximum capacity value for each resource, and AWS Auto Scaling will keep each resource within that range. This gives AWS some flexibility within set parameters. And you can control whether apps can add more resources when demand is forecasted to be above maximum capacity.

When to use AWS Auto Scaling strategies?

There is a right time for using these various auto-scaling strategies. Basically, they boil down to whether you’re using predictive or dynamic scaling. While predictive scaling predicts future traffic based on past trends, dynamic scaling uses defined metric thresholds for automated resource provisioning. If you’re trying to decide which to use, or when, start by using metrics to determine traffic and usage patterns. First, determine the stability of usage patterns, as well as the frequency of traffic spikes. Then define what you actually need.

  • Dynamic scaling: The most practical solution in the majority of situations where web traffic varies fairly evenly over time. But it may not be able to respond quickly to sharp spikes unless your AWS setup is configured with appropriate scaling thresholds.
  • Predictive scaling: Should be used when you know to expect an elevated level of usage.
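
As a sketch of demand-based (dynamic) scaling, a target-tracking policy can be attached to an existing Auto Scaling group with the AWS CLI; my-asg is a placeholder group name:

```shell
# Attach a target-tracking policy to an existing Auto Scaling group
# (my-asg is a placeholder). EC2 Auto Scaling adds or removes instances
# to keep average CPU utilization near the 50% target.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name cpu50-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0
    }'
```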

If your apps experience traffic fluctuations on a routine basis, make sure you always have instances to support them using AWS Auto Scaling. Not only does it give you the resources you need when you need them most, it does so at the lowest cost available.

If you have any doubts about this subject, or need services to get the best Auto Scaling EC2 services, feel free to contact us. AIR ZERO CLOUD will be your digital partner. Email id: [email protected]



What is AWS EC2 Autoscaling?

- Posted in Hosting by


Auto-scaling is a capability built into AWS to ensure you have the right number of EC2 instances provisioned to handle the load of your app. Using auto-scaling, you can remove the guesswork in deciding how many EC2 instances are needed to provide an acceptable level of performance for your app, without over-provisioning resources and incurring unnecessary costs.

When you are running workloads in production, it is a good idea to use Amazon CloudWatch to monitor resource usage such as CPU utilization. However, when desired thresholds are exceeded, CloudWatch will not by default provision more resources to handle the increased load, which is where auto-scaling comes into play.

Depending on the nature of your app, it is common for traffic loads to vary depending on the time of day, or day of the week.

If you provision enough EC2 instances to cope with the highest demand, then you will have plenty of other days and time periods where you have lots of capacity that remains unused. That means you are paying for instances that are lying idle.

Conversely, if you do not provision enough capacity, then at peak times, when demand exceeds the processing power needed to provide acceptable performance, your app’s performance will degrade, and your users may experience severe slowness or even timeouts due to a lack of available CPU capacity.

Auto-scaling is the solution: it lets you automate the addition and removal of EC2 instances based on monitored metrics such as CPU usage. This lets you minimise costs during periods of low demand, and ramp up resources during peak load times so app performance is not affected.

What are Autoscaling components?

There are three components needed for auto-scaling.

Launch Configuration

This component defines what will be launched by your autoscaler. As when launching an EC2 instance from the console, you specify which AMI to use, which instance type to launch, and which security groups and roles the instances should have.

Auto Scaling Group

This component of auto-scaling defines where the autoscaling should act: which VPC and subnets to use, which load balancer to attach, the minimum and maximum number of EC2 instances to scale out and in, and the desired number.

If you set the minimum instance number to 2, then should the instance count fall below 2 for any reason, the autoscaler will add instances back until the minimum number of EC2 instances is running.

If you set the maximum number of instances to 10, then the autoscaler will keep adding EC2 instances when CPU load warrants it until you reach 10, at which point no additional instances will be added even if CPU load is maxed out.

Auto Scaling Policy

This third component of auto-scaling defines when auto-scaling is invoked. This can be scheduled, such as at a specific day and time, or on demand based on monitored metrics that trigger the addition and removal of EC2 instances from your workload.
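
The first two components above can be sketched with two AWS CLI calls; this is only a sketch, and the AMI, security group, and subnet IDs are placeholders:

```shell
# 1. Launch template: defines WHAT to launch (AMI, instance type, key pair,
#    security groups). All IDs here are placeholders.
aws ec2 create-launch-template \
    --launch-template-name my-launch-template \
    --launch-template-data '{
        "ImageId": "ami-0abcdef1234567890",
        "InstanceType": "t2.micro",
        "KeyName": "my-key-pair",
        "SecurityGroupIds": ["sg-0123456789abcdef0"]
    }'

# 2. Auto Scaling group: defines WHERE to launch, plus the min/max/desired
#    instance counts described above; the subnet ID is a placeholder.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-template LaunchTemplateName=my-launch-template \
    --min-size 2 \
    --max-size 10 \
    --desired-capacity 2 \
    --vpc-zone-identifier "subnet-0123456789abcdef0"
```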

What about Dynamic AWS Ec2 Autoscaling?

One method of dynamic auto-scaling is to allow Amazon CloudWatch to trigger auto-scaling when thresholds are exceeded.

You can trigger an action from a CloudWatch alarm when CPU usage exceeds or falls below a predefined threshold, and you can also define the time period that the out-of-bounds condition should persist. So, for instance, if the CPU threshold is higher than 80% for 5 minutes, then an auto-scaling action occurs. You can also define a Dynamic Scaling Policy when building the ASG to scale instances in and out based on given thresholds.
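
The alarm-driven flow described above can be sketched with the AWS CLI; my-asg and the 80%-for-5-minutes values mirror the example, and the policy ARN is a placeholder you would copy from the first command's output:

```shell
# Simple scaling policy: add one instance when invoked (my-asg is a placeholder)
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name scale-out-on-cpu \
    --scaling-adjustment 1 \
    --adjustment-type ChangeInCapacity

# CloudWatch alarm: fire when average CPU stays above 80% for 5 minutes;
# paste the PolicyARN printed by the previous command in place of $POLICY_ARN
aws cloudwatch put-metric-alarm \
    --alarm-name cpu-above-80 \
    --metric-name CPUUtilization \
    --namespace AWS/EC2 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --dimensions Name=AutoScalingGroupName,Value=my-asg \
    --alarm-actions "$POLICY_ARN"
```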

How to set up an AWS EC2 Autoscaling group?

  • To set up EC2 auto-scaling, you first need to create a new ASG, which can be found in the EC2 dashboard.
  • The first step when creating the new ASG is to name the group and optionally select a previously saved launch template.
  • To create a launch template, open the new template dialogue. First you will be required to name the template and describe the version.
  • Next you will be required to choose which Amazon Machine Image to use, which defines the OS and architecture to provision.
  • Now you should create or choose a key pair to use to access the instances provisioned within the ASG, and indicate whether you intend to create the resources within a VPC or not.
  • Next you can choose to add volumes and resource tags, and then create the template.
  • Now we can use the template to create the ASG by entering the new ASG name, choosing the template, and advancing to the next page.
  • The next step is the “Configure settings” step, where you can stick with the launch template config.
  • The next step, “Advanced Options”, lets you attach or create a load balancer and set up optional health check monitoring.
  • Once created, the auto-scaling group will provision the desired number of instances and then respond to load, scaling out and in as needed.
  • To manage the auto-scaling policies, you can open the Automatic Scaling tab and create a dynamic scaling policy.
  • To remove the ASG, choose the ASG from the EC2 Auto Scaling groups dashboard and choose Delete.

If you have any questions about this subject, or need services to get the best Auto Scaling EC2 services, feel free to contact us. AIR ZERO CLOUD will be your digital partner. Email id: [email protected]



How to visit the IAM management console?

You'll be signing in with your own Amazon credentials first.

IAM is Amazon's access management system, in which you can create users with access to as much or as little of your Amazon AWS account as you wish.

How to create new users?

Enter a username that makes sense, like Firstname.Lastname or FirstnameL.

Select Create for the user. Don't bother generating access keys for this new user; they can create their own later on.

How to give the new user administrator access?

You've now created the new user, here called "test.jim". Let's give them Administrator Access.

  • The first step is to select the user from the list of users on the display.
  • The second step is to select the "Permissions" tab displayed in the pane below the users list.
  • The third step is to select the "Attach User Policy" button in that "Permissions" tab.

How to select administrator access?

On the Manage User Permissions page, there will be an option named "Administrator Access"; press the Select option.

How to apply the policy?

Leave the suggested permissions at their defaults, and click "Apply Policy".

Congratulations, you've created an administrator. Now to let them log in, keep reading.
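
The console steps above can also be sketched with the AWS CLI, assuming it is configured with administrator credentials; test.jim matches the example user, and the password is a throwaway example:

```shell
# Create the user (test.jim matches the example above)
aws iam create-user --user-name test.jim

# Attach the AWS-managed AdministratorAccess policy
aws iam attach-user-policy \
    --user-name test.jim \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Give the user a console password they must change at first sign-in
aws iam create-login-profile \
    --user-name test.jim \
    --password 'TempPassw0rd!' \
    --password-reset-required
```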

How to give your teammate a password?

Click the "Security Credentials" tab, next to the "Permissions" tab you were using before.

Then click the "Manage Password" button.

How to give the password to your teammate?

Either over the phone, on a piece of paper on their desk, or in a message. They should change their password soon after you give it to them.

How to provide instructions to your teammate for logging in?

Your teammate will need some instructions for logging into your management console.

The login URL for your AWS account is located on your dashboard.

  • First, click the displayed dashboard.
  • Write down the sign-in URL for your Amazon AWS console.

How to customize the sign-in URL?

You can personalise the URL by giving it a name that is easily recognised, like your organization's name.

How to tell your employee the username + password + sign-in URL?

Your staff will require the username, password, and sign-in URL that you created in order to sign in. They cannot sign in on the basic Amazon website; they must use the special sign-in URL that you give them.

If you have any doubts about this subject, or need services to get the best Amazon AWS services, feel free to contact us. AIR ZERO CLOUD will be your digital partner. Email id: [email protected]



The Remote Desktop app is straightforward, but depending on how you need to connect, the app is only one piece of the puzzle, since you must also configure additional settings and forward the correct port on the router to connect to other Windows 10 machines remotely. Although you can use the Remote Desktop application on any version of Windows 10, the remote desktop protocol that enables connections to a device is only available on Windows 10 Pro and business variants of the operating system. Windows 10 Home doesn't allow remote connections.

How to enable remote connection on Windows 10?

When trying to establish a remote connection from within the LAN, you only need to make sure the computer you're trying to access has the option to allow remote desktop connections enabled.

Use these steps to enable remote connection on Windows 10:

  • Open Control Panel.
  • Click on System and Security.
  • Click the Allow remote access option.
  • Click the Remote tab.
  • Check the Allow remote connections to this computer option.
  • Check the Allow connections only from computers running Remote Desktop with Network Level Authentication option.
  • Click the OK button.
  • Click the Apply button.
  • Click the OK button.
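
The same settings can be sketched from an elevated Command Prompt; these registry values and the firewall rule group correspond to the options above:

```shell
:: Enable Remote Desktop connections (the same setting the Remote tab toggles)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" ^
    /v fDenyTSConnections /t REG_DWORD /d 0 /f

:: Require Network Level Authentication, matching the checkbox above
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" ^
    /v UserAuthentication /t REG_DWORD /d 1 /f

:: Open the built-in Remote Desktop firewall rule group
netsh advfirewall firewall set rule group="remote desktop" new enable=Yes
```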

How to set up the app?

  • Open Settings.
  • Click on System.
  • Click on Remote Desktop.
  • Turn on the Enable Remote Desktop toggle switch.
  • Click the Confirm button.

How to enable remote connections on the router?

Before enabling remote connections on the router, you first have to configure a static IP address on Windows 10. The steps for this are:

  • Open Control Panel.
  • Click on Network and Internet.
  • Click on Network and Sharing Center.
  • Click Change adapter settings.
  • Right-click the active adapter and select the Properties option.
  • Select the Internet Protocol Version 4 option.
  • Click the Properties button.
  • Click the General tab.
  • Click the Use the following IP address option.
  • Specify a local IP address outside the local DHCP scope to prevent address conflicts.
  • Specify a subnet mask for the network.
  • Specify the default gateway address, which is the router's address.
  • Under the "Use the following DNS server addresses" section, in the "Preferred DNS server" field, specify the IP address of your DNS server, which in most cases is also the address of the router.
  • Click the OK button.
  • Click the Close button.

How to determine network public IP address?

To find your network's public IP address, use the following steps:

  • Open your web browser.
  • Search for "What's my IP."
  • Confirm your public IP address in the result.
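
Alternatively, from a command prompt you can query the AWS checkip service mentioned earlier in this blog:

```shell
# Query AWS's checkip service; it returns your public IP address as plain text
curl https://checkip.amazonaws.com
```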

How to forward a port on your router?

Steps to forward a port on your router:

  • Click Start.
  • Search for Command Prompt and click the top result to open the console.
  • Type the following command to display the current TCP configuration and press Enter: ipconfig
  • Confirm the device IP address.
  • Under the "Default Gateway" field, confirm the device gateway address.
  • Open your web browser.
  • Enter the IP address of the router in the address bar.
  • Sign in to the router using the username and password.
  • Browse to the Port Forwarding settings page.
  • Confirm that the Port Forwarding service is enabled.
  • Open the port forwarding list and click the Add profile button.
  • Create a new port forward with the needed information.
  • Click the OK button.

How to enable remote desktop connection?

First, you have to install the Remote Desktop app. For this, you need to follow the steps below:

  • First, you need to open the Microsoft Remote Desktop app page
  • Second, you need to click the Get button
  • Next is to click the Open Microsoft Store button
  • Next is to click the Get button

Next, you need to start remote desktop connection:

  • First, you need to open the Remote Desktop app
  • Second, you need to click the + Add button in the top right
  • The third step is to click the PCs option
  • The next step is to specify the TCP/IP address of the computer you're trying to connect to
  • The next step is, under the "User account" section, to click the + button in the top-right
  • The next step is to confirm the account details used to sign in to the remote computer
  • Select the Next button
  • Select the next option
  • Press the Save button
  • Press the connection to start a remote session
  • The next step is to check the Don't ask about this certificate again option
  • The next step is to click the Connect button
  • Change the app connection settings
  • Change the session settings
  • Change the next connection setting

If you have any questions about this topic or need the best remote desktop application services, feel free to contact us. AIRZERO CLOUD will be your digital solution. Email id: [email protected]

Author - Johnson Augustine
Cloud Architect, Ethical hacker
Founder: Airo Global Software Inc
LinkedIn Profile:


What is Amazon RDS?

Amazon Relational Database Service (Amazon RDS) is a managed SQL database service provided by Amazon Web Services. Amazon RDS supports an array of database engines to store and retrieve data. It also helps with relational database management tasks, such as data migration, backup, and patching.

Amazon RDS facilitates the deployment and maintenance of relational databases in the cloud. A cloud administrator uses Amazon RDS to set up, operate, manage and scale a relational database instance in the cloud. Amazon RDS is not itself a database; it is a service used to manage relational databases.

How does Amazon RDS work?

Databases store data that applications can draw on to perform different functions. A relational database uses tables to store data. It is called relational because it manages the relationships between data points.

Amazon provides several instance types with different combinations of resources, such as CPU, memory, storage options and networking capacity. Each type comes in a variety of sizes to suit different workloads. RDS users can use AWS Identity and Access Management (IAM) to define and set permissions for who can access an RDS database.

What are the important features of Amazon RDS?

  • Uses replication features
  • Different types of storage
  • Monitoring
  • Patching
  • Backups
  • Incremental billing
  • Encryption

What are the advantages and disadvantages of Amazon RDS?

Benefits are:

  • Ease of use
  • Cost-effectiveness
  • Reducing the workload on that one instance
  • RDS splits up compute and storage

Drawbacks are:

  • Lack of root access

  • Downtime

What are Amazon RDS database instances?

A database administrator can create, configure, manage and delete an Amazon RDS instance, along with the resources it uses. An Amazon RDS instance is a cloud database environment. Users can also spin up several databases on one instance, depending on the database engine used.

What are Amazon RDS database engines?

  • Amazon Aurora
  • RDS for MariaDB
  • RDS for MySQL
  • RDS for Oracle Database
  • RDS for PostgreSQL
  • RDS for SQL Server
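As a hedged sketch, an instance with one of these engines can be provisioned from the AWS CLI. Every identifier and the password below are placeholders, and the command is only echoed here so it is safe to run without AWS credentials; remove the echo step and supply real values to actually create an instance:

```shell
# Hypothetical sketch: provision a small MySQL RDS instance via the AWS CLI.
# All identifiers and the password are placeholders, not real values.
cmd="aws rds create-db-instance \
  --db-instance-identifier example-db \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --master-username admin \
  --master-user-password change-me-123 \
  --allocated-storage 20"
echo "$cmd"
```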

What are the Amazon RDS use cases?

  • Online retailing
  • Mobile and online gaming
  • Travel application
  • Streaming application
  • Finance application

If you have any questions about this topic or need the best AWS hosting services, feel free to contact us. AIRZERO CLOUD will be your digital solution. Email id: [email protected]


CentOS Web Panel (CWP) is a control panel for web hosting. It is a free alternative to cPanel. It has an easy-to-use user interface and a variety of features for newbies who want to build and manage hosting servers. Using CWP is simple and convenient, as you don't have to access the server over SSH for every little task that needs to be completed.

This blog provides a detailed guide on enabling and using the CentOS Web Panel on CentOS 8.

What are the important points to remember while installing CentOS Web Panel?

  • After you install CWP, it cannot be removed
  • You have to reinstall the operating system to remove CWP from your system
  • Your hostname cannot be the same as your domain name
  • You should only install CWP on a server with a freshly installed operating system
  • CWP does not support dynamic or sticky IP addresses; use a static IP

What are the steps to enable the CentOS Web Panel on CentOS 8?

  • The first step is to get the server ready
  • The second step is to update the server
  • The next step is the installation of CWP
  • The next step is to configure the CentOS Web Panel

How to get the server ready?

First, you need to install the EPEL repository :

$ sudo dnf install epel-release

After that, install the required packages, such as wget, for the CWP installation by using the command below:

$ sudo dnf install wget -y

Once the required packages are installed, update the host.

How to update the server?

Now we will use the command given below to update the host:

$ sudo dnf update -y

We will have to restart the server now to let the updates modify the system. So, restart the system using the command:

$ reboot

After restarting the CentOS 8 system, it is fully set up for installing the CentOS Web Panel.

How to install the CWP?

We are ready to enable CWP on our system now that we have perfectly prepared our server.

First, use the cd command to change your directory to /usr/local/src using the command:

$ cd /usr/local/src

Now use the wget command to download the latest version of CWP to your system:

$ sudo wget

Now run the following command to execute the downloaded shell script:

$ sudo sh cwp-el8-latest

CWP has now been installed. Restart the server again to let the changes take effect:

$ reboot

You can also pass the -r flag to the sh command to automatically restart the system after CWP is successfully installed:

$ sudo sh cwp-el8-latest -r yes

Now we will learn how to configure and use the CentOS Web Panel on CentOS 8.

How to configure the CentOS Web Panel?

First, access the Admin Control WebPanel GUI by entering the server IP address and port number 2030 in your browser.

To check the server IP, open up the terminal of the system on which you installed CWP and enter the command below:

$ ip a

Enter root as the username and provide the server's password to log in to the control panel.

Add Name Server 1 and Name Server 2 with their IP addresses and click the Save Changes button. Then create a user account: provide all the details, such as domain, username, and email, and click the Create button. Finally, we will add a domain.

To add a domain, click on "Domains" and then go to "Add Domain":

CentOS Web Panel is a control panel for web hosting with an intuitive interface and many features for creating and managing hosting servers. In this blog, we first learned how to prepare the server for installation, and then how to install and configure the CentOS Web Panel on the CentOS 8 operating system.

If you have any questions about this topic or need the best cPanel hosting services, feel free to contact us. AIRZERO CLOUD will be your digital solution. Email id: [email protected]

How To Enable cPanel On Centos 7?

- Posted in Hosting by


When setting up a new CentOS 7 server, you may find yourself looking for control panel software that will allow you to manage your websites and web applications in a graphical user interface. One of the most popular web hosting control panel solutions is cPanel. This software gives you a powerful control panel interface that allows you to manage and customize many different aspects of your server in a user-friendly environment. In this blog, we will show how to prepare your CentOS 7 server and install cPanel using the command-line interface. Before running the steps in this blog, please ensure that you have set up SSH access on your server.

What is cPanel?

cPanel is a Linux control panel used to conveniently manage your hosting. The system operates similarly to a desktop application. With cPanel, you can perform actions from a user-friendly dashboard instead of running complex commands. You should be careful while selecting cPanel services, and choose the best cPanel service for your needs.

What are the steps to prepare for installation?

Before you can install cPanel on CentOS, you will first need to disable your firewall, the Network Manager, and SELinux.

  • The first step is to stop the firewall service using the command below:

    systemctl stop firewalld.service

  • The next step is to disable the firewall service using the command below.

    systemctl disable firewalld.service

  • After disabling the firewall, you will need to stop the Network Manager service using the following command.

    systemctl stop NetworkManager

  • Once the service is stopped, you can disable the Network Manager using the command below.

    systemctl disable NetworkManager

  • The next step is to disable SELinux by editing the following file with the nano command below.

    nano /etc/selinux/config
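The edit that nano makes here can also be scripted with sed. The sketch below demonstrates the substitution on a temporary copy so it is safe to run anywhere; on a real CentOS system you would point it at /etc/selinux/config as root:

```shell
# Demonstrate turning SELINUX=enforcing into SELINUX=disabled with sed,
# using a throwaway copy of the config instead of the real file.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
result=$(grep '^SELINUX=' "$cfg")
echo "$result"    # prints: SELINUX=disabled
rm -f "$cfg"
```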

How to install cPanel?

  • The first step is to change directory into the /home folder with the following command.

    cd /home

  • The next step is to download the latest release of cPanel using the command below.

    curl -o latest -L

  • The next step is to run the installation script with the command below. After the process finishes, cPanel should be installed on your system.

    sh latest

Congratulations, you have successfully installed cPanel on CentOS 7.

If you have any doubts about this topic or need the best cPanel hosting services, feel free to contact us. AIRZERO CLOUD will be your digital solution.


What is a firewall?

A firewall is software that prevents unauthorized access to a network. It inspects incoming and outgoing traffic using a set of rules to identify and block threats.

Firewalls are used in both personal and enterprise settings, and many devices come with one built-in, including Mac, Windows, and Linux computers. They are widely considered an essential component of network security.

In addition to immediate cyber threat defense, firewalls perform important logging functions. They keep a record of events, which administrators can use to identify patterns and refine rule sets. Logging is an important secondary purpose of a firewall.

The Linux kernel includes the Netfilter subsystem, which is used to decide the fate of network traffic headed into or through the server. All modern Linux firewall solutions use this system for packet filtering.

The packet filtering system would be of little use to administrators without a userspace interface to manage it. This is the job of iptables: when a packet reaches your server, it is handed off to the Netfilter subsystem for acceptance, manipulation, or rejection based on the rules supplied from userspace via iptables.

iptables is all you need to manage your firewall if you're familiar with it, but many frontends are available to simplify the task.

UFW - Uncomplicated Firewall

The default firewall configuration tool for Ubuntu is UFW. Developed to simplify iptables firewall configuration, UFW provides a user-friendly way to create an IPv4 or IPv6 host-based firewall.

UFW is disabled by default. From the UFW man page:

"UFW is not intended to provide complete firewall functionality via its command interface, but instead provides an easy way to add or remove simple rules. It is currently mainly used for host-based firewalls."

Below are some examples of how to use UFW:

  • First, ufw needs to be enabled. From a terminal prompt enter:

    sudo ufw enable

  • To open a port :

    sudo ufw allow 22

  • Rules can also be added using a numbered format:

    sudo ufw insert 1 allow 80

  • Similarly, to close an opened port:

    sudo ufw deny 22

  • To remove a rule, use delete followed by the rule:

    sudo ufw delete deny 22

It is also possible to allow access from a specific host. The example below allows SSH access from host to any IP address on this host:

sudo ufw allow proto TCP from to any port 22
  • Replace the host address with a subnet to allow SSH access from the entire subnet.
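As a hedged, concrete version of such a rule, the commands below assume a host 192.168.0.2 and a subnet 192.168.0.0/24 (placeholder addresses, not values from the original). They are only echoed so the sketch runs without root; drop the echo to apply them on a real system:

```shell
# Hypothetical examples: allow SSH from one host, then from a whole subnet.
# 192.168.0.2 and 192.168.0.0/24 are placeholders; substitute your own network.
rule_host="sudo ufw allow proto tcp from 192.168.0.2 to any port 22"
rule_subnet="sudo ufw allow proto tcp from 192.168.0.0/24 to any port 22"
echo "$rule_host"
echo "$rule_subnet"
```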

Adding the --dry-run option to a ufw command will output the resulting rules without applying them. For example:

sudo ufw --dry-run allow HTTP

*filter
:ufw-user-input - [0:0]
:ufw-user-output - [0:0]
:ufw-user-forward - [0:0]
:ufw-user-limit - [0:0]
:ufw-user-limit-accept - [0:0]
### RULES ###

### tuple ### allow tcp 80 any
-A ufw-user-input -p tcp --dport 80 -j ACCEPT

### END RULES ###
-A ufw-user-input -j RETURN
-A ufw-user-output -j RETURN
-A ufw-user-forward -j RETURN
-A ufw-user-limit -m limit --limit 3/minute -j LOG --log-prefix "[UFW LIMIT]: "
-A ufw-user-limit -j REJECT
-A ufw-user-limit-accept -j ACCEPT

The output lists the rules that would be added.

  • UFW can be disabled by:

    sudo ufw disable

  • To see the firewall status, enter:

    sudo ufw status

  • And for more verbose status information use:

    sudo ufw status verbose

  • To see the rules in a numbered format:

    sudo ufw status numbered

If the port you want to open or close is defined in /etc/services, you can use the port name instead of the number. In the examples above, replace 22 with ssh. This is a quick introduction to using ufw.

ufw Application Integration
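For example, the name-to-port mapping can be checked directly, assuming a standard /etc/services file is present on the system:

```shell
# Show the /etc/services entries that map the service name "ssh" to its port.
grep -E '^ssh[[:space:]]' /etc/services
```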

Applications that open ports can include a ufw profile, which details the ports needed for the application to function properly. The profiles are stored in /etc/ufw/applications.d and can be edited if the default ports have been changed.

  • To view which applications have installed a profile, enter the following in a terminal:

    sudo ufw app list

  • Similar to allowing traffic to a port, using an application profile is accomplished by entering:

    sudo ufw allow Samba

  • An extended syntax is available as well:

    ufw allow from to any app Samba

  • Replace Samba and the address with the application profile you are using and the IP range for your network. To view details about which ports, protocols, etc., are defined for an application, enter:

    sudo ufw app info Samba

Not all applications that require opening a network port come with ufw profiles, but if you have profiled an application and would like the file to be included with the package, please file a bug against the package in Launchpad:

`ubuntu-bug name of the package`

What is IP Masquerading?

The purpose of IP Masquerading is to allow machines with private, non-routable IP addresses on your network to access the Internet through the machine doing the masquerading. Traffic from your private network destined for the Internet must be manipulated for replies to be routable back to the machine that made the request. To do this, the kernel must modify the source IP address of each packet so that replies will be routed back to it, rather than to the private IP address that made the request, which is not possible over the Internet. Linux uses Connection Tracking (conntrack) to keep track of which connections belong to which machines and reroute each return packet accordingly. Traffic leaving your private network is thus "masqueraded" as having originated from your Ubuntu gateway machine. This process is referred to in Microsoft documentation as Internet Connection Sharing.

What is ufw Masquerading?

IP Masquerading can be achieved using custom ufw rules. This is possible because the current back-end for ufw is iptables-restore, with the rules files located in /etc/ufw/*.rules.

These files are a great place to add legacy iptables rules used without ufw, and rules that are more network-gateway or bridge related.

The rules are split into two different files: rules that should be executed before ufw command line rules, and rules that are executed after ufw command line rules.

First, packet forwarding needs to be enabled in ufw. Two configuration files will need to be adjusted. In /etc/default/ufw change the DEFAULT_FORWARD_POLICY to "ACCEPT": DEFAULT_FORWARD_POLICY="ACCEPT" Then edit /etc/ufw/sysctl.conf and uncomment:


Similarly, for IPv6 forwarding uncomment:


Now add rules to the /etc/ufw/before.rules file. The default rules only configure the filter table, and to enable masquerading the nat table will need to be configured. Add the following to the top of the file just after the header comments:

# nat Table rules
*nat
:POSTROUTING ACCEPT [0:0]

# Forward traffic from eth1 through eth0.
-A POSTROUTING -s -o eth0 -j MASQUERADE

# don't delete the 'COMMIT' line or these nat table rules won't be processed
COMMIT

The comments are not strictly necessary, but it is considered good practice to document your configuration. Also, when modifying any of the rules files in /etc/ufw, make sure these lines are the last lines for each table modified: # don't delete the 'COMMIT' line or these rules won't be processed COMMIT

For each table, a corresponding COMMIT statement is required. In these examples, only the nat and filter tables are shown, but you can also add rules for the raw and mangle tables.

Finally, disable and re-enable ufw to apply the changes:

sudo ufw disable && sudo ufw enable

IP Masquerading should now be enabled. You can also add any additional FORWARD rules to the /etc/ufw/before.rules. It is recommended that these additional rules be added to the ufw-before-forward chain.

How does iptables Masquerading work?

iptables can also be used to enable Masquerading. Similar to ufw, the first step is to enable IPv4 packet forwarding by editing /etc/sysctl.conf and uncommenting the following line: net.ipv4.ip_forward=1

  • If you wish to enable IPv6 forwarding as well, also uncomment:


  • Next, execute the sysctl command to enable the new settings in the configuration file:

    sudo sysctl -p
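A quick way to confirm the setting took effect is to read the live value from /proc; this prints 1 when forwarding is enabled and 0 when it is not:

```shell
# Read the kernel's current IPv4 forwarding flag (no root required).
cat /proc/sys/net/ipv4/ip_forward
```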

IP Masquerading can now be accomplished with a single iptables rule, which may differ slightly based on your network configuration:

sudo iptables -t nat -A POSTROUTING -s -o ppp0 -j MASQUERADE

The above command assumes that your private address space is and that your Internet-facing interface is ppp0. The syntax is broken down as follows:

  • -t nat – the rule is to go into the nat table
  • -A POSTROUTING – the rule is to be appended (-A) to the POSTROUTING chain
  • -s – the rule applies to traffic originating from the specified address space
  • -o ppp0 – the rule applies to traffic scheduled to be routed through the specified network device
  • -j MASQUERADE – traffic matching this rule is to "jump" (-j) to the MASQUERADE target to be manipulated as described above

Also, each chain in the filter table has a default policy of ACCEPT, but if you are creating a firewall in addition to a gateway device, you may have set the policies to DROP, in which case your masqueraded traffic needs to be allowed through the FORWARD chain for the above rule to work:

sudo iptables -A FORWARD -s -o ppp0 -j ACCEPT

sudo iptables -A FORWARD -d -m state --state ESTABLISHED,RELATED -i ppp0 -j ACCEPT

The above commands will enable all connections from your network to the Internet and all traffic related to those connections to return to the machine that initiated them.

If you want masquerading to be enabled on reboot, which you probably do, edit /etc/rc.local and add any commands used above. For example, add the first command with no filtering:

iptables -t nat -A POSTROUTING -s -o ppp0 -j MASQUERADE

What are firewall Logs?

Firewall logs are essential for recognizing attacks, troubleshooting your firewall rules, and noticing unusual activity on your network. You must include logging rules in your firewall for them to be generated, though, and logging rules must come before any applicable terminating rule.

If you are using ufw, you can turn on logging by entering the following in a terminal:

sudo ufw logging on

To turn logging off in ufw, simply replace on with off in the above command. If you are using iptables instead of ufw, enter:

sudo iptables -A INPUT -m state --state NEW -p tcp --dport 80 \ -j LOG --log-prefix "NEW_HTTP_CONN:"

A request on port 80 from the local machine, then, would generate a log entry in dmesg that looks like this:

[4304885.870000] NEW_HTTP_CONN: IN=lo OUT=
SRC= DST= LEN=60 TOS=0x00 PREC=0x00 TTL=64
SPT=53981 DPT=80 WINDOW=32767 RES=0x00 SYN URGP=0

The above log will also appear in /var/log/messages, /var/log/syslog and /var/log/kern.log. This behavior can be modified by editing /etc/syslog.conf appropriately, or by installing and configuring ulogd and using the ULOG target instead of LOG. The ulogd daemon is a userspace server that listens for logging directives from the kernel specifically for firewalls, and can log to any file you like, or even to a PostgreSQL or MySQL database. Making sense of your firewall logs can be simplified by using log analyzing tools such as logwatch, fwanalog, fwlogwatch, or lire.
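To make such entries easier to work with, the key=value fields can be picked apart in the shell. In this sketch the SRC and DST values are placeholder documentation addresses (192.0.2.x), since the entry above leaves them out:

```shell
# Hypothetical LOG entry; SRC/DST use placeholder documentation addresses.
logline='NEW_HTTP_CONN: IN=lo OUT= SRC=192.0.2.10 DST=192.0.2.1 LEN=60 SPT=53981 DPT=80'

# Print just the source-address and destination-port fields.
echo "$logline" | awk '{for (i = 1; i <= NF; i++) if ($i ~ /^(SRC|DPT)=/) print $i}'
```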

If you have any questions about this topic or need server administration services, feel free to contact us. AIRZERO CLOUD will always be your strong firewall.

Email id: [email protected]
