IaC and Configuration Management

Table of contents

Understanding Infrastructure as Code and Configuration Management

IaC and Configuration Management Tools:

1. Infrastructure as Code (IaC) Tools

These tools focus on provisioning and managing infrastructure resources like servers, networks, and storage.

  • Terraform:
    An open-source tool that uses a declarative configuration language to provision and manage infrastructure across cloud providers.

  • AWS CloudFormation:
    A native AWS service for managing AWS resources using templates.

  • Azure Resource Manager (ARM):
    Microsoft Azure's native IaC service for managing cloud resources.

  • Google Cloud Deployment Manager:
    Google Cloud’s tool for defining and managing cloud resources using configuration files.

  • Pulumi:
    A modern IaC tool that allows using general-purpose languages (e.g., Python, TypeScript) to define infrastructure.
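
To make the declarative style concrete, here is a minimal CloudFormation-style template sketch; the resource name, AMI ID, and instance type are illustrative placeholders, not values from this document:

```yaml
# Minimal CloudFormation template sketch: declares ONE EC2 instance.
# The AMI ID and instance type below are placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Description: Provision a single web server instance

Resources:
  WebServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      Tags:
        - Key: Name
          Value: web-server
```

Deploying a template like this (e.g., with aws cloudformation create-stack) asks AWS to converge on the declared state: you describe what should exist, not the steps to create it.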


2. Configuration Management (CM) Tools

These tools focus on managing and maintaining the state of software, packages, and settings on existing infrastructure.

  • Ansible:
    A simple, agentless tool that uses YAML to define configurations. Ideal for both configuration management and orchestration tasks.

  • Chef:
    Uses a Ruby-based DSL (domain-specific language) to automate infrastructure configuration and application deployment.

  • Puppet:
    A declarative CM tool that enforces the desired state on nodes using manifests.

  • SaltStack:
    A flexible CM tool that uses YAML-based states to manage systems at scale.

  • CFEngine:
    One of the oldest CM tools, focusing on high-performance, lightweight configuration management.
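
As a taste of how CM tools express desired state, here is a minimal SaltStack state sketch; the state IDs and the choice of nginx as the package are illustrative:

```yaml
# Minimal Salt state (.sls) sketch: ensure nginx is installed and running.
# Applied repeatedly, it only makes changes when the system drifts.
nginx_installed:
  pkg.installed:
    - name: nginx

nginx_running:
  service.running:
    - name: nginx
    - require:
      - pkg: nginx_installed
```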

IaC vs. CM:

Difference between Infrastructure as Code (IaC) and Configuration Management (CM):

1. Purpose

  • IaC: Automates the provisioning and deployment of entire infrastructure (e.g., servers, networks, storage).

  • CM: Manages the configuration and state of already deployed infrastructure to ensure consistency and compliance.

2. Scope

  • IaC: Focuses on creating, managing, and destroying infrastructure resources.

  • CM: Deals with managing software, packages, and settings on existing infrastructure.

3. Usage Timing

  • IaC: Used at the time of infrastructure creation or for scaling.

  • CM: Used after the infrastructure is up and running to manage system state or changes.

4. Tools

  • IaC: Tools like Terraform, AWS CloudFormation, and Pulumi are commonly used.

  • CM: Tools like Ansible, Chef, Puppet, and SaltStack are typical.

5. Example

  • IaC: Writing a Terraform script to provision a new virtual machine.

  • CM: Using Ansible to install software packages and apply specific settings to that virtual machine.

6. State Management

  • IaC: Manages the desired state at the infrastructure level (e.g., the existence of resources).

  • CM: Manages the desired state at the software and configuration level.

What are the most common IaC and Config Management Tools?

These tools are widely used for automating the provisioning of cloud and on-premises infrastructure:

  1. Terraform

    • Provider: HashiCorp

    • Features: Multi-cloud support, declarative language, state management, and modularity.

    • Use Case: Provisioning infrastructure on AWS, Azure, Google Cloud, and more.

  2. AWS CloudFormation

    • Provider: Amazon Web Services (AWS)

    • Features: Native AWS IaC tool, uses templates for infrastructure deployment and updates.

    • Use Case: Automating AWS resource management.

  3. Azure Resource Manager (ARM)

    • Provider: Microsoft Azure

    • Features: Native IaC tool for Azure, template-driven, integrates with Azure DevOps.

    • Use Case: Managing Azure cloud resources.

  4. Google Cloud Deployment Manager

    • Provider: Google Cloud Platform (GCP)

    • Features: Configuration-driven deployments of GCP resources.

    • Use Case: Deploying infrastructure in GCP environments.

  5. Pulumi

    • Provider: Pulumi

    • Features: Allows defining infrastructure using general-purpose programming languages like Python and TypeScript.

    • Use Case: Modern cloud deployments across multiple cloud providers.


Most Common Configuration Management (CM) Tools

These tools are used to manage the configuration, state, and consistency of systems and applications:

  1. Ansible

    • Provider: Red Hat

    • Features: Agentless, uses YAML playbooks, simple and easy to learn.

    • Use Case: Configuring servers, application deployments, and orchestration tasks.

  2. Puppet

    • Provider: Puppet Labs

    • Features: Uses declarative manifests, agent-based architecture, and supports state enforcement.

    • Use Case: Managing large-scale environments with complex configurations.

  3. Chef

    • Provider: Progress (formerly Chef Software)

    • Features: Uses a Ruby-based DSL, client-server architecture, and strong policy-driven automation.

    • Use Case: Application automation, server configuration, and compliance.

  4. SaltStack

    • Provider: VMware

    • Features: YAML-driven state definitions, highly scalable, with agent-based or agentless options.

    • Use Case: Real-time event-driven automation and configuration.

  5. CFEngine

    • Provider: Northern.tech

    • Features: Lightweight and high-performance, suitable for managing large infrastructures.

    • Use Case: Managing system configurations and ensuring compliance.

Understanding Configuration Management with Ansible

What is Ansible?

Ansible is an open-source automation tool developed by Red Hat that is widely used for configuration management, application deployment, and orchestration. It’s known for its simplicity and agentless architecture, making it easy to use and highly efficient.


Key Features of Ansible

  1. Agentless Architecture

    • Unlike tools like Puppet and Chef, Ansible does not require agents to be installed on the managed nodes (servers and devices).

    • Communication happens over SSH (for Linux) or WinRM (for Windows), simplifying deployment.

  2. YAML Playbooks

    • Ansible uses Playbooks, which are YAML files, to define automation tasks in human-readable language.

    • Playbooks are easy to read and maintain, even for those new to automation.

  3. Idempotency

    • Ansible ensures that running the same task multiple times will not change the system’s state if it is already in the desired state.

  4. Extensibility

    • Ansible supports a large library of modules that can automate everything from package installation to cloud provisioning.

  5. Cross-Platform

    • Supports a variety of operating systems (Linux, macOS, Windows), cloud services (AWS, Azure, GCP), and network devices.
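
Idempotency in practice can be sketched with tasks like the following; the modules are standard Ansible modules, while the package choice and file path are illustrative. The first run reports changes, and every subsequent run reports "ok":

```yaml
# Idempotent Ansible tasks: re-running them changes nothing
# once the system already matches the declared state.
- name: Demonstrate idempotency
  hosts: all
  become: yes
  tasks:
    - name: Ensure the htop package is present
      apt:
        name: htop
        state: present

    - name: Ensure a marker file exists with fixed content
      copy:
        dest: /etc/ansible-managed.txt   # illustrative path
        content: "Managed by Ansible\n"
```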

Common Use Cases

  1. Configuration Management

    • Ensure systems are configured consistently with specific software, users, firewall rules, etc.

    • Example: Installing Nginx and configuring it to serve a web application.

  2. Application Deployment

    • Deploy multi-tier applications with all necessary dependencies across different environments.

    • Example: Deploying a web app to a fleet of servers.

  3. Infrastructure Automation

    • Automate the creation and management of cloud infrastructure.

    • Example: Provisioning virtual machines and configuring networking.

  4. Orchestration

    • Coordinate the execution of multiple tasks across different nodes or services.

    • Example: Automating the scaling of a web application across multiple servers.


Example Ansible Playbook

A simple playbook to install and start Apache on a server:

- name: Install and configure Apache
  hosts: web_servers
  become: yes  # Run tasks with elevated privileges
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present

    - name: Ensure Apache is running
      service:
        name: apache2
        state: started

Why Choose Ansible?

  • Ease of Use: Simple syntax and no need for agents make it easy to adopt.

  • Scalability: Can manage large-scale environments efficiently.

  • Community Support: Strong community and support from Red Hat.

  • Versatility: Supports cloud automation, network automation, and much more.

Task-01

  • Install Ansible on an AWS EC2 instance (master node):

     sudo apt-add-repository ppa:ansible/ansible
     sudo apt update
     sudo apt install ansible

To install Ansible on an AWS EC2 instance running Ubuntu, follow these steps:


1. Launch an AWS EC2 Instance (Ubuntu)

  1. Log in to your AWS Management Console.

  2. Create a new EC2 instance (Ubuntu 20.04/22.04 is recommended).

  3. Connect to the instance using SSH or the AWS CLI.


2. Install Ansible

Once you are connected to the EC2 instance, execute the following commands to install Ansible:

  1. Add the Ansible PPA Repository
    This adds the official Ansible repository to your system.

     sudo apt-add-repository ppa:ansible/ansible
    
  2. Update the System
    Update the list of available packages.

     sudo apt update
    
  3. Install Ansible
    Install Ansible using the apt package manager.

     sudo apt install ansible -y
    

3. Verify Installation

Check the installed version of Ansible to ensure it was installed correctly:

ansible --version

You should see the installed version number, confirming that Ansible is ready to use on your EC2 instance.


Hosts file (sudo nano /etc/ansible/hosts) and ansible-inventory --list -y:

1. Ansible Hosts File (/etc/ansible/hosts)

The hosts file is a critical part of the Ansible configuration. It defines the inventory (i.e., the systems that Ansible will manage).

Purpose
  • It tells Ansible which servers (nodes) it can manage.

  • You can group servers into different categories or environments (e.g., web servers, database servers).

  • The default location is /etc/ansible/hosts.

Editing the Hosts File

To edit the hosts file, run:

sudo nano /etc/ansible/hosts

Basic Example of an Ansible Hosts File

# Example of static inventory
[web_servers]
192.168.1.10
192.168.1.11

[db_servers]
db.example.com

Explanation of the Example
  • [web_servers] and [db_servers] are groups.

  • You can refer to these groups in Ansible playbooks to run tasks only on those specific servers.

Additional Options

  • You can also specify the SSH user, port, and other connection options in the hosts file:

[web_servers]
192.168.1.10 ansible_user=ubuntu ansible_port=22
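
Ansible also accepts inventories written in YAML. A sketch equivalent to the INI examples above (the hosts mirror the examples; the filename inventory.yml is an assumption):

```yaml
# YAML inventory equivalent of the INI hosts file above.
# Point Ansible at it with: ansible-inventory -i inventory.yml --list -y
all:
  children:
    web_servers:
      hosts:
        192.168.1.10:
          ansible_user: ubuntu
          ansible_port: 22
        192.168.1.11:
    db_servers:
      hosts:
        db.example.com:
```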

2. Ansible Inventory Command (ansible-inventory --list -y)

This command is used to list the current inventory in YAML format, which is easier to read.

Command:

ansible-inventory --list -y

What it Does
  • Displays all groups and hosts defined in your inventory file in YAML format.

  • Shows any connection details, variables, or metadata specified for the hosts.

Sample Output
all:
  children:
    web_servers:
      hosts:
        192.168.1.10: {}
        192.168.1.11: {}
    db_servers:
      hosts:
        db.example.com: {}

Why Use It?
  • It’s a quick way to verify your inventory setup and check for any misconfigurations or errors.

  • Helpful for debugging when Ansible cannot connect to hosts or run playbooks correctly.


Task-02:

  • Setup 2 more EC2 instances with the same Private keys as the previous instance (Node)

  • Copy the private key to the master server where Ansible is set

  • Try a ping command using Ansible to the Nodes.

This task involves setting up two more EC2 instances (nodes), copying the private key to the Ansible master server, and running an Ansible ping command to ensure connectivity.


Step 1: Launch 2 More EC2 Instances (Nodes)

  1. Log in to AWS Management Console:

    • Launch two more EC2 instances (Ubuntu recommended).

    • Use the same private key (.pem file) as the first instance to ensure you can SSH into them.

  2. Security Group:
    Ensure that all instances (master and nodes) are in the same security group and allow SSH connections from each other.


Step 2: Copy the Private Key to the Ansible Master

Copy the private key (your-key.pem) to the Ansible master instance:

  1. From your local machine (where the .pem file is stored), copy the private key to the master using scp:

     scp -i your-key.pem your-key.pem ubuntu@<master-public-ip>:/home/ubuntu/
    
  2. Connect to the Ansible master via SSH.

  3. Set restrictive permissions on the copied key:

     chmod 600 /home/ubuntu/your-key.pem
    

Step 3: Edit the Ansible Hosts File

Add the new nodes to your Ansible hosts file:

sudo nano /etc/ansible/hosts

Add the new nodes under a group (e.g., [nodes]):

[nodes]
192.168.1.20 ansible_user=ubuntu ansible_ssh_private_key_file=/home/ubuntu/your-key.pem
192.168.1.21 ansible_user=ubuntu ansible_ssh_private_key_file=/home/ubuntu/your-key.pem

Replace 192.168.1.20 and 192.168.1.21 with the private IP addresses of your EC2 nodes.
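
When every node shares the same SSH user and key, the repeated per-host variables can be factored out at the group level. A YAML-inventory sketch under that assumption (same IPs and key path as above; the filename inventory.yml is illustrative):

```yaml
# Group-level variables: declared once for the whole "nodes" group
# instead of being repeated on every host line.
nodes:
  hosts:
    192.168.1.20:
    192.168.1.21:
  vars:
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/your-key.pem
```

Saved as inventory.yml, it can be used with: ansible -i inventory.yml nodes -m ping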


Step 4: Ping the Nodes Using Ansible

Run the following Ansible command to test connectivity to the nodes:

ansible nodes -m ping

  • Expected Output: If the connection is successful, you should see something like this:

      192.168.1.20 | SUCCESS => {
        "changed": false,
        "ping": "pong"
      }
      192.168.1.21 | SUCCESS => {
        "changed": false,
        "ping": "pong"
      }
    

Understanding Ad-hoc commands in Ansible

Ad-hoc commands in Ansible are simple, one-time commands used to perform tasks on managed nodes without creating or running a playbook. They are useful for quick, repetitive tasks or testing connectivity to nodes before writing a full playbook.


Why Use Ad-Hoc Commands?

  • Quick Actions: Perform immediate tasks like package installation, service management, or file manipulation.

  • Testing: Useful for testing node connectivity or verifying configurations.

  • Troubleshooting: Quickly debug or fix issues on remote systems.


Basic Syntax of Ansible Ad-Hoc Commands

ansible <target> -m <module> -a "<arguments>"

  • <target>: The group or host you want to run the command on (e.g., all, web_servers).

  • -m <module>: The Ansible module to use (e.g., ping, shell, apt).

  • -a "<arguments>": The arguments to pass to the module.


Common Examples of Ad-Hoc Commands

  1. Ping All Nodes
    Test connectivity to all managed nodes:

     ansible all -m ping
    
    • Expected output:

        192.168.1.10 | SUCCESS => {
          "changed": false,
          "ping": "pong"
        }
      
  2. Run Shell Commands
    Execute shell commands on the nodes:

     ansible all -m shell -a "uptime"
    
  3. Package Installation
    Install the nginx package on all nodes (Debian/Ubuntu systems):

     ansible all -m apt -a "name=nginx state=present"
    
  4. Service Management
    Start the nginx service on all nodes:

     ansible all -m service -a "name=nginx state=started"
    
  5. Copy Files
    Copy a local file to the remote nodes:

     ansible all -m copy -a "src=/path/to/local/file dest=/path/to/remote/location"
    
  6. Manage Users
    Add a user named ansible_user on all nodes:

     ansible all -m user -a "name=ansible_user state=present"
    
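
When an ad-hoc command is worth repeating, it usually graduates into a playbook. A sketch that bundles the nginx install and service examples above into one reusable file:

```yaml
# Playbook equivalent of the two nginx ad-hoc commands above:
#   ansible all -m apt -a "name=nginx state=present"
#   ansible all -m service -a "name=nginx state=started"
- name: Install and start nginx
  hosts: all
  become: yes
  tasks:
    - name: Install the nginx package
      apt:
        name: nginx
        state: present

    - name: Ensure the nginx service is started
      service:
        name: nginx
        state: started
```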

When to Use Ad-Hoc Commands vs. Playbooks

  • Ad-Hoc Commands: For quick, one-time tasks or immediate testing.

  • Playbooks: For complex, reusable, and structured automation workflows.

Task-03:

  • Write an Ansible ad-hoc ping command to ping 3 servers from the inventory file.

  • Write an Ansible ad-hoc command to check uptime.

1. Ping 3 Servers from the Inventory File

To ping three servers listed in your inventory file, use the following command:

ansible all -m ping

  • Explanation:

    • all: Targets all hosts in the inventory file. You can replace all with a specific group or hostname, like web_servers or 192.168.1.10.

    • -m ping: Uses the ping module to test connectivity.

Example (if targeting specific hosts by IP or group name):

ansible 192.168.1.10,192.168.1.11,192.168.1.12 -m ping

2. Check Uptime on Managed Nodes

To check the system uptime on all servers:

ansible all -m shell -a "uptime"

  • Explanation:

    • -m shell: Uses the shell module to run the uptime command on all target hosts.

    • -a "uptime": Specifies the command to execute on the remote nodes.

Example for specific hosts:

ansible web_servers -m shell -a "uptime"

These ad-hoc commands are great for quick diagnostics or testing across multiple nodes.