
AWS CLI and SDKs

Mastering Programmatic Access to AWS Services

The AWS CLI and SDKs enable programmatic access to AWS services, essential for DevOps automation and infrastructure management.

AWS Programmatic Access Options
+------------------------------------------------------------------+
| |
| +------------------------+ |
| | AWS Programmatic | |
| | Access | |
| +------------------------+ |
| | |
| +--------------------+--------------------+ |
| | | | |
| v v v |
| +----------+ +----------+ +----------+ |
| | AWS CLI | | SDKs | | APIs | |
| | | | | | | |
| | - Command| | - Python | | - REST | |
| | Line | | - Java | | - HTTP | |
| | - Shell | | - Node.js| | calls | |
| | Scripts| | - Go | | | |
| | | | - .NET | | | |
| +----------+ +----------+ +----------+ |
| |
+------------------------------------------------------------------+

AWS CLI Installation
+------------------------------------------------------------------+
| |
| Method 1: Official Installer |
| +----------------------------------------------------------+ |
| | # Linux/macOS | |
| | curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" \ |
| | -o "awscliv2.zip" | |
| | unzip awscliv2.zip | |
| | sudo ./aws/install | |
| +----------------------------------------------------------+ |
| |
| Method 2: Package Manager |
| +----------------------------------------------------------+ |
| | # macOS (Homebrew) | |
| | brew install awscli | |
| | | |
| | # Ubuntu/Debian | |
| | sudo apt-get install awscli   # repo package is v1 | |
| | | |
| | # Windows (Chocolatey) | |
| | choco install awscli | |
| +----------------------------------------------------------+ |
| |
| Method 3: Python pip |
| +----------------------------------------------------------+ |
| | pip install awscli | |
| | pip install --upgrade awscli | |
| +----------------------------------------------------------+ |
| |
| Verify Installation: |
| +----------------------------------------------------------+ |
| | $ aws --version | |
| | aws-cli/2.15.0 Python/3.11.6 Linux/6.1.0 botocore/2.25.0 | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
AWS CLI Configuration
+------------------------------------------------------------------+
| |
| Method 1: Interactive Configuration (aws configure) |
| +----------------------------------------------------------+ |
| | $ aws configure | |
| | AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE | |
| | AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/... | |
| | Default region name [None]: us-east-1 | |
| | Default output format [None]: json | |
| +----------------------------------------------------------+ |
| |
| Method 2: Environment Variables |
| +----------------------------------------------------------+ |
| | export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE | |
| | export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/... | |
| | export AWS_DEFAULT_REGION=us-east-1 | |
| | export AWS_DEFAULT_OUTPUT=json | |
| +----------------------------------------------------------+ |
| |
| Method 3: Credentials File |
| +----------------------------------------------------------+ |
| | # ~/.aws/credentials | |
| | [default] | |
| | aws_access_key_id = AKIAIOSFODNN7EXAMPLE | |
| | aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/... | |
| | | |
| | [production] | |
| | aws_access_key_id = AKIAI44QH8DHBEXAMPLE | |
| | aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/... | |
| +----------------------------------------------------------+ |
| |
| Method 4: Config File |
| +----------------------------------------------------------+ |
| | # ~/.aws/config | |
| | [default] | |
| | region = us-east-1 | |
| | output = json | |
| | | |
| | [profile production] | |
| | region = us-west-2 | |
| | output = table | |
| | role_arn = arn:aws:iam::123456789012:role/AdminRole | |
| | source_profile = default | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
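Since the credentials and config files are plain INI, you can inspect them programmatically with any INI parser; a minimal sketch using Python's standard `configparser` (the sample keys are AWS's documented example values, not real credentials):

```python
import configparser

# ~/.aws/credentials is a plain INI file; the CLI and SDKs parse it the
# same way. These are AWS's published example keys, not real secrets.
sample = """\
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[production]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

profiles = parser.sections()                         # profile names
prod_key = parser["production"]["aws_access_key_id"]
```

The same approach works against the real file by calling `parser.read(os.path.expanduser("~/.aws/credentials"))`.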
Credential Precedence Order
+------------------------------------------------------------------+
| |
| 1. Highest Priority |
| +----------------------------------------------------------+ |
| | Command line options (--profile, --region) | |
| +----------------------------------------------------------+ |
| | |
| v |
| 2. Environment Variables |
| +----------------------------------------------------------+ |
| | AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, etc. | |
| +----------------------------------------------------------+ |
| | |
| v |
| 3. Assume Role Credentials |
| +----------------------------------------------------------+ |
| | From role_arn / source_profile in ~/.aws/config | |
| +----------------------------------------------------------+ |
| | |
| v |
| 4. Credentials File (~/.aws/credentials) |
| +----------------------------------------------------------+ |
| | [default] profile | |
| +----------------------------------------------------------+ |
| | |
| v |
| 5. Config File (~/.aws/config) |
| +----------------------------------------------------------+ |
| | Profile settings | |
| +----------------------------------------------------------+ |
| | |
| v |
| 6. Container Credentials (EC2/ECS) |
| +----------------------------------------------------------+ |
| | IAM role for EC2 instance or ECS task | |
| +----------------------------------------------------------+ |
| | |
| v |
| 7. Instance Profile Credentials |
| +----------------------------------------------------------+ |
| | EC2 instance metadata | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
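The chain above is first-match-wins: each provider is tried in order and the first one that yields credentials stops the search. A conceptual Python sketch of that behavior (not botocore's actual resolver, which has more providers and caching):

```python
import os

def from_cli_options(opts):
    # 1. Explicit command-line options win outright
    return opts.get("credentials")

def from_environment(opts):
    # 2. AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    key = os.environ.get("AWS_ACCESS_KEY_ID")
    secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
    if key and secret:
        return {"key": key, "secret": secret}
    return None

def from_shared_files(opts):
    # 3-5. ~/.aws/credentials and ~/.aws/config (stubbed out here)
    return None

def resolve_credentials(opts=None):
    """Return the first credentials any provider yields, in chain order."""
    opts = opts or {}
    for provider in (from_cli_options, from_environment, from_shared_files):
        creds = provider(opts)
        if creds is not None:
            return creds
    raise RuntimeError("Unable to locate credentials")
```

This also explains the classic gotcha: a stale `AWS_ACCESS_KEY_ID` in your shell silently shadows the profile in your credentials file.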

AWS CLI Command Structure
+------------------------------------------------------------------+
| |
| aws <service> <operation> [parameters] [options] |
| |
| +----------------------------------------------------------+ |
| | Component | Description | Example | |
| | --------------|--------------------------|---------------| |
| | aws | CLI command prefix | aws | |
| | <service> | AWS service name | ec2, s3, iam | |
| | <operation> | Action to perform | describe-instances |
| | [parameters] | Required arguments | --instance-ids |
| | [options] | Optional flags | --region, --output |
| +----------------------------------------------------------+ |
| |
| Examples: |
| +----------------------------------------------------------+ |
| | # List EC2 instances | |
| | aws ec2 describe-instances | |
| | | |
| | # List S3 buckets | |
| | aws s3 ls | |
| | | |
| | # Get specific EC2 instance | |
| | aws ec2 describe-instances --instance-ids i-1234567890 | |
| | | |
| | # Use specific profile | |
| | aws s3 ls --profile production | |
| | | |
| | # Change output format | |
| | aws ec2 describe-instances --output table | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
AWS CLI Parameter Types
+------------------------------------------------------------------+
| |
| 1. Simple Values |
| +----------------------------------------------------------+ |
| | --instance-type t2.micro | |
| | --region us-east-1 | |
| +----------------------------------------------------------+ |
| |
| 2. List Values |
| +----------------------------------------------------------+ |
| | --instance-ids i-1234567890 i-0987654321 | |
| | --security-group-ids sg-123456 sg-789012 | |
| +----------------------------------------------------------+ |
| |
| 3. Key-Value Pairs |
| +----------------------------------------------------------+ |
| | --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=MyInstance}]' |
| +----------------------------------------------------------+ |
| |
| 4. JSON Structures |
| +----------------------------------------------------------+ |
| | --cli-input-json file://instance-config.json | |
| | | |
| | # Inline JSON | |
| | --filters '[{"Name":"instance-type","Values":["t2.micro"]}]' |
| +----------------------------------------------------------+ |
| |
| 5. Shorthand Syntax |
| +----------------------------------------------------------+ |
| | # Key=value shorthand (no JSON needed) | |
| | --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=100}' |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
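All of these parameter styles describe the same underlying data structures; the JSON forms are just serialized versions of what the SDKs take as native lists and dicts. A short illustration (the combined request body is hypothetical, for demonstration only):

```python
import json

# The structure behind --tag-specifications
tag_spec = [{
    "ResourceType": "instance",
    "Tags": [{"Key": "Name", "Value": "MyInstance"}],
}]

# The structure behind the inline --filters JSON
filters = [{"Name": "instance-type", "Values": ["t2.micro"]}]

# --cli-input-json takes an entire request body as one JSON document;
# writing it out and reading it back round-trips losslessly.
request_body = json.dumps({"TagSpecifications": tag_spec, "Filters": filters})
parsed = json.loads(request_body)
```

Running `aws ec2 run-instances --generate-cli-skeleton` prints an empty body in exactly this shape, which you can fill in and feed back via `--cli-input-json file://…`.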

AWS CLI Output Formats
+------------------------------------------------------------------+
| |
| 1. JSON (Default) |
| +----------------------------------------------------------+ |
| | $ aws ec2 describe-instances --output json | |
| | { | |
| | "Reservations": [ | |
| | { | |
| | "Instances": [ | |
| | { | |
| | "InstanceId": "i-1234567890", | |
| | "InstanceType": "t2.micro", | |
| | "State": { | |
| | "Name": "running" | |
| | } | |
| | } | |
| | ] | |
| | } | |
| | ] | |
| | } | |
| +----------------------------------------------------------+ |
| |
| 2. Table |
| +----------------------------------------------------------+ |
| | $ aws ec2 describe-instances --output table | |
| | ------------------------------------------------ | |
| | | DescribeInstances | | |
| | ------------------------------------------------ | |
| | ||InstanceId ||InstanceType||State ||| | |
| | ||i-1234567890 ||t2.micro ||running ||| | |
| | ||i-0987654321 ||t2.small ||stopped ||| | |
| | ------------------------------------------------ | |
| +----------------------------------------------------------+ |
| |
| 3. Text |
| +----------------------------------------------------------+ |
| | $ aws ec2 describe-instances --output text | |
| | RESERVATIONS r-1234567890 123456789012 | |
| | INSTANCES i-1234567890 t2.micro running | |
| +----------------------------------------------------------+ |
| |
| 4. YAML |
| +----------------------------------------------------------+ |
| | $ aws ec2 describe-instances --output yaml | |
| | Reservations: | |
| | - Instances: | |
| | - InstanceId: i-1234567890 | |
| | InstanceType: t2.micro | |
| | State: | |
| | Name: running | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
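The four formats are different renderings of the same response. A rough illustration of how `--output text` flattens a record into the tab-separated columns that awk and xargs consume (a simplification, not the CLI's exact algorithm):

```python
import json

instance = {"InstanceId": "i-1234567890", "InstanceType": "t2.micro",
            "State": {"Name": "running"}}

# JSON output: the raw response structure, pretty-printed
as_json = json.dumps(instance, indent=2)

# Text output: scalar fields flattened into tab-separated columns,
# prefixed with the record type in upper case
as_text = "\t".join(["INSTANCES", instance["InstanceId"],
                     instance["InstanceType"], instance["State"]["Name"]])
```

This is why `--output text` pairs so well with shell pipelines, while `--output json` pairs with `jq`.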
AWS CLI Query Examples
+------------------------------------------------------------------+
| |
| Basic Queries: |
| +----------------------------------------------------------+ |
| | # Get all instance IDs | |
| | aws ec2 describe-instances \ | |
| | --query 'Reservations[*].Instances[*].InstanceId' | |
| | # Output: ["i-1234567890", "i-0987654321"] | |
| +----------------------------------------------------------+ |
| |
| Filter Queries: |
| +----------------------------------------------------------+ |
| | # Get only running instances | |
| | aws ec2 describe-instances \ | |
| | --query 'Reservations[*].Instances[?State.Name==`running`].InstanceId' |
| +----------------------------------------------------------+ |
| |
| Nested Queries: |
| +----------------------------------------------------------+ |
| | # Get instance ID and type | |
| | aws ec2 describe-instances \ | |
| | --query 'Reservations[*].Instances[*].[InstanceId,InstanceType]' |
| | # Output: [["i-1234567890", "t2.micro"], ...] | |
| +----------------------------------------------------------+ |
| |
| Custom Output: |
| +----------------------------------------------------------+ |
| | # Format with custom names | |
| | aws ec2 describe-instances \ | |
| | --query 'Reservations[*].Instances[*].{ID:InstanceId,Type:InstanceType,State:State.Name}' |
| | # Output: | |
| | # [ | |
| | # { "ID": "i-1234567890", "Type": "t2.micro", "State": "running" } |
| | # ] | |
| +----------------------------------------------------------+ |
| |
| Sort and Limit: |
| +----------------------------------------------------------+ |
| | # Sort by launch time, get first 5 | |
| | aws ec2 describe-instances \ | |
| | --query 'sort_by(Reservations[].Instances[], &LaunchTime)[:5].InstanceId' |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
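To see what these JMESPath expressions actually select, here are the first two queries rewritten as plain Python over a sample response shape (abbreviated, hypothetical data):

```python
# Sample shape of a describe-instances response (abbreviated)
response = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-1234567890", "InstanceType": "t2.micro",
             "State": {"Name": "running"}},
        ]},
        {"Instances": [
            {"InstanceId": "i-0987654321", "InstanceType": "t2.small",
             "State": {"Name": "stopped"}},
        ]},
    ]
}

# 'Reservations[].Instances[].InstanceId' -- flatten, then project
all_ids = [i["InstanceId"]
           for r in response["Reservations"] for i in r["Instances"]]

# 'Reservations[].Instances[?State.Name==`running`].InstanceId'
running = [i["InstanceId"]
           for r in response["Reservations"] for i in r["Instances"]
           if i["State"]["Name"] == "running"]
```

The `[]` flatten operator is what turns the nested reservations/instances lists into one flat list, which is also why `sort_by` needs `Reservations[].Instances[]` rather than `Reservations[*].Instances[*]`.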

AWS SDKs Comparison
+------------------------------------------------------------------+
| |
| SDK | Language | Package Name | Use Case |
| ---------------|------------|-----------------|---------------|
| AWS SDK for | Python | boto3 | Scripting, |
| Python | | | Automation |
| ---------------|------------|-----------------|---------------|
| AWS SDK for | JavaScript | @aws-sdk/* | Node.js apps, |
| JavaScript | (Node.js) | | Web apps |
| ---------------|------------|-----------------|---------------|
| AWS SDK for | Java | aws-java-sdk | Enterprise |
| Java | | | applications |
| ---------------|------------|-----------------|---------------|
| AWS SDK for | Go | github.com/ | Microservices,|
| Go | | aws/aws-sdk-go | CLI tools |
| ---------------|------------|-----------------|---------------|
| AWS SDK for | .NET | AWSSDK.* | Windows apps, |
| .NET | | | Azure hybrid |
| ---------------|------------|-----------------|---------------|
| AWS SDK for | Ruby | aws-sdk | Rails apps, |
| Ruby | | | Automation |
| ---------------|------------|-----------------|---------------|
| AWS SDK for | PHP | aws/aws-sdk-php | Web apps, |
| PHP | | | CMS |
| ---------------|------------|-----------------|---------------|
| AWS SDK for | C++ | aws-sdk-cpp | Game dev, |
| C++ | | | Embedded |
+------------------------------------------------------------------+

Python SDK (boto3)

# Installation
pip install boto3
# Basic Usage
import boto3
# Create client (low-level service access)
s3_client = boto3.client('s3')
# Create resource (high-level object-oriented access)
s3_resource = boto3.resource('s3')
# Specify region
ec2 = boto3.client('ec2', region_name='us-west-2')
# Use specific profile
session = boto3.Session(profile_name='production')
s3 = session.client('s3')
boto3 Client vs Resource
+------------------------------------------------------------------+
| |
| Client (Low-Level) |
| +----------------------------------------------------------+ |
| | - Maps 1:1 with AWS API operations | |
| | - Returns dictionary responses | |
| | - More control over parameters | |
| | - Handles pagination manually | |
| | | |
| | # Example | |
| | client = boto3.client('s3') | |
| | response = client.list_buckets() | |
| | for bucket in response['Buckets']: | |
| | print(bucket['Name']) | |
| +----------------------------------------------------------+ |
| |
| Resource (High-Level) |
| +----------------------------------------------------------+ |
| | - Object-oriented interface | |
| | - Returns resource objects | |
| | - Automatic pagination | |
| | - Simpler syntax | |
| | | |
| | # Example | |
| | s3 = boto3.resource('s3') | |
| | for bucket in s3.buckets.all(): | |
| | print(bucket.name) | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
import boto3
from botocore.exceptions import ClientError

# ============================================================
# Pattern 1: Error Handling
# ============================================================
def get_object_safe(bucket, key):
    s3 = boto3.client('s3')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        return response['Body'].read()
    except ClientError as e:
        error_code = e.response['Error']['Code']
        if error_code == 'NoSuchKey':
            print(f"Object {key} not found")
        elif error_code == 'AccessDenied':
            print("Access denied")
        else:
            print(f"Error: {e}")
        return None

# ============================================================
# Pattern 2: Pagination
# ============================================================
def list_all_instances():
    ec2 = boto3.client('ec2')
    instances = []
    paginator = ec2.get_paginator('describe_instances')
    for page in paginator.paginate():
        for reservation in page['Reservations']:
            for instance in reservation['Instances']:
                instances.append(instance)
    return instances

# ============================================================
# Pattern 3: Waiters
# ============================================================
def wait_for_instance(instance_id):
    ec2 = boto3.client('ec2')
    waiter = ec2.get_waiter('instance_running')
    try:
        waiter.wait(InstanceIds=[instance_id])
        print(f"Instance {instance_id} is now running")
    except Exception as e:
        print(f"Error waiting for instance: {e}")

# ============================================================
# Pattern 4: Resource Operations
# ============================================================
def manage_s3_objects():
    s3 = boto3.resource('s3')
    # Create bucket
    bucket = s3.create_bucket(Bucket='my-new-bucket')
    # Upload file
    bucket.upload_file('local-file.txt', 'remote-key.txt')
    # Download file
    bucket.download_file('remote-key.txt', 'downloaded-file.txt')
    # Delete objects
    bucket.Object('remote-key.txt').delete()

# ============================================================
# Pattern 5: Assume Role
# ============================================================
def assume_role(role_arn, session_name):
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name
    )
    # Create session with temporary credentials
    session = boto3.Session(
        aws_access_key_id=response['Credentials']['AccessKeyId'],
        aws_secret_access_key=response['Credentials']['SecretAccessKey'],
        aws_session_token=response['Credentials']['SessionToken']
    )
    return session

JavaScript SDK (v3, Node.js)

// Installation
// npm install @aws-sdk/client-ec2 @aws-sdk/client-s3

// ES Modules (recommended)
import { S3Client, ListBucketsCommand, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { EC2Client, DescribeInstancesCommand, StartInstancesCommand } from '@aws-sdk/client-ec2';

// Create client
const s3Client = new S3Client({ region: 'us-east-1' });

// CommonJS alternative
// const { S3Client, ListBucketsCommand } = require('@aws-sdk/client-s3');
// ============================================================
// Pattern 1: Basic Operations
// ============================================================
async function listBuckets() {
  const client = new S3Client({ region: 'us-east-1' });
  try {
    const command = new ListBucketsCommand({});
    const response = await client.send(command);
    response.Buckets.forEach(bucket => {
      console.log(bucket.Name);
    });
  } catch (error) {
    console.error('Error:', error);
  }
}

// ============================================================
// Pattern 2: S3 Upload/Download
// ============================================================
async function uploadToS3(bucket, key, body) {
  const client = new S3Client({ region: 'us-east-1' });
  const command = new PutObjectCommand({
    Bucket: bucket,
    Key: key,
    Body: body
  });
  try {
    const response = await client.send(command);
    console.log('Upload successful:', response);
  } catch (error) {
    console.error('Upload failed:', error);
  }
}

async function downloadFromS3(bucket, key) {
  const client = new S3Client({ region: 'us-east-1' });
  const command = new GetObjectCommand({
    Bucket: bucket,
    Key: key
  });
  try {
    const response = await client.send(command);
    const body = await response.Body.transformToString();
    return body;
  } catch (error) {
    console.error('Download failed:', error);
    return null;
  }
}

// ============================================================
// Pattern 3: EC2 Operations
// ============================================================
async function listInstances() {
  const client = new EC2Client({ region: 'us-east-1' });
  const command = new DescribeInstancesCommand({});
  try {
    const response = await client.send(command);
    response.Reservations.forEach(reservation => {
      reservation.Instances.forEach(instance => {
        console.log(`Instance: ${instance.InstanceId}, State: ${instance.State.Name}`);
      });
    });
  } catch (error) {
    console.error('Error:', error);
  }
}

// ============================================================
// Pattern 4: Pagination
// ============================================================
import { paginateListObjects } from '@aws-sdk/client-s3';

async function listAllObjects(bucket) {
  const client = new S3Client({ region: 'us-east-1' });
  const objects = [];
  const paginator = paginateListObjects({ client }, { Bucket: bucket });
  for await (const page of paginator) {
    if (page.Contents) {
      objects.push(...page.Contents);
    }
  }
  return objects;
}

// ============================================================
// Pattern 5: Waiters
// ============================================================
import { waitUntilInstanceRunning } from '@aws-sdk/client-ec2';

async function startAndWaitForInstance(instanceId) {
  const client = new EC2Client({ region: 'us-east-1' });
  // Start instance
  await client.send(new StartInstancesCommand({ InstanceIds: [instanceId] }));
  // Wait for running state (maxWaitTime is required, in seconds)
  await waitUntilInstanceRunning(
    { client, maxWaitTime: 300 },
    { InstanceIds: [instanceId] }
  );
  console.log(`Instance ${instanceId} is now running`);
}

Go SDK (v2)

// Installation
// go get github.com/aws/aws-sdk-go-v2
// go get github.com/aws/aws-sdk-go-v2/config
// go get github.com/aws/aws-sdk-go-v2/service/s3
// go get github.com/aws/aws-sdk-go-v2/service/ec2
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	// Load configuration
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatalf("unable to load SDK config, %v", err)
	}
	// Create clients
	s3Client := s3.NewFromConfig(cfg)
	ec2Client := ec2.NewFromConfig(cfg)
	// Use clients...
	_ = s3Client
	_ = ec2Client
}
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	s3types "github.com/aws/aws-sdk-go-v2/service/s3/types"
	"github.com/aws/smithy-go"
)

// ============================================================
// Pattern 1: S3 Operations
// ============================================================
func listBuckets() {
	cfg, _ := config.LoadDefaultConfig(context.TODO())
	client := s3.NewFromConfig(cfg)
	result, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, bucket := range result.Buckets {
		fmt.Printf("Bucket: %s\n", *bucket.Name)
	}
}

func uploadToS3(bucket, key string, body io.Reader) error {
	cfg, _ := config.LoadDefaultConfig(context.TODO())
	client := s3.NewFromConfig(cfg)
	_, err := client.PutObject(context.TODO(), &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   body,
	})
	return err
}

// ============================================================
// Pattern 2: EC2 Operations
// ============================================================
func listInstances() {
	cfg, _ := config.LoadDefaultConfig(context.TODO())
	client := ec2.NewFromConfig(cfg)
	result, err := client.DescribeInstances(context.TODO(), &ec2.DescribeInstancesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, reservation := range result.Reservations {
		for _, instance := range reservation.Instances {
			fmt.Printf("Instance: %s, State: %s\n",
				*instance.InstanceId,
				instance.State.Name)
		}
	}
}

func startInstance(instanceID string) error {
	cfg, _ := config.LoadDefaultConfig(context.TODO())
	client := ec2.NewFromConfig(cfg)
	_, err := client.StartInstances(context.TODO(), &ec2.StartInstancesInput{
		InstanceIds: []string{instanceID},
	})
	return err
}

// ============================================================
// Pattern 3: Pagination
// ============================================================
func listAllObjects(bucket string) error {
	cfg, _ := config.LoadDefaultConfig(context.TODO())
	client := s3.NewFromConfig(cfg)
	paginator := s3.NewListObjectsV2Paginator(client, &s3.ListObjectsV2Input{
		Bucket: aws.String(bucket),
	})
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(context.TODO())
		if err != nil {
			return err
		}
		for _, obj := range page.Contents {
			// Size is *int64 in current SDK versions
			fmt.Printf("Object: %s (%d bytes)\n", *obj.Key, aws.ToInt64(obj.Size))
		}
	}
	return nil
}

// ============================================================
// Pattern 4: Error Handling
// ============================================================
func handleS3Errors(err error) {
	var noSuchKey *s3types.NoSuchKey
	var apiErr smithy.APIError
	switch {
	case errors.As(err, &noSuchKey):
		fmt.Println("Object not found")
	case errors.As(err, &apiErr) && apiErr.ErrorCode() == "AccessDenied":
		fmt.Println("Access denied")
	default:
		fmt.Printf("Error: %v\n", err)
	}
}

Bash Automation Scripts

#!/bin/bash
# ============================================================
# Script: Backup S3 Bucket
# ============================================================
SOURCE_BUCKET="my-source-bucket"
DEST_BUCKET="my-backup-bucket"
DATE=$(date +%Y-%m-%d)

# Sync buckets
aws s3 sync "s3://$SOURCE_BUCKET" "s3://$DEST_BUCKET/$DATE/" \
  --storage-class STANDARD_IA \
  --quiet
echo "Backup completed: s3://$DEST_BUCKET/$DATE/"

# ============================================================
# Script: EC2 Instance Management
# ============================================================
# Start instances with specific tag
aws ec2 start-instances \
  --instance-ids $(aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=development" \
              "Name=instance-state-name,Values=stopped" \
    --query "Reservations[*].Instances[*].InstanceId" \
    --output text)

# Stop instances at end of day
aws ec2 stop-instances \
  --instance-ids $(aws ec2 describe-instances \
    --filters "Name=tag:AutoStop,Values=true" \
              "Name=instance-state-name,Values=running" \
    --query "Reservations[*].Instances[*].InstanceId" \
    --output text)

# ============================================================
# Script: Clean Up Old Snapshots
# ============================================================
RETENTION_DAYS=30

# Get snapshots older than retention period (GNU date)
OLD_SNAPSHOTS=$(aws ec2 describe-snapshots \
  --owner-ids self \
  --query "Snapshots[?StartTime<='$(date -d "-$RETENTION_DAYS days" +%Y-%m-%d)'].SnapshotId" \
  --output text)

# Delete old snapshots
for snapshot in $OLD_SNAPSHOTS; do
  echo "Deleting snapshot: $snapshot"
  aws ec2 delete-snapshot --snapshot-id "$snapshot"
done

# ============================================================
# Script: Rotate Access Keys
# ============================================================
USERNAME="dev-user"

# Create new access key
NEW_KEY=$(aws iam create-access-key \
  --user-name "$USERNAME" \
  --query 'AccessKey.[AccessKeyId,SecretAccessKey]' \
  --output text)
ACCESS_KEY_ID=$(echo "$NEW_KEY" | awk '{print $1}')
SECRET_KEY=$(echo "$NEW_KEY" | awk '{print $2}')
echo "New Access Key: $ACCESS_KEY_ID"
echo "New Secret Key: $SECRET_KEY"

# List old keys
OLD_KEYS=$(aws iam list-access-keys \
  --user-name "$USERNAME" \
  --query "AccessKeyMetadata[?AccessKeyId!='$ACCESS_KEY_ID'].AccessKeyId" \
  --output text)

# Delete old keys
for key in $OLD_KEYS; do
  echo "Deleting old key: $key"
  aws iam delete-access-key \
    --user-name "$USERNAME" \
    --access-key-id "$key"
done
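The retention filter in the snapshot-cleanup script is ultimately just a date comparison; a boto3 version could compute the cutoff natively instead of shelling out to date(1). A sketch with hypothetical snapshot records in the shape describe-snapshots returns:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

# Shape mirrors ec2 describe-snapshots output (hypothetical records)
snapshots = [
    {"SnapshotId": "snap-old",
     "StartTime": datetime(2020, 1, 1, tzinfo=timezone.utc)},
    {"SnapshotId": "snap-new",
     "StartTime": datetime.now(timezone.utc)},
]

# Anything that started on or before the cutoff is expired
expired = [s["SnapshotId"] for s in snapshots if s["StartTime"] <= cutoff]
```

In a real script you would feed `expired` to `ec2.delete_snapshot(SnapshotId=...)`; boto3 returns `StartTime` as a timezone-aware datetime, so the comparison works directly.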

AWS CLI/SDK Best Practices
+------------------------------------------------------------------+
| |
| 1. Credential Management |
| +----------------------------------------------------------+ |
| | - Use IAM roles instead of access keys where possible | |
| | - Rotate access keys regularly | |
| | - Use named profiles for different accounts | |
| | - Never hardcode credentials in code | |
| +----------------------------------------------------------+ |
| |
| 2. Error Handling |
| +----------------------------------------------------------+ |
| | - Always implement retry logic | |
| | - Use exponential backoff | |
| | - Handle specific error codes | |
| | - Log errors for debugging | |
| +----------------------------------------------------------+ |
| |
| 3. Performance |
| +----------------------------------------------------------+ |
| | - Use pagination for large result sets | |
| | - Use waiters instead of polling | |
| | - Use concurrent requests where appropriate | |
| | - Cache frequently accessed data | |
| +----------------------------------------------------------+ |
| |
| 4. Security |
| +----------------------------------------------------------+ |
| | - Use least privilege IAM policies | |
| | - Enable CloudTrail for API auditing | |
| | - Use VPC endpoints for private connectivity | |
| | - Encrypt sensitive data in transit | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
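The "exponential backoff" advice above can be sketched as a small wrapper. This generic retry helper is an illustration, not the SDKs' built-in mechanism (boto3 and the CLI already retry throttled calls via their legacy/standard/adaptive retry modes):

```python
import random
import time

def with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=5.0,
                 retryable=(RuntimeError,)):
    """Call fn, retrying retryable errors with capped, jittered backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Double the delay each attempt, cap it, then add jitter
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))
```

Against AWS you would pass `retryable=(botocore.exceptions.ClientError,)` and check the error code for `ThrottlingException` before retrying; for most workloads, simply configuring `retries={"mode": "adaptive"}` on the boto3 client is enough.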

The AWS CLI is not just a tool; it is your primary interface to AWS as a DevOps engineer. Mastering CLI patterns is the difference between manual operations and fully automated infrastructure.

CLI in DevOps Workflow
+------------------------------------------------------------------+
| |
| Real DevOps CLI Usage: |
| |
| 1. Automation Scripts |
| +----------------------------------------------------------+ |
| | - Deployment scripts that target multiple environments | |
| | - Health check scripts run by systemd timers | |
| | - Backup scripts triggered by cron/timers | |
| +----------------------------------------------------------+ |
| |
| 2. Incident Response |
| +----------------------------------------------------------+ |
| | - Quick diagnostics during production incidents | |
| | - Rolling back deployments via CLI | |
| | - Gathering evidence for postmortems | |
| +----------------------------------------------------------+ |
| |
| 3. CI/CD Integration |
| +----------------------------------------------------------+ |
| | - GitHub Actions/GitLab CI calling AWS CLI | |
| | - Docker image builds pushing to ECR | |
| | - Terraform using AWS CLI for state management | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+

# Complete Arch Linux AWS DevOps Setup
sudo pacman -S aws-cli-v2 jq python-boto3 python-pip
sudo pacman -S terraform ansible docker kubectl helm
yay -S aws-vault aws-sso-cli
# Shell aliases for productivity (~/.zshrc or ~/.bashrc)
alias awswho='aws sts get-caller-identity'
alias awsregions='aws ec2 describe-regions --query "Regions[*].RegionName" --output text | tr "\t" "\n" | sort'
alias ec2ls='aws ec2 describe-instances --query "Reservations[*].Instances[*].[InstanceId,State.Name,InstanceType,PrivateIpAddress,Tags[?Key==\`Name\`].Value|[0]]" --output table'
alias s3size='aws s3 ls --summarize --human-readable --recursive'
# AWS profile switcher function
aws-switch() {
  export AWS_PROFILE="$1"
  echo "Switched to profile: $1"
  aws sts get-caller-identity --output table 2>/dev/null || echo "Failed to authenticate"
}
# Quick EC2 SSH helper
ec2ssh() {
  local instance_id="$1"
  local ip=$(aws ec2 describe-instances \
    --instance-ids "$instance_id" \
    --query 'Reservations[0].Instances[0].PublicIpAddress' \
    --output text)
  ssh -i ~/.ssh/aws-key.pem ec2-user@"$ip"
}
# AWS completion for zsh
autoload -Uz compinit && compinit
complete -C '/usr/bin/aws_completer' aws
/usr/local/bin/deploy-to-ecs.sh
#!/bin/bash
# Production deployment script with error handling
set -euo pipefail
trap 'echo "ERROR: Deployment failed at line $LINENO" >&2; exit 1' ERR

CLUSTER="production"
SERVICE="api-service"
IMAGE_TAG="${1:?Usage: $0 <image-tag>}"
ECR_REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com/api"

echo "=== Deploying $IMAGE_TAG to $CLUSTER/$SERVICE ==="
echo "Timestamp: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"

# Get current task definition
CURRENT_TD=$(aws ecs describe-services \
  --cluster "$CLUSTER" \
  --services "$SERVICE" \
  --query 'services[0].taskDefinition' \
  --output text)
echo "Current task definition: $CURRENT_TD"

# Create new task definition with updated image
NEW_TD=$(aws ecs describe-task-definition \
  --task-definition "$CURRENT_TD" \
  --query 'taskDefinition' | \
  jq --arg IMG "$ECR_REPO:$IMAGE_TAG" \
    '.containerDefinitions[0].image = $IMG | del(.taskDefinitionArn, .revision, .status, .requiresAttributes, .compatibilities, .registeredAt, .registeredBy)' | \
  aws ecs register-task-definition --cli-input-json file:///dev/stdin \
    --query 'taskDefinition.taskDefinitionArn' --output text)
echo "New task definition: $NEW_TD"

# Update service
aws ecs update-service \
  --cluster "$CLUSTER" \
  --service "$SERVICE" \
  --task-definition "$NEW_TD" \
  --force-new-deployment > /dev/null

# Wait for deployment
echo "Waiting for deployment to stabilize..."
aws ecs wait services-stable \
  --cluster "$CLUSTER" \
  --services "$SERVICE"
echo "✅ Deployment complete!"

Issue                        | Cause                        | Solution
-----------------------------+------------------------------+------------------------------------------------
Unable to locate credentials | No credentials configured    | Run aws configure or set env vars
ExpiredToken                 | STS tokens expired           | Re-authenticate, check token expiry
SignatureDoesNotMatch        | Clock skew on system         | Sync system clock with timedatectl set-ntp true
ThrottlingException          | Too many API calls           | Implement exponential backoff, use --page-size
Wrong region results         | Default region misconfigured | Check AWS_DEFAULT_REGION or --region flag
SSL certificate errors       | Outdated CA certificates     | sudo pacman -S ca-certificates
# Debug AWS CLI issues
# Verbose output to see HTTP requests
aws s3 ls --debug 2>&1 | head -50
# Check credential chain resolution
aws configure list
# Verify clock is synced (critical for AWS auth)
timedatectl status
# If not synced:
sudo timedatectl set-ntp true
sudo systemctl restart systemd-timesyncd
# Test connectivity to AWS endpoints
curl -s https://sts.amazonaws.com/ | head -5
# Check for proxy issues
echo "HTTP_PROXY: ${HTTP_PROXY:-not set}"
echo "HTTPS_PROXY: ${HTTPS_PROXY:-not set}"
echo "NO_PROXY: ${NO_PROXY:-not set}"

CLI/SDK Anti-Patterns
+------------------------------------------------------------------+
| |
| ❌ Mistake 1: Hardcoding Credentials in Scripts |
| +----------------------------------------------------------+ |
| | Problem: AWS keys in shell scripts or Python code | |
| | Impact: Credentials in version control | |
| | Fix: Use aws-vault, IAM roles, or environment vars | |
| +----------------------------------------------------------+ |
| |
| ❌ Mistake 2: Not Handling Pagination |
| +----------------------------------------------------------+ |
| | Problem: Only getting first page of results | |
| | Impact: Missing resources in scripts (phantom data) | |
| | Fix: Use --paginate flag or SDK paginators | |
| +----------------------------------------------------------+ |
| |
| ❌ Mistake 3: No Error Handling in Automation |
| +----------------------------------------------------------+ |
| | Problem: Scripts fail silently or partially | |
| | Impact: Inconsistent state, failed deployments | |
| | Fix: set -euo pipefail, try/except in Python | |
| +----------------------------------------------------------+ |
| |
| ❌ Mistake 4: Not Using --query for Filtering |
| +----------------------------------------------------------+ |
| | Problem: Piping massive JSON through grep/sed | |
| | Impact: Slow, fragile, hard to maintain | |
| | Fix: Use JMESPath --query or jq for structured parsing | |
| +----------------------------------------------------------+ |
| |
| ❌ Mistake 5: Ignoring Rate Limiting |
| +----------------------------------------------------------+ |
| | Problem: Hammering AWS APIs in tight loops | |
| | Impact: ThrottlingException, failed automation | |
| | Fix: Implement backoff, use waiters, batch operations | |
| +----------------------------------------------------------+ |
| |
+------------------------------------------------------------------+
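The pagination mistake above comes down to forgetting the NextToken loop. The sketch below shows that loop in pure Python, with a hypothetical stub (`fake_describe`) standing in for a real paginated call such as `ec2 describe-instances`; production code should prefer the CLI's automatic pagination or boto3's `get_paginator` instead of hand-rolling this:

```python
# Minimal sketch of the NextToken pagination loop used by AWS list APIs.
# fake_describe is a stand-in for a real AWS call, purely for illustration.

PAGES = {
    None: {"Items": ["i-aaa", "i-bbb"], "NextToken": "t1"},
    "t1": {"Items": ["i-ccc"], "NextToken": "t2"},
    "t2": {"Items": ["i-ddd"]},  # last page: no NextToken
}

def fake_describe(next_token=None):
    """Stand-in for one page of a paginated AWS API response."""
    return PAGES[next_token]

def list_all():
    """Collect results from every page, not just the first."""
    items, token = [], None
    while True:
        page = fake_describe(next_token=token)
        items.extend(page["Items"])
        token = page.get("NextToken")
        if token is None:  # no more pages
            break
    return items

print(list_all())  # ['i-aaa', 'i-bbb', 'i-ccc', 'i-ddd']
```

A script that stops after the first `fake_describe` call would silently miss half the fleet, which is exactly the "phantom data" failure described above.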

  1. Q: What is the AWS credential resolution order?

    • A: (1) Command line options, (2) Environment variables, (3) AWS credentials file, (4) AWS config file, (5) Container credentials (ECS), (6) Instance profile credentials (EC2). Higher priority sources override lower ones.
  2. Q: When would you use boto3 client vs resource?

    • A: Client provides a low-level, 1:1 mapping to AWS API operations — gives full control and works with all services. Resource provides a high-level, object-oriented interface with automatic pagination — simpler code but not available for all services. Use client for complex operations, resource for common CRUD.
  3. Q: How do you handle API rate limiting in AWS CLI scripts?

    • A: Implement exponential backoff with jitter. Use SDK built-in retry logic (configurable via max_attempts in config). Reduce API call frequency with caching. Use CloudWatch events instead of polling. Batch operations where API supports it.
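The credential resolution order from Q1 reduces to "first source that yields credentials wins." A toy model of that chain (the source names mirror the numbered list above; no real credential loading is performed):

```python
# Toy model of the AWS credential resolution order: sources are probed
# in priority order and the first one present wins.

CHAIN = [
    "cli_options",        # 1. command line options
    "env_vars",           # 2. environment variables
    "credentials_file",   # 3. ~/.aws/credentials
    "config_file",        # 4. ~/.aws/config
    "container_creds",    # 5. ECS container credentials
    "instance_profile",   # 6. EC2 instance profile
]

def resolve(available):
    """Return the highest-priority source present in `available`."""
    for source in CHAIN:
        if source in available:
            return source
    raise RuntimeError("Unable to locate credentials")

# Env vars beat the credentials file, even when both are configured:
print(resolve({"credentials_file", "env_vars"}))  # env_vars
```

This is why an exported `AWS_ACCESS_KEY_ID` silently shadows a profile in `~/.aws/credentials` — a common source of "wrong account" surprises.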
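The backoff strategy from Q3 can be sketched in pure Python. Here `flaky_call` is a simulated API that throttles twice before succeeding; in practice you would wrap a real boto3 or CLI call, or simply configure the SDK's built-in retry settings:

```python
import random
import time

def with_backoff(fn, max_attempts=5, base=0.5, cap=8.0):
    """Retry fn with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for a ThrottlingException
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # full jitter: sleep a random time up to the capped exponential
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
            time.sleep(delay)

# Simulated API that throttles the first two calls, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ThrottlingException")
    return "ok"

print(with_backoff(flaky_call))  # ok
```

The jitter matters: without it, many clients retried in lockstep all hit the API again at the same instant and re-trigger the throttle.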
  1. Q: Write a script to find all unattached EBS volumes across all regions.

    • A:
    for region in $(aws ec2 describe-regions --query 'Regions[*].RegionName' --output text); do
      aws ec2 describe-volumes \
        --region "$region" \
        --filters Name=status,Values=available \
        --query 'Volumes[*].[VolumeId,Size,CreateTime]' \
        --output table 2>/dev/null
    done
  2. Q: How would you automate daily snapshots of all production EBS volumes?

    • A: Use a combination of: (1) AWS CLI script that finds tagged production volumes and creates snapshots, (2) systemd timer or Lambda on schedule, (3) Tag snapshots with creation date and source volume, (4) Cleanup script to delete snapshots older than retention period, (5) CloudWatch alarm if snapshot fails.
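Step (4), the retention cleanup, is mostly date arithmetic. A sketch of just the selection logic, independent of any AWS call — the snapshot IDs and the 7-day window are illustrative:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 7  # illustrative retention period

def expired(snapshots, now=None):
    """Return snapshot IDs older than the retention window.

    `snapshots` maps snapshot ID -> creation time (aware datetime),
    mirroring what describe-snapshots would report as StartTime.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return sorted(sid for sid, created in snapshots.items() if created < cutoff)

now = datetime(2026, 3, 15, tzinfo=timezone.utc)
snaps = {
    "snap-old":    datetime(2026, 3, 1, tzinfo=timezone.utc),   # 14 days old
    "snap-recent": datetime(2026, 3, 14, tzinfo=timezone.utc),  # 1 day old
}
print(expired(snaps, now=now))  # ['snap-old']
```

In the real cleanup script each ID returned here would feed an `aws ec2 delete-snapshot` call; keeping the selection logic separate makes it easy to dry-run and test.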

Exam Tips

  1. Credential Precedence: Know the order of credential sources
  2. Output Formats: JSON, table, text, yaml - know when to use each
  3. JMESPath Queries: Understand query syntax for filtering output
  4. Client vs Resource: boto3 client is low-level, resource is high-level
  5. Pagination: Use paginators for large result sets
  6. Waiters: Use waiters to wait for resource state changes
  7. Error Handling: Implement retry with exponential backoff
  8. Profiles: Use named profiles for multi-account access

Chapter 5: AWS Well-Architected Framework


Last Updated: March 2026
