Chapter 19: Kubernetes ReplicaSets - Ensuring High Availability
Table of Contents
- Introduction to ReplicaSets
- Why We Need ReplicaSets
- ReplicaSet vs Replication Controller
- ReplicaSet Specification
- Creating ReplicaSets
- Scaling ReplicaSets
- ReplicaSet Lifecycle
- Best Practices
- Hands-on Lab
- Summary
Introduction to ReplicaSets
What is a ReplicaSet?
A ReplicaSet is a Kubernetes resource that maintains a stable set of replica Pods running at any given time. It provides declarative management of Pod replicas and is used to guarantee availability and scalability.
The concept in brief: a ReplicaSet (for example, name: myapp-replicaset, replicas: 3, selector: app=myapp) manages the set of Pods that match its selector. The controller keeps exactly three Pods labeled app=myapp running, creating or deleting Pods whenever the actual count drifts from the desired count.
Why We Need ReplicaSets
The Problem: Pod Failures
Section titled “The Problem: Pod Failures”┌─────────────────────────────────────────────────────────────────────────────┐│ WHY REPLICASETS MATTER │├─────────────────────────────────────────────────────────────────────────────┤│ ││ Without ReplicaSet: ││ ─────────────────── ││ ││ ┌─────────┐ ││ │ Pod │ If this pod crashes, your app is DOWN! ││ │ │ ││ │ MyApp │ ❌ No automatic recovery ││ │ │ ❌ No redundancy ││ │ Running│ ❌ Manual intervention required ││ └─────────┘ ││ ││ With ReplicaSet: ││ ──────────────── ││ ││ ┌───────────────────────────────────────────────────────────────────┐ ││ │ REPLICASET │ ││ │ │ ││ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ ││ │ │ Pod 1 │ │ Pod 2 │ │ Pod 3 │ │ ││ │ │ │ │ │ │ │ │ ││ │ │ Running │ │ Running │ │ Running │ │ ││ │ └─────────┘ └─────────┘ └─────────┘ │ ││ │ │ ││ │ ✓ Automatic failure recovery │ ││ │ ✓ High availability │ ││ │ ✓ Load distribution │ ││ │ ✓ Self-healing │ ││ │ │ ││ └───────────────────────────────────────────────────────────────────┘ ││ ││ If Pod 1 crashes: ││ ───────────────── ││ 1. ReplicaSet detects failure ││ 2. Creates new Pod to replace failed one ││ 3. Your app stays available! ││ │└─────────────────────────────────────────────────────────────────────────────┘ReplicaSet vs Replication Controller
ReplicaSet vs Replication Controller
Key Differences
Section titled “Key Differences”┌─────────────────────────────────────────────────────────────────────────────┐│ REPLICASET vs REPLICATION CONTROLLER │├─────────────────────────────────────────────────────────────────────────────┤│ ││ Feature │ ReplicationController │ ReplicaSet ││ ──────────────────┼───────────────────────┼───────────── ││ Selector Support │ Equality-based only │ Set-based & equality ││ Example │ env=prod │ env in (prod, dev) ││ Recommended Use │ Legacy │ Current ││ Owned By │ N/A │ Deployment (usually) ││ ││ Selector Comparison: ││ ─────────────────── ││ ││ ReplicationController (equality-based): ││ ┌─────────────────────────────────────────────────────────────────┐ ││ │ selector: │ ││ │ app: nginx # Only matches app=nginx │ ││ └─────────────────────────────────────────────────────────────────┘ ││ ││ ReplicaSet (两种都支持): ││ ┌─────────────────────────────────────────────────────────────────┐ ││ │ selector: │ ││ │ matchLabels: │ ││ │ app: nginx # Equality-based │ ││ │ matchExpressions: │ ││ │ - key: tier │ ││ │ operator: In │ ││ │ values: [frontend, backend] # Set-based │ ││ └─────────────────────────────────────────────────────────────────┘ ││ ││ Note: ReplicaSet is the recommended resource for most use cases. ││ │└─────────────────────────────────────────────────────────────────────────────┘ReplicaSet Specification
ReplicaSet Specification
YAML Structure
Section titled “YAML Structure”apiVersion: apps/v1kind: ReplicaSetmetadata: name: myapp-replicaset labels: app: myapp tier: frontendspec: # Number of replicas replicas: 3
# Selector to identify managed pods selector: matchLabels: app: myapp
# Pod template template: metadata: labels: app: myapp spec: containers: - name: myapp-container image: nginx:1.21 ports: - containerPort: 80 resources: limits: memory: "256Mi" cpu: "500m" requests: memory: "128Mi" cpu: "250m"Key Fields Explained
Key Fields Explained

Field            │ Description
─────────────────┼─────────────────────────────────────────────────────
replicas         │ Number of desired Pods (default: 1)
selector         │ Labels used to identify managed Pods
template         │ Pod template used to create new replicas
minReadySeconds  │ Seconds a new Pod must be Ready before it is counted as available

Note: revisionHistoryLimit and progressDeadlineSeconds are Deployment fields; they are not part of the ReplicaSet spec.

Template spec (the Pod definition inside):

- metadata.labels - Must match selector
- spec.containers - Container definitions
- spec.initContainers - Init containers (optional)
- spec.volumes - Volume definitions (optional)
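Of these fields, minReadySeconds is the one not shown in the manifest above. A fragment sketching where it sits (the value is illustrative):

```yaml
# Fragment: minReadySeconds in a ReplicaSet spec (illustrative value)
spec:
  replicas: 3
  minReadySeconds: 10   # a new Pod must stay Ready for 10s before it counts as available
  selector:
    matchLabels:
      app: myapp
```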
Creating ReplicaSets
Creating from YAML
# Create ReplicaSet
kubectl apply -f replicaset.yaml
# Verify creation
kubectl get replicaset
kubectl describe replicaset myapp-replicaset
# Check managed pods
kubectl get pods -l app=myapp
Using kubectl run (Imperative)

# Deprecated: in older kubectl versions this created a Deployment (which in
# turn created a ReplicaSet); the --replicas flag was removed from kubectl run
# in v1.18, so use a manifest instead
kubectl run myapp \
  --image=nginx \
  --replicas=3 \
  --labels=app=myapp
# Scale an existing ReplicaSet (the ReplicaSet must already exist)
kubectl scale rs myapp --replicas=5
Using Selectors

# Using matchLabels (equality-based)
selector:
  matchLabels:
    app: myapp
    tier: frontend
# Using matchExpressions (set-based)
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - myapp
    - webapp
  - key: tier
    operator: NotIn
    values:
    - backend
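Besides In and NotIn, matchExpressions also accepts the Exists and DoesNotExist operators, which match on the presence of a label key alone (the values list must be omitted for them). A sketch with illustrative key names:

```yaml
# Match any Pod that carries an "environment" label, whatever its value,
# and that has no "canary" label at all (key names are illustrative)
selector:
  matchExpressions:
  - key: environment
    operator: Exists
  - key: canary
    operator: DoesNotExist
```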
Scaling ReplicaSets
Manual Scaling
# Scale using kubectl
kubectl scale rs myapp-replicaset --replicas=5
# Scale a Deployment (recommended way)
kubectl scale deployment myapp-deployment --replicas=3
Auto-Scaling (HPA)

# hpa.yaml: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: myapp-replicaset
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

# Create HPA
kubectl apply -f hpa.yaml
# Check HPA status
kubectl get hpa
# Or autoscale imperatively based on CPU
kubectl autoscale rs myapp-replicaset --min=2 --max=10 --cpu-percent=70
Section titled “Scaling Behavior”┌─────────────────────────────────────────────────────────────────────────────┐│ SCALING BEHAVIOR │├─────────────────────────────────────────────────────────────────────────────┤│ ││ Scale Up: ││ ┌─────────────────────────────────────────────────────────────────────┐ ││ │ │ ││ │ replicas: 3 ────▶ replicas: 6 │ ││ │ │ ││ │ ┌─────┐ ┌─────┐ ┌─────┐ │ ││ │ │ Pod │ │ Pod │ │ Pod │ ──▶ +3 new Pods │ ││ │ │ 1 │ │ 2 │ │ 3 │ │ ││ │ └─────┘ └─────┘ └─────┘ │ ││ │ │ ││ └─────────────────────────────────────────────────────────────────────┘ ││ ││ Scale Down: ││ ┌─────────────────────────────────────────────────────────────────────┐ ││ │ │ ││ │ replicas: 6 ────▶ replicas: 3 │ ││ │ │ ││ │ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ │ ││ │ │ Pod │ │ Pod │ │ Pod │ │ Pod │ │ Pod │ │ Pod │ ──▶ -3 Pods │ ││ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 │ │ 6 │ │ ││ │ └─────┘ └─────┘ └─────┘ └─────┘ └─────┘ └─────┘ │ ││ │ │ ││ └─────────────────────────────────────────────────────────────────────┘ ││ ││ Scale Down Behavior: ││ • Terminates pods with longest running time first ││ • Respects PodDisruptionBudget ││ • Checks readiness probes ││ │└─────────────────────────────────────────────────────────────────────────────┘ReplicaSet Lifecycle
ReplicaSet Lifecycle
Pod Management
Section titled “Pod Management”┌─────────────────────────────────────────────────────────────────────────────┐│ REPLICASET LIFECYCLE │├─────────────────────────────────────────────────────────────────────────────┤│ ││ ┌─────────────────────────────────────────────────────────────────────┐ ││ │ CREATION │ ││ │ • User creates ReplicaSet YAML │ ││ │ • ReplicaSet controller creates Pods │ ││ │ • Scheduler assigns Pods to nodes │ ││ └─────────────────────────────────────────────────────────────────────┘ ││ │ ││ ▼ ││ ┌─────────────────────────────────────────────────────────────────────┐ ││ │ RUNNING │ ││ │ • Monitors replica count │ ││ │ • Reconciles desired vs actual │ ││ │ • Handles Pod failures │ ││ │ • Responds to scale commands │ ││ └─────────────────────────────────────────────────────────────────────┘ ││ │ ││ ▼ ││ ┌─────────────────────────────────────────────────────────────────────┐ ││ │ UPDATING │ ││ │ • Image updates trigger new Pods │ ││ │ • Rolling update (via Deployment) │ ││ │ • Maintains desired replica count │ ││ └─────────────────────────────────────────────────────────────────────┘ ││ │ ││ ▼ ││ ┌─────────────────────────────────────────────────────────────────────┐ ││ │ DELETION │ ││ │ • On deletion, managed Pods are also deleted │ ││ │ • Can keep Pods with orphan option │ ││ │ • Finalizers ensure cleanup │ ││ └─────────────────────────────────────────────────────────────────────┘ ││ │└─────────────────────────────────────────────────────────────────────────────┘Orphaning Pods
Orphaning Pods

# Delete ReplicaSet but keep Pods
# (--cascade=false is the deprecated spelling of --cascade=orphan)
kubectl delete rs myapp-replicaset --cascade=orphan
# Pods will remain running without ReplicaSet management
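Orphaning works because ownership is recorded on the Pod itself. A managed Pod carries an ownerReference like the trimmed sketch below (names are illustrative, and the uid a real object would carry is omitted); orphan deletion strips it, and a new ReplicaSet whose selector matches can later re-adopt the Pod:

```yaml
# Trimmed view of a managed Pod's metadata (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: myapp-replicaset-x7k2p       # controller-generated name, illustrative
  labels:
    app: myapp
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: myapp-replicaset
    controller: true                 # marks the managing controller
    blockOwnerDeletion: true
```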
Best Practices
Using Deployments Instead of ReplicaSets
Section titled “Using Deployments Instead of ReplicaSets”┌─────────────────────────────────────────────────────────────────────────────┐│ BEST PRACTICES │├─────────────────────────────────────────────────────────────────────────────┤│ ││ ✓ DO: ││ ───────────────────────────────────────────────────────────────────── ││ • Use Deployments instead of bare ReplicaSets ││ • Deployments manage ReplicaSets automatically ││ • Use HPA for automatic scaling ││ • Set appropriate resource requests/limits ││ • Use labels for organization ││ ││ ✗ DON'T: ││ ───────────────────────────────────────────────────────────────────── ││ • Don't create bare ReplicaSets (use Deployments) ││ • Don't mix ReplicaSet with Deployments targeting same pods ││ • Don't set replicas: 0 unless absolutely necessary ││ • Don't forget to set resource limits ││ ││ Deployment > ReplicaSet > Pod: ││ ───────────────────────────────── ││ ││ ┌───────────────────────────────────────────────────────────────────┐ ││ │ DEPLOYMENT │ ││ │ Manages ReplicaSets for controlled rollouts │ ││ │ │ ││ │ ┌─────────────────────────────────────────────────────────────┐ │ ││ │ │ REPLICASET (v1) │ │ ││ │ │ ┌─────┐ ┌─────┐ ┌─────┐ │ │ ││ │ │ │ Pod │ │ Pod │ │ Pod │ image: v1.0 │ │ ││ │ │ └─────┘ └─────┘ └─────┘ │ │ ││ │ └─────────────────────────────────────────────────────────────┘ │ ││ │ │ ││ └───────────────────────────────────────────────────────────────────┘ ││ │└─────────────────────────────────────────────────────────────────────────────┘Hands-on Lab
Hands-on Lab
Lab: Working with ReplicaSets
Section titled “Lab: Working with ReplicaSets”In this hands-on lab, we’ll create and manage ReplicaSets.
Prerequisites
- A running Kubernetes cluster (minikube, kind, or cloud)
Lab Steps
# Step 1: Create a ReplicaSet
cat > replicaset.yaml << 'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
          requests:
            memory: "128Mi"
            cpu: "100m"
EOF
kubectl apply -f replicaset.yaml
# Step 2: Verify ReplicaSet
kubectl get rs
kubectl get rs nginx-replicaset -o wide
# Step 3: Describe ReplicaSet
kubectl describe rs nginx-replicaset
# Step 4: Check managed pods
kubectl get pods -l app=nginx
# Step 5: Scale the ReplicaSet
kubectl scale rs nginx-replicaset --replicas=5
# Step 6: Verify scaling
kubectl get pods -l app=nginx
kubectl get rs nginx-replicaset
# Step 7: Delete a pod to see self-healing
# (substitute an actual pod name from the previous output for nginx-replicaset-xxxxx)
kubectl delete pod nginx-replicaset-xxxxx
kubectl get pods -l app=nginx -w
# Step 8: Scale down
kubectl scale rs nginx-replicaset --replicas=2
# Step 9: Clean up
kubectl delete rs nginx-replicaset
Summary
Key Takeaways
- ReplicaSets ensure availability - Maintains desired number of Pods
- Self-healing - Automatically replaces failed Pods
- Use Deployments - Deployments manage ReplicaSets
- Scaling - Manual and automatic scaling options
Quick Reference
# Create ReplicaSet
kubectl apply -f rs.yaml
# Get ReplicaSets
kubectl get rs
# Scale ReplicaSet
kubectl scale rs myapp --replicas=5
# Delete ReplicaSet
kubectl delete rs myapp
# Get pods with label
kubectl get pods -l app=myapp
Next Steps

In the next chapter, we'll explore Kubernetes ConfigMaps and Secrets, covering:
- Configuration management
- ConfigMaps for configuration data
- Secrets for sensitive data