
Interview Questions

This chapter contains comprehensive interview questions and detailed answers for DevOps, SRE, and SysAdmin positions focused on bash scripting. The questions are based on real-world scenarios and test practical knowledge.


Question 1: What is the difference between a shell and a terminal?


Answer:

A terminal (also called terminal emulator) is a program that provides a text-based interface for input and output. Examples: gnome-terminal, konsole, alacritty, kitty on Arch Linux.

A shell is the command-line interpreter that executes commands. Examples: bash, zsh, fish, sh.

Terminal (Emulator) ───► Shell (bash/zsh) ───► Linux Kernel
- Renders text           - Parses commands      - Executes
- Handles input          - Expands variables    - Manages
- Displays output        - Runs scripts           resources

The terminal sends your keystrokes to the shell, which interprets them, expands variables, and sends instructions to the kernel.
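A quick way to see the distinction on a live system (output varies by terminal and shell):

```shell
# Which shell process is interpreting your commands?
ps -p $$ -o comm=     # e.g. bash or zsh

# Which terminal device is the shell attached to?
tty || true           # e.g. /dev/pts/0; "not a tty" if stdin is redirected

# You can replace the shell without touching the terminal:
# typing 'zsh' starts a new shell inside the same terminal emulator
```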


Question 2: What is the shebang (#!) and why is it important?


Answer:

The shebang (#!) is the first line of a script that tells the system which interpreter should execute the script. It must be the very first line.

#!/bin/bash # Use bash interpreter
#!/usr/bin/env bash # Portable way - finds bash in PATH
#!/bin/zsh # Use zsh
#!/usr/bin/python3 # Use Python

Why important:

  1. Determines which shell runs the script
  2. Makes script portable across systems
  3. Enables proper syntax highlighting
  4. Required for script to execute directly
# Without shebang - runs with current shell (might be different!)
echo $SHELL # Could be /bin/zsh, but script uses bash features
# With shebang - always uses specified interpreter
#!/bin/bash
echo ${BASH_VERSION} # Will work if bash is the interpreter

Question 3: Explain the difference between $@ and $*


Answer:

Both $@ and $* expand to all positional parameters, but differently:

#!/bin/bash
# Demo script
show_args() {
echo "Using \$@:"
for arg in "$@"; do
echo " $arg"
done
echo "Using \$*:"
for arg in "$*"; do
echo " $arg"
done
}
# Call with arguments containing spaces
show_args "Hello World" "DevOps" "SRE"

Output:

Using $@:
Hello World
DevOps
SRE
Using $*:
Hello World DevOps SRE

Key Difference:

  • "$@" - Each argument remains separate: "$1" "$2" "$3"
  • "$*" - All arguments become one string: "$1 $2 $3"
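One more detail worth knowing: "$*" joins the arguments using the first character of IFS (a space by default), while "$@" ignores IFS entirely. A minimal sketch:

```shell
# "$*" joins with the FIRST character of IFS
join_args() {
    local IFS=','      # affects only "$*" inside this function
    echo "$*"
}
join_args Hello World DevOps   # prints: Hello,World,DevOps
```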
# Practical example - passing all arguments to another command
log_message() {
# $@ preserves individual arguments
logger -t myapp "$@"
}
log_message "Error" "Connection failed" "Server: prod-db-01"
# Wrong: logger -t myapp $* would join them incorrectly

Question 4: What do set -e and set -u mean?


Answer:

These are shell options that make scripts more robust:

#!/bin/bash
# set -e (exit on error)
# Script exits immediately if any command returns non-zero exit status
set -e
# Example 1: Command fails, script exits
cd /nonexistent # Exit status 1
echo "This never prints" # Not reached
# Example 2: With error handling
set -e
command_that_might_fail || { echo "Failed but handled"; exit 0; }
# set -u (exit on undefined variable)
set -u
# Example: Undefined variable causes exit
echo "$undefined_var" # Exit with error: unbound variable
# Common combination
set -euo pipefail
# pipefail - a pipeline fails if any command in it fails
false | true # Without pipefail: exit status 0 (from true)
# With pipefail: exit status 1 (because false returns 1)

Question 5: How do you debug a bash script?


Answer:

Several techniques for debugging:

#!/bin/bash
# 1. Use -x flag (debug mode)
# bash -x script.sh
# Shows each command before execution
# 2. Add set -x in script
#!/bin/bash
set -x # Enable debug from here
commands
set +x # Disable debug
# 3. Use -v flag (verbose)
bash -v script.sh # Shows lines as read
# 4. Combine for thorough debugging
bash -xv script.sh
# 5. Use DEBUG trap
#!/bin/bash
debug_trap() {
    echo "Executing: $BASH_COMMAND"
    echo "Line: $LINENO"
}
trap debug_trap DEBUG
# 6. Use ERR trap
#!/bin/bash
error_trap() {
    echo "Error on line $LINENO: '$BASH_COMMAND' exited with $?"
}
trap error_trap ERR
# 7. Print variable values
echo "Debug: var = $var"
# 8. Use shellcheck for static analysis
shellcheck script.sh

Question 6: What is the difference between single quotes and double quotes?


Answer:

The key difference is variable/command expansion:

#!/bin/bash
name="DevOps"
echo 'Hello $name' # Output: Hello $name (literal)
echo "Hello $name" # Output: Hello DevOps (expanded)
# Single quotes - preserve everything literally
# Double quotes - allow variable and command expansion
# Command substitution difference
echo 'Today is $(date)' # Output: Today is $(date)
echo "Today is $(date)" # Output: Today is Sat Feb 22 10:30:00
# Special characters
echo 'Path: $PATH' # Output: Path: $PATH
echo "Path: $PATH" # Output: Path: /usr/bin:/bin
# Nested quotes
echo "He's learning" # Single in double - works
echo 'She said "Hello"' # Double in single - works
# echo "He said "Hello"" # ERROR - double quotes can't nest
# Escape characters
echo "Special: \$, \", \\" # Output: Special: $, ", \
echo 'Special: \$, \", \\' # Output: Special: \$, \", \\

When to use which:

  • Single quotes: When you want literal text (passwords, paths with special chars)
  • Double quotes: When you need variable/command expansion
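A common place this matters is awk and sed programs, which use $1, $2, ... themselves; single quotes keep the shell from expanding those as positional parameters:

```shell
# Single-quote awk programs so the shell leaves $2 alone
echo "alice admin" | awk '{print $2}'      # prints: admin

# To pass a shell variable into awk, use -v instead of double-quoting
# the whole program
user="alice"
echo "alice admin" | awk -v u="$user" '$1 == u {print "found " u}'
```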

Question 7: Explain parameter expansion in bash


Answer:

Bash provides powerful parameter expansion:

#!/bin/bash
# Basic variable
name="admin"
# Default values
# ${var:-default} - use default if unset or null
echo "${name:-guest}" # admin (name is set)
echo "${undefined:-guest}" # guest (undefined uses default)
# Note: doesn't change the variable
# ${var:=default} - assign default if unset
echo "${count:=0}" # 0 and now count=0
# ${var:?message} - error if unset
: "${required_var:?Error: required_var is not set}"
# ${var:+value} - use value if set
echo "${name:+set}" # set (name is set)
echo "${undefined:+set}" # (empty)
# String operations
path="/home/user/documents/file.txt"
# Remove prefix/suffix
echo "${path##*/}" # file.txt (remove longest /...)
echo "${path#*/}" # home/user/documents/file.txt (shortest)
echo "${path%.*}" # /home/user/documents/file (remove .*)
echo "${path%%.*}" # /home/user/documents/file (longest)
# Substring
text="Hello World"
echo "${text:0:5}" # Hello (from index 0, length 5)
echo "${text:6}" # World (from index 6 to end)
# Length
echo "${#text}" # 11
# Case conversion (bash 4+)
echo "${text^^}" # HELLO WORLD
echo "${text,,}" # hello world
echo "${text^}" # Hello world (first char upper)
# Search and replace
echo "${text/World/Everyone}" # Hello Everyone
echo "${text//l/L}" # HeLLo WorLd (all occurrences)

Question 8: What are associative arrays and how do you use them?


Answer:

Associative arrays (dictionaries) use string keys instead of numeric indices:

#!/bin/bash
# Declare associative array
declare -A user_info
# Add elements
user_info[name]="john"
user_info[role]="admin"
user_info[email]="john@example.com"
# Access elements
echo "${user_info[name]}" # john
echo "${user_info[role]}" # admin
# List all keys (iteration order of associative arrays is not guaranteed)
echo "${!user_info[@]}" # e.g. name role email
# List all values
echo "${user_info[@]}" # e.g. john admin john@example.com
# Iterate over keys
for key in "${!user_info[@]}"; do
    echo "$key: ${user_info[$key]}"
done
# Count elements
echo "${#user_info[@]}" # 3
# Remove element (quote to prevent glob expansion)
unset 'user_info[email]'
# Check if key exists
if [[ -v user_info[role] ]]; then
    echo "Role exists"
fi
# Real-world example: server inventory
declare -A servers
servers[web-01]="192.168.1.10"
servers[web-02]="192.168.1.11"
servers[db-01]="192.168.1.20"
servers[cache-01]="192.168.1.30"
# Get IP for server
get_server_ip() {
    local server="$1"
    echo "${servers[$server]:-not_found}"
}
get_server_ip web-01 # 192.168.1.10

Question 9: Explain the difference between if/elif/else and case statements


Answer:

Both are used for conditional logic, but with different use cases:

#!/bin/bash
# if/elif/else - for complex conditions
if [[ $status -eq 200 ]]; then
    echo "Success"
elif [[ $status -eq 404 ]]; then
    echo "Not Found"
elif [[ $status -ge 500 ]]; then
    echo "Server Error"
else
    echo "Unknown status"
fi
# case - for pattern matching and simple values
case $environment in
    production)
        echo "Running in production"
        echo "Extra production checks"
        ;;
    staging)
        echo "Running in staging"
        ;;
    development|dev)
        echo "Running in development"
        ;;
    *)
        echo "Unknown environment: $environment"
        exit 1
        ;;
esac
# Practical example: service management
case $1 in
    start)
        echo "Starting service..."
        systemctl start myapp
        ;;
    stop)
        echo "Stopping service..."
        systemctl stop myapp
        ;;
    restart)
        $0 stop
        sleep 2
        $0 start
        ;;
    status)
        systemctl status myapp
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac
# Pattern matching in case
read -r filename
case "$filename" in
    *.txt)
        echo "Text file"
        ;;
    *.jpg|*.png|*.gif)
        echo "Image file"
        ;;
    *.sh)
        echo "Shell script"
        ;;
    *)
        echo "Unknown file type"
        ;;
esac

Question 10: How do you loop through files in a directory?


Answer:

Multiple approaches depending on needs:

#!/bin/bash
# Loop through files in current directory
for file in *; do
    echo "File: $file"
done
# Loop through specific extension
for file in *.txt; do
    echo "Text file: $file"
done
# Loop through files recursively
for file in $(find . -type f); do
    echo "Found: $file"
done
# Better: handle spaces in filenames
while IFS= read -r file; do
    echo "File: $file"
done < <(find . -type f)
# Modern bash: globstar for recursive
shopt -s globstar
for file in **/*; do
    [[ -f "$file" ]] && echo "File: $file"
done
# With specific pattern
for file in /var/log/**/*.log; do
    echo "Log: $file"
done
# Practical example: process each file
process_files() {
    local dir="$1"
    local count=0
    for file in "$dir"/*; do
        if [[ -f "$file" ]]; then
            ((count++))
            echo "Processing $count: $file"
            # Do something with file
        fi
    done
    echo "Total files processed: $count"
}
process_files "/path/to/directory"

Question 11: How do you return a value from a function?


Answer:

Bash functions can't return values the way functions in other languages do; the return builtin only sets a numeric exit status (0-255). Common workarounds:

#!/bin/bash
# Method 1: Exit status (for success/failure only)
is_root() {
    [[ $EUID -eq 0 ]]
}
if is_root; then
    echo "Running as root"
fi
# Method 2: Echo output and capture
get_date() {
    date '+%Y-%m-%d'
}
today=$(get_date)
echo "Today is: $today"
# Method 3: Use global variable (not recommended)
result=""
compute() {
    result=$((10 + 20))
}
compute
echo "Result: $result"
# Method 4: Use nameref for output variable
get_square() {
    local -n output="$1" # Nameref to caller's variable
    local num=$2
    output=$((num * num))
}
get_square sq 5
echo "Square: $sq" # 25
# Method 5: Return array
get_users() {
    local users=(admin user1 user2)
    printf '%s\n' "${users[@]}"
}
# Read into array
mapfile -t user_list < <(get_users)
echo "${user_list[@]}"
# Method 6: Complex data as JSON
get_server_info() {
    cat <<EOF
{"name": "web-01", "ip": "192.168.1.10", "status": "running"}
EOF
}
info=$(get_server_info)
name=$(echo "$info" | jq -r '.name')

Question 12: What are local variables and why use them?


Answer:

Local variables confine variable scope to the function:

#!/bin/bash
# Global variable - accessible everywhere
global_var="I'm global"
function test_scope() {
    # Local variable - only in this function
    local local_var="I'm local"
    echo "Inside function:"
    echo "  global_var = $global_var" # Works
    echo "  local_var = $local_var" # Works
}
test_scope
echo "Outside function:"
echo "  global_var = $global_var" # Works
echo "  local_var = $local_var" # Empty/unset!
# Why use local?
# 1. Prevent variable conflicts
# 2. Make functions reusable
# 3. Avoid polluting global namespace
counter=0
increment() {
    local counter # Separate from global counter
    counter=5
    echo "Local counter: $counter"
}
increment # Prints 5
echo "Global counter: $counter" # Still 0
# Bonus: local variables are released automatically when the function returns
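local matters most in recursive functions: without it every call shares one variable and the recursion collapses. A minimal sketch:

```shell
# Each recursive call gets its own copy of n thanks to 'local'
countdown() {
    local n=$1            # remove 'local' and every frame shares one n
    if [ "$n" -gt 0 ]; then
        countdown $((n - 1))
    fi
    echo "$n"             # still sees its own n after the recursive call
}
countdown 3               # prints 0, 1, 2, 3 on separate lines
```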

Question 13: How would you extract the IP address from a log line?


Answer:

Multiple tools available:

#!/bin/bash
log_line='2024-02-22 10:30:45 ERROR Connection from 192.168.1.100 failed'
# Method 1: grep -o (extract matches)
echo "$log_line" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
# Method 2: sed
echo "$log_line" | sed -nE 's/.*([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+).*/\1/p'
# Method 3: awk
echo "$log_line" | awk '{for(i=1;i<=NF;i++) if($i ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) print $i}'
# Method 4: bash regex matching (=~ and BASH_REMATCH)
if [[ "$log_line" =~ ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) ]]; then
    echo "${BASH_REMATCH[1]}"
fi
# Method 5: Complex parsing with awk
# Sample: "user=john ip=10.0.0.5 action=login"
echo "user=john ip=10.0.0.5 action=login" | awk -F'ip=' '{split($2,a," "); print a[1]}'
# Output: 10.0.0.5
# Real-world: Extract from nginx access log
nginx_log='192.168.1.50 - - [22/Feb/2024:10:30:45 +0000] "GET /api/users HTTP/1.1" 200 1234'
ip=$(echo "$nginx_log" | awk '{print $1}')
echo "Client IP: $ip" # 192.168.1.50

Question 14: Explain the difference between sed and awk


Answer:

Both are powerful text processing tools, but with different strengths:

#!/bin/bash
# sed - Stream Editor (line-by-line, simple transformations)
# Replace text
sed 's/old/new/' file.txt # First occurrence per line
sed 's/old/new/g' file.txt # All occurrences
sed 's/old/new/2' file.txt # Second occurrence
sed -i 's/old/new/g' file.txt # In-place edit
# Delete lines
sed '/pattern/d' file.txt # Delete matching lines
sed '1,5d' file.txt # Delete lines 1-5
sed '/^$/d' file.txt # Delete empty lines
# Print specific lines
sed -n '5p' file.txt # Print line 5
sed -n '5,10p' file.txt # Print lines 5-10
sed -n '/pattern/p' file.txt # Print matching lines
# Multiple operations
sed -e 's/a/b/' -e 's/c/d/' file.txt
# --- awk - Advanced (field-based processing) ---
# Print specific columns
awk '{print $1}' file.txt # First column
awk '{print $1, $3}' file.txt # Columns 1 and 3
# Field separator
awk -F',' '{print $2}' csv.txt # Comma-separated
awk -F'|' '{print $NF}' file.txt # Last field
# Patterns and conditions
awk '/error/ {print}' log.txt # Lines with 'error'
awk 'NR>1 {print}' file.txt # Skip header
# Computations
awk '{sum+=$1} END {print sum}' numbers.txt
# Built-in variables
# NR - current line number
# NF - number of fields
# FS - field separator
# OFS - output field separator
# Practical example: Parse server stats
echo "web01 100 500 2000" | awk '{
    server=$1
    cpu=$2
    mem=$3
    disk=$4
    if (cpu > 80 || mem > 80) {
        print "ALERT: " server " high usage - CPU:" cpu "% MEM:" mem "%"
    }
}'

Summary:

  • sed: Good for simple, line-based text replacements
  • awk: Good for structured data, calculations, complex logic
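The two tools also combine well in one pipeline: sed cleans the stream, awk aggregates it. A sketch over made-up log lines (status code in column 2, bytes in column 3):

```shell
# sed drops comment lines, awk sums bytes per status code
printf '%s\n' \
    '# access log' \
    '/api 200 512' \
    '/api 500 128' \
    '/img 200 2048' |
    sed '/^#/d' |
    awk '{bytes[$2] += $3} END {for (s in bytes) print s, bytes[s]}'
# prints "200 2560" and "500 128" (awk's for-in order is unspecified)
```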

Question 15: How do you run commands in parallel in bash?


Answer:

Several methods for parallel execution:

#!/bin/bash
# Method 1: Background jobs with &
echo "Starting background jobs..."
process_item item1 &
process_item item2 &
process_item item3 &
wait # Wait for all to complete
echo "All done"
# Method 2: GNU Parallel (recommended for complex scenarios)
# Install: sudo pacman -S parallel
# Simple parallel
ls *.txt | parallel process_file {}
# With specific number of jobs
ls *.txt | parallel -j 4 process_file {}
# Using all CPU cores
ls *.txt | parallel -j+0 process_file {}
# With progress
ls *.txt | parallel --progress process_file {}
# Method 3: xargs parallelism
find . -name "*.txt" | xargs -P 4 -I {} process {}
# Method 4: Collect output from parallel jobs via temp files
# (note: results+=("$(cmd &)") does NOT run in parallel - command
# substitution waits for the command to finish)
tmpdir=$(mktemp -d)
for item in item1 item2 item3; do
    process "$item" > "$tmpdir/$item.out" &
done
wait
cat "$tmpdir"/*.out
rm -rf "$tmpdir"
# Method 5: Named pipes for parallel processing
mkfifo pipe1 pipe2
process1 > pipe1 &
process2 > pipe2 &
cat pipe1 pipe2
rm pipe1 pipe2
# Real-world: Process multiple log files
process_logs() {
    local log_file="$1"
    echo "Processing $log_file"
    # analysis commands
}
export -f process_logs
parallel -j+0 process_logs ::: /var/log/*.log

Question 16: What are signals and how do you handle them?


Answer:

Signals are messages sent to processes. Common ones:

#!/bin/bash
# Signal types
# SIGINT (2) - Ctrl+C, interrupt
# SIGTERM (15) - Graceful termination request
# SIGKILL (9) - Force kill, cannot be caught
# SIGHUP (1) - Hangup, often used for config reload
# SIGUSR1 (10) - User-defined
# SIGUSR2 (12) - User-defined
# Using trap to handle signals
cleanup() {
    echo "Cleaning up..."
    # Remove temp files
    rm -f /tmp/script_*
    # Stop background processes
    kill "$PID" 2>/dev/null
    exit 0
}
# Catch SIGINT and SIGTERM
trap cleanup SIGINT SIGTERM
# Ignore signal (Ctrl+C won't work)
trap '' SIGINT
# Restore default behavior
trap - SIGINT
# Example: Graceful shutdown
#!/bin/bash
PID_FILE="/var/run/myapp.pid"
start() {
    echo "Starting..."
    ./myapp &
    echo $! > "$PID_FILE"
}
stop() {
    if [[ -f "$PID_FILE" ]]; then
        pid=$(cat "$PID_FILE")
        kill -TERM "$pid"
        rm "$PID_FILE"
        echo "Stopped"
    fi
}
# Handle TERM signal
trap 'stop; exit' TERM
# Reload configuration (USR1)
reload() {
    echo "Reloading config..."
    kill -USR1 "$(cat "$PID_FILE")"
}
trap reload USR1
case "$1" in
    start) start ;;
    stop) stop ;;
    reload) reload ;;
esac
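Besides named signals, the pseudo-signal EXIT is often the most useful trap target: it fires on every exit path, so cleanup cannot be skipped. A minimal sketch (the tmpdir path is whatever mktemp returns):

```shell
#!/usr/bin/env bash
# EXIT fires on normal exit, on 'set -e' failures, and after handled
# signals - one trap covers every way out of the script
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

echo "scratch space: $tmpdir"
# ... use $tmpdir freely; no explicit cleanup call needed ...
```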

Question 17: Write a script to automate log rotation


Answer:

#!/usr/bin/env bash
#
# log-rotation.sh - Automate log rotation
#
set -euo pipefail
LOG_DIR="/var/log/myapp"
RETENTION_DAYS=30
MAX_SIZE_MB=100
rotate_logs() {
    local log_dir="$1"
    local retention="$2"
    local max_size="$3"
    echo "Starting log rotation in $log_dir"
    # Create timestamp
    timestamp=$(date +%Y%m%d_%H%M%S)
    # Find and rotate large logs
    find "$log_dir" -name "*.log" -type f | while read -r log_file; do
        # Check file size
        size_mb=$(du -m "$log_file" | cut -f1)
        if [[ $size_mb -ge $max_size ]]; then
            echo "Rotating $log_file (size: ${size_mb}MB)"
            # Compress and rotate
            mv "$log_file" "${log_file}.${timestamp}"
            gzip "${log_file}.${timestamp}"
            # Create new empty log
            touch "$log_file"
            chmod 644 "$log_file"
        fi
    done
    # Clean up old compressed logs
    find "$log_dir" -name "*.log.*.gz" -type f -mtime +"$retention" -delete
    # Count remaining files
    total_logs=$(find "$log_dir" -name "*.log*" -type f | wc -l)
    echo "Rotation complete. Total log files: $total_logs"
}
# Main
main() {
    if [[ $EUID -ne 0 ]]; then
        echo "Please run as root"
        exit 1
    fi
    rotate_logs "$LOG_DIR" "$RETENTION_DAYS" "$MAX_SIZE_MB"
}
main "$@"

Question 18: How would you implement a retry mechanism in bash?


Answer:

Essential for production scripts dealing with unreliable services:

#!/usr/bin/env bash
#
# retry.sh - Retry mechanism for commands
#
# Simple retry function
retry() {
    local max_attempts=$1
    local delay=$2
    shift 2
    local attempt=1
    while [[ $attempt -le $max_attempts ]]; do
        echo "Attempt $attempt/$max_attempts: $*"
        if "$@"; then
            echo "Success on attempt $attempt"
            return 0
        fi
        if [[ $attempt -lt $max_attempts ]]; then
            echo "Failed. Retrying in ${delay}s..."
            sleep "$delay"
        fi
        ((attempt++))
    done
    echo "Failed after $max_attempts attempts"
    return 1
}
# Retry with exponential backoff
retry_with_backoff() {
    local max_attempts=$1
    local initial_delay=$2
    local max_delay=$3
    shift 3
    local attempt=1
    local delay=$initial_delay
    while [[ $attempt -le $max_attempts ]]; do
        echo "Attempt $attempt/$max_attempts: $*"
        if "$@"; then
            echo "Success on attempt $attempt"
            return 0
        fi
        if [[ $attempt -lt $max_attempts ]]; then
            echo "Failed. Retrying in ${delay}s (max: ${max_delay}s)..."
            sleep "$delay"
            # Exponential backoff
            delay=$((delay * 2))
            [[ $delay -gt $max_delay ]] && delay=$max_delay
        fi
        ((attempt++))
    done
    echo "Failed after $max_attempts attempts"
    return 1
}
# Usage examples
retry 3 5 curl -s https://api.example.com/health
retry_with_backoff 5 1 30 docker pull nginx:latest
# With custom error handling
retry_command() {
    local max_attempts=5
    local attempt=1
    until curl -sf http://localhost:8080/health > /dev/null; do
        echo "Health check failed, attempt $attempt/$max_attempts"
        if [[ $attempt -eq $max_attempts ]]; then
            echo "Service failed to become healthy"
            return 1
        fi
        ((attempt++))
        sleep 5
    done
    echo "Service is healthy"
    return 0
}

Question 19: Write a script to check server health


Answer:

#!/usr/bin/env bash
#
# health-check.sh - Comprehensive server health check
#
set -euo pipefail
ALERT_THRESHOLD_CPU=80
ALERT_THRESHOLD_MEM=85
ALERT_THRESHOLD_DISK=90
check_cpu() {
    local usage
    usage=$(top -bn1 | grep "Cpu(s)" | awk '{print int($2)}')
    echo "CPU Usage: ${usage}%"
    if [[ $usage -gt $ALERT_THRESHOLD_CPU ]]; then
        echo "ALERT: CPU usage above threshold!"
        return 1
    fi
    return 0
}
check_memory() {
    local mem_usage
    mem_usage=$(free | grep Mem | awk '{printf "%.0f", $3/$2 * 100}')
    echo "Memory Usage: ${mem_usage}%"
    if [[ $mem_usage -gt $ALERT_THRESHOLD_MEM ]]; then
        echo "ALERT: Memory usage above threshold!"
        return 1
    fi
    return 0
}
check_disk() {
    local disk_usage
    disk_usage=$(df -h / | tail -1 | awk '{print int($5)}')
    echo "Disk Usage: ${disk_usage}%"
    if [[ $disk_usage -gt $ALERT_THRESHOLD_DISK ]]; then
        echo "ALERT: Disk usage above threshold!"
        return 1
    fi
    return 0
}
check_services() {
    local services=("nginx" "docker" "postgresql")
    for service in "${services[@]}"; do
        if systemctl is-active --quiet "$service"; then
            echo "$service: RUNNING"
        else
            echo "ALERT: $service is NOT running!"
            return 1
        fi
    done
    return 0
}
check_ports() {
    local ports=(80 443 22)
    for port in "${ports[@]}"; do
        if ss -tuln | grep -q ":$port "; then
            echo "Port $port: LISTENING"
        else
            echo "ALERT: Port $port not listening!"
            return 1
        fi
    done
    return 0
}
check_load() {
    local load
    load=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')
    cores=$(nproc)
    load_int=$(echo "$load" | cut -d'.' -f1)
    echo "Load Average: $load (cores: $cores)"
    if [[ $load_int -gt $((cores * 2)) ]]; then
        echo "ALERT: Load is very high!"
        return 1
    fi
    return 0
}
# Main health check
# Note: failed=$((failed + 1)) instead of ((failed++)), because
# ((failed++)) returns 1 when failed is 0 and would trip set -e
main() {
    echo "=== Server Health Check ==="
    echo "Hostname: $(hostname)"
    echo "Date: $(date)"
    echo ""
    local failed=0
    check_cpu || failed=$((failed + 1))
    echo ""
    check_memory || failed=$((failed + 1))
    echo ""
    check_disk || failed=$((failed + 1))
    echo ""
    check_services || failed=$((failed + 1))
    echo ""
    check_ports || failed=$((failed + 1))
    echo ""
    check_load || failed=$((failed + 1))
    echo ""
    if [[ $failed -eq 0 ]]; then
        echo "=== All checks passed ==="
        exit 0
    else
        echo "=== ALERT: $failed check(s) failed ==="
        exit 1
    fi
}
main "$@"

Question 20: How would you implement idempotent operations in bash?



Answer:

Idempotent means running the script multiple times produces the same result:

#!/usr/bin/env bash
#
# Idempotent script examples
#
# ❌ NOT idempotent - adds line every time
add_line() {
echo "192.168.1.100 server1" >> /etc/hosts
}
# ✓ Idempotent - checks before adding
add_to_hosts() {
local ip="$1"
local hostname="$2"
if ! grep -q "$hostname" /etc/hosts; then
echo "$ip $hostname" >> /etc/hosts
fi
}
# ✓ Idempotent - creates user if not exists
create_user() {
local username="$1"
if ! id "$username" &>/dev/null; then
useradd -m "$username"
echo "User $username created"
else
echo "User $username already exists"
fi
}
# ✓ Idempotent - service configuration
configure_service() {
local config_file="/etc/myapp/config.conf"
# Backup existing
[[ -f "$config_file" ]] && cp "$config_file" "${config_file}.bak"
# Create with idempotent content
cat > "$config_file" <<EOF
# Configuration
port=8080
environment=production
max_connections=100
EOF
echo "Configuration written"
}
# ✓ Idempotent - symlink
ensure_symlink() {
local target="$1"
local link="$2"
if [[ -L "$link" ]]; then
current_target=$(readlink "$link")
if [[ "$current_target" == "$target" ]]; then
echo "Symlink already correct"
return 0
fi
rm "$link"
fi
ln -s "$target" "$link"
echo "Symlink created"
}
# ✓ Idempotent - package installation
install_packages() {
local packages=("nginx" "curl" "git")
for pkg in "${packages[@]}"; do
if pacman -Qs "^$pkg$" &>/dev/null; then
echo "$pkg already installed"
else
sudo pacman -S --noconfirm "$pkg"
fi
done
}
# Key principle: Check state before changing
# Use: if ! condition; then action; fi

Question 21: Explain shell expansion order



Answer:

Understanding expansion order is crucial for predictable behavior:

#!/bin/bash
# Expansion order:
# 1. Brace expansion
# 2. Tilde expansion
# 3. Parameter and variable expansion
# 4. Command substitution
# 5. Arithmetic expansion
# 6. Word splitting
# 7. Pathname expansion
# 1. Brace expansion (first)
echo {1..3} # 1 2 3
echo {a,b,c} # a b c
echo file{A,B,C} # fileA fileB fileC
# 2. Tilde expansion
echo ~ # /home/username
echo ~root # /root
echo ~+ # current directory (same as $PWD)
# 3. Parameter/variable expansion
name="world"
echo "Hello $name" # Hello world
# 4. Command substitution
echo "Today: $(date)" # Today: Sat Feb 22 10:00:00
echo "Files: $(ls | wc -l)"
# 5. Arithmetic expansion
echo $((10 + 5)) # 15
echo $((10 * 2)) # 20
# 6. Word splitting (by $IFS)
words="one two three"
echo $words # three separate arguments: one two three
echo "$words" # one argument: "one two three"
# 7. Pathname expansion (globbing)
ls *.txt # all txt files
echo *.sh # all sh files
# Practical example - understanding why this fails:
file="my document.txt"
ls $file
# Expands to: ls my document.txt (WRONG - 3 args!)
# Fix with quoting:
ls "$file"
# Expands to: ls "my document.txt" (CORRECT - 1 arg)

Question 22: What is the difference between subshell and current shell?



Answer:

Understanding subshells is crucial for variable scope:

#!/bin/bash
# Variables in subshell don't affect parent
(
    subshell_var="I'm in subshell"
    echo "Inside subshell: $subshell_var"
)
echo "Outside: $subshell_var" # Empty!
# Using parentheses creates subshell
# 1. Command grouping with ()
result=$(
    cd /tmp
    pwd
)
echo "$result" # /tmp, but shell's cwd unchanged
# 2. Pipelines create subshells
# In bash, each command in pipeline runs in subshell
who | sort # sort runs in subshell
# Modern bash: last command in pipeline can be in current shell
# bash 4.2+: lastpipe option
shopt -s lastpipe
who | sort # sort runs in current shell
# 3. Command substitution $()
output=$(process)
# 4. Process substitution <()
while read -r line; do
    echo "$line"
done < <(echo "test")
# When to use subshell:
# - Need isolated variables
# - Want to change directory temporarily
# - Run potentially dangerous commands
# When NOT to use subshell:
# - Need to preserve variables
# - Performance critical loops
# Performance difference:
# Slow - forks a subshell on every iteration
for i in {1..1000}; do
    result=$(echo $((i * 2)))
done
# Faster - plain arithmetic in the current shell
for i in {1..1000}; do
    result=$((i * 2))
done
# (forking a process per iteration dominates the cost in tight loops)
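The most common real-world bite is the pipeline-into-while pattern: the loop runs in a subshell, so variables set inside it vanish. A minimal sketch:

```shell
# Gotcha: the while loop below runs in a subshell (in bash)
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "after pipeline: $count"        # prints 0, not 3

# Fix: redirect into the loop so it stays in the current shell
# (a here-doc here; process substitution < <(cmd) also works in bash)
count=0
while read -r line; do
    count=$((count + 1))
done <<EOF
a
b
c
EOF
echo "after redirect: $count"        # prints 3
```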

This chapter covered:

  • ✅ Basic shell concepts (terminal, shell, shebang)
  • ✅ Variables and quoting
  • ✅ Parameter expansion
  • ✅ Associative arrays
  • ✅ Control flow (if/case, loops)
  • ✅ Functions and return values
  • ✅ Text processing (sed, awk)
  • ✅ Process management and parallelism
  • ✅ Signals and signal handling
  • ✅ DevOps scenarios (log rotation, health checks, retry)
  • ✅ Idempotent operations
  • ✅ Shell expansion order
  • ✅ Subshell vs current shell

Continue to the final chapter to practice with real-world exercises.

