Linux Practical Interview Questions (1251-1500)
Linux Kernel Tuning Advanced

Q1251: How do you configure kernel parameters at boot?

Answer: Kernel parameters can be set at boot time through GRUB:

# Edit the GRUB config
vim /etc/default/grub

# Add parameters to GRUB_CMDLINE_LINUX
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 crashkernel=auto"

# Regenerate the GRUB config
update-grub                               # Debian/Ubuntu
grub2-mkconfig -o /boot/grub2/grub.cfg    # RHEL/CentOS

# View current parameters
cat /proc/cmdline

# Temporary sysctl changes (current session only)
sysctl -w parameter=value

# Permanent sysctl changes
# /etc/sysctl.conf or /etc/sysctl.d/99-custom.conf
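The contents of /proc/cmdline can also be inspected programmatically. A minimal sketch, using a made-up command line rather than the live one, that splits it into key=value settings and bare flags:

```shell
# Hypothetical kernel command line; on a real system read /proc/cmdline instead.
cmdline="BOOT_IMAGE=/vmlinuz root=/dev/sda1 ro net.ifnames=0 crashkernel=auto"
for param in $cmdline; do
  case "$param" in
    *=*) echo "${param%%=*} = ${param#*=}" ;;   # key=value parameter
    *)   echo "$param (flag)" ;;                 # bare flag like "ro"
  esac
done
```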
Q1252: How do you tune network kernel parameters?

Answer:

# Network core
net.core.somaxconn=65535
net.core.netdev_max_backlog=65535
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.optmem_max=25165824

# TCP tuning
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
net.ipv4.tcp_congestion_control=cubic
net.ipv4.tcp_fastopen=3
net.ipv4.tcp_max_syn_backlog=8192
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=60
net.ipv4.tcp_keepalive_probes=5
net.ipv4.tcp_tw_reuse=1

# Apply
sysctl -p
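Each sysctl key maps to a file under /proc/sys, with dots replaced by slashes, which is handy for verifying applied values. A small sketch of that mapping:

```shell
# Map a sysctl key to its /proc/sys path (dots become slashes).
key="net.ipv4.tcp_fin_timeout"
path="/proc/sys/${key//./\/}"
echo "$path"
# On a live system: cat "$path" shows the current value.
```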
Q1253: How do you optimize memory management?

Answer:

# Virtual memory
vm.swappiness=10
vm.dirty_ratio=15
vm.dirty_background_ratio=5
vm.dirty_expire_centisecs=3000
vm.dirty_writeback_centisecs=500
vm.vfs_cache_pressure=50
vm.min_free_kbytes=65536

# Memory overcommit
vm.overcommit_memory=0
vm.overcommit_ratio=50
vm.oom_dump_tasks=1
vm.oom_kill_allocating_task=1

# Huge pages
vm.nr_hugepages=1024

# Transparent huge pages
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
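vm.dirty_ratio is a percentage of reclaimable memory, so the absolute dirty-page budget scales with RAM. A quick arithmetic sketch with illustrative numbers (8 GiB of RAM, the dirty_ratio=15 from above):

```shell
# 8 GiB expressed in KiB (hypothetical machine size).
mem_kib=$((8 * 1024 * 1024))
dirty_ratio=15
# Approximate KiB of dirty pages allowed before writers are throttled.
dirty_kib=$((mem_kib * dirty_ratio / 100))
echo "$dirty_kib"
```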
Q1254: How do you configure kernel modules at startup?

Answer:

# Load modules at boot: list one module name per line
# /etc/modules-load.d/modules.conf
loop
st
8021q

# Module parameters
# /etc/modprobe.d/<name>.conf
options bonding mode=active-backup
options iptables conntrack_hashsize=262144

# Blacklist modules
# /etc/modprobe.d/blacklist.conf
blacklist nouveau
blacklist snd_pcsp

# Rebuild module dependencies
depmod -a

# View loaded modules
lsmod
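Besides lsmod, a loaded module shows up as a directory under /sys/module, which is easier to test from a script. A small sketch (the module name is just an example; sysfs uses underscores where a name contains dashes):

```shell
mod="8021q"
# Dashes in module names appear as underscores in /sys/module.
if [ -d "/sys/module/${mod//-/_}" ]; then
  echo "$mod loaded"
else
  echo "$mod not loaded"
fi
```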
Q1255: How do you implement kernel live patching?

Answer:

# Using kpatch (RHEL/CentOS)
yum install kpatch
kpatch install <patch-module.ko>

# Create a patch
# 1. Create the patch file
diff -Naur orig/file.c new/file.c > patch.diff

# 2. Build the patch module
kpatch-build patch.diff

# 3. Apply it
kpatch load kpatch-mypatch.ko

# Using Livepatch (Ubuntu)
snap install canonical-livepatch
canonical-livepatch enable <token>

# Check status
canonical-livepatch status
kpatch list
Linux Services Advanced

Q1256: How do you configure HAProxy with SSL?

Answer:

# Generate certificates
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/haproxy.key \
  -out /etc/ssl/certs/haproxy.crt

# Combine into a single PEM
cat /etc/ssl/certs/haproxy.crt /etc/ssl/private/haproxy.key > /etc/haproxy/haproxy.pem

# HAProxy config
# /etc/haproxy/haproxy.cfg
frontend https_front
    bind *:443 ssl crt /etc/haproxy/haproxy.pem
    http-response set-header Strict-Transport-Security "max-age=31536000"
    default_backend app_servers

backend app_servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    server app1 192.168.1.10:8080 check inter 2000 rise 2 fall 3
    server app2 192.168.1.11:8080 check inter 2000 rise 2 fall 3

# OCSP stapling: HAProxy staples the response stored next to the certificate
# in <crt>.ocsp (here /etc/haproxy/haproxy.pem.ocsp)
frontend https_front
    bind *:443 ssl crt /etc/haproxy/haproxy.pem ca-file /etc/ssl/certs/ca-certificates.crt
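A mismatched certificate and key in the combined PEM is a common cause of HAProxy startup failures. A minimal sketch of a pre-flight check, shown against a throwaway self-signed pair (substitute the haproxy.crt/.key paths from above in real use):

```shell
# Generate a disposable matching pair purely for illustration.
tmp=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=test" \
  -keyout "$tmp/haproxy.key" -out "$tmp/haproxy.crt" 2>/dev/null
# A cert and key belong together when their public keys are identical.
crt_pub=$(openssl x509 -in "$tmp/haproxy.crt" -pubkey -noout)
key_pub=$(openssl pkey -in "$tmp/haproxy.key" -pubout)
if [ -n "$crt_pub" ] && [ "$crt_pub" = "$key_pub" ]; then
  echo "cert and key match"
else
  echo "MISMATCH"
fi
rm -rf "$tmp"
```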
Q1257: How do you configure Nginx with HTTP/2?

Answer:

# HTTP/2 support is built into stock nginx packages (1.9.5+)
apt install nginx

http {
    server {
        listen 443 ssl http2;
        server_name example.com;

        ssl_certificate /etc/ssl/certs/server.crt;
        ssl_certificate_key /etc/ssl/private/server.key;

        # SSL configuration
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_prefer_server_ciphers off;

        # OCSP stapling
        ssl_stapling on;
        ssl_stapling_verify on;

        location / {
            proxy_pass http://backend;
        }
    }
}
Q1258: How do you configure Redis cluster?

Answer:

# Create cluster
redis-cli --cluster create 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 \
  127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006 \
  --cluster-replicas 1

# Check cluster
redis-cli -c -p 7001 cluster nodes
redis-cli -c -p 7001 cluster info

# Connect to cluster
redis-cli -c -p 7001

# Operations
CLUSTER INFO
CLUSTER SLOTS
CLUSTER NODES

# Failover (manual promotion of a replica)
redis-cli -p 7001 cluster failover

# Reshard
redis-cli --cluster reshard 127.0.0.1:7001
Q1259: How do you configure PostgreSQL replication?

Answer:

# Primary configuration
# /etc/postgresql/14/main/postgresql.conf
wal_level = replica
max_wal_senders = 3
max_replication_slots = 3
wal_keep_size = 1GB

# /etc/postgresql/14/main/pg_hba.conf
host replication replicator 192.168.1.0/24 md5

# Create replication user
psql -U postgres
CREATE USER replicator REPLICATION LOGIN PASSWORD 'password';

# Base backup for the replica
pg_basebackup -h master -D /var/lib/postgresql/14/main -U replicator -P -X stream

# Replica configuration
# /etc/postgresql/14/main/postgresql.conf
hot_standby = on

# Since PostgreSQL 12 there is no recovery.conf: create an empty
# standby.signal file in the data directory and put the connection
# string in postgresql.conf
touch /var/lib/postgresql/14/main/standby.signal
primary_conninfo = 'host=master port=5432 user=replicator password=password'
Q1260: How do you configure MySQL group replication?

Answer:

# MySQL 8.0 Group Replication
[mysqld]
server-id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_slave_updates=ON
relay_log=relay-bin
binlog_format=ROW
transaction_write_set_extraction=XXHASH64

# Group Replication plugin
plugin_load_add='group_replication.so'
group_replication_group_name="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
group_replication_start_on_boot=OFF
group_replication_local_address="127.0.0.1:33061"
group_replication_group_seeds="127.0.0.1:33061,127.0.0.1:33062,127.0.0.1:33063"
group_replication_bootstrap_group=OFF   # set ON only while bootstrapping the first member

# Bootstrap the group on the first node, then start replication
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
Linux Security Advanced

Q1261: How do you implement AppArmor?

Answer:

# Install
apt install apparmor apparmor-utils

# Status
aa-status
apparmor_status

# Disable/enable profiles
aa-disable /usr/sbin/named
aa-enable /usr/sbin/named

# Create a profile
aa-genprof /usr/bin/myapp

# Edit the profile
vim /etc/apparmor.d/usr.bin.myapp

# Example profile
#include <tunables/global>

/usr/bin/myapp {
  #include <abstractions/base>
  #include <abstractions/bash>

  /etc/myapp/** r,
  /var/log/myapp/* rw,
  /run/myapp.sock rw,

  # Deny
  deny /etc/shadow r,
}

# Reload
apparmor_parser -r /etc/apparmor.d/usr.bin.myapp

# Switch between complain and enforce mode
aa-complain /usr/bin/myapp
aa-enforce /usr/bin/myapp
Q1262: How do you configure fail2ban?

Answer:

# Install
apt install fail2ban

# Configure
# /etc/fail2ban/jail.local
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5
destemail = admin@example.com
sender = fail2ban@example.com
action = %(action_mwl)s

[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3

[nginx-http-auth]
enabled = true
filter = nginx-http-auth
port = http,https
logpath = /var/log/nginx/error.log

# Custom filter
# /etc/fail2ban/filter.d/myapp.conf
[Definition]
failregex = <HOST> - .* "GET /admin
ignoreregex =

# Test a filter against a log
fail2ban-regex /var/log/nginx/error.log /etc/fail2ban/filter.d/nginx-http-auth.conf

# Commands
fail2ban-client status
fail2ban-client set sshd unbanip 192.168.1.100
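What fail2ban-regex does is, at heart, counting log lines that match failregex. A miniature sketch of the same idea against the custom /admin pattern above (the log lines are made up):

```shell
# Synthetic access-log sample; 203.0.113.0/24 is a documentation range.
log='203.0.113.9 - - "GET /admin HTTP/1.1" 401
198.51.100.7 - - "GET /index.html HTTP/1.1" 200
203.0.113.9 - - "GET /admin/login HTTP/1.1" 401'
# Count lines the failregex would flag.
hits=$(echo "$log" | grep -c '"GET /admin')
echo "$hits"
```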
Q1263: How do you implement two-factor authentication?

Answer:

# Install Google Authenticator PAM module
apt install libpam-google-authenticator

# Configure PAM
# /etc/pam.d/sshd
# Add before @include common-auth
auth required pam_google_authenticator.so

# Configure SSH
# /etc/ssh/sshd_config
ChallengeResponseAuthentication yes
AuthenticationMethods password,keyboard-interactive

# For each user
su - username
google-authenticator

# TOTP configuration
# Secrets live in the user's ~/.google_authenticator
# Store the secret securely

# Test
ssh username@server
# Enter password
# Enter 6-digit code
Q1264: How do you secure the boot process?

Answer:

# Enable UEFI Secure Boot and sign the kernel
# 1. Generate keys
openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -nodes -days 36500 -subj "/CN=My Secure Boot/"

# 2. Sign the kernel (sbsign expects a PEM certificate, so convert the DER copy)
openssl x509 -inform DER -in MOK.der -out MOK.pem
sbsign --key MOK.priv --cert MOK.pem /boot/vmlinuz-$(uname -r) --output /boot/vmlinuz-$(uname -r)

# 3. Enroll the key
mokutil --import MOK.der

# GRUB password protection
# /etc/grub.d/40_custom
set superusers="admin"
password_pbkdf2 admin grub.pbkdf2.sha512.10000.salt.hash

# Regenerate the GRUB config
update-grub

# Disable USB storage
# BIOS settings, or:
# /etc/modprobe.d/blacklist.conf
install usb-storage /bin/true
Q1265: How do you implement file integrity monitoring?

Answer:

# Install AIDE
apt install aide

# Configure
# /etc/aide/aide.conf
# Database
database=file:/var/lib/aide/aide.db
database_out=file:/var/lib/aide/aide.db.new

# Rules
Fip = p+i+n+u+g+s+m+c+md5+sha256
Lnx = p+u+g+i+n+S

# Directories
/etc Fip
/bin Lnx
/sbin Lnx
/usr Lnx

# Initialize
aideinit

# Check
aide --check
aide --update

# Cron job
# /etc/cron.d/aide
0 5 * * * root /usr/bin/aide --check | mail -s "AIDE Report" admin@example.com
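The core of what AIDE does can be sketched in a few lines: record a hash baseline, then re-verify it later. A self-contained toy version using throwaway temp files:

```shell
f=$(mktemp)
db=$(mktemp)
echo "original content" > "$f"
sha256sum "$f" > "$db"     # record the baseline ("database")
echo "tampered" > "$f"     # simulate an unauthorized change
# Re-check the file against the baseline.
if sha256sum -c "$db" >/dev/null 2>&1; then
  verdict="clean"
else
  verdict="integrity violation detected"
fi
echo "$verdict"
rm -f "$f" "$db"
```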
Linux Cloud Native

Q1266: How do you configure container orchestration?

Answer:

# Kubernetes with kubeadm
kubeadm init --pod-network-cidr=10.244.0.0/16

# Install CNI (Flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Allow pods on the control-plane node
kubectl taint nodes --all node-role.kubernetes.io/master-

# Deploy application
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 8080

# Apply
kubectl apply -f deployment.yaml

# Scale
kubectl scale deployment myapp --replicas=5
Q1267: How do you configure Docker Swarm?

Answer:

# Initialize swarm
docker swarm init --advertise-addr 192.168.1.10

# Join nodes
docker swarm join --token SWMTKN-1-xxx 192.168.1.10:2377

# Create service
docker service create \
  --name myapp \
  --replicas 3 \
  --publish 8080:80 \
  myapp:latest

# Scale service
docker service scale myapp=5

# Update service
docker service update \
  --image myapp:v2 \
  myapp

# Stack deploy
docker stack deploy -c docker-compose.yml myapp

# Visualize
docker node ls
docker service ls
docker service ps myapp
Q1268: How do you configure Helm charts?

Answer:

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add repo
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Create chart
helm create mychart

# Chart structure
# mychart/
#   Chart.yaml
#   values.yaml
#   templates/
#     deployment.yaml
#     service.yaml

# Install
helm install myrelease mychart
helm install myrelease mychart --set replicaCount=3

# Upgrade
helm upgrade myrelease mychart

# Rollback
helm rollback myrelease 1

# Values override
helm install myrelease mychart -f values-prod.yaml
Q1269: How do you configure service mesh?

Answer:

# Install Istio
curl -L https://istio.io/downloadIstio | sh -
istioctl install --set profile=demo

# Enable sidecar injection
kubectl label namespace default istio-injection=enabled

# Deploy application
kubectl apply -f app.yaml

# Configure traffic
# VirtualService
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 80
    - destination:
        host: myapp
        subset: v2
      weight: 20

# Monitor
istioctl dashboard kiali
istioctl dashboard prometheus
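The 80/20 weights in the VirtualService above are proportional shares of traffic. A quick arithmetic sanity check of the expected split over a batch of requests (pure arithmetic, no Istio involved; the request count is illustrative):

```shell
total=1000   # hypothetical number of requests
w1=80; w2=20
v1=$((total * w1 / (w1 + w2)))   # expected requests to subset v1
v2=$((total * w2 / (w1 + w2)))   # expected requests to subset v2
echo "v1=$v1 v2=$v2"
```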
Q1270: How do you configure GitOps with ArgoCD?

Answer:

# Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Get the initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# Access UI
kubectl port-forward svc/argocd-server -n argocd 8080:443

# Create application
# application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/repo.git
    targetRevision: HEAD
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

kubectl apply -f application.yaml
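The password retrieval above works because Kubernetes Secret data is base64-encoded; the jsonpath extracts the encoded string and base64 -d decodes it. The decode step in isolation, with a made-up value:

```shell
# Encode a sample value the way a Secret stores it, then decode it back.
encoded=$(printf 'hunter2' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```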
Linux Storage Advanced

Q1271: How do you configure distributed storage?

Answer:

# GlusterFS installation
apt install glusterfs-server

# Add peers to the trusted pool
gluster peer probe server2
gluster peer probe server3
gluster peer status

# Create volume
gluster volume create gv0 \
  replica 3 \
  server1:/brick1/server \
  server2:/brick1/server \
  server3:/brick1/server

# Start volume
gluster volume start gv0

# Mount
mount -t glusterfs server1:/gv0 /mnt/glusterfs

# Volume options
gluster volume set gv0 performance.cache-size 256MB
gluster volume set gv0 network.ping-timeout 10

# Rebalance
gluster volume rebalance gv0 start
Q1272: How do you configure multipath I/O?

Answer:

# Install
apt install multipath-tools

# Configure
# /etc/multipath.conf
defaults {
    user_friendly_names yes
    find_multipaths yes
}

devices {
    device {
        vendor "Dell"
        product "MD36*"
        path_grouping_policy multibus
        path_checker tur
        features "0"
        hardware_handler "0"
    }
}

# Start multipathd
systemctl start multipathd

# Commands
multipath -ll
multipath -v2
multipath -F

# Add a path
multipath /dev/sda

# Get WWID
multipath -l /dev/sda
Q1273: How do you configure encrypted storage?

Answer:

# LUKS encryption
cryptsetup luksFormat /dev/sdb1

# Open container
cryptsetup luksOpen /dev/sdb1 encrypted_volume

# Create filesystem
mkfs.xfs /dev/mapper/encrypted_volume

# Mount
mount /dev/mapper/encrypted_volume /mnt/data

# Add a key slot
cryptsetup luksAddKey /dev/sdb1

# Back up the header
cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file header.img

# Auto unlock
# /etc/crypttab
encrypted_volume /dev/sdb1 none luks

# /etc/fstab
/dev/mapper/encrypted_volume /mnt/data xfs defaults 0 2
Q1274: How do you configure object storage?

Answer:

# Install MinIO
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio

# Start MinIO
export MINIO_ROOT_USER=minioadmin
export MINIO_ROOT_PASSWORD=minioadmin
./minio server /data --console-address ":9001"

# Using mc (MinIO Client)
mc alias set myminio http://localhost:9000 minioadmin minioadmin

# Create bucket
mc mb myminio/mybucket

# Set policy
mc anonymous set download myminio/mybucket

# Replication (mirror one bucket into another)
mc mirror myminio/mybucket myminio/mybucket-archive

# Use with S3 SDKs
# AWS CLI
aws configure
aws s3 ls s3://mybucket/
Q1275: How do you configure snapshot and backup?

Answer:

# LVM snapshots
lvcreate -L 10G -s -n snap_data /dev/vg_data/lv_data

# Mount snapshot (nouuid is needed for XFS)
mount -o ro,nouuid /dev/vg_data/snap_data /mnt/snap

# Remove snapshot
lvremove /dev/vg_data/snap_data

# Btrfs snapshots
btrfs subvolume snapshot /data /data/snap-$(date +%Y%m%d)

# ZFS snapshots
zfs snapshot pool/data@snap-$(date +%Y%m%d)
zfs list -t snapshot

# Send/receive
btrfs send /data/snap1 | btrfs receive /backup/
zfs send pool/data@snap1 | zfs receive backup/pool/data

# Incremental
btrfs send -p /data/snap1 /data/snap2 | btrfs receive /backup/
zfs send -i pool/data@snap1 pool/data@snap2 | zfs receive backup/pool/data
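The $(date +%Y%m%d) pattern used in the snapshot commands above produces sortable, collision-free daily names. A fixed-date illustration of the naming (assumes GNU date for the -d option):

```shell
# Pin the date so the result is deterministic; real use omits -d.
snap="snap-$(date -u -d '2025-01-01' +%Y%m%d)"
echo "$snap"
```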
Linux Performance Advanced

Q1276: How do you use BPF for performance?

Answer:

# Install bpftrace
apt install bpftrace

# List probes
bpftrace -l

# Trace file opens
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'

# Trace TCP connect
bpftrace -e 'tracepoint:syscalls:sys_enter_connect { printf("Connect: %s\n", comm); }'

# Custom program
# /root/bpf/hello.bt
#!/usr/bin/bpftrace
BEGIN
{
    printf("Tracing... Hit Ctrl-C to end.\n");
}

tracepoint:syscalls:sys_enter_read
{
    @reads[comm] = count();
}

# Using perf
perf record -g ./program
perf report

# Using BCC tools
# /usr/share/bcc/tools/
/usr/share/bcc/tools/ext4slower 1
/usr/share/bcc/tools/tcpconnect
Q1277: How do you profile applications?

Answer:

# Using gprof
gcc -pg -g program.c -o program
./program
gprof program gmon.out > analysis.txt

# Using valgrind
valgrind --tool=callgrind ./program
# View with kcachegrind
kcachegrind callgrind.out.*

# Using perf
perf record -g ./program
perf report
perf annotate

# Using strace
strace -c ./program
strace -T -tt -e trace=write ./program

# Using flamegraphs
perf record -F 99 -g ./program
perf script | stackcollapse-perf.pl > out.folded
flamegraph.pl out.folded > flamegraph.svg
Q1278: How do you tune database performance?

Answer:

# PostgreSQL tuning
# postgresql.conf
shared_buffers = 256MB
effective_cache_size = 768MB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 4MB
min_wal_size = 1GB
max_wal_size = 4GB

# MySQL tuning
# my.cnf
innodb_buffer_pool_size = 1G
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
key_buffer_size = 256M
query_cache_size = 0
tmp_table_size = 64M
max_connections = 200

# Analyze queries
EXPLAIN ANALYZE SELECT * FROM table WHERE condition;
SHOW PROCESSLIST;
Q1279: How do you optimize web server performance?

Answer:

# Nginx optimization
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}

http {
    # Buffer sizes
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 32k;

    # Caching
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Gzip
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript;
}
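The theoretical connection ceiling implied by the settings above is worker_processes times worker_connections (and each proxied request consumes a client connection plus an upstream one). Quick arithmetic with an illustrative worker count:

```shell
workers=4       # hypothetical: worker_processes auto on a 4-core box
conns=65535     # worker_connections from above
max_clients=$((workers * conns))
echo "$max_clients"
```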
Q1280: How do you implement caching strategies?

Answer:

# Varnish VCL
vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Don't cache writes
    if (req.method == "POST" || req.method == "PUT") {
        return (pass);
    }

    # Strip cookies for static assets
    if (req.url ~ "\.(css|js|jpg|png|gif|ico|svg)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    if (beresp.http.Set-Cookie) {
        # hit-for-miss; a bare "pass" is not valid in this subroutine
        return (pass(120s));
    }

    # Cache static assets
    if (bereq.url ~ "\.(css|js|jpg|png)$") {
        set beresp.ttl = 24h;
    }
}

# Redis caching
# /etc/redis/redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru
save ""
appendonly yes
Linux Automation Advanced

Q1281: How do you use Packer for images?

Answer:

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0c55b159cbfafe1f0",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "myapp-{{timestamp}}",
    "run_tags": { "Name": "packer-builder" }
  }],
  "provisioners": [{
    "type": "shell",
    "execute_command": "sudo {{.Path}}",
    "script": "provision.sh"
  }, {
    "type": "ansible",
    "playbook_file": "playbook.yml"
  }],
  "post-processors": [{
    "type": "vagrant",
    "keep_input_artifact": true
  }]
}
Q1282: How do you use Vagrant with provisioning?

Answer:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  # Shell provisioner
  config.vm.provision "shell", inline: "apt-get update"

  # File provisioner
  config.vm.provision "file", source: "config/", destination: "/tmp/config"

  # Ansible provisioner
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.extra_vars = { deploy_user: "vagrant" }
  end

  # Docker provisioner
  config.vm.provision "docker" do |d|
    d.pull_images "ubuntu:22.04"
    d.run "nginx", args: "-p 80:80"
  end

  # Puppet provisioner
  config.vm.provision "puppet" do |p|
    p.manifest_file = "site.pp"
    p.module_path = "modules"
  end
end
Q1283: How do you create custom AMI?

Answer:

# Using Packer
packer build template.json

# Manual steps
# 1. Launch instance
aws ec2 run-instances \
  --image-id ami-0c55b159cbfafe1f0 \
  --instance-type t3.micro \
  --key-name mykey

# 2. Customize
ssh -i key.pem ubuntu@instance
sudo apt update
sudo apt install nginx docker.io

# 3. Create image
aws ec2 create-image \
  --instance-id i-1234567890abcdef0 \
  --name "Custom-Nginx-$(date +%Y%m%d)" \
  --description "Custom AMI with nginx" \
  --no-reboot

# 4. Share AMI
aws ec2 modify-image-attribute \
  --image-id ami-12345678 \
  --launch-permission "Add=[{UserId=123456789012}]"
Q1284: How do you configure infrastructure testing?

Answer:

# Using InSpec
# inspec.yml
name: linux-baseline
title: Linux Baseline
version: 1.0.0

# controls/apache.rb
control 'apache-01' do
  impact 1.0
  title 'Apache should be installed'
  desc 'Apache is required for web serving'

  describe package('apache2') do
    it { should be_installed }
  end

  describe service('apache2') do
    it { should be_installed }
    it { should be_running }
  end

  describe port(80) do
    it { should be_listening }
  end
end

# Run
inspec exec profile/
inspec exec profile/ --attrs attributes.yml
inspec check profile/
Q1285: How do you implement GitOps workflow?

Answer:

# 1. Store infrastructure in Git
git init infra-repo
cd infra-repo

# 2. Directory structure
# .
# ├── applications/
# │   └── myapp/
# │       ├── deployment.yaml
# │       └── service.yaml
# └── infrastructure/
#     ├── terraform/
#     └── ansible/

# 3. Deploy with ArgoCD
kubectl apply -f application.yaml

# 4. CI pipeline
# .gitlab-ci.yml
deploy:
  stage: deploy
  script:
    - git add .
    - git commit -m "Update myapp to $CI_COMMIT_SHA"
    - git push
  only:
    - main

# 5. Drift detection
# ArgoCD will detect drift and sync automatically
argocd app sync myapp
argocd app diff myapp
Linux Network Advanced

Q1286: How do you configure network bonding modes?

Answer:

# Mode 0: Round-robin
alias bond0 bonding
options bond0 mode=0 miimon=100

# Mode 1: Active-backup
options bond0 mode=1 miimon=100 primary=eth0

# Mode 4: 802.3ad (LACP)
options bond0 mode=4 miimon=100 lacp_rate=1

# Mode 5: Balance-tlb
options bond0 mode=5 miimon=100

# Interface configuration
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=5 miimon=100"
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# Slave interfaces
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
Q1287: How do you configure VLAN tagging?

Answer:

# Create VLAN interface
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up
ip addr add 192.168.100.1/24 dev eth0.100

# Using vconfig (deprecated in favor of ip link)
vconfig add eth0 100
ifconfig eth0.100 192.168.100.1 netmask 255.255.255.0 up

# Persistent configuration
# /etc/network/interfaces (Debian)
auto eth0.100
iface eth0.100 inet static
    address 192.168.100.1
    netmask 255.255.255.0
    vlan-raw-device eth0

# RHEL
# /etc/sysconfig/network-scripts/ifcfg-eth0.100
VLAN=yes
DEVICE=eth0.100
PHYSDEV=eth0
VLAN_ID=100
IPADDR=192.168.100.1
NETMASK=255.255.255.0
Q1288: How do you configure bridging?

Answer:

# Create bridge
brctl addbr br0
ip addr add 192.168.1.1/24 dev br0
ip link set br0 up

# Add interfaces
brctl addif br0 eth0
brctl addif br0 eth1

# Persistent configuration
# /etc/network/interfaces
auto br0
iface br0 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    bridge_ports eth0 eth1
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

# Using iproute2
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth1 master br0

# View
brctl show
ip link show type bridge
Q1289: How do you configure tunnel interfaces?

Answer:

# GRE tunnel
ip tunnel add gre0 mode gre remote 203.0.113.2 local 203.0.113.1
ip link set gre0 up
ip addr add 10.0.0.1/30 dev gre0

# IPIP tunnel
ip tunnel add ipip0 mode ipip remote 203.0.113.2 local 203.0.113.1
ip link set ipip0 up
ip addr add 10.0.0.1/30 dev ipip0

# WireGuard
apt install wireguard

# wg0.conf
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32

wg-quick up wg0
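The /30 used on the point-to-point tunnels above is the smallest classic subnet that still leaves one address for each endpoint. The arithmetic (usable hosts = 2^(32 - prefix) - 2 for the network and broadcast addresses):

```shell
prefix=30
usable=$(( (1 << (32 - prefix)) - 2 ))
echo "$usable"   # one address per tunnel endpoint
```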
Q1290: How do you configure VRRP?

Answer:

# Install keepalived
apt install keepalived

# /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100

    virtual_ipaddress {
        192.168.1.100/24 dev eth0
    }

    track_interface {
        eth0 weight -20
    }

    authentication {
        auth_type AH
        auth_pass secret123
    }

    notify_backup "/usr/local/bin/backup.sh"
    notify_fault "/usr/local/bin/fault.sh"
    notify_master "/usr/local/bin/master.sh"
}

# Backup node configuration differs only in:
#   state BACKUP
#   priority 90
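The track_interface weight above drives failover arithmetically: when eth0 fails, the MASTER's effective priority drops by the weight, and if it falls below the BACKUP's priority the VIP moves. The numbers from the config:

```shell
base=100       # MASTER priority
weight=-20     # track_interface weight on eth0 failure
backup_prio=90 # BACKUP node priority
effective=$((base + weight))
if [ "$effective" -lt "$backup_prio" ]; then
  echo "failover to backup"
fi
```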
Linux Virtualization Advanced

Q1291: How do you configure KVM live migration?

Answer:

# Migrate from the source host
virsh migrate --live --persistent --undefinesource \
  vmname qemu+ssh://dest-host/system

# Basic live migration (the destination is a libvirt URI)
virsh migrate --live vmname qemu+ssh://dest-host/system

# With compression
virsh migrate --live --compressed --desturi qemu+tcp://dest-host/system vmname

# With TLS
virsh migrate --live --tls vmname qemu+tls://dest-host/system

# Configure libvirtd for migration
# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# Enable listening in /etc/default/libvirt-bin
# LIBVIRTD_ARGS="--listen"

# Verify migration
virsh dominfo vmname
virsh migrate-getspeed vmname
Q1292: How do you configure nested virtualization?

Answer:

# Enable nesting (Intel)
# Add to kernel parameters
kvm-intel.nested=1

# Check
cat /sys/module/kvm_intel/parameters/nested
# Y = enabled

# Create nested VM
virt-install \
  --name nested-vm \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/nested.qcow2 \
  --os-variant ubuntu22.04 \
  --graphics vnc \
  --cpu host-passthrough

# Verify
# Inside the nested VM
lscpu | grep Virtualization
# Should show VT-x or AMD-V

# Nested Docker
# On the nested VM
apt install docker.io
# Should work
docker run hello-world
Q1293: How do you configure PCI passthrough?

Answer:

# Enable IOMMU
GRUB_CMDLINE_LINUX="intel_iommu=on"
# or
GRUB_CMDLINE_LINUX="amd_iommu=on"

# Update GRUB
update-grub
reboot

# Verify
dmesg | grep -e DMAR -e IOMMU

# Identify the device and its vendor:device IDs
lspci -nnk -d 10de:   # NVIDIA
# e.g. 01:00.0 ... [10de:1b80]

# Bind the device to vfio-pci
# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1b80

# Detach from the host driver
virsh nodedev-detach pci_0000_01_00_0

# Add to VM
# virsh edit vmname
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
Q1294: How do you configure GPU virtualization?

Answer:

# NVIDIA vGPU
# Requires the licensed NVIDIA vGPU host driver, which ships the
# nvidia-vgpud and nvidia-vgpu-mgr services
systemctl enable --now nvidia-vgpud nvidia-vgpu-mgr

# Create a vGPU instance: write a UUID to the desired mdev type under the
# physical GPU in sysfs (<type> depends on the card and driver release)
uuidgen > /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/<type>/create

# Install the NVIDIA GRID guest driver inside the VM
./NVIDIA-Linux-x86_64-grid.run

# Check
nvidia-smi
# Should show the vGPU

# For AMD GPU passthrough
# See PCI passthrough (Q1293)
Q1295: How do you configure storage pools?

Answer:

# Create directory pool
virsh pool-define-as default dir --target /var/lib/libvirt/images
virsh pool-build default
virsh pool-start default
virsh pool-autostart default

# Create LVM pool
virsh pool-define-as vgpool logical \
  --source-name vg_libvirt \
  --target /dev/vg_libvirt

# Create NFS pool
virsh pool-define-as nfspool netfs \
  --source-format nfs \
  --source-host 192.168.1.10 \
  --source-path /share \
  --target /mnt/nfs

# View pools
virsh pool-list
virsh pool-info default

# Create volumes (vol-create-as takes name and capacity directly;
# plain vol-create expects an XML file instead)
virsh vol-create-as default vm1.qcow2 10G --allocation 10G --format qcow2
virsh vol-create-as default vm2.qcow2 20G
Linux Monitoring Advanced

Q1296: How do you configure custom metrics?

Answer:

# node_exporter does not accept pushed metrics; expose custom values
# through its textfile collector. A looping collector writes atomically
# (write to a temp file, then mv) so scrapes never see a partial file:
#!/bin/bash
while true; do
  {
    echo "custom_app_heartbeat_seconds $(date +%s)"
    echo "custom_app_active_connections $(netstat -an | grep :8080 | wc -l)"
  } > /var/lib/node_exporter/textfile_collector/custom.prom.$$
  mv /var/lib/node_exporter/textfile_collector/custom.prom.$$ \
     /var/lib/node_exporter/textfile_collector/custom.prom
  sleep 10
done

# Textfile collector script run from cron
cat > /usr/local/bin/app_metrics.sh << 'EOF'
#!/bin/bash
METRICS_DIR="/var/lib/node_exporter/textfile_collector"

# Application metrics
APP_REQUESTS=$(curl -s localhost:8080/metrics | grep requests | awk '{print $2}')
APP_LATENCY=$(curl -s localhost:8080/metrics | grep latency | awk '{print $2}')

cat > ${METRICS_DIR}/app.prom << EOM
app_requests_total ${APP_REQUESTS}
app_latency_seconds ${APP_LATENCY}
EOM
EOF

chmod +x /usr/local/bin/app_metrics.sh
crontab -e
# * * * * * /usr/local/bin/app_metrics.sh
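The textfile collector expects plain "metric_name value" lines. A quick format check with hypothetical values, no node_exporter required:

```shell
# Render two sample metrics the way app.prom above would contain them.
metrics=$(printf 'app_requests_total %s\napp_latency_seconds %s\n' 1024 0.037)
echo "$metrics"
```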
Q1297: How do you configure alerting?

Answer:

# Prometheus alerting rules
groups:
  - name: linux_alerts
    interval: 30s
    rules:
      - alert: HighCPUUsage
        expr: 100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"

      - alert: HighMemoryUsage
        expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100 > 90
        for: 5m
        labels:
          severity: critical

      - alert: DiskSpaceLow
        expr: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) < 0.1
        for: 10m
        labels:
          severity: warning
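The HighMemoryUsage expression is plain percentage arithmetic over two gauges. The same computation with illustrative numbers (16 GiB total, 1 GiB available), which would fire the >90 threshold:

```shell
total_mib=$((16 * 1024))   # node_memory_MemTotal_bytes, in MiB for readability
avail_mib=1024             # node_memory_MemAvailable_bytes
used_pct=$(( (total_mib - avail_mib) * 100 / total_mib ))
echo "$used_pct"
```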
Q1298: How do you configure distributed tracing?

Answer:

# Install Jaeger
docker run -d --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 6831:6831/udp \
  -p 16686:16686 \
  jaegertracing/all-in-one:1.35

# Using OpenTelemetry
# Python example
from opentelemetry import trace
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

trace.set_tracer_provider(TracerProvider())
jaeger_exporter = JaegerExporter(
    agent_host_name="localhost",
    agent_port=6831,
)
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(jaeger_exporter)
)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("hello-span"):
    print("Hello, World!")
Answer:
# Filebeat configurationfilebeat.inputs: - type: log paths: - /var/log/syslog - /var/log/auth.log fields: type: syslog fields_under_root: true
- type: log paths: - /var/log/nginx/*.log fields: type: nginx fields_under_root: true
output.logstash: hosts: ["logstash:5044"]
processors: - add_host_metadata: fields_under_root: true - add_docker_metadata: ~
# Logstash pipeline# /etc/logstash/conf.d/01-input.confinput { beats { port => 5044 }}
# /etc/logstash/conf.d/02-filter.conffilter { if [type] == "syslog" { grok { match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:message}" } } date { match => [ "timestamp", "MMM dd HH:mm:ss" ] } }}Q1300: How do you configure Grafana dashboards?
Answer:
{ "dashboard": { "title": "System Overview", "panels": [ { "id": 1, "title": "CPU Usage", "type": "graph", "gridPos": {"x": 0, "y": 0, "w": 12, "h": 8}, "targets": [ { "expr": "100 - (avg by (mode) (irate(node_cpu_seconds_total{mode='idle'}[5m])) * 100)", "legendFormat": "{{mode}}" } ] }, { "id": 2, "title": "Memory Usage", "type": "graph", "gridPos": {"x": 12, "y": 0, "w": 12, "h": 8}, "targets": [ { "expr": "node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes", "legendFormat": "Used" }, { "expr": "node_memory_MemAvailable_bytes", "legendFormat": "Available" } ] }, { "id": 3, "title": "Network Traffic", "type": "graph", "gridPos": {"x": 0, "y": 8, "w": 12, "h": 8}, "targets": [ { "expr": "irate(node_network_receive_bytes_total{device!='lo'}[5m])", "legendFormat": "{{device}} RX" }, { "expr": "irate(node_network_transmit_bytes_total{device!='lo'}[5m])", "legendFormat": "{{device}} TX" } ] } ] }}Linux Services Troubleshooting
Q1301: How do you debug network issues?
Answer:
# Check interface status
ip link show
ip addr show
ethtool eth0
mii-tool eth0
# Check routingip route showip route get 8.8.8.8ip neighbor show
# Check DNSdig +short example.comgetent hosts example.comcat /etc/resolv.conf
# Connectivity testsping -c 4 8.8.8.8traceroute -I 8.8.8.8mtr -n 8.8.8.8
# Port testsnc -zv host porttelnet host portss -tulpn | grep :port
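The nc/telnet port tests above are easy to script; a minimal Python sketch (host and port are placeholders):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, like nc -zv."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Useful in monitoring scripts where shelling out to nc is awkward; it only checks TCP reachability, not that the service behind the port is healthy.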
# Packet capturetcpdump -i eth0 host 192.168.1.1tcpdump -i eth0 port 80tcpdump -i eth0 -w capture.pcapQ1302: How do you debug service failures?
Answer:
# Service statussystemctl status service-namesystemctl list-units --failed
# Service logsjournalctl -u service-name -n 50journalctl -u service-name --since "1 hour ago"journalctl -xe
# Process infops auxf | grep service-namelsof -p $(pgrep -f service-name)
# Configuration testservice-name -tnginx -tapache2ctl configtest
# Dependenciessystemctl list-dependencies service-name
# Resource limitscat /proc/$(pgrep -f service-name)/limits
# Stracestrace -f -p $(pgrep -f service-name)strace -c service-name
# Cgroupssystemd-cgls | grep service-namesystemctl show service-nameQ1303: How do you debug performance issues?
Answer:
# CPU analysistop -chtopmpstat -P ALL 1pidstat -p <pid> 1
# Memory analysisfree -hvmstat 1pmap -x <pid>cat /proc/<pid>/status
# I/O analysisiostat -xz 1iotoppidstat -d 1
# Network analysisnethogsiftopss -s
# System resourcessar -A 1 5atop
# Process analysisperf topflamegraph.pl < perf.data
# Application profilingpython -m cProfile -o profile.out app.pypyprof2calltree -i profile.outQ1304: How do you debug disk issues?
Answer:
# Disk usagedf -hdf -idu -sh /*
# Find large filesfind / -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -h
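The find one-liner above has a Python analogue when you need the results in a program; a sketch using os.walk (the 100 MB threshold is a parameter, and unreadable files are skipped, mirroring 2>/dev/null):

```python
import os

def find_large_files(root: str, min_bytes: int = 100 * 1024 * 1024):
    """Yield (path, size) for regular files under root of at least min_bytes."""
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable; skip like 2>/dev/null
            if size >= min_bytes:
                yield path, size
```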
# I/O statsiostat -xz 1sar -d 1
# Check filesystemfsck -n /dev/sda1xfs_repair -n /dev/sda1
# Check SMARTsmartctl -a /dev/sdasmartctl -H /dev/sda
# Mount optionsmount | grep sda1cat /proc/mounts
# Lsof for deleted fileslsof +L1ls -l /proc/*/fd/* | grep deletedQ1305: How do you debug authentication issues?
Answer:
# Check SSH logstail -f /var/log/auth.logjournalctl -u sshd -f
# Check PAMtail -f /var/log/syslog | grep -i pam
# Test authentication# SSH with debugssh -vvv user@host
# Test PAMpamtester login username authenticate
# Check sudosudo -ltail -f /var/log/auth.log | grep sudo
# LDAP issuesldapsearch -x -D "cn=admin,dc=example,dc=com" -Wgetent passwd usernameid username
# Kerberoskinit -f user@REALMklistLinux Advanced Networking
Q1306: How do you configure IP masquerading?
Answer:
# Enable IP forwardingsysctl -w net.ipv4.ip_forward=1
# Add to /etc/sysctl.confnet.ipv4.ip_forward=1
# NAT with iptables# Outbound NATiptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# Or specific IPiptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.10
# Inbound port forwardingiptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:8080
# Save rulesiptables-save > /etc/iptables/rules.v4# orservice iptables saveQ1307: How do you configure packet filtering?
Answer:
# Basic filter rules# Allow loopbackiptables -A INPUT -i lo -j ACCEPT
# Allow establishediptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow SSHiptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Allow HTTP/HTTPSiptables -A INPUT -p tcp --dport 80 -j ACCEPTiptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Drop everything elseiptables -A INPUT -j DROP
# Forward rulesiptables -A FORWARD -i eth0 -o eth1 -j ACCEPTiptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Rate limitingiptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --setiptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROPQ1308: How do you configure DNS over TLS?
Answer:
# Install stubbyapt install stubby
# Configure# /etc/stubby/stubby.ymlresolution_type: GETDNS_RESOLUTION_TYPE_STUBlisten_addresses: - 127.0.0.1@53upstream_recursive_servers: - address_data: 1.1.1.1 tls_auth_name: "cloudflare-dns.com" - address_data: 8.8.8.8 tls_auth_name: "dns.google"
# Start and enable stubby
systemctl start stubby
systemctl enable stubby
# Configure systemd-resolved# /etc/systemd/resolved.conf[Resolve]DNS=127.0.0.1DNSOverTLS=yes
# Testdig @127.0.0.1 example.com +tlsQ1309: How do you configure VPN?
Answer:
# OpenVPNapt install openvpn easy-rsa
# Generate keyscd /etc/openvpn/easy-rsa./easyrsa init-pki./easyrsa build-ca./easyrsa build-server-full server nopass./easyrsa build-client-full client1 nopass
# Server config# /etc/openvpn/server.confport 1194proto udpdev tunca ca.crtcert server.crtkey server.keydh dh.pemserver 10.8.0.0 255.255.255.0push "redirect-gateway def1 bypass-dhcp"push "dhcp-option DNS 8.8.8.8"keepalive 10 120cipher AES-256-GCMauth SHA256persist-keypersist-tunstatus openvpn-status.log
# Client configclientdev tunproto udpremote vpn.example.com 1194resolv-retry infinitenobindpersist-keypersist-tunremote-cert-tls servercipher AES-256-GCMauth SHA256Q1310: How do you configure policy routing?
Answer:
# Additional routing tablesecho "100 fast" >> /etc/iproute2/rt_tablesecho "200 slow" >> /etc/iproute2/rt_tables
# Add routes to tablesip route add 192.168.1.0/24 dev eth1 src 192.168.2.1 table fastip route add default via 192.168.1.1 table fast
# Policy routing rulesip rule add from 192.168.2.10 table fastip rule add to 192.168.1.0/24 table fastip rule add fwmark 1 table slow
# Mark packetsiptables -A PREROUTING -s 192.168.2.10 -j MARK --set-mark 1iptables -A OUTPUT -s 192.168.2.10 -j MARK --set-mark 1
# Persistent# /etc/network/interfacespost-up ip rule add from 192.168.2.10 table fastpost-down ip rule del from 192.168.2.10 table fastLinux High Availability Advanced
Q1311: How do you configure fencing?
Answer:
# Install fence-agentsyum install fence-agents-all
# Configure STONITH# /etc/cluster/cluster.conf<cluster name="mycluster" config_version="1"> <fence_daemon post_fail_delay="0" post_join_delay="3"/> <fence_method name="method" device="name"> <fence_level level="1" id="level1"> <device name="device" action="on" port="node1"/> </fence_level> </fence_method> <nodes> <node name="node1" nodeid="1" fences="method"/> <node name="node2" nodeid="2" fences="method"/> </nodes></cluster>
# IPMI fencestonith -a ipmi -t "ipmilan=user:pass@node1" -v
# Test fencepcs stonith fence node1pcs stonith history node1Q1312: How do you configure resource constraints?
Answer:
# Colocation constraintpcs constraint colocation add WebService VirtualIP INFINITY
# Order constraintpcs constraint order VirtualIP then WebService
# Location constraint (preference)pcs constraint location WebService prefers node1=50
# Location constraint (avoid)pcs constraint location WebService avoids node2
# Resource stickinesspcs resource meta WebService resource-stickiness=100
# Migration thresholdpcs resource meta WebService migration-threshold=3
# Prioritypcs resource priority WebService=10
# Utilization
pcs node utilization node1 cpu=4 memory=16
pcs resource utilization WebService cpu=2 memory=4
Q1313: How do you configure load balancing?
Answer:
# LVS (Linux Virtual Server)
# Install
apt install ipvsadm

# Configure
ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10:80 -m
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -m

# View
ipvsadm -l -n
ipvsadm -l -n --stats

# Persistence (pin clients to a real server for 300s)
ipvsadm -A -t 192.168.1.100:80 -s rr -p 300

# Weight adjustment
ipvsadm -e -t 192.168.1.100:80 -r 192.168.1.10:80 -m -w 2
Q1314: How do you configure health checking?
Answer:
# HAProxy health check
# /etc/haproxy/haproxy.cfg
backend servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    server s1 192.168.1.10:8080 check inter 2000 rise 2 fall 3
    server s2 192.168.1.11:8080 check inter 2000 rise 2 fall 3 slowstart 10s
# Custom health check script# healthcheck.sh#!/bin/bashcurl -sf http://localhost:8080/health || exit 1
# Nginx upstream with a backup server (passive health checks)
# nginx.conf
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080 backup;
}
# Keepalived health checkvrrp_script check_service { script "/usr/local/bin/healthcheck.sh" interval 5 weight -10}Q1315: How do you implement split-brain prevention?
Answer:
# Quorum configurationquorum { provider: corosync_votequorum expected_votes: 2 two_node: 1}
# Ping node for quorumquorum { provider: corosync_votequorum expected_votes: 3 wait_for_all: 1}
# Tie-breakertotem { interface { ringnumber: 0 transport: udpu }}
nodelist { node { ring0_addr: node1 nodeid: 1 } node { ring0_addr: node2 nodeid: 2 } node { ring0_addr: node3 nodeid: 3 }}
# Split-brain recovery script# /usr/local/bin/split-brain-recovery.shpcs cluster stop node1pcs cluster start node2Linux Advanced Scripting
Q1316: How do you write efficient scripts?
Answer:
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

# Use arrays for loops
files=( /var/log/*.log )
for file in "${files[@]}"; do
  [[ -f "$file" ]] || continue
  process "$file"
done

# Use functions with local variables
process() {
  local file="$1"
  local content
  content=$(<"$file")
  echo "${content:0:100}"
}

# Avoid subshells in loops (process substitution keeps the counter in the current shell)
while IFS= read -r line; do
  ((count++))
done < <(grep -r "pattern" .)

# Use coprocesses
coproc BC { bc -l; }
echo "scale=10; 355/113" >&"${BC[1]}"
read -r pi <&"${BC[0]}"

# Use process substitution
diff <(sort file1) <(sort file2)

# Parallel processing
parallel -j 4 process {} ::: *.log
Q1317: How do you parse JSON in bash?
Answer:
# Using jq# Parse JSONcat data.json | jq '.name'cat data.json | jq '.items[].value'
# Filtercat data.json | jq '.items[] | select(.id > 5)'
# Transformcat data.json | jq '{name: .name, computed: .value * 2}'
# Arrayscat data.json | jq '.items | length'cat data.json | jq '.items | map(.name)'
# Create JSONjq -n '{name: "test", items: [1,2,3]}'
# Modifyjq '.name = "new"' data.jsonjq '.items += ["new"]' data.json
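For anything beyond one-off shell filters, the same jq operations map onto Python's json module; a sketch mirroring the filters above (the sample document is invented for illustration):

```python
import json

data = json.loads('{"name": "test", "items": [{"id": 3, "value": 1}, {"id": 7, "value": 2}]}')

# jq '.name'
name = data["name"]

# jq '.items[] | select(.id > 5)'
selected = [item for item in data["items"] if item["id"] > 5]

# jq '.items | length' and jq '.items | map(.value)'
count = len(data["items"])
values = [item["value"] for item in data["items"]]

# jq '.name = "new"', then re-serialize
data["name"] = "new"
output = json.dumps(data, indent=2)
```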
# Input from variabledata='{"name":"test"}'echo "$data" | jq '.name'Q1318: How do you write Python scripts?
Answer:
#!/usr/bin/env python3
import sys
import json
import subprocess
import logging

logging.basicConfig(level=logging.INFO)

def run_command(cmd):
    result = subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    )
    return result.stdout.strip()

def parse_json_file(filepath):
    with open(filepath) as f:
        return json.load(f)

def main():
    # Get system info
    cpu = run_command("nproc")
    mem = run_command("free -h | awk '/^Mem:/ {print $2}'")

    # Process data
    data = parse_json_file("config.json")

    # Output
    result = {
        "cpu_cores": cpu,
        "memory": mem,
        "config": data,
    }
    print(json.dumps(result, indent=2))
    return 0

if __name__ == "__main__":
    sys.exit(main())
Q1319: How do you use regex in scripts?
Answer:
# Extract IP addressesgrep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' file.txt
# Validate emailemail="user@example.com"if [[ "$email" =~ ^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$ ]]; then echo "Valid"fi
# Extract datesgrep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}' file.txt
# Replace with sedsed -E 's/([0-9]{4})([0-9]{2})([0-9]{2})/\1-\2-\3/g' file.txt
# Using awkawk '/pattern/ {print $1, $2}' file.txtawk 'match($0, /[0-9]+/) {print substr($0, RSTART, RLENGTH)}' file.txt
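The same extractions carry over to Python's re module; a sketch reusing the patterns above (the sample text is invented, and note the IP pattern is deliberately loose — like the grep version, it does not reject octets above 255):

```python
import re

text = "host 192.168.1.10 contacted 10.0.0.256 on 2024-01-15; reply from 8.8.8.8"

# Candidate IPv4 addresses (same loose pattern as the grep above)
ips = re.findall(r"\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b", text)

# ISO dates
dates = re.findall(r"[0-9]{4}-[0-9]{2}-[0-9]{2}", text)

# Email validation, same pattern as the bash [[ =~ ]] test
email_re = re.compile(r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$")
valid = bool(email_re.match("user@example.com"))
```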
# Complex parsingawk 'BEGIN { FPAT = "([^,]+)|(\"[^\"]+\")" }{ for (i=1; i<=NF; i++) { gsub(/^"|"$/, "", $i) print i ": " $i }}' data.csvQ1320: How do you handle errors in scripts?
Answer:
#!/bin/bash
set -euo pipefail

# Exit handler
cleanup() {
  local exit_code=$?
  if [[ $exit_code -ne 0 ]]; then
    echo "Script failed with exit code $exit_code" >&2
  fi
}
trap cleanup EXIT

# Error function
error() {
  echo "ERROR: $*" >&2
  return 1
}

# Validate input
validate_input() {
  local file="$1"
  [[ -f "$file" ]] || error "File not found: $file"
  [[ -r "$file" ]] || error "File not readable: $file"
}

# Retry logic
retry() {
  local max_attempts=3
  local delay=5
  local attempt=1

  while [[ $attempt -le $max_attempts ]]; do
    if "$@"; then
      return 0
    fi
    echo "Attempt $attempt failed, retrying in $delay seconds..."
    sleep "$delay"
    ((attempt++))
  done
  error "Failed after $max_attempts attempts"
}
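The same retry idea can be sketched in Python, with exponential backoff added; names and defaults here are illustrative, not a library API:

```python
import time

def retry(func, max_attempts=3, delay=0.5, backoff=2.0, exceptions=(Exception,)):
    """Call func until it succeeds; sleep delay * backoff**(n-1) between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except exceptions:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay * backoff ** (attempt - 1))
```

Passing the exception tuple explicitly keeps the retry narrow: retry only on the transient failures you expect (e.g. a connection error), and let everything else propagate immediately.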
# Test
validate_input "/etc/passwd" || exit 1
retry curl -sf http://example.com
Linux Compliance Advanced
Q1321: How do you implement audit logging?
Answer:
# Configure auditd# Watch files-w /etc/passwd -p wa -k passwd_changes-w /etc/shadow -p wa -k shadow_changes-w /etc/sudoers -p wa -k sudoers_changes-w /etc/ssh/sshd_config -p wa -k sshd_config
# Watch directories-w /etc/httpd/conf -p wa -k httpd_conf-w /etc/nginx/nginx.conf -p wa -k nginx_conf
# System calls-a always,exit -F arch=b64 -S execve -F path=/usr/bin/wget -k network_download-a always,exit -F arch=b64 -S execve -F path=/usr/bin/curl -k network_download-a always,exit -F arch=b64 -S setuid -k privilege_escalation
# Commands-w /usr/bin/sudo -p x -k sudo_commands-w /usr/bin/su -p x -k su_commands
# View logsausearch -k passwd_changesaureport -faureport -uQ1322: How do you implement access control?
Answer:
# Configure PAMsession required pam_unix.sosession optional pam_mkhomedir.so skel=/etc/skel umask=077
# /etc/security/limits.conf# Resource limits* soft nofile 65535* hard nofile 65535* soft nproc 4096* hard nproc 8192
# Time restrictions# /etc/security/time.conflogin;*;user1;Al0900-1700sshd;*;user2;Al0900-1700
# Using ACLs# Installapt install acl
# Set ACLsetfacl -m u:john:r-x /var/www/htmlsetfacl -m g:developers:rx /var/www/html
# Default ACLsetfacl -d -m u:john:rx /var/www/html
# View ACLgetfacl /var/www/htmlQ1323: How do you implement network segmentation?
Answer:
# Create isolated network namespaceip netns add isolatedip netns exec isolated ip link set lo up
# Configure VLANip link add link eth0 name eth0.100 type vlan id 100ip addr add 192.168.100.1/24 dev eth0.100ip link set eth0.100 up
# iptables zones# DMZiptables -N DMZ-ZONEiptables -A DMZ-ZONE -j DROP
# Internaliptables -N INT-ZONEiptables -A INT-ZONE -j ACCEPT
# Bridge isolationip link add name br0 type bridgeip link set eth0 master br0ip link set eth1 master br0
# Prevent forwardingiptables -A FORWARD -i br0 -o br0 -j DROP
# AppArmor namespacesapparmor_parser -r /etc/apparmor.d/*Q1324: How do you implement encryption at rest?
Answer:
# LUKS encryptioncryptsetup luksFormat /dev/sdb1cryptsetup luksOpen /dev/sdb1 encryptedmkfs.xfs /dev/mapper/encryptedmount /dev/mapper/encrypted /mnt/data
# eCryptfsapt install ecryptfs-utilsmount -t ecryptfs /data /encrypted
# TPM encryptionapt install tpm-tools
# VeraCryptveracrypt --create containerveracrypt container /mnt/veracrypt
# Filesystem encryption# fscryptmkfs.ext4 -O encrypt /dev/sda1mount /dev/sda1 /mnt/datafscrypt setupfscrypt encrypt /mnt/dataQ1325: How do you implement key management?
Answer:
# Generate GPG keygpg --full-generate-key
# Export public keygpg --armor --export user@example.com > public.asc
# Import keygpg --import public.asc
# Encrypt filegpg -e -r user@example.com file.txt
# Decrypt filegpg -d file.txt.gpg
# Hash verificationsha256sum file.tar.gzsha256sum -c file.tar.gz.sha256
# HMACopenssl dgst -sha256 -hmac "key" file.txt
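The sha256sum and openssl HMAC commands have stdlib equivalents in Python's hashlib and hmac modules; a minimal sketch (the key and payload are placeholders):

```python
import hashlib
import hmac

payload = b"hello world\n"

# sha256sum equivalent
digest = hashlib.sha256(payload).hexdigest()

# openssl dgst -sha256 -hmac "key" equivalent
mac = hmac.new(b"key", payload, hashlib.sha256).hexdigest()

# Verify with a constant-time comparison (avoids timing side channels)
expected = hmac.new(b"key", payload, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(mac, expected)
```

Always use hmac.compare_digest (not ==) when checking a MAC supplied by an untrusted party.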
# Use keyctlkeyctl add user mykey "my secret data" @ukeyctl list @ukeyctl pipe $(keyctl search @u user mykey)Linux Advanced Topics
Q1326: How do you use eBPF?
Answer:
# Install bpftraceapt install bpftrace
# Simple tracebpftrace -e 'BEGIN { printf("Tracing... Hit Ctrl-C to end.\n"); }'
# Trace syscallsbpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s\n", str(args->filename)); }'
# Custom bpf program# hello.bt#!/usr/bin/bpftracetracepoint:syscalls:sys_enter_read/pid == 1234/{ @[comm] = count();}
# Runchmod +x hello.bt./hello.bt
# Using bcc/usr/share/bcc/tools/execsnoop/usr/share/bcc/tools/opensnoop/usr/share/bcc/tools/tcplifeQ1327: How do you implement zero-downtime deployment?
Answer:
# Blue-green deployment with HAProxy# haproxy.cfglisten app bind *:80 mode http
balance roundrobin
server blue 192.168.1.10:8080 check server green 192.168.1.11:8080 check backup
# Switch traffic# Deploy to green# Test green# Switchecho "set server app/green state READY" | socat stdio /var/run/haproxy.sock
# Rolling update with systemd# /etc/systemd/system/myapp.service.d/override.conf[Service]ExecStartPost=/usr/local/bin/healthcheck.sh
# Kubernetes rolling updatekubectl set image deployment/myapp myapp=myapp:v2kubectl rollout status deployment/myappkubectl rollout undo deployment/myapp
# Nginx# nginx.confupstream backend { server 192.168.1.10:8080; server 192.168.1.11:8080;}Q1328: How do you implement rate limiting?
Answer:
# iptables rate limitiptables -A INPUT -p tcp --dport 80 -m state --state NEW \ -m recent --setiptables -A INPUT -p tcp --dport 80 -m state --state NEW \ -m recent --update --seconds 60 --hitcount 10 -j DROP
# Nginx rate limiting# nginx.confhttp { limit_req_zone $binary_remote_addr zone=limit:10m rate=10r/s;
server { location / { limit_req zone=limit burst=20 nodelay; } }}
# HAProxy rate limiting# haproxy.cfghttp-request deny deny_status 429 if { sc_http_req_rate(10) gt 10 }
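Behind most of these rate limiters sits a token bucket: tokens refill at a steady rate up to a burst capacity, and each request spends one. A minimal self-contained Python sketch of the idea (not any particular library's implementation):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to capacity, refills at rate/sec."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

This mirrors nginx's `rate=10r/s` plus `burst`: rate is the sustained limit, capacity the burst allowance.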
# Application rate limiting
# Python example (flask-limiter; the Limiter constructor signature varies by version)
from flask import Flask, jsonify
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app)

@app.route("/api")
@limiter.limit("10 per minute")
def api():
    return jsonify({"status": "ok"})
Q1329: How do you implement circuit breaker?
Answer:
# Python circuit breaker
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=5, timeout=60):
        self.failure_threshold = failure_threshold
        self.timeout = timeout
        self.failures = 0
        self.last_failure_time = None
        self.state = "CLOSED"

    def call(self, func, *args, **kwargs):
        if self.state == "OPEN":
            if time.time() - self.last_failure_time > self.timeout:
                self.state = "HALF_OPEN"
            else:
                raise Exception("Circuit breaker OPEN")
        try:
            result = func(*args, **kwargs)
            if self.state == "HALF_OPEN":
                self.state = "CLOSED"
                self.failures = 0
            return result
        except Exception:
            self.failures += 1
            self.last_failure_time = time.time()
            if self.failures >= self.failure_threshold:
                self.state = "OPEN"
            raise
# Usage
cb = CircuitBreaker()
result = cb.call(risky_function)
Q1330: How do you implement service mesh?
Answer:
# Istio VirtualServiceapiVersion: networking.istio.io/v1beta1kind: VirtualServicemetadata: name: myappspec: hosts: - myapp http: - route: - destination: host: myapp subset: v1 weight: 80 - destination: host: myapp subset: v2 weight: 20 - match: - headers: x-api-version: exact: v2 route: - destination: host: myapp subset: v2 retries: attempts: 3 perTryTimeout: 2s timeout: 10s---apiVersion: networking.istio.io/v1beta1kind: DestinationRulemetadata: name: myappspec: host: myapp trafficPolicy: connectionPool: tcp: maxConnections: 100 http: h2UpgradePolicy: UPGRADE http1MaxPendingRequests: 100 http2MaxRequests: 1000 loadBalancer: simple: ROUND_ROBINLinux Testing
Q1331: How do you test network performance?
Answer:
# iperf serveriperf -s
# iperf clientiperf -c server-ipiperf -c server-ip -P 4iperf -c server-ip -M 1400iperf -c server-ip -t 60
# iperf3iperf3 -siperf3 -c server-ipiperf3 -c server-ip -R # reverseiperf3 -c server-ip -P 8 # parallel
# Network speed testspeedtest-cli
# Ping testfping -g 192.168.1.0/24ping -M do -s 1400 host
# Latency testsockperf ping-pong --tcp -i server -p 12345
# MTU discoverytracepath server
# Netcat throughputnc -l -p 9000 > /dev/null &nc -N server 9000 < /dev/zeroQ1332: How do you test disk I/O?
Answer:
# fio installationapt install fio
# Sequential read
fio --name=seqread --readonly --direct=1 --ioengine=libaio \
  --bs=4k --iodepth=32 --numjobs=4 --rw=read \
  --filename=/dev/sda1 --size=1G
# Random writefio --name=randwrite --direct=1 --ioengine=libaio \ --bs=4k --iodepth=32 --numjobs=4 --rw=randwrite \ --filename=/tmp/test --size=1G
# Mixed workloadfio --name=mixed --direct=1 --ioengine=libaio \ --bs=4k --iodepth=1 --numjobs=1 --rw=randrw \ --rwmixread=70 --filename=/tmp/test
# Using hdparmhdparm -t /dev/sdahdparm -T /dev/sda
# Using dd# Write speeddd if=/dev/zero of=/tmp/test bs=1M count=1024 oflag=direct
# Read speeddd if=/tmp/test of=/dev/null bs=1M count=1024 iflag=directQ1333: How do you load test web servers?
Answer:
# Apache Bench (ab)ab -n 10000 -c 100 http://localhost/index.htmlab -n 1000 -c 10 -t 60 http://localhost/api
# wrkwrk -t4 -c100 -d30s http://localhost/wrk -t2 -c50 -d30s --latency http://localhost/
# Custom scriptwrk.method = "POST"wrk.body = '{"data":"test"}'wrk.headers["Content-Type"] = "application/json"
# siegesiege -c100 -r100 http://localhost/siege -b -t5M http://localhost/
# Apache JMeter# GUIjmeter
# CLIjmeter -n -t test.jmx -l results.jtl
# k6# test.jsimport http from 'k6/http';import { check, sleep } from 'k6';
export const options = { vus: 10, duration: '30s',};
export default function() { const res = http.get('http://localhost/'); check(res, { 'status is 200': r => r.status === 200 }); sleep(1);}Q1334: How do you test database performance?
Answer:
# PostgreSQL# EXPLAIN ANALYZEEXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
# pgbenchpgbench -i -s 100 mydbpgbench -c 10 -j 2 -t 1000 mydb
# MySQL# EXPLAINEXPLAIN SELECT * FROM users WHERE email = 'test@example.com';
# mysqlslapmysqlslap --user=root --password --auto-generate-sql \ --concurrency=10 --iterations=5
# sysbenchsysbench /usr/share/sysbench/oltp_read_write.lua \ --mysql-host=localhost \ --mysql-db=test \ --threads=10 \ --time=60 \ run
# pt-query-digestpt-query-digest slow-query.logQ1335: How do you perform security testing?
Answer:
# Nmap scansnmap -sS -sV -O targetnmap -sV --script=vuln targetnmap -p- targetnmap --script=banner target
# OpenVASopenvasmd --create-user adminopenvasmd --user=admin --new-password=passwordopenvasmd --updateopenvasmd --rebuild
# Nikto
nikto -h http://target/
nikto -h https://target/ -ssl
# SQLMapsqlmap -u "http://target/page.php?id=1" --dbssqlmap -u "http://target/page.php?id=1" -D database --tables
# XSS testing# xsserxsser -u "http://target/page?q="
# Directory enumerationdirb http://target/gobuster dir -u http://target/ -w /usr/share/wordlists/dirb/common.txtLinux Container Advanced
Q1336: How do you optimize Docker images?
Answer:
# Use minimal base imageFROM alpine:3.18
# Multi-stage buildFROM node:18-alpine AS builderWORKDIR /appCOPY package*.json ./RUN npm ci --only=productionCOPY . .
FROM node:18-alpine AS productionWORKDIR /appCOPY --from=builder /app/node_modules ./node_modulesCOPY --from=builder /app .USER nodeEXPOSE 3000CMD ["node", "index.js"]
# .dockerignorenode_modules.git*.md.env*Q1337: How do you secure containers?
Answer:
# Use specific versionFROM nginx:1.25-alpine
# Create non-root userRUN addgroup -g 1000 -S appgroup && \ adduser -u 1000 -S appuser -G appgroup
# Set ownershipCOPY --chown=appuser:appgroup . /usr/share/nginx/html
USER appuser
# Read-only filesystem# docker run --read-only nginx
# Drop capabilities# docker run --cap-drop ALL --cap-add NET_BIND_SERVICE nginx
# Limit resources# docker run --memory=256m --cpus=0.5 nginx
# Scan imagestrivy image myimagedocker scan myimageQ1338: How do you configure Docker networking?
Answer:
# Custom bridge networkdocker network create --driver bridge mynetworkdocker network create --subnet=192.168.100.0/24 mynetwork
# Overlay networkdocker network create --driver overlay myoverlay
# Host networkdocker run --network host nginx
# Macvlandocker network create -d macvlan \ --subnet=192.168.1.0/24 \ --gateway=192.168.1.1 \ -o parent=eth0 mymacvlan
# DNS configurationdocker run --dns 8.8.8.8 --network-alias myapp myimage
# Port mappingdocker run -p 8080:80 nginx
# Connect to networkdocker network connect mynetwork containerQ1339: How do you configure Docker storage?
Answer:
# Create volumedocker volume create mydata
# Mount volumedocker run -v mydata:/data mysql
# Bind mountdocker run -v /host/path:/container/path nginx
# tmpfs mountdocker run --tmpfs /tmp nginx
# NFS volumedocker volume create --driver local \ --opt type=nfs \ --opt o=addr=192.168.1.100,rw \ --opt device=:/path/to/share \ nfsvolume
# Backup volumedocker run --rm -v mydata:/data -v $(pwd):/backup alpine \ tar cvf /backup/backup.tar /dataQ1340: How do you configure Docker Swarm services?
Answer:
# Initialize swarmdocker swarm init --advertise-addr 192.168.1.10
# Create servicedocker service create \ --name myapp \ --replicas 3 \ --publish 8080:80 \ --update-delay 10s \ --update-parallelism 1 \ --update-failure-action rollback \ --rollback-monitor 5s \ --rollback-max-failure-ratio 0.1 \ myimage:latest
# Update servicedocker service update --image myimage:v2 myapp
# Scaledocker service scale myapp=5
# Networksdocker network create -d overlay myoverlaydocker service update --network-add myoverlay myapp
# Secretsecho "mypassword" | docker secret create mysecret -docker secret create mysecret mysecret.txtdocker service update --secret-add mysecret myappLinux Kubernetes Advanced
Q1341: How do you configure pod resources?
Answer:
apiVersion: v1kind: Podmetadata: name: myappspec: containers: - name: myapp image: myapp:latest resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" livenessProbe: httpGet: path: /health port: 8080 initialDelaySeconds: 30 periodSeconds: 10 readinessProbe: httpGet: path: /ready port: 8080 initialDelaySeconds: 5 periodSeconds: 5Q1342: How do you configure Kubernetes networking?
Answer:
apiVersion: v1kind: Servicemetadata: name: myappspec: selector: app: myapp ports: - port: 80 targetPort: 8080 type: ClusterIP
---apiVersion: v1kind: Servicemetadata: name: myapp-lbspec: selector: app: myapp ports: - port: 80 targetPort: 8080 type: LoadBalancer
---apiVersion: networking.k8s.io/v1kind: Ingressmetadata: name: myapp annotations: nginx.ingress.kubernetes.io/rewrite-target: /spec: rules: - host: myapp.example.com http: paths: - path: / pathType: Prefix backend: service: name: myapp port: number: 80Q1343: How do you configure persistent storage?
Answer:
apiVersion: v1kind: PersistentVolumeClaimmetadata: name: mypvcspec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi
---apiVersion: v1kind: Podmetadata: name: myappspec: containers: - name: myapp image: nginx volumeMounts: - name: data mountPath: /data volumes: - name: data persistentVolumeClaim: claimName: mypvc
---apiVersion: storage.k8s.io/v1kind: StorageClassmetadata: name: fastprovisioner: kubernetes.io/gce-pdparameters: type: pd-ssd replication-type: regional-pdQ1344: How do you configure RBAC?
Answer:
apiVersion: rbac.authorization.k8s.io/v1kind: Rolemetadata: name: pod-readerrules:- apiGroups: [""] resources: ["pods", "pods/log"] verbs: ["get", "list", "watch"]
---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: name: read-podssubjects:- kind: User name: jane apiGroup: rbac.authorization.k8s.ioroleRef: kind: Role name: pod-reader apiGroup: rbac.authorization.k8s.io
---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: name: secret-readerrules:- apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch"]Q1345: How do you configure Helm charts?
Answer:
# Chart.yaml
apiVersion: v2
name: myapp
description: My application
type: application
version: 1.0.0
appVersion: "1.0"

# values.yaml
replicaCount: 3

image:
  repository: myapp
  pullPolicy: IfNotPresent
  tag: "latest"

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
  - host: myapp.example.com
    paths:
    - path: /
      pathType: Prefix

resources:
  limits:
    cpu: 500m
    memory: 128Mi
  requests:
    cpu: 250m
    memory: 64Mi

Linux Cloud Integration
Q1346: How do you configure auto-scaling?
Answer:
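The HPA controller's core decision is the formula desiredReplicas = ceil(currentReplicas × currentMetricValue / targetValue); a minimal sketch of that arithmetic (the real controller also applies tolerances and the stabilization windows configured below):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Core HPA scaling formula (tolerance and stabilization omitted)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 70% target -> scale up to 6
print(desired_replicas(4, 90, 70))  # 6
```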
# Kubernetes HPA
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 10
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15

Q1347: How do you configure secrets management?
Answer:
# Using HashiCorp Vault
export VAULT_ADDR="https://vault.example.com:8200"
vault login -method=token token=<token>

# Kubernetes secrets
kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=secret

# Using Sealed Secrets
# Install controller
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets sealed-secrets/sealed-secrets

# Encrypt
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# AWS Secrets Manager
aws secretsmanager get-secret-value --secret-id mysecret
aws secretsmanager create-secret --name mysecret --secret-string '{"username":"admin","password":"secret"}'

# Use with CSI
kubectl apply -f secret-provider-class.yaml

Q1348: How do you configure CI/CD pipelines?
Answer:
# GitLab CI: .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

variables:
  DOCKER_IMAGE: registry.example.com/myapp
  KUBECONFIG: /tmp/kubeconfig

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t $DOCKER_IMAGE:$CI_COMMIT_SHA .
    - docker push $DOCKER_IMAGE:$CI_COMMIT_SHA

test:
  stage: test
  image: myapp:test
  script:
    - npm test
    - npm run lint

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/myapp myapp=$DOCKER_IMAGE:$CI_COMMIT_SHA
    - kubectl rollout status deployment/myapp
  only:
    - main

Q1349: How do you configure cloud storage?
Answer:
# AWS S3
aws s3 ls s3://mybucket/
aws s3 cp file.txt s3://mybucket/
aws s3 sync ./folder s3://mybucket/folder

# S3FS mount
apt install s3fs
echo mybucket:access_key:secret_key > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs mybucket /mnt/s3 -o passwd_file=~/.passwd-s3fs

# MinIO client
mc alias set myminio http://localhost:9000 minioadmin minioadmin
mc ls myminio/
mc mirror ./data myminio/mybucket

# Google Cloud Storage
gsutil ls gs://mybucket/
gsutil cp file.txt gs://mybucket/
gsutil rsync -r ./folder gs://mybucket/folder

# Azure Blob
az storage container create --name mycontainer
az storage blob upload --container-name mycontainer --name myfile --file myfile.txt

Q1350: How do you configure multi-cloud deployment?
Answer:
# Terraform multi-cloud
# main.tf
provider "aws" {
  region = "us-east-1"
  alias  = "aws"
}

provider "google" {
  project = "myproject"
  region  = "us-east1"
  alias   = "gcp"
}

resource "aws_instance" "aws_vm" {
  provider      = aws.aws
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

resource "google_compute_instance" "gcp_vm" {
  provider     = google.gcp
  name         = "gcp-vm"
  machine_type = "e2-micro"
  zone         = "us-east1-b"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}

# Kubernetes multi-cluster
# kubeconfig
contexts:
- name: aws-cluster
  context:
    cluster: aws-cluster
    user: admin
- name: gcp-cluster
  context:
    cluster: gcp-cluster
    user: admin

# Deploy to both
kubectl --context=aws-cluster apply -f deployment.yaml
kubectl --context=gcp-cluster apply -f deployment.yaml

Linux Best Practices
Q1351: How do you implement backup automation?
Answer:
#!/bin/bash
# Automated backup script

set -euo pipefail

BACKUP_DIR="/backup"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30

# Database backup
backup_db() {
    local db_name="$1"
    local db_user="$2"
    local db_pass="$3"

    mysqldump -u"$db_user" -p"$db_pass" --single-transaction \
        --routines --triggers "$db_name" | gzip > \
        "$BACKUP_DIR/mysql/${db_name}_${DATE}.sql.gz"
}

# File backup (hard-linked incremental snapshots)
backup_files() {
    local source="$1"

    rsync -avz --delete \
        --link-dest="$BACKUP_DIR/files/latest" \
        "$source" "$BACKUP_DIR/files/${DATE}"

    rm -f "$BACKUP_DIR/files/latest"
    ln -s "${DATE}" "$BACKUP_DIR/files/latest"
}

# Clean old backups
cleanup() {
    find "$BACKUP_DIR" -type f -mtime +"$RETENTION_DAYS" -delete
    find "$BACKUP_DIR" -type d -mtime +"$RETENTION_DAYS" -exec rm -rf {} + 2>/dev/null || true
}

# Main
main() {
    mkdir -p "$BACKUP_DIR/mysql" "$BACKUP_DIR/files"
    backup_db "mydb" "backupuser" "password"
    backup_files "/data"
    cleanup
    echo "Backup completed at $(date)"
}
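Retention logic like cleanup() is worth keeping in a unit-testable form as well; a minimal Python equivalent of "delete files whose mtime is older than N days" (directory layout is illustrative):

```python
import os
import time

def cleanup_old_backups(backup_dir: str, retention_days: int) -> list[str]:
    """Remove files older than retention_days (by mtime); return the deleted paths."""
    cutoff = time.time() - retention_days * 86400
    deleted = []
    for root, _dirs, files in os.walk(backup_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                deleted.append(path)
    return deleted
```

Unlike the shell version, this form can be exercised against a temporary directory before it ever touches real backups.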
main

Q1352: How do you implement disaster recovery?
Answer:
# DR plan documentation
# 2. Recovery procedures
recover_database() {
    # Stop applications
    systemctl stop myapp

    # Drop and recreate the database
    mysql -u root -p -e "DROP DATABASE IF EXISTS mydb; CREATE DATABASE mydb;"

    # Restore from backup
    gunzip < /backup/mysql/mydb_20240101.sql.gz | mysql -u root -p mydb

    # Verify
    mysql -u root -p -e "USE mydb; SHOW TABLES;"

    # Start applications
    systemctl start myapp
}

# 3. Test DR
test_dr() {
    echo "Starting DR test..."

    # Spin up test environment
    vagrant up dr-test

    # Restore
    vagrant ssh dr-test -c "/backup/scripts/restore.sh"

    # Verify
    vagrant ssh dr-test -c "curl http://localhost/health"

    # Clean up
    vagrant destroy dr-test
}
# 4. Document RTO/RPO
# RTO: 4 hours
# RPO: 24 hours

Q1353: How do you implement configuration management?
Answer:
# Ansible inventory
# inventory.ini
[webservers]
web1.example.com
web2.example.com

[databases]
db1.example.com

[webservers:vars]
nginx_version=1.25
app_port=8080

[all:vars]
env=production

# Ansible playbook
- name: Configure webservers
  hosts: webservers
  become: yes

  tasks:
    - name: Update packages
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Configure nginx
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: restart nginx

    - name: Start nginx
      service:
        name: nginx
        state: started
        enabled: yes

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted

Q1354: How do you implement monitoring and alerting?
Answer:
# Prometheus alert rules
groups:
  - name: critical
    interval: 30s
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} down"

      - alert: HighCPU
        expr: 100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 5m
        labels:
          severity: warning

      - alert: DiskSpace
        expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) < 0.1
        for: 10m
        labels:
          severity: warning

# AlertManager config
# alertmanager.yml
route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 12h
  receiver: 'team-notifications'
receivers:
  - name: 'team-notifications'
    email_configs:
      - to: 'team@example.com'
        send_resolved: true
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/XXX'

Q1355: How do you implement log management?
Answer:
# Filebeat configuration
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
      - /var/log/*/*.log
    fields:
      type: syslog
    fields_under_root: true

processors:
  - add_host_metadata:
      fields_under_root: true
  - add_docker_metadata: ~
  - add_cloud_metadata: ~

output.logstash:
  hosts: ["logstash:5044"]

# Log rotation
# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 myapp myapp
    postrotate
        systemctl reload myapp > /dev/null 2>&1 || true
    endscript
}
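Besides shipping and rotation, quick ad-hoc analysis is often needed. The shell version below does this with grep/awk; the same triage in Python, as a minimal sketch (the "timestamp LEVEL message" line format is an assumption):

```python
from collections import Counter

def summarize_errors(lines: list[str]) -> tuple[int, list[tuple[str, int]]]:
    """Count ERROR lines and return the three most common error messages."""
    errors = [line.split("ERROR", 1)[1].strip() for line in lines if "ERROR" in line]
    return len(errors), Counter(errors).most_common(3)

log = [
    "2024-01-01 10:00:01 ERROR db timeout",
    "2024-01-01 10:00:02 INFO ok",
    "2024-01-01 10:00:03 ERROR db timeout",
]
count, top = summarize_errors(log)
print(count, top)  # 2 [('db timeout', 2)]
```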
# Log analysis script
analyze_logs() {
    local logfile="$1"

    # Error count
    echo "Errors: $(grep -c ERROR "$logfile")"

    # Top errors
    echo "Top errors:"
    grep ERROR "$logfile" | cut -d: -f4- | sort | uniq -c | sort -rn | head -10

    # Time distribution (errors per hour)
    echo "Time distribution:"
    awk '/ERROR/ {print $2}' "$logfile" | cut -d: -f1 | sort | uniq -c
}

Linux Troubleshooting Advanced
Q1356: How do you debug kernel panics?
Answer:
# Configure kdump
GRUB_CMDLINE_LINUX="crashkernel=auto"

# Install kdump
apt install kdump-tools

# Configure
# /etc/kdump.conf
path /var/crash
core_collector makedumpfile -c -d 31

# Enable
systemctl enable kdump
systemctl start kdump

# Test (warning: crashes the kernel immediately)
echo c > /proc/sysrq-trigger

# Analyze crash dump
crash /usr/lib/debug/boot/vmlinux-$(uname -r) /var/crash/2024-01-01-01:01/vmcore

# Kernel panic symptoms
# "Kernel panic - not syncing: VFS: Unable to mount root fs"
# "Kernel panic - not syncing: Out of memory and no killable processes"

# Debug
# /etc/sysctl.conf
kernel.panic=10
kernel.panic_on_oops=1

Q1357: How do you debug memory leaks?
Answer:
# Using valgrind
valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes ./program

# Using memleax
memleax $(pgrep -f myapp)

# System memory analysis
ps aux --sort=-rss | head
pmap -x $(pgrep -f myapp)

# Using /proc
grep -i vm /proc/$(pgrep -f myapp)/status

# Using smem
smem -m
smem -r -k

# gdb debugging
gdb -p $(pgrep -f myapp)
(gdb) info proc mappings
(gdb) info registers

# For Java
jmap -heap $(pgrep -f java)
jmap -histo $(pgrep -f java) | head -20
jmap -dump:format=b,file=heap.bin $(pgrep -f java)
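For Python processes, the standard-library tracemalloc module can attribute allocations to source lines from inside the process itself, which complements the external tools above; a minimal self-contained sketch:

```python
import tracemalloc

tracemalloc.start()

# Simulated growing allocation standing in for a leak
leak = [list(range(1000)) for _ in range(100)]

snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")[0]
# `top` carries the file:line of the biggest allocator and its size in bytes
print(top.size > 0)  # True
```

Comparing two snapshots taken minutes apart (snapshot.compare_to) is the usual way to spot which line's footprint keeps growing.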
# For Python
python -m memory_profiler program.py

Q1358: How do you debug I/O issues?
Answer:
# iostat
iostat -xz 1

# iotop
iotop -o

# pidstat
pidstat -d 1
pidstat -d -p $(pgrep -f myapp) 1

# blockdev
blockdev --getsize64 /dev/sda
blockdev --report /dev/sda

# hdparm
hdparm -tT /dev/sda

# ftrace
# Block I/O tracing
echo nop > /sys/kernel/debug/tracing/current_tracer
echo 1 > /sys/kernel/debug/tracing/events/block/block_rq_issue/enable
cat /sys/kernel/debug/tracing/trace_pipe

# strace for I/O
strace -e trace=openat,read,write -p $(pgrep -f myapp)

# ioping
ioping -c 10 /dev/sda
ioping -c 10 /mnt/data

Q1359: How do you debug DNS issues?
Answer:
# Query tests
dig @8.8.8.8 example.com
dig +trace example.com
nslookup example.com 8.8.8.8
host -v example.com

# Check resolver
cat /etc/resolv.conf
resolvectl status        # systemd-resolve --status on older systems

# Check DNS cache
# systemd-resolved
resolvectl flush-caches  # systemd-resolve --flush-caches on older systems

# nscd
nscd -i hosts

# Check server (zone transfer, if permitted)
dig @ns1.example.com example.com AXFR

# tcpdump
tcpdump -i eth0 -n port 53

# Check resolution order
getent hosts example.com
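Resolution can also be exercised from the libc side (the same nsswitch-driven path getent uses) with Python's socket module, which helps separate "DNS server is broken" from "this application resolves differently"; a minimal sketch:

```python
import socket

def resolve(name: str) -> list[str]:
    """Return the unique addresses libc resolves `name` to (files + DNS, per nsswitch)."""
    infos = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # e.g. ['127.0.0.1', '::1'] via /etc/hosts
```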
# Debug nsswitch
# /etc/nsswitch.conf
hosts: files dns

# Test with strace
strace -e openat,read nslookup example.com

Q1360: How do you debug performance regressions?
Answer:
# Baseline comparison
# Save a baseline
sar -A -o baseline.data 1 60

# Capture the regressed system the same way
sar -A -o regression.data 1 60

# Using sadf
sadf -d baseline.data | head
sadf -d regression.data | head

# Using sar for comparison (-f reads a saved datafile)
sar -q -f baseline.data > baseline_q.txt
sar -q -f regression.data > regression_q.txt
diff baseline_q.txt regression_q.txt
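Once both captures are exported (e.g. via sadf -d), the comparison itself is simple arithmetic; a sketch that flags metrics whose mean moved by more than a threshold (the metric names here are illustrative sar fields):

```python
from statistics import mean

def regressions(baseline: dict[str, list[float]],
                candidate: dict[str, list[float]],
                threshold_pct: float = 10.0) -> dict[str, float]:
    """Return metrics whose mean changed by more than threshold_pct percent."""
    flagged = {}
    for metric, base_vals in baseline.items():
        base = mean(base_vals)
        cand = mean(candidate.get(metric, base_vals))
        change = (cand - base) / base * 100 if base else 0.0
        if abs(change) > threshold_pct:
            flagged[metric] = round(change, 1)
    return flagged

base = {"runq-sz": [1.0, 1.2, 1.1], "ldavg-1": [0.5, 0.6, 0.55]}
cand = {"runq-sz": [2.0, 2.2, 2.1], "ldavg-1": [0.5, 0.62, 0.53]}
print(regressions(base, cand))  # {'runq-sz': 90.9}
```

This turns an eyeball diff of two sar dumps into a pass/fail check that can gate a deployment.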
# Using perf
perf stat -a --repeat=3 ./benchmark
perf record -g ./regression
perf report

# Using bpftrace
# Baseline
bpftrace -e 'kprobe:do_nanosleep { @start = nsecs; }' > baseline.txt

# Regression
bpftrace -e 'kprobe:do_nanosleep { @start = nsecs; }' > regression.txt

Linux Advanced Maintenance
Q1361: How do you perform kernel upgrades?
Answer:
# Debian/Ubuntu
apt update
apt list --upgradable
apt upgrade
apt-get dist-upgrade
reboot

# RHEL/CentOS
yum update
reboot

# Manual kernel compile
cd /usr/src
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.tar.xz
tar -xJf linux-5.15.tar.xz
cd linux-5.15
cp /boot/config-$(uname -r) .config
make menuconfig
make -j$(nproc)
make modules_install
make install
update-grub
reboot

Q1362: How do you migrate services?
Answer:
# Database migration
# 1. Export data
mysqldump -u root -p mydb > mydb.sql

# 2. Copy to new server
rsync -avz mydb.sql newserver:/tmp/

# 3. Import (on the new server)
mysql -u root -p mydb < /tmp/mydb.sql

# 4. Verify
mysql -u root -p -e "USE mydb; SHOW TABLES;"

# Application migration
# 1. Stop application
systemctl stop myapp

# 2. Sync files
rsync -avz --delete /var/www/html/ newserver:/var/www/html/

# 3. Copy configs
rsync -avz /etc/nginx/ newserver:/etc/nginx/

# 4. DNS switch
# Update DNS records
# or
# Use load balancer

# 5. Start on new server
systemctl start myapp

Q1363: How do you perform capacity planning?
Answer:
# CPU capacity
# Current usage
sar -u 1
# Projection
# (current_avg * growth_factor * days) / capacity

# Memory capacity
free -h
vmstat 1

# Disk capacity
df -h
# Monitor trends
iostat -x 1

# Network capacity
iftop
nethogs

# Calculate requirements
# CPU: (peak_cpu / cores) * growth_factor
# Memory: (peak_mem * growth_factor) + headroom
# Disk: current_disk * (1 + growth_rate)^years
# Network: peak_bandwidth * redundancy
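The compound-growth disk formula above is easy to sanity-check in code; a minimal sketch:

```python
def projected_disk_gb(current_gb: float, growth_rate: float, years: int) -> int:
    """current_disk * (1 + growth_rate)^years, rounded to whole GB."""
    return round(current_gb * (1 + growth_rate) ** years)

# 100 GB growing 20% per year for 3 years
print(projected_disk_gb(100, 0.20, 3))  # 173
```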
# Tools
# Prometheus + node_exporter
# Grafana dashboards
# Custom scripts

Q1364: How do you perform security audits?
Answer:
# Install audit tools
apt install lynis rkhunter chkrootkit

# Run Lynis
lynis audit system
lynis audit system --profile cis-ubuntu-22.04

# Run rkhunter
rkhunter --check
rkhunter --propupd

# Run chkrootkit
chkrootkit

# Check open ports
nmap -sT -O localhost
ss -tulpn

# Check users (any UID-0 account other than root is suspicious)
awk -F: '($3 == 0) {print $1}' /etc/passwd
lastlog
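The UID-0 check can also be scripted portably for audit tooling; a minimal Python sketch over /etc/passwd-format text:

```python
def uid_zero_accounts(passwd_text: str) -> list[str]:
    """Return account names with UID 0 from /etc/passwd-format input."""
    hits = []
    for line in passwd_text.splitlines():
        fields = line.split(":")
        if len(fields) >= 3 and fields[2] == "0":
            hits.append(fields[0])
    return hits

sample = (
    "root:x:0:0:root:/root:/bin/bash\n"
    "backdoor:x:0:99::/tmp:/bin/sh\n"      # planted UID-0 account the audit should flag
    "alice:x:1000:1000::/home/alice:/bin/bash"
)
print(uid_zero_accounts(sample))  # ['root', 'backdoor']
```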
# Check logs
grep -i "failed password" /var/log/auth.log
grep -i "invalid user" /var/log/auth.log

# File integrity
aide --check
tripwire --check

Q1365: How do you implement documentation?
Answer:
# System Documentation

## Overview
- Purpose: Production web server
- OS: Ubuntu 22.04 LTS
- Hostname: web01.example.com

## Hardware
- CPU: 4 vCPU
- RAM: 8 GB
- Disk: 100 GB SSD

## Network
- IP: 192.168.1.10
- Gateway: 192.168.1.1
- DNS: 8.8.8.8, 8.8.4.4

## Services
| Service | Port | Status | Auto-start |
|---------|------|--------|------------|
| nginx | 80, 443 | running | yes |
| php-fpm | 9000 | running | yes |
| mysql | 3306 | running | yes |

## Backups
- Schedule: Daily at 2 AM
- Retention: 30 days
- Location: /backup

## Monitoring
- Prometheus: http://monitoring:9090
- Grafana: http://monitoring:3000
- Alerts: #ops-alerts

## Runbooks
- [Service restart](runbooks/service-restart.md)
- [Disk full](runbooks/disk-full.md)
- [High CPU](runbooks/high-cpu.md)

Linux Expert Topics
Q1366: How do you implement zero trust security?
Answer:
# Network policies (Kubernetes)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-to-dns
spec:
  podSelector:
    matchLabels:
      app: myapp
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
EOF

# iptables zero trust (default deny inbound)
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP   # SSH stays closed; access goes through a bastion or session manager

# mTLS with Istio
# See service mesh configuration

Q1367: How do you implement immutable infrastructure?
Answer:
# Packer for immutable images
packer build template.json

# Use cloud-init for configuration
# cloud-config.yaml
#cloud-config
package_update: true
packages:
  - nginx

# No SSH access, use session manager
# AWS Systems Manager Session Manager
# Install SSM agent
yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
systemctl enable amazon-ssm-agent
systemctl start amazon-ssm-agent

# Use containers instead of VMs
# Deploy via CI/CD
# Never SSH into production
# Roll back by redeploying

Q1368: How do you implement feature flags?
Answer:
# Simple feature flag implementation
import json

class FeatureFlags:
    def __init__(self, flags_file):
        self.flags = self._load_flags(flags_file)

    def _load_flags(self, flags_file):
        with open(flags_file) as f:
            return json.load(f)

    def is_enabled(self, flag_name):
        return self.flags.get(flag_name, {}).get('enabled', False)

    def get_variant(self, flag_name):
        return self.flags.get(flag_name, {}).get('variant', 'control')

flags = FeatureFlags('/etc/flags.json')

# Usage
def checkout_view():
    if flags.is_enabled('new_checkout'):
        return render_new_checkout()
    return render_old_checkout()
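Boolean flags extend naturally to gradual rollouts: hash the user key to a stable bucket in [0, 100) and compare it against the rollout percentage, so each user gets a consistent answer across requests. A minimal sketch (the flag-name/user-key scheme is an assumption, not part of the class above):

```python
import hashlib

def in_rollout(flag_name: str, user_key: str, percentage: float) -> bool:
    """Deterministically bucket a user; the same user always gets the same answer."""
    digest = hashlib.sha256(f"{flag_name}:{user_key}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100  # 0.00 .. 99.99
    return bucket < percentage

# Same user, same result on every call; roughly `percentage`% of users enabled overall
print(in_rollout("new_checkout", "user123", 25.0) == in_rollout("new_checkout", "user123", 25.0))  # True
```

Hashing on flag name plus user key means different flags roll out to different (uncorrelated) slices of the user base.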
# Using LaunchDarkly (the Python SDK package is `ldclient`)
import ldclient
from ldclient.config import Config

ldclient.set_config(Config("sdk-key"))
ld_client = ldclient.get()

feature_flag = ld_client.variation("new-feature", {"key": "user123"}, False)

Q1369: How do you implement a service catalog?
Answer:
# Backstage installation
helm repo add backstage https://backstage.github.io/helm-charts
helm install backstage backstage/backstage

# Create catalog entry
# catalog-info.yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service
  annotations:
    github.com/project-slug: org/repo
spec:
  type: service
  lifecycle: production
  owner: team-a
  providesApis:
    - my-service-api

# Register in Backstage
# app-config.yaml
catalog:
  locations:
    - type: url
      target: https://github.com/org/repo/blob/main/catalog-info.yaml

# Backstage discovers the entity from the registered location;
# no kubectl step is needed

Q1370: How do you optimize Linux for containers?
Answer:
# Kernel parameters for containers
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1

# Minimize swapping (disabling swap entirely requires swapoff -a / removing fstab entries)
vm.swappiness=0

# File limits
fs.file-max=65536

# Network
net.core.somaxconn=1024
net.ipv4.tcp_max_syn_backlog=2048

# Apply
sysctl -p

# Container-optimized OS
# Use Ubuntu Core, RancherOS, Flatcar

# Docker daemon
# /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "live-restore": true,
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}

Q1371: How do you implement GitOps?
Answer:
# 1. Git repository structure
# ├── base/
# │   ├── deployment.yaml
# │   └── service.yaml
# ├── overlays/
# │   ├── dev/
# │   │   └── kustomization.yaml
# │   ├── staging/
# │   │   └── kustomization.yaml
# │   └── prod/
# │       └── kustomization.yaml

# 2. Kustomize
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - replica-patch.yaml
replicas:
  - name: myapp
    count: 5

# 3. Apply
kustomize build overlays/prod | kubectl apply -f -

# 4. ArgoCD
argocd app create myapp \
  --repo https://github.com/org/repo \
  --path overlays/prod \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default

Q1372: How do you implement chaos engineering?
Answer:
# Install Chaos Mesh
helm repo add chaos-mesh https://charts.chaos-mesh.org
helm install chaos-mesh chaos-mesh/chaos-mesh -n chaos-mesh --create-namespace

# Kubernetes chaos experiment
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-failure
spec:
  action: pod-failure
  mode: one
  duration: "60s"
  selector:
    namespaces:
      - default
    labelSelectors:
      app: myapp

# Resource-pressure attacks (Gremlin-style), using stress-ng
# CPU
stress-ng --cpu 4 --timeout 60s
# Memory
stress-ng --vm 2 --vm-bytes 1G --timeout 60s

# Using chaoskube
chaoskube --interval=30s --labels=app=test

# Litmus
litmusctl run chaos -f ./chaos-experiment.yaml

Q1373: How do you implement service mesh observability?
Answer:
# Istio telemetry
# Enable tracing
istioctl install --set values.telemetry.enabled=true

# Configure tracing
# istio configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
data:
  meshConfig: |
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 10
        zipkin:
          address: jaeger-collector.observability:9411

# Access dashboards
# Jaeger
kubectl port-forward -n istio-system svc/jaeger-query 16686:16686

# Kiali
kubectl port-forward -n istio-system svc/kiali 20001:20001

# Grafana
kubectl port-forward -n istio-system svc/grafana 3000:3000

Q1374: How do you implement multi-tenancy?
Answer:
# Kubernetes namespaces
kubectl create namespace tenant1
kubectl create namespace tenant2

# Resource quotas
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant1-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
    services: "10"
    secrets: "20"
    configmaps: "20"

# Limit ranges
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant1-limits
spec:
  limits:
    - max:
        cpu: "2"
        memory: "4Gi"
      min:
        cpu: "100m"
        memory: "128Mi"
      default:
        cpu: "500m"
        memory: "1Gi"
      defaultRequest:
        cpu: "200m"
        memory: "512Mi"
      type: Container

# RBAC (the built-in "admin" role is a ClusterRole)
kubectl create rolebinding tenant1-admin \
  --clusterrole=admin \
  --user=user1 \
  --namespace=tenant1