
Serverless

Building Applications Without Managing Servers


Serverless computing is a cloud execution model in which the cloud provider runs the servers and dynamically manages the allocation of machine resources. You write code, and the provider handles everything else.

Traditional vs Serverless
=========================
TRADITIONAL:
+--------------------------------------------------+
| You manage:                                      |
| - Server provisioning                            |
| - OS updates                                     |
| - Scaling                                        |
| - Capacity planning                              |
| - Patching                                       |
+--------------------------------------------------+

SERVERLESS:
+--------------------------------------------------+
| Cloud provider manages:                          |
| - Server provisioning                            |
| - OS updates                                     |
| - Scaling                                        |
| - Capacity planning                              |
| - Patching                                       |
+--------------------------------------------------+
You manage: Code only!

Serverless Flow
===============
User Request
      |
      v
+-----------------+
|   API Gateway   |
+-----------------+
      |
      v
+-----------------+
|    Function     |  (AWS Lambda, GCP Cloud Functions)
|   (Container)   |  - Spawns on demand
|  - Run code     |  - Scales automatically
|  - Return result|  - Pay per invocation
+-----------------+
      |
      v
+-----------------+
| Cloud Services  |
| - Database      |
| - Storage       |
| - Auth          |
+-----------------+
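The flow above can be sketched as a single function in the AWS Lambda style. This is a minimal illustration, not production code: the event shape follows API Gateway's proxy integration, and `handler` is the conventional Python entry-point name.

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler for an API Gateway proxy event.

    `event` carries the HTTP request as a dict; the returned dict
    becomes the HTTP response (status code, headers, JSON body).
    """
    # Query-string parameters may be absent entirely, hence the `or {}`.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally you can exercise it with `handler({"queryStringParameters": {"name": "Ada"}}, None)`; in the cloud, the provider constructs the event and calls the function for you.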
Serverless Scaling
==================
Requests: 1 -> 10 -> 100 -> 1000 -> 10000

Traditional:
+--------------------------------+
| Capacity: Fixed                |
| More requests = slower or fail |
+--------------------------------+

Serverless:
+--------------------------------+
| Instances: 1 -> 10 -> 100      |
| Each request gets resources    |
| Auto-scales to meet demand     |
+--------------------------------+
No hard concurrency limit (within provider quotas)!

Provider     Service           Languages
AWS          Lambda            Node.js, Python, Java, Go, .NET, Ruby
Google       Cloud Functions   Node.js, Python, Go, Java, .NET
Microsoft    Azure Functions   C#, JavaScript, Python, Java, PowerShell
Cloudflare   Workers           JavaScript, Rust, C++
AWS Lambda
==========
Feature      Description
Languages    Node.js, Python, Java, Go, .NET, Ruby
Runtime      Up to 15 minutes execution
Memory       128 MB to 10 GB
Packaging    ZIP or container image
Cold start   ~100 ms to several seconds
Pricing      Per request + execution time

Serverless REST API
===================
+--------------------------------------------------+
|                   API Gateway                    |
+--------------------------------------------------+
                         |
         +---------------+---------------+
         |               |               |
         v               v               v
    +--------+      +--------+      +--------+
    |  GET   |      |  POST  |      | DELETE |
    | /users |      | /users |      | /users |
    +--------+      +--------+      +--------+
         |               |               |
         v               v               v
    +--------+      +--------+      +--------+
    | Lambda |      | Lambda |      | Lambda |
    +--------+      +--------+      +--------+
         |               |               |
         +---------------+---------------+
                         |
                         v
                  +-------------+
                  |  DynamoDB   |
                  +-------------+
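One way to picture the routing stage is a small dispatch table. In a real deployment each route would usually be a separate Lambda behind API Gateway; here the per-route handlers (`list_users`, `create_user`, `delete_user`) are hypothetical stand-ins collapsed into one file so the pattern runs locally.

```python
import json

# Hypothetical per-route handlers; in AWS each could be its own Lambda.
def list_users(event):
    return {"statusCode": 200, "body": json.dumps({"users": ["ada", "alan"]})}

def create_user(event):
    body = json.loads(event.get("body") or "{}")
    return {"statusCode": 201, "body": json.dumps({"created": body.get("name")})}

def delete_user(event):
    return {"statusCode": 204, "body": ""}

# (method, path) -> handler, mirroring API Gateway's route table.
ROUTES = {
    ("GET", "/users"): list_users,
    ("POST", "/users"): create_user,
    ("DELETE", "/users"): delete_user,
}

def handler(event, context):
    """Dispatch on (HTTP method, path), as API Gateway would."""
    route = ROUTES.get((event.get("httpMethod"), event.get("path")))
    if route is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return route(event)
```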
Event Processing
================
Event Source       Trigger         Lambda           Processing
+---------+  Object Created  +----------+       +---------+
|   S3    | ---------------> |  Lambda  | ----> |  Send   |
| Upload  |                  |  Resize  |       | to SNS  |
+---------+                  +----------+       +---------+
                                                     |
                                                     v
                                                +---------+
                                                |  Email  |
                                                | Service |
                                                +---------+
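The S3-to-SNS pipeline above might look like this in handler form. It is a sketch: the event shape follows the S3 notification format, the resize step is stubbed out, and `publish` is an injected stand-in for an SNS client so the function can be tested without AWS.

```python
def handle_s3_upload(event, publish=print):
    """Sketch of an S3-triggered function: read bucket/key from each
    record, 'process' the object, then notify via `publish`."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would download the object and resize the image here.
        message = f"resized s3://{bucket}/{key}"
        publish(message)  # stand-in for sns.publish(TopicArn=..., Message=...)
        results.append(message)
    return results
```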
Scheduled Functions
===================
+--------------------------------------------------+
|                CloudWatch Events                 |
|                  (Every hour)                    |
+--------------------------------------------------+
                        |
                        v
                  +-----------+
                  |  Lambda   |
                  |  Process  |
                  | Batch Job |
                  +-----------+
                        |
                        v
                  +-----------+
                  | Database  |
                  |  (Write)  |
                  +-----------+
Use cases:
- Daily reports
- Data cleanup
- Batch analytics
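A data-cleanup job from the list above could be sketched like this, assuming `store` is a plain dict of id -> timestamp standing in for a real table; a scheduler (such as an hourly CloudWatch Events rule) would invoke the handler with an event the function ignores.

```python
from datetime import datetime, timedelta, timezone

def cleanup_handler(event, context, *, store, max_age_days=30):
    """Scheduled-job sketch: delete records older than max_age_days.

    `store` is injected so the function stays stateless and testable;
    in production it would be a database client instead of a dict.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    stale = [key for key, ts in store.items() if ts < cutoff]
    for key in stale:
        del store[key]
    return {"deleted": len(stale), "remaining": len(store)}
```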

Benefits
========
Benefit                 Description
No server management    Focus on code
Auto-scaling            Handles any load
Pay per use             No idle capacity
Faster deployment       Quick to ship
Built-in availability   High availability by default
Reduced costs           Cheaper for sporadic workloads

Challenges
==========
Challenge          Description
Cold starts        Initial latency
Execution limits   Timeouts and memory caps
Vendor lock-in     Cloud-specific APIs
Testing            Harder to test locally
Debugging          Distributed debugging is harder
State              Can't maintain in-memory state
Cold Start Timeline
===================
Request 1 (Cold):
+--------------------------------------------------+
| Invoke | Download  | Init | Execute | Response   |
|        | Runtime   | Code |         |            |
|        | (2-5 sec) |      |         |            |
+--------------------------------------------------+
Request 2-100 (Warm):
+--------------------------------------------------+
| Invoke | Execute   | Response                    |
|        | (~10ms)   |                             |
+--------------------------------------------------+
Mitigation:
- Provisioned concurrency (pay for warm)
- Keep functions warm (ping)
- Design for eventual consistency

Good Fits
=========
Use Case               Why Serverless
Web APIs               Variable traffic
Event processing       S3 triggers, queues
Scheduled tasks        Cron jobs
Real-time processing   Chat, notifications
Mobile backends        Variable load
Proof of concept       Fast to build

Poor Fits
=========
Use Case                   Why Not
Long-running processes     15-minute timeout
Consistent high traffic    Dedicated servers may be cheaper
Predictable workloads      Reserved instances are cheaper
Stateful applications      Hard to maintain state
Low-latency requirements   Cold starts
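The "may be cheaper with dedicated" point is easy to check with back-of-envelope arithmetic. The per-request and per-GB-second prices below match AWS Lambda's published on-demand rates at the time of writing but are illustrative only; this ignores the free tier, so check current pricing before deciding.

```python
def lambda_monthly_cost(requests, avg_ms, memory_gb,
                        price_per_request=0.20 / 1_000_000,
                        price_per_gb_second=0.0000166667):
    """Rough monthly Lambda bill: a per-request charge plus a charge
    per GB-second of execution (memory allocated x time running)."""
    gb_seconds = requests * (avg_ms / 1000) * memory_gb
    return requests * price_per_request + gb_seconds * price_per_gb_second
```

At 1M requests/month of 100 ms at 512 MB this comes to about a dollar, far below a fixed server; at hundreds of millions of requests the fixed server wins.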

Serverless vs Containers
========================
Aspect       Serverless    Containers
Management   Provider      You / orchestrator
Scaling      Automatic     Manual or auto
Startup      Cold start    Fast (if warm)
Cost         Pay per use   Always running
Control      Limited       Full
State        External      In-memory possible
Choose Serverless when:
======================
- Traffic is variable
- Want to move fast
- Don't want ops overhead
- Event-driven workloads
Choose Containers when:
======================
- Consistent traffic
- Need full control
- Long-running processes
- Complex orchestration

Best Practices
==============
Practice                Description
Single responsibility   One function per task
Stateless               Use an external state store
Minimal dependencies    Faster cold starts
Async where possible    Queue-based processing
Proper sizing           Right-size memory for cost
Monitoring              Use provider tools
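The stateless practice in particular changes how code is written: nothing may survive between invocations inside the function, so state moves to an injected external store. A minimal sketch, with a dict standing in for DynamoDB or Redis:

```python
def hit_counter(event, context, *, store):
    """Stateless-function pattern: the counter lives in an external
    `store` passed in by the caller, never in module or local state,
    so any container instance can serve any request."""
    user = event.get("user", "anonymous")
    store[user] = store.get(user, 0) + 1
    return {"user": user, "hits": store[user]}
```

Injecting the store also makes the function trivial to unit-test, which addresses the "harder to test locally" challenge above.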

Key serverless concepts:

  1. Function as a Service - Code runs on demand
  2. Auto-scaling - Handles any load automatically
  3. Pay per use - Cost-effective for variable workloads
  4. Cold starts - Initial latency, plan for it
  5. Stateless - Use external services for state
  6. Vendor lock-in - Consider when choosing
  7. Event-driven - Works great with events

Next: Chapter 16: API Design Principles