Server and Serverless
Our applications ultimately need to run somewhere. That "somewhere" typically comes down to two options: traditional servers (where you play the homeowner) or serverless environments (where you're a cloud nomad). Both approaches keep your code running in the cloud, but one hands you the keys to the server room while the other hands you a metered, Uber-like bill. Let's break down these options.
Server
While serverless gets all the hype, traditional server-based architectures still power enterprise systems. Here's what you need to know:
- You own the box: whether physical (on-premise hardware) or virtual (cloud VMs such as AWS EC2 or Google Compute Engine), you have full control over:
  - OS patches and security updates.
  - Scaling strategies (vertical/horizontal).
  - Resource allocation (CPU, RAM, storage).
- Predictable costs: Fixed monthly bills regardless of traffic spikes.
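With a server-based setup, capacity planning is your job: you size the fleet for peak traffic, not average traffic. A minimal back-of-the-envelope sketch (the request rates and the `headroom` factor are illustrative assumptions, not a sizing formula from any provider):

```python
import math

def instances_needed(peak_rps: float, rps_per_instance: float,
                     headroom: float = 0.3) -> int:
    """Instances required to absorb peak traffic with spare headroom."""
    return math.ceil(peak_rps * (1 + headroom) / rps_per_instance)

# A service peaking at 1,200 req/s, each instance handling ~400 req/s:
print(instances_needed(1200, 400))  # → 4 (3 for the raw load, plus headroom)
```

The flip side of predictable costs is that you pay for those four instances even at 3 a.m. when traffic is near zero.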
Serverless
The term “Serverless” is a bit misleading. There are still servers involved, but you never see or manage them.
- Instead of worrying about hardware, software updates, or scaling, developers focus purely on writing code.
- Cloud providers (like AWS, Google Cloud, or Microsoft Azure) handle the grunt work: spinning up servers when needed, patching security holes, balancing traffic, and even backing up data automatically.
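The "focus purely on writing code" point becomes concrete with a minimal AWS Lambda-style handler sketch. The event shape below mirrors API Gateway's proxy format, but the exact fields vary by trigger; treat the names as illustrative:

```python
import json

def handler(event, context):
    """Lambda-style entry point: the platform invokes this per request;
    you never see or manage the server it runs on."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally it's just a function call; in the cloud, the provider calls it for you:
print(handler({"queryStringParameters": {"name": "dev"}}, None)["body"])
```

Everything around this function, provisioning, patching, scaling to zero or to thousands of concurrent copies, is the provider's problem.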
The Serverless Dream Can Quickly Become a Nightmare
Serverless computing promises unparalleled scalability, but that flexibility often comes with an unpredictable cost structure.
A striking example of this came to light when cara.app, a small indie project, went viral. Built using serverless functions, the app’s sudden popularity led to a staggering $96,000 cloud bill for its creator.
This isn’t an isolated incident. Serverless pricing is heavily based on usage, and while that can seem cost-effective at first, a spike in demand can quickly send your costs into orbit.
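The usage-based pricing model can be sketched with a simple break-even calculation. The rates below are illustrative placeholders (roughly in the shape of public per-request plus per-GB-second pricing, but not any provider's actual numbers), and the flat server cost is a hypothetical figure:

```python
def serverless_monthly_cost(invocations: int,
                            avg_duration_s: float,
                            memory_gb: float,
                            price_per_million_req: float = 0.20,
                            price_per_gb_s: float = 0.0000167) -> float:
    """Pay-per-use bill: a request charge plus a compute (GB-second) charge.
    Rates here are illustrative assumptions, not real price-sheet values."""
    request_cost = invocations / 1_000_000 * price_per_million_req
    compute_cost = invocations * avg_duration_s * memory_gb * price_per_gb_s
    return request_cost + compute_cost

FIXED_SERVER = 120.0  # hypothetical flat monthly cost of an always-on VM

for invocations in (1_000_000, 50_000_000, 500_000_000):
    cost = serverless_monthly_cost(invocations, avg_duration_s=0.2, memory_gb=0.5)
    winner = "serverless" if cost < FIXED_SERVER else "server"
    print(f"{invocations:>12,} calls/month: ${cost:,.2f} → {winner} wins")
```

At low volume serverless is almost free; past the break-even point every extra request keeps adding to the bill, which is exactly how a viral spike turns into a five-figure invoice.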
Conclusion: It's Not Either/Or
| Criteria | Server | Serverless |
|---|---|---|
| Cost Structure | Predictable (fixed/month) | Variable (pay-per-execution) |
| Scaling | Manual/auto-scaling setup | Automatic (but costly spikes) |
| Maintenance | High (you manage everything) | Zero (cloud provider's duty) |
| Control | Full control over environment | Limited to runtime/config |
| Cold Starts | None (always running) | Yes (startup latency) |
| Best For | Stable workloads, stateful apps | Event-driven, bursty traffic |
Most teams you'll join have already made this decision years before your arrival. But here's why this matters to you:
- Architecture discussions: Understand tradeoffs when colleagues debate migrating to Lambda.
- Debugging context: serverless issues (cold starts, timeouts) differ from server-side ones (thread pool exhaustion, memory leaks).
- Career moves: Cloud certifications expect this knowledge.
- Promotion potential: roles such as tech lead require advising on infrastructure choices.
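To make the debugging point above concrete: the standard tactic against serverless cold starts is moving expensive initialization to module scope, so it runs once per container rather than once per request. A minimal sketch (the 50 ms "init" is a stand-in for loading an SDK or opening a database pool):

```python
import time

def expensive_init():
    time.sleep(0.05)           # stand-in for SDK loading, DB pool setup, etc.
    return {"client": "ready"}

# Module scope runs once per container start (the "cold start") and the
# result is reused by every subsequent "warm" invocation in that container.
CLIENT = expensive_init()

def handler(event, context):
    # Warm invocations reuse CLIENT and skip the init cost entirely.
    return {"status": 200, "client_ready": CLIENT is not None}

print(handler({}, None))  # → {'status': 200, 'client_ready': True}
```

On a traditional server this pattern is unremarkable because the process simply stays alive; in serverless, knowing what runs per-container versus per-request is often the difference between a fast endpoint and a mysteriously slow one.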
Whether you're maintaining a dusty server or building serverless microservices, the best architecture is the one your team can operate.