
AWS Lambda's Bad Interface Is Actually a Feature

[2025 Update: Lambda won. Serverless functions are now standard practice, with Cloudflare Workers, Vercel Functions, and others following AWS's model. The deployment experience improved dramatically with infrastructure-as-code tools like the Serverless Framework, SAM, and CDK. But the core insight here—that constraints can improve design—remains relevant.]

I had the opportunity to test out AWS Lambda at a hackathon. It made me rethink what microservices architecture is really about.

What Lambda Solves

Before Lambda (and serverless functions generally), deploying backend code meant:

  1. Provisioning servers (EC2 instances, load balancers, auto-scaling groups)
  2. Managing server configuration (OS updates, security patches, runtime versions)
  3. Scaling infrastructure (monitoring load, spinning up/down instances)
  4. Paying for idle capacity (servers running 24/7 even with zero traffic)

Lambda flips this model. You write a function, upload it to AWS, and they handle everything else. You pay only for actual execution time, billed in 100 ms increments.
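A back-of-the-envelope sketch of that pricing model (the per-request and GB-second rates below are AWS's long-published list prices, the free tier is ignored, and the helper name is mine; treat the numbers as illustrative):

```javascript
// Rough Lambda cost estimate: a flat per-request charge plus a
// per GB-second charge, with duration rounded up to 100 ms
// increments (the 2016 billing model).
function estimateLambdaCost(invocations, durationMs, memoryMb) {
  const perRequest = 0.20 / 1e6;   // $0.20 per million requests
  const perGbSecond = 0.00001667;  // published GB-second rate
  const billedSeconds = Math.ceil(durationMs / 100) * 0.1;
  const gbSeconds = invocations * billedSeconds * (memoryMb / 1024);
  return invocations * perRequest + gbSeconds * perGbSecond;
}

// One million 200 ms invocations at 128 MB: roughly $0.62
console.log(estimateLambdaCost(1e6, 200, 128).toFixed(2));
```

Zero traffic really does mean zero compute cost, which is the part that was hard to get from always-on EC2 instances.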

The promise: Focus on code instead of infrastructure.

The reality: The deployment experience in 2016 was deliberately primitive.

The Lambda Model

The idea isn't new; Google App Engine did something similar earlier. You send your source code to AWS, it provisions the code onto managed servers, and it exposes an entry point such as index.handler. That handler can then be invoked over REST through API Gateway or triggered by other AWS services.

A basic Lambda function in 2016 looked like this:

exports.handler = function (event, context) {
  console.log('Processing event:', event);

  // Your business logic here
  const result = processData(event.data);

  // context.succeed/context.fail was the original Node.js
  // completion API; callbacks and async handlers came later
  context.succeed(result);
};

Simple enough. The complexity came from deployment.

The "Bad" Deployment Experience

The only way to deploy was uploading a ZIP file with your source code. You had two options:

  1. Direct upload: ZIP your code and upload through the AWS Console
  2. S3 upload: Push ZIP to S3, then reference it in Lambda

Both approaches felt rudimentary compared to modern deployment pipelines. No CI/CD integration. No automated testing. No gradual rollouts. Just ZIP and upload.

Coming from systems with git push deployments, this felt like a step backwards.

Why Friction Can Be Good

Here's my contrarian take: the terrible interface for updating files actually worked to Lambda's advantage.

The deployment friction discouraged frequent deploys. What I found myself doing instead was splitting code into small, focused pieces. Once a piece of code reaches stability, it doesn't need touching anymore, and you don't want to redeploy it just because adjacent code changed.

This constraint reinforced the core idea of microservices: doing just one small bit super reliably.

The Mental Shift

Traditional deployment encourages monolithic thinking:

  • "I'll add this feature to the existing service"
  • "Let me update this while I'm here"
  • "These functions are related, so they should live together"

Lambda's friction encourages modular thinking:

  • "Does this deserve its own function?"
  • "Can I make this completely independent?"
  • "What's the smallest unit I can deploy?"

The deployment pain made me think harder about boundaries. Each Lambda function became truly focused on a single responsibility.

How This Changed My Approach

At the hackathon, I started with one Lambda function doing everything. After the third painful ZIP upload, I split it:

  • auth-handler: Validates tokens, nothing else
  • data-processor: Transforms input, no side effects
  • notification-sender: Sends emails, decoupled from other logic

Each function could be updated independently. Each had a single reason to change. The friction forced better software engineering discipline.

The Broader Lesson

Lambda's clunky deployment wasn't an oversight—it reflected AWS's serverless philosophy. Functions should be small, focused, and stable. If you're deploying constantly, you're probably doing too much in one function.

Modern tooling has since smoothed the deployment experience with frameworks like Serverless, SAM, and Terraform. These tools are a clear productivity win, but something was lost along the way: the forcing function that made you think about modularity.

Sometimes friction is a feature, not a bug. The best constraints don't block you—they push you toward better solutions.

Lambda in 2016 vs. Today

Lambda's primitive deployment in 2016 taught me a valuable lesson about constraints and design. The platform has matured significantly since then (containerized functions, streaming responses, better monitoring), but the core insight remains: the best tools make you think differently about problems.

AWS Lambda pushed me to think small and modular. That turned out to be exactly the right idea.

Fast forward to 2025, and the serverless landscape has evolved significantly. Cloudflare Workers offers similar serverless capabilities with edge deployment and often simpler configuration than Lambda. The fundamental concepts from Lambda's early days still apply, but the tooling has gotten much better.