Serverless scales to zero. Traditional servers scale predictably.
Serverless functions (Vercel, AWS Lambda) and traditional servers (long-running Node.js process) have different performance characteristics, cost models, and limitations. For Next.js API routes, serverless is the default. For WebSockets and background jobs, you need a traditional server.
An architecture decision between serverless and a traditional long-running server, often triggered by the limitations of serverless: connection pooling, cold starts, and timeouts.
Serverless functions (AWS Lambda, Vercel Functions) are stateless, short-lived compute units. Each invocation is independent: no persistent connections, no shared memory between invocations, and a maximum execution time (10 seconds by default on Vercel; up to 15 minutes on Lambda).
Serverless is the right model for:
- REST API endpoints (handle request → return response)
- Webhook handlers
- Image transformation
- Scheduled tasks with short runtime
- Any stateless request/response workload
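A stateless endpoint like those above fits serverless naturally: everything the handler needs arrives with the request, and nothing persists after the response. A minimal sketch in the Web-standard Request/Response style used by Next.js App Router route handlers (the `/api/health` path and the payload are illustrative):

```typescript
// Sketch of a stateless serverless endpoint (illustrative path and payload).
// Each invocation is independent: no connections or memory carried over.
export async function GET(_req: Request): Promise<Response> {
  return new Response(JSON.stringify({ status: "ok", ts: Date.now() }), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}
```

Because the handler touches no shared state, the platform can run zero, one, or a hundred copies of it without coordination.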
Where serverless breaks down:
WebSockets. Serverless functions can't maintain persistent connections. WebSockets require a long-lived connection — you need a traditional server (or a managed WebSocket service like Pusher/Ably).
Database connection pooling. Each serverless invocation opens a new database connection. At scale, hundreds of concurrent Lambda invocations mean hundreds of concurrent database connections, and Postgres has hard connection limits. Solutions: PgBouncer (a connection pooler), Neon's pooled connection string, or a traditional server that maintains a single shared connection pool.
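A common application-side mitigation is to cache the pool in module scope so warm invocations of the same instance reuse it instead of reconnecting. A sketch of that caching logic (the `memoizePool` helper is hypothetical, written against a factory so it can be shown in isolation; in a real function the factory would construct a `pg` `Pool` with a small `max`, pointed at a pooler like PgBouncer):

```typescript
// Sketch: cache a connection pool in module scope so warm invocations
// reuse it. The factory is injected so the caching logic stands alone;
// with the real `pg` package it would be something like:
//   const getPool = memoizePool(() =>
//     new Pool({ connectionString: process.env.DATABASE_URL, max: 1 }));
type PoolFactory<T> = () => T;

function memoizePool<T>(create: PoolFactory<T>): PoolFactory<T> {
  let cached: T | undefined;
  return () => {
    if (cached === undefined) cached = create(); // runs once, on the cold start
    return cached; // every warm invocation reuses the same pool
  };
}
```

Note this only helps per instance; under high concurrency the platform still spins up many instances, which is why a pooler in front of Postgres remains the primary fix.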
Long-running jobs. Serverless functions time out. Background jobs that run for minutes (video processing, bulk data imports, report generation) need a different compute model: queues + dedicated workers, not serverless.
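The queue-plus-worker split above can be sketched with an in-memory queue standing in for a real broker (BullMQ, SQS, or similar in production; the job names are illustrative). The API route only enqueues and returns; a long-lived worker process, with no serverless timeout, drains the queue:

```typescript
// Sketch of the queue + dedicated worker pattern. An in-memory array
// stands in for a real broker (BullMQ/SQS/etc. in production).
type Job = { name: string; payload: unknown };

class JobQueue {
  private jobs: Job[] = [];

  // Called from the API route: cheap, returns immediately.
  enqueue(job: Job): void {
    this.jobs.push(job);
  }

  // Called from a long-lived worker process, which can run for minutes.
  async drain(handler: (job: Job) => Promise<void>): Promise<number> {
    let processed = 0;
    for (let job = this.jobs.shift(); job; job = this.jobs.shift()) {
      await handler(job);
      processed++;
    }
    return processed;
  }
}
```

The point of the split is that the user-facing request finishes in milliseconds while the slow work happens elsewhere, on compute sized for it.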
Shared state. Serverless functions can't reliably share in-memory state between invocations: a module-level cache may survive on a warm instance, but separate instances never see each other's memory. Solutions: Redis (or another shared store), or a traditional server.
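One way to structure this is to program against a small shared-store interface rather than a module-level `Map`, so the state lives outside the function instance. A sketch with an in-memory stub for local use (hypothetical interface; in production the same interface would wrap a Redis client such as ioredis, not shown):

```typescript
// Sketch: depend on a shared-store interface instead of in-process memory.
// The in-memory stub is for local development/tests; a production
// implementation would delegate to Redis GET/SET.
interface SharedStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

class InMemoryStore implements SharedStore {
  private data = new Map<string, string>();
  async get(key: string) {
    return this.data.get(key) ?? null;
  }
  async set(key: string, value: string) {
    this.data.set(key, value);
  }
}
```

Handlers written against `SharedStore` behave identically whether one instance or a hundred are running, because the state is external to all of them.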
Architecture that uses serverless for stateless API endpoints and traditional servers for stateful workloads that don't fit serverless constraints
Serverless by default for Next.js API routes. Traditional servers (via Fly.io or Railway) for WebSocket servers and background job workers.
One honest number to start.
Fixed-scope, fixed-price. The number below is the starting point — final scope is built from your brief.
Three steps, every time.
The same repeatable engagement on every project. No surprises, no mystery, no billable ambiguity.
Brief & discovery.
We send you questions, then get on a call. Output: a written scope with every step, feature, and integration listed.
Build & ship.
Fixed schedule, weekly reviews. No scope creep unless you change the scope — and if you do, we reprice it transparently.
Warranty & retainer.
30-day warranty on every launch. Most clients stay on a monthly retainer for ongoing features and maintenance.
Why Fixed-Price Matters Here
Infrastructure architecture decisions are made at project start. Included in the proposal.
Questions, answered.
Do cold starts matter?
Serverless cold starts (the latency of initializing a new function instance) are a real issue for user-facing endpoints. Vercel's infrastructure minimizes cold starts for Next.js; Lambda cold starts are more noticeable. Edge functions (lightweight runtimes deployed in many locations) reduce cold starts to near zero. For latency-sensitive endpoints, warm-up strategies or edge deployment matter.
Can I run a traditional long-running server on Vercel?
No. Vercel is a serverless platform. For a traditional long-running Node.js server, use Fly.io, Railway, or AWS ECS/EC2.
Tell Ryel about your project.
Describe what you’re building and what outcome you need. You’ll have a written, fixed-price scope within the week.