Use Domain-Driven Design (DDD) to define service boundaries based on business capabilities. Key principles: (1) Bounded contexts: each microservice encapsulates a specific business domain with clear boundaries within which a model is defined and applicable, (2) One service = one responsibility (do one thing well), (3) Independent data storage per service (no shared databases). DDD phases: Strategic DDD defines the large-scale structure and business capabilities; Tactical DDD implements aggregates and entities. Service boundary techniques: (a) Event Storming workshops to map system events and identify logical boundaries, (b) Context mapping to control how bounded contexts interact. Implementation sketch: const userService = { domain: 'user-management', database: 'users-db', api: '/api/users' }; Best practices: (1) Low coupling between aggregates, high cohesion within aggregates, (2) Each service has an independent deployment pipeline, (3) Services communicate via well-defined APIs, (4) Regularly review and refactor boundaries as the business evolves. DDD is iterative: boundaries aren't fixed and may split as the application grows.
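The service descriptor above can be extended into a small runnable sketch: one descriptor per bounded context, each with its own database and API prefix. All names here (`order-processing`, `orders-db`, the `ownerOf` helper) are illustrative, not part of any framework.

```javascript
// Hypothetical sketch: one descriptor per bounded context, each owning its
// own database and API prefix (no shared storage between contexts).
const services = {
  'user-management': { database: 'users-db', api: '/api/users' },
  'order-processing': { database: 'orders-db', api: '/api/orders' },
};

// Resolve which bounded context owns a given request path.
function ownerOf(path) {
  return (
    Object.keys(services).find((name) => path.startsWith(services[name].api)) ??
    null
  );
}

console.log(ownerOf('/api/orders/42')); // 'order-processing'
```

A router like this makes the boundary explicit: a request either falls inside exactly one context or is rejected, which is the property the bounded-context principle asks for.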
Architecture Patterns FAQ & Answers
9 expert Architecture Patterns answers researched from official documentation. Every answer cites authoritative sources you can verify.
Use synchronous (REST, gRPC) for real-time requests and asynchronous (message queues) for decoupling. Synchronous patterns: (1) REST: simple and ubiquitous, uses JSON over HTTP with GET/POST/PUT/DELETE methods, best for external APIs and simple CRUD, (2) gRPC: high-performance RPC with Protocol Buffers (binary serialization), often 2-10x faster than REST, supports streaming and bi-directional communication over HTTP/2, best for internal service communication and real-time services. Asynchronous patterns: (3) Message queues (Kafka, RabbitMQ, NATS): producers and consumers operate independently, enabling eventual consistency, loose coupling, and fault tolerance, best for event-driven architectures and high availability. Performance: binary protocols (gRPC, AMQP) put less load on the network than text-based ones (REST with JSON). When to use: REST for simplicity and external APIs, gRPC for performance-critical internal calls, message queues for decoupling and scalability. Modern systems often combine all three based on specific use-case requirements.
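The synchronous/asynchronous contrast can be sketched in a few lines. Everything below is an in-memory stand-in: a real system would use an HTTP or gRPC client for the synchronous call and a broker such as Kafka or RabbitMQ in place of the array-backed queue.

```javascript
// Synchronous style: the caller waits for the downstream answer (REST/gRPC).
function syncGetUser(id) {
  return { id, name: 'Ada' }; // stubbed downstream response
}

// Asynchronous style: producer and consumer are decoupled in time.
const queue = [];
function publish(event) { queue.push(event); }        // producer returns immediately
function consume() { return queue.shift() ?? null; }  // consumer drains on its own schedule

const user = syncGetUser(1);                     // tight coupling, immediate result
publish({ type: 'UserSignedUp', id: user.id }); // loose coupling, eventual processing
```

The key difference is visible in the call sites: `syncGetUser` cannot complete if the downstream service is down, while `publish` succeeds regardless and the event is processed whenever a consumer picks it up.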
Use event sourcing with the CQRS pattern and message brokers (Kafka or RabbitMQ). Event sourcing: store state changes as an immutable event log instead of modifying state in place. Events drive state changes: const events = [{ type: 'UserCreated', data: {...} }, { type: 'EmailUpdated', data: {...} }]; CQRS (Command Query Responsibility Segregation): split the application into a command side (updates) and a query side (reads), enabling independent optimization for writes vs reads. Message brokers: (1) Apache Kafka: distributed streaming platform, high throughput (millions of events/sec), fault-tolerant, use for event streaming and as a central event backbone, (2) RabbitMQ: reliable message broker, supports multiple protocols, flexible routing, use for task queues and RPC. Axon Framework (Java) simplifies a combined event sourcing + CQRS implementation. Best practices: (1) Events are immutable and append-only, (2) Use Kafka for high-volume streaming and RabbitMQ for complex routing, (3) Implement event replay for rebuilding state, (4) Handle eventual consistency on the read side.
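Event replay, mentioned in best practice (3), is worth seeing concretely: current state is never stored directly but derived by folding over the log. The event shapes and field names below are illustrative.

```javascript
// Sketch: rebuild current state by replaying an immutable event log.
const events = [
  { type: 'UserCreated', data: { id: 1, email: 'a@example.com' } },
  { type: 'EmailUpdated', data: { id: 1, email: 'b@example.com' } },
];

function replay(log) {
  // Fold each event into the accumulated state; unknown events are ignored.
  return log.reduce((state, event) => {
    switch (event.type) {
      case 'UserCreated': return { ...state, ...event.data };
      case 'EmailUpdated': return { ...state, email: event.data.email };
      default: return state;
    }
  }, {});
}

const current = replay(events);
console.log(current); // { id: 1, email: 'b@example.com' }
```

Because the log is append-only, replaying it is deterministic: the same events always yield the same state, which is what makes rebuilding read models or recovering from a corrupted projection safe.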
Use proper HTTP methods, status codes, and versioning for RESTful API design. HTTP methods (CRUD): GET (read), POST (create), PUT (create or replace), PATCH (partial update), DELETE (remove). Status codes: 200 OK (success), 201 Created (new resource), 204 No Content (successful delete), 400 Bad Request (invalid input), 404 Not Found (resource doesn't exist), 500 Internal Server Error. Versioning strategies: (1) URI versioning: /v1/users (most common, clear and visible), (2) Header versioning: a Custom-Version: v2 header (cleaner URLs), (3) Query parameter: /users?version=2 (simple but less robust), (4) Content negotiation: Accept: application/vnd.company.v2+json (follows HTTP standards). Best practices: (1) Use plural lowercase nouns (/users, not /getUser), (2) Keep the hierarchy shallow (/users/123/orders, not /api/v1/users/123/orders/456/items), (3) Return appropriate status codes, (4) Implement proper error responses with details, (5) Document with OpenAPI/Swagger. Feature-based versioning is an emerging 2025 trend: clients request specific feature sets via headers instead of a full API version.
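The method-to-status-code mapping above can be condensed into a toy handler. This is a framework-free sketch (the `users` store and `handle` signature are invented for illustration), but each branch returns the status code the text prescribes.

```javascript
// Toy resource handler mapping CRUD methods to the status codes from the text.
const users = new Map([[1, { id: 1, name: 'Ada' }]]);

function handle(method, id, body) {
  switch (method) {
    case 'GET':
      return users.has(id)
        ? { status: 200, body: users.get(id) } // 200 OK
        : { status: 404 };                     // 404 Not Found
    case 'POST':
      users.set(body.id, body);
      return { status: 201, body };            // 201 Created
    case 'DELETE':
      if (!users.has(id)) return { status: 404 };
      users.delete(id);
      return { status: 204 };                  // 204 No Content
    default:
      return { status: 400 };                  // unsupported in this sketch
  }
}
```

In a real service the same mapping would live behind routes like `GET /v1/users/123`, with the version prefix handled by the router rather than by each handler.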
Separate the command (write) and query (read) models and optimize each independently. CQRS splits the application into two parts: (1) Command side: handles writes with complex business logic, validates and processes state changes, writes to a write-optimized database, (2) Query side: handles reads with simple, focused queries against a read-optimized database (possibly a different database type). Implementation: const commandService = { createUser: async (data) => { validate(data); await writeDB.insert(data); await eventBus.publish('UserCreated', data); } }; const queryService = { getUser: async (id) => readDB.findById(id) }; Database options: (1) Same RDBMS with read replicas, (2) Write to an RDBMS, read from NoSQL (e.g., PostgreSQL → MongoDB), (3) Event sourcing on the write side with materialized views on the read side. Benefits: clean separation of concerns, independent scaling, reads and writes each optimized. Challenges: added complexity and eventual consistency between the command and query sides. CQRS does NOT depend on event sourcing or DDD, but works well with them.
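The inline snippet above references `writeDB`, `readDB`, and `eventBus` without defining them; here is one self-contained (synchronous, in-memory) way to fill those in. The projection wired to `UserCreated` is what keeps the read model in step with the write model, and it is exactly where eventual consistency would appear once the bus becomes a real broker.

```javascript
// In-memory stand-ins for the CQRS pieces (all names illustrative).
const writeDB = [];        // write-optimized store: append-style inserts
const readDB = new Map();  // read-optimized store: keyed lookups

const handlers = {};
const eventBus = {
  on(type, fn) { (handlers[type] ??= []).push(fn); },
  publish(type, data) { for (const fn of handlers[type] ?? []) fn(data); },
};

// Projection: update the read model whenever the write side publishes.
eventBus.on('UserCreated', (user) => readDB.set(user.id, user));

const commandService = {
  createUser(data) {
    if (!data.id) throw new Error('id required'); // write-side validation
    writeDB.push(data);
    eventBus.publish('UserCreated', data);
  },
};
const queryService = { getUser: (id) => readDB.get(id) ?? null };
```

With an asynchronous broker the `publish` call would return before the projection runs, so `getUser` could briefly return stale data — the eventual-consistency challenge the text names.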
Use the Saga pattern with compensating transactions for distributed systems (not 2PC). Two-Phase Commit (2PC): strong consistency, locks resources, all nodes commit or roll back together. Limitations: locks create a performance bottleneck, most NoSQL databases don't support it, and it works poorly in microservices. Saga pattern: manages a transaction as a sequence of local transactions; each service commits locally, and if any step fails, compensating transactions roll back the completed steps. Implementation: (1) Orchestration: a central coordinator manages the saga workflow, (2) Choreography: services listen to events and react. Example: OrderService → PaymentService → ShippingService. If payment fails, compensate: CancelOrder → RefundPayment. Code: const compensate = { CancelOrder: async (orderId) => orders.delete(orderId), RefundPayment: async (paymentId) => payments.refund(paymentId) }; Requirements: compensating transactions must be idempotent and retryable. Tradeoffs: a saga provides eventual consistency (not immediate), is more scalable (no global locks), and lacks read isolation (users may see intermediate states). Best for: long-running transactions, microservices, NoSQL databases.
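An orchestration-style coordinator can be sketched as a loop over steps with compensation in reverse on failure. This is a simplified synchronous sketch (real sagas run asynchronously across services); the step names mirror the order/payment example above but are otherwise invented.

```javascript
// Orchestration-style saga: run local steps in order; if one throws,
// run the compensations of the completed steps in reverse.
function runSaga(steps) {
  const completed = [];
  try {
    for (const step of steps) {
      step.action();
      completed.push(step);
    }
    return { ok: true };
  } catch (err) {
    // Compensations should be idempotent and retryable, as the text notes.
    for (const step of completed.reverse()) step.compensate();
    return { ok: false, error: err.message };
  }
}

const log = [];
const result = runSaga([
  { action: () => log.push('order-created'),
    compensate: () => log.push('order-cancelled') },
  { action: () => { throw new Error('card declined'); },
    compensate: () => log.push('payment-refunded') },
]);
// result: { ok: false, error: 'card declined' }
// log:    ['order-created', 'order-cancelled']
```

Note that only completed steps are compensated: the failed payment step never succeeded, so `payment-refunded` never runs, while the already-created order is cancelled.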
Use Kong or AWS API Gateway for centralized routing, authentication, and rate limiting. API gateway benefits: simplified client communication, a single entry point, enhanced security, load balancing, caching, rate limiting, and request/response transformation. Kong Gateway (open source): highly customizable, benchmarks report on the order of 10,000 RPS at ~12.5 ms latency, supports OAuth2/JWT/mTLS/API keys, integrates with Keycloak/Okta/Auth0. Rate limiting (illustrative pseudo-config): kong.plugins = { rateLimit: { minute: 100, hour: 1000 } }; Authentication (pseudo-config): kong.plugins = { jwt: { secret: 'key', algorithm: 'HS256' } }; AWS API Gateway (managed): roughly 8,000 RPS in comparable benchmarks, seamless AWS integration (Lambda, Cognito, CloudWatch), about $3.50 per million requests, with vendor lock-in as the tradeoff. Features: (1) Per-endpoint rate limits, (2) Version-specific auth (e.g., JWT for v2, token for v1), (3) Request throttling, (4) API keys and usage plans. Best practices: (1) Use Kong for self-hosted, high-performance needs, (2) Use AWS API Gateway for serverless and the AWS ecosystem, (3) Implement circuit breakers for upstream failures, (4) Cache responses to reduce backend load.
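The declarative `minute: 100` style of rate limit above hides a simple mechanism. Here is a minimal fixed-window limiter of the same kind, written from scratch for illustration (it is not Kong's implementation; the `makeRateLimiter` name and injectable clock are assumptions of this sketch).

```javascript
// Fixed-window rate limiter: allow at most `limit` calls per client per window.
// `now` is injectable so the window can be controlled in tests.
function makeRateLimiter(limit, windowMs, now = Date.now) {
  const windows = new Map(); // clientId -> { start, count }
  return function allow(clientId) {
    const t = now();
    const w = windows.get(clientId);
    if (!w || t - w.start >= windowMs) {
      windows.set(clientId, { start: t, count: 1 }); // new window
      return true;
    }
    w.count += 1;
    return w.count <= limit; // reject once the window's quota is spent
  };
}
```

A gateway applies this per route and per consumer; production systems usually prefer sliding-window or token-bucket variants to avoid bursts at window boundaries.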
Use horizontal scaling with load balancing and database sharding for scalability. Horizontal scaling: add more instances/nodes and distribute load instead of upgrading a single server (vertical scaling). Key components: (1) Load balancing: distribute traffic evenly across servers (nginx, HAProxy, AWS ALB), preventing single-server overload and enabling servers to be added or removed dynamically. (2) Database sharding: split data across multiple database instances (shards); each shard holds a subset of the data with the same schema, enabling horizontal scaling of the database. Sharding strategies: (a) Range-based: shard by ID ranges (users 1-1000, 1001-2000), (b) Hash-based: shard by hash(user_id) % num_shards, (c) Geographic: shard by user location. Critical: choose a balanced sharding key to avoid hotspots and ensure even data distribution. Implementation: const shardId = hash(userId) % totalShards; const db = shardConnections[shardId]; Additional techniques: (1) CDNs for static content distribution, (2) Caching (Redis, Memcached) to reduce database load, (3) Database replication for read scaling, (4) Message queues for asynchronous processing. The 2025 focus: combining all of these techniques for consistent performance under peak load.
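The `hash(userId) % totalShards` rule above can be made runnable by supplying a concrete hash. The djb2-style hash and four-shard setup below are illustrative stand-ins; production systems typically use murmur or xxhash, often with consistent hashing so that resharding moves less data.

```javascript
// Runnable version of the hash-based sharding rule from the text.
const totalShards = 4;
const shardConnections = ['shard-0', 'shard-1', 'shard-2', 'shard-3'];

// Simple deterministic string hash (djb2-style), for illustration only.
function hash(key) {
  let h = 5381;
  for (const ch of String(key)) h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0;
  return h;
}

function shardFor(userId) {
  return shardConnections[hash(userId) % totalShards];
}
```

The property that matters is determinism: the same `userId` always lands on the same shard, so reads find the data that writes placed there.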
Use the Resilience4j library for circuit breaking and fault tolerance (Hystrix is deprecated). A circuit breaker prevents cascading failures by rejecting requests after repeated failures of a downstream service. States: (1) Closed: requests pass through normally, (2) Open: requests fail immediately without calling the service (after a threshold of failures), (3) Half-Open: test whether the service has recovered with a limited number of requests. Resilience4j vs Hystrix: Netflix put Hystrix into maintenance mode in 2018, while Resilience4j is actively maintained (as of 2025), with a lightweight functional design and no separate thread pools (it executes in the current thread). Setup: add the spring-cloud-starter-circuitbreaker-resilience4j dependency. Configuration: @CircuitBreaker(name = "userService", fallbackMethod = "fallback") public User getUser(Long id) { return userClient.get(id); } public User fallback(Long id, Exception e) { return User.cached(id); } Additional Resilience4j features: Rate Limiter (throttle overly frequent requests), Retry (automatic retries with backoff), Bulkhead (limit concurrent requests). Best practices: (1) Configure the failure threshold (e.g., 5 failures → open), (2) Set a timeout for the half-open state, (3) Implement meaningful fallbacks, (4) Monitor circuit-breaker state changes.
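The Spring annotation above hides the state machine, so here is a minimal JavaScript sketch of the three states for illustration. It is not Resilience4j's implementation; the `makeBreaker` name, thresholds, and injectable clock are assumptions of this sketch.

```javascript
// Minimal circuit breaker: closed -> open after `threshold` consecutive
// failures; open -> half-open after `resetMs`; a success closes it again.
function makeBreaker(fn, { threshold = 5, resetMs = 30000, now = Date.now } = {}) {
  let failures = 0;
  let openedAt = null;
  return {
    state() {
      if (openedAt === null) return 'closed';
      return now() - openedAt >= resetMs ? 'half-open' : 'open';
    },
    invoke(...args) {
      if (this.state() === 'open') throw new Error('circuit open: failing fast');
      try {
        const result = fn(...args);   // half-open lets this trial call through
        failures = 0; openedAt = null; // success closes the circuit
        return result;
      } catch (err) {
        failures += 1;
        if (failures >= threshold) openedAt = now(); // trip to open
        throw err;
      }
    },
  };
}
```

While open, callers fail in microseconds instead of waiting on timeouts against a dead service, which is precisely how the pattern stops failures from cascading.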