MicroServices and SpringCloud
1. What are the five main components of Spring Cloud?
Answer:
Initially, the five key components of Spring Cloud were:
- Eureka: Service registry.
- Ribbon: Client-side load balancer.
- Feign: Declarative service invocation.
- Hystrix: Circuit breaker.
- Zuul/Gateway: API gateway.
With the rise of Spring Cloud Alibaba, the mix of components used in our projects shifted to:
- Service Registration and Configuration Center: Nacos.
- Load Balancing: Ribbon.
- Service Invocation: Feign.
- Service Protection: Sentinel.
- API Gateway: Gateway.
2. What is service registration and discovery? How does Spring Cloud implement it?
Answer:
Service registration and discovery encompass three core functions: service registration, service discovery, and service health monitoring.
In our project, we used Eureka as the service registry:
- Service Registration: Providers register their service details (e.g., name, IP, port) with Eureka.
- Service Discovery: Consumers fetch service details from Eureka and use a load-balancing algorithm to call a service instance.
- Service Health Monitoring: Providers send regular heartbeats to Eureka to report their status; if no heartbeat is received within a certain time, the instance is removed.
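As a small illustration of the discovery step, a consumer can look up registered instances through Spring Cloud's DiscoveryClient abstraction. This is a minimal sketch; the service name "order-service" is a hypothetical example, not something from the project described above.

```java
import java.util.List;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Service;

@Service
public class InstanceLookup {

    private final DiscoveryClient discoveryClient;

    public InstanceLookup(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Fetch all healthy instances of the hypothetical "order-service"
    // from the registry (Eureka here, but the abstraction is registry-agnostic).
    public List<ServiceInstance> orderInstances() {
        return discoveryClient.getInstances("order-service");
    }
}
```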
3. What are the differences between Nacos and Eureka?
Answer:
From my experience using Nacos as a registry, the key similarities and differences between Nacos and Eureka are:
Similarities:
- Both support service registration and discovery.
- Both use heartbeat checks for health monitoring.
Differences:
- Health Check:
- Nacos can actively probe the provider's health.
- Eureka relies solely on client heartbeats.
- Instance Types:
- Nacos distinguishes between temporary and non-temporary instances and uses different health-check strategies.
- Change Notifications:
- Nacos can push service-list updates to consumers, whereas Eureka clients rely on periodic polling of the registry.
- Consistency Models:
- Nacos defaults to AP mode, switching to CP mode for non-temporary instances.
- Eureka operates purely in AP mode.
4. How is load balancing implemented in your project?
Answer:
We used Ribbon in Spring Cloud for client-side load balancing. The Feign client integrates Ribbon internally, simplifying its use.
During remote service calls, Ribbon:
- Fetches the service's instance list from the registry.
- Selects an instance based on a routing strategy (default is round-robin).
- Sends the request to the chosen instance.
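For illustration, a declarative call through Feign might look like the sketch below. The client name "order-service" and the endpoint are assumptions for the example; Feign resolves the service name against the registry and lets the load balancer pick one instance per request (the application also needs @EnableFeignClients).

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Hypothetical client: "order-service" is the name registered in the registry.
// The embedded load balancer (Ribbon) chooses a concrete instance for each call.
@FeignClient(name = "order-service")
public interface OrderFeignClient {

    // Returned as a raw JSON string here just to keep the sketch self-contained.
    @GetMapping("/orders/{id}")
    String findById(@PathVariable("id") Long id);
}
```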
5. What are the load balancing strategies provided by Ribbon?
Answer:
Ribbon supports several load balancing strategies, including:
- RoundRobinRule: Default round-robin strategy.
- WeightedResponseTimeRule: Chooses based on server response time.
- RandomRule: Randomly selects a server.
- ZoneAvoidanceRule: Prefers servers in the same zone/region.
6. How do you customize Ribbon's load balancing strategy?
Answer:
To customize Ribbon's load-balancing strategy, there are two approaches:
- Create a class implementing the IRule interface and register it as a bean to define a global strategy (see the sketch below).
- Specify a strategy for an individual service in the client's configuration file.
7. What is service avalanche, and how do you address it?
Answer:
A service avalanche occurs when the failure of one service causes a cascading failure across the service chain.
To address this, we use:
- Service Degradation: Temporarily disable or simplify non-critical functions during traffic spikes so core functionality stays available.
- Service Circuit Breaking: Enable a circuit breaker to halt requests when the failure rate exceeds a threshold.
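A hedged sketch of degradation on the consumer side, assuming OpenFeign with a circuit breaker (Hystrix or Sentinel) enabled for Feign: the fallback class returns a safe default when the call fails or the breaker is open, so the failure does not cascade upstream. The "stock-service" client and its endpoint are hypothetical.

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Hypothetical client with a fallback used when the remote call fails
// or the circuit breaker is open.
@FeignClient(name = "stock-service", fallback = StockFallback.class)
interface StockClient {

    @GetMapping("/stock/{skuId}")
    Integer stockOf(@PathVariable("skuId") Long skuId);
}

@Component
class StockFallback implements StockClient {

    @Override
    public Integer stockOf(Long skuId) {
        // Degraded response: report zero stock instead of propagating the error.
        return 0;
    }
}
```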
8. How do you monitor your microservices?
Answer:
We use SkyWalking for monitoring:
- It tracks service interfaces, physical instances, and latency.
- Alerts are configured to notify stakeholders via SMS or email when anomalies are detected.
9. Have you implemented rate-limiting in your project? How?
Answer:
Yes, we implemented rate-limiting to handle sudden traffic surges:
- Version 1: Used Nginx for rate-limiting with a leaky bucket algorithm, limiting by IP.
- Version 2: Switched to Spring Cloud Gateway's RequestRateLimiter filter, which uses a token bucket algorithm and supports IP or path-based limiting.
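A hedged sketch of the IP-based keying used with the RequestRateLimiter filter, assuming the Redis-backed limiter that Spring Cloud Gateway ships with. The KeyResolver bean decides what each token bucket is keyed on; the rate itself (replenish rate, burst capacity) is configured on the route.

```java
import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;

@Configuration
public class RateLimitConfig {

    // Key each bucket by the caller's IP address. (Simplified: behind a proxy
    // you would read the forwarded-for header instead, and the remote address
    // may be null in some setups.)
    @Bean
    public KeyResolver ipKeyResolver() {
        return exchange -> Mono.just(
                exchange.getRequest().getRemoteAddress().getAddress().getHostAddress());
    }
}
```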
10. What are common rate-limiting algorithms?
Answer:
Common rate-limiting algorithms include:
- Leaky Bucket: Processes requests at a fixed rate, smoothing bursts.
- Token Bucket: Tokens are generated at a fixed rate, and requests require a token, accommodating fluctuations.
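To make the token bucket idea concrete, here is a minimal, framework-free sketch: tokens accrue at a fixed rate up to the bucket's capacity, and a request is admitted only if a token is available, which keeps the average rate bounded while absorbing short bursts.

```java
public class TokenBucket {

    private final long capacity;         // maximum tokens the bucket can hold
    private final double refillPerNano;  // tokens added per elapsed nanosecond
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill for the elapsed time, capped at capacity: this is what lets the
        // bucket absorb bursts while enforcing the long-run rate.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;   // request admitted
        }
        return false;      // rate limit exceeded
    }
}
```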
11. What is the CAP theorem?
Answer:
The CAP theorem is a foundational concept in distributed systems, stating that it is impossible to achieve all three of the following simultaneously:
- Consistency (C): All nodes see the same data at the same time.
- Availability (A): Every request receives a response.
- Partition Tolerance (P): The system continues to operate despite network partitions.
12. Why can't distributed systems guarantee both consistency and availability?
Answer:
In a distributed system, ensuring partition tolerance is essential. When a partition occurs:
- Guaranteeing consistency may require sacrificing availability (e.g., by rejecting reads/writes until consistency is restored).
- Guaranteeing availability may lead to stale or inconsistent data.
13. What is the BASE theorem?
Answer:
The BASE theorem extends the AP model of the CAP theorem, prioritizing:
- Basic Availability: The system remains operational during failures.
- Soft State: Data can be temporarily inconsistent.
- Eventual Consistency: Data becomes consistent over time.
14. What distributed transaction solution do you use?
Answer:
We use Seata in AT mode for distributed transactions. AT mode achieves eventual consistency by recording undo logs (before/after images) of the business data changed in each branch transaction and replaying them to roll the branches back if the global transaction fails.
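The entry point of the global transaction is annotated; below is a hedged sketch assuming the Seata Spring integration. The service and client names are placeholders, not the project's real code.

```java
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

// Placeholder clients for the two downstream services (hypothetical).
interface OrderClient     { void create(Long userId, Long skuId); }
interface InventoryClient { void deduct(Long skuId, int count); }

@Service
public class PlaceOrderService {

    private final OrderClient orderClient;
    private final InventoryClient inventoryClient;

    public PlaceOrderService(OrderClient orderClient, InventoryClient inventoryClient) {
        this.orderClient = orderClient;
        this.inventoryClient = inventoryClient;
    }

    // Opens a Seata global transaction; each downstream write becomes a branch
    // transaction, and Seata uses its undo logs to roll them back on failure.
    @GlobalTransactional(rollbackFor = Exception.class)
    public void placeOrder(Long userId, Long skuId) {
        orderClient.create(userId, skuId);
        inventoryClient.deduct(skuId, 1);
    }
}
```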
15. How do you ensure idempotency in distributed service interfaces?
Answer:
We use tokens and Redis for idempotency:
- Generate a token and store it in Redis when the user initiates an action.
- Validate the token during the request. If valid, process the request and delete the token.
- Reject duplicate tokens to ensure each request is processed only once.
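A hedged sketch of the token check, assuming StringRedisTemplate. The key point is that deleting the token is the atomic "claim": only the first request holding a token can delete it, so replays with the same token are rejected. Key names and TTL are illustrative.

```java
import java.util.UUID;
import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class IdempotencyService {

    private final StringRedisTemplate redis;

    public IdempotencyService(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // Issued when the user opens the form/initiates the action; stored with a TTL.
    public String issueToken() {
        String token = UUID.randomUUID().toString();
        redis.opsForValue().set("idempotent:" + token, "1", 30, TimeUnit.MINUTES);
        return token;
    }

    // DELETE is atomic in Redis: the first request wins, duplicates see the key
    // already gone and are rejected.
    public boolean tryConsume(String token) {
        return Boolean.TRUE.equals(redis.delete("idempotent:" + token));
    }
}
```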
16. What routing strategies does XXL-Job support?
Answer:
XXL-Job supports routing strategies such as:
- Round Robin: Distributes tasks evenly.
- Failover: Redirects tasks to healthy instances.
- Sharding Broadcast: Triggers the task on every executor instance at once, passing each one a shard index and shard total so the instances can split the work.
17. How do you handle XXL-Job task execution failures?
Answer:
To handle failures, we:
- Use a failover routing strategy to retry tasks on healthy instances.
- Configure task retry counts.
- Enable logging and email alerts to notify stakeholders.
18. How do you manage tasks with high data volume in XXL-Job?
Answer:
We use sharding broadcast routing to distribute the workload across multiple instances. The task logic dynamically assigns data partitions based on the shard ID and total shards.
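A hedged sketch of a sharded job handler, assuming XXL-Job 2.3+ where the shard parameters are read from XxlJobHelper. The handler name and the modulo partitioning are illustrative; each executor instance processes only the rows that map to its shard.

```java
import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;

@Component
public class OrderSyncJob {

    @XxlJob("orderSyncShardingJob")
    public void execute() {
        int shardIndex = XxlJobHelper.getShardIndex();  // this instance's shard number
        int shardTotal = XxlJobHelper.getShardTotal();  // total number of executor instances

        // Hypothetical partitioning: each instance handles ids where
        // id % shardTotal == shardIndex, e.g. via
        // "SELECT ... WHERE MOD(id, #{shardTotal}) = #{shardIndex}".
        XxlJobHelper.log("processing shard {} of {}", shardIndex, shardTotal);
    }
}
```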