Message Queue
November 21, 2024
1. How does RabbitMQ ensure message delivery without loss?
- Answer: We use RabbitMQ to keep the dual-write data between MySQL and Redis consistent, so the messages themselves must not be lost. The specific measures are:
- Enable producer acknowledgment to ensure messages are delivered to the queue; if there is an error, log it and fix the data.
- Enable persistence to ensure messages are not lost in the queue before consumption; persistence needs to be applied to the exchange, queue, and the message itself.
- Enable consumer acknowledgment, so a message is removed from the queue only after it has been processed successfully, and configure a retry count. For example, we retry 3 times; if the message still fails, it is routed to an exception exchange for manual handling.
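The idea can be sketched with the plain RabbitMQ Java client; the exchange, queue, routing key, and broker address below are placeholders rather than real project settings:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class ReliablePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                       // assumed broker address
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // Durable exchange and queue so the definitions survive a broker restart.
            channel.exchangeDeclare("order.direct", "direct", true);
            channel.queueDeclare("order.queue", true, false, false, null);
            channel.queueBind("order.queue", "order.direct", "order.created");

            // Producer acknowledgments (publisher confirms).
            channel.confirmSelect();

            // PERSISTENT_TEXT_PLAIN marks the message itself as persistent (deliveryMode = 2).
            channel.basicPublish("order.direct", "order.created",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "order-1001".getBytes());

            // Block until the broker confirms the message reached the queue;
            // on failure we would log the payload and repair the data later.
            if (!channel.waitForConfirms(5_000)) {
                System.err.println("Broker did not confirm the message, log it for repair");
            }
        }
    }
}
```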
2. How to solve the problem of duplicate message consumption in RabbitMQ?
- Answer: Yes, we have run into duplicate consumption. Our approach:
- Understand the cause: with consumer acknowledgment enabled, if the service crashes after processing a message but before the acknowledgment reaches the broker, the same message is redelivered after restart.
- Deduplicate with a unique business identifier: look it up in the database before processing; if no record exists, process the message and record it, otherwise skip it to avoid applying the same message twice (see the sketch below).
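A rough sketch of the unique-business-id check using plain JDBC and MySQL's INSERT IGNORE; the table, column, JDBC URL, and credentials are illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class IdempotentConsumer {

    // Returns true if this business id was recorded just now, false if it was seen before.
    static boolean tryMarkProcessed(Connection db, String businessId) throws SQLException {
        // Relies on a UNIQUE / PRIMARY KEY constraint on processed_message.business_id.
        String sql = "INSERT IGNORE INTO processed_message (business_id) VALUES (?)";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setString(1, businessId);
            return ps.executeUpdate() == 1;   // 0 rows affected -> duplicate, skip processing
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "root", "secret")) {
            String businessId = "order-1001";             // taken from the message body
            if (tryMarkProcessed(db, businessId)) {
                System.out.println("first delivery, process the order");
            } else {
                System.out.println("duplicate delivery, ignore");
            }
        }
    }
}
```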
3. Do you know any other solutions?
- Answer: Yes, this is an idempotency issue, which can be solved by:
- Using Redis distributed locks or database locks to ensure the idempotency of operations.
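For instance, Redis's SET with NX and an expiry can act as the idempotency guard; the snippet below assumes the Jedis client, and the key prefix and TTL are arbitrary:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisIdempotency {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String businessId = "order-1001";          // unique id carried by the message
            String key = "mq:consumed:" + businessId;

            // SET key value NX EX 3600: only the first consumer to arrive succeeds.
            String result = jedis.set(key, "1", SetParams.setParams().nx().ex(3600));
            if ("OK".equals(result)) {
                System.out.println("first time, process the message");
            } else {
                System.out.println("already handled, skip");
            }
        }
    }
}
```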
4. Are you familiar with dead letter exchanges in RabbitMQ? (Have you heard of RabbitMQ delayed queues?)
- Answer: Yes, we use RabbitMQ to implement delayed queues in our project, mainly through dead letter exchanges and TTL (Time-To-Live) to achieve this.
- A message whose TTL expires before it is consumed becomes a dead letter; by binding a dead-letter exchange to the waiting queue, the expired message is forwarded there and only consumed after the delay, which is how the delay is achieved.
- Another option is to install RabbitMQ's delayed message plugin (rabbitmq_delayed_message_exchange), which simplifies the setup: declare the exchange as a delayed exchange and set the delay on each message.
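A sketch of the TTL + dead-letter variant with the Java client; every name and the 30-second TTL are placeholders:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class DelayedQueueSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // The "real" exchange/queue that consumers actually listen on.
            channel.exchangeDeclare("delay.dlx", "direct", true);
            channel.queueDeclare("delay.process.queue", true, false, false, null);
            channel.queueBind("delay.process.queue", "delay.dlx", "delay");

            // The waiting queue: no consumers; messages expire after 30 s and are
            // dead-lettered to delay.dlx, which routes them to the processing queue.
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-message-ttl", 30_000);
            queueArgs.put("x-dead-letter-exchange", "delay.dlx");
            queueArgs.put("x-dead-letter-routing-key", "delay");
            channel.queueDeclare("delay.wait.queue", true, false, false, queueArgs);

            // Publish into the waiting queue via the default exchange.
            channel.basicPublish("", "delay.wait.queue", null, "pay-timeout-check".getBytes());
        }
    }
}
```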
5. If there are 1 million messages piled up in the MQ, how to solve it?
- Answer: If there is message accumulation, the following measures can be taken:
- Increase the consumption capacity of consumers, such as using multithreading.
- Increase the number of consumers, adopting a work queue model to allow multiple consumers to consume the same queue in parallel.
- Expand the queue capacity by using RabbitMQ's lazy queues, which support storing millions of messages directly on disk rather than in memory.
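A sketch combining a lazy queue with several competing consumers on the same queue; the queue name, prefetch value, and consumer count are arbitrary choices for illustration:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.util.Map;

public class BacklogConsumers {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();

        // Lazy queue: messages are kept on disk instead of in memory, so millions can pile up.
        Channel declare = conn.createChannel();
        Map<String, Object> lazyArgs = Map.of("x-queue-mode", "lazy");
        declare.queueDeclare("report.queue", true, false, false, lazyArgs);
        declare.close();

        // Work-queue model: several channels consume the same queue in parallel.
        for (int i = 0; i < 3; i++) {
            Channel channel = conn.createChannel();
            channel.basicQos(50);                       // prefetch to spread the load
            DeliverCallback handler = (tag, delivery) -> {
                // ... do the actual work here ...
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("report.queue", false, handler, consumerTag -> { });
        }
    }
}
```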
6. Are you familiar with RabbitMQ's high availability mechanism?
- Answer: Our project uses a RabbitMQ cluster in the production environment with the mirrored queue model, a structure with one master and multiple mirrors.
- The master node handles all operations and synchronizes them to the mirrors. If the master crashes, a mirror is promoted to master, but messages that have not yet been synchronized at that moment can be lost.
7. How to solve data loss issues?
- Answer: Use quorum queues, which also work in a master-replica fashion but replicate data with strong consistency based on the Raft protocol; they are simpler to configure and safer for data.
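Declaring a quorum queue only takes the x-queue-type argument; a minimal sketch (the queue name is a placeholder):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.Map;

public class QuorumQueueSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Quorum queues must be durable; x-queue-type switches the queue type.
            Map<String, Object> queueArgs = Map.of("x-queue-type", "quorum");
            channel.queueDeclare("payment.queue", true, false, false, queueArgs);
        }
    }
}
```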
8. How does Kafka ensure messages are not lost?
- Answer: Kafka ensures messages are not lost through the following measures:
- Producers send messages using asynchronous callbacks and set retry mechanisms to handle network issues.
- On the broker side, rely on the replication mechanism and set the producer's acks parameter to all, so a message is acknowledged only after all in-sync replicas have received it.
- Consumers manually commit the offsets of successfully consumed messages to avoid data loss or duplicate consumption that may occur with automatic commits.
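The producer side of these settings might look like the following; the broker address, topic, retry count, and key/value types are assumptions:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class SafeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");        // wait for all in-sync replicas
        props.put(ProducerConfig.RETRIES_CONFIG, 3);         // retry transient network errors

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Asynchronous send with a callback: log failures instead of losing them silently.
            producer.send(new ProducerRecord<>("orders", "order-1001", "created"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            System.err.println("send failed, log for repair: " + exception);
                        }
                    });
        }
    }
}
```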
9. How to solve the problem of duplicate message consumption in Kafka?
- Answer: The following methods can solve the problem of duplicate consumption in Kafka:
- Disable automatic offset commits and manually control the timing of offset commits.
- Ensure the idempotency of message consumption, for example, by using a unique primary key or distributed locks.
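A consumer sketch with auto-commit disabled and a commit only after processing; the in-memory set merely stands in for a real idempotency check such as a database unique key or Redis:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.HashSet;
import java.util.List;
import java.util.Properties;
import java.util.Set;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-service");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");   // manual offset control

        Set<String> seen = new HashSet<>();                 // placeholder idempotency guard
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (seen.add(record.key())) {           // skip keys we already handled
                        System.out.println("processing " + record.value());
                    }
                }
                consumer.commitSync();                      // commit only after processing
            }
        }
    }
}
```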
10. How does Kafka ensure message order?
- Answer: Kafka only guarantees order within a single partition, not across the partitions of a topic. Ordering for related messages can be achieved as follows:
- Send them to the same partition, either by specifying the partition number explicitly or by using the same business key, since messages with the same key are hashed to the same partition.
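For example, using the order id as the message key keeps all events of one order in a single partition and therefore in order; the topic and values are illustrative:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class OrderedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String orderId = "order-1001";   // same key -> same partition -> ordered consumption
            producer.send(new ProducerRecord<>("order-events", orderId, "created"));
            producer.send(new ProducerRecord<>("order-events", orderId, "paid"));
            producer.send(new ProducerRecord<>("order-events", orderId, "shipped"));
        }
    }
}
```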
11. Are you familiar with Kafka's high availability mechanism?
- Answer: Kafka's high availability is mainly achieved through the following mechanisms:
- Cluster deployment with multiple broker instances, ensuring that single points of failure do not affect overall service.
- Replication mechanisms, where each partition has multiple replicas, with a leader and followers. If the leader fails, a new leader is elected from the followers.
12. Can you explain the ISR in the replication mechanism?
- Answer: ISR (In-Sync Replicas) refers to the followers that are in sync with the leader.
- When the leader fails, a new leader is elected from the ISR first, as they have higher data consistency.
13. Are you familiar with Kafka's data cleanup mechanism?
- Answer: Kafka's data cleanup includes:
- Cleanup based on message retention time.
- Cleanup based on topic data size: when a partition's log exceeds the configured threshold, the oldest messages (log segments) are deleted.
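Both limits can be set per topic, for example when creating it with the AdminClient; the retention values below are examples, not recommendations:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class RetentionSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("audit-log", 3, (short) 3)
                    .configs(Map.of(
                            TopicConfig.RETENTION_MS_CONFIG, "604800000",      // keep 7 days
                            TopicConfig.RETENTION_BYTES_CONFIG, "1073741824",  // or at most 1 GiB per partition
                            TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_DELETE));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```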
14. Do you know about high-performance design in Kafka?
- Answer: High-performance design in Kafka includes:
- Message partitioning to enhance data processing capabilities.
- Sequential read and write to improve disk operation efficiency.
- Page caching to reduce disk access.
- Zero-copy to minimize data copying and context switching.
- Message compression to reduce IO load.
- Batching messages to lower network overhead.
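Two of these points, batching and compression, are plain producer settings; the sizes below are illustrative rather than tuned values:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class TunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);    // batch up to 32 KB per partition
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);            // wait up to 10 ms to fill a batch
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");  // compress batches to cut IO

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1000; i++) {
                producer.send(new ProducerRecord<>("metrics", "sensor-" + (i % 10), "value-" + i));
            }
        }
    }
}
```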