The primary bottleneck is the gin router. With an optimized router engine, the QPS could potentially reach 800,000. See the gin router performance benchmarks.
An infinitely scalable advertisement management system, built with a replicated advertisement business state machine, a replicated log system, and a fault recovery mechanism, guaranteeing the consistency and durability of advertisement operations.
When I saw the requirements for this topic, the challenge of achieving a QPS greater than 10,000 led me to explore solutions beyond a single Redis instance, and I came up with a more interesting design: an in-memory database for serving queries, a Redis Stream for log ordering, and PostgreSQL for persistence. Because the in-memory database is local to each instance, read operations can be scaled out almost without limit using a Kubernetes Deployment or docker compose --scale. Write operations, however, remain constrained by the slower of Redis and PostgreSQL, i.e. max(redis, postgres). To push write throughput higher, we could switch to a NoSQL database for persistence and use Kafka instead of the Redis Stream for log ordering and log replication (better consistency and durability). In my implementation, I have made every effort to keep the system fault-tolerant and consistent. Feedback on cases I have not covered or areas for optimization is welcome. Thank you!
R/W Request: Represents the read/write requests initiated by the users or other systems interacting with the AD server. These requests are the entry point into the system.
Load Balance: Distributes incoming requests evenly across multiple API instances to ensure that no single instance is overwhelmed.
Instances (API Instance 1, API Instance 2, API Instance 3): These are the multiple API server instances that handle the incoming requests. Each instance includes:
Dispatcher: Manages the routing of the requests to the appropriate components within the instance.
State Machine: Handles the logic related to the advertisement CRUD operations and maintains the state of the application in a consistent manner.
Asynq Scheduler: A scheduling component that manages time-based tasks such as scheduling the deletion of ads at their end time. It interacts with the instances to trigger these tasks.
Redis / Redis Stream: Acts as a distributed log system where updates, including creates, updates, and deletes, are logged, ensuring that all instances stay synchronized by subscribing to this stream (a short publish/subscribe sketch follows the connection list below).
Postgres: The persistence layer where all advertisements and their related data are stored in a structured format. It ensures data durability and is the source of truth for the system.
Connections:
Load Balance -> Instances: Directs incoming R/W requests to one of the API instances.
Instances -> Postgres (Write/Delete Log): API instances perform write or delete operations on the Postgres database.
Postgres -> Redis/Redis Stream (Update Log): Updates in Postgres are logged into Redis Stream to maintain consistency across instances.
Redis/Redis Stream -> Instances (Subscribe Log): Instances subscribe to Redis Stream to stay updated with the logs for consistency.
Asynq Scheduler -> Instances (Schedule Delete at Ad End Time): Scheduler triggers deletion of ads at their end time by interacting with instances.
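Under these connections, the Redis Stream behaves like an append-only replicated log. The sketch below shows, using go-redis v9 and an assumed stream name of "ad_log", how an instance might publish an operation entry and how every instance could subscribe and apply entries in order; it is an illustration of the idea, not the project's actual code.

```go
package adlog

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// PublishLog appends one ad operation (create/update/delete) to the stream.
// Redis assigns a monotonically increasing entry ID, which gives every
// instance the same total order of writes.
func PublishLog(ctx context.Context, rdb *redis.Client, op, adJSON string) (string, error) {
	return rdb.XAdd(ctx, &redis.XAddArgs{
		Stream: "ad_log", // assumed stream name
		Values: map[string]interface{}{"op": op, "ad": adJSON},
	}).Result()
}

// SubscribeLog blocks on the stream and applies every new entry in order.
func SubscribeLog(ctx context.Context, rdb *redis.Client, lastID string, apply func(op, adJSON string)) error {
	for {
		streams, err := rdb.XRead(ctx, &redis.XReadArgs{
			Streams: []string{"ad_log", lastID},
			Block:   0, // block until a new entry arrives
		}).Result()
		if err != nil {
			return err
		}
		for _, msg := range streams[0].Messages {
			apply(fmt.Sprint(msg.Values["op"]), fmt.Sprint(msg.Values["ad"]))
			lastID = msg.ID
		}
	}
}
```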
Business State Machine (Service State, Reply Cache)
Each instance is a state machine that handles the advertisement CRUD operations and the range query operation. In the diagram above, it should be single-threaded to guarantee the read and write order. In our scenario, strict consistency is not the most important thing, so we can use a readers–writer lock to handle concurrent reads while keeping write operations single-threaded (see the sketch below).
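A minimal sketch of that readers–writer lock, assuming a simple map keyed by ad ID (the real state machine also maintains indexes and a reply cache):

```go
import "sync"

// AdStateMachine is a simplified stand-in; model.Ad is the project's ad type,
// and only an ID field is assumed here.
type AdStateMachine struct {
	mu  sync.RWMutex
	ads map[string]*model.Ad
}

// Get serves reads; RLock lets many readers run concurrently.
func (s *AdStateMachine) Get(id string) (*model.Ad, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	ad, ok := s.ads[id]
	return ad, ok
}

// Apply serializes every state-changing log entry behind the write lock,
// so writes stay effectively single-threaded.
func (s *AdStateMachine) Apply(op string, ad *model.Ad) {
	s.mu.Lock()
	defer s.mu.Unlock()
	switch op {
	case "create", "update":
		s.ads[ad.ID] = ad
	case "delete":
		delete(s.ads, ad.ID)
	}
}
```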
The state machine can be recovered from a snapshot, and the snapshot is only modified when there is a new create, update, or delete operation. The snapshot can be stored in PostgreSQL, and the recovery process combines the snapshot with the log so the state machine does not need to replay the entire log from the beginning. The concept is similar to AOF and RDB in Redis; a recovery sketch follows.
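A recovery sketch under the assumption that the snapshot row stores the serialized state together with the last applied stream entry ID; the table layout and the LoadSnapshot and decodeAd helpers are hypothetical, and AdStateMachine is the type sketched above:

```go
import (
	"context"
	"database/sql"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// Restore loads the latest snapshot from PostgreSQL and then replays only the
// stream entries written after it, instead of the whole log from the beginning.
func Restore(ctx context.Context, db *sql.DB, rdb *redis.Client, sm *AdStateMachine) error {
	var state []byte
	var lastID string
	err := db.QueryRowContext(ctx,
		`SELECT state, last_stream_id FROM ad_snapshot ORDER BY created_at DESC LIMIT 1`,
	).Scan(&state, &lastID)
	if err != nil {
		return err
	}
	if err := sm.LoadSnapshot(state); err != nil { // hypothetical helper that rebuilds the in-memory state
		return err
	}
	// Read from the snapshot's last applied ID to the end of the stream.
	msgs, err := rdb.XRange(ctx, "ad_log", lastID, "+").Result()
	if err != nil {
		return err
	}
	for _, msg := range msgs {
		if msg.ID == lastID {
			continue // this entry is already reflected in the snapshot
		}
		sm.Apply(fmt.Sprint(msg.Values["op"]), decodeAd(msg.Values["ad"])) // decodeAd is hypothetical
	}
	return nil
}
```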
Since we don't use an interval tree to handle the range query, we need to remove outdated data from the in-memory database, so we use a scheduler to do the removal (see the asynq sketch below).
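For the scheduler, asynq can deliver a task exactly at the ad's end time. A small sketch, assuming a task type named "ad:delete" and a JSON payload carrying the ad ID:

```go
import (
	"encoding/json"
	"time"

	"github.com/hibiken/asynq"
)

// ScheduleDelete enqueues a task that asynq delivers at the ad's end time;
// whichever worker wins the task removes the ad and publishes the delete log.
func ScheduleDelete(client *asynq.Client, adID string, endAt time.Time) error {
	payload, err := json.Marshal(map[string]string{"ad_id": adID})
	if err != nil {
		return err
	}
	_, err = client.Enqueue(asynq.NewTask("ad:delete", payload), asynq.ProcessAt(endAt))
	return err
}
```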
After multiple workers race to handle the delete task, the delete log is also published to the Redis Stream, so the state machine can handle the delete operation as well; this also prevents the Restore operation from reading and serving stale data.
After trying many approaches, I think the most robust, simple, and efficient way is to use SQLite as the in-memory database. The performance is also good: the SQL read speed is about 60,000 queries/s, although real queries may be slower than this ideal number since they are not as simple as the benchmark query. But remember, our design can scale read throughput linearly by adding instances, so the read speed of a single instance is not the most important thing.
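Opening SQLite purely in memory is a one-liner with database/sql; the sketch below assumes the mattn/go-sqlite3 driver and a hypothetical ads table, and uses cache=shared so every connection in the process sees the same data:

```go
import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

// OpenInMemoryDB opens a process-wide shared in-memory SQLite database.
func OpenInMemoryDB() (*sql.DB, error) {
	db, err := sql.Open("sqlite3", "file::memory:?cache=shared")
	if err != nil {
		return nil, err
	}
	// A hypothetical schema just for illustration; the real table layout
	// depends on the project's model.
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS ads (
		id       TEXT PRIMARY KEY,
		content  TEXT,
		start_at DATETIME,
		end_at   DATETIME
	)`)
	return db, err
}
```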
```go
type IndexNode interface {
	AddAd(ad *model.Ad)
	GetAd(req *model.GetAdRequest) ([]*model.Ad, error)
	DeleteAd(ad *model.Ad)
}

type IndexInternalNode struct {
	Key      string                                       // The key this node indexes on, e.g., "country", "age"
	Children cmap.ConcurrentMap[FieldStringer, IndexNode] // The children of this node
}

func NewIndexInternalNode(key string) IndexNode {
	return &IndexInternalNode{
		Key:      key,
		Children: cmap.NewStringer[FieldStringer, IndexNode](),
	}
}

type IndexLeafNode struct {
	mu  sync.RWMutex
	Ads *sortedset.SortedSet // map[string]*model.Ad
}

func NewIndexLeafNode() IndexNode {
	return &IndexLeafNode{
		Ads: sortedset.New(),
	}
}
```
Implement the advertisement store as a map keyed by the ad id (v2.0, deprecated)
Implement the advertisement indexing by map[string]mapset.Set[string]
By the way, originally I was using map[string]map[string]*model.Ad, and the concurrent read speed was only 4000 QPS. After changing it to map[string]mapset.Set[string], the concurrent read speed increased to over 10000 QPS!!!
upd: I leverage the fact that pointers are comparable in Go; the performance then becomes write: 407676.68 QPS / read: 22486.06 QPS
I’m considering implementing multi-indexing to improve read performance; it is not implemented yet
upd: I have implemented multi-indexing; write performance drops, but read performance is now 1166960 QPS, so I think it’s worth it - commit detail
Define the multi-indexing with priority, use reflect to generate the index function (a tree structure), and use a concurrent map to store the index so entries can be added concurrently; the resulting read performance is 800000 QPS
Implement the advertisement range query (ageStart, ageEnd, StartTime, EndTime) with an interval tree (v4.0, deprecated)
I tried several interval tree libraries, but the read performance was not good, so I gave up on this implementation
Currently, I just iterate over all the advertisements and filter the results by the query conditions
Even when the interval tree was in use, it was not applied to the time range query because of the performance issue
github.com/rdleal/intervalst
github.com/biogo/store/interval
Just iterate over all the advertisements and filter the results by the condition
compound index with nested map - 1000000 QPS
compound index generalization (provide an easy-to-use index API function and the index priority, tree structure) - 800000 QPS
Provides a flexible API for the developer to define the index, but performance drops by about 10%, moving some coding complexity into time & space complexity
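To make the "compound index with nested map" idea concrete, here is a minimal sketch with a fixed key priority (country -> platform -> gender, chosen only for illustration); the real index is generated with reflect and stored in a concurrent map as described above:

```go
// compoundIndex nests the keys in a fixed priority, so a fully specified
// query is three map lookups instead of a scan over every ad.
type compoundIndex map[string]map[string]map[string][]*model.Ad

func (idx compoundIndex) Add(country, platform, gender string, ad *model.Ad) {
	if idx[country] == nil {
		idx[country] = map[string]map[string][]*model.Ad{}
	}
	if idx[country][platform] == nil {
		idx[country][platform] = map[string][]*model.Ad{}
	}
	idx[country][platform][gender] = append(idx[country][platform][gender], ad)
}

func (idx compoundIndex) Get(country, platform, gender string) []*model.Ad {
	return idx[country][platform][gender] // lookups on missing keys just return nil
}
```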
Once an advertisement passes its end time, the delete operation is treated as a log entry persisted in the Redis Stream, so the state machine can also handle the delete operation; this also prevents the Restore operation from reading and serving stale data.
Data Source: Extracted from the country query parameter.
Validation:
omitempty: The country field is optional. Validation rules apply only if the field is provided.
iso3166_1_alpha2: If present, the country code must conform to the ISO 3166-1 alpha-2 standard, which consists of two-letter country codes (e.g., US for the United States, CA for Canada).
Data Source: Extracted from the gender query parameter.
Validation:
omitempty: The gender field is optional. Validation rules apply only if the field is provided.
oneof=M F: If present, gender must be either “M” (Male) or “F” (Female). This restriction ensures that the gender field, if specified, adheres to the predefined options.
Data Source: Extracted from the platform query parameter.
Validation:
omitempty: The platform field is optional. Validation rules apply only if the field is provided.
oneof=android ios web: If present, platform must be one of the following values: “android”, “ios”, or “web”. This rule ensures that the platform, if specified, matches one of the supported types.
Data Source: Extracted from the offset query parameter.
Validation:
default=0: If the offset field is not provided, it defaults to 0. This behavior is useful for pagination, indicating the starting point of the dataset to be returned.
min=0: The offset, if specified or defaulted, must be a non-negative integer. This rule ensures that the offset value is valid for use in pagination calculations.
Data Source: Extracted from the limit query parameter.
Validation:
default=10: If the limit field is not provided, it defaults to 10. This default value controls the maximum number of items to be returned in a single request, useful for pagination.
min=1,max=100: The limit, if specified, must be between 1 and 100, inclusive. This range ensures that a reasonable number of items are returned, preventing overly large or empty responses.
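Putting the five parameters together, the request could be declared with gin's form binding and go-playground/validator tags roughly as follows; the struct and field names are assumptions, but the tags mirror the rules documented above:

```go
// GetAdRequest mirrors the query parameters described in this section.
type GetAdRequest struct {
	Country  string `form:"country"  binding:"omitempty,iso3166_1_alpha2"`
	Gender   string `form:"gender"   binding:"omitempty,oneof=M F"`
	Platform string `form:"platform" binding:"omitempty,oneof=android ios web"`
	Offset   int    `form:"offset,default=0"  binding:"min=0"`
	Limit    int    `form:"limit,default=10" binding:"min=1,max=100"`
}

// In a gin handler, c.ShouldBindQuery(&req) applies the defaults and rejects
// requests that violate any of the rules above.
```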
Currently, the load test is only performed on the local machine. For a more accurate result, the load test should be run in a distributed fashion; we can adopt the k6 operator to run a distributed load test in a Kubernetes cluster and analyze the results with Prometheus and Grafana.
Inject random data into the database
Run the server; the server reads the data from the database as a snapshot and stores it in the in-memory database
Start the k6 load test
make install-k6
cp .env.example env.dev
make dev-up
make dev-migrate
make inject       # inject the test data
make run-release  # run the server
make k6           # run on another terminal