This is a microservice that manages a blocklist of IPs. This service can be used to prevent abuse in different applications to ban IPs that are known to be used for malicious purposes.
The service has a single REST endpoint that takes an IPv4 address encoded as a string (e.g. "127.0.0.1") and returns "true" if the IP is part of the blocklist, and "false" otherwise.
Instead of creating our own list of IPs, we take advantage of this public list, which gets updated every 24 hours; the microservice stays in sync with it.
The service is designed for high availability, minimizing restart time and the downtime when updating the blocklist. It should remain operational under heavy load and respond with reasonably low latency.
This is an example of what calling the microservice looks like:
$ curl http://blocklist/check_ip/127.0.0.1
{"blocked": false}
- Description: Verify if an IP is on a blocklist.
- Parameters: ip (string): IP address to check.
- Response: 200 OK
  - {"blocked": true}: if the IP is on the blocklist.
  - {"blocked": false}: if the IP is not on the blocklist.
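For illustration, the same call can be made from Python. This is a minimal sketch using the requests library (not necessarily a dependency of this repo), assuming the service is reachable at the same hostname as in the curl example above.

```python
import requests

# Query the blocklist service for a single IPv4 address.
# The base URL is the same hypothetical hostname used in the curl example.
response = requests.get("http://blocklist/check_ip/127.0.0.1", timeout=2)
response.raise_for_status()

if response.json()["blocked"]:
    print("IP is on the blocklist")
else:
    print("IP is not on the blocklist")
```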
Python, Docker, Minikube, Unix.
- Execute:
docker-compose up --build
- Then go to http://0.0.0.0:8000
- First, have a Redis instance running locally; you can do so using Docker:
docker run --name my-redis -d -p 6379:6379 redis:latest
- Create a Python virtual environment (so you can keep your Python installation clean) and activate it:
python3 -m venv .venv
source .venv/bin/activate
- Install all needed requirements.
pip install -r blocklistupdater/requirements.txt
pip install -r ipchecking/requirements.txt
pip install -r tests/requirements.txt
- Then run:
pytest tests/test_blocklistupdater.py
pytest tests/test_ipchecking.py
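As an illustration of what these tests exercise, a check on the /check_ip endpoint could look like the sketch below. This is a hypothetical example using FastAPI's TestClient, not a copy of the code in tests/, and the import path of the app is an assumption.

```python
from fastapi.testclient import TestClient

# Hypothetical import path for the FastAPI app; the real module name may differ.
from ipchecking.main import app

client = TestClient(app)

def test_check_ip_returns_blocked_flag():
    # The endpoint should always answer with a JSON body containing "blocked".
    response = client.get("/check_ip/127.0.0.1")
    assert response.status_code == 200
    assert response.json()["blocked"] in (True, False)
```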
- Give execution permissions to setup_minikube.sh:
chmod u+x setup_minikube.sh
- Then run:
./setup_minikube.sh
- Once the process is done, you can check the service via
minikube service ipchecking --url
- You can also check out the dashboard to inspect the pods and see their logs:
minikube dashboard
We use separate services for checking IPs and for updating the blocklist locally. To avoid making an external call on every request to our service, we store the blocklist locally in Redis, in memory. This gives us faster responses and shields us from any latency and downtime our external connections could have.
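A minimal sketch of that local lookup, assuming the blocklist is stored as a Redis set under a key named blocklist (the actual key name and connection details in this repo may differ):

```python
import redis

# Connect to the local Redis instance; host/port are illustrative.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def is_blocked(ip: str) -> bool:
    # SISMEMBER is O(1), so each check stays fast even for large blocklists.
    return bool(r.sismember("blocklist", ip))

print(is_blocked("127.0.0.1"))  # False unless 127.0.0.1 is on the list
```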
Our approach ensures operational stability and performance under heavy load through the following key strategies:
- Redis Master-Replica Configuration: Provides high availability, fault tolerance, and scalability for read and write operations.
- Dedicated Blocklist Updater Service: Ensures efficient and reliable updates to the blocklist without affecting the IP checking service.
- High-Performance IP Checking Service: Built using FastAPI for low latency and high concurrency, with the ability to scale horizontally.
- Kubernetes Orchestration: Automates the management, scaling, and monitoring of the services, ensuring that they remain operational and responsive under varying loads.
The IP checking service is built using FastAPI, which is known for its high performance and low latency. FastAPI leverages asynchronous programming to handle a large number of concurrent requests efficiently.
This is the main service, exposing the endpoint /check_ip/ (a minimal sketch is shown below).
- Horizontal Scaling: The IP checking service can be horizontally scaled by increasing the number of replicas. Kubernetes can automatically distribute the load across multiple instances, ensuring that the service remains responsive under heavy load.
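A minimal sketch of such an endpoint, assuming the blocklist is stored as a Redis set under the key blocklist (the key name and connection details are illustrative, not the exact code of this service):

```python
import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()

# Async Redis client; host/port would normally come from environment variables.
r = redis.Redis(host="redis", port=6379, decode_responses=True)

@app.get("/check_ip/{ip}")
async def check_ip(ip: str):
    # A single O(1) set-membership lookup keeps latency low under load.
    blocked = await r.sismember("blocklist", ip)
    return {"blocked": bool(blocked)}
```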
This is a separate service in order to perform updates on our local redis service from source of truth. This service will perform updates once every 24h or when starting up (whichever comes first, could be improved with a TTL flag to strictly perform updates every 24h).
- Separation of Concerns: By separating the blocklist update functionality into a dedicated service, we ensure that the IP checking service is not affected by the update process. This separation allows each service to be optimized for its specific workload.
- Exponential Backoff: The blocklist updater service uses an exponential backoff strategy for retries, ensuring that temporary network issues do not overwhelm the system with repeated requests.
- Atomic Updates: By using a temporary key and renaming it atomically, we ensure that the blocklist is updated in a consistent and atomic manner, preventing partial updates and ensuring data integrity (see the sketch after this list).
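A minimal sketch of the update step combining both ideas; the key names, retry parameters, and download helper are illustrative, not the exact code of the updater:

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_blocklist_with_backoff(download, max_retries=5):
    # Retry the external download with exponential backoff: 1s, 2s, 4s, ...
    for attempt in range(max_retries):
        try:
            return download()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)

def update_blocklist(ips):
    if not ips:
        return
    # Write into a temporary key, then RENAME it over the live key.
    # RENAME is atomic, so readers never see a partially written set.
    pipe = r.pipeline()
    pipe.delete("blocklist:tmp")
    pipe.sadd("blocklist:tmp", *ips)
    pipe.rename("blocklist:tmp", "blocklist")
    pipe.execute()
```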
Redis is an in-memory data store, providing extremely low latency for both read and write operations. This ensures that the system can respond quickly even under heavy load.
- By using a master-replica setup, we ensure that the data is replicated across multiple nodes. If the master node fails, the replicas can still serve read requests, and a new master can be promoted to handle write operations.
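As an illustration, a read/write split could look like the sketch below; the redis-master and redis-replica hostnames are hypothetical service names, not necessarily those used in this deployment.

```python
import redis

# Writes (blocklist updates) go to the master; reads (IP checks) can be
# served by a replica. Replication is asynchronous, so reads may briefly lag.
master = redis.Redis(host="redis-master", port=6379, decode_responses=True)
replica = redis.Redis(host="redis-replica", port=6379, decode_responses=True)

master.sadd("blocklist", "203.0.113.7")                      # write path: updater
print(bool(replica.sismember("blocklist", "203.0.113.7")))   # read path: IP checker
```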
Kubernetes Deployments manage the Redis, blocklist updater, and IP checking services, ensuring that they are always running the desired number of replicas. Kubernetes can automatically restart failed pods and distribute the load across the cluster, keeping the system highly available and operational under heavy load.
- Generate metrics and a dashboard using Prometheus/Grafana to check the health and load of the services, and decide whether there is a need to scale or to investigate issues.
- Load testing: there are several tools for running stress tests, such as Locust or k6.
- Implement Redis Sentinel, which can automatically detect failures and promote a replica to master, ensuring minimal downtime.