The A-Block L2 node for data exchange between peers. Complete with E2E encryption.
In order to run this server as a community provider, or simply to use it yourself, you'll need to have Docker installed (minimum tested v20.10.12) and be comfortable working with the command line.
If you'd like to develop on this repo, you'll have the following additional requirements:
- Rust (tested on 1.68.0 nightly)
With Docker installed and running, you can clone this repo and get everything installed with the following:
```bash
# SSH clone
git clone git@gitlab.com:ABlockOfficial/Valence.git

# Navigate to the repo
cd Valence

# Build Docker image
docker build -t valence .
```
To use the server as is, you can simply run the following in the root folder of the repo:
```bash
docker-compose up -d
```
Docker will orchestrate the node itself, the Redis instance, and the MongoDB long-term storage, after which you can make calls to your server at port 3030. Data saved to the Redis and MongoDB instances is kept within a Docker volume.
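To confirm the stack came up correctly, the standard Docker tooling is enough (no Valence-specific commands are assumed here):

```bash
# List the services started by docker-compose (node, Redis, MongoDB)
docker-compose ps

# Follow the logs to check the node is up and listening on port 3030
docker-compose logs -f

# Inspect the volumes that hold the Redis and MongoDB data
docker volume ls
```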
To run the server in a development environment, run the following commands:
```bash
cargo build --release
cargo run --release
```
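The node presumably still needs a Redis and a MongoDB instance to connect to when run outside of docker-compose. As a sketch, assuming the development build looks for them on their default local ports, you can start just the backing stores with Docker:

```bash
# Start Redis on its default port (6379)
docker run -d --name valence-redis -p 6379:6379 redis

# Start MongoDB on its default port (27017)
docker run -d --name valence-mongo -p 27017:27017 mongo
```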
The server functions on a very basic set of rules. Clients exchange data with each other through the use of public key addresses. If Alice wants to exchange data with Bob, she supplies the Valence node with Bob's address, along with her own public key and a signature, in the call headers. The next time Bob fetches data from the server using his public key address, he will find the data that Alice has sent him.
## set_data
Sets data in the Redis instance and marks it for pending retrieval in the server. To send data to Bob, we could use the following headers in the `set_data` call:
```json
{
    "address": "76e...dd6",    // Bob's public key address
    "public_key": "a4c...e45", // Alice's public key
    "signature": "b9f...506"   // Alice's signature of Bob's address, made with her private key
}
```
The body of the `set_data` call contains the `data_id` for that entry and the `data` being exchanged:
```json
{
    "data_id": "EntryId",
    "data": "hello Bob"
}
```
`data_id` is required and allows for multiple entries under one address. If the `data_id` value is the same as an existing entry for that address, the entry is updated. If the `data_id` is unique, a new entry is added to the hashmap for that address.
The headers that Alice sends in her call will be validated by the Valence node, after which the data will be stored at Bob's address for his later retrieval using the `get_data` call.
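As a rough sketch of what this could look like from the command line, assuming the credentials are sent as plain HTTP headers, the body is posted as JSON, and the node is reachable on localhost:3030 (the HTTP method and exact path are inferred from the call name, not confirmed by this document):

```bash
curl -X POST http://localhost:3030/set_data \
  -H "address: 76e...dd6" \
  -H "public_key: a4c...e45" \
  -H "signature: b9f...506" \
  -H "Content-Type: application/json" \
  -d '{"data_id": "EntryId", "data": "hello Bob"}'
```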
## get_data
Gets pending data from the server for a given address. To retrieve data for Bob, he only has to supply his credentials in the call header:
```json
[
    {
        "address": "76e...dd6",    // Bob's public key address
        "public_key": "a4c...e45", // Bob's public key corresponding to his address
        "signature": "b9f...506"   // Bob's signature of the public key
    }
]
```
If a `data_id` is provided in the request (`get_data/[data_id]`), the specific entry associated with that id is retrieved. If no `data_id` is provided, the full hashmap is retrieved.
Again, the Valence node will validate the signature before returning the data to Bob.
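Under the same assumptions as the `set_data` sketch above (HTTP headers, localhost:3030, paths inferred from the call names), Bob's retrieval could look like this:

```bash
# Retrieve the full hashmap stored at Bob's address
curl http://localhost:3030/get_data \
  -H "address: 76e...dd6" \
  -H "public_key: a4c...e45" \
  -H "signature: b9f...506"

# Retrieve a single entry by its data_id
curl http://localhost:3030/get_data/EntryId \
  -H "address: 76e...dd6" \
  -H "public_key: a4c...e45" \
  -H "signature: b9f...506"
```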
## del_data

Deletes pending data from the server for a given address. To delete data for Bob, he only has to supply his credentials in the call header:
```json
[
    {
        "address": "76e...dd6",    // Bob's public key address
        "public_key": "a4c...e45", // Bob's public key corresponding to his address
        "signature": "b9f...506"   // Bob's signature of the public key
    }
]
```
If a `data_id` is provided in the request (`del_data/[data_id]`), the specific entry associated with that id is deleted. If no `data_id` is provided, the full hashmap is deleted.
Again, the Valence node will validate the signature before deleting the data.
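Again under the same assumptions (the DELETE method here is also an assumption), a deletion sketch:

```bash
# Delete a single entry; drop the trailing data_id to delete the full hashmap
curl -X DELETE http://localhost:3030/del_data/EntryId \
  -H "address: 76e...dd6" \
  -H "public_key: a4c...e45" \
  -H "signature: b9f...506"
```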
As best practice, it's recommended that Alice and Bob encrypt their data before exchanging it with each other, for example using a shared secret derived from their key pairs. This ensures that the exchange is E2E encrypted, and that the Valence node maintains no knowledge of the data's content.
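As a minimal illustration of keeping the payload opaque to the node, assuming Alice and Bob already share a secret (in practice they would typically derive one from their key pairs), the data can be encrypted before it is sent and decrypted after it is fetched:

```bash
# Alice encrypts the payload before calling set_data
echo -n "hello Bob" | openssl enc -aes-256-cbc -pbkdf2 -salt -base64 -pass pass:SHARED_SECRET

# Bob decrypts the value he retrieved with get_data (placeholder ciphertext shown)
echo "U2FsdGVkX1...ciphertext..." | openssl enc -d -aes-256-cbc -pbkdf2 -base64 -pass pass:SHARED_SECRET
```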
## Further Work
- Match public key to address for `get_data` (resolved by using the address directly for retrieval)
- Add a rate limiting mechanism
- Set Redis keys to expire (handle cache lifetimes)
- Handle multiple data entries per address
- Add tests