Project Overview
Volatix is a lightweight, high-performance in-memory database written in Rust. It implements the Redis RESP3 protocol for client compatibility, adds built-in TTL and persistence layers, and ships with first-class CLI tooling. Released under the MIT license, Volatix targets use cases where low latency and simple scaling matter most.
Why Volatix Exists
• In-memory speed: Deliver microsecond-scale reads and writes.
• Modern protocol: Leverage RESP3 for richer data types and client interoperability.
• Predictable eviction: Native TTL support per key.
• Durable snapshots: Periodic persistence to disk for fast recovery.
• Rust-safe: Memory and thread safety without a garbage collector.
• Open source (MIT): Audit and extend without licensing friction.
How Volatix Fits vs. Redis
• Protocol
– Volatix: RESP3 only (full typed frames).
– Redis: RESP2 with optional RESP3 in newer versions.
• Persistence
– Volatix: Configurable RDB-style snapshots + append-only logs.
– Redis: RDB + AOF; more complex tuning.
• Footprint
– Volatix: Single binary, minimal dependencies.
– Redis: C codebase, external dependencies for modules.
• Extensibility
– Volatix: Modular Rust core (process, resp3, storage).
– Redis: C modules require manual memory management.
Key Features
- In-memory data store with optional disk persistence
- RESP3 protocol support (rich frames, Pub/Sub compatibility)
- Per-key TTL and automatic expiry
- Snapshot and append-only persistence modes
- CLI tools:
• volatix-server (TCP server)
• volatix-cli (interactive client)
• volatix-bench (benchmark harness)
- MIT license (see LICENSE for full terms)
Quick Start
Run the Server
Specify port and data directory for persistence:
cargo run --release --bin volatix-server -- \
--port 6379 \
--data-dir ./data \
--snapshot-interval 60
Use the CLI
Connect to your Volatix instance:
cargo run --release --bin volatix-cli -- \
--host 127.0.0.1 \
--port 6379
# Inside REPL:
> SET session:1234 user42 EX 300 # set with 5-minute TTL
> GET session:1234
"user42"
Benchmark Throughput
Measure performance under mixed workloads:
cargo run --release --bin volatix-bench -- \
--duration 60 \
--ratio 0.8 \
--threads 16
See README.md for full configuration options and advanced tuning.
2. Getting Started
Get Volatix up and running in minutes. You’ll clone the repo, build the workspace, start the server, and launch the interactive CLI.
Prerequisites
- Rust toolchain (rustc 1.60+, cargo)
- Git
Clone the Repository
git clone https://github.com/juanmilkah/volatix.git
cd volatix
Build the Workspace
Compile all binaries in release mode for best performance.
cargo build --release
Run the Volatix Server
By default the server listens on 127.0.0.1:6379.
cargo run --release --bin server
You’ll see:
[INFO] Starting Volatix server on 127.0.0.1:6379
To bind a different address or port, set the VOLATIX_ADDR
environment variable:
export VOLATIX_ADDR=0.0.0.0:8000
cargo run --release --bin server
Launch the Interactive CLI
Open a new terminal. The CLI connects to 127.0.0.1:6379 by default.
cargo run --bin cli
Use flags to change host or port:
cargo run --bin cli -- --host 127.0.0.1 --port 8000
Example Session
> PING
+PONG
> SET greeting "hello, volatix"
+OK
> GET greeting
$14
hello, volatix
> EXISTS greeting
:1
> DEL greeting
:1
> GET greeting
$-1
> QUIT
Next Steps
- Explore the volatix-bench binary for performance testing:
cargo run --release --bin volatix-bench -- --help
- Check out the server library in server/src/lib.rs to embed Volatix in your own Rust application.
3. Data Model & Protocol
Volatix uses a Redis-style protocol (RESP/RESP3) end-to-end. The CLI parses user input into a Command enum, serializes it into RESP, sends it over the network, and deserializes RESP replies into a Response enum. The server parses RESP3 requests into a RequestType, processes it via process_request, and emits RESP3 replies.
3.1 CLI Command Parsing
Convert a raw input line into a typed Command enum.
Key API
fn parse_line(input: &str) -> Result<Command, ParseError>;
Command Enum (excerpt)
pub enum Command {
    Get(String),
    Set(String, String),
    Delete(String),
    List(String, Vec<String>),
    ConfigGet(String),
    ConfigSet(String, String),
    // … more variants
}
Example
use cli::parse::{parse_line, Command};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = r#"SET session_token "abc 123""#;
    let cmd = parse_line(raw)?;
    match cmd {
        Command::Set(key, value) => {
            println!("SET {} = {}", key, value);
        }
        _ => println!("Other command"),
    }
    Ok(())
}
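The quoted-argument handling above (note the "abc 123" value) can be illustrated with a minimal tokenizer. This is a standalone sketch, not the crate's actual parser: tokenize is a hypothetical helper that splits on whitespace while honoring double quotes.

```rust
/// Split a command line into tokens, honoring double-quoted segments.
/// Simplified sketch: no escape sequences, no error on unbalanced quotes.
fn tokenize(input: &str) -> Vec<String> {
    let mut tokens = Vec::new();
    let mut current = String::new();
    let mut in_quotes = false;
    for c in input.chars() {
        match c {
            '"' => in_quotes = !in_quotes,
            c if c.is_whitespace() && !in_quotes => {
                // whitespace outside quotes ends the current token
                if !current.is_empty() {
                    tokens.push(std::mem::take(&mut current));
                }
            }
            c => current.push(c),
        }
    }
    if !current.is_empty() {
        tokens.push(current);
    }
    tokens
}
```

A real parser would then map the first token to a Command variant and validate arity.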
3.2 Request Serialization
Serialize a Command into RESP protocol bytes for transmission.
Key API
fn serialize_request(cmd: &Command) -> Vec<u8>;
Example: send over TCP
use std::io::Write;
use std::net::TcpStream;
use cli::{parse::parse_line, serialize::serialize_request};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut stream = TcpStream::connect("127.0.0.1:6379")?;
    let cmd = parse_line("GET mykey")?;
    let buf = serialize_request(&cmd);
    stream.write_all(&buf)?;
    Ok(())
}
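On the wire, a RESP request is an array of bulk strings. As a rough sketch of the framing serialize_request produces, here is a hypothetical encode_args helper working on raw argument slices (the crate's real API takes a Command):

```rust
// Encode a list of arguments as a RESP array of bulk strings,
// e.g. ["GET", "mykey"] -> "*2\r\n$3\r\nGET\r\n$5\r\nmykey\r\n".
fn encode_args(args: &[&str]) -> Vec<u8> {
    let mut buf = Vec::new();
    // array header: element count
    buf.extend_from_slice(format!("*{}\r\n", args.len()).as_bytes());
    for arg in args {
        // bulk string: byte length, then payload, each CRLF-terminated
        buf.extend_from_slice(format!("${}\r\n", arg.len()).as_bytes());
        buf.extend_from_slice(arg.as_bytes());
        buf.extend_from_slice(b"\r\n");
    }
    buf
}
```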
3.3 Response Deserialization
Convert raw RESP response bytes into a Response enum.
Key API
fn deserialize_response(buf: &[u8]) -> Result<Response, DeserializeError>;
Response Enum (excerpt)
pub enum Response {
    SimpleString(String),
    Error(String),
    Integer(i64),
    Boolean(bool),
    BulkString(Option<Vec<u8>>),
    Array(Option<Vec<Response>>),
    Null,
    // … BigNumber, Double, etc.
}
Example
use std::io::{Read, Write};
use std::net::TcpStream;
use cli::deserialize::deserialize_response;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut stream = TcpStream::connect("127.0.0.1:6379")?;
    stream.write_all(b"*2\r\n$3\r\nGET\r\n$5\r\nmykey\r\n")?;
    let mut buf = Vec::new();
    stream.read_to_end(&mut buf)?;
    let resp = deserialize_response(&buf)?;
    println!("Server replied: {:?}", resp);
    Ok(())
}
3.4 RESP3 Protocol Implementation
Server-side parsing and encoding of RESP3 messages lives in server/src/resp3.rs.
Parsing Requests
fn parse_request(buf: &[u8]) -> Result<RequestType, String>;
- Reads the first byte to determine the DataType (e.g. b'*' → Array, b'$' → BulkString, b':' → Integer, etc.).
- Delegates to parse_simple_string, parse_integers, parse_bulk_strings, parse_arrays, parse_maps, parse_sets, etc.
- Builds a RequestType enum with nested children.
pub enum RequestType {
    SimpleString { data: Vec<u8> },
    Integer { data: Vec<u8> },
    BulkString { data: Vec<u8> },
    Null,
    Boolean { data: bool },
    Array { children: Vec<RequestType> },
    Map { children: HashMap<String, RequestType> },
    Set { children: HashSet<Vec<u8>> },
    // … other RESP3 types
}
Encoding Responses
Each RequestType is encoded back into bytes by a helper:
fn serialize_resp3(rt: &RequestType) -> Vec<u8>;
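A minimal sketch of what such an encoder might look like for a few variants. The standalone Frame enum below mirrors the RequestType shape; the server's actual serialize_resp3 covers all RESP3 types:

```rust
// Illustrative subset of RESP3 frames, not the server's RequestType.
enum Frame {
    SimpleString(Vec<u8>),
    Integer(i64),
    BulkString(Vec<u8>),
    Null,
}

fn encode(f: &Frame) -> Vec<u8> {
    match f {
        Frame::SimpleString(data) => {
            let mut out = vec![b'+'];
            out.extend_from_slice(data);
            out.extend_from_slice(b"\r\n");
            out
        }
        Frame::Integer(n) => format!(":{}\r\n", n).into_bytes(),
        Frame::BulkString(data) => {
            let mut out = format!("${}\r\n", data.len()).into_bytes();
            out.extend_from_slice(data);
            out.extend_from_slice(b"\r\n");
            out
        }
        // RESP3 has a dedicated null frame (RESP2 used "$-1\r\n")
        Frame::Null => b"_\r\n".to_vec(),
    }
}
```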
Example: parse a GET
use server::resp3::{parse_request, RequestType};

fn main() -> Result<(), String> {
    let buf = b"*2\r\n$3\r\nGET\r\n$5\r\nmykey\r\n";
    let req = parse_request(buf)?;
    if let RequestType::Array { children } = req {
        // children[0] = BulkString("GET"), children[1] = BulkString("mykey")
    }
    Ok(())
}
3.5 Server Request Processing
Tie a parsed RequestType to storage operations and produce RESP3 replies.
Key API
fn process_request(
    req: RequestType,
    storage: Arc<RwLock<LockedStorage>>,
) -> Vec<u8>;
- Matches on RequestType::Array { children }.
- Extracts the command name (first element) and arguments.
- Routes to handlers: handle_get, handle_set, handle_delete, handle_list, handle_config, etc.
- Each handler returns a RESP3-encoded Vec<u8>.
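The routing step above can be sketched as a match on the first array element. The dispatch helper and its PING/ECHO handling are purely illustrative, not the server's actual handler set:

```rust
// Take the first element as the command name and route on it.
// Arguments arrive as raw byte vectors, as in the parsed RESP array.
fn dispatch(args: &[Vec<u8>]) -> Vec<u8> {
    match args.first().map(|c| c.to_ascii_uppercase()) {
        Some(cmd) if cmd == b"PING" => b"+PONG\r\n".to_vec(),
        Some(cmd) if cmd == b"ECHO" && args.len() == 2 => {
            // echo the argument back as a bulk string
            let mut out = format!("${}\r\n", args[1].len()).into_bytes();
            out.extend_from_slice(&args[1]);
            out.extend_from_slice(b"\r\n");
            out
        }
        _ => b"-ERR unknown command\r\n".to_vec(),
    }
}
```

Unknown commands and arity errors are reported the RESP way, as error frames beginning with '-'.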
Example: simple server loop
use std::net::{TcpListener, TcpStream};
use std::io::{Read, Write};
use std::sync::Arc;
use parking_lot::RwLock;
use server::{
resp3::parse_request,
process::process_request,
storage::{LockedStorage, StorageOptions},
};
fn handle_client(mut stream: TcpStream, storage: Arc<RwLock<LockedStorage>>) {
    let mut buf = [0u8; 4096];
    if let Ok(n) = stream.read(&mut buf) {
        if n == 0 { return; }
        match parse_request(&buf[..n]) {
            Ok(req) => {
                let reply = process_request(req, storage.clone());
                let _ = stream.write_all(&reply);
            }
            Err(e) => {
                let err = format!("-ERR {}\r\n", e);
                let _ = stream.write_all(err.as_bytes());
            }
        }
    }
}
fn main() -> std::io::Result<()> {
    let opts = StorageOptions::default();
    let storage = Arc::new(RwLock::new(LockedStorage::new(opts)));
    let listener = TcpListener::bind("127.0.0.1:6379")?;
    for client in listener.incoming() {
        if let Ok(stream) = client {
            let st = storage.clone();
            std::thread::spawn(move || handle_client(stream, st));
        }
    }
    Ok(())
}
This completes the end-to-end data-model and protocol flow inside Volatix.
4. Configuration & Deployment
Volatix ships as a single Rust binary plus optional persistence file. You configure storage behavior, network settings, and persistence via command-line flags or environment variables. This section covers tuning storage options, starting the server, containerizing Volatix, and integrating CI/CD.
4.1 Storage Configuration
Volatix’s in-memory storage exposes these runtime options:
- ttl: default time-to-live per entry (seconds)
- max_capacity: maximum number of entries
- eviction_policy: LRU or TTL
- compression: enabled/disabled for large Text values
- compression_threshold: byte size above which Text is compressed
- persistence_file: path for on-disk snapshot
- persistence_interval: interval (seconds) between automatic snapshots
Example: Overriding Defaults via CLI
volatix-server \
--storage-ttl 7200 \
--storage-max-capacity 5000 \
--eviction-policy LRU \
--compression enabled \
--compression-threshold 1024 \
--persistence-file /var/lib/volatix/data.db \
--persistence-interval 120
Behind the scenes, server/src/main.rs builds a StorageOptions:
use std::str::FromStr;
use std::time::Duration;
use server::storage::{Compression, EvictionPolicy, LockedStorage, StorageOptions};

let opts = StorageOptions::new(
    Duration::from_secs(cli.storage_ttl),
    cli.storage_max_capacity,
    &EvictionPolicy::from_str(&cli.eviction_policy)?,
    &Compression::from_str(&cli.compression)?,
    cli.compression_threshold,
);
let storage = LockedStorage::new(opts);
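The ttl option above implies a per-entry deadline. One common way to implement this is lazy expiry: stamp each entry when it is written and evict it on access once the deadline has passed. The Store and Entry types below are a hypothetical sketch, not LockedStorage's actual internals:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct Entry {
    value: String,
    expires_at: Option<Instant>, // None = no TTL
}

struct Store {
    map: HashMap<String, Entry>,
}

impl Store {
    fn set(&mut self, key: &str, value: &str, ttl: Option<Duration>) {
        self.map.insert(key.to_string(), Entry {
            value: value.to_string(),
            expires_at: ttl.map(|d| Instant::now() + d),
        });
    }

    fn get(&mut self, key: &str) -> Option<&String> {
        // check the deadline first, evicting lazily on access
        let expired = self
            .map
            .get(key)
            .and_then(|e| e.expires_at)
            .map_or(false, |t| Instant::now() >= t);
        if expired {
            self.map.remove(key);
            return None;
        }
        self.map.get(key).map(|e| &e.value)
    }
}
```

A background sweep (or the configured eviction policy) would complement this so expired entries do not linger unread.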
4.2 Running the Server
By default, Volatix listens on port 6379 and uses storage.db in the working directory.
# Basic startup
volatix-server --port 6379
# Verbose logging, custom persistence path
volatix-server \
--port 6380 \
--persistence-file ./snapshots/volatix.db \
--persistence-interval 60
Logs include startup info, periodic persistence messages, and graceful shutdown on SIGINT/SIGTERM. Integration tests in CI spin up the server, run Redis-compatible commands, then shut it down:
- name: Start server
  run: |
    cargo run --release -- --port 6379 &
    echo $! > server.pid
- name: Wait for ready
  run: sleep 2
- name: Run integration tests
  run: cargo test -- --ignored
- name: Stop server
  run: kill $(cat server.pid)
4.3 Container Deployment
Dockerfile example for production:
# Builder stage
FROM rust:1.70-bullseye AS builder
WORKDIR /usr/src/volatix
COPY . .
RUN cargo build --release
# Runtime stage (same Debian release as the builder, so glibc versions match)
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/src/volatix/target/release/volatix-server /usr/local/bin/volatix-server
VOLUME /data
EXPOSE 6379
ENTRYPOINT ["volatix-server"]
CMD ["--port", "6379", "--persistence-file", "/data/storage.db"]
Build and run:
docker build -t volatix .
docker run -d \
-p 6379:6379 \
-v $(pwd)/data:/data \
volatix \
--storage-ttl 3600 \
--compression enabled \
--compression-threshold 512
4.4 CI/CD with GitHub Actions
Volatix’s .github/workflows/rust.yml automates formatting, linting, building, testing, and integration tests with caching:
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Cache dependencies
        uses: actions/cache@v3
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: ${{ runner.os }}-cargo-
      - name: Install Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          override: true
      - name: Check formatting
        run: cargo fmt -- --check
      - name: Lint
        run: cargo clippy -- -D warnings
      - name: Build release
        run: cargo build --release
      - name: Run unit tests
        run: cargo test -- --quiet
      - name: Integration tests
        env:
          RUST_LOG: info
        run: |
          cargo run --release -- --port 6379 &
          sleep 2
          cargo test -- --ignored
          kill $!
Practical tips:
- Bump the cache key suffix (e.g. -v2) when upgrading toolchain major versions.
- Split CI jobs (format/lint vs build/test) if you need faster feedback.
- Use restore-keys to salvage partial caches when Cargo.lock changes.
5. CLI & Benchmark Tools
This section documents the two end-user tools shipped in this repository:
- volatix CLI – an interactive and scripting Redis-style client (cli/)
- volatix-bench – a multi-threaded TCP benchmark harness (volatix-bench/)
5.1 volatix CLI
Provides an interactive prompt and scripted commands against any RESP-compliant server.
Installation
cargo install --path cli
Invocation
# Interactive mode (starts REPL)
volatix-cli --addr 127.0.0.1:6379
# Single command and exit
volatix-cli --addr 127.0.0.1:6379 --cmd "SET foo bar"
Global Options
--addr HOST:PORT  TCP address (default "127.0.0.1:6379")
--help            Show help
Supported Subcommands
Basic key/value:
• get
• set
• del
Batch operations:
• mget
• mset
TTL management:
• ttl
• expire
Configuration:
• config get
• config set
Stats:
• stats (show server stats)
Examples
Start interactive session
volatix-cli --addr 127.0.0.1:6379
In REPL:
SET user:1 alice
OK
GET user:1
alice
TTL user:1
-1
One-off batch set
volatix-cli --addr 127.0.0.1:6379 \
  --cmd "MSET user:2 bob user:3 carol"
Inspect server config
volatix-cli --addr 127.0.0.1:6379 \
  --cmd "CONFIG GET maxmemory"
Programmatic Request Example
This snippet shows how the CLI’s core Connection and RESP framing can be used in your own tool:
use cli::connection::Connection;
use cli::parse::parse_line;
use cli::serialize::serialize_request;
use cli::deserialize::deserialize_response;

fn main() -> anyhow::Result<()> {
    let mut conn = Connection::connect("127.0.0.1:6379")?;
    // Prepare a SET command
    let cmd = parse_line("SET foo bar")?;
    let frame = serialize_request(&cmd);
    conn.send(&frame)?;
    // Read and parse the server reply
    let raw = conn.recv()?;
    let resp = deserialize_response(&raw)?;
    println!("Response: {:?}", resp);
    Ok(())
}
5.2 volatix-bench
A configurable, multi-threaded TCP benchmarking tool for GET/SET workloads.
Installation
cargo install --path volatix-bench
Invocation
volatix-bench [OPTIONS]
Key Options
--addr HOST:PORT          Target server (default "127.0.0.1:6379")
--threads N               Number of worker threads
--requests N              Requests per thread
--ratio R                 GET/SET (read/write) ratio between 0.0 and 1.0
--compression <none|lz4>  Enable optional payload compression
--latency                 Measure per-operation latency
Run volatix-bench --help for full details.
Simple Benchmark
Measure 100k mixed GET/SET requests with 4 threads:
volatix-bench \
--addr 127.0.0.1:6379 \
--threads 4 \
--requests 100000 \
--ratio 0.8 \
--latency
Sample output:
Threads: 4 Requests/thread: 100000
Read/Write ratio: 80/20
Throughput: 320k ops/sec
Avg latency: GET=1.2ms SET=1.5ms
P50/P99: GET=1.1/2.3ms SET=1.4/3.0ms
Advanced Scenario: Compressed Payloads
Enable LZ4 compression for SET values to test CPU vs. network:
volatix-bench \
--addr 127.0.0.1:6379 \
--threads 8 \
--requests 200000 \
--ratio 0.2 \
--compression lz4
Extending the Benchmark
Benchmark code parses options with clap, spawns worker threads, and:
- Serializes each GET/SET into a per-thread Vec<u8>.
- Sends it over a shared TcpStream.
- Optionally measures and aggregates latencies.
To customize payload generation or add new commands, modify worker_loop in volatix-bench/src/main.rs, adjusting:
// generate op (the value is hoisted out of the branch so the borrow
// in Op::Set outlives the if-expression, and each arm is an expression
// with no trailing semicolon)
let value = random_bytes(config.value_size);
let op = if rand::random::<f64>() < config.ratio {
    Op::Get(&keys[i])
} else {
    Op::Set(&keys[i], &value)
};
write_resp(&mut buf, &mut itoa_buf, op);
// write to server
stream.write_all(&buf)?;
Use this pattern to inject your own request types or measurement hooks.
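A measurement hook of the kind mentioned above might look like the following. LatencyRecorder is a hypothetical illustration, not the actual volatix-bench code:

```rust
use std::time::{Duration, Instant};

// Records per-operation latencies and reports simple percentiles.
struct LatencyRecorder {
    samples: Vec<Duration>,
}

impl LatencyRecorder {
    fn new() -> Self {
        Self { samples: Vec::new() }
    }

    // Run an operation, timing it and recording the elapsed duration.
    fn time<T>(&mut self, op: impl FnOnce() -> T) -> T {
        let start = Instant::now();
        let out = op();
        self.samples.push(start.elapsed());
        out
    }

    // Nearest-rank percentile over the recorded samples (p in 0.0..=1.0).
    fn percentile(&mut self, p: f64) -> Option<Duration> {
        if self.samples.is_empty() {
            return None;
        }
        self.samples.sort();
        let idx = ((self.samples.len() as f64 - 1.0) * p).round() as usize;
        Some(self.samples[idx])
    }
}
```

In a worker loop, each send/receive round trip would be wrapped in time(), and per-thread recorders merged at the end to produce the P50/P99 figures shown in the sample output.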
6. Development & Contribution
This section describes the Volatix code layout, local build & test workflows, style guidelines, and the process for proposing changes.
6.1 Repository Layout
The volatix project uses a Cargo workspace:
# Cargo.toml (workspace root)
[workspace]
members = [
    "server",        # Core in-memory database server library/binary
    "cli",           # Command-line REPL client
    "volatix-bench", # Benchmarking tool
]

[profile.release]
opt-level = "z"
lto = true
Folder structure:
/server
• src/lib.rs re-exports modules: process, resp3, storage.
• src/main.rs runs the server binary.
/cli
• REPL client that connects over TCP, performs the handshake, issues commands.
/volatix-bench
• CLI to generate mixed workloads and measure latency/throughput.
/tests/integration.rs
• Integration tests using RESP3 macros over a TCP socket.
/.github/workflows/rust.yml
• CI pipeline for formatting, linting, building, testing.
6.2 Build & Test Workflow
Local Commands
- Install Rust + tools
rustup default stable
cargo install cargo-watch # optional: for live rebuilds
- Format and lint
cargo fmt --all # apply rustfmt
cargo clippy --all -- -D warnings
- Build binaries
cargo build --all # debug builds
cargo build --release --all
- Unit & integration tests
cargo test --all # runs server/cli/bench tests
# Integration tests assume the server binds to 127.0.0.1:7878
- Manual integration testing
# In one terminal:
cargo run --bin server --release
# In another:
cargo test --test integration
CI Pipeline (.github/workflows/rust.yml)
On each push or pull request targeting main:
- Set up the Rust toolchain
- Cache the cargo registry and target directory
- cargo fmt --all -- --check
- cargo clippy --all -- -D warnings
- cargo build --all --release
- Launch the server in the background
- Run cargo test -- --test-threads=1
- Tear down the server
6.3 Style Guidelines
- Adhere to rustfmt defaults (run cargo fmt pre-commit).
- Treat all Clippy warnings as errors (-D warnings).
- Limit lines to 100 columns.
- Document public APIs in server/src/lib.rs with /// comments.
- Write integration tests in tests/integration.rs using the provided RESP3 macros.
- Follow Conventional Commits for messages:
  - feat: new feature
  - fix: bug fix
  - docs: documentation-only changes
6.4 Contribution Process
- Fork the repository and branch off main:
git clone git@github.com:your-user/volatix.git
cd volatix
git checkout -b feat/awesome-feature
- Implement your changes and add or update tests.
- Run formatting, linting, build, and tests locally:
cargo fmt --all
cargo clippy --all -- -D warnings
cargo test --all
- Commit using Conventional Commits.
- Push to your fork and open a Pull Request against main.
• Fill out the PR template and reference related issues.
• Ensure all CI checks pass.
- Address review comments. Once approved, maintainers will merge and deploy.
Thank you for contributing to Volatix!