## Project Overview

foundation-rs is a Rust workspace that delivers a suite of Bitcoin-focused crates for hardware wallets, data-exchange protocols, mining clients, and desktop utilities. The workspace unifies shared primitives, enforces consistent build profiles, and targets three environments:

- **no_std**: Core primitives and algorithms with zero-allocation design.
- **Embedded**: Firmware for Cortex-M devices, USB transport, secure key storage.
- **Desktop**: Command-line tools, mining client, SBOM generation, QR/UR utilities.

Key use cases:

- Hardware wallet firmware development
- QR/UR-based PSBT and text exchange
- Mining pool integration via Stratum
- Secure CLI wallet and utility scripts
- Generating software Bills-of-Materials for compliance
## Workspace Crates

- `core`: fundamental types (keys, PSBT, scripts), cryptographic primitives, and serialization. Compiles in `no_std` and `std` contexts.
- `firmware`: hardware wallet firmware for ARM Cortex-M, USB HID transport, and the secure element interface.
- `qr`: encodes and decodes Bitcoin data in QR codes using `qrcode` and image backends.
- `ur`: serializes and deserializes UR formats (CBOR wrappers) for cross-device data exchange.
- `mining-client`: connects to mining pools over Stratum V2; manages work submission and difficulty adjustments.
- `cli`: command-line wallet, PSBT inspector, QR/UR encoder/decoder, and mining diagnostics.
## Licensing & Environment

- Minimum Rust version: 1.77
- Dual-licensed: MIT OR GPL-3.0-or-later
- Supported OS: Linux, macOS, Windows
- SBOM generation: leverages `cargo-spdx-helpers`
## Quickstart

Build all crates in release mode for your host:

```bash
cargo build --workspace --release
```

Cross-compile `core` for embedded targets:

```bash
cargo build -p core --target thumbv7em-none-eabihf --release
```

Run the CLI wallet and list commands:

```bash
cargo run -p cli -- --help
```

Generate an SPDX SBOM for audit:

```bash
cargo sbom --all-targets -o sbom.spdx.json
```
## Getting Started
This section shows how to obtain the foundation-rs codebase, select the Rust toolchain, build the workspace, toggle features, run tests, and achieve a “first success” by building and running a small program using the `codecs` crate.
### 1. Clone the Repository
```bash
git clone https://gitlab.com/Foundation-Devices/foundation-rs.git
cd foundation-rs
```
### 2. Select the Rust Toolchain

foundation-rs requires Rust 1.77.x. The `rust-toolchain.toml` file in the repo root ensures consistency:

```bash
# If you don't use automatic toolchain detection:
rustup override set 1.77.1
```
### 3. Enter a Reproducible Dev Environment

Choose one of:

- Guix:

  ```bash
  guix shell --manifest=manifest.scm
  ```

- Nix:

  ```bash
  nix develop   # or: nix-shell shell.nix
  ```

These commands drop you into a shell with all C/C++ and Rust dependencies ready.
### 4. Build the Entire Workspace

```bash
cargo build --workspace
```

By default this builds all crates (e.g. `foundation-bert`, `codecs`, etc.).
### 5. Enable or Disable Features

Build with a subset of features:

```bash
cargo build --workspace \
  --no-default-features \
  --features "serde log"
```

Re-enable defaults plus extras:

```bash
cargo build --workspace --features "default,tokio-runtime"
```
### 6. Run the Test Suite

```bash
cargo test --workspace
```
### 7. First Success: Small Program with codecs

Create a standalone project that uses the local `codecs` crate:

```bash
cd ..
cargo new codec-demo
cd codec-demo
```

In `Cargo.toml`, add:

```toml
[dependencies]
codecs = { path = "../foundation-rs/codecs" }
```

Replace `../foundation-rs` with your clone path if needed.

Create `src/main.rs`:
```rust
use codecs::cobs::Cobs; // COBS encoder/decoder from the codecs crate

fn main() {
    let data = b"Hello, world!";

    // Encode using COBS
    let encoded = Cobs::encode(data).expect("COBS encoding failed");
    println!("Encoded: {:?}", encoded);

    // Decode back
    let decoded = Cobs::decode(&encoded).expect("COBS decoding failed");
    println!("Decoded: {:?}", String::from_utf8_lossy(&decoded));
}
```
Build and run:

```bash
cargo run
```

You should see the encoded byte array followed by the original string. This confirms your setup and the `codecs` crate work as expected.
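If you are curious what COBS framing actually does, here is a minimal, self-contained encoder written from the COBS algorithm description. It is a sketch for intuition only, not the `codecs` crate's implementation:

```rust
// Minimal COBS encoder sketch: every zero byte is removed and replaced by
// "code" bytes giving the distance to the next zero (or end of block).
fn cobs_encode(data: &[u8]) -> Vec<u8> {
    let mut out = vec![0u8]; // placeholder for the first code byte
    let mut code_idx = 0;    // where the current code byte lives
    let mut code = 1u8;      // distance to the next zero so far

    for &b in data {
        if b == 0 {
            out[code_idx] = code; // close the block at this zero
            code_idx = out.len();
            out.push(0);
            code = 1;
        } else {
            out.push(b);
            code += 1;
            if code == 0xFF {
                // Maximum block length reached: emit code, start a new block
                out[code_idx] = code;
                code_idx = out.len();
                out.push(0);
                code = 1;
            }
        }
    }
    out[code_idx] = code;
    out
}

fn main() {
    // Classic COBS test vector: the zero byte becomes block-length codes
    assert_eq!(
        cobs_encode(&[0x11, 0x22, 0x00, 0x33]),
        vec![0x03, 0x11, 0x22, 0x02, 0x33]
    );
    // The whole point of COBS: the output never contains a zero byte
    assert!(cobs_encode(b"Hello, world!").iter().all(|&b| b != 0));
}
```

The zero-free output is what makes COBS useful for byte-stream framing: a literal `0x00` can then serve as an unambiguous packet delimiter.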
## Crate Guides

### Boxed Allocation with Foundation Arena

**Purpose**

Provide a heapless alternative to `std::boxed::Box` by allocating `T` values in a statically-sized arena. Ideal for recursive data structures or when you need owned pointers without the system heap.
**How It Works**

- `Box<'a, T>` wraps a `&'a mut T` allocated inside an `Arena<T, N>`.
- Call `Box::new_in(item, &arena)` to attempt allocation; it returns `Ok(Box)` on success or `Err(item)` if the arena is full.
- Implements `Deref<Target = T>` for ergonomic access, `PartialEq` for value comparisons across arenas, and a `Drop` that runs `ptr::drop_in_place` to drop `T`.
Usage Example
use foundation_arena::{Arena, boxed::Box};
/// A simple recursive list stored in an arena
#[derive(Debug, PartialEq)]
enum List<'a, T> {
Cons(T, Box<'a, List<'a, T>>),
Nil,
}
fn main() {
// Create an arena that can hold up to 4 List nodes
let arena: Arena<List<'_, i32>, 4> = Arena::new();
// Build list: 1 -> 2 -> Nil
let list: List<'_, i32> = List::Cons(
1,
Box::new_in(
List::Cons(2, Box::new_in(List::Nil, &arena).unwrap()),
&arena
).unwrap()
);
// Access via Deref
if let List::Cons(head, tail) = &list {
println!("Head = {}, Tail = {:?}", head, tail);
}
// Compare two lists in different arenas
let other_arena: Arena<List<'_, i32>, 4> = Arena::new();
let clone = List::Cons(
1,
Box::new_in(
List::Cons(2, Box::new_in(List::Nil, &other_arena).unwrap()),
&other_arena
).unwrap()
);
assert_eq!(list, clone);
}
**Practical Guidance**

- Keep the arena alive as long as any `Box` references exist. Dropping the arena invalidates all `&mut T` pointers.
- `new_in` returns the original `item` on `Err`; handle allocation failure in tight-space contexts.
- Memory is never reclaimed until the arena itself is dropped; avoid unbounded growth in long-lived arenas.
- Use for tree- or graph-like structures where nodes need owned, dereferenceable pointers but you cannot use the system heap.
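The `Err(item)` contract is worth internalizing: on allocation failure the value is handed back to the caller, so nothing is lost. The fixed-capacity idea can be sketched in plain Rust as below; `MiniArena` is a hypothetical toy for illustration (it deliberately skips the `Drop`/`drop_in_place` handling the real crate performs):

```rust
use core::cell::{Cell, UnsafeCell};
use core::mem::MaybeUninit;

/// Toy fixed-capacity arena. Values are never dropped; this only
/// illustrates the `Err(item)` allocation-failure contract.
pub struct MiniArena<T, const N: usize> {
    slots: UnsafeCell<[MaybeUninit<T>; N]>,
    next: Cell<usize>,
}

impl<T, const N: usize> MiniArena<T, N> {
    pub fn new() -> Self {
        Self {
            // An array of MaybeUninit may itself be left uninitialized.
            slots: UnsafeCell::new(unsafe { MaybeUninit::uninit().assume_init() }),
            next: Cell::new(0),
        }
    }

    /// Hands the item back when the arena is full, mirroring the
    /// `Box::new_in(...) -> Err(item)` behaviour described above.
    pub fn alloc(&self, item: T) -> Result<&mut T, T> {
        let i = self.next.get();
        if i >= N {
            return Err(item);
        }
        self.next.set(i + 1);
        // SAFETY: slot `i` is claimed exactly once, so the returned
        // &mut T never aliases another outstanding reference.
        unsafe { Ok((*self.slots.get())[i].write(item)) }
    }
}

fn main() {
    let arena: MiniArena<u32, 2> = MiniArena::new();
    let a = arena.alloc(1).unwrap();
    assert_eq!(*a, 1);
    let _b = arena.alloc(2).unwrap();
    // Third allocation fails and returns the value intact
    assert_eq!(arena.alloc(3), Err(3));
}
```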
### Encoding Nostr Public and Secret Keys (NIP-19)

**Purpose**

Provide no_std-friendly, fixed-size Bech32 encoding of Nostr public and secret keys (NIP-19) without heap allocations. Uses heapless buffers or any `fmt::Write` implementation.

**Essentials**

- `NPUB_LEN = bech32_len("npub", 32)`
- `NSEC_LEN = bech32_len("nsec", 32)`
- `encode_npub`, `encode_nsec` return a `heapless::String` of the exact length
- `encode_npub_to_fmt`, `encode_nsec_to_fmt` write into any `fmt::Write`

**Constants**

```rust
// Length of the resulting Bech32 string (hrp + separator + data + checksum)
pub const NPUB_LEN: usize = bech32_len("npub", 32);
pub const NSEC_LEN: usize = bech32_len("nsec", 32);
```
**Quick Examples**

Encoding to a `heapless::String`:

```rust
use foundation_codecs::nostr::{encode_npub, encode_nsec, NPUB_LEN, NSEC_LEN};

let pubkey: [u8; 32] = [/* 32 bytes of public key */];
let seckey: [u8; 32] = [/* 32 bytes of secret key */];

// Returns String<NPUB_LEN> and String<NSEC_LEN>, no allocations
let npub: heapless::String<NPUB_LEN> = encode_npub(&pubkey);
let nsec: heapless::String<NSEC_LEN> = encode_nsec(&seckey);

assert_eq!(npub.len(), NPUB_LEN);
assert_eq!(nsec.len(), NSEC_LEN);
println!("npub: {}", npub);
println!("nsec: {}", nsec);
```
Writing directly to any `fmt::Write`:

```rust
use core::fmt::Write;
use foundation_codecs::nostr::{encode_npub_to_fmt, encode_nsec_to_fmt};
use heapless::String;

// Pre-allocate a heapless buffer with exact capacity
let mut buf: String<NPUB_LEN> = String::new();
encode_npub_to_fmt(&pubkey, &mut buf)
    .expect("buffer capacity NPUB_LEN is sufficient");
println!("npub via fmt: {}", buf);

// Or write into a frame buffer, serial port writer, etc.
struct Logger;

impl core::fmt::Write for Logger {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        // send s over UART or store in flash...
        Ok(())
    }
}

let mut logger = Logger;
encode_nsec_to_fmt(&seckey, &mut logger)?;
```
**When to use which API**

- Use `encode_npub` / `encode_nsec` for quick one-liner encodings to a `heapless::String`.
- Use `encode_*_to_fmt` when you want to stream directly into an existing formatter or avoid even the tiny heapless buffer.

**Why fixed-size buffers**

By computing the exact output length with `bech32_len`, you ensure no runtime checks or reallocations. This is critical in embedded/no_std environments where heap usage is disallowed or unpredictable.
### Parsing and Formatting Uniform Resources (UR)

**Purpose**

Convert between UR-encoded strings and the in-memory `UR<'a>` enum, handle parsing errors, and reserialize back to a canonical UR string.

**Essential APIs**

- `UR::parse(&str) -> Result<UR, ParseURError>`
- `impl Display for UR<'a>` (enables `to_string()`)
- `#[cfg(feature = "alloc")] pub fn to_string(ur_type: &str, message: &[u8]) -> String`
Parsing a UR string:

```rust
use foundation_ur::UR;

fn demo_parse(input: &str) {
    match UR::parse(input) {
        Ok(ur) if ur.is_single_part() => {
            println!(
                "Single-part of type '{}': {}",
                ur.as_type(),
                ur.as_bytewords().unwrap()
            );
        }
        Ok(ur) if ur.is_multi_part() => {
            let seq = ur.sequence().unwrap();
            let total = ur.sequence_count().unwrap();
            println!(
                "Multipart {}/{} fragment: {}",
                seq, total,
                ur.as_bytewords().unwrap()
            );
        }
        Err(e) => eprintln!("Failed to parse UR: {}", e),
        _ => unreachable!(),
    }
}

// Examples
demo_parse("ur:bytes/aaabbbccc");              // 1-part
demo_parse("ur:crypto-seed/2-5/xyzxyzxyzxyz"); // 2 of 5 multipart
demo_parse("invalid:string");                  // reports ParseURError::InvalidScheme
```
Handling parse errors. `ParseURError` covers the common issues:

- `InvalidScheme`: missing or wrong `"ur:"` prefix
- `TypeUnspecified`: no slash after the type
- `InvalidCharacters`: non-alphanumeric characters in the type
- `InvalidIndices`: malformed `"seq-count"` in a multipart UR
- `ParseInt(_)`: non-numeric sequence values

```rust
use foundation_ur::{UR, ParseURError};

let bad = "ur:bytes#/foo";
assert_eq!(UR::parse(bad).unwrap_err(), ParseURError::InvalidCharacters);
```
Serializing back to UR text:

- `ur.to_string()` (always available via the `Display` impl)
- `foundation_ur::ur::to_string(ur_type, message)` when the `alloc` feature is enabled

```rust
use foundation_ur::UR;
#[cfg(feature = "alloc")]
use foundation_ur::ur::to_string;

let payload = b"hello";
let ur = UR::SinglePartDeserialized { ur_type: "bytes", message: payload };

// via Display: prints "ur:bytes/<minimal bytewords of the payload>"
println!("{}", ur);

// via the convenience fn
#[cfg(feature = "alloc")]
println!("{}", to_string("bytes", payload));
```
**Practical tips**

- Always check `is_single_part()` vs. `is_multi_part()` before calling `sequence()` or `sequence_count()`.
- Use `as_bytewords()` to retrieve the raw bytewords string for feeding into a `BaseDecoder` or `HeaplessDecoder`.
- The `Display` impl automatically applies minimal `bytewords` encoding for deserialized data.
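The `InvalidIndices` and `ParseInt` cases come from the `seq-count` token of a multipart UR (e.g. `2-5` in `ur:crypto-seed/2-5/...`). A self-contained sketch of that sub-parse, with hypothetical string labels standing in for the real `ParseURError` variants:

```rust
// Toy parser for the "seq-count" token of a multipart UR ("2-5" means
// fragment 2 of 5). Error labels are stand-ins for ParseURError variants.
fn parse_indices(s: &str) -> Result<(u32, u32), &'static str> {
    // Must be "<seq>-<count>"
    let (seq, count) = s.split_once('-').ok_or("InvalidIndices")?;
    let seq: u32 = seq.parse().map_err(|_| "ParseInt")?;
    let count: u32 = count.parse().map_err(|_| "ParseInt")?;
    // Sequence numbers are 1-based and cannot exceed the fragment count
    if seq == 0 || count == 0 || seq > count {
        return Err("InvalidIndices");
    }
    Ok((seq, count))
}

fn main() {
    assert_eq!(parse_indices("2-5"), Ok((2, 5)));
    assert!(parse_indices("x-5").is_err()); // non-numeric -> ParseInt
    assert!(parse_indices("7").is_err());   // no '-' -> InvalidIndices
    assert!(parse_indices("6-5").is_err()); // seq > count -> InvalidIndices
}
```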
### Configuring Cargo Features for foundation-urtypes

**Purpose**

Show how to enable or disable optional functionality (e.g. `std`, Bitcoin support, `alloc`-only mode) via Cargo features.

By default, `foundation-urtypes` enables the `std` feature set. Override this to compile in a `no_std` context, pull in Bitcoin-specific UR types, or reduce dependencies to a pure-CBOR/alloc build.

Essential Cargo.toml snippet:

```toml
[dependencies]
foundation-urtypes = { version = "0.5.0", default-features = false, features = ["std"] }
```
Available features:

- `std` (default): enables Rust's standard library; pulls in `alloc`, `minicbor/std`, and `bitcoin/std`
- `alloc`: no `std` support, but enables heap allocation via `alloc`; activates `minicbor/alloc` only
- `bitcoin`: provides Bitcoin-specific UR types (addresses, HD keys, descriptors)
**Practical examples**

Full-featured (with `std` + `bitcoin`):

```toml
foundation-urtypes = { version = "0.5.0", features = ["std", "bitcoin"] }
```

`no_std` with allocator only (for embedded/firmware):

```toml
foundation-urtypes = { version = "0.5.0", default-features = false, features = ["alloc"] }
```

Disabling Bitcoin modes but keeping `std` & CBOR:

```toml
foundation-urtypes = { version = "0.5.0", default-features = false, features = ["std"] }
```

Running integration tests (address/hdkey):

```bash
cargo test --features bitcoin
```
**Notes**

- `minicbor` support is driven by the `std` or `alloc` features; you don't need to list `minicbor` explicitly.
- The dev-dependency `foundation-test-vectors` is pulled in only when running tests (no impact on release builds).
- To compile entirely without `std` or `alloc`, you'd need to fork or request a dedicated feature set (not provided by v0.5.0).
### Tokio Adapter Integration

**Purpose**

Adapt a `tokio::net::TcpStream` to the `embedded_io_async` traits required by `stratum-v1::Client`.

Stratum-v1's `Client<C, RX, TX>` expects a transport implementing `embedded_io_async::Read`, `ReadReady`, and `Write`. The `adapter::FromTokio<T>` wrapper provides these by delegating to Tokio's async I/O.
**Adapter Definition (simplified)**

```rust
mod adapter {
    use core::future::poll_fn;
    use core::pin::Pin;
    use core::task::Poll;
    use embedded_io_async::{Read, ReadReady, Write};

    #[derive(Clone)]
    pub struct FromTokio<T: ?Sized> {
        inner: T,
    }

    impl<T> FromTokio<T> {
        pub fn new(inner: T) -> Self {
            Self { inner }
        }
    }

    impl<T: tokio::io::AsyncRead + Unpin + ?Sized> Read for FromTokio<T> {
        async fn read(&mut self, buf: &mut [u8]) -> Result<usize, std::io::Error> {
            if buf.is_empty() {
                return Ok(0);
            }
            poll_fn(|cx| {
                let mut rb = tokio::io::ReadBuf::new(buf);
                match Pin::new(&mut self.inner).poll_read(cx, &mut rb) {
                    Poll::Ready(Ok(())) => Poll::Ready(Ok(rb.filled().len())),
                    Poll::Ready(Err(e)) => Poll::Ready(Err(e)),
                    Poll::Pending => Poll::Pending,
                }
            })
            .await
        }
    }

    impl<T: Readable + Unpin + ?Sized> ReadReady for FromTokio<T> {
        fn read_ready(&mut self) -> Result<bool, std::io::Error> {
            tokio::task::block_in_place(|| {
                tokio::runtime::Handle::current().block_on(poll_fn(|cx| {
                    match Pin::new(&mut self.inner).poll_read_ready(cx) {
                        Poll::Ready(_) => Poll::Ready(Ok(true)),
                        Poll::Pending => Poll::Ready(Ok(false)),
                    }
                }))
            })
        }
    }

    impl<T: tokio::io::AsyncWrite + Unpin + ?Sized> Write for FromTokio<T> {
        async fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
            let n = poll_fn(|cx| Pin::new(&mut self.inner).poll_write(cx, buf)).await?;
            if n == 0 && !buf.is_empty() {
                Err(std::io::ErrorKind::WriteZero.into())
            } else {
                Ok(n)
            }
        }

        async fn flush(&mut self) -> Result<(), std::io::Error> {
            poll_fn(|cx| Pin::new(&mut self.inner).poll_flush(cx)).await
        }
    }

    pub trait Readable {
        fn poll_read_ready(
            &self,
            cx: &mut core::task::Context<'_>,
        ) -> core::task::Poll<std::io::Result<()>>;
    }

    impl Readable for tokio::net::TcpStream {
        fn poll_read_ready(
            &self,
            cx: &mut core::task::Context<'_>,
        ) -> core::task::Poll<std::io::Result<()>> {
            // Delegates to TcpStream's inherent poll_read_ready
            tokio::net::TcpStream::poll_read_ready(self, cx)
        }
    }
}
```
**Wrapping a TcpStream**

```rust
use tokio::net::TcpStream;
use stratum_v1::Client;

// Connect to the pool
let stream = TcpStream::connect("38.51.144.240:21496").await?;

// Wrap it in FromTokio
let conn = adapter::FromTokio::<TcpStream>::new(stream);

// Instantiate the Stratum client with chosen buffer sizes
let mut client: Client<_, 1480, 512> = Client::new(conn);
```

- The first generic parameter `C` is inferred as `FromTokio<TcpStream>`.
- `1480` and `512` are the receive and transmit buffer sizes; adjust for your use-case.
**Practical Usage**

Enable software rolling:

```rust
client.enable_software_rolling(/*version*/ true, /*extranonce2*/ false, /*ntime*/ false);
```

Send requests and authorize:

```rust
let exts = Extensions {
    version_rolling: Some(VersionRolling {
        mask: Some(0x1fffe000),
        min_bit_count: Some(10),
    }),
    minimum_difficulty: None,
    subscribe_extranonce: None,
    info: None,
};
client.send_configure(exts).await?;
client.send_connect(Some("worker01".into())).await?;
client.send_authorize("user.worker".into(), "x".into()).await?;
```

Poll for messages and submit shares using the same `client` instance in your Tokio tasks.
### Creating and Slicing NOR-Flash Byte Inputs

**Purpose**

Instantiate a nom-compatible `Bytes<S, N>` over a NOR-flash device and derive sub-slices for parsing without pulling the entire region into RAM.

`Bytes<S, N>` wraps any `S: ReadNorFlash` behind `Rc<RefCell<_>>` and implements nom's `InputTake`/`Slice` traits. You can:

- Create a window over flash
- Split or limit that window
- Pass it directly into nom parsers (e.g. `take`, `tag`)
**Essential Steps**

1. Wrap your flash device in `Rc<RefCell<_>>`.
2. Call `Bytes::new(offset, len, storage)` to get the full input slice.
3. Use `.take(count)`, `.take_split(count)`, or `.slice(range)` to carve out sub-regions.
4. Feed the resulting `Bytes` into nom combinators directly.
```rust
use core::cell::RefCell;
use nom::bytes::complete::{tag, take};
use embedded_storage::nor_flash::ReadNorFlash;
use nom_embedded_storage::{Bytes, rc::Rc};

// Suppose `MyFlash` implements `ReadNorFlash<READ_SIZE = 1>`
let raw_flash: MyFlash = /* … */;
let storage = Rc::new(RefCell::new(raw_flash));

// 2. Create a `Bytes` window over the first 1024 bytes of flash, chunk buffer size 64:
let input = Bytes::<_, 64>::new(0, 1024, storage.clone())
    .expect("offset+len within flash capacity");

// 3a. Take the first 8 bytes as a header:
let header_slice = input.take(8);

// 3b. Split the input into (prefix, suffix):
let (first4, rest) = input.take_split(4);

// 3c. Slice by Rust ranges:
let middle = input.slice(100..200); // bytes 100..200
let tail = input.slice(200..);      // bytes 200..end
let all = input.slice(..);          // full original window

// 4. Nom parsing directly on flash-backed bytes:
let (remaining, magic) = tag([0xDE, 0xAD, 0xBE, 0xEF])(input)
    .expect("flash begins with magic");
let (remaining, length) = take(2usize)(remaining)
    .expect("next 2 bytes are length field");
// `remaining` is still a `Bytes<S, 64>` pointing just past the parsed data.
```
**Practical Tips**

- Ensure your NOR-flash driver's `READ_SIZE == 1`.
- Choose `N` (heapless buffer size) ≥ your parser's look-ahead.
- `.take_split()` is zero-copy: it just adjusts offsets/lengths, with no extra flash reads.
- When nom requests a slice, the `Slice` impl adjusts the offset so you always read only what you need.
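The zero-copy claim is easiest to see as plain offset/length bookkeeping. This toy `Window` type (hypothetical, not part of `nom_embedded_storage`) mirrors what `take_split` does without touching the flash at all:

```rust
// Toy model of the zero-copy windowing behind `Bytes<S, N>`:
// splitting a window only produces two new (offset, len) pairs.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Window {
    offset: usize,
    len: usize,
}

impl Window {
    fn take_split(self, count: usize) -> (Window, Window) {
        assert!(count <= self.len, "split point must lie inside the window");
        (
            Window { offset: self.offset, len: count },
            Window { offset: self.offset + count, len: self.len - count },
        )
    }
}

fn main() {
    let input = Window { offset: 0, len: 1024 };
    let (first4, rest) = input.take_split(4);
    assert_eq!(first4, Window { offset: 0, len: 4 });
    assert_eq!(rest, Window { offset: 4, len: 1020 });
}
```

Actual flash reads happen lazily, only when a parser inspects the bytes inside a window; the windows themselves are cheap values.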
### Parsing and Verifying Firmware Headers

**Purpose**

Extract the 2 KiB header from a Passport firmware image, parse it into a `Header` struct, and validate its fields before any signature checks.

**Essential Items**

- `HEADER_LEN` (u32): size of the header in bytes (2048).
- `header(&[u8]) -> IResult<&[u8], Header>`: nom-based parser for the header.
- `Header::verify() -> Result<(), VerifyHeaderError>`: checks magic, timestamp, length bounds, and public-key indexes.
- `Header::is_signed_by_user() -> bool`: true when `public_key1 == USER_KEY` (255).
**Code Example: parsing and header validation**

```rust
use foundation_firmware::{header, Header, VerifyHeaderError, HEADER_LEN};
use nom::Finish;

fn validate_header(fw: &[u8]) -> Result<Header, VerifyHeaderError> {
    // Ensure buffer is at least HEADER_LEN
    let header_bytes = fw
        .get(..usize::try_from(HEADER_LEN).unwrap())
        .expect("firmware too short for header");

    // Parse
    let (_, hdr) = header(header_bytes)
        .finish()
        .map_err(|_| VerifyHeaderError::UnknownMagic(0))?; // parse failure

    // Structural checks: magic, timestamp, lengths, key indexes
    hdr.verify()?;
    Ok(hdr)
}
```
**Practical Usage**

Read the entire firmware into memory:

```rust
let buf = std::fs::read("firmware.bin")?;
```

Call `validate_header(&buf)` to catch malformed headers early, then inspect key fields on the returned `Header`:

```rust
println!("Magic:     {:#010X}", hdr.information.magic);
println!("Timestamp: {}", hdr.information.timestamp);
println!("Version:   {}", hdr.information.version);
println!("Length:    {} bytes", hdr.information.length);
if hdr.is_signed_by_user() {
    println!("This firmware is user-signed");
}
```

After header validation, compute hashes and call `verify_signature(...)`.
**Error Cases**

- `UnknownMagic(magic)`: not mono (`0x50415353`) or color (`0x53534150`).
- `InvalidTimestamp`: timestamp is zero.
- `FirmwareTooSmall` / `FirmwareTooBig`: length outside `[HEADER_LEN, MAX_LEN]`.
- `InvalidPublicKey1Index` / `InvalidPublicKey2Index`: index > `MAX_PUBLIC_KEYS`.
- `SamePublicKeys`: both signatures use the same key index.

Keep header parsing and verification separate from signature validation to fail fast on structural issues.
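As a fail-fast illustration, checking the magic amounts to reading the first four header bytes as an integer. The helper below is hypothetical and assumes big-endian byte order, which makes the mono magic `0x50415353` spell out "PASS"; the real parser is the nom-based `header` function:

```rust
// Hypothetical fail-fast magic check; byte order is an assumption here.
const MAGIC_MONO: u32 = 0x50415353;  // "PASS" read big-endian
const MAGIC_COLOR: u32 = 0x53534150; // "SSAP" read big-endian

fn read_magic(header: &[u8]) -> Option<u32> {
    // Too-short input fails immediately instead of reaching later checks
    let bytes: [u8; 4] = header.get(..4)?.try_into().ok()?;
    Some(u32::from_be_bytes(bytes))
}

fn main() {
    assert_eq!(read_magic(b"PASS-rest-of-header"), Some(MAGIC_MONO));
    assert_eq!(read_magic(b"SSAP-rest-of-header"), Some(MAGIC_COLOR));
    assert_eq!(read_magic(&[1, 2]), None); // truncated header: fail fast
}
```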
## FFI Integration Guide

This guide shows how to generate C headers from the Foundation FFI Rust library, integrate them into a CMake-based C/C++ project, and call the nip19 codec functions from C++.

### Generating Headers with cbindgen

Configure cbindgen via `ffi/cbindgen.toml`. Generate or update `foundation.h` with:

```bash
# From repository root
cargo install --locked cbindgen
cbindgen \
    --config ffi/cbindgen.toml \
    --crate foundation-ffi \
    --output ffi/include/foundation.h
```

The generated header defines:

```c
void nip19_encode_pubkey(const uint8_t pk[32], char out[65]);
void nip19_encode_secret(const uint8_t sk[32], char out[65]);
```

CI (`.github/workflows/ffi.yaml`) checks header consistency on each commit.
### CMake Integration

Use Corrosion to fetch and build the Rust crate alongside your C++ code.

```cmake
cmake_minimum_required(VERSION 3.15)
project(FoundationFFIExample LANGUAGES CXX)

# Import the Rust crate (Corrosion creates a `foundation-ffi` target)
find_package(Corrosion REQUIRED)
corrosion_import_crate(MANIFEST_PATH ${CMAKE_SOURCE_DIR}/ffi/Cargo.toml)

# Wrap the built Rust library as an imported target
add_library(foundation_ffi_lib SHARED IMPORTED)
set_target_properties(foundation_ffi_lib PROPERTIES
  IMPORTED_LOCATION $<TARGET_FILE:foundation-ffi> # Corrosion exports this target
)
add_dependencies(foundation_ffi_lib foundation-ffi)

# Expose headers
target_include_directories(foundation_ffi_lib PUBLIC
  ${CMAKE_SOURCE_DIR}/ffi/include
)

# Your executable
add_executable(my_app src/main.cpp)
target_link_libraries(my_app PUBLIC foundation_ffi_lib)
set_target_properties(my_app PROPERTIES CXX_STANDARD 14)
```

Build and run:

```bash
mkdir build && cd build
cmake ..
cmake --build .
./my_app
```
### Usage Example in C++

The following C++ snippet encodes a 32-byte public key (`uint8_t[32]`) into an npub string.

```cpp
#include "foundation.h"
#include <array>
#include <iostream>

int main() {
    // Example 32-byte public key
    std::array<uint8_t, 32> pubkey = {
        0x1a,0xb3,0xc4,0xd5,0xe6,0xf7,0x08,0x19,
        0x2a,0x3b,0x4c,0x5d,0x6e,0x7f,0x80,0x91,
        0xa2,0xb3,0xc4,0xd5,0xe6,0xf7,0x08,0x19,
        0x2a,0x3b,0x4c,0x5d,0x6e,0x7f,0x80,0x91
    };

    // Buffer must hold 64 chars + null terminator
    char npub[65] = {0};
    nip19_encode_pubkey(pubkey.data(), npub);

    std::cout << "Encoded npub: " << npub << "\n";
    return 0;
}
```

For secret keys, call `nip19_encode_secret(secret.data(), output_buffer);` similarly. Ensure your build links against the generated `foundation_ffi` library.
## Command-Line Tools

This section covers three command-line utilities provided in this repository: the firmware validator, the Stratum Tokio CLI client, and the real-time QR code scanner.

### Firmware Validator (foundation-firmware)

Validates firmware images by parsing headers and verifying cryptographic signatures.

**Build**

```bash
cd firmware
cargo build --release --bin foundation-firmware
```

The resulting binary is `target/release/foundation-firmware`.

**Usage**

```
foundation-firmware [OPTIONS] <FIRMWARE_FILE>

Options:
  -h, --help           Print help information
      --header-only    Parse and display header without signature check
  -p, --public-key
```

**Examples**

```bash
# Display firmware header and metadata
foundation-firmware firmware.bin

# Only parse and show header fields
foundation-firmware --header-only firmware.bin

# Verify signature using a custom public key
foundation-firmware --public-key certs/pubkey.pem firmware.bin
```

Fields displayed include firmware version, build timestamp, target device, and signature status.
### Stratum Tokio CLI (tokio-cli)

A Tokio-based client for connecting to Bitcoin mining pools via the Stratum v1 protocol.

**Build & Run**

```bash
cd stratum-v1
cargo run --release --example tokio-cli -- [OPTIONS]
```

**Usage**

```
tokio-cli [OPTIONS]

Options:
  -u, --pool-uri
```

**Examples**

```bash
# Connect to Antpool and simulate share submission
cargo run --example tokio-cli -- \
    --pool-uri stratum+tcp://antpool.com:3333 \
    --username worker1 \
    --password x \
    --timeout 30
```

The client:

- Establishes a TCP/TLS connection
- Authenticates via `mining.authorize`
- Receives work notifications
- Submits shares via `mining.submit`
### QR Code Scanner (scan_qr.py)

Scans and prints QR codes in real time using the default camera.

**Requirements**

```bash
pip3 install opencv-python
```

**Usage**

```
python3 tools/scan_qr.py [--camera IDX]

Options:
  --camera IDX    Camera index (default: 0)
  --help          Show help message
```

**Examples**

```bash
# Start scanner on default camera (0)
python3 tools/scan_qr.py

# Use external camera (e.g. index 1)
python3 tools/scan_qr.py --camera 1
```

Output format:

```
[2025-08-19 14:05:12] Detected QR: https://example.com
[2025-08-19 14:05:15] Detected QR: WIFI:S:MySSID;T:WPA;P:password;;
```

The script:

- Captures video frames
- Detects multiple QR codes per frame
- Prints each new code exactly once per session
## Architecture & Memory Model

This section describes how Foundation-Devices crates manage memory and features across embedded (no_std) and hosted (std) environments. It covers crate feature flags, heapless vs alloc collections, fixed-capacity arenas, unified error/log macros, and collection trait abstractions.

### no_std vs std

All crates default to `#![no_std]` to support embedded targets. Enable the Rust standard library via the `std` feature.

Cargo.toml:

```toml
[package]
name = "foundation-devices"
version = "0.1.0"

[features]
default = ["alloc"]
alloc = []
std = ["alloc"] # pulls in `std` automatically
defmt = ["stratum-v1/defmt"]
log = ["stratum-v1/log"]
```

Crate root (e.g., `arena/src/lib.rs`):

```rust
#![no_std]

#[cfg(feature = "std")]
extern crate std;
```
### Heapless vs alloc Collections

Use `heapless` for fixed-capacity, stack-allocated containers or `alloc` for dynamic ones. `stratum-v1` exposes type aliases:

```rust
// stratum-v1/src/fmt.rs
#[cfg(feature = "alloc")]
pub use alloc::vec::Vec as AVec;
#[cfg(not(feature = "alloc"))]
pub use heapless::Vec as HVec;

#[cfg(feature = "alloc")]
pub use alloc::string::String as AString;
#[cfg(not(feature = "alloc"))]
pub use heapless::String as HString;
```
Example: building a buffer when the `alloc` feature is enabled:

```rust
#[cfg(feature = "alloc")]
use stratum_v1::fmt::AVec;

fn build_buffer() -> AVec<u8> {
    let mut buf = AVec::new();
    buf.extend_from_slice(&[1, 2, 3]);
    buf
}
```
### Feature Flags & Conditional Macros

`stratum-v1` provides logging and assertion macros that route to `defmt`, `log`, or a no-op based on features:

```rust
// Enable defmt logging in Cargo.toml:
// [features]
// defmt = ["stratum-v1/defmt"]
use stratum_v1::log_trace;
use stratum_v1::error;

fn example() {
    log_trace!("Starting computation: x = {}", 42);
    if let Err(e) = do_work() {
        error!("Work failed: {:?}", e);
    }
}
```

The macros expand to:

- `defmt::trace!` / `log::trace!` / no-op
- `defmt::error!` / `log::error!` / no-op
### Memory Arena

The `arena` crate provides compile-time, fixed-capacity object storage without a global allocator:

```rust
use arena::Arena;

struct MyStruct {
    id: u32,
}

// Create an arena for up to 16 `MyStruct` values
let arena: Arena<MyStruct, 16> = Arena::new();

// Allocate and initialize in place
let obj: &mut MyStruct = arena.alloc(|| MyStruct { id: 1 })
    .expect("Arena full");
assert_eq!(obj.id, 1);
```

Internally, `Arena` manages chunks in a stack array and hands out mutable references on each `alloc` call.
### Error Handling & Assertions

Use the `assert_eq!` and `assert!` macros from `stratum-v1` for portable panics or testable failures in `no_std`:

```rust
use stratum_v1::assert_eq;

fn calculate(a: u8, b: u8) -> u8 {
    // Compute the wrapping sum, then assert against the widened sum so
    // the check actually fires on overflow instead of panicking first.
    let sum = a.wrapping_add(b);
    assert_eq!(sum as u16, a as u16 + b as u16, "Overflow detected");
    sum
}
```

These expand to `debug_assert_eq!` in `std` or to `panic!`/`abort` in `no_std`.
### Collection Trait Abstractions

The `ur::collections` module defines reusable traits and re-exports for deques, sets, and vectors. Future plans align with the `cc-traits` ecosystem.

```rust
use ur::collections::{VecExt, DequeExt};

// Write generic code over any vector with fallible push
fn push_items<V: VecExt<u8>>(mut v: V) {
    v.push(10).unwrap();
    v.push(20).unwrap();
}

// Using a heapless deque
use heapless::Deque;

let mut dq: Deque<i32, 8> = Deque::new();
dq.push_back(1).unwrap();
dq.push_front(0).unwrap();
```

Trait definitions let you write generic code over any `VecExt`, `DequeExt`, or `SetExt` implementation.
### Fountain Encoder Usage

The `ur::fountain::Encoder` illustrates combining memory models and trait abstractions:

```rust
use ur::fountain::Encoder;

// Uses heapless buffers, or `alloc` when that feature is enabled
let data: &[u8] = b"Hello, fountain!";
let mut encoder: Encoder<_, 32> = Encoder::new(data);

// Generate fragments
while let Some(fragment) = encoder.next_part() {
    // Send fragment over transport
    send(fragment);
}
assert!(encoder.is_complete());
```

- The buffer type adapts to `heapless::Vec` or `alloc::Vec` via feature flags.
- The encoder uses XOR-based coding, storing intermediate state in the arena or dynamic buffer.

This architecture ensures a consistent API across embedded and hosted targets while optimizing for performance, footprint, and flexibility.
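The XOR-based coding mentioned above rests on one identity: XORing a mixed fragment with all but one of its sources recovers the remaining source. A toy demonstration of that identity, unrelated to the `ur` crate's actual fragment-selection logic:

```rust
// XOR one equal-length part into an accumulator in place.
fn xor_into(acc: &mut [u8], part: &[u8]) {
    for (a, p) in acc.iter_mut().zip(part) {
        *a ^= *p;
    }
}

fn main() {
    let frag_a = *b"hello, fo";
    let frag_b = *b"untain!!!";

    // A "mixed" part carrying A ^ B, as a fountain encoder might emit
    let mut mixed = frag_a;
    xor_into(&mut mixed, &frag_b);

    // A receiver that already holds A peels it off to recover B,
    // because (A ^ B) ^ A == B
    xor_into(&mut mixed, &frag_a);
    assert_eq!(&mixed, b"untain!!!");
}
```

This is why a decoder can make progress from fragments it did not explicitly request: any mixed part that overlaps its known set by all but one source yields a new source fragment.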
## Development & Contribution

This section outlines the project layout, coding standards, CI pipelines, fuzz testing, development environments, and guidelines for safely adding new crates or features.

### Workspace Layout

The repository uses a Cargo workspace at its root. Key directories:

- `/src`: core Rust crates
- `/fuzz`: `cargo-fuzz` targets
- `/contrib`: utility scripts (e.g., `fuzz.sh`)
- `.github/workflows`: CI definitions

Workspace members live in the root `Cargo.toml`:

```toml
[workspace]
members = [
    "src/device-core",
    "src/device-client",
    "fuzz/endpoint-fuzz",
    # add new crates here
]
```
### Coding Standards

**C/C++ Formatting**

We enforce a Chromium-based `.clang-format`:

- Indent width: 2 spaces
- Align parameters and trailing comments
- Place braces on the same line

To apply:

```bash
clang-format -i **/*.cpp **/*.h
```

Commit hooks should run `clang-format` before staging.

**Rust Formatting and Lints**

We use `rustfmt` and Clippy:

```bash
# Check formatting
cargo fmt --all -- --check

# Run Clippy, deny warnings
cargo clippy --all-targets --all-features -- -D warnings
```
### Continuous Integration

CI workflows live under `.github/workflows`:

- `lint.yaml`
  - Checks REUSE compliance
  - Runs `cargo fmt`, `cargo clippy`, and `cargo test` under multiple feature sets
- `dependencies.yaml`
  - `cargo audit` for security advisories
  - `cargo update --workspace --dry-run` + `cargo check`
- `basic-fuzzing.yaml`
  - Launches `cargo fuzz` on each push for continuous coverage

Local CI validation:

```bash
# Formatting, clippy, tests
cargo fmt --all -- --check
cargo clippy --all-targets --all-features -- -D warnings
cargo test --workspace

# Dependency audit
cargo audit
```
### Fuzz Testing

**GitHub Actions**

`basic-fuzzing.yaml` triggers `cargo fuzz` on push.

**Local Fuzz Harness**

The `contrib/fuzz.sh` script discovers and runs all fuzz targets:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Discover fuzz targets via Cargo metadata
targets=$(cargo metadata --format-version 1 \
    | jq -r '.packages[] | select(.name | test("-fuzz$")) .name')

for target in $targets; do
    echo "Fuzzing $target..."
    cargo fuzz run "$target" \
        -- -max_len=2048 -max_total_time=30
done
```

Run:

```bash
./contrib/fuzz.sh
```
### Development Environments

**Nix Shell**

Use `shell.nix` to drop into a reproducible environment:

```bash
nix-shell
# or with flakes
nix develop

# Inside shell:
cargo build
clang-format --version
```

**Guix Shell**

Launch via the Guix manifest:

```bash
guix shell --manifest=manifest.scm

# Inside shell:
cmake --version
rustc --version
```
### Adding New Crates or Features

1. Create your crate under `src/` or `fuzz/`.
2. Add it to the workspace in `Cargo.toml`:

   ```toml
   members = [
       "src/device-new",
       # ...
   ]
   ```

3. Implement functionality and add unit tests in `src/device-new/src/lib.rs`.
4. Update `.clang-format` if introducing C/C++ code.
5. Validate locally:

   ```bash
   cargo fmt --all -- --check
   cargo clippy --all-targets --all-features -- -D warnings
   cargo test
   ./contrib/fuzz.sh
   cargo audit
   ```

6. Push a branch and open a PR. CI will run lint, tests, dependency checks, and fuzzing.
## Licensing & Compliance

This section summarizes the dual-licensing model for foundation-rs, outlines additional licenses on bundled test vectors, and shows how to generate a Software Bill-of-Materials (SBOM) for compliance.

### Dual Licensing

foundation-rs is distributed under either of these licenses, at your option:

- GPL-3.0-or-later (SPDX: GPL-3.0-or-later)
- MIT (SPDX: MIT)

Choose one license and comply with its terms:

- If you use GPL-3.0-or-later, you must provide the source code of any derivative work under the same license.
- If you use MIT, you must retain the MIT copyright and permission notice in your documentation and binaries.

All source files include SPDX license identifiers in their headers. Verify with:

```bash
grep -R "SPDX-License-Identifier" src/
```
### Test Vectors and Third-Party Licenses

The `test-vectors/` directory includes sample data from external sources, each under its own license. Key points:

- Each vector file carries a header with its SPDX identifier.
- Common licenses for test vectors include:
  - Creative Commons Zero v1.0 Universal (SPDX: CC0-1.0)
  - Apache License 2.0 (SPDX: Apache-2.0)
- To redistribute or modify these vectors, preserve their individual license headers.
- See `.reuse/dep5` for a complete mapping of vector files to their licenses.
### Generating a Software Bill-of-Materials

You can generate an SPDX-compliant SBOM covering foundation-rs and its dependencies using the REUSE tool. This helps track licensing across your supply chain.

Install the REUSE tool (requires Python 3.7+):

```bash
pip install reuse
```

Generate an SPDX document (the tool emits SPDX tag-value format):

```bash
cd /path/to/foundation-rs
reuse spdx --output sbom.spdx
```

Review `sbom.spdx` to verify:

- Project metadata (name, version, license)
- All source files and their SPDX identifiers
- Third-party crates and their licenses

Include the generated SBOM with your releases or distribute it alongside binaries to meet compliance requirements.