šŸ“s3cli-custom-aws-signature.md
šŸ“…November 14, 2025ā”‚ā±6 min read

Building s3cli: A CLI for Any S3-Compatible Storage

#rust #cli #s3 #cloudflare #backblaze #minio


There's been a lot of debate recently about MCP servers vs CLI tools for AI agents. The real takeaway? If it can be a CLI, it should be.

MCP servers have their place - when you need real-time events, stateful connections, or tight integration with a specific app. But for many tool integrations, a CLI is simpler: no server to run, no context bloat, no maintenance overhead.

"Replace MCP With CLI. The Best AI Agent Interface Already Exists" - Cobus Greyling

So when I wanted S3 storage capabilities for AI agents, I built s3cli - a lean CLI that works directly, without an MCP server.

Why a CLI Instead of an MCP?

When I wanted to give AI agents S3 storage capabilities, I had two paths:

  1. Build an MCP server - Great for real-time events, stateful connections
  2. Build a CLI tool - Simpler, no server needed, works with any agent

For S3 storage, the CLI made more sense. Now Claude Code, Cursor, or any agent can just run:

s3cli push screenshot.png --public
s3cli share abc123 --copy
s3cli ls --long

No MCP server to maintain. No context bloat. Just a binary that works.

But wait - why not just use the official AWS SDK? That's the third option I initially considered, and it's where the real story begins.

Why Not Just Use the AWS SDK?

When I started, the obvious path was: just use aws-sdk-s3. It's the official library, battle-tested, handles everything.

The problem? It's heavy. For a CLI tool that's supposed to be lean, the SDK adds significant compile-time overhead and inflates the binary.

More importantly - I didn't need most of what the SDK provides. I just needed to:

  • Upload a file
  • Download a file
  • List files
  • Generate a shareable link
  • Delete a file

That's 5 operations. The SDK has hundreds.

So I asked myself: how hard can S3 signing actually be? The answer: hard enough to be interesting, but doable.

The Implementation

I implemented S3 signing myself to keep it lean - no heavy SDK dependencies:

  • reqwest for HTTP
  • hmac and sha2 for cryptographic signing
  • chrono for timestamps

The Core: S3 Signing

The hardest part was getting S3 signing right. Here's the flow:

1. Create canonical request hash
2. Create string to sign  
3. Derive signing key (HMAC-SHA256 nested: date → region → service → "aws4_request")
4. Sign the string with the key
5. Include in Authorization header
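
Steps 1 and 2 can be sketched with plain string formatting. This is a hedged sketch, assuming the canonical request from step 1 has already been hashed; the four-line layout and the credential scope format follow the SigV4 spec:

```rust
// Sketch of step 2 (the "string to sign"), assuming the canonical
// request hash from step 1 is already computed. Layout per the AWS
// Signature Version 4 spec: algorithm, timestamp, scope, hash.
fn string_to_sign(amz_date: &str, date_stamp: &str, region: &str, canonical_hash: &str) -> String {
    // Credential scope: <date>/<region>/<service>/aws4_request
    let scope = format!("{}/{}/s3/aws4_request", date_stamp, region);
    format!("AWS4-HMAC-SHA256\n{}\n{}\n{}", amz_date, scope, canonical_hash)
}

fn main() {
    let sts = string_to_sign(
        "20251114T120000Z",
        "20251114",
        "us-east-1",
        // SHA-256 of an empty body, for illustration only.
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    );
    println!("{}", sts);
}
```

This string is what gets signed with the derived key in steps 3 and 4.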

The signing key derivation is the key insight:

// AWS4-HMAC-SHA256 key derivation.
// hmac_sha256 / hmac_sha256_hex are thin helpers over the hmac + sha2 crates.
let k_date = hmac_sha256(format!("AWS4{}", secret).as_bytes(), date_stamp.as_bytes());
let k_region = hmac_sha256(&k_date, region.as_bytes());
let k_service = hmac_sha256(&k_region, b"s3");
let k_signing = hmac_sha256(&k_service, b"aws4_request");
let signature = hmac_sha256_hex(&k_signing, string_to_sign.as_bytes());

Supported Providers

Because it's just S3-compatible, s3cli works with:

  • AWS S3 - the original
  • Cloudflare R2 - zero egress fees
  • Backblaze B2 - cheap storage
  • MinIO - self-hosted development
  • Local filesystem - testing without cloud

Current Commands

s3cli push demo.mp4              # Upload
s3cli pull abc123                # Download  
s3cli ls                         # List files
s3cli share abc123 --copy        # Presigned URL (auto-copies)
s3cli rm abc123                  # Delete
s3cli info abc123                # Metadata
s3cli copy src dest              # Copy within bucket
s3cli move src dest              # Move within bucket
s3cli cat abc123                 # Stream to stdout
s3cli config init                # Interactive setup
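
The share command's presigned URLs come from the same SigV4 machinery, moved into the query string. Here's a hedged sketch of just the query parameters; the parameter names come from the SigV4 presigning spec, while the signature value itself would be produced by the key derivation shown earlier:

```rust
// Sketch of the query string for a SigV4 presigned URL. Parameter names
// are from the AWS Signature Version 4 query-string signing spec. The
// signature is appended last: it is computed over a canonical request
// that includes the other parameters above it.
fn presign_query(
    access_key: &str,
    date_stamp: &str,
    amz_date: &str,
    region: &str,
    expires_secs: u64,
    signature: &str,
) -> String {
    // Credential scope with '/' percent-encoded as %2F.
    let credential = format!("{}%2F{}%2F{}%2Fs3%2Faws4_request", access_key, date_stamp, region);
    [
        "X-Amz-Algorithm=AWS4-HMAC-SHA256".to_string(),
        format!("X-Amz-Credential={}", credential),
        format!("X-Amz-Date={}", amz_date),
        format!("X-Amz-Expires={}", expires_secs),
        "X-Amz-SignedHeaders=host".to_string(),
        format!("X-Amz-Signature={}", signature),
    ]
    .join("&")
}

fn main() {
    // "AKIAEXAMPLE" and "deadbeef" are placeholder values.
    let q = presign_query("AKIAEXAMPLE", "20251114", "20251114T120000Z", "us-east-1", 3600, "deadbeef");
    println!("{}", q);
}
```

Anyone with the resulting URL can fetch the object until X-Amz-Expires elapses, with no credentials of their own.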

Project Structure

src/
├── main.rs              # CLI with clap
├── commands/            # push, pull, ls, rm, share, etc.
├── config/              # Config loading (TOML + env)
├── storage/
│   ├── s3.rs           # Custom S3 client
│   └── mod.rs          # Storage trait
└── models/             # FileEntry, FileMetadata

Key Design Decisions

1. Why Crockford Base32 for IDs?

I needed short, URL-safe identifiers for files. UUIDs are too long (36 chars). Sequential IDs leak information and cause hot spots.

Crockford Base32 encodes a UUID into just 12 characters:

  • URL-safe (no /, +, =)
  • Case-insensitive (great for CLI)
  • Short enough to type

fn crockford_encode(bytes: &[u8]) -> String {
    const ALPHABET: &[u8] = b"0123456789ABCDEFGHJKMNPQRSTVWXYZ";
    // ... encode UUID to 12-char string
}
// UUID v4 -> "abc123def456"
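
As a sketch (not the exact implementation), a generic Crockford encoder packs input bits five at a time. Note that a full 16-byte UUID encodes to 26 characters, so the 12-character IDs here presumably come from a truncated ~60-bit value:

```rust
// Generic Crockford Base32 encoder: 5 bits per output character.
// The alphabet drops I, L, O, and U to avoid lookalike characters.
const ALPHABET: &[u8] = b"0123456789ABCDEFGHJKMNPQRSTVWXYZ";

fn crockford_encode(bytes: &[u8]) -> String {
    let mut out = String::new();
    let mut buffer: u64 = 0;
    let mut bits = 0u32;
    for &b in bytes {
        buffer = (buffer << 8) | b as u64;
        bits += 8;
        while bits >= 5 {
            bits -= 5;
            out.push(ALPHABET[((buffer >> bits) & 0x1f) as usize] as char);
        }
    }
    if bits > 0 {
        // Left-pad the final partial group with zero bits.
        out.push(ALPHABET[((buffer << (5 - bits)) & 0x1f) as usize] as char);
    }
    out
}

fn main() {
    println!("{}", crockford_encode(b"hello"));
}
```

Decoding is the mirror image, with the extra wrinkle that Crockford treats I/L as 1 and O as 0 on input, which is what makes the IDs forgiving to retype.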

2. Why a Storage Trait?

AWS S3, Cloudflare R2, and Backblaze B2 all have the same API - they're S3-compatible. But MinIO might behave differently locally. And I wanted a local filesystem backend for testing without cloud costs.

The trait lets me swap backends without changing any command logic:

#[async_trait]
pub trait Storage: Send + Sync {
    // `Vec<u8>` rather than `impl Read` keeps the trait object-safe,
    // so backends can be swapped behind a `Box<dyn Storage>`.
    async fn put(&self, key: &str, data: Vec<u8>, metadata: &FileMetadata) -> Result<FileEntry>;
    async fn get(&self, key: &str) -> Result<StreamedData>;
    async fn delete(&self, key: &str) -> Result<()>;
    async fn list(&self, prefix: Option<&str>, pagination: &Pagination) -> Result<ListResult>;
    async fn presign(&self, key: &str, expires: Duration) -> Result<String>;
}
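
To make the swap concrete, here's a minimal in-memory backend against a simplified, synchronous stand-in for the trait (purely illustrative: the real trait is async and returns richer types):

```rust
use std::collections::HashMap;

// Simplified, synchronous stand-in for the async Storage trait,
// just to show how command logic stays backend-agnostic.
trait SimpleStorage {
    fn put(&mut self, key: &str, data: Vec<u8>);
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn delete(&mut self, key: &str) -> bool;
}

// Hypothetical in-memory backend, handy for tests.
struct InMemory {
    objects: HashMap<String, Vec<u8>>,
}

impl SimpleStorage for InMemory {
    fn put(&mut self, key: &str, data: Vec<u8>) {
        self.objects.insert(key.to_string(), data);
    }
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.objects.get(key).cloned()
    }
    fn delete(&mut self, key: &str) -> bool {
        self.objects.remove(key).is_some()
    }
}

fn main() {
    // Command logic only ever sees the trait, never the concrete backend.
    let mut store: Box<dyn SimpleStorage> = Box::new(InMemory { objects: HashMap::new() });
    store.put("abc123", b"hello".to_vec());
    assert_eq!(store.get("abc123"), Some(b"hello".to_vec()));
    assert!(store.delete("abc123"));
}
```

Swapping in an S3-backed implementation means writing one more `impl` block, not touching any command code.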

3. Why Minimal Dependencies?

The AWS SDK is great, but it pulls in a lot. For a CLI that should:

  • Install fast (cargo install s3cli)
  • Compile fast
  • Have a small binary footprint

...using reqwest directly was the right call:

# Cargo.toml: just the essentials
[dependencies]
reqwest = "0.11"
hmac = "0.12"
sha2 = "0.10"
chrono = "0.4"
clap = "4.5"

This is what lets the CLI feel lightweight.

What I Learned

Building this taught me several things:

1. S3 signing is documented, but the docs assume you're using their SDK

The AWS signature docs are comprehensive, but every example assumes you're using their libraries. Translating to raw HTTP requires piecing together what each header actually means. The key insight is the nested HMAC key derivation - once that clicks, everything else follows.

2. S3-compatible really means S3-compatible

I was surprised how well this worked across providers. R2, B2, MinIO - they all accept the same signed requests. The protocol is the real standard here, not AWS itself.

3. reqwest is all you need

For this use case, reqwest handles everything - HTTP/1.1, connection pooling, timeouts. The official SDK has nice-to-haves (retries, paginators), but for a focused CLI, less is more.

4. CLI tools are the simplest integration point

This was the original thesis, and it held up. Any agent that can execute shell commands can use s3cli. No MCP server, no SDK installation, no context overhead. Just a binary on the PATH.

Next Steps

  • Add progress bars for large files
  • Implement multipart uploads
  • Add sync command
  • Binary releases via cargo-dist

The full spec is in SPEC.md if you want to see the roadmap.


Last Updated: 2026-02-17