Server Architecture

What the servers do, what they know, and what they cannot know.

Design Philosophy

RVNT servers are designed to be compromisable without consequence. If an attacker gains full root access to every server we operate, they learn nothing about your messages, your contacts, or your identity. The servers are infrastructure, not trust anchors. They facilitate key exchange and peer discovery. They never relay message content. They never possess encryption keys. They are, by design, useless to an adversary.

This is architecture, not policy. A privacy policy can be changed with a board vote. RVNT's server architecture cannot be changed to reveal user data without rewriting the protocol -- and users would see the change in the open source code immediately.

Identity Server

The identity server maps usernames to public key bundles. When you register a username, the server stores:

  • Your chosen username
  • Your public key bundle (identity key, signed prekey, one-time prekeys, PQ prekey)
  • Proof-of-work hash (prevents username squatting)

The server does NOT store:

  • Your private keys (never leave your device)
  • Your IP address (logs are disabled, Tor enforced)
  • Your contacts or conversation history
  • Any message content or metadata
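Taken together, the stored record can be pictured as a single struct. A minimal Rust sketch -- field names and key sizes are illustrative assumptions (the real wire format is protobuf, and the PQ prekey size depends on the chosen KEM):

```rust
/// Everything the identity server holds for one user. Note what is
/// absent: private keys, IP addresses, contacts, message data.
/// Field names and sizes are illustrative; the wire format is protobuf.
struct StoredIdentity {
    username: String,
    identity_key: [u8; 32],          // public identity key
    signed_prekey: [u8; 32],         // public signed prekey
    one_time_prekeys: Vec<[u8; 32]>, // public OPKs, deleted as consumed
    pq_prekey: Vec<u8>,              // post-quantum KEM public key
    pow_hash: [u8; 32],              // proof-of-work from registration
}
```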

Technical Details

Framework:    Axum (Rust, async, tower middleware)
Database:     RocksDB (embedded key-value store, no external DB server)
TLS:          Let's Encrypt via Nginx reverse proxy
Logging:      Error counts and response times only
              Zero IP logging. Zero request body logging.
              stdout/stderr only, no log files on disk.
Rate limit:   30 req/min per Tor circuit (API)
              5 req/min per Tor circuit (registration)
Location:     Hetzner EU-Central (Nuremberg, Germany)
              GDPR jurisdiction
Onion:        Available as Tor hidden service (v3 .onion)
Redundancy:   2 instances, active-passive failover

API Endpoints

POST   /v1/register           Register username + upload key bundle
GET    /v1/bundle/:username   Fetch a user's public key bundle
PUT    /v1/prekeys            Replenish one-time prekeys
GET    /v1/prekey-count       Check remaining OPK count
DELETE /v1/identity           Delete account and all associated data

All endpoints:
  - Accept and return Protocol Buffers (not JSON)
  - Require valid proof-of-work header for write operations
  - Rate limited per Tor circuit (not per IP, since Tor hides IPs)
  - No authentication token for reads (public key bundles are public)
  - Signed request bodies for writes (Ed25519 signature from identity key)
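The per-circuit limits above are the classic token-bucket pattern. A minimal sketch in Rust -- `CircuitId`, the struct names, and the second-granularity clock are illustrative assumptions, not the production tower middleware:

```rust
use std::collections::HashMap;

type CircuitId = u64; // stand-in for a Tor circuit identifier

struct Bucket {
    tokens: f64,
    last_refill_secs: u64,
}

struct RateLimiter {
    capacity: f64,       // burst size, e.g. 30 for the API tier
    refill_per_sec: f64, // 30 req/min => 0.5 tokens per second
    buckets: HashMap<CircuitId, Bucket>,
}

impl RateLimiter {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        RateLimiter { capacity, refill_per_sec, buckets: HashMap::new() }
    }

    /// Allow one request on `circuit` at time `now_secs`?
    fn allow(&mut self, circuit: CircuitId, now_secs: u64) -> bool {
        let cap = self.capacity;
        let rate = self.refill_per_sec;
        let b = self.buckets.entry(circuit).or_insert(Bucket {
            tokens: cap,
            last_refill_secs: now_secs,
        });
        // Refill for the time elapsed since the last request, capped
        // at the burst size, then spend one token if available.
        let elapsed = now_secs.saturating_sub(b.last_refill_secs) as f64;
        b.tokens = (b.tokens + elapsed * rate).min(cap);
        b.last_refill_secs = now_secs;
        if b.tokens >= 1.0 {
            b.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

The registration tier would simply use a smaller bucket (capacity 5, refill 5/60 per second).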

Key Bundle Storage

RocksDB schema:

Key:    "bundle:{username}"
Value:  Serialized KeyBundle (protobuf, ~5 KB with 100 OPKs)

Key:    "opk:{username}:{opk_id}"
Value:  OPK public key (32 bytes)
        Consumed OPKs are deleted immediately.

Key:    "pow:{username}"
Value:  Proof-of-work hash (32 bytes)
        Registration requires a SHA-256 hash with 20 leading zero bits:
        ~2^20 ≈ 1 million hash attempts expected, ~2-5 seconds on
        modern hardware.
        Prevents automated username squatting.

Key:    "meta:{username}"
Value:  { registered_at: timestamp, last_prekey_update: timestamp }
        No IP, no device info, no location.

RocksDB is embedded (no network-accessible database server).
No SQL injection possible. No query language. Key-value only.
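The difficulty check itself is a leading-zero-bit count over the digest. A Rust sketch of the server-side validation -- producing the digest requires a SHA-256 implementation (e.g. the `sha2` crate), which is omitted here:

```rust
/// Count leading zero bits of a hash digest.
fn leading_zero_bits(hash: &[u8]) -> u32 {
    let mut bits = 0;
    for &byte in hash {
        if byte == 0 {
            bits += 8; // whole byte is zero, keep counting
        } else {
            bits += byte.leading_zeros(); // partial byte, then stop
            break;
        }
    }
    bits
}

/// A registration proof-of-work is valid when the digest reaches the
/// difficulty target (20 bits in the scheme above).
fn pow_is_valid(hash: &[u8; 32], difficulty: u32) -> bool {
    leading_zero_bits(hash) >= difficulty
}
```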

Bootstrap Nodes

Bootstrap nodes are entry points to the Kademlia DHT (Distributed Hash Table). When your app starts, it connects to a bootstrap node to join the global peer network. After joining, your app discovers other peers through the DHT itself -- the bootstrap node is no longer in the path.

What bootstrap nodes see

✓ Your Tor exit node's IP (not YOUR IP, because Tor)
✓ Your libp2p PeerId (a transport-layer identifier, NOT your RVNT identity)
✓ That you are joining the DHT (generic libp2p traffic)

✗ Your real IP address (hidden behind Tor)
✗ Your RVNT identity key (Ed25519 public key, never exposed to DHT layer)
✗ Who you are messaging (DHT queries are for PeerIds, not identities)
✗ Your username (never exposed to DHT layer)
✗ Message content (never touches DHT)

Technical Details

Protocol:     libp2p Kademlia DHT over QUIC
Transport:    QUIC with Noise XX authentication
Discovery:    mDNS (LAN) + DNS-SD (Bonjour) + DHT (global)
Ports:        4001/udp (QUIC), 4001/tcp (fallback)
Location:     Hetzner EU-Central (Nuremberg, Germany)
Replication:  DHT records replicated across k=20 closest nodes
Instances:    3 bootstrap nodes (boot1, boot2, boot3)
Uptime:       99.9% target (systemd auto-restart on failure)
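"Closest" in Kademlia means smallest XOR distance between IDs: a record lands on the k nodes whose IDs are nearest its key. A toy sketch with 8-byte IDs -- libp2p keys are longer, and `NodeId` here is an illustrative stand-in:

```rust
type NodeId = [u8; 8]; // shortened for the sketch; real IDs are longer

/// Kademlia's distance metric: bitwise XOR, compared as an integer.
fn xor_distance(a: &NodeId, b: &NodeId) -> u64 {
    u64::from_be_bytes(*a) ^ u64::from_be_bytes(*b)
}

/// The k node IDs closest to `key` -- where a record gets replicated
/// (k = 20 in the configuration above).
fn k_closest(nodes: &[NodeId], key: &NodeId, k: usize) -> Vec<NodeId> {
    let mut sorted: Vec<NodeId> = nodes.to_vec();
    sorted.sort_by_key(|n| xor_distance(n, key));
    sorted.truncate(k);
    sorted
}
```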

Message Delivery

Messages are delivered directly between devices (peer-to-peer). Servers do NOT relay messages. When both devices are online, messages travel:

Your device → [Tor circuit] → Internet → [Tor circuit] → Peer's device

The only server involvement is:
  1. Initial key bundle fetch (identity server, one-time per contact)
  2. Peer discovery via DHT (bootstrap nodes, brief initial contact)

After two devices establish a libp2p connection, all communication
is direct. No server is in the message path. No relay. No proxy.
No store-and-forward (for online peers).

Offline Message Handling

Currently, both devices must be online for message delivery. Messages to offline recipients are queued locally on the sender's device and retried when the recipient comes online (detected via DHT presence).
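The sender-side queue described above can be sketched as follows -- the type and method names are illustrative, not the actual RVNT client code:

```rust
use std::collections::VecDeque;

/// A ciphertext waiting locally on the sender's device.
struct QueuedMessage {
    recipient: String,
    ciphertext: Vec<u8>,
}

struct OutboundQueue {
    pending: VecDeque<QueuedMessage>,
}

impl OutboundQueue {
    fn new() -> Self {
        OutboundQueue { pending: VecDeque::new() }
    }

    fn enqueue(&mut self, recipient: &str, ciphertext: Vec<u8>) {
        self.pending.push_back(QueuedMessage {
            recipient: recipient.to_string(),
            ciphertext,
        });
    }

    /// When DHT presence shows a peer online, drain every message
    /// addressed to it; the caller hands each to the libp2p transport.
    fn drain_for(&mut self, online_peer: &str) -> Vec<QueuedMessage> {
        let mut ready = Vec::new();
        let mut keep = VecDeque::new();
        while let Some(msg) = self.pending.pop_front() {
            if msg.recipient == online_peer {
                ready.push(msg);
            } else {
                keep.push_back(msg);
            }
        }
        self.pending = keep;
        ready
    }
}
```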

Planned: Encrypted relay nodes (future release)
  - Encrypted envelopes stored on relay nodes temporarily
  - TTL: 7 days (then deleted)
  - Relay sees only: recipient_id (truncated hash) + opaque blob
  - Relay cannot read content, cannot identify sender
  - Relay cannot correlate messages to conversations
  - Multiple relays for redundancy (no single point of failure)
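What a relay would hold per message can be sketched as a struct plus a TTL sweep -- this feature is planned, so every name here is illustrative:

```rust
/// 7-day TTL, after which envelopes are deleted.
const TTL_SECS: u64 = 7 * 24 * 60 * 60;

/// All a relay would see: a truncated recipient hash and an opaque
/// blob. No sender, no plaintext, no conversation linkage.
struct Envelope {
    recipient_hint: [u8; 8], // truncated hash, not the full identity
    blob: Vec<u8>,           // ciphertext, opaque to the relay
    stored_at_secs: u64,
}

impl Envelope {
    fn expired(&self, now_secs: u64) -> bool {
        now_secs.saturating_sub(self.stored_at_secs) >= TTL_SECS
    }
}

/// Periodic sweep: delete expired envelopes, keep the rest.
fn sweep(store: Vec<Envelope>, now_secs: u64) -> Vec<Envelope> {
    store.into_iter().filter(|e| !e.expired(now_secs)).collect()
}
```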

Server Hardening

Access control:
  - SSH key authentication only (no passwords, no root login)
  - 2FA required for SSH (TOTP via YubiKey or authenticator app)
  - IP allowlist for SSH access (admin IPs only)

Network:
  - UFW firewall: only ports 80, 443, 4001 exposed
  - Fail2ban: 5 failed attempts = 1 hour ban
  - Nginx: rate limiting, request size limits, header sanitization
  - No ICMP redirects, no IP forwarding, SYN cookies enabled

System:
  - Automatic security updates (unattended-upgrades, daily)
  - Systemd sandboxing:
    - NoNewPrivileges=true
    - ProtectSystem=strict
    - ProtectHome=true
    - PrivateTmp=true
    - ReadOnlyPaths=/
    - ReadWritePaths=/var/lib/rvnt (data directory only)
  - Kernel hardening (sysctl):
    - net.ipv4.tcp_syncookies = 1
    - net.ipv4.conf.all.rp_filter = 1
    - net.ipv4.conf.all.send_redirects = 0
    - kernel.randomize_va_space = 2 (full ASLR)

Monitoring:
  - Error rate alerting (PagerDuty)
  - Disk usage alerting (prevent RocksDB filling disk)
  - Memory usage alerting
  - No application-level monitoring that could leak user data

Zero Logging Policy

# Nginx configuration
access_log off;
error_log /dev/null;

# Headers stripped before reaching application
proxy_set_header X-Real-IP "";
proxy_set_header X-Forwarded-For "";
proxy_set_header X-Forwarded-Proto "";

# Systemd journal (application logs)
StandardOutput=journal
# Journal configured with Storage=volatile (RAM only, not persisted to disk)
# Journal entries contain only:
#   - Timestamp
#   - Log level (error/warn)
#   - Error message (no request data, no IPs, no identifiers)

# Application logging (Rust, tracing crate)
# Configured at WARN level in production
# No request bodies logged at any level
# No IP addresses logged at any level
# No user identifiers logged at any level

# What IS logged:
#   - Server start/stop events
#   - Error counts per endpoint (e.g., "register: 3 errors in last hour")
#   - Response time percentiles (p50, p95, p99)
#   - Disk and memory usage metrics
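The response-time percentiles are the standard nearest-rank computation over recorded latencies. A Rust sketch -- not the production metrics code:

```rust
/// Nearest-rank percentile: sort the samples, take the value at
/// rank ceil(p/100 * n) (1-indexed). p50/p95/p99 come from the
/// same call with different p.
fn percentile(samples: &mut [u64], p: f64) -> u64 {
    assert!(!samples.is_empty() && (0.0..=100.0).contains(&p));
    samples.sort_unstable();
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    samples[rank.saturating_sub(1).min(samples.len() - 1)]
}
```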

This is not a marketing claim. The server configurations are
open source and auditable at github.com/rvntos/rvnt/server/.

Warrant Canary

RVNT publishes a warrant canary, signed with our GPG key,
updated on the 1st of every month.

The canary affirms that as of the publication date:
  1. We have not received a National Security Letter
  2. We have not received a FISA court order
  3. We have not received a gag order
  4. We have not been compelled to insert a backdoor
  5. We have not been compelled to modify our code

If the canary is not updated by the 5th of the month,
or if any of the above statements is removed, users
should assume the relevant event has occurred.

Location: https://rvntos.io/canary.txt
Signature: https://rvntos.io/canary.txt.sig
Verification: gpg --verify canary.txt.sig canary.txt

Infrastructure Summary

Component         Count                Location            Purpose
Identity server   2 (active-passive)   Hetzner EU-Central  Username → public key mapping
Bootstrap nodes   3                    Hetzner EU-Central  DHT entry points
Website           1                    Cloudflare Pages    Static site, no server-side code
Relay nodes       0 (planned)          Planned             Offline message storage

Last updated: 2026-04-12

RVNT Documentation — Post-quantum encrypted communications