What is it?
S3-compatible object storage with zero egress bandwidth fees. Built on Cloudflare's global network with strong consistency and high durability. Access via Workers Binding, S3-compatible API, or REST API.
What is it for?
- Web content hosting (static assets, images, media)
- User-generated content and file uploads
- Data lakes and analytics storage
- ML/AI training data and artifacts
- Media archiving and batch processing output
Why was it chosen?
Key advantages:
- Zero egress fees (primary differentiator)
- S3-compatible API (same code works with AWS S3)
- Native Cloudflare Workers integration
- Global CDN acceleration via 330+ edge nodes
See vendors.md for pricing, free tier limits, and provider alternatives.
Known limitations
S3 API gaps (completely unsupported):
- Versioning
- Object locking
- Bucket notifications (event triggers)
- Bucket policies and ACLs
- Logging, analytics, metrics
- Replication
- Website hosting
- Object tagging
Encryption:
- No KMS encryption
- Only customer-provided keys (SSE-C) supported
Storage tiers:
- Only Standard and Infrequent Access
- No Glacier-equivalent for cold storage
Other constraints:
- Region selection: buckets are created in the region nearest to the creation request; location hints are not guaranteed
- Limited migration tools (Super Slurper, Sippy)
- No S3 Select (SQL queries against buckets)
- Launched in 2022 vs. S3's 2006, so the third-party ecosystem is less mature
Connection
Uses the S3-compatible API via @aws-sdk/client-s3. The endpoint is derived from the account ID — no separate endpoint env var needed.
Environment variables:
| Variable | Purpose |
|---|---|
| R2_ACCOUNT_ID | Cloudflare account ID (endpoint computed: https://{id}.r2.cloudflarestorage.com) |
| R2_ACCESS_KEY_ID | S3 API token access key |
| R2_SECRET_ACCESS_KEY | S3 API token secret |
| R2_BUCKET_NAME | Target bucket name |
Client setup: $lib/server/store/index.ts creates an S3Client with region: 'auto' and endpoint derived from the account ID. Returns null when credentials are missing (graceful degradation via feature flags).
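A minimal sketch of that setup logic, with the env-var names from the table above (the exact shape of $lib/server/store/index.ts is assumed; the real file additionally constructs the S3Client with region: 'auto' and the derived endpoint):

```typescript
// Hypothetical sketch of the store config derivation. The real index.ts
// would pass this config to:
//   new S3Client({ region: "auto", endpoint, credentials })
interface StoreConfig {
  endpoint: string;
  accessKeyId: string;
  secretAccessKey: string;
  bucket: string;
}

// Returns null when any credential is missing (graceful degradation:
// callers check for null and disable the feature via flags).
export function deriveStoreConfig(
  env: Record<string, string | undefined>
): StoreConfig | null {
  const { R2_ACCOUNT_ID, R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY, R2_BUCKET_NAME } = env;
  if (!R2_ACCOUNT_ID || !R2_ACCESS_KEY_ID || !R2_SECRET_ACCESS_KEY || !R2_BUCKET_NAME) {
    return null;
  }
  return {
    // Endpoint is computed from the account ID — no separate endpoint env var.
    endpoint: `https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
    accessKeyId: R2_ACCESS_KEY_ID,
    secretAccessKey: R2_SECRET_ACCESS_KEY,
    bucket: R2_BUCKET_NAME,
  };
}
```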
Implementation
The store layer lives at $lib/server/store/ with the same structure as the db and graph layers:
src/lib/server/store/
├── index.ts # S3Client setup, BUCKET export
├── types.ts # ObjectInfo, ObjectDetail, BucketStats, PresignedUrlResult, etc.
├── errors.ts # StoreError class, classifyS3Error()
└── showcase/
├── queries.ts # verifyConnection, listObjects, getDetail, presignedDownload, rangeRead
├── mutations.ts # presignedUpload, confirmUpload, deleteObject
├── seed.ts # reseedBucket() — 11 seed objects
└── guards.ts # assertShowcaseKey, checkObjectLimit
Error handling: classifyS3Error() normalizes AWS SDK errors into typed StoreError kinds: credentials, not_found, forbidden, timeout, limit, unavailable, unknown.
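The classification might look like the sketch below. The kind names come from the list above, but the specific AWS error names and status codes checked here are assumptions; the real classifyS3Error() may inspect more fields (the limit kind, for instance, is likely raised by the app-level guards rather than mapped from an S3 error):

```typescript
type StoreErrorKind =
  | "credentials" | "not_found" | "forbidden"
  | "timeout" | "limit" | "unavailable" | "unknown";

class StoreError extends Error {
  constructor(public kind: StoreErrorKind, message: string) {
    super(message);
    this.name = "StoreError";
  }
}

// Hypothetical mapping from AWS SDK v3 error names / HTTP status to kinds.
function classifyS3Error(err: {
  name?: string;
  $metadata?: { httpStatusCode?: number };
}): StoreError {
  const name = err.name ?? "";
  const status = err.$metadata?.httpStatusCode;
  if (name === "InvalidAccessKeyId" || name === "SignatureDoesNotMatch")
    return new StoreError("credentials", name);
  if (name === "NoSuchKey" || name === "NotFound" || status === 404)
    return new StoreError("not_found", name);
  if (name === "AccessDenied" || status === 403)
    return new StoreError("forbidden", name);
  if (name === "RequestTimeout" || name === "TimeoutError")
    return new StoreError("timeout", name);
  if (name === "ServiceUnavailable" || status === 503)
    return new StoreError("unavailable", name);
  return new StoreError("unknown", name || "unclassified S3 error");
}
```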
Safety guards:
- All operations scoped to the showcase/ prefix via assertShowcaseKey()
- Max 20 objects in the showcase namespace via checkObjectLimit()
- Upload validation: allowlisted MIME types, 2 MB size cap
- UUID-based key generation prevents collisions
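The guards above could be sketched as follows. The prefix, size cap, and UUID keying are from this document; the specific MIME allowlist and function signatures are assumptions:

```typescript
import { randomUUID } from "node:crypto";

const SHOWCASE_PREFIX = "showcase/";
const MAX_UPLOAD_BYTES = 2 * 1024 * 1024; // 2 MB cap
// Hypothetical allowlist — the real set of permitted types may differ.
const ALLOWED_TYPES = new Set(["image/png", "image/jpeg", "text/plain", "application/pdf"]);

// Every key passed to queries/mutations must live under showcase/.
function assertShowcaseKey(key: string): void {
  if (!key.startsWith(SHOWCASE_PREFIX)) {
    throw new Error(`key outside showcase namespace: ${key}`);
  }
}

// Server-side validation before presigning an upload URL.
function validateUpload(contentType: string, sizeBytes: number): void {
  if (!ALLOWED_TYPES.has(contentType)) {
    throw new Error(`disallowed MIME type: ${contentType}`);
  }
  if (sizeBytes > MAX_UPLOAD_BYTES) {
    throw new Error("file exceeds 2 MB size cap");
  }
}

// UUID-based key generation prevents collisions between uploads.
function generateKey(filename: string): string {
  const ext = filename.includes(".") ? filename.slice(filename.lastIndexOf(".")) : "";
  return `${SHOWCASE_PREFIX}${randomUUID()}${ext}`;
}
```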
Upload flow:
Client Server R2
│ │ │
├── Request upload URL ──▶│ │
│ ├── Validate + presign ─▶│
│◀── Presigned PUT URL ───┤ │
│ │ │
├── PUT file directly ────┼───────────────────────▶│
│ │ │
├── Confirm upload ──────▶│ │
│ ├── HeadObject verify ──▶│
│◀── Upload result ───────┤ │
Content-Type is locked into the presigned URL signature — the client must send the exact MIME type that was validated server-side.
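Because the signature covers Content-Type, the client-side PUT has to echo the server-validated type byte-for-byte; a sketch of that invariant (the helper and the PresignedUpload shape are hypothetical, not the project's actual API):

```typescript
// Hypothetical shape of what the server returns after validating + presigning.
interface PresignedUpload {
  url: string;
  key: string;
  contentType: string; // the MIME type locked into the URL signature
}

// Build the browser-side PUT request. If the header differs from the signed
// type, R2 rejects the upload with a signature mismatch — so fail fast here.
function buildPutRequest(
  grant: PresignedUpload,
  file: { type: string }
): { url: string; headers: Record<string, string> } {
  if (file.type !== grant.contentType) {
    throw new Error(
      `Content-Type mismatch: signed ${grant.contentType}, got ${file.type}`
    );
  }
  return { url: grant.url, headers: { "Content-Type": grant.contentType } };
}

// Usage in the browser (step "PUT file directly" in the diagram above):
//   const req = buildPutRequest(grant, file);
//   await fetch(req.url, { method: "PUT", headers: req.headers, body: file });
```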
Related
- postgres.md - Structured data
- neo4j.md - Graph data
- drizzle.md - ORM (PostgreSQL only)
- ../core/podman.md - Local MinIO setup