ArylHive GitHub CI/CD Infrastructure
This document provides an in-depth architectural breakdown of the GitHub Deployment System within ArylHive. It explains how raw Git repositories are transformed, via securely scoped tokens, into globally distributed edge applications in seconds.
1. Core Pipeline Architecture
The deployment pipeline is an orchestration between GitHub webhooks, Cloudflare Workers acting as the CI/CD controller, a dedicated remote Build Server, and highly-available Backblaze B2/R2 storage.
2. The Deployment Execution Flow
A. Secure Authentication via GitHub Apps
Unlike basic OAuth flows that grant broad access to a user's account, ArylHive uses a deeply integrated GitHub App.
- The Edge Controller utilizes its Private Key (`PKCS#8`) loaded via Cloudflare WebCrypto to securely sign a JWT.
- This JWT is sent to GitHub to exchange for a short-lived Installation Access Token.
- This ephemeral token is strictly scoped to the specific repositories the user has selected, minimizing the blast radius of any compromise.
B. The Build Server Execution (`hive-build`)
The Edge Controller instructs the external Build Server (a scalable VM or container cluster) to begin execution by passing the repo details and the scoped Installation Token.
- Clone: The builder clones the repository over HTTPS, authenticating with the token.
- Dependency Resolution: Executes `npm install`, `yarn install`, `pnpm install`, or `bun install` depending on the lockfile detected.
- Build Generation: Executes the user-defined build command (e.g., `npm run build`).
- Archive: The generated static `out` or `dist` directory is compressed into an optimized `.zip` archive.
- Storage: The archive is transferred directly to the B2 origin storage buckets over a private network.
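The dependency-resolution step can be sketched as a lookup over the lockfiles present in the cloned repo. The precedence order here is an assumption; the document only states that the install command is chosen from the detected lockfile.

```typescript
type PackageManager = "bun" | "pnpm" | "yarn" | "npm";

// Lockfile -> package manager -> install command. Order is an assumed precedence.
const LOCKFILES: ReadonlyArray<[string, PackageManager, string]> = [
  ["bun.lockb", "bun", "bun install"],
  ["pnpm-lock.yaml", "pnpm", "pnpm install"],
  ["yarn.lock", "yarn", "yarn install"],
  ["package-lock.json", "npm", "npm install"],
];

function resolveInstallCommand(repoFiles: string[]): string {
  const present = new Set(repoFiles);
  for (const [lockfile, , command] of LOCKFILES) {
    if (present.has(lockfile)) return command;
  }
  // No lockfile committed: fall back to a plain npm install.
  return "npm install";
}
```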
C. Edge Data Plane & Active Shift
Once the build succeeds, the Edge Controller atomically updates the deployment status in Turso DB to `READY` and shifts the primary routing hash, so that traffic on [domain].aryl.app instantly points to the new deployment's ZIP hash, completing an atomic zero-downtime rollover.
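The rollover's atomicity comes from the routing pointer being a single record per hostname: one write flips all traffic, with no partially-updated state. This in-memory sketch shows the swap semantics; the table and column names are illustrative, since the real Turso schema is not shown here.

```typescript
interface RoutingRow {
  hostname: string;
  zipHash: string;
  status: "BUILDING" | "READY";
}

// In-memory stand-in for the Turso routing table, to show the swap semantics.
class RoutingTable {
  private rows = new Map<string, RoutingRow>();

  // Illustrative SQL equivalent:
  //   UPDATE deployments SET zip_hash = ?, status = 'READY' WHERE hostname = ?;
  promote(hostname: string, newZipHash: string): void {
    this.rows.set(hostname, { hostname, zipHash: newZipHash, status: "READY" });
  }

  // Edge lookup: which ZIP hash should serve this hostname right now?
  resolve(hostname: string): string | undefined {
    const row = this.rows.get(hostname);
    return row?.status === "READY" ? row.zipHash : undefined;
  }
}
```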
3. How Requests Are Served at the Edge
The most innovative part of the system is how files are served globally, on the fly, directly from a `.zip` archive.
4. Advanced Engineering Considerations
On-the-fly Unzipping (Stream Extraction)
Rather than unpacking millions of tiny files into an S3 bucket (which incurs massive PUT/GET request costs and extremely slow upload times), the entire website is stored as a single `.zip` file.
When a user requests `index.html`, the Cloudflare Edge Worker reads the ZIP's Central Directory (held in memory at the edge), issues a precise HTTP Range request to B2 for only the bytes corresponding to that file, decompresses them at the edge using fflate, and serves the result to the browser.
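The Range-request math above follows the standard ZIP layout (PKWARE APPNOTE): the End of Central Directory (EOCD) record at the tail of the archive points at the Central Directory, and each directory entry points at a local file header, after which the compressed bytes begin. A minimal sketch, with illustrative helper names:

```typescript
interface Eocd {
  entryCount: number;
  cdSize: number;
  cdOffset: number;
}

// The EOCD record starts with signature 0x06054b50 and sits at the end of the
// archive (possibly followed by a comment), so we scan backwards for it.
function parseEocd(tail: Uint8Array): Eocd {
  const view = new DataView(tail.buffer, tail.byteOffset, tail.byteLength);
  for (let i = tail.length - 22; i >= 0; i--) {
    if (view.getUint32(i, true) === 0x06054b50) {
      return {
        entryCount: view.getUint16(i + 10, true),
        cdSize: view.getUint32(i + 12, true),
        cdOffset: view.getUint32(i + 16, true),
      };
    }
  }
  throw new Error("EOCD signature not found");
}

// The compressed data starts after the local file header (30 fixed bytes plus
// the name and extra field). In practice the worker fetches the local header
// first, since its extra-field length can differ from the central directory's.
function rangeHeader(
  localHeaderOffset: number,
  nameLen: number,
  extraLen: number,
  compressedSize: number,
): string {
  const start = localHeaderOffset + 30 + nameLen + extraLen;
  return `bytes=${start}-${start + compressedSize - 1}`;
}
```

This sketch omits ZIP64 archives, whose sizes and offsets exceed the 32-bit fields shown here.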
Tiered Edge Caching
Extracting bytes from a zip archive on every single request would consume vast CPU cycles. Therefore, once a file has been extracted and served, it is injected directly into Cloudflare's Global Cache. Every subsequent request, anywhere in the world, is served straight from Cloudflare's cache in <10ms, entirely bypassing both the unzipping logic and the B2 origin.
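The cache-then-extract pattern looks roughly like this. In the real worker the cache would be Cloudflare's Cache API; here it is an injected interface so the pattern is visible on its own, and all names are illustrative.

```typescript
interface EdgeCache {
  get(key: string): Uint8Array | undefined;
  put(key: string, value: Uint8Array): void;
}

async function serveFile(
  cache: EdgeCache,
  key: string,                        // e.g. "<zipHash>/index.html"
  extract: () => Promise<Uint8Array>, // ranged fetch + inflate (the expensive path)
): Promise<{ body: Uint8Array; cacheStatus: "HIT" | "MISS" }> {
  const hit = cache.get(key);
  if (hit) return { body: hit, cacheStatus: "HIT" }; // no unzip, no B2 round-trip
  const body = await extract();                      // only runs on the first request
  cache.put(key, body);
  return { body, cacheStatus: "MISS" };
}
```

Keying the cache by ZIP hash rather than by hostname means a rollover naturally invalidates the old deployment: new requests miss and extract from the new archive.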
Next: Learn about The Edge Network →
