# 20x Faster Cache
Restore dependencies in seconds. Stop paying for time spent waiting on downloads.
Monk CI's cache operates at 20 Gbps, compared to GitHub Actions' standard 1 Gbps. At those speeds, a 5 GB cache transfers in about 2 seconds instead of roughly 40. For most workflows, cache restore and save steps that previously took 30–60 seconds complete in under 3 seconds, and that time compounds across every job, every branch, every day.
## How It Works
### Co-located warm cache
Monk CI cache nodes are co-located with the runners on the same high-speed network, so there is no round trip to a remote object store. Cache hits are served from NVMe-backed storage over a 20 Gbps link, and throughput stays constant regardless of cache size.
### Persistent across runs
Cache entries persist across workflow runs and are available to all jobs in the same repository. Dependency installs (npm, pip, Maven, Gradle, Go modules, and more) that were cached on a previous run are immediately available on the next.
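Because the text says Monk CI is compatible with `actions/cache`, a standard cache step is all that is needed; the npm paths and key names below are illustrative:

```yaml
# Restore the npm cache from a previous run; save it back when the job ends.
# Paths and key names are illustrative, not required values.
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}

# With a warm cache, this resolves dependencies without re-downloading them.
- run: npm ci
```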
### Automatic invalidation
Cache keys follow the same key and restore-key pattern as actions/cache. Entries are invalidated when the key changes. For example, when a lock file is updated, stale dependencies are never silently restored.
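The invalidation behavior follows from the usual `key`/`restore-keys` pair; this sketch uses illustrative pip paths:

```yaml
# The key embeds a hash of the requirements file, so editing it produces a
# new key and the stale entry is never restored as an exact match.
- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: pip-${{ hashFiles('requirements.txt') }}
    # restore-keys allows a prefix match as a fallback, so a changed
    # requirements file still starts from the closest older cache.
    restore-keys: |
      pip-
```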
### No size limits
Monk CI does not impose per-repository or per-organization cache size caps. Large monorepos, multi-language projects, and Docker layer caches all work without needing to manually prune or rotate entries.
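Without a size cap, a multi-language monorepo can cache every toolchain's dependency directory under a single entry; the paths and patterns below are illustrative:

```yaml
# One cache entry spanning several package managers. No pruning or rotation
# is needed because there is no per-repository size limit.
- uses: actions/cache@v4
  with:
    path: |
      ~/.npm
      ~/.gradle/caches
      ~/.cache/pip
      ~/go/pkg/mod
    key: deps-${{ hashFiles('**/package-lock.json', '**/*.gradle*', '**/requirements.txt', 'go.sum') }}
```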
## Monk CI Cache vs GitHub Actions Cache
| Feature | Monk CI | GitHub Actions |
|---|---|---|
| Cache transfer speed | 20 Gbps | 1 Gbps |
| Cache location | Co-located | Remote object store |
| Typical restore time | Less than 3 seconds | 30–60 seconds |
| Storage limit | No limit | 10 GB per repo |
| Compatible with actions/cache | Yes (drop-in) | Native |

| Metric | Value | Detail |
|---|---|---|
| Faster cache | 20x | vs GitHub Actions cache |
| Transfer speed | 20 Gbps | vs 1 Gbps on GitHub |
| Avg restore time | Less than 3s | For most dependency trees |
Monk CI Runners and Cache are built to work together, and the most dramatic speed gains show up in jobs that previously spent as much as 40% of their runtime on cache operations.