Simple pricing
Start free, move to Pro via self-serve checkout when you want full acceleration, and use the team-evaluation path if you're buying for a rollout instead of an individual install.
Community
For individual developers who want to try Epochly fast
- Inference profiling + micro-batching
- L1 cache + circuit breakers
- Levels 0–3 optimization (4 cores)
- Lens snapshot (24h fleet view)
- Community support
Pro
For AI/ML engineers and teams shipping production workloads
- AI inference: torch.compile + A/B testing
- L2/L3 distributed caching + LLM companion
- Model registry + cost projections
- Levels 0–4 (unlimited cores + GPU)
- Lens dashboard (single service)
- Email support
You won't be charged for 30 days. Cancel anytime.
Secure checkout via Polar.sh — powered by Stripe
Enterprise
For teams managing fleets with compliance and visibility needs
- Fleet-wide Lens dashboard
- Unlimited alerts + 13-month retention
- RBAC + audit logs
- Priority support
We'll follow up by email to discuss your fleet requirements.
Buying for a team?
If self-serve fits, use the pricing and checkout flow above. If you need to talk through rollout constraints, Lens visibility, or security questions first, use the structured contact path and tell us what you're evaluating.
Feature Comparison
| Feature | Community | Pro | Enterprise |
|---|---|---|---|
| Inference profiling | ✓ | ✓ | ✓ |
| Dynamic micro-batching | ✓ | ✓ | ✓ |
| L1 in-memory cache | ✓ | ✓ | ✓ |
| Circuit breakers + safety gates | ✓ | ✓ | ✓ |
| torch.compile (safety-gated) | — | ✓ | ✓ |
| L2/L3 distributed cache | — | ✓ | ✓ |
| A/B model testing | — | ✓ | ✓ |
| Model registry | — | ✓ | ✓ |
| LLM companion (vLLM/TGI) | — | ✓ | ✓ |
| Cost projections + OTel | — | ✓ | ✓ |
| Levels 0–2 (Monitor, Thread, JIT) | ✓ | ✓ | ✓ |
| Level 3 (Multicore) | 4 cores | Unlimited | Unlimited |
| Level 4 (GPU) | — | ✓ | ✓ |
| Telemetry | CLI | CLI + Lens dashboard | Fleet-wide Lens |
| Support | Community | Email | Priority |
| Fleet-wide Lens dashboard | — | — | ✓ |
| RBAC + audit logs | — | — | ✓ |
Pricing FAQ
What happens after my Pro trial ends?
After 30 days, your subscription begins automatically at the plan you selected. Cancel anytime before the trial ends to avoid being billed — your account will revert to Community tier (4 cores, no GPU acceleration). No data is lost.
How do I start a Pro trial?
Click "Get Pro" above. You’ll enter your payment details but won’t be charged for 30 days. Pro features activate immediately. Cancel anytime before the trial ends.
Do I need a credit card to try Pro?
The web checkout requires a payment method, but you won't be charged during the 30-day trial period. Cancel anytime before it ends.
Can I use Epochly in production?
Yes. Community tier is production-ready for workloads that fit within 4 cores. For unlimited scaling and GPU acceleration, upgrade to Pro.
What if I’m buying for a team?
Start with Pro if self-serve fits your rollout. For fleet-wide Lens, RBAC, or extended retention, contact us about Enterprise. Use the team-evaluation path on the contact page and tell us what environment you’re running.
What does Enterprise include?
Fleet-wide Lens dashboard, unlimited alerts, 13-month data retention, RBAC with audit logs, and priority support. Contact us to discuss your needs.
What AI frameworks does Epochly support?
Epochly automatically detects PyTorch, HuggingFace Transformers, and ONNX Runtime when you import them. The Community tier includes profiling, micro-batching, and L1 caching. Pro adds torch.compile, distributed caching, A/B testing, and LLM companion support for vLLM and TGI.
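"Detects when you import them" generally means checking which modules are already loaded in the process. Epochly's own hook isn't shown here, so the following is only a rough, standard-library sketch of that idea — the function name and dictionary are illustrative, not Epochly's API:

```python
import sys

# Import names of the frameworks Epochly supports. The detection
# logic below is a hypothetical sketch, not Epochly's actual code.
FRAMEWORKS = {
    "torch": "PyTorch",
    "transformers": "HuggingFace Transformers",
    "onnxruntime": "ONNX Runtime",
}

def detect_frameworks() -> list[str]:
    """Return the supported frameworks already imported in this process."""
    return [name for mod, name in FRAMEWORKS.items() if mod in sys.modules]

print(detect_frameworks())  # e.g. ["PyTorch"] after `import torch`
```

Because `sys.modules` only lists modules that have actually been imported, a check like this adds no overhead for frameworks you never load.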
Can I use inference features with the free Community tier?
Yes. Community includes framework auto-detection, model profiling, dynamic micro-batching, L1 in-memory caching, circuit breakers, and Prometheus metrics — enough to see measurable throughput improvements. Pro unlocks torch.compile, multi-tier caching, A/B testing, and the model registry for deeper optimization.
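Dynamic micro-batching means grouping requests that arrive close together into a single batched model call, flushing when the batch fills up or a short time window expires. As a minimal, stdlib-only sketch of the technique (class and parameter names are illustrative, not Epochly's implementation):

```python
import time
from collections import deque
from typing import Any, Callable

class MicroBatcher:
    """Collect items until the batch is full or the oldest item has
    waited past a time window, then run them in one batched call.
    Illustrative sketch only — not Epochly's implementation."""

    def __init__(self, run_batch: Callable[[list], list],
                 max_batch: int = 8, max_wait_s: float = 0.005):
        self.run_batch = run_batch
        self.max_batch = max_batch
        self.max_wait_s = max_wait_s
        self.queue: deque = deque()

    def submit(self, item: Any) -> None:
        self.queue.append((time.monotonic(), item))

    def maybe_flush(self) -> list:
        """Run the batch if it is full or the oldest item is stale."""
        if not self.queue:
            return []
        oldest_ts, _ = self.queue[0]
        full = len(self.queue) >= self.max_batch
        stale = time.monotonic() - oldest_ts >= self.max_wait_s
        if not (full or stale):
            return []
        batch = [item for _, item in self.queue]
        self.queue.clear()
        return self.run_batch(batch)

# Toy "model" that doubles each input, batch size 3:
batcher = MicroBatcher(run_batch=lambda xs: [x * 2 for x in xs], max_batch=3)
for i in range(3):
    batcher.submit(i)
print(batcher.maybe_flush())  # [0, 2, 4]
```

The throughput win comes from amortizing per-call overhead (kernel launches, Python dispatch) across the batch, at the cost of up to `max_wait_s` of added latency per request.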
Choose the path that fits how you buy
Try Epochly for free, use checkout when self-serve is enough, or start a team evaluation if you need to talk through rollout details first.
Secure checkout via Polar.sh — powered by Stripe