# Epochly Level Configuration Guide
This guide explains how to tune Epochly's optimization levels for your specific workloads.
## Overview of Enhancement Levels
Epochly uses a progressive enhancement model with five levels:
| Level | Name | Optimization Type | Best For |
|---|---|---|---|
| 0 | Monitor | None (baseline) | Diagnostics |
| 1 | Threading | Thread pool for I/O | I/O-bound tasks |
| 2 | JIT | Just-in-time compilation | Numerical computation |
| 3 | Multicore | Full parallelism | CPU-bound tasks |
| 4 | GPU | GPU acceleration | Large array operations |
## Configuration Methods
Epochly supports configuration through:
- Environment variables (recommended for deployment)
- TOML configuration files (for complex setups)
- Runtime API (for level changes only)
## Environment Variable Reference
| Variable | Purpose | Values |
|---|---|---|
| EPOCHLY_LEVEL | Set target enhancement level | 0-4 |
| EPOCHLY_MAX_WORKERS | Maximum worker processes | Integer (default: CPU count) |
| EPOCHLY_JIT_BACKEND | Force specific JIT backend | numba, native, auto |
| EPOCHLY_JIT_ENABLED | Enable/disable JIT compilation | true, false |
| EPOCHLY_JIT_HOT_PATH_THRESHOLD | Function calls before JIT | Integer (default: 1000) |
| EPOCHLY_GPU_ENABLED | Enable/disable GPU | true, false |
| EPOCHLY_GPU_MEMORY_LIMIT | GPU memory limit (MB) | Integer |
| EPOCHLY_GPU_WORKLOAD_THRESHOLD | Minimum workload size for GPU offload | Integer |
| EPOCHLY_MEMORY_SHARED_SIZE | Shared memory pool size | Integer (bytes) |
| EPOCHLY_TELEMETRY | Enable telemetry | true, false |
| EPOCHLY_MODE | Operating mode | auto, off |
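As a rough sketch of how these variables feed into a running process, a loader along the following lines reads them with their documented defaults. This is illustrative only; Epochly's actual parsing internals may differ:

```python
import os

def read_epochly_env():
    # Illustrative sketch: parse Epochly's documented environment
    # variables with their documented defaults. Not the real loader.
    return {
        "level": int(os.environ.get("EPOCHLY_LEVEL", "0")),
        "max_workers": int(os.environ.get("EPOCHLY_MAX_WORKERS",
                                          str(os.cpu_count() or 1))),
        "jit_backend": os.environ.get("EPOCHLY_JIT_BACKEND", "auto"),
        "gpu_enabled": os.environ.get("EPOCHLY_GPU_ENABLED",
                                      "false").lower() == "true",
    }

os.environ["EPOCHLY_LEVEL"] = "3"
os.environ["EPOCHLY_GPU_ENABLED"] = "true"
config = read_epochly_env()
```

Booleans arrive as strings ("true"/"false"), so the sketch normalizes case before comparing; unset variables fall back to the defaults from the table above.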
## Runtime API
```python
import epochly

# Check current status
status = epochly.get_status()
print(f"Current level: {status['enhancement_level']}")
print(f"Workers: {status.get('worker_count', 'N/A')}")

# Set enhancement level programmatically
epochly.set_level(3)  # Set to Level 3 (full parallelism)
```
## Level 1: Threading Configuration
### What It Does
Level 1 introduces a thread pool executor for I/O-bound operations:
- File reads/writes
- Network requests
- Database queries
- Any operation that waits on external resources
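To see why a thread pool helps for such workloads, the overlap it provides can be approximated with the standard library. This is a conceptual analogue of what Level 1 does, not Epochly's internals:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a blocking I/O call (network, disk, database)
    time.sleep(0.05)
    return f"response from {url}"

urls = [f"https://example.com/{i}" for i in range(8)]

# Serial baseline: each call waits its full 50 ms in turn
start = time.perf_counter()
serial = [fetch(u) for u in urls]
serial_time = time.perf_counter() - start

# Level 1's approach, conceptually: overlap the waits in a thread pool
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    threaded = list(pool.map(fetch, urls))
threaded_time = time.perf_counter() - start
```

With eight overlapping waits, the pooled version finishes in roughly the time of one call rather than eight, which is exactly the gain Level 1 targets for operations that wait on external resources.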
### Tuning Level 1
```bash
# Set maximum workers via environment variable
export EPOCHLY_MAX_WORKERS=32
python your_script.py
```
#### Recommended Settings
| Workload Type | Recommended Workers |
|---|---|
| Light I/O (< 10 concurrent ops) | 4-8 |
| Medium I/O (10-50 concurrent ops) | 8-16 |
| Heavy I/O (> 50 concurrent ops) | 16-32 |
| Memory constrained | 2-4 |
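If you want to pick a worker count in code, the table above can be encoded as a small helper. This is a hypothetical convenience function, not part of Epochly's API; the tiers are simply the upper ends of the recommendations above:

```python
def recommended_workers(concurrent_ops, memory_constrained=False):
    # Hypothetical helper encoding the recommendations table:
    # memory pressure caps the pool first, then tier by concurrency.
    if memory_constrained:
        return 4
    if concurrent_ops < 10:    # light I/O
        return 8
    if concurrent_ops <= 50:   # medium I/O
        return 16
    return 32                  # heavy I/O
```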
## Level 2: JIT Configuration
### Tuning Level 2
#### JIT Backend Selection
```bash
# Force Numba backend
export EPOCHLY_JIT_BACKEND=numba
python your_script.py

# Use native JIT (Python 3.13+)
export EPOCHLY_JIT_BACKEND=native
python -X jit your_script.py

# Auto-detect best backend (default)
export EPOCHLY_JIT_BACKEND=auto
python your_script.py
```
#### JIT Threshold
Control when functions are JIT-compiled:
```bash
# Compile after fewer calls (more aggressive)
export EPOCHLY_JIT_HOT_PATH_THRESHOLD=100
python your_script.py

# Compile after more calls (more conservative)
export EPOCHLY_JIT_HOT_PATH_THRESHOLD=5000
python your_script.py
```
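The idea behind the threshold is plain call counting: a function is only worth compiling once it has proven itself hot. A minimal sketch of that detection logic, with a flag standing in for actual compilation (Epochly's real JIT machinery is more involved):

```python
import functools

def hot_path(threshold=1000):
    # Illustrative hot-path detector: count calls and mark the
    # function "compiled" once it crosses the threshold.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            wrapper.calls += 1
            if not wrapper.compiled and wrapper.calls >= threshold:
                wrapper.compiled = True  # real code would JIT-compile here
            return fn(*args, **kwargs)
        wrapper.calls = 0
        wrapper.compiled = False
        return wrapper
    return decorator

@hot_path(threshold=100)
def square(x):
    return x * x

for i in range(150):
    square(i)
```

A lower threshold trades compilation overhead on rarely-called functions for earlier speedups on hot ones, which is why the aggressive setting suits long-running numerical jobs and the conservative one suits short scripts.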
## Level 3: Multicore Configuration
### Tuning Level 3
#### Worker Count
```bash
# Check current workers
python -c "import epochly; print(epochly.get_status())"

# Adjust worker count
export EPOCHLY_MAX_WORKERS=8
python your_script.py
```
#### Shared Memory Size
```bash
# Increase shared memory for large data transfers
export EPOCHLY_MEMORY_SHARED_SIZE=134217728  # 128MB
python your_script.py

# Decrease for memory-constrained environments
export EPOCHLY_MEMORY_SHARED_SIZE=33554432  # 32MB
python your_script.py
```
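The pool this setting sizes lets worker processes exchange data without pickling and copying it. The standard library's `multiprocessing.shared_memory` shows the underlying mechanism; this sketch uses a 1 KB block purely for illustration (the defaults above are far larger), and is not Epochly's actual pool implementation:

```python
from multiprocessing import shared_memory

# One process creates a named shared block...
pool = shared_memory.SharedMemory(create=True, size=1024)
try:
    pool.buf[:5] = b"hello"  # ...and writes into it

    # ...another process would attach by name and read, copy-free
    view = shared_memory.SharedMemory(name=pool.name)
    data = bytes(view.buf[:5])
    view.close()
finally:
    pool.close()
    pool.unlink()
```

Because transfers through the pool avoid serialization, a larger `EPOCHLY_MEMORY_SHARED_SIZE` mainly pays off when workers pass large arrays between them.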
## Level 4: GPU Configuration
### Tuning Level 4
```bash
# Enable GPU acceleration
export EPOCHLY_GPU_ENABLED=true
python your_script.py

# Set GPU memory limit (MB)
export EPOCHLY_GPU_MEMORY_LIMIT=4096
python your_script.py

# Set minimum workload threshold for GPU offload
export EPOCHLY_GPU_WORKLOAD_THRESHOLD=1000000
python your_script.py
```
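The workload threshold exists because host-to-device transfer overhead swamps any GPU speedup on small arrays. The dispatch rule it implies can be sketched as follows; `choose_device` is a hypothetical illustration, not an Epochly function:

```python
def choose_device(n_elements, gpu_available, threshold=1_000_000):
    # Illustrative rule behind EPOCHLY_GPU_WORKLOAD_THRESHOLD:
    # offload only workloads large enough to amortize transfer cost.
    if gpu_available and n_elements >= threshold:
        return "gpu"
    return "cpu"
```

Raise the threshold if you see small operations slowing down after enabling Level 4; lower it if large operations are staying on the CPU.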
## Common Configuration Patterns
### Web Server (I/O Heavy)
```bash
export EPOCHLY_LEVEL=1
export EPOCHLY_MAX_WORKERS=64
export EPOCHLY_JIT_ENABLED=false
python server.py
```
### Data Science (Numerical Heavy)
```bash
export EPOCHLY_LEVEL=3
export EPOCHLY_JIT_HOT_PATH_THRESHOLD=50
export EPOCHLY_MAX_WORKERS=8
export EPOCHLY_GPU_ENABLED=true
python analysis.py
```
### Batch Processing (CPU Heavy)
```bash
export EPOCHLY_LEVEL=3
export EPOCHLY_MAX_WORKERS=16
export EPOCHLY_MEMORY_SHARED_SIZE=268435456  # 256MB
python batch_job.py
```
### Memory Constrained
```bash
export EPOCHLY_LEVEL=2
export EPOCHLY_MAX_WORKERS=4
python constrained_app.py
```
## TOML Configuration File
For complex setups, create an `epochly.toml` file:
```toml
[epochly]
mode = "auto"
enhancement_level = 3
max_workers = 8
telemetry = true

[epochly.jit]
backend = "auto"
cache_enabled = true
hot_path_threshold = 1000

[epochly.memory]
pool_type = "shared"
shared_memory_size = 134217728  # 128MB
numa_aware = false

[epochly.gpu]
enabled = true
memory_limit = 4096
workload_threshold = 1000000
```
Place this file in one of:
- Current directory (`./epochly.toml`)
- User config directory (`~/.config/epochly/epochly.toml`)
- System config (`/etc/epochly/epochly.toml`)
Files are loaded in priority order (current directory wins).
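That resolution order, first existing file wins, can be sketched as below. `find_config` is a hypothetical helper mirroring the list above, not Epochly's actual loader:

```python
from pathlib import Path

def find_config(candidates=None):
    # Sketch of the load order: the first existing file wins, so a
    # project-local epochly.toml overrides user and system copies.
    if candidates is None:
        candidates = [
            Path("./epochly.toml"),
            Path.home() / ".config/epochly/epochly.toml",
            Path("/etc/epochly/epochly.toml"),
        ]
    for path in candidates:
        if path.is_file():
            return path
    return None
```

This ordering means a checked-in project config silently shadows a user-level one, which is worth remembering when a setting change appears to have no effect.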