Epochly Level Configuration Guide

This guide explains how to tune Epochly's optimization levels for your specific workloads.

Overview of Enhancement Levels

Epochly uses a progressive enhancement model with five levels:

| Level | Name | Optimization Type | Best For |
|-------|------|-------------------|----------|
| 0 | Monitor | None (baseline) | Diagnostics |
| 1 | Threading | Thread pool for I/O | I/O-bound tasks |
| 2 | JIT | Just-in-time compilation | Numerical computation |
| 3 | Multicore | Full parallelism | CPU-bound tasks |
| 4 | GPU | GPU acceleration | Large array operations |
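The level names and their valid range can be captured in a small lookup. This is an illustrative sketch, not part of Epochly's API: `LEVELS` and `resolve_level` are hypothetical helpers showing how the `EPOCHLY_LEVEL` variable maps onto the table.

```python
import os

# Level names from the table above.
LEVELS = {0: "Monitor", 1: "Threading", 2: "JIT", 3: "Multicore", 4: "GPU"}

def resolve_level(default: int = 0) -> int:
    """Read EPOCHLY_LEVEL from the environment, falling back to `default`."""
    level = int(os.environ.get("EPOCHLY_LEVEL", str(default)))
    if level not in LEVELS:
        raise ValueError(f"EPOCHLY_LEVEL must be 0-4, got {level}")
    return level

os.environ["EPOCHLY_LEVEL"] = "3"
print(LEVELS[resolve_level()])  # Multicore
```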

Configuration Methods

Epochly supports configuration through:

  1. Environment variables (recommended for deployment)
  2. TOML configuration files (for complex setups)
  3. Runtime API (for level changes only)

Environment Variable Reference

| Variable | Purpose | Values |
|----------|---------|--------|
| `EPOCHLY_LEVEL` | Set target enhancement level | 0-4 |
| `EPOCHLY_MAX_WORKERS` | Maximum workers (threads or processes) | Integer (default: CPU count) |
| `EPOCHLY_JIT_BACKEND` | Force specific JIT backend | `numba`, `native`, `auto` |
| `EPOCHLY_JIT_ENABLED` | Enable/disable JIT compilation | `true`, `false` |
| `EPOCHLY_JIT_HOT_PATH_THRESHOLD` | Function calls before JIT compilation | Integer (default: 1000) |
| `EPOCHLY_GPU_ENABLED` | Enable/disable GPU acceleration | `true`, `false` |
| `EPOCHLY_GPU_MEMORY_LIMIT` | GPU memory limit (MB) | Integer |
| `EPOCHLY_GPU_WORKLOAD_THRESHOLD` | Minimum workload size before GPU offload | Integer |
| `EPOCHLY_MEMORY_SHARED_SIZE` | Shared memory pool size | Integer (bytes) |
| `EPOCHLY_TELEMETRY` | Enable telemetry | `true`, `false` |
| `EPOCHLY_MODE` | Operating mode | `auto`, `off` |
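The table uses two value conventions: `true`/`false` strings and integers. A minimal sketch of parsing them (the `env_bool` and `env_int` helpers are illustrative, not part of Epochly):

```python
import os

def env_bool(name: str, default: bool = False) -> bool:
    """Parse a true/false variable such as EPOCHLY_GPU_ENABLED."""
    raw = os.environ.get(name)
    return default if raw is None else raw.strip().lower() == "true"

def env_int(name: str, default: int) -> int:
    """Parse an integer variable such as EPOCHLY_MAX_WORKERS."""
    raw = os.environ.get(name)
    return default if raw is None else int(raw)

os.environ["EPOCHLY_GPU_ENABLED"] = "true"
os.environ["EPOCHLY_MAX_WORKERS"] = "8"
print(env_bool("EPOCHLY_GPU_ENABLED"), env_int("EPOCHLY_MAX_WORKERS", os.cpu_count() or 1))  # True 8
```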

Runtime API

```python
import epochly

# Check current status
status = epochly.get_status()
print(f"Current level: {status['enhancement_level']}")
print(f"Workers: {status.get('worker_count', 'N/A')}")

# Set enhancement level programmatically
epochly.set_level(3)  # Set to Level 3 (Multicore)
```

Level 1: Threading Configuration

What It Does

Level 1 introduces a thread pool executor for I/O-bound operations:

  • File reads/writes
  • Network requests
  • Database queries
  • Any operation that waits on external resources
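The benefit of a thread pool for such workloads can be seen with plain stdlib code. This sketch uses `concurrent.futures` directly (Epochly manages its own pool internally); `fetch` is a stand-in for any blocking I/O call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Stand-in for a blocking network call; sleeping releases the GIL,
    # so a thread pool overlaps the waiting time.
    time.sleep(0.1)
    return f"response from {url}"

urls = [f"https://example.com/{i}" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch, urls))
elapsed = time.perf_counter() - start

# Eight 0.1s waits overlap into roughly one, instead of ~0.8s serially.
print(len(results), f"{elapsed:.2f}s")
```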

Tuning Level 1

```bash
# Set maximum workers via environment variable
export EPOCHLY_MAX_WORKERS=32
python your_script.py
```

#### Recommended Settings

| Workload Type | Recommended Workers |
|---------------|---------------------|
| Light I/O (< 10 concurrent ops) | 4-8 |
| Medium I/O (10-50 concurrent ops) | 8-16 |
| Heavy I/O (> 50 concurrent ops) | 16-32 |
| Memory constrained | 2-4 |
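The table above can be turned into a simple heuristic. `recommended_workers` is a hypothetical helper (not an Epochly API) that picks the upper end of each band:

```python
import os

def recommended_workers(concurrent_ops: int, memory_constrained: bool = False) -> int:
    """Map a workload category from the table above to a worker count."""
    if memory_constrained:
        return 4
    if concurrent_ops < 10:   # light I/O
        return 8
    if concurrent_ops <= 50:  # medium I/O
        return 16
    return 32                 # heavy I/O

# Export the choice the same way the shell example above does.
os.environ["EPOCHLY_MAX_WORKERS"] = str(recommended_workers(25))
print(os.environ["EPOCHLY_MAX_WORKERS"])  # 16
```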

Level 2: JIT Configuration

Tuning Level 2

#### JIT Backend Selection

```bash
# Force Numba backend
export EPOCHLY_JIT_BACKEND=numba
python your_script.py

# Use native JIT (Python 3.13+)
export EPOCHLY_JIT_BACKEND=native
python -X jit your_script.py

# Auto-detect best backend (default)
export EPOCHLY_JIT_BACKEND=auto
python your_script.py
```

#### JIT Threshold

Control when functions are JIT-compiled:

```bash
# Compile after fewer calls (more aggressive)
export EPOCHLY_JIT_HOT_PATH_THRESHOLD=100
python your_script.py

# Compile after more calls (more conservative)
export EPOCHLY_JIT_HOT_PATH_THRESHOLD=5000
python your_script.py
```
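The threshold semantics can be sketched with a call counter. Epochly's real hot-path tracking is internal; `record_call` below is purely illustrative:

```python
import os

THRESHOLD = int(os.environ.get("EPOCHLY_JIT_HOT_PATH_THRESHOLD", "1000"))

call_counts: dict[str, int] = {}
compiled: set[str] = set()

def record_call(name: str) -> bool:
    """Return True on the call that first pushes `name` over the threshold."""
    call_counts[name] = call_counts.get(name, 0) + 1
    if name not in compiled and call_counts[name] >= THRESHOLD:
        compiled.add(name)
        return True
    return False

hot_at = [i for i in range(1, 2 * THRESHOLD + 1) if record_call("dot_product")]
print(hot_at)  # compilation triggers exactly once, at call number THRESHOLD
```

A lower threshold compiles sooner but risks spending compile time on functions that are not actually hot; a higher one delays the payoff for genuinely hot loops.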

Level 3: Multicore Configuration

Tuning Level 3

#### Worker Count

```bash
# Check current workers
python -c "import epochly; print(epochly.get_status())"

# Adjust worker count
export EPOCHLY_MAX_WORKERS=8
python your_script.py
```

#### Shared Memory Size

```bash
# Increase shared memory for large data transfers
export EPOCHLY_MEMORY_SHARED_SIZE=134217728 # 128MB
python your_script.py

# Decrease for memory-constrained environments
export EPOCHLY_MEMORY_SHARED_SIZE=33554432 # 32MB
python your_script.py
```
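`EPOCHLY_MEMORY_SHARED_SIZE` takes raw bytes, so it helps to compute the value rather than hand-type it. The `mb` helper below is illustrative; the sanity check uses the stdlib `multiprocessing.shared_memory` module, not Epochly's pool:

```python
from multiprocessing import shared_memory

def mb(n: int) -> int:
    """Convert megabytes to the raw byte count EPOCHLY_MEMORY_SHARED_SIZE expects."""
    return n * 1024 * 1024

# The byte values used in the shell examples above.
print(mb(128), mb(32))  # 134217728 33554432

# Sanity-check that the OS will actually grant a pool of that size.
shm = shared_memory.SharedMemory(create=True, size=mb(32))
try:
    print(shm.size >= mb(32))  # True (the OS may round up to a page boundary)
finally:
    shm.close()
    shm.unlink()
```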

Level 4: GPU Configuration

Tuning Level 4

```bash
# Enable GPU acceleration
export EPOCHLY_GPU_ENABLED=true
python your_script.py

# Set GPU memory limit (MB)
export EPOCHLY_GPU_MEMORY_LIMIT=4096
python your_script.py

# Set minimum workload threshold for GPU offload
export EPOCHLY_GPU_WORKLOAD_THRESHOLD=1000000
python your_script.py
```
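The workload threshold exists because, below a certain size, host-to-device transfer overhead usually outweighs any GPU speedup. A hypothetical sketch of the offload decision (`use_gpu` is not an Epochly API):

```python
import os

threshold = int(os.environ.get("EPOCHLY_GPU_WORKLOAD_THRESHOLD", "1000000"))

def use_gpu(n_elements: int) -> bool:
    # Below the threshold, stay on the CPU: the transfer cost dominates.
    return n_elements >= threshold

print(use_gpu(threshold - 1), use_gpu(threshold))  # False True
```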

Common Configuration Patterns

Web Server (I/O Heavy)

```bash
export EPOCHLY_LEVEL=1
export EPOCHLY_MAX_WORKERS=64
export EPOCHLY_JIT_ENABLED=false
python server.py
```

Data Science (Numerical Heavy)

```bash
export EPOCHLY_LEVEL=3
export EPOCHLY_JIT_HOT_PATH_THRESHOLD=50
export EPOCHLY_MAX_WORKERS=8
export EPOCHLY_GPU_ENABLED=true
python analysis.py
```

Batch Processing (CPU Heavy)

```bash
export EPOCHLY_LEVEL=3
export EPOCHLY_MAX_WORKERS=16
export EPOCHLY_MEMORY_SHARED_SIZE=268435456 # 256MB
python batch_job.py
```

Memory Constrained

```bash
export EPOCHLY_LEVEL=2
export EPOCHLY_MAX_WORKERS=4
python constrained_app.py
```

TOML Configuration File

For complex setups, create an epochly.toml file:

```toml
[epochly]
mode = "auto"
enhancement_level = 3
max_workers = 8
telemetry = true

[epochly.jit]
backend = "auto"
cache_enabled = true
hot_path_threshold = 1000

[epochly.memory]
pool_type = "shared"
shared_memory_size = 134217728 # 128MB
numa_aware = false

[epochly.gpu]
enabled = true
memory_limit = 4096
workload_threshold = 1000000
```

Place this file in:

  • Current directory (./epochly.toml)
  • User config directory (~/.config/epochly/epochly.toml)
  • System config (/etc/epochly/epochly.toml)

The locations are searched in the order listed, and the first file found is used, so a file in the current directory takes priority over the user and system configs.