# Decorator Patterns

Common patterns for using Epochly decorators to optimize your functions.

## Available Decorators
| Decorator | Purpose | Default Level | Use Case |
|---|---|---|---|
| @optimize() | General-purpose optimization | Auto-detect or specified | Any function needing optimization |
| @performance_monitor | Performance tracking only | 0 (monitoring) | Baseline measurements, production monitoring |
| @jit_compile | JIT compilation shorthand | 2 (JIT) | Numerical loops, CPU-bound code |
| @threading_optimize | Threading optimization shorthand | 1 (threading) | I/O-bound operations |
| @full_optimize | Multi-core parallelism shorthand | 3 (multi-core) | CPU-intensive parallel workloads |
## Pattern 1: Default Optimization

Let Epochly automatically select the best optimization level.

```python
import epochly

@epochly.optimize()
def auto_optimized_function(data):
    """Epochly analyzes and chooses optimal level"""
    result = sum(x ** 2 for x in data)
    return result

# Automatic optimization based on workload
data = list(range(1_000_000))
result = auto_optimized_function(data)
print(f"Result: {result}")
```
When to use:
- You're not sure which level is best
- You want Epochly to analyze your workload
- Prototyping and experimentation
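One way to picture what auto-detection might do is a simple heuristic over workload hints. The `choose_level` helper and its thresholds below are illustrative assumptions, not Epochly's actual selection logic:

```python
# Hypothetical sketch of size-based level selection; the function name
# and thresholds are assumptions, not Epochly internals.
def choose_level(data_size, io_bound=False):
    """Pick an optimization level from simple workload hints."""
    if io_bound:
        return 1          # threading for I/O-bound work
    if data_size < 10_000:
        return 0          # overhead would outweigh any gain
    if data_size < 1_000_000:
        return 2          # JIT for medium numerical work
    return 3              # multi-core for large workloads

print(choose_level(5_000))               # 0
print(choose_level(500_000))             # 2
print(choose_level(5_000_000))           # 3
print(choose_level(100, io_bound=True))  # 1
```

The real analyzer presumably weighs more signals (call frequency, loop structure, available cores), but the shape of the decision is the same: match the level to the workload.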
## Pattern 2: Explicit Level Selection with Enum

Use the EnhancementLevel enum for type-safe level specification.

```python
import epochly
from epochly import EnhancementLevel

def process_chunk(chunk):
    """Placeholder for per-chunk work"""
    return sum(chunk)

@epochly.optimize(level=EnhancementLevel.LEVEL_1_THREADING)
def io_bound_work(file_paths):
    """Threading for I/O operations"""
    results = []
    for path in file_paths:
        with open(path, 'r') as f:
            results.append(f.read())
    return results

@epochly.optimize(level=EnhancementLevel.LEVEL_2_JIT)
def numerical_computation(n):
    """JIT compilation for loops"""
    total = 0.0
    for i in range(n):
        total += i ** 2
    return total

@epochly.optimize(level=EnhancementLevel.LEVEL_3_FULL)
def parallel_processing(data_chunks):
    """Multi-core for parallel work"""
    return [process_chunk(chunk) for chunk in data_chunks]

# Use the functions
files = ['data1.txt', 'data2.txt', 'data3.txt']
data = io_bound_work(files)
result = numerical_computation(1_000_000)
chunks = [list(range(i, i + 10000)) for i in range(0, 100000, 10000)]
results = parallel_processing(chunks)
```
Available EnhancementLevel values:
- `LEVEL_0_BASELINE` - Monitoring only
- `LEVEL_1_THREADING` - Threading for I/O
- `LEVEL_2_JIT` - JIT compilation
- `LEVEL_3_FULL` - Multi-core parallelism
- `LEVEL_4_GPU` - GPU acceleration (Pro only)
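As a mental model, these names line up with the integer levels in Pattern 3, which suggests an integer-backed enum. A minimal sketch of that idea (the class body is an assumption for illustration, not Epochly source):

```python
from enum import IntEnum

class EnhancementLevel(IntEnum):
    """Sketch of an integer-backed level enum (illustrative only)."""
    LEVEL_0_BASELINE = 0
    LEVEL_1_THREADING = 1
    LEVEL_2_JIT = 2
    LEVEL_3_FULL = 3
    LEVEL_4_GPU = 4

# IntEnum members compare equal to plain integers, which is why
# level=2 and level=EnhancementLevel.LEVEL_2_JIT would be interchangeable.
print(EnhancementLevel.LEVEL_2_JIT == 2)    # True
print(EnhancementLevel.LEVEL_3_FULL.name)   # LEVEL_3_FULL
```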
## Pattern 3: Integer Level Specification

Use integer levels for concise code.

```python
import epochly

# Level 0: Monitoring only
@epochly.optimize(level=0)
def baseline_function(data):
    """No optimization, just tracking"""
    return sum(data)

# Level 1: Threading
@epochly.optimize(level=1)
def fetch_urls(urls):
    """I/O-bound operations"""
    import requests
    return [requests.get(url).text for url in urls]

# Level 2: JIT
@epochly.optimize(level=2)
def compute_stats(numbers):
    """CPU-bound numerical code"""
    mean = sum(numbers) / len(numbers)
    variance = sum((x - mean) ** 2 for x in numbers) / len(numbers)
    return mean, variance

# Level 3: Multi-core
@epochly.optimize(level=3)
def process_batches(batches):
    """Parallel processing"""
    return [sum(batch) for batch in batches]

# Use with different data
print(baseline_function([1, 2, 3, 4, 5]))
print(compute_stats([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
```
## Pattern 4: Monitoring Only

Use @performance_monitor for baseline measurements without optimization.

```python
import epochly

@epochly.performance_monitor
def baseline_algorithm(data):
    """Track performance without modifying behavior"""
    result = []
    for item in data:
        result.append(item ** 2)
    return result

# Run the function
data = list(range(100_000))
result = baseline_algorithm(data)

# Check metrics
metrics = epochly.get_metrics()
print(f"Execution time: {metrics['total_time']:.3f}s")
print(f"Function calls: {metrics['total_calls']}")
print(f"Average time: {metrics['avg_time']:.3f}s")
```
Use cases:
- Establishing baselines for optimization
- Production monitoring
- Performance regression testing
- Comparing optimized vs unoptimized code
## Pattern 5: JIT Compilation Shorthand

Use @jit_compile as a shorthand for Level 2 optimization.

```python
import epochly
import numpy as np

@epochly.jit_compile
def monte_carlo_pi(n_samples):
    """Estimate Pi using Monte Carlo method"""
    inside_circle = 0
    for _ in range(n_samples):
        x = np.random.random()
        y = np.random.random()
        if x**2 + y**2 <= 1:
            inside_circle += 1
    return 4 * inside_circle / n_samples

# First call: JIT compiles (slower)
pi_estimate = monte_carlo_pi(1000)

# Subsequent calls: use compiled code (much faster)
pi_estimate = monte_carlo_pi(10_000_000)
print(f"Pi estimate: {pi_estimate:.6f}")
```
When to use:
- Numerical loops
- Custom algorithms
- Functions with heavy computation
- Code that can't be vectorized
## Pattern 6: Full Optimization Shorthand

Use @full_optimize as a shorthand for Level 3 (multi-core).

```python
import epochly
import numpy as np

@epochly.full_optimize
def parallel_matrix_multiply(matrices):
    """Multiply multiple matrix pairs in parallel"""
    results = []
    for A, B in matrices:
        C = np.dot(A, B)
        results.append(C)
    return results

# Create matrix pairs
n = 1000
matrix_pairs = [
    (np.random.rand(n, n), np.random.rand(n, n))
    for _ in range(10)
]

# Process in parallel across CPU cores
results = parallel_matrix_multiply(matrix_pairs)
print(f"Processed {len(results)} matrix multiplications")
```
When to use:
- List comprehensions with independent iterations
- Batch processing
- Parallel aggregations
- CPU-intensive parallelizable workloads
## Pattern 7: Threading Shorthand

Use @threading_optimize as a shorthand for Level 1 (threading).

```python
import epochly
import requests

@epochly.threading_optimize
def fetch_multiple_apis(endpoints):
    """Fetch from multiple API endpoints concurrently"""
    results = []
    for endpoint in endpoints:
        response = requests.get(f"https://api.example.com/{endpoint}")
        results.append(response.json())
    return results

@epochly.threading_optimize(max_workers=8)
def parallel_file_processing(file_paths):
    """Process files concurrently with custom worker count"""
    results = []
    for path in file_paths:
        with open(path, 'r') as f:
            content = f.read()
        results.append(len(content))
    return results

# Fetch APIs concurrently
endpoints = ['users', 'products', 'orders', 'analytics']
data = fetch_multiple_apis(endpoints)

# Process files with 8 workers
files = [f'data_{i}.txt' for i in range(100)]
sizes = parallel_file_processing(files)
```
When to use:
- Network requests
- File I/O operations
- Database queries
- Any I/O-bound work
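Doing the same dispatch by hand with the standard library's thread pool looks roughly like this. This is a generic sketch of concurrent I/O-style dispatch, not Epochly code, and `fetch` is a stand-in for any blocking call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(item):
    """Stand-in for an I/O-bound call (network, disk, database)."""
    return item * 2  # imagine a blocking request here

items = ['a', 'b', 'c', 'd']

# executor.map preserves input order, like the sequential loop it replaces
with ThreadPoolExecutor(max_workers=8) as executor:
    results = list(executor.map(fetch, items))

print(results)  # ['aa', 'bb', 'cc', 'dd']
```

Because the work is I/O-bound, threads overlap the waiting time; the GIL is released during blocking calls, which is why Level 1 targets I/O rather than CPU-bound loops.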
## Pattern 8: Disable Monitoring

Disable performance monitoring for minimal overhead.

```python
import epochly

def process(batch):
    """Placeholder for per-batch work"""
    return sum(batch)

@epochly.optimize(level=2, monitor_performance=False)
def no_monitoring(data):
    """Optimized without performance tracking"""
    return sum(x ** 2 for x in data)

@epochly.optimize(level=3, monitor_performance=False)
def minimal_overhead(batches):
    """Maximum performance, no tracking"""
    return [process(batch) for batch in batches]

# These functions don't contribute to get_metrics()
result1 = no_monitoring(range(1_000_000))
result2 = minimal_overhead([[1, 2, 3], [4, 5, 6]])

# Metrics won't include these calls
metrics = epochly.get_metrics()
print(f"Tracked calls: {metrics.get('total_calls', 0)}")
```
When to use:
- Production code where every microsecond counts
- Functions called billions of times
- When you don't need performance metrics
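To judge whether monitoring overhead matters for your workload, you can time a wrapped call against a bare one. The trivial wrapper below is illustrative only; Epochly's real monitoring hook may cost more or less per call:

```python
import functools
import time
import timeit

def monitored(func):
    """Trivial monitoring wrapper used only to illustrate per-call cost."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        _ = time.perf_counter() - start  # would be recorded somewhere
        return result
    return wrapper

def tiny(x):
    return x + 1

wrapped = monitored(tiny)

bare_time = timeit.timeit(lambda: tiny(1), number=100_000)
wrapped_time = timeit.timeit(lambda: wrapped(1), number=100_000)

# The wrapper adds a small fixed cost per call; it only matters for
# functions that are themselves nearly free and called very often.
print(f"bare: {bare_time:.4f}s, wrapped: {wrapped_time:.4f}s")
```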
## Pattern 9: Accessing Decorator Metadata

Access metadata about the optimization applied to a function.

```python
import epochly
import time

@epochly.optimize(level=2)
def optimized_function(x):
    return x ** 2

# Check if function is enhanced
if hasattr(optimized_function, '_epochly_enhanced'):
    print(f"Function is Epochly-enhanced: {optimized_function._epochly_enhanced}")

# Get optimization level
if hasattr(optimized_function, '_epochly_level'):
    print(f"Optimization level: {optimized_function._epochly_level}")

# Access the original unoptimized function
# (fall back to the wrapper itself if the attribute is absent)
original = getattr(optimized_function, '_epochly_original', optimized_function)
print(f"Original function: {original.__name__}")

# Compare optimized vs original
start = time.perf_counter()
optimized_function(1000)
optimized_time = time.perf_counter() - start

start = time.perf_counter()
original(1000)
original_time = time.perf_counter() - start

print(f"Speedup: {original_time / optimized_time:.2f}x")
```
Available metadata attributes:
- `_epochly_enhanced`: Boolean indicating if the function is optimized
- `_epochly_level`: The optimization level applied
- `_epochly_original`: Reference to the original unoptimized function
- `_epochly_config`: Configuration dict for the optimization
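The attribute pattern itself is plain Python: a decorator can stash metadata on the wrapper it returns. A sketch of how such attributes might get attached (illustrative, not Epochly source; the no-op wrapper stands in for the real optimization):

```python
import functools

def optimize(level=2):
    """Decorator that tags its wrapper with metadata attributes."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)  # real optimization would go here
        wrapper._epochly_enhanced = True
        wrapper._epochly_level = level
        wrapper._epochly_original = func
        wrapper._epochly_config = {'level': level}
        return wrapper
    return decorator

@optimize(level=2)
def square(x):
    return x ** 2

print(square._epochly_enhanced)     # True
print(square._epochly_level)        # 2
print(square._epochly_original(3))  # 9
```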
## Pattern 10: Class Methods

Apply decorators to instance methods, class methods, and static methods.

```python
import epochly

class DataProcessor:
    """Data processing class with optimized methods"""

    def __init__(self, config):
        self.config = config

    @epochly.optimize(level=2)
    def process_item(self, item):
        """Instance method with JIT optimization"""
        result = 0
        for i in range(item):
            result += i ** 2
        return result

    @classmethod
    @epochly.optimize(level=3)
    def batch_process(cls, items):
        """Class method with multi-core optimization"""
        return [cls._process_single(item) for item in items]

    @staticmethod
    @epochly.optimize(level=2)
    def calculate_stats(numbers):
        """Static method with JIT optimization"""
        mean = sum(numbers) / len(numbers)
        variance = sum((x - mean) ** 2 for x in numbers) / len(numbers)
        return {'mean': mean, 'variance': variance}

    @classmethod
    def _process_single(cls, item):
        """Helper method"""
        return item ** 2

# Use the class
processor = DataProcessor(config={'threshold': 100})

# Instance method
result = processor.process_item(1000)
print(f"Processed item: {result}")

# Class method
batch_results = DataProcessor.batch_process([10, 20, 30, 40])
print(f"Batch results: {batch_results}")

# Static method
stats = DataProcessor.calculate_stats([1, 2, 3, 4, 5])
print(f"Stats: {stats}")
```
Best practices for class methods:
- Epochly decorator should be closest to the function (innermost)
- Works with `@classmethod`, `@staticmethod`, and `@property`
- Can optimize constructors (`__init__`)
- Maintains proper method resolution order (MRO)
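The ordering rule above (optimizing decorator innermost, `@classmethod`/`@staticmethod` outermost) is easy to verify with a plain decorator. A generic sketch, not Epochly code; `track` stands in for any function-wrapping decorator:

```python
import functools

def track(func):
    """Stand-in for an optimizing decorator; tags the wrapper it returns."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.tracked = True
    return wrapper

class Widget:
    @classmethod      # outermost: handles binding of cls
    @track            # innermost: sees a plain function it can wrap
    def make(cls, n):
        return [cls() for _ in range(n)]

    @staticmethod
    @track
    def double(x):
        return x * 2

print(len(Widget.make(3)))    # 3
print(Widget.double(5))       # 10
print(Widget.double.tracked)  # True
```

Reversing the order would hand `track` a `classmethod` descriptor instead of a callable function, which is why the function-wrapping decorator must sit innermost.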
## Best Practices

### 1. Match Level to Workload

```python
import epochly

# ✅ GOOD: I/O-bound → Level 1
@epochly.optimize(level=1)
def fetch_data(urls):
    import requests
    return [requests.get(url).text for url in urls]

# ✅ GOOD: Numerical loops → Level 2
@epochly.optimize(level=2)
def compute_primes(n):
    primes = []
    for num in range(2, n):
        if all(num % i != 0 for i in range(2, int(num**0.5) + 1)):
            primes.append(num)
    return primes

# ✅ GOOD: Parallel processing → Level 3
@epochly.optimize(level=3)
def process_batches(batches):
    return [sum(batch) for batch in batches]

# ❌ BAD: I/O-bound with Level 2
@epochly.optimize(level=2)  # Wrong level!
def bad_io(urls):
    import requests
    return [requests.get(url).text for url in urls]
```
### 2. Start with Defaults

```python
import epochly

def process(data):
    """Placeholder for your actual workload"""
    return data

# Start with auto-detection
@epochly.optimize()
def my_function(data):
    return process(data)

# Or use monitoring to establish a baseline
@epochly.performance_monitor
def baseline(data):
    return process(data)

# Then optimize based on measurements
@epochly.optimize(level=2)
def optimized(data):
    return process(data)
```
### 3. Profile First

```python
import epochly

def compute(data):
    """Placeholder for your actual workload"""
    return data

test_data = list(range(10_000))

# Establish baseline
@epochly.performance_monitor
def baseline_version(data):
    return compute(data)

# Test optimization
@epochly.optimize(level=2)
def optimized_version(data):
    return compute(data)

# Compare
baseline_version(test_data)
optimized_version(test_data)
metrics = epochly.get_metrics()

# Analyze before committing to an optimization level
```
### 4. Avoid Over-Optimization

```python
import epochly

# ❌ BAD: Over-optimization for small data
@epochly.optimize(level=3)
def tiny_work(data):
    return [x ** 2 for x in data]
# Called with: tiny_work([1, 2, 3, 4, 5])
# Overhead > benefit!

# ✅ GOOD: No optimization for small data
def tiny_work(data):
    return [x ** 2 for x in data]

# Or use conditional optimization
@epochly.optimize()
def smart_work(data):
    # Epochly auto-detects size and optimizes accordingly
    return [x ** 2 for x in data]
```
### 5. Use Shorthand Decorators

```python
import epochly

# Use shorthand for clarity
@epochly.jit_compile  # Clear: JIT optimization
def numerical_code(n):
    return compute(n)

@epochly.threading_optimize  # Clear: threading
def io_code(files):
    return read_files(files)

@epochly.full_optimize  # Clear: multi-core
def parallel_code(batches):
    return process_batches(batches)

# Instead of the less clear:
@epochly.optimize(level=2)  # Less clear what level 2 means
def numerical_code(n):
    return compute(n)
```
### 6. Document Optimization Rationale

```python
import epochly

@epochly.optimize(level=2)
def calculate_distances(points):
    """Calculate pairwise distances between points.

    Uses Level 2 (JIT) because:
    - Contains nested loops (O(n²))
    - Pure numerical computation
    - No parallelization benefit (dependencies)
    - JIT provides 10x speedup on 1000+ points
    """
    distances = []
    for i, p1 in enumerate(points):
        for p2 in points[i+1:]:
            dist = ((p1[0]-p2[0])**2 + (p1[1]-p2[1])**2)**0.5
            distances.append(dist)
    return distances
```