
Python's GIL Deep Dive: What Every Developer Should Know

A technical deep dive into Python's Global Interpreter Lock -- what it does, why it exists, how it affects performance, and what you can do about it.

Epochly Team · January 30, 2026 · 8 min read

The Global Interpreter Lock (GIL) is the most misunderstood feature in Python. Developers blame it for slow code. Teams migrate to Go or Rust because of it. Conference talks warn against it.

But most Python performance problems have nothing to do with the GIL.

This post explains what the GIL actually does, when it matters, when it doesn't, and what practical options exist for working around it.


What the GIL Actually Is

The GIL is a mutex (mutual exclusion lock) in CPython that allows only one thread to execute Python bytecode at a time. Even on a 16-core machine with 16 threads, only one thread runs Python code at any given moment.

Thread 1: [===RUNNING===][---waiting---][---waiting---]
Thread 2: [---waiting---][===RUNNING===][---waiting---]
Thread 3: [---waiting---][---waiting---][===RUNNING===]
GIL:      [--Thread 1---][--Thread 2---][--Thread 3---]
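
The length of each slice is governed by the interpreter's switch interval, which you can inspect and tune from Python. A minimal sketch (5 ms is the CPython default):

import sys

# How often CPython asks the running thread to release the GIL
print(sys.getswitchinterval())  # 0.005 (seconds) by default
sys.setswitchinterval(0.001)    # shorter slices: snappier switching, more overhead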

The GIL exists to protect CPython's memory management. Python uses reference counting for garbage collection, and incrementing/decrementing reference counts without a lock would require atomic operations on every Python object -- a significant performance cost on single-threaded code.
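You can watch this bookkeeping from Python. The counts below are exactly what the GIL keeps consistent across threads (note that sys.getrefcount reports one extra reference for its own argument):

import sys

x = []
print(sys.getrefcount(x))  # 2: the name `x` plus getrefcount's argument
y = x
print(sys.getrefcount(x))  # 3: `x`, `y`, and the argument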

Key facts

  1. The GIL is a CPython implementation detail, not a Python language requirement. Jython (Java) and IronPython (.NET) have no GIL, and the experimental PyPy-STM project removed it (standard PyPy still has one).
  2. The GIL only affects Python bytecode execution. C extensions can release the GIL during computation. NumPy, SciPy, and most scientific libraries do this.
  3. The GIL is released during I/O operations. File reads, network calls, and database queries release the GIL automatically.

When the GIL Matters (And When It Doesn't)

GIL matters: CPU-bound Python code with threads

import threading
import time

def cpu_work(n):
    """Pure Python CPU-bound work."""
    result = 0
    for i in range(n):
        result += i * i
    return result

# Sequential: runs in ~13s
start = time.perf_counter()
cpu_work(50_000_000)
cpu_work(50_000_000)
sequential = time.perf_counter() - start

# Threaded: also ~13s (GIL prevents parallelism)
t1 = threading.Thread(target=cpu_work, args=(50_000_000,))
t2 = threading.Thread(target=cpu_work, args=(50_000_000,))
start = time.perf_counter()
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print(f"Sequential: {sequential:.1f}s")
print(f"Threaded:   {threaded:.1f}s")
# Both ~13 seconds. Threads did nothing.

With CPU-bound Python code, threading provides approximately 1.1x speedup (sometimes slower than sequential due to lock contention). We measured this directly:

Executor      CPU-bound speedup
ThreadPool    ~1.1x
ProcessPool   3-4x (8-12x with all cores)

ThreadPool provides almost no benefit for CPU-bound work due to the GIL.

GIL doesn't matter: I/O-bound code

import threading
import requests

def fetch(url):
    return requests.get(url)

# This WILL benefit from threading,
# because the GIL is released during network I/O
urls = ["https://api.example.com/1", "https://api.example.com/2"]
threads = [threading.Thread(target=fetch, args=(url,)) for url in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()

When Python code is waiting for I/O, the GIL is released. Other threads can execute Python code while one thread waits for a network response. This is why threading works well for I/O-bound workloads and why asyncio provides similar benefits with a different programming model.
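
For comparison, here is the same fetch with asyncio. This sketch wraps the blocking requests calls in asyncio.to_thread (Python 3.9+) rather than switching to an async HTTP client; the URLs are the placeholders from above:

import asyncio
import requests

async def main():
    urls = ["https://api.example.com/1", "https://api.example.com/2"]
    # Each blocking call runs in a worker thread; the GIL is released
    # while those threads wait on the network.
    return await asyncio.gather(
        *(asyncio.to_thread(requests.get, url) for url in urls)
    )

responses = asyncio.run(main())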

GIL doesn't matter: NumPy and scientific libraries

import numpy as np

# This already uses multiple cores:
# NumPy releases the GIL during BLAS operations
a = np.random.randn(4096, 4096)
b = np.random.randn(4096, 4096)
result = np.matmul(a, b)  # Uses OpenBLAS/MKL threads internally

NumPy, SciPy, and most scientific computing libraries release the GIL when calling into compiled C/Fortran code. The np.matmul call above uses all available cores through OpenBLAS or MKL regardless of the GIL.

This is why adding parallelism on top of NumPy operations shows approximately 1.0x speedup -- the operations are already parallel internally.
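
One way to confirm this: pin the BLAS thread pool to a single core and watch the matmul slow down. The environment variable names vary by BLAS build (these are the common ones), and they must be set before NumPy is imported:

import os

# Must happen before `import numpy`
os.environ["OPENBLAS_NUM_THREADS"] = "1"  # OpenBLAS builds
os.environ["MKL_NUM_THREADS"] = "1"       # MKL builds
os.environ["OMP_NUM_THREADS"] = "1"       # OpenMP-based builds

import numpy as np

a = np.random.randn(4096, 4096)
b = np.random.randn(4096, 4096)
result = np.matmul(a, b)  # now single-threaded; compare wall time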


The Real Cost of the GIL

The GIL's cost isn't that Python is slow. Python is slow because it's interpreted. The GIL's cost is that you can't use threads to parallelize CPU-bound Python code.

For a concrete example:

Scenario                     Time     Why
Sequential (1 core)          13.19s   Baseline
ThreadPool (16 threads)      ~12.5s   GIL prevents parallel execution
ProcessPool (16 processes)   1.52s    Separate interpreters, separate GILs

Using 16 threads on a 16-core machine gives you roughly 5% improvement. Using 16 processes gives you 8.7x. The difference is the GIL.

(Measured on Apple M2 Max, 16 cores, Python 3.13.5. ProcessPool at 54% efficiency.)


Working Around the GIL: Four Approaches

1. multiprocessing (ProcessPool)

The most straightforward approach. Each process has its own Python interpreter and its own GIL. True parallelism.

from concurrent.futures import ProcessPoolExecutor

def cpu_work(n):
    result = 0
    for i in range(n):
        result += i * i
    return result

if __name__ == "__main__":  # required on spawn-based platforms (macOS, Windows)
    with ProcessPoolExecutor(max_workers=16) as pool:
        futures = [pool.submit(cpu_work, 1_000_000) for _ in range(64)]
        results = [f.result() for f in futures]

Measured speedup: 8-12x on 16 cores for CPU-bound workloads.

Trade-offs:

  • Process spawn overhead: ~200ms per worker
  • Data must be serialized (pickle) between processes
  • Higher memory usage (each process has its own interpreter)
  • Not suitable for fine-grained parallelism (<1s workloads)
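
The serialization cost in particular can be amortized. A sketch using executor.map's chunksize parameter, which batches many small tasks into one pickle round-trip per chunk:

from concurrent.futures import ProcessPoolExecutor

def cpu_work(n):
    result = 0
    for i in range(n):
        result += i * i
    return result

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=16) as pool:
        # 10,000 tiny tasks, shipped 500 at a time per worker round-trip
        results = list(pool.map(cpu_work, [10_000] * 10_000, chunksize=500))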

2. C extensions that release the GIL

Write your performance-critical code in C/C++ and release the GIL during computation. This is what NumPy, SciPy, scikit-learn, and most scientific libraries do.

/* Release the GIL before computation */
Py_BEGIN_ALLOW_THREADS
result = expensive_computation(data, n);
Py_END_ALLOW_THREADS
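
You don't need to write C to observe this effect. ctypes releases the GIL around foreign calls (unlike PyDLL), so C-level sleeps in threads overlap. A minimal sketch, assuming a POSIX system where CDLL(None) exposes the C runtime:

import ctypes
import threading
import time

libc = ctypes.CDLL(None)  # POSIX: gives access to libc, including sleep()

threads = [threading.Thread(target=libc.sleep, args=(1,)) for _ in range(4)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"Elapsed: {time.perf_counter() - start:.1f}s")  # ~1s, not 4s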

Trade-offs:

  • Requires C/C++ expertise
  • Maintenance burden increases
  • Debugging becomes harder
  • Not always practical for application code

3. JIT compilation (Numba)

Numba compiles Python numerical code to native machine code, effectively bypassing the interpreter (and the GIL for numerical operations).

from numba import njit

@njit
def cpu_work(n):
    result = 0
    for i in range(n):
        result += i * i
    return result

Measured speedup: 58-193x (113x average) on numerical loops.
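
The first-call compilation overhead noted in the trade-offs below is easy to see with a warm-up timing. A sketch (exact times depend on your machine):

import time
from numba import njit

@njit
def cpu_work(n):
    result = 0
    for i in range(n):
        result += i * i
    return result

start = time.perf_counter()
cpu_work(10)  # first call: compiles for this argument type, then runs
print(f"First call: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
cpu_work(50_000_000)  # later calls: native code only
print(f"Warm call:  {time.perf_counter() - start:.2f}s")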

Trade-offs:

  • Only works with numerical code (no strings, dicts, or complex objects)
  • First-call compilation overhead
  • Limited Python feature support
  • Debugging compiled code is harder

4. Free-threaded Python (PEP 703)

Python 3.13 introduced an experimental build without the GIL (--disable-gil). This is the long-term solution, but it's not production-ready yet.

Current status (as of early 2026):

  • Experimental in Python 3.13+
  • Some C extensions may not work correctly
  • Performance of single-threaded code may regress slightly
  • The ecosystem needs time to adapt

Free-threaded Python is promising but not yet a practical solution for production workloads. Most libraries haven't been tested or updated for GIL-free operation.
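
If you want to experiment, you can check which build you're running on. A sketch using two introspection hooks that ship with CPython 3.13 (sys._is_gil_enabled is a private API and may change):

import sys
import sysconfig

# 1 on free-threaded builds, 0 on standard builds, None before 3.13
print(sysconfig.get_config_var("Py_GIL_DISABLED"))

# Free-threaded builds can still re-enable the GIL at runtime
if hasattr(sys, "_is_gil_enabled"):
    print(sys._is_gil_enabled())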


How Epochly Addresses the GIL

Epochly combines approaches 1-3 transparently:

Level     Approach                Effect on GIL
Level 1   GIL-aware scheduling    Minimizes contention (<5% overhead)
Level 2   Numba JIT compilation   Bypasses interpreter for numerical code
Level 3   ProcessPool execution   Separate GILs per process
Level 4   GPU offloading          Computation moves off CPU entirely

The key insight is that different workloads need different strategies. A numerical loop benefits from JIT (Level 2). A batch of independent tasks benefits from ProcessPool (Level 3). Large array operations benefit from GPU (Level 4).

Epochly's progressive enhancement model applies the right strategy based on the workload characteristics it observes at Level 0 (monitoring).


Practical Decision Framework

When you encounter a performance bottleneck in Python, ask these questions in order; the whole framework is sketched in code after question 4:

1. Is it CPU-bound or I/O-bound?

  • I/O-bound: Use asyncio or ThreadPool. The GIL is irrelevant.
  • CPU-bound: Continue to question 2.

2. Is the hot code numerical (loops, math)?

  • Yes: JIT compilation (Numba, Level 2) gives 58-193x.
  • No: Continue to question 3.

3. Can the work be split into independent chunks?

  • Yes: ProcessPool (Level 3) gives 8-12x on 16 cores.
  • No: You need algorithmic optimization or a C extension.

4. Is the data large enough?

  • Arrays >10M elements: GPU (Level 4) gives up to 70x.
  • Arrays <1M elements: Stay on CPU.
  • Workload <1 second: Don't parallelize.
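
The same framework as a sketch in code. The thresholds are the rules of thumb above, not hard limits, and the function name is ours, not a library API:

def choose_strategy(io_bound, numerical, independent_chunks,
                    array_elements=0, runtime_seconds=0.0):
    """Rough encoding of the decision framework above."""
    if io_bound:
        return "asyncio or ThreadPool"   # GIL is irrelevant
    if runtime_seconds < 1.0:
        return "don't parallelize"       # overhead dominates
    if numerical:
        if array_elements > 10_000_000:
            return "GPU offload"         # Level 4
        return "JIT with Numba"          # Level 2
    if independent_chunks:
        return "ProcessPool"             # Level 3
    return "algorithmic optimization or a C extension"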

Common GIL Misconceptions

"Python can't do parallelism"

False. multiprocessing, concurrent.futures.ProcessPoolExecutor, and C extensions all provide true parallelism. The GIL only blocks Python bytecode in threads.

"The GIL makes Python slow"

Misleading. Python is slow because it's interpreted. The GIL prevents you from using threads to parallelize CPU-bound code, but single-threaded Python would be the same speed with or without the GIL.

"I should use Go/Rust instead"

Maybe. If your entire application is CPU-bound computation, a compiled language will be faster. But if you're using Python for its ecosystem (NumPy, pandas, scikit-learn, PyTorch), the GIL is rarely the bottleneck -- those libraries already bypass it.

"Free-threaded Python will fix everything"

Eventually, partially. PEP 703 removes the GIL, but it won't make Python interpretation faster. CPU-bound Python loops will still be slow -- just now parallelizable with threads instead of requiring processes.


Summary

Factor                    Impact                          What to Do
GIL + CPU-bound threads   Blocks parallelism              Use ProcessPool or JIT
GIL + I/O-bound threads   No impact (GIL released)        Use threading or asyncio
GIL + NumPy               No impact (BLAS releases GIL)   Nothing needed
GIL + numerical loops     Blocks thread parallelism       JIT compiles past interpreter

The GIL is real, but it's not the end of Python performance. Understanding when it matters -- and when it doesn't -- is the first step toward making your Python code faster.


Benchmark conditions: Python 3.12.3, Linux WSL2, 16 cores. Apple M2 Max results on Python 3.13.5, 16 cores. January 29, 2026 comprehensive benchmark report.

python · gil · concurrency · parallelism · threading