# erlang_python

**Combine Python's ML/AI ecosystem with Erlang's concurrency.**

Run Python code from Erlang or Elixir with true parallelism, async/await support,
and seamless integration. Build AI-powered applications that scale.

## Overview

erlang_python embeds Python into the BEAM VM, letting you call Python functions,
evaluate expressions, and stream from generators - all without blocking Erlang
schedulers.

**Three paths to parallelism:**
- **Sub-interpreters** (Python 3.12+) - Each interpreter has its own GIL
- **Free-threaded Python** (3.13+) - No GIL at all
- **BEAM processes** - Fan out work across lightweight Erlang processes
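
Which of these paths is available depends on the Python build you link against. A small, hedged probe using only the standard `sys`/`sysconfig` modules (no erlang_python APIs assumed) can tell you:

```python
import sys
import sysconfig

# Py_GIL_DISABLED is 1 on free-threaded (--disable-gil) CPython builds,
# and 0 or None on standard builds.
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# Per-interpreter GIL for sub-interpreters requires CPython 3.12+.
supports_subinterpreters = sys.version_info >= (3, 12)

print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"free-threaded={free_threaded}, "
      f"own-GIL sub-interpreters={supports_subinterpreters}")
```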

Key features:
- **Async/await** - Call Python async functions, gather results, stream from async generators
- **Dirty NIF execution** - Python runs on dirty schedulers, never blocking the BEAM
- **Elixir support** - Works seamlessly from Elixir via the `:py` module
- **Bidirectional calls** - Python can call back into registered Erlang/Elixir functions
- **Type conversion** - Automatic conversion between Erlang and Python types
- **Streaming** - Iterate over Python generators chunk-by-chunk
- **Virtual environments** - Activate venvs for dependency isolation
- **AI/ML ready** - Examples for embeddings, semantic search, RAG, and LLMs
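
To make the streaming feature concrete: on the Python side it is just an ordinary generator, and each yielded value is delivered to the Erlang caller as one chunk. A minimal sketch in plain Python (the function name is made up for illustration; no erlang_python APIs are assumed):

```python
def read_in_chunks(text, size=4):
    """Yield fixed-size chunks; the Erlang side would receive them one at a time."""
    for start in range(0, len(text), size):
        yield text[start:start + size]

chunks = list(read_in_chunks("hello world!"))
print(chunks)  # ['hell', 'o wo', 'rld!']
```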

## Requirements

- Erlang/OTP 27+
- Python 3.12+ (3.13+ for free-threading)
- C compiler (gcc, clang)

## Building

```elixir
{:ok, 3628800} = :py.eval("__import__('erlang').call('factorial', 10)")
```

## Async/Await Support

Call Python async functions without blocking:

```erlang
%% Call an async function
Ref = py:async_call(aiohttp, get, [<<"https://api.example.com/data">>]),
{ok, Response} = py:async_await(Ref).

%% Gather multiple async calls concurrently
{ok, Results} = py:async_gather([
    {aiohttp, get, [<<"https://api.example.com/users">>]},
    {aiohttp, get, [<<"https://api.example.com/posts">>]},
    {aiohttp, get, [<<"https://api.example.com/comments">>]}
]).

%% Stream from async generators
{ok, Chunks} = py:async_stream(mymodule, async_generator, [args]).
```

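On the Python side, the functions driven by these calls are ordinary coroutines and async generators. A hedged, self-contained sketch using only the standard library (`fetch`, `numbers`, and the payloads are invented for illustration):

```python
import asyncio

async def fetch(path):
    # Stand-in for an HTTP request; any awaitable works here.
    await asyncio.sleep(0.01)
    return {"path": path, "status": 200}

async def numbers(n):
    # Async generator: a streaming caller would receive each value as a chunk.
    for i in range(n):
        await asyncio.sleep(0)
        yield i

async def main():
    # Rough equivalent of gathering: run the coroutines concurrently.
    results = await asyncio.gather(fetch("/users"), fetch("/posts"))
    chunks = [i async for i in numbers(3)]
    return results, chunks

results, chunks = asyncio.run(main())
print(results, chunks)
```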
## Parallel Execution with Sub-interpreters

True parallelism without GIL contention using Python 3.12+ sub-interpreters:

```erlang
%% Execute multiple calls in parallel across sub-interpreters
{ok, Results} = py:parallel([
    {math, factorial, [100]},
    {math, factorial, [200]},
    {math, factorial, [300]},
    {math, factorial, [400]}
]).
%% Each call runs in its own interpreter with its own GIL
```

## Parallel Processing with BEAM Processes

Leverage Erlang's lightweight processes for massive parallelism:
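
The full example lives in `examples/erlang_concurrency.erl`; the core pattern is a hand-rolled parallel map. A minimal sketch (with `mymodule:process/1` as a placeholder for any Python call):

```erlang
%% Fan each item out to its own BEAM process, then collect the replies.
%% mymodule:process/1 stands in for whatever Python function you call.
pmap(Items) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() ->
                          Result = py:call(mymodule, process, [Item]),
                          Parent ! {Ref, Result}
                      end),
                Ref
            end || Item <- Items],
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```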
**Benchmark Results** (from `examples/erlang_concurrency.erl`):
```
Sequential: 10 Python calls × 100ms each = 1.01 seconds
Parallel:   10 BEAM processes calling Python = 0.10 seconds
```

The speedup is linear with the number of items when work is I/O-bound or
distributed across sub-interpreters.

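The I/O-bound case can be reproduced in plain Python: sleeping (or waiting on a socket) releases the GIL, so tasks overlap on a thread pool. A small illustration with the standard library (timings here are approximate and unrelated to the benchmark above):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(_):
    time.sleep(0.05)  # stands in for a 50 ms I/O-bound call
    return "done"

items = range(10)

start = time.perf_counter()
sequential = [io_task(i) for i in items]       # ~10 x 50 ms, one after another
sequential_time = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel = list(pool.map(io_task, items))  # ~1 x 50 ms, all overlapping
parallel_time = time.perf_counter() - start

print(f"sequential {sequential_time:.2f}s, parallel {parallel_time:.2f}s")
```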
## Virtual Environment Support

```erlang