Getting Started
TARX API Overview
TARX provides an OpenAI-compatible inference API. Point your existing code at any of three compute tiers — your local machine, the SuperComputer mesh, or your enterprise fleet — with a single line change.
# Three tiers. Same API.
from openai import OpenAI

# Your machine — $0/query, always private
client = OpenAI(base_url="http://localhost:11435/v1", api_key="none")

# SuperComputer mesh — distributed inference
client = OpenAI(base_url="https://api.tarx.com/v1", api_key="tarx_...")

# Your enterprise fleet — on-premise
client = OpenAI(base_url="https://tarx.acme-corp.com/v1", api_key="tarx_...")
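Since the only thing that differs between tiers is the `base_url`, the tier choice can be isolated in one place while the request code stays identical. A minimal sketch of that pattern — the tier names and the `tier_config` helper below are illustrative, not part of the TARX API:

```python
# Illustrative sketch: map tier names to the base URLs shown above,
# so switching tiers is a one-argument change. The tier names and this
# helper are assumptions for the example, not part of the TARX docs.

TIER_URLS = {
    "local": "http://localhost:11435/v1",       # your machine
    "mesh": "https://api.tarx.com/v1",          # SuperComputer mesh
    "fleet": "https://tarx.acme-corp.com/v1",   # enterprise fleet
}

def tier_config(tier: str, api_key: str = "none") -> dict:
    """Return the kwargs you would pass to OpenAI(...) for a tier."""
    if tier not in TIER_URLS:
        raise ValueError(f"unknown tier: {tier!r}")
    return {"base_url": TIER_URLS[tier], "api_key": api_key}

# e.g. client = OpenAI(**tier_config("mesh", api_key="tarx_..."))
print(tier_config("local"))
```

Keeping the URL table separate from the request code means moving a workload from the local tier to the mesh is a config change, not a code change.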