Rate Limits

Rate limits are enforced per API key using Redis sliding-window counters with both per-second (RPS) and per-minute (RPM) buckets.

Tiers

Tier        Requests/sec  Requests/min  Max WS Connections  Max Batch Size
free        5             300           2                   5
pro         20            1,200         10                  20
enterprise  50            3,000         50                  50
Check available tiers and their quotas via GET /v1/keys/tiers.

Rate Limit Response

When you exceed your limit, the API returns HTTP 429:
HTTP/1.1 429 Too Many Requests
Retry-After: 1
Content-Type: application/json
{
  "detail": "Rate limit exceeded. Retry after 1s."
}
The Retry-After header tells you how many seconds to wait before retrying.

Handling Rate Limits

Honor the Retry-After header on 429 responses, and fall back to exponential backoff for transient server errors:
import time
import requests

def api_request(url, headers, json_data=None, max_retries=3):
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=json_data, timeout=10)

        if resp.status_code == 429:
            # Honor the server's hint; default to 1s if the header is missing.
            retry_after = int(resp.headers.get("Retry-After", "1"))
            time.sleep(retry_after)
            continue

        if resp.status_code >= 500:
            time.sleep(2 ** attempt)  # Exponential backoff: 1s, 2s, 4s, ...
            continue

        return resp

    raise RuntimeError(f"Failed after {max_retries} retries")

Best Practices

Use Batch Endpoints

POST /v1/orders/batch and POST /v1/prices/batch let you combine multiple operations into a single request, dramatically reducing your request count.
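For example, instead of one GET per market, you can chunk market IDs into batch payloads sized to your tier's Max Batch Size. The {"markets": [...]} payload shape below is an assumption for illustration; check the batch endpoint reference for the exact schema.

```python
def build_price_batches(market_ids, max_batch_size=20):
    """Chunk market IDs into /v1/prices/batch payloads that respect
    your tier's Max Batch Size (20 on the pro tier)."""
    ids = list(market_ids)
    return [{"markets": ids[i:i + max_batch_size]}
            for i in range(0, len(ids), max_batch_size)]

# Each payload can then be sent with the api_request helper above, e.g.:
# api_request(f"{base_url}/v1/prices/batch", headers, json_data=payload)
```

With the pro tier, 45 markets become 3 requests instead of 45.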

Use WebSocket Feeds

Subscribe to WS /v1/ws/prices instead of polling GET /v1/markets. WebSocket connections don’t count against your REST rate limit.
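A minimal subscriber sketch, using the third-party websockets library; the {"type": "subscribe", "markets": [...]} frame shape is an assumption, so consult the WebSocket reference for the actual message format.

```python
import json

def subscribe_message(market_ids):
    """Build a subscribe frame for WS /v1/ws/prices.

    Frame shape is an assumption; see the WebSocket reference docs.
    """
    return json.dumps({"type": "subscribe", "markets": list(market_ids)})

async def stream_prices(ws_url, market_ids, on_update):
    # Requires `pip install websockets` (third-party client library).
    import websockets
    async with websockets.connect(ws_url) as ws:
        await ws.send(subscribe_message(market_ids))
        async for raw in ws:
            on_update(json.loads(raw))

# Usage (hypothetical URL):
# asyncio.run(stream_prices("wss://api.example.com/v1/ws/prices",
#                           ["mkt-1"], print))
```

One long-lived connection replaces a polling loop that would otherwise burn through your RPS budget.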

Cache Market Metadata

Market metadata (slug, question, outcomes) changes infrequently. Cache it locally and only refresh periodically.
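A small TTL cache is enough; the sketch below wraps any fetch callable (e.g. one that calls your markets endpoint) and only invokes it on a miss or after the TTL expires, so steady-state reads cost zero API requests.

```python
import time

class MetadataCache:
    """Tiny TTL cache for market metadata (slug, question, outcomes)."""

    def __init__(self, fetch, ttl=3600, clock=time.monotonic):
        self._fetch = fetch  # callable doing the real API lookup
        self._ttl = ttl
        self._clock = clock  # injectable for testing
        self._store = {}     # market_id -> (expires_at, metadata)

    def get(self, market_id):
        now = self._clock()
        hit = self._store.get(market_id)
        if hit and hit[0] > now:
            return hit[1]           # fresh: serve from cache
        meta = self._fetch(market_id)
        self._store[market_id] = (now + self._ttl, meta)
        return meta
```

An hour-long TTL is usually safe for fields that only change when a market is created or resolved.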

Idempotency Keys

Use Idempotency-Key headers to safely retry failed requests without risking duplicate orders.
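The key point is to generate the key once per logical operation and reuse it on every retry of that operation; a sketch:

```python
import uuid

def with_idempotency(headers):
    """Return a copy of `headers` carrying a fresh Idempotency-Key.

    Call once per logical operation (e.g. one order), then reuse the
    returned headers for every retry of that operation so a replayed
    POST /v1/orders cannot create a duplicate order.
    """
    return {**headers, "Idempotency-Key": str(uuid.uuid4())}

# Usage with the api_request helper above:
# order_headers = with_idempotency(base_headers)
# api_request(f"{base_url}/v1/orders", order_headers, json_data=order)
```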

Next Steps