
Rate Limiting

pyBDL includes a rate limiting system that automatically enforces the BDL API provider's quotas so your code never exceeds them. The rate limiter supports both synchronous and asynchronous operations, persistent quota tracking, and configurable wait/raise behavior.

Overview

The rate limiting system enforces multiple quota periods simultaneously (per second, per 15 minutes, per 12 hours, per 7 days) as specified by the BDL API provider. It automatically tracks quota usage and can either wait for quota to become available or raise exceptions when limits are exceeded.

Key Features

  • Automatic enforcement: Rate limiting is built into all API calls
  • Multiple quota periods: Enforces limits across different time windows simultaneously
  • Persistent cache: Quota usage survives process restarts
  • Sync & async support: Works seamlessly with both synchronous and asynchronous code
  • Configurable behavior: Choose to wait or raise exceptions when limits are exceeded
  • Shared state: Sync and async limiters share quota state via persistent cache

The pybdl.config.BDLConfig used by the API client defaults to waiting when a local quota slot is not yet available (raise_on_rate_limit=False). Set raise_on_rate_limit=True (or environment variable BDL_RATE_LIMIT_RAISE=true) to raise pybdl.api.exceptions.RateLimitError immediately instead, for example in tests that must fail fast.

When the server responds with HTTP 429 (Too Many Requests), the client retries with a separate budget (http_429_max_retries / BDL_HTTP_429_MAX_RETRIES), honoring Retry-After when present (seconds or HTTP-date) up to http_429_max_delay (default 900 seconds). If Retry-After is omitted, waits use exponential backoff retry_backoff_factor × 2^attempt (capped by http_429_max_delay). This is independent of request_retries, which applies to other retryable status codes.

Default Quotas

The rate limiter enforces the following default quotas based on user registration status:

| Period | Anonymous user | Registered user |
|--------|----------------|-----------------|
| 1s     | 5              | 10              |
| 15m    | 100            | 500             |
| 12h    | 1,000          | 5,000           |
| 7d     | 10,000         | 50,000          |

These limits are automatically applied based on whether you provide an API key (registered user) or not (anonymous user).
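Conceptually, this table is a mapping from period length in seconds to a pair of limits, and the anonymous or registered column is selected at configuration time. A minimal sketch (the dictionary below mirrors the table; the `quotas_for` helper is hypothetical, added here for illustration):

```python
# Reconstruction of the default quota table:
# period in seconds -> (anonymous limit, registered limit)
DEFAULT_QUOTAS = {
    1: (5, 10),            # 1 second
    900: (100, 500),       # 15 minutes
    43200: (1000, 5000),   # 12 hours
    604800: (10000, 50000) # 7 days
}

def quotas_for(is_registered: bool) -> dict[int, int]:
    """Select the column matching the user's registration status."""
    index = 1 if is_registered else 0
    return {period: limits[index] for period, limits in DEFAULT_QUOTAS.items()}
```

This is the same selection the user guide performs below with `{k: v[1] for k, v in DEFAULT_QUOTAS.items()}`.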

Registration status detection

The library automatically determines your registration status:

  • Anonymous user: When api_key is None or not provided in BDLConfig
  • Registered user: When api_key is provided in BDLConfig

The rate limiter uses separate quota tracking for registered and anonymous users, ensuring that each user type gets the correct limits.

User Guide

Basic Usage

Rate limiting is automatically handled by the library. Simply use the API client normally:

from pybdl import BDL, BDLConfig

config = BDLConfig(api_key="your-api-key")
bdl = BDL(config)

# Rate limiting is automatic - no extra code needed
data = bdl.api.data.get_data_by_variable(variable_id="3643", years=[2021])

The rate limiter will automatically:

  • Track your API usage across all calls
  • Enforce quota limits
  • Wait for quota to free up or raise an exception when limits are exceeded, depending on configuration

Handling Rate Limit Errors

When configured to raise (the standalone RateLimiter default, or raise_on_rate_limit=True on BDLConfig; the API client otherwise defaults to waiting), a RateLimitError is raised when quota is exceeded:

from pybdl.utils.rate_limiter import RateLimitError

try:
    data = bdl.api.data.get_data_by_variable(variable_id="3643", years=[2021])
except RateLimitError as e:
    print(f"Rate limit exceeded. Retry after {e.retry_after:.1f} seconds")
    print(f"Limit info: {e.limit_info}")

The exception includes:

  • retry_after: Number of seconds to wait before retrying
  • limit_info: Dictionary with detailed quota information
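The exact contents of limit_info depend on the limiter state; judging from the exception's message formatting (see the RateLimitError source in the API reference), it carries a "quotas" list of limit/period entries. A sketch of that assumed shape:

```python
# Assumed limit_info shape (inferred from the message formatting, not guaranteed):
limit_info = {
    "quotas": [
        {"limit": 10, "period": 1},     # 10 requests per second
        {"limit": 500, "period": 900},  # 500 requests per 15 minutes
    ]
}

# How the exception message summarizes the exceeded quotas:
periods = ", ".join(f"{q['limit']} req/{q['period']}s" for q in limit_info["quotas"])
# "10 req/1s, 500 req/900s"
```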

Waiting Instead of Raising

You can configure the rate limiter to wait automatically instead of raising exceptions. This requires creating a custom rate limiter:

from pybdl.utils.rate_limiter import RateLimiter, PersistentQuotaCache
from pybdl.config import DEFAULT_QUOTAS

# Create a rate limiter that waits up to 30 seconds
cache = PersistentQuotaCache(enabled=True)
quotas = {k: v[1] for k, v in DEFAULT_QUOTAS.items()}  # Registered user quotas
limiter = RateLimiter(
    quotas=quotas,
    is_registered=True,
    cache=cache,
    raise_on_limit=False,  # Wait instead of raising
    max_delay=30.0  # Maximum wait time in seconds
)

# Use the limiter before making API calls
limiter.acquire()
data = bdl.api.data.get_data_by_variable(variable_id="3643", years=[2021])

Using Context Managers

Rate limiters can be used as context managers for cleaner code:

from pybdl.utils.rate_limiter import RateLimiter, PersistentQuotaCache
from pybdl.config import DEFAULT_QUOTAS

cache = PersistentQuotaCache(enabled=True)
quotas = {k: v[1] for k, v in DEFAULT_QUOTAS.items()}
limiter = RateLimiter(quotas, is_registered=True, cache=cache)

# Automatically acquires quota when entering context
with limiter:
    data = bdl.api.data.get_data_by_variable(variable_id="3643", years=[2021])

Using Decorators

You can decorate functions to automatically rate limit them:

from pybdl.utils.rate_limiter import rate_limit
from pybdl.config import DEFAULT_QUOTAS

quotas = {k: v[1] for k, v in DEFAULT_QUOTAS.items()}

@rate_limit(quotas=quotas, is_registered=True, max_delay=10)
def fetch_data(variable_id: str, year: int):
    return bdl.api.data.get_data_by_variable(variable_id=variable_id, years=[year])

# Function is automatically rate limited
data = fetch_data("3643", 2021)

For async functions:

from pybdl.utils.rate_limiter import async_rate_limit
from pybdl.config import DEFAULT_QUOTAS

quotas = {k: v[1] for k, v in DEFAULT_QUOTAS.items()}

@async_rate_limit(quotas=quotas, is_registered=True)
async def async_fetch_data(variable_id: str, year: int):
    return await bdl.api.data.aget_data_by_variable(variable_id=variable_id, years=[year])

Checking Remaining Quota

You can check how much quota remains before making API calls:

from pybdl import BDL, BDLConfig

bdl = BDL(BDLConfig(api_key="your-api-key"))

# Get remaining quota (requires accessing the internal limiter)
remaining = bdl._client._sync_limiter.get_remaining_quota()
print(f"Remaining requests per second: {remaining.get(1, 0)}")
print(f"Remaining requests per 15 minutes: {remaining.get(900, 0)}")

Custom Quotas

You can override default quotas for testing or special deployments:

from pybdl import BDLConfig

# Custom quotas: period in seconds -> limit
custom_quotas = {
    1: 20,        # 20 requests per second
    900: 500,     # 500 requests per 15 minutes
    43200: 2000,  # 2000 requests per 12 hours
    604800: 20000 # 20000 requests per 7 days
}

config = BDLConfig(api_key="your-api-key", custom_quotas=custom_quotas)
bdl = BDL(config)

Or via environment variable:

export BDL_QUOTAS='{"1": 20, "900": 500}'
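The variable holds a JSON object whose keys are period lengths in seconds. A sketch of how such a value can be parsed into the {period: limit} form used above (the helper name is hypothetical; pyBDL performs this internally):

```python
import json

def parse_quota_env(raw: str) -> dict[int, int]:
    """Parse a BDL_QUOTAS-style JSON string into {period_seconds: limit}.
    JSON object keys are always strings, so they must be converted to int."""
    return {int(period): int(limit) for period, limit in json.loads(raw).items()}

parse_quota_env('{"1": 20, "900": 500}')  # {1: 20, 900: 500}
```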

Persistent Cache

The rate limiter uses a persistent cache to track quota usage across process restarts. The cache is stored in:

  • Project-local: .cache/pybdl/quota_cache.json (default)
  • Global: Platform-specific cache directory (e.g., ~/.cache/pybdl/quota_cache.json on Linux)

You can disable persistent caching:

from pybdl import BDLConfig

config = BDLConfig(api_key="your-api-key", quota_cache_enabled=False)
bdl = BDL(config)

Sync and Async Sharing

Both synchronous and asynchronous rate limiters share the same quota state via the persistent cache. This means:

  • Sync and async API calls count toward the same limits
  • Quota usage persists across different execution contexts
  • Process restarts maintain quota state

Technical Details

For technical implementation details, including architecture, algorithm, thread safety, cache implementation, and configuration options, see the appendix.

API Reference

rate_limiter

Rate limiting utilities.

__all__ module-attribute

__all__ = [
    "GUSBDLError",
    "RateLimitError",
    "RateLimitDelayExceeded",
    "PersistentQuotaCache",
    "RateLimiter",
    "AsyncRateLimiter",
    "rate_limit",
    "async_rate_limit",
]

GUSBDLError

Bases: Exception

Base exception for all GUS BDL API errors.

RateLimitDelayExceeded

RateLimitDelayExceeded(
    actual_delay, max_delay, limit_info=None
)

Bases: RateLimitError

Raised when required delay exceeds max_delay setting.

Source code in pybdl/api/exceptions.py
def __init__(
    self,
    actual_delay: float,
    max_delay: float,
    limit_info: dict[str, Any] | None = None,
) -> None:
    self.actual_delay = actual_delay
    self.max_delay = max_delay

    message = f"Required delay ({actual_delay:.1f}s) exceeds maximum allowed delay ({max_delay:.1f}s)."
    super().__init__(
        retry_after=actual_delay,
        limit_info=limit_info,
        message=message,
    )

retry_after instance-attribute

retry_after = retry_after

limit_info instance-attribute

limit_info = limit_info or {}

actual_delay instance-attribute

actual_delay = actual_delay

max_delay instance-attribute

max_delay = max_delay

RateLimitError

RateLimitError(retry_after, limit_info=None, message=None)

Bases: GUSBDLError

Raised when rate limit is exceeded.

Source code in pybdl/api/exceptions.py
def __init__(
    self,
    retry_after: float,
    limit_info: dict[str, Any] | None = None,
    message: str | None = None,
) -> None:
    self.retry_after = retry_after
    self.limit_info = limit_info or {}

    if message is None:
        periods = ", ".join(f"{info['limit']} req/{info['period']}s" for info in self.limit_info.get("quotas", []))
        message = f"Rate limit exceeded ({periods}). Retry after {retry_after:.1f}s."

    super().__init__(message)

retry_after instance-attribute

retry_after = retry_after

limit_info instance-attribute

limit_info = limit_info or {}

AsyncRateLimiter

AsyncRateLimiter(
    quotas,
    is_registered,
    cache=None,
    max_delay=None,
    raise_on_limit=True,
    buffer_seconds=0.05,
)

Bases: RateLimiterBase

Asyncio-compatible rate limiter for API requests.

Source code in pybdl/utils/rate_limiter/_async.py
def __init__(
    self,
    quotas: dict[int, int | tuple[int, int]],
    is_registered: bool,
    cache: PersistentQuotaCache | None = None,
    max_delay: float | None = None,
    raise_on_limit: bool = True,
    buffer_seconds: float = 0.05,
) -> None:
    super().__init__(
        quotas=quotas,
        is_registered=is_registered,
        cache=cache,
        max_delay=max_delay,
        raise_on_limit=raise_on_limit,
        buffer_seconds=buffer_seconds,
    )
    self.lock = asyncio.Lock()

quotas instance-attribute

quotas = quotas

is_registered instance-attribute

is_registered = is_registered

calls instance-attribute

calls = {period: (deque()) for period in quotas}

cache instance-attribute

cache = cache

cache_key instance-attribute

cache_key = 'reg' if is_registered else 'anon'

max_delay instance-attribute

max_delay = max_delay

raise_on_limit instance-attribute

raise_on_limit = raise_on_limit

buffer_seconds instance-attribute

buffer_seconds = buffer_seconds

lock instance-attribute

lock = Lock()

acquire async

acquire()
Source code in pybdl/utils/rate_limiter/_async.py
async def acquire(self) -> float | None:
    if not self.quotas:
        return None

    while True:
        sleep_time = 0.0
        async with self.lock:
            if self.cache and self.cache.enabled:
                self._sync_from_cache()
            now = time.monotonic()
            self._cleanup_expired(now)
            wait_time = self._compute_wait(now)
            if wait_time <= 0 and self._try_record(now):
                return now
            sleep_time = self._check_wait_and_raise(wait_time) if wait_time > 0 else 0.0

        if sleep_time > 0:
            await asyncio.sleep(sleep_time)

release async

release(recorded_at)
Source code in pybdl/utils/rate_limiter/_async.py
async def release(self, recorded_at: float | None) -> None:
    if recorded_at is None:
        return
    async with self.lock:
        self._remove_timestamp(recorded_at)

get_remaining_quota

get_remaining_quota()

Return a snapshot without mutating state or awaiting a lock.

Source code in pybdl/utils/rate_limiter/_async.py
def get_remaining_quota(self) -> dict[int, int]:
    """Return a snapshot without mutating state or awaiting a lock."""
    now = time.monotonic()
    if self.cache and self.cache.enabled:
        return {
            period: max(
                0,
                self._get_limit(period)
                - len(
                    [
                        value
                        for value in self.cache.get(f"{self.cache_key}_{period}")
                        if isinstance(value, int | float) and value > now - period
                    ]
                ),
            )
            for period in self.quotas
        }

    return {
        period: max(
            0,
            self._get_limit(period) - len([value for value in self.calls[period] if value > now - period]),
        )
        for period in self.quotas
    }

get_remaining_quota_async async

get_remaining_quota_async()
Source code in pybdl/utils/rate_limiter/_async.py
async def get_remaining_quota_async(self) -> dict[int, int]:
    async with self.lock:
        if self.cache and self.cache.enabled:
            self._sync_from_cache()
        return self._get_remaining(time.monotonic())

reset

reset()

Synchronous convenience helper for non-concurrent contexts.

Source code in pybdl/utils/rate_limiter/_async.py
def reset(self) -> None:
    """Synchronous convenience helper for non-concurrent contexts."""
    self._reset_all()

reset_async async

reset_async()
Source code in pybdl/utils/rate_limiter/_async.py
async def reset_async(self) -> None:
    async with self.lock:
        self._reset_all()

__aenter__ async

__aenter__()
Source code in pybdl/utils/rate_limiter/_async.py
async def __aenter__(self) -> "AsyncRateLimiter":
    await self.acquire()
    return self

__aexit__ async

__aexit__(exc_type, exc_val, exc_tb)
Source code in pybdl/utils/rate_limiter/_async.py
async def __aexit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> Literal[False]:
    return False

PersistentQuotaCache

PersistentQuotaCache(
    enabled=True, *, cache_file=None, use_global_cache=False
)

Thread-safe persistent storage for rate limiter timestamps.

Source code in pybdl/utils/rate_limiter/_cache.py
def __init__(
    self,
    enabled: bool = True,
    *,
    cache_file: str | Path | None = None,
    use_global_cache: bool = False,
) -> None:
    self.enabled = enabled
    self.cache_file = Path(
        resolve_cache_file_path(
            "quota_cache.json",
            use_global_cache=use_global_cache,
            custom_file=str(cache_file) if cache_file is not None else None,
        )
    )
    self._lock = threading.Lock()
    self._data: dict[str, Any] = {}
    if self.enabled:
        self._ensure_cache_dir()
        self._load()

enabled instance-attribute

enabled = enabled

cache_file instance-attribute

cache_file = Path(
    resolve_cache_file_path(
        "quota_cache.json",
        use_global_cache=use_global_cache,
        custom_file=str(cache_file)
        if cache_file is not None
        else None,
    )
)

get

get(key)
Source code in pybdl/utils/rate_limiter/_cache.py
def get(self, key: str) -> Any:
    if not self.enabled:
        return []
    with self._lock:
        value = self._data.get(key, [])
        return list(value) if isinstance(value, list) else value

set

set(key, value)
Source code in pybdl/utils/rate_limiter/_cache.py
def set(self, key: str, value: Any) -> None:
    if not self.enabled:
        return
    with self._lock:
        self._data[key] = value
        self._save()

try_append_if_under_limit

try_append_if_under_limit(
    key, value, max_length, cleanup_older_than=None
)
Source code in pybdl/utils/rate_limiter/_cache.py
def try_append_if_under_limit(
    self,
    key: str,
    value: float,
    max_length: int,
    cleanup_older_than: float | None = None,
) -> bool:
    if not self.enabled:
        return True
    with self._lock:
        current = self._clean_list(key, cleanup_older_than)
        if len(current) >= max_length:
            return False
        current.append(value)
        self._data[key] = current
        self._save()
        return True

try_record_all_periods

try_record_all_periods(periods_config, value)

Atomically record one timestamp for every tracked period.

Source code in pybdl/utils/rate_limiter/_cache.py
def try_record_all_periods(
    self,
    periods_config: Sequence[tuple[str, int, float | None]],
    value: float,
) -> bool:
    """Atomically record one timestamp for every tracked period."""
    if not self.enabled:
        return True

    with self._lock:
        staged: dict[str, list[float]] = {}
        for key, max_length, cleanup_older_than in periods_config:
            current = self._clean_list(key, cleanup_older_than)
            if len(current) >= max_length:
                return False
            staged[key] = current

        for key, _, _ in periods_config:
            staged[key].append(value)
            self._data[key] = staged[key]

        self._save()
        return True

remove_last_if_matches

remove_last_if_matches(key, value)
Source code in pybdl/utils/rate_limiter/_cache.py
def remove_last_if_matches(self, key: str, value: float) -> bool:
    if not self.enabled:
        return False
    with self._lock:
        current = list(self._data.get(key, []))
        for index in range(len(current) - 1, -1, -1):
            if current[index] == value:
                del current[index]
                self._data[key] = current
                self._save()
                return True
        return False

remove_from_all_periods

remove_from_all_periods(keys, value)

Atomically remove a timestamp from all tracked periods.

Source code in pybdl/utils/rate_limiter/_cache.py
def remove_from_all_periods(self, keys: Sequence[str], value: float) -> bool:
    """Atomically remove a timestamp from all tracked periods."""
    if not self.enabled:
        return False

    changed = False
    with self._lock:
        for key in keys:
            current = list(self._data.get(key, []))
            for index in range(len(current) - 1, -1, -1):
                if current[index] == value:
                    del current[index]
                    self._data[key] = current
                    changed = True
                    break

        if changed:
            self._save()
        return changed

RateLimiter

RateLimiter(
    quotas,
    is_registered,
    cache=None,
    max_delay=None,
    raise_on_limit=True,
    buffer_seconds=0.05,
)

Bases: RateLimiterBase

Thread-safe synchronous rate limiter for API requests.

Source code in pybdl/utils/rate_limiter/_sync.py
def __init__(
    self,
    quotas: dict[int, int | tuple[int, int]],
    is_registered: bool,
    cache: PersistentQuotaCache | None = None,
    max_delay: float | None = None,
    raise_on_limit: bool = True,
    buffer_seconds: float = 0.05,
) -> None:
    super().__init__(
        quotas=quotas,
        is_registered=is_registered,
        cache=cache,
        max_delay=max_delay,
        raise_on_limit=raise_on_limit,
        buffer_seconds=buffer_seconds,
    )
    self.lock = threading.Lock()

quotas instance-attribute

quotas = quotas

is_registered instance-attribute

is_registered = is_registered

calls instance-attribute

calls = {period: (deque()) for period in quotas}

cache instance-attribute

cache = cache

cache_key instance-attribute

cache_key = 'reg' if is_registered else 'anon'

max_delay instance-attribute

max_delay = max_delay

raise_on_limit instance-attribute

raise_on_limit = raise_on_limit

buffer_seconds instance-attribute

buffer_seconds = buffer_seconds

lock instance-attribute

lock = Lock()

acquire

acquire()
Source code in pybdl/utils/rate_limiter/_sync.py
def acquire(self) -> float | None:
    if not self.quotas:
        return None

    while True:
        sleep_time = 0.0
        with self.lock:
            if self.cache and self.cache.enabled:
                self._sync_from_cache()
            now = time.monotonic()
            self._cleanup_expired(now)
            wait_time = self._compute_wait(now)
            if wait_time <= 0 and self._try_record(now):
                return now
            sleep_time = self._check_wait_and_raise(wait_time) if wait_time > 0 else 0.0

        if sleep_time > 0:
            time.sleep(sleep_time)

release

release(recorded_at)
Source code in pybdl/utils/rate_limiter/_sync.py
def release(self, recorded_at: float | None) -> None:
    if recorded_at is None:
        return
    with self.lock:
        self._remove_timestamp(recorded_at)

get_remaining_quota

get_remaining_quota()
Source code in pybdl/utils/rate_limiter/_sync.py
def get_remaining_quota(self) -> dict[int, int]:
    with self.lock:
        if self.cache and self.cache.enabled:
            self._sync_from_cache()
        return self._get_remaining(time.monotonic())

reset

reset()
Source code in pybdl/utils/rate_limiter/_sync.py
def reset(self) -> None:
    with self.lock:
        self._reset_all()

__enter__

__enter__()
Source code in pybdl/utils/rate_limiter/_sync.py
def __enter__(self) -> "RateLimiter":
    self.acquire()
    return self

__exit__

__exit__(exc_type, exc_val, exc_tb)
Source code in pybdl/utils/rate_limiter/_sync.py
def __exit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> Literal[False]:
    return False

async_rate_limit

async_rate_limit(quotas, is_registered, **limiter_kwargs)
Source code in pybdl/utils/rate_limiter/_decorators.py
def async_rate_limit(
    quotas: dict[int, int | tuple[int, int]],
    is_registered: bool,
    **limiter_kwargs: Any,
) -> Callable[[Callable[..., Awaitable[T]]], Callable[..., Awaitable[T]]]:
    limiter = AsyncRateLimiter(quotas, is_registered, **limiter_kwargs)

    def decorator(func: Callable[..., Awaitable[T]]) -> Callable[..., Awaitable[T]]:
        @functools.wraps(func)
        async def wrapper(*args: Any, **kwargs: Any) -> T:
            await limiter.acquire()
            return await func(*args, **kwargs)

        return wrapper

    return decorator

rate_limit

rate_limit(quotas, is_registered, **limiter_kwargs)
Source code in pybdl/utils/rate_limiter/_decorators.py
def rate_limit(
    quotas: dict[int, int | tuple[int, int]],
    is_registered: bool,
    **limiter_kwargs: Any,
) -> Callable[[Callable[..., T]], Callable[..., T]]:
    limiter = RateLimiter(quotas, is_registered, **limiter_kwargs)

    def decorator(func: Callable[..., T]) -> Callable[..., T]:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> T:
            limiter.acquire()
            return func(*args, **kwargs)

        return wrapper

    return decorator

Examples

Example: Custom Rate Limiter with Wait Behavior

from pybdl.utils.rate_limiter import RateLimiter, PersistentQuotaCache
from pybdl.config import DEFAULT_QUOTAS

# Create cache
cache = PersistentQuotaCache(enabled=True)

# Get registered user quotas
quotas = {k: v[1] for k, v in DEFAULT_QUOTAS.items()}

# Create limiter that waits up to 30 seconds
limiter = RateLimiter(
    quotas=quotas,
    is_registered=True,
    cache=cache,
    raise_on_limit=False,
    max_delay=30.0
)

# Use limiter
limiter.acquire()  # Will wait if needed, up to 30 seconds
# Make your API call here

Example: Handling Rate Limit Errors

from pybdl import BDL, BDLConfig
from pybdl.utils.rate_limiter import RateLimitError, RateLimitDelayExceeded

bdl = BDL(BDLConfig(api_key="your-api-key"))

try:
    data = bdl.api.data.get_data_by_variable(variable_id="3643", years=[2021])
except RateLimitError as e:
    if isinstance(e, RateLimitDelayExceeded):
        print(f"Would need to wait {e.actual_delay:.1f}s, exceeds max {e.max_delay:.1f}s")
    else:
        print(f"Rate limit exceeded. Retry after {e.retry_after:.1f}s")
        print(f"Current limits: {e.limit_info}")

Example: Checking Quota Before Making Calls

from pybdl import BDL, BDLConfig

bdl = BDL(BDLConfig(api_key="your-api-key"))

# Check remaining quota
remaining = bdl._client._sync_limiter.get_remaining_quota()

if remaining.get(1, 0) < 5:
    print("Warning: Low quota remaining for 1-second period")
    # Consider waiting or reducing request rate

# Make API call
data = bdl.api.data.get_data_by_variable(variable_id="3643", years=[2021])

Example: Resetting Quota (for testing)

from pybdl import BDL, BDLConfig

bdl = BDL(BDLConfig(api_key="your-api-key"))

# Reset quota counters (useful for testing)
bdl._client._sync_limiter.reset()

# Now you can make fresh API calls

Best Practices

  1. Pick a failure mode deliberately: the API client defaults to waiting, which suits long-running jobs; set raise_on_rate_limit=True when you need fast failures, e.g. in tests
  2. Handle exceptions: Always catch RateLimitError and implement retry logic
  3. Monitor quota: Check remaining quota periodically to avoid hitting limits unexpectedly
  4. Use persistent cache: Keep quota_cache_enabled=True (default) to maintain quota state across restarts
  5. Custom quotas for testing: Use custom quotas when testing to avoid hitting production limits
  6. Async operations: Use async rate limiters for async code to avoid blocking the event loop
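Practice 2 can be implemented with a small retry wrapper. The sketch below defines a stand-in exception carrying the retry_after attribute documented earlier (in real code, import RateLimitError from pybdl.utils.rate_limiter); the wrapper itself is hypothetical:

```python
import time

class RateLimitError(Exception):
    """Stand-in for pybdl's RateLimitError, carrying retry_after."""

    def __init__(self, retry_after: float) -> None:
        super().__init__(f"retry after {retry_after:.1f}s")
        self.retry_after = retry_after

def call_with_retry(fn, max_attempts: int = 3, sleep=time.sleep):
    """Call fn, sleeping retry_after between rate-limited attempts;
    re-raise once the attempt budget is exhausted."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError as exc:
            if attempt == max_attempts - 1:
                raise
            sleep(exc.retry_after)
```

The `sleep` parameter is injectable so tests can record waits instead of actually sleeping.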

Troubleshooting

RateLimitError despite few calls

The persistent cache may contain old quota data. Try resetting the quota or clearing the cache file.

Sync vs async separate limits

Ensure both limiters share the same PersistentQuotaCache instance. This is automatic when using BDLConfig.

Rate limiter feels slow

Consider using async operations or adjusting max_delay. The rate limiter adds minimal overhead (<1 ms per call).

Corrupted cache file

The cache file is automatically recreated if corrupted. Old quota data will be lost, but this is usually fine.

See also

- [Configuration](config.md) — configuration options
- [API clients](api_clients.md) — API usage examples
- [Appendix](appendix.md) — technical implementation details