The feature compute engine for realtime ML

Chalk’s compute engine powers real-time ML by computing features at inference time. Chalk executes feature pipelines end-to-end with an optimized engine, eliminating stale streams and brittle ETL jobs by resolving features directly from the source.

Trusted by teams building the next generation of ML systems

Melio · Sunrun · Found · Pipe · InDebted · Ramp

Why ML teams love Chalk’s compute engine

Compute features at inference time—directly from the source.

API & Data Integration

Connect third-party APIs and incorporate unstructured data with LLMs. Chalk handles auth, retries, and caching automatically.

Just-in-Time Fetching

Get fresh data only when needed. Chalk fetches inputs at runtime for accurate, cost-efficient predictions.

Declarative Pipelines

Define dependencies with Python signatures. Chalk auto-orchestrates resolvers into efficient query plans across online and offline environments.
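The idea can be sketched in plain Python (a toy illustration of the pattern, not Chalk’s actual API): each resolver’s signature declares which features it consumes and produces, and a planner chains resolvers on demand.

```python
import inspect

RESOLVERS = {}  # output feature name -> (function, input feature names)

def resolver(fn):
    """Register a resolver: its parameter names are the features it
    consumes; its function name is the feature it produces."""
    inputs = list(inspect.signature(fn).parameters)
    RESOLVERS[fn.__name__] = (fn, inputs)
    return fn

@resolver
def email_domain(email):
    return email.split("@")[1]

@resolver
def is_corporate(email_domain):
    return email_domain not in {"gmail.com", "yahoo.com"}

def query(feature, known):
    """Resolve a feature from raw inputs, computing only the
    resolvers the requested feature actually depends on."""
    if feature in known:
        return known[feature]
    fn, inputs = RESOLVERS[feature]
    return fn(*(query(i, known) for i in inputs))

print(query("is_corporate", {"email": "ada@acme.io"}))  # True
```

Because the plan is derived from signatures, requesting `is_corporate` automatically pulls in `email_domain` first, and nothing else is computed.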

Preview Deployments

Test changes in isolated preview environments. Chalk spins up sandboxes per branch for safe iteration and review.

Rust-Powered Runtime

Run Python at native speed. Chalk uses Rust to parallelize fetches, push down operations, and multithread computation.

Built-in Observability

Trace every query, monitor latency, and debug at the feature level. Chalk captures lineage and telemetry by default—no extra setup required.

Chalk has transformed our ML development workflow. We can now build and iterate on ML features faster than ever, with a dramatically better developer experience. Chalk also powers real-time feature transformations for our LLM tools and models — critical for meeting the ultra-high freshness standards we require. Beyond the product, the Chalk team has been a great partner: responsive, deeply knowledgeable, and committed to helping us move faster.

Jay Feng
ML Engineer at Nowsta

Feature engineering reimagined for the inference era

Inference is top of mind for every ML org today. But most teams are stuck duct-taping infrastructure together—ETL, feature stores, custom APIs, brittle retraining logic.

Chalk changes that.

  • Type annotations define which features to transform
  • Decorators make it easy to group logic and assign resolvers to environments, owners, and more
  • Native performance, Python syntax. Write functions in Python—Chalk compiles and accelerates them in C++ and Rust, with zero Python runtime overhead
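A minimal sketch of what this style looks like, using a stand-in decorator defined inline so it runs without Chalk installed (the names `online`, `environment`, and `owner` here are illustrative, not Chalk’s exact API):

```python
from dataclasses import dataclass

def online(environment=None, owner=None):
    """Stand-in decorator: attaches deployment metadata to a resolver,
    mimicking how decorator arguments assign environments and owners."""
    def wrap(fn):
        fn.meta = {"environment": environment, "owner": owner}
        return fn
    return wrap

@dataclass
class User:
    # Type annotations declare the features and their types.
    id: int
    email: str

@online(environment="production", owner="ml-platform")
def email_domain(user: User) -> str:
    # The annotated signature tells a planner what this resolver
    # consumes (a User) and produces (a string feature).
    return user.email.split("@")[1]

print(email_domain(User(id=1, email="ada@acme.io")))  # acme.io
print(email_domain.meta["environment"])               # production
```

The annotations carry the dataflow; the decorator carries the deployment metadata, so grouping logic by environment or owner is declarative rather than wired up by hand.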

Accelerated execution with Python-to-Rust transpilation

One language, one system.

With Chalk, the same code powers training, evaluation, and inference—ensuring consistency, correctness, and eliminating the need to rewrite features.

  • Dynamically builds efficient query plans, never fetching anything extra
  • Parses and transpiles your logic into static expressions that run natively
  • Centralizes your ML models into code, establishing a single source of truth
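The transpilation step can be illustrated with a toy check built on Python’s `ast` module (the `is_static` helper is hypothetical): a resolver body made only of names, constants, and arithmetic is a static expression an engine could lower to native code rather than interpret.

```python
import ast

# A resolver body written as a plain Python expression.
src = "amount / (avg_amount + 1.0)"

def is_static(tree):
    """True if the expression contains only names, constants, and
    arithmetic, i.e. something an engine could compile natively."""
    allowed = (ast.Expression, ast.BinOp, ast.Name, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Load)
    return all(isinstance(n, allowed) for n in ast.walk(tree))

print(is_static(ast.parse(src, mode="eval")))          # True
print(is_static(ast.parse("fetch(x)", mode="eval")))   # False
```

Expressions that pass a check like this can run entirely outside the Python interpreter; anything else (such as an arbitrary function call) falls back to regular Python execution.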

Compute fresh features. Serve them in milliseconds. Unify your ML workflow.