"When things get faster, everything gets better."
That's how our CEO Marc summed up Chalk's mission during his appearance on This Week in Startups (TWiST). As part of the TWiST 500, Marc sat down with host Alex Wilhelm to unpack why AI needs fresh data — fast.
Speed as a feature
Marc shared a lesson from his Google days: "Speed is a feature." When things are faster, people search more, click more ads, and everything becomes more magical.
But companies today are forced to choose between fast data and fresh data. Traditional architectures either serve pre-computed data in milliseconds (fast but stale) or fetch real-time data on demand (fresh but not instant).
Take Whatnot, the live-streaming marketplace. They were running overnight batch jobs for personalized recommendations — a standard approach that burned compute on predictions that might never get used. With Chalk, they were able to both improve recommendations with real-time inference and save millions of dollars by deferring compute until users actually need it.
These are exactly the kinds of problems Chalk solves — fetching fresh data directly from sources at inference time without the traditional latency penalty. It's how you get accuracy at a speed that feels magical.
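To make the trade-off concrete, here's a minimal Python sketch of the two patterns described above. The names and data sources are hypothetical, not Chalk's API; the point is simply that precomputed lookups are instant but can be hours old, while computing from the source at request time reflects the latest data at the cost of doing the work on demand.

```python
from datetime import datetime, timedelta

# Hypothetical precomputed store: values were materialized by last night's
# batch job, so reads are instant but potentially hours stale.
BATCH_FEATURES = {
    "user_42": {
        "avg_order_value": 61.0,
        "computed_at": datetime.now() - timedelta(hours=9),
    },
}

# Hypothetical source of truth: the orders that actually exist right now,
# including ones placed after the batch job ran.
LIVE_ORDERS = {
    "user_42": [72.0, 88.0, 95.0],
}


def precomputed_feature(user_id: str) -> float:
    """Fast but stale: read whatever the overnight job wrote."""
    return BATCH_FEATURES[user_id]["avg_order_value"]


def fresh_feature(user_id: str) -> float:
    """Fresh but slower: compute from the live source at request time."""
    orders = LIVE_ORDERS[user_id]
    return sum(orders) / len(orders)


if __name__ == "__main__":
    print("batch value:", precomputed_feature("user_42"))  # 61.0, hours old
    print("fresh value:", fresh_feature("user_42"))        # 85.0, current
```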
How we solved a recurring problem
Marc revealed that co-founder Elliot built what would become Chalk three times — at Affirm, his own startup Haven Money, and Credit Karma. He kept running into the same problem: data scientists write models in Python, but Python is too slow for production. Teams would rewrite everything in a faster language, introducing bugs and delays.
After building an internal solution three times, Elliot realized how many teams were stuck in the same cycle. That's what sparked Chalk — why not solve this once for everyone? The approach is straightforward: we transpile Python to C++ automatically. Same Python your data scientists already write, but it runs fast enough for production without the manual rewrites.
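As an illustration (not Chalk's actual syntax or API), the code in question is ordinary feature logic like the sketch below. Under the rewrite-by-hand workflow, a function like this gets ported to a faster language before production; the approach Marc describes keeps the Python definition as the source of truth and compiles it automatically.

```python
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    is_refund: bool


def refund_rate(transactions: list[Transaction]) -> float:
    """Plain Python a data scientist might write: share of spend that was refunded.

    Hypothetical example — logic like this is what would traditionally be
    rewritten in C++ or Java for latency, and what Chalk's transpilation
    approach aims to keep in Python as written.
    """
    refunded = sum(t.amount for t in transactions if t.is_refund)
    total = sum(t.amount for t in transactions)
    return refunded / total if total else 0.0


if __name__ == "__main__":
    txns = [
        Transaction(120.0, False),
        Transaction(30.0, True),
        Transaction(50.0, False),
    ]
    print(f"refund rate: {refund_rate(txns):.2%}")  # 15.00%
```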
What’s next for Chalk
Watch the full episode to hear Marc discuss open source plans, our SF/NYC/LA offices, and whether Databricks should be worried. If your ML models are stuck waiting for batch jobs, maybe it's time to see what happens when things get faster!