MoneyLion is on a mission to empower Americans to make better financial decisions. With millions of users across lending, investing, and personal finance tools, the company relies on a complex machine learning ecosystem to drive real-time fraud detection, customer engagement, and personalized recommendations.
As the business scaled, building and deploying machine learning solutions across teams—ML operations, backend engineering, product, and data science—became more challenging. Each group owned a different part of the ML lifecycle and optimized for its own priorities.
The result was natural friction. Different workflows, goals, and metrics made collaboration slow and costly. MoneyLion needed more than a better feature platform. They needed an alignment layer for the entire ML lifecycle—from idea to production.
That alignment didn’t mean forcing teams into the same process. It meant creating a shared space where data scientists could build, engineers could scale, MLOps could govern, and product teams could move quickly.
MoneyLion’s first-generation feature platform was technically robust but operationally fragmented. Built around Java microservices using Spring Boot, Postgres, and Redis, the system prioritized scalability — but also introduced complexity.
Data scientists and ML scientists, like Jing, prototyped features offline in SQL or notebooks. Bringing these experiments into production required rewriting the logic in Java, often by different engineers on the Feature Platform Team (FPT). This translation step added significant delay between experimentation and deployment.
The backend engineers, like Anya, maintained multiple custom microservices for different products, with duplicated ingestion and feature computation logic. Without a centralized feature catalog or lineage tracking, feature reuse was rare and offline/online skew was common—especially for real-time applications.
Despite heavy investment in Postgres query optimization and Redis caching layers to improve read performance for fraud use cases, maintaining sub-second latencies remained difficult under peak loads. Many fraud models required features to be computed and served within hundreds of milliseconds to meet strict SLA targets.
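The read-path pattern described here—check a fast cache first, fall back to the slower store only on a miss—can be sketched in a few lines. The class below is purely illustrative: a plain dict stands in for Redis, the loader stands in for a Postgres query, and the TTL value is arbitrary; it is not MoneyLion's actual implementation.

```python
import time

class ReadThroughCache:
    """Minimal read-through cache with TTL; a dict stands in for Redis."""

    def __init__(self, loader, ttl_seconds=30.0):
        self._loader = loader      # slow source of truth, e.g. a Postgres query
        self._ttl = ttl_seconds
        self._store = {}           # key -> (value, expiry timestamp)

    def get(self, key):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[1] > now:
            return hit[0]          # fast path: serve from cache within TTL
        value = self._loader(key)  # slow path: recompute, then cache
        self._store[key] = (value, now + self._ttl)
        return value
```

The hard part in production is not the happy path above but what it omits: invalidation on upstream changes, stampede protection under peak load, and keeping the cached value consistent with the offline definition—exactly the maintenance burden the text describes.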
MLOps engineers, like Melvin, managed governance manually — controlling data access, code promotion, and observability across a patchwork of microservices — without a unified interface for lifecycle management.
Without a cohesive system, features were delayed, duplicated, or dropped—limiting MoneyLion’s ability to deliver real-time, AI-driven experiences at scale.
Chalk unified MoneyLion’s fragmented ML workflows by providing a developer-first platform that lets each team contribute effectively without forcing a rigid process.
| Before Chalk | With Chalk |
| --- | --- |
| Features prototyped offline, rewritten manually for production | Features built directly in Python and productionized with Chalk |
| No central feature store, catalog, or reuse | Centralized catalog, automatic lineage, and easy feature reuse |
| High engineering overhead for the FPT | FPT focuses on scaling, latency, and platform reliability |
| Manual governance and slow approvals | Built-in branching, isolation, and governance |
| Long delays from idea to deployment | Rapid iteration: hours or days instead of weeks |
For backend engineers, Chalk replaced manual microservice maintenance with dynamic feature pipelines. Engineers now focus on scaling system throughput, onboarding new real-time data sources, and optimizing query planners, rather than hand-translating feature logic.
For MLOps engineers, Chalk introduced a clean, branch-based development lifecycle. Each feature change exists in an isolated environment until promoted, reducing the risk of dependency conflicts or production regressions. Governance is enforced automatically through versioning, feature ownership tracking, and runtime policy checks.
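The branch-then-promote lifecycle can be sketched with a small wrapper. The `FeatureBranch` class and the dict of production definitions below are hypothetical illustrations of the isolation idea, not Chalk's actual API:

```python
class FeatureBranch:
    """An isolated workspace layered over the shared production feature set.

    Changes made on the branch are invisible to production until promote()
    is called, mirroring a branch-based development lifecycle.
    """

    def __init__(self, main):
        self._main = main        # shared production feature definitions
        self._overrides = {}     # branch-local definitions, not yet live

    def define(self, name, fn):
        self._overrides[name] = fn

    def resolve(self, name):
        # The branch sees its own changes first, then falls back to production.
        return self._overrides.get(name, self._main.get(name))

    def promote(self):
        # In a real system, governance hooks (reviews, policy checks,
        # version bumps) would run here before the merge.
        self._main.update(self._overrides)
        self._overrides.clear()
```

The key property is that `resolve` on a branch never mutates shared state, so two teams can redefine the same feature concurrently without dependency conflicts until one of them promotes.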
For data scientists, Chalk eliminated the offline/online skew. Features are defined once in Python, immediately versioned, and served online through Chalk’s real-time query engine. This closed the loop between experimentation and production, dramatically speeding up iteration cycles.
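Why does defining a feature once eliminate skew? Because the offline backfill and the online request path execute the same function, there is no hand-translated Java copy to drift out of sync. The registry, decorator, feature name, and helper functions below are hypothetical stand-ins for that idea, not Chalk's API:

```python
# One registry of feature computations; offline and online paths share it.
FEATURE_REGISTRY = {}

def feature(name, version=1):
    """Register a feature computation under (name, version)."""
    def wrap(fn):
        FEATURE_REGISTRY[(name, version)] = fn
        return fn
    return wrap

@feature("txn_amount_zscore", version=1)
def txn_amount_zscore(amount, mean, std):
    # Hypothetical fraud feature: how unusual is this transaction amount?
    return 0.0 if std == 0 else (amount - mean) / std

def serve_online(name, version, **inputs):
    """Online path: compute one feature value at request time."""
    return FEATURE_REGISTRY[(name, version)](**inputs)

def backfill_offline(name, version, rows):
    """Offline path: compute the same feature over historical rows."""
    fn = FEATURE_REGISTRY[(name, version)]
    return [fn(**row) for row in rows]
```

Because both paths dispatch through the same `(name, version)` entry, a training set built by `backfill_offline` is computed by exactly the code that `serve_online` runs in production—skew by construction becomes impossible for that feature.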
For product managers like Meng, Chalk unlocked better observability and feature reuse across lines of business, reducing duplicate effort and shortening time-to-market.
Chalk acts as the central feature platform between MoneyLion’s data infrastructure and real-time model serving systems. It abstracts feature computation, storage, and online serving behind a unified Python-first interface.
Teams define, test, version, and serve features through Chalk while maintaining full traceability and runtime guarantees.
Chalk helped MoneyLion not only unify the process of productionizing their ML but also accelerate customer-facing innovation across key areas.
Today, with Chalk powering real-time ML infrastructure, MoneyLion can:

| New way of working | Business impact |
| --- | --- |
| Data engineers focus on platform optimization, not manual feature support | Higher availability of real-time features for fraud and finance |
| MLOps enforces platform governance automatically | Faster and safer experimentation across teams |
| Data scientists productionize features independently in Python | Smarter, faster model deployments |
| Product reuses features across lines of business | Shorter launch times for AI-driven features |

These capabilities are active today, reaching millions of MoneyLion users.
Chalk now serves as the foundation for MoneyLion’s future AI strategy. As the company grows, Chalk is enabling: