Monitoring
Detect data drift
Ensure data quality
Monitor availability + latency
[Dashboard panel: CPU Utilization, max 84.0%, avg 52.3%]
Feature values
Easily differentiate between organic shifts, unexpected format changes in upstream data sources, and development mistakes. Get alerted automatically before issues degrade your model's performance.
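For illustration only, here is one common way a drift check like this can work: compare a live window of feature values against a reference window with a two-sample Kolmogorov-Smirnov test. The window sizes, threshold, and data below are assumptions for the sketch, not product defaults.

```python
# Minimal drift-check sketch; thresholds and window sizes are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference: np.ndarray, live: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Flag drift when the live window's distribution differs
    significantly from the reference window's."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold

# Example: an organic shift vs. an upstream format change (e.g., cents vs. dollars).
rng = np.random.default_rng(0)
reference = rng.normal(loc=100.0, scale=10.0, size=500)
organic = rng.normal(loc=101.0, scale=10.0, size=500)  # small, gradual shift
format_bug = reference * 100                           # upstream unit change

print(drifted(reference, organic))     # typically False at this window size
print(drifted(reference, format_bug))  # True: clearly a different distribution
```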
Pipeline execution
When feature engineering pipelines break, you need visibility into why. Aggregate logs and metrics by query, by cron job, by migration, and even by individual resolver. Every line of code you write is automatically instrumented so you can easily diagnose issues.
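To give a feel for what per-resolver instrumentation captures, here is a hypothetical timing-and-logging decorator in plain Python. In the product this instrumentation is applied automatically; the resolver name and logic below are made up for the example.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline")

def instrumented(resolver):
    """Record latency and success/failure for each resolver call,
    tagged with the resolver's name so metrics can be aggregated."""
    @functools.wraps(resolver)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = resolver(*args, **kwargs)
            logger.info("resolver=%s status=ok latency_ms=%.1f",
                        resolver.__name__, (time.perf_counter() - start) * 1000)
            return result
        except Exception:
            logger.exception("resolver=%s status=error latency_ms=%.1f",
                             resolver.__name__, (time.perf_counter() - start) * 1000)
            raise
    return wrapper

@instrumented
def fraud_score(transaction_amount: float) -> float:
    # Hypothetical resolver: derives a score from a raw transaction amount.
    return min(transaction_amount / 10_000, 1.0)

fraud_score(2_500.0)  # logs: resolver=fraud_score status=ok latency_ms=...
```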
Data freshness
When ETL pipelines break or third-party data vendors have outages, models can wind up with stale inputs that lead to inaccurate predictions. Monitor freshness across batch, streaming, and real-time data sources to make sure your models execute with up-to-date data.
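Under the hood, a freshness check boils down to comparing each source's last successful update against a maximum allowed staleness. A minimal sketch; the source names and staleness budgets are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical staleness budgets per data source; real values depend on
# how quickly each source is expected to refresh.
MAX_STALENESS = {
    "nightly_batch_export": timedelta(hours=26),  # daily job plus slack
    "payments_stream": timedelta(minutes=5),
    "credit_bureau_api": timedelta(hours=1),
}

def stale_sources(last_updated: dict[str, datetime]) -> list[str]:
    """Return the sources whose most recent update exceeds their budget."""
    now = datetime.now(timezone.utc)
    return [
        source
        for source, updated_at in last_updated.items()
        if now - updated_at > MAX_STALENESS[source]
    ]

# Example: the stream is healthy, but the batch export missed a run.
print(stale_sources({
    "nightly_batch_export": datetime.now(timezone.utc) - timedelta(hours=30),
    "payments_stream": datetime.now(timezone.utc) - timedelta(seconds=30),
    "credit_bureau_api": datetime.now(timezone.utc) - timedelta(minutes=10),
}))  # ['nightly_batch_export']
```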
Data lineage
Upstream data quality issues propagate into your features. Automatically track data provenance for derived features, so that you can understand which upstream data sources cause problems and escalate issues to the relevant data owners or external vendors.
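Conceptually, provenance tracking maintains a graph from each derived feature back to its upstream sources, so an issue with a feature can be routed to the right owner. A simplified sketch; the features, sources, and owners below are hypothetical:

```python
# Hypothetical lineage graph: derived feature -> upstream data sources.
LINEAGE = {
    "user.fraud_score": ["payments_stream", "credit_bureau_api"],
    "user.lifetime_value": ["nightly_batch_export"],
}

# Hypothetical ownership map: data source -> owner to escalate to.
OWNERS = {
    "payments_stream": "payments-team@example.com",
    "credit_bureau_api": "vendor-support@bureau.example.com",
    "nightly_batch_export": "data-eng@example.com",
}

def escalation_targets(feature: str) -> list[str]:
    """Given a misbehaving derived feature, find the owners of every
    upstream source that could be responsible."""
    return [OWNERS[source] for source in LINEAGE.get(feature, [])]

print(escalation_targets("user.fraud_score"))
# ['payments-team@example.com', 'vendor-support@bureau.example.com']
```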
Incident response
Production-grade machine learning requires production-grade alerting. Integrate with the alerting systems you already use, like PagerDuty, or chat tools like Slack, to keep your team informed about issues. Configure alerting thresholds so that you get notified when pipeline behavior doesn't match expectations.
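As a sketch of threshold-based alerting, the snippet below posts to a Slack incoming webhook (which accepts a JSON payload with a "text" field) whenever a metric exceeds its configured limit. The webhook URL, metric names, and thresholds are placeholders:

```python
import json
import urllib.request

# Placeholder webhook URL; create a real one in Slack's app settings.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

# Hypothetical thresholds: metric name -> maximum acceptable value.
THRESHOLDS = {
    "fraud_score.null_rate": 0.01,
    "payments_stream.staleness_seconds": 300,
}

def check_and_alert(metrics: dict[str, float]) -> None:
    """Post a Slack message for every metric that exceeds its threshold."""
    for name, value in metrics.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value > limit:
            payload = {"text": f":rotating_light: {name} = {value} (limit {limit})"}
            request = urllib.request.Request(
                SLACK_WEBHOOK_URL,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(request)

check_and_alert({"fraud_score.null_rate": 0.04,
                 "payments_stream.staleness_seconds": 42})
```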
Get Started with Code Examples
Unlock the power of real-time data pipelines. Explore all examples
Tags & Owners
Feature Discovery
Assign tags and owners to features.
Preview deployments
GitHub Actions
Set up preview deployments for all PRs.
Multi-Tenancy
Resolvers
Serve many end-customers with differentiated behavior.
Unit tests
Testing
Resolvers are just Python functions, so they are easy to unit test; see the sketch below.
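Because a resolver is just a Python function, testing one requires nothing beyond an ordinary test framework. A minimal pytest sketch, with a hypothetical resolver:

```python
# test_resolvers.py -- run with `pytest`.

def fraud_score(transaction_amount: float) -> float:
    """Hypothetical resolver: maps a transaction amount to a score in [0, 1]."""
    return min(transaction_amount / 10_000, 1.0)

def test_small_transactions_score_low():
    assert fraud_score(100.0) == 0.01

def test_score_is_capped_at_one():
    assert fraud_score(1_000_000.0) == 1.0
```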