Customer Story

Medely staffs critical healthcare in real time with Chalk


Client

Medely

Use Case

RecSys (matchmaking), Dynamic pricing

Industry

Healthcare Staffing

Cloud

GCP

Challenges

  • Feature changes required heavy infrastructure work
  • Batch pipeline locked feature freshness at 24 hours
  • Experiment cycles took ~2 months due to manual overhead

Solutions

  • Self-serve feature platform delivers data engineering as software
  • Real-time computation reacts to live marketplace conditions
  • Intuitive feature development unlocks 2-week sprint cycles

Overview

Medely is the world's largest healthcare talent marketplace, connecting providers to a flexible workforce of 300,000 nurses and allied professionals. Since COVID-19, demand for healthcare gig work has surged: a 2022 Oliver Wyman survey found a 1400% spike in nurses moving to gig models. Medely is creating a trusted network where healthcare workers can find flexible opportunities and facilities can fill urgent shifts.

As Medely matured, the company recognized that machine learning could optimize two key levers:

  • Dynamic pricing: Recommending charge rates that balance facility budgets with competitive compensation
  • Professional matching: Placing qualified professionals into the right jobs at the right time to maximize fill rate

But without a proper experimentation-to-production pipeline, these optimizations remained out of reach.

Medely chose Chalk for self-serve feature infrastructure that could scale with their ambitions. Within months, they deployed a charge rate recommendations model that drove a sharp increase in revenue, proving the value of ML and clearing the path to growth.

The Challenge

Healthcare staffing operates on compressed timelines. Jobs can be posted with as little as 48 hours of lead time, so pricing and matching models need to react to marketplace conditions in real time.

Medely assembled a batch pipeline from available tools:

  • Prefect: orchestrated a daily batch job that queried Postgres/Snowflake
  • Redis: stored pre-computed features (the feature store)
  • Inference service: read features from Redis to make predictions

This architecture created two main problems:

1. Models operated on stale data

Since the job was daily, feature freshness was locked at 24 hours: too slow to react to rapidly changing facility demand and professional availability.

2. Every iteration required heavy manual work

Every ML task carried infrastructure and development overhead:

  • Write a new feature: write separate Snowflake queries (including complex joins for relational features)
  • Modify feature logic: redeploy the Prefect pipeline across the entire stack
  • Add a data source: coordinate with data engineering and update pipelines
  • Modify schema: provision infrastructure

It was quite clear to me that the ownership and the infra was not going to scale with the ambitions of the number of models we wanted to deploy.
Eric Simon, Staff Machine Learning Engineer

To scale their ML use cases, Medely needed something low-lift, flexible, and seamless.

The Solution

Medely evaluated a variety of feature platforms, and Chalk stood out for its ease of adoption, support quality, and self-serve design. The team spun up Chalk in just a few days, replacing the batch pipeline with a unified system where features are defined in Python and computed in real time.

What Chalk delivered:

  1. Self-serve infrastructure

The manual infrastructure work disappeared. Medely engineers could modify feature logic, update schemas, and add data sources without touching Terraform or coordinating pipelines. Chalk handled the infrastructure layer automatically, functioning as a data engineering team delivered via software.

A solution like Chalk is profoundly important to our team because it provided the ability to buy engineering talent. I can rely on the fact that it's self-serve.
Eric Simon, Staff Machine Learning Engineer

  2. Real-time computation

Chalk directly queries Postgres at inference time, eliminating the 24-hour staleness problem. Medely's pricing models now combine Snowflake historical aggregates with real-time Postgres signals (availability, facility demand, and current shift patterns), reacting to marketplace conditions as they happen.
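
A minimal sketch of that idea in plain Python, using hypothetical stand-in functions rather than Chalk's real API (the facility IDs, field names, and lookups here are all illustrative):

```python
# Hypothetical sketch: mixing a batch-computed historical aggregate with
# a live operational signal at inference time. All names are illustrative.

def historical_avg_rate(facility_id: str) -> float:
    """Stand-in for a warehouse aggregate (e.g. a Snowflake rollup)."""
    return {"fac-1": 88.0}.get(facility_id, 90.0)

def open_shifts_now(facility_id: str) -> int:
    """Stand-in for a live query (e.g. Postgres at inference time)."""
    return {"fac-1": 7}.get(facility_id, 0)

def pricing_features(facility_id: str) -> dict:
    # Both values are computed on demand, so the model never sees a
    # 24-hour-old snapshot of marketplace conditions.
    return {
        "avg_rate_90d": historical_avg_rate(facility_id),
        "open_shifts_now": open_shifts_now(facility_id),
    }

print(pricing_features("fac-1"))  # {'avg_rate_90d': 88.0, 'open_shifts_now': 7}
```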

  3. Intuitive feature development

Chalk's resolver architecture transformed how quickly Medely engineers could build features. Previously, building features from related data meant writing separate, complex queries across multiple tables. To calculate anything from a professional's job history, they manually joined professionals to jobs and computed the metric, a pattern repeated across dozens of features.

In Chalk, the team defined the relationship once: a professional has many jobs. Resolvers could then chain off that relationship: professional.booked_jobs gave them the entire history. From completed counts to cancellation rates, features that would've required separate Snowflake queries now flow naturally from that single relationship.
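
The pattern can be sketched in plain Python (Chalk's actual resolver syntax differs, and every name below is illustrative): once the has-many relationship exists, each derived feature is a short expression over it rather than its own hand-written join.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the "define the relationship once" pattern.
# Not Chalk's API; class and field names are hypothetical.

@dataclass
class Job:
    status: str          # e.g. "completed", "cancelled"
    charge_rate: float

@dataclass
class Professional:
    name: str
    # Has-many relationship, defined once: one professional, many jobs.
    jobs: list[Job] = field(default_factory=list)

    # Derived features chain off that single relationship instead of
    # each requiring its own SQL join.
    @property
    def completed_jobs(self) -> int:
        return sum(1 for j in self.jobs if j.status == "completed")

    @property
    def cancellation_rate(self) -> float:
        cancelled = sum(1 for j in self.jobs if j.status == "cancelled")
        return cancelled / len(self.jobs) if self.jobs else 0.0

pro = Professional("A. Nurse", [
    Job("completed", 95.0),
    Job("cancelled", 88.0),
    Job("completed", 102.5),
])
print(pro.completed_jobs)                # 2
print(round(pro.cancellation_rate, 2))   # 0.33
```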

The features are just flowing so naturally ... I've never experienced anything like that, including my time at Spotify.
Eric Simon, Staff Machine Learning Engineer

What used to take dedicated queries and pipeline coordination now happens in minutes.

Outcomes

From the first model Medely deployed with Chalk, the ROI was clear. The charge rate recommendation model is projected to generate $800K in annual net revenue through improved margins on job placements, enabled by real-time computation and self-serve infrastructure.

The first product we ever deployed with Chalk paid for our team, probably more, in net revenue.
Eric Simon, Staff Machine Learning Engineer

Beyond the immediate revenue impact, Chalk transformed how quickly Medely's ML team could operate:

  • Feature freshness: from daily batch features written to Redis (24-hour lag) to Postgres queries at inference time
  • Feature development: from separate Snowflake queries with manual joins plus Terraform/pipeline coordination to relationships defined once, with features flowing from simple resolver chains
  • Experiment velocity: from ~2-month experiment cycles to 2-week sprint cycles
  • Data quality: from BI, ML, and Product teams calculating features differently to unified definitions across teams

Faster development and real-time features enable Medely to better match nurses and doctors to urgent shifts, filling critical healthcare needs more efficiently.

Looking Ahead

Medely's 2026 planning revealed how central Chalk had become. The team prioritized "Invest in Chalk" as a top initiative focused on unlocking more platform capabilities. Multiple models are now in experimentation for both pricing and matching use cases.

With Chalk as its foundation, Medely is building toward:

  • Smarter professional-facility matching: Match medical professionals to shifts based on preferences and performance
  • Rates with administrator control: Human-in-the-loop pricing where facility managers fine-tune ML recommendations
  • Unified training and serving: Eliminating the current translation work by using Chalk for both model development and production

Adopting Chalk is the biggest singular win I have had as an ML engineer at this company.
Eric Simon, Staff Machine Learning Engineer

Build faster with Chalk
See what Chalk can do for your team.