Free & Open Source

Neo Cloud GPU Price Tracker

Compare GPU rental prices across neo cloud providers in one place. Updated nightly. Built for AI engineers, ML researchers, and platform teams choosing where to train, fine-tune, or serve.

No sign-up, no email gate, no vendor lock-in. Open source under Apache 2.0.

Updated nightly

Scrapers run every night against each provider's published pricing, so the dashboard reflects current rates — not last quarter's.

Multi-provider comparison

Side-by-side pricing for H100, H200, A100, L40S, B200, and more across CoreWeave, RunPod, Lambda Labs, Nebius, Crusoe, and Denvr — with no vendor lock-in.

Historical price charts

Track how each GPU's hourly rate has trended over time. Useful for forecasting training budgets and timing when to pre-buy capacity.

Providers tracked

Six neo cloud providers today, with more added as their pricing pages stabilize. Pricing data refreshes nightly.

CoreWeave, RunPod, Lambda Labs, Nebius, Crusoe, Denvr

Want a provider added? Open an issue

How it works

A simple, transparent pipeline — public pricing pages in, normalized JSON out, dashboard on top.

1. Scrape

One Python scraper per provider, run nightly via scheduled job.
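As an illustration, a per-provider scraper can be a few lines of parsing over the fetched page. This is a minimal sketch: the HTML fragment, prices, and field names below are made up for the example, and a real scraper would fetch the live pricing page (e.g. with `requests`) before parsing.

```python
import re

# Hypothetical fragment standing in for one provider's pricing page.
SAMPLE_HTML = """
<tr><td>H100</td><td>us-east-1</td><td>$3.49/hr</td></tr>
<tr><td>A100</td><td>us-east-1</td><td>$1.89/hr</td></tr>
"""

# One <td> each for GPU model, region, and hourly price.
ROW = re.compile(
    r"<td>(?P<gpu>[^<]+)</td><td>(?P<region>[^<]+)</td>"
    r"<td>\$(?P<price>[\d.]+)/hr</td>"
)

def scrape(html: str) -> list[dict]:
    """Extract raw (gpu, region, price) rows from a pricing page."""
    return [
        {"gpu": m["gpu"], "region": m["region"], "usd_hr": float(m["price"])}
        for m in ROW.finditer(html)
    ]

rows = scrape(SAMPLE_HTML)
```

In practice each provider's page has its own markup, which is why the project keeps one scraper per provider rather than a generic parser.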

2. Normalize

Scraper output is merged into a canonical schema: GPU model, region, $/hr, and on-demand vs. reserved.
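The merge step can be sketched as one small mapping function per provider, all emitting the same canonical record shape. The raw field names and prices below are invented for illustration; each real provider's scraper output looks different.

```python
# Hypothetical raw records as two providers' scrapers might emit them.
raw_a = {"gpu": "NVIDIA H100 SXM", "dc": "US-EAST", "hourly": 3.49, "plan": "ondemand"}
raw_b = {"model": "H100", "region": "us-east", "price_per_hr": 2.99, "reserved": True}

# Map each provider's GPU naming onto one canonical model name.
CANONICAL_GPUS = {"NVIDIA H100 SXM": "H100", "H100": "H100"}

def normalize_a(r: dict) -> dict:
    return {
        "gpu": CANONICAL_GPUS[r["gpu"]],
        "region": r["dc"].lower(),
        "usd_hr": r["hourly"],
        "commitment": "on-demand" if r["plan"] == "ondemand" else "reserved",
    }

def normalize_b(r: dict) -> dict:
    return {
        "gpu": CANONICAL_GPUS[r["model"]],
        "region": r["region"],
        "usd_hr": r["price_per_hr"],
        "commitment": "reserved" if r["reserved"] else "on-demand",
    }

records = [normalize_a(raw_a), normalize_b(raw_b)]
```

Once every provider's rows share this schema, comparison and charting become simple list operations.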

3. Publish

JSON snapshot pushed to a public endpoint; the dashboard reads from it directly.
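The snapshot itself can be as simple as the normalized records plus a date stamp. A sketch, with made-up record values and an in-memory buffer standing in for the public endpoint:

```python
import datetime
import io
import json

# Hypothetical normalized records from the previous step.
records = [
    {"gpu": "H100", "provider": "coreweave", "region": "us-east",
     "usd_hr": 3.49, "commitment": "on-demand"},
]

snapshot = {
    # A UTC date stamp marks each nightly run so readers can judge freshness.
    "scraped_at": datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d"),
    "records": records,
}

buf = io.StringIO()  # stands in for the public endpoint / object store
json.dump(snapshot, buf, indent=2)
payload = buf.getvalue()
```

Because the dashboard reads this JSON directly, anyone can consume the same snapshot without going through the UI.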

4. Compare

Filter by GPU model, sort by price, drill into a provider, view historical trend.
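With a shared schema, the core comparison is just filter-and-sort. A sketch over hypothetical records (providers and prices invented for the example):

```python
records = [
    {"gpu": "H100", "provider": "coreweave", "usd_hr": 4.25},
    {"gpu": "H100", "provider": "lambda", "usd_hr": 2.99},
    {"gpu": "A100", "provider": "runpod", "usd_hr": 1.64},
]

def cheapest(records: list[dict], gpu: str) -> list[dict]:
    """Filter by GPU model, then sort ascending by hourly price."""
    return sorted(
        (r for r in records if r["gpu"] == gpu),
        key=lambda r: r["usd_hr"],
    )

ranked = cheapest(records, "H100")
```

Historical trends fall out of the same data: keep each nightly snapshot and plot `usd_hr` over `scraped_at` per GPU and provider.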

Built by beCloudReady

Need a self-hosted vLLM or training setup on a neo cloud?

We help engineering teams pick the right GPU partner and stand up vLLM, Ray, or fine-tuning pipelines on their own infrastructure, without locking into a single hyperscaler.