Contents
- Cheapest A100 in US: A100 Pricing Overview
- Provider Comparison
- A100 PCIe vs SXM
- Regional Pricing Differences
- What to Look For
- FAQ
- Related Resources
- Sources
Cheapest A100 in US: A100 Pricing Overview
This guide covers the cheapest A100 options in the US. A100 pricing varies by provider and by form factor (PCIe vs SXM). Rates as of March 2026:
RunPod:
- A100 PCIe: $1.19/hour
- A100 SXM: $1.39/hour
Lambda Cloud:
- A100: $1.48/hour
CoreWeave:
- 8x A100 cluster: $21.60/hour ($2.70/GPU)
RunPod wins on per-GPU cost: PCIe at $1.19, SXM at $1.39. CoreWeave targets multi-GPU training with coordinated 8x clusters, but its bundled pricing works out to a higher per-GPU rate ($2.70).
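The per-GPU math above can be sketched in a few lines. This is an illustrative snapshot built from the rates quoted in this guide, not live pricing:

```python
# Rate table from the figures quoted above (March 2026); illustrative only.
# Each entry: (hourly rate in USD, number of GPUs in the offer).
RATES = {
    "RunPod A100 PCIe":  (1.19, 1),
    "RunPod A100 SXM":   (1.39, 1),
    "Lambda Cloud A100": (1.48, 1),
    "CoreWeave 8x A100": (21.60, 8),
}

def per_gpu_rate(hourly: float, gpus: int) -> float:
    """Normalize an offer to a per-GPU hourly rate."""
    return hourly / gpus

# Print offers from cheapest to most expensive per GPU.
for name, (hourly, gpus) in sorted(RATES.items(),
                                   key=lambda kv: per_gpu_rate(*kv[1])):
    print(f"{name}: ${per_gpu_rate(hourly, gpus):.2f}/GPU-hour")
```

Normalizing to a per-GPU hourly rate is what makes the single-GPU offers and CoreWeave's 8x bundle directly comparable.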
Provider Comparison
Three main providers dominate:
RunPod: Marketplace model, community pricing. Connects users with GPU owners and data centers. Lower rates. $1.19 PCIe, $1.39 SXM. Best for flexible work.
Lambda Cloud: Managed production. Predictable pricing. $1.48/hour. Includes dedicated hardware and support. Premium covers SLA guarantees.
CoreWeave: Multi-GPU specialist. 8x A100 at $21.60/hour. Coordinated clusters. Single GPU? Not competitive. Distributed training? Yes.
A100 PCIe vs SXM
The two A100 variants serve different use cases:
PCIe Models connect to servers via PCIe lanes, which limits bandwidth and introduces latency. In exchange, PCIe pricing runs roughly 10-15% below SXM. Suitable for inference, single-GPU training, and applications tolerant of inter-GPU communication delays.
SXM Models use high-bandwidth SXM interconnects, enabling multi-GPU training at scale. The roughly 17% price premium over PCIe (at the RunPod rates above) reflects superior memory bandwidth and NVLink support. Production training pipelines commonly standardize on SXM variants to avoid communication bottlenecks.
RunPod's pricing reflects this differentiation directly: PCIe models cost less, SXM models cost more. For budget-conscious workloads, PCIe A100 instances deliver 40 GB memory at lower rates. For research teams conducting distributed training, SXM models justify their premium through throughput gains.
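The PCIe discount and SXM premium follow directly from the two RunPod rates quoted above; a quick check:

```python
# RunPod rates quoted above (March 2026); illustrative only.
PCIE_HOURLY = 1.19
SXM_HOURLY = 1.39

# Premium: how much more SXM costs relative to PCIe.
premium = (SXM_HOURLY - PCIE_HOURLY) / PCIE_HOURLY
# Savings: how much cheaper PCIe is relative to SXM.
savings = (SXM_HOURLY - PCIE_HOURLY) / SXM_HOURLY

print(f"SXM premium over PCIe: {premium:.0%}")
print(f"PCIe savings vs SXM:   {savings:.0%}")
```

Note the two percentages differ because they use different bases: the same $0.20/hour gap is a ~17% premium on PCIe's price but a ~14% saving off SXM's.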
Regional Pricing Differences
A100 pricing in the US varies by region and data center density:
East Coast providers (RunPod, Lambda Cloud nodes in Virginia, North Carolina) typically match or beat West Coast rates due to abundant capacity. Pricing stabilizes around $1.19-$1.48 per hour.
West Coast clusters (California, Oregon) show occasional premiums during peak demand, though major providers maintain consistent US-wide pricing tiers to simplify purchasing decisions.
Secondary markets in Dallas, Chicago, and other interior hubs increasingly offer competitive A100 access as providers expand infrastructure footprint. Buyers willing to use non-optimal geographic locations can occasionally negotiate discounts on reserved capacity.
For latency-sensitive applications, geographic proximity matters less than for bandwidth-heavy training. Many teams with US-only data sovereignty requirements split workloads across regions to minimize costs while respecting compliance boundaries.
What to Look For
When evaluating A100 pricing, look beyond hourly rates:
Commitment discounts lower effective costs 20-40% on 1-month or longer reservations. Reserved instances on RunPod and Lambda Cloud reduce A100 PCIe from $1.19 to approximately $0.95 per hour.
Spot pricing (RunPod's primary model) introduces variability. Reliable workloads should budget 1.5x the advertised hourly rate or use secondary-tier GPUs for baseline capacity.
Data transfer costs often appear in fine print. Ingesting large datasets incurs bandwidth charges that sometimes exceed compute costs for proof-of-concept work.
Memory and interconnect determine real-world performance. A100 PCIe and SXM both offer 40 GB memory, but applications sensitive to inter-GPU latency (distributed training, large batch processing) depend on SXM's NVLink connectivity.
Visit /gpu-pricing-guide for broader market context and /a100-specs for technical specifications.
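The cost levers above (reserved discounts, spot buffers) can be combined into a simple monthly estimator. A minimal sketch using the rates and rules of thumb from this section; the 730 hours/month figure and the helper itself are assumptions for illustration:

```python
def monthly_cost(hourly: float,
                 hours_per_month: float = 730,  # ~average hours in a month
                 reserved_discount: float = 0.0,
                 spot_buffer: float = 1.0) -> float:
    """Estimate a monthly bill from an hourly rate, applying an optional
    reserved-capacity discount and an optional spot-variability buffer
    (e.g. the 1.5x budgeting rule suggested above)."""
    return hourly * (1 - reserved_discount) * spot_buffer * hours_per_month

# On-demand A100 PCIe at $1.19/hour, running a full month:
print(f"On-demand: ${monthly_cost(1.19):,.2f}")
# Reserved at ~20% off (approximating the $0.95 effective rate quoted above):
print(f"Reserved:  ${monthly_cost(1.19, reserved_discount=0.20):,.2f}")
# Spot, budgeted at 1.5x the advertised rate:
print(f"Spot (1.5x buffer): ${monthly_cost(1.19, spot_buffer=1.5):,.2f}")
```

This also does not model the data-transfer charges mentioned above, which are billed separately and per-GB rather than per-hour.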
FAQ
What's the absolute cheapest A100 available in the US?
RunPod's A100 PCIe at $1.19/hour is the lowest public rate as of March 2026. Using reserved capacity discounts can push effective rates to $0.95/hour, but spot pricing variability should be factored into production workload budgets.
Can I reserve an A100 in advance for lower rates?
Yes. RunPod and Lambda Cloud both offer 1-month and longer reservations with discounts in the 20-40% range. Commitment periods lock in rates but reduce flexibility.
Is A100 PCIe sufficient for training, or do I need SXM?
For single-GPU training and most inference workloads, A100 PCIe is adequate. Distributed multi-GPU training benefits from SXM's NVLink, but PCIe models handle sequential or loosely coupled tasks well at lower cost.
How does RunPod's pricing compare to managed platforms like AWS?
RunPod undercuts AWS by approximately 40-50% on A100 instances. AWS charges approximately $2.00-$2.40 per hour for equivalent A100 access, making RunPod the default choice for cost-conscious teams without AWS lock-in requirements.
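The 40-50% figure can be verified against the rates quoted in this answer:

```python
# Rates quoted above: RunPod A100 PCIe vs the approximate AWS range.
RUNPOD_PCIE = 1.19
AWS_RANGE = (2.00, 2.40)

for aws_hourly in AWS_RANGE:
    savings = 1 - RUNPOD_PCIE / aws_hourly
    print(f"vs AWS at ${aws_hourly:.2f}/hr: {savings:.1%} cheaper")
```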
Related Resources
- /gpu-pricing-guide: broader GPU market pricing context
- /a100-specs: A100 technical specifications
Sources
- RunPod official pricing: https://www.runpod.io/gpu-pricing
- Lambda Cloud GPU pricing: https://cloud.lambdalabs.com/instances
- CoreWeave official documentation: https://www.coreweave.com/pricing