Contents
- AMD MI350X Price: MI350X Specifications
- MI350X cloud availability: sparse and growing
- Price comparison: MI350X vs NVIDIA alternatives
- MI350X vs H200: memory matters
- When to choose MI350X
- FAQ
- Related Resources
- Sources
AMD MI350X Price: MI350X Specifications
AMD released the MI350X in late 2025. It upgrades the MI300X with ~288GB of HBM3E memory (up from 192GB of HBM3) and roughly 1.5x faster tensor throughput, targeting H200-class workloads.
The main win is memory capacity and bandwidth. Inference on large models is bandwidth-limited, not compute-limited, and the MI350X delivers on both. The jump from 192GB to ~288GB is meaningful for models that exceed MI300X capacity.
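As a back-of-envelope check on why the capacity jump matters, here is a sketch estimating whether a model's weights fit in a single accelerator's HBM. The capacities come from the text above; the formula ignores KV cache, activations, and framework overhead, so real headroom requirements are higher.

```python
# Rough weight-footprint estimate; ignores KV cache and activations,
# so treat the results as optimistic lower bounds.

def weight_gb(params_b: float, bits: int) -> float:
    """Approximate weight size in GB for params_b billion parameters at a given precision."""
    return params_b * 1e9 * bits / 8 / 1e9

MI300X_GB, MI350X_GB = 192, 288  # capacities quoted in the text

for params_b, bits in [(70, 16), (120, 16), (405, 4)]:
    gb = weight_gb(params_b, bits)
    print(f"{params_b}B @ {bits}-bit ≈ {gb:.0f} GB  "
          f"| fits MI300X: {gb <= MI300X_GB} | fits MI350X: {gb <= MI350X_GB}")
```

A 120B model at 16-bit (~240 GB) is exactly the kind of workload that spills past 192GB but fits in ~288GB.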
Power consumption stays under 700W, which keeps efficiency competitive. Manufacturing uses chiplets, which introduces yield variability, and supply is constrained through Q2 2026.
MI350X cloud availability: sparse and growing
DigitalOcean launched MI350X availability in 2026. Pricing as of March 2026: $4.40/hr for single GPU (288GB), $35.20/hr for 8-GPU pod (2,304GB).
Lambda Labs is testing the MI350X but has not launched it publicly. Likely coming Q2 2026.
Vast.ai's peer-to-peer market has few MI350X owners, so availability is sporadic. When units do appear, prices undercut NVIDIA by 20-30%.
AWS steers accelerator customers toward its own custom Trainium and Inferentia silicon; no MI350X availability has been announced.
Google Cloud has no MI350X availability announced. Likely 2027 at earliest.
As of March 2026, DigitalOcean is the primary cloud provider offering MI350X for production workloads.
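A quick sanity check on the DigitalOcean prices quoted above: the 8-GPU pod rate is exactly eight times the single-GPU rate, so there is no pod-level premium or discount, and a full month of either is easy to project (730 hours for an average month is an assumption here).

```python
# Monthly-cost projection from the quoted DigitalOcean rates.
single_hr, pod_hr = 4.40, 35.20
hours_per_month = 730  # average hours in a month (assumption)

print(f"per-GPU rate inside the pod: ${pod_hr / 8:.2f}/hr")
print(f"1x MI350X for a month: ${single_hr * hours_per_month:,.0f}")
print(f"8-GPU pod for a month: ${pod_hr * hours_per_month:,.0f}")
```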
Price comparison: MI350X vs NVIDIA alternatives
| Chip | $/hr | Memory | Use case |
|---|---|---|---|
| MI350X (DigitalOcean) | $4.40 | ~288GB HBM3 | LLM inference & training |
| H200 (RunPod) | $3.59 | 141GB HBM3e | LLM training, long-context |
| H100 SXM (RunPod) | $2.69 | 80GB HBM3 | LLM training, inference |
| A100 80GB (RunPod) | $1.19-1.39 | 80GB HBM2e | General ML |
MI350X carries far more memory than the H100 (~288GB vs 80GB) or the H200 (141GB), with tensor throughput similar to the H200. Its hourly rate is higher than the H200's, but it is cheaper per GB of HBM when memory capacity and bandwidth are the bottleneck.
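Normalizing the table by memory makes that trade-off concrete. A sketch computing cost per GB-hour from the rates above (the A100 uses the midpoint of its quoted range):

```python
# Cost per 100 GB of HBM per hour, using the rates and capacities from the table.
chips = {
    "MI350X (DigitalOcean)": (4.40, 288),
    "H200 (RunPod)":         (3.59, 141),
    "H100 SXM (RunPod)":     (2.69, 80),
    "A100 80GB (RunPod)":    (1.29, 80),  # midpoint of $1.19-1.39
}
for name, (usd_hr, mem_gb) in chips.items():
    print(f"{name}: ${usd_hr / mem_gb * 100:.2f} per 100 GB-hr")
```

By this measure the MI350X (~$1.53 per 100 GB-hr) undercuts the H200 (~$2.55) and H100 (~$3.36), which is the sense in which it is "cheaper" for memory-bound work.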
MI350X vs H200: memory matters
The H200 offers comparable tensor throughput, but the memory configurations are far apart: 141GB versus ~288GB. MI350X also wins on power efficiency and cost per GB. 70B models? Both handle them fine. A 405B model? Quantized to 4-bit (~203GB of weights), it fits on a single MI350X; the H200 requires multi-GPU distribution.
Large model inference is bandwidth-bound, not compute-bound. MI350X solves the bandwidth problem.
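The bandwidth-bound claim can be made concrete. During autoregressive decode, every generated token must stream the full weight set from HBM, so an upper bound on single-stream throughput is bandwidth divided by weight bytes. The bandwidth figures below are illustrative assumptions, not vendor-verified specs.

```python
# Roofline-style upper bound on decode throughput for a bandwidth-bound LLM:
# tokens/s ≈ memory bandwidth / bytes of weights read per token.

def decode_tokens_per_s(bandwidth_tb_s: float, params_b: float, bits: int) -> float:
    weight_bytes = params_b * 1e9 * bits / 8
    return bandwidth_tb_s * 1e12 / weight_bytes

# 70B model at 8-bit => 70 GB of weights streamed per generated token
for bw in (8.0, 4.8):  # assumed HBM bandwidths in TB/s, for illustration only
    print(f"{bw} TB/s -> ~{decode_tokens_per_s(bw, 70, 8):.0f} tok/s upper bound")
```

Batching amortizes the weight reads, so real serving throughput is higher, but the per-stream ceiling is set by bandwidth, not FLOPs.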
When to choose MI350X
Choose MI350X when:
- The workload needs 200GB+ HBM per accelerator
- Cost optimization matters more than maximum throughput
- Long-context LLM inference dominates the workload
- Inference runs at 4-bit or 8-bit quantization
Skip MI350X when:
- Maximum raw performance is required (H200 is stronger)
- Availability and support are critical (H100 SXM is more mature)
- The team only knows CUDA (MI350X requires ROCm)
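The rules of thumb above can be sketched as a toy decision helper. The 200GB threshold comes from the list; the function name and structure are illustrative, not a real sizing tool.

```python
# Toy encoding of the choose/skip checklists above; thresholds are rules of thumb.

def suggest_accelerator(memory_gb_needed: float,
                        cuda_only_team: bool,
                        needs_max_throughput: bool) -> str:
    if cuda_only_team:
        return "H100/H200"   # MI350X requires ROCm; skip if the team only knows CUDA
    if needs_max_throughput:
        return "H200"        # maximum raw performance required
    if memory_gb_needed >= 200:
        return "MI350X"      # 200GB+ HBM per accelerator favors MI350X
    return "either"          # below the threshold, let price decide

print(suggest_accelerator(240, cuda_only_team=False, needs_max_throughput=False))
```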
The CUDA ecosystem is stronger overall. PyTorch runs on both, and Hugging Face models work on ROCm. But CUDA has more community code, more framework integrations, and broader support.
FAQ
Q: Is MI350X actually available to rent right now? In limited quantities. DigitalOcean offers it for production workloads and CoreWeave has inventory; other providers are coming. Full availability is expected by end of Q2 2026. Check provider docs for current stock.
Q: What's ROCm and does it work with my code? ROCm is AMD's compute platform. PyTorch and TensorFlow support it. Most models compile unchanged. Some CUDA-specific kernels need porting.
Q: How does MI350X compare to MI300X? MI350X: ~288GB HBM3E and roughly 1.5x the throughput. MI300X: 192GB HBM3 and lower clocks. MI350X is the newer generation with substantially more memory, and the performance uplift is noticeable for transformer inference, particularly large-context workloads.
Q: Should I wait for MI350X or use H100 now? If you can start with H100 today, do it. Waiting 2-3 months may not be worth it. If your workload specifically needs 200GB+ HBM, MI350X is worth the wait.
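One way to frame the wait-or-start question is cost at equivalent memory: reaching ~240GB of HBM takes three H100s today versus one MI350X later. A sketch using the rates from the comparison table (GPU counts via ceiling division; interconnect and sharding overheads ignored):

```python
# Compare renting enough H100s today vs a single MI350X for a ~240GB workload.
import math

needed_gb = 240
h100_count = math.ceil(needed_gb / 80)              # -> 3 GPUs at 80GB each
h100_cost_hr = h100_count * 2.69                    # RunPod rate from the table
mi350x_cost_hr = math.ceil(needed_gb / 288) * 4.40  # DigitalOcean rate

print(f"{h100_count}x H100: ${h100_cost_hr:.2f}/hr vs 1x MI350X: ${mi350x_cost_hr:.2f}/hr")
```

At these rates the single MI350X runs about 45% cheaper per hour, before counting the engineering cost of sharding across three GPUs.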
Q: Does CoreWeave support HIP (AMD's CUDA equivalent)? Yes. Full ROCm stack. HIP development supported. Documentation is good. Community smaller but growing.
Q: Can I run proprietary models on MI350X? Check the model's license. Open-source models work. Some proprietary licenses restrict which platforms you may deploy on, so read the terms carefully.
Related Resources
- AMD MI300X pricing
- AMD MI325X pricing
- NVIDIA H200 pricing
- CoreWeave GPU pricing
- GPU pricing comparison
Sources
- AMD MI350X Datasheet: https://www.amd.com/en/products/specifications/accelerators/instinct
- CoreWeave MI350X: https://www.coreweave.com/pricing
- ROCm Documentation: https://rocmdocs.amd.com/
- AMD Instinct: https://www.amd.com/en/products/accelerators/instinct