Contents
- Top AI Stocks in Core Infrastructure and Tools: Market Overview
- Infrastructure Tier Stocks
- Platform and Cloud Tier Stocks
- Application Tier Stocks
- Market Dynamics and Catalysts
- Capital Expenditure Trends
- Valuation Framework
- Portfolio Construction
- FAQ
- Related Resources
- Sources
Top AI Stocks in Core Infrastructure and Tools: Market Overview
The top AI stocks in core infrastructure and tools split into two distinct tiers: the infrastructure that runs models, and the applications that use them.
Tier 1 (Infrastructure) is capital-intensive but high-margin. NVIDIA (NVDA), AMD (AMD), and Marvell (MRVL) design chips. Data center providers — CoreWeave (CRWV, IPO'd March 2025 at $40/share) and Nebius (NBIS, Nasdaq) — rent access. These companies benefit from rising aggregate compute demand for training and inference. Winner-take-most dynamics: whoever builds the fastest, cheapest chip captures the market.
Tier 2 (Platforms and Applications) consumes the infrastructure. Microsoft, Google, Amazon, Meta, OpenAI. They build the LLMs, APIs, and services. Their business model depends on cost-per-token falling faster than revenue-per-token grows, which incentivizes buying cheaper hardware from Tier 1 companies.
This analysis focuses on publicly traded companies (except where private companies like OpenAI are material context). Data as of March 2026.
Infrastructure Tier Stocks
NVIDIA (NVDA)
Market Cap: ~$3.2 trillion (as of March 2026)
Q4 2025 Revenue: $40.1B (73% YoY growth)
GPU Market Share: 80%+ of AI chips
Fair Value Estimate: $240 (currently trades 24% below)
NVIDIA is the clear leader. H100, H200, B200: the GPUs every AI company buys. Supply remains constrained through 2026 despite competitors ramping. Architecture roadmap (Blackwell, Rubin) is 18+ months ahead of AMD.
CUDA ecosystem lock-in is powerful. Every LLM framework (PyTorch, JAX, Hugging Face) is optimized for NVIDIA. Switching costs are real for teams with millions of lines of CUDA code.
Catalysts:
- H200 and B200 full ramp (higher margins than H100)
- New architecture launches (Rubin in H1 2026, Spartan in H2)
- Demand from OpenAI, Google, Meta capital spends (see CapEx section)
- Custom data center silicon (AWS Trainium, Google TPU) still not approaching NVIDIA performance, limiting competitive erosion
Risks:
- AMD gaining inference workload share (Instinct MI300X competitive on cost)
- Open-source alternatives (llama.cpp, vLLM) reducing API dependency and shifting load from cloud to edge
- China ban on advanced exports limiting total addressable market
Valuation: At a $240 fair value, the stock trades about 24% below analyst consensus, implying roughly 30% upside to fair value. Near-term growth is already priced in; longer-term multiple expansion depends on the Blackwell adoption rate in H2 2026.
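A quick sketch of the arithmetic behind these fair-value figures. Note that trading 24% below fair value implies more than 24% upside to reach it, which is why "X% undervalued" and "X% upside" are not interchangeable:

```python
def implied_price_and_upside(fair_value: float, discount: float) -> tuple[float, float]:
    """Given a fair-value estimate and the discount the stock trades at,
    return the implied current price and the upside back to fair value."""
    price = fair_value * (1 - discount)
    upside = fair_value / price - 1
    return price, upside

# NVDA: $240 fair value, trading 24% below it.
price, upside = implied_price_and_upside(240.0, 0.24)
print(f"Implied price: ${price:.2f}, upside to fair value: {upside:.1%}")
```

A 24% discount to the $240 target implies roughly 31.6% upside, not 24%.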
AMD (AMD)
Market Cap: ~$250 billion
Q3 2025 Revenue: $9.2B (36% YoY growth)
GPU Market Share: 8-12% (growing)
Estimated Fair Value: $185 (currently 15% undervalued)
AMD's Instinct MI300X is competitive on cost and performance for inference workloads. Not as fast as H100 for training, but close enough and 20-30% cheaper. Meta, OpenAI, and other hyperscalers are buying MI300X clusters to reduce NVIDIA dependency.
The MI325X (follow-up) is shipping now (Q1 2026). Performance targets are closer to H200 than H100, which tightens the gap further.
AMD's EPYC server CPUs are also gaining share in AI data centers, particularly for inference preprocessing and orchestration.
Catalysts:
- MI325X adoption acceleration (Meta is a large customer)
- EPYC penetration in AI infrastructure
- Open-source inference frameworks (vLLM, llama.cpp) being optimized for AMD HIP
- Hyperscaler custom chips (AWS Trainium, Google TPU) still lagging AMD and NVIDIA on performance, reducing the in-house silicon threat
Risks:
- Software maturity lag vs CUDA. HIP and ROCm are catching up but still fragmented
- NVIDIA's architecture lead remains 18+ months ahead
- Concentration risk: Meta and OpenAI are large customers. Loss of one contract impacts revenue materially
Valuation: AMD is trading at a modest discount. Upside is more limited than NVIDIA (AMD already traded up 36% in 2025), but the company remains a solid infrastructure play for portfolio diversification.
Broadcom (AVGO)
Market Cap: ~$290 billion
Broadcom's AI Position: Networking and high-speed interconnect
Broadcom supplies the high-speed networking equipment that links GPUs in data center clusters. When a hyperscaler builds a 1,000-GPU training cluster, Broadcom provides the switch ASICs that connect them.
AI cluster networking is a $3-4B annual TAM and growing 40% YoY. Broadcom has 60%+ market share.
Broadcom is less volatile than NVIDIA or AMD because customers must buy these components regardless of GPU choice. It's a quiet infrastructure winner.
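Extrapolating the stated $3-4B TAM at 40% YoY is simple compounding. A straight-line projection (an illustration of the growth claim above, not a forecast):

```python
# Straight extrapolation of the $3-4B AI cluster networking TAM at 40% YoY.
tam_low, tam_high, growth = 3.0, 4.0, 0.40  # $B and growth rate, from the figures above

for year in range(1, 4):
    lo = tam_low * (1 + growth) ** year
    hi = tam_high * (1 + growth) ** year
    print(f"Year {year}: ${lo:.1f}B - ${hi:.1f}B")
```

At that rate the market roughly doubles in two years, which is what makes a 60%+ share position attractive even at modest absolute dollars today.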
CoreWeave (CRWV)
IPO: March 2025 at $40/share (Nasdaq)
Business: Specialized GPU cloud infrastructure for AI training and inference
CoreWeave is the largest independent GPU cloud provider, operating purpose-built data centers optimized for dense GPU deployments. The company went public in March 2025 at $40/share. Revenue comes from renting H100, H200, and B200 GPU clusters to AI labs, research teams, and enterprises.
CoreWeave targets teams needing large-scale compute (8-GPU bundles minimum) for distributed training. This positions it between hyperscalers (AWS, Azure) and small GPU rental platforms (RunPod, Lambda Labs).
Investment thesis: Pure-play AI infrastructure exposure. Revenue grows with aggregate AI compute demand. Risk is margin compression from hyperscaler competition.
Nebius (NBIS)
Ticker: NBIS on Nasdaq
Business: GPU cloud and AI infrastructure (formerly Yandex Cloud international operations)
Nebius offers flexible per-GPU rental (individual GPUs from A10 to H100) targeting startups and mid-market AI teams. The company separated from Yandex and listed independently on Nasdaq.
Investment thesis: Lower-cost, more flexible alternative to CoreWeave for cost-sensitive teams. Higher utilization rates than CoreWeave due to individual GPU pricing. Lower growth ceiling.
Platform and Cloud Tier Stocks
Microsoft (MSFT)
Market Cap: ~$3.8 trillion
FY2026 Revenue Guidance: $260B+ (20% growth)
Azure AI Revenue: Estimated $15-20B (based on earnings disclosures)
Fair Value Estimate: $600 (currently 32% undervalued)
Microsoft bet heavily on OpenAI early ($1B initial investment in 2019, since expanded to a multibillion-dollar stake, plus a preferred cloud partnership). ChatGPT integration into Copilot, Office, and Azure drives large-scale adoption.
The moat: Copilot Pro and Copilot for Microsoft 365 embed AI into tools every knowledge worker uses. Switching cost is high because the AI is now core workflow.
Azure's capex spend on AI is ramping. Microsoft committed $80-100B to AI infrastructure through 2026. That capital goes to GPUs, networking, and data centers. Some goes to NVIDIA, some to AMD, some to custom silicon (Maia). All of it is margin-accretive: cloud infrastructure is high-margin after amortization.
Catalysts:
- Copilot adoption across teams (many large customers are still in early rollout phases)
- New GPT releases and price drops (OpenAI's efficiency gains flow to Azure margins)
- Office 365 AI features (Copilot in Word, Excel) driving seat upgrades
- Azure OpenAI Service becoming standard for regulated industries (HIPAA BAA, FedRAMP)
Risks:
- Google and Amazon closing the gap on AI services integration
- OpenAI's partnership flexibility (OpenAI can reduce its Azure dependency if its own inference efficiency improves)
- Antitrust scrutiny on market dominance
Valuation: At $600 fair value, the stock is 32% undervalued. Microsoft has benefited from broadening AI adoption, but valuation doesn't leave much room for disappointment.
Alphabet/Google (GOOGL, GOOG)
Market Cap: ~$2.3 trillion
2026 CapEx Guidance: $175-185B (up 73% YoY)
AI Revenue: Search + Cloud + Other (estimated $100B+)
Google has the most capital spending of any AI company. $175-185B in 2026. Most goes to data centers for training and serving Gemini and other models.
Google designs its own TPUs (Tensor Processing Units), which compete with NVIDIA GPUs on inference and some training workloads. In-house silicon is higher margin than buying NVIDIA.
The moat: Google's data (Search, YouTube, Gmail) is unmatched. Their models benefit from more training data. Competition with OpenAI is real, but Google's distribution (Android, Gmail, YouTube) is broader.
Catalysts:
- Gemini model improvements and wider adoption
- TPU custom silicon gaining inference share
- YouTube's search-to-AI transition (Gemini-powered search suggestions)
- Vertex AI platform adoption (large-scale ML platform competing with Azure ML, AWS SageMaker)
Risks:
- Regulatory pressure on market dominance (search antitrust case ongoing)
- OpenAI gaining ground in business applications
- Alphabet's core search business under threat from AI-native competitors (Perplexity AI)
Valuation: Alphabet is trading at a more modest valuation than Microsoft. The $175B+ capex spend is priced in, but there's room for upside if EBITDA from that capex is higher than expected.
Amazon (AMZN)
Market Cap: ~$2.1 trillion
AWS AI Services Revenue: Estimated $8-10B (growing 40% YoY)
AWS CapEx: Part of Amazon's broader $80-100B infrastructure spend
AWS is the largest cloud provider, and AI is increasingly central to AWS's value proposition. SageMaker (ML platform), Bedrock (LLM API), and Trainium/Inferentia (custom silicon) form the AWS AI stack.
AWS's advantage: existing customer relationships. 60% of large teams already use AWS. Adding AI services to existing workloads is lower friction than switching clouds.
AWS custom silicon (Trainium, Inferentia) is less advanced than NVIDIA, but improving. Cost advantage is material for inference workloads.
Catalysts:
- Bedrock adoption acceleration (LLM consumption through AWS)
- Trainium and Inferentia ramp (reduce NVIDIA dependency, improve margins)
- Anthropic partnership (deep discounts, with AWS as Anthropic's primary cloud and training partner, analogous to Microsoft-OpenAI)
- AWS AI services reaching $20B+ ARR
Risks:
- Microsoft's tight OpenAI integration (Azure-exclusive early access to new models)
- Generative AI cannibalization of legacy AWS services (if AI handles tasks that used to require full app development)
Valuation: Amazon is trading at a fair valuation. Upside depends on AWS AI services ramping faster than expected.
Meta (META)
Market Cap: ~$1.6 trillion
2026 CapEx Guidance: $115-135B (73% YoY increase)
Meta AI Revenue: Part of advertising business, increasingly from proprietary models
Meta is spending massive capital on AI infrastructure ($115-135B in 2026). But Meta is not selling AI services; it's using AI internally for recommendation systems, content generation, and ads targeting.
The capital is a cost drag on near-term earnings. But longer-term, AI efficiency gains in recommendation and ads targeting should drive better margins and lower churn.
Meta also releases the open-source Llama model family (no direct revenue, but strategically important). Llama adoption reduces the ecosystem's reliance on closed cloud APIs: teams can run Llama locally instead of paying per-token fees.
Catalysts:
- Generative AI improving ads targeting (less ad waste, higher conversion)
- Llama model adoption spreading (solidifies Meta as AI company beyond advertising)
- Recommendation AI reducing content moderation costs
Risks:
- CapEx increase not translating to revenue/earnings growth (cost center, not profit center)
- Regulatory pressure on data usage for training
- Open-source models cannibalizing Meta's proprietary AI advantage
Valuation: Meta is trading 24% undervalued at $850 fair value. The heavy capex spend is a near-term headwind, but long-term ROI could be material if ads targeting improves materially.
Application Tier Stocks
NVIDIA's Vertical Integration
Key Point: NVIDIA is not just a hardware company anymore. The ecosystem of frameworks, libraries, and applications built on top of CUDA is itself valuable.
Companies like Hugging Face, Anthropic, and Stability AI are not publicly traded, but their activity is implicitly captured through NVIDIA's ecosystem dominance. Looking at the notable private names directly:
Anthropic (Private; last reported valuation $61.5B, March 2025): OpenAI's primary competitor. Strong technical team. Heavy focus on AI safety and interpretability. Not public yet, but an IPO in 2026-2027 would be a major event.
Stability AI (Private): Develops the open-source Stable Diffusion image-generation models. Lower valuation than Anthropic or OpenAI, but growing. An open-source-centered business model carries more monetization risk than a proprietary one.
Public AI Application Stocks
Palantir (PLTR): Defense and data analytics software using AI. Government customers (reduced churn). Valuation is rich, but recurring revenue is solid.
CrowdStrike (CRWD): Security software increasingly powered by AI (endpoint detection, threat analysis). Growing rapidly. AI is part of a larger security platform.
Salesforce (CRM): CRM platform with Einstein AI copilot. Large installed base, high switching cost. AI is a feature, not the business.
None of these are pure-play AI application companies, and that diversification makes them lower risk than betting on a single LLM provider.
Market Dynamics and Catalysts
Training vs Inference Workload Shift
In 2024-2025, training dominated capex (data centers needed massive H100 clusters). In 2026, the inference workload is ramping faster. Inference is where efficiency gains matter: cost-per-token falls as batch sizes scale and quantization improves.
This favors AMD (MI300X is competitive for inference), custom silicon (AWS Trainium, Google TPU), and software companies making inference efficient (vLLM, llama.cpp).
NVIDIA remains king, but the competitive dynamic is shifting. More revenue from inference means lower ASP (average selling price) and margin compression.
Open-Source and Decentralization
Llama 2 and DeepSeek R1 (open-source) are shifting some inference load away from cloud APIs. Teams running Ollama or llama.cpp on local hardware don't buy cloud compute.
This is net-negative for Microsoft, Google, and Amazon cloud services. It's net-positive for hardware companies that sell to on-prem deployments.
Long-term, this trend continues: local models improve, regulatory pressure on data privacy increases, and decentralization accelerates. Bet on infrastructure (NVIDIA, AMD), not cloud APIs.
Capital Expenditure Trends
Hyperscaler CapEx (2026 Guidance)
| Company | 2026 CapEx Guidance | % of Revenue | YoY Growth |
|---|---|---|---|
| Alphabet/Google | $175-185B | ~30% | 73% YoY increase |
| Meta | $115-135B | ~27% | 73% YoY increase |
| Microsoft | $80-100B | ~30% | Growing (specific guidance less public) |
| Amazon | ~$80-100B | ~25% | Growing (spreads across AWS, retail ops, logistics) |
Total capex from the big 4: ~$450-520B in 2026. Most goes to data centers, but only part of data center capex goes to accelerators; the share flowing to GPU purchases specifically is estimated at roughly $50-80B annually.
At $25,000-40,000 per H100-class accelerator, that's roughly 1.3-3.2M GPUs per year across all hyperscalers. NVIDIA ships an estimated 2-3M data center GPUs per year; the remainder comes from AMD (growing), custom silicon, and older inventory.
Implication: Aggregate AI infrastructure spend is accelerating, but NVIDIA's share is being diluted by competition and custom silicon. Good for the industry, not necessarily great for NVIDIA stock in 2026.
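The capex-to-units arithmetic can be sketched explicitly. The $50-80B GPU spend comes from the text above; the $25k-40k per-accelerator price range is an assumption for illustration:

```python
# Back-of-envelope: annual hyperscaler GPU spend divided by an assumed
# per-accelerator price gives an implied unit count.
gpu_capex_low, gpu_capex_high = 50e9, 80e9   # $/yr flowing to GPU purchases
asp_low, asp_high = 25_000, 40_000           # assumed H100-class price range ($)

units_low = gpu_capex_low / asp_high         # conservative: low spend, high price
units_high = gpu_capex_high / asp_low        # aggressive: high spend, low price
print(f"Implied units: {units_low / 1e6:.2f}M - {units_high / 1e6:.2f}M per year")
```

The low and high ends bracket NVIDIA's estimated annual shipments, which is what leaves room for AMD and custom silicon to absorb incremental demand.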
Valuation Framework
Framework 1: GPU Attach Rate
How many GPUs does a hyperscaler buy per unit of inference throughput?
As inference-optimized silicon (H200, MI325X, custom chips) ships, the attach rate of premium H100s falls. A buyer with a fixed inference requirement might have bought 50k H100s in 2024; in 2026, the same requirement might be met with 30k H100s + 15k MI325X + 5k custom silicon.
NVIDIA's revenue per unit of inference throughput is falling, even if total throughput purchased is rising.
Stock implication: NVIDIA's growth rate will decelerate from 73% YoY (Q4 2025) to 30-40% in 2026-2027 as attach rates dilute. Analysts will have to re-rate the multiple down (lower growth = lower multiple), even if the business is healthy.
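The mix-shift example above can be made concrete by computing NVIDIA's unit share in each purchase mix:

```python
# Accelerator mixes from the worked example: same buyer, two purchase years.
mix_2024 = {"NVIDIA H100": 50_000}
mix_2026 = {"NVIDIA H100": 30_000, "AMD MI325X": 15_000, "custom silicon": 5_000}

def nvidia_unit_share(mix: dict[str, int]) -> float:
    """Fraction of total units in the mix that are NVIDIA parts."""
    nvidia = sum(n for name, n in mix.items() if name.startswith("NVIDIA"))
    return nvidia / sum(mix.values())

print(f"{nvidia_unit_share(mix_2024):.0%} -> {nvidia_unit_share(mix_2026):.0%}")
# NVIDIA's unit share falls from 100% to 60% while total units are unchanged.
```

This is the attach-rate dilution in miniature: the customer's total compute purchase is flat, but NVIDIA's slice of it shrinks.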
Framework 2: Cost-Per-Inference
Open-source models (Llama, Mistral, DeepSeek R1) are free to download. Inference cost is just the hardware and electricity.
Closed models (GPT-5.4, Claude, Gemini) charge API fees. As open-source models close the accuracy gap, price competition for API inference intensifies.
Stock implication: Microsoft and Google's cloud AI revenue growth could be constrained by free open-source alternatives. This limits upside for cloud platforms in the inference-heavy 2026-2027 period.
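One way to see why open-source pressures API pricing is a per-million-token cost comparison. Every number here is an illustrative assumption on my part, not a vendor quote:

```python
# Self-hosted open-source inference vs a closed-model API, per 1M tokens.
gpu_hourly_cost = 2.50       # assumed rental rate for an H100-class GPU ($/hr)
tokens_per_second = 1_500    # assumed batched throughput for a mid-size model
api_price_per_mtok = 5.00    # assumed closed-model API price ($/1M tokens)

tokens_per_hour = tokens_per_second * 3600
self_hosted_per_mtok = gpu_hourly_cost / (tokens_per_hour / 1e6)
print(f"Self-hosted: ${self_hosted_per_mtok:.2f}/Mtok vs API: ${api_price_per_mtok:.2f}/Mtok")
```

At these assumptions self-hosting is roughly an order of magnitude cheaper per token before engineering and utilization overhead, which is exactly the margin pressure on API providers that this framework describes.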
Portfolio Construction
Core Holdings (If Building an AI Stock Portfolio)
60% in Infrastructure (NVIDIA + AMD):
- NVIDIA 40%: Highest growth, highest multiple risk, most established dominance
- AMD 20%: Lower valuation, faster catch-up, diversification benefit
30% in Platforms (Microsoft + Google + Amazon):
- Microsoft 15%: Most exposed to LLM API growth, highest quality
- Google 10%: Highest capex, most uncertain payoff, but upside if TPU adoption accelerates
- Amazon 5%: Defensive position, steady growth, undervalued
10% in Specialized Infrastructure (Broadcom, CoreWeave, Nebius, or custom silicon plays):
- Broadcom 10%: Quiet winner, less volatile, higher dividend
- CoreWeave (CRWV): Higher-risk pure-play GPU cloud bet; substitute for portion of Broadcom allocation if conviction is high
- Nebius (NBIS): Lower-risk GPU cloud exposure, profitable, smaller upside
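The core allocation above, expressed as a weights table with a sum check and a simple dollar-allocation helper (the portfolio size is arbitrary):

```python
# Model weights from the allocation above; values are fractions of the portfolio.
weights = {
    "NVDA": 0.40, "AMD": 0.20,                   # 60% infrastructure
    "MSFT": 0.15, "GOOGL": 0.10, "AMZN": 0.05,   # 30% platforms
    "AVGO": 0.10,                                 # 10% specialized infrastructure
}
assert abs(sum(weights.values()) - 1.0) < 1e-9    # weights must total 100%

def dollar_allocation(weights: dict[str, float], portfolio_value: float) -> dict[str, float]:
    """Translate fractional weights into dollar amounts per ticker."""
    return {ticker: w * portfolio_value for ticker, w in weights.items()}

print(dollar_allocation(weights, 10_000))
```

Swapping part of the AVGO weight into CRWV or NBIS, as suggested above, only requires the substituted weights to keep summing to 1.0.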
Avoid:
- Pure-play LLM API companies (OpenAI if it goes public): margin compression from open-source and price competition is a known headwind
- Late-stage AI application companies (CrowdStrike, Palantir unless teams have conviction on them separately)
- SPAC plays and pre-revenue AI startups: valuation risk is extremely high
Rotation Timing
If the market re-rates NVIDIA multiple downward in Q2-Q3 2026 as guidance shows growth moderating from 73% to 40%, that's the point to rotate out of momentum and into value (AMD, Microsoft, Google).
If open-source models continue improving (DeepSeek R1 matching o1 on benchmarks), rotate out of cloud platforms and into hardware.
FAQ
Is NVIDIA overvalued?
At $240 fair value vs the current price, the stock is about 24% undervalued per analyst consensus. But this assumes growth moderates to 30-40% by late 2026. If growth stays above 50%, there's 50%+ upside. If it falls below 30%, downside is 30-40%. It's essentially a binary bet on the pace of growth deceleration.
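This framing can be turned into a probability-weighted expected return. The scenario payoffs come from the answer above; the probabilities are assumptions purely for illustration:

```python
# (label, probability, return) per growth scenario; probabilities are assumed.
scenarios = [
    ("growth stays above 50%", 0.30, 0.50),    # 50%+ upside case from the text
    ("growth lands at 30-40%", 0.45, 0.10),    # assumed modest base-case return
    ("growth falls below 30%", 0.25, -0.35),   # midpoint of the -30% to -40% range
]

expected = sum(p * r for _, p, r in scenarios)
print(f"Scenario-weighted expected return: {expected:+.1%}")
```

Changing the assumed probabilities moves the answer materially, which is the point: the stock's attractiveness hinges on your view of the deceleration path, not on the fair-value estimate alone.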
Should I buy AMD instead of NVIDIA?
If you believe AMD's MI325X closes the performance gap faster than expected, yes. AMD trades at roughly a 40% valuation discount to NVIDIA on a growth-adjusted basis. Lower upside, but also lower risk of disappointment.
Which cloud platform should I buy for AI exposure?
Microsoft for the highest growth (Copilot adoption). Google for the highest capex (long-term optionality). Amazon for the steadiest growth and defensive positioning. No single clear winner.
Should I buy an open-source AI company stock?
If one goes public: very carefully. Open-source models like Llama and DeepSeek generate no direct revenue, and the business model is unclear (you are betting on someone monetizing the surrounding services). Too speculative for a diversified portfolio.
What is the biggest risk to AI stocks?
Regulatory clampdown on data use, compute, or the business models (API providers). If government mandates open-source only, or bans certain training data, the TAM shrinks and multiples re-rate downward.
When should I buy in?
If NVIDIA's growth moderates to 30-40% (expected Q2-Q3 2026), that's the re-rating window. That's when the stock is cheapest on a forward multiple basis. Buy the dip, not the momentum.
Related Resources
- GPU Pricing and Availability
- LLM Models and Pricing
- AI Infrastructure Tools and Platforms
- CoreWeave vs Nebius: GPUaaS AI Stocks Comparison
Sources
- NVIDIA Q4 2025 Earnings Report
- AMD Q3 2025 Earnings Report
- Microsoft FY2026 Earnings Guidance
- Alphabet CapEx Guidance (Q4 2025 Earnings)
- Meta 2026 CapEx Guidance (Q4 2025 Earnings)
- Amazon AWS Financial Performance
- The Motley Fool: Best AI Stocks 2026
- U.S. News: Best AI Stocks
- DeployBase GPU and LLM Pricing Tracker (data observed March 21, 2026)