For most of the last decade, “software eats the world” was the dominant playbook. In 2026, the playbook is flipping: the alpha is in hardware and energy—for the first time in years—because AI is colliding with hard physical limits.
This isn’t “hardware is cool again.” It’s a bottleneck map. If you understand where the AI stack breaks, you can understand who gets pricing power.
Thesis snapshot: where alpha is concentrated
| Layer | Scarcity status | Why the market rewards it now | Representative tickers |
|---|---|---|---|
| Hardware + infrastructure | SCARCE | Removes hard constraints: compute, HBM, packaging, networking, cooling, and power delivery. | NVDA, MU, TSM, ANET, VRT |
| Software application layer | ABUNDANT | Still strategic long-term, but near-term repricing reflects seat compression and variable inference costs. | NOW (higher quality); weaker seat-based SaaS only selectively. |
Read this post as a bottleneck map: in this phase, own what removes physical constraints first; evaluate software upside on a stricter pricing-power filter.
The AI hardware stack (end-to-end): what actually matters
Investors talk about AI like it’s one thing. In reality, it’s a chain—and the chain is only as fast as its slowest link:
- Compute: GPU/accelerator + system + interconnect.
- Memory: HBM (and the packaging required to attach it).
- Networking: switching + optics to make thousands of accelerators behave like one machine.
- Storage & data movement: feeding the cluster without starving it.
- Power train: grid → substation → transformers → UPS → PDUs → VRMs on the board.
- Thermals: air vs liquid cooling and heat rejection.
- Energy supply: time-to-power and firm capacity (often the real gating factor).
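To make the slowest-link framing concrete, here is a minimal sketch that treats each layer as a throughput ceiling and takes the minimum. Every number below is a hypothetical illustration, not real capacity data:

```python
# Hypothetical per-layer ceilings for one AI cluster, in arbitrary
# "effective capacity" units. All figures are illustrative assumptions.
layer_capacity = {
    "compute":    120.0,  # raw accelerator supply
    "memory":      95.0,  # HBM bandwidth / supply
    "networking": 110.0,  # fabric keeps accelerators fed
    "power":       80.0,  # megawatts actually delivered
    "cooling":     90.0,  # heat rejection capacity
}

# The chain is only as fast as its slowest link.
bottleneck = min(layer_capacity, key=layer_capacity.get)
effective = layer_capacity[bottleneck]

print(f"Effective throughput: {effective} (set by {bottleneck})")
# Spending on any non-bottleneck layer adds zero effective capacity
# until the binding constraint (here: power) is relieved.
```

That last comment is the investment logic of this post in one line: pricing power accrues to whoever relieves the binding constraint.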
Software still matters—but in this part of the cycle, software is not the scarce input. Compute, memory bandwidth, packaging, and megawatts are.
The key insight: AI turns “inference” into an industrial load
Training gets headlines. Inference becomes the permanent electricity bill. Once AI products reach scale, usage turns into a quasi-industrial demand profile: always-on, latency-sensitive, and capital-intensive to serve.
That’s why the market is increasingly paying for the companies that remove physical constraints, not the companies that add another app layer.
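A back-of-envelope calculation shows why always-on inference behaves like an industrial electricity bill. Every input below is a hypothetical assumption, chosen only to illustrate the shape of the math:

```python
# Hypothetical inference fleet; all inputs are illustrative assumptions.
it_load_mw = 50.0       # average IT draw of the inference fleet, MW
pue = 1.3               # power usage effectiveness (cooling + overhead)
hours_per_year = 8760   # always-on, ~24/7 load
price_per_mwh = 70.0    # USD per MWh, illustrative rate

facility_mw = it_load_mw * pue             # what the grid must deliver
annual_mwh = facility_mw * hours_per_year  # firm, non-deferrable demand
annual_cost = annual_mwh * price_per_mwh

print(f"Facility draw:     {facility_mw:.0f} MW")
print(f"Annual energy:     {annual_mwh:,.0f} MWh")
print(f"Annual power bill: ${annual_cost / 1e6:.1f}M")
```

Note that the load runs at essentially 100% of the year's hours, which is why generators with firm capacity, not just cheap marginal power, become strategic.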
Category-by-category: the best-positioned AI hardware & energy stocks
1) Accelerators (the core tollbooth): NVIDIA (NVDA) + challengers
NVDA is the center of the stack because it sells a full platform: GPUs, high-speed interconnect, networking, and a software ecosystem that reduces time-to-deploy. That integrated platform matters when clusters are massive and downtime is expensive.
- Best positioned: NVDA
- Challengers / second-order plays: AMD (accelerators), AVGO (custom silicon + networking), INTC (foundry/packaging optionality, execution risk)
What to watch: supply, platform attach (networking/systems), and whether cluster buyers standardize multi-vendor stacks.
2) HBM memory (the silent limiter): Micron (MU)
AI clusters don’t just need compute—they need memory bandwidth. HBM is expensive and complex to manufacture, and when it’s tight, it becomes a direct throttle on GPU shipments and system performance.
Research coverage has highlighted that the industry can be memory-constrained: CNBC reported that AI memory demand has outpaced supply and cited expectations for sharp DRAM price increases, with HBM described as the critical component surrounding the GPU in modern AI systems (CNBC, Jan 10 2026).
- Best positioned (US-listed pure play): MU
- Also relevant (non-US): SK Hynix, Samsung (HBM leaders)
What to watch: HBM mix, pricing power, and cycle risk (memory is still cyclical even if AI raises the floor).
3) Foundry + advanced packaging (where AI gets assembled): TSMC (TSM)
Even if you “have a chip design,” you still need advanced nodes and advanced packaging capacity to ship AI systems. Packaging (think CoWoS-like capacity) has been a real-world constraint in recent AI ramps.
- Best positioned: TSM
- Second-order: outsourced packaging/test such as AMKR (more commodity; watch mix)
What to watch: packaging lead times, capex, and customer concentration.
4) Semiconductor equipment (the capacity builders): ASML, AMAT, LRCX, KLAC
If AI demand forces sustained leading-edge and memory capex, the equipment layer is the “arms dealer.” This is not a pure AI trade—it’s a cycle trade tied to capex duration.
What to watch: memory capex, China/export controls, and order durability (not just one quarter beats).
5) Networking & switching (making GPUs act like one computer): ANET, AVGO (and NVDA adjacency)
As clusters scale, the network fabric becomes a first-class constraint. You don’t just buy GPUs—you buy a system where switching capacity and latency determine utilization.
What to watch: Ethernet vs InfiniBand mix, switch silicon cycles, and whether hyperscalers go more in-house.
6) Optical transceivers & photonics (the bandwidth enablers): COHR, LITE, CIEN
As racks densify and clusters spread across halls/campuses, optics matter. This is a higher-volatility layer, but it’s directly levered to bandwidth growth.
What to watch: product cycles (400G/800G/1.6T), pricing pressure, and customer concentration.
7) Servers & rack integration (turning chips into deployable capacity): SMCI, DELL, HPE
Chips don’t deploy themselves. OEMs and integrators translate supply into shipped racks. This layer often looks like “low margin,” but during supply shocks it can capture value via allocation and configuration.
What to watch: component availability (HBM, GPUs, PSUs), working capital, and demand digestion phases.
8) Power & cooling (the “grid-to-chip” tollbooth): VRT, ETN
AI racks push power density higher, which forces a redesign of the data center’s power train and thermals. If you can’t power it and cool it, you can’t monetize it.
What to watch: backlog quality, lead times, and whether liquid cooling becomes standard rather than optional.
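A quick sketch of why rising rack density forces a redesign of the power train and thermals: the same building hosts far fewer AI racks than traditional ones, and per-rack heat can exceed what air cooling removes. All figures below are illustrative assumptions, not vendor specs:

```python
# All figures are hypothetical illustrations, not real facility data.
facility_it_mw = 40.0        # IT capacity of one data hall campus, MW
legacy_rack_kw = 10.0        # traditional air-cooled rack
ai_rack_kw = 120.0           # dense AI rack
air_cooling_limit_kw = 40.0  # rough per-rack ceiling for air-only cooling

legacy_racks = facility_it_mw * 1000 / legacy_rack_kw
ai_racks = facility_it_mw * 1000 / ai_rack_kw

print(f"Legacy racks supported: {legacy_racks:.0f}")
print(f"AI racks supported:     {ai_racks:.0f}")
print("Liquid cooling required:", ai_rack_kw > air_cooling_limit_kw)
```

The same megawatts, concentrated into roughly a tenth as many racks, is why liquid cooling and reworked power delivery stop being optional at AI densities.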
9) Energy supply (time-to-power): CEG, VST, TLN (selective)
In many regions, the constraint isn’t land or fiber—it’s time-to-power. Utilities and power generators with the right assets can become strategic partners to data center developers (with all the political/regulatory complexity that implies).
What to watch: contracted pricing, regulatory exposure, and how much AI demand is already priced in.
Software still matters, but scarcity sits elsewhere
Software is not “dead.” But right now, the cleaner alpha is mostly outside seat-based SaaS. Many software names are being repriced for two pressures: seat compression risk and variable inference costs. By contrast, hardware and energy are being rewarded for physical scarcity, delivery constraints, and direct linkage to AI capex.
How to use AlphaCrew on this thesis
Bottom line: In 2026, don’t just follow the AI narrative. Follow the constraints. That’s where pricing power lives.

