AMD vs NVIDIA + Global Risks for American Chipmakers
A roundup of recent media coverage, plus a brief on the datacenter decision matrix
Over the past two weeks, markets watched as Nvidia climbed to a $3 trillion market capitalization. In a conversation with Scott Kanowsky from Investing.com, I opined that there were plenty of risks tempering the generous forward outlook implied by its Price-to-Earnings (PE) Ratio. In another conversation with Laila Maidan from Business Insider, I discussed the risks implied by its transformed customer mix, the opportunities for its almost-cousin AMD, and other geopolitical risks typical for American chipmakers. Herein lies the full rationale, plus links to other articles published elsewhere. Read on!
Note #1: Nvidia’s Q1 earnings were discussed in great detail in an article published in Leverage Shares (which can be found here) as well as SeekingAlpha (which is here).
A large part of the conviction in Nvidia lies in the fact that it switched its customer focus from crypto miners and gamers — where needs are highly variable — to the corporate sector, which tends to come with high expectations. Nvidia’s design excellence is a well-established fact by now, and it has delivered with telling results.
High R&D expenses towards the development of corporate solutions led to massive payoffs in FY 2024 (i.e. 2023) and Q1 of FY 2025 (i.e. the first quarter of this year). Steadily rising cashflows coupled with diminishing expenses tend to be a great boost for investor conviction in a tech company. Ergo, the stock price shot up through parts of 2023 and almost all of 2024 so far. As market breadth vanished across 2024, investor conviction piled on. The semiconductor industry, on average, has a Trailing Twelve Month (TTM) PE Ratio in the 30-35 range; Nvidia’s, on the other hand, is skirting the 70-75 range. Therefore, it wouldn’t be entirely unreasonable if consensus market opinion drove the stock’s PE Ratio down by about 25-35% over the course of the year.
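To make the arithmetic concrete, below is a minimal Python sketch of what such multiple compression would imply for the price, holding trailing earnings constant. The PE and EPS figures are illustrative assumptions, not Nvidia’s actual financials.

```python
# Hypothetical illustration: price implied by PE compression, holding
# trailing earnings per share (EPS) constant. All figures below are
# assumptions for illustration, not Nvidia's actual financials.

ttm_pe = 72.0         # assumed TTM PE, within the 70-75 range cited above
eps = 2.0             # assumed trailing EPS (purely illustrative)
price = ttm_pe * eps  # implied current price

for compression in (0.25, 0.35):
    new_pe = ttm_pe * (1 - compression)
    new_price = new_pe * eps
    print(f"{compression:.0%} PE compression: PE {ttm_pe:.0f} -> {new_pe:.0f}, "
          f"price {price:.0f} -> {new_price:.0f} ({new_price / price - 1:.0%})")
```

With earnings held constant, the price falls one-for-one with the multiple; any earnings growth over the period would offset part of that decline.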
Now, while a corporate clientele ensures steady demand for the company’s products, it also imposes limits: if the products are deemed overpriced relative to requirements, said clientele would show no reluctance in moving on to another, more accommodating supplier. Also important is the clients’ usage cycle: once costs have been sunk into a set of products, there will likely be a tight set of requirements before substantial investments are made into upgrades. A spend of, say, $10 billion would necessarily be rationalized over the course of, say, six or seven years, i.e. roughly $1.4-1.7 billion a year.
The “Power Wall” Issue Has Consequences
Nvidia’s products — notably its GPU range — can artfully be described as “high-spec”: they inevitably run closer and closer to what engineers refer to as the “power wall”. Power dissipated in a silicon CMOS circuit comprises several components, of which dynamic power is a major one. It is computed as:

P_dynamic = a · C · V² · f

where a is the activity factor, C is the capacitance, V is the voltage, and f is the frequency.

Even with the Intel Pentium 4 processors (circa 2000-2008), the power density of a closely-packed processor array resulted in heat signatures steadily more comparable to that of a nuclear reactor.
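To make the relation concrete, here is a minimal Python sketch with made-up parameter values (not the specifications of any real chip), highlighting the quadratic dependence on voltage that makes the power wall so punishing:

```python
# A minimal sketch of the dynamic-power relation P = a * C * V^2 * f.
# All parameter values are illustrative assumptions, not any real chip's.

def dynamic_power(activity: float, capacitance_f: float,
                  voltage_v: float, frequency_hz: float) -> float:
    """Dynamic CMOS power in watts: a * C * V^2 * f."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

# Hypothetical chip: 20% switching activity, 50 nF effective switched
# capacitance, 1.1 V supply, 2 GHz clock.
p = dynamic_power(0.2, 50e-9, 1.1, 2e9)
print(f"Dynamic power: {p:.1f} W")  # ~24.2 W

# The V^2 term is why voltage scaling matters: a 10% supply reduction
# cuts dynamic power by roughly 19%.
p_low = dynamic_power(0.2, 50e-9, 1.1 * 0.9, 2e9)
print(f"At 10% lower voltage: {p_low:.1f} W")  # ~19.6 W
```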
Presently, the “power wall” is widely deemed to have been hit — with microprocessors consistently being packed with an ever-larger number of transistors to run ever-larger computations.
More transistors generally equate to more power consumed and more heat generated. A large part of the limitations resident within the computational capacity of CPUs is addressed with GPUs, which pack many more logic units for processing in parallel.
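As a toy illustration of that serial-versus-parallel framing, the Python sketch below contrasts a single stream of control stepping through work element by element with one whole-array operation. NumPy’s vectorized kernels merely stand in for the “many simple units” model; this is an analogy, not GPU code.

```python
# Toy contrast: serial element-by-element work (one powerful core,
# CPU-style) versus one data-parallel whole-array operation (many
# simple units applying the same op to all elements, GPU-style).
import time

import numpy as np

x = np.random.rand(1_000_000)

# Serial: a single stream of control visits each element in turn.
t0 = time.perf_counter()
total = 0.0
for v in x:
    total += v * v
t_serial = time.perf_counter() - t0

# Data-parallel: the same multiply-accumulate applied across the array.
t0 = time.perf_counter()
total_vec = float(np.dot(x, x))
t_vector = time.perf_counter() - t0

print(f"serial: {t_serial:.3f}s, vectorized: {t_vector:.5f}s")
```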
The more the processing, the greater the need for heat management, which leading data centers have long been wrestling with as processor stacks increase in size and density. Air-cooling — which most personal computers are equipped with — doesn’t quite cut it. Water-cooling, on the other hand, tends to be more efficient.
A little over a decade ago, Google initiated experiments with a highly-secretive and heavily-patented “floating data center”, which resulted in sightings of a number of mysterious barges — with up to four storeys of structures built out of shipping containers — floating off the coasts of California and Maine.

Patent filings revealed that each container might be packed with at least 2,000 processors and 5 terabytes of storage. The design was also described as being well-suited for “modular” add-ons, with the onboard cooling plant incorporating seawater.
The greater the heat dissipation, the greater the performance that can be sustained. If the ocean has the answer, it bears remembering that the sea gets colder the further down one goes. In the course of “Project Natick”, Microsoft sank a datacenter onto the seabed in the Northern Isles (specifically, off the Orkney Islands) in 2017.

Born out of an employee’s idea during a “ThinkWeek” in 2014, the project’s rationale is that, since more than half the world’s population lives within 120 miles of the coast, data flowing out of underwater datacenters near coastal cities would have a short distance to travel, leading to fast and smooth web surfing, video streaming and game playing.
When raised back to the surface in 2020, the datacenter was covered in a thin coat of algae, a number of barnacles, and sea anemones the size of cantaloupes. While the datacenter did have a “handful” of failed servers and related cables, it was determined that the servers in the underwater datacenter were, as a whole, eight times more reliable than those on land.
Heat-damage complexities and high infrastructure costs continue to keep CPUs in play within datacenters. The complexities of heavy computation are being handled via software methods that essentially break a problem set down into smaller “chunks” that can be handled by less-powerful hardware, thereby bringing a form of “mixed-mode optimization” into deployed hardware profiles. Thus, Advanced Micro Devices (ticker: AMD) — with a long-standing forte in supplying top-of-the-line CPUs to computer manufacturers — has been gaining traction in long strides among datacenters.
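A minimal sketch of that chunking idea follows, assuming a simple sum-of-squares workload; the chunk size and the process pool (standing in for a fleet of modest processors) are illustrative choices, not any datacenter’s actual scheme.

```python
# Breaking a large problem into chunks small enough for less-powerful
# hardware, then combining partial results. Sizes are illustrative.
from concurrent.futures import ProcessPoolExecutor

def partial_sum_of_squares(chunk: list[float]) -> float:
    """The small piece of work each modest worker handles."""
    return sum(v * v for v in chunk)

def chunked(data: list[float], size: int):
    """Yield successive fixed-size chunks of the input."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

if __name__ == "__main__":
    data = [float(i) for i in range(1_000_000)]
    # The pool plays the role of a mixed fleet of cheaper processors,
    # each handling a chunk sized to its (limited) capability.
    with ProcessPoolExecutor() as pool:
        partials = pool.map(partial_sum_of_squares, chunked(data, 100_000))
    print(sum(partials))
```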
Note #2: The potential for AMD’s market capitalization to reach $1 trillion was discussed in an article released on the Leverage Shares website (click here) as well as on the Tiger Brokers Community platform (click here), where it was one of the most-read articles that week. Excerpts of that article are presented here.
While Nvidia’s revenues have traditionally lapped those of AMD by some margin, 2022 was the year the two drew roughly level. 2023, however, saw Nvidia pile on gains through a progressively stronger focus on a corporate clientele via its Data Center segment – resulting in revenues four times those of its rival the following year. AMD’s higher operating expenses – as well as R&D expenses – in 2022 and 2023 suggest that a similar reorientation is being ventured by AMD. As far as present trends go, AMD’s Q1 2024 is a foreshadowing of Nvidia’s 2022.
While AMD’s Q1 R&D expenses have remained comparatively high relative to revenues, as in the two previous full years, net operating expenses have seen a net drop. Just as with Nvidia in 2022, roughly half of AMD’s Q1 2024 revenue is attributable to its Data Center segment; unlike Nvidia, only half as much revenue is attributable to Gaming. Also unlike Nvidia are the revenue contributions from the Client and Embedded segments, which collectively make up 40% of AMD’s revenue, whereas Nvidia’s revenue share from these segments was 9% in 2022. In the most recent quarter, Nvidia’s contribution from these two segments has dwindled to 3%.
While AMD and Nvidia currently occupy different parts of the price/performance spectrum with their products, every part of the spectrum has been receiving a fair bit of corporate attention on account of the “mixed-mode optimization” conundrum. AMD itself reports that its Data Center segment is driven by sales of Instinct MI300 GPUs — which tend to be a beat or two behind Nvidia’s GPUs — and 4th Gen EPYC CPUs. However, the MI300 ecosystem is being driven forward via software-driven optimizations of the ROCm 6.1 stack. A number of AMD’s clients, such as Lenovo, Samsung and Vodafone, have demonstrated that software-driven AI optimization unlocks improved performance in its EPYC CPUs as well. As corporate upgrade cycles turn, so will buy-ins into designers other than Nvidia.
Hence, there are two risk factors in Nvidia stock’s forward outlook. First, its current PE premium relative to the rest of the semiconductor industry is overextended. Second, corporate deployment cycles will inevitably optimize hardware spend relative to offerings from the likes of AMD.
There are, of course, a few other “geographical” factors at play.
Other Considerations: Competitors and Countries
A Substack article from nearly a year ago mentioned that three major startups — Graphcore, SambaNova and Cerebras — have been working on developing “Nvidia-beating” products that would eventually chip away (no regrets for the pun) at Nvidia’s dominance. Of these, Cerebras inked an agreement in April with Nautilus — which aims to operate “floating datacenters” — to support services built around Cerebras’ WSE-3, deemed the world’s fastest AI chip.
Outside of rapidly rising American/Western Hemisphere competitors lie numerous aspirants in the East: semiconductors and processors are deemed far too sensitive to be left in perpetuity to Western manufacturers and their respective governments’ oversight. Most active in the attempt to break away from the West’s influence is China.
China has historically been a net importer of high-tech goods. As of 2021, this trend has been witnessing a slight reversal.
A Substack article from March indicated that even in the case of Taiwan’s TSMC — the foundry that is the main manufacturing partner for both Nvidia and AMD — a large proportion of the components and raw materials needed for chip manufacturing comes from China. Chinese companies — presumably with a lot of encouragement from their government — have been hard at work attempting to build up comprehensive chipmaking facilities domestically, notably by importing the various equipment needed for this. The largest share of imports by dollar value has been held by lithography machines, which are needed to print complex transistor patterns layer by layer on a silicon wafer.
Dutch corporation ASML Holding is the world’s only manufacturer of the extreme ultraviolet (EUV) lithography machines needed to manufacture advanced chips. Even as technology export curbs gradually take hold against China, import volumes have kept increasing, albeit in fits and starts.
China isn’t the only power in Asia that views semiconductor manufacturing as a matter of sensitivity. As a Substack article published a year ago indicated, India has laid the groundwork for a semiconductor industry — along with an AI sector — backed by substantial monetary incentives. This has already begun to bear fruit: India’s Tata Electronics, in collaboration with Taiwan’s Powerchip Semiconductor Manufacturing Corporation (PSMC), has just finished a batch of semiconductor chips on a pilot line, which have been exported to customers worldwide. The finalization of manufacturing design — known as “tape-out” in the industry — is currently underway, and the company expects to subsequently ship chips at process nodes including 28nm, 40nm, 55nm and 65nm out of a government-subsidized 50,000-wafers-a-month facility.
On the AI front, the most talked-about development was the inking of a deal between Ola Electric (which is also building out the world’s largest EV manufacturing facility) and India-based Kaynes Semicon for the domestic design and manufacture of chips for EVs. Earlier in the year, Ola Electric had already announced its move off Microsoft Azure to its own cloud platform, as well as the design of its own alternative to Google Maps. With time, a number of other Indian companies and startups can be expected to deliver indigenous alternatives to the services and solutions offered and dominated by American tech giants.
All in all, these are additional risks that weigh down any argument for the extended valuation of American chipmakers such as Nvidia and AMD. The digital/electronic space of the future can be expected to be very interesting.
UPDATE: My commentary also featured in articles on MSN, Yahoo! Finance, and Yahoo! Finance Singapore.
“New India” has been hard at work on areas adjacent to chip manufacturing. Click here for an article from last year to read about India’s work in the AI race that could propel the country into the Top 3 podium soon. Another part of the series describes India’s quiet fintech revolution: click here to read that.
The “Dharma” series — which traces the evolution of Eastern faith and philosophy — also features India in great detail. Here’s Part 1, Part 2, Part 3, Part 4 and Part 5, followed by the ancillary Part 6 and Part 7 discussing Malaysia’s and Indonesia’s spiritual history.
For a list of all articles ever published, click here.