Can Marvell Profit As It Tries To Triple Its Business By 2028?

A rising tide may lift all boats, and that is a good thing these days for any company that has an AI oar in the water. But the question is whether any of that water will be potable – by which we mean profitable.

Thus far, depending on how cynical you want to be about the server recession and the competitive pressures in the compute, networking, and storage sectors, only one company has been able to filter its costs out of the AI wave and create massive pools of cool, clear profits, and those profits are quite literally holding Wall Street afloat right now.

That company, of course, is Nvidia.

But others are going to try, and Marvell Technology, which has a longer history in the semiconductor business in the datacenter than even Nvidia, is one of them. Marvell has a better chance than many other chip and systems players to capitalize on the enormous GenAI opportunity in the next several years.

We have been following Marvell’s various technologies for many years already, and have paid particular attention to the Cavium Arm server CPU and Innovium Ethernet switch ASIC businesses before they were eaten by Marvell to bolster its datacenter compute and networking aspirations. Marvell’s well-regarded Armada Arm chips started the Arm server revolution – we think unintentionally – in 2009 or so, and we have been paying attention since then. Marvell acquired Cavium in November 2017 for $6.1 billion to get its hands on its ThunderX family of Arm chips and acquired Innovium in August 2021 for $1.1 billion to own its hyperscaler-focused, lean and mean Teralynx switch chips. (Our analysis of the 51.2 Tb/sec Teralynx 10 switches announced last March and shipping in volume since late last year is here.)

But Marvell is not just selling chips. In fact, what it is really selling is expertise in helping others design their chips and get them through the foundries of Taiwan Semiconductor Manufacturing Co and, presumably if needed, those of Samsung and Intel as they start providing competitive options.

The $650 million acquisition of Avera Semiconductor, an alloy of the chip design teams from GlobalFoundries and IBM Microelectronics that was spun out of GlobalFoundries and bought by Marvell in November 2019, is the basis of that custom silicon and packaging business.

We did a deep dive on the custom and semi-custom chip business at Marvell back in September 2020, but now we are going to start tracking the financial numbers for Marvell as well. Many are hoping the company will be able to compete against Nvidia, AMD, Intel, Cisco Systems, and Broadcom in compute, storage, and networking while also cooperating with them as they adapt their technologies to try to bring down the cost of GenAI systems and take on the juggernaut that is Nvidia when it comes to AI hardware and software.

With the addition of the DSPs and optical modules from the $10 billion acquisition of Inphi in October 2020, Marvell rounded out its datacenter and 5G interconnect businesses and complemented its existing Prestera and (at the time) future Innovium Ethernet switching businesses.

A lot of the pieces for Marvell to benefit from a substantial re-architecting of the datacenter for AI workloads are in place, and at a considerable cost of $17.9 billion in stock and cash across those four deals. The assemblage of datacenter technologies that Marvell has put together is intentional, and the datacenter boat is rising. But as you will see from its recent financial results, potable water is not yet being filtered out of that tide and into its P&L.

In its first quarter of fiscal 2025, which ended in the first week of May, Marvell’s revenues were down 12.2 percent to $1.16 billion and declined 18.6 percent sequentially from the fourth quarter of fiscal 2024, which ended on February 3. Operations resulted in a loss of $152 million, and the company reported a net loss of $215.6 million. The company had $848 million in cash in the bank and had just a tad over $4 billion in long term debt.

The losses in recent quarters have been concerning, and it is hard to say if this is due to declines in more traditional datacenter compute, storage, and interconnect businesses or if the rising business that Marvell is doing helping hyperscalers and cloud builders create custom silicon and getting it through the TSMC foundries on advanced processes is a drag on profits. As is the case with other chip makers and system builders, it is hard to say because they don’t talk directly about the profitability of their AI businesses. But it sure looks like Hewlett Packard Enterprise, Dell, Lenovo, Cisco Systems, and Supermicro are having trouble profiting from their burgeoning AI businesses. (As we have reported in recent weeks across many stories profiling the financials of these OEM vendors.)

In the quarter, Marvell’s Datacenter group posted sales of $816.4 million, up by a factor of 1.87X compared to the year ago period and up 6.7 percent sequentially, and Matt Murphy, Marvell’s president and chief executive officer, said on a call with Wall Street analysts that double-digit growth from “cloud” more than offset a “higher than seasonal decline” in chips sold into products aimed at enterprise, on-premises datacenters.

Revenues in fiscal Q1 were boosted significantly by the initial shipments of custom accelerators for hyperscalers and cloud builders. Several years ago, AI chip startup Groq partnered with Marvell for help in designing and manufacturing its TSP accelerators, and now Marvell has design and manufacturing partnerships with three of the four hyperscalers and cloud builders here in the United States.

Marvell does not name names when it comes to these devices, but it is widely believed that the latest Trainium2 AI training chips from Amazon Web Services are ramping in volume through Marvell and TSMC right now, with future Inferentia3 AI inference chips ramping in 2025. (These Inferentia3 chips have not been announced yet, but could be revealed soon.) Google’s “Cypress” Axion Arm server chips are also believed to be coming out with the help of Marvell and ramping now. Microsoft is reportedly working with Marvell to bring out a future iteration of its “Athena” Maia series of AI accelerators, which will be ramping in 2026. (One might code name these Athena2 and call them the Maia 200 series in production.) Microsoft revealed the Maia 100 series and their companion Cobalt 100 series Arm server CPUs in November 2023.

These custom silicon deals with the hyperscalers and cloud builders, as well as their adoption of networking and interconnect products, are what is fueling the Datacenter group’s growth, and Murphy said on the call that revenues for this group would be up “in the mid-single digits” sequentially in fiscal Q2. The company has a set of coherent DSPs that are being used as the basis for datacenter interconnects (DCI) that can reach as far as 1,000 kilometers; Murphy said these will drive a $1 billion business by themselves over the long haul, and that the DCI business in general will grow to $3 billion by calendar 2028. The company’s PAM-4 DSPs are also being adopted by the hyperscalers and cloud builders and over a similar long haul are expected to add another $1 billion revenue stream. (Also presumably by 2028.)

Marvell has just entered the PCI-Express retimer chip business, taking on Astera Labs, which thought it was going to own this niche until Broadcom returned to the market this year and Marvell jumped in as well. These retimers are used to extend the reach of the links between flash storage and accelerators on one end and the PCI-Express controllers embedded in host CPUs on the other.

“We see a massive opportunity ahead with the data center TAM expected to grow from $21 billion last year to $75 billion in calendar 2028 at a 29 percent CAGR,” Murphy said on the Wall Street call. “We have numerous opportunities across compute, interconnect, switching and storage, as a result, we expect to double our market share over the next several years from our approximately 10 percent share last fiscal year.”

These numbers are consistent with what Marvell was talking about at its accelerated infrastructure event earlier this year, which we were unable to attend because of medical issues in our household but which we are gleaning information from as part of this analysis.

At its AI Era event back in April, Marvell said that it had about $200 million in AI-related connectivity sales in fiscal 2023, and that was greater than $550 million in fiscal 2024. In fiscal 2025, the current fiscal year, Marvell is expecting the combination of its connectivity and custom compute businesses to drive more than $1.5 billion in sales (roughly a $1 billion increase) and expects AI revenues to drive an incremental $1 billion or more in sales in fiscal 2026 to bust through $2.5 billion.

It is helpful to put this and the general trends for Marvell into perspective with some broader data. Here is how Marvell and its consultants see the datacenter semiconductor total addressable market, or TAM, starting with the calendar 2023 baseline.

Based on various sources, Marvell reckons that datacenter capital expenditures worldwide were around $260 billion in 2023. Of that, $197 billion was spent on IT equipment, which means this figure excludes the physical datacenter infrastructure that wraps around and powers and cools that IT equipment. Within this $197 billion for equipment, around $120 billion was for semiconductor chippery, and excluding various kinds of memory and other chips where Marvell does not have products, the datacenter TAM dropped down to around $82 billion.

Here is how that $82 billion breaks down further across the compute, interconnect, switching, and storage segments where Marvell plays:

Most of it is compute, and this distribution of revenues is a reflection of distributed computing architectures. That pie chart on the right might as well be a bill of materials share of the cost of an HPC or AI cluster.

Now, if you want to drill down further into the compute category, Marvell and its consultants believe that there was about $26 billion in general purpose compute sold in 2023 and about $42 billion in accelerated compute – GPUs, TPUs, and other kinds of parallel accelerators that we talk about all the time, for a total of $68 billion. Projecting out to calendar 2028, Marvell and friends think that the compute market will grow at a compound annual growth rate of 24 percent to $202 billion, with general purpose compute having a CAGR of 3 percent to reach $30 billion by 2028 and accelerated compute having a CAGR of 32 percent to reach $172 billion by 2028.
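For readers who want to check that math, here is a quick back-of-the-envelope sketch in Python using the standard compound growth formula; the outputs land a touch off Marvell’s stated 2028 figures simply because the company’s numbers are rounded.

```python
# Back-of-the-envelope check of the compute TAM projections above, using the
# standard compound growth formula: future = base * (1 + cagr) ** years.
# Small mismatches against Marvell's stated 2028 figures are just rounding.

def project(base_billions: float, cagr: float, years: int = 5) -> float:
    """Project a calendar 2023 baseline out to 2028 at a compound annual growth rate."""
    return base_billions * (1 + cagr) ** years

print(f"Total compute:           ${project(68, 0.24):.0f}B  (Marvell says ~$202B)")
print(f"General purpose compute: ${project(26, 0.03):.0f}B   (Marvell says ~$30B)")
print(f"Accelerated compute:     ${project(42, 0.32):.0f}B  (Marvell says ~$172B)")
```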

Now, within those compute numbers, custom accelerated compute engines accounted for 16 percent of total accelerated compute in 2023, or about $6.6 billion, and depending on who Marvell asked, this is projected to grow at a 32 percent CAGR to $27.5 billion (holding a flat 16 percent share of the total) or at a 45 percent CAGR to $42.9 billion (about 25 percent of the total) between 2023 and 2028.

Do the math: if Marvell has a 20 percent share of custom accelerated compute revenues, then it is looking at helping make devices that will comprise somewhere between $5.5 billion and $8.6 billion in revenue. We don’t know how much of this comes to Marvell – it depends on how the revenues and costs get booked. If Marvell is hired to be a supplier of the chips, then all of that revenue will come to it and it will have to bear the costs of the projects. (Which is what we think is happening here, and what Google is doing with Broadcom as the latter makes Google’s TPUs.) If not, then Marvell and Broadcom are essentially costs in the value chain for the making of the custom compute engines, and we have a harder time figuring out the revenue.
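As a sketch of where that $5.5 billion to $8.6 billion range comes from, here are the two scenarios worked out in Python; the 20 percent share is our illustrative assumption, not something Marvell has committed to, and the figures land within rounding of the numbers above.

```python
# The two custom accelerated compute scenarios described above, and what a
# hypothetical 20 percent Marvell share of each would be worth in 2028.
# (The 20 percent share is an illustrative assumption, not a Marvell target.)

custom_2023 = 0.16 * 42                 # 16% of the $42B accelerated compute base, roughly $6.6B-$6.7B
low_2028 = custom_2023 * 1.32 ** 5      # 32% CAGR, flat 16% share scenario, ~$27B
high_2028 = custom_2023 * 1.45 ** 5     # 45% CAGR scenario, ~$43B

share = 0.20
print(f"Custom compute in 2023:     ${custom_2023:.1f}B")
print(f"2028 low / high scenarios:  ${low_2028:.1f}B / ${high_2028:.1f}B")
print(f"A 20% share in 2028:        ${share * low_2028:.1f}B to ${share * high_2028:.1f}B")
```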

If you add it all up, here is the TAM in 2023 and 2028 that Marvell is chasing:

 

In its fiscal 2024 year, which ended on February 3 and is the closest thing we have to a calendar 2023 revenue figure, Marvell had $5.51 billion in sales. If it had around 10 percent of the TAM for datacenter stuff across custom accelerated compute, switching, interconnects, and storage, that is $2.1 billion, or something around 40 percent of its aggregate revenues. And by 2028, Marvell is expecting this to grow to 20 percent – a doubling of share – of a TAM that will be 3.6X larger, reaching a $15 billion business.
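Spelled out, that share math looks like the sketch below; note that the exact ratio comes out closer to 38 percent of revenues, which we round up to around 40 percent above.

```python
# The share math behind the paragraph above: roughly 10 percent of a $21B
# datacenter TAM in fiscal 2024, growing to 20 percent of a $75B TAM by 2028.

revenue_fy2024 = 5.51    # Marvell total revenue for fiscal 2024, in $B
tam_2023 = 21.0          # datacenter TAM for calendar 2023, in $B
tam_2028 = 75.0          # projected datacenter TAM for calendar 2028, in $B

datacenter_sales = 0.10 * tam_2023
print(f"~10% of the 2023 TAM:  ${datacenter_sales:.1f}B "
      f"({datacenter_sales / revenue_fy2024:.0%} of total revenue)")
print(f"TAM growth, 2023-2028:  {tam_2028 / tam_2023:.1f}X")
print(f"20% of the 2028 TAM:   ${0.20 * tam_2028:.1f}B")
```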

Marvell thinks that it will be able to make it up in volume, we presume, when it comes to profits. And, it may be right about that. The only way to find out will be to live through the next five years.

Now, here is a chart that drills down into the interconnect TAM, which adds all of the neat PAM-4 DSPs and other silicon photonics stuff into the mix. Note that this interconnect data does not include switching:

Assuming a 10 percent share of this – the prior chart said Marvell would maintain its share of the interconnect space – that is a $1.4 billion interconnect business by 2028.

Now, let’s drill down into Ethernet switching, which includes the XPliant heritage from Cavium and the Prestera and Innovium stuff.

It is not clear what the share is today – it could be more or less than 10 percent of the 2023 TAM of $6.1 billion. Marvell is not precise in its predictions, either, but assuming a 15 percent share of the TAM for datacenter Ethernet switching, by 2028 Marvell could have a $1.8 billion or so ASIC business here.

Add in storage controllers, and that might be another $600 million by 2028, and other things in automotive and consumer will account for the rest.

So, in the best case scenario for 2028, compute chips will be an $8.6 billion business, Ethernet switching chips will be a $1.8 billion business, interconnect chips will be a $1.4 billion business, and storage chips will be a $600 million business for a total of $12.4 billion, and another $2.6 billion will come from other stuff to make up that expected $15 billion or 20 percent of the total TAM that Marvell is shooting for.
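Rolling those best-case segment estimates up, our arithmetic looks like this; the "everything else" bucket is simply whatever is left over to hit the $15 billion, 20 percent share target.

```python
# Rollup of the best-case 2028 segment estimates laid out above. The "everything
# else" bucket is whatever remains to hit the $15B, 20 percent share target.

segments_2028 = {            # best-case estimates for calendar 2028, in $B
    "custom compute":     8.6,
    "Ethernet switching": 1.8,
    "interconnect":       1.4,
    "storage":            0.6,
}
target = 15.0                # 20 percent of the projected $75B datacenter TAM

subtotal = sum(segments_2028.values())
print(f"Datacenter segments total:  ${subtotal:.1f}B")          # $12.4B
print(f"Everything else:            ${target - subtotal:.1f}B")  # $2.6B
```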

But again, the question is: Can Marvell be profitable as it triples its business?

It doesn’t have to be profitable at all, of course, unless it wants to make Wall Street and employees who hold shares, as well as its customers, happy. And we think it wants to be. Hopefully, the past two years are not an indication of the next four.

Over the range of our financial analysis, which starts in the February 2008 quarter and rolls to the May 2024 quarter, Marvell has generated $53.63 billion in sales, but only $1.99 billion in net income. That’s 3.7 percent of revenue. And it has had long periods of losing money. We understand why this is the case, and we applaud the company’s constant reinvention of itself to address new markets and its acquisitions to build new foundations. But these hyperscaler and cloud builder customers ask a lot and don’t leave a lot of margin in the game for their partners. Hopefully, the hyperscalers and cloud builders are smart enough to let their partners make a living, too. All Nvidia has to do is cut its prices, and it can undercut all of the homegrown accelerator projects. Intel and AMD and the Arm collective can do the same for CPUs and make it less desirable to have homegrown CPUs.

We doubt that will happen. We think it is far more likely that the hyperscalers and cloud builders will want to control their own fate and use homegrown chips as leverage. And thus, there is a chance that Marvell can profit from all of this.
