The Dollars And Sense Of Nvidia Paying A Fortune For Arm – The Next Platform



Back in April, when we were talking with Nvidia co-founder and chief executive officer Jensen Huang about the datacenter being the new unit of compute, we explained that we had always been disappointed that Nvidia never brought its “Denver” hybrid Arm CPU and Nvidia GPU, previewed way back in January 2011, to market, and we said further that we really wanted Nvidia to redefine what a CPU is by breaking its memory and I/O truly free from its compute.

What we didn’t say in all of this was that Nvidia should try to buy Arm Holdings, the company that creates and licenses the Arm embedded, client, and server chip instruction sets, architectures, and reference designs. But if the rumor mill is right, then Nvidia is pondering just that.

This opportunity is only coming about because SoftBank Group, the Japanese conglomerate founded by and for the moment controlled by Masayoshi Son, is being hammered by some bad investments – particularly the WeWork office renting boondoggle – at the same time that the coronavirus pandemic hit. In March of this year, SoftBank announced it was selling off $41 billion in assets to clean up its balance sheet and to fund share buybacks to keep its investors from revolting. SoftBank has a 24 percent stake in T-Mobile, a 29.5 percent stake in Alibaba, and a 48.2 percent stake in Yahoo Japan that it probably wants to keep, and letting go of Arm Holdings, which it paid a whopping $32 billion to take control of four years ago this month, is probably not something that Son, who wants us all to join The Singularity with him and create the technologies to do it, relishes. But, for those of us who want no part of such nonsense, hooray! Make Son’s licensing very expensive, please.

And while you are at it, take your universal basic income and put it in a self-driving car and push it over a cliff. All we need is intelligent assistance for human drivers, and we all need meaningful work with a reasonable standard of living. This is very hard to attain for all of humanity, obviously. Technology has helped as much as it has hurt – as it always does.

So, the first question we have about a potential Nvidia-Arm deal is: At what price can Nvidia actually acquire the Arm Holdings business from SoftBank so that it makes economic sense? Money is cheap right now, to be sure. Will it be more than SoftBank paid four years ago, about the same, or less? Given the current environment, you could argue all three ways. According to the Wall Street Journal story that broke on July 13, which said the talks between Nvidia and SoftBank were in the early stages and which started all the tongues wagging, Goldman Sachs is apparently helping SoftBank unpack the numbers. According to the Bloomberg story that posted on July 31, we are now in the advanced stages of talks, and one analyst cited the Arm price tag at around $55 billion. There have been rumors that Arm might re-IPO next year with a $44 billion valuation – which is a neat trick considering that SoftBank shareholders already own Arm – and that if the company waited until 2025 to re-IPO, it might be worth $68 billion.

Let that sink in while we consider how much money that is for a business that generated just shy of $2 billion a year in licensing fees and other revenues in SoftBank’s fiscal 2019 and 2020 years ending in March, and that had income of $1.27 billion in fiscal 2019 and a loss of $400 million in fiscal 2020. The Arm division of SoftBank had a one-time gain of $1.67 billion in that fiscal 2019 year after setting up a joint venture in China and getting a big bag of cash. The point is, it is hard to say how profitable the Arm licensing business really is at this point. Right now, it really isn’t profitable. In servers, at least, there is intense competition from Intel and AMD in the X86 arena, and across the embedded and client spectrum there is increasing interest in the RISC-V architecture, which is free and open. And by the way, did we mention that IBM open sourced the Power instruction set last summer and just open sourced the very good, energy-efficient 64-bit Power A2I core design used in the BlueGene/Q supercomputer family? Arm does have nearly 1,800 licensees, which drive that revenue stream and which make Arm the envy of any open or licensable architecture in that regard. It is questionable if that base and its annuity-like revenue stream is worth 28X revenues, though. But that doesn’t mean Wall Street won’t try to get the highest price it can if Arm Holdings really is on the market.

Moreover, it is seriously questionable that Arm Holdings is worth such a big multiple compared to Mellanox, which Nvidia just bought earlier this year for $6.9 billion and which gave Nvidia a $1.33 billion business in calendar year 2019. In the trailing twelve months before Nvidia bought Mellanox ending in March 2020, the networking company had $1.45 billion in sales and had $262.4 million in net income. So Mellanox is 75 percent the size of Arm Holdings and is generating almost as much money in the black as Arm is losing in the red.
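As a quick back-of-envelope check, here is a sketch of the revenue multiples implied by the figures cited above. Note that the $55 billion Arm price is a rumored analyst estimate, not a confirmed deal value, and the ~$2 billion Arm revenue figure is rounded from SoftBank’s fiscal reports:

```python
# Back-of-envelope revenue multiples using figures cited in this article.
# All dollar amounts are in billions of US dollars.

def revenue_multiple(price, annual_revenue):
    """Acquisition price (actual or rumored) divided by trailing annual revenue."""
    return price / annual_revenue

# Arm: rumored ~$55B price tag against just shy of $2B in annual licensing revenues
arm_multiple = revenue_multiple(55.0, 1.95)

# Mellanox: the actual $6.9B Nvidia paid against $1.45B in trailing-twelve-month sales
mellanox_multiple = revenue_multiple(6.9, 1.45)

print(f"Arm: {arm_multiple:.1f}x revenue")        # roughly 28x
print(f"Mellanox: {mellanox_multiple:.1f}x revenue")  # roughly 4.8x
```

Which is the arithmetic behind the “28X revenues” figure above: a rumored Arm price of roughly six times what Nvidia paid for Mellanox, for a business only about a third larger by revenue.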

We don’t know how these Wall Street and City of London people come up with valuations, but ignoring profits, with these kinds of metrics Arm Holdings is not really worth all that much more than what Nvidia paid for Mellanox. Son paid $32 billion for it because Arm Holdings is the darling of the British economy and pretty much the big success story in tech that the country has. And to be absolutely fair, Arm chips are ubiquitous in embedded and client devices, and there is no reason to believe the architecture cannot – and should not – vanquish X86 from the datacenter. The attack always comes from the bottom. But the question here is: Will another animal chew its way up from a deeper, more open bottom and eat Arm and X86 at the same time? Everything that is good about Arm can be good about RISC-V over the long haul, and without having to pay anyone for a license. Arm is living by the legacy of its customers’ code base and the currency of its wits. Just like every other compute platform provider in history.

Just get the hyperscalers and cloud builders annoyed enough, and it can all change in a matter of years. The switch from federated giant NUMA RISC/Unix boxes to X86 clusters running Linux took a matter of a few years in the supercomputing realm two decades ago. Those who control their own software control their own fates, to a certain extent.

That right there is the crux of the problem. Arm has already started to give some of its technology away in response to open chip architectures. And where does this end? Ask the proprietary minicomputer makers like Digital Equipment, Hewlett Packard, and IBM, or better still, the RISC/Unix vendors like Acorn RISC Machines (the originator of the Arm architecture), IBM, Sun Microsystems, Hewlett Packard, Data General, and so on. What Linux plus X86 did to all of them, Linux plus RISC-V can do to Arm plus either Windows Server or Linux or MacOS or iOS or whatever. We said can, not will.

Nvidia buying Arm Holdings is all a thought experiment at this point, so let’s think about it for a minute.

Where is the control, and therefore the profit, in the datacenter today? And drilling down deeper, where is the profit when it comes to compute or switching engines, or storage devices?

There is a chunk of profit in actually manufacturing the chips, as Taiwan Semiconductor Manufacturing Corp demonstrates perfectly well, with $34.63 billion in revenues in 2019 and $11.18 billion in net income. That’s 32.3 percent of revenue, and that is as good as it gets in the hardware business. But this is clearly a fab that holds the world in its hands at the 7 nanometer node and is at the top of its game.
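For completeness, the margin math here is just net income over revenue, using TSMC’s 2019 figures as cited above:

```python
# TSMC's 2019 net margin from the figures cited above (billions of USD).
revenue = 34.63
net_income = 11.18
margin = net_income / revenue * 100
print(f"TSMC 2019 net margin: {margin:.1f}%")  # about 32.3 percent
```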

There is some money in the design of devices as well, though it is not clear how much. In a good year, Arm Holdings should drop maybe 15 percent of revenue to the bottom line.

Those who designed and manufactured their own chips, created their own systems, and developed and maintained their own systems software had the grief of doing the whole stack, but they got all of the profits from each layer, and could cover losses in one area if necessary when costs ran high for a quarter or two. X86 and Linux ruined that, forever drying up those profit pools; the IT budgets were redeployed elsewhere, never to return to systems.

Even today, there is some additional profit that comes from packaging up the electronics for a socket, some profit that comes from integrating these components into a system, and then, if it is done right, some profit that comes from the software stack. IBM’s profits in the mainframe used to be a mix of hardware and software, both at very high margins, but over time the hardware margins have degraded. The software margins have come down some, too. But even as the mainframe base has shrunk, those that still have mainframes have consumed ever more capacity, and the aggregate amount of MIPS compute consumption on mainframes has gone up and to the right with near mathematical predictability, through good times and bad. That has driven that software to be a steady freddy business.

But Nvidia doesn’t sell its software stack, and CUDA is just a way to spur adoption of GPU compute. In the datacenter, the profits go to Red Hat for Linux and Microsoft for Windows Server, to the memory makers for DRAM, PMEM, and flash, and to the makers of network adapter cards. Integrating the components into a system yields profits on the order of maybe 5 percent of revenues across the OEMs and ODMs, and we might be being generous here. Toss in technical support and break/fix support across enterprise customers, and maybe the profits across the combined hardware and support revenue streams rise to somewhere between 5 percent and 10 percent.

If Nvidia did not want to create a server-class Arm processor in the prior decade, with or without integration with its GPUs, it is hard to imagine why the company would want to pay a huge sum to take over Arm and try to control an entire ecosystem of server chip suppliers. As if you could control them anyway. The members of the Arm collective seem to come and go, but right now the only two successful Arm server chip lines come from Applied Micro/Ampere Computing, with its X-Gene and follow-on Altra designs, and from Cavium/Marvell, whose only Arm server chip that actually took off was the one it got for a song from Broadcom when that company exited the business while it was trying to buy Qualcomm. Calxeda starved before it got to 64 bits, AMD spiked its efforts to focus on X86, Samsung didn’t even get out of the gate, Qualcomm got out there and then gave up, Huawei Technology and its HiSilicon subsidiary are still in the game but pretty much limited to China, and Fujitsu has done an absolutely marvelous job creating an HPC-centric and AI-capable 48-core behemoth based on Arm. (A bow of respect.)

With increasing competition coming in GPUs and a failure to capture any of the initial exascale-class machines, Nvidia might be angling to change the nature of the game in the datacenter. This is the only thing that makes sense to us, and it is a hell of a high price to pay, but it could – just maybe – pay off. And Nvidia would have a business that spans from datacenter to edge to cloud to client to embedded.

So here is what you have to consider: Would Nvidia be willing to make everything it does – GPUs, CPUs, switch and adapter ASICs, SmartNICs, transceivers, whatever – open and licensable, at reasonable prices, to totally fill in all of the protective moats that Intel, Broadcom, and others have built in the datacenter around their respective products? (We didn’t say open source, but open as defined by a partner relationship with contractual behavior.) Nvidia could create its own reference designs and sell them, much as it does with DGX servers today, up and down the stack, and also let others manufacture devices and systems based on either its intellectual property or its chippery – or a mix of both. And would such a combination of assets drive enough revenue and profit while also smashing the competition?

We really are not sure, and we need to think about this some more. The combination of Mellanox and Cumulus Networks for networking with Nvidia for GPU compute and the core AI market it now serves makes sense – but only if it doesn’t alienate Mellanox switch and adapter buyers or those who just buy ASICs. If everything is licensable on reasonable terms, then Nvidia can compete to a certain extent with its customers, so long as the playing field is level. Nvidia itself has demonstrated this well with its raw GPU business, where it makes cards and others also make cards, and with its DGX server business, where it makes systems and others make systems based on its GPUs and interconnects, too.

Pipe up if you have any thoughts. The more, the merrier. There are a slew of bankers slouching towards London, Tokyo, and New York who are looking for every way they can to justify such a humongous deal so they can feast upon it.


