[HN Gopher] Nvidia to Acquire Arm for $40B
       ___________________________________________________________________
        
       Nvidia to Acquire Arm for $40B
        
       Author : czr
       Score  : 1711 points
       Date   : 2020-09-13 23:28 UTC (23 hours ago)
        
 (HTM) web link (nvidianews.nvidia.com)
 (TXT) w3m dump (nvidianews.nvidia.com)
        
       | ckastner wrote:
       | Softbank paid $32B for ARM in 2016.
       | 
       | A 25% gain over a horizon of four years is not bad for your
       | average investment -- but this isn't an average investment.
       | 
       | First, compared to the SP500, this underperforms over the same
       | horizon (even compared to end of 2019 rather than the inflated
       | prices right now).
       | 
       | Second, ARM's sector (semiconductors) has performed far, far
        | better in that time. The SOX (Philadelphia Semiconductor Index)
       | doubled in the same time period.
       | 
       | And looking at AMD and NVIDIA, it feels as if ARM would have been
       | in a position to benefit from the surrounding euphoria.
       | 
       | On the other hand, unless I'm misremembering, ARM back then was
       | already considered massively overvalued precisely because it was
        | such a prime takeover target, so perhaps it's the $32B that's
        | throwing me off here.
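        | 
        | Back-of-envelope (a rough sketch in Python; the index figure is
        | approximate):
        | 
        |     def cagr(start, end, years):
        |         return (end / start) ** (1 / years) - 1
        | 
        |     print(f"{cagr(32e9, 40e9, 4):.1%}")  # ARM: ~5.7%/yr
        |     print(f"{cagr(1, 2, 4):.1%}")        # 2x index: ~18.9%/yr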
        
         | yreg wrote:
          | It vastly outperforms the Softbank ventures we usually hear
         | about (excluding BABA).
        
           | smcl wrote:
           | To be honest, cash deposited in a boring current account
           | outperforms Softbank's Vision Fund
        
         | lrem wrote:
          | There's also a fundamental threat to ARM in the rise of
         | RISC-V.
        
           | janoc wrote:
            | RISC-V is only a tiny player in the low-end embedded space
           | (there are only a few parts actually available with that
           | instruction set) and no competition at all at the high end.
           | 
           | Maybe in a decade or so it may be more relevant but not
           | today. Calling this a "fundamental threat" is a tad
           | exaggerated.
        
             | kabacha wrote:
              | Where are people getting the "decade" number from? Looks
              | like demand for RISC-V just went up significantly, and
              | it's not like it's broken or non-existent; RISC-V "just
              | worked" a year ago already:
             | https://news.ycombinator.com/item?id=19118642
        
               | janoc wrote:
               | The "decade" comes from the fact that there is currently
               | no RISC-V silicon with performance anywhere near the
               | current ARM Cortex A series.
               | 
                | And a realistic estimate of how long it would take to
                | develop something on par with today's Snapdragon,
                | Exynos or Apple's chips is at least those 10 years. You
               | need quite a bit more to have a high performance
               | processor than just the instruction set.
               | 
               | The "just worked" chips are microcontrollers, something
               | you may want to put in your toaster or fridge but not a
               | SoC at the level of e.g. Snapdragon 835 (which is an old
               | design, at that).
               | 
               | Also the Sipeed chips are mostly just unoptimized
                | reference designs; they have fairly poor performance.
               | 
               | Most people who talk and hype RISC-V don't realize this.
        
               | hajile wrote:
               | A ground-up new architecture takes 4-5 years.
               | 
               | Alibaba recently said their XT910 was slightly faster
               | than the A73. Since the first actual A73 launched in Q4
               | 2016, that would imply they are at most 4 years behind.
               | 
               | SiFive's U8 design from last year claimed to have the
               | same performance as A72 with 50% greater performance per
               | watt and using half the die area. Consider how popular
                | the Raspberry Pi is with its A72 cores. With those RISC-V
               | cores, they could drastically increase the cache size and
               | even potentially add more PCIe lanes within the same die
               | size and power limits.
               | 
               | Finding out new things takes much more time than re-
               | implementing what is known to already work. As with other
               | things, the 80/20 rule applies. ARM has caught up a few
                | orders of magnitude in the past decade. RISC-V can easily
                | do the same, given the lack of royalties. Meanwhile,
               | the collaboration helps to share costs and discoveries
               | which might mean progress will be even faster.
        
               | andoriyu wrote:
                | Hold on, the reason why RISC-V cores are slower is that
                | the companies who make them don't have an existing
                | backend or just got into the CPU game.
                | 
                | I'm not saying Apple can drop a RISC-V front-end into
                | their silicon and call it a day, but you get the idea.
                | 
                | SiFive has a pretty decent chance of making performant
                | chips within the next few years.
        
             | lrem wrote:
             | But you don't buy a company for tens of billions ignoring
              | anything that's not relevant today. You pay based on what
             | you predict the company will earn between today and
             | liquidation. When Softbank originally bought ARM, no
             | competition was on the radar. Now there is some. Hence,
             | valuation drops.
        
               | janoc wrote:
               | Softbank bought ARM for $32 billion, sold for $40
               | billion, plus they were collecting profits during all
               | that time. Not quite sure how that meshes with "valuation
               | drops" ...
               | 
               | RISC-V in high performance computing is years out even if
               | big players like Samsung or Qualcomm decided to switch
               | today. New silicon takes time to develop.
               | 
               | And Nvidia really couldn't care less about the $1-$10
               | RISC-V microcontrollers that companies like SiPeed or
               | Gigadevice are churning out today (and even there ARM
               | micros are outselling these by several orders of
               | magnitude).
        
               | lrem wrote:
               | Let me just quote what I originally responded to:
               | 
               | > A 25% gain over a horizon of four years is not bad for
               | your average investment -- but this isn't an average
               | investment.
               | 
               | > First, compared to the SP500, this underperforms over
               | the same horizon (even compared to end of 2019 rather
               | than the inflated prices right now).
               | 
               | > Second, ARM's sector (semiconductors) has performed
                | far, far better in that time. The SOX (Philadelphia
               | Semiconductor Index) doubled in the same time period.
               | 
               | > And looking at AMD and NVIDIA, it feels as if ARM would
               | have been in a position to benefit from the surrounding
               | euphoria.
               | 
                | That's the "valuation drops" - relative to the market,
               | ARM has significantly underperformed, despite the
               | business being actually healthy and on the rise.
        
       | [deleted]
        
       | paxys wrote:
       | No doubt because Softbank was facing investor pressure to make up
       | for their losses.
        
       | m00dy wrote:
        | Can anyone point me to who would be competing with Nvidia in
        | the AI market?
        
         | joshvm wrote:
         | There's low power inference from Intel (Movidius) and Google
         | (Coral Edge TPU). Nvidia doesn't really have anything below the
         | Jetson Nano. I think there are a smattering of other low power
         | cores out there (also dedicated chips in phones). TPUs are used
         | on the high performance end and there are also companies like
         | Graphcore who do insane things in silicon. Also niche HPC
          | products like Intel Knights Landing (Xeon Phi), which is
         | designed for heterogeneous compute.
         | 
         | There isn't a huge amount of competition in the
         | consumer/midrange sector. Nvidia has almost total market
         | domination here. Really we just need a credible cross platform
          | solution that could open up GPGPU on AMD. I'm surprised Apple
          | isn't pushing this more, as they heavily use ML on-device and
          | to actually train anything you need Nvidia hardware (e.g. try
          | buying a MacBook for local deep learning training using only
          | Apple-approved bits, it's hard!). Maybe they'll bring out their
         | own training silicon at some point.
         | 
         | Also you need to make a distinction between training and
          | inference hardware. Nvidia absolutely dominates model training,
         | but inference is comparably simpler to implement and there is
         | more competition there - often you don't even need dedicated
         | hardware beyond a cpu.
        
         | option wrote:
          | Google, Habana (Intel), AMD in a year or two, Amazon in a few
         | years
        
       | weregiraffe wrote:
       | Wow. It costs an arm.
        
       | fizzled wrote:
       | Wow, this is tectonic. I cannot wait to see how this redraws the
       | competition map. There are dozens of major embedded semi vendors
       | that license Arm IP. Nvidia could eradicate them trivially.
        
       | maxioatic wrote:
       | > Immediately accretive to NVIDIA's non-GAAP gross margin and EPS
       | 
       | Can someone explain this? (From the bullet points of the article)
       | 
       | I looked up the definition of accretive: "characterized by
       | gradual growth or increase."
       | 
       | So it seems like they expect this to increase their margins. Does
       | that mean ARM had better margins than NVIDIA?
       | 
       | Edit: I don't know what non-GAAP and EPS stand for
        
         | pokot0 wrote:
          | GAAP is a set of accounting rules; EPS is earnings per share.
          | Basically it means they think it will increase their gross
          | margin and EPS, but you can't sue them if it does not.
        
         | ericmay wrote:
         | EPS -> earnings per share
         | 
          | Non-GAAP -> doesn't follow generally accepted accounting
          | principles. There are alternative accounting methods. GAAP is
         | very US-centric (not good or bad, just stating a fact).
        
           | salawat wrote:
           | Though note the intent of GAAP is to cut down on "creative
           | accounting" which can tend to mislead.
        
         | tyingq wrote:
         | I would guess they do have higher margins since they are mostly
         | selling licenses and not actual hardware.
         | 
         | This article is old, but suggests a 48% operating margin:
         | https://asia.nikkei.com/NAR/Articles/ARM-posts-strong-profit...
        
         | bluejay2 wrote:
         | You are correct that it means they expect margins to increase.
         | One possibility is that ARM has higher margins as you
         | mentioned. Another is that they are making some assumptions
         | about how much they can reduce certain expenses by, and once
         | you factor in those savings, margins go up.
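          | 
          | A toy example of the arithmetic (made-up numbers, purely
          | illustrative -- not actual financials):
          | 
          |     nvda_rev, nvda_gm = 11e9, 0.62  # hypothetical
          |     arm_rev, arm_gm = 2e9, 0.90     # hypothetical
          |     blended = (nvda_rev * nvda_gm
          |                + arm_rev * arm_gm) / (nvda_rev + arm_rev)
          |     print(f"{blended:.1%}")  # ~66%, above 62% -> accretive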
        
       | hastradamus wrote:
       | Why would Nvidia spend $40B to ruin Arm? I can't see them making
        | a return on this investment. No one wants to work with Nvidia;
        | they are notoriously ruthless. I'm sure everyone is making plans
        | to move to something else ASAP. Maybe RISC-V.
        
       | yangcheng wrote:
       | I am surprised that no one has mentioned China will very likely
       | block this deal.
        
         | incognition wrote:
         | Are you thinking retribution for Huawei?
        
           | ttflee wrote:
            | China de facto blocked the Qualcomm-NXP merger during
            | trade talks by not providing a decision before the deal's
            | deadline.
        
         | zaptrem wrote:
         | On what grounds/with what authority?
        
           | genocidicbunny wrote:
           | Is it that hard to see that China might see this as an
           | American company further monopolizing silicon tech,
            | potentially cutting China off from Arm designs?
           | 
           | But more to the point, this is also China. If you want to do
           | business in China, you're going to do as they tell you, or
           | you get the stick. And if you don't like it, what are you
           | going to do?
        
           | yangcheng wrote:
           | is this a question or just sarcasm? The post said clearly
           | "The proposed transaction is subject to customary closing
           | conditions, including the receipt of regulatory approvals for
           | the U.K., China, the European Union and the United States.
           | Completion of the transaction is expected to take place in
           | approximately 18 months."
        
         | mnd999 wrote:
         | I really hope so. The UK government should block it also, but I
         | don't think they will.
        
       | jl2718 wrote:
       | This is all about NVLink in ARM.
        
       | yissp wrote:
       | I still think the real reason was just to spite Apple :)
        
         | ericmay wrote:
         | How does this spite Apple?
        
         | RL_Quine wrote:
         | The company who was part of the creation of ARM and has a
         | perpetual license to its IP? Tell me how.
        
           | hu3 wrote:
           | Even perpetual license to future IP?
        
             | scarface74 wrote:
             | Why would Apple need future ARM IP? They have more than
             | enough in house talent to take their license in any
             | direction they wish.
        
               | hu3 wrote:
               | Such deal could certainly benefit both players. Hence my
               | question.
        
       | mlindner wrote:
       | This is really bad news. I hope the deal somehow falls through.
        
       | filereaper wrote:
       | Excellent, can't wait for Jensen to pull out a Cortex A-78 TI
       | from his oven next year. /s
        
         | broknbottle wrote:
         | hodl out for the Titan A-78 Super if you can, I hear it runs a
         | bit faster
        
       | hyperpallium2 wrote:
       | edge-AI!
       | 
       | They're also building an ARM supercomputer at Cambridge, but
       | server-ARM doesn't sound like a focus.
       | 
       | I'm just hoping for some updated mobile Nvidia GPUs... and maybe
       | the rumoured 4K Nintendo Switch.
       | 
        | They _say_ they won't muck it up, and it seems sensible to keep
       | it profitable:
       | 
       | > As part of NVIDIA, Arm will continue to operate its open-
       | licensing model while maintaining the global customer neutrality
       | that has been foundational to its success, with 180 billion chips
       | shipped to-date by its licensees.
        
         | fluffything wrote:
         | > but server-ARM doesn't sound like a focus.
         | 
         | ARM doesn't have good server CPU IP. Graviton, A64FX, etc.
         | belong to other companies.
        
           | Followerer wrote:
           | Graviton is Neoverse, ARM's Neoverse.
        
       | throw_m239339 wrote:
       | Tangential, but when I hear about all these insane "start-up du
        | jour" valuations, does anyone else feel like $40B isn't a lot
        | for a hardware company such as ARM?
        
         | beervirus wrote:
         | $40 billion is a real valuation though, as opposed to WeWork's.
        
           | incognition wrote:
           | Maybe referencing Nikola?
        
           | broknbottle wrote:
           | are you suggesting a bunch of office space leases that are
           | fully stocked with beer is not worth $40 billion?
        
       | fishermanbill wrote:
       | When will Europe realise that there is no second place when it
       | comes to a market - the larger player will always eventually end
       | up owning everything.
       | 
       | I can not put into words how furious I am at the UK's
       | Conservative party for not protecting our last great tech
       | company.
       | 
       | Europe has been fooled into the USA's ultra free market system
       | (which works brilliantly for the US but is terrible for everybody
        | else). As such American tech companies have bought EVERYTHING
        | and eventually mothballed it.
       | 
        | Take RenderWare: it was the leading game engine of the PS2-era
        | consoles, bought by EA and mothballed. Nokia is another great
        | example, bought by Microsoft and mothballed. Imagination
        | Technologies was slightly different in that it wasn't bought, but
        | Apple essentially mothballed them. Now ARM will undoubtedly be
        | next, via an intermediate buyout.
       | 
       | You look across Europe and there is nothing. Deepmind could have
       | been a great European tech company - it just needed the right
       | investment.
        
         | geon wrote:
         | I don't really see your point. Even the examples you list are
         | nonsensical.
         | 
          | * We are no longer in the PS2 era. EA now uses Frostbite, which
          | was developed by the Swedish studio DICE. It is alive and well,
         | powering some 40-50 games.
         | https://en.m.wikipedia.org/wiki/Frostbite_(game_engine)
         | 
         | * Nokia was dead well before MS bought them.
        
         | alfalfasprout wrote:
         | And you really think more protectionism will help?
         | 
         | Maybe part of the problem is that due to so many regulations,
         | there's not a healthy startup ecosystem and the compensation
         | isn't remotely high enough to draw the best talent.
        
           | mytherin wrote:
           | Regulation has little to do with it. Most of the tech
           | industry is inherently winner-takes-all or winner-takes-most
           | because of how easy it is to scale up tech solutions. US
           | companies get a huge head-start because of their large home
           | market compared to the fragmented EU market, and can easily
           | carry that advantage into also dominating the EU market.
           | 
           | There is a reason Russia and China have strong tech companies
           | and Europe doesn't. That reason isn't lack of money, lack of
           | talent or regulations. The only way for Europe to get big
           | tech companies is by removing or crippling big US companies
           | so EU companies can actually compete. The US companies would
           | be quickly replaced by EU alternatives and those would offer
           | high compensation all the same.
           | 
           | Whether or not that is worth it from the perspective of the
           | EU is not so black and white - tech is obviously not
           | everything - but the current situation where all EU data gets
           | handed to the US government on a silver platter is also far
           | from optimal from the perspective of the EU.
        
           | AsyncAwait wrote:
           | > Maybe part of the problem is that due to so many
           | regulations, there's not a healthy startup ecosystem
           | 
           | Reaganomics talking point since the 80s, yet the U.S.
            | constantly relaxes regulations; recently it relaxed even
            | more environmental ones, and it looks in parts like Mars.
            | 
            | But of course, cut regulations, cut corporate taxes, cut
            | benefits, cut, cut, cut. There's never a failure model for
            | such capitalism apparently. 2008 was even blamed on
            | regulation, rather than the lack thereof.
            | 
            | I'm quite frankly done with this line of argument.
        
         | Barrin92 wrote:
         | tech is only 10% of the US economy, and European nations are
          | much more reliant on free trade. Germany in particular, whose
          | exports constitute almost 47% of its GDP, a level globally
          | comparable only to South Korea among developed nations.
         | 
         | I get that Hackernews is dominated by people working in
         | software and software news, but as a part of the real economy
         | (and not the stock market) it's actually not that large and
         | Europe doesn't frame trade policy around it, for good reasons.
         | 
         | The US also doesn't support free-trade for economic reasons,
         | but for political and historical reasons, which is to maintain
         | a rule based alliance across the globe, traditionally to fend
         | off the Soviets. Because they aren't around any more, the US is
         | starting to ditch it. The US has never economically benefited
         | from free-trade, it's one of the most insular nations on the
         | planet. EU-Asia trade with a volume of 1.5 trillion almost
          | doubles EU-American trade, and the tendency is increasing;
          | that's why Europe is free-trade dependent.
        
           | KptMarchewa wrote:
            | Only because of the EU. Intra-EU trade is more comparable
            | to trade between US states than true international trade.
        
       | gautamcgoel wrote:
       | This is _awful_. Out of all the big tech companies, Nvidia is
       | probably least friendly to open source and cross-platform
        | compatibility. It seems to me that their goal is to monopolize AI
        | hardware over the next 20 years, the same way Intel effectively
        | monopolized cloud hardware over the last 20. Expect to see less
        | choice in the chip market and more and more proprietary software
       | frameworks like CUDA. A sad day for CS and for AI.
        
         | tony wrote:
         | Surprisingly - they have a working driver for FreeBSD. Never
         | had an issue with it - and the performance is fantastic. As far
          | back as the early 2000s I remember installing proprietary
          | Nvidia drivers on Linux and playing UT2004.
         | 
         | Maybe Nintendo/Sony uses Nvidia cards on their developer
         | machines? I imagine FreeBSD drivers aren't simply altruism on
         | their part.
         | 
         | On the other hand, stagnation on other fronts:
         | 
         | - Nouveau (tried recently) is basically unusable on Ubuntu. As
         | in the mouse/keyboard locks every 6 seconds.
         | 
          | - Proprietary drivers won't work with Wayland
         | 
         | And since their stuff isn't open, the community can't do much
         | to push Nouveau forward.
        
           | loeg wrote:
           | The FreeBSD blob driver will drive a monitor, but it lacks a
           | bunch of GPU hardware support the Linux one has: all of CUDA,
           | NVENC/NVDEC, I2C DDC, and certainly more.
        
             | non-entity wrote:
             | I knew it was too good to be true.
        
           | dheera wrote:
           | The proprietary driver also doesn't support fractional
           | scaling on 20.04 and I've been waiting ages for that.
        
           | zamadatix wrote:
            | Not just developer machines, e.g. the Nintendo Switch uses
            | an Nvidia Tegra X1 and runs FreeBSD. Lots of game consoles
            | do; you don't have to worry about the GPL.
        
             | Shared404 wrote:
             | I thought the switch ran a modified form of android, but
             | just looked it up and found this on the Switch Wikipedia
             | page:
             | 
             | > Despite popular misconceptions to the contrary, Horizon
             | [The switch software's codename] is not largely derived
             | from FreeBSD code, nor from Android, although the software
             | licence[7] and reverse engineering efforts[8][9] have
             | revealed that Nintendo does use some code from both in some
             | system services and drivers.
             | 
              | That being said, at least one of the PlayStations runs a
             | modified form of FreeBSD.
             | 
             | Edit: add [The ... codename]
        
               | zamadatix wrote:
               | Very interesting, thanks!
        
           | gautamcgoel wrote:
            | Is the FreeBSD driver open source? Also, somehow I was under
           | the impression they stopped maintaining the FreeBSD driver.
           | Is that correct?
        
             | magic_quotes wrote:
             | > impression they stopped maintaining the FreeBSD driver.
             | 
             | It's maintained while simultaneously not receiving any new
             | features.
        
               | throwaway2048 wrote:
                | Note this includes support for newer cards, so it's only
                | a matter of time before FreeBSD support is EOL'd.
        
               | magic_quotes wrote:
               | New cards are fully supported. Why do you think they
               | aren't?
        
               | loeg wrote:
               | https://www.nvidia.com/Download/driverResults.aspx/163239
               | /en...
               | 
               | The latest FreeBSD driver is 450.66, published 2020-8-18.
               | Supports RTX 20xx, GTX 16xx, GTX 10xx, and older.
        
             | Conan_Kudo wrote:
             | They did. The proprietary NVIDIA driver is required on
             | FreeBSD as well. The FreeBSD community just doesn't care
             | about this.
        
           | phire wrote:
           | Nvidia's "Blob" approach does have an advantage when it comes
           | to supporting random OSes.
           | 
           | It's less of a driver and more of an operating system.
           | Basically self-contained with all the support libraries it
           | needs. Super easy to port to a new operating system and any
            | driver improvements work on all OSes.
           | 
           | But the approach also has many downsides. It's big. It
           | ignores all the native stuff (like linux's GEM interface).
           | 
           | It also has random issues with locking up the entire system.
           | Like if you are debugging a process with the linux drivers, a
           | breakpoint or a pause in the wrong place can deadlock the
           | system.
        
             | loeg wrote:
             | You say that, but: Nvidia's "blob" driver doesn't have
             | CUDA, NVENC/NVDEC, nor DDC I2C support on FreeBSD --
             | despite all of this functionality being specific to
             | Nvidia's own hardware, and the support being present in the
             | Linux blob, which runs on a relatively similar platform.
             | 
             | If the only differing bits were the portability framework,
             | this would just be a matter of adding missing support. But
             | it isn't -- the FreeBSD object file Nvidia publishes lacks
             | the internal symbols used by the Linux driver.
        
               | magic_quotes wrote:
               | > If the only differing bits were the portability
               | framework, this would just be a matter of adding missing
               | support. But it isn't -- the FreeBSD object file Nvidia
               | publishes lacks the internal symbols used by the Linux
               | driver.
               | 
               | In fact it is. For Linux and FreeBSD Nvidia distributes
               | exactly the same blob for compilation into nvidia.ko; the
               | blobs for nvidia-modeset.ko are slightly different.
               | (Don't take my word for it, download both drivers and
               | compare kernel/nvidia/nv-kernel.o_binary with
               | src/nvidia/nv-kernel.o.) Nothing is locked in the closed
               | source part.
        
             | magic_quotes wrote:
             | Native Linux stuff is not at all native for FreeBSD. If
             | Nvidia suddenly decides to open their driver (to merge it
             | into Linux), FreeBSD and Solaris support will be the first
             | thing thrown out.
        
         | sillysaurusx wrote:
         | AI training is moving away from CUDA and toward TPUs anyway.
         | DGX clusters can't keep up.
        
           | make3 wrote:
           | this is just false
        
           | ladberg wrote:
           | And Nvidia's GPUs now include the same type of hardware that
           | TPUs have, so there's no reason to believe that TPUs will win
           | out over GPUs.
        
             | sillysaurusx wrote:
             | The key difference between a TPU and a GPU is that a TPU
             | has a CPU. It's an entire computer, not just a piece of
             | hardware. Is nVidia moving in that direction?
        
               | dikei wrote:
                | In terms of cutting-edge tech, they have their own GPUs,
                | CPUs from ARM, and networking from Mellanox, so I'd say
                | they're pretty much set to build a kick-ass TPU.
        
               | shaklee3 wrote:
                | A TPU is a chip you cannot program. It's purpose-built
                | and can't run a fraction of the types of workloads that
               | a GPU can.
        
               | sillysaurusx wrote:
               | I don't know where all of this misinformation is coming
               | from or why, but, as someone who has spent the last year
               | programming TPUs to do all kinds of things that a GPU
               | can't do, this isn't true.
               | 
               | Are we going to simply say "Nu uh" at each other, or do
               | you want to throw down some specific examples so I can
               | show you how mistaken they are?
        
               | shaklee3 wrote:
               | Please show me the API where I can write a generic
               | function on a TPU. I'm talking about writing something
               | like a custom reduction or a peak search, not offloading
                | a TensorFlow model.
               | 
               | I'll make it easier for you, directly from Google's
               | website:
               | 
                | TPUs: Cloud TPUs are optimized for specific workloads. In
               | some situations, you might want to use GPUs or CPUs on
               | Compute Engine instances to run your machine learning
               | workloads.
               | 
               | Please tell me a workload a gpu can't do that a TPU can.
        
               | sillysaurusx wrote:
               | Sure, here you go:
               | https://www.tensorflow.org/api_docs/python/tf/raw_ops
               | 
               | In my experience, well over 80% of these operations are
               | implemented on TPU CPUs, and at least 60% are implemented
               | on TPU cores.
               | 
               | Again, if you give a specific example, I can simply write
               | a program demonstrating that it works. What kind of
               | custom reduction do you want? What's a peak search?
               | 
               | As for workloads that GPUs can't do, we regularly train
               | GANs at 500+ examples/sec across a total dataset size of
               | >3M photos. Rather hard to do that with GPUs.
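                | 
                | For a flavor of what "implemented on TPU cores" means,
                | here's a toy custom reduction (a rough, untested sketch)
                | built only from ops that XLA compiles:
                | 
                |     import tensorflow as tf
                | 
                |     @tf.function
                |     def largest_local_max(x):
                |         # index of the largest strict local
                |         # maximum in a 1-D signal
                |         lo = x.dtype.min
                |         left = tf.pad(x[:-1], [[1, 0]],
                |                       constant_values=lo)
                |         right = tf.pad(x[1:], [[0, 1]],
                |                        constant_values=lo)
                |         peak = (x > left) & (x > right)
                |         fill = tf.zeros_like(x) + lo
                |         return tf.argmax(tf.where(peak, x, fill))
                | 
                | On a TPU you'd run it under a TPUStrategy (roughly
                | strategy.run(largest_local_max, args=(signal,))); the
                | same function runs unchanged on CPU/GPU.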
        
               | shaklee3 wrote:
                | Well, there you go. For one, TensorFlow is not a generic
                | framework like CUDA is, so you lose a whole bunch of the
                | configurability you have with CUDA. So, for example, even
               | though there is an FFT raw function, there doesn't appear
               | to be a way to do more complicated FFTs, such as an
               | overlap-save. This is trivial to do on a GPU, and is
                | built into the library. The raw functions it provides are
                | not direct access to the hardware and memory subsystem.
                | It's a set of raw functions that is a small subset of the
                | total problem space. And certainly if you are saying that
                | running something on a TPU's CPU cores is in any way
                | going to compete with a GPU, then I don't know what to
                | tell you.
               | 
               | You did not give an example of something GPUs can't do.
                | All you said was that TPUs are faster for a specific
               | function in your case.
        
               | sillysaurusx wrote:
                | _For one, TensorFlow is not a generic framework like CUDA
                | is, so you lose a whole bunch of the configurability you
                | have with CUDA_
               | 
               | Why make generalizations like this? It's not true, and
               | we've devolved back into the "nu uh" we originally
               | started with.
               | 
               |  _This is trivial to do on a GPU, and is built into the
               | library_
               | 
               | Yes, I'm sure there are hardwired operations that are
               | trivial to do on GPUs. That's not exactly a +1 in favor
               | of generic programmability. There are also operations
               | that are trivial to do on TPUs, such as CrossReplicaSum
               | across a massive cluster of cores, or the various
               | special-case Adam operations. This doesn't seem related
               | to the claim that TPUs are less flexible.
               | 
                |  _The raw functions it provides are not direct access to
               | the hardware and memory subsystem._
               | 
               | Not true. https://www.tensorflow.org/api_docs/python/tf/r
               | aw_ops/Inplac...
               | 
               | Jax is also going to be giving even lower-level access
               | than TF, which may interest you.
               | 
                |  _You did not give an example of something GPUs can't
                | do. All you said was that TPUs are faster for a specific
               | function in your case._
               | 
               | Well yeah, I care about achieving goals in my specific
               | case, as you do yours. And simply getting together a VM
               | that can feed 500 examples/sec to a set of GPUs is a
               | massive undertaking in and of itself. TPUs make it more
               | or less "easy" in comparison. (I won't say effortless,
               | since it does take some effort to get yourself into the
               | TPU programming mindset.)
        
               | shaklee3 wrote:
               | I gave you an example of something you can't do, which is
               | an overlap-save FFT, and you ignored that completely.
               | Please implement it, or show me any example of someone
               | implementing any custom FFT that's not a simple,
               | standard, batched FFT. I'll take any example of
               | implementing any type of signal processing pipeline on
               | TPU, such as a 5G radio.
               | 
               | Your last sentence is pretty funny: a GPU can't do
               | certain workloads because one it _can_ do is too slow for
               | you. Yet it remains a fact that TPU _cannot_ do certain
               | workloads without offloading to the CPU (making it orders
               | of magnitude slower), and that 's somehow okay? It seems
               | where this discussion is going is you pointed to a
               | TensorFlow library that may or may not offload to a TPU,
                | and it probably doesn't. But even that library is too
                | incomplete to implement things like a 5G LDPC decoder.
        
               | sillysaurusx wrote:
               | Which part of this can't be done on TPUs? https://en.wiki
               | pedia.org/wiki/Overlap%E2%80%93save_method#Ps... As far
               | as I can tell, all of those operations can be done on
               | TPUs. In fact, I linked to the operation list that shows
               | they can be.
               | 
               | You'll need to link me to some specific implementation
               | that you want me to port over, not just namedrop some
               | random algorithm. Got a link to a github?
               | 
               | If your point is "There isn't a preexisting operation for
               | overlap-save FFT" then... yes, sure, that's true. There's
               | also not a preexisting operation for any of the hundreds
               | of other algorithms that you'd like to do with signal
               | processing. But they can all be implemented efficiently.
               | 
               |  _Yet it remains a fact that TPU cannot do certain
               | workloads without offloading to the CPU (making it orders
                | of magnitude slower), and that's somehow okay?_
               | 
               | I think this is the crux of the issue: you're saying X
               | can't be done, I'm saying X can be done, so please link
               | to a _specific code example_. Emphasis on  "specific" and
               | "code".
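                | 
                | FWIW, the per-block step in that pseudocode is just pad,
                | FFT, multiply, inverse FFT, discard M-1 samples. A plain
                | numpy sketch (untested, full blocks only):
                | 
                |     import numpy as np
                | 
                |     def overlap_save(x, h, N=256):
                |         M = len(h)
                |         L = N - M + 1  # samples kept per block
                |         H = np.fft.rfft(h, N)
                |         x = np.concatenate([np.zeros(M - 1), x])
                |         out = []
                |         for k in range(0, len(x) - N + 1, L):
                |             X = np.fft.rfft(x[k:k + N])
                |             y = np.fft.irfft(X * H, N)
                |             out.append(y[M - 1:])  # discard M-1
                |         return np.concatenate(out)
                | 
                | Each of those ops has a TF/XLA counterpart
                | (tf.signal.rfft / tf.signal.irfft, etc.), so the same
                | structure could presumably be expressed for TPU by
                | batching the blocks instead of looping.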
        
               | shaklee3 wrote:
               | Let's just leave this one alone then. I can't argue with
               | someone who claims anything is possible, yet absolutely
               | nobody seems to be doing what you're referring to (except
               | you). A100 now tops all MLPerf benchmarks, and the
               | unavailable TPUv4 may not even keep up.
               | 
               | Trust me, I would love if TPUs could do what you're
               | saying, but they simply can't. There's no direct DMA from
               | the NIC to where I can do a streaming application at
               | 40+Gbps to it. Even if TPU _could_ do all the things you
               | claim, if it 's not as fast as the A100, what's the
               | point? To go through undocumented pain to prove
               | something?
        
               | sillysaurusx wrote:
               | FWIW, you can stream at 10Gbps to TPUs. (I've done it.)
               | 
               | 10Gbps isn't quite 40Gbps, but I think you can get there
               | by streaming to a few different TPUs on different VPC
               | networks. Or to the same TPU from different VMs,
               | possibly.
               | 
               | The point is that there's a realistic alternative to
               | nVidia's monopoly.
        
               | shaklee3 wrote:
               | When I can run a TPU in my own data center, there is.
               | Until then it precludes a lot of applications.
        
               | slaymaker1907 wrote:
               | You can run basically any C program on a CUDA core even
               | those requiring malloc. It may not be efficient but you
               | can do it. Google themselves call GPUs general purpose
               | and TPUs domain specific.
               | https://cloud.google.com/blog/products/ai-machine-
               | learning/w...
        
               | sorenbouma wrote:
               | I'm a TPU user and I'd be interested to see a specific
               | example of something that can be done on TPU but not GPU.
               | 
               | Perhaps I'm just not experienced enough with the
               | programming model, but I've found them to be strictly
               | less flexible/more tricky than GPUs, especially for
               | things like conditional execution, multiple graphs,
               | variable size inputs and custom ops.
        
               | sillysaurusx wrote:
               | Sure! I'd love to chat TPUs. There's a #tpu discord
               | channel on the MLPerf discord:
               | https://github.com/shawwn/tpunicorn#ml-community
               | 
               | The central reason that TPUs feel less flexible is
               | Google's awful mistake in encouraging everyone to use
               | TPUEstimator as the One True API For Doing TPU
               | Programming. Getting off that API was the single biggest
               | boost to my TPU skills.
               | 
               | You can see an example of how to do that here:
               | https://github.com/shawwn/ml-
               | notes/blob/master/train_runner.... This is a repo that
               | can train GPT-2 1.5B at 10 examples/sec on a TPUv3-8 (aka
               | around 10k tokens/sec).
               | 
               | Happy to answer any specific questions or peek at
               | codebases you're hoping to run on TPUs.
        
               | slaymaker1907 wrote:
               | That doesn't answer the question of what a TPU can do
               | that a GPU can't. I think the OP means impossible for the
               | GPU, not just slower.
        
               | [deleted]
        
               | newsclues wrote:
                | They just bought ARM for $40 billion. I think they want
                | to integrate CPU, GPU and high-speed networking.
        
           | lostmsu wrote:
           | Where did you get this from? AFAIK GPT-3 (for example) was
           | trained on a GPU cluster, not TPUs.
        
             | sillysaurusx wrote:
             | Experience, for one. TPUs are dominating MLPerf benchmarks.
             | That kind of performance can't be dismissed so easily.
             | 
             | GPT-2 was trained on TPUs. (There are explicit references
             | to TPUs in the source code: https://github.com/openai/gpt-2
             | /blob/0574c5708b094bfa0b0f6df...)
             | 
             | GPT-3 was trained on a GPU cluster probably because of
             | Microsoft's billion-dollar Azure cloud credit investment,
             | not because it was the best choice.
        
               | option wrote:
                | No, they are not. Go read the recent MLPerf results more
                | carefully, and not Google's blog post. NVIDIA won 8/8
                | benchmarks for the publicly available SW/HW combo. Also
                | 8/8 on per-chip performance. Google did show better
                | results with some "research" system which is not
                | available to anyone other than them yet.
        
               | sillysaurusx wrote:
               | This is a weirdly aggressive reply. I don't "read
               | Google's blogpost," I use TPUs daily. As for MLPerf
               | benchmarks, you can see for yourself here:
               | https://mlperf.org/training-results-0-6 TPUs are far
               | ahead of competitors. All of these training results are
               | openly available, and you can run them yourself. (I did.)
               | 
               | For MLPerf 0.7, it's true that Google's software isn't
               | available to the public yet. That's because they're in
               | the middle of transitioning to Jax (and by extension,
               | Pytorch). Once that transition is complete, and available
               | to the public, you'll probably be learning TPU
               | programming one way or another, since there's no other
               | practical way to e.g. train a GAN on millions of photos.
               | 
               | You'd think people would be happy that there are
               | realistic alternatives to nVidia's monopoly for AI
               | training, rather than rushing to defend them...
        
               | p1esk wrote:
               | _transitioning to Jax (and by extension, Pytorch)_
               | 
               | Wait, what? Why would transition to Jax imply transition
               | to Pytorch?
        
               | llukas wrote:
                | You are basing your opinion on last year's MLPerf and
                | some stuff that may or may not be available in the
                | future. The MLPerf 0.7 "available" category has been
                | ghosted by Google.
               | 
               | Pointing this out is not aggressive.
        
               | lostmsu wrote:
               | I checked MLPerf website, and it looks like A100 is
               | outperforming TPUv3, and is also more capable (there does
               | not seem to be a working implementation of RL for Go on
               | TPU).
               | 
               | To be fair, TPUv4 is not out yet, and it might catch up
               | using the latest processes (7nm TSMC or 8nm Samsung).
               | 
               | https://mlperf.org/training-results-0-7
        
         | QuixoticQuibit wrote:
         | NVIDIA's hardware works on x86, PowerPC, and ARM platforms.
         | 
         | Many of their AI libraries/tools are in fact open source.
         | 
         | They stand to be a force that could propel ARM's strength in
         | data center and desktop computing. For some reason you're okay
          | with the current x86 duopoly held by AMD and Intel, both of
          | whom control their own destiny in CPUs and GPUs.
         | 
         | The HN crowd is incredibly biased against certain companies.
         | Why not look at some of the potential bright sides to this for
         | a more nuanced and balanced opinion?
        
           | LanternLight83 wrote:
           | There are good points on both sides; as a Linux user, I feel
           | the effects of their proprietary drivers and uncooperative
           | approach, while I can also appreciate that they've managed to
           | work with the Gnome and KDE project to have some support
           | under Wayland, and the contributions they've made to the
            | machine learning communities. As a whole, I do think that
            | the former outweighs the latter, and I loathe the
            | acquisition, but I do think that the resources they're
            | packing will bring ARM to new heights for the majority of
            | users.
        
             | endgame wrote:
             | https://drewdevault.com/2017/10/26/Fuck-you-nvidia.html
             | 
              | You mean this Wayland "support"?
        
               | CountSessine wrote:
               | The funny thing about GBM vs EGLStreams, though, based on
               | everything I've read online, is that there's broad
               | agreement that EGLStreams is the technically superior
               | approach - and that the reason it's been rejected by the
                | Linux graphics devs is that GBM, while inferior, has
                | broad hardware vendor support. Apparently very few GPU
                | vendors outside the big 3 had decent EGL implementations.
        
               | ddevault wrote:
               | That's not true. EGLStreams would be better suited to
               | Nvidia's proprietary driver design, but it's technically
               | inferior. This is the "broad agreement". No one but
               | Nvidia wants EGLStreams, or we would have implemented it
               | in other drivers.
        
               | CountSessine wrote:
               | You know I did a bit more reading about this, and it
               | sounds like you're right. There's a great discussion
               | about it here:
               | 
               | https://mesa-dev.freedesktop.narkive.com/qq4iQ7RR/egl-
               | stream...
        
               | selectodude wrote:
               | "October 26, 2017"
               | 
               | I'm not coming out and saying it's gotten significantly
                | better, but that is a three-year-old article and Nvidia-
                | Wayland does work on KDE and GNOME.
        
               | [deleted]
        
               | timidger wrote:
               | That's because they implemented egl streams, nothing has
               | changed majorly since then. There's been a lot of talk,
               | but no action towards unification. The ball is entirely
               | in Nvidia's court and they continue to work together on
               | this.
        
           | ianai wrote:
           | Yes I wonder the same thing about the sentiment against
            | nvidia. It'd be helpful if there were some wiki about things
            | they've killed or instances where they've acted against FOSS
            | systems.
        
             | mixmastamyk wrote:
             | Linus famously gave them the middle-finger.
        
               | paulmd wrote:
               | "linus went on a hyperbolic rant" isn't sufficient
               | evidence for anything.
               | 
               | linus is a hyperbolic jerk (as he admitted himself for a
               | few months before resuming his hyperbolic ways) who is
               | increasingly out of touch with anything outside the
               | direct sphere of his projects. Like his misguided and
               | completely unnecessary rants about ZFS or AVX.
               | 
               | if there are technical merits to discuss you can post
               | those instead of just appealing to linus' hyperbole.
               | 
               | (I won't even say "appeal to authority" because that's
               | not what you're appealing to. You're literally appealing
               | to his middle finger.)
        
               | arp242 wrote:
               | He just responded to a complaint that nVidia hardware
               | wasn't working with "nVidia is the worst company we've
               | dealt with". I don't think that's "out of touch", it's
               | pretty much the kind of stuff his job description
               | entails. If you don't like his style, fair enough, but if
               | the leader of a project says "company X is the worst
               | company we deal with" then that doesn't inspire a whole
               | lot of confidence.
        
               | paulmd wrote:
               | AFAIK the debate was mostly settled by NVIDIA submitting
               | their own EGLStreams backend for Wayland (that promptly
               | exposed a bunch of Wayland bugs). _So_ difficult to work
               | with, that NVIDIA, asking to do something different and
               | then submitting their own code to implement it!
               | 
               | https://www.phoronix.com/scan.php?page=news_item&px=EGLSt
               | rea...
               | 
               | AFAIK it also ended up being literally a couple thousand
               | lines of code, not some massive endeavor, so the Wayland
                | guys don't come off looking real great; looks like they
               | have their own Not Invented Here syndrome and certainly a
               | lot of generalized hostility towards NVIDIA. Like
               | Torvalds, I'll be blunt, my experience is that a lot of
               | people just _know_ NVIDIA is evil because of these dozens
               | of little scandals they've drummed up, and they almost
               | all fall apart when you look into them, but people just
               | fall back on asserting that NVIDIA _must_ be up to
               | something because of these 27 other things (that also
               | fall apart when you poke them a bit). It is super trendy
               | to hate on NVIDIA in the same way it's super trendy to
               | hate on Apple or Intel.
               | 
               | Example: everyone used to bitch and moan about G-Sync,
               | the biggest innovation in gaming in 10 years. Oh, it's
                | this _proprietary_ standard, it's using a _proprietary_
                | module, why are they doing this, why don't they support
               | the Adaptive Sync standard? Well, at the time they
               | started doing it, Adaptive Sync was a draft standard for
               | power-saving in laptops that had languished for years,
               | there was no impetus to push the standard through, there
               | were no monitors that supported it, and no real push to
               | implement monitors either. Why take 10 years to get
               | things through a standards group when you can just take a
               | FPGA and do it yourself? And once you've done all that
               | engineering work, are you going to give it away for free?
               | Back in 2016 I outright said that sooner or later NVIDIA
               | would have to support Adaptive Sync or else lose the home
               | theater market/etc as consoles gained support. People
                | told me I was loony, "_NVIDIA's just not that kind of
               | company_", etc. Well, turns out they were that kind of
               | company, weren't they? Turns out people were mostly mad
               | that... NVIDIA didn't immediately give all their
               | engineering work away for free.
               | 
               | The GPP is the only thing I've seen that really stank and
               | they backed off that when they saw the reaction. Other
               | than that they are mostly guilty of... using a software
               | license you don't like. It says a lot about the success
               | of copyleft that anyone developing software with a
               | proprietary license is automatically suspect.
               | 
               | The truth is that NVIDIA, while proprietary, does a huge
               | amount of really great engineering in novel areas that
               | HNers would really applaud if it were any other company.
               | Going and making your own monitor from scratch with a
               | FPGA so you can implement a game-changing technology is
               | exactly the kind of go-getter attitude that this site is
               | supposed to embody.
               | 
               | Variable refresh rate/GSync is a game changer. DLSS 2.0
               | is a game changer. Raytracing is a game changer. And you
               | have NVIDIA to thank for all of those, "proprietary" and
               | all. They would not exist today without NVIDIA, AMD or
               | Intel would not have independently pushed to develop
               | those, even though they do have open-source drivers. What
               | a conundrum.
        
               | arp242 wrote:
               | I'm not sure if Linus was talking about the Wayland stuff
               | specifically; the answer was in response to a complaint
               | that the nvidia/Intel graphics card switching didn't work
               | on Linux.
               | 
                | I haven't used nvidia products for about 10 years and
                | I'm not really into gaming or graphics, so I don't really
               | have an opinion on them either way, either business or
               | technical. I used their FreeBSD drivers back in the day
               | and was pretty happy it allowed me to play Unreal
               | Tournament on my FreeBSD machine :-)
               | 
               | Linus is not always right, but a lot of what he says is
               | often considerably more nuanced and balanced than his
               | "worst-of" highlight reel suggests. There are plenty of
               | examples of that in the presentation/Q&A he did from
               | which this excerpt comes, for example (but of course,
               | most people only see the "fuck you" part).
               | 
               | So if Linus - the person responsible for making an
               | operating system work with their hardware - says they're the
               | "worst company we deal with" then this strikes me as a
               | good reason to at least do your research if you plan to
               | buy hardware from them, if you intend to use it with
               | Linux anyway. I'll take your word for it that they're
               | doing great stuff, but if it outright refuses to work on
               | my Linux box then that's kinda useless to me.
               | 
               | This was also 6 or 7 years ago I think, so perhaps things
               | are better now too.
        
               | 48bb-9a7e-dc4f2 wrote:
               | "AFAIK". Your rant is wrong in so many levels it's not
               | even funny.
               | 
               | Nvidia earned that hostility. It's not even worth
               | replying when you're giving a comment in such bad faith.
               | There's a search function with many many threads that
               | dealt with this subject before if anyone wants a less
               | biased view of EGLStream and Wayland.
        
               | [deleted]
        
               | andrewprock wrote:
               | The notion that the maintainer of Linux has a narrowly
               | focused role and isn't capable of advocating for and serving
               | the broader community is belied by the fact that Linux
               | has grown from being a niche hobbyist platform to the de
               | facto cloud standard OS.
        
               | paulmd wrote:
               | https://arstechnica.com/gadgets/2020/01/linus-torvalds-
               | zfs-s...
               | 
               | https://www.zdnet.com/article/linus-torvalds-i-hope-
               | intels-a...
               | 
               | /shrug. The guy can't keep from popping off with
               | obviously false statements about things he knows nothing
               | about. What exactly do you want me to say? Yes, he's been
               | a good manager for the linux kernel, but he is self-
               | admittedly hyperbolic and statements like these show that
               | he really doesn't have an issue running his mouth about
               | things that he really doesn't understand.
               | 
               | It is the old problem with software engineers: they think
               | expertise in one field or one area makes them a certified
               | supergenius with relevant input in completely unrelated
               | areas. I can't count how many times I've seen someone on
               | HN suggest One Weird Trick To Solve Hard Problems in
               | [aerospace/materials sciences/etc]. Linus suffers from
               | the same thing.
               | 
               | His experiences with NVIDIA are probably relevant, and if
               | so we can discuss that, but the fact that he gave someone
               | the middle finger in a Q+A session is not. That's just
               | Linus being an asshole.
               | 
               | (and him being a long-term successful project manager
               | doesn't make him not an asshole either. Jensen's an
               | asshole and he's one of the most successful tech CEOs of
               | all time. Linus doesn't mince words and we should do the
               | same - he's an asshole on a professional level, the "I'm
               | just being blunt!" schtick is just a nice dressing for
               | what would in any other setting be described as a
               | textbook toxic work environment, and his defenses are
               | textbook "haha it's just locker room talk/we're all
               | friends here" excuses that people causing toxic work
               | environment are wont to make. He knows it, he said he'd
               | tone it down, that lasted about a month and he's back to
               | hyperbolic rants about things he doesn't really
               | understand... like ZFS and AVX. But hey I guess they're
               | not directed at people this time.)
               | 
               | Again, if he's got relevant technical input we can
               | discuss that but "linus gives the middle finger!!!!" is
               | not the last word on the topic.
        
               | andrewprock wrote:
               | I don't take issue with the fact that Linus' behavior is
               | problematic.
        
               | mixmastamyk wrote:
               | A little defensive, no? I trust his judgement more than
               | Mr. Random on the internets.
        
             | WrtCdEvrydy wrote:
             | Nvidia does not publish drivers, only binary blobs.
             | 
             | You generally have to wrap those and use them, which is
             | not FOSS-compatible.
        
           | throwaway2048 wrote:
           | The OP said nothing about being ok with the x86 duopoly.
           | 
           | It's possible to dislike two things at once; it's also possible
           | to be wary of new developments that give much more market
           | power to a notoriously uncooperative and closed company.
        
           | dheera wrote:
           | > The HN crowd is incredibly biased against certain
           | companies.
           | 
           | No kidding, regarding the HN crowd -- every time I post a
           | comment criticizing Apple's monopoly and policies I get
           | downvoted to oblivion, and I'd say that some of the things
           | they do e.g. on the Apple store and proprietary
           | hardware/software combinations are far more egregious than
           | anything Nvidia has ever done. The HN algorithm basically
           | encourages an echo chamber of people who gang up and
           | downvote/flag others who don't agree with the gang opinion.
           | 
           | (Psst ... If you see this comment disappear after a while,
           | it's probably because the same Apple fanboys found this
           | comment and decided to hammer it down again.)
        
             | dang wrote:
             | This is off topic and breaks the site guidelines. Would you
             | please review them and stick to the rules?
             | 
             | This sort of $BigCo metaflamewar in which each side accuses
             | HN of being shills for the opposite side is incredibly
             | repetitive and tedious. It's also a cognitive bias;
             | everyone feels like the community (and the moderators for
             | that matter) is biased against whatever view they favor.
             | 
             | https://hn.algolia.com/?dateRange=all&page=0&prefix=false&q
             | u...
             | 
             | https://hn.algolia.com/?query=notice%20dislike%20by:dang&da
             | t...
             | 
             | https://news.ycombinator.com/newsguidelines.html
        
             | barbecue_sauce wrote:
             | What's the point of pretending that people don't know you
             | will get downvoted for holding an unpopular opinion? It's
             | tautological.
        
             | Shared404 wrote:
             | Haven't looked at the rest of your posts, but if this gets
             | flagged, it's probably from this:
             | 
             | [Edit: I disagree with my past self for having put this
             | comment here. This is something that if one feels that they
             | notice, they should probably comment on. Leaving it here
             | for clarity's sake]
             | 
             | > The HN algorithm basically encourages an echo chamber of
             | people who gang up and downvote/flag others who don't agree
             | with the gang opinion.
             | 
             | or this:
             | 
             | > (Psst ... If you see this comment disappear after a
             | while, it's probably because the same Apple fanboys found
             | this comment and decided to hammer it down again.)
             | 
             | than this:
             | 
             | > criticizing Apple's monopoly and policies
             | 
             | Source: Have criticized Apple without having been downvoted
             | and flagged.
        
               | [deleted]
        
               | dheera wrote:
               | I think it depends on how you criticize them. I've
               | criticized:
               | 
               | - The idea that we can trust Apple with privacy, when
               | their OS isn't open source
               | 
               | - The idea that the Apple Store is doing good things in
               | the disguise of privacy when they actually play corporate
               | favoritism and unfairly monopolize the app market
               | 
               | - The idea that a PIN has enough entropy to be a good
               | measure of security
               | 
               | - Right to repair, soldered-to-motherboard SSDs
               | 
               | - The environmental waste associated with the lack of
               | upgradability
               | 
               | - Price gouging on upgrades ($400 for +512GB, anyone?
               | When I can get 4TB on Amazon for $500?) and lack of user
               | upgradability
               | 
               | I've been downvoted to hell for speaking the above. There
               | are a bunch of brainwashed Apple worshippers lurking
               | around here who don't accept any criticism of the holy.
               | 
               | I totally welcome engaging in an intelligent discussion
               | about any of the above, but downvotes and flagging (in
               | most cases without any response) doesn't accomplish any
               | of that, and serves just to amplify the fanboy echo
               | chamber.
               | 
               | Also, I don't think criticizing HN's algorithm or design
               | should warrant being flagged or downvoted, either.
               | Downvoting should be reserved for trolling, not
               | intelligent protest. I'm also of the opinion that
               | downvotes shouldn't be used for well-written
               | disagreement, as that creates an echo chamber and
               | suppresses non-majority opinions.
        
               | Shared404 wrote:
               | > I've been downvoted to hell for speaking the above.
               | 
               | Fair enough. I don't think I've ever mentioned Apple by
               | name, just participated in threads about them and
               | commented on how I disagree about how they do some
               | things.
               | 
               | I also went and reread some of my comments, and realized
               | I haven't criticized Apple as much here as I'd originally
               | thought. Apparently I mostly stick to that in meatspace.
               | 
               | > There are a bunch of brainwashed Apple worshippers
               | lurking around here who don't accept any criticism of the
               | holy.
               | 
               | As there are everywhere unfortunately. I haven't run into
               | many here personally, but have no difficulty believing
               | they are here.
               | 
               | > Also, I don't think criticizing HN's algorithm or
               | design should warrant being flagged or downvoted, either.
               | Downvoting should be reserved for trolling, not
               | intelligent protest. I'm also of the opinion that
               | downvotes shouldn't be used for well-written
               | disagreement, as that creates an echo chamber and
               | suppresses non-majority opinions.
               | 
               | I agree here as well. I was more trying to point out the
               | combative tone than the content of the text. There was
               | also some content that could be interpreted as
               | accusations of shilling, which is technically against the
               | guidelines.
               | 
               | To be clear -- I don't think your comment was accusing
               | people of shilling, just that I can see how people could
               | interpret it that way.
               | 
               | Edit: The first quote I took from your comment really
               | shouldn't have been there. Sorry about that. I still
               | think if you get flagged it's because of the "(Psst..."
               | part though. That one does come off as a bit aggressive.
        
               | [deleted]
        
           | dis-sys wrote:
           | > Why not look at some of the potential bright sides to this
           | for a more nuanced and balanced opinion?
           | 
           | because as an NVIDIA user for the last 20 years, I have never
           | seen such a bright side when it comes to open source.
        
           | arp242 wrote:
           | I don't really have an opinion on nVidia, as I haven't dealt
           | with any of their products for over a decade; my own problem
           | with this is somewhat more abstract: I'm not a big fan of
           | this constant drive towards merging all these tech companies
           | (or indeed, any company really). Perhaps there are some
           | short-term advantages for the ARM platform, but in the long
           | term it means a small number of tech companies will have all the
           | power, which doesn't strike me as a good thing.
        
             | QuixoticQuibit wrote:
             | I don't disagree with you, but I see it two ways:
             | 
             | 1. The continued conglomeratization in the tech sector is a
             | worrying trend as we see fewer and fewer small players.
             | 
             | 2. Only a large-ish company could provide effective
             | competition in the CPU/ISA/architecture space against the
             | current x86 duopoly.
        
               | arp242 wrote:
               | I'm not so sure about that second point; I don't see why
               | an independent ARM couldn't provide effective
               | competition. Server ARMs have been a thing for a while,
               | and Apple has been working on ARM MacBooks for a while. I
               | believe the goal is even to completely displace Intel
               | MacBooks in favour of the ARM ones eventually.
               | 
               | The big practical issue is probably software
               | compatibility and the like, and it seems to me that the
               | Apple/macOS adoption will do more for that than nVidia
               | ownership.
        
         | therealmarv wrote:
         | The problem is more that AMD is sleeping in regard to GPU AI
         | and good software interfaces.
        
           | burnte wrote:
           | Hardly sleeping, they have a fraction of the revenue and
           | profit of NVidia and Intel. Revenue was half of Nvidia's,
           | profit was 5% of NVidia's. Intel is even bigger. They only have
           | so much in the way of R&D money.
        
             | reader_mode wrote:
             | Seeing their market performance I have no doubt they could
             | get capital for R&D
        
               | zrm wrote:
               | Their market performance is a relatively recent
               | development. R&D has a lead time.
        
             | Polylactic_acid wrote:
             | ROCm is their product for compute and it still can't run on
             | their latest NAVI cards which have been out for over a year
             | while CUDA works on every nvidia card on day one.
        
             | adventured wrote:
             | > Revenue was half of Nvidia, profit was 5% of NVidia.
             | 
             | For operating income it's 25%, for net income it's 18%; not
             | 5%.
             | 
             | Last four quarters operating income for AMD: $884 million
             | 
             | Last four quarters operating income for Nvidia: $3.5
             | billion
             | 
             | This speaks to the dramatic improvement in AMD's operating
             | condition over the last several years. For contrast, in
             | fiscal 2016 AMD's operating income was negative $382
             | million. Op income has increased by over 300% in just ~2
             | 1/2 years. Increasingly AMD is no longer a profit
             | lightweight.
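             | 
             | As a rough sanity check of that ~25% figure, a quick Python
             | snippet using just the two operating-income numbers quoted
             | above:
             | 
             |     amd_op = 884e6    # AMD, trailing four quarters
             |     nvda_op = 3.5e9   # Nvidia, trailing four quarters
             |     print(round(amd_op / nvda_op * 100))   # -> 25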
        
               | burnte wrote:
               | I was using 2019 numbers rather than last 4 quarters. I
               | also talked about revenue and profit, not operating
               | anything.
               | 
               | AMD 2019 Revenue: $6.73b [1]
               | NVIDIA 2019 Revenue: $11.72b [2]
               | 
               | Roughly half, as I said.
               | 
               | AMD 2019 Profit (as earnings per share): $0.30 [1]
               | NVIDIA 2019 Profit (as earnings per share): $6.63 [2]
               | 
               | 4.52%, rounds to 5%, as I said.
               | 
               | However, you still proved my point. Lightweight or not,
               | they do not have, and have not had, the amount of money
               | available that NVidia and Intel have. It's growing,
               | they'll be able to continue to invest, and they have an
               | advantage in the CPU space that should last for another
               | year or two, giving them a great influx of cash, and
               | their focus on Zen 2 really paid off allowing them
               | greater cash flow to focus on GPUs as well.
               | 
               | [1] https://ir.amd.com/news-events/press-
               | releases/detail/930/amd... [2]
               | https://nvidianews.nvidia.com/news/nvidia-announces-
               | financia...
        
               | nerderloo wrote:
               | Earnings per share is only useful when you look at the
               | stock price. Since the number of outstanding shares
               | differs between the two companies, it's incorrect to
               | compare the companies' profits based on EPS.
        
               | adventured wrote:
               | > AMD 2019 Profit (as earnings per share): $0.30 [1]
               | NVIDIA 2019 Profit (as earnings per share): $6.63 [2]
               | 
               | > 4.52%, rounds to 5%, as I said.
               | 
               | You're misunderstanding how to properly compare
               | profitability between two companies.
               | 
               | If Company A has 1 billion shares outstanding and earns
               | $0.10 per share, that's $100m in profit.
               | 
               | If Company B has 10 billion shares outstanding and earns
               | $0.05 per share, that's $500m in profit.
               | 
               | Company A is not 100% larger on profit just because they
               | earned more per share. It depends on how many shares you
               | have outstanding, which is what you failed to account
               | for.
               | 
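               | To make that concrete, a quick Python sketch using the
               | hypothetical Company A/B numbers above:
               | 
               |     # total profit = EPS x shares outstanding
               |     a_eps, a_shares = 0.10, 1_000_000_000    # Company A: 1B shares
               |     b_eps, b_shares = 0.05, 10_000_000_000   # Company B: 10B shares
               |     profit_a = a_eps * a_shares   # $100M
               |     profit_b = b_eps * b_shares   # $500M
               |     print(a_eps / b_eps)          # 2.0 -- A earns twice as much per share
               |     print(profit_a / profit_b)    # 0.2 -- but makes 1/5 of B's total profit
               | 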
               | AMD's profit was not close to 5% of Nvidia's in 2019.
               | That is what you directly claimed (as you're saying you
               | went by the last full fiscal year).
               | 
               | AMD had $341m in net income in their last full fiscal
               | year. Nvidia had $2.8 billion in net income for their
               | last full fiscal year. That's 12%, not 5%. And AMD's
               | operating income was 22% of Nvidia's for the last fiscal
               | year.
               | 
               | Using the trailing four quarters and operating income is the
               | superior way to judge the present condition of the two
               | companies, rather than using the prior fiscal year.
               | Especially given the rapid ongoing improvement in AMD's
               | business. Regardless, even going by the last full fiscal
               | year, your 5% figure is still wrong by a large amount.
               | 
               | Operating income is a key measure of profitability and
               | it's a far better manner of gauging business
               | profitability than net income at this point. That's
               | because the modern net income numbers are partially
               | useless as they will include such things as asset
               | gains/losses during the quarter. If you want to read up
               | more on it, Warren Buffett has pointed out the absurdity
               | of this approach on numerous occasions (if Berkshire's
               | portfolio goes up a lot, they have to report that as net
               | income, even though it wasn't a real profit generation
               | event).
               | 
               | I didn't say anything refuting your revenue figures,
               | because I wasn't refuting them. I'm not sure why you
               | mention that.
        
           | slavik81 wrote:
           | Aside from Navi support, which has already been mentioned,
           | what would you like to see?
        
           | Aeolun wrote:
           | It is frankly amazing AMD can keep up at all.
        
             | RantyDave wrote:
             | They are doing something very right over there.
        
         | twblalock wrote:
         | On the other hand, if Nvidia wants ARM to succeed (and why else
         | would they acquire it?), they can be a source of more
         | competition in the CPU market.
         | 
         | I don't really see how this deal makes the CPU market worse --
         | wasn't the ARM market for mobile devices basically dominated by
         | Qualcomm for years? Plus, the other existing ARM licensees
         | don't seem to be impacted. On the other hand, I do see a lot of
         | potential if Nvidia is serious about innovating in the CPU
         | space.
        
         | m3kw9 wrote:
         | Lol with the italics awful and your "seems to me" angle
        
           | starpilot wrote:
           | Hyperbole is how you get upvotes. No one really reacts to
           | wishy-washy answers, it's the "OMG world is ending"
           | provocative stuff that gets people engaged.
        
       | mikorym wrote:
       | Wow, didn't someone call this out recently on HN? I mean, someone
       | mentioned that this was going to happen. (Or rather, feared that
       | this was the direction things were going in.)
       | 
       | On a different topic, how would this influence Raspberry Pis
       | going forward?
        
       | axaxs wrote:
       | This seems fair, as long as it stays true (regarding open
       | licensing and neutrality). I've mentioned before, I think this
       | will ultimately be a good thing. NVidia has the gpu chops to
       | really amp up the reference implementation, which is a good thing
       | for competition in the mobile, set-top, and perhaps even desktop
       | space.
        
         | andy_ppp wrote:
         | No, what everyone thinks will happen is a pretend open ARM
         | architecture and Nvidia CPUs dominating. Nvidia isn't going to
         | license the best GPU features they start adding.
         | 
         | It's an excellent deal for NVIDIA of course, I'm certain they
         | intend to make the chips they produce much faster than the ones
         | they license (if they even ever release another open design) to
         | the point where buying CPUs from Nvidia might be the only game
         | in town. We'll have to see but this is what I expect to happen.
        
           | fluffything wrote:
           | That's not what everyone thinks.
           | 
           | NVIDIA already has a CPU architect team building their own
           | ARM CPUs with an unlimited ARM license.
           | 
           | ARM doesn't give NVIDIA a world-class CPU team like Apple's,
           | Amazon's or Fujitsu's. ARM's own cores are "meh" at best. Buying
           | such a team would also have been much cheaper than $40B.
           | 
           | Mobile ARM chips are meh, but nvidia doesn't have GPUs for
           | that segment, and their current architectures probably don't
           | work well there. The only ARM chips that are ok-ish are
           | embedded/IoT at < 1W power envelope. It would probably take
           | nvidia 10 years to develop GPUs for that segment, the margins
           | in that segment are razor thin ($0.10 is the cost of a full
           | SoC in that segment), and it is unclear whether applications
           | on that segment need GPUs (your toaster certainly does not).
           | 
           | The UK appears to require huge R&D investments in ARM to
           | allow the sale. And ARM's bottom line is $300 million/year in
           | revenue, which is peanuts for nvidia.
           | 
           | So if anything, ARM has a lot to win here with nvidia pumping
           | in money like crazy to try to improve ARM's CPU offering. Yet
           | this all seem super-risky because at the segments ARM is
           | competing at, RISC-V competes as well, and without royalties.
           | It is hard to compete against something that's free, even if
           | it is slightly less good. And chances are that over the next
           | 10 years RISC-V will have much better cores (NVIDIA
           | themselves started replacing ARM cores with RISC-V cores in
           | their GPUs years ago already...).
           | 
           | Either way, the claim that the 3D chess being played here
           | is obvious to everybody is false. To me this looks
           | like a bad buy for nvidia. They could have paid 1 billion for
           | a world class CPU team and just continue to license ARM
           | and/or switch to RISC-V chips. Instead they are spending 40
           | billion on a company that makes 300 million a year, makes
           | meh CPUs, is heavily regulated in the UK and the world, has
           | problems with China due to being in the West, have to invest
           | in the UK which is leaving the EU in a couple of weeks, etc.
        
             | yvdriess wrote:
             | > NVIDIA already has an CPU architect team building their
             | own ARM CPUs with an unlimited ARM license.
             | 
             | Famously, the Tegra SoCs, as used in the Nintendo Switch.
        
               | Followerer wrote:
               | No. Precisely the Tegra SoC within the Nintendo Switch
               | (X1) uses ARM Cores. Specifically A57 and A53. NVIDIA's
               | project to develop their own v8.2 ARM-based chip is
               | called Denver.
        
             | axaxs wrote:
             | If you only look at today's numbers, it doesn't make a ton
             | of sense. But looking at the market, the world is moving
             | -more- towards ARM, not away. The last couple years have
             | given us ARM in a gaming console, ARM in mainstream
             | laptops, ARM in the datacenter. Especially as the world
             | strives to go 'carbon neutral', ARM kills everything from
             | Intel/AMD. So with that in mind, I don't think it's a bad
             | buy, but time will tell.
             | 
             | RISC-V and ARM can coexist, but RISC-V in the mainstream is
             | far away due to nothing more than momentum. People won't
             | even touch Intel in a mobile device anymore, not just
             | because of power usage, but software compatibility.
        
             | andy_ppp wrote:
             | It's a bad buy unless they plan to use this to leverage a
             | better position. Your argument is that ARM is essentially
             | worthless to NVIDIA, or at least an _extremely_ high-stakes
             | $40bn bet.
             | I guess only time will tell but I think NVIDIA intend to
             | make their money back on this purchase and that won't be
             | through the current licensing model (as your own figures
             | show).
        
             | rrss wrote:
             | > ARM doesn't give NVIDIA a world-class CPU team like
             | apple's, amazon's or fujitsu.
             | 
             | Are you referring to the Graviton2 for Amazon? If so, you
             | might be interested to learn that ARM designed the cores in
             | that chip.
             | 
             | > (NVIDIA themselves started replacing ARM cores with
             | RISC-V cores in their GPUs years ago already...).
             | 
             | The only info on this I'm aware of is https://riscv.org/wp-
             | content/uploads/2017/05/Tue1345pm-NVIDI..., which says
             | nvidia is replacing some internal proprietary RISC ISA with
             | RISC-V, not replacing ARM with RISC-V.
        
           | axaxs wrote:
           | Perhaps not. I mean AMD and soon Intel basically compete with
           | themselves by pushing advances in both discrete GPU and APU
           | at the same time, one negating the need for the other.
           | 
           | I'm not claiming I'm right and you're wrong, of course. I
           | just think it's unfair to make negative assumptions at this
           | point, so wanted to paint a possible good thing.
        
             | andy_ppp wrote:
             | Agree, Nvidia's track record on opening things up is pretty
             | bad. They are very good at business though!
        
       | gumby wrote:
       | Amidst all the hand-wringing: RISC-V is at least a decade away
       | (at current pace + ARM's own example). But what if Google bought
       | AMD and put TPUs on the die?
        
         | askvictor wrote:
         | Because so many of Google's acquisitions have ended up doing
         | well...
        
           | sib wrote:
           | Android, YouTube, Google Maps, DoubleClick, Applied Semantics
           | (became AdSense), DeepMind, Urchin, ITA Software, etc.
           | 
           | I think Google has done ok.
        
             | dbcooper wrote:
             | Has it had any successes with hardware acquisitions?
        
               | Zigurd wrote:
               | Google has steadily gained smart home device market share
               | vs a very good competitor and is now the dominant player.
        
         | thrwyoilarticle wrote:
         | AMD's x86 licence isn't transferrable. To acquire them is to
         | destroy their value.
        
       | andy_ppp wrote:
       | This sort of stuff really isn't going to produce long term
       | benefits for humanity, is it?
       | 
       | Does anyone know if or how Apple will be affected by this? What
       | are the licensing agreements on the ISA?
        
         | gumby wrote:
         | ARM and Apple go way back (Apple was one of the three original
         | consortium members who founded ARM). I am sure they pay a flat
         | fee and have freedom to do whatever they like (probably other
         | than sublicensing).
         | 
         | Owning ARM would make no sense for them as they would gain no
         | IP but would have to deal with antitrust, which would force them
         | to continue licensing the IP to others, which is not a business
         | they are in.
         | 
         | If ARM vanished tomorrow I doubt it would affect Apple's
         | business at all.
        
         | UncleOxidant wrote:
         | This has been out there in the news for well over a month, I
         | guess I don't understand why Apple didn't try to make a bid for
         | ARM? Or why Apple didn't try to set up some sort of independent
         | holding company or consortium to buy ARM. They definitely have
         | the money and clout to have done something like that.
        
           | andy_ppp wrote:
           | I guess Apple have a perpetual ARM ISA license. They haven't
           | used the CPU designs from ARM for many years.
        
         | dharma1 wrote:
         | I doubt Apple's arm license will be affected. But I think they
         | will be getting tougher competition in the sense that Android
         | phones will be getting a fair bit better now - most of them
         | will be using super well integrated Nvidia GPU/ML chips in the
         | future because of this deal.
         | 
         | I think it will also bring Google and Nvidia closer together
        
       | deafcalculus wrote:
       | nVidia clearly wants to compete with Intel in data centers. But
       | how does buying ARM help with that? They already have an
       | architectural license.
       | 
       | Right now, I can see nVidia replacing Mali smartphone GPUs in low
       | to mid-end Exynos SoCs and the like. But it's not like nVidia to
       | want to be in that low-margin area.
        
         | fluffything wrote:
         | > I can see nVidia replacing Mali smartphone GPUs in low to
         | mid-end Exynos SoCs and the like.
         | 
         | Replacing these with what? What nvidia gpus can operate at that
         | power envelope ?
        
           | deafcalculus wrote:
           | Lower power versions of what they put in Tegra / Switch? Or
           | perhaps they can whip up something in 1-2 years. I'd be
           | astonished if nVidia doesn't take any interest in smartphone
           | GPUs after this acquisition.
        
             | fluffything wrote:
             | A Nintendo Switch has ~2 hours of battery...
             | 
             | There is a big difference between having interest in a
             | market, and being able to compete in it. There are also
             | many trade-offs.
             | 
             | Nobody has designed yet a GPU architecture that works at
             | all from 500W HPC clusters to sub 1W embedded/IoT systems,
             | much less that works well to be a market leader in all
             | segments. So AFAICT whether this is even possible is an
             | open research problem. If this were possible, there would
             | already be nvidia GPUs at least in some smartphones and IoT
             | devices.
        
       | m0zg wrote:
       | On the one hand, this is bad news - I would prefer ARM to remain
       | independent. But on the other, from a purely selfish standpoint,
       | NVIDIA will likely lean on Apple pretty hard to get its GPUs into
       | Apple devices again, which bodes well for GPGPU and deep learning
       | applications.
       | 
       | Apple is probably putting together a RISC-V hardware group as we
       | speak. The Jobs ethos will not allow them to depend this heavily
       | on somebody else for such a critical technology.
        
         | buzzerbetrayed wrote:
         | A few weeks ago there were rumors that ARM was looking to be
         | sold to Apple and Apple turned them down. If an NVIDIA
         | acquisition is such a deal breaker for Apple, why wouldn't they
         | have just acquired ARM to begin with?
        
       | patfla wrote:
       | How does this get past the FTC? Oh right, it's been a dead letter
       | since the Reagan administration. Monopoly 'R Us.
       | 
       | Never mind the FTC - the rest of the semiconductor industry has
       | to be [very] strongly opposed.
        
       | MangoCoffee wrote:
       | https://semiwiki.com/ip/287846-tears-in-the-rain-arm-and-chi...
       | 
       | Does this mean Nvidia will have to deal with the hot mess at
       | China ARM?
        
         | justincormack wrote:
         | Allegedly it has been sorted, before this deal was announced.
         | No idea how.
        
       | DCKing wrote:
       | This is terrible. Not really just because of Nvidia - which has a
       | lot of problems, as I've previously commented on the rumors of this
       | [1] - but Nvidia's ownership completely changes ARM's incentives.
       | 
       | ARM created a business model for itself where they had to act as
       | a "BDFL" for the ARM architecture and IP. They made an
       | architecture, CPU designs, and GPU designs for others. They had
       | no stake in the chip making game, and they had others - Samsung,
       | Apple, Nvidia, Qualcomm, Huawei, Mediatek, Rockchip and loads of
       | others make the chip. Their business model was to make the ARM
       | ecosystem accessible for as many companies as possible, so they
       | could sell as many licenses as possible. In that way, ARM's
       | business model enabled a very diverse and thriving ARM market. I
       | think this is the _sole_ reason we see ARM eating the chip world
       | today.
       | 
       | This business model would continue to work perfectly fine as a
       | privately held company, or being owned by a faceless investor
       | company that wants you to make as much money as possible. But
       | it's not fine if you are owned by a company that wants to use you
       | to control their own position in the chip market. There is no way
       | Nvidia (or any other chip company, but as laid out previously
       | might even be more concerning) will spend 40 billion on this
       | without them deliberately or inadvertently destroying ARM's open
       | CPU and GPU ecosystem. Will Nvidia allow selling ARM licenses to
       | competitors of Nvidia's business? Will Nvidia reserve ARM's best
       | IP as a selling point for its own chips? Will Nvidia allow Mali
       | to continue existing? Any innovations ARM made previously it sold
       | to anyone mostly indiscriminately (outside of legal
       | restrictions), but now, every time, the question must be asked
       | "does Nvidia have a better propietary purpose for this?". For any
       | ARM chip maker the situation will be that Nvidia is both your
       | ruthless competitor, but it also sells you the IP you need to
       | build your chips.
       | 
       | EDIT: ARM's interests up to last week were to create and empower
       | as many competitors for Nvidia as possible. They were good at
       | that, and it was the root of the success of the ARM ecosystem. That
       | incentive is completely gone now.
       | 
       | Unless Nvidia leaves ARM alone (and why would they spend $40B on
       | that??), this has got to be the beginning of the end of ARM's
       | golden age.
       | 
       | [1]: https://news.ycombinator.com/item?id=24010821
        
         | klelatti wrote:
         | Precisely, plus just consider the information that Nvidia will
         | have on all its competitors who use Arm IP.
         | 
         | - It will know of their product plans (as they will need to buy
         | licenses for new products).
         | 
         | - It will know their sales volumes by product (as they will
         | need to pay fees for each Arm CPU sold).
         | 
         | - If they need technical help from Arm in designing a new SoC
         | then the details of that engagement will be available to
         | Nvidia.
         | 
         | How does this not give Nvidia a completely unfair advantage?
        
           | DCKing wrote:
           | I wouldn't use the term "unfair" here. There's also just
           | three x86 licensees in the world and people don't usually
           | consider that an affront. You buy, you control, that's how
           | the world works.
           | 
           | But I do think it's important that we recognize that we're
           | going from a position of tremendous competitiveness to a much
           | less competitive situation. And that will be a situation
           | where ARM will be tightly controlled and much less conducive
           | to the innovation we've seen in the last years.
        
             | klelatti wrote:
             | That's a good point. I guess it is "unfair" but that isn't
             | necessarily an argument against it - as you say lots of
             | things are unfair.
             | 
             | But, given the high market share of Arm in several markets
             | allowing one firm the ability to use that market share to
             | gain competitive advantage in related markets seems to me
             | to be deeply problematic.
        
             | i386 wrote:
             | > You buy, you control, that's how the world works.
             | 
             | Only in countries with poor regulators like the states does
             | it work like this.
        
             | a1369209993 wrote:
             | > There's also just three x86 licensees in the world and
             | people don't usually consider that an affront.
             | 
             | Hi! Counterexample here.
        
             | bigyikes wrote:
             | Noob question: how does the x86 licensing work? Does Intel
             | still own the rights? Why would they license to AMD? Why
             | don't they license to others?
        
               | VectorLock wrote:
               | In simplest terms, the AMD & Intel perpetual license goes
               | all the way back to the early 80s and Intel has tried to
               | legally harangue their way out of it ever since, with
               | limited success.
        
               | klelatti wrote:
               | Very briefly:
               | 
               | - They don't license because they can make a lot more
               | money manufacturing the chips themselves.
               | 
               | - AMD also has the right to x86 because Intel originally
               | allowed them to build x86 compatible chips (some
               | customers insisted on a 'second source' for cpus) and
               | following legal action and settlements between the two
               | companies over the years there is now a comprehensive
               | cross licensing agreement in place. [1]
               | 
               | - Note that AMD actually designed the 64-bit version of x86
               | that is used in most laptops / desktops and servers these
               | days.
               | 
               | [1] https://www.kitguru.net/components/cpu/anton-
               | shilov/amd-clar...
        
             | tboerstad wrote:
             | The three licensees would be Intel, AMD and VIA
        
               | dralley wrote:
               | VIA doesn't have a license for the AMD64 instruction set,
               | however. Intel and AMD did a cross-licensing deal so they
               | have a co-equal position.
        
               | StillBored wrote:
               | Which hasn't kept them from building 64-bit cores with
               | everything including AVX-512.
               | 
               | https://fuse.wikichip.org/news/3099/centaur-unveils-its-
               | new-...
               | 
               | IIRC if you watch the "Rise of the Centaur" documentary they
               | talk about the Intel lawsuit, and the corresponding
               | counter suit that they won. Which makes the whole thing
               | sound like MAD.
               | 
               | More interesting there is
               | https://en.wikichip.org/wiki/zhaoxin/kaixian
        
               | Dylan16807 wrote:
               | Thankfully those patents are about to expire.
        
         | pier25 wrote:
         | > _There is no way Nvidia will spend 40 billion on this without
         | them deliberately or inadvertently destroying ARM's open CPU and
         | GPU ecosystem_
         | 
         | But why would a company spend that much money to buy a company
         | and destroy it afterwards?
        
           | ltbarcly3 wrote:
           | https://songma.github.io/files/cem_killeracquisitions.pdf
        
           | batmansmk wrote:
           | Oracle and Sun at $8B.
        
           | bwanab wrote:
           | It's not going to destroy the business; it's going to destroy
           | the current business model. The point is that the only way
           | this deal can make sense for Nvidia is to use ARM's IP as a
           | competitive advantage over its competitors. Until now,
           | ARM's value proposition has been IP neutrality for the
           | various user companies.
        
           | DCKing wrote:
           | Nvidia will certainly try to get as much money out of ARM's
           | R&D capabilities, existing IP, and future roadmap as they
           | can. They will get their money's worth - at worst they will
           | fail trying. In that sense, they won't destroy "ARM the
           | company" or "ARM the IP". But Nvidia will have no interest in
           | maintaining ARM's business model whereby ARM fosters a
           | community of Nvidia competitors - they have an interest in
           | the opposite. Therefore they very likely will destroy "ARM
           | the ecosystem".
        
         | melbourne_mat wrote:
         | I think it's wonderful news that Arm is joining the ranks of
         | giant American tech oligopolies. This is a win for freedom and
         | increases prosperity for all.
         | 
         | /s
        
         | oldschoolrobot wrote:
         | exactly
        
         | edderly wrote:
         | I agree with the general sentiment here, but ARM is not exactly
         | Snow White. It's an open secret that ARM was (and still is)
         | selling the CPU design at a discount if you integrated their
         | Mali (GPU). This isn't relevant to Nvidia today, but it was
         | when they were in the mobile GPU space. Also this caused
         | obvious problems for IMGtec and other smaller GPU players like
         | Vivante.
        
           | klelatti wrote:
           | Bundling is not necessarily problematic: it happens at your
           | local supermarket all the time!
           | 
           | What would be an issue would be if Arm used their market
           | power in CPUs to try to control the GPU market - e.g. you
           | can't have the latest CPU unless you buy a Mali GPU with it.
        
           | hajile wrote:
           | Bundling isn't necessarily anti-competitive provided ARM
           | isn't taking a loss selling their chip. I'll admit that
           | things aren't actually free-market here because copyright and
           | patent monopolies apply.
           | 
           | There are three possibilities here: ARM's design is
           | approximately the same as the competitor's, ARM's design is
           | inferior to the competitor's, and ARM's design is superior to
           | the competitor's.
           | 
           | If faced with two equivalent products, staying with the same
           | supplier for both is best (especially in this case where the
           | IP isn't supply-limited). The discount means a reduction in
           | costs to make the device. Instead of ARM making a larger
           | profit, their customers keep more of their money. In turn,
           | the super-competitive smartphone market means those savings
           | will directly go to customers.
           | 
           | In cases where ARM's design is superior, why would they
           | bundle? If they did, getting a superior product at an even
           | lower price once again just means less money going to the big
           | corporation and more money that stays in the consumer's
           | pocket.
           | 
           | The final case is where ARM has an inferior design. I want to
           | sell the most performance/features for the price so I can
           | sell more phones. I have 2 choices: slight discount on the
           | CPU but bundled with an inferior GPU or full price for the
           | CPU and full price for a superior GPU. The first option
           | lowers phone price. The second option offers better features
           | and performance. For the high-end market, I'm definitely not
           | going with the discount because peak performance reigns
           | supreme. In the lesser markets, it's a calculation of price
           | for total performance and the risk that consumers might
           | prefer an extra few FPS for the cost of another few dollars.
           | 
           | Finally, there are a couple small players like Vivante or
           | Imagination Technologies, but the remaining competitors in
           | the space (Intel, AMD, Nvidia, Qualcomm, Samsung, etc) aren't
           | going to be driven under by bundle deals, so bundling seems
           | to be pretty much all upside for consumers who stand to save
           | money as a result.
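           | 
           | A toy sketch of that price-for-performance calculation in the
           | "inferior design" case above (all numbers are hypothetical,
           | purely to make the trade-off concrete):
           | 
           |     # Phone-maker's choice: discounted CPU + weaker bundled GPU,
           |     # or full-price CPU + stronger third-party GPU.
           |     bundled   = {"cost": 18.0 + 6.0 - 2.0, "gpu_perf": 70}    # with discount
           |     unbundled = {"cost": 20.0 + 9.0,       "gpu_perf": 100}   # no discount
           |     for name, o in [("bundled", bundled), ("unbundled", unbundled)]:
           |         print(name, o["cost"], round(o["gpu_perf"] / o["cost"], 2))
           |     # A budget phone optimizes perf per dollar; a flagship simply
           |     # takes the highest gpu_perf and eats the extra cost.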
        
           | Followerer wrote:
           | " It's an open secret that ARM was (and still is) selling the
           | CPU design at a discount if you integrated their Mali (GPU)."
           | 
           | Why is that bad? Not only is it common business practice (the
           | more you buy from us, the cheaper we sell), it also makes
           | sense from the support perspective. Supporting the integration
           | between their cores and a different GPU would be more work
           | for them than integration of their cores with their own GPUs.
           | 
           | That's why companies expand to adjacent markets: efficiency.
           | 
           | A completely different thing would be to say: "if you want
           | our latest AXX core, you _have_ to buy our latest Mali GPU".
           | That's bundling, and that's illegal.
        
           | DCKing wrote:
           | I'm sure that ARM is not a saint here in the sense that they
           | would also have an incentive to milk their licensees as much
           | as possible. Now they will keep having that incentive, but
           | also the terrible incentive to actively outcompete their
           | licensees which is much worse.
        
         | 013a wrote:
         | I tend to agree, but there may be another angle to this which
         | could prove beneficial to consumers. Right now, the only ARM
         | chips which are actually competitive with desktop chips are
         | from Apple, and are obviously very proprietary. If this
         | acquisition enables Nvidia to begin producing ARM chips at the
         | same level as Apple (somehow, who's to say how, that's on them)
         | then that would help disrupt the AMD/Intel duopoly on Windows.
         | It's been a decade; Qualcomm has had the time to try and compete
         | here, and has failed miserably.
         | 
         | I doubt Nvidia would substantially disrupt or cancel licensing
         | to the many third-rate chip designers you listed. But, if they
         | can leverage this acquisition to build Windows/Linux CPUs that
         | can actually compete with AMD and Intel, that would be a win
         | for consumers. And Nvidia has shown interest in this in the
         | past.
         | 
         | Yes, it's a massive disruption to the status quo. But it may be
         | a good one for consumers.
        
           | klelatti wrote:
           | But Nvidia has an Arm architecture license already - the same
           | as Apple - so can build Arm chips to whatever design it wants
           | (and it does in Tegra).
           | 
           | This is nothing to do with extending Nvidia's ability to use
           | Arm IP in its own products.
        
             | 013a wrote:
             | Nvidia is a total powerhouse when it comes to chip design.
             | Most people here are looking at this from the angle of "how
             | does ARM benefit Nvidia", but I think its more valuable to
             | consider "how does Nvidia benefit ARM". In 2020, given what
             | we know about Ampere, I really don't think there's another
             | company out there with better expertise in microprocessor
             | design (but, to be fair, let's say top 3 next to Apple and
             | AMD). Now, they have more of the stack in-house, which may
             | help produce better chips.
             | 
             | Yes, ARM mostly just does licensing, but it may turn out
             | that this acquisition gives Nvidia positive influence over
             | future ISA and fundamental design changes which emerge from
             | their own experience building microprocessors.
             | 
             | Maybe that just benefits Nvidia, or maybe all of their
             | licensees; I don't know. But, I think the high price of this
             | acquisition should signal that Nvidia wants ARM for more
             | than just collecting royalties (or, jesus, the people here
             | who think they're going to cancel the licenses or
             | something, that's a wild prediction).
             | 
             | The other important point is Mali, which has a very obvious
             | and natural synergy with Nvidia's wheelhouse. Another
             | example of Nvidia making ARM better; Nvidia is the leader
             | in graphics, this is no argument, so their ability to
             | positively influence Mali (whether by actually improving
             | it, or replacing it with something GeForce) may be
             | beneficial to the OEMs who use it.
        
               | DCKing wrote:
               | > Nvidia is a total powerhouse when it comes to chip
               | design. Most people here are looking at this from the
               | angle of "how does ARM benefit Nvidia", but I think its
               | more valuable to consider "how does Nvidia benefit ARM".
               | In 2020, given what we know about Ampere, I really don't
               | think there's another company out there with better
               | expertise in microprocessor design (but, to be fair, let's
               | say top 3 next to Apple and AMD). Now, they have more of
               | the stack in-house, which may help produce better chips.
               | 
               | In my view you have this completely backwards. I think
               | the opposite is true and that Nvidia is not a powerhouse
               | CPU designer at all. They make extremely impressive GPUs
               | certainly, but that does not automatically translate to
               | great capabilities in CPUs. In terms of CPUs they have so
               | far either used standard ARM designs or attempted
               | their own Project Denver custom architecture, which is
               | not bad but has not impressed CPU-wise either. In this
               | area Nvidia would _need_ ARM - primarily for themselves.
               | 
               | > The other important point is Mali, which has a very
               | obvious and natural synergy with Nvidia's wheelhouse.
               | Another example of Nvidia making ARM better; Nvidia is
               | the leader in graphics, this is no argument, so their
               | ability to positively influence Mali (whether by actually
               | improving it, or replacing it with something GeForce) may
               | be beneficial to the OEMs who use it.
               | 
               | I know you're only entertaining the thought, but the
               | image of Nvidia shipping HDL designs of Geforce IP to
               | Samsung or Mediatek in the short term future seems
               | completely alien to me. Things would need to change
               | drastically at Nvidia for them to ever do this.
               | 
               | Certainly Nvidia has the capabilities to sell way better
               | graphics to the ARM ecosystem, and very likely only one
               | line of GPUs can survive, but it just seems _extremely_
               | unlike Nvidia to ever license Geforce IP to their
               | competitors.
        
               | easde wrote:
               | Nvidia actually did try to license out its GPU IP a few
               | years back: https://www.anandtech.com/show/7083/nvidia-
               | to-license-kepler...
               | 
               | I don't believe they ever closed a deal, but clearly
               | Nvidia had some interest in becoming an IP vendor.
               | Perhaps the terms were too onerous or the price too high.
        
               | smolder wrote:
               | AMD's answer to Ampere won't be shabby, based on info about
               | next generation consoles. They're also widening the gap
               | on CPUs with Zen to where ARM won't have an easy time
               | making inroads on server/workstation.
               | 
               | On the Intel side, the process obstacles have been
               | tragic, but they have plenty of hot products and plenty
               | of x86 market share to lose, or in other words, plenty of
               | time to recover CPU performance dominance.
        
               | 013a wrote:
               | I hope so. And when AMD's graphics cards and Intel's
               | processors become good again, they're welcome to reclaim
               | a top spot. But, until then, they are woefully behind.
        
             | DesiLurker wrote:
              | Perhaps they can be part of an 'early insider' program to
              | get access to the next-gen architecture improvements, and
              | use that to steer towards integrating their own GPUs for a
              | premium instead of the dinky little Malis.
        
         | Wowfunhappy wrote:
         | To play devil's advocate a bit, are nVidia's incentives
         | necessarily so different? Their goal will be to make as much
         | money as possible, and it's clear that licensing has been a
         | winning strategy for ARM.
         | 
         | Samsung comes to mind as another company that makes their own
          | TVs, phones, SSDs, etc., but is also perfectly happy to license
         | the underlying screens and chips in those products to other
         | companies. From my vantage point, the setup seems to be working
         | well?
        
           | marcosdumay wrote:
           | Screen manufacture is a highly competitive market. ARM
           | licensing isn't.
        
           | DCKing wrote:
            | It could be that Nvidia just wants ARM's profits and will
            | leave them alone, but I don't understand why Nvidia would
            | spend 40 billion dollars on that. They spent 40 billion on
            | control of a company; why would they do that if they were
            | just going to leave it be? Surely they want to exercise that
            | control in some way for their own goals. Especially a company
            | like Nvidia, which (in my linked comment) has a proven track
            | record of not understanding how to collaborate with others.
           | 
            | EDIT: Let's be clear that ARM's incentive last week was to
            | create and empower as many competitors for Nvidia as
            | possible. They were good at that, and it was the root of the
            | success of the ARM ecosystem. That incentive is completely
            | gone now.
           | 
            | I'm guessing Samsung has a track record that would give me a
            | little more confidence in the situation if they'd taken over
            | ARM here, but in general ARM's sale to SoftBank, and thereby
            | its exposure to lesser competitive interests, has been
            | terrible. They could have remained a private company.
        
             | Wowfunhappy wrote:
             | > They spend 40 billion on the control of a company, why
             | would they do that if they just would leave them be?
             | 
             | Well, the optimistic reason would be talent-share. nVidia
             | has a lot of chip designers, and ARM has a lot of chip
             | designers, and having all of them under one organization
             | where they can share discoveries, research and ideas could
             | benefit all of nVidia's products.
        
               | DCKing wrote:
               | All I can say to that is that I don't share your
               | optimism. Explaining this as an acquihire sounds nice,
               | but that seems a thin explanation of this purchase given
               | the key role ARM has for many of Nvidia's competitors and
               | the ludicrous amount of money Nvidia put on the table
               | here. We'll see what the future holds - I certainly hope
               | the open ecosystem can survive.
        
           | klelatti wrote:
           | How do we know that Samsung hasn't stifled some potential
           | competitors by refusing to sell them screens or by selling
           | them an inferior product?
        
             | baddox wrote:
             | Samsung makes (excellent) screens for iPhones, which are
             | huge competitors to Samsung's own flagship phones, but
             | Samsung still seems happy to take the profits from the
             | screen sales. If there are smaller potential competitors
                | that Samsung won't work with, it's most likely because the
             | scale is too small to be in their economic interest, not
             | because they're rejecting profits in order to stifle
             | potential competitors.
        
               | simias wrote:
               | iPhones sell massively well though, to the point where it
               | would be a big loss if Apple went to another company for
               | their screens. You just can't screw with a company the
               | size of Apple without consequences. It's not like they'd
               | go "oh no, no more iPhone now I guess :'(" if Samsung
               | decided not to sell them screens anymore.
               | 
               | The problem is more with smaller companies that could be
               | destroyed before they even get a chance to compete. Those
               | can be bullied pretty easily by a company the size of
               | Samsung.
        
               | baddox wrote:
               | I don't really see how it works both ways. It's hard to
               | imagine a bigger scarier competitor to Samsung than the
               | Apple iPhone, yet we agree Samsung is happy selling
               | screens to Apple.
        
               | bavell wrote:
               | Because there are a lot more Androids and other devices
               | that need screens. If they didn't make screens for Apple,
               | a competitor would.
        
               | babypuncher wrote:
                | Apple is a much bigger company than Samsung; Samsung
                | can't realistically turn down a contract that big when LG
                | is lying in wait to take all that money.
        
               | sfifs wrote:
                | Not really. Revenues and profits of the Samsung group and
                | Apple are roughly comparable, and in many ways Samsung's
                | revenue streams are more diversified and sustainable...
                | stock valuation notwithstanding.
        
               | runeks wrote:
               | > Samsung makes (excellent) screens for iPhones, which
               | are huge competitors to Samsung's own flagship phones,
               | but Samsung still seems happy to take the profits from
               | the screen sales.
               | 
               | What markup does Apple pay for Samsung OLED displays
               | compared to Samsung's other OLED customers? I think this
               | is highly relevant if you want to use it as an example.
               | Because if the markup for Apple is 5x that of other
               | buyers of Samsung OLED displays then you certainly can't
               | say Samsung is "happy" to sell them to Apple.
               | 
               | Same for nVidia-owned-ARM: if they're happy to sell ARM
               | licenses at 5x the previous price, then that will surely
               | increase sales for nVidia's own chips. I guess my overall
               | point is: a sufficiently high asking price is equivalent
               | to a refusal to sell.
        
               | paulmd wrote:
               | demanding information that you know nobody will be able
               | to produce is an unethical debating tactic.
               | 
               | obviously nobody but Samsung and their customers will
               | know that information, and anyone who could reveal it is
               | under NDA.
               | 
               | Apparently the prices are good enough that Apple doesn't
               | go elsewhere.
        
             | hajile wrote:
             | Resources are limited. If a Samsung phone and a Motorola
             | phone need the same screen and there's not enough to go
             | around, what happens?
             | 
             | A bidding war of course.
             | 
             | On the surface, it's capitalism at work. In reality,
             | Samsung winds up in a no-lose situation. If Motorola wins,
             | Samsung gets bigger margins due to the battle. If Samsung
             | wins, they play "pass around the money" with their
             | accountants, but their only actual costs are those of
             | production.
             | 
             | I'd note that chaebol wouldn't exist in a free market. They
             | rely on corruption of the Korean government.
        
       | runeks wrote:
       | The root problem here is the concept of patents -- at least as
       | far as I can see.
       | 
        | If patents did not exist, and nVidia were to close down ARM and
        | tell people "_no more ARM CPUs; only nVidia CPUs from now on_",
        | then a competitor offering ARM-compatible ISAs would quickly
        | appear. But in the real world, nVidia just bought the monopoly
        | rights to sue such a competitor out of existence.
       | 
       | It's really no wonder nVidia did this given the profits they can
       | extract from this monopoly (on the ARM ISA).
        
       | sizzle wrote:
       | Holy shit this is huge. Did anyone see this coming?!?
        
       | jbotz wrote:
       | There is now but one choice... RISC-V, full throttle.
        
       | redwood wrote:
       | The British should never have allowed foreign ownership of their
       | core tech
        
         | bencollier49 wrote:
         | I'm so exercised about this that I'm setting up a think tank to
         | actively discuss UK control of critical tech (and "golden
         | geese" as per others in this thread). If you're in tech and
         | have a problem with this, please drop me a line, I'm
         | @bencollier on Twitter.
        
         | hajile wrote:
          | Britain and the US are politically and economically entwined
          | with each other. For a country, keeping core technology at home
          | is tied to defense (don't buy your guns from your competitor).
          | If Britain doesn't intend to go to war with the US, then there
          | isn't any real defense loss (I'd also point out that making IP
          | "blueprints" for a core is different from manufacturing the
          | core itself). If Britain were to have a real issue, it would be
          | the US locking down the F-35 jets it sells to its allies.
        
         | ranbumo wrote:
          | Yes. It'd have been reasonable to block sales to non-EU parties
          | for national security reasons.
          | 
          | Now Arm is yet another US company.
        
           | scarface74 wrote:
           | Isn't the UK leaving the EU?
        
             | kzrdude wrote:
              | That just means that their home/core market got smaller, so
              | they should have protected ARM to keep it inside the UK,
              | then.
        
             | mkl wrote:
             | Yes, but the Brexit referendum was only 1 month before
             | SoftBank acquired Arm Holdings. The deal was probably
             | already in progress, and finalised before the UK had any
             | real policies about Brexit, so EU requirements would have
              | been reasonable if decided ahead of time. But the timing may
             | also explain the lack of any national security focused
             | requirement (general confusion).
        
       | alphachloride wrote:
       | I hope the stonks go up
        
       | shmerl wrote:
        | That's nasty, Nvidia, very nasty. But on the other hand, maybe it
        | will be a motivation for everyone to use ARM less.
        | 
        | I quite expect AMD, for example, to drop ARM chips from their
        | hardware. Others should also follow suit. Nvidia is an awful
        | steward for ARM.
        
       | bitxbit wrote:
       | They need to block this deal. A real clean case.
        
         | iso8859-1 wrote:
         | Who needs to block it? And why is it a clean case?
        
       | zeouter wrote:
        | Eeek. My gut reaction to this is: could we have less powerful
        | conglomerates, please?
        
       | bleepblorp wrote:
       | This isn't going to do good things for anyone who doesn't own a
       | lot of Nvidia stock.
       | 
       | This is going to do especially bad things for anyone who needs to
       | buy a cell phone or the SoC that powers one. There's no real
       | alternative to ARM-based phone SoCs. Given Nvidia's business
       | practices, any manufacturer who doesn't already have a perpetual
       | ARM license should expect to have to pay a lot more money into
       | Jensen Huang's retirement fund going forward. These costs will be
       | passed on to consumers and will also provide an avenue for
       | perpetual license holders to raise their consumer prices to
       | match.
        
         | jacques_chester wrote:
          | > _This isn't going to do good things for anyone who doesn't
         | own a lot of Nvidia stock._
         | 
         | If it makes you feel any better, studies of acquisitions show
         | that most of them are duds and destroy acquirer shareholder
         | value.
        
           | imtringued wrote:
            | Well, the commenter is actively worried about Nvidia
            | destroying shareholder value. If you destroy ARM in the
            | process of acquiring it, the combined company will be worth
            | less in the long run. If the acquisition was actually
            | motivated by synergy, then Nvidia could have gotten away with
            | a much cheaper license from ARM.
        
         | bgorman wrote:
          | Android worked on x86 and MIPS in the past, so it could
          | presumably be ported to work with RISC-V.
        
           | saagarjha wrote:
           | It might pretty much work today, perhaps with a few config
           | file changes.
        
           | Zigurd wrote:
            | MIPS support was removed from the Android NDK, though older
            | versions of the NDK still stand a good chance of working. So
            | app developers with components that need the NDK have a bit
            | of work to do to remain compatible.
        
           | bleepblorp wrote:
           | Android still works on x86-64; indeed there are quite a few
           | actively maintained x86-64 Android ports that are used, both
           | on bare metal PCs and virtualized, for various purposes.
           | 
           | The problem is that there are no x86 SoCs that are
           | sufficiently power efficient to be battery-life competitive
           | with ARM SoCs in phones.
        
             | seizethegdgap wrote:
             | Can confirm, just spun up Bliss OS on my Surface Book. Not
             | at all the smoothest experience I've ever had using an
             | Android tablet, but it's nice.
        
           | janoc wrote:
            | You would first need actual working RISC-V silicon that would
            | be worth porting to, not the essentially demo chips with poor
            | performance that are around now.
            | 
            | RISC-V is a lot of talk and hype, but the actual silicon that
            | you could buy and implement in a product is hard to come by,
            | with the exception of a few small microcontroller efforts
            | (GigaDevice, Sipeed).
        
       | zmmmmm wrote:
        | There are many reasons for this, but one perspective I am curious
        | about is how much this is actually a defensive move against
        | Intel. nVidia knows Intel is busy developing dedicated graphics
        | via Xe, and if nVidia just allows that to continue, they are
        | going to find themselves simultaneously competing with and
        | dependent on a vendor that owns the whole stack their platform
        | depends on. It is not a place I would want to be, even accounting
        | for how incompetent Intel seems to have been for the last 10
        | years.
        | 
        | Edit: yes, I meant nVidia, not AMD!
        
         | fluffything wrote:
          | Nvidia could have bought a world-class team of CPU architects
          | and built their own ARM or RISC-V chips (NVIDIA already has a
          | perpetual ARM license).
        
         | Tehdasi wrote:
          | Intel has been promising high-end graphics for decades while
          | delivering low-end integrated graphics as a feature of their
          | CPUs. Which makes sense: the market for CPUs is worth more than
          | the market for game-oriented GPUs. The rise of GPUs used in AI
          | might change this calculation, but I doubt it. I suspect that
          | nVidia would just like to move into the CPU market.
        
         | jml7c5 wrote:
         | How does AMD enter into this? Did you mean Nvidia?
        
           | zmmmmm wrote:
           | ouch I wrote a whole comment and systematically replaced
           | Nvidia with AMD ... kind of impressive.
           | 
           | Thanks!
        
       | yogrish wrote:
        | SoftBank is a true banking company: it invested in (bought) ARM
        | and is now selling it for a meagre profit. And this from a
        | company that says it has a 300-year vision.
        
       | krick wrote:
       | We've been repeating the word "bad" for the last couple of weeks
       | here, but I don't really remember any insights on what can happen
       | long term (and I'm asking, because I have absolutely no idea). I
       | mean, let's suppose relationships with Qualcomm don't work out
       | (which we all kind of suspect already). What's the alternative?
       | Is it actually possible to create another competitive
       | architecture at this point? Does it take 5, 10 years? Is there
       | even a choice for some other (really big) company, that doesn't
       | want to depend on NVIDIA?
        
       | walterbell wrote:
       | Talking points from the founders of Arm & Nvidia:
       | https://www.forbes.com/sites/patrickmoorhead/2020/09/13/its-...
       | 
       |  _> Huang told me that first thing that the combined company will
       | do is to, "bring NVIDIA technology through Arm's vast network."
       | So I'd expect NVIDIA GPU and NPU IP to become available quickly
       | to smartphone, tablet, TV and automobile SoC providers as quickly
       | as possible._
       | 
        |  _> Arm CEO Simon Segars framed it well when he told me, "We're
       | moving into a world where software doesn't just run in one place.
       | Your application today might run in the cloud, it might run on
       | your phone, and there might be some embedded application running
       | on a device, but I think increasingly and with the rollout of 5g
       | and with some of the technologies that Jensen was just talking
       | about this kind of application will become spread across all of
       | those places. Delivering that and managing that there's a huge
       | task to do."_
       | 
        |  _> Huang ... "We're about to enter a phase, where we're going
       | to create an internet that is thousands of times bigger than the
       | internet that we enjoy today. A lot of people don't realize this.
       | And so, so we would like to create a computing company for this
       | age of AI."_
        
         | mlindner wrote:
          | The Arm CEO really doesn't understand what's going on. There is
          | no future where everything runs in the cloud. That simply
          | cannot happen, for legal reasons. Additionally, the internet is
          | getting more balkanized, and that works further against the
          | idea of the cloud. AI will not be running in the cloud; it will
          | be running locally. Apple sees this but many others don't yet.
          | You only run AI in the cloud if you want to monetize it with
          | advertising.
        
           | scalablenotions wrote:
            | So all these AI SaaS companies are fake?
        
         | topspin wrote:
         | My instincts are telling me this is smoke and mirrors to
         | rationalize a $40E9 deal. The only part of that that computes
         | at all is the GPU integration, and that only works if NVIDIA
          | doesn't terrorize Arm licensees. The rest is buzzwords.
        
           | cnst wrote:
           | There's a pretty big assumption that the deal even gets
           | approved.
           | 
            | Even if it does get approved, and even if NVIDIA decides not
            | to screw up any of the licensees, the whole notion of NVIDIA
            | being capable of doing so to any of their (NVIDIA's)
            | competitors will surely mean extra selling points for all of
            | ARM's competitors like MIPS, RISC-V, etc.
        
             | sharken wrote:
              | Hopefully this deal will be stopped in its tracks by
              | regulators.
              | 
              | If not, then it's very likely that NVIDIA will do
              | everything in its power to increase prices for ARM designs.
        
           | ethbr0 wrote:
           | Or Jensen believes that smart, convergent IoT is at a tipping
           | point, and this is a bet on that.
           | 
           | Not all fashionable words are devoid of meaning.
        
             | systemvoltage wrote:
              | Makes sense. IoT was a buzzword in 2013. It is now a mature
              | ecosystem and we've gotten a good taste and smell for it.
              | It's on its way towards the plateau of productivity on the
              | Gartner curve, if I were to guess.
        
               | ncmncm wrote:
                | The sole meaning of the "Gartner Curve" is the amount of
                | money available to be spent on hype that Gartner can hope
                | to get. It has the most tenuous imaginable relationship
                | with the market for actual, you know, products.
        
               | manigandham wrote:
               | The Gartner Hype Cycle might be branded but the basic
               | delta between overhyped technology vs actual production
               | processes has been observed for a long time.
        
           | Ecco wrote:
           | Want to be nerdy and use powers-of-ten? Fine by me! But then
           | please go all the way: $4E10!!!
        
             | mikkelam wrote:
              | That's normalized scientific notation; there's nothing
              | wrong with 40E9 either. This is also allowed in several
              | programming languages, such as Python.
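              | 
              | For instance, a quick illustration in a Python REPL (just a
              | sketch; E-notation literals in Python are floats, and the
              | format() machinery normalizes the exponent):
              | 
              |     >>> 40e9 == 4e10 == 40_000_000_000.0
              |     True
              |     >>> f"{40e9:.0e}"    # normalized form
              |     '4e+10'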
        
             | jabl wrote:
             | Perhaps the parent prefers engineering notation, which uses
             | exponents that are a multiple of three?
        
               | topspin wrote:
               | Got me.
        
         | baybal2 wrote:
         | Bye bye ARM Mali =(
        
           | janoc wrote:
           | Frankly, good riddance.
        
           | paulmd wrote:
            | This is NVIDIA's "Xbox One/PS4" moment. AMD has deals with
            | console manufacturers; their clients pay for a huge amount of
            | R&D that gets ported back into AMD's desktop graphics
            | architecture. Even if AMD doesn't make basically anything on
            | the consoles themselves, it's a huge win for their R&D.
           | 
           | Now, every reference-implementation ARM processor
           | manufactured will fund GeForce desktop products,
            | datacenter/enterprise, etc. as well.
           | 
           | NVIDIA definitely needs something like this in the face of
           | the new Samsung deal, as well as AMD's pre-existing console
           | deals.
        
             | fluffything wrote:
             | > Now, every reference-implementation ARM processor
             | manufactured will fund GeForce desktop products,
             | datacenter/enterprise, etc as well.
             | 
             | That's like throwing pennies onto a pile of gold. NVIDIA
             | makes billions of yearly revenue. ARM makes ~300 million.
             | NVIDIA revenue is 60% of a GPU price. ARM margins in
             | IoT/embedded/phone chips are thin-to-non-existent. If
             | anything, NVIDIA will need to cut GPU spending to push ARM
              | to the moon. And the announcement already suggests that this
             | will happen.
        
               | janoc wrote:
                | ARM doesn't make any chips. They license the IP (CPU
                | cores and the Mali GPU) needed to build those chips to
                | companies like Apple, Samsung, TI, ST Micro, Microchip,
                | Qualcomm ...
                | 
                | That margins on a $1 microcontroller are "thin-to-non-
                | existent" is thus completely irrelevant - those are
                | margins of the silicon manufacturer, not ARM's.
        
               | fluffything wrote:
                | So what's ARM's margin per chip sold? ARM's IP isn't
                | free. It is created by engineers who cost money. Those
                | chips sell for $0.10, so ARM's margin can't be more than
                | $0.10 per chip, otherwise the seller would be operating
                | at a loss.
        
             | mr_toad wrote:
              | If I were an nVidia shareholder, I wouldn't want ARM
              | profits to subsidise GPU development.
              | 
              | GPU profits should be able to cover their own R&D,
              | especially given the obscene prices on the high-end cards.
        
               | dannyw wrote:
                | Why not? The winning strategy since the 2000s has been to
                | reinvest all profits into growing the business, not to
                | chase juicy margins.
        
             | hajile wrote:
             | It's also constraining.
             | 
              | Console makers dictated that RDNA must use a forward-
              | compatible version of the GCN ISA. While AMD might have
              | wanted to change some ideas that turned out to be less than
              | optimal, they cannot, because they are stopped by the
              | console makers paying the bills.
        
           | joshvm wrote:
            | Is the situation better with Mali? This seems like something
            | that might actually improve with Nvidia. I was under the
            | impression that ARM GPUs are currently heavily locked down
            | anyway (in terms of open source drivers). Nvidia would
            | presumably still be locked down, but maybe we'd have more
            | uniform cross-platform interfaces like CUDA.
        
             | CameronNemo wrote:
             | Tegra open source support is great and actually supported
             | by NVIDIA. Probably the best ARM GPU option for open source
             | drivers.
             | 
             | Mali support is done by outside groups (not ARM). Midgard
             | and Bifrost models are well supported (anything that starts
             | with a T or G, respectively). Support for older models is a
             | little worse, but better than some other GPUs.
             | 
              | Adreno support is done by outside groups (not Qualcomm) and
              | lags considerably behind new GPUs as they come out.
             | 
             | PowerVR GPUs (Imagination) have terrible open source
             | support.
        
               | rhn_mk1 wrote:
                | The Tegra chipset is supported by Nvidia, but even then
               | they are not a paragon of cooperation with the Linux
               | community. They have their own non-mainlined driver,
               | while the mainlined one (tegra) is done by someone else.
               | 
               | They did contribute somewhat to the "tegra" driver at
               | least.
        
       | cowsandmilk wrote:
        | What is Amazon's ARM license like, for making Graviton
        | processors?
        
       | rvz wrote:
        | What a death sentence for ARM right there, and the start of a new
        | microprocessor winter. I guess we now have to wait for RISC-V to
        | catch up.
        | 
        | Aside from that, ARM was one of the only actual tech companies
        | the UK could talk about on the so-called "world stage" that has
        | survived more than two decades. But instead, they continue to
        | sell themselves and their businesses to the US instead of vice
        | versa.
        | 
        | In 2011, I thought that they would learn from the lessons and
        | warnings highlighted by Eric Schmidt about the UK creating long-
        | standing tech companies like FAANMG. [0] I had high hopes for
        | them to learn from this, but after 2016 with SoftBank and now
        | this, it is just typical.
        | 
        | ARM will certainly be more expensive after this and will
        | certainly be even more closed-source, since their Mali GPU
        | drivers were already as closed as Nvidia's. This is a terrible
        | outcome, but from Nvidia's perspective it makes sense. From a
        | FOSS perspective, ARM is dead; long live RISC-V.
       | 
       | [0] https://www.theguardian.com/technology/2011/aug/26/eric-
       | schm...
        
         | paulmd wrote:
          | Love the armchair-CEO perspectives that NVIDIA spent $40B only
          | to flush the whole ARM ecosystem right down the drain; that's
          | definitely a rational thing to do, right?
          | 
          | Jensen's not an idiot; how many OG 90s tech CEOs are still at
          | the helm of the company they founded? Any severely negative
          | move towards their customers just drives them into RISC-V, and
          | they know that.
          | 
          | Yes, ARM customers will be paying more for their ARM IP. No,
          | NVIDIA is not going to burn ARM to the ground.
        
         | throwaway5792 wrote:
          | Once they were sold to SoftBank, ARM had no more control over
         | its destiny. You're saying that they're repeating the pattern,
         | but they had no choice in this today. This was SoftBank's
         | decision as the owner of ARM.
        
           | Aeolun wrote:
            | SoftBank has some losses of its own to make up, so that's
           | not very surprising.
        
             | broknbottle wrote:
             | Losses? Masayoshi Son just made a cool 8 billion on this
             | deal. Time to hit up the roulette table in Vegas to double
             | that to 16 billion
        
               | throwaway5792 wrote:
                | Buying a stake in AAPL, FB, or any tech company would have
               | netted higher returns for less effort than buying ARM. If
               | anything, a 25% ROI in 4 years is quite poor.
        
         | therealmarv wrote:
         | Have you read the press release at all? It's too early to judge
         | this now. ARM will stay in Cambridge and Nvidia wants to invest
         | in this place.
        
           | jacques_chester wrote:
           | My experience of acquisitions is that sweet songs are sung to
           | calm the horses. Then the next financial quarter comes around
           | and the truck from the glue factory arrives.
           | 
           | If you take at face value anything from a press release,
           | earnings call or investor relations website, then I would
           | like to take a moment to share with you the prospectus of
           | Brooklyn Bridge LLC.
        
             | dylan604 wrote:
             | My money is currently tied up in ocean front property in
             | Arizona.
        
           | dylan604 wrote:
            | And when Facebook bought Oculus, they said no FB login would
            | be required to use Oculus. Foxconn was supposed to ramp up
            | production in the US according to a press release. That was
            | then; now time has passed. BigCorp hopes your short-term
            | memory forgets the woohooism from a bygone press release.
        
           | BLKNSLVR wrote:
           | I'm reminded of this recent development:
           | 
           | https://www.oculus.com/blog/a-single-way-to-log-into-
           | oculus-...
           | 
           | Discussed here on HN, with the top comment being a copy and
            | paste of Palmer Luckey's acknowledgement that the early
           | critics turned out to be correct:
           | 
           | https://news.ycombinator.com/item?id=24201306
           | 
           | Different situation, different companies, but if you "follow
           | the money" you'll never be too far wrong.
        
           | klodolph wrote:
           | It's not too early to judge, because this has been in the
           | news for a while and people have had the time to do an
           | analysis of what it means for Nvidia to buy ARM. The press
           | release doesn't add much to that analysis. We've seen a _ton_
            | of press releases go by for acquisitions and they're always
           | full of sunshine... at the moment, I'm thinking of Oracle's
           | acquisition of Sun, but that's just one example. The typical
           | pattern for acquisitions is that a tech company will acquire
           | another tech company because they can _extract more value_
            | from the acquired company's IP compared to the value the
           | acquired company would have by itself, and you can extract a
           | lot of value from an IP while you slash development budgets.
           | Not saying that's going to happen, but it's a common pattern
           | in acquisitions.
           | 
           | I think it's enough to know what Nvidia is, how they operate,
           | and what their general strategies are.
           | 
           | Not saying I agree with the analysis... but I am not that
           | optimistic.
        
           | boardwaalk wrote:
           | Corporations say these types of things with every
           | acquisition. It might be true initially and superficially,
           | but that's all.
        
             | sjg007 wrote:
             | I doubt they will move all of the talent to the USA. I mean
             | California is nice and all but I think a lot of Brits are
             | happy to stay in Cambridge.
        
               | mikhailfranco wrote:
               | Cambridge is one of the most beautiful cities on the
               | planet.
        
               | sjg007 wrote:
               | Agree!
        
             | paulmd wrote:
             | One of the (many) synergies in this acquisition is that
             | it's also an acqui-hire for CPU development talent.
             | NVIDIA's efforts in that area have not gone very smoothly.
             | They aren't going to fire all the engineers they just
             | bought.
        
               | wmf wrote:
               | A bigger concern is that they would hoard future cores
               | and not license them.
        
               | RantyDave wrote:
                | They might, just like Apple does. But then their forty-
                | billion-dollar investment would stagnate and be eaten by
                | RISC-V, which is probably what they are hoping to avoid.
        
           | causality0 wrote:
           | Why _would_ you read the press release at all? Do you expect
              | a company to not do what's in their own financial self-
           | interest? Look, I love nVidia. I only buy nVidia GPUs and I
           | adore their devices like the SHIELD TV, handheld, tablet,
           | even the Tegra Note 7. Even I can see that they're not just
           | buying ARM on a whim. They intend to make that money back.
           | Them using ARM to make that money is good for absolutely
           | _nobody_ except nVidia themselves.
        
             | therealmarv wrote:
                | Well, it seems to be normal in 2020 to judge without
                | reading and to customize your news yourself. Time will
                | tell...
        
               | causality0 wrote:
               | I'll listen to what they're saying but I'm paying far
               | more attention when I look at what they're doing. The
               | only way buying ARM helps nVidia is by damaging their
               | competitors, AKA almost everyone.
        
         | CleanItUpJanny wrote:
         | >long standing tech companies like FAANMG
         | 
         | why are people going out of their ways to avoid the obvious and
         | intuitive "FAGMAN" acronym?
        
           | realbarack wrote:
           | Because it has an offensive slur in it
        
             | stupendousyappi wrote:
             | People should be fighting to join their favorite Silicon
             | Valley FANMAG. Some would prefer the Bill Gates FANMAG,
             | others Jeff Bezos...
        
               | dylan604 wrote:
               | Sounds more like a F-MANGA. Gates goes Super Saiyan while
               | Bezos builds a mechwarrior army.
        
           | gumby wrote:
           | Because smoking is no longer acceptable
        
           | swarnie_ wrote:
           | Lost on the way to 4chan?
        
           | lacker wrote:
           | Netflix really doesn't belong in the same category as the
           | others. It's big but not as big, and it isn't a sprawling
           | conglomerate. Clearly "FAGMA" is the best acronym.
        
             | staz wrote:
              | GAFAM is often used in the French-speaking world.
        
             | whereistimbo wrote:
                | Or MAGA: Microsoft, Apple, Google (Alphabet), Amazon. The
                | $1 trillion club.
        
               | broknbottle wrote:
               | I think you meant the four comma club
        
       | UncleOxidant wrote:
       | On the bright side, this could end up being a big boost for
       | RISC-V.
        
         | m00dy wrote:
         | and this would be killer for intel
        
           | UncleOxidant wrote:
           | Maybe? But somehow I don't think they'll be able to
           | capitalize on it because Intel.
        
           | hajile wrote:
            | Even if/when RISC-V takes over, Intel and AMD will be in a
            | unique position to offer "combination" chips with both x86
            | and RISC-V cores, which could milk the richest enterprise and
            | government markets for decades to come.
        
         | kristianpaul wrote:
         | Indeed, looking right now at https://rioslab.org/.
        
         | miguelmota wrote:
         | Would love to see RISC-V catch up and be more widely adopted.
        
         | nickt wrote:
          | Probably worth a second look at this RISC-V desktop thread:
         | 
         | https://news.ycombinator.com/item?id=19118642
        
       | hn3333 wrote:
       | Softbank buys low ($32B in 2016) and sells high ($40B in 2020).
       | Nice trade!
        
         | tuananh wrote:
          | I thought that would be low in the investment world.
        
           | unnouinceput wrote:
            | 4 years, 8 billion: 2B/year. I don't think that's low. And
            | during this time Arm was also filling its owner's coffers.
            | The only real question here is whether they filled those
            | coffers at a rate greater than 2B/year or not. My guess is
            | they didn't. Now SoftBank has a lot of cash to acquire more
            | shiny toys.
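            | 
            | As a rough sanity check (a minimal sketch in Python, assuming
            | simple annual compounding on the headline $32B -> $40B
            | figures and ignoring whatever Arm paid out to SoftBank along
            | the way):
            | 
            |     growth = 40 / 32              # $32B in 2016 -> $40B in 2020
            |     cagr = growth ** (1 / 4) - 1  # annualized over 4 years
            |     print(f"{cagr:.1%}")          # prints 5.7%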
        
             | tuananh wrote:
              | I guess I watched too many TV shows where investors think
              | anything below 50% ROI is low.
        
               | hajile wrote:
                | 5% over inflation is average. The US Congress manages
                | 25-30%, but that's on the back of insider trading
                | (technically illegal thanks to a law signed by Obama, but
                | also without much ability to actually investigate -- also
                | a law signed by Obama a few months later without fanfare).
        
         | aneutron wrote:
          | How does it fare, adjusting for inflation and other similar
          | factors?
        
       | oldschoolrobot wrote:
       | This is horrible for Arm
        
       | ryanmarsh wrote:
       | Simple question, is this about ARM architecture and IP... or
       | securing a worldwide competitive advantage in 5G?
        
       | curiousmindz wrote:
       | Why do you think Nvidia cares about Arm? Probably the "access"
       | into a lot of industries?
        
         | andy_ppp wrote:
          | No: to make a better ARM chip with awesome graphics/AI, not
          | license the design, and take all the mobile CPU profits for
          | 5-10 years?
        
         | banjo_milkman wrote:
          | I think this is driven partly by embedded applications and more
          | directly by the datacenter.
          | 
          | Nvidia already owns the parallel compute/ML part of the
          | datacenter, and the Mellanox acquisition brought the ability to
          | compete in the networking part of the datacenter - but they
          | were missing CPU IP for tasks that aren't well matched to the
          | GPU. This plugs that hole. They are in control of a complete
          | datacenter solution now.
        
           | paulmd wrote:
           | There's a huge number of synergies that this deal provides.
           | 
           | (a) NVIDIA becomes a full-fledged full-stack house, they have
           | both CPU and GPU now. They can now compete with AMD and Intel
           | on equal terms. That has huge implications in the datacenter.
           | 
            | (b) GeForce becomes the reference implementation of the GPU;
            | ARM processors now directly fund NVIDIA's desktop/datacenter
            | R&D in the same way consoles and Samsung SoCs fund Radeon's
            | R&D. CUDA can be used anywhere, on any platform, easily.
           | 
           | (c) Acqui-hire for CPU development talent. NVIDIA's efforts
           | in this area have not been very good to date. Now they have
           | an entire team that is experienced in developing ARM and can
           | aim the direction of development where they want.
           | 
           | Basically there's a reason that NVIDIA was willing to pay
           | more than anyone else for this property. And (Softbank's) Son
           | desperately needed a big financial win to show his investors
           | that he's not a fucking idiot for paying $32b for ARM and to
           | make up for his other recent losses.
        
             | RantyDave wrote:
              | Quite. Nvidia will be able to make a single chip with CPU,
              | GPU and InfiniBand. Plus they'll score some people from Arm
              | who know about cache coherency. We can see the future
              | datacentre starting to form...
        
       | jasoneckert wrote:
       | Most tech acquisitions are fairly bland - they often maintain
       | their separate ways for several years with a bit of integration.
       | Others satisfy a political purpose or serve to stifle
       | competition.
       | 
       | However, given the momentum of Nvidia these past several years
       | alongside the massive adoption and evolution of ARM, this is
       | probably going to be the most interesting acquisition to watch
       | over the next few years.
        
       | hitpointdrew wrote:
        | LOL, Apple must be shitting bricks. Serves them right for going
        | with ARM for their new MacBooks; the smarter move would have been
        | to move to an AMD Ryzen APU. They also clearly should have gone
        | with AMD Epyc for the new Mac Pros.
        
       | [deleted]
        
       | geogra4 wrote:
        | SMIC and Huawei had better be prepared to dump ARM ASAP.
        
       | bfrog wrote:
        | So when does everyone switch to RISC-V, then?
        
       | sharken wrote:
       | Related discussions:
       | https://news.ycombinator.com/item?id=24009177
       | 
       | https://news.ycombinator.com/item?id=24173539
       | 
       | https://news.ycombinator.com/item?id=24454958
        
         | someperson wrote:
         | and https://news.ycombinator.com/item?id=24467989
        
       | poxwole wrote:
       | It was good while it lasted. RIP ARM
        
       | [deleted]
        
       | hetspookjee wrote:
        | I wonder what this means for Apple and their move to ARM for
        | their MacBooks. At the end of 2019, Apple and NVIDIA broke up
        | their cooperation on CUDA. Both these companies are very tight
        | about their hardware. Apple must've known this was happening, but
        | I guess they weren't willing to pay more than 40B for this risky
        | joint venture they're bound to go into.
        | 
        | Does anyone have a proper analysis of the ramifications of this
        | acquisition for Apple's future with ARM?
        
         | renewiltord wrote:
         | Apple is an ARM founder. You can bet your boots they made sure
         | they were safe through the history of ARM's existence and sale
         | to Softbank in the first place. No one can cite the deep magic
          | to them; they were there when it was written.
        
       | ibains wrote:
        | I love this; I was among the early engineers on CUDA (compilers).
        | 
        | NVIDIA was so well run, but boxed into a smaller graphics card
        | market - ATI and it were forced into low margins since they were
        | made replaceable by the OpenGL and DirectX standards. For the
        | standards fans - those resulted in a wealth transfer from NVIDIA
        | to Apple etc. and reduced the capital available for R&D.
        | 
        | NVIDIA was constantly attacked by a much bigger Intel (which
        | changed interfaces to kill products and was made to pay by a
        | court).
        | 
        | Through innovation, developing new technologies (CUDA), they
        | increased their market cap, and have used that to buy
        | Arm/Mellanox.
        | 
        | I love the story of the underdog run by a founder, innovating its
        | way into new markets against harsh competition. Win for
        | capitalism!
        
         | [deleted]
        
         | [deleted]
        
         | enragedcacti wrote:
          | Nvidia might have been an underdog once, but they are now the
          | world's most valuable chipmaker, surpassing even Intel by
          | market cap.
         | 
         | https://www.extremetech.com/computing/312528-nvidia-overtake...
        
           | mhh__ wrote:
           | And Intel's revenue remains 700% larger than Nvidia's
        
         | justicezyx wrote:
          | The comment identified the positive side of the Nvidia story.
          | Note that Nvidia had not made a large acquisition for many
          | years.
          | 
          | This acquisition can be seen as a beacon of Nvidia's past
          | struggle against the market and its competitors.
          | 
          | Whatever happened along the way, Nvidia innovated its way to
          | success, and has enabled possibly the biggest tech boom so far
          | through deep learning. Maybe one day everyone will claim Nvidia
          | to be the "most important company" on earth.
        
           | llukas wrote:
            | > The comment identified the positive side of the Nvidia
            | story. Note that Nvidia had not made a large acquisition for
            | many years.
           | 
            | Not correct: Mellanox was bought for $7B.
        
             | justicezyx wrote:
             | Bad me... Poor memory!
        
       | Lind5 wrote:
       | Arm's primary base is in the IoT and the edge, and it has been
       | very successful there. Its focus on low power allowed it to shut
       | out Intel from the mobile phone market, and from there it has
       | been gaining ground in a slew of vertical markets ranging from
       | medical devices to Apple computers. But as more intelligence is
       | added to the edge, the next big challenge is to be able to
       | radically improve performance and further reduce power, and the
       | only way to make that happen is to more tightly customize the
       | algorithms to the hardware, and vice versa
       | https://semiengineering.com/nvidia-to-buy-arm-for-40b/
        
       | 01100011 wrote:
        | Can we, for once, hear the opinions of people in the chip industry
       | and not the same tired posts from software folks? NVIDIA Bad! Ok,
       | we get it. Do you have anything more insightful than that?
       | 
        | I'm starting to feel like social media based on upvotes is an
       | utter waste of time. Echo chambers and groupthink. People
       | commenting on things they barely know anything about and getting
       | validation from others who don't know anything. I'd rather pay
       | for insightful commentary and discussion. I feel like reddit
       | going downhill has pushed a new group of users to HN and it's
       | sending it down the tube. Maybe it's time for me to stop
       | participating and get back to work.
        
         | diydsp wrote:
          | As an embedded dev, I've seen the last 2-4 years of STM (a
          | major ARM licensee) bring a large degree of integration of
          | specialized hardware into 32-bit microcontrollers, e.g. radios,
          | motor control, AI, 64-bit FPUs, graphics, low power, etc.
         | 
         | I expect more of the same. The only way it could go wrong is if
         | they lose customer focus. Microcontrollers are a competitive,
         | near-commodity market, so companies have to provide valuable
         | features.
         | 
         | I don't really know Nvidia well - I only buy their stock! - but
         | they seem to be keeping their customers happy by paying
         | attention to what they need. Perhaps their fabrication will be
         | a boon to micros, as they're usually a few gens behind
         | laptop/server processors.
        
         | drivebycomment wrote:
          | You're not wrong, but most HN threads are like this. 80% of
          | comments are low information, Dunning-Kruger effect in action.
          | But among that there are still some useful gems, so despite
          | what you said, HN is still worth it. If you fold the first two
          | top-level comments, the rest have some useful, informed
          | perspective.
          | 
          | I don't see this as having that much of an impact in the short
          | to medium term. ARM has too many intricate business
          | dependencies and contracts that nVidia can't just get out of.
          | 
          | My speculation is that nVidia might be what it takes to push
          | ARM over the final hurdle into more general-purpose and server
          | CPUs, and achieve that pipedream of a single binary/ISA running
          | everywhere. Humanity would be better off if a single ISA does
          | become truly universal. Whether business/technology politics
          | will allow that to happen, and whether nVidia has enough
          | understanding and shrewdness to pull that off, remains to be
          | seen.
        
           | dahart wrote:
            | > Humanity would be better off if a single ISA does become
            | truly universal.
           | 
           | This is an interesting thought. I think I would agree with
           | this in the CPU world of the last 20-30 years, but it makes
           | me wonder a few things. Might a universal ISA eliminate major
           | pieces of competitive advantage for chip makers, and/or stall
           | innovation? It does feel like non-vector instructions are
           | somewhat settled, but vector instructions haven't yet, and
           | GPUs are changing rapidly (take NVIDIA's Tensor cores and ray
           | tracing cores for example). With Moore's law coming to an
           | end, a lot of people are talking about special purpose chips
           | more and more, TPUs being a pretty obvious example, and as
           | nice as it might be to settle on a universal ISA, it seems
           | like we're all about to start seeing larger differences more
           | frequently, no?
           | 
           | > 80% of comments are low information, Dunning-Kruger effect
           | in action.
           | 
           | I really liked all of your comment except this. I have a
           | specific but humble request aside from the negative
           | commentary & assumption about behavior. Please consider
           | erasing the term "Dunning-Kruger effect" from your mind. It
           | is being used here incorrectly, and it is very widely
           | misunderstood and abused. There is no such effect. The
           | experiments in the paper do not show what was claimed, the
           | paper absolutely does not support the popular notion that
           | confidence is a sign of incompetence. (Please read the actual
           | paper -- the experiments demonstrated a _positive
           | correlation_ between confidence and competence!) There have
           | been some very wonderful analyses of how wrong the Dunning-
           | Kruger paper was, yet most people only seem to remember the
           | (incorrect) summary that confidence is a sign of
           | incompetence.
           | 
           | https://www.talyarkoni.org/blog/2010/07/07/what-the-
           | dunning-...
        
             | drivebycomment wrote:
             | > Might a universal ISA eliminate major pieces of
             | competitive advantage for chip makers, and/or stall
             | innovation?
             | 
             | That's a good question. I didn't try to put all the
             | necessary nuances in a single sentence, so you're right to
              | question a lot of the unsaid assumptions. I don't know for
              | sure at this point whether innovation in ISAs has run most
              | of its course, but I do feel like it kind of has, given how
              | relatively little difference it makes. I think a "truly"
              | universal ISA, if it ever happens, would necessarily have
              | to have governance and evolution cycles, so that people
              | will have to agree on the core part of the universal ISA,
              | yet have room and a way to let others experiment with
              | various extensions (vector extensions, for example), and
              | have a process to reconcile and agree on standard adoption.
              | I don't know if that's actually possible - it might be very
              | difficult or impossible for many different reasons. But if
              | it can happen, it would be beneficial, as it would reduce a
              | certain amount of duplication and unlock certain new
              | possibilities.
             | 
             | > Please consider erasing the term "Dunning-Kruger effect"
             | from your mind. It is being used here incorrectly
             | 
             | Duly noted.
        
         | paxys wrote:
         | You are just starting to feel that now?
        
           | 01100011 wrote:
           | I know, right? It's finally hitting me. I am realizing how
           | much of an asshole I've become thanks to the validation of a
           | handful of strangers on the internet. I have caught myself
           | commenting on things I have only cursory knowledge of and
           | being justified by the likes of strangers. I actually
           | believed that the online communities I participated in
           | actually represented the world at large.
           | 
           | I'm not quite ready to end my participation in HN, but I'm
           | close. I am looking back on the last 10 years of
           | participation in forums like this and wondering what the hell
           | good it did. I am also suddenly very worried for what sites
           | like Reddit are doing to kids. That process of validation is
           | going to produce some very anti-social, misguided adults.
           | 
           | I would rather participate in an argument map style
           | discussion, or, frankly, just read the thoughts of 'experts'.
        
             | cycloptic wrote:
             | It is best to limit your exposure to these types of sites.
             | There was a post yesterday about Wikipedia being an
             | addictive MMORPG. Well, so are Hacker News, Twitter,
             | Reddit, and so on...
        
         | saiojd wrote:
         | Agreed, upvotes are a failed experiment, especially for
         | comments.
        
         | systemvoltage wrote:
         | Totally. I see a future where people will have an edge over
         | others by paying for information rather than relying on free
         | sources (except Wikipedia due to its scale, but still,
         | Wikipedia is not a replacement for a proper academic textbook).
         | 
         | FT provides insightful commentary on the finance/business side
         | of things, and their subscription is expensive - rightfully so.
        
       | choiway wrote:
       | I'm confused. What is it that Nvidia can do by owning ARM that it
       | can't do by just licensing the architecture? Can't they just
       | license and build all the chips people think they'll build
       | without buying the whole thing?
        
       | ykl wrote:
       | I wonder what this means for NVIDIA's recent RISC-V efforts [1].
       | Apparently they've been aiming to ship (or have already been
       | shipping?) RISC-V microcontrollers on their GPUs for some time.
       | 
       | [1] https://riscv.org/wp-content/uploads/2017/05/Tue1345pm-
       | NVIDI...
        
       | luxurycommunism wrote:
       | Hopefully that means we will see some big leaps in performance.
        
       | QuixoticQuibit wrote:
       | HN being hyperbolic and anti-NVIDIA as usual. I think this is a
       | great thing. Finally a competitor to the AMD-Intel x86 duopoly. I
       | imagine the focus will first be on improving ARM's data center
       | offerings, but eventually I'm hoping to see consumer-facing
       | parts as well.
        
         | japgolly wrote:
         | I think the biggest concern is NVIDIA's stance against OSS.
        
           | QuixoticQuibit wrote:
           | Look at their AI/CUDA documentation and associated githubs.
           | Many of their tools and libraries are open source.
           | 
           | Tell me, what other AI platform works with x86 and PowerPC
           | and ARM? Currently NVIDIA's GPUs do.
        
             | cycloptic wrote:
             | It's good that they open sourced that stuff, but the main
             | problem is that CUDA itself is closed source and vendor
             | locked to nvidia's hardware.
        
               | sreeramb93 wrote:
               | I don't buy laptops with Nvidia GPUs because of the
               | nightmares I had working with them in 2014-2015. Has the
               | support improved?
        
               | TomVDB wrote:
               | It only makes sense to demand that they give this away
               | for free if you consider them a hardware company and a
               | hardware company only.
               | 
               | Look at them as a solutions company and the first
               | question to answer is: why is it fine for companies like
               | Oracle, Microsoft, Adobe, and any other software company
               | to profit from closed software, yet a company should be
               | ostracized for it as soon as hardware becomes part of
               | the deal?
               | 
               | Nvidia invested 15 years in developing and promoting a
               | vast set of GPU compute libraries and tools. AMD has only
               | paid lip service and to this day treats it as an ugly
               | stepchild; they don't even bother to support consumer
               | architectures anymore. Nvidia is IMO totally justified
               | in reaping the rewards of what they've created.
        
               | cycloptic wrote:
               | Please don't misrepresent my statement, I haven't
               | demanded they give anything away for free. If you're
               | referring to the CUDA drivers, the binaries for those are
               | already free with the hardware. And if AMD truly doesn't
               | care about it then that's even more reason for them to
               | open source it, because they can't really claim they're
               | keeping it closed source to hamstring the competition
               | anymore.
               | 
               | wrt Oracle, Microsoft, Adobe, and the others: I've been
               | asking them to open source key products every chance I
               | get. Just so you know where I'm coming from.
        
             | nolaspring wrote:
             | I spent most of this afternoon trying to get CUDA in
             | Docker to work on my Mac for a machine learning use case.
             | It doesn't. Because Nvidia.
        
               | fomine3 wrote:
               | Because Apple. Nvidia tried to support Macs, but Apple
               | stopped supporting them.
        
               | diesal11 wrote:
               | "Because NVIDIA" is blatantly false.
               | 
               | CUDA support for docker containers is provided through
               | the open source Nvidia-Docker project maintained by
               | Nvidia[1]. If anything this is a great argument _for_
               | NVIDIAs usage of open source.
               | 
               | Searching that project's issues shows that Nvidia-Docker
               | support on MacOS is blocked by the VM used by Docker for
               | Mac(xhyve) not supporting PCI passthrough, which is
               | required for any containers to use host GPU resources.[2]
               | 
               | xhyve has an issue for PCI passthrough, updated a few
               | months ago, which notes that the APIs provided by Apple
               | through DriverKit are insufficient for this use case[3].
               | 
               | So your comment should really say "Because Apple"
               | 
               | [1] https://github.com/NVIDIA/nvidia-docker
               | 
               | [2] https://github.com/NVIDIA/nvidia-
               | docker/issues/101#issuecomm...
               | 
               | [3] https://github.com/machyve/xhyve/issues/108#issuecomm
               | ent-616...
        
               | jefft255 wrote:
               | << I spent most of the afternoon hacking away at some
               | unsupported edge case on an unsupported platform which is
               | inadequate for what I'm trying to do. It doesn't work,
               | which is clearly nvidia's fault. >>
        
               | kllrnohj wrote:
               | Apple is why you can't use Nvidia hardware on a Mac, not
               | Nvidia. Apple has exclusive control over the drivers.
               | Nvidia can't release updates or fix things on Mac OS.
               | 
               | I'm all for railing on the shitty things Nvidia does do,
               | but no reason to add some made up ones onto the pile.
        
       | peterburkimsher wrote:
       | Sounds like everyone is rallying around RISC-V. What does this
       | mean for MIPS?
       | 
       | "ARM was probably what sank MIPS" - saagarjha
       | 
       | https://news.ycombinator.com/item?id=24402107
        
         | Zigurd wrote:
         | Wave owns MIPS, which I had no idea about, and googling it
         | also reveals that Wave went Chapter 11 this year.
        
       | kkielhofner wrote:
       | Looking at the conversations almost 24 hours after posting, the
       | IP, licensing, ecosystem, political, and overall business aspects
       | of this have been discussed to death. Oddly for Hacker News,
       | there has been little discussion of the potential technical
       | aspects of this acquisition.
       | 
       | Pure speculation (of course)...
       | 
       | To me (from a tech standpoint) this acquisition centers around
       | three things we already know about Nvidia:
       | 
       | - Nvidia is pushing to own anything and everything GPGPU/TPU
       | related, from cloud/datacenter to edge. Nvidia has been an ARM
       | licensee for years with their Jetson line of hardware for edge
       | GPGPU applications:
       | 
       | https://developer.nvidia.com/buy-jetson
       | 
       | Looking at the architecture of these devices (broadly speaking),
       | Nvidia is combining an ARM CPU with their current-gen GPU
       | hardware (complete with Tensor Cores, etc). What's often left
       | out is that they utilize a shared memory architecture where the
       | ARM CPU and CUDA cores share memory (a minimal sketch of this
       | follows at the end of this comment). Not only does this cut down
       | on hardware costs and power usage, it increases performance.
       | 
       | - Nvidia has acquired Mellanox for high performance network I/O
       | across various technologies (Ethernet and Infiniband). Nvidia is
       | also actively working to be able to remove the host CPU from as
       | many GPGPU tasks as possible (network I/O and data storage):
       | 
       | https://developer.nvidia.com/gpudirect
       | 
       | - Nvidia already has publicly available software in place to
       | effectively make their CUDA compute available over the network
       | using various APIs:
       | 
       | https://github.com/triton-inference-server/server
       | 
       | Going by the name alone, Triton is currently only available for
       | inference, but it provides the ability not only to directly
       | serve GPGPU resources via network API at scale but ALSO to
       | accelerate various models with TensorRT optimization:
       | 
       | https://docs.nvidia.com/deeplearning/triton-inference-server...
       | 
       | Given these points I think this is an obvious move for Nvidia.
       | TDP and performance are increasingly important across all of
       | their target markets. They already have something in place for
       | edge inference tasks powered by ARM with Jetson, but looking at
       | ARM core CPU benchmarks it's sub-optimal. Why continue to pay ARM
       | licensing fees when you can buy the company, collect licensing
       | fees, get talent, and (presumably) drastically improve
       | performance and TDP for your edge GPGPU hardware?
       | 
       | In the cloud/datacenter, why continue to give up watts in terms
       | of TDP and performance to sub-optimal Intel/AMD/x86_64 CPUs and
       | their required baggage (motherboard bridges, buses, system RAM,
       | etc) when all you really want to do is shuffle data between your
       | GPUs, network, and storage as quickly and efficiently as
       | possible?
       | 
       | Of course many applications will still require a somewhat general
       | purpose CPU for various tasks, customer code, etc. AWS already
       | has their own optimized ARM cores in place. aarch64 is more and
       | more becoming a first class citizen across the entire open source
       | ecosystem.
       | 
       | As platform and software as a service continue to eat the world,
       | cloud providers have likely already started migrating the
       | underlying hardware powering these various services to ARM cores
       | for improved performance and TDP (same product, more margin).
       | 
       | Various ARM cores are already proving to be quite capable for
       | most CPU tasks, but given the other architectural components in
       | place here, even the lowliest of modern ARM cores is likely to be
       | asleep most of the time for the applications Nvidia currently
       | cares about. Giving up licensing, die space, power, tighter
       | integration, etc. to x86_64 just seems foolish at this point.
       | 
       | Meanwhile (of course) if you still need x86_64 (or any other
       | arch) for whatever reason you can hit a network API powered by
       | hardware using Nvidia/Mellanox I/O, GPU, and ARM. Potentially
       | (eventually) completely transparently using standard CUDA
       | libraries and existing frameworks (see work like Apex):
       | 
       | https://github.com/NVIDIA/apex
       | 
       | I, for one, am excited to see what comes from this.
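       | 
       | To make the shared-memory point above concrete, here is a
       | minimal sketch (illustrative only; plain C against the CUDA
       | runtime, and the kernel that would consume the buffer is assumed
       | to live elsewhere). On a Jetson-class SoC the ARM cores and the
       | CUDA cores sit on the same physical DRAM, so a managed
       | allocation gives both sides one pointer and no explicit
       | host<->device copy is needed:
       | 
       |   /* build: gcc demo.c -I/usr/local/cuda/include -lcudart */
       |   #include <cuda_runtime_api.h>
       |   #include <stdio.h>
       |   
       |   int main(void) {
       |       float *buf = NULL;
       |       size_t n = 1 << 20;
       |   
       |       /* one allocation, visible to both the ARM CPU and GPU */
       |       if (cudaMallocManaged((void **)&buf, n * sizeof *buf,
       |                             cudaMemAttachGlobal) != cudaSuccess) {
       |           fprintf(stderr, "allocation failed\n");
       |           return 1;
       |       }
       |       for (size_t i = 0; i < n; i++)
       |           buf[i] = 1.0f;               /* written by the CPU */
       |       /* a CUDA kernel launched elsewhere could now read/write
       |          buf directly -- no cudaMemcpy on a shared-memory SoC */
       |       cudaFree(buf);
       |       return 0;
       |   }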
        
       | vletal wrote:
       | Nvidia being like "Apple did not want to use our tech? Let's just
       | buy ARM!"
        
       | naruvimama wrote:
       | (Nvidia + ARM) - Nvidia >> $40 Bn
       | 
       | Only about 50% is gaming, and nascent divisions like data centres
       | can get a big boost from the acquisition.
       | 
       | We used to connect Nvidia only with GPUs, perhaps AI & ML. Now
       | they are going to be a dominant player everywhere from consumer
       | devices, IoT, cloud, HPC & gaming.
       | 
       | And since Nvidia does not fab its own chips like Intel, this
       | transformation is going to be pretty quick.
       | 
       | If only they went into the public cloud business, we as customers
       | would have one other strong vendor to choose from.
        
       | totorovirus wrote:
       | Nvidia is notorious for not being nice to OSS developers, as
       | Linus Torvalds claims:
       | https://www.youtube.com/watch?v=iYWzMvlj2RQ&ab_channel=Silic...
       | 
       | I wonder how Linux would react to this news.
        
         | tontonius wrote:
         | You mean how GNU/Linux would react to it?
        
           | RealStickman_ wrote:
           | Don't exclude the alpine folks please
        
       | [deleted]
        
       | jpswade wrote:
       | I feel like this is yet more terrible news for the UK.
        
       | browserface wrote:
       | Vertical integration. It's the end products, not the producers
       | that matter.
       | 
       | Or maybe, more accurately, the middle of the supply chain doesn't
       | matter. The most value is at either end: raw materials and
       | energy, and end products.
       | 
       | Or so it seems :p ;) xx
        
       | [deleted]
        
       | ComputerGuru wrote:
       | Doesn't this need regulatory approval from the USA and Japan?
       | (Not that the USA would look a gift horse in the mouth, of
       | course.)
        
         | sgift wrote:
         | There is a note at the end of the article:
         | 
         | "The proposed transaction is subject to customary closing
         | conditions, including the receipt of regulatory approvals for
         | the U.K., China, the European Union and the United States.
         | Completion of the transaction is expected to take place in
         | approximately 18 months."
         | 
         | Hopefully, the EU does its job, laughs at this, and either
         | tells Nvidia to go home or forces them into FRAND licensing of
         | ARM IP.
        
           | kasabali wrote:
           | > forces them to FRAND licensing of ARM IP
           | 
           | FRAND licensing is worthless, if Qualcomm taught us anything.
        
       | PragmaticPulp wrote:
       | I'm not convinced this is a death sentence for ARM. I doubt
       | Nvidia spent $40B on a company with the intention of killing its
       | golden-goose business model. The contractual agreements might
       | change, but ARM wasn't exactly giving their IP away for free
       | before this move.
        
         | Hypx wrote:
         | You know how the tobacco companies work?
         | 
         | From a purely capitalistic standpoint, it's fine to kill off
         | some of your customer base if you make more money from the
         | remainder. If it can work for tobacco, you can believe that
         | Nvidia is willing to kill off some of its customers if they can
         | get the remainder to pay more.
        
           | scruffyherder wrote:
           | Unless they are putting explosives in the chips the customers
           | will be free to go elsewhere
        
             | Hypx wrote:
             | If their software is dependent on the ARM ISA then they
             | can't.
        
               | qubex wrote:
               | _mumble mumble_ Turing complete _mumble mumble_
        
         | MattGaiser wrote:
         | It is less about them intentionally killing it and more about
         | their culture and attitude killing it.
        
           | asdfasgasdgasdg wrote:
           | Does Nvidia have a habit of killing acquisitions? I'm only
           | familiar with their graphics business, but as far as I can
           | see the only culture going on there is excellence.
        
             | p1necone wrote:
             | NVIDIA's whole schtick is making a bunch of interesting
             | software and arbitrarily locking it to their own hardware.
             | That doesn't seem compatible with being the steward of what
             | has up until now been a relatively open CPU architecture.
        
               | ImprobableTruth wrote:
               | Arbitrarily? Nvidia invests a lot in software R&D; why
               | should they just give it away to their competitor AMD,
               | who basically invests nothing in comparison?
        
               | rocqua wrote:
               | Arbitrary as in, without technical reasons.
               | 
               | An open architecture and a business model based on
               | partnership don't really square with vendor-locking your
               | products for increased profits.
        
             | ip26 wrote:
             | At issue is the conflict between ARM's business model-
             | which revolves around licensing designs to other companies-
             | and Nvidia's reputation of not playing nicely with other
             | companies.
        
             | ATsch wrote:
             | The concern is more that Nvidia's culture has historically
             | been hostile to partnerships overall. That works great for
             | what Nvidia is doing right now, but it's probably bad for a
             | company that depends heavily on partnerships.
        
       | paulpan wrote:
       | My initial reaction is that this is reminiscent of the AMD-ATI
       | deal back in 2006. It almost killed both companies, and
       | comparatively this deal size is much bigger ($40B vs. $6B), for
       | both a more mature industry and more mature companies.
       | 
       | $40B is objectively an obscene amount of money, and what's the
       | endgame for Nvidia? If it's to "fuse" ARM's top CPU designs with
       | their GPU prowess, then couldn't they invest the money to restart
       | their own CPU designs (e.g. Carmel)? My inner pessimist, like
       | others here, says that Nvidia will somehow cripple the ARM
       | ecosystem or prioritize their own needs over those of other
       | customers. Perhaps an appropriate analogy is Qualcomm's IP
       | licensing shenanigans and how they've crippled the non-iOS
       | smartphone industry.
       | 
       | That said, there are also examples of companies making these
       | purchases with minimal insidious behavior and co-existing with
       | their would-be competitors: Microsoft's acquisition of GitHub,
       | Google's Pixel smartphones, Sony's camera lens business and even
       | Samsung, which supposedly firewalls its components teams so the
       | best tech is available to whoever wants it (and is willing to pay
       | for it).
       | 
       | I suppose if this acquisition ends up going through (big if),
       | then we'll see Nvidia's true intent in 3-5 years.
        
       | gigatexal wrote:
       | How large or small a boost does this give the likes of RISC-V?
        
       | nl wrote:
       | Just noting that Apple _doesn't_ have a perpetual license; they
       | have an architecture license[1], including for 64bit parts[2].
       | 
       | This allows them to design their own cores using the Arm
       | instruction set[3] and presumably includes perpetual IP licenses
       | for Arm IP used while the license is in effect. New Arm IP
       | doesn't seem to be included, since existing 32bit Arm licensees
       | had to upgrade to a 64bit license[2].
       | 
       | [1] https://www.anandtech.com/show/7112/the-arm-diaries-
       | part-1-h...
       | 
       | [2]
       | https://www.electronicsweekly.com/news/business/finance/arm-...
       | 
       | [3]
       | https://en.wikipedia.org/wiki/ARM_architecture#Architectural...
        
       | zdw wrote:
       | I see this going a few ways for different players:
       | 
       | The perpetual architecture license folks that make their own
       | cores like Apple, Samsung, Qualcomm, and Fujitsu (I think they
       | needed this for the A64FX, right?) will be fine, and may just
       | fork off on the ARMv8.3 spec, adding a few instructions here or
       | there. Apple especially will be fine as they can get code into
       | LLVM for whatever "Apple Silicon" evolves into over time.
       | 
       | The smaller vendors that license core designs (like the A5x and
       | A7x series, etc.) like Allwinner, Rockchip, and Broadcom are
       | probably in a worse state - nVidia could cut them off from any
       | new designs. I'd be scrambling for an alternative if I were any
       | of these companies.
       | 
       | Long term, it really depends on how nVidia acts - they could
       | release low end cores with no license fees to try to fend off
       | RISC-V, but that hasn't been overly successful when tried earlier
       | with the SPARC and Power architectures. Best case scenario, they
       | keep all the perpetual architecture people happy and
       | architecturally coherent, and release some interesting datacenter
       | chips, leaving the low end (and low margin) to 3rd parties.
       | 
       | Hopefully they'll also try to mend fences with the open source
       | community, or at least avoid repeating past offenses.
        
         | hastradamus wrote:
         | Nvidia mend. Lol
        
         | Followerer wrote:
         | "and may just fork off on the ARMv8.3 spec, adding a few
         | instructions here or there"
         | 
         | No, they may not. People keep suggesting these kinds of things,
         | but part of the license agreement is that you can't modify the
         | ISA. Only ARM can do that.
        
           | my123 wrote:
           | That's untrue.
           | 
           | (famously so; Intel used to ship ARM chips with WMMX, and
           | Apple, for example, ships their CPUs today with the AMX AI
           | acceleration extension)
        
             | rrss wrote:
             | WMMX was exposed via the ARM coprocessor mechanism (so it
             | was permitted by the architecture). The coprocessor stuff
             | was removed in ARMv8.
        
               | my123 wrote:
               | Now custom instructions sit directly in the regular
               | instruction space...
               | 
               | (+ there's the can of worms of target-specific MSRs being
               | writable from user space. Apple does this as part of APRR
               | to flip the JIT region from RW- to R-X and vice versa
               | without a round trip to the kernel. That also has the
               | advantage that the state is modifiable per-thread.)
        
               | Followerer wrote:
               | In ARMv8 you have a much cleaner mechanism through system
               | registers (MSR/MRS).
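               | 
               | For illustration, a minimal user-space sketch of that
               | mechanism (plain C; CNTVCT_EL0, the generic virtual
               | counter, is a standard register readable at EL0 and is
               | used here only to show the instruction form -- vendor
               | extensions use implementation-defined encodings of the
               | form S<op0>_<op1>_<Cn>_<Cm>_<op2> instead):
               | 
               |   #include <stdint.h>
               |   #include <stdio.h>
               |   
               |   static inline uint64_t read_cntvct(void) {
               |       uint64_t v;
               |       /* MRS: read a system register into a GP register */
               |       __asm__ volatile("mrs %0, cntvct_el0" : "=r"(v));
               |       return v;
               |   }
               |   
               |   int main(void) {
               |       printf("CNTVCT_EL0 = %llu\n",
               |              (unsigned long long)read_cntvct());
               |       return 0;
               |   }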
        
               | saagarjha wrote:
               | Apple has been using system registers for years already.
               | AMX is interesting because it's actual instruction
               | encodings that are unused by the spec.
        
             | Followerer wrote:
             | That's like saying that my Intel CPU comes with an NVIDIA
             | Turing AI acceleration extension. The instructions an Apple
             | ARM-based CPU can run are all ARM ISA. That's in the
             | license arrangement: if you fail to pass ARM's compliance
             | tests (which include not adding your own instructions, or
             | modifying the ones included), you can't use ARM's license.
             | 
             | Please, stop spreading nonsense. All of this is public
             | knowledge.
        
               | my123 wrote:
               | No. I reverse-engineered it and AMX on the Apple A13 is
               | an instruction set extension running on the main CPU
               | core.
               | 
               | The Neural Engine is a completely separate hardware
               | block, and you have good reasons to have such an
               | extension available on the CPU directly, to reduce
               | latency for short-running tasks.
        
               | rrss wrote:
               | Are the AMX instructions available in EL0?
               | 
               | Is it possible AMX is implemented with the
               | implementation-defined system registers and aliases of
               | SYS/SYSL in the encoding space reserved for
               | implementation-defined system instructions? Do you have
               | the encodings for the AMX instructions?
        
               | my123 wrote:
               | AMX instructions are available in EL0 yes, and are used
               | by CoreML and Accelerate.framework.
               | 
               | A sample instruction: 20 12 20 00... which doesn't by any
               | stretch parse as a valid arm64 instruction in the Arm
               | specification.
               | 
               | Edit: Some other AMX combinations off-hand:
               | 
               | 00 10 20 00
               | 
               | 21 12 20 00
               | 
               | 20 12 20 00
               | 
               | 40 10 20 00
        
               | rrss wrote:
               | very interesting, thanks!
        
               | Followerer wrote:
               | The AMX is an accelerator block... If you concluded
               | otherwise, your reverse-engineering skills are not
               | great...
               | 
               | Let me repeat this: part of the ARM architectural license
               | says that you can't modify the ISA. You have to implement
               | a whole subset (the manual says what's mandatory and
               | what's optional), and _only_ that. This is, as I've been
               | saying, _public_ _knowledge_. This is how it works. And
               | there are very good reasons for this, like avoiding
               | fragmentation and losing control of their own ISA.
               | 
               | And once again, stop spreading misinformation.
        
               | my123 wrote:
               | Hello,
               | 
               | Specifically about the Apple case,
               | 
               | Given your tone, I'm certainly not obligated to answer,
               | but I'll write one quickly...
               | 
               | Apple A13 adds AMX, a set of (mostly) AI acceleration
               | instructions that are also useful for matrix math in
               | general. The AMX configuration happens at the level of
               | the AMX_CONFIG_EL1/EL12/EL2/EL21 registers, with
               | AMX_STATE_T_EL1 and AMX_CONTEXT_EL1 being also present.
               | 
               | The list of instructions is at
               | https://pastebin.ubuntu.com/p/xZmmVF7tS8/ (didn't bother
               | to document it publicly at least at this point).
               | 
               | Hopefully that clears things up a bit,
               | 
               | And please don't ever do this again, thank you. (this
               | also doesn't comply with the guidelines)
               | 
               | -- a member of the checkra1n team
        
               | btian wrote:
               | Can you provide a link to the "public knowledge" for
               | those who don't know?
        
               | eklavya wrote:
               | You may be correct, but do you really have to be so
               | attacking?
        
           | kortex wrote:
           | Well, regardless of whether this extension is kosher or not,
           | AMX definitely exists. Perhaps the $2T tech behemoth was able
           | to work out a sweetheart deal with the $40B semiconductor
           | company.
           | 
           | > There's been a lot of confusion as to what this means, as
           | until now it hadn't been widely known that Arm architecture
           | licensees were allowed to extend their ISA with custom
           | instructions. We weren't able to get any confirmation from
           | either Apple or Arm on the matter, but one thing that is
           | clear is that Apple isn't publicly exposing these new
           | instructions to developers, and they're not included in
           | Apple's public compilers. We do know, however, that Apple
           | internally does have compilers available for it, and
           | libraries such as the Accelerate.framework seem to be able to
           | take advantage of AMX. [0]
           | 
           | my123's instruction names lead to a very shallow rabbit hole
           | on Google, which turns up a similar list [1]
           | 
           | Agreed upon: ['amxclr', 'amxextrx', 'amxextry', 'amxfma16',
           | 'amxfma32', 'amxfma64', 'amxfms16', 'amxfms32', 'amxfms64',
           | 'amxgenlut', 'amxldx', 'amxldy', 'amxldz', 'amxldzi',
           | 'amxmac16', 'amxmatfp', 'amxmatint', 'amxset', 'amxstx',
           | 'amxsty', 'amxstz', 'amxstzi', 'amxvecfp', 'amxvecint']
           | 
           | my123 also has ['amxextrh', 'amxextrv'].
           | 
           | [0] https://www.anandtech.com/show/14892/the-apple-
           | iphone-11-pro....
           | 
           | [1] https://www.realworldtech.com/forum/?threadid=187087&curp
           | ost...
        
         | mxcrossb wrote:
         | It seems to me that if Apple felt that Nvidia would limit them,
         | they could have outbid them for ARM! So I think you are
         | correct.
        
           | millstone wrote:
           | I think Apple is not committed to ARM at all. Bitcode,
           | Rosetta 2, "Apple Silicon" - it all suggests they want to
           | keep ISA flexibility.
        
             | emn13 wrote:
             | Wow, but that cost - it's not a small thing to transition
             | ISA, and don't forget that this transition is one of the
             | simpler ones (more registers, fairly few devices). The
             | risks of transitioning _everything_ away from arm would be
             | much greater.
             | 
             | I guess they have _some_ ISA flexibility (which is
             | remarkable). But not much; each transition was still a very
             | special set of circumstances and a huge hurdle, I'm sure.
        
               | rocqua wrote:
               | At the low-level driver interface, transitioning ISA is a
               | big deal. But I would guess that, at higher levels, most
               | of the work is just changing the target of your compiler?
               | 
               | As in, most of the work occurs in the low-level parts of
               | the operating system. After that, the OS should abstract
               | the differences away from user space.
        
               | emn13 wrote:
               | No way; not at all.
               | 
               | First of all: there's lots of software that's not the OS.
               | The OS is the easy bit: everything else: grindy, grindy
               | horrorstory. A lot of that code will be third-party. And
               | if you think, "hey, we'll just recompile!", and you can
               | actually get them to, too - well, good luck, but
               | performance _will_ be abysmal in many cases. Lots and
               | lots of libraries have hand-tuned code for specific
               | architectures. Anything with vectorization - despite
               | compilers being much better than they used to be - may
               | see huge slowdowns without hand tuning. That's not just
               | speculation; you can look at software that misses out on
               | the vectorization treatment or was ported to ARM from x86
               | poorly - performance falls off a cliff.
               | 
               | Then there are the JITs and interpreters, of which there
               | are quite a few, and they're often hyper-tuned to the
               | ISAs they run on. Also, they can't afford to run
               | something like LLVM on every bit of output; that's way
               | too slow. So even non-vectorized code suffers (you can
               | look at some of the .NET Core ARM developments to get a
               | feel for this, but the same goes for JS/Java etc).
               | Web browsers are hyper-tuned; regex engines, packet
               | filters, etc., etc.
               | 
               | Not to mention: just getting a compiler like LLVM to
               | support a new ISA as optimally as x86 or ARM isn't a
               | small feat.
               | 
               | Finally: at least at this point, until our AI overlords
               | render that redundant - all this work takes expertise,
               | but that expertise takes training, which isn't that easy
               | on an ISA without hardware. That's why Apple's current
               | transition is so easy: they already have the hardware
               | _and_ the trained experts, some with over a decade of
               | experience _on that ISA_! But if they really want to go
               | their own route... well, that's tricky, because what are
               | all those engineers going to play around on to learn how
               | it works, what's fast, and what's bad?
               | 
               | All in all, it's no coincidence transitions like this
               | take a long time, and that's for simple (aka well-
               | prepared) transitions like the ones Apple's doing now.
               | Saying they have ISA "flexibility", as if ISAs were
               | somehow interchangeable, completely misses the point of
               | how tricky those details are, and how much they matter
               | to how achievable such a transition is. Apple doesn't
               | have general ISA flexibility; it has a costly route from
               | specifically x86 to specifically ARM, and nothing else.
        
               | simias wrote:
               | Extremely aggressive optimizations are really special
               | though, and they tend to require rewrites when new CPU
               | extensions release (and workarounds to work on older
               | hardware). If you rely on super low level ultra-
               | aggressive micro optimizations your code is going to have
               | a relatively short shelf life, different ISA or not.
               | 
               | The vast majority of the code written for any given
               | computer or smartphone doesn't have this level of
               | sophistication and optimization though. I'd wager that
               | for most code just changing the build target will indeed
               | mostly just work.
               | 
               | It won't be painless, but modern code tends to be so
               | high-level and abstracted (especially on smartphones)
               | that the underlying ISA matters a lot less than in the
               | past.
        
               | pas wrote:
               | Do it more and more and they'll have the tools to
               | efficiently manage them.
               | 
               | Also likely the small tweaks they will want from time to
               | time should be "easy" to follow internally, if you can
               | orchestrate everything from top to bottom and back.
        
             | headmelted wrote:
             | Exactly. Apple's strategy here is very clear:
             | 
             | Offer customers iOS apps and games on the next MacBook as a
             | straight swap for Boot Camp and Parallels. Once they've
             | moved everyone over to their own chips and brought back
             | Rosetta and U/Bs they're essentially free to replace
             | whatever they like at the architecture level.
             | 
             | In their reveal I noticed that they only mentioned ARM
             | binaries running in virtual environments. It makes sense if
             | you don't want to commit to supporting GNU tools natively
             | on your devices (as it would mean sticking with an
             | established ISA).
        
               | saagarjha wrote:
               | I would be quite surprised if LLVM ever lost the ability
               | to compile C for the platform.
        
           | scarface74 wrote:
           | I doubt Apple is dumb enough not to have basically a
           | perpetual license for ARM.
        
             | soapdog wrote:
             | ARM was launched as a joint venture between Acorn, Apple,
             | and VLSI. I believe that since day 0 Apple had perpetual
             | access to the IP.
        
               | ksec wrote:
               | They sold all of their ARM shares in the mid-90s to keep
               | themselves from going bankrupt.
               | 
               | Not to mention that starting a JV has nothing to do with
               | perpetual IP access. You would still have to pay for it.
        
               | selectodude wrote:
               | They can certainly have a contract with Arm that allows
               | them to renew their arch license in perpetuity that
               | nvidia won't be able to void.
               | 
               | I obviously don't _know_ that for sure, but the idea that
               | Apple would stake their future on something they don't
               | have a legally ironclad position on seems unlikely.
        
             | systemvoltage wrote:
             | I would also agree. The thing is, businesses break up and
             | come together all the time. If it makes sense and both
             | parties can agree despite past disagreements and lawsuits,
             | they will partner.
             | 
             | The fact that Apple and Nvidia have a bad relationship at
             | the moment regarding GPUs is probably orthogonal to what
             | they'll do with this new branch of Nvidia, that is, ARM.
        
               | scarface74 wrote:
               | What need does Apple have for ARM's R&D going forward?
               | They have their own chip designers, build tools, etc.
               | 
               | True about frenemies: the entire time that Apple was
               | suing Samsung, it was using Samsung to manufacture many
               | of its components.
        
               | pas wrote:
               | But if your chip builds heavily on ARM's IP, you need a
               | license for that, at least as long as you can't replace
               | the infringing parts of the design. Which sounds pretty
               | much impossible if you also want to keep progressing on
               | other aspects of having the best chips.
        
               | spacedcowboy wrote:
               | Apple uses the ARM ISA. It doesn't use ARM IP - as in,
               | the design of the chip. Apple designed their own damn
               | chip!
               | 
               | Since they're not branding it as ARM in any way, shape,
               | or form, and they have a perpetual architectural license
               | to the ISA, I suspect they could do pretty much what they
               | please - as long as they don't call it ARM. Which they
               | don't.
        
               | pas wrote:
               | During the design were they careful not to create any
               | derivative works of arm IP and/or not to infringe on any
               | of arm's patents?
        
           | dannyw wrote:
           | Apple would not get antitrust approval (iPhone maker controls
           | all android chips????). So that's why.
        
             | macintux wrote:
             | Were it a serious enough threat to their ARM license,
             | they'd find a way to buy ARM and keep it independent.
        
             | kabacha wrote:
             | Exactly, Apple is already straddling the line (and imho way
             | past it) on anti-comp laws.
        
               | jonhohle wrote:
               | What is their monopoly or which of their competitors are
               | they colluding with?
        
               | tick_tock_tick wrote:
               | Neither of those are required to violate anti-trust laws.
        
               | jonhohle wrote:
               | Such as? https://www.ftc.gov/enforcement/anticompetitive-
               | practices
        
               | kzrdude wrote:
               | I'm not defending Apple, just thinking: can't we say
               | this about many of the biggest tech firms? They are way
               | past the line on anticompetitive business.
        
               | wongarsu wrote:
               | Yes, Apple is not alone in this. Google is another
               | example, and they are very aware of this and are acting
               | very carefully
        
         | AnthonyMouse wrote:
         | > The perpetual architecture license folks that make their own
         | cores like Apple, Samsung, Qualcomm, and Fujitsu (I think they
         | needed this for the A64FX, right?) will be fine
         | 
         | There is one thing they would need to worry about though, which
         | is that if the rest of the market moves to RISC-V or x64 or
         | whatever else, it's not implausible that someone might at some
         | point make a processor which is superior to the ones those
         | companies make in-house. If it's the same architecture, you
         | just buy them or license the design and put them in your
         | devices. If it's not, you're stuck between suffering an
         | architecture transition that your competitors have already put
         | behind them or sticking with your uncompetitive in-house
         | designs using the old architecture that nobody else wants
         | anymore.
         | 
         | Their best move might be to forget about the architecture
         | license and make the switch to something else with the rest of
         | the market.
        
           | zdw wrote:
           | > Their best move might be to forget about the architecture
           | license and make the switch to something else with the rest
           | of the market.
           | 
           | This assumes that there isn't some other factor in
           | transitioning architecture - this argument could boil down in
           | the mid 2000's to "Why not go x86/amd64", but you couldn't
           | buy a license to that easily (would need to be 3-way with
           | Intel/AMD to further complicate things)
           | 
           | Apple has done quite well with their ARM license,
           | outperforming the rest of the mobile form factor CPU market
           | by a considerable margin. I don't doubt that they could
           | transition - they've done it successfully 3 times already,
           | even before the current ARM transition.
           | 
           | Apple under Cook has said they want to "to own and control
           | the primary technologies behind the products we make". I
           | doubt they'd turn away from that now to become dependent on
           | an outside technology, especially given how deep their
           | pockets are.
        
             | fluffything wrote:
             | Apple could transition in 10 years to RISC-V, just like how
             | they transitioned 10 years ago to x86, 10 years before to
             | PPC, 10 years before to .........
        
             | thomasjudge wrote:
             | It's kind of puzzling that Apple didn't buy them. They
             | don't seem to be particularly aggressive/creative in the
             | M&A department
        
               | andoriyu wrote:
               | Apple buying ARM would never get approved. They have way
               | too many direct competitors that heavily rely on SoCs
               | with ARM cores.
        
             | spullara wrote:
             | They did dump ZFS when they decided they didn't like the
             | licensing terms.
        
         | kllrnohj wrote:
         | Samsung, Qualcomm, and MediaTek all currently just use off the
         | shelf A5x & A7x cores in their SoCs. Unless that part of the
         | company is losing money I don't expect nVidia to cut that off.
         | Especially since that's likely a key part of why nVidia
         | acquired ARM in the first place - I can't imagine they care
         | about the Mali team(s) or IP.
        
           | wyldfire wrote:
           | Qualcomm used to design their own cores up until the last
           | generation or two, but you're right they use the reference
           | designs now.
           | 
           | EDIT: correction, make that the last generation or four
           | (oops, time flies)
        
             | kllrnohj wrote:
             | Qualcomm hasn't used an in-house core design since 2015
             | with the original Kryo. Everything Kryo 2xx and newer are
             | based on Cortex.
        
               | awill wrote:
               | That was a really sad time, honestly. QCOM went from
               | leading the pack to basically using reference designs
               | (which they still arrogantly brand as Kryo despite them
               | essentially being tweaks of a reference design).
               | 
               | It all happened because Apple came out with the first
               | 64-bit design, and QCOM wasn't ready. Rather than
               | deliver 32-bit for one more year, they used an off-the-
               | shelf ARM 64-bit design (A57) in their SoC called the
               | Snapdragon 810, and boy was it terrible.
        
               | himinlomax wrote:
               | From what I gathered, they made at least _some_ risky
               | architecture choices in their custom architecture that
               | turned out not to be sustainable over the next
               | generations. Also note that their Cortex core is indeed
               | customized to a significant extent.
        
           | klelatti wrote:
           | What happens if/when Nvidia launches SoCs that compete with
           | Qualcomm and MediaTek? Will it continue to offer the latest
           | cores to competitors when it can make a lot more money on
           | its own SoCs? This is the reason for the widespread concern
           | about Nvidia owning ARM.
        
             | kllrnohj wrote:
             | I don't know if Nvidia is eager to re-enter the SoC market.
             | It wouldn't be a clear path to more money, since they would
             | need to then handle modem, wifi, ISP, display, etc...
             | instead of just CPU & GPU. And they'd need to work with
             | Android & its HALs. And all the dozens/hundreds of device
             | OEMs.
             | 
             | They could, but that's more than just an easy money grab.
             | Something Nvidia would already be familiar with from Tegra.
             | 
             | What seems more likely/risky is that Nvidia starts
             | charging a premium for a Mali replacement, or begins
             | sandbagging Mali on the lower end of things. But Qualcomm
             | already has Adreno to defend against that.
        
               | entropicdrifter wrote:
               | Looks like Nvidia never left the SoC market to begin
               | with.
               | 
               | The latest Tegra SoC launched March of 2020.
        
               | kllrnohj wrote:
               | They released a new SBC aimed at autonomous vehicles.
               | They haven't had a mobile SoC since 2015's Tegra X1,
               | which only made it into the Pixel C tablet, the Nintendo
               | Switch, and Nvidia's Shield TV (including the 2019
               | refresh).
        
               | riotnrrd wrote:
               | You're forgetting the TX2 as well as the Jetson Nano
        
             | ksec wrote:
             | MediaTek is in the lower-end market, something Nvidia's
             | culture doesn't like competing in. Qualcomm holds the key
             | to modems, which means Nvidia competing with Qualcomm won't
             | work that well. Not to mention they have already tried that
             | with Icera, and generally speaking mobile phone SoCs are a
             | low-margin business (comparatively speaking).
        
               | klelatti wrote:
               | Completely take your point on Qualcomm.
               | 
               | On mobile SoC margins I guess that margins are low
               | because there is a lot of competition - start cutting off
               | IP to the competition and margins will rise.
               | 
               | I suspect that their focus will be on servers /
               | automotive to start off with, but the very fact that
               | they can do any of this is troubling to me.
        
         | himinlomax wrote:
         | > nVidia could cut them off from any new designs
         | 
         | Why would they do that anyway? The downsides are obvious
         | (immediate loss of revenue), the risks are huge (antitrust
         | litigation, big boost to RiscV or even Mips), the possible
         | benefits are nebulous.
         | 
         | Those who are most obviously at risk are designers of mobile
         | GPUs (Broadcom, PowerVR ...).
        
           | ATsch wrote:
           | If they do it that directly, sure. But on a large enough
           | (time)scale, incentives are the only thing that matters. And
           | they'll certainly think hard about putting their fancy new
           | research that would help a competitor into their openly
           | licensed chips from now on.
        
         | mbajkowski wrote:
         | Curious as I don't know the terms of a perpetual architectural
         | ARM license. But, is it valid only for a specific architecture,
         | say v8 or v9, or is it valid for all future architectures as
         | well? Or is it one of those things, where it depends per
         | licensee and how they negotiated?
        
       | 127 wrote:
       | What does this change for the STM32 and many other such
       | low-power MCUs? They're pretty ubiquitous in electronics.
        
       | sshlocalhost wrote:
       | I am really worried about a single company monopolising the
       | entire market.
        
       | ChuckMcM wrote:
       | And there you have it. Perhaps the greatest thing to happen to
       | RISC-V since the invention of the FPGA :-).
       | 
       | I never liked Softbank owning it, but hey someone has to.
       | 
       | Regarding the federal investment in FOSS thread that was here,
       | perhaps CPU architecture would be a good candidate.
        
         | dragontamer wrote:
         | RISC-V still seems too ad-hoc to me, and really new. Hard to
         | say where it'd go for now.
         | 
         | I know momentum is currently towards ARM over POWER, but...
         | OpenPOWER is certainly a thing, and has IBM / Red Hat support.
         | IBM may be expensive, but they already were proven "fair
         | partners" in the OpenPOWER initiative and largely supportive of
         | OSS / Free Software.
        
           | darksaints wrote:
           | OpenPOWER is pretty awesome but would be nowhere near as
           | awesome as an OpenItanium. IMHO, Itanium was always
           | mismarketed and misoptimized. It made a pretty good server
           | processor, but not so good that enterprises were willing to
           | migrate 40 year old software to run on it.
           | 
           | In mobile form, it would have made a large leap in both
           | performance and battery life. And it would have been a fairly
           | easy market to break into: the average life of a mobile
           | device is a few years, not a few decades. Recompilation and
           | redistribution of software _is the status quo_.
        
             | anarazel wrote:
             | IMO VLIW is an absurdly bad choice for a general purpose
             | processor. It requires baking in a huge amount of low level
             | micro-architectural details into the compiler / generated
             | code. Which obviously leads to problems with choosing what
             | hardware generation to optimize for / not being able to
             | generate good code for future architectures.
             | 
             | And the compiler doesn't even come close to having as much
             | information as the CPU has. Which basically means that most
             | of the VLIW stuff just ends up needing to be broken up
             | inside the CPU for good performance.
        
               | darksaints wrote:
               | Traditional compiler techniques may have struggled with
               | maintaining code for different architectures, but a lot
               | has changed in the last 15 years. The rise of widely used
               | IR languages has led to compilers that support dozens of
               | architectures and hundreds of instruction sets. And they
               | are getting better all the time.
               | 
               | The compiler has nearly all of the information that the
               | CPU has, and it has orders of magnitude more. At best,
               | your CPU can think a couple dozen _cycles_ ahead of what
               | it is currently executing. The compiler can see the whole
               | program, can analyze it using dozens of methodologies and
               | models, and can optimize accordingly. Something like Link
               | Time Optimization can be done trivially with a compiler,
               | but it would take an army of engineers decades of work to
               | be able to implement in hardware.
        
               | dragontamer wrote:
               | > At best, your CPU can think a couple dozen cycles ahead
               | of what it is currently executing.
               | 
               | The 200-sized reorder buffer says otherwise.
               | 
               | Loads/stores can be reordered across 200+ different
               | concurrent objects on modern Intel Skylake (2015 through
               | 2020) CPUs. And it's about to get a bump to 300+ entry
               | reorder buffers in Ice Lake.
               | 
               | Modern CPUs are designed to "think ahead" almost the
               | entirety of DDR4 RAM Latency, allowing reordering of
               | instructions to keep the CPU pipes as full as possible
               | (at least, if the underlying assembly code has enough ILP
               | to fill the pipelines while waiting for RAM).
               | 
               | > Something like Link Time Optimization can be done
               | trivially with a compiler, but it would take an army of
               | engineers decades of work to be able to implement in
               | hardware.
               | 
               | You might be surprised at what the modern Branch
               | predictor is doing.
               | 
               | If your "call rax" indirect call constantly calls the
               | same location, the branch predictor will remember that
               | location these days.
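               | 
               | A tiny illustration of that last point (assumed-typical
               | C, nothing Intel-specific; the function names are only
               | illustrative): the target of `op` is picked at run time,
               | so the compiler will typically emit an indirect call
               | ("call rax" style), but because the target never changes
               | the indirect-branch predictor learns it after a few
               | iterations and the call costs little more than a direct
               | one.
               | 
               |   #include <stdio.h>
               |   
               |   static double square(double x) { return x * x; }
               |   static double half(double x)   { return x * 0.5; }
               |   
               |   int main(int argc, char **argv) {
               |       /* target chosen at run time, then stays fixed */
               |       double (*op)(double) = (argc > 1) ? half : square;
               |       double acc = 0.0;
               |       for (int i = 0; i < 1000000; i++)
               |           acc += op((double)i);   /* indirect call */
               |       printf("%f\n", acc);
               |       return 0;
               |   }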
        
               | KMag wrote:
               | With proper profiling (say, reservoir sampling of
               | instructions causing pipeline stalls), and dynamic
               | recompilation/reoptimization like IBM's project DAISY /
               | HP's Dynamo, you may get performance near a modern out-
               | of-order desktop processor at the power budget of a
               | modern in-order low-power chip.
               | 
               | You get instructions scheduled based on actual
               | dynamically measured usage patterns, but you don't pay
               | for dedicated circuits to do it, and you don't re-do
               | those calculations in hardware for every single
               | instruction executed.
               | 
               | It's not a guaranteed win, but I think it's worth
               | exploring.
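                | 
                | For the profiling half, a minimal sketch (the
                | sample_stall_event() hook is hypothetical) of Algorithm R
                | reservoir sampling, which keeps a fixed-size uniform
                | random sample of stall sites no matter how many stall
                | events occur:
                | 
                |     #include <stdint.h>
                |     #include <stdlib.h>
                | 
                |     #define RESERVOIR 256
                | 
                |     static uintptr_t reservoir[RESERVOIR]; /* sampled PCs */
                |     static uint64_t  seen;  /* stall events so far */
                | 
                |     /* Hypothetical hook: called with the PC of an
                |        instruction that caused a pipeline stall. */
                |     void sample_stall_event(uintptr_t pc) {
                |         if (seen < RESERVOIR) {
                |             reservoir[seen] = pc;      /* fill phase */
                |         } else {
                |             /* keep this event with probability k/n;
                |                crude 62-bit index, fine for a sketch */
                |             uint64_t j = (((uint64_t)rand() << 31)
                |                           ^ (uint64_t)rand()) % (seen + 1);
                |             if (j < RESERVOIR)
                |                 reservoir[j] = pc;
                |         }
                |         seen++;
                |     }
                | 
                | The dynamic reoptimizer can then periodically look at
                | which code regions dominate the reservoir and reschedule
                | just those.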
        
               | dragontamer wrote:
               | But once you do that, then you hardware optimize the
                | interpreter, and then it's no longer called a "dynamic
               | recompiler", but instead a "frontend to the microcode".
               | :-)
        
               | KMag wrote:
               | No doubt there is still room for a power-hungry out-of-
               | order speed demon of an implementation, but you need to
               | leave the door open for something with approximately the
                | TDP of a very-low-power in-order processor with
               | performance closer to an out-of-order machine.
        
               | branko_d wrote:
               | Neo: What are you trying to tell me? That I can dodge
               | "call rax"?
               | 
               | Morpheus: No, Neo. I'm trying to tell you that when
               | you're ready, you won't need "call rax".
               | 
               | ---
               | 
                | The compiler has access to optimizations at a higher
                | level of abstraction than what the CPU can do. For
               | example, the compiler can eliminate the call completely
               | (i.e. inline the function), or convert a dynamic dispatch
               | into static (if it can prove that an object will always
               | have a specific type at the call site), or decide where
               | to favor small code over fast code (via profile-guided
               | optimization), or even switch from non-optimized code
               | (but with short start-up time) to optimized code mid-
               | execution (tiered compilation in JITs), move computation
               | outside loops (if it can prove that the result is the
               | same in all iterations), and many other things...
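                | 
                | A small (made-up) C illustration of two of those: at -O2
                | the compiler inlines square() and hoists the loop-
                | invariant scale*scale out of the loop, neither of which
                | any CPU front-end will do for you:
                | 
                |     static int square(int x) { return x * x; }
                | 
                |     int sum_scaled(const int *a, int n, int scale) {
                |         int s = 0;
                |         for (int i = 0; i < n; i++)
                |             /* square(scale) doesn't depend on i: the call
                |                is inlined and the multiply is hoisted */
                |             s += a[i] * square(scale);
                |         return s;
                |     }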
        
               | saagarjha wrote:
               | There is no way a compiler can do anything for an
               | indirect call that goes one way for a while and the other
                | afterwards. A branch predictor can get both, if not with
                | 100% accuracy then about as close to it as you can
                | possibly get.
        
               | formerly_proven wrote:
               | > The compiler has nearly all of the information that the
               | CPU has, and it has orders of magnitude more.
               | 
               | The CPU has something the compiler can never have.
               | 
               | Runtime information.
               | 
                | That's why VLIW works great for DSPs, which are 99.9%
                | fixed access patterns, while being bad for general-purpose
                | code.
        
               | dragontamer wrote:
               | VLIW was the best implementation (20 years ago) of
               | instruction level parallelism.
               | 
               | But what have we learned in these past 20 years?
               | 
               | * Computers will continue to become more parallel -- AMD
               | Zen2 has 10 execution pipelines, supporting 4-way decode
               | and 6-uop / clock tick dispatch per core, with somewhere
               | close to 200 registers for renaming / reordering
               | instructions. Future processors will be bigger and more
                | parallel; Ice Lake is rumored to have over 300 renaming
                | registers.
               | 
               | * We need assembly code that scales to all different
               | processors of different sizes. Traditional assembly code
               | is surprisingly good (!!!) at scaling, thanks to
               | "dependency cutting" with instructions like "xor eax,
               | eax".
               | 
               | * Compilers can understand dependency chains, "cut them
               | up" and allow code to scale. The same code optimized for
               | Intel Sandy Bridge (2011-era chips) will continue to be
               | well-optimized for Intel Icelake (2021 era) ten years
               | later, thanks to these dependency-cutting compilers.
               | 
               | I think a future VLIW chip can be made that takes
               | advantage of these facts. But it wouldn't look like
               | Itanium.
               | 
               | ----------
               | 
               | EDIT: I feel like "xor eax, eax" and other such
               | instructions for "dependency cutting" are wasting bits.
               | There might be a better way for encoding the dependency
               | graph rather than entire instructions.
               | 
                | Itanium's VLIW "bundles" are too static.
               | 
               | I've discussed NVidia's Volta elsewhere, which has 6-bit
               | dependency bitmasks on every instruction. That's the kind
               | of "dependency graph" information that a compiler can
               | provide very easily, and probably save a ton on power /
               | decoding.
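                | 
                | Sketch of what "dependency cutting" buys you, in
                | (hypothetical) C rather than assembly: the first loop is
                | one long dependency chain, the second exposes four
                | independent chains that any out-of-order core, whatever
                | its reorder-buffer size, can overlap. "xor eax, eax"
                | plays a similar role at the instruction level, telling
                | the hardware a register's old value no longer matters:
                | 
                |     /* one chain: every add waits on the previous one */
                |     long sum1(const long *a, long n) {
                |         long s = 0;
                |         for (long i = 0; i < n; i++)
                |             s += a[i];
                |         return s;
                |     }
                | 
                |     /* four independent chains: adds from different chains
                |        can execute in parallel on any sufficiently wide
                |        machine, old or new */
                |     long sum4(const long *a, long n) {
                |         long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
                |         long i = 0;
                |         for (; i + 4 <= n; i += 4) {
                |             s0 += a[i];
                |             s1 += a[i + 1];
                |             s2 += a[i + 2];
                |             s3 += a[i + 3];
                |         }
                |         for (; i < n; i++)
                |             s0 += a[i];
                |         return s0 + s1 + s2 + s3;
                |     }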
        
               | moonchild wrote:
                | Have you seen the Mill CPU?
        
               | javajosh wrote:
               | Has anyone?
        
               | dralley wrote:
               | At the rate they're going, all the patents they've been
               | filing will be expired by the time they get a chip out
               | the door.
        
               | jabl wrote:
               | I agree there is merit in the idea of encoding
               | instruction dependencies in the ISA. There have been a
               | number of research projects in this area, e.g.
               | wavescalar, EDGE/TRIPS, etc.
               | 
               | It's not only about reducing the need for figuring out
               | dependencies at runtime, but you could also partly reduce
               | the need for the (power hungry and hard to scale!)
               | register file to communicate between instructions.
        
               | ATsch wrote:
               | All of this hackery with hundreds of registers just to
               | continue to make a massively parallel computer look like
               | an 80s processor is what something like Itanium would
               | have prevented. Modern processors ended up becoming
               | basically VLIW anyway, Itanium just refused to lie to
               | you.
        
               | dragontamer wrote:
                | When standard machine code is written in a "dependency
                | cutting" way, it scales to many different reorder-buffer
                | sizes. A system from 10+ years ago with only a 100-entry
                | reorder buffer will execute the code with maximum
                | parallelism... while a system today with a 200- to
                | 300-entry reorder buffer will execute the SAME code with
                | maximum parallelism too (and reach higher instructions-
                | per-clock tick).
               | 
               | That's why today's CPUs can have 4-way decoders and 6-way
               | dispatch (AMD Zen and Skylake), because they can "pick up
               | more latent parallelism" that the compilers have given
               | them many years ago.
               | 
               | "Classic" VLIW limits your potential parallelism to the
               | ~3-wide bundles (in Itanium's case). Whoever makes the
               | "next" VLIW CPU should allow a similar scaling over the
               | years.
               | 
               | -----------
               | 
               | It was accidental: I doubt that anyone actually planned
               | the x86 instruction set to be so effectively instruction-
                | level parallel. It's something that was discovered over
               | the years, and proven to be effective.
               | 
               | Yes: somehow more parallel than the explicitly parallel
                | VLIW architecture. It's a bit of a hack, but if it works,
               | why change things?
        
               | anarazel wrote:
                | I don't understand how an increase in CPU-internal
                | parallelism (including the implied variability) goes
                | together with the benefits of VLIW.
        
               | dragontamer wrote:
               | I'm talking about a mythical / mystical VLIW
               | architecture. Obviously, older VLIW designs have failed
                | in this regard... but I don't necessarily see "future"
               | VLIW processors making the same mistake.
               | 
               | Perhaps from your perspective, a VLIW architecture that
               | fixes these problems wouldn't necessarily be VLIW
               | anymore. Which... could be true.
        
               | KMag wrote:
               | > And the compiler doesn't even come close to having as
               | much information as the CPU has.
               | 
               | Unless your CPU has a means for profiling where your
               | pipeline stalls are coming from, combined with dynamic
               | recompilation/reoptimization similar to IBM's project
               | DAISY or HP's Dynamo.
               | 
                | It's not going to do as well as out-of-order CPUs that make
               | instruction re-optimization decisions for every
               | instruction, but I wouldn't rule out software-controlled
               | dynamic re-optimization getting most of the performance
               | benefits of out-of-order execution with a much smaller
               | power budget, due to not re-doing those optimization
               | calculations for every instruction. There are reasons
               | most low-power implementations are in-order chips.
        
               | csharptwdec19 wrote:
               | I feel like what you describe is possible. When I think
               | of what Transmeta was able to accomplish in the early
               | 2000s just with CMS, certainly so.
        
             | drivebycomment wrote:
              | Itanium deserved its fiery death, and a resurrection doesn't
              | make any sense whatsoever. It's a dead-end architecture,
             | and humanity gained (by freeing up valuable engineering
             | power to other more useful endeavors) when it died.
        
               | darksaints wrote:
               | Itanium was an excellent idea that needed investment in
               | compilers. Nobody wanted to make that investment because
               | speculative execution got them 80% of the way there
               | without the investment in compilers. But as it turns out,
               | speculative execution was a phenomenally bad idea, and
               | patching its security vulnerabilities has set back
               | processor performance to the point where VLIW seems like
               | a good idea again. We should have made those compiler
               | improvements decades ago.
        
               | jabl wrote:
               | > Itanium was an excellent idea that needed investment in
               | compilers.
               | 
                | ISTR that Intel & HP spent well over a billion dollars on
                | VLIW compiler R&D, with crickets to show for it all.
               | 
               | How much are you suggesting should be spent this time for
               | a markedly different result?
        
               | dragontamer wrote:
               | NVidia Volta: https://arxiv.org/pdf/1804.06826.pdf
               | 
               | Each machine instruction on NVidia Volta has the
               | following information:
               | 
               | * Reuse Flags
               | 
               | * Wait Barrier Mask
               | 
               | * Read/Write barrier index (6-bit bitmask)
               | 
               | * Read Dependency barriers
               | 
               | * Stall Cycles (4-bit)
               | 
               | * Yield Flag (1-bit software hint: NVidia CU will select
               | new warp, load-balancing the SMT resources of the compute
               | unit)
               | 
               | Itanium's idea of VLIW was commingled with other ideas;
               | in particular, the idea of a compiler static-scheduler to
               | minimize hardware work at runtime.
               | 
               | To my eyes: the benefits of Itanium are implemented in
                | NVidia's GPUs. The compiler that emits these scheduling
                | flags has already been built and is proven effective.
               | 
               | Itanium itself: the crazy "bundling" of instructions and
               | such, seems too complex. The explicit bitmasks / barriers
               | of NVidia Volta seems more straightforward and clear in
               | describing the dependency graph of code (and therefore:
               | the potential parallelism).
               | 
               | ----------
               | 
                | Clearly, static compilers marking what is, and what
                | isn't, parallelizable is useful. NVidia Volta+
               | architectures have proven this. Furthermore, compilers
               | that can emit such information already exist. I do await
               | the day when other architectures wake up to this fact.
        
               | StillBored wrote:
                | GPUs aren't general-purpose compute. EPIC did fairly
               | well with HPC/etc style applications as well, it was
               | everything else that was problematic. So, yes there are a
               | fair number of workload and microarch decision
               | similarities. But right now, those workloads tend to be
               | better handled with a GPU style offload engine (or as it
               | appears the industry is slowly moving, possibly a lot of
               | fat vector units attached to a normal core).
        
               | dragontamer wrote:
               | I'm not talking about the SIMD portion of Volta.
               | 
               | I'm talking about Volta's ability to detect dependencies.
               | Which is null: the core itself probably can't detect
                | dependencies at all. It's entirely left up to the compiler
               | (or at least... it seems to be the case).
               | 
               | AMD's GCN and RDNA architecture is still scanning for
               | read/write hazards like any ol' pipelined architecture
               | you learned in college. The NVidia Volta thing is new,
                | and probably should be studied from an architectural point
               | of view.
               | 
                | Yeah, it's a GPU feature on NVidia Volta. But it's pretty
                | obvious to me that this explicit dependency-barrier thing
               | could be part of a future ISA, even one for traditional
               | CPUs.
        
               | rrss wrote:
               | FWIW, this article suggests the static software
               | scheduling you are describing was introduced in Kepler,
               | so it's probably at least not entirely new in Volta:
               | 
               | https://www.anandtech.com/show/5699/nvidia-geforce-
               | gtx-680-r...
               | 
               | > NVIDIA has replaced Fermi's complex scheduler with a
               | far simpler scheduler that still uses scoreboarding and
               | other methods for inter-warp scheduling, but moves the
               | scheduling of instructions in a warp into NVIDIA's
               | compiler. In essence it's a return to static scheduling
               | 
               | and I think this is describing more or less the same
               | thing in Maxwell:
               | https://github.com/NervanaSystems/maxas/wiki/Control-
               | Codes
        
               | dragontamer wrote:
               | I appreciate the info. Apparently NVidia has been doing
               | this for more years than I expected.
        
               | drivebycomment wrote:
                | By the late 2000s, instruction scheduling research was
               | largely considered done and dusted, with papers like:
               | 
               | https://dl.acm.org/doi/book/10.5555/923366
               | https://dl.acm.org/doi/10.1145/349299.349318
               | 
                | and many, many others (it produced so many PhDs in the 90s).
               | And, needless to say, HP and Intel hired so many
               | excellent researchers during the heydays of Itanium. So I
               | don't know on what basis you think there wasn't enough
               | investment. So I have no choice but to assume you're
               | ignorant of the actual history here, both in academics
               | and industry.
               | 
               | It turns out instruction scheduling can not overcome the
               | challenge of variable memory and cache latency, and
               | branch prediction, because all of those are dynamic and
               | unpredictable, for "integer" application (i.e. bulk of
               | the code running on the CPUs of your laptop and cell
               | phones). And, predication, which was one of the
               | "solutions" to overcome branch misprediction penalties,
               | turns out to be not very efficient, and is limited in its
               | application.
               | 
                | For integer applications, it turns out that instruction-
                | level parallelism isn't really the issue. It's about how
                | to generate and maintain as many outstanding cache misses
                | as possible at a time. VLIW turns out to be insufficient
                | and inefficient for that. Some minor attempts at
                | addressing that through prefetches and more elaborate
                | markings around loads/stores all failed to give good
                | results.
               | 
                | For HPC-type workloads, it turns out data parallelism and
                | thread-level parallelism are a much more efficient way to
                | improve performance, and they also make ILP on a single
               | instruction stream play only a very minor role - GPUs and
               | ML accelerators demonstrate this very clearly.
               | 
               | As for the security and the speculative execution,
               | speculative execution is not going anywhere. Naturally,
                | there is a lot of research around this, like:
               | 
               | https://ieeexplore.ieee.org/abstract/document/9138997
               | https://dl.acm.org/doi/abs/10.1145/3352460.3358306
               | 
                | and while it will take a while before real pipelines
                | implement ideas like the above (so we may continue to see
                | smaller and smaller vulnerabilities as the industry
                | collectively plays a whack-a-mole game), I don't see a
                | world where top-of-the-line general-purpose
                | microprocessors give up on speculative execution, as the
                | performance gain is simply too big.
               | 
               | I have yet to meet any academics or industry processor
                | architects or compiler engineers who think VLIW / Itanium
               | is the way to move forward.
               | 
                | This is not to say putting as much work as possible into
                | the compiler is a bad idea, as nVidia has demonstrated.
                | But what they
               | are doing is not VLIW.
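                | 
                | To make the outstanding-cache-misses point concrete, a
                | small (made-up) C contrast: in the list walk every load
                | address depends on the previous load, so misses are
                | serialized and no static schedule helps much; in the
                | array sum the addresses are independent, so the core can
                | keep many misses in flight at once:
                | 
                |     struct node { struct node *next; long value; };
                | 
                |     /* pointer chasing: one outstanding miss at a time */
                |     long sum_list(const struct node *p) {
                |         long s = 0;
                |         for (; p; p = p->next)
                |             s += p->value;
                |         return s;
                |     }
                | 
                |     /* independent addresses: many outstanding misses
                |        (memory-level parallelism) */
                |     long sum_array(const long *a, long n) {
                |         long s = 0;
                |         for (long i = 0; i < n; i++)
                |             s += a[i];
                |         return s;
                |     }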
        
               | StillBored wrote:
                | I think you're conflating OoO and speculative execution.
                | It was OoO which the Itanium architects (apparently) didn't
               | think would work as well as it did. OoO and being able to
                | build wide superscalar machines, which could dynamically
               | determine instruction dependency chains is what killed
               | EPIC.
               | 
               | Speculative execution is something you would want to do
               | with the itanium as well, otherwise the machine is going
               | to be stalling all the time waiting for branches/etc.
                | Similarly, later Itaniums went OoO (dynamically
                | scheduled) because, it turns out, the compiler can't know
                | runtime state...
               | 
               | https://www.realworldtech.com/poulson/
               | 
               | Also while googling for that, ran across this:
               | 
               | https://news.ycombinator.com/item?id=21410976
               | 
               | PS: speculative execution is here to stay, it might be
                | wrapped in more security domains and/or it's going to just
                | be one more nail in the business model of selling shared
                | compute (something that was questionable from the
                | beginning).
        
               | xgk wrote:
                | questionable from the beginning
               | 
                | Agreed. If you look at what the majority of compute loads
                | are (e.g. Instagram, Snap, Netflix, HPC), then that's
               | (a) not particularly security critical, and (b) so big
               | that the vendors can split their workload in security
               | critical / not security critical, and rent fast machines
               | for the former, and secure machines for the latter.
               | 
               | I wonder which cloud provider is the first to offer this
               | in a coherent way.
        
               | Quequau wrote:
               | I dimly recall reading an interview with one of Intel's
               | Sr. Managers on the Itanium project where he explained
               | his thoughts on why Itanium failed.
               | 
               | His explanation centred on the fact that Intel decided
               | early on that Itanium would only ever be an ultra high
               | end niche product and only built devices which Intel
               | could demand very high prices for. This in turn meant
               | that almost no one outside of the few companies who were
               | supporting Itanium development and certainly not most of
               | the people who were working on other compilers and
               | similar developer tools at the time, had any interest in
               | working on Itanium because they simply could not justify
               | the expense of obtaining the hardware.
               | 
               | So all the organic open source activity that goes on for
               | all the other platforms which are easily obtainable by
               | pedestrian users simply did not go on for Itanium. Intel
               | did not plan on that up front (though in hindsight it
               | seemed obvious) and by the time that was widely
               | recognised within the management team no one was willing
               | to devote the sort of scale of resources that were
               | required for serious development of developer tools on a
               | floundering project.
        
           | ChuckMcM wrote:
           | I would love OpenPOWER to succeed. I just don't see the 48
           | pin QFP version that costs < $1 and powers billions of
           | gizmos. For me the ARM ecosystem's biggest win is that it
           | scales from really small (M0/M0+) to really usefully big
           | (A78) and has many points between those two architectures.
           | 
           | I don't see OpenPOWER going there, but I can easily see
           | RISC-V going there. So, for the moment, that is the horse I'm
           | betting on.
        
             | dragontamer wrote:
             | Not quite 48-pin QFP chips, but 257-pin embedded is still
             | smaller than Rasp. Pi. (Just searched what NXP's newest
              | Power-chip is, and it's an S32R274: 2MB 257-pin BGA.
             | Definitely "embedded" size, but not as small as Cortex-M0)
             | 
              | To be honest, I don't think that NVidia/ARM will screw
             | their Cortex-M0 or Cortex-M0+ customers over. I'm more
             | worried about the higher-end, whether or not NVidia will
             | "play nice" with its bigger rivals (Apple, Intel, AMD) in
             | the datacenter.
        
               | exikyut wrote:
               | The FS32R274VCK2VMM appears to be the cheapest in this
               | series; Digi-Key have it for $30, NXP has it for "$13 @
               | 10K". This is for a 200MHz part.
               | 
               | https://www.nxp.com/part/FS32R274VCK2VMM
               | 
               | https://www.digikey.com/product-detail/en/nxp-usa-
               | inc/FS32R2...
               | 
               | The two related devkits list for $529 and $4,123:
               | https://www.digikey.com/products/en/development-boards-
               | kits-...
               | 
               | --
               | 
                | Those processors make quite a few references to an "e200",
               | which I think is the CPU architecture. I discovered that
               | Digi-Key lists _quite_ a few variants of this under Core
               | Processor; and checking the datasheets of some random
               | results suggests that they are indeed Power architecture
               | parts.
               | 
               | https://www.digikey.com/products/en/integrated-circuits-
               | ics/...
               | 
               | The cheapest option appears to be the $2.67@1000, up-
               | to-48MHz SPC560D40L1B3E0X with 256KB ECC RAM.
               | 
               | Selecting everything >100MHz finds the $7.10@1000
               | SPC560D40L1B3E0X, an up-to-120MHz part that adds 1MB
               | flash (128KB ECC RAM).
               | 
               | Restricting to >=200MHz finds the $13.32@500
                | SPC5742PK1AMLQ9R, which has dual cores at 200MHz,
               | 384KB ECC RAM and 2.5MB flash, and notes core lock-step.
               | 
               | --
               | 
               | After discovering the purpose of the "view prices at"
               | field, the landscape changes somewhat.
               | 
               | https://www.digikey.com/products/en/integrated-circuits-
               | ics/...
               | 
               | The SPC574S64E3CEFAR (https://www.st.com/resource/en/data
               | sheet/spc574s64e3.pdf) is 140MHz, has 1.5MB code + 64KB
               | data flash and 96KB+32KB data RAM, and is available for
               | $14.61 per 1ea.
               | 
               | The SPC5744PFK1AMLQ9 (https://www.nxp.com/docs/en/data-
               | sheet/MPC5744P.pdf) is $20.55@1, 200MHz, 2.5MB ECC flash,
               | 384KB ECC RAM, and has two cores that support lockstep.
               | 
               | The MPC5125YVN400 (https://www.nxp.com/docs/en/product-
               | brief/MPC5125PB.pdf) is $29.72@1, 400MHz, supports
               | DDR2@200MHz (only has 32KB onboard (S)RAM), and supports
               | external flash. (I wonder if you could boot Linux on this
               | thing?)
        
               | rvense wrote:
               | These are all basically ten-year-old parts, aren't they?
        
               | ChuckMcM wrote:
               | Yes but hey, the core ARM ISA is like 40 years old. The
               | key is that they are in fact "low cost SoCs" which is not
               | something I knew existed :-).
               | 
                | It's really too bad the dev boards are so expensive but I
               | get you need a lot of layers to route that sort of BGA.
        
               | dragontamer wrote:
               | They're all low end embedded parts with highly integrated
               | peripherals. Basically: a microcontroller.
               | 
               | No different than say, Cortex-M0 or M0+ in many regards
               | (although ARM scales down to lower spec'd pieces).
        
             | awill wrote:
              | The war is over. Arm has won. That dominance will take a
             | long time to fade. AWS and Apple's future is Arm.
        
             | na85 wrote:
             | I miss DIP chips that would fit on breadboards. I don't
             | have steady enough hands to solder QFP onto a PCB, and I'm
             | too cheap to buy an oven :(
        
               | foldr wrote:
               | A cheap toaster oven or hot air tool works fine for
               | these. Or, as others have said, a regular soldering iron
               | with lots of flux.
        
               | ChuckMcM wrote:
               | https://www.digikey.com/catalog/en/partgroup/smt-
               | breakout-pc... can help. I have hand soldered STM32s to
               | this adapter (use flux, a small tip)
        
               | Ecco wrote:
               | You're supposed to drag-solder those. Look it up on
               | YouTube, it's super easy. The hardest part is positioning
               | the chip, but it's actually easier than with an oven,
               | because you can rework it if you only solder one or two
               | pins :)
        
               | IshKebab wrote:
               | There's an even easier way than drag soldering - just
               | solder it normally, without worrying about bridges. You
               | can put tons of solder on it.
               | 
                | Then use some desoldering braid to soak up the excess
               | solder. It will remove all the bridges and leave perfect
               | joints.
        
               | na85 wrote:
               | Wow, just looked up a video and some guy did an 0.5mm
               | pitch chip pretty darn quickly. Thank you!
        
               | Ecco wrote:
               | You're welcome! Also, flux. Lots of it. Buy some good
               | one, and use tons of it. Then clean the hell out of your
               | PCB!
        
         | xigency wrote:
         | > I never liked Softbank owning it, but hey someone has to.
         | 
         | I understand what you're saying and this seems to be the
         | prevailing pattern but I really don't understand it. ARM could
         | easily be a standalone company. For some reason, mergers are
         | in.
        
           | ChuckMcM wrote:
            | I would like to understand what you know about ARM that its
            | board of directors didn't (doesn't) know? In my experience
           | companies merge when they are forced to, not because they
           | want to.
           | 
           | I have always assumed that their shareholders were offered so
           | much of a premium on their shares that they chose to sell
           | them rather than hold onto them. Clearly based on their
           | fiscal 2015 results[1] they were a going concern.
           | 
           | [1] https://www.arm.com/company/news/2016/02/arm-holdings-
           | plc-re...
        
         | Koshkin wrote:
         | The next Apple machine I am going to buy will be using RISC-V
         | cores.
        
           | gumby wrote:
           | Because of this transaction? I'm sure this deal will have
           | absolutely no impact on apple's deal with ARM.
           | 
            | Apple could of course afford to invest in RISC-V (and surely
           | has played with it internally) but they have enough control
           | of their future under the current arrangement that it will be
           | a long long time before they feel any need to switch -- 15
           | years at least.
        
             | acomjean wrote:
             | Apple and Nvidia don't seem to see eye to eye. Nvidia
              | doesn't support Macs (CUDA support was pulled a year or two
              | ago) and Apple's machines don't include Nvidia cards.
             | 
             | This could change.
        
               | gumby wrote:
                | This is because Nvidia screwed Apple (from Apple's POV)
               | years ago with some bad GPUs to the point where Apple
               | flat out refuses to source Nvidia parts. I don't know the
               | internal details of course just the public ones so can't
               | say if Apple is being petty or if Nvidia burned the
               | bridge while the problem was unfolding.
               | 
               | Given that the CEO was the supply chain guy at the time I
               | suspect the latter, as I'd imagine he'd be more
               | dispassionate than Jobs.
               | 
                | In any case I seriously doubt Nvidia could, much less
                | would, benefit from cancelling Apple's agreement.
        
               | TomVDB wrote:
               | > ... to the point where Apple flat out refuses to source
               | Nvidia parts.
               | 
               | I've seen this argument made before.
               | 
               | It would be a valid point if Apple stopped using Nvidia
               | GPUs in 2008 (they did), and then never used them again.
               | And yet, 4 years later, they used Nvidia GPUs on the 2012
               | MacBook Retina 15" on which I'm typing this.
        
               | ksec wrote:
                | And the 2012 GeForce GPU had GPU panic issues and, let's
                | say, higher chances of GPU failures.
                | 
                | And then that was that. We haven't seen an Nvidia GPU in a
                | Mac since.
        
               | TomVDB wrote:
               | I haven't seen any of those in 8 years, but I'll take you
               | at your word...
               | 
               | That said: AMD GPUs have also had their share of issues
               | on MacBooks.
        
               | gumby wrote:
               | Thanks for this correction!
        
           | saagarjha wrote:
           | I suspect you'll be waiting for quite a long time, if not
           | forever.
        
             | dmix wrote:
             | FWIW Apple isn't even listed on RISC-V's membership page:
             | 
             | https://riscv.org/membership/members/
             | 
             | While some like Google and Alibaba are listed as platinum
             | founding members.
        
               | nickik wrote:
                | I agree that Apple won't do RISC-V but you don't need to
                | be a member to use it.
        
           | CamperBob2 wrote:
           | ROFL. Why not wait for the upcoming Josephson junction
           | memristors, while you're at it?
        
       | RubberShoes wrote:
       | This is not good
        
         | chid wrote:
          | I have heard numerous arguments but the arguments don't feel
          | that compelling. What are the reasons why this is bad?
        
           | wetpaws wrote:
            | More power concentrated in a company that already has a de
            | facto monopoly over the GPU market.
        
           | RobLach wrote:
           | ARM will not be cheaper after this.
        
             | stupendousyappi wrote:
             | Conversely, Nvidia has done a solid job of advancing GPU
             | performance even in the face of weak competition, and with
             | their additional resources, ARM performance may advance
             | even faster, and provide the first competition in decades
             | to x86 in servers and desktops.
        
               | teruakohatu wrote:
               | The tech may have advanced due to the insatiable hunger
               | of machine learning, but the weak competition has meant
               | pricing has not decreased as much as it should have or
                | could have, only enough to move more GPUs. (Nvidia's
                | biggest competitors are the GPUs they manufactured two
                | years earlier.)
        
               | dannyw wrote:
               | Really? You get 2x perf per dollar with Ampere. That's
               | not good enough on the pricing front?
        
               | teruakohatu wrote:
               | Yes really. Performance that is cleverly hampered by RAM
               | (and driver licensing) on the low end from an ML
               | perspective. The only reason they can do this is because
               | of lack of competition. The performance of 3000 series
               | cards could be dramatically improved for large models at
               | a modest increase in price if RAM was doubled.
               | 
               | It really is possible to be critical of a monopoly
               | without disparaging the product itself. It is when a true
               | competitor arises that we see the monopolist's true
               | capabilities (see Intel and AMD).
        
             | 867-5309 wrote:
             | would that affect the value of Raspberry Pi and other
             | budget devices?
        
           | fizzled wrote:
           | Ask Silicon Labs, NXP, STMicrolectronics, Dialog
           | Semiconductor, ADI, Infineon, ON Semi, Renesas, TI, ... they
           | all license Arm IP.
        
           | teruakohatu wrote:
           | An independent ARM that did not manufacture processors was
           | the best outcome for everyone in the industry.
           | 
            | ARM, being owned by an organisation deeply embedded in
            | processor design and manufacturing, will now be licensing
            | designs to its owner's competitors, as well as giving that
            | owner a lot of intel on those competitors.
           | 
            | ARM supercomputers were poised to take on Nvidia. Now it's all
           | one and the same.
           | 
           | As others have said, this will do wonders for RISC-V.
        
           | dragontamer wrote:
           | Consider one of NVidia's rivals: AMD, who uses an ARM chip in
           | their EPYC line of chips as a security co-processor. Does
           | anyone expect NVidia to "play fair" with such a rival?
           | 
            | ARM, as an independent company, has been profoundly "neutral",
           | allowing many companies to benefit from the ARM instruction
           | set. It has been run very well: slightly profitable and an
           | incremental value to all parties involved (be you Apple's
           | iPhone, NVidia's Tegra (aka Nintendo Switch chip), AMD's
           | EPYC, Qualcomm's Snapdragon, numerous hard drive companies,
            | etc. etc.). All in all, ARM's reach is the result of its
            | well-made business decisions that have been fair to all
            | parties involved.
           | 
           | NVidia, despite all their technical achievements, is known to
           | play hardball from a business perspective. I don't think
           | anyone expects NVidia to remain "neutral" or "fair".
        
             | paulmd wrote:
             | > Consider one of NVidia's rivals: AMD, who uses an ARM
             | chip in their EPYC line of chips as a security co-
             | processor. Does anyone expect NVidia to "play fair" with
             | such a rival?
             | 
             | Yes, absolutely.
             | 
             | NVIDIA's not going to burn the ARM ecosystem to the ground.
             | They just paid $40 billion for it. And they only had $11b
             | of cash on hand, they really overpaid for it (because
             | SoftBank desperately needed a big win to cover for their
             | other recent losses).
             | 
             | Now: will everybody (including AMD) probably be paying more
             | for their ARM IP from now on? Yes.
        
               | whatshisface wrote:
                | Why does _SoftBank's_ desperation make _NVIDIA_ pay too
               | much?
        
               | dragontamer wrote:
               | When Oracle purchased Sun Microsystems for $7.4 Billion,
               | did you expect Oracle to burn Solaris to the ground, and
               | turn their back on MySQL's open source philosophy? Then
               | sue Google for billions of dollars over the Java /
               | Android thing?
               | 
               | Or more recently, when Facebook bought Oculus for $2
                | Billion, did you expect Facebook to betray the customers'
                | trust and start pushing Facebook logins?
               | 
                | The Oculus / Facebook login thing just happened weeks ago.
               | Companies betraying the promises they made to their core
               | audience is like... bread-and-butter at this point (and
               | seems to almost always happen after an acquisition play).
                | We know Facebook's modus operandi, and even if it's worse
               | for Oculus, we know that Facebook will do what Facebook
               | does.
               | 
               | Similarly, we know NVidia's modus operandi. NVidia is
               | trying to make a datacenter play and create a vertical
               | company for high-end supercomputers. Such a strategy
               | means that NVidia will NOT play nice with their rivals:
               | Intel or AMD. (And the Mellanox acquisition is just icing
               | on the cake now).
               | 
               | NVidia will absolutely leverage ARM to gain dominance in
               | the datacenter. That's the entire point of this purchase.
               | 
               | --------
               | 
               | There's a story about scorpions and swimming with one on
               | your back. I'm sure you've heard of it before. Just
                | because it's necessary for the scorpion's survival doesn't
               | mean it is safe to trust the scorpion.
        
               | RantyDave wrote:
               | I'm not at all surprised they killed Solaris. Given that
               | Oracle was pretty much the only software that ran on
               | Solaris (or that there might be a good reason to run on
               | Solaris), maybe it was just a big support headache. As
               | for MySQL? Not surprised at all. They pretty much just
                | bought a brand name, maybe just to spite Red Hat.
        
               | rtpg wrote:
               | > Or more recently, when Facebook bought Oculus for $2
               | Billion, did you expect Facebook to betraying the
               | customer's trust and start pushing Facebook logins?
               | 
               | yes? I mean that felt eminently possible from the get-go.
        
               | yyyk wrote:
               | >When Oracle purchased Sun Microsystems for $7.4 Billion,
               | did you expect Oracle
               | 
                | Yes, yes, and yes. This is _Oracle_ we're talking about.
               | Of course they'll invest more in lawyers than tech. The
               | only reason they still invest in Java is the lawsuit
               | potential. If only Google had the smarts to buy Sun
               | instead...
               | 
               | As for NVidia, their play probably is integration and
               | datacenters. At the moment, going after other ARM
                | licensees will hinder NVidia more than help (they're
                | going after x86, no time to waste on bad PR and legal
                | issues with small-time ARM datacenter licensees; Qualcomm
               | and Apple are in a different segment altogether). Of
               | course, we can't guarantee it stays that way.
        
         | gruez wrote:
         | Bad for ARM, good for every other ISA.
        
         | paulmd wrote:
         | It wouldn't have been good if any of the people who could
         | actually afford to buy ARM did so.
         | 
         | Would you rather have TSMC in control of ARM? Maybe have access
         | to new architectures bundled with a mandate that you have to
         | build them on TSMC's processes?
         | 
         | How about Samsung? All of the fab ownership concerns of TSMC
         | plus they also make basically any tech product you care to
         | name, so all the integration concerns of NVIDIA.
         | 
         | https://asia.nikkei.com/Business/Technology/Key-Apple-suppli...
         | 
         | Microsoft? Oracle? None of the companies who could have
         | afforded to pay what Son wanted for Softbank were any better
         | than NVIDIA.
         | 
         | There are a lot of good things that will come out of this as
         | well. NVIDIA is a vibrant company compared to a lot of the
         | others.
        
           | leptons wrote:
           | Microsoft can't afford to buy ARM? lol... well that's not
           | true at all.
        
             | paulmd wrote:
             | But what would microsoft _do_ with ARM at a $40b valuation?
        
       | throwaway4good wrote:
       | Qualcomm and Apple are going to be fine even with NVIDIA owning
       | ARM. They are American companies under the protection of US
       | legislation and representation.
       | 
       | However the situation for Chinese companies is even clearer now.
       | Huawei, Hikvision etc. need to move away from ARM. Probably on to
       | their own thing as RISC-V is dominated by US companies.
        
         | vaxman wrote:
         | Qualcomm, Apple and NVidia will lose favor with the US
         | government unless they bring (at least a full copy of) their
         | FAB partners and the rest of their supply chains home to
         | America (Southwest including USMCA zone). We love Southeast
         | Asia, but the pandemic highlighted our vulnerability and, as a
         | country, we're not going to keep sourcing our critical
          | infrastructure in China -- or its back yard. If those American
         | CEOs keep balking at the huge investment required, you will see
         | the US government write massive checks to Intel (has numerous,
         | albeit obsolete, domestic FABs), DELL and upstarts (like
         | System76 in Colorado) to pick winners, while the elevator to
         | hell gains a new stop in Silicon Valley and San Diego
         | (nationalizing patents, etc) during a sort of "war effort" like
         | we had in the early 1940s.
        
       | unixhero wrote:
       | It's so amazing Apple didn't win this M&A race.
        
         | rahoulb wrote:
         | Apple isn't interested in selling tech licences to other
         | companies - they want to own their core technologies so they
         | can sell products to consumers. And, as an original member of
         | the Arm consortium, they have a perpetual licence to the Arm IP
         | (I have no inside knowledge about that, just many people who
         | know more than me have said it)
        
       | easton wrote:
       | Could Apple hypothetically use their perpetual license to ARM to
       | license the ISA to other manufacturers if they so desired? (not
       | that they do now, but it could be a saving grace if Nvidia
       | assimilated ARM fully).
        
         | pokot0 wrote:
         | Why would they ever want to enable their competitors?
        
           | newsclues wrote:
           | To spite nVidia.
        
             | hndamien wrote:
             | If Larry David has taught me anything, it is that spite
             | stores are not good business.
        
               | mumblerino wrote:
               | It depends on who you are. Are you a Larry David or are
               | you a Mila Kunis?
        
         | Wowfunhappy wrote:
         | I'm pretty sure they can't, but I also think there's no way in
         | hell they'd do it if they could. It's not in Apple's DNA.
         | Better for them if no one else has access to the instruction
         | set.
         | 
         | I bet they'd make a completely custom ISA if they could. Heck,
         | maybe they plan to some day, and that's why they're calling the
         | new Mac processors "Apple Silicon".
        
           | mhh__ wrote:
            | I could see Apple being the next in line of basically failed
           | VLIW ISAs - the cost/benefit doesn't really add up to
           | completely redesign, prove, implement, and support a new ISA
           | unless it was worth it technologically.
           | 
           | If they could pull it off I would be very impressed.
        
           | pier25 wrote:
           | > _I bet they 'd make a completely custom ISA if they could.
           | Heck, maybe they plan to some day, and that's why they're
           | calling the new Mac processors "Apple Silicon"._
           | 
           | That was my first thought.
           | 
           | I'd be surprised if Apple wasn't already working on this.
        
           | exikyut wrote:
           | Thinking about that though, from a technical perspective
           | keeping the name but changing the function would only produce
           | bad PR.
           | 
           | - "We changed from the 68K to PPC" "Agh!...fine"
           | 
           | - "We changed from PPC to x86" "What, again?"
           | 
           | - "We changed from x86 to Apple Silicon" "...oooo...kay..."
           | 
           | - "We changed from Apple Silicon to, uh, Apple Silicon - like
           | it's still called Apple Silicon, but the architecture is
           | different" "What's an architecture?" "The CPU ISA." "The CPU
           | is a what?"
        
       | WhyNotHugo wrote:
        | This is terrible news for the FLOSS community.
       | 
       | Nvidia has consistently for many years refused to properly
       | support Linux and other open source OSs.
       | 
       | Heck, Wayland compositors just say "if you're using nvidia then
       | don't even try to use our software" since they're fed up of
       | Nvidia's lack of collaboration.
       | 
       | I really hope ARM doesn't go the same way. :(
        
         | janoc wrote:
         | ARM itself has little to no impact on open source community.
         | They only license chip IP, they don't make any chips
         | themselves. And most of the ARM architecture is documented and
         | open, with the exception of things like the MALI GPU.
         | 
         | Whether or not some SoC (e.g. in a phone) is going to be
         | supported by Linux doesn't depend on ARM but on the
         | manufacturer of the given chip. That won't change in any way.
         | 
         | ARM maintains the GCC toolchain for the ARM architecture but
         | that is unlikely to go anywhere (and even if it did, it is open
         | source and anyone else can take it over).
         | 
         | The much bigger problem is that Nvidia could now start putting
          | the squeeze on chip makers who license the ARM IP for their own
         | business reasons - Nvidia makes its own ARM-based ICs (e.g. the
         | Jetson, Tegra) and it is hard to imagine that they will not try
          | to use their position to stifle the competition (e.g. from
         | Qualcomm or Samsung).
        
           | hajile wrote:
           | https://developer.arm.com/tools-and-software/open-source-
           | sof...
           | 
           | ARM directly maintains the main ARM parts of the Linux kernel
           | among other things.
        
         | [deleted]
        
       | CivBase wrote:
       | I was hoping that Apple's switch to ARM would prompt better ARM
       | support for popular Linux distros. Given NVIDIA's track record
       | with the OSS community, I'm definitely less hopeful now.
        
       | ianai wrote:
       | They sure seem to be marketing this as a logical move for their
       | AI platform.
        
       | Blammar wrote:
        | No one seems to have noticed the following two things:
       | 
       | "To pave the way for the deal, SoftBank reversed an earlier
       | decision to strip out an internet-of-things business from Arm and
       | transfer it to a new company under its control. That would have
       | stripped Arm of what was meant to be the high-growth engine that
       | would power it into a 5G-connected future. One person said that
       | SoftBank made the decision because it would have put it in
       | conflict with commitments made to the U.K. over Arm, which were
       | agreed at the time of the 2016 deal to appease the government."
       | (from https://arstechnica.com/gadgets/2020/09/nvidia-reportedly-
       | to... )
       | 
       | and
       | 
       | "The transaction does not include Arm's IoT Services Group."
       | (nvidia news.)
       | 
       | which appear to contradict each other.
       | 
       | I'm not sure about the significance of this. I would have guessed
       | Nvidia would have wanted the IoT group to remain.
       | 
       | Also, to first order, when a company issues stock to purchase
       | another corporation, that cost is essentially "free" since the
       | value of the corporation increases.
       | 
       | In other words, Nvidia is essentially paying $12 billion in cash
       | for ARM up front, and that's all. (The extra $5B in cash or stock
       | depends on financial performance of ARM, and thus is a second-
       | order effect.)
        
         | manquer wrote:
         | It is not "free", it means current shareholders of Nvidia are
         | paying for the remaining money. Their stock is diluted on fresh
         | issue of shares.[1]
         | 
         | The $12B comes from Nvidia the company, the remaining money
         | comes from Nvidia's shareholders directly.
         | 
          | [1] Only if the valuation of ARM is "worth it" will the fresh
          | issue of shares not cost the current shareholders anything. This
          | is rarely the case: if Nvidia overvalued (or, less likely,
          | undervalued) the deal, then current shareholders are giving more
          | than they got for it.
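          | 
          | A toy illustration with made-up round numbers: an acquirer
          | with 100M shares at $100 (a $10B company) issues 20M new
          | shares (worth $2B at that price) to buy a target. Existing
          | holders go from 100% to 100/120 ~ 83% of the combined
          | company. If the target really is worth $2B, the combined
          | company is worth $12B, or still $100 per share. If the
          | target is only worth $1B, the combined company is worth
          | $11B, or about $91.67 per share, and the dilution cost the
          | existing holders roughly $8 per share.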
        
           | haten wrote:
           | Yes
        
           | finiteloops wrote:
           | To me, "paying for" and "diluted" connotates negative
           | emotions.
           | 
           | Cash paid in this instance is treated no different than cash
           | in their normal operating expenses. If either generates
           | profits in line with their current expected returns, the
           | stock price stays the same, and everyone is indifferent to
           | the transaction.
           | 
            | Same goes for stock issuance. If the expected return on the
            | proceeds from the issuance is in line with the company's
            | current expected returns, everyone is indifferent.
           | 
           | Your statement is still true, and the stock market jumped
           | today on the news, so I feel my connotation is misplaced.
        
             | pc86 wrote:
             | > _If either generates profits in line with their current
             | expected returns, the stock price stays the same_
             | 
             | This is a pretty naive depiction of Wall Street.
        
           | bananaface wrote:
            | And note that the $12B is _also_ owned by Nvidia's
           | shareholders (they own the company which owns the cash), so
           | they're paying for that too.
           | 
           | It's just different forms of shareholder assets being traded
           | for other assets, by the shareholders (or rather, their
           | majority vote).
        
             | manquer wrote:
             | Effectively all of it is funded by the shareholders who
             | "own" the corporation.
             | 
             | In the current example, if the cash is coming from a debt
             | instrument, isn't it the bank that is funding it now?
             | 
             | It is typically about who is fronting the money now: it
             | could be a bank making loans, your own cash reserves, a
             | fresh stock issue, the sale of another asset, or even the
             | target's bank, as in an LBO.
             | 
             | The shareholders always end up paying for it eventually in
             | some form or other. Differentiating it by the immediate
             | source helps in understanding the deal structure and risks
             | better.
        
               | pottertheotter wrote:
               | It's still the shareholders even if debt financing is
               | used. The shareholders, through the company, have to pay
               | off the debt. It increases the risk of bankruptcy.
               | 
               | When doing a deal, it comes down to price and source of
               | funds. Changing either can drastically change how good or
               | bad the deal is.
        
               | bananaface wrote:
               | Absolutely. I was just noting it for anyone reading.
        
         | pathseeker wrote:
         | >Also, to first order, when a company issues stock to purchase
         | another corporation, that cost is essentially "free" since the
         | value of the corporation increases.
         | 
         | This isn't correct. If investors think Nvidia overpaid, its
         | share price will decline. There are many examples of acquiring
         | companies losing significant value on announcing purchases of
         | other companies, even in pure stock deals.
        
           | siberianbear wrote:
           | In addition, it's not even a valid argument if the cost was
           | entirely in cash.
           | 
           | One would be making the argument, "the cost is essentially
           | free because although we spent $40B, we acquired a company
           | worth $40B". Obviously, that's not any more correct than the
           | case of paying in stock.
        
             | fortran77 wrote:
             | I pay the car dealer $20,000, I get a car worth $20,000.
             | Was the car free?
        
               | tonitosou wrote:
               | Actually, as soon as you get the car it's not a $20k car
               | anymore.
        
               | jaywalk wrote:
               | Technically, yes. Your net worth is unchanged. The
               | problem is that the car is no longer worth $20,000 as
               | soon as you drive it off the lot.
        
             | fauigerzigerk wrote:
             | I agree that paying in new shares doesn't make the
             | acquisition free. But the value of shares is an expectation
             | of future cash flows. If that expectation is currently on
             | the high side for Nvidia and on the low side for Arm then
             | paying in shares makes the acquisition cheaper for Nvidia
             | than paying cash.
             | 
             | Nvidia is essentially telling us that they think their
             | shares are currently richly valued. I agree with that.
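             | 
             | (A toy illustration of that logic; the 25% figure below
             | is purely an assumed overvaluation, not an estimate.)
             | 
             |   stock_portion = 21.5e9  # headline value of shares paid
             |   overvaluation = 0.25    # assumed premium of market
             |                           # price over intrinsic value
             |   intrinsic_cost = stock_portion / (1 + overvaluation)
             |   print(intrinsic_cost / 1e9)   # ~17.2 (billions)
             | 
             | Under that assumption, handing over $21.5B of richly
             | valued stock gives up only ~$17.2B of intrinsic value,
             | which is the sense in which paying in shares can be
             | cheaper than paying cash.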
        
               | bananaface wrote:
               | Not necessarily. They could just prefer to be
               | diversified. It can be rational to take a _loss_ of value
               | in the pursuit of diversification (depends on the
               | portfolios of the stakeholders).
               | 
               | They could also think their shares are valued accurately
               | but believe the synergies of the combination would
               | increase the value.
        
               | fauigerzigerk wrote:
               | How would any of that make payment in shares beneficial
               | compared to cash payment?
        
               | bananaface wrote:
               | $12B cash appears to be all they have _available_. They
               | can't pay more cash.
               | 
               | https://www.marketwatch.com/investing/stock/nvda/financia
               | ls/...
               | 
               | Their only other option would be taking on debt.
        
               | fauigerzigerk wrote:
               | Right, so Nvidia with their A2 rating and their solid
               | balance sheet in a rock bottom interest rate environment
               | still found it favourable to purchase Arm with newly
               | issued shares.
               | 
               | That's telling us something about what they think about
               | their share valuation right now.
        
               | xxpor wrote:
               | Perhaps there are tax implications?
        
             | HenryBemis wrote:
             | Correct. If it's shares, they just diluted the value of the
             | stock. If I owned 1% of the company, then after printing
             | and handing out $40bn worth of stock I keep the same number
             | of shares, but they now represent (e.g.) 0.5%, which means
             | I just lost 50% of that investment, and I will receive 50%
             | of the dividend, since someone else gets a big chunk of
             | next year's dividend 'pool'. Which means Nvidia just
             | screwed the existing shareholders, at least for the time
             | being; once the numbers are merged I should be getting my
             | dividend in two parts, one from ARM and one from NVIDIA.
             | 
             | If they gave away cash, that's a different story; it all
             | depends. If they were sitting on $1tn in cash and spent
             | $40bn, that's no biggie. I mean, we went through COVID,
             | what worse can come next?
        
               | throwaway5792 wrote:
               | That's not how equity works. Paying cash or shares, the
               | end result is exactly the same for shareholders barring
               | perhaps some tax implications.
        
               | kd5bjo wrote:
               | > If I owned 1% of the company, after printing and
               | handing out $40bn worth of stock, then I keep the same
               | amount of stock, but now it's (e.g.) 0.5%, which means I
               | just lost 50% of that investment. Which means I will
               | receive 50% of the dividend
               | 
               | Well, you're now getting 50% of the dividend produced by
               | the new combined entity. If the deal was correctly
               | priced, your share of the Arm dividend should exactly
               | replace the portion of your Nvidia dividend that you lost
               | through dilution.
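               | 
               | (A toy example of that with made-up figures; the payout
               | numbers are assumptions, not NVIDIA's actual
               | dividends.)
               | 
               |   s_old  = 600e6   # assumed NVIDIA share count
               |   s_new  = 43e6    # assumed shares issued
               |   d_nvda = 400e6   # assumed annual dividend pool
               |   # "correctly priced": the acquired dividend
               |   # stream matches the payout per new share issued
               |   d_arm  = d_nvda * s_new / s_old
               | 
               |   before = d_nvda / s_old
               |   after  = (d_nvda + d_arm) / (s_old + s_new)
               |   print(before, after)   # identical per share
               | 
               | Your percentage ownership falls, but the dividend
               | pool grows in proportion, so per-share income only
               | changes if the deal over- or under-pays.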
        
           | smabie wrote:
           | It's rare for the acquiring company's stock not to decline,
           | regardless of whether the market thinks they overpaid.
        
             | inlined wrote:
             | I worked at Parse when it was bought by Facebook. The day
             | the news broke, Facebook's market cap grew by multiples of
             | the acquisition price. I remember being gobsmacked that
             | Facebook effectively got paid to buy us.
        
               | rocqua wrote:
               | Unless Facebook itself actually issued stock at the new
               | price, Facebook did not get paid. It was the Facebook
               | shareholders who got paid.
               | 
               | Really, it shows that the market valued Parse at much
               | more than the cash it cost Facebook. If Parse had been
               | bought with stock instead of cash, that would almost be
               | cooler, since it would have allowed Parse to capture
               | more of the surplus value they created (since the stock
               | price popped).
        
               | jannes wrote:
               | Keep in mind that market cap is a fictional aggregate. It
               | does not represent the real price that all shares could
               | be sold for.
        
         | walterbell wrote:
         | There were two separate IoT business units: Platform
         | (https://pelion.com) and Data (https://www.treasuredata.com/).
         | The Platform unit fits the Segars post-acquisition comment
         | about end-to-end IoT software architecture,
         | https://news.ycombinator.com/item?id=24465005
         | 
         |  _> One person close to the talks said that Nvidia would make
         | commitments to the UK government over Arm's future in Britain,
         | where opposition politicians have recently insisted that any
         | potential deal must safeguard British jobs._
         | 
         | So the deal has already been influenced by one regulator. That
         | should encourage other regulators.
         | 
         |  _> SoftBank will remain committed to Arm's long-term success
         | through its ownership stake in NVIDIA, expected to be under 10
         | percent._
         | 
         | Why is this stake necessary?
        
           | nautilus12 wrote:
            | Does this mean they are or aren't buying Treasure Data? What
           | would happen if not?
        
             | walterbell wrote:
             | That would be up to Softbank.
        
           | xbmcuser wrote:
            | Are they getting 10% of Nvidia or keeping 10% of Arm?
        
             | inopinatus wrote:
             | Given the quality of the reporting so far it's possible
             | that they are getting 10% of the UK.
        
             | btown wrote:
             | From the press release directly, it appears to be 10% of
             | NVIDIA:
             | 
             | https://nvidianews.nvidia.com/news/nvidia-to-acquire-arm-
             | for...
             | 
             | > Under the terms of the transaction, which has been
             | approved by the boards of directors of NVIDIA, SBG and Arm,
             | NVIDIA will pay to SoftBank a total of $21.5 billion in
             | NVIDIA common stock and $12 billion in cash, which includes
             | $2 billion payable at signing. The number of NVIDIA shares
             | to be issued at closing is 44.3 million, determined using
             | the average closing price of NVIDIA common stock for the
             | last 30 trading days. Additionally, SoftBank may receive up
             | to $5 billion in cash or common stock under an earn-out
             | construct, subject to satisfaction of specific financial
             | performance targets by Arm.
             | 
             | Since NVIDIA currently has 617 million shares outstanding,
             | if the earn-out were to be fully in common stock, this
             | would bring Softbank to 8.8% of NVIDIA from this
             | transaction alone (plus anything they already have in
             | NVIDIA as public-market investors).
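              | 
              | (A quick check of that arithmetic; the earn-out share
              | count is an assumption, since the release quotes the
              | earn-out only in dollars.)
              | 
              |   existing   = 617e6    # NVIDIA shares outstanding
              |   at_closing = 44.3e6   # shares issued to SoftBank
              |   price      = 21.5e9 / at_closing  # ~ $485/share
              |   earn_out   = 5e9 / price   # if paid fully in stock
              |   softbank   = at_closing + earn_out  # ~54.6M shares
              | 
              |   print(softbank / existing)               # ~0.088
              |   print(softbank / (existing + softbank))  # ~0.081
              | 
              | So the ~8.8% figure is relative to the current share
              | count; against the enlarged post-deal share count it
              | works out to roughly 8.1%.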
             | 
             | (I believe that the Forbes analysis in the sibling comment
             | is mistaken in describing this as a "10% stake in new
             | entity" - no new entity is mentioned in the press release
             | itself.)
        
               | walterbell wrote:
               | _> no new entity is mentioned in the press release
               | itself_
               | 
               | The Forbes article is based on a joint interview today
               | with the CEOs of Arm and Nvidia, who could have provided
               | more detail than the press release, specifically:
               | 
               |  _> Arm operating structure: Arm will operate as an
               | NVIDIA division_
               | 
               | This level of detail can be confirmed during the analyst
               | call on Monday. Operating as a separate division would
               | help assuage concerns about Arm's independence. The press
               | release says:
               | 
               |  _> Arm will remain headquartered in Cambridge ... Arm's
               | intellectual property will remain registered in the U.K._
               | 
               | Those statements are both consistent with Arm operating
               | as a UK-domiciled business that is owned by Nvidia.
        
             | walterbell wrote:
             | 10% of Arm division of Nvidia, https://www.forbes.com/sites
             | /patrickmoorhead/2020/09/13/its-....
             | 
             |  _> Softbank ownership: Will keep 10% stake in new entity_
             | 
             | That would mean Nvidia acquired 90% of Arm for $40B, i.e.
             | Arm was valued at $44B.
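              | 
              | (Restating that arithmetic, under the Forbes reading
              | that SoftBank keeps 10%:)
              | 
              |   stake_bought = 0.9   # Nvidia's share, per Forbes
              |   price_paid   = 40e9
              |   print(price_paid / stake_bought)   # ~44.4e9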
             | 
              | Is Softbank invested in Arm licensees who may benefit from
              | Softbank's influence? Alternatively, which Arm licensees
              | would bid in a future auction of Softbank's 10% stake in
              | Nvidia's Arm division?
        
           | boulos wrote:
           | > Why is this stake necessary?
           | 
           | Edit: it's not necessary/a requirement.
           | 
            | They're noting that after the transaction, SoftBank will
            | still be below the 10% ownership threshold that triggers
            | additional reporting to the SEC [1]:
           | 
           | > Section 16 of the Exchange Act applies to an SEC reporting
           | company's directors and officers, as well as shareholders who
           | own more than 10% of a class of the company's equity
           | securities registered under the Exchange Act. The rules under
           | Section 16 require these "insiders" to report most of their
           | transactions involving the company's equity securities to the
           | SEC within two business days on Forms 3, 4 or 5.
           | 
           | [1] https://www.sec.gov/smallbusiness/goingpublic/officersand
           | dir...
        
           | oxfordmale wrote:
            | The UK government never learns. Kraft made similar promises
            | about jobs when buying Cadbury, but quietly reneged on them
            | over time. The Kraft CEO was asked to appear before a UK
            | parliamentary committee, but of course declined, and that
            | was the end of the story.
        
             | bencollier49 wrote:
             | It absolutely outrages me. I've posted elsewhere on the
             | thread about this, but I want to do something to prevent
             | this sort of thing in future. Setting up a think tank seems
             | like a viable idea. See my other response in the thread if
             | you are interested and want to get in contact.
        
               | lumberingjack wrote:
                | Welcome to vulture capitalism. It's been gutting American
                | industry for 10-20 years. There used to be four or five
                | paper mills around here, but now they're all in China and
                | everyone who worked there is on drugs now.
        
               | fencepost wrote:
                |  _Welcome to vulture capitalism. It's been gutting
                | American industry for 10-20 years_
               | 
               | More like 30-40.
               | https://en.wikipedia.org/wiki/Private_equity_in_the_1980s
        
               | Torkel wrote:
                | Get the deals in writing, with explicit liabilities if
                | the contract is broken. There, I fixed it.
        
               | bencollier49 wrote:
               | What, and the profits, investments and growth made by the
               | company will stay inside the country? There's a much
               | broader problem here.
        
               | [deleted]
        
             | LoSboccacc wrote:
              | It's all posturing unless the government makes the company
              | agree to punitive severance packages, put in an escrow
              | account, to be released to the company after 20 years or
              | to the employees if they are fired before that.
        
             | sgt101 wrote:
              | What's this "quietly reneged over time"? Kraft simply
              | reneged on the deal instantly! There were explosions and a
              | change in the law.
        
             | djmobley wrote:
             | Except the Cadbury-Kraft debacle led to major reforms in
             | how the UK regulates foreign takeovers.
             | 
             | In the case of Arm, the guarantees provided back in 2016
             | were legally binding, which is why we're here, four years
             | and another acquisition later, with Nvidia now eager to
             | demonstrate it is standing by those commitments.
             | 
             | Maybe in this particular instance they did learn something?
        
               | bencollier49 wrote:
               | They should learn to prevent the sale, period. Promises
               | to retain some jobs (for ever? What happens if profits
               | decline? What happens if the core tech is sold and ARM
               | becomes a shell? Do the rules still apply?) address a
               | tiny fraction of the problems presented by the sale of
               | one of our core national tech companies.
        
               | tomalpha wrote:
               | I would have liked to see ARM remain owned in the UK. I
               | think it's proven itself capable of innovation and
               | organic growth on its own.
               | 
                | But how can we evaluate whether that will continue?
               | 
               | What if ARM is not sold, and then (for whatever reason)
               | stagnates, doesn't innovate, gets overtaken in some way,
               | and enters gradual decline?
               | 
                | Perhaps that's unlikely, but _prevent the sale, period_
                | feels too absolute.
        
               | djmobley wrote:
               | Who is to say ARM was "owned in the UK" prior to the
               | SoftBank acquisition?
               | 
               | Prior to that it was a publicly traded company,
               | presumably with a diverse array of international
               | shareholders.
        
               | bencollier49 wrote:
                | That feels like giving up, to me. We should have the
                | confidence that British industry can develop and flourish
                | on its own merits without being sold off to foreign
                | interests.
        
               | fluffything wrote:
                | Why have confidence when we can just look at ARM's
                | financials?
                | 
                | There are more ARM chips sold each year than those of all
                | its competitors together. Yet ARM's revenue is $300
                | million.
                | 
                | Why? Because ARM lives off ISA royalties, and its revenue
                | on the cores it licenses is actually small.
                | 
                | With RISC-V on the rise, and Western sanctions against
                | China, RISC-V competition against ARM will only increase,
                | and it is very hard to compete against something that's
                | as good or better and has lower costs (RISC-V royalties
                | are "free").
                | 
                | I really have no idea why NVIDIA would acquire ARM. If
                | they want a world-class CPU team for the data center, ARM
                | isn't that (Graviton, Apple Silicon, Fujitsu, etc. are
                | built and designed by better teams). ARM cores are used
                | by Qualcomm and Samsung, but these aren't world-class and
                | get beaten every generation by Apple Silicon. If they
                | want ARM royalties, that's a high-risk, very low-reward
                | business (there is little money to be made there).
                | 
                | The only ok-ish cores ARM makes are embedded low-power
                | cores (not mobile, but truly IoT, <1W embedded). It's hard
                | to imagine that an architecture like Volta or Ampere,
                | which performs well at 200-400W, would perform well in a
                | <1W envelope. No mobile phone in the world uses Nvidia
                | accelerators, and mobile phones are "supercomputers"
                | compared with the kind of devices ARM is "ok-ish" at.
                | 
                | So none of this makes sense to me, unless NVIDIA wants to
                | "license" GPUs with ARM cores for IoT and low-power
                | devices the way ARM does. But that sounds extremely far-
                | fetched, because Nvidia is very far from a product there,
                | and because the margins on those products are very, very
                | thin, while Nvidia tends to like 40-60% margins. You just
                | can't have those margins on IoT chips that sell for $0.12.
                | It's also hard to sell a GPU for these use cases because
                | they often don't need one.
        
               | als0 wrote:
               | > If they want a world-class CPU team for the data-
               | center, ARM isn't that (Graviton
               | 
               | Graviton uses Neoverse CPU cores, which are designed by
               | ARM. To say that ARM is not a world-class CPU team is
                | unfair, especially as Ampere Computing just announced an
                | 80-core datacenter SoC using Neoverse cores.
        
               | [deleted]
        
               | Followerer wrote:
               | The main source of revenue for ARM is, by far, royalties.
               | Licenses are paid once, royalties are paid by unit
               | shipped. And they shipped billions last year.
               | 
                | Revenue is not $300 million; we don't know what ARM's
                | revenue is because it hasn't been published since 2016.
                | And back then it was about $1.5 _billion_. $300 million
                | was _net_ income. Again, in 2016.
               | 
               | I think you've already been adequately corrected on your
               | misconceptions about ARM's CPU design teams.
        
               | lumberingjack wrote:
                | Apple mobile beating ARM mobile is pretty irrelevant
                | considering they can barely run the same types of tasks;
                | it's a miracle they even have comparable benchmarks, since
                | their operating systems are totally different. Beyond
                | that, it doesn't even really matter which one is faster,
                | because of all the overhead of operating systems that are
                | so different. I would also say that Apple users are going
                | to buy the newer chip whether it's faster or not, because
                | of planned obsolescence at the software level. I've never
                | heard an Apple user talk about speed or phone benchmarks,
                | because that market segment is clueless about that. My
                | guess is that Nvidia needs ARM for their self-driving
                | stuff.
        
               | bogomipz wrote:
               | >" If they want a world-class CPU team for the data-
               | center, ARM isn't that (Graviton, Apple Silicon, Fujitsu,
               | etc. are built and designed by better teams)."
               | 
                | The latest Fujitsu HPC offering, the A64FX, is also ARM-
                | based, though. [1][2] And it sounds as though it is
                | replacing their SPARC64 in this role.
               | 
               | [1] https://en.wikipedia.org/wiki/Fujitsu_A64FX
               | 
               | [2] https://en.wikipedia.org/wiki/Fugaku_(supercomputer)
        
               | [deleted]
        
               | formerly_proven wrote:
               | The core is not designed by ARM.
        
               | oxfordmale wrote:
               | Nvidia could still asset strip ARM, and then let ARM
               | decline organically with redundancies justified by the
               | decrease in revenue.
        
               | lumberingjack wrote:
                | Sounds like all those new regulations will be a limiting
                | factor for new jobs. I'm not going to set up my company
                | there if I have to jump through hoops like that.
        
       ___________________________________________________________________
       (page generated 2020-09-14 23:00 UTC)