[HN Gopher] Apple M2 Pro to use new 3nm process
       ___________________________________________________________________
        
       Apple M2 Pro to use new 3nm process
        
       Author : nateb2022
       Score  : 175 points
       Date   : 2022-08-25 15:11 UTC (7 hours ago)
        
 (HTM) web link (www.cultofmac.com)
 (TXT) w3m dump (www.cultofmac.com)
        
       | brundolf wrote:
       | I'm curious what actually unifies the "MX" for some X. There are
       | different chips in the series, and apparently they can even be on
       | different-sized processes and keep the name
       | 
       | Anybody know more detail?
        
         | spullara wrote:
         | Marketing?
        
           | brundolf wrote:
           | It's possible it's nothing but marketing, but I didn't want
           | to assume that without knowing what I'm talking about
        
           | wilg wrote:
           | That's what the M is for
        
         | gpderetta wrote:
         | Microarchitecture.
        
       | adtac wrote:
       | Is the node process size comparable across architectures?
        
         | remlov wrote:
         | No. At these nodes it's all marketing.
        
         | muricula wrote:
         | Process and architecture are mostly independent.
         | 
          | The process node is determined by the physical manufacturing.
          | Architecture is determined by the design imprinted during
          | manufacturing. You could make an ARM core on an Intel process
          | (which I think even happens in some of their testing phases).
          | So yes.
        
         | cpurdy wrote:
         | No, not really. The "3nm" in the "3nm process" is not a measure
         | of anything in particular, and even if it is a measure, the
         | measure may or may not be in the neighborhood of 3nm.
         | 
         | Several years ago, fabs started naming each next-gen process
         | with a smaller number of nanometers, even if the process size
         | didn't change. It's just marketing now.
        
       | narrator wrote:
       | I wonder how many people have to collaborate to get a 3nm
       | semiconductor out the door. TSMC has 65,000 employees. ASML has
       | 32,000 employees and 5000 suppliers. The complexity of it all is
       | unimaginable!
        
       | Kalanos wrote:
       | excellent. i've been waiting for an M2 macbook pro
        
         | ArcMex wrote:
         | That already exists.
         | 
         | https://www.apple.com/shop/buy-mac/macbook-pro/13-inch
         | 
         | Did you mean an M2 Pro MacBook Pro?
        
           | solarkraft wrote:
           | It's so weird that they made another one, keeping the old
           | design and touch bar alive.
           | 
           | I wonder whether they do focus group testing and found that
           | some significant minority likes them enough for it to be
           | worth it.
        
       | bonney_io wrote:
       | So it seems like the M2 is really an "M1+" or "M1X", whereas the
       | M2 Pro/Max/Ultra are really the second-generation Apple Silicon.
       | 
       | That's fine, in my opinion. M1 is still an amazing chip, and if
       | that product class (MacBook Air, entry iMac, etc.) gets even
       | marginal yearly revisions, that's still better than life was on
       | Intel.
        
         | top_sigrid wrote:
          | Actually I get a different impression. Although the M2 test
          | results have been impressive (the M2 being based on the A15
          | and not the A14 makes it more than an M1X imho), the issues
          | around throttling and thermals in the MacBook Air make it
          | seem to me that the M2 was actually designed for the 3nm
          | node, which then seems to have been delayed by TSMC. That the
          | rest of the M2* line will presumably be made on the 3nm
          | process reinforces this impression for me.
         | 
         | I was planning on getting the redesigned M2 Air, but with the
         | above in mind (which is just speculation) it got me thinking
         | again.
        
       | lxe wrote:
       | According to Wikipedia:
       | 
       | > The term "3 nanometer" has no relation to any actual physical
       | feature (such as gate length, metal pitch or gate pitch) of the
       | transistors.
       | 
       | I thought it at least maps to something physical. But it's just a
       | marketing term.
        
       | eis wrote:
       | M1 and M2 actually are not produced on the exact same process
       | node. M1 is N5 and M2 is N5P, an optimized version of N5.
       | 
        | I think Kuo might be misinterpreting the statement from TSMC
        | regarding revenue from N3. The key is that they said it won't
        | "substantially" contribute to revenue until 2023. Of course,
        | processors like the M2 Pro/Max/Ultra won't ship in anywhere
        | near the volumes of something like an iPhone, so in the grand
        | scheme of things they can't represent a substantial
        | contribution to TSMC's revenue.
       | 
        | The fact is TSMC said they'll start N3 HVM in September. So they
        | are producing _something_ and we know Apple is expected to be
        | the first customer for this node. It's too early for the A17, so
        | either it's the M2 Pro/Max/Ultra or something new like the VR
        | headset chip. Can someone see another possibility?
       | 
       | Apple still btw has to replace the Mac Pro with an Apple Silicon
       | based model and their own deadline (2 years from first M1) is
       | running out. It could make sense that they want to bring this one
       | with a "bang" and claim the performance crown just to stick it to
       | Intel :)
        
         | ksec wrote:
          | Starting HVM in September does not mean you get revenue in
          | September. It takes months before volume is reached and
          | testing, packaging, and shipping are done. It isn't unusual
          | for TSMC to state that they wouldn't see revenue from N3
          | until 2023.
        
           | martin_bech wrote:
            | Well, doesn't Apple usually prepay? That's normally why
            | they get preferential treatment.
        
             | refulgentis wrote:
              | It's more complicated than that; accounting as a field
              | exists pretty much because there are intricate sets of
              | rules and ways to interpret them. Here, my understanding
              | is a good accountant would say not to recognize the
              | revenue until you consider it shipped -
              | 
              | i.e. if you agree to pay me a bajillion dollars for a
              | time machine with an out clause of no cash if no
              | delivery, that doesn't mean I get to book a bajillion
              | dollars in revenue.
              | 
              | An over-the-top example, but this was the general shape
              | of much Enron chicanery: booking speculative revenue
              | based on coming to terms on projects they were in no
              | shape to deliver. So it's very much an accounting 'code
              | smell', if 'code smell' meant 'attracts regulators'
              | attention'.
        
             | ralph84 wrote:
             | In accrual accounting payment has very little connection to
             | when revenue is recognized. In order for revenue to be
             | recognized the product has to ship.
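The accrual rule described above (revenue booked on shipment, not on payment) can be sketched as a toy calculation. The function name and all figures below are hypothetical illustrations, not TSMC's actual accounting:

```python
# Toy model of accrual-style revenue recognition: a customer prepayment
# sits as deferred revenue (a liability) until goods actually ship.
# All names and numbers here are illustrative.

def recognize(prepayment: float, units_ordered: int, units_shipped: int) -> dict:
    """Split a prepayment into recognized vs. deferred revenue."""
    per_unit = prepayment / units_ordered
    recognized = per_unit * units_shipped
    return {
        "recognized_revenue": recognized,
        "deferred_revenue": prepayment - recognized,
    }

# A full prepayment with nothing shipped yields zero recognized revenue:
print(recognize(1_000_000.0, 1_000, 0))
# Shipping a quarter of the order recognizes a quarter of the payment:
print(recognize(1_000_000.0, 1_000, 250))
```

On this logic, even if Apple prepays, the cash would not show up as N3 revenue until wafers actually ship, which is consistent with N3 revenue only appearing in 2023.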
        
           | eis wrote:
            | Yes. So either way, Kuo cannot deduce that the M2 Pro won't
            | be on N3. Whether the revenue is realized later or the
            | numbers are too low to justify calling it a substantial
            | contribution to TSMC's revenue... same result. Kuo's
            | argument does not seem to hold water. Now, that does not
            | mean the inverse is true and the M2 Pro is guaranteed to be
            | on N3. I can only come up with the VR chip as an
            | alternative, and so far I don't think anybody else has come
            | up with a suggestion.
        
         | simonebrunozzi wrote:
         | > something new like the VR headset chip
         | 
         | My bet is on this one.
        
         | greenknight wrote:
          | I would expect Apple to push its golden child (the iPhone)
          | onto the node first. It's a small chip which they can use as
          | a pipe cleaner, making sure they can get the yields up and
          | optimise the process before pushing a larger die onto the
          | node.
          | 
          | They easily could have been allocating risk production to the
          | iPhone for the past couple of months, ready for the launch -
          | Apple being like "yes, we will take lower yields for less
          | cost".
          | 
          | I do not expect any other company to announce a production N3
          | product until Apple has had one out for at least 6-12 months.
          | Look how long it took the rest of the industry to move to N5.
          | I swear part of the reason was an exclusivity agreement with
          | Apple, and it massively paid off for their CPUs. Having a
          | node advantage is always massive in terms of the price /
          | performance / power matrix.
        
           | eis wrote:
            | Are you suggesting they might have produced millions of A16
            | chips on N3 during the risk production phase and launched
            | them before TSMC even reached HVM? Highly unlikely. Risk
            | production is a phase where they still make changes and fix
            | issues. It's like a beta phase. It does not come at a lower
            | cost; it would be more expensive to throw out a big chunk
            | of chips. The iPhone chips are very high volume - you can't
            | produce them before reaching... the high volume
            | manufacturing phase.
           | 
           | The iPhone contributes to TSMC revenue in a substantial
           | manner so that also would totally not fit what TSMC said.
           | 
           | The M2 Pro/Max/Ultra are much lower volume and higher margin.
           | It makes sense to start with them.
        
             | BackBlast wrote:
             | Except that they are much larger chips, that will be much
             | more sensitive to yield issues. They could do that, but
             | they will be expensive. Maybe that's ok.
        
         | xiphias2 wrote:
          | They don't "have to" be ready in 2 years. The M2 numbers from
          | the N5P process were underwhelming; I wouldn't replace my M1
          | MacBook Pro without seeing significantly superior performance
          | per watt, and I'm happy to wait however long it takes for the
          | N3 process to be in production.
        
           | rvz wrote:
            | After the November 2020 launch-day chaos - when much
            | existing software, like Docker, Java, Android Studio and
            | its emulator, VSTs, etc., did not work on those machines -
            | a typical developer would have had to wait more than 6
            | months just to do their work with fully supported software
            | on the system and to take full advantage of the performance
            | gains rather than using Rosetta.
            | 
            | At that point, they might as well have skipped the M1
            | machines and instead waited to purchase the M1 Pro
            | MacBooks. Now there isn't any rush to get an M1 MacBook
            | anymore, as Apple is already moving to the M2 lineup.
           | 
           | By the time they have made an Apple Silicon Mac Pro, they are
           | already planning ahead for the new series of Apple Silicon
           | chips; probably M3, which will be after the M2 Pro/Ultra
           | products.
           | 
           | After that, it will be the beginning of the end of macOS on
           | Intel.
        
             | simonh wrote:
              | >...not much existing software was working [lists a few
              | nerdy dev tools used by 0.01% of the Mac user bsse]...
        
               | umanwizard wrote:
               | Surely software developers (and other people using
               | x86-only software like Photoshop) are more than 0.01% of
               | the mac user base.
               | 
               | Apple has specifically said that vim users are the reason
               | they put back the physical escape key...
        
               | rvz wrote:
                | It gets even better. Before users could run much new
                | software on the system at all, simply updating it could
                | brick it, even in week one after launch day.
                | 
                | So it was back to the Apple Store for lots of the Mac
                | user base complaining about their M1 Macs getting
                | bricked, on top of those unable to run their old
                | software on the system.
                | 
                | Such chaos that was.
        
             | 2muchcoffeeman wrote:
             | What's the point of this comment? Every consumer electronic
             | product has a new version a year or 2 away.
             | 
             | Apple products also have a long reputation of having a
             | sweet spot for buying a new product. The Mac Buyers guide
             | has existed for like a decade or more.
        
               | rvz wrote:
               | > What's the point of this comment? Every consumer
               | electronic product has a new version a year or 2 away.
               | 
                | So 9 months after releasing the M1 MacBooks, the M1
                | Pro MacBooks came out, replacing the old ones in less
                | than a year. Given this fast cycle, there is a reason
                | why the Osborne effect applies precisely to Apple's
                | flagship products rather than _'Every consumer
                | electronic product'_.
                | 
                | This is a new system running on a new architecture, and
                | it must run the same apps as the user's previous
                | computer. Unfortunately, much of the software was
                | simply not yet available on the system at the time, and
                | what was there often didn't run at all in Nov 2020.
                | Even a simple update could brick the system.
                | 
                | What use is a system that bricks on an update, losing
                | your important files, or that makes power users wait 6
                | months for the software they use every day to be
                | available and supported for their work?
                | 
                | Going all in on the hype fed by the Apple boosters and
                | hype squad doesn't make any sense as a buyer's guide.
        
               | 2muchcoffeeman wrote:
               | > _So after 9 months releasing the M1 Macbooks, the M1
               | Pro Macbooks came out afterwards, already replacing the
               | old ones in less than a year._
               | 
                | The M1 Air and 13" Pro are really entry level machines.
                | The first model with an M1 Pro costs $700 USD over the
                | base model 13" M2 MBP, and the M1 Pro still has much
                | better performance than a base M2. The M1 Pro, Max and
                | Ultra didn't replace anything. No one on a budget is
                | going "Oh, the M1 Pro only costs an extra $700 USD,
                | I'll get that".
               | 
               | > _What use is a system that bricks on an update; losing
               | your important file or for power users having to wait 6
               | months for the software they use everyday to be available
               | and supported for their work?_
               | 
               | What's the point of this comment? Things happen. It
               | sucks. Apple isn't the first and won't be the last
               | company to make a mistake. Don't get sucked into the
               | shininess of their latest product.
        
           | saagarjha wrote:
           | idk, I feel like most people don't replace their expensive
           | MacBook Pros every single time there's a new one
        
           | eis wrote:
           | Of course nothing forces them to be ready within 2 years but
           | alas, that's what Apple said they'd do. I agree the M2
           | numbers were not amazing. I guess after the big M1 shock it's
           | hard to follow up with something that comes even close. You
           | can't get similar gains like the transition from x86 to an
           | integrated arm based SOC brought, doubly so when there's no
           | substantial process node improvement (N5 -> N5P is a minor
           | optimization). In the end they mostly bought better
           | performance with a bigger die and increased power
           | consumption. I'm pretty convinced they'll need N3 for the
           | next jump but even that wont be on the level of the Intel ->
           | M1 step.
           | 
           | The revolution has happened, now it's all about evolution.
           | 
            | BTW, if Apple wants to increase the prices of the Pro
            | MacBooks like they did with the M2 Air due to inflation,
            | then they'd better justify it with some good gains. The big
            | changes in terms of hardware redesign already happened last
            | time.
        
             | nonameiguess wrote:
             | Are you European? The price of the M2 Air did not increase.
             | The USD price was exactly the same as for the M1 Air. Both
             | debuted at $1199. The price went up in Europe because of a
             | drastic reduction in the EUR/USD exchange rate.
        
               | kergonath wrote:
               | The euro is not doing great, but the dollar is falling as
               | well.
        
               | eis wrote:
                | The M1 Air launched at a price of $999. The increase to
                | $1199 happened with the launch of the M2 Air.
                | 
                | > With its sleek wedge-shaped design, stunning Retina
                | display, Magic Keyboard, and astonishing level of
                | performance thanks to M1, the new MacBook Air once
                | again redefines what a thin and light notebook can do.
                | And it is still just $999, and $899 for education.
               | 
               | https://www.apple.com/newsroom/2020/11/introducing-the-
               | next-...
        
             | GeekyBear wrote:
             | > I agree the M2 numbers were not amazing.
             | 
             | What other CPU core design iteration managed to improve
             | performance while also cutting power draw?
             | 
             | Anandtech's deep dive on the performance and efficiency
             | cores used in the A15 and M2:
             | 
             | Performance:
             | 
             | >In our extensive testing, we're elated to see that it was
             | actually mostly an efficiency focus this year, with the new
             | performance cores showcasing adequate performance
             | improvements, while at the same time reducing power
             | consumption, as well as significantly improving energy
             | efficiency.
             | 
             | Efficiency:
             | 
             | >The efficiency cores have also seen massive gains, this
             | time around with Apple mostly investing them back into
             | performance, with the new cores showcasing +23-28% absolute
             | performance improvements, something that isn't easily
             | identified by popular benchmarking. This large performance
             | increase further helps the SoC improve energy efficiency,
             | and our initial battery life figures of the new 13 series
             | showcase that the chip has a very large part into the
             | vastly longer longevity of the new devices.
             | 
             | https://www.anandtech.com/show/16983/the-apple-a15-soc-
             | perfo...
             | 
             | Intel and AMD seem to have both returned to the Pentium 4
             | days of chasing performance via increased clock speeds and
             | power draws.
        
               | eis wrote:
               | The report you quoted and linked to is about the A15 and
               | not the M2. The M2 is based on the A15 but from what I've
               | seen it does use quite a bit more power (~30%?) than the
               | M1 when loaded. Anandtech did not analyze the M2 yet as
               | far as I can see.
        
               | GeekyBear wrote:
               | As previously noted, those core designs are used in both
               | the A15 and the M2.
               | 
               | Just as the same cores were used in the A14 and M1.
               | 
               | Using more power overall comes from adding additional GPU
               | cores and other non-CPU core functionality.
        
               | eis wrote:
               | If the increase in power consumption comes from the
               | additional GPU core, from increased frequencies in the
               | CPU cores or other added parts to the chip imho is not
               | that important for users (and depends on what they are
               | doing). They see the system as a whole. They get x% more
               | performance for y% more power usage. For the CPU x is
               | smaller than y. This is totally normal when increasing
               | frequencies.
               | 
                | Note: I'm not saying the M2 is bad. It's a very good
                | chip indeed. All I said was that it was not amazing. It
                | was an iterative, yet welcome, improvement. And I think
                | one couldn't expect anything amazing quite so quickly.
        
               | GeekyBear wrote:
               | We're talking about CPU core design.
               | 
               | Would we say the Zen 4 core design is less efficient
               | because AMD is going to start bundling an integrated GPU
               | with Ryzen chips, or would we just talk about Zen 4 core
               | power draw vs Zen 3?
               | 
               | Apple's performance cores managed to improve performance
               | while cutting power.
               | 
               | What other iterative core design did this?
               | 
                | It helps to remember that Apple isn't playing the
                | "performance via clock increases, no matter what
                | happens to power and heat" game.
        
               | eis wrote:
               | I guess that's where the misunderstanding comes from. I
               | was not talking about CPU cores alone. Only M1, M2 as a
               | whole.
               | 
               | But I still am not sure if I can believe that the M2 CPU
               | improved performance while at the same time cutting
               | power. Can you link to some analysis? Would be very
               | interesting. Though please not the A15 one, the cores are
               | related but not the same and the CPUs have big
               | differences.
        
           | kergonath wrote:
           | > I wouldn't replace my M1 MacBook pro without seeing
           | significantly superior performance / watt numbers
           | 
           | What makes you think that this will happen in one generation?
           | The point of the M2 is not to get M1 users to migrate, it's
           | to keep improving so that MacBooks are still better products
           | than the competition. Apple does not care that you don't get
           | a new computer every year, they are most likely planning for
           | 3 to 5 years replacement cycles.
        
           | yoz-y wrote:
           | Almost nobody should update from one generation of CPUs to
           | the next one though. Incremental upgrades are fine.
        
       | georgelyon wrote:
       | If M2 Pro is similar to M1 Pro (two M1s duct-taped together with
       | _very_ fancy duct tape), this is interesting because usually
       | chips need to be significantly reworked for a newer process and
       | this implies an M2 core complex will be printable both at 5nm and
        | 3nm. It would be interesting to know how much of this is
        | fabrication becoming more standardized and how much is Apple's
        | core designs being flexible. If this is the latter, then Apple
       | has a significant advantage beyond just saturating the most
       | recent process node.
        
         | dont__panic wrote:
         | I wonder if they'll also bump the M2 machines to 3nm silently,
         | if the efficiency bump is minor? Apple previously split the A9
         | between TSMC and Samsung at two different node sizes, so it
         | wouldn't be completely crazy.
         | 
         | Or perhaps they're content to leave the M2 as 5nm for easy
         | performance gains in the M3 next year. It also has the
         | advantage of keeping the cheapest machines off of the best node
         | size, which is surely more expensive and more limited than 5nm.
        
         | buu700 wrote:
         | There was some speculation in another thread not too long ago
         | that the M2 design was originally 3nm, and was backported to
         | 5nm after the fact.
        
         | minimaul wrote:
         | I think you're thinking of M1 Ultra - which is 2x M1 Max on an
         | interconnect.
         | 
         | M1, M1 Pro and M1 Max are separate dies.
        
         | skavi wrote:
         | The M1 Pro was not two M1s duct taped together. Their core
         | configurations do not share the same proportions (8+2 vs 4+4).
         | 
         | You may be thinking of the GPUs? Each step in M1 -> M1 Pro ->
         | M1 Max -> M1 Ultra represents a doubling of GPU cores.
         | 
         | Or you may be thinking of the M1 Max and Ultra. The Ultra is
         | nearly just two Maxes.
         | 
         | Regarding your point about flexibility, it's hardly
         | unprecedented for the same core to be used on different
         | processes.
         | 
         | Apple has at times contracted Samsung and TSMC for the same
         | SoC. Qualcomm just recently ported their flagship SoC from
         | Samsung to TSMC. Even Intel backported Sunny Cove to 14nm. And
         | of course there's ARM.
        
           | wolf550e wrote:
        
             | ttoinou wrote:
             | Thats what the parent said
        
               | paulmd wrote:
                | meta: HN constantly feels the need to be maximally
                | pedantic even when what they're trying to say was
                | already covered, and it's just very tedious and leads
                | to an exhausting style of posting to try to prevent it.
               | 
               | that's really why the "be maximally generous in your
               | interpretation of a comment" rule exists, and the
               | pedantry is against the spirit of that requirement, yet
               | it's super super common, and I think a lot of people feel
               | it's "part of the site's culture" - but if it is, that's
               | not really a good thing, it's against the rules.
               | 
               | Just waiting for the pedantic "ackshuyally the rule says
               | PLEASE" reply.
        
               | 1123581321 wrote:
               | I want to generously overlook any particular words you
               | used and totally disagree with your main point. :) I
               | think that it's a positive feature of threaded comments
               | to spin off side discussions and minor corrections. In
               | this case, the correction was wrong, but if it was right,
               | I'd have appreciated it in addition to whatever else
               | ended up being written by others.
               | 
               | What's bad for discussion is when those receiving the
               | reply feel attacked, as if the author of the minor point
               | was implying that nothing else was worth discussing. I
               | wish that neither parent nor child comment authors felt
               | the urge to qualify and head off critical or clarifying
               | responses.
        
               | phpnode wrote:
               | Well actually, it's not just HN, I see this pattern all
               | over tech Twitter, programming subs on Reddit etc too. I
               | think it happens when people want to participate in the
               | conversation but don't have anything actually worthwhile
               | to say, so rather than say nothing they nitpick.
        
           | tooltalk wrote:
           | >> Apple has at times contracted Samsung and TSMC for the
           | same SoC
           | 
            | That was only once, at 14nm (Samsung) / 16nm (TSMC), as
            | Apple outsourced its US chip production in TX to Taiwan.
            | 
            | Qualcomm uses both TSMC and Samsung on a rotational basis
            | to this date.
        
         | [deleted]
        
         | sbierwagen wrote:
         | I mean, Intel used a "tick-tock" model for a decade. (New
         | microarchitecture, then die shrink, then new arch...)
         | https://en.wikipedia.org/wiki/Tick%E2%80%93tock_model
        
         | msoad wrote:
         | you are mixing M1 Ultra which is two M1 Max taped together with
         | M1 Pro which is a weaker variant of M1 Max.
        
         | chippiewill wrote:
         | There's no reason to assume that a 3nm and 5nm M2 core is
         | identical in that way. It's probably similar to the changes
         | Intel used to do for die shrinks when they were doing tick-
         | tock.
        
       | linuxhansl wrote:
       | This is cool!
       | 
        | At the same time, not being part of the Apple ecosystem, should
        | I be worried about the closed nature of this? I have been using
        | Linux for over two decades now, and Intel seems to be falling
        | behind.
        | 
        | (I do realize Linux runs on the M1. But it's mostly a hobby
        | project, the GPU is not well supported, and the M1/M2 will
        | never(?) be available as open hardware.)
        
         | afarrell wrote:
         | Good question
         | 
          | https://aws.amazon.com/pm/ec2-graviton/ is an indication that
          | Amazon cares about Linux support for the arm64 architecture.
          | So the question is how much the M1 varies relative to that.
        
         | TheBigSalad wrote:
         | There is... another
        
         | heavyset_go wrote:
         | x86 processors will be produced on the same nodes. Many ARM
         | SoCs require binary blobs or otherwise closed source software,
         | so they are not the best choice to run Linux on if you're
         | approaching it from a longevity and stability perspective.
        
         | jjtheblunt wrote:
         | I'm running Asahi Linux on the M2 and it's great. Drivers are
         | not all complete yet, but it's awesome.
        
           | arjvik wrote:
           | As your daily driver machine?
        
             | jjtheblunt wrote:
             | i've not switched over to try that yet.
        
               | jjtheblunt wrote:
               | in my case i could use it as a daily driver, since i'm
               | just needing a fast browser and Linux with compilers etc.
               | but i've been using macos as a daily driver despite
               | loathing (since the dawn of time) its font rendering.
               | 
               | i'll switch at some point, probably.
        
           | amelius wrote:
           | It's great for you. But some of us are using Linux in
           | industrial applications. You can't really put an Apple laptop
           | inside e.g. an MRI machine. It may run highly specialized
           | software, needs specific acceleration hardware, etc.
           | 
           | It's going to be a very sad day when consumer electronics win
           | over industrial applications.
        
             | mindwok wrote:
             | I don't think you need to worry about that, those are
             | completely different use-cases and markets. ARM CPUs will
             | be available and widespread in other applications soon
             | enough, and Linux support is already strong in that regard.
        
             | novok wrote:
             | Apple hardware has never been about non-consumer, server or
             | industrial applications outside of some film, music and
             | movie studios using mac pros and the Xserve long time ago.
             | 
             | And if you're making an MRI machine or other industrial
             | equipment that consumes a huge amount of power, the fact
             | that your attached computer uses 300W vs 600W doesn't
             | really seem like much of a big deal.
             | 
             | Apple has a head start with their ARM machines, but I'm
             | also not really worried that the rest of the industry
             | won't catch up within a few years. You can only really
             | pull off the new architecture trick once or twice, and
             | being a leader has a way of inspiring competitors.
             | 
             | Apple's software and OS is also horrible to use in server
             | applications; you only do it if you need to do it, such as
             | iOS CI, device testing and such. Otherwise you avoid it as
             | much as you can.
        
             | JumpCrisscross wrote:
             | > _can't really put an Apple laptop inside e.g. an MRI
             | machine. It may run highly specialized software, needs
             | specific acceleration hardware, etc._
             | 
             | This sounds more like a pitch for letting the MRI machine
             | talk to the laptop than putting redundant chips in every
             | device.
        
               | rtlfe wrote:
               | > letting the MRI machine talk to the laptop than putting
               | redundant chips in every device
               | 
               | This sounds like a huge security risk.
        
               | amelius wrote:
               | No. This is not a good universal solution. What if the
               | machine needs more processing power than one laptop can
               | provide?
               | 
               | Do you want to put a rack of laptops inside the machine,
               | waste several screens and keyboards? Log into every
               | laptop with your AppleID before you can start your
               | machine? It's such an inelegant solution.
               | 
               | Instead, the x86/Linux environment lets you put multiple
               | mainboards in a machine, or you can choose a mainboard
               | with more processors; it is a much more flexible solution
               | in industrial settings.
        
               | kergonath wrote:
               | No. You want the computer running the thing to be as
               | simple, known, and predictable as possible. So that is
               | necessarily going to be a computer provided by the
               | manufacturer, and not whatever a random doctor feels
               | like using. Consumer devices are completely irrelevant
               | for that use case.
        
               | heavyset_go wrote:
               | It would be a gimmick given that real-time workloads
               | can't be offloaded via some serial connection to consumer
               | laptops. You'd still need hardware and software capable
               | of driving and operating the machines embedded in the
               | machines themselves.
        
             | kergonath wrote:
             | What are you on about? A EUR5M MRI machine will have
             | whatever computer its manufacturer wants to support.
             | Which will probably be something like a Core 2 running
             | Windows XP.
             | 
             | None of these machines have used Macs, ever. Why would
             | anything Apple does affect this market?
        
           | jonfw wrote:
           | How is webcam and microphone support? Web conferencing always
           | killed me running Linux even on well supported hardware
           | 
           | How is battery life?
           | 
           | What distro are you running?
           | 
           | Are you doing all ARM binaries or is there some translation
           | layer that works?
           | 
           | Sorry to bombard you, but I'm really curious about the
           | support
        
             | risho wrote:
             | not the person you are responding to but i was looking into
             | it today. webcam/mic/speakers don't work but bluetooth
             | does. there are arm to x86/x86_64 translation tools akin to
             | rosetta 2 but they have a lot of warts and are not well
             | supported yet. the most promising one in my opinion is
             | called fex.
        
             | jjtheblunt wrote:
             | I've not tried the webcam and microphone, which I guess
             | I could from Firefox. Battery life is worse than it will
             | be once the drivers evolve further, because I think
             | going into sleep mode is still imperfect.
             | 
             | The distro is Asahi Linux, which is ARM ArchLinux. All ARM
             | binaries.
             | 
             | If you follow the Asahi Linux page, it updates super
             | frequently, as drivers get tuned and so on.
        
             | heavyset_go wrote:
             | If you buy hardware supported by Linux, Zoom works well on
             | it, including screen sharing.
             | 
             | I've been using Zoom on my desktop with a USB camera for
             | video calls and screen sharing for a while now.
             | 
             | I was pleasantly surprised it actually worked and worked
             | well.
             | 
             | Edit: to clarify, this post is about Linux in general, I
             | don't use M1 or M2 Macs with Linux.
        
               | xtracto wrote:
               | Why are you answering a question specifically made about
               | M1 and M2 with a generic answer about a Desktop computer?
               | 
               | What is the point?
        
         | eropple wrote:
         | There are other ARM providers, and personally I expect to see
         | some of them ramping up significantly in the next couple years.
         | 
         | Qualcomm's taking another shot, for example.
         | https://www.bloomberg.com/news/articles/2022-08-18/qualcomm-...
        
         | KaoruAoiShiho wrote:
         | AMD is not behind at all. Have you seen the latest benchmarks?
         | 
         | https://www.phoronix.com/review/apple-m2-linux/15
        
           | free652 wrote:
           | It's behind; looks like they are comparing the high-end
           | 5900hx with the low-end m1 in multi-core tests.
        
             | KaoruAoiShiho wrote:
             | Dunno what you're talking about; it's definitely the M2
             | in the test. It's also the same price.
        
             | Tsarbomb wrote:
             | As the other reply mentioned they are testing against the
             | M2, and they are also testing the lower powered AMD part
             | 6850U which does best the M2 in some tests.
             | 
             | Not sure why you came out so strong with such a false
             | statement.
        
           | ceeplusplus wrote:
           | 5900HX TDP: 35-80W depending on boost setting. Most gaming
           | laptops set it at 60W+.
           | 
           | M2 TDP: 20W
        
             | KaoruAoiShiho wrote:
             | The 6850U is comparable in power use and still has a big
             | perf gap against the M2 in most tests. Though there are
             | some tests where the M2 leads with a big gap too so maybe
             | it comes down to software in a lot of these. Still it seems
             | to me like Apple is not leading.
        
               | ceeplusplus wrote:
               | It is not; the laptop tested with the 6850U has a 30W PL2
               | and 50W PL1.
        
             | kimixa wrote:
             | Power use tends to scale non-linearly past a point -
             | disabling turbo modes would likely reduce the peak power
             | use significantly, and an ~18% performance difference is
             | a pretty big buffer to lose.
             | 
             | The 6850u also beats it rather comprehensively according to
             | those same results, and that's only 18-25w.
             | 
             | Really, you'd need everything power normalized, and even
             | the rest of the hardware and software used normalized to
             | compare "just" the CPU, which is pretty much impossible due
             | to Apple and their vertical integration - which is often a
             | strength in tests like this.
        
         | xref wrote:
           | What would be your concern? M1/M2 is just ARM, which Linux
           | has run on for decades.
        
           | 3pm wrote:
           | I think the concern is there is currently no 'IBM-
           | compatible'-like hardware ecosystem around ARM. Raspberry Pi
           | is closest, but nothing mainstream yet. And it looks like
           | RISC-V will have a better chance than ARM.
        
           | heavyset_go wrote:
           | Linux support is about much more than instruction set
           | support. Most ARM chips are shipped on SoCs which can take a
           | lot of work to get Linux running on, and even then it might
           | not run well.
        
         | johnklos wrote:
         | Apple isn't going to somehow make 64-bit ARM into something
         | proprietary. Sure, they have their own special instructions for
         | stuff like high performance x86 emulation, but aarch64 on Apple
         | is only going to mean more stuff is optimized for ARM, which is
         | good not only for Linux, but for other open source OSes like
         | the BSDs.
        
           | rodgerd wrote:
           | Apple are, if anything, more helpful to the Linux community
           | than Qualcomm.
        
           | saagarjha wrote:
           | There are no special instructions for x86 emulation.
        
             | LordDragonfang wrote:
             | There's a whole special CPU/memory mode for it, actually.
             | 
             | https://twitter.com/ErrataRob/status/1331735383193903104
        
         | dheera wrote:
         | Me too. I _really_ wish I could buy a Samsung Galaxy Book Go
         | 360 which is ARM and has amazing battery life, and install
         | Ubuntu on it, but I don't think there's a known possible way
         | to do so.
         | 
         | I _really_ want a competent, high-end ARM Ubuntu laptop to
         | happen. The Pinebook Pro has shitty specs and looks like shit
         | with the 90s-sized inset bezels and 1080p screen.
        
           | jacquesm wrote:
           | Likewise, I'd be on that for sure. Right now I'm using older
           | MacBook Airs running Ubuntu as my daily drivers and a big
           | Dell at the home office for other work.
           | 
           | Longer battery life and something like the Galaxy Book Go
           | would definitely make me happy.
        
             | dheera wrote:
             | Ubuntu on MacBook M1 is a horrible experience. Screen
             | tearing and lots of other issues.
        
         | umanwizard wrote:
         | Eventually non-Apple laptops will be sold with silicon from
         | this process node. You just won't be the first to use it, which
         | is fine.
         | 
         | Also, Asahi is getting closer to "generally usable" at an
         | astounding pace, so who knows.
        
           | 3pm wrote:
           | I think that is what the parent meant by feeling left behind.
           | It is either Apple or something underwhelming like ThinkPad
           | with Snapdragon.
        
       | keepquestioning wrote:
       | Has this analyst ever been correct?
        
       | drexlspivey wrote:
       | Can anyone speculate what will happen to Apple if for any reason
       | TSM abruptly stops supplying chips? Will their revenue just drop
       | by 80%?
        
         | [deleted]
        
         | beautifulfreak wrote:
         | TSMC announced its new Phoenix Arizona fab would begin mass
         | production in the first quarter of 2024.
        
           | joenathanone wrote:
           | That is funny; we are literally about to run out of water over
           | here in AZ.
        
           | eis wrote:
           | By that time it'll be far from bleeding edge. The Taiwan
           | fab will be close to N2, while the Arizona fab will be
           | able to produce 5nm-generation chips; that'll be 4-5 year
           | old tech by then.
        
           | bogomipz wrote:
           | Similarly, Intel just announced a new partnership to
           | accomplish something similar, also in Arizona:
           | 
           | https://money.usnews.com/investing/news/articles/2022-08-23/.
           | ..
        
         | ytdytvhxgydvhh wrote:
         | Way down the road I hope Tim Cook writes a memoir. I'm curious
         | as to his unvarnished thoughts about doing business in (and
         | being so reliant on) Taiwan and China. I'm sure he can't
         | publicly express some of those thoughts without unnecessarily
         | adding risk for Apple but he must have lots of interesting
         | opinions about things like being reliant on TSMC vs trying to
         | build their own fabs, etc.
        
         | gjsman-1000 wrote:
         | You mean TSMC.
         | 
         | Well, they would be seriously hurt. However, does that matter
         | when almost every tech company (including Qualcomm, MediaTek,
         | AMD, Apple, ARM, Broadcom, Marvell, Nvidia, Intel, so forth)
         | would also be harmed?
         | 
         | TSMC going down is basically the MAD (Mutually Assured
         | Destruction) of tech companies. Kind of a single point of
         | failure. Intel would probably weather it best but would still
         | be hurt because they need TSMC for some products. Plus, well,
         | in the event of TSMC's destruction (most likely by a Chinese
         | invasion), Intel might raise prices considerably or even stop
          | sales to consumers, especially as their chips would now have
         | major strategic value for government operations. NVIDIA might
         | also survive by reviving older products which can be
         | manufactured by Samsung in Korea, but same situation about the
         | strategic value there, and getting chips from Korea to US might
         | be difficult in such a conflict.
        
           | earthscienceman wrote:
           | Does anyone have any resources that explain the historical
           | reason(s) TSMC became what it is? How did the world's most
           | important hardware manufacturer manage to get constructed on
           | a geopolitical pinchpoint?
        
             | kochb wrote:
             | This podcast covers most of the important points:
             | 
             | https://www.acquired.fm/episodes/tsmc
        
           | [deleted]
        
           | nomel wrote:
           | > TSMC going down is basically the MAD (Mutually Assured
           | Destruction) of tech companies.
           | 
           | This is why it's appropriately called the "Silicon Shield",
           | within Taiwan: https://semiwiki.com/china/314669-the-
           | evolution-of-taiwans-s...
        
         | jkestner wrote:
         | Repeat of the car market? I'm going to make a bunch of money
         | off the old computers collecting dust here, and Apple's going
         | to have to unlock support for new OSes on them, making all its
         | money on services.
        
         | mlindner wrote:
         | For what reason would TSMC abruptly stop supplying chips short
         | of war? There's nothing other than war that would cause it. And
         | if there's a war, Apple's profits are the least of problems.
        
           | rowanG077 wrote:
           | A meteor, a large tsunami, an earthquake, a solar flare,
           | or an extremely infectious disease that isn't as weak as
           | Covid-19. There are so many natural disasters that could
           | cripple or outright destroy TSMC's production facilities.
        
           | lostlogin wrote:
           | > For what reason would TSMC abruptly stop supplying chips
           | short of war?
           | 
           | Earthquakes would be top of my list of things that would
           | cause problems.
        
           | Bud wrote:
           | Climate change impacts or global pandemics could also
           | significantly impact TSMC's operations. Or, Chinese actions
           | that are somewhat short of full-on war.
           | 
           | Also relevant here: TSMC is building a chip fab in the US.
        
         | novok wrote:
         | If TSMC poofed out of existence because of a few bombs from
         | China or a freak natural disaster, global GDP would drop
         | significantly and quickly. It's one of the biggest SPOFs
         | that I'm worried about in the world.
        
         | zaroth wrote:
         | If WW3 goes hot beyond the regional proxies, we all lose pretty
         | hard.
         | 
         | A new iPhone 14 or Mac M2 will be a pipe dream, and we'll
         | all chuckle about how we used to care about such things.
        
         | wilsonnb3 wrote:
         | Everybody except Intel and Samsung is screwed if TSMC stops
         | making chips.
         | 
         | Apple (and the rest of the mobile industry) would try to move
         | to using Samsung's fabs and Intel would go back to being the
         | undisputed king on desktops, laptops, and servers.
         | 
         | I think TSMC has like 2-3 times the fab capacity that Samsung
         | does right now for modern chips, so there would be a huge chip
         | shortage.
         | 
         | Apple's $200 billion cash pile would come in handy when trying
         | to buy fab capacity from Samsung so they might come out ahead
         | of less cash-rich competitors.
         | 
         | There would be a significant hit to processor performance.
         | Samsung fabbed the recent Snapdragon 8 Gen 1, which has
         | comparable single core performance to the iPhone 7's A10 fusion
         | chip.
        
           | florakel wrote:
           | >There would be a significant hit to processor performance.
           | 
           | You are probably right. Look at the gains Qualcomm saw by
           | migrating from Samsung to TSMC:
           | https://www.anandtech.com/show/17395/qualcomm-announces-
           | snap...
        
             | 2muchcoffeeman wrote:
             | I never thought you'd get gains by simply switching vendor.
             | 4nm to 4nm and they still saw gains.
        
               | terafo wrote:
               | The thing is that 4nm doesn't actually mean anything.
               | Intel's 10nm node is mostly on par with TSMC N7, and
               | it caused quite a bit of confusion, so Intel renamed a
               | slightly improved version (something they would have
               | called 10nm++) to Intel 7. It's all just marketing and
               | has been for 15 years or so.
        
           | Tostino wrote:
           | The performance of that core is not necessarily all dependent
           | on the node... Apple have been lauded for their designs.
           | Samsung / arm, not so much.
        
       | vondro wrote:
       | So we'll see at least 1-2 years of Apple Silicon being at
       | least one node ahead of the competition. I am curious how long
       | Apple will be able to maintain this lead, and what the
       | perf/watt will look like when (if?) AMD reaches node parity
       | with Apple in the near future. Or when, perhaps, Intel uses
       | TSMC as well, and the same process node.
        
         | r00fus wrote:
         | I think this was Apple's game for a LONG time. They have led in
         | mobile chips to the point where they are sometimes 2 years
         | ahead of the competition.
         | 
         | They do this using their monopsony power (they buy up all
         | the fab capacity at TSMC and/or Samsung, well before the
         | competition aims to do so).
        
           | klelatti wrote:
           | IIRC they were using TSMC before TSMC had a material
           | process lead, and supported them (and moved away from
           | Samsung) with big contracts and a long-term commitment.
           | Hardly surprising that they get first go at a new process.
           | Not a riskless bet, but one that has paid off.
        
           | paulmd wrote:
           | > They do this using their monopsony power (they will buy all
           | the fab capacity at TSMC and/or Samsung, and well before
           | competition is aiming to do so either).
           | 
           | It's not just buying power - Apple pays billions of dollars
           | yearly to TSMC for R&D work itself. These nodes literally
           | would not exist on the timelines they do without Apple
           | writing big fat checks for blue-sky R&D, unless there's
           | another big customer who would be willing to step up and play
           | sugar-daddy.
           | 
           | Most of the other potential candidates either own their own
           | fabs (intel, samsung, TI, etc), are working on stuff that
           | doesn't really need cutting-edge nodes (TI, Asmedia, Renesas,
           | etc), or simply lack the scale of production to ever make it
           | work (NVIDIA, AMD, etc). Apple is unique in that they hit all
           | three: fabless, cutting-edge, massive-scale, plus they're
           | willing to pay a premium to not just _secure access_ but to
           | actually _fund development of the nodes from scratch_.
           | 
           | It would be a very interesting alt-history if Apple had not
           | done this - TSMC 7nm would probably have been on timelines
           | similar to Intel 10nm, AMD wouldn't have access to a node
           | with absurd cache density and vastly superior efficiency
           | compared to the alternatives (Intel 14nm was still a better-
           | than-market node, compared to the GF/Samsung alternatives in
           | 2019!), etc. I think AMD almost certainly goes under in this
           | timeline, without Zen2/Zen3/Zen3D having huge caches and Rome
           | making a huge splash in the server market, and without TSMC
           | styling on GF so badly that GF leaves the market and lets AMD
           | out of the WSA, Zen2 probably would have been on a failing GF
           | 7nm node with much lower cache density, and would just have
           | been far less impressive.
           | 
           | AMD of course did a ton of work too, they came up with the
           | interconnect and the topology, but it still rather directly
            | owes its continued existence to Apple and those big fat
            | R&D checks. You can't have AMD building efficient,
            | scalable cache
           | monsters (CPU and GPU) without TSMC being 2 nodes ahead of
           | market on cache density and 1 node ahead of the market on
           | efficiency. And they wouldn't have been there without Apple
           | writing a blank check for node R&D.
        
             | duxup wrote:
             | I do sometimes wonder if we could ask and get an honest
             | answer "Ok well then who wants to pay for all this from
             | step 1?"
        
               | wmf wrote:
               | China.
        
           | afarrell wrote:
           | > they will buy all the fab capacity at TSMC
           | 
           | What would motivate TSMC to choose to only have 1 customer?
           | 
           | TSMC is known as "huguo shenshan" or "magic mountain that
           | protects the nation". What would motivate TSMC to choose to
           | have their geopolitical security represented by only 2
           | senators?
        
           | joshstrange wrote:
           | They absolutely use their power (aka money) to buy fab
           | capacity but they are also responsible for a ton of
           | investment in fabs (new fabs and new nodes). Because of that
           | investment they get first dibs on the new node. In the end
           | it's up to the reader to decide if this is a net positive
           | for the industry (would we be moving as fast without Apple's
           | investment? Even accounting for the delay in getting fab time
           | until after Apple gets a taste).
        
         | Melatonic wrote:
         | Yea, this is what I am wondering as well. If nobody else
         | ends up switching to ARM in the laptop/desktop space and
         | eventually AMD and Intel are making 5 or 3nm chips, then
         | surely this massive lead in power efficiency is going to
         | shrink. At the current levels the new Apple computers seem
         | awesome - but what if they are only 10-20% more efficient?
        
           | ghaff wrote:
           | You do have ARM in Chromebooks. Any wholesale switch for
           | Windows seems problematic given software support. But beyond
           | gaming, a decent chunk of development, and multimedia, a lot
           | of people mostly live in a browser these days.
        
       | derbOac wrote:
       | I know I'm being irrational about this, but for some reason
       | this makes me lean toward getting an M1 Air or 13-inch Pro
       | rather than an M2: it's as if the M2's performance gains are
       | being squeezed out of the same (or similar enough) process as
       | the M1 rather than changing the process significantly, at the
       | cost of efficiency.
        
         | alberth wrote:
         | You'd definitely save $200 since base M1 Air is $999 vs M2 Air
         | being $1199.
        
           | solarkraft wrote:
           | They're also available used and a lot cheaper now.
           | 
           | I'm plenty happy with mine and don't plan to switch any time
           | soon. Yeah, the M2 Air looks a bit nicer, but it's more of
           | a Pro follow-up with its boxy design and ... eh, the M1
           | Air is
           | totally fine in all aspects I can spontaneously come up with.
           | It's a really good device and the laptop I might recommend
           | for years to come. It getting cheaper and cheaper will only
           | increase the value you'll get.
        
         | K7PJP wrote:
         | I almost did this, but the return of MagSafe (which frees up a
         | USB port) and the display improvements were worth it to me. Oh,
         | how I've missed MagSafe.
        
           | derbOac wrote:
           | Yeah the things you mention definitely enter into the
           | equation.
        
         | wilg wrote:
         | The hardware is much nicer on the M2 Air though.
        
       | [deleted]
        
       ___________________________________________________________________
       (page generated 2022-08-25 23:00 UTC)