_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
 (HTM) Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
 (HTM)   LPCAMM2 is a modular, repairable, upgradeable memory standard for laptops
       
       
        hinkley wrote 11 hours 27 min ago:
        > LPDDR operates at lower voltages compared to DDR, giving it the edge
        in power efficiency. But, the lower voltage makes signal integrity
        between the memory and processor challenging,
        
         Why can't the signaling channels use a higher voltage, with control
         circuitry on the memory stick shifting the level up and down to
         access the memory module?
       
        sharpshadow wrote 22 hours 16 min ago:
         Would it be possible to have LPCAMM2 as an external device through
         Thunderbolt?
       
          6SixTy wrote 16 hours 16 min ago:
           Only CXL has the potential to be tunneled over Thunderbolt, as it
           runs on top of PCIe and system RAM does not. CXL (Compute eXpress Link) is
          a server grade technology that's really aimed at solving some
          problems within the high performance compute area, like cache
          coherency. If you don't get it, I don't either tbh.
       
          noodlesUK wrote 22 hours 11 min ago:
          No, RAM is not something that is exposed on the PCIe bus (which is
          what thunderbolt is based on). RAM has a different protocol (DDR5 in
          this case), and as it says in the article, is very sensitive to the
          distance between the CPU and the RAM. External RAM isn't really
          something that is viable in the modern era of computers as far as I
          know.
       
            simcop2387 wrote 20 hours 36 min ago:
             Surprisingly, this is something starting to show up in the server
             market lately with a new protocol/tech called CXL. The latency
             issue over distance is still there, but it'll let more
             remote-memory type stuff start to happen. I doubt you'll ever do
             more than a few meters (i.e. within the same rack), but it'll
             likely end up getting used by so-called "hyperscaler" companies
             to more flexibly allocate resources, similar to how they're doing
             PCIe over Ethernet with DPU devices right now. It's unlikely this
             will reach the consumer level anytime soon, even medium term,
             because that kind of flexibility is still just so niche, but we
             might eventually see some CXL connectivity for things like GPUs
             or other accelerators to get more memory or share it better
             between host and accelerator.
            
            EDIT: article about a tech demo of it on a laptop actually, hadn't
            seen this before:
            
 (HTM)      [1]: https://www.techradar.com/pro/even-a-laptop-can-run-ram-ex...
       
        quailfarmer wrote 1 day ago:
        I'm sure this will find use in Business-Class "Mobile workstations",
        but having integrated DDR4 in my own hardware, I have a hard time
        seeing this as the mainstream path forward for mobile computing.
        
        There's lots of value in tight integration. Improved signal integrity
        (ie, faster), improved reliability, better thermal flow, smaller
        packaging, and lower cost. Do I really want to compromise all of those
        things just to make RAM upgrades easier?
        
        And how many times do I need to upgrade the RAM in a laptop, really?
        Twice? Why make all those sacrifices to use a connector, instead of
        just reworking the DRAM parts? A robotic reflow machine is not so
        complex that a small repair shop couldn't afford one, which is what you
         see if you go to parts of the world where repair is taken seriously.
        Why do I need to be able to do it at home? I can't re-machine my engine
        at home. It's the most advanced nanotechnology humanity can produce,
        why is a $5k repair setup unreasonable?
        
        This is not to mention the direction things are really going, DRAM on
        Package/Die. The signaling speed and bus widths possible with
        co-packaged memory and HBM are impossible to avoid, and I'm not going
        to complain about the fact that I can't upgrade the RAM separately from
        the CPU, any more than I complain about not being able to upgrade my L2
        cache today. The memory is part of the compute, in the same way the GPU
        memory is part of the GPU.
        
        I hope players like iFixit and Framework aren't too stubborn in
        opposing the tight integration of modern platforms. "Repairable"
        doesn't need to mean the same thing it did 10 years ago, and there are
        so many repairability battles that are actually worth fighting, that
        being stubborn about the SOTA isn't productive.
       
          Timshel wrote 1 day ago:
          >I'm sure this will find use in Business-Class "Mobile workstations",
          but having integrated DDR4 in my own hardware, I have a hard time
          seeing this as the mainstream path forward for mobile computing.
          
           I don't know, I would say the reverse: workstations might need the
           performance of DRAM on Package/Die, but I don't believe that's the
           case for mainstream users.
          
          > A robotic reflow machine
          
           Same here: maybe worthwhile for servicing enterprise customers, but
           probably way too expensive for mainstream repair.
          
           I certainly hope that players continue to oppose tight integration,
           and I'll try to support them. I value the ability for anyone to swap
           RAM and disks to easily upgrade or repair their device more than an
           increase in performance or even battery life.
          
           I recently cobbled together a computer for a friend's child from
           components of three different computers; any additional cost would
           have made the exercise worthless.
       
        snvzz wrote 1 day ago:
        I see no mention of ECC.
        
        It worries me.
       
        userbinator wrote 1 day ago:
         A bit of a disingenuous argument intended to sell this as being more
        revolutionary than it really is --- BGA sockets already exist for LPDDR
        as well as other things like CPUs/SoCs, but they're very expensive due
        to low volumes. If the volume went up, they'd go down in price
        significantly just like LGA sockets for CPUs have.
        
 (HTM)  [1]: https://www.ironwoodelectronics.com/products/lpddr/
       
        PTOB wrote 1 day ago:
         The current Dell version of this: an upgrade to 64GB is $1200. I
         found this out the hard way when trying to get my engineering team
         what I thought would be a $200 upgrade per machine from their stock
         32GB Precision laptop workstations.
       
        kristianp wrote 1 day ago:
        So this is going into the ThinkPad P1 (Gen 7), which is too expensive
        and power hungry for my use cases. How long until it filters down into
         less expensive SKUs? Are we talking next year's generation?
        
         iFixit also links to a repair guide:
        
 (HTM)  [1]: https://www.ifixit.com/Device/Lenovo_ThinkPad_P1_Gen_7
       
          CoolCold wrote 20 hours 8 min ago:
           My personal understanding: for ThinkPads, it's next year. I guess
           Lenovo is running real-life tests with the P1 here, gathering
           feedback before addressing other families like the T14/T14s.
       
        cryptonector wrote 1 day ago:
        Yes please.  Also, can we haz ECC?
       
          seanp2k2 wrote 1 day ago:
          Why are you trying to bankrupt Intel??? Without being able to charge
          5x as much for Xeons for ECC support, why would anyone ever pony up
          for one?
       
        Dwedit wrote 1 day ago:
         Can it come loose and then suddenly not have all pins making proper
         contact? This is something that's unlikely to happen with SODIMM
         slots, but I've seen screw receptacles fail so many times.
       
        sharpshadow wrote 1 day ago:
        Is it possible to have both LPDDR and LPCAMM2 in use at the same time?
       
          wtallis wrote 1 day ago:
          LPCAMM2 is a connector and form factor standard for modules carrying
          LPDDR type memory chips.
       
            masklinn wrote 1 day ago:
            I assume they mean having some memory soldered and an expansion
            slot.
            
            I've seen laptops like that, with e.g. 8GB soldered and a sodimm
            slot.
       
              sharpshadow wrote 22 hours 17 min ago:
               That would be nice, since there's a rise of CPU+RAM, and I think
               even GPU, all on one chip. It would be interesting to be able to
               upgrade the RAM on machines like that.
       
        Tran84jfj wrote 1 day ago:
         I would welcome something like the Raspberry Pi Compute Module, which
         contains the CPU+RAM and communicates with other parts via PCIe. Such
         a standard could last decades!
        
        Yet another standard for memory will just fail.
       
        zokier wrote 1 day ago:
        I wonder if this will bring a new widely available high-performance
        connector to the wider market. SO-DIMM connectors have been
        occasionally repurposed to other uses, most notably by Raspberry Pi
        Compute Models 1-3 among other similar SOM/COM boards. RPi CM4 switched
        to 2x 100pin mezzanine connectors; maybe some future module could use
        CAMM connectors, I'd imagine they are capable enough
       
          wmf wrote 1 day ago:
          The compression connector looks flimsier than a mezzanine so it
          should probably be a last resort for multi-gigahertz single-ended
          signaling.
       
        p0w3n3d wrote 1 day ago:
        Apple hates it
       
        oneplane wrote 1 day ago:
        On the other hand, with a reflow station everything becomes modular and
        repairable.
        
         I do hope that more widespread use of compression attachment gives us
         some development in an area where projects promising modular devices
         failed (remember those 'modular' phone concepts? The available
         physical interconnects were one of the failure points...). Sockets
        for BGAs have existed for a while, but were not really end-user
        friendly (not that LGA or PGA are that amazing), so maybe my hope is
        misplaced and many-contact connections will always be worse than direct
        attachment (be it PCB or SiP/SoC/CPU shared substrate).
       
          RetroTechie wrote 1 day ago:
          > maybe my hope is misplaced and many-contact connections will always
          be worse than direct attachment
          
          As much as I like socketed / user-replaceable parts, fact is that
          soldering down a BGA is a very reliable way to make those many
          connections.
          
          On devices like smartphones & tablets RAM would hardly ever be
          upgraded even if possible. On laptops most users don't bother. On
          Raspberry Pi style SBCs it's not doable.
          
          Desktops, workstations & servers are the exception here.
          
          Basically the high-speed parts of a system need to be as close
          together as physically possible. Especially if low power consumption
          is important.
          
          Want easy upgrades? Then compute module + carrier board setups might
          be the way to go. Keep your I/O connectors / display / SSD etc, swap
          out the CPU/GPU/RAM part.
       
          zokier wrote 1 day ago:
          > On the other hand, with a reflow station everything becomes modular
          and repairable.
          
          until you hit custom undocumented unobtainium proprietary chips. good
          luck repairing anything with those.
       
          jcotton42 wrote 1 day ago:
          > On the other hand, with a reflow station everything becomes modular
          and repairable.
          
          Not for the average person.
       
            redeeman wrote 1 day ago:
             true, but can the average person replace the inner tube on a
             bicycle wheel? :)
       
              pezezin wrote 1 day ago:
              Yes? I did it many, many times as a kid, it is not that
              difficult.
       
                lazide wrote 6 hours 17 min ago:
                I suspect the poster would argue you’re not average -
                possibly even because you’re on HN to say so.
       
        ThinkBeat wrote 1 day ago:
        Meanwhile Apple bakes the RAM,CPU,GPU all into the same "chip".
        Good luck with that.
       
          0x457 wrote 1 day ago:
          Meanwhile, Apple ships machines with a 1024bit wide memory bus, while
          this solution offers just 128 bits per "stick".
       
            Dylan16807 wrote 1 day ago:
            Compared to how big the CPU package is on those machines, 4 of
            these sticks on each side of the motherboard should fit acceptably.
            
            And you'd be able to have a lot more than 192GB.
       
              0x457 wrote 18 hours 32 min ago:
              I'm looking at the boards from M1 and M2: [1] I can see a max of
              2 fitting there.
              
 (HTM)        [1]: https://valkyrie.cdn.ifixit.com/media/2023/01/26174137/M...
       
                Dylan16807 wrote 14 hours 55 min ago:
                That's not the model with the 1024 bit bus.
                
 (HTM)          [1]: https://cdn.wccftech.com/wp-content/uploads/2023/06/M2...
       
          colinng wrote 1 day ago:
          Don’t forget - they solder in the flash too even though there is no
          technical reason to do so.
          
           Unless “impossibly fat profit margin” is a technical requirement.
       
            mschuster91 wrote 1 day ago:
            > Don’t forget - they solder in the flash too even though there
            is no technical reason to do so.
            
            There is, Apple uses flash memory as swap to get away with low RAM
            specs, and the latency and speed required for that purpose all but
            necessitates putting the flash memory directly next to the SoC.
       
              wmf wrote 1 day ago:
              This is not really true; Apple's SSDs are no faster than
              off-the-shelf premium NVMe SSDs.
       
                Rohansi wrote 1 day ago:
                Yeah but some people need to justify their $1,800 USD purchase
                of laptop that comes with only 8 GB of RAM. Even though most
                laptops manufactured today would also come with NVMe (PCIe
                directly connected to the CPU, usually) flash storage, which is
                used by all operating systems as swap.
       
                  mschuster91 wrote 1 day ago:
                   NVMe is by no means directly connected to the CPU; usually
                   it's connected through at least one PCIe switch.
       
                    Rohansi wrote 21 hours 57 min ago:
                    It's harder to confirm for laptops but you can refer to
                    motherboard manuals to see if any of your PCIe-related
                    slots go through a switch or not. For example, my current
                    PC has a PCIe x16 slot, x1 slot, and two M.2 NVMe slots. It
                    says everything is integrated into the CPU except the x1
                    slot which goes through the motherboard chipset. I don't
                    see why any laptop would make NVMe go through a PCIe switch
                    unless the CPU doesn't provide enough lanes to support
                     everything supported by the motherboard. Even at the
                     lowest end, a dual-core Intel Core i3-10110U
                    (laptop processor from 2019) has 16 lanes from the CPU
                    which could support at least one NVMe without going through
                    a switch.
       
                wtallis wrote 1 day ago:
                And the latency of flash memory is several orders of magnitude
                higher than even the slowest interconnect used for internal
                SSDs.
       
        zxcvgm wrote 1 day ago:
        I remember when Dell was the first to introduce [1] these Compression
        Attached Memory Modules in their laptops in an attempt to move away
        from soldered-on RAM. Glad this is now being more widely adopted and
        standardized.
        
 (HTM)  [1]: https://www.pcworld.com/article/693366/dell-defends-its-contro...
       
          AlexDragusin wrote 1 day ago:
          > The first iteration, known as CAMM, was an in-house project at
          Dell, with the first DDR5-equipped CAMM modules installed in Dell
          Precision 7000 series laptops. And thankfully, after doing the
          initial R&D to make the tech a reality, Dell didn’t gatekeep. Their
          engineers believed that the project had such a good chance at
          becoming the next widespread memory standard that instead of keeping
          it proprietary, they went the other way and opened it up for
          standardization.
       
            jimbobthrowawy wrote 21 hours 27 min ago:
            Trying to make it a standard is one of the least surprising things
            about it. You want accessories/components in your product to be as
            commodity as possible to drive costs down.
       
        orev wrote 1 day ago:
        I’m glad they explained why RAM has become soldered to the board
        recently. It’s easy to be cynical and assume they were doing it for
        profit motive purposes (which might be a nice side effect), but it’s
        good to know that there’s also a technical reason to solder it. Even
        better to know that it’s been recognized and a solution is being
        worked on.
       
          yread wrote 1 day ago:
           If they soldered a decent amount, so you could be sure you'd never
           need to upgrade, it would be fine (seriously, 64GB of RAM costs like
           100 EUR, a non-issue in a 1000 EUR laptop). 8 is already not enough
           and 16 will soon be limiting too.
       
            orev wrote 19 hours 36 min ago:
            No matter how much the specs increase, developers find a way to use
            it all up. This approach would just accelerate that process.
       
            nuancebydefault wrote 22 hours 50 min ago:
             10 percent is not negligible. Also 64GB is a lot _today_ but most
            probably not 5 years from now. The alternative of buying a new
            laptop feels like a big waste.
       
            brookst wrote 22 hours 55 min ago:
            Is the goal to not have any computers that are limited to a single
            task? Tons of corporate IT purchases go to someone only using e.g.
            Word all day. Do we really care if they are provisioned with
            “enough” memory for you or me?
       
              pathartl wrote 19 hours 52 min ago:
              The baseline 14" MacBook Pro that costs $1600 has 8GB of shared
              RAM. That's not enough. I don't believe OP is talking about
              machines better suited for your task, machines in the $1k range.
       
          klysm wrote 1 day ago:
          I didn’t really appreciate the insanity of the electrical
          engineering involved in high frequency stuff till I tried to design
          some PCBs. A simplistic mental model of wires and interconnects
          rapidly falls apart as frequencies increase
       
          kjkjadksj wrote 1 day ago:
          They can have their technical fig leaf to hide behind but in
          practice, how many watts are we really saving between lpddr5 and
          ddr5? is it worth the ewaste tradeoff to have a laptop we can't
          modularly upgrade to meet our needs? I would guess not.
       
            masklinn wrote 1 day ago:
            > how many watts are we really saving between lpddr5 and ddr5?
            
             From what I gathered, it's around a watt per stick when idling
             (which is when it's most critical): the sources I found seem to
             indicate that DDR5 always runs at 1.1V (or more, but probably not
             in laptops), while LPDDR5 can be downvolted. That's an extra 10%
             idle power consumption per stick.
       
          tombert wrote 1 day ago:
          Yeah, I was actually surprised to learn there was a reason other than
          "Apple wants you to buy a new Macbook or overspec your current one".
          It's annoying, but at least there's a plausible reason to why they do
          it.
       
            klausa wrote 1 day ago:
            Apple's RAM is not soldered to the _motherboard_, it's part of the
            SoC package.
       
              Vogtinator wrote 1 day ago:
              Only recently. It started out as soldered to the main board.
       
                brookst wrote 22 hours 53 min ago:
                No, it started out as chips in sockets. I (dimly) remember
                 upgrading my II+, I think from 32KB to 48KB?
                
                A lot has changed.
       
                  lazide wrote 6 hours 27 min ago:
                  EEPROM like DIP packaging where it was damn near impossible
                  to pull without bending a pin and/or smacking your hand on
                  something?
                  
                  God forbid someone steps on it too, I think I might still
                  have some scars on my feet.
       
            seanp2k2 wrote 1 day ago:
            "...and they charge 4x what the retail of premium RAM would
            otherwise be per GB"
            
            do storage next.
       
          OJFord wrote 1 day ago:
          I didn't find that a particularly complete explanation - and the slot
          can't be closer to the CPU because? - I think it must be more about
          parasitic properties of the card edge connector on DIMMs being
          problematic at lower voltage (and higher frequencies) or something.
          Note the solution is a ball grid connection and the whole thing's
          shielded.
          
          I suppose in fairness and to the explanation it does give, the other
          thing that footprint allows is a shorter path for the pins that would
          otherwise be near the ends of the daughter board (e.g. on a DIMM),
          since they can all go roughly straight across (on multiple layers)
          instead of a longer diagonal according to how far off centre they
          are. But even if that's it, that's what I mean by it seeming
          incomplete. :)
       
            throwaway48476 wrote 1 day ago:
            Competes with space for VRM's.
       
            Tuna-Fish wrote 1 day ago:
            > and the slot can't be closer to the CPU because?
            
            All the traces going into the slot need to be length-matched to
            obscene precision, and the physical width of the slot and the room
            required by the "wiggles" made in the middle traces to length-match
            them restrict how close you can put the slot. Most modern boards
            are designed to place it as close as possible.
            
            LPCAMM2 fixes this by having a lot of the length-matching done in
            the connector.
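             
             For a rough sense of the scale involved (my own back-of-the-envelope
             Python sketch, not from the article; the ~6.7 ps/mm FR-4 propagation
             delay and the DDR5-6400 rate are assumed figures):
             
               # How much of a bit period a small trace-length mismatch eats up.
               unit_interval_ps = 1e12 / 6.4e9   # DDR5-6400: ~156 ps per bit
               prop_ps_per_mm = 6.7              # typical FR-4 stripline delay
               skew_ps = 5 * prop_ps_per_mm      # a 5 mm mismatch -> ~34 ps of skew
               print(f"{skew_ps / unit_interval_ps:.0%} of a bit period")  # ~21%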
       
              ansible wrote 23 hours 26 min ago:
              Generally speaking, layout for modern DRAM (LPDDRx, etc.) is a
              giant pain. Trace width, differential trace length matching,
              spacing, number of vias, and more.
              
              And all this is needed even though the DRAM signaling standard
              has extensive measurement and analysis of the traces built right
              into the hardware of the DRAM and the memory controller on the
              processor. They negotiate the speed and latency at runtime.
              
              Giant pain.
       
            smolder wrote 1 day ago:
            Yeah, you can only make the furthest RAM chip in DIMM be so close
            to the CPU based on the form factor, and the other traces need to
            match that length. Distance is critical and edge connectors sure
            don't help.
       
          drivingmenuts wrote 1 day ago:
          The problem is getting manufacturers to implement the new RAM
          standard. While the justifications given are great for the consumer,
          I didn't see any reason for a manufacturer to sign on.
          
          They are going to lose money when people buy new RAM, rather than a
          whole new laptop. While processor speeds and size haven't plateaued
          yet, it's going to take a while to develop significant new speed
          upgrades and in the meantime, the only other upgrade is disk
          size/long-term storage, which, aside from Apple, they don't totally
          control.
          
           So, why should they relinquish that to the user?
       
            rock_artist wrote 23 hours 25 min ago:
             Unlike Apple, which is only in indirect competition on computer
             hardware, for PCs, if Lenovo starts doing it, then it's a
             marketing point, and Asus, HP, and Dell would try to get it too.
             
             So it's a chicken-and-egg situation: if it ends up being important
             to consumers, they might all end up catching up.
       
            AnthonyMouse wrote 1 day ago:
            > They are going to lose money when people buy new RAM, rather than
            a whole new laptop.
            
            You're thinking about this the wrong way around.
            
            Suppose the user has $800 to buy a new laptop. That's enough to get
            one with a faster processor than they have right now or more
            memory, but not both. If they buy one and it's not upgradable,
            that's not worth it. Wait another year, save up another $200, then
            buy the one that has both.
            
            Whereas if it can be upgraded, you buy the new one with the faster
            CPU right away and upgrade the memory in a year. Manufacturer gets
            your money now instead of later, meanwhile the manufacturer who
            didn't offer this not only doesn't sell to you in a year, they just
            lost your business to the competition.
       
              petemir wrote 1 day ago:
               I doubt the mass of consumers that actually matters to
               manufacturers' earnings understands the value of RAM or whether
               the computer they are buying is RAM-upgradable or not.
               
               They are going to buy the $800 one, either of the two, complain
               when it inevitably "works slower" in a couple of years (if they
               are lucky), and then buy a new $800 one again. I don't see the
               manufacturer's motivation to offer upgradable RAM.
       
                AnthonyMouse wrote 1 day ago:
                They don't have $800 to buy another one so soon. So they take
                the one that "works slower" to some tech who knows the deal and
                tells them this machine sucks because you can't upgrade it, and
                now they think your brand is crap (because it is), curse you
                for the next however many years until they have the money and
                then buy the next one from someone else.
       
            makeitdouble wrote 1 day ago:
            I'd see two angles:
            
             - the manufacturers themselves benefit from easier-to-repair
             machines. If Dell can replace the RAM and send back the laptop in a
            matter of minutes instead of replacing the whole motherboard to
            then have it salvaged somewhere else, it's a clear win.
            
             - prosumers will be willing to invest more in a laptop that has a
             better chance of surviving a few years. Right now we're all expecting
            to have parts fail within 2 to 3 years on the higher end, and
            budget accordingly. You need a serious reason to buy a 3000$/€
            laptop that might be dead in 2 years. Knowing it could weather RAM
            failure without manufacturer repair is a plus.
       
            7speter wrote 1 day ago:
             These companies did plenty well 12+ years ago when users could
             upgrade their system's memory.
       
            cesarb wrote 1 day ago:
            > While the justifications given are great for the consumer, I
            didn't see any reason for a manufacturer to sign on. [...] So, why
            should they relenquish that to the user?
            
            It makes sense that the first ones to use this new standard would
            be Dell and Lenovo. They both have "business" lines of computers,
            which usually offer on-site repairs (they send the parts and a
            technician to your office) for a somewhat long time (often 3 or 5
            years). To them, it's a cost advantage to make these computers
             easier to repair. Having the memory (a part which fails not
             infrequently) in a separate module means they don't have to replace
            and refurbish the whole logic board, and having it easy to remove
            and replace means less time used by the on-site technician
            (replacing the main logic board or the chassis often means
            dismantling nearly everything until it can be removed).
       
              babypuncher wrote 1 day ago:
              They also charge a lot more for these "business-class" machines.
              That higher margin captures the revenue lost to DIY repairs and
              upgrades.
       
              masklinn wrote 1 day ago:
              > To them, it's a cost advantage to make these computers easier
              to repair.
              
              Alternatively, it allows them to use more efficient RAM in
              computer lines they can't make non-repairable so they can boast
              of higher battery life.
       
            bugfix wrote 1 day ago:
            Even if it's just Lenovo using these new modules, I still think
            it's a win for the consumer (if the modules aren't crazy
            expensive).
       
        doublextremevil wrote 1 day ago:
         Can't wait to see this in a Framework laptop
       
          OJFord wrote 1 day ago:
          For the presumed improvement to battery life? Because Fw already uses
          SO-DIMMs.
       
            universa1 wrote 1 day ago:
             That's also nice, but the memory speed is also higher, DDR5-7266 vs
            5600 iirc. The resulting higher bandwidth translates more or less
            directly into more performance for the iGPU.
       
            wmf wrote 1 day ago:
            It's also faster (7500 vs. 5600).
       
        farmdve wrote 1 day ago:
        Remember that Haswell laptops were the last to feature socketed CPUs.
        
        RAM is nice to upgrade, for sure. As well as an SSD, but CPUs are still
        a must. I would even suggest upgradeable GPUs but I don't think the
        money is there for the manufacturers. Why allow you to upgrade when you
        can buy a whole new laptop?
       
          seanp2k2 wrote 1 day ago:
          They've done upgradeable laptop GPUs before with MXM: [1] Looks like
          the best card they have out with MXM right now is a Quadro RTX 5000
          Mobile which seem to be going for ~$1000 on eBay.
          
 (HTM)    [1]: https://en.wikipedia.org/wiki/Mobile_PCI_Express_Module
       
          immibis wrote 1 day ago:
          Laptops have always been trading size for upgradeability and other
          factors, and soldering everything is the way to make them tiny. If
          you ask me they've gotten too extreme in size. The first laptops were
          way too bulky, but they hit a sweet spot around 2005-2010, being just
          thick enough to hold all those D-Sub connectors (VGA, serial, etc).
          
          And soldering stuff to the board is the default way to make something
          when upgradeability isn't a feature.
       
          Night_Thastus wrote 1 day ago:
          On a laptop it's not very practical.
          
          Because you can't swap the motherboard, your options for CPUs are
          going to be quite limited. Generally, only higher-tier CPUs of that
          same generation - which draw more power and require more cooling.
          
           Generally a laptop is designed to provide a specific budget of
           power to the CPU and has a limited amount of cooling.
          
          Even if you could swap out the CPU, it wouldn't work properly if the
          laptop couldn't provide the necessary power or cooling.
       
            yencabulator wrote 1 day ago:
            > On a laptop it's not very practical.
            
             > Because you can't swap the motherboard
             
             [1] has entered the chat.
            
 (HTM)      [1]: https://frame.work/
       
            farmdve wrote 1 day ago:
             I can't say I agree. Back in 2014 a laptop was purchased with a
             dual-core Haswell CPU. 8 years later I revived the laptop by
             upgrading the CPU to almost the best possible one, a 4-core
             8-thread or 4-core 4-thread CPU, I'm not sure which, but the speed
             boost was massive. This is how you keep old tech alive.
            
            And the good thing about mobile CPUs is that they have almost the
             same TDP across the various dual/quad-core versions (or whatever
             is the norm today).
       
              Rohansi wrote 1 day ago:
              How old was the new CPU though? Probably the same or similar
              generation to what it originally came with since the socket needs
              to be the same.
              
              IMO the switch to an SSD would have been the biggest boost.
       
                farmdve wrote 20 hours 5 min ago:
                Same gen but with 2 more cores + Hyperthreading
       
          zamadatix wrote 1 day ago:
          I'm not sure I really get much value out of a socketed CPU,
          particularly in a laptop, vs something like a swappable MB+CPU combo
          where the CPU is not socketed.
          
          RAM/Storage are great upgrades because 5 years from now you can pop
          in 4x the capacity at a bargain since it's the "old slow type". CPUs
          don't really get the same growth in a socket's lifespan.
       
            farmdve wrote 1 day ago:
            As I said to the comment above, it makes perfect sense. In 2014 we
            purchased a dual core Haswell. Almost a decade later I revive the
            laptop by installing more ram, an SSD and the best possible quad
             core CPU for that laptop. The gain in processing power was massive
             and made the laptop usable again.
       
              zamadatix wrote 1 day ago:
              I'm sure it's all subjective (e.g. I'm sure someone here even
              considers the original dual core Haswell more than fine without
              upgrade in 2024) but going from a dual core Haswell to a quad
              core Haswell (or even a generation or two beyond, had it been
              supported) as an upgrade a decade after the fact just doesn't
              seem worth it to me.
              
              The RAM/SSD sure - a 2 TB consumer SSD wasn't even a possible
              thing to buy until a year after that laptop would have come out
              and you can get that for <$100 new now. It won't be the highest
              performing modern drive but it'll still max out the bus and be
              many times larger than the original drive. Swap equipment 3 years
              from now and that's also still a great usable drive rather than a
              museum piece. Upgrading to a CPU that you could have gotten
              around the time the laptop came out? Sure, it has twice as many
              cores... but it still has pretty bad multi core performance and a
              god awful perf/wattage ratio to be investing new money on a
              laptop for. It's also a bit of a dead end, in 3 years you'll now
              have 2 CPUs so ancient you can't really do much with them.
       
                farmdve wrote 21 hours 58 min ago:
                Maybe it is subjective. For me it made perfect sense. I could
                not afford a new laptop but could afford rejuvenating an old
                one.
       
                pavon wrote 1 day ago:
                 This matches my experience. Every PC I've built over the last
                 30 years has benefited from memory and storage upgrades
                through their life, and I've upgraded GPU a few times. However,
                every time I've looked at upgrading to another CPU with the
                same socket it is either not a big enough step up, or too much
                of a power hog relative to the midrange CPU I originally built
                with. The only time I've replaced CPUs is when I've fried them
                :)
       
                  seanp2k2 wrote 1 day ago:
                  Yup, so I've adopted a strategy for my past few desktop
                  builds like this:
                  
                    - Every time a new ToTL GPU comes out for a new family, buy
                  it at retail price as soon as it launches (so, the
                  first-available ToTL models that were big gains in perf: GTX
                  1080 Ti, RTX 2080 Ti, RTX 3090, RTX 4090)
                  
                    - Every other release cycle, upgrade CPU to the ToTL
                  consumer chip (eg on a 12900KS right now, HEDT like
                  ThreadRipper is super expensive and not usually better for
                  gaming or normal dev stuff). I was with Ryzen since 1800x ->
                  3950x -> 5950x but Intel is better for the particular game I
                  play 90% of the time.
                  
                    - Every time you upgrade, sell the stuff you've upgraded
                  ASAP. If you do this right and never pay above MSRP for
                  parts, you can usually keep running very high-end hardware
                  for minimal TCO.
                  
                    - Buy a great case, ToTL >1000w PSU (Seasonic or be
                  quiet!), and ToTL cooling system (currently on half a dozen
                  140mm Noctua fans and a Corsair 420mm AIO). This should last
                  at least 3 generations of upgrading the other stuff.
                  
                    - Storage moves more slowly than the rest, and I've had
                  cycles where I've re-used RAM as well, so again here go for
                  the good stuff to maximize perf, but older SSDs work great
                  for home servers or whatever else.
                  
                    - Monitor and other peripherals are outside of the scope of
                  this but should hopefully last at least 3 upgrade
                  generations. I bit when OLED TVs supported 4K 120hz G-Sync,
                  so I've got a 55" LG G1 that I'm still quite happy with and
                  not wanting to immediately upgrade, though I do wish they
                  made it in a 42" size, and 16:10 would be just perfect.
       
            immibis wrote 1 day ago:
            Socket AM4 had a really good run. Maybe we just have to pressure
            manufacturers to make old-socket variations of modern processors.
            
            The technical differences between sockets aren't usually huge.
            Upgrade the memory standard here, add or remove PCIe lanes there.
            Using new cores with an older memory controller may or may not be
            doable, but it's quite simple to not connect all the PCIe lanes the
            die supports.
       
              seanp2k2 wrote 1 day ago:
              but then what excuse would you have to throw another $500 at Asus
              for their latest board that while being the best chance the
              platform has, still feels like it runs a beta BIOS for the first
              9 months of ownership?
       
          sojuz151 wrote 1 day ago:
           I would say it would make the most sense to have the entire
           RAM+CPU+GPU assembly be replaceable. Just have some standard form
           factors and connectors for the external connections.
           
           This way, you could keep power consumption low and be able to
           upgrade the CPU to a new generation.
       
          leduyquang753 wrote 1 day ago:
          The Framework laptop 16 features replaceable GPU.
       
            freedomben wrote 1 day ago:
            I'm writing this from my Framework 16 with GPU and it is the best
            laptop I've ever known.  It's heavy and big and not the most
            portable, but I knew that would be the case going into it and I
            have no regrets
       
            FloatArtifact wrote 1 day ago:
            > The Framework laptop 16 features replaceable GPU.
            
             In a way I don't mind having non-replaceable RAM in the Framework
             ecosystem as an option, simply because the motherboard itself is
             modular and needs to be upgraded for the CPU anyway. At that point,
             though, I would prefer integrated RAM/CPU/GPU.
       
            farmdve wrote 1 day ago:
            These are very obscure, or perhaps I mean to say niche laptop
            manufacturers. We need this standard for all of them, HP, Lenovo,
            Acer etc.
       
              nwah1 wrote 1 day ago:
              Framework open sources most of their schematics, if I understand
              correctly. So it should be possible for others to use the same
              standard, if they wanted to. (they don't want to)
       
                Dylan16807 wrote 1 day ago:
                The form factor isn't great for being a vendor-neutral thing.
                
                If we can convince the companies to actually try for
                compatibility, then a revival of MXM is probably a
                significantly better option.
       
                  Manabu-eo wrote 16 hours 29 min ago:
                   MXM was problematic because of the inflexibility of the
                   form factor when upgrading a given system. If your laptop's
                   size, power and cooling were designed for a GTX 1030, you
                   couldn't replace it with a GTX 1080 module.
                   
                   In Framework's case, the cooling is integrated into the GPU
                   module, and its size, cooling and power delivery can all be
                   adjusted depending on the GPU's power.
       
                    Dylan16807 wrote 10 hours 42 min ago:
                    I don't mind having a wattage limit on the slot.  That's
                    easy to factor into purchasing decisions.  The much bigger
                    issues are how custom each kind was, with very limited
                    competition on individual modules and a big conflict of
                    interest in wanting to sell you a new laptop.
                    
                    A friend of mine was betrayed on this by MSI, where laptops
                    with GTX 900 series GPUs were promised upgrades and then
                    when the 1000 series came out they didn't offer any.  I
                    think they did make weak excuses about power use, but a
                    1060 would have fit within the power budget fine and been
                    an enormous upgrade.  A few people have even gotten 1060
                    modules to work with BIOS edits, so it wasn't some other
                    incompatibility.  It seems like they saw they couldn't
                    offer a 1080 and threw out the entire project and promise,
                    and then offered a mild discount on a brand new laptop, no
                    other recourse.
       
                nrp wrote 1 day ago:
                Published here:
                
 (HTM)          [1]: https://github.com/FrameworkComputer/ExpansionBay
       
        dvh wrote 1 day ago:
        What's wrong with DIMM?
       
          magicalhippo wrote 1 day ago:
          The physical size of the socket and having the connections on the
          edge means you're forced to have much longer traces. Longer traces
          means slower signalling and more power loss due to higher resistance
          and parasitics.
          
          This[1] Anandtech article from last year has a better look at how the
          LPCAMM module works. Especially note how the connectors are now
          densely packed directly under the memory chips, significantly
          reducing the trace length needed. Not just on the memory module
          itself but also on the motherboard due to the more compact memory
          module. It also allows for more pins to be connected, thus higher
          bandwidth (more bits per cycle).
          
          [1] 
          
 (HTM)    [1]: https://www.anandtech.com/show/21069/modular-lpddr-becomes-a...
       
            kjkjadksj wrote 1 day ago:
            I'd wager for most consumers capacity is more important than
            bandwidth and the power losses are going to be small compared to
            the rest of the stack.
       
              magicalhippo wrote 1 day ago:
              > power losses are going to be small compared to the rest of the
              stack
              
              While certainly not the largest losses, they do not appear
               insignificant. In LPDDR4 they introduced[1] a new low-voltage
              signalling, which I doubt they could have gotten working with
              SODIMMs due to the extra parasitics.
              
              If you look at this[2] presentation you can see that at 3200MHz a
              DDR4 SODIMM would consume around 2 x 16 x 4 x 6.5mW x 3.2GHz =
              2.6W for signalling going full tilt. Thanks to the new signalling
              LPDDR4 reduces this by 40% to around 1.6W.
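               
               (A quick Python restatement of that arithmetic; assuming the
               6.5mW figure is per I/O per Gb/s, so the 3.2 term simply scales
               it:)
               
                 sodimm_mw = 2 * 16 * 4 * 6.5 * 3.2   # ~2662 mW, i.e. ~2.6 W
                 lpddr4_mw = sodimm_mw * (1 - 0.40)   # ~1.6 W after the 40% cut
                 print(round(sodimm_mw), round(lpddr4_mw))   # 2662 1597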
              
               Compared to a low-power CPU having a TDP of 10W or less, a
              full 1W reduction per SODIMM just due to signalling isn't
              insignificant.
              
              To further put it into perspective, the recent Lenovo ThinkPad
              X1[3] uses around 4.15W average during normal usage, and that
              includes the screen.
              
              Obviously the memory isn't going full tilt at normal load, but
              say average 0.25W x 2 sticks would reduce the X1's battery
              lifetime by 10%.
              
              edit: yes I'm aware the presentation is about LPDDR4 yet the X1
               uses LPDDR5, just trying to add context using available sources.
              
              [1] [2]
              
 (HTM)        [1]: https://www.jedec.org/news/pressreleases/jedec-releases-...
 (HTM)        [2]: https://www.jedec.org/sites/default/files/JY_Choi_Mobile...
 (HTM)        [3]: https://www.tomshardware.com/reviews/lenovo-thinkpad-x1-...
       
                CoolCold wrote 20 hours 5 min ago:
                useful, thank you!
       
              bmicraft wrote 1 day ago:
               Bandwidth translates directly into better (iGPU) performance.
       
          0x457 wrote 1 day ago:
          There is literally an entire section explaining why LPDDR needs to be
          soldered down as close as possible to the memory controller.
       
          adgjlsfhk1 wrote 1 day ago:
          One of the biggest problems is that edge connections don't give you
             enough density. Edge connections are great for servers where you stack
          16 channels next to each other, but in a laptop form factor, your
          capacity is already limited, so you can get more wires coming out of
          the ram by connecting to the face rather than the edge.
       
          rangerelf wrote 1 day ago:
          There's nothing _wrong_ with it, it performs according to spec, but
          it has limitations: trace length, power requirements, signal
          limitations, heat, etc.
       
          armarr wrote 1 day ago:
          Larger footprint, taller, longer traces and signal degradation in the
          connectors.
       
          linsomniac wrote 1 day ago:
           It requires too much power, according to the article. This allows
           "LP" (Low Power) parts to be removable; they normally have to be
           soldered onto the board close to the CPU because of the low voltage
           tolerances.
       
          mmastrac wrote 1 day ago:
          The size, the sockets, the heat distribution, etc, etc, etc.
       
        baby_souffle wrote 1 day ago:
        This is fantastic news.
        Hopefully the cost to manufacturers is only marginal and they find a
        suitable replacement for their current "each tier in RAM comes with a
        5-20% price bump" pricing scheme.
        
        Too bad apple is almost guaranteed to not adopt the standard. I miss
        being able to upgrade the ram in macbooks.
       
          j16sdiz wrote 1 day ago:
           Unified memory is basically L3-cache speed with zero copy between
           CPU and GPU.
           
           There are engineering differences. Depending on who you ask, it may
           or may not be worth it.
       
            enragedcacti wrote 1 day ago:
            Assuming you mean latency, Apple's unified memory isn't lower
            latency than other soldered or socketed solutions e.g. M1 Max with
            111ns latency on cache miss vs 13900k with 93ns latency. Certainly
            not L3 level latency. Zero copy between CPU/GPU is great but not
            unique to unified memory or soldered ram.
            
            As far as bandwidth goes, you would only need one or two LPCAMM2
            modules to match or exceed the bandwidth of non-Max M series chips.
            Accommodating Max chips in a macbook with LPCAMM2 would definitely
            be a difficult packaging problem. [1]
            
 (HTM)      [1]: https://www.anandtech.com/show/17024/apple-m1-max-performa...
 (HTM)      [2]: https://www.anandtech.com/show/17047/the-intel-12th-gen-co...
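             
             (Rough Python arithmetic for the bandwidth claim; the LPDDR5X-7500
             speed grade and Apple's published 100/150 GB/s figures for the
             M3/M3 Pro are my assumptions, not from the linked articles:)
             
               lpcamm2_gbs = 128 / 8 * 7500 / 1000   # one 128-bit module: 120 GB/s
               m3_gbs, m3_pro_gbs = 100, 150         # Apple's quoted bandwidths
               print(lpcamm2_gbs >= m3_gbs)          # True: one module matches an M3
               print(2 * lpcamm2_gbs >= m3_pro_gbs)  # True: two exceed an M3 Pro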
       
          redeeman wrote 1 day ago:
           and they won't so long as people buy regardless
       
          Aurornis wrote 1 day ago:
          > Too bad apple is almost guaranteed to not adopt the standard.
          
          Apple would require multiple LPCAMM2 modules to provide the bus width
          necessary for their chips. Up to 4 x LPCAMM2 modules depending on the
          processor.
          
           Each LPCAMM2 module is almost as big as an entire Apple CPU package
           combined with its unified RAM chips, so putting 2-4
          LPCAMM2 modules on the board is completely infeasible without
          significantly increasing the size of the laptop.
          
          Remember, the Apple architecture is a combined CPU/GPU architecture
          and has memory bandwidth to match. It's closer to your GPU than the
           CPU in your non-Mac machine. Asking for upgradeable RAM on Apple
           laptops is almost like asking for upgradeable RAM on your GPU (which
           would not be cheap or easy).
          
          For every 1 person who thinks they'd want a bigger MacBook Pro if it
          enabled memory upgrades, there are many, many more people who would
          gladly take the smaller size of the integrated solution we have
          today.
       
            kokada wrote 1 day ago:
            >  Up to 4 x LPCAMM2 modules depending on the processor.
            
             The non-Pro/Max versions (e.g. M3) use 128 bits, and arguably go
             into the kind of notebook that most needs to be upgraded later,
             since they commonly come with only 8GB of RAM.
             
             Even the Pro versions (e.g. M3 Pro) use up to 256 bits; that would
             be 2 x LPCAMM2 modules, which seems plausible.
            
            For the M3 Max in the Macbook Pro, yes, 4 x LPCAMM2 would be
             impossible (probably). But I think something like the Mac Studio
             could have them; that is arguably also the kind of device where
             you would want to increase the memory in the future.
       
              throwaway48476 wrote 1 day ago:
              It would only need to be 2x per board side.
       
            coolspot wrote 1 day ago:
            > like asking for upgradeable RAM on your GPU
            
            Can I please have upgradeable RAM on GPU? Pwetty pwease?
       
              thfuran wrote 1 day ago:
              Sure, as long as you're willing to pay in cost, size, and
              performance.
       
          sliken wrote 1 day ago:
          Apple ships 128 bit, 256 bit, and 512 bit wide memory interfaces on
          laptops (up to 1024 bit wide on desktops).
          
          Is it feasible to fit memory bandwidth like the M3 Max (512 bits wide
          LPDDR5-6400) with LPCAMM2 in a thin/light laptop?
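           
           (Rough Python arithmetic using the figures in the question:)
           
             m3_max_gbs = 512 / 8 * 6400 / 1000   # 512-bit LPDDR5-6400: ~409.6 GB/s
             modules = 512 / 128                  # four 128-bit LPCAMM2 modules wide
             print(m3_max_gbs, modules)           # 409.6 4.0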
       
            AnthonyMouse wrote 1 day ago:
            Apple does this because their CPU and GPU use the same memory, and
            it's generally the GPU that benefits from more memory bandwidth.
            Whereas in a PC optimized for GPU work you'd have a discrete GPU
            that has its own memory which is even faster than that.
       
            jauntywundrkind wrote 1 day ago:
             Hoping we see AMD Strix Halo with its 256-bit interface crammed
             into an aggressively cooled, fairly-thin, fairly-light laptop. But
             it's going to require heavy cooling to make full use of it.
            
            Heck, make it only run full tilt when on an active cooling dock.
            Let it run half power when unassisted.
       
              seanp2k2 wrote 1 day ago:
              Kinda hilarious to see gamers buying laptops that can't actually
              leave the house in any practical meaningful way. I feel like some
              of them would be better off with SFF PCs and the external
              monitors they already use. I guess the biggest appeal I've seen
              is the ability to fold up the gaming laptop and put the dock away
              to get it off the desk, but then moving to an SFF on the ground
              plus a wireless gaming keyboard and wireless mouse that they
              already use with the normal laptop + one of those compact
              "portable" monitors seems like it'd solve the same problem.
       
                jwells89 wrote 1 day ago:
                I’ve been wondering for a while now why ASUS or some other
                gaming laptop manufacturer doesn’t take one of their flagship
                gaming laptop motherboards, put some beefy but quiet cooling on
                it, put it in a pizza-box/console enclosure, and sell it as a
                silent compact gaming desktop.
                
                A machine like that could still be relatively small but still
                be dramatically better cooled than even the thickest laptop due
                to not having to make space for a battery, keyboard, etc.
       
                  antonkochubey wrote 1 day ago:
                  ZOTAC does these - there are ZBOX Magnus models with
                  laptop-grade RTX 4000 series GPUs in 2-3 liter chassis.
                  However, their performance and acoustics are rather...
                  compromised compared to a proper SFF desktop (which can be
                  built in ~3x the volume).
       
                    jwells89 wrote 1 day ago:
                    Yeah, those look like they’re too small to be reasonably
                    cooled. What I had in mind is shaped like the main body of
                    a laptop but maybe 2-3x as thick (to be able to fit plenty
                    of heatsink and proper 120/140mm fans), stood up on its
                    side.
       
                kristianp wrote 1 day ago:
                My wife can get an hour of gaming out of her gaming laptop. 
                They're good for being able to game in an area of the house
                where the rest of the family is, even if that means being
                plugged in at the dining table.  Our home office isn't close
                enough.
                
                Also a gaming laptop is handy if you want to travel and game at
                your hotel.
       
            wmf wrote 1 day ago:
            For 512 bits you would need four LPCAMM2s. I could imagine putting
            two on opposite sides of the SoC but four might require a huge
            motherboard.
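            
            A rough back-of-the-envelope sketch (assuming 128-bit-wide LPCAMM2
            modules and the LPDDR5-6400 figure quoted above, not measured
            numbers):
            
              BUS_WIDTH_BITS = 512       # M3 Max memory interface
              MODULE_WIDTH_BITS = 128    # assumed width of one LPCAMM2
              TRANSFER_RATE = 6400       # LPDDR5-6400, megatransfers/s
              
              modules = BUS_WIDTH_BITS // MODULE_WIDTH_BITS        # 4
              gb_s = BUS_WIDTH_BITS / 8 * TRANSFER_RATE / 1000     # 409.6
              print(modules, gb_s)       # 4 modules, ~409.6 GB/s
            
            So matching that bandwidth really does mean four modules, plus the
            board area to route them all around the SoC.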
       
              kristianp wrote 1 day ago:
              Perhaps future LPCAMM generations will offer wider interfaces? I
              still can't imagine Apple using them unless required by
              right-to-repair laws. But those laws probably don't extend to
              making RAM upgradeable.
       
            pja wrote 1 day ago:
            This PDF[1] suggests that an LPCAMM2 module has a 128 bit wide
            memory interface, so the epic memory bandwidth of the M3 max
            won’t be achievable with one of these memory modules. High end
            devices could potentially have two or more of them arranged around
            the CPU though?
            
 (HTM)      [1]: https://investors.micron.com/node/47186/pdf
       
              7speter wrote 1 day ago:
              Apple could just make lower-tier MacBooks, but Mac fanboys
              wouldn't be able to ask “but what about Apple's quarterly
              profits?”
              
              Most MacBooks don't need high memory bandwidth; most users are
              using their Macs for word processing, Excel, and VS Code.
       
                pmontra wrote 1 day ago:
                As a non-Mac reference, I work on an HP laptop from 2014. It
                was a high-end laptop back then. It's between 300 and 600 euros
                refurbished now.
                
                I expanded it to 32 GB RAM and a 3 TB SSD, but it's still an i7
                4xxx with 1666 MHz RAM. And yet it's OK for Ruby, Python, Node,
                PostgreSQL, Docker. I don't feel the need to upgrade. I will
                when I get a major failure and no spare parts to fix it.
                
                So yes, low end Macs are probably good for nearly everything.
       
                sliken wrote 1 day ago:
                Even low end gaming, simulations, and even fun webGL toys can
                require a fair amount of memory bandwidth with an iGPU, like
                apple's M series.  It also helps quite a bit for inference.  I
                MBP with a M3 max can run models requiring multiple GPUs on a
                desktop and still get decent perf for single users.
       
                  consp wrote 1 day ago:
                  > An MBP with an M3 Max can run models that require multiple
                  GPUs on a desktop and still get decent perf for a single
                  user.
                  
                  Good for your niche case, but the other 99.8% still only do
                  web and low-performance desktop applications (which includes
                  IDEs).
       
                teaearlgraycold wrote 1 day ago:
                Yes, but Apple’s trying to build an ecosystem where users get
                high-quality, offline, low-latency AI computed on their device.
                Today there’s not much of that. And I don’t think they even
                really know what’s going to justify all of that silicon in the
                neural engine and the memory bandwidth.
                
                Imagine 5 years from now people have built whole stacks on that
                foundation. And then competing laptops need to ship that
                compute to the cloud, with all of the unsolvable problems that
                come with that. Privacy, service costs (ads?), latency,
                reliability.
       
                  jwells89 wrote 1 day ago:
                  Apple is also deliberately avoiding having “celeron” type
                  products in their lineup because those ultimately mar the
                  brand’s image due to being kinda crap, even if they’re
                  technically adequate for the tasks they’re used for.
                  
                  They instead position midrange products from 1-2 gens ago as
                  their entry level, which isn’t quite as cheap but is usually
                  much more pleasant to use than the usual bargain-basement
                  stuff.
       
          cjk2 wrote 1 day ago:
          Given enough pressure ...
       
            colinng wrote 1 day ago:
            They will maliciously comply. They might even have 4 sockets for
            the 512-bit wide systems. But then they’ll keep the SSD devices
            soldered - just like they’ve done for a long time. Or cover them
            with epoxy, or rig it with explosives. That’ll show you for
            trying to upgrade! How dare you ruin the beautiful fat profit
            margin that our MBAs worked so hard to design in?!?
       
              cjk2 wrote 1 day ago:
              This is hyperbole. They are replaceable. It's just more
              difficult.
       
              7speter wrote 1 day ago:
              Apple lines the perimeter of the NAND chips on modern Mac minis
              with an array of tiny capacitors, so even the crazy people with
              heater boards can’t desolder the NAND and replace it with
              higher-density NAND.
       
                cjk2 wrote 1 day ago:
                This is normal. They are called decoupling capacitors and are
                there to provide energy when the SSD needs short bursts of it.
                If you put them any further away, the bit of wire between them
                and the gate turns into an inductor and has some undesirable
                characteristics.
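                
                Rough numbers, as a sketch only, assuming ~1 nH of trace
                inductance per millimetre (a common rule of thumb, not a
                measured value for these boards):
                
                  import math
                  
                  L_PER_MM = 1e-9  # ~1 nH per mm, rule of thumb
                  FREQ = 1e9       # 1 GHz content in the transients
                  
                  # extra impedance per mm between cap and die
                  z_per_mm = 2 * math.pi * FREQ * L_PER_MM
                  print(f"{z_per_mm:.1f} ohms per mm")  # ~6.3
                
                That's enormous compared to the very low impedance you want
                from the power delivery path, which is why the caps sit right
                against the package.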
                
                Also replacing them is not rocket science. I reckon I could do
                one fine (used to do rework). The software side is the bugbear.
       
                wtallis wrote 1 day ago:
                Have you not looked at the NAND packages on any regular SSDs?
                Tiny decoupling caps alongside the NAND is pretty standard
                practice.
       
            armarr wrote 1 day ago:
            You mean pressure from regulators, surely. Because 99% of consumers
            will not notice or know the difference in a spec sheet.
       
        mmastrac wrote 1 day ago:
        Ugh, finally. And it's not just a repurposed desktop memory standard
        either! The overall space requirements look to be similar to the BGA
        that you'd normally solder on (perhaps 2-3x as thick?). I'm sure they
        can reduce that overhead going forward.
        
        I love the disclosure at the bottom:
        
        Full Disclosure: iFixit has prior business relationships with both
        Micron and Lenovo, and we are hopelessly biased in favor of repairable
        products.
       
          Aurornis wrote 1 day ago:
          > Ugh, finally.
          
          FYI, the '2' at the end is because this isn't the first time this has
          been done. :)
          
          LPCAMM spec has been out for a while. LPCAMM2 is the spec for
          next-generation parts.
          
            Don't expect either to become mainstream. It's relatively more
            expensive and space-consuming to build an LPCAMM motherboard versus
            dropping the RAM chips directly onto the motherboard.
       
            audunw wrote 1 day ago:
            Not to mention putting the RAM directly on the System-in-Package
            like Apple does now. That's going to be unbeatable in terms of
            space and possibly have an edge when it comes to power consumption
            too. I wouldn't be surprised if future standards require on-package
            RAM.
            
            I kind of wish we could establish a new level in the memory
            hierarchy. Like, just make a slot where you can add slower, more
            power-hungry DDR RAM that acts as a big cache for the NVM storage,
            or where the OS can offload stuff from main memory that isn't used
            much. It could be unpopulated in base models, and then you could
            buy an upgrade to stick in there for some extra performance later
            if needed.
       
              burutthrow1234 wrote 22 hours 21 min ago:
              This is kind of what Optane was in some incarnations (it's really
              terrible branding that conflates multiple technologies).
       
            nrp wrote 1 day ago:
            My recollection of this is that LPCAMM was a proposal from Dell
            that they put into the JEDEC standardization process, and LPCAMM2
            is the resulting standard, named that way to avoid confusion with
            the non-standard LPCAMM that Dell trialed on a small number of
            commercial systems.
       
              Tuna-Fish wrote 21 hours 31 min ago:
              Almost. The Dell proposal is called CAMM, which was slightly
              modified during the JEDEC process and standardized as CAMM2,
              which is the combined with the memory type the same way DIMM was,
              For example LPDDR5X CAMM2 or DDR5 CAMM2. LPCAMM2 is not a name
              used in any JEDEC standard or even referred to anywhere on their
              site, but it seems to be used by both the memory manufacturers
              and the users because it's less of a mouthful, and they feel
              there needs to be more to distinguish between LPDDR5 CAMM2 and
              DDR5 CAMM2 because they are not electrically compatible.
       
          cjk2 wrote 1 day ago:
          Yeah, they even gloss over Lenovo's crappy USB-C connectors soldered
          to the motherboard, which are always the weak point on modern
          ThinkPads. Well, that and Digital River (Lenovo's distributor)
          carries absolutely no spare parts at all for any Lenovos in Europe,
          and if they do, the parts only rarely turn up, so you can't replace
          any of the replaceable bits because you can't get any.
       
            sspiff wrote 1 day ago:
            Digital River is shit at everything. From spare parts, to delivery
            and tracking, to customer communications, to warranty claims. Every
            single interaction with them is a nightmare. It is the single
            reason I prefer to buy Lenovo from resellers rather than directly.
       
            chpatrick wrote 1 day ago:
            Have you tried [1] ?
            
 (HTM)      [1]: https://www.lenovopartsales.com/LenovoEsales
       
       
 (DIR) <- back to front page