[HN Gopher] Linux M1 GPU driver passes 99% of the dEQP-GLES2 com...
       ___________________________________________________________________
        
       Linux M1 GPU driver passes 99% of the dEQP-GLES2 compliance tests
        
       Author : tpush
       Score  : 375 points
       Date   : 2022-10-21 15:00 UTC (8 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | mechanical_bear wrote:
        | Awesome work, glad to see such progress. Why is this person using
        | such weird voice modulation? Or are they using a text-to-voice
        | engine? It makes it difficult to understand.
        
         | dheera wrote:
         | I'd do that if I posted videos online, I hate hearing my voice.
        
       | Pet_Ant wrote:
       | From
       | https://chromium.googlesource.com/angle/angle/+/main/doc/dEQ...
       | it looks like drawElements (dEQP) is:
       | 
       | > drawElements (dEQP) is a very robust and comprehensive set of
       | open-source tests for GLES2, GLES3+ and EGL. They provide a huge
       | net of coverage for almost every GL API feature.
       | 
        | So this is something like the CSS Acid test, but for OpenGL for
        | Embedded Systems (GLES).
        
         | lynguist wrote:
         | It was unnecessary to mention Acid. Acid was meant to test
          | cherry-picked unimplemented features, not to test coverage.
        
         | AbacusAvenger wrote:
         | It's more than that, it's the official conformance tests:
         | 
         | https://github.com/KhronosGroup/VK-GL-CTS/blob/main/external...
        
       | imwillofficial wrote:
       | I can't wait till we get a Linux iPad Pro
        
         | sofixa wrote:
         | iPads don't have the open bootloader of the Macbooks that would
         | allow that.
        
           | Gigachad wrote:
           | That's an obstruction, but usually someone eventually finds a
           | way around that part. But we never get a usable OS for the
           | hardware even if we can run arbitrary code. We have been able
            | to run Linux on older iPhones for a while now, but I'm not
           | sure it's ever been a useful thing to do.
        
         | fsflover wrote:
         | https://news.ycombinator.com/item?id=25172883
        
         | speedgoose wrote:
          | I'm afraid most classic Linux applications are not designed
          | with multitouch input in mind. It may improve eventually, as
          | newer software is created.
        
           | noveltyaccount wrote:
            | But with a Bluetooth mouse and keyboard, it could be a
            | compelling form factor.
        
             | 988747 wrote:
              | If you count the size of all the devices, it's not that
              | compelling anymore.
        
           | mdp2021 wrote:
           | It's open, we will add the gestures.
        
       | THENATHE wrote:
        | How likely is this to make some kind of stride in the macOS
        | gaming world? Perhaps some kind of "headless" Linux VM that
        | functions similarly to Parallels or Wine and can then run a
        | Proton layer to bring macOS up to par with Linux gaming. Can
        | someone with more knowledge than me explain whether that is
        | feasible in the coming months/years?
        
         | sofixa wrote:
         | I don't think that's at all the goal or even vague direction of
         | the Asahi Linux project. It's to replace macOS, not to run as a
         | VM within it.
         | 
         | It's already possible to do what you want, with Parallels and
         | any modern arm64 Linux distro, as long as the games are
         | compiled for arm64 / you're willing to take the performance hit
         | of emulation.
        
           | Gigachad wrote:
            | Not really macOS gaming, but it could be a big thing for
            | MacBook gaming. My only laptop is a MacBook, and it's
            | powerful enough to run many of the games I play, but they
            | only run on Windows/Linux.
        
         | whywouldyoudo wrote:
         | > How likely is this to make some kind of stride in the macOS
         | gaming world?
         | 
         | It doesn't make any strides unfortunately.
         | 
         | > Perhaps some kind of "headless" Linux VM
         | 
         | Linux narrowly supports the Steam Deck's goals. It doesn't make
         | sense to use on a device like a Mac.
         | 
         | > Can someone with more knowledge than me explain if that is
         | feasible in the coming months/years?
         | 
         | An end user is best served by Parallels. Overall, it doesn't
         | make sense to run Windows-only games on a Mac.
        
         | smoldesu wrote:
         | You might see a few games like Diablo 2 or DOOM running, but
         | modern games are almost totally out of the question. The
         | biggest roadblock is getting a complete Vulkan driver, which is
         | at least a few years off. That would allow for DXVK to run,
         | which enables the majority of Proton titles.
         | 
         | And then there's the issue of the stack itself. It's hard
         | enough getting Wine and DXVK to run on hardware designed to run
         | Windows/DirectX respectively, but running it through
         | Rosetta/Box86, on Asahi, using half-finished video drivers, is
         | probably not great for stability. Just my $0.02.
         | 
          | Edit: I thought you were talking about Asahi gaming. The state
          | of Proton on macOS is hopeless (Valve devs abandoned it after
          | the Catalina announcement), but you can definitely run it in a
          | VM if you're so inclined. It wouldn't necessarily be any faster
          | than using Windows in a VM, but it's still an option.
        
         | tmdh wrote:
          | Well, it could be done this way:
          | 
          | 1. A Linux VM with Steam installed on it.
          | 
          | 2. A project like virglrenderer[1] for macOS, which processes
          | Vulkan commands from the VM via virtio-gpu and sends the data
          | back. (I don't know exactly how it's done.)
          | 
          | This would allow games in the Linux VM's Steam to run at near-
          | native GPU performance. On the CPU side, there is still a need
          | for instruction emulation like QEMU. Alternatively, Apple
          | posted somewhere that you can run x86_64 Linux binaries inside
          | the VM with the help of Rosetta, or you could use an ARM64
          | Linux VM with Steam running via FEX-Emu.
         | 
         | [1]: https://gitlab.freedesktop.org/virgl/virglrenderer
        
       | rnk wrote:
        | How far along is running a regular Linux desktop OS with a
        | working UI on an M1? Recently there were articles that Linus had
        | been using an M1 running some version of Linux, but there were
        | some aspects of the system that weren't working well; I believe
        | it was the UI, maybe he was even in text mode.
        
         | robear wrote:
         | Support matrix -
         | https://github.com/AsahiLinux/docs/wiki/Feature-Support
         | 
          | You can run a GUI, but it currently runs on the CPU, not the
          | GPU. Reports are that the UI is quite snappy running on the
          | CPU. The basic Asahi install is Arch with KDE. Linus apparently
          | put Fedora on his travel laptop. What Linus was complaining
          | about is Chrome support, which is coming or maybe is already
          | available -
          | https://twitter.com/asahilinux/status/1507263017146552326?la...
        
         | david_allison wrote:
         | Besides Asahi, I've done dev work on arm64 Ubuntu running under
         | Parallels (GUI works fine).
        
         | whywouldyoudo wrote:
         | > How far along is running a regular linux desktop os with
         | working ui on an m1?
         | 
          | The number of people who will be doing this will be in the
          | hundreds. Not to diminish what an interesting achievement it
          | is.
        
         | JLCarveth wrote:
         | Not Linux per se, but there was a post recently about OpenBSD
         | support for M2. https://news.ycombinator.com/item?id=33274679
        
           | trenchgun wrote:
           | It is based on Asahi.
        
         | delroth wrote:
         | See https://github.com/AsahiLinux/docs/wiki/Feature-Support .
         | The laptops are very usable as a daily driver on Linux already,
          | but not 100% of the hardware is supported, and depending on
          | your specific requirements some of the missing support might
          | be a deal breaker.
         | a working UI, that's been working for months, it just doesn't
         | have hardware rendering acceleration (but the CPU is fast
         | enough to make this tolerable).
        
           | criddell wrote:
           | Is the battery life when running Asahi Linux close to what it
           | is with macOS?
        
             | nicoburns wrote:
                | No; aside from anything else, the missing GPU driver means
             | that the system is running at ~60% all-core CPU even at
             | idle (rendering a desktop). IIRC, it still lasts around 4
             | hours. We'll have a better idea of where they're at once
             | the GPU driver lands.
        
               | GeekyBear wrote:
                | That's not the word from the lead developer.
               | 
               | >That whole "optimized for macOS" thing is a myth. Heck,
               | we don't even have CPU deep idle support yet and people
               | are reporting 7-10h of battery runtime on Linux. With
               | software rendering running a composited desktop. No GPU.
               | 
               | https://twitter.com/marcan42/status/1498923099169132545
               | 
               | Also, from a couple of days ago, they got basic suspend
               | to work.
               | 
               | >WiFi S3 sleep works, which means s2idle suspend works!
               | 
               | https://twitter.com/marcan42/status/1582363763617181696
        
               | nicoburns wrote:
               | 7-10 hours is good, but that's still around half of what
               | these machines can do on macOS. And it's just common
               | sense that battery life with software rendering won't be
                | as good as when hardware acceleration is enabled.
        
               | tbrock wrote:
                | And better than any Windows laptop on the market with the
                | same performance profile.
        
               | snovv_crash wrote:
               | Ryzen 6800u is comparable in performance and power
               | consumption, on an older process node.
        
               | smoldesu wrote:
                | Is it still as fast when 60% of its CPU time is being
                | used to render a Retina desktop?
        
               | smoldesu wrote:
               | Software rendering simply stinks, doubly so if you're
               | running a fancy composited desktop. Regardless of CPU
               | speed or UI fluidity, your CPU and GPU processes are now
               | fighting for the same cycles, bottlenecking one another
               | in the worst way possible.
               | 
               | The M1 has a fairly good GPU, so there's hope that the
               | battery life and overall experience will improve in the
                | future. As of now though, I'd reckon there are dozens of
                | x86 Linux laptops that can outlast the M1 on Linux.
                | Pitting a recent Ryzen laptop against an Asahi MacBook
                | isn't even a fair fight.
        
               | coldtea wrote:
                | > _Software rendering simply stinks, doubly so if you're
               | running a fancy composited desktop. Regardless of CPU
               | speed or UI fluidity, your CPU and GPU processes are now
               | fighting for the same cycles, bottlenecking one another
               | in the worst way possible._
               | 
                | And yet, after using the first release, people have
                | reported that this is the smoothest they've ever seen a
                | Linux desktop run. That is, smoother even than Intel
                | Linux with hardware GPU acceleration.
        
               | smoldesu wrote:
               | If your desktop idles at 60% CPU utilization, I should
               | hope it's at least getting the frame timing right.
        
               | Wowfunhappy wrote:
               | Where are you getting this 60% number from?
        
               | the8472 wrote:
               | I'd hope that an idle desktop redraws ~nothing and so
               | doesn't waste any CPU cycles. And the GPU not being used
               | might even save power. So as long as it's idle it would
               | ideally consume less power, not more.
        
               | viraptor wrote:
                | That could have been true many years ago, but not
                | anymore. The GPU is way more efficient at putting many
                | bitmaps in the right place in the output. Even your mouse
                | cursor compositing is hardware accelerated these days,
                | because that's faster and more efficient. Doing it on the
                | CPU is wasted power.
        
               | [deleted]
        
               | smoldesu wrote:
               | If ARM had competitive SIMD performance, then we might be
               | seeing an overall reduction in power usage. The base ARM
               | ISA is excruciatingly bad at vectorized computation
               | though, so eventual GPU support seems like a must-have to
               | me.
        
               | Jweb_Guru wrote:
               | CPUs use significantly more power to perform the same
               | amount of computation that a GPU does, because they're
               | optimized for different workloads.
               | 
               | GPU input programs can be expensive to switch, because
               | they're expected to change relatively rarely. The vast
               | majority of computations are pure or mostly-pure and are
               | expected to be parallelized as part of the semantics.
               | Memory layouts are generally constrained to make tasks
               | extremely local, with a lot less unpredictable memory
               | access than a CPU needs to deal with (almost no pointer
               | chasing for instance, very little stack access, most
               | access to large arrays by explicit stride). Where there
               | _is_ unpredictable access, the expectation is that there
                | is a ton of batched work of the same job type, so it's
               | okay if memory access is slow since the latency can be
               | hidden by just switching between instances of the job
               | really quickly (much faster than switching between OS
               | threads, which can be totally different programs).
               | Branching is expected to be rare and not required to run
               | efficiently, loops generally assumed to terminate, almost
               | no dynamic allocation, programs are expected to use lower
               | precision operations most of the time, etc. etc.
               | 
               | Being able to assume all these things about the target
               | program allows for a quite different hardware design
               | that's highly optimized for running GPU workloads. The
               | vast majority of GPU silicon is devoted to super wide
               | vector instructions, with large numbers of registers and
               | hardware threads to ensure that they can stay constantly
               | fed. Very little is spent on things like speculation,
               | instruction decoding, branch prediction, massively out of
               | order execution, and all the other goodies we've come to
               | expect from CPUs to make our predominantly single
               | threaded programs faster.
               | 
               | i.e., the reason that GPUs end up being huge power drains
               | isn't because they're energy inefficient (in most cases,
               | anyway)--it's because they can often achieve really high
               | utilization for their target workloads, something that's
               | extremely difficult to achieve on CPUs.
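                | 
                | (A minimal Swift sketch of the two workload shapes
                | described above; the function names are illustrative,
                | not from the thread:)
                | 
                |   // GPU-shaped work: pure, every element independent,
                |   // strided array access. This maps directly onto wide
                |   // vector lanes, and latency hides behind sheer width.
                |   func saxpy(_ a: Float, _ x: [Float], _ y: inout [Float]) {
                |       for i in x.indices {
                |           y[i] += a * x[i]  // no iteration depends on another
                |       }
                |   }
                | 
                |   // CPU-shaped work: pointer chasing. Every load depends
                |   // on the previous one, so wide vectors can't help and
                |   // memory latency can't be hidden by switching jobs.
                |   final class Node {
                |       var value: Int
                |       var next: Node?
                |       init(_ value: Int) { self.value = value }
                |   }
                | 
                |   func sum(_ head: Node?) -> Int {
                |       var total = 0
                |       var node = head
                |       while let n = node {
                |           total += n.value
                |           node = n.next
                |       }
                |       return total
                |   }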
        
               | smoldesu wrote:
               | > it's because they can often achieve really high
               | utilization for their target workloads, something that's
               | extremely difficult to achieve on CPUs.
               | 
               | This part here 100x. It's worth noting that the SIMD
               | performance of the M1's GPU at 3w is probably better than
               | the M1's CPU running at 15w. It's simply because the GPU
                | is accelerated for that workload, and a necessary
               | component of a functioning computer (even on x86).
               | 
               | The particularly damning aspect here is that ARM is truly
               | awful at GPU calculations. x86 is too, but most CPUs ship
               | with hardware extensions that offer redundant hardware
               | acceleration for the CPU. At least x86 can sorta
               | hardware-accelerate a software-rendered desktop. ARM has
               | to emulate GPU instructions using NEON, which yields
               | truly pitiful results. The GPU is a critical piece of the
               | M1 SOC, at least for full-resolution desktop usage.
        
           | rnk wrote:
            | Thanks. My use case is normal Linux use, but I need to
            | occasionally run _x86_ code in a Linux VM. So (1) running
            | Linux, (2) running a different-arch VM, (3) some GUI apps.
            | 
            | I have already been reading about the state of running x86
            | VMs on macOS on an M1; it is slow, but my friends claim it's
            | usable. Add to that an early Linux implementation on a native
            | M1, then an x86 VM.
            | 
            | Why am I torturing myself? I want to get back to having a
            | great fanless system like I used to have on a Google Pixel
            | laptop, which had an underclocked x86 chip, but running
            | without a fan was great.
        
             | DenseComet wrote:
              | macOS Ventura will let you run x86 code in an ARM Linux VM
              | using a Linux version of Rosetta 2, which should fix the
              | speed issues.
              | 
              | https://developer.apple.com/documentation/virtualization/run...
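              | 
              | (For the curious: a minimal sketch of the host-side setup
              | with Apple's Virtualization framework in Swift, per the
              | linked docs. The function and error names here are
              | illustrative, not Apple's:)
              | 
              |   import Virtualization
              | 
              |   struct RosettaUnavailable: Error {}
              | 
              |   // Expose Rosetta to a Linux guest as a virtiofs
              |   // directory share tagged "ROSETTA".
              |   func attachRosetta(to config: VZVirtualMachineConfiguration) throws {
              |       // Rosetta must be installed on the macOS host.
              |       guard VZLinuxRosettaDirectoryShare.availability == .installed
              |       else { throw RosettaUnavailable() }
              | 
              |       let share = try VZLinuxRosettaDirectoryShare()
              |       let device = VZVirtioFileSystemDeviceConfiguration(tag: "ROSETTA")
              |       device.share = share
              |       config.directorySharingDevices.append(device)
              |   }
              | 
              | Inside the guest you then mount the share as a virtiofs
              | filesystem and register the Rosetta binary with
              | binfmt_misc, as described in the Apple documentation.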
        
             | wtallis wrote:
             | If you need to run x86 Linux software on Apple Silicon, the
             | better option going forward will be to run an ARM Linux VM
             | and use Apple's Rosetta for Linux to run x86 software on
             | the ARM Linux, rather than having the kernel also running
             | under emulation.
             | 
             | Since Rosetta for Linux was coerced to run on non-Apple ARM
             | Linux systems almost as soon as it shipped in developer
             | preview builds of the upcoming macOS, it would not be
             | surprising if Asahi ends up making use of it outside of a
             | VM context (though that may not comply with the license).
        
               | mkurz wrote:
                | Can you provide links to what you write here? Thanks!
        
               | AndroidKitKat wrote:
               | Sibling comment, but this has links to Apple
               | documentation:
               | https://news.ycombinator.com/item?id=33290808
               | 
               | And this seems to be able to get it running:
               | https://github.com/diddledani/macOS-Linux-VM-with-Rosetta
               | 
               | I haven't tested this because I'm at work, but I'll
               | verify it when I get home!
        
               | wtallis wrote:
               | Previous HN thread:
               | https://news.ycombinator.com/item?id=31644990
        
         | mkurz wrote:
          | I have been using Arch/Asahi Linux on a new MacBook Pro M1 Pro
          | as my daily driver since July. I mainly use IntelliJ IDEA for
          | Java and Scala development in KDE (which is installed by
          | default if you choose the UI install). For IntelliJ you have
          | to download the aarch64 JBR yourself and set it in a config
          | file; however, the upcoming version 2022.3 will support
          | aarch64/ARM OSes out of the box... So far it works very, very
          | well. Rendering is done by the CPU, but the machine is so fast
          | it's not really an issue (at least compared to my old
          | Thinkpad). However, I can't wait for the GPU driver to arrive;
          | looking at the posted tweet, I guess that should happen within
          | the next few weeks. For Zoom calls (which I don't have that
          | often) I still have to switch to macOS right now, but AFAIK
          | USB 3 support should also work soon, so hopefully I can use my
          | Elgato Facecam soon. I can listen to music and watch YouTube
          | with my Pixel Buds Pro via Bluetooth, no problem. Suspending/
          | closing the lid doesn't freeze the userspace yet, but this is
          | already fixed and will arrive soon:
          | https://twitter.com/marcan42/status/1582363763617181696 In the
          | last 4 months I have only heard the fan once or twice, when
          | opening dozens of tabs that run aggressive ads (animations),
          | but with the upcoming GPU driver that will not be an issue
          | anymore. And by "hear" I mean like "what is this quiet
          | noise...? Oh, that's the MacBook's fans..." - when I start my
          | old Thinkpad, it's like an airplane on the runway preparing
          | for takeoff. Battery life is quite good, actually: this week I
          | took my MacBook out for lunch and worked in a coffee shop for
          | like 2.5-3 hours, doing Java/Scala development (but not much
          | compiling actually, just hacking in the IDE); afterwards I had
          | like 69% left. Also it barely gets hot - it does a bit
          | sometimes in the ads scenario above, but most of the time it
          | stays cool. It does get a bit hot when charging (which is
          | normal according to other users).
          | 
          | Personally, after a long time as a Thinkpad and Dell XPS user,
          | I am so happy with my MacBook, which is my first, that they
          | will have to try hard to win me back. And I am saying this as
          | a non-Apple user (I never had an iPad, MacBook, iPhone,
          | whatever). Asahi + MacBook Pro is really, really great, even
          | in its early stage, and IMHO it will be the killer device for
          | Linux enthusiasts in the upcoming years.
          | 
          | BTW: I bought the MacBook in April and gave macOS a serious
          | chance (after being a pure Linux user for 15 years). I tried
          | to set it up for my needs... But I gave up in July; I just
          | can't handle it, for me it's like a toy. It's good for people
          | (like my wife) who browse the internet and do some office
          | work, but for software devs who need to get shit done fast and
          | like to have an open system it's... kind of crap (just my
          | personal opinion).
        
           | biomcgary wrote:
           | My experience is almost exactly the same. The M1 Air is my
           | first Apple product. I don't like Mac OS after trying it (too
           | dev unfriendly), but love the hardware. I was only willing to
           | purchase the Air because of Asahi Linux (as a backup to
           | giving Mac OS a shot).
        
       | rowanG077 wrote:
        | Impressive! Is this the same test suite that was already run on
        | OSX using Alyssa's user-space driver, but now on Linux using the
        | Rust kernel driver?
       | 
       | If so that seems to be shaping up REALLY well!
        
       | schaefer wrote:
       | Sorry for the off topic question, but can a person dual boot Mac
       | OS and a native Linux install on the M1???
        
         | rvz wrote:
          | The correct, straightforward answer is: not yet. (The installer
          | is neither reliable nor stable.)
          | 
          | This is why everyone else (non-developers) is waiting for a
          | reliable release first, rather than trying it out, having
          | something go wrong, and having another person find out and
          | say: 'it's not ready for general use.'
        
           | torpv12 wrote:
           | Your first sentence is not accurate as, by default, the Asahi
           | installer sets up a separate partition for dual-boot
           | purposes. The Asahi developers have even recommended keeping
           | this configuration in order to receive future firmware
           | updates from Apple.
        
         | GeekyBear wrote:
          | Apple designed a new bootloader that enforces security settings
          | per partition and not per machine, so not only can you run a
          | third-party OS, but doing so does not degrade the security when
          | you boot the macOS partition.
        
           | glial wrote:
           | I had to read that three times to make sure I was reading
           | right. That's very...open...of them.
        
             | Gigachad wrote:
              | It really doesn't look like Apple has any intention of
              | locking the MacBooks down. They just don't care all that
              | much about making stuff compatible with other OSes; if
              | someone wants to put in the hard work to make Linux work,
              | there are no unreasonable obstructions.
        
             | sofixa wrote:
             | Yes, it's very un-Apple of them, which some are taking as
             | Apple being accepting and welcoming of other OSes, in
             | particular Linux, on Macbooks. Which is an optimistic take,
             | IMO, even if not unfounded.
        
         | thrtythreeforty wrote:
         | Yes. That's what happens by default when you run the Asahi
         | installer
        
       | yewenjie wrote:
       | Can somebody explain how much of the work on M1 GPU can be reused
       | for M2?
        
         | 3836293648 wrote:
         | The vast majority of it. She and Alyssa discussed it during
         | their talk at the Xorg conference.
        
           | unpopularopp wrote:
        
             | [deleted]
        
       | [deleted]
        
       | singhrac wrote:
        | Also, another drive-by question, but I was surprised that the
        | Apple TVs have A-series chips. Has anyone worked with Asahi on
        | the A-series? I assume the performance isn't great, but I'm
        | curious about the technical challenges.
        
         | dlivingston wrote:
         | AFAIK, the latest A-series chip to boot Linux is the A7/A8 (so,
         | iPhone 7-ish): https://konradybcio.pl/linuxona7/
        
           | my123 wrote:
           | A7-A11 via checkm8.
        
         | galad87 wrote:
          | The A-series devices don't support booting a third-party OS;
          | unless someone finds a security issue that can be exploited to
          | make them boot something else, it won't be possible to run
          | Linux on them.
        
           | GeekyBear wrote:
            | If you pick up an older device running on an A11 SOC or
            | earlier, they have an unpatchable vulnerability in the
            | bootloader code, so those devices could be repurposed without
            | worrying about a locked bootloader.
           | 
            | https://arstechnica.com/information-technology/2019/09/devel...
        
             | galad87 wrote:
              | I know, but for some reason my mind automatically thought
              | about the latest one, which contains an A15 :|
        
       | syntaxing wrote:
       | Does this mean there's a decent chance to get this working in
       | UTM? How much is this stuff transferrable to QEMU?
        
         | viraptor wrote:
          | For UTM the interesting tech is Venus / virgl
          | (https://www.collabora.com/news-and-blog/blog/2021/11/26/venu...).
          | As the other comments say, this work is not really applicable
          | to QEMU.
        
         | NobodyNada wrote:
         | I'm not an expert on this, but my understanding is that GPU
         | acceleration in virtual machines uses a "passthrough" method
         | where the graphics calls on the guest system are simply
         | translated to public APIs on the host system, and go through
          | macOS's GPU drivers as usual. VMware Fusion has had this
          | capability on M1 for about a month or two now.
         | 
         | This project is designed to provide an open-source replacement
         | for macOS's GPU drivers in order to boot Linux bare-metal, so
         | it solves an entirely different problem. Maybe some of the
         | knowledge will carry over -- especially the userspace side of
         | things Alyssa was working on -- but as far as I know this is
         | not automatically transferrable.
        
       | lamontcg wrote:
        | Are we any closer to eGPU support for Apple Silicon?
        
         | rollcat wrote:
         | Under macOS? Not until Apple acts, I guess.
        
           | nicoburns wrote:
            | Presumably on Linux this would "just work" (assuming drivers
            | for the IO ports are written)?
        
         | mulderc wrote:
         | I am skeptical that we will ever see that.
        
           | fragmede wrote:
            | Given that Thunderbolt 3 is basically PCIe, the only far-
            | fetched part would be having to write a third-party driver
            | for the card. Given the high-end businesses that exist in
            | this world to work with video (e.g. DaVinci on the iPad,
            | which was recently on the front page), combined with a demand
            | for the absolute highest-end GPU performance, I think there's
            | a business case for it. I wouldn't hold my breath waiting for
            | it to be open source though.
        
             | smoldesu wrote:
              | Considering how little commercial support there is for
              | basic GPU functionality on _regular_ Linux, it's wishful
              | thinking to expect a mysterious stranger to write this code
              | for you. Especially for black-box hardware.
        
       | 2bitencryption wrote:
       | * clean-room M1 GPU driver created by (seemingly) one dev in a
       | matter of months
       | 
       | * an excellent POC of Rust drivers on Linux
       | 
       | * the dev live-streamed almost every minute of development
       | 
       | * the dev is a virtual anime girl
       | 
       | So far I'm LOVING 2022.
        
         | robert_foss wrote:
          | Mesa lets OpenGL (ES) drivers share a lot of code. This doesn't
          | take anything away from this achievement, however.
        
         | [deleted]
        
         | rvz wrote:
        
           | apetresc wrote:
           | I can't tell if you're in on the joke or not, but for anyone
           | else who may be misled: they aren't. Proof:
           | https://twitter.com/marcan42/status/1582930401118806016
        
             | least wrote:
              | It doesn't matter if Asahi Lina is an avatar of Marcan or
              | not, and you're free to discourage people from speculating,
              | especially since it does not seem that Asahi Lina wishes to
              | disclose her real identity.
             | 
             | That does not mean that what you've provided is proof,
             | however.
        
             | rvz wrote:
             | That isn't proof of anything. It is a denial.
        
           | tpush wrote:
           | I don't think it really matters who's behind Lina.
        
         | foobarian wrote:
         | Having seen some of Rosenzweig's work and talks, I would say
         | "twenty devs".
        
           | pas wrote:
           | interesting! can you share some details?
        
             | skavi wrote:
             | I think they are implying Rosenzweig is as capable as 20
             | devs.
        
               | 64StarFox64 wrote:
               | Even 10x engineers are getting hit by inflation!
        
               | ansonhoyt wrote:
               | I think more code with fewer devs is deflationary. "As
               | productivity increases, the cost of goods decreases." [1]
               | 
               | [1]: https://en.wikipedia.org/wiki/Deflation
        
         | mdp2021 wrote:
          | > _So far I'm LOVING 2022_
         | 
         | Stoical irony.
         | 
          | Edit: Look: you can be excited by some tech advance, but no -
          | 2022 remains part of an ongoing global disaster. The above
          | quoted sentence is hard to take seriously. The best it
          | deserves is to be read as some twisted humour.
        
           | mechanical_bear wrote:
           | "Ongoing global disaster"
           | 
            | Life is good and only getting better :-) Not sure where this
            | "global disaster" is, but then again I'm not a Doomer.
        
           | a1369209993 wrote:
           | > 2022 remains part of an ongoing global disaster.
           | 
           | Which of the several dozen currently ongoing global disasters
           | are you referring to, and, bluntly, why should we care?
        
         | jancsika wrote:
         | > the dev live-streamed almost every minute of development
         | 
         | Is there an archive of this dev work?
        
           | tbodt wrote:
           | https://www.youtube.com/AsahiLina
        
             | amadvance wrote:
             | The key moment:
             | https://www.youtube.com/watch?v=X7cNeFtGM3k&t=35595s
        
         | mlindner wrote:
        
           | nicoburns wrote:
           | Well of course nobody is an actual virtual anime girl.
        
             | onepointsixC wrote:
              | Sure, but listening to even a minute of it is difficult. I
              | don't know if there's such a thing as an audio version of
              | the uncanny valley, but were such a concept to exist, this
              | would fall right in it for me. I don't have any issue with
              | naturally high-pitched voices, but I just can't listen to
              | this.
        
             | mlindner wrote:
             | Yeah but it makes the voice too high to listen to
             | comfortably.
        
               | prmoustache wrote:
               | Her video content is just unwatchable. Too bad as there
               | are probably interesting bits inside.
        
               | kccqzy wrote:
                | This is such a westernized perspective. If you were
                | Japanese, or if you just grew up watching anime, you'd
                | find those videos perfectly watchable.
        
               | Cyberdog wrote:
                | What a bizarre comment. Do you really think Japanese
                | people and anime fans find heavily roboticized voices
                | pleasant to listen to for long periods of time?
        
               | lvass wrote:
                | I grew up watching anime, and that's not the case. It's
                | far too cacophonous for me; anime does not sound like
                | that.
        
         | gpm wrote:
         | > by (seemingly) one dev
         | 
         | I think Alyssa Rosenzweig and Asahi Lina both did pretty
         | substantial work on this?
        
           | nicce wrote:
            | It is impressive. I remember reading something related to new
            | abstractions at the chip level. Most of the GPU driver is
            | actually on some specific chip, and it works similarly
            | regardless of the OS. Kernel developers actually develop
            | against this new abstraction layer and not directly against
            | the GPU. Does someone have more information about this, or
            | was this nonsense?
        
             | humanwhosits wrote:
              | It means that the drivers likely have a giant 'firmware'
              | which is really just a closed-source driver loaded into the
              | chip, but with a nicer API for the kernel to program
              | against.
        
           | dottedmag wrote:
           | GPU driver is mostly by Alyssa, DCP (display controller) is
           | mostly by Lina.
        
           | outworlder wrote:
           | That's even acknowledged in the tweet thread.
        
           | kelnos wrote:
           | I believe Rosenzweig was responsible for the user-space bits;
           | this post is about the kernel driver, which (as I understand
           | it) was done entirely by Asahi Lina.
           | 
           | (Both of them deserve a ton of praise, of course.)
        
             | nicoburns wrote:
             | I'm pretty sure this post is about both bits. I don't think
             | you can pass that test suite without both bits.
        
               | entropicdrifter wrote:
                | It was already passing that test suite as of their XDC
                | talk 10 days ago, when it was run using only Alyssa's
                | bit. This update shows that with both bits combined, it
                | now passes with the same level of conformance.
                | 
                | At least that's my understanding of it. You're right that
                | this is about both bits working together, but the person
                | you're replying to is right in the sense that this
                | particular update specifically shows progress in Lina's
                | bit.
        
             | daniel-cussen wrote:
        
       | _fizz_buzz_ wrote:
       | Do they get help from Apple? Is there documentation of the M1?
       | This is such a massive achievement!
        
         | leidenfrost wrote:
          | The only help is from the bootloader side and from Apple not
          | actively obscuring things.
          | 
          | Compared to other reverse-engineering projects, the M1 Macs are
          | not a moving target, and they don't have to encounter new
          | lockdowns every time, unlike other reverse-engineering
          | scenarios like the cracking scene or emulators.
        
           | lostlogin wrote:
           | > not having Apple actively obscuring things.
           | 
            | This seems quite a big deal for Apple - and that's said as a
            | long-time Apple user.
        
             | smoldesu wrote:
             | It's such a minor contribution that I always get a good
             | laugh watching people praise Apple for this. Microsoft, for
             | all the evil they do, contributes millions of SLOC to open
             | source along with generous funding for various non-selfish
             | projects. Facebook, for all the societal ills they cause,
             | also develops multiple projects in public even when it
             | would make more sense to withhold them. Apple, for all
             | their exploitation of the developer community, flips a
             | switch on the Macbook that enables parity with traditional
             | x86 BIOS functionality, and people start partying in the
             | street crying "this is for us!"
             | 
             | And since then, it's been radio silence from Cupertino HQ.
             | Nobody ever confirmed why they did this. Was it serendipity
             | or loving compassion? Nobody can say for sure, but if it's
             | the latter then Apple seems to be very conservative with
             | their blessings upon the community.
        
               | stormbrew wrote:
               | Apple contributes quite a lot to open source (WebKit,
               | llvm, clang in particular but there are other high volume
               | projects), though not so much in the Linux kernel. There
               | are some patches by Apple in Linux though and they've
               | made quite a lot of strides in expediting postbacks
               | internally over the last few years.
               | 
               | I left Apple partly over their poor oss policies, but
               | even I won't sign off on the idea that they don't do any
               | at all.
        
               | smoldesu wrote:
               | - WebKit is a co-opted KDE project that was open source
               | before Apple forked it (also LGPL, it would be illegal
               | for Apple to distribute WebKit if it wasn't GPL too)
               | 
               | - LLVM was also open source before Apple bought the core
               | developers
               | 
               | - If Clang wasn't open source then literally nobody would
               | ever use it
               | 
               | There are definitely a few spotty places where Apple
               | tosses up a CUPS patch for Linux or fixes a cross-
               | platform WebKit bug. Relative to the rest of FAANG,
               | though, Apple is unprecedented in their posturing against
               | open source.
        
         | Havoc wrote:
         | >Do they get help from Apple?
         | 
          | Indirectly. Apple made some changes in the past that were
          | likely geared towards helping them. It was on the kernel, not
          | the GPU, though, I think:
          | 
          | https://twitter.com/marcan42/status/1471799568807636994
          | 
          | I don't think they got active help in the "here's the docs"
          | sense.
        
       ___________________________________________________________________
       (page generated 2022-10-21 23:00 UTC)