[HN Gopher] Porting the Slint UI Toolkit to a Microcontroller wi...
       ___________________________________________________________________
        
       Porting the Slint UI Toolkit to a Microcontroller with 264K RAM
        
       Author : zdw
       Score  : 53 points
       Date   : 2023-04-08 00:44 UTC (22 hours ago)
        
 (HTM) web link (slint-ui.com)
 (TXT) w3m dump (slint-ui.com)
        
       | ilyt wrote:
        | It's funny to compare "modern" with the microcomputers of
        | old. The original Amiga 500 ran with 512 kB RAM, with the OS
        | requiring only 256 kB (as in "you could run something other
        | than just the OS on that").
       | 
       | Meanwhile we have this laggy mess...
        
         | gijou6 wrote:
          | The Amiga had a dedicated video chip (and it output analog
          | RGB signals, which are fairly cheap to generate).
         | 
          | This is a slow SPI bus, with the CPU needing to push
          | W x H x BPP bytes per frame. At 320x240 and 16 bpp that
          | comes out to about 9.2 million bytes/sec for 60 fps, or 4.6
          | million for 30 fps. The Cortex-M0 I believe takes 4 cycles
          | for a load or store, so even with a perfect parallel 16-bit
          | bus where you could do 1 load + 1 store to send a pixel,
          | that comes out to a best case of ~80 fps @ 100 MHz with
          | 100% CPU utilization (i.e. you could do nothing else on
          | that CPU, not even serve interrupts). Another core won't
          | help much because it shares the memory bus, and fill rate
          | is the bottleneck here.
         | 
          | There's a good reason we had dedicated chips for pushing
          | the framebuffer out to the display's physical pixels even
          | back in the '80s.
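The back-of-envelope math above can be checked with a short sketch. Assumptions are mine, for illustration: the ~80 fps best case falls out if each 16-bit pixel costs 16 cycles (a 4-cycle load plus a 4-cycle store for each of its two bytes); with a true 16-bit-wide transfer per pixel the ceiling would be twice that.

```rust
// Sketch of the fill-rate arithmetic in the comment above.
// Assumptions (mine): 320x240 panel, RGB565 (2 bytes/pixel), a
// 100 MHz core, 16 CPU cycles spent per pixel copied.
const WIDTH: u64 = 320;
const HEIGHT: u64 = 240;
const BYTES_PER_PIXEL: u64 = 2; // 16 bpp RGB565

/// Bytes per second needed to refresh the full panel at `fps`.
fn bandwidth_bytes_per_sec(fps: u64) -> u64 {
    WIDTH * HEIGHT * BYTES_PER_PIXEL * fps
}

/// Best-case frame rate when the CPU does nothing but copy pixels.
fn best_case_fps(cpu_hz: u64, cycles_per_pixel: u64) -> u64 {
    cpu_hz / (cycles_per_pixel * WIDTH * HEIGHT)
}

fn main() {
    // ~9.2 MB/s at 60 fps, ~4.6 MB/s at 30 fps.
    println!("60 fps: {} bytes/s", bandwidth_bytes_per_sec(60));
    println!("30 fps: {} bytes/s", bandwidth_bytes_per_sec(30));
    // ~81 fps best case at 100 MHz with 100% CPU utilization.
    println!("best case: {} fps", best_case_fps(100_000_000, 16));
}
```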
        
       | cryo wrote:
        | The Pico runs at 133 MHz. I don't know how Slint works under
        | the hood, but the demo shown (with the DMA speedup) could be
        | much snappier, imho. For that, the code needs to be aligned
        | with what the display's SPI protocol offers, instead of
        | treating it as a general-purpose framebuffer: for example,
        | while scrolling, sending only the part that becomes visible
        | rather than the whole area.
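The partial-update idea can be sketched roughly. This is an illustration of the principle, not Slint's actual code; `exposed_on_scroll` is a hypothetical helper.

```rust
// Sketch of the partial-update idea from the comment above: many SPI
// panels (e.g. ST7789-class controllers) can scroll their own display
// RAM, so after a scroll the CPU only needs to transmit the strip of
// pixels that just became visible, not the full frame.
// (Illustrative only -- not how Slint actually renders.)

#[derive(Debug, PartialEq)]
struct Rect {
    x: u16,
    y: u16,
    w: u16,
    h: u16,
}

/// Region newly exposed when a `w` x `h` screen scrolls down by `dy`
/// rows: only the bottom `dy` rows need to be pushed over SPI.
fn exposed_on_scroll(w: u16, h: u16, dy: u16) -> Rect {
    let dy = dy.min(h); // scrolling a full screen exposes everything
    Rect { x: 0, y: h - dy, w, h: dy }
}

fn main() {
    // A 10-row scroll on 320x240 sends 320*10 pixels, not 320*240.
    let dirty = exposed_on_scroll(320, 240, 10);
    println!("send {}x{} at ({}, {})", dirty.w, dirty.h, dirty.x, dirty.y);
}
```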
        
         | devbent wrote:
          | The Microsoft Band managed a fancier UI with only a 96 MHz
          | CPU, though it did have an FPU, which speeds things up
          | quite a bit.
          | 
          | Getting good throughput requires DMA, and the Band also
          | paid the price, dedicating tons of local SRAM to the frame
          | buffer. A local framebuffer that can be read back is needed
          | for antialiased fonts and alpha blending, not to mention
          | things like screen fades!
          | 
          | The Band also had real TrueType fonts, and ran at
          | 30 fps [1] with vsync!
         | 
         | I did a write-up of how at
         | https://meanderingthoughts.hashnode.dev/cooperative-multitas...
         | 
          | [1] The 30 fps cap was due to bandwidth to the display
          | controller; uncapped, the UI could run internally at around
          | 100 fps if it wasn't doing loads of text rendering.
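The readable-framebuffer point can be made concrete with the operation it enables. This is a generic RGB565 alpha blend, not code from the Band write-up: blending an antialiased glyph edge requires reading the destination pixel back, which a write-only display link cannot provide.

```rust
// Why alpha blending needs a *readable* local framebuffer: the blend
// reads the background pixel, which a write-only SPI panel can't give
// you. Generic RGB565 blend sketch, not code from the Band write-up.

/// Blend `fg` over `bg` (both RGB565) with 8-bit coverage `a`
/// (255 = fully opaque foreground).
fn blend_rgb565(fg: u16, bg: u16, a: u16) -> u16 {
    // Unpack the 5/6/5 channels.
    let (fr, fgr, fb) = (fg >> 11, (fg >> 5) & 0x3f, fg & 0x1f);
    let (br, bgr, bb) = (bg >> 11, (bg >> 5) & 0x3f, bg & 0x1f);
    // Per-channel linear interpolation; max product 63*255 fits u16.
    let r = (fr * a + br * (255 - a)) / 255;
    let g = (fgr * a + bgr * (255 - a)) / 255;
    let b = (fb * a + bb * (255 - a)) / 255;
    (r << 11) | (g << 5) | b
}

fn main() {
    // White glyph pixel at ~50% coverage over black -> mid grey.
    println!("{:#06x}", blend_rgb565(0xffff, 0x0000, 128));
}
```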
        
           | solarkraft wrote:
           | Little side note: Props for the regard for UI quality.
           | 
           | > At launch, the Microsoft Band dropped fewer frames than an
           | Apple Watch
           | 
           | > if any module took more than 2ms before it returned, a
           | crash dump was created and the code was investigated and
           | optimized.
           | 
           | If only Microsoft cared even half as much about the UI in its
           | other products. Visual Studio still hangs up the main thread
           | for many seconds.
           | 
            | (side side note: touch input/output latency is a field
            | Microsoft is actually remarkably good at, I'm guessing
            | because they did some research decades ago on how much it
            | matters. If only they had also researched how glaring UX
            | defects impair the overall usage of a product ...)
        
       | sitzkrieg wrote:
        | the "if we can port it to the constrained pico we can make
        | it run on any mcu" bit made me go "wat". the pico is a
        | pretty large mcu, really.
        | 
        | i get that displays are not the realm of single-digit-cent
        | 4-bitters or something, but i can think of other, more
        | constrained mcus with probably still enough flash and a fast
        | enough SPI bus and dma that would make a much better showing.
        | 
        | i guess i feel the pico is overkill, but at any rate this is
        | mostly unfair because it's not the toolkit's target market in
        | the first place
        
         | MrBuddyCasino wrote:
          | It's weird they don't mention RAM and code sizes. If they
          | require just two lines (240 x 2 bytes x 2), that's less
          | than 1K for the pixels, but the code size can't be that
          | small if they compile in fonts etc.
          | 
          | I feel people who don't usually work with MCUs think an
          | RP2040 or ESP32 is a crazily constrained environment, when
          | those are really rather luxurious. I'm not sure how much it
          | matters - maybe it's like complaining about Electron while
          | Slack (deservedly) succeeds.
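The size estimate above checks out, using the comment's own numbers: two double-buffered scan lines of a 240-pixel-wide RGB565 display.

```rust
// Checking the comment's estimate: a line-by-line renderer that
// double-buffers scan lines of a 240-pixel-wide RGB565 display.
const LINE_WIDTH: usize = 240; // pixels per scan line (comment's figure)
const BYTES_PER_PIXEL: usize = 2; // RGB565

/// RAM needed for `num_lines` scan-line buffers.
fn line_buffer_bytes(num_lines: usize) -> usize {
    LINE_WIDTH * BYTES_PER_PIXEL * num_lines
}

fn main() {
    // 240 * 2 * 2 = 960 bytes -- indeed under 1 KiB for the pixels.
    println!("{} bytes", line_buffer_bytes(2));
}
```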
        
       | karmicthreat wrote:
       | Does this offer me much over LVGL?
        
         | tronical wrote:
          | A typed and tooled (LSP, live preview, etc.) DSL for the
          | UI, plus Rust and C++20 APIs, compared to C APIs.
        
       | detrites wrote:
        | If you get to the inlined video and think it looks a little
        | sluggish, keep reading - they later implemented DMA to speed
        | it up. Here's the video showing the result:
        | https://youtube.com/watch?v=dkBwNocItGs
        
         | KRAKRISMOTT wrote:
         | It's slow even with DMA. The original iPhone was snappier than
         | this.
        
           | wmf wrote:
           | The original iPhone was far more powerful than this hardware.
        
             | abujazar wrote:
             | The original Mac was also snappier.
        
       | avx56 wrote:
       | (2022)
        
       ___________________________________________________________________
       (page generated 2023-04-08 23:00 UTC)