[HN Gopher] Architecture of Lisp Machines (2008) [pdf]
       ___________________________________________________________________
        
        
       Author : todsacerdoti
       Score  : 76 points
       Date   : 2021-07-02 18:30 UTC (4 hours ago)
        
 (HTM) web link (www.cs.utah.edu)
 (TXT) w3m dump (www.cs.utah.edu)
        
       | aliasEli wrote:
       | Lisp machines were an interesting idea. Unfortunately they were
       | very expensive and fairly slow compared to other machines at the
       | time.
        
         | mark_l_watson wrote:
         | My Xerox 1108 was reasonably fast, even updating it from
         | InterLisp D to Common Lisp.
         | 
          | I now live in a combination of SBCL+Emacs+SLIME and also
          | LispWorks Pro. For newbies who want to learn a Lisp, I point
          | them to Racket.
        
         | retrac wrote:
         | Lisp machines weren't slow; the original CADR of the late 70s
         | ran at around 1 MIPS on 32 bit data with up to 8 MB of RAM,
         | making it about as fast as the VAX 780. The VAX was a large
         | minicomputer released in 1977 and one of the fastest machines,
         | short of a high-end mainframe, at the time. A Lisp machine also
         | cost about as much as a VAX (but for a single user).
         | 
          | The problem was perhaps, aside from a $50,000 personal
          | computer being hard to sell, that even on such generous
          | hardware with specialized support, Lisp was still a rather
          | hefty language, particularly with the more naive compilation
          | techniques of the 70s and early 80s and after adding a
          | fairly sophisticated operating environment.
        
           | rjsw wrote:
           | The CADR used basically the same chips as a VAX 11/780.
        
         | jampekka wrote:
         | I.e. worse is better.
        
         | bitwize wrote:
          | They were fast compared to contemporary machines
          | (minicomputers like the PDP-10). What happened was:
          | powerful micros came out, and the technology in those and in
          | Lisp compilers for those machines eventually surpassed the
          | LispM architecture in speed. Complacency and mismanagement
          | at companies like Symbolics meant the LispM architecture
          | never caught up, even when it moved to a microprocessor
          | architecture in the 80s.
        
           | pfdietz wrote:
           | The single most important trick I remember for Lisp on stock
           | hardware was implementing pointers to cons cells as pointers
           | to the next byte, and doing car/cdr by -1(reg) and 3(reg) (or
           | 7(reg) on a 64 bit machine). This automatically traps on non-
           | conses without any extra cost.
        
         | lispm wrote:
         | Actually they were not slow compared to other machines.
         | Initially they were developed to replace minicomputers
         | (https://en.wikipedia.org/wiki/Minicomputer) as machines for
         | Lisp programmers.
         | 
          | Instead of sharing one minicomputer having 8 MB RAM (or
          | less) with tens or hundreds of users, the Lisp programmer
          | had a Lisp Machine as a first personal workstation with GUI
          | (1981 saw the first commercial Lisp Machine systems, before
          | SUN, Lisa, Macs, etc.) - thus the Lisp programmer did not
          | have to compete with many other users for scarce memory.
          | Often Lisp
         | programmers had to work at night when they had a minicomputer
         | alone - a global garbage collection would make the whole
         | machine busy and response times for other users were impacted,
         | up to making machines unusable for longer periods of time. When
         | I was a student I got 30 minutes (!) CPU time for a half year
         | course on a minicomputer (DEC10, later VAX11/780).
         | 
         | So for a Lisp programmer their personal Lisp Machine was much
         | faster than what he/she had before (a Lisp on a time-shared
         | minicomputer). That was initially an investment of around $100k
         | per programmer seat then.
         | 
         | Later clever garbage collection systems were developed, which
         | enabled Lisp Machines to practically use large amounts of
         | virtual memory. For example: 40 MB physical RAM and 400 MB
         | virtual memory. This enabled the development of large
         | applications. Already in the early 80s, the Lisp Machine
          | operating system was in the range of one million lines of
         | object-oriented Lisp code.
         | 
         | The memory overhead of a garbage collected system increased
         | prices compared to other machines, since RAM and disks were
         | very expensive in the 80s.
         | 
         | A typical Unix Lisp system was getting cheap fast, though the
         | performance of the Lisp application might have been slower.
         | Note that there is a huge difference between the speed of small
         | code (a drawing routine) and whole Lisp applications (a CAD
         | system). Running a large Lisp-based CAD system (like ICAD) at
         | some point in time was both cheaper and faster on Unix than a
          | Lisp Machine. But that was not initially the case, since the
          | Unix
         | machines usually had no (or only a primitive) integration of
         | the garbage collector with the virtual memory system. Customers
         | at that time were then already moving to Unix machines. New
         | Lisp projects were also moving to Unix machines. For example
         | the Crash Bandicoot games were developed on SGIs with Allegro
          | Common Lisp. Earlier some game content was even developed on
         | Symbolics Lisp Machines - the software later was moved to SGIs
         | and even later to PCs. Still a UNIX based system like a SUN
         | could cost $10k for the Lisp license and $40k for a machine
         | with some memory. Often users later bought additional memory to
         | get 32MB or even 64MB. I had a Mac IIfx with 32MB RAM and
         | Macintosh Common Lisp - my Symbolics Lisp Machine board for the
         | Mac had 48MB RAM with 40bits and 8bit ECC.
         | 
          | Currently a Lisp Machine emulator on an M1 Mac is roughly
          | 1000
         | times faster than the hardware from 1990 which had a few MIPS
         | (million instructions per second). The CPU of a Lisp Machine
          | then was as fast as a 40 MHz 68040. New processor
          | generations were under development, but potential customers
          | moved away - especially as the AI winter caused an implosion
          | of a key market: AI software.
         | 
         | For an article about this topic see:
         | http://pt.withington.org/publications/LispM.html
         | 
         | "The Lisp Machine: Noble Experiment Or Fabulous Failure?"
        
           | eschaton wrote:
           | They were (are) slow, though. By 1990, workstations a tenth
           | the price were just as fast, and while Symbolics was trying
           | to scale Ivory past 14 MHz, RISC CPUs were rapidly
           | approaching 100 MHz and CISC CPUs were heading that way too.
           | And Coral, Gold Hill, and Lucid all showed that modern
           | general purpose CPUs could run good Lisp environments well.
           | 
           | My Symbolics systems are elegant, don't get me wrong. But
           | Genera wouldn't have been any less elegant if they'd taken
           | their 80386+DOS deployment environment (CLOE) and used it as
            | the basis for a true 80386 port of Genera. They were so
            | stuck on being better than everyone else at designing
            | hardware for Lisp that they missed that special hardware
            | was no longer needed.
        
             | bitwize wrote:
             | In fact, one of the "Lisp machine on a budget" options of
             | the mid 80s was the Hummingboard -- a 386 with many
             | megabytes of RAM on an ISA card, specifically commissioned
             | and built to work with Golden Common Lisp.
        
             | lispm wrote:
              | 1990 was already 10 years after the first machines had
              | some wider availability. 'Wider availability' means more
              | than 20 hand-made machines and having commercial vendors
              | (LMI and Symbolics, then TI and Xerox). Yeah, Lucid was
              | pretty nice - too bad they went under when their
              | investment into C++ killed them.
             | 
              | Actually I think Lucid was founded because Symbolics
              | did not want to invest further in a UNIX-based
              | implementation. Symbolics did support SUNs with Lisp
             | Machine boards (the UX400 and UX1200). TI had Lisp Machines
             | with UNIX boards.
             | 
             | Later Symbolics developed a virtual Lisp Machine running
             | Open Genera (a version of their Genera operating system)
             | for the 64bit DEC Alpha chip on top of UNIX.
             | 
             | "The Symbolics Virtual Lisp Machine Or Using The Dec Alpha
             | As A Programmable Micro-engine"
             | 
             | http://pt.withington.org/publications/VLM.html
        
           | Zelphyr wrote:
           | Do you have any recommendations for a Lisp Machine emulator
           | for Mac?
        
             | rjsw wrote:
             | As well as the ones lispm has described there are emulators
             | for MIT CADR, LMI and TI Lisp Machines. The LMI one [1] is
             | the most complete of these.
             | 
             | [1] https://github.com/dseagrav/ld
        
             | lispm wrote:
             | The Interlisp-D system from Xerox/... is available:
             | https://interlisp.org . Expect a real parallel universe.
             | Even for a Lisp programmer this will challenge what one
             | expects from a development system.
             | 
              | The Symbolics system is only available as pirated and
              | slightly buggy software for Linux (also in a VM running
             | Linux). A better version exists, but that one is only
             | available in limited commercial form. It's another parallel
             | universe from 30 years ago. Most development basically
             | stopped mid 90s.
        
               | jacquesm wrote:
               | In a way that's great: it will be lightning fast compared
               | to running on the original hardware (many orders of
               | magnitude) and it won't be affected by all the bloat that
               | they didn't tack on during the last 30 years.
        
         | peter303 wrote:
          | Special-purpose CPUs ran faster than general-purpose ones.
          | However they had upgrade cycles of 3-5 years, compared to
          | 1/2 to 1 year for commodity chips. The commodity chip almost
          | always caught up in the meantime at a lower cost. My
          | research group bought array processors, fine-grained
          | processors like MasPar and Thinking Machines, and mini-
          | supercomputers like Convex, and this catch-up happened every
          | time. LISP firmware on general CPUs caught up with custom
          | hardware like Symbolics too.
         | 
         | Very large customer bases like Nvidia can have annual design
         | releases and keep up.
        
           | zozbot234 wrote:
           | > The commodity chip almost always caught up in the meantime
           | at a lower cost.
           | 
           | This dynamic is dead now, thanks to the slowing down of
           | Moore's Law. We're even seeing a resurgence of special-
           | purpose hardwired accelerators in CPU's, because "dark
           | silicon" (i.e. the practical death of Dennard scaling) opens
           | up a lot of opportunity for hardware blocks that are only
           | powered up rarely in a typical workload. That's not too
           | different from what the Lisp machines did.
        
       | gumby wrote:
        | The lisp implementations described here had a small number of
        | expensive runtime costs (otherwise lisp can, if you wish, be
        | compiled into very fast code).
       | 
        | One was the cost of the gc memory barrier (cleverly managed
        | for commodity hardware by using the mmu and toggling the
        | write bits, I think thought up by Sobalvarro). I think a
        | slightly more sophisticated trick could be done with some
        | extra TLB hardware to generalize this for generational
        | collectors for any gc language, say Java. Another smart trick
        | would be to skip transporting unless fragmentation got too
        | bad. In a modern memory model compaction just isn't what it
        | used to be.
       | 
        | A second one is runtime type analysis. With the RISC-V spec
        | supporting tagged memory this could be sped up tremendously
        | for Lisp, Python et al. Is anyone fabbing chips with that
        | option?
       | 
       | The nice thing today is that a lot of languages are revisiting
       | ideas originally shaken out by lisp, so speeding those languages
       | up can speed up Lisp implementations too.
       | 
        | PS: I wish this article had mentioned the KA-10 (first
        | PDP-10), which was really the first machine designed with
        | Lisp in mind and with an assembly language that directly
        | implemented a number of lisp primitives.
        
         | ekez wrote:
         | What are the advantages of tagged memory as opposed to using
         | the unused bits in a regular pointer as a tag?
        
           | retrac wrote:
            | Usually, tagged memory at the hardware level goes along
            | with support for tagged operations in the processor.
            | 
            | In the LISP machines, for example, you had an add
            | instruction which would happily work correctly on
            | pointers, floats, and integers depending on the data
            | type, at the machine code level. That offers safety and
            | also makes the compiler simpler.
           | 
            | But where this really shines is in things like, well,
            | lists: since the tags can distinguish atoms from pairs and
            | values like nil, fairly complex list-walking operations
            | can be done in hardware, and pretty quickly too. It also
            | makes hardware implementation of garbage collection
            | possible.
           | 
           | This is just my intuition, but I suspect, these days, it all
           | works out to about the same thing in the end. You use some of
           | the cache for code that implements pointer tagging, or you
           | can sacrifice some die area away from cache for hardwired
           | logic doing the same thing. It probably is in the same
           | ballpark of complexity and speed.
        
             | gumby wrote:
              | In addition to what retrac wrote, these tag bits would
              | apply to immediates as well, not just pointers.
        
       | analognoise wrote:
       | I don't suppose anyone wants to team up to try to build one for
       | an FPGA?
       | 
       | I know it's been done, but it sounds like fun.
        
         | Rochus wrote:
         | https://github.com/lisper/cpus-caddr
        
       ___________________________________________________________________
       (page generated 2021-07-02 23:00 UTC)