[HN Gopher] Temporal Programming, a new name for an old paradigm
       ___________________________________________________________________
        
       Temporal Programming, a new name for an old paradigm
        
       Author : beefman
       Score  : 128 points
       Date   : 2022-11-27 19:46 UTC (9 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | mrkeen wrote:
       | > Temporal programs, as I am attempting to define them, are of
       | the form x' = f(x) - a recurrence relation.
       | 
       | Functions.
       | 
       | > x = x + 1 would not be valid, while x' = x + 1 would be.
       | 
        | Yep.
        | 
        |     void tick() {
        |         a@ = b + 1;
        |         b@ = a + 1;
        |     }
       | 
       | Void, oh no!
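        | 
        | A minimal Python sketch of those tick semantics (names and
        | details assumed, just for illustration): every read inside the
        | tick sees the pre-tick values, and every @-write lands in the
        | next state, committed when the tick ends.

```python
# Hypothetical sketch: reads use only the old state; @-writes build the
# next state, which replaces the old state atomically after the tick.
def tick(state):
    return {
        "a": state["b"] + 1,  # a@ = b + 1 (reads old b)
        "b": state["a"] + 1,  # b@ = a + 1 (reads old a)
    }

state = {"a": 0, "b": 10}
print(tick(state))  # {'a': 11, 'b': 1}
```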
        
         | a1369209993 wrote:
         | > Void, oh no!
         | 
         | void is the unit type, not the uninhabited type. (I'm really
         | not sure what Haskell was smoking when it messed that[0] up,
         | but it's probably too late to fix now.)
         | 
         | 0: Calling it Data.Void rather than something like
         | Data.Noreturn or whatever.
        
           | wnoise wrote:
           | In C void is the unit type. I'm not sure why we should
           | respect C's choice of nomenclature, when the obvious thing to
           | call it is Unit, and an obvious thing to call an uninhabited
           | type is Void. Haskell's choice is fine.
        
           | louthy wrote:
           | Void is the uninhabited type, Unit is the singleton type.
           | Void != Unit
        
             | a1369209993 wrote:
             | No, as previously mentioned, `void` is the singleton type
             | (the same type haskell calls `()`[0], and some other
              | languages call `Unit`). Eg:
              | 
              |     void foo(int x) {
              |         printf("%i\n", x);
              |         return; /* <- inhabited */
              |     }
             | 
             | 0: Give or take some pedantry about `TYPE 'LiftedRep`.
        
       | dl7 wrote:
       | > Programs that adhere to these tenets should in theory be far
       | more optimizable than what would be possible in a procedural
       | language, as the compiler is essentially given the entire
       | evaluation graph for the program up front and can completely
       | reschedule evaluations in whatever order is most efficient. They
       | should also be more amenable to formal TLA-type proofs since
       | their computation model is much closer to what TLA+ expects.
       | 
       | This sounds a lot like graph computation but on an extremely
       | granular level.
        
       | blowski wrote:
        | Fascinating how this sounds - to different people - like event
        | sourcing, functional programming, and "software transactional
        | memory".
       | 
       | Are people interpreting using their own experience, or is there a
       | big overlap?
        
         | greenbit wrote:
         | To a hammer, everything looks like a nail?
        
         | marcosdumay wrote:
          | To me, it sounds a lot like the part of functional reactive
          | programming that nobody bothers to explain or specify.
          | 
          | But, well, I'm not really sure people mean that when they talk
          | about it, because it's left unspecified. And when people
          | implement FRP they always seem to take obvious "shortcuts" with
          | significant downsides that avoid implementing something like
          | this.
        
         | thwayunion wrote:
         | There's a big and historical overlap. Now these are all
         | distinct topics of study, but in the 80s when these ideas
        | originated there were precious few CS departments. Almost all of
         | the software scientists knew each other and were kind of
         | swimming in the same primordial soup of ideas.
        
         | mrkeen wrote:
         | Event sourcing is one way to approximate functional programming
         | in the large.
         | 
         | STM provides (compositional) thread-safety within a single
         | process. It requires FP but other than that I don't see the
         | comparison.
        
       | scrubs wrote:
        | I'm liking this post a lot. The write-up is good. Gonna fork it.
       | Well done aappleby
        
       | 314 wrote:
       | Some other names that I've encountered for these kinds of
       | programming constructs:
       | 
       | * simulation variables (presumably because the separation into
       | x/x' is similar to the now/next calculation of state in a lot of
       | simulations).
       | 
       | * SSA (although there is a subtle difference as x' becoming x in
       | the next period isn't really SSA without some extra edges in the
       | graph).
       | 
       | I wonder how phi nodes would fit into the author's scheme?
        
         | mrkeen wrote:
         | Can't speak for the author but I believe 'phi nodes' map to
         | 'join points' in function land:
         | 
         | https://www.microsoft.com/en-us/research/publication/compili...
        
       | avodonosov wrote:
       | I once glanced through some explanation of monads by Simon Peyton
       | Jones. He said that in ALGOL specification, the semicolon
       | operator is formally defined as taking the current state of the
       | world, transforming it by the statements preceding the semicolon,
       | and then arranging for the program to continue execution in this
       | new world.
       | 
       | So John Backus not only suggested "Applicative State Transition"
       | in his paper. He participated in ALGOL creation. Which influenced
       | most contemporary languages.
       | 
       | We have this style of programming in our Pascal and Java
       | programs.
       | 
       | Please no new names. Semicolon is semicolon.
       | 
       | Sorry, I can't find this blog or paper by Peyton Jones. Leaving
       | that as an exercise for the reader :) And of course, I can
       | misremember something.
        
         | nerdponx wrote:
         | Now if we could only statically reason about updates to the
         | world state...
        
       | pubby wrote:
       | This sounds a lot like software transactional memory (STM)
       | combined with static single assignment form (SSA).
       | 
       | The idea behind STM is to specify some block of code as being one
       | big fat atomic operation, and then the runtime will magically
       | make it so. The idea behind SSA form is to only assign variables
       | once, as doing so makes imperative optimization as easy as
       | functional.
       | 
       | I wonder if the author knows about these things, or if they're
       | coming in at a different angle.
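        | 
        | A tiny SSA-flavoured illustration (hypothetical, not from the
        | article): instead of mutating one variable, each assignment
        | introduces a fresh version, much like the article's x/x' split.

```python
# Hypothetical sketch: SSA-style renaming of an imperative sequence.
# Each "assignment" binds a new name, so every name is assigned once.
def program(x0):
    x1 = x0 + 1   # x = x + 1  becomes  x1 = x0 + 1
    x2 = x1 * 2   # x = x * 2  becomes  x2 = x1 * 2
    return x2

print(program(3))  # 8
```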
        
         | parentheses wrote:
         | The proposal here seems to make the idea of STM more natural by
         | creating a language that keeps to that core concept.
        
       | mjburgess wrote:
       | As far as I can see this is just pure functional programming.
       | 
        | The IO type in haskell is defined:
        | 
        |     newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
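        | 
        | A rough Python analogue of that definition (all names here are
        | hypothetical): an IO action is just a pure function from a world
        | value to a pair of (new world, result), sequenced by threading
        | the world through.

```python
# Hypothetical sketch: actions as pure world-transforming functions.
def put_line(s):
    def action(world):
        return world + [s], None  # world modelled as a list of output lines
    return action

def bind(m, f):
    # sequence two actions by threading the world value through both
    def action(world):
        world2, a = m(world)
        return f(a)(world2)
    return action

prog = bind(put_line("hello"), lambda _: put_line("world"))
world, _ = prog([])
print(world)  # ['hello', 'world']
```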
        
         | dustingetz wrote:
          | I think the cycle needs to be looped; see MonadFix and
          | ArrowLoop, and also the recursive do extension, which is all
          | connected with FRP.
        
         | marcosdumay wrote:
          | It's not, because the article is feeding several commands into
          | the same RealWorld value before changing into the next, and
          | feeding a lot of commands again.
        
       | not2b wrote:
       | Nonblocking assignments in hardware description languages (like
       | Verilog or VHDL) work a lot like this.
       | 
       | For example
       | 
        |     always @(posedge clk) begin
        |         A <= B;
        |         B <= A;
        |     end
       | 
       | swaps A and B on each positive clock edge.
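        | 
        | A sketch of those nonblocking-assignment semantics in Python
        | (names hypothetical): all right-hand sides are sampled first,
        | then every register updates together.

```python
# Hypothetical sketch: sample all old values, then commit all updates
# as one step -- the essence of Verilog's nonblocking assignment.
def posedge_clk(regs):
    a, b = regs["A"], regs["B"]   # sample old values
    return {"A": b, "B": a}       # A <= B; B <= A commit together

regs = {"A": 1, "B": 2}
print(posedge_clk(regs))  # {'A': 2, 'B': 1}
```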
        
       | kazinator wrote:
       | Why wait for a language; here is a macro:
       | 
       | Firstly, we define a place (temporal x y) such that when this
       | place is accessed, it denotes x, and when it is assigned, the
        | assignment goes to y:
        | 
        |     (defplace (temporal-place var var*) body
        |       (getter setter
        |         ^(macrolet ((,getter () ,var)
        |                     (,setter (val) ^(set ,',var* ,val)))
        |            ,body))
        |       (setter
        |         ^(macrolet ((,setter (val) ^(set ,',var* ,val)))
        |            ,body)))
        | 
        |     (define-place-macro temporal (var var*)
        |       ^(temporal-place ,var ,var*))
        | 
        |     (defmacro temporal (var var*) var)
       | 
       | We can then use that as a building block whereby we define an
       | apparent variable v as a symbol macro which expands to (temporal
       | vold vnew). All accesses to v get the value of vold, but
       | assignments to v do not affect vold; they go to vnew.
       | 
       | Here is a let-like macro which sets up the specified variable as
       | temporal. The initial values propagate into both the old and new
       | location.
       | 
        | Then, before the last form of the body is evaluated, the new
        | values are copied to the old values.
        | 
        |     (defmacro temporal-let (vars . body)
        |       (let ((olds (mapcar (ret (gensym)) vars))
        |             (news (mapcar (ret (gensym)) vars)))
        |         ^(let* (,*(zip olds [mapcar cadr vars])
        |                 ,*(zip news olds))
        |            (symacrolet ,(zip [mapcar car vars]
        |                              [mapcar (op list 'temporal) olds news])
        |              ,*(butlast body)
        |              (set ,*(weave olds news))
        |              ,*(last body)))))
       | 
        | Test, e.g.:
        | 
        |     1> (temporal-let ((x 1) (y 2))
        |          (set x y)
        |          (set y x)
        |          (list x y))
        |     (2 1)
       | 
        | The expansion is:
        | 
        |     2> (expand '(temporal-let ((x 1) (y 2))
        |                   (set x y)
        |                   (set y x)
        |                   (list x y)))
        |     (let* ((#:g0024 1)
        |            (#:g0025 2)
        |            (#:g0026 #:g0024)
        |            (#:g0027 #:g0025))
        |       (sys:setq #:g0026 #:g0025)
        |       (sys:setq #:g0027 #:g0024)
        |       (progn (sys:setq #:g0024 #:g0026)
        |              (sys:setq #:g0025 #:g0027))
        |       (list #:g0024 #:g0025))
       | 
        | Let's see how it compiles:
        | 
        |     2> (disassemble
        |          (compile-toplevel '(temporal-let ((x 1) (y 2))
        |                               (set x y)
        |                               (set y x)
        |                               (list x y))))
        |     data:
        |         0: 1
        |         1: 2
        |     syms:
        |         0: list
        |     code:
        |         0: 20020002 gcall t2 0 d1 d0
        |         1: 04010000
        |         2: 00000400
        |         3: 10000002 end t2
        |     instruction count: 2
        |     #<sys:vm-desc: 903ec50>
       | 
        | Oops, it got compiled to a single call instruction invoking (list
        | 2 1). Let's try _opt-level_ zero:
        | 
        |     1> (let ((*opt-level* 0))
        |          (disassemble (compile-toplevel '(temporal-let ((x 1) (y 2))
        |                                            (set x y)
        |                                            (set y x)
        |                                            (list x y)))))
        |     data:
        |         0: 1
        |         1: 2
        |     syms:
        |         0: list
        |     code:
        |         0: 04020004 frame 2 4
        |         1: 2C800400 movsr v00000 d0
        |         2: 2C810401 movsr v00001 d1
        |         3: 2C820800 movsr v00002 v00000
        |         4: 2C830801 movsr v00003 v00001
        |         5: 2C820801 movsr v00002 v00001
        |         6: 2C830800 movsr v00003 v00000
        |         7: 2C800802 movsr v00000 v00002
        |         8: 2C810803 movsr v00001 v00003
        |         9: 20020002 gcall t2 0 v00000 v00001
        |        10: 08000000
        |        11: 00000801
        |        12: 10000002 end t2
        |        13: 10000002 end t2
        |     instruction count: 12
        |     #<sys:vm-desc: 9de1b50>
       | 
       | TXR Lisp's compiler performs frame elimination optimization
       | whereby the v registers get reassigned to t registers,
       | eliminating the frame setup. The t registers are subject to some
       | decent data flow optimizations and can disappear entirely, like
       | in this case.
        
       | charlieflowers wrote:
       | Overlaps with functional reactive programming
        
       | togaen wrote:
       | Property 2 implies immutable state. I also used to think that was
       | a great idea. Then I tried actually programming that way.
        
         | AlchemistCamp wrote:
          | I've done most of my programming in the past several years in
         | Elixir, where mutable variables aren't an option, and it's been
         | a great dev experience. An entire class of bugs disappears and
         | it makes concurrency very easy to reason about.
        
           | macintux wrote:
           | After spending a few years in Erlang, it's hard (and
           | depressing) to go back to any other way of programming.
        
       | meindnoch wrote:
       | In game programming this is called "double-buffered state":
       | https://gameprogrammingpatterns.com/double-buffer.html
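        | 
        | A minimal double-buffer sketch in Python (names hypothetical):
        | the whole frame reads `current` while all writes go to
        | `pending`, and swap() publishes them at the frame boundary.

```python
# Hypothetical sketch of double-buffered state, as used in games.
class DoubleBuffer:
    def __init__(self, initial):
        self.current = dict(initial)  # state everyone reads this frame
        self.pending = dict(initial)  # state being written for next frame

    def swap(self):
        self.current = self.pending
        self.pending = dict(self.current)  # fresh scratch copy

buf = DoubleBuffer({"x": 1, "y": 2})
buf.pending["x"] = buf.current["y"]  # x' = y
buf.pending["y"] = buf.current["x"]  # y' = x
buf.swap()
print(buf.current)  # {'x': 2, 'y': 1}
```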
        
       | muhaaa wrote:
       | You want to try temporal programming? Try clojure: immutable
       | values, immutable & persistent data structures, software
       | transactional memory.
       | 
        | If you need a database with full history (transaction time), try
        | Datomic. Additionally, if you need full history AND domain time
        | (bi-temporal) included in your database, try xtdb.
        
         | gleenn wrote:
          | I second this. Clojure gives you this both in memory and in the
          | database via Datomic, and reasoning about immutable data and
          | pure functions is such a dream. Stateful programming is the
          | worst; once you live in Clojure for a while everything else
          | seems nuts, trying to debug what got passed to what and when
          | and what modified what. Such a disaster, relatively speaking.
        
         | deterministic wrote:
         | Or Haskell/Elm/... if you prefer typed programming languages.
        
           | rockwotj wrote:
           | Gleam.run is my current favorite flavor for "typed functional
           | language". It's simple (a small language) and compiles to
            | Erlang.
        
       | mpweiher wrote:
       | Sounds a lot like Lucid?
       | 
       | https://en.wikipedia.org/wiki/Lucid_(programming_language)
       | 
       | They don't use x', but rather fby x, but otherwise the ideas seem
       | very similar.
       | 
       | Of course, they called it dataflow, and Lucid inspired the
       | "synchronous dataflow" languages like Esterel (imperative) and
       | Lustre (functional). Which in turn inspired the non Conal Elliott
       | variants of "FRP".
       | 
       | It's all related...
        
       | lalwanivikas wrote:
        | Hmmm, I got confused by the name. I thought it was related to
       | https://temporal.io/
        
       | deterministic wrote:
       | This is how I think about programming and how I design software.
       | 
       | You don't need a functional language to do this. However it does
       | require more discipline when the language doesn't enforce purity.
       | 
       | And once I started thinking about software this way I realised
       | that events are important and should be the core of any software
       | design. Basically the application is a single function that takes
       | an event and the current state, and computes a new state and zero
       | or more events.
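        | 
        | A sketch of that shape in Python (all names hypothetical): the
        | whole application is one function from (state, event) to (new
        | state, emitted events), driven by a plain loop.

```python
# Hypothetical sketch: the application as a single pure step function.
def step(state, event):
    if event == "deposit":
        new = {**state, "balance": state["balance"] + 1}
        # emit an event when the balance first crosses back to >= 0
        out = ["overdraft-cleared"] if state["balance"] < 0 <= new["balance"] else []
        return new, out
    return state, []

state, emitted = {"balance": -1}, []
for ev in ["deposit", "deposit"]:
    state, out = step(state, ev)
    emitted += out
print(state, emitted)  # {'balance': 1} ['overdraft-cleared']
```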
        
       | gijoeyguerra wrote:
       | Looks and sounds like event sourcing.
        
       | thwayunion wrote:
       | This idea has a history! The reason most language research
       | programs like this had limited success is as follows.
       | 
       | There are precious few programs for which a|b ~ a;b is true for
       | all atomic programs a,b where:
       | 
       | | is a parallel composition operator
       | 
       | ; is the normal sequential composition operator (defined in the
       | completely standard way in terms of updates to a global state or
       | heap/stack-like structure if you want to be fancy)
       | 
       | ~ is any sort of equivalence relation on reasonable and realistic
       | semantics.
       | 
        | Moreover, there are precious few programs that easily decompose
       | into parts A,B,... such that A|B ~ A;B is true for each A and B
       | in the set of parts. You can give theoretic characterizations of
       | this fact, and many have, but the pragmatic point is more
       | convincing.
       | 
       | Associativity and even reflexivity often fail as well.
       | 
        | This line of thought is alluring, but alas... The
       | "just make a simple algebra and arrange your programs to fit it"
       | research program died early on for a reason. In the context of
       | parallel and concurrent programs, it is hard/impossible to solve
       | the decomposition problem at the language level of abstraction.
       | 
        | That pretty much summarizes the outcome of a big chunk of program
        | analysis research from the 70s to early 90s, unfortunately.
       | 
       | Let me briefly illustrate using OP's example:
       | 
       | `a' = b + 1; b' = a + 1`
       | 
        | from which I've deleted the errant trailing semicolon (it's `1+1`
        | not `1+1+`!)
       | 
       | Suppose we instead have the program:
       | 
       | a' = a + b + 1; b' = a + b + 1
       | 
       | Now, the ordering does matter! No worries, so we order.
       | 
       | Additionally, suppose we put these programs into an outer
       | while(true) loop and that's our whole program. Now what have we
       | bought ourselves? Not much.
       | 
       | In this case we can solve that problem by... well, by solving
       | some sort of integer recurrence equations that give us a program
       | without loops which we can then think about algebraically. But of
       | course this gets difficult or impossible very fast. (And, btw,
       | the algebra part here did not buy us much. It was solving integer
       | recurrence equations that saved the idea in this example). We
       | haven't even added real data structures or external state yet.
       | 
       | Anyways, looks like OP is building something around this idea.
       | Good luck! I'll be curious to see how you handle loops inside of
       | blocks that contain @'d variables with recursively defined
       | quantities :)
        
         | oh_my_goodness wrote:
         | I don't think the ordering matters. In both a' = a + b + 1 and
         | b' = a + b + 1 the new values are on the left and the old
         | values are on the right.
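            | 
            | A quick Python check of that claim (names hypothetical): with
            | only old values on the right-hand sides, the two assignments
            | commute, so either evaluation order yields the same state.

```python
# Hypothetical sketch: both orders of evaluating a' and b' agree when
# the right-hand sides read only the old state.
def step_ab(s):
    a = s["a"] + s["b"] + 1  # a' first
    b = s["a"] + s["b"] + 1
    return {"a": a, "b": b}

def step_ba(s):
    b = s["a"] + s["b"] + 1  # b' first
    a = s["a"] + s["b"] + 1
    return {"a": a, "b": b}

s = {"a": 1, "b": 2}
print(step_ab(s) == step_ba(s))  # True
```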
        
           | thwayunion wrote:
           | Sorry, I edited a bunch and the point was that the ordering
           | can matter due to references to the new value on the rhs or
           | due to a loop. So
           | 
           | a' = a + b + 1 and b' = a + b + 1
           | 
           | should be
           | 
           | a' = a + b' + 1 and b' = a' + b + 1
           | 
           | Or alternatively they can have non-primed right hand sides
           | but occur in a loop and you have the same problem even though
           | the rhs's are at-free. A syntactic constraint won't work
           | unless it includes "and also no loops in at-mentioning
           | blocks" which... well, yeah, we know how to reason about NFAs
           | :)
           | 
           | Sorry again, trying to make too many points with a single
           | example
        
       ___________________________________________________________________
       (page generated 2022-11-28 05:00 UTC)