[HN Gopher] Why I Prefer Functional Programming
       ___________________________________________________________________
        
       Why I Prefer Functional Programming
        
       Author : quickthrower2
       Score  : 150 points
       Date   : 2020-10-31 08:28 UTC (14 hours ago)
        
 (HTM) web link (www.haskellforall.com)
 (TXT) w3m dump (www.haskellforall.com)
        
       | blunte wrote:
       | The author presents succinct and compelling reasons to prefer
       | functional principles.
       | 
        | To add, I find the strong push for purity of functions helps
        | free up the part of my mind which seems to constantly worry
        | about potential problems or unknown/unexpected behaviors. With
        | that background noise out of the way, I can create more
        | elegant (by my own judgement :) ) solutions. I also sleep
        | better.
        
       | loxs wrote:
        | I agree to a great extent with the article, except for the
        | dislike of mutability. I used to be like this (disliking it),
        | until I found Rust.
        | 
        | Now I prefer imperative, eager computation with (safe)
        | mutability and even some lightweight OOP (the way Rust does it
        | with Traits/Type-classes). It seems to reflect the real world
        | much better. Rust makes the mutability concerns obsolete, as
        | you can have safety in the imperative/mutable world.
       | 
       | Otherwise Rust is very much functional in all the great ways
       | explained in the article.
        
       | karmakaze wrote:
        | I like FP's other "timeless" feature: you can think of the
        | operations without respect to time. There isn't a sequence of
        | operations, just a relation between the input and output. It
        | may be described sequentially in parts, but there is no mixing
        | of output with input, so it can be thought of as happening
        | instantaneously.
        | 
        | This is also the same reason why I like SQL: it describes
        | properties of the result set--if you're thinking of it as
        | procedural steps it's more difficult. When it comes down to
        | performance tuning, however, you do have to consider what
        | really happens in terms of index choices, loop nesting, etc. I
        | never got into FP problems large/complex enough to hit the
        | point where I have to understand how FP sausages are made.
        
         | marcosdumay wrote:
          | Haskell is famous for having problems with space leaks, where
          | the runtime just refuses to evaluate your code and keeps
          | storing more of it to run later. It's not usually a problem,
          | because any introductory text will teach you how to avoid
          | most of it (usually, it's not hard), but if no care is taken,
          | it will happen.
          | 
          | To go into another timeless paradigm, logic programming is
          | famous for unexpected exponential execution times. It's rare
          | that those get past tests, but it's something that developers
          | are constantly having to fix (that, or ensuring the type
          | system detects determinism). SQL has a much milder version of
          | this with underspecified joins.
          | 
          | Getting rid of time makes a lot of things easier, but it is
          | not trouble-free.
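          | 
          | A minimal sketch of the classic leak, assuming GHC (foldl'
          | is in Data.List; the behaviour shown is without
          | optimisations):
          |         import Data.List (foldl')
          |         
          |         -- Lazy foldl piles up unevaluated thunks
          |         -- (((0+1)+2)+3)... on the heap before forcing them.
          |         leaky :: Integer
          |         leaky = foldl (+) 0 [1 .. 10000000]
          |         
          |         -- Strict foldl' forces the accumulator at each
          |         -- step and runs in constant space.
          |         fine :: Integer
          |         fine = foldl' (+) 0 [1 .. 10000000]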
        
           | samcal wrote:
           | I really like that distinction, relating imperative/FP
           | tradeoffs to the classic space/time tradeoff. I'd also point
           | out that betting on space getting cheaper over time seems
           | like a better bet than betting on time getting cheaper :)
        
             | marcosdumay wrote:
             | That's not the trade-off. The trade-off is about what you
             | have to think about when programming, and what kinds of
             | bugs appear.
             | 
             | Non-buggy performance follows different rules.
        
         | carapace wrote:
         | You might like Prolog.
        
       | nendroid wrote:
        | I disagree with the higher-order functions thing. Writing
        | functions that receive other functions as input can lead to
        | overcomplicated code that's really hard to read. It's literally
        | the same thing as dependency injection, just with functions
        | instead of objects.
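        | 
        | For instance (a minimal sketch with hypothetical names), the
        | "dependency" is just a function argument:
        |         -- The caller injects fetchUser, much as a DI
        |         -- container would inject a repository object.
        |         greetUser :: (Int -> IO String) -> Int -> IO String
        |         greetUser fetchUser uid = do
        |           name <- fetchUser uid
        |           pure ("Hello, " ++ name)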
       | 
        | Use sparingly. I would avoid them altogether except for common
        | ones like map, reduce, and filter.
       | 
        | Functional programming promotes the idea of composition of
        | morphisms and point-free programming. I would use this in
        | place of object composition/dependency injection/higher-order
        | functions, or whatever you want to call it.
        
         | quickthrower2 wrote:
          | I agree to some extent. When I see this in imperative code-
          | bases, it can be used as a way to reduce lines of code and
          | save having to define a new class. But when debugging or
          | trying to make sense of the code it can be awful. However,
          | it's not the passing of functions to functions that is the
          | problem; it is the division of responsibility not being
          | thought through while doing so.
         | 
          | As a silly made-up example: you pass a callback to an XML
          | parser so it can tell you when a node has been parsed, and
          | that callback calls an API to report it. Doing something
          | like that just pollutes the XML parser code for no good
          | reason, and then you debug, find you get a 400 error while
          | trying to parse something, go 'wtf', and have to untangle
          | the callbacks.
         | 
          | I'm having a hard time summing it up right now, but I think
          | that if you try to "libraritize" specific code, i.e. make
          | something that does a very specific job too generic, it
          | causes confusion.
         | 
         | That example has 1 callback but believe me some developers want
         | to stack 'em high.
        
         | anon_d wrote:
          | _Writing functions that receive other functions as input can
          | lead to overcomplicated code that's really hard to read._
         | 
         | I work in FP languages, and I find this type of code
         | _extremely_ intuitive and easy to understand.
         | 
         | I suspect that you are just not used to thinking this way, and
         | that with more familiarity, you would not have this opinion.
        
           | nendroid wrote:
            | So you like writing functions that take 3 functions as
            | input parameters and return a new function that also takes
            | a function as an input parameter and returns another
            | function? In general, excessive use of this at great depth
            | leads to code that is not only less readable but less
            | modular.
           | 
           | The complexity of higher order functions can easily be
           | reasoned about by looking at the cardinality of the type. Sum
           | types and product types are standard and easy to reason
           | about.
           | 
           | However an exponential type that takes in an exponential type
           | and returns another exponential type leads to cardinalities
           | and possibilities that are much harder to reason about.
           | 
            | Composition of combinators is a pattern promoted by point-
            | free programming that leads not only to higher readability
            | but greater modularity, due to the reduced cardinality.
            | Additionally, the cardinality of the entire type can be
            | reduced to analyzing only the final result of the
            | composition.
            |         F = A . B . C
           | 
           | The cardinality of F is all that needs to be known. The
           | cardinality of the individual components can be ignored and
           | you don't have to reason about this. This is not the case for
           | higher order functions.
           | 
            | In general, the business problems that we deal with do not
            | require higher-order functions. Higher-order functions
            | share isomorphisms with two concepts we are already
            | familiar with: frameworks and metaprogramming.
           | 
            | Frameworks are programs that call your code, and
            | metaprogramming is code that writes and calls code. These
            | things are no different than a function that takes another
            | function in as a parameter and calls it. All three
            | concepts are aspects of the same thing.
           | 
           | Therefore all the downsides of metaprogramming and all the
           | downsides of frameworks are applicable to higher order
           | functions.
           | 
            | Most of the programming problems we solve in the real
            | world do not require metaprogramming. It can easily be
            | avoided and solved instead with composition of
            | combinators. Again, use sparingly.
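            | 
            | A minimal sketch of the style being advocated, with a
            | hypothetical pipeline:
            |         -- Only the type of the whole composition,
            |         -- [Int] -> Int, needs to be reasoned about at the
            |         -- use site; each stage stays first-order.
            |         summarize :: [Int] -> Int
            |         summarize = sum . map (* 2) . filter even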
        
       | Syzygies wrote:
       | Functional programming appeals to many of us for reasons other
       | than these practical considerations: Functional programming feels
       | like reasoning in algebra. As in Modern Algebra for math majors
       | (groups, rings, fields) and beyond.
       | 
       | There's a saying in mathematics that when any field matures it
       | turns into algebra. Life crossed a threshold from chemistry to
       | biology on Earth, and developed exponentially from there. Order
       | in mathematics crosses a similar threshold, as it becomes
       | sufficiently structured to support algebraic reasoning. The
       | subjective experience is like ice melting into a churning liquid,
       | or a land-locked creature learning to fly. Once one has this
       | experience, one cannot imagine thinking any other way. In the
       | case of functional programming, programs written in other
       | languages feel like ad hoc pre-civilization constructions, doing
       | arithmetic by counting pebbles.
       | 
       | Advocates of Haskell don't tend to express this, because from the
       | outside it can come off like trolling, but this algebraic sense
        | of wonder is at the core of many Haskellers' experiences. We all
       | have the example of Lisp in our minds, its "we found God"
       | advocacy did much to hinder its adoption. Nevertheless,
       | understanding this explains much about Haskell. The real point of
       | lazy evaluation is that it best supports this algebraic
       | reasoning, as carbon best supports life. The 47,000 compiler
       | options reflect a primary goal of being a research bed for
       | language constructs derived from mathematical category theory,
       | despite its success as a practical language for those free to
       | choose it.
       | 
       | The killer app for Haskell is parallelism. To this day it has the
       | best implementation of parallelism; one can achieve a 7x speedup
       | on 8 cores by adding a handful of lines to a program. This ease
       | is a consequence of the functional model, and of considerable
       | effort by Haskell's developers.
       | 
       | Idris 2 is itself a joy to learn, if one wants a smaller, cleaner
       | Haskell without the 47,000 compiler options. One gets to learn
       | dependent types. Alas, it doesn't offer commercial-grade
       | parallelism.
        
         | adwn wrote:
         | > _To this day it has the best implementation of parallelism;
         | one can achieve a 7x speedup on 8 cores by adding a handful of
         | lines to a program._
         | 
         | Eh, you can achieve the same in Rust using rayon, this isn't
         | exclusive to Haskell or FP. Do you have an example in Haskell,
         | where you get an effortless speedup which wouldn't be easy in
         | another language?
        
           | jeffreygoesto wrote:
           | Isn't the speedup a direct result of immutable data more than
           | a specific language?
        
           | jgilias wrote:
           | Yeah, but Rust isn't exactly a counter example, I think. It's
           | heavily inspired by Haskell, its type system, as well as FP
           | in general. So, it sort of inherits (pun not intended) its
           | parallelization capabilities from that.
        
             | platinumrad wrote:
              | What? No. Rust has some influence from Haskell in terms
              | of traits and such, but Rust's parallelization
              | capabilities come from its tracking of mutability. (No,
              | Haskell did not invent caring about mutability.)
             | 
             | Also, succinct expression of parallel algorithms isn't a
             | particularly unique feature. Like half of the named
             | algorithms in the C++ standard library can be parallelized
             | by adding a single parameter.
        
               | jolux wrote:
                | That's because the parallelization in the STL is
                | encapsulated. In Haskell and Rust it's explicit: it's
                | not just about adding a parameter to an existing
                | function; it's a general mechanism.
        
               | platinumrad wrote:
               | Rayon literally works on iterators just like the C++
               | algorithm library.
        
             | darksaints wrote:
              | Heavily? It's actually heavily inspired by ML (SML/
              | OCaml), not Haskell. Most of its functional programming
              | attributes originated in SML, and it has avoided most of
              | the things that make Haskell unique. Really it is just
              | type classes that came from Haskell.
        
               | pjmlp wrote:
                | Type classes are just a way of doing what module
                | interfaces or Objective-C protocols/categories already
                | did; both existed as prior art before Haskell or
                | Miranda came into being.
        
               | centimeter wrote:
                | I was around in the very early days of Rust, and it was
                | explicitly stated in many places in the docs that the trait
               | trait system, ADTs, etc. were all inspired by Haskell
               | experience. I don't think we ever mentioned OCaml or SML.
               | The trait system in Rust has nothing whatsoever to do
               | with SML or OCaml and everything to do with Haskell's
               | typeclass system. So between that and ADTs, on what
               | grounds do you think it was "heavily inspired by ML"?
        
           | WJW wrote:
           | From the article at https://medium.com/@s.nawaz/optimizing-
           | ray-tracing-in-haskel...:
           | 
           | > let colors = fmap computeColor coordinates
           | 
           | to
           | 
           | > let colors = fmap computeColor coordinates `using`
           | parListChunk 32 rseq
           | 
            | So half a line added to parallelize the mapping of a pure
            | computation, including the use of a sensible threadpool
            | implementation. How impressive you find this depends on
            | how you define "easy in another language". Rayon comes
            | pretty close, but you can encode some properties about
            | your computation in the Haskell type system that Rust does
            | not yet support, and so the safety of some computations
            | can be statically proven in Haskell but not in Rust.
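            | 
            | A self-contained sketch of the same idiom (computeColor
            | and the coordinate list here are hypothetical stand-ins
            | for the article's code; build with ghc -threaded and run
            | with +RTS -N):
            |         import Control.Parallel.Strategies
            |           (parListChunk, rseq, using)
            |         
            |         computeColor :: Double -> Double
            |         computeColor x = sum [sin (x * k) | k <- [1 .. 500]]
            |         
            |         main :: IO ()
            |         main = do
            |           -- Split the list into chunks of 32; each chunk
            |           -- becomes one parallel spark, with every element
            |           -- forced to weak head normal form.
            |           let colors = map computeColor [0 .. 100000]
            |                          `using` parListChunk 32 rseq
            |           print (sum colors)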
        
         | carapace wrote:
         | Backus emphasized the algebra of programs in his Turing Award
         | lecture: "Can Programming Be Liberated From the von Neumann
         | Style? A Functional Style and its Algebra of Programs"
         | 
         | https://amturing.acm.org/award_winners/backus_0703524.cfm
         | 
         | https://dl.acm.org/doi/10.1145/359576.359579
        
         | spekcular wrote:
         | > There's a saying in mathematics that when any field matures
         | it turns into algebra. Life crossed a threshold from chemistry
         | to biology on Earth, and developed exponentially from there.
         | Order in mathematics crosses a similar threshold, as it becomes
         | sufficiently structured to support algebraic reasoning. The
         | subjective experience is like ice melting into a churning
         | liquid, or a land-locked creature learning to fly. Once one has
         | this experience, one cannot imagine thinking any other way.
         | 
         | Hi, this seems wrong to me.
         | 
         | First, as far as I can tell, such a saying does not exist. I've
         | never heard it, despite having a fair amount of experience with
         | mathematics (though not as much as you), and I can't find it
         | through Googling.
         | 
         | Second, and more substantively, it is not the case that
         | mathematical fields inevitably turn into algebra. One only has
         | to look at PDE, probability, and analytic number theory to see
         | this is the case (to give just a few examples). All are highly
         | mature fields and essentially non-algebraic. This is not to say
         | that ideas from algebra are not occasionally useful, just that
         | it should be obvious to anyone who opens an introductory
         | graduate text in PDE that the subject has not "turn[ed] into
         | algebra."
         | 
         | > The killer app for Haskell is parallelism. To this day it has
         | the best implementation of parallelism; one can achieve a 7x
         | speedup on 8 cores by adding a handful of lines to a program.
         | This ease is a consequence of the functional model, and of
         | considerable effort by Haskell's developers.
         | 
         | I don't think this statement is entirely wrong, but I don't
         | think it's entirely right either. There are plenty of languages
         | where one can get parallelism with just a few extra lines of
         | code; Julia comes to mind immediately. Easy parallelism is
         | hardly a Haskell-specific, or even FP-specific, feature. I
         | would of course agree that designing things without mutable
         | state makes writing parallel code easier, but this can be done
         | in many high-level languages these days.
         | 
         | But that is a nitpick. A more substantive objection is that the
         | sole purpose of parallelism is performance, and idiomatic
         | Haskell is generally speaking slower, sometimes a lot slower,
         | than the same algorithm written in C++ or similar languages.
          | (The adjective _idiomatic_ is important here - I'm aware
          | that one can, with enough contortions, write C-like code in
          | Haskell, but this response undercuts the point of highlighting
         | Haskell-specific features like immutability, laziness, etc.) In
         | particular, a few years ago it seemed like the people
         | implementing parallel Haskell features didn't understand or
         | care about fundamental things like cache locality. Maybe this
         | has changed?
         | 
         | There is also an amusing story about the "ease" of writing
         | parallel quicksort in Haskell [1].
         | 
         | > The real point of lazy evaluation is that it best supports
         | this algebraic reasoning, as carbon best supports life.
         | 
         | It's worth noting that Simon Peyton Jones is on record as
         | saying that any "Haskell 2" should not be lazy [2].
         | 
         | [1] https://sudonull.com/post/75314-Parallel-quick-sort-on-
         | Haske...
         | 
         | [2] https://news.ycombinator.com/item?id=1924061
        
           | pessimizer wrote:
           | > There are plenty of languages where one can get parallelism
           | with just a few extra lines of code; Julia comes to mind
           | immediately.
           | 
           | Julia is a multi-paradigm language, but it is heavily Erlang
           | inspired (another multi-paradigm language, but primarily
           | functional.)
        
           | the__alchemist wrote:
           | Good point... The realization that chaotic systems are
           | ubiquitous, and that many physical phenomena (motion of the
           | planets, how chemical interactions work at the electron level
           | etc) can't be modeled analytically (so far as we know) was a
            | big upset to mathematics and science, and still isn't
            | addressed well in school curricula.
           | 
           | To expand on your main point: A loose, but appropriate
           | analogy is that FP appeals to programmers in the way analytic
           | solutions appeal to mathematicians and scientists: Neat,
           | ideal, beautiful... but a poor fit for many practical
           | problems.
        
           | adwn wrote:
            | > _In particular, a few years ago it seemed like the people
            | implementing parallel Haskell features didn't understand or
            | care about fundamental things like cache locality._
            | 
            | That's the main issue which FP-for-parallel-execution
            | proponents don't seem to get: _identifying_ independent
            | (and therefore parallelisable) computations isn't the hard
            | part; the hard part is _planning the data layout and
            | partitioning_ so that all your parallelism isn't eaten up
            | by synchronization and data movement between cores.
        
             | kenjackson wrote:
              | I think it may be more accurate to say they are both
              | hard problems. It is still open whether P = NC.
        
           | marcosdumay wrote:
           | Probability has surely morphed into an algebra, and what you
           | study in modern introductory books is the conversion of the
           | algebra into "everyday language" so that students don't get
           | scared. It has very little similarity to what probability
           | looked like when it was being developed.
           | 
           | Also, you seem to be overstating the maturity of PDE (people
           | are pretty much still trying to turn it into algebra, but it
           | may not be possible) and number theory (that's an active
           | research area).
           | 
           | I have never seen that saying either, but it seems quite
           | correct, with the exception of the areas where an algebra is
           | impossible, so the field matures without becoming one.
        
             | spekcular wrote:
             | On what basis do you say that probability has "morphed into
             | an algebra"? Consider all of the popular research topics
             | today: the KPZ equation, Schramm-Loewner evolution, spin
             | glasses, percolation and critical phenomena, rigorous
             | statistical mechanics in general [1], random matrix theory,
             | large deviations, stochastic analysis and (P)SDEs.
             | 
             | Obviously, I'm forgetting some, but none of these are
             | primarily algebraic. Some algebraic methods are used
             | (orthogonal polynomials in random matrix theory, and
             | determinants and a whole host of other things to study
             | integrable models in the KPZ universality class, etc.), but
             | clearly groups/rings/fields are not playing a major role.
             | 
             | For a more systematic approach you could look at recent
             | issues of Annals of Probability, Annals of Applied
             | Probability, and similar journals. There's not going to be
             | a lot of "modern algebra" (of the flavor you see in
             | algebraic geometry) there.
             | 
             | The same comments apply to PDE and analytic number theory.
             | Both are obviously mature fields (worked on for a long time
             | by many people, with a lot of great discoveries), but again
             | algebra does not play a central role in either. In
             | particular I am not aware of any PDE specialists whose
             | research agenda consists of "trying to turn it into
             | algebra."
             | 
             | [1]: https://www.unige.ch/math/folks/velenik/smbook/
        
         | __jem wrote:
         | > Life crossed a threshold from chemistry to biology on Earth,
         | and developed exponentially from there. Order in mathematics
         | crosses a similar threshold, as it becomes sufficiently
         | structured to support algebraic reasoning.
         | 
          | It's funny, because I think this metaphor cuts in exactly
          | the opposite direction -- once code reaches sufficient
          | complexity, the pure math approach doesn't survive an
          | encounter with the real world.
         | 
         | I love static analysis and think that there are core business
         | domains that are best expressed with a rich type system, but
         | there is a whole world of absolutely critical programming that
         | cannot simply eliminate messiness, due to interfacing with
         | humans and our flawed practices.
         | 
         | With sufficient time and an expressive enough type system, even
         | the messiest business process can be represented, but I'm not
         | convinced that's an example of beauty but rather stubbornness.
        
           | sli wrote:
           | > ... but there is a whole world of absolutely critical
           | programming that cannot simply eliminate messiness, due to
           | interfacing with humans and our flawed practices.
           | 
            | I understand what you're saying, but I don't think you'll
            | find many programmers who will argue that a programmer
            | _shouldn't_ use the best tool for the job.
        
             | __jem wrote:
              | Agreed, except for the sort of "we found God" types
              | referenced by the parent in regards to pure functional
              | programming.
        
           | [deleted]
        
         | nickdrozd wrote:
         | > In the case of functional programming, programs written in
         | other languages feel like ad hoc pre-civilization
         | constructions, doing arithmetic by counting pebbles.
         | 
         | > Idris 2 is itself a joy to learn, if one wants a smaller,
         | cleaner Haskell without the 47,000 compiler options.
         | 
         | From a conceptual point of view, Idris makes Haskell look
         | similarly crude.
        
       | barry27 wrote:
       | "I prefer functional programming because I have decided to prefer
       | it".
        
       | bachmeier wrote:
       | Requiring a functional programming approach when you teach makes
       | your life _much_ easier. I teach a computationally-intensive
       | course for advanced economics PhD students. These days I have
       | them think carefully through their problem, set up a recursion,
       | and make one call to reduce. The vast majority of problems fit
       | into this framework. I no longer have to work my way through
       | 500-line monsters that look like entries in an obfuscated Perl
       | competition. I read a few lines of code, I know exactly what it
       | does, and I get on with my life.
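        | 
        | A minimal sketch of the pattern (a hypothetical exercise,
        | written here in Haskell): the present value of a cash-flow
        | stream, set up as a recursion and computed with one reduce.
        |         -- First payment at time 0; each later element is
        |         -- discounted by one more period.
        |         presentValue :: Double -> [Double] -> Double
        |         presentValue r = foldr step 0
        |           where step cf acc = cf + acc / (1 + r)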
        
         | mattmanser wrote:
            | You can do this in any language, functional or not. Every
            | language has recursion; every language has a simple one-
            | liner way to do reduce.
            | 
            | They have generally had them for over a decade now.
            | 
            | I don't know if it's still the case, as I mainly work
            | alone, but I often saw that new programmers had been
            | taught about recursion but didn't actually understand it.
            | So they never used the concept until you pointed it out,
            | even if it could vastly simplify their code.
        
           | willtim wrote:
           | > You can do this in any language, functional or not.
           | 
            | No, not in practice. For example, in an average imperative
            | language, you'd have no tail-call optimisation, and so
            | would run out of stack space pretty quickly. There'd also
            | be no persistent collections, meaning you'd have to keep
            | copying any sets of values for every recursive call, with
            | abysmal complexity. And so on...
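            | 
            | A small illustration of the persistent-collections point
            | (a sketch assuming Data.Map from the containers package):
            |         import qualified Data.Map.Strict as Map
            |         
            |         -- "Inserting" returns a new map that shares
            |         -- structure with the old one, so each recursive
            |         -- step pays O(log n) rather than a full copy.
            |         countWords :: [String] -> Map.Map String Int
            |         countWords = foldr bump Map.empty
            |           where bump w m = Map.insertWith (+) w 1 m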
        
       | zupa-hu wrote:
       | So he prefers the functional programming style based on superior
       | portability.
       | 
        | While I too mostly prefer functional programming principles,
        | the argument seems weak to me. Even if those features are
        | timeless and programs written with them can be ported to most
        | other languages, it doesn't follow that an extra feature (even
        | one invented in the future) won't make you 10x more productive
        | and thus be desirable.
       | 
       | I prefer the functional programming style (in a non-FP language)
       | for the productivity gains. I move faster, the codebase has a
       | smaller footprint 'in my brain', it's easier to reason about it,
       | so overall I'm more productive when using it.
        
       | bionhoward wrote:
        | I'm super interested in FP for AGI, but one issue with such
        | setups (see jax.experimental.stax) is that the neural nets are
        | parametric functions and, if recurrent, can have state too.
        | That adds up to a lot of extra work for a programmer to
        | manually pass around these parameters. It's doable but also
        | distracting from the real work of cognitive architecture,
        | which is why I switched from stax style to Flax.
       | 
        | Flax is currently by far my favorite neural net library,
        | despite being pretty small, because it lets you write a neural
        | net module as a function and decorate it with @nn.module --
        | the decorator wraps your function in a class instance to
        | manage the params and state. This can shave 10 lines per
        | module, and if you're like me and have like 50 bajillion ideas
        | for different neural net modules, it really adds up.
       | 
       | TLDR: Check out Flax if you like functional programming and
       | neural networks!
        
       | curryhoward wrote:
       | I like to think there is a nice pun here, that functional
       | programming is "timeless" in two simultaneous ways:
       | 
       | (1) As mentioned in the article, values are immutable and side
       | effects are reified so you do not have to reason about time.
       | Instead, you can reason equationally, which unlocks some nice
       | benefits related to refactoring, reasoning about correctness and
       | concurrency, testing, reproducibility, etc.
       | 
       | (2) Functional programming is based on well-behaved mathematical
       | foundations and so is more stable than the other programming
       | paradigms. Object-oriented programming is a constantly moving
       | target because it doesn't have a principled formal foundation
       | (despite many attempts to establish one on an ex post facto
       | basis), whereas functions are "timeless" in the sense that
       | they're here to stay.
       | 
       | It's not without tradeoffs of course, but if you're willing to
       | have an open mind about how computers should be programmed and
       | forego some things that are so pervasive in traditional
       | programming (e.g., unrestricted side effects), the payoff is
       | huge.
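        | 
        | A tiny sketch of the equational reasoning in (1): because map
        | has no side effects, the equation
        |         map f (map g xs) == map (f . g) xs
        | always holds, so fusing two traversals into one is a refactor
        | you (or the compiler) can perform without re-reading the rest
        | of the program.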
        
         | sidpatil wrote:
          | This is known as _combinational logic_, contrasted with
          | sequential logic, which is "timeful".
        
       | caryd wrote:
       | I think it all comes down to one question, do you need objects or
       | not?
        
         | josephcsible wrote:
         | What functionality can you only get with objects, and not with
         | anything that is part of the FP paradigm?
        
         | quickthrower2 wrote:
         | And also what is an 'object'?
         | 
         | Haskell has a decent system for building up complex data
         | structures. You can write functions for those. Those data
         | structures + those functions are kind of "an object" from the
         | OO world (data + methods).
         | 
         | The differences are:
         | 
         | * Inheritance - well Haskell has many kinds of polymorphism so
         | it doesn't need it.
         | 
         | * Hiding data - well Haskell can hide data using smart
         | constructors, you don't need an explicit private keyword
         | 
         | * Interfaces - covered by typeclasses but they do a lot more!
         | 
          | Also, without any of the above, you can create object-like
          | things with just closures. If a closure has access to 3
          | variables in scope, it's "kinda" like an OO object with 3
          | member variables. You can create new scopes in a for loop,
          | which is like creating many new objects. It's a stretch, but
          | the underlying concept of something owning other things is
          | still there.
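          | 
          | A minimal sketch of the closure-as-object idea (a
          | hypothetical counter, using an IORef as the "private"
          | member):
          |         import Data.IORef
          |         
          |         -- newCounter returns an action that behaves like
          |         -- an object with one private field and one method.
          |         newCounter :: IO (IO Int)
          |         newCounter = do
          |           ref <- newIORef 0
          |           pure $ do
          |             modifyIORef' ref (+ 1)
          |             readIORef ref
          | 
          | Each call to newCounter creates a fresh "instance" with its
          | own state.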
        
           | ufo wrote:
           | > Those data structures + those functions are kind of "an
           | object"
           | 
           | And in this setting, the type of that datastructure
           | containing the functions can act as an interface.
           | 
            | A trap that OO programmers sometimes fall into with Haskell
            | is to try to emulate OO interfaces using existential types
            | + typeclass interfaces, but that is a bit of an
            | antipattern.
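            | 
            | A minimal sketch of the record-as-interface style (a
            | hypothetical Logger):
            |         data Logger = Logger
            |           { logInfo  :: String -> IO ()
            |           , logError :: String -> IO ()
            |           }
            |         
            |         consoleLogger :: Logger
            |         consoleLogger = Logger
            |           { logInfo  = putStrLn . ("[info] " ++)
            |           , logError = putStrLn . ("[error] " ++)
            |           }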
        
       ___________________________________________________________________
       (page generated 2020-10-31 23:00 UTC)