[HN Gopher] Math is an insurance policy
       ___________________________________________________________________
        
       Math is an insurance policy
        
       Author : ingve
       Score  : 143 points
       Date   : 2020-02-24 18:19 UTC (4 hours ago)
        
 (HTM) web link (bartoszmilewski.com)
 (TXT) w3m dump (bartoszmilewski.com)
        
       | ditonal wrote:
       | I got interested in programming in 2000 and graduated with a CS
       | degree in 2009. Throughout the 2000s (remember this was post-dot
       | com) I got told software was a bad field to get into for two
       | reasons:
       | 
       | 1) The advent of advanced tooling would make software engineers
       | unnecessary. Tools like Visual Basic and UML diagrams were the
       | tip of the iceberg of "visual coding" where a business person
       | could just specify requirements and the software would be
       | automatically created.
       | 
       | 2) Jobs were all going to go overseas. There is no reason to pay
       | a programmer in California 10x what you can pay someone in
       | Mumbai. It's better to study something like finance where the
       | secret domain knowledge is held within the chambers of ibankers
       | in Manhattan. The future of tech startups is a few product
       | managers in NYC outsourcing the coding work to India.
       | 
       | There's also a 3rd argument that gets floated around that the
       | field will be oversaturated, "everyone" is learning to code, etc.
       | In 2015 I asked a startup for 125k and they told me, while that
       | did seem to be market at the time, they thought salaries had
       | peaked and were going in the opposite direction. In 2020 you
       | probably can't hire a bootcamp grad for 125k.
       | 
       | Since then the field has exploded and wages have gone off the
       | charts, but I still hear the same type of arguments over and
       | over.
       | 
       | In 2020, you hear stuff like:
       | 
       | 1) AI is the future, it's going to automate away all of the
       | menial programming jobs.
       | 
       | 2) Bay Area is overcrowded, all the jobs are going remote. The
        | future of tech startups is a few marketing execs in SF
       | the menial tech work to flyover states.
       | 
       | Personally, I didn't believe the hype then and don't believe the
       | hype now. I find it amusing that the author questions the wisdom
       | of Google for not using AI to automate development, as if that's
       | never occurred to Google.
       | 
       | Of course the author is right that tech is a treadmill, new
       | skills move into the spotlight while old ones become outdated,
       | although even then the mass of legacy code means consultants will
       | have lucrative jobs taking care of it for a long time.
       | 
       | In my experience, new tooling always creates more software jobs,
       | not less. Software is not like high frequency trading, the more
       | people that compete to make software, the more people we seem to
       | need to make software.
       | 
       | Sure, Bay Area is getting insanely expensive, but Google still
       | tries to fit as many as it can into Mountain View, Facebook still
       | crams as many as it can into Menlo Park. Every 3 months some VC
       | will have the bright idea that, what if we just pay people to do
       | the lowly engineering work somewhere else and just have execs in
       | the Bay Area? And 99% of those startups go nowhere and Google is
       | worth a trillion dollars.
       | 
        | There is a very intuitive line of reasoning that goes: software
        | can be done by machines, and software can be done anywhere. There
       | is a thread of truth to both those narratives, but it leads
       | people to very incorrect conclusions that there won't be as many
       | software jobs in the future and they will be done in cheap
       | places.
       | 
       | Despite all intuition, and all the logical narratives about costs
       | and automation, a group of people dedicated to technology in the
       | same physical room have defied that intuition. Virtually every
        | extremely important software company grew on the West Coast of
       | the USA, in some of the most expensive places in the world, and
       | the more software tools have improved the more headcount these
       | companies have had. So take all this intuition about the end of
       | days for software engineers with a huge grain of salt.
        
       | woah wrote:
       | It's odd that functional programming enthusiasts always love to
       | put the mantle of "math" on their favorite functional tidbits,
       | while real mathematicians write imperative, stateful (and often
       | sloppy) Python.
        
         | dddbbb wrote:
         | Plenty of serious researchers in top universities are studying
         | the intersection of mathematics and functional programming, and
         | I don't think that they're somehow not 'real mathematicians'.
         | 
         | Here's a paper by the author of this post, formalising a
         | popular Haskell pattern in category theory:
         | https://arxiv.org/abs/2001.07488
         | 
         | EDIT: switched to non-PDF link.
        
         | shadowfox wrote:
         | You may well be talking about different kinds of math here.
         | 
         | Most of what functional programmers / theorists tend to care
         | about are from sub-fields like logic, abstract algebra and the
         | like.
         | 
          | The "real mathematicians" that you mention are most likely
          | working in other fields, like linear algebra, statistics, etc.
          | While it is certainly possible that logicians and algebraists
          | work with sloppy Python (in as much as they write _any_ code at
          | all), I don't feel that it would be a great fit.
        
       | kebos wrote:
        | I think this article gives AI more credit than it has
        | demonstrated so far, and oversimplifies its examples.
       | 
       | It's useful I expect for quick fixes/guidance like the loop
       | example.
       | 
        | Take improving performance, for example: these days that often
        | requires a holistic architectural rethink - surely a creative
        | process? The idea of optimization centering on a loop seems very
        | distant from the heterogeneous, asynchronous behavior of modern
        | hardware.
        | 
        | If AI really does start to solve problems in a more 'general'
        | way, not just a bit of object recognition here and there, won't
        | software developers incorporate it into their process instead,
        | enabling even more sophisticated software to be written?
       | 
        | I think that is the key to the longevity of software development
        | as a career. The compiler didn't remove the assembly programmer;
        | it simply enabled a whole new level of software complexity to be
        | feasible.
        
       | seibelj wrote:
       | Self driving cars are not coming. Jobs are not being eliminated
       | en masse. This is hyperventilation and is not borne out by
       | history, facts, or reality. Don't fall for the scary AI nonsense.
       | 
       | https://medium.com/@seibelj/the-artificial-intelligence-scam...
        
       | dcolkitt wrote:
       | To paraphrase what's said about fusion energy: Haskell is the
       | language of the future and always will be.
       | 
       | I love Haskell, but there's really not a single shred of evidence
       | that programming's moving towards high-level abstractions like
       | category theory. The reality is that 99% of working developers
       | are not implementing complex algorithms or pushing the frontier
       | of computational possibility. The vast majority are delivering
        | wet, messy business logic under tight constraints, ambiguously
       | defined criteria, and rapidly changing organizational
       | requirements.
       | 
       | By far more important than writing the purest or most elegant
       | code, are "soft skills" or the ability to efficiently and
        | effectively work within the broader organization. Can you
        | communicate effectively with non-technical people, do you write
        | good documentation, can you work with a team, can you accurately
        | estimate and deliver on schedules, do you prioritize effectively,
        | are you rigorously focused on delivering business value, do you
        | understand the broader corporate strategy?
       | 
       | At the end of the day, senior management doesn't care whether the
       | codebase is written in the purest, most abstracted Haskell or
       | EnterpriseAbstractFactoryJava. They care about meeting the
       | organizational objectives on time, on budget and with minimal
       | risk. The way to achieve that is to hire pragmatic, goal-oriented
       | people. (Or at the very least put them in charge.) And that group
       | rarely intersects with the type of people fascinated by the
       | mathematical properties of the type system.
        
         | FabHK wrote:
         | Agreed. One sentence in the article embodies this problem:
         | 
         | > As long as the dumb AI is unable to guess our wishes, there
         | will be a need to specify them using a precise language. We
         | already have such language, it's called math.
         | 
         | Math is wonderful and elegant, but it is useless (in itself) to
         | specify things in the real world. You need tons of
         | conceptualisation and operationalisation and common sense etc.
         | for the foreseeable future.
         | 
         | Heck, logic is useless for the real world - everyone knows the
         | example that "a bachelor is a man that is not married", but
         | even if you can implement it as a conjunction of two logical
         | predicates, then try to define, with logic a computer can
         | understand, what "man" is, or "married", or "game", or "taxable
         | income" or "murder" or "fake news". Go on, use math and
         | category theory. Good luck.
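For concreteness, the conjunction itself really is one line. A minimal Haskell sketch of the "easy" half (the `Person` type and the `isMan`/`isMarried` predicates are hypothetical stubs - which is exactly where the difficulty described above hides):

```haskell
-- "A bachelor is a man that is not married" as a one-line conjunction.
-- The hard part is that `man` and `married` are stub fields here; giving
-- them any real-world definition is the open problem.
data Person = Person { name :: String, man :: Bool, married :: Bool }

isMan, isMarried, isBachelor :: Person -> Bool
isMan        = man
isMarried    = married
isBachelor p = isMan p && not (isMarried p)
```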
         | 
         | EDIT to add: For a wonderful overview of the problems with the
         | naive Aristotelian view of definition (easily captured by
         | predicate logic) in the real world, see George Lakoff's book
         | "Women, Fire and Dangerous Things: What Categories Reveal About
         | the Mind".
        
           | mreome wrote:
            | To be pedantic, a bachelor is a man that is not, _and has
            | never been_, married. A man that is not married may be a
            | bachelor, a divorcé, or a widower. But _bachelor_ is still
            | used colloquially (in the US) for all three (though not as
            | often for a widower). And if the person is under a certain
            | age, most people would not seriously call them a _bachelor_
            | yet, but the _cut-off_ age varies culturally. "Married" is a
            | bit complicated too: married in the eyes of the law, formally
            | or common-law, in a particular country, in the eyes of a
            | particular church?
            | 
            | Also, my aunt is a Bachelor of Arts, and her marital status
            | doesn't affect that.
           | 
           | Language, life, and business logic is messy, confusing, and
           | conditional/contextual to the point of absurdity. You will
           | never realistically reduce it to pure mathematics in a way
           | that makes it _easier_ to deal with.
        
         | mumblemumble wrote:
          | > The vast majority are delivering wet, messy business logic
         | under tight constraints, ambiguously defined criteria, and
         | rapidly changing organizational requirements.
         | 
         | And herein lies the fundamental reason why most large
         | aggregations of business software that I've seen in the wild
         | are only strongly typed at about the same scale as the one on
         | which the universe actually agrees with Deepak Chopra. Beyond
         | that, it's all text.
         | 
         | It doesn't have to be full-blown Unix style the-howling-
         | madness-of-Zalgo-is-held-back-by-sed-but-only-barely; it can be
         | CSV or JSON or XML or whatever. And a lot of it's really just
         | random magic strings. But it's still all stringly typed at
         | human scales.
        
           | Diederich wrote:
           | I agree with both yours and the parent comments. But this is
           | just glorious:
           | 
           | > full-blown Unix style the-howling-madness-of-Zalgo-is-held-
           | back-by-sed-but-only-barely
           | 
            | I'm a very long-time UNIX guy, so I think I get the zen
           | of what you're saying here, but I don't really get the Zalgo
           | reference. Is this it? https://knowyourmeme.com/memes/zalgo
           | Thanks!
        
             | sigstoat wrote:
             | first answer on
             | https://stackoverflow.com/questions/1732348/regex-match-
             | open...
        
               | schoen wrote:
               | And in context, the idea is that when we use classic Unix
               | text manipulation tools, we make implicit and sometimes
               | explicit assumptions about the nature and structure of
               | the data. Those assumptions can be wrong and then our
               | pipelines are also wrong. By contrast, a strongly
               | statically typed environment might force us to _prove_
               | our assumptions in the context of the interactions among
               | the different parts of our code, and so there will be
               | fewer corner cases where we have unexpected or
               | unspecified behavior.
               | 
               | The only barely held back part is presumably that when we
               | notice a corner case, we might add a little more logic to
               | handle that specific case, but we never really approach
               | the goal of handling all possible inputs with valid or
               | useful behavior. (We might even declare that a non-goal.)
        
         | darzu wrote:
         | I would argue that Haskell lets you respond to nebulous
         | requirements better than almost any other language, because
         | refactoring is so much easier and safer.
         | 
         | I self identify much more with being pragmatic and goal-
         | oriented than math-y and perfectionist, and I think for almost
         | every programming domain we'd achieve our goals faster by
         | moving more towards having strong static guarantees in an
         | ergonomic, expressive language.
         | 
         | Finally, I would also put forth that debugging state corruption
         | or randomly failing assertions is much harder than learning to
         | avoid side-effects and leaning into immutability.
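A minimal illustration of that last point (a toy sketch; the `Account` type is invented for the example): in Haskell an "update" builds a new value rather than mutating the old one, so there is no shared state to corrupt.

```haskell
-- Record update syntax returns a fresh value; the original is untouched,
-- so no stale reference can ever observe a half-mutated state.
data Account = Account { owner :: String, balance :: Int }
  deriving (Show, Eq)

deposit :: Int -> Account -> Account
deposit amt acct = acct { balance = balance acct + amt }
```

After `let b = deposit 50 a`, the original `a` still carries its old balance; both values remain valid and independently inspectable.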
        
           | KurtMueller wrote:
           | Alexis King, a professional Haskell coder, recently wrote an
           | article on exactly that:
           | 
           | https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-
           | typ...
        
             | mumblemumble wrote:
             | It's a very insightful article. It's clear and precise and
             | lucidly written, and I'd recommend anyone who cares about
             | these things read it.
             | 
              | That said, I found the article unconvincing. The author's
              | writing is perhaps _too_ precise, to the point where the
              | forest has been lost amidst the trees. I can draw the main
              | reasons why I'm losing my religion w/r/t static typing
              | from the appendix portion of the article itself:
             | 
             | > _not only can statically-typed languages support
             | structural typing, many dynamically-typed languages also
             | support nominal typing. These axes have historically
             | loosely correlated, but they are theoretically orthogonal_
             | 
             | The fact that they're theoretically orthogonal is small
             | consolation. They _are_ loosely correlated, but the
              | looseness doesn't happen in a way that's particularly
             | useful to me. The fact of the matter is, the only languages
             | I'm aware of that have decent ergonomics for structural
             | typing are either dynamically typed, or constrained to some
             | fairly specific niches. If I want structural typing in a
             | general-purpose language, I'm kind of stuck with Clojure or
             | Python. The list of suggested languages that comes a
             | paragraph later fails to disabuse me of that notion. As
             | does this observation:
             | 
             | > _all mainstream, statically-typed OOP languages are even
             | more nominal than Haskell!_
        
           | KurtMueller wrote:
           | Scott Wlaschin, author of F# for Fun and Profit, has an
           | excellent talk on using F# (an ML language) for domain driven
           | design and business development:
           | 
           | https://www.youtube.com/watch?v=Up7LcbGZFuo
        
         | bjornsing wrote:
         | This is definitely an accurate description of the past, but is
         | it the future? I'm really interested in what happens when we
         | start to see (more) companies organized around software...
        
           | mrkeen wrote:
           | In the future we'll say "We don't need Haskell because we
           | already have values, higher-kinded types, pure functions, and
           | do-notation in Java".
           | 
           | (Although if I could speculate based on past performance:
           | values won't work with collections, pure functions won't
           | allow recursion, do-notation will be incompatible with
           | checked exceptions, and the higher-kinded types won't have
           | inference. And all of them will be jam-packed with
           | unnecessary syntax.)
        
         | ska wrote:
          | cf. the history of Lisp.
        
         | marcosdumay wrote:
         | > The reality is that 99% of working developers are not
         | implementing complex algorithms
         | 
          | > The vast majority are delivering wet, messy business logic
         | 
         | Ok, notice something wrong here?
        
           | jandrese wrote:
           | That people are using computers to solve real world problems
           | instead of pursuing the higher planes of pure math?
           | 
           | 90% of the work ends up being input validation and
           | normalization. Humans suck at consistency, especially if your
           | data comes from multiple sources, and potentially multiple
           | languages. None of those complex algorithms are going to work
           | until you make the data sane.
        
             | marcosdumay wrote:
             | Those two lines contradict each other.
        
         | whateveracct wrote:
         | > The way to achieve that is to hire pragmatic, goal-oriented
         | people. (Or at the very least put them in charge.) And that
         | group rarely intersects with the type of people fascinated by
         | the mathematical properties of the type system.
         | 
         | I wouldn't say it rarely intersects - I'm one and I've worked
         | with plenty. And I've seen plenty with strength in one join a
         | production Haskell shop and pick up the other (in both
         | directions!)
         | 
         | Haskell definitely gives you a lot to help manage the messiness
         | you mention. You have to have know-how and awareness to do it,
         | but it can help like none other.
         | 
         | But overall, stuff like this is why I wish to eventually become
         | as independent as I can manage in my career. The intersection
         | of Haskell and management thinking is frequently a bad faith
         | shitshow in my experience, so the sooner I can start creating
         | on my own terms, the better. The best part about Haskell is how
         | little effort I exert solving problems compared to my
         | comparable experience in Go, Python, Java. A perfect match for
         | personal creation.
        
         | [deleted]
        
       | irrational wrote:
       | If the author is right, how long will it take to get there? I've
       | been working in the field for 25 years. Frankly what I do today
       | isn't that much different than what I did 25 years ago and we are
       | busy trying to hire more people who do the same types of things.
       | I'll probably retire in the next 15-20 years. Will this wholesale
       | change proposed by the author take place within that time frame?
       | Based on past experience I doubt it.
        
         | Traster wrote:
         | To add to this: In my career I've personally witnessed so many
         | instances of people saying "Well, this can all be automated..."
         | and then starting enormous projects to automate a process that
         | is currently the sole focus of literally hundreds of engineers.
         | As if the reason there are hundreds of engineers doing this job
         | is that all those guys are just idiots. The result is these
         | massively ambitious projects that promise senior management the
         | world, drag on forever, often justifying themselves with hand-
         | optimized toy examples to show progress. In the end the actual
         | problem is very obviously intractable and years of progress are
         | wasted.
         | 
         | It might be true that eventually we can solve "implementing
         | cross-platform UIs in the general case" but we could also be
         | literally 100 years away from achieving that, and in the mean
         | time the fact that it's theoretically possible is worthless.
        
       | ukj wrote:
       | I am not convinced. Most mathematicians embrace denotational
       | semantics, but most engineers intuitively default to operational
       | semantics, because ultimately, we need to make choices/trade-offs
       | that have consequences which are too complex for any optimiser
       | which lacks a holistic view of the system our system is part of.
       | 
       | Operational semantics are actually a higher order logic than
       | Category theory when expressed in a Geometry of Interaction (GoI)
       | grammar.
       | 
       | https://ncatlab.org/nlab/show/Geometry+of+Interaction
       | 
        | I don't have the fancy nomenclature to utter the Mathematical
        | phrase "endomorphism on the object A⊗B", but I intuitively
        | understand what operational semantics are and why they are useful
        | to me. When a programming language/grammar comes along which
        | implements most of the design-patterns I need/use to turn my
        | intuitions into _behavioural_ specifications - I am still going
        | to be more productive than a Mathematician because I will not
        | have to pay the upfront cost of learning a denotational
        | vocabulary. The compiler will do it for me, right?
       | 
       | In the words of the late Ed Nelson: The dwelling place of meaning
       | is syntax; semantics is the home of illusion.
       | 
       | Languages are human interfaces. Computer languages are better
       | interfaces than Mathematics because we design (and evolve them)
       | to be usable by the average human, not Mathematicians. Good
       | interfaces lower the barrier to entry by allowing plebs like
        | myself to stand on the shoulders of giants. Mathematics expects
       | everybody to be a giant.
       | 
       | And in so far as dealing with ambiguity goes, Programming
       | languages are way less ambiguous than Mathematical notation!
       | 
       | Ink on paper has no source code - no context.
        
       | state_less wrote:
       | I always figured our domain (computation) is so vast that once
       | programming is automated, so too are all the other jobs. If we
       | get an AGI that can program our programs and learn to learn, it
       | won't be hard to have it write a program to make sales calls, or
       | gather user feedback, or build buzz for the company.
       | 
       | So don't worry, when it happens we can all rest because there
       | won't be any need for our labor anyway.
        
         | qqii wrote:
          | What about the pix2code example? It's conceivable that domain-
          | specific automation will reduce the number of jobs that exist.
        
           | sameers wrote:
           | The pix2code paper is interesting, but it doesn't really
           | answer the question of translating the UI requirements into
           | the corresponding "business logic" - it's limited to
           | producing the code that manipulates the "surface" elements,
           | so to speak. The real challenge I think for an AGI is
           | translating something like, "This button shows green, if the
           | user has previously scored 10 or higher on the previous 5
           | tests, which have all been taken in the previous 6 weeks,"
           | into code.
           | 
           | Then there's the problem of edge cases - in this case, what
           | if the user has not had an account for more than 6 weeks but
           | has met the other conditions? Now the AGI has to detect that
           | context and formulate the question back to the developer.
           | 
           | The "code will eat your coding job" hype sounds a lot like
           | "we'll have self-driving cars all over the country by 2000"
            | hype (yes, that hype did exist back then), or going further
           | back, "All of AI is done, we just need a few more decision
           | rules" hype back in the seventies.
           | 
           | For sure, many coding frameworks are a lot simpler now than
           | they were two decades ago, and yes, I think it has meant many
           | aspects of digitized services are now much cheaper. You can
           | build a Wix website for yourself, or a Shopify e-business,
           | without paying a developer, which you needed to do in the
           | year 2000. But the consequent growth in digital businesses
           | has led to induced demand for more developers, as businesses
           | constantly test the edges of these "no-code" services.
           | 
           | I would say we have reached some amount of saturation
           | already. Anec-datally, it seems that if you segmented
           | salaries by experience, you might find some amount of decline
           | or stagnation in the lower levels of experience relative to a
           | decade ago. So in that sense, the original point has some
            | valence, but I don't think it has anything to do with "AI".
        
           | state_less wrote:
           | Assuming pix2code really did automate away traditional UI
           | work, the development effort would then move to the next
           | subdomain (e.g. a sales bot, etc).
           | 
           | I suspect as long as you're willing to learn and are
           | competent, you should have a job until the final effort of a
           | general AI self-learning programmer.
        
             | qqii wrote:
              | I'm also sceptical about pix2code, but the point is that
              | domain-specific automation could conceivably reduce the
              | number of overall jobs. The cost of switching domains is
              | also non-negligible.
             | 
              | The question with domain-specific automation (and one of my
              | takeaways from the article) isn't whether or not you'll
              | have a job, but whether the effort you put into getting
              | your current job was worthwhile.
        
               | state_less wrote:
               | I think the total number of jobs (programming + other)
               | that humans can do economically might go down over time.
                | Programmers can usually pick up the next ambitious project
               | (e.g. a sales bot) when the old domain is no longer
               | profitable.
               | 
                | I think Bartosz is saying that math and category theory
                | are useful to learn because they work in a number of
                | subdomains. They can help keep the domain-switching cost
                | down somewhat.
        
         | joshspankit wrote:
         | If everything else is automated, would sales calls even exist?
         | Or any need for buzz?
        
           | nexuist wrote:
           | It's hard to tell. I recently visited my homeland of Albania
           | and was surprised to learn that things like food delivery /
           | ridesharing apps did not exist at all. Someone who goes there
           | could probably make a killing just copying what we have in
           | the States. The future is unevenly distributed, as they say.
           | While we automate away our problems in the first world the
           | third world will be eagerly awaiting existing solutions.
        
         | pm90 wrote:
          | Eh. Every time this comes up, what is often missed is that
          | humans are complementary to machines, and we don't yet
          | understand consciousness and identity all that well.
          | 
          | You're right that the jobs as they exist today won't exist in
          | the future. Just like manufacturing replaced agriculture, which
          | was then replaced by services, there will be higher orders of
          | creativity and problem solving that most humans will be engaged
          | in.
         | 
         | There really isn't a lack of problems to solve. Consider that
         | we've barely scratched the surface of the earth and the
          | farthest we've sent humans is the Moon. The Space Industry
         | might as well be the economic market that keeps expanding
         | forever...
        
           | state_less wrote:
           | Humans look a lot like machines to me though. So it seems we
           | are looking at the old machine competing against the new. And
           | maybe the new machine is a hybrid of the old machine with
           | some new feature built in. I don't think we'd call it human
           | though, cyborg maybe.
           | 
           | One of the things I didn't like about Star Trek was that they
           | have this super powerful computer and yet Picard has to ask
           | Wesley to punch in the coordinates on the console and engage
           | the warp coils. What kind of theater is this? Sure a floating
           | monolithic slab of computation in space has less cinema
           | appeal than a crew of humans, but a hardened machine seems
           | more plausible for space travel. Humans can't sustain much
           | more than 1g acceleration and don't live too long.
        
         | clSTophEjUdRanu wrote:
         | This is my thought as well. Just don't be the first one
         | automated. We will either have figured out post-work society or
         | we will face some sort of collapse.
        
       | bobjones2013 wrote:
       | The one liner quicksort implementation in Haskell is only really
       | possible because a good chunk of the hard work was handled by a
         | partition library function... I'm not sure how different
         | that is from just calling quicksort from a library in any
         | other high level language.
        
         | [deleted]
        
         | dddbbb wrote:
         |   partition p xs = foldr (select p) ([],[]) xs
         | 
         |   select p x ~(ts,fs) | p x       = (x:ts,fs)
         |                       | otherwise = (ts, x:fs)
         | 
         | That is the entirety of `partition` in the standard library.
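         | For reference, the one-liner quicksort under discussion is
         | usually written along these lines (a standard textbook
         | sketch that delegates the partitioning to that library
         | function, not code quoted from the article):

```haskell
import Data.List (partition)

-- The classic demo quicksort: the heavy lifting is delegated to
-- the library function `partition` from Data.List.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort smaller ++ [p] ++ quicksort larger
  where
    (smaller, larger) = partition (< p) xs

main :: IO ()
main = print (quicksort [3,1,4,1,5,9,2,6])  -- prints [1,1,2,3,4,5,6,9]
```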
        
       | ForHackernews wrote:
       | This article purports to be talking about math but then goes off
       | down some insular functional programming rabbit hole.
       | 
       | For what it's worth, I can easily accept that Haskell
       | programmers' career prospects will not be altered one whit by
       | improvements in optimisation and automation...
       | 
       | P.S. Haskell is not math:
       | https://dl.acm.org/doi/10.1017/S0956796808007004
       | https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf
        
         | dddbbb wrote:
         | The paper you linked shows that one Haskell implementation does
         | not exactly correspond to a specific algorithm, then gives an
         | alternative definition which does correspond. What does that
         | have to do with the statement 'Haskell is not math'?
        
       | eindiran wrote:
       | In case anyone is interested, the author also wrote the excellent
       | "Category Theory for Programmers", available in print or online:
       | 
       | https://bartoszmilewski.com/2014/10/28/category-theory-for-p...
       | 
       | If you're interested in what Category Theory is about, it's a
       | great place to start for people with a background in programming
       | but not necessarily mathematics.
        
       | yongjik wrote:
       | It seems rather funny to write a treatise, with quicksort being a
       | central example, where the shown code requires O(N) temporary
       | space.
       | 
       | C/C++ programmers might not be good at category theory, but no
       | one worthy of their salary would walk past a quicksort routine
       | with O(N) memory without stopping and asking "Wait, what?"
       | 
       | Seriously, I remember when _this_ used to be the first Haskell
       | code shown on haskell.org homepage, and I had to stop and wonder
       | if these folks were just trolling or were actually _that_
       | oblivious to performance. If you want to promote Haskell, you
       | could hardly have chosen a worse piece of code.
        
       | juped wrote:
       | >In mathematics, a monad is defined as a monoid in the category
       | of endofunctors.
       | 
       | Sure. But like everything in math what it really is is a thing
       | that the mathematician has come to know through application of
       | effort. The definition is a portion of a cage built to hold the
       | concept in place so the next mathematician can come along and
       | expend effort to know it.
       | 
       | You've probably done this yourself, just maybe not that deep in
       | the abstraction tower (though you'd be surprised how abstract
       | some everyday things can be). For example, you may have
       | internalized the fact about division of integers that every
       | integer has a unique prime factorization. This is an important
       | part of seeing what multiplication is, but it's not part of the
       | tower of abstraction upon which multiplication is built.
       | 
       | Mathematicians tend to end up being unconscious or conscious
       | Platonists because mathematicians are trained to see the
       | mathematics itself.
        
         | kian wrote:
         | words form a net
         | 
         | hunt writhing concepts
         | 
         | some escape
        
         | [deleted]
        
         | sriku wrote:
         | If I understand what you're saying, you're asking to not
         | confuse mathematical communication with actually doing
         | mathematics ... sort of how Ramanujan was doing amazing math
         | but in a way that wasn't communicable to the convention until
         | the language was introduced by Hardy?
        
         | manthideaal wrote:
         | The definition of a monoid here is not the usual definition, is
         | a new definition for the special case of a strict category as
         | defined MacLane's Book Categories for the Working
         | Mathematicean. Since you maybe thinking about composition of
         | endofunctors and unit endofunctors you get a confused picture.
         | The monad as a monoid in the category of endofunctors is a way
         | to show that you can confuse people using two different
         | definition of an usual concept and both give different results.
         | I got this from (1), look for "main confusion": monoid in the
         | category of endofunctors is defined in a new way, and is not
         | the expected monoid in the set of all endofunctors.
         | 
         | The definition of monoid in monomial categories is on page 166
         | and Monads in a category are on page 133. As a math person I
         | know what is a monoid (usual term) but I did not know what is a
         | monoid in a monomial category (well I know now because is on
         | page 166 of the book).
         | 
         | (1) https://stackoverflow.com/questions/3870088/a-monad-is-
         | just-...
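         | (In Haskell terms, the two pieces of that monoid structure
         | are concrete functions: `return` is the unit and `join` is
         | the multiplication, collapsing the composed endofunctor. A
         | rough illustrative sketch, not a statement of the strict
         | categorical definition:)

```haskell
import Control.Monad (join)

-- Informally: for a monad m, `return :: a -> m a` plays the monoid
-- unit and `join :: m (m a) -> m a` the monoid multiplication,
-- collapsing the composed endofunctor m . m back down to m.
main :: IO ()
main = do
  print (join [[1, 2], [3]])    -- for lists, join is concat: [1,2,3]
  print (join (Just (Just 5)))  -- for Maybe, one layer collapses: Just 5
```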
        
           | manthideaal wrote:
           | I wonder what the point of using such a phrase is; it
           | doesn't help you grok the concept of a monad. It can help
           | you to know that someone has given a new definition to
           | sound cute - shame on them. By the way, I admire Mac Lane
           | as a math person, but people seem to use category theory
           | to sell snake oil. Category theory is a tool that gives
           | names to diagrams and properties that come up frequently,
           | to avoid repeating the same argument in proofs; it is DRY
           | (don't repeat yourself, as in the Ruby motto). If you are
           | an expert in category theory you can give short proofs of
           | known facts. Category theory is like pipes in Unix: you
           | pipe diagrams to show properties. The analogues of grep,
           | sed, and awk are functors, categories, and natural
           | transformations. The input is a collection of diagrams and
           | the output is a new diagram that has a universal property;
           | it receives a name and a collection of properties that are
           | supposed to be useful for proving new theorems.
        
             | dddbbb wrote:
             | The phrase appears 6 chapters into a graduate mathematics
             | text on category theory. If one reads the preceding
             | chapters, it is a useful but pithy explanation for what a
             | monad is, using terms which have all already been covered.
             | Its use outside of that context is basically just a joke
             | towards the Haskell community being overly mathematical.
        
               | manthideaal wrote:
               | I agree with you, the phrase should be preceded with:
               | Caution, this is a math joke, don't lose sleep trying to
               | grok it.
        
         | QuesnayJr wrote:
         | This is an incredible quote:
         | 
         | > But like everything in math what it really is is a thing that
         | the mathematician has come to know through application of
         | effort. The definition is a portion of a cage built to hold the
         | concept in place so the next mathematician can come along and
         | expend effort to know it.
        
         | mumblemumble wrote:
         | But then there's https://shitpost.plover.com/m/monad.html
         | 
         | > _The more I think about this, the more it seems to me that a
         | monad is not at all a monoid in the category of endofunctors,
         | but actually a monoidal subcategory._
         | 
         | > _That 's the problem._
        
           | juped wrote:
           | Sometimes the definition is wrong (which should be a big hint
           | that the definition is just pointing at something).
           | 
           | Like the definition of prime that I was taught in primary
           | school, "a prime is a positive integer whose only divisors
           | are itself and 1". The right definition is "a prime is a
           | positive integer with exactly two divisors", because
           | otherwise all interesting statements about primes would have
           | to be confusingly rewritten as statements about primes
           | greater than 1. (Sometimes we have theorems about "odd
           | primes", i.e., not 2, but the important set, primes,
           | definitely includes 2.)
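           | That second definition transcribes directly into code, and
           | it excludes 1 and includes 2 with no special cases (a
           | quick illustrative sketch):

```haskell
-- "A prime is a positive integer with exactly two divisors",
-- transcribed literally (fine for small n, not an efficient test).
isPrime :: Int -> Bool
isPrime n = length [d | d <- [1..n], n `mod` d == 0] == 2

main :: IO ()
main = print (filter isPrime [1..20])  -- prints [2,3,5,7,11,13,17,19]
```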
           | 
           | I don't get the monad joke fwiw; I've done a ton of Haskell
           | and a ton of math, but the Haskell wasn't deep abstraction
           | and the math was differential geometry and the vicinity.
        
             | gjm11 wrote:
             | To make matters worse, it turns out that there are actually
             | two different concepts commonly labelled by the word
             | "prime", and in some highbrow contexts they are different
             | -- so mathematicians actually define "prime" in an entirely
             | different way: p is prime iff p|ab => (p|a or p|b). That
             | is, a prime is something that can't divide the product of
             | two numbers without dividing at least one of the numbers.
             | The thing about having no nontrivial divisors is called
             | being "irreducible".
             | 
             | (When are they different? When you're working with some
             | kind of number _other than the ordinary integers_. For
             | instance, suppose you decide to treat sqrt(-5) as an
             | integer, so your numbers are now everything of the form a +
             | b sqrt(-5) where a,b are ordinary integers. In this system
             | of numbers, 3 is _irreducible_ , but it isn't _prime_
             | because (1+sqrt(-5)) (1-sqrt(-5)) = 6 which is a multiple
             | of 3 -- but neither of those factors is a multiple of 3
             | even in this extended system of numbers.)
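             | (The arithmetic in that example is easy to check
             | mechanically. Representing a + b sqrt(-5) as an integer
             | pair, a small illustrative sketch, with names invented
             | here:)

```haskell
-- Elements a + b*sqrt(-5) of Z[sqrt(-5)], stored as pairs (a, b).
type Zs5 = (Integer, Integer)

-- (a + b s)(c + d s) = (ac - 5bd) + (ad + bc) s, since s^2 = -5.
mul :: Zs5 -> Zs5 -> Zs5
mul (a, b) (c, d) = (a*c - 5*b*d, a*d + b*c)

-- Divisibility by the ordinary integer 3: both coordinates must
-- be multiples of 3.
divBy3 :: Zs5 -> Bool
divBy3 (a, b) = a `mod` 3 == 0 && b `mod` 3 == 0

main :: IO ()
main = do
  print (mul (1, 1) (1, -1))  -- (6,0): the product is 6, a multiple of 3
  print (divBy3 (1, 1))       -- False: 3 divides neither factor,
  print (divBy3 (1, -1))      -- False: so 3 is not prime here
```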
        
         | djedr wrote:
         | > The definition is a portion of a cage built to hold the
         | concept in place so the next mathematician can come along and
         | expend effort to know it.
         | 
         | Well put. I'd say this extends beyond mathematics. All
         | definitions are like that.
        
       | imjustsaying wrote:
       | When someone asks me, "Why don't you have X insurance?"
       | 
       | My reply is, "Because I can do math."
       | 
       | Was surprised to find the article wasn't about that.
        
       | sapientiae3 wrote:
       | > As long as the dumb AI is unable to guess our wishes, there
       | will be a need to specify them using a precise language. We
       | already have such language, it's called math. The advantage of
       | math is that it was invented for humans, not for machines. It
       | solves the basic problem of formalizing our thought process, so
       | it can be reliably transmitted and verified
       | 
       | The same is true of most programming languages - they were all
       | made for humans. Math has the advantage that you can formally
       | prove that it solves the problem, but that isn't a requirement
       | for most software.
       | 
       | > If we tried to express the same ideas in C++, we would very
       | quickly get completely lost.
       | 
       | Same is true for many things that you can express in C++, but not
       | in Math.
       | 
       | Math used this way is essentially just another programming
       | language - with massive advantages in some circumstances, and
       | massive disadvantages in others - I can't find any argument in
       | this article as to why it is a better bet than any other
       | language.
        
       | leftyted wrote:
       | This is historicism. For example, the idea that, because
       | compilers have eliminated most hand-optimization, that process
       | will inexorably continue, moving further up the chain of
       | abstraction until "trivial programming" has been eliminated. The
       | author thinks he's derived some law of historical progress. Along
       | these lines, many smart people have predicted the "end of labor"
       | since at least Marx.
       | 
       | I think that predicting the future is hard and most people are
       | better off "optimizing local minima".
        
       | jart wrote:
       | I agree with the OP that junk programming will likely die. But so
       | will junk math. I don't think programming is going to be
       | automated anytime soon, but I'd imagine that the inputs of an
       | optimizer capable of doing so would look more like vagueish
       | business goals and policy constraints that businessmen /
       | politicians like to write, rather than some functional monad,
       | which I guess is why they still continue to be the master
       | character classes of humanity.
        
         | mattkrause wrote:
         | I'm skeptical of these sorts of claims for two reasons.
         | 
         | 1. Creativity. What objective function do you optimize to write
         | the first Super Mario Bros? Can you then get from there to
         | RocketLeague or Braid? (I think not).
         | 
         | 2. Imagine that you somehow obtain a magical technology that
         | takes in a natural language spec and emits highly-optimized
         | code, just like a human. Who's writing that spec?
         | 
         | For interactions with actual humans, there's usually a
         | professional drafter (lawyer, contracting officer, etc)
         | carefully specifying what the other parties should and should
         | not do. We'd presumably need the same thing even with some
         | fancy AGI. This is perhaps a bit different from worrying about
         | whether foo() returns a Tuple or List, but it's not totally
         | dissimilar from programming.
        
           | qqii wrote:
           | I agree with your first point; such an objective function
           | would have to optimise something extremely abstract, like
           | player enjoyment.
           | 
           | As for the second, the point is not to have a
           | specification on the human side. Most of the time when
           | communicating with others we don't need lawyers, and
           | spoken contracts are valid and binding by law. Even with
           | lawyers, contracts can be disputed in court, as there is
           | no formal definition of every exact scenario.
           | 
           | Assuming we had the power to do (1), all we'd need in (2)
           | is something that doesn't produce unexpected outcomes.
        
             | mattkrause wrote:
             | If you don't have some kind of specification, how are you
             | going to control what you get?
             | 
             | It's true that you can turn to a teammate and say "hey, can
             | you write me a pipeline to import data from JSON files?"
             | and you'll usually get something usable. However, you and
             | your teammates have shared goals and background information
             | about your particular project and the world at large, etc.
             | 
             | Projects go off the rails all the time because the
             | generally intelligent (allegedly, anyway) humans don't
             | share these things. Right now, the front page has an
             | article called "Offshoring roulette" about this. If you
             | don't want to click through, here's my story about a
             | contract programmer working in the lab. I asked him to look
             | into a problem: the software running our experiments
             | crashed after a certain number of events occurred. It
             | turned out that the events were being written into a fixed-
             | size buffer, which was overflowing. This _could_ have been
             | fixed in many ways (flush it to disk more often, record
             | events in groups, resize the buffer). However, he chose to
             | make the entire saving function into a no-op. This quickly
             | "fixed" the problem--but imagine my delight when the next
             | few runs contained no data whatsoever. In retrospect,
             | although this guy had a PhD, he wasn't particularly
             | interested in the broader context, namely that it crashed
             | _while collecting data that I wanted._
             | 
             | An optimizer is going to take all kinds of crazy shortcuts
             | like that unless it's somehow constrained by the spec. You
             | could certainly imagine building lots of "do-what-I-mean"
             | constraints into this optimizer but that requires even more
             | magic.
        
         | qqii wrote:
         | As with some of the other commenters, I don't fully agree.
         | Programming is messy, and most end users don't care about the
         | beautiful abstractions you used. Junk programming will always
         | exist, and just good enough programming will always be the
         | mainstream.
         | 
         | When I'm purchasing a chair, I don't normally care about the
         | detailed specifics of wood types, craftsmanship, or the
         | joins, nor does it matter if a master craftsman doesn't
         | approve.
        
       | iainmerrick wrote:
       | It's a little weird for this article to focus on qsort, as surely
       | the key point is that the algorithm doesn't have to be stated at
       | all, just the requirements -- that the output is sorted.
       | 
       | This doesn't necessarily invalidate the argument, but it means
       | all the concrete examples seem rather beside the point.
        
       | huherto wrote:
       | I agree that a lot of software will be done declaratively rather
       | than imperatively. But we will create DSLs for that. I don't
       | think we will use math for that. And somebody needs to create the
       | DSLs.
        
       | skybrian wrote:
       | We already do automate jobs by improving our development
       | environments. A better language can eliminate important classes
       | of bugs, and once a reliable library is available, you don't have
       | to write it yourself. The tools do get better, gradually.
       | 
       | This doesn't seem to be happening particularly quickly, though,
       | and it's not clear that it's accelerating. Setting standards is a
       | social process and it often takes years for new language changes
       | to get widely deployed.
       | 
       | Another thing that slows things down is that every so often some
       | of us decide what we have is terrible and start over with a new
       | language, resulting in all the development tools and libraries
       | having to be rebuilt anew, and hopefully better.
       | 
       | I expect machine learning will result in nicer tools too, but
       | existing standards are entrenched and not that easy to replace,
       | even when they are far from state-of-the-art.
        
       | asow92 wrote:
       | > One such task is the implementation of user interfaces. All
       | this code that's behind various buttons, input fields, sliders,
       | etc., is pretty much standard. Granted, you have to put a lot of
       | effort to make the code portable to a myriad of platforms:
       | various desktops, web browsers, phones, watches, fridges, etc.
       | But that's exactly the kind of expertise that is easily codified.
       | 
       | Good luck. There will always be a need for bespoke UI. Just as
       | there's always a need for bespoke anything that can otherwise be
       | mass produced.
        
       | jonas_kgomo wrote:
       | Interesting. If languages like C and C++ are really the first
       | to be vulnerable, why are they the ones used for building AI
       | (computer vision) instead of category theoretic languages?
        
         | beigeoak wrote:
         | The author is saying that the main reason to use something like
         | C++ instead of Python or even Java is for speed. He assumes
         | that optimizing for speed is a problem that can fit neatly
         | under the machine learning domain.
         | 
         | If a machine can optimize performance better than humans, then
         | it would not make sense to use C++, in the context of
         | performance.
        
       | [deleted]
        
       | melling wrote:
       | Data science seems like a gateway drug to doing more math.
       | 
       | I've been working through Joel Grus' Data Science from Scratch,
       | 
       | https://www.amazon.com/Data-Science-Scratch-Principles-Pytho...
       | 
       | rewriting the Python examples in Swift:
       | 
       | https://github.com/melling/data-science-from-scratch-swift
        
       | zwieback wrote:
       | _I'm sorry to say that, but C and C++ programmers will have to
       | go._
       | 
       | I've heard that since I started my career in the early 90s and
       | it's always interesting to compare what reasons people give why
       | low-level languages are going to go away really soon now.
       | 
       | Other than that the article makes a lot of good points.
        
         | msla wrote:
         | > it's always interesting to compare what reasons people give
         | why low-level languages are going to go away really soon now.
         | 
         | Because C has killed off most assembly language already,
         | compared to the 1970s.
         | 
         | Because you can talk about writing C++ for embedded systems
         | without busting up laughing.
         | 
         | Because "embedded" means ARM and not TMS1000 these days.
         | 
         | Because not even BART is still running PDP-8s in production
         | anymore.
         | 
         | Because the extreme low end changes slowly, but it does change.
        
         | OrangeMango wrote:
         | > I've heard that since I started my career in the early 90s
         | and it's always interesting to compare what reasons people give
         | why low-level languages are going to go away really soon now.
         | 
         | The main reason: They'll go away when the graybeards retire and
         | the companies fail without them.
         | 
         | For the last 5 years, I've been babysitting a C-based system
         | that generates more than 70% of our corporate profit. The
         | youngsters working on the 7+ year-old project to replace it are
         | on the 3rd refactoring of the 2nd programming language codebase
         | and the "architect" is already talking about rewriting in a new
         | language for "productivity improvements." For the 4th year in a
         | row, the first of five essential elements of the new system
         | will go on line Next Year, leaving still more years of work
         | until we can shut the old system down and let the C programmers
         | go (except we all know more new languages than the new guys, as
         | we all have loads of free time - our old system is quite
         | reliable). Without a substantial payment directly into my kids'
         | trust fund, I have no intention of delaying retirement by a day
         | - I have way too many side projects to explore!
        
       | philshem wrote:
       | > Experience tells us that it's the boring menial jobs that get
       | automated first.
       | 
       | I doubt that the economic drivers of automation consider if the
       | job is boring or menial for the worker. I think this "experience"
       | needs a source.
       | 
       | Furthermore, imagine working 40 years in a field you didn't enjoy
       | in order to have an "insurance policy". (Just do what you like.)
        
         | vb6sp6 wrote:
         | > I doubt that the economic drivers of automation consider if
         | the job is boring or menial for the worker. I think this
         | "experience" needs a source
         | 
         | You are right, no one says "let's automate the boring jobs".
         | What happens is that these types of jobs naturally select
         | themselves: because they are uncomplicated, they are prime
         | targets for automation.
        
       | cryptozeus wrote:
       | "One such task is the implementation of user interfaces." Clearly
       | author has not used image to code tools before. The amount of
       | junk code that is created through these tools is unusable in
       | production.
        
         | mamon wrote:
         | For now. But the very fact that the code compiles and does
         | its basic tasks is already an achievement.
        
       | lordleft wrote:
       | If the author is right, what subfields of mathematics will likely
       | be the most salient? Linear Algebra? Stats? Category Theory?
       | Isn't it possible that the species of math you invest in will
       | turn out to not be so valuable in an AI-driven future? Or is the
       | hope that a baseline mathematical fluency will help engineers
       | pivot no matter what?
        
         | moralestapia wrote:
         | Category theory is fine but won't give you "anything new". I
         | wasted a lot of time and money as plenty of big names were
         | praising it (e.g. John Baez) so I followed them on a few
         | conferences/talks. In the end, it's a nice theoretical
         | framework to think about how to structure your code but that's
         | it. If you do want to get involved (and, like me, are coming
         | from a computational background) the book from Bartosz (i.e.
         | the guy from the post) tells you enough to get a good overview
         | of the thing; it's also available for free online.
         | 
         | A second recommendation, a bit odd maybe, do not waste your
         | time trying to engage with that community as most people there
         | are incredibly smug and toxic (but if you don't believe me feel
         | free to try it, ymmv). They are now pushing this idea that
         | they're 'opening' the field to everybody and would like as
         | many people as possible to jump on their bandwagon, but they
         | are just trying to get some traction to keep leeching off
         | grants, etc... they couldn't care less about you; I
         | experienced this first-hand.
         | You can learn most of these things on your own anyway, as it is
         | usual with most mathy things.
         | 
         | I don't disagree with the premise of the post, when the time
         | comes math will save your ass, for sure. Now, what works? As
         | you mentioned linear algebra and some intermediate statistics
         | would greatly improve your chances. I would drop some geometry
         | as well; geometry has an inherent 'visual/spatial' feeling
         | to it, but try to learn it in such a way that you become
         | familiar with dealing with it in a completely abstract way.
         | Regarding this, I have noticed a small surge in interest in
         | Clifford algebras, geometric algebras, etc... if you have
         | the time, pay attention
         | to it, there are some greats talks about it on youtube and this
         | one has an amazing community that I would definitely recommend
         | to engage with.
         | 
         | Also interesting to read about, some other algebras that are
         | defined mathematically and are at the foundation of CS, things
         | like lambda calculus come to mind.
         | 
         | You don't have to become an expert in these fields to start
         | seeing the benefits. Just build the habit of reading about
         | one or two topics per week - what they do, how they do it -
         | and before you know it a year has gone by and you will feel
         | much more confident in your skills. Anything that requires
         | more attention will naturally draw you to it.
         | 
         | Finally, this guy Robert Ghrist is amazing (really!)
         | https://www.math.upenn.edu/~ghrist/, check him out as well, his
         | talks and other content. He also wrote a book on applied
         | topology (free online as well) which is amazing. Even if you
         | don't understand everything in it, give it a quick skim to
         | get an idea of all the incredible things math can do for you
         | that you could not have imagined existed!
        
           | tobmlt wrote:
           | Seconded. Robert Ghrist's Elementary Applied Topology is one
           | of my favorite books. A real eye opener if you haven't seen
           | some of it before.
        
         | mamon wrote:
         | You cannot really predict that. Number theory was once
         | considered "pure mathematics" in a sense that one could not
         | even imagine practical applications for checking if a number is
         | a prime or not. And then someone invented cryptography...
         | 
         | EDIT: For all those nitpicking downvoters: yes, I meant public
         | key cryptography.
        
           | ska wrote:
           | That's not really a useful definition of "pure" mathematics.
           | 
           | Also cryptography is one of the oldest applications, and has
           | interacted with a variety of sub-disciplines in mathematics
           | for basically ever.
        
           | zeroonetwothree wrote:
           | Cryptography predates number theory. Probably you mean public
           | key cryptography.
        
           | lou1306 wrote:
           | To be fair, cryptography and number theory have coexisted
           | without much overlap for centuries (millennia?). Then, the
           | rise of mechanized cryptanalysis forced us to look for hard-
           | to-break ways to encrypt stuff, and prime factorization was a
           | very good candidate.
        
         | photon_lines wrote:
         | The author isn't right. He's right in some respects (I'd say
         | in 5% of what he conjectures), but for the most part,
         | programmers and programming will be in quite high demand for
         | the foreseeable future. Yes, C / C++ programming might
         | become less important (see: memory wall) and I'd say highly
         | parallel computation will take precedence, but this won't
         | replace computer science or programming... it'll just create
         | the need for programmers to transition from one skill set to
         | another.
         | 
         | Also, as a note: the field of computer science is a branch of
         | mathematics. Whenever you deal with data structures or
         | algorithms, you're dealing with how to procedurally design
         | information structures and procedures to solve problems and
         | model the real world. Mathematics? Well, depending on what
         | branch you're into, it's using symbols and information ...to,
         | well, do the same thing!
         | 
         | Out of all of the professions, it seems rather silly to me to
         | think that our own profession is in danger of being replaced. I
         | would think that it's the LEAST likely one, since programmers,
         | to me at least, represent workers who are paid to communicate
         | and find ways to solve problems in the real world using a tool
         | which will continue to improve and enable our species to excel
         | and automate jobs filled with repetitive drudgery. Out of all
         | of the professions, I'm not sure why the author would think
          | that ours is likely to go out of demand soon. I would
          | conjecture that our playing a prominent role in automating
          | other jobs ensures that we stay in demand for quite a long
          | time!
         | 
         | As far as the most useful mathematical branches are concerned -
         | if you're interested, the branches I find have the highest
         | ability to help in solving real world problems are: calculus,
         | linear algebra, probability and statistics, computer science
         | (as already mentioned), convex and computational optimization,
         | quantum mechanics, complex analysis, ordinary and partial
         | differential equations, data mining and analysis, information
         | theory ... well, I could keep going, but I think you'd greatly
         | benefit from checking out a great book which gives a pretty
         | good overview on the key areas in applied mathematics: The
         | Princeton Companion to Applied Mathematics. The above are just
         | my own guesses, and it's highly dependent on which problem
         | you're looking to solve.
        
       | maerF0x0 wrote:
       | > So the programmers of the future will stop telling the computer
       | how to perform a given task; rather they will specify what to do.
       | In other words, declarative programming will overtake imperative
       | programming.
       | 
        | This has been happening for a very long time through
        | abstraction. Layers upon layers of computing "just work"
        | without the programmer having to think about how or why.
        | Things like gRPC/OpenAPI etc. make it conceivable that one
        | day a product manager will just need to write the schema and
        | methods and hit "Deploy to AWS".
        
       | bovermyer wrote:
       | Math is not the insurance policy. Your social skills and your
       | ability to continually "sell" your worth to others are the
       | insurance policy.
        
       | ndonnellan wrote:
        | > You might think that programmers are expensive - the salaries of
       | programmers are quite respectable in comparison to other
       | industries. But if this were true, a lot more effort would go
       | into improving programmers' productivity, in particular in
       | creating better tools.
       | 
       | ...
       | 
       | > I am immensely impressed with the progress companies like
       | Google or IBM made in playing go, chess, and Jeopardy, but I keep
       | asking myself, why don't they invest all this effort in
       | programming technology?
       | 
       | It seems like Google does invest in programming technology, but a
       | lot of that tech is internal. Google spends an order of magnitude
       | more money on employee productivity than any other job I've
       | worked at. But that's probably because at previous jobs we spent
       | <<1% of salary on tools and didn't have economies of scale.
        
         | gowld wrote:
          | Programming productivity gets absolutely massive investment
          | -- secret, proprietary, and open source alike.
          | 
          | The reason it seems otherwise is that software has an
          | infinite appetite for increased productivity, since it
          | lacks the friction and energy costs seen in almost every
          | other endeavor. There are essentially two throttles on the
          | exponential improvement in computing: (1) the speed of
          | electromagnetic signals, and (2) the speed at which humans
          | can learn newly invented things and use them to invent
          | newer things.
        
       ___________________________________________________________________
       (page generated 2020-02-24 23:00 UTC)