[HN Gopher] Short term usability is not the same as long term us...
       ___________________________________________________________________
        
       Short term usability is not the same as long term usability
        
       Author : ingve
       Score  : 90 points
       Date   : 2020-06-10 12:12 UTC (10 hours ago)
        
 (HTM) web link (nibblestew.blogspot.com)
 (TXT) w3m dump (nibblestew.blogspot.com)
        
       | jnxx wrote:
       | > Perhaps the best known example of this kind of tool is Make. At
       | its core it is only a DAG solver and you can have arbitrary
       | functionality just by writing shell script fragments directly
       | inside your Makefile.
       | 
        | I am still fascinated by redo
        | (https://redo.readthedocs.io/en/latest/), which serves exactly
        | the same purpose as Make, but turns its interface inside out:
        | Make is a DSL for describing an acyclic dependency graph, with
        | attached command lines to compile, link, install or run stuff,
        | with a minimal set of operations. The fact that Make is not
        | language-specific, and one just uses shell commands to build
        | stuff, makes it versatile. But there are edge cases which have
        | the effect that people often prefer to do "make clean",
        | because dependencies might not be captured completely.
       | 
        | "redo" is internally more complex, but basically it is a set of
        | just ten shell commands which are used declaratively to capture
        | dependencies, and are part of build scripts. For example,
        | 
        |       #!/bin/sh
        |       redo-ifchange /usr/local/include/stdio.h
        |       redo-ifchange helloworld.c
        |       gcc -I/usr/local/include helloworld.c -o $3
        | 
        | has the effect of rebuilding the program if the file
        | "/usr/local/include/stdio.h" or the source file "helloworld.c"
        | has changed - be it by the programmer or by a system update.
        | That's it. No "clean" command is needed.
       | 
       | The result is, in terms of user interface and usage,
       | fascinatingly simple (I tried it with a ~ 20,000 lines C++
       | project which generated a Python extension module using boost.)
       | The lack of special syntax needed is just astonishing.
       | 
        | But I wonder how well it would cope with all the additional
        | complexities like autoconf and autotools - knowing that all the
        | complexity of these tools is usually there for a reason.
        
         | cjfd wrote:
          | One thing one usually does with make is also figure out, during
          | the make run, which header files helloworld.c depends on. Is
          | the above code fragment hand coded or generated? I am also
          | wondering whether this works if one only wants to build part of
          | the project. make can take a list of files to rebuild as
          | arguments. This also provides the ability to only create some
          | set of intermediate files.
        
           | jnxx wrote:
           | > Is the above code fragment hand coded or generated?
           | 
           | It is a handwritten example, there is no code generation
           | involved. However, standard rules can be defined by patterns
           | for specific file extensions.
           | 
            | Basically, how redo works is that it runs the build scripts
            | to generate a target for the first time (these are shell
            | scripts ending with *.do), and along the way it records every
            | dependency of each target in an sqlite database, using a
            | very small number of shell commands to define them in a
            | declarative way. To rebuild, it then uses this database of
            | dependencies.
            | 
            | Because it knows all the dependencies, and because the build
            | product is always written to $3 and then mv'd to the target,
            | it can build everything in parallel by default.
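            | 
            | To make that concrete, here is a minimal sketch of a
            | pattern rule (file names invented): a script named
            | default.o.do builds any foo.o from foo.c. Inside it, $2
            | is the target name without the extension and $3 is a
            | temporary output file that redo renames to the real
            | target on success:
            | 
            |       redo-ifchange "$2.c"
            |       gcc -c -o "$3" "$2.c"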
        
           | jnxx wrote:
           | You could read here:
           | https://redo.readthedocs.io/en/latest/#whats-so-special-
           | abou...
           | 
           | (I think this explanation is much better than what I can do)
           | 
           | And yes, it can build specific intermediate files or targets.
        
         | Nullabillity wrote:
         | If you trust yourself to capture the dependencies totally then
         | it doesn't really matter whether you use make or redo, you
         | could add the stdio.h dependency to either. But the only way to
         | actually achieve reliable clean-less builds is to run it in an
         | environment where you either take away all access to non-
         | dependencies (like Nix[0]) or automatically record all accessed
         | files (like Tup[1]).
         | 
         | GCC also supports emitting header lists in a format that Make
         | understands, but that won't cover non-GCC targets, or be as
         | comprehensive as doing it in the build system.
         | 
         | [0]: https://nixos.org/
         | 
         | [1]: http://gittup.org/tup/
        
           | jnxx wrote:
           | > But the only way to actually achieve reliable clean-less
           | builds is to run it in an environment where you either take
           | away all access to non-dependencies (like Nix[0]) or
           | automatically record all accessed files (like Tup[1]).
           | 
           | You can also let the compiler record dependencies. Gcc, for
           | example, does have an option for that.
           | 
           | That's explained here:
           | https://redo.readthedocs.io/en/latest/#whats-so-special-
           | abou...
        
           | jnxx wrote:
            | As an aside, I think Nix or Guix are indeed good complements
            | to things like redo! Guix System and NixOS are geared towards
            | defining the whole system, while make or redo build a single
            | program or a set of artifacts.
        
         | aequitas wrote:
         | I (ab)use Make for anything but compiling C sources to
         | binaries. Eg. all my Python projects use Make to conditionally
         | create virtualenv's, install packages, run code linting/tests.
         | For which Make is just the most ubiquitous and simple tool
         | available.
         | 
          | But one thing that still annoys me with Make is the
          | workarounds needed to work with stuff whose state cannot be
          | derived from a file (or only from complex paths). Things like
          | having a background process running (eg. a test db) or an
          | external package that needs to be installed (which can be
          | easily queried using apt, but is much harder to determine
          | reliably from system files). I end up having to create
          | intermediate hidden files which sometimes drift out of sync
          | with the real state. Then things really become messy with Make
          | for me.
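          | 
          | (For illustration, the workaround I mean is the usual stamp
          | file pattern - the paths here are just an example, not from a
          | real project:
          | 
          |       .venv/.stamp: requirements.txt
          |           python -m venv .venv
          |           .venv/bin/pip install -r requirements.txt
          |           touch $@
          | 
          | The hidden .stamp file stands in for "the virtualenv is up to
          | date", and nothing keeps it honest if the real environment
          | changes behind Make's back.)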
         | 
         | Would redo be a better solution in this case as well?
        
           | jnxx wrote:
           | > But one thing that still annoys me with Make are the
           | workarounds needed to work with stuff which state can not be
           | derived from a file
           | 
            | I am not sure about that one. I think the reason is that make
            | is not made for general scripting; it is made for generating
            | a deterministic build product, in minimal time, from a
            | deterministic input (the source files). In that sense, it is
            | something like a "pure function" in functional programming.
            | If you build software, you do not want the result to depend
            | today on your CPU fan's speed and tomorrow on the moon's
            | phase. In other words, you really try to make the result
            | deterministic.
            | 
            | What you certainly _can_ do is first generate a kind of task
            | file from the current system state (e.g., "needs to re-create
            | a test db"), and then let redo build these targets.
           | 
           | For installing / updating stuff, I do not see problems, but I
           | believe package managers such as guix or nix or pip have more
           | specialized options for this.
        
         | dllthomas wrote:
         | > No "clean" command is needed.
         | 
         | I don't see how this follows from what you've said. IME,
         | `clean` is for when I want things rebuilt _despite_ "nothing"
         | having changed. Maybe it's actually the case that nothing's
         | changed, and I want a full build because I want to know how
         | long my build takes. Or maybe I've changed implicit
         | dependencies - compiler version or similar.
         | 
         | Could you elaborate?
        
           | jnxx wrote:
            | Ah - no, it does not follow from what I said; I am just
            | reporting redo's properties.
           | 
           | As said, _using_ redo is very very simple, but it is also
           | mind-blowingly different from make. It is not difficult to
           | understand, but the original documentation explains it far
           | better than I could here.
        
           | jnxx wrote:
           | > IME, `clean` is for when I want things rebuilt despite
           | "nothing" having changed.
           | 
           | Of course, you can use clean in this case. You can do that
           | also with redo, just delete all the intermediate files and
           | start a normal build.
           | 
           | However the point is that you do a "make clean" most often
           | _to be sure_ that all dependencies are refreshed, because you
           | are actually _not_ sure that your Makefile captures them all.
           | 
            | A good example is builds in Yocto. This is a system to build
            | complete embedded Linux images based on make, with many
            | subprojects. But if you change, for example, the kernel to
            | support large files, or a 32 bit time_t, you can't be sure
            | that a simple "make" delivers a correct system, because the
            | dependencies on the system headers are not included in all
            | the makefiles.
        
             | danaris wrote:
             | Please forgive the confusion, as I've only dabbled in Make,
             | but...
             | 
             | > You can do that also with redo, just delete all the
             | intermediate files and start a normal build.
             | 
             | ...isn't that pretty much what "make clean" generally
             | _does_? I mean, in a project of any significant size,  "all
             | the intermediate files" aren't going to be trivial to
             | delete without a pre-built "delete all the intermediate
             | files" script...which is what "make clean" is (at least
             | partly) for.
             | 
             | Or are you just saying that "redo" (which I'd never heard
             | of before, so, again, please forgive my ignorance) can also
             | process a pre-built list of "intermediate files" to delete
             | with some particular command or option...?
        
               | dllthomas wrote:
               | If I understand, the real point was that clean is less
               | often needed.
               | 
               | Having said that, you could certainly list system headers
               | in a Makefile (... hopefully a generated one) if that's
               | the behavior you want, so I am not sure what difference
               | is actually being pointed to.
        
               | jnxx wrote:
               | > Having said that, you could certainly list system
               | headers in a Makefile
               | 
                | You can. But that's a lot of headers. You can instruct
                | gcc to tell you the dependencies, but this information
                | is hard to pipe into make because you'd need to generate
                | a new Makefile from it (which is more or less what some
                | larger projects do).
                | 
                | With redo it is easy to use this compiler-generated
                | dependency information, and the paramount reason is that
                | it is not a dependency-DAG DSL with added shell snippets
                | but just a shell script with added shell commands which
                | define dependencies. This is what I mean by "it is Make
                | turned inside out".
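                | 
                | For instance (a sketch only, not copied from the
                | docs), a default.o.do rule can feed gcc's own
                | dependency output straight back into redo:
                | 
                |       redo-ifchange "$2.c"
                |       gcc -MD -MF "$2.d" -c -o "$3" "$2.c"
                |       read DEPS <"$2.d"
                |       redo-ifchange ${DEPS#*:}
                | 
                | The last two lines strip the "target:" prefix from
                | the .d file and declare every listed header as a
                | dependency for the next run.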
        
               | dllthomas wrote:
               | I should note that in general, decomposing things into
               | multiple shell utilities is something I'm a huge
               | proponent of, and assuming redo does it well that's
               | really cool!
               | 
                | That said, I don't think "depending on system headers" is
                | somewhere make is lacking. The -M argument to gcc or
                | clang gives you a well-formatted Makefile fragment which
                | includes system headers. You have to explicitly ask that
                | they be excluded (-MM) if that's what you want (which it
                | was often enough for the option to be added, it seems).
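                | 
                | For reference, a sketch of the usual GNU Make idiom
                | built on that (using the related -MD flag so the .d
                | files fall out of compilation as a side effect; the
                | file names are made up):
                | 
                |       OBJS := main.o util.o
                | 
                |       %.o: %.c
                |           gcc -MD -MP -c -o $@ $<
                | 
                |       -include $(OBJS:.o=.d)
                | 
                | The -include pulls the generated per-file dependency
                | lists, system headers included, back into the
                | Makefile on the next run.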
        
               | jnxx wrote:
               | > ...isn't that pretty much what "make clean" generally
               | does?
               | 
               | Yes, exactly.
               | 
                | The point is that "make clean" is used because the
                | Makefile does not capture the dependencies perfectly. So,
                | "make clean" ensures a clean rebuild. But that can cost a
                | lot of time, especially in large projects or when using
                | C++ header libraries such as boost::python or Eigen.
                | 
                | Now, if you were sure that your dependency description is
                | correct, you could skip "make clean". And this is what
                | redo provides: on the one hand, it allows you to use
                | dependency information provided by the compiler, and
                | moreover, it covers a lot of corner cases precisely.
                | Those corner cases not always working is what "make
                | clean" is typically used to work around.
        
       | chewxy wrote:
       | Maximal cooperation?
       | 
       | Two schools of thought:
       | 
        | 1. Maximal cooperation requires maximal composability
       | 
       | 2. Maximal cooperation requires some dictatorship
       | 
       | Interestingly I find that this is quite embodied in two of my
       | favourite languages - Go and Haskell.
       | 
       | Haskell is all about composition. And it does so by dictatorship
       | of its type system.
       | 
       | Go is dictatorial in the sense that there is one way to do
       | things, but encourages compositionality, coming from a unix world
       | where pipes are everything.
        
       | allenu wrote:
       | I agree with the notion.
       | 
        | I find that a lot of less experienced devs I work with like to
        | prioritize "ease of use" in API design over other things, such as
        | testability, orthogonality, decoupledness, reversibility, etc. If
        | an API is "easy to use" from a client perspective, they often
        | deem it a good one. API ease of use is definitely important, but
        | it has to be weighed against other constraints, which are fuzzier
        | and more about long-term maintainability. Sometimes making an API
        | slightly harder to use (often requiring additional client
        | knowledge about the domain before using it) is worth the trade-
        | off since it makes the API easier to extend in the future.
       | 
       | It's definitely a skill to learn what helps long-term usability
       | vs short term usability.
       | 
       | I often go back to Rich Hickey's talk about Simple Made Easy when
       | thinking about this problem.
       | https://www.infoq.com/presentations/Simple-Made-Easy/
        
         | umvi wrote:
          | IMO "public" facing APIs should _always_ be easy to use and
          | require only the minimum necessary information from the user.
          | An example of an outstanding public API would be nlohmann's
          | json library for C++[0].
         | 
         | Whether that API is merely a wrapper for an internal API that
         | is more testable (i.e. allows injection, etc.) or whatever is
         | another matter.
         | 
         | [0] https://github.com/nlohmann/json
        
           | allenu wrote:
           | I think there can be debate on what is "minimum information".
           | I'd also say "easy" for one developer may be challenging for
           | another developer if the domain of the model is foreign to
           | them.
           | 
           | A lot of frameworks require up-front knowledge to work with.
           | To some, that's not "easy", but it allows the client to do so
           | much because what the framework is providing is not simple.
           | 
           | In other places, the API can be dead easy because what it's
           | providing is so simple.
        
       | sanxiyn wrote:
       | I consider this a fundamental insight of Rust programming
       | language: that it is a programming language optimized for long
       | term usability.
       | 
       | No wonder it is struggling for adoption.
        
         | jnxx wrote:
          | For the deeply conservative domain that infrastructure code is,
          | it has seen fantastically fast and enthusiastic adoption. It
          | will take a while until it is a standard choice for building
          | python extensions and such (which also makes sense; this stuff
          | should last for a while).
        
         | MaxBarraclough wrote:
         | I wouldn't say Rust is 'struggling'. Compared to other
         | languages, like say D and Nim, it's doing well.
         | 
         | No language could possibly topple its incumbent rivals
         | overnight. To do this, it would have to have great advantages
         | over existing languages, _and_ excellent interoperability,
          | _and_ an approachable learning-curve. That's essentially a
          | contradiction. If your language offers a new and better way to
          | do things, it pretty much _must_ be unfamiliar to those who use
          | older languages. If the concepts involved were similar, you'd
          | be releasing a library, or publishing a compiler optimisation,
          | rather than developing a whole new language.
         | 
         | Perhaps that's a bit of a generalisation though. TypeScript
         | isn't doing anything new, it's just adding an old and familiar
         | feature (static type checking) to an old and familiar language
         | that lacks it. In TypeScript's case, the feature isn't ground-
         | breaking, but it's valuable enough that it may be worth the
         | pain of using a different language.
        
           | Symmetry wrote:
           | It seems like C++ offered all of "it would have to have great
           | advantages over existing languages, and excellent
           | interoperability, and an approachable learning-curve" and
           | worked hard and made whatever compromises were needed to do
           | it. Which is probably why it took off relatively quickly.
        
       | cjfd wrote:
       | The general point the author is making sounds true but I am
       | sceptical regarding any criticism against make. Building and/or
       | handling dependencies was basically a solved problem as soon as
       | make was invented and all of the new stuff in this area just
        | seems plainly unneeded to me. Also, when people invent new build
        | systems, one can end up with projects where one part is built
        | using one build system and another part using another. Since
        | things can depend on each other in arbitrarily complex ways
        | because of code generation, this will lead to building either too
        | much or too little. In particular, having a build system for a
        | particular language is such a strange concept.
        
         | jnxx wrote:
          | Make works pretty well, that's why it is used. But there are a
          | few points it does not cover:
          | 
          | - Recursive building is complicated. It is not really easy to
          | compose a small unit into a larger project, so that one just
          | needs to copy the sub-unit into a folder.
          | 
          | - In some instances, it is necessary to type extra rounds of
          | "make clean", because the dependency analysis of make does not
          | understand that dependencies have changed.
          | 
          | - Say you build a program which includes
          | /usr/include/ncurses.h. Then, you do a system update which
          | replaces your ncurses library. Or, you do a git pull which
          | changes that header. Make will not rebuild your binary without
          | "make clean".
          | 
          | - Files appearing or disappearing along the include path can
          | change the result of a "make clean; make all", but are not
          | detected by make's dependency analysis. Here is an explanation
          | by D.J. Bernstein: http://cr.yp.to/redo/honest-nonfile.html
          | 
          | - Doing atomic rebuilds: http://cr.yp.to/redo/atomic.html,
          | so that a new build product is either there, and complete, or
          | not there.
          | 
          | - Parallel builds (which are related to the previous point, and
          | also to the necessity of having complete dependency
          | information).
        
           | boomlinde wrote:
           | _> in some instances, it is necessary to type extra instances
           | of  "make clean", because the dependency analysis of make
           | does not understand that dependencies have changed._
           | 
           | The only sense in which Make will analyze a dependency is
           | that it will run rules for which the target does not exist or
           | has a less recent mtime than any of its dependencies. When
           | what you describe happens, it's not a failure of Make's
           | dependency analysis, but your failure to specify the
           | dependency. This is easier said than done with the Make+CC
           | combo, but I attribute that problem to quirks of C and CC,
           | not to Make's dead simple and basically infallible dependency
           | analysis.
           | 
            | The only way to cause Make's dependency analysis to "fail" is
            | to modify the mtime of an already built target to be more
            | recent than that of a dependency that would otherwise be more
            | recent. That's only a failure in the sense that it might not
            | be desirable; it's still well-defined behavior of Make.
           | 
           |  _> say you build a program which includes
           | /usr/include/ncurses.h. Then, you do a system update which
           | replaces your ncurses library. Or, you do a git pull which
           | changes that header. Make will not rebuild your binary
           | without "make clean"._
           | 
            | If this happens, it's because you have a dependency on
            | /usr/include/ncurses.h that you have not specified for the
            | target. Again, it is easier said than done to specify the
            | location of system-wide header files that are maybe only
            | resolved using pkg-config because you have no idea of their
            | location on the users' systems, but that's not a problem Make
            | sets out to solve; that's for C and the ecosystem of
            | applications that exists to patch up its deficiencies.
           | 
           | Even worse is for example if you've specified your dependency
           | on the ncurses header, update the system and end up with a
           | new ncurses header that has new includes of its own that you
            | have not specified. The only way you'll ever know is by
            | reading the ncurses header file. If you don't, the next system
           | upgrade might update those unspecified indirect dependencies
           | and your Makefile will shrug because you haven't specified
           | those as dependencies. Broken, but really not on the Make
           | end.
           | 
           |  _> files appearing or disappearing along the include path
           | can change the result of a  "make clean;make all", but are
           | not detected by make's dependecy analysis. Here an
           | explanation of D.J. Bernstein: http://cr.yp.to/redo/honest-
           | nonfile.html _
           | 
           | If the developer in the scenario described had simply added
           | "vis.h" as a dependency to the rule that depends on it, it
           | would not have happened. Creating "vis.h" and adding it and
           | running make again would have solved the problem.
           | 
           |  _> doing atomically rebuilds:
           | http://cr.yp.to/redo/atomic.html, so that a new dependency is
            | either there, and complete, or not there_
            | 
            |       target: generate_something
            |           ./generate_something -o target.tmp && mv target.tmp target
           | 
           | _> parallel builds_
           | 
           | GNU Make for example does parallel builds fine. Yes, it's
           | necessary to have complete dependency information (which in
           | Make is achieved only by specifying it) but that's a
            | necessity for any build system that aims to reliably track
           | dependencies in every case.
           | 
           | In general, your criticism seems aimed specifically against
           | Make+CC. In that sense I completely agree that it's not a
           | great combination and that there are probably solutions much
           | better tailored for building C code that address its quirks
            | specifically. Make, however, works well when you know what
            | and where your dependencies are. C, when used to build
            | software for n>1 frequently updated operating systems,
            | presents a header and library labyrinth that makes that a
            | non-trivial problem.
        
         | BiteCode_dev wrote:
          | The problem with make is that it's yet another DSL to learn.
         | 
         | Every single tool in your toolbox could introduce a new one,
         | and it leads to fatigue.
         | 
         | Task runner? New DSL (make)
         | 
         | Test runner? New DSL (robot)
         | 
          | Deployment? New DSL (Ansible)
         | 
         | Batch spawning? New DSL (tox)
         | 
         | Etc
         | 
         | Of course, dev in half of those DSL is a total pain, because,
         | like with most DSL, the tooling sucks: terrible completion,
         | linting, debugging or composing experience.
         | 
         | So people write/leverage tools they can use with their favorite
         | language. And why not? You have to install it anyway (make
         | isn't default on windows, it's not even installed on vanilla
         | Ubuntu!).
         | 
         | When I need a make-like tool, I use doit (https://pydoit.org/).
         | 
         | Why?
         | 
         | It's in Python, the language of my projects. So I can use the
         | same tooling, the same libs, the same install and testing
         | procedures. And so can people contributing: no need to train
         | them. Most devs don't know how to use make (most of them are on
         | Windows after all).
         | 
          | Using make adds zero benefit for me: doit just does what make
          | does (ok, that sentence is hard to parse :)). But make adds an
          | extra step of asking people to install it, while I can just
          | slip doit in among my other python project dependencies. With
          | make, I have to google the syntax. I can't use tab to complete
          | names, or right-click to get documentation. And if (actually
          | when) I need to debug my Makefile, God have mercy.
         | 
         | It's not against make. It just doesn't provide enough value to
         | justify the cost of it.
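          | 
          | For a taste (a minimal sketch, file names invented), a doit
          | task lives in a plain dodo.py:
          | 
          |       def task_test():
          |           """Run the test suite when sources change."""
          |           return {
          |               'file_dep': ['app.py', 'test_app.py'],
          |               'actions': ['pytest'],
          |           }
          | 
          | Running "doit" then behaves like make: the action is re-run
          | only when one of its file dependencies has changed.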
        
           | The_rationalist wrote:
            | I agree with this. Personally, I use Gradle with Kotlin
            | script, which is just the normal programming language but
            | behaves like a REPL.
        
       | ssivark wrote:
        | I like the two graphs used to organize the conversation. IMHO the
        | key to success (max bang for the buck) is better composability
        | and as little boilerplate code as possible. Haskell and Julia
        | (for example) do this fantastically well.
       | 
       | The essential theme of long term usability is very close to what
       | Guy Steele talks about in his fantastic talk/article _Growing a
       | language_ -- you want as little as possible embedded into the
       | language, and you want as much as possible farmed off to the
       | libraries so that users can compose the pieces they like without
       | drowning in too much code.
        
       | [deleted]
        
       | boomlinde wrote:
       | I agree with the sentiment in the headline, but want to offer a
       | counter example.
       | 
       | Consider Emacs vs. Notepad++, for the purpose of editing code.
       | Emacs in this example represents "maximal flexibility" for having
       | a configuration interface where almost every aspect of its
       | function can be (re)programmed, and Notepad++ represents "maximal
       | cooperation" for having a configuration interface and limited
       | toolset tailored to the task specifically at hand (editing code).
       | I'm not going to contribute code to either project; submitting
       | patches for a "maximally cooperative" system to adapt it to your
       | use case is just an advanced and inconvenient form of "maximal
       | flexibility".
       | 
       | In my experience this has the opposite relationship to that
       | described in the article. Getting started with Emacs is a
       | significant investment (as per the article, "everything is
       | possible but nothing is easy"), while Notepad++ is pretty much
       | pick-up-and-go out of the box, but over time the extensibility of
       | the former pays off in a better functionality/amount of work
       | ratio.
       | 
       | There is an example of "maximal flexibility" in the article, but
       | none for "maximal cooperation", and I'd like to see one.
        
       | hinkley wrote:
       | I remember when I was starting out that people talked about this
       | problem a lot. But it's been years since I've heard it said
       | unless I was the one saying it.
       | 
       | I still routinely wonder if the right solution is to build up an
       | 'action bar' the way video games often do. Microsoft got into
       | this neighborhood with the Ribbon but I feel like they missed the
       | target.
       | 
        | You graduate to more sophisticated options, and the ones you use
        | all the time are mapped to whatever keys _you_ want them to be.
        | Add a hidden flag for setting up a new machine or user where you
        | jump straight to expert mode and you're done. This costs the expert
       | one more setting to memorize, but the payoff for doing so is
       | quite lucrative.
       | 
       | Shortcuts seem to work great on QWERTY but less awesome for
       | everyone else. Just let me set my own so I don't have to use an
       | extra finger on Dvorak or a Japanese keyboard.
        
         | tomlagier wrote:
         | I love this idea!
         | 
         | I feel like there are three paradigms that can cover 100% of
         | software.
         | 
         | The first is re-mappable hotkeys. An action bar would be great,
         | but failing that - obvious tooltips of the hotkey on every
         | single action. Also a complete and searchable reference that's
         | easily discoverable.
         | 
         | The second is a quick-actions panel (a la Alfred, Spotlight,
         | Cmd-P in Chrome Inspector and VSCode). Having this as a
         | generally available searching hub is a huge timesaver, and lets
         | you quickly find actions that you might not have bound. It's
         | also (IMO) the best interface for quickly navigating between
         | objects in different trees.
         | 
         | The third is a bog-standard menu system. It should use the same
         | patterns that are ingrained in every software user's head from
         | their first program (File, Edit, View, ..., Window, Help).
         | 
         | If every program had these features I feel like there would be
         | very little UI friction at the expert level and a good deal of
         | comfort at the beginner level.
         | 
         | Providing a good default "action bar" and mapping of keys for
         | the beginner is left as an exercise for the UX team :)
        
         | mjevans wrote:
         | While that sounds like a great idea...
         | 
         | * How do these shortcuts remain consistent between
         | applications?
         | 
         | * What's a system wide, task-type-wide, and program specific
         | action?
         | 
         | * How do these persist across different versions of OS,
         | application, devices, different ownership (work / home / etc),
         | or otherwise follow the user?
         | 
         | * "X is broken" use the shortcut... wait what did you configure
         | that as?
        
         | renewiltord wrote:
          | There are some examples of this UI pattern of advancing users'
          | skill in your program and matching them as they grow (a pattern
          | I'm a fan of):
         | 
         | * Old-school programs used to have Basic/Advanced/Expert mode
         | 
         | * Games like LoL etc. slow-introduce
         | heroes/champions/characters so you learn to play them
         | 
         | * In an IDE like IntelliJ with KeyPromoter or variant installed
         | you start off with clicking through the UI and it tells you
         | each time what the shortcut key could have been
         | 
         | * Clippy was a failed attempt. It's really hard to guess at
         | intent.
        
       | m463 wrote:
       | You could make the same argument for presales vs postsales.
       | 
       | There are numerous examples of this:
       | 
       | The look of the apple keyboard in a store, vs day-to-day
       | functionality (or admittedly many apple products, such as a
       | glossy display)
       | 
       | Any RGB product for sale now -- RGB keyboards, RGB mouse, RGB
       | computer, RGB system memory. (get it home, turn it off)
       | 
        | Meanwhile, a trackball or weird vertical mouse might be
        | completely unapproachable, but for the folks who need them, they
        | are usable forever after putting in the time.
        
       | wruza wrote:
        | >project('tutorial', 'c')
        | >gtkdep = dependency('gtk+-3.0')
        | >executable('demo', 'main.c', dependencies : gtkdep)
       | 
        | This is a snippet from the meson tutorial, and it's why I'm still
        | using make (and not cmake) in all my personal C projects. I have
        | no idea what flags will be passed to gcc and how to change them
        | (-mmsbitfields was required on windows for gtk). Second, I may
        | have no pkg-config environment when I link a windows executable
        | against msys2-installed libs in plain cmd. These shorthands may
        | be a long-term win for a regular project in a strict unixlike
        | environment or msvc-env.bat, but it is not a build system. It is
        | a fixed recipe book (as seen from the tutorial page; don't take
        | it as criticism). It substitutes simple knowledge of -I, -L and
        | -l with a cryptic set of directives. You spoke gcc very well;
        | now you have to speak meson's local dialect and be able to catch
        | and fix subtle errors in translation.
       | 
        | The problem is, one has to dig into a seemingly easy build
        | system to tune it to one's needs. That is much harder than just
        | fixing CFLAGS+= or LDFLAGS+= in a Makefile.
       | 
        | For me, a better build system would look like a set of rules, not
        | in a Makefile but in a general-purpose language, like:
        | 
        |       var all_srcs = qw('a.c b.c')
        |       if_(files_changed(all_srcs)).do_(changed_srcs => {
        |         changed_srcs.forEach(src => compile(src))
        |       })
        |       fn compile(src) {
        |         exec('gcc', CFLAGS, '-o', to_o(src), src)
        |       }
        | 
        |       if_...
       | 
        | This simple process would cover 99% of common cases and you're
        | still in control of everything. Just prepend:
        | 
        |       if (os == 'msys2') {
        |         CFLAGS += ' ...'
        |       }
       | 
        | And that's it. That is long-term usability, because you may open
        | this file a year later and still figure out what needs to be done
        | in a few seconds.
        
         | wwright wrote:
         | I'd recommend you take a look at how Bazel works (Meson may be
         | similar if you look further, but I haven't used it much
         | myself). The default interface you get is relatively "high-
         | level", but everything behind the scenes is a general-purpose
         | system like what you describe, and you can customize it pretty
         | deeply.
         | 
          | What makes it _really_ great IMO is that the language and tool
          | are designed for best practices. For example, your scripts
          | can't actually execute anything: they can only tell the build
          | system what command would be used to build the file and what
          | the dependencies would be. The sandboxing allows the build
          | system to be pretty hermetic without much effort. This means
          | that it
         | can always parallelize your build, and incremental builds are
         | always fast and correct.
        
       ___________________________________________________________________
       (page generated 2020-06-10 23:00 UTC)