[HN Gopher] Fragile narrow laggy asynchronous mismatched pipes k...
       ___________________________________________________________________
        
       Fragile narrow laggy asynchronous mismatched pipes kill
       productivity
        
       Author : trishume
       Score  : 151 points
       Date   : 2020-05-17 15:19 UTC (7 hours ago)
        
 (HTM) web link (thume.ca)
 (TXT) w3m dump (thume.ca)
        
       | ChrisMarshallNY wrote:
       | This article has a point.
       | 
       | But, as in all things software, "it depends."
       | 
       | It depends on what the tools are, and what we are writing.
       | 
       | In my own case, I have the luxury of writing fully native Swift
       | code for Apple devices. I don't need to work with anything off
       | the device, except for fairly direct interfaces, like USB, TCP/IP
       | or Bluetooth.
       | 
       | Usually.
       | 
       | I have written "full stack" systems that included self-authored
       | SDKs in Swift, and self-authored servers in PHP/JS.
       | 
       | I avoid dependencies like the plague. Some of them are excellent,
       | and well worth the effort, but I have encountered very few that
       | really make my life as an Apple device programmer that much
       | easier. The rare ones I do use (like SOAPEngine, or ffmpeg, for
       | instance), are local to the development environment, and usually
       | quite well-written and supported.
       | 
       | If I were writing an app that reflected server-provided utility
       | on a local device, then there's a really good chance that I'd use
       | an SDK/dependency with network connectivity, like GraphQL, or
       | MapBox. These are great services, but ones that I don't use (at
       | the moment).
       | 
       | I'm skeptical of a lot of "Big Social Media" SDKs. I believe that
       | we just had an issue with the FB SDK.
       | 
       | That said, if I were writing an app that leveraged FB services, I
       | don't see how I could avoid their SDK.
       | 
       | So I write fully native software with Swift, and avoid
       | dependencies. That seems to make my life easier.
       | 
       | But Xcode is still a really crashy toolbox.
        
       | wpietri wrote:
       | Funnily, I thought the headline was talking about development
       | process, as that also describes how a lot of places (mis-)handle
        | the flow of what gets worked on.
        
       | jancsika wrote:
       | > Untrusted: If you don't want everything to be taken down by one
       | malfunction you need to defend against invalid inputs and being
       | overwhelmed. Sometimes you also need to defend against actual
       | attackers.
       | 
       | Sorry, but unless your centralized alternative is only used
        | internally by troglodytes, you have to at least defend against
       | invalid inputs.
        
       | carapace wrote:
       | > I hope this leads you to think about the ways that your work
       | could be more productive if you had better tools to deal with
       | distributed systems, and what those might be.
       | 
        | We have tools. The Promela/SPIN model checker is one, just off the
       | of my head.
        
       | chubot wrote:
       | I think git is a good model for what would otherwise be "laggy
       | async and mismatched" distributed systems.
       | 
       | It has a fast sync algorithm, and after you sync, everything
       | works locally on a fast file system. You explicitly know when
       | you're hitting the network, rather than hitting it ALL THE TIME.
       | 
       | -----
       | 
       | I would like to use something like git to store the source code
       | to every piece of software I use, and the binaries. That is, most
       | of a whole Linux distro.
       | 
       | I have been loosely following some "git for binary data" projects
       | for a number of years. I looked at IPFS like 5 years ago but it
       | seems to have gone off the rails. The dat project seems to have
       | morphed into something else?
       | 
       | Are there any new storage projects in that vein? I think the OP
       | is identifying a real problem -- distributed systems are
       | unreliable, and you can get a lot done on a single machine. But
       | we are missing some primitives that would enable that. Every
        | application is littered with buggy and difficult network logic,
       | rather than having a single tool like git (or rsync) which would
       | handle the problem in a focused and fast way.
       | 
       | It would be like if Vim/Emacs and GCC/Clang all were "network-
       | enabled"... that doesn't really make sense. Instead they all use
       | the file system, and the file system can be sync'd as an
       | orthogonal issue.
       | 
       | Sort of related is a fast distro here I'm looking at:
       | https://michael.stapelberg.ch/posts/2019-08-17-introducing-d...
        
         | hobofan wrote:
          | Not sure how well it fits your use-case, but I've been very
         | happy with git-lfs in combination with a NAS I have at home.
         | The NAS is just mounted as a normal network drive and available
         | to use with LFS via lfs-filestore[0] and available on the go
         | via the builtin DynDNS + VPC of the NAS.
         | 
         | I've been using it for a repo where I store all my
         | university/academic stuff like lectures (recordings), PDFs of
         | books and papers, Anki decks, etc., and it has now grown to be
         | ~120GB big.
         | 
         | Biggest issue was that I'm always running low on disk space
         | with my laptop, and git-lfs doesn't have a good built-in way to
         | only checkout part of the files on your machine, so I built a
         | small tool to make that easier[1]. Since I've been using that
         | it's been a pretty smooth ride.
         | 
         | [0]: https://github.com/sinbad/lfs-folderstore
         | 
         | [1]: https://github.com/hobofan/lfs-unload
        
         | dimatura wrote:
         | I have been interested in "git for binary data" for a while,
         | mostly for ML/computer vision purposes.
         | 
         | I've tried quite a few systems. Of course, there's git-lfs
         | (which keeps "pointer" files and blobs in a cache), which I do
          | use sometimes - but it has quite a few things I don't like. It
         | doesn't give you a lot of control on where the files are stored
         | and how the storage is managed on the remote side. The way it
         | works means there'll be two copies of your data, which is not
         | great for huge datasets.
         | 
         | Git-annex (https://git-annex.branchable.com/) is pretty great,
         | and ticks almost every checkbox I want. Unlike git-lfs, it uses
         | symlinks instead of pointer files (by default) and gives you a
         | lot of control in managing multiple remote repositories. On the
         | other hand, using it outside of Linux (e.g., MacOS) has always
          | been a bit painful, especially when trying to collaborate with
         | less technical users. I also get the impression that the main
         | developer doesn't have much time for it (understandably - I
         | don't think he makes any money off it, even if there were some
         | early attempts).
         | 
         | My current solution is DVC (https://dvc.org/). It's explicitly
         | made with ML in mind, and implements a bunch of stuff beyond
         | binary versioning. It does lack a few of the features of git-
         | annex, but has the ones I do care about most - namely, a fair
         | amount of flexibility on how the remote storage is implemented.
         | And the one thing I like the most is that it can work either
         | like git-lfs (with pointer files), like git-annex (with soft-
         | or hard-links), or -- my favorite -- using reflinks, when
         | running on filesystems that support it (e.g. APFS, btrfs). It
         | also is being actively developed by a team at a company, though
         | so far there doesn't seem to be any paid features or services
         | around it.
         | 
         | Pachyderm (https://www.pachyderm.com) also seems quite
         | interesting, and pretty ideal for some workflows. Unfortunately
         | it's also more opinionated, in that it requires using docker
         | for the filesystem, as far as I can tell.
         | 
         | Edit: a rather different alternative I've resorted to in the
         | past -- which of course lacks a lot of the features of "git for
         | binary data" -- is simply to do regular backups of data to
         | either borg or restic, which are pretty good deduplicating
         | backup systems. Both allow you to mount past snapshots with
         | FUSE, which is a nice way of accessing earlier versions of your
         | data (read-only, of course). These days, this kind of thing can
          | also be done with ZFS or btrfs, though.
        
           | chubot wrote:
           | Thanks for the response! I have heard of dvc and git annex,
           | and it's probably time to give them another try :)
        
         | trishume wrote:
         | This is also an interesting system in that it's an example of
         | how you can get away with a non-distributed system if your
          | problem is small enough, but eventually that falls over. Once
         | you get to large corporation monorepos git operations start to
         | get real slow and use too much hard disk, so you end up with
         | them either creating/using a new VCS or doing some complicated
         | undertaking like https://devblogs.microsoft.com/bharry/the-
         | largest-git-repo-o...
        
           | chubot wrote:
           | Yeah I agree, it's sort of an open problem, but I guess a
           | bunch of arrows are pointing toward FUSE.
           | 
           | Distri uses FUSE and it appears Microsoft's GVFS uses FUSE or
           | whatever Windows technology is the equivalent. (My teammates
           | developed Google's equivalent about 14 years ago, using FUSE,
           | so it's something I've used / seen several times.)
           | 
           | FUSE requires some kernel support (a module), which git of
           | course doesn't require. That is a barrier, but perhaps not an
           | insurmountable one. Basically I would like to offload all the
           | "network" work to the OS, so applications are free of that
           | logic.
        
         | nixpulvis wrote:
         | I agree that git is a great model, however it's often hard to
          | explain to new users why merge conflicts require time to
         | resolve, and nothing saves you from the added work.
         | 
         | Life just isn't completely decentralizable... sorry.
        
           | chubot wrote:
            | git isn't completely decentralized either. You can centralize
           | it like Github, and that plays an important role in the
           | ecosystem.
           | 
           | Decentralization is a spectrum, not the opposite of
           | centralization.
           | 
           | ----
           | 
            | One way to partially address the merge conflict problem is to
           | explode application files into directory hierarchies.
           | 
           | For example, Word .doc files and Photoshop .PSD files are
            | basically huge hierarchical data structures inside a single
           | file. I believe video formats also have significant
           | hierarchical structure.
           | 
           | A lot of them even have immutable portions and mutable
           | portions -- e.g. for storing an entire version history.
           | 
           | So if those were exploded into something that the OS (or
           | tools like git/rsync) could understand, like a tree of files,
           | then you would have a lot fewer merge conflicts.
           | 
            | That's how people tend to structure their git repos too. If you
           | have a frequently edited file that's hard to merge, then
           | that's a smell, and you can fix it.
           | 
           | This won't solve every problem but again there needs to be
           | something in between "rewrite Photoshop as Figma" and "email
           | around a bunch of PSD files" (which is a very flaky
           | distributed system on top of e-mail.)
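The "explode into a tree of files" idea above can be sketched for a JSON-like hierarchical document. This is purely illustrative (the function name and layout are invented), but it shows how per-leaf files would let git diff and merge at a much finer grain than one opaque blob:

```python
import json
import os


def explode(doc, root):
    """Write a nested dict as a directory tree: one file per leaf value.

    Two people editing different leaves then touch different files,
    so a tool like git can merge their changes without conflict.
    """
    os.makedirs(root, exist_ok=True)
    for key, value in doc.items():
        path = os.path.join(root, key)
        if isinstance(value, dict):
            explode(value, path)  # a subtree becomes a subdirectory
        else:
            with open(path, "w") as f:
                json.dump(value, f)
```

A real format like .psd or .doc would need a lossless mapping back into the single-file form, but the principle is the same.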
        
         | tsimionescu wrote:
         | > It would be like if Vim/Emacs and GCC/Clang all were
         | "network-enabled"... that doesn't really make sense. Instead
         | they all use the file system, and the file system can be sync'd
         | as an orthogonal issue.
         | 
         | Well, you may already be aware, but Emacs is actually 'network-
         | enabled' in this way through TRAMP, and to a lesser extent, the
         | emacs client/server protocol.
         | 
         | There are also many issues other than file syncing that
         | networking has to solve. And also, even for regular files,
         | there are many ways to interact with them, and many protocols
         | that existing systems can already speak.
        
         | j88439h84 wrote:
         | > most of a whole Linux distro.
         | 
         | It's NixOS.
        
         | jazzyjackson wrote:
         | I've been bouncing an idea around for a while, on how could I
         | use git as a back-end for a filesharing/chat/collaboration
         | suite -- I think it would work to have a git hook, pre-commit,
         | that replaces all binary files / blacklisted file extensions,
         | with a text file whose contents are the magnet address to
         | download the large binary file over torrent - so the file name,
         | permissions etc don't change, but a small text file is
         | committed instead of the binary.
         | 
         | So, as a consumer of the repo, I would clone the repo, and then
         | have a post-checkout hook check the blacklisted file extensions
         | and grab their contents: the magnet link, all that's left is to
         | find peers, download the file and replace the text file with a
         | hardlink to the binary. Maybe this is all too laggy,
         | asynchronous, and mismatched, but I think git+magnet could be a
         | cool combination.
         | 
         | I found this repo [1] that generates the magnet link for you, I
          | would just need to find a way to use the repo to make all the
         | contributors peers to each other, so we can download the
         | binaries from whoever is nearest / highest bandwidth from us.
         | 
         | [1] https://github.com/casey/intermodal
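The pre-commit replacement step described above could look roughly like this sketch (the magnet generation is stubbed out; in practice it would shell out to a tool like intermodal, and the extension list and function names are invented):

```python
import os

# Extensions to replace with magnet-link pointer files (illustrative).
BLACKLIST = {".psd", ".mp4", ".zip"}


def magnet_for(path):
    # Stand-in for a real magnet-link generator such as intermodal.
    return "magnet:?xt=urn:btih:" + format(abs(hash(path)), "x")


def pointerize(root):
    """Replace blacklisted files with small text files holding a magnet link.

    The file name stays the same, so the repo layout is unchanged;
    only the contents become a tiny pointer instead of the binary.
    """
    replaced = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            if os.path.splitext(name)[1].lower() in BLACKLIST:
                path = os.path.join(dirpath, name)
                link = magnet_for(path)
                with open(path, "w") as f:
                    f.write(link + "\n")
                replaced.append(path)
    return replaced
```

The matching post-checkout hook would do the reverse: read each pointer file, fetch the blob over torrent, and hardlink it into place.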
        
           | robochat wrote:
           | I think that a version of this idea underpins the operation
           | of the matrix messaging protocol. Rather than just sending
           | the latest messages, the conversation is synced between
           | clients to ensure that everyone sees the same history.
           | 
           | [1] https://matrix.org
        
         | gumby wrote:
         | Actually git "suffers" from two of the problems he listed:
         | cache coherency and, as with all filesystem based approaches,
         | serialization.
         | 
          | These don't matter for git, which manages to push all the
         | coherency issues onto the user and which can afford to operate
         | (in computational terms, not human terms) very slowly on small
         | amounts of data.
         | 
         | I'm not saying git is slow (it's gratifyingly fast) but it has
         | a remarkably smaller problem domain than the one described in
         | the article.
        
           | chubot wrote:
           | I don't agree with that framing... When you say "cache
           | coherency" you're implicitly assuming some authoritative
           | state. The point of git is that there isn't a single
           | authoritative state.
           | 
           | Not all apps will work with that model, but more apps than
           | you think would.
           | 
           | I guess the difference is how fine-grained you want the
           | updates to be. For example something like Figma in the
           | browser (a collaborative photoshop) implements a lot of
           | custom application-specific sync with CRDTs and so forth.
           | 
           | Maybe you need the really fine-grained updates, or maybe you
           | just need some Github-like site which allows coarse-grained
           | collaboration.
           | 
           | In other words, there can be a network model between "e-mail
           | a .PSD file" and Figma. I think this "in between" would scale
           | to more applications than reinventing sync inside every app.
           | Imagine audio editors, video editors, 3D modellers, etc.
            | Rewriting all those in the style of Figma is prohibitive.
           | 
           | I would rather have open file formats and application-
           | agnostic sync, like git. And I think it's a lot cheaper to
           | develop, although current software business models don't
           | really support its development.
        
             | Someone wrote:
             | _"The point of git is that there isn 't a single
             | authoritative state."_
             | 
             | I think you're missing the point of the post you refer to.
             | 
              | For git, there _technically_ isn't a single authoritative
             | state, but for many, if not most, git use cases, there
             | _sociologically_ is. Projects typically have one repo that
             | is _the_repo_: the repo most merges are done to, that
             | releases get built from, whose url you give when you tell
             | people where the project lives, etc.
             | 
             | All other clones of that repo, sociologically, are just
             | caching the main repo and changes you make.
             | 
             | Then, humans have to decide when to flush what part(s) of
             | the write cache to the main server, and when that results
             | in conflicts, humans have to resolve them.
             | 
             | That's, I think, what _"which manages to push all the
             | coherency issues onto the user"_ means.
        
               | gumby wrote:
               | You are correct, I meant that even in a peer-peer merge
               | (which can even be just one person debugging the same
               | program on both a Mac and a Linux machine), any merge
               | conflicts are marked and left for the user to decide
               | about.
        
         | nanomonkey wrote:
         | There is git/github on top of Secure Scuttlebutt:
         | 
         | https://github.com/noffle/git-ssb-intro
        
         | benibela wrote:
          | I have searched for something like this for backups, but have not
         | found anything good.
         | 
         | But I also have conflicting wishes. I want it to store the
         | history (so that a backup of directory A cannot be overridden
         | by a backup of an unrelated directory B, and it can apply
         | renames without copying large files again), and then I do not
         | want it to store a history (so that large files that are
         | removed from the main system can be permanently removed from
         | the backup)
        
       | at_a_remove wrote:
       | Yes, I have a little Python library for managing network shares
       | in Windows.
       | 
       | It has things like automatic retries that "back off" slowly,
       | switching to cached IPs in case DNS is down, and checking to see
       | if all of the drive-letters are full and either re-using a letter
       | or creating a "letter-less" share. I had to develop it during a
       | period of great instability within our network. It's ... large
       | and over-engineered, but it just keeps on truckin'.
       | 
       | On the other hand, it has been quite useful going forward, so
       | that's a plus.
       | 
       | I tend to program fairly defensively, in layers, right down to
       | the much maligned Pokemon exception handling. The results don't
       | have the, ah, velocity that is so often praised but they'll be
       | there ticking along years later.
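The backoff behavior described above can be sketched roughly like this (a minimal illustration, not the actual library; the function name and retry policy are invented):

```python
import time


def with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Call `operation`, retrying with exponentially growing delays."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except OSError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the caller see the failure
            # Back off slowly: 0.5s, 1s, 2s, 4s, ...
            time.sleep(base_delay * (2 ** attempt))
```

The library as described layers more on top of each retry, such as switching to a cached IP when DNS is down or falling back to a different drive letter.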
        
       | gfxgirl wrote:
       | doh! I thought this was going to be about remote work.
        
       | adrianmonk wrote:
       | See also Peter Deutsch's "Fallacies of Distributed Computing"
       | list (https://en.wikipedia.org/wiki/Fallacies_of_distributed_comp
       | u...).
       | 
        | There's some overlap, but also some new stuff. In particular,
        | "pipes" isn't covered by the Fallacies list and is consistently a
        | pain point you face in some way. "Asynchronous" isn't covered by
        | the Fallacies list either.
        
       | [deleted]
        
       | btreecat wrote:
       | So I have been thinking about software projects lately, and I
       | have come to the conclusion that a lot of these tools/solutions
       | exist to "build houses" when most of us are just throwing
       | together lean-to sheds and dog houses.
       | 
       | Software projects today are naturally more complex and have more
       | complex tooling the same way building a house today requires more
       | knowledge and skill than it did 50 years ago.
       | 
       | Then there are some folks/organizations building cathedrals, and
       | the associated tooling (react, angular, maven, etc) and all the
        | rest of us look up in awe and think "well I guess if I want to be
       | that good I need to use those tools on this dog house."
       | 
       | But your dog house doesn't have the need to host parties, provide
       | security, or even real weather protection other than a roof to
        | keep the rain and sun out. Yet we all try to build our dog houses
        | in ways that might be better if they are one day converted into
        | proper living quarters, but they will likely never need running
        | water or windows.
        
       | evadne wrote:
       | Do you have a moment to talk about our Lord and Saviour,
       | Erlang/OTP?
        
       | nitwit005 wrote:
       | I've found people have these problems inside of their datacenter,
       | where there is reliable low latency bandwidth, but where things
        | might be rebooted due to upgrades or maintenance.
       | 
        | A common example is data being pushed between systems over HTTP.
       | Take the simplest case of propagating a boolean value. You toggle
       | some setting in the UI, and it sends an update to another system
       | with an HTTP request, retrying on a delay if it can't connect.
       | This has two problems. The first is that if the user toggles a
       | setting on and then off, you can have two sets of retries going,
       | producing a random result when the far end can be connected to
        | again. The second is that the machine doing the retries might
       | get rebooted, and people often fail to persist the fact that a
       | change needs to be pushed to the other system.
       | 
       | I've seen this issue between two processes on the same machine,
       | so technically you don't even need a network.
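One common fix for the first problem is to tag each update with a monotonically increasing version so the receiver can discard late-arriving retries of an older toggle. A rough sketch (names are illustrative, not from any particular system):

```python
class SettingReceiver:
    """Apply only the newest update seen, ignoring stale retries."""

    def __init__(self):
        self.value = None
        self.last_version = -1

    def apply(self, version, value):
        # A retry of an older toggle may arrive after a newer one;
        # comparing versions makes the final state deterministic.
        if version <= self.last_version:
            return False  # stale update: drop it
        self.last_version = version
        self.value = value
        return True
```

The second problem (surviving reboots) additionally requires persisting the pending update on the sender, e.g. in a durable outbox queue that is replayed on startup.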
        
       | lostcolony wrote:
       | Failure is such a fun thing to think about, and it gets handwaved
       | away so often. So many devs, architects, product owners, etc,
       | just focus on happy path, and leave failure unspecced, unhandled,
       | and just hope it never happens. And then boast about 99% uptime,
       | but once you start questioning them you find out they get weekly
       | pages they have to go investigate (and really the system is
       | behaving weirdly a solid 10% of the time, but they don't know
       | what to do about it and it eventually resolves itself, and they
       | don't count "pageable weirdness" in their failure metric).
       | 
       | It's actually one of the things I love about Erlang, and how it's
        | changed my thinking. Think about failures. Or rather: don't.
       | Assume they'll happen, in ways you can't plan for. Instead think
       | about what acceptable degraded behavior looks like, how to best
       | ensure it in the event of failure, and how to automatically
       | recover.
        
         | jodrellblank wrote:
         | On the subject of failures, you might like this blog post
         | https://danluu.com/postmortem-lessons/ if you haven't seen it
         | before.
        
         | [deleted]
        
         | LargeWu wrote:
         | As a developer who's trying to move on to either a management
          | or product role, failure modes are one of the things I want to
         | emphasize in that role. The sad fact is that so many product
         | owners really don't understand how software works or gets
         | built, and as such, they are unequipped to reason about such
         | things.
        
       | crazygringo wrote:
       | Of course they do. But there's no alternative.
       | 
       | No matter how fast or beefy your server is, these days if your
       | product becomes a success, 99% of the time it will outgrow what's
       | possible on a single server. (Not to mention needs for
       | redundancy, geographic latency, etc.) And by the time you see the
       | trend heading upwards so you can predict what day that will
       | happen, you already won't have the time for the massive rewrite
       | necessary.
       | 
       | So yes, it's tons slower to write distributed servers/systems.
       | But what other choice do you usually have?
       | 
       | Though, as much as possible, you can _try_ to avoid the
       | microservices route, and integrate everything as much as possible
       | into monolithic replicable  "full-stack servers" that never talk
       | to _each other_ , but rather rely entirely on things like cloud
       | storage and cloud database. Where you're paying your cloud
       | provider $$$ to not fail, rather than handle it yourself.
       | Sometimes this will work for you, sometimes it won't.
        
         | nixpulvis wrote:
         | I've seen a handful of applications that attempt to "scale" by
          | going down the micro-service route in a completely flawed way,
          | only to end up with a tangled mess that's impossible to reliably
          | debug. All progress halts.
         | 
         | There's nothing inherently wrong about your statement, just
          | that it's still far too easy to write a shitty distributed system,
         | and so much easier to push that complexity off onto the OS or
         | even network layer itself.
         | 
         | Why should _I_ the application developer care about the way my
         | user 's data enters my DB? This should be tightly abstracted
         | away, and traced/logged accordingly. Leave it to _me_ the
         | systems developer to get the details right, and share the
         | fruits of my labor with everyone.
         | 
         | I can imagine no system more deserving of shared resources than
         | network technology. Try and imagine a world without TCP/IP, do
         | you not end up with something similar?
        
       | robbrown451 wrote:
       | The title reminded me of the turboencabulator.
       | https://www.thechiefstoryteller.com/2014/07/16/turbo-encabul...
        
       | tlarkworthy wrote:
        | I love the categorization. But decent software should be
        | distributed. I dislike single teams that beget tens of
        | microservices, but features should be bought from specialized 3rd
        | parties, not built. Thus a decent modern installation should
       | be leaning on a ton of 3rd party services (e.g. Identity
       | providers, databases, caches) because they all do a better job
       | than the hand rolled local one. It's how you outsource expertise.
       | 
       | The vision of the service mesh is to make unreliability and
        | security no longer the job of the application binary. Even
       | without a service mesh, you can put a lot of common functionality
       | into a reverse proxy. Personally I am loving OpenResty for the
       | simplicity of writing adapters and oauth at the proxy layer with
       | good performance.
        
         | trishume wrote:
         | I think it should be possible to buy or use software from third
         | parties. One thing I'm disappointed about and think we need
         | better tools to avoid is the fact that third parties provide
          | their tools as services rather than libraries. There are reasons
          | they do that (deploying a library that all your customers can
          | easily use is hard right now), but there's no reason it has to
          | be that way.
         | 
         | Some things can't easily be libraries like databases, but other
         | things like some caches, miscellaneous operations like image
         | resizing (depending on an architecture that can handle the load
         | on those servers), and a bunch of other things could just be
         | libraries.
        
           | nemetroid wrote:
           | > Some things can't easily be libraries like databases
           | 
           | The world's most widely deployed database engine (SQLite) is
           | only available as a library.
        
             | tlarkworthy wrote:
              | The world's most trafficked database is probably Google's
              | search index*, and it's not SQLite; it's certainly
              | distributed.
             | 
             | * Maybe Facebook, I dunno, the point stands.
        
               | tosserup478 wrote:
                | Modern browsers all use SQLite, so more than 90% of users
                | accessing that index are doing so through an app using
                | SQLite. Lots of things come with SQLite without you
                | knowing about it.
        
               | tlarkworthy wrote:
               | Good point, though the Google index users are 1. the
               | google search users AND 2. the machine crawlers who crawl
               | webpages no-one visits. I am not sure the human users of
               | the index are the majority.
               | 
               | Admittedly this is a fairly pedantic point.
        
           | tlarkworthy wrote:
           | The world is distributed. Money is distributed. You need
           | services to interact with the world. The large portion of
           | useful business services cannot be encapsulated into a
           | hermetic library.
           | 
            | If you think that distributed service clients as libraries are
            | enough to solve the issue of distributed computing, that is
            | incorrect, as most of the same distributed systems crap will
            | still happen.
        
       | perfunctory wrote:
       | Just reading the title I assumed it was a post about business
       | processes and communication between teams. Because this is how
       | working for a big corp sometimes feel.
        
         | snazz wrote:
         | It's a bit of Conway's law: the software structure mirrors the
         | organization structure.
        
       | hn_throwaway_99 wrote:
       | Hah, I saw the title "Fragile narrow laggy asynchronous
       | mismatched pipes kill productivity" and thought it was about the
       | pitfalls of trying to coordinate remote teams across disparate
       | time zones.
        
       | smitty1e wrote:
       | > Sometimes a distributed system is unavoidable, such as if you
       | want extreme availability or computing power, but other times
       | it's totally avoidable.
       | 
       | But so much of our sales pitch involves these shiny cloud
       | systems.
       | 
       | Who ever sold business by telling the customer: "Your use-case
       | really isn't exciting, and a boring batch-driven process is
       | completely appropriate"?
        
         | TomMarius wrote:
          | I do it like that, and it's always met with excitement like "oh
         | wow, all these other companies were telling us how hard and
         | costly and lengthy it will be, thank you"
        
           | smitty1e wrote:
           | Do you ever have the experience of doing a prototype, and the
           | customer looks at it and says: "Great work. Put it into
           | production"?
        
             | dylan604 wrote:
             | every. damn. time. i'm a freelancer, so there's not a dev
             | team behind me. the client likes what i made, and wants to
             | start using it. realizing this, i've started spending much
             | more time on the UI/UX (even though i'm not one of those
             | guys) to at least make the tool useable in a dogfooding
             | way.
        
               | MauranKilom wrote:
               | Maybe it's not 100% relevant to your case, and maybe
               | you've already seen it, but just in case:
               | 
               | https://www.joelonsoftware.com/2002/02/13/the-iceberg-
               | secret...
        
         | FpUser wrote:
         | > _Who ever sold business by telling the customer: "Your use-
         | case really isn't exciting, and a boring batch-driven process
         | is completely appropriate"?_
         | 
          | I did many times. Many practical businesses are looking for the
          | most reliable and cheapest way to solve their real problems. They
         | mostly do not give a flying hoot about all those buzzwords and
         | super duper newest tech. They just ask how much, how exactly
          | will it work, what the hidden costs are, how it will be maintained
         | and they will also look at your profile (clients, references,
         | examples of finished projects etc).
        
       ___________________________________________________________________
       (page generated 2020-05-17 23:00 UTC)