[HN Gopher] Life is too short to depend on unstable software
       ___________________________________________________________________
        
       Life is too short to depend on unstable software
        
       Author : mooreds
       Score  : 169 points
       Date   : 2021-11-13 12:39 UTC (10 hours ago)
        
 (HTM) web link (blog.sidebits.tech)
 (TXT) w3m dump (blog.sidebits.tech)
        
       | kzrdude wrote:
        | Unstable or, even worse, vanishing software. Sometimes we adopt
        | projects that disappear, or they change their pricing so that we
        | have to drop them.
        
       | sys_64738 wrote:
        | The amount of software needed to protect my Windows-based work
        | laptop suggests that "unstable software" is about more than just
        | "does it work for the user?". It feels like it should also cover
        | how well protected your data is in the application you are
        | using.
        
       | mkl95 wrote:
       | Innovative tech companies are dominated by business analysts and
       | product managers. Shoddy specs are commonplace, and it's crazy at
       | work because big customers push hard deadlines that have to be
       | met to prevent them from switching to That Competitor's app. They
       | are not empty threats. Unstable software is just the reflection
       | of an unstable industry.
        
         | wildrhythms wrote:
         | Exactly. Look no further than promotional processes that value
         | launches over maintenance.
        
         | laurent92 wrote:
          | J2EE entirely spec'ed out a reproducible way to create and
          | deploy reliable software. It is so reliable that no-one uses
          | it, except companies paying $14k per developer per month.
         | 
         | The truth is, the cost of unreliable software is not the same
         | as the cost of an unreliable bridge or plane. And during the
         | ascending phase of the Schumpeter cycle (70-year economic cycle
         | related to an industry), entrepreneurs who move swiftly with
         | agility always win.
        
       | jabl wrote:
       | Related talk distilled into a blog(gish?) post:
       | http://boringtechnology.club/
        
       | outsomnia wrote:
       | Users of only "stable" FOSS releases almost never contribute, in
       | code, or to its health or longevity. They are leeching it,
       | sometimes over decades.
       | 
       | They don't contribute to testing either, since they just consume
       | releases and ignore development, then wonder loudly why there are
       | bugs.
       | 
       | Don't be that guy... if you want your dependencies to have a long
       | life being maintained, find some time to contribute. Leeching is
       | not contribution.
        
         | passerby1 wrote:
          | It's probably a bit blind to think that fixing issues is the
          | only way to contribute. New feature development is very
          | important too, and it usually does not necessarily require a
          | testing or beta channel.
        
         | bxparks wrote:
          | I don't bother filing bugs or sending PRs anymore. I spend
          | hours filing a detailed bug report. No response. I spend hours
          | crafting a PR. Ignored. I file a bug report along with several
          | possible solutions. I get a snarky response totally unrelated
          | to the proposed solutions.
         | 
         | But I get it. Maintainers are overworked and jaded. I'm on the
         | receiving end as well, with 95% of the issues like: "help, it's
         | broken" (with no error message), "please teach me git", "please
         | debug my application code", "please implement this massive
         | feature for me" (for free), "how do I do xxx?" (obviously
         | didn't read the README.md), "your library is buggy" (no it's
         | not, your pointer has run off the end of your array, causing
         | Undefined Behavior).
         | 
         | With all of that noise, it is difficult to figure out which bug
         | reports or PRs are in the 5% that are actually well-researched
         | and valid.
        
           | throwaway13337 wrote:
           | Well put.
           | 
           | A reputation/karma system for bug reports could be an
           | interesting way to deal with this problem.
           | 
           | I wonder if it's ever been tried.
        
           | deathanatos wrote:
            | I file bug reports. It _greatly_ depends on the person
            | running the project; some are like you describe, or they're
            | incompetent (particularly problematic in non-FOSS projects).
           | Those, it's usually pretty clear that you're going to have to
           | weigh "how much do I want this bug fixed" against how painful
           | the process will be.
           | 
           | But I've also had maintainers who were very helpful. Recently
           | had to deal with the DST cross-sign expiring, and needed a
           | third-party container to be rebuilt on a later version of
            | Debian. Filed an issue with really nothing more than "Would
            | you mind releasing with a newer version?" [1] and the maintainer
           | had it published within like a day! _Greatly_ made my life
           | easier.
           | 
           | So I really want to separate out those maintainers that are
           | doing a stellar job; certainly some merit your criticism,
           | just not all.
           | 
            | [1] I didn't have time then to track down the Dockerfile &
            | figure out the required patch. Might have once my time freed
            | up, but the maintainer beat me to it.
        
         | jakear wrote:
         | I would advise against calling users leeches. A product is
         | meaningless without users, and those users that do go out of
         | their way to file helpful bug reports should be lauded not
         | disparaged.
         | 
         | The phrase leeching implies an entity is consuming a host's
         | life force in some way (seeders's bandwidth in the torrenting
         | analogy), simply using the product does not meet this criteria.
        
         | hsn915 wrote:
          | Thank you for confirming that "Free and Open Source" does not
          | value stability or quality.
          | 
          | I'd rather pay for a closed program that comes with guarantees
          | of stability and quality than participate in a community that
          | considers these concerns tertiary or off the menu entirely.
        
           | exyi wrote:
            | Oftentimes you can buy support for stable OSS, and then you
            | actually help the ecosystem by giving it resources.
        
             | hsn915 wrote:
             | Unfortunately, the "pay for support" business model means
             | the incentive is to make a system that _needs_ support but
             | pretends on the surface to be stable and high quality.
             | 
             | One way to make a system so incomprehensible that your
             | customers absolutely need support is to make it
             | configurable in a million different ways, and make the
             | configuration invisible / opaque.
             | 
             | Example:
             | 
             | "Why is this program doing this when I told it to do that?"
             | 
              | Answer: Because this file in that directory configures
              | service A to do X, and this other file in that other
              | directory configures the program to behave like Y if it
              | detects that service A is doing X.
             | 
             | But the program does not tell you that via its UI; you have
             | to read obscure documentation.
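The cross-file interaction described above can be sketched in a few lines. Everything here is hypothetical (file names, sections, and keys invented purely for illustration); the point is that behavior depends on two config files the UI never surfaces:

```python
# Hypothetical sketch of opaque, layered configuration: the program's
# behavior is jointly decided by two files the UI never mentions.
import configparser

def effective_behavior(service_cfg_path, app_cfg_path):
    """Return the program's behavior, silently driven by two files."""
    service = configparser.ConfigParser()
    service.read(service_cfg_path)   # e.g. something like /etc/serviceA.conf
    app = configparser.ConfigParser()
    app.read(app_cfg_path)           # e.g. something like ~/.config/program.conf

    service_does_x = service.getboolean("serviceA", "do_x", fallback=False)
    # The app only behaves like Y *if* service A is configured to do X --
    # a cross-file interaction nothing in the UI reveals.
    if service_does_x and app.getboolean("program", "y_when_x", fallback=False):
        return "Y"
    return "default"
```

A user staring at the program's UI sees only "Y" or "default"; discovering *why* requires knowing both files exist and how they interact, which is exactly the support-generating opacity being described.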
        
               | md8z wrote:
               | I don't understand, isn't the original complaint here
               | that everything needs support eventually?
        
               | hsn915 wrote:
                | Everything should _not_ need support. The fact that
                | almost everything needs support is a symptom of how
                | crazy the software industry is.
        
               | md8z wrote:
               | I keep hearing people say that, but how do you propose we
               | write software that has no bugs and does everything
               | perfectly right the first time? That seems impossible.
        
               | AnIdiotOnTheNet wrote:
               | The first time? Sure, nothing ever gets it right the
               | first time, but over time software should converge on
               | being bug free and not requiring any support at all.
               | Free-with-paid-support has a perverse incentive against
               | this.
        
               | md8z wrote:
               | "over time software should converge on being bug free"
               | 
               | Yeah, and the way that's done is by refactoring things,
               | removing buggy/deprecated things, and not adding any more
               | new features/requirements... So, pick your poison, I
                | guess? I'd love to move on to the next job as much as
                | the next person, but somebody still has to be paid to do
               | those things. I don't see what the significant difference
               | is there with free-with-paid-support, if you pay for it
               | up front you're still paying the same cost.
        
               | BeFlatXIII wrote:
               | What other projects besides (La)TeX and METAFONT have
               | noticeably converged on bug-free over time? Perhaps the
               | Linux & BSD kernels.
        
           | yoyohello13 wrote:
           | This is an absurd sweeping generalization. There are many
           | FOSS projects that value stability and quality. Just like
           | there are many closed source projects that don't.
        
             | hsn915 wrote:
             | The only one I'm aware of that does a good job of this is
             | the Go language project, but it's not really your typical
             | FOSS project; it's a Google project, and Google relies on
             | it internally, so the incentive to make it stable and high
             | quality is as high as it will ever get. They don't sell it
             | to others, but they themselves rely on it as a core part of
             | their own business.
        
               | zufallsheld wrote:
               | When was the last time the Linux kernel or the gnu utils
               | broke for you?
               | 
               | For me? Can't remember.
        
         | SilasX wrote:
          | I'd love to! Did you make sure your FOSS project is easy to
          | build from source? Will I have to spend hours nagging the dev
          | team about mysterious compilation bugs and dependencies?
          | Projects that make this easy are the extreme exception, so if
          | you actually get it right, then yes, I completely sympathize.
        
       | todd3834 wrote:
       | I know that titles have to be written to capture attention but I
       | was really hoping there was going to be a story here. Something
       | about how someone realized in their old age that the use of
       | unstable software was a big regret. Sounded pretty odd so I
       | clicked through to see. I was disappointed it was just an article
       | about backward compatibility and preferring 3rd party libraries.
       | 
       | I'm sure on my deathbed software will not be coming to mind but
       | if it did I'll probably wish I took more chances on some crazy
       | things like beta iOS ;-p
        
         | sovietmudkipz wrote:
         | On my deathbed I don't think I'll think about software... But
         | if I do I think my biggest regret will be not embracing event
         | driven software design sooner.
        
       | dln_eintr wrote:
        | I thought we were arguing for rapid iteration and continuous
        | integration? Why would I NOT live as I preach? How can I teach
        | others if I'm not learning the bugs, quirks, and workarounds
        | BEFORE those I work with?
        
       | xwdv wrote:
       | I believe it's exactly the opposite, life is too short to wait
       | around for stable software.
       | 
       | Don't expect code to be like a cathedral that can stand the test
       | of time, think of it more like an adhoc bazaar where you can
       | quickly setup shop and _start making money_. Yea sometimes it
       | will not work right or break, but even physical things in this
       | world that people depend on are also shitty and break. Shitty
       | things are just part of the imperfect human condition, and we
       | must live our lives oscillating between stability and
       | instability, until of course we die and it all means nothing in
       | the end.
        
       | intrasight wrote:
       | The most unstable software that I experienced was the original
       | Mac. I'm sure that it cost me a non-trivial drop in my GPA.
        
         | ChuckNorris89 wrote:
          | Do you have any more details on this? I am very curious.
        
           | intrasight wrote:
           | I still have the Mac. It's now over 35 years old! Remember
           | that the computer only had 128K of memory. There were like
           | three applications. MacWrite, MacPaint, MacDraw. MacDraw was
           | the most crash-prone.
           | 
           | What else might you like to know? I'll have to dust off some
           | old neurons.
        
             | mattarm wrote:
             | I had the same Mac. It crashed sometimes -- enough that
             | "save often" was a habit, but I don't remember it being a
             | productivity problem for me. My grade in English went up
             | from B to A because I have always had trouble spelling and
             | MacWrite had spell check. If the machine had a negative
             | impact on my grades at all it was the distraction it caused
             | by me learning Turbo Pascal on the thing. :-)
        
             | ChuckNorris89 wrote:
             | Is that the infamous Mac model that Steve Jobs
              | intentionally under-specced in order to hit a consumer
             | friendly price point (at the time) and as a consequence was
             | dog-slow and basically unusable for anything serious?
        
       | dimgl wrote:
       | I've been thinking about learning Crystal recently and using it
       | for a personal project. However, now that I have lots of
       | experience with Node.js, Golang and others, I'm torn between the
       | "use what's mature" and "learn a new language" decision. Sure,
       | I'm using Crystal now to learn a new language, but what if this
       | becomes a serious project? Anyhow, I agree with this blog post
       | somewhat but it's always good to expand your repertoire of tools.
        
         | brabel wrote:
         | Why not learn a stable technology instead?
         | 
         | For example, instead of Crystal, you could try something like
         | Common Lisp or Haskell, both of which are really, really
         | stable.
         | 
         | I would even put Rust into the "stable" category... they value
         | backwards compatibility very highly, while allowing breaking
         | changes to happen in newer "editions" of the language (without
         | breaking code written for previous "editions"). Stable does not
         | necessarily mean old.
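As a concrete illustration of the edition mechanism mentioned above: the edition is opted into per crate in Cargo.toml (crate name and version here are hypothetical), so code written against an older edition keeps compiling after newer editions ship:

```toml
[package]
name = "my-crate"     # hypothetical
version = "0.1.0"
edition = "2018"      # this crate stays on 2018-edition rules, even
                      # after the 2021 edition is released
```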
        
           | dimgl wrote:
           | I'm not really interested in Lisp or Haskell. I don't feel
           | like I'll be all that productive with them and there is
           | little to no chance I'll use them professionally.
           | 
           | If anything I'd be interested in Rust, but it's a bit too
           | low-level for most of the things that I need.
        
         | jraph wrote:
         | There are well established projects using Crystal. At least
         | I've heard of one: Invidious [1], an alternative frontend for
         | YouTube. If your project becomes serious it might as well work
         | for you too!
         | 
         | (I don't know anything about Crystal)
         | 
         | [1] https://github.com/iv-org/invidious
        
           | dimgl wrote:
           | Hey! Thanks for this! I've always loved the Ruby syntax so
           | I'm excited to see what Crystal has to offer. Thanks for
           | sending this over!
        
         | mattarm wrote:
         | I code in a new/unfamiliar language if I want to learn
         | something new about the "art" or "science" of programming and
         | have fun doing it.
         | 
          | I code in an established language if I'm more interested in
          | solving a specific technical problem.
         | 
         | As for "what if this becomes a serious project?" question,
         | remember that a rewrite always goes faster than the original.
        
         | simpsond wrote:
         | If it becomes serious, and crystal becomes a problem, then you
         | can always port it. What are the odds of both those things
         | happening? Give it a shot.
        
       | ttmu_15 wrote:
       | agreed
        
         | ttmu_15 wrote:
         | cvv
        
       | revskill wrote:
        | My project has used RoR 4, React 1.5, PostgreSQL 9.6 and Solr
        | since 2012 and is still running stably in production.
       | 
        | Such a beautiful tech stack, and it's still my recommendation
        | for any junior programmer learning how to do fullstack
        | development the right way (before jumping on more bleeding-edge
        | stuff).
        
         | bitexploder wrote:
         | Python, Django, etc is similarly good. Elastic search or Solr.
         | I don't even use React... just Bootstrap and plain HTML+CSS
         | where I can. Front end work makes me sad inside. Celery for
         | distributed work. Redis for this and that cause it's so stupid
         | reliable and easy to use and manage. Ansible to configure and
         | deploy stuff. One repo. There was a great post on here about
         | boring tech stacks, and their stack was our stack almost
          | exactly. It's stuff one human can wrap their arms around and
          | build meaningful software with.
        
           | revskill wrote:
            | Python is (mostly) bad in that it taught Python devs to
            | hate JavaScript, which is a must for user experience.
            | 
            | Most of the Python programmers I've known and met hate
            | frontend work, and that's why their products are not optimal
            | for user experience, which makes for bad software in
            | general.
           | 
           | The point is, to become a senior programmer, one must learn
           | to do fullstack work, from css to deployment.
        
             | bitexploder wrote:
             | Genuinely curious, how is Ruby any better there? You can
             | build usable front ends in either language. I care about
             | delivering usable software. Interactive front ends for the
             | sake of it is not a good use of effort. If your app and
             | situation justify it, okay, but it isn't a hard requirement
             | like many other aspects of full stack. Just depends on what
             | you need for good UX. A pile of Javascript isn't a
             | requirement there. I also stopped caring about developer
             | titles a long time ago, that is a straw man. Real full
             | stack devs with adequate experience in every aspect are
             | unicorns anyway.
             | 
             | E: I don't even hate JS it's fine, just not a great
             | investment of time for a lot of apps. And the ecosystem and
             | getting it deployed is an absolute chore.
        
       | deltaonesix wrote:
       | Life is too short to make all software stable.
        
       | GhettoComputers wrote:
        | We make trade-offs between hardware and software all the time;
        | the fact is, you must rely on unstable software if the hardware
        | requires it. I can't think of a better example than iOS devices
        | or the M1 (Pro) and the lack of choice in OS; if you want to run
        | Linux on the M1, say goodbye to the GPU and various benefits.
       | 
        | Unless your only platform is embedded software, where this
        | occurs less often, it's impossible to realistically expect never
        | to run unstable software. I'm going to guess most consumer
        | devices are running hundreds if not thousands of JavaScript VMs
        | every hour.
        
       | fendy3002 wrote:
        | Well, the title mentions unstable software but the article
        | discusses libraries and APIs.
        | 
        | Topic-wise, backwards compatibility is indeed really important
        | and useful. However, achieving a stable API takes a long time
        | and a lot of effort.
        | 
        | You either use the tooling now and risk changes, albeit small
        | ones (honestly, almost everything programming-related has
        | breaking changes at some point), or roll your own API.
        
       | giovannibajo1 wrote:
       | > Version history. For projects using SemVer, frequently changing
       | major version numbers is an obvious red flag. Note that many old,
       | stable projects don't use SemVer.
       | 
        | This matches my experience. Obviously it's impossible to
        | generalize without making mistakes, but I've started to notice
        | that projects that loudly talk about their use of SemVer often
        | break compatibility. In other words, it seems like they treat
        | SemVer as a license to break APIs, because now they have a way
        | to tell the world about it explicitly, and so nothing can go
        | wrong.
       | 
        | Ecosystems that have massively adopted SemVer tend not to value
        | backward compatibility (npm comes to mind), and their package
        | managers often have to provide solvers for complex dependency
        | graphs; users can get backed into a corner where they must
        | upgrade something but can't, because it depends on something
        | else that bumped its major version, and now the interfacing
        | code has to be rewritten.
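The "stuck in a corner" case can be made concrete with a sketch of npm-style caret-range matching (simplified: this ignores pre-release tags and the special rules for 0.x versions). A dependency declared as `^1.4.0` resolves happily through the 1.x line, then refuses 2.0.0 outright:

```python
# Simplified sketch of npm-style caret-range matching (ignores
# pre-release tags and the 0.x special cases).
def parse(v):
    return tuple(int(p) for p in v.split("."))

def caret_match(range_base, candidate):
    """^X.Y.Z accepts versions >= X.Y.Z that keep the same major X."""
    base, cand = parse(range_base), parse(candidate)
    return cand[0] == base[0] and cand >= base

# "^1.4.0" keeps resolving as upstream releases 1.x patches...
assert caret_match("1.4.0", "1.9.2")
# ...but a major bump is rejected by the solver, and upgrading now
# means rewriting the interfacing code.
assert not caret_match("1.4.0", "2.0.0")
```

This is exactly why a transitive dependency's major bump can leave you unable to upgrade anything that depends on it until someone rewrites the glue code.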
       | 
       | Go is an ecosystem that values backward compatibility a lot. They
       | are using SemVer as well now but on the other hand they say that
       | modules shouldn't really bump major version that often, if at all
        | (which in turn makes me question whether adopting SemVer has
       | been a good idea, or a compromise that they had to take to
       | concede something to the community subset that was pushing for a
       | more standard package management solution).
       | 
        | I think Qt is a project that used SemVer (before it was named
        | that!) the right way. They bump the major version every 8-10
        | years, and they work hard to avoid doing it more often. In C++
        | that's not even easy, because of include files and ABI, but
        | they manage to keep the ABI stable across all minor version
        | upgrades, so you can upgrade a minor version of Qt without even
        | recompiling the software that uses it.
        
       | the_af wrote:
       | In theory we all mostly agree with this: stable, well understood
       | software is to be preferred.
       | 
        | In _practice_, it's not true that most businesses or teams want newer
       | software just to be "on the bleeding edge". The bleeding edge is
       | not a goal on its own. What usually happens is that you _need_ a
       | feature (for actual business reasons) that is not available in
       | the older version of the software you are using; or there is a
       | serious bug that is only fixed on the  "bleeding edge" version
       | and is nontrivial to backport.
       | 
       | So you often have two choices: make the change yourself in the
       | stable version (risky, time consuming, and can it be considered
       | "stable" anymore once you mess with it?) or move to an unstable
       | version (risky, new bugs).
       | 
       | And that's assuming the software is open source; if it's
       | proprietary you have even fewer choices...
        
         | winternett wrote:
          | A huge dimension of the decision to use "unvetted" and/or
          | "cutting edge" technology is how MISSION CRITICAL the system
          | you are creating is...
          | 
          | Building a new social media app as a startup? Depends on the
          | data you're storing for users and how you market the stability
          | of the system to your user base.
          | 
          | Building a new Government healthcare system? You better use
          | properly vetted technologies.
         | 
         | This includes using cloud service providers as well.
         | 
          | Some systems simply need to be old school. Old-school tech
          | relies on structured data, which can prove better for security
          | and for testing. Methods that have been in place for years are
          | not only more reliable; the ways of fixing problems when they
          | occur are well documented as well. Countermeasures to security
          | threats are also well documented for older solutions. Yet we
          | also have to acknowledge that the Internet is still a
          | relatively new thing for business and commerce, so these days
          | things are often declared "legacy" by companies and
          | individuals selling alternative solutions as part of the "new
          | money marketing" pipeline, not because they are truly "out of
          | date" or "no longer viable"... I am not defending nor
          | advocating COBOL or mainframe systems with that statement,
          | though (just to be clear).
         | 
          | Newer concepts like blockchain, unstructured data, and even
          | cloud hosting are vetted to an extent, but they introduce very
          | new threats to the stability of mission-critical systems, and
          | they are not perfect solutions. These newer solutions also, by
          | nature, dictate costly refactoring that locks buyers into
          | platform-specific situations they can't easily migrate back
          | from if the ideas don't work out, and compromise of data
          | integrity or security for mission-critical systems is more
          | costly than ever as data accumulates...
         | 
          | Not every system should enlist "cutting edge" solutions as
          | its backbone. A gradual approach may be more reasonable, for
          | example introducing new technology in "siloed" and/or
          | "smaller" aspects as a feature of a traditional system before
          | a complete refactor.
         | 
         | There are some really good reasons why COBOL programmers still
         | get paid a lot of money to this day, even though I am not one
         | mind you.
         | 
         | Choose wisely my friends.
        
           | tomnipotent wrote:
            | > Building a new Government healthcare system? You better
            | use properly vetted technologies.
            | 
            | Is there any evidence these systems are more stable and
            | dependable?
        
         | agumonkey wrote:
          | one needs bleeding edge + some form of retrocompatibility
          | 
          | people deployed new machines with new stacks at the building
          | I'm in; most things are better except for a few conflicts
          | which make some tasks 3x slower
          | 
          | people don't mind things unless they cost them too much
        
         | Wowfunhappy wrote:
         | How badly do you _really_ need that feature? Why did no one
         | need it a couple of years ago?
        
           | chias wrote:
           | Because sometimes "stable" software has bugs.
           | 
           | A true story about one of my websites: It runs on Debian
           | Stable, because I like stability and at the time, Debian was
           | the OS I was most familiar with. It also does a lot of image
           | manipulation, for which it uses ImageMagick.
           | 
           | In March of 2018, I discover a bug in ImageMagick: if you
           | perform various hue/saturation modulations, sometimes pixels
           | just turn "black" for no reason -- essentially it looks like
           | someone sprinkled sand on the image. Reported here [1].
           | Apparently some code ends up with a divide-by-zero error. The
           | good news is that the bug is fixed within a day, and is
           | released to the beta version one day after that.
           | 
           | My website is quite literally built around image layering,
           | manipulation, and generation. My users are experiencing what
           | looks like sand thrown on their images every day. So what do
           | I do? Do I assure my users that stable software is actually
           | good and they should just sit tight for a year or two (or
           | more?) until a version containing this patch hits the Debian
           | Stable repos? Do I rewrite the core of my application to
           | replace ImageMagick? Or do I update to run some unstable
           | software?
           | 
           | [1]: https://legacy.imagemagick.org/discourse-
           | server/viewtopic.ph...
        
             | gmfawcett wrote:
             | You could have just built IM yourself? No need to switch
             | release channels for this.
             | 
             | https://imagemagick.org/script/install-source.php
             | 
             | I used to have a similar version-freshness issue with
             | ffmpeg on Ubuntu, for a video-encoding system I was
             | running. Turns out that building ffmpeg isn't actually that
             | hard. :) Later, I switched to using Nix as a layer over the
             | distro; then I could just build ffmpeg once on my build
             | system, and push the "closure" (the app & all its
             | dependencies) to the other nodes in my encoding farm.
        
               | chias wrote:
               | Technically you're not wrong, though I will point out
               | that it'd still be running the bleeding-edge unstable
               | ImageMagick by definition.
               | 
                | But even so, in practice, building ImageMagick yourself,
                | along with all the configuration and integration into
                | PHP (yes, it was a PHP site) on a production webserver,
                | is a much bigger lift than that. And then you have to
                | maintain it manually. Arguably this is a much less
                | stable result than running a bleeding-edge Debian
                | install where you just `apt-get install php-imagemagick`.
        
               | gmfawcett wrote:
               | Fair. Re: bleeding edge, my point was that you could keep
               | the _rest_ of your system stable -- building one
               | component yourself lets you make that tradeoff.
        
               | filmor wrote:
               | You could backport the patch, that's exactly how Debian
               | achieves stability.
        
           | exyi wrote:
           | Maybe they needed it, but the devs only had time for it last
           | month
        
           | Zababa wrote:
            | Most of the time, the people who need the feature are not
            | the same people who will be in charge of developing it. I
            | also doubt most people have the leverage to convince the
            | business that they don't _need_ the feature.
        
             | Wowfunhappy wrote:
             | But what did they _do_ about it a year ago? They didn't
             | have the option of using an unstable version (because the
             | feature still wouldn't have been there), so what happened?
             | Did they go out of business?
             | 
             | IMO, the "bleeding edge" is just overly tempting and people
             | need to learn to resist it. It's hard to know for sure when
             | we're speaking in hypotheticals, but I think that in most
             | cases, the trade-offs aren't being weighed accurately.
        
               | powersnail wrote:
               | They endured it.
               | 
               | People didn't have personal computers or cell phones a
               | century ago. What happened? Did businesses go extinct?
               | 
               | The entire digital empire that we enjoy today is built on
               | people's wants. Some things are wanted so badly, that we
               | are willing to get our hands dirty and actually create
               | new things. It's not hard to imagine that people can want
               | things badly enough to try out those new things.
        
               | jodrellblank wrote:
               | Seems like you're arguing that if they didn't go out of
               | business last year from lack of $feature, then they never
               | will.
               | 
               | Imagine you run a company selling things and can't offer
               | next-day shipping and none of your competitors does
               | either. The company you use for deliveries announces they
               | support next day shipping now but you have to add a
               | priority flag to requests so they can schedule it
               | differently, and they released a new API client to
               | support that. Your competitors all start offering next-
               | day shipping and you don't. You ask your employees why.
                | The development team says the new API client is
                | unproven, the one you have is stable, and you didn't go
                | out of business last year, so you probably just have no
                | self-control and are demanding the latest shiny, and
                | are weak and impulsive.
               | 
               | Would you be OK with that reasoning? Or with literally
               | any reason that ended "so we're not offering next day
               | shipping and will definitely lose customers to our
               | competitors because of it."? Or would you start looking
               | for a way you can take the change and isolate it? "We'll
               | run next-day shipping requests through a different queue
               | and watch it more closely", etc.
        
         | the_af wrote:
         | For personal use, here's the same thing again. Say a Linux user
         | wants to play a game. The stable old version of their distro
         | doesn't play well with the libs/drivers needed to play the
         | game.
         | 
         | So the user must install a newer, less tested, distro. But the
         | goal was not to be "on the bleeding edge" for its own sake;
         | it's playing the game, and there's no other (easy) way.
        
           | jcelerier wrote:
            | Even on Windows you generally have to install the very
            | latest GPU drivers when an AAA game comes out, though.
        
             | rodgerd wrote:
             | Sure. But I don't have to upgrade my whole userland from
             | Win 10 to Win 11 for hardware compatibility. I don't have
             | to upgrade my core OS to run a new version of Lightroom.
        
           | md8z wrote:
           | I'm not sure what you mean, this is exactly what Flatpak and
           | Snap were meant to solve. IIRC Steam should also bundle older
           | copies of the libraries needed.
        
             | ludocode wrote:
             | No, you misunderstand. You need new libraries to make your
             | new hardware work. You can't use versions of driver
             | libraries like Mesa that are older than your hardware, and
             | Mesa has a ton of dependencies like libstdc++ and LLVM so
             | you can't use old versions of those either. This is a major
             | problem for Flatpak.
        
               | md8z wrote:
               | I don't see why that's any bigger problem than anything
               | else, flatpak includes mesa as part of the SDK:
               | https://docs.flatpak.org/en/latest/available-
               | runtimes.html#f...
               | 
               | If there ends up being a problem with libstdc++ and LLVM,
               | it's not hard to statically link those, if it's not being
               | done already.
        
               | ludocode wrote:
               | It is nowhere near as simple as you make it out to be.
               | 
               | Yes, the freedesktop runtimes ship extensions with newer
               | versions of Mesa and its dependencies. This doesn't
               | entirely solve the problem. For one thing, libstdc++
               | before GCC 5 did not maintain a backwards-compatible ABI,
               | so if the app was compiled too long ago it won't work
               | with a new libstdc++. Steam is now working around this
                | problem by using dlmopen() namespaces to load different
                | versions of libstdc++ into the same process. Flatpak is
               | not there yet.
               | 
               | For another, NVidia drivers complicate this even more.
               | The NVidia client-side library must exactly match the
               | version of the loaded kernel module which Flatpak can't
               | control. So NVidia drivers are broken out into yet
               | another runtime extension, and it can't package these
               | drivers due to licensing issues so it will dynamically
               | download NVidia drivers to generate an extension on the
               | fly. NVidia drivers also depend on libstdc++ by the way,
               | another reason why static linking doesn't magically solve
               | the problem.
               | 
               | On top of all this is just the massive complexity and
               | maintenance burden of keeping all this working. All these
               | runtimes with all their extensions, somebody has to keep
               | updating these, and when a runtime is deprecated all of
               | the software that was built for it is defunct. All of
               | this can be solved just by keeping libraries backwards
               | compatible and building for native Linux, not Flatpak or
               | anything else.
        
               | md8z wrote:
               | I'm still not sure I understand, it sounds like a
               | solution exists and both Steam and Flatpak are working
               | towards it. I don't see why nvidia can't also do the same
               | things.
               | 
               | I hope you can see that "keep libraries backwards
               | compatible forever" is not really a good option either
               | and is probably orders of magnitude more work than just
               | doing all the things you said. In some situations, it is
               | also impossible: if there are bugs in the API contract
               | then it has to be broken eventually.
        
               | ludocode wrote:
               | The "solutions" are hacks at best. This is not the way to
               | build stable software.
               | 
               | > I hope you can see that "keep libraries backwards
               | compatible forever" is not really a good option
               | 
               | ??? Why would I be able to see that? You've given zero
               | explanation or evidence for why that would be the case. I
               | see a whole lot of people in this thread in addition to
               | the article explaining why backwards compatibility is
               | good. Nobody is giving valid reasons as to why it's bad.
               | 
               | Microsoft has managed to keep the whole Win32 API
               | compatible "forever". GUI apps built for Windows 95 still
               | work out of the box on Windows 10. Backwards
               | compatibility is a major part of why they are still the
               | dominant platform: businesses actually care about this.
               | They use ancient proprietary software that is critical to
               | their business whose source code has long been lost to
               | the sands of time. A platform that breaks their software
               | is no platform at all.
               | 
               | > probably orders of magnitude more work than just doing
               | all the things you said.
               | 
               | Really? How hard is it to not break things?
               | 
               | It's sometimes more work to add new features or support
               | new hardware without breaking the ABI but clearly it's
               | feasible. glibc 2.1 was released in 1999 and the
               | maintainers decided at that point that they would
               | preserve backwards compatibility forever. We're now at 22
               | years without a major ABI break. There have been some
               | hiccups of course (the memcpy() fiasco) but they've been
               | fixed.
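                | glibc does this with symbol versioning: old binaries
                | keep resolving the old symbol version while new builds
                | link against the new one. On a glibc system you can see
                | both side by side (the libc path below is the usual
                | x86-64 location and may differ):

```shell
# List libc's dynamic symbols and show memcpy present under multiple
# ABI versions at once (e.g. memcpy@GLIBC_2.2.5 alongside the newer
# version introduced during the memcpy() fiasco).
readelf --dyn-syms /lib/x86_64-linux-gnu/libc.so.6 | grep ' memcpy@'
```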
               | 
               | The GCC team have decided to follow in their footsteps.
               | Since version 5 they've decided they're not going to
               | break the libstdc++ ABI anymore. The culture of backwards
               | compatibility is finally growing on the Linux desktop.
               | This is a far better solution than Flatpak.
        
               | gpderetta wrote:
               | To be fair the equivalent of libstdc++ on Windows has
               | broken the ABI on every MSVC release until very recently.
               | 
               | The difference is that Windows applications historically
               | shipped with the appropriate version of the C++ runtime
                | bundled in (and there was no guarantee that one was
               | provided by the OS), while Linux app usually rely on the
               | system .so.
        
               | md8z wrote:
               | AFAIK the win32 API is kept backwards compatible by doing
               | exactly as we describe, shipping older versions of the
               | system libraries and automatically using them when it's
               | detected that an application needs them. So it's the same
               | thing you call a "hack". Please don't misunderstand, I'm
               | not saying backwards compatibility is bad. But it does
               | cost a non-trivial amount of money and time, it's not
               | just a magic solution to reduce the maintenance cost of
               | something down to zero. If you're doing a cost
               | comparison, that always has to be taken into account.
               | 
               | Glibc isn't really a good example, that has a ton of
               | unfortunate broken APIs that should probably be removed
               | entirely (the most notorious example probably being gets)
               | but never will be, and I suspect they will continue to be
               | a source of bugs as long as applications use them and
               | aren't patched. I mean the whole reason musl exists is to
               | get away from some of these maintenance issues in glibc.
        
               | marmaduke wrote:
               | > NVidia client-side library must exactly match the
               | version of the loaded kernel module
               | 
               | Sure about that? I was sure I've run Docker containers
               | with GPU stuff compiled for different versions than my
               | driver. Like their nbody container image runs every time
               | I set up the nv container runtime with Docker, regardless
               | of driver version.
        
               | ludocode wrote:
               | NVidia has special support for making their drivers work
               | in Docker containers:
               | 
               | https://docs.nvidia.com/datacenter/cloud-
               | native/container-to...
        
           | AnIdiotOnTheNet wrote:
           | This is a flaw in Linux Desktops' choice of application
           | management paradigms, which insists that everything be
           | tightly coupled and managed. It is entirely possible and
           | reasonable to have a stable set of base system libraries
           | everyone can depend on and otherwise applications must bring
           | their own.
        
             | ludocode wrote:
             | It's not just possible and reasonable. It's how literally
             | every other platform works.
             | 
             | It's also how the Linux Standard Base worked. It was
             | intended to be a stable well-defined backwards-compatible
             | set of libraries common across distributions. Of course the
             | LSB had its share of problems but they had the right idea
             | to bring stability to Linux as a platform for binary apps.
        
               | delusional wrote:
                | Eh, it doesn't really work for volunteer-based
                | projects. It's already hard to find people who want to
               | fix bugs in open software, it would be even harder to
               | find someone willing to fix it in a version 10 years old
               | and then have them verify it working in all possible
               | cases without regression.
        
             | assbuttbuttass wrote:
             | NixOS manages this, it's entirely possible to use the
             | stable channel for system packages but subscribe to the
             | unstable channel for packages installed per-user.
             | 
             | Of course, NixOS also has atomic upgrades and rollback, so
             | there's not much risk just running unstable everywhere
        
             | torstenvl wrote:
             | Agreed whole-heartedly, and one of the reasons I love the
              | FreeBSD model. My ideal Linux distro would be the inverse
              | of Debian GNU/kFreeBSD - a Linux kernel with FreeBSD kernel
             | interfaces provided by loadable module and a FreeBSD-style
             | userland.
             | 
             | Might be possible soon, now that building Linux with clang
             | is supported.
        
               | yjftsjthsd-h wrote:
               | _Huh._ There are already others implementing the Linux
               | ABI, but I can only think of NetBSD rump kernels and some
                | Plan 9 thing...vx9? going the other way. No reason it
                | shouldn't work.
               | 
               | Edit: Oh, and Darling runs Darwin binaries on Linux,
               | which isn't quite a BSD but is non-Linux
        
       | blondin wrote:
        | we believed that continuous integration/continuous deployment
        | (ci/cd) would only bring good. but we started optimizing for
        | metrics such as profit, ignoring actual customer issues.
       | 
        | we started solving issues that are not real customer issues.
       | 
       | people claim that software in the past was very unstable. sure,
       | but let's start putting some context and numbers around these
       | claims. or at least cite such software. in fact, let's do a quick
       | mental exercise for those of us who have been there.
       | 
       | winamp vs. any current music "player".
       | 
       | in quotes, because most modern music apps can go around the
       | entire web but can't often play music. i can't remember when i
       | ever had any issue with winamp. so let me know if your experience
       | was different.
       | 
       | winamp played "my" music. i don't know where the music these
       | modern apps play is coming from. i must have bought them at some
       | point. sometimes i don't remember when because they use so many
       | user interface dark patterns. so many problems at each release or
       | update these days that i want them to stop releasing.
       | 
       | on the other hand, they can't (or don't want to) solve real long-
       | standing customer issues. every year there is a new round of
       | "missing cover art"[0] issues opened for itunes / apple music.
       | 
       | [0]:
       | https://www.google.com/search?q=apple+music+cover+art+missin...
        
         | mathgladiator wrote:
         | The big difference between WinAmp versus the current state is
         | content availability. While I believe WinAmp is much better
         | than Amazon Music, my WinAmp was full of "questionable content"
         | whilst I have no anxiety around using a streaming service.
         | 
          | The problem that I have with Amazon Music isn't the occasional
          | need to restart the tab, but the lack of selection: my wife is
          | part of the They Might Be Giants fan club, which sends her
          | stuff that isn't available on Amazon Music. I used to be able
          | to upload it until they removed that feature. Sonos's software
          | integration with my NAS was lackluster, with a surprising
          | number of issues of "why isn't song X on it?!"
         | 
          | My goal prior to my wife's next birthday is to set up a
         | myalexamedia on a VM/S3 which I can turn on/off quickly to save
         | pennies!
         | 
         | Content and integration with either my Alexa speakers or Sonos
         | are what is making quality an issue.
        
         | agumonkey wrote:
         | what the continuous idea brought is a neverending shape shift
         | of everything.
         | 
         | maybe android web players are better than winamp but every
         | month there's a new one which differs a bit, the others will
         | update and modify lots of things just because they can and
         | there's a new fad.
         | 
          | the experience of 90s software was obviously slower, riskier
          | too (if you had a bug you had it until the next service patch
          | 18 months later). But it made people make long-term choices and
          | you as a user enjoyed a longer trip on the same plane. Humanly
          | it feels more fulfilling (even though technically, I got fewer
          | new features per month than at Chrome's pace, for instance)
        
       | Mikeb85 wrote:
       | Honestly, I've had better luck on the bleeding edge. Newer Linux
       | kernels almost always are better (and when they're not, it gets
       | fixed quickly). Wayland is definitely better than X. Newer Gnome
       | is better than old Gnome.
       | 
        | Ruby 3.0 and 3.1 are definitely better than 2.7. Rails 7 is way
       | better than 6, even in alpha state. Deno is nicer than Node
       | (although the ecosystem needs to catch up). Hell, V8 and then
       | Node led to a boom in web technologies. In fact, I think with
       | every language newer compilers and interpreters just keep getting
       | better, never seen enough regressions to make me not want to
       | upgrade.
       | 
       | With games, newer kernels and drivers are better. Proton
       | Experimental has Age of Empires 4 working perfectly on Linux
       | what, a week after release?
       | 
       | I really can't think of a time I wanted older technology. I just
       | upgraded to Fedora 35, it's the best laptop experience I've ever
       | had. Ubuntu and others have been stable and good enough, but
       | Fedora 35 is snappy and everything works in a way that's just
       | better. No more slow software centre. No weirdness with snaps.
       | Quicker suspend and awake. Fingerprint sensor works out of the
       | box. Etc...
        
         | jcelerier wrote:
          | I have the same experience: my Linux experience (as a user)
          | improved by an order of magnitude when switching to Arch, which
          | has the same bleeding-edge philosophy.
        
           | jacoblambda wrote:
           | I feel like there's a balancing point though. I've used
           | Gentoo, Arch, Ubuntu, Debian, Fedora, RHEL, SUSE, and CentOS.
           | I've found the most stable and least problematic distro to be
           | Gentoo. After that was Arch and then the rest basically
           | sorting by average package age.
           | 
           | I think there needs to be a trade-off between bleeding edge
           | and stability and I find that Gentoo tends to hit it on the
           | head. I've yet to have the system break anything for me
            | without explicitly warning me beforehand, providing me a
           | full mitigation & migration plan, and then requiring me to
           | explicitly continue.
           | 
           | Arch has been pretty stable in the past but I've had problems
           | come up numerous times due to the bleeding edge philosophy.
           | 
           | Point being that you should really only be going as bleeding
           | edge as your community can reliably audit and provide support
           | for. I think Arch toes that line most days and sometimes hops
           | past the line. Gentoo however tends to stay a few steps back
           | from the line and gets close to it but never actually steps
           | over it. And then most other distros are sitting comfortably
           | half a mile back.
        
             | jkepler wrote:
             | Hmm, oddly enough, I've found Debian stable to be the most
             | stable OS for my uses. That said, I do more writing and
             | email than I do programming, so perhaps library
             | dependencies aren't such a big deal for me. If I can run
             | tex-live, emacs, pandoc, along with GNU standard utilities,
             | email client, web, and music, I'm good.
        
         | matheusmoreira wrote:
         | Same experience here. Everything is just better when I'm
         | running the latest software. My system is not "unstable"
         | either.
        
           | edwcross wrote:
            | Counterpoint: when using a system with hybrid graphics that
            | requires me to use proprietary video drivers, switching to
           | the latest Fedora version often brings issues with suspend,
           | or random freezes, or lack of external HDMI output... After a
           | few months, the release stabilizes and my issues disappear.
           | But whenever I tried to be optimistic and jump to the latest
           | release too soon, I got bitten by it and wasted hours trying
           | to find workarounds.
        
         | warner25 wrote:
         | I do like to think that the effort put into software
         | development with each iteration is actually making software
         | better: optimizing performance, fixing bugs, closing security
         | vulnerabilities, etc. So I've been happy with Arch-based
         | distros and most Windows updates, at least. When I take the
         | long-term view, I'm amazed by the improvements in software
         | since I started using a computer in the 1990s. Many more things
         | "just work" now (e.g. Linux on a laptop). Home computer
         | operating systems and applications can run for years now
         | without breaking, whereas reinstalling everything (e.g. the
         | cesspool of Windows 95/98/XP) was almost a monthly requirement
         | back in the day. An operating system that used to take 5-10
         | minutes to fully boot now takes only seconds on a typical
         | machine.
         | 
         | I know that this isn't always true. Sometimes it's just feature
         | bloat, or a design change for the sake of design change, which
         | introduces more bugs and vulnerabilities.
         | 
         | Maybe what I really want is the newest version of the software
         | that I already have, as opposed to the newest software.
         | 
         | At work we still use Red Hat Enterprise Linux 6 on some
         | mission-critical systems. So that's version 2.6.32 of the Linux
         | kernel with equally old applications, and I'm not sure what we
         | get for updates from the vendor in 2021. The problem is that
         | there are recurring bugs and serious vulnerabilities in that
         | system which will never get fixed at this point. You could
         | argue that we know what the bugs are, at least.
        
         | LaLaLand122 wrote:
         | The thing is that the blog entry mixes up, as too frequently
         | happens, the two often used meanings of "stable":
         | 
         | - Rock solid/reliable/"good"
         | 
         | - Not changing
         | 
         | And they are completely independent!
         | 
         | - Windows 95: It's definitively not rock solid. But is it
         | "stable"? Sure! You will have a hard time finding something
         | more "stable". When was the last time it received an
         | update/changed? It's "stable", but not "stable".
         | 
         | - RHEL: Rock solid? I don't really have a lot of experience
         | with it, but let's say yes. And its whole business model is
         | about changing as little as it can. It's "stable" and "stable".
         | 
         | - Fedora: Rock solid? Let's say yes. Does it change? All the
         | time. It's not "stable", but it's "stable".
         | 
         | - Linux in 1991: No idea. But I'm guessing it was crashing all
         | the time. And it surely was changing fast. It was not "stable"
         | and not "stable".
         | 
          | "Stable" software can be not "stable" on purpose. You may want
          | to avoid change so much that you even keep the bugs: people
          | rely on that buggy behaviour!
         | 
         | Not "stable" software may have shitty QA and not be "stable";
         | or it may be so well tested that even changing all the time, it
         | never fails.
         | 
          | If I have a contract with the government for software that
          | needs to provide a service for the next 5 years, I surely will
         | target RHEL X. Not because I think it's specially good, but
         | because I don't want to find myself in court about whether the
         | contract said I need to keep supporting them every time they
         | update the OS. I will deliver something working better or worse
         | today and once accepted... it will keep working the same, bug
         | by bug, relying on the same CentOS X bugs, in 5 years because
         | the underlying system has not changed a bit.
        
         | LAC-Tech wrote:
          | Right, but I feel like you're missing the point of the
          | article, since the latest version of something as long-running
          | as the Linux kernel is unlikely to be flaky.
        
           | dathinab wrote:
           | It's not that simple.
           | 
            | I've been using pure Arch Linux for ~8 years now, and for
            | me it's the best-suited distro.
            | 
            | BUT twice after a kernel update my system didn't boot
            | because of some bug in the kernel. Sure, not a problem for
            | me: I can just boot the recovery system, downgrade the
            | kernel temporarily, and all is fine (and the bug gets fixed
            | fast).
            | 
            | But still, even the latest stable release of the Linux
            | kernel sometimes has problems (though those were the only
            | problems I ever ran into with the kernel, and they were at
            | the vendored-efi<->linux boundary; now that I think about
            | it, it might have been efistub, so maybe not even the
            | kernel).
            | 
            | Anyway, in general I have fewer problems with software not
            | working on Arch than I had on Ubuntu.
        
           | Mikeb85 wrote:
           | Lol the Linux kernel is super flakey. But it moves quick
           | enough it gets fixed when it does flake out.
        
         | vmception wrote:
         | Mostly the same here, but in fresh environments instead of
         | continually upgraded ones.
         | 
          | So local virtualization a la Docker and Vagrant has been a
          | godsend with its isolated environments
        
         | smoldesu wrote:
         | I agree with this mostly, but the new GNOME releases are almost
         | universally a design regression IMO. Pretty much everything
         | that they added was doable with extensions in 3.38 (most of
         | which are now broken with GNOME 40), and the only substantial
         | changes that I can discern is the shift to libadwaita and
         | forcing people away from things like custom stylesheets and
          | shell extensions, which were _the only things that made GNOME
         | tolerable_ for many people, myself included.
         | 
         | Combined with the overall hostility and 'my way or the highway'
         | mentality that's been pervasive throughout the GNOME team, I
         | didn't feel too bad dumping their DE for KDE Plasma. I respect
         | the constant desire to improve things, but their refusal to
         | kill sacred cows and infatuation with destroying people's
         | workflow doesn't really inspire me to spend more time with
         | their software. It's especially ironic when you consider that
         | they recently re-wrote their CoC to be vague enough for the
         | developers to systematically silence anyone who makes feature
         | requests they disagree with, or post bug reports for stale
         | issues. If this mentality continues to spread across the rest
         | of desktop Linux, it will probably be dead in the water. It's
         | no surprise to me that Valve decided to eschew all that GNOME
          | drama with the Steam Deck and just shipped it with Plasma. Qt
         | is a genuinely terrible toolkit by many metrics, but at least
         | it doesn't have developers that call you fascist for enforcing
         | a custom stylesheet in their _open source software_.
        
       | [deleted]
        
       | maxekman wrote:
        | The last piece of advice was a bit humorous (sad but true):
       | 
       | > And one final piece of advice: don't ever let your career
       | depend on an unstable platform and tooling unless you directly
       | profit from that instability.
        
       | ravar wrote:
        | Proprietary software with no competitors (Nikon Elements, cough)
       | creates situations where you have to depend on very unstable and
       | buggy software.
        
         | GhettoComputers wrote:
         | Does this also apply to iOS and forced updates? ;)
         | 
         | Trade offs in the form of hardware and software are what I
         | constantly have to think of.
        
       | l0b0 wrote:
       | This is "unstable" as in "recent versions", not as in "broken".
       | Anecdotally, "unstable" is a complete misnomer. Arch Linux
       | (rolling distro, upgraded on my machines almost every day for ~6
       | years) was far more stable than Ubuntu (8.04 through 14.10 or so,
       | 18.04 and 20.04) ever managed.
       | 
       | My pet hypothesis is that the core developers of any system are
       | spending 95%+ of their time working on the latest release, and
       | that (understandably, since that earns all the credit) hardly
       | anybody wants to spend any more time than necessary to support
        | older versions of anything. This goes doubly for combinations of
        | two or more old systems which maybe ten people worldwide might
        | be using.
        
       | for_i_in_range wrote:
       | Not really, as advancements in evolution are predicated on
       | unstable software (genetic variations)
        
       | the__alchemist wrote:
       | Counterpoint: Stable software may persist poor UI/API,
       | limitations, and mistakes indefinitely.
        
       | mercurialsolo wrote:
        | Would be interesting to see ratings of software, much like
        | Moody's credit ratings for financial products.
        
       | rubyist5eva wrote:
       | Regular Releases are Wrong. Roll For Your Life
       | 
       | > Upstream packages change fast
       | 
       | > Upstream support is typically short, shorter than the life-time
       | of some LTS distributions
       | 
       | > SUSE (and others) maintain a large number of distribution
       | variants that need to be updated regularly
       | 
       | > Upstream projects are getting larger and larger
       | 
       | > Using stable release and trying to back-port security fixes
       | isn't safer than using the latest versions with all the security
       | fixes
       | 
       | > The closer you are to upstream the better it is for everyone
       | 
       | > It's easier to work with upstream
       | 
       | > It's easier to contribute and submit patches
       | 
       | > Slow and conservative updates models don't work
       | 
       | > Slow update models are not more sustainable
       | 
       | > Slow update models undermine "Open Source"
       | 
       | > "Partially Slow" is "Totally Broken"
       | 
       | https://linuxreviews.org/Richard_Brown:_Regular_Releases_are...
        
       | deathanatos wrote:
       | > _For projects using SemVer, frequently changing major version
       | numbers is an obvious red flag._
       | 
       | Huh, I have the opposite impression, particularly if the project
       | has a CHANGELOG of the breakages. Smaller releases are easier
       | upgrades, and the changelog tends to indicate what call sites
       | require investigation/changes prior to upgrading; projects with
       | absolutely massive amounts of breaking changes that require
       | rewriting everything are so much more painful.
        
         | crehn wrote:
         | The point is well-designed contracts don't need many breaking
         | changes in the first place.
         | 
          | As a corollary, badly designed software is more likely to
          | require breaking changes.
         | 
         | Same applies to badly managed software, where compatibility is
         | broken inconsiderately, wasting people's time.
        
       | pornel wrote:
       | People often conflate stability with stagnation.
       | 
       | When you have poorly written software, then old versions of it
       | are "stable" only because people have already learned how to live
       | with and work around its obvious bugs. Updating means learning to
       | live with new bugs, which is seen as instability, but it's just
       | the same software again.
       | 
       | OTOH if you're dealing with reliable software with a low defect
       | rate, then all versions are "stable". Updating it isn't painful,
       | and you can expect it to improve things without introducing new
       | bugs.
       | 
       | For example: autotools is "stable", but in the stagnant sense. I
       | use it by googling weird error messages. OTOH cargo is "stable"
       | in the well-tested sense. The newer the better. I can use its
       | nightly build and expect it to work with my every project.
        
       | thrower123 wrote:
       | And yet everything is built on web services nowadays that break
       | contracts and change behaviour constantly.
        
       | akudha wrote:
       | _don't ever let your career depend on an unstable platform and
       | tooling unless you directly profit from that instability._
       | 
       | Woah, unstable platform is bad, unless it makes you money? Am I
       | reading this right?
        
         | bruce343434 wrote:
         | It's just good business advice.
        
         | intrasight wrote:
         | The human body is an unstable platform that doctors make money
         | off. Lawyers make money off the unstable legal platform. So
         | it's been the case for thousands of years. Even better in
          | recent times, as you need a license to work that unstable
          | platform ;)
        
         | bool3max wrote:
         | > Am I reading this right?
         | 
         | You're not. There may be no correlation between the platform
         | being bad and it being profitable. If it happens to be bad,
         | don't let your career depend upon it, _unless_ it also happens
         | to generate revenue for you.
        
         | hmrr wrote:
          | Yes. This is why I charge by the hour, ad hoc, when I have to
          | deal with Windows. I can't estimate all the weird-ass problems
         | initial quote.
        
         | hsn915 wrote:
          | Without context, I think the part after "unless" is sort of
          | sarcastic or ironic.
        
       ___________________________________________________________________
       (page generated 2021-11-13 23:00 UTC)