[HN Gopher] How to Improve Your Monolith Before Transitioning to...
       ___________________________________________________________________
        
       How to Improve Your Monolith Before Transitioning to Microservices
        
       Author : ahamez
       Score  : 157 points
       Date   : 2022-07-06 13:31 UTC (9 hours ago)
        
 (HTM) web link (semaphoreci.com)
 (TXT) w3m dump (semaphoreci.com)
        
       | iasay wrote:
        | Advice: don't bother if you have a monolith. Just keep trucking
        | on with it and partition your stuff vertically, with shared auth
        | and a marketing site.
       | 
       | Fuck microservices unless you have a tiny little product with a
       | few million users. Which is almost none of us.
        
       | muh_gradle wrote:
       | Enjoyed the article, and found #11 and #12 to be almost a
       | requirement for most teams.
       | 
        | I've worked for an org that adopted microservices when its small
        | size didn't justify it, but it eventually found its footing with
        | good containerization and K8s from a growing DevOps team.
        | Individual teams were eventually able to deploy and release to
        | various environments independently and to test on QA
        | environments easily.
       | 
       | And I now work for an org with a monorepo with 100+ developers
       | that probably should have been broken up into microservices a
       | while ago. Everything just feels broken and we're constantly
       | running around wondering who broke what build when. We have like
       | 6 SREs for a team of 100+ devs? I think a lot depends on how well
        | CI/CD is developed and on the DevOps/SRE team.
        
       | codethief wrote:
       | Here's a quote from https://grugbrain.dev/ (discussed here on HN
       | a while ago) which seems very appropriate:
       | 
       | > Microservices: grug wonder why big brain take hardest problem,
       | factoring system correctly, and introduce network call too. seem
       | very confusing to grug
        
         | com2kid wrote:
          | I once worked with a system where all local function calls had
         | parameters serialized to XML, sent over, then deserialized by
         | the calling functions.
         | 
         | The framework was meant to be network transparent, remote calls
         | looked the same as local calls, everything was async, and since
         | everything used service discovery you could easily refactor so
         | that a locally provided service was spun off to a remote
         | computer somewhere and none of the code that called it had to
         | change.
         | 
         | So anyway 50% of the CPU was being used on XML
         | serialization/deserialization...
        
           | pulse7 wrote:
            | IBM has hardware for that; it's called the "IBM WebSphere
            | DataPower SOA Appliance". It has special hardware-
            | accelerated XML processing...
        
           | codethief wrote:
           | > remote calls looked the same as local calls
           | 
           | To some degree this is actually nice! I mean, one major
           | reason local API calls (in the same programming language) are
           | nicer than network calls - besides avoiding latency - is that
           | network APIs rarely come with strong type safety guarantees
            | (or at least you have to bolt them on top, think
           | OpenAPI/Swagger). So I actually wish we were in a world where
           | network calls were more similar to local API calls in that
           | regard.
           | 
           | But of course, in the concrete situation you're describing,
           | the fact that
           | 
            | > all local function calls had parameters serialized to XML
           | 
           | sounds like a very bad idea.
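            | 
            | A rough sketch of the kind of thing I mean (hypothetical
            | TypeScript, names made up): one typed interface, two
            | transports, so callers can't tell local from remote.
            | 
            |     // One contract; the transport is an implementation detail.
            |     interface UserService {
            |       getUser(id: string): Promise<{ id: string; name: string }>;
            |     }
            | 
            |     // In-process implementation: a plain function call.
            |     class LocalUserService implements UserService {
            |       async getUser(id: string) {
            |         return { id, name: "local-" + id };
            |       }
            |     }
            | 
            |     // Remote implementation behind the same interface.
            |     class HttpUserService implements UserService {
            |       constructor(private baseUrl: string) {}
            |       async getUser(id: string) {
            |         const res = await fetch(`${this.baseUrl}/users/${id}`);
            |         // type safety ends at the wire unless you validate here
            |         return res.json();
            |       }
            |     }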
        
         | LewisVerstappen wrote:
         | If you have a large engineering team (hundreds+ devs) with a
         | large codebase then having a monolith can slow down developer
         | velocity.
         | 
          | There's massive scope within the monolith, build tools start
          | to strain, etc.
        
           | vbezhenar wrote:
           | Surely there should be something between gigantic monolith
           | and micro services. I would call it service.
        
             | mirekrusin wrote:
              | Yes, not sure why we have so many brainwashed fanatics who
              | see the world as hotdog and not-hotdog - microservices and
              | monoliths - only.
        
               | geekbird wrote:
               | Seriously! I think there is a good space for the concept
               | of "mid-services" - cluster similar and interdependent
               | services and service fragments together, so they split in
               | logical groups for updating.
               | 
                | It would be sort of like "Module A is authentication and
                | session management, module B is the data handling layer,
                | and module C is the presentation and processing layer."
                | Each of those under a microservices dogma would be two to
                | four microservices struggling to interoperate.
               | 
               | I read the book written by the dev that advocated for
               | microservices. I wanted to throw it across the room, but
               | it was an ebook. He literally went for over half the book
               | before he even addressed operability. Everything was
               | about developer convenience, not operating it with an eye
               | toward user satisfaction. The guy was clueless.
        
             | strix_varius wrote:
             | Typically, "monolith" implies services - ie, the "backing
             | services" from a traditional 12-factor monolith:
             | 
             | - https://12factor.net/backing-services
             | 
             | Monolith vs. Microservices comes about because
             | microservices proponents specifically set it up as an
             | alternative to a traditional monolith + services stack:
             | 
              | - https://www.nginx.com/learn/microservices-architecture/#:~:t...
        
           | com2kid wrote:
           | One awesome and often overlooked benefit of microservices is
           | how they simplify security/dependency updates.
           | 
           | With a monolith, dependency updates, especially breaking
           | ones, often mean either all development stops for a "code
           | freeze" so the update can happen, or you have a team
           | responsible for doing the update and they are trying to
           | update code faster than other devs add new code.
           | 
            | The result of this is that updates get pushed back to the
            | last minute, or are just never done. I've seen old (ancient)
           | versions of OpenSSL checked into codebases way too often.
           | 
           | With microservices, you can have a team that isn't as busy
           | take a sprint to update their codebase, carefully document
           | best practices for fixing breaking changes, document best
           | practices for testing the changes, and then spread the
           | learning out to other teams, who can then update as they have
           | time or based on importance / exposure of their maintained
           | services.
           | 
           | It is a much better way of doing things.
           | 
           | It also means some teams can experiment with different
           | technologies or tool chains and see how things work out. The
           | cost of failure is low and there isn't an impact to other
           | teams, and build systems for microservices tend to be much
           | simpler than for monoliths (understatement...)
        
             | 0xFACEFEED wrote:
              | Microservices are a heavy-handed way to draw boundaries
             | around your software so that bad technical decisions don't
             | bleed across different teams. Obviously there is some
             | benefit to that but there is also a massive tradeoff -
             | especially for certain types of software like complex UIs.
             | 
             | > With a monolith, dependency updates, especially breaking
             | ones, often mean either all development stops for a "code
             | freeze" so the update can happen, or you have a team
             | responsible for doing the update and they are trying to
             | update code faster than other devs add new code.
             | 
             | In all my years I've never seen a code freeze due to a
              | dependency update. Maybe the project you were working on
              | was poorly engineered?
             | 
             | > The result of this is that updates get pushed back to the
             | last minute, or are never just done. I've seen old
             | (ancient) versions of OpenSSL checked into codebases way
             | too often.
             | 
             | There should be nothing stopping you from running multiple
              | versions of a dependency within a single monolithic
             | project.
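              | 
              | For instance, in a Node monolith, npm's package aliases let
              | two majors of the same library coexist (a sketch; versions
              | and names illustrative):
              | 
              |     // package.json (npm package aliases):
              |     //   "dependencies": {
              |     //     "lodash3": "npm:lodash@^3.10.1",
              |     //     "lodash4": "npm:lodash@^4.17.21"
              |     //   }
              |     import _3 from "lodash3"; // legacy code keeps v3 until migrated
              |     import _4 from "lodash4"; // new code uses v4
              |     console.log(_3.chunk([1, 2, 3, 4], 2), _4.chunk([1, 2, 3, 4], 2));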
             | 
             | > With microservices, you can have a team that isn't as
             | busy take a sprint to update their codebase, carefully
             | document best practices for fixing breaking changes,
             | document best practices for testing the changes, and then
             | spread the learning out to other teams, who can then update
             | as they have time or based on importance / exposure of
             | their maintained services.
             | 
             | Gradual adoption of new dependencies has nothing to do with
             | microservices.
        
               | com2kid wrote:
               | > In all my years I've never seen a code freeze due to a
                | dependency update. Maybe the project you were working on
                | was poorly engineered?
               | 
               | I spent a decade at Microsoft, I started before cloud was
               | a thing. All code lived in monoliths[1]. I once had the
               | displeasure of looking at the source tree for XBox Live
               | circa 2008 or so. Nasty stuff.
               | 
               | "Don't check anything in today, we're trying to finish up
               | this merge" was not an uncommon refrain.
               | 
                | But you are right, often there weren't code freezes;
                | instead, system-wide changes involved obscene engineering
                | efforts so developers could keep the change branch up to
                | date with mainline while dependencies were being updated.
               | 
                | I'll confess my experience with large monolithic code
                | bases is all around non-networked code, but IMHO the
                | engineering maintenance challenges are the same.
               | 
               | > There should be nothing stopping you from running
               | multiple versions of a dependency within a single
                | monolithic project.
               | 
               | Build systems. They are complicated. I spent most of my
               | life pre JS in native C/C++ land. Adopting a library at
               | all was an undertaking. Trying to add 2 versions of a
               | library to a code base? Bad idea.
               | 
               | Heck even with JS, Yarn and NPM are not fun. And once a
               | build system for a monolith is in place, well the entire
               | idea is that a monolith is one code base, compiled into
               | one executable, so you don't really swap out parts of the
               | build system.
               | 
               | Hope none of your code is dependent upon a compiler
               | extension that got dropped 2 years back. And if it is,
               | better find time in the schedule to have developers
               | rewrite code that "still works just fine".
               | 
               | Contrast that, in my current role each microservice can
               | have its own build tools, and version of build tools.
               | When my team needed to update to the latest version of
               | Typescript to support the new AWS SDK (which gave us an
                | insane double-digit % perf improvement), we were able to,
                | even though the organization as a whole was not yet
                | moving over.
               | 
               | Meanwhile in Monolith land you have a build system that
               | is so complicated that the dedicated team in charge of
               | maintaining it is the only team who has even the
               | slightest grasp on how it works, and even then the build
               | systems I've seen are literally decades old and no one
               | person, or even group of people, have a complete
               | understanding of it.
               | 
               | Another benefit is that microservices force well defined
               | API boundaries. They force developers to consider, up
               | front, what API consumers are going to want. They force
               | teams to make a choice between engineering around
               | versioning APIs or accepting breaking changes.
               | 
               | Finally, having a REST API for everything is just a nice
               | way to do things. I've found myself able to build tools
               | on top of various microservices that would otherwise not
               | have been possible if those services were locked up
               | behind a monolith instead of having an exposed API.
               | 
               | In fact I just got done designing/launching an internal
               | tool that was only possible because my entire
               | organization uses microservices. Another team already had
               | made an internal web tool, and as part of it they made a
               | separate internal auth microservice (because _everything_
                | is a microservice). I was able to wire up my team's
               | microservices with their auth service and throw a web UI
               | on top of it all. That website runs in its own
               | microservice with a customized version of the org's build
               | system, something that was possible because as an
               | organization we have scripts that allow for the easy
               | creation of new services in just a matter of minutes.
               | 
               | Back when I was at Microsoft, _none_ of the projects I
               | worked on would have allowed for that sort of absurd code
               | velocity.
               | 
               | Another cool feature of microservices is you can choose
               | what parts are exposed to the public internet, vs
               | internal to your network. Holy cow, so nice! Could you do
               | that with a monolith? Sure, I guess. Is it as simple as a
               | command line option when creating a new service? If you
               | have an absurdly well defined monolith, maybe.
               | 
               | Scaling, different parts of a system need to scale based
               | on different criteria. If you have a monolith that is
               | running on some # of VMs, how do you determine when to
               | scale it up, and by how much? For microservices, you get
               | insane granularity. The microservice pulling data from a
               | queue can auto-scale when the queue gets too big, the
               | microservice doing video transcoding can pull in some
               | more GPUs when its pool of tasks grows too large. With a
               | monolith you have to scale the entire thing up at once,
               | and choose if you want vertical or horizontal scaling.
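                | 
                | A minimal sketch of that queue-driven scaling idea
                | (TypeScript, stubbed APIs; a real setup would use e.g.
                | SQS metrics feeding a k8s autoscaler):
                | 
                |     // Stubs standing in for your queue and orchestrator APIs.
                |     async function getQueueDepth(queue: string): Promise<number> {
                |       return 1200; // stub: read from the queue's metrics API
                |     }
                |     async function setReplicas(svc: string, n: number): Promise<void> {
                |       console.log(`scale ${svc} to ${n}`); // stub: call the orchestrator
                |     }
                | 
                |     // One worker per 100 queued jobs, clamped to [1, 50].
                |     async function autoscale(): Promise<void> {
                |       const depth = await getQueueDepth("transcode-jobs");
                |       const desired = Math.min(50, Math.max(1, Math.ceil(depth / 100)));
                |       await setReplicas("transcoder", desired);
                |     }
                |     autoscale();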
               | 
               | You can also architect each microservice in a way that is
               | appropriate for the task at hand. Maybe pure functions
               | and completely stateless makes sense for one service,
               | where as a complex OO object hierarchy makes sense
               | someplace else. With microservices, impedance mismatches
               | are hidden behind network call boundaries. Yes you can
               | architect monoliths in vastly different fashions
               | throughout (and I've done such), but there is a limit to
               | that.
               | 
               | E.g. with microservices you can have one running bare
               | metal written in C++ on a hard real time OS, and other
               | written in Python.
               | 
               | Oh and well defined builds and deployments is another
               | thing I like about microservices. I've encountered
               | monoliths where literally no one knew how to completely
               | rebuild the production environment (I overheard from
                | another engineer that Xbox Live services existed in that
                | state for a while...)
               | 
               | And again, my bias is that I've only ever worked on large
               | systems. Outside my startup, I've never worked on a
               | project that didn't end up with at least a couple hundred
               | software engineers writing code all towards one goal.
               | 
               | Is k8s and microservices a good idea for a 5 person
               | startup? Hell no. I ran my startup off a couple VMs that
                | I SCP'd deployments to, alongside some Firebase
               | Functions. Worked great.
               | 
               | [1] This is not completely true, Office is divided up
               | pretty well and you can pull in bits and pieces of code
               | pretty independently, so if you want a rich text editor,
               | that is its own module. IMHO they've done as good of a
               | job as is possible for native.
        
           | strix_varius wrote:
           | Interesting. I found working on FB's monolith to have, on
           | average, higher velocity and fewer blockers, than working on
           | microservicified systems at other places 2+ orders of
           | magnitude smaller in both engineering and codebase size.
        
             | roflyear wrote:
             | Agreed.
        
             | ddenchev wrote:
              | It is not fair to compare Facebook's monolith and the
              | monolith at the average company, as they are not really the
             | same thing. The tooling available at Facebook is built and
             | maintained by a team larger than the engineering
             | departments at most companies.
             | 
              | There comes a point where regular off-the-shelf tooling
              | does not scale sufficiently well for a monolith. Test
              | suites and builds start to take too long. Deployments get
             | increasingly complicated. Developers start to get into each
             | other's way, even when working on unrelated features.
             | Additionally, if you are using an untyped, interpreted
             | language, keeping a large app well organized can also be a
             | problem.
             | 
             | Microservices is a tool for dealing with complexity and
             | certainly not the only one. However, building the tooling
             | and infra for a large and sophisticated monolith is not
             | simple and not guaranteed to be an easier solution to the
             | problems listed above.
        
               | strix_varius wrote:
               | How is this relevant? My comment is in response to an
               | observation about "large engineering teams," not "the
               | monolith at the average company."
               | 
               | At the average company, standard tools will work fine,
               | while companies with large engineering teams have the
               | resources to maintain custom tooling.
        
               | dboreham wrote:
               | You are assuming that the observed tool strain scales
               | with the number of developers. In my experience it scales
               | with the number of coupled concerns inside the same repo.
               | Now, this may be somewhat correlated with the number of
               | developers, but not entirely. Therefore in my experience
               | again you can end up with a moderately sized company
               | running into tool limits with a monorepo. FB doesn't have
               | those problems because they use different tools.
        
               | strix_varius wrote:
               | > FB doesn't have those problems because they use
               | different tools.
               | 
               | Exactly - instead of using microservice-oriented tools,
               | they use tools organized around monoliths. And that
                | decision serves them well. _That's the whole point._
        
               | cmrdporcupine wrote:
               | Microservices move the complexity rather than solve it.
               | 
               | The dependency boundaries between portions of the data
               | model can never be cleanly demarcated -- because that's
               | not how information works, especially in a growing
               | business -- so there's always going to be either some
               | loss of flexibility or addition of complexity over time.
               | 
               | Individual developers getting out of each other's way
               | just ends up getting pushed to getting in each other's
               | way at release time as the matrix of dependencies between
               | services explodes.
               | 
                | Engineers' jobs become more about the lives and dramas of
               | the services they work on than about business domain. You
               | end up building your organization and reporting structure
               | around these services, rather than the business needs of
               | the customer. And then you end up indirectly or directly
               | shipping that org chart to the world in delays or bugs
               | caused by your fragmentation.
               | 
               | Instead of modeling facts about data and their
               | relationships, and constructing the relational model
               | which can capture this, the developer in the microservice
               | model becomes bogged down in service roles and activities
               | instead, again taking them away from the actual problem:
               | which is organizing information and making it accessible
               | to users/customers.
               | 
               | It's a shell game.
               | 
               | The Facebook monolith works because engineers there
                | invested time in _building_ the tooling you're
               | complaining is not available to others. Same with Google:
               | Google invested in F1, etc. because it evaluated the cost
               | to do otherwise and it made sense to invest in
               | infrastructure.
               | 
               | Yes, small companies can't often afford this. Luckily
               | they have two things on their side:
               | 
               | Most small companies don't have a fraction of the scale
               | issues that a FB or a Google have. So they can afford to
               | monolith away for a lot longer than they seem to think
               | they can, while they put in infrastructure to scale the
               | monolith.
               | 
               | The industry as a whole has invested a lot in making
               | existing things scale. e.g. you can do things with a
               | single Postgres instance that we never would have dreamed
               | about 10 years ago. And when that falls over, there's
               | replication, etc. And when that falls over, guess what?
               | There's now high performance distributed ACID SQL
               | databases available for $use.
               | 
               | Microservices is surely one of the longest lived, biggest
               | cargo cults in our industry. I've seen others come and
               | go, but microservices really seems to cling. I think
               | because it has the _perception_ of breaking business
               | problems down into very small elegant independent atomic
                | pieces, so it has a very... industrial revolution,
                | automation, factory floor, economies of scale vibe. But
                | that's not what it is.
               | 
               | There are places for it, I'm sure. But systems with
                | highly interrelated data and quickly changing requirements
               | are not well suited.
               | 
               | IMHO.
        
               | geekbird wrote:
               | Yeah, I've seen stuff carved into tiny, fragile
               | microservices when the number of nodes was under ten.
               | Stupid, IMO, and it took a stable service and made it a
               | flaky mess. It was done because of dogmatic "It must be
               | in containers in the cloud with microservices, because
               | that is The Way(TM)". Literally there was an initiative
               | to move _everything_ possible to the cloud in containers
                | with lots of microservices because one of the place's
               | gurus got that religion. It increased complexity,
               | decreased reliability and cost a lot of money for not
               | much benefit.
               | 
               | Until you have well over 20 systems doing one
               | thing/application, trying to treat bespoke services like
               | cattle instead of pets is silly. It will also drive your
               | ops people to drink, especially if it's done by "DevOps"
               | that never get paged, and refer to your group in meetings
               | with other companies as "just ops". (Yes, I'm still salty
               | about it.)
               | 
               | Often I think it's "resume driven development",
               | especially if the people pushing for it want to abandon
               | all your existing tools and languages for whatever is
               | "hot" currently.
        
         | cosmic_quanta wrote:
         | I didn't know about this website. It's hilarious. Thank you for
         | sharing.
        
         | ducharmdev wrote:
         | grug is the peak of SWE wisdom to me
        
         | nostrebored wrote:
         | Seems like grug only works on tiny teams and on
         | undifferentiated problems
        
           | forgetfulness wrote:
           | Well yeah, there's cases where it makes sense.
           | 
           | I was once on a team dedicated to orchestrating machine
           | learning models and their publishing in a prediction service.
            | The training-to-publishing process was best kept atomic, and
            | its Scala codebase separate and distinct from the Java
            | monolith that called it. Despite the monolith being the only
            | caller and not benefiting from asynchronicity in the call,
            | this design made the implementation and operation of a very
            | profitable part of the business far more convenient than
            | trying to shove the prediction logic inside the caller.
           | 
           | But there are teams that will be splitting every damned
            | endpoint of a service into its own process; that's just
            | unnecessary overhead whichever way you look at it: network,
            | deployment, infrastructure, monitoring, testing, and
            | deployment again, because it bears repeating that it
            | complicates rollouts of new features.
        
           | roflyear wrote:
           | You miss the point. Fucking up your infrastructure isn't a
           | solution to organizational problems.
        
             | benreesman wrote:
             | Bingo. In the long tradition of COM/CORBA, XML World,
             | capital-A Agile, micro-services are hard to argue against
             | because people fucking freak out if you push hard, because
             | it's a career path.
        
         | reidjs wrote:
          | I translated this to plain English!
         | 
         | https://github.com/reidjs/grug-dev-translation
        
         | Tabular-Iceberg wrote:
         | Is there ever a good reason to change from monolith to
         | microservices unless you have already solved the factoring
         | problem in the monolith? The factoring problem doesn't get
         | easier because you add network calls into the mix, but being
         | well factored first makes adding the network calls a lot
         | easier. If you're lucky your framework or platform already has
         | that part figured out.
         | 
         | Maybe one exception is if you absolutely must change the
         | programming language at the same time, but even then I would
         | suggest doing the rewrite as a monolith first for the same
         | reason, so you again don't have to deal with network calls at
         | the same time as solving an already insanely hard problem.
         | 
          | There's the argument that microservices let you gradually
         | replace the old monolith piece by piece, but there's nothing
         | special about microservices for that, you can do that with two
         | monoliths and a reverse proxy.
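          | 
          | A sketch of that proxy approach (TypeScript on node:http;
          | paths and ports made up) - routes already migrated go to the
          | new app, everything else to the old one:
          | 
          |     import http from "node:http";
          | 
          |     const OLD = { host: "127.0.0.1", port: 8080 }; // legacy monolith
          |     const NEW = { host: "127.0.0.1", port: 9090 }; // replacement
          | 
          |     http.createServer((req, res) => {
          |       const target = req.url?.startsWith("/billing") ? NEW : OLD;
          |       const upstreamReq = http.request(
          |         { ...target, path: req.url, method: req.method,
          |           headers: req.headers },
          |         (upstream) => {
          |           res.writeHead(upstream.statusCode ?? 502, upstream.headers);
          |           upstream.pipe(res);
          |         }
          |       );
          |       req.pipe(upstreamReq);
          |     }).listen(3000);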
         | 
         | And at the end of either you might find that you didn't need
         | microservices in the first place, you just needed to factor the
         | application better.
        
           | Thetawaves wrote:
            | The benefit of microservices is that you can divide areas of
            | responsibility among teams and (theoretically) uncouple the
            | change processes for each.
        
         | wvh wrote:
         | At first, you're ignorant. Then, dogmatic. And then you learn
         | what to apply and when.
         | 
         | Due to the average age of programmers, a lot of people fall
         | into the first two categories.
        
           | hinkley wrote:
           | An analogy that seems to connect with some people at least:
           | 
           | You know the Martial Arts movie trope where 20 minutes into
           | the movie, the master tells the student never to do X, and
           | then in the final battle the student only triumphs by doing
           | X? That's because what you _can_ do and what you _should_ do
            | changes as you figure out what you're doing. The rule
           | applies to you until you understand why the rule doesn't
           | apply to you/this situation.
           | 
           | The subtlety here is in knowing the difference between "the
           | rules don't apply to me" and "I understand the situation
           | where the rule is reasonable, and safe, and either this is
           | not one of those situations, or 'safe' doesn't address an
           | even bigger danger." The former is the origin story of the
           | villain, the latter the origin story of the unlikely hero. Or
           | less dramatically, it's Chesterton's Fence.
        
         | ezekiel11 wrote:
          | Five years ago this was very much true, but serverless
          | services have drastically lowered the cost of overhead.
         | 
          | It is more work, but the goal is always to move away from the
          | monolith and reap the benefits.
         | 
          | Microservices are past their Backbone.js moment: we've moved
          | from the jQuery phase into the React phase. An established
          | standard with dividends that pay off later is being realized.
         | 
          | I just no longer follow these microservice-bashing tropes in
          | 2022; a lot has changed since that Netflix diagram went viral.
        
           | pclmulqdq wrote:
           | You still pay that cost with your wallet, it's just hidden
           | from you when you look at your code.
           | 
           | The main monetary benefit of serverless is that you can truly
           | scale down to 0 when your service is unused. Of course,
           | writing a nice self-contained library that fits into your
           | monolith has the same benefit.
        
             | ezekiel11 wrote:
        
               | mattmanser wrote:
                | They already run cloud VMs sharing CPU - the sharing
                | already happens - so why would it be somehow magically
                | cheaper?
               | 
               | And how do you think they make money? Every second of a
                | serverless architecture is probably 100s or 1000s of times
               | more expensive than a second of a traditional server.
               | 
               | Use your brain for 10 seconds, it's obviously going to be
               | ridiculously overpriced for the compute you actually get.
               | That's how they make money. And on top of that they have
               | to do so much more orchestration and overhead to run it.
               | 
               | And bonus points, you're now locked into their
               | architecture too!
               | 
                | If you have enough load to keep a dedicated machine
                | running at 10% CPU load on average, it'll be cheaper
                | running a dedicated machine than anything else: you're
                | probably looking at old-school VMs costing 2x more, cloud
                | servers 10x more, and serverless at a minimum of 20x
                | more.
               | 
               | We're not luddites, you're just a sucker.
        
               | ezekiel11 wrote:
        
               | pclmulqdq wrote:
               | The premium is about 10x over cloud VMs, unless you are
               | running very specific kinds of functions that are long-
               | running and take very little memory.
        
               | pclmulqdq wrote:
               | I fully understand serverless billing, which is why I
               | told you its advantage: scaling to 0 for functions you
               | almost never use. But if you are running literally
               | ANYTHING else, you can get that advantage yourself: run
               | your "serverless microservice" as a *library* inside its
               | caller. You don't need the overhead of an RPC to enforce
               | separation of concerns.
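                | 
                | A sketch of that library-first shape (hypothetical
                | TypeScript): the logic has no transport baked in, so the
                | monolith calls it directly today, and a thin handler can
                | expose it remotely later if that's ever truly needed.
                | 
                |     // The "service" is just a library function.
                |     export interface ResizeRequest { url: string; width: number }
                | 
                |     export async function resizeImage(r: ResizeRequest): Promise<Uint8Array> {
                |       // ...real work here; no HTTP/Lambda assumptions baked in
                |       return new Uint8Array();
                |     }
                | 
                |     // Monolith: const thumb = await resizeImage({ url, width: 128 });
                |     // Later, if needed, wrap the same function in an RPC/Lambda
                |     // handler without touching its logic.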
               | 
               | A small startup can pay $5/month to run a monolith on a
               | tiny server commensurate with its use. It can scale that
               | monolith up with use, from a $5 VM offering 1 core and a
               | tiny bit of RAM all the way to a two-socket server VM
               | offering 110+ cores and 512 GB of RAM. Alternatively, a
               | large company can scale nearly infinitely with horizontal
               | scaling. When I worked on a service with a trillion QPS
               | and 50% usage swings at a mega-Corp, that's what we did.
                | All our customers, even the ones with a measly 1 million
                | QPS, did the same. And their customers, and their
               | customers, and so on.
               | 
               | "Serverless" was something sold to fashionable tech
               | startups and non-technical people who didn't have the
               | expertise to maintain the VMs/containers they needed. The
               | serverless systems carried a huge price premium too.
               | Everyone with true scale understood that there are
               | servers there, and you are going to pay to manage them
               | whether you want to or not.
        
               | ezekiel11 wrote:
                | serverless is a huge boon to large enterprises who want
                | to be more agile and not dependent on monolith
                | architecture. the cost to them is a rounding error at
                | best. a startup is a poor yardstick to measure
                | serverless's benefits; if you can run on a $5 DO vps, by
                | all means, you are not its target market.
        
         | ren_engineer wrote:
          | fundamental problem is cargo-cult developers trying to copy
          | massive companies' architectures as a startup. They fail to
          | realize those companies only moved to microservices because
          | they had no other option. Lots of startups hurt themselves
          | by blindly following these practices without knowing why they
          | were created in the first place. Some of it is also done by
          | people who know it isn't good for the company, but they want to
          | pad their resume and leave before the consequences are seen.
         | 
          | same thing applies to Leetcode-style interviews; honestly, no
          | startup should be using them. They are for established
          | companies that have so many quality applicants they can afford
          | to filter out great candidates.
        
           | Jistern wrote:
        
           | mirekrusin wrote:
           | this; i've repeated it dozens of times at my co. to the same
           | people. it was funny, then weird, now it's becoming
           | depressing, not sure what's next.
        
             | avgDev wrote:
             | Next, you find a sensible company with reasonable people.
        
           | vinnymac wrote:
           | I have seen the resume padder on many occasions, and have had
           | to clean up after they left. It's amazing to see a team
           | become visibly happier with their day job when you move to
           | sensible foundations based on sound reasoning.
           | 
           | The teams that were the most efficient, and happiest with
           | their day to day were the ones which picked frameworks with
           | good documentation, a plethora of online resources, and
           | ignored the latest shiny toy on the market.
        
             | hinkley wrote:
             | What's amazing is how often people have to see it to
             | believe it.
             | 
             | One of my running jokes/not jokes is that one day I'm going
             | to hire a personal trainer, but then instead of having them
             | train me, I'm just going to ask them a million questions
             | about how they convince people who have never been 'in
             | shape' how good it's going to feel when they are.
             | 
             | Because that's what it often feels like at work.
        
           | cmrdporcupine wrote:
           | re: "copy massive companies"; I don't recall seeing a single
           | "microservice" at Google. Granted, in the 10 years I was
           | there I mostly worked on non-cloud type stuff. But still.
           | 
           | Google has perfected the art of the giant, distributed,
           | horizontally scalable mostly-relational database. They have
           | _services_ , yes. But in large part... they generally all
           | talk to F1 or similar.
           | 
            | _Microservices_ with each thing having its own schema and/or
            | DB seems to me to be a phenomenally stupid idea which
           | will simply lead to abusing your relational model and doing
           | dumb things like your own custom bespoke server-side joins
           | with your own custom bespoke two phase commit or other
           | transaction logic.
           | 
           | Before Google I worked in one shop that did 'microservices'
           | and the release process was a complicated nightmare that
           | would have been better solved by a complicated combinatorial
           | graph optimization library. There were cross service RPCish
           | calls made all over the place to piece data together that in
           | a more monolithic system would be resolved by a simple fast
           | relational join. I shudder to remember it.
           | 
           | But I'm just an old man. Pay no heed.
        
             | SulphurSmell wrote:
             | I am also old. And tend to agree with you on this.
        
               | verelo wrote:
                | I'm older than I was when people started telling me, as a
                | tech founder, that we needed microservices to be cool. At
                | 25 I felt they didn't deliver any real customer value and
                | made my life harder; at 35 it seems like the people that
                | do this are not people I want to hire anyway.
        
               | dinvlad wrote:
               | I started sorta the opposite and thought that nano(!)
               | services (think one AWS Lambda per API call) were the
               | best approach, and now I look at that younger wisdom with
               | a parental smile..
        
               | crdrost wrote:
                | I agree that we need to at least _name_ nanoservices --
                | microservices that are "too small". Like _surely_ we can
                | all agree that if your current microservices were 100x
                | smaller, so that each handled exactly one property of
                | whatever they're meant to track, it'd be a nightmare. So
                | there _must_ be a lower limit: "we want to go this small
                | and no smaller."
               | 
               | I think we also need to name something about coupling.
               | "Coupling" is a really fluid term as used in the
               | microservice world. "Our microservices are not strongly
               | coupled." "Really, so could I take this one, roll it back
               | by 6 months in response to an emergency, and that one
               | would still work?" "err... no, that one is a frontend, it
               | consumes an API provided by the former, if you roll back
               | the API by 6 months the frontend will break." Well, I am
               | sorry, my definition of "strong coupling" is "can I make
               | a change over here without something over there
               | breaking", _for example rolling back something by 6
                | months_. (Maybe we found out that this service's
               | codebase had unauthorized entries from some developer 5
               | months ago and we want to step through every single damn
               | thing that developer wrote, one by one, to make sure it's
               | not leaking everyone's data. IDK. Make up your own
               | scenario.)
        
               | dinvlad wrote:
               | I'm just surprised no one mentioned
               | https://martinfowler.com/bliki/MonolithFirst.html yet :-)
               | 
                | Nano actually did (and does) make sense from an access-
                | control perspective - if a service has permissions to do
               | one thing and one thing only, it is much harder to
               | escalate from. But I'm not sure if these benefits
               | outweigh the potential complexity.
        
             | taeric wrote:
             | Meanwhile, I come from the perspective that a shared
             | database that everyone talks to is a shared infrastructure
             | point. And... I have seen those cause more problems than
             | makes sense.
             | 
             | My bet is I'm also just an old man in this discussion. Such
             | that I really think we can't underline enough how
             | particular the discussion will be to every organization.
        
           | babbledabbler wrote:
           | Yep have seen this firsthand. Beware the evangelist who shows
           | up bearing fancy new architectural patterns and who can't be
           | bothered to understand nor discuss details of the existing
           | system.
        
             | mirekrusin wrote:
             | Same here.
        
       | hinkley wrote:
       | > 10 Modularize the monolith
       | 
       | A couple paragraphs on a couple of tools for the middle to late
       | stages of such an effort is tantamount to "and then draw the rest
       | of the fucking owl".
       | 
       | Decomposition is hard. It's doubly hard when you have coworkers
       | who are addicted to the vast namespace of possibilities in the
       | code and are reaching halfway across the codebase to grab data to
       | do something and now those things can never be separated.
       | 
       | One of the best tools I know for breaking up a monolith is to
       | start writing little command line debugging tools for these
       | concerns. This exposes the decision making process, the
       | dependency tree, and how they complicate each other. CLIs are
       | much easier to run in a debugger than trying to spin up the
       | application and step through some integration tests.
       | 
       | If you can't get feature parity between the running system and
       | the CLI, then you aren't going to be able to reach feature parity
       | running it in a microservice either. It's an easier milestone to
       | reach, and it has applications to more immediate problems like
       | trying to figure out what change just broke preprod.
       | 
        | I have things that will never be microservices that benefit from
        | this. I have things that used to be microservices that didn't
        | need to be, especially once there was a straightforward way to
        | run the code outside them.
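        | 
        | A sketch of such a CLI (TypeScript; computeQuote is a stand-in
        | for whatever internal concern you're isolating):
        | 
        |     // Runs one slice of the app's logic without booting the app,
        |     // so it's easy to step through in a debugger.
        |     async function computeQuote(sku: string, qty: number) {
        |       return { sku, qty, totalCents: qty * 1299 }; // stub for the real module
        |     }
        | 
        |     const [sku, qty] = process.argv.slice(2);
        |     computeQuote(sku, Number(qty))
        |       .then((q) => console.log(JSON.stringify(q, null, 2)))
        |       .catch((e) => { console.error(e); process.exit(1); });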
        
       | jacquesm wrote:
       | Microservices have one big advantage over monoliths: when you
       | have a very large number of employees developing software it
        | means that you can keep your teams out of each other's hair. If
       | you don't have a very large team (as in 50+ developers) you are
       | _probably_ better off with a monolith, or at best a couple of
       | much larger services that can be developed, tested and released
       | independently of each other. That will get you very far, further
       | than most start-ups will ever go.
        
       | sackerhews wrote:
       | I was once working with a guy that was hell bent on
       | microservices.
       | 
       | When our banking application became incompatible with logging,
       | due to his microservice design, he really argued, fought and
       | sulked that logging wasn't important and we didn't need it.
        
         | why-el wrote:
         | I suppose tracing becomes more important in that new
         | architecture, assuming of course that each service is logging
         | the unique identifier for the trace (for instance the request
         | ID or some system-initiated event ID), but of course that
         | presupposes logging to begin with, so I am not sure what
         | "incompatible with logging" means.
        
         | choward wrote:
         | I'm skeptical of over using microservices but I don't quite
         | understand how they make an app "incompatible with logging".
        
       | bvrmn wrote:
        | Why are microservices called "micro" when it's almost implied
        | that each requires a dedicated team to develop and maintain?
        | Looks like it's SOA but with JSON and k8s.
        
         | mirekrusin wrote:
          | Maybe because there are very few real-world cases where
          | they're better than alternatives?
        
         | LeonM wrote:
         | In the context of microservices, 'micro' means that the service
         | itself is small. Thus, each service performs a 'micro' (and
         | preferably independent) task. It is the opposite of a monolith,
         | which you could call a 'macro' service.
         | 
         | The system as a whole (whether it being monolith or
         | microservices) still requires a dedicated team to maintain.
         | Switching to microservices will not magically remove the team
         | requirement. In fact, splitting a monolith into smaller
         | services creates overhead, so you'll probably end up with a
          | larger team than what you'd need to maintain a monolith.
        
           | yrgulation wrote:
            | In reality many of the micro services end up as tightly
            | coupled macro services. I've rarely seen teams with the
            | discipline or need to create truly self-contained, separate
            | services.
        
       | dinvlad wrote:
       | Is there a guide on doing the opposite?
        
       | ianpurton wrote:
       | You can actually use microservices to reduce the complexity of a
       | monolith.
       | 
       | One example which I built just for this purpose is Barricade
       | https://github.com/purton-tech/barricade which extracts
       | authentication out of your app and into a container.
        
         | bvrmn wrote:
          | Authentication is a very simple part and doesn't add much
          | complexity; every platform has a plethora of libraries to deal
          | with anything. Authorization is the real deal.
        
           | ianpurton wrote:
           | There are libraries but you still have to integrate them into
           | a front end whilst being aware of any security issues.
           | 
           | Those libraries generally come with lots of dependencies so
           | splitting out the front end and back end auth code into a
           | container can reduce the complexity of your main app and is a
           | nice separation of concerns in my opinion.
        
             | bvrmn wrote:
              | But your solution requires architectural decisions, like a
              | fixed DB. For example, the main app uses Mongo, and now it
              | must be integrated with PG. I'd choose an orthogonal set of
              | primitives as a library any day. Frameworks, and even more
              | so third-party services, are too rigid and can carry hidden
              | maintenance costs regarding flexibility. For example, it
              | seems Barricade doesn't support TOTP or SMS. What should I
              | do as a backend developer?
        
               | ianpurton wrote:
                | Agreed, Barricade is a Postgres solution, so it may not
                | fit if you are already using Mongo.
               | 
               | Barricade supports OTP via email. TOTP is on the road
               | map.
        
           | hamandcheese wrote:
           | And best of luck to anyone attempting to extract
           | authorization to a microservice in a way that doesn't create
           | new problems.
        
         | mirekrusin wrote:
          | Authentication/SSO, logging, metrics, persisted data stores
          | (e.g. a SQL server), caches (e.g. Redis), events, workflows and
          | a few others are all well-known, understood, isolated behaviors
          | with good implementations that don't need to be
          | embedded/reinvented - they can, and in many cases should, run
          | separately from your core service(s). That doesn't imply
          | microservices in any way.
        
       | whiddershins wrote:
       | Does each microservice really need its own database? I have
        | recently proposed my team initially _not_ do this, and I'm
       | wondering if I am creating a huge problem.
        
         | mirekrusin wrote:
          | If they don't have it, then they're not microservices.
         | 
         | The main premise is independent deployability. You need to be
         | able to work on microservice independently of the rest, deploy
         | it independently, it has to support partial rollouts (ie. half
         | of replicas on version X and half on version Y), rollbacks
         | including partial rollbacks etc.
         | 
          | You could stretch it into some kind of Quasimodo: separate
          | schemas within a single database for each microservice, where
          | each service would be responsible for managing migrations of
          | its schema and you'd employ some kind of isolation policy. You
          | pretty much wouldn't be able to use anything from other
          | schemas, as that would almost always violate those principles,
          | making the whole thing just unnecessary complexity at best.
          | Overall it would be a stretch, and a weird one.
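          | 
          | That schema-per-service variant might look like this (a sketch
          | with node-postgres; names made up):
          | 
          |     import { Client } from "pg";
          | 
          |     // Each service migrates only its own schema in the shared DB.
          |     async function migrateBillingSchema(conn: string): Promise<void> {
          |       const db = new Client({ connectionString: conn });
          |       await db.connect();
          |       await db.query(`CREATE SCHEMA IF NOT EXISTS billing`);
          |       await db.query(`SET search_path TO billing`); // isolation by policy
          |       await db.query(`CREATE TABLE IF NOT EXISTS invoices (
          |         id serial PRIMARY KEY,
          |         total_cents integer NOT NULL
          |       )`);
          |       await db.end();
          |     }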
         | 
          | Of course it implies that simple few-liners in SQL with
          | transaction isolation/atomicity now become PhD-level, complex,
          | distributed problems to solve, with sagas, two-phase commits,
          | do+undo actions, and complex error handling because comms can
          | break at arbitrary places; performance can be a problem, as can
          | ordering of events; you don't have immediate consistency
          | anymore, you have to switch to eventual consistency, very
          | likely do some form of event sourcing, duplicate data in
          | multiple places, think a lot about forward and backward
          | compatibility (e.g. of event schemas), take care of APIs and
          | their compatibility contracts, choose well between
          | orchestration and choreography, etc.
         | 
         | You want to employ those kind of techniques not for fun but
         | because you simply have to, you have no other choice - ie. you
         | have hundreds or thousands of developers, scale at hundreds or
         | thousands of servers etc.
         | 
          | It's also worth mentioning that you can have independent
          | deployability with services/platforms as well - if they're
          | conceptually distinct and have a relatively low API surface,
          | they are potentially extractable, you can form a dedicated team
          | around them, etc.
        
           | yrgulation wrote:
            | independent deployability, independent scalability, ease of
            | refactoring, reduced blast radius, code ownership and
            | maintenance, rapid iteration, language diversity (e.g. an ML
            | service in Python and a REST API in Node.js), clear domains
            | (payments, user management, data repository and search), just
            | to name a few. if two or more services need none of the
            | above, must communicate with the same database, or are too
            | complex to communicate with each other without a shared DB
            | (e.g. queue nightmare, shared cache or files), those are
            | usually signs that the two should be merged, as they probably
            | belong to the same domain. at least that's some of the logic
            | i follow when architecting them.
        
         | adra wrote:
          | I'm agreeing with your other replies, but with one
         | caveat. Each service needs its own isolated place to store
         | data. This programming and integration layer concern is very
         | important. What's less important is having those data stores
         | physically isolated from each other, which becomes a
         | performance and cost concern. If your database has the ability
         | to isolate schemas / namespaces then you can share the physical
         | DB as long as the data is only used by a single service. I've
         | seen a lot of microservices laid out with different write/read
         | side concerns. These are often due to scaling concerns, as
         | read-side and write-side often have very different scaling
         | needs. This causes data coupling between these two services,
         | but they together form the facade of a single purpose service
         | like any single microservices for outside parties.
         | 
          | Additionally, you can probably get by with having low-
          | criticality reports fed through direct DB access as well. If
          | you can afford to have them broken for a time after an update,
          | it's probably
         | easier than needing to run queries through the API.
        
         | delecti wrote:
         | There are two ways to interpret this question, and I'm not sure
         | which you're asking. You should not have two microservices
         | _sharing_ a single database (therein lie race conditions and
         | schema nightmares), but it is totally fine for some
         | microservices to not have any database at all.
        
         | [deleted]
        
         | jameshart wrote:
         | Isolated datastores are really the thing that differentiates
         | microservice architecture (datastores meant in the most broad
         | sense possible - queues, caches, RDBMSs, nosql catalogs, S3
         | buckets, whatever).
         | 
         | If you share a datastore across multiple services, you have a
         | service-oriented architecture, but it is not a microservice
         | architecture.
         | 
         | Note that I'm saying this without any judgement as to the
         | validity of either architectural choice, just making a
         | definitional point. A non-microservice architecture might be
         | valid for your usecase, but there is no such thing as
         | 'microservices with a shared database'.
         | 
         | It's like, if you're making a cupcake recipe, saying 'but does
         | each cake actually need its own tin? I was planning on just
         | putting all the batter in one large cake tin'.
         | 
         | It's fine, that's a perfectly valid way to make a cake, but...
         | you're not making cupcakes any more.
        
         | [deleted]
        
         | nostrebored wrote:
         | I like microservices owning their databases. It allows you to
         | choose the correct database for the job and for the team.
         | Sharing state across these microservices is often a bad sign
         | for how you've split your services. Often a simple orchestrator
         | can aggregate the relevant data that it needs.
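         | 
         | A sketch of that last point (endpoints hypothetical): the
         | orchestrator owns no data of its own, it just fans out to the
         | services that do and stitches the responses together.
         | 
         |     import json
         |     from urllib.request import urlopen
         | 
         |     def fetch(url):
         |         with urlopen(url) as resp:
         |             return json.load(resp)
         | 
         |     def order_summary(order_id):
         |         # fan out to the owning services, then aggregate
         |         order = fetch(f"http://orders-svc/orders/{order_id}")
         |         user = fetch(f"http://users-svc/users/{order['user_id']}")
         |         return {"order": order, "customer": user["name"]}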
        
           | geekbird wrote:
           | Are you talking about different DBs, or just different
           | tables? If it's just different tables, they can operate
           | sufficiently independently if you design them that way, so
           | you can change the schema on one table without messing up the
           | others.
        
         | lapser wrote:
         | Yes, you need it. Imagine having to make a change to the DB for
         | one service. You'll have to coordinate between all
         | microservices using that DB.
        
           | iamflimflam1 wrote:
           | Agree and disagree - it really depends on why you are going
           | to microservices. Is it because you have too many people
           | trying to work on the same thing, and your architecture is
           | just a reflection of your organisation? Or is it to decouple
           | some services that need to scale in different ways but still
           | need to sit on top of the same data? Or is it some other
           | reason?
           | 
           | I think the dogmatic "you always need a separate database for
           | each micro service" ignores a lot of subtleties - and cost...
        
             | bcrosby95 wrote:
             | > Or is it to decouple some services that need to scale in
             | different ways
             | 
             | This is really oversold. You can allocate another
             | instance to a specific service to provide more CPU to it,
             | or you can allocate another instance to your whole
             | monolith to provide more CPU.
             | 
             | Maybe if the services use disproportionately different
             | types of resources - such as GPU vs CPU vs memory vs disk.
             | But if your resources are fungible across services, it
             | generally doesn't matter if you can independently scale
             | them.
             | 
             | Compute for most projects is the easiest thing to scale
             | out. The database is the hard part.
        
           | CurleighBraces wrote:
           | This is a particularly painful experience if you've got
           | business logic at the database layer.
           | 
           | For example, stored procedures that get "shared" between
           | micro-services because the db was never split.
        
         | dboreham wrote:
         | Initially not doing this is fine. Otherwise you now have two
         | hard problems to solve concurrently.
        
         | Tabular-Iceberg wrote:
         | Yes, but when doing so seems silly it's a good sign that they
         | should not be separate services. Keep things that change at the
         | same time in the same place. When your schema changes, the
         | code that relies on it changes too.
        
         | jayd16 wrote:
         | Whether you need it depends on your needs. You can share the
         | DB, but you lose the isolation. The tradeoff is up to you.
         | 
         | There are also different ways to share. Are we talking about
         | different DBs on the same hardware? Different schemas,
         | different users, different tables?
         | 
         | If you want to be so integrated that services are joining
         | across everything and there is no concept of ownership between
         | service and data, then you're going to have a very tough time
         | untangling that.
         | 
         | If it's just reusing hardware at lower scale but the data is
         | isolated then it won't be so bad.
        
       | SergeAx wrote:
       | I have a strong feeling that the author has never in their
       | career transitioned from a monolith to microservices - not even
       | to the point of "we are getting somewhere", let alone "we are
       | successful at this". The text reads like self-complacency.
        
       | bvrmn wrote:
       | Looks like microservices became a goal in themselves. However,
       | the author can be praised for giving the (implicit) context of
       | a huge team which should be split up.
        
       | chrsig wrote:
       | Some thoughts:
       | 
       | - #12 add observability needs to be #1. if you can't observe your
       | service, for all you know it's not even running. Less
       | hyperbolically, good instrumentation will make every other step
       | go faster by lowering the time to resolution for any issues that
       | come up (and there will be some)
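       | 
       | a minimal flavor of what that instrumentation can look like
       | (stdlib-only sketch; field and logger names made up):
       | 
       |     import logging, time
       | 
       |     logging.basicConfig(
       |         level=logging.INFO,
       |         format="%(asctime)s %(levelname)s %(message)s")
       |     log = logging.getLogger("orders")
       | 
       |     def handle_request(order_id):
       |         start = time.monotonic()
       |         try:
       |             ...  # actual work goes here
       |             log.info("order processed id=%s", order_id)
       |         except Exception:
       |             log.exception("order failed id=%s", order_id)
       |             raise
       |         finally:
       |             # in a real setup this feeds a metrics backend
       |             log.info("latency_ms=%d",
       |                      (time.monotonic() - start) * 1000)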
       | 
       | - #11 is incredibly oversimplified, and potentially dangerous.
       | how to do double writes like that and not create temporal soup
       | is...complicated. very complicated. it's important to remember
       | that the database the app is sitting on top of is (probably)
       | taking care of a great many synchronization needs.
       | 
       | if you can do one-way replication, that drastically simplifies
       | things. otherwise either do it in the monolith before breaking up
       | into services, or do it after you've broken up services, and have
       | the services share the database layer in the interim.
       | 
       | (I'm not debating that it needs to be done -- just advocating for
       | sane approaches)
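       | 
       | to sketch what the one-way replication option looks like (all
       | names hypothetical; sqlite stands in for the real stores): the
       | monolith DB stays the only write path, and a poller copies rows
       | past a high-water mark into the new service's store.
       | 
       |     import sqlite3
       | 
       |     src = sqlite3.connect(":memory:")  # monolith side
       |     dst = sqlite3.connect(":memory:")  # new service side
       |     for db in (src, dst):
       |         db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
       |     src.execute("INSERT INTO orders VALUES (1), (2)")
       | 
       |     def replicate(last_seen=0):
       |         rows = src.execute(
       |             "SELECT id FROM orders WHERE id > ?",
       |             (last_seen,)).fetchall()
       |         for (id_,) in rows:
       |             # idempotent apply on the target side
       |             dst.execute(
       |                 "INSERT OR REPLACE INTO orders VALUES (?)", (id_,))
       |             last_seen = max(last_seen, id_)
       |         return last_seen
       | 
       |     mark = replicate()  # run periodically; writes stay one-way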
       | 
       | - #10 - I've had great results with the strangler pattern.
       | Intercepting data at I/O gives a lot of tools for being able to
       | gradually change internal interfaces while keeping
       | public/external interfaces constant.
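       | 
       | the core of it is just routing (a sketch with hypothetical
       | paths: migrated prefixes peel off to the new service, while
       | everything else still hits the monolith):
       | 
       |     # strangler-style routing: migrate one path at a time
       |     MIGRATED = ("/payments",)  # hypothetical prefix list
       | 
       |     def route(path):
       |         if path.startswith(MIGRATED):
       |             return "http://payments-svc" + path  # new service
       |         return "http://monolith" + path  # everything else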
       | 
       | - #5 - as you introduce more processes, integration and end to
       | end testing becomes more and more vital. it becomes harder and
       | harder to run the service locally, and it becomes harder to tell
       | where a problem is occurring. cross-service debugging can
       | be a nightmare. in general it's just important to keep an eye on
       | what the system is doing from an outside perspective and if any
       | contracts have inadvertently changed behaviors.
        
         | jjav wrote:
         | > add observability needs to be #1
         | 
         | Very much this. The importance of this is overlooked so often
         | and then when there is a problem it's far more difficult to
         | solve than it should've been.
        
       | greatpostman wrote:
       | People forget that microservices commonly serve massive tech
       | companies. With 100 developers working on the same product, you
       | need it broken up and separated. If you're in a small company,
       | the value proposition is not as great. It's contextual: a tech
       | stack that solves organizational problems, not always technical
       | ones.
        
         | benreesman wrote:
         | At FB we introduced service boundaries for technical reasons,
         | like needing a different SKU.
         | 
         | Everything that could go into the giant, well-maintained
         | repo/monolith did, because distributed systems problems start
         | at "the hardest fucking thing ever" and go up from there.
        
         | adra wrote:
         | Agreed, but I'll go a step further and say microservices are
         | really valuable when you have products backed by reasonable
         | DevOps and CI/CD support within an organization. If you're a
         | company that only releases quarterly, a monolith probably
         | makes sense. If you're a company releasing changes daily /
         | hourly, monoliths make progressively less sense and become
         | progressively harder to work with. When we release software
         | (a lot) our downtime SLO is generally zero minutes. If you're
         | a well put together outfit with strict discipline, this can
         | be achieved with microservices.
         | 
         | Conversely, monoliths almost never have to deal with multiple
         | components at different release levels, so they don't do a
         | particularly good job of supporting it, which is why you often
         | see hours-long upgrade windows for monoliths. Shut everything
         | down, deploy updates, start everything back up, and hope the
         | house is still standing with the changes in place.
        
           | codethief wrote:
           | > If you're a company releasing changes daily / hourly,
           | monoliths make progressively less sense and become
           | progressively harder to work with.
           | 
           | Counterargument: Your overall CI/CD infrastructure landscape
           | will be much simpler and much less error-prone.
           | 
           | As will be your release and test pipelines. For instance, if
           | you have a few dozen microservices, each being released every
           | other hour, how do you run E2E tests (across all services)
           | before you release anything? Let's say you take your
           | microservice A in version v1.2.3 (soon to be released) and
           | test it against microservice B's current prod version v2.0.1.
           | Meanwhile, team B is also working on releasing the new
           | version v2.1.0 of microservice B and testing it against A's
           | current prod version v1.2.2. Both test runs work fine. But no
           | one ever bothered to test A v1.2.3 and B v2.1.0 against each
           | other...
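           | 
           | A tiny sketch of that gap (versions hypothetical): each
           | team tests its candidate only against the other's current
           | prod, so the candidate-vs-candidate pair never gets covered.
           | 
           |     from itertools import product
           | 
           |     a_versions = ["A v1.2.2 (prod)", "A v1.2.3 (candidate)"]
           |     b_versions = ["B v2.0.1 (prod)", "B v2.1.0 (candidate)"]
           | 
           |     tested = {("A v1.2.3 (candidate)", "B v2.0.1 (prod)"),
           |               ("A v1.2.2 (prod)", "B v2.1.0 (candidate)")}
           | 
           |     for pair in product(a_versions, b_versions):
           |         if pair not in tested:
           |             print("never tested together:", pair)
           |     # prod/prod is already live; candidate/candidate is
           |     # the pair that slips through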
        
       | Akronymus wrote:
       | A lot of those steps just seem like good engineering. (I
       | personally prefer modular monoliths over microservices though, in
       | all but very few cases.)
        
         | spaetzleesser wrote:
         | "A lot of those steps just seem like good engineering. "
         | 
         | Agreed. I always wonder why people think that their inability
         | to write libraries with good modularization will be solved by
         | introducing microservices.
        
           | yrgulation wrote:
           | It takes experience and guts to know when to use what and
           | most people just go with the latest and fanciest. Well-
           | tested, focused and self-contained libraries are good
           | architecture even when micro-services are a must.
        
         | xbar wrote:
         | Right. I feel like it's a better article without any reference
         | to microservices.
        
         | davidkuennen wrote:
         | Having a modular monolith myself I couldn't agree more.
        
         | tleasure wrote:
         | "I personally prefer modular monoliths over microservices
         | though, in all but very few cases."
         | 
         | Couldn't agree more. Oftentimes folks are using microservices
         | to achieve good modularity, at a cost.
        
           | paskozdilar wrote:
           | > Oftentimes folks are using microservices to achieve good
           | > modularity, at a cost.
           | 
           | And even more often, folks use microservices, but make them
           | so coupled that you can't really run/test/hack one without
           | running all the others... Basically creating a distributed
           | monolith.
        
             | delecti wrote:
             | Strong agree. I worked on a "distributed monolith" once,
             | and now I loudly make it a requirement that all my team's
             | microservices can easily be run locally. Running one stack
             | locally required starting up 12 different microservices,
             | all in a particular order, before you could do anything
             | with any of them. Insanity.
        
             | [deleted]
        
           | Akronymus wrote:
           | Kinda reminds me of how you "need to have a horizontally
           | scaling database setup because 1 million rows a day are
           | impossible to handle via a normal database server"
           | 
           | people really underestimate the power that vertical scaling
           | can achieve, along with the long-tail latency that
           | microservices can bring. (The more calls between services
           | you need to handle a request, the more likely it is that
           | you hit a case where one call takes exceptionally long.)
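           | 
           | quick back-of-the-envelope on that long tail (numbers made
           | up): if a single hop is slow 1% of the time, a request that
           | touches n hops is slow 1 - 0.99**n of the time.
           | 
           |     for n in (1, 5, 10, 30):
           |         slow = (1 - 0.99 ** n) * 100
           |         print(f"{n} hops -> {slow:.1f}% of requests slow")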
           | 
           | https://www.youtube.com/watch?v=SjC9bFjeR1k
        
             | iamflimflam1 wrote:
             | I've faced this a number of times: "we think we'll have
             | scaling issues" when they are running on the lowest
             | possible database tier on offer. I think people just don't
             | understand how much power they actually have at their
             | fingertips without needing anything esoteric.
        
           | paskozdilar wrote:
           | (second comment - after I had some time to think about it)
           | 
           | Actually I think microservices serve as a tool for
           | _enforcing_ modularity. When pressure is high, corners are
           | cut, and when unrelated code is easy to reach (as in the case
           | of a monolithic codebase), it's easy to fall into temptation -
           | a small dirty hack is faster than refactoring. And when you
           | consider different maintainers, it's easy to lose track of
           | the original monolith architecture idea in all the dirty
           | hacks.
           | 
           | Microservices enforce some kind of strict separation, so in
           | theory nobody can do anything that they're not supposed to.
           | In practice, a lot of coupling can happen at the
           | microservices level - the most common symptom being some
           | weird isolated APIs whose only purpose is to do that one
           | thing that another service needs for some reason. Those kinds
           | of ad-hoc dependencies tend to make services implementation-
           | specific and non-interchangeable, therefore breaking
           | modularity.
           | 
           | So, in conclusion, some approaches are easier to keep modular
           | than others, but there seems to be no silver bullet for
           | replacing care and effort.
        
       ___________________________________________________________________
       (page generated 2022-07-06 23:00 UTC)