[HN Gopher] Evidence-based software engineering: book released
       ___________________________________________________________________
        
       Evidence-based software engineering: book released
        
       Author : qznc
       Score  : 104 points
       Date   : 2020-11-12 18:44 UTC (4 hours ago)
        
 (HTM) web link (shape-of-code.coding-guidelines.com)
 (TXT) w3m dump (shape-of-code.coding-guidelines.com)
        
       | jorgBaller wrote:
       | I find the use of evidence concerning. Software engineering best
       | practices should be established by highly educated academics
        | (PhDs, etc.).
        
         | AnimalMuppet wrote:
         | What are you trying to say? I honestly can't tell.
         | 
          | It _sounds_ like you're saying that PhDs should decide what
         | best practices are, without regard for evidence. If so, that
         | had better be sarcasm, because it's patently absurd.
         | 
         | If your point was something else, could you clarify?
        
           | Jtsummers wrote:
           | I assumed they were being sarcastic. Perhaps a reference to
           | groups like the SEI (Software Engineering Institute).
        
       | StillBored wrote:
       | A bigger question is whether anyone would pay attention if it
       | turns out some method has a notable improvement.
       | 
        | I remember reading a bunch of code quality studies a few years
        | back. Many of them were DoD studies from the 1980s/1990s, and the
        | one thing that stuck out was a couple of studies that looked at
        | style issues with Ada/C and noted that the matched brace style
        | significantly lowered a couple of different defect types.
       | 
        | Yet today, particularly in the open-source world, matched brace
        | style is almost completely non-existent, since it's a religious
        | war thing, and some of the early open-source projects were run by
        | people who didn't like it.
        
         | bigbubba wrote:
          | I love matching braces, if for no other reason than because
          | they make traversing code easier. If you ever find yourself at
          | the bottom of an awful function with multiple screenfuls of
          | code, matching braces make it trivial to jump to the top of
          | that function, even when your editor has no special knowledge
          | of that language. No LSP/etc. needed. Barebones vanilla Vi is
          | actually a viable editor for Lisp because of this, relative to
          | Vi with languages like Python.
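          | 
          | A minimal sketch of what I mean, in C (my own illustrative
          | example, not from the article): with the braces matched up in
          | column 0, plain Vi can hop around the function using built-in
          | motions (`[[`, `][`, `%`) without understanding C at all.
          | 
          |     /* Hypothetical example: matched, column-0 brace style. */
          |     int count_positive(const int *xs, int n)
          |     {   /* `[[` from anywhere in the body jumps to this brace */
          |         int count = 0;
          |         for (int i = 0; i < n; i++)
          |         {
          |             if (xs[i] > 0)
          |             {   /* `%` on any brace jumps to its partner */
          |                 count++;
          |             }
          |         }
          |         return count;
          |     }   /* `][` from inside the body jumps to this brace */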
        
           | ywei3410 wrote:
            | This is one of the reasons that many Lisp developers like
            | Lisp code: local structured editing is trivial, because if
            | the braces match, the parser mostly doesn't need to deal
            | with the rest of the file.
        
         | klenwell wrote:
         | On that question, I've quoted this here on HN before. From
         | Donald G Reinertsen's Principles of Product Development Flow:
         | 
         |  _I used to think that sensible and compelling new ideas would
         | be adopted quickly. Today, I believe this view is hopelessly
         | naive. After all, it took 40 years before we recognized the
         | power of the Toyota Production System._
         | 
         | I read this book on the strength of some recommendation here on
         | Hacker News. It is quite relevant to this general topic:
         | 
         | https://www.goodreads.com/book/show/6278270-the-principles-o...
        
       | beprogrammed wrote:
       | Wow, what a wonderful pdf.
        
       | eatonphil wrote:
       | Is there a physical copy I can buy (couldn't find one) or do I
       | have to build one from Lulu?
        
         | Splendor wrote:
         | > I'm investigating the possibility of a printed version.
         | 
         | That's the 3rd sentence in the linked page.
        
       | anonuser123456 wrote:
       | Anytime I see 'evidence based', I can't help but think "my
       | opinion for which I've cherry picked data".
        
         | mekoka wrote:
         | It works both ways though.
         | 
          | When the data presented to us suits our current view or agenda,
          | we somehow feel that it's valid. I remember when discussions
          | about "open plans" or "estimates" were still a thing; without
          | fail, "evidence based" works such as _Peopleware_ or _The
          | Mythical Man-Month_ would be mentioned.
         | 
          | If this book turned out to agree with the view that "scrum is
          | not really more productive", or "fewer meetings mean more
          | productivity", or "the interview process is broken", then we'd
          | all be really glad to refer to its data.
        
         | IshKebab wrote:
          | I don't, but maybe that's because I've mostly heard it in a
          | medical context, used to distinguish policies based on _any
          | evidence at all_ from policies based on gut feeling, of which
          | there are a lot!
        
         | st1x7 wrote:
         | What's the alternative?
        
           | dpc_pw wrote:
           | Stop pretending that there's an easy empirical answer to
           | everything, and accept the nature of complex domains.
        
         | Shugarl wrote:
          | I also find it quite strange. I remember my teachers telling us
          | that we're making a little bit of progress in software
          | engineering research, but that comparing it to the state of
          | research in fields like Math or Physics would be like comparing
          | a fully grown man to a toddler.
          | 
          | In other words, we hardly know anything.
        
           | pmiller2 wrote:
            | We may hardly know anything, but comparing any field that
            | involves humans to math or physics is an apples-to-oranges
           | comparison. All it takes to do math research is time,
           | knowledge, writing material, and, preferably, the support of
           | an institution which allows one to do research full time.
           | There are no pesky experiments to run, except possibly inside
           | a computer. Physics can be the same way, but, even when it
           | involves "pesky experiments," those experiments only depend
           | on the physical laws of the universe, and the labor of a
           | fleet of graduate students who have great incentives to see
            | these experiments through. SWE research has neither of these
           | properties.
        
         | mooreds wrote:
         | Have you checked out the book (PDF here:
         | http://knosof.co.uk/ESEUR/ESEUR.pdf )?
         | 
         | There are 2000 footnotes referencing published studies.
         | 
         | It's not perfect, but hopefully there are enough different
         | views all triangulating on "truth" to provide value.
        
         | gameswithgo wrote:
         | It is a common cognitive approach to argue that things are
         | unknowable, especially if evidence is currently pointing to a
         | truth we wish were not so. Certainly it is very, very hard to
         | get quality evidence in some domains, and software engineering
         | is one of the hard ones. That does not mean it is impossible,
         | and we should never dismiss things out of hand. We should have
         | specific critiques for claims we think are wrong, and explain
         | why.
        
         | monocasa wrote:
         | Better than "my opinion for which I've not even attempted to
         | see if the data can be contorted to support", which is still
         | the stage we're at for a lot of our industry since it's such a
         | new field.
        
           | bigbubba wrote:
           | I'm not convinced that's right. Somebody who incorrectly
           | believes their belief is backed by data may be less inclined
           | to change their mind than somebody who recognizes that their
           | belief isn't backed by data and consequently has a good
           | chance of being wrong. In other words, the chance that your
           | belief is correct or incorrect matters, but your inclination
           | to reevaluate your belief is also something to consider.
           | Cherrypicking data to support your beliefs or various forms
           | of inadvertent p-hacking may make people feel quite certain
           | of things that actually aren't true.
        
           | yen223 wrote:
           | My pet peeve with internet forums is when we counter n=1
           | studies with n=0 opinions
        
         | [deleted]
        
         | gwd wrote:
         | Do you have any evidence for that? ;-)
         | 
         | The world is a complicated place and the truth is hard to find.
         | Unintentional confirmation bias is a thing. Intentional (at
         | some level) cherry-picking to support your preferred answer is
         | a thing too. But that doesn't mean that nobody ever learns
         | anything from looking at data.
         | 
         | If someone claims to be "evidence-based", then their intention
         | should be given the benefit of the doubt. They may be affected
         | by confirmation bias, but that doesn't mean their conclusions
         | are wholly baseless. You should take a look and see how they're
         | doing before dismissing them out of hand.
        
       | mettamage wrote:
       | Looking at the TOC this doesn't feel like evidence-based software
       | engineering. This is simply because there seems to be way too
       | much background in this book. It basically spans 4 topics:
       | 
       | - Cognitive (neuro)psychology
       | 
       | - Economics / finance
       | 
        | - Software engineering (chapters 4 to 7)
       | 
       | - Statistics
       | 
       | Don't get me wrong, I'll take a read, but not because I want to
       | read about evidence-based software engineering. It's much more
       | because it seems that you're connecting a lot of different areas
        | together and I want to see how you do it. The second reason is
        | that I happen to know a lot about these specific areas as well
        | :)
        
       | splittingTimes wrote:
       | So how do you measure performance or success of an engineering
       | department within a bigger corp? Not to evaluate salaries or
       | budgets, but to know, are we moving in the right direction? Are
       | we improving or are we getting worse?
       | 
        | From what I see, most metrics on an individual or team level
        | can/will be gamed, leading to worse outcomes.
       | 
        | Metrics at higher levels, like the level of the product or even
        | revenue of the company/product, depend on many other players
        | outside of the control of the engineering team (product
        | management, operations, QA, regulatory, marketing, sales, etc.).
        | This, of course, is due to the fact that delivering value to
        | customers is a team effort.
       | 
       | But how can you know/show your department is on the right track?
        
         | save_ferris wrote:
         | Personally, having worked for a few big companies in my career,
         | I believe it's absolutely possible for a company to be too big.
         | It might be an impious opinion, but there absolutely comes a
          | time in a company's existence when internal politics begins to
         | undermine the mission of the entire organization.
         | 
         | Of course, I haven't worked everywhere and I recognize that
         | this opinion is based purely on my own experience. But
         | companies that generate billions upon billions in revenue seem
         | to care less and less about functional process improvement and
         | more about short-term profit gains. Because ultimately, the C
         | suite is paid based on stock performance for most large
         | companies, and that becomes the only metric that matters. In
         | that situation, everything falls in line behind it.
        
       | tomgp wrote:
        | If you're looking for something a little more focused, I can
        | heartily recommend Andy Oram and Greg Wilson's "Making Software:
        | What Really Works, and Why We Believe It". It's a great
        | collection of literature reviews around a lot of common software
        | engineering practices.
        
       | chubot wrote:
       | Previous thread: https://news.ycombinator.com/item?id=23769930
        
       | louiechristie wrote:
       | Where's the 'conclusions' section?
        
       | spekcular wrote:
       | I read the entirety of a draft of this once, about a year ago,
       | when I was bored one night. I don't want to be overly critical of
       | someone's book, so let me just say that:
       | 
       | A) I don't understand why an entire elementary statistics pseudo-
       | textbook is bolted on at the end, forming the entire back half of
       | the text, and
       | 
       | B) the title interested me because it promised concrete
       | information that would improve my own software development, and
       | while I found many things in the first half of the book, I didn't
       | find this.
        
         | vector_spaces wrote:
         | The "statistics textbook" part is especially confusing to me --
         | who is the audience for this, and why is it even here?
         | 
         | He mentions at the beginning of the probability chapter:
         | 
         | "Readers are assumed to have some basic notion of the concepts
         | associated with probabilities, and to have encountered the idea
         | of probability in the form of likelihood of an event occurring"
         | 
         | This doesn't give us any insight into what background is
         | expected here -- do I need to have taken a measure theory based
         | probability course? Or would a calculus-based course suffice?
         | What about a course for non-STEM majors? The word "basic" means
         | different things to different people, and is generally an anti-
         | pattern in mathematical writing.
         | 
         | Then he launches into this downright strange rabbit hole about
          | mental models. None of the discussion in this chapter feels
         | properly motivated, the writing is disjointed, and it reads
         | like an unedited, non-proofread stream of consciousness. In
         | particular, the exposition gets too distracted by its own
         | examples.
         | 
         | Don't get me wrong -- many of the paragraphs in isolation are
         | interesting, but taken together it's excruciating to read. It's
         | an impressive project overall, and certainly ambitious, but it
         | falls down due to the writer's lack of empathy for the reader.
         | I suspect the situation would be better if they reduced the
         | scope of the project and kept the writing more focused.
        
         | pmiller2 wrote:
         | Can you expand a bit on point B? What sort of things did you
         | find in this book, and why were you not able to apply them to
         | your own practice of software engineering?
        
           | spekcular wrote:
           | Sure. Chapter 1 is titled "Human Cognition" and reads like
           | notes to an introductory psychology textbook. This is not a
           | bad thing (psychology is fun to read about), but I didn't
           | find any practical guidance about building software.
           | 
           | Chapter 2 is "Cognitive Capitalism." Here, I find a lot of
           | social and economic commentary, but again, no information
           | that will actually assist me in writing software.
           | 
           | The same story plays out for basically every other chapter,
           | until we hit the material about statistics in the second
           | half.
           | 
           | Another way to say it is that the level of abstraction feels
           | a bit off, resulting in the sense that the discussion is
           | somewhat superficial and disconnected from practice. It's
           | like someone wrote a book on gardening that begins with a 200
           | page discussion of cell biology.
        
       | aard wrote:
       | > "In a competitive market for development work and staff, paying
       | people to learn from mistakes that have already been made by many
       | others is an unaffordable luxury; an engineering approach,
       | derived from evidence, is a lot more cost-effective than craft
       | development."
       | 
       | I disagree with this characterization. Yet again, we developers
       | are being told to just follow the program. That programming is
       | not an artistic or craft endeavor that benefits from experience
       | and intuition. It is like working in a factory where coders just
       | bang out widgets on an assembly line. And the self-appointed
       | thinkers will optimize the process for us.
       | 
        | What is at risk by not allowing developers to "learn from
        | mistakes" is autonomy. Stripping developers of their autonomy is
        | the primary cause of poor performance, not an inability to
        | execute so-called "best practices".
       | 
       | Attempts to codify the process of software development always
       | fail, because coding is a design process not a manufacturing one.
       | Developers do their jobs in many different ways, many of which
       | are equally effective. There is more than one way to skin a cat
       | -- especially in creative work.
       | 
       | > "The labour of the cognitariate is the means of production of
       | software systems"
       | 
        | This false assumption is at the base of the problem. The work of
        | the compiler (or scp) is the means of software production. Coding
        | is design. Once the design is complete, the results are compiled
        | and copied to their target environments. In software, production
        | is negligible, which promotes the misconception that developers
        | are producing software. In actuality, they are designing
        | software. The difference may seem subtle, but it is crucial.
        
         | majormajor wrote:
         | > What is at risk by not allowing developers to "learn from
         | mistakes" is autonomy. Striping developers of their autonomy is
         | the primary cause of poor performance, not an inability to
         | execute so-called "best practices"
         | 
         | I've seen a lot of the opposite. Yes, coding is a design
         | practice, but I've had to clean up a lot of messes resulting
         | from _just plain bad design_ because nobody involved -
         | generally ~25 year olds with very little experience out of
         | school - knew that there were lessons from the past they could
          | learn about what designs would and wouldn't work.
         | 
         | I agree with you that programming is an endeavor that benefits
         | from experience, and wish that people would realize that means
         | they can _learn from the experience of others_. Sure, intuition
          | is involved too, but one common thing I've seen in shitty code
          | I've had to salvage is that people often don't apply their
          | intuition to "how could this code fail?" or "how easy will this
          | be to modify in the future?"
         | 
         | That said... taking a look at this book... I don't see much in
         | the description or table of contents that would teach those
         | folks whose work I'm decrying above much useful about _writing
         | good software_. It has sections on reliability, project
         | estimation, and development methodology as separate things -
         | plus a lot of non-software-design stuff. But to me, the flow is
         | different - estimation, reliability, and delivery will all
          | suffer if you don't have the right fundamental design skills.
         | You can't get much better at any of those without some deeper
         | underlying changes.
         | 
         | It seems to have a lot of discussion of studies _adjacent to_
          | software-related things, but I'm not sold on them saying much
         | meaningful about software design.
        
         | 908B64B197 wrote:
         | I read that sentence as "Failures should be documented. If
          | something failed in the past and the underlying reason it
          | failed is still there, it's useless to attempt it again".
         | 
          | Ex: Using language Y for X was a disaster because Y didn't have
          | hardware acceleration and we were unable to reach goal Z.
         | Before attempting to use language Y in production again, make
         | sure platform support has improved.
        
           | WJW wrote:
           | I wonder how many confounding factors there are though. At
            | one point when I had just joined a company, a colleague in
           | the process of leaving told me very confidently "at our scale
           | it is not possible to do X anymore". Obviously, we got X
           | working the very next week.
           | 
           | What if using language Y for X was a disaster because of poor
           | programming skills or micromanagers? In my experience it is
           | very rarely as clear cut as "Y does not have hardware
           | acceleration support" and there is not really enough rigour
           | in the software engineering process to really figure out
           | where a failure came from.
        
             | 908B64B197 wrote:
             | I also agree with that.
             | 
             | I think it's more about infusing an engineering mindset
             | where there is an analysis of past failures.
        
         | Jtsummers wrote:
         | It really depends on the kind of systems and work you're doing.
         | In my previous office, there was an entire group (about
         | 100-150) of programmers whose job was incredibly rote. That is,
         | you could take a novice out of college and get them up to speed
         | in about 1-3 months to do even their most complex work.
         | 
         | However, the other groups were much less factory-ish, though
         | rarely anything truly novel. Only a small cadre of programmers
         | were working on anything that _really_ required novelty and
         | creativity.
         | 
         | It's a spectrum and that has to be understood by all
         | participants in the discussion. Managers want everything to be
         | like that first group, because it's so consistent and
         | predictable. They want to know that a project will take 1000
         | man-hours and be right 95% of the time. Many programmers want
         | to see themselves part of the last group. The reality is, most
         | of us are in the middle. There are aspects of the job that are
         | almost mechanical, and aspects which require greater creativity
         | or "craft". If we can clarify that for the managers, it can go
         | a long way to ending some of the nonsense.
        
         | mekoka wrote:
         | I didn't get the feeling that the author disagreed that
         | programming has a crafty nature. But since building software
         | also has some engineering aspects, he is pointing out that you
         | (as an organization or project leader) could wait for the
         | graduates that you hire fresh out of uni to mature into
         | accomplished "craftsmen", or you could just observe what has
          | already worked for others, _based on evidence_, and mindlessly
          | apply those patterns. Your mileage may vary with the latter
          | approach, but it will likely be cheaper and will remove some
          | uncertainty from your projections.
         | 
         | Speaking as someone who has observed the kind of massive
         | technical debt you can incur from letting your programmers
         | mature one mistake at a time and ten reinvented wheels later, I
         | would certainly not be too quick to dismiss the proposal.
         | 
         | > "The labour of the cognitariate is the means of production of
         | software systems"
         | 
         | I fail to understand why you would disagree here. That sentence
         | to me means that producing software systems requires brain
         | work. Whether thinking, designing, coding, testing, debugging,
         | reviewing, it's all part of the "labour of the cognitariate".
         | Compiling isn't.
        
           | aard wrote:
           | You have a valid point here. Maybe I was too hasty to judge
           | the book by it's summary. From other comments, it sounds like
           | this book takes a pretty reasonable approach.
           | 
           | I am frequently skeptical of efforts to show the "one true
           | way" of programming, because it usually results in some
            | poorly vetted process being forced on me at work. So, I was
           | probably too quick to jump to conclusions.
           | 
           | It actually sounds like this book is good about showing that
           | a lot of our current assumptions about good process are
           | faulty.
           | 
            | The point I was trying to make about the "labour of the
            | cognitariate" was that developers don't really produce
            | software; they create the blueprint for the software (source
            | code), and then compilers or interpreters produce the actual
            | software. It may seem like an overly semantic point, but I
            | think it is an important distinction to make. It changes the
            | way you think about software development.
        
         | StillBored wrote:
         | Sorry to be the killjoy here, but
         | 
         | This is a big mischaracterization of what "software
         | engineering" is attempting. In this case it is an attempt to
         | document what should be learned from experience and intuition.
          | Because the vast majority of software engineering IS "factory"
          | work: it's the grunt work of building what should be a well-
          | understood system, with well-understood tooling. This doesn't
          | mean that there isn't plenty of space for creative problem
          | solving, particularly when the system is under-specified; it
          | just means that the correct solution most of the time is the
          | boring one.
         | 
          | Most companies don't want "software artists" any more than they
          | want "artistic bricklayers" or "artistic aerospace engineers".
          | What they want is predictable, maintainable, error-free
          | software that still works when the "artist" moves on.
         | 
          | That doesn't mean that you're not free to upload a ton of
          | artistic software to your GitHub, or to have opinions about how
          | something should be designed. It just means that a professional
          | should choose the accepted method over the fun one when given
          | the chance. And why shouldn't they? In very few cases are
          | "software engineers" being paid to work on their own pet
          | projects; the end result is going to be something that the
          | company owns and is responsible for, not the individual working
         | for said company.
        
           | postalrat wrote:
           | It seems you have a pet peeve about pet projects. The parent
           | comment didn't say anything about pet projects.
        
           | relativeadv wrote:
           | "What they want is predictable, maintainable error free
           | software that still works when the "artist" moves on."
           | 
           | How about fast software?
           | 
           | To get fast software today. You're going to need some
           | artists.
        
             | WJW wrote:
              | I think the evidence shows that "fast" software is not
              | actually desired by the majority of companies; otherwise
              | they would put more emphasis on using compiled languages,
              | optimisation, and reducing technical debt. Sure, everybody
              | will take a speed improvement if it's free, but most
              | companies will go for predictable, maintainable, and error-
              | free software over speed every time.
        
               | jimbokun wrote:
               | What matters for companies is whether they can accomplish
               | tasks faster than they can do them today. And it's rare
               | that the difference between executing Python or C code is
               | the bottleneck in that process.
        
           | burnthrow wrote:
            | > Most companies don't want "software artists" any more than
            | they want "artistic bricklayers" or "artistic aerospace
            | engineers". What they want is predictable, maintainable,
            | error-free software that still works when the "artist" moves
            | on.
           | 
           | Well that's a bloody shame because they have to hire actual
           | humans, and we're a smelly, imaginative lot. If they want an
           | iron they should go to Target.
        
         | TrinaryWorksToo wrote:
         | Actually, the book agrees with you. There's very little
          | evidence to support having a rigid set of rules to follow when
         | programming.
         | 
          | There are a few high-level take-aways from the recurring
          | patterns seen in the analysis; these include:
          | 
          | * there is little or no evidence for many existing theories of
          | software engineering,
          | 
          | * most software has a relatively short lifetime, e.g., source
          | code is deleted, packages are withdrawn or replaced, and
          | software systems cease to be supported. A cost/benefit analysis
          | of an investment intended to reduce future development costs
          | needs to include the possibility that there is no future
          | development; see fig 4.24, fig 5.7, fig 5.52, fig 5.69, fig
          | 6.9, fig 3.31. Changes within the ecosystems in which software
          | is built also has an impact on the viability of existing code;
          | see fig 4.13, fig 4.22, fig 4.59, fig 4.61, fig 11.76,
          | 
          | * software developers should not be expected to behave
          | according to this or that mathematical ideal. People come
          | bundled with the collection of cognitive abilities and
          | predilections that enabled their ancestors, nearly all of whom
          | lived in stone-age communities, to reproduce; see chapter 2.
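          | 
          | That cost/benefit point is really just expected-value
          | arithmetic. A tiny illustrative calculation in C (the numbers
          | are made up, not from the book):
          | 
          |     /* Hypothetical numbers: is a refactor worth doing if    */
          |     /* the code may not live long enough to pay it back?     */
          |     #include <stdio.h>
          | 
          |     int main(void)
          |     {
          |         double refactor_cost = 60.0;   /* hours spent now    */
          |         double future_saving = 100.0;  /* hours saved later  */
          |         double p_survives    = 0.4;    /* code still in use  */
          | 
          |         double expected = p_survives * future_saving;
          |         printf("expected saving %.1f h vs cost %.1f h\n",
          |                expected, refactor_cost);
          |         /* 40 expected hours < 60 hours spent: not worth it. */
          |         return 0;
          |     }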
        
           | aard wrote:
           | This is good to know. I should jump into the book. I may have
           | gotten the wrong impression from the summary -- the quotes
           | that I shared. I think a scientific effort to look at the
           | many practices that are currently in vogue is very much
           | needed. So many are taken as gospel truths and lorded over
            | people in the name of science -- but they really aren't
            | supported by any rigorous science at all. If this book points
            | that out, then I am all for it.
           | 
            | Especially if the book takes more of a systems-thinking
            | approach. Many studies isolate one practice (pair
            | programming, code reviews, etc.) and can show benefits, but
            | they ignore the systems they function in. Apparently opposite
            | approaches, supported by the right personalities and
            | environments, can often be equally effective. From your
            | comment, it looks like there may be some analysis like that
            | too.
        
             | [deleted]
        
       ___________________________________________________________________
       (page generated 2020-11-12 23:01 UTC)