[HN Gopher] Does memory leak? (1995)
       ___________________________________________________________________
        
       Does memory leak? (1995)
        
       Author : rot25
       Score  : 384 points
       Date   : 2020-02-22 12:52 UTC (10 hours ago)
        
 (HTM) web link (groups.google.com)
 (TXT) w3m dump (groups.google.com)
        
       | [deleted]
        
       | lmilcin wrote:
        | I once worked on an application where even a single failure
        | would have meant considerable loss for the company, possibly
        | including closure.
       | 
       | By design, there was no memory management. The memory was only
       | ever allocated at the start and never de-allocated. All
       | algorithms were implemented around the concept of everything
       | being a static buffer of infinite lifetime.
       | 
       | It was not possible to spring a memory leak.
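The design described above can be sketched in C as a fixed pool claimed at startup and never released; the pool size and the `Track` type are illustrative placeholders, not details from the actual application:

```c
#include <stddef.h>

/* Sketch of the design described above: every object lives in a static
 * pool allocated for the entire life of the process, and nothing is
 * ever freed.  MAX_TRACKS and the Track type are illustrative. */
#define MAX_TRACKS 64

typedef struct { double x, y, z; } Track;

static Track  track_pool[MAX_TRACKS];  /* reserved up front, never released */
static size_t tracks_in_use;

/* "Allocation" is just claiming the next free slot.  Running out is an
 * explicit, bounded design-time condition, not a runtime leak. */
static Track *track_acquire(void) {
    if (tracks_in_use >= MAX_TRACKS)
        return NULL;
    return &track_pool[tracks_in_use++];
}
```

Because nothing is ever handed back, there is no free path to get wrong, which is how the design makes a leak impossible by construction.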
        
         | conro1108 wrote:
         | This sounds fascinating, could you elaborate any on why a
         | single failure of this application would be so catastrophic?
        
           | lmilcin wrote:
           | I can't discuss this particular application.
           | 
            | But there is a whole class of applications that are also
           | mission critical -- an example might be software driving your
           | car or operating dangerous chemical processes.
           | 
            | For the automotive industry there are the MISRA standards,
            | which we used to guide our development process, along with
            | other ideas from NASA and Boeing (yeah, I know...)
        
       | lallysingh wrote:
       | Are these Patriots? Didn't they need a power cycle every 24
       | hours? Is this why?
        
       | tjalfi wrote:
       | This has come up a couple times ([0][1]) before.
       | 
       | [0] https://news.ycombinator.com/item?id=14233542
       | 
       | [1] https://news.ycombinator.com/item?id=16483731
        
       | matsemann wrote:
        | I did a project a few years back where I really had no idea
        | what I was doing. [0] I had to read two live analog video
        | feeds fed into two TV cards, display them properly on an
        | Oculus Rift, and then send the head-tilt data back to the
        | cameras mounted on a flying drone. I spent weeks just getting
        | it to work, so my C++ etc. was a mess. In the first demo I
        | leaked something like 100 MB a second, which meant it would
        | work for about a minute before everything crashed. We could
        | live with that. Just had to restart the software for each
        | person trying, hehe.
       | 
       | [0]: https://news.ycombinator.com/item?id=7654141
        
       | Out_of_Characte wrote:
        | What an interesting concept. Good programmers always consider
        | certain behaviours to be wrong, memory 'leaks' being one of
        | them. But this real application of purposefully not managing
        | memory is also an interesting thought exercise. However
        | counterintuitive, a memory leak in this case might be the
        | optimal solution in this problem space. I just never thought I
        | would have to think of an object's lifetime in such a literal
        | sense.
        | 
        | Edit: of course HN reacts pedantically when I claim good
        | programmers always consider memory leaks wrong. Do I really
        | need to specify the obvious every time?
        
         | blattimwind wrote:
          | Cleaning up memory is an antipattern for many _tools_,
         | especially of the EVA/IPO model (input-process-output). For
         | example, cp(1) in preserve hard links mode has to keep track of
         | things in a table; cleaning it up at the end of the operation
         | is a waste of time. Someone "fixed" the leak to make valgrind
         | happy and by doing so introduced a performance regression.
         | Another example might be a compiler; it's pointless to
         | deallocate all your structures manually before calling exit().
          | The kernel throwing away your address space is infinitely
          | faster than you chasing down every pointer you ever created
          | and then having the kernel throw away your address space. The
         | situation is quite different of course if you are libcompiler.
        
           | ufo wrote:
            | Is there a way to tell Valgrind that a certain memory
           | allocation is intentionally being "leaked", and should not
           | produce a warning?
        
             | blattimwind wrote:
             | https://valgrind.org/docs/manual/manual-core.html#manual-
             | cor...
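Per the manual section linked above, Valgrind reads suppression files passed with `--suppressions=<filename>`, and running with `--gen-suppressions=all` prints a ready-made entry for each complaint. A hypothetical entry for an intentionally "leaked" startup table might look like this (the suppression name and `fun:` frames below are placeholders for your own call stack):

```
{
   intentional_startup_table
   Memcheck:Leak
   match-leak-kinds: definite,reachable
   fun:malloc
   fun:build_startup_table
}
```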
        
           | saagarjha wrote:
           | > The kernel throwing away your address space is infinitely
           | faster than you chasing every pointer you ever created down
           | and then having the kernel throw away your address space.
           | 
           | In this case you normally want to allocate an arena yourself.
        
           | zozbot234 wrote:
           | "Throwing away" a bunch of address space also happens when
           | freeing up an arena allocation, and that happens in user
           | space. This means that you might sometimes be OK with not
           | managing individual sub-allocations within the arena, for
           | essentially the same reason: it might be pointless work given
           | your constraints.
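A bump-pointer arena makes the point concrete: sub-allocations are never freed individually, and resetting or releasing the whole region is O(1). This is a generic sketch, not any particular library's API:

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal bump-pointer arena: sub-allocations are never freed one by
 * one; the whole region is reset or released in O(1). */
typedef struct {
    uint8_t *base;
    size_t   cap;
    size_t   used;
} Arena;

static int arena_init(Arena *a, size_t cap) {
    a->base = malloc(cap);
    a->cap  = cap;
    a->used = 0;
    return a->base != NULL;
}

static void *arena_alloc(Arena *a, size_t n) {
    size_t start = (a->used + 15u) & ~(size_t)15u;  /* 16-byte alignment */
    if (start + n > a->cap)
        return NULL;                /* arena exhausted */
    a->used = start + n;
    return a->base + start;
}

/* "Freeing" every sub-allocation at once is a single pointer reset ... */
static void arena_reset(Arena *a)   { a->used = 0; }
/* ... and tearing the arena down is one free(), analogous to the kernel
 * discarding the whole address space at exit. */
static void arena_destroy(Arena *a) { free(a->base); a->base = NULL; }
```

The parallel to the exit() case is exact: the cost of reclamation is proportional to the number of regions, not the number of allocations.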
        
         | emsy wrote:
         | No, they did what good engineers do: They analyzed the problem
         | and found a feasible and robust solution. Following rules
         | without thinking is not what good programmers do. I'd argue
         | that most problems of modern software development stem from
         | this mindset, even when it's a rule that should be applied 99%
         | of the time.
        
           | harryf wrote:
            | All good until, years later, a new team, unaware of the
            | leak, builds the same system into a longer-range missile...
        
             | emsy wrote:
             | Right, but that doesn't invalidate the previous decisions
             | made at that time. And it's not only a problem in software.
             | Accidents can happen because engineers trade material
             | strength to reduce weight only to find out that over a
             | number of generations they slowly crept above the safety
              | margin that was decided upon 10 years ago, resulting in
             | catastrophic failure. I'm sorry I don't have the actual
             | story at hand right now, but it's not unimaginable either
             | way. The problem you described has more to do with proper
             | passing of knowledge and understanding of existing systems
             | rather than strictly adhering to a fixed set of best
             | practices.
        
         | bob1029 wrote:
         | It is interesting what you can come up with if you rely on
         | constraints in the physical realm to inform your virtual realm
         | choices. I've been looking at various highly-available
         | application architectures and came across a similar idea to the
         | missile equation in the article. If you are on a single box
         | your hands are tied. But, if you have an N+1 (or more)
         | architecture, things can get fun.
         | 
         | In theory, you could have a cluster of identical nodes each
         | handling client requests (i.e. behind a load balancer). Each
         | node would monitor its own application memory utilization and
         | automatically cycle itself after some threshold is hit (after
         | draining its buffers). From the perspective of the programmer,
         | you now get to operate in a magical domain where you can
         | allocate whatever you want and never think about how it has to
         | be cleaned up. Obviously, you wouldn't want to maliciously use
         | malloc, but as long as the cycle time of each run is longer
         | than a few minutes I feel the overhead is accounted for.
         | 
         | Also, the above concept could apply to a single node with
         | multiple independent processes performing the same feat, but
         | there may be some increased concerns with memory fragmentation
         | at the OS-level. Worst case with the distributed cluster of
         | nodes, you can simply power cycle the entire node to wipe
         | memory at the physical level and then bring it back up as a
         | clean slate.
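On Linux, the self-cycling check described above could be as simple as polling /proc/self/statm; this is an illustrative sketch (the threshold, the drain logic, and the supervisor that restarts the process are all assumed to exist elsewhere):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch of a self-cycling node (Linux-specific, illustrative): the
 * process watches its own resident set size and, once past a threshold,
 * drains and exits so that a supervisor or load balancer brings up a
 * fresh replacement with a clean heap. */
static long rss_bytes(void) {
    long pages = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f)
        return -1;
    /* statm fields: total_pages resident_pages ... -- take the second */
    if (fscanf(f, "%*ld %ld", &pages) != 1)
        pages = -1;
    fclose(f);
    return pages < 0 ? -1 : pages * sysconf(_SC_PAGESIZE);
}

static void maybe_cycle(long threshold_bytes) {
    long rss = rss_bytes();
    if (rss >= 0 && rss > threshold_bytes) {
        /* drain in-flight requests here, then exit cleanly;
         * the supervisor restarts the process */
        exit(0);
    }
}
```

Called periodically from the request loop, this turns the leak budget into an operational parameter rather than a correctness concern.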
        
       | kebman wrote:
       | The garbage is collected in one huge explosion. And then even
       | more garbage is made, so that's why we don't mind leaks...... xD
        
       | simonebrunozzi wrote:
       | > the ultimate in garbage collection is performed without
       | programmer intervention.
       | 
       | Brilliant.
        
       | ggambetta wrote:
       | Of course it's also expected to crash, especially the hardware :)
        
         | Igelau wrote:
         | Remote execution? It was the top requested feature!
        
       | GordonS wrote:
       | A bit OT, but I wonder how I'd feel if I was offered a job
       | working on software for missiles.
       | 
       | I'm sure the technical challenge would be immensely interesting,
       | and I could tell myself that I cared more about accuracy and
       | correctness than other potential hires... but from a moral
       | standpoint, I don't think I could bring myself to do it.
       | 
       | I realise of course that the military uses all sorts of software,
       | including line of business apps, and indeed several military
       | organisations use the B2B security software that my microISV
       | sells, but I think it's very different to directly working on
       | software for killing machines.
        
         | jmpman wrote:
         | Straight out of college, I was offered a job writing software
         | for missiles. Extremely interesting area, working for my
         | adjunct professor's team, who I highly admired and whose class
         | was the best of my college career. The pay was on par with all
         | my other offers. I didn't accept for two reasons.
         | 
         | First, I logically agreed that the missiles were supporting our
         | armed services and I believed that our government was generally
         | on the right side of history and needed the best technology to
         | continue defending our freedoms. However, a job, when executed
         | with passion, becomes a very defining core of your identity. I
         | didn't want death and destruction as my core. I support and
         | admire my college friends who did accept such jobs, but it just
         | wasn't for me.
         | 
         | Second, I had interned at a government contractor, (not the
         | missile manufacturer), and what I saw deeply disturbed me. I
         | came on to a project which was 5 years into a 3 year schedule,
         | and not expected to ship for another 2 years. Shocked, I asked
         | my team lead "Why didn't the government just cancel the
         | contract and assign the work to another company?", her reply,
         | "If they did that, the product likely wouldn't be delivered in
         | under two years, so they stick with us". I understood that this
         | mentality was pervasive, and would ultimately become part of
         | me, if I continued to work for that company. That mentality was
         | completely unacceptable in the competitive commercial world,
         | and I feared the complacency which would infect me and not
         | prepare me for the eventual time when I'd need to look for a
         | job outside that company. As a graduating senior, I attended
         | our college job fair, and when speaking with another (non
         | missile) government contractor, I told the recruiter that I was
          | hesitant about working for his company because I thought it
         | wouldn't keep me as competitive throughout my career. I
         | repeated the story from my internship, and asked if I'd find
         | the same mentality at his company. His face dropped the
         | cheerful recruiter facade, when he pulled me aside and sternly
         | instructed "You should never repeat that story". I took that as
         | an overwhelming "yes". So, my concern was that working for this
         | missile manufacturer, this government contractor mentality
         | would work its way into their company (if it hadn't already),
         | and it would be bad for my long term career. I wanted to remain
         | competitive on a global commercial scale, without relying upon
         | government support.
        
           | tomcam wrote:
           | Thank you for a nuanced and very well explained set of
           | reasons. This is a difficult subject to handle
           | dispassionately here and you did an admirable job.
        
           | Razengan wrote:
           | > _he pulled me aside and sternly instructed "You should
           | never repeat that story"_
           | 
           | We really need more exposure for the things that people like
           | that want to silence..
        
             | prostheticvamp wrote:
             | He wasn't silencing anyone. There were no black suits with
             | billy clubs outside.
             | 
             | He was warning the kid that if he went around repeating
             | that aloud he'd burn himself on the interview trail as
              | someone too naive to toe the corporate line and likely to
             | reveal embarrassing workplace details to outsiders.
             | 
             | He was doing the naive youngster a favor, before he could
             | hurt his own career.
             | 
             | The use of the phrase "people like that" is pretty much
             | always pejorative, in a story where a guy who owes the
             | student absolutely nothing took a moment to warn him "don't
             | touch the stove, you'll burn yourself".
             | 
             | So it's become a story about government contractors instead
             | of a story about "how I fucked up my job search as a new
             | grad."
             | 
             | Thank you, random kind recruiter guy.
        
               | [deleted]
        
               | jmpman wrote:
               | Nah, I was smarter than that. I already had multiple
               | offers, and had no intention of working for a government
               | contractor. I mostly wanted to see how this recruiter
               | would react to my flippant statement. If he had
               | vehemently defended his company, it would have implied
               | that the whole government contracting wasn't quite as
               | disfunctional as I'd experienced. His reaction basically
               | confirmed my suspicions. It was a "don't talk too loud
               | about what we all know is going on", along with a "how
               | dare you unmask us". Sure, it was also a "holy hell, you
               | can't talk to recruiters like that".
               | 
               | But, no black suits with billy clubs.
               | 
                | And, I'm not suggesting that anything was out of the norm for
               | any of these government contractors. They're delivering a
               | very specialized service with immense regulations. There
               | are very few companies which can produce the same
               | product, so the competition is low and the feedback loop
               | in procurement cycles is much longer.
        
           | jnwatson wrote:
           | Sticking with a vendor even though they are very late is
           | quite common among even non-government programs.
           | 
           | Big projects are hard and they are frequently late. The fact
            | that it is for the government is largely beside the point.
        
           | throwaway462564 wrote:
           | > First, I logically agreed that the missiles were supporting
           | our armed services and I believed that our government was
           | generally on the right side of history and needed the best
           | technology to continue defending our freedoms
           | 
           | I hope you are not writing about the US government. I don't
           | think the US military can be described as protecting our
           | freedoms after interfering and starting wars all over the
            | world in the past. We are sadly mostly the aggressors and
            | not the defenders.
        
             | [deleted]
        
           | AtlasBarfed wrote:
           | > our government was generally on the right side of history
           | 
           | Well, we are the victors, so far. But the war against
           | ourselves is going quite well.
        
           | newscracker wrote:
           | _> I came on to a project which was 5 years into a 3 year
           | schedule, and not expected to ship for another 2 years.
           | Shocked, I asked my team lead "Why didn't the government just
           | cancel the contract and assign the work to another company?",
           | her reply, "If they did that, the product likely wouldn't be
           | delivered in under two years, so they stick with us". I
           | understood that this mentality was pervasive, and would
           | ultimately become part of me, if I continued to work for that
           | company. That mentality was completely unacceptable in the
           | competitive commercial world, and I feared the complacency
           | which would infect me and not prepare me for the eventual
           | time when I'd need to look for a job outside that company._
           | 
           | Software for any system is complex. And it's quite common for
           | almost every software project to be late on schedule. The
           | Triple Constraint -- "schedule, quality, cost: pick any two"
           | doesn't even fit software engineering in any kind of serious
           | endeavor because it's mostly a "pick one" scenario.
           | 
           | If you've worked on projects where all these three were met
           | with the initial projections, then whoever is estimating
           | those has really made sure that they've added massive buffers
           | on cost and time or the project is too trivial for a one
           | person team to do in a month or two.
           | 
           | The entire reason Agile came up as a methodology was in
           | recognizing that requirements change all the time, and that
           | the "change is the only constant" refrain should be taken in
           | stride to adapt how teams work.
        
             | AtlasBarfed wrote:
             | I vehemently and violently disagree!
             | 
             | The average project achieves 1.5 of the triples.
             | 
             | Here are the true constraints though:
             | 
              | - Schedule
              | - Meets Requirements
              | - Cost
              | - Process
              | - Usefulness/Polish
             | 
             | Yes, usefulness and meets requirements aren't the same
             | thing, and anyone who has done the madness of large scale
             | enterprise software will be nodding their heads.
             | 
             | What really bogs down most software projects is that
             | "quality" means different things to different actors in
             | projects. Project Managers want to follow process and meet
             | political goals. Users want usefulness, polish, and
              | efficiency. Directors/management want the requirements
              | they dictate fulfilled (often reporting and other junk
              | that doesn't add to ROI).
             | 
              | And that's why I like to say "pick two".
        
         | GuB-42 wrote:
         | There are different kinds of killing machines. And accurate
         | missiles are among the least bad.
         | 
         | With the exception of nuclear weapons (that's another topic),
         | missiles are designed to destroy one particular target of
         | strategic importance and nothing more. They are too expensive
         | as mass killing weapons, but they are particularly appropriate
         | for defense.
         | 
         | Without missiles, you may need to launch an assault, destroying
          | everything on your way to the target, risk soldiers' lives,
          | etc. Less accurate weapons mean a higher yield to compensate,
         | so more needless destruction.
         | 
         | War is terrible, but I'd rather have it fought with missiles
         | than with mass produced bombs, machine guns, and worst of all,
         | mines.
        
           | BiteCode_dev wrote:
           | On the other hand, making killing a target easier to do gives
           | you the incentive to do just this instead of trying to find
           | an alternative solution.
           | 
            | Case in point: currently the country with the best army in
            | the world is also the one that goes to war the most.
        
         | dahart wrote:
         | I had a family friend who worked on missiles and drones and
         | other defense systems. He was really one of my dad's running
         | buddies, and he was a super nice guy, had 4 kids, went to
         | church, etc.
         | 
         | One day, I believe during the Iraq occupation, maybe ~12 or 13
         | years ago, I asked him very directly how he felt about working
         | on these killing machines and whether it bothered him. He
          | smiled and asked if I'd rather have the war here in the U.S.
         | He also told me he feels like he's saving lives by being able
         | to so directly target the political enemies, without as much
         | collateral damage as in the past. New technology, he truly
          | believed, was preventing innocent civilians from being killed.
         | 
         | It certainly made me think about it, and maybe appreciate
         | somewhat the perspective of people who end up working on war
         | technology, even if I wouldn't do it. This point of view
         | assumes we're going to have a war anyway, and no doubt the
         | ideal is just not to have wars, so maybe there's some
         | rationalization, but OTOH maybe he's right that he is helping
         | to make the best of a bad situation and saving lives compared
         | to what might happen otherwise.
        
           | alex_young wrote:
           | Costa Rica hasn't had a standing military since 1948. They
           | are in one of the most politically unstable parts of the
           | world and do just fine without worry of invasion.
           | 
           | The US hasn't been attacked militarily on its own soil in the
           | modern era.
           | 
           | The US military monopoly hasn't prevented horrific attacks
           | such as 9/11 executed by groups claiming to be motivated by
           | our foreign military campaigns.
           | 
           | I think there is a valid question about the moral culpability
           | of working in this area.
        
             | dahart wrote:
             | Of course there is a valid question about the morals of war
             | technology. You are _absolutely_ right about that, and I am
             | not even remotely suggesting otherwise. Like I said, I
             | don't think I would ever choose to work on it.
             | 
             | There's a vast chasm in between right and wrong though.
             | There can be understanding of others' perspectives,
             | regardless of my personal judgement. And there is also a
              | valid and tightly related question here about the
             | morals of mitigating damage during a military conflict,
             | especially if the mitigation prevents innocent deaths. If
             | there's a hard moral line between doctors and cooks and
             | drivers and snipers and drone programmers, I don't know
             | exactly where it lies. Doctors are generally considered
             | morally good, even war doctors, but if we are at war, it's
             | certainly better to prevent an injury than to treat one.
             | 
             | The best goal in my opinion is no war.
        
             | prostheticvamp wrote:
             | The US was last attacked in living memory; Pearl Harbor
             | survivors still number > 0.
             | 
             | I will leave the WTC attack on the table, as I'm not
             | interested in a nitpicking tangent about what constitutes
             | an attack in asymmetric warfare vs. "terrorism."
             | 
             | "The modern era" is usefully vague enough to be
             | unfalsifiable.
        
             | samatman wrote:
             | In practice, Costa Rica has a standing military. It's just
             | the US military.
             | 
             | Due to the Monroe Doctrine, this is a rational stance for
             | Costa Rica to take. If the US were to adopt this policy,
             | Costa Rica might have to take a hard look at repealing it.
        
             | 3pt14159 wrote:
             | It's a valid question, but realistically if Costa Rica were
             | invaded a number of countries would step in to help them. I
             | love Costa Rica, it's one of the most beautiful countries
             | I've been to and I do appreciate the political statement
              | they're making, but at the same time they're in a pretty
             | unique situation.
             | 
             | As for the ethics of working on weapons, I think there is a
             | lot of grey when it comes to software. It tends to
             | centralize wealth, since once you get it right it works for
             | everyone. It tends to be dual use, because a hardened OS
             | can be used for both banks and tanks. Even developments in
             | AI are worrying because they're so clearly applicable to
             | the military.
             | 
             | Would I work on a nuclear bomb? No. Would I work on
             | software that does a better job of, say, facial recognition
             | to lessen the likelihood of a predator drone killing an
             | innocent civilian? Maybe. It's not an all or nothing thing.
        
               | kragen wrote:
               | In the last 40 years, Panama and Grenada were invaded,
               | Honduras had a coup, Colombia had a civil war, Venezuela
               | is currently having a sort of civil war, Nicaragua's
               | government was overthrown by a foreign-armed terrorist
               | campaign, and El Salvador's government sent death squads
               | out to kill its subjects. Nobody stepped in to help _any_
               | of them except Colombia. Why would Costa Rica be
               | different?
               | 
               | > _Would I work on software that does a better job of,
               | say, facial recognition to lessen the likelihood of a
               | predator drone killing an innocent civilian?_
               | 
               | The logical extreme of this is Death Note: the person who
               | has the power simply chooses who should die, and that
               | person dies, immediately and with no opportunity for
               | resistance and no evidence of who killed them. Is that
               | your ideal world? Who do you want to have that power --
               | to define who plays the role of an "innocent civilian" in
               | your sketch -- and what do you do if they lose control of
               | it? What do you do if the person or bureaucracy to which
               | you have given such omnipotence turns out not to be
               | incorruptible and perfectly loving?
               | 
               | I suggest watching Slaughterbots:
               | https://m.youtube.com/watch?v=9CO6M2HsoIA
        
               | 3pt14159 wrote:
               | Eh, there is a difference between the examples you've
                | cited and Costa Rica. They're an ally of the US and a
               | strong democracy focussed on tourism.
               | 
               | > The logical extreme of this is Death Note
               | 
               | I don't really deal with logical extremes. It leads to
               | weird philosophies like Objectivism or Stalinism. In
               | international relations terms, I'm a liberal with a dash
               | of realism and constructivism. I don't live in my ideal
               | world. My ideal world doesn't have torture or murder or
               | war of any kind. It doesn't have extreme wealth
               | inequality or poverty. Unless this is all merely a
               | simulation, I live in the real world. Who has the power
               | to kill people? Lots of people. Everyone driving a car or
               | carrying a gun. Billions of people. It's a matter of
               | degree and targeting and justification and blow-back and
               | economics and ethics and so many other things that it's
               | not really sensible to talk about it.
               | 
               | I'm familiar with the arguments against AI being used on
               | the battlefield, but even though I abhor war, I'm not
               | convinced that there should be a ban.
        
               | dahart wrote:
               | > The logical extreme of this [...] Is that your ideal
               | world?
               | 
               | Clearly not. Would you please not post an extreme straw-
               | man and turn this into polarizing ideological judgement?
               | The post you're responding to very clearly agreed that
               | war is morally questionable, and very clearly argued for
               | middle ground or better, not going to some extreme.
               | 
               | You don't have to agree with war or endorse any kind of
               | killing in any way to see that some of the activities
               | involved by some of the people are trying to prevent
               | damage rather than cause it.
               | 
               | Intentionally choosing not to acknowledge the nuance in
               | someone's point of view is ironic in this discussion,
               | because that's one of the ways that wars start.
        
               | kragen wrote:
               | You assert that "software that does a better job of, say,
               | facial recognition to lessen the likelihood of a predator
               | drone killing an innocent civilian" is "middle ground",
               | "not going to some extreme", "trying to prevent damage",
               | and "nuanced".
               | 
               | It is none of those. It is a non-nuanced extreme that is
               | going to cause damage and kill those of us in the middle
               | ground. Reducing it to a comic book is a way to cut
               | through the confusion and demonstrate that. If you have a
               | reason (that reasonable people will accept) to think that
               | the comic-book scenario is undesirable, you will find
               | that that reason also applies to the facial-recognition-
               | missiles case -- perhaps more weakly, perhaps more
               | strongly, but certainly well enough to make it clear that
               | amplifying the humans' power of violence in that way is
               | not going to _prevent_ damage.
               | 
               | Moreover, it is absurd that someone is _proposing to
               | build Slaughterbots_ and you are accusing _me_ of
               | "turn[ing] this into polarizing ideological judgement"
               | because I presented the commonsense, obvious arguments
               | against that course of action.
        
               | p1esk wrote:
               | What's your moral stance on developing defense mechanisms
               | against Slaughterbot attacks? What if the best defense
               | mechanism is killing the ones launching the attacks?
        
               | kragen wrote:
               | I think developing defense mechanisms against
               | Slaughterbot attacks is a good idea, because certainly
               | they will happen sooner or later. If the best defense
               | mechanism is killing the ones launching the attacks, we
               | will see several significant consequences:
               | 
               | 1. Power will only be exercised by the anonymous and the
               | reckless; government transparency will become a thing of
               | the past. If killing the judge who ruled against you, or
               | the school-board member who voted against teaching
               | Creationism, or the wife you're convinced is cheating on
               | you, is as easy and anonymous as buying porn on Amazon,
               | then no president, no general, no preacher, no judge, and
               | no police officer will dare to show their face. The only
               | people who exercise power non-anonymously would be those
               | whose impulsiveness overcomes their better judgment.
               | 
               | 2. To defend against anonymity, defense efforts will
               | necessarily expand to kill not only those who are certain
               | to be the ones launching the attacks, but those who have
               | a reasonable chance of being the ones launching the
               | attacks. Just as the Khmer Rouge killed everyone who wore
               | glasses or knew how to read, we can expect that anyone
               | with the requisite skills whose loyalty to the victors is
               | in question will be killed. Expect North-Korea-style
               | graded loyalty systems in which having a cousin believed
               | to have doubted the regime will sentence you to death.
               | 
               | 3. Dead-Hand-type systems cannot be defended against by
               | killing their owners, only by misleading their owners as
               | to your identity. So they become the dominant game
               | strategy. This means that it isn't sufficient to kill
               | people once they are launching attacks; you must kill
               | them before they have a chance to deploy their forces.
               | 
               | 4. Battlefields will no longer have borders; war anywhere
               | will mean war everywhere. Combined with Dead Hand
               | systems, the necessity for preemptive strikes, and the
               | enormous capital efficiency of precision munitions, this
               | will result in a holocaust far more rapid and complete
               | than nuclear weapons could ever have threatened.
               | 
               | While this sounds like an awesome plot for a science-
               | fiction novel, I'd rather live in a very different
               | future.
               | 
               | So, I hope that we can develop better defense mechanisms
               | than just drone-striking drone pilots, drone sysadmins,
               | and drone programmers. For example, pervasive
               | surveillance (which also eliminates what we know as
               | "human rights", but doesn't end up with everyone
               | inevitably dead within a few days); undetectable
               | subterranean fortresses; living off-planet in small,
               | high-trust tribes; and immune-system-style area defense
               | with nets, walls, tiny anti-aircraft guns, and so on.
               | With defense mechanisms such as these, the Drone Age
               | should be more survivable than the Nuclear Age.
               | 
               | But, if we can't develop better defense mechanisms than
               | killing attackers, we should delay the advent of the
               | drone holocaust as long as we can, enabling us to enjoy
               | what remains of our lives before it ends them.
        
           | DavidVoid wrote:
           | > New technology, he truly believed was preventing innocent
           | civilians from being killed.
           | 
           | Drones and missiles are definitely a step forward compared to
           | previous technology in many regards, but I can't help but be
           | reminded of people who argued that the development and use of
           | napalm would reduce human suffering by putting an end to the
           | war in Vietnam faster.
           | 
           | For an interesting and rather nuanced (but not 100%
           | realistic) view on drone strikes, I'd recommend giving the
           | 2015 movie _Eye in the Sky_ a watch.
           | 
           | Another issue with drone strikes and missiles is "the bravery
           | of being out of range": it's easier to make the decision to
           | kill someone who you're just watching on a screen than it is
           | to look a person in the eyes and decide to have them killed.
        
         | kebman wrote:
         | Oh, I'm sure you'd make a killing! :D <3
        
         | jpmattia wrote:
         | > _but from a moral standpoint, I don't think I could bring
         | myself to do it._
         | 
         | I've been in a similar situation, and I think there is
         | something important to think about: Assuming you'd be working
         | for the defense of a country with a track record of decency (at
         | least a good fraction of the time anyway), you have to decide
         | what people you want taking those jobs.
         | 
         | Is it better that all of the people with qualms refuse to take
         | the positions? ie Do you want that work being done by people
         | with no qualms? Because that sounds pretty terrible too.
        
         | [deleted]
        
         | Twirrim wrote:
         | > A bit OT, but I wonder how I'd feel if I was offered a job
         | working on software for missiles.
         | 
         | At one stage in my career I had an opportunity to go work for
         | Betfair. I knew several people there and could bypass most or
         | all of the interview process. At the time it was a rapidly
         | growing online gambling company, not quite the major company
         | it is now. They were paying about half as much again over my
         | existing
         | salary, and technology wise it would have been a good
         | opportunity.
         | 
         | I ended up having quite a long conversation with a few co-
         | workers around the morality of it. I was against it, for what I
         | thought were pretty much obvious reasons. The house always
         | wins, gambling is an amazing con built up on destroying lives.
         | I don't want to be a part of that, much like I wouldn't work
         | for a tobacco company, oil company etc. Co-workers were taking
         | what they saw as more pragmatic perspective: Gamblers gonna
         | gamble, doesn't matter if the site is there or not.
        
           | jmpman wrote:
           | The reality behind corporate casinos is a bit more
           | disheartening. Using analytics, from their "players club"
           | cards, they know what zip code you're from, and based upon
           | that, they approximate your income. They know that if you
           | lose too much, then your wife isn't going to allow you to
           | return to Vegas. Each income level has a certain pain
            | threshold for how much you can lose in a year. The casinos
            | work very hard to ensure you don't go over that limit.
           | 
           | If you're on a bad losing streak, they'll send a host over to
           | offer you tickets to a show or a buffet. The goal is to get
           | you AWAY from gambling. They know you're an addict, but want
           | to keep the addiction going.
           | 
           | That's where they cross the ethical line.
        
         | enriquto wrote:
         | > A bit OT, but I wonder how I'd feel if I was offered a job
         | working on software for missiles.
         | 
         | Unless you are an extreme pacifist (which is a perfectly
         | reasonable thing to be), you'll acknowledge the legitimate need
         | for the existence of an army in your country. In that case, the
          | army had better be equipped with missiles than with youths
          | carrying bayonets. Then, there's nothing wrong with providing
          | these
         | missiles with technologically advanced guiding systems.
         | 
         | On the other hand, if I worked in "algorithmic trading" or
         | fancy "financial instruments" I would not be able to sleep at
         | night without a fair dose of drugs.
        
           | GordonS wrote:
           | It's not that I'm a pacifist, but more that I don't trust my
           | government (UK, but I have the same issues with the US gov
           | too) to do justifiable things with them.
           | 
           | If they were for defense only, I might be able to do it. But
           | instead they are sold to any government with the means to
           | pay, regardless of their human rights record or how they will
           | be used (e.g. Saudi). Aside from selling them on, they are
           | used in conflicts that are hard to justify, beyond filling
           | the coffers of the rich and powerful. Take the latest Iraq
           | war for example: started based on falsified evidence,
           | hundreds of thousands dead, atrocities carried out by the
           | west, schools bombed, incidents covered up...
           | 
           | Given these realities, I just couldn't do it.
           | 
           | My original musing was more thinking along the lines of an
           | ideal world, where I trusted my government; I'm still not
           | sure I could do it.
        
           | fancyfredbot wrote:
           | I suppose we have the 2008 crisis to thank for creating a
           | popular view that finance is an entirely morally corrupt
           | industry. Perhaps it's not surprising given the role "fancy
           | financial instruments" played there. All the same, it strikes
           | me as strange to find moving risk around to be more morally
           | difficult than designing a missile - moving risk around is at
           | least sometimes straightforwardly beneficial for everyone
           | involved, a missile strike less so...
        
         | TedDoesntTalk wrote:
         | To offer a contrarian point of view: I'd jump at such an
         | opportunity... to work in such a technically interesting area
         | AND help keep my country technologically relevant. It's a no-
         | brainer.
        
           | GordonS wrote:
           | For me (and I would imagine a lot of people), it's _far_ from
           | a no-brainer.
           | 
           | While it would undoubtedly be interesting from a technical
           | standpoint, there is a serious moral conundrum - even if it
           | was an ideal world where you trusted your government not to
           | start wars based on flimsy or falsified evidence, start wars
           | for profit, or sell weapons to less scrupulous governments.
        
             | lonelappde wrote:
             | Would you rather live in a world dominated by USA or USSR
             | or China or Nazi Germany?
             | 
                | Remember you don't get to take away _everyone's_ missiles.
        
               | GordonS wrote:
               | The world isn't binary; I don't think the options you
               | laid out are the only possibilities.
               | 
               | I take your point though, and I'd have much less of a
               | dilemma if the missiles in question were not to be sold
               | to other governments, and only to be used for domestic
               | defence or a clear world-threat type scenario. Which for
               | many Western countries is of course not going to happen.
        
             | TedDoesntTalk wrote:
             | That's fine. I am expressing my opinion and you yours. I
             | don't trust my government with everything, but I'd much
             | rather keep the status quo than see China or another
             | country reign in my lifetime.
        
         | ezoe wrote:
         | Well, there is a SAM system which is designed to kill missiles,
         | not the humans.
         | 
          | That said, I think any software development that involves the
          | government isn't fun at all, given all the bureaucracy and
          | inefficiency.
        
         | cushychicken wrote:
         | I recently interviewed, and was offered a job at, Draper Labs
         | in Cambridge MA.
         | 
         | The technical work was super interesting. Everyone I spoke to
         | was plainly super sharp, and not morally bankrupt. I fielded
         | similar moral concerns as you, but truthfully, I don't really
         | have much of a personal ethical problem with it. I was a little
         | more concerned at having to explain it to all of my friends,
         | many of whom are substantially more liberal leaning in
         | political views than I am.
         | 
         | Perception, and the pay cut I'd have to take from my current
         | work, ended up being the major things that stopped me from
         | taking it.
        
           | TedDoesntTalk wrote:
           | First time I've heard of someone accepting or not accepting a
           | job based on peer perception. Maybe you should re-evaluate
           | who your peers are if they can't accept you for your career
           | choices?
        
             | DavidVoid wrote:
             | Or maybe they trust/value their peer's judgement despite
             | the fact that they themselves don't have any strong views
             | on the subject?
        
         | lainga wrote:
         | Cue the picture of the protestors holding a sign reading "this
         | machine kills innocents!"... next to a MIM-104. There are many
         | types of missiles.
        
         | mopsi wrote:
          | > _I'm sure the technical challenge would be immensely
         | interesting, and I could tell myself that I cared more about
         | accuracy and correctness than other potential hires... but from
         | a moral standpoint, I don't think I could bring myself to do
         | it._
         | 
         | Why? The more precise missiles are, the better. If no-one
         | agreed to build missile guidance systems, we'd still have
         | carpet bombing and artillery with 100m accuracy.
        
           | saagarjha wrote:
           | People might use them less, though.
        
             | TedDoesntTalk wrote:
             | How naive. More likely they'd be used just as often, but
              | more civilians would die. See the Nazi V-1 and V-2 rockets,
             | for example, which didn't have any software.
        
         | tomcam wrote:
         | I mean all due respect with this question. Not an attack. Do
         | you think your country should have such missiles? If not, how
         | would you handle the defense case in which your country is
         | attacked but does not have them? Also note that most of Europe
         | is defended by these missiles made in the USA.
        
           | int_19h wrote:
           | In the real world, the dilemma is more often, "do you think
           | your country, and all other countries it considers allies,
           | should have such missiles"?
           | 
           | Right now, for example, Saudi Arabia is bombing Yemen with
           | American-made bombs, and Turkey is using German tanks and
           | Italian drones to grind Kurdish towns and villages into
           | rubble in Syria.
        
           | GordonS wrote:
           | I've mentioned this a few times already in other comments,
            | but I'd be in much less of a quandary about missiles used for
           | defense purposes. Especially so if they could only be used
           | for shooting down other missiles.
        
       | 32gbsd wrote:
       | It is all good until people start to depend on these memory leaks
       | and then you are stuck with a platform that is unsupported.
        
       | tyingq wrote:
       | Until the cruise missile shop down the hall decides to reuse your
       | controller.
        
         | stefan_ wrote:
         | Remember when the CIA contracted with Netezza to improve their
         | predator drone targeting, who then went and reverse-engineered
         | some software from their ex business partner IISI and shipped
         | that?
         | 
         |  _IISi's lawyers claimed on September 7, 2010 that "Netezza
         | secretly reverse engineered IISi's Geospatial product by, inter
         | alia, modifying the internal installation programs of the
         | product and using dummy programs to access its binary code [
         | ... ] to create what Netezza's own personnel reffered to
         | internally as a "hack" version of Geospatial that would run,
         | albeit very imperfectly, on Netezza's new TwinFin machine [ ...
         | ] Netezza then delivered this "hack" version of Geospatial to a
         | U.S. Government customer (the Central Intelligence Agency) [
         | ... ] According to Netezza's records, the CIA accepted this
         | "hack" of Geospatial on October 23, 2009, and put it into
         | operation at that time."_
         | 
         | Reality is always more absurd, government agencies remain inept
         | and corrupt even when shrouded in secrecy to cover up their
          | missteps, and by the way, Kubernetes now flies on the F-16.
        
         | DmitryOlshansky wrote:
         | Indeed.
         | 
          | I think one of the big problems in software development is that
         | nobody measures the half-life of our assumptions. That is the
         | amount of time it takes for half of the original assumptions to
         | no longer hold.
         | 
          | In my limited experience, the half-life of assumptions in
          | software can easily be as low as one year. Meaning that in 5
         | years only 1/32 of original architecture would make sense if we
         | do not evolve it.
        
         | gameswithgo wrote:
         | If all software is built to protect against all possible future
         | anticipated use cases, your software will take longer to make,
         | perform worse, and be more likely to have bugs.
         | 
         | If all software is built only to solve the problem at hand, it
         | will take less time to develop, be less likely to have bugs,
         | and perform better.
         | 
         | It isn't clear that coding for reuse is going to get you a net
         | win, especially since computing platforms, the actual hardware,
            | are always evolving, such that reusing code some years later can
         | become sub-optimal for that reason alone.
        
           | tyingq wrote:
           | Fair, but the leaks apparently weren't documented well, or
           | the linked story wouldn't have read like it did.
        
           | eru wrote:
           | There's a middle ground. Eg the classic Unix 'cat' (ignoring
           | all the command line switches) does something really simple
           | and re-usable, so it makes sense to make sure it does the
           | Right Thing in all situations.
        
             | thaumasiotes wrote:
             | I mean, 'cat' does something so simple (apply the identity
             | function to the input) that it has no need to be reusable
             | because there's no point using it in the first place. If
             | you have input, processing it with cat just means you
             | wasted your time to produce something you already had.
        
               | derefr wrote:
                | The point of cat(1), short for _concatenate_, is to feed
               | a pipeline multiple _concatenated_ files as input,
               | whereas shell stdin redirection only allows you to feed a
               | shell a single file as input.
               | 
               | This is actually highly flexible, since cat(1) recognizes
               | the "-" argument to mean stdin, and so you can `cat a -
               | b` in the middle of a pipeline to "wrap" the output of
               | the previous stage in the contents of files a and b
               | (which could contain e.g. a header and footer to assemble
               | a valid SQL COPY statement from a CSV stream.)
        
               | thaumasiotes wrote:
               | But that is a case where you have several _filenames_ and
                | you want to concatenate the _files_. The work you're
               | using cat to do is to locate and read the files based on
               | the filename. If you already have the data stream(s), cat
               | does nothing for you; you have to choose the order you
               | want to read them in, but that's also true when you
               | invoke cat.
               | 
                | This is the conceptual difference between
                | 
                |     pipeline | cat        # does nothing
                | 
                | and
                | 
                |     pipeline | xargs cat  # leverages cat's ability
                |                           # to open files
               | 
               | Opening files isn't really something I think of cat as
               | doing in its capacity as cat. It's something all the
               | command line utilities do equally.
        
               | derefr wrote:
                |     pipeline | cat    # does nothing
               | 
               | This is actually re-batching stdin into line-oriented
               | write chunks, IIRC. If you write a program to manually
               | select(2) + fread(2) from stdin, then you'll observe
               | slightly different behaviour between e.g.
               | dd if=./file | myprogram
               | 
               | and                   dd if=./file | cat | myprogram
               | 
               | On the former, select(2) will wake your program up with
               | dd(1)'s default obs (output block size) worth of bytes in
               | the stdin kernel buffer; whereas, on the latter,
               | select(2) will wake your program up with one line's worth
               | of input in the buffer.
               | 
                | Also, if you have _multiple data streams_, by using e.g.
                | explicit file descriptor redirection in your shell, a la
                | 
                |     (baz | quux) >4
               | 
               | ...then cat(1) won't even help you there. No tooling from
               | POSIX or GNU really supports consuming those streams,
               | AFAIK.
               | 
               | But it's pretty simple to instead target the streams into
               | explicit fifo files, and then concatenate _those_ with
               | cat(1).
        
               | thaumasiotes wrote:
                | > Also, if you have _multiple data streams_, ...then
               | cat(1) won't even help you there.
               | 
               | I've been thinking about this more from the perspective
               | of reusing code from cat than of using the cat binary in
               | multiple contexts. Looking over the thread, it seems like
               | I'm the odd one out here.
        
             | gameswithgo wrote:
             | For sure, if you can apply a small amount of effort for a
             | high probability of easy re-usability, do it. But if you
             | start going off into weird abstract design land to solve a
             | problem you don't have yet, while it might be fun, probably
             | you should stop. At least if it is a real production thing
             | you are working on.
        
         | chapium wrote:
         | Just add the explode modifier to your classes and you should be
         | good.
        
       | raverbashing wrote:
       | And that's a common mentality in hardware manufacturers as
       | opposed to software developers (you just need to see how many
       | survived)
       | 
        | (Not saying that the manufacturer was necessarily wrong in this
        | case; doubling the memory likely added only a tiny manufacturing
        | cost to something that was much more expensive.)
        
       | wbhart wrote:
       | Missiles don't always hit their intended target. They can go off
       | course, potentially be hacked, fall into the wrong hands, be sold
       | to mass murderers, fail to explode, accidentally fall out of
       | planes (even nuclear bombs have historically done this), miss
       | their targets, encounter countermeasures, etc.
       | 
       | Nobody is claiming that this was done for reasons of good
       | software design. It's perfectly reasonable to suspect it was done
       | for reasons of cost or plain negligence.
       | 
       | There's a reason tech workers protest involvement of their firms
       | with the military. It's because all too often arms are not used
       | as a deterrent or as a means of absolute last resort, but because
       | they are used due to faulty intelligence, public or political
       | pressure, as a means of aggression, without regard to collateral
       | damage or otherwise in a careless way.
       | 
        | The whole point here is the blasé way the technician responded,
       | "of course it leaks". The justification given is not that it was
       | necessary for the design, but that it doesn't matter because it's
       | going to explode at the end of its journey!
        
         | willvarfar wrote:
         | A simple bump allocator with no reclaim is fairly common in
         | embedded code.
         | 
         | Garbage collection makes the performance of the code much less
         | deterministic.
         | 
          | A lot of embedded loops running on in-order CPUs without an
          | operating system use cycle counts as a timing mechanism, etc.
        
           | wbhart wrote:
           | Right, but that isn't the argument that was being used here,
           | which is my point. The way I read it, the contractor cared
           | only enough to get the design over the line so the customer
           | would sign off on it. Their argument was that you shouldn't
           | care about leaks due to scheduled deconstruction, not because
           | of a technical consideration.
           | 
           | There exist options between no reclaim and using a garbage
           | collector which could be considered, depending on the exact
           | technical specifications of the hardware it was running on
           | and the era in which it happened.
           | 
           | But retrofitting technical reasoning about why this may have
           | been done is superfluous. The contractor already said why
           | they did it, and the subtext of the original post is that it
           | was flippant and hilarious.
        
             | ncmncm wrote:
             | Fetishism is not compatible with sound engineering.
             | 
             | "Cared only enough" is just your projection. The contractor
                | knew the requirements, and satisfied the requirements with
             | no waste of engineering time, and no risk of memory
             | reclamation interfering with correct operation. The person
             | complaining about leaks wasted both his time and the
             | contractor's.
        
               | Dylan16807 wrote:
               | You had a good comment going until the last sentence.
               | 
               | When your job is performing an analysis of the code, five
               | minutes asking for a dangerous feature to be justified is
               | ridiculously far from a "waste of time".
        
       | andreareina wrote:
       | "Git is a really great set of commands and it does things like
       | malloc(); malloc(); malloc(); exit();"
       | 
       | https://www.youtube.com/watch?v=dBSHLb1B8sw&t=113
        
       | geophile wrote:
       | The problem, of course, is that the chief software engineer
        | doesn't appear to have any understanding of what is causing
       | the leaks, and whether the safety margin is adequate. Maybe there
       | is some obscure and untested code path in which leaking would be
       | much faster than anticipated.
       | 
       | To be sure, it is a unique environment, in which you know for a
       | fact that your software does not need to run beyond a certain
       | point in time. And in a situation like that, I think it is OK to
       | say that we have enough of some resource to reach that point in
       | time. (It's sort of like admitting that climate change is real,
       | and will end life on earth, but then counting on The Rapture to
       | excuse not caring.) But that's not what's going on here. It
       | sounds like they weren't really sure that there would definitely
       | be enough memory.
        
         | willvarfar wrote:
         | You are reading a lot into a short story. You don't know that
         | the engineer hasn't had someone exactly calculate the memory
         | allocations.
         | 
         | Static or never-reclaimed allocations are common enough in
         | embedded code.
        
         | clSTophEjUdRanu wrote:
         | Freeing memory isn't free, it takes time. Maybe it's not worth
         | the time hit and they know exactly where it is leaking memory.
        
         | blattimwind wrote:
          | Actually, the story implies the opposite:
         | 
         | > they had calculated the amount of memory the application
         | would leak in the total possible flight time for the missile
         | and then doubled that number.
        
       | FpUser wrote:
       | _" Since the missile will explode when it hits it's target or at
       | the end of it's flight, the ultimate in garbage collection is
       | performed without programmer intervention."_
       | 
       | I just can't stop laughing over this _" ultimate in garbage
       | collection"_. What a guy.
       | 
       | Btw we dealt a lot with Rational in the 90's. I might have even
       | met him.
        
       | cryptoscandal wrote:
       | yes
        
       | crawshaw wrote:
       | This is an example of garbage collection being more CPU efficient
       | than manual memory management.
       | 
       | It has limited application, but there is a more common variant:
       | let process exit clean up the heap. You can use an efficient bump
       | allocator for `malloc` and make `free` a no-op.
        
         | acqq wrote:
         | There was also a variant of it with the hard drives: building
         | Windows produced a huge amount of object files, so the trick
         | used was to use a whole hard disk (or a partition) for that.
         | Before the next rebuild, deleting all the files would take
         | far more time than a "quick" reformatting of the whole hard
         | disk, so the latter was used.
         | 
         | (I am unable to find a link that talks about that, however).
         | 
         | In general, throwing away a whole set of things at once,
         | together with the structures that maintain it, is always
         | faster than throwing away every item one by one while
         | keeping those structures consistent, even though none of it
         | is needed at the end.
         | 
         | An example of arenas in C: "Fast Allocation and Deallocation of
         | Memory Based on Object Lifetimes", Hanson, 1988:
         | 
         | ftp://ftp.cs.princeton.edu/techreports/1988/191.pdf
        
           | GordonS wrote:
           | That's quite a clever solution, I doubt I would have thought
           | of that!
           | 
            | Windows has always been my daily driver, and I really do
           | like it. But I wish deleting lots of files would be much,
           | much faster. You've got time to make a cup of coffee if you
           | need to delete a node_modules folder...
        
             | acqq wrote:
             | > I wish deleting lots of files would be much, much faster.
             | You've got time to make a cup of coffee if you need to
             | delete a node_modules folder
             | 
              | The example I gave was from the old times when people
              | had much less RAM and the disks had to move physical
              | heads to access different areas. Now with SSDs you
              | shouldn't experience it _that_ badly (at least when
              | using lower level approaches). How do you start that
              | action? Do you use a GUI? Are the files "deleted" to
              | the recycle bin? The fastest way to do it is "low
              | level", i.e. without moving the files to the recycle
              | bin, and without a GUI that is in any way suboptimal (I
              | have almost never used Windows Explorer so I don't know
              | if it has some additional inefficiencies).
             | 
             | https://superuser.com/questions/19762/mass-deleting-files-
             | in...
        
               | GordonS wrote:
               | Even with an SSD, it's still bad. Much better than the
               | several minutes it used to take with an HDD, but still
               | annoying.
               | 
               | I just tried deleting a node_modules folder with 18,500
               | files in it, hosted on an NVMe drive. Deleting from
               | Windows Explorer, it took 20s.
               | 
               | But then I tried `rmdir /s /q` from your SU link - 4s! I
               | remember trying tricks like this back with an HDD, but
               | don't remember it having such a dramatic impact.
        
               | acqq wrote:
               | >>> You've got time to make a cup of coffee if you need
               | to delete a node_modules folder...
               | 
               | > Deleting from Windows Explorer, it took 20s.
               | 
               | > `rmdir /s /q` from your SU link - 4s
               | 
               | OK, so you saw that your scenarios could run much better,
               | especially if Windows Explorer is avoided. But in
               | Explorer, is that time you measured with deleting to the
               | Recycle Bin or with the Shift Delete (which deletes
               | irreversibly but can be faster)?
               | 
                | Additionally, I'd guess you don't have to wait at all
                | (i.e. you can reduce it to 0 seconds) if you first
                | rename the folder, then start deleting the renamed
                | one and let it run in the background while continuing
                | with your work -- e.g. if you want to create new
                | content in the original location, it's free
                | immediately after the rename, and the rename is
                | practically instant.
        
       | zozbot234 wrote:
       | (1995) based on the Date: and (plausibly) References: headers in
       | the OP.
        
       | mojuba wrote:
       | One other class of applications that don't really require garbage
       | collection is HTTP request handlers if run as isolated processes.
       | They are usually very short-lived - they can't even live longer
       | than some maximum enforced by the server. For example, PHP takes
       | advantage of this and allows you not to worry about circular
       | references much.
        
         | tyingq wrote:
         | _" For example, PHP takes advantage of this"_
         | 
         | I imagine the various long-running PHP node-ish async
         | frameworks curse this history. Though PHP 7 cleaned up a lot of
         | the leaks and inefficient memory structures.
        
         | chapium wrote:
         | This is clearly not my subject area. Why would we be spawning
         | processes for HTTP requests? This sounds awful for performance.
         | 
         | My best guess is a security guarantee.
        
           | barrkel wrote:
           | Killing a process is much safer than killing a thread, and
           | the OS does cleanup.
           | 
           | It's not great for maximizing performance but it's not 100s
           | of milliseconds either, forking doesn't take long; what is
           | slow is scripting languages loading their runtimes, but you
           | can fork after that's loaded. If hardware is cheaper than
           | opportunity cost of adding new features (rather than
           | debugging leaks) it makes sense.
        
             | clarry wrote:
             | I measured less than half a millisecond to fork, print
             | time, and wait for child to exit.
             | 
             | http://paste.dy.fi/NEs/plain
             | 
             | So forking alone doesn't cap performance too much; one or
             | two cores could handle >1000 requests per second (billions
             | per month).
        
           | rgacote wrote:
           | The (web) world used to be synchronous. Traditional Apache
           | spawns a number of threads and then keeps each thread around
           | for x number or requests, after which the thread is killed
           | and a new one spawned. Incredibly useful feature when you're
           | on limited hardware and want to ensure you don't memory leak
           | yourself out of existence. Modern Apache has newer options
           | (and of course nginx has traditionally been entirely async on
           | multiple threads).
        
           | tyingq wrote:
           | PHP these days doesn't fork and spawn a new process, though
           | it does create a new interpreter context.
           | 
           | In the old cgi-bin days, every web request would fork and
           | exec a new script, whether PHP, Perl, C program, etc. That
           | was replaced with Apache modules (or nsapi, etc), then later,
           | with long running process pooled frameworks like fcgi, php-
           | fpm, etc. Perl and PHP typically then didn't fork for every
           | request. But did create a fresh interpreter context to be
           | backward compatible, avoid memory leaks, etc. So there's
           | still overhead, but not as heavy as fork/exec.
        
           | derefr wrote:
           | Not spawning, forking. Web servers were simple "accept(2)
           | then fork(2)" loops for a long time. This is, for example,
           | how inetd(8) works. Later, servers like Apache were optimized
           | to "prefork" (i.e. to maintain a set of idle processes
           | waiting for work, that would exit after a single request.)
           | 
           | Long-running worker threads came a long time later, and were
           | indeed intensely criticized from a security perspective at
           | the time, given that they'd be one use-after-free away from
           | exposing a previous user's password to a new user. (FCGI/WSGI
           | was criticized for the same reason, as compared to the
           | "clean" fork+exec subprocess model of CGI.)
           | 
           | Note that in the context of longer-running connection-
           | oriented protocols, servers are still built in the "accept(2)
           | then fork(2)" model. Postgres forks a process for each
           | connection, for example.
           | 
           | One lesser-thought-about benefit of the forking model, is
           | that it allows the OS to "see" requests; and so to apply
           | CPU/memory/IO quotas to them, that don't leak over onto undue
           | impacts on successive requests against the same worker. Also,
           | the OOM killer will just kill a request, not the whole
           | server.
        
             | mehrdadn wrote:
             | Thanks for that last paragraph, I'd never thought about
             | that aspect of processes. Learned something new today.
        
         | sumanthvepa wrote:
         | I used to work at Amazon in the late 90s and this was the
         | policy they followed. The Apache server module written in C
         | would leak so much that the process would have to be killed
         | every 10 requests. The problem with the strategy was that it
         | required a lot of CPU and RAM to start up a new process.
         | Amazon's answer was to simply throw hardware at the problem.
         | Growing the company fast was more important than cleaning up
         | RAM. They did get round to resolving the problems a few
         | years later with better architectures. This too was an
         | example of good engineering trade-offs.
        
           | saagarjha wrote:
           | > The problem with the strategy was that it required a lot of
           | CPU and RAM to startup a new process.
           | 
           | It's not really kosher, but why not just keep around a fresh
           | process that they can continually fork new handlers from?
        
             | Filligree wrote:
             | Setting it up was expensive, so there's a good chance it
             | involved initializing libraries, launching threads, or
             | otherwise creating state that isn't handled correctly by
             | fork.
        
       | DagAgren wrote:
       | What a cute story about writing software to kill people by
       | shredding them with shrapnel.
        
         | swalsh wrote:
         | What if the bomb is landing on someone who intends to kill you?
        
           | pietrovismara wrote:
           | Have you watched Minority Report? What could go wrong with
           | preemptively punishing crimes!
        
         | DoofusOfDeath wrote:
         | There's a difference between delighting in war vs. accepting it
         | as sometimes the lesser of two evils. I'm okay with discussing
         | the software-engineering considerations needed to support the
         | latter.
        
         | daenz wrote:
         | Missiles are also used for defense to intercept threats.
        
           | ptx wrote:
            | Those threats are sometimes an attempted retaliation by
            | whoever was attacked earlier by the side now defending
            | against the counter-attack -- a side that is now free to
            | attack without fear of the consequences, thanks to its
            | missile defense system.
        
           | berns wrote:
           | Thank you. I hadn't thought of that possibility. Now I can
           | join the others in discussing the optimal memory management
           | strategy for missile controllers.
        
       | simias wrote:
       | I think it's a bad mindset to leak resources even when it
       | doesn't effectively matter. Especially in non-garbage-collected
       | languages, because it's important to keep in mind who owns
       | what and for how long. It also makes refactoring easier,
       | because leaked resources effectively become a sort of implicit
       | global state you need to keep track of. If a function that was
       | originally called only once at startup is now called
       | repeatedly, and it turns out that it leaks some memory every
       | time, you now have a problem.
       | 
       | In this case I assume that a massive amount of testing
       | mitigates these issues, however.
        
         | ksherlock wrote:
         | "I gave cp more than a day to [free memory before exiting],
         | before giving up and killing the process."
         | 
         | https://news.ycombinator.com/item?id=8305283
         | 
         | https://lists.gnu.org/archive/html/coreutils/2014-08/msg0001...
        
         | mannykannot wrote:
         | I think you are conflating two issues: while one should
         | understand who owns what and for how long, it does not follow
         | that one should always free resources even when it is not
         | necessary, if doing so adds complexity and therefore more
         | things to go wrong, or if it makes things slower than optimal.
         | 
         | In this particular case, correctness was not primarily assured
         | by a massive amount of testing (though that may have been
         | done), but by a rigorous static analysis.
        
           | anarazel wrote:
            | Freeing memory also isn't free - the necessary
            | bookkeeping, both at allocation and at free time, and
            | potentially in the allocating code itself, has costs.
            | 
            | In Postgres, memory contexts are used extensively to
            | manage allocations. And in quite a few places we
            | intentionally don't do individual frees, but reset the
            | context as a whole (freeing the memory). Obviously only
            | where the total amount of memory is limited...
        
         | samatman wrote:
         | There are a number of old-school C programs that follow a
         | familiar template: they're invoked on the command line, run
         | from top to bottom, and exit.
         | 
         | For those, it's often the case that they allocate-only, and
         | have a cleanup block for resources like file handles which must
         | be cleaned up; any error longjmps there, and it runs at the end
         | under normal circumstances.
         | 
         | This is basically using the operating system as the garbage
         | collector, and it works fine.
        
         | alerighi wrote:
         | Freeing memory (or running a garbage collector) has a cost
         | associated with it, and if you are freeing memory (or closing
         | files, sockets, etc) before exiting a program it's time wasted,
         | since the OS will free all the resources associated with the
         | program anyway.
         | 
         | And a lot of languages, and for sure newer versions of the
         | JVM, do exactly that: they don't free memory, and don't run
         | the garbage collector until the available memory gets too
         | low. And that is fine for most applications.
        
         | ptero wrote:
         | In a perfect world, yes. But in a hard real time system (and
         | much of missile control will likely be designed as such),
         | timing may be the #1 focus. That is, making sure that events
         | are handled in at most X microseconds or N CPU cycles. In such
         | cases adding GC may open a new can of worms.
         | 
         | I agree that in general leaking resources is bad, but sometimes
         | it is good enough by a large margin. Just a guess.
        
           | H8crilA wrote:
            | It would be an acceptable solution if the memory supply
            | vastly outsized the demand, by over an order of
            | magnitude. For example, if the program never needed more
            | than 100MiB and you installed 1GiB or 10GiB. 10GiB is
            | still nothing compared to the cost of the missile, and
            | you get the benefit of truly never worrying about memory
            | management latency.
           | 
           | My favorite trick to optimizing some systems is to see if I
           | can mlock() all of the data in RAM. As long as it's below
           | 1TiB it's a no brainer - 1TiB is very cheap, much cheaper
           | than engineer salaries that would otherwise be wasted on
           | optimizing some database indices.
        
             | anarazel wrote:
             | One TB of memory is actually quite expensive. And uses a
             | fair bit of power.
        
             | bathtub365 wrote:
             | What's your rationale for picking an order of magnitude
             | instead of, say, double?
        
               | lonelappde wrote:
               | Double is an order of magnitude.
        
               | ficklepickle wrote:
               | I suppose it is, in binary. Although humans generally use
               | base 10.
               | 
               | I might have to start saying "a binary order of
               | magnitude" instead of "double" when circumstances call
               | for gobbledygook.
        
             | saagarjha wrote:
             | > As long as it's below 1TiB it's a no brainer - 1TiB is
             | very cheap, much cheaper than engineer salaries that would
             | otherwise be wasted on optimizing some database indices.
             | 
             | Until you have ten thousand machines in your cluster...
        
             | bdavis__ wrote:
             | there are always constraints. other than the cost of the
             | memory, which may appear minimal, there are many others.
             | for a missile that bigger memory chip may require more
             | current, more current means a bigger power supply, or a
             | thicker wire. might add ounces to the weight. and in this
             | environment, that may be significant (probably not in this
             | specific example, but look at this perspective for every
             | part selected...they sum up)
        
       | LucaSas wrote:
       | This pops up again from time to time, I think what people should
       | take away from this is that garbage collection is not just what
       | you see in Java and other high level languages.
       | 
       | There are a lot of strategies for garbage collection, and
       | they are often used in low-level systems too, like per-frame
       | temporary arenas in games or short-lived programs that just
       | allocate and never free.
        
         | asveikau wrote:
         | Once you set a limit like this, though, it's brittle, and your
         | code becomes less maintainable or flexible in the face of
         | change. That is why a general purpose strategy is good to use.
        
       | MrBuddyCasino wrote:
       | Why go through the trouble of
       | 
       | a) calculating maximum leakage
       | 
       | b) doubling physical memory
       | 
       | instead of just fixing the leaks? Was it to save cycles? Prevent
       | memory fragmentation? I feel this story misses the details that
       | would make it more than just a cute anecdote.
        
         | qtplatypus wrote:
         | There is the CPU overhead of detecting where the memory
         | goes out of scope and freeing it. So it can be a
         | memory-vs-CPU optimisation.
        
         | mantap wrote:
         | It's possible it may have been running on bare metal without an
         | OS. Maybe they didn't want to verify a memory allocator and
         | just treated the whole program as one big arena. I presume by
         | "calculating" they meant "run the program in worst case
         | conditions and see how much memory it uses".
        
         | samatman wrote:
         | It's a missile.
         | 
         | The memory is going to fragment no matter what you do.
        
         | kelvin0 wrote:
         | I feel the same way too, dunno why the down votes? In the
         | absence of all other details it just seems like shoddy work,
         | but of course reality is probably more nuanced ... which is
         | what's missing from the story.
        
           | ratboy666 wrote:
           | I think the story isn't nuanced. The program runs once, the
           | missile explodes, garbage collection is done!
           | 
           | No need for garbage collection, no need for "memory
           | management". Not shoddy work. An expression of "YAGNI". The
           | interesting thing (in my opinion), is the realization. The
           | teller of the story went to the trouble of discovering that
            | memory was leaking. She could have simply asked before
            | starting the work.
           | 
           | FredW
        
         | ajuc wrote:
         | It was probably more efficient. Fixing leaks often requires
         | copying instead of passing pointers.
        
         | daenz wrote:
         | "just fixing the leaks" can be a very time consuming process,
         | involving hunting and refactoring (valgrind isn't perfect).
         | It's very possible that just throwing more memory at it with
         | the soft guarantee that the leak won't result in OOM may have
         | been the best business decision for that particular contract.
         | Of course it's not the "right" way to build a thing, but
         | sometimes the job wants the thing now and "good enough."
        
       | lala26in wrote:
       | One reason I open HN almost everyday is some top items
       | consistently catch my attention. They are thought provoking.
       | Today's (now) HN I see 3-4 such items. :)
        
       | kleiba wrote:
       | Seems a bit unlikely to me. Intuitively, calculating how much
       | memory a program will leak in the worst case should be at least
       | as much effort as fixing the memory leaks. And if you actually
       | _calculated_ (as in, proved) the amount of leaked memory
       | rather than just empirically measured it, there's no need to
       | install double the amount of physical memory.
       | 
       | This whole procedure appears to be a bit unbelievable. And we're
       | not even talking about code/system maintainability.
        
         | nneonneo wrote:
         | Why is it hard to calculate? Suppose I maintain lots of complex
         | calculations that require variable amounts of buffered
         | measurements (e.g. the last few seconds, the last few minutes
         | at lower resolution, some extrapolations from measurements
         | under different conditions, etc.). Freeing up the right
         | measurements might be really tricky to get right, and if you
         | free a critical measurement and need it later you're hosed.
         | 
         | On the other hand, you can trivially calculate how many
         | measurements you make per unit time, and multiply that by the
         | size of the measurements to upper-bound your storage needs.
         | Hypothetical example: you sample GPS coordinates 20 times per
         | second, which works out to ~160 bytes/sec, 10000 bytes/min, or
         | around 600KB for a full hour of flight. Easy to calculate -
         | hard to fix.
        
           | ken wrote:
           | Are you taking into account memory fragmentation? Or the
           | internal malloc data structures? If your record were just 1
           | byte more, it could easily double the total actual memory
           | usage.
           | 
           | Memory usage is discrete, not continuous. It's not as simple
           | as calculating the safety factor on a rope.
        
             | cozzyd wrote:
             | If you don't free, malloc doesn't need all that overhead
        
         | daenz wrote:
         | >Intuitively, calculating how much memory a program will leak
         | in the worst case should be at least as much effort as fixing
         | the memory leaks.
         | 
         | Why? I could calculate the average amount of leaking of a
         | program much easier than I could find all the leaks.
         | Calculating just involves performing a typical run under
         | valgrind and seeing how much was never freed. Do that N times
         | and average. Finding the leaks is much more involved.
        
         | FreeFull wrote:
         | A memory allocator without the ability to free memory is a lot
         | simpler and faster. Usually though, I'd expect to see static
         | allocation for this sort of code, I'm not sure why a missile
         | would have to allocate more memory on the fly.
        
           | StupidOne wrote:
           | Because he needed more memory mid-air? :)
           | 
            | Not sure whether the pun was intended, but you gave me a
            | good laugh.
        
         | [deleted]
        
         | barrkel wrote:
         | The control flow graph will most likely be a loop doing PID; I
         | think it could be statically analysed.
        
       | MaxBarraclough wrote:
       | On such systems the same approach can be taken for a cooling
       | solution. If the chip will fatally overheat in 60 seconds but the
       | device's lifetime is only 45, there's no need for a more
       | elaborate cooling solution.
       | 
       | The always-leak approach to memory management can also be used in
       | short-lived application code. The D compiler once used this
       | approach [0] (I'm not sure whether it still does).
       | 
       | [0] https://www.drdobbs.com/cpp/increasing-compiler-speed-by-
       | ove...
        
       | derefr wrote:
       | Erlang has a parameter called initial_heap_size. Each new actor-
       | process in Erlang gets its own isolated heap, for which it does
       | its own garbage-collection on its own execution thread. This
       | initial_heap_size parameter determines how large each newly-
       | spawned actor's heap will be.
       | 
       | Why would you tune it? Because, if you set it high enough, then
       | for all your _short-lived_ actors, memory allocation will become
       | a no-op (= bump allocation), and the actor will never experience
       | enough memory-pressure to trigger a garbage-collection pass,
       | before the actor exits and the entire process heap can be
       | deallocated as a block. The actor will just "leak" memory onto
       | its heap, and then exit, never having had to spend time
       | accounting for it.
       | 
       | This is also done in many video games, where there is a per-frame
       | temporaries heap that has its free pointer reset at the start of
       | each frame. Rather than individually garbage-collecting these
       | values, they can all just be invalidated at once at the end of
       | the frame.
       | 
       | The usual name for such "heaps you pre-allocate to a capacity
       | you've tuned to ensure you will never run out of, and then
       | deallocate as a whole later on" is a _memory arena_. See
       | https://en.wikipedia.org/wiki/Region-based_memory_management for
       | more examples of memory arenas.
        
         | monocasa wrote:
         | So I kind of disagree with the idea that arenas are all about
         | deallocation at once. There's other contexts where you have
         | separate arenas but don't plan on deallocating in blocks,
         | mainly around when you have memory with different underlying
         | semantics. "This block of memory is faster, but not coherent
         | and needs to be manually flushed for DMA", "this block of
         | memory is fastest but just not DMA capable at all", "there's
         | only 2k of this memory, but it's basically cache", "this memory
         | is large, fairly slow, but can do whatever", "this block of
         | memory is non volatile, but memory mapped", etc.
         | 
         | I'd say that arenas are kind of a superset of both what you and
         | I are talking about.
        
           | hinkley wrote:
           | I can't remember the last time I read C code, but I do recall
           | a particular time when I was reading a library that had been
           | written with a great deal of attention to reliability. The
           | first thing it did was allocate enough memory for the
           | shutdown operations. That way on a malloc() failure, it could
           | still do a a completely orderly shutdown. Or never start in
           | the first place.
           | 
           | From that standpoint, you could also categorize arenas on a
           | priority basis. This one is for recovery operations, this one
           | for normal operation, and whatever is left for low priority
           | tasks.
        
           | MaxBarraclough wrote:
           | Those aren't arenas. I'm inclined to agree with Wikipedia's
           | definition, which does emphasise deallocation all at once:
           | 
           | > A region, also called a zone, arena, area, or memory
           | context, is a collection of allocated objects that can be
           | efficiently deallocated all at once.
        
             | monocasa wrote:
             | I mean, wiki uses zone and region here as synonyms, so
             | according to wiki that definition applies just as much. And
             | yet:
             | 
             | https://www.kernel.org/doc/gorman/html/understand/understan
             | d...
             | 
             | Like, as a embedded developer, these concepts are used
             | pretty much every day. And in a good chunk of those,
             | deallocation isn't allowed at all, so you can't say that
             | the definition is around deallocation at once.
             | 
             | You can also see how glibc's malloc internally creates
             | arenas, but that's not to deallocate at once, but instead
             | to manage different locking semantics.
             | https://sourceware.org/glibc/wiki/MallocInternals
        
         | saagarjha wrote:
         | Note that most general-purpose allocators also keep around
         | internal arenas from which they hand out memory.
        
           | catblast wrote:
           | Not sure how this is related. A general purpose allocator
           | with a plain malloc interface can't use this to do anything
           | useful wrt lifetime because there is no correlation to
           | lifetime provided by the interface. Internal arenas can be
           | useful to address contention and fragmentation.
        
             | saagarjha wrote:
             | I'm pointing out that an arena is more about "a region of
             | memory that you can split up to use later" than "a region
             | of memory that must be allocated and deallocated all at
             | once".
        
               | asdfasgasdgasdg wrote:
               | I've never heard that use of the term "arena". Are you
               | thinking of slabs? Arenas are typically allocated and
               | deallocated at once. That's their main feature.
        
               | saagarjha wrote:
               | https://sourceware.org/glibc/wiki/MallocInternals#Arenas_
               | and...
        
               | asdfasgasdgasdg wrote:
                | Seems like a bit of an idiosyncratic use of the word. In
               | tcmalloc these per thread zones are just called the
               | thread cache (hence the name "thread caching malloc").
        
         | needusername wrote:
         | > memory allocation will become a no-op (= bump allocation)
         | 
         | No, that's a cache miss.
        
           | Tuna-Fish wrote:
            | No, as memory is allocated linearly the CPU prefetchers will
           | most likely keep the heap in cache.
        
             | twic wrote:
             | Speaking of which:
             | 
             | https://www.opsian.com/blog/jvms-allocateprefetch-options/
        
         | dahart wrote:
         | The games and GPU apps I've worked on use memory pools for
         | small allocations, where there will be individual pools for
         | all, say, 1-16 byte allocations, 16-64 byte allocations, 64-256
         | byte allocations, etc. (Sizes just for illustration, not
         | necessarily realistic). The pool sizes always get tuned over
         | time to match the approximate high water mark of the
         | application.
         | 
         | I think pools and arenas mean pretty much the same thing.
         | https://en.wikipedia.org/wiki/Memory_pool I've mostly heard
         | this discussed in terms of pools, but I wonder if there's a
         | subtle difference, or if there's a historical reason arena is
         | popular in some circles and pool in others...?
         | 
         | I haven't personally seen a per-frame heap while working on
         | console games, even though games I've worked on probably had
         | one, or something like it. Techniques I did see, and that
         | are super-common: fixed maximum-size allocations, where you
         | just pre-allocate all the memory you'll ever need for some
         | feature and never let it go; stack allocations, sometimes
         | with alloca(); and helper functions/classes that put
         | something on the stack for the lifetime of a particular
         | scope.
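A minimal sketch of the fixed-size pool scheme described above (block sizes, counts, and names are illustrative, not from any shipped engine). Every block comes from one statically sized slab, and freed blocks are threaded onto an intrusive free list, so both allocation and deallocation are O(1) and the pool cannot fragment:

```c
#include <stddef.h>

#define BLOCK_SIZE  64   /* one size class; real systems keep several */
#define BLOCK_COUNT 256  /* tuned toward the app's high-water mark */

typedef union block {
    union block *next;             /* valid only while the block is free */
    unsigned char bytes[BLOCK_SIZE];
} block_t;

static block_t slab[BLOCK_COUNT];  /* the whole budget, allocated up front */
static block_t *free_list;

void pool_init(void) {
    free_list = NULL;
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        slab[i].next = free_list;  /* push every block onto the free list */
        free_list = &slab[i];
    }
}

void *pool_alloc(void) {
    if (!free_list)
        return NULL;               /* budget exceeded: re-tune the pool */
    block_t *b = free_list;
    free_list = b->next;
    return b;
}

void pool_free(void *p) {
    block_t *b = p;
    b->next = free_list;           /* O(1): push back onto the free list */
    free_list = b;
}
```

A full allocator in this style would keep one such pool per size class (say 16, 64, 256 bytes) and round each request up to the nearest class.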
        
           | monocasa wrote:
           | I've seen Jason Gregory talk about per frame arenas in Game
           | Engine Architecture as a fundamental piece of how the Naughty
           | Dog engines tend to work.
           | 
           | Totally agreed that they aren't required for shipping great
           | console games (and they're really hard to use effectively in
           | C++ since you're pretty much guaranteed to have hanging
           | references if you don't have ascetic levels of discipline).
           | This is mainly just meant as a "here's an example of how they
           | can be used and are by at least one shop".
        
           | twoodfin wrote:
           | I've always understood an arena to use a bump pointer for
           | allocation and to support only a global deallocation, as the
           | GP describes.
           | 
           | A pool or slab allocator separates allocations into one of a
           | range of fixed-size chunks to avoid fragmentation. Such
           | allocators do support object-by-object deallocation.
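A minimal sketch of the arena as defined in the parent comment: allocation is just a pointer bump, and the only "free" is resetting the whole arena, e.g. once per frame. The struct name and the 16-byte alignment are illustrative choices:

```c
#include <stddef.h>

typedef struct {
    unsigned char *base;  /* backing buffer, typically statically allocated */
    size_t capacity;
    size_t offset;        /* the bump pointer */
} arena_t;

void *arena_alloc(arena_t *a, size_t size) {
    size = (size + 15) & ~(size_t)15;     /* round up to 16-byte alignment */
    if (size > a->capacity - a->offset)
        return NULL;                      /* arena exhausted */
    void *p = a->base + a->offset;
    a->offset += size;                    /* the entire cost of allocation */
    return p;
}

void arena_reset(arena_t *a) {
    a->offset = 0;  /* everything allocated since the last reset dies at once */
}
```

The hazard monocasa mentions is visible here: nothing stops a pointer returned by arena_alloc from being kept across arena_reset, at which point it silently aliases new allocations.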
        
       | jakeinspace wrote:
       | As somebody working on embedded software for aerospace, I'm
       | surprised this missile system even had dynamic memory allocation.
       | My entire organization keeps flight-critical code fully
       | statically allocated.
        
         | onceUponADime wrote:
          | DO 187A?
        
           | p_l wrote:
           | DO-178C, actually, but good pointer.
        
         | dahart wrote:
         | Note the story isn't detailed enough to know whether they were
         | using what we'd normally call dynamic memory allocation. The
         | embedded system might not have had a memory manager. Or they
         | might have been, like you, fully statically allocating the
         | memory. Kent could be noting that they'll run off the end of
         | their statically allocated memory, or run out of address space,
         | because the code isn't checking the bounds and may be doing
         | something like appending history or sensor data to an array.
         | I have no idea, obviously; I'm just imagining multiple ways
         | Kent's very brief description could be interpreted. It maybe
         | shouldn't be assumed that the engineers were doing something
         | stupid, or even very different from what we'd do today.
        
         | 2OEH8eoCRo0 wrote:
         | This right here. My previous job was in defense and although it
         | was not an embedded project all the software architects on the
         | project were embedded guys used to doing things their way.
         | Dynamic allocation was strictly forbidden.
        
         | bootloop wrote:
         | I would imagine it might make sense if you offload some short,
         | less frequent but memory intensive sub-routines (sensors,
         | navigation) to run in parallel to the rest of the system. But I
         | would still avoid having a system wide dynamic memory
         | management and just implement one specifically for that part.
        
           | Dylan16807 wrote:
           | Whichever ones you allow to run in parallel need to have
           | enough memory to run at the same time, but such a situation
           | might happen quite rarely.
           | 
           | In other words, that sounds like a system where dynamic
           | memory management is significantly riskier and harder to test
           | than usual!
           | 
           | Why not static allocation, but sharing memory between the
           | greedy chunks of code that can't run parallel to each other?
           | (I assume these chunks exist, because otherwise your worst-
           | case analysis for dynamic memory would be exactly the same as
           | for static, and it wouldn't save you anything.)
        
           | bdavis__ wrote:
            | When you design the system, you make sure there is enough
            | physical RAM to do the job. Period. The problem space is
            | bounded.
        
         | giu wrote:
         | I'm always fascinated about software running on hardware-
         | restricted systems like planes, space shuttles, and so on.
         | 
         | Where can someone (i.e., in my case a software engineer who's
         | working with Kotlin but has used C++ in his past) read more
         | about modern approaches to writing embedded software for such
         | systems?
         | 
         | I'm asking for one because I'm curious by nature and
         | additionally because I simply take the garbage collector for
         | granted nowadays.
         | 
         | Thanks in advance for any pointers (no pun intended)!
        
           | saagarjha wrote:
           | Searching for things like "MISRA C" and "real-time computing"
           | will help you get started.
        
             | jakeinspace wrote:
             | My older co-workers have some great alternative definitions
             | of that initialism.
        
             | giu wrote:
             | Thanks a lot for the keywords; these are very good starting
             | points to look for further stuff on the topic!
             | 
              | Didn't know that there was a term (i.e., real-time
              | computing) for these kinds of systems and constraints.
        
               | monocasa wrote:
                | I'd also look at the Joint Strike Fighter C++ Coding
                | Standard. Stroustrup himself hosts it as an example of
                | how C++ is a multi-paradigm language that you can use
                | a subset of to meet your engineering needs.
               | 
               | http://www.stroustrup.com/JSF-AV-rules.pdf
        
           | 0xffff2 wrote:
            | The embedded world is _very_ slow to change, so you can
            | read about "modern approaches" (i.e. approaches used
            | today) in any book about embedded programming written in
            | the last 30 years.
           | 
           | I currently work on spacecraft flight software and the only
           | real advance on this project over something like the space
           | shuttle that I can point to is that we're trying out some
           | continuous integration on this project. We would like to use
           | a lot of modern C++ features, but the compiler for our flight
           | hardware platform is GCC 4.1 (upgrading to GCC 4.3 soon if
           | we're lucky).
        
             | bargle0 wrote:
             | How do you do CI/CD for embedded systems?
        
               | hyldmo wrote:
               | I can't speak for what the rest of the industry does, but
               | some chip manufacturers provide decent emulators, so you
               | can run some tests there. We have also done some hardware
               | tests where we connect our hardware to a raspberry pi or
               | similar and run our CI there. It doesn't replace real-
               | world testing, but it does get us some of the way there.
        
               | [deleted]
        
               | jakeinspace wrote:
               | CI during the first phases of development in my
               | experience is now often done with modern tooling (gitlab
               | CI, Jenkins), compiling and running tests on a regular
               | Linux x86 build server. Later phases switch over to some
               | sort of emulated test harness, with interrupts coming
               | from simulated flight hardware. Obviously the further
               | along in the development process, the more expensive and
               | slow it is to run tests. Maybe some software groups
               | (SpaceX?) have a process that allows for tight test loops
               | all the way to actual hardware in the loop tests.
        
             | rowanG077 wrote:
              | I find it interesting that such critical code is written
              | in C. Why not use something with a lot more (easily)
              | statically provable properties, like Rust or Agda?
        
               | MiroF wrote:
               | Because safety critical fields are also slow-moving.
        
               | retrac wrote:
               | I think the answer was right there in their comment. "The
               | compiler for our flight hardware platform is GCC 4.1
               | (upgrading to GCC 4.3 soon if we're lucky)".
               | 
               | Often, the only high-level language available for an
               | embedded platform is a standard C compiler. If you're
               | lucky.
        
               | NobodyNada wrote:
               | Using a newer language carries a lot of risks and
               | challenges for embedded programs:
               | 
               | - There's a high risk of bugs in the compiler/standard
               | library in languages with lots of features
               | 
               | - Usually, the manufacturer of an embedded platform
               | provides a C compiler. Porting a new compiler can be a
               | LOT of work, and the resulting port can often be very
               | buggy
               | 
               | - Even if you can get a compiler to work, many newer
               | languages rely on a complicated runtime/standard library,
               | which is a deal-breaker when your complete program has to
               | fit in a few kilobytes of ROM
        
               | hechang1997 wrote:
                | Isn't Ada already used in the aerospace industry?
        
               | bdavis__ wrote:
               | Ada is used a fair amount in high $ projects. Toolchains
               | are expensive, and the C compiler is provided for free
               | from the chip / board vendor.
        
               | amw-zero wrote:
               | You'll find that for very serious, industrial
               | applications, a conservative mindset prevails. C may not
               | be trendy at the moment, but it powers the computing
                | world. Its shortcomings are extremely well known and
                | statically analyzable.
               | 
               | Also, think about when flight software started being
               | written. Was Rust an option? And once it came out, do you
               | expect that programmers who are responsible for millions
               | of people's lives to drop their decades of tested code
               | and development practices to make what is a bet on what
               | is still a new language?
               | 
               | What I find interesting is this mindset. My
               | conservativeness on a project is directly proportional to
               | its importance / criticality, and I can't think of
               | anything more important or critical than software that
               | runs on a commercial airplane. C is a small, very well
               | understood language. Of course it gives you nothing in
               | terms of automatic memory safety, but that is one
               | tradeoff in the list of hundreds of other dimensions.
               | 
               | When building "important" things it's important to think
               | about tradeoffs, identify your biases, and make a choice
               | that's best for the project and the people that the
               | choice will affect. If you told me that the moment anyone
               | dies as a result of my software I would have to be
               | killed, I would make sure to use the most tried-and-true
               | tools available to me.
        
               | AtlasBarfed wrote:
                | You're advocating throwing the baby out with the
                | bathwater.
               | 
               | Rust interops with C seamlessly, doesn't it? You don't
               | have to throw out good code to use a better language or
               | framework.
               | 
               | C may be statically analyzable to some degree, but if
               | Rust's multithreading is truly provable, then new code
               | can be Rust and of course still use the tried and true C
               | libraries.
               | 
               | Disclaimer: I still haven't actually learned any Rust, so
               | my logic is CIO-level of potential ignorance.
        
               | wallacoloo wrote:
               | > Rust interops with C seamlessly, doesn't it?
               | 
               | From someone who works in a mixed C + Rust codebase daily
               | (Something like 2-3M lines of C and 100k lines of Rust),
               | yes and no. They're pretty much ABI compatible, so it's
               | trivial to make calls across the FFI boundary. But each
                | language has its own set of guarantees it provides
                | _and assumes_, so it's easy to violate one of those
                | guarantees when crossing an FFI boundary and trigger
                | UB which can stay hidden for months.
               | 
               | One of them is mutability: in C we have some objects
               | which are internally synchronized. If you call an
               | operation on them, either it operates atomically, or it
               | takes a lock, does the operation, and then releases the
               | lock. In Rust, this is termed "interior mutability" and
               | as such these operations would take non-mutable
               | references. But when you actually try that, and make a
               | non-mutable variable in Rust which holds onto this C
               | type, and start calling C methods on it, you run into UB
               | even though it seems like you're using the "right"
               | mutability concepts in each language. On the rust side,
               | you need to encase the C struct inside of a UnsafeCell
               | before calling any methods on it, which becomes not
               | really possible if that synchronized C struct is a member
               | of another C struct. [1]
               | 
               | Another one, although it depends on how exactly you've
               | chosen to implement slices in C since they aren't native:
               | in our C code we pass around buffer slices as (pointer,
               | len) pairs. That looks just like a &[T] slice to Rust. So
               | we convert those types when we cross the FFI boundary.
               | Only, they offer different guarantees: on the C side, the
               | guarantee is generally that it's safe to dereference
               | anything within bounds of the slice. On the rust side,
               | it's that, _plus_ the pointer must point to a valid
               | region of memory (non-null) even if the slice is empty.
                | It's just similar enough that it's easy to overlook and
               | trigger UB by creating an invalid Rust slice from a
               | (NULL, 0) slice in C (which might be more common than you
               | think because so many things are default-initialized. a
               | vector type which isn't populated with data might
               | naturally have cap=0, size=0, buf=NULL).
               | 
               | So yeah, in theory C + Rust get along well and in
               | practice you're good 99+% of the time. But there are
               | enough subtleties that if you're working on something
               | mission critical you gotta be real careful when mixing
               | the languages.
               | 
               | [1] https://www.reddit.com/r/rust/comments/f3ekb8/some_nu
               | ances_o...
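As I understand the documentation for Rust's slice::from_raw_parts, the pointer must be non-null and aligned even when the length is zero, which is why Rust's own empty collections use a dangling but non-null pointer. One defensive option on the C side (a hypothetical helper, not from the codebase described above) is to normalize (NULL, 0) pairs before they cross the boundary:

```c
#include <stddef.h>

typedef struct {
    const unsigned char *ptr;
    size_t len;
} byte_slice;

/* Give empty slices a valid, aligned, non-null address. The anchor
 * byte is never dereferenced; it only exists so the pointer half of
 * an empty slice is well-defined when Rust inspects it. */
byte_slice normalize_slice(const unsigned char *ptr, size_t len) {
    static const unsigned char empty_anchor = 0;
    byte_slice s;
    s.ptr = (len == 0 && ptr == NULL) ? &empty_anchor : ptr;
    s.len = len;
    return s;
}
```

This mirrors the default-initialized vector case mentioned above (cap=0, size=0, buf=NULL): after normalization the pair is safe to hand to either language's slice convention.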
        
               | a1369209993 wrote:
               | > On the rust side, it's that, _plus_ the pointer must
               | point to a valid region of memory (non-null) even if the
               | slice is empty.
               | 
               | Do you have a citation for that, because it _seems_
                | obviously wrong[0] and I'm having trouble coming up with
               | any situation that would justify it (except possibly
               | using a NULL pointer to indicate the Nothing case of a
               | Maybe<Slice> datum?).
               | 
               | 0: by which I mean that Rust is wrong to require that,
               | not that you're wrong about what Rust requires.
        
               | jschwartzi wrote:
               | The issue is that you're trading a problem space that is
               | very well understood for one that isn't. Making a safe
               | program in C is all about being explicit about resource
               | allocation and controlling resources. So we tend to
               | require that habit in development. It's socialized. The
               | only thing you'd be doing is using technology to replace
               | the socialization. And you'd be adding new problems from
               | Rust that don't exist in the C world.
               | 
               | It's tempting in a lot of cases to read the data sheet
               | and determine that the product is good enough. But there
               | are a lot of engineering and organizational challenges
               | that aren't written in the marketing documents.
               | 
               | Those challenges have to be searched for and social and
               | technological tools must be developed to solve those
               | challenges.
               | 
               | As an exercise in use of technology it looks easy but
               | there's an entire human and organizational side to it
               | that gets lost in discussions on HN.
        
               | NextHendrix wrote:
               | Wanting to suddenly start using rust would mean putting
               | any and all tools through a tool qualification process,
               | which is incredibly time consuming and vastly expensive.
               | In the field of safety critical software, fancy new
               | languages are totally ignored for, at least partially,
               | this reason. What's really safer, a new language that
               | claims to be "safe" or a language with a formally
               | verified compiler and toolchain where all of your
               | developers have decades of experience with it and lots of
               | library code that has been put through stringent
               | independent verification and validation procedures, with
               | proven track record in multiple other safety critical
               | projects?
        
               | prostheticvamp wrote:
               | > Disclaimer: I still haven't actually learned any Rust,
               | so my logic is CIO-level of potential ignorance
               | 
               | And yet you seem to write with such confidence. /Are/ you
               | a CIO? It's the only thing that makes sense.
        
               | UncleMeat wrote:
               | Rust/C interop still has major challenges. It isn't
               | seamless.
        
               | dodobirdlord wrote:
               | > Also, think about when flight software started being
               | written. Was Rust an option?
               | 
               | It wasn't, but Ada probably was (some flight software may
               | have been written before 1980?), and would likely also be
               | a much better choice.
        
               | spencerwgreene wrote:
               | > I can't think of anything more important or critical
               | than software that runs on a commercial airplane.
               | 
               | Nuclear reactors?
        
               | samatman wrote:
               | Arguably, the existence of nuclear reactors which don't
               | fail safe under any contemplated crisis is a hardware
               | bug. It's possible to design a reactor that can be
               | ruptured by a bomb or earthquake, which will then dump
               | core into a prepared area and cool down.
               | 
               | This kind of physics-based safety is obviously not
               | possible for airplanes.
        
             | AlotOfReading wrote:
             | Having worked on embedded systems for a decade at this
             | point, the fact that we allow vendors to get away with
             | providing ancient compilers and runtimes is shameful. We
             | _know_ that these old toolchains have thousands of
             | documented bugs, many critical. We _know_ how to produce
              | code with better verification, but just don't push for the
             | tools to do it.
        
               | Baeocystin wrote:
               | Isn't the key part that these older systems have
               | _documented_ bugs?
               | 
               | Or, to put it another way, if there's a wasp in the room
               | (and there always is), I'd want to know where it is.
        
               | AlotOfReading wrote:
               | That doesn't end up being the case for a number of
                | reasons. Firstly, no one is actually able to account for
               | all of these known issues a priori. I don't like calling
               | things impossible, but writing safe C that avoids any
               | compiler bugs is probably best labeled as that.
               | 
               | Secondly, vendors make modifications during their release
               | process, which introduces new (and fun!) bugs. You're not
               | really avoiding hidden wasps, just labeling some of them.
               | If you simply moved to a newer compiler, you wouldn't
               | have to avoid them, they'd mostly be gone (or at worst,
               | labeled).
        
               | Baeocystin wrote:
               | Are the newer compilers truly that much better? I've been
               | working in tech since the 90's, and I can't say that for
               | the tools I've used I've noticed any great improvement in
               | overall quality. I am assuming that many optimizations
               | are turned off regardless, due to wanting to keep the
               | resulting assembly as predictable as possible, but I do
               | not work in the embedded space, so this is perhaps a
               | naive question.
        
             | harryf wrote:
             | I wonder if the same is true of Space X?
        
               | monocasa wrote:
               | Yeah. AFAIK they use FreeRTOS for the real deeply
               | embedded stuff which would look very familiar to this
               | discussion.
        
           | enriquto wrote:
           | > Where can someone (i.e., in my case a software engineer
           | who's working with Kotlin but has used C++ in his past) read
           | more about modern approaches to writing embedded software for
           | such systems?
           | 
           | The JPL coding guidelines for C [1] are an amusing, first-
           | hand read about this stuff. Not sure if you would qualify
           | them as "modern approaches".
           | 
           | [1] https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_
           | Dev...
        
             | Cyph0n wrote:
             | > A minimum of two runtime assertions per function.
             | 
             | I am guessing the idea is to catch runtime errors in the
             | test phase, and assertions are disabled for the production
             | build.
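A toy illustration of the "minimum of two runtime assertions per function" rule, under one common reading: one assertion on the precondition, one on the postcondition (the function, its range limits, and the clamping behavior are all made up for the example). The Power of 10 rules also require that assertions be free of side effects:

```c
#include <assert.h>

/* Hypothetical sensor post-processing step. */
int clamp_reading(int raw) {
    assert(raw >= -4096 && raw <= 4096);  /* precondition: raw sensor range */
    int out = (raw < 0) ? 0 : raw;        /* negative readings are noise */
    assert(out >= 0 && out <= 4096);      /* postcondition: clamped result */
    return out;
}
```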
        
             | ibrault wrote:
             | I can testify first-hand that the "functions in a single
             | page" and "avoid the pre-processor" rules are not followed
             | very closely haha
        
           | diego wrote:
           | If you want a "toy" example of this type of code, look at
           | flight control software for drones such as Betaflight. You
           | can modify this code and test it in real life. I did this, as
           | I contributed the GPS Rescue feature. I have a blooper reel
           | of failures during testing.
           | 
           | https://github.com/betaflight/betaflight/
        
           | amelius wrote:
           | Just read docs that were written in the 70s, before the
           | advent of garbage collection.
        
             | p_l wrote:
             | Garbage Collection is from 1959, though - and Unix & C's
             | original model pretty much matches "bump allocate then die"
             | with sbrk/brk and lack of support for moving.
             | 
             | Fully static allocation is the norm though for most "small"
             | embedded work.
        
       | b34r wrote:
       | I like the pragmatism. One thing that comes to mind, though,
       | is that stuff often gets repurposed for unintended use cases.
       | As long as these caveats are well documented it's OK, but
       | imagine if they were hidden and the missiles were used in
       | space, or perhaps as static warheads on a long timer.
        
       ___________________________________________________________________
       (page generated 2020-02-22 23:00 UTC)