= Trust and software
:Author: Seth Kenlon
:Email: seth@opensource.com

An often-praised virtue of open source software is that its code can be reviewed (or "audited", as security professionals like to say) by anyone and everyone. However, if you actually ask open source users when they last reviewed code, you might get answers ranging from a blank stare to an embarrassed murmur. Besides, some open source applications are very large, so it can be difficult to effectively review every single line of code. Extrapolating from these slightly uncomfortable truths, you have to wonder: when nobody looks at the code, does it really matter whether it's open or not?

== Should you trust open source?

There's a trite assumption we tend to make in hobbyist computing that open source is "more secure" than anything else. We don't often talk about what that means, what the basis of comparison is ("more" secure than what?), or how that conclusion was even reached. It's a dangerous statement to make, because it implies that as long as you call something "open source", it automatically and magically inherits enhanced security. That's not what open source is about; in fact, it's what open source security is very much against.

You should never assume an application is secure unless you have personally audited and understood its code. Once you have done this, you can assign _ultimate trust_ to that application. Ultimate trust isn't something you do in a computer; it's something you do in your own mind: you trust software because you choose to believe that it is secure, at least until someone finds a way to exploit it. You're the only person who can place ultimate trust in that code, so every user who wants the luxury of ultimate trust must audit the code for themselves. Taking someone else's word for it doesn't count!
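What does a first pass at an audit even look like in practice? As a minimal, purely illustrative sketch (the `project/` tree and the sample file are stand-ins, not a real code base), you can start by searching a source tree for calls that frequently show up in vulnerability reports:

```shell
# Create a stand-in "project" tree; in practice you would clone a real repo.
mkdir -p project
cat > project/demo.c <<'EOF'
#include <string.h>

void copy(char *dst, const char *src) {
    strcpy(dst, src);  /* no bounds check -- worth a closer look */
}
EOF

# Flag C library calls that commonly appear in vulnerability reports.
grep -rn --include='*.c' -E 'strcpy|strcat|gets|sprintf' project/
```

A match doesn't prove anything is exploitable; it just shows that "auditing" can begin with ordinary tools and targeted questions rather than reading every line from top to bottom.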
Until you have audited and understood a code base for yourself, the maximum trust level you can give to an application falls on a spectrum ranging from, approximately, _not trustworthy at all_ to _pretty trustworthy_. There's no cheat sheet for this. It's a personal choice you must make for yourself. If you've heard from people you strongly trust that an application is secure, then you might trust that software more than you trust something for which you've gotten no trusted recommendations. And because you cannot audit proprietary (non-open source) code, you can never assign it _ultimate trust_.

== Linus's Law

The reality is, not everyone's a programmer, and not everyone who is a programmer has the time to dedicate to reviewing hundreds and hundreds of lines of code. So if you're not going to audit code yourself, then you must choose to trust (to some degree) the people who _do_ audit code.

So exactly who does audit code, anyway? Linus's Law asserts that _given enough eyeballs, all bugs are shallow_, but we don't really know how many eyeballs are "enough". Don't underestimate the number, though. Software is very often reviewed by more people than you might imagine. The original developer or developers obviously know the code they've written, but open source is often a group effort, so the longer code stays open, the more software developers end up seeing it. A developer must review major portions of a project's code, because they have to learn a code base in order to write new features for it.

Open source packagers also get involved with many projects in order to make them available to a Linux distribution. Sometimes an application can be packaged with almost no familiarity with the code, but often a packager becomes familiar with a project's code, both because they don't want to sign off on software they don't trust and because they may have to make modifications to get it to compile correctly.
Bug reporters and triagers also sometimes get familiar with a code base as they try to solve anomalies ranging from quirks to major crashes. Of course, some bug reporters inadvertently reveal vulnerabilities in code not by reviewing it themselves but by bringing attention to something that obviously doesn't work as intended. Sysadmins frequently get intimately familiar with the code of important software their users rely upon. Finally, there are security researchers who dig into code exclusively to uncover potential exploits.

== Trust and transparency

Some people assume that because major software is composed of hundreds of thousands of lines of code, it's basically impossible to audit. Don't be fooled by how much code it takes to make an application run, though. You don't actually have to read every line. Code is highly structured, and exploitable flaws are rarely just a single line hidden among the rest; there are usually whole functions involved.

There are exceptions, of course. Sometimes a serious vulnerability is enabled with just one system call, or by linking to one flawed library. Luckily, those kinds of errors are relatively easy to notice, thanks to the active role of security researchers and vulnerability databases.

Some people point to bug trackers, such as the https://cve.mitre.org[Common Vulnerabilities and Exposures (CVE)] website, and deduce that it's actually as plain as day that open source isn't secure. After all, there are hundreds of security risks filed against lots of open source projects, out in the open for everyone to see. Don't let that fool you, though. Just because you don't get to see the flaws in closed software doesn't mean those flaws don't exist. In fact, we know that they do, because exploits are filed against closed software, too. The difference is that _all_ exploits against open source applications are available for developers (and users) to see, so those flaws can be mitigated.
That's part of the system that boosts trust in open source, and it's wholly missing from proprietary software. There may never be "enough" eyeballs on any code, but the stronger and more diverse the community around the code, the better the chance of uncovering and fixing weaknesses.

== Trust and people

In open source, the probability that many developers, each working on the same project, have noticed something _not secure_ but have all remained equally silent about that flaw is considered to be low, because humans rarely mutually agree to conspire in this way. We've seen how disjointed human behaviour can be recently with COVID-19 mitigation: we've all identified a flaw (a virus), we know how to prevent it from spreading (stay home), and yet the virus continues to spread because one or more people deviate from the mitigation plan. The same is true for bugs in software. If there's a flaw, someone is going to bring it to light (provided, of course, that someone notices it).

With proprietary software, however, there can be a high probability that many developers working on a project notice something not secure but remain equally silent, because the proprietary model relies on paychecks. If a developer speaks out against a flaw, then that developer may at best hurt the reputation of the software, thereby decreasing sales, or at worst may be fired from their job. Developers being paid to work on software in secret do not tend to talk about its flaws. If you've ever worked as a developer, you've probably signed an NDA and been lectured on the importance of trade secrets. Proprietary software encourages, and more often enforces, silence even in the face of serious flaws.

== Trust and software

Don't trust software you haven't audited. If you must trust software you haven't audited, then choose to trust code that's exposed to many developers, each of whom is independently likely to speak up about a vulnerability.
Open source isn't inherently more secure than proprietary software, but the systems in place to find and fix its flaws are far better planned, implemented, and staffed.