RISKS-LIST: RISKS-FORUM Digest Sunday, 6 December 1987 Volume 5 : Issue 70 FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator Contents: Wall Street crash, computers, and SDI (Rodney Hoffman) NW Flight 255 -- Simulator did, but wasn't (Scot E. Wilcoxon) Whistle-blowers who aren't (Henry Spencer) Re: Space Shuttle Whistle-Blowers Sound Alarm Again (Henry Spencer) A new twist to password insecurity (Roy Smith) More on PIN encoding (Chris Maltby) Telephone overload (Stephen Grove) Software licensing problems (Geof Cooper) Re: Mariner 1 or Apollo 11? (Henry Spencer, Brent Chapman) More on addressable converter box (Allan Pratt) Centralized car locks (K. Richard Magill) The RISKS Forum is moderated. Contributions should be relevant, sound, in good taste, objective, coherent, concise, nonrepetitious. Diversity is welcome. Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM. For Vol i issue j, FTP SRI.COM, CD STRIPE:, GET RISKS-i.j. Volume summaries for each i in max j: (i,j) = (1,46),(2,57),(3,92),(4,97). ---------------------------------------------------------------------- Date: 6 Dec 87 12:29:47 PST (Sunday) Subject: Wall Street crash, computers, and SDI To: RISKS@csl.sri.com From: Rodney Hoffman From the 'Letters' column in the 'Wall Street Journal', Thursday, Dec. 3, 1987: SDI COULD BE TOO SWIFT The role of computers in the recent wide fluctuation of stock prices brings to focus an interesting issue that has wider implications. The initial downward trend in the market was greatly amplified by rapid computer-initiated program trading. From our long experience in computer science and dynamical systems analysis, we fear that this dangerous amplification effect could occur in other more critical computational networks, most notably those envisioned for the strategic defense initiative (SDI). 
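The amplification effect the authors describe can be illustrated with a toy model (a hypothetical sketch, not taken from the letter): a disturbance fed back each round with gain above 1 grows geometrically, while a damped loop with gain below 1 dies out.

```python
# A toy model (hypothetical, not from the letter) of the feedback
# amplification the authors warn about.  Each round, one side's alert
# level feeds back into the other's with a fixed gain.  A fast automated
# loop with gain > 1 blows a tiny disturbance up geometrically; damping
# (standing in for slower human review) with gain < 1 lets it die out.

def escalation(gain, steps=20, disturbance=0.01):
    """Alert level after `steps` rounds of mutual feedback."""
    level = disturbance
    for _ in range(steps):
        level *= gain  # each side reacts to the other's last reaction
    return level

print(escalation(gain=1.5))  # undamped: the 1% disturbance grows past 33
print(escalation(gain=0.8))  # damped: the same disturbance decays toward 0
```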
SDI planners have proposed giving computers a key role in the decision about retaliatory measures in the event of attack. Others have argued persuasively that a computer program that reflects our policy cannot be reliably constructed or completely debugged. But setting this aside, we claim that there would be great danger in the very speed of such programs, unmodulated by slower human interactions that provide effective damping. It is known that nonlinear systems (such as the ones composing computer networks) can amplify very small disturbances. As illustrated by the program trading example cited, this can cause massive changes in overall system behavior unplanned for by system designers. The programs in the defense network for SDI must process incoming signals for possible threats, and act rapidly in accord with the resulting analysis. Any overt action by the SDI system can lead to a rise in the readiness of the opposite side; blasts in space can be interpreted by Russian programs as attacks on spy satellites, a preparatory move for a U.S. first strike. This automatic feedback loop, through the Russian and American computers and sensors, can easily amplify the intensity of a dangerous situation to the point of nuclear catastrophe. In order to dampen such an inadvertent escalation, humans must be involved in the response process, even at intermediate stages. They must have time enough to think and communicate to avoid the nonlinear amplification effects. In the case of the envisioned system for SDI, we believe that there is no effective way whereby people can modulate the behavior of a computer system while retaining the hoped-for rapid response. Daniel G. Bobrow Bernardo A. Huberman Palo Alto, Calif. ------------------------------ To: RISKS@csl.sri.com Date: Thu, 3 Dec 87 13:36:36 CST Subject: (NW Flight 255) Simulator did, but wasn't. Cc: sewilco@cs-gw.D.UMN.EDU From: umn-cs!datapg.MN.ORG!sewilco@cs-gw.D.UMN.EDU (Scot E.
Wilcoxon) The random failure of a $13 circuit breaker may have contributed to an airplane crash. Also, an indicator in the simulator for the aircraft behaves differently from the real aircraft when that failure occurs. Northwest Flight 255 apparently crashed in Detroit three months ago because the flaps were not lowered during takeoff. The MD80's takeoff warning system should have warned the pilot, but its audio warning "flaps, flaps" was not on the cockpit recording, so the warning system probably failed. NTSB examination of a $13 circuit breaker that supplies power to the warning system has found several planes with breakers that would not pass the electrical current that they should. A McDonnell Douglas document states that when power fails to the warning system, a warning light (CAWS fail light) should go on in the cockpit. That is what happens in the MD80 simulator, but in an actual MD80 aircraft the warning light does not go on. A McDonnell Douglas official said the document is "clearly in error". (quoting StarTribune:) "This revelation has put the FAA in the awkward position of ordering changes that will make the simulators behave the same way as the airplane instead of making the airplane behave like the simulator." The simulator will be altered instead of airplanes because the warning system is classified by the FAA as not requiring backup systems. The warning system is required (FAA won't allow takeoff if it is not working), but is considered "nonessential" for manufacturing purposes (FAA does not require backup systems for nonessential systems). (Information from 11/28/87 Minneapolis Star Tribune, pg 1,4D) Scot E.
Wilcoxon sewilco@DataPg.MN.ORG {ems,meccts}!datapg!sewilco Data Progress Minneapolis, MN, USA +1 612-825-2607 ------------------------------ Date: Wed, 2 Dec 87 15:47:52 EST From: mnetor!utzoo!henry@uunet.UU.NET To: RISKS@kl.sri.com Subject: Whistle-blowers who aren't > Maxson will share the stage with former Morton Thiokol engineer Roger > Boisjoly, who currently has a billion-dollar suit underway... Maybe I am just being picky about this, but it still makes me see red when I see Boisjoly described as a "whistle-blower". Boisjoly is the man who could have blown the whistle BUT DIDN'T, and seven astronauts died as a result. Boisjoly was the engineer who told MT management "don't launch", was told "put on your management hat", did so, and changed his expert professional opinion 180 degrees to match his hat color. In a just world, I cannot help but think that he (and, certainly, his management) would be facing criminal charges. Boisjoly did not blow the whistle; he merely turned "state's evidence" after the fact. Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,decvax,pyramid}!utzoo!henry ------------------------------ Date: Wed, 2 Dec 87 15:48:00 EST From: mnetor!utzoo!henry@uunet.UU.NET To: RISKS@kl.sri.com Subject: Re: Space Shuttle Whistle-Blowers Sound Alarm Again (reprint) > ... new and improved shuttle escape mechanisms. Lot's of > money is being spent, but whether reported or not, upon (close) examination > none of these mechanisms would prevent the death of astronauts in a > Challenger type disaster. I wonder just how much additional engineering > is happening for purely public relations purposes... The escape work is not being done for purely public relations purposes; it merely, for the most part, does not address situations as severe as the Challenger disaster. 
There is in fact some attention being given to such situations, but the thorough re-examination of shuttle safety issues turned up other cases where modest effort would yield a much higher probability of survival. The reason why most escape-system work is not addressing the Challenger scenario is that it is very difficult to get the crew out of such a situation reliably! There are also tradeoffs to be considered: regardless of managerial idiots blithering about safety being an absolute priority, the only way to make the shuttles completely safe is to put them in museums and never fly them again. In practice, there is no way to avoid some level of compromise between safety and utility, since adding any type of escape system reduces payload. There are also safety-vs-safety tradeoffs to be made, since even simple ejection seats can and do fire accidentally, often with fatal consequences. Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,decvax,pyramid}!utzoo!henry ------------------------------ From: roy%phri@uunet.UU.NET (Roy Smith) Subject: A new twist to password insecurity (human factors) Date: 6 Dec 87 00:19:06 GMT Organization: Public Health Research Institute, NYC, NY A bunch of people around here have signed up for the BRS/Colleague bibliographic data base service. Each person has an individual account number and (supposedly secret) password. We get a combined invoice each month, with usage itemized and identified, but only by account number (not by name). Our accounting office didn't know what to do with the account numbers, so I called BRS and asked for a list of which name corresponds to which account number. Much to my surprise, I got in the mail a few days later a list of names, account numbers, *and passwords*. 
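The underlying risk is that the service stores passwords in recoverable form. A minimal sketch of the standard alternative, salted one-way hashing (Python's standard library is assumed here; BRS's actual system is unknown), lets a service verify a password at login without ever being able to print it on a mailing:

```python
# A minimal sketch of one-way password storage (BRS's real system is
# unknown; Python's standard hashlib/hmac/os modules are assumed).  The
# service keeps only a salt and a salted hash, so it can verify a
# password at login but cannot reconstruct it for a mailing.
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store; the cleartext is discarded."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("s3cret")
print(verify("s3cret", salt, digest))  # True
print(verify("guess", salt, digest))   # False
```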
Roy Smith, {allegra,cmcl2,philabs}!phri!roy System Administrator, Public Health Research Institute 455 First Avenue, New York, NY 10016 ------------------------------ Date: 2 Dec 87 11:47:01 +1100 (Wed) From: munnari!softway.oz.au!chris@uunet.UU.NET (Chris Maltby) To: risks@csl.sri.com Subject: More on PIN encoding A recent fraud case in Sydney reveals that the PIN details are either encoded on the card, or a function of the card number. The perpetrators of the fraud used some ingenuity in their system. The first stage was to deduce the magnetic encoding on the card's stripe from discarded receipts collected around ATMs, and then manufacture cards which duplicated the real card. Unfortunately for them, they were unable to break the PIN encoding algorithm, so they resorted to hanging around in their cars opposite the ATM location with a portable video camera and a zoom lens. When some unsuspecting user left behind a receipt, they were able to re-manufacture his card and replay the video of his PIN entry. There are several morals. First - don't leave your receipts at the machine. Second - stand close when entering your PIN. Third - don't use the system at all; can you trust any system which is this easy to break? If the forgers had been just a bit more resourceful, they would have decoded the PIN as well. Of course, the "conditions of use" of your card make you liable for such frauds... Chris Maltby - Softway Pty Ltd (chris@softway.oz) PHONE: +61-2-698-2322 uunet!softway.oz!chris chris@softway.oz.au ------------------------------ From: ptsfa!pbhya!seg@ames.arpa (Stephen Grove) Date: Wed, 2 Dec 87 16:09:35 PST To: ames!comp-risks Subject: Telephone overload (Re: RISKS-5.63) Organization: Pacific * Bell, San Ramon, CA > Date: Fri, 20 Nov 87 15:17:09 PST > From: "LT Scott A. Norton, USN" <4526P%NAVPGS.BITNET@wiscvm.wisc.edu> > Subject: L.A.
Earthquake & Telephone Service > > Can anyone with better knowledge of the phone companies' local offices tell > me if there is some simple way to shed this extra load in a reasonable way? > I know that after some minutes off the hook, the phone loses its dial tone. > Does this adequately release the resources the off-the-hook phone was using? The older electromechanical systems required someone to throw a switch and remove the battery supply from the non-priority customers. Priority customers are those in the class of hospitals, police, fire, etc. The newer ESS offices (ESS = Electronic Switching Systems, using stored programs for control, as opposed to hard-wired logic) determine when the load is getting excessive, and delay the response to nonpriority customers by a factor of three (I think). In other words, if the ESS normally responds in 300ms, it will now take 900ms. I have seen it work in a flood, and it worked fine. At first the control was inhibited; the ESS repeatedly reported the need to invoke it and was unable to serve anyone. Once the control was allowed, response was slower, but the calls went through. For call-in programs and promotions, we like to provide special prefixes that can be limited in the number of interoffice channels they can access. Stephen Grove, Pac Bell, Rohnert Park, Calif. UUCP:{ihnp4,dual,sun,hoptoad}!ptsfa!pbhya!seg ------------------------------ Date: Wed, 2 Dec 87 12:47:00 pst From: imagen!geof@decwrl.dec.com (Geof Cooper) To: risks@csl.sri.com Subject: Software licensing problems We too have experienced the problem of software licensed to run on only some nodes. Our approach has been to convince the repairman types to switch the Node-ID ROMs on the Apollos in question, so that your access key follows you wherever you go. I've never seen the address ROM itself fail! Apollo Computer has recently brought out a product that allows N copies of a program to be running simultaneously on any of some larger number of machines.
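Such a floating-license scheme can be sketched as a counting semaphore (the names below are hypothetical; the note gives no details of Apollo's actual product):

```python
# A sketch of the floating-license idea (hypothetical names; the Apollo
# product's details are not given in the note).  A license server hands
# out at most `seats` concurrent copies across any number of machines.
import threading

class LicenseServer:
    def __init__(self, seats: int):
        self._seats = threading.Semaphore(seats)

    def checkout(self) -> bool:
        """Claim a seat if one is free; False means 'wait for a dialtone'."""
        return self._seats.acquire(blocking=False)

    def checkin(self) -> None:
        """Return a seat to the pool."""
        self._seats.release()

server = LicenseServer(seats=2)
print(server.checkout())  # True  - seat 1 of 2
print(server.checkout())  # True  - seat 2 of 2
print(server.checkout())  # False - all seats busy
server.checkin()
print(server.checkout())  # True  - a seat was returned
```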
It remains to be seen if any vendors will be interested in using the product. From the vendors' point of view, there is a possible financial risk to adopting the new scheme, since many programs are used less than full time; the customer might elect to buy fewer copies of the program and take the risk that occasionally someone will have to "wait for a dialtone". (E.g., we have about 25-30 licenses for Interleaf's desktop publishing software; I've never seen more than 15 of them in use at once.) - Geof ------------------------------ Date: Wed, 2 Dec 87 15:48:18 EST From: mnetor!utzoo!henry@uunet.UU.NET To: RISKS@kl.sri.com Subject: Re: Mariner 1 or Apollo 11? (RISKS-5.63) > I heard that the famous "./," disaster caused the problem with the > onboard IBM 1800 on Apollo 11... The onboard computers on Apollo were not IBM 1800s unless I have confused the numbers badly, and almost certainly they were programmed in assembler due to severely limited ROM capacity, so I'd be a bit skeptical of this. Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,decvax,pyramid}!utzoo!henry ------------------------------ To: kludge@pyr.gatech.edu Cc: risks@kl.sri.com Subject: Re: Mariner 1 or Apollo 11? Date: Mon, 30 Nov 87 22:26:10 PST From: Brent Chapman In RISKS-5.64, Scott Dorsey writes: > I heard that the famous "./," disaster caused the problem with the > onboard IBM 1800 on Apollo 11. I heard this from a professor who teaches > Fortran, so I'm not so sure about the reliability of the source. Anyone > else have information on either the Apollo or the Mariner problems? If you mean the "descent alarm" that occurred in the Lunar Module moments before lunar touchdown, then one of the NASA documentaries (unfortunately, I don't remember the title, or even when or where I saw it) told a different story.
They said that the alarm was caused by an overload condition in the processor; apparently Armstrong and Aldrin had left a certain sensor (radar altimeter, I think) enabled that was supposed to have been shut down by that point in the landing sequence, and the added load of that sensor caused the computer to falsely register an alarm condition. If I remember correctly, the programmer who had written the piece of code involved was in Mission Control at the time the alarm occurred; he stared at the situation and status boards for a few seconds, then announced that he knew what the problem was, that it was a false reading, and advised continuing the landing (although I'm not sure if they could have aborted at that stage or not). I may be confusing what I saw in the documentary (that the false descent alarm happened because the processor was overloaded because a sensor that should have been turned off wasn't) with "urban legend" (that the programmer responsible was in Mission Control, etc.); can anyone else back me up on this? Brent Chapman Capital Market Technology, Inc. Senior Programmer/Analyst 1995 University Ave., Suite 390 {lll-tis,ucbvax!cogsci}!capmkt!brent Berkeley, CA 94704 capmkt!brent@{lll-tis.arpa,cogsci.berkeley.edu} Phone: 415/540-6400 [I am sorry that misinformation is still flowing on this subject. I have held up a large number of potential contributions to RISKS, awaiting a definitive report that is rumored to be working its way RISKSward. PGN] ------------------------------ Date: Mon, 30 Nov 87 11:40:31 pst From: ucbcad!ames.UUCP!atari!apratt@ucbvax.Berkeley.EDU (Allan Pratt) To: ames!KL.SRI.COM!RISKS Subject: More on addressable converter box RISKS@KL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator): I have a note to add about my addressable converter box: the hardware is two-way capable. I know this because there is an "event" button which you hit to purchase an event on Pay-Per-View.
You then key in your purchase-authorization code, and you get the event. Obviously, the box has to be two-way, because the cable company has to be able to bill you. Now, the particular cable company in my area does not support this, but the box hardware & software do. The manual talks about it, with an "If your cable company supports it" disclaimer. Opinions expressed above do not necessarily reflect those of Atari Corp. or anyone else. -- Allan Pratt, Atari Corp. ...ames!atari!apratt ------------------------------ From: umix!oxtrap!rich@RUTGERS.EDU (K. Richard Magill) Date: 30 Nov 87 18:17:47 GMT To: moss!cbosgd!comp-risks@rutgers.EDU Subject: centralized car locks (foaf) Organization: Oxford TP, Ann Arbor It's my understanding that a certain lesser-known car (the Bricklin) was sold with entirely electronic locks. When the battery died or shorted, you were entirely locked out. The Bricklin had gull-wing doors. rich. [When the gulls attacked, the car owners were known as the Bricklin drudgers. PGN] ------------------------------ End of RISKS-FORUM Digest ************************