[HN Gopher] Facial Recognition Leads To False Arrest Of Black Ma...
       ___________________________________________________________________
        
       Facial Recognition Leads To False Arrest Of Black Man In Detroit
        
       Author : vermontdevil
       Score  : 359 points
       Date   : 2020-06-24 14:43 UTC (8 hours ago)
        
 (HTM) web link (www.npr.org)
 (TXT) w3m dump (www.npr.org)
        
       | jandrewrogers wrote:
        | It isn't just facial recognition; license plate readers can have
        | the same indefensibly Kafka-esque outcomes where no one is held
        | accountable for verifying computer-generated "evidence". Systems
        | like the one in the article make mistakes so cheap for the
        | government, with so few consequences, that it simply accepts
        | them as a cost of doing business.
       | 
       | Someone I know received vehicular fines from San Francisco on an
       | almost weekly basis solely from license plate reader hits. The
       | documentary evidence sent with the fines clearly showed her car
       | had been misidentified but no one ever bothered to check. She was
       | forced to fight each and every fine because they come with a
       | presumption of guilt, but as soon as she cleared one they would
        | send her a new one. The experience became extremely upsetting
        | for her; the entire bureaucracy simply didn't care.
       | 
       | It took threats of legal action against the city for them to set
       | a flag that apparently causes violations attributed to her car to
       | be manually reviewed. The city itself claimed the system was only
       | 80-90% accurate, but they didn't believe that to be a problem.
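        | 
        | To put that claimed accuracy in perspective, here is a back-of-
        | the-envelope base-rate sketch (the daily read volume below is
        | purely hypothetical):
        | 
        |     # Expected misreads from an imperfect plate reader.
        |     # The volume figure is invented for illustration.
        |     accuracy = 0.85          # midpoint of the city's 80-90% claim
        |     reads_per_day = 10_000   # hypothetical daily plate reads
        |     misreads = reads_per_day * (1 - accuracy)
        |     print(f"~{misreads:.0f} misreads per day")  # ~1500
        | 
        | Even at the optimistic end of that range, a system scanning
        | thousands of plates a day produces hundreds of bad hits, each
        | one a potential fine mailed to the wrong person.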
        
         | black_puppydog wrote:
         | I agree that's bad, and license plate readers come with their
         | own set of problems.
         | 
          | But being biased by the skin color of the driver is (AFAIK) not
          | one of them - which is exactly the problem with vision systems
          | applied to humans, at least the ones we've seen deployed so
          | far.
         | 
         | If a system discriminates against a specific population, that's
         | very different from (indiscriminately) being unreliable.
        
       | datavirtue wrote:
        | All that rigmarole for $3800 worth of crap? They should just
       | switch it up and start entrapping people like the FBI does. Then
       | at least they would have perhaps one leg to stand on.
        
       | cpeterso wrote:
       | What is a unique use case for facial recognition that cannot be
       | abused and has no other alternative solution?
       | 
       | Even the "good" use cases like unlocking your phone have security
       | problems because malicious people can use photos or videos of
       | your face and you can't change your face like you would a
       | breached username and password.
        
       | mistercool wrote:
       | relevant:
       | https://www.theregister.com/2020/06/24/face_criminal_ai/
        
       | rusty__ wrote:
        | any defence lawyer with more than 3 brain cells would have an
        | absolute field day deconstructing a case brought solely on the
        | basis of a facial recognition match. What happened to the idea
        | that police need to gather a variety of evidence confirming
        | their suspicions before making an arrest? Even a state
        | prosecutor wouldn't authorize a warrant based on such flimsy
        | methods.
        
         | ARandomerDude wrote:
         | True but the defendant is still financially, and in many cases
         | professionally, ruined.
        
       | js2 wrote:
       | > "I picked it up and held it to my face and told him, 'I hope
       | you don't think all Black people look alike,' " Williams said.
       | 
       | I'm white. I grew up around a sea of white faces. Often when
       | watching a movie filled with a cast of non-white faces, I will
       | have trouble distinguishing one actor from another, especially if
       | they are dressed similarly. This sometimes happens in movies with
       | faces similar to the kinds I grew up surrounded by, but less so.
       | 
       | So unfortunately, yes, I probably do have more trouble
       | distinguishing one black face from another vs one white face from
       | another.
       | 
       | This is known as the cross-race effect and it's only something I
       | became aware of in the last 5-10 years.
       | 
       | Add to that the fallibility of human memory, and I can't believe
        | we still even use lineups. Are there any studies about how often
        | lineups identify the wrong person?
       | 
       | https://en.wikipedia.org/wiki/Cross-race_effect
        
         | SauciestGNU wrote:
         | I lived in South Africa for a while and heard many times, with
         | various degrees of irony, "you white people all look the same"
         | from black South Africans. So yeah it's definitely a cross-
         | racial recognition problem, and it's probably also a problem
         | with distinguishing between members of visible minorities using
          | traits beyond the most noticeable othering characteristic.
        
       | zro wrote:
       | NPR article about the same, if you prefer to read instead of
       | listen: https://www.npr.org/2020/06/24/882683463/the-computer-
       | got-it...
       | 
       | I'll be watching this case with great interest
        
       | TedDoesntTalk wrote:
       | > Even if this technology does become accurate (at the expense of
       | people like me), I don't want my daughters' faces to be part of
       | some government database.
       | 
       | Stop using Amazon Ring and similar doorbell products.
        
       | paulorlando wrote:
       | I've been thinking this sort of event has become inevitable. Tech
       | development and business models support extending the
       | environments in which we collect images and analyze them.
       | Confidence values lead to statistical guilt. I wrote about it
       | here if interested: https://unintendedconsequenc.es/inevitable-
       | surveillance/
        
       | Anthony-G wrote:
       | There is just so much wrong with this story. For starters:
       | 
        | The shoplifting incident occurred in October 2018, but it wasn't
        | until March 2019 that the police uploaded the security camera
        | images to the state image-recognition system; even then, the
        | police waited until the following January to arrest Williams.
        | Unless
       | there was something special about that date in October, there is
       | no way for anyone to remember what they might have been doing on
       | a particular day 15 months previously. Though, as it turns out,
       | the NPR report states that the police did not even try to
       | ascertain whether or not he had an alibi.
       | 
       | Also, after 15 months, there is virtually no chance that any eye-
       | witness (such as the security guard who picked Williams out of a
       | line-up) would be able to recall what the suspect looked like
       | with any degree of certainty or accuracy.
       | 
       | This WUSF article [1] includes a photo of the actual
       | "Investigative Lead Report" and the original image is far too
        | dark for anyone (human or algorithm) to recognise the person.
       | It's possible that the original is better quality and better
       | detail can be discerned by applying image-processing filters -
       | but it still looks like a very noisy source.
       | 
       | That same "Investigative Lead Report" also clearly states that
       | "This document is not a positive identification ... and is _not_
       | probable cause to arrest. Further investigation is needed to
       | develop probable cause of arrest".
       | 
       | The New York Times article [2] states that this facial
        | recognition technology that the Michigan taxpayer has paid
       | millions of dollars for is known to be biased and that the
       | vendors do "not formally measure the systems' accuracy or bias".
       | 
       | Finally, the original NPR article states that
       | 
       | > "Most of the time, people who are arrested using face
       | recognition are not told face recognition was used to arrest
       | them," said Jameson Spivack
       | 
       | [1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-
       | re...
       | 
       | [2] https://www.nytimes.com/2020/06/24/technology/facial-
       | recogni...
        
       | gentleman11 wrote:
       | The discussion about this tech revolves around accuracy and
       | racism, but the real threat is in global unlimited surveillance.
        | China is installing 200 million facial recognition cameras
       | right now to keep the population under control. It might be the
       | death of human freedom as this technology spreads
       | 
       | Edit: one source says it is 400 million new cameras:
       | https://www.cbc.ca/passionateeye/m_features/in-xinjiang-chin...
        
       | FpUser wrote:
        | And then in some states employers are allowed to ask whether you
        | have ever been arrested (never mind convicted of any crime) on
        | employment applications. Sure, keep putting people down. One day
        | it might catch up with China's social scoring policies.
        
       | neonate wrote:
       | The prosecutor and the police chief should personally apologize
       | to his daughters, assuming that would be age appropriate.
        
       | crazygringo wrote:
       | In this _particular_ case, computerized facial recognition is
       | _not_ the problem.
       | 
        | Facial recognition produces _potential_ matches. It's still up
        | to humans to look at footage themselves and _use their judgment_
        | as to whether it's actually the same person or not, as well as
       | to judge whether other elements fit the suspect or not.
       | 
       | The problem here is 100% on the cop(s) who made that call for
       | themselves, or intentionally ignored obvious differences. (Of
       | course, without us seeing the actual images in question, it's
       | hard to judge.)
       | 
       | There are plenty of dangers with facial recognition (like using
       | it at scale, or to track people without accountability), but this
       | one doesn't seem to be it.
        
         | ChrisKnott wrote:
         | You are being downvoted but you are 100% right.
         | 
         | The justification for depriving someone of their liberty lies
         | solely with the arresting officer. They can base that on
         | whatever they want, as long as they can later justify it to a
         | court.
         | 
         | For example, you might have a trusted informant who could tell
          | you who committed a local burglary; just this on its own could
         | be legitimate grounds to make an arrest. The same informant
         | might walk into a police station and tell the same information
          | to someone else; for that officer, it might not be sufficient
         | to justify an arrest.
        
         | ncallaway wrote:
         | > The problem here is 100% on the cop(s) who made that call for
         | themselves
         | 
         | I disagree. There is plenty of blame on the cops who made that
         | call for themselves, true.
         | 
         | But there doesn't have to be a single party who is at fault.
         | The facial recognition software is _badly flawed_ in this
          | dimension. It's well established that the current technologies
         | are racially biased. So there's at least some fault in the
         | developer of that technology, and the purchasing officer at the
         | police department, and a criminal justice system that allows it
         | to be used that way.
         | 
         | Reducing a complex problem to a single at-fault person produces
         | an analysis that will often let other issues continue to
         | fester. Consider if the FAA always stopped the analysis of air-
         | crashes at: "the pilot made an error, so we won't take any
         | other corrective actions other than punishing the pilot". Air
          | travel wouldn't be nearly as safe as it is today.
         | 
         | While we should hold these officers responsible for their
         | mistake (abolish QI so that these officers could be sued
         | civilly for the wrongful arrest!), we should also fix the other
         | parts of the system that are obviously broken.
        
           | dfxm12 wrote:
           | _The facial recognition software is badly flawed in this
            | dimension. It's well established that the current
           | technologies are racially biased._
           | 
           | Who decided to use this software for this purpose, _despite
           | these bad flaws and well established bias_? The buck stops
           | with the cops.
        
             | goliatone wrote:
              | I guess the argument would be that some companies are
              | pushing - actively selling - the technology to PDs. In my
              | experience listening to our sales team pitch tech I helped
              | develop, they would not only ignore the caveats engineering
              | attached to the products, but outright sell features that
              | were not done, not even on the roadmap, or just physically
              | impossible to implement as sold. With that in mind I can
              | see how the companies selling these solutions are
              | responsible as well.
        
             | ncallaway wrote:
             | Sure, and that was one of the parties I listed as being at
             | fault:
             | 
             | > purchasing officer at the police department
             | 
             | However, if the criminal justice system decides that this
             | is an acceptable use of software, then the criminal justice
             | system _itself_ also bears responsibility.
             | 
             | The developer of the software _also_ bears the
             | responsibility for developing, marketing, and selling the
             | software for the police department.
             | 
             | I agree that the PD bears the majority of the culpability
             | here, but I disagree that it bears _every ounce of fault_
             | that could exist in this scenario.
        
             | Jtsummers wrote:
             | The cops, the politicians who fund them, the voters who
              | elect the politicians (and possibly some of the higher-up
              | police ranks), the marketers who sold it to the politicians
             | and cops, the management that directed marketing to sell to
             | law enforcement, the developers who let management sell a
             | faulty product, the developers who produced a faulty
             | product.
             | 
             | Plenty of blame to go around.
        
             | moron4hire wrote:
             | There's also the company that built the software and
             | marketed it to law enforcement.
             | 
             | Even disregarding the moral hazard of selecting an
             | appropriate training set, the problem is that ML-based
             | techniques are inherently biased. That's the entire point,
             | to boil down a corpus of data into a smaller model that can
             | generate guesses at results. ML is not useful without the
             | bias.
             | 
             | The problem is that bias is OK in some contexts (guessing
             | at letters that a user has drawn on a digitizer) and
             | absolutely wrong in others (needlessly subjecting an
             | innocent person to the judicial system and all of its
              | current flaws). The difference is in four areas: how easily
             | one can correct for false positives/negatives, how easy it
             | is to recognize false output, how the data and results
             | relate to objective reality, and how destructive bad
             | results may be.
             | 
             | When Amazon product suggestions start dumping weird
             | products on me because they think viewing pages is the same
             | as showing interest in the product (vs. guffawing at weird
             | product listings that a Twitter personality has found), the
             | damage is limited. It's just a suggestion that I'm free to
             | ignore. In particularly egregious scenarios, I've had to
             | explain why weird NSFW results were showing up on my
             | screen, but thankfully the person I'm married to trusts me.
             | 
             | When a voice dictation system gets the wrong words for what
             | I am saying, fixing the problem is not hard. I can try
             | again, or I can restart with a different modality.
             | 
              | In both of the previous cases, detecting false positives
              | is simplified by the fact that I know what
             | the end result _should_ be. These technologies are
              | assistive, not generative. We don't use speech recognition
              | technology to determine _what_ we are attempting to say; we
              | use it to speed up getting to a predetermined outcome.
             | 
             | The product suggestion and dictation issues are annoying
             | when encountering them because they are tied to an
             | objective reality: finding products I want to buy,
             | communicating with another person. They're only "annoying"
             | because the mitigation is simple. Alternatively, you can
             | just dispense with the real world entirely. When a NN
             | "dreams" up pictures of dogs melting into a landscape, that
             | is completely disconnected from any real thing. You can't
             | take the hallucinated dog pictures for anything other than
             | generative art. The purpose of the pictures is to look at
             | the weird results and just say, "ah, that was interesting".
             | 
              | But facial recognition and "depixelization" fail on the
             | first three counts, because they are attempts to reconnect
             | the ML-generated results to a thing that exists in the real
             | world, we don't know what the end results should be, and we
             | (as potential users of the system) don't have any means of
             | adjusting the output or escaping to a different system
             | entirely. And when combined with the purpose of law
             | enforcement, it fails on the fourth aspect, in that the
             | modern judicial system in America is singularly optimized
             | for prosecuting people, not determining innocence or guilt,
              | but getting plea bargain deals out of people. Only 10% of
              | criminal cases go to trial. 99% of civil suits end in a
             | settlement rather than a judgement (with 90% of the cases
             | settling before ever going to trial). Even in just this
             | case of the original article, this person and his family
             | have been traumatized, and he has lost at least a full day
             | of productivity, if not much, much more from the associated
             | fallout.
             | 
             | When a company builds and markets a product that harms
             | people, they should be held liable. Due to the very nature
             | of how machine vision and learning techniques work, they'll
             | never be able to address these problems. And the
             | combination of failure in all four categories makes them
             | particularly destructive.
        
               | dfxm12 wrote:
               | _When a company builds and markets a product that harms
               | people, they should be held liable._
               | 
               | They should be, however a company building and marketing
               | a harmful product is a separate issue from cops using
               | specious evidence to arrest a man.
               | 
                | Cops (QI aside) are responsible for the actions they
               | take. They shouldn't be able to hide behind "the tools we
               | use are bad", especially when (as a parent poster said),
               | the tool is known to be bad in the first place and the
               | cops still used it.
        
               | moron4hire wrote:
               | This is why I wrote "also", not "instead".
        
               | ncallaway wrote:
               | > Cops (QI aside), are responsible for the actions they
               | take. They shouldn't be able to hide behind "the tools we
               | use are bad", especially when (as a parent poster said),
               | the tool is known to be bad in the first place and the
               | cops still used it.
               | 
               | But literally no one in this thread is arguing to _not_
               | hold them responsible.
               | 
               | Everyone agrees that _yes, the cops and PD are
                | responsible_. It's just that some people are arguing
                | that there are other parties that _also_ bear
               | responsibility.
               | 
               | No one thinks the cops should be able to hide behind the
               | fact that the tool is bad. I think these cops should be
               | fired, sued for a wrongful arrest. I think QI should be
                | abolished so the wronged party can go after the house of the
               | officer that made the arrest in a civil court. I think
               | the department should be on the hook for a large
               | settlement payment.
               | 
               | But I _also_ think the criminal justice system should
               | enjoin future departments from using this known bad
               | technology. I think we should _also_ be mad at the
               | technology vendors that created this bad tool.
        
       | seebetter wrote:
       | Reminds me of this-
       | 
       | Facial recognition technology flagged 26 California lawmakers as
       | criminals. (August 2019)
       | 
       | https://www.mercurynews.com/2019/08/14/facial-recognition-te...
        
       | throwawaysea wrote:
        | A human still confirmed the match, right? That makes this not a
       | facial recognition issue but something else.
        
         | aaronmdjones wrote:
         | A human who had only seen the same grainy security footage that
         | the algorithm saw.
        
       | aritraghosh007 wrote:
        | The pandemic has accelerated the use of no-touch surfaces,
        | especially at places like airports that are now more inclined to
        | use face recognition security kiosks. What's not clear is the
        | vetting process for these (albeit controversial) technologies.
        | What if Google thinks person A is an offender but Amazon thinks
        | otherwise? Can they be used as counter-evidence? What is the gold
        | standard for surveillance?
        
       | vermontdevil wrote:
       | From ACLU article:
       | 
       |  _Third, Robert's arrest demonstrates why claims that face
       | recognition isn't dangerous are far-removed from reality. Law
       | enforcement has claimed that face recognition technology is only
       | used as an investigative lead and not as the sole basis for
       | arrest. But once the technology falsely identified Robert, there
       | was no real investigation._
       | 
       | I fear this is going to be the norm among police investigations.
        
       | linuxftw wrote:
       | Wait until you hear about how garbage and unscientific
       | fingerprint identification is.
        
         | _underfl0w_ wrote:
         | Speaking of pseudoscience, didn't most police forces just start
         | phasing out polygraphs in the last decade?
        
           | linuxftw wrote:
           | Unlikely unless they were compelled by law or found something
           | else to replace it, and I think it's the latter. Something
           | about machine learning and such.
        
       | vmception wrote:
       | > Federal studies have shown that facial-recognition systems
       | misidentify Asian and black people up to 100 times more often
       | than white people.
       | 
        | The idea behind inclusion is that this product would never have
        | made it to production if the engineering teams, product team,
        | executive team and board members represented the population.
        | Even just enough representation that there is a countering voice
        | would have helped.
        | 
        | It would have simply been "this edge case is not an edge case at
        | all, axe it."
       | 
       | Accurately addressing a market is the point of the corporation
       | more than an illusion of meritocracy amongst the employees.
        
         | JangoSteve wrote:
         | This is so incredibly common, it's embarrassing. I was on an
         | expert panel about "AI and Machine Learning in Healthcare and
         | Life Sciences" back in January, and I made it a point
         | throughout my discussions to keep emphasizing the amount of
         | bias inherent in our current systems, which ends up getting
         | amplified and codified in machine learning systems. Worse yet,
         | it ends up justifying the bias based on the false pretense that
         | the systems built are objective and the data doesn't lie.
         | 
         | Afterward, a couple people asked me to put together a list of
         | the examples I cited in my talk. I'll be adding this to my list
         | of examples:
         | 
         | * A hospital AI algorithm discriminating against black people
         | when providing additional healthcare outreach by amplifying
         | racism already in the system.
         | https://www.nature.com/articles/d41586-019-03228-6
         | 
          | * Misdiagnosing people of African descent with genomic variants
         | misclassified as pathogenic due to most of our reference data
         | coming from European/white males.
         | https://www.nejm.org/doi/full/10.1056/NEJMsa1507092
         | 
         | * The dangers of ML in diagnosing Melanoma exacerbating
         | healthcare disparities for darker skinned people.
         | https://jamanetwork.com/journals/jamadermatology/article-abs...
         | 
         | And some other relevant, but not healthcare examples as well:
         | 
          | * When Google's hate speech detecting AI inadvertently censored
         | anyone who used vernacular referred to in this article as being
         | "African American English".
         | https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...
         | 
          | * When Amazon's AI recruiting tool inadvertently filtered out
         | resumes from women. https://www.reuters.com/article/us-amazon-
         | com-jobs-automatio...
         | 
         | * When AI criminal risk prediction software used by judges in
         | deciding the severity of punishment for those convicted
         | predicts a higher chance of future offence for a young, black
         | first time offender than for an older white repeat felon.
         | https://www.propublica.org/article/machine-bias-risk-assessm...
         | 
         | And here's some good news though:
         | 
         | * A hospital used AI to enable care and cut costs (though the
          | reporting seems to oversimplify and gloss over enough to make
         | the actual analysis of the results a little suspect).
         | https://www.healthcareitnews.com/news/flagler-hospital-uses-...
        
           | snapetom wrote:
            | I agree 100% about how common it is. The industry also pays
            | lip service to doing something about it. My last job was
            | at a research institution and we had a data ethics czar,
            | who's a very smart guy (stats PhD) and someone I consider a
            | friend. A lot of his job was to go around the org and to
            | conferences talking about things like this.
           | 
           | While there's a lot of head nodding, nothing is ever actually
           | addressed in day to day operations. Data scientists barely
           | know what's going on when they throw things through
           | TensorFlow. What matters is the outcome and the confusion
           | matrix at the end.
           | 
            | I say this as someone who works in data and implements AI/ML
            | platforms. Mr. Williams needs to find the biggest ambulance-
            | chasing lawyer and file civil suits against not only the law
            | enforcement agencies involved, but everyone at DataWorks, top
            | down, from the president to the data scientist to the lowly
            | engineer who put this in production.
           | 
           | These people have the power to ruin lives. They need to be
           | made an example of and held accountable for the quality of
           | their work.
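            | 
            | Splitting that confusion matrix by demographic group is the
            | step that rarely happens. A toy sketch of what it looks like
            | (labels and predictions are invented for illustration):
            | 
            |     # False positive rate per group, from the same predictions.
            |     y_true = [0, 0, 1, 0, 0, 1, 0, 0]
            |     y_pred = [0, 0, 1, 0, 1, 1, 1, 0]
            |     groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
            | 
            |     for g in sorted(set(groups)):
            |         idx = [i for i, grp in enumerate(groups) if grp == g]
            |         fp = sum(y_pred[i] == 1 and y_true[i] == 0 for i in idx)
            |         neg = sum(y_true[i] == 0 for i in idx)
            |         print(g, "FPR:", fp / neg)  # a: 0.0, b: ~0.67
            | 
            | An aggregate accuracy number can look fine while one group
            | absorbs nearly all of the false positives.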
        
             | vmception wrote:
             | Sounds like a license for developing software is inevitable
             | then.
        
       | joyj2nd wrote:
       | Understandable, all black men look the same.
        
       | VWWHFSfQ wrote:
       | sounds like this guy is about to get a big payday.
        
         | whatshisface wrote:
         | Will he? I thought it was pretty hard to win cases against the
         | police. What does the evidence about the practicality of
         | pursuing police for torts say here, and for that matter what
         | kind of evidence should we be looking for?
        
           | VWWHFSfQ wrote:
           | most cases are settled by the city/county before it goes to
           | court
        
           | dfxm12 wrote:
           | One can file a civil case for false arrest. However, thanks
           | to the qualified immunity cops enjoy coupled with the newness
           | of the facial recognition technology, it will be very hard to
           | prove that they obviously violated clearly established law.
           | Maybe the cops will even point their fingers at the software.
           | 
           | Maybe he can go after the makers of the facial recognition
           | software, but they can probably point their finger at the
           | cops for using it wrong.
           | 
           | So, in any case, the guy will be left with a big legal bill.
        
         | avs733 wrote:
            | at taxpayer expense.
        
           | dfxm12 wrote:
           | It must be frustrating to be a taxpayer, demanding to defund
           | the police while simultaneously having to pay for their
           | mistakes.
           | 
           | If only their elected officials would listen to them...
        
         | ncallaway wrote:
         | That and we might get some kind of judicial ruling that current
          | incarnations of facial recognition software are racially biased.
         | 
         | It would be a great result if a court declared that the use of
         | racially biased facial recognition software is a violation of
          | the 14th Amendment, and enjoined PDs from using such
         | software unless it can be demonstrated to be free of racial
         | bias.
        
           | raxxorrax wrote:
           | Facial recognition is a solution to a problem we don't have.
           | It is the smartwatch of ML.
        
           | ChrisMarshallNY wrote:
           | Might have something to do with the...monochromatic...makeup
           | of most software company C-suites.
           | 
           | Fortunately, that is changing, but not that quickly.
        
       | tantalor wrote:
       | No mention of whether a judge signed a warrant for the arrest. In
       | what world can cops just show up and arrest you on your front
       | lawn based on their hunch?
        
       | at_a_remove wrote:
       | I think that your prints, DNA, and so forth must be, in the
       | interests of fairness, utterly erased from all systems in the
        | case of false arrest, with some kind of enormous, ruinous
        | financial penalty in place for the organizations for non-
        | compliance, as well as automatic jail time for involved
        | personnel. These things need teeth to happen.
        
       | danso wrote:
        | Since the NPR link is a 3-minute listen without a transcript,
        | here's
       | the ACLU's text/image article: https://www.aclu.org/news/privacy-
       | technology/wrongfully-arre...
       | 
       | And here's a 1st-person account from the arrested man:
       | https://www.washingtonpost.com/opinions/2020/06/24/i-was-wro...
        
         | TedDoesntTalk wrote:
          | As soon as I saw it was audio only, I left the site. Why do
         | sites do this? How many people actually stick to the page and
         | listen to that?
        
           | asveikau wrote:
           | > How many people actually stick to the page and listen to
           | that?
           | 
           | I just did. 3 minutes wasn't that bad and I wasn't somewhere
           | where it would be a problem.
           | 
           | > Why do sites do this?
           | 
           | NPR is a radio network. I have seen that often they do
           | transcribe their clips. I am not sure what the process they
           | have for that looks like, but it seems this particular clip
           | didn't get transcribed.
           | 
           | Edit: looks like they do have a transcription mentioned
           | elsewhere in the thread. So seems like some kind of UI fail.
        
           | scarface74 wrote:
           | Why do radio sites post audio?
        
           | ajzinsbwbs wrote:
           | Most people are going to hear the story on the radio or in a
           | podcast app / RSS feed. It's useful to have the story indexed
           | on a shareable web link where it can be played on different
           | platforms without any setup. If I wanted to share a podcast
           | episode with friends in a group chat, a link like this would
           | be a good way to do it. Since this is more of a long-form
           | text discussion forum I'd probably look for a text format
           | before posting here.
        
           | danso wrote:
           | NPR does transcribe (many, most?) its audio stories, but
           | usually there's a delay of a day or so - the published
           | timestamp for this story is 5:06AM (ET) today.
           | 
           | edit: looks like there's a text version of the article. I'm
           | assuming this is a CMS issue: there's an audio story and a
           | "print story", but the former hadn't been linked to the
           | latter: https://news.ycombinator.com/item?id=23628790
        
             | dhosek wrote:
             | They transcribe all their stories. Back before the web was
             | widespread, you could call or write NPR and have them mail
             | a transcript to you.
        
           | 013a wrote:
           | Well, if anyone were going to do it, you'd think no one would
        | be surprised about it being the "National Public _Radio_"
        
             | dvtrn wrote:
             | Accessibility still matters, or should still matter even if
             | you're a radio station, but probably especially if you're a
             | _news_ radio station.
        
               | scarface74 wrote:
                | How many TV shows have audio descriptions of non-verbal
               | parts of what you see on screen?
        
               | dvtrn wrote:
               | More than zero. It's called closed captioning, isn't it?
                | I've quite often seen closed captioning that puts brief
                | written descriptions of non-verbal depictions in
                | brackets, and it's not entirely uncommon either.
               | 
               | https://www.automaticsync.com/captionsync/what-qualifies-
               | as-... (see section: "High Quality Captioning")
        
               | scarface74 wrote:
                | Closed captioning is for people who can't hear.
               | 
               | I am not aware of many TV shows that offer audio
               | commentary for the visually impaired.
               | 
               | Here is an example of one that does.
               | 
               | https://www.npr.org/2015/04/18/400590705/after-fan-
               | pressure-...
        
               | dvtrn wrote:
               | Sorry, I thought that since we were originally talking
               | about transcriptions of radio news broadcasts and
               | accessibility for the hard of hearing that closed-
               | captioning would be appropriate and relevant. But your
                | point is well taken.
        
               | vermontdevil wrote:
               | NPR is fantastic when it comes to accessibility by
               | providing transcripts. I linked the page thinking the
                | transcript would come later as they usually do. But turns
                | out it was the wrong link. See elsewhere for the correct
               | link.
        
         | milespj wrote:
         | The mods can change this link to
         | https://www.npr.org/2020/06/24/882683463/the-computer-got-it...
         | 
         | The linked story is audio only and is associated with the
         | Morning Edition broadcast, but the full story appears under our
         | Special Series section.
         | 
         | (I work for NPR)
        
           | dang wrote:
           | Ok, changed from
           | https://www.npr.org/2020/06/24/882678392/man-says-he-was-
           | fal.... Thanks!
        
         | Fiveplus wrote:
         | NPR's text-only article served to me:
         | 
         | https://text.npr.org/s.php?sId=882683463
        
       | ghostpepper wrote:
       | He wasn't arrested until the shop owner had also "identified"
       | him. The cops used a single frame of grainy video to pull his
       | driver's license photo, and then put that photo in a lineup and
       | showed the store clerk.
       | 
       | The store clerk (who hadn't witnessed the crime and was going off
       | the same frame of video fed into the facial recognition software)
       | said the driver's license photo was a match.
       | 
       | There are several problems with the conduct of the police in this
       | story but IMHO the use of facial recognition is not the most
       | egregious.
        
         | [deleted]
        
         | businesslucas wrote:
          | It is not clear to me whether the person who identified him
          | was the shop owner or a clerk. From the NYT article:
         | https://www.nytimes.com/2020/06/24/technology/facial-recogni...
         | 
         | "The Shinola shoplifting occurred in October 2018. Katherine
         | Johnston, an investigator at Mackinac Partners, a loss
         | prevention firm, reviewed the store's surveillance video and
         | sent a copy to the Detroit police"
         | 
         | "In this case, however, according to the Detroit police report,
         | investigators simply included Mr. Williams's picture in a
         | "6-pack photo lineup" they created and showed to Ms. Johnston,
         | Shinola's loss-prevention contractor, and she identified him.
         | (Ms. Johnston declined to comment.)"
        
           | ghostpepper wrote:
           | I think you're correct that the person was not an owner or
           | clerk. IMHO the salient point is that the person was not any
           | sort of eyewitness but merely comparing the same grainy photo
           | as the algorithm.
        
         | malwarebytess wrote:
         | The story is the same one that all anti-surveillance, anti-
         | police militarization, pro-privacy, and anti-authoritarian
          | people foretell. Good technology will be used to enable, amplify,
         | and justify civil rights abuses by authority figures from your
         | local beat cop, to a faceless corporation, a milquetoast public
         | servant, or the president of the United States.
         | 
         | Our institutions and systems (and maybe humans in general) are
         | not robust enough to cleanly handle these powers, and we are
         | making the same mistake over and over and over again.
        
           | coffeemaniac wrote:
           | Correct, and this has been the story with every piece of
           | technology or tool we've ever given to police. We give them
           | body cameras and they're turned off or used to create FPS-
           | style snuff films of gunned down citizens. Give them rubber
            | bullets and they're aimed at protesters' eyeballs. Give them
           | tasers and they're used as an excuse to shoot someone when
           | the suspect "resists." Give them flashbangs and they'll throw
           | them into an infant's crib. Give them mace and it's used out
           | of car windows to punish journalists for standing on the
           | sidewalks.
           | 
           | The mistake is to treat any police department as a good-faith
           | participant in the goal of reducing police violence. Any tool
           | you give them will be used to brutalize. The only solution is
           | to give them less.
        
         | bsenftner wrote:
         | Yes, this is a story of police misconduct. The regulation of
         | facial recognition that is required is regulation against
         | police/authority stupidity. The FR system aids in throwing away
         | misses, leaving investigative leads. But if a criminal is not
          | in the FR database to begin with, any results from the FR are
          | a waste of time.
        
         | [deleted]
        
       | sneak wrote:
       | Another reason that it's absolutely insane that the state demands
       | to know where you sleep at night in a free society. These clowns
       | were able to just show up at his house and kidnap him.
       | 
       | The practice of disclosing one's residence address to the state
       | (for sale to data brokers[1] and accessible by stalkers and the
       | like) when these kinds of abuses are happening is something that
       | needs to stop. There's absolutely no reason that an ID should be
       | gated on the state knowing your residence. It's none of their
       | business. (It's not on a passport. Why is it on a driver's
       | license?)
       | 
       | [1]: https://www.newsweek.com/dmv-drivers-license-data-
       | database-i...
        
       | ChrisMarshallNY wrote:
       | Sadly, there's plenty more where that came from.
        
       | [deleted]
        
       | hpoe wrote:
        | I don't think using facial recognition to help identify probable
        | suspects is necessarily wrong, but arresting someone based on a
        | facial match algorithm is definitely going too far.
        | 
        | Of course, I really blame the AI/ML hucksters for part of this
        | mess, who have sold us the idea of machines replacing rather
        | than augmenting human decision making.
        
         | jordanpg wrote:
         | Those hucksters should be worried about the Supreme Court
         | swatting away their business model, because that's where I see
         | this headed.
        
           | hnlmorg wrote:
           | I don't think they'll worry about that. Even if that did
            | happen, there are foreign markets that would still invest in
           | this.
        
         | jacquesm wrote:
         | I think it is very wrong. Faces are anything but unique. Having
         | a particular face should not result in you being a suspect.
          | Only once actual policing results in you becoming a suspect
          | might this be a low-quality extra signal.
        
           | gridlockd wrote:
           | > Having a particular face should not result in you being a
           | suspect. Only once actual policing results in you becoming a
            | suspect might this be a low-quality extra signal.
           | 
           | Having a picture or just a description of the face is one of
            | the most important pieces of information the police have in
           | order to do actual policing. You can be arrested for just
           | broadly matching the description if you happen to be in the
           | vicinity.
           | 
           | Had the guy been convicted of anything just based on that
           | evidence, this would be a scandal. As it is, a suspect is
           | just a suspect and this kind of thing happens all the time,
           | because humans are just as fallible. It's just not news when
           | there's no AI involved.
        
             | jacquesm wrote:
             | A face of which there is only a description is not going to
              | be of much use, absent any special identifying marks, unless
             | you get an artist involved or one of those identikit sets
             | to reconstruct the face. An AI is just going to spit out
             | some generic representation of what it was trained on
             | rather than the specifics of the face of an actual suspect.
             | 
              | Faces generated by AI should not count as 'probable
             | cause' to go and arrest people. They should count as
             | fantasy.
        
               | gridlockd wrote:
                | > Faces generated by AI should not count as
               | 'probable cause' to go and arrest people.
               | 
               | They don't:
               | 
               | https://wfdd-live.s3.amazonaws.com/styles/story-
               | full/s3/imag...
               | 
                | There was further work involved - there was a witness
                | who identified the man in a photo lineup, and so on. The
                | AI did not _identify_ anyone; it gave a "best effort"
                | match. All the actual mistakes were made by humans.
        
         | dafoex wrote:
         | In a world where some police forces don't use polygraph lie
         | detectors because they are deemed too inaccurate, it baffles me
         | that people would make an arrest based on a facial recognition
         | hit from poor quality data.
         | 
          | But no, it's AI, it's magical, and it must be right.
        
           | treis wrote:
           | This seems similar to self-driving cars where people hold the
           | computer to much higher standards than humans. I don't have
           | solid proof, but I suspect that using facial recognition with
           | a reasonable confidence threshold and reasonable source
           | images is more accurate than eyewitness ID. If for no other
           | reason than the threshold for a positive eyewitness ID is
           | laughably bad.
           | 
           | The current best practice is to have a witness pick out the
           | suspect from 6 photos. It should be immediately obvious that
           | right off the bat there's a 17% chance of the witness
           | randomly picking the "right" person. It's a terrible way to
           | do things and it's no surprise that people are wrongly
           | convicted again and again on eyewitness testimony.
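            | 
            | That 1-in-6 floor is easy to sanity-check with a quick
            | simulation (purely illustrative):
            | 
            |     # Witnesses guessing at random from a 6-pack lineup.
            |     import random
            | 
            |     trials = 100_000
            |     picks = sum(random.randrange(6) == 0 for _ in range(trials))
            |     print(picks / trials)  # ~0.167, the ~17% random-pick rate
            | 
            | And that is the floor for a witness with no memory of the
            | suspect at all; a suggestive lineup only pushes it higher.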
        
         | dx87 wrote:
         | Yeah, facial recognition can be useful in law enforcement, as
         | long as it's used responsibly. There was a man who shot people
          | at a newspaper where I lived. When apprehended, he refused
         | to identify himself, and apparently their fingerprint machine
         | wasn't working, so they used facial recognition to identify
         | him.
         | 
         | https://en.wikipedia.org/wiki/Capital_Gazette_shooting
        
           | rovolo wrote:
           | From the wiki article and the linked news articles, the
           | police picked him up at the scene of the crime. He also had
           | smoke grenades (used in the attack) when they found him.
           | 
           | > Authorities said he was not carrying identification at the
           | time of his arrest and was not cooperating. ... an issue with
           | the fingerprint machine ultimately made it difficult to
           | identify the suspect, ... A source said officials used facial
           | recognition technology to confirm his identity.
           | 
           | https://en.wikipedia.org/wiki/Capital_Gazette_shooting#Suspe.
           | ..
           | 
           | > Police, who arrived at the scene within a minute of the
           | reported gunfire, apprehended a gunman found hiding under a
           | desk in the newsroom, according to the top official in Anne
           | Arundel County, where the attack occurred.
           | 
           | https://www.washingtonpost.com/local/public-safety/heavy-
           | pol...
           | 
           | This doesn't really seem like an awesome use of facial
           | recognition to me. He was already in custody after getting
           | picked up at the crime scene. I doubt he would have been
           | released if facial recognition didn't exist.
        
           | YPCrumble wrote:
           | > as long as it's used responsibly
           | 
           | At what point can we decide that people in positions of power
           | are not and will not ever be responsible enough to handle
           | this technology?
           | 
           | Surely as a society we shouldn't continue to naively assume
           | that police are "responsible" like we've assumed in the past?
        
             | anthonygd wrote:
             | > Surely as a society we shouldn't continue to naively
             | assume that police are "responsible" like we've assumed in
             | the past?
             | 
             | Of course we shouldn't assume it, but we absolutely should
             | require it.
             | 
             | Uncertainty is a core part of policing which can't be
             | removed.
        
             | dx87 wrote:
             | Agreed, I'm not saying we can currently assume they are
             | responsible, but in some hypothetical future where reforms
             | have been made and they can be trusted, I think it would be
             | fine to use. I don't think we should use current bad actors
             | to decide that a technology is completely off limits in the
             | future.
        
           | glenda wrote:
           | I don't think there is such a thing as responsible use of
           | facial recognition technology by law enforcement.
           | 
           | The technology is certainly not robust enough to be trusted
           | to work correctly at that level yet. Even if it was improved
           | I think there is a huge moral issue with the police having
           | the power to use it indiscriminately on the street.
        
       | whatshisface wrote:
       | How does computerized facial recognition compare in terms of
       | racial bias and accuracy to human-brain facial recognition?
       | Police are not exactly perfect in either regard.
        
         | suizi wrote:
         | Face recognition widens the scope of how many people can be
         | harassed.
        
           | _underfl0w_ wrote:
           | While also enabling finger-pointing, e.g. the police can say
           | "We aren't racist or aren't at fault. The system is just
           | faulty." while the engineers behind the facial recognition
           | tech can say that they, "Were just doing their job. The
           | police should've heeded their disclaimers, etc."
        
       | danso wrote:
       | This story is really alarming because as described, the police
       | ran a face recognition tool based on a frame of grainy security
       | footage and got a positive hit. Does this tool give any
       | indication of a confidence value? Does it return a list (sorted
       | by confidence) of possible suspects, or any other kind of
       | feedback that would indicate even to a layperson how much
       | uncertainty there is?
       | 
       | The issue of face recognition algorithms performing worse on dark
       | faces is a major problem. But the other side of it is: would
       | police be more hesitant to act on such fuzzy evidence if the top
       | match appeared to be a middle-class Caucasian (i.e. someone who
       | is more likely to take legal recourse)?
        
         | zaroth wrote:
         | > Does this tool give any indication of a confidence value?
         | 
         | Yes.
         | 
         | > Does it return a list (sorted by confidence) of possible
         | suspects,
         | 
         | Yes.
         | 
         | > ... or any other kind of feedback that would indicate even to
         | a layperson how much uncertainty there is?
         | 
          | Yes it does. It also states in a large-print heading "THIS
         | DOCUMENT IS NOT A POSITIVE IDENTIFICATION IT IS AN
         | INVESTIGATIVE LEAD AND IS _NOT_ PROBABLE CAUSE TO ARREST".
         | 
         | You can see a picture of this in the ACLU article.
         | 
         | The police bungled this badly by setting up a fake photo lineup
         | with the loss prevention clerk who submitted the report (who
         | had only ever seen the same footage they had).
         | 
          | However, tools that are ripe for misuse do not get a pass
          | because they include a bold disclaimer. If the tool/process
          | cannot prevent misuse, the tool/process is broken and possibly
         | dangerous.
         | 
         | That said, we have little data on how often the tool results in
         | catching dangerous criminals versus how often it misidentifies
          | innocent people. We have little data on whether those innocent
         | people tend to skew toward a particular demographic.
         | 
         | But I have a fair suspicion that dragnet techniques like this
          | unfortunately can be both effective and problematic.
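          | 
          | For anyone curious what "a list sorted by confidence" means
          | mechanically, here is a minimal sketch of an embedding-based
          | gallery search (the function, threshold, and shapes are
          | illustrative, not the vendor's actual system):
          | 
          |     import numpy as np
          | 
          |     def top_matches(probe, gallery, k=5, threshold=0.6):
          |         # Rank gallery face embeddings by cosine similarity
          |         # to the probe embedding; keep the best k above the
          |         # threshold, best first.
          |         g = gallery / np.linalg.norm(gallery, axis=1,
          |                                      keepdims=True)
          |         p = probe / np.linalg.norm(probe)
          |         scores = g @ p
          |         order = np.argsort(scores)[::-1][:k]
          |         return [(int(i), float(scores[i])) for i in order
          |                 if scores[i] >= threshold]
          | 
          | The crucial property is that a search like this always ranks
          | _someone_ first; a low-quality probe image doesn't make the
          | list empty, it just reshuffles which stranger lands on top.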
        
           | danso wrote:
           | I think the software would be potentially less problematic if
           | the victim/witness were given access, and (ostensibly) could
           | see the pool of matches and how much/little the top likely
           | match differed from the less confident matches.
           | 
           | > _The police bungled this badly by setting up a fake photo
            | lineup..._
           | 
           | FWIW, this process is similar to traditional police lineups.
           | The witness is shown 4-6 people - one who is the actual
           | suspect, and several that vaguely match a description of the
           | suspect. When I was asked to identify a suspect in my
           | robbery, the lineup included an assistant attorney who would
           | later end up prosecuting the case. The police had to go out
            | and find tall, light-skinned men to round out the lineup.
           | 
           | > _... with the loss prevention clerk who submitted the
           | report (who had only ever seen the same footage they had)._
           | 
           | Yeah, I would hope that this is _not_ standard process. The
           | lineup process is already imperfect and flawed as it is even
           | with a witness who at least saw the crime first-hand.
        
         | Pxtl wrote:
          | Interesting and related: a team made a neat "face depixelizer"
         | that takes a pixelated image and uses machine learning to
         | generate a face that should match the pixelated image.
         | 
         | What's hilarious is that it makes faces that look _nothing_
         | like the original high-resolution images.
         | 
         | https://twitter.com/Chicken3gg/status/1274314622447820801
        
           | mywittyname wrote:
           | I wonder if this is trained on the same, or similar,
           | datasets.
        
             | jcims wrote:
              | One of the underlying models, PULSE, was trained on
              | CelebAHQ, which is likely why the results are mostly
              | white-looking. StyleGAN, which was trained on the much more
              | diverse (but sparse) FFHQ dataset, does come up with a much
              | more diverse set of faces[1]...but PULSE couldn't get them
              | to converge very closely on the pixelated subjects...so
              | they went with CelebA [2].
             | 
             | [1] https://github.com/NVlabs/stylegan [2]
             | https://arxiv.org/pdf/2003.03808.pdf (ctrl+f ffhq)
        
           | nfrmatk wrote:
           | Interesting... Neat... Hilarious... In light of the
           | submission and the comment you're responding to, these are
           | not the words I would choose.
           | 
           | I think there's genuine cause for concern here, especially if
           | technologies like these are candidates for inclusion in any
           | real law enforcement decision-making.
        
           | jacquesm wrote:
           | That should be called a face generator, not a depixelizer.
        
           | danso wrote:
           | What's sad is that a tech entrepreneur will definitely add
           | that feature and sell it to law enforcement agencies that
           | believe in CSI magic:
           | https://www.youtube.com/watch?v=Vxq9yj2pVWk
        
             | barrkel wrote:
             | And another entrepreneur can add a feature to generate 10
             | different faces which match the same pixelation, and sell
             | it to the defence.
        
               | emiliobumachar wrote:
                | A better strategy might be to pixelate a photo of each
                | member of the jury, then de-pixelate it through the same
                | service, and distribute the before and after. Maybe
                | include the judge and prosecutor.
        
               | heavyset_go wrote:
               | Doubt that many people can afford to hire an expert
               | witness, or hire someone to develop bespoke software for
               | their trial.
        
         | strgcmc wrote:
         | I think the NYT article has a little more detail:
         | https://www.nytimes.com/2020/06/24/technology/facial-recogni...
         | 
         | Essentially, an employee of the facial recognition provider
         | forwarded an "investigative lead" for the match they generated
         | (which does have a score associated with it on the provider's
         | side, but it's not clear if the score is clearly communicated
         | to detectives as well), and the detectives then put the photo
         | of this man into a "6 pack" photo line-up, from which a store
         | employee then identified that man as being the suspect.
         | 
          | Everyone involved will probably point fingers at each other:
          | the provider, for example, put a large heading on their
          | communication saying "this is not probable cause for an
          | arrest, this is only an investigative lead, etc.", while the
          | detectives will say "well, we got a hit from a line-up" and
          | blame the witness, and the witness would probably say "well,
          | the detectives showed me a line-up and he seemed like the
          | right guy" (or maybe, as is often the case with line-ups, the
          | detectives exerted a huge amount of bias/influence over the
          | witness).
         | 
         | EDIT: Just to be clear, none of this is to say that the process
         | worked well or that I condone this. I think the data, the
         | technology, the processes, and the level of understanding on
         | the side of the police are all insufficient, and I do not
         | support how this played out, but I think it is easy enough to
         | provide at least some pseudo-justification at each step along
         | the way.
        
           | jhayward wrote:
           | > the detectives then put the photo of this man into a "6
           | pack" photo line-up, from which a store employee then
           | identified that man
           | 
           | This is not correct. The "6-pack" was shown to a security
           | firm's employee, who had viewed the store camera's tape.
           | 
           |  _" In this case, however, according to the Detroit police
           | report, investigators simply included Mr. Williams's picture
           | in a "6-pack photo lineup" they created and showed to Ms.
           | Johnston, Shinola's loss-prevention contractor, and she
           | identified him."_ [1]
           | 
           | [1] ibid.
        
           | danso wrote:
           | That's interesting. In many ways, it's similar to the
           | "traditional" process I went through when reporting a robbery
           | to the NYPD 5+ years ago: they had software where they could
           | search for mugshots of all previously convicted felons living
            | in an x-mile radius of the crime scene, filtered by the
            | physical characteristics I described. Whether or not the
            | actual suspect's face was in the results, the software was
            | ultimately too slow and clunky to paginate through hundreds
            | of matches.
           | 
           | Presumably, the facial recognition software would provide an
            | additional filter/sort. But at least in my situation, I could
            | actually see how big the total pool of potential matches was,
            | and thus had a sense of the uncertainty about false positives,
            | even if I were completely ignorant about the impact of false
            | negatives (i.e. what if my suspect didn't live within x miles
            | of the scene, or wasn't a known/convicted felon?)
           | 
           | So the caution re: face recognition software is how it may
           | non-transparently add confidence to this already very
           | imperfect filtering process.
           | 
            | (in my case, the suspect was eventually found because he had
            | committed a number of robberies, including being clearly
            | caught on camera, in an area/pattern that made it easy to
            | narrow down where he operated)
        
           | treis wrote:
           | I'm becoming increasingly frustrated with the difficulty in
              | accessing primary source material. Why don't any of these
              | outlets post the surveillance video and let us decide for
              | ourselves how much of a resemblance there is?
        
             | njharman wrote:
             | Because they're not in the business of providing
             | information, transparency or journalism.
             | 
             | They are in the business of exposing you to as many paid
             | ads as possible. And they believe providing outgoing links
             | reduces their ability to do that.
        
               | alasdair_ wrote:
               | >They are in the business of exposing you to as many paid
               | ads as possible.
               | 
               | NPR is a non-profit that is mostly funded by donations.
               | They only have minimal paid ads on their website to pay
               | for running costs - they could _easily_ optimize the news
                | pages to increase ad revenue but they don't because it
               | would get in the way of their goals.
        
             | tedunangst wrote:
             | Do they have it? Police haven't always been forthcoming in
             | publishing their evidence.
        
               | treis wrote:
                | If they don't, how are they describing the quality of the
                | video and the clear lack of resemblance?
        
               | danso wrote:
               | I don't know what passage you're describing, but this one
               | is implied to be part of a narrative that is told from
               | the perspective of Mr. Williams, i.e. he's the one who
               | remembers "The photo was blurry, but it was clearly not
               | Mr. Williams"
               | 
               | > _The detective turned over the first piece of paper. It
               | was a still image from a surveillance video, showing a
               | heavyset man, dressed in black and wearing a red St.
               | Louis Cardinals cap, standing in front of a watch
               | display. Five timepieces, worth $3,800, were shoplifted._
               | 
               | > _"Is this you?" asked the detective._
               | 
               | > _The second piece of paper was a close-up. The photo
               | was blurry, but it was clearly not Mr. Williams. He
               | picked up the image and held it next to his face._
               | 
                | All the preceding grafs are told in the context of "this
                | is what Mr. Williams said happened", most explicitly this
                | one:
               | 
               | > _"When's the last time you went to a Shinola store?"
               | one of the detectives asked, in Mr. Williams's
               | recollection._
               | 
               | According to the ACLU complaint, the DPD and prosecutor
               | have refused FOIA requests regarding the case:
               | 
               | https://www.aclu.org/letter/aclu-michigan-complaint-re-
               | use-f...
               | 
               | > _Yet DPD has failed entirely to respond to Mr.
               | Williams' FOIA request. The Wayne County Prosecutor also
               | has not provided documents._
        
               | treis wrote:
               | Maybe it's just me, but "we just took his word for it"
               | doesn't strike me as particularly good journalism if
               | that's what happened. If they really wrote these articles
               | without that level of basic corroboration then that's
               | pretty bad.
        
               | danso wrote:
               | It's a common technique in journalism to describe and
               | attribute someone's recollection of events in a series of
               | narrative paragraphs. It does not imply "we just took his
               | word for it", though it does imply that the reporter
               | finds his account to be credible enough to be given some
               | prominent space.
               | 
               | This arrest happened 6 months ago. Who else besides the
               | suspect and the police do you believe reporters should
               | ask for "basic corroboration" of events that took place
               | inside a police station? Or do you think this story
               | shouldn't be reported on at all until the police agree to
               | give additional info?
        
               | const_throwaway wrote:
               | > It's a common technique in journalism to describe and
               | attribute someone's recollection of events in a series of
               | narrative paragraphs.
               | 
               | Yes, it's called forwarding a narrative as opposed to
               | reporting on objective facts.
        
               | phendrenad2 wrote:
               | It should at least be very clear at the paragraph level
               | what is established fact and what is speculation/opinion.
        
               | LycheeKing wrote:
               | I fell out of love with NPR and Ira Glass a long time
               | ago.
               | 
                | NPR is unfortunately an actor in the current culture war.
                | The subject of the story is a POC, therefore by
                | definition an innocent victim of "systemic racism",
                | "white supremacy", "implicit bias", or whatever fighting
                | word the left comes up with.
               | 
                | Check any past story where the subject is a POC: they
                | are never in any way responsible for their own
                | misfortunes.
               | 
                | If time shows that the investigative lead in this story
                | was indeed correct in pointing to Mr. Williams, you won't
                | ever see a retraction or update on NPR.
        
           | ed25519FUUU wrote:
           | > _and the detectives then put the photo of this man into a
           | "6 pack" photo line-up, from which a store employee then
           | identified that man as being the suspect._
           | 
            | This is absurdly dangerous. The AI will find people who
            | look like the suspect; that's how the technology works. A
            | lineup as evidence will almost guarantee a bad outcome,
            | because of course the man looks like the suspect!
        
             | barkingcat wrote:
              | I'm also half wondering whether the "lineup" was 5 white
              | people and then a photo of the victim.
        
             | kevin_thibedeau wrote:
             | The worse part is that the employee wasn't a witness to
             | anything. He was making the "ID" from the same video the
             | police had.
        
           | BurningFrog wrote:
           | I can see why you'd only get 6 guys together for a physical
           | "6 pack" line-up.
           | 
            | But for a photo lineup, I can't imagine why you wouldn't have
            | at least 25 photos to pick from.
        
             | wtvanhest wrote:
              | Excellent point. In fact, the entire process of showing the
              | witness the photos should be recorded, and double-blind,
              | i.e. the officer showing the photos should not know
              | anything about the lineup.
        
           | gridlockd wrote:
           | > Essentially, an employee of the facial recognition provider
           | forwarded an "investigative lead" for the match they
           | generated (which does have a score associated with it on the
           | provider's side, but it's not clear if the score is clearly
           | communicated to detectives as well)
           | 
           | This is the lead provided:
           | 
           | https://wfdd-live.s3.amazonaws.com/styles/story-
           | full/s3/imag...
           | 
           | Note that it says in red and bold emphasis:
           | 
            | THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN
            | _INVESTIGATIVE LEAD_ ONLY AND IS _NOT_ PROBABLE CAUSE TO
            | ARREST. FURTHER INVESTIGATION IS NEEDED TO DEVELOP PROBABLE
            | CAUSE TO ARREST.
        
             | throwaway0a5e wrote:
             | Dear god the input image they used to generate that is
             | TERRIBLE! It could be damn near any black male.
             | 
             | The real negligence here is whoever tuned the software to
             | spit out a result for that quality of image rather than a
             | "not enough data, too many matches, please submit a better
             | image" error.
        
               | treis wrote:
                | You're also looking at a scan of a small printout with
                | poor contrast and brightness. What the computer sees
                | probably has a lot more detail: full resolution,
                | brightened to show the face, with enhanced contrast.
        
               | mindslight wrote:
               | I'm not even sure that's definitely a black man, rather
               | than just any person with some kind of visor or mask.
               | There does seem to be a face in the noise, but human
               | brains are primed to see face shapes.
               | 
               | The deeper reform that needs to happen here is that every
               | person falsely arrested and/or prosecuted needs to be
               | automatically compensated for their time wasted and other
               | harm suffered. Only then will police departments have
               | some incentive for restraint. Currently we have a
               | perverse reverse lottery where if you're unlucky you just
               | lose a day/month/year of your life. With the state of
               | what we're actually protesting I'm not holding my breath
               | (eg the privileged criminals who committed the first
               | degree murder of Breonna Taylor _still have yet to be
                | charged_ ), but it's still worth calling out the smaller
                | injustices that the criminal "justice" system inflicts.
        
               | alasdair_ wrote:
               | >The deeper reform that needs to happen here is that
               | every person falsely arrested and/or prosecuted needs to
               | be automatically compensated for their time wasted and
               | other harm suffered.
               | 
               | I agree here, but doing that may lead to the prosecutors
               | trying extra hard to find _something_ to charge a person
               | with after they are arrested, even if it was something
               | trivial that would often go un-prosecuted.
               | 
               | Getting the details right seems tough, but doable.
        
         | bsenftner wrote:
         | > The issue of face recognition algorithms performing worse on
         | dark faces is a major problem.
         | 
         | This needs to be coupled with the truth that people (police)
         | without diverse racial exposure are terrible at identifying
         | people outside of their ethnicity. In the photo/text article
         | they show the top of the "Investigative Lead Report" as an
          | image. You mean to say that every cop who saw the two images
          | side by side did not stop and say "hey, these are not the same
          | person!"? They did not, and that's because their own brains
          | could not see the difference.
         | 
          | This is a major reason police forces need to be ethnically
          | diverse. Just that exposure enables those members of the force
          | who never grew up or spent time outside their own ethnicity to
          | learn to tell apart a diverse range of similar but different
          | people outside their ethnicity.
        
         | caconym_ wrote:
         | People are not good at understanding uncertainty and its
         | implications, even if you put it front and center. I used to
         | work in renewable energy consulting and I was shocked by how
         | aggressively uncertainty estimates are ignored by those whose
         | goals they threaten.
         | 
         | In this case, it's incumbent on the software vendors to ensure
         | that less-than-certain results aren't even shown to the user.
         | American police can't generally be trusted to understand nuance
         | and/or do the right thing.
        
         | throwaway894345 wrote:
         | > But the other side of it is: would police be more hesitant to
         | act on such fuzzy evidence if the top match appeared to be a
         | middle-class Caucasian (i.e. someone who is more likely to take
         | legal recourse)?
         | 
         | Honest question: does race predict legal recourse when
         | decoupled from socioeconomic status, or is this an assumption?
        
           | SkyBelow wrote:
           | >Honest question: does race predict legal recourse when
           | decoupled from socioeconomic status, or is this an
           | assumption?
           | 
           | I think the issue is that regardless of the answer, it isn't
           | decoupled in real world scenarios.
           | 
            | I think the solution isn't dependent upon race either. It is
            | to ensure everyone has access to legal recourse regardless
           | of socioeconomic status. This would have the side effect of
           | benefiting races correlated with lower socioeconomic status
           | more.
        
             | throwaway894345 wrote:
             | > I think the issue is that regardless of the answer, it
             | isn't decoupled in real world scenarios.
             | 
             | Did you think I was asking about non-real-world scenarios?
             | And how do we know that it's coupled (or rather, the degree
             | to which it's coupled) in real world scenarios?
             | 
             | > I think the solution isn't dependent upon race either. It
             | is to ensure everyone have access to legal recourse
             | regardless of socioeconomic status. This would have the
             | side effect of benefiting races correlated with lower
             | socioeconomic status more.
             | 
             | This makes sense to me, although I don't know what this
             | looks like in practice.
        
           | advisedwang wrote:
           | Race and socioeconomic status are deeply intertwined. Or to
           | be more blunt - US society has kept black people poorer. To
           | treat them as independent variables is to ignore the whole
           | history of race in the US.
        
             | throwaway894345 wrote:
             | > To treat them as independent variables is to ignore the
             | whole history of race in the US.
             | 
             | Presumably the coupling of the variables is not binary
             | (dependent or independent) but variable (degrees of
             | coupling). Presumably these variables were more tightly
             | coupled in the past than in the present. Presumably it's
             | useful to understand precisely how coupled these variables
             | are today because it would drive our approach to addressing
             | these disparities. E.g., if the variables are loosely
             | coupled then bias-reducing programs would have a marginal
             | impact on the disparities and the better investment would
             | be social welfare programs (and the inverse is true if the
             | variables are tightly coupled).
        
         | adim86 wrote:
         | I blame TV shows like CSI and all the other crap out there that
         | make pixelated images look like something you could "Zoom" into
         | or something the computer can still understand even if the eye
          | does not. Because of this, non-tech people do not really
          | understand that pixelated images have LOST information. Add
          | that to the racial situation in the U.S. and the inaccuracy
          | of the tool for black people. Wow, this can lead to some
          | really troublesome results.
        
           | ficklepickle wrote:
           | I lose hours every day just yelling "enhance" at my computer
           | screen. Hasn't worked yet, but any day now...
        
       | MikusR wrote:
       | Is that different from somebody getting arrested based on a
       | mistaken eyewitness?
        
         | gnarbarian wrote:
         | Nope. But it's certainly far more accurate than eyewitnesses.
         | And will reduce the frequency of false positives. Compare this
         | to "suspect is a 6' male approx 200lbs with a white shirt and
         | blue jeans" and then having police frantically pick up everyone
         | in the area that meets this description.
         | 
         | This is the story that gets attention, though, despite it
         | representing an improvement in likely every metric you can
         | measure.
         | 
         | The response is what is interesting to me. It triggers a 1984
         | reflex resulting in people attempting to reject a dramatic
         | enhancement in law enforcement ostensibly because it is not
         | perfect, or because they believe it a threat to privacy. I
         | think people who are rejecting it should dig deep into their
         | assumptions and reasoning to examine why they are really
         | opposed to technology like this.
        
           | TedDoesntTalk wrote:
           | > think people who are rejecting it should dig deep into
           | their assumptions and reasoning to examine why they are
           | really opposed to technology like this.
           | 
           | Because a false positive ruins lives? Is that not sufficient?
           | This man's arrest record is public and won't disappear. Many
           | employers won't hire if you have an arrest record (regardless
           | of conviction). His reputation is also permanently smeared.
            | These records are permanently public, and in fact some
            | counties publish weekly arrest records on their websites and
            | in newspapers (not that newspapers matter much anymore).
           | 
           | Someday this technology may be better and work more reliably.
           | We're not there yet. Right now it's like the early days of
           | voice recognition from the '90s.
        
             | gnarbarian wrote:
             | This will ruin lives far less frequently than the existing
             | (worse) procedures.
        
               | malwarebytess wrote:
               | But as the founders of this country wisely understood,
               | human error is preferable to systematic error. That is
               | the principle under which juries, wildly fallible, exist.
               | 
               | Human error is preferable, even if it is more frequent
               | than the alternative, when it comes to justice. The more
               | human the better.
               | 
               | Humans can be held accountable.
        
         | Enginerrrd wrote:
         | The difference is that mistaken eyewitnesses are a known
         | problem, but with ML, a large fraction of the population thinks
         | it's infallible. Worse, its
         | reported confidence for an individual face may be grossly
         | overstated, since that is based on all the data it was trained
         | on, rather than the particular subset you may be dealing with.
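         | 
         | A minimal sketch of how that can happen, with all numbers
         | assumed purely for illustration: an aggregate error rate can
         | hide a much worse rate for a subgroup under-represented in the
         | training data.
         | 
         |   groups = {"majority": (900_000, 0.0001),   # (count, error rate)
         |             "minority": (100_000, 0.0100)}
         |   total = sum(n for n, _ in groups.values())
         |   overall = sum(n * e for n, e in groups.values()) / total
         |   print(overall)   # ~0.0011 "reported" overall, vs the 0.01
         |                    # actually faced by the minority group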
        
           | gridlockd wrote:
           | > The difference is that is a known problem, but with ML, a
           | large fraction of the population thinks it's infallible.
           | 
           | I don't think _anybody_ actually believes that.
           | 
           | I'm pretty sure the exact opposite is true: People _expect_
           | AI to fail, because they see it fail all the time in their
           | daily use of computers, for example in voice recognition.
           | 
           | > Worse, its reported confidence for an individual face may
           | be grossly overstated, since that is based on all the data it
           | was trained on, rather than the particular subset you may be
           | dealing with.
           | 
           | At the end of the day, this is still human error. A human
           | compared the faces and decided they looked alike enough to go
           | ahead. The whole thing could've happened without AI, it's
           | just that without AI, processing large volumes of data is
           | infeasible.
        
             | leghifla wrote:
              | I think the human error was made possible by AI: the AI
              | can search millions of records. The police / detective
              | cannot, and will only search a very small set, limiting
              | the search by other means.
             | 
              | The probability of finding an innocent person with a face
              | similar enough to fool the witness is much higher with AI.
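              | 
              | A minimal sketch of that effect in Python (the per-person
              | false match rate here is assumed purely for illustration):
              | 
              |   p = 1 / 100_000   # assumed per-person false match rate
              |   for n in (100, 10_000, 1_000_000):
              |       # chance of at least one false match among n faces
              |       print(n, 1 - (1 - p) ** n)
              |   # 100 -> ~0.1%, 10,000 -> ~9.5%, 1,000,000 -> ~99.995%
              | 
              | The chance of an innocent look-alike turning up is
              | negligible when a detective scans a handful of mugshots,
              | but near-certain at database scale.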
        
           | raxxorrax wrote:
            | A large fraction of the population and ML marketing both
            | believe that.
           | 
            | I still think it's insane. We have falling crime rates, yet
            | we still arm ourselves as fast as we can. Humanity could live
           | without face recognition and we wouldn't even suffer any
           | penalties. Nope, people need to sell their evidently shitty
           | ML work.
        
             | treis wrote:
             | (1) We still have extreme levels of crime compared to other
             | first world countries even if it is in decline
             | 
              | (2) Your argument strikes me as somewhat similar to "I feel
              | fine, why should I keep taking my medicine?". It's not
              | exactly the same, since medicine can be scientifically
              | proven to cure disease while it's impossible to measure the
              | impact of police on crime. But "things are getting better,
              | so we should change what we're doing" is not a particularly
              | sound logical argument.
        
               | raxxorrax wrote:
                | Crime rates dropped even faster in countries with more
               | rehabilitative approaches and long before some countries
               | began to upgrade their police forces because of unrelated
               | fears. It was more about giving people a second chance in
               | all that.
               | 
                | Criminologists aren't certain whether surveillance has a
                | positive or negative effect on crime. We have more than
                | 40 studies with mixed results. What is certain is that
                | this kind of surveillance isn't responsible for the
                | falling crime rates described. Most data is from the UK.
                | Currently I don't think countries without surveillance
                | fare worse on crime. Maybe quite the contrary.
               | 
               | "what we're doing" is not equivalent to increasing video
               | surveillance or generally increasing armament in civil
               | spaces. It may be sound logic if you extend the benefit
               | of the doubt but it may also just be a false statement.
               | 
                | Since surveillance is actually constitutionally forbidden
                | in many countries, one could argue that deployment would
                | "increase crime".
               | 
                | By some other "sound logic" it might just be a self-
                | reinforcing private prison industry with an economic
                | interest in keeping a steady supply of criminals. That
                | would also be completely sound.
               | 
               | But all these discussions are quite dishonest, don't you
               | think? I just don't want your fucking camera in my face.
        
         | ncallaway wrote:
         | Yes.
         | 
         | A computer can make a mistake involving literally any person
         | who has a publicly available photo (which is almost everyone).
         | 
         | Also, the facial recognition technologies are provably
         | _extremely_ racially biased.
        
         | mtgx wrote:
         | It's like asking "is mass surveillance that different from
         | targeted surveillance"?
         | 
         | Yes, of course it is. Orders of magnitude more people could be
         | negatively and undeservedly affected by this, for no other
         | reason than that it's now cheap enough and easy enough for the
         | authorities to use.
         | 
         | Just to give one example I came up with right now: in the
         | future, the police could stop you, take your picture, and
         | automatically run it through their facial recognition
         | database. Kind of like "stop and scan".
         | 
         | Or if the street cameras get powerful enough (and they will),
         | they could take your picture automatically while you're
         | driving and then stop you.
         | 
         | Think of it like a "TSA system for the roads". A lot more
         | people will be "randomly picked" by these systems from the
         | roads.
        
         | throwawaygh wrote:
         | The suspect said the picture looked nothing like him. When he
         | was shown the picture, he held it up in front of his face and
         | said "I hope you don't think all black people look alike".
         | 
         | I see this all the time when working with execs. I have to
         | continually remind even very smart people with STEM undergrad
         | and even graduate degrees that a computer vision system cannot
         | magically see things that are invisible to the human eye.
         | 
         | "the computer said so" is _way_ stronger than you would think.
        
         | phkahler wrote:
         | Not even close to the same thing. People aren't very reliable
         | witnesses either, but they are pretty good at identifying
         | people they actually know.
         | 
         | It's also poor practice to search a database using a photo or
         | even DNA to go fishing for a suspect. A closest match will
         | generally be found even if the actual perpetrator isn't in the
         | database. I think on some level the authorities know this,
         | which is why they don't seed the databases with their own
         | photos and DNA.
        
       | mnw21cam wrote:
       | This is a classic example of the false positive rate fallacy.
       | 
       | Let's say that there are a million people, and the police have
       | photos of 100,000 of them. A crime is committed, they pull the
       | surveillance footage, and they match it against their database.
       | They have a funky image matching system with a false positive
       | rate of 1 in 100,000 people, which is _way_ more accurate than I
       | think facial recognition systems are right now, but let's just
       | roll with it. Of course, on average, this system will produce one
       | positive hit per search. So, the police roll up to that person's
       | home and arrest them.
       | 
       | Then, in court, they get to argue that their system has a 1 in
       | 100,000 false positive rate, so there is a chance of 1 in 100,000
       | that this person is innocent.
       | 
       | Wrong!
       | 
       | There are ten people in the population of 1 million that the
       | software would comfortably produce a positive hit for. They can't
       | all be the culprit. The chance isn't 1 in 100,000 that the person
       | is innocent - it is in fact at least 9 out of 10 that they are
       | innocent. This person just happens to be the one of those ten
       | who had the bad luck to be stored in the police database.
       | Nothing more.
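       | 
       | A minimal sketch of that arithmetic in Python, using the same
       | assumed numbers:
       | 
       |   population = 1_000_000
       |   database   = 100_000
       |   fpr        = 1 / 100_000        # per-person false positive rate
       | 
       |   lookalikes = population * fpr   # ~10 people who would match
       |   false_hits = database * fpr     # ~1 hit per database search
       |   # a single hit is just one of ~10 equally plausible faces, so:
       |   p_innocent = (lookalikes - 1) / lookalikes
       |   print(lookalikes, false_hits, p_innocent)   # 10.0 1.0 0.9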
        
         | sirsar wrote:
         | See also: Privileging the hypothesis.
         | 
         | If I'm searching for a murderer in a town of 1000, it takes
         | about 10 independent bits of evidence to get the right one. And
         | when I charge someone, _I must already have the vast majority
         | of that evidence_. To say  "oh well we don't know that it
         | wasn't Mr. or Mrs. Doe, let's bring them in" is itself a breach
         | of the Does' rights. I'm ignoring 9 of the 10 bits of evidence!
         | 
         | Using a low-accuracy facial recognition system and a low-
         | accountability lineup procedure to elevate some random man who
         | did nothing wrong from presumed-innocent to 1-in-6 to prime
         | suspect, without having the necessary amount of evidence, is
         | committing the exact same error and is nearly as egregious as
         | pulling a random civilian out of a hat and charging them.
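          | 
          | A quick check of that arithmetic in Python (town size from
          | above; the 6-person lineup as in the article):
          | 
          |   import math
          |   bits_needed = math.log2(1000)   # ~9.97 bits to single out
          |                                   # one person in a town of 1000
          |   lineup_bits = math.log2(6)      # ~2.58 bits at most from a
          |                                   # "6-pack" lineup
          |   print(bits_needed, lineup_bits)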
        
         | x87678r wrote:
         | Definitely they should have everyone's 3d image in the system.
         | DNA too.
        
         | Buttons840 wrote:
         | There's a good book called "The Drunkard's Walk" that describes
         | a woman who was jailed after 2 of her children died of SIDS.
         | The prosecution argued that the odds of this happening were 1
         | in a million (or something like that), so the woman was
         | probably a baby killer. The prosecution had statisticians argue
         | this. The woman was found guilty.
         | 
         | She later won on appeal in part because the defense showed that
         | the testimony and argument of the original statisticians were
         | wrong.
         | 
         | This stuff is so easy to get wrong. A little knowledge of
         | statistics can be dangerous.
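         | 
         | A minimal sketch of the underlying error in Python, with both
         | rates assumed purely for illustration: rare evidence does not
         | by itself imply guilt, because the guilty explanation may be
         | rarer still.
         | 
         |   p_two_sids    = 1 / 1_000_000    # the "1 in a million" figure
         |   p_two_murders = 1 / 10_000_000   # assumed rate of double
         |                                    # infanticide
         |   # given that two deaths occurred, compare the explanations:
         |   p_innocent = p_two_sids / (p_two_sids + p_two_murders)
         |   print(p_innocent)                # ~0.91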
        
       ___________________________________________________________________
       (page generated 2020-06-24 23:00 UTC)