[HN Gopher] Blur Tools for Signal
___________________________________________________________________
Blur Tools for Signal
Author : tosh
Score  : 496 points
Date   : 2020-06-04 09:44 UTC (13 hours ago)
(HTM) web link (signal.org)
(TXT) w3m dump (signal.org)
| xwowsersx wrote:
| I don't know much about image processing, but can't blur over
| some area in an image be "removed" so as to recover the original
| image underneath, or am I just totally mistaken about how
| images/pixels work?
| [deleted]
| haarts wrote:
| Think of it this way: there is less information in a blurred
| image (less colour, fewer lines, fewer areas). You cannot*
| conjure information out of thin air, thus recreating the
| unblurred image.
|
| * Recent advances in AI actually make this possible to an
| extent. The AI delves into its massive memory and extrapolates
| a likely image/face.
| regularfry wrote:
| That entirely depends on how the image is blurred. The
| default Gaussian blur in an image editing tool can be
| reversed without leaning on magic AI to do it.
| miglmj wrote:
| Not completely mistaken. I think you're confusing blurring with
| "blending", where pixels are displaced in tight irregular
| spirals. Such images have been successfully unscrambled as
| part of criminal investigations into child exploitation cases.
| dominotw wrote:
| What about videos?
| yingw787 wrote:
| As a software engineer: screw software-based solutions. Too hard
| to communicate to people, too easily compromised without notice,
| just blegh for things like this.
|
| I remember the Mueller report being printed out, inked over, and
| then scanned before being exported as PDF, just to make sure
| there were no software shenanigans. I really like this idea.
|
| If you wanted to implement that in the field, you could purchase
| a Polaroid camera, ink over faces manually, and then use your
| iPhone to take a picture of that picture and destroy the film
| afterwards.
| raziel2p wrote:
| This strikes me as ridiculously paranoid.
Are you worried that
| a JPG/PNG contains the original non-blurred picture or
| something?
|
| Never mind the fact that in your examples, the physical
| originals can be stolen before you have a chance to redact/blur
| them, or your blurring done by hand isn't good enough and
| someone can recover the original by increasing the contrast or
| whatever.
| yingw787 wrote:
| ...isn't the whole reason for this discussion that events
| causing "ridiculous paranoia" keep being realized? No, I
| don't think I'm being too paranoid. Even if Signal is open
| source, that means squat if you don't know what's actually
| running on the servers, or what's in the AppImage
| running on your phone.
|
| If you have your servers and employees where the government
| can reach you, you can be compromised, because ethics and
| morality go out the window when it's about your safety and
| that of those you love.
|
| Analog is always safest, because it's what the world is
| grounded in. If you don't like inking over an image, then
| burn the faces off it using a blowtorch, or, if you're worried
| the ink is still there, stamp out the faces using a
| hole punch.
| noodlesUK wrote:
| What kind of blur is used? Blurs are annoyingly bad at obscuring
| things like faces. They may be good at making faces
| unrecognisable to people, but they're not nearly as good at
| making faces unrecognisable to machines.
| 2OEH8eoCRo0 wrote:
| A good blur sheds far too much information to be meaningfully
| reversible.
| laughinghan wrote:
| But a bad blur doesn't. That's why your parent comment asks
| what kind of blur they use.
| [deleted]
| have_faith wrote:
| I've seen this sentiment mentioned quite a bit, but is it still
| true with the level of blur being shown in their example
| images? The blur level is extremely high, to the point that it
| has essentially left behind a smooth gradient. Even with the
| algorithm known, is there enough reversible information left?
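A quick way to see why the blur radius matters so much. The sketch below is an assumption-laden toy (numpy only, a periodic Gaussian blur with a kernel known to the attacker; this is not Signal's code): a mild Gaussian blur inverts almost exactly by naive inverse filtering, while a heavy one drives the high-frequency response so close to zero that any 8-bit rounding or JPEG step makes those frequencies unrecoverable.

```python
import numpy as np

# Frequency response of a periodic Gaussian blur with standard
# deviation `sigma` (in pixels): exp(-2 * pi^2 * sigma^2 * |f|^2).
def gaussian_response(shape, sigma):
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2 * np.pi**2 * sigma**2 * (fy**2 + fx**2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for a grayscale photo

# Mild blur: the response stays well above zero everywhere, so
# dividing it back out (naive inverse filtering) recovers the image
# essentially exactly -- the "reversible Gaussian blur" case.
H = gaussian_response(img.shape, sigma=1.0)
blurred = np.fft.ifft2(np.fft.fft2(img) * H).real
recovered = np.fft.ifft2(np.fft.fft2(blurred) / H).real
assert np.abs(recovered - img).max() < 1e-6

# Heavy blur: the high-frequency response collapses below 1e-200.
# Once the blurred pixels are rounded to 8 bits (or JPEG-compressed),
# inverting would amplify that rounding noise by >10^200 -- those
# frequencies are gone for good.
H_heavy = gaussian_response(img.shape, sigma=8.0)
assert H_heavy.min() < 1e-200
```

This is the ill-conditioning point made elsewhere in the thread: invertibility of a Gaussian blur is not a yes/no property, it degrades catastrophically with radius once the signal is quantized.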
| samstave wrote:
| Why don't they just literally cut out / replace whatever would
| be blurred with plain black pixels? Why blur anything at all?
| hutzlibu wrote:
| Aesthetics. A blurred face looks better in a picture than a
| fat black box.
|
| But as others have pointed out, you can achieve (almost)
| the same effect if you remove enough information before
| blurring, or just draw a smooth gradient, though that alone
| is harder to make look as nice as blurring the actual
| image.
| StavrosK wrote:
| Probably not. You can remove Gaussian blur by performing the
| inverse convolution (it's tricky because you need to find the
| actual parameters that caused the initial blur), you can
| remove motion blur the same way, etc. This looks like there
| isn't nearly enough information left to do any of that,
| though.
| rainforest wrote:
| I suspect there's not much information in an individual
| blurred face, but I wonder if, given enough examples, you'd be
| able to determine whether an unblurred face is the one in a
| sample of images with any level of confidence? You can do that
| with text (http://dheera.net/projects/blur).
| Arnt wrote:
| The face and its surroundings are blurred almost to a
| single colour. The average RGB value of my face might be
| unique-ish, but if you mix in some variable background,
| photographed on a camera whose lens has been smeared
| against the pocket of someone's jeans, the result should be
| human, not individual.
|
| A side comment: AFAICT what the Signal developers have done
| is take code that was developed so that the phone camera
| could autofocus on faces, and used that code to defocus
| faces. What a sweet hack.
| hnarn wrote:
| A very sweet hack, but I think the concern was based on
| the example image provided in the link posted. While the
| face is blurred, there's still a lot of information you
| can glean about the person: their haircut, their neck,
| the clothes worn etc.
-- so I'm guessing the threat
| vector here is that if you also have a general set of
| pictures from the same demo, you may be able to
| automatically identify who the blurred person is.
|
| Blurring is better than nothing, but the best picture when
| it comes to avoiding being traced is the picture that was
| never taken.
| sitkack wrote:
| It shouldn't blur, it should be a black box.
|
| You could definitely take Signal's code, run it over
| a set of test images, and find which output matches
| closest to the target image.
| Arnt wrote:
| What set of test images?
|
| https://www.androidpolice.com/wp-content/uploads/2020/06/04/...
| is blurred by Signal. Suppose that you have all the photos
| that have been posted to Facebook, that both of those women
| are on Facebook, and lastly that you have resources enough to
| run all of those through the Signal code. How would you
| match those other photos to the blurred part of this one?
| btrettel wrote:
| Not just any black spot either. A black spot of _random
| size_ larger than what you want to redact. That way you
| avoid leaking the size of what's being redacted. The
| size of what's being redacted can sometimes provide
| enough information to determine plausible contents:
| http://blog.nuclearsecrecy.com/2014/07/11/smeared-richard-fe...
| jszymborski wrote:
| Ok, but just to be clear, we're redacting faces here.
| There isn't much meaningful here other than an
| exceptionally rough indication of age/development.
| btrettel wrote:
| The examples on the Signal website give you hair color,
| hair style, likely race, and the shape of the top of the
| protesters' ears. While it's not definitive, given that a
| fuller redaction is easy and has no disadvantages, I
| don't see why someone shouldn't try.
| Arnt wrote:
| Yes. Someone who has access to _many_ photos of the same
| set of people might well be able to identify people in one
| photo, even though their faces are blurred on that
| particular one.
| | I'm not sure whether
| the large number of photos nowadays is a net negative,
| though. That's also what finally stopped Derek Chauvin.
| lm28469 wrote:
| Let's be real for 2 seconds here, this is pure nonsense.
| No court of law would do anything with "hey, we arrested
| that guy because he has 2 eyes, a mouth and the same
| t-shirt as that other guy who was protesting yesterday".
| If it comes to this you wouldn't even need a picture of
| blurred faces; just arrest whoever you want and provide
| forged evidence (or none), because that's exactly the
| same thing.
|
| And even then, law enforcement are already filming them
| (CCTV + from the air) and tracking their phones. The last
| thing you have to worry about is a 100% blurred face that
| no amount of technical power would be able to process or
| match back to you.
| TremendousJudge wrote:
| bikeshedding? on hacker news? no way
| fragmede wrote:
| Picture A of an individual, unblurred, protesting
| peacefully.
|
| Picture B of a blurred individual from later on in the
| same protest, wearing the exact same clothes, committing
| questionable acts, is circumstantially incriminating.
| anigbrowl wrote:
| You're overthinking it. Police already have their own
| camera people doing video surveillance in addition to
| CCTV and other surveillance tools. The sort of forensic
| analysis you mention is of course possible and is
| sometimes engaged in, but obscuring all such information
| would defeat the purpose of photojournalism altogether.
| TheCraiggers wrote:
| > "The face and its surroundings are blurred almost to a
| single colour."
|
| To your eyes, maybe. To a machine, you have an array of
| pixels, each with different values which, using an
| algorithm, could be adjusted into something your eyes can
| resolve into a unique face.
| Arnt wrote:
| Seriously? Look at
| https://www.androidpolice.com/wp-content/uploads/2020/06/04/...
-- do you _really_ think
| there's enough information in those two rectangles to
| reconstruct the faces even approximately?
| TheCraiggers wrote:
| Hard to tell, I'm not a computer; but it does look better
| than most. To be fair (to me), I was basing my critique
| on the picture in TFA, which seems to have far more
| detail in it.
|
| That said, the whole point of my post was that humans are
| really bad at judging this. Many blur algorithms can be
| reversed because they just modify the color values of the
| pixels in a reversible way. You can't always tell by
| looking at a picture what data is still there, in much
| the same way you can't see the stars in an ISO 200
| picture of the night sky. It's not until you open it in
| GIMP and crank the exposure up to max that you see just
| how much data is there that your eyes couldn't perceive.
| KingOfCoders wrote:
| The number of different characters is quite limited. Does
| this work for Chinese, or only for Latin-type scripts?
| IshKebab wrote:
| No chance. That level of blur is essentially impossible to
| reverse. I think lots of people here are a bit confused
| because they know that all Gaussian blurs are theoretically
| reversible. But they aren't thinking about how
| ill-conditioned the inverse gets as the blur gets larger and
| larger.
| laughinghan wrote:
| There's another concern -- even if I can't usably invert the
| convolution, if I have photos of a thousand people's faces
| and one of them is that blurred face, can I figure out
| which one?
| steerablesafe wrote:
| Well, if you want to leave behind a smooth gradient, then
| leave behind a smooth gradient. Suggestion for an algorithm:
| * start with the blur
| * sample the four colors at the four corners of the blurred
|   region
| * quantize them
| * fill in the region with bilinear interpolation
|
| Then your whole region can only reveal these four quantized
| color values.
If you only blur, then you will have a harder
| time proving how little information leaked.
| dunefox wrote:
| Citation needed.
| tomcooks wrote:
| Trivial to test by deblurring and sharpening a blurred pic
| and passing it to, say, OpenCV.
| thdrdt wrote:
| 2012: https://www.instantfundas.com/2012/10/how-to-unblur-out-of-f...
|
| 2017: https://arxiv.org/pdf/1702.00783.pdf (Pixel Recursive
| Super Resolution)
|
| 2020: https://venturebeat.com/2020/01/22/researchers-use-ai-to-deb...
|
| Edit: Most face recognition software works by down-sizing and
| blurring an image to detect face features faster. So in
| theory it is very easy to detect face features in a blurred
| image. A deblur tool can then use this information to better
| deblur a face.
| ulfw wrote:
| Impressive examples. Thanks for posting them!
|
| So the ridiculous "Enhance!" one sees in TV crime
| dramas could one day actually become true.
| throwaway0a5e wrote:
| You can't make data that isn't there. It's fundamentally
| going to be a guess. You can enhance your way to _a_ face
| or _a_ license plate, but there is zero guarantee it will
| be _the_ face or _the_ license plate that the low quality
| image/video is of. This is why solid blocks of color or
| emojis are so effective at censoring images: they take the
| data and replace it with pure junk.
| regularfry wrote:
| If you know it's a Gaussian blur with a known radius, you
| can uniquely reverse it.
| chooseaname wrote:
| But is deblurring from hand shake or an out-of-focus lens, or
| even a Gaussian blur, the same as the random gradient blur
| they seem to be using?
|
| Edit: The images in the Signal article don't look like
| images of blurred faces. They look like blurry images
| overlaid onto faces. If you don't blur the face, how can it
| be unblurred?
| dunefox wrote:
| Yes, that works if the face itself is blurred, not if
| random noise is used in place of the face.
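steerablesafe's corner-gradient suggestion above is easy to sketch. This is a toy under stated assumptions (numpy, a uint8 RGB image, a hypothetical `corner_fill` helper, 8 quantization levels per channel, none of which come from Signal): because the output is computed only from the four quantized corner colors, two regions with the same corners redact to an identical gradient no matter what was inside.

```python
import numpy as np

def corner_fill(img, y0, y1, x0, x1, levels=8):
    """Replace img[y0:y1, x0:x1] with a bilinear gradient built from
    the four corner colors, quantized to `levels` values per channel,
    so the redacted area can leak at most those four coarse colors."""
    region = img[y0:y1, x0:x1].astype(float)
    h, w = region.shape[:2]
    step = 256 // levels
    # sample and quantize the four corner colors
    c00, c01 = (region[0, 0] // step) * step, (region[0, -1] // step) * step
    c10, c11 = (region[-1, 0] // step) * step, (region[-1, -1] // step) * step
    ty = np.linspace(0.0, 1.0, h)[:, None, None]
    tx = np.linspace(0.0, 1.0, w)[None, :, None]
    top = c00 * (1 - tx) + c01 * tx       # interpolate along the top edge
    bottom = c10 * (1 - tx) + c11 * tx    # ... and along the bottom edge
    img[y0:y1, x0:x1] = (top * (1 - ty) + bottom * ty).astype(np.uint8)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (40, 40, 3), dtype=np.uint8)
b = a.copy()
# give b a completely different interior, but identical corners
b[11:29, 11:29] = rng.integers(0, 256, (18, 18, 3), dtype=np.uint8)
corner_fill(a, 10, 30, 10, 30)
corner_fill(b, 10, 30, 10, 30)
assert np.array_equal(a, b)  # only the corner colors survived
```

With `levels=8` the entire redacted block carries at most 4 corners x 3 channels x 3 bits = 36 bits, which is the kind of provable bound that a plain convolutional blur doesn't give you.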
| dbrgn wrote:
| I tend to pixelate regions I want to make unrecognizable in
| photos instead of using a Gaussian blur, for this reason.
| Pixelation should be safe as long as the pixels are large
| enough, right?
|
| I wonder why Signal didn't do something like that...
| ReactiveJelly wrote:
| I think there's a way to do super-resolution on pixellated
| video.
|
| It's okay for still images, but video has a lot more
| information to leak. Just black everything out.
| pfortuny wrote:
| They should simply explain what convolution they are using;
| then it would be easy to know.
| simias wrote:
| I've been digging through the latest related commits in the
| repo: https://github.com/signalapp/Signal-Android/commits/master
|
| They appear to use "com.google.firebase:firebase-ml-vision-face-model:20.0.1"
| to detect the faces.
|
| The actual blur appears to be done here:
| https://github.com/signalapp/Signal-Android/blob/514048171bf...
|
| Not sure what "ScriptIntrinsicBlur" stands for exactly; it
| appears to come from the Android SDK itself: import
| android.renderscript.RenderScript;
|
| EDIT: https://developer.android.com/reference/kotlin/android/rende...
|
| It's a Gaussian blur filter with a radius of 25px, if I
| understand the code correctly.
| greysonp wrote:
| FWIW this was updated before release to also scale down the
| image before blurring it. We cut the size in half, or cap
| it to 300x300, whichever is smaller. This was to ensure
| that the effectiveness of the blur isn't reduced on
| higher-resolution images.
| https://github.com/signalapp/Signal-Android/blob/master/app/...
| pfortuny wrote:
| You should perform a non-invertible blur though... Or,
| even easier, use the same noise image for all faces.
|
| EDIT: come to think of it, you can generate random noise
| using a palette from the colors in the blur area (say,
| take four or five colors and mix them).
|
| Applying a convolutional blur for anonymizing is very, very
| risky.
Because you might end up
| with something either invertible or nearly so.
| pfortuny wrote:
| Ouch: Gaussian blur might be invertible if you are not
| careful. That is why you need the explicit parameters of
| the convolution.
|
| Thanks for digging.
| johnchristopher wrote:
| I thought security through obscurity didn't work ^^. /s
| easterncalculus wrote:
| This is rather silly; you could always draw solid colors over
| someone's face, and it works better than blurring. A rather
| frivolous update, from a software standpoint. The sentiment is
| nice.
| hiq wrote:
| Automation is a big part though: having to do it manually on 5
| faces is tedious, pressing a button is not.
| easterncalculus wrote:
| I see the point in this, but if you're going to automate
| something it should be automated right! In most cases,
| getting specific people's faces in shot isn't a good idea in
| general. If you're getting five people's faces in center
| frame for a photo just to blur their faces out, then it's
| probably fair to ask why you'd even take a photo at all.
| sitkack wrote:
| Recording and sharing are different things. If I take a
| picture of a cop pulling masks off of protestors, I sure as
| hell want to record the incident, but not necessarily share
| images of the victims.
| Vinnl wrote:
| I've seen it said that blurs can relatively easily be reversed. I
| wouldn't expect that to be unknown to the Signal team, so I
| wonder if anyone knows how they dealt with that. A different blur
| method that is not reversible?
| lewiscollard wrote:
| For the case of text, blur can be brute forced.
|
| If you redact, say, a credit card number with a blur, and I
| know what typeface the number would have been written in, and
| have a reasonable guess as to your blur radius, it might not be
| infeasible to compare against the blurred version of every
| possible credit card number.
|
| If you redact an email address with a blur, brute-forcing every
| possible email address will be harder.
But if someone (say)
| leaks information to you, and you merely blur out their
| address, it's not infeasible that someone else could apply the
| same blur _to a known suspect's email_ to verify whether it
| was them or not.
|
| Of course, with a large enough blur radius it's not an issue.
| Still, a non-zero number of times, it's been done badly enough
| that I've been able to mostly "reverse" a blur by just squinting
| and sitting back a few feet.
|
| Always redact text with solid blocks.
|
| I don't know how feasible this approach would be for human
| faces. I think Signal has blurred enough to make such an
| attack infeasible.
|
| I also don't think it's sufficient; if you don't want someone
| to be identified, _don't take photos of them_, full stop
| and/or period. Take the photo at the top of the blog post. Who,
| on that day, had a backpack with that type of strap, a blue mask
| in exactly that shade of blue, that haircut, and that exact BLM
| t-shirt, in that place at that time of day? That could be
| sufficient information for a "fingerprint", though maybe not
| deanonymisation.
| barbegal wrote:
| You are correct that a standard Gaussian blur can be reversed,
| except along the edges, where data is effectively lost outside
| the blurred rectangle. In this case the radius of the blur is
| large enough that a lot of data will be lost. Combined with JPEG
| compression removing a lot of information too, reversing this
| blur should be impossible.
|
| A better blur algorithm (in that it can easily be proven not to
| be reversible, and is faster to compute) is to divide the area
| to be blurred into a small number of cells (9, 16 or 25), get
| the averaged colour in each cell, and then apply an
| interpolation between those colours as your output. This
| algorithm is essentially O(n), where n is the number of pixels
| to be blurred.
You can easily prove
| that the information in the image is at most 3 bytes (one per
| colour channel) * 25 (number of cells) = 75 bytes, which is not
| enough to encode a face. However, it may be enough to encode
| some limited details (such as skin colour, distinctive clothing
| etc.), so it is always better to use a black box.
| regularfry wrote:
| You're right about information being lost at the edges, but I
| do wonder if that leaves a region in the center of the image
| that's got enough information to be recognisable. There's one
| way to check, I guess...
|
| Also I can't help but wonder, in a case like this where
| you've got the rest of the image, whether the pixels around
| the border of the blurred region are useful. There's going to
| be a probability that they're a similar colour to the outer
| ring of pixels that got blurred, and that might give you
| enough to start working inwards.
| contravariant wrote:
| Provided you know the exact method, you can _in theory_
| recover even the edges. Although this is very numerically
| unstable, to the extent that double precision might not
| be quite enough. That said, that's just the theoretical
| exact inverse. With proper regularization you might be able
| to recover far more (although with a complex prior like a
| neural network it becomes debatable what information you are
| recovering and what information you are putting in yourself).
|
| Side note: even with a mere 25px image (effectively) of
| someone's face, I'm not sure it leaks as little information
| as you think it does. Just 33 bits would be enough to
| uniquely identify anyone on Earth, let alone 75 bytes.
| Practically you wouldn't be able to recover more than some
| basic estimates of skin colour and distance between the eyes
| etc., but in extreme cases that might still be too much.
| rbinv wrote:
| The blurs shown on the page can most certainly not be reversed,
| because the information has been lost.
| | Things like swirls can, though:
| https://thelede.blogs.nytimes.com/2007/10/08/interpol-untwir...
| nullc wrote:
| That's like saying that JPEGs can never be displayed because
| information has been lost.
|
| The belief that you cannot identify someone from a blurred
| face is an extremely strong assumption that is just begging
| to be demolished by some sufficiently advanced technology.
|
| In particular, if you only need to go from a list of 10,000
| candidate persons (thanks, cellphone mass surveillance) to
| three or four candidate persons (shoot them all and let god
| sort it out), then I think it is fairly likely that you could
| do so with more or less existent technology (essentially, use
| machine learning to transplant faces from DMV photos into the
| scene, then redo the blur and select the most likely
| matches).
|
| Think of it this way: if you want to winnow 10k candidates
| down to four people, you need to extract less than 12 bits of
| entropy. It's not trivial, because the scene, pose, lighting,
| etc. make all your measurements noisy and non-independent.
| teekert wrote:
| Certain blurs can indeed be undone; [0] is just one link.
| Search for something like "undo gaussian blur point spread
| function". There are limits though.
|
| [0] https://en.wikipedia.org/wiki/Deblurring
| futurix wrote:
| An actual blur that directly combines multiple pixel values
| cannot be reversed. Things like swirls and motion "blurs"
| potentially can be -- but I wouldn't even call those blurs, as
| they are more of a directional transformation.
| CGamesPlay wrote:
| Hmm, given we know it's a face, and we know their skin tone
| from the rest of the photo, I wonder what a computer would be
| able to reconstruct... Any papers about this?
| johnbellone wrote:
| I don't think so, based on what we are seeing in the link.
| It isn't really a blur at that point.
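nullc's "redo the blur and select the most likely matches" attack needs no deblurring at all. A toy sketch of the idea, with loudly hypothetical ingredients: random arrays stand in for a database of candidate photos, and a generic FFT Gaussian stands in for whatever kernel the app actually uses. If the attacker can re-run the same blur on every candidate, comparing outputs picks out the target directly.

```python
import numpy as np

def blur(img, sigma=4.0):
    # periodic Gaussian blur via the FFT (a stand-in for the app's kernel)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2 * np.pi**2 * sigma**2 * (fy**2 + fx**2))
    return np.fft.ifft2(np.fft.fft2(img) * H).real

rng = np.random.default_rng(1)
candidates = rng.random((100, 32, 32))   # 100 candidate "face" images
target = 37
# the attacker only ever sees the heavily blurred, 8-bit-rounded target
observed = np.round(blur(candidates[target]) * 255)

# Re-blur every candidate and keep the closest match: no inversion needed.
scores = [np.abs(np.round(blur(c) * 255) - observed).sum() for c in candidates]
assert int(np.argmin(scores)) == target
```

This only works when the candidate set is small and the attacker can reproduce the blur pipeline, which is exactly the 10k-to-4 winnowing scenario rather than blind reconstruction.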
| Arnt wrote:
| "For example, an algorithm may analyze the relative
| position, size, and/or shape of the eyes, nose, cheekbones,
| and jaw" etc., says Wikipedia.
|
| You can reconstruct a plausible face by deblurring, i.e. one
| that looks sharp and human. But if you want to identify
| someone, having a plausible picture with a pair of eyes in a
| plausible position doesn't help; you need a fairly accurate
| assessment of the distance between the correct eyes, and
| that's susceptible to loss of information during blurring.
| crazygringo wrote:
| That's false. There's an entire field dedicated to reversing
| blur. Even Photoshop uses techniques like this.
|
| https://en.wikipedia.org/wiki/Deconvolution
|
| https://en.wikipedia.org/wiki/Deblurring
| pornel wrote:
| They use a very large blur radius. At this radius, rounding to
| 8 bits + lossy image compression should be destructive enough.
|
| It's impossible to tell from the screenshot, but if they're
| smart, they should have an explicit degradation step before
| blurring (e.g. pixelate/lower the resolution first).
| thih9 wrote:
| Context, i.e. sample link about reversing blur (not just
| swirl): https://news.ycombinator.com/item?id=4679801
| simias wrote:
| Is the pattern on these masks meant to confuse facial recognition
| algorithms, or is it just for looks?
| ciarannolan wrote:
| Probably the latter. The pattern doesn't really matter when 85%
| of the facial features are covered by cloth.
| yters wrote:
| Why would peaceful demonstrators need to hide their identity?
|
| I have been to numerous peaceful protests in the US, even been
| attacked by observers, and have never had to hide my identity.
|
| Additionally, in a large crowd where most will not hide their
| identities, this app is useless.
|
| The only use case I can imagine is a one-to-many communication
| likely to be frowned on by authorities, which sounds like the
| coordination of illegal activity, such as violence and looting.
| | I wonder if any website where such
| techniques are popularized would consequently be considered an
| accessory to whatever illegal activity is being coordinated?
|
| And even if not, as owner of such a platform, it would not rest
| easy on my conscience to know my site is being used to help
| coordinate activity that will hurt and harm a great many innocent
| people.
| 2OEH8eoCRo0 wrote:
| Devil's advocate here, but if I were a Nazi and wanted to
| peacefully protest, I'd hide my face. If I were protesting for
| any socially unacceptable fringe group, I'd rather hide my face.
| yters wrote:
| That is precisely the sort of group I was protesting with,
| hence why I was attacked, and I had no need to hide my face
| because we were not doing anything illegal.
|
| And the US is not Nazi Germany or the CCP. If it were, face
| blur filters would be the least of your concerns. This only
| makes sense in the context of conducting illegal activity in
| a lawful democracy.
| regularfry wrote:
| Only if you think the lawful democracy is perfectly
| implemented, and we know that's not true.
| yters wrote:
| No place is perfect, but if we compare to, say, the CCP,
| the US is still orders of magnitude better.
| regularfry wrote:
| "Over there is worse" is not equivalent to "over here is
| safe."
| yters wrote:
| "over here is not perfect" is not equivalent to "over
| here is not safe" :)
|
| I just think we need to look at what we've got compared to
| most places and times, and not be too quick to throw out
| the baby with the bathwater.
| regularfry wrote:
| > "over here is not perfect" is not equivalent to "over
| here is not safe"
|
| Yes, it is.
| yters wrote:
| I doubt it. You seriously think police will go out of
| their way to look through these photos and arrest
| peaceful protesters?
| | On the other hand, in China,
| just having Signal or the like on your phone is enough
| to earn a stay in their concentration camps and some
| involuntary organ donation before getting disappeared for
| good.
|
| I would say there is at least a slight (very slight, mind
| you ;) difference between the two situations.
| regularfry wrote:
| > You seriously think police will go out of their way to
| look through these photos and arrest peaceful protesters?
|
| Given everything else they seem to be getting up to, why
| take the risk? Especially when Facebook will do the hard
| job of tagging folks for them. I certainly know of police
| keeping their own photographic records of peaceful
| protestors, so why contribute to the problem?
|
| Also, why assume it's only the police a protestor might
| be worried about?
|
| > On the other hand, in China
|
| Don't care. Totally irrelevant. This is not a comparative
| exercise, and you can stop using it as a cheap deflection
| now.
| ajmurmann wrote:
| Even peaceful protesters get attacked at times. Just last
| weekend a car drove into peaceful protesters twice in Portland,
| OR. A few months ago, someone who regularly organized
| counter-protests against heavily armed white supremacist
| protests got run over by a car and died as he left a pub known
| to be frequented by leftists.
|
| Look at all the cases of unprovoked, retaliatory police violence
| over the last week.
|
| I understand why people are scared and want to stay anonymous.
| The US might not be run by the Nazi party or the CCP yet, but
| do I want to bet my safety that it won't be in the next ten
| years or so? Especially given the trend of the last few years.
| vsareto wrote:
| Can someone school me as to why we'd use blur when you can just
| put a solid block of pixels of the same color over the face? The
| hard part is face detection, right?
| ahelwer wrote:
| Blur looks better.
| bryanmgreen wrote:
| I wonder how hard it would be to replace the face with
| something akin to Photoshop's content-aware fill, then blur the
| box into obscurity?
|
| Keeps the aesthetics of the image but also removes the face
| entirely.
| _wldu wrote:
| I don't understand the need for this. There is nothing criminal
| or embarrassing about being in public or participating in a
| peaceful protest. Why is this feature needed?
| seebetter wrote:
| There is no purpose for this. I was in the protests and amid
| the looting in LA. You're being photographed from dozens of
| different angles.
|
| I love Signal, but it's so clunky and broken. Telegram is much
| more fluid, despite being less secure.
|
| And the mayors in LA let the looting happen purposely.
| mc32 wrote:
| I wouldn't be overly surprised if, were there protests on
| the right (return to work, for example) which resulted in
| violence, that would end up with calls for pulling this
| feature -- maybe I'm too cynical due to the nature of the
| politics of speech over the last few years.
| mariocesar wrote:
| Retaliation. As happens to HK protesters.
| Spivak wrote:
| Because people fear retaliation from both the cops and their
| fan club. We're talking about police that (in my city)
| flipped, ransacked, and destroyed tables set out by volunteers
| to give protesters food, water, first aid, and sunscreen.
|
| The last thing you want is to find photos or videos of yourself
| on a right-wing YT channel, because you will get doxxed,
| harassed and threatened.
| mc32 wrote:
| What kinds of people get doxxed? Your average protester in a
| march, or the independents who go off script and attack
| bystanders and observers?
|
| In the ones I recall, like the bike-lock incident in Berkeley,
| it was the extremely violent who got doxxed on the chans, not
| your average protester who isn't smashing things.
| netsharc wrote:
| Not in the US, but I know someone whose name ended up on a
| list being spread around among right-wing groups as a "left
| activist" because he was on FB a lot, replying to
| anti-refugee/anti-Muslim comments and trying to educate the
| posters.
|
| Imagine having your face online, plus the resources of the
| police...
| ersii wrote:
| What country did that happen in? Germany? Did anything
| happen to that person, besides being on that list?
| Spivak wrote:
| Ahh yes, only the people that "deserve it" are subject to
| extrajudicial threats of violence by the police and by
| lunatics on the internet with lots of guns and too much
| free time.
|
| Don't play the game of trying to shift the focus onto what the
| victim did to "deserve" fearing for their life. If someone,
| anyone at all, is threatened or harassed, they are a victim.
| They can also be a shitty person, but these don't cancel
| each other out.
|
| The truth is that the people who get harassed and doxxed
| are fairly arbitrary, and it has more to do with whatever
| unlucky soul the host decides to pick on that day than any
| kind of rational process. Trying to figure out how
| internet bullies choose their targets won't get you a
| satisfying answer other than "people who look like an easy
| target to make fun of."
| simias wrote:
| At the risk of veering slightly off-topic, I really dislike the
| modern internet culture that judges it perfectly
| acceptable to post people's faces on public websites for all to
| see without their explicit consent. I'd hate to find myself at
| the top of the Reddit frontpage or in the latest viral video,
| even if I wasn't doing anything particularly embarrassing.
|
| Although I suppose that if you're participating in a protest
| that's not really the same thing; the whole point is to be seen,
| after all. And Signal is generally used for private messaging,
| so it's less of an issue.
So overall I guess I agree with you;
| I guess the Signal devs feel strongly about the current events
| and wanted to do _something_ to help.
| lm28469 wrote:
| > There is nothing criminal or embarrassing about being in
| public or participating in a peaceful protest.
|
| Then why are they beaten up, gassed and arrested?
| stevehawk wrote:
| how else am i going to get to my local church for a bible
| photo op?
| episode0x01 wrote:
| Why is encryption needed? Or privacy in general? If you have
| nothing to hide, you have nothing to fear.
|
| /s
| billme wrote:
| Without solid proof that this is impossible to circumvent, this
| may be more dangerous than nothing.
|
| Here's an example of AI being able to identify a blurred face:
| https://twitter.com/ak92501/status/1267609424597835777
|
| Identifying an individual is not just about a face, but a number
| of factors that are much more complex and very hard to account
| for in a systematic way.
|
| ---
|
| If Signal is really concerned about allowing individuals to
| control the information they leak, they need to prioritize
| releasing the feature that will allow users to use Signal without
| providing phone numbers; one of their staff recently publicly
| stated this is finally likely to become a feature. Not to mention
| stop repeatedly asking the user to provide their name, access
| to contact lists, etc.
| Krasnol wrote:
| Besides the fact that Signal's "blur" doesn't even look remotely
| close to your example, they're working on the phone number
| issue:
|
| > PINs will also help facilitate new features like addressing
| that isn't based exclusively on phone numbers, since the system
| address book will no longer be a viable way to maintain your
| network of contacts.
|
| https://signal.org/blog/signal-pins/
| brnt wrote:
| What is left to prove about (Gaussian) blurring?
| nine_k wrote:
| Gaussian blurring does not seem to lose enough information.
|
| A hard 3x3 pixelization would be much more reliable, if less
| aesthetically pleasing.
| Robotbeat wrote:
| You could make it aesthetically pleasing. Just make sure
| the data is boiled down to just a handful of bytes first,
| and there won't be any way to reverse it.
| billme wrote:
| The intent of the blur is to hide the identity of the
| individual whose face has been blurred. The average human sees a
| blurry face and assumes the person's identity is safe.
| Research has repeatedly shown this is false, especially when
| combined with other data.
|
| Here's another example of such research:
|
| https://www.wired.co.uk/article/facial-recognition-systems-c...
|
| >> "researchers said only 10 fully-visible examples of a
| person's face were needed to identify a blurred image with
| 91.5 per cent accuracy."
| brnt wrote:
| A Gaussian blur is not reversible; information is lost. No
| research shows otherwise, because it's a mathematical
| property of the Gaussian transform.
|
| Some methods can be used to find one of many solutions to
| the blur, where certain high-frequency information is
| preferred over others because we know the end result looks
| like a human face, and not just any solution. But that only
| means you can get out many possible faces; if your
| reconstruction tool only gives you one, it was simply
| over-trained.
|
| [edit] You just updated your post. If you have tagged,
| unblurred photos of the face in your blurred photo, you can
| (as expected) constrain the end solutions further. What's
| not clear to me from the paper is whether or not the
| blurred face was tagged as well. Scenario S3 seems most
| likely the type of scenario encountered in surveillance
| programs, where the results are nowhere near 91% accurate.
| cochne wrote:
| Ironically, I think a Gaussian blur is one of the few
| transforms that should be totally reversible.
Since the
| Fourier transform of a Gaussian kernel is also Gaussian,
| it is nonzero everywhere, meaning you can in theory just
| divide the Fourier transform of the image by the Fourier
| transform of the kernel to get the original back :)
| nitrogen wrote:
| The quantization to the image colorspace and depth is
| probably the limiting factor, more so if dithering is
| used.
| sitkack wrote:
| As the resolution increases, the ability to reconstruct a
| lower-resolution image goes up as well, which will be
| more than enough for most identification purposes.
|
| Security as an accidental quality of a system is not
| security.
| jacobolus wrote:
| > _A Gaussian blur is not reversible_
|
| This might be narrowly true (it's hard to recover
| precisely the original image), but is not really an
| accurate summary in this context, if the only goal of
| reversal here is to recognize the face. Deconvolution
| will quite effectively undo Gaussian blur.
| https://en.wikipedia.org/wiki/Deconvolution
| https://en.wikipedia.org/wiki/Richardson-Lucy_deconvolution
| https://en.wikipedia.org/wiki/Blind_deconvolution
|
| In Photoshop, the deconvolution tool is called "Smart
| Sharpen", and has a preset for a Gaussian PSF.
| pfortuny wrote:
| Wait: information is lost if the blur is truly a
| Gaussian process. The simulation of blur by means of a
| convolution can perfectly well be reversible.
|
| Image blur is not a Gaussian process.
| young_unixer wrote:
| Are you saying that convolution with a Gaussian kernel is
| not real Gaussian blur?
|
| I'm legitimately asking. I'm really ignorant about this
| subject.
| pfortuny wrote:
| No: it is a simulation, because it is a discretization,
| and the map can be injective (or "almost so").
| teenbear wrote:
| In theory it is; in reality there is discretization of
| the signal, and noise.
| TJSomething wrote:
| The resolutions on those are way higher than what Signal is
| doing.
It's not surprising that a neural network can give a
| decent guess at what a face can look like. Faces don't have
| that much entropy. But you can blur them out if you get it down
| to like 4x4 pixels.
|
| Anyway, if you want scarier panopticon stuff, you should look
| into gait recognition, which is way harder to censor.
| StavrosK wrote:
| That's not removing blur, that's making a face (out of
| millions) that matches the same pixelization. There's no
| telling what the original face was, and it's disingenuous that
| they don't show you the original photo.
| [deleted]
| geoelectric wrote:
| You could, however, probably tile the downscale "rainbow
| table" in a way that would let you predict some degree of
| novel original from a sufficient number of tile samples.
|
| The thing about downscale blur is that it's nearest-neighborish,
| so it can be addressed with divide+conquer, as blur effects stay
| local. You'd end up with a fairly large combination of
| potential tiles. Some wouldn't be viable faces, but we have
| classifiers for that already.
|
| Entire combination trees can be culled that way to make the
| problem radically smaller, as long as you know it's supposed
| to be a face, so I don't know how hard it would really be.
| It's possibly pretty easy to come up with the N possible
| original faces with enough certainty to then match with
| potential targets of interest and make N small enough to use.
| StavrosK wrote:
| Isn't that exactly what the paper is doing?
| felideon wrote:
| At the end of the video they posted[1], they show the
| original photos of the authors, the downscaled inputs, and
| the outputs.
|
| [1] https://twitter.com/ak92501/status/1267609090689323008
| StavrosK wrote:
| Ah, thanks, I missed that in the Tweet.
| tommyderami wrote:
| They have a sandbox where you can run the code yourself--I
| don't think we're at dystopian surveillance level just yet
| https://imgur.com/a/IfdLWau
| erikbye wrote:
| You can be identified by gait alone.
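The Fourier-division argument debated above can be sketched numerically. A minimal NumPy example (illustrative only, not Signal's implementation): in floating point, dividing by the Gaussian transfer function recovers the image almost exactly, but quantizing the blurred image to 8 bits, as any saved photo would be, wrecks the naive inverse filter, exactly the limiting factor the quantization comments point at.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for a grayscale photo

# Gaussian transfer function: the Fourier transform of a Gaussian
# kernel is also Gaussian, hence nonzero everywhere.
sigma = 1.0
f = np.fft.fftfreq(64)
fx, fy = np.meshgrid(f, f)
H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

# Blur = multiply by H in the Fourier domain (circular convolution).
blurred = np.fft.ifft2(np.fft.fft2(img) * H).real

# Naive inverse filter: divide by H. In float this is near-exact.
recovered = np.fft.ifft2(np.fft.fft2(blurred) / H).real

# Round the blurred image to 8 bits, as in any saved image, and the
# same division massively amplifies the rounding noise instead.
quantized = np.round(blurred * 255) / 255
broken = np.fft.ifft2(np.fft.fft2(quantized) / H).real

print(np.abs(recovered - img).max())  # tiny
print(np.abs(broken - img).max())     # large
```

With a larger sigma even the float-precision inverse fails, because the transfer function underflows at high frequencies; practical deconvolution tools therefore regularize (Wiener, Richardson-Lucy) rather than divide outright.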
| notatoad wrote:
| from a still photo?
| anigbrowl wrote:
| Unreliably, and only if they have a clear view of your whole
| body, which isn't likely in crowds.
| erikbye wrote:
| I disagree... this is an area I research, working on a
| surveillance system. But even so, if gait alone were not
| enough, modern recognition software can easily single out a
| subject in a crowd with just seconds of footage, and through
| thousands of cameras track said subject throughout the city.
| Footage will be plenty. At some point during tracking the
| subject is likely to reveal his face, too, or other critical
| information. If your voice is picked up, it too will be used
| for positive identification. When you add in the people in
| close proximity to the subject, things get even easier:
| recognize one of the collaborators the target subject
| affiliates with, and identification is often a simple
| narrowing scan of amassed OSINT away, done in real time, of
| course. Or simply track the subject to an address, maybe even
| his home, and swoop in.
|
| I want to also clarify what gait recognition is. For those
| not that familiar with it, a common misconception is thinking
| it is limited to analysis of how you walk. It is not; factors
| in gait recognition: height, weight, build and proportions,
| sex, age, clothes (including type---dress, shirt, etc.---
| shape and colors), emotions displayed, facial tics, unique
| mannerisms. The analysis of your actual walk/gait is
| incredibly deep and consists of hundreds of variables, too
| many for me to mention here. I might blog about it if it is
| of interest to anyone, but a few examples: cadence, the
| angles of just about anything you can imagine possible to
| measure, spacing between feet and knees, arm swing distance,
| etc.
|
| For anyone familiar with Haar-like features it should be easy
| enough to understand that with enough features within
| threshold you can ID just about anything.
|
| This is all yesterday's tech, by the way.
|
| My point: be very cautious of attending anything that might
| destroy your future. Do not think a mask or blurring protects
| your identity; that is extremely naive.
| [deleted]
| fit2rule wrote:
| In case anyone feels like playing around with it, a friend and I
| made a project to do auto-blurring of faces with OpenCV a few
| years ago, with both iOS and node frontends ..
|
| iOS module:
|
| https://gitlab.com/seclorum/groupie/-/tree/master/ios/groupi...
|
| Main node.js app:
|
| https://gitlab.com/seclorum/groupie/
| gregsadetsky wrote:
| The first URL doesn't seem to work, and the second URL brings
| you to an "empty" project. Just to make sure -- maybe it's just
| me?
| fit2rule wrote:
| Hmm, I guess I got the URLs wrong, and can't edit now:
|
| https://gitlab.com/seclorum/groupie/
|
| Works on Linux and Darwin, just type 'make'. ;)
| JosephRedfern wrote:
| The project is public, but the repository is probably
| private. We can't see any of the code.
| fit2rule wrote:
| Hmm, dunno how that happened .. maybe it's better now?
| JosephRedfern wrote:
| Yes, fixed!
| mercora wrote:
| no. i don't know gitlab well enough, but maybe the project is
| not really set up as public.
| supernihil wrote:
| instead of blurring faces we should be replacing them with
| computer-generated faces, doubling up on fuzziness and
| destroying the possibility of easy detection: "it's been
| blurred, I must take out my best guessing tools then"
| kodisha wrote:
| Honestly, I can't keep up with acquisitions and full e2e
| encryption claims; then those claims get debunked, and you
| can't find out what the truth is.
|
| Based on all the information out there, in the year 2020, what
| is the most secure IM app?
|
| What do you recommend to your friends if they care about
| privacy?
| paddlesteamer wrote:
| Other than Signal, I also recommend Threema. It doesn't rely on
| mobile numbers, it's possible to configure it to run on your
| private server, etc.
It's just not free (as in beer). Also, it's from
| Switzerland, a country that respects your privacy more than the
| USA[0].
|
| [0]:
| https://www.reddit.com/r/privacy/comments/gukg5z/threema_win...
| ohlookabird wrote:
| It doesn't look like they are open source, does it?
| https://threema.ch/en/faq/source_code
| dingaling wrote:
| > Also, it's from Switzerland, a country that respects your
| privacy more than the USA
|
| Well gouv.ch might, but Crypto AG was an NSA front for
| decades, so I wouldn't be so certain about the companies.
|
| If I wanted to lure people in on the pretence of security and
| privacy, being Swiss would be good bait.
| caf wrote:
| CIA front, I believe.
| axegon_ wrote:
| Generally, Signal is a solid option.
|
| In addition, I have a private mattermost server, which is
| heavily restricted in terms of firewall and users, but this is
| reserved only for a very small, selected group of people that I
| trust and am 1000% sure know what they are doing.
| PascLeRasc wrote:
| Beyond this, is there a somewhat complete, recent guide for
| low-to-medium technically literate people to secure themselves,
| in terms of both privacy and security? I'm going through the
| easy steps now, like deleting Facebook and using 1Password,
| Firefox, ProtonMail, FileVault. Tor is too complicated for me to
| figure out, though. Is anyone aware of other "good enough"
| practices?
| fsflover wrote:
| Matrix: https://matrix.org/
|
| Unlike Signal, it does not rely on a single server.
| unicornporn wrote:
| And they're working on P2P.
|
| http://matrix.org/blog/2020/06/02/introducing-p-2-p-matrix/
| medecau wrote:
| Use Tor, use Signal.
| https://twitter.com/search?q=from%3A%40thegrugq+signal
| upofadown wrote:
| If you want to be completely sure, you can't beat boring old
| PGP. It runs on top of XMPP and is too simple to hide anything
| in.
|
| The new XMPP hotness is OMEMO. Conversations is a good mobile
| client that supports both PGP and OMEMO.
| xorcist wrote:
| You probably meant to say OTR, not PGP?
|
| OMEMO is five years old and supported by all major clients,
| so it's not very "hot" anymore.
|
| OTRv4 is somewhat hot and new. It's not in wide use (yet) and
| it's unclear if it is enough of an improvement to take over.
| upofadown wrote:
| >You probably meant to say OTR, not PGP?
|
| No. OTR depends entirely on fingerprints for identity. The
| poster was referring to the difficulty of knowing for sure
| that you are really end to end. PGP has the advantage here
| in that you can be completely sure, because you can exchange
| the keys yourself.
| hiq wrote:
| If you dig deeper, you can easily find that the consensus among
| IT security experts is Signal for privacy / security.
|
| Matrix is interesting and I hope it will catch up eventually,
| but currently it is not E2EE by default and it leaks way more
| metadata than Signal. These points make it strictly worse than
| Signal for 1:1 IM.
|
| The advantage of Matrix is in federation, but regarding privacy
| / security, it is still behind (much to my regret).
|
| Other apps that could provide similar guarantees in theory are
| less used and have received less scrutiny, so more
| not-yet-exposed bugs and design flaws should be expected. Other
| apps have been relatively well studied, but have well-known
| design flaws that also make them worse than Signal (WhatsApp
| and Wire leak way more metadata).
| Forbo wrote:
| Matrix is E2EE by default now and has been for the last
| month: https://matrix.org/blog/2020/05/06/cross-signing-and-
| end-to-...
| teekert wrote:
| I recommend Signal. Sure, something self-hosted would be nicer
| (provided I can be trusted to get encryption correctly
| implemented, my server updated, etc.), but Signal hits the best
| balance for me between trust, hassle and features.
| quantummkv wrote:
| Matrix is self-hostable and has e2e encryption support.
| egberts1 wrote: | I've pulled down both the backend for Matrix and Signal and | find that the LOC is a lot simpler with Signal. Plus with a | bit of work, selfhosting Signal would require the mobile | apps to be configurable (or fixedly reconfigured) toward | your own backend server. | alias_neo wrote: | You can self-host Signal if you want. It's not easy, or fun, | and you'll need to replace the dependencies on cloud tools if | you want to host it on bare metal, but it can be done (I have | done it). | | Bear in mind, the server is open source only in name, the | state of documentation and configurability is extremely | hostile towards running it yourself, to the point that the | only way to configure it to run correctly requires reading | the code to find the type, size, syntax and everything else | about every piece of configuration because none of it is | documented or clear. | searchableguy wrote: | Try session - https://getsession.org/ | | Can someone explain the downvote? I am not complaining but are | there security problems with it? Could you explain or highlight | them? | emptysongglass wrote: | Don't ask why you were downvoted. It's right there in the | Hacker News Guidelines. [1] | | Secondly, you were probably downvoted because you didn't add | any content to the discussion other than a link. | | Session goes a long way to fixing Signal's problems like its | reliance on a centralized server and phone numbers but it's | still very early days with an unproven product. Messages | still get lost all the time and if you thought it was hard to | find your friends on Signal, it's the Sahara Desert on | Session. You'd be putting in months and months of fervent | pontification to friends and family you've probably just | managed to migrate to your other privacy chat platform of | choice. 
|
| [1] https://news.ycombinator.com/newsguidelines.html
| kreetx wrote:
| Wire should also be e2ee, but I'm not sure if you can self-host
| the server (they seem to have started open-sourcing it years
| ago, but not sure if that is ready).
| noeltock wrote:
| Impressive how quickly they've reacted.
| zeeone wrote:
| They probably had been working on this feature for some time
| and are using the current times as an opportunity to introduce
| it. It's hard to believe they had the capacity to react to the
| traffic increase and develop a brand-new feature in less than a
| week.
| simias wrote:
| If you look at the code it's not that far-fetched. The facial
| recognition uses "off the shelf" third-party libraries, and a
| Gaussian blur isn't exactly rocket science.
|
| I don't know how much work goes into making a new Signal
| release, but in terms of raw coding it's like two days of
| work.
| chinesempire wrote:
| wouldn't it be easier and more secure to put a noise-filled
| rectangle over the faces?
| rtkwe wrote:
| You can, but it looks bad and is distracting; a good blur
| doesn't distract from the rest of the photo, and if it looks
| good enough, people are more likely to actually use it, which
| is also important. You can also build a blur that discards
| enough information that it's not reversible, and it /looks/
| like they did that. Signal has been pretty thoughtful about
| security so far, so I doubt they missed the research about
| simple blurs being insufficient to defeat facial recognition.
| thaumasiotes wrote:
| Yes.
| sjwright wrote:
| Would it be practical to take a facial recognition algorithm and
| use it to warp the identifying characteristics of faces in a
| scene such that the faces lose enough uniqueness to make facial
| recognition ineffective?
|
| My understanding of facial recognition is that it operates on
| relative positions of facial elements.
If you can "delete" this
| uniqueness from the source material by warping faces towards a
| limited handful of generic shapes, you make the video less useful
| to Government intelligence.
|
| You could still blur the result, but you might be able to get
| away with less blur. Remember that it's important to see that
| people have faces, otherwise they can be more easily dehumanised.
| chooseaname wrote:
| There are face-blender-type algorithms that merge X number of
| images of faces. Could use something like that. Grab 10,000
| facial images off the net, merge them, then use that image in
| every shot, for every face, so everyone looks the same.
| regularfry wrote:
| Just DeepFake Nicolas Cage onto everyone.
| sitkack wrote:
| Excellent idea, but I think Snowden would be more
| appropriate.
| thrasumachos wrote:
| Malkovich Malkovich, Malkovich?
| sitkack wrote:
| Snowden is actually a character that Nicolas Cage is
| working on right now. Cage has such a dedication to his
| craft.
| Doxin wrote:
| Ideally you'd run something like thispersondoesnotexist to
| generate random faces to paste onto people _before_ blurring
| it. That way, if you somehow manage to revert the blur, there's
| still no chance of revealing the original person.
|
| Of course humans are pretty good at filling in detail, so with
| a sufficient blur you can get away with surprisingly poor
| approximations of a human face.
| malux85 wrote:
| Yeah, or maybe someone could implement a feature to somehow
| distort, or "blur" the faces if you will.
| lanevorockz wrote:
| At some point we have to assume this is about defending people
| that are committing crimes. Nice to see that the radicalisation
| caused by left-wing social media is finally getting to its final
| conclusion.
| itchyjunk wrote:
| They are also distributing physical masks? It's not even a
| filtering-type mask, is it? How odd.
|
| Is the blurring some type of encryption that the user can
| unblur, or is this a one-way road?
I am just thinking of some odd
| circumstance where, say, they realize they had a picture of a
| vandal somewhere. But I guess you can then be forced to unblur
| everything by law enforcement, which might be undesirable in
| some cases.
|
| Slightly off-topic from the article: I was reading the stingray
| discussion here on HN yesterday. Signal supports some sort of
| mesh network communication, right? Is that a workaround for
| stingrays? Thanks.
| Myce wrote:
| I was also surprised by the physical masks. It seems they are
| intended to 'encrypt your face', which gives me the impression
| it should make you unidentifiable.
|
| When peacefully protesting, I can't imagine why you would need
| to hide your face.
|
| If not peacefully protesting and/or looting, such a mask has
| uses for criminals, but I can't imagine that's the intention of
| Signal.
|
| I think in free, democratic countries you shouldn't be allowed
| to hide your face, so you can be held accountable for your
| deeds.
|
| In non-free countries I can imagine you would need to hide your
| identity, but would Signal be able to distribute them there?
|
| Questions, questions ;)
| yule wrote:
| Is a police officer in a free, democratic country allowed to
| hide their badge number?
| thaumasiotes wrote:
| Depends what you mean by "allowed". If the rules say "you
| definitely can't do this", but there is no penalty for
| going ahead and doing it anyway, is it allowed?
| ictebres wrote:
| By the looks of it, the US is pretty non-free when it comes to
| peacefully protesting. So I guess this feature is very timely
| and directed towards users there ;)
| yters wrote:
| What makes you think that?
|
| When I think non-free, I think of the CCP prohibiting
| peaceful remembrance of Tiananmen square.
| erikbye wrote:
| > By the looks of it, the US is pretty non-free when it comes
| to peacefully protesting
|
| What is your definition of peaceful protest? What we see in
| the US now is definitely not within my range.
|
| Trashing stores, looting, torching vehicles.
| vinay427 wrote:
| It's most certainly not just the US. In the (western
| European) country where I live, for instance, even a static
| protest or demonstration with no chanting or marching and
| only a few participants requires non-trivial and somewhat
| expensive police approval ahead of time. Most larger
| spontaneous events seem to just ignore this, and the police
| haven't generally responded violently, to their credit.
| elliekelly wrote:
| Saying only criminals would want to cover their face is the
| equivalent of saying only criminals worry about privacy: the
| old "if you aren't doing anything wrong then you have nothing
| to worry about" argument. I've never looted a store in my
| life and I don't ever plan to, but I still don't want images
| of my face stored in a police database or used in facial
| recognition software. Wanting to protect my right to privacy
| is not and cannot become a presumption of criminal intent.
| Forbo wrote:
| If you're looking for mesh-network encrypted chat, check out
| Briar.
|
| https://briarproject.org/
|
| https://www.youtube.com/watch?v=iRJ8vIh3dVU
| billme wrote:
| >> "Slight off topic from the article, I was reading about the
| sting ray discussion here on HN yesterday. Signal supports some
| sort of mesh network communication right? Is that a work around
| for sting rays?"
|
| I believe you're talking about Signal using "domain fronting" -
| which is unrelated to stingrays; more information is here:
| https://signal.org/blog/doodles-stickers-censorship/
|
| As for stingrays, here's a recent article on countermeasures:
| https://puri.sm/posts/taking-the-sting-out-of-stingray/
| rtkwe wrote:
| I don't think Signal has any mesh networking; there are other
| apps like Firechat and Bridgefy (haven't used either of them,
| just googling).
|
| As for the mask, it'll do a little bit for CS and mace,
| probably, with eye protection, but the goal is mostly protecting
| protesters by keeping them from being identified and retaliated
| against later. It's also way easier to make a buff-style
| covering, and it can be worn over many types of filtering masks.
| Welaa wrote:
| Www.linkdin.com
| seemslegit wrote:
| Cool! Now stop with the forced contact discovery.
| exo762 wrote:
| May I ask you to elaborate? AFAIK the only thing they are
| leaking about you is "is this phone number using Signal?". A
| single bit of information.
| cjf101 wrote:
| Not the OP, but from my perspective, encryption is helpful,
| but a good portion of security is anonymity, and Signal
| requires that you use and leak personally identifiable
| information to even start using it.
|
| It also informs you when people in your contact list are
| using Signal. It's probably not scanning through all of the
| phone numbers in Signal's database locally, so it is
| exfiltrating your contact list as well, exposing your
| network.
|
| Personally, I'd prefer a model where I am not required to
| place even that much trust in the messaging provider.
| seemslegit wrote:
| The fact that I've started using Signal might not be
| information I wish to share with other people who are also
| using Signal and have my contact info.
| Marsymars wrote:
| One of the best features of Signal, and one that massively
| helps adoption with the less tech-savvy crowd, is that you
| can set it as the default SMS app on Android; it then
| uses Signal for contacts with Signal.
|
| If you can't tell whether a contact has Signal, it would have
| to default to SMS - and when sending a Signal message (to
| either a phone number or, in the future, a non-phone
| identifier), there'd be no way to tell if you're sending it
| to someone with Signal or sending it into the void.
|
| Maybe that's a trade-off you'd be willing to make; I don't
| think it's cut-and-dry though.
| seemslegit wrote:
| Not saying it is cut-and-dry, but at the moment users don't
| get to make that tradeoff for themselves - Signal made it on
| their behalf, when it could have allowed users to choose on
| activation which of their contacts they wish to be
| discoverable to.
| lelandbatey wrote:
| I've tried this feature out and found that it doesn't do as
| good a job of blurring faces as I'd like, especially when those
| faces take up more of the frame. I posted some pictures here:
|
| http://lelandbatey.com/projects/signal_blur_comparison/
|
| Basically, I think they're using a constant blur size, which
| fails to adequately obscure faces that take up a lot of the
| image: when a face fills more of the frame, its features become
| larger, which requires even MORE blurring to obscure. And
| they're not doing "more blurring" as the area which needs
| blurring grows, or at least they aren't doing _enough_
| additional blurring.
 ___________________________________________________________________
 (page generated 2020-06-04 23:00 UTC)
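The constant-blur complaint in the last comment suggests a simple remedy: scale the obscuring step with the size of the detected face. A minimal NumPy sketch (the `pixelate_region` function and the box coordinates are illustrative, not Signal's implementation; face boxes are assumed to come from some detector, e.g. OpenCV's): coarsening every face region down to a fixed grid of block averages destroys detail in proportion to face size, unlike a fixed blur radius.

```python
import numpy as np

def pixelate_region(img, box, grid=8):
    """Coarsen the region box = (x, y, w, h) to a grid x grid set of
    block averages. The output carries at most grid*grid samples no
    matter how large the face is, so the amount of detail destroyed
    scales with face size, unlike a fixed-radius blur that leaves
    large faces comparatively recognizable."""
    x, y, w, h = box
    xs = np.linspace(x, x + w, grid + 1).astype(int)  # column edges
    ys = np.linspace(y, y + h, grid + 1).astype(int)  # row edges
    for i in range(grid):
        for j in range(grid):
            block = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            block[...] = block.mean()  # flatten block to its average
    return img

# A small face and a large face both collapse to 8x8 block averages.
img = np.arange(100 * 100, dtype=float).reshape(100, 100)
pixelate_region(img, (10, 10, 16, 16))  # small (hypothetical) face box
pixelate_region(img, (30, 30, 64, 64))  # large (hypothetical) face box
```

Whatever the face size, at most `grid * grid` averages survive, so a close-up face loses as much relative detail as a distant one.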