[HN Gopher] Proof of concept: end-to-end encryption in Jitsi Meet
       ___________________________________________________________________
        
       Proof of concept: end-to-end encryption in Jitsi Meet
        
       Author : jrepinc
       Score  : 521 points
       Date   : 2020-04-13 12:55 UTC (10 hours ago)
        
 (HTM) web link (jitsi.org)
 (TXT) w3m dump (jitsi.org)
        
       | cdolan wrote:
       | I just learned Jitsi is supported in part by 8x8. That gives me
       | concern and I'll have to research the ties there before
        | switching to Jitsi. I'm making this a top-level comment given
       | its length, but I wrote this after seeing Vinnl's question on
       | 8x8.
       | 
       | I was an 8x8 customer for about 10 years as a small/medium
       | business. I would never buy from them again. I now use Zoom, but
       | am considering switching given the recent privacy revelations.
       | 
       | Here was my 8x8 customer journey. Stick around - at the end I
       | highlight their shifty auto-renewal contracting practices and
        | dark patterns to force you to stay.
        | 
        | 2011-2015:
       | 
       | Startup phase. Was great to get a VOIP phone at this time! Still
       | needed desktop phones from Polycom for the system to work well.
       | Zero online meetings as far as I knew. Acceptable product, was
        | happy enough.
        | 
        | Around 2015 or so through about 2019:
       | 
       | Newer versions of iOS, that allowed for better phone integration,
       | began to pop up. 8x8 was viable as a mobile-only VOIP provider,
       | but lacked quality features for a growing business: - The ability
       | to retrieve recorded calls from a Mac required you log in with a
       | certain version of Safari that had Flash Enabled. Then, you could
       | only download X MB of calls at a time. All of this had to be done
        | manually. There was no API for reporting, just CSV exports. This
       | made calculating total customer service costs for our business a
        | massive pain. - If you edited your extensions online, _ALL OF
        | YOUR CALL RECORDINGS SINCE THE OPENING OF YOUR ACCOUNT WERE
        | DELETED_. We found this out the hard way, and there was no
       | warning that adjusting extensions would have this impact.
        | 
        | 2019 Q1 thru Q3:
       | 
       | I wanted to switch to Zoom, because the 8x8 apps had poor
       | usability from our experience, specifically as a remote-only
       | setup. I did not trust that a video product from 8x8 would be
       | acceptable, and I enjoyed using Zoom with other organizations I
       | am involved with.
       | 
        | Porting phone numbers to Zoom was really easy. It requires about
        | 1-2 weeks of paperwork and processing time (I think this is the
        | default no matter whom you are porting with). _However_ -
       | 8x8 tried to block the transfer for about 20% of our numbers,
       | saying that the information we provided was incorrect. After days
       | of back and forth, all the while our numbers were unavailable for
       | use and causing operational problems, the case was escalated and
       | resolved, simply because a new person read the original forms we
        | submitted and processed it correctly.
        | 
        | On the 8x8 contract & account management dark patterns:
       | 
       | 8x8 did not have a copy of the PDF of our agreement. I did have a
       | copy. The contract was for 3 years with 12 month auto-renewal
       | clauses.
       | 
       | Multiple times in 2018 I requested my account manager offer new
       | pricing, as our rates had doubled from $25/number to $50/number
       | in just 1 year. The cost was out of control. They never did offer
       | pricing, but continually played the Car Salesman move of "Let me
       | see what I can get my finance department to approve", come back
       | with a meager 5% discount, rinse, repeat... The account manager
       | would passively threaten that they will take my account to month
       | to month, "but the pricing controls will no longer be in place
       | and your rates could rise". Big deal, I thought, my rates are
       | already skyrocketing!
       | 
       | Multiple times in 2016-2018 I wrote to them asking if I had an
       | auto-renewal. They would not answer this question over email, but
       | instead forced me to call in.
       | 
       | When I called in ("...to a recorded line, for quality assurance
       | purposes..."), they would tell me I did have an auto renewal in
       | August each year for 12 months. I would tell them over the phone
       | to mark that I wish to cancel my auto renewal and continue on a
       | month to month basis. I would follow up with my 8x8 Account
       | Manager that I was told over the phone by the 8x8 Billing
       | Representative that my company _did not_ have an auto renewal. My
        | account manager never acknowledged these messages.
        | 
        | 2019 Q3:
       | 
       | I submitted a cancellation notice to 8x8. They wanted to charge
       | me an Early Termination penalty of more than half the year,
       | claiming I was in a 1 year auto renewal. _It took me over a dozen
       | phone calls for more than an hour each_ to get this resolved.
       | 
       | I've come to expect garbage contract patterns from legacy
       | companies. _However... THE WORST PART ABOUT THIS_ in my opinion
       | was that calls 1 through 11 to 8x8 customer support and billing
        | to resolve this were not successful, because I was
       | being very nice and understanding to the support person on the
       | other line. _ONLY_ when I started threatening legal action,  "I
       | do not care what the script is telling you to say, I must talk to
       | a manager", etc, did I get someone to waive these cancellation
       | fees and allow me to transfer my numbers.
       | 
       | I have done customer service oriented work, I understand the
       | drain it can have on people. I try to be overly nice to anyone I
       | have to call into because so many people are abusive to the
       | support agents trying to assist them. But at the end of the day,
        | whether it's Apple or 8x8, the only success I've had at getting
       | what I need in situations where the company is clearly playing
       | hide-the-ball, is to be threatening to the minimum wage worker on
       | the other end of the line.
       | 
        | And that is my takeaway - you can tell a lot about a company
       | from how they arm (or bind & toss overboard) their front-line
       | support agents to deal with a terminating customer contract.
        
       | shadowgovt wrote:
       | It's a nice tech demo, but it runs into the same problem that so
       | many of these systems run into: individual users don't want to be
       | arsed to self-manage their encryption keys. You can't solve the
       | UX on that, and users will ignore your service in favor of one
       | that doesn't require that of them.
       | 
        | As a service provider, you could keep their keys, but if they trust
       | you with their keys, why aren't they trusting you with being MITM
       | on encryption? Especially since if you have their keys you
       | already could.
       | 
       | Still, very cool technical demo, even if the odds of it
       | displacing something like Zoom are near nil.
        
         | drivingmenuts wrote:
         | At its most basic, encryption should be a thing that just
         | happens, out of sight and mind of the user. Yes, that leaves
         | vulnerability, but people generally don't want to have to deal
         | with it. Most of us are more interested in getting work done,
         | rather than fiddling with the tools.
        
           | mvanbaak wrote:
            | you mean, like having people remember to bring their house
           | keys when they leave so they can get back in? Yes, it's an
           | extra step that has nothing to do with the task I want to
           | complete, but the price I pay is totally fair for the
           | security I get for it.
        
             | shadowgovt wrote:
             | That's a really decent example, because if you forget your
             | house keys, you aren't forever denied access to your house
             | / your house doesn't collapse into a shower of noise, never
             | to be reconstituted.
             | 
             | If it did, you can pretty much guarantee people would
             | either stop using house keys completely and just leave the
             | key in the lock all the time or would come up with schemes
             | to make it impossible to lose the key (such as locking the
             | key in a tiny box, and then that box is secured with a key
              | that _doesn't_ cause the box to burst into flames if the
             | second key goes missing).
             | 
             | In non-digital life, we make the tradeoff of less-than-
             | perfect security for convenience, backed by the law (i.e.
             | any competent locksmith can break into your house, and the
             | things that keep them from doing it are (a) a general
             | societal understanding that cracking open someone's front
             | door without consent is a really dick move, and (b) if you
             | do it, you're trespassing and can go to jail).
        
           | diablo1 wrote:
           | > out of sight and mind of the user
           | 
            | Which can sometimes be dangerous, as more and more crypto is
            | designed to be set-and-forget. It leaves the window open for
            | bad actors to exploit such simplicity and introduce backdoors
            | when no one is looking. Vigilance in managing keys (and
            | integrity of the cryptosystem) is still required.
        
             | shadowgovt wrote:
             | You can require something people won't do all you want, and
             | it doesn't get done.
             | 
             | Cryptographic theory breaks down at the UX layer. If you
             | try to improve security by cycling passwords too
             | frequently, users start writing them on sticky notes tacked
             | to their monitors. If you require users to keep their
             | private keys in their own possession, users lose their
             | private keys, get angry when you can't reconstitute those
             | keys for them, and go use a service that can offer that
             | feature.
             | 
             | I think there's a reason the entire ecosystem has moved in
             | the direction of trusted actors who could, in theory,
             | exploit that trust to know everything they can about the
             | signal passing through their trusted zone (with those who
             | trust nobody still operating independently as a fringe
             | minority).
        
               | snarf21 wrote:
                | Exactly. The challenge is the balance between security
                | and convenience. Without fully managing your own keys
                | and building from source that has been signed and
                | audited, you have to trust someone. If you decide to
                | trust, then you trust. The harder question is _who_ you
                | can actually trust.
        
         | Arathorn wrote:
         | If they hook into Matrix's E2EE key management stuff, then
          | they'll benefit from the _huge_ amount of work we've put into
         | letting individual users transparently self-manage their keys -
         | c.f. https://youtu.be/APVp-20ATLk?t=6355 for a demo from last
         | Wednesday. This stuff is due to ship in Matrix/Riot in the
         | coming week.
         | 
          | That said, agreed that it's a massive and (up to now?) unsolved
          | problem: how to get mainstream users to manage their keys
         | sensibly. Keybase could have gone there, but ended up being
         | poweruser-only. It'll be interesting to see if we've solved it
         | in Riot.
        
           | jadbox wrote:
            | Can you explain a bit about "Matrix's E2EE key management
            | stuff" and how it lets "individual users transparently
            | self-manage their keys"? Are there docs on it?
        
             | Arathorn wrote:
             | So E2EE key management has been a (very) long time coming
             | in Matrix. Firstly: E2EE is pretty useless if you don't
             | verify the identity of keys, as you could be just talking
             | to a MITM (e.g. a malicious server admin could add a
             | 'ghost' device onto your account in order to sniff the
             | messages people are sending you).
             | 
             | Originally (in 2016) we let users verify the devices
             | they're talking to by checking their Curve25519 public keys
             | out of band - e.g. "s5jZ K5a/ 4iAN If7K L0PL XNNG h/4G 901H
             | +dB6 YMB9 1H4". This is obviously completely unusable, and
             | precisely the sort of terrible UX which made the great-
             | grand-parent say "individual users don't want to be arsed
             | to self-manage their encryption keys; You can't solve the
             | UX on that".
             | 
             | Then, we improved things a bit (in Feb 2019) by adding the
             | ability to verify devices by comparing a list of 7 emojis
             | out of band - you calculate a shared secret via ECDH
             | between the devices. This is specced in
             | https://github.com/matrix-org/matrix-doc/issues/1267 and
             | analysed in https://www.uhoreg.ca/blog/20190514-1146. This
             | solved the problem of comparing ugly public keys and made
             | verification actually fun (imagine people yelling 7 emoji
             | at each other across a room, or over VoIP etc, to verify
             | identity), but meant you still had to verify each new
             | device manually, which gets very tedious very quickly.
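The emoji-comparison idea can be sketched roughly as follows. This is a toy derivation, not the actual Matrix SAS algorithm (the real one uses HKDF-SHA256 with a spec-defined info string and a curated table of 64 emoji; the table and info string below are placeholders):

```python
import hashlib
import hmac

# Toy stand-in table: Matrix's real SAS list has 64 distinct curated emoji.
EMOJI = ["dog", "cat", "lion", "horse", "unicorn", "pig", "elephant", "rabbit"] * 8

def emoji_sas(shared_secret: bytes, info: bytes = b"SAS|demo") -> list:
    """Derive 7 symbols (6 bits each, 42 bits total) from an ECDH shared secret."""
    # Minimal HKDF extract+expand built from stdlib HMAC-SHA256.
    prk = hmac.new(b"\x00" * 32, shared_secret, hashlib.sha256).digest()
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
    # Use the top 42 bits of the first 48 bits of output.
    bits = int.from_bytes(okm[:6], "big")
    return [EMOJI[(bits >> (42 - 6 * i)) & 0x3F] for i in range(7)]
```

Both devices run the same derivation on the same shared secret, so they display the same 7 symbols; the users just compare them out of band.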
             | 
              | We have finally fixed this over the last N months, which is
              | what I was referring to in the previous post.
             | 
             | Firstly, when you sign into a new device, as part of login
             | you have to verify that device's identity with an existing
             | one (or enter a recovery code/passphrase) - a bit like 2FA.
             | Then, _every user who has verified you in the past will
             | automatically trust this new device_ - you have effectively
              | vouched for its veracity yourself. We call this cross-
              | signing, and it's specced at https://github.com/matrix-
             | org/matrix-doc/pull/1756.
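The trust chain behind cross-signing can be illustrated with a toy model. Here HMAC stands in for the real Ed25519 signatures, and all names and structures are made up for illustration:

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # Toy "signature": HMAC stands in for a real Ed25519 signature.
    return hmac.new(key, payload, hashlib.sha256).digest()

class User:
    def __init__(self, name: str):
        self.master_key = hashlib.sha256(name.encode()).digest()  # toy secret
        self.devices = {}  # device_id -> signature by the master key

    def add_device(self, device_id: str):
        # A new login is vouched for ("cross-signed") by the user's master key.
        self.devices[device_id] = sign(self.master_key, device_id.encode())

def trusts_device(verified_master_key: bytes, user: "User", device_id: str) -> bool:
    """A contact who once verified `verified_master_key` out of band
    automatically trusts any device that key has signed."""
    sig = user.devices.get(device_id)
    return sig is not None and hmac.compare_digest(
        sig, sign(verified_master_key, device_id.encode()))
```

The point of the design is that contacts verify one long-lived master key per user, and every new device inherits that trust by carrying the master key's signature.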
             | 
             | Secondarily, we've added QR-code scan based verification
             | (https://github.com/matrix-org/matrix-doc/pull/1544) - so
             | the actual login process here ends up feeling similar to
             | WhatsApp Web: the user just scans a QR code on their new
             | device, and hey presto: all other users who have ever
             | verified your identity in the past will magically trust
             | your new device.
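A QR verification exchange boils down to showing identity material on one screen and comparing it on another. A toy sketch (the field layout here is invented; the real Matrix payload format is defined in MSC1544):

```python
import base64
import hashlib

def qr_payload(user_id: str, master_key: bytes) -> str:
    """Toy QR payload: the device displays its user ID and master public key.
    '|' is used as a separator since it cannot appear in base64 output."""
    return "VERIFY|" + user_id + "|" + base64.b64encode(master_key).decode()

def scan_and_check(payload: str, known_keys: dict) -> bool:
    # The scanning device compares the advertised key against what it
    # already trusts for that user.
    _, user_id, b64key = payload.split("|", 2)
    return known_keys.get(user_id) == base64.b64decode(b64key)
```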
             | 
             | We're hoping that between QR/emoji-based verification and
             | cross-signing we've ended up with a UX which will let non-
             | technical users transparently manage their keys without
             | really realising it (as it will boil down to "scan this
             | code to log in" and "scan this code to check you're not
             | being intercepted").
             | 
             | The expectation is to turn this on by default in Riot and
             | launch it this Thursday (fingers crossed). And in future,
             | Jitsi could use the same identity/key-management model to
             | ensure that you're actually talking to the people you think
             | you're talking to in their shiny new E2EE conferences.
        
               | josh2600 wrote:
               | How do you do key rotation?
        
               | Arathorn wrote:
               | From a user perspective, you'd either log out of a device
               | and log back in (thus getting a new device key), or you
               | can hit a big reset button on your cross-signing state if
               | you want to blow away your master key and start over
               | (which isn't so much rotation as revocation, but should
               | be adequate).
               | 
               | https://github.com/uhoreg/matrix-doc/blob/cross-
               | signing2/pro... has the details from the implementor's
               | perspective.
               | 
               | See also https://github.com/uhoreg/matrix-doc/blob/cross-
               | signing2/pro...
               | 
               | EDIT: in theory you could also rotate all keys from a
               | client by creating a new master signing key and then re-
               | publishing all your existing cross-signing signatures
               | with the new keys. This sounds like quite a good way to
               | grandfather in untrustworthy attestations though; it
               | might be safer to start over. The current implementation
               | doesn't support this.
        
               | xiphias2 wrote:
                | "E2EE is pretty useless if you don't verify the
                | identity of keys, as you could be just talking to a MITM
                | (e.g. a malicious server admin could add a 'ghost' device
                | onto your account in order to sniff the messages people
                | are sending you)."
               | 
               | This is just not true. The amount of passive listening is
               | so much more than the amount of MITM, as most middle men
               | don't want people to know that they are listening. It's
               | just too easy to catch them if they do it on a massive
               | scale, as long as just 0.1% of users verify the E2E keys.
               | This way the remaining 99.9% gets a part of the security
               | benefit as well.
        
               | upofadown wrote:
                | Perhaps it would be more illuminating to say that E2EE is
                | mostly _pointless_ if you don't verify keys. For a lot of
                | these services that claim E2EE but retain the power to do
                | trivial MITM, you could just encrypt the network links
                | and get the same level of security.
        
               | xiphias2 wrote:
               | The point is that instead of requiring end users to
               | verify public keys, it's better UX to give them the
               | ability if they want, but not require. I remember an
               | email standard that sent public keys inside the emails,
               | and the replies are encrypted with that public key.
               | 
               | Sure, MITM is possible, but it's easy to detect, at the
               | same time the UX is easy to scale to billions of people.
        
               | Arathorn wrote:
                | I think there's a misunderstanding here - I was sloppily
                | conflating MITM (a malicious server admin who has created
                | a false device eclipsing a real one, and who forwards
                | traffic on to the real one after re-encrypting it) with
                | a "ghost device" (a malicious party who has added a new
                | device to your account which is sniffing your messages).
               | 
                | My point was that to mitigate _both_ attacks, it's vital
               | to verify key identity out of band. I agree MITM is much
               | less likely than passive listening via a ghost device: we
               | haven't seen MITM in the field, but we have seen
               | attackers try to add ghost devices to spy on accounts (by
               | acquiring a login password, adding a new device, and
               | hoping the victim doesn't notice they've sprouted a new
               | E2E device and that nobody verifies devices).
        
               | fossuser wrote:
               | Thanks for writing this up - it's really interesting
               | reading and following the Matrix project and this comment
               | was really easy to understand/gives a lot of context.
        
             | est31 wrote:
              | Each end device you use to connect to your Matrix account
              | creates its own keys by default, but you can also
             | import/export keys if you want to. Then other parties can
             | enable or disable trust for each key. For browser based
             | clients like riot, I think each session creates another
             | key.
             | 
             | Docs (the first link is most relevant to your question):
             | 
             | https://github.com/matrix-org/matrix-
             | doc/blob/master/specifi...
             | 
             | https://matrix.org/docs/guides/end-to-end-encryption-
             | impleme...
             | 
             | https://gitlab.matrix.org/matrix-
             | org/olm/blob/master/docs/ol...
             | 
             | https://gitlab.matrix.org/matrix-
             | org/olm/blob/master/docs/me...
        
               | rjzzleep wrote:
                | I really like the work Matrix did. But the UI for having
                | an E2E-encrypted chat with someone whose keys you can't
                | physically authenticate - even for a simple, not-so-
                | important conversation - is annoying and confusing. As I
                | understand it they're redesigning it, but as it stands
                | the UI is broken for a normal end user, if you ask me.
               | 
                | Personally, I think the flow of knowing a shared secret
                | for a private channel, back when people were doing
                | Blowfish in IRC, was way better than this exponential
                | key exchange thing.
               | 
               | Sure it's not as secure, but at least it's somehow
               | humanly feasible.
        
               | Arathorn wrote:
               | Yeah, I wasn't talking about the crappy old UX that you &
               | the grand-parent are referring to, but the redesign which
               | is on the horizon in the coming days
               | (https://news.ycombinator.com/item?id=22856867) :)
        
           | fladd wrote:
            | Shipping next week in Riot "stable" already? Does that mean
            | device verification will also be functional by then?
           | That would be fantastic (and also pretty impressive, given
           | that it is unfortunately still entirely non-functional in the
           | develop branch right now)!
        
             | Arathorn wrote:
              | Yeah, the hope is to ship in stable later this week.
              | Verification should be working in develop right now if
              | cross-signing is enabled (although it had a few regressions
              | last week) - if not, please file bugs...
        
           | slacka wrote:
           | Off-the-Record (OTR) Messaging for Pidgin[1] provides this
           | and hides the complexity from the user. Sadly, all of my
           | Google Talk, AIM, QQ, and MSN contacts have moved to
           | proprietary platforms like whatsapp, skype, and facebook.
           | 
           | I miss the days of 1 messaging platform for all my work and
           | personal chatting. For a solid decade Gaim/Pidgin handled all
           | of this for me.
           | 
           | [1] https://otr.cypherpunks.ca/
        
             | Arathorn wrote:
              | hm, OTRv3 has some pretty major shortcomings - off the top
              | of my head: it's only for 1:1 chat; both users need to be
             | online at the same time to initiate a session; limited to
             | non-elliptic-curve DH; socialist millionaire's protocol for
             | in-band key exchange (but relies on a secret being
             | preshared out-of-band); etc. It was great back in the day,
             | but Double Ratchet (a la Signal & Olm) has replaced it -
             | just as MLS may replace the Double Ratchet in time.
        
         | Vinnl wrote:
         | I'm not sure if that's such an unsolvable problem. For example,
         | Firefox Send [1] also provides E2E encryption, but that's
          | practically transparent to the user. The key is added as the
          | URL fragment (after the hash), which the browser never sends
          | to the server. The user just has to copy the sharing URL
          | (which they do anyway) to obtain and share the key.
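The fragment trick is easy to sketch. This is illustrative code, not Firefox Send's actual implementation (the host name and path are made up, and the encryption step itself is omitted):

```python
import base64
import secrets
from urllib.parse import urlsplit

def make_share_url(base: str):
    """Generate a random key and append it as a URL fragment.
    Browsers never transmit the fragment, so the server that hosts the
    (client-side encrypted) file never learns the key."""
    key = secrets.token_bytes(16)
    frag = base64.urlsafe_b64encode(key).rstrip(b"=").decode()
    return base + "#" + frag, key

def server_sees(url: str) -> str:
    # What actually goes over the wire: everything except the fragment.
    parts = urlsplit(url)
    return parts.scheme + "://" + parts.netloc + parts.path
```

Only someone holding the full link can recover the key and decrypt; the server sees just the fragment-less URL.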
         | 
         | Jitsi might have an additional challenge in that their URLs are
         | often human-readable and user-picked, so not everybody might be
         | used to copy-pasting the links, but then they have the
         | advantage of encryption probably being optional. Or they might
         | think of a whole other solution that provides good UX that
         | doesn't require users to manage keys. (Which, again, might be
         | optional anyway.)
         | 
         | [1] https://send.firefox.com/
        
           | shadowgovt wrote:
           | Yeah, it shouldn't be unsolvable; my key thought is that it's
           | actually the hard part of the story now (encryption client-
           | side is pretty well-understood) and is under-solved. Even
           | still, having more options in the world is better than having
           | fewer, so I'm excited about this demo.
        
             | Vinnl wrote:
             | I disagreed with "you can't solve the UX", but I definitely
             | agree that it's a hard problem. Looking forward to seeing
             | what they come up with.
        
               | shadowgovt wrote:
               | I think my choice of words left me open to
               | misinterpretation: the "you... and...." phrasing was
               | meant as synonymous with "If you... then...", not as a
               | categorical claim that I believe the UX is unsolvable.
        
               | Vinnl wrote:
               | Ah, I see now! Definitely agree; if it hampers UX, users
               | will switch.
        
           | hiq wrote:
           | In this case you still trust Mozilla and the JS code they
           | serve to your browser _every time_ you visit this URL (in
            | contrast to e.g. mobile apps), so I don't think it solves
           | much: you still trust a third-party.
           | 
            | Of course it's still better than the default, since your data
            | sent at time t_A won't be compromised by an attacker
            | compromising Mozilla's servers at time t_B with t_A < t_B
            | (well, unless you try to retrieve your data and they steal
            | your passphrase by serving some new JS code).
        
             | Vinnl wrote:
             | Sure, but that's a different problem than the one about UX
             | for E2E encryption, no?
        
               | hiq wrote:
                | The post you replied to said:
               | 
               | > As service provider, you could keep their keys, but if
               | they trust you with their keys, why aren't they trusting
               | you with being MITM on encryption? Especially since if
               | you have their keys you already could.
               | 
               | I understood that you meant that Firefox Send solves this
               | problem and does not handle users' keys. My point was
               | that the trust model is still the same, so you might as
               | well just stick to the current model where you already
               | trust Jitsi. Firefox Send solves the UX problem because
               | it doesn't completely address the encryption key handling
               | problem.
               | 
               | To be fair, I still think Firefox Send is slightly better
               | than traditional file hosting, just not significantly.
        
               | Vinnl wrote:
               | Yeah I guess you're right, in the sense that it's still a
               | web application. Still, I don't think the general
               | approach is limited to that. For example, Jitsi also has
               | an Electron app. I haven't tried that, but I presume that
               | would work in a similar way to e.g. Zoom, i.e. you paste
               | an invitation link in there. That could just include the
               | key, without it being sent to the server, and without it
               | being a significant extra hurdle to the user.
               | 
               | Note that I'm not saying that that's necessarily the best
               | solution; just that I don't believe that
               | 
               | > You can't solve the UX on that
               | 
               | is true, i.e. that there's nothing particularly inherent
               | to the problem that results in there being exactly 0 good
               | solutions to it.
        
             | danShumway wrote:
             | Isn't that an orthogonal concern?
             | 
             | There's nothing technical to prevent Firefox Send from
             | using a native client with the same trust model as
             | everything else on your system, while still embedding the
             | key into the final URL or link you share online.
             | 
             | It wouldn't even need to be complicated -- a wrapper around
             | libsodium that pushed encrypted data to a couple of REST
             | endpoints would do the job. The only difference is that
             | your recipient would need to install an app instead of
             | download a webpage.
        
         | kodablah wrote:
          | You can use KDFs based on passwords, shared phrases, etc.,
          | even if they are just a shared secret in a DH exchange. I made
          | a thing a while back (https://myscreen.live/) that put a
          | common phrase in a URL fragment, which gives pretty good
          | entropy (granted, it's only for encrypting signalling).
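A passphrase-to-key derivation like that can be sketched with stdlib PBKDF2. The salt and iteration count here are illustrative, not what myscreen.live actually uses:

```python
import hashlib

def key_from_phrase(phrase: str, salt: bytes = b"room-id", iters: int = 200_000) -> bytes:
    """Derive a 256-bit symmetric key from a human-shareable phrase.
    Both parties run the same derivation and get the same key, so only
    the phrase needs to travel (e.g. in a URL fragment)."""
    return hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, iters, dklen=32)
```

Binding the salt to something session-specific (like the room ID) keeps the same phrase from yielding the same key across different sessions.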
        
         | lucb1e wrote:
          | So what you're saying is that _all_ end-to-end encryption is
          | futile and we might as well get rid of it? Let's not build
          | this in, even opportunistically, for anybody, because very few
          | people use it?
         | 
         | Because saying "individual users don't want to be arsed to
         | self-manage their encryption keys" + that service providers
         | could keep your keys but then you might as well not have e2ee
         | in the first place + that the UX is "impossible" to solve, is
         | exactly that.
        
           | shadowgovt wrote:
           | I'm not saying precisely that (I don't think the UX is
           | impossible to solve; merely hard), but you've just given the
           | kind of argument a product manager would give on prioritizing
           | feature sets for a commercial application: by the logic
           | you've laid out, centralized key management provided by the
           | service provider likely attracts more users and is the
           | feature "we" should implement first.
           | 
           | ... which is why we see a lot more people using Zoom than
           | using something else, and we see few solutions available that
           | offer any client-side e2e encryption support at all (and if
           | they offer it, it's almost always in addition to their
           | server-managed key options).
        
         | sergiotapia wrote:
         | You're downvoted but I agree. I'm a software dev and can't be
         | assed about this. Even 2FA is a pain because I switch devices
         | pretty regularly throughout the day. Managing my keys, no I
         | have better things to do. Imagine a normie doing this? No
         | chance.
        
           | mvanbaak wrote:
           | There's usb and nfc solutions for this ;P
        
           | lucb1e wrote:
           | Even so, the person you're replying to is calling for
            | removing end to end encryption. I'm fine if _you_ don't want
           | to use 2FA or end to end encryption, but can't we let people
           | like myself have it?
        
             | shadowgovt wrote:
             | Removing? Hardly. I'm just saying that as a feature, it's
             | not on the critical path to mass-adoption without solving
             | the key-tracking UX.
        
         | chvid wrote:
          | This hits the nail on the head.
         | 
         | In the end it is all about making something that people will
         | use because it provides what they need in a broad sense
         | covering security, usability, video quality and a lot of other
         | things.
        
         | est31 wrote:
          | Things like Matrix use Jitsi for video chatting. Matrix
          | already has end-to-end encryption for messages, but the Jitsi
          | component wasn't end-to-end. Now Matrix can use its existing
         | E2E messaging channels to distribute keys for an E2E jitsi
         | session. In fact, group chat keys are distributed via two party
         | E2E chats handled transparently by the clients, so it wouldn't
         | even be a novel concept.
        
         | sigwinch28 wrote:
         | I don't understand where this problem of distributing
         | encryption keys comes from in the context of this tech demo.
         | Zoom uses "passwords" and people are more than happy to
         | distribute them by email / WhatsApp / shouting from the
         | rooftops. The important aspect is that the _video conferencing
         | provider and server_ do not know the key.
         | 
         | I also disagree with your opinion that "you can't solve the UX
         | on that". The tech demo here essentially gives the same UX as
         | Zoom call + passwords, but with actual end-to-end encryption.
         | 
         | There is the notable caveat in this tech demo that client-side
         | scripts can still read the key (we have to assume the
         | javascript that can read the URL hash is friendly).
         | 
         | Really, though, this can be solved: provide a native(ish) app
         | even if it's just an Electron wrapper of the existing static
         | assets and javascript code, register a handler with the OS for
         | `jitsi://`, and use links like
         | `jitsi://server.name/room#e2ekey=foo`, so we only have to trust
         | the code on our machine, not the server.
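A quick sketch of why the fragment keeps the key client-side, using the hypothetical `jitsi://` link from the comment above (`URL` here is Node's WHATWG implementation, which parses the same way a browser would):

```javascript
// Everything after "#" is never sent in an HTTP request, so a key
// placed in the fragment (or in a jitsi:// link handled by a local
// app) is only visible to code running on the client.
const link = new URL("jitsi://server.name/room#e2ekey=foo");
const params = new URLSearchParams(link.hash.slice(1));

console.log(link.host);            // "server.name"
console.log(link.pathname);        // "/room" -- all the server would see
console.log(params.get("e2ekey")); // "foo"   -- stays on the client
```

Of course, as noted above, this only helps if the code reading the fragment is trusted -- a malicious page script can read `location.hash` just fine.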
        
           | Arathorn wrote:
           | I think passing in a static key (password) via a url fragment
           | is what their demo is doing already. However, I get the
           | impression the intention is to verify the keys of who's
           | actually in the room, rather than just trusting a static key
           | which will inevitably leak. They're presumably also planning
           | to re-key via a ratchet to avoid a single leaked key
           | compromising the whole conference recording.
        
             | sigwinch28 wrote:
             | Yes, my comment w.r.t. the hash is that it's inherently a
              | bad _final_ implementation because it's visible to the
             | server via malicious Javascript.
             | 
             | I really hope that it can be as simple as "share a password
              | with someone"/"send an invite link to someone" because
              | otherwise, as the parent effectively said, it would be a
              | complete UX nightmare.
        
       | CiPHPerCoder wrote:
        | Ah, I see they're using libolm, as does the Matrix project!
       | 
       | I have a number of critiques of libolm that I haven't developed
        | into a practical attack but that are simple enough to fix (if
       | ignore the massive legacy support and backwards compatibility
       | trap they've set for themselves EDIT: see Arathorn's comment
       | below).
       | 
       | Libolm is encrypting with AES-CBC [1]. In addition to side-
       | stepping entire classes of attack (i.e. padding oracles), CTR
       | would allow better performance: You can parallelize both
       | encryption and decryption with CTR mode. With CBC mode, you can
       | only parallelize decryption (but not encryption) since the IV for
       | all but the first block is the previous block of ciphertext,
       | which means you'll know the correct IV when decrypting but not
       | when encrypting (since you have to calculate it sequentially).
       | 
       | Yes, they HMAC the ciphertext [2]. However, their variable name
       | choice doesn't inspire confidence in its correctness.
       | 
        | Furthermore, they truncate the HMAC to 8 bytes and attempt to
       | justify the truncation by appending an Ed25519 signature, but
       | that sort of configuration is just begging for a confused deputy
        | scenario, like an old iMessage vulnerability [3]. It's nowhere
       | near as bad (iMessage eschewed MACs entirely, this still uses a
       | MAC, so it's not exploitable), but it's something that probably
       | would make anyone working in cryptography (and any adjacent
       | fields) give a confused puppy head tilt when they read it.
       | 
       | Regarding their ratcheting protocol [4]: Instead of feeding HMAC-
       | SHA256 back into itself at each ratchet step, I'd feel way more
       | comfortable if the protocol did HMAC-SHA512 and used one half of
       | the output to derive encryption/authentication keys and the other
       | half the ratcheting-forward key (instead of one HMAC-SHA256 for
       | both purposes).
       | 
       | Using two distinct 256-bit secrets (even if they're generated
       | from the same input at i=0) instead of reusing a secret
       | strengthens the forward secrecy of the entire protocol.
       | 
       | HMAC-SHA256: One ring to rule them all (at any given ratchet
       | step).
       | 
       | HMAC-SHA512-split: If you (against all odds) guess one of the
       | keys, that doesn't give you the ratchet-forward key too, since
       | they're two distinct keys (albeit generated deterministically
       | from the same input).
       | 
       | Nothing I said above is exploitable, otherwise I'd be emailing
       | their security team instead of posting on HN. :)
       | 
       | That being said, if the Libolm devs want to shore up the security
       | of their protocol in a future revision, the following changes
       | would go a long way:
       | 
        | 1. Use HMAC-SHA512 and split it in half for the ratcheting step
       | of Olm/Megolm
       | 
       | 2. Use AES-CTR instead of AES-CBC
       | 
       | 3. Stop truncating MACs
       | 
       | [1]: https://gitlab.matrix.org/matrix-
       | org/olm/blob/master/docs/me...
       | 
       | [2]: https://gitlab.matrix.org/matrix-
       | org/olm/-/blob/930c4677547e...
       | 
       | [3]:
       | https://blog.cryptographyengineering.com/2016/03/21/attack-o...
       | 
       | [4]: https://gitlab.matrix.org/matrix-
       | org/olm/blob/master/docs/me...
        
         | tptacek wrote:
         | If it's HMAC'd, there's no sidestepping of padding oracles
         | needed. Error side channels are a consequence of chosen
         | ciphertext attacks, not of padding. Switching to CTR would not
         | "go a long way" towards shoring up the security of their
         | protocol.
         | 
         | I do not understand your "confused deputy" attack. Can you
         | outline it in more detail?
        
           | CiPHPerCoder wrote:
           | > I do not understand your "confused deputy" attack. Can you
           | outline it in more detail?
           | 
           | I'm literally referring to "the iMessage attack".
           | 
           | If you recall, iMessage did ECDSA(AES(m, ek), sk) without an
           | intermediary HMAC. Libolm does have an (albeit truncated)
           | HMAC, so the attack doesn't apply at all here. But it's still
           | a design smell.
           | 
           | If I could extend their construction (which looks like the
           | setup to the iMessage attack, without the punchline) into a
           | real attack, I would have just disclosed it to them and not
           | commented publicly.
           | 
           | How the attack I envision would work if their HMAC suddenly
           | got erased from the protocol: Establish a multi-device setup,
           | flip bits in ciphertext you want to decrypt, sign with
           | Ed25519, observe padding oracle.
           | 
           | (A truncated HMAC does prevent that in the real world, but at
           | a 32-bit security level rather than a 128-bit security
           | level.)
           | 
           | These are (somewhat nitpicky) design critiques, not security
           | vulnerabilities. :)
           | 
           | Edit to add: Also, the iMessage attack had some other
           | weirdness that isn't relevant for what I'm describing, but
           | was very relevant for iMessage being broken.
        
             | tptacek wrote:
             | Isn't it a little weird to tell people to do major surgery
             | on a crypto design, especially in ways that depart from the
             | well-regarded design they derived it from, on account of
             | attacks that don't work against that design?
        
               | CiPHPerCoder wrote:
               | It's only weird if you put Signal's specific design
               | decisions from 2013 on a pedestal and declare it perfect
               | and incapable of being improved.
               | 
               | (N.b. I don't expect my specific suggestions to be
               | adopted. However, I view complaining without offering
               | solutions to be poor form, so I offered some alongside my
               | complaints.)
               | 
               | Of course, any deviation from Signal's design can and
                | _should_ be vetted by the same experts that vetted
                | Signal's. And if they're unavailable to vet the derivations,
               | the conservative thing to do is grit your teeth and bear
               | Signal's legacy until they become available.
               | 
                | However, if Signal still targets Android 4.4 phones in
                | 2020, there are a lot of devices without AES-NI, and thus
                | improving upon their AES-CBC design decision is at least
                | worth probing.
        
               | tptacek wrote:
               | In addition to my previous comments, I also have a thing
               | about reflexively suggesting that people eliminate CBC
               | from their designs. Depending on circumstances, CBC can
               | be a safer choice than CTR (and CTR-derived modes like
               | the AEAD stream ciphers); different failure modes.
        
               | CiPHPerCoder wrote:
               | You're entitled to your thing. Granted, CBC mode has a
               | better misuse story than CTR, but an extended-nonce
               | ChaPoly AEAD is likely to be safer in most of Signal's
               | installed userbase (n.b. the same userbase Matrix and
               | Jitsi would be targeting in a lot of cases), given the
               | ARM SIMD (AES-NI equivalent) situation.
        
               | tptacek wrote:
               | Chapoly depends on a reliable per-message CSPRNG. CBC
               | wants randomness too, of course, but the failure mode
               | under randomness hiccups isn't "coughs up keys". If you
               | have a system working with CBC+HMAC, what's the advantage
               | to shouldering that additional risk?
               | 
               | In a new design, I'd recommend Chapoly too. But this
               | isn't a new design. Changing things has cost.
        
               | CiPHPerCoder wrote:
               | > If you have a system working with CBC+HMAC, what's the
               | advantage to shouldering that additional risk?
               | 
               | The details you probably want me to put here are a bit
                | fuzzy still, and I've solicited others to provide
               | clarity and insight into the specifics, so I apologize if
               | this is hand-wavy, but your question deserves an answer.
               | 
               | Given:
               | 
               | Most smartphones are built on ARM architecture. At the
               | very least, I'm confident about Android being ARM. I've
               | never purchased an Apple product in my life, and can't
               | rightly say much about their internals.
               | 
               | ARM before ARMv8-A did not provide hardware AES.
               | https://en.wikipedia.org/wiki/ARM_architecture#ARMv8-A
               | 
               | Adiantum cites the Cortex-A7 as one example processor
               | that does not provide hardware-accelerated AES:
               | https://security.googleblog.com/2019/02/introducing-
               | adiantum...
               | 
               | "In order to offer low cost options, device manufacturers
               | sometimes use low-end processors such as the ARM
               | Cortex-A7, which does not have hardware support for AES.
               | On these devices, AES is so slow that it would result in
               | a poor user experience; apps would take much longer to
               | launch, and the device would generally feel much slower."
               | 
               | Even for smartphones that use ARMv8-A and newer, OEM
               | weirdness can get in the way of that. Without tearing a
               | specific model of a specific phone apart, I can't really
               | give you much more information than that.
               | 
               | The advantage to the additional risk of ChaPoly is to not
               | cough up keys to JavaScript running in a web browser
               | capable of leveraging a cache-timing attack against
               | software AES, in the smartphones that most people can
               | afford.
               | 
               | That is to say, while it's true that the CSPRNG failure
               | mode of ChaPoly is bad (but relies on conditions the
               | attacker probably can't control), the failure mode of
               | software AES is equally bad and can be influenced by an
               | attacker.
               | 
               | > In a new design, I'd recommend Chapoly too. But this
               | isn't a new design. Changing things has cost.
               | 
               | If there is significant market share where the device has
               | a reliable CSPRNG but not hardware AES (post-OEM
               | tampering), I'd argue that the security gain of a ChaPoly
               | migration is worth the cost of changing, in particular.
               | 
               | The ratcheting changes are mostly a hygiene issue and
               | probably won't be meaningfully important. I was just
               | being nitpicky.
        
               | tptacek wrote:
               | If it's running in a web browser, side-channel attacks
               | are _way_ down the list of problems you need to account
               | for.
        
               | CiPHPerCoder wrote:
                | I think you misunderstood what I'm talking about here.
                | 
                |     [ App (Java -> Dalvik) ]--.
                |                                >--- Same CPU
                |     [ Web Browser with JS  ]--`
               | 
               | My argument wasn't about "it" running in a web browser. I
               | was arguing that side-channel attacks that can be
               | exploited from a browser on the same CPU (as per djb's
               | AES cache attack paper) are pretty bad, considering
               | "trick user into opening a webpage" is a pretty low-
               | hanging fruit attack vector.
        
           | garmaine wrote:
           | Is it really HMAC'd? 8 bytes isn't cryptographic protection.
           | It may be secure, but the standard arguments don't apply.
        
             | tptacek wrote:
             | Write a sketch of how a padding oracle attack would work
             | against a CBC-encrypted message authenticated with
             | truncated HMAC.
        
               | CiPHPerCoder wrote:
               | It's like a standard padding oracle attack, but 4 billion
               | times slower (on average) and requires 4 billion times
               | more CPU work and bandwidth, and you have to be able to
               | distinguish between a HMAC failure and a padding error.
               | 
               | (a.k.a. isn't happening)
        
         | Arathorn wrote:
         | Thanks for the feedback. So the reason for these choices of
         | primitives when we wrote libolm was to keep close to
         | libsignalprotocol (or libaxolotl as it was then), to try to
         | keep the door open to interop with Signal at some level.
         | 
         | The primitives can be changed though once there's enough
         | evidence to do so, and Matrix supports pluggable E2EE
         | algorithms as per
         | https://matrix.org/docs/spec/client_server/r0.6.0#messaging-...
         | - so I'm not convinced this is a "massive legacy support and
         | backwards compatibility trap" that we've set for ourselves.
         | 
         | What don't you like about the variable names at
         | https://gitlab.matrix.org/matrix-org/olm/-/blob/930c4677547e...
         | ?
        
           | walterbell wrote:
           | Are you planning to support IETF MLS,
           | https://datatracker.ietf.org/wg/mls/about/?
        
             | Arathorn wrote:
             | potentially; we're experimenting with a decentralised MLS
             | impl currently.
        
               | CiPHPerCoder wrote:
               | That's very cool to hear!
        
           | CiPHPerCoder wrote:
           | I appreciate the context. It's probably wise to abandon
           | Signal interop.
           | 
           | My reasoning here is: Moxie isn't ever going to acquiesce on
           | the points he's stubborn about, and Olm/Megolm could
           | otherwise be a great cryptographic design with or without his
           | approval.
           | 
           | > What don't you like about the variable names at
           | https://gitlab.matrix.org/matrix-
           | org/olm/-/blob/930c4677547e... ?
           | 
           | Confusion between ciphertext on line 85 and output on line 89
           | made me have to reread the function twice to figure out what
           | was going on.
        
             | hiq wrote:
             | > I appreciate the context. It's probably wise to abandon
             | Signal interop.
             | 
             | > Olm/Megolm could otherwise be a great cryptographic
             | design with or without his approval.
             | 
             | Could you (or Arathorn) expand on why Olm should deviate
             | from the Signal protocol, instead of trying to reproduce it
             | as closely as possible? Which requirements are different?
             | 
             | I understand that the Signal protocol is the state of the
             | art in terms of E2EE for 1:1 conversations (and small
             | groups). I understand how Matrix wants to address big
             | groups and thus need Megolm. Where does this leave Olm?
        
               | CiPHPerCoder wrote:
               | > Could you (or Arathorn) expand on why Olm should
               | deviate from the Signal protocol, instead of trying to
               | reproduce it as closely as possible?
               | 
               | Because the only premise for strictly adhering to the
               | Signal protocol has been invalidated by Moxie's
               | personality.
               | 
               | With a false premise, why maintain a true conclusion?
               | 
               | In my OP comment, I outlined some criticisms of what
               | they're doing, and suggested ways to improve it. Some of
               | these (dropping AES-CBC+HMAC for AES-CTR+HMAC) have
               | meaningful gains but, strictly speaking, are not Signal-
               | compat.
               | 
               | The change to the ratcheting protocol adds a layer of
               | indirection in the forward secrecy, but that also
               | deviates from Signal. (My proposed change would make it
               | closer to what the Noise Protocol Framework does.)
               | 
               | > Which requirements are different?
               | 
               | The technical requirements aren't changed, but you can
               | get better performance AND security on more platforms by
               | using XChaCha20 instead of AES-CBC, so that's a
               | meaningful security gain that Signal cannot boast (i.e.
               | in the context of legacy Android devices).
        
               | hiq wrote:
               | I see, thanks a lot!
        
             | Arathorn wrote:
             | > Moxie isn't ever going to acquiesce on the points he's
             | stubborn about
             | 
             | Yup, indeed. https://signal.org/blog/the-ecosystem-is-
             | moving/ was written after I mailed him to ask if they'd
             | consider interop. (https://matrix.org/blog/2020/01/02/on-
             | privacy-versus-freedom was our overdue response)
        
               | CiPHPerCoder wrote:
               | If you're open to changing the protocol, might I also
               | recommend XChaCha20-Poly1305? :)
               | 
               | https://tools.ietf.org/html/draft-irtf-cfrg-xchacha-03
               | 
               | It's fast and constant-time even on mobile devices (where
               | AES is often variable-time or slow due to a lack of AES-
               | NI).
        
               | lucb1e wrote:
                | His eponymous talk at 36C3 was similarly disappointing,
                | mostly defending stubborn choices. Glad I didn't spend the
                | time watching it in person and instead saw another talk,
                | catching this one on video later.
        
       | tinus_hn wrote:
       | Does this only work in Chrome? Or why are they using the Chrome
       | logo in the graphs?
        
       | Vinnl wrote:
       | Jitsi has been doing great recently, and it's pretty amazing how
       | many of my now-at-home-friends now reach for Jitsi by default (as
       | opposed to Zoom, which used to be the case) after having been
       | introduced to it just recently. I've never managed to "convert"
       | so many people to something so easily :)
       | 
       | However, I'm wondering if anyone on here has become, or works
       | somewhere that has become, a customer of 8x8 [1], the company
       | behind it [2]? They supposedly support Jitsi primarily to 1) get
       | contributions that also make their product better and 2) to
       | advertise their product. I'm somewhat worried about whether 2 is
       | working, and hence whether Jitsi will continue to receive their
       | support.
       | 
       | (I'm not affiliated or anything btw, just a happy Jitsi user.)
       | 
       | [1] https://www.8x8.com/
       | 
       | [2] https://jitsi.org/user-faq/#heading8
        
         | cdolan wrote:
         | I gave my impressions on 8x8 in a different comment below. It
         | got really long and touched upon some things outside the scope
         | of your question so I made it its own top level comment.
        
         | shravj wrote:
          | Although I'm not technically an 8x8 customer, I do use 8x8 Video
         | Meetings which is powered by Jitsi Meet. 8x8 Video Meetings
         | seems to be a completely free and separate product from the
         | rest of 8x8's conferencing solutions and does not require any
         | sort of existing or new 8x8 subscription to use. From what I
         | can tell, it just seems to be a Jitsi Meet instance running on
         | 8x8 infrastructure with 8x8 branding tied in. See here:
         | https://www.8x8.com/products/video-conferencing
        
         | throwaway7281 wrote:
          | Open source is one of the most empowering paradigms. Here is a
         | list of people and organizations running their own servers:
         | 
         | https://github.com/jitsi/jitsi-meet/wiki/Jitsi-Meet-Instance...
        
         | [deleted]
        
       | CrankyBear wrote:
       | Oh please. It's a demo. Moving on.
        
       | shawnz wrote:
       | I see a lot of blocky patterns in the encrypted video streams...
       | isn't that indicative of encryption weaknesses?
        
         | philcrump wrote:
          | I think (I haven't looked at their implementation) they'll be
         | an artifact of piping the encrypted/mis-decrypted stream into a
         | video decoder. A lot of video coding techniques use variable-
         | size blocks to describe changes to areas of the image, so it's
         | reasonable that piping pseudo-random data into the decoder
         | would produce some noticeable block shapes.
        
         | jaywalk wrote:
         | I think it's more a side effect of how video codecs try to make
         | sense of bad data.
        
         | saghul wrote:
         | We've kept the entire packet header unencrypted so the effect
         | is more visible. The final implementation will just leave the
         | minimum required bytes (1 or 2) of the header unencrypted.
        
           | shawnz wrote:
           | Interesting, thanks for the information. And also thank you
           | for working on this important technology.
        
       | kodablah wrote:
       | I have a naive question as I'm toying w/ a similar WebRTC project
       | myself. Since all browsers support h264/opus, can I just
       | reasonably ask each client to send 720p to my WebRTC peer which
       | is really a server before I relay it? Then, can't we just use
        | naive e2e encryption without any extra encoding/decoding? Meaning,
       | in my TransformStream, can I just window.subtle.encrypt on the
       | way out and window.subtle.decrypt on the way in? Is it due to the
       | fact that the insertable streams have to remain in their video
       | format? Pardon my ignorance. This naive approach does assume
       | everyone can at least handle the download speeds of the group's
       | video and the upload speed of their own.
        
       | mbochenek wrote:
       | I love the fact that it's open source, and well documented. I am
       | currently hosting my own instance at https://jitsi.mastodo.ch/
       | and I plan to offer it as a free alternative to zoom, citrix,
       | etc.
        
       | coinward wrote:
       | A bit random, but I'd really like to manage video calls through a
       | browser extension. Am I alone in this? Seems unnecessary to move
        | in between apps, couldn't we be hopping in and out of rooms
       | without leaving the browser?
        
       | christefano wrote:
       | Sad to see this is dependent on an extension to WebRTC that's
       | (currently) Chrome / Chromium-only. It's still a proposed API,
       | and I don't see when this will be supported by any other
       | browsers.
        
         | garmaine wrote:
         | Really, really sad. I'm not going to run Chrome for this one
         | application.
        
           | sigwinch28 wrote:
           | I'm not sure if you are aware of the difference, but you can
           | use Chromium instead of Chrome.
           | 
            | I am mainly a Firefox user on NixOS, but for running
            | Chrom{e,ium} I find that Firejail [0] is a good option. For
            | example, to run chromium in "store" mode and point it to the
            | demo jitsi instance:
            | 
            |     firejail --profile=chromium chromium --app="https://meet.jit.si"
            | 
            | In reality, I combine this with `nix-shell` and set it as a
            | shell alias, since Chromium isn't something I use regularly:
            | 
            |     nix-shell -p chromium --run "firejail --profile=chromium chromium --app=\"https://meet.jit.si\""
           | 
           | The `--app` option removes browser controls, so it _almost_
           | behaves like an Electron application.
           | 
           | [0]: https://github.com/netblue30/firejail
        
             | barbs wrote:
             | Or you can download the Electron app here
             | 
             | https://github.com/jitsi/jitsi-meet-electron
        
               | sigwinch28 wrote:
               | Didn't realise they had one! Might be more convenient
               | than my shell aliases :)
        
             | garmaine wrote:
             | I'm not going to run a different browser for one
             | application.
             | 
             | I don't even install anything derived from Chrome in my
             | computer, and I'm not going to change that policy.
        
               | sigwinch28 wrote:
               | Fair enough.
               | 
               | Please be aware that aside from E2E demoed here, there is
               | allegedly a Firefox-specific bug regarding WebRTC which
               | degrades performance for _all_ participants in a call:
               | https://community.jitsi.org/t/software-unusable-on-
               | firefox-w...
        
       | manishsharan wrote:
       | Love Jitsi but dont try to run this on AWS EC2 -- the network
       | charges quickly add up. I had set Jitsi videobridge for my kid's
       | friends and their friends -- the network charges quickly added up
       | before I shut it down.
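A back-of-the-envelope sketch of why that happens (all numbers below are illustrative assumptions, not measured rates):

```javascript
// An SFU like the Jitsi videobridge forwards each participant's stream
// to every other participant, so server egress grows ~ n * (n - 1).
const participants = 8;
const mbpsPerStream = 1.5;   // assumed 720p-ish bitrate
const hours = 2;             // one call

const egressGB = participants * (participants - 1) * mbpsPerStream
               * hours * 3600 / 8 / 1000;   // Mbit -> GB

const awsEgressPerGB = 0.09; // ballpark AWS egress rate, an assumption
console.log(egressGB.toFixed(1));                    // "75.6" GB
console.log((egressGB * awsEgressPerGB).toFixed(2)); // "6.80" USD per call
```

A few calls a day at that rate dwarfs the cost of the instance itself, which is why flat-rate bandwidth hosts come up in the replies.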
        
         | capableweb wrote:
          | Try a host that doesn't charge for bandwidth. Dedicated servers
          | at Hetzner and/or OVH don't charge this imaginary fee, so you
          | can sleep well and just pay a flat sum each month.
        
           | sigwinch28 wrote:
           | Precisely. Hetzner can provide a dedicated server with an
           | Intel Core i7-2600, 16 GB RAM, and 2x3TB spinning disks for
            | EUR 29.00/month before tax with unlimited traffic via their
           | server auction site for orphaned servers [0].
           | 
           | Other Hetzner and many OVH offerings have traffic caps, but
           | these are in the region of ~10TB/month.
           | 
           | [0]: https://www.hetzner.com/sb/
        
         | miglmj wrote:
        | Networked applications like this really expose the big money
        | drain in AWS and other cloud services: network charges,
        | particularly egress rates.
        
       | jcims wrote:
       | Tech support question haha. I run Jitsi Meet (the service at
       | meet.jit.si) on an iPad that I bought last year.
       | 
       | After about an hour the audio completely fails. No audio in. No
       | audio out. I can't really tell if it's something with the app or
       | if it's my terrible little dsl router/nat box dropping the audio
       | stream. I don't have this problem on the desktop or laptop
       | clients, just the iPad, so I'm guessing it's the app...but it
       | could be that it works differently on the network.
       | 
       | Anyone see similar?
        
       | sigwinch28 wrote:
       | This is huge for decoupling zoom clients from zoom backend
       | servers.
       | 
       | In theory, using a fixed/trusted set of client assets
       | (javascript+html, e.g. an electron wrapper) allows groups of
       | users to choose a Jitsi "backend" provider that they don't even
        | trust if the encryption key is never sent to the server (and
        | _can't_ be, i.e. it is never in the URL hash).
       | 
       | Since the video is e2e-encrypted and in theory the server never
       | has to be given the key, this would allow people to purchase
       | hosting services from untrusted providers or spin up a VM in a
       | public cloud with the confidence that the _content_ of their
       | conversations is not available to the provider.
       | 
       | Yes, we could just argue that this is XMPP servers and Pidgin all
       | over again (in fact, Jitsi uses an XMPP server internally), but
       | the modern UI and the timing w.r.t. COVID-19 lockdowns and Zoom
        | privacy issues are fantastic.
        
       | emilfihlman wrote:
       | >GET parameter
       | 
       | Yes, it's convenient.
       | 
       | But to tout that as what a well designed encryption scheme should
       | look like? Nah.
        
         | bflesch wrote:
         | What is your alternative suggestion? In a browser-based video
         | chat scenario you are ultimately assuming some level of browser
         | security and that the clients are trusted.
         | 
          | What they propose is E2E encryption, i.e. providers like Zoom
          | or MS Teams would be unable to decrypt your conversations.
          | Obviously, if you choose a weak key, they can just store the
          | stream and crack it later.
        
         | giancarlostoro wrote:
          | It was a hash fragment; only the client (specifically the
          | JavaScript) got that value. Note how his page did not reload
          | when he hit enter. Had it been a GET request, it would have
          | refreshed.
        
         | CrazyStat wrote:
         | From the link:
         | 
         | > _In order to enable quick demos of the feature we allowed for
         | e2ee keys to be passed to Jitsi Meet via a URL parameter._
         | 
         | > _IMPORTANT NOTE: This is a demo of an e2e encrypted call and
         | the feature is NOT YET intended for general availability, so
         | play with it all you want, but, for now, in matters where
         | security is paramount we still recommend deploying your own
         | Jitsi Meet instance._
         | 
         | > _As we already pointed out, passing keys as URL parameters is
         | a demo thing only. Aside from being impractical it also carries
         | risks given that URL params are stored in browser history._
         | 
         | > _Our next step is therefore to work out exactly how key
         | management and exchange would work. We expect we will be using
         | The Double Ratchet Algorithm through libolm but the details are
         | still to be ironed out._
         | 
         | They're not touting passing the keys in a URL parameter as a
         | "well designed encryption scheme." They're presenting it as a
         | proof of concept, and planning for the future work that they
         | acknowledge is necessary.
         | 
         | Your comment is ill-informed and misleading.
        
       | Arathorn wrote:
       | This is insanely cool - insertable webrtc streams only just
       | landed in canary
       | (https://www.chromestatus.com/feature/6321945865879552). Also
       | very cool they're looking to use Matrix's libolm Double Ratchet
       | implementation for key exchange, which hopefully will make it
       | easier to integrate Matrix's e2ee with Jitsi's in future :)
        
       | exabrial wrote:
        | As long as I can disable this, good. I need performance over
        | E2EE, much like many others. We face far more risk from poor
        | communication while working at a distance than from a threat
        | where someone gets past our VPN and firewall, breaks TLS
        | encryption, gets into our VPS and private server, and silently
        | participates in our meetings.
        
       | giancarlostoro wrote:
       | Jitsi was the one Jabber client I used a few years back that I
       | liked. I didn't find as many people on Jabber back then as I've
       | seen elsewhere however. I believe you could use OTR alongside it,
       | so you don't even need to trust them, just their implementation /
       | version of OTR (a few more layers to consider I suppose).
        
         | Arathorn wrote:
         | I don't think this is anything to do with the legacy jitsi
         | xmpp/sip client, but the "jitsi meet" video conferencing app.
        
           | giancarlostoro wrote:
           | Oh it's been too long if that one's irrelevant.
        
       | crazygringo wrote:
       | I'm actually really curious as to whether _verifiable_ E2EE is
       | possible in normal business videoconferencing -- if someone here
        | can enlighten me I'd really appreciate it.
       | 
       | It seems clear that there has to be a single key, rather than
       | separate keys for each pair of participants, since in a large
       | meeting we need all video streams running through a server and
       | everyone receiving the same streams, for manageable upload
       | bandwidth. Also, many participants often cannot peer directly due
       | to NAT etc.
       | 
       | But therefore... as long as you're trusting the server with key
       | distribution/management in the first place... don't you
        | necessarily have to simply _trust_ that the server isn't
       | peeking? That by necessity, since anyone else can join the call
       | and get the decryption key from any of the peers... that the
       | server can too, whether by MITM attack, a "fake participant" that
       | joins for a millisecond but doesn't appear in UX, etc.?
       | 
       | That unless you actually have the capability of auditing the
       | server and the code it's running, E2EE doesn't actually give you
       | any concrete guarantees whatsoever? You've just got to trust?
        | Which, at the end of the day, is no different from trusting them
       | not to peek at transport encryption?
       | 
       | Of course you can run your own Jitsi server. But if you've
       | already got control over your servers then you might as well just
       | be using transport encryption anyways, since you trust yourself
       | -- right?
       | 
       | (Obviously if you come up with your own keys and send them to
       | participants via a separate channel of communication then it's
       | fine -- but obviously that's not something regular users are ever
       | going to do.)
       | 
       | Would love to know if I'm misunderstanding something here -- if
       | the newfound significance of E2EE in videoconferencing is just
       | due to Zoom falsely advertising it, or if it's actually a
       | realistic goal.
        
         | tpolzer wrote:
         | Key distribution is a problem, yes (as discussed in other
         | comments here).
         | 
         | Upload bandwidth isn't so much of a problem with the number of
         | participants, as you of course wouldn't encrypt the whole data
         | stream separately for each peer. Instead, you would encrypt it
          | with a rotating symmetric key that you can then safely
         | exchange with all peers (at negligible bandwidth cost). You
         | would still depend on the server to replicate your encrypted
         | stream (and do NAT traversal etc), but it wouldn't be able to
         | peek inside.
         | 
          | A separate problem is that you usually want your server to
          | create reencoded versions of your stream for participants with
          | bad internet. There are potentially ways to do that without
          | server involvement using clever (scalable) video encoding, so
          | that instead of reencoding the server can just throw away some
          | chunks, but afaik there's nothing production-ready for video
          | codecs here.
        
         | PureParadigm wrote:
         | > It seems clear that there has to be a single key, rather than
         | separate keys for each pair of participants, since in a large
         | meeting we need all video streams running through a server and
         | everyone receiving the same streams, for manageable upload
         | bandwidth.
         | 
          | Why is this the case? Streams watched on Twitch or YouTube
          | over HTTPS are each encrypted individually per connection.
         | It's not like TV or radio where you have to broadcast the same
         | thing to everyone.
         | 
         | > But therefore... as long as you're trusting the server with
         | key distribution/management in the first place... don't you
         | necessarily have to simply trust that the server isn't peeking?
         | 
         | I think you're right that you'll need to trust key
         | distribution. Some companies might actually have PKI set up
         | properly and can do this. Other individuals who are
         | particularly privacy conscious may also have this. Just because
         | verifiable E2EE might not be applicable to a mass market
         | doesn't mean it's not incredibly useful for those who do need
         | it.
        
       | pal_9000 wrote:
        | Jitsi is on a roll! By the way, does anyone know the challenging
        | part of e2e in video chats? Intuitively, I'd think keys are
        | exchanged during the handshake and the binary data is decrypted
        | on the clients? I'm just wondering how Zoom could miss it?
        
         | Igelau wrote:
         | The hard part about e2e is that it only gets you anything if
         | none of the parties involved are being surveilled after
         | decryption. This is easily within the capabilities of any
         | adversary worth worrying about.
        
           | CodeMichael wrote:
           | e2e means you don't have to worry about transparent 3rd
           | parties keeping secrets for you. It prevents casual abuses of
           | your privacy and sets the bar for violating it (deserved or
           | not)
        
         | Arathorn wrote:
         | The challenging bits are:
         | 
         | * How do you handle non-encrypted participants (e.g. people
         | dialling in from the PSTN?)
         | 
         | * Up until this week, you haven't been able to intercept WebRTC
         | streams for doing E2EE if running in-browser. (Zoom however
         | doesn't use WebRTC, so they don't have this excuse).
         | 
         | * Do you have a safe place to store the keys, and manage user
         | identity?
        
         | georgyo wrote:
          | I cannot think of a way to do the key exchange securely and
          | automatically if you want to give a link to someone and have
          | it "just work".
         | 
         | If all you have is the URL, then the server sees the encryption
         | key.
         | 
         | Video conferencing also rarely has users register. So there
         | isn't a way to validate users either. And even if they did
         | register and users didn't care about the extra friction,
         | multiple devices means either the server stores your private
          | key, or you have many keys, which are much harder to verify.
         | 
         | E2EE is much easier on phones, which is why Signal is so good.
         | The identity is your phone number, and you can only have one
          | key associated with your number. That key never leaves your
         | device. Conceptually easy.
         | 
         | Video conferencing has none of those advantages, and I don't
         | know how you would make it conceptually easy for users without
         | reducing the security.
        
           | waterhouse wrote:
           | I don't know much about the broader context, but to this
           | part:
           | 
           | > If all you have is the URL, then the server sees the
           | encryption key.
           | 
           | Not necessarily. It's possible to put the key after a "#" in
           | the URL, which allows client-side code to use it without
            | sending it to the server. This technique is used by ZeroBin,
           | among other places. (Edit: This is actually done in the video
           | in the OP as well.)
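To illustrate the point above (the room name and key parameter here are made up): the fragment after "#" is parsed entirely on the client and is never included in the HTTP request, so the server sees only the path and query.

```javascript
// The WHATWG URL parser, same shape as window.location in a browser.
const url = new URL('https://meet.example.org/MyRoom#e2eekey=s3cr3t');

// What the browser sends to the server: path + query only.
console.log(url.pathname + url.search); // "/MyRoom"

// What client-side JavaScript can read and use as key material;
// browsers never transmit the fragment over the network.
console.log(url.hash); // "#e2eekey=s3cr3t"
```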
        
           | [deleted]
        
           | crznp wrote:
           | You could still make the phone a primary device and allow it
           | to perform the key agreement and pass control off with a QR
           | code, but that is complicated and leaves open the question of
           | who is allowed in this conference.
           | 
           | So perhaps you just give up on persistent identity: just have
           | an unencrypted waiting room, the organizer and their
           | delegates can approve people in the waiting room to enter the
           | encrypted conference.
        
             | GordonS wrote:
             | Do you mean kind of like how authentication is sometimes
             | handled on input/UI constrained devices (e.g. TVs), where a
             | message could be played to callers, asking them to enter a
             | one-time code at a particular website?
             | 
             | On the face of it, this could work quite well for _most_
             | people.
        
         | est31 wrote:
          | Actually, WebRTC was designed from the start to be end-to-end
          | encrypted as well as peer-to-peer -- even designed so that you
          | can't turn off encryption if you wanted to. This choice was
          | made during the early design period of WebRTC, which coincided
          | with the Snowden revelations.
         | 
          | However, over time people who built WebRTC systems (like Jitsi
          | or Zoom) realized that end-to-end encryption makes multi-party
          | chats hard. The basic issue is that you don't want to burden
          | endpoints with sending their video streams to all users --
          | think of tens or even thousands of users.
         | 
          | So the way they used WebRTC changed: the mandatory end-to-end
          | encryption was circumvented by connecting everyone to a central
          | server, which could then forward streams to as many people as
          | you want. Also, most of the time people don't need an HD video
          | of someone. A small thumbnail is enough; think of a screen
          | filled with 10 thumbnails of your users. The downsampling of
          | the video stream can happen on that central box. Simulcast mode
          | enabled this downsampling, though it still costs extra
          | bandwidth. Insertable streams will now let WebRTC applications
          | put a second layer of encryption on top of the encryption
          | provided by the user agent. That second layer can reach the
          | actual other end of the communication, since key management is
          | exposed to the application.
         | 
          | The sad twist in this story is that the desire to make the
          | encryption inflexible enough that it surely couldn't be
          | circumvented made it impossible to amend it so that SFUs would
          | work... leading people to disable it altogether.
        
           | waterhouse wrote:
           | > end to end encryption makes multi-party chats hard. The
           | basic issue is that you don't want to burden end points with
           | sending the video streams to all users. ... The mandatory end
           | to end encryption was circumvented by connecting to a central
           | server.
           | 
           | Dumb question: Can't you choose one video encryption key K,
           | use ten thousand individual secure connections (with
           | different keys) to share K with all the other users, then
           | encrypt your video with K and let central servers mirror it
           | all they like? (Could even have other clients do some
           | mirroring, bittorrent-style.) Regarding downsampling: if the
           | client has only enough CPU and bandwidth to put out one
           | stream, then, yeah, that doesn't work very well, but
           | otherwise you could put out multiple streams (all encrypted
           | with K) of different quality.
        
             | traspler wrote:
              | From what I understood, the clients had no way of
              | accessing an encoded WebRTC video frame before it was sent
              | over the network; only with the new Insertable Streams is
              | this possible. So they kind of plan to do what you say:
              | encrypt it "manually" on the client and let the router
              | mirror it. Sharing the key as you proposed still requires
              | that you can connect p2p to all participants. Sadly that's
              | not possible in all NAT situations, and you would still
              | need a TURN server for the clients to meet, again
              | introducing a central point.
        
             | garmaine wrote:
             | Not a dumb question at all. That's exactly how this would
             | be sensibly designed. Does WebRTC's built in E2EE
             | encryption not do this?
        
       ___________________________________________________________________
       (page generated 2020-04-13 23:00 UTC)