[HN Gopher] Container-to-Container Communication
       ___________________________________________________________________
        
       Container-to-Container Communication
        
       Author : miketheman
       Score  : 28 points
       Date   : 2021-12-28 19:10 UTC (3 hours ago)
        
 (HTM) web link (www.miketheman.net)
 (TXT) w3m dump (www.miketheman.net)
        
       | Jhsto wrote:
        | Bit beside the point, but how many of you still run nginx
        | inside container infrastructure? I've been keeping container
        | hosts behind a firewall without explicit WAN access for a long
        | time -- to expose public services, I offload the nginx duties to
        | CloudFlare by running a `cloudflared` tunnel. These "Argo"
        | tunnels are free to use and essentially give you a managed nginx
        | for free. Nifty if you are using CloudFlare anyway.
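        | 
        | Roughly, the flow is (hostname and port are placeholders, and
        | the named form also needs cloudflared's credentials/config):
        | 
        |   # quick, throwaway tunnel straight to a local service
        |   cloudflared tunnel --url http://localhost:8080
        | 
        |   # or a named tunnel routed to a hostname on your zone
        |   cloudflared tunnel create example
        |   cloudflared tunnel route dns example app.example.com
        |   cloudflared tunnel run example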
        
         | akvadrako wrote:
         | Not nginx, but I run haproxy, which serves the reverse proxy
         | role.
         | 
         | I use it instead of Google's own ingress because it gives you
         | better control over load balancing and can react faster to
         | deployments.
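          | 
          | For illustration, a minimal sketch of the kind of control I
          | mean (addresses, names and timeouts are made up):
          | 
          |   # haproxy.cfg
          |   defaults
          |       mode http
          |       timeout connect 5s
          |       timeout client  30s
          |       timeout server  30s
          | 
          |   frontend fe_http
          |       bind *:80
          |       default_backend be_app
          | 
          |   backend be_app
          |       balance leastconn
          |       server app1 10.0.0.11:8000 check
          |       server app2 10.0.0.12:8000 check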
        
       | Klasiaster wrote:
       | No talk about permissions, I think locking down access is also an
       | interesting aspect of Unix Domain Sockets compared to TCP
       | sockets.
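        | 
        | For example -- the socket is just a file, so ordinary ownership
        | and mode bits gate who may connect (paths and users here are
        | illustrative):
        | 
        |   # after the app binds /var/run/app/app.sock
        |   chown app:www-data /var/run/app/app.sock
        |   chmod 660 /var/run/app/app.sock   # owner + proxy group only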
        
         | Jhsto wrote:
          | As if TCP is less secure? Not necessarily, as the TCP sockets
          | here most likely live in a private network between the two
          | containers: you run `podman network create example` and then
          | `podman run --net=example whatever`. This creates an internal
          | network (e.g., a private 10.x.x.x subnet) over which the
          | containers communicate. Only if you want to expose some port
          | would you use the `-p` flag to publish it to the host LAN,
          | which you then redirect to the WAN.
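          | 
          | Spelled out (image names are placeholders):
          | 
          |   podman network create example
          |   podman run -d --net=example --name app my-app-image
          |   podman run -d --net=example --name web -p 8080:80 proxy-image
          |   # the containers talk over the internal network;
          |   # only port 8080 is published on the host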
        
       | snicker7 wrote:
       | Doesn't 700 requests per second for such a trivial service seem
       | kinda slow?
        
         | miketheman wrote:
          | It may, but consider the overhead incurred in the local setup
          | - calling the app through a port exposed by Docker Desktop.
          | Running the app natively on macOS produces ~5000 TPS. The `ab`
          | parameters also don't use any concurrency; requests are
          | performed sequentially. The test is not designed to maximize
          | TPS or saturate resources, but rather to isolate variables for
          | the comparison.
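          | 
          | i.e. along the lines of (URL is a placeholder):
          | 
          |   # -c 1 keeps the requests strictly sequential
          |   ab -n 10000 -c 1 http://localhost:8080/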
        
         | lmeyerov wrote:
         | I didn't look too closely, but I'm wondering if this is
         | Python's GIL. So instead of nginx -> multiple ~independent
         | Python processes, each async handler is fighting for the lock,
         | even if running on a different core. So read as 700 queries /
         | core vs 700 queries / server. If so, in a slightly tweaked
         | Python server setup, that'd be 3K-12K/s per server. For
         | narrower benchmarking, keeping sequential, and doing container
          | 1 pinned to core 1 <-> container 2 pinned to core 2 might give
          | an even clearer picture.
         | 
          | I did enjoy the work overall, incl. the Graviton comparison.
          | Likewise, OS X's painfully slow Docker impl has driven me to
          | Windows w/ WSL2 -> Ubuntu + nvidia-docker, which has been night
          | and day, so I'm not as surprised that those numbers look weird.
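          | 
          | For example (assuming a gunicorn-style server; images and
          | names are placeholders): several worker processes, each with
          | its own GIL, plus explicit core pinning:
          | 
          |   # multiple independent Python workers behind one socket
          |   gunicorn -w 4 -b unix:/tmp/app.sock app:app
          | 
          |   # pin each container to its own core for the benchmark
          |   docker run -d --cpuset-cpus=0 --name app my-app-image
          |   docker run -d --cpuset-cpus=1 --name web my-proxy-image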
        
         | fyrn- wrote:
         | Yeah, it's so slow that I'm wondering if they were actually
         | measuring TCP/unix socket overhead. I wouldn't expect to see a
         | difference at such a low frequency.
        
           | Tostino wrote:
            | Yeah, seems like there was some other bottleneck. Maybe
            | changing the IPC method accounts for the small difference we
            | are seeing, but we should be seeing orders of magnitude
            | higher TPS before the IPC method even starts to matter.
        
       | astrea wrote:
        | The multiple layers of abstraction here make this test sorta
        | moot. You have the AWS infra, the poor macOS implementation of
        | Docker, the server architecture. Couldn't you have just taken a
        | vanilla Ubuntu install, curled some dummy load n times, and
        | gotten some statistics from that?
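        | 
        | i.e. something as simple as (URL is a placeholder):
        | 
        |   for i in $(seq 1 1000); do
        |     curl -s -o /dev/null -w '%{time_total}\n' http://localhost:8080/
        |   done > timings.txt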
        
         | miketheman wrote:
         | Possibly - however the premise is that I'm running an
         | application on cloud infrastructure that I don't control -
         | which is common today. I tried to call that out in the post.
        
       | KaiserPro wrote:
        | Normally I'm all for people using tried and tested primitives
        | for things; however, I think that in this case unix sockets are
        | probably not the right choice.
       | 
        | Firstly, you are creating a hard dependency on the two services
        | sharing the same box, with a shared file system (that's
        | difficult to coordinate and secure). And should you add a new
        | service that _also_ wants to connect via unix socket, things
        | could get tricky to orchestrate.
       | 
       | But this also limits your ability to move stuff about, should you
       | need it.
       | 
        | Inside a container, I think it's probably a perfectly legitimate
       | way to do IPC. Between containers, I suspect you are asking for
       | trouble.
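        | 
        | To make the coupling concrete, between containers the socket
        | has to ride on a shared volume, something like (paths and
        | images are illustrative):
        | 
        |   docker run -d -v /var/run/app:/sock --name app my-app-image
        |   docker run -d -v /var/run/app:/sock -p 80:80 my-proxy-image
        |   # both containers now depend on the same host path -- and
        |   # therefore on the same host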
        
         | akvadrako wrote:
         | Within a pod it seems pretty reasonable.
        
           | erulabs wrote:
           | Exactly, and this is where the concept of "Pod" really
           | shines. Sharing networking and filesystems between containers
           | is, from time to time, exactly the right strategy, as long as
           | it can be encapsulated.
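            | 
            | A sketch of that encapsulation (image names are
            | placeholders):
            | 
            |   # pod.yaml -- two containers share an emptyDir volume for
            |   # the socket; nothing outside the pod can reach it
            |   apiVersion: v1
            |   kind: Pod
            |   metadata:
            |     name: app-with-proxy
            |   spec:
            |     volumes:
            |       - name: sock
            |         emptyDir: {}
            |     containers:
            |       - name: app
            |         image: my-app-image
            |         volumeMounts:
            |           - name: sock
            |             mountPath: /sock
            |       - name: proxy
            |         image: nginx
            |         volumeMounts:
            |           - name: sock
            |             mountPath: /sock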
        
       ___________________________________________________________________
       (page generated 2021-12-28 23:01 UTC)