[HN Gopher] Phoenix LiveDashboard
       ___________________________________________________________________
        
       Phoenix LiveDashboard
        
       Author : feross
       Score  : 171 points
       Date   : 2020-04-16 19:03 UTC (3 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | kuzee wrote:
       | Elixir/Phoenix developers will get a lot of mileage out of the
       | ability to quickly collect and inspect stats like this. This is
       | really great and solves ~50% of the use cases for an outside
       | error monitoring service while also making debugging specific
       | instances easier.
        
       | valuearb wrote:
       | What if you live slightly outside of Phoenix? Can you still use
       | the dashboard?
        
         | pselbert wrote:
         | Searching for things related to Phoenix is a constant problem.
         | Even Phoenix Elixir will sometimes give you results for random
         | bars in Arizona.
        
         | dogweather wrote:
         | Lol
        
         | out_of_protocol wrote:
         | Dashboard built on top of LiveView, which is elixir-specific.
         | 
          | * For any other language it means trying to replicate the
          | project from scratch (and it's unlikely to be of much use in
          | PHP, Python, Ruby, etc.)
          | 
          | * For non-Phoenix Elixir projects you could make it work, but
          | you'll rewrite half of the Phoenix framework anyway, so I
          | don't see the point
        
       | [deleted]
        
       | krat0sprakhar wrote:
        | Wow, this looks amazing - I'm so jealous of the Elixir /
        | Phoenix ecosystem! The built-in telemetry is the killer
        | feature IMO, as generally one would have to use an external
        | system such as New Relic to get this info.
       | 
       | Awesome work by the folks behind it - now I just need a project
       | idea for which I can try this out :)
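For context on how little wiring the built-in dashboard needs, it mounts as a single route in the Phoenix router, roughly as the project's README describes (MyAppWeb and the dependency version below are placeholders):

```elixir
# In mix.exs deps (version is illustrative):
#   {:phoenix_live_dashboard, "~> 0.1"}

defmodule MyAppWeb.Router do
  use Phoenix.Router
  import Phoenix.LiveDashboard.Router

  pipeline :browser do
    plug :fetch_session
    plug :protect_from_forgery
  end

  scope "/" do
    pipe_through :browser
    # Mounts the dashboard at /dashboard; in production you would
    # want to guard this route behind authentication.
    live_dashboard "/dashboard"
  end
end
```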
        
         | nickjj wrote:
          | Not to knock the live dashboard, but from what I read on IRC
          | you will still likely need New Relic or other tools to look
          | at historical logs and explore past states of your system to
          | help track down errors or analyze and filter those logs.
         | 
          | The live dashboard only captures information in the moment.
          | There is no persistence and no advanced filtering. Meaning,
          | if you received an error notification 5 minutes ago but
          | didn't already have the dashboard open, you couldn't just
          | load the live dashboard and check it out there. You would
          | have to trawl through your own logs or use a 3rd-party
          | logging tool just like before.
         | 
         | Where live dashboard seems to shine is if you already have it
         | open beforehand, you can see various metrics about your app in
         | real time. Although other logging tools are pretty close to
         | real-time too.
         | 
         | But it is cool that something like this is included by default.
         | I hope future releases expand on what it allows you to do.
        
       | tomconroy wrote:
       | Better link is Jose's tweet about it:
       | https://twitter.com/josevalim/status/1250846714665357315
        
         | brightball wrote:
         | Having this built in is absolutely awesome...
        
         | ariadnavc wrote:
          | hello
        
         | out_of_protocol wrote:
         | ^^^ screenshots!
        
       | rubiquity wrote:
       | It's awesome for Phoenix users to have this built right into
       | their apps. I know it isn't a full replacement for Splunk or what
       | have you, but it's a step in the right direction to have this at
       | your finger tips. The SaaSification of operational tools isn't
       | great from a cost or waste perspective given that most vendors
       | are likely overprovisioned and under utilized. You also can't
       | beat the user experience of staying in your language and
       | framework's ecosystem. Well done.
        
       | taspeotis wrote:
       | Is Erlang's BEAM VM really slow compared to other virtual
       | machines, or is it Phoenix? It seems to be a bit of a laggard on
       | benchmarks [1] and there's been this thread going on for years
       | [2] which hasn't reached a conclusion.
       | 
       | [1]
       | https://www.techempower.com/benchmarks/#section=data-r18&hw=...
       | 
       | [2] https://elixirforum.com/t/techempower-benchmarks/171
       | 
       | EDIT: Thanks everyone for the responses.
        
         | derefr wrote:
         | BEAM is by default tuned for throughput over latency, but also
         | for soft-realtime responsiveness over throughput. The right
         | benchmark for BEAM performance isn't how fast it can do a
         | single-threaded or even multi-threaded CPU-bound task; but
          | rather the 99th percentile packet-to-packet latency + makespan +
         | memory-requirement volatility of a C10M workload. In other
         | words: if the VM were hosting a million VoIP calls, how many
         | frames would it deliver "on time" such that the client wouldn't
         | discard them?
         | 
         | (This is not to say you can't tune BEAM to the needs of other
         | workloads; those are just the defaults. Though, _because_ that
         | is the default assumed workload, it's also the one most
         | optimization effort goes into.)
         | 
         | BEAM is also pure-functional in a way that a lot of VMs hosting
         | pure-functional _languages_ aren't; things like mutable atomic
         | memory handles were only introduced recently, and there's no
         | plan to rewrite core operations in terms of them, because these
         | primitives are less _predictable_ in their time costs than
         | equivalent functional primitives.
         | 
         | You can go fast under BEAM, but the people who want to do so,
         | often decide that it's too hard to go fast while _also_
         | retaining soft-real-time guarantees they began using a BEAM
         | language to attain; so they instead use one of BEAM's many IPC
         | bridges to hook up a fast native "port program" written in
         | another language to the BEAM node, ensuring that any hiccups or
         | crashes in the native code are isolated to its own address
         | space and execution threads.
         | 
         | This last consideration means that very few people are really
         | driving BEAM's performance forward, because so many instead
         | take the "escape hatch" of low-overhead IPC.
         | 
         | Taken as whole systems, though, you'll find BEAM in a lot of
         | services that go very fast indeed--you just won't usually find
         | BEAM itself handling the performance-critical part!
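The cheap-process model described above can be sketched in a few lines of plain Elixir (the counts are arbitrary, not a benchmark): spawn tens of thousands of lightweight processes and round-trip one message through each.

```elixir
parent = self()

# Spawn 50_000 processes, each waiting to be pinged.
pids =
  for _ <- 1..50_000 do
    spawn(fn ->
      receive do
        {:ping, from} -> send(from, :pong)
      end
    end)
  end

# Fan a message out to every process.
Enum.each(pids, fn pid -> send(pid, {:ping, parent}) end)

# Collect the replies.
replies =
  Enum.reduce(1..50_000, 0, fn _, acc ->
    receive do
      :pong -> acc + 1
    after
      5_000 -> acc  # bail out rather than hang if something went wrong
    end
  end)

IO.puts("got #{replies} replies")
```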
        
           | taspeotis wrote:
           | Thank you for the explanation of BEAM + its objectives and
           | internals.
           | 
           | > It's incredibly good at multiprocessing, which is really
           | useful in the websphere
           | 
           | So on that benchmark I linked, if you switched to the latency
           | tab Phoenix clocks in at 14.6ms average latency with a
           | standard deviation of 23ms and a max of 460.7ms. The best
           | looking ASP.NET Core result from a standard deviation and max
           | point of view is 1.5ms average latency, 1.4ms standard
           | deviation and 49ms max.
           | 
           | So again it's hard for me to tell whether it's BEAM or
           | Phoenix that's responsible for the relative performance hit.
        
             | derefr wrote:
             | Very likely Phoenix (Phoenix is "more" software than
             | ASP.Net is; a better comparison might be to Erlang's cowboy
             | library); but also very likely that the author of that
             | benchmark didn't really tune BEAM. By default, BEAM has
             | runtime tracing hooks enabled; native code compilation
             | (HiPE) disabled; no core pinning; low enough process-heap-
             | size upon spawn that an immediate grow is usually
             | necessary; and a slew of other things.
             | 
             | There's also the fact that Elixir and even Erlang don't
             | take full advantage of the possibilities of BEAM-bytecode
             | whole-program optimization. For example, any list you walk
             | using Erlang's lists module or Elixir's Enum module is
             | going to generate a back-and-forth of remote (symbolic)
             | calls, rather than resulting in the body of that function
             | of lists/Enum being inlined and fused into the caller.
             | Which in turn means, even if you use HiPE, you'll be
             | thunking in and out of native code so much that the
             | overhead will eat any potential benefit; _and_ each side
             | will be opaque to the static analysis of the other, so even
             | a wholesale native recompilation of the runtime won't save
             | you. (Same with gen_servers: remote callbacks through
             | library code, rather than fused module-local receive
             | statements; ergo, 10x optimization opportunity lost.)
             | 
             | You'll see real optimization in code generated by e.g.
             | Erlang's lexer-generator library leex; or in specific parts
             | of each language's stdlib, like Elixir's Unicode-handling
             | functions. But such optimized code emission is thin-on-the-
             | ground compared to "naive" coding in Erlang/Elixir. (And I
             | don't blame the language devs: the languages are fast
             | enough for almost everything people try to do; and for the
             | rest, you can ignore the language-as-framework and write
             | entirely self-contained modules that only rely on BEAM
             | primitives.)
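To make the remote-vs-local call distinction above concrete, here is a minimal sketch (LocalSum is a made-up module): both functions compute the same sum, but the first dispatches only module-local calls, while the second goes through the Enum module's remote calls, which the comment argues cannot be inlined across module boundaries.

```elixir
defmodule LocalSum do
  # Module-local, tail-recursive sum: every call stays inside this
  # module, so it is a candidate for within-module optimization.
  def sum(list), do: sum(list, 0)
  defp sum([], acc), do: acc
  defp sum([h | t], acc), do: sum(t, acc + h)

  # The same computation via the stdlib: a remote (cross-module) call.
  def sum_enum(list), do: Enum.sum(list)
end

IO.inspect(LocalSum.sum([1, 2, 3]))      # 6
IO.inspect(LocalSum.sum_enum([1, 2, 3])) # 6
```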
        
         | josevalim wrote:
         | I have commented this in other places but the main explanation
         | for the benchmark you linked is that Phoenix (and its
         | underlying Cowboy webserver) are not optimized for this
         | particular workflow.
         | 
         | For a plain-text benchmark, you are ultimately measuring the
         | ability of sitting on top of a socket and reading/writing to
         | that socket as fast as you can.
         | 
         | However, Phoenix/Cowboy, whenever there is an HTTP connection,
         | spawns a VM light-weight process to own that connection, and
         | then each individual request runs in its own light-weight
         | process too. This comes with its own guarantees in terms of
         | state isolation, reasoning about failures, and you get both I/O
         | and CPU concurrency for free - if you were to do any meaningful
         | work on those requests.
         | 
         | Even though the VM processes are cheap to spawn and
         | lightweight, it is overhead compared to something that is just
         | directly reading and writing to the socket. I actually wrote a
         | proof of concept where we just sit on top of the socket without
         | spawning processes and it performs quite well - although I
         | don't think it has any practical purpose. There is also an
         | interesting article from [StressGrid](https://stressgrid.com/bl
         | og/cowboy_performance_part_2/) that shows how removing the per-
         | request process speeds it up by ~60%. But once you are going
         | through long requests (i.e. 1ms because you need to talk to a
         | database or API, encode JSON, etc), these differences tend to
         | matter less and the concurrency model starts to give you an
         | upper hand.
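A hedged sketch of the process-per-connection shape described above (this is not Cowboy's actual implementation; EchoServer is made up): an acceptor loop hands each accepted socket to its own lightweight process, so a crash in one connection is isolated from the acceptor and from every other connection.

```elixir
defmodule EchoServer do
  def listen(port) do
    {:ok, socket} =
      :gen_tcp.listen(port, [:binary, packet: :line, active: false, reuseaddr: true])

    accept_loop(socket)
  end

  defp accept_loop(socket) do
    {:ok, conn} = :gen_tcp.accept(socket)
    # One process per connection: the overhead josevalim describes,
    # in exchange for failure isolation and free concurrency.
    spawn(fn -> serve(conn) end)
    accept_loop(socket)
  end

  defp serve(conn) do
    case :gen_tcp.recv(conn, 0) do
      {:ok, line} ->
        :gen_tcp.send(conn, line)
        serve(conn)

      {:error, :closed} ->
        :ok
    end
  end
end
```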
        
           | taspeotis wrote:
           | Thanks. So a 60% improvement would be approximately 300K
           | requests/sec vs. ASP.NET Core's seven million. Which still
           | looks concerning.
           | 
           | > But once you are going through long requests (i.e. 1ms
           | because you need to talk to a database or API, encode JSON,
           | etc), these differences tend to matter less and the
           | concurrency model starts to give you an upper hand.
           | 
           | If you look at the other tabs in the benchmark there are
           | "heavier" loads that include database access and the most
           | favourable one I can find is "Fortunes" which puts Phoenix at
           | 17.8% of the relative performance of the best ASP.NET Core
           | one.
        
             | josevalim wrote:
             | For the fortunes case, you are still comparing a web
             | framework with a platform. When comparing web frameworks,
              | some ASP.NET frameworks perform worse, others perform
              | better, but the fastest is at most 2.4x faster.
             | 
             | On the plain text one, the difference between frameworks is
             | at most 6x. You would have to yank Phoenix and sit directly
             | on top of a socket, as per my previous comment, if you want
             | to compare both VMs.
        
         | TylerE wrote:
         | It's much more complicated than a simple slow <--> fast
         | continuum.
         | 
          | Elixir is pretty slow for simple single-threaded CPU-bound
          | tasks.
         | 
         | It's incredibly good at multiprocessing, which is really useful
         | in the websphere since your workload is much more likely to
         | resemble thousands of simultaneous communications that need to
         | not block each other rather than calculating a
         | 13042345235223452 digit prime number.
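A small sketch of that contrast: fifty simulated requests that each sleep for 20ms (a stand-in for a database call or API request) complete concurrently in roughly one sleep's worth of wall time, rather than a second of sequential waiting.

```elixir
results =
  1..50
  |> Task.async_stream(
    fn i ->
      Process.sleep(20)  # stand-in for I/O: DB query, API call, etc.
      i * 2
    end,
    max_concurrency: 50
  )
  |> Enum.map(fn {:ok, v} -> v end)

# 2 * (1 + 2 + ... + 50) = 2550
IO.inspect(Enum.sum(results))  # 2550
```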
        
           | taspeotis wrote:
           | > It's incredibly good at multiprocessing, which is really
           | useful in the websphere
           | 
           | Right so the benchmark I linked to was TechEmpower Web
           | Framework Benchmarks, and Phoenix managed 186,774
           | requests/sec on a 14C/28T Xeon to do basically nothing (echo
           | Hello World) and ASP.NET Core managed ... seven million.
        
             | TylerE wrote:
             | Have you considered that's revealing more about the
             | webserver used than the software running behind it?
             | 
             | Plus, 186k/sec is still a hell of a lot.
        
               | taspeotis wrote:
               | That seems to be what the forum thread is about. It
               | couldn't possibly be that Phoenix is slow, it must be ...
        
             | ghayes wrote:
             | I'd throw this hat into the arena, which is the creator of
             | Phoenix trying to push the framework to its limits (from a
             | few years ago) and getting 2MM active websocket connections
             | from one server: https://www.phoenixframework.org/blog/the-
             | road-to-2-million-...
        
               | leeoniya wrote:
               | or 1MM on an old laptop with uWebSockets.js:
               | 
               | https://medium.com/@alexhultman/millions-of-active-
               | websocket...
        
               | taspeotis wrote:
               | I think that's a good example of how Phoenix can scale
               | but not necessarily how fast it is. Scale is one part of
               | performance but I should have made it more clear: I'm
               | interested in how fast it can be. Looking at those charts
               | it takes them 7.5 minutes to reach 2 million subscribers.
               | Without knowing how much work is involved for each
               | subscription it's hard to take away information about its
               | speed.
        
       | systemd0wn wrote:
        | It looks like Plangora made a YouTube video recently.
       | https://www.youtube.com/watch?v=Nqr5ly35tu8
        
       ___________________________________________________________________
       (page generated 2020-04-16 23:00 UTC)