[HN Gopher] Show HN: Hookdeck - an infrastructure to consume webhooks
       ___________________________________________________________________
        
       Show HN: Hookdeck - an infrastructure to consume webhooks
        
       Author : alexbouchard
       Score  : 65 points
       Date   : 2021-08-04 17:14 UTC (5 hours ago)
        
 (HTM) web link (hookdeck.com)
 (TXT) w3m dump (hookdeck.com)
        
       | tegansnyder wrote:
        | This is pretty neat. If you could take it a step further and have
        | integrations that send the response payloads to other services
        | like S3, SES, SQL inserts into Redshift, etc., it would be great.
        
         | turtlebits wrote:
         | You can easily do this with AWS Lambda and the boto3 library.
         | You get logging, metrics and alerting if you want too. AWS has
         | a generous free tier.
         | 
          | For Python devs, this is a great framework for easily creating
          | and deploying Lambda functions - https://github.com/aws/chalice.
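          | 
          | For example, a Chalice app that archives each payload to S3
          | could look something like this (a rough sketch; the bucket
          | name and route are placeholders):
          | 
          |     # app.py - deploy with `chalice deploy`
          |     import json
          |     import uuid
          |     import boto3
          |     from chalice import Chalice
          | 
          |     app = Chalice(app_name="webhook-sink")
          |     s3 = boto3.client("s3")
          | 
          |     @app.route("/webhook", methods=["POST"])
          |     def receive():
          |         # Persist the raw payload under a unique key.
          |         body = app.current_request.json_body
          |         s3.put_object(
          |             Bucket="my-webhook-archive",  # placeholder
          |             Key=f"events/{uuid.uuid4()}.json",
          |             Body=json.dumps(body),
          |         )
          |         return {"ok": True}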
        
         | alexbouchard wrote:
          | That's a good idea. We've been focused on shipping the
          | ability to run your own code (which could just be a Lambda
          | uploading to S3, etc.), but directly integrating with some
          | services is definitely possible. We just want to be careful
          | not to become another Zapier; we want to help the tech
          | teams!
        
           | ezekg wrote:
            | > we've been focused on shipping the ability to run your
            | own code
           | 
           | When this happens, ping me on twitter! I'll send some
           | customers your way. One of my most frequently asked questions
           | is if I know of an easy way to host webhook handler code. I
           | usually point to Zapier, but I'd rather point to Hookdeck. :)
        
       | alexbouchard wrote:
       | Hey, Alex here. I'm excited to share Hookdeck along with my co-
       | founders Eric and Maurice. It's a product we started working on
        | after dealing with our fair share of webhook-related issues
       | (missed webhooks, time-consuming troubleshooting) at our previous
       | employers.
       | 
       | Incoming webhooks are challenging because they require a well-
       | built (and often complex) asynchronous system, and they are never
        | a priority until they break. When I was building webhook
        | integrations, I was left with two options: implement my own
        | infrastructure to handle webhooks (ingestion, queuing,
        | processing, monitoring, and alerting) or ignore the problem
        | altogether and suffer from intermittent, often undiagnosable,
        | failures.
       | 
        | We've found that it's entirely possible to offer a platform-
       | agnostic webhook infrastructure to consume webhooks reliably.
       | Specifically, Hookdeck acts as a push queue to your HTTP
       | endpoints. Webhooks are ingested by highly available services and
       | go through an event lifecycle that manages webhook delivery. That
       | allows Hookdeck to maintain a log of all events and delivery
        | attempts, perform custom retry logic, route webhooks to multiple
        | destinations, and even apply filters to received events. Hookdeck
       | focuses on ingestion reliability, visibility and error recovery.
       | 
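        | On the consumer side the contract is simple: a 2xx response
        | acknowledges the event and, per the usual push-queue convention,
        | anything else is eligible for retry. Since retries imply
        | possible duplicate deliveries, handlers should be idempotent. A
        | minimal Flask sketch (the "X-Event-Id" header and the in-memory
        | set are made up for illustration):
        | 
        |     # consumer.py - illustrative webhook consumer
        |     from flask import Flask, request
        | 
        |     app = Flask(__name__)
        |     seen = set()  # use a durable store (e.g. Redis) for real
        | 
        |     def process(payload):
        |         ...  # business logic goes here
        | 
        |     @app.route("/webhooks", methods=["POST"])
        |     def consume():
        |         # Retries can deliver the same event twice, so dedupe
        |         # on an event id ("X-Event-Id" is hypothetical).
        |         event_id = request.headers.get("X-Event-Id")
        |         if event_id in seen:
        |             return "", 200  # already handled, just ack
        |         process(request.get_json())
        |         seen.add(event_id)
        |         return "", 200  # 2xx acks; non-2xx would be retried
        | 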
       | It's a satisfying space to work in, as webhooks are now commonly
       | relied upon by most web-based technical teams, and the tooling
       | around them has been lackluster - we have the ambition to change
       | that. I'll be around to answer any questions!
        
       | indigodaddy wrote:
       | Is this similar to something like rundeck?
        
         | alexbouchard wrote:
          | I'm not familiar with Rundeck, but a quick Google search makes
         | me think it's a very different tool.
         | 
         | Technical teams use Hookdeck to receive webhooks reliably.
         | However, Hookdeck itself makes no assumption about what those
         | webhooks are used for or do (we don't run workflows). You can
         | think of it as the backbone/infrastructure to manage and run
         | your asynchronous events without having to put together queues,
         | workers, ingestion services, alerts and logs.
        
           | indigodaddy wrote:
           | Thanks for the feedback! Your product looks cool and useful!
        
       | codedestroyer wrote:
       | SLA?
        
         | alexbouchard wrote:
         | We don't have a public SLA right now but work with customers
          | directly to establish one that makes sense for both parties.
         | 
         | That being said, check out this blog post
         | https://hookdeck.com/blog-posts/hookdecks-approach-to-reliab...
         | about our approach to reliability. Essentially we've decided to
         | focus on ingestion by reducing the dependencies to a minimum
         | and completely isolating it from the rest of our infra. We
          | can't guarantee 100% uptime - that would be unreasonable - but
          | we have a better likelihood of ingesting your webhooks than you
          | do.
        
           | edoceo wrote:
           | Our ingestion is a single binary running on $Provider.
           | 
           | Aren't you on $Provider too?
           | 
           | Wouldn't you be just as reliable as anyone else in $Provider?
           | (But adding complexity)
           | 
           | I guess I don't see the problem.
        
             | alexbouchard wrote:
             | It boils down to the number of dependencies you have and
             | the uptime of those dependencies. For instance, you'll need
             | to write to some form of temporary or permanent storage
             | like SQS, S3 or PG. What happens when this is down, or
             | you've busted your connection limit? With webhooks, you
             | have no control over the throughput, and enormous bursts of
             | traffic are frequent.
             | 
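              | To make the failure mode concrete, every ingestion path
              | ends up needing something like this (purely illustrative,
              | not our actual code; the queue URL and bucket name are
              | placeholders):
              | 
              |     import json
              |     import boto3
              |     from botocore.exceptions import ClientError
              | 
              |     sqs = boto3.client("sqs")
              |     s3 = boto3.client("s3")
              | 
              |     def ingest(event_id, payload):
              |         # Write to the primary queue; spill to S3 if the
              |         # queue is down or throttled so nothing is lost.
              |         try:
              |             sqs.send_message(
              |                 QueueUrl="https://sqs.example/queue",
              |                 MessageBody=json.dumps(payload),
              |             )
              |         except ClientError:
              |             s3.put_object(
              |                 Bucket="ingest-overflow",
              |                 Key=f"spill/{event_id}.json",
              |                 Body=json.dumps(payload),
              |             )
              | 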
             | You can build reliable ingestion, and we aren't reinventing
             | the wheel on that front. The difference is that we've taken
             | the time, and many teams prefer to invest that time
             | elsewhere.
             | 
             | We can also take some extra steps (and will), such as
              | having multiple $Providers for fail-over.
             | 
             | [EDIT]
             | 
             | To add to this, Hookdeck ingestion reliability is only part
             | of the value proposition. What customers really appreciate
             | is the visibility and error recovery. They don't have to
             | build a robust asynchronous processing system. They can
             | just deploy an HTTP endpoint and call it a day.
        
       | OJFord wrote:
       | This is pretty tangential, and doesn't matter at all, but many
       | services do it and I'm always curious - is the second-most
        | expensive ('Team') tier _really_ your 'most popular'? Or is it
       | just chosen to maximise up-selling return? Intuitively I would
       | guess most users don't pay, next most pay the lowest individual,
       | etc. - a few whales, lots of minnows.
        
         | alexbouchard wrote:
         | Fair point! Honestly, I would have assumed the same.
         | 
         | Our largest MRR contributor is the team plan right now, but in
         | terms of total count of plans, it's the individual by about a
          | factor of 2. It would be very hard to build a business off the
         | individual plans, so teams are our focus. To our surprise,
         | there are very few free plan accounts if you exclude "dead"
         | users (signed up but never used it).
        
         | Kinrany wrote:
         | It could be "most popular*" *as weighted by $$$ spent.
        
           | alexbouchard wrote:
           | That's exactly it
        
       | vletal wrote:
        | From time to time I write a quick & dirty hooker.py - a Flask-
        | based server just for this purpose. I'm sure I'm not alone in
        | naming it like that.
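        | 
        | Something like this, say (a minimal sketch):
        | 
        |     # hooker.py - quick & dirty webhook catcher
        |     from flask import Flask, request
        | 
        |     app = Flask(__name__)
        | 
        |     @app.route("/hook", methods=["POST"])
        |     def hook():
        |         # Log whatever arrives and ack with a 200.
        |         print(request.get_json(silent=True) or request.data)
        |         return "", 200
        | 
        |     if __name__ == "__main__":
        |         app.run(port=8000)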
        
       ___________________________________________________________________
       (page generated 2021-08-04 23:01 UTC)