[HN Gopher] Launch HN: Orbiter (YC W20) - Autonomous data monito...
       ___________________________________________________________________
        
       Launch HN: Orbiter (YC W20) - Autonomous data monitoring for non-
       engineers
        
       Hello HN! We are Victor, Mark, and Winston, founders of Orbiter
       (https://www.getorbiter.com). We monitor data in real-time to
       detect abnormal drops in business and product metrics. When a
       problem is detected, we alert teams in Slack so they never miss an
       issue that could impact revenue or the user experience.  Before
       Orbiter, we were product managers and data scientists at Tesla,
       DoorDash, and Facebook. It often felt impossible to keep up with
       the different dashboards and metrics while also actually doing
       work and building things. Even with tools like Amplitude, Tableau,
       and Google Data Studio, we would still catch real issues late by
       days or weeks. This led to lost revenue and bad customer
       experiences (e.g. angry customers who tweet at Elon Musk). We couldn't
       stare at dashboards all day, and we needed to quickly understand
       which fluctuating metrics were concerning. We also saw that our
       engineering counterparts had plenty of tools for passive monitoring
       and alerting--PagerDuty, Sentry, DataDog, etc.--but the business
       and product side didn't have many. We built Orbiter to solve these
       problems.  Here's an example: at a previous company, a number of
       backend endpoints were migrated which unknowingly caused a
       connected product feature in the Android shopping flow to
       disappear. Typically, users in that part of the shopping flow
       progress to the next page at a 70% rate, but because of the
       missing feature, this rate dropped by 5% absolute (from 70% to
       65%). This was a serious issue
       but was hard to catch by looking at dashboards alone because: 1)
       this was just one number changing out of hundreds of metrics that
       change every hour, 2) this number naturally fluctuates daily and
       weekly, especially as the business grows, 3) it would have taken
       hours of historical data analysis to ascertain that a 5% drop was
       highly abnormal for that day. It wasn't until this metric stayed
       depressed for many days that someone found it suspicious enough to
       investigate. All in, including the time to implement and deploy the
       fix, conversion was depressed for seven days, costing more than $50K
       in reduced sales.  It can be especially challenging for the human
       eye to judge the severity of a changing metric; seasonality, macro
       trends, and sensitivity all make it hard to draw firm
       conclusions. To solve this, we build machine learning models for
       your metrics that capture the normal/abnormal patterns in the data.
       We use a supervised learning approach for our alerting algorithm
       to identify real abnormalities: we forecast the expected "normal"
       metric value and classify whether a deviation from it should be
       labeled as an alert. Specifically, forecasting models
       identify macro-trends and seasonality patterns (e.g. this
       particular metric is over-indexed on Mondays and Tuesdays relative
       to other days of the week). Classifier models determine the
       likelihood of true positives based on historical patterns. Each
       metric has an individual sensitivity threshold that we tune with
       our customers so the alerting conditions catch real issues without
       being overly noisy. Models are re-trained weekly and we take user
       feedback on alerts to update the model and improve accuracy over
       time.
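       To make that concrete, here's a deliberately simplified sketch of
       the forecast-then-classify idea in Python (illustrative only; the
       day-of-week baseline and the z-score rule below are stand-ins for
       our actual models):

           # Illustrative forecast-then-classify sketch, not production code.
           # Assumes a daily metric series indexed by timestamp.
           import numpy as np
           import pandas as pd

           def forecast_expected(history, when):
               """Expected value and spread for `when`, from a simple
               day-of-week seasonal baseline."""
               same_weekday = history[history.index.dayofweek == when.dayofweek]
               recent = same_weekday.tail(8)        # last ~8 comparable days
               return recent.mean(), recent.std(ddof=1)

           def should_alert(history, when, observed, sensitivity=3.0):
               """Flag the observation if it falls too far below the seasonal
               expectation. `sensitivity` plays the role of the per-metric
               threshold tuned with each customer; a trained classifier
               would replace this hand-written rule."""
               expected, spread = forecast_expected(history, when)
               if not np.isfinite(spread) or spread == 0:
                   return False                     # not enough history to judge
               z = (observed - expected) / spread
               return z < -sensitivity              # only abnormal drops alert

           # Example: 60 days of conversion rates with a weekly pattern.
           idx = pd.date_range("2020-01-01", periods=60, freq="D")
           rng = np.random.default_rng(0)
           series = pd.Series(0.70 + 0.02 * np.sin(2 * np.pi * idx.dayofweek / 7)
                              + rng.normal(0, 0.005, len(idx)), index=idx)
           print(should_alert(series, pd.Timestamp("2020-03-01"), observed=0.65))
           # -> True: a 5-point drop is far outside the normal range
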
       Some of our customers are startups with sparse data. In these
       cases, it can be challenging to build a high-confidence
       model. What we do instead is work with our customers to define
       manual settings for "guardrails" that trigger alerts. For example,
       "Alert me if this metric falls below 70%!" or "Alert me if this
       metric drops more than 5% week over week". As our customers and
       their datasets grow, we can apply greater intelligence to their
       monitoring by moving over to the automated modeling approach.
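       In code, a guardrail amounts to nothing more than a couple of
       hand-written rules per metric (a rough sketch; the thresholds are
       just the examples above):

           # Illustrative guardrail check, not production code: an absolute
           # floor plus a week-over-week drop limit.
           def breaches_guardrail(current, week_ago, floor=0.70, max_wow_drop=0.05):
               """True if the metric falls below its floor or drops too
               much week over week."""
               if floor is not None and current < floor:
                   return True
               if max_wow_drop is not None and week_ago > 0:
                   wow_drop = (week_ago - current) / week_ago
                   if wow_drop > max_wow_drop:
                       return True
               return False

           print(breaches_guardrail(current=0.65, week_ago=0.70))
           # -> True (breaches both the 70% floor and the 5% WoW limit)
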
       We made Orbiter so that it's easy for non-technical teams to set
       up and use. It's a web app, requires no eng development, and
       connects to existing analytics databases the same way that
       dashboard tools like Looker or a SQL editor do. Teams
       connect their Slack to Orbiter so they get immediate notifications
       when a metric changes abnormally.
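       For illustration, the plumbing can be thought of as running a SQL
       query against the warehouse and posting to a Slack incoming
       webhook when the decision logic fires (a hypothetical sketch; the
       table, DSN, and webhook URL below are made up, and this is not
       our actual connector code):

           # Conceptual plumbing only: read one metric from an analytics
           # database and push an alert to Slack via an incoming webhook.
           import psycopg2
           import requests

           DB_DSN = "postgresql://readonly@warehouse.example.com/analytics"   # placeholder
           SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

           def latest_conversion_rate():
               with psycopg2.connect(DB_DSN) as conn, conn.cursor() as cur:
                   cur.execute("""
                       SELECT avg(converted::int)
                       FROM checkout_events                  -- hypothetical table
                       WHERE event_time >= now() - interval '1 hour'
                   """)
                   return cur.fetchone()[0]

           def alert_slack(metric_name, value, expected):
               text = (f":rotating_light: {metric_name} is {value:.1%}, "
                       f"well below the expected {expected:.1%}")
               requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10)

           rate = latest_conversion_rate()
           if rate is not None and rate < 0.65:   # stand-in for the model's decision
               alert_slack("Android checkout progression", rate, expected=0.70)
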
       We anticipate that the HN community has members, teammates, or
       friends who are product
       managers, businesspeople, or data scientists that might have the
       problems we experienced. We'd love for you and them to give Orbiter
       a spin. Most importantly, we'd love to hear your feedback! Please
       let us know in the thread, and/or feel free to send us a note at
       hello@getorbiter.com. Thank you!
        
       Author : zhangwins
       Score  : 55 points
       Date   : 2020-03-07 18:04 UTC (4 hours ago)
        
       | arzel wrote:
       | I saw y'all on PH and immediately submitted to get early access.
       | Super excited to try it out, and congrats on the launch!
        
         | zhangwins wrote:
         | Woohoo! Thanks so much. Looking forward to getting in touch
         | soon :D
        
       | photonios wrote:
       | This sounds really cool! I've wished for something like this many
       | times. I am mostly attracted by the fact that it would be mostly
       | automatic. I am hoping it lives up to the hype.
       | 
       | Signed up for the beta. All the best!
        
       | generatorguy wrote:
       | I work on power stations which normally have about 1000 monitored
       | variables per turbine-generator and another 500 for the plant in
       | general. So typically 2500 for a two-unit plant.
       | 
       | Alarms are generated if a variable exceeds a threshold, or a
       | binary variable is in the wrong state.
       | 
       | Is Orbiter something that would benefit power plants?
        
         | parasj wrote:
         | Not OP, but I researched scalable anomaly detection systems for
         | power-generating assets. We collaborated with a large
         | industrial engine manufacturer on this work.
         | https://arxiv.org/abs/1701.07500. The key challenge customers
         | encountered was the prevalence of false alarms that led to
         | unnecessary service.
        
           | zhangwins wrote:
           | Woah this is awesome. How did you guys resolve the false
           | alarm issue wrt power plants?
        
         | zhangwins wrote:
         | Hey generatorguy - this is a really interesting use case so
         | thanks for sharing. I imagine our modeling / monitoring /
         | alerting capabilities can extend to power plants but will need
         | to understand the data better. The common types of business and
         | product metrics that our customers look for include user
         | growth, cancellation rates, call failure %s, all of the above
         | by different geos, etc. Happy to chat more if you'd like to
         | shoot me an email (I'm winston[at]getorbiter.com)
        
       | arciini wrote:
       | This is really cool! Our search-engine-based impressions dropped
       | substantially in early Feb. and because we didn't have that in
       | our main dashboards, it took us almost 2 weeks to discover that.
       | Orbiter would've been pretty useful for that - got in touch!
        
         | zhangwins wrote:
         | Thanks! Looking forward to getting connected. We've heard SEO-
         | specific use cases come up with some of the other companies
         | we've worked with too -- you basically need to find out the
         | exact time that your SEO ranking saw a material change because
         | it's usually driven by something that shipped at that time.
         | Otherwise it takes a long time to get back the traffic from GOOG
        
       | dataminded wrote:
       | Really excited about this.
       | 
       | We're very early into doing a PoC where we use DataDog/Cloudwatch
       | for our business metrics for this specific use case. We're also
       | looking at tracking data quality metrics. The standard BI
       | reporting tools are very immature when it comes to alerting based
       | on changes in data over time.
       | 
       | I hope at some point you consider ingesting metrics like the ops
       | tools do. Giving you direct access to my database is going to be
       | really challenging but I'm glad to send you what I want you to
       | keep track of.
        
         | zhangwins wrote:
         | Ah very interesting, and agree on the immaturity of
         | alerting/time-series changes for current BI reporting tools.
         | Would be great if you could send me more info about what you're
         | thinking about tracking & also hear more about the PoC you guys
         | are thinking of. Would you mind sending me a note to
         | winston[at]getorbiter.com?
        
       | djiddish98 wrote:
       | Go Winston! So much better than trying to do this in Tableau
        
       | dodata wrote:
       | Congrats on launching! Looks very helpful!
       | 
       | As a data scientist, I found that a drop in metrics was just as
       | often due to a data pipeline issue as it was an actual business
       | problem. This unfortunately causes business users to lose trust
       | in the metrics quickly. How do you plan to differentiate between
       | those two root causes of metric changes?
        
         | zhangwins wrote:
         | Ah I can empathize with you here (as a former DS) -- we had
         | incidents in the past where data pipeline / instrumentation
         | changes produced bad data, which then caused metric drops
         | (versus a real product issue); they nonetheless caused a loss
         | of confidence in the data.
         | 
         | We think there are a number of diagnostic features that could
         | be helpful here (to be built!). Teams today run playbooks to
         | root cause issues when metric drops happen. We should be able
         | to take that playbook and automate it. Say, Orbiter identifies
         | an abnormal change in Metric X. The team is then probably
         | analyzing sub-funnel metrics Y and Z, or looking at various
         | dimension cuts to isolate the issue. Maybe they're also
         | checking data quality by comparing the count of event volume
         | vs. count of user IDs vs. count of device IDs, etc. If we run
         | all of these diagnostic checks when Metric X drops, we could
         | give the team insight into what we know is OK vs. not OK.
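         | 
         | To make that concrete, the automated playbook could be little
         | more than a list of named checks that all run whenever the
         | parent metric alerts (a rough sketch of the idea only; none of
         | this is built yet, and the thresholds are made up):
         | 
         |     # Sketch of an automated diagnostic playbook (idea only).
         |     # Each check returns (name, ok, detail) so the alert can say
         |     # what looks OK vs. suspicious when Metric X drops.
         |     def check_subfunnel(metrics):
         |         ok = metrics["Y"] > 0.9 * metrics["Y_baseline"]
         |         return ("sub-funnel metric Y", ok, f"Y={metrics['Y']:.2f}")
         | 
         |     def check_data_quality(counts):
         |         # event volume and distinct user IDs should move together
         |         ok = abs(counts["events"] - counts["user_ids"]) < 0.2 * counts["events"]
         |         return ("event vs. user ID volume", ok, str(counts))
         | 
         |     def run_playbook(metrics, counts):
         |         checks = [check_subfunnel(metrics), check_data_quality(counts)]
         |         return [(name, "OK" if ok else "SUSPICIOUS", detail)
         |                 for name, ok, detail in checks]
         | 
         |     print(run_playbook({"Y": 0.40, "Y_baseline": 0.55},
         |                        {"events": 10000, "user_ids": 6000}))
         |     # -> both checks come back SUSPICIOUS for these inputs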
        
           | sanabriarenato wrote:
           | That's really cool! Besides identifying abrupt changes in
           | metric X, for me the most difficult part is trying to
           | understand what caused this change in X. Great to know that
           | you have this issue on the roadmap, but do you think it's
           | possible to develop a model/automation that is generic enough
           | to be used in different businesses? Maybe analysing the
           | correlation between different time series could be a way to
           | go?
        
       ___________________________________________________________________
       (page generated 2020-03-07 23:00 UTC)