[HN Gopher] Launch HN: Okay (YC W20) - Analytics for engineering...
       ___________________________________________________________________
        
       Launch HN: Okay (YC W20) - Analytics for engineering teams
        
       Antoine and Tomas here - we are excited to share Okay
       (https://www.okayhq.com) with you! Okay is an engineering analytics
       platform focused on detecting bottlenecks and annoyances that
       prevent engineering teams from being fully productive. We connect
       with all the devtools in your company and give you a query and
       alerting engine to find and solve common bottlenecks like long
       review cycles, after-hours on-call pages, heavy interview load,
       etc. Think Datadog or Grafana, but for team analytics.  For the
       past 12 years, we've been engineers and then managers of teams of 5
       to 150 in several types of companies - startups and big tech. We've
       seen the dev experience being affected by the same problems
       everywhere: maybe it's a slow build on your local machine, too many
       meetings and interviews, or inefficient code review practices that
       force you to open 10 PRs in parallel to make progress in a given
       week. We personally struggled with automated test suites that
       would take 4 hours to complete, and we saw teammates become so
       desensitized to heavy on-call load that they would stop
       complaining and just give up.  We also learned that the
       discussion about
       engineering metrics always falls into a false dichotomy: don't
       measure anything because engineering is creative work (it is!) or
       measure engineers in intrusive ways along meaningless dimensions
       like lines of code. We believe that the way to overcome this false
       dichotomy is to apply quantitative measurements _empathetically_,
       that is, with a clear understanding of the human impacts of what's
       being measured on the people doing the actual work - for example,
       by measuring how noisy on-call pages disrupt an engineer's life
       after hours. The key is to focus on bottlenecks instead of output,
       and on the team level rather than on individuals. So we set out to
       build a product where you can see all the data from all your dev
       tools, query it, make sense of trends, and build alerts for when
       things go wrong.  At its core, Okay is an end-to-end analytics
       platform focused on engineering data. First, we ingest data from
       tools like Google Calendar, Github, Pagerduty, etc. We join it with
       the team structure that we find in services like Workday. In
       addition to pre-built integrations, you can also use a tracing-like
       API to capture e.g. how long local builds are taking. Then, we
       clean up and enrich the signal: tagging interviews correctly,
       rebuilding the full history of a PR as a connected chain of review
       events, inferring dimensions like tenure (which can e.g. help
       capture new hire experience). Finally, we expose all this data in a
       query builder UI that closely maps to the underlying SQL query, and
       we enable users to choose from visualizations we built specifically
       for representing engineering work: time series of course, but also
       calendars (e.g. to understand the life of a PR) or heatmaps (e.g.
       to identify a painful on-call rotation quickly). The opinionated
       part of Okay is all in the data modeling we do on behalf of users -
       we aim to reflect our values (team-based vs individuals) and to
       retain a lot of expressiveness so that users can ask questions like
       "what is the code review experience of our new hires in our NYC
       office compared to the SF office?".  You can check how Okay works
       by going to our website (https://www.okayhq.com) or checking our
       product video (https://www.youtube.com/watch?v=jzzo3m4280k). We
       don't have free trials because once you identify bottlenecks and
       set the right alerts to create new habits, it usually takes several
       weeks to see the changes happen - we're talking about humans
       working together after all, so it does require a little bit of
       upfront investment. We price based on the number of users and
       engineers on the team.  If you are interested or have specific
       questions for your use-case, we'd love to connect with your team
       directly in the comments. Thanks!
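        
       As a rough illustration of the expressiveness described above,
       here is a minimal Python sketch of the "new hires, NYC vs SF"
       question. It is not Okay's actual schema - the field names, the
       6-month new-hire cutoff, and the time-to-first-review metric are
       all invented for illustration.
        
           from collections import defaultdict
           from statistics import median
        
           # (office, tenure in months, hours until first review) - one
           # row per code-review event, already joined with team data
           # and an inferred tenure dimension. Values are made up.
           review_events = [
               ("NYC", 2, 26.0),
               ("NYC", 3, 31.5),
               ("SF", 1, 4.0),
               ("SF", 48, 2.5),
           ]
        
           NEW_HIRE_MONTHS = 6  # assumed cutoff for "new hire"
        
           by_office = defaultdict(list)
           for office, tenure, wait_hours in review_events:
               if tenure < NEW_HIRE_MONTHS:
                   by_office[office].append(wait_hours)
        
           for office, waits in sorted(by_office.items()):
               print(office, "median hours to first review:", median(waits))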
        
       Author : tonioab
       Score  : 82 points
       Date   : 2021-06-10 15:04 UTC (7 hours ago)
        
       | carstenhag wrote:
       | Some things (maybe with a focus from a German perspective):
       | 
       | * Some words are too complicated or seldom used - "tenure" and
       | "runbook" are not words people know here.
       | 
       | * It seems like a GDPR nightmare (connecting Okay to all kinds
       | of data sources like calendars, repos, etc.). You would need
       | legal agreements and, in the best case, servers in Europe - so
       | yeah, maybe just focus on the US :D
       | 
       | * As a data source in Germany you would definitely need Outlook
       | and Azure DevOps.
       | 
       | * Upload the product demo to a new YouTube channel and add a
       | voice-over. Currently it's too fast and I did not understand it.
       | 
       | * I'd prefer a "getokay.com" domain; "hq" makes no sense and is
       | also hard to understand.
        
       | MattGaiser wrote:
       | I came expecting something for extracting higher velocity from a
       | Scrum feature factory.
       | 
       | Very pleased that this is not the case. This is the most
       | thoughtful attempt at software engineering metrics I have seen
       | yet. Especially like the build time tracking because at a prior
       | company, builds took 4 minutes and running the full test suite
       | could take another 6.
       | 
       | We couldn't convince management that this was a problem
       | (multiplied numbers make people suspicious, idk). This would have
       | shown them the hours wasted per year, which would have been far
       | more than the cost of JRebel or installing HotSwap.
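       | 
       | A back-of-the-envelope sketch of that calculation (the 4- and
       | 6-minute figures come from this comment; cycles per day, team
       | size, and working days are assumed numbers):
       | 
       |     BUILD_MIN, TEST_MIN = 4, 6
       |     CYCLES_PER_DAY = 10  # assumed build+test cycles per engineer/day
       |     ENGINEERS = 8        # assumed team size
       |     WORKING_DAYS = 230   # assumed working days per year
       | 
       |     wasted_hours = ((BUILD_MIN + TEST_MIN) * CYCLES_PER_DAY
       |                     * ENGINEERS * WORKING_DAYS / 60)
       |     print(round(wasted_hours), "engineer-hours/year spent waiting")
       |     # ~3067 hours, i.e. well over an engineer-year of waiting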
        
         | Aken wrote:
         | /me cries in 30+ minute builds
        
         | tomasrb wrote:
         | Thanks for the feedback! One of our core values is being
         | engineer-first. We've personally struggled with build times
         | and other kinds of slow dev environment tooling in previous
         | roles. Our goal with Okay is to drive awareness and empathy
         | in the product around problems like this that engineering
         | teams run into, so that management can both connect to the
         | pain and take action to help their teams.
        
       | oliverx0 wrote:
       | Can you elaborate on how current customers are using it? Any fun
       | / funny insights that were discovered thanks to your product that
       | helped them in meaningful ways?
       | 
       | Congrats on the launch! Looks like an interesting (and more
       | human) approach.
        
         | tonioab wrote:
         | Sure! We've seen teams starting from situations where 60-70% of
         | their week is spent in meetings, so these users benefit from
         | calendar analysis. We encourage users to increase _Maker Time_
         | (2 hours or more uninterrupted), which is inspired by
         | http://www.paulgraham.com/makersschedule.htm. This notion of
         | having enough time to code or focus on complex tasks has been
         | shown to correlate positively with self-reported productivity
         | and engagement.
         | 
         | Another example is a manager who noticed that one engineer on
         | their team was carrying 70% of the PR review load. This was
         | creating a situation where that person was burning out under
         | the review load, but they also assumed that everyone was doing
         | the same amount. We actually see this problem a lot, where
         | people get used to bad situations just out of habit. In this
         | case, the manager re-organized the code review process to make
         | it fairer.
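         | 
         | A minimal sketch of how such a Maker Time number can be
         | computed from one day of calendar events (the day, the
         | meetings, and the 2-hour threshold are illustrative
         | assumptions; meetings are assumed not to overlap):
         | 
         |     # working day 9:00-18:00, meetings as (start, end) hours
         |     DAY_START, DAY_END = 9.0, 18.0
         |     meetings = [(10.0, 10.5), (14.0, 15.0)]
         |     MAKER_BLOCK = 2.0  # "2 hours or more uninterrupted"
         | 
         |     maker_hours = 0.0
         |     cursor = DAY_START
         |     for start, end in sorted(meetings) + [(DAY_END, DAY_END)]:
         |         gap = start - cursor   # free time before this slot
         |         if gap >= MAKER_BLOCK:
         |             maker_hours += gap
         |         cursor = max(cursor, end)
         | 
         |     print("maker time today:", maker_hours)  # 3.5 + 3.0 = 6.5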
        
       | JoshTriplett wrote:
       | I love the idea of having metrics for things like "how long are
       | people spending waiting on builds".
       | 
       | You mention that you're trying to get data from existing tools
       | rather than requiring self-reporting. What are you using to track
       | time spent on local builds?
       | 
       | I'm currently building a service to speed up both local and CI
       | builds. I'd love to talk with you more; my email is in my
       | profile.
        
         | tonioab wrote:
         | For time spent on local builds, we expose a tracing-like API
         | where you can tag the build id, start and end events, as well
         | as connect it back to the right team. This custom event gets
         | joined with everything else. I'll reach out to you on your
         | email!
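         | 
         | A rough sketch of what wrapping a local build in such
         | start/end events could look like - the helper, endpoint, and
         | field names below are invented for illustration, not the
         | actual API:
         | 
         |     import subprocess
         |     import time
         |     import uuid
         | 
         |     # hypothetical helper: in practice this would POST the
         |     # event to the tracing endpoint
         |     def send_event(payload):
         |         print("would send:", payload)
         | 
         |     build_id = str(uuid.uuid4())
         |     base = {"build_id": build_id, "team": "payments"}
         | 
         |     send_event({**base, "event": "build_started",
         |                 "ts": time.time()})
         |     result = subprocess.run(["make", "build"])  # build to time
         |     send_event({**base, "event": "build_finished",
         |                 "ts": time.time(),
         |                 "exit_code": result.returncode})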
        
           | mrkurt wrote:
           | This is super cool. We know way more about Docker than we
           | should now, but it seems like you could instrument local
           | Docker for some teams.
        
       | mrkurt wrote:
       | Wow this is really smart. Extrapolating "developer happiness" is
       | a great idea.
        
         | tonioab wrote:
         | Thanks for the kind words. Our mission is really to enable this
         | concept and make it more actionable.
        
         | sdesol wrote:
         | Based on sources of mine and what you can find online,
         | tracking developer sentiment is the next thing, and GitHub
         | and GitLab are looking into it. I also know of a startup that
         | is working on this. This is obviously a good metric to track,
         | but it doesn't provide much of a moat.
        
       | dickfickling wrote:
       | I've got no feature requests or questions, I just want to say I'm
       | really excited about this. It sounds like a really thoughtful
       | attempt to be better than the standard approach to eng analytics,
       | e.g. "how many points did CodeMonkey complete this sprint"
        
       | ablekh wrote:
       | What is the point of having a webpage called "Pricing", which has
       | practically no information about pricing? (Rhetorical question)
        
         | parkerhiggins wrote:
         | Often the "pricing" page is the first page potential customers
         | check prior to evaluating the product. Without one the customer
         | journey generally gets disrupted.
        
           | ablekh wrote:
           | Sure. And having absolutely no relevant info on that page is
           | a great way to certainly disrupt a potential customer's
           | journey one step later. :-)
        
             | mrkurt wrote:
             | It's a filter in the sales funnel, often intentionally.
             | "Contact us for pricing" tells everyone what they need to
             | know - either you have a big budget and are not price
             | sensitive, or you're self service and this product isn't
             | for you.
             | 
               | Pricing is hard; it makes total sense to just not
               | publish prices early on.
        
               | ablekh wrote:
               | I'm well aware of this practice and have no objection to
               | it, in general. However, unless a company provides enough
               | relevant information (e.g., feature breakdown per tier,
               | other details) on the pricing page, IMO it could be
               | better implemented as a button, link, one-liner or other
               | compact visual element located on the home page, without
               | wasting time and effort on repeating - and maintaining -
               | essentially the same marketing copy on a separate page
               | (as is the case here).
        
         | dang wrote:
         | I need to make sure that startups say something about this when
         | they're launching. It came up yesterday too:
         | https://news.ycombinator.com/item?id=27447214.
        
       | sdesol wrote:
       | Disclaimer: I'm a competitor of Okay along with others in the
       | software development metrics space.
       | 
       | I just want to comment on
       | 
       | > We also learned that the discussion about engineering metrics
       | always falls into a false dichotomy: don't measure anything
       | because engineering is creative work (it is!) or measure
       | engineers in intrusive ways along meaningless dimensions like
       | lines of code.
       | 
       | I think with close to 50 years of doing things wrong with
       | software development metrics, we've left a very bitter taste in
       | the mouths of developers, and it is fully understandable that
       | developers would be wary and skeptical of software development
       | metrics. It is certainly one-sided, and I do agree this false
       | dichotomy needs to be addressed.
       | 
       | When it is all over, if software development metrics are done
       | right (with the emphasis on done right), developers should be
       | the ones advocating for them, since it means:
       | 
       | - They can work more efficiently since software metrics can help
       | them better understand how a piece of code came to be
       | 
       | - Better sell themselves for promotions and raises. For
       | example, they can use it to highlight impact and what it means
       | if they leave. Their manager may know they are a top
       | contributor, but if their manager can't sell them, it won't
       | help. With software metrics, managers should be able to
       | highlight how their developer is having an impact when the
       | raise/promotion pool is divided up.
       | 
       | - And so forth
       | 
       | I honestly think the best way to get everybody on board with
       | metrics is to clearly show that it takes effort to generate
       | meaningful insights. And this is why I'm not so much focused on
       | providing canned reports, but rather on providing business
       | intelligence for the software development lifecycle.
       | 
       | The goal (which it sounds like Okay is working towards as well)
       | is to connect all the dots in the software development
       | lifecycle and provide users with the necessary data to make
       | informed decisions. In the business world, we have "business
       | intelligence specialists" because nobody takes for granted how
       | difficult it is to get business insights. And it is truly
       | baffling that we don't have "software development specialists"
       | to help us interpret efficiency and productivity, since context
       | matters and not everybody is qualified to interpret development
       | metrics.
        
         | Johnie wrote:
         | The way to get engineers to adopt it is to demonstrate how it
         | can be useful for them. Engineering quality of life can be
         | derived from a lot of these metrics.
         | 
         | Take, for example, XKCD 303: https://xkcd.com/303/. Engineers
         | spend so much time waiting on compile, build, and deployment
         | times. It is such a waste of time and resources. These are
         | costs that engineers tend to absorb, resulting in engineers
         | getting blamed for low velocity.
         | 
         | The other area is on-call metrics. Healthy on-call metrics
         | significantly increase engineering QOL.
        
           | sdesol wrote:
           | > The way to get engineers to adopt it is to demonstrate how
           | it can be useful for them
           | 
           | Agreed. My goal is to produce a connected effort graph that
           | can accurately connect code with effort and time spent
           | (meetings, code reviews, waiting for builds, etc.).
           | 
           | This is why I refer to this as business intelligence for
           | the software development lifecycle. Insights come from
           | data, and you can't use traditional BI tools to analyze
           | software development data. If you want to understand
           | effort, you need to be able to slice and dice coding
           | activity, which GitHub and traditional BI tools can't do
           | easily, or at all.
           | 
           | Take the following for example: if you want to quickly
           | understand the significance of three commits, you can do
           | something like this:
           | 
           | https://public-001.gitsense.com/insights/github/repos?q=comm.
           | ..
           | 
           | which will stitch together the commits in real time for
           | analysis. And this is what I ultimately mean by being able
           | to slice and dice software development activity. The way I
           | see things, BI for software development activity is useful
           | for both developers and leaders.
        
       | sneak wrote:
       | Does this work with systems outside of the Microsoft/Google
       | ecosystem (i.e. not github, not g suite/gcal)?
       | 
       | I'm interested, but it's unlikely that I'm going to be building
       | teams on either of these platforms in the future, considering how
       | easy these are to host internally now.
        
         | tonioab wrote:
         | Currently we only support these ecosystems because most of our
         | early customers are on these platforms. We do have a tracing-
         | like API that you can use to send custom events, but that would
         | require more integration work of course.
        
       | ilikebits wrote:
       | This is an excellent idea, and looks like an excellent product.
       | As an engineering manager, I've built many of these tools myself
       | before as one-off scripts. I've always wondered whether the
       | market cares enough about this problem for it to be a viable
       | startup.
        
       ___________________________________________________________________
       (page generated 2021-06-10 23:01 UTC)