[HN Gopher] Launch HN: Litebulb (YC W22) - Automating the coding...
       ___________________________________________________________________
        
       Launch HN: Litebulb (YC W22) - Automating the coding interview
        
        Hi HN, I'm Gary from Litebulb (https://litebulb.io). We automate
        technical onsite interviews for remote teams. When I say
        "automate", I should add "as much as possible". Our software
        doesn't decide who you should hire! But we set up dev
        environments for interviews, ask questions on real codebases,
        track candidates, run tests to verify correctness, and analyze
        the code submitted. On the roadmap are things like scheduling,
        tracking timing, and customizing questions.
        
        I've been a software engineer at 11 companies and have gone
        through well over a hundred interviewing funnels. Tech
        interviews suck. Engineers grind LeetCode for months just so
        they can write the optimal quicksort solution in 15 minutes,
        but on the job you just import it from some library like you're
        supposed to. My friends and I memorized half of HackerRank just
        to stack up job offers, but none of these recruiting teams
        actually knew whether or not we were good fits for the roles.
        In some cases we weren't.
        
        After I moved to the other side of the interviewing table, it
        got worse. It takes days to create a good interview, and
        engineers hate running repetitive, multi-hour interviews for
        people they likely won't ever see again. They get pulled away
        from dev work to do interviews, then have to sync up with the
        rest of the team to decide what everyone thinks and come to an
        often arbitrary decision. At some point, HR comes back to eng
        and asks them to fix or upgrade a 2-year-old interview
        question, and nobody wants to or has the time. Having talked
        with hundreds of hiring managers, VPs of eng, heads of HR, and
        CTOs, I know how common this problem is. Common enough to
        warrant starting a startup, hence Litebulb.
        
        We don't do LeetCode--our interviews are like regular dev work.
        Candidates get access to an existing codebase on Github,
        complete with a DB, server, and client. Environments are
        Dockerized, and every interview's setup is boiled down to a
        single "make" command (DB init, migration, seed, server,
        client, tunnelling, etc), so a candidate can start coding
        within minutes of accepting the invite. Candidates code in
        Codespaces (a browser-based VSCode IDE), but can choose to set
        up locally, though in that case we don't guarantee there won't
        be package versioning conflicts or environment problems.
        Candidates are given a set of specs and Figma mockups (if it's
        a frontend/fullstack interview) and asked to build out a real
        feature on top of this existing codebase. When candidates
        submit their solution, it's in the form of a Github pull
        request. The experience is meant to feel the same as building a
        feature on the job. Right now, we support a few popular stacks:
        Node + Express, React, GraphQL, Golang, Ruby on Rails,
        Python/Django and Flask, and Bootstrap, and we're growing
        support by popular demand.
        
        We then take that PR, run a bunch of automated analysis on it,
        and produce a report for the employer. Of course there's a
        limit to what an automated analysis can reveal, but
        standardized metrics are useful. Metrics we collect include
        linter output, integration testing, visual regression testing,
        performance (using load testing), cyclomatic/Halstead
        complexity, identifier naming convention testing, event logs,
        edge case handling, and code coverage. And of course all our
        interview projects come with automated tests to verify the
        correctness of the candidate's code (as much as unit and
        integration tests can, at least--we're not into formal
        verification at this stage!)
        
        Right now, Litebulb compiles the report, but we're building a
        way for employers to do it themselves using the data collected.
        Litebulb is still early, so we're still manually verifying all
        results (24-hour turnaround policy).
        
        There are a lot of interview service providers and automated
        screening platforms, but they tend either not to be automated
        (i.e. you still need engineers to do the interviews) or to be
        early-funnel, meaning they test for basic programming or
        brainteasers, not regular dev work. Litebulb is different
        because we're late-funnel _and_ automated. We can get the depth
        of a service like Karat but at the scale and price point of a
        tool like HackerRank. Longer term, we're hoping to become
        something like Webflow for interviews.
        
        Here's a Loom demo:
        https://www.loom.com/share/bdca5f77379140ecb69f7c1917663ae5,
        it's a bit informal but gets the idea across. There's a trial
        mode too, for which you can sign up here:
        https://litebulb.typeform.com/to/J7mQ5KZI. Be warned that it's
        still unpolished--we'll probably be in beta for another 3
        months at least. That said, the product is usable, and people
        have been paying and getting substantial value out of it, which
        is why we thought an HN launch might be a good idea.
        
        We'd love to hear your feedback, your interview experiences, or
        ideas for building better tech interviews. If you have
        thoughts, want to try out Litebulb, or just want to chat, you
        can always reach me directly at gary@litebulb.io. Thanks
        everyone!
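One of the metrics named above, cyclomatic complexity, is simple enough to sketch with nothing but Python's stdlib `ast` module. This is not Litebulb's implementation (theirs isn't public); it's a minimal illustration of counting decision points in a submission:

```python
import ast

# Node types that open an extra branch through the code; each one adds
# an independent path, so complexity = 1 + number of branch points.
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe complexity for a Python source string."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no small factors"
"""

print(cyclomatic_complexity(snippet))  # 1 + if + for + if = 4
```

Real analyzers (e.g. radon for Python) track more constructs per function, but the principle is the same: the score is a deterministic count, which is what makes it reportable without a human in the loop.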
        
       Author : garyjlin
       Score  : 75 points
       Date   : 2022-03-07 19:00 UTC (3 hours ago)
        
 (HTM) web link (www.litebulb.io)
 (TXT) w3m dump (www.litebulb.io)
        
       | yunohn wrote:
       | This is a very interesting approach, I've seen some competitors
       | who do this in the learning/course space as well. I've also done
       | a lot of interviews from both sides of the table.
       | 
       | My question is around the grading examples. How do you
       | automatically determine Functionality is Senior - Weak or Commit
       | History is Intermediate - Strong?
       | 
       | I find these to be quite subjective and better understood as a
       | whole for any given candidate. The grading mechanism provides a
       | false sense of being unbiased, and further dehumanizes the
       | process.
       | 
       | > Every metric we measure is numeric, and every test is
       | deterministic. As a result, you can make hiring decisions based
       | purely on merit.
       | 
       | I fully disagree with this for hiring, unless you're looking for
        | machines, not humans. Measuring things automatically does not
        | make the outcome free of bias.
        
       | hbarka wrote:
       | Here's a radical idea: hire the top 4 candidates on a
        | probationary period. Have them work remotely, with the same
        | requirements, blind to each other.
        
       | siamakfr wrote:
        | At every company where I've been part of the interview design
        | process, I've always insisted on practical tasks with real
        | tools, because how quickly someone can parse documentation
        | and code context is not a trivial aspect of the job. It does
        | take a lot of time to set up sandbox projects, however, which
        | a platform like this does away with. Looking forward to the
        | day when no candidate sees Leetcode or HackerRank as part of
        | a tech interview again.
        
         | garyjlin wrote:
         | Thank you, and love that you promote great interviewing
         | practices! What were some examples of prompts you routinely
         | used for interviewing? And any learnings on great (or not so
         | great) interview design?
        
       | e4e78a06 wrote:
       | Curious how this will work with more complex cases? E.g.
       | distributed systems, concurrency safety, etc. Also, how do you
       | deal with solutions that don't compile but are 95% of the way
       | there conceptually? I find that's a common occurrence in non-
       | FAANG settings that generally would still mean a pass to the next
       | round (sometimes even with strong perf ratings).
       | 
       | And how do you prevent someone from cheating if there's no
       | engineer there on the interview?
        
         | onlyrealcuzzo wrote:
         | If TC can run the code (not sure why not in this environment) -
         | then it not compiling should probably be a no hire.
        
         | aronowb14 wrote:
         | love this question: especially about the distributed systems.
         | 
         | Something I've been thinking about, not necessarily in the
         | interviewing world, would be to have a simulation of
         | distributed systems for non-production code: in theory I think
         | you totally could have a simulation layer for databases /
         | server loads and test how your code performs in such
         | environments.
         | 
          | I guess it exists in production, but I haven't seen it for
          | either a) learning about distributed systems, or b)
          | interviews.
        
       | murtali wrote:
        | Three metrics you pointed out: 1. linter output, 2.
        | cyclomatic/Halstead complexity, 3. identifier naming
        | convention testing.
        | 
        | Are those really things that candidates can get a negative
        | ding for? Those are things that can/should be handled
        | automatically by libraries or your CI/CD (e.g. RuboCop).
        | 
        | More importantly, how do you get more detail/nuance on things
        | like naming or cyclomatic complexity?
        
         | moritonal wrote:
         | I actually really value the naming metric. I'd value a dev who
          | looks at other classes to see what the convention is rather
         | than just adding their own personal form of camel-case. It
         | shows a sense of collaboration.
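A check like the naming metric discussed here is easy to automate. Below is a hedged sketch (not Litebulb's code; `non_snake_case_names` is an invented name) that flags identifiers breaking PEP 8 snake_case, using only Python's stdlib `ast` module:

```python
import ast
import re

# PEP 8 style: lowercase words separated by underscores, with optional
# leading underscores for private names.
SNAKE_CASE = re.compile(r"^_{0,2}[a-z][a-z0-9_]*$")

def non_snake_case_names(source: str) -> list:
    """Return function and assigned-variable names that break snake_case."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
            offenders.append(node.name)
        elif (isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store)
              and not SNAKE_CASE.match(node.id)):
            offenders.append(node.id)
    return offenders

sample = """
def fetchUser(user_id):
    retryCount = 3
    return user_id, retryCount
"""

print(non_snake_case_names(sample))  # ['fetchUser', 'retryCount']
```

A production checker would also look at class names, constants, and the project's existing convention rather than assuming PEP 8, but even this toy version yields a deterministic, reportable count.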
        
       | Grustaf wrote:
       | > We automate technical _onsite_ interviews for _remote_ teams.
       | 
       | This seems a bit contradictory!
        
       | [deleted]
        
       | prakhar897 wrote:
        | We can already automate leetcode interviews much more easily,
        | so why don't we do that? Because people cheat. You can ask
        | your competitive-coder friend or search Google to get the
        | answers. Similarly, here friends can provide complete
        | solutions even more easily. Heck, there are sites dedicated
        | to these kinds of services. This leads to the following
        | consequences:
        | 
        | 1. In both automated leetcode and automated "coding"
        | interviews, most people get full scores. This even turns into
        | a negative bias, since an actual good candidate who did not
        | cheat has a greater chance of getting rejected than the
        | cheaters.
        | 
        | 2. Interviewers use both automated leetcode and in-person
        | leetcode to first filter out candidates, then put them
        | through the actual interview hoops. If your product succeeds,
        | this means another layer of "interviewing" and more wasted
        | engineer time.
        | 
        | Sorry to say, this isn't solving any problem and has the
        | potential to create more harm for both sides.
        
         | austincheney wrote:
         | I suspect cheating is not the primary problem. I suspect, like
         | so many other decisions in software, the primary problem is
         | unrealistic and unfounded expectations. This behavior typically
          | comes from unmeasured assumptions that may work well in one
          | narrow circumstance being applied to an unrelated one, and
          | then blaming the result instead of the decision. This is
          | bias.
         | 
         | Unless there is a solution that directly addresses hiring bias
         | this company will fail like all the others before it for
         | exactly the same reason: their clientele will apply bias (the
         | actual cause of the problem), be unhappy with the result, and
         | then blame the service provider. In sales the client is never
         | wrong, which means the service provider is always wrong. This
         | is why software overspends on recruiters and then invents
         | fantasies like React developers with a year of experience are
         | rare.
        
         | indymike wrote:
         | I routinely see 75% fail.
        
         | keerthiko wrote:
         | I think you missed the entire premise of Litebulb -- they are
         | avoiding Leetcode-style testing in favor of development tasks
         | on the real business codebase, and quickly ramping up
         | environments that are based on your business' proprietary dev
         | stack.
         | 
         | Many software companies opt to avoid leetcode-like interviewing
         | entirely. If leetcode and prepackaged, recycled, self-contained
         | code tests are your workplace's primary process for hiring,
         | then I don't think this service is for you at all.
        
         | imglorp wrote:
         | We counter this by giving applicants a small homework problem
         | and ask them to solve it in the language of their choice. Then
          | we have a tech interview centered around their solution.
          | We'd know in about a minute if someone else did the thinking
         | for them. It's also a good jumping off point for all kinds of
         | deeper questions, systems knowledge, engineering practices,
         | wherever we need to go.
        
         | justshowpost wrote:
          | I don't think cheating prevention is in scope for a tool
          | like this, nor should it be.
         | 
         | Any decent employer puts the new hire on a trial period first.
         | During the trial period, the employer should task them with
         | real work. When the candidate doesn't deliver, the employer can
         | let them go. When they do deliver, the employer can keep them.
         | 
         | In both cases, does it really matter if they cheated their way
         | in or not? No. Preventing cheating is the wrong place to
         | optimize.
         | 
         | While we're here, dumb memorization of LeetCode style Q&As
         | instead of deep understanding of underlying principles can be
         | seen as cheating just as well. Doing actual work is better in
         | every way.
         | 
          | And many employers already do exactly this kind of coding
          | work, just manually. They make up a problem themselves, and
          | give you
         | a time limit to commit a solution without checking if you're
         | cheating or not.
         | 
         | Litebulb just automates this, by taking over the creation of
          | the problem and the analysis of the solution. There are
          | lots of advantages to centralizing this, like creating more
         | problems, fixing and updating existing ones, etc.
         | 
         | Provided the problems and the analysis are as good as
         | advertised, I think Litebulb has a great value proposition. I
         | wish them success and hope they can have a lasting impact on
         | the hiring process for software developers.
        
       | pixel_tracing wrote:
       | So how are mobile / hardware engineering technical interviews
       | done? This seems specific to only a subset of engineers.
        
       | rubyron wrote:
       | fyi, the long loom url in your post is causing this page to be
       | nearly unreadable on iPhone Safari. Have to squint to read tiny
       | text or scroll horizontally.
        
       | ushakov wrote:
       | i like the concept much better than LeetCode
       | 
       | if you succeed in making this feel "native" to both employer and
       | the applicant, you'd be billionaires by tomorrow
        
         | garyjlin wrote:
         | Appreciate it tons! And yes, making this experience feel
         | "native" and familiar is one of our primary focal points. In
          | the context of an interview, any slight deviation from an
         | expected form factor can contribute to nervousness or loss of
         | candidate performance, which is why we're trying to make sure
         | that dev env spin up is quick (eventually <60s), coding
         | environment is familiar (VSCode or JetBrains), and submissions
         | are familiar (probably just Git). For employers, we still have
         | lots of work to do to ensure the signals collected integrate
         | well with popular ATS's like Greenhouse or Lever.
        
       | droobles wrote:
       | At the last company I worked for, the interview was amazing
       | because it was tailored towards exercises that actually had to do
       | with my job, but were obviously not going to be used in their
       | product. Hope this becomes the standard, that + a genuine
       | conversation with the team lead just works.
        
       | [deleted]
        
         | garyjlin wrote:
        
       | catchmeifyoucan wrote:
       | This looks pretty awesome! Congrats on the launch. It seems like
       | a much needed alternative to Leetcode with more practical
       | applications. As a developer, wondering how I can benchmark
       | myself on your platform to see how I perform with a sample?
        
         | garyjlin wrote:
          | Thank you! And yeah, we've had that request a lot; later this
         | year we'd like to open a candidate-facing platform to hone
         | their skills on Litebulb interviews. Conceptually, getting
         | better at Litebulb interviews should just make you a better
         | dev, not better at interviewing.
         | 
         | Potential feature: get high enough scores on a specific
         | interview, get recommended to companies currently hiring that
         | use this stack and get inserted mid-way through the funnel?
        
           | beepbooptheory wrote:
           | Yes please! I have never even gotten a shot at interviews
           | like these, its such a black box to me, and I love tests.
           | 
           | I would honestly pay for this without even the hope of a job,
           | just to be able to know where I am at, feel confident if I
           | ever get past the first screen.
        
             | garyjlin wrote:
             | Honestly I feel you, I recall interviewing before and the
             | number one thing I kept asking for was feedback, but rarely
             | ever got any. Currently, when a candidate does a Litebulb
             | interview and submits it, we email the same report PDF to
             | the candidate that the employer gets. As in, feedback is
             | built into the flow of the product.
             | 
             | The next step would be to open up that candidate-facing
             | side, and hearing that you'd pay for just that alone is
             | great feedback!
        
         | mwcampbell wrote:
         | > wondering how I can benchmark myself on your platform to see
         | how I perform with a sample?
         | 
         | Reading this comment, it occurs to me that if just anyone can
         | sign up for a trial, then candidates may be able to game the
         | system and prepare for the specific work sample tests that this
         | product provides. If so, then that's unfortunate, because I
         | like the idea behind this product.
        
           | garyjlin wrote:
           | 2 thoughts here:
           | 
           | 1) This is actually why I was going back and forth on
           | offering a trial version, and decided to use a form so that I
           | can qualify employers who legitimately need this vs
           | candidates just trying to game the system
           | 
           | 2) It might not actually matter if candidates get preliminary
            | access to Litebulb interview codebases. Our interviews
            | aren't brainteasers with a single optimal solution;
            | they're just regular feature requests, e.g. build a CRUD
            | API for this new resource model, so it almost doesn't
            | matter how
           | long you have. From what we've seen with submissions so far,
           | having extra time doesn't necessarily make your code quality
           | better. Potential idea: open source all Litebulb interview
           | codebases?
        
             | jahewson wrote:
             | > open source all Litebulb interview codebases
             | 
             | This would be such a wonderful contribution to the
             | industry. Imagine having a hundred JS front end questions,
             | it's not really game-able any more. I agree with your take.
             | Keep qualifying your leads though, for the sake of your
             | funnel.
        
       | arjie wrote:
       | Very cool, Gary. Grabbed some time on your Calendly. Eager to see
       | if your product can work for us and always interested in talking
       | to a founder in this space.
        
         | garyjlin wrote:
         | Thank you, looking forward to chatting!
        
       | jnguyen64 wrote:
       | Congrats on the launch! I run a job board for companies that
       | don't do LeetCode interviews (www.nowhiteboard.org) and would
       | love to add any companies that use your platform for free & see
       | if we can partner up somehow!
       | 
       | I think a big reason (in addition to many others..) that
       | companies still use LeetCode is the lack of interviewing platform
       | alternatives, so I'm glad to see Litebulb is looking to fill that
       | niche. I'll email you directly after work if I don't hear
       | anything on here!
        
       | shmatt wrote:
       | This feels like a solution in search of a problem
       | 
       | Fact: The LeetCode loop sucks. So couldn't any Director or VP in
       | big tech just decide they are done giving that interview? They
       | have the resources to spin up something like this in a week. Even
       | outside big tech, companies can very well think of non LeetCode
       | style interviews, some already do
       | 
       | The problem is, many engineers want new engineers forced into
       | this type of interview, because they had to do it, so you have to
       | do it too. Not for any reason better than that
       | 
       | I'm not sure a full featured well polished product like this one
       | can change toxic engineering culture when it comes to joining the
       | club
        
         | sokoloff wrote:
         | I think there's (unfortunately) a need for a low-effort high-
         | pass filter for candidates because there are so many utterly
         | unqualified candidates chasing the dollars that a software job
         | can bring. I don't blame them, but I also think the bottom 10%
         | of all potential programmers is way over-represented in the
         | applicant pool and finding a way to filter them out effectively
         | has value.
         | 
         | It doesn't have to be leetcode, but it does have to not be
         | "have an engineer spend an hour with everyone"
        
           | garyjlin wrote:
           | This is what we saw too when we were hiring for our own
           | engineers 2 months ago. A surprisingly good filter is "can
           | you even get started?". Like, the fact that our interviews
           | required users to navigate Github, setup a dev env (even if
           | it's just the make command), understand the existing code and
           | prompt alone filtered out a non-negligible percentage of the
           | candidate pool.
        
         | garyjlin wrote:
         | Right, biggest pain point we're trying to solve for is
         | engineers getting pulled away from their work to build,
         | maintain, conduct, and evaluate interviews. I recently spoke to
         | Stripe engineers that spend 2 hours a day, every day, on this.
          | A Brex EM also told me every engineer on the team conducts
          | 1-2 onsite interviews every week. Eng time is the biggest
          | expense in the
         | recruiting funnel, and it's particularly difficult to solve for
         | that problem. We're hoping that if Litebulb can take care of
         | some of the more tedious code analysis work, then we can help
         | teams reduce their eng spend, or use the newly available time
         | to build deeper connections with candidates as humans.
         | 
         | When it comes to changing engineering culture, I completely
         | agree that a single tool like this isn't going to cut it, but
         | we're seeing a big shift in new startups focusing a lot more on
          | real-work-relevant take-homes and/or work trials. For example,
         | Gumroad (disclaimer: a client), does a Litebulb interview as
         | the entrypoint, then it's straight to a 4-8 week paid trial on
         | flexible hours with a choice of which project to work on. We're
         | hoping that we can be a part of, and accelerate, this culture
         | shift.
        
           | hemloc_io wrote:
           | Hmmm, maybe not a question for you, but for Gumroad, isn't
           | there already an issue with candidates investing so much time
           | in the process? (E.g. with any large company it can literally
           | take 3 months+)
           | 
           | For me as someone who just finished interviewing, if you told
           | me that I either have to work on two jobs for 4-8 weeks or
           | quit my current job for a limited contract that may or may
           | not work out I'd almost certainly say no. (I think most
           | people would too, I mean people already complain about take
            | home quizzes etc.)
        
         | ketzo wrote:
         | > They have the resources to spin up something like this in a
         | week
         | 
         | That sentence right there is a pretty solid indicator that it's
         | a solid business to be in, no? What VP wants to take engineer
         | time to create an internal tool when they could pay
         | $40/user/month or whatever? How many SaaS startups do we see
          | right now that occupy exactly that niche?
         | 
         | As far as this being more of a culture change than a process
         | change -- well, good products _can_ change culture. Making
         | something much easier is a huge step on the way to getting
         | people to do it.
        
           | garyjlin wrote:
           | Completely agree! This is indeed a big problem, and we have
           | so many CTOs, VPs and founders willing to jump into 6 figure
           | contracts if it works. The biggest risk here is execution,
           | not market size.
        
         | Melatonic wrote:
         | I also know a lot of Directors who like buying a solid product
         | with solid support. That could be a big selling point on its
         | own.
        
       | IncRnd wrote:
       | I hope this works out gangbusters for you!
       | 
        | Some feedback on the homepage; its current form can be
        | improved.
        | 
        | 1) To watch the video in the section "Here's how it works", I
        | need to load javascript from navattic.com. JS, especially
        | from an external website, is something that should not be
        | required to view a video. This tells me there is likely
        | tracking of hiring managers who only want to watch the how-to
        | video.
        | 
        | 2) When I enabled JS for navattic.com, the video's first
        | frame was shown. But to actually watch the video I had to
        | input a work email and my name. I closed your webpage at this
        | point. I could have clicked the REQUEST DEMO button if I were
        | ready to engage your service. However, I wanted to learn
        | about your offerings by watching your video. Even clicking
        | your about-us link didn't work, since it scrolled me right
        | back down the page.
       | 
       | My suggestion is to not try to get every visitor's email account
       | information before you tell them how your product works. It might
       | help raise the quality of engagements to provide your information
       | first and then let hiring managers click on REQUEST DEMO.
        
         | lelandfe wrote:
         | > JS, expecially from an external website, is something that
         | should not be required to view a video
         | 
         | You expect everyone to self-host videos?
        
           | IncRnd wrote:
           | No, and that's not how the Internet works - JS is not
           | required to access other computers in order to display
           | videos. A video file can still be located on a CDN [1] in
           | addition to being self-hosted. This is an extremely common
           | operation that websites perform. You can look at the docs for
           | any CDN. They all tell you how to do this.
           | 
           | [1] CDN: Content Delivery Network, i.e. youtube, cloudflare,
           | or any one of the others. There are multiple CDNs used by
           | this very article's webpage.
        
             | lelandfe wrote:
             | Nice. Let's see if I can restate this well enough to ward
             | off further condescension from you:
             | 
             | Would you similarly be up in arms over the external JS were
             | the site embedding a YouTube video? It also requires JS -
             | as does most (all?) major video hosting platforms.
        
               | rhizome wrote:
               | For all of its faults, YT is a known quantity.
        
       | innermatrix wrote:
       | Any time you say merit-based hiring and quantifiable hiring
       | metrics, you should realize that your best case is that you don't
       | add any additional bias to an already (racist, sexist, etc)
       | biased system. And any hiring process company that doesn't
       | address this in their product pitch doesn't understand the real
       | barriers to equity in hiring today.
        
       | napolux wrote:
        | Let me get this straight... Is the quality of the code still
       | checked manually? You're a startup setting up interviewing
       | environments, then.
       | 
       | I thought there was some sort of machine learning involved.
       | 
       | "fake it till you make it"
       | 
       | Who can guarantee that your "reviews" are aligned to my goals and
       | not just outsourced to the cheapest contractor available?
        
         | garyjlin wrote:
         | Yeah so this is a common concern we've seen. "How do I know
         | your idea of a senior engineer is the same as my idea of a
         | senior engineer?" And the answer is you don't. We're modifying
         | the report to no longer say things like "Strong Junior dev" but
         | rather to just give you a bunch of raw data, and then you
         | decide how that data is interpreted.
         | 
         | Some examples:
         | 
         | 1. 7/10 unit tests passed
         | 
         | 2. GET:/api/v1/vehicle/{id} OOM'd out at 1.5M records in the DB
         | 
         | 3. Linter returned 10 warnings, 0 errors
         | 
         | 4. Cyclomatic complexity counter returned max of 24
         | 
         | 5. 5 new commits, shortest commit message 3 chars, longest
         | commit message 45 chars
         | 
          | Also, outsourcing to the cheapest contractor is a very
          | legitimate concern, because we ALMOST did exactly that.
          | Instead, we decided it was a bad move and doubled down on
          | building product + surfacing clear, unambiguous data.
          | That's also why we're intentionally staying away from ML
          | (at least for now): it's inherently unpredictable.
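As a concrete illustration of the kind of raw signal in that list, a number like "cyclomatic complexity max of 24" can be derived mechanically from the submitted code. Here is a hypothetical sketch using only Python's standard library (Litebulb's actual tooling isn't public, and the branch-node list and sample function are my own):

```python
import ast

# Hypothetical sketch (not Litebulb's actual tooling): one way a raw
# metric like "cyclomatic complexity max of 24" could be computed.
# Complexity here is 1 + the number of decision points inside each
# function body (nested defs included, for simplicity).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict:
    """Map each function name in `source` to its complexity score."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES)
                           for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

sample = """
def grade(score):
    if score > 90:
        return "A"
    elif score > 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(sample))  # {'grade': 3}
```

A report would then surface only the per-function numbers and leave the pass/fail judgment to the hiring team.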
        
           | frakkingcylons wrote:
           | To be blunt, a lot of those data points are not useful in
           | evaluating a candidate beyond entry level positions. I really
           | don't care that much about commit message length or some
           | linter's idea of complex code. To me, trying to evaluate
           | candidates on these kinds of quantitative factors is just not
           | useful and it feels like grading a standardized test (in a
           | bad way).
        
       | lux wrote:
       | Very cool! I'd love to see game engine support added (mainly
       | Unity & Unreal).
        
         | garyjlin wrote:
         | That would be super cool, and we'll be adding support for new
         | stacks by popular demand. We've already had a few people
         | mention support for Unity, so it's on our roadmap (but likely
         | won't get to it until Q4)!
        
       | tagolli wrote:
       | This looks really interesting! It seems like there are
       | potentially 2 great companies in here:
       | 
       | 1. Providing good programming tests with automatic env setup
       | 
       | 2. Automatic PR reviewing and grading.
       | 
       | By this I mean that you could do less and still be useful, for
       | example just run linting and tests on the PR and let me review it
       | myself. That will match what I do at work anyway. That would let
       | you focus on just providing great example projects and covering
       | more languages, rather than on the automatic grading.
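Taking the "do less" suggestion literally, a minimal version of that pipeline is easy to sketch. The toy Python version below is my own illustration, not anything Litebulb ships: the "linter" is just a compile check via py_compile, and the candidate's tests run under unittest; a real setup would swap in its own tooling (flake8, pytest, and so on).

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

# Toy "do less" pipeline: run automated checks on a submission and
# surface raw results, leaving the actual code review to a human.

def check_submission(src: Path, tests: Path) -> dict:
    # Stand-in for a linter: does the file at least compile cleanly?
    compile_check = subprocess.run(
        [sys.executable, "-m", "py_compile", str(src)],
        capture_output=True, text=True)
    # Run the candidate-facing test module from its own directory.
    test_run = subprocess.run(
        [sys.executable, "-m", "unittest", tests.stem],
        capture_output=True, text=True, cwd=tests.parent)
    return {
        "compiles": compile_check.returncode == 0,
        "tests_pass": test_run.returncode == 0,
        "test_log": test_run.stderr,  # unittest reports on stderr
    }

# Demo against a throwaway submission:
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "solution.py"
    src.write_text("def add(a, b):\n    return a + b\n")
    tst = Path(d) / "test_solution.py"
    tst.write_text(textwrap.dedent("""\
        import unittest
        from solution import add

        class TestAdd(unittest.TestCase):
            def test_add(self):
                self.assertEqual(add(2, 3), 5)
        """))
    report = check_submission(src, tst)
    print(report["compiles"], report["tests_pass"])
```

The hiring team would read the raw report and review the diff themselves, which matches the workflow tagolli describes.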
        
         | nicoburns wrote:
          | +1 on this. The env setup and test design, along with a
          | way for the candidate to submit their solution, would save
          | us a ton of work. I'd happily pay $30/interview just for
          | that. I wouldn't want to rely on a 3rd party for grading,
          | though. Nor would I want to assign a grade without having
          | a chance to talk through the code with the candidate.
        
       | dom96 wrote:
       | Looks brilliant. Anyone who makes the LeetCode grind go away is
       | already a positive in my book, but this seems to go much further
       | to automate a lot of the steps companies need to do these days.
       | Awesome idea!
        
         | garyjlin wrote:
         | Appreciate it, thank you!
        
       | xiphias2 wrote:
       | I appreciate that some people prefer this kind of
       | interviewing, but in this case, instead of memorizing some
       | algorithms, it all comes down to whether the dev knows the
       | specific framework or not. I'm not saying one is better than
       | the other, just different, and I think both are valid ways of
       | interviewing.
        
         | mrkurt wrote:
          | We hire with projects derived from the specific frameworks
          | we use. It's the best way to give candidates a taste of
          | actual work.
         | 
         | We also give people time to learn the framework we're using as
         | part of the hiring process. And we encourage them to do that if
         | they want!
         | 
         | People with a tremendous amount of experience in our framework-
         | of-choice have an advantage. But they're also more likely to be
         | immediately valuable. It's imperfect, but it works well for us.
         | And you'd be surprised how many people float right through the
         | technical assessment learning as they go.
        
         | garyjlin wrote:
          | I've thought about this a lot too, which is why we heavily
          | recommend using Litebulb for take-homes. Sit-down onsites
          | on Litebulb are only good if the candidate knows the
          | existing frameworks; if they don't, it's pretty tough to
          | figure them out on the spot.
          | 
          | Take-homes are great because the candidate either knows
          | the specific frameworks, or doesn't but can learn them
          | quickly enough. Either way, if you complete it within the
          | allocated time and score high enough, we'd recommend
          | moving forward.
          | 
          | Actually @xiphias2, what types of interviews have you seen
          | that were really good? Even the in-person ones: what made
          | them good?
         | 
         | Actually @xiphias2 what type of interviews have you seen that
         | were really good? Like even the in-person ones, what made them
         | good?
        
           | nicoburns wrote:
            | Could you set the tests up so that candidates have a
            | choice of framework (with the hiring company choosing
            | which ones)? We use React at work, but I'd probably be
            | quite happy to accept solutions in Vue or Svelte (and
            | maybe Angular 2+). Similarly, for backend work we use
            | Node.js + Postgres, but I might be willing to accept
            | solutions in Python, Ruby, Java, or C#, and MySQL or SQL
            | Server, for example.
        
           | xiphias2 wrote:
           | I really liked LeetCode style interviews at big companies,
           | and I loved that my colleagues also understood the importance
           | of run time of complex algorithms. I also interviewed at
           | startups where algorithmic knowledge is more of a
           | disadvantage, as I may optimize things too early.
           | 
            | As an example of why some (especially big) companies
            | need deep algorithmic knowledge: at Google I was working
            | on adding new features to an ads prediction model, and
            | even without changing the code base, the new (slightly
            | more complex) model increased the ads 99th-percentile
            | latency by 0.1ms (millions of dollars lost to RPC
            | timeouts). That required a director-level exception if I
            | wanted to ramp the experiment up to total user traffic
            | (the usual thing people do is optimize something else in
            | the code base to stay within the latency budget).
           | 
            | The small problems that LeetCode provides for interviews
            | are a great filter for people at these companies, who
            | are faced with much more complex optimization problems.
            | The problem is when a small company with lots of user-
            | facing features uses the same type of interview (they
            | shouldn't).
        
       | whoknew1122 wrote:
       | It looks like you're automating _coding_ interviews. Not
       | _technical_ interviews.
       | 
       | I conduct technical interviews routinely and very rarely do I
       | touch on anything coding-related. That's because I'm in security,
       | compliance, and identity.
       | 
       | I'd strongly suggest changing 'technical interview', because
       | it gave me a completely incorrect idea of your product.
       | 
       | Coding interviews are technical. But not all technical interviews
       | are coding.
        
         | dang wrote:
         | Ok, we've s/technical/coding/'d the title above.
        
         | geekbird wrote:
         | Seriously. I'm not an actual software developer, I do systems
         | admin and DevOps config whacking. If I have to bring up an IDE
         | to write something I'm waaaaay out of my wheelhouse. I don't
         | touch JS, React, Java or any of that stuff. So this "technical"
         | interview would tell people absolutely nothing about the skills
          | needed for my job.
         | 
          | This is a software engineering test only, and for very
          | specific stacks at that. It would be a waste of a person's
          | time unless that was the exact stack they'd used in the
          | past, or it was demanded by the job.
        
           | Melatonic wrote:
            | That is still very technical work, however, and I bet
            | you're decent at scripting in a language or two. There's
            | still plenty to test with there (theoretically).
        
       | rsweeney21 wrote:
       | Services that automate the interviewing or vetting process
       | benefit the hiring manager, but feel impersonal and disrespectful
       | to candidates. (Isn't my time valuable too?)
       | 
       | In today's market you will likely see a large percentage of
       | candidates opt out of the interview process when you send them an
       | automated technical interview.
        
         | Melatonic wrote:
         | If I was a hiring manager with the actual time to do this I
         | think what I would likely do is come up with a series of
         | thought experiments and some actual technical tests for the
         | candidate. Probably I would try to pick things that I did not
         | (off the top of my head) know the answer to and exactly how to
         | do it. And then I might try to go through with the candidate
         | and see their thought process through the whole thing while
         | also volunteering some of my thought process to try to solve it
         | together.
         | 
         | The point would not even be to try to necessarily solve the
         | problem itself - more just to see how I got along with them. Of
         | course some people might freeze up when asked to do this, but I
         | think that happens in any higher pressure interview
         | environment. And if I do not know the solution off the top of
         | my head either and am obviously even struggling in some parts
         | myself I think that takes a ton of the pressure off of them to
         | be a wizard that can solve it in the moment.
        
           | garyjlin wrote:
            | This makes a ton of sense to me. Seeing how someone
            | solves a difficult problem collaboratively is great
            | insight into how you'll continue working together,
            | because that's exactly what you'll be doing. It's also
            | very valuable to see how someone receives suggestions or
            | criticism and how they respond. We'd actually like to
            | build a lot of these concepts into Litebulb, e.g.
            | responding to code review feedback.
           | 
           | As an add-on, an interesting question one of our clients asks
           | as a conversation starter is "what's the most technically
           | complex project you've ever worked on?". This question sets
           | up the space such that the candidate is the expert (since
           | they've already worked on it), and it almost becomes a
           | session where the candidate teaches the interviewer.
        
         | __turbobrew__ wrote:
         | Yes. If I'm going to put a large time and effort investment
         | into interviewing I want an equal investment by the hiring
         | team. We both need "skin in the game" so I know that they are
         | semi-serious about hiring me.
        
         | bobbylarrybobby wrote:
         | Sounds like the perfect tool to select for candidates willing
         | to do lots of grunt work.
        
           | garyjlin wrote:
           | I wouldn't necessarily put it that way, primarily because
           | some of our interviews require less than 20 lines of code!
           | It's a bit more about selecting candidates with the right
           | fundamentals. You don't need to write too much code, but the
           | code that you do write, is it clean? Are you following best
           | practices? Is it efficient? Have you thought about how your
           | new service affects the rest of the system, and what the
           | implications are?
           | 
           | I do agree though that some of our frontend interviews are a
           | bit heavier on code + css and less on systems design
           | thinking.
           | 
           | Btw just to clarify, we have a policy where we won't host any
           | interviews that are just open dev tickets to build a feature
           | that will be used in production. Like, we DO NOT tolerate
           | companies using interviews to get free labor.
        
         | jahewson wrote:
          | I would have no problem with this as a 45min screen after
          | having talked with a hiring manager. I'm going to invest
          | that time anyway, and it's nicer to not have to "do it
          | live". There do need to be humans for the later rounds
          | though, because I'm assessing the company as much as
          | they're assessing me!
        
           | garyjlin wrote:
           | Yup! That's why we recommend using Litebulb as a mid-funnel
           | tool. Initial screening should be with a hiring manager (or a
           | founder if it's an early stage startup), final meetings
            | should be with team members/managers to gauge cultural fit. We
           | just try to make the technical coding evaluation part easier.
        
         | davidweatherall wrote:
         | I am eagerly awaiting a YC startup that automates the coding
         | interview from the candidate's side.
        
           | rhizome wrote:
           | I believe this already exists. However, my impression is that
           | while they may ask for your github, no potential employer
           | ever actually checks it.
        
             | postalrat wrote:
             | Next level: a tool that rates quality and innovation in
             | your github.
        
           | Grustaf wrote:
           | That's what I thought this was, until I saw the "YC W22"
           | mark.
        
           | Melatonic wrote:
           | Not a bad idea - you could get "vetted" by the company doing
           | the interview and then companies that want to hire you could
           | just search by type.
           | 
           | So kind of LinkedIn badges but with actual meaning :-D
        
             | jahewson wrote:
              | That's how Triplebyte used to operate. It didn't work out.
        
         | baby_wipe wrote:
         | Making it cheaper for companies to interview increases the
         | potential opportunities for me as a candidate.
         | 
         | I do not take personal offense to being tested in an automated
         | fashion.
        
         | garyjlin wrote:
         | I completely get what you're saying. We've seen many candidates
         | that have expressed the concern of "if I'm going to put X units
         | of time into this, you should also put X units of time into
          | this". A good approach to running a Litebulb interview is to
         | offer to do it in a sit-down session over a call with the
         | interviewer, or to do it asynchronously. When we did sit-down
         | sessions, we offered candidates a choice to just go off and do
         | it throughout the day, whenever they can find the time. 100% of
         | candidates chose to do it async. As it turns out, trying to
         | code something with an interviewer breathing down your neck
         | wasn't a good experience and made them very nervous.
         | 
          | Note that doing it async is an option, not mandatory. If
          | the candidate wants a sit-down session, we prioritize
          | making time to be there for the entire session.
         | The reason we offer this option is because devs that have busy
         | lives often can't easily find a single consecutive multi-hour
         | session to do an interview. They have full time jobs, and might
         | have family obligations at home and over the weekends.
          | Basically, the only way to do a traditional full onsite
          | interview is to take a day off work. We're giving
         | the flexibility for the candidate to find a chunk of time here,
         | a chunk of time there to complete the interview.
         | 
          | We tested the approach above because, just as you said, we did
         | see a huge dropoff when we just emailed candidates the
         | interview link. After we offered to hop on a call and to be
         | available for support, we saw dropoff reduced to less than 15%.
         | 
          | Also, a good chunk of candidates who drop out of interview
          | processes are senior engineers who can't be bothered with
          | lengthy processes, which is fair. I wouldn't give a Litebulb
         | interview to a senior engineer to begin with. Debatably I
         | wouldn't give any kind of technical interview, actually. If
          | they have decades of experience leading teams at big or
          | high-growth tech companies, those achievements speak for
          | themselves. The interview process for seniors should be
          | geared more towards finding a product-team-eng fit than
          | towards an evaluation.
        
       | ZeroCool2u wrote:
       | This is a damn good idea. Like almost everyone else, I don't
       | really enjoy the leetcode grind, but it seemed like a necessary
       | evil.
       | 
       | I'm currently tasked with hiring a senior data scientist and I'm
       | dealing with figuring out the take home case study right now.
       | Would be great if you guys did something similar that wasn't
       | just for pure CRUD apps, but also included a normal data
       | science case study, along with things like unit tests, an
       | enforced PR workflow, and other MLOps tasks like quantization,
       | A/B + regression testing, monitoring, etc. I've been asking
       | candidates
       | questions about those topics for a while and it just feels pretty
       | tricky to come up with what are fair and reasonable questions
       | that aren't too easy and at the same time can be graded
       | objectively.
       | 
       | Probably a lot to bite off for your team right now, but I'm just
       | sitting here wishing I didn't have to come up with this case
       | study from scratch atm, so there's my dream scenario :)
        
         | ramraj07 wrote:
          | Please don't give a take-home DS case study. It's annoying,
          | can take arbitrary amounts of time, and is very hard to
          | judge well. You can come up with some good interview
          | questions if you want (hmu if you want an example). Just
          | give those and ask them to talk about their favorite ML
          | model. You can easily tell the good ones from the masses.
        
       | say_it_as_it_is wrote:
       | Are you planning to create tech management interview cases? They
       | don't seem to be as qualified as the people they hire. Seems like
       | a problem in need of better application screening processes.
        
       ___________________________________________________________________
       (page generated 2022-03-07 23:00 UTC)