[HN Gopher] Launch HN: Rainforest QA (YC S12) - No-Code UI Test ...
       ___________________________________________________________________
        
       Launch HN: Rainforest QA (YC S12) - No-Code UI Test Automation
        
        Russ here, CTO and cofounder of Rainforest QA
       (https://www.rainforestqa.com). Way back in 2012, my cofounder Fred
       (fredsters_s) and I got into YC with one idea in mind, but soon
       pivoted once we saw a pattern among most of the other companies in
       our cohort.  These startups were trying to push code through CI/CD
       pipelines as frequently as possible, but were stymied by quality
       assurance. Labor-intensive QA (specifically, smoke, regression, and
       other UI tests) tended to be the bottleneck preventing CI/CD from
       delivering on its promise of speed and efficiency. That left a
       frustrating dilemma for these teams: slow down the release to do
       QA, or move faster at the expense of product quality. Given that we
       were sure CI/CD would be the future of software development, we
       decided to dedicate our startup to solving this challenge.  For us,
       inspired at the time by Mechanical Turk, the question was: could we
       organize and train crowdsourced testers to do manual UI testing
       quickly, affordably, and accurately enough for CI/CD?  In the
       following years, we optimized crowd testing to be as fast as it
       could possibly be, including parallelization of work and 24/7, on-
       demand availability. (Our human-powered test suites complete in
       under 17 minutes, on average!) But, the fact is, for many rote
       tasks (like regression tests), humans will never be as fast or as
       affordable as the processing power of computers.  The logical
       conclusion is that teams should simply automate as much UI testing
       as possible. But we found that UI test automation is out of reach
       for many startups--it's expensive to hire an engineer who has the
       skills to create and maintain such automated tests in one of the
       popular frameworks like Selenium. Worse, those tests tend to be
       brittle, further inflating maintenance costs.  With the rise of no-
       code, we saw an opportunity to make automated UI testing truly
       accessible to all companies and product contributors. So two years
       ago, we made a big decision to pivot the company and got to work
       building a no-code test automation framework from scratch. We're
       excited to have launched our new platform this summer.  On our
       platform, anyone on your team can write, maintain, and run
       automated UI tests using a WYSIWYG test editor. Unlike other "no-
       code" test solutions which still require coding for test
       maintenance, our proprietary automation framework isn't a front-end
       for Selenium. Unlike most test automation frameworks that test the
       DOM, our automation service interacts with and evaluates the UI of
       your app or website via machine-vision, to give you the confidence
       you're testing exactly what your users and customers will
       experience. Minor, behind-the-scenes code changes that don't affect
       the UI often break Selenium tests (i.e., create false positives),
       but not Rainforest tests.  Our automated tests return detailed
       results in under four minutes on average, providing regression
       steps, video recordings, and HTTP logs of every test. You don't
       have to set up or pay extra for testing infrastructure, because
       it's all included in the plans on our platform. Tests run on
       virtual machines in our cloud, including 40+ combinations of
       platforms and browsers. We build everything with CI/CD pipelines in
       mind, so most of our customers kick off tests using our API, CLI,
       or CircleCI.  Of course, not all tests can or should be automated;
       e.g. when a feature UI is changing frequently or when you need
       subjective feedback like, "Is this image clear?". Today's computers
       are nowhere near able to replace the ingenuity and judgement of
       people; that's why our crowd testing community isn't going
       anywhere. But we can now say that Rainforest is the only QA
       platform that provides on-demand access to both no-code automated
       testing and manual testing by QA specialists.  We offer a free plan
       that provides five free hours of test automation every month,
       because we don't think cost should make test automation
       inaccessible, either.  I'm looking forward to your questions and
       feedback!
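A sketch of the CI gating pattern described above (kick off a test suite, poll for the result, and block the deploy on failure). This is a hedged simulation: `start_run` and `get_status` are hypothetical stand-ins for real HTTP calls, not Rainforest's actual API or SDK.

```python
import time

# Simulated stand-in for a hypothetical "start a test run" API call.
# A real integration would POST to the vendor's REST endpoint instead.
def start_run(suite_id):
    return {"run_id": 42, "suite": suite_id}

# Simulated status poll; a real one would GET the run's status until
# it leaves the "in_progress" state. The mutable default just fakes
# a run that finishes on the third poll.
def get_status(run_id, _state={"polls": 0}):
    _state["polls"] += 1
    return "passed" if _state["polls"] >= 3 else "in_progress"

def ci_gate(suite_id, poll_interval=0.01, timeout=5.0):
    """Start a run, poll until it finishes, and return True only on pass."""
    run = start_run(suite_id)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(run["run_id"])
        if status != "in_progress":
            return status == "passed"
        time.sleep(poll_interval)
    return False  # treat a timeout as a failure
```

In a real pipeline the boolean would become the job's exit code, so a red suite stops the release step.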
        
       Author : ukd1
       Score  : 82 points
       Date   : 2021-10-21 17:14 UTC (5 hours ago)
        
       | timconnors wrote:
       | This is awesome y'all. Can't believe this didn't exist before
        
       | edanm wrote:
       | This is a brilliant idea and direction. Congrats on launching
       | this.
       | 
       | How do you deal with things like permissions, proprietary
       | information, etc?
        
         | ukd1 wrote:
         | Thanks!
         | 
         | Depending on what you mean; assuming it's the security angle:
         | 
         | TLDR; carefully
         | 
         | At a high level, most of our customers are testing in QA - not
         | production - so usually the only proprietary information
         | (outside of credentials to access it) we'd see is something
         | they'd be releasing shortly anyway. However, we take security
         | seriously;
         | 
          | Our infrastructure and code are heavily tested + reviewed
          | before shipping, as well as externally audited yearly. We're
          | audited yearly for HIPAA, which gives us strong internal
          | controls, processes, documentation, and guidelines around
          | access control and how things are done. Everything
         | is encrypted at rest and transit (db, logs, images, etc). All
         | the testing is done through our infra, recorded (video, kvm)
         | and logged (http, https, dns, etc). Obviously we never re-use a
          | VM; they're destroyed post-use.
         | 
         | From the crowd side, they test using the same machines as
         | automation uses (i.e. all the same logging levels as above).
         | Additionally, each individual is KYC'd and signs an NDA with us
         | before they can work. Enterprise, or folks needing BAAs have a
         | sub-crowd with extra levels of KYC / other requirements.
         | 
          | We're in the early stages of formal SOC 2 compliance, but
          | it's not complete. More details here -
         | https://go.rainforestqa.com/rs/601-CFF-493/images/Rainforest...
        
       | sergiotapia wrote:
       | Does this work for SPAs?
        
         | ukd1 wrote:
          | Yes, zero issues - we test like a human would. Rainforest
          | looks at the screen and uses the keyboard and mouse to
          | interact with the software under test. For an SPA, that
          | stack is likely Windows, Chrome, and your SPA inside Chrome.
        
       | throwaway69123 wrote:
       | Does this work with desktop apps ?
        
         | ukd1 wrote:
          | Yes; you can download and install anything, then test it.
          | Ideally you set up the install process as one test, then
          | embed it in other tests for better maintainability. For very
          | large apps, or ones that take a long time to install, we can
          | (and have) pre-installed things on customized VMs for
          | enterprise folks.
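The reuse pattern described above (write the install flow once, embed it in other tests) can be sketched as plain data; the step names and `flatten` helper are invented for illustration, not Rainforest's format:

```python
# A test is just an ordered list of steps; a step can itself be a
# whole test, which gets flattened at run time. This mirrors the
# "write the install flow once, embed it everywhere" pattern.
install_app = ["download installer", "run installer", "accept EULA"]

def flatten(test):
    """Expand embedded tests into a single flat list of steps."""
    steps = []
    for step in test:
        if isinstance(step, list):      # an embedded test
            steps.extend(flatten(step))
        else:
            steps.append(step)
    return steps

# A test that reuses the install flow as its first building block.
settings_test = [install_app, "open settings", "toggle dark mode"]
```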
        
       | alixanderwang wrote:
       | The hero gif seems to stop exactly before what I'm curious about:
       | running the test. The mouse literally hovers there, and I even
       | clicked on it thinking maybe it's like an interactive thing I
       | have to continue.
        
         | ukd1 wrote:
         | There is a video on the bottom of
         | https://www.rainforestqa.com/how-rainforest-works that shows
         | the product being used, which should answer it!
        
       | colinchartier wrote:
       | Title should say S12 instead of S21, no?
        
         | dang wrote:
         | Whoops - that may have originated with me (habit). Fixed now -
         | thanks!
        
           | exdsq wrote:
            | Genuinely thought this was one of the world's longest startup
           | launches!
        
             | Kiro wrote:
             | It said S21 but was changed to S12 so you were right in
             | your assumption.
        
             | fredsters_s wrote:
             | it kinda is :) - 10 years in, we're just getting started!
        
             | dang wrote:
             | There was also this one just a few weeks ago - just a
             | coincidence, btw:
             | 
             |  _Launch HN: RescueTime (YC W08) - Redesigned for wellness,
             | balance, remote work_ -
             | https://news.ycombinator.com/item?id=28683597 - Sept 2021
             | (141 comments)
        
       | ramish94 wrote:
       | Are you hiring PM's? :)
        
       | stayux wrote:
        | This tool has the potential to revolutionize my
        | product-building workflow.
        | 
        | Congrats - you not only solve a big problem but also introduce
        | a real advancement in UX with the no-code option. The added
        | value of real human QA is a wonderful bonus.
        | 
        | P.S. A little aside: the branding and design of your landing
        | page are too simplified and schematic. That would be adequate
        | for a framework or a small app, but a serious B2B tool like
        | yours needs a more serious design. As a start, give a little
        | more color and differentiation to the sections of the site.
        
       | satvikpendem wrote:
       | I remember working with Rainforest back in 2016 when I was at
       | Zentail, glad to see you guys are still alive and doing well!
        
         | ukd1 wrote:
         | Thanks! We've definitely improved a lot since then, I'd love to
         | hear what you think today!
        
       | worik wrote:
       | What is the benefit of "no-code"?
       | 
       | The benefit of using code is that you have to know what you are
        | doing. Having experienced what happens when people build
        | data-driven systems out of building blocks without a thorough
        | understanding of what they are doing (brittle, failing in
        | strange ways under load, general unreliability and low
        | quality), I am suspicious.
        
         | ukd1 wrote:
          | At least for testing with traditional automation (aka code),
          | the bar is knowing what you're doing AND knowing the product
          | well enough to be able to test it effectively.
         | 
         | We remove the code requirement, making testing accessible to
         | more folks - i.e. product managers and product designers who
         | have great knowledge of the product, but don't want to or can't
          | code. This doesn't tend to exclude developers, either:
          | currently our user base is roughly 1/3 engineers, 1/3
          | product managers/designers, and 1/3 QA folks.
        
           | primitivesuave wrote:
           | Completely agree that testing should be more accessible to
           | product managers and designers, and love the concept.
           | Consider the simple example of a web form with multiple
           | `input` elements - if I use your product to click on each
           | input and configure a QA procedure, how does the system
           | unambiguously identify each input given that the page layout
           | may change in the future?
           | 
           | The current code-based testing frameworks force me to add an
           | unambiguous marker to the `input` element, like an attribute
           | or ID, which also makes it easy to query from the DOM during
           | the QA process. How does this QA product handle breaking
           | changes to the UI, and how robust could you expect it to be
           | to code changes?
        
             | ukd1 wrote:
              | It identifies things visually, and optionally with OCR as
              | well. If the target moves, there's likely no issue;
              | changing its shape or size too much, or failing to match
              | the OCR, can cause failures - as expected. If it's an
              | expected update and the test needs updating, we smartly
              | suggest updates to the target if we can find one.
              | Alternatively, we have a crowd-based service to help
              | write and maintain your tests; it's usually used by
              | teams needing high leverage when managing a lot of tests.
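A minimal sketch of the idea above: matching a target visually by searching the whole screen, so a target that merely moves is still found, with a simulated OCR pass as a fallback. The pixel grids and the `ocr_words` dict are toy stand-ins, not Rainforest's actual machine-vision pipeline.

```python
def find_target(screen, target):
    """Locate a small pixel patch anywhere on a screen.

    Because matching is a positional search rather than a DOM lookup,
    a target that only *moves* is still found; only a change in its
    appearance causes a miss. Toy 2-D integer grids stand in for real
    screenshots here.
    """
    th, tw = len(target), len(target[0])
    sh, sw = len(screen), len(screen[0])
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            if all(screen[y + dy][x + dx] == target[dy][dx]
                   for dy in range(th) for dx in range(tw)):
                return (x, y)
    return None

def find_with_ocr_fallback(screen, target, ocr_words, label):
    """Try visual matching first, then fall back to OCR'd text.

    `ocr_words` simulates an OCR pass: a dict mapping recognized
    text to its on-screen position.
    """
    pos = find_target(screen, target)
    if pos is not None:
        return pos
    return ocr_words.get(label)
```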
        
         | nhoughto wrote:
          | Agree, no-code seems to optimize for a large number of people
          | being somewhat effective, but with less control over the
          | output (increasing fragility and rework).
          | 
          | Code is more upfront effort, but it gives more control, and
          | thus less fragility and better maintainability over time (if
          | done right).
         | 
         | I imagine there are some/many situations where throwing many
         | people at a problem is the "best" way and this would suit that
         | quite well I guess.
        
           | franciscassel wrote:
           | In my experience, many orgs that work with Selenium and its
           | derivatives have described that (coded) approach as flaky /
           | brittle (i.e., fragile).
           | 
            | Of course, until automation gets to be as clever as humans,
           | _any_ test automation approach is going to have some flavor
           | of brittleness.
        
       | psadauskas wrote:
       | Congrats Rainforest! As a former early employee, and occasional
       | customer, I'm glad you guys are still going. Turk UI testing is a
       | brilliant idea, and automating that with machine vision is a
       | great next step.
        
       | peterbell_nyc wrote:
       | Congrats on the launch Russ - really excited to see how this
       | works out!
        
         | ukd1 wrote:
         | Thanks Peter! Loved having your support over these years!
        
       | mrkurt wrote:
       | This is amazing. Raising a Series B with an enterprise sales
       | model, then releasing a self service, bottoms up product is like
       | the hardest possible shift. Hopefully it's exhilarating.
        
       | prasadkawthekar wrote:
       | We use Rainforest at dashworks.ai and it's been instrumental in
       | improving the quality of our product. Super intuitive product and
       | great team!
        
       | nickstinemates wrote:
       | Good to see y'all again after so long. As a continued fan, I love
       | to see the refinement over time.
        
       | lharries wrote:
       | Looks awesome! How hard was it to get the testing working from
        | visuals alone? Would love to learn how you're tackling the
        | problem technically.
        
       | hacliff wrote:
       | Not a question, but I just want to thank you for "inventing"
       | review apps back in the day, the idea of having a full version of
       | your app running for a branch was pretty game changing for the
       | companies I worked at.
        
         | ukd1 wrote:
         | Awesome! IMHO it was more of a refinement of the concept of
         | deploying each commit; doing it at a PR and push level worked
         | better. Open-sourcing it, and getting Heroku to bake it in was
         | the icing on the cake for us!
         | 
         | Context for everyone else: early 2014 we open-sourced
          | https://github.com/rainforestapp/fourchette, which was a
          | pioneer of Heroku Review Apps
         | (https://devcenter.heroku.com/articles/github-integration-
         | rev...), and generally the concept of doing this on pull
         | requests over just per branch or commit.
        
       | tekacs wrote:
       | Great to see this shift!
       | 
       | I'd be curious to hear about the comparison to
       | https://reflect.run -- I can imagine that access to the tester
       | community is a piece of that...
        
         | tmcneal wrote:
         | Hey there - I'm one of the co-founders of Reflect so just to
         | give my perspective:
         | 
         | The workflow for creating tests in Reflect is pretty similar to
         | Rainforest: we both expose a "cloud browser" that loads up your
         | webapp and you interact with that to create your tests. The
         | biggest difference workflow-wise is that Reflect records all
         | your actions automatically, whereas with Rainforest you often
         | need to both specify what step you're going to take, and then
         | actually perform that action in the browser itself. Recording
         | everything automatically is technically harder to pull off
         | since it's forced us to ensure we accurately record every step
         | you take, but we think it makes for a better workflow since you
         | can create tests faster, and there's less chance of
         | inaccuracies that cause tests to not be repeatable.
         | 
         | I would quibble with the statement that you need to be far more
         | technical to use Reflect - we're a no-code product after all.
         | :) We have plenty of folks who aren't developers using our
         | product. But the good thing is that both products have free
         | tiers, so users can always give us both a try for free and
         | decide for themselves.
         | 
         | Edit: Also their statement about Reflect running in headless
         | mode is incorrect. Our test grid is a cluster of VMs: we spin
         | up a Docker container for each test run, and each Docker
         | container is running the test steps using a normal non-headless
         | browser.
        
           | fredsters_s wrote:
           | Oh good catch Todd, sorry about that - updated.
           | 
           | And totally agreed, both products take a slightly different
           | approach and have different strengths and weaknesses, try
           | them both and see which is a better fit!
        
             | lharries wrote:
              | What's the reason Rainforest would need to declare
             | "Left click" before doing it rather than just having the
             | user do it?
        
         | ukd1 wrote:
         | The obvious major difference is the ability to use the crowd as
          | well as automate, which I believe is unique to Rainforest.
         | 
         | Outwardly, the way they automate is very similar to us. Looking
         | a little deeper, it seems like they _do_ use the DOM pretty
          | heavily (from rewatching the video at https://reflect.run).
         | For us, this is a fundamental difference - we do not believe in
         | this; we want to automate testing-like-humans. It's harder, but
         | we believe replicating how a human would detect things working
         | or not (visually, via kvm) ends up with less brittle, easier to
         | maintain tests that are closer to the reality of how a human
         | would interact with your app.
         | 
         | Also, we test using VMs (or physical devices if needed for
         | mobile) - allowing us to test the browser, or any other kind of
         | software. This lets us support a large combination of OS and
         | browser variants out of the box, or custom images for
         | enterprise. Reflect doesn't seem to support more than Chrome
         | when I last looked.
        
           | ukd1 wrote:
            | Digging a little more into their pricing: our free plan
            | seems equivalent to their $99/mo plan (yet with better data
            | retention, no user limits, and email testing included).
        
         | fredsters_s wrote:
          | The other major difference is the design principles: our core
          | belief is that everyone owns quality, so we build for the 'no
          | code' user as well. It's a really hard bar to hit, but I
          | think we've done a good job so far - to use Reflect you need
          | to be far more technical. 1/3 of our daily users are PMs.
        
       | annamarie wrote:
       | What are some of the most important lessons you learned in
       | building out Rainforest as a self-serve freemium product after
       | building Rainforest as a more enterprise-focused product to
       | start?
        
         | ukd1 wrote:
          | Good question; the enterprise-focused product relied on much
          | more hands-on onboarding and day-to-day support - sometimes
          | even to the level of professional services. Moving to
          | self-serve exposed all of the hard edges of Rainforest,
          | especially around onboarding - and later around general use
          | of the product. This forced us to up our game significantly
          | on product design, which wasn't a strong focus before. The
          | lesson: it's not an easy shift, and it takes time even if
          | you have a product that works for enterprise.
        
           | anonymouse008 wrote:
           | I wonder if this fits the bill of "do things that don't
           | scale" -- your experience is similar to a theme revealed in
           | my work.
        
       | melony wrote:
       | Do you plan to expand into no-code web scraping too? The frontend
       | tech is the same for both.
        
         | ukd1 wrote:
         | No plans to - our focus is on helping folks improve their
         | product quality. I guess you could probably use it for that if
         | you wished, but it's really not optimized for extracting data
         | from pages.
        
       | choeger wrote:
       | Ok, I'll bite. How do you integrate with the tested software?
       | 
       | Do you a) run the tested software inside your VMs (if so, what's
       | the integration API?) or b) expect your clients to run it (if so,
       | how can the client authenticate your test access?)
        
         | ukd1 wrote:
         | Mostly b, but:
         | 
          | a) we can; if so, generally they install it as part of the
          | testing (e.g. a client testing a Chrome extension), or have
          | us build a custom VM for them (e.g. clients with a 20GB
          | download)
         | 
         | b) this is the common path; folks push something, ci builds it,
         | ships to a qa env, they run us, if it passes, push to prod.
         | 
          | For b, auth is handled anywhere from zero auth (a fully open
          | QA env, though usually it's SaaS so you still have to log in
          | to their app), through to HTTP auth, limiting the IPs
         | (https://help.rainforestqa.com/docs/which-ip-addresses-do-
         | rai...), to VPN directly into their QA infra. Without pulling
         | numbers, I'd guess 95% go the zero-auth route.
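The IP-limiting option can be sketched with Python's stdlib `ipaddress` module. The CIDR ranges below are documentation-only example networks (TEST-NET), not Rainforest's real egress IPs:

```python
import ipaddress

# Hypothetical example ranges a QA environment might allowlist; the
# real list would come from the testing vendor's documentation.
ALLOWED_RANGES = [ipaddress.ip_network("203.0.113.0/24"),
                  ipaddress.ip_network("198.51.100.0/24")]

def is_allowed(client_ip):
    """Return True if the client IP falls inside any allowlisted CIDR."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_RANGES)
```

A QA environment's reverse proxy or firewall would apply the same membership test to incoming connections.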
        
       | nocommandline wrote:
       | >>> slow down or pause the release process to do QA, or move
       | faster at the expense of product quality <<<
       | 
       | This is a constant dilemma for Solopreneurs/very small teams. I
        | was just thinking last week: can I find an affordable automated
        | testing platform? (I need to run tests on new features I've
        | added to my latest project, an Electron app.)
       | 
        | Follow-up question - does this work for Electron apps? Either
        | way, I'm still happy to try it on other web app projects if I
        | do another one.
        
         | ukd1 wrote:
         | Yes, it works for electron apps. You'll need to host the binary
          | somewhere, then install it. You can bundle those actions into
          | one test and then reuse it as a building block for your actual
         | tests.
        
           | nocommandline wrote:
           | Got it. Thanks
        
       | mustacheemperor wrote:
       | Chiming in as a happy Rainforest user. This tool has been a huge
       | benefit to our startup. It is a great way to maintain ongoing QA
       | at a minimum cost of hours while iterating your product, and the
       | built-in ability to get your test fixed by Rainforest for a few
       | bucks or to have certain tests done by hand at an hourly rate is
       | extremely useful and makes it simple to quantify the cost of
        | outsourcing to Rainforest vs. doing something in-house.
       | 
       | This tool makes the benefits of a well-built automated testing
       | setup much more accessible and less costly.
       | 
       | We had previously been using traditional automated QA tools like
        | Selenium when someone suggested Rainforest and I am very happy we
       | made the switch. Nothing but praise and well wishes for this team
       | - you are en route to massive success.
        
       | ramesh31 wrote:
       | Does the UI output some kind of configuration file that can be
       | checked into source control? If not, how do I maintain a history
       | of test changes?
        
         | ukd1 wrote:
         | We support that for the human language tests, but not yet for
         | the automation. A few customers have done it themselves by
         | exporting the JSON (it's a defined, versioned schema), but
         | we've not yet productized that.
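A sketch of the DIY approach described above: dumping the test definition as stably-ordered JSON so version-control diffs stay readable. The field names and `schema_version` value are invented for illustration, not Rainforest's actual export schema:

```python
import json

def export_test(test, path):
    """Write a test definition as deterministic JSON.

    sort_keys plus a fixed indent keeps the file byte-stable across
    exports, so version-control diffs show only real changes.
    """
    with open(path, "w") as f:
        json.dump(test, f, indent=2, sort_keys=True)
        f.write("\n")

# Invented example definition; real exports would follow the
# vendor's defined, versioned schema.
test = {"schema_version": 2,
        "title": "login works",
        "steps": [{"action": "click", "target": "Sign in"}]}
```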
        
       ___________________________________________________________________
       (page generated 2021-10-21 23:00 UTC)