        
       Launch HN: Reflect (YC S20) - No-code test automation for web apps
        
       We're Fitz and Todd, co-founders of Reflect (https://reflect.run) -
       we're excited to share our no-code tool for automated web testing.
       We worked together for 5+ years at a tech start-up that deployed
       multiple times a week. After every deployment, a bunch of our
       developers would manually smoke test the application by running
       through all of the critical user experiences. This manual testing
       was expensive in terms of our time. To speed up the tests' run
       time, we dedicated developer resources to writing and managing
       Selenium scripts. That was expensive at "compile time" due to
       authoring and maintenance. At a high level, we believe the problem
       with automated end-to-end testing comes down to two things: tests
       are too hard to create, and they take too much time to maintain.
       These are the two issues we are trying to solve with Reflect.
        
       Reflect lets you create end-to-end tests just by using your web
       application, and then executes that test definition whenever you
       want: on a schedule, via API trigger, or simply on-demand. It
       emails you whenever a test fails and provides a video and the
       browser logs of the execution.
        
       One knee-jerk reaction we're well aware of: record-and-playback
       testing tools, where the user creates a test automatically by
       interacting with their web application, have traditionally not
       worked very well. We're taking a new approach by loading the
       site-under-test inside a VM in the cloud rather than relying on a
       locally installed browser extension. This eliminates a class of
       recording errors caused by existing cookies, proxies, or other
       extensions introducing state that the test executor is not aware
       of, and it unifies the test creation environment with the test
       execution environment. By completely controlling the test
       environment we can also expose a better UX for certain actions.
       For example, to do visual testing you just click-and-drag over
       the element(s) you want to test. For recording file uploads, we
       intercept the upload request in the VM, prompt you to upload a
       file from your local file system, and then store that file in the
       cloud and inject it into the running test. If you want to add
       additional steps to an existing test, we'll put you back into the
       recording experience and fast-forward you to that point in the
       test, where again all you need to do is use your site and we'll
       add those actions to your existing test. Controlling the
       environment also allows us to reduce the problem space by
       blocking actions which you typically wouldn't want to test, but
       which are hard to replicate and thus could lead to failed
       recordings (e.g. changing browser dimensions mid-recording). As
       an added bonus, our approach requires no installation whatsoever!
        
       We capture nearly every browser action - from hovers and file
       uploads to drag-and-drops and iframes - while building a
       repeatable, machine-executable test definition. We support
       variables for dynamic inputs, and test composition so your test
       suite stays DRY. The API provides flexible integration with your
       CI/CD out of the box and supports creating tests in prod and
       running them in staging on the fly. You don't need to use a
       separate test grid, as all Reflect tests run on our own
       infrastructure. Parallel execution of your tests is a two-click
       config change, and we don't charge you extra for it.
        
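       As a rough sketch, the CI/CD integration boils down to one
       authenticated HTTP call from your pipeline. The endpoint path,
       header name, and payload fields below are illustrative
       placeholders rather than our exact API (see the docs for the
       real shape):
        
         // Kick off a suite from CI and report the execution id.
         // Endpoint, header, and payload shapes are hypothetical.
         async function runSuiteFromCI(): Promise<void> {
           const res = await fetch(
             "https://api.reflect.run/v1/suites/smoke-tests/executions",
             {
               method: "POST",
               headers: {
                 "X-API-KEY": process.env.REFLECT_API_KEY!,
                 "Content-Type": "application/json",
               },
               // Hostname override: run tests recorded against prod
               // on this PR's ephemeral staging environment instead.
               body: JSON.stringify({
                 overrides: {
                   hostnames: [{
                     original: "app.example.com",
                     replacement: "pr-123.staging.example.com",
                   }],
                 },
               }),
             },
           );
           if (!res.ok) {
             throw new Error(`Reflect API error: ${res.status}`);
           }
           const { executionId } = await res.json();
           console.log(`Started execution ${executionId}`);
         }
        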
       Some technical details that folks might find interesting:
        
       - For every action you take we'll generate multiple selectors
       targeting the element you interacted with. We wrote a custom
       algorithm that generates a diverse set of selectors (so that if
       you delete a class in the future your tests won't break), and
       ranks them by specificity (i.e. [data-test-id] > img[alt="foo"] >
       #bar > .baz). (There's a sketch of this ranking idea after this
       list.)
        
       - To detect certain actions we have to track DOM mutations
       across async boundaries. So for example we can detect if a hover
       ended up mutating an element you clicked on, and thus should be
       captured as a test step, even if the resulting mutation occurred
       within a requestAnimationFrame, XHR/fetch callback,
       setTimeout/setInterval, etc. (A sketch of this follows the list
       as well.)
        
       - We detect and ignore auto-genned classes from libraries like
       Styled Components. We use a heuristic to do this so it's not
       perfect, but this approach allows us to generate higher-quality
       selectors than if we didn't ignore them.
        
       - One feature in beta that we're really excited about: for React
       apps we have the ability to target React component names as if
       they were DOM elements (e.g. if you click on a button you might
       get a selector like "<NotificationPopupMenu> button"). We think
       this is the best solution to the auto-genned classes problem
       described in the bullet above, as selectors containing component
       names should be very stable.
        
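       Here's the gist of the selector ranking from the first bullet,
       as a simplified sketch (the scoring weights and the auto-gen
       heuristic are illustrative stand-ins, not our actual algorithm):
        
         // Generate a diverse set of candidate selectors for an
         // element and rank them by how stable they're likely to be.
         function candidateSelectors(el: Element): string[] {
           const out: string[] = [];
           const testId = el.getAttribute("data-test-id");
           if (testId) out.push(`[data-test-id="${testId}"]`);
           const alt = el.getAttribute("alt");
           if (alt) out.push(`${el.tagName.toLowerCase()}[alt="${alt}"]`);
           if (el.id) out.push(`#${el.id}`);
           for (const cls of el.classList) {
             // Skip classes that look machine-generated (CSS-in-JS).
             if (!looksAutoGenerated(cls)) out.push(`.${cls}`);
           }
           return out.sort((a, b) => specificity(b) - specificity(a));
         }
        
         function specificity(selector: string): number {
           if (selector.startsWith("[data-test")) return 4; // most stable
           if (selector.includes("[alt=")) return 3;
           if (selector.startsWith("#")) return 2;
           return 1; // bare class names churn the most
         }
        
         // Heuristic: short hash-like class names are probably emitted
         // by a CSS-in-JS library and will change on the next build.
         function looksAutoGenerated(cls: string): boolean {
           return /^(css|sc)-[a-z0-9]+$/i.test(cls) || /\d{3,}/.test(cls);
         }
        
       At run time the executor walks the ranked list and falls back to
       the next candidate whenever one no longer matches.
        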
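       And here's the spirit of the async-boundary tracking from the
       second bullet. This sketch wraps just one entry point; the real
       recorder has to wrap many more (requestAnimationFrame, fetch/XHR,
       promise callbacks) and attribute mutation records much more
       carefully:
        
         // Attribute DOM mutations to the user action that caused
         // them, even across async hops. Names are illustrative.
         const observer = new MutationObserver(() => {
           /* drained manually via takeRecords() below */
         });
         observer.observe(document.body, {
           subtree: true, childList: true, attributes: true,
         });
        
         let currentAction: string | null = null;
        
         // Drain queued mutation records, crediting them to an action.
         function attribute(action: string | null): void {
           const records = observer.takeRecords();
           if (action && records.length > 0) {
             console.log(`${records.length} mutation(s) from ${action}`);
           }
         }
        
         document.addEventListener("mouseover", (e) => {
           attribute(currentAction); // flush the prior action's records
           currentAction = `hover:${(e.target as Element).tagName}`;
         }, true);
        
         // Wrap one async boundary so callbacks run in the context of
         // the action that scheduled them.
         const origSetTimeout = window.setTimeout.bind(window);
         (window as any).setTimeout = (fn: () => void, ms?: number) => {
           const action = currentAction; // capture the causing action
           return origSetTimeout(() => {
             const prev = currentAction;
             currentAction = action;
             try {
               fn();
             } finally {
               attribute(currentAction); // this callback's mutations
               currentAction = prev;
             }
           }, ms);
         };
        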
       We tried to make Reflect as easy as possible to get started with
       - we have a free tier (no credit card required) and there's
       nothing to install. Our paid plans start at $99/month and you pay
       primarily for test execution time. Thanks, and we look forward to
       your feedback!
        
       Author : tmcneal
       Score  : 179 points
       Date   : 2020-07-20 13:33 UTC (9 hours ago)
        
       | loh wrote:
       | Congrats on launching Reflect. Looks solid!
       | 
       | I soft launched something eerily similar ~6 months ago and got
       | zero feedback (probably because no one was able to actually try
       | it since Google took forever to approve the extension).
       | 
       | TestFront.io: https://news.ycombinator.com/item?id=22130590
       | 
       | Ran out of money though and had to pursue other things so I put
       | it on hold. Maybe I'll resume work on TestFront at some point and
       | you'll have some competition. ;)
        
       | foreigner wrote:
       | Looks neato. Are tests versioned? I'd like to be able to see a
       | textual diff of changes to tests over time.
        
         | tmcneal wrote:
         | Thanks! Yep tests are versioned - if you click on the 'History'
         | button when viewing a test you'll see all the executions within
         | your retention period. We don't show a diff view of historical
         | changes, however we do show a diff view when a text change
         | causes your test to fail. So if, say, your 'Log In' button
         | changed to 'Sign In', we would show you a diff of that text
         | change and give you an option to make 'Sign In' the new
         | baseline used when running future tests.
        
           | inglor wrote:
           | One thing that will come up, as we've found out: users want
           | branches, they want merging, and they want the ability to
           | track their git branches with their tests.
        
           | foreigner wrote:
           | That's not the kind of versioning I intended. What I'm
           | interested in is if I make some changes to the test setup in
           | your UI, is there a way to see what I changed, and when those
           | changes happened in the past?
        
             | tmcneal wrote:
             | Ah - sorry for misunderstanding. No we don't have a view
             | like that, though I can see that being pretty handy. This
             | is great feedback - thank you!
        
       | ggregoire wrote:
       | Looks good. A small suggestion for the website: perhaps I missed
       | it, but I watched both videos "record a test" and "view test
       | results" on the front page, and I didn't see Reflect detect an
       | actual regression.
        
         | fitzn wrote:
         | Good point. The test failure case is something we should show.
         | Thank you!
        
       | catchmeifyoucan wrote:
       | One thing I like about this is that it supports drag and drop.
       | That's something I haven't seen to be super straightforward
       | with other suites like Cypress, and it's very much a user-
       | initiated action.
        
       | [deleted]
        
       | gfodor wrote:
       | How well would this work with a hybrid DOM/WebGL application?
        
         | tmcneal wrote:
         | If the elements you interact with are DOM elements and non-DOM
         | elements within a canvas then we should be able to detect and
         | replicate those actions. The biggest remaining issue might be
         | performance - our VMs don't have a GPU attached so depending on
         | the application it may be slow because WebGL is not running
         | with hardware acceleration.
        
       | [deleted]
        
       | cmehdy wrote:
       | Interesting tool! And you've definitely given a lot of thought
       | to many of the problems encountered while attempting to do UI
       | testing automation.
       | 
       | I've written my fair share of extensive selenium stuff (and
       | appium, but.. let's forget about those painful memories) and one
       | thing that I found fairly "easy" to add to the suites I had
       | written was accessibility testing (using Deque's Axe[0] tools).
       | It literally took a few lines of code and a flag at runtime to
       | enable the accessibility report on my Jenkins/TestNG/Selenium
       | combo. However WCAG is constantly changing and it's hard to keep
       | up, and even Deque is not always up to date AFAIK. Do you have
       | plans to support accessibility testing with your tool? (even a
       | subset of WCAG's rules)
       | 
       | Another thing I've noticed is the jump in pricing from Free to
       | 100 USD/month, which goes from 30 minutes to 6 hours. This
       | might be
       | steep for a team attempting to test out the validity of the tool
       | against the competition - perhaps offering a first-month
       | discounted trial or something like that would be appealing.
       | 
       | I also haven't really seen if it is possible to enable
       | concurrency (for example, is testing on three platforms at the
       | same time for 10 minutes on that free tier possible? I would
       | imagine so if one is doing the lifting with CI like Jenkins - but
       | perhaps you have your own solutions). Tangentially related but
       | you say your tests integrate with various CI solutions, does this
       | mean one can extract the test results in a way that allows
       | further processing and integration into other tools? (I'm
       | thinking of the XMLs coming out of TestNG there).
       | 
       | Lastly, I don't know if the time used is counted from the moment
       | you start your VMs or browser starts, or the page is loaded, or
       | the first step is done or something else. Clarifying that might
       | help with a team's estimates (I had internal tests where the
       | struggle was entirely on having the virtual
       | environment/device/browser ready and the test was then a breeze,
       | so the significant metric was the boot-up time).
       | 
       | [0] https://github.com/dequelabs/axe-core
        
       | chris_st wrote:
       | For selecting things, I've found that allowing a "data-
       | test='user-name-input'" type attribute is useful for a lot of
       | cases, when doing Gauge/Taiko tests.
       | 
       | It might make sense to allow/recognize these to save trying to
       | find things that way, rather than via CSS-type selectors that may
       | change.
        
         | tmcneal wrote:
         | Totally! If you have data-test* attributes set up, we will use
         | those first when generating selectors for each action. The
         | full list of data-test attributes we use is here:
         | https://reflect.run/docs/recording-tests/creating-resilient-....
         | If we don't find data-test* attributes we'll also look for
         | other attributes that tend to have a good degree of
         | specificity, like alt, rel, schema.org, and aria-* attributes.
        
           | inglor wrote:
           | I recommend considering learning between test runs, and I
           | encourage you to train a relatively simple model for
           | selection on top of http-archive and tagged data.
           | 
           | "Off the shelf" machine learning makes it pretty easy to
           | create very robust selectors. I gave a talk about it at GDG
           | Israel and was supposed to speak about it at HalfStack,
           | which got delayed and then cancelled because of COVID-19 -
           | but the principle is pretty simple.
           | 
           | It's amazing how much RoI you can get from relatively simple
           | reinforcement learning models. Here are some ideas:
           | https://docs.google.com/presentation/d/1OViIwDJJw1kjVJH5Z2N5...
           | 
           | Good luck :]
        
       | [deleted]
        
       | YPCrumble wrote:
       | Similar product I've had a great experience with is
       | https://ghostinspector.com
        
         | ohadron wrote:
         | I use it too and I really like it.
         | 
         | However the premise of having a test editor in a VM where I can
         | time travel to a specific point in time and add steps from
         | there could save me a bunch of time. Also the multiple
         | selectors per element and the ability to target React component
         | names sound really cool.
         | 
         | Happy for the competition :)
        
       | idreyn wrote:
       | I maintain a web app that would really benefit from an E2E suite,
       | but we don't have the developer capacity to write one right now,
       | so this looks like it hits a potential sweet spot for me. To use
       | Reflect, I think we would need to understand the plan for what
       | happens if the SaaS goes under -- ideally, we'd be left holding
       | our test definitions and some open-source version of the test
       | runner that we can instantiate on our own VM.
        
         | jelling wrote:
         | Testcafe's test recorder would let you create tests relatively
         | easily by browsing, and you can tweak them if needed. IME they
         | have great support and I like that I can buy the software
         | rather than having another SaaS subscription. YMMV.
        
         | inglor wrote:
         | The "easy out" for tools in this space is to give you export
         | (For example to Selenium/Puppeteer/Playwright). A lot of the
         | "premium" test tools offer this functionality.
         | 
         | The "less easy out" is an on-prem version with a contract
         | regarding updates + a clause for what happens if the company
         | goes under in terms of support + an escrow over the code (the
         | company gets a copy of the code + the license to change but not
         | sell it etc).
        
       | jameslk wrote:
       | How does this compare to other existing "no code" SaaS regression
       | testing tools such as screenster.io?
        
       | o_____________o wrote:
       | Took a look, nice job!
       | 
       | Would be nice:
       | 
       | 1. Clearer setting for notification email
       | 
       | 2. The ability to target an area for an element change without
       | knowing what the elements will be. Example: filter for recent
       | items without foreknowledge of which items will appear.
       | 
       | 3. Ability to traverse up the DOM (to select parent(s)) based
       | on a selector that was too specific. I'm encountering this
       | quite a bit.
        
       | mchusma wrote:
       | How does this compare to recent YC alum Preflight?
        
         | mritchie712 wrote:
         | Also, for those familiar with the space or using something
         | else, what should we compare this against? I'm in the market
         | for something like this.
        
         | fitzn wrote:
         | Hey, co-founder Fitz here! Great question. We're both tackling
         | the same problem. Our key differentiator from Preflight (and
         | Testim, and Mabl, and Ghost Inspector) is that we spin up a
         | new VM for every browser session rather than relying on a
         | local browser extension like all of those competitors. The
         | comment above highlights the trade-offs of this approach but
         | happy to discuss further (fitz at).
        
       | karussell wrote:
       | Really nice tool, thanks! I'm more a backend developer so I might
       | have some stupid questions: What is your competition and what are
       | you doing better?
       | 
       | Update: reading through this thread and a bit of web search
       | resulted in the following list:
       | 
       | https://ghostinspector.com
       | https://www.testim.io
       | https://www.katalon.com
       | https://www.cypress.io
       | https://github.com/dequelabs/axe-core
       | https://www.mabl.com
       | https://preflight.com
       | 
       | > We're taking a new approach by loading the site-under-test
       | inside of a VM in the cloud rather than rely on a locally
       | installed browser extension.
       | 
       | But how would I record and test things locally? I.e. I would need
       | a public setup right at the beginning.
        
         | fitzn wrote:
         | Hi! It's technically possible to test your localhost though
         | it's a bit of work with ngrok. What we're seeing more and more
         | of with our customers is that they stand up ephemeral
         | environments for each PR or each merge, and then use
         | Reflect's hostname and
         | URL overrides at run time to target their code changes with the
         | tests they already recorded against prod or staging. We've
         | worked with Release W20 (https://releaseapp.io/) in the past to
         | demonstrate running Reflect tests against each PR. I know it's
         | not exactly what you're looking for, but similar in spirit, I
         | think.
        
           | karussell wrote:
           | Never heard of ngrok, thanks for this pointer. Sounds like a
           | magic tool :)
           | 
           | Will look into releaseapp.io
        
       | catchmeifyoucan wrote:
       | Are there any plans to support Electron apps? I have a React-
       | based Electron app, and I'd also love to test a few things that
       | are relevant to the app loading and getting data from disk.
       | Just another use case to consider.
        
         | fitzn wrote:
         | Hey, no plans to support Electron apps right now. If your app
         | has a web-based portal (or started as web-based), then you
         | could test that, of course.
        
       | jaequery wrote:
       | How does this compare with qawolf.com, which uses Playwright
       | and is open-source and free?
        
       | tribeca18 wrote:
       | Makes me think of http://waldo.io, but for web apps! Really
       | excited about the new wave of no-code test automation tools -
       | definitely helps semi-technical team members take on more
       | responsibility on the testing side vs. just writing specs.
        
         | fitzn wrote:
         | Yes! 100% our feeling as well.
        
       | IAmNotAFix wrote:
       | This assumes the URL is publicly available, or is there an on-
       | premise offer?
        
         | tmcneal wrote:
         | We don't offer an on-premise version, but we do have customers
         | testing environments not accessible to the public internet.
         | They're doing it by allow-listing a static IP and we configure
         | all traffic to come from that IP for that account. There are
         | other options if a static IP allow-list doesn't work - we've
         | specced an approach for a prospect where we would set up an
         | agent inside their firewall that we use to access their
         | internal environments. This is an approach used by other tools
         | to access secured environments - we haven't done it ourselves
         | yet though.
        
       | amw-zero wrote:
       | I've largely moved away from UI testing. The two main reasons
       | are:
       | 
       | 1) Flakiness / non-determinism
       | 
       | 2) The constant change of the UI
       | 
       | Both of these are absolute killers to productivity on a large
       | team. Note, I'm not against UI testing in theory. I think if you
       | could, you would have full end to end tests for every single edge
       | case so that you could be sure that the actual product
       | presented to users works. But in practice, end-to-end testing an
       | asynchronous distributed system (which a simple client-server
       | application still is) is full of non-determinism.
       | 
       | Re the constant change of the UI: this is also just true in my
       | experience. I've worked on a navigation / global UI redesign at
       | every company I've ever worked at. It happens like once every 3-5
       | years. Within the redesigns, it's still extremely common to
       | subtly change the UI for UX reasons all the time. When this
       | happens, be prepared to spend half of the time updating all of
       | your tests.
        
         | inglor wrote:
         | To be fair, I work for a company in this space (Testim.io) and
         | we have had success stories with very large companies doing UI
         | tests with the service (like Microsoft).
         | 
         | I think the hardest aspects of testing are fast authoring and
         | stability (maintenance) - AI tools can help with that and
         | learning data between test runs can create very stable tests.
         | 
         | So as someone in this space - it's a very exciting space and I
         | am very optimistic for this startup (Reflect)
        
           | amw-zero wrote:
           | Yea I don't want to discourage work in the space. Actually,
           | quite the opposite. I'm just saying where the bar is for me
           | when I'm considering handing over money. I'm not going to pay
           | money for something that ends up actually slowing me down.
        
         | hinkley wrote:
         | First AJAX heavy app I worked on, we rigged the game so that we
         | could win.
         | 
         | We wired up all of the asynchronous bits to tweak a hidden DOM
         | element at the bottom of the page, so that our tests could wait
         | for that element to settle before validating anything.
         | 
         | We'd already had a lot of our async code run through a couple
         | of helper methods (to facilitate cache busting and Karma
         | tests), so it was 'just' a matter of finishing what we'd
         | started.
         | 
         | I kinda feel like the browsers are letting us down by not
         | exposing this sort of stuff.
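         | 
         | In sketch form, the rig was something like this (a
         | from-memory sketch, not the actual code; the element id and
         | helper names are made up):
         | 
         |   // Every async helper bumps a counter; a hidden element
         |   // reflects whether any work is still in flight.
         |   let inflight = 0;
         |   const flag = document.createElement("div");
         |   flag.id = "test-settled"; // the id the tests poll for
         |   flag.style.display = "none";
         |   flag.textContent = "settled";
         |   document.body.appendChild(flag);
         | 
         |   function trackAsync<T>(p: Promise<T>): Promise<T> {
         |     inflight++;
         |     flag.textContent = "busy";
         |     return p.finally(() => {
         |       if (--inflight === 0) flag.textContent = "settled";
         |     });
         |   }
         | 
         |   // Route AJAX through the helper...
         |   trackAsync(fetch("/api/data"));
         |   // ...and the Selenium side waits for "#test-settled" to
         |   // read "settled" before validating anything.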
        
           | atarian wrote:
           | How does that address the issue of the UI changing? If the
           | markup changed, you still need to update your tests.
        
       | inglor wrote:
       | Hey congrats, I work for Testim ( https://testim.io ) which I
       | assume is somewhat of a competitor?
       | 
       | Excited to see more players in this space - good luck! Most of
       | the market is still doing manual QA and that has to change.
        
       | yeldarb wrote:
       | We are using Reflect.run at Roboflow and it really is very slick.
       | We have it testing our core flow on a schedule so we know if any
       | of our 3rd party services or dependencies go down and/or if we
       | introduce a regression.
       | 
       | Our app's core workflow involves signing in, uploading images and
       | annotation files, parsing those files, using HTML5 canvas to
       | render previews and thumbnails, uploading files to GCS, and
       | kicking off several async functions and ajax requests.
       | 
       | That Reflect.run was able to handle all of those complex
       | operations makes me pretty confident it can effectively test any
       | web app.
        
         | LeonidBugaev wrote:
         | I would love to learn more about your setup. Does Reflect have
         | a built-in Roboflow integration, or is it something custom?
        
           | yeldarb wrote:
           | Nope, we were able to use their UI to create our tests just
           | like any other user would -- everything we needed was
           | supported out of the box (minus one minor workaround I had to
           | add to my code because I had some janky, non-deterministic
           | IDs on some dynamically added hidden fields).
        
       | cbenincasa wrote:
       | This is awesome, congrats guys!
        
       | mritchie712 wrote:
       | How do you handle logging in to the app that needs to be tested?
       | We use Google OAuth.
        
         | inglor wrote:
         | Ungh Google OAuth is kind of frustrating - the major reason is
         | that websites are pretty good at blocking pages that are coming
         | from automation - and you get all sorts of issues and problems
         | (not to mention if a company like Google (not the real example)
         | uses you to test their platform and then SafeSearch picks up
         | their staging login as a phishing site and blocks you ^_^).
         | 
         | I warmly recommend _not_ testing Google OAuth and instead
         | passing a token and bypassing it on your server's side.
         | 
         | The way automation can work around it (in Testim.io for
         | example) is "mock network" which is supported in our codeless
         | solution but also available in most test infrastructure (like
         | Playwright which is FOSS). You would mock out the parts that
         | leave the user with a token they can use to authenticate.
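         | 
         | As a concrete sketch of the "mock network" idea in Playwright
         | (the URL pattern and token shape are placeholders for
         | whatever your app actually expects):
         | 
         |   import { test } from "@playwright/test";
         | 
         |   test("checkout works when signed in", async ({ page }) => {
         |     // Stub the OAuth token exchange so the app receives a
         |     // canned token instead of driving the real login UI.
         |     await page.route("**/oauth2/token", (route) =>
         |       route.fulfill({
         |         status: 200,
         |         contentType: "application/json",
         |         body: JSON.stringify({
         |           access_token: "test-token",
         |           token_type: "Bearer",
         |           expires_in: 3600,
         |         }),
         |       })
         |     );
         |     await page.goto("https://staging.example.com");
         |     // ...continue the flow as an authenticated user...
         |   });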
        
         | tmcneal wrote:
         | We fully support e-mail based logins, but OAuth can be
         | challenging.
         | 
         | GitHub OAuth for example will issue a 2FA email-based
         | challenge non-deterministically. We handle that by detecting
         | the challenge and filling out the challenge code based on the
         | contents of the email sent by GitHub. This requires a one-time
         | setup where you add an email address we control to your GitHub
         | user so that we can read and parse it.
         | 
         | For Google OAuth we can execute all the steps but the two
         | issues there are (1) we run everything in a single window and
         | some web apps don't like that because they assume the oauth
         | flow will happen in a new window, and (2) sometimes Google
         | prompts you to answer a security question and we don't yet
         | support marking test steps as optional.
         | 
         | What our customers have been doing instead is setting up a
         | mechanism to auto-log-in the test user using a magic link:
         | basically sending a one-time-use auth code to a URL in their
         | app that then authenticates the user. I think some platforms
         | (Firebase?) have built-in support for this.
         | 
         | I'm certainly happy to brainstorm what could work best for you
         | though if you'd like (my email: todd at reflect.run)
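         | 
         | A minimal sketch of that magic-link idea, assuming an Express
         | backend (the route, code store, and session helper are all
         | illustrative):
         | 
         |   import express from "express";
         | 
         |   const app = express();
         | 
         |   // One-time-use codes minted out-of-band, test users only.
         |   const codes = new Map<string, string>([
         |     ["abc123", "tester-1"],
         |   ]);
         | 
         |   // e.g. GET /test-login?code=abc123 as a test's first URL.
         |   app.get("/test-login", (req, res) => {
         |     // Never allow this bypass in production.
         |     if (process.env.NODE_ENV === "production") {
         |       res.sendStatus(404);
         |       return;
         |     }
         |     const code = String(req.query.code ?? "");
         |     const userId = codes.get(code);
         |     if (!userId) {
         |       res.sendStatus(401);
         |       return;
         |     }
         |     codes.delete(code); // one-time use
         |     res.cookie("session", mintSessionFor(userId), {
         |       httpOnly: true,
         |     });
         |     res.redirect("/dashboard");
         |   });
         | 
         |   // Placeholder: mint a session token for a test user.
         |   function mintSessionFor(userId: string): string {
         |     return `session-${userId}`;
         |   }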
        
           | verdverm wrote:
           | I'm using Hardware keys to secure my personal accounts. What
           | are the alternative auth methods suggested for this
           | situation?
        
             | tmcneal wrote:
             | I would suggest adding the ability to auth via a magic-link
             | in your web app. This would allow your tests to bypass the
             | auth flow entirely by passing an auth token as a request
             | parameter in the first URL of your test. You can pass new
             | auth tokens when you go to run your tests either via the UI
             | or via our API if you have it hooked up to your CI/CD
             | pipeline. More docs on how to do this via the API are here
             | - we call these 'request overrides' in our docs:
             | https://reflect.run/docs/developer-api/documentation/#run-te....
             | 
             | In terms of added security, some options there would be to
             | only enable these "magic links" in staging, and to only
             | enable this type of auth for a least-privileged user
             | account (e.g. no admin or employee accounts could auth
             | this way).
        
               | verdverm wrote:
               | Magic links sound like they have several pitfalls (the
               | potential for security incidents like Twitter's is
               | beeping here) and would require significant changes on
               | our end to use this platform.
        
               | tmcneal wrote:
               | You're right, there are certainly security implications
               | for magic links. Unfortunately, for an auth flow that
               | incorporates hardware keys, I can't think of how you
               | would test behind that without some sort of workaround,
               | but I may be overlooking something.
        
               | verdverm wrote:
               | I generally have service accounts specific for testing
               | with significant restrictions. Hardware keys present
               | their own complications for non-human ops, so they don't
               | really belong there.
               | 
               | More just seeking bounds of possibilities, thanks for
               | your replies.
        
         | [deleted]
        
       | p17 wrote:
       | I want to use Reflect to test my game with many concurrent,
       | simulated players. For context, the game needs to properly handle
       | 50 players at once.
       | 
       | The pricing (6 hours of testing for $99) means that if I want to
       | do a 1 minute test with 50 concurrent players, I can only test 6
       | times a month. A big benefit of testing is to ensure we ship
       | reliable software, and we plan to ship much more often than 6
       | times a month.
       | 
       | Is there a way you could price by unique tests instead of hours
       | tested?
        
         | fitzn wrote:
         | Interesting use case. We haven't encountered this or thought
         | about this before. It would be a change for us to price like
         | this, but happy to discuss further if you want to shoot us an
         | email!
        
       | [deleted]
        
       | ck_one wrote:
       | I assume it's not possible to clean up after a test since you
       | directly interact with the web app, right?
        
         | fitzn wrote:
         | Current customers typically either run a "clean up" Reflect
         | test that deletes account state that was created in previous
         | tests, or they have a periodic internal job/logic that auto-
         | deletes all test accounts. The next day's Reflect tests can
         | sign up fresh, for example, and use the new accounts for
         | tests.
        
       | UweSchmidt wrote:
       | The true test for such a tool is usually the edge cases; in my
       | opinion the web is simply too finicky, and all those well-
       | meaning custom algorithms I've tried fell way short.
       | 
       | I would recommend Ranorex, which combines a comfortable record-
       | replay functionality (which creates real C# code) and a code-
       | first approach and everything in-between. A powerful "spy"
       | assists in creating the most sophisticated locators and turns
       | them into first-class Ranorex objects; a shortcut jumps from code
       | to the repository and back; duplicate elements are detected on
       | the fly.
        
       | start123 wrote:
       | The registration is buggy - the form is misaligned, at least on
       | Firefox - and why does the Google sign-up say "continue to
       | amazoncognito.com"?
        
         | [deleted]
        
       | [deleted]
        
       | mmckelvy wrote:
       | Looks great. How much easier is this than say, writing E2E tests
       | with Cypress?
        
         | ollerac wrote:
         | This is what my company is currently just getting started with,
         | so I'd be interested in this comparison as well. If Reflect is
         | much better, we still have time to switch.
        
           | tmcneal wrote:
           | Extremely biased of course :) But here's where I see the
           | advantages of Reflect vs. Cypress:
           | 
           | - Cypress has a really nice local development experience that
           | feels at home to front-end devs. I would describe Reflect as
           | a nice remote recording experience that simulates an end-
           | user. So we're kind of attacking the same problem from a
           | different perspective. You can technically record Reflect
           | tests against your local env with ngrok, but Cypress is
           | certainly a better local testing experience. So advantage
           | Cypress here.
           | 
           | - There are actions like interacting with iframes, drag and
           | drop, links that open a new window, and file uploads that
           | range from difficult to almost impossible to do in Cypress.
           | We support these actions out-of-the-box.
           | 
           | - Similarly if you want to do visual testing in Cypress
           | you'll need to integrate with a third-party visual testing
           | tool like Percy or Applitools. We have visual testing support
           | built-in.
           | 
           | - I've seen folks struggle bridging the gap between using
           | Cypress for local testing and actually getting it set up in
           | their build/deployment pipeline. Since we're a fully managed
           | service it's just a single API call to get Reflect in your
           | CI/CD. We also have built-in scheduling so if you just wanted
           | to run your tests at a fixed schedule you don't need to
           | integrate with anything, which I think is a nice way to get
           | going quickly and prove out our tool.
           | 
           | - Because of Cypress's design, it's not so easy to get
           | Cypress tests to run in parallel. This is really important
           | because E2E tests take way longer to run vs. unit and
           | integration tests, and the best lever IMO for reducing this
           | is parallelization. This is also another tripping point for
           | folks getting Cypress set up in CI/CD.
           | 
           | - The final differentiator I'll mention is really the
           | difference between a code-based vs. codeless approach. We're
           | trying to reduce the time it takes to create and maintain
           | tests and we think the way to do that is to basically handle
           | as much as we can automatically on your behalf. Instead of
           | writing code that simulates actions, you just record your
           | actions and we generate the automation for you. So for a flow
           | like adding to cart or user registration, it might take you
           | only a few minutes to set that up in Reflect but it'd be a
           | lot longer to do in Cypress. Certainly as your test suite
           | grows
           | things like reusability become really important, and we
           | support that as well. This also means that non-developers
           | like QA testers w/o dev experience, PMs, designers, and
           | customer support can write and maintain tests.
        
             | mmckelvy wrote:
             | Sounds great. Thanks for the detailed response.
        
             | inglor wrote:
             | > - Because of Cypress's design, it's not so easy to get
             | Cypress tests to run in parallel.
             | 
             | I am not sure why you think Cypress is hard to parallelize
             | but if you don't like their managed service (dashboard) you
             | can use https://github.com/agoldis/sorry-cypress - it's
             | quite possible to do.
             | 
             | (All the rest of what you wrote sounds solid - good luck
             | again :])
        
         | inglor wrote:
         | I wrote an article about Cypress. I was a big fan (gave talks
         | about Cypress at conferences) but am disillusioned. For a very
         | specific use case, Cypress is great and arguably the best
         | choice though:
         | https://www.testim.io/blog/puppeteer-selenium-playwright-cyp...
         | 
         | Even if you use something like Cypress you probably need
         | something like Reflect (or Testim where I work but don't
         | represent - for that matter) - you would just end up writing
         | the framework in-house.
        
       | Trufa wrote:
       | Hey! Product looks great! My main question would be, for a
       | rapidly evolving product would it generate a lot of false
       | positives?
       | 
       | I really enjoy the idea of business/marketing people
       | collaborating on tests. Congrats.
       | 
       | I also really like your business model so will give it a try.
        
         | tmcneal wrote:
         | False positive failures are a really common issue with existing
         | E2E testing tools, so we try to do a number of things to
         | prevent them in Reflect tests:
         | 
         | - We generate multiple selectors for each element you interact
         | with. So if in the future you change a class or id, we'll
         | fallback to other selectors when running your test. We also
         | only choose a selector if (1) it matches a single element in
         | the DOM, and (2) that element is interactable (e.g. not
         | completely hidden), and (3) if it has text associated with it
         | then we'll only choose it if the text captured at recording
         | time matches the text at run time. This helps prevent us from
         | selecting the wrong element when running a test.
         | 
         | - For React apps, we use heuristics to ignore classes
         | generated by CSS-in-JS libraries like Styled Components and
         | Emotion. We also have the ability to target elements based on
         | React component name (it requires a one-time setup on your
         | side to enable this)
         | 
         | - For Visual Testing failures (e.g. you've screenshotted an
         | element and now that element has a different UI) we have a
         | simple workflow to 'Accept Changes' and mark the new UI as the
         | new baseline for tests going forward.
         | 
         | Certainly more to do here but this is one of the key problems
         | we're looking to tackle.
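         | 
         | In sketch form, that first fallback rule set looks something
         | like this (simplified; the helper names are illustrative):
         | 
         |   // Walk the ranked selectors and accept the first one that
         |   // (1) matches exactly one element, (2) is interactable,
         |   // and (3) still has the text captured at recording time.
         |   function resolveTarget(
         |     rankedSelectors: string[],
         |     recordedText: string | null,
         |   ): Element | null {
         |     for (const selector of rankedSelectors) {
         |       const matches = document.querySelectorAll(selector);
         |       if (matches.length !== 1) continue;           // rule 1
         |       const el = matches[0];
         |       if (!isInteractable(el)) continue;            // rule 2
         |       if (recordedText !== null &&
         |           el.textContent?.trim() !== recordedText)  // rule 3
         |         continue;
         |       return el;
         |     }
         |     return null; // no safe match: fail instead of guessing
         |   }
         | 
         |   function isInteractable(el: Element): boolean {
         |     const style = getComputedStyle(el);
         |     return style.display !== "none" &&
         |            style.visibility !== "hidden";
         |   }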
        
           | Trufa wrote:
           | Thank you, there are two things that are either a bug, or
           | hard to figure out.
           | 
           | 1) I can't delete steps
           | 
           | 2) I can't hover.
           | 
           | Great product guys, keep it up!
        
             | fitzn wrote:
             | Thanks for this feedback!
             | 
             | 1) You can delete steps on a recorded test but not during
             | the recording. This is to ensure that we have an exact copy
             | of the initial recording. Thereafter, you can click on the
             | test step and click "Delete" on the middle pane.
             | 
             | 2) These should be captured out of the box. Can you email
             | me your test URL so I can investigate? fitz at Thank you!
        
       | shay_ker wrote:
       | > One feature in beta that we're really excited about: For React
       | apps we have the ability to target React component names as if
       | they were DOM elements (e.g. if you click on a button you might
       | get a selector like "<NotificationPopupMenu> button"). We think
       | this is the best solution for the auto-genned classes problem
       | described in the bullet above, as selectors containing component
       | names should be very stable.
       | 
       | I remember looking into this exact idea. Theoretically, if you're
       | able to capture the React state, and you're working with a "pure"
       | React app, you should be able to auto-gen readable tests from
       | human interaction. And, if you're capturing the state in a
       | granular enough fashion, you should be able to "time-travel", but
       | for non-technical users.
       | 
       | IMO the biggest use case for E2E tests is critical things like
       | auth & checkout. If you're able to auto-gen, maybe you can get
       | even deeper than that.
       | 
       | Congrats on the launch, it looks cool!
        
         | inglor wrote:
         | We have a product that does this (generates E2E tests from user
         | interactions) - it's very tricky. If you just want to cover the
         | auth & checkout flows it's great but if you need to model the
         | more complex and nuanced interactions record + playback is
         | really a great way to go.
        
           | shay_ker wrote:
           | You should be able to hack React to do what I'm suggesting
           | (playbacks), though last time I looked at the source code I
           | got a bit confused and gave up.
           | 
           | It's historically been such a battle to deal with E2E tests
           | and changing code, but if this playback idea works well then
           | the interop should be relatively seamless. It might require
           | an over-engineered React app, though, which is likely the
           | biggest issue.
        