Tolerance Metrics in Test Automation
Web applications are not built perfectly. Even when our tests pass, that does not necessarily mean the application is performing well. There are secondary indicators of application stability which, if not properly observed, can lead to development nightmares. More precisely, this presentation is about how to cover discovered issues that manifest as slow responses, flaky page loading, browser console error logs, and similar symptoms. The challenge is how to observe these issues and report on them properly: how to make sure the business understands what is going on and can prioritize work that, if left undone, could lead to major side effects in the production environment.

Addressing this problem starts with adding tracking mechanisms that produce logs or traces of these kinds of events. Then comes analysis, setting baselines, and educating stakeholders on what this means for them in the long run.

The takeaway is to be proactive about issues that don't appear to impact user experience directly: have the means to measure them, and use opportunities to coach business owners about those issues and their implications. Perhaps the biggest takeaway is that, while doing all of this, we as testers should not drive development priorities, but rather influence them.
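To make the idea of tracking and baselines concrete, here is a minimal sketch of a tolerance tracker a test suite might feed with secondary-indicator events. The event names, baseline values, and the `ToleranceTracker` class itself are hypothetical illustrations, not part of any specific framework:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ToleranceTracker:
    """Collects secondary-indicator events observed during a test run
    and flags the ones that exceed an agreed baseline."""
    # Baseline: maximum tolerated occurrences per event type before
    # the run is flagged for review (example values, to be agreed
    # with stakeholders per application).
    baselines: dict = field(default_factory=lambda: {
        "console_error": 0,   # any console error is worth a look
        "slow_response": 3,   # a few slow responses may be tolerable
        "flaky_load": 1,
    })
    events: Counter = field(default_factory=Counter)

    def record(self, event_type: str) -> None:
        """Called by test hooks whenever an indicator event is observed."""
        self.events[event_type] += 1

    def violations(self) -> dict:
        """Event types whose counts exceed the agreed baseline."""
        return {
            kind: count
            for kind, count in self.events.items()
            if count > self.baselines.get(kind, 0)
        }

tracker = ToleranceTracker()
tracker.record("slow_response")
tracker.record("console_error")
# Only console_error exceeds its baseline of 0; one slow response
# is still within tolerance.
print(tracker.violations())
```

A report built from `violations()` can then be shared with business owners to show how often these issues occur and whether they are trending toward the agreed limits, which supports the coaching conversation without dictating priorities.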