
Automated Testing for Quality and Agility: Lessons from Detox and testRigor

Taylor Smith

Updated: Dec 15, 2024

UI testing plays a critical role in ensuring apps function smoothly and deliver a seamless user experience. As someone relatively new to automated testing, I was both excited and nervous to dive into tools like Detox and testRigor. This term, I had the chance to explore these frameworks in real-world scenarios, testing across mobile (iOS and Android) and web platforms. The experience proved to be a blend of challenges, surprises, and valuable lessons that broadened my perspective on the diverse approaches required for different platforms.


Getting Started

When I first set up Detox, it was a bit intimidating. With no prior testing experience, I found myself learning as I went—configuring simulators, managing dependencies, and figuring out how to even write and run tests. There was a lot of trial and error (and a lot of reading documentation)!
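
For context, a typical Detox setup centers on a configuration file along these lines. The sketch below is illustrative only; the app name, build paths, and simulator model are placeholders rather than our project's actual values.

/** Illustrative .detoxrc.js – app name, paths, and simulator are placeholders. */
module.exports = {
  testRunner: { args: { config: 'e2e/jest.config.js' } },
  apps: {
    'ios.debug': {
      type: 'ios.app',
      binaryPath: 'ios/build/Build/Products/Debug-iphonesimulator/MyApp.app',
      build: 'xcodebuild -workspace ios/MyApp.xcworkspace -scheme MyApp -configuration Debug -sdk iphonesimulator',
    },
  },
  devices: {
    simulator: { type: 'ios.simulator', device: { type: 'iPhone 15' } },
  },
  configurations: {
    'ios.sim.debug': { device: 'simulator', app: 'ios.debug' },
  },
};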


On the other hand, I found that setting up testRigor was much more beginner-friendly. All I had to do was provide the web link or APK/AAB file to be tested, and I could start writing the tests using plain English right away. The platform’s AI handled much of the technical complexity behind the scenes, which was a refreshing change of pace—though by this point, I was starting to feel like a bit of a set-up pro myself.


Writing and Debugging

Writing tests in Detox required a solid understanding of the app’s architecture. Each UI element needed to be identifiable by Detox (through test IDs or accessibility labels), and creating tests often involved tracing the application logic through the code. It wasn’t just about writing tests—it was about learning to think like the app itself. How does the app handle input? What might cause delays or errors? I found myself constantly analyzing how the pieces fit together, which deepened my appreciation for the development process. By the end, it felt genuinely rewarding to understand the app on such a detailed level.
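
To make that concrete, here is a minimal sketch of what a Detox login test looks like. The test IDs and credentials are hypothetical placeholders, not the project's real identifiers.

describe('Login flow', () => {
  beforeAll(async () => {
    await device.launchApp({ newInstance: true });
  });

  it('logs in and shows the home screen', async () => {
    // Every element touched here has to expose a testID (or accessibility label)
    await element(by.id('usernameInput')).typeText('testuser');
    await element(by.id('passwordInput')).typeText('secret123');
    await element(by.id('loginButton')).tap();
    await expect(element(by.id('homeScreen'))).toBeVisible();
  });
});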


During a six-week period, I developed approximately 33 test cases covering 8 distinct iOS mobile flows. Some tests were particularly complex, such as simulating backend functionality for surveys and notifications or testing how the app handled poor network conditions. Debugging, while sometimes frustrating, became a bit like solving puzzles. Each error log was a clue leading me closer to a solution, and I loved the sense of accomplishment when a stubborn test finally passed.
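
The backend simulation itself was project-specific, but a common Detox pattern for this kind of test is to pass launch arguments that the app reads at startup to switch into a mocked mode. A rough sketch under that assumption (the flag and element names are hypothetical):

// Hypothetical: the app checks a launch argument and serves canned survey data.
it('shows a survey served by the mocked backend', async () => {
  await device.launchApp({
    newInstance: true,
    launchArgs: { mockBackend: 'true' }, // assumed flag, read by the app on launch
  });
  await element(by.id('surveysTab')).tap();
  await expect(element(by.text('Weekly Check-in'))).toBeVisible();
});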


With testRigor, writing tests was much faster. Within four weeks, I developed 89 test cases covering 9 web app flows, plus 15 Android mobile tests across 4 flows in just a few days. Writing tests in plain English made it easy to document and share test scenarios, which was ideal for demonstrations and collaboration. Testing a successful login, for example, took just four lines:

1. enter username into username
2. enter password into password
3. click login
4. check that home page loads


That said, testRigor’s simplicity had some limitations. The AI occasionally had difficulty with repeated elements or more complex app states. In such situations, I had to get creative, either by finding workarounds or by employing compatible JavaScript commands.


Stability and Flakiness

Mobile testing often suffers from issues like race conditions or timing conflicts, which can cause tests to fail unpredictably. In my iOS tests, resolving these issues typically involved manual adjustments such as setting timeouts or verifying an element’s presence before interacting with it. These tweaks required a lot of patience, but they also gave me an appreciation for the level of precision and reliability required in mobile testing.
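
In Detox, those adjustments typically show up as explicit waitFor calls; something along these lines (the test ID and timeout value are illustrative):

// Wait up to 10 seconds for the element to appear before interacting with it,
// rather than assuming it is already on screen.
await waitFor(element(by.id('submitButton')))
  .toBeVisible()
  .withTimeout(10000);
await element(by.id('submitButton')).tap();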


In testRigor, the adjustment process was much simpler. I could use plain English instructions like “Wait 30 seconds up to 4 times until the page contains 'Success!'” to manage polling intervals and timeouts, rather than simply applying a 30-second timeout. I also appreciated the built-in history of test runs, which made it easy to compare the last successful run with the one that failed.


When it came to web testing, testRigor was typically more stable. Its visual-based sync system reduced concerns about backend delays or timing issues. That said, the AI sometimes had its quirks, interpreting elements differently across runs for no obvious reason. When this happened, adding more instructions or wait conditions usually resolved the problem.


Final Reflection

Working with Detox on the iOS app gave me a deep dive into not just UI testing but mobile development as a whole. It taught me how apps are built, how components connect, and how to troubleshoot at a granular level. By the end of the project, I had developed a profound familiarity with the application's internal mechanisms, making the experience especially fulfilling.


At the same time, using testRigor highlighted the power of abstraction and AI in testing. It’s exciting to think that someone with no technical background could pick up testRigor and start writing meaningful tests almost immediately. The speed and simplicity it offers have huge potential to make software testing less tedious and more accessible.


It was incredibly exciting to contribute to this phase of the project and gain hands-on experience with this aspect of product development. Diving into different frameworks and technologies expanded my understanding of UI testing and left me with a broader appreciation for the tools that make it possible. Both Detox and testRigor brought unique lessons, and I’m looking forward to applying this knowledge in future projects.
