Leak detection in UI tests
You can easily integrate LeakCanary into UI tests by adding a single call that runs when the test completes:
LeakAssertions.assertNoLeaks()
If you are using Espresso, launch leak detection with LeakCanary after the test completes, in a method annotated with @After:
@After
fun after() {
    // Launch leak detection
    LeakAssertions.assertNoLeaks()
}
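For context, here is a minimal, self-contained sketch of such a test class. It assumes the leakcanary-android-instrumentation dependency, which provides LeakAssertions; ExampleActivity is a hypothetical activity standing in for whatever your test exercises:

// Assumed dependency in the app module:
// androidTestImplementation "com.squareup.leakcanary:leakcanary-android-instrumentation:<version>"
import androidx.test.core.app.ActivityScenario
import androidx.test.ext.junit.runners.AndroidJUnit4
import leakcanary.LeakAssertions
import org.junit.After
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class ExampleLeakTest {

    @Test
    fun openAndCloseScreen() {
        // ExampleActivity is a placeholder for any screen under test
        ActivityScenario.launch(ExampleActivity::class.java).close()
    }

    @After
    fun after() {
        // Dumps the heap and fails the test if retained objects are detected
        LeakAssertions.assertNoLeaks()
    }
}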
Our company uses Kaspresso, and we created a wrapper around its Rule to run our own code in the init and after sections of all tests. Here is how we implemented it:
class LeakKaspressoRule(
    testClassName: String
) : TestRule {

    val kaspressoRule = KaspressoRule(testClassName)

    override fun apply(base: Statement, description: Description): Statement {
        return kaspressoRule.apply(base, description)
    }

    fun before(actions: BaseTestContext.() -> Unit) = After(kaspressoRule.before {
        // own code
        actions(this)
    })

    class After(
        private val after: AfterTestSection<Unit, Unit>
    ) {
        fun after(actions: BaseTestContext.() -> Unit) = Init(after.after {
            // own code
            actions(this)
        })
    }

    class Init(
        private val init: InitSection<Unit, Unit>
    ) {
        fun run(steps: TestContext<Unit>.() -> Unit) = init.run {
            steps(this)
            // own code
        }
    }
}
Then add leak detection to the wrapper's After class:
after.after {
    LeakAssertions.assertNoLeaks()
    actions(this)
}
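For clarity, here is that body in context, i.e. the After class from the rule above with the change applied:

class After(
    private val after: AfterTestSection<Unit, Unit>
) {
    fun after(actions: BaseTestContext.() -> Unit) = Init(after.after {
        // Launch leak detection first, then the test's own teardown actions
        LeakAssertions.assertNoLeaks()
        actions(this)
    })
}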
And that’s basically it. Let’s look at an example of such a test.
class LeakAuthUiTest {

    @get:Rule
    val leakKaspressoRule = LeakKaspressoRule(javaClass.simpleName)

    @Test
    fun testLeakOnAuth() {
        leakKaspressoRule.before {
        }.after {
        }.run {
            step("Open user profile") { ... }
            step("Open auth") { ... }
            step("Open registration") { ... }
            step("Open restore password") { ... }
        }
    }
}
As described above, the test simply opens a series of screens, and LeakKaspressoRule runs memory leak detection once the test finishes.
Now let’s talk about running and maintaining these tests.
We launched our memory leak tests only when we needed them, which was not often: usually at most once a month.
Still, this approach generally proved effective. We wrote tests that opened a stack of screens following a specific scenario, ran these specialized tests, and periodically discovered leaks. Tasks were then created to address those leaks, and the best part was that the process did not demand much time from developers or testers.
But… over time, supporting such tests became a problem. The application changed constantly, and the tests had to be adjusted accordingly, especially when changes affected multiple screens.
Since the tests were not launched consistently, developers broke the leak detection tests without realizing it. As a result, we had to periodically create tasks to fix these tests. Nobody wanted to write new tests of this kind, as they required painful maintenance. This situation persisted for a while.
To be fair, detecting leaks manually still required significantly more effort than this method.
We decided it was time for a change.
By that time, the number of regular UI tests had exceeded a hundred, and they covered far more screens and user scenarios than the specialized leak tests did.
Since we no longer needed the specialized tests, we decided to try a different approach.
Instead of keeping specialized leak tests in the project, we added an option to run leak detection at the end of every regular UI test.
Importantly, this is only an option. Leak detection significantly increases testing time, and these tests need to run quickly, because we run UI tests before merging a feature branch into the main branch in Git.
On our CI, we implemented the following logic:
- When UI tests run before a merge, they run normally, without leak detection. Only the tests related to the code changed in the Git branch are executed.
- Before a Release Candidate build, all UI tests run with leak detection enabled.
Implementing this behavior is easy: add a flag and pass it as an argument to the TestRunner. If the `isLeakTest` flag is set to true, the tests run in leak detection mode.
The value of the flag is read from the TestRunner arguments and then written to a static variable.
class CianUiTestRunner : AllureAndroidJUnitRunner() {

    override fun onCreate(arguments: Bundle?) {
        // Read the flag before handing control to the base runner
        IS_LEAK_TEST = arguments?.getString("isLeakTest") == "true"
        super.onCreate(arguments)
    }
}
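For completeness, here is a minimal sketch of where that static variable could live and how the flag can be passed from CI. The holder object name is a hypothetical example; the Gradle property is the standard AGP mechanism for passing instrumentation runner arguments:

// Hypothetical holder for the flag: CianUiTestRunner writes to it,
// tests read it (e.g. via a top-level import)
object TestRunnerArguments {
    var IS_LEAK_TEST: Boolean = false
}

// On CI, the argument can be supplied on the command line, e.g.:
// ./gradlew connectedDebugAndroidTest \
//     -Pandroid.testInstrumentationRunnerArguments.isLeakTest=true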
In the Espresso method annotated with @After, we launch LeakCanary's leak detection only if the `IS_LEAK_TEST` flag is true.
@After
fun after() {
    if (IS_LEAK_TEST) {
        // Launch leak detection
        LeakAssertions.assertNoLeaks()
    }
}
Implementing such logic in Kaspresso is not too difficult.
after.after {
    if (IS_LEAK_TEST) {
        LeakAssertions.assertNoLeaks()
    }
    actions(this)
}
That’s it. The changes themselves may seem small, especially if you don’t count the hundreds of tests written before them.
The scheme works: we have achieved fully automated leak detection on Release Candidate builds, which yields tasks to fix the discovered leaks. Leak detection now requires minimal developer involvement, apart from filing tasks for the detected leaks.
We are currently satisfied with it.
In fact, both options have their own benefits:
- The first option is easy to integrate but difficult to support. It suits projects with low UI test coverage.
- The second option is difficult to integrate but easy to support. It suits projects with high UI test coverage, which not everyone has, and which is not always essential.