Skip it, then fix it or delete it

Kevin McCarthy
Sep 25, 2019


How we’ve dealt with flakey tests

Flakey tests are inconsistent tests that sometimes pass and sometimes fail. They are far worse than consistently failing tests because they give you bad information.

These first appeared in our CI/CD pipeline, and the general response was to just rerun the tests until they passed. Apparently this is what Google does (although with fast automated re-running and lots of fanciness)!
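If you want to automate that rerunning in an RSpec suite like ours, the rspec-retry gem offers it. A minimal sketch of that setup, not something from our pipeline, and with an arbitrary retry count:

# Assumed setup for the rspec-retry gem (add it to your Gemfile first).
require 'rspec/retry'

RSpec.configure do |config|
  config.verbose_retry = true    # log each retry in the test output
  config.default_retry_count = 3 # rerun each failing example up to 3 times
end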

Skip it

The approach we've taken recently is that the minute we find a flakey test, we skip it by adding an x to the start of the test. The test shows up as skipped in the output, but it does not get run.
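In RSpec, which is what our examples use, that means turning it into xit. A minimal sketch, with the setup and expectation as placeholders:

# `xit` is reported as pending/skipped; the body never runs.
xit "this is a flakey test" do
  do_the_setup
  expect(something).to eq something
end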

Fix it

Next we attempt to fix the test. The technique I've been using is to run the same test multiple times until it fails and then debug the failure. I do this by wrapping it in a context block with a loop.

context "run lots of times" do
10.times do
it "this is a flakey test" do
do_the_setup
expect(something).to eq something end
end
end

This way we can hopefully replicate the flakiness, debug why it's happening, and fix it, either by writing better tests or by refactoring the code to be more testable.
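As a self-contained illustration of why the loop works, here is a hypothetical spec (the DiscountPicker class and its bug are invented for this sketch, not from our codebase) whose hidden randomness survives most single runs but rarely survives ten:

# Hypothetical class: sample can return the same code twice,
# so roughly one run in five picks a duplicate.
class DiscountPicker
  def pick_two(codes)
    [codes.sample, codes.sample]
  end
end

RSpec.describe DiscountPicker do
  context "run lots of times" do
    10.times do
      it "picks two different codes" do
        picks = DiscountPicker.new.pick_two(%w[A B C D E])
        # ~80% pass rate per run, so ten runs usually
        # surface at least one failure.
        expect(picks.uniq.length).to eq 2
      end
    end
  end
end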

Or delete it

Depending on the feature's importance, criticality, and likelihood to change, and therefore its need for tests, we may just delete the test altogether.
