…it’s not a Unit Test

Riffing on Jeff Foxworthy’s “…you might be a redneck”, I present “…it’s not a Unit Test”:

  • If it requires manual setup… it’s not a Unit Test.
  • If it requires manual intervention… it’s not a Unit Test.
  • If it requires a network connection… it’s not a Unit Test.
  • If it requires a container… it’s not a Unit Test.
  • If it requires a database… it’s not a Unit Test.
  • If it accesses a file system… it’s not a Unit Test.
  • If it requires a service to be available… it’s not a Unit Test.
  • If it takes longer than a millisecond to run… it’s not a Unit Test.
  • If it lacks assertions… it’s not a Unit Test.
  • If it breaks on certain dates or at certain times… it’s not a Unit Test. (See the clock sketch after this list.)
  • If it requires a UI… it’s not a Unit Test.
  • If it fails intermittently for an unknown reason… it’s not a Unit Test.
  • If it passes intermittently for an unknown reason… it’s not a Unit Test.
  • If it fails when you look at it funny… it’s not a Unit Test.
  • If the mock setup code dwarfs the actual test code… it’s not a Unit Test.
  • If you have to mock beyond immediate dependencies… it’s not a Unit Test.
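
To make the date/time item concrete, here’s a minimal sketch (Python, with made-up names) of the usual fix: inject the clock rather than reading the system clock inside the code under test, so a test can pin “now” to a fixed instant and pass on any date, at any time.

    import unittest
    from datetime import datetime

    # Hypothetical code under test: the clock is a constructor argument
    # that defaults to the real system clock in production.
    class InvoiceService:
        def __init__(self, clock=datetime.now):
            self._clock = clock

        def is_overdue(self, due_date):
            # Uses the injected clock, never datetime.now() directly.
            return self._clock() > due_date

    class InvoiceServiceTest(unittest.TestCase):
        def test_overdue_is_deterministic(self):
            # Pin "now" so the result never depends on when the test runs.
            service = InvoiceService(clock=lambda: datetime(2011, 6, 1, 12, 0))
            self.assertTrue(service.is_overdue(datetime(2011, 5, 31)))
            self.assertFalse(service.is_overdue(datetime(2011, 6, 2)))

    if __name__ == "__main__":
        unittest.main()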

I think that’s a pretty good start. What have I missed? Are there any that make you cringe? Let your voice be heard: leave a comment!

13 Comments

  1. Hmm… I seem to have had this conversation today… forwarding the link to my project’s new Dev manager and the QA guy.

    Though I would quibble and add “not a good” in front of the “mock setup” and “millisecond” items. Given that I’ve worked in a lot of crappy legacy code, sometimes you gotta do what you gotta do to get ANY unit test running (since there usually aren’t any tests). And sometimes that requires pages of mock setup before you can safely start refactoring.

    Oh, and if it doesn’t have any assertions it’s not a test of ANY kind.

    • I completely agree, especially on the “no assertions means it’s not a test” point! I’m perfectly fine with violating any of these as a short-term approach to getting ANY/SOME test coverage so that refactoring can be done safely. Sadly, I find most often that the refactoring is put off indefinitely.

      So “do what ya gotta do”, but recognize it as technical debt and don’t let that debt continue to grow!

  2. Overall pretty good. I have a bit of trouble with the second-to-last item (mock setup code). It may be a good smell test for the code under test – but it doesn’t reveal the nature of the test itself. A method under test which has a complex relationship with underlying services could require quite a bit of mocking to get started. What’s more, the amount of code required to mock the environment could be quite large, and yet hidden away in shared helpers, testing frameworks, etc. The amount of code in the test could be a mark of an immature test framework and reveal nothing about the unit-ness of the test itself.

    • Hi Jeff!

      Thanks for the comment! When working with legacy code there is no doubt that the state of the code under test makes limiting mocking difficult. I consider that to be a smell: something seems wrong. In the case of extensive mocking being required to get a test up and running, my experience has always been that there are major design problems in the code: encapsulation has been violated, a class is trying to do too much (violates Single Responsibility Principle), etc.

      Sometimes, though, I find that the test is unnecessarily complex. I’ve seen this in codebases where a mocking framework has been used for a while and the framework ends up getting abused. So, while a mocking framework can be a very good thing, it can also be an enabler for bad test design.

      So while I agree that sometimes extensive setup is required, I would challenge you to consider that as technical debt and look for refactoring opportunities to improve the design of both the code under test and the tests themselves.
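
      To illustrate (a made-up sketch, not code from any project in particular): when a class reaches through its collaborator, the test is forced to mock two levels deep; putting the behavior behind the immediate dependency lets a single mock suffice and leaves something real to assert on.

          import unittest
          from unittest import mock

          # Smell: submit() digs into the gateway's internals, so a test
          # would need a mock gateway holding a mock connection - two
          # levels of mocking for one call.
          class OrderProcessorBefore:
              def __init__(self, gateway):
                  self._gateway = gateway

              def submit(self, order):
                  return self._gateway.connection.send(order)

          # After refactoring: only the immediate collaborator is touched,
          # and the class has behavior of its own worth testing.
          class OrderProcessor:
              def __init__(self, gateway):
                  self._gateway = gateway

              def submit(self, order):
                  if not order.get("items"):
                      raise ValueError("empty order")
                  return self._gateway.send(order)

          class OrderProcessorTest(unittest.TestCase):
              def test_rejects_empty_orders_without_touching_the_gateway(self):
                  # One mock, one level deep, and a real behavior to verify.
                  gateway = mock.Mock()
                  processor = OrderProcessor(gateway)
                  with self.assertRaises(ValueError):
                      processor.submit({"items": []})
                  gateway.send.assert_not_called()

          if __name__ == "__main__":
              unittest.main()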

  3. I find it a lot more relevant and useful to discuss and understand WHAT functionality the tests are testing and how long they take to run, rather than what we call them (unit, integration, popcorn, bacon). I have been finding a lot of tests that ARE called “unit tests” under your criteria above, but they don’t really test anything. I call those tests Tautological, and the practice of writing them TTDD – Tautological TDD.

    When we educate people that unit tests have to follow all those rules, the tests end up becoming tautological, because everyone tries to isolate every single dependency around a single class.
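
    Here’s a quick sketch of what I mean (Python, made-up names): the test stubs the repository and then asserts the service returns the stubbed value. The assertion merely restates the mock setup – it proves only that the mock returns what it was told to return.

        import unittest
        from unittest import mock

        class PriceService:
            def __init__(self, repository):
                self._repository = repository

            def price_for(self, sku):
                return self._repository.find_price(sku)

        class TautologicalTest(unittest.TestCase):
            def test_price_for(self):
                # The assertion just mirrors the stub: nothing about the
                # real behavior of the system is verified here.
                repository = mock.Mock()
                repository.find_price.return_value = 9.99
                service = PriceService(repository)
                self.assertEqual(9.99, service.price_for("ABC-1"))

        if __name__ == "__main__":
            unittest.main()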

    I’d appreciate some feedback and your thoughts on this as well… Cheers

    http://fabiopereira.me/blog/2010/05/27/ttdd-tautological-test-driven-development-anti-pattern/

    • Thanks for the comment! Looking at your post now…

      Yes, what we are testing and how long the tests take to run are valuable foci. I compiled this list from coaching many teams on good testing practices, not because I want teams to focus on them, but because I find it helpful to use ONE or TWO of these statements to jar someone’s thinking. After many years of coaching, I thought it would be fun to take a light-hearted approach to thinking about Unit Tests.

    • Very true. These are mostly negative descriptions of integration tests. However, I find it useful when working with new teams to give them an idea of what a Unit Test is NOT… and I just finally decided to take a light-hearted approach to compiling a list.

  4. If the test run touches half of all classes of your application … it’s not a unit test
    If either all tests pass or nearly half of them fail when you change one line of code – INAUT

  5. (You need to start numbering these for easy referral, e.g., this test INAUT#10.)

    I’d generalize “If the mock setup code dwarfs the actual test code… it’s not a Unit Test.” to simply “If the setup code is bigger than the test code… INAUT”. AKA, the Test Setup Sermon.

Comments are closed.