I saw a recommendation the other day that unit tests should be deleted in favor of integration tests, since integration tests give more coverage for less effort and unit tests can impede the speed of future changes.

Part of the argument relied on the metaphor of scaffolding being removed once a building is constructed. On the surface that seems reasonable - once the scaffold’s utility has ended, it can be removed so as not to obscure the construction or interfere with the use of the building. In contrast, the “presence” of unit tests or integration tests in the source repo does not (generally) interfere at all with the end product packaged as a jar file, whl package, executable, or whatever. By this measure, a scaffold that didn’t interfere at all with the appearance or use of the building might be left in place to ensure future access for maintenance and repair - which is exactly what we do with unit and integration tests!

As with most things in software development, I think the overall answer to the question “is it a good idea to rely solely on integration tests once development is completed?” is: it depends.


In favor of the advice:

  • You absolutely do get a lot of “coverage value” from end-to-end tests.
  • Assuming that integration tests alone give exactly the same coverage as the unit tests did, eliminating unit tests may remove layers of test duplication that can impede future code changes.


Against the advice:

  • In certain edge cases it can be prohibitive or impossible to have comprehensive integration tests due to intrinsic limitations of the system being tested.
  • In certain edge cases integration tests may require manual steps. In that case my preference would be to maintain comprehensive automated tests, even if they are unit tests, because manual testing is an unacceptable compromise: developers can no longer work with confidence by constantly running tests to check their work.
  • In my experience those “redundant” layers of testing in unit tests are invaluable when making code changes, because they highlight some overlooked corner that needs attention as a result of the change. But I’ll also accept that this might be a code smell that my integration tests are not sufficiently complete.
  • To confidently make a change you’ll probably have to restore that unit test scaffolding, and now you have the problem of knowing where to correctly put the “new” tests - you are bound to sometimes make the wrong choice, leaving some area improperly tested.
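As a concrete illustration of the “overlooked corner” point, here is a minimal sketch (the helper and its behavior are entirely hypothetical) of the kind of unit test that pins down edge cases an end-to-end test might never exercise:

```python
# Hypothetical helper: normalize a user-supplied identifier.
def normalize_id(raw: str) -> str:
    """Trim surrounding whitespace, lowercase, and collapse internal spaces."""
    return " ".join(raw.strip().lower().split())

# "Redundant" unit tests that pin the corner cases directly; an integration
# test that only ever submits well-formed identifiers would not touch these.
def test_normalize_id_edge_cases():
    assert normalize_id("  Abc ") == "abc"   # surrounding whitespace
    assert normalize_id("A  B") == "a b"     # internal runs of spaces
    assert normalize_id("") == ""            # empty input must not raise

test_normalize_id_edge_cases()
```

If someone later changes normalize_id, these assertions immediately flag the corners that need a second look - exactly the scaffolding value that disappears when only coarse-grained tests remain.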

If you are really determined to follow this advice, then I suggest you pay careful attention to the following:

  • Where integration tests cannot be implemented for some reason, take note of the areas of code they leave uncovered and ensure there are sufficient unit tests to cover those areas.
  • Ensure that your integration tests (and unit tests) are fully automated.
  • Over the life of the project, keep an eye on how the tests are changing and confirm that coverage levels remain consistently high.
  • Keep an eye on how many errors propagate through to production - a growing error rate may be another smell that the tests are not keeping up with changes in your code.
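That last point can be watched mechanically. A minimal sketch (the release names and defect counts here are invented for illustration) of checking whether defects escaping to production are trending upward:

```python
# Hypothetical counts of defects that escaped to production, per release.
escaped_defects = {"v1.0": 2, "v1.1": 2, "v1.2": 5, "v1.3": 8}

def is_trending_up(counts_by_release: dict) -> bool:
    """True if the average escaped-defect count rose in the newer releases."""
    counts = list(counts_by_release.values())
    mid = len(counts) // 2
    if mid == 0:
        return False  # not enough history to call a trend
    older, newer = counts[:mid], counts[mid:]
    return sum(newer) / len(newer) > sum(older) / len(older)

print(is_trending_up(escaped_defects))  # → True: a rising trend worth investigating
```

However the numbers are gathered (bug tracker, incident reports), the point is to turn “are tests keeping up?” into something you actually measure rather than a feeling.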