I’m like a test unitarian. Unit tests? Great. Integration tests? Awesome. End-to-end tests? If you’re into that kind of thing, go for it. Coverage of lines of code doesn’t matter; coverage of critical business functions does. I think TDD can be a cult, but writing software that way for a little while is a good training exercise.

I’m a senior engineer at a small startup. We need to move fast and ship new features quickly. We’ve got CI/CD running mocked unit tests, integration tests, and end-to-end tests, with patterns and tooling for each.
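
To give a sense of the unit-test pattern, here’s a rough sketch (pytest-style; all the names here are made up):

    # Illustrative only: a unit test that mocks an external dependency.
    # NotificationService and the email client are hypothetical names.
    from unittest.mock import Mock

    class NotificationService:
        def __init__(self, email_client):
            self.email_client = email_client

        def notify(self, user, message):
            # Business rule under test: only active users get emailed.
            if user["active"]:
                self.email_client.send(user["email"], message)

    def test_inactive_users_are_not_emailed():
        email_client = Mock()
        service = NotificationService(email_client)
        service.notify({"active": False, "email": "a@example.com"}, "hi")
        email_client.send.assert_not_called()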

I have the CTO’s support for adding more testing, I’m able to use tests to cover bugs and regressions as they come up, and there’s solid testing on a few critical user-path features. However, I get resistance from the team when I push for enough testing to prevent regressions going forward.

The resistance usually runs along these lines:

  • You shouldn’t have to refactor to test something (see the sketch after this list)
  • We shouldn’t use mocks; only integration testing works.
    • Repeat for test types N and M
  • We can’t test yet; we’re going to make changes soon.
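
For context on the first point: the refactor being objected to is usually just introducing a seam so a collaborator can be swapped out in a test. A minimal sketch, again with made-up names:

    # The gateway is injected rather than constructed inside the class,
    # so a test can substitute a fake without mocks or a live service.
    class Checkout:
        def __init__(self, gateway):
            self.gateway = gateway

        def purchase(self, amount):
            if amount <= 0:
                raise ValueError("amount must be positive")
            return self.gateway.charge(amount)

    class FakeGateway:
        def __init__(self):
            self.charged = []

        def charge(self, amount):
            self.charged.append(amount)
            return "ok"

    def test_purchase_charges_gateway():
        gateway = FakeGateway()
        assert Checkout(gateway).purchase(42) == "ok"
        assert gateway.charged == [42]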

How can I convince the team that the tools available to them will help, improving their productivity and cutting down on the time spent firefighting?

  • mqn
    1 year ago

    If you can measure regressions in some way, it would help quantify the scale of the problem and give the team something to visibly work towards.

    For example: the number of automated error reports (from error tracking like Sentry), the number of issues/bug tickets created manually, or the number of PRs associated with fixing regressions (tagged after the fact).
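
    If you’re on GitHub, a rough sketch for pulling that last number (the repo name and the “regression” label are assumptions, not something your setup necessarily has):

        # Rough sketch: count merged PRs carrying a "regression" label.
        # Repo name and label name are placeholders; adjust to your setup.
        import requests

        def count_regression_prs(repo):
            resp = requests.get(
                "https://api.github.com/search/issues",
                params={"q": f"repo:{repo} is:pr is:merged label:regression"},
                headers={"Accept": "application/vnd.github+json"},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json()["total_count"]

        print(count_regression_prs("your-org/your-repo"))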

    Watching these numbers go down is satisfying.

    The other thing I’d do is try to improve the tooling around testing to reduce friction when writing tests.
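
    For example, a shared pytest fixture so each test doesn’t rebuild its own setup; a minimal sketch with a made-up user shape:

        # A shared factory fixture keeps individual tests short.
        import pytest

        @pytest.fixture
        def make_user():
            def _make(active=True, email="user@example.com"):
                return {"active": active, "email": email}
            return _make

        def test_inactive_user(make_user):
            user = make_user(active=False)
            assert user["active"] is False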

    Are there no consequences for shipping buggy things, though? No grumpy customers or internal users? I take pride in stuff that works well the first time.

    • @Reader9@programming.dev
      1 year ago

      This is a great suggestion because it focuses directly on tracking the outcome (did the software work?) and it gives a fair chance to the folks who don’t want to test - maybe their code really is perfect!

      Another similar metric I would add is the number of rollbacks of newly released code, if the CD system supports rollbacks via a method like canary or blue-green rollouts.
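
      A toy sketch of tallying that, assuming a made-up event shape (adapt to whatever the CD system actually emits):

          # Count rollback events from a deployment history.
          events = [
              {"version": "1.4.0", "action": "deploy"},
              {"version": "1.4.0", "action": "rollback"},
              {"version": "1.4.1", "action": "deploy"},
          ]

          rollbacks = sum(1 for e in events if e["action"] == "rollback")
          print(f"rollbacks this period: {rollbacks}")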