Adam Tuttle

Testing Media Roundup #1

The project I was using to push myself forward on testing stuff? I wrapped that up at the end of last week. It's by no means perfect, but it's better than it would have been without tests, and it works, and it's on its way to production. I would love to get the opportunity to go back and make the tests themselves better, cleaner, and smarter. I've already learned a few things that I would do differently next time.

But in the meantime, my ongoing projects aren't covered by automated tests (for *reasons*) and life must go on. Momentum takes lots of different forms. So instead of pondering testing while working, this week I tried to consume a bunch of testing content from others. My podcast cohosts suggested that I should share that content here, for people who might be interested in it but who wouldn't have otherwise seen it. I'm happy to oblige. Maybe this will become a regular thing? 🤷‍♂️

Are we all doing TDD wrong?

First, here's a TDD conference presentation video I enjoyed. The speaker ran out of time towards the end, just when I thought he was getting to the good stuff, so I think this could have benefited from another 15 minutes, but it definitely succeeded at churning something up inside me. I'll need to watch it again and figure out how I should be changing my workflows.

In particular I was struck by the way he stresses the red-green-refactor ("RGR") cycle and the specific benefits available from doing it right. I think until now I was treating RGR as a quick checkbox so I could say I was doing TDD, rather than a process that delivers benefits in its own right.
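To make the cycle concrete, here's a minimal sketch of one RGR loop. The `is_leap_year` example and its tests are my own invention for illustration, not something from the talk:

```python
# A toy red-green-refactor loop (hypothetical example, not from the presentation).
#
# RED: write a failing test first (the function doesn't exist yet).
# GREEN: write the simplest code that makes it pass.
# REFACTOR: clean up the implementation while the tests stay green.

def is_leap_year(year: int) -> bool:
    # Refactored form: divisible by 4, except century years,
    # unless also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The tests that drove each step of the cycle:
assert is_leap_year(2024)        # drove the first "green": year % 4 == 0
assert not is_leap_year(1900)    # forced the century-year exception
assert is_leap_year(2000)        # forced the 400-year exception
assert not is_leap_year(2023)
```

The point of the cycle is that each new red test forces exactly one change, so the implementation never grows beyond what the tests demand.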

What is an "integration" test, anyway?

And perhaps more interestingly, is it different depending on the types of systems you're testing?

I'm a big fan of Kent C. Dodds and his blog, his courses, and his open source work. Some of the stuff he's been teaching recently came under scrutiny, first from Tim Bray and then from Martin Fowler. Kent responded with an explanation of the way he thinks about this problem, and I think he does a great job turning the conversation toward something useful.

At the end of the day, does it matter if we disagree on what makes something an "integration test"? Does it matter if I have too many "integration" tests in your opinion? I think not. Anything that encourages people to write more tests, and helps them figure out how to do that well, is just fine with me.
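For what it's worth, the distinction people argue about can be sketched in a few lines. In this hypothetical example (mine, not Kent's or Martin's), the "unit" test isolates one function, while the "integration" test exercises two real pieces working together:

```python
# Hypothetical example: a tiny "store then report" pipeline.

class InMemoryDB:
    """A trivial stand-in for a storage layer."""
    def __init__(self):
        self.rows = {}
    def save(self, key, value):
        self.rows[key] = value
    def load(self, key):
        return self.rows[key]

def format_report(name, score):
    # Pure function: easy to test in isolation.
    return f"{name}: {score} points"

def save_and_report(db, name, score):
    # Coordinates the storage layer and the formatter.
    db.save(name, score)
    return format_report(name, db.load(name))

# "Unit" test: format_report alone, no collaborators involved.
assert format_report("Ada", 3) == "Ada: 3 points"

# "Integration" test: storage and formatting working together.
db = InMemoryDB()
assert save_and_report(db, "Ada", 3) == "Ada: 3 points"
assert db.load("Ada") == 3
```

Where exactly the line falls (is an in-memory fake still a "unit"?) is precisely what the debate is about, which is part of why I don't think the label matters much.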

I got the book

From what I understand, if TDD has a Bible it is Test-Driven Development: By Example, by Kent Beck. So I bought myself a copy. I haven't even cracked the cover open yet, but I'm trying to take this aspect of my professional development seriously, so I'm quite looking forward to better understanding TDD. I don't know about you, but I have a really bad habit of hearing a few people talk about a concept (like TDD) and then fooling myself into believing that I understand it enough to do it, without any sort of training, formal or otherwise.

It's time to admit to myself that I don't actually know much about TDD, and that I could benefit from learning it from the TDD Bible. And then, you know... Do it.