As I continue to catch up on delinquent and neglected tasks during the inter-semester break, I’ve started porting the software from last year’s piClinic Console to make it ready for prime time. I don’t want to have any prototype code in the software that I’ll be leaving in the clinics this coming summer!
So, module by module, I’m reviewing the code and tightening all the loose screws. To help me along the way, I’m developing automated tests, which is something I haven’t done for quite a while.
The good news about automated tests is they find bugs. The bad news is they find bugs (a lot of bugs, especially as I get things off the ground). The code, however, is noticeably more solid as a result of all this automated testing, and I no longer have to wonder whether the code can handle this case or that, because I'll have a test for that!
With testing, I’m getting to know the joy that comes with making a change to the code and not breaking any of the previous tests and the excitement of having the new features work the first time!
I’m also learning to live with the pain of troubleshooting a failed test. At any point in the test-and-development cycle, a test can fail because:
- The test is broken, which happens when I didn’t update a test to accommodate a change in the code.
- The code is broken, which happens (occasionally).
- The environment is broken. Some tests work only in a specific context, such as with a certain user account or after a previous test has passed (the sketch after this list shows one such case).
- Cosmic rays. Sometimes they just fail.
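To make the "broken environment" case concrete, here's a minimal sketch of an order-dependent test, written in Python with pytest purely for illustration; the function names and the shared `session` dict are hypothetical, not actual piClinic code:

```python
# Illustrative only: two pytest tests with a hidden order dependency.
# The login() helper and the "session" dict are hypothetical stand-ins.

session = {}  # module-level state shared between tests (the trap)

def login(username):
    # Stand-in for a real login call; records the active user.
    session["user"] = username

def test_login():
    # Passing this test leaves "user" behind in the shared session.
    login("admin")
    assert session["user"] == "admin"

def test_update_record():
    # Works only because test_login ran first and set session["user"].
    # Run it alone (pytest -k test_update_record) and it fails with a
    # KeyError -- the environment is broken, not the code under test.
    assert session["user"] == "admin"
```

Giving each test a fresh fixture, or resetting shared state in a setup step, turns that hidden dependency into an explicit one, and the test stops failing for reasons that have nothing to do with the code.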
The challenge in troubleshooting these test failures is picking the right cause from the start, so you don’t “fix” something that was actually working while the real problem stays hidden.
But, this is nothing new for developers (or testers). It is, however, completely foreign to a writer.
Here are some of the differences.