
Unit Testing: another view

Posted: Tue Nov 30, 2010 6:28 pm
by FourthWorld
When I was speaking at the MacTech conference earlier this month, the speaker before me was Wil Shipley, developer of the acclaimed Delicious Library. His talk was on my must-see list both because of his product's excellent reputation and because of his subject: "Writing a Test Harness for your Application".

He devoted the first several minutes of his talk to why unit testing sucks. And oddly enough, by the time he was done I was a convert.

I asked him if he had a blog post on that I could share, and he pointed me to this one:

Unit testing is teh suck, Urr. ... k-urr.html

A worthy read, IMO, whether you agree with it or not; well worth the few minutes. This snippet gets to the core of his point:
Most programmers don't know how to test their own stuff, and so when they approach testing they approach it using their programming minds: "Oh, if I just write a program to do the testing for me, it'll save me tons of time and effort."

There's only three major flaws with this:
1. Essentially, to write a program that fully tests your program, you need to encapsulate all of your functionality in the test program, which means you're writing ALL THE CODE you wrote for the original program plus some more test stuff.
2. YOUR PROGRAM IS NOT GOING TO BE USED BY OTHER PROGRAMS, it's going to be used by people.
3. It's actually provably impossible to test your program with every conceivable type of input programmatically, but if you test by hand you can change the input in ways that you, the programmer, know might be prone to error.
Having been immersed for the last few weeks in a code base in which most components had gone through significant unit testing but are exhibiting issues in their interactions, I can sympathize with Mr. Shipley's plea for more holistic testing practices.

Your thoughts?

Re: Unit Testing: another view

Posted: Wed Dec 01, 2010 6:36 am
by WaltBrown

I don't think it's as simple as "unit testing is teh suck". The issue seems to lie less with unit testing in general, as a step in the process, than with an extensive, formalized unit-testing regime being used exclusively. As a generality, unit testing can be as simple as running the code to see if it works, with real simple "gozintas" and "gozoutas", or as complex as the "worst case" he describes.

Of note is a phrase in the first quote in his article: "solo testing and debugging". A very useful part of any process is to let someone else get their hands on what you have made. That's what many other quality methods are about: walk-throughs, "alpha" and beta tests, usability labs, etc., basically getting other sets of eyes to look at the work. But I, as a solo practitioner, am absolutely not going to build test jigs, input and output emulator frameworks, etc., just so I can test individual modules. That would be insane. Well, in most cases. But I'll get to that.

A second point is "what is a unit?". As you mention, he is the author of a library. From his POV that library is probably composed of many "units". But if you or I use it (probably with our own wrapper) in an app, it is one "unit". Somewhere in any project is a tradeoff between available resources, eventual revenues or benefit, cost of field failures, time to market, etc., that we all have to make: IBM's first OS went out the door with over 20,000 known bugs. At some point you have to look at the elements of your project and decide if there are a couple of "hot spots" where some kind of unit testing can help performance, graceful failure, simplification of later system-level diagnosis, etc.

A third point is the intended usage environment of your project. A client application used to access an MMORPG might very well be more complex than, say, a user interface to control a surgeon's laser or to verify international telecom settlement, but the oversight processes in place for each are very different (driven of course by liability, which in turn is driven by market experiences with failures). Many oversight processes require a certain level of documentation and some minimum level of functional validation; while complete testing is effectively intractable, formalized unit testing is one way to provide validation and tracing information. I've been part of efforts to create testing labs and regimes that include unit-testing workbenches with distributed source and load server farms, etc., and it is absolutely a pain in the ass and takes years to get right. But in that case, the cost of field failure was much higher. I had one multivendor integration that cost a major carrier over $10k per hour for 9 months before the issue was diagnosed and resolved. If we hadn't been able to get at the data on the unit interfaces, and at previous test plans, ranges, and results as starting points, the issue might never have been resolved and the entire system trashed.

So I think the point is that SOME kind of testing is important (as is a mix of testing methods), and that the amount and type done is project-dependent, but that it is silly to pour effort into one area alone, in this case formalized unit testing. But then, that's what the original quote was asking: "I'm curious to know how you approach product testing in such a small company". I think focusing the comments on unit testing really didn't answer the original question.


Re: Unit Testing: another view

Posted: Wed Dec 01, 2010 8:18 am
by WaltBrown
Sorry, I got brain lock on this.

Further supporting Richard's comments: integration in ICT is a huge current problem, and it's getting worse fast. And its source is not confined to software development; it's a generic issue with any social interaction between human communities, one that is also becoming bigger as many global organizations become more granular and less hierarchical. It frequently boils down to understanding the meaning in each other's language and descriptions.

It can be as simple as two vendors saying "Yes, we do T1 for TDM trunks" or "Yes, we handle ADPCM-encoded audio files". But in the end there are over 35 possible T1 protocols for TDM signaling alone, and 20 or more ADPCM algorithms, and many of the implementations are not standard (even the standards say "vendor-specific implementation" in places, but I won't dive into that can of worms here). I've handled over 2000 multivendor integration issues in my career (I'm not bragging, they weren't all successful; it's just to illustrate), and over 90 percent of them boiled down to a human-communications issue. In a multivendor integration, that effectively means an issue between units and their creators.

A key to "unit" testing is testing (to your own satisfaction, obviously) to the range at which someone else COULD POSSIBLY use your module, NOT just the range the initial spec called for. You might say "never pass an empty parameter" because you throw an unhandled exception if that happens, and then blame the user for doing so: "Hey, it was in the documentation, RTFM, it said 'NO EMPTY PARAMETERS', I'm closing the issue as user error."

I've run formal beta trials with contracts, guarantees not to deploy, limits on use, and everything (even free beer and limited-edition embroidered shirts for the testers :-). Over two thirds of the beta developers used the product outside the intended range anyway, deployed anyway, and we had to handle the issues in the field, anyway. We could have avoided some number of those field issues if all the unit testing had tested what was possible, not just what was intended; many of you have seen that parabolic chart showing cost per bug at the various lifecycle stages of software products.

(As a side note, one of my former teams once got signature authority on unit-testing results, which of course promptly halted most deliveries, which in turn had marketing revoke our signature authority so product could go out the door anyway, but that's a story for over beers sometime.)
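In LiveCode terms, "testing what's possible" might look something like this little driver (myCube and the input list here are invented for illustration, not from any real project): feed a handler the values the spec forbids as well as the ones it allows, and just check that nothing blows up.

Code:

command testPossibleRange
    -- myCube is a hypothetical handler under test
    -- exercise inputs OUTSIDE the documented range, not just inside it
    local tInputs
    put "3" & cr & "0" & cr & "-2" & cr & empty & cr & "haha" into tInputs
    repeat for each line tInput in tInputs
        try
            get myCube(tInput) -- must not throw, whatever comes in
        catch tError
            put "FAIL on" && quote & tInput & quote & ":" && tError & cr after msg
        end try
    end repeat
end testPossibleRange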

In the end, there's probably no way to "suggest" a specific testing regime to a new developer in this realm of smaller development shops feeding mashups, other than a couple of general points: use multiple testing methods, keep your interfaces simple, get another pair of knowledgeable and willing eyes to look at and try your work, keep extensive notes on what you can't get to right away, and be confident that what you make WILL CERTAINLY be used in ways you didn't originally intend.

I'm going to be real crotchety when I get old...

Re: Unit Testing: another view

Posted: Sun Dec 05, 2010 7:12 pm
by mwieder
Well, it's a good (and long) read... however...

The author is quite correct about the need to test. And test some more. But when he posts inane stuff like
Most programmers don't know how to test their own stuff
I have to get on a soapbox about this. For a start, it's one of the cardinal rules of testing software that you can't test *your own* stuff. You're in too deep to get a perspective on how an application will be used in the real world. That's where beta testers come into play: you need to make sure your software is working, then hand it off to someone else. And then start fixing things, because your beta testers will have done things you never expected.

But there's a place for unit testing. Unit testing provides a couple of very real advantages. First, you can place unit-test code in your application as you write it, as a sanity check; that way you can ensure that code changes you make later on don't break things in unexpected ways. Second, unit testing paves the way for test-driven development (TDD), where you write the unit tests first, then write the code until the tests pass.

As a simple example, if I'm writing a function to provide the cube of a given number I might write a stub like

Code:

command cubeOfUnitTest
    assertEquals cubeOf(3), 27
    assertEquals cubeOf(4), 64
    assertNotEquals cubeOf("haha"), 10
end cubeOfUnitTest

function cubeOf pValue
    -- deliberately empty stub: the tests fail until the real code is written
    local tCube
    return tCube
end cubeOf
Unit tests provide a contract between what you expect from your software and what the code actually does. In the same way, TDD provides a contract between what you want your software to do and what it actually does.
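Once the tests exist, you write the body until they pass. A minimal version of the function (just a sketch) might be:

Code:

function cubeOf pValue
    -- return empty for non-numeric input, so the "haha" case doesn't yield 10
    if pValue is not a number then return empty
    return pValue * pValue * pValue
end cubeOf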

Re: Unit Testing: another view

Posted: Mon Dec 06, 2010 2:57 pm
by WaltBrown
The libSTSXML.rev Workshop is an example of a kind of "personal" unit test. As I write, I try to keep a card in each stack that tests each "element" with known inputs and outputs. Then I can at least go back and double-check that any given element is still doing what I originally intended as I add to it. It's also handy in case any component external to my work gets updated or otherwise changed.
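A button script on such a card can be tiny; something like this (the handler name and field are invented here for illustration, not the actual libSTSXML API):

Code:

on mouseUp
    -- myElement stands in for whichever library handler this card exercises
    if myElement("known input") is "expected output" then
        put "pass" into field "result"
    else
        put "FAIL:" && myElement("known input") into field "result"
    end if
end mouseUp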

Re: Unit Testing: another view

Posted: Thu Dec 09, 2010 7:49 pm
by mwieder
...and since I brought up TDD, I just came across this blog entry today: ... -fitnesse/

Re: Unit Testing: another view

Posted: Tue Dec 14, 2010 1:17 am
by mwieder
...and this one:

Ten Reasons to Write Unit Tests

Re: Unit Testing: another view

Posted: Mon Jan 23, 2012 8:50 am
by kdjanz
Ran across this thread while replying to another one and was intrigued.

I first heard of unit tests in Rails, where they're a standard part of the environment. How does this apply to LC? Since scripts are scattered through different objects in the stack (or stacks), where would you put the unit tests? Would you create a "doUnitTests" command that would contain a list of handlers to automate this?

I saw the start of some code from mwieder, but "assert" is not part of LC; do you create a testing stack? I saw another comment that one coder included a page with tests. How do you strip this out so that it's not in the standalone?
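(I imagine something as small as this would do, kept in a separate plugin or testing stack so it never ships in the standalone, though I don't know if that's how people actually do it:)

Code:

command assertEquals pActual, pExpected
    -- report to the message box rather than throwing
    if pActual is not pExpected then
        put "FAIL: expected" && pExpected && "but got" && pActual & cr after msg
    end if
end assertEquals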

Many questions, but I see this as a Best Practices type of thing that would be a good habit to acquire right from the start. Do many of the professionals here actually use unit tests or TDD as standard practice?

Thanks for any discussion,