
Testing methods comparison

Posted: Mon Sep 04, 2006 5:57 pm
by alex.barylski
Edit: I realize that TDD is also a state of mind, an approach to solving software development woes, so I'm not looking for that kind of explanation, please, but rather a technical, pragmatic comparison of each...

I've been reading the phpUnit documentation and have stumbled onto the differences between external tests and inlined tests that use phpUnit's static method invocation...

The latter I'm quite comfortable with, as it's similar to MFC's ASSERT macro, but obviously it adds overhead to the daily execution of my PHP production code... not a good thing...

As far as PHP is concerned I am convinced the external approach is better.

However, it appears this approach is only good for testing an interface, whereas the other is useful for testing the implementation as well...
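For concreteness, here's a minimal sketch of the two styles, using an invented Money class (not something from the phpUnit docs): the inline assert() runs every time the production code runs, while the external test function lives outside the production code and exercises only the public interface.

```php
<?php
// Hypothetical Money class, for illustration only.
class Money
{
    private $cents;

    public function __construct($cents)
    {
        // Inline check: guards the implementation, but executes on
        // every production call - the overhead mentioned above.
        assert($cents >= 0);
        $this->cents = $cents;
    }

    public function add(Money $other)
    {
        return new Money($this->cents + $other->cents);
    }

    public function cents()
    {
        return $this->cents;
    }
}

// External test: kept out of production code entirely, so it costs
// nothing at runtime and only sees the public interface.
function testAddSumsAmounts()
{
    $sum = new Money(150);
    $sum = $sum->add(new Money(250));
    return $sum->cents() === 400;
}
```

The external function can only observe what add() and cents() return; the inline assert can check internal state the interface never exposes.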

I've considered many times having all my objects derive from a base class which supported some form of logging functionality.

Logging would certainly be more thorough than AUT...

So which do you prefer or do you use a combination of all three?

1) External AUT
2) Inline AUT
3) Logging functionality

The external approach would certainly be more helpful when refactoring codebases, as inlined tests would disappear if you removed that implementation...

I'd like to hear other benefits, etc. of each approach, and whether you find logging redundant, etc...

I have always liked the idea of a logger in the base class of all objects. Its code is inlined (which is a downside), but it won't choke your application like an ASSERT will, so you just have to keep an eye on your log files. It's probably the most flexible, easiest to implement, and easiest to maintain...
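As a sketch of that idea (the class and method names here are invented), a base class could hand every subclass a log() method, so a failed check gets recorded quietly instead of halting the request the way a failed ASSERT would:

```php
<?php
// Hypothetical base class: every object that extends it can log.
class LoggingObject
{
    protected $logFile;

    public function __construct($logFile = 'app.log')
    {
        $this->logFile = $logFile;
    }

    // Appends a timestamped line; the application keeps running.
    protected function log($message)
    {
        $line = date('Y-m-d H:i:s') . ' ' . get_class($this)
              . ': ' . $message . "\n";
        file_put_contents($this->logFile, $line, FILE_APPEND);
    }
}

// Example subclass: a bad withdrawal is logged, not fatal.
class Account extends LoggingObject
{
    private $balance = 100;

    public function withdraw($amount)
    {
        if ($amount > $this->balance) {
            $this->log("withdraw($amount) exceeds balance {$this->balance}");
            return false;
        }
        $this->balance -= $amount;
        return true;
    }
}
```

The trade-off is exactly the one above: the logging calls are inlined into production code, but failures end up in app.log for later review rather than in front of a user.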

Cheers :)

Posted: Mon Sep 04, 2006 6:56 pm
by alex.barylski
Just to add to what I've said above, and actually retract some of it :P

I can see now how the two inline methods are somewhat analogous to each other, but function differently. One is displayed at runtime whereas the other is logged.

I also have observed that external methods don't incur runtime costs (that was actually obvious the first time; just not sure if I mentioned it) and are more geared towards testing the interface, like already noted...

Inline tests, on the other hand, are more intended to test the implementation.

So really both are good; however, I would favour logging over runtime exceptions during implementation testing, both for performance and because, when something chokes in a production application, I don't need users seeing test case data.

What I propose (if it doesn't already exist) is a PHP tool which, instead of displaying test case trials, simply logs the data for later review, at both the interface and implementation stages.

I realize that traditional TDD requires you to run your tests after each code change and only continue once all tests pass (I assume you all keep two windows open: one for production code and another for test cases?), so you do something like:

1) Write AUT for object or method(s)
2) Run automated test suite - get green lights or red
3) Hopefully all red; now that you've hammered out an interface, start coding
4) Work on each method until it passes its unit test, then continue on to the next method

This same process would work, except it requires an additional step, implementation invocation, so you would need to run your production code as well as your unit tests:

- Production code first (generate implementation log)
- Automated unit test (generate interface log)
- Data analysis (view both interface/implementation logs)
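I don't know of an existing tool that does this, but a minimal version of the "log, don't display" runner might look like the sketch below; every name in it is invented, and it assumes PHP 5.3+ for the anonymous functions. Each test is a callback, and pass/fail lines go to a file for the data-analysis step instead of to the screen.

```php
<?php
// Invented sketch of a "log, don't display" test runner.
function runAndLog(array $tests, $logFile)
{
    foreach ($tests as $name => $test) {
        $result = $test() ? 'PASS' : 'FAIL';
        // Nothing is echoed; results go straight to the log.
        file_put_contents($logFile, "$result $name\n", FILE_APPEND);
    }
}

// Interface-level checks could go to one log and implementation-level
// checks to another, matching the two-log flow above.
runAndLog(array(
    'strtoupper basic' => function () { return strtoupper('abc') === 'ABC'; },
    'array_sum basic'  => function () { return array_sum(array(1, 2)) === 3; },
), 'interface.log');
```

The data-analysis step is then just reading interface.log and implementation.log side by side, whenever it suits you, rather than watching output at run time.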

You could probably even work a UI AUT into the testing framework to avoid invoking production code except through the unit test framework...

Do you follow? Am I making sense? What do you think? I babbled a bit here so correct me where I'm wrong, etc...

I need something to eat :P

Cheers :)

Posted: Mon Sep 11, 2006 9:42 am
by Maugrim_The_Reaper
TDD and Unit Testing often get a confusing rap. I posted a big retraction of my own personal problems with the approach a few months back (my blog is down for now - been away for 3 months and some blighter wormed into the account).
alex.barylski wrote:
> I realize that traditional TDD requires you to likely run your tests after each code change and only continue once all tests pass (assume you all keep two windows open: one for production code and another for test cases?) so you do something like:
The first line dictates my every action these days: I run the tests after every mid-to-major change (tiny stuff is rarely test-worthy). This works on a few assumptions:

1. You have sufficient tests to detect problems and unexpected issues (never a guarantee, but generally you have a feel for the likely problems).
2. The tests are not buggy (SimpleTest etc. have nice clean interfaces that limit this risk)
3. Your current PHP version is not from the Stone Age. ;)

My usual steps:

1. Think about the aims of the code
2. Write an interface skeleton (method and function names only)
3. Describe the expected behaviour (tests which must always pass)
4. Describe likely problems (test for bad outputs, invalid operations, etc.)
5. Check test syntax (Eclipse, yay...)
6. Write sample code, incrementally. Usually I comment out all tests but the ones the incremented code should be passing. If everything passes, develop another increment, uncomment the relevant tests, and repeat.
7. Along the way check for "code smells" and refactor when appropriate.
8. NEVER test for performance (wait until you're near the finish line, then find the main culprits for optimisation)
9. Revisit the behaviour: is it all on course for what's needed, or mutating into a beast?
10. Keep all books written by Martin Fowler close to hand (POEAA, Refactoring).
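Steps 2-4 above in miniature, using an invented Stack class: the interface skeleton comes first, then the expected behaviour as checks that must always pass, then a check for a likely problem case.

```php
<?php
// Step 2: interface skeleton (an invented Stack class).
class Stack
{
    private $items = array();

    public function push($item) { $this->items[] = $item; }
    public function pop()       { return array_pop($this->items); }
    public function isEmpty()   { return count($this->items) === 0; }
}

// Step 3: expected behaviour, as tests that must always pass.
$s = new Stack();
$s->push('a');
$s->push('b');
assert($s->pop() === 'b'); // LIFO order
assert($s->pop() === 'a');
assert($s->isEmpty());

// Step 4: a likely problem case; array_pop() on an empty array
// returns null rather than raising an error.
assert($s->pop() === null);
```

In real use the assert lines would sit in a separate test file (SimpleTest or phpUnit style) and step 6's increments would fill in one method at a time, but the ordering is the same.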

Why I do it this way:

- catch bugs early
- add undiscovered bugs as regression tests
- code generally stays simple, well organised, easy to read (tests are documentation to other developers following TDD).
- maintenance is usually simple thereafter, and refactoring a cinch.
- instantaneous feedback on changes (logging for later works too, but feedback is time-sensitive).

Important to note: it's all about the behaviour. If you ignore the implementation, a class's inputs and outputs are generally limited. It's easier to test those than small discrete units of the implementation. Wait for the whole to fail before digging deeper.

End result: initial testing cost is high, development is normal, and maintenance and adaptation require far less time. It's often a long-term time saver, which is where the cost can be misinterpreted as being too much for the benefits. A lot of folk (yep, incl. me!) fail to see that maintenance is a blood-sucking nefarious demon which can eat hours of my time if I have nothing in place to contain that maintenance sinkhole, or even prevent it.

My 2c. :) Feeling wordy today.

Posted: Mon Sep 11, 2006 8:12 pm
by Ambush Commander
Welcome back Maugrim!

I have a few extra comments to make about your post. I did a skim of "Refactoring", and while I'm sure its refactoring smells and actions could be quite useful, the most important thing I got from it was to only refactor in small pieces, "one hat at a time."

My approach to TDD is this: for writing new code, write the test case, write the code, test, fix bugs. For refactoring, refactor in small bits, preserving functionality, and test each step. My biggest problem is how detailed I should get in the test case.

However, the test tool you describe might be useful for a different purpose: running unit test cases on different PHP versions, servers, configurations, etc. Ideally, all that data gets collected in realtime, but that may not be possible. It's not too big a problem if your test cases start failing in 4.3.6 as long as they still work in 4.3.11, so you can hammer that out later (although, once again, it's a lot easier if you nip it in the bud).

I've seen something like this in practice, actually. MediaWiki's Parser class is notoriously buggy and fails about twenty test cases; however, fixing the bad behavior would require such sweeping changes that no one really wants to do it. So Brion Vibber has this nifty little script that runs the unit tests and emails the results to wikitech-l. But no one really pays attention...