TDD and unit testing often get a confusing rap. I posted a big retraction of my own problems with the approach a few months back (my blog is down for now - I was away for three months and some blighter wormed into the account).
I realize that traditional TDD has you run your tests after each code change and only continue once all tests pass (I assume you all keep two windows open: one for production code and another for test cases?), so you do something like:

1. Run the tests after each code change.
2. Only continue once all of them pass.
The first line dictates my every action these days - I run the tests after every mid-to-major change (tiny stuff is rarely test-worthy). This works on a few assumptions:
1. You have sufficient tests to detect problems and unexpected issues (never a guarantee, but generally you have a feel for the likely problems).
2. The tests are not buggy (SimpleTest etc. have nice clean interfaces that limit this risk).
3. Your current PHP version is not from the Stone Age.
My usual steps:
1. Think about the aims of the code
2. Write an interface skeleton (method and function names only)
3. Describe the expected behaviour (tests which must always pass)
4. Describe likely problems (test for bad outputs, invalid operations, etc.)
5. Check test syntax (Eclipse, yay...)
6. Write sample code, incrementally. Usually I comment out all tests except the ones the current increment should be passing. If everything passes, I develop another increment, uncomment the relevant tests, and repeat.
7. Along the way check for "code smells" and refactor when appropriate.
8. NEVER test for performance (wait until near the finish line, then find the main culprits for optimisation).
9. Revisit the behaviour - is it all on course for what's needed, or is it mutating into a beast?
10. Keep all books written by Martin Fowler close to hand (POEAA, Refactoring).
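Steps 2 to 4 and 6 above might look like the sketch below. Every name here is invented for illustration; SimpleTest would normally wrap the checks in a `UnitTestCase` with `assertEqual()` and `expectException()`, but plain checks keep the sketch self-contained and runnable.

```php
<?php
// Step 2: interface skeleton - names first, logic later.
class CartTotaliser
{
    private array $prices = [];

    public function add(float $price): void
    {
        // First increment (step 6): just enough to satisfy the tests below.
        if ($price < 0) {
            throw new InvalidArgumentException('Negative price');
        }
        $this->prices[] = $price;
    }

    public function total(): float
    {
        return (float) array_sum($this->prices);
    }
}

// Tiny stand-in for a test runner's assertion method.
function check(bool $ok, string $label): void
{
    if (!$ok) {
        fwrite(STDERR, "FAIL: $label\n");
        exit(1);
    }
}

// Step 3: expected behaviour - these must always pass.
$cart = new CartTotaliser();
check($cart->total() === 0.0, 'empty cart totals zero');

$cart->add(2.50);
$cart->add(1.25);
check($cart->total() === 3.75, 'prices are summed');

// Step 4: likely problems - invalid operations should fail loudly.
$thrown = false;
try {
    $cart->add(-1.00);
} catch (InvalidArgumentException $e) {
    $thrown = true;
}
check($thrown, 'negative prices rejected');

echo "all green\n";
```

In a real run of step 6, the behaviour checks get written before `add()` has any body at all, watched fail, and then made to pass one increment at a time.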
Why I do it this way:
- catch bugs early
- add undiscovered bugs as regression tests
- code generally stays simple, well organised, and easy to read (tests are documentation for other developers following TDD).
- maintenance is usually simple thereafter and refactoring a cinch.
- instantaneous feedback on changes (logging for later works too, but feedback is time-sensitive).
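That second point in practice: when a bug gets past the suite, I pin it with a test before fixing it, so it can never quietly return. A minimal sketch - the function and the bug report are invented for illustration:

```php
<?php
// Hypothetical regression: a bug report showed "  Bob  " and "bob" being
// treated as different users because the input was never trimmed.
function normaliseUsername(string $name): string
{
    return strtolower(trim($name)); // the fix
}

// The regression test encodes the exact input from the bug report, so the
// bug stays fixed even through later refactoring.
if (normaliseUsername('  Bob  ') !== 'bob') {
    fwrite(STDERR, "regression: username normalisation broken\n");
    exit(1);
}
echo "bug pinned\n";
```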
Important to note: it's all about the behaviour. If you ignore the implementation, a class's inputs and outputs are generally limited. It's easier to test them than small discrete units of the implementation. Wait for the whole to fail before digging deeper.
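A hypothetical sketch of that idea - the checks drive the public inputs and outputs and never peek at the private machinery (class and algorithm invented for illustration):

```php
<?php
// The class hides its workings, so tests can only exercise the
// public inputs and outputs.
class RomanNumeral
{
    // Private detail - a behaviour-focused test never inspects this.
    private const VALUES = [10 => 'X', 9 => 'IX', 5 => 'V', 4 => 'IV', 1 => 'I'];

    public function format(int $n): string
    {
        $out = '';
        foreach (self::VALUES as $value => $symbol) {
            while ($n >= $value) {
                $out .= $symbol;
                $n -= $value;
            }
        }
        return $out;
    }
}

// Behaviour only: input in, output checked, implementation ignored.
$roman = new RomanNumeral();
echo $roman->format(4), "\n";  // prints "IV"
echo $roman->format(14), "\n"; // prints "XIV"
```

If `VALUES` were later swapped for a completely different lookup strategy, these checks would survive untouched - which is exactly what makes refactoring cheap.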
End result - initial testing cost is high, development is normal, and maintenance and adaptation require far less time. It's often a long-term time saver, which is where the cost can be misinterpreted as too high for the benefits. A lot of folk (yep, incl. me!) fail to see that maintenance is a blood-sucking, nefarious demon which can eat hours of my time if I have nothing in place to contain, or even prevent, that maintenance sinkhole.
My 2c.

Feeling wordy today.