BDD confusion

Discussion of testing theory and practice, including methodologies (such as TDD, BDD, DDD, Agile, XP) and software - anything to do with testing goes here. (Formerly "The Testing Side of Development")

Moderator: General Moderators

alex.barylski
DevNet Evangelist
Posts: 6267
Joined: Tue Dec 21, 2004 5:00 pm
Location: Winnipeg

BDD confusion

Post by alex.barylski »

From wikipedia: http://en.wikipedia.org/wiki/Behavior_d ... evelopment
In the example above it's not really important how the prime numbers are calculated - if calculated at all. What's important is that the numbers are correct which is the expected behavior of the application.
First example:

public class PrimeNumberCalculatorTests extends junit.framework.TestCase {
   public void testIfPrimeAfter100() {
      PrimeCalculator calculator = new EratosthenesPrimesCalculator(100);
      int result = calculator.nextPrime();
      assertEquals("First prime after 100 should be 101 but is " + result, 101, result);
   }
}
The quoted statement above confuses me...especially "If calculated at all"

How else would the prime number object return results? It could use an internal lookup table I suppose and thus avoid calculating...but practically speaking...the object would likely calculate the values...which is what you would be interested in testing to ensure the algorithm worked as expected, is it not?

In order to test its behavior...would this not require you to mock the PrimeNumber object? In which case it would return only what you told it to via expectOnce() and/or setReturnValue() (assuming SimpleTest)???

What am I missing about that statement and the above code snippet? If the PrimeNumber object didn't calculate values and used a lookup table instead...the final assertion is still what you are testing - verifying the implementation works as expected.
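To make the "if calculated at all" point concrete, here is a sketch of two interchangeable implementations that exhibit the same observable behaviour - one really calculates, the other only looks values up. The class names are made up for illustration; they are not from the Wikipedia article.

```php
<?php
// Two implementations of the same behaviour: "next prime after 100 is 101".
// A behavioural check cannot (and should not) tell them apart.

interface PrimeCalculator {
    public function nextPrime();
}

// Implementation A: actually calculates, via trial division.
class TrialDivisionCalculator implements PrimeCalculator {
    private $start;
    public function __construct($start) { $this->start = $start; }
    public function nextPrime() {
        for ($n = $this->start + 1; ; $n++) {
            if ($this->isPrime($n)) return $n;
        }
    }
    private function isPrime($n) {
        for ($i = 2; $i * $i <= $n; $i++) {
            if ($n % $i === 0) return false;
        }
        return $n > 1;
    }
}

// Implementation B: no calculation at all, just a precomputed table.
class LookupTableCalculator implements PrimeCalculator {
    private static $table = array(100 => 101, 101 => 103, 103 => 107);
    private $start;
    public function __construct($start) { $this->start = $start; }
    public function nextPrime() { return self::$table[$this->start]; }
}
```

Both satisfy `nextPrime() === 101` when started from 100, which is all the behavioural example cares about.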

What was the author's intent with that statement (not calculated at all)?

1) Was he suggesting it could use an alternative technique, such as a lookup?
2) Was he suggesting that the object be mocked and would return whatever you told it to via setReturnValue() because you are testing the behavior *not* the state?

If #2 is correct...and the PrimeNumber object was mocked and you told it to return a certain value and thus forced the assert to pass...what exactly are you testing? I see no behavioral testing happening there and the assert is obviously going to pass or fail...as you are basically passing the expected result indirectly to the assert via the mock object - make sense?

Cheers :)
Maugrim_The_Reaper
DevNet Master
Posts: 2704
Joined: Tue Nov 02, 2004 5:43 am
Location: Ireland

Post by Maugrim_The_Reaper »

The Wikipedia example is crap. Not to be terribly scathing (hehe) but it's a horrible example of BDD. The author goes so far off the mark they use the term "behavioral tests". The one word BDD advocates despise above all others is...yes..."test".

A simple example would be:

1. Write textual spec:

PrimeCalculator should calculate the next prime number after 100 to be 101.

Keep it simple, short and obvious. If it looks ungainly, split it up. Specs should be very specific.

2. Encode as an example demonstrating the specified behaviour in action:

(PHPSpec...)

// ...
public function itShouldCalculateTheNextPrimeNumberAfter100ToBe101() { // yes, it's the full spec text; be clear in what the example is for
    $calc = new PrimeCalculator();
    $calc->startFrom(100); // guess we have to set 100 somehow...
    $this->spec($calc->nextPrime())->should->be(101);
}
// ...
3. Implement (we have a class name, two methods, behaviour specified - one piddly example does offer some design quality).

4. Re-run specs to verify acceptance of the implementation (i.e. you can think about "test" now if you're that attached to it ;)).

5. Write another spec, and repeat. Refactor as needed.

6. We're serious about one spec per example method. Two specs in a method means your spec is not specific enough - don't let the two-for-one practice take root. Doesn't work like that. Iterated runs are sometimes fine though.
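As a sketch of step 3, here is one implementation the spec above could drive. It assumes the startFrom()/nextPrime() API from the example; the body is illustrative only, not something PHPSpec generates.

```php
<?php
// Minimal implementation satisfying the spec
// "PrimeCalculator should calculate the next prime number after 100 to be 101".

class PrimeCalculator {
    private $current = 1;

    public function startFrom($n) {
        $this->current = $n;
    }

    public function nextPrime() {
        $n = $this->current;
        do { $n++; } while (!$this->isPrime($n));
        $this->current = $n; // repeated calls walk up through the primes
        return $n;
    }

    private function isPrime($n) {
        if ($n < 2) return false;
        for ($i = 2; $i * $i <= $n; $i++) {
            if ($n % $i === 0) return false;
        }
        return true;
    }
}
```

With this in place, `startFrom(100)` followed by `nextPrime()` yields 101, and the spec's example method passes.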

Post by alex.barylski »

Ok that example made sense...I see now how the spec is (how else to say this?) testing the behavior???

Is that the same as an ASSERT however? I can see how specs are far more human friendly than a mere assert, but technically, are they not accomplishing the same task?

The assert (as I see it) is used to validate some value or state. Unlike expectations, which are used to validate (how would I say this and avoid using "test"?) the use of its collaborators?

If you mocked the PrimeNumber calculator though...as I assumed the author meant when he said "calculated at all"...would the spec not serve the same purpose as an assert and basically be pointless? In point form I'll try and explain what I mean:

- Create mock object of PrimeNumber class. There are no collaborators, just an inline prime number calculator. Does mocking make sense? There is no need for setting any expectations.
- Mocked object has no code to execute so it's faster than a prime number calculator - benefit?
- Mocked object has its nextPrime() method called, which returns whatever we tell it to when we call the mock's setReturnValue()

The final line:

$this->spec($calc->nextPrime())->should->be(101);
Assuming there are no collaborators and we tell nextPrime() to return 101 using setReturnValue(), then...here is where my confusion lies...

What exactly are we accomplishing?

However, what if the PrimeNumber wasn't mocked and its actual implementation was tested/spec'ed? Now at least the state/behavior of the implementation is validated.

The more I think about it, the more I think I must have misunderstood what the author was implying by the sentence "if calculated at all". Surely he couldn't mean the example above would use a mocked prime number object, as it makes no sense, does it???

Post by alex.barylski »

I think I may have found the question which is leading to much of my confusion:

Would you ever Mock the very object you are testing? To me, it doesn't make sense as the object would literally have nothing to test. Is this correct? If not, can you give an example with detailed reasons as to why you might mock the SUT (system under test)?

Cheers :)

Post by Maugrim_The_Reaper »

Is that the same as an ASSERT however? I can see how specs are far more human friendly than a mere assert, but technically, are they not accomplishing the same task?
Technically yes, which is the main retort against BDD. But as you noted, it's human friendly. In BDD there's a strong focus on making code readable to the extent it's almost plain English. There's a body of practice revolving around that concept called Domain Specific Languages, which focuses on designing APIs which adhere to high standards of readability, intuitiveness and predictability. assert* methods are completely counter to this standard since they require a "translation effort" to turn PHP code into English, into a symbolic concept. BDD moves into a groove where the code you write reflects your thoughts as closely as possible.

If that sounds shrinky ;), which it does, it's deliberate. People don't think in terms of PHP syntax, they think in terms of their native language. When you write a spec saying "next prime after 100 should be 101", it's easier to copy it word for word into a spec, and assemble the code as:

- Given 100
- When I request next Prime
- It will be 101

If you follow the spec example, that's the exact workflow of the API suggested. Each step merely follows an intuitive staging. Text - Example - Implementation. The reading of each is typically interchangeable.

Now if you switch to TDD mode, when was that ever explained to you in a TDD article? It's simple, clear, works, and is where TDD users who are really good end up through experience and prompting from other good TDD users. It's just not declared openly. Unit Testing frameworks won't lead you to it (they're test oriented), a good BDD framework should do so almost immediately.

Back to technicalities. spec() vs assert() are low level methods which have a common purpose in many instances. BDD abstracts a little further however to add a Domain Specific Language modelled on English, allows for predication, and encourages the addition of custom "matchers". It's a more extensible approach which is easier to think with.

IMO, that's the single greatest benefit of BDD - it puts you into the right mode of thinking. Specify, write example, implement example, pass acceptance, finished. TDD muddles through on write test, implement code to pass test, etc. - you can interpret that far too flexibly. Most TDDers in PHP do - their suites go off the cliff on testing state, short numbered methods, close correlation of tests to code (behaviour rarely follows a 1:1 relationship to code - one method can exhibit n types of behaviour after all), multiple assertions per method (clumped specs), Reflection to test private resources (brittle tests), testing to real objects or resources (interdependent tests - the domino effect), etc. On the flipside, it's rare to see that in BDD since the terminology is very specific, very small, and the frameworks discourage poor practice.
If you mocked the PrimeNumber calculator though...as I assumed the author meant when he said "calculated at all"...would the spec not serve the same purpose as an assert and basically be pointless? In point form I'll try and explain what I mean
You don't mock the object being specified - only other external helper/collaborator objects which are not currently being specified. If PrimeCalculator used a separate Math object, then you'd mock the Math class (assuming it's sufficiently complex to justify mocking - mocking simple objects is simply not an efficient use of your precious time resource. If it's simple, just design and write it - rely on the parent specs to assure it works. If it's reusable elsewhere, or complex, or uses data heavy operations; then write separate specs for it).

I think this was your one main concern there ;). You don't mock the one being specified - only other objects. The point is to maintain isolation. The only exception should be for simple, non-reusable classes which are not worth a) mocking and b) writing new specs for. This all has one key effect - you'll have fewer specs than the equivalent as tests under TDD. BDD has in my experience made me more efficient about what I write specs for. Note: BDD therefore mixes a little of Unit Testing with Integration Testing - the two in spec terms aren't as independent as TDD suites tend to enforce.
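A sketch of "mock the collaborator, not the SUT", using a hand-rolled stub so it stands alone. The Math class and its isDivisible() method are made up for illustration; a real suite would use SimpleTest's Mock::generate() or PHPMock rather than this by-hand version.

```php
<?php
// The collaborator contract (hypothetical).
interface Math {
    public function isDivisible($n, $by);
}

// The real collaborator - not loaded while specifying PrimeCalculator.
class RealMath implements Math {
    public function isDivisible($n, $by) { return $n % $by === 0; }
}

// Hand-rolled stub: canned answers, and it records calls so collaboration
// can be verified afterwards (what a mock's expectations would do).
class StubMath implements Math {
    public $calls = array();
    public function isDivisible($n, $by) {
        $this->calls[] = array($n, $by);
        return false; // canned answer: "nothing divides", so 101 looks prime
    }
}

// The SUT stays real; only the collaborator is swapped out.
class PrimeCalculator {
    private $math;
    private $start;
    public function __construct(Math $math, $start) {
        $this->math = $math;
        $this->start = $start;
    }
    public function nextPrime() {
        for ($n = $this->start + 1; ; $n++) {
            $prime = true;
            for ($i = 2; $i * $i <= $n; $i++) {
                if ($this->math->isDivisible($n, $i)) { $prime = false; break; }
            }
            if ($prime) return $n;
        }
    }
}
```

The spec exercises the real PrimeCalculator and can additionally check that it actually consulted its collaborator - isolation without mocking the object being specified.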

Post by alex.barylski »

Maugrim_The_Reaper wrote:assert* methods are completely counter to this standard since they require a "translation effort" to translate PHP code, into English, into a symbolic concept. BDD moves into a groove where the code you write reflects your thoughts as closely as possible.
That makes perfect sense, coding to a specification. It's eliminating some work by merging two practices. I write specifications already, usually slightly higher level than what BDD promotes, but that is then translated directly into source. However, I can see BDD being used as a one stop shop kind of deal, where you use the same specs in testing as you deliver to your client. It's abstract enough that I'm sure with a little coaching any layman could understand the concept of a "spec".
If that sounds shrinky , which it does, it's deliberate. People don't think in terms of PHP syntax, they think in terms of their native language. When you write a spec saying "next prime after 100 should be 101", it's easier to copy it word for word into a spec, and assemble the code as:
You lost me on the shrinky bit. :P
IMO, that's the single greatest benefit of BDD - it puts you into the right mode of thinking. Specify, write example, implement example, pass acceptance, finished. TDD muddles through on write test, implement code to pass test, etc. - you can interpret that far too flexibly. Most TDDers in PHP do - their suites go off the cliff on testing state, short numbered methods, close correlation of tests to code (behaviour rarely follows a 1:1 relationship to code - one method can exhibit n types of behaviour after all), multiple assertions per method (clumped specs), Reflection to test private resources (brittle tests), testing to real objects or resources (interdependent tests - the domino effect), etc. On the flipside, it's rare to see that in BDD since the terminology is very specific, very small, and the frameworks discourage poor practice
BDD is certainly a good idea...I appreciate your sharing this with me (us - the community). Personally, I don't share good ideas; it's usually bad for business. :P All joking aside, BDD is certainly a radically different approach. I've always tested my code, even with external scripts, which was similar to AUT in many ways. I see BDD as more innovation than invention. Unit testing spawned from necessity...as developers learned that implementation testing had problems, they gradually began testing the interface instead. When this became problematic, testing frameworks were introduced. I see TDD as the collection of best practices for testing.

Some of my confusion is being caused by the lack of formal definition though - actually, that's almost always my biggest trouble.

When I first started reading about TDD/BDD...most of the TDD stuff clicked right away as it was essentially what I already did but refined (ie: frameworks) and formalized (similar to when I started on design patterns). But BDD like I said IMHO is revolutionary and not something I have ever considered so it's proving to be more difficult to pick up - contrary to most BDD articles which claim the opposite. :P Likely caused by lack of formal TDD experience, but still.

Definitions still elude me. The only article I have found which actually attempts to define the terms:

http://adams.id.au/blog/2007/10/what-is ... velopment/

And I'm not sure how accurate they are, but it's a start at least. A small suggestion, perhaps you could consider a glossary for your manual on PHPSpec (I couldn't find one)???

Originally, I thought behaviors were simply the result of mock objects. Capturing method invocations and having them verified against expectations is what I find really exciting. I see how this may have stemmed from TDD and how BDD was conceived through the notion of testing behavior, but under the context of BDD am I correct in assuming that behaviors are not *just* verifying expectations, but also the specification (which is where my problem with definitions comes in)?

If the expectation is the contract for the implementation and the "spec" is the verification of the result or state of a method...what is the specification? I thought the specification was the term used to describe the method. But then is that not the behavior as well?

I'll try and explain with my own understanding of definitions.

You have a context. This is the class which contains the specifications. The specifications are the individual methods which verify behavior. Behaviors are both the assertions/specs AND the mocks/expectations.

What do you call the "spec" or assertion if the specification is the method itself which contains the "spec" assertion AND expectations? Is it appropriate to refer to this as the "test" of state, or the assertion, or the use of $this->spec()?

Is this because your framework keeps mocks separate from specs? Thus the PHPMock/PHPSpec projects, rather than one project? That question actually just gave me the answer, I think. :P

Verifying "specification" is different than verifying the "expectation" and is why you give no examples in the manual for PHPSpec of verifying expectations? :idea:

If my assumption is correct, that really clears up my definition problem above. A specification is the equivalent of an assertion but more human friendly, and promotes more fine grained, loosely coupled testing. Verifying expectations is then done separately (inside the specification???).
I think this was your one main concern there. You don't mock the one being specified - only other objects. The point is to maintain isolation. The only exception should be for simple, non-reusable classes which are not worth a) mocking and b) writing new specs for. This all has one key effect - you'll have fewer specs than the equivalent as tests under TDD. BDD has in my experience made me more efficient about what I write specs for. Note: BDD therefore mixes a little of Unit Testing with Integration Testing - the two in spec terms aren't as independent as TDD suites tend to enforce.
Yes, thank you, that was an important question for me. Perhaps another important point to mention in any articles you write for newbies, as that question had me stumped. However, the question only arose after reading the wiki article, which was confusing. Someone should update that article. :P

Quick question: why would you *not* mock all your collaborators? Isn't it worth verifying the expectations? It seems to me that is one of the best things about mock objects - not that they avoid the overhead of the real object (which is what every article I've read seems to promote), but that you can observe the behavior of the implementation of the spec'ed method???

Finally, my last question: I know you will likely see me as over eager and suggest that I practice with just writing simple specs for a while, but I'm stubborn and eager to get started using PHPSpec/PHPMock on my current project (I decided that full AJAX support was required, so I re-designed the architecture to support it if javascript is enabled - it's a hosted application, so hopefully it saves me killer bandwidth), so the timing couldn't be better as I refactor the codebase. So yea, the question... :P

Are they both ready for production use?

http://blog.astrumfutura.com/archives/3 ... ework.html

That blog has me thinking that the API will change which makes me nervous as I will likely have 1,000+ specs when done...

Actually I have one more question. LOL Sorry.

If my understanding is now correct, and a spec is different from an expectation: if the PHPMock objects are not used within the specification to verify collaboration, where is this done???

Thanks again, your time is truly appreciated. Your mentoring has likely saved me literally months of struggle and unanswered questions.

Cheers :)

Post by Maugrim_The_Reaper »

Shrinky, from the term "shrink", referring to a psychiatrist. ;) Sorry, when I'm in the writer's zone I have a habit of pulling words out of my hat which are partially made up...

You're perfectly correct - BDD is innovation, not invention. It's based on taking pre-existing concepts and boiling them into one concrete practice. The advantage to that is you get one focused track, rather than two dozen. For example, Domain Driven Design delivered the concept of the "ubiquitous language" common to developers, clients, and lay persons. Domain Specific Languages delivered the concept of a fluent API taking on characteristics of a natural language. BDD combines both - $spec($cat)->should->purr(). It's a DSL, a fluent API, modelled on a natural language, and readable by a lay person (though preferably translated into plain text ;)).

TDD has $this->assert($cat->canPurr(), true); - it's easy to pass it off as equivalent but scatter these over 1000 tests and try again. In a large suite of anything, the less effort to translate is a great benefit. It just looks very modest in one isolated example.
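To illustrate the DSL idea, here is a toy sketch of how such a fluent, English-like chain might be built in plain PHP. PHPSpec's real internals differ; this only shows the mechanism behind a `spec(...)->should->be(...)` reading.

```php
<?php
// Toy fluent matcher chain: spec($actual)->should->be($expected).

class Should {
    private $actual;
    public function __construct($actual) { $this->actual = $actual; }
    public function be($expected) {
        // A real matcher would raise a spec failure; here we just report.
        return $this->actual === $expected;
    }
}

class Spec {
    public $should; // public property so ->should->be(...) reads like English
    public function __construct($actual) {
        $this->should = new Should($actual);
    }
}

function spec($actual) {
    return new Spec($actual);
}
```

With this sketch, `spec($calc->nextPrime())->should->be(101)` is just an ordinary method chain, yet it reads almost word for word like the textual spec.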
Some of my confusion is being caused by the lack of formal definition though - actually thats almost always my biggest trouble.
BDD unfortunately is not a static concept. It's still young enough that it's evolving at a staggering pace. My own blog piece from two months ago reflected the standing interpretation - this December Rspec 1.0.0 was released and a new Rubyconf presentation by David Chelimsky introduced a broader range of concepts and greater differentiation from TDD. Thankfully it is settling down now! The problem therefore is that the older the blog post, the more likely it is to be inaccurate in any number of respects.

XP User Stories - they're the basis of how you can write specs.

You take a concept (Logging) and write stories of how you (the client) want it to work. From there take each story, split it into specific specifications, then code. In BDD however, you also have one extra step - each time you write a specification, you're investigating the problem. As with all problems, you'll discover new solutions, new valuable behaviours, and better ways of making a Logger work to the client's needs. These require additional specs. If you look again at PHPMock's specs you'd never suspect I edited some of those method names (specs) several times to get them just right. The spec is a huge influence in letting YOU understand what to write about - let alone the client.

Terminology sucks :). I really need to add that to the PHPSpec manual. My own short take is based on a simple thought exercise to get specifications just right:

GIVEN a File Logger after instantiation
WHEN there is no file to log to
THEN it should create a new log file

You can call it the GWT way.

GIVEN <context>, WHEN <activity>, THEN <observable behaviour>

If you understand this, you'll understand the rest much better - GWT is the core ideal in BDD specifications. It's used for low-level specs all the way up to the higher-level Story Runner (PHPSpec has no Story Runner yet) used for ATDD. I use this exercise all the time to write specifications and User Stories. In fact, in Rspec for Ruby, this exact exercise is implemented as a Domain Specific Language - they even use given, when, then blocks!
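The GWT shape above can be mapped directly onto a plain example method. In this sketch, FileLogger and its API are hypothetical; a real PHPSpec example would end with a $this->spec(...) call rather than a bare return.

```php
<?php
// Hypothetical FileLogger for the GWT exercise.
class FileLogger {
    private $file;
    public function __construct($file) { $this->file = $file; }
    public function log($message) {
        // FILE_APPEND creates the log file on first write if it is missing
        file_put_contents($this->file, $message . "\n", FILE_APPEND);
    }
}

function itShouldCreateANewLogFileWhenThereIsNoFileToLogTo() {
    $file = sys_get_temp_dir() . '/gwt_example.log';
    @unlink($file); // ensure a clean slate

    // GIVEN a File Logger after instantiation
    $logger = new FileLogger($file);

    // WHEN there is no file to log to (and we log something)
    $logger->log('first entry');

    // THEN it should create a new log file
    return file_exists($file);
}
```

Each comment line is the spec sentence; the code underneath is just its literal translation.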
Verifying "specification" is different than verifying the "expectation" and is why you give no examples in the manual for PHPSpec of verifying expectations?
It's more than that - verification cannot occur until something is specified and implemented (that's everyone's immediate understanding). A big complaint with TDD is that it puts the cart before the horse and swears to everyone that this works. Since it's counter-intuitive and an obvious misuse of English, TDD just fails to be adopted (the "it's gibberish" adoption fallacy). Saying we're specifying first is simply truthful, and obvious - why TDD insists on "test first" is simple denial, I think. It's a catchy phrase, but it means nothing to the average developer who already tests exhaustively and knows tests are time consuming, boring tasks. Specs at least don't even need code at first - I use a simple text editor before opening my IDE.

Which makes sense to you?
What do you call the "spec" or assertion if the specification is the method itself which contains the "spec" assertion AND expectations? Is it appropriate to refer to this as the "test" of state or the assertion? The use of the $this->spec
Spec got tied up in BDD :). The shortened form refers to the group of Text and Example demonstrating one specification. In code terms, it's the method declared using "it<should>", in text terms the Specification sentence copied to the method name. Spec is sometimes (wrongly) used wherever the word "test" might be needed - sometimes a test really is just a test, but BDD users try to avoid the word in explaining BDD since it gives TDD folk an easy target to bomb ;). Having BDD labelled a practice identical to TDD except with "spec" instead of "test" is the number one way for anti-BDD folk to come up with a catchy BDD Is Useless declaration for their blog posts.

In this sense, the method content - Mock Objects, plus the 1 (always one!) spec() call - is the SPEC. Many times you'll see the SPEC confusion rendered moot by calling all spec methods an even simpler word - the EXAMPLE. I prefer Example where possible - people get it more easily, since it reminds them to write short code providing an example of the textual spec being used.

Post by alex.barylski »

Sorry, when I'm in the writer's zone I have a habit of pulling words out of my hat which are partially made up...
Hahaha...I see. :lol:
It's more than that - verification cannot occur until something is specified and implemented (that's everyone's immediate understanding). A big complaint with TDD is that it puts the horse before the cart and swears to everyone that this works. Since it's counter-intuitive and an obvious misuse of English - TDD just fails to be adopted (the "it's gibberish" adoption fallacy). Saying we're specifying first is simply truthful, and obvious - why TDD insists on "test first" is simple denial I think - it's a catchy phrase, but it means nothing to the average developer who already tests exhaustively and knows tests are time consuming boring tasks. Specs at least don't even need code at first - I use a simple text editor before opening my IDE.
Good summary. :) Honestly, in only the last few days of reading up on TDD and BDD I have seen the issues arise with TDD. The concept of mock objects really clicked after reading BDD articles. Under the context of TDD I simply saw them as phantom objects which did nothing but remove a dependency on a collaborator's real object. It was only after reading some articles on the idea of testing "behavior" that I saw mocks actually allow a lot more.

Basically I'm attributing that personal breakthrough to the high level perspective BDD promotes. Many of the examples I've seen/read with TDD had me sit down and paint the overall picture in my head as typically tightly coupled systems...so a change in the production code could possibly affect your tests. I think this was being caused by the reliance on real collaborator objects...

I read a few blogs from TDD'ers who argued to death the misuse of mock objects (mostly as the phantom objects I originally saw them as) and swore that testing the real collaborators was the only TRUE way to test a system. If my understanding of mocks is correct, then you can indeed mock most if not all collaborators and successfully test the isolated primary object, as I believe you are concerned with its behavior more than you are its overall result. The latter is what I have found TDD to promote a lot more - at least indirectly through its hundreds of supporting bloggers (I realize that's probably not part of the doctrine). I think it's this reliance on using *real* objects which leads to brittle tests. Is this not one of the problems BDD promises to solve through the use of abstraction, DSLs and the idea of specifying behavior rather than testing state??? For me, the idea of testing state is what makes me want to believe that the collaborators need to be real objects, because you are concerned with the result (making sure records are returned from the database and processed properly by the primary, or specified, object). :P

The idea of validating a spec and verifying expectations, and thus the behavior of an isolated object, becomes much more understandable and, more importantly, doable...configuring all those collaborators in my tests was a scary thought, even after removing all dependencies...but mocks for everything but the most trivial objects (value or transfer objects like config?) are worth observing - at least I can't see why you wouldn't use mock objects...???

So terminology is still shifting, eh?

To recap this round of questions:

1) Mocks. Why would I not mock every collaborator? Each time you depend on the real object inside your spec I see two problems. One is the complexity introduced into the setup/teardown/before/after phases. The second is that it introduces a more concrete dependency into your specifications. So your tests require servers, files, etc...a PITA to clean up after, difficult to create, and if your database connection fails or changes, that now needs to be reflected in your tests/specs. By mocking, you not only eliminate that extra layer of complexity and dependency, but you get the added benefit of observing/validating object interaction - which again (I'm a newbie so pardon any zeal on my behalf) to me is the coolest thing. The only downside with their potential overuse is that your tests/specs become dependent upon implementation. I'm not sure I agree with that logic though; while it does in a way support implementation observation instead of the hard-fast rule of testing the public interface *only* that seems popular in TDD, I see it as an excellent way to ensure that implementations are held against a contract.
Especially useful in the case where the order of method invocations is important and, if changed, could have reverberating side effects. I suppose when the order is not important, holding it against that implementation contract might be a PITA (not damaging though). I am one of extremely strict discipline - everything from coding standards and conventions to documentation - so personally I see having an error brought to your attention, when implementation changes may potentially cause side effects, as an extremely powerful tool. Sorry, my question's inside a question. ;)

2) If terminology is sketchy right now, do you have an example of how I could use PHPMock and PHPSpec in a single specification? Using my classical model example would be awesome. :D

class MyModel
{
    const EMAIL_INVALID = -1;

    function create($email)
    {
        // Note the negation: bail out when the email is *invalid*.
        // ($this->check is assumed to be an injected validator collaborator.)
        if (!$this->check->isEmailValid($email)) {
            return self::EMAIL_INVALID;
        }

        $pkid = 0; // Assume it's initialized by a mocked database object after a successful INSERT

        return $pkid;
    }
}
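For what it's worth, here is one possible shape for specifying a model like the one above with both collaborators stubbed by hand. Everything here is an assumption for illustration: the model is restated with injected collaborators so the sketch is self-contained, and the stub classes and insert() signature are made up, not PHPSpec/PHPMock's real API.

```php
<?php
// Self-contained restatement of the model, with injected collaborators.
class MyModel {
    const EMAIL_INVALID = -1;
    private $check;
    private $db;
    public function __construct($check, $db) {
        $this->check = $check;
        $this->db = $db;
    }
    public function create($email) {
        if (!$this->check->isEmailValid($email)) {
            return self::EMAIL_INVALID;
        }
        return $this->db->insert($email); // pkid from a successful INSERT
    }
}

// Stub validator: canned answer, no real validation logic.
class StubEmailCheck {
    private $valid;
    public function __construct($valid) { $this->valid = $valid; }
    public function isEmailValid($email) { return $this->valid; }
}

// Stub database: canned pkid, records that insert() was actually called.
class StubDb {
    public $inserted = array();
    public function insert($email) {
        $this->inserted[] = $email;
        return 42; // canned primary key
    }
}
```

A spec could then check both the returned value (state) and, via the stub's recorded calls, that the database collaborator was or wasn't touched (interaction) - with the "invalid" stub, create() must return MyModel::EMAIL_INVALID and never reach the database.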
Cheers :)

Post by Maugrim_The_Reaper »

Why would I not mock every collaborator?
Comes down to efficiency, and refactoring. Here's a simple example. You have a Logger class which optionally writes to a file, or a database. You have written specifications for the Logger object, implemented it, and everything looks fine. You smell a rat though - the Logger combines File and Database operations. To clean up the code, you refactor the Logger class into a standard Strategy Pattern - a parent Logger with common code, and two Strategy classes: Logger_File, and Logger_Database. Assume you keep the original API.

Now, consider the implications. You have three classes, the original specs (by intent) remain valid - i.e. the original API is unchanged. Are you required to now write new specs for the two new classes you introduced? Under BDD, unless you can justify extra specification, you should leave well enough alone. The behaviour has not changed, just the internal implementation. Since specs are based on behaviour only - the original specs are sufficient. Refactoring implementation, does not equate to refactoring behaviour. This leads to an obvious outcome - BDD generally does not require a 1:1 correlation of specs to classes/methods. Instead there's a leaning towards a 1:N relationship.
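The refactoring described above might look roughly like this. All class and method names are illustrative, and file/database I/O is faked with in-memory arrays; the point is only that the public log() API - and hence the original specs - survives the internal reshuffle untouched.

```php
<?php
// Strategy Pattern refactor of a combined File/Database Logger.

abstract class LoggerStrategy {
    abstract public function write($message);
}

class Logger_File extends LoggerStrategy {
    public $lines = array(); // stand-in for real file writes
    public function write($message) { $this->lines[] = $message; }
}

class Logger_Database extends LoggerStrategy {
    public $rows = array(); // stand-in for real INSERTs
    public function write($message) { $this->rows[] = $message; }
}

class Logger {
    private $strategy;
    public function __construct(LoggerStrategy $strategy) {
        $this->strategy = $strategy;
    }
    // The original public API: behaviour (and therefore specs) unchanged.
    public function log($message) {
        $this->strategy->write('[log] ' . $message);
    }
}
```

Any spec written against Logger::log() before the refactor still passes afterwards; the two Strategy classes only need specs of their own if their behaviour grows beyond what Logger's specs already cover.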

Back to efficiency; you can probably see where this fits in. By keeping specifications focused on behaviour, there's less inclination to increase the level of specification beyond the higher-value behaviours you want. Fewer specs means less time invested in subsequent specification, which means less maintenance of valueless specs, which means fewer billable hours wasted on pointless crud, which means lower costs. Maintaining code is costly enough that adding maintenance of duplicated spec levels on top of that is just a burden.

As to mocking, it's the same effect. Logger_File and Logger_Database are external objects to Logger (where our API still resides). Do we mock those? Nope. If we refactored them out of Logger, but determined they needed no extra specs, then we can assume their behaviour is already specified. Mock Objects are a tool for exploring the interactions between objects when those interactions are not already known. If we already know (we just refactored, after all!) then there's nothing to be gained (back to billable hours and costs ;)).

Same applies to small objects, simple objects, or predictable objects. Sometimes an object is simply never intended to be used from a library's API because it's a deeply nested helper. If the higher classes rely on the helper to achieve their behaviour, you can make the judgement that the higher class's specs already determine the helper's performance. If the helper fails, the higher object's behaviour will fail, and...the spec will fail. Why add another set of specs? The failure is caught once - will catching it twice make it easier to spot or something? ;) Maybe... Sometimes the extra, more specific failure will make things easier. That's why it's a judgement call - to spec or not to spec.

So the quick summary is easy :). If you can't honestly justify doing something, then don't do it. If you think mocking a simple object is stupid, then accept it's stupid, use the real object instead, and move on. Mock Objects are a powerful tool, and by all means use them liberally, but there are always exceptions to using them.
alex.barylski
DevNet Evangelist
Posts: 6267
Joined: Tue Dec 21, 2004 5:00 pm
Location: Winnipeg

Post by alex.barylski »

So the quick summary is easy . If you can't honestly justify doing something, then don't do it. If you think mocking a simple object is stupid, then accept it's stupid, use the real object instead, and move on. Mock Objects are a powerful tool, and by all means use them liberally, but there are always exceptions to using them.
Thanks for addressing that question...I'm actually dead set on mock objects but I'll heed your advice and keep prodding until I have it figured out...

I can see how using mocks for trivial objects would be useless, and although your logger class example made sense...nothing clicked, so I'm still missing something. If you don't mind, I'll just reiterate my interpretation of what you said, and hopefully you can catch any misunderstanding I may have.

You have a logger class acting as a collaborator inside a specified method, such as my model->create example...it's logging the creation of records.

Originally the logger uses the file system, but I later switch to using the DB instead (which is actually common practice for me) - this requires refactoring and the introduction of a new class. This is how I envision the create method using the logger collaborator:

Code: Select all

class MyModel {
  function create($email)
  {
    // TODO: Perform error checking (is valid and not duplicate)

    // NOTE: $db is not defined in this scope - assume it's a database
    // object available to the method for the sake of the example
    $pkid = $db->insert($email);

    $logger = new Logger('/var/logs/audit.dat');
    if ($pkid) {
      $logger->record("$email - Success");
    }
    else {
      $logger->record("$email - Failure");
    }

    return $pkid;
  }
}
In this case, because the collaborators are locally scoped objects, it would be impossible to mock them - the real object is your only choice. If I needed to mock the logger, I would have to refactor it out of local scope and either inject it or put it in a service locator or similar. Once the object creation has been externalized (pardon the awkward term), it becomes possible to mock the object and observe its behavior - so I probably would. :P
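As a rough sketch of that refactoring - assuming constructor injection, with both the database and the logger as injected collaborators (the names and the injection style are my assumptions, not something prescribed in the thread):

```php
<?php
// Hypothetical refactoring of MyModel: the Logger and the database are
// passed in via the constructor instead of being created (or assumed)
// inside create(), so both can be replaced with mocks in a spec.

class MyModel {
    private $db;
    private $logger;

    public function __construct($db, $logger) {
        $this->db = $db;
        $this->logger = $logger;
    }

    public function create($email) {
        // TODO: error checking (valid format, no duplicates)
        $pkid = $this->db->insert($email);

        // Log success or failure of the INSERT
        $this->logger->record($email . ($pkid ? ' - Success' : ' - Failure'));

        return $pkid;
    }
}
```

With this shape, a spec can hand the model a mock logger and set expectations on record(), which is exactly what the later examples in this thread do.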

The issue (I believe anyways - correct me if I'm wrong) is that refactoring the logger class could cause your specs to change, but *only* if I had mocked the object? Now I'm confused...what I just asked made no sense. Maybe I can clear things up with a quick spec of the above code:

Code: Select all

public function itShouldReturnEmailInvalidError()
{     
  $model = new Model();     
  $model->create('invalid-email');     

  $this->spec($model->getErrorCode())->should->be(Model::EMAIL_INVALID); 
}
In the above I have "specified" a behavior, correct? (Making sure my terms are right.) I would also introduce yet another method called itShouldReturnEmailDuplicateError, correct?

Would these be best kept in a separate context, or would I be better off with 3-4 specifications for the create method inside a context for the "model" I'm specifying? Should I keep the contexts method-specific in order to promote the 1:N relationship instead of the 1:1?

Back to the problem at hand:

In the above code, obviously there are no mock objects and no way of introducing them unless I refactor the logger out of the method being specified...

I have to re-read this post...so my next sentence may go off on a tangent from the last one... :P
As to mocking, it's the same effect. Logger_File and Logger_Database are external objects to Logger (where our API still resides). Do we mock those? Nope. If we refactored them out of Logger, but determined they needed no extra specs, then we can assume their behaviour is already specified. Mock Objects are a tool for exploring the interactions between objects, when those interactions are not already known. If we already know (we just refactored afterall!) then there's nothing to be gained (back to billable hours and costs ).
Ok, I understand why you wouldn't mock Logger_File and Logger_Database, at least for the sake of observing their behavior, but what about mocking them for:

1) Efficiency. Obviously a mocked object is lightning fast and lightweight.
2) Verifying object interactions.

Assuming the logger is refactored out of the model above, and its two strategies are also injected into the logger for extensibility...wouldn't it make sense to mock both the strategies and the logger itself?

I can't say it makes sense to spec the strategies inside the same specification as the model method, as that is hopefully already done if necessary, but mocking them to avoid any unnecessary code execution makes sense, doesn't it? :?

Code: Select all

Mock::generate('Logger');
Mock::generate('Logger_Database');

public function itShouldReturnEmailInvalidError()
{     
  $logger_driver = new MockLogger_Database();
  $logger = new MockLogger($logger_driver); // Bind the database driver to logger class

  $model = new Model($logger); // Bind the logger class to model
  $model->create('invalid-email');     

  $this->spec($model->getErrorCode())->should->be(Model::EMAIL_INVALID); 
}
The way I see it...I have mocked the logger class :idea: By mocking the logger class, I no longer need to mock its composite objects - there is no point! $logger_driver above, although mocked, will not actually be injected into the object at all when bound to the Logger class...

Cheers :)
User avatar
Weirdan
Moderator
Posts: 5978
Joined: Mon Nov 03, 2003 6:13 pm
Location: Odessa, Ukraine

Post by Weirdan »

The way I see it...I have mocked the logger class Idea By mocking the logger class, I no longer need to mock its composite objects - there is no point! $logger_driver above, although mocked, will not actually be injected into the object at all when bound to the Logger class...
Yes, usually you want to mock only direct collaborators of the class under test.

Maugrim, could we spec out the logging behaviour of the model class as well? (something along the lines of):

Code: Select all

public function itShouldLogInvalidMessages()
{     
  $logger = $this->getMock('Logger');
  $logger
     ->expects($this->once())
     ->method('record')
     ->with($this->equalTo('Invalid: invalid-email'));
         

  $model = new Model($logger); // Bind the logger class to model
  $model->create('invalid-email');     

  $this->spec($logger->verify())->should->be(true);
}
alex.barylski
DevNet Evangelist
Posts: 6267
Joined: Tue Dec 21, 2004 5:00 pm
Location: Winnipeg

Post by alex.barylski »

Weirdan wrote:
The way I see it...I have mocked the logger class Idea By mocking the logger class, I no longer need to mock its composite objects - there is no point! $logger_driver above, although mocked, will not actually be injected into the object at all when bound to the Logger class...
Yes, usually you want to mock only direct collaborators of the class under test.

Maugrim, could we spec out the logging behaviour of the model class as well? (something along the lines of):

Code: Select all

public function itShouldLogInvalidMessages()
{     
  $logger = $this->getMock('Logger');
  $logger
     ->expects($this->once())
     ->method('record')
     ->with($this->equalTo('Invalid: invalid-email'));
         

  $model = new Model($logger); // Bind the logger class to model
  $model->create('invalid-email');     

  $this->spec($logger->verify())->should->be(true);
}
That is the gist of the idea as far as I understand it...you mock collaborators to verify their interactions, and you spec the behavior...
alex.barylski
DevNet Evangelist
Posts: 6267
Joined: Tue Dec 21, 2004 5:00 pm
Location: Winnipeg

Post by alex.barylski »

One more question for you Maugrim when you have a chance. :)

My model method create...it has several behaviors (I think):

1) Verify email is not empty and is valid
2) Verify email address is not already taken - duplicates
3) The actual creation of the record in which case a valid PKID is returned

I have a spec or unit test (as I'm still using SimpleTest) for each of the above behaviors...but currently they live inside a single class which has a 1:1 mapping with the real model class...

Should I move these behaviors into a separate class/context? I realize this would promote the 1:N mapping instead of the 1:1 typically found in unit testing...

Does it reduce coupling between test and production code...? Can you give me an example, if so?

Is it then best to have a separate context/class for each method you specify, as many methods would likely have more than one behavior worth specifying...

Cheers :)
User avatar
Jenk
DevNet Master
Posts: 3587
Joined: Mon Sep 19, 2005 6:24 am
Location: London

Post by Jenk »

I wouldn't move them into separate test cases/spec contexts. They are all relevant to your email creation behaviour. I would separate them into their own test/spec methods, and probably call them shouldRaiseEmailAlreadyExistsException (or Error), shouldRaiseEmailNotValidException (or Error, again), and finally shouldReturnValidPkid, where hopefully the names are self-explanatory: the first is a scenario with a duplicate email, the second an invalid email, and the third a correct email.

The invalid email may need more than one should/assert - one for a blank/empty string, another for something like "foobar".
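A hedged sketch of what that grouping could look like - one context class for the create() behaviour, one spec method per scenario. The Model here is a self-contained stand-in with hard-coded behaviour (constants, the duplicate list, and the returned PKID are all my assumptions), purely so the sketch runs on its own:

```php
<?php
// Hypothetical spec context following the naming suggested above.
// The Model is a minimal fake so this example is runnable in isolation.

class Model {
    const EMAIL_INVALID = 1;
    const EMAIL_DUPLICATE = 2;
    private $error = 0;
    private $existing = array('taken@example.com'); // pretend DB contents

    public function create($email) {
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            $this->error = self::EMAIL_INVALID;
            return false;
        }
        if (in_array($email, $this->existing)) {
            $this->error = self::EMAIL_DUPLICATE;
            return false;
        }
        return 101; // pretend PKID from a successful INSERT
    }
    public function getErrorCode() { return $this->error; }
}

// One context, three scenarios; each method checks one behaviour
class DescribeModelCreate {
    public function shouldRaiseEmailNotValidError() {
        $model = new Model();
        $model->create('foobar');
        return $model->getErrorCode() === Model::EMAIL_INVALID;
    }
    public function shouldRaiseEmailAlreadyExistsError() {
        $model = new Model();
        $model->create('taken@example.com');
        return $model->getErrorCode() === Model::EMAIL_DUPLICATE;
    }
    public function shouldReturnValidPkid() {
        $model = new Model();
        return $model->create('new@example.com') > 0;
    }
}
```

The methods return booleans only because there is no spec framework in this sketch; in SimpleTest or a BDD runner each would end in an assertion/should instead.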
alex.barylski
DevNet Evangelist
Posts: 6267
Joined: Tue Dec 21, 2004 5:00 pm
Location: Winnipeg

Post by alex.barylski »

Jenk wrote:I wouldn't move them into separate test cases/spec contexts. They are all relevant to your email creation behaviour. I would separate them into their own test/spec methods, and probably call them shouldRaiseEmailAlreadyExistsException (or Error), shouldRaiseEmailNotValidException (or Error, again), and finally shouldReturnValidPkid, where hopefully the names are self-explanatory: the first is a scenario with a duplicate email, the second an invalid email, and the third a correct email.

The invalid email may need more than one should/assert - one for a blank/empty string, another for something like "foobar".
OK, groovy...I currently have all methods related to the model grouped inside a model context class, so this works for me...