
Testing the output

Posted: Thu Jul 01, 2010 8:48 am
by Bruno De Barros
I'm currently writing a class to generate a set of management pages (for adding/editing/deleting/managing records of a table), and I was wondering what is the best way to test it.

I've got template html files, my class picks them up, fills them up with the necessary data, and outputs them. I want to make sure the page works, and I was considering making simple string searches to make sure the required data showed up where it's meant to be. I'm sure there must be other ways to do this kind of testing, no?

How do you guys do it?
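To make the idea concrete, the string-search approach could look something like this minimal sketch (shown in Python just for illustration; `render_page()` and the template are hypothetical stand-ins for the management-page class):

```python
# Minimal sketch of string-search output testing. TEMPLATE and
# render_page() are hypothetical stand-ins for the real template
# files and page-generating class.

TEMPLATE = "<html><body><h1>{title}</h1><ul>{rows}</ul></body></html>"

def render_page(title, records):
    """Fill the template with a title and one <li> per record."""
    rows = "".join("<li>{}</li>".format(r) for r in records)
    return TEMPLATE.format(title=title, rows=rows)

html = render_page("Users", ["alice", "bob"])

# The crude but effective check: did the data show up where expected?
assert "<h1>Users</h1>" in html
assert "<li>alice</li>" in html
assert "<li>bob</li>" in html
```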

Re: Testing the output

Posted: Fri Jul 02, 2010 9:35 am
by Jade
Test all possible cases, i.e. you want to make sure you're hitting all of your code paths and that they work. Try entering data in every field and then check the database to see if it added/updated/removed records as expected. Also try entering invalid data, out-of-range data and SQL injections. Basically, you want to try to break it :)

Re: Testing the output

Posted: Fri Jul 02, 2010 9:48 am
by Bruno De Barros
Thanks Jade :) It's much easier to do that when you have a class and its methods, since you can just call them a bunch of times with a variety of different arguments to try to break them, but when it comes to testing an HTML page it gets harder :P

Re: Testing the output

Posted: Fri Jul 02, 2010 10:39 am
by Jade
Typically what I do is use the Firefox Web Developer add-on, going through enabling/disabling things and using the auto-form-fill command to try to break the page. Of course it's tough to account for every situation (yes, users are very dumb and often enter incorrect data), but the harder you try to kill your code, the better your end results will be.

Re: Testing the output

Posted: Fri Jul 02, 2010 2:09 pm
by Bruno De Barros
The problem with that is that I'd have to do it manually. I've been searching a bit, and I might use Selenium to test that the HTML functionality works as required. Thanks :)

Re: Testing the output

Posted: Fri Jul 02, 2010 2:29 pm
by Jade
Hmm, I dunno how fond I am of a script being responsible for checking if my html/forms are working properly... but that's just me!

Re: Testing the output

Posted: Fri Jul 02, 2010 5:43 pm
by Bruno De Barros
Selenium acts like a human, as far as I've seen. Besides, when testing hundreds of times a day, you can't manually check all the pages to make sure they work. Sure, you can do it manually now and then, but I like to be sure that if I change a piece of code, it doesn't break anything at all: neither the code functionality nor the HTML pages.

Re: Testing the output

Posted: Sat Jul 03, 2010 12:14 pm
by josh
Output buffering & regex. Or use this instead of regex: http://framework.zend.com/manual/1.10/e ... query.html

Basically it works like jQuery. You can say: give me an array of all anchor tags, or give me an array of anchor tags that match this CSS selector, or find me the anchor tag with this id attribute. You can then pattern-match on individual tags/subsections of the output.
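An analogous sketch using only Python's standard library (a simplified stand-in for what the Zend component does with CSS selectors): parse the output, pull out specific tags, and assert on them instead of regexing the whole document.

```python
# Collect anchor tags from rendered output and assert on them,
# rather than string-matching the entire page. TagCollector and
# the sample HTML are illustrative, not from any real framework.
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collect the attributes of every occurrence of one tag name."""
    def __init__(self, tag):
        super().__init__()
        self.tag = tag
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag == self.tag:
            self.found.append(dict(attrs))

html = ('<html><body><a id="edit" href="/edit/1">Edit</a>'
        '<a class="nav" href="/">Home</a></body></html>')

collector = TagCollector("a")
collector.feed(html)

# "Give me an array of all anchor tags"...
assert len(collector.found) == 2
# ...or "find me the anchor tag with this id attribute".
edit_links = [a for a in collector.found if a.get("id") == "edit"]
assert edit_links[0]["href"] == "/edit/1"
```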

Re: Testing the output

Posted: Sat Jul 03, 2010 5:27 pm
by Bruno De Barros
That looks really good, thanks! I'll try it out.

Re: Testing the output

Posted: Sun Jul 04, 2010 9:02 am
by McGruff
You need SimpleTest and its web tester.

Obviously you can't point tests at the production server if they will manipulate data. I always set up an identical staging server in VirtualBox or the like. Set tests which manipulate data to skip if they target production, so you can run the rest of the test suite against the live site.
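That skip-on-production guard might look like this (a sketch in Python's unittest for illustration; `TARGET` and the hostname check are assumptions, and you'd normally point `TARGET` at the staging VM):

```python
# Skip destructive tests when the suite targets production, so the
# read-only tests can still run against the live site. The URL and
# host naming convention here are hypothetical.
import unittest
from urllib.parse import urlparse

TARGET = "http://staging.example.test"   # hypothetical staging URL

def is_production(url):
    """Treat anything not on a staging/localhost host as production."""
    host = urlparse(url).hostname or ""
    return not (host.startswith("staging.") or host in ("localhost", "127.0.0.1"))

class RecordAdminTests(unittest.TestCase):
    @unittest.skipIf(is_production(TARGET),
                     "destructive test: never run against production")
    def test_delete_record(self):
        # ...drive the delete form on TARGET and check the database...
        pass

    def test_list_page_renders(self):
        # Read-only checks are safe to run anywhere, including live.
        pass
```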

Re: Testing the output

Posted: Mon Jul 05, 2010 10:25 pm
by josh
Through-the-browser testing can lead to tests that pass or fail randomly. FYI, you don't need SimpleTest: if you want to use brittle browser-based tests, then PHPUnit + Selenium is another good option. ("Brittle" is just a word; I'm not criticizing, just characterizing.) Since they produce a high rate of test failures, you end up "hardening" your tests, which basically adds a ton of work for yourself. Browser-based tests are an asset, but I prefer the defect localization of a unit test.

My definition of a unit test is one that is designed to execute quickly (it may touch a database, but must run in under 200 ms) and that covers as small a feature as possible. That's defect localization: if a feature gets broken, ideally only one corresponding test fails, making the correlation between test and defect easier for the developer.
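One way to make that 200 ms budget enforceable rather than aspirational is a timing guard around each test (a hypothetical helper, sketched in Python), so a "unit" test that drifts slow fails outright:

```python
# A decorator that fails the wrapped test if it exceeds a time
# budget in milliseconds. The budget value and the sample test
# body are illustrative.
import functools
import time

def time_budget(ms):
    """Fail the wrapped test if it runs longer than `ms` milliseconds."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = test_fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            assert elapsed_ms < ms, "%s took %.0f ms (budget: %d ms)" % (
                test_fn.__name__, elapsed_ms, ms)
            return result
        return wrapper
    return decorator

@time_budget(200)
def test_render_is_fast():
    "<div>test123</div>".count("test123")  # stand-in for real work

test_render_is_fast()  # finishes well under budget
```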

So instead of having 100 tests testing against strings like this:

Code: Select all

<html>
<body>
<div>
test123
</div>
</body>
</html>

I would prefer to have 100 tests against this string:

Code: Select all

<div>
test123
</div>
and only one test worrying about the rest of the HTML document.
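In code, that split could look like this sketch (Python for illustration; `render_row()` and `render_document()` are hypothetical names): many focused tests target the fragment renderer alone, and exactly one test worries about the surrounding document.

```python
# Fragment-first test layout: heavy testing on the small unit,
# a single test on the full-page wrapper around it.

def render_row(value):
    """The small unit under heavy test: one div fragment."""
    return "<div>\n{}\n</div>".format(value)

def render_document(fragment):
    """The wrapper only one test worries about."""
    return "<html>\n<body>\n{}\n</body>\n</html>".format(fragment)

# Many focused tests hit the fragment alone...
assert render_row("test123") == "<div>\ntest123\n</div>"
assert "bruno" in render_row("bruno")

# ...and exactly one test checks the surrounding document.
page = render_document(render_row("test123"))
assert page.startswith("<html>") and page.endswith("</html>")
assert "<div>\ntest123\n</div>" in page
```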

Re: Testing the output

Posted: Mon Jul 05, 2010 11:27 pm
by McGruff
I wouldn't say browser-based tests are brittle; I do think they are essential. Unit tests exercise individual components, but they don't test whether everything works properly together. I could do something as trivial as misconfiguring a path in a config file and knock out the whole site even though all the unit tests are passing.

The TDD way - and testing doesn't really make much sense without TDD - would be to start with an acceptance test (e.g. with SimpleTest's web tester or the equivalent) which describes a web page, or maybe a form submission and the expected response. After that it's unit tests and code until finally the acceptance test passes and you're done (but don't forget to come back and refactor).
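Stripped of the browser, that outside-in flow can be sketched like this (Python for illustration; `handle_request()` and `validate()` are hypothetical stand-ins for the real application entry point and its units):

```python
# Outside-in sketch: an acceptance-level check of a form submission
# and its expected response, backed by a unit-level validator that
# the inner TDD loop would drive out.

def validate(form):
    """Unit-level piece: reject empty or whitespace-only names."""
    return bool(form.get("name", "").strip())

def handle_request(method, path, form=None):
    """Hypothetical app entry point: routes one add-record form."""
    if method == "POST" and path == "/records/add":
        if not validate(form or {}):
            return 400, "<p>Name is required.</p>"
        return 200, "<p>Record '{}' added.</p>".format(form["name"])
    return 404, "<p>Not found.</p>"

# Acceptance test: the form submission and the expected response.
status, body = handle_request("POST", "/records/add", {"name": "bruno"})
assert status == 200
assert "Record 'bruno' added." in body

# And the unhappy path the units must also cover.
status, body = handle_request("POST", "/records/add", {"name": ""})
assert status == 400
```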

As well as guiding you through the first coding run, a complete suite of acceptance tests is absolutely vital for future updates. It's so easy to break something else when you add a new feature. I wouldn't dare upload a line of code to production until I've checked that all tests are passing on staging, and I'll run them again (as far as possible, see above) on production when I'm done. I honestly think it's extremely unprofessional to work any other way. In my current job, if I make a mistake, maybe we can't process payments, or maybe the services being paid for go offline, and we won't know anything is wrong until someone complains. That's just not acceptable, not for one second. It's bad for the site and it's bad for my professional reputation.

You can't avoid testing. The question is will you test in private with comprehensive, automated acceptance tests or will you test in public with "wait until someone complains" customer tests?

Re: Testing the output

Posted: Tue Jul 06, 2010 9:14 am
by Jade
I have to agree with McGruff on this one. Sure, you can use something to check your input and make sure the page renders, but that won't tell you whether your logic WORKS. If I enter "dog" into Google, I don't want to get a long list of results for "water softeners"....

Re: Testing the output

Posted: Tue Jul 06, 2010 9:50 am
by josh
McGruff wrote:I wouldn't say browser-based tests are brittle. I do think they are essential. Unit tests test individual components but they don't test if everything is working properly together. I could do something as trivial as misconfiguring a path in a config file and knock out the whole site even though all the unit tests are passing.

....

You can't avoid testing. The question is will you test in private with comprehensive, automated acceptance tests or will you test in public with "wait until someone complains" customer tests?
If the whole site is down, that isn't going to slip past my manual testers. Surely someone as experienced as yourself knows you can't just stop manual tests, no matter how strong your automated tests are. To me it seems like you're making the argument that we all HAVE to do things your way. I'm saying I choose not to.