It seems you're possibly falling into the "everything must have an automated test" hole. Not everything does.
It's a fundamental tenet of test-driven design that code is only ever written in response to a failing test and so, if you're doing it right, there won't be anything which isn't covered by a test (except for trivially simple things like getters or setters). Even if you write tests after the fact rather than test-first, everything still needs to be tested.
A functional test - by a person, at a terminal, interacting with the system just as a normal user would - is the best test you can have for this example (and others like it).
How many buttons can you click per minute? How long does it take to examine the results of a button click? How many mistakes will you make? Will you miss something important at the end of a long day when you're tired? How many tests can you remember? How many will you have forgotten six months from now?
There's really no comparison between automated and manual testing. The computer will never forget anything, never get tired, never make a mistake, and it will chew through the tests in a fraction of the time you can. How could manual testing possibly be better? I can guarantee that somewhere down the line you won't even be able to remember what you're supposed to be checking.
That, incidentally, is the main task of TDD: requirements gathering, and formalising those requirements as automated tests. An executable specification defines the target to be hit and makes sure it stays hit. Acceptance (functional) tests express customer requirements, while unit tests represent hours of research into the problem domain and the possible routes through it.
If you held a gun to a test-infected programmer's head and forced them to choose between the tests and the code, they'd choose the tests every time. With these as a guide it's relatively easy to rattle out more code. Recreating all the tests would be much, much harder.
If your concern is with string comparison, then create a beautifier to normalise the strings before comparing them - even a simple one that makes everything case insensitive and replaces all runs of whitespace with special markers, or just trims and collapses them to a single space, etc.
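A minimal sketch of such a beautifier (Python purely for brevity - the idea ports straight to PHP, and the function name is my own invention):

```python
import re

def normalise_sql(sql):
    """Lowercase the string and collapse all runs of whitespace to a
    single space, so cosmetically different strings compare equal."""
    return re.sub(r"\s+", " ", sql).strip().lower()

# Two cosmetically different spellings of the same statement
a = "SELECT foo\n   FROM   bar"
b = "select foo from bar"
assert normalise_sql(a) == normalise_sql(b)
```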
That's just the start of it. Aliases might be named differently. Predicates might be in a different order. Subselects vs joins. All different strings, but the same query. SQL is code and, like all code, it's the logical statements which need to be tested, not the string phrasing - and that means throwing the SQL at a real database. Would you unit test a PHP class by comparing an expected class definition with the actual class read from a file? That would be an equally brittle test which doesn't really prove anything.
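For instance (hypothetical query strings of my own, just to make the point): no amount of whitespace or case normalisation can make these two compare equal, yet they express the same logic.

```python
# Two ways of phrasing the same lookup - a string-expectation mock
# sees two completely different queries and reports a false failure.
q_join = "select name from users inner join orders using(uid)"
q_sub  = "select name from users where uid in (select uid from orders)"
assert q_join != q_sub  # same logic, different strings
```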
In practice a mocked connection plays out something like this.
(1) With trivially simple SQL like "select foo from bar", and a database schema which never changes, none of my objections would be significant. A mocked connection would be OK. However, I've never written an app like that.
(2) A mocked connection doesn't tie the code down to a real database. If schema changes get out of step with the tests, the tests will pass when they should fail. Schemas - everything, really - are most volatile in the early stages of development, but even mature apps need to change from time to time.
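To make that concrete, here's a hypothetical sketch (Python with an in-memory SQLite database, purely for illustration - the schema and data are invented). Because the test builds the real schema, renaming the column breaks the query loudly at test time; a mocked connection would carry on matching strings and passing.

```python
import sqlite3

# The test constructs the actual schema the query runs against...
conn = sqlite3.connect(":memory:")
conn.execute("create table bar (foo text)")   # rename 'foo' and...
conn.execute("insert into bar values ('baz')")

# ...so this query is verified by the database parser, not by a
# string comparison that would pass regardless.
rows = conn.execute("select foo from bar").fetchall()
assert rows == [("baz",)]
```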
(3) Say you've got a complex query to write - as in the example below. Could you write this straight off the top of your head or will you need to run it past a database for verification?
Code:
select path as path, group_concat(offset) as offsets from files
inner join (select c.name, c.fid, m.name_offset as offset
    from classes as c
    inner join methods as m using(cid)
    where m.name='changeMe' and c.hid in (
        select distinct c.hid from classes as c
        inner join classes_interfaces as ci using(cid)
        inner join (
            select candidates.iid, candidates.name from (
                select distinct i.iid, i.name from interfaces as i
                inner join classes_interfaces as ci using(iid)
                inner join classes as c using(cid)
                where find_in_set(c.name, (
                    select group_concat(name) as members from classes as c
                    group by hid
                    having find_in_set('Foo', members))))
            as candidates
            inner join methods as m using(iid)
            where m.name='changeMe')
        as links using(iid))
    union
    select ifo.name, ifo.fid, m.name_offset as offset from (
        select distinct i.iid, i.name, i.fid from interfaces as i
        inner join classes_interfaces as ci using(iid)
        inner join (
            select c.cid, c.name from classes as c where c.hid in (
                select distinct c.hid from classes as c
                inner join classes_interfaces as ci using(cid)
                inner join (
                    select candidates.iid, candidates.name from (
                        select distinct i.iid, i.name from interfaces
                        inner join classes_interfaces as ci using(iid)
                        inner join classes as c using(cid)
                        where find_in_set(c.name, (
                            select group_concat(name) as members from classes as c
                            group by hid
                            having find_in_set('Foo', members))))
                    as candidates
                    inner join methods as m using(iid)
                    where m.name='changeMe')
                as links using(iid)))
        as clh on (clh.cid=ci.cid))
    as ifo
    inner join methods as m using(iid)
    where m.name='changeMe')
as items using(fid)
group by fid
order by path
I defy anyone to write non-trivial SQL like this reliably. It's just not possible (this one was built up phrase by phrase until the final query, and the main test, suddenly started to pass). When I said that I thought mocking connections and specifying SQL string expectations was a bad idea I was really just being polite: I think it's insane.
Still, let's say I soldier on. I will make mistakes; no doubt about it. Can you spot the deliberate mistake (a different one this time) in the above query? Of course you can't - you need the DBMS parser.
So, we have a malfunctioning app. What happens next? Nothing. Since the class is written to conform to the specifications defined in the tests, it shares the same error. The expected SQL matches the actual SQL received, and so the tests pass when they should fail.
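You can play the false positive out in a few lines (a hypothetical sketch, with an invented broken query - SQLite stands in for the real DBMS):

```python
import sqlite3

broken_sql = "select foo frm bar"   # typo: 'frm'

# The string expectation was written from the same (wrong) spec, so the
# "mocked" test compares the broken query against itself and passes.
expected = broken_sql
assert expected == broken_sql       # false positive

# A real connection hands the string to an actual parser, which rejects it.
conn = sqlite3.connect(":memory:")
conn.execute("create table bar (foo text)")
try:
    conn.execute(broken_sql)
    parser_caught_it = False
except sqlite3.OperationalError:
    parser_caught_it = True
assert parser_caught_it
```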
OK, a manual test at the UI level would alert you to the problem, but will you remember to carry it out, and will you remember the exact procedure which exercises this part of the app?
Perhaps you will, but even so a failing acceptance test doesn't tell you the reason for the failure. It could be anything, anywhere in the app. Normally the unit tests would provide detailed information when something goes wrong, but in this case the test isn't failing - a false positive. You certainly do need (automated) functional tests, but they're at the wrong level to test such fine-grained details.