I can see how decoupling that kind of code from your main index.php might help keep things slightly clearer, but...
Assume I have an application which uses CSV or AdoDB:
Do I use a service locator to abstract the data source or would a DAO make more sense?
If I understand correctly, AdoDB is already something of a DAO, as it provides a consistent API for manipulating all SQL storage engines - but what about CSV? The connections are quite different: opening files and working with file handles, as opposed to SQL engine handles...
If I were to implement a DAO for SQL and CSV (XML, etc.), would I basically have to define the API first, then use a service locator to set the connection contexts, and manipulate the data source via the DAO API?
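Here's roughly what I'm picturing - a small common interface that both backends would implement. All the names here are made up by me for illustration; nothing below is AdoDB's actual API:

```php
<?php
// Hypothetical common DAO interface - the API I'd have to "determine first".
interface DataSource
{
    /** Fetch all rows matching a simple key/value filter. */
    public function find(array $criteria): array;

    /** Add a row; returns true on success. */
    public function insert(array $row): bool;
}

// The SQL implementation would wrap an AdoDB (or PDO) connection internally.
// The CSV implementation works on a plain file handle instead:
class CsvDataSource implements DataSource
{
    private $header = [];
    private $rows = [];

    public function __construct(string $path)
    {
        $fh = fopen($path, 'r');
        $this->header = fgetcsv($fh);            // first line = column names
        while (($line = fgetcsv($fh)) !== false) {
            $this->rows[] = array_combine($this->header, $line);
        }
        fclose($fh);
    }

    public function find(array $criteria): array
    {
        // Naive linear scan; an SQL backend would build a WHERE clause instead.
        return array_values(array_filter($this->rows, function ($row) use ($criteria) {
            foreach ($criteria as $k => $v) {
                if (!isset($row[$k]) || $row[$k] != $v) {
                    return false;
                }
            }
            return true;
        }));
    }

    public function insert(array $row): bool
    {
        $this->rows[] = $row; // in-memory only for this sketch; a real one would write back
        return true;
    }
}
```

The calling code would only ever see `DataSource`, so whether it's a file handle or an SQL handle underneath stops mattering at that layer.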
Wouldn't I already be wrapping one DAO (AdoDB) inside another DAO (AdoDB, CSV)? Is that the Adapter pattern?
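If so, the wrapping seems pretty thin - the adapter would just translate my common method names onto AdoDB's own calls (`Execute()`/`GetRows()`, as far as I understand its API). Sketch of what I mean, with hypothetical method names:

```php
<?php
// Hypothetical adapter: exposes the same method the CSV backend would have,
// delegating to an AdoDB-style connection underneath.
class AdodbAdapter
{
    private $conn; // an AdoDB connection object (or anything with the same shape)

    public function __construct($adodbConnection)
    {
        $this->conn = $adodbConnection;
    }

    // Same method name a CsvAdapter would expose.
    public function fetchAll(string $table): array
    {
        // Delegate straight through to AdoDB's Execute()/GetRows() style calls.
        return $this->conn->Execute("SELECT * FROM " . $table)->GetRows();
    }
}
```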
Obviously a single large DAO providing an API for an entire application would be *heavy* overkill, so grouping according to class would make sense - hence the use of classes, I guess, eh?
I should note that I'm actually leaning more towards a procedural DAL (data access layer) for CSV and SQL, but the connection configuration is where I'm struggling slightly - which is where I think a service locator might come in handy. The reason is performance: instantiating an object for accessing a database or CSV file, when there is likely no member data associated with it, seems awfully wasteful. Simple function calls seem more appropriate, but where in the application do I set up the DB for usage or open the required CSV files? Service locator, anybody?
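To make the question concrete, here's a rough sketch of that combination - a service locator holding lazily-opened connections, with plain functions on top. All names are hypothetical, not from any library:

```php
<?php
// Hypothetical service locator: connections are described once at bootstrap,
// but nothing is actually opened until first use.
class ConnectionLocator
{
    private static $factories = [];
    private static $instances = [];

    public static function register(string $name, callable $factory): void
    {
        self::$factories[$name] = $factory; // lazy: not invoked yet
    }

    public static function get(string $name)
    {
        if (!isset(self::$instances[$name])) {
            self::$instances[$name] = (self::$factories[$name])();
        }
        return self::$instances[$name];
    }
}

// Bootstrap (e.g. in index.php): describe the connections in one place.
ConnectionLocator::register('users.csv', function () {
    return fopen('/tmp/users.csv', 'r'); // could equally be ADONewConnection(...)
});

// Procedural DAL: plain functions, no object per data source.
function user_rows(): array
{
    $fh = ConnectionLocator::get('users.csv');
    rewind($fh); // handle is shared, so start from the top each call
    $rows = [];
    while (($line = fgetcsv($fh)) !== false) {
        $rows[] = $line;
    }
    return $rows;
}
```

The win I'm hoping for: a connection that's registered but never used costs nothing, and the DAL functions never need to know where the handle came from.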
Thoughts, suggestions, example code of what each might look like???
Cheers