Roja wrote: I've run into several issues with implementing adodb-lite.
There is no session-clob support, which is needed for large-object session variables on Postgresql. Since the ADOdb session table on Postgresql uses a CLOB by default, that's a critical need.
Postgresql doesn't even have a CLOB datatype. I spent the last hour and a half reading up on Postgresql and CLOB/BLOB support. If you check the data dictionary for ADOdb, you will find it uses the TEXT datatype for large character fields, and with Postgresql 7 and higher the VARCHAR datatype can hold very large values, e.g. VARCHAR(4000) for a field of up to 4,000 characters. Basically, if you enable CLOBs on a Postgresql session you greatly slow down the session handler, because it executes a bunch of totally useless code to store virtually the same data into a TEXT field. Yes, the ADOdb session handler uses a TEXT datatype when Postgresql is selected.
Out of all the databases that ADOdb supports, only Oracle actually has a CLOB datatype. Enabling CLOBs on any database other than Oracle doesn't do anything but increase the execution time of the session handler.
If you are using the UpdateClob function in ADOdb for Postgresql, all you are doing is this...
$this->Execute("UPDATE $table SET $column=? WHERE $where", array($val));
You can do that without using the UpdateClob function and save yourself a step.
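To make that concrete, here is a minimal sketch of replacing UpdateClob with a plain Execute call. The FakeDB class is only a stand-in for an ADOdb Lite connection so the example runs without a database, and the table/column names are just the usual session-table defaults; with a real connection, only the final Execute() line matters.

```php
<?php
// FakeDB is a hypothetical stand-in for an ADOdb Lite connection;
// it records the query instead of running it against a database.
class FakeDB
{
    public $lastSql;
    public $lastArgs;

    function Execute($sql, $args = array())
    {
        // A real connection would run the parameterized UPDATE here.
        $this->lastSql  = $sql;
        $this->lastArgs = $args;
        return true;
    }
}

$db     = new FakeDB();
$table  = 'sessions';
$column = 'sessdata';
$where  = "sesskey = 'abc123'";
$val    = serialize(array('user_id' => 42));

// Instead of $db->UpdateClob($table, $column, $val, $where), just:
$db->Execute("UPDATE $table SET $column=? WHERE $where", array($val));

echo $db->lastSql, "\n";
```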
So no, I will not add CLOB support for the session handler until I add Oracle support.
Roja wrote: It appears that CacheExecute is not implemented, is there an alternative?
Now this is a sticking point. I will eventually create a Caching Module for ADOdb Lite, but I cannot see any real improvement in doing so, as caching has a couple of drawbacks.
But there is a common misconception about caching and ADOdb. Caching only helps if you are accessing huge, poorly indexed tables whose queries scan the entire table before sending the data to the client.
In other words, caching will only help if you have tables with hundreds of thousands of entries, a large number of fields, and queries without LIMIT clauses on those fields. Queries without a LIMIT will cause a scan of the entire table on most databases.
Also, the overhead in ADOdb of reading in and unserializing the serialized data from the file it was stored in can cost as much as the original query, if the initial query returned a large result set.
I have performed a number of tests on ADOdb using both cached and uncached queries, and most of the time the uncached queries were faster. This is because MySql can usually scan the table faster than PHP can load and unserialize the cached data. Now, if the stored result set is incredibly SMALL, then you can see a speed advantage. Caching in ADOdb CAN be faster than the database queries if the database system being used is slow by default, but databases like MySql do not have this problem, from my testing.
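A rough, self-contained illustration of that load-and-unserialize cost (the data here is made up; real ADOdb writes a serialized RecordSet to a cache file, but the shape of the work on every cache hit is the same):

```php
<?php
// Build a stand-in "result set" of 20,000 three-field rows.
$rows = array();
for ($i = 0; $i < 20000; $i++) {
    $rows[] = array(
        'id'    => $i,
        'name'  => "user$i",
        'email' => "user$i@example.com",
    );
}

// Simulate the cache write: serialize the result set to a temp file.
$file = tempnam(sys_get_temp_dir(), 'qc');
file_put_contents($file, serialize($rows));

// Simulate a cache hit: read the file back and unserialize it.
// On a large result set this step alone can rival the cost of simply
// re-running an indexed query against the database.
$start   = microtime(true);
$cached  = unserialize(file_get_contents($file));
$elapsed = microtime(true) - $start;
unlink($file);

printf("unserialized %d rows in %.4f s\n", count($cached), $elapsed);
```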
There are two advantages to using cached queries.
1. The CPU load is transferred from the database server to the client server.
2. The cached query CAN be faster if the database server is storing the data on very slow hard drives or the database itself is very slow.
But in many cases caching will not speed up the queries, and can actually slow them down.
Another disadvantage of the caching scheme used in ADOdb is that it cannot take changes to the database into account. If you use the default 60 MINUTE cache time in ADOdb, then no changes to the database will be visible until 60 minutes after the initial cached query is executed. You could have 1,000 changes to the database table during that 60-minute period; none of them will show up in the cached query until the 60 minutes have elapsed.
In other words, the caching system in ADOdb is very rudimentary and cannot detect when changes have been made to the database tables, so old, outdated data is returned by the cached query. If ADOdb were able to detect when a table was updated, it could clear the cached data and execute a non-cached query to rebuild the cache. But sadly that is not possible, as many databases do not offer the ability to detect that a table has been updated. In that case, caching should only be used on tables whose data does not change frequently.
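The purely time-based expiry can be sketched in a few lines (file names are placeholders; ADOdb's real cache layer does more bookkeeping, but the staleness rule is essentially this):

```php
<?php
// The cache file is trusted for $secs seconds, no matter how many
// writes hit the underlying table in the meantime.
function cache_is_fresh($file, $secs)
{
    return file_exists($file) && (time() - filemtime($file)) < $secs;
}

// Simulate a cached query result written to disk.
$file = tempnam(sys_get_temp_dir(), 'qc');
file_put_contents($file, serialize(array('old', 'result', 'set')));

// Freshly written, so with a 3600-second (60-minute) window it counts
// as "fresh" even if the table changed one second after the cache was built.
$fresh = cache_is_fresh($file, 3600);

unlink($file);
$gone = cache_is_fresh($file, 3600); // no cache file: must re-query

var_dump($fresh, $gone);
```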
If you are using MySql, you would be better off using its built-in query cache instead of caching in ADOdb. If the query cache is enabled in MySql, all queries are automatically cached, and if a table is updated, MySql knows to clear the cache for the affected queries and rebuild it for you. The built-in caching in MySql is also MANY times faster than anything ADOdb or any PHP-based caching program could come up with.
MySql also has the ability to report when a table was last modified. When I eventually create the caching module for MySql, it will not use the 60-minute cache delay but will check when the table was last updated. This will at least make the MySql version of the cache module intelligent enough to rebuild the query data when the table has been updated.
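One way this could work (a sketch, not the eventual module): MySql's `SHOW TABLE STATUS LIKE 'tablename'` returns an `Update_time` column (reliable for MyISAM tables; it can be NULL for other engines), which can be compared against the cache file's mtime instead of a fixed timeout. The credentials, table, and file names below are placeholders; the live-connection part is shown commented out so the comparison logic itself is runnable.

```php
<?php
// Stale the moment the table has been written after the cache was built.
function cache_is_stale($cacheMtime, $tableMtime)
{
    return $tableMtime > $cacheMtime;
}

/*
// With a live connection it would look roughly like this:
$db  = mysqli_connect('localhost', 'user', 'pass', 'mydb');
$res = mysqli_query($db, "SHOW TABLE STATUS LIKE 'sessions'");
$row = mysqli_fetch_assoc($res);
$tableMtime = strtotime($row['Update_time']); // last write to the table
$cacheMtime = filemtime('/tmp/adodb_cache/sessions.cache');
*/

// Simulated timestamps: the table was updated after the cache was written,
// so the cached query should be rebuilt immediately, not in 60 minutes.
$cacheMtime = strtotime('2005-06-01 12:00:00');
$tableMtime = strtotime('2005-06-01 12:00:05');

var_dump(cache_is_stale($cacheMtime, $tableMtime)); // bool(true)
```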
I plan on adding caching to ADOdb Lite, but it is a very low priority at the moment, since caching is not that much better than straight queries.
Roja wrote: The ADODB_Session::dataFieldName method is not available, is there an alternative?
No, that function was not supported, but it was easy to add.

As of the next release, the dataFieldName function will be supported.
Roja wrote: ADODB_Session::encryptionKey method is not available, is there an alternative?
It is there... Scroll to the bottom of the class and you will see the function.
Roja wrote: There is no perf mon class, is there an alternative?
No, the performance monitor is not supported at this time. When I have the time, I will port it; it will need to be rewritten so it can interface with ADOdb Lite. I will also need to complete the meta_module for all databases. The meta module will contain all of the seldom-used meta functions.