Multiple Users Writing To The Same Log File?
Posted: Thu Dec 06, 2007 11:22 am
by Vik
I'm building a PHP app, and I thought it would be good to keep a log of how many times users complete a specific operation on my site. My first thought was, just have a log file, in tab text format, and write a line to it whenever a user completes that action.
But then it occurred to me - what if multiple users are on the system at once, so that the log file is open by one user's session, when another session tries to write to it?
How does PHP handle that? Is there a potential for a session to lock up until all the previous sessions accessing the log file have completed the operation? How do you avoid the appearance to the user that the system has become unresponsive?
Thanks very much in advance to all for any info.
Posted: Thu Dec 06, 2007 11:31 am
by Christopher
Take a look at the file locking functions in the manual. There are a number of examples of Loggers and Caches that use file locking around.
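To make that concrete, here is a minimal sketch of a locked append, with the log path and function name chosen just for illustration. `flock()` with `LOCK_EX` blocks until the exclusive lock is granted, so concurrent requests serialize their writes instead of interleaving them:

```php
<?php
// Minimal sketch of a locked log append. The path and function name
// are illustrative, not from any particular library.
function log_line(string $path, string $message): bool
{
    $handle = fopen($path, 'a');      // 'a' = append, creates the file if missing
    if ($handle === false) {
        return false;
    }
    $ok = false;
    if (flock($handle, LOCK_EX)) {    // exclusive lock: one writer at a time
        $line = date('Y-m-d H:i:s') . "\t" . $message . "\n";
        $ok = fwrite($handle, $line) !== false;
        fflush($handle);              // push the buffer out before unlocking
        flock($handle, LOCK_UN);
    }
    fclose($handle);
    return $ok;
}
```

Since each write is tiny (well under your 256 bytes plus a timestamp), the lock is held only for a moment.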
Posted: Thu Dec 06, 2007 1:11 pm
by Vik
Thanks for the info. So, it looks very straightforward - just lock the file, write to it, and unlock it. Does this mean that in practice users don't see delays from a lot of sessions trying to write to the same file at once? To take an extreme example, if, say, 100 simultaneous sessions tried to write to the same log file, would it look to session #100 as though the system had locked up? (I only have to write <256 bytes to the log file for each logging action.)
In my case, it would not cause data loss to postpone the logging action to run at some later time, asynchronously with the user's system access. Does anyone go to this extreme?
Posted: Thu Dec 06, 2007 1:39 pm
by Christopher
There are certainly well-designed logging systems that can log under heavy load. Or you might want to think about logging to a database, because databases have locking built in.
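A database version might look like the following sketch. The table name and columns are assumptions for illustration; the point is that the database serializes concurrent inserts itself, so the application code needs no explicit locking:

```php
<?php
// Sketch of database logging, assuming a table created as:
//   CREATE TABLE action_log (logged_at TEXT, user_id INTEGER, action TEXT);
// Concurrency control is handled by the database, not by the caller.
function log_action(PDO $db, int $userId, string $action): void
{
    $stmt = $db->prepare(
        'INSERT INTO action_log (logged_at, user_id, action) VALUES (?, ?, ?)'
    );
    $stmt->execute([date('Y-m-d H:i:s'), $userId, $action]);
}
```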
Posted: Thu Dec 06, 2007 2:51 pm
by dbevfat
About the locks; yes, locking a file means that file isn't accessible for writing for any other process. Other processes have to wait until the lock is released and even though a single write is fast, this can actually become a bottleneck.
As arborint said, logging to the database is an option, although it introduces some other issues, mainly around performance. For example, you probably shouldn't use indexes, because each insertion also writes to the index, and that is much slower than a single write to a file. You also have to purge older records if you don't want the database to grow too big. And if you want to select older records, you need at least one index on the timestamp to avoid a full table scan. With files, this is easily solved by file rotation: no file gets too big, and you can access older records without interfering with the current log file. A database is definitely an option and may suit you fine, but you have to weigh the drawbacks.
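The rotation idea above can be as simple as putting the date in the file name, so each day gets its own file and old days can be archived or deleted without touching the live log. This is just a sketch; the path prefix is an assumption:

```php
<?php
// Sketch of date-based log rotation: one file per day.
// The prefix is illustrative; pass whatever base path you use.
function rotated_log_path(string $prefix): string
{
    return $prefix . '-' . date('Y-m-d') . '.log';
}
```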
On a website I develop, we're using syslog (http://www.php.net/syslog), and it works great (or at least well enough not to have caused any problems so far). The number of requests is approximately 2M/day, and the number of logged entries is essentially the same. I am not particularly familiar with how it works, but I believe there is a daemon process running in the background and PHP posts the messages to it. So the concurrent-write problem is basically removed; there is only one writer (syslogd).
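The PHP side of that is very small. A sketch, with the application tag chosen just for illustration:

```php
<?php
// Sketch of logging through syslog: PHP hands the message to the local
// syslog daemon, which is then the single process writing to disk.
openlog('myapp', LOG_PID, LOG_USER);           // 'myapp' tag is illustrative
syslog(LOG_INFO, 'user completed operation');  // the daemon handles file I/O
closelog();
```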
You can also look at log4php (http://incubator.apache.org/log4php/index.html), which is a port of log4j, a well-known, stable logging solution for Java.
best regards
Posted: Thu Dec 06, 2007 4:00 pm
by nathanr
Maybe worth looking into stream_set_write_buffer() - either set it to 0 or to the exact length of the data to be written?
Or spawn a single process that logs to the file and has its own buffer, then simply send the desired loggable entries to that ever-running process?
Or a database..
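The first suggestion might look like this sketch (the path is illustrative): with a write buffer of 0, each fwrite() goes straight through rather than sitting in PHP's stream buffer, so the lock is held for less time.

```php
<?php
// Sketch of the unbuffered-write idea from the post above.
$path = sys_get_temp_dir() . '/app.log';  // illustrative path
$handle = fopen($path, 'a');
stream_set_write_buffer($handle, 0);      // 0 = no stream buffering on writes
if (flock($handle, LOCK_EX)) {
    fwrite($handle, "entry\n");
    flock($handle, LOCK_UN);
}
fclose($handle);
```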
Posted: Thu Dec 06, 2007 4:26 pm
by s.dot
I've used file writing on a fairly high-trafficked website before, using file locking, and it worked well - I never saw any noticeable delays. You'd probably have to have 50,000+ people online to get 100 people trying to write to the file in the same fraction of a second.
For me it was as simple as
Code: Select all
$handle = fopen('/file.txt', 'a'); // 'a' = append mode; fopen() requires a mode argument
flock($handle, LOCK_EX);           // block until we hold the exclusive lock
fwrite($handle, $data);
flock($handle, LOCK_UN);           // release the lock before closing
fclose($handle);
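For the original worry about the system appearing to lock up, there is also a non-blocking variant worth knowing; a sketch, with an illustrative path: LOCK_NB makes flock() return immediately instead of waiting, so under contention the request can defer or drop the entry rather than stall.

```php
<?php
// Sketch of a non-blocking lock attempt using LOCK_NB.
$path = sys_get_temp_dir() . '/app.log';  // illustrative path
$handle = fopen($path, 'a');
if (flock($handle, LOCK_EX | LOCK_NB, $wouldBlock)) {
    fwrite($handle, "entry\n");
    flock($handle, LOCK_UN);
} elseif ($wouldBlock) {
    // Another process holds the lock: queue the entry for later, or skip it.
}
fclose($handle);
```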
Posted: Thu Dec 06, 2007 5:18 pm
by Vik
Thanks very much to all for this great info!