Don't know what category this question falls under really, so I popped it in Misc... hope someone here can help.
Basically, I'm working on a distributed system at the moment that has a central db/app server and a number of process nodes (separate servers that do the work behind the scenes).
I have a script that checks the master db and pulls in a chunk of data to work with. The data sets are huge, so the page running the processing has to loop every 1000 records or so to prevent a timeout. At the end of each loop it checks whether there is any more data to gather from the master db; if so, it starts the loop again, and if not, it hibernates until the next cron triggers it to check the db for work to do.
Now, if I run this using IE from my desktop, just calling the server IP and the page name, everything works fine: the page gathers the info and loops to completion, adding and removing the relevant flags on its way. However, on the Linux server itself, when the cron calls it, it times out, and the error reported is that the page seems to be redirecting in a way that will never complete.
Is there some way to get around this so I can run the loop on the server at set times?
If you execute a script from cron, it's not going to be able to redirect back to itself. You can either have the script execute itself again via a shell command, or increase max_execution_time, either in php.ini or by using ini_set().
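For the cron route, one option is to run the script through the PHP command-line binary with the execution-time limit lifted, rather than fetching it as a page. A minimal crontab sketch, assuming the script lives at /var/www/process.php (the path and schedule here are made up for illustration):

```shell
# Run every 15 minutes; -d max_execution_time=0 removes the time limit
# for this invocation (the PHP CLI actually defaults to no limit anyway).
# Output and errors are appended to a log so failed runs are visible.
*/15 * * * * php -d max_execution_time=0 /var/www/process.php >> /var/log/process.log 2>&1
```

Running it via the CLI also sidesteps the redirect problem entirely, since there's no HTTP request to redirect.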
If it works in a browser, what happens when you curl the page? You could just set a cron job to curl the page rather than execute the script directly.
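That could look something like the following crontab entry. This is a sketch under assumptions: the URL, schedule, timeout, and log path are all hypothetical, not taken from the original setup:

```shell
# Fetch the processing page every 15 minutes; --max-time caps how long
# curl will wait for the page to finish, -sS hides the progress meter
# but still prints errors, and -o /dev/null discards the page body.
*/15 * * * * curl -sS --max-time 3600 -o /dev/null http://127.0.0.1/process.php >> /var/log/process-curl.log 2>&1
```

Since curl follows the same HTTP path as the browser, this should reproduce the working desktop behaviour; if it still reports a redirect loop, the problem is in how the page redirects to itself rather than in cron.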