Code: Select all

for ($x = 0; $x <= 10000; $x += 100) {
    fseek($h, $x);
    fread($h, 100);
}

Code: Select all

for ($x = 10000; $x >= 0; $x -= 100) {
    fseek($h, $x);
    fread($h, 100);
}
jshpro2 wrote: ...and the file that reads the data, and converts it to an array of usable points is the bottleneck in my system, ...

Is there some reason you can't cache this data once it's been generated? Does it change too often to make that a reasonable plan of action?
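A minimal sketch of the kind of caching being suggested, assuming a hypothetical build_points() that does the expensive file-to-array conversion; the cache path, data format, and invalidation-by-mtime policy are all placeholders, not anything from the thread:

```php
<?php
// Hypothetical expensive step: read the data file and build the points array.
// The comma-separated line format is an assumption for illustration.
function build_points(string $dataFile): array {
    $points = [];
    foreach (file($dataFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $points[] = array_map('floatval', explode(',', $line));
    }
    return $points;
}

// Cache the generated array on disk; rebuild only when the data file changes.
function get_points(string $dataFile, string $cacheFile): array {
    if (is_file($cacheFile) && filemtime($cacheFile) >= filemtime($dataFile)) {
        return unserialize(file_get_contents($cacheFile));
    }
    $points = build_points($dataFile);                 // the expensive part, done once
    file_put_contents($cacheFile, serialize($points)); // cheap to reload next request
    return $points;
}
```

Every request after the first then pays only for one file read and one unserialize() instead of the full conversion.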
jshpro2 wrote: ... If I need record #400 for example, I look up record 400's byte position in mysql, .... I may have to do this 700 times per page request, ...

This right here is a perfect example of overhead that can be cached.
jshpro2 wrote: By 700 times per request, I mean 700 different pieces of data,

Which doesn't change anything. Whether it's the same piece of information 700 times (which is brain dead) or 700 separate pieces of information, based on what you've described there are still upwards of 700 requests to the database per page request (which is brain dead).
jshpro2 wrote: I decided to throw all my data into a file and have the mysql table hold the index (tells me which data can be found where). This brought my average execution time down .3 seconds, down from over 2 seconds. Commenting out the unserialize() function brings my total execution time down to .4 seconds. I will be storing my data in a binary format, instead of a serialized array stored as ascii, for 2 reasons:

In my last post, my suggestion was that you serialize the index that is being held in memory instead of making all of those calls to mysql. I wasn't talking about the file that you are using fseek() on. I still believe this is a much better option from an algorithmic point of view.
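The suggestion above could be sketched like this. The index table name, its columns (record_id, byte_pos), the fixed record length, and the PDO connection are all assumptions for illustration, not details from the thread:

```php
<?php
// Rebuild the index from MySQL once (e.g. whenever the data file is
// regenerated), then serialize it to disk for every later request.
// 'point_index', 'record_id', and 'byte_pos' are hypothetical names.
function rebuild_index(PDO $db, string $indexFile): void {
    $index = [];
    foreach ($db->query('SELECT record_id, byte_pos FROM point_index') as $row) {
        $index[(int)$row['record_id']] = (int)$row['byte_pos'];
    }
    file_put_contents($indexFile, serialize($index));
}

// Per page request: one file read and one unserialize() replaces
// up to 700 separate byte-position queries.
function load_index(string $indexFile): array {
    return unserialize(file_get_contents($indexFile));
}

// Usage sketch (RECORD_LEN is a hypothetical fixed record size):
// $index = load_index('/tmp/point_index.ser');
// fseek($h, $index[400]);          // record #400's byte position
// $record = fread($h, RECORD_LEN);
```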
jshpro2 wrote: speed = serialize wasn't built for large amounts of data in short amounts of time

That may be; however, the time taken to rebuild a serialized array of indexes to points in your binary file would be much, MUCH faster than making upwards of 700 separate requests to a database.
jshpro2 wrote: space = why use serialize on a 1D array, when you can store it as binary with 20% the required disk space

You've misunderstood me. One more time: my contention was that building a data structure of some sort that represents your index stored in MySQL and a) unserializing it from a file, or b) storing it in shared memory, would be a lot faster than doing upwards of 700 queries to the database.
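Option (b) might be sketched with PHP's System V shared memory functions (the sysvshm extension, which must be compiled in); the IPC key, segment size, and variable slot below are arbitrary placeholders. On modern setups APCu (apcu_store/apcu_fetch) serves the same purpose:

```php
<?php
// Requires the sysvshm extension; key, size, and slot are placeholders.
const SHM_KEY  = 0xBEEF;   // arbitrary System V IPC key
const SHM_SIZE = 1 << 20;  // 1 MB segment, sized for illustration
const IDX_VAR  = 1;        // slot number for the index inside the segment

// Writer (run once, whenever the index changes):
function publish_index(array $index): void {
    $shm = shm_attach(SHM_KEY, SHM_SIZE);
    shm_put_var($shm, IDX_VAR, $index); // PHP serializes it into the segment
    shm_detach($shm);
}

// Reader (every page request): no disk I/O, no database round trips.
function fetch_index(): array {
    $shm = shm_attach(SHM_KEY, SHM_SIZE);
    $index = shm_get_var($shm, IDX_VAR);
    shm_detach($shm);
    return $index;
}
```

The segment survives between requests, so the cost per request is one attach and one fetch rather than hundreds of queries.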