Well, maybe I need to post a bit more detail on the problem.
Alrighty: I have a very large file, full of data.
I chopped the data up into smaller files according to their location.
Then I load that data and chop the individual pieces up into an array,
using explode with \n as the delimiter. Before the split, the data looks like this:
SEX,Frequency,Percent,ValidPercent,CumulativePercent,Valid,F,627,60.5,60.5,60.5,M,409,39.4,39.4,99.9,uk,1,.1,.1,100.0,Total,1037,100.0,100.0,/nRES,Frequency,Percent,ValidPercent,CumulativePercent,Valid,INSTATE,416,40.1,40.2,40.2,OUTOFSTATE,155,14.9,15.0,55.1,INDISTRICT,299,28.8,28.9,84.0,NONCREDIT,166,16.0,16.0,100.0,Total,1036,99.9,100.0,Missing,System,1,.1,Total,1037,100.0,/n
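Here's a minimal sketch of that split, using the posted data as the raw input (variable names like `$raw` and `$chunks` are just placeholders, not your actual code). One thing to note: in PHP the newline escape is "\n" (backslash-n), not "/n". Also, since the chunks are comma-delimited, exploding on the commas may be simpler than walking the string with strpos:

```php
<?php
// The raw blob, exactly as posted; "\n" marks where each chunk ends.
$raw = "SEX,Frequency,Percent,ValidPercent,CumulativePercent,Valid,"
     . "F,627,60.5,60.5,60.5,M,409,39.4,39.4,99.9,uk,1,.1,.1,100.0,"
     . "Total,1037,100.0,100.0,\n"
     . "RES,Frequency,Percent,ValidPercent,CumulativePercent,Valid,"
     . "INSTATE,416,40.1,40.2,40.2,OUTOFSTATE,155,14.9,15.0,55.1,"
     . "INDISTRICT,299,28.8,28.9,84.0,NONCREDIT,166,16.0,16.0,100.0,"
     . "Total,1036,99.9,100.0,Missing,System,1,.1,Total,1037,100.0,\n";

// One array element per table chunk; drop the empty trailing piece.
$chunks = array_filter(explode("\n", $raw), 'strlen');

foreach ($chunks as $chunk) {
    // Explode on the commas (rtrim removes the trailing comma first).
    $tokens    = explode(",", rtrim($chunk, ","));
    $tableName = $tokens[0];                  // "SEX", then "RES"
    $colNames  = array_slice($tokens, 1, 5);  // Frequency ... Valid
    $values    = array_slice($tokens, 6);     // everything after "Valid"
}
```

After the loop, each pass has given you the table name, the five static field names, and the remaining values as a flat array.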
I use strpos and separate the data using the commas as separators.
As you can see, the first item is the table name (SEX and RES in these two cases). Then come the static field names: Frequency, Percent, ValidPercent, CumulativePercent, and Valid.
I then use a loop to extract the rest of the data into an array, and another loop pulls the data back out of the array and puts it into a database.
The only problem is that I need to count the commas from the start of the chunk to the \n (the chunk is string 0 on the first pass through the program). There are 25 commas in the first set of data. I need to subtract 5 from that, because those are the commas separating the table name and the field names, then divide by 5 to get the number for the loop that puts the data into the array, and for the loop that puts the data into the database from the array.
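That comma arithmetic can be done with the built-in substr_count, so you don't have to walk the string yourself. A sketch, using the first (SEX) chunk from the post; one caveat worth flagging: the Total row here only carries 4 values, and the RES chunk has even shorter Missing/System rows, so the divide-by-5 formula only comes out clean when every row really has 5 values:

```php
<?php
// The first chunk as posted, trailing comma included.
$chunk = "SEX,Frequency,Percent,ValidPercent,CumulativePercent,Valid,"
       . "F,627,60.5,60.5,60.5,M,409,39.4,39.4,99.9,uk,1,.1,.1,100.0,"
       . "Total,1037,100.0,100.0,";

$commas = substr_count($chunk, ",");  // 25 for this chunk
$rows   = ($commas - 5) / 5;          // (25 - 5) / 5 = 4 rows

// Build the array of rows, 5 values at a time, skipping the table
// name and the 5 field names (tokens 0 through 5).
$tokens  = explode(",", rtrim($chunk, ","));
$rowData = array();
for ($i = 0; $i < $rows; $i++) {
    $rowData[] = array_slice($tokens, 6 + $i * 5, 5);
}
```

Since the rows aren't all the same width, it might be more robust (and faster than repeated strpos calls) to explode once and step through count($tokens) directly, rather than deriving the loop count from the comma total.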
I doubt I made much sense, since I'm running on coffee and nothing else. Whooo, I can feel my brain expanding already.

So if you want, I can post the sections of code that pertain to this so you can better understand what I'm trying to get at here.
And if you have any ideas that might help make this faster, please let me know.

-=Levi=-