
Forcing re-evaluation of function within regex replace str

Posted: Fri Jul 15, 2005 3:02 am
by sehrgut
If you're familiar with the various web-literature projects that link words in one piece to other pieces, I'm writing a script to do much the same thing, except that I'd like to insert random links dynamically in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code:

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list) - 1)]);
}
makelink() reads the content file, parses it for "--word--", and links each such word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been testing with an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've tried two methods:

1)

Code:

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed; it was the quickest method for this particular array), comparing each word to my regex,

Code:

preg_replace('/--([a-zA-Z0-9]*)--/', "<a href='?" . url() . "'>$1</a>", $a[$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).
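For clarity, method (1) in full looks roughly like this (a standalone sketch: url() is stubbed with a made-up query string, and the sample text is invented, so the snippet runs on its own):

```php
<?php
// Hypothetical stub of url() so the sketch is self-contained.
function url() { return 'page' . mt_rand(1, 99); }

// Method (1): split into words, run preg_replace per word, rejoin.
$words = explode(' ', 'read the --well-- by the --gate--');
$out = array();
foreach ($words as $w) {
    // preg_replace is called once per word, so the replacement
    // string (and therefore url()) is re-evaluated each time and
    // every tagged word gets its own random link.
    $out[] = preg_replace('/--([a-zA-Z0-9]*)--/',
                          "<a href='?" . url() . "'>$1</a>", $w);
}
echo implode(' ', $out);
```

The per-word calls are what make it slow, but they are also what makes each link unique.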

2)

Code:

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once per match. The result was, of course, that all 14 links were identical.
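To show what I mean, here is a minimal standalone sketch of method (2) (url() stubbed, sample text invented): the replacement string is concatenated once, before preg_replace ever runs, so every match gets the same href.

```php
<?php
// Hypothetical stub of url() for illustration.
function url() { return 'page' . mt_rand(1, 999999); }

$a = 'the --well-- and the --gate--';

// url() fires HERE, exactly once, while the string is built;
// preg_replace then reuses the finished string for every match.
$replacement = "<a href='?" . url() . "'>$1</a>";
$out = preg_replace('/--([a-zA-Z0-9]*)--/', $replacement, $a);

// Both links carry the identical query string.
preg_match_all("/href='([^']*)'/", $out, $m);
var_dump($m[1][0] === $m[1][1]); // bool(true)
```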

My question is: is there a way to make the regex re-evaluate url() for each match? If not, is there a workaround? I'd like to achieve the results of (1) in under a second for my test file.
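In other words, the per-match behavior I'm after would look something like the following sketch, if something like PHP's preg_replace_callback turns out to be the right tool (untested against my real data; url() is stubbed here):

```php
<?php
// Hypothetical stub of url() for illustration.
function url() { return 'page' . mt_rand(1, 999999); }

// preg_replace_callback invokes the callback once per match,
// so url() would be re-evaluated for every tagged word.
function linkword($m) {
    return "<a href='?" . url() . "'>" . $m[1] . "</a>";
}

$a = 'the --well-- and the --gate--';
$out = preg_replace_callback('/--([a-zA-Z0-9]*)--/', 'linkword', $a);
echo $out;
```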

I've posted the whole shebang (along with links to the page source and script source) at my testbed, in case you want to take a look. If you go to ?wells, you can see the largest input file. It runs significantly faster on the server than on my computer (an old G4 PowerBook), but I'd still like to improve the efficiency of execution.

Thanks!

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(їa-z,A-Z,0-9]*)--/,&quote;<a href='?&quote;.url().&quote;'>$1</a>&quote;,$aї$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching string? If not, is there a work-around? I'd like to be able to achieve the results of (1) in under a second for my test file.

I've posted the whole shebang (along with links to page source and script source) at my testbed, in case you want to take a look if you go to ?wells[/upiece to other pieces, I'm writing a script to do about the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($listїmt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(їa-z,A-Z,0-9]*)--/,&quote;<a href='?&quote;.url().&quote;'>$1</a>&quote;,$aї$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching string? If not, is there a work-around? I'd like to be able to achieve the results of (1) in under a second for my test file.

I've posted the whole shebang (along with links to page source and script source) at my testbed,ut the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(їa-z,A-Z,0-9]*)--/,&quote;<a href='?&quote;.url().&quote;'>$1</a>&quote;,$aї$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching string? If not, is there a work-around? I'd like to be able to achieve the results of (1) in under a second for my test file.

I've posted the whole shebang (along with links to page source and script source) at my testbed, in case you want to take a look if you go g a script to do about the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(їa-z,A-Z,0-9]*)--/,&quote;<a href='?&quote;.url().&quote;'>$1</a>&quote;,$aї$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching string? If not, is there a with the various web-literature projects which link from words in one piece to other pieces, I'm writing a script to do about the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(їa-z,A-Z,0-9]*)--/,&quote;<a href='?&quote;.url().&quote;'>$1</a>&quote;,$aї$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching string? If not, is there a work-around? I'd like to be able to achieve the results of (1) in under a second for my test file.

I've posted the whole shebang (along with links to page source and script source) at my testbed, in case you want to take a look if you go to ?wells, you can see the largest input file. It runs significantly faster on the server than on my computer (an old G4 PowerBook), but I'd s with the various web-literature projects which link from words in one piece to other pieces, I'm writing a script to do about the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--([a-z,A-Z,0-9]*)--/,"<a href='?".url()."'&gt;$1&lt;/a&gt;&quote;,$a&#1111;$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), ra }

What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--([a-z,A-Z,0-9]*)--/,"<a href='?".url()."'>$1</a>",$a[$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching string? If not, is there a work-around? I'd like to be able to achieve the results of (1) in under a second for my test file.

I've posted the whole shebang (along with links to page source and script source) at my testbed, in case you want to take a look if you go to ?wells, you can see the largest input file. It runs significantly faster on the server than on my computer (an old G4 PowerBook), but I'd still like to improve the efficiency of execution.

Thanks!icient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(&#1111;a-z,A-Z,0-9]*)--/,&quote;&lt;a href='?&quote;.url().&quote;'&gt;$1&lt;/a&gt;&quote;,$a&#1111;$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = f piece to other pieces, I'm writing a script to do about the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(&#1111;a-z,A-Z,0-9]*)--/,&quote;&lt;a href='?&quote;.url().&quote;'&gt;$1&lt;/a&gt;&quote;,$a&#1111;$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching st I'm writing a script to do about the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(&#1111;a-z,A-Z,0-9]*)--/,&quote;&lt;a href='?&quote;.url().&quote;'&gt;$1&lt;/a&gt;&quote;,$a&#1111;$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching string? If not, is there a work-around? I'd like to be able to achieve the results of (1) in under a second for my test file.

I've posted the whole shebang (along with links to page source and [urlts which link from words in one piece to other pieces, I'm writing a script to do about the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(&#1111;a-z,A-Z,0-9]*)--/,&quote;&lt;a href='?&quote;.url().&quote;'&gt;$1&lt;/a&gt;&quote;,$a&#1111;$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(&#1111;a-z,A-Z,0-9]*)--/,&quote;&lt;a href='?&quote;.url().&quote;'&gt;$1&lt;/a&gt;&quote;,$a&#1111;$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose), rather than once for each iteration of the regex. The result, was, of course, that each all 14 links are identical.

My question is, is there a way to make the regex re-evaluate url() for each instance of a matching string? If not, is there a work-around? I'd like to be able to achieve the results of (1) in under a second for my test file.

I've posted the whole shebang (along with links to page source and script source) at my testbed, in case you want to take a look if you go to ?wells, you can see the largest input file. It runs significantly faster on the server than on my computer (an old G4 Pakelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
    return rtrim($list[mt_rand(0, sizeof($list)-1)]);
    }
What makelink() does is reads the content file, parsing it for "--word--" and linking that word to another random content file. However, I haven't been able to get it to work and be efficient at the same time. I've been using an input file of 322 words, 14 of which are links, which is typical of the maximum input size I anticipate. I've been using two methods:

1)

Code: Select all

$a = explode(" ", file_get_contents('content/'.getfn().'.inc'));
After which, I would loop through the resulting array with a for loop (I tested it for speed, and it was the quickest method for this particular array) comparing each word to my regex,

Code: Select all

preg_replace(/--(&#1111;a-z,A-Z,0-9]*)--/,&quote;&lt;a href='?&quote;.url().&quote;'&gt;$1&lt;/a&gt;&quote;,$a&#1111;$i]);
which added a link derived from the url() function around each tagged word. (I added spaces back in between the words as I built my output string.) This took between three and five seconds to execute, but did give unique links (meaning, separate evaluations of url()).

2)

Code: Select all

$a = file_get_contents('content/'.getfn().'.inc');
After which I would feed the whole string to my regex, with $a, rather than $a[$i], as its subject. This executed in about 0.1 seconds, but url() was only evaluated once (since I was not calling it on separate iterations of a loop, I suppose)ious web-literature projects which link from words in one piece to other pieces, I'm writing a script to do about the same thing. Only, I would like to dynamically insert random links in each piece. So far, the script has three parts: url(), getfn(), and makelink(). getfn() is merely responsible for passing the desired content filename to makelink().

Code: Select all

function url() { # This function generates my query-string for the random link
    $list = file('weavelib/fnlist.txt'); # \n-separated list of filenames
My question is: is there a way to make the regex re-evaluate url() for each matched instance? If not, is there a workaround? I'd like to achieve the results of (1) in under a second for my test file.
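To illustrate what I'm after, here's a minimal stand-alone sketch using preg_replace_callback (this is a mock-up, not my actual script: the url() stub and the sample string here are placeholders). The callback runs once per match, so url() should get a fresh evaluation for every tagged word:

Code: Select all

```php
<?php
// Stand-alone mock-up: this url() is a stub, not the real one that
// reads weavelib/fnlist.txt.
function url() {
    $list = array("alpha", "beta", "gamma");
    return $list[mt_rand(0, count($list) - 1)];
}

// Named callback so this also works on pre-5.3 PHP (no closures needed).
// preg_replace_callback invokes it once per match, so url() is
// re-evaluated for each --word--.
function makelink_cb($m) {
    return "<a href='?" . url() . "'>" . $m[1] . "</a>";
}

$a = "Some --tagged-- words with --links-- in them.";
$out = preg_replace_callback('/--([a-zA-Z0-9]*)--/', 'makelink_cb', $a);

echo $out, "\n";
?>
```

Since the callback fires per match, each link should get its own url() result, while still making a single pass over the whole string like method (2).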

I've posted the whole shebang (along with links to page source and script source) at my testbed, in case you want to take a look. If you go to ?wells, you can see the largest input file. It runs significantly faster on the server than on my computer (an old G4 PowerBook), but I'd still like to improve the efficiency of execution.

Thanks!