Page 2 of 2

Posted: Sat Feb 21, 2004 1:25 am
by eletrium
Ech, couple comments...

"The op (me) wants to achieve global access to object instances. "

global = evil

Avoid globals in general.

-----------

"Why? Because I believe it's a better software engineering practice than just bashing at the database. "

Not necessarily. "Good" software engineering practice is getting a working program that satisfies the customer's needs. Overanalyzing and trying to get a perfect program is impossible, so you just write something, and then you test it. I can't tell you what percentage of things like this I hear about, only to run a test and see that there really is no issue.

For example, if you spend too much focus on this and you miss an index on your database, then it won't matter what you do: data access will be slow (yes, this happened in one case). Get the overall program running, then identify specific problems, then define what would make each problem "fixed", and then address those specific problems. As you get more experience, you simply will have fewer and fewer problems in your programs. But the key here is to just do it, then test it.

In other words, you're trying to solve a problem when you have no clue whether it's even a problem. Trust me, there are plenty of problems out there, but you need to be sure it's a problem first.

My last company was ALWAYS worried about our software being too slow. So there was CONSTANTLY a push to write more "efficient" code, even at the expense of readability, usability, and reusability. Problem one was that 95% of our time was spent getting data from the database, not in our code... so they wanted to cache everything. Did nothing. Period.

Why? Because the cache was not the right solution. Well, it was the right solution, but not to the problem we had. The problem we had (found by our resident nutjobber using VTune) was that the database drivers themselves were the issue. The timings in our code were fine, and the SQL we ran on the command line was fine on the servers, but when the same SQL ran through the drivers on the clients, it was slow as hell. The solution? Would have been to write a custom driver.

Lastly and MOST important, ANY time you are doing something to "speed something up", it is ENTIRELY worthless to do it until you make a test program to get timings. Unless you can say MethodA runs in X time and MethodB runs in Y time, it's useless. More often than not, when you make your test program and get the timings, you'll probably find out it's fast enough already anyway.
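The "time it before you tune it" advice above can be made concrete. A minimal sketch of such a test harness in Python (the two methods and the workload are invented for illustration; they are not from the thread):

```python
import timeit

# Hypothetical stand-ins for two competing implementations.
def method_a(pairs, target):
    # Linear scan over (key, value) pairs.
    for k, v in pairs:
        if k == target:
            return v
    return None

def method_b(index, target):
    # Lookup in a pre-built dict index.
    return index.get(target)

pairs = [(f"key{i}", i) for i in range(10_000)]
index = dict(pairs)

# Time each method the same number of times on the same query.
t_a = timeit.timeit(lambda: method_a(pairs, "key9999"), number=200)
t_b = timeit.timeit(lambda: method_b(index, "key9999"), number=200)
print(f"MethodA: {t_a:.4f}s  MethodB: {t_b:.4f}s")
```

Until you have numbers like these, "faster" is a guess, which is exactly the point being made.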

---------

"It is extremely counter-productive to waste time worrying over performance. You should not worry about this while you're coding (assuming you're not doing anything totally crazy). Just try to write good, maintainable code. "

Smart man. You know there is going to be a problem somewhere. It may well be performance. But chances are it's not. I can tell you're a professional, McGruff.

----------
"Wait and see how the finished product performs. Think about optimising if you really need to - and if it isn't cheaper and easier just to get new hardware."

If you charge your client say, 50 bucks an hour for your labor, and it will take a week to fix something, you just charged them 2000 bucks. For 800 bucks I can build you a computer that would blow your socks off... you'd be surprised how often a problem with performance can be solved just by upgrading the hardware. It's business, not perfection in code.

-----------

"Hmm .. if you're expecting hundreds of thousands of hits a day, caching is extremely important to reduce the massive server load."

Huh? Need to be more specific. What kind of implementation are we talking about? The most important caching is on the database itself. Tuning the database properly is everything. If a certain query is run often, the database should be caching it anyway. Not to mention, if it is run often, the query should be hitting proper indices. The variance in speed of queries when you change the query a little or change a setting on the database server is astounding. Apache is the same... the place to address this is in the database itself or in Apache. Trying to share data/objects between users potentially across the world (am I really reading that right?) to save a few database hits is not what I would consider a good starting point.

Posted: Thu Mar 25, 2004 11:06 am
by aleigh
timvw wrote:I haven't seen any programs that provide you with a pool of objects that live all the time (but that isn't to say they don't exist). I do know about some programs that provide you with such pools (e.g. the JBoss appserver), but they all seem to have a database backend.
Almost all the large Java app servers allow you to do this; for example, ATG Dynamo.

Posted: Thu Mar 25, 2004 11:32 am
by aleigh
That eletrium guy is a little bit insane.

I am glad that in some particular instance in some particular place at some particular time with some particular app, caching didn't turn out to be useful to him. However, this specific lesson sure isn't an object lesson in why application caching should be avoided.

The database is not magical; and it is certainly not fast, and in many cases it's not even there. Advocating a general design that places emphasis on going to the backend database is like advocating a general design that prefers going to disk rather than RAM because it's good enough. You can always buy more machines, right? Oh but wait then you will need the database replicated, or you will have to go to n-tier. But that's simple and cheap! And it doesn't get any slower when you move the database outboard to another machine, nope.

Sure.

So anyways, back to reality. I've got a PHP script that takes a list of words and attempts to match the words against a relatively large haystack. If individual words do not match, pairs of words are tried recursively, up to some length, three or four words.

This page gets a relatively light load, 5 or 10 hits a second. Your knee-jerk reaction could be to do it in a database, but that would be pretty stupid; it would drive the latency way up going back to Oracle, and would also increase concurrency on the connection pool, which would burn valuable sockets on the machine.

Plus once you go to SQL man then you have to write an admin, blah blah blah. Talk about billing your clients for wasted time; what's easier, doing all that or just editing an assoc in a php script.

So there you are, running a PHP script a few times a second that has to read a 300 element array and parse it every time; not to mention the script itself has to be compiled into bytecode. I mean, come on. Even TCL can store a bytecode compiled script for later execution.

If you want a more dramatic example, just think of a script that has to load configuration from an XML file every time. Yeah. XML is free to parse. Why, it's practically pre-parsed... I am sure that everyone agrees reading and parsing the php.ini file for every hit would be stupid; so why do we do it for our scripts? Thankfully these aren't real world examples; no one ever reads XML from their scripts.

So the right answer here is clearly that you be able to process the data and store it locally in RAM for later use. There's no reason to bring in the complexity of a database for small situations like that unless you already have the infrastructure in place, and there are plenty of times you don't.

How about apps that are multilingual? We all write good, multilingual apps, right? I developed a community system and all of the messages that can get written to the browser are stored in a big array and are referenced using ids. For example,

msgs[12] = 'Login denied.';

This is really great because if you want to make the app work in German you just modify the one file and set your strings. I'm sure anyone familiar with NLS will be on board with this idea. Except, wait, it's a big app and so there are hundreds, nay, thousands of messages. So PHP has to churn through this include file that contains all the messages to build an array that might be used for only one or two messages. Yeah. That's efficient!
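The id-keyed message table described here can be sketched in Python rather than PHP (the ids and strings below are invented for illustration):

```python
# One table per language, keyed by message id; only the table differs
# between locales, so translating the app means editing one file.
MESSAGES = {
    "en": {12: "Login denied.", 13: "Session expired."},
    "de": {12: "Anmeldung verweigert.", 13: "Sitzung abgelaufen."},
}

def msg(msg_id, lang="en"):
    # Fall back to English if the locale or the id is missing.
    table = MESSAGES.get(lang, MESSAGES["en"])
    return table.get(msg_id, MESSAGES["en"].get(msg_id, f"[missing message {msg_id}]"))

print(msg(12))        # Login denied.
print(msg(12, "de"))  # Anmeldung verweigert.
```

The cost being complained about is that the whole table gets rebuilt on every request, even when only one or two entries are actually used.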

Maybe next Tuesday I'll move it to mysql and make sure my indexes are really in shape so it goes even faster; that way every page can call SQL to figure out what to print. No reason for PHP to let me just have a shared global array with this information, nope!

Yeah.

Posted: Thu Mar 25, 2004 11:51 am
by McGruff
It's great when people post here to share their knowledge but not so great when they call other posters insane...

Posted: Thu Mar 25, 2004 1:17 pm
by aleigh
McGruff wrote:It's great when people post here to share their knowledge but not so great when they call other posters insane...
I only said he was a little bit insane. And aren't we all?

Posted: Fri Mar 26, 2004 3:17 am
by eletrium
"That eletrium guy is a little bit insane. "

That's personal. No place for that here.

Now, why don't you re-read my post and react to specific passages. There is some miscommunication between my brain and your brain because I never said anything to suggest caching is not useful. I was saying be SURE that the problem was really in YOUR code before you tried to optimize it.

Please re-read and tell me how I said something to the effect of "I am glad that in some particular instance in some particular place at some particular time with some particular app, caching didn't turn out to be useful to him. However, this specific lesson sure isn't an object lesson in why application caching should be avoided."

Oh, by the way, that "little" app had well over 1.5 million lines of code. Try learning the specifics of a situation before you belittle it; otherwise you belittle it in a way that is inaccurate. It was not little in any way. Which is why it sucked. Next time learn more about it so you can insult it properly, please.

"The database is not magical; and it is certainly not fast,"

Uh, wrong. Period. We had Teradata clocking a 40 million record insert in less than a second. This is not MySQL, friend. Some databases are fast, some are slow. Some fast ones used in some ways are slow. It is all situational. In the 1.5 million lines of code of a program I worked on, timings using VTune showed that the most significant portion of time was in accessing the database. A slow database? Nope. The drivers were slow. The same query run directly was pretty dang fast. It was just not handled well by the drivers.

=============

"Advocating a general design that places emphasis on going to the backend database is like advocating a general design that prefers going to disk rather than RAM because it's good enough. "

No, it is like advocating examining the problem, trying smaller versions of your solution using both techniques, and learning for yourself which situations are best solved in which manner. God forbid people think of some option YOU are not well versed in, for a situation YOU are not familiar with. It is all very situational. I am merely advocating they look down another avenue, then look at the results and compare for themselves.

==============

"So anyways back to reality. "

Wow. Too many comments. Gonna leave that one alone there. You wanna look unprofessional, people not gonna listen to you, bro. Reality is reality; it is not what you or someone else says it is.

==============

" it would cause the latency to go way up to backend to Oracle"

Ok. If you are an expert on Oracle I will accept this as fact. In my personal experience it was different, but my own experience was with several databases depending on the day, and not one specific database for years on years. There are other databases than Oracle, each with as many nuances as Oracle and with as many ways to royally F-up a server's settings to hell.

==============

"Plus once you go to SQL man then you have to write an admin"

Does this mean write a database admin an email? I always do my own SQL. The first time you do complex SQL it takes many tries and you can royally mess it up, but once you know how each database likes it, you can write your own SQL fast and easy.

==============

"So the right answer here is clearly that you be able to process the data and store it locally in RAM for later use. "

Ok, this is the fun one. I just love this statement. Let's review the original post I was answering...

"In a web environment, what is the best method for caching objects?

I'm dubious about hitting the database too frequently and would like to construct commonly used objects only once in the lifetime of the webserver.

What is the best pattern for creating an object and having it available for all HTTP sessions? Is it even possible? "

Ok. So we are hopefully on the same page again. You make the assumption that "web environment" means specifically and only an environment with the database, web server, and user on different computers. In my particular case I just run simpler stuff on the same box. I don't have to do much ballsy stuff atm, so I just have my database server running on the same box as my web server. I.e., PHP is in the SAME RAM as Oracle and MySQL. I PERSONALLY feel that Oracle or MySQL is more efficient in this case at handling things I can easily write SQL for than PHP, a script that needs to be interpreted and run on an engine I am personally unfamiliar with as far as top end performance. I know Oracle doesn't choke until I get to a level high above where I am at, so I figure its RAM is better used than PHP's. IF I get problems with timing, then I have the tools with which to investigate. Not gonna do it till needed though.

NOW, considering all of the different flavors of setups for database servers and web servers, I do not see a clear choice that is perfect for every case out there. As far as a common case goes, I'd rather give the guy an answer he can apply to multiple situations and THINK about himself than give him a single "just do B. It works."

Seriously, to be able to take a very general statement and make a specific decision is very interesting.

==============

"msgs[12] = 'Login denied.'; "

Self-commenting code, dude. Self-commenting code. Maintainability 101. Try it. Just go to a Barnes & Noble and read Chapter 3 in Martin Fowler's Refactoring.

==============

"It's great when people post here to share their knowledge but not so great when they call other posters insane..."

Thanks McGruff. But of course you actually understood what i was writing :)

Posted: Fri Mar 26, 2004 3:19 am
by eletrium
Oh yeah aleigh, if you would like to learn about REAL caching, please read my post on it in this thread: viewtopic.php?t=19634

It explains what caching actually is.

Posted: Fri Mar 26, 2004 9:44 am
by aleigh
eletrium wrote:"That eletrium guy is a little bit insane. "

That's personal. No place for that here.
So on that note I will ignore your inflammatory verbiage.

I think the disconnect is that I was not talking about your application, I was talking about that other guy's application. I thought it was him we were giving advice to.
eletrium wrote:
"The database is not magical; and it is certainly not fast,"

Uh, wrong. Period. We had Teradata clocking a 40 million record insert in less than a second. This is not MySQL, friend. Some databases are fast, some are slow. Some fast ones used in some ways are slow. It is all situational. In the 1.5 million lines of code of a program I worked on, timings using VTune showed that the most significant portion of time was in accessing the database. A slow database? Nope. The drivers were slow. The same query run directly was pretty dang fast. It was just not handled well by the drivers.
It's true that you can make a database fast, but more often than not databases are slower than dealing with things local to the machine, local to PHP, in RAM. Surely you will agree that there is just a physical truth that it takes fewer instructions to look up something in an assoc than it does to look something up in SQL.

eletrium wrote: " it would cause the latency to go way up to backend to Oracle"

Ok. If you are an expert on Oracle I will accept this as fact. In my personal experience it was different, but my own experience was with several databases depending on the day, and not one specific database for years on years. There are other databases than Oracle, each with as many nuances as Oracle and with as many ways to royally F-up a server's settings to hell.
This just goes back to the truth that going to a database through a connection takes more instructions and introduces more latency than looking up the information in RAM. And to be clear, we're talking about simple things like I felt the original poster was suggesting, not handling 40 million rows of anything.
eletrium wrote: "Plus once you go to SQL man then you have to write an admin"

Does this mean write a database admin an email? I always do my own SQL. The first time you do complex SQL it takes many tries and you can royally mess it up, but once you know how each database likes it, you can write your own SQL fast and easy.
That is convenient, but I deliver products to third parties. It's easy to tell someone experienced with HTTP to just modify a PHP script that has pre-defined arrays containing information, but another matter to try to teach them enough SQL and assume that they have access, etc. When we develop solutions it's basically not an option to require that the client learn anything; so if I develop something that uses SQL, it has to go out with a web admin. I just think that's professional as well, but it depends on your client base of course.
eletrium wrote: "So the right answer here is clearly that you be able to process the data and store it locally in RAM for later use. "

Ok, this is the fun one. I just love this statement. Let's review the original post I was answering...

"In a web environment, what is the best method for caching objects?

I'm dubious about hitting the database too frequently and would like to construct commonly used objects only once in the lifetime of the webserver.

What is the best pattern for creating an object and having it available for all HTTP sessions? Is it even possible? "

Ok. So we are hopefully on the same page again. You make the assumption that "web environment" means specifically and only an environment with the database, web server, and user on different computers. In my particular case I just run simpler stuff on the same box. I don't have to do much ballsy stuff atm, so I just have my database server running on the same box as my web server. I.e., PHP is in the SAME RAM as Oracle and MySQL. I PERSONALLY feel that Oracle or MySQL is more efficient in this case at handling things I can easily write SQL for than PHP, a script that needs to be interpreted and run on an engine I am personally unfamiliar with as far as top end performance. I know Oracle doesn't choke until I get to a level high above where I am at, so I figure its RAM is better used than PHP's. IF I get problems with timing, then I have the tools with which to investigate. Not gonna do it till needed though.
It's mathematically impossible that:

msgs['login_deny']='Cannot process your login, sorry';
print msgs['login_deny'];

is slower than getting that out of the database and printing it, using any database, installed anywhere. Extend to all examples that apply.
eletrium wrote:
"msgs[12] = 'Login denied.'; "

Self-commenting code, dude. Self-commenting code. Maintainability 101. Try it. Just go to a Barnes & Noble and read Chapter 3 in Martin Fowler's Refactoring.
What are you talking about here? I specifically said that the goal of this was to provide NLS support, which I will point out is implemented in the same way in those libraries you use every day, and to an extent in the PHP code itself. Using msg ids (think errno (different history, admittedly), or Oracle messages) is a long-standard way of providing internationalization support... So I am curious, what was I supposed to do differently here that would have made using a database better than storing the messages in a PHP file?

I agree with the idea that a lot of people spend too much effort to realize small increases in performance when that time could have been better spent elsewhere, but I resent bad design when there is no other excuse for it. Using the blanket statement that you should write code first and ask performance questions later is, I think, like saying you should build a car first and ask questions about how it performs later.

The honest answer is that it is hard to design things that perform well in the beginning. That's the art part of this science, and there are lots of apps that are poorly designed in this world.

I think it's right to ask the question: why are there no shared global objects in PHP? We can have a smurf contest if you want, because egos are bruised now, but I think the real question here is what this support is useful for and why we don't really have it.

I have a webserver (ashpool.com) that natively supports PHP. The server is threaded, so you only have 1 interpreter rather than many; consequently it performs a lot better (uses less RAM) than more traditional Apache installations. The other side-effect of this is that with only one interpreter you only have one memory space, so you really can share objects inside a single instance. I have a version of the driver that implements two new PHP calls that allow you to store session globals that are accessible between calls.

Based on my measurements, doing things like parsing structures and building assocs only once results in a very, very tangible increase in speed. Do not underestimate the resource requirements of allocating very large, especially very large multi-level, assoc arrays. PHP's zval support is very wonderful because it is so flexible and even reasonably easy to use on the C side, but easy to use is another way of saying slow in most cases.
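The "parse once, reuse many times" idea can be sketched with a memoized loader. A minimal illustration in Python (the config format and function name are invented; this is not the driver being described):

```python
import functools
import json

@functools.lru_cache(maxsize=None)
def load_config(raw_text):
    # Parse once per distinct input string; later calls with the same
    # string return the cached object instead of re-parsing.
    # (Caveat: callers share one mutable dict, so treat it as read-only.)
    return json.loads(raw_text)

raw = '{"debug": false, "max_connections": 50}'
first = load_config(raw)
second = load_config(raw)
print(first is second)  # True: the second call never re-parsed
```

This is the same trade being argued for here: pay the parse cost once, not on every request.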

The answer in any other language, then, is just to not do it for every page! PHP is the ONLY language aimed at the web I can think of that does not embrace shared global data storage.

My answer to this guy is that he should cache everything that he can easily cache that will be called a lot, preferably globally, and it seemed and still seems that was contrary to the approach that you were advocating, at least in my reading. My answer to this guy is that it is possible - but not embraced by PHP. And my answer to this guy is that this is a really good thing to be thinking about because it does have a lot of impact.

Posted: Fri Mar 26, 2004 1:41 pm
by eletrium
Ok, taking a more congenial tone in reflection of your congenial tone. Good to keep it clean so to speak.

Now on to the guts...man this is going to be fun.

"I think the disconnect is that I was not talking about your application, I was talking about that other guy's application. I thought it was him we were giving advice to. "

Uh, yeah. He wasn't building a specific application. He was asking what the best way to cache is. Let me quote YET AGAIN his post, like I did in my last post.

ORIGINAL POST:
"In a web environment, what is the best method for caching objects?

I'm dubious about hitting the database too frequently and would like to construct commonly used objects only once in the lifetime of the webserver.

What is the best pattern for creating an object and having it available for all HTTP sessions? Is it even possible? "

I see no mention of a specific program here. Giving a specific answer saying the BEST way on a NON-specific question is misleading and frequently wrong.

-----------------------------

"It's true that you can make a database fast but more often than not databses are slower than dealing with things local to the machine, local to PHP, in RAM. "

I'm sorry, but if I am not mistaken, the RAM that PHP runs in is on the web server. The web server runs the PHP, generates an HTML page, and sends that to the client machine. It is a massive scripting engine built around manipulating text. Any low level programmer knows that is a costly endeavor in clocks. Databases are PURELY made for the storage of data for fast retrieval. They have 1000 highly geeky programmers working to figure out neat ways to make them retrieve data faster. And they usually run on a beefy computer. Moreover, database engines have INTERNAL caches to optimize for data that is requested frequently.

RAM is fast if it is your own RAM, and only if you don't screw around and cache stuff badly. Read my post on caches I linked above. A properly built cache can be up to a billion times faster than a linear search routine, depending on the volume of data. Generally, a proper cache can access data on a scale of log(n) (VERY generally). To answer the guy's question, GENERALLY trust the easiest way to write maintainable code. If that includes a lot of hitting the database, you DON'T KNOW how slow it is until you try it out.
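The log(n)-versus-linear point is easy to demonstrate. A small sketch comparing a linear scan against binary search over the same sorted data (Python, synthetic data; the numbers are invented for illustration):

```python
import bisect

haystack = list(range(0, 1_000_000, 3))  # already sorted

def linear_find(xs, target):
    # O(n): walks the list element by element.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_find(xs, target):
    # O(log n): repeatedly halves the search interval.
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

# Both agree on the answer, but binary_find examines ~20 elements
# where linear_find examines ~333,000.
print(linear_find(haystack, 999_999), binary_find(haystack, 999_999))
```

Same data, same answer, wildly different cost: that is the gap between "cache stuff badly" and a proper log(n) structure.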

When doing a job for Kroger, my company had a test on a Kroger proprietary database running on a 9,000 dollar Dell box. It ran in two hours clean, hitting the database all the time. Kroger needed it to run in 4. Cool. But Kroger's 200,000 dollar box ran it in 24 hours. Hmmmm. Is it an issue with RAM? SQL? What? The way the database server on the 200,000 dollar box was set up was causing severe issues with querying. When the server settings were corrected, the timings came more into line.

We have had to get timings on the actual MACHINE CODE going into the CPU itself to investigate this stuff within our own code base. Unless you are looking at the machine code itself, including breaking the law and looking at the machine code within the Oracle or other database server, you truly cannot tell whether or not it is running efficiently. Try it out by running a test and getting a timing. It is the only legal way to guess where the bottleneck is.

----------------------

"Surely you will agree that there is just a physical truth that it takes fewer instructions to look up something in an assoc than it does to look something up in SQL."

No. It is only a conceptual truth. Unless you crack open the machine code or run timings using VTune, there is no physical truth in computers. Two different compilers can compile the same code in two entirely different ways, generating two entirely different sets of instructions to the CPU.

So, how does a professional handle that? Go with the conceptual truth, which I concede, then test it to see if it runs fast enough. But it is a key, core, frequent, amateur mistake for programmers to think the conceptual is the reality. Until you have run through low level machine code, you have no appreciation for the amount a computer can f--- things up. It is cliche, but you cannot assume anything. You can BET it is that way, but don't rule it out if weird things start happening. Always keep in mind it might be utterly different than what you think it is.

---------------------------

"This just goes back to the truth that going to a database through a connection takes more instructions and introduces more latency than looking up the information in RAM. And to be clear, we're talking about simple things like I felt the original poster was suggesting, not handling 40 million rows of anything."

Here is where we agree. Sort of. The underlying point here is that there is never a single best way to do anything with computers. The better you understand the fundamentals, the better your judgement in deciding when to use which techniques. We both agree that caching 40 million records is moot. Typically. But I can definitely see instances where it makes a lot of sense. If the records have data from engineering, say X, Y, Z coordinates stored as floats and an index of some form, then each record has roughly 8 bytes * 3 + say 32 bytes for some complex index to search on. Let's round it to 60 bytes total. 40 million records is 2.4 billion bytes. That works out to 2.23517 gig. On a box that runs Teradata, that's not much RAM. They load them up solid. Store 40 million nodes in an AVL tree to keep a good balance, and each search only takes about 27 comparisons.
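The arithmetic in that paragraph is worth redoing explicitly, using the record layout assumed in the post (three 8-byte floats plus a ~32-byte key, rounded up to 60 bytes):

```python
import math

records = 40_000_000
raw_bytes = 3 * 8 + 32        # three 8-byte floats + ~32-byte key = 56
bytes_per_record = 60         # rounded up, as in the post
total_bytes = records * bytes_per_record

print(f"{total_bytes / 2**30:.3f} GiB")  # 2.235 GiB

# A perfectly balanced binary tree over n keys needs at most
# ceil(log2(n + 1)) comparisons per search; an AVL tree stays
# within about 1.44x of that bound.
print(math.ceil(math.log2(records + 1)))  # 26
```

So 2.235 GiB of RAM and about 26 comparisons for a perfectly balanced tree, within one of the figure of 27 quoted above for an AVL tree.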

Take that down to 30,000 records. That's one record per item in a large grocery store database, such as Kroger's. We had a program that ran on a PDA, using an old Mac Plus processor, that stored the data for those 30,000 items, and you could search on them instantly. Meaning, as you entered a letter for an item, it would instantly narrow the 30,000 item list accordingly. Why? Because we wrote a custom database engine that was that efficient. ("We" admittedly = some crazy genius; I personally had nothing to do with it.) My point? Databases are SUPPOSED to store and access data fast. If you need to store and access data, START out using what THOUSANDS of programmers worked on to do JUST that task.

Network latency? It is network specific. You have to try it out to see how it affects the timings. If it is on the same box as the web server, then no worries.

--------------------------

"This just goes back to the truth that going to a database through a connection takes more instructions and introduces more latency than looking up the information in RAM."

This one deserved two comments. Dude, where do you think database engines store their data? Once the initial query is made, good database engines cache the data in RAM local to the server, entirely for that purpose. If you can make NO query at all to start out with, then local PHP might be faster if you save time from the initial disk read. But if you store it in files in PHP, then you have to read from disk once anyway. Either way you eat a disk read, which eats time at a factor of a million compared to a local read from RAM.

However, databases expect this and are entirely optimized to do faster disk reads. DB2, for example, specifically uses B-trees for this purpose. Why? Because they know it is coming and have put in code to optimize for that occurrence.

------------------------

"That is convenient, but I deliver products to third parties. It's easy to tell someone experienced with HTTP to just modify a PHP script that has pre-defined arrays containing information, but another matter to try to teach them enough SQL and assume that they have access, etc."

This is a very good business decision on your part. But it is a business decision specific to your specific situation. My point has always been that you have to consider all of the variables and make your own decision specific to the case. Why? Because the original poster did not list specifics in his original request and I was addressing that.

------------------------

"I have a webserver (ashpool.com) that natively supports PHP. The server is threaded, so you only have 1 interpreter rather than many; consequently it performs a lot better (uses less RAM) than more traditional Apache installations."

Now THAT is great information. I'm all over that one. Thanks a lot :). Did not know that and will definitely be using it.

------------------------

"The other side-effect of this is that with only one interpreter you only have one memory space so you really can share objects inside a single instance. "

Again, specific to you. Not in general.

------------------------

"Based on my measurements, doing things like parsing structures and building assocs only once results in a very, very tangible increase in speed."

Bingo. Tests. Credible information. This I will take your word on; it is based on tested code. However, it needs to be noted that it is specific to your setup. Our software runs on DB2, Oracle 8, Oracle 9, MS SQL Server 2000, Interbase, and Teradata for database servers. For OS we are looking at Linux, Unix, Win98, WinNT, Win2k, and WinXP. The same software runs differently for each database and operating system. Why? Many reasons, a primary one being database drivers.

Posted: Fri Mar 26, 2004 3:10 pm
by aleigh
I think you are sort of missing the point by concentrating on storing lots of information. If you have 500 static messages storing them in PHP is going to be faster than storing them in a database as long as you are doing keyed lookups; basically I give PHP enough credit that their assoc hashes are as fast as (insert database here) in those sort of dimensions.

Even if the database is running on the same machine and in the same RAM, it isn't fair to say it's the same between the database and the PHP instance. There's a lot of intervening code between those two, unless your PHP driver is magically using IPC shared memory and loading the database engine into the same process. Unlikely!

While all this is as specific an example as your notion of storing millions of records, I think what it strongly speaks to is a need to evaluate the data you have and realize the breaking point where the performance of doing it locally is overshadowed by either the convenience or, more likely, the speed of doing it in another engine; vis-à-vis the benefit of caching. I am sure that somewhere between storing 500 name/key values and 5 million GIS coordinates lies the middle ground, and I think that's what really needs to be explored.

Posted: Fri Mar 26, 2004 10:21 pm
by eletrium
I agree pretty much with your third paragraph. But show me where I said something to the effect of "it isn't fair to say it's the same between the database and the PHP instance".

I said in general I trust that databases are better optimized for storing and accessing data. To put it crudely, it's a smurf database --> it's exactly what they are DESIGNED to do.

The thing that needs to be considered is the overhead of a single hit to the database. As we both know, every hit to the database costs some specific overhead in time. The question, as you have said and I will add to, is whether caching it in the web server using PHP, thus saving you that overhead, GAINS you more time savings than the overhead itself.
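That break-even question can be framed as simple arithmetic. All the numbers below are invented placeholders that would have to come from real timings of your own setup, which is the whole point of the thread:

```python
# Invented, illustrative timings -- measure your own before deciding.
db_roundtrip_s = 2e-3    # assumed cost of one query through the driver
local_lookup_s = 1e-7    # assumed cost of one in-process dict lookup
cache_build_s  = 5e-2    # assumed one-time cost to build the cache

def cache_pays_off(hits):
    # True once the cumulative per-hit savings exceed the build cost.
    return hits * (db_roundtrip_s - local_lookup_s) > cache_build_s

print(cache_pays_off(10))    # False: too few hits to amortize the build
print(cache_pays_off(1000))  # True
```

With these (made-up) numbers the cache wins after a few dozen hits; with different measured numbers it might never win, which is exactly why both posters keep insisting on timing it.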

-----------------------

As far as caching... you can read my post on it linked above, or search for Data Structures and Algorithms on google.com. Read up on stuff like hash tables (hate them personally), binary search trees, and splay trees.

Posted: Fri Mar 26, 2004 10:30 pm
by McGruff
eletrium: I'd rather you didn't put it crudely...

Ach, just when you two were making up.