Session memory (not files): stored on client or web server?
Posted: Wed Aug 23, 2006 3:25 pm
by axiom82
1. Session files are stored in a temporary directory on the web server. They are simply files and reserve no virtual memory. They are deleted every 30 minutes, so if they are 10 KB each it's no big deal: that's 500 MB for 50,000 users, which sounds big but isn't for that many users, and since sessions are deleted frequently, mostly only a portion of that disk space will actually be in use at any one time. Does anyone see a problem with the disk space being used here?
2. The session file is loaded into client memory, not server memory, I HOPE! THAT is the BIG QUESTION. Does anyone know? If the file is stored on the server and its contents are only loaded into memory on the client, then my system is an excellent cache. But ARE the variables from the session file loaded into CLIENT memory?
I need to know whether the session is stored in web server memory or client memory...this is the big question.
Thanks.
Posted: Wed Aug 23, 2006 3:31 pm
by volka
axiom82 wrote: I need to know if the session is stored on web server memory or client memory...this is the big question.
server-side.
How else could the PHP script access the data?
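A minimal sketch of what that means in practice, assuming the default "files" session handler (paths vary by install). The client only ever holds the session ID (the PHPSESSID cookie); the data itself is serialized to a file on the web server under session.save_path and read back into the PHP process that handles each request:

```php
<?php
// The client holds only an ID; the data stays on the server.
session_start();

// Stored server-side in a file such as /tmp/sess_<id>;
// never sent to the browser.
$_SESSION['visits'] = isset($_SESSION['visits']) ? $_SESSION['visits'] + 1 : 1;

echo 'client holds:  ' . session_id() . "\n";        // just an opaque ID
echo 'server stores: ' . session_save_path() . "\n"; // where the file lives
```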
Posted: Wed Aug 23, 2006 3:56 pm
by axiom82
Ahh, that might be an issue. So if I have a file with 20 variables loaded into memory and 100 users are using the file, I would have 2,000 variables in memory on my server?
Posted: Wed Aug 23, 2006 3:57 pm
by volka
If these are simultaneous requests, yes, and 100 instances of that script too.
Posted: Wed Aug 23, 2006 4:14 pm
by Ollie Saunders
Worry about it if there is a problem, not before.
Posted: Wed Aug 23, 2006 4:16 pm
by axiom82
Crap. So, here is my true problem.
I have a catalog website. The following MySQL tables are used to list the catalog items. Basically, the department is a super-category, the listing is the category, and the class is the sub-category where products are actually displayed.
Department -> Listing -> Product Class -> Products Displayed
CREATE TABLE `department` (
  `id` int(2) NOT NULL auto_increment,
  `name` varchar(25) NOT NULL,
  `description` varchar(255) NOT NULL,
  `target_list_id` int(3) NOT NULL,
  `target_class_id` int(4) NOT NULL,
  `rank` int(2) NOT NULL,
  `active` int(1) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`),
  KEY `active` (`active`)
);

CREATE TABLE `listing` (
  `id` int(3) NOT NULL auto_increment,
  `name` varchar(25) NOT NULL,
  `dept_id` int(2) NOT NULL,
  `rank` int(2) NOT NULL,
  `active` int(1) NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `active` (`active`)
);

CREATE TABLE `class` (
  `id` int(4) NOT NULL auto_increment,
  `name` varchar(25) NOT NULL,
  `list_id` int(3) NOT NULL,
  `rank` int(2) NOT NULL,
  `active` int(1) NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `active` (`active`)
) ENGINE=MyISAM;
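For what it's worth, the three levels above can be fetched in one indexed JOIN rather than one query per level. A hypothetical sketch (column names come from the DDL above; the mysqli connection is assumed and shown only in comments):

```php
<?php
// Fetch the whole active navigation tree in a single query instead of
// querying department, then listing, then class separately.
$sql = "SELECT d.name AS department,
               l.name AS listing,
               c.name AS class
        FROM   department d
        JOIN   listing l ON l.dept_id = d.id AND l.active = 1
        JOIN   class   c ON c.list_id = l.id AND c.active = 1
        WHERE  d.active = 1
        ORDER  BY d.rank, l.rank, c.rank";

// With an assumed mysqli connection $db, the tree comes back as one
// result set, one row per (department, listing, class) path:
//   $result = $db->query($sql);
//   while ($row = $result->fetch_assoc()) {
//       // nest $row into arrays keyed by department, then listing
//   }
```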
------
Now, if I run a cron job at midnight to create a file containing a serialized version of this entire structure...all products in all listings in all departments included...the file would be about 100 KB.
Then, the file could be unserialized into memory. This would be fine if it were only one copy located on the client machine, but it would absolutely destroy the memory on the web server, as we would have arrays in arrays in arrays, with products in those sub-arrays, for a possibility of 100,000 products.
And that is just one instance of the code. So it's recommended I just take the hit on repeated MySQL queries.
Even so, I just think it's silly to query data from a database that has already been retrieved once. Why is there no cache system for PHP?
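The nightly-cache idea under discussion can be sketched in a few lines. The tree and file path here are made up for illustration; a real build would come from the department/listing/class queries:

```php
<?php
// Midnight cron job: build the tree once and serialize it to disk.
$cacheFile = sys_get_temp_dir() . '/catalog_cache.ser';

$tree = array(
    'Hardware' => array(                       // department
        'Tools' => array(                      // listing
            'Hammers' => array(101, 102),      // class => product ids
        ),
    ),
);
file_put_contents($cacheFile, serialize($tree));

// Each page request would then do this -- and here is the catch:
// every concurrent request unserializes its OWN copy of the whole
// tree into that PHP process's memory, which is the scaling problem
// being debated in this thread.
$copy = unserialize(file_get_contents($cacheFile));
```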
Posted: Wed Aug 23, 2006 4:34 pm
by Ollie Saunders

What are you chattin' 'bout boi?

axiom82 wrote: Now, if I run a cron job at midnight to create a file containing a serialized version of this entire structure...the file would be about 100 KB.

mysqldump?

axiom82 wrote: Then, the file could be unserialized into memory...

Why?

axiom82 wrote: this would be fine if it were only one copy located on the client machine...

Why would you send this file to the client? The client is the machine that requests from the server, i.e. the public.

axiom82 wrote: but this would absolutely destroy the memory on the web server

Why?

axiom82 wrote: That is just one instance of the code.

Why would you have more than one copy?

axiom82 wrote: So it's recommended I just take the hit on repeated MySQL queries.

Eh?

axiom82 wrote: I just think it's silly to query data from a database that has already been retrieved once.

Yes, it is.

axiom82 wrote: Why is there no cache system for PHP?

It's called a variable.
Posted: Wed Aug 23, 2006 4:54 pm
by axiom82
Not a MySQL dump. It would be like a session-file serialization. When the file is loaded, it unserializes the data into memory variables for the script to use.
The file would be used as an include file, creating hundreds of variables containing information such as the structure for the website and the products that go with that structure (department, listing, class, products).
So, instead of getting this information from the database each time a query is run, it gets it all from a serialized file remade each night. The idea sucks.
Originally, I wanted to create the structure of a department when it was clicked; then, if a product was viewed, it would be put into the structure. So if the same product was viewed again, it would be pulled from the structured array for that department, not the database. My script was great: it pulled data and relocated it into the structure only when required.
But I did not take into account that the memory and disk space needed for this type of session-based storage was unrealistic.
I just can't seem to find a method to cache my results. I guess the speed of queries is truly so fast that unless I have 500 people browsing at one time, it won't really matter. Web application caching is not truly possible for the massive catalog listing structure I require. So it's back to querying the database every single time the page is refreshed, which in itself seems unrealistic.
Posted: Wed Aug 23, 2006 5:11 pm
by Ollie Saunders
Ohh I see what you are doing now.
The whole idea is bad, one of the worst I've seen. You have, however, identified the problems in your own idea, so well done. But in case you hadn't guessed, here's why the idea is bad:
- You should not worry about the speed of anything until it is a problem. Think about the CPU cost versus the implementation cost. As a rule, write everything at as high a level as you like to make things easier, and only optimize where necessary.
- You are, as you said yourself, making many copies of the same data, which has very serious scalability problems that would be difficult if not impossible to solve and would only get worse.
- The data you create will be out of date the second somebody updates the database in any way.
- Serializing and unserializing the data could well be slower than querying it.
- A database is for storing data; it is designed for that and as a result is very good at it, better than PHP, which is why PHP should only handle selected bits.
People always seem to assume that because you need to connect to a MySQL server, and because it isn't part of PHP itself, it's going to be slow. The fact is, databases are good at storing data and particularly good at getting it back quickly, especially when there is a lot of it.
You may never get 500 people browsing your site, and if that does happen, how do you know it will be slow? Should it happen, there are roughly 30 different things that can be done to optimize your database (this not being one of them). Pretty much the only optimization you should do at design time is removing redundant operations and duplication; writing libraries is a bit different.
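"It's called a variable" can be made concrete with per-request memoization in its simplest form. The load_departments() helper below is hypothetical and stands in for the real query; with the static cache, the database work runs at most once per request no matter how many times the function is called:

```php
<?php
// Stand-in for a real SELECT against the department table.
function load_departments() {
    return array('Hardware', 'Garden', 'Clothing');
}

// Memoized accessor: the static variable persists across calls
// within a single request, so the "query" runs only once.
function get_departments() {
    static $cache = null;
    if ($cache === null) {
        $cache = load_departments();  // only reached on the first call
    }
    return $cache;
}
```

Because the cache lives exactly as long as the request, it can never be stale across users, unlike a serialized file rebuilt once a night.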