5 GB database?
Posted: Thu Apr 13, 2006 2:54 pm
by Deseree
Hi guys, I have an important question that I can't figure out the answer to...
I have one MySQL database with 5 tables right now (soon to be 24 tables), and the size is 4.9 GB so far for those 5 tables... each table has around 5 million rows of data.
Is this going to cause any performance issues? Would it be better to break EACH table into its own database, or does it make no difference? Reason I ask is that phpMyAdmin takes a while to load at the moment, and I'm not sure if that's MySQL's fault or just that my server has a lot of Apache connections going (MySQL is not tied to Apache; it's a backend I use shell-side to generate static information once in a while).
So what is the best way to do this? It's A LOT of data we're talking about: a 30-40 GB MySQL database once all 24 tables are combined.
Posted: Thu Apr 13, 2006 3:03 pm
by John Cartwright
Not sure about splitting each table into its own database, although I definitely would put the database on a different server.
Posted: Thu Apr 13, 2006 3:18 pm
by hawleyjr
MySQL is a very powerful database; however, if you are trying to use it to handle a 30+ GB database, I'd look into using a more appropriate database.
Posted: Thu Apr 13, 2006 3:49 pm
by Deseree
lol, yeah... that's what I'm thinking...
what is a more appropriate database to use?
Data is called once per minute from the database right now. 1000 rows of data at random from any one of the tables.
I'm looking to optimize so it's not so crazy, lol.
Posted: Thu Apr 13, 2006 4:04 pm
by AKA Panama Jack
You might want to enable the internal query cache in MySQL; it is normally turned off.
To enable it, you tell it how much RAM to use for the query cache:
mysql> SET GLOBAL query_cache_size = how_many_bytes_to_use;
If you have the memory, set it to 64-128 MB of RAM. Allocate more if it is a dedicated database server and you have the memory to spare.
This is an INTELLIGENT query cache. The query and its result set are stored in memory; if the same query is executed again, the result set is served from RAM instead of hitting the database. The cached entry is cleared from RAM whenever a table it uses is changed by an INSERT, UPDATE, or DELETE.
This greatly reduces database server load and is many times faster than any database abstraction layer with caching ability.
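For example, to give the cache 64 MB and then check whether it is actually paying off (the size here is just a suggestion, not a recommendation for your hardware):

```sql
-- Give the query cache 64 MB of RAM (64 * 1024 * 1024 bytes).
SET GLOBAL query_cache_size = 67108864;

-- Check how the cache is doing: compare Qcache_hits against
-- Qcache_inserts to see how often results are served from RAM.
SHOW STATUS LIKE 'Qcache%';
```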
Posted: Thu Apr 13, 2006 4:26 pm
by Deseree
AKA Panama Jack wrote:You might want to enable the internal query cache in MySQL; it is normally turned off. [...]
Um... one problem: it's a RANDOM extraction of 1000 rows from the table on each call... I do NOT want the same data, EVER, lol... kinda busts your idea on caching, right? I have my own efficient caching in flat files.
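For what it's worth, if the random pull itself ever becomes the bottleneck: ORDER BY RAND() sorts the entire table just to pick 1000 rows. A common workaround, sketched here against a hypothetical table `mytable` with a dense AUTO_INCREMENT `id` column, is to pick one random starting id and read forward from there (caveat: the 1000 rows come back consecutive by id rather than independently random):

```sql
-- The derived table evaluates RAND() a single time, so MySQL picks
-- one random starting id instead of re-rolling RAND() per row,
-- then scans forward along the primary key index.
SELECT t.*
FROM mytable AS t
JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM mytable)) AS start_id) AS r
  ON t.id >= r.start_id
ORDER BY t.id
LIMIT 1000;
```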
Posted: Thu Apr 13, 2006 5:25 pm
by Christopher
Deseree wrote:Is this going to cause any performance issues?
Well? Are there? You'd have to tell us. On a reasonable server dedicated to MySQL I have found that databases of that size perform quite well.
There are all the usual things you can do, like server and MySQL tuning for tables of that size. Hardware improvements can do wonders: faster drives, more drives, and more memory are cheap solutions to these types of problems.
You can also reduce row size -- you said nothing about what is in those rows. And you can split large data sets into frequently and infrequently accessed records: query the smaller, frequently accessed table first and then, if no rows are returned, query the infrequently accessed table. Typically, large data sets are due to the accumulation of historical data, and more current records are queried more often than older ones. There are as many ways to optimize as there are different problems.
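That frequent/infrequent split might look like this, with hypothetical table and column names standing in for your schema:

```sql
-- Hypothetical split: `orders_recent` holds the small, hot working set;
-- `orders_archive` holds the accumulated history.
SELECT * FROM orders_recent WHERE customer_id = 42;

-- Only if the query above returned no rows does the application
-- fall back to the big archive table:
SELECT * FROM orders_archive WHERE customer_id = 42;
```

The fallback decision lives in the application: run the cheap query, check the row count, and issue the expensive one only when needed.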
But don't optimize until you have an actual performance problem.