Hi guys, I have an important question that I can't figure out the answer to.
I have one MySQL database with 5 tables right now (soon to be 24 tables), and the size is 4.9 GB so far for those 5 tables. Each table has about 5 million rows of data.
Is this going to cause any performance issues? Would it be better to break EACH table into its own database, or would it make no difference? The reason I ask is that phpMyAdmin takes a while to load at the moment, and I'm not sure if it's MySQL's fault or just that my server has a lot of Apache connections going (but MySQL is not connected to Apache; I use it as a backend from the shell to generate static information once in a while).
So what is the best way to do this? It's a LOT of data we're talking about: something like a 30-40 GB MySQL database if I combine all 24 tables.
5 GB database?
- AKA Panama Jack
- Forum Regular
- Posts: 878
- Joined: Mon Nov 14, 2005 4:21 pm
You might want to enable the internal query cache in MySQL. It is normally turned off.
To enable it you have to tell it how much RAM to use for the query cache:
mysql> SET GLOBAL query_cache_size = how_many_bytes_to_use;
If you have the memory, set it to 64-128 MB of RAM. Allocate more if it is a dedicated database server and you have the memory.
This is an INTELLIGENT query cache. The query and the results of the query are stored in memory. If the same query is executed again, the result set is retrieved from RAM instead of querying the database. The query and result set are cleared from RAM if a table they use is updated through an INSERT, UPDATE or DELETE.
This greatly reduces the database server load and is many times faster than any database abstraction layer with caching ability.
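For reference, a minimal sketch of what enabling and verifying the cache might look like (the size and threshold values are illustrative, not recommendations):

```sql
-- Enable a 64 MB query cache at runtime (value is illustrative).
SET GLOBAL query_cache_size = 67108864;   -- 64 * 1024 * 1024 bytes
SET GLOBAL query_cache_type = 1;          -- cache all cacheable SELECTs

-- To make the setting permanent, put it in my.cnf under [mysqld]:
--   query_cache_size = 64M
--   query_cache_type = 1

-- Verify it is working: Qcache_hits should climb when an
-- identical SELECT is executed repeatedly.
SHOW VARIABLES LIKE 'query_cache%';
SHOW STATUS LIKE 'Qcache%';
```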
AKA Panama Jack wrote: You might want to enable the internal query cache in mysql. This is normally turned off.

Uhm, one problem: I do a RANDOM extraction of 1000 rows of data from the MySQL table on each call, and I do NOT want the same data, EVER, lol. That kinda busts your idea on caching, right? I have my own efficient caching in flat files.
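For context on why the query cache can't help here: MySQL does not cache queries that use non-deterministic functions such as RAND(), and a random sample is a different result set on every call anyway. A sketch of the problem (table and column names are hypothetical):

```sql
-- ORDER BY RAND() forces a full scan and filesort of all
-- ~5 million rows just to keep 1000 of them, on every call,
-- and the result is never served from the query cache.
SELECT * FROM items ORDER BY RAND() LIMIT 1000;

-- One common cheaper alternative: filter by a random threshold
-- so only a fraction of rows reach the sort/limit stage.
SELECT * FROM items WHERE RAND() < 0.001 LIMIT 1000;
```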
- Christopher
- Site Administrator
- Posts: 13596
- Joined: Wed Aug 25, 2004 7:54 pm
- Location: New York, NY, US
Re: 5 GB database?
Deseree wrote: Is this going to cause any performance issues?

Well? Are there? You'd have to tell us. On a reasonable server dedicated to MySQL I have found that databases of that size perform quite well.
There are all the usual things you can do, like server and MySQL tuning, for tables of that size. Hardware improvements can do wonders. Faster and more drives, and more memory, are cheap solutions to these types of problems.
You can also reduce row size -- you said nothing about what is in those rows. And you can split large data sets into frequently and infrequently accessed records, first querying the smaller frequently accessed table and then, if no rows are returned, querying the infrequently accessed table. Typically, large data sets are due to the accumulation of historical data, and more recent records are queried more often than older ones. There are as many ways to optimize as there are different problems.
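The frequent/infrequent split described above might look something like this (table names are hypothetical; both tables share the same schema):

```sql
-- Recent rows live in a small "hot" table, historical rows in
-- a large archive table. Query the hot table first:
SELECT * FROM orders_recent WHERE customer_id = 42;

-- The application falls back to the archive only if the first
-- query returned no rows:
SELECT * FROM orders_archive WHERE customer_id = 42;
```

Because most lookups hit the small hot table, the big archive table is scanned only on the rare miss.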
But don't optimize until you have an actual performance problem.