
SQL problems....Error 28

Posted: Sun Apr 02, 2006 10:24 pm
by Deseree
SELECT *
FROM `000_data_01`
WHERE `data_used` = 0
ORDER BY RAND()
LIMIT 10


MySQL said:

#1030 - Got error 28 from table handler
How do I fix this? I have 2 million rows of data in table 000_data_01, and I want it to randomly return 10 rows... and then mark those ten as used so they don't get used again...

Posted: Mon Apr 03, 2006 12:51 am
by Deseree
Sorry about the double or triple post, and I think Jcart said I thread-jacked, so I may as well apologize for that too. I didn't know I did that; I was having some serious internet and computer issues tonight.

Anyways...

I have a MySQL database, and one of the tables has 2.5 million rows of data, no joke, and I need to pull 10 at random at a time. ORDER BY RAND() doesn't seem to work at that size, or at least I'm getting that error 28...

Can anyone advise how I can do what I need?


Next, how can I quickly get the exact number of rows, or grab the largest id number, since it's auto-incrementing?

Thanks in advance~

I'd really appreciate some help; I can't get my brain around this issue. I really don't want to break this table into 100 tables of 250,000 rows or something, that seems silly and ridiculous!

Posted: Mon Apr 03, 2006 2:48 am
by onion2k
Do you have an index on the table?

Posted: Mon Apr 03, 2006 8:51 am
by Deseree
onion2k wrote:Do you have an index on the table?
Yes, there's an index, on the auto-incrementing `id` column.

right now, my solution is just this:

Code: Select all

SELECT *
FROM `000_data_01`
WHERE `data_used` = 0
  AND `id` >= 0
  AND `id` <= 99999
ORDER BY RAND()
LIMIT 10
But this only randomizes over ids between zero and 100,000; what about the other 2.4 million?

When I set it to:

Code: Select all

SELECT *
FROM `000_data_01`
WHERE `data_used` = 0
  AND `id` >= 0
  AND `id` <= 999999
ORDER BY RAND()
LIMIT 10
Note the extra nine: that's zero to one million. I was getting the MySQL error 28 again, and I was watching my server's stats, `df -h` in particular. Before and after the script ran, /tmp had just about 2 GB free, only 36 MB used. While the zero-to-one-million query was running, /tmp usage was at 99% of 2 GB! LOL!

So my idea now is to get the total number of rows, use PHP to break the 2.5 million into 25 ranges of 100k each, randomly pick one of those 25 ranges, and then use that range in the SQL.

It's hardly what I'd like to do, as it's quite eccentric and too much extra coding. I just don't understand why MySQL can't handle it.

I guess MySQL can't feed 2 million rows into ORDER BY RAND() and then LIMIT 10.
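For reference, a common workaround for ORDER BY RAND() on big tables is to pick a random starting id first and then take a short index range scan from there, so MySQL never has to sort the whole table into a temp file. A sketch against the table in this thread (the two-step split between SQL and the calling script is an assumption, and R is a placeholder the script fills in):

```sql
-- Step 1: grab the highest id (fast; reads one end of the index)
SELECT MAX(`id`) FROM `000_data_01`;

-- Step 2: in the calling script, pick a random integer R between 0 and that max.

-- Step 3: take 10 unused rows starting at R (index range scan, no filesort)
SELECT *
FROM `000_data_01`
WHERE `data_used` = 0
  AND `id` >= R
ORDER BY `id`
LIMIT 10;
```

The trade-off: the ten rows come back consecutive by id rather than as ten independent random picks, which may or may not matter for this job, but it never touches /tmp.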

If someone else wants to help me with this that'd be great!

Just try this SQL on a large table; you need a few million rows, of course!

Code: Select all

SELECT *
FROM `000_data_01`
WHERE `data_used` = 0
  AND `id` >= 0
  AND `id` <= 25000000
ORDER BY RAND()
LIMIT 10
On to my next matter: I need to know how to get the total number of rows in the table, and then I also need to get the names of all the tables in the database...

Thanks in advance, hopefully I can get this coded today.

Posted: Mon Apr 03, 2006 8:59 am
by feyd
Try an EXPLAIN on your "mega" query; it may hint at why the error is showing up. It likely has to do with needing to potentially read every single row in the table.

Error 28 is "No space left on device", i.e. MySQL ran out of temporary disk space trying to perform your request.
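To see it concretely, running EXPLAIN on the query from this thread should show something like the following in the Extra column (exact output varies by version):

```sql
EXPLAIN SELECT *
FROM `000_data_01`
WHERE `data_used` = 0
ORDER BY RAND()
LIMIT 10;
-- Extra: "Using where; Using temporary; Using filesort"
-- i.e. MySQL copies every matching row into a temporary table, tags each
-- with a random number, and sorts the lot: that temp table is what fills /tmp.
```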

Posted: Mon Apr 03, 2006 9:09 am
by Deseree
feyd wrote:Try an EXPLAIN on your "mega" query; it may hint at why the error is showing up. It likely has to do with needing to potentially read every single row in the table.

Error 28 is "No space left on device", i.e. MySQL ran out of temporary disk space trying to perform your request.
Yeah, lol, I found that out the hard way!

I can't really narrow it down any more; it's a very simple table with a data column, an id column, and a used column.

So the only way I can make the query return fewer results is to use the id ranges I described above...

How about some help on the other problem: how do I return the number of rows in the table, and how do I return the names of all the tables in the DB?

Posted: Mon Apr 03, 2006 9:16 am
by feyd
DESCRIBE and SHOW TABLES
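Spelled out against the table names from earlier in this thread:

```sql
-- Exact row count; on a MyISAM table, COUNT(*) with no WHERE clause is
-- answered from stored table metadata, so it's effectively instant
SELECT COUNT(*) FROM `000_data_01`;

-- Rows still marked unused (this one scans, unless data_used is indexed)
SELECT COUNT(*) FROM `000_data_01` WHERE `data_used` = 0;

-- Names of all tables in the current database
SHOW TABLES;

-- Column layout of one table
DESCRIBE `000_data_01`;
```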

Posted: Mon Apr 03, 2006 9:20 am
by Benjamin
You're out of memory. Edit your my.cnf file and give MySQL some breathing room and your error will go away. You shouldn't have to SELECT * anyway; you should only be selecting the fields you need.

What are you using this for anyway? :?

Posted: Mon Apr 03, 2006 10:34 am
by Deseree
agtlewis wrote:You're out of memory. Edit your my.cnf file and give MySQL some breathing room and your error will go away. You shouldn't have to SELECT * anyway; you should only be selecting the fields you need.

What are you using this for anyway? :?
Small scripting job for two friends :) They have rather large .txt files and want to import them into MySQL. I did so, and it works when displaying non-randomly, but ORDER BY RAND() has problems with that much data.

ID | DATA FIELD | DATA FIELD 2 | USED
#  | WHATEVER   | WHATEVER     | 0 or 1, depending on whether it's been called yet

Those are the columns of the table. So if I didn't use SELECT *, and used SELECT `data_field`, `data_field2` instead, would that be much better? I mean, what difference does it make versus also returning the other two columns, which are the id number (tinyint) and the used switch (tinyint)?

Editing /etc/my.cnf... what should I have? I've just got the default cPanel one, which is probably crap. I need "assistance" with "tweaking" my.cnf, php.ini, and httpd.conf as well while I'm at it, heh...

Posted: Mon Apr 03, 2006 10:53 am
by Benjamin
I would assume that if you only need 2 of the 3 fields, you would save memory by only requesting those 2 in the query. Here is a sample MySQL config file.

Code: Select all

# Example mysql config file for large systems.
#
# This is for large system with memory = 512M where the system runs mainly
# MySQL.
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /var/lib/mysql) or
# ~/.my.cnf to set user-specific options.
#
# One can in this file use all long options that the program supports.
# If you want to know which options a program support, run the program
# with --help option.

# The following options will be passed to all MySQL clients
[client]
#password	= your_password
port		= 3306
socket		= /var/lib/mysql/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
port		= 3306
socket		= /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 256M
max_allowed_packet = 1M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size= 16M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8

# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
# 
#skip-networking

# Replication Master Server (default)
# binary logging is required for replication
log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id	= 1

# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#    the syntax is:
#
#    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
#    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#
#    where you replace <host>, <user>, <password> by quoted strings and
#    <port> by the master's port number (3306 by default).
#
#    Example:
#
#    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
#    MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
#    start replication for the first time (even unsuccessfully, for example
#    if you mistyped the password in master-password and the slave fails to
#    connect), the slave will create a master.info file, and any later
#    change in this file to the variables' values below will be ignored and
#    overridden by the content of the master.info file, unless you shutdown
#    the slave server, delete master.info and restart the slave server.
#    For that reason, you may want to leave the lines below untouched
#    (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id       = 2
#
# The replication master for this slave - required
#master-host     =   <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user     =   <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password =   <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port     =  <port>
#
# binary logging - not required for slaves, but recommended
#log-bin

# Point the following paths to different dedicated disks
#tmpdir		= /tmp/		
#log-update 	= /path-to-dedicated-directory/hostname

# Uncomment the following if you are using BDB tables
#bdb_cache_size = 64M
#bdb_max_lock = 100000

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/lib/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/lib/mysql/
#innodb_log_arch_dir = /var/lib/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 256M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 64M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
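For the error in this thread specifically, the stock file above mostly tunes buffers; the knobs that actually govern the filesort spilling into /tmp are the temp-table sizes and tmpdir. A sketch of the relevant [mysqld] lines (the path and sizes here are assumptions to adapt to your box, not recommendations):

```ini
[mysqld]
# Put MySQL's sort/temp files on a partition with plenty of free space
# (/home/mysqltmp is a hypothetical path; create it, writable by the mysql user)
tmpdir              = /home/mysqltmp
# Let bigger implicit temporary tables stay in RAM before spilling to disk
tmp_table_size      = 256M
max_heap_table_size = 256M
# Per-connection buffer used by filesort
sort_buffer_size    = 4M
```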

Posted: Mon Apr 03, 2006 10:54 am
by Deseree
feyd wrote:DESCRIBE and SHOW TABLES

Code: Select all

SELECT `table`.`id`
FROM `table`
ORDER BY `table`.`id` DESC
LIMIT 1
OK, that worked; it gives me the highest id number, which will work. It'd be cool if I could do it based on the auto_increment value, but I can't quite get the syntax right:

http://dev.mysql.com/doc/refman/5.0/en/describe.html

DESCRIBE `table` `id`

That returns information on the table's structure; it doesn't return the auto_increment value.
Indexes:
Keyname       Type    Cardinality  Field
data_field_1  UNIQUE  2411499      data_field_1
id            INDEX   None         id

Space usage:
Type   Usage
Data   418,463 KB
Index  118,883 KB
Total  537,346 KB

Row statistics:
Statement       Value
Format          dynamic
Rows            2,411,499
Row length ø    177
Row size ø      228 Bytes
Next Autoindex  2,411,500
Creation        Apr 02, 2006 at 03:54 PM
Last update     Apr 03, 2006 at 10:00 AM
That is what I copied from phpMyAdmin. Now, how would I grab that UNIQUE index's cardinality? Because that is the number of rows in the table...
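For what it's worth, most of that phpMyAdmin panel comes from a single statement, which returns the Rows and Auto_increment values directly (table name taken from earlier in the thread):

```sql
-- One row per matching table, with columns including Rows, Avg_row_length,
-- Data_length, Index_length, Auto_increment, Create_time, Update_time
SHOW TABLE STATUS LIKE '000_data_01';
```

For MyISAM tables the Rows column here is exact; for InnoDB it is only an estimate, so SELECT COUNT(*) is the safe fallback.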

Posted: Mon Apr 03, 2006 11:01 am
by Deseree
OK, interesting; thanks for the my.cnf to compare against.

My system's specs are:

Dual AMD Opteron 246
4 gigs ram
300 gb 7200 sata hd

lol... so what should I edit in your file to make it better equipped for my system? I've got the /tmp dir with only 2 GB right now. I think I'm going to need to repartition it with, say, 10 GB, but the thing is, that isn't very efficient, you know? lol. This data is going to be pulled A LOT, say 30-60 queries per minute to the script,

and the script only calls that one query, thank god, returning 10-1000 rows of data each time at random.

I'm gonna search some more and figure out how to show the index data for the UNIQUE id column; that will give me the total rows of data, and that's gonna help with my scripting...

:twisted: :twisted: :twisted: :twisted:

Posted: Mon Apr 03, 2006 11:07 am
by Deseree
bingo,

Code: Select all

SHOW INDEX FROM `table`
That works :) Thanks, all. Back to scripting...
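One loose end from the first post: after fetching the ten random rows, marking them used so they can't come back is a single UPDATE keyed on the ids the script just read (the id list below is a placeholder with example values; the script fills in the real ones):

```sql
UPDATE `000_data_01`
SET `data_used` = 1
WHERE `id` IN (101, 202, 303);  -- the ten ids just fetched (example values)
```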