Can't add BLOBs of over approx. 10 MB to MySQL
Moderator: General Moderators
-
philmarsay
- Forum Newbie
- Posts: 6
- Joined: Wed Mar 21, 2007 3:26 am
Can't add BLOBs of over approx. 10 MB to MySQL
I have got a real problem... I have a database system (MySQL) which has a front-end written in PHP.
One of the functions of the front-end is to allow file-upload into the MySQL database, as BLOBS.
I seem to have hit some impassable limit at approximately 10 MB (a 9 MB file works, a 13 MB file fails) on the files I can get into the database.
I have upped max_allowed_packet in MySQL and the wait timeout, and I have raised PHP's max execution time, max input time, and memory limits, all to no avail.
The field in the database is a LONGBLOB, so that should be OK.
Can anyone pleeeease tell me what I am missing somewhere? I understand MySQL is theoretically supposed to handle BLOB sizes of up to 2 GB!
Here is the snippet of code that does the uploading. I know this code works, as it works with smaller files (up to approx. 9 MB):
$attsql = "insert into attachments values(";
$attsql .= $current_id.",'".$filename."','".$description."','".$filetype."','".$filesize."','".$data."')";
mysql_connect("host", "user", "password");
mysql_select_db("database");
mysql_query($attsql);
If I output $data to the screen, it all seems to be there, but I think MySQL is just plain rejecting the query when the BLOB is big, as nothing is being inserted into the database. The PHP process runs, sends the query to MySQL, and from that point nothing happens. That's how it seems, anyway.
I can't help thinking this is just a MySQL configuration setting that I have not quite got right, but I sure can't find it!
I am using PHP 4 and MySQL 4.1.
Please help, I'm losing sleep over it!!!!
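Since the cutoff sits right around 10 MB, the usual suspect is MySQL's `max_allowed_packet`, which caps the size of a single query: every INSERT carrying the BLOB inline must fit in one packet, including the escaping overhead. A minimal sketch of raising it in my.cnf; the 64M value is just an illustration, not a recommendation from the thread:

```ini
# my.cnf - takes effect after a server restart
[mysqld]
max_allowed_packet = 64M
```

You can confirm the running value from any MySQL client with `SHOW VARIABLES LIKE 'max_allowed_packet';`. A ~10 MB ceiling on a raised setting suggests the variable was set to about that value, or set in the wrong section of the config file.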
Can you take a look into PHP's error log and MySQL's error log? Is there anything in them that might be related to the issue?
And please add some error handling to your PHP script.
Try:
<?php
error_reporting(E_ALL);
ini_set('display_errors', true);
$mysql = mysql_connect("host", "user", "password") or die(mysql_error());
mysql_select_db("database", $mysql) or die(mysql_error());
/*
$filename
$description
$filetype
$filesize
$data
are those variables sanitized and fit for usage in a sql query?
see http://de3.php.net/security.database.sql-injection
*/
$attsql = "insert into
attachments
values(
$current_id,
'$filename',
'$description',
'$filetype',
'$filesize',
'$data'
)";
echo '<div>Debug, query-len: ', strlen($attsql), "</div>\n";
mysql_query($attsql, $mysql) or die(substr(mysql_error(), 0, 200));
echo '<div>Debug, affected-rows: ', mysql_affected_rows($mysql), "</div>\n";
Why store such big files in the database? Store the metadata in the database, store the files on the server, and then use an .htaccess file with "deny from all" in it. PHP will still be able to access the files, but users will not.
This might explain your problem:
Sounds like you're on a low-end/shared system that can't handle the memory. The theoretical limit in MySQL 4.0 is 2 GB; however, each BLOB generally requires about three copies of itself in memory (stored in various buffers), so you need a lot of memory if you store large BLOBs in MySQL. This is why the theoretical limit can only be reached on 64-bit systems. The practical limit is around a few hundred megabytes per BLOB.
- AKA Panama Jack
- Forum Regular
- Posts: 878
- Joined: Mon Nov 14, 2005 4:21 pm
Do you have any idea how SLOW storing and retrieving large files from a database is going to be? Not to mention the incredible LOAD on the database server compared to storing the data as a file in a directory.
Contrary to what some people think, databases were not meant to store large files. You should be storing the files in a dedicated directory and storing a link to each file in the database.
You can probably get away with this on a dedicated database server, but if your software is going to run on a shared host, I can almost guarantee any site running it will be shut down pretty fast.
You need to rethink how you are going to deal with large files, because your current method is about as far from optimal or efficient as you can get.
- RobertGonzalez
- Site Administrator
- Posts: 14293
- Joined: Tue Sep 09, 2003 6:04 pm
- Location: Fremont, CA, USA
Re: Can't add BLOBs of over approx. 10 MB to MySQL
philmarsay wrote: One of the functions of the front-end is to allow file-upload into the MySQL database, as BLOBS.
Filesystems are for files. Databases are for data. Why can't you store the files on the filesystem and database the meta data?
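A minimal sketch of that split: the file lives on disk under a per-record folder, and only the path plus metadata goes into MySQL. The storage root, helper name, and table/column names here are assumptions for illustration, not from the thread:

```php
<?php
// Sketch only - names are assumed, not from the thread.
// Build the on-disk location for an attachment: one subfolder per record ID.
function attachment_path($baseDir, $recordId, $filename) {
    // basename() strips any directory components a user might sneak in
    return $baseDir . '/' . (int)$recordId . '/' . basename($filename);
}

$path = attachment_path('/var/www/attachments', 42, 'report.pdf');
echo $path; // /var/www/attachments/42/report.pdf

// At upload time you would then do something like (untested sketch):
//   mkdir(dirname($path), 0750);
//   move_uploaded_file($_FILES['upload']['tmp_name'], $path);
//   INSERT INTO attachments (record_id, filename, filepath, filesize)
//   VALUES (...);  -- metadata only, no BLOB, no packet-size limit to hit
```

The directory should sit behind an .htaccess "deny from all", as suggested earlier in the thread, so only PHP can read it.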
- Kieran Huggins
- DevNet Master
- Posts: 3635
- Joined: Wed Dec 06, 2006 4:14 pm
- Location: Toronto, Canada
- Contact:
Re: Can't add BLOBs of over approx. 10 MB to MySQL
Everah wrote: Filesystems are for files. Databases are for data. Why can't you store the files on the filesystem and database the meta data?
Aye!
If it's filename collision you're worried about, store the file with its md5 hash as the filename. This will also save space if multiple users upload the same file.
- AKA Panama Jack
- Forum Regular
- Posts: 878
- Joined: Mon Nov 14, 2005 4:21 pm
Re: Can't add BLOBs of over approx. 10 MB to MySQL
Kieran Huggins wrote: If it's filename collision you're worried about, store the file with its md5 hash as the filename.
I think a better way would be to store the file with a Unix timestamp added to the name. The odds that the same filename would be uploaded and stored at the same microtime are all but impossible.
-
philmarsay
- Forum Newbie
- Posts: 6
- Joined: Wed Mar 21, 2007 3:26 am
Read and understood, guys.
I guess I didn't really want to launch into a fairly big rewrite of the application to fit a fairly rare requirement. Also, a large benefit of having everything in the database is the ease of managing things like backups and/or database moves.
Never mind, my hand is forced, because I can't seem to fix the problem, and, as you guys so rightly say, other problems could well lie ahead anyway.
As each attachment is associated with a record in the system, and each record has a unique ID, I will just store it in a subfolder named for the record's ID. Should be OK, yeah?
Thanks for your comments.
Phil
- Kieran Huggins
- DevNet Master
- Posts: 3635
- Joined: Wed Dec 06, 2006 4:14 pm
- Location: Toronto, Canada
- Contact:
My approach, while not human-friendly, will (potentially):
- save space in the filesystem (identical files will simply overwrite each other, regardless of naming)
- automatically add a means to verify file integrity
- guarantee filename / filesystem compatibility
- make it more difficult for people to browse your filesystem natively (ok - I'm reaching on that one...)
Also - watch how many files and directories end up in each folder - each filesystem has different soft limits after which performance suffers. If you'll have a lot of files, you could create a folder based on the first character of the name, then a second, a third, etc. if need be. Another nice thing is that the md5 gives you a guaranteed filename length as well. Example: /a/s/d/asdjh87sa98723y6eashdads656
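A sketch of that md5-sharded layout. A directory depth of three is assumed, and `sharded_path` is a made-up helper name:

```php
<?php
// Sketch of an md5-sharded file layout (depth of 3 directories assumed).
function sharded_path($baseDir, $hash, $depth = 3) {
    $parts = array();
    for ($i = 0; $i < $depth; $i++) {
        $parts[] = $hash[$i]; // one directory level per leading character
    }
    return $baseDir . '/' . implode('/', $parts) . '/' . $hash;
}

// md5 of the file *contents* gives a fixed-length, filesystem-safe name
// (in practice: $hash = md5_file($_FILES['upload']['tmp_name']);)
$hash = md5('file contents here');
echo sharded_path('/var/www/files', $hash);
```

Because the name depends only on the contents, identical files land on the same path, which is where the space saving and the built-in integrity check come from.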
- AKA Panama Jack
- Forum Regular
- Posts: 878
- Joined: Mon Nov 14, 2005 4:21 pm
The problem with using that naming approach is that you never know what the file is, since MD5 will not let you decode the string to get the real filename back; you have to know the filename in advance. So you might want to save the original filename in the database.
The other drawback is that if you need to find a file, you can't easily locate it because of the encoded filenames.
Your best bet is to use the real filename with a Unix timestamp added to the name.
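A sketch of the timestamp-naming idea; the helper name is made up, and the timestamp is used as a prefix so the file extension survives intact. Swapping `time()` for `microtime(true)` would narrow the collision window further, as the post suggests:

```php
<?php
// Sketch: keep the real filename, prefix a Unix timestamp to avoid collisions.
function timestamped_name($filename) {
    return time() . '_' . basename($filename);
}

echo timestamped_name('report.pdf'); // e.g. 1174500000_report.pdf
```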