Sunday, September 02, 2007

ERROR 1033 (HY000) on InnoDB configuration error

One of the key features MySQL likes to advertise for its database is the modular architecture that allows different storage engines to be plugged in below the same SQL layer. In practice the application and/or database designer can choose from a variety of low-level data storage implementations, each offering different characteristics, and the choice can be made on a per-table basis. (Even though I personally believe most designs will use a single engine for all tables of a particular schema.)
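
For example (the table and column names here are made up, but the syntax itself is standard MySQL), the engine is simply declared as part of the CREATE TABLE statement:

mysql> CREATE TABLE orders (id INT PRIMARY KEY, total DECIMAL(10,2)) ENGINE=InnoDB;
mysql> CREATE TABLE page_hits (url VARCHAR(255), hits INT) ENGINE=MyISAM;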

The idea behind this is that, for example, people who do not need transactions should not have to worry about them at all; maybe there is a performance impact involved which they cannot afford to take. Moreover, some specialized index or column types might not be available on all engines. Basically the concept is very interesting and can be really useful for developers.

However there is a weakness that in my opinion needs some serious work: the interface between the common SQL layer and the storage engines seems to be rather limited with respect to what storage engines can do to inform the layer above about status and error conditions.

For example there is no (elegant) way to find out the details of a constraint violation when using the InnoDB storage engine. While you will get an error message saying that some statement failed due to a violation of referential integrity constraints, you have to use the generic "SHOW ENGINE INNODB STATUS" command to get any details. This will not only tell you about the error you care about at that particular moment, but will also give you lots of information on lots of other things going on inside InnoDB. It is necessary nonetheless, because you do not have any other means of finding out about those, e.g. when you are investigating a performance problem.
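
To illustrate (a minimal sketch with made-up parent/child tables; the error message is abbreviated here, and the exact layout of the status output varies by version), a failed foreign key insert only yields a generic error, and the details have to be dug out of the monitor output:

mysql> INSERT INTO child (id, parent_id) VALUES (1, 42);
ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (...)
mysql> SHOW ENGINE INNODB STATUS\G
...
------------------------
LATEST FOREIGN KEY ERROR
------------------------
... (the offending statement and constraint appear here, buried among semaphore, transaction, buffer pool and I/O statistics)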

From what I learned from a consultant some time ago, this is due to the limited interface specification through which MySQL itself (the upper layer) and the storage engines talk to each other. Because this protocol has to be somewhat generic, messages from the bottom to the upper layers have to be wrapped into some form of special result set which you then have to parse and understand on your own. Moreover, if memory serves me right, there is a limitation on how much data can be transferred at a time (it could be a limitation of the client as well). Because of this you will not even always get a full InnoDB status output: it will be truncated if it grows bigger than 64k.

While this is not particularly nice, it is a limitation I believe is acceptable, especially in the case of InnoDB, because the innodb_monitor feature allows you to get the full output into a log file.
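
For reference (this is the mechanism as I recall it on MySQL 5.0; check the manual for your version), the standard monitor is switched on by creating a specially named table and switched off again by dropping it:

mysql> CREATE TABLE innodb_monitor (a INT) ENGINE=InnoDB;
-- InnoDB now appends the full monitor output to the error log roughly every 15 seconds
mysql> DROP TABLE innodb_monitor;
-- the periodic output stops again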

What I consider much worse, however, is that error messages from the underlying storage engine are often mapped to more generic MySQL messages in an unpredictable way.

Time and again I have run into problems that present you with an error message that has nothing to do with the actual problem. For example, in a replication scenario you might get error 1236, claiming that there is something wrong with the replication position counters, when in fact this message can also mean a full disk on the master. If you know enough about the implementation details you might see how this message comes to pass, but if you are troubleshooting a production system this is not what you want to spend your time on. Moreover, I tend to forget these peculiarities if they occur seldom enough. Just recently I found a machine spitting out strange errors on each and every query I issued (InnoDB):

mysql> select count(*) from child;

ERROR 1033 (HY000): Incorrect information in file: './test/child.frm'
mysql> 

Now, just having this message, what would you expect to be wrong here? File corruption? Broken SCSI controller? Faulty memory?

When you use Google to search for "ERROR 1033 (HY000)" you will get all sorts of results, most of them suggesting you try myisamchk (not very useful for InnoDB) or the REPAIR TABLE statement. Often you will find someone who claims that restoring from the latest backup might be the only option.

While all of this is certainly valid advice for the problems this error message was originally intended to report, in my case it was all just leading in the wrong direction.

It turns out that something was wrong with the my.cnf configuration file. This was on a machine set up using the "Streaming Slave Deployment" mechanism I described in an earlier article. For some reason the script that usually adapts the config file automatically after downloading the data files had not been run, so a default my.cnf was still in place.

Unfortunately the InnoDB data file sizes configured there did not match the files downloaded from the master server. This is what my.cnf contained:

innodb_data_file_path=ibdata1:512M;ibdata2:100M:autoextend

This is a listing of the data directory:

-rw-rw---- 1 mysql mysql 555745280 2007-08-31 20:26 ibdata1
-rw-rw---- 1 mysql mysql 104857600 2007-08-31 20:26 ibdata2
-rw-rw---- 1 mysql mysql   5242880 2007-08-31 20:26 ib_logfile0
-rw-rw---- 1 mysql mysql   5242880 2007-08-31 20:24 ib_logfile1
drwxr-xr-x 2 mysql mysql      4096 2007-08-31 20:27 mysql
drwx------ 2 mysql mysql      4096 2007-05-12 12:24 test 

It may not be obvious, but 555745280 bytes for ibdata1 is not 512MB, but 530MB. Nevertheless, the MySQL server started even with this wrong configuration; however, every statement failed with the message shown above.
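
For the record, converting the listed byte counts (with 1MB being 1048576 bytes):

555745280 / 1024 / 1024 = 530MB   (my.cnf claimed 512MB)
104857600 / 1024 / 1024 = 100MB   (matches the configured 100MB)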

Shutting down the server and correcting the innodb_data_file_path line in my.cnf to

innodb_data_file_path=ibdata1:530M;ibdata2:100M:autoextend

restored everything to a working state:

mysql> select count(*) from child;
+----------+
| count(*) |
+----------+
|        9 |
+----------+
1 row in set (0.03 sec) 

While I really like MySQL and generally find it rather easy to configure for very high performance, this is definitely a major weakness I would like to see improved in future versions. For the time being I will try to post about anything that strikes me as odd enough for someone else to be interested in, too :)

18 comments:

garote said...

This blog page just SAVED MY LIFE.
Thank you!!!

Daniel Schneller said...

You're welcome...

Anonymous said...

FANTASTIC !!!!!!!
You helped me REALLY a lot.
Great Job !

Anonymous said...

This blog saved me as well. Thanks!

Anonymous said...

this entry saved me too. THANK YOU!!

Anonymous said...

Worked great for me too...
Moved 72GB db to new server with much larger settings in the new my.cnf... put in the old one and the error went away.
I then mysqldump'd the data, put the new startup db and my.cnf in place and reloaded the mysqldump'd data with the new settings.
Thanks!

Ash said...

Indeed, life-saving information. This is fantastic! It worked in a snap with my server problem.

Thanks a lot Daniel and more power!

Ash (manila,philippines)

ericsean said...

this looks like the page that just keeps giving - thanks very much, helped a friend of mine out with this one

Martin Hansell said...

Hi Daniel... Yep, looks like you have saved a lot of lives already! And I am hoping you can do the same for me :-)

When installing Perseus I added this line to my.cnf.

innodb_data_file_path=ibdata1:500M

Not only does it look incomplete compared to yours, but now I have some rather different data in my data directory output...

2007243 84056 -rwxrwx--- 1 mysql mysql 85983232 2009-03-04 17:27 ibdata1
3400717 10256 -rw-rw---- 1 mysql mysql 10485760 2009-02-10 20:14 ibdata1_old
3400718 5132 -rw-rw---- 1 mysql mysql 5242880 2009-03-04 17:27 ib_logfile0
3400719 5132 -rw-rw---- 1 mysql mysql 5242880 2009-03-04 07:52 ib_logfile1

Can you make any sense of this for me?
Many thanks
Martin

Martin Hansell said...

Hi Daniel,

Update...

I changed the line in my.cnf to 82 megs to reflect the output from my system (see above). That restored me to working order. Thanks!

But... how do I get the ibdata1 size to the actually required size?

Seems that MySQL is ignoring the line in the my.cnf (which was originally set to 500 megs).

Thanks
Martin

Daniel Schneller said...

You could mark this file as autoextending. However getting an existing file to precisely 500MB won't be possible. You can add a second data file (keeping the first one at 82MB) and specify that to be 418MB if you like.
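
Something along these lines (an untested sketch, just to illustrate the idea; append :autoextend to the last file if you want it to keep growing):

innodb_data_file_path=ibdata1:82M;ibdata2:418M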

Dave said...

MySQL v5.0.26 complains about "Incorrect information in file" when InnoDB is not loaded. Use "SHOW VARIABLES" to check whether MySQL has InnoDB. If not, uncomment the innodb_data_file_path line, as it throws a "syntax error" error on that.
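
For example (on 5.0 the variable to look at should be have_innodb; YES means InnoDB is available):

mysql> SHOW VARIABLES LIKE 'have_innodb';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| have_innodb   | YES   |
+---------------+-------+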

maxcell said...

Awesome! you saved me some head aches! Thanks!

Eduardo Alcântara said...

Hehehe, it seems there is a connection between life and databases :-P
Thank you for saving mine too!

Anonymous said...

For the other comments above I would like to note that the same/similar error happens when you change
innodb_log_file_size
The solution seems to be removing the ib_logfile* files after MySQL has been shut down successfully and safely.

Ron said...

I feel stupid tryin to understand this.

Anonymous said...

I had this problem as well. My issue was that the tmp dir MySQL was set to use had permissions that were too strict. After fixing the perms some things worked better, but I also needed to bounce the entire MySQL server. Then things worked right.