We are currently preparing a MySQL 4.1 to MySQL 5.0 migration. First tests showed a very nasty problem, however.
One of our test cases runs queries against DECIMAL columns using strings as the queried values. In MySQL 4.1 this works flawlessly. In MySQL 5.0, however, the server performs a (in my opinion very stupid) conversion from string to double for the comparison, and a double in many cases cannot represent the precise value.
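To illustrate the precision loss outside the server (the value below is made up, but any decimal with enough fractional digits behaves the same), here is a small Java sketch comparing the exact decimal with what survives a trip through a double:

```java
import java.math.BigDecimal;

public class StringToDoubleLoss {
    public static void main(String[] args) {
        // Hypothetical monetary value; not exactly representable
        // as a binary double.
        String literal = "1234567.155";

        // Exact decimal interpretation of the string.
        BigDecimal exact = new BigDecimal(literal);

        // What a string-to-double conversion actually yields.
        double lossy = Double.parseDouble(literal);
        BigDecimal roundTripped = new BigDecimal(lossy);

        System.out.println("exact:      " + exact);
        System.out.println("via double: " + roundTripped);
        System.out.println("equal? " + (exact.compareTo(roundTripped) == 0));
    }
}
```

This prints `equal? false`: the round-tripped value carries the nearest-double approximation, not the decimal the string denotes.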
This may lead to very subtle bugs, especially when using an optimistic locking approach as we do. We only noticed the problem because we got a ConcurrentModificationException: an UPDATE statement that contained a string-ized BigDecimal did not match any rows.
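The failure mode also works in reverse: two decimal values that differ can collapse to the same double, so a comparison carried out in double space can both miss exact matches and produce false ones. A sketch with made-up values:

```java
import java.math.BigDecimal;

public class DoubleCollision {
    public static void main(String[] args) {
        // Two distinct decimal values...
        BigDecimal a = new BigDecimal("0.3");
        BigDecimal b = new BigDecimal("0.30000000000000001");

        // ...that become indistinguishable once both sides are
        // converted to double before comparing.
        double da = Double.parseDouble("0.3");
        double db = Double.parseDouble("0.30000000000000001");

        System.out.println("decimals differ: " + (a.compareTo(b) != 0));
        System.out.println("doubles equal:   " + (da == db));
    }
}
```

Both lines print `true`: the second literal lies closer to the double approximation of 0.3 than to the next representable double, so both strings parse to the identical bit pattern.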
Right now this leaves us with few options but to postpone the migration to 5.0, as our application has several hundred thousand lines of code. Most of the database access is handled by an OR mapping layer, but there are also numerous cases of hand-crafted SQL which would be hard to identify and analyse individually.
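A crude first pass over the hand-crafted statements could at least be automated. The helper and pattern below are purely hypothetical and would need refinement for real-world SQL, but they show the idea: flag any statement that compares something to a quoted numeric literal, since those are the candidates for the DECIMAL-vs-string problem.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class QuotedNumericLiteralScan {
    // Hypothetical heuristic: an equals comparison against a quoted
    // numeric literal, e.g.  amount = '1234567.155'
    static final Pattern QUOTED_NUMERIC =
            Pattern.compile("=\\s*'\\d+(?:\\.\\d+)?'");

    // Returns the statements that match the heuristic.
    static List<String> findSuspects(List<String> sqlStatements) {
        List<String> hits = new ArrayList<>();
        for (String sql : sqlStatements) {
            if (QUOTED_NUMERIC.matcher(sql).find()) {
                hits.add(sql);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> stmts = List.of(
                "UPDATE account SET flag = 1 WHERE amount = '1234567.155'",
                "SELECT * FROM account WHERE name = 'Smith'");
        // Only the first statement compares against a quoted number.
        System.out.println(findSuspects(stmts));
    }
}
```

This obviously cannot catch literals assembled at runtime, which is exactly why the manual cases worry us.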
What I absolutely do not get is that with the introduction of precision math they also started using floats and doubles in more places, whereas most people requested the new math precisely because it should make monetary calculations more reliable. Instead, existing applications are suddenly likely to exhibit all sorts of weird problems, from calculation errors to completely different behaviour (see above). Answering complaints about this with a simple "this is documented behaviour" is a bad excuse, if you ask me.
I really do like MySQL; it is a great product and it has served me well for years. I have always appreciated the involvement of the community very much. However, cases like this may be part of what, at least for now, makes the difference between the "really big players" and MySQL.