[SOLVED] How is DECIMAL(m, n) represented in a 64-bit system?


What is the relationship between a DECIMAL(m,n) column specification and the actual representation of that column in a 64-bit MySQL implementation?

I’m defining tables in a context where I know I need an exact value (hence DECIMAL) and I don’t know how sensitive I am to truncation errors in the decimal portion. I’d therefore like to choose a column specification that makes reasonable use of the underlying storage (I know it’s a 64-bit system).

I haven’t yet found an answer in the MySQL documentation despite a reasonable search.


It doesn’t matter whether you’re using a 64-bit MySQL build. DECIMAL values are not stored in native machine words; the type supports up to 65 decimal digits of precision, far more than fits in a 64-bit integer.

DECIMAL uses 4 bytes for each group of 9 digits, plus a smaller number of bytes for any leftover digits. For example, DECIMAL(32,0) stores its 32 digits as 9+9+9+5: 4+4+4 bytes for the first 27 digits, then 3 more bytes for the remaining 5 digits. A total of 12+3, or 15 bytes.
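The byte accounting above can be sketched in Python. This is a worked illustration, not MySQL’s actual code; the leftover-digit table below matches the one MySQL documents for its packed DECIMAL storage format:

```python
# Bytes needed for the leftover (fewer than 9) digits in MySQL's
# packed DECIMAL format: 1-2 digits -> 1 byte, 3-4 -> 2, 5-6 -> 3, 7-8 -> 4.
LEFTOVER_BYTES = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 4, 8: 4}

def digits_to_bytes(digits: int) -> int:
    """Storage bytes for one side of a packed DECIMAL with `digits` digits."""
    full_groups, leftover = divmod(digits, 9)  # 4 bytes per full group of 9
    return full_groups * 4 + LEFTOVER_BYTES[leftover]

print(digits_to_bytes(32))  # DECIMAL(32,0): 4+4+4 bytes, plus 3 for the last 5 digits -> 15
```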

The fractional part of the decimal value (after the decimal point) stores its digits the same way. So DECIMAL(32,9) has 9+9+5 digits for the integer portion and another 9 digits for the fractional portion: 4+4+3 bytes for the integer part plus 4 bytes for the fractional part, again 15 bytes in total.
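Putting both parts together: the total storage for DECIMAL(m,n) is the bytes for the m−n integer digits plus the bytes for the n fractional digits, each computed independently. Again an illustrative sketch assuming the leftover-byte table described above:

```python
# Bytes for the leftover (fewer than 9) digits in MySQL's packed DECIMAL format.
LEFTOVER_BYTES = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 4, 8: 4}

def digits_to_bytes(digits: int) -> int:
    full_groups, leftover = divmod(digits, 9)
    return full_groups * 4 + LEFTOVER_BYTES[leftover]

def decimal_storage(m: int, n: int) -> int:
    """Total storage bytes for a DECIMAL(m, n) column: the integer part
    (m - n digits) and fractional part (n digits) are packed separately."""
    return digits_to_bytes(m - n) + digits_to_bytes(n)

print(decimal_storage(32, 9))  # (4+4+3) + 4 = 15
print(decimal_storage(32, 0))  # (4+4+4+3) + 0 = 15
```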

There’s a more detailed description, with examples down to the byte, in the code comments for the decimal2bin() function in the MySQL source (strings/decimal.c).

Answered By – Bill Karwin

