Hacker News

My only grievance, and a blocker, is that SQLite does not support decimal numbers, making it less practical to work with finance related data.


You probably already heard of storing amount in cents as an integer, but it's a pattern worth mentioning if it fits your use case.


I don't know how it's done in the US, but in Europe financial calculations are done with four digits after the decimal point.


So store the value as milli-cents? (haha)


Shouldn't that be centi-cents? ^^


Or in MySQL: innocents.


More likely: varchar(255)


varchar(255) == varchar(2000000) in SQLite

The type system in SQLite is not as strict as it is in other RDBMS.


It is possible, but it's somewhat impractical, because you need to convert numbers back and forth every time you touch them at the application level. Not worth the extra headache.


Speaking from experience, if you exclusively deal with integers internally and then apply the appropriate formatting when outputting to the user (which you need to deal with regardless of internal representation), it makes dealing with monetary data so much easier. Integers are generally better supported, faster, and more convenient for most languages, databases, and serialization formats.
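A rough sketch of that split in Python (function names are illustrative, not from any particular library; negative amounts are left out):

```python
# Keep amounts as integer cents internally; format only at the UI boundary.
def to_cents(text: str) -> int:
    """Parse user input like '19.99' into 1999 integer cents."""
    dollars, _, frac = text.partition(".")
    return int(dollars or "0") * 100 + int(frac.ljust(2, "0")[:2])

def format_usd(cents: int) -> str:
    """Render integer cents for display, e.g. 1999 -> '$19.99'."""
    return f"${cents // 100}.{cents % 100:02d}"
```

Everything between those two boundaries is plain integer arithmetic, which every database, language, and serialization format handles exactly.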


+1 adding my anecdata to this.

If you don’t need irrational numbers (few financial applications do?), the tuple of (value, scale) can take you very far.
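A minimal sketch of the (value, scale) pair in Python, where a pair represents value × 10^-scale and addition rescales to the finer of the two scales (names are illustrative):

```python
def add(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Add two (value, scale) pairs exactly, returning a pair at the larger scale."""
    (va, sa), (vb, sb) = a, b
    s = max(sa, sb)
    return (va * 10 ** (s - sa) + vb * 10 ** (s - sb), s)

# (12345, 3) is 12.345; (4, 2) is 0.04; their sum is 12.385 -> (12385, 3)
```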


you never really deal with irrationals ... do you mean floating point?


Nope, specifically meant irrationals.

Floating point (up to a certain amount of absurdity) you can describe with integers (value, scale).

Rational numbers you can describe with integers (numerator, denominator, scale).

Irrationals are more difficult. That's why we try not to use them :). Hopefully your company's commission tiers are not defined in terms of circles or euclidean distances.


How do you handle things like multiplying by some interest rate of 3.125%, or computing (1 + 0.03125)^-354, as examples?

Sure you can round at the end but then you have to worry about compounding errors.


I think that's just part of the process of deciding how small an increment each integer represents (dollars, cents, centi-cents, milli-cents).
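For instance, a 3.125% rate is itself an exact scaled integer (3125 × 10^-5), so applying it can stay in integer arithmetic and round exactly once. A hedged Python sketch (constant and function names are illustrative):

```python
# 3.125% expressed exactly as a scaled integer: 3125 * 10**-5.
RATE_NUM, RATE_DEN = 3125, 100_000

def add_interest(cents: int) -> int:
    """Apply one interest period to an amount in cents, rounding half up at the cent."""
    num = cents * (RATE_DEN + RATE_NUM)       # exact: no floats involved
    return (num + RATE_DEN // 2) // RATE_DEN  # single, explicit rounding step
```

Compounding errors are avoided because every intermediate result is exact; rounding happens once per period, at a precision you chose deliberately.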


Accounting rules often care more about consistency than precision per se. As long as the code follows accepted accounting rules (say, banker's rounding), choosing a known precision works. GAAP and other standards often don't define every detail for, say, amortization, but once you pick a scheme the code should apply that method consistently.


Numeric type, as supported by other SQL databases, does exactly this. I do not want to reinvent it at the application level.

I know how to do it, it is just not practical.


Do you then use variable length for the precision depending on the currency?


Not insisting, and could be seen as a bad practice, but I think I have experience worth sharing.

Unless you are computing polynomials of degree 2+ in currency values, it may be okay to store money in minimal integer units at the application level too, adjusting it only in UI formatters. One should be careful with constants in code and external APIs, but that's nothing you can't get used to or catch with a test. We store timeouts in milliseconds, weights in grams, and so on anyway - not a new thing.

As an example, the primary database I maintain for our company is able to hold decimal .00 natively, but we use its fields as if they were multiplied by 1000, i.e. 12,345.00 is stored and operated as 12.35 (and goes as int64 1235 to ODS) (rounding 5.00 off is fine for this particular accounting model). Nobody made a three-digit mistake ever in a whole decade, because square/cubic money is not a thing naturally, and ratios & exchange rates work as usual. It is like storing cents, but the other way round.

Storing in-app amounts in floats has the downside of accumulating error in for-looped calculations and the need for periodic rounding (although it takes a really big dataset to accumulate enough error to matter). If a language has a money type and can be programmed accordingly, then the problem disappears. If not, it persists either way in different forms.


I dunno... the only alternative seems to be storing as a float everywhere, since most coding languages don't support a decimal type... and using floats for currency honestly just feels scary to me, since they're not fixed-precision.

I mean I know there are min/max ranges in the float specification that can guarantee no loss of accuracy to two decimal places... but then I need to manually make sure there's never anywhere in the code that might ever sum up transactions to beyond that, or whatever. Or people thinking it's OK to divide and then multiply, which you're hopefully less likely to do if it's an integer.

I feel much more peace of mind storing currency data as integers and doing the conversion. It feels quite natural really, since I'm already used to doing that with datetimes for different timezones.


Python has Decimal, JavaScript has BigInt, and so on. I disagree with you - most languages support arbitrary-precision numbers. Floats are not needed or wanted.
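For example, with Python's standard-library decimal module, both the rounding mode and the target precision are explicit rather than implicit (the values here are only illustrative):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Construct from strings, never from floats, to keep values exact.
price = Decimal("19.99")
rate = Decimal("0.03125")   # 3.125%
interest = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

# Unlike binary floats, 0.1 + 0.2 == 0.3 holds exactly here.
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```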

You do not want to invent your own arbitrary-precision numbers.

https://floating-point-gui.de/formats/integer/


What is the drawback of just using a string as storage? Together with something like bcmath for calculations.

Strings do not overflow and are easy to transfer.


Applications can handle integers just fine, even more of a reason to use them. 64-bit integers will work everywhere seamlessly and you only need to convert once, when showing them in the UI and/or collecting user input.


Use a 64-bit integer and store the value as microdollars (or whatever main currency you're using). Simple, fast, correct, and compatible everywhere.



Just to give context, the link points out possible drawbacks with the approach mentioned above


> Integers have complete precision, but very limited range

You can store up to 900 trillion dollars with 0.01 cent resolution in an int64. I doubt many applications need more range than that.
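That range claim is easy to check directly, since 0.01 cent is 10^-4 dollars:

```python
# Largest dollar amount representable in a signed 64-bit integer
# at 10**-4 dollar (0.01 cent) resolution.
INT64_MAX = 2**63 - 1
max_dollars = INT64_MAX // 10**4
assert max_dollars > 900 * 10**12   # comfortably over 900 trillion dollars
```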

> when [integers] overflow, they usually “wrap around” silently

If you want to write an application handling money, you really want to use a Money type of some description, regardless of whether the underlying storage format is an integer. So it's very easy to avoid silent overflows, since basically all languages that are prone to them also offer primitives that perform addition, subtraction, and multiplication with checks for overflow/underflow.

> The implicit decimal point is hard to change and extremely inflexible

If you have a Money type, it's not hard to change. The pain point would normally be serialization, but that's not necessarily different from using decimal floating point internally, since very few APIs, serialization protocols or even databases have native support for fp decimals either.
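A hedged sketch of such a Money type in Python; the explicit bounds check emulates what checked i64 arithmetic gives you natively in languages like Rust, and all names are illustrative:

```python
from dataclasses import dataclass

INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

@dataclass(frozen=True)
class Money:
    """Integer amount in minor units, with an explicit scale and currency."""
    amount: int    # e.g. 1999 with scale=2 means 19.99
    scale: int
    currency: str

    def __add__(self, other: "Money") -> "Money":
        if (other.currency, other.scale) != (self.currency, self.scale):
            raise ValueError("mismatched currency or scale")
        total = self.amount + other.amount
        if not INT64_MIN <= total <= INT64_MAX:  # emulate a checked i64 add
            raise OverflowError("money amount out of int64 range")
        return Money(total, self.scale, self.currency)
```

Because the scale travels with the value, changing the implicit decimal point is a data migration plus one constant, not a hunt through the codebase.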

> The number of subunits in currencies change with time in the world.

I suspect governments are equally or more likely to drop trailing zeros (e.g. Iraq did not so long ago), which you also need to deal with anyway and which does not get easier by using a floating-point format.


Wait, there are finance applications that don't forbid floating point numbers ??



I'd like you to meet my friend "Excel"...


I mean for professional applications, not amateur stuff.


Excel is used for very very expensive professional tasks.


They're talking about decimal numbers as a different thing from floating point numbers:

https://en.wikipedia.org/wiki/Decimal#Decimal_computation


You know, it's strange - we work with an incredible number of decimal facts (financial data), but we find this has zero impact on our SQLite usage, because we have a policy where extremely complex business entities that are prone to rapid change are serialized to JSON and stored as BLOB columns. Virtually all of our decimal values reside within this JSON-serialized data. Obviously, it's not a solution for every arbitrary case, but you can usually find workarounds in interesting places further up the stack. We use only 3 data types in SQLite - BINARY, TEXT and INTEGER.


There is no BINARY type in SQLite. (However, the declared type mostly does not restrict what a column may store. All columns can store blobs and nulls, unless NOT NULL is specified. Columns declared CHAR, VARCHAR, CLOB, CHARACTER, or TEXT can store only text, blobs, and nulls; trying to store a number converts it to text. Columns declared REAL, FLOAT, or DOUBLE cannot store integers; integers are converted into floating-point numbers, and text representing a number is converted to that number. Columns declared INT, INTEGER, NUMERIC, or an unrecognized type (such as BINARY) can store any value, but text representing a number is converted to a number, and floating-point numbers that can be losslessly represented as integers are converted to integers. Columns declared BLOB, or with no type specified, can store any value.)
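These affinity rules are easy to observe from Python's built-in sqlite3 module (table and column names here are only illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# BINARY is an unrecognized type name, so the column gets NUMERIC affinity.
con.execute("CREATE TABLE t (a TEXT, b REAL, c BINARY)")
con.execute("INSERT INTO t VALUES (12, '3.0', '42')")
row = con.execute(
    "SELECT a, b, c, typeof(a), typeof(b), typeof(c) FROM t"
).fetchone()
# a: the number 12 stored into a TEXT column becomes the text '12'
# b: the text '3.0' stored into a REAL column becomes the float 3.0
# c: the text '42' under NUMERIC affinity becomes the integer 42
```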


How do you do reporting on these blobs?


In addition to the other workarounds suggested, you can store the numbers in two columns - one for the coefficient and one for the power of ten, negated. For instance 2.50 becomes 250 and 2, meaning 250 x 10 ^ -2. You can then do math at the application level, or even in SQLite with user-defined functions or even with recursive CTEs if you’re a bit of a masochist.
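A hedged Python sketch of that two-column encoding (helper names are illustrative; signs and validation are left out):

```python
# Two columns per amount: coefficient and negated power of ten.
# 2.50 is stored as (250, 2), meaning 250 * 10**-2.
def to_columns(text: str) -> tuple[int, int]:
    """'2.50' -> (250, 2)"""
    digits, _, frac = text.partition(".")
    return int(digits + frac), len(frac)

def from_columns(coeff: int, exp: int) -> str:
    """(250, 2) -> '2.50'"""
    s = str(coeff).rjust(exp + 1, "0")
    return s[:-exp] + "." + s[-exp:] if exp else s
```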


There is a SQLite decimal extension, unfortunately still under development and not finished: https://chiselapp.com/user/lifepillar/repository/sqlite3deci...


Interesting extension. I'm not from the finance field, so I don't know what kind of regulations/standards are used. Do you happen to know?



