Fix decimal precision (decimal.InvalidOperation decimal.DivisionImpossible error) #207
rabidaudio wants to merge 5 commits into datamill-co:master from
Conversation
Why not make the default precision configurable in the target configuration?
That would be a fine solution too. I can send an updated PR for this approach if you like.
This doesn't affect the database column types, only the in-Python JSON Schema validation of the incoming data. But if the schema were updated with new precision, this code would recognize it.
A PR for a config option would be great 👍
This reverts commit 6caf51c.
@hz-lschick done 👍
I seem to have a common problem with `decimal.InvalidOperation: [<class 'decimal.DivisionImpossible'>]`
Hi @r-nyq, we are looking into it and will try to merge it soon.
Problem
Error message (this is from target-snowflake but the problem is in the shared code):
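A minimal way to reproduce this class of error outside any target, using only the standard library and the values described below:

```python
from decimal import Decimal, InvalidOperation, getcontext

# Python's default decimal context allows 28 significant digits.
assert getcontext().prec == 28

try:
    # jsonschema's "multipleOf" check boils down to a Decimal modulo.
    # The exact integer quotient here needs about 35 digits, more than
    # the context allows, so decimal signals DivisionImpossible and
    # raises InvalidOperation.
    Decimal('0.000913808181253534') % Decimal('1E-38')
except InvalidOperation as err:
    print('validation would crash:', err)
```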
The problem is that Singer's official `tap-postgres` uses `minimum`, `maximum`, and `multipleOf` to effectively report the scale of the column. For example,
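a numeric column's schema takes roughly this shape (the exact bounds depend on the column's declared precision and scale; these particular values are illustrative, not taken from the PR):

```json
{
  "type": ["null", "number"],
  "multipleOf": 1e-38,
  "minimum": -1e22,
  "maximum": 1e22
}
```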
This breaks JSON Schema validation: when checking `multipleOf` it computes `Decimal('0.000913808181253534') % Decimal('1E-38')`, which fails because Python's default decimal context precision (28 significant digits) is too small for the exact quotient.

Solution
This is a well-known problem that has bitten several targets, for which there are several solutions. One is simply to set the precision to something arbitrarily high, like 40. Another, which `pipelinewise-target-postgres` does, is to simply not allow precision higher than some threshold.

Here, I ported a solution I wrote for `meltano/target-postgres` which sets Python's decimal precision to as large as it needs to be to match the schema. This solution was later ported to `meltano/target-snowflake`. Open to other solutions though. I would request that any solution also be ported to `datamill-co/target-snowflake`.
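A sketch of the approach, with a hypothetical helper (the actual PR derives these values while walking the incoming JSON Schema; the function name and signature here are illustrative):

```python
from decimal import Decimal, getcontext

def grow_decimal_precision(multiple_of, maximum):
    """Hypothetical helper: raise the decimal context precision just
    enough to validate values up to `maximum` with a step of
    `multiple_of` without signalling DivisionImpossible."""
    # Fractional digits implied by the step, e.g. '1E-38' -> 38.
    scale = -Decimal(multiple_of).as_tuple().exponent
    # Integer digits of the largest allowed value, e.g. '1E22' -> 23.
    integer_digits = Decimal(maximum).adjusted() + 1
    needed = integer_digits + scale
    ctx = getcontext()
    if ctx.prec < needed:
        ctx.prec = needed

# With enough precision, the previously-failing modulo succeeds:
grow_decimal_precision('1E-38', '1E22')
assert Decimal('0.000913808181253534') % Decimal('1E-38') == 0
```

Growing the global context (rather than capping the schema's precision) keeps validation faithful to what the tap reported, at the cost of slower `Decimal` arithmetic when schemas declare very wide numerics.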