The cell in Excel is probably a float. Depending on how it is rendered, there will be a point where a given source (float) value produces different target values for different target data types. This is a problem as old as floating point numbers.
Floats are good for measurements, but lousy for counts. What you are seeing could be the difference caused by that property of a float.
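To make that concrete, here's a small Python sketch (not from the original poster's setup, just an illustration) of the same effect: the value typed into the cell is stored as an IEEE 754 double, so what you see depends entirely on how many digits the renderer asks for.

```python
from decimal import Decimal

# A value typed with many decimals is stored as an IEEE 754 double;
# only ~15-17 significant digits actually survive.
x = 1.2345678901234567890

# Two renderings of the SAME stored value disagree past digit 15:
print(format(x, ".15g"))   # 1.23456789012346
print(format(x, ".17g"))   # 1.2345678901234567

# The exact binary value the double really holds (much longer tail):
print(Decimal(x))
```

So Excel's display, the ODBC conversion, and the target column type can each "round" the same stored double differently, which is exactly the mismatch described above.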
The cell shows up as "Double" if I look at it in design view in an Access DB that has a link to it.
Hope the beancounters can accept the fact that the number doesn't show up EXACTLY as it does in Excel. I've tried to manually insert the value with all 19 decimals and it shows up OK.
I've seen this problem show up when I'm using a C++ application to insert values from Excel into SQL Server via ODBC as well. There, the value is picked up as a string and then used to build a SQL statement, and I get the same values as with the manual import into SQL Server.
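The difference between the two paths can be sketched like this (Python for illustration; the column name `val` and table `t` are made up). When the cell's text goes straight into the SQL statement, SQL Server sees all the digits and rounds per the target column's type; when the value passes through a double first, digits beyond ~17 are already gone before SQL Server ever sees it.

```python
# The cell value as text, exactly as typed (19 decimals):
cell_text = "1.2345678901234567890"

# Path 1: splice the text into the statement -> server gets every digit.
sql = f"INSERT INTO t (val) VALUES ({cell_text})"   # hypothetical table/column
print(sql)

# Path 2: convert to a double first (what a float bind effectively does)
# -> precision is lost client-side, before the insert.
as_double = float(cell_text)
print(repr(as_double))   # shorter than cell_text: trailing digits are gone
```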
Floats (reals and doubles) are perfect for dealing with measures, because floats work like measurements work... Counts (INT, MONEY, DECIMAL, etc) are perfect for counts, because they actually are counts. A measurement is an approximation with some desired degree of precision, and it is quick and easy to perform calculations of practically infinite scope at that degree of precision. A count is exact, but fundamentally limited in terms of the kinds of calculations it can do and the scope of values it can represent.
When you're dealing with money, you need a count. When you're dealing with measured or computed quantities such as the constants c or e, or figuring anything to do with astronomy or weather, counts are almost worthless and floats are appropriate.
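The money case is easy to demonstrate in a few lines (Python's `decimal` standing in here for a SQL DECIMAL/MONEY column): the float version of ten cents plus twenty cents is already off, while the count version is exact.

```python
from decimal import Decimal

# Float (measurement): binary approximation, a tiny error creeps in.
print(0.1 + 0.2)                    # 0.30000000000000004
print(0.1 + 0.2 == 0.3)             # False

# Decimal (count): exact arithmetic at the declared precision.
print(Decimal("0.10") + Decimal("0.20"))   # 0.30
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True
```

That one-cent-scale discrepancy is precisely why beancounters get DECIMAL columns and physicists get floats.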
You can't use either tool to efficiently do the jobs that the other is built to handle... You can beat either of them into submission and they can pinch-hit for one another, but they are really two different classes of tools even though people sometimes confuse floats and counts when they don't grok them both.