I'm having trouble resolving an issue with the results of a column defined as a float. The problem is that the results returned on a development server are different from the results returned on the production server. Both servers are running SQL Server 2000 Service Pack 3, and both servers are on Windows 2000 Service Pack 3.
When running the statement "select factor from csm_security where cusip = '07383FJB5'", the results differ: the dev server returns 0.91531495529999995, but the prod server returns 0.91531496000000001. The result from the development server is what we expect, and is what we require on the production server.
I've compared all configuration settings and cannot see a difference.
Any ideas as to why this may be happening?
Thanks for any insight.
Most processors/hardware differ slightly in how they handle floating-point arithmetic. If you are sure your SQL Servers are identically configured, check the documentation for your respective hardware...
You may even be able to adjust the floating-point behaviour to some degree...
SQL Server supports two precisions of float: 4-byte (float(24), stored as real, about seven significant digits) and 8-byte (float(53), about fifteen significant digits).
0.91531495529999995 shows fifteen digits of precision, so it is an 8-byte float.
0.91531496000000001 shows only about seven digits of precision, so it has passed through a 4-byte float somewhere.
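The effect described above can be reproduced outside SQL Server. A minimal Python sketch (the value is taken from the question; using `struct` to force the number through 4-byte storage is purely illustrative of what a cast to real/float(24) does):

```python
import struct

# The full-precision (8-byte) value the dev server returns.
full = 0.91531495529999995

# Round-trip the value through a 4-byte float, as storage in a
# "real" column or an implicit cast to float(24) would do.
packed = struct.pack('f', full)            # store at 32-bit precision
truncated = struct.unpack('f', packed)[0]  # read back as a 64-bit double

# The low-order digits are gone: the two values now agree to only
# about seven significant digits, mirroring the dev/prod discrepancy.
print(full)
print(truncated)
print(full - truncated)
```

The round-tripped value is no longer equal to the original, even though the first seven or so digits match, which is exactly the symptom in the question.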
Something between the two servers has caused a precision truncation, possibly a text-to-number conversion or an implicit cast somewhere in the code path. Try explicitly declaring all floats at full precision, e.g.:
declare @thing float(53)
and try to isolate the point at which the truncation occurs.
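To see the truncation in isolation, a minimal T-SQL sketch (variable names are illustrative) holding the same literal at both precisions:

```sql
-- Same literal declared at full (8-byte) and narrow (4-byte) precision.
declare @full float(53), @narrow float(24)
select @full = 0.9153149553, @narrow = 0.9153149553

-- @full keeps ~15 significant digits; @narrow is rounded to ~7,
-- so the difference is non-zero at full precision.
select @full as full_precision,
       @narrow as truncated,
       @full - @narrow as difference
```

If any variable, parameter, or intermediate column in the code path is declared float(24) or real, you would see exactly this kind of discrepancy.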
Thanks for all the help & info.
It turns out that the problem was with the application. The production server was running a process through the back-end, but the developer was using the front-end version on the development server. There's a difference in code between the front-end and back-end, and that was causing the discrepancy in the data.
I'm always happy when it's not my fault ;-)