  1. #1
    Join Date
    Apr 2006
    Posts
    157

    Unanswered: Does Performance Increase when changing a long integer to byte?

    hi everyone,

    Just a small question:

    Like the header says,

I would just like to ask if I could get any improved performance by downgrading some fields from long integer to byte, or some text(255) to text(50), aside from file-size benefits.


    thanks
    Only quitters quit!

  2. #2
    Join Date
    Jun 2005
    Location
    Richmond, Virginia USA
    Posts
    2,763
    Provided Answers: 19
Don't know about the long integer to byte, but as I understand it,
text(255) to text(50) ... aside from file-size benefits
is not true. Fields in Access aren't fixed-length; the field size set up in table design is just the maximum allowed. The size of the file is determined by (among other things) the amount of data actually in the fields. In other words, the data "Widget" will bloat a db by the same amount whether it's in a field dimensioned as text(50) or text(255).
    Hope this helps!

    The problem with making anything foolproof...is that fools are so darn ingenious!

    All posts/responses based on Access 2003/2007

  3. #3
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    This might be useful:

Byte: stores numbers from 0 to 255 (no fractions); 1 byte of storage.
Long Integer (default): stores numbers from –2,147,483,648 to 2,147,483,647 (no fractions); 4 bytes of storage.

Not sure if what Missinglinq said applies here too; I think these are the physical sizes regardless of the value stored.

    - GeorgeV
    George
    Home | Blog

  4. #4
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
I doubt it will have a significant impact on performance. It will have an impact on physical storage (and therefore probably on the time taken to read or write data), but, as an ex-colleague assured me years ago, as far as the processor is concerned a byte value is the same as a boolean value is the same as a long value. In modern PCs that means they are all 32 or 64 bit, depending on how swanky your PC is; surely there cannot still be (m)any 16-bit PCs out there.

He always used longs in VB applications, even for boolean values, his argument being that it helped performance to use the native number format of the CPU rather than having the compiler cast the value to that format. Me, I didn't understand it then and I don't really understand it now, but then he was an ex-hardware guy... he used to be involved in using OCCAM on the Transputer.
    I'd rather be riding on the Tiger 800 or the Norton

  5. #5
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    Quote Originally Posted by syntaxerror
    I would just like to ask if I could get any improved performance by downgrading some fields from long integer to byte
    or some text(255) to text(50)
    Conclusion: No..?
    George
    Home | Blog

  6. #6
    Join Date
    Jun 2005
    Location
    Richmond, Virginia USA
    Posts
    2,763
    Provided Answers: 19
I think healdem's probably right. IMHO, too many programmers still worry about things like performance and storage space, which were legitimate concerns in the past but have been made moot by today's PCs! When a database grows to the point where there's an erosion of performance, it's probably time to upsize to something other than Access.

Limiting a number to 0-255 seems a bit foolhardy unless you're using it to store a person's age! I would always go the other way and use a datatype that will probably store more than I'll need. A frequent question here and on other forums has to do with Error 6 (Overflow), and the posts are almost always the same: the app's been working fine for years and now it fails with Error 6! It almost always results from cumulative totals that have finally exceeded the storage capacity of the numeric data type originally used.
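As a minimal VBA sketch of that failure mode (the variable names are invented here):
Code:
Dim total As Integer        ' 16-bit Integer: maximum value 32,767
total = 32767
total = total + 1           ' raises run-time error 6: Overflow

Dim bigTotal As Long        ' 32-bit Long: maximum value 2,147,483,647
bigTotal = 32767
bigTotal = bigTotal + 1     ' fine; plenty of headroom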
    Hope this helps!

    The problem with making anything foolproof...is that fools are so darn ingenious!

    All posts/responses based on Access 2003/2007

  7. #7
    Join Date
    Dec 2002
    Location
    Préverenges, Switzerland
    Posts
    3,740
Strange question (unless you are still using a 6 MHz 80286). What exactly are you doing that is slow?

I have one application that has been in use for around 12 years. It is a brute-force sort-of-goal-seek, something like setting up a school timetable to balance class sizes given pupil curricula and teacher availability (try every valid permutation and select the 'best'). The original version used recordsets and arrays and involved many dozens of millions of loops, with typical run-times around 100 minutes; changing datatypes had only a marginal effect. The current version uses SQL rather than recordsets/arrays and completes precisely the same task in less than 2 minutes.

My totally unscientific summary: messing around with datatypes can save you maybe 2% of the execution time if you are lucky; change the method and you might save 98% of the execution time.

If you are looking for speed improvements from datatype changes, odds-on there is something else wrong.

    izy
    currently using SS 2008R2

  8. #8
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
    Quote Originally Posted by izyrider
    ...my totally unscientific summary: messing around with datatypes can save you maybe 2% of the execution time if you are lucky; change the method and you might save 98% of the execution time.
    ...
Agreed.
You can often get really substantial gains from better methodology. One of my bêtes noires is seeing frequent use of domain functions in JET/Access. Yes, they are relatively easy to use, and used sparingly they can be a very powerful tool, but if you use them within a loop, or because effectively you can't be bothered to use the power of SQL, they can be a real resource hog.

I had to work on an app which did a lot of calculations on averages. The initial author made liberal use of DLookups, amongst other things, and the app was taking around 6 minutes to perform a fairly simple calculation. I ripped out the domain functions and found that effectively the same function ran in under a second, with everything else the same.

It's exactly the same argument on Access reports: if you have a lot of subtotals or SQL aggregate functions in the footers, consider doing the maths yourself in the detail Format event. You may shock yourself at the speed increase. You won't notice it with a couple, but start hammering it with lots of standard deviation, mean and mode calcs and just watch that minute hand spin by on the clock. It's a usability thing (I hate to see my userbase gossiping when they could have their noses to the grindstone... me, I've always thought Fagin was a sissy).
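To make that concrete, here is a hedged sketch (the Customers/Orders tables and the Amount field are invented for illustration) of a domain function in a loop versus one set-based query:
Code:
' Slow: DAvg fires a separate query on every pass through the loop.
Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset("SELECT CustomerID FROM Customers")
Do While Not rs.EOF
    Debug.Print rs!CustomerID, _
        DAvg("Amount", "Orders", "CustomerID=" & rs!CustomerID)
    rs.MoveNext
Loop
rs.Close

' Faster: one aggregate query computes every average in a single pass.
Dim rsAvg As DAO.Recordset
Set rsAvg = CurrentDb.OpenRecordset( _
    "SELECT CustomerID, Avg(Amount) AS AvgAmount " & _
    "FROM Orders GROUP BY CustomerID")
Do While Not rsAvg.EOF
    Debug.Print rsAvg!CustomerID, rsAvg!AvgAmount
    rsAvg.MoveNext
Loop
rsAvg.Close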
    I'd rather be riding on the Tiger 800 or the Norton

  9. #9
    Join Date
    Apr 2006
    Posts
    157


Many thanks for the input, guys.

Not sure if I got that last one right, but is this what healdem was trying to say:

that I'm better off using VB calculations and executing SQL queries
than using Access's native domain functions?


    again, thanks

    se
    Only quitters quit!

  10. #10
    Join Date
    Jan 2007
    Location
    California, USA
    Posts
    520
I believe what healdem was saying is not to OVERuse the domain functions. One or two, even six or seven, would be more than fine in code that is executed once. But putting them into a loop would slow down the process considerably.

  11. #11
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
That's it precisely.

They are a useful tool used appropriately (and usually that means sparingly). Overused, or used inappropriately, they can have a severe impact on performance. It's a bit like selecting the right tool for a job: sometimes the tool closest to hand is the right tool; sometimes it pays to go back to the shed and pick the most appropriate one.

If you need to do these calculations in a loop, then using lots of domain functions really hurts performance.

My comment about reports is that if you are iterating through the data anyway, then you don't need to request the data again and again. AFAIK, each domain function in a report footer requires a separate trip to the db, so do the arithmetic yourself whilst you are generating the report (see the sketch below). If it's just one or two domain lookups then that's fine... more than that and domain functions start to hurt. It's a matter of balance...

In some ways it's a bit like the choice between opening a separate recordset to do a lookup or using a DLookup... sometimes it's quicker one way, sometimes the other.
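Along those lines, a hedged sketch of accumulating a report total yourself rather than putting a domain or aggregate function in the footer (Amount and txtTotal are invented names; this variant uses the detail Print event, whose PrintCount guard avoids double-counting when a section is formatted more than once):
Code:
Option Explicit
Private mTotal As Currency

Private Sub ReportHeader_Format(Cancel As Integer, FormatCount As Integer)
    mTotal = 0                          ' reset at the start of the report
End Sub

Private Sub Detail_Print(Cancel As Integer, PrintCount As Integer)
    If PrintCount = 1 Then              ' count each row exactly once
        mTotal = mTotal + Me!Amount     ' Amount: bound field in the detail section
    End If
End Sub

Private Sub ReportFooter_Format(Cancel As Integer, FormatCount As Integer)
    Me!txtTotal = mTotal                ' txtTotal: unbound text box in the footer
End Sub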
    I'd rather be riding on the Tiger 800 or the Norton

  12. #12
    Join Date
    Oct 2002
    Location
    Leicester - UK
    Posts
    820
Yes, in that a byte requires fewer bit comparisons than a Long, but it would be a fraction of a second of benefit per billion calculations. It will also reduce demand on the paging file and HDD, but again, unless your database is of a size that's normally too large for Access to handle, you won't get much of a bonus. This is less true for text fields, but basically, if you're having speed issues you probably just need to improve the efficiency of your processes and follow healdem's advice.
    Definition of a Beginner, Someone who doesn't know the rules.

    Definition of an Expert, Someone who knows when to ignore the rules.

  13. #13
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
    Quote Originally Posted by m.timoney
Yes, in that a byte requires fewer bit comparisons than a Long, but it would be a fraction of a second of benefit per billion calculations. It will also reduce demand on the paging file and HDD, but again, unless your database is of a size that's normally too large for Access to handle, you won't get much of a bonus. This is less true for text fields, but basically, if you're having speed issues you probably just need to improve the efficiency of your processes and follow healdem's advice.

My colleague's comment on integer data types was that to the CPU they are all the same: a byte is an integer is a boolean is a long... they are all presented to the CPU in the same manner, with shorter values padded to match the registers, so it was usually quicker to use the base integer size of the CPU. That was purely about integer numbers within the CPU, ignoring the effect outside the CPU.
You are right to point out that text/string values should be as small as reasonably required.

Personally, I've never yet got to the state where I've needed to look for performance gains by shaving single steps in CPU code (and I fervently hope I never do).

Usually you can find pretty big chunks of time by retuning your SQL, checking indexes, and making sure the optimiser works as you expect it to. Adding redundant WHERE clauses sometimes throws a wobbly in JET. Using the wrong methodology can be a resource hog (e.g. iterating an updateable recordset when a simple SQL statement, or even a series of SQL statements, can do the deed far faster; see the sketch below). There are occasions when a recordset may be more appropriate (e.g. if you have to maintain an audit trail, or it's part of an existing process).
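A hedged sketch of that recordset-versus-SQL point (the Products table and its fields are invented for illustration):
Code:
' Slow: edit each row through an updateable recordset.
Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset( _
    "SELECT Price FROM Products WHERE Discontinued = False")
Do While Not rs.EOF
    rs.Edit
    rs!Price = rs!Price * 1.1
    rs.Update
    rs.MoveNext
Loop
rs.Close

' Faster: let JET apply the whole change in one set-based statement.
CurrentDb.Execute _
    "UPDATE Products SET Price = Price * 1.1 WHERE Discontinued = False", _
    dbFailOnError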
    I'd rather be riding on the Tiger 800 or the Norton

  14. #14
    Join Date
    Oct 2002
    Location
    Leicester - UK
    Posts
    820
They may be right; I have to admit my work in assembler is a little out of date. Back then, if you wanted to compare an int to an int, you had to go through them a byte at a time.
    Definition of a Beginner, Someone who doesn't know the rules.

    Definition of an Expert, Someone who knows when to ignore the rules.

  15. #15
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    Quote Originally Posted by syntaxerror
Many thanks for the input, guys.

Not sure if I got that last one right, but is this what healdem was trying to say:

that I'm better off using VB calculations and executing SQL queries
than using Access's native domain functions?


    again, thanks

    se
    Not quite. Basically you want to avoid calling scalar functions a large number of times. Once or twice - fine. Much more and there is probably a more efficient method.

    Remember that multiple calls do not only occur in loops. Pop a DSum into a field expression in a query and it will execute once per row returned (maybe more if, for example, you use distinct or Group By).
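For instance, a sketch of that per-row DSum against a set-based rewrite (the Orders table and its fields are invented for illustration):
Code:
SELECT o.OrderID,
       DSum("Amount", "Orders", "CustomerID=" & o.CustomerID) AS CustTotal
FROM Orders AS o;
Vs
Code:
SELECT o.OrderID, t.CustTotal
FROM Orders AS o
INNER JOIN (SELECT CustomerID, Sum(Amount) AS CustTotal
            FROM Orders GROUP BY CustomerID) AS t
    ON o.CustomerID = t.CustomerID;
The first runs a separate sum for every order row returned; the second sums each customer once and joins the results back.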

    A nice example of a crippling scalar function call is:
    Code:
    SELECT x
    FROM MyTable
    WHERE ISNULL(MyCol) = 0
    Vs
    Code:
    SELECT x
    FROM MyTable
    WHERE MyCol IS NOT NULL
One is an internal SQL set-based operator; the other is a scalar function that is executed for each row (in this case, for each row in the table, not just those returned).

So anyway - no - wrapping DSum or its equivalent in a VB function would not be beneficial. Look at avoiding scalar functions completely, wherever possible, in loops and in queries that return more than ten or so rows.

    HTH
    Testimonial:
    pootle flump
    ur codings are working excelent.
