    Join Date
    Jun 2004

    Unanswered: Interpreting what constitutes 'good' tkprof timing statistics


    I'm using tkprof to test how scalable my data model is: I populated it separately with 20k, 40k, 80k, 160k and 320k rows, ran tkprof each time, and compared the timing statistics (CPU and elapsed). The statistics are proportional, in the sense that elapsed time increases linearly as CPU time does (see attachment for image).

    I just wondered, as I'm inexperienced when it comes to scalability testing, whether this kind of approach is a good way to identify how scalable my data model is?

    Or are there better ways to use the gathered statistics (e.g. a ratio, percentage or formula) to determine the scalability of the data model?
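    One way to turn the raw timings into a single scalability number is to fit a power law, elapsed ≈ c · rows^k, across the doubling row counts: k near 1 means roughly linear scaling, k near 2 quadratic, and so on. Below is a minimal Python sketch of that calculation; the elapsed values are made-up placeholders, not real tkprof output, so substitute your own measurements.

    ```python
    import math

    # Row counts used in the test runs (from the original post).
    rows = [20_000, 40_000, 80_000, 160_000, 320_000]

    # Hypothetical elapsed times in seconds -- replace with your tkprof figures.
    elapsed = [0.5, 1.1, 2.3, 4.6, 9.5]

    # Fit elapsed ~ c * rows**k by least squares on log-log data.
    # The slope k of log(elapsed) vs log(rows) is the scaling exponent.
    xs = [math.log(n) for n in rows]
    ys = [math.log(t) for t in elapsed]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    k = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)

    print(f"estimated scaling exponent k = {k:.2f}")
    ```

    With timings like the placeholders above (elapsed roughly doubling as rows double), k comes out close to 1, i.e. approximately linear scaling; a k that drifts well above 1 as the data grows would be a warning sign for the model.
    
    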

    Looking forward to everyone's views and opinions,

    Attached: scalability.jpg
