Unanswered: Interpreting what constitutes 'good' tkprof timing statistics
I'm using tkprof to test how scalable my data model is by populating it separately with 20k, 40k, 80k, 160k, and 320k rows and running tkprof against each run. When comparing the timing statistics (CPU and elapsed), they are proportional, in the sense that elapsed time increases linearly as CPU time does (see atts for image).
I just wondered, as I'm inexperienced when it comes to scalability testing, whether this kind of approach is a good way to identify how scalable my data model is?
Or are there better ways to use the collected statistics (e.g. via a ratio, percentage, or formula) to determine the scalability of the data model?
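For what it's worth, one simple ratio-based check is to compare how elapsed time grows each time the row count doubles. The sketch below illustrates the idea in Python; the elapsed-time values are purely hypothetical placeholders, to be replaced with the totals from your actual tkprof reports:

```python
# Hypothetical tkprof elapsed-time totals (seconds) per data volume.
# Replace these with the numbers from your own tkprof output.
rows = [20_000, 40_000, 80_000, 160_000, 320_000]
elapsed = [1.2, 2.5, 5.1, 10.4, 21.0]

# For each doubling of row count, compute the growth factor of elapsed time.
# A factor near 2.0 at every step suggests roughly linear (O(n)) scaling;
# a factor that keeps increasing with volume suggests super-linear behaviour
# (e.g. O(n log n) or O(n^2)), which is the warning sign to look for.
for (n1, t1), (n2, t2) in zip(zip(rows, elapsed), zip(rows[1:], elapsed[1:])):
    factor = t2 / t1
    print(f"{n1:>7} -> {n2:>7} rows: elapsed grew {factor:.2f}x")
```

The advantage of looking at the step-to-step growth factor rather than the raw times is that it stays comparable across volumes, so a drift upward is easy to spot even when all the absolute times still look small.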