
 
Max Efficiency in Query 1M Rows of data

#1 - Registered User (Join Date: Feb 2013, Posts: 2)

Hi Guys,

I'm new to dBforums and hope someone can help with this project I'm working on for work.

The gist of it is: I'm querying a table of 1M rows (1 million rows).
Question: Is there an efficient way to query the 1M rows? You can disregard the query code itself (it is very straightforward and simple). What I want to know is which is faster: querying all 1M rows with one query, or breaking it down and querying perhaps 1,000 rows at a time, repeated 1,000 times?

On a side note, does query time scale linearly? (i.e., if it takes 1 minute to query 10 rows, does it take 10 minutes to query 100 rows?)

Do you have a recommended batch size (e.g. querying 1,567 rows at a time) that would maximize the efficiency of this query?

Note: By max efficiency, I mean to minimize the amount of time it takes to generate the table of data using the query.

Thanks guys for all your help.

Cheers
#2 - World Class Flame Warrior (Join Date: Jun 2003, Location: Ohio, Posts: 12,522)
A million rows is chicken feed.

It is (almost) never better to break a single query into a series of smaller queries. Looping through the data in batches consumes much more time and resources than letting the engine handle the whole set in one pass.

On an indexed table, query time does not increase linearly with row count. A table with 10 times as many rows does not take 10 times as long to process.
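
To make that concrete, here is a rough sketch in Python against SQLite (the file, table, and column names are made up for illustration; the same idea applies to any engine). The single SELECT lets the engine plan and scan once; the chunked version pays parse/plan/round-trip overhead a thousand times, and large OFFSETs get progressively more expensive:

Code:
import sqlite3

conn = sqlite3.connect("example.db")   # made-up database file
cur = conn.cursor()

# One query: the engine plans once and scans the table once.
all_rows = cur.execute("SELECT id, value FROM my_table").fetchall()

# 1,000 queries of 1,000 rows each: every round trip repeats the
# parse/plan/execute overhead, and large OFFSETs force the engine to
# skip over rows it has already read.  Almost always slower.
chunk_size = 1_000
chunked_rows = []
for offset in range(0, 1_000_000, chunk_size):
    chunked_rows.extend(cur.execute(
        "SELECT id, value FROM my_table ORDER BY id LIMIT ? OFFSET ?",
        (chunk_size, offset)).fetchall())

conn.close()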
__________________
If it's not practically useful, then it's practically useless.

blindman
www.chess.com: "sqlblindman"
www.LobsterShot.blogspot.com
#3 - Resident Curmudgeon (Join Date: Feb 2004, Location: In front of the computer, Posts: 14,427)
In MOST cases, queries scale roughly as a function of log(n). More depends on your database engine of choice, hardware, etc. than it does on pure row count.
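
Back-of-the-envelope only, but assuming an index seek touches roughly log2(n) nodes in a B-tree, the growth looks like this:

Code:
import math

# Rough model: an index seek in a B-tree touches about log2(n) nodes,
# so a 1,000x larger table costs only ~10 more comparisons per lookup.
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} rows -> ~{math.log2(n):.0f} comparisons per indexed lookup")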

For most of my clients running well-designed schemas and reasonable queries (fewer than 100 rows returned to the client), the runtime is around one second. Most of those queries run against relational database engines (Microsoft SQL Server, MySQL, DB2 on z/OS, Oracle, etc.), but a few run against various Big Data tools too.

The short answer is that there isn't a short answer. With only a few million rows to query, you ought to be able to get sub-second response for most cases.

-PatP
__________________
In theory, theory and practice are identical. In practice, theory and practice are unrelated.
#4 - Registered User (Join Date: Feb 2013, Posts: 2)
Thanks for the quick response. Appreciate it.
#5 - Super Moderator (Join Date: Jun 2004, Location: Arizona, USA, Posts: 1,823)
Do you need to retrieve all the records to the client? If you do, performance is going to suffer simply because of the volume of data being transported.
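
A rough sketch of the two options (Python and SQLite here purely for illustration, with made-up table and column names): if a summary is all you need, let the server compute it; if you genuinely need every row on the client, at least stream the result instead of materializing it all at once:

Code:
import sqlite3

conn = sqlite3.connect("example.db")   # made-up database file
cur = conn.cursor()

# If you only need a summary, aggregate on the server:
# a single row comes back instead of a million.
row_count, total = cur.execute(
    "SELECT COUNT(*), SUM(value) FROM my_table").fetchone()

# If every row really must reach the client, stream it in batches
# rather than calling fetchall() on a million-row result set.
cur.execute("SELECT id, value FROM my_table")
while True:
    batch = cur.fetchmany(10_000)
    if not batch:
        break
    for row in batch:
        pass  # per-row processing goes here

conn.close()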
__________________
Lou
使大吃一惊 ("to greatly astonish")
"Lisa, in this house, we obey the laws of thermodynamics!" - Homer Simpson
"I have my standards. They may be low, but I have them!" - Bette Midler
"It's a book about a Spanish guy named Manual. You should read it." - Dilbert


Tags: database, efficiency, performance, query
