# Table structure for table `Details_02`
# Contains 272,058 Records
CREATE TABLE Details_02 (
dID bigint(20) NOT NULL default '0',
CardNo varchar(6) NOT NULL default '',
Date date NOT NULL default '0000-00-00',
Store int(11) NOT NULL default '0',
TranAmount float(5,2) NOT NULL default '0.00',
DiscAmount float(5,2) NOT NULL default '0.00',
Operator char(2) NOT NULL default ''
) TYPE=MyISAM COMMENT='Details_02';
# Table structure for table `Employees`
# Contains 9,282 Records
CREATE TABLE Employees (
eID bigint(20) NOT NULL auto_increment,
Payroll int(11) NOT NULL default '0',
HomeBranch varchar(4) default NULL,
EmployeeNo varchar(6) NOT NULL default '',
Salutation varchar(5) NOT NULL default '',
Forename varchar(30) NOT NULL default '',
Surname varchar(30) NOT NULL default '',
EmpStatus char(1) default NULL,
LeaveDate date default NULL,
PRIMARY KEY (eID)
) TYPE=MyISAM COMMENT='Employees';
When I issue the following query, it takes well over 4 hours to complete and return results.
SELECT d.CardNo, CONCAT(e.Forename,' ',e.Surname) AS 'Name', d.Date, d.Store, b.Desc, d.TranAmount, d.DiscAmount, d.Operator FROM Details_02 AS d, Employees AS e, Branches AS b
WHERE e.EmployeeNo = d.CardNo AND b.BranchID = d.Store
ORDER BY d.CardNo ASC;
Can someone tell me what I'm doing wrong? Any help offered would be gratefully received.
Like you, I am a newcomer to MySQL. My background is Oracle/SQL Server. My comments would be:
1. Your query brings back one line for every entry in the Details_02 table. Is this your intention? What purpose will you be putting this very large result set to?
2. In Oracle or SQL Server you would set up indexes on tables to help with key or common queries. MySQL supports indexes too. In this instance you should consider setting them up on the fields that you join the tables on (store number and employee/card number).
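As a rough sketch, indexes on the join columns could be added along these lines (the index names are made up, and I'm assuming Branches has a BranchID column as your query suggests):

```sql
-- Hypothetical index names; adjust to your own naming convention.
ALTER TABLE Details_02
    ADD INDEX idx_details_cardno (CardNo),
    ADD INDEX idx_details_store  (Store);

ALTER TABLE Employees
    ADD INDEX idx_emp_employeeno (EmployeeNo);

ALTER TABLE Branches
    ADD INDEX idx_branch_id (BranchID);
```

Building the indexes on a 272,000-row table will take a little while up front, but the joins should then be able to look rows up by key instead of scanning the whole table for every match.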
I agree with andrewb here. I'm fairly up to speed with MySQL but I can't see anything obvious that's wrong with your SQL. We have a large SQL Server database that I ported to MySQL to test various speed-related improvements.
An average query on our web app would join a primary table containing 50,000 rows to three instances of another table with almost 500,000 rows (please, don't ask why).
On a 1GHz Pentium III with 1 GB of RAM (okay, a fast machine) this query took a little under 15 seconds. Our tables are heavily indexed, so it's worth throwing indexes on your tables just to see if it makes a significant difference.
Sorry if I'm repeating andrewb, but it seemed useful to make a real-world comparison.
In my experience... (I have read all about this but can't remember the official reasons offhand.)
Indexes will almost always speed up a select statement, and the difference becomes more apparent the larger the dataset you are working with. This is because the database engine can search the index very much more quickly than the data table itself to find which rows to include in the result set.
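One way to see whether the engine is scanning whole tables is EXPLAIN. A sketch against the query above (column list shortened for readability):

```sql
EXPLAIN
SELECT d.CardNo, d.Date, d.Store
FROM Details_02 AS d, Employees AS e, Branches AS b
WHERE e.EmployeeNo = d.CardNo
  AND b.BranchID   = d.Store;

-- In the output, 'type: ALL' for a table means a full table scan.
-- After adding indexes on the join columns you should see 'ref'
-- instead, with the chosen index named in the 'key' column.
```

With 272,058 rows in Details_02, three full scans joined together would easily explain a multi-hour runtime.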
If you make the index a clustered index, this can help even more when you want to return an ordered result set.
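One caveat: MyISAM tables (the TYPE used in the DDL above) don't support true clustered indexes. The closest equivalent is a one-off physical sort of the table, which can speed up ordered and range reads until new rows are inserted. A sketch, assuming you mostly read in CardNo order:

```sql
-- One-off physical sort of the MyISAM table by the ORDER BY column.
-- Note: the order is not maintained as new rows arrive, so this is
-- worth re-running periodically, not a substitute for an index.
ALTER TABLE Details_02 ORDER BY CardNo;
```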