A hospital database stores seven years of data, with roughly 9 million records in each table. A report is run that simply finds all patients taking Calcium. The query is very basic and reads only one table, for example:
SELECT patientid, patientname, drug FROM xyz WHERE drug = 'Calcium'
The report takes 35 minutes to run this simple query, which contains no joins.
What needs to be done to improve the speed of this query? What kind of maintenance should be performed on such a large database?
An Oracle database runs in the background, but Access is used as the frontend.
Could you explain what you mean by "Oracle database running in the background but Access is used on the frontend"? What exactly happens between Oracle and Access? Which data is in Oracle and which is in Access? From the frontend, are you querying data in the Access DB or in the Oracle DB?
1. Hospital databases are notoriously non-normalized.
2. The previous suggestion to run this from the server is excellent; however, you should also get an explain plan for the execution.
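One way to capture a plan, as a sketch assuming the table and query from above (xyz and the Calcium filter) and a SQL*Plus session on the Oracle instance:

```sql
-- Generate the plan for the problem query (does not execute the query)
EXPLAIN PLAN FOR
  SELECT patientid, patientname, drug
  FROM xyz
  WHERE drug = 'Calcium';

-- Display the plan; a line like "TABLE ACCESS FULL | XYZ"
-- means every row in the table is being read
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```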
3. Examine the indexes and tables. If there is no index with the drug as its leading column, you need to create a new index. I would suggest a composite index on (drug, patientname, patientid). If Oracle can find ALL the information needed to satisfy the query in the index, it will not even access the table.
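A sketch of that covering index, assuming the table xyz and the column names from the sample query (the index name is made up):

```sql
-- drug leads so the WHERE clause can seek on it; the other two
-- columns let Oracle answer the whole query from the index alone
CREATE INDEX xyz_drug_cov_ix
  ON xyz (drug, patientname, patientid);
```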
4. With this kind of usage (periodic large deletes), the tables should be rebuilt to lower the high-water mark.
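One way to rebuild, as a sketch against the xyz table above (the index name is hypothetical; note that MOVE marks the table's indexes UNUSABLE, so each must be rebuilt afterwards):

```sql
-- Rebuild the table segment, resetting the high-water mark
ALTER TABLE xyz MOVE;

-- Rebuild each index on the table, since MOVE invalidated them
ALTER INDEX xyz_drug_cov_ix REBUILD;
```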
5. Collect statistics regularly with DBMS_STATS.
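A sketch of such a statistics run, assuming a schema name of HOSP (made up for illustration):

```sql
-- Gather fresh optimizer statistics for the table and its indexes
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'HOSP',   -- hypothetical schema owning xyz
    tabname => 'XYZ',
    cascade => TRUE      -- also gather statistics on the indexes
  );
END;
/
```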
6. Analyze the explain plan to determine whether you are doing a full table scan (FTS) when an index could have been used. This may open up a second can of worms...