Users have developed a job in SAS that loads data from table T1 to table T2. The table has around 40 lakh (4 million) rows. The job is designed so that the data is processed row by row.
We are facing a performance issue with this job: it takes almost 10 hours to complete. Whenever the user restricts the number of rows (e.g. to 5 lakh / 500,000), the job completes in 30 minutes. But if we remove the row restriction, the job takes much longer to complete.
There are no error messages recorded in the server error log. According to the server log, the actual time to complete the job (insert the rows) is only 20-30 seconds.
I tuned the IQ server (changed -iqgovern, -iqmc, -iqtc), and the job now completes in 6-7 hours.
I have the following questions about this job.
1. How is the data processed during execution? Is any workspace created in IQ whenever a job is executed?
2. Why does the job complete in 30 minutes when we put a row restriction on it? Is there a database option that controls the number of rows?
3. Are there any other parameters that affect performance?
The job is simple: it loads data from a source table to a target table. Both tables are in the same schema and database, owned by the same user, and have the same structure. The target table is initially empty.
Any further suggestions to improve the performance of the job are welcome.
Well, at first glance I would say it's related to your indexes or triggers.
Even though you don't mention having them, I'm pretty sure you do.
The load itself is fast, as you mentioned, but the indexes have to be maintained, hence the long time after the inserts. That is also why it's faster with a subset: index maintenance is faster with a smaller amount of data.
You can drop the indexes at the beginning of the job and recreate them at the end.
The thing is, the target table is empty at the start. So is it even required to rebuild indexes on it? There are no triggers on the table.
The job is designed to truncate all data and then reload it again.