We have a very big DB (in my experience :-)) running Siebel CRM: 1 terabyte of data, 2,000 users, tables of more than 20 GB... and we have a question that nobody can explain...
For the first installation, we configured our DB with DB_BLOCK_SIZE = 16 KB, and one table's size was 16 GB (PCTFREE 40). Due to political instructions, we are moving this DB to DB_BLOCK_SIZE = 8 KB, and the same table has grown to a monstrous 26 GB!!! You can imagine what a query on it costs!!! (the indexes for this table are 2 GB in size!!!). The table has the same PCTFREE (40) and there are no chained rows in it (our first theory).
Do you by any chance have a theory or idea about what could be happening here???
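For what it's worth, here is a rough back-of-the-envelope sketch (in Python, with hypothetical numbers: the per-block overhead and average row length below are assumptions, not taken from your schema) of why a smaller DB_BLOCK_SIZE combined with PCTFREE 40 can inflate a segment: the fixed block header is paid once per block, and only whole rows fit in the insertable part of each block, so the rounding loss is proportionally bigger with 8 KB blocks.

```python
import math

def est_segment_gb(block_size, pctfree, avg_row_len, num_rows,
                   block_overhead=190):
    """Very rough segment-size model (NOT Oracle's exact space accounting:
    it ignores ITL slots, row headers, ASSM bitmaps, etc.)."""
    # space left for inserts after the block header and PCTFREE reserve
    usable = (block_size - block_overhead) * (1 - pctfree / 100)
    rows_per_block = int(usable // avg_row_len)  # only whole rows fit
    blocks = math.ceil(num_rows / rows_per_block)
    return blocks * block_size / 2**30

# Hypothetical wide Siebel-style row of ~1,800 bytes, 10 million rows
gb_16k = est_segment_gb(16384, 40, 1800, 10_000_000)
gb_8k  = est_segment_gb(8192, 40, 1800, 10_000_000)
print(f"16 KB blocks: ~{gb_16k:.1f} GB, 8 KB blocks: ~{gb_8k:.1f} GB")
```

With these made-up inputs the 16 KB block fits 5 rows per block but the 8 KB block fits only 2 (not 2.5), so the 8 KB estimate comes out noticeably larger — the same kind of growth you are seeing, even with zero chained rows.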
Originally posted by cyliew
Try disabling autoextend.
Then resize the table.
Thanks for your answer...
When we created the tablespaces/datafiles, we already set that parameter to OFF...
In the attached file you can see the table's creation script (columns, indexes, etc.). The table has more than 10 million rows!!!... and its current size is 26 GB (the old size, with the same configuration except DB_BLOCK_SIZE = 16 KB, was 16 GB).
Tell me if you need more information or more "clues" and I will try to send it to you.
May I know what exactly the issue is that you are facing, beyond what I can deduce about the performance latency?
There can be lots of ways to troubleshoot the problem. If it's at the OS level, you could probably migrate the whole database from the normal disk configuration to RAID 0+1.
Do let me know any further clues and what you expect.
Our Oracle DB (18.104.22.168) runs on Symmetrix RAID 0+1.
The application is Siebel version 7.0.4 (which does not accept partitioned tables due to its internal configuration).
All the application tablespaces were first created as Locally Managed (autoallocate), then converted to Dictionary Managed, and later back to Locally Managed, because that way we can set whatever NEXT extent we decide for tables/indexes.
Our Siebel application has a lot of objects (tables/indexes) over 1 GB in size!!!... but only one (the biggest!) has this monstrous size (26 GB). This table has a 2 GB extent size!!!
Are you sure it won't accept partitioned tables (does it run DDL itself to look after tables and indexes)? Most apps work transparently with partitioning without too many problems. You will also find that if you can use locally partitioned indexes, you can make things a lot more manageable as well as improving performance.
One other thing you might be able to do is reduce the index size by using compression on large indexes.
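As a toy illustration of why compression on large indexes can help (this is only the general idea behind key-prefix compression, sketched in Python with made-up key data, not Oracle's on-disk format): when the leading index column repeats heavily, storing each distinct leading value once per run instead of once per entry shrinks the key storage considerably.

```python
def uncompressed_bytes(entries):
    """Every entry stores its full (prefix, suffix) key."""
    return sum(len(p) + len(s) for p, s in entries)

def prefix_compressed_bytes(entries):
    """Store each run of a repeated leading column once (entries sorted)."""
    total, prev = 0, None
    for prefix, suffix in entries:
        if prefix != prev:       # new leading value: pay for it once
            total += len(prefix)
            prev = prefix
        total += len(suffix)     # suffix is always stored per entry
    return total

# Made-up keys: 1,000 customers x 100 rows each, sorted by customer
entries = [(f"CUSTOMER-{c:04d}", f"{r:06d}")
           for c in range(1000) for r in range(100)]

raw = uncompressed_bytes(entries)        # 13 + 6 bytes per entry
packed = prefix_compressed_bytes(entries)
print(f"uncompressed: {raw} bytes, prefix-compressed: {packed} bytes")
```

In this hypothetical case the 13-byte leading value is repeated 100 times per customer, so deduplicating it cuts the key storage to roughly a third — the real saving on your 2 GB indexes would depend entirely on how repetitive their leading columns are.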
Yes, partitioned tables are transparent to the application, but our problem is migrating the project (through Siebel) between the dev-int-prod platforms... Siebel only recognizes traditional (non-partitioned) tables there, and that is why we can't use partitioned tables.