I have checked with the query you sent me, after running ANALYZE.
I have a table (f1 number, f2 varchar2(4000), f3 timestamp), and currently it holds 6 rows.
1. The output shows 0.12890625 KB (i.e., 132 bytes). Is that right?
2. Is the size based on the actual length of the data in the row, or on the declared column widths?
I appreciate your quick reply. I need your suggestion: for capacity planning we need to work out the potential size of the table rather than the actual size of the data currently in it, because we do not know what the data size of future records will be. Am I right?
Can I write a script to find the row size of all the tables in the schema using the data dictionary tables? I would appreciate any inputs on this. As this task is urgent, I need a shortcut.
Just select avg_row_len from user_tables. For all schemas, use dba_tables.
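A minimal sketch of such a script, assuming your tables have been analyzed so that avg_row_len is populated:

select table_name, avg_row_len   -- bytes per row, from the optimizer statistics
  from user_tables
 order by avg_row_len desc;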
To do capacity planning you need to be able to estimate the number of rows in your large tables. If you can estimate that, multiply it by avg_row_len to get the size of your data. However, when sizing a database you need to take into account pctfree, the size of block headers, block size, etc.
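For example, a rough sketch that turns a row estimate into blocks, assuming an 8K block size, 10% pctfree, and about 100 bytes of block overhead (all three figures are assumptions; substitute your own):

select table_name,
       ceil(&&estimated_rows * avg_row_len
            / ((8192 - 100) * (100 - 10) / 100)) estimated_blocks
  from user_tables
 where table_name = upper('&&table_name');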
Oh, forgot to mention: paste the list of tables and their row sizes into Excel, add a column with your estimate of the number of rows, and get Excel to add it all up. Don't forget to sort your tables by their tablespace so you can get a total for each tablespace.
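Something like this produces a paste-ready list (a sketch; swap in user_tables if you don't have access to the DBA views):

select tablespace_name, table_name, avg_row_len
  from dba_tables
 where owner = upper('&&schema_name')
 order by tablespace_name, table_name;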
Here's a little formula for sizing tables... There is overhead involved in the number of columns a table has, the number of columns over 250 bytes, etc. Also consider, when creating a table, putting all the "NOT NULL" columns at the top of the table and together. If you have a "NOT NULL" column after 10 "NULL" columns, each of those 10 NULL columns MUST maintain a byte for placement purposes... That byte is NOT maintained if the "NOT NULL" columns are at the top and all "NULL" columns are at the bottom (see the sketch below).
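For illustration, a made-up table laid out that way (all names are hypothetical):

create table orders
( order_id   number        not null  -- NOT NULL columns first, together
, order_date date          not null
, ship_date  date                    -- NULL-able columns last: trailing NULLs
, notes      varchar2(400)           -- consume no length byte at all
);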
set verify off
-- Reconstructed around the surviving fragment: the &&number_of_rows and
-- &&pct_free prompts and the 90-byte block-header allowance are assumptions.
select ceil(&&number_of_rows
       / (((value - 90) * (100 - &&pct_free))
       /(100*(&&avg_row_length + 3 + &&number_of_columns
       + &&number_of_cols_over_250_bytes)))) Blocks_Required
  from v$parameter where name = 'db_block_size';