07-19-02, 01:51 #1 Registered User
- Join Date
- Jul 2002
- Winnipeg, Canada
Unanswered: Serial datatype as PK - Limitations???
In the database design that I am working on right now (my first in PostgreSQL) I am using a serial datatype as the primary key in a number of tables. This, I believe, gives me the equivalent of autonumbering with little effort.
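For anyone following along, a minimal sketch of what this looks like (the table and column names here are illustrative, not from my actual design):

```sql
-- SERIAL creates an implicit sequence and uses it as the column default
CREATE TABLE transactions (
    transaction_id SERIAL PRIMARY KEY,          -- auto-incrementing int4 key
    site_id        integer NOT NULL,            -- illustrative column
    amount         numeric(12,2),               -- illustrative column
    created_at     timestamp DEFAULT now()
);

-- The key is assigned automatically on insert
INSERT INTO transactions (site_id, amount) VALUES (1, 99.50);
```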
Question: what are the practical limitations of using this datatype? For example, the database that I am replacing has 100,000 entries in the transaction table after about 5 years (historical data). That database serves a single site; what happens if I want to scale the new database to handle multiple sites?
I understand that there should be some data maintenance facility to help reduce the amount of active data, but what happens when you have to maintain extremely large tables, as you would in the operations research world or for data analysis?
I am very close to finalizing my database design but would really welcome some last-minute input here. This is one of those measure-twice, cut-once scenarios.
07-19-02, 09:10 #2 Registered User
- Join Date
- Sep 2001
You don't have to worry. The range of the int4 type is more than enough for that amount of records; there will be no problem with that.
Data maintenance facility:
Depends on your needs. These things are too specialized to generalize; maybe you could code a small environment where you can select and analyze data.
http://www.postgresql.org
--PostgreSQL is the only kind of thing--
08-19-02, 01:28 #3 Registered User
- Join Date
- May 2002
- Florida, USA
The standard SERIAL type, based on int4, allows for incrementing a key up to approximately 2.1 billion records, which is good for most needs. PostgreSQL also now supports a BIGSERIAL type, based on int8, which allows for an astronomical number of records: roughly 9.2 quintillion (see the PostgreSQL documentation).
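The two ranges can be seen directly in the declarations (table names here are just illustrative):

```sql
-- SERIAL is backed by int4: maximum key value 2,147,483,647 (~2.1 billion)
CREATE TABLE small_log (
    id   SERIAL PRIMARY KEY,
    note text
);

-- BIGSERIAL is backed by int8: maximum key value 9,223,372,036,854,775,807
CREATE TABLE big_log (
    id   BIGSERIAL PRIMARY KEY,
    note text
);
```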
So basically, PostgreSQL will let your database scale far beyond your hardware's probable ability to even handle that amount of data. There are already people working with 100+ GB databases in PostgreSQL, and it's only going to get bigger.
In terms of database maintenance, it is possible, through judicious use of triggers and the like, to "offline" older records into other tables, functioning as an archiving process. This lets you keep your current active dataset small while still allowing for historical searches as needed. (HANDLE WITH CARE!!) There are many other ways to maintain and archive data, and it really requires more study and thought than we can arrive at in this small forum thread.
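One sketch of such an archiving step, done as a periodic batch statement rather than a trigger. Everything here is an assumption for illustration: the `transactions` table with a `created_at` column, the `transactions_archive` name, and the five-year cutoff. Note that `DELETE ... RETURNING` inside a `WITH` requires a modern PostgreSQL version (9.1 or later):

```sql
-- Archive table with the same structure, defaults, and constraints
-- as the (assumed) active table
CREATE TABLE transactions_archive (LIKE transactions INCLUDING ALL);

-- Move rows older than five years into the archive in one atomic statement:
-- the DELETE returns the removed rows, which feed the INSERT
WITH moved AS (
    DELETE FROM transactions
    WHERE created_at < now() - interval '5 years'
    RETURNING *
)
INSERT INTO transactions_archive SELECT * FROM moved;
```

Because both halves run in a single statement, a row is never visible in both tables and never lost in between, which is one reason to prefer this over hand-rolled copy-then-delete scripts.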