Huge number of records for a single session in the Postgres log
I have come across a very peculiar situation.
We have a PostgreSQL 9.0 installation. It was installed last year, but we only started implementing on it recently, hence the need to develop a log parser application.
During our preliminary parsing, what we discovered is just beyond the grasp of my knowledge.
It seems that during a certain period last year, in November, a single session (4ebccaa2.20c) produced more than fifty thousand log records. Yes, that is a 5 followed by four zeros.
It never reoccurred, and luckily we had the CSV option enabled during that period.
Where should I report such findings?
I have uploaded that part of the log at http://dl.dropbox.com/u/71964910/pg_log_with_lot_of_session.zip
What exactly is a "session entry"? That isn't standard Postgres terminology.
The CSV log is organized by session: each log line carries its own session ID, a session line number, and the session start time. There are also records marking session start and end.
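To make those fields concrete, here is a minimal Python sketch (our own parser is written in C#, but the idea carries over). It assumes the standard PostgreSQL 9.0 csvlog column order from the manual, where session_id is the 6th field, session_line_num the 7th, and session_start_time the 9th; the sample row below is invented for illustration.

```python
import csv

# Column positions in a PostgreSQL 9.0 csvlog row, per the documented
# COPY table definition (0-based indices).
SESSION_ID, SESSION_LINE_NUM, SESSION_START = 5, 6, 8

def session_key(row):
    """Return (session_id, line_num, start_time) for one csvlog row."""
    return row[SESSION_ID], int(row[SESSION_LINE_NUM]), row[SESSION_START]

# An invented sample row; only the leading fields matter here.
sample = ('2011-11-07 10:00:00 UTC,postgres,mydb,524,"[local]",'
          '4ebccaa2.20c,1,idle,2011-11-07 09:59:58 UTC,,0,LOG,00000,'
          '"statement: SELECT 1",,,,,,,,,""')
row = next(csv.reader([sample]))
print(session_key(row))  # ('4ebccaa2.20c', 1, '2011-11-07 09:59:58 UTC')
```

Because the message field can contain embedded newlines and commas, a proper CSV reader is needed; splitting on commas by hand will break.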
I have no idea what you are talking about.
What about showing us the SQL statements and the corresponding table definitions (as CREATE TABLE)?
How did you import/export/create/edit that CSV file?
Where did this CSV file come from?
A CSV log is created by enabling
log_destination = 'stderr,csvlog'
in the PostgreSQL configuration file. It is just another kind of log file, in CSV format with a .csv extension, usually found in the data/pg_log folder.
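For context, csvlog output also requires the logging collector to be running. A minimal postgresql.conf fragment might look like the following (the directory and filename values are the documented defaults, shown here only as illustration):

```ini
# postgresql.conf -- CSV logging (values below are illustrative defaults)
logging_collector = on                 # required: csvlog needs the collector
log_destination = 'stderr,csvlog'      # write both plain-text and CSV logs
log_directory = 'pg_log'               # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'  # the CSV file gets a .csv suffix
```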
We have developed an application in C# that parses the CSV log file and reports errors, SQL commands, and other server activity.
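The anomaly described above could be surfaced by a parser with a per-session tally. A minimal sketch in Python (our parser is C#, but the approach is the same): count log lines per session_id and flag outliers. The column index assumes the standard csvlog layout, and the file path in the usage note is hypothetical.

```python
import csv
from collections import Counter

SESSION_ID = 5  # 6th column of a 9.0 csvlog row (0-based index)

def lines_per_session(path):
    """Count csvlog lines per session_id. A csvlog entry can span
    multiple physical lines, so use csv.reader, not readline()."""
    counts = Counter()
    with open(path, newline='') as f:
        for row in csv.reader(f):
            counts[row[SESSION_ID]] += 1
    return counts

# Usage (path is hypothetical):
# for sid, n in lines_per_session('pg_log/postgresql.csv').most_common(5):
#     if n > 10000:
#         print(f'suspicious session {sid}: {n} log lines')
```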
You can have a look at the zip file; it can be opened in Notepad.
For more information, refer to PostgreSQL: Documentation: 9.1: Error Reporting and Logging.
Well, the log contains one line for each statement the client executed. Apparently that session executed 50,000 INSERT statements.
What is the problem?
Yes, everything would have been fine if those statements had been issued by a client or an external application. The problem is that these are server-generated statements, executed on the server, by the server.
PostgreSQL itself does not "generate" INSERT statements on its own. And that session also shows that CREATE TABLE statements were executed, which is definitely not something PostgreSQL would do by itself. So it has to be something in your applications that was doing that.
So what is that "pem.probe_schedule" table that is being updated, or the "pemdata.oc_database" table that receives the majority of the INSERT statements? Which application is using them?
To me this looks like some kind of initial database load was run there (setting up tables, loading data).
No, it is not related to any of our applications.
If I search Google for "pemdata", this is what I find:
Creating user probe and alert in PEM 2.1 | Database Technologies
But I cannot make anything of it, or work out what went wrong.