We just finished upgrading to Pervasive 11. Everything is running great, but we are trying to close any bottlenecks that may still exist. We got some conflicting information from our software vendor. In our vendor's instructions, they suggested we set the Pervasive cache to the same size as the folder that holds all the data files. For example, if our data folder is 12 GB and we have 20 GB of RAM in the server, we should reserve a minimum of 3 GB for the OS and assign 12 GB to the cache, which would leave 8 GB for the OS.
In our environment we have 8 GB of RAM in the server, and our data folder is 15 GB. If we subtract 3 GB for the OS, we only have 5 GB left to assign to the cache. We asked the vendor if we have a bottleneck and need to install more RAM. They said no, which again somewhat contradicts their own documentation.
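The vendor's sizing rule, as described, can be sketched as a simple calculation (the 3 GB OS reserve is the vendor's figure; the function name is just for illustration):

```python
def recommended_cache_gb(ram_gb, data_gb, os_reserve_gb=3):
    """Vendor's rule of thumb: size the cache to match the data folder,
    capped by whatever RAM remains after reserving some for the OS."""
    available = ram_gb - os_reserve_gb
    return min(data_gb, max(available, 0))

# Vendor's example: 20 GB RAM, 12 GB data folder -> 12 GB cache
print(recommended_cache_gb(20, 12))  # 12
# Our server: 8 GB RAM, 15 GB data folder -> only 5 GB available
print(recommended_cache_gb(8, 15))   # 5
```

By this rule our server comes up 10 GB short of caching the whole data folder, which is exactly the apparent contradiction in the vendor's answer.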
So do you Pervasive gurus agree that 8 GB is enough, or should we add more RAM? I know we could just add more RAM and that would cover it either way, but the head honcho won't spend the money unless he thinks it is required. Thanks.
I can't say for certain; I don't work in their environment full time. I get the sense it is about 50/50 on reads and writes. You have users who do a lot of looking up existing data while at the same time adding a lot of new data. They do everything with this app: generating P.O.'s, quoting, shipping, inventory management, job workflow, etc.
There are about 15 users on average on the system at the same time.
Like I said, with the upgrade to version 11 we saw a huge increase in performance and are very happy. We just thought that while we were at it, let's make sure we fix any other bottlenecks that may exist. The app and Pervasive are running on a Dell PowerEdge T310 server, and I think we are OK on I/O performance. The software vendor advised on the server and its specs. We were just a little confused by their conflicting advice on memory management in Pervasive.
I'm preparing a new server for our PSQL app and find that the new PSQL counters available in the Windows Performance Monitor let me change the settings and view the resulting cache hit ratios and disk usage charts. This is on a Windows Server 2008 machine with 32 GB of RAM. So I suggest you look at the various counters to find an optimal set of settings for your application.
My PSQL performance is good when reading from the cache but not when reading from the disks.
The issue I have is that if the engines are stopped for a backup, the cache clears, and all those lovely GBs of memory are useless until the cache builds up again.
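One workaround sometimes used for the cold-cache problem is to sequentially read the data files after the engines restart. To be clear, this warms the operating system's file cache rather than the PSQL engine's own cache, but it can still cut the cost of the first physical reads. A minimal sketch, assuming a hypothetical data directory path:

```python
import os

def prewarm(data_dir, chunk=8 * 1024 * 1024):
    """Read every file under data_dir in 8 MB chunks so the OS file
    cache is repopulated after an engine restart. Returns the number
    of chunks read, as a rough progress indicator."""
    chunks_read = 0
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                while f.read(chunk):
                    chunks_read += 1
    return chunks_read

# Hypothetical usage after the backup window:
# prewarm(r"D:\PSQLDATA")
```

This is most useful when the backup window ends well before users log in, so the first morning lookups hit warm memory instead of cold disks.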