I've been working with a client that has 48GB of memory installed but only 32GB allocated to SQL Server. It looks to be a 16-core system (16 CPU windows show up in Task Manager). I suppose I should run a Perfmon trace to gather more information. The largest table is about 600 million rows (> 95% of the DB size). I'm trying to help them partition their data for better performance, but I'm running into long query times when importing the data after the first 300 million rows or so.
For comparison, another client had 64GB of memory installed on an 8-way server, with the largest table at about 60 million rows (> 95% of the DB size).
As well as memory considerations, look at the underlying storage: are you using RAID 5 or RAID 10? Are you batching the updates/inserts? 32-bit or 64-bit?
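By batching I mean something along these lines: copy a bounded ID range per transaction so each insert and its log activity stay small. This is just a sketch; the table and column names (dbo.Source, dbo.Target, ID, PartitionKey, Payload) are made up:

```sql
-- Sketch only: hypothetical table/column names.
-- Walk the identity key in fixed-size ranges so each transaction is small.
DECLARE @LowID bigint = 0,
        @BatchSize int = 100000,
        @Rows int = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.Target (ID, PartitionKey, Payload)
    SELECT ID, PartitionKey, Payload
    FROM dbo.Source
    WHERE ID > @LowID
      AND ID <= @LowID + @BatchSize;

    SET @Rows = @@ROWCOUNT;       -- 0 once we've passed the top of the range
    SET @LowID = @LowID + @BatchSize;
END
```

Keying the range on the clustered key also keeps each batch a range seek rather than a scan.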
For the import DB we were using RAID 0, which I realized after we started was a mistake: good for the import, but bad once the import DB becomes the production DB, so I've asked them to convert it to RAID 10. It's 64-bit SQL Server Enterprise, importing in batches of 100k (although there are many gaps in the data). I use a lowID and a highID range on the clustered identity key, but only for the new partition value. The select/inserts were consistently fast; it was determining the next minimum clustered identity key for a given partition value that was slowing down the deeper I got into the import, which I suspect could be memory related. We were running the import on their production server, into a different DB and on a different logical drive.
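For clarity, the per-batch lookup that kept getting slower was roughly of this shape (illustrative names, not the actual schema):

```sql
-- Sketch only: hypothetical names.
-- Find the next starting point: the lowest clustered identity key for one
-- partition value beyond what we've already imported. With only the
-- clustered index on ID, this has to scan deeper and deeper into a
-- 600M-row table as the import progresses.
SELECT @LowID = MIN(ID)
FROM dbo.Source
WHERE PartitionKey = @PartitionValue
  AND ID > @LastImportedID;
```

The select/insert itself stayed a cheap range seek on ID; it was this MIN lookup per partition value that dominated the gap between batches.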
ETA: I created a new index on (partition key, clustered ID) to see if that reduces the lag between batches.
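For reference, the index is along these lines (again with illustrative names):

```sql
-- Sketch only: hypothetical names.
-- With PartitionKey leading and ID second, MIN(ID) for a given partition
-- value becomes a single index seek on the first row of that key range,
-- instead of a scan of the clustered index.
CREATE NONCLUSTERED INDEX IX_Source_PartitionKey_ID
    ON dbo.Source (PartitionKey, ID);
```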