  1. #1
    Join Date
    Jun 2005
    Posts
    319

    Unanswered: poor man's table partitioning implementation

    Our cloud provider does not support SQL Server 2008 Enterprise Edition, and the company I work for doesn't want to pay the high licensing fees for it anyway, so I am trying to come up with a poor man's table partitioning implementation.

    How does something like this look?

    We currently have a process in place where, when a new object ID comes in and its table does not exist, we automatically create the table and start inserting the relevant data.

    The problem is that as a table grows over a week or two, the inserts get pretty expensive once the row count gets into the millions.

    The solution I am proposing is to rename all of our existing tables with an _Archive suffix so that we wipe the slate clean. At an off-peak time (daily) we will perform these steps:

    1) Rename the base object table with the suffix _Temp
    2) Insert into _Archive from _Temp
    3) Drop _Temp

    Step 1: The reason for this step is to avoid interrupting the current insert process. A new base table will be created automatically, and that lets us drop the _Temp table when the copy is complete. We will have to alter some decisioning queries to query this table as well if it exists.
    Step 2: This step will still be expensive, but I think this is our best option. A rough sketch of the three steps is below.
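    For concreteness, here is a rough sketch of the nightly job, assuming a per-object table named Readings_1234 and a matching Readings_1234_Archive table (the table and column names are placeholders, not our actual schema):

    Code:
    -- Step 1: rename the live table; the insert process recreates the base table on its next write
    EXEC sp_rename 'dbo.Readings_1234', 'Readings_1234_Temp';

    -- Step 2: one set-based copy of the accumulated rows into the archive
    INSERT INTO dbo.Readings_1234_Archive WITH (TABLOCK)
    SELECT *
    FROM dbo.Readings_1234_Temp;

    -- Step 3: discard the temp table once the copy has committed
    DROP TABLE dbo.Readings_1234_Temp;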

    Critiques?

  2. #2
    Join Date
    Jun 2005
    Posts
    319
    I never had success with indexed views, but I wonder whether creating one table per day per objectId would be an even better solution (no hit on step 2 above), something like the sketch below.
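    Just to sketch the idea (the naming convention and columns here are made up, not our actual schema), the loader would create a per-day table with dynamic SQL along these lines:

    Code:
    DECLARE @objectId INT = 1234;
    DECLARE @tableName SYSNAME = 'Readings_' + CAST(@objectId AS VARCHAR(10))
                               + '_' + CONVERT(CHAR(8), GETDATE(), 112);  -- yyyymmdd

    IF OBJECT_ID('dbo.' + @tableName) IS NULL
    BEGIN
        DECLARE @sql NVARCHAR(MAX) =
            'CREATE TABLE dbo.' + QUOTENAME(@tableName) + ' (
                 ReadingTime DATETIME      NOT NULL,
                 Amount      DECIMAL(18,4) NOT NULL
             );';
        EXEC sys.sp_executesql @sql;
    END;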

  3. #3
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    What kind of latency can you tolerate on the inserts?
    Can you insert into a staging table instead, and then periodically move the staging data into the live data in set-based operations rather than individual inserts?
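    Something like this, for example: the staging table takes the cheap individual inserts, and a SQL Agent job flushes it every few minutes (table and column names below are only illustrative):

    Code:
    -- move-and-delete in one statement so rows arriving mid-flush are not lost
    DELETE FROM dbo.Readings_1234_Staging
    OUTPUT deleted.ReadingTime, deleted.Amount
    INTO dbo.Readings_1234 (ReadingTime, Amount);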
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  4. #4
    Join Date
    Jun 2005
    Posts
    319
    Decision reports are real time. Currently, since the destination inserts are backlogged, our decision queries are hitting the source tables (as well as the destination tables) and summing the aggregate values together.

    The solution I outlined essentially turns the current object tables into archive tables and lets the system create new object tables to act as staging tables.

    I think an indexed-view type solution might be more optimal, but like I said, I have never successfully implemented indexed views. Have they gotten better in 2008, or are they pretty much the same as in 2000/2005? These tables would all have the same structure, down to the indexes, so an indexed view covering a couple dozen tables (one table per day) might be better. I guess I just need to do some testing and see what I come up with.
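    For the read side, the rough shape would be a UNION ALL view over the per-day tables, with the decision queries aggregating over the view. (Note that an indexed view cannot contain UNION, so in practice this would be a plain partitioned view; a CHECK constraint on the date column in each daily table would let the optimizer skip tables a query doesn't need.) Names below are illustrative only:

    Code:
    CREATE VIEW dbo.Readings_1234_All
    AS
    SELECT ReadingTime, Amount FROM dbo.Readings_1234_20101128
    UNION ALL
    SELECT ReadingTime, Amount FROM dbo.Readings_1234_20101129
    UNION ALL
    SELECT ReadingTime, Amount FROM dbo.Readings_1234_20101130;
    GO

    -- decision queries aggregate over the view instead of each table
    SELECT SUM(Amount) AS TotalAmount
    FROM dbo.Readings_1234_All
    WHERE ReadingTime >= '20101130';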

  5. #5
    Join Date
    Jun 2005
    Posts
    319
    Damn, it looks like indexed views are only available in Enterprise Edition as well.

    Looks like I might have to make a case for spending the extra money on Enterprise Edition for a few servers.
    Last edited by Gagnon; 11-30-10 at 14:15.
