  1. #1
    Join Date
    Nov 2003
    Posts
    13

    Unanswered: Updating Test, Dev Databases from Production

    Hi,

    Just looking for ideas on the best ways to regularly update Test and Development environment data. Currently using Oracle 8i (moving to 9i soon) on Solaris. I've been searching around but have found no really solid examples. Here are the guidelines I'm following:

    -- The Test database needs to be updated every 2 weeks or 1 month with a full refresh of all the transactional/dynamic tables (or maybe just a full refresh of everything, although the static reference tables don't change).

    -- The Dev database is updated as needed, bearing in mind there can also be test tables, procs, etc. in here. There's no need for production-sized data, so only a subset is needed - if that's possible to do?

    -- Need to take into account any changes to procedures, indexes, etc. on dev as well. The databases also include views and function-based indexes.

    Any bright ideas or examples would be greatly appreciated!
    Thanks

    Mark
    Last edited by Tonka52; 11-17-03 at 12:15.

  2. #2
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    http://www.databee.com/databee.htm might be useful to you.
    For the most part you are searching for the Holy Grail.
    Stop & think about what you've asked for.
    Some mythical product is supposed to "know"
    not only all the data inter-relationships within your application,
    but also keep track of what has changed in order to "refresh"
    recently changed production data into development & test environments.
    TANSTAAFL!

  3. #3
    Join Date
    Nov 2002
    Location
    Desk, slightly south of keyboard
    Posts
    697
    Hi Guys,

    Anacedent hit the nail on the head - "thou seeketh the holy grail".

    Personally, I have an app which I point at an instance; it clears a given schema of tables, constraints, triggers, sequences, procs, etc. Basically it drops all objects in a given schema. It's quicker than dropping and re-creating the user. The drop pass itself is nothing magic; stripped right down it's roughly the sketch below (it glosses over dependency order, and remember a table's indexes and triggers go when the table does).
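
        BEGIN
          FOR o IN (SELECT object_name, object_type
                      FROM user_objects
                     WHERE object_type IN ('TABLE', 'VIEW', 'SEQUENCE',
                                           'PROCEDURE', 'FUNCTION',
                                           'PACKAGE', 'SYNONYM'))
          LOOP
            IF o.object_type = 'TABLE' THEN
              -- CASCADE CONSTRAINTS so foreign keys don't block the drops
              EXECUTE IMMEDIATE 'DROP TABLE "' || o.object_name ||
                                '" CASCADE CONSTRAINTS';
            ELSE
              EXECUTE IMMEDIATE 'DROP ' || o.object_type || ' "' ||
                                o.object_name || '"';
            END IF;
          END LOOP;
        END;
        /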

    I then import a live dump. Fortunately, so far I haven't played with DBs so big that an import of the new (live) dump takes more than a couple of hours or so.

    In terms of refreshing DEV, the most straightforward approach I have seen to date is where each developer gets his own schema. 99% of the time he creates synonyms to objects in the refreshed dump; when he needs to alter something, he creates a copy in his local schema. This way he, and only he, is responsible for what's in his own DEV schema. To make that concrete, the per-developer setup is little more than the snippet below (schema and table names invented for the example; creating objects in another schema needs the ANY privileges or a DBA login).
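
        -- DEV_MASTER holds the refreshed dump; JSMITH is one developer
        CREATE SYNONYM jsmith.orders FOR dev_master.orders;

        -- when he needs to change ORDERS he takes a private copy instead:
        DROP SYNONYM jsmith.orders;
        CREATE TABLE jsmith.orders AS SELECT * FROM dev_master.orders;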

    Hth
    Bill

    P.S. Many people on here are well aware of my aspirations to write Oracle tools. Having been there, done that and got the T-shirt, I think something to handle DEV/TEST refreshes would be a good idea.

    I'll start on it just as soon as I've finished the new sensible version of TOAD, completed the MP3 Tag manager, put that new phone number in my phone, and oh yes, cleaned my keyboard :-)

    P.P.S Anacedent, WTF does TANSTAAFL mean!

  4. #4
    Join Date
    Nov 2003
    Posts
    13
    Thanks for your replies

    Billm - do you think dropping and recreating the user would take longer on a larger DB? The largest table is 11 million rows, but there are only a handful of "large" tables, most ranging between 1 and 4 million rows. Do you know of any templates I can use as a guide for recreating everything?

    As for the dev schema solution - I agree with your idea and hopefully can implement that soon. However, the synonyms will probably still be pointing to the current dev schema rather than test.

    Anacedent - Thanks for the link. I've had a look at that databee software, but I don't think it's going to handle a dump file several gigs in size - it'll probably take 3 days just trying to load it over the network!

    Good "Holy grail" quote too!

    I suppose the question is: which do you guys think is the better solution -
    1) Run an export and import for the required tables
    2) Create scripts, etc to do it all manually?

    Thanks again,
    Mark.

  5. #5
    Join Date
    Nov 2002
    Location
    Desk, slightly south of keyboard
    Posts
    697
    Hi Mark,

    I suppose I should have been clearer about 'quicker than dropping and re-creating the user'. What I should have said is that you don't have to go through re-creating the user (privileges, tablespace quotas, etc.); you only have to import the dump.

    I prefer to drop all objects and run a full user import. It ensures there are no 'leftovers' (procs/views etc.) in the destination schema which developers may start using, thinking they exist on production. Once the schema is empty, the refresh itself is a single command, something along the lines below (credentials, file and schema names are placeholders).
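
        imp system/manager file=prod_user.dmp log=refresh.log \
            fromuser=PRODUSER touser=DEVUSER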

    Cheers
    Bill

  6. #6
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    TANSTAAFL
    http://www.acronymfinder.com/af-quer...AAFL&Find=Find
    There Ain't No Such Thing As A Free Lunch.
    ------------------------------------------------------
    You can have it GOOD, CHEAP or FAST; pick any two!

  7. #7
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    If all you want is the database/schema "structure" and don't care (initially)
    about the data, I always use EXPORT (exp rows=n). For a schema export, specifying no rows lets you move all the table structures, packages, procedures, triggers, etc. (everything but the data). Then YOYO - You're On Your Own - to get the necessary and appropriate data into the tables. For example (owner name and file names are placeholders):
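
        # structure-only schema export: everything but the rows
        exp system/manager owner=PRODUSER rows=n \
            file=structure.dmp log=structure.log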

  8. #8
    Join Date
    Nov 2003
    Posts
    13
    I considered importing just a shell originally, then importing data for the required tables.

    Is there a constraints issue with doing that? i.e. if I import the empty shell, will there be constraint violations, or are the constraints automatically turned off (or not included)?

    Not too worried about the procedures - they're easy enough to remove and reload.

    I was thinking of importing just the required tables (rather than the whole schema), as there are several logging tables that take up probably half the overall size!

    "To start, please press any key....where's the any key?" - Homer. (Love that quote.)

    Cheers,
    Mark.

  9. #9
    Join Date
    Sep 2003
    Location
    NE Florida w/ view of co-workers
    Posts
    32
    Has anyone tried the transportable tablespace approach? Segment each system and keep its objects in a designated tablespace, then just copy the appropriate datafiles when needed.

    This won't handle the data-subset problem, and I'm not sure about procs, triggers and such - aren't they stored in the SYSTEM tablespace? Or maybe they can come over with the transported dictionary info. For the record, the 8i mechanics would be roughly the sketch below (untested; tablespace and file names invented, and both databases have to be on the same platform with the same block size).
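
        -- on production: freeze the tablespace, export its metadata
        ALTER TABLESPACE mydata READ ONLY;
        $ exp system/manager transport_tablespace=y \
              tablespaces=MYDATA file=tts.dmp

        -- copy the datafiles and tts.dmp to the test host, then plug in:
        $ imp system/manager transport_tablespace=y file=tts.dmp \
              datafiles='/u02/oradata/test/mydata01.dbf'
        ALTER TABLESPACE mydata READ WRITE;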

  10. #10
    Join Date
    Jul 2003
    Posts
    2,296
    I drop all objects and then import production. Very easy, not counting stupid partitioned tables. My prod has partitions, but dev doesn't need them, so I pre-create the partitioned tables (without the partitioning) before the import. The pre-create is just the table definition minus the PARTITION BY clause; one lazy way to grab it is a WHERE 1=0 copy over a db link, as below (PRODLINK is a made-up link name).
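
        -- copies column definitions only - no rows, no partitions
        CREATE TABLE sales AS
          SELECT * FROM sales@prodlink WHERE 1=0;

        # then let imp pour the rows into the existing table
        imp system/manager file=prod.dmp tables=SALES ignore=y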

    When merging data, I use a more complicated approach that works for our environment. I would tell you, but then I would have to take back my holy grail and then kill you.
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  11. #11
    Join Date
    Nov 2003
    Posts
    13
    If I import just the tables with no data, then try to import a set of tables with data over that schema, it doesn't seem to like it because the tables already exist.

    Is there a way around this? ...or do I have to drop the tables?

  12. #12
    Join Date
    Jul 2003
    Posts
    2,296
    Use ignore=Y.

    I would also use constraints=N and indexes=N until all of your data is loaded. Once loaded, then import the constraints and indexes with rows=N. In command form, that's two passes over the same dump, something like this (file name and login are placeholders):
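
        # pass 1: rows only - no index builds, no constraints yet
        imp devuser/pass file=prod.dmp ignore=y constraints=n indexes=n

        # pass 2: no rows - bolt the indexes and constraints back on
        imp devuser/pass file=prod.dmp ignore=y rows=n constraints=y indexes=y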
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  13. #13
    Join Date
    Nov 2003
    Posts
    13
    Thanks for your tips Mr Duck!

    I'm trying to load in the data at the moment and am getting the following errors:

    IMP-00003: ORACLE error 1659 encountered
    ORA-01659: unable to allocate MINEXTENTS beyond 32 in tablespace MYDATA

    IMP-00058: ORACLE error 1653 encountered
    ORA-01653: unable to extend table MYTABLE by 256 in tablespace MYDATA
    IMP-00028: partial import of previous table rolled back: 3997455 rows rolled back

    And then on creating indexes:
    IMP-00003: ORACLE error 1658 encountered
    ORA-01658: unable to create INITIAL extent for segment in tablespace MYDATA
    IMP-00017: following statement failed with ORACLE error 1658:

    Any ideas? Could it be some sort of permissions problem, or is it more likely the tablespace itself?

  14. #14
    Join Date
    Nov 2003
    Location
    Bangalore, INDIA
    Posts
    333

    Hi,

    It's a problem with the tablespace.

    Oracle failed to find sufficient contiguous space to allocate MINEXTENTS for the segment being created in that tablespace.

    To fix it, add another datafile to the tablespace, or try a smaller NEXT extent size.

    NOTE: an extent must fit within a single datafile, so one datafile must be able to house an extent of 473393152 bytes. For example (file paths and sizes below are placeholders):
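
        -- give the tablespace room for the large extents:
        ALTER TABLESPACE mydata
          ADD DATAFILE '/u02/oradata/dev/mydata02.dbf' SIZE 1000M;

        -- or let the existing file grow on demand:
        ALTER DATABASE DATAFILE '/u02/oradata/dev/mydata01.dbf'
          AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;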
    SATHISH .

  15. #15
    Join Date
    Nov 2003
    Posts
    13
    Lovely. Thanks Sathish, it seems to be importing correctly now!

    The_duck - Just so I've understood you correctly, I would follow these steps for imports:

    1) export dump of tables with data

    2) import the dump with ignore=Y, constraints=N and indexes=N

    3) import from the same dump file as in 2) with rows=N, ignore=N, constraints=Y, indexes=Y

    I take it I still need to recreate the views though?

    Thanks again!
