From SQL BOL:
Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. Snapshot replication is best used as a method for replicating data that changes infrequently or where the most up-to-date values (low latency) are not a requirement. When synchronization occurs, the entire snapshot is generated and sent to Subscribers.
Snapshot replication would be preferable over transactional replication when data changes are substantial but infrequent. For example, if a sales organization maintains a product price list and the prices are all updated at the same time once or twice each year, replicating the entire snapshot of data after it has changed is recommended. Creating new snapshots nightly is also an option if you are publishing relatively small tables that are updated only at the Publisher.
Snapshot replication is often used when needing to browse data such as price lists, online catalogs, or data for decision support, where the most current data is not essential and the data is used as read-only. These Subscribers can be disconnected if they are not updating the data.
Snapshot replication is helpful when:
- Data is mostly static and does not change often. When it does change, it makes more sense to publish an entirely new copy to Subscribers.
- It is acceptable to have copies of data that are out of date for a period of time.
- You are replicating small volumes of data for which a complete refresh is reasonable.
Snapshot replication is mostly appropriate when you need to distribute a read-only copy of data, but it also provides the option to update data at the Subscriber. When Subscribers only read data, transactional consistency is maintained between the Publisher and Subscribers. When Subscribers to a snapshot publication must update data, transactional consistency can still be maintained between the Publisher and Subscriber because the data is propagated using the two-phase commit protocol (2PC), a feature of the immediate updating option. Snapshot replication requires less constant processor overhead than transactional replication because it does not require continuous monitoring of data changes on source servers. However, if the data set being replicated is very large, it can require substantial network resources to transmit. In deciding whether snapshot replication is appropriate, you must consider the size of the entire data set and the frequency of changes to the data.
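For the price-list scenario described above, a snapshot publication setup on the Publisher looks roughly like the following. This is only a sketch: the database, publication, and table names (SalesDB, PriceListPub, ProductPrices) are made up for illustration, and you'd need a Distributor configured first.

```sql
-- Enable the database for publishing (illustrative names throughout).
EXEC sp_replicationdboption
    @dbname  = N'SalesDB',
    @optname = N'publish',
    @value   = N'true';

-- Create a snapshot publication (not transactional).
EXEC sp_addpublication
    @publication = N'PriceListPub',
    @repl_freq   = N'snapshot',
    @status      = N'active';

-- Schedule the Snapshot Agent to regenerate the snapshot nightly.
EXEC sp_addpublication_snapshot
    @publication        = N'PriceListPub',
    @frequency_type     = 4,    -- daily
    @frequency_interval = 1;

-- Add the price-list table as an article in the publication.
EXEC sp_addarticle
    @publication   = N'PriceListPub',
    @article       = N'ProductPrices',
    @source_object = N'ProductPrices';
```

Subscribers would then be added with sp_addsubscription (push) or sp_addpullsubscription (pull), and each run of the Snapshot Agent sends the entire table, which matches the "whole refresh is acceptable" criteria above.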
Your two choices would be merge or transactional replication (for doing incremental updates). Transactional replication depends on good network connections. Merge replication is designed (theoretically, anyway) for disconnected users who periodically connect to get updates.
Which you choose depends on what your needs are.
You might also consider scheduled updates using DTS or some other tool. Merge and transactional replication impose restrictions on schema updates that can sometimes be a real PITA.
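If you roll your own scheduled refresh instead of using replication, the simplest version is a job that does a full reload across a linked server. A hedged sketch, where LinkedSrv, SalesDB, and ProductPrices are all hypothetical names:

```sql
-- "Do it yourself snapshot": wholesale refresh of a local copy from
-- the Publisher via a linked server, run on a schedule (e.g. nightly
-- via a SQL Server Agent job). Names here are illustrative only.
BEGIN TRANSACTION;

-- Throw away the old copy...
TRUNCATE TABLE dbo.ProductPrices;

-- ...and pull the current data across in one shot.
INSERT INTO dbo.ProductPrices (ProductID, Price)
SELECT ProductID, Price
FROM LinkedSrv.SalesDB.dbo.ProductPrices;

COMMIT TRANSACTION;
```

No replication metadata, no schema-change restrictions, and the tradeoff is the same as snapshot replication: the whole table goes over the wire every time, so it only makes sense for small, infrequently changing data.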
Have you hugged your backup today?