I've got a system in which multiple servers, each running a copy of SQL Server Express 2008, host files for the application. When the files are accessed, log records are written to the local database, and then at regular intervals (hourly at the moment) the contents of the log tables are sent to a central master database server, where they are added to master log tables ready for statistical analysis and reporting. The local log tables are then cleared out to save space (the system generates a hundred records or so a minute).
The idea is that the whole thing is distributed and tolerant of network faults at any point. If one of the host servers goes down, the application just uses another; if the master database goes down, the logs just pile up on the host servers until the master is available again to receive them.

Now I'm a web developer, not a dedicated database admin, so I've built it using what I know: a web service on the master database server, and a CLR stored procedure on each host that sends the data to the service.
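To make the store-and-forward behaviour concrete, here's a rough sketch of what each hourly run does (in Python rather than the actual CLR/T-SQL code, and with invented names; `send_to_master` stands in for the web-service call):

```python
def ship_logs(local_rows, send_to_master):
    """Try to push buffered log rows to the master.

    Returns the rows that are still pending: everything if the master is
    unreachable (so they pile up for the next run), or nothing once the
    push succeeds (so the local log table can safely be cleared).
    """
    if not local_rows:
        return []
    try:
        # In the real system this is the CLR stored procedure calling
        # the web service on the master database server.
        send_to_master(list(local_rows))
    except ConnectionError:
        # Master is down: keep the rows locally and retry next hour.
        return list(local_rows)
    # Delivered: the local copies can now be deleted to save space.
    return []
```

So the only state on each host is the local log table itself, and rows are deleted only after the master has acknowledged them.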

I'm looking at this system and wondering whether I've taken the wrong approach. It feels like there should be some built-in SQL Server technology to do this for me, probably more efficiently. I've looked at database replication, but it doesn't seem to be quite what I want.

Does anyone know of another way of getting the same result? How would a database expert do it?