The statement does not execute and returns an error. I believe I can trap these failures in Profiler, but I'm not sure what kind of overhead that would create. Several people have suggested triggers, but I'm not sure a trigger will fire on failed attempts; I believe it fires only on successful inserts, updates, and deletes.
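From what I can tell, an AFTER trigger only fires when the statement itself succeeds, so a failed INSERT would never reach it. A quick sketch that should demonstrate this (the table and trigger names are made up):

-- Demo: an AFTER trigger does not fire for a statement that fails.
CREATE TABLE dbo.TriggerDemo (id int PRIMARY KEY);
GO
CREATE TRIGGER trg_TriggerDemo_ins ON dbo.TriggerDemo
AFTER INSERT
AS
    PRINT 'Trigger fired';
GO
INSERT dbo.TriggerDemo (id) VALUES (1);  -- succeeds, prints 'Trigger fired'
INSERT dbo.TriggerDemo (id) VALUES (1);  -- fails with a PK violation; the trigger never fires
GO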
If I take the Profiler approach, I'm not sure it will show schema changes.
If you tell it to, SQL Profiler can track ANYTHING that goes to SQL Server: DML that succeeds or fails, schema changes, and everything else. The question is: how much disk are you willing to dedicate to making this happen?
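For example, a server-side trace (the same mechanism Profiler drives) can be scripted to capture failed statements and schema changes. This is only a sketch; the file path and the particular events and columns are examples, not a recommendation:

-- Sketch of a server-side trace; adjust path, events, and columns to taste.
DECLARE @TraceID int, @maxsize bigint, @on bit;
SET @maxsize = 100;  -- roll over to a new file every 100 MB
SET @on = 1;

-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends the .trc extension.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\Audit\sox_trace', @maxsize, NULL;

-- Event 33 = Exception (failed statements); 46/47 = Object:Created/Deleted.
-- Columns: 1 = TextData, 11 = LoginName, 14 = StartTime, 31 = Error.
EXEC sp_trace_setevent @TraceID, 33, 1, @on;
EXEC sp_trace_setevent @TraceID, 33, 11, @on;
EXEC sp_trace_setevent @TraceID, 33, 14, @on;
EXEC sp_trace_setevent @TraceID, 33, 31, @on;
EXEC sp_trace_setevent @TraceID, 46, 1, @on;
EXEC sp_trace_setevent @TraceID, 46, 11, @on;
EXEC sp_trace_setevent @TraceID, 46, 14, @on;
EXEC sp_trace_setevent @TraceID, 47, 1, @on;
EXEC sp_trace_setevent @TraceID, 47, 11, @on;
EXEC sp_trace_setevent @TraceID, 47, 14, @on;

-- Status 1 = start the trace (0 = stop, 2 = close and delete the definition).
EXEC sp_trace_setstatus @TraceID, 1;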
With Profiler running "wide open" on a moderately busy server, you are looking at 2-3 TB of data in a 24-hour period. Once you've collected the data, you need to figure out what (if anything) you are going to do with it!
The old Chinese adage applies: Be careful what you wish for, you might get it!
We will only be monitoring two or three IDs. I'm not sure whether a domain group can be monitored, but if so we will monitor at the group level. This is for Sarbanes-Oxley compliance, which for us basically means very strict management of database systems at financial institutions. My thanks to Enron. Our DBAs are allowed to manage the development and model office environments, while a consulting company gets to manage production. Sarbanes-Oxley requires that we keep an eye on the production DBAs by monitoring their activity.

Profiler may not be the best approach, though. Even though we will be monitoring a small number of IDs, SQL Server still has to apply the filter logic to all user activity to decide whether the criteria are met. A dedicated auditing tool may produce less overhead.
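If we do end up using a trace, restricting it to those IDs is one sp_trace_setfilter call per login. A sketch, assuming a trace has already been created; the trace ID and domain\login values are placeholders:

-- Sketch: limit an existing trace to two specific logins.
DECLARE @TraceID int;
SET @TraceID = 1;  -- the ID returned by sp_trace_create

-- Column 11 = LoginName; logical operator 1 = OR; comparison operator 0 = equal.
EXEC sp_trace_setfilter @TraceID, 11, 1, 0, N'CONSULTCO\proddba1';
EXEC sp_trace_setfilter @TraceID, 11, 1, 0, N'CONSULTCO\proddba2';

Even with the filter in place, the server still generates each subscribed event and tests it against the filter before discarding it, which is exactly the overhead I'm worried about.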