I've always used maintenance plans in the past, but only for backups.
However, I recently changed jobs as well and I am dealing with a new beast called TSM (Tivoli Storage Manager -- it's not so bad really, it has its pluses and minuses, but it's been a bear to learn). With TSM/TDPSql, the full backups are scheduled from the TSM server and written over the network to a device that essentially mimics a tape drive.
I'm only responsible for making sure that the transaction logs are backed up. I'm not 100% comfortable with it yet -- there are still some gaps -- but I'm learning to adapt to the new environment.
Some additional thoughts:
1. I never trusted SQL Database maintenance plans for anything except backups. No optimizations, no defragmentation, no integrity checks.
2. Once per month, I schedule a DBCC CHECKDB (usually on a weekend). I use sp_MSforeachdb 'DBCC CHECKDB ([?])'. For large databases, you can use WITH PHYSICAL_ONLY (see the sketch after this list).
3. Once per month, I cycle the error log (if the server has not been restarted in the last 30 days).
4. Once per month, I delete backup history from the msdb database (there's an sp for this; both of these housekeeping steps are sketched after this list).
5. I created two maintenance plans for each instance: a User Database Maintenance Plan (full backups and log backups) and a System Database Maintenance Plan (full backups only, for master, model, and msdb). It's a little trickier with SQL 2005, at least until you get to SP2, which allows multiple schedules per maintenance plan.
6. I have separate jobs to script off and save:
a. DTS packages (doesn't work with SQL 2005 SSIS)
b. SQL Logins (bcp'd to a file on a secure share; example after this list)
c. I'm working on one to script off jobs
7. Defragmentation I handle through a separate job, usually on an exception basis (a sketch follows below). Fragmentation issues can often be mitigated by design. Defragging indexes can be a real storage pig, as the log file goes berserk.
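Regarding #2 above, the CHECKDB job step is essentially a one-liner (sp_MSforeachdb is undocumented but widely used; NO_INFOMSGS just keeps the output readable, and "BigDatabase" is only a placeholder):

    -- Monthly integrity check across every database on the instance
    EXEC sp_MSforeachdb 'DBCC CHECKDB ([?]) WITH NO_INFOMSGS';

    -- Lighter-weight variant for the really large databases
    -- DBCC CHECKDB (N'BigDatabase') WITH PHYSICAL_ONLY;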
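The monthly housekeeping in #3 and #4 boils down to two system procedures. The 90-day cutoff is only an example; keep whatever history your audit requirements call for:

    -- #3: start a new error log file without restarting the service
    EXEC sp_cycle_errorlog;

    -- #4: purge backup/restore history older than 90 days from msdb
    DECLARE @cutoff datetime;
    SET @cutoff = DATEADD(dd, -90, GETDATE());
    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;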
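For 6b, the login dump is nothing fancy -- roughly the command below run as a CmdExec job step (the server name and share path are placeholders, and -T assumes a trusted connection). Note this captures names, SIDs, and default databases; if you need password hashes carried to another server, the sp_help_revlogin approach from the Microsoft KB is the usual route:

    rem Dump login names, SIDs, and default databases to a secure share
    bcp "SELECT name, sid, dbname FROM master.dbo.syslogins" queryout "\\secureshare\dba\MYSERVER_logins.txt" -c -T -S MYSERVER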
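On #7, the exception-based approach is basically: measure first, then touch only the indexes that are genuinely fragmented. On SQL 2005 that looks roughly like the query below (the 30% threshold is just my rule of thumb, and the index/table names in the fix are placeholders; on SQL 2000 the equivalent is DBCC SHOWCONTIG plus DBCC INDEXDEFRAG). REORGANIZE works in small transactions, so it's far gentler on the log than a blanket REBUILD:

    -- List indexes in the current database above ~30% fragmentation
    SELECT OBJECT_NAME(ps.object_id) AS table_name,
           i.name AS index_name,
           ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id  = ps.index_id
    WHERE ps.avg_fragmentation_in_percent > 30
      AND ps.index_id > 0;   -- skip heaps

    -- Then fix individual offenders, e.g.:
    -- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;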
When you get to managing so many instances, you HAVE to focus on standards and then manage by exception. Data files have to be placed in the same directory structure. Log files too. As much as possible, drive naming conventions have to be standardized across instances. Database settings need to conform to standards (e.g., recovery model). You will also want to spend time documenting settings and saving them somewhere they can be accessed in the event of a disaster.
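One concrete example of managing by exception: rather than eyeballing every instance, run a quick audit query and only chase the deviations. The sketch below assumes FULL recovery is the standard for user databases (SQL 2005 syntax; on 2000 you'd use DATABASEPROPERTYEX instead):

    -- Flag user databases that have drifted from the FULL recovery standard
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE recovery_model_desc <> 'FULL'
      AND database_id > 4;   -- skip the system databases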
It's not easy, but it does make for interesting work...
Have you hugged your backup today?