Why do you want to partition them in the first place? Just because they have millions of rows?
Are these tables transactional, or do you only read from them? If you only read, you can add appropriate indexes without worrying about hurting insert/update/delete performance (since you don't do any inserts, updates, or deletes) and skip partitioning altogether.
In my job I work with SQL Server tables that have hundreds of millions of rows without problems (not transactional, though). The important thing is to make sure the indexes are correct. You also need decent disks, of course.
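As a sketch of what "correct indexes" might look like for date-range reads, here is a hypothetical nonclustered index on the Receive Date column (the table and column names `dbo.Orders`, `ReceiveDate`, `CustomerId`, and `Amount` are made up for illustration):

```sql
-- Hypothetical example: a nonclustered index supporting range queries
-- on the Receive Date column. INCLUDE (available since SQL Server 2005)
-- adds non-key columns so frequent queries can be answered from the
-- index alone, without touching the base table.
CREATE NONCLUSTERED INDEX IX_Orders_ReceiveDate
ON dbo.Orders (ReceiveDate)
INCLUDE (CustomerId, Amount);
```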
I have a SQL Server 2005 database with many tables, each with millions of records.
They all have a Receive Date field, with records going back 10 years or so.
What would be the best way to partition them? I was thinking of partitioning by year, but that would give me 10+ partitions -- would that be a lot of overhead?
I have only tested partition functions/schemes on 2005, but from my testing and reading I haven't seen anything to indicate that 10 would be too many. Test it out. Keep in mind this whole concept/feature is new in 2005, so you are blazing the path for the rest of us to follow.
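If it helps, a yearly setup in SQL Server 2005 looks roughly like this. This is only a sketch; the names (`pfReceiveDateYearly`, `psReceiveDateYearly`, `dbo.Orders`, `ReceiveDate`) and the boundary dates are assumptions you would adapt to your own tables:

```sql
-- Partition function: one boundary value per year. RANGE RIGHT means
-- each boundary date falls into the partition on its right, so
-- '2005-01-01' starts the 2005 partition.
CREATE PARTITION FUNCTION pfReceiveDateYearly (datetime)
AS RANGE RIGHT FOR VALUES
    ('2000-01-01', '2001-01-01', '2002-01-01', '2003-01-01',
     '2004-01-01', '2005-01-01', '2006-01-01', '2007-01-01',
     '2008-01-01', '2009-01-01');
GO

-- Partition scheme: here every partition maps to PRIMARY for simplicity;
-- you could instead spread the partitions across filegroups on
-- different disks.
CREATE PARTITION SCHEME psReceiveDateYearly
AS PARTITION pfReceiveDateYearly ALL TO ([PRIMARY]);
GO

-- Create the table on the scheme, partitioned by the Receive Date
-- column. The partitioning column must be part of the clustered key.
CREATE TABLE dbo.Orders (
    OrderId     int      NOT NULL,
    ReceiveDate datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (ReceiveDate, OrderId)
) ON psReceiveDateYearly (ReceiveDate);
```

Ten boundary values give you eleven partitions (one below the first boundary, one above the last), which is well within what the feature handles.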