If you have root/ssh access you are better off using mysqlhotcopy rather than mysqldump: it copies the raw database files, which you can move around as desired, and it is much faster. I recommend running it without indices, which can be rebuilt fairly quickly if needed.
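If you ever need those indices back after restoring a copy, something along these lines rebuilds them one table at a time. This is only a sketch, in the same style as my script below: it assumes MyISAM tables, and /var/lib/mysql/vb is just an example path for wherever you restored the files.
Code:
#!/usr/bin/python
# Rebuild the index files that --noindices left out of the copy.
# Assumes MyISAM tables; the restored path below is only an example.
import glob
import commands
for myi in glob.glob('/var/lib/mysql/vb/*.MYI'):
    (s, o) = commands.getstatusoutput('myisamchk -rq %s' % myi)
    print o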
For transfer to another server use rsync via ssh.
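Something along these lines works as a starting point (again python, to match the backup script; the user, hostname, and remote path are placeholders -- substitute your own):
Code:
#!/usr/bin/python
# Push the backup tree to a remote host over ssh with rsync.
# backupuser@backup.example.com and the paths are placeholders.
import commands
cmd = 'rsync -az -e ssh /backup/DBBACKUP/ backupuser@backup.example.com:/backup/DBBACKUP/'
(s, o) = commands.getstatusoutput(cmd)
print o
The -a preserves the top-level symlinks as symlinks, and -z compresses over the wire.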
For automation use cron (the system one, not the vb one).
My backup script is written in Python; here it is in its entirety. Note it maintains a rolling two-week backup (day-of-year mod 15 selects one of 15 daily directories, d0 through d14) and keeps symlinks to the most recent copies in the top level of the backup tree.
Code:
#!/usr/bin/python
import time
import commands

# Day-of-year mod 15 selects one of 15 rotating directories (d0 .. d14),
# so each copy is overwritten after about two weeks.
doy = time.gmtime()[7] % 15
dbpath1 = "/backup/DBBACKUP/"
dbpath2 = "/backup/DBBACKUP/d%s/" % doy
lfile = open('/var/log/backuplog', 'a')
dbs = ['vb', 'another_db', 'a_third_db']
for db in dbs:
    # Raw-file copy of the db; --noindices skips the index contents.
    cmd = "/usr/bin/mysqlhotcopy -u root -p <mypassword> --allowold --noindices %s %s" % (db, dbpath2)
    tlink = '%s%s' % (dbpath1, db)   # top-level symlink to the newest copy
    slink = '%s%s' % (dbpath2, db)
    cmd2 = 'rm -f %s' % tlink        # -f so the very first run doesn't complain
    cmd3 = 'ln -s %s %s' % (slink, tlink)
    lfile.write(time.ctime() + '\n')
    for x in (cmd, cmd2, cmd3):
        lfile.write(x + '\n')
        (s, o) = commands.getstatusoutput(x)
        lfile.write(o + '\n')
lfile.close()
In /etc/cron.d I have a file that runs the script daily at 03:05; it contains:
Code:
MAILTO=
SHELL=/bin/sh
5 3 * * * root /usr/local/backupscripts/dbbackup.py
For the remote transfer you need to set up an ssh key; instructions can be found here: http://pkeck.myweb.uga.edu/ssh/. Personally, I run the remote backup manually every other day, since I don't trust the integrity of my backup host (a shared server).