Just tried this on a larger database and thought I had timeout issues, but it turns out that the database backup file gets too large for the file system and the server kills the script.
This is an issue that could affect anyone with a large database. If the backup file reaches 2 gigabytes (2,147,483,648 bytes), the script stops writing the file and you will see "child pid xxxxx exit signal File size limit exceeded (25)" in the server error log. The backup file will be stuck at 2,147,483,647 bytes.
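If you want to confirm this is what happened to you, here is a minimal sketch (assuming Python is available on the server; the backup path is a placeholder, substitute your own) that checks whether the file is pinned at exactly 2,147,483,647 bytes:

import os

# Hypothetical path to the stuck backup file - adjust for your setup.
BACKUP_FILE = "/path/to/backup/yourdb.sql"
LIMIT = 2**31 - 1  # 2,147,483,647 bytes: the largest signed 32-bit file offset

size = os.path.getsize(BACKUP_FILE)
if size == LIMIT:
    print("Backup stopped exactly at the 2GB limit (%d bytes)" % size)
else:
    print("Backup is %d bytes; the 2GB limit is probably not the problem" % size)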
This happens because, on a 32-bit Linux system without large file support, a file cannot grow past 2GB (the largest offset that fits in a signed 32-bit integer, 2^31 - 1 = 2,147,483,647 bytes).
To get around this, just change the "Combine Files" setting to "No" so that each table is backed up to its own file. As long as no individual table produces a dump larger than 2GB, it will work.
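To check in advance whether the per-table workaround will be enough, a rough sketch like the one below (assuming the pymysql module and your own credentials and database name, which are placeholders here) lists each table's approximate on-disk size. Note this only approximates the size of the dumped SQL, which can come out larger or smaller.

import pymysql

LIMIT = 2**31 - 1  # 2GB boundary in bytes

# Hypothetical credentials - replace with your own.
conn = pymysql.connect(host="localhost", user="backup", password="secret",
                       db="information_schema")
with conn.cursor() as cur:
    cur.execute(
        "SELECT table_name, data_length + index_length AS bytes "
        "FROM tables WHERE table_schema = %s ORDER BY bytes DESC",
        ("your_database",),  # hypothetical schema name
    )
    for name, size in cur.fetchall():
        flag = "  <-- over 2GB, would still fail" if size and size > LIMIT else ""
        print("%-30s %12d bytes%s" % (name, size or 0, flag))
conn.close()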