Are you using SSH for secure maintenance of your servers? – Sure.
Do you copy files with SCP between hosts? – Most likely.
Is there a need to transfer big files over slow and unreliable network connections (*)? – Could be.
Can SCP resume a download after the connection crashed? – No.
So why not simply use rsync over SSH for your file transfers? A minor drawback: unless you set up an rsync daemon (not appropriate for my case), you have to call rsync manually. Sadly, rsync doesn’t offer anything like an automatic retry in case of a connection failure. (**)
Good for us, because now it’s tool time again; a single shell script does the trick:
#!/bin/sh
# reliable file transfer
# try rsync up to MAX_RESTARTS times
I=0
MAX_RESTARTS=5
LAST_EXIT_CODE=1
while [ $I -le $MAX_RESTARTS ]
do
    I=$(( I + 1 ))
    echo "$I. start of rsync"
    rsync -av --partial --progress -e "ssh" firstname.lastname@example.org:~/MY_BIG_FILE .
    LAST_EXIT_CODE=$?
    if [ $LAST_EXIT_CODE -eq 0 ]; then
        break
    fi
done

# check if successful
if [ $LAST_EXIT_CODE -ne 0 ]; then
    echo "rsync failed $I times. giving up."
else
    echo "rsync successful after $I attempts."
fi
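By the way, the --partial flag is what makes the retries worthwhile: it keeps the partially transferred file around, so the next attempt resumes instead of starting from zero. And if your link tends to stall rather than fail outright, a variation like the following helps; this is only a sketch, and the --timeout and sleep values are arbitrary choices of mine, not part of the script above:

#!/bin/sh
# variation: let rsync abort a stalled transfer after 60s of I/O
# silence and pause 10s between attempts (example values only)
I=0
MAX_RESTARTS=5
until rsync -av --partial --progress --timeout=60 -e "ssh" \
        firstname.lastname@example.org:~/MY_BIG_FILE .
do
    I=$(( I + 1 ))
    [ $I -gt $MAX_RESTARTS ] && { echo "giving up after $I attempts."; exit 1; }
    echo "rsync failed, retrying in 10s (retry $I of $MAX_RESTARTS)..."
    sleep 10
done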
Ah, just a sidenote, as I always forget the syntax: if you need to remotely execute a command via SSH with variables from your local shell, do it like this:
CMD="test -e M_BIG_FILE || cp MY_BIG_FILE `hostname -s`-MY_BIG_FILE" ssh email@example.com $CMD
(*) If you only have an unstable satellite link, even 150 MB is way too big.
(**) Make sure that you actually test over the network; using rsync with source and destination files on the same system deactivates the delta-calculation algorithm.
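(If you ever do want the delta algorithm on a purely local copy, e.g. for a quick benchmark, rsync’s --no-whole-file option forces it back on; /tmp/MY_BIG_FILE below is just a throwaway example destination.)

# force delta-transfer even though source and destination are local
# (rsync normally implies --whole-file in this case)
rsync -av --no-whole-file MY_BIG_FILE /tmp/MY_BIG_FILE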