If you’ve been using SSH for any amount of time, you have probably already needed to copy a local file to a remote host.

For that you can use scp (secure copy). You specify the local file name, the user and host to connect to over SSH, and the destination path on the server:

scp my_file.txt <user>@<hostname_or_ip>:/home/admin
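
For example, with a hypothetical user admin on a host reachable at example.com, the filled-in command looks like this:

scp my_file.txt admin@example.com:/home/admin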

What if you need to copy over an entire directory and all of its contents? Just add the -r option:

scp -r my_directory <user>@<host>:/home/admin/some/path
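
scp also works in the other direction: swap the source and destination to pull a remote directory down to your machine (hypothetical user, host and local directory shown):

scp -r admin@example.com:/home/admin/some/path my_local_directory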

Doing it better with rsync

scp is perfectly fine for a one-time copy of a moderately sized directory.

If, however, you end up copying the same directory over and over (to push local changes to a server, for example), rsync is a better and much more efficient solution.

rsync -av my_directory <user>@<host>:/home/admin/some/path

With the -a option, rsync runs in archive mode, preserving modification timestamps and file permissions and copying symbolic links as links.

It might not be ideal for everyone (and of course you can tune it), but by keeping the modification timestamps intact, rsync can simply skip files that have not been modified since the last upload, saving you time and bandwidth.
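
If you want to check what a repeated run would actually transfer before committing to it, rsync's --dry-run (-n) flag lists the files without copying anything (hypothetical user and host shown):

rsync -avn my_directory admin@example.com:/home/admin/some/path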

Bonus: copying a directory to an AWS S3 bucket

If you have an S3 bucket and the awscli tool installed, copying an entire directory over to the bucket is as simple as the following (note the --recursive flag, which aws s3 cp needs in order to copy a directory rather than a single file):

aws s3 cp my_directory s3://<my_bucket>/my/path --recursive
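
Both cp and sync also accept --exclude and --include patterns if you want to leave some files out of the transfer (the pattern below is just an illustration):

aws s3 cp my_directory s3://<my_bucket>/my/path --recursive --exclude "*.log"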

The CLI also offers a sync subcommand, which has a --delete option that removes files from the bucket that have been deleted locally:

aws s3 sync --delete my_directory s3://<my_bucket>/my/path
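
Because --delete removes objects from the bucket, it can be worth previewing the operation first; the s3 commands accept a --dryrun flag that only prints what would be done:

aws s3 sync --delete --dryrun my_directory s3://<my_bucket>/my/path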

Finally, the same command can be used to copy from S3 to your machine; simply reverse the source and destination arguments:

aws s3 sync s3://<my_bucket>/my/path my_local_directory