Download a file with SSH/SCP, tar it inline and pipe it to openssl


I want to download a list of files from an SSH server, then put them in some kind of container (like a tar file) and finally encrypt it (e.g. with openssl).

The point of putting them into an archive is to keep the original filenames, while the final encrypted file will have a different name.

So I am trying something like this:
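That is, a pipeline of roughly this shape, where the file names and the openssl cipher are only placeholders:

    # intended: fetch the file, wrap the stream in a tar archive, encrypt the result
    scp user@server:"$filepath" /dev/stdout |
        tar -cf - - |
        openssl enc -aes-256-cbc -pbkdf2 -salt -out secret.bin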

This does not work; scp doesn't seem to pipe the file to tar as expected, and so the tar archive does not contain the downloaded file.

Is there a way to get this to work?

The scp command copies files; it doesn't care about stdin or stdout. Instead, perform the tar on the remote host and encrypt the resulting stream:
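A sketch of that, with the host, path and openssl parameters as placeholders:

    # tar runs remotely and writes the archive to its stdout, which ssh forwards;
    # openssl then encrypts the stream locally
    ssh user@server tar -cf - "$filepath" |
        openssl enc -aes-256-cbc -pbkdf2 -salt -out secret.bin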

Be careful if $filepath starts with /, as it's (now) considered bad practice to create an archive with absolute paths. If this is the case, consider using the -C flag to change directory to / and then using a relative path. For example, rather than this:
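Assuming, purely for illustration, that $filepath is /remote/dir/file:

    # archives the member with an absolute path (/remote/dir/file)
    ssh user@server tar -cf - /remote/dir/file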

use this:
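(same illustrative path)

    # change to / first, then archive the member under the relative path remote/dir/file
    ssh user@server tar -cf - -C / remote/dir/file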

Or maybe
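(again with the illustrative path, but pointing -C at the file's parent directory)

    # change to the file's directory and archive just the file name
    ssh user@server tar -cf - -C /remote/dir file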

If you have a non-GNU tar command on the remote system, then these variants will work:
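For instance, a variant that assumes nothing beyond a POSIX shell and tar on the remote side is to cd before invoking tar (illustrative path again):

    # quoted so the cd runs on the remote host, before tar starts
    ssh user@server 'cd / && tar -cf - remote/dir/file'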

With sshfs you can mount a remote path over sftp locally, and use any file-based tool on it:
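A sketch, with the mount point, remote path and openssl options as placeholders:

    mkdir -p ~/remote
    sshfs user@server:/remote/dir ~/remote    # mount the remote directory over SFTP
    tar -cf - -C ~/remote file |              # archive the file(s) as if they were local
        openssl enc -aes-256-cbc -pbkdf2 -salt -out secret.bin
    fusermount -u ~/remote                    # unmount when done (Linux)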

Other answers propose invoking tar on the remote machine, and that's probably the way I'd go too, especially for automation. But the sshfs approach does have some advantages, not least that any local file-based tool then works on the remote files as if they were local.

Assuming the remote systems have a tar command that is either the GNU or libarchive (like on FreeBSD and derivatives) or toybox (as on Android) implementation¹, if you have that list of file paths in a $filepaths array, you can do:
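A sketch of that pipeline; the host and the openssl parameters are placeholders:

    # send the list NUL-delimited on ssh's stdin; the remote tar reads it with --null -T -
    printf '%s\0' "${filepaths[@]}" |
        ssh user@server 'tar -cf - --no-recursion --null -T -' |
        openssl enc -aes-256-cbc -pbkdf2 -salt -out secret.bin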

(--no-recursion is so that for files of type directory, only the directory file is archived and not the contents recursively)

As @Kenster said, scp in recent versions of OpenSSH (9.0+) won't accept /dev/stdout as the target file to mean "write to stdout"; but even where it does, that's pointless, as tar can't make an archive member out of data read from a pipe: it needs to know the file's size in advance to write it into the tar header.

Also, when using a pipe like that, you lose all the metadata of the remote file, including name, permissions, mtime, ownership... so generating a tar file would be pointless. To add more files to the archive later, you'd also need to decrypt the file first.

Here, we're passing the whole list NULL-delimited on tar's stdin (as forwarded by ssh). That's a lot safer and more reliable than passing them as arguments to tar, first because that means there's no limit to the size of the list and also because it can be extremely difficult to pass an arbitrary list of arguments to a remote command over ssh, especially when you don't know in advance what shell is going to be used to interpret the command line.

Beware that both GNU's and libarchive's tar remove the leading / off absolute paths, so with filepaths=(foo /bar), that will end up archiving the contents of /home/$user/foo (as the current working directory over ssh is generally the home directory by default) and /bar, but as foo and bar members respectively. Both implementations have the -P option to prevent that stripping, but best would be to make sure you either have only absolute paths or only relative paths.

¹ on non-GNU systems, GNU tar might still be available as gtar or at a different location such as /opt/gnu/bin/tar and on systems other than BSDs, libarchive's tar as bsdtar. star has equivalent options for that but is a lot less widespread than GNU's or libarchive's; busybox tar can have --no-recursion if built with FEATURE_TAR_LONG_OPTIONS enabled and -T if built with FEATURE_TAR_FROM enabled, but not --null. You can replace \0 with \n and remove the --null if you can guarantee none of the $filepaths contain newline characters.

Try running scp with the "-O" option:
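For example, with the remote path as a placeholder:

    # -O forces the legacy SCP protocol, so writing to /dev/stdout works again;
    # pipe the output into tar/openssl as in the other answers
    scp -O user@server:"$filepath" /dev/stdout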

Modern versions of scp use the SFTP protocol under the hood to do file transfers, and the SFTP support apparently tries to do file operations which fail on a pipe. "-O" tells scp to use the legacy SCP protocol, which ought to support writing to /dev/stdout. A quick demo on my system:
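Something of this shape, with the remote host and file as placeholders; the point is simply that the file's bytes come out on scp's stdout:

    scp -O user@server:/etc/hostname /dev/stdout | wc -c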

tar supports - to read or write stdin/stdout for the archive itself, but not for a file to be archived/restored/listed. I don't know about other implementations, but on my Ubuntu (with GNU tar) scp ... /dev/stdout | tar c[v]h /dev/stdin | ... works -- or -C/ dev/stdin to avoid a warning, as per Chris Davies, or even -C/dev stdin. Of course this can't record the real filename in the archive, so you'll need to be careful when extracting, although -C/dev stdin makes that a little easier/safer. It also doesn't preserve other metadata like mtime, permissions, and owner -- so what's really the point in making it a tar?
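Spelled out, that looks roughly like this (remote path and openssl options are placeholders; add -O to scp with newer OpenSSH, as discussed above):

    scp -O user@server:"$filepath" /dev/stdout |
        tar -chf - -C /dev stdin |    # h dereferences the /dev/stdin symlink; the member is named "stdin"
        openssl enc -aes-256-cbc -pbkdf2 -salt -out secret.bin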
