20 October 2017

Secure FTP on S3 with Chroot and Google Authenticator

Secure FTP (SFTP) is really SSH under the covers behaving like an FTP (file transfer) service, and instructions exist for leveraging S3 as an inexpensive file system backend. But getting SFTP and S3 deployed securely, with users isolated from one another, took some detective work. Isn't it odd that secure instructions always take extra work? Cobbled together from multiple sources, here are flexible instructions to get an up-to-date, robust, secure installation going.

As I've been working a lot in AWS GovCloud, these instructions account for that wrinkle.

Multi-factor (MFA) client access is included, since this is quickly becoming a standard requirement.

These instructions assume the following:
  • Ubuntu 16.04 LTS with SSH - a mature Ubuntu version with long-term support
  • AWS account with S3 bucket privileges
  • End users on Windows who need an "easy" secure method to share files
  • (optional) MFA soft tokens, such as Google Authenticator

One Time Setup

Patch up the system and install required binaries. 
sudo apt-get update
sudo apt-get upgrade
sudo reboot #restart to pick up any kernel updates
sudo apt-get install s3fs #used for mounting S3 filesystem
sudo apt-get install libpam-google-authenticator #MFA
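As an optional sanity check that both packages landed:
s3fs --version
dpkg -s libpam-google-authenticator | grep Status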
Optionally enable MFA capability for SSH
Note: It is always a good idea when making SSH changes to leave a separate console open and then test the change. Otherwise it is easy to lose access to the box if a mistake is made.
sudo vim /etc/ssh/sshd_config
ChallengeResponseAuthentication yes
For now, protect access for the ubuntu user by adding these lines to sshd_config so that account keeps using key-only authentication. This can be updated later once everything is complete.
Match User ubuntu
        AuthenticationMethods publickey 
Restart SSH for the change to take effect.
sudo systemctl restart sshd.service
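With the separate console from the note above still open, confirm the change took effect; sshd -T prints the effective server configuration:
sudo sshd -T | grep -i challengeresponseauthentication
#should report: challengeresponseauthentication yes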
Update the PAM authentication module to use Google Authenticator. 
sudo vim /etc/pam.d/sshd
auth required pam_google_authenticator.so nullok
The nullok flag allows a user who does not have Google Authenticator set up yet to still log in.

Create and Prepare the Mount Point for the S3 bucket

On AWS S3, create a non-public bucket with an account that has API keys - API keys are created in the IAM section of AWS. Using the AWS GUI to create a bucket with owner ACLs (the default) will work just fine.
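If you prefer the command line over the GUI, here is a sketch assuming the AWS CLI is installed and configured with the same IAM keys (the region shown is the GovCloud one; substitute your own):
aws s3 mb s3://(bucketname) --region us-gov-west-1
aws s3api get-bucket-acl --bucket (bucketname) #confirm only the owner has access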

Create a password file for the S3 file system.
sudo vim /etc/passwd-s3fs
The file format is as follows: bucketname:accessKeyID:secretKeyID
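For example, a sketch using AWS's documented example credentials (substitute the real bucket name and keys); since /etc requires root, tee does the writing:
echo "bucketname:AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | sudo tee /etc/passwd-s3fs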

And then lock it down.
sudo chmod 640 /etc/passwd-s3fs
Create a mount point that is owned by root. 
sudo mkdir -p /s3/home
A sidebar about mounting a ChrootDirectory
In the world of SFTP, using a ChrootDirectory is necessary to limit file system access to users. Without it, SFTP users will be able to traverse all over the server file system and download anything a regular user has read access to, including the passwd file.
 
Mounting the bucket takes a bit of explanation and is very much the secret sauce to getting this to work correctly. Most online instructions suggest a simple mount command as the root user or mounting with the allow_other flag to allow others to use the mount point.

Both of these options will fail when using a ChrootDirectory. A look at the man pages explains why:
ChrootDirectory 
Specifies the pathname of a directory to chroot(2) to after authentication. At session startup sshd(8) checks that all components of the pathname are root-owned directories which are not writable by any other user or group. After the chroot, sshd(8) changes the working directory to the user's home directory.
When mounting the bucket as the root user, the mount point permissions will be 700. This means only the root user will have access to what is below. Chroot will fail for SFTP users with a permission denied error: 
safely_chroot: stat("/s3/home/"): Permission denied
When mounting the bucket with the allow_other flag, the mount point permissions will be 777. This means the mount point is too permissive. Chroot will fail for SFTP users with a bad ownership error: 
fatal: bad ownership or modes for chroot directory component "/s3/home/"
The Goldilocks option is a little-known s3fs option, the mount point umask (mp_umask). Set it to 022 for ChrootDirectory to work correctly.
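Once the bucket is mounted (next section), the result can be spot-checked; with allow_other and mp_umask=022 the mount point should come out root-owned with 755 permissions, which is exactly what the chroot check wants:
stat -c '%a %U:%G %n' /s3/home
#expect something like: 755 root:root /s3/home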

You're welcome.

Mounting the S3 Bucket

The mounting of the bucket should be tested prior to making it permanent.
sudo s3fs -o allow_other -o mp_umask=022 (bucketname) (mountPoint)
A first check is to confirm the bucket is mounted by running the mount command. 
s3fs on /s3/home type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
A second check is needed, though, because a successful mount does not mean the bucket is actually reachable.
ls -l /s3/home
ls: cannot access '/s3/home': Transport endpoint is not connected
The "transport endpoint not connected" usually comes down to two possible issues: 
  1. the bucket name is not unique 
  2. the bucket is not in the default location (us-east-1)
Troubleshooting the s3fs command with the -d and -f flags will help. Specifying the bucket URL is always good practice anyway; in GovCloud implementations, it is required. 
sudo s3fs -o allow_other -o mp_umask=022 -o url=http://s3-us-gov-west-1.amazonaws.com (bucketname) (mountPoint)
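If the mount still misbehaves, running s3fs in the foreground with debug output usually shows the cause (a sketch using the flags mentioned above; Ctrl-C to stop):
sudo s3fs (bucketname) (mountPoint) -f -d -o mp_umask=022,allow_other -o url=http://s3-us-gov-west-1.amazonaws.com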
Once the bucket really is mounted, unmount it and then add an entry to /etc/fstab so it will be mounted at boot.
sudo fusermount -u /s3/home
echo "s3fs#(bucketname) (mountPoint) fuse url=http://s3-us-gov-west-1.amazonaws.com,_netdev,rw,nosuid,nodev,use_sse,mp_umask=022,allow_other 0 0" | sudo tee -a /etc/fstab
sudo mount -a
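The same two checks as before confirm the fstab entry works:
mount | grep s3fs
ls -l /s3/home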
With the bucket mounted, the system is now ready for SFTP users.

Creating New Groups for SFTP Users


Users who need to access the same files should be put into groups. The groups can then be locked down to specific directories on the bucket. Generally, this should be scripted out. But here are the basic steps to get started. 

Create a new group and set up the environment:
sudo addgroup (groupName)
# Create the chroot directory
sudo mkdir /s3/home/(groupName)/
sudo chmod g+rx /s3/home/(groupName)/

# Create the group-writable directory
sudo mkdir -p /s3/home/(groupName)/controlled/
sudo chmod g+rwx /s3/home/(groupName)/controlled/

# Give the group ownership of the full path
sudo chgrp -R (groupName) /s3/home/(groupName)/
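A quick sanity check of the result - the chroot directory itself must stay root-owned and not group- or world-writable, while "controlled" should be group-writable:
ls -ld /s3/home/(groupName)/ /s3/home/(groupName)/controlled/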
Then add that detail to sshd_config so any new SFTP user in that group is locked down to only using SFTP.
BUCKET_HOME="(mountPoint)"
GROUPNAME="(groupName)"

cat <<EOT | sudo tee -a /etc/ssh/sshd_config
# Lock down the new group
Match Group ${GROUPNAME}
    # Only allow SFTP and chroot to the required directory.
    ForceCommand internal-sftp
    ChrootDirectory ${BUCKET_HOME}/${GROUPNAME}/
    # Lock down SSH options
    PermitTunnel no
    AllowAgentForwarding no
    AllowTcpForwarding no
    X11Forwarding no
EOT
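Before restarting, it is worth validating the file; sshd -t prints nothing when the configuration parses cleanly:
sudo sshd -t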
And don't forget to restart SSH to catch the change. 
sudo systemctl restart sshd.service

Add New Users 

Adding new users is straightforward now that everything has been set up.
sudo adduser --ingroup (groupName) (newUser)
sudo passwd (newUser)
Optionally, set up MFA for the new user.
sudo su - (newUser)
google-authenticator
And note the secret key and emergency scratch codes for the user.
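The tool asks its setup questions interactively; for scripted setups the answers can also be passed up front with flags (a sketch: time-based tokens, disallow token reuse, rate-limit to three attempts every 30 seconds):
google-authenticator -t -d -f -r 3 -R 30 -w 3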

A successful test should do the following (a client-side sketch follows the list):
  • Allow the user to log into the system via SFTP 
  • The user should be directed to the chroot environment and only see the "controlled" folder
  • The user should be able to write and read to the "controlled" folder
  • The user should *not* be able to SSH into the system
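A client-side sketch of that test, using a hypothetical host name (Windows users would typically do the same through an SFTP client such as WinSCP or FileZilla):
sftp (newUser)@sftp.example.com #should land in the chroot and show only the "controlled" folder
ssh (newUser)@sftp.example.com #should not get an interactive shell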



