Logging Scenario - Regular SSH Exports
Last modified on October 24, 2024
Scenario: You want to export SSH replay captures from your organization on a regular basis. This document explains how to do this using the sdm audit ssh CLI command. Instructions are included for local export and for exporting to either AWS S3 or Google Cloud Platform (GCP) cloud storage. For more command information, see the CLI Command Reference.
Initial Setup
Create a new Linux system user with restricted permissions to run the audit. In this example, we use sdm.
Download and install the Linux SDM client.
Create an Admin Token
To create an admin token, sign into the StrongDM Admin UI and go to Audit > API & Admin Tokens. From there you can create an admin token with the specific rights you require, which in this case is the Audit > SSH Captures permission only.
After you click Create, a dialog displays the admin token. Copy the token and save it for later use in /etc/sdm-admin.token in the format SDM_ADMIN_TOKEN=<YOUR_TOKEN>.
This file must be owned by your user:
chown sdm:sdm /etc/sdm-admin.token
For more details on creating admin tokens, see Create Admin Tokens.
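Because the token grants audit access, it is also worth tightening the file mode so only the sdm user can read it. A minimal sketch (the chown repeats the step above; the chmod 600 hardening is our suggestion, not part of the original instructions):

```shell
# Restrict the token file to the sdm user: owner read/write only,
# no group or world access. The chown matches the earlier setup step.
chown sdm:sdm /etc/sdm-admin.token
chmod 600 /etc/sdm-admin.token
```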
Export to a JSON File
Set up a script to run a periodic SSH export. In the following example script, captured SSH sessions are written to a JSON file every five minutes.
#!/bin/bash
export SDM_ADMIN_TOKEN=<insert admin token here>
START=$(date -d "5 minutes ago" '+%Y-%m-%dT%H:%M:00') # start of audit slice, defaulting to 5 minutes ago
FN=$(date '+%Y%m%d%H%M') # timestamp string to append to the output filename
END=$(date '+%Y-%m-%dT%H:%M:00') # end of audit slice, defaulting to now, at the top of the minute
TARGET=/var/log/sdm # location where JSON files will be written
/opt/strongdm/bin/sdm audit ssh --from "$START" --to "$END" -j > "$TARGET/ssh.$FN.json"
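Rather than pasting the token into the script, you can load it from the /etc/sdm-admin.token file created during initial setup. A hypothetical helper (the function name load_sdm_token is ours, not part of the SDM tooling):

```shell
#!/bin/bash
# Load SDM_ADMIN_TOKEN from a file containing a line of the form
# SDM_ADMIN_TOKEN=<YOUR_TOKEN>. `set -a` auto-exports every variable
# assigned while the file is sourced, so the token lands in the
# environment where the sdm CLI can read it.
load_sdm_token() {
  set -a
  . "$1"
  set +a
}
```

In the script above, you would then replace the export line with load_sdm_token /etc/sdm-admin.token.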
Add a crontab entry
Although most Linux systems have standard locations for scripts that run daily, weekly, and so on, this script runs every five minutes, so it is best to place it directly in the crontab of a user or of the system.
Add this line to the crontab of your choice, modifying the interval to match what you set in the script:
*/5 * * * * /path/to/script.sh
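A five-minute cadence produces 288 files per day, so you may also want a housekeeping job. A sketch, assuming the same TARGET directory as the script above and a hypothetical 30-day retention window:

```shell
# Delete JSON exports older than 30 days; adjust -mtime to match your
# retention policy. The directory check keeps the job quiet if the
# target directory has not been created yet.
TARGET=/var/log/sdm
[ -d "$TARGET" ] && find "$TARGET" -name 'ssh.*.json' -type f -mtime +30 -delete
```

This could run from cron as well, for example once a day.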
Export to Cloud Storage
If you configured logging to a cloud environment, use the following methods to extract SSH captures before or after log export.
SSH session extraction prior to export
Set up and run a periodic export to extract SSH sessions before shipping the logs to your cloud storage. In the following example script, SSH captures are compressed and exported every hour.
#!/bin/bash
# day, hour, minute timestamp
TIMESTAMP=$(date +'%Y%m%d%H%M')
# to prevent overlapping records, cover 61 minutes ago to 1 minute ago
FROMTIME=$(date --date="61 minutes ago" +'%Y-%m-%d %H:%M:%S')
TOTIME=$(date --date="1 minute ago" +'%Y-%m-%d %H:%M:%S')
SSHDIR=/path/to/save/ssh/sessions
TEMPDIR=/tmp
# this token needs only audit/ssh captures permission
export SDM_ADMIN_TOKEN=<token>
CLOUD_LOG_NAME=strongdm-log-$TIMESTAMP.gz
CLOUD_SSH_NAME=strongdm-ssh-$TIMESTAMP.gz
CLOUD_PATH=<scheme>://bucket/path/to/logs # set <scheme> for your cloud (for example, s3 for AWS or gs for GCP); note there is no trailing slash at the end of the path
export CLOUD_ACCESS_KEY_ID=<key>
export CLOUD_SECRET_ACCESS_KEY=<key>
# Ensure your environment variables are in place and gzip the data into either S3 (aws s3) or GCP (gsutil); this example uses S3
journalctl -q -o cat --since "$FROMTIME" --until "$TOTIME" -u sdm-proxy > $TEMPDIR/sdmaudit.log
cd $SSHDIR; sdm ssh split $TEMPDIR/sdmaudit.log
gzip -c $TEMPDIR/sdmaudit.log | aws s3 cp - $CLOUD_PATH/$CLOUD_LOG_NAME
sdm audit ssh --from "$FROMTIME" --to "$TOTIME" | \
gzip | aws s3 cp - $CLOUD_PATH/$CLOUD_SSH_NAME
Configure this script to run every hour in cron.
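Mirroring the five-minute example earlier, the hourly schedule can be a single crontab entry; /path/to/hourly-export.sh is a placeholder for wherever you saved the script:

```shell
0 * * * * /path/to/hourly-export.sh
```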
Note: If your logs are in JSON format, you must use the -j option to perform this operation correctly (for example, sdm ssh split -j $TEMPDIR/sdmaudit.log).
SSH session extraction after export
To extract SSH sessions from exported logs, first determine the ID of the session you want to view. Do this by running sdm audit ssh with the relevant --from and --to flags, as in the following example.
$ sdm audit ssh --from "2018-03-20" --to "2018-03-22"
Time,Server ID,Server Name,User ID,User Name,Duration (ms),Capture ID,Hash
2018-03-21 20:51:16.098221 +0000 UTC,1334,prod-312-test,1016,Joe Admin,8572,4516ae2e-5d55-4559-a08c-8a0f514b579c,afb368770931a2aae89e6a8801b40eac44569d93
2018-03-21 20:53:01.4391 +0000 UTC,1334,prod-312-test,1016,Joe Admin,7515,fbd50897-1359-4b55-a103-68e4dafa494b,aa4aa0646469757df9f0b92fb5ca39a9c1bfd38d
2018-03-22 21:57:10.920914 +0000 UTC,1334,prod-312-test,1016,Joe Admin,10440,aa8dab30-685d-4180-a86b-bb1794d23756,aa4aa0646469757df9f0b92fb5ca39a9c1bfd38d
2018-03-22 23:16:40.170815 +0000 UTC,1334,prod-312-test,1016,Joe Admin,5433,7a8735cf-05c8-4840-89ae-42c6ad750136,883b03873229301e58fb6c9ccf1a3f584953d13c
2018-03-22 23:21:49.987304 +0000 UTC,1334,prod-312-test,1016,Joe Admin,4529,2324e5d7-398b-47cd-ace6-78b33f813e3f,883b03873229301e58fb6c9ccf1a3f584953d13c
Next, copy the logs from the relevant timeframe back down from your cloud storage. Note that an SSH session may span several log files, so pay attention to the duration of the session shown in the previous step.
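For S3, the download mirrors the upload in the hourly script; gsutil cp serves the same role on GCP. A sketch, reusing the CLOUD_PATH placeholder from that script (the 2018032* prefix is an assumed example window, not a real value):

```shell
# Copy every archive for the relevant window back down from cloud
# storage. The strongdm-log-<timestamp>.gz names come from the
# hourly export script.
CLOUD_PATH=s3://bucket/path/to/logs
aws s3 cp "$CLOUD_PATH/" . --recursive --exclude '*' --include 'strongdm-log-2018032*.gz'
```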
Unzip the logs and compile them into a single file.
cat log1 log2 log3 > combined-logs
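If the downloaded archives are still gzipped, both steps can be combined, since zcat decompresses to standard output (the filenames here follow the strongdm-log-<timestamp>.gz pattern used by the hourly export script):

```shell
# Decompress and concatenate in one step; the result matches the
# unzip-then-cat sequence above.
zcat strongdm-log-*.gz > combined-logs
```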
Run sdm ssh split <logfile> to extract all SSH sessions from this log. The extracted files are named after the session ID. At this point, you can view the relevant session file (in JSON format).
$ sdm ssh split combined-logs
5783cb5e-e1c8-44ba-b8ee-4bc4d8c28c7d.ssh
9d880e13-f608-4fe0-b1e7-deeb35bb9f2c.ssh
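Since each extracted .ssh file is JSON, any JSON tool can render it for inspection; for instance, with python3 (jq works equally well; the filename below is taken from the example output above):

```shell
# Pretty-print one captured session for easier reading.
python3 -m json.tool 5783cb5e-e1c8-44ba-b8ee-4bc4d8c28c7d.ssh
```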
Note: If your logs are in JSON format, you must use the -j option to perform this operation correctly (for example, sdm ssh split -j combined-logs).