Run the StrongDM Client on Docker

Last modified on September 17, 2024

Two options are available to run the StrongDM client on Docker: deploy the StrongDM Docker Service Client Container, or install the StrongDM client on an existing Docker container.

The StrongDM Docker Service Client Container is an ideal option if you plan to use separate containers to run StrongDM in your cluster or Docker environment, or if you have not yet deployed any containers. Alternatively, you may already have containers with applications installed, or containers built on specific flavors of Linux. In such cases, adding the StrongDM client to those existing containers may be the better option.

The following guide provides instructions for both deployment options. For general information, see Containers and StrongDM.

Deploy the StrongDM Docker Service Client Container

This section describes how to deploy a Docker container that comes with the StrongDM client preinstalled. There are two primary methods to add the StrongDM client container to your containerized deployment: you can run it as a service so that it is persistently available or deploy it on an as-needed basis.

About

The StrongDM Docker Service Client Container is a lightweight Ubuntu 22.04-based Docker image with the StrongDM client binary pre-installed. This image can be obtained from ECR by running the following Docker command:

docker pull public.ecr.aws/strongdm/client

The link to this image is also available on the Admin UI’s Downloads & Install page.

Authentication

A service account token needs to be added as an environment variable to the container. This token acts as your container’s login credentials, which allows you to restrict access at any time from the Admin UI. If you have not already set up a service token, please see the Service Accounts instructions.

For the service account to work effectively, port overrides and auto-connect should both be enabled for your organization. Enabling these settings ensures a consistent login procedure for your container during runtime. Because these changes affect your entire organization, review our documentation before making them.
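
A service account token can be supplied on the docker run command line, as shown in the next section, or stored in an environment file and passed with Docker's --env-file flag. The filename service-token.env below is illustrative; the file contains a single SDM_SERVICE_TOKEN=<your-service-account-token> line:

docker run -d --env-file service-token.env -p 13307:13307 public.ecr.aws/strongdm/client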

Persistent deployment

For this example, the persistent container maps a container port (13307) to the same port on the host machine.

docker run -d -e SDM_SERVICE_TOKEN=$SERVICE_TOKEN -p 13307:13307 public.ecr.aws/strongdm/client

Validate that the container connected successfully by running sdm status inside the container.

docker exec container-name sdm status

The output may be similar to the following:

DATASOURCE NAME                           STATUS        PORT      TYPE
zd918-ssms                                connected     11521     mssql
StrongDM-datasource1-sfo2-main_sdm_db     connected     13306     mysql
StrongDM-datasource1-sfo2-world           connected     13307     mysql

SERVER                                    STATUS        PORT      TYPE
i-09284a37e194e4a9d                       connected     14645     ssh
i-094451c7ae299e46f                       connected     38982     ssh
StrongDM-client1-sfo2                     connected     43264     ssh
StrongDM-database1-sfo2                   connected     43577     ssh
StrongDM-gateway2-sfo2                    connected     30572     ssh

The sdm status output shows that the published port, 13307 in this case, corresponds to the datasource StrongDM-datasource1-sfo2-world.

At this point, the container should be operational and ready to accept connections on the exposed port.
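
If you need to reach more than one resource from the host, publish each resource port with its own -p flag. The ports below are taken from the example sdm status output above:

docker run -d -e SDM_SERVICE_TOKEN=$SERVICE_TOKEN \
  -p 13306:13306 \
  -p 13307:13307 \
  -p 43264:43264 \
  public.ecr.aws/strongdm/client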

Database connections

Use your normal database client to connect to the host port that is mapped to the container. The following output is trimmed for readability.

Check what containers are running:

docker ps

Locate the appropriate container ID.

CONTAINER ID        IMAGE                              PORTS
551cc9c06734        public.ecr.aws/strongdm/client     127.0.0.1:13307->13307/tcp

Check the specific container’s status.

docker exec 551cc9c06734 sdm status

Something similar to the following returns.

DATASOURCE NAME                           STATUS        PORT      TYPE
StrongDM-datasource1-sfo2-world           connected     13307     mysql

Connect with your database client to the mapped port. For the MySQL datasource in this example:

mysql -h 127.0.0.1 -P 13307

Check the database client connections page if you are unsure of the proper connection settings for your database client.
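
For a quick connectivity check without opening an interactive session, you can also run a single statement. As in the example above, no database credentials are passed on the command line because StrongDM handles authentication to the datasource:

mysql -h 127.0.0.1 -P 13307 -e "SELECT 1;"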

SSH connections

As with the database connection, if the SSH port is exposed to the host machine, any SSH attempt to that port is routed through the StrongDM client binary in the container. The following output is trimmed for readability.

Check what containers are running.

docker ps

Locate the appropriate container ID.

CONTAINER ID        IMAGE                              PORTS
551cc9c06734        public.ecr.aws/strongdm/client     127.0.0.1:30572->30572/tcp

Check the specific container’s status.

docker exec 551cc9c06734 sdm status

Something similar to the following returns.

SERVER                                    STATUS        PORT      TYPE
StrongDM-gateway2-sfo2                    connected     30572     ssh

Make an SSH attempt to the port.

ssh localhost -p 30572

Verify you are routed through the StrongDM client binary.

[stronguser@strongdm-gateway2-sfo2 ~]$ # we are now connected via StrongDM

Run as a service

To simplify the deployment process, we recommend deploying the StrongDM Docker Service Client Container as a service. Below is a basic StrongDM service file to be used as an example. This file can be added to your systemd folder structure, such as /lib/systemd/system/strongdm.service.

[Unit]
Description=StrongDM
Wants=network-online.target
After=network-online.target
Requires=docker.service

[Service]
User=username
Group=groupname
Type=simple
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull public.ecr.aws/strongdm/client:latest
ExecStart=/usr/bin/docker run \
      --name=%n \
      --rm \
      -p DATASOURCE_PORT:DATASOURCE_PORT \
      -e SDM_SERVICE_TOKEN=YOUR_SERVICE_TOKEN \
      public.ecr.aws/strongdm/client:latest
ExecStop=/usr/bin/docker kill %n

[Install]
WantedBy=multi-user.target

With the service file in place, you can manage it with standard systemctl commands or the equivalent in your distribution.

sudo systemctl start strongdm

To enable the service so that it is started automatically when the system boots up, use the enable command.

sudo systemctl enable strongdm
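
After starting or enabling the service, you can confirm that the container is up and review its logs with standard tools; the unit name strongdm matches the example service file above:

sudo systemctl status strongdm
sudo journalctl -u strongdm -n 50 --no-pager
docker ps --filter name=strongdm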

Per-job deployment

The StrongDM Docker Service Client Container lifecycle can also be automated to run on demand. When taking this approach, be mindful of loading times as they may vary depending on the environment. The following examples show how to add availability validations into a bash script.

Start the container

In this example, the Docker CLI is invoked to start the StrongDM Docker Service Client Container. Running the container starts the StrongDM client binary automatically, but not instantly. An until loop is one way to check whether the StrongDM client binary is available; if it is not, the script sleeps for one second and tries again until it succeeds.

# Start StrongDM Docker Service Client Container
/usr/bin/docker run -d \
      --name=strongdm \
      --rm \
      -p 15432:15432 \
      -e SDM_SERVICE_TOKEN=service_account_token \
      public.ecr.aws/strongdm/client:latest

# Wait for StrongDM client binary to be available
until docker exec strongdm sdm status &>/dev/null;
do
  sleep 1
done

Wait for datasources to connect

The same until logic can be used to check whether a datasource is ready. In the following example, the psql -l invocation lists the available databases, which confirms that the connection is ready without opening an interactive session. Once it returns successfully, you can run normal database operations.

# Wait for datasource connection to be ready
until psql -h 127.0.0.1 -p 15432 -l &>/dev/null;
do
  sleep 1
done

# Execute database operation
psql -h 127.0.0.1 -p 15432 << EOF >> /var/log/etl.log
SELECT first_name,
      last_name
FROM   users u
WHERE  u.created_at > current_date - '1 day'::interval;
EOF

Wait for the SSH server

Similarly, for SSH connections there may be a slight delay between the StrongDM client binary becoming ready and the server connection becoming available.

# Wait for server to be ready
until ssh localhost -p 43577 exit &>/dev/null;
do
  sleep 1
done

# Execute ssh commands
ssh localhost -p 43577 << EOF
uptime
cat /etc/os-release
exit
EOF

Stop the StrongDM Docker Service Client Container

Perform docker stop on the StrongDM Docker Service Client Container before ending the script to terminate it gracefully.

# Stop with name specified during creation
docker stop strongdm
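
To guarantee that the container is stopped even if an earlier step in the script fails, you can also register a shell trap immediately after starting the container. This is a general bash pattern rather than part of the StrongDM tooling:

# Stop the container when the script exits for any reason
trap 'docker stop strongdm' EXIT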

Put it all together

#!/usr/bin/env bash

# Start StrongDM Docker Service Client Container
/usr/bin/docker run -d \
      --name=strongdm \
      --rm \
      -p 15432:15432 \
      -e SDM_SERVICE_TOKEN=service_account_token \
      public.ecr.aws/strongdm/client:latest

# Wait for StrongDM client binary to be available
until docker exec strongdm sdm status &>/dev/null;
do
  sleep 1
done

# Wait for datasource connection to be ready
until psql -l -h 127.0.0.1 -p 15432  &>/dev/null;
do
  sleep 1
done

# Execute database operation
psql -h 127.0.0.1 -p 15432 <<EOF >> /var/log/etl.log
SELECT first_name,
      last_name
FROM   users u
WHERE  u.created_at > current_date - '1 day'::interval;
EOF

# Stop the container gracefully
docker stop strongdm

Avoid infinite loops

To make your script more robust, limit the number of connection attempts so that an unavailable resource cannot cause an infinite loop. For example, replace the datasource wait in the script above with the following bounded loop:

# Wait for the datasource connection to be ready (up to 60 attempts)
for i in {1..60};
do
  if psql -l -h 127.0.0.1 -p 15432 &>/dev/null;
  then
    break
  else
    sleep 1
  fi
done
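
The same bounded approach can be applied to the sdm status wait, along with an explicit failure path if the client never becomes ready. The 60-attempt limit and the error handling below are illustrative:

# Wait up to 60 seconds for the StrongDM client binary to be available
for i in {1..60};
do
  if docker exec strongdm sdm status &>/dev/null;
  then
    break
  else
    sleep 1
  fi
done

# Abort the job if the client still is not ready
if ! docker exec strongdm sdm status &>/dev/null;
then
  echo "StrongDM client did not become ready in time" >&2
  docker stop strongdm
  exit 1
fi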

Deploy the StrongDM Client on Your Existing Docker Container

Using StrongDM to control access management for your container deployments is similar to using your local StrongDM client binary. You can deploy StrongDM in fully automated workflows, ETL, jobs, and more. This section describes adding the StrongDM client to an existing Docker container.

Dockerfile StrongDM layer

To help you get started, the following examples demonstrate how to add the StrongDM client binary as a single layer to a Dockerfile.

Ubuntu Dockerfile script

FROM ubuntu:22.04

ENV SDM_HOME=/home/sdm/.sdm

RUN adduser --uid 9001 --ingroup root --disabled-password --gecos "" sdm \
    && apt-get update \
    # Install build and runtime dependencies
    && apt-get install --no-install-recommends -y \
        curl \
        unzip \
        psmisc \
        ca-certificates \
    # Download the StrongDM client binary
    && curl -J -O -L https://app.strongdm.com/releases/cli/linux \
    # Unzip it
    && unzip sdmcli* \
    # Install it
    && ./sdm install --user sdm --nologin \
    # Remove no longer needed build dependencies
    && apt-get remove -y \
        curl \
        unzip \
        ca-certificates \
    # Delete the zip file
    && rm sdmcli* \
    # Clean up APT
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*

ADD start.sh /start.sh

USER sdm

ENTRYPOINT ["/start.sh"]

CentOS Dockerfile script

FROM centos:7

ENV SDM_HOME=/home/sdm/.sdm

RUN useradd --uid 9001 --gid 0 sdm \
    && yum update -y \
    # Install build and runtime dependencies
    && yum install -y \
        unzip \
        psmisc \
        initscripts \
    # Download the StrongDM client binary
    && curl -J -O -L https://app.strongdm.com/releases/cli/linux \
    # Unzip it
    && unzip sdmcli* \
    # Install it
    && ./sdm install --user sdm --nologin \
    # Remove no longer needed build dependencies
    && yum erase -y \
        unzip \
    # Remove zip file
    && rm -f sdmcli* \
    # Clean up yum
    && yum clean all

ADD start.sh /start.sh

USER sdm

ENTRYPOINT ["/start.sh"]

Entrypoint script

#!/bin/sh

# logs into sdm
sdm login

# updates to latest release
sdm update

# starts listener manually
sdm listen --daemon &

# attempts sdm status until successful
until sdm status &> /dev/null;
do
  sleep 1
  echo "waiting for SDM to start"
done

/path/to/original/entrypoint
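
If the original entrypoint takes arguments or needs to receive signals directly (for example, on docker stop), you may want to end the script with an exec call instead. This is a general shell pattern, not a StrongDM requirement:

# Replace the shell with the original entrypoint so it receives signals directly
exec /path/to/original/entrypoint "$@"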

Alpine Dockerfile StrongDM layer

The following example demonstrates how to add StrongDM as a single layer to an Alpine Dockerfile.

Alpine Dockerfile script

FROM alpine:latest

ENV SDM_DOCKERIZED=true
ENV SDM_HOME=/home/sdm/.sdm/

RUN adduser --uid 9001 --ingroup root --disabled-password --gecos "" sdm \
    # Install build and runtime dependencies
    && apk --no-cache --update add curl libc6-compat openrc \
    # Update package list then upgrade running system
    && apk -U upgrade \
    # Download the statically linked StrongDM client (required for Alpine images)
    && curl -J -O -L https://app.strongdm.com/releases/cli/linux-static \
    # Unzip it
    && unzip -x sdm*.zip \
    # Install it
    && ./sdm install --user sdm --nologin \
    # Delete the zip file
    && rm sdmcli*
    

ADD start.sh /start.sh

USER sdm

ENTRYPOINT ["/start.sh"]

Alpine entrypoint script

#!/bin/sh

# logs into sdm
sdm login

# updates to latest release
sdm update

# starts listener manually
sdm listen --daemon &

# attempts sdm status until successful
until sdm status &> /dev/null;
do
  sleep 1
  echo "waiting for SDM to start"
done
/path/to/original/entrypoint

Build the image

  1. Create a new directory containing the entrypoint script (start.sh) and Dockerfile of choice.

  2. Run chmod +x start.sh to make the start.sh script executable.

  3. Run docker build -t sdmimage . to build the image.

  4. Check for the image with the following command:

    docker images
    

    Something similar to the following returns.

    REPOSITORY         TAG             IMAGE ID            CREATED             SIZE
    sdmimage           latest          defd8aa002ed        6 hours ago         51.6MB
    

Authenticate the StrongDM client

When using StrongDM in an automated fashion, access is validated and managed using a service account.

For the automated service to work effectively, port overrides and auto-connect should both be enabled. These settings ensure a consistent login procedure for your container during runtime. They affect your entire organization, so please review our documentation and contact your StrongDM administrator before making any changes. Any customization of local ports used by resources stops auto-connection to those resources by service accounts.

The StrongDM client binary looks for the environment variable SDM_ADMIN_TOKEN when authenticating requests. This variable can be added to the environment in a few different ways. If you followed the instructions in the previous section, you can start the image you just created and add the variable during runtime with the -e flag. For example, use docker run -d -e SDM_ADMIN_TOKEN=StrongDM_token sdmimage.
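
For example, combining the service token with a published resource port (the port mapping shown is illustrative; publish the port override of the resource you need to reach):

docker run -d \
  -e SDM_ADMIN_TOKEN=StrongDM_token \
  -p 15432:15432 \
  sdmimage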

Add tokens in a Dockerfile or start script

To simplify the runtime command, the service token can also be added to either the Dockerfile or start.sh script. Either of these options requires building a new Docker image once you have made this change.

  • To add a service token to a Dockerfile, use ENV SDM_ADMIN_TOKEN=StrongDM_token.
  • To add a service token to a start script, use export SDM_ADMIN_TOKEN=StrongDM_token.