Scheduled sslscan with GitLab and Docker

11 February 2023

In this post, we will schedule a scan of our website with sslscan using GitLab CI and Docker. We will build a custom Docker image to run the scan and store the results in a file, and add a stage that analyzes those results and either passes or fails the pipeline.

The repository for this post is available here.

Preparing the Docker image

The first task is to prepare the Docker image that will run the scan, adapted to our needs. We will use the official sslscan Dockerfile as a base and add a few more things to it. Our starting point will be this Dockerfile (at the time of writing, the sslscan version is 2.0.15). We only need to change the last lines. Clone/fork the repository and edit the Dockerfile.

FROM scratch

# Copy over the sslscan executable from the intermediate build container, along with the dynamic libraries it is dependent upon (see output of ldd, above).
COPY --from=builder /bin/* /bin/
COPY --from=builder /builddir/sslscan /bin/sslscan
COPY --from=builder /lib/ /lib/
# Fix for ARM builders - copy any architecture musl.
COPY --from=builder /lib/ld-musl-*.so.1 /lib/

# Drop root privileges.
# USER 65535:65535
# Remain as writable user.

ENTRYPOINT ["/bin/ash"]

First, we change the destination path so that the sslscan binary is copied to /bin/sslscan instead of /sslscan. We also copy the whole /bin directory to the final image to have a working shell. Then, because we need to remain a writable user, we comment out the USER line. Finally, we change the ENTRYPOINT to /bin/ash so we can run commands in the container. All the changes can be seen in this commit.

Now we can build the image and test that it indeed drops us into a shell, lets us write files, and can execute sslscan:

# In the directory with the Dockerfile and sslscan sources.
$ docker build -t mysslscan:latest .
[+] Building 44.8s (19/19) FINISHED
$ docker run --rm -it mysslscan:latest
  $ echo "Hello world!" > test.txt
  $ cat test.txt
  Hello world!
  $ sslscan --version
      OpenSSL 1.1.1t-dev  xx XXX xxxx

Making the Docker image available to GitLab

To share the image with all of our runners, we need a Docker registry, such as Docker Hub, AWS ECR or GitLab Container Registry. We will use Docker Hub because it is the easiest to set up. Just create an account and a new public repository for our sslscan image. That way we don't need to worry about authentication later on.

Next we can tag our local image and push it to the registry:

$ docker login # Use access token for better security.
$ docker tag mysslscan:latest ppabis/mysslscan:latest
$ docker push ppabis/mysslscan:latest

Setting up a GitLab CI pipeline

Now we can create a new project in GitLab and set up a pipeline. We will use a list of domains to scan and output the result to an XML file that will be uploaded as an artifact. We will also create a job to analyze the results with a Python script.

scan websites:
  image:
    name: ppabis/mysslscan:latest
    entrypoint: [""] # By default GitLab will try /bin/sh but we use ash.
  script: |-
    sslscan --targets=targets.txt --xml=results.xml
  artifacts:
    paths: ["results.xml"]
    expire_in: 30 minutes

analyze results:
  image: python:3-alpine
  needs: ["scan websites"]
  script: |-
    python analyze.py results.xml

targets.txt contains just a list of domains to scan, one per line. (Also, as of writing, sslscan requires an empty line at the end of the file.) For now, let's save the following stub as analyze.py as a placeholder for the next section.

import sys

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: <results.xml>")
        sys.exit(1)
    print(f"Stub for analyzing results: {sys.argv[1]}")
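For reference, targets.txt is just a plain list of hosts, one per line (the domains below are placeholders; note the trailing empty line that sslscan expects):

```
example.com
www.example.com

```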

Before we push to GitLab, it's a good idea to configure a runner. This is easy to do on our local machine in a Docker container. Just follow the guide to register the runner and keep it running in the background. You can follow the same guide to run the runner on a server. To be sure that the runner picks up the job, edit its settings to mark the checkbox that allows it to run untagged jobs, and turn off shared runners.

Analyzing the results

The results, and what conditions we need to meet, are up to the developers. Good practices such as using TLS 1.3 or requiring a strong cipher suite (AES-256 vs AES-128) depend on the application and the infrastructure. For this example, we will just check that the server has disabled TLS versions older than 1.2 and that all of its cipher suites are rated "strong" or "acceptable".

First we will create a function for analyzing the enabled protocols. If any of the enabled protocols is in the list of insecure protocols, we will return False. Otherwise, we will return True.

def analyze_protocols(test):
    enabled = filter(lambda p: p.attrib.get('enabled') == "1", test.findall("protocol")) # Only enabled protocols.
    enabled = [f"{p.attrib.get('type')}{p.attrib.get('version')}" for p in enabled] # Convert to strings.
    for d in ["ssl2", "ssl3", "tls1.0", "tls1.1"]: # Insecure protocols.
        if d in enabled: # If any of these is in enabled.
            return False
    return True
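To sanity-check this function locally, we can feed it a small hand-written fragment mimicking sslscan's XML output (the hostnames and protocol lists below are made up for illustration):

```python
import xml.etree.ElementTree as ET

def analyze_protocols(test):
    enabled = filter(lambda p: p.attrib.get('enabled') == "1", test.findall("protocol")) # Only enabled protocols.
    enabled = [f"{p.attrib.get('type')}{p.attrib.get('version')}" for p in enabled] # Convert to strings.
    for d in ["ssl2", "ssl3", "tls1.0", "tls1.1"]: # Insecure protocols.
        if d in enabled:
            return False
    return True

# Hypothetical <ssltest> fragments in the shape sslscan produces.
good = ET.fromstring("""<ssltest host="example.com">
  <protocol type="tls" version="1.1" enabled="0"/>
  <protocol type="tls" version="1.2" enabled="1"/>
  <protocol type="tls" version="1.3" enabled="1"/>
</ssltest>""")

bad = ET.fromstring("""<ssltest host="legacy.example.com">
  <protocol type="tls" version="1.0" enabled="1"/>
  <protocol type="tls" version="1.2" enabled="1"/>
</ssltest>""")

print(analyze_protocols(good))  # True - TLS 1.1 is present but disabled.
print(analyze_protocols(bad))   # False - TLS 1.0 is still enabled.
```

Note that a disabled protocol entry (enabled="0") does not count against the host, only protocols that are actually switched on.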

Another function will check whether all cipher suites are marked as "strong" or "acceptable" by sslscan.

def analyze_ciphers(test):
    ciphers = test.findall("cipher") # Find ciphers.
    weak = filter(lambda c: c.attrib.get("strength") not in ["strong", "acceptable"], ciphers) # Non-acceptable/strong ciphers.
    return len(list(weak)) == 0 # The list must be empty.
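We can test this one the same way, with hand-written fragments (the cipher names and strengths here are illustrative, following the `strength` attribute sslscan puts on each `<cipher>` element):

```python
import xml.etree.ElementTree as ET

def analyze_ciphers(test):
    ciphers = test.findall("cipher") # Find ciphers.
    weak = filter(lambda c: c.attrib.get("strength") not in ["strong", "acceptable"], ciphers) # Non-acceptable/strong ciphers.
    return len(list(weak)) == 0 # The list must be empty.

# Hypothetical fragments: one host with only good ciphers, one with a weak cipher.
ok = ET.fromstring("""<ssltest host="example.com">
  <cipher strength="strong" cipher="TLS_AES_256_GCM_SHA384"/>
  <cipher strength="acceptable" cipher="TLS_AES_128_GCM_SHA256"/>
</ssltest>""")

weak_host = ET.fromstring("""<ssltest host="legacy.example.com">
  <cipher strength="strong" cipher="TLS_AES_256_GCM_SHA384"/>
  <cipher strength="weak" cipher="TLS_RSA_WITH_RC4_128_SHA"/>
</ssltest>""")

print(analyze_ciphers(ok))        # True - every cipher is strong or acceptable.
print(analyze_ciphers(weak_host)) # False - one weak cipher is enough to fail.
```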

We will create a dictionary of host => pass/fail and then print out hosts that failed. This will also influence our exit code.

import xml.etree.ElementTree as ET

def analyze(filename):
    results = {}
    with open(filename, "r") as f:
        et = ET.parse(f)
        tests = et.findall("ssltest")
        for test in tests:
            host = test.attrib.get("host")
            results[host] = analyze_protocols(test) and analyze_ciphers(test)

    return results

Replace the stub print with a call to analyze and loop through the dictionary, setting the exit code accordingly.

    results = analyze(sys.argv[1])
    exit_code = 0
    for host, passed in results.items():
        if not passed:
            exit_code = 1
            print(f"{host} FAIL.")
        else:
            print(f"{host} PASS.")
    sys.exit(exit_code)

Now we can push the changes to GitLab and see the results. If all of the hosts pass, you can temporarily add tls1.2 and tls1.3 to the list of insecure protocols to see the job fail.

Scheduling the pipeline to run periodically

We can set up a schedule to run the pipeline periodically. This is done in the settings of the project on GitLab. Just go to the project, "CI/CD" and "Schedules". If the pipeline fails, we should get an email notification. We can even use "Pipeline status emails" integration to send failure notifications to more recipients.

GitLab CI/CD Schedules

What about STARTTLS?

sslscan also supports STARTTLS connections for protocols such as SMTP, IMAP and POP3. We can scan such hosts using an option like --starttls-smtp. However, in this case we need to run another instance of sslscan or create a new job in the pipeline. For example:

scan email servers:
  image:
    name: ppabis/mysslscan:latest
    entrypoint: [""] # By default GitLab will try /bin/sh but we use ash.
  script: |-
    sslscan --starttls-smtp --targets=targets-smtp.txt --xml=results-smtp.xml
    sslscan --starttls-imap --targets=targets-imap.txt --xml=results-imap.xml
  artifacts:
    paths:
      - "results-smtp.xml"
      - "results-imap.xml"
    expire_in: 30 minutes

And in the targets file we would have (remembering the empty line at the end):