DevSecOps, the Pareto Principle, and You!
March 02, 2021
John Partee, Full Stack Machine Learning Engineer
When I’m working on a project, I tend to avoid anything extra until all of my code is written. Security, testing, and efficiency are all problems saved for future me. I need a viable product yesterday! But nothing is as permanent as a temporary solution. Lately I’ve started building some degree of security and automation into my projects by default, and it’s saving me a ton of time. The rest of this post will showcase how you can start doing a little DevSecOps right now, easily, and for (mostly) free.
The Pareto principle is the age-old idea that 80% of your outcomes come from 20% of your efforts. If you’re just now starting with automation, keep this in mind! Any level of testing and security is a massive upgrade over none at all. Don’t spend a whole day tweaking the rules for SonarQube, just get it going. It will change the way your whole team works!
I’m biased! NTS is a GitLab partner, and it’s no secret that I’m a GitLab fanboy. If your team is working with one of their competitors, you have similar tools available with slightly different workflows. The ideas are the same!
This is also more “technical than average” for the NTS blog. If you’ve seen GitLab CI before, keep scrolling, there are code snippets you can steal.
First, a quick showcase of a very small part of what GitLab CI can do through examples.
GitLab CI has to be the easiest code automation system I’ve seen. GitHub has “Actions”, BitBucket has “Pipelines”, and Jenkins can run the same tasks in a pinch. The idea here is that we run a handful of jobs every time we push new code into source control. My first adventure in CI-land was using GitLab CI to build and push docker containers for an app I was working on, which saved me five minutes every time I pushed code. At 3-5 pushes a day, that makes a big difference.
The big idea with GitLab CI for me is that I push every task that I do every time I commit code into the CI pipeline.
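As a sketch of what that looks like in practice (the stage and job names here are illustrative, not from my actual project), a pipeline that runs every routine task on each push might be laid out like this:

```yaml
# .gitlab-ci.yml (illustrative layout only)
stages:
  - build
  - test
  - scan

build containers:
  stage: build
  script:
    - docker-compose -f docker-compose-prod.yml build --parallel

unit tests:
  stage: test
  script:
    - pytest tests/

security scan:
  stage: scan
  script:
    - sonar-scanner
```

Every push triggers all three stages, so anything you'd normally do by hand before a release happens automatically instead.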
It’s a little embarrassing, but this was my first ugly use of GitLab CI:
```yaml
# .gitlab-ci.yml
# Run jobs in this container by default
image: docker/compose

variables:
  # Hey compose container, here's where docker is!
  DOCKER_HOST: tcp://docker:2375/

services:
  # Hey gitlab, need this running please. Docker-in-docker tagged!
  - docker:18.09-dind

before_script:
  # Log into the registry before we build/push
  - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin registry.ntsdev.net/v2

build frontend:
  script:
    # And finally, build and push all of our containers, inside of that docker/compose container!
    - docker-compose -f docker-compose-prod.yml build --parallel
    - docker-compose -f docker-compose-prod.yml push
```
The `image` line says we’ll run the jobs in the “docker/compose” container, which includes docker-compose functionality. The `DOCKER_HOST` variable tells that container where the docker engine lives. In `before_script` we log into the registry so we can push to it later. “build frontend” specifies a “job”, with a script to run in that compose container. If you’ve used docker-compose before, the rest is self-explanatory: we’re building the set of containers defined in the “docker-compose-prod.yml” file, then pushing them into the registry we logged into during the `before_script`.
That’s a lot to process! Read over the comments in the file again if you’re lost. The big thing to take away here is we’re running these actions inside of a docker container. That is a big part of what makes this system so easy to use!
Now, what issues does my creation have? Well, we’re building every container at every push. Slow! If one container fails to build, the whole job fails, and we don’t know which one failed without digging through logs. These logs can be pretty long, so that has to go. In later iterations, I broke each container build out into its own job, which runs only when something in that container’s folder changed.
```yaml
# .gitlab-ci.yml
Build Sinker:
  only:
    changes:
      - sinker/*
  script:
    - docker build -t registry.ntsdev.net/john.partee/project-xenolith/sinker:$CI_COMMIT_SHORT_SHA -t registry.ntsdev.net/john.partee/project-xenolith/sinker:latest ./sinker/.
    - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin registry.ntsdev.net/v2
    - docker push registry.ntsdev.net/john.partee/project-xenolith/sinker
```
Much like before, we’re building and pushing a container, but this time we’re using plain docker commands. Instead of logging in before the script, we log into the registry after the build finishes. And because of the `only: changes:` condition, the whole job runs only when something under `sinker/` changed. Each container has a job like this. Now our pipeline jobs are more isolated and run much faster!
Keep in mind your build process will vary depending on your platform, and what your code does. This works well enough for me for basic docker/Kubernetes stuff, but serverless web-apps or whatever else you’ve concocted will have different needs. Either way, if you can push it to CI (and you probably can), do it! It saves time!
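To make that concrete, here’s a hypothetical job for a different kind of component, say a Python API, using the same conditional-build pattern. The job name, image tag, and paths are all illustrative, not from a real project:

```yaml
# Hypothetical: same `only: changes:` pattern, but running tests
# instead of a container build. Paths and image are assumptions.
Test API:
  image: python:3.9
  only:
    changes:
      - api/*
  script:
    - pip install -r api/requirements.txt
    - pytest api/tests/
```

The pattern carries over no matter what the job actually does: scope it to a folder, and it only runs when that folder changes.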
Now, let’s add a security tool. I like SonarQube a lot. It’s free to start with, and remarkably cheap if you need their advanced features. I deployed it to our development Kubernetes cluster in a few minutes with their Helm chart. This gives us a nice UI that shows code quality hints, and vulnerabilities in our code. SonarSource also publishes a docker container version of their scanner, which makes our lives easier! To have SonarQube scan the whole repo, all I had to add to the CI file was the following:
```yaml
Sonarqube Test:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint:
      # Their container does something when it starts by default
      # entrypoint: '' stops that!
      - ''
  script:
    - sonar-scanner -D"sonar.projectKey=project-xenolith" -D"sonar.sources=." -D"sonar.host.url=http://sonar.ntsdev.net" -D"sonar.login=atotallyfakeapikeythatyougetfromthesonarqubeui"
```
Their GitLab CI Documentation is pretty good too. I ripped most of my config off of there. When you add the project in SonarQube, it will spit out the script you need to paste in above. Their cloud offering has some different options as well, this just works well for us on a closed developer network.
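One tweak worth making early: don’t paste the token into the CI file itself. GitLab can inject it as a masked CI/CD variable set in the project settings; the variable name `SONAR_TOKEN` below is just a convention I’m assuming, not a requirement:

```yaml
Sonarqube Test:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: ['']
  script:
    # SONAR_TOKEN is defined as a masked variable under
    # Settings > CI/CD > Variables, so it never lives in the repo
    - sonar-scanner -D"sonar.projectKey=project-xenolith" -D"sonar.sources=." -D"sonar.host.url=http://sonar.ntsdev.net" -D"sonar.login=$SONAR_TOKEN"
```

Masked variables are redacted in job logs, so the token doesn’t leak even if someone echoes the environment.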
SonarQube found a lot of problems, which isn’t surprising. This was my first React app, and I was winging it.
My favorite thing about this tool is that it isn’t just security focused: if I did something ugly, it’ll let me know. Even better, it’ll give me an estimate of the time to fix it! And of course, it screams about my lack of tests, which is something I really do need to fix. SonarQube can also be used to halt pipelines and merges that don’t pass quality checks, which is handy with larger teams. If the intern writes 500 lines with no tests, we can shut down that push to production before anything bad happens!
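If you want the pipeline to actually stop on a failed quality gate, the scanner can wait for the gate result and fail the job when the gate fails. A sketch, assuming a reasonably recent SonarQube that supports the wait flag, and the same masked `SONAR_TOKEN` variable convention:

```yaml
Sonarqube Test:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: ['']
  script:
    # qualitygate.wait makes the scanner block until the quality gate
    # is computed, and exit non-zero (failing the job) if it fails
    - sonar-scanner -D"sonar.projectKey=project-xenolith" -D"sonar.sources=." -D"sonar.host.url=http://sonar.ntsdev.net" -D"sonar.login=$SONAR_TOKEN" -D"sonar.qualitygate.wait=true"
```

Pair that with a protected branch rule and the bad merge never lands.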
Overall, this totally shifted how I viewed security. By having constant feedback, I can make things secure up front rather than dealing with all of these problems after a security audit. Happy auditors, happy life, right?
GitLab has some of the most impressive out-of-the-box security tools I’ve seen. SAST performs static code analysis similar to SonarQube, and DAST does dynamic scanning of your deployed web-app, with an optional active mode that can try to attack it. SAST and DAST are a great starting point, and just by including a few more templates we get pretty good secret detection, license compliance, and dependency checking, with virtually no extra work.
On a recent 2-week MVP sprint, we added these lines to our .gitlab-ci.yml file.
```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/DAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/License-Scanning.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml

variables:
  DAST_WEBSITE: https://mrb.dev.ntsdev.net
  DAST_FULL_SCAN_ENABLED: "true"
```
I had intentionally done no other security work, and I’m still not the best React developer, so there were vulnerabilities!
It looks rough, but I wanted to share this vulnerability report to highlight how important this is. These are problems I wouldn’t have known about otherwise. These are things that could have led to data loss, even if our AWS infrastructure behind the software was airtight. I am not a security engineer, but I can leverage the experience of security engineers by running these tools. The feedback loop is near real-time as well. I can make their recommended changes and watch the vulnerability count drop. Satisfying!
Automate stuff! Put tasks that you do every code push into a CI pipeline. Add security tools now. A lot of them are free and don’t require a lot of configuration. It will save you a ton of time and money in the future.
GitLab is my favorite tool for software development enablement, and this is another great example. If you have some money to spend on security, spend it on controlling where the vulnerabilities come from, not another solution for finding them after they’re causing problems!
And of course, if you or your team need help pushing towards DevSecOps we’re happy to help. We’ve enabled teams large and small, from cloud to edge. Automation is the way of the future, and we can help bring you there.
CI is a rabbit hole worth falling down, at least a little. Here are a few other things worth looking at for your pipelines:
All of these are free or have free versions available. Again, the 80/20 rule applies: don’t deep-dive any of these if you’re just starting out. Just get some form of testing and security running and you’ll be in better shape.
WITH AUTOMATION, THE POSSIBILITIES ARE ENDLESS
At NTS we recognize that automation is not a destination, but a journey. For more information about NTS automation service offerings and the best practices for your organization on the road to automation success, view our Automation page or contact email@example.com.