As engineering teams accelerate and scale their cloud and CI/CD initiatives, the rate of change in production (and the business) starts to increase dramatically.
Continuously delivering small incremental code changes is proven to lower the risk of production incidents and downtime. In addition, tools like DataDog, Splunk, Dynatrace, and New Relic can detect application performance and quality issues almost instantly in production to assist with resolution.
However, the same is not true for detecting the security, compliance, and data privacy risks of CI/CD deployments.
Identifying Security & Compliance Risk is Hard
Many CI/CD pipelines include security testing stages (SAST/DAST/SCA tools), but these tests are limited in the visibility they provide and the time they take to execute. Remember, elite-performing teams deploy to production multiple times a day, so developers can’t wait hours for these tests to complete.
Teams can also be required to participate in ‘security reviews’ for CI/CD deployments, so that security professionals can manually assess the scope and risk of every change, based on documentation provided by engineering teams.
Making application code secure and compliant at the speed and scale of CI/CD is hard, and despite best efforts, it remains a slow, manual, and tedious process that relies on people to scale and govern it. The net result is that a lot of security and compliance risk goes undetected in production.
We created Bionic to solve this problem.
As more code is deployed in production, there is more risk of applications ‘drifting’ out of their approved architecture design and intended operating boundaries.
Today, infrastructure as code (IaC), GitOps (config-as-code), and ephemeral compute all help ensure infrastructure ‘drift’ is rare. Sure, it might happen, but scaling a Kubernetes ReplicaSet from 2 to 4 replicas doesn’t introduce any security or compliance risk.
Application ‘drift’, however, is common and elusive. Drift includes how code uses, accesses, and impacts other services, APIs, libraries, dependencies, and data flows within your environments.
Here are some simple examples of application drift:
- Microservice A introduces a call to a new third-party web service provider
- Microservice B makes a new call to a PII-restricted database over an unencrypted connection
- Microservice C introduces a new hard-coded credential instead of referencing a token from a secrets manager
- Microservice D introduces a new open-source library
- Microservice E starts to access an EU-located database from a non-EU data center
Not all drift or risk is important. However, examples 2 and 5 would breach GDPR guidelines, thereby introducing compliance risk for the business. With the rise of open-source software (OSS), example 4 can happen daily. The question is whether that code carries vulnerabilities or presents a new risk for the business.
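To make drift example 3 concrete, here is a minimal Python sketch of the anti-pattern and its fix. The connection-string format and the `DB_PASSWORD` environment variable are illustrative assumptions, not part of any particular product; in practice the secret would be injected at runtime by a secrets manager such as Vault or AWS Secrets Manager.

```python
import os

# Anti-pattern (drift example 3): a credential hard-coded into application code.
# It ships in the binary/image, bypasses the secrets manager, and is invisible
# to infrastructure-level monitoring.
DB_PASSWORD = "s3cr3t-hunter2"  # hypothetical leaked credential

def connect_insecure() -> str:
    """Builds a connection string using the hard-coded credential."""
    return f"postgres://app:{DB_PASSWORD}@db:5432/orders"

# Fix: resolve the credential at runtime from an injected environment variable
# (populated by a secrets manager), so it never appears in source or binaries.
def connect_secure() -> str:
    """Builds a connection string using a runtime-injected credential."""
    password = os.environ["DB_PASSWORD"]  # assumed to be injected at deploy time
    return f"postgres://app:{password}@db:5432/orders"
```

The drift here is purely in application logic: both versions deploy and run identically, which is why infrastructure-focused tooling never sees the difference.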
Focus on Code, Not Infrastructure
Your infrastructure or network does not control application architectures and data flows; they are controlled by application code (i.e., logic) written by developers. On top of that, application logic depends on compiled binaries, configuration files, and environment variables.
CI/CD pipelines today are optimized for functional testing and speed. Making code secure and compliant requires pipelines that can also detect security, compliance, and data privacy risk.
Just because code is functional and fast doesn’t mean it’s secure and compliant.
As a computer science graduate who programmed Java in 1997, I can confirm that I was never once taught how to write secure or compliant code. The same is true for the four years and 500+ code reviews I attended as a software engineer working for a global systems integrator in London.
In 2021, with rising cyber security threats and government regulations, developing secure and compliant code is not optional; it’s table stakes.