Looking back at last week’s blog post, I realized that I’m not done talking about Good Intentions. If you can’t tell, I’m not a fan of administrative controls. If you have business rules, you can’t just tell people to follow them and expect it to happen. There will always be a moment when someone is busy, or trying to get something done for a customer, or working quickly, and will ignore or circumvent those business rules.
Let’s look at this via a scenario-based example. We have a developer building a software product to deliver to a customer, and the product requires an open-source third-party library. What happens if that library has been compromised, and your team unknowingly delivers a ransomware backdoor to your customer? It may seem far-fetched, but if you search the news, you’ll see this happens more often than you might think.
This is a valid security threat vector that we want to mitigate. What can we do to prevent the above nightmare scenario from occurring?
Administrative Controls
To protect the environment from malware, ransomware, and viruses, we create the following business rule: “Only download open-source libraries from PyPI, npm, or known GitHub repos. Don’t download from some random, unknown GitHub repo.” This rule makes sense, right?
But…
It’s late at night before delivery, and a newer version of the library that provides much-needed capability is not yet on PyPI or npm. However, the maintainer has posted the latest version to a new GitHub repo, citing credential issues with the current build toolchain. So what is your developer probably going to do?
OK, I’ve asserted that administrative controls are not an adequate defense in this scenario. So what should you do instead?
Detective Controls
Hopefully, you’ve implemented consolidated logging across your environment; it’s one of the information security foundations you need in place. If you have, you can set up log monitoring to search for download activity and watch for downloads from unapproved sites.
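As a rough illustration, the check described above might look something like the following sketch. The log format, field order, and the list of approved hosts are all assumptions for the example; a real deployment would match whatever your proxy or SIEM actually emits.

```python
import re

# Assumed allowlist of approved package-hosting domains (illustrative only).
APPROVED_HOSTS = {"pypi.org", "files.pythonhosted.org", "registry.npmjs.org"}

# Assumed log line format: "<timestamp> <user> GET <url>" (hypothetical).
LOG_LINE = re.compile(r"^\S+ (?P<user>\S+) GET https?://(?P<host>[^/]+)")

def flag_unapproved_downloads(log_lines):
    """Return (user, host) pairs for downloads from unapproved hosts."""
    flagged = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("host") not in APPROVED_HOSTS:
            flagged.append((m.group("user"), m.group("host")))
    return flagged
```

Feeding this a proxy log with a download from an unknown GitHub repo would flag the user and host, which you could then route to an alert.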
So this is not a bad idea, but does it prevent the issue? Maybe. Developers may be more diligent in their actions, knowing they are being monitored. Or developers, knowing they are being monitored, may look for sneakier ways to download that library, and at the same time start to distrust the information security team for “spying on them.” And let’s say the monitoring system does detect a download…then what? You’ve probably already delivered to your customer, and it’s too late to prevent anything.
Back to the drawing board.
Preventative Controls
A comment I’ve made before is this: “Design your architecture in a way that it can never not be in compliance.” So if you don’t want people downloading libraries except from certain third-party hosting sites, then don’t provide open internet access. Too restrictive? Implement an outbound web proxy and only allow downloads from an allowlist.
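The allow/deny decision at the heart of such a proxy reduces to a host check against the allowlist. A minimal sketch, assuming a hypothetical set of approved hosts:

```python
from urllib.parse import urlparse

# Assumed allowlist for an outbound web proxy (illustrative only).
ALLOWED_HOSTS = {"pypi.org", "files.pythonhosted.org", "registry.npmjs.org"}

def proxy_allows(url: str) -> bool:
    """Return True only if the request's host is on the allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS
```

In practice you wouldn’t write this yourself; a proxy such as Squid expresses the same default-deny logic through its access control rules.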
This idea works, but experience will tell you immediately that it is large, complex, and expensive to implement. It also causes performance impacts and unintended consequences. Have you ever tried to navigate to a website at work while researching something, only to be met with, “This website has been blocked by your network administrator”?
Automation
Of the approaches we’ve discussed so far, the preventative control above seems to provide the best solution. It gives the developers boundaries and guardrails, and it does a pretty good job of preventing the threat vector we discussed. But is it really the best way? Let’s look in a different direction: innovation.
Note that there is an alternative approach to mitigating this risk: adequate static source code scans and antivirus/malware scans against downloaded software packages. This approach creates a different workflow, where the developer can download from wherever they deem fit but must run these scanning tools against the downloads and deliverables.

Here’s the issue, though, with administrative controls. If you direct the developers to perform all these manual steps, do you really think they are going to happen? I’ve spent the last few weeks writing blog posts about how these types of boring, tedious, manual tasks simply don’t get done.

And now we come back to that statement I made earlier: “Design your architecture in a way that it can never not be in compliance.” Don’t make this a manual, tedious task. Instead, have the developers check their code into a centralized repo that runs a DevSecOps pipeline performing the antivirus and static source code scans. Now you’ve accomplished all your goals: developers have freedom of motion, and your customers are protected from that known threat vector.
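The core of that pipeline is a simple gate: collect the results from each scanner and fail the build if any of them reported findings. A sketch of that gate, assuming scan results have already been gathered into a dictionary by hypothetical antivirus and static-analysis steps:

```python
import sys

def gate(scan_results: dict) -> None:
    """Fail the pipeline (exit non-zero) if any scanner reported findings.

    scan_results maps a tool name to its list of findings; the names and
    shapes here are assumptions for the example, not a real tool's output.
    """
    findings = {tool: hits for tool, hits in scan_results.items() if hits}
    if findings:
        for tool, hits in findings.items():
            print(f"{tool}: {len(hits)} finding(s)", file=sys.stderr)
        sys.exit(1)  # non-zero exit fails the pipeline stage
    print("All scans clean; proceeding to build")
```

Because the gate runs on every check-in, no one has to remember to scan anything, and a finding blocks delivery before it ever reaches the customer.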
I am not a fan of administrative controls, but that doesn’t mean compromising security or quality. It just means that I am a believer in automation and innovation instead of just relying on good intentions.