Integrating security into DevOps practices

DevOps, as a cultural and technological shift in software development, has opened up a huge space for improvements in neighboring areas. To name one – application security.

Since DevOps is embedded into every step of an idea's journey to the customer, it can also be used as a framework for driving security enhancements at a reduced cost – the automation and continuous delivery machinery is already built for CI/CD needs. With a gentle security seasoning, the existing infrastructure will bring value to securing the product.

Where to start

As I said, we want security to affect all or most of the steps where the DevOps transformation is already bringing value. Let's take a look at what we have in an abstract DevOps CD pipeline:

[Diagram: an abstract CD pipeline without security steps]

It is a pretty straightforward deployment pipeline. It starts with requirements that are implemented in code, which is covered with unit tests and built. The resulting artifact is deployed to staging, where it is tested with automation and a code review takes place. When all of that has succeeded, the change is merged to master, integration tests run on the merge commit, and the artifacts are deployed to production.

Secure CI/CD

Now, making no changes to the CD flow, we want to make the application more secure. Boxes in red are security features proposed to be added to the pipeline:

[Diagram: the same CD pipeline with the proposed security steps highlighted in red]

Security requirements

At the requirements planning stage (it can be backlog grooming or a sprint meeting), we instruct POs and engineers to analyze the security impact of the proposed feature and put mitigations/considerations into the task description. This step requires the team to understand the security profile of the application and the attacker profile, and to have in place a classification of threats based on different factors (data exposure, endpoint exposure, etc.). It requires some preliminary work and is often ignored in Agile environments. However, with security embedded into the requirements, it becomes much simpler for an engineer to fix possible issues before they get exploited by an attacker. According to the well-known calculation of the cost of fixing a defect at different stages, adding security at the design stage costs the least and brings the most value.

In my experience, a separate field in the PBI or a dedicated section in the PBI template needs to be added to make sure the security requirements are not ignored.

Secure coding best practices

For the engineer who implements the feature, it is essential to have a reference for making a particular security-related decision, based on a best-practices document or guidance. It can be a standard maintained by the company or by the industry – but the team must agree on which particular practice/standard to follow. It should answer simple but essential questions – for example: how to secure an API? How to store a password? When to use TLS?

Implementing this step brings consistency to the secure side of the team's coding. It also educates engineers and integrates security best practices into their routines, forming a security-aware mindset.

Security-related Unit testing

This step assumes that we cover the highest-risk functions and features of the code with unit tests first. It is important to keep the tests fresh and to increase coverage alongside the ongoing development. One option is to require security unit tests for certain risky features in order to pass code review, as sketched below.
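Just to illustrate the idea, here is a minimal Pester-style sketch (Get-SanitizedComment is a hypothetical helper, not from any real codebase):

# Hypothetical unit test guarding an input-sanitizing helper against script injection
Describe "Get-SanitizedComment" {
    It "strips script tags from user input" {
        $result = Get-SanitizedComment -Text "<script>alert(1)</script>hello"
        $result | Should -Not -Match "<script>"
    }
}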

Security-related Automated testing

In this step, the tests cover different scenarios of using and misusing the product. The goal is to make sure the security issues are addressed and verified with automation. Authorization, authentication and sensitive data exposure are a few areas to start with.

This set of tests needs to exist separately from the general test suite, providing visibility into the security testing coverage. The need for new automated security tests can be specified at the requirements design stage and verified during code review.
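As a hedged sketch of what such a test might look like (the staging URL and endpoint are placeholders), an unauthenticated call to a protected API should be rejected:

Describe "API authorization" {
    It "rejects requests without an auth token" {
        # Invoke-RestMethod throws on a 401/403 response, which is exactly what we expect here
        { Invoke-RestMethod -Uri "https://staging.example.com/api/orders" -Method Get } | Should -Throw
    }
}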

Static code analysis

This item doesn't appear on the diagram but is worth mentioning. Security-related rules need to be enabled in the static code analysis tool and be part of the quality gate that determines whether a change is ready for production. There is a vast number of plugins and tools that perform automated analysis and catch what the human eye may miss.

Security Code review

This code review needs to be done by a security-minded person or a security champion from a dedicated AppSec team (if there is one). It is important to distinguish it from an ordinary CR and to focus on the security impact and possible code flaws. The reviewer also makes sure the security requirements are addressed, the required unit/system tests are in place and the feature is good to go into the wild.

Security-related Automated testing

Similar to the automated tests in the previous step, with the only difference that here we test the system as a whole, after merging the change to master.

Results

In the end, we managed to reuse the existing process, adding a few security-related key points with clear rules and visible outcomes. DevOps is an amazing vehicle for building a better product, and adding further improvements along the way has never been easier.

Scripts to find WannaCry vulnerable VMs in VMWare vCenter

WannaCry ransomware hit the news by infecting high-profile targets via a security hole that existed in Windows until it was patched on 13.03.2017.

I created a script that connects to vCenter and checks whether the latest hotfix on the system was installed before or after Microsoft released the patch. This doesn't give 100% protection, since some fixes might have been installed manually while the required one was omitted.

However, in centralized IT environments that rely on the Windows Update service being turned on and applying all important updates, it can be a good way to check for the vulnerability.
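The vCenter-wide logic lives in the repo below; purely as an illustration of the idea, a minimal local sketch of the same check could look like this (the cut-off date is the patch date mentioned above):

# Rough local illustration only - the repo script does this across vCenter VMs
$cutoff = Get-Date '2017-03-13'
$latest = Get-HotFix | Where-Object { $_.InstalledOn } |
    Sort-Object InstalledOn -Descending | Select-Object -First 1
if ($latest.InstalledOn -lt $cutoff) {
    Write-Warning "Last hotfix ($($latest.HotFixID)) predates the WannaCry patch - the machine may be vulnerable."
}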

Link to my repo:

https://github.com/doshyt/Wannacry-UpdatesScan

I am working on bringing similar functionality to PS Remoting and am also looking for ways to figure out whether a newer patch was installed but not the one that fixes the problem.

UPDATE 19.05:

I added a useful script that performs a remote check for SMBv1 being enabled on Windows 8 / Server 2012+ machines. It can be run against a list of computer names/FQDNs.

https://github.com/doshyt/Wannacry-UpdatesScan/blob/master/checkSmbOn.ps1

It returns $true if SMBv1 is enabled at the system level.

To turn off SMBv1, execute the following command on the remote machine:

Set-SmbServerConfiguration -EnableSMB1Protocol $false
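If you prefer to do it over PS Remoting instead of logging in to each machine, something like this should work (SERVER01 is a placeholder name):

Invoke-Command -ComputerName SERVER01 -ScriptBlock {
    # -Force suppresses the confirmation prompt
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
}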


Benefiting from TeamCity Reverse dependencies

Reverse Dependencies is a TeamCity feature that allows building more complex workflows by pushing parameters from a "parent" build down to its own snapshot dependencies. I had been looking for this functionality for a while and recently, by accident, discovered how to make it work. Back then, I felt like I had found a treasure 🙂

Example: one of the builds can be run with code analysis turned on – so that the results can be used in the "parent" code analysis build (i.e. with SonarQube). But you only need this when manually triggering the SonarQube build; in any other case (i.e. code checked in to the repo) you don't want TC to spend time running code analysis. The code analysis is turned on with a system property system.RunCodeAnalysis=TRUE in the "child" build.

And here is the trick – the "parent" wants to set a property of the "child" build but can't access anything outside the scope of its own parameters. How would you do it the common way? Maybe create two different builds – where one always has system.RunCodeAnalysis set to TRUE – and trigger that one from the SonarQube build.

In this case, you end up having two almost duplicated builds that only exist because the “parent” can’t set properties of a “child”.

Reverse dependencies are here to help

With Reverse dependencies, it can!

This feature is not intuitively simple, and it doesn't support TeamCity's auto-substitution (as with using %%), so you need to be careful with the naming. This is how it works:

In the “parent” build (let’s call it SonarQube), you set to TRUE a parameter named

reverse.dep.PROJECT_NAME.PARAMETER_NAME

Here, PARAMETER_NAME is the exact name of the parameter that you want to overwrite in the "child" build, e.g. "system.RunCodeAnalysis".

For this to work, you need to have a snapshot dependency on the "child" build enabled. In the "child" build, you simply set system.RunCodeAnalysis=FALSE as the default.
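To make it concrete, the parameters could look roughly like this (MyProject_ChildBuild is a made-up ID of the "child" build configuration):

# In the "parent" (SonarQube) build parameters:
reverse.dep.MyProject_ChildBuild.system.RunCodeAnalysis = TRUE

# In the "child" build parameters (the default value):
system.RunCodeAnalysis = FALSE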

When someone triggers the "parent" build, it first overwrites the default (existing) value of the specified parameter (system.RunCodeAnalysis) in the "child" build with TRUE and then starts that build.

This way, you can use one build definition for different tasks that previously couldn't be handled by a single "core" build. A great use case is setting up a build with an automated TC trigger in which you set parameters of a "child" build. Let's say you want to deploy Nightly to a specific environment using the same build definition that all developers use for building their projects. To do so, you can set up a build with a trigger that builds a specific branch and also sets a target environment parameter through the reverse dependency.

When using Reverse Dependencies, a handy way to check the actual values submitted to a build is the "Parameters" tab of the executed build – it shows which values were assigned to the parameters, so you can make sure that the reverse dependencies work as expected.

Caveats

  1. Reverse Dependencies are not substituted with actual parameter names from “child” builds – it is easy to make a mistake in the definition.
  2. When a project name is changed, you also need to change it manually in all Reverse dependencies.
  3. Try not to modify the build flow with Reverse Dependencies, touching only features that don't affect build results in any way – otherwise you will get a non-deterministic build configuration, in which the same build produces totally different artifacts. The best way to use it is to set parameters consumed by external parties, such as environments for deployment or publishing services, code analysis results, etc.

Setting up LCM with DSC PullServer – cmdlets you need to know

Powershell DSC has a somewhat steep learning curve – at the beginning, it is not that straightforward to figure out how to read the logs, how to trigger a consistency check or how to get updated configurations from the Pull server.

After building the LCM .meta.mof file, you need to apply it to the machine so that it enrolls itself with the DSC Pull server and determines which configuration states to pull and apply. In fact, LCM is the heart of DSC on each node – and it requires some special treatment in order to deliver predictable results. So, to start with LCM, you need to point the DSC engine to the folder where the LCM's .meta.mof is stored and register it in the system. This works as follows:

Set-DscLocalConfigurationManager -Path PATH_TO_FOLDER

As far as I noticed, starting from the Powershell 5.1 update, this command automatically pulls the resources from the Pull server. Previously, it was necessary to trigger the resource update explicitly, without waiting for the standard DSC consistency check interval of 15-30 minutes:

Update-DscConfiguration -Verbose -Wait

This command also comes in handy when there has been an update to the resources on the server (a new configuration state was uploaded or a PS module version got updated).

When we want to start a DSC consistency check (not to be confused with Update-DscConfiguration – the latter only updates resources, it doesn't run the consistency check) without waiting for the DSC scheduler to do it for us:

Start-DscConfiguration -UseExisting -Verbose -Wait

After updating the resources, registering LCM and starting the DSC check, we need to check the status. Here comes the trick – first we need to make sure that no consistency check is currently running – otherwise, we can't get the status of LCM. So, we run:

Get-DscLocalConfigurationManager

This command returns a bunch of parameters – of which we are mainly interested in LCMState.

Get-DscLocalConfigurationManager | Select-Object -ExpandProperty LCMState

It can be either "Idle" or "Busy", or report an inconsistent configuration, which leaves the LCM in a blocked state. When it is "Idle", we are good to go – and can check the actual result of applying a configuration state pulled from the Pull server:

Get-DscConfigurationStatus

The output is either "Failed" or "Success" – and this answers the question of whether the machine is in the desired state or something went terribly wrong.
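Putting the last two checks together – a small sketch that waits for the LCM to become Idle and then reads the status of the last configuration run:

# Wait until no consistency check is running, then read the last run status
while ((Get-DscLocalConfigurationManager).LCMState -eq 'Busy') {
    Start-Sleep -Seconds 10
}
Get-DscConfigurationStatus | Select-Object -ExpandProperty Status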

And the last command – how to get rid of the DSC configuration:

Remove-DscConfigurationDocument -Stage Current,Previous,Pending

DSC stores two configurations for the LCM – current (the last applied) and previous. When it ends up in the "Pending" state, most likely you have a problem with your LCM or state. After using this command for cleanup, you can go and apply an updated LCM.

Making VMWare Integration Plugin 5.5 work in modern browser

When deploying a new OVF template (exported earlier) to a vCenter 5.5 host, you may find yourself in trouble getting it done.

The desktop client may complain about an invalid configuration for device 7, which is an extremely misleading message.

[Screenshot: the desktop client error about an invalid configuration for device 7]

According to support forums, to import OVF templates you must use the web interface. However, the web interface complains about the VMWare Integration plugin not being installed. It even offers to download it.

But here is the trick – whenever you try to install its version 5.5 or 5.6 (since you need ESXi 5.5 compatibility), it simply never works with modern browsers – the plugin is not added to the browser extension list and is not detected as installed. I assume this is caused by their stricter security requirements for installed extensions: the Integration plugin from VMWare requires disk and system access and is silently blocked by newer browsers.

The original requirements specification of the plugin states compatibility with IE 7 and 8.

The solution is simple – run your modern IE in compatibility mode as IE 8! It works like a charm.

Nevertheless, I would recommend quitting the session after you have finished using the plugin and going back to IE 11, so as not to expose your system to risk.


The Security Development Lifecycle book is available for downloading

Very recently, Microsoft published online the foundational book describing the SDL (Security Development Lifecycle).

[Image: cover of The Security Development Lifecycle book]

The principles behind the SDL were born as a response to the Windows Longhorn project reset in the early 2000s. Back then, the entire project was wiped out and restarted from scratch due to the presence of critical vulnerabilities in various components – according to MS insiders. At the time, Microsoft had a questionable reputation with regard to the security of its products. Therefore, the company made a huge investment in security improvement. The SDL was created as the common approach to developing products, from bottom to top – from design to release.

The book was published back in good old 2006, which can be seen as the Stone Age compared to the threats and attack vectors present nowadays. Nevertheless, it remains a valuable source of knowledge and actions for teams and companies that struggle with improving the security of their products. In my opinion, it is impossible to deliver a secure solution without integrating SDL principles into every chunk of the development process.

The most recent overview of SDL can be found at the dedicated Microsoft page.

The best part of it is the set of tools and instruments designed and used by MS at each step of the SDL – with download links. It can be seen as a great reference to the spectrum of problems that the SDL solves – you don't have to replicate it in your organization exactly the way it works at MS, but at least it helps you understand the challenges and possible solutions.

Install Powershell 5.0 on Windows 8.1 with a non en-US locale

Windows Management Framework 5 (aka Powershell 5) fails to install if your Windows was installed with a locale other than en-US. In my case, it was en-GB, so it is not a big deal, right?

Well, not exactly. After downloading the WMF 5.0 update package, it fails to apply – saying "Update is not applicable to your computer". Do not expect to get anything more verbose, nor to find any useful information in the system logs.

After desperate surfing through MS support tickets and trying different fixes, it turned out that the last suspect was the locale configuration. Ironically, MS support engineers from Seattle couldn't reproduce the problem since they all have en-US Windows installed.

So far, the only working way to install WMF 5 (which means PS 5.0) when the update fails to apply is to change the locale setting – which is a non-trivial task. It requires running the system DISM utility in offline mode (when the current Windows installation is not loaded). It also requires obtaining the en-US language pack .cab archive. And finally, you may even brick your boot configuration if you don't run it properly. Sounds exciting, so let's start!

  1. Set the default language and interface language to en-US (Control Panel – Language – Advanced Options)
  2. Prepare a bootable Win 8.1 USB installation drive – I used the same image as for the initial installation. Just write it to USB (Win32DiskImager is a great tool for this).
  3. Download the en-US language pack. It can't be found as a separate package from official resources. What I did was use the MSDN subscription downloads page and grab the installation media of the Windows 8.1 Language Pack – a DVD with a bunch of language packs on it. Then mount the ISO, navigate to "langpacks/en-US" and save the .cab file from it to a convenient location on your drive, e.g. C:\lp.cab
  4. Boot into troubleshooting mode with a command prompt – from a running Windows session, press Restart while holding the "Shift" key. The system will log out and the troubleshooting options menu will be loaded from the USB. Navigate to Troubleshoot -> Advanced options -> Command Prompt.
  5. In the command prompt, run the DISM utility: dism /Image:C:\ /Add-Package /PackagePath:C:\lp.cab.
  6. Do not change the locale here. It is possible and sometimes described as one of the steps to apply, i.e. using dism ... /Set-Syslocale, but better don't – it made my machine fail to boot until I reverted this change.
  7. Boot normally – the language pack is installed but not yet applied. Open "Control Panel\All Control Panel Items\Language", select "English – United States" and click "Options" on the right side. Under "Windows display language" there will be a link to set it as the current display language. In a different situation, I've seen this done from the "Control Panel -> Language -> Advanced Settings -> Change Locale" menu.
  8. After signing out and in again, you can check that the locale has been changed from an elevated command prompt: `dism /Online /Get-Intl`.
  9. Now the WMF 5 update can be applied – it might first install some sort of fix and ask for a reboot. Afterward, run the installer again – and you will get your precious PS 5.0.


That's it! I hope MS folks will fix this issue soon – so that the update can be applied to a system with any locale.