Pandora FMS server with docker-compose

Docker-compose is amazing: it lets you deploy complex clusters of containers with literally one command. I previously had a seamless experience running ELK (Elasticsearch, Logstash, Kibana) with docker-compose, so I decided to give Pandora a try with it.

Pandora FMS is a great tool for monitoring and securing infrastructure, since it provides insight into anomalies that may happen on your servers. And it is open source and free!

To start with, I found the three containers required to run the Pandora Server on the Pandora FMS Docker Hub: a MySQL DB instance initialized with the Pandora DB, the Pandora Server, and the Pandora Console.

All the deployment magic happens within each container, so my task was only to create some infrastructure and orchestration for them with the help of docker-compose.

I put the result on Github – pandora-docker-compose.

Here is a quick overview of what it does:

  • Creates a dedicated network and assigns IPs to the containers
  • Configures Postfix for sending emails to admins (it was not working in the default container)
  • Synchronizes time with the Docker host
  • Maps the Pandora DB files to a local host folder so that you can back them up and restore them


Building secure CI/CD pipeline with Powershell DSC. Part 2: CI for IaaC

In the previous post, I described how to build CI-as-a-code with a DSC Pull server, with a focus on security, by using partial configurations and updating them on demand.

Here comes the next challenge. Now we want the DSC States that define the infrastructure configuration to be easily modifiable and deployable. Imagine patching against WannaCry simply by adding the corresponding Windows update to the DSC Security state: “git push” – and in a short while all nodes in the CI pipeline would be secured. Below, I show how this can be done, with script examples that can help you kick-start your CI for IaaC.

Before we start…

First, let me remind you of some terminology.

A DSC Configuration (State) is a file compiled with Powershell that needs to be placed on the Pull server. It defines the system configuration and is expected to be updated frequently.

The Local Configuration Manager (LCM) is a Powershell-compiled file that has to be deployed to a target node. This file tells the node about the Pull server and which Configurations to get from it, and it is used once to register the node. Updates to the LCM are rare and happen only when the Pull server needs to be changed or new states are added to the node configuration. However, we previously split the states into three groups – the CI configuration state, the Security state, and the General system configuration. With this structure, you can update the states themselves without creating new types of them, so no LCM changes are required.

Also, keep in mind that LCMs are totally different for targets with Powershell v4 (WMF4) and v5, and you need different machines to build them.

From the delivery perspective, the difference between LCMs and States is that LCMs need to be applied with administrative permissions and require running some node-side Powershell. In one of my previous posts, you can find more info on the most useful cmdlets for LCM.

By contrast, States are easy to update and put to work – you only need to compile them and drop them into the Configurations folder of the Pull server. No heavyweight operations are required. So, we design our CI for IaaC with frequent deployment of States in mind.

Building CI for IaaC

For CI, the first thing is always source control. One of my former colleagues loved to ask this question at interviews: “For which type of software project would you use source control?” And the one and only correct answer was: “For absolutely any”.

Since we don’t want to screw up that interview – or our CI pipeline – we put the DSC States and LCMs under source control. Next, we want States and LCMs to be built at code check-in time. The States are published to the Pull server immediately, while the LCMs are stored on a secure file share without being applied directly to the nodes.

[Diagram: CI pipeline for DSC states and LCMs]

Building the artifacts is not a big deal – I’ll leave the commands out of this blog post. What is still missing is how the nodes get their LCMs. My way of doing it is to have a script that iterates over the nodes and applies the corresponding LCMs from the file share to them. I call it “Enroll to DSC”, which is fair enough, since it runs when we need either to enroll a new node with the server or to get some new states onto it.

Here is an example of such a script, using Windows remoting, on my Github. You can find the details in the README.md.
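The gist of it is just a loop around Set-DscLocalConfigurationManager. Here is a minimal sketch of the idea (the share path and node names are hypothetical; the real script in the repository does more):

# Minimal "Enroll to DSC" sketch: apply LCMs from a secure share to each node.
# The share is expected to contain <NodeName>.meta.mof files produced by the CI.
$lcmShare = '\\fileserver\dsc\lcm'              # hypothetical secure file share
$nodes    = @('build-node-01', 'build-node-02') # nodes to enroll

foreach ($node in $nodes) {
    # Uses Windows remoting under the hood and requires admin permissions on the target
    Set-DscLocalConfigurationManager -Path $lcmShare -ComputerName $node -Verbose
}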

Summary

By creating CI for IaaC, we bring the best of DevOps practices to the way we handle the build infrastructure. In fact, having an abstract CI in place already simplifies our job, and after we are done, the CI itself becomes a more reliable and controllable structure. You can deliver any update to the CI with the CI itself within seconds – isn’t that what CI is supposed to be for? Quite a nice example of recursion, I think 🙂

Building secure CI/CD pipeline with Powershell DSC. Part 1: overview

In my experience, CI/CD pipelines way too often suffer from a lack of security and general configuration consistency. There might be an IaaC solution in place, but it usually focuses on delivering the minimal functionality required for building a product and/or recreating the infrastructure as fast as possible if needed. Only a few CI pipelines are built with security in mind.

I like Powershell DSC for being native to the Windows stack and for its intensively developed feature modules, which let you avoid gloomy scripting and hacking into the system’s guts. This makes it a good choice for delivering IaaC with Windows-specific security in mind.

DSC crash course

First, a short introduction to Powershell DSC in Pull mode. In this configuration, DSC States (or Configurations) are deployed to the Pull server, which serves them to the nodes. States define what our node’s system configuration needs to look like: which features to have, which users should be admins, which apps should be installed, etc. – pretty much anything.

Configuration FirewallConfig
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration -ModuleVersion 1.1
    Import-DscResource -ModuleName xNetworking -ModuleVersion 3.1.0.0

    xFirewall TCPInbound
    {
        Name      = 'TCP Inbound'
        Action    = 'Allow'
        Direction = 'Inbound'
        Enabled   = 'True'    # xFirewall expects the string 'True'/'False'
        LocalPort = '443'
        Protocol  = 'TCP'
    }
}

Each State needs to be compiled into a configuration document in the special “.mof” format. Then the state’s .mof file needs to be placed into the “Configurations” folder on the Pull server together with its checksum file. Once the files are there, they can be used by nodes.
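For example, publishing the FirewallConfig state from above could look like this (the destination is the default DscService configuration folder; the remote path is illustrative):

# Compile the state; this produces .\FirewallConfig\localhost.mof
. .\FirewallConfig.ps1
FirewallConfig -OutputPath .\FirewallConfig

# The Pull server expects <ConfigurationName>.mof plus a checksum next to it
$dest = '\\pullserver\C$\Program Files\WindowsPowerShell\DscService\Configuration'
Copy-Item .\FirewallConfig\localhost.mof "$dest\FirewallConfig.mof"
New-DscChecksum -Path "$dest\FirewallConfig.mof" -Force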

The second piece of config is the Local Configuration Manager file. This is the basic configuration for a node that instructs it where to find the Pull server, how to get authorized with it, and which states to use. More information is available in the official documentation.
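For a WMF5 node, a minimal LCM meta-configuration might look like the sketch below (the server URL and registration key are placeholders):

[DSCLocalConfigurationManager()]
Configuration NodeLCM
{
    Node 'localhost'
    {
        Settings
        {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }
        # Where to pull from, how to get authorized, and which states to use
        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://pullserver.example.com:8080/PSDSCPullServer.svc'
            RegistrationKey    = '00000000-0000-0000-0000-000000000000'
            ConfigurationNames = @('FirewallConfig')
        }
    }
}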

To start using DSC, you need to:

  • set up a Pull server (once)
  • build a state .mof and checksum file and place them on the Pull server (many times)
  • build a LocalConfigurationManager .mof and place it on the node (once or more)
  • instruct the node to use the LCM file (once or more)

After this, the node contacts the Pull server and receives one or more configurations according to its LCM. Then the node runs consistency checks against the states, correcting any differences it finds.
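On WMF5 you don’t have to wait for the next scheduled consistency run – you can trigger and verify one manually:

Update-DscConfiguration -Wait -Verbose   # pull and apply the latest states right away
Test-DscConfiguration -Detailed          # verify the node is in the desired state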

Splitting states from the Security perspective

It is the states that are going to change when we want to modify the configuration of enrolled nodes. I think it is good practice to split a node state into pieces, to better control the security and system settings of machines in the CI/CD cluster. For example, we can apply the same set of security rules and patches to all our build machines while keeping their tools and environment configurations specific to their actual build roles.

Let’s say we split the Configuration of Build node 1 into the following pieces:

  • Build type 1 state – all tools and environment settings required for performing a build (unique per build role)
  • Security and patches – updates state, particular patches we want to apply, firewall settings (same for all machines in the CI/CD)
  • General settings – system settings (same for all machines in the CI/CD)

[Diagram: a node configuration split into DSC states]

In this case, we make sure that security is consistent across the CI/CD cluster no matter what role a build machine has. We can easily add more patches or rules, rebuild the configuration .mof file, and place it on the Pull server. The next time the node checks for its state, it will fetch the latest configuration and perform the update.
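On the node side, such a split maps naturally onto partial configurations in the LCM. Extending the earlier LCM sketch for the three pieces above (the names and Pull server details are illustrative):

[DSCLocalConfigurationManager()]
Configuration BuildNodeLCM
{
    Node 'localhost'
    {
        Settings
        {
            RefreshMode = 'Pull'
        }
        ConfigurationRepositoryWeb PullServer
        {
            ServerURL       = 'https://pullserver.example.com:8080/PSDSCPullServer.svc'
            RegistrationKey = '00000000-0000-0000-0000-000000000000'
        }
        PartialConfiguration BuildType1
        {
            Description         = 'Tools and environment for build role 1'
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
        }
        PartialConfiguration Security
        {
            Description         = 'Patches and firewall rules, same for all nodes'
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
        }
        PartialConfiguration General
        {
            Description         = 'General system settings, same for all nodes'
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
        }
    }
}

Each piece is then compiled and published to the Pull server as its own .mof (BuildType1.mof, Security.mof, General.mof) with a checksum, exactly as described in the crash course above.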

In the next post, I will explain how to build a simple CI pipeline for the CI itself – that is, how to deliver LCMs to the nodes and configuration .mofs to the Pull server using a centralized CI server.


Ansible: loops and registering results of commands

Ansible has a great feature for registering the results of a performed step. But recently I stumbled upon the need to do it for actions in a loop and then to use the registered results in subsequent steps that are also looped. Most often, this is useful for performing some action only if a previous action changed a file.

First, we register the `action_state` variable. Since the task runs in a loop, the registered variable contains a `results` list with one entry per item, and each entry has a `changed` attribute. Now we need to use it in the next looped step. This can be done with ease using the `with_together` operator, which zips the original list with the registered results:

- name: copy a file
  copy:
    src: '{{ item.src }}'
    dest: '{{ item.dest }}'
  with_items: '{{ yourlist }}'
  when: item.state == 'present'
  register: action_state

- name: create .ssh dir if file changed in the previous step
  become: yes
  become_user: '{{ item.0.user }}'
  file:
    state: directory
    path: '/home/{{ item.0.user }}/.ssh'
    mode: '0700'
  with_together:
    - '{{ yourlist }}'
    - '{{ action_state.results }}'
  when: item.1.changed and item.0.state == 'present'

Here, we copied the files from `yourlist` to a target machine in a loop (e.g. public keys to some temp location) and then performed an action for each file, but only if that file had changed.

Removing unused Docker containers

Those who work with Docker know how fast free space disappears, especially if you are experimenting with new features and don’t pay much attention to cleaning up. As a result, a lot of old, unused containers can eat tens of gigabytes of space.

Here is the one-liner I found to be very useful for removing old stuff:

docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm

Actually, I’m not the one who came up with this one-liner – I spent a few hours trying different commands and googling before finding the answer.

Here is a full Docker cheat sheet that I bumped into and found quite good in itself:

https://github.com/wsargent/docker-cheat-sheet