How to restrict by regions and instance types in AWS with IAM

The use case is simple, and if you work with AWS I am pretty sure you have faced this requirement at some point: I don't want a certain group of users of a particular AWS account to be able to create anything anywhere. I had to configure the security of one of our AWS accounts so that users could work with EC2 and a few other AWS services in only two regions (N. Virginia and Ireland in this case). In addition, to keep our budget under control, we wanted to limit the instance types they can use; in this example we will only allow EC2 instance types with no more than 16 GB of RAM (for a quick view of all available EC2 instance types see http://www.ec2instances.info).

Thanks to the documentation and AWS Support, I came up with the solution below (as an example). The only issue is that, at the moment, we cannot hide features in the AWS Console, but at least AWS Support is very clear and supportive about that. They know how challenging IAM is for certain requirements.

Go to IAM -> Policies -> Create Policy -> Create Your Own Policy and use the following JSON code (or this gist link) as a reference to write your own based on your requirements. After that, attach the policy to the role/user/group you want.
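For reference, here is a minimal sketch of such a policy (not the exact contents of the gist): the first statement allows EC2 actions only in us-east-1 (N. Virginia) and eu-west-1 (Ireland) via the ec2:Region condition key, and the second denies launching any instance type outside an example whitelist of types with 16 GB of RAM or less. Adjust the actions and the instance type list to your own requirements:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEC2OnlyInTwoRegions",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Region": ["us-east-1", "eu-west-1"]
        }
      }
    },
    {
      "Sid": "DenyInstanceTypesOver16GB",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringNotEquals": {
          "ec2:InstanceType": ["t2.micro", "t2.small", "t2.medium", "t2.large", "m4.large", "m4.xlarge"]
        }
      }
    }
  ]
}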

Hope this helps.

[ES] Forensics in AWS: an introduction

English version here.

AWS is always monitoring for unauthorized use of its (and our) resources. If you have dozens of services running on AWS, at some point you will be alerted about an incident for any of several reasons, such as accidentally sharing a password on GitHub, a server misconfiguration that makes it easy to attack, vulnerable services, DoS or DDoS, 0-days, etc. So you should be prepared to perform forensic analysis and/or incident response on your AWS infrastructure.

Remember: in case of an incident, keep calm and try to follow a predefined procedure. Don't leave this process to chance, because you or your boss will probably be nervous enough not to wait or think. It is always much better to follow a previously tested guide than your intuition (you will use that later).

WARNING: if you reached this article in desperation via a Google search, I recommend testing all the commands first in a controlled environment such as your lab or test systems. As I said, you should have an incident response and forensic analysis guide ready before an incident actually happens.

In this article I want to recommend some steps and tricks that we have used at some point. I assume you already have the AWS command line client installed; if not, look here: http://docs.aws.amazon.com/general/latest/gr/GetTheTools.html. All commands are based on a possibly compromised EC2 (Linux) instance, but most of these "aws" commands can also be used for Windows servers, although I have not tested that. All of these actions can also be performed through the AWS Console. These are some of the steps to consider:

1) Disable or delete the Access Key. If an AWS Access Key has been compromised (AWS will let you know by email, or you will notice when you get a huge bill) or you realize you have published it on GitHub:

aws iam list-access-keys
aws iam update-access-key --access-key-id AKIAIOSFODNN7EXAMPLE \
--status Inactive --user-name Bob
aws iam delete-access-key --access-key-id AKIDPMS9RO4H3FEXAMPLE \
--user-name Bob

2) If the Key has been compromised, check whether any resources were created using that Key, in all regions. It is common to find that someone used your keys to launch EC2 instances in other AWS regions, so check all of them for instances that look suspicious. Here is an example that looks for instances created in the us-east-1 region since March 9th, 2016:

aws ec2 describe-instances --region us-east-1 \
--query 'Reservations[].Instances[?LaunchTime>=`2016-03-9`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'

3) Contact the AWS Support team and let them know about the incident; they are always willing to help and, if necessary, will escalate to the AWS Security team.

4) Isolate the instance. When I write YOUR.IP.ADDRESS.HERE below, it can be your office's public IP or an intermediate server you hop through to do the analysis:

    • Create a security group to isolate the instance; note the difference between EC2-Classic and EC2-VPC, and write down the Group-ID:
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate EC2-Classic instances"
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate an EC2-VPC instance" --vpc-id vpc-1a2b3c4d
# where vpc-1a2b3c4d is the ID of the VPC the instance belongs to
    • Set a rule to allow SSH only from your public IP; first you need to know your public IP:
dig +short myip.opendns.com @resolver1.opendns.com
aws ec2 authorize-security-group-ingress --group-name isolation-sg \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
aws ec2 authorize-security-group-ingress --group-id sg-BLOCK-ID \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
# note the difference between the two commands (group-name vs group-id);
# sg-BLOCK-ID is the ID of your isolation-sg
    • EC2-Classic security groups do not support outbound rules (inbound only). For EC2-VPC security groups, however, outbound rules can be set with these commands:
aws ec2 revoke-security-group-egress --group-id sg-BLOCK-ID \
--protocol '-1' --port all --cidr '0.0.0.0/0'
# removes the rule that allows all outbound traffic
aws ec2 authorize-security-group-egress --group-id sg-BLOCK-ID \
--protocol 'tcp' --port 80 --cidr '0.0.0.0/0'
# set a port or IP here if you need to allow some other outbound
# traffic; otherwise do not run this command
    • Apply that security group to the compromised instance:
aws ec2 modify-instance-attribute --instance-id i-INSTANCE-ID \
--groups sg-BLOCK-ID
# where sg-BLOCK-ID is the ID of your isolation-sg
# you can also restrict a compromised IAM user by attaching a locked-down inline policy:
aws iam put-user-policy --user-name MyUser --policy-name MyPowerUserRole \
--policy-document file://C:\Temp\MyPolicyFile.json

5) Tag the instance to mark it as "under investigation":

aws ec2 create-tags --resources i-INSTANCE-ID \
--tags "Key=Environment, Value=Quarantine:REFERENCE-ID"

6) Save the instance metadata:

    • More information about the compromised instance:
aws ec2 describe-instances --instance-ids i-INSTANCE-ID > forensic-metadata.log
or
aws ec2 describe-instances --filters "Name=ip-address,Values=xx.xx.xx.xx"
    • The console output; it can be useful depending on the type of attack or compromise, though remember that you should have a centralized logging system:
aws ec2 get-console-output --instance-id i-INSTANCE-ID

7) Create a snapshot of the volume or volumes of the compromised instance for forensic analysis:

aws ec2 create-snapshot --volume-id vol-xxxx --description "IR-ResponderName-Date-REFERENCE-ID"
    • That snapshot will not be modified or mounted; we will work with a volume instead.

8) Now we can follow two paths. Stop the instance:

aws ec2 stop-instances --instance-ids i-INSTANCE-ID
    • Or leave it running, if we can; in that case we should also isolate it from the inside (iptables) and dump its RAM using LiME.

9) Create a volume from the snapshot:

    • Think about which region you are going to use, and other options such as --region us-east-1 --availability-zone us-east-1a --volume-type, then customize the following commands:
aws ec2 create-volume --snapshot-id snap-abcd1234
    • Take note of the volume:
aws ec2 describe-volumes

10) Mount that volume in your favorite forensic analysis distribution and start the investigation.

I will keep adding more information in future articles; for now, this is a fitting introduction.

If you want to learn much more about this topic, I will be giving an online course on forensics in AWS, GCE and Azure, in Spanish, with Securizame; more info here.


Some references I used:

https://securosis.com/blog/my-500-cloud-security-screwup

https://securosis.com/blog/cloud-forensics-101

http://www.slideshare.net/AmazonWebServices/sec316-your-architecture-w-security-incident-response-simulations

http://sysforensics.org/2014/10/forensics-in-the-amazon-cloud-ec2/

Forensics in AWS: an introduction

Spanish version here.

AWS is always monitoring unauthorized usage of their (and our) resources up in the cloud. If you have dozens of services running on AWS, at some point you are likely to be warned about a security issue for a variety of reasons: accidentally sharing a key on GitHub, a server misconfiguration that makes it easily exploitable, services with vulnerabilities, DoS or DDoS, 0-days, etc. So be ready to perform forensics and/or incident response on your AWS infrastructure.

Remember: in case of a security incident, keep calm and follow a predefined procedure. Don't leave the process to improvisation, because you or your boss will probably be too nervous to wait or think. It is always much better to have a proven guide to follow than to just follow your intuition (you will use your intuition later).

WARNING: you may have come to this article in a desperate Google search for a solution; in that case, I recommend you test all the commands mentioned here in your lab environment first. You should have an incident response and forensics guide with information like this before an incident actually happens.

In this article I want to write up some recommended steps, plus tips and tricks we have used at some point. I assume you have the AWS command line tools installed correctly; otherwise look here: http://docs.aws.amazon.com/general/latest/gr/GetTheTools.html. All commands are based on a compromised Linux EC2 server, but most of the "aws" CLI commands can also be used for Windows servers (not tested, though). You can perform all the actions mentioned below using the AWS Console UI as well, but I think the command line is faster and more straightforward to follow in case of an incident:

1) Disable or delete the Access Key. If your AWS Access Key has been compromised (AWS will let you know in their communication, or you may notice it some other way, e.g. by looking at your code published on GitHub):

aws iam list-access-keys
aws iam update-access-key --access-key-id AKIAIOSFODNN7EXAMPLE \
--status Inactive --user-name Bob
aws iam delete-access-key --access-key-id AKIDPMS9RO4H3FEXAMPLE \
--user-name Bob

2) In case of a compromised Key, check whether new and unexpected resources have been spun up with it, in all regions. It is common to see someone using your compromised Key to launch EC2 instances in other AWS regions, so check all of them for new and suspicious instances. Here is an example that looks for instances launched in us-east-1 since March 9th, 2016:

aws ec2 describe-instances --region us-east-1 \
--query 'Reservations[].Instances[?LaunchTime>=`2016-03-9`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'
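To sweep every region at once, a short loop over describe-regions helps (a sketch reusing the same query):

for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "== $region =="
  aws ec2 describe-instances --region "$region" \
  --query 'Reservations[].Instances[?LaunchTime>=`2016-03-9`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'
done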

3) Contact AWS Support and let them know about the security incident; they are always willing to help and give advice, and they can escalate to the AWS Security team if needed.

4) Isolate the compromised instance. When I write YOUR.IP.ADDRESS.HERE below, it can be your office's public IP or an intermediate server you hop through to do the analysis:

  • Create a security group to isolate your instance; note the difference between EC2-Classic and EC2-VPC, and take note of the Group-ID:
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate EC2-Classic instances"
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate an EC2-VPC instance" \
--vpc-id vpc-1a2b3c4d
# where vpc-1a2b3c4d is the ID of the VPC the instance belongs to
  • Set a rule to allow SSH access from your public IP only; first, find your public IP:
dig +short myip.opendns.com @resolver1.opendns.com
aws ec2 authorize-security-group-ingress --group-name isolation-sg \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
aws ec2 authorize-security-group-ingress --group-id sg-BLOCK-ID \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
# note the difference between the two commands (group-name vs group-id);
# sg-BLOCK-ID is the ID of your isolation-sg
  • EC2-Classic security groups don't support outbound rules. For EC2-VPC security groups, however, outbound rules can be set with these commands:
aws ec2 revoke-security-group-egress --group-id sg-BLOCK-ID \
--protocol '-1' --port all --cidr '0.0.0.0/0'
# removes the rule that allows all outbound traffic
aws ec2 authorize-security-group-egress --group-id sg-BLOCK-ID \
--protocol 'tcp' --port 80 --cidr '0.0.0.0/0'
# set a port or IP here if you need to allow some other outbound
# traffic; otherwise do not run this command
  • Apply that security group to the compromised instance:
aws ec2 modify-instance-attribute --instance-id i-INSTANCE-ID \
--groups sg-BLOCK-ID
# where sg-BLOCK-ID is the ID of your isolation-sg
# you can also restrict a compromised IAM user by attaching a locked-down inline policy:
aws iam put-user-policy --user-name MyUser --policy-name MyPowerUserRole \
--policy-document file://C:\Temp\MyPolicyFile.json

5) Tag the instance to mark it as under investigation:

aws ec2 create-tags --resources i-INSTANCE-ID \
--tags "Key=Environment, Value=Quarantine:REFERENCE-ID"

6) Save the instance metadata:

  • Information about the compromised instance:
aws ec2 describe-instances --instance-ids i-INSTANCE-ID > forensic-metadata.log
or
aws ec2 describe-instances --filters "Name=ip-address,Values=xx.xx.xx.xx"
  • Console output; it can be useful depending on the attack, but you should have a centralized/dedicated log server outside each instance anyway:
aws ec2 get-console-output --instance-id i-INSTANCE-ID

7) Create a snapshot of the volume(s) of the compromised instance(s) for forensic analysis:

aws ec2 create-snapshot --volume-id vol-xxxx \
--description "IR-ResponderName-Date-REFERENCE-ID"

That snapshot won't be changed or mounted; we will work with a volume instead.

8) Now we can follow two paths. Stop the instance:

aws ec2 stop-instances --instance-ids i-INSTANCE-ID
  • Or leave it running, if we can; then isolate it from the inside (iptables) and dump its RAM to a file using LiME.
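A minimal LiME sketch (paths are examples; if possible, build the module on a clean instance running the same kernel version, to touch the compromised host as little as possible):

git clone https://github.com/504ensicsLabs/LiME
cd LiME/src && make
# copy the resulting module to the instance under investigation, then dump RAM to a file:
sudo insmod lime-$(uname -r).ko "path=/tmp/i-INSTANCE-ID.lime format=lime"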

9) Create a volume from the snapshot, to be used later for analysis:

  • Consider using --region us-east-1 --availability-zone us-east-1a --volume-type standard, adapted to your own setup:
aws ec2 create-volume --snapshot-id snap-abcd1234
  • Now take note of your new volume:
aws ec2 describe-volumes

10) Mount that volume with your favorite forensics distribution and run the investigation.
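For example, attaching the new volume to your analysis instance and mounting it read-only could look like this (a sketch; vol-abcd1234, i-FORENSIC-ID and the device names are placeholders):

aws ec2 attach-volume --volume-id vol-abcd1234 \
--instance-id i-FORENSIC-ID --device /dev/sdf
# then, on the analysis instance (the device may show up as /dev/xvdf):
sudo mkdir -p /mnt/evidence
sudo mount -o ro,noexec /dev/xvdf1 /mnt/evidence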

I will add more information in future blog posts, but I think this is a good introduction.

If you want to learn much more about this topic, I will be giving an online training on AWS, GCE and Azure forensics, in Spanish, with Securizame; more info here.

Some cool references and good reads:

https://securosis.com/blog/my-500-cloud-security-screwup

https://securosis.com/blog/cloud-forensics-101

http://www.slideshare.net/AmazonWebServices/sec316-your-architecture-w-security-incident-response-simulations

http://sysforensics.org/2014/10/forensics-in-the-amazon-cloud-ec2/

 

Docker Security Tools: Audit and Vulnerability Assessment

Dec 1st 2015: first version of this article published
Dec 2nd 2015: UPDATED OpenSCAP section with Atomic scan information and references
Dec 7th 2015: UPDATED Twistlock section after a session/demo with the vendor. Conclusions updated.
Dec 14th 2015: UPDATED OpenSCAP section with a link to a demo made by @ianmiell
Dec 16th 2015: UPDATED the tools list with a new one called Scalock. Updated the conclusion section as well.
Dec 17th 2015: UPDATED Scalock section after some corrections they sent me by email (thanks, guys). I also fixed some typos.
April 7th 2017: UPDATED: Scalock renamed to Aqua Security
Let's suppose you work in security. Now your company decides to run some applications in containers and chooses Docker; after some weeks or months of testing they want to go live, and suddenly someone says "should we do a security audit before going to production?". The rest of the story is you and an audit of a Docker environment.
You can use all your existing arsenal and the procedures you are familiar with to audit the applications running in the containers (file permissions, logs, etc.), but what about the containers, images, Dockerfiles, Docker servers, or even the clustering and orchestration platform? This article is about that.
Considerations for this particular audit:
  1. Check that images, and the packages inside them, are up to date and free of security vulnerabilities.
  2. Audit automation: we must be able to automate all checks. That saves precious time and lets us run them as often as we need; forget doing it manually unless you are just testing or learning.
  3. Container links and volumes. If you use a read-only filesystem in your running containers, "docker diff" can help you find issues (see the sketch after this list).
  4. The bigger an image is, the harder the audit will be, so reduce the size of your images as much as you can.
  5. The host kernel is the shared point between all containers on the same server; keep that kernel up to date.
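Regarding point 3: "docker diff" lists the filesystem changes in a running container (lines marked A for added, C for changed, D for deleted); with a read-only root filesystem the output should stay nearly empty, so anything listed deserves an explanation. A quick sketch (the container name is an example):

docker run -d --name web nginx
docker diff web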
That said, I want to give you an overview of the existing tools I have found to carry out the audit described above. I have probably missed other tools; if so, please point me to them in the comments.
  1. Docker Bench for Security:
    • Description: The Docker Bench for Security is a script that checks for dozens of common best practices around deploying Docker containers in production. The checks are based on the recommendations in the CIS Docker 1.6 Benchmark document.
    • Focus: mostly the Docker server, plus a few tips for images and containers.
    • Language: Shell script
    • Methodology: run the script on the same server where Docker is running, or from a container. It produces a shell report with INFO, WARN or PASS alerts. (A quick invocation sketch follows this list.)
    • License: Apache 2.0
    • Installation/usability level: Easy
    • Demo/Presentation: https://youtu.be/8mUm0x1uy7c?t=18m15s
    • More about audit and vulnerability assessment from Docker Inc.:
      • Project Nautilus: presented during DockerCon 2015 in Barcelona: https://www.youtube.com/watch?v=fLfFFtOHRZQ Project Nautilus is the new image scanning and vulnerability detection service for official repos on Docker Hub. In @diogomonica's words, "Nautilus is already working in the background on all the official images". Nautilus looks for any suspicious piece of software. It does not depend on public vulnerability databases or Linux distros; instead, it looks for vulnerabilities using its own database. We will have more information soon and probably a closer look by Q1 2016. (Thanks, Diogo, for the info.)
    • My comments: from the Docker server/daemon configuration point of view, this is the best tool you can use to make sure you are on the right path. I would definitely use this tool, but in conjunction with others; keep reading.
  2. OpenSCAP Container Compliance:
    • Description: based on the same philosophy as its parent project OpenSCAP, which supports CVE scans, multiple report formats and custom policies. Specific instructions and packages for RedHat 7 are here. Note: SCAP is a U.S. standard maintained by the National Institute of Standards and Technology (NIST), and the OpenSCAP project is an open source collection of tools for implementing and enforcing it.
    • Focus: Images and containers
    • Language: Shell script
    • Methodology: run the oscap-docker command against an image or container and get the results in a very helpful and descriptive HTML report (see the invocation sketch after this list).
    • License: GPL v3
    • Installation/usability level: Easy
    • Demo/Presentation: https://zwischenzugs.wordpress.com/2015/12/14/888/
    • My comments: if you use RedHat/Fedora/CentOS based containers, this one is highly recommended.
    • UPDATE (Dec 2nd 2015): if you use Atomic, they have recently released a new feature that allows you to scan containers for vulnerabilities using OpenSCAP; see the blog post here and the code here.
  3. CoreOS Clair:
    • Description: Clair is a container vulnerability analysis service. It works as an API that analyzes every container layer to find known vulnerabilities using existing package managers such as Debian (dpkg), Ubuntu (dpkg) and CentOS (rpm). It can also be used from the command line, as shown here. It provides a list of vulnerabilities that threaten a container, and can notify users when new vulnerabilities affecting existing containers become known. It is used by https://quay.io/
    • Focus: Images and containers
    • Language: Go
    • Methodology: used via its API or the command line, it extracts all the layers of an image and notifies you whenever vulnerabilities are found, since it stores all the information in a database; it also manages its own vulnerability database, updated from known vulnerability sources.
    • License: Apache v2
    • Installation/usability level: Hard
    • Demo/Presentation: https://coreos.com/blog/vulnerability-analysis-for-containers/
    • My comments: I couldn't make it work on CentOS 7.1. I will add more info here as soon as I have something new.
  4. Banyan Collector:
    • Description: the BanyanOps guys are the ones who started the discussion about the huge number of vulnerable images available in Docker Hub, which was answered in detail by @jpetazzo here. As the author says, "it is a framework for static analysis of Docker container images". That means it does more than security analysis.
    • Focus: Images
    • Language: Go
    • Methodology: although it can run in a container, Banyan Collector can run from the command line and connect to a given Docker registry to perform its analysis. See how it works in detail here.
    • License: Apache 2.0
    • Installation/usability level: Medium-Hard
    • Demo/Presentation: N/A
    • My comments: it is oriented more toward checking registries than being a pure vulnerability assessment tool.
  5. Lynis:
    • Description: Lynis is a Linux, Mac and Unix security auditing and system hardening tool that includes a module to audit Dockerfiles. It also shows some Docker server statistics and checks permissions.
    • Focus: Dockerfile
    • Language: Shell script
    • Methodology: just run Lynis with the proper options and the Dockerfile path, and it will look at the files installed and some other parameters inside the file (see the invocation sketch after this list).
    • License: GPL v3
    • Installation/usability level: N/A
    • Demo/Presentation: N/A
    • My comments: you can kill two birds with one stone, but it is not really useful for a full Docker audit yet. I know the author is willing to add more Docker support.
  6. Twistlock:
    • Description: in the author's words: Twistlock scans container images in registries, on developer workstations, or on production servers. We detect and report vulnerabilities in the Linux distribution layer, app frameworks, and even your custom app packages. In addition to open source threat feeds, it uses commercial threat feeds. Their solution also offers access control over actions based on users and groups, and a very interesting runtime defense that lets you monitor and act on security based on roles, behaviors, compliance, malicious actions and more.
    • Focus: images, containers, packages. Made for Docker with Kubernetes or Mesos.
    • Language: Shell script, JavaScript and Go.
    • Methodology: it uses NIST data to find CVEs and the CIS Docker benchmark for vulnerability assessment. It does more than just that, with features like advanced access control, runtime defense, monitoring and continuous integration. A container called "defender" has to run on every host, and a central console collects and manages all of them from one location.
    • License: commercial, priced by number of hosts. Free Developer Edition for up to 2 hosts, without support.
    • Installation/usability level: not tested; I have seen a live presentation and demo run by the vendor.
    • Demo/Presentation: https://www.youtube.com/watch?v=SMCYHFDfSzk
    • My comments: I have had a meeting with the vendor and now have a better view of what the product is: it is the most complete solution I have seen so far. They cover enterprise-grade security. It is a brand new product with only a few customers, and it has plenty of room to improve and add new features, but it already covers most requirements in a smart way, with enough granularity to let us improve Docker security. Finally, it is important to highlight that it is not just an auditing tool; it is a managed security tool for Docker.
  7. Bitnami Stacksmith:
    • Description: a tool to quickly generate custom Dockerfiles (in Bitnami's words, a declarative API to create containers). It is not intended to be a security tool, but it has a cool feature that helps you detect outdated and vulnerable components while building your Dockerfiles, and even in existing containers built with Stacksmith. It sends you an email when a component has to be updated.
    • Focus: Dockerfiles, images and containers.
    • Language: unknown
    • Methodology: it uses the public CVE database (https://cve.mitre.org) to find CVEs for the given components for vulnerability assessment.
    • License: SaaS
    • Installation/usability level: Easy
    • Demo/Presentation: https://www.youtube.com/watch?v=4A24pD-P_N4
    • My comments: as SaaS it seems to be a very easy tool; from the security point of view it gives the user a clear view of the status of container components, which is very helpful to figure out whether we have vulnerable or outdated containers.
  8. Dockscan:
    • Description: a brand new tool, at a very early stage, released two weeks ago and presented at the BlackHat Europe Arsenal. As per the author: Dockscan is a vulnerability assessment and audit tool for Docker and container installations. It reports on Docker installation security issues as well as Docker container configurations. It helps both system administrators who need to secure Docker, and security auditors and penetration testers who need to audit Docker installations.
    • Focus: Docker server
    • Language: Ruby
    • Methodology: it uses some of the existing CIS Docker 1.6 Benchmark best practices. It works on both local and remote Docker installations.
    • License: GPL v2
    • Installation/usability level: Easy
    • Demo/Presentation: N/A
    • My comments: it has a very short feature list so far, but it looks interesting; I would keep an eye on it, though it should not be used as a mature tool for now.
  9. Drydock (do not confuse it with the Dry-dock cluster):
    • Description: as per the author: drydock is a Docker security audit tool written in Python. It was initially inspired by Docker Bench for Security, but aims to provide a more flexible way of assessing Docker installations and deployments. drydock allows easy creation and use of custom audit profiles in order to eliminate noise and false alarms. Reports are saved in JSON format for easier parsing. drydock makes heavy use of the docker-py client API to communicate with Docker. It is based on the CIS Docker 1.6 Benchmark.
    • Focus: Docker server and containers
    • Language: Python
    • Methodology: it uses some of the existing CIS Docker 1.6 Benchmark best practices to check server configuration options.
    • License: GPL v2
    • Installation/usability level: Easy
    • Demo/Presentation: N/A
    • My comments: it is still at a very early stage of development, though it seems to be ahead of Dockscan. Let's see where this tool goes next; not mature enough to consider a player yet.
  10. Batten:
    • Description: a hardening and auditing tool for Docker hosts and containers. It is pretty much the same as Drydock or Docker Bench for Security.
    • Focus: Docker server and containers
    • Language: Go
    • Methodology: runs as a container and checks the server and containers following the CIS Docker 1.6 Benchmark.
    • License: MIT
    • Installation/usability level: Easy
    • Demo/Presentation: N/A
    • My comments: nothing different from what Drydock or Docker Bench for Security do.
  11. Scalock (now known as Aqua Security):
    • Description: in the author's words: Scalock secures every stage of the container lifecycle. Scalock provides a comprehensive security solution for virtual containers by adding visibility and control to containerized environments, enabling organizations to scale out without security limitations, even on a very large scale. We support major container platforms, including Docker, CoreOS, VMware and Microsoft Windows. It secures virtualized containers on every level: containers, hosts and applications.
    • Focus: images, containers, packages. Made for Docker and Kubernetes, CoreOS, VMware and Microsoft Windows.
    • Language: Go and C/C++.
    • Methodology: it works pretty much the same way Twistlock does, using a central server and agent containers running in privileged mode on every Docker host. It uses Docker Bench for server configuration best practices, and it also uses public vulnerability databases to check for outdated packages (RPMs and/or Debs) and code libraries (Java, Python, PHP, NodeJS, etc.) inside containers and images, using its own scanner database. It can also control AuthZ/AuthN, and implements runtime defense to protect containers from other containers, users or attackers. They use their own kernel module to improve container isolation.
    • License: commercial, priced by number of hosts. In BETA status right now.
    • Installation/usability level: not tested; I have seen a live presentation and demo run by the vendor. It looks straightforward to use.
    • Demo/Presentation: N/A
    • My comments: they contacted me after I published this article and showed me more or less what the product can do and how it looks. It is the biggest competitor to Twistlock at this moment, but it is at a very early stage. Like its competitor, it has huge room to improve and to add more security capabilities as they land in Docker, such as user namespaces. It is not just an auditing tool; it does that correctly, and it is a runtime defense tool as well.
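Before the conclusions, a minimal invocation sketch for three of the open source tools above, following each project's documentation at the time of writing (image, container and file names are examples):

# Docker Bench for Security: run the script on the Docker host
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security && sudo sh docker-bench-security.sh

# OpenSCAP Container Compliance: CVE-scan an image or a container
oscap-docker image-cve centos:7
oscap-docker container-cve my-container

# Lynis: audit a Dockerfile
lynis audit dockerfile /path/to/Dockerfile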
Conclusion:
  • Which of these tools would I use? Considering the early stage of most of them, I would use Docker Bench for Security, OpenSCAP and probably Bitnami Stacksmith, and I would keep an eye on the others. UPDATE: after the meeting I had with @mwithrow, Director of Architecture at Twistlock, and seeing the product details, I think it is the most complete solution so far; see the Twistlock section for details. UPDATE: no doubt we now have to keep an eye on Scalock as well; I'm really looking forward to seeing their next move, since Docker announces new security features every month or so.
  • Most of these tools are very new, released only months or weeks ago, so there is plenty of room to improve them and adapt them to enterprise-scale security. They are a good starting point for addressing audit and vulnerability assessment of our container ecosystem, whether it is a production, test or development environment. I look forward to seeing what the big security vendors have to say about it.
  • In their favor, I have to say it is hard to keep them updated, since Docker is growing really fast and releasing versions with a bunch of new features (including security improvements) almost every week. It is a tough race just to keep any of these tools up to date; I guess that is the price of working with emerging technology.
  • In another post I would like to discuss security in orchestration in more detail, and how to properly audit solutions like Kubernetes.
  • What about incident response? That is another good point to cover in a blog post.
  • There are more tools coming; I'm looking forward to seeing what the big fish (Google, MS, AWS, etc.) have to say about it.

The 10 commandments to avoid disabling SELinux

Well, they are actually 10 ideas or commands 😉
Due to my new role at Alfresco as Senior DevOps Security Architect, I'm doing some cool new stuff (which I will publish here soon), learning a lot, and helping the DevOps team a bit with my security knowledge.
One of the goals I promised myself was to "never disable SELinux", even if that means learning more about it and spending time on it. I can say it is proving a worthwhile investment of my time, and here are some of the results.

This article is not about what SELinux is or is not; you have the Wikipedia for that. A brief description, though: it is a MAC (Mandatory Access Control) implementation in Linux that prevents a process from accessing other processes or files it is not supposed to have access to (open, read, write, etc.).

If you are here, it is because you finally want to start using SELinux and are really interested in making it work, in taming this wild horse. Let me just say one thing: if you really care about security and have dozens of Linux servers in production, keep SELinux enabled, keep it "Enforcing", no question.
That said, here is my list. It is not an exhaustive one; I look forward to your insights in the comments:
  1. Enable SELinux in Enforcing mode:
    • In configuration files (need restart)
      • /etc/sysconfig/selinux (RedHat/CentOS 6.7 and older)
      • /etc/selinux/config (RedHat/CentOS 7.0 and newer)
    • Through commands (no restart required)
      • setenforce Enforcing
    • To check the status, use:
      • sestatus # or the getenforce command
  2. Use the right tools. To do cool things you need cool tools, we will need some of them:
    • yum install -y setools-console policycoreutils-python setroubleshoot-server
    • policycoreutils-python comes with the great semanage command, the lord of the SELinux commands
    • setools-console comes with seinfo, sesearch and sechecker, among others
    • from the setroubleshoot-server package we will use sealert to easily identify issues (see the troubleshooting sketch after this list)
  3. Get to know what is going on. Dealing with SELinux happens mostly while installing, configuring and testing Linux services, so when a service or application is not working as expected, or not starting as it should, you always think "Damn SELinux, let's disable it". Forget about that; check the proper place to see what is going on: the audit logs. Look at /var/log/audit/audit.log for lines containing "denied":
    • tail -f /var/log/audit/audit.log | perl -pe 's/(\d+)/localtime($1)/e'
    • the perl command converts the Epoch (UNIX or POSIX) timestamps in audit.log to human-readable time.
  4. See the extended attributes in the file system that SELinux use:
    • ls -ltraZ # most important here is the Z
    • ls -ltraZ /etc/nginx/nginx.conf will show:
      • -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 /etc/nginx/nginx.conf
      • where system_u: is the user (not always a system user), object_r: the role and httpd_config_t: the object type; other objects can be a directory, a port or a socket, and object types can be a config file, a log file, etc.; finally, s0 is the level or category of that object.
  5. See the SELinux attributes that applies to a running process:
    • ps auxZ
      • You need to know this command in case of issues.
  6. Who am I for SELinux:
    • id -Z
      • You need to know this command in case of issues.
  7. Check, enable or disable the SELinux booleans that tune the policy per daemon:
    • getsebool -a # list all current status
    • setsebool -P docker_connect_any 1 # allow Docker to connect to all TCP ports
    • semanage boolean -l # an alternative command
    • semanage fcontext -l # to see all contexts where SELinux applies
  8. Add a non default directory or file to be used by a given daemon:
    • For a folder used by a service, e.g. changing the MySQL data directory:
      • Change your default data directory in /etc/my.cnf
      • semanage fcontext -a -t mysqld_db_t "/var/lib/mysql-default(/.*)?"
      • restorecon -Rv /var/lib/mysql-default
      • ls -lZ /var/lib/mysql-default
    • For a new file used by a service, e.g. a new index.html file for Apache:
      • semanage fcontext -a -t httpd_sys_content_t '/myweb/web1/html/index.html'
      • restorecon -v '/myweb/web1/html/index.html'
  9. Add a non-default port to be used by a given service:
    • e.g., if you want nginx to listen on an additional port:
      • semanage port -a -t http_port_t -p tcp 2100
      • semanage port -l | grep http_port # check that the change is effective
  10. Spread the word!
    • SELinux is not easy, but writing easy tips gets people to use it, making the Internet a safer place!
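As promised in tips 2 and 3, here is a minimal troubleshooting sketch putting sealert and friends together (mymodule is an example name; always review a generated policy module before installing it):

sealert -a /var/log/audit/audit.log  # analyze logged denials and print human-readable suggestions
ausearch -m avc -ts recent           # list recent AVC denials
audit2allow -a -M mymodule           # build a local policy module from the logged denials
semodule -i mymodule.pp              # install the module, only after reviewing mymodule.te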

Alfresco, NAS or SAN, that’s the question!

The main requirement for the shared storage is being able to cross-mount it between the Alfresco servers. Whether this is done via a NAS or a SAN is partly a decision about which technology your organization's IT department can best support. Faster storage will have positive implications for the performance of the system; Alfresco recommends a throughput of 200 MB/s.

A NAS allows us to mount the content store via NFS or CIFS on all Alfresco servers, which can then read/write the same file system at the same time. The only real requirement is that the OS on which Alfresco is installed supports NFS (which is any Linux box, actually). NFS tends to be cheaper and easier, but it is not the fastest option. It is typically sufficient, though.
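For example, the cross-mount on each Alfresco node might look like this (a sketch; the NFS server nas.example.com, its export path and the dir.root location /opt/alfresco/alf_data are hypothetical):

sudo mount -t nfs -o rw,hard,intr,noatime nas.example.com:/export/alf_data /opt/alfresco/alf_data
# or permanently, in /etc/fstab:
# nas.example.com:/export/alf_data  /opt/alfresco/alf_data  nfs  rw,hard,intr,noatime  0 0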

A SAN is typically faster and more reliable, but obviously more expensive and complex (dedicated hardware and configuration requirements). In order to read/write from all Alfresco servers to the SAN, special file system types are necessary: for Red Hat we use GFS2; other Linux flavors use OCFS or others.

You may be thinking that having multiple Alfresco servers writing to the same LUN could result in corruption (especially in header files), and that a NAS (NFS/CIFS) would take care of that issue; with a SAN, the filesystem must indeed be managed properly to allow read/write from multiple servers. From the Alfresco standpoint, however, you don't have to worry in either the SAN or the NAS approach, because Alfresco manages the I/O so that no collisions or corruption occur.

I also want to share this presentation I did internally some time ago; I think it may be useful.

Screencast: Alfresco One AOS (Alfresco Office Services)

Alfresco Office Services is the new implementation for Alfresco One (formerly Enterprise) that allows a user to "Edit Online" a document with MS Office straight from Alfresco Share, providing a fully SharePoint-compatible repository. It replaces the VTI implementation (Microsoft Office SharePoint Protocol Support) still found in Alfresco Community.

With Alfresco Office Services (AOS) you can access Alfresco directly from your Microsoft Office applications. This means that you can browse, open, and save Microsoft Office files (Word, PowerPoint, and Excel) in Alfresco without going through Chrome, Firefox, or another web browser (see the official documentation here). With AOS you can also connect to Alfresco as a network drive or shared folder.
The main differences between this new AOS and the existing VTI are:
  • The embedded Jetty server was removed, so there is no need to use port 7070.
  • You can select the document type when saving a document from MS Office to Alfresco, and fill in the type properties within MS Office.
  • AOS is part of the core; there is no need to install an additional AMP.
  • A new ROOT.war and _vti_bin.war have to be deployed (included in Alfresco One); if you are upgrading from previous versions, please check this information.
Here is a quick demo of how it works and how to use it:

Docker Security Best Practices

Note: I wrote this article for SecurityByDefault.com on 18/05/2015; I hope you enjoy it.
Docker is an open platform to build, ship and run distributed applications. It is based on containers that run on Linux and work on both physical and virtual machines using just a runtime. It is written in Go and uses operating system libraries as well as kernel features of the Linux host it runs on. It consists of an engine with a RESTful API and a client, which can run on the same machine or on separate ones. It is Open Source (Apache 2.0) and free.
Containers have existed for many years; Docker did not invent anything in that sense, or almost nothing, but it deserves credit: it arrived at the right moment and provides the specific features and tooling needed today, where portability, scalability, high availability and microservices in distributed applications are increasingly used, and also better understood by the developer and sysadmin communities. Fewer and fewer monolithic applications are being developed, and more are based on modules or microservices, which allow more agile, faster and more portable development. Well-known companies such as Netflix, Spotify or Google, and countless startups, use microservice-based architectures in many of the services they offer.
You may be wondering: isn't this more or less the same as chrooting an application? That would be like comparing a wheel to a car. The chroot concept is similar in that it isolates an application, but Docker goes much further; it is a chroot on steroids, lots of steroids. For example, it can limit and control the resources the application in the container accesses. Containers generally use their own layered filesystem such as UnionFS or variants like AUFS, btrfs, vfs, OverlayFS or Device Mapper. The resources and capabilities inherited from the host are controlled through Linux namespaces and cgroups. Those Linux features are not new at all, but Docker makes them easy to use, and the ecosystem around it has made them so widely adopted.
Additionally, the flexibility, convenience and resource savings of a container exceed what a virtual machine or a physical server provides, in many use cases though not all. For example, three web servers for an Nginx cluster, each in a VM with a minimal CentOS Linux install, would take about 400 MB each, 1.2 GB of disk in total for the 3 machines; with containers, the same 3 machines running take 400 MB, since the same image is shared by multiple containers. That is just one interesting resource-level feature. Another very common use of Docker is application portability: imagine an application that only works with Python 3.4, where making it work on a Linux system with Python 2.x is complicated; think about what upgrading Python on a production system can mean. With containers it is almost automatic: pull the container image and run the application in question.
Just to put Docker's scale in context, some numbers around the product and the company (source here):
  • 95 million dollars of investment.
  • Valued at 1 billion dollars.
  • More than 300 million downloads across 96 releases since March 2013.
But a container is not for everything, and there is no need to go crazy "dockerizing" everything, although this is not the place for that discussion. As the way we develop, deploy and maintain applications changes, the way we secure these new actors also changes to some extent.
Docker provides layered security: it isolates applications from each other and from the host without heavy resource usage. Containers can also be deployed inside virtual machines, which adds another layer of isolation (you may be thinking of VENOM, but that is another story that does not directly affect Docker). Given Docker's architecture and using good practices, applying security patches to the host or to applications is usually faster and less painful.
Security best practices:
Although security is innate to a container, Docker Inc. is making an effort on security; for example, a few months ago they hired security engineers from Square, who are hardly new to the field. Together with companies such as VMware, among others, they recently published an extensive report on Docker security best practices through CIS. Thanks to that report we have access to more than 90 security recommendations to always keep in mind when using Docker in production. The following table shows the suggested security recommendations; some are quite obvious, but a checklist like this never hurts:
1. Host-level recommendations
1.1. Create a separate partition for containers
1.2. Use an updated Linux kernel
1.3. Do not use development tools in production
1.4. Harden the host system
1.5. Remove all non-essential services from the host system
1.6. Keep Docker up to date
1.7. Only allow authorized users to control the Docker daemon
1.8. Audit the Docker daemon (auditd)
1.9. Audit the Docker file or directory – /var/lib/docker
1.10. Audit the Docker file or directory – /etc/docker
1.11. Audit the Docker file or directory – docker-registry.service
1.12. Audit the Docker file or directory – docker.service
1.13. Audit the Docker file or directory – /var/run/docker.sock
1.14. Audit the Docker file or directory – /etc/sysconfig/docker
1.15. Audit the Docker file or directory – /etc/sysconfig/docker-network
1.16. Audit the Docker file or directory – /etc/sysconfig/docker-registry
1.17. Audit the Docker file or directory – /etc/sysconfig/docker-storage
1.18. Audit the Docker file or directory – /etc/default/docker
 
2. Docker Engine (daemon) recommendations
2.1. Do not use the obsolete lxc execution driver
2.2. Restrict network traffic between containers
2.3. Set the desired logging level
2.4. Allow Docker to make changes to iptables
2.5. Do not use insecure registries (without TLS)
2.6. Set up a local registry mirror
2.7. Do not use aufs as the storage driver
2.8. Do not bind the Docker daemon to a different IP/port or Unix socket
2.9. Configure TLS authentication for the Docker daemon
2.10. Set the default ulimit appropriately

3. Docker configuration recommendations
3.1. Verify that the docker.service file ownership is set to root:root
3.2. Verify that docker.service file permissions are 644 or more restrictive
3.3. Verify that the docker-registry.service file ownership is set to root:root
3.4. Verify that docker-registry.service file permissions are 644 or more restrictive
3.5. Verify that the docker.socket file ownership is set to root:root
3.6. Verify that docker.socket file permissions are 644 or more restrictive
3.7. Verify that the Docker environment file (/etc/sysconfig/docker or /etc/default/docker) ownership is set to root:root
3.8. Verify that the Docker environment file (/etc/sysconfig/docker or /etc/default/docker) permissions are 644 or more restrictive
3.9. Verify that the /etc/sysconfig/docker-network file (if systemd is used) ownership is set to root:root
3.10. Verify that /etc/sysconfig/docker-network file permissions are 644 or more restrictive
3.11. Verify that the /etc/sysconfig/docker-registry file (if systemd is used) ownership is set to root:root
3.12. Verify that /etc/sysconfig/docker-registry file (if systemd is used) permissions are 644 or more restrictive
3.13. Verify that the /etc/sysconfig/docker-storage file (if systemd is used) ownership is set to root:root
3.14. Verify that /etc/sysconfig/docker-storage file (if systemd is used) permissions are 644 or more restrictive
3.15. Verify that the /etc/docker directory ownership is set to root:root
3.16. Verify that /etc/docker directory permissions are 755 or more restrictive
3.17. Verify that the registry certificate file ownership is set to root:root
3.18. Verify that registry certificate file permissions are 444 or more restrictive
3.19. Verify that the TLS CA certificate file ownership is set to root:root
3.20. Verify that TLS CA certificate file permissions are 444 or more restrictive
3.21. Verify that the Docker server certificate file ownership is set to root:root
3.22. Verify that Docker server certificate file permissions are 444 or more restrictive
3.23. Verify that the Docker server certificate key file ownership is set to root:root
3.24. Verify that Docker server certificate key file permissions are 400
3.25. Verify that the Docker socket file ownership is set to root:docker
3.26. Verify that Docker socket file permissions are 660 or more restrictive
 
4. Container images and Dockerfiles
4.1. Create a user for the container
4.2. Use trusted images for containers
4.3. Do not install unnecessary packages in the container
4.4. Rebuild images when necessary to include security patches
 
5. Container runtime (several of these are illustrated in the sketch after this checklist)
5.1. Verify the AppArmor profile (Debian or Ubuntu)
5.2. Verify the SELinux security options (RedHat, CentOS or Fedora)
5.3. Verify that containers run a single main process
5.4. Restrict Linux kernel capabilities within containers
5.5. Do not use privileged containers
5.6. Do not mount sensitive host directories in containers
5.7. Do not run ssh inside containers
5.8. Do not map privileged ports inside containers
5.9. Open only the ports needed by a container
5.10. Do not use "host network" mode on a container
5.11. Limit memory usage per container
5.12. Set CPU priority appropriately
5.13. Mount a container's root filesystem as read-only
5.14. Limit inbound container traffic through a specific host interface
5.15. Set a container's 'on-failure' restart policy to 5
5.16. Do not share the host's process namespace (PID) with containers
5.17. Do not share the host's IPC with containers
5.18. Do not expose host devices directly in containers
5.19. Override the default ulimit at runtime only if needed
 
6. Docker security operations
6.1. Perform regular security audits of both the host and the containers
6.2. Monitor container usage, performance and metrics
6.3. Endpoint protection platform (EPP) for containers (if any)
6.4. Back up container data
6.5. Use a centralized, remote service for log collection
6.6. Avoid storing images that are obsolete, incorrectly tagged, or kept in bulk.
6.7. Avoid storing containers that are obsolete, incorrectly tagged, or kept in bulk.
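To make the runtime section concrete, here is a minimal docker run sketch applying several of the recommendations above (5.4/5.5: dropped capabilities, no privileges; 5.8/5.9: a non-privileged host port; 5.11: a memory limit; 5.13: a read-only root filesystem; 5.15: the on-failure restart policy). The image and port are examples, and --tmpfs requires a recent Docker version:

docker run -d \
  --read-only --tmpfs /var/run --tmpfs /var/cache/nginx \
  --memory 512m \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --restart on-failure:5 \
  -p 8080:80 \
  nginx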
In some cases, a single recommendation deserves an article of its own. If you want to dig deeper into this topic, remember that we will expand on these security and audit aspects during the online course "Hardening de Windows, Linux e Infraestructuras", in which I will collaborate with Lorenzo Martínez, Yago Jesús, Juan Garrido and Pedro Sanchez, a real luxury of a course where I will contribute my grain of sand on Docker security, completing the Linux hardening module. More information here: https://www.securizame.com/curso-online-de-hardening-de-sistemas-windows-y-linux-e-infraestructuras_yj/
For possible future articles, I find it interesting to look at some security considerations for Docker Hub and other related components, as well as container audits with Lynis.
Resources and references:

 

Book review: “Learning Alfresco Web Scripts”

Recently, Packt Publishing released the book "Learning Alfresco Web Scripts", written by Ramesh Chauhan: https://www.packtpub.com/web-development/learning-alfresco-web-scripts. In a nutshell, it is a starting point for learning how to develop web scripts, from scratch to success.

If you often read this blog, you may already know what Alfresco is and how it works. As per the Alfresco Wiki: a web script is simply a service bound to a URI which responds to HTTP methods such as GET, POST, PUT and DELETE. While using the same underlying code, there are broadly two kinds of web scripts: data and presentation web scripts.

The book shows the reader what it takes to be a web script developer: understanding the Alfresco web script framework and how it works, its components and architecture; writing a web script from scratch; the types and options of web scripts and their components; how to use them from third-party applications (very interesting for integrating Alfresco with others); embedding Java in web scripts, also known as Java-backed web scripts; and using web scripts with JavaScript as well. It also covers all the deployment options, debugging and troubleshooting, and the very important Maven options available for web script deployment.

I liked this book because it goes from foundational information to really deep concepts, so if you are looking to start learning web scripts from scratch and go beyond, it is a good single point of reference. This is a pure web scripts book; if you are looking for a book updated for 5.0, this is not it (it does not cover Aikau), but remember that it covers the most important topics for working with the different flavours of web scripts. And after all, it is oriented to both beginners and advanced developers.