Prowler 2.0: New release with improvements and new checks ready for re:Invent and BlackHat EU

Taking advantage of AWS re:Invent this week and BlackHat Europe next week, I wanted to push out a new version of Prowler.

In case you are new to Prowler:

Prowler is an AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool. It follows the guidelines of the CIS Amazon Web Services Foundations Benchmark and adds DOZENS of additional checks, including the GDPR and HIPAA groups. The official CIS benchmark for AWS is available here.

This new version has more than 20 new checks (out of 90+ in total), including the GDPR and HIPAA groups of checks, added as a reference to help organizations check the status of their infrastructure against those regulations. Prowler has also been refactored to allow easier extensibility. Another important feature is the JSON output, which allows Prowler to be integrated with, for example, Splunk or Wazuh (more about that soon!). For all the details about what is new, fixed and improved, please see the release notes here: https://github.com/toniblyx/prowler/releases/tag/2.0
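If you want to try the new groups and the JSON output, here is a minimal sketch (it assumes the -g group and -M output-mode flags described in the 2.0 release notes; adjust to your own setup):

# run only the GDPR group of checks
./prowler -g gdpr
# run everything and keep a JSON report, e.g. for Splunk or Wazuh ingestion
./prowler -M json > prowler-report.json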

For me, personally, there are two main benefits to Prowler. First of all, it helps many organizations and individuals around the world improve their security posture on AWS: using just one simple command, they realize what they have to do and how to get started with their hardening. Second, I’m learning a lot about AWS, its API, features, limitations, differences between services and AWS security in general.

That said, I’m very happy to present Prowler 2.0 at BlackHat Europe next week in London! It will be at the Arsenal, where I’ll talk about AWS security and show all the new features: how it works, how to take advantage of all the checks and output methods, and some other cool things. If you are around, please come by and say hello; I’ve got a bunch of laptop stickers! Here are all the details. Location: Business Hall, Arsenal Station 2. Date: Wednesday, December 5 | 3:15pm-4:50pm. Track: Vulnerability Assessment. Session Type: Arsenal.

BIG THANKS!

I want to thank the open source community that has helped out since day one; almost a thousand stars on GitHub and more than 500 commits speak for themselves. Prowler has become pretty popular out there, and all the community support is awesome; it motivates me to keep up with improvements and features. Thanks to you all!!

Prowler future?

The main goals for future versions are to improve speed and reporting, including switching the code base to Python to support the existing checks as well as new ones written in any language.

If you are interested in helping out, don’t hesitate to reach out to me. \m/

Prowler 1.6: AWS Security Best Practices Assessment and Forensics Readiness Tool

It looks like Prowler has become a popular tool for those concerned about AWS security. I originally built Prowler to solve an internal requirement we have here at Alfresco. I decided to make it public and started getting a lot of feedback, pull requests, comments, advice, bug reports and new ideas, and I keep pushing to make it better and more comprehensive, following what the cloud security community seems to need.
I know Prowler is not the best tool out there, but it does what I wanted it to do: “take a picture of my AWS account (or accounts) security settings and tell me where to start working to improve them”. Do the basics, at least. And that’s what it does. I would use other tools to track service changes, etc.; I also discuss that in my talks.
Currently, Prowler performs 74 checks (for the entire list, run `prowler -l`), 52 of which are part of the CIS benchmark.

Digital Forensics readiness capabilities in Prowler 1.6

`prowler -c forensics-ready`
I’m into DFIR; I love it, I read a lot about cloud digital forensics and incident response, and I enjoy investing my R&D time in that subject. I’m also concerned about random and targeted attacks against cloud infrastructure. For the talk I’m giving today at the SANS Cloud Security Summit 2018 in San Diego, I wanted to show something new, so I thought about adding new checks to Prowler related to forensics readiness: checks that make sure you have all (or as much as possible) of what you need to perform a proper investigation in case of an incident. By the way, those logs are not enabled by default in any AWS account. Some of those checks are included and well described in the current CIS benchmark for AWS, or in the CIS benchmark for AWS three-tier web deployments (another hardening guide that is far less popular but pretty interesting too), but other checks are not included anywhere. For example, I believe it is a good idea to keep a record of your API Gateway logs in your production accounts, or even your ELB logs, among many others. So when you run `prowler -c forensics-ready` you now get the status of your resources across all regions, and you can make sure you are logging everything you may eventually need in case of a security incident. Currently these are the supported checks (https://github.com/toniblyx/prowler#forensics-ready-checks):
  • 2.1 Ensure CloudTrail is enabled in all regions (Scored)
  • 2.2 Ensure CloudTrail log file validation is enabled (Scored)
  • 2.3 Ensure the S3 bucket CloudTrail logs to is not publicly accessible (Scored)
  • 2.4 Ensure CloudTrail trails are integrated with CloudWatch Logs (Scored)
  • 2.5 Ensure AWS Config is enabled in all regions (Scored)
  • 2.6 Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Scored)
  • 2.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Scored)
  • 4.3 Ensure VPC Flow Logging is Enabled in all VPCs (Scored)
  • 7.12 Check if Amazon Macie is enabled (Not Scored) (Not part of CIS benchmark)
  • 7.13 Check if GuardDuty is enabled (Not Scored) (Not part of CIS benchmark)
  • 7.14 Check if CloudFront distributions have logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.15 Check if Elasticsearch Service domains have logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.17 Check if Elastic Load Balancers have logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.18 Check if S3 buckets have server access logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.19 Check if Route53 hosted zones are logging queries to CloudWatch Logs (Not Scored) (Not part of CIS benchmark)
  • 7.20 Check if Lambda functions are being recorded by CloudTrail (Not Scored) (Not part of CIS benchmark)
  • 7.21 Check if Redshift cluster has audit logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.22 Check if API Gateway has logging enabled (Not Scored) (Not part of CIS benchmark)
Screenshot while running the `forensics-ready` group of checks, showing only the first three checks of the group
I haven’t added an RDS logging check yet, and I’m probably missing many others, so please feel free to open an issue on GitHub and let me know!
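As a quick sketch of how you might run it against a specific account and keep the report (the -p profile and -r region flags are standard Prowler options, but treat the exact invocation as an assumption for your setup):

./prowler -p my-audit-profile -r us-east-1 -c forensics-ready | tee forensics-ready-report.txt
# my-audit-profile is a hypothetical AWS CLI profile with read-only/security-audit permissions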
If you want to check out the slide deck from my talk at the SANS Cloud Security Summit 2018 in San Diego, have a look here: https://github.com/toniblyx/SANSCloudSecuritySummit2018

Cloud Forensics: CAINE7 on AWS

If you work with AWS, you may have to perform a forensic analysis at some point. As discussed in previous articles here, there are many tasks we can achieve in the cloud.
Here is a quick guide, based on the AWS CLI, on how to install, upload and use the well-known CAINE7 distribution up in the Amazon cloud by importing it as an EC2 AMI:
  • First of all, boot CAINE7.iso as a live CD in VirtualBox; 12GB of disk in VHD format will be fine (if you have a VMDK instead of a VHD, you can convert it with "VBoxManage clonemedium CAINE7.vmdk CAINE7.vhd --format vhd")
  • Inside CAINE:
    • Run the BlockON/OFF app from the desktop icon, select your virtual hard drive and make it writable.
    • Go to Menu / System / Administration / gParted
    • In gParted, go to Device / Create Partition Table… and choose msdos
    • Create a new 10GB partition and leave the rest empty
    • Create another partition of type linux-swap with the remaining 2GB
    • Edit / Apply All Operations
    • Run Systemback (installer) from the desktop icon.
    • Choose System Install and fill in the form with user full name: caine, system user: ec2-user, your password and hostname: caine. Then click Next.
    • Select the 10GB partition and set the mount point /
    • Click Next and the installation will start.
  • Once the installation has finished you can stop the virtual machine, remove the live CD, start it again and log in to the VM to do some additional steps inside your freshly installed CAINE7.
  • Update and upgrade:
    • sudo apt-get update; sudo apt-get upgrade
  • Install the AWS CLI:
    • sudo pip install awscli
  • Now we will install some dependencies needed to get access via RDP once we run CAINE in AWS, just as if it were on our local workstation.
    • sudo apt-get install xrdp curl
    • sudo sed -i s/port=-1/port=ask-1/g /etc/xrdp/xrdp.ini
    • sudo sed -i 's#\. /etc/X11/Xsession#mate-session#g' /etc/xrdp/startwm.sh
    • sudo service xrdp restart
  • Extra: install the Amazon EC2 Simple Systems Manager (SSM) agent to process Run Command requests remotely and in an automated way:
    • cd /tmp
    • curl https://amazon-ssm-<region>.s3.amazonaws.com/latest/debian_386/amazon-ssm-agent.deb -o amazon-ssm-agent.deb
    • sudo dpkg -i amazon-ssm-agent.deb
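    • Optionally, once the instance is running in AWS and the agent has registered, you can test Run Command from your workstation; this is just a sketch using the stock AWS-RunShellScript document (the instance ID is a placeholder):
    • aws ssm send-command --instance-ids i-XXXX --document-name "AWS-RunShellScript" --parameters commands="uptime"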
  • Now we have to upload the VM's VHD file to an S3 bucket; it will be around 8GB.
    • aws s3 cp CAINE7.vhd  s3://your-forensics-tools-bucket/CAINE7.vhd
    • This will take time, depending on your bandwidth.
  • If the AWS IAM user you are using doesn't have the proper permissions, you should review and follow these prerequisites: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html
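  • As a minimal sketch of those prerequisites (the file names here are assumptions; the trust policy must allow the vmie.amazonaws.com service to assume the role with the external ID vmimport, exactly as the AWS documentation above describes), the import service needs a vmimport role that can read your bucket:
    • aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
    • aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json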
  • Then we can import this virtual hard drive as an AWS AMI. First create a JSON file like the one below to use as a parameter for the import task (caine7vm.json):
[{
    "Description": "CAINE7",
    "Format": "vhd",
    "UserBucket": {
        "S3Bucket": "your-forensics-tools-bucket",
        "S3Key": "CAINE7.vhd"
    }
}]
  • Let's perform the import:
    • aws ec2 import-image --description "CAINE7" --disk-containers file://caine7vm.json --profile default --region us-east-1
    • NOTE: you probably don’t need to specify profile or region.
  • The import task may take some minutes, depending on how big the VHD is and how busy AWS is at that time. To check the status use this command:
    • aws ec2 describe-import-image-tasks --profile default --region us-east-1 --query 'ImportImageTasks[].[ImportTaskId,StatusMessage,Progress]'
    • or this one with your specific "import-ami-XXXXX":
    • aws ec2 describe-import-image-tasks --profile default --region us-east-1 --query 'ImportImageTasks[].[ImportTaskId,StatusMessage,Progress]' --cli-input-json "{ \"ImportTaskIds\": [\"import-ami-XXXXX\"]}"
    • You will see "StatusMessage": "pending" -> "validated" -> "converting" -> "preparing to boot" -> "booted" -> "preparing ami" -> "completed"
  • Once it is completed, look for your brand new AMI id:
    • aws ec2 describe-images --owners self --profile default --region us-east-1 --filters "Name=name,Values=import-ami-XXXXX"
  • Good, we know the AMI ID, so let's launch a new instance inside an existing VPC and a public subnet (I use a t2.medium, which has 4GB of RAM); please use your own security group with RDP and SSH open and your own SSH key name:
    • aws ec2 run-instances --image-id ami-XXXX --count 1 --instance-type t2.medium --key-name YOURKEY --security-group-ids sg-YOURSG --subnet-id subnet-YOURPUBLIC --profile default --region us-east-1
  • Add a tag to it for easier identification:
    • aws ec2 create-tags --resources i-XXXX --tags Key=Name,Value=Investigator --profile default --region us-east-1
  • At this point you can attach a public IP to the instance and get access to it.
  • First allocate a public Elastic IP:
    • aws ec2 allocate-address --domain vpc --profile default --region us-east-1
  • Then associate that new Elastic IP with our just-launched CAINE7 instance (change eipalloc-XXXX):
    • aws ec2 associate-address --instance-id i-XXXX --allocation-id eipalloc-XXXX --profile security --region us-east-1
  • Now open your favorite remote desktop application and connect to your CAINE7 instance; remember that you will be asked for the username and password you set when CAINE was installed in your VirtualBox VM:


  • Now you should be in!
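  • Optionally, instead of exposing RDP directly to the Internet, you can tunnel it over SSH; this is only a sketch and assumes password authentication works for the ec2-user account created during the install:
    • ssh -L 3389:localhost:3389 ec2-user@YOUR.ELASTIC.IP
    • then point your RDP client at localhost:3389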


[ES] Forensic analysis in AWS: an introduction

English version here.

AWS is always monitoring for any unauthorized use of its (our) resources. If you have dozens of services running on AWS, at some point you will be notified about an incident for any of several reasons, such as accidentally sharing a password on GitHub, a server misconfiguration that makes it easy to attack, vulnerable services, DoS or DDoS, 0-days, etc. So you must be prepared to perform a forensic analysis and/or handle incident response for your AWS infrastructure.

Remember, in case of an incident you must keep calm and try to follow a defined procedure; don't leave this process to chance, because you or your boss will probably be nervous enough not to wait or think. It is always much better to follow a previously tested guide than your intuition (you will use that later).

WARNING: if you have arrived at this article in desperation through a Google search, I recommend trying all of these commands first in a controlled environment such as your lab or test systems. As I said, you should have an incident response and forensic analysis guide in place before an incident happens.

In this article I want to recommend some steps and tricks that we have used at some point. I assume you already have the AWS command line client installed; if not, look here: http://docs.aws.amazon.com/general/latest/gr/GetTheTools.html. All the commands are based on a possibly compromised EC2 instance (Linux), but most of these "aws" commands can also be used for Windows servers, although I haven't tested that. All of these actions can also be performed through the AWS Console. These are some of the steps to keep in mind:

1) Disable or delete the access key. If an AWS access key has been compromised (AWS will let you know by email, or you will notice it when you get a huge bill) or you realize you have published it on GitHub:

aws iam list-access-keys
aws iam update-access-key --access-key-id AKIAIOSFODNN7EXAMPLE \
--status Inactive --user-name Bob
aws iam delete-access-key --access-key AKIDPMS9RO4H3FEXAMPLE \
--user-name Bob

2) If the key has been compromised, check whether any resources have been created using that key, in all regions. It is common to find that someone has used your keys to launch EC2 instances in other AWS regions, so check all of them for instances that look suspicious. Here is an example that searches for instances created in the us-east-1 region since March 9th, 2016:

aws ec2 describe-instances --region us-east-1 \
--query 'Reservations[].Instances[?LaunchTime>=`2016-03-9`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'

3) Contact the AWS support team and let them know about the incident; they are always willing to help and, if necessary, will escalate to the AWS security team.

4) Isolate the instance. In this case, when I talk about YOUR.IP.ADDRESS.HERE, it can be your office's public IP or an intermediate server from which to jump in and do the analysis:

    • Create a security group to isolate the instance; watch out for the difference between EC2-Classic and EC2-VPC, and take note of the Group-ID
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate EC2-Classic instances"
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate a EC2-VPC instance" --vpc-id vpc-1a2b3c4d 
# where vpc-1a2b3c4d is the VPC ID that the instance is member of
    • Configure a rule to allow SSH only from your public IP, but first you need to know your public IP:
dig +short myip.opendns.com @resolver1.opendns.com
aws ec2 authorize-security-group-ingress --group-name isolation-sg \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
aws ec2 authorize-security-group-ingress --group-id sg-BLOCK-ID \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32 
# note the difference between both commands in group-name
# and group-id, sg-BLOCK-ID is the ID of your isolation-sg
    • EC2-Classic security groups do not support outbound rules (only inbound). However, for EC2-VPC security groups, outbound rules can be configured with these commands:
aws ec2 revoke-security-group-egress --group-id sg-BLOCK-ID \
--protocol '-1' --port all --cidr '0.0.0.0/0'
# removed rule that allows all outbound traffic
aws ec2 authorize-security-group-egress --group-id sg-BLOCK-ID \
--protocol 'tcp' --port 80 --cidr '0.0.0.0/0'
# place a port or IP if you want to enable some other outbound
# traffic, otherwise do not execute this command.
    • Apply that security group to the compromised instance:
aws ec2 modify-instance-attribute --instance-id i-INSTANCE-ID \
--groups sg-BLOCK-ID 
# where sg-BLOCK-ID is the ID of your isolation-sg
aws iam put-user-policy --user-name MyUser --policy-name MyPowerUserRole \
--policy-document file://C:\Temp\MyPolicyFile.json

5) Tag the instance to mark it as "under investigation":

aws ec2 create-tags --resources i-INSTANCE-ID \
--tags "Key=Environment, Value=Quarantine:REFERENCE-ID"

6) Save the instance metadata:

    • More information about the compromised instance:
aws ec2 describe-instances --instance-ids i-INSTANCE-ID > forensic-metadata.log
or
aws ec2 describe-instances --filters "Name=ip-address,Values=xx.xx.xx.xx"
    • The console output, which can be useful depending on the type of attack or compromise, although remember that you should have a centralized logging system:
aws ec2 get-console-output --instance-id i-INSTANCE-ID

7) Create a snapshot of the volume or volumes of the compromised instance for the forensic analysis:

aws ec2 create-snapshot --volume-id vol-xxxx --description "IR-ResponderName- Date-REFERENCE-ID"
    • That snapshot will not be modified or mounted; instead, we will work with a volume.

8) Now we can follow one of two paths. Stop the instance:

aws ec2 stop-instances --instance-ids i-INSTANCE-ID
    • Or leave it running, if we can, in which case we should also isolate it from the inside (iptables) and dump its RAM using LiME.

9) Create a volume from the snapshot:

    • Think about which region you are going to use, as well as other options such as --region us-east-1 --availability-zone us-east-1a --volume-type, and adjust the following commands accordingly:
aws ec2 create-volume --snapshot-id snap-abcd1234
    • Take note of the volume:
aws ec2 describe-volumes

10) Mount that volume on your favorite forensic analysis distribution and start the investigation.

I will add more information in future articles; for now, this is a good introduction.

If you want to learn much more about this topic, I will be giving an online course on forensic analysis in AWS, GCE and Azure, in Spanish, with Securizame; more info here.


Some references I have used:

https://securosis.com/blog/my-500-cloud-security-screwup

https://securosis.com/blog/cloud-forensics-101

http://www.slideshare.net/AmazonWebServices/sec316-your-architecture-w-security-incident-response-simulations

http://sysforensics.org/2014/10/forensics-in-the-amazon-cloud-ec2/

Forensics in AWS: an introduction

Spanish version here.

AWS is always monitoring for unauthorized usage of its (our) resources up in the cloud. If you have dozens of services running on AWS, at some point you are likely to be warned about a security issue for a variety of reasons: accidentally sharing a key on GitHub, a server misconfiguration making it easily exploitable, services with vulnerabilities, DoS or DDoS, 0-days, etc. So be ready to perform forensics and/or incident response on your AWS infrastructure.


Remember, in case of a security incident, keep calm and follow a predefined procedure; don’t leave the process to improvisation, because you or your boss will probably be too nervous to wait or think. It is always much better to follow a proven guide than just your intuition (you will use your intuition later).


WARNING: you may have come to this article in a desperate Google search for a solution; in that case, I recommend testing all of the commands mentioned here in your lab environment first. You should have an incident response and forensics guide with information like this in place before an incident actually happens.


In this article I want to write up some recommended steps, along with tips and tricks we have used at some point. I assume you have the AWS command line tools installed correctly; otherwise look here: http://docs.aws.amazon.com/general/latest/gr/GetTheTools.html. All commands are based on a compromised Linux EC2 server, but most of the “aws” CLI commands can also be used for Windows servers (not tested, though). If you are wondering whether you can perform all the actions mentioned below using the AWS Console UI: yes, but I think the command line is faster and more straightforward to follow in case of an incident:


1) Disable or delete the access key. If your AWS access key has been compromised (AWS will let you know in their communication, or you may notice it in a different manner, e.g. by looking at your code published on GitHub):

aws iam list-access-keys
aws iam update-access-key --access-key-id AKIAIOSFODNN7EXAMPLE \
--status Inactive --user-name Bob
aws iam delete-access-key --access-key AKIDPMS9RO4H3FEXAMPLE \
--user-name Bob

2) If the key has been compromised, check whether new and unexpected resources have been spun up using it, in all regions. It is common to see someone use a compromised key to launch EC2 instances in other AWS regions, so check all of them for new and suspicious instances. Here is an example that looks for new instances launched in us-east-1 since March 9th, 2016:

aws ec2 describe-instances --region us-east-1 \
--query 'Reservations[].Instances[?LaunchTime>=`2016-03-9`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'
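Since a compromised key can be abused in any region, it is worth repeating that query everywhere; a minimal sketch of a region loop (using the same query as above) would be:

for r in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "== $r =="
  aws ec2 describe-instances --region "$r" \
  --query 'Reservations[].Instances[?LaunchTime>=`2016-03-9`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'
done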

3) Contact AWS Support and let them know about the security incident; they are always willing to help and give advice. They may also escalate to the AWS Security Team if needed.

4) Isolate the instance. In this case, when I talk about YOUR.IP.ADDRESS.HERE, it could be your office’s public IP or an intermediate hosted server to hop through or to do the analysis from:

  • Create a security group to isolate your instance; note the difference between EC2-Classic and EC2-VPC, and take note of the Group-ID
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate EC2-Classic instances"
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate a EC2-VPC instance" \
--vpc-id vpc-1a2b3c4d
# where vpc-1a2b3c4d is the VPC ID that the instance is member of
  • Set a rule to allow SSH access from your public IP only, but first we have to know our public IP:
dig +short myip.opendns.com @resolver1.opendns.com
aws ec2 authorize-security-group-ingress --group-name isolation-sg \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
aws ec2 authorize-security-group-ingress --group-id sg-BLOCK-ID \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
# note the difference between both commands in group-name
# and group-id, sg-BLOCK-ID is the ID of your isolation-sg
  • EC2-Classic security groups don’t support outbound rules. However, for EC2-VPC security groups, outbound rules can be set with these commands:
aws ec2 revoke-security-group-egress --group-id sg-BLOCK-ID \
--protocol '-1' --port all --cidr '0.0.0.0/0'
# removed rule that allows all outbound traffic
aws ec2 authorize-security-group-egress --group-id sg-BLOCK-ID \
--protocol 'tcp' --port 80 --cidr '0.0.0.0/0'
# place a port or IP if you want to enable some other
# outbound traffic, otherwise do not execute this command.
  • Apply that Security Group to the compromised instance:
aws ec2 modify-instance-attribute --instance-id i-INSTANCE-ID \
--groups sg-BLOCK-ID
# where sg-BLOCK-ID is the ID of your isolation-sg
aws iam put-user-policy --user-name MyUser --policy-name MyPowerUserRole \
--policy-document file://C:\Temp\MyPolicyFile.json
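The put-user-policy call above attaches an inline policy to the (presumably compromised) IAM user. The original MyPolicyFile.json is not shown here, so this quarantine-style document is only an assumption; a deny-all policy like the following locks the user down until the investigation is over:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "QuarantineCompromisedUser",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*"
  }]
}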

5) Tag the instance to mark it as under investigation:

aws ec2 create-tags --resources i-INSTANCE-ID \
--tags "Key=Environment, Value=Quarantine:REFERENCE-ID"

6) Save the instance metadata:

  • Information about the compromised instance:
aws ec2 describe-instances --instance-ids i-INSTANCE-ID > forensic-metadata.log
or
aws ec2 describe-instances --filters "Name=ip-address,Values=xx.xx.xx.xx"
  • The console output can be useful depending on the attack, but you should have a centralized/dedicated log server outside each instance:
aws ec2 get-console-output --instance-id i-INSTANCE-ID

7) Create a snapshot of the volume(s) of the compromised instance(s) for forensic analysis:

aws ec2 create-snapshot --volume-id vol-xxxx \
--description "IR-ResponderName- Date-REFERENCE-ID"

That snapshot won’t be changed or mounted; we will work with a volume instead.

8) Now we can follow one of two paths. Stop the instance:

aws ec2 stop-instances --instance-ids i-INSTANCE-ID
  • Or leave it running, if we can; in that case, also isolate it from the inside (iptables) and dump its RAM to a file using LiME.
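  • A rough sketch of that LiME dump follows (the module name depends on your kernel, and ideally you build it on a clean machine with the same kernel instead of on the evidence itself):
git clone https://github.com/504ensicsLabs/LiME
cd LiME/src && make
# make produces a module named after the running kernel
sudo insmod ./lime-$(uname -r).ko "path=/tmp/i-INSTANCE-ID.lime format=lime"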

9) Create a volume from the snapshot we just took, to be used later for analysis:

  • Consider using --region us-east-1 --availability-zone us-east-1a --volume-type standard with your own setup.
aws ec2 create-volume --snapshot-id snap-abcd1234
  • Now take note of your new volume:
aws ec2 describe-volumes

10) Mount that volume on your favorite forensics distribution and run the investigation.
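A minimal sketch of that last step (device names, mount point and filesystem options are assumptions and vary by instance type and distribution):

# attach the evidence volume to your analysis instance
aws ec2 attach-volume --volume-id vol-xxxx --instance-id i-ANALYSIS-ID --device /dev/sdf
# on the analysis instance, mount it read-only to preserve the evidence
sudo mkdir -p /mnt/evidence
sudo mount -o ro,noexec,noload /dev/xvdf1 /mnt/evidence
# noload skips ext3/ext4 journal replay; drop it for other filesystems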

I will add more information in a future blog post, but I think this is a good introduction.

If you want to learn much more about this topic, I will be giving an online training about AWS, GCE and Azure forensics, in Spanish, with Securizame; more info here.

Some cool references and good reads:

https://securosis.com/blog/my-500-cloud-security-screwup

https://securosis.com/blog/cloud-forensics-101

http://www.slideshare.net/AmazonWebServices/sec316-your-architecture-w-security-incident-response-simulations

http://sysforensics.org/2014/10/forensics-in-the-amazon-cloud-ec2/