Getting started with AWS Certificate Manager (and Route53)

…with your own domain not hosted in Amazon Route 53 and a wildcard certificate.
A main premise I follow when deploying or architecting any service in the cloud, whatever the vendor, is full encryption between layers and elasticity for every service (adding them to what AWS calls Auto Scaling Groups).
For a pretty cool new project (wink wink) that we are working on in the Security Operations Group at Alfresco, we need to deploy a bunch of AWS resources, and we want to use HTTPS between all the services. Since we will use Auto Scaling groups and ELB, we have to provide certificates to configure every ELB with HTTPS. We can do that manually or automate it with CloudFormation; as in many other projects, we have chosen the CloudFormation option. We also want to use our own certificates.
This article shows you how to create your own wildcard certificate with AWS Certificate Manager and use Route 53 for a subdomain of a domain that you own elsewhere. In my case, blyx.com is not hosted in Route 53: it is hosted at joker.com using their own name servers. I’ll use a subdomain called cloud.blyx.com for this example.
Long story short, the whole process is something like this:
  1. Add a hosted zone in Route 53
  2. Configure your DNS server to point the custom subdomain to Route 53
  3. Create the wildcard certificate
So let’s go ahead:
  1. Start by creating a hosted zone in Route 53. This new hosted zone will be a subdomain of our main domain that we will be able to manage entirely in AWS: for our wildcard certificates and, obviously, for our load balancer and URL domain names instead of the default amazonaws.com names. In my case, I create a hosted zone called “cloud.blyx.com”:
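To do the same from the command line, this is roughly the AWS-CLI equivalent (the caller reference is just any unique string):
# Create the hosted zone; the output lists the four Route 53
# name servers under DelegationSet.NameServers
aws route53 create-hosted-zone \
  --name cloud.blyx.com \
  --caller-reference cloud-blyx-$(date +%s)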

  2. Now we have to go to our DNS provider and add the NS records that we got in the previous step. In my case I will do it using the joker.com web panel; if you use BIND or another solution, you have to create an NS delegation for cloud.blyx.com (a third-level domain) pointing to the name servers that we got from AWS Route 53 above. Here is an example:
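If you manage the parent zone with BIND, the delegation in the blyx.com zone file would look something like this (the name servers below are placeholders; use the four values Route 53 gave you):
; Delegate cloud.blyx.com to the Route 53 name servers
cloud   IN  NS  ns-123.awsdns-15.com.
cloud   IN  NS  ns-456.awsdns-28.net.
cloud   IN  NS  ns-789.awsdns-41.org.
cloud   IN  NS  ns-1012.awsdns-55.co.uk.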

  3. Once all the DNS steps are done, let’s create our wildcard certificate with AWS Certificate Manager for *.cloud.blyx.com. Remember that to validate the certificate request you will get an email from AWS, and you will have to approve the request by following the instructions in the email:
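If you prefer to request it from the command line, here is a sketch with AWS-CLI (the ValidationDomain option controls which domain receives the validation email):
# Request a wildcard certificate for the subdomain
aws acm request-certificate \
  --domain-name '*.cloud.blyx.com' \
  --domain-validation-options DomainName='*.cloud.blyx.com',ValidationDomain=cloud.blyx.com \
  --region us-east-1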

This is the email to approve the request:

Here is the approval page:

Once it is approved you will see it as “issued”, and you are ready to use it. Now, from CloudFormation, we can use Route 53 and our own certificate to make all communications go through HTTPS when needed.
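As a hint of what that looks like in a template, a classic ELB listener can reference the ACM certificate by its ARN (ports and ARN below are illustrative):
"Listeners": [{
    "LoadBalancerPort": "443",
    "InstancePort": "8080",
    "Protocol": "HTTPS",
    "SSLCertificateId": "arn:aws:acm:us-east-1:111122223333:certificate/example-1234"
}]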
Hope you find this article helpful. The Trooper is coming!

10 Security Concepts You Must Know (in 5 minutes with GoT)

Here is a lightning talk I recorded recently. I did it internally at Alfresco for an Engineering meeting, but I think it is a good idea to share it and take advantage of the coming new season of Game of Thrones 😉

Links and resources are also available below the video.

If you want to use the ppt for your own purposes, you can download it from my GitHub:

https://github.com/toniblyx/10-security-concepts-with-got

All references and recommended reading on the subject are here:

http://www.infoworld.com/article/3144362/devops/10-key-security-terms-devops-ninjas-need-to-know.html

https://danielmiessler.com/study/encoding-encryption-hashing-obfuscation/

https://blog.instant2fa.com/how-to-build-a-wall-that-stops-white-walkers-fantastic-threat-models-ef2dfa7864a4

Bypassing AWS IAM: How important it is to look closely at your policies

If you deal every day with dozens of users in AWS and you like to have (or believe that you have) control over them, driving them like a good flock of sheep, you will feel my pain, and I’ll feel yours.

We manage multiple AWS accounts for many purposes, some with more restrictions than others: we kinda control and deny the use of certain regions, instance types, services, etc., just for security and budget control (as you probably do as well).

That being said, you are now a “ninja” of AWS IAM, because you have to add, remove, create, change, test and simulate simple and complex policies pretty much every day to make your flock trustfully follow its shepherd.

But dealing with users is a great way to test the strength of your policies. I have a policy that explicitly denies a list of instance types (a black list on “ec2:RunInstances”). Ok, it denies creating them, but not stopping them, changing the instance type and starting them again. You may end up feeling that your control is like this:

Let me show you all the technical details and a very self-explanatory demo in this video:

What do you think? Is it expected behavior? It actually is. But I also think that the “ec2:ModifyInstanceAttribute” control should be more granular and should support “instanceType” somehow, related to “ec2:RunInstances”. A limitation of AWS IAM, I guess.
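For reference, here is a minimal sketch of the kind of black-list policy I am describing (the instance types listed are just examples):
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyBigInstanceTypes",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {
                "ec2:InstanceType": ["i2.2xlarge", "i2.4xlarge", "i2.8xlarge"]
            }
        }
    }]
}
Note that nothing in it restricts “ec2:StopInstances”, “ec2:ModifyInstanceAttribute” or “ec2:StartInstances”, which is exactly the gap the demo exploits.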
In case you want to try it yourself, below are all the commands I used (you will have to change the instance id, profile and region). If you want to copy a similar IAM policy, look at my blog post How to restrict by regions and instance types in AWS with IAM:
# create an allowed instance
aws ec2 run-instances --image-id ami-c58c1dd3 \
--count 1 --instance-type t2.large --key-name sec-poc \
--security-group-ids sg-12b5376a --subnet-id subnet-11fe4e49 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# stop instance
aws ec2 stop-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# change instance type
aws ec2 modify-instance-attribute --instance-id i-0152bc219d24c5f25 \
--instance-type "{\"Value\": \"i2.2xlarge\"}" \
--profile soleng --region us-east-1

# start instance again
aws ec2 start-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# terminate instance
aws ec2 terminate-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1

Automate or Die! My next talk at RootedCON 2017 in Madrid

UPDATED! My talk will be on Friday, March 3rd at 11AM (Sala 25)
Even though I have given many talks in Spain during the last 18 years, it has been a while since I last spoke at a security congress. I think the last time was NcN in 2006, when I presented phpRADmin.
I have to confess that I was dying to talk at RootedCON. Having lived abroad for more than four years now, RootedCON has been a reference event for me as a Spanish speaker and I have always followed it very closely; I think it is one of the most popular security conferences in Spain.
Last year I tried to get in with a “Docker Security” paper, but it wasn’t good enough, and honestly I didn’t work much on the paper itself. This time I worked on a more decent paper (and a better title as well) and voilà! My talk was approved.
And what am I going to talk about? Security in IaaS: attacks, hardening, incident response, forensics and the automation of all of it. Although I will cover general concepts that apply to AWS, Azure and GCP, I will show specific demos and threats in AWS and go into detail about some caveats and hazards there. My talk is called “Automate or die! How to survive an attack in the Cloud” and you have more details here.
If you are in Spain or nearby, don’t miss the opportunity to learn from people like Mikko Hypponen, Paul Vixie, Hugo Teso, Juan Garrido or Chema Alonso. As you can see in the full list, there are 3 days full of good material to improve your skills with very good professionals, and they also offer a training day. And compared to the price of security cons in other countries, this one is not expensive at all.
My talk will be on Friday, March 3rd at 11AM (Sala 25). Looking forward to seeing you there!

Hardening assessment and automation with OpenSCAP in 5 minutes

SCAP (Security Content Automation Protocol) provides a mechanism to check configurations, manage vulnerabilities and evaluate policy compliance for a variety of systems. One of the most popular implementations of SCAP is OpenSCAP, and it is very helpful for vulnerability assessment and also as a hardening helper.
In this article I’m going to show you how to use OpenSCAP in 5 minutes (or less). We will create reports and also dynamically harden a CentOS 7 server.
Installation for CentOS 7:
yum -y install openscap openscap-utils scap-security-guide
wget http://people.redhat.com/swells/scap-security-guide/RHEL/7/output/ssg-rhel7-ocil.xml -O /usr/share/xml/scap/ssg/content/ssg-rhel7-ocil.xml
Create a configuration assessment report in XCCDF (eXtensible Configuration Checklist Description Format):
oscap xccdf eval --profile stig-rhel7-server-upstream \
--results $(hostname)-scap-results-$(date +%Y%m%d).xml \
--report $(hostname)-scap-report-$(date +%Y%m%d).html \
--oval-results --fetch-remote-resources \
--cpe /usr/share/xml/scap/ssg/content/ssg-rhel7-cpe-dictionary.xml \
 /usr/share/xml/scap/ssg/content/ssg-centos7-xccdf.xml
Now you can see your report; it will be something like this (hostname.localdomain-scap-report-20161214.html):
See also the different rule groups considered:
You can go through the failures in red and see how to fix them manually, or dynamically generate a bash script to fix them. Take note of the score your system got; it will be a reference point after hardening.
In order to generate a script that fixes everything needed and hardens the system (improving the score), we need to know our report result id. We can get it by running this command against the results XML file:
export RESULTID=$(grep TestResult $(hostname)-scap-results-$(date +%Y%m%d).xml | awk -F\" '{ print $2 }')
Run the oscap command to generate the fix script; we will call it fixer.sh:
oscap xccdf generate fix \
--result-id $RESULTID \
--output fixer.sh $(hostname)-scap-results-$(date +%Y%m%d).xml
chmod +x fixer.sh
Now you should have a fixer.sh script that fixes all issues; open and edit it if needed. For instance, remember that the script will enable SELinux and make lots of changes to the Auditd configuration. If you need a different configuration, you can run commands like the ones below after running ./fixer.sh to keep SELinux permissive and change some Auditd actions.
sed -i "s/^SELINUX=.*/SELINUX=permissive/g" /etc/selinux/config
sed -i "s/^space_left_action =.*/space_left_action = syslog/g" /etc/audit/auditd.conf
sed -i "s/^admin_space_left_action =.*/admin_space_left_action = syslog/g" /etc/audit/auditd.conf
Then you can build a new assessment report to see how much your system hardening improved (note I added -after to the file names):
oscap xccdf eval --profile stig-rhel7-server-upstream \
--results $(hostname)-scap-results-$(date +%Y%m%d)-after.xml \
--report $(hostname)-scap-report-$(date +%Y%m%d)-after.html \
--oval-results --fetch-remote-resources \
--cpe /usr/share/xml/scap/ssg/content/ssg-rhel7-cpe-dictionary.xml \
 /usr/share/xml/scap/ssg/content/ssg-centos7-xccdf.xml
Additionally, we can generate another evaluation report of the system in OVAL format (Open Vulnerability and Assessment Language):
oscap oval eval --results $(hostname)-oval-results-$(date +%Y%m%d).xml \
--report $(hostname)-oval-report-$(date +%Y%m%d).html \
/usr/share/xml/scap/ssg/content/ssg-rhel7-oval.xml
The OVAL report will give you another view of your system status and configuration in order to let you improve it and follow up, making sure your environment reaches the level your organization requires.
Sample OVAL report:
Happy hardening!

Deploy the new Alfresco One Reference Architecture in a few minutes with AWS CloudFormation

…on AWS, with high availability, auto scaling and multi-AZ support.

Back in 2013 we (at Alfresco) released an AWS CloudFormation template that allowed you to deploy an Alfresco Enterprise cluster in Amazon Web Services, and I talked about it here.
Today, I’m proud to announce that we have rewritten that template to make it work with our modern stack and the new version of Alfresco One (5.1). We also put a lot of effort into making this new template an Alfresco One Reference Architecture, not only because it is highly available, but because we use our latest automation tools like Chef-Alfresco, our tuning experience, and best practices learned during our latest benchmarks and related to architecture security. In addition, to make it faster to deploy, we are using the official Alfresco One AMI published in the AWS Marketplace.
I’d like to mention some features you will find in this new template:
  • All Alfresco and Index nodes are placed inside a Virtual Private Cloud (VPC).
  • Each Alfresco and Index node is in a separate Availability Zone (same Region).
  • We use Alfresco One 5.1.1.1 with Alfresco Office Services and the Google Docs plugin.
  • All configuration is done automatically using Chef-Alfresco; you don’t really need to know Chef to make this work.
  • An Elastic Load Balancer instance with “sticky” sessions based on the Tomcat JSESSIONID.
  • The shared content store is in an S3 bucket.
  • MySQL database on RDS instances in Multi-AZ mode.
  • We use a pre-baked AMI: our official Alfresco One AMI published in the AWS Marketplace, based on CentOS 7.2 and with an all-in-one configuration that we reconfigure automatically to fit this architecture and save time.
  • Auto-scaling rules that will add extra Alfresco and Index nodes when certain performance thresholds are reached.
  • HTTPS access to Alfresco Share is not enabled by default, but everything is in place to enable it.
As a result of this deployment you will get this environment:
Our CloudFormation template and additional documentation are available on Github: https://github.com/Alfresco/alfresco-cloudformation-chef
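Launching it boils down to one command once you have the template; here is a sketch (the file name and parameters are illustrative, check the README in the repo for the real ones):
# CAPABILITY_IAM is required when a template creates IAM resources
aws cloudformation create-stack \
  --stack-name alfresco-one \
  --template-body file://alfresco-cloudformation.json \
  --capabilities CAPABILITY_IAM \
  --region us-east-1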
In the video below you can see a quick demo of how to deploy this infrastructure with just a few minutes of user intervention. Isn’t it cool? Do you know how much time you save doing it this way? You can also set up production and test environments exactly the same way: faster, easier and cheaper!

Prowler: an AWS CIS Security Benchmark Tool

In this blog post I’m happy to announce the recent release of Prowler: an AWS CIS Security Benchmark Tool.

At Alfresco we run several workloads on AWS and, like many other companies, we use multiple AWS accounts depending on use cases, projects, etc.

To make sure we have foundational security controls applied to each account, AWS offers a service called Trusted Advisor which has, among other features, a section for security best practices: it checks some services and gives us recommendations to improve the security of our account. 3 checks are free; the rest of them (12) are available only to customers with a Business or Enterprise support plan:


Trusted Advisor is fine, but it is not comprehensive enough and it is not free. Here is a screenshot of Trusted Advisor in the AWS Console on a Business support plan account:

In addition to that AWS service, a few months ago the Center for Internet Security (CIS), along with Amazon Web Services and others, released the CIS AWS Foundations Benchmark. In that document we can find a collection of audit checks and remediations that cover the security foundations for these main areas in AWS:

  • Identity and Access Management (15 checks)
  • Logging (8 checks)
  • Monitoring (16 checks)
  • Networking (4 checks)

The 89-page guide goes through 43 recommendations, explaining why each check is important, how to audit it and how to remediate it in case you don’t have it properly configured.

If you try to follow all these checks manually, it may take you a couple of days to get all of them checked. That is why at Alfresco we decided to write a tool to make it faster; thus I wrote “Prowler”, a command line tool based on AWS-CLI that creates a report in a minute and shows you how your AWS account is configured in terms of security (using fancy color codes).
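
To give you an idea, this is roughly what a single manual check looks like with AWS-CLI (this one verifies that the root account has MFA enabled); now multiply it by 43:

# Returns 1 if the root account has MFA enabled, 0 otherwise
aws iam get-account-summary --query 'SummaryMap.AccountMFAEnabled'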


Prowler, whose name comes from the Iron Maiden song of the same name, works on Linux, OSX and Windows (with Cygwin), with AWS-CLI installed. It also requires an AWS account with at least the SecurityAudit policy applied, as specified in the documentation. For more information, details and sample reports, visit the project repository on Github: https://github.com/toniblyx/aws-cis-security-benchmark.
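
Getting started is as simple as cloning the repository and running the script (shown with defaults; see the README for options such as custom AWS-CLI profiles or regions):

git clone https://github.com/toniblyx/aws-cis-security-benchmark
cd aws-cis-security-benchmark
./prowler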

Please, go ahead, check it out and give me feedback!

Hope it helps!

UPDATE! Right after I published this post, @MonkeySecurity pointed me on Twitter to Scout2, a tool we have been using here for a long time and which is very helpful. It has many different checks and is complementary to Prowler; don’t forget to give it a try! And also use Security Monkey if you are not doing so already!

UPDATE2! Another tool that performs AWS security checks is CloudSploit Scans; more info here.

Cloud Forensics: CAINE7 on AWS

If you work with AWS, you may have to perform a forensic analysis at some point. As discussed in previous articles here, there are many tasks we can achieve in the cloud.
Here is a quick guide, based on AWS-CLI, on how to install the well known CAINE7 distribution, upload it to the Amazon Cloud and use it there by importing it as an EC2 AMI:
  • First of all, start CAINE7.iso as a live CD in VirtualBox; 12GB of disk in VHD format will be fine (if you don’t use VHD and have VMDK instead, you can convert it with “VBoxManage clonemedium CAINE7.vmdk CAINE7.vhd --format vhd”)
  • Inside CAINE:
    • Run BlockON/OFF app from Desktop icon, select your virtual hard drive and make it Writable.
    • Go to Menu / System / Administration / gParted
    • In gParted: Device / Create Partition Table… msdos
    • Create a new 10GB partition and leave the rest empty
    • Create another linux-swap partition with the remaining 2GB
    • Edit / Apply all operations
    • Run Systemback (installer) from the Desktop icon.
    • System Install, fill the form with user full name: caine, system user: ec2-user, your password and hostname: caine. Then Next
    • Select the 10GB partition and set the mount point /
    • Click Next and the installation will start
  • Once the installation is finished, stop the virtual machine, remove the live CD, start it and log in to the VM again to do some additional steps inside your freshly installed CAINE7.
  • Update and upgrade:
    • sudo apt-get update; sudo apt-get upgrade
  • Install aws-cli:
    • sudo pip install awscli
  • Now we will install some dependencies needed to get access via RDP once we run CAINE in AWS, just as if it were on our local workstation.
    • sudo apt-get install xrdp curl
    • sudo sed -i s/port=-1/port=ask-1/g /etc/xrdp/xrdp.ini
    • sudo sed -i s#/\.\ \/etc\/X11\/Xsession#mate-session#g /etc/xrdp/startwm.sh
    • sudo service xrdp restart
  • Extra: install the Amazon EC2 Simple Systems Manager (SSM) agent to process Run Command requests remotely and automatically:
    • cd /tmp
    • curl https://amazon-ssm-<region>.s3.amazonaws.com/latest/debian_386/amazon-ssm-agent.deb -o amazon-ssm-agent.deb
    • dpkg -i amazon-ssm-agent.deb
  • Now we have to upload this VM VHD file to an S3 bucket; it will be around 8GB
    • aws s3 cp CAINE7.vhd  s3://your-forensics-tools-bucket/CAINE7.vhd
    • This will take time, depending on your bandwidth.
  • If the AWS IAM user you are using doesn’t have the proper permissions, you should review and follow these prerequisites: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html
  • Then we can import this virtual hard drive as an AWS AMI. First, create a json file like the one below to use as a parameter for the import task (caine7vm.json):
[{
    "Description": "CAINE7",
    "Format": "vhd",
    "UserBucket": {
        "S3Bucket": "your-forensics-tools-bucket",
        "S3Key": "CAINE7.vhd"
    }
}]
  • Let’s perform the import:
    • aws ec2 import-image --description "CAINE7" --disk-containers file://caine7vm.json --profile default --region us-east-1
    • NOTE: you probably don’t need to specify the profile or region.
  • The import task may take some minutes, depending on how big the VHD is and how busy AWS is at that time. To check the status, use this command:
    • aws ec2 describe-import-image-tasks --profile default --region us-east-1 --query 'ImportImageTasks[].[ImportTaskId,StatusMessage,Progress]'
    • or this one with your own “import-ami-XXXXX”:
    • aws ec2 describe-import-image-tasks --profile default --region us-east-1 --query 'ImportImageTasks[].[ImportTaskId,StatusMessage,Progress]' --cli-input-json "{ \"ImportTaskIds\": [\"import-ami-XXXXX\"]}"
    • You will see "StatusMessage": "pending" -> "validated" -> "converting" -> "preparing to boot" -> "booted" -> "preparing ami" -> "completed"
  • Once it is completed, look for your brand new AMI id:
    • aws ec2 describe-images --owners self --profile default --region us-east-1 --filters "Name=name,Values=import-ami-XXXXX"
  • Good, now that we know the AMI id, let’s create a new instance inside an existing VPC and a public subnet (I use a t2.medium with 4GB of RAM); please use your own security group with RDP and SSH open and your own ssh key name:
    • aws ec2 run-instances --image-id ami-XXXX --count 1 --instance-type t2.medium --key-name YOURKEY --security-group-ids sg-YOURSG --subnet-id subnet-YOURPUBLIC --profile default --region us-east-1
  • Add a tag to it for better identification:
    • aws ec2 create-tags --resources i-XXXX --tags Key=Name,Value=Investigator --profile default --region us-east-1
  • At this point you can attach a public IP to the instance and get access to it.
  • First, allocate a public Elastic IP:
    • aws ec2 allocate-address --domain vpc --profile default --region us-east-1
  • Then associate that new Elastic IP with our just-launched CAINE7 instance (change eipalloc-XXXX):
    • aws ec2 associate-address --instance-id i-XXXX --allocation-id eipalloc-XXXX --profile default --region us-east-1
  • Now open your favorite remote desktop application and access your CAINE7; remember you will be asked for the username and password you set when CAINE was installed in your VirtualBox VM:


  • Now you should be in!


Security Monkey deployment with CloudFormation template

In order to give back to the Open Source community what we take from it (actually, from the awesome Netflix engineers), I wanted to make this work public: a CloudFormation template to easily deploy and configure Security Monkey in AWS. I’m pretty sure it will help many people make their AWS infrastructure more secure.

Security Monkey is a tool for monitoring and analyzing the security of our Amazon Web Services configurations.

You may be thinking of AWS CloudTrail or AWS Trusted Advisor, right? This is what the authors say:
“Security Monkey predates both of these services and meets a bit of each services’ goals while having unique value of its own:
CloudTrail provides verbose data on API calls, but has no sense of state in terms of how a particular configuration item (e.g. security group) has changed over time. Security Monkey provides exactly this capability.
Trusted Advisor has some excellent checks, but it is a paid service and provides no means for the user to add custom security checks. For example, Netflix has a custom check to identify whether a given IAM user matches a Netflix employee user account, something that is impossible to do via Trusted Advisor. Trusted Advisor is also a per-account service, whereas Security Monkey scales to support and monitor an arbitrary number of AWS accounts from a single Security Monkey installation.”

Now, with the provided CloudFormation template, you can deploy Security Monkey pretty much production-ready in a couple of minutes.

For more information, documentation and tests visit my Github project: https://github.com/toniblyx/security_monkey_cloudformation