Prowler 2.8.0 – Ides of March

The Ides of March is the instrumental song that opens Killers, Iron Maiden's second studio album. It is great as an opener: March is the month when spring starts on my side of the world, and it is always a time for optimism. The Ides of March also refers to the 15th of March in the Roman calendar (and the day of the assassination of Julius Caesar). Enjoy the song here.

We have put our best into this release, with important help from the Prowler community of cloud security engineers around the world. Thank you all! Special thanks to the Prowler full-time engineers @jfagoagas, @n4ch04 and @sergargar! (and Bruce, my dog) ❤️


Important changes in this version (read this!):

Now, if you use AWS Organizations and scan multiple accounts using the assume role functionality, Prowler can get your account details, such as Account Name, Email, ARN, Organization ID and Tags, and add them to the CSV and JSON output formats. More information and usage here.
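
For example, a minimal multi-account scan with assume role and CSV/JSON output looks roughly like the sketch below (the account ID and role name are placeholders; see the linked documentation for how to enable the Organizations metadata lookup itself):

# assume a role in a member account and write CSV and JSON reports;
# with the Organizations lookup configured, the account metadata
# columns are appended to those outputs
./prowler -A 123456789012 -R ProwlerExecRole -M csv,json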

New Features

  • One new check for S3 buckets with ACLs enabled, by @jeffmaley in #1023:
    7.172 [extra7172] Check if S3 buckets have ACLs enabled - s3 [Medium]
  • feat(metadata): Include account metadata in Prowler assessments by @toniblyx in #1049

Enhancements

Fixes

  • Fix issue extra75 reports default SecurityGroups as unused #1001 by @jansepke in #1006
  • Fix issue extra793 filtering out network LBs #1002 by @jansepke in #1007
  • Fix formatting by @lorchda in #1012
  • Fix docker references by @mike-stewart in #1018
  • Fix(check32): filterName base64encoded to avoid space problems in filter names by @n4ch04 in #1020
  • Fix: when prowler exits with a non-zero status, the remainder of the block is not executed by @lorchda in #1015
  • Fix(extra7148): Error handling and include missing policy by @toniblyx in #1021
  • Fix(extra760): Error handling by @lazize in #1025
  • Fix(CODEOWNERS): Rename team by @jfagoagas in #1027
  • Fix(include/outputs): Whitelist logic reformulated to exactly match input by @n4ch04 in #1029
  • Fix CFN CodeBuild example by @mmuller88 in #1030
  • Fix typo CodeBuild template by @dlorch in #1010
  • Fix(extra736): Recover only Customer Managed KMS keys by @jfagoagas in #1036
  • Fix(extra7141): Error handling and include missing policy by @lazize in #1024
  • Fix(extra730): Handle invalid date formats checking ACM certificates by @jfagoagas in #1033
  • Fix(check41/42): Added tcp protocol filter to query by @n4ch04 in #1035
  • Fix(include/outputs):Rolling back whitelist checking to RE check by @n4ch04 in #1037
  • Fix(extra758): Reduce API calls. Print correct instance state. by @lazize in #1057
  • Fix: extra7167 Advanced Shield and CloudFront bug parsing None output without distributions by @NMuee in #1062
  • Fix(extra776): Handle image tag commas and json output by @jfagoagas in #1063
  • Fix(whitelist): Whitelist logic reformulated again by @n4ch04 in #1061
  • Fix: Change lower case from bash variable expansion to tr by @lazize in #1064
  • Fix(check_extra7161): fixed check title by @n4ch04 in #1068
  • Fix(extra760): Improve error handling by @lazize in #1055
  • Fix(check122): Error when policy name contains commas by @plarso in #1067
  • Fix: Remove automatic PR labels by @jfagoagas in #1044
  • Fix(ES): Improve AWS CLI query and add error handling for ElasticSearch/OpenSearch checks by @lazize in #1032
  • Fix(extra771): jq fail when policy action is an array by @lazize in #1031
  • Fix(extra765/776): Add right region to CSV if access is denied by @roman-mueller in #1045
  • Fix: extra7167 Advanced Shield and CloudFront bug parsing None output without distributions by @NMuee in #1053
  • Fix(filter-region): Support comma separated regions by @thetemplateblog in #1071

New Contributors

Full Changelog: 2.7.0…2.8.0

Prowler 2.0: New release with improvements and new checks ready for re:Invent and BlackHat EU

Taking advantage of this week's AWS re:Invent and next week's BlackHat Europe, I wanted to push out a new version of Prowler.

In case you are new to Prowler:

Prowler is an AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool. It follows the guidelines of the CIS Amazon Web Services Foundations Benchmark and adds DOZENS of additional checks, including the GDPR and HIPAA groups. The official CIS benchmark for AWS guide is here.

This new version has more than 20 new extra checks (out of more than 90 in total), including the GDPR and HIPAA groups of checks, provided as a reference to help organizations check the status of their infrastructure with regard to those regulations. Prowler has also been refactored to allow easier extensibility. Another important feature is the JSON output, which allows Prowler to be integrated with, for example, Splunk or Wazuh (more about that soon!). For all the details about what is new, fixed and improved, please see the release notes here: https://github.com/toniblyx/prowler/releases/tag/2.0
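
As an illustration, running one of those compliance groups with JSON output is roughly this (a sketch based on the Prowler 2.x command line; group names as listed in the release notes):

# run only the GDPR group of checks and emit JSON, ready to be
# ingested by tools such as Splunk or Wazuh
./prowler -g gdpr -M json

# the HIPAA group works the same way
./prowler -g hipaa -M json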

For me, personally, there are two main benefits of Prowler. First of all, it helps many organizations and individuals around the world improve their security posture on AWS: using just one simple command, they realize what they have to do and how to get started with their hardening. Second, I’m learning a lot about AWS: its API, features, limitations, differences between services and AWS security in general.

That said, I’m very happy to present Prowler 2.0 at BlackHat Europe next week in London! It will be at the Arsenal,

and I’ll talk about AWS security and show all the new features, how it works, how to take advantage of all the checks and output methods, and some other cool things. If you are around, please come by and say hello; I’ve got a bunch of laptop stickers! Here are all the details. Location: Business Hall, Arsenal Station 2. Date: Wednesday, December 5 | 3:15pm-4:50pm. Track: Vulnerability Assessment. Session Type: Arsenal.

BIG THANKS!

I want to thank the Open Source community that has helped out since day one; almost a thousand stars on Github and more than 500 commits speak for themselves. Prowler has become pretty popular out there and all the community support is awesome; it motivates me to keep up with improvements and features. Thanks to you all!!

Prowler’s future?

The main goals for future versions are to improve speed and reporting, including switching the code base to Python to support existing checks and new ones in any language.

If you are interested in helping out, don’t hesitate to reach out to me. \m/

My arsenal of AWS security tools

I’ve been using and collecting a list of helpful tools for AWS security. This list covers the ones that I have tried at least once and that I think are worth looking at for your own benefit and, most importantly, to make your AWS cloud environment more secure.

They are not in any specific order; I just wanted to group them somehow. I have my favorites depending on the requirements, but you can have yours once you test them.

Feel free to send a pull request with improvements or more tools (open source only in this list) here:

New additions at https://github.com/toniblyx/my-arsenal-of-aws-security-tools


Defensive (Hardening, Security Assessment, Inventory):

Offensive:

Continuous Security Auditing:

DFIR:

Development Security:

S3 Buckets Auditing:

Training:

Others:

Prowler 1.6: AWS Security Best Practices Assessment and Forensics Readiness Tool

It looks like Prowler has become a popular tool for those concerned about AWS security. I originally made Prowler to solve an internal requirement we have here at Alfresco. I decided to make it public, started getting a lot of feedback, pull requests, comments, advice, bug reports and new ideas, and I keep pushing to make it better and more comprehensive, following what the cloud security community seems to need.
I know Prowler is not the best tool out there, but it does what I wanted it to do: “take a picture of my AWS account (or accounts) security settings and tell me where to start working to improve them”. Do the basics, at least. And that’s what it does. I would use other tools to track service changes, etc.; I also discuss that in my talks.
Currently, Prowler performs 74 checks (for the entire list run `prowler -l`), 52 of which are part of the CIS benchmark.

Digital Forensics readiness capabilities in Prowler 1.6

`prowler -c forensics-ready`
I’m into DFIR, I love it, I read a lot about cloud digital forensics and incident response, and I enjoy investing my time in R&D on that subject. I’m also concerned about random or targeted attacks on cloud infrastructure. For the talk I’m giving today at the SANS Cloud Security Summit 2018 in San Diego, I wanted to show something new, so I thought about adding new checks to Prowler related to forensics and to making sure you have all (or as much as possible) of what you need to perform a proper investigation in case of an incident; logs that, by the way, are not enabled by default in any AWS account. Some of those checks are included and well described in the current CIS benchmark for AWS, or even in the CIS benchmark for AWS three-tier web deployments (another hardening guide that is way less popular but pretty interesting too), but there are checks that are not included anywhere. For example, I believe it is a good idea to keep a record of your API Gateway logs in your production accounts, or even your ELB logs, among many others. So when you run `prowler -c forensics-ready` you will now get the status of your resources across all regions, and you can make sure you are logging everything you may eventually need in case of a security incident. Currently these are the checks supported (https://github.com/toniblyx/prowler#forensics-ready-checks):
  • 2.1 Ensure CloudTrail is enabled in all regions (Scored)
  • 2.2 Ensure CloudTrail log file validation is enabled (Scored)
  • 2.3 Ensure the S3 bucket CloudTrail logs to is not publicly accessible (Scored)
  • 2.4 Ensure CloudTrail trails are integrated with CloudWatch Logs (Scored)
  • 2.5 Ensure AWS Config is enabled in all regions (Scored)
  • 2.6 Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Scored)
  • 2.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Scored)
  • 4.3 Ensure VPC Flow Logging is Enabled in all VPCs (Scored)
  • 7.12 Check if Amazon Macie is enabled (Not Scored) (Not part of CIS benchmark)
  • 7.13 Check if GuardDuty is enabled (Not Scored) (Not part of CIS benchmark)
  • 7.14 Check if CloudFront distributions have logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.15 Check if Elasticsearch Service domains have logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.17 Check if Elastic Load Balancers have logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.18 Check if S3 buckets have server access logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.19 Check if Route53 hosted zones are logging queries to CloudWatch Logs (Not Scored) (Not part of CIS benchmark)
  • 7.20 Check if Lambda functions are being recorded by CloudTrail (Not Scored) (Not part of CIS benchmark)
  • 7.21 Check if Redshift cluster has audit logging enabled (Not Scored) (Not part of CIS benchmark)
  • 7.22 Check if API Gateway has logging enabled (Not Scored) (Not part of CIS benchmark)

Screenshot while running the `forensics-ready` group of checks, showing only the first 3 checks that are part of that group.

I haven’t yet added an RDS logging check and I’m probably missing many others, so please feel free to open an issue on Github and let me know!
If you want to check out the slide deck I used during my talk at the SANS Cloud Security Summit 2018 in San Diego, look here: https://github.com/toniblyx/SANSCloudSecuritySummit2018

Getting started with AWS Certificate Manager (and Route53)

…with your own domain not hosted in Amazon Route 53 and a wildcard certificate.
A main premise I follow when deploying or architecting any service in the cloud, whatever vendor I use, is full encryption between layers and elasticity for each service (adding them to what AWS calls Auto Scaling Groups).
For a pretty cool new project (wink wink) that we are working on in the Security Operations Group at Alfresco, we need to deploy a bunch of AWS resources and we want to use HTTPS between all the services. Since we will use Auto Scaling groups and ELB, we want to configure every ELB with HTTPS, and to do so we have to provide the certificates. We can do that manually or automate it with CloudFormation, and the CloudFormation option is what we have chosen, as in many other projects. We also want to use our own certificates.
This article shows you how to create your own wildcard certificate with AWS Certificate Manager and use Route 53 for a subdomain that you own; for example, when your domain is not hosted in Route 53, as in my case with blyx.com, which is hosted at joker.com using their own name servers. I’ll use a subdomain called cloud.blyx.com for this example.
Long story short, the whole process is something like this:
  1. Add a hosted zone in Route 53
  2. Configure your DNS server to point the custom subdomain to Route 53
  3. Create the wildcard certificate
So let’s go ahead:
  1. Start by creating a hosted zone in Route 53. This new hosted zone will be a subdomain of our main domain that we will be able to manage entirely in AWS for our wildcard certificates and, obviously, for our load balancer and URL domain names, instead of using the default amazonaws.com names. In my case, I create a hosted zone called “cloud.blyx.com”:

  2. Now we have to go to our DNS server and add all the records that we got in the previous step. In my case I will do it using the joker.com web panel; if you use Bind or another solution, you have to create an NS zone called cloud.blyx.com (something like a 3rd-level domain) and then point it to the name servers that we got from AWS Route 53 above. Here is an example:

3. Once we have all the DNS steps done, let’s create our wildcard certificate with AWS Certificate Manager for *.cloud.blyx.com. Remember that to validate the certificate request you will get an email from AWS and you will have to approve the request by following the instructions in the email:

This is the email to approve the request:

Here is the approval page:

Once it is approved you will see it as “Issued”, and you are ready to use it. Now, from CloudFormation, we can reference Route 53 and use our own certificate so that all communications go through HTTPS when needed.
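
If you prefer the command line over the console, the same steps can be sketched with the AWS CLI like this (domain names taken from the example above; the caller reference is just any unique string, and validation here is done via email as described):

# 1. create the hosted zone for the subdomain and note the name
#    servers returned in the DelegationSet (these go into your DNS panel)
aws route53 create-hosted-zone --name cloud.blyx.com \
--caller-reference cloud-blyx-com-example

# 2. once cloud.blyx.com is delegated to those name servers, request
#    the wildcard certificate and approve the validation email from AWS
#    (request it in the region where your load balancers live)
aws acm request-certificate --domain-name "*.cloud.blyx.com" \
--validation-method EMAIL --region us-east-1

# 3. check the status until it shows ISSUED
aws acm list-certificates --region us-east-1
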
Hope you find this article helpful. The Trooper is coming!

Bypassing AWS IAM: How important it is to look closely at your policies

If you deal every day with dozens of users in AWS and you like to have (or believe that you have) control over them, driving them like a good flock of sheep, you will feel my pain, and I’ll feel yours.

We manage multiple AWS accounts for many purposes, some with more restrictions than others: we deny the use of certain regions, instance types, services, etc., just for security and budget control (like you probably do as well).

That being said, you are now a “ninja” of AWS IAM, because you have to add, remove, create, change, test and simulate simple and complex policies pretty much every day to make your flock trustfully follow its shepherd.

But dealing with users is a great way to test the strength of your policies. I have a policy that explicitly denies a list of instance types (a blacklist on “ec2:RunInstances”). OK, it denies creating them, but not stopping them, changing the instance type and starting them again. You may end up feeling that your control is like this:

Let me show you all the technical details and a very self-explanatory demo in this video:

What do you think? Is it expected behavior? It actually is. But I also think that the “ec2:ModifyInstanceAttribute” control should be more granular and should have “instanceType” somehow tied to “ec2:RunInstances”. A limitation of AWS IAM, I guess.
In case you want to try it yourself, below are all the commands I used (you will have to change the instance id, profile and region). If you want to copy a similar IAM policy, look at my blog post How to restrict by regions and instance types in AWS with IAM:
# create an allowed instance
aws ec2 run-instances --image-id ami-c58c1dd3 \
--count 1 --instance-type t2.large --key-name sec-poc \
--security-group-ids sg-12b5376a --subnet-id subnet-11fe4e49 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# stop instance
aws ec2 stop-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# change instance type
aws ec2 modify-instance-attribute --instance-id i-0152bc219d24c5f25 \
--instance-type "{\"Value\": \"i2.2xlarge\"}" \
--profile soleng --region us-east-1

# start the instance again with the new instance type
aws ec2 start-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# terminate instance
aws ec2 terminate-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1

Deploy the new Alfresco One Reference Architecture in few minutes with AWS CloudFormation

…on AWS, with High Availability, Auto Scaling and Multi-AZ support.

Back in 2013 we (at Alfresco) released an AWS CloudFormation template that allows you to deploy an Alfresco Enterprise cluster in Amazon Web Services, and I talked about it here.
Today, I’m proud to announce that we have rewritten that template to make it work with our modern stack and the new version of Alfresco One (5.1). We also put a lot of effort into making this new template an Alfresco One Reference Architecture, not only because it runs in high availability, but because we use our latest automation tools like Chef-Alfresco, our tuning experience, and best practices learned during our latest benchmarks and around architecture security. In addition, and to make it faster to deploy, we are using the official Alfresco One AMI published in the AWS Marketplace.
I’d like to mention some features you will find in this new template:
  • All Alfresco and Index nodes will be placed inside a Virtual Private Cloud (VPC).
  • Each Alfresco and Index node will be in a separate Availability Zone (same Region).
  • We use Alfresco One 5.1.1.1 with Alfresco Office Services and the Google Docs plugin.
  • All configuration is done automatically using Chef-Alfresco; you don’t really need to know about Chef to make this work.
  • An Elastic Load Balancer instance with “sticky” sessions based on the Tomcat JSESSIONID.
  • The shared content store is in an S3 bucket.
  • MySQL database on RDS instances in Multi-AZ mode.
  • We use a pre-baked AMI: our official Alfresco One AMI published in the AWS Marketplace, based on CentOS 7.2 and with an all-in-one configuration that we reconfigure automatically to work for this architecture and save time.
  • Auto-scaling rules that will add extra Alfresco and Index nodes when certain performance thresholds are reached.
  • HTTPS access to Alfresco Share is not enabled by default, but everything is in place to enable it.
As a result of this deployment you will get this environment:
Our CloudFormation template and additional documentation are available on Github: https://github.com/Alfresco/alfresco-cloudformation-chef
In the video below you can see a quick demo of how to deploy this infrastructure with just a few minutes of user intervention. Isn’t it cool? Do you know how much time you save doing it this way? You can also set up production and test environments exactly the same way: faster, easier and cheaper!

Prowler: an AWS CIS Security Benchmark Tool

In this blog post I’m happy to announce the recent release of Prowler: an AWS CIS Security Benchmark Tool.

At Alfresco we run several workloads on AWS and, like many other companies, we use multiple AWS accounts depending on use cases, projects, etc.

To make sure we have foundational security controls applied to each account, AWS offers a service called Trusted Advisor which has, among other features, a section for security best practices: it checks some services and gives us recommendations to improve the security of our account. Three of those checks are free; the rest of them (12) are available only to customers with a Business or Enterprise support plan.


Trusted Advisor is fine, but it is not comprehensive enough and it is not free. Here is a screenshot of Trusted Advisor in the AWS Console on a Business support plan account:

In addition to that AWS service, a few months ago the Center for Internet Security (CIS), along with Amazon Web Services and others, released the CIS AWS Foundations Benchmark. In that document we can find a collection of audit checks and remediations that cover the security foundations for these main areas in AWS:

  • Identity and Access Management (15 checks)
  • Logging (8 checks)
  • Monitoring (16 checks)
  • Networking (4 checks)

The 89-page guide goes through 43 recommendations, explaining why each check is important, how to audit it and how to remediate it in case it is not properly configured.

If you try to go through all these checks manually it may take you a couple of days to complete them. This is why at Alfresco we decided to write a tool to make it faster; thus I wrote “Prowler”, a command line tool based on the AWS CLI that creates a report in a minute and shows you how your AWS account is configured in terms of security (using fancy color codes).


Prowler, whose name comes from the Iron Maiden song of the same name, works on Linux, OSX and Windows (with Cygwin), with the AWS CLI installed. It also requires an AWS account with at least the SecurityAudit policy attached, as specified in the documentation. For more information, details and sample reports visit the project repository on Github: https://github.com/toniblyx/aws-cis-security-benchmark.
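
Getting started is basically this (a quick sketch: the IAM user name is just an example, and I assume the script is called prowler at the root of the repository):

# attach the AWS managed SecurityAudit policy to the IAM user whose
# credentials Prowler will use (example user name)
aws iam attach-user-policy --user-name prowler-audit \
--policy-arn arn:aws:iam::aws:policy/SecurityAudit

# clone the repository and run the report
git clone https://github.com/toniblyx/aws-cis-security-benchmark
cd aws-cis-security-benchmark
./prowler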

Please, go ahead, check it out and give me feedback!

Hope it helps!

UPDATE! Right after I published this post, @MonkeySecurity pointed me on Twitter to Scout2, a tool we have been using here for a long time and which is very helpful. It has many different checks and it is complementary to Prowler, so don’t forget to give it a try! And also use Security Monkey if you are not doing so already!

UPDATE2! Another tool to perform AWS security checks is CloudSploit Scans; more info here.


Security Monkey deployment with CloudFormation template

In order to give back to the Open Source community what we take from it (actually from the awesome Netflix engineers), I wanted to make this work public: a CloudFormation template to easily deploy and configure Security Monkey in AWS. I’m pretty sure it will help many people make their AWS infrastructure more secure.

Security Monkey is a tool for monitoring and analyzing the security of our Amazon Web Services configurations.

You may be thinking of AWS CloudTrail or AWS Trusted Advisor, right? This is what the authors say:
“Security Monkey predates both of these services and meets a bit of each services’ goals while having unique value of its own:
CloudTrail provides verbose data on API calls, but has no sense of state in terms of how a particular configuration item (e.g. security group) has changed over time. Security Monkey provides exactly this capability.
Trusted Advisor has some excellent checks, but it is a paid service and provides no means for the user to add custom security checks. For example, Netflix has a custom check to identify whether a given IAM user matches a Netflix employee user account, something that is impossible to do via Trusted Advisor. Trusted Advisor is also a per-account service, whereas Security Monkey scales to support and monitor an arbitrary number of AWS accounts from a single Security Monkey installation.”

Now, with the provided CloudFormation template you can deploy Security Monkey pretty much production-ready in a couple of minutes.

For more information, documentation and tests visit my Github project: https://github.com/toniblyx/security_monkey_cloudformation
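
For reference, launching a CloudFormation template like this one from the command line follows the usual pattern; this is only a sketch, the template file name and parameter names below are placeholders, so check the repository documentation for the real ones:

# clone the project and create the stack; CAPABILITY_IAM is usually
# required when a template creates IAM resources, as this one likely does
git clone https://github.com/toniblyx/security_monkey_cloudformation
cd security_monkey_cloudformation
aws cloudformation create-stack --stack-name security-monkey \
--template-body file://security_monkey.template \
--parameters ParameterKey=KeyName,ParameterValue=my-key-pair \
--capabilities CAPABILITY_IAM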

How to restrict by regions and instance types in AWS with IAM

The use case is simple, and if you work with AWS I’m pretty sure you have faced this requirement at some point: I don’t want a certain group of users of a particular AWS account to be able to create anything anywhere. I had to configure the security of one of our AWS accounts to only allow users to work with EC2 and a few other AWS services in just two regions (N. Virginia and Ireland in this case). In addition to that, and to keep our budget under control, we wanted to limit the instance types they can use; in this example we will only allow EC2 instances with no more than 16GB of RAM (for a quick view of all available EC2 instance types see http://www.ec2instances.info).

Thanks to the documentation and AWS Support, I came up with this solution (as an example). The only issue is that, at the moment, we cannot hide features in the AWS Console, but at least AWS Support is very clear and supportive about that. They know how challenging IAM is for certain requirements.

Go to IAM -> Policies -> Create Policy -> Create Your Own Policy and use the JSON code in the gist link (or the illustrative sketch below) as a reference to write your own based on your requirements. After that, you have to attach the policy to the role/user/group you want.
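
If you prefer the CLI, below is a minimal illustrative sketch of such a policy (not the exact JSON from the gist: it only covers ec2:RunInstances while the real policy restricts more actions, the allowed regions match the example above, and the instance type whitelist is shortened for brevity; adjust everything to your requirements):

# create the policy; both statements use a Deny with StringNotEquals,
# so anything outside the whitelisted regions or instance types is blocked
aws iam create-policy --policy-name RestrictRegionsAndTypes \
--policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEC2OutsideAllowedRegions",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "ec2:Region": ["us-east-1", "eu-west-1"] }
      }
    },
    {
      "Sid": "DenyNonWhitelistedInstanceTypes",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringNotEquals": {
          "ec2:InstanceType": ["t2.micro", "t2.small", "t2.medium", "t2.large", "m4.large", "m4.xlarge"]
        }
      }
    }
  ]
}'

# then attach it to the group (or user/role) you want to restrict
aws iam attach-group-policy --group-name limited-users \
--policy-arn arn:aws:iam::<your-account-id>:policy/RestrictRegionsAndTypes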

Hope this helps.