Resources for Protecting against Public S3 Bucket Exposure

Hi friends – it's been a while. My new gig is kicking my butt! I'm growing and learning quite a lot, for sure. I need to keep good information coming out, so when I saw this story about third-party-developed Facebook app datasets being exposed due to a misconfigured bucket permission, I felt compelled to put together some ways to help remediate this.

Although ACLs and bucket policies are great for protecting against leaky buckets for those who understand AWS and IAM very well, they are not something one can learn in 30 seconds, so I think some people avoid learning them in order to get their jobs done fast. I also think there is a general lack of security awareness in the wild, so people do things to finish a job quickly without thinking through the implications of what they are doing... such as making buckets public. I believe these two reasons are at the dead center of all the bucket leaks you read about in the papers.

First, let’s talk about detecting public buckets.

AWS has done a good job adding methods to detect public buckets. A year or so ago, this little icon appeared in the console next to any public bucket when viewing your buckets in the S3 menu:

That made it a bit more obvious, but it still was not enough. AWS then made it so Trusted Advisor reports could check S3 access and flag public buckets.

Also, to take a quick detour into the inventory aspect of information security: think about leveraging Amazon Macie to detect whether certain data types (PII) are in S3 buckets and how that data is being accessed.

That's nice, but let's take some action...

Next, you can integrate Trusted Advisor with CloudWatch so you can act on Trusted Advisor's checks, but this will only fire AFTER Trusted Advisor has run... so it's still not quite enough to stop bucket leaks that occur in between Trusted Advisor runs.

Next level up, and one of my favorites, is using AWS Config to monitor and respond to public buckets. This is powerful because it requires no human interaction... almost. The Lambda script in the linked tutorial only notifies when a bucket permission has changed. To fully automate, I really recommend that you customize the script and add some teeth to it, so it will remediate the bucket policy. An example script is here.
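To make the detection half of such a remediation Lambda concrete, here is a minimal Python sketch (helper and variable names are my own, hypothetical ones) of the check it would perform on a bucket's ACL before calling a boto3 remediation API such as put_public_access_block:

```python
# Sketch: decide whether an S3 ACL grant list contains a public grant.
# A Config- or CloudWatch-triggered Lambda could run this against the
# bucket's ACL and then remediate (e.g. via boto3 put_public_access_block).

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the subset of ACL grants that expose the bucket publicly."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("Type") == "Group"
        and g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
    ]

# Example grants shaped like get_bucket_acl output (IDs are made up):
acl = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
print(len(public_grants(acl)))  # → 1
```

If the returned list is non-empty, the bucket is public and the remediation call should fire.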

Problem still not solved, you say; a user should not even be allowed to set their own bucket permissions? Couldn't agree more... so...

I’d rather just prevent S3 Public Access in the first place.

Now there is Amazon S3 Block Public Access, which gives the account administrator the power to stop users from introducing ACLs with open permissions onto a bucket in the first place, by blocking this at the account level.
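As a sketch (the account ID below is hypothetical), the same setting can be applied account-wide with the AWS CLI:

```shell
# Hypothetical account ID; enables all four Block Public Access settings
# account-wide so new public ACLs and bucket policies are rejected.
aws s3control put-public-access-block \
    --account-id 111122223333 \
    --public-access-block-configuration \
        BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

With all four flags set, even a user who can edit bucket ACLs cannot make a bucket public.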

And last would sit a corporate security policy prohibiting the creation of any kind of public sharing on any cloud or third-party service without explicit permission from the security team. Yearly education on, and acknowledgement of, this policy by every employee is a must.

I hope this helps! Stay Safe! Stay Secure!

The opinions of this blog do not necessarily reflect those of Amazon. This blog is not an official publication of Amazon or associated with Amazon.

How you can stop an ex-employee from deleting your AWS EC2 Instances.

Greetings, Programs! I saw this story and had to share some insight. I believe you will find it valuable! Here we have a real-world example of an employee who was fired from his job and then leveraged a co-worker's credentials to log in to his former employer's AWS account and delete critical infrastructure. You can prevent this!

1. MFA. The primary mitigation for this type of attack is using Multi-Factor Authentication on all IAM accounts, or at a minimum, any account that has any permissions above READ. Had the stolen account (the one holding the permissions used in the attack) had an MFA token associated, this most likely would not have occurred. The Sophos guys pointed this out as well, but I had to mention it since it is critical. In addition to what Sophos mentioned, I offer the following thoughts:

2. Password rotation comes to mind here as well. It is not clear how the suspect obtained the credentials, nor how old the credentials were, but a strong rotation policy can mitigate stolen-credential attacks. Leverage rotation for API access keys as well; the CIS 1.0 Benchmarks call for a 90-day maximum API key age.
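A small Python sketch of auditing key age against that 90-day ceiling (the key list mimics the shape of `aws iam list-access-keys` output; the key IDs are hypothetical):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # CIS benchmark: rotate API keys every 90 days

def stale_keys(keys, now=None):
    """Return the IDs of access keys older than MAX_KEY_AGE."""
    now = now or datetime.now(timezone.utc)
    return [k["AccessKeyId"] for k in keys if now - k["CreateDate"] > MAX_KEY_AGE]

# Hypothetical keys shaped like iam list-access-keys output:
now = datetime(2019, 3, 1, tzinfo=timezone.utc)
keys = [
    {"AccessKeyId": "AKIAOLD", "CreateDate": datetime(2018, 1, 1, tzinfo=timezone.utc)},
    {"AccessKeyId": "AKIANEW", "CreateDate": datetime(2019, 2, 1, tzinfo=timezone.utc)},
]
print(stale_keys(keys, now))  # → ['AKIAOLD']
```

Anything this flags is a candidate for immediate rotation.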

3. Policy. I know this is not a popular, shiny security tool, but a good old-fashioned corporate policy strictly prohibiting the sharing of credentials between employees, audited and enforced, can create a culture where employees simply don't share, period.

4. AWS GuardDuty could have picked this up! How, you ask? The ex-employee used a valid credential, right? Yes, but that account likely never logged in from the IP address used in the attack. There are two GuardDuty findings that could have made security operations personnel aware within 5 minutes, or automation could have shut down access within 5 minutes. Add the two alerts below to your operations playbook for investigation; or, if you are more bold and sure there are no false positives, create automation that immediately locks the account when these fire.

UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B – This finding informs you that multiple successful console logins for the same IAM user were observed around the same time in various geographical locations. Such an anomalous and risky access-location pattern indicates potential unauthorized access to your AWS resources.

UnauthorizedAccess:IAMUser/ConsoleLogin – This finding informs you that a specific principal in your AWS environment is exhibiting behavior that is different from the established baseline (a different IP used to log in!). This principal has no prior history of login activity using this client application from this specific location. Your credentials might be compromised.
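If you want a single CloudWatch Events rule covering both findings, the event pattern would look like this (a sketch; `aws.guardduty` and `GuardDuty Finding` are the standard source and detail-type values GuardDuty emits):

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "type": [
      "UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B",
      "UnauthorizedAccess:IAMUser/ConsoleLogin"
    ]
  }
}
```

Point the rule's target at your notification topic or lockout Lambda.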

5. The ex-employee was the threat that manifested itself. So much security hype focuses on China, Russia, and APTs, so it is important to implement controls that monitor for and mitigate internal threats as well.





AWS Networking Specialty Exam Resources


Greetings, Programs! Happy 2019! I've updated the list I made last year of resources for the AWS Networking Specialty Exam. The FREE resources listed below include many great re:Invent videos as well as links to specific Amazon literature for each topic!

For my own studies, I am also leveraging a few paid resources: the Network Specialty course, as well as a good old-fashioned paperback book (as shown above), the official AWS Network Specialty book.

UPDATE: March 7, 2019 – The story of this exam continues. I took my first run at it on Jan 30, 2019 and missed by what I believe to be 2 questions, based on the score report I was mailed. I blame 90% of that fail on me not studying in enough depth, and 10% on some very badly worded questions. 🙂 As such, in addition to the guide below, I will add in some links I wish I had read, plus critical notes at the bottom of the page. I am taking my second run later this month!

UPDATE: March 16th, 2019 – Passed Networking Specialty Exam!

AWS Networking Specialty Exam

Domain 1: Design and implement hybrid IT network architectures at scale

1.1 Implement connectivity for hybrid IT

VPN backup over Direct Connect
AWS Single Data Center HA Network Design
AWS Multiple Data Center HA Network Design
How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53
AWS Cloud Hub

1.2 Given a scenario, derive an appropriate hybrid IT architecture connectivity solution

AWS Direct Connect Resiliency Recommendations
AWS re:Invent 2018: AWS VPN Solutions (NET304)
AWS re:Invent 2017: Extending Data Centers to the Cloud: Connectivity Options and Co (NET301)
Amazon VPC to VPC Connectivity Options
Read this whole page on Invalid VPC Peering Connections
Read my note on Accessing Interface EndPoints
Read VPC Peering with Specific Routes

1.3 Explain the process to extend connectivity using AWS Direct Connect

AWS re:Invent 2018: AWS Direct Connect: Deep Dive (NET403)
Direct Connect FAQ Page
AWS re:Invent 2018: PrivateLink for Partners: Connectivity, Scale, Security (GPSTEC306)
Direct Connect VIFS

1.4 Evaluate design alternatives that leverage AWS Direct Connect

AWS Single Data Center HA Network Design
AWS Multiple Data Center HA Network Design
High Level Architecture

1.5 Define routing policies for hybrid IT architectures

Choosing a Route 53 Routing Policy
AWS re:Invent 2017: DNS Demystified: Global Traffic Management with Amazon Route 53 (NET302)

Domain 2.0: Design and implement AWS networks

2.1 Apply AWS networking concepts

AWS re:Invent 2017: Networking State of the Union (NET205)
VPC EndPoints and Gateway EndPoints
VPC Endpoint Limitations
Direct Connect Limits
VPC Peering Basics
VPC Flow Logs Basics
VPC Flow Log Limitations
Application Load Balancer Target Groups

2.2 Given customer requirements, define network architectures on AWS

AWS re:Invent 2018: AWS Transit Gateway and Transit VPCs, Reference Architectures (NET402)
VPC Peering scenarios
Direct Connect Deep Dive
Enable Enhanced Networking

2.3 Propose optimized designs based on the evaluation of an existing implementation

Routing Policies and BGP
Direct Connect Gateways
AWS Route Priority

2.4 Determine network requirements for a specialized workload

Load Testing CloudFront
Benchmarking EC2 instances with iperf
AWS re:Invent 2018: [REPEAT 1] Optimizing Network Performance for Amazon EC2 Instances (CMP308-R1)
NLB and ALB working together

2.5 Derive an appropriate architecture based on customer and application requirements

AWS re:Invent 2017: How to Design a Multi-Region Active-Active Architecture (ARC319)

2.6 Evaluate and optimize cost allocations given a network design and application data flow

Domain 3.0: Automate AWS tasks

Cloud Formation Template Basics
re:Invent 2017 Deep Dive on AWS Cloud Formation

3.1 Evaluate automation alternatives within AWS for network deployments

3.2 Evaluate tool-based alternatives within AWS for network operations and management

Domain 4.0: Configure network integration with application services

4.1 Leverage the capabilities of Route 53

AWS re:Invent 2017: DNS Demystified: Global Traffic Management with Amazon Route 53 (NET302)
Logging DNS Queries
Route 53 Concepts

4.2 Evaluate DNS solutions in a hybrid IT architecture

4.3 Determine the appropriate configuration of DHCP within AWS

DHCP options

4.4 Given a scenario, determine an appropriate load balancing strategy within the AWS ecosystem

AWS re:Invent 2017: Elastic Load Balancing Deep Dive and Best Practices (NET402)
ELB Connection Draining – Remove Instances From Service With Care
4.5 Determine a content distribution strategy to optimize for performance
AWS re:Invent 2017: Amazon CloudFront Flash Talks: Best Practices on Configuring, Se (CTD301)
AWS re:Invent 2017: Introduction to Amazon CloudFront and AWS Lambda@Edge (CTD201)
How to Create a Cloud Front Distribution

4.6 Reconcile AWS service requirements with network requirements

AWS Managed interfaces

Domain 5.0: Design and implement for security and compliance

5.1 Evaluate design requirements for alignment with security and compliance objectives

AWS VPC Security Capabilities

5.2 Evaluate monitoring strategies in support of security and compliance objectives

AWS Config Scenarios for Compliance

5.3 Evaluate AWS security features for managing network traffic
AWS WAF Resources
IPv6 Security Groups

5.4 Utilize encryption technologies to secure network communications

 AWS Summit San Francisco 2018 – AWS Certificate Manager Private Certificate Authority
AWS Certificate Manager
Introducing AWS Certificate Manager Private Certificate Authority (CA) – AWS Online Tech Talks

Domain 6.0: Manage, optimize, and troubleshoot the network

6.1 Given a scenario, troubleshoot and resolve a network issue
 VPC Flow Logs – Investigate & Troubleshoot network issues in AWS at VPC, Subnet or ENI Level

EXTRA!!  Critical Notes on specific subjects:

Read Cisco’s Guide to BGP MED

BGP MED influences incoming traffic from peers. The lowest MED is preferred. Use it to influence the way into your AS when multiple entry points exist. MED is not transitive beyond one AS.

Read AWS BGP Communities and Local Pref

BGP Local Preference influences outbound routing. The highest Local Pref is preferred. Use it to prefer an exit point from an AS when multiple exit points exist. This is passed to AWS to prefer a path back to the customer. LOCAL PREF is considered BEFORE AS PATH in BGP route selection. AWS uses BGP communities to set a tag for Local Pref:
7224:7100 = low, 7224:7200 = med, 7224:7300 =high
BGP Route Selection:
1. Longest prefix
2. Local Preference (highest preferred)
3. Shortest AS PATH
4. Lowest MED
5. If all of these are the same: equal-cost load sharing
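The selection order above can be sketched as a toy Python comparator (attribute names are illustrative, not from any real BGP implementation):

```python
# Toy best-path selection: longest prefix, then highest local preference,
# then shortest AS path, then lowest MED.

def best_route(routes):
    return max(
        routes,
        key=lambda r: (r["prefix_len"], r["local_pref"],
                       -len(r["as_path"]), -r["med"]),
    )

routes = [
    {"name": "A", "prefix_len": 24, "local_pref": 100, "as_path": [65001], "med": 50},
    {"name": "B", "prefix_len": 24, "local_pref": 200, "as_path": [65001, 65002], "med": 0},
]
# Local pref is evaluated before AS path length, so B wins despite its longer path.
print(best_route(routes)["name"])  # → B
```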

BGP: Route control with communities

Tag your routes -> TO AWS with:
7224:9100 = your routes will stay in local AWS Region
7224:9200 = your routes will propagate to all regions in the continent
7224:9300 = your routes will propagate to all Public AWS Regions
AWS tagged routes toward you(customer):
7224:8100 = you get routes from same AWS Region
7224:8200 = you get routes from all regions in continent
no tag = global routes are propagated into your route table
VPC and Direct Connect Route Selection:
1. VPC local routes preferred first (even if receiving a more specific prefix of the same CIDR from the customer)
2. Longest prefix match
3. Static routes
4. Propagated routes:
a. Direct Connect over VPN
b. VPN static routes
c. VPN dynamic routes
Routing with VPC EndPoints:
Multiple routes to different services in one route table is OK
Multiple routes to the same service in different route tables is OK
NO multiple routes to the SAME service in the SAME route table!
Route table APIs do not work with VPC EndPoints at this time

Direct Connect and AWS VPN

Direct Connect Gateway (DX GW) Rules:

No transit communications, no hub communications (only communication between a Private VIF and a VGW); Direct Connect Gateways are 'account scoped'
Setting up: 1. Associate the DX GW with a VGW, 2. Associate the Private VIF with the DX GW
A DX GW can have 30 VIFs

AWS VPN Supports:

NAT-T, 4 byte ASN, CloudWatch Metrics, AES-256, Priv ASN for AMZN side of BGP
Ports needed open on Customer FW for AWS VPN: UDP 500, IP protocol 50 and UDP 4500 ( for NAT-T )
1. One Virtual Private Gateway to many customer gateways
2. Unique BGP ASN for each customer GW
3. Each site to site VPN connection advertises its own specific routes

Direct Connect

VIFs; to set up a VIF you need to specify:
Address family: IPv4 or IPv6
Peer IP address:
-Public VIF – you MUST specify it
-Private VIF – you can auto-generate it
-IPv6 – AMZN allocates a /125
BGP ASN:
-Public – you must own it
-Private – must be in the range 64512 – 65535
BGP MD5 key (provide your own or AMZN can auto-generate)
Direct Connect LAGs:
You need one LOA/CFA for each individual (physical) connection
Max of 4 connections in a LAG
All LAGs operate Active/Active
All LAG links must be the same bandwidth
Creating a DX from the console:
You need to specify the DX location and port speed
Direct Connect Rules:
Auto-negotiation must be disabled
802.1Q support
BGP + BGP MD5 is required
1000BASE-LX or 10GBASE-LR single-mode fiber

Tools you should know about:

tracepath tool – preferred for checking the MTU between hosts
When using ICMP, the destination-unreachable codes are important:
host unreachable = 1
protocol unreachable = 2
port unreachable = 3
fragmentation needed but DF bit set = 4
AWS WorkSpaces necessities. (I don't understand the emphasis on WorkSpaces in this exam, as it's not part of core AWS networking; it feels like the exam is being used as a marketing tool for this product. BTW – I fumbled this on my first run through the exam.)
A user directory is required for WorkSpaces [AWS Managed AD, AD Connector (to your AD), Simple AD]
Each WorkSpaces implementation has two ENIs:
ENI (eth0) – management and streaming
ENI (eth1) – directory
WorkSpaces ports:
443 – auth session
4172 UDP/TCP – health checks
WorkSpaces requires two private subnets + one public subnet
For access control, WorkSpaces uses a concept called IP Access Control Groups, with a limit of 25 IP addresses
For MFA with WorkSpaces, an on-prem RADIUS server is required
AWS Appstream
Each AppStream 2.0 streaming instance has the following network interfaces:
The customer network interface provides connectivity to the resources within your VPC, as well as the internet, and is used to join the streaming instance to your directory.
The management network interface is connected to a secure AppStream 2.0 management network. It is used for interactive streaming of the streaming instance to a user’s device, and to allow AppStream 2.0 to manage the streaming instance.
The management network interface uses a dedicated IP address range. The following ports must be open on the management network interface of all streaming instances:
Inbound TCP on port 8300. This is used for establishment of the streaming connection.
Inbound TCP on port 8443. This is used for management of the streaming instance by AppStream 2.0.
AppStream Port 443 is used for HTTPS communication between AppStream 2.0 users’ devices and streaming instances.
Appstream Port 53 is used for communication between AppStream 2.0 users’ devices and your DNS servers.
Enhanced Networking
Depending on your instance type, enhanced networking can be enabled on:
Elastic Network Adaptor(ENA)
Intel 82599 Virtual Function (VF) interface
Linux Kernel 3.2 or greater required. Must be an HVM instance. Must be in VPC ( no EC2 Classic )
Verify which driver is in use; if the output shows vif (the Xen paravirtual driver – different than a DX VIF 🙂), enhanced networking is NOT enabled; ena or ixgbevf indicates it is:
[ec2-user ~]$ ethtool -i eth0
driver: vif
Enhanced Network supports 25 Gbps between instances
Placement Groups – General Rules
You cannot merge placement groups
T2 mediums cannot be in placement groups
One instance cannot span multiple groups
A group cannot be deleted until all instances are terminated or deleted
Cluster placement groups:
Single AZ only / 10 Gbps flow
"for applications that benefit from low network latency and high throughput"
If you receive capacity errors, stop and start all the instances and try the launch again
Max network speed is limited by the slower of the two instances
Network traffic to the internet and over an AWS Direct Connect connection to on-premises resources is limited to 5 Gbps
Spread placement groups:
Can span AZs! MAX of 7 running instances per AZ
"recommended for apps that have a small # of servers that need to be kept separate"
Instances do not share the same underlying hardware
Spread placement groups are not supported for Dedicated Instances or Dedicated Hosts
Partition placement groups:
Spread across multiple partitions; limits the failure domain to only one partition
7 partitions per AZ; YES, they can span AZs. The number of instances that you can launch in a partition placement group is limited only by your account limits
A partition placement group with Dedicated Instances can have a maximum of two partitions
Partition placement groups are not supported for Dedicated Hosts
Partition placement groups are currently only available through the API or AWS CLI

VPC Networking and Services

VPC EndPoints (PrivateLink)
Interface EndPoints: connect to services powered by PrivateLink, including services hosted by AWS partners in their own VPCs
Instances DO NOT require public IPs
Interface EndPoints CAN be accessed through Direct Connect
Choose one subnet per AZ
Up to 16 Gbps per AZ
Supports TCP only
Regionally scoped
No tags, IPv4 only
Gateway EndPoints (S3/DynamoDB):
EndPoints are supported only within the same Region
Gateway EndPoints cannot be extended out of a VPC; NOT reachable via Direct Connect, VPN, or peering!
Must use DNS / DNS must be enabled in the VPC
You cannot use a prefix list in an outbound ACL to allow/deny traffic to an endpoint; use the CIDR in the ACL
The default endpoint policy allows full access to any S3 resource; to lock down:
Use route tables and bucket policies to restrict access to a specific VPC EndPoint (not the SRC IP). Bucket policies for EndPoints cannot use private IPs
The Gateway EndPoint must be in the subnet route table
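A sketch of that lock-down (bucket name and endpoint ID are hypothetical): a bucket policy can deny everything that does not arrive via a specific endpoint using the aws:sourceVpce condition key:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptThisEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}
      }
    }
  ]
}
```

Note the condition matches the endpoint ID, not a source IP, which is exactly why private IPs don't belong in endpoint bucket policies.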

Route 53

You need these enabled to enable DNS in your VPC:
enableDnsHostnames – Indicates whether instances launched in the VPC get public DNS hostnames. If this attribute is true, instances in the VPC get public DNS hostnames, but only if the enableDnsSupport attribute is also set to true.
enableDnsSupport – If this attribute is false, the Amazon-provided DNS server in the VPC that resolves public DNS hostnames to IP addresses is not enabled.
A hosted zone gets:
An NS record (with four servers)
A Start of Authority (SOA) record
You can create a delegation set of four servers for your hosted zone and reuse it across multiple zones. You can create a delegation set with the AWS CLI or API only.
DNS hybrid, VPC to on-prem:
Create a DHCP options set that includes Directory Services and assign it to the VPC the directory is in; any instances resolving DNS in that VPC then point to the domain the directory is in and can resolve names. The directory should also have a conditional forwarder to the on-prem DNS server for domains that are on-prem, and forward to Route 53 for any non-authoritative answers.
DNS hybrid, on-prem to VPC:
Configure a DNS forwarder on-prem to forward requests to Simple AD (over DX or VPN)
Simple AD receives the request and (if need be) queries Route 53 for the address
Route 53 responds to Simple AD
Simple AD replies back to on-prem
"Simple AD is one of the easiest ways for on-prem devices to access private hosted zones"
EC2 DNS and VPC peering
Resolution of a public DNS hostname to a private IP when queried from the peered VPC:
Modify the peering connection and enable "allow DNS resolution from accepter VPC (vpc-id) to private IP"; enableDnsSupport and enableDnsHostnames must be on in both VPCs
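A sketch of that change with the AWS CLI (the peering connection ID is hypothetical; set the accepter or requester side as appropriate for your topology):

```shell
# Let instances resolve peered public hostnames to private IPs
# across the peering connection.
aws ec2 modify-vpc-peering-connection-options \
    --vpc-peering-connection-id pcx-0a1b2c3d \
    --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
    --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true
```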
Logged DNS queries (for public hosted zones) contain:
domain or subdomain requested
date and time
DNS record type
the Route 53 edge location that responded
response code
(noticeably absent is the requestor IP. WHY??)
Route 53 health checks:
Health checks of other health checks
Health checks that monitor an endpoint
Health checks that monitor CloudWatch alarms
HTTP, HTTPS – a status code of 2xx or 3xx within two seconds after connecting
TCP – Route 53 must be able to establish a TCP connection with the endpoint within ten seconds
HTTP and HTTPS health checks with string matching – as with HTTP and HTTPS health checks, Route 53 must be able to establish a TCP connection with the endpoint within four seconds, and the endpoint must respond with an HTTP status code of 2xx or 3xx within two seconds after connecting
After a Route 53 health checker receives the HTTP status code, it must receive the response body from the endpoint within the next two seconds


CloudFront
Costs: invalidation requests + PriceClass
Data transfer rate: 40 Gbps
CloudFront can vary the response based on:
User-Agent, Language, Protocol, Cookies
GET requests do not always route back to the origin
PUT is not cached
For RTMP distributions:
CloudFront CANNOT forward query strings to the origin
Use signed URLs
Signed URLs vs. cookies:
Signed URLs carry additional info on the expiry date
Use for RTMP
Use for individual files
Use if the user agent does not support cookies
Cookies support multiple files
Use cookies if you don't want to change the URL
Use cookies for sets of files, like videos in HLS
Lambda@Edge
Lambda@Edge can be triggered at 4 separate points:
Viewer Request, Origin Request [URI header mod, object change, path change]
Origin Response, Viewer Response [change the object returned, modify what is cached]

Transit VPC / Hybrid VPC Connectivity Scenarios / Rules

VPC Peering
No A -> B -> C routing (no transitive routing)
No overlapping CIDRs (you can peer to part of a VPC CIDR though, if it does not overlap)
Inter-region peering is "secure communication"
Transit VPC reasons / properties
Reduce the number of Tunnels required
Use a Security Layer
Address overlap of IP between on-prem and VPC
Highly Available
Scale Globally
Transit VPC Design 1
Typically done with a pair of software VPNs (Cisco CSR 1000v) in the transit VPC
Tunnels between the CSR 1000v and on-prem
OR Direct Connect can be used with a transit VPC using a detached VGW
Tunnels between the CSR 1000v and each VPC's VGW to which you need to connect
Why use a detached VGW? Leverage a detached virtual private gateway (VGW) to conceptually attach a VGW to a data center. In this approach, a customer creates a VGW, then adds a spoke VPC tag (default tag key transitvpc:spoke, default tag value true) without attaching the VGW to a specific VPC. This causes the VGW to be automatically connected to the transit VPC CSR instances, which start broadcasting any routes they have learned to the new VGW.
Transit VPC Design 2
For inter-spoke ( just the spoke VPCs) VPC peering can be used to communicate between just the spokes instead of sending traffic back to the transit VPC … IF  spokes are trusted.
Lambda access to a VPC
To access resources in a private VPC, supply a VPC config at creation:
aws lambda create-function ... --vpc-config SubnetIds=<subnet-ids>,SecurityGroupIds=<sg-ids>
Accessing AWS Public Services in a remote Region:
Direct Connect gateway in any public Region. Use it to connect your AWS Direct Connect connection over a private virtual interface to VPCs in your account that are located in different Regions. For more information, see Direct Connect Gateways.
Alternatively, you can create a public virtual interface for your AWS Direct Connect connection and then establish a VPN connection to your VPC in the remote Region
VPC General
Internet Gateways have no bandwidth limitation
Flow Logs; to create one you need:
a CloudWatch log group
an IAM role
Flow log contents:
version, account ID, interface ID, src IP, dst IP, src port, dst port, protocol, # of packets
NAT Gateway cost: per-hour charge + data-processing charges
NAT Gateway = 10 Gbps; if you need more, distribute the load into multiple subnets and use one NAT GW per subnet
NAT Gateways support TCP, UDP, and ICMP; no security groups


CloudFormation
A collection of resources in AWS managed as a single unit is called a stack
'StackSets' extend that functionality, allowing you to create, update, or delete stacks across multiple accounts
For templates, the only required top-level object is the Resources object, which must declare at least one resource
For templates, the Parameters object: a parameter contains a list of attributes that define its value and constraints against its value. The only required attribute is Type, which can be String, Number, or an AWS-specific type
Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates
Change sets allow you to see how proposed changes to a stack might impact your running resources before you implement them
Alternate Resources:’s blog is really good, as is this blog



AWS InterfaceEndpoints vs. GatewayEndpoints (accessing from VPN, DX or Peering)

As I ramp up for my second run at the AWS Network Specialty Exam, I want to reiterate this major difference between VPC Interface EndPoints and VPC Gateway EndPoints:

An interface endpoint can be accessed through AWS VPN connections or AWS Direct Connect connections. Interface endpoints can be accessed through intra-region VPC peering connections from Nitro instances. Interface endpoints can be accessed through inter-region VPC peering connections from any type of instance.

(Gateway)Endpoint connections cannot be extended out of a VPC. Resources on the other side of a VPN connection, VPC peering connection, AWS Direct Connect connection, or ClassicLink connection in your VPC cannot use the endpoint to communicate with resources in the endpoint service

This makes sense because Interface EndPoints are exactly that: a virtual interface (a NIC) in your VPC, whereas Gateway EndPoints are more of a routing device that gets you to a public service [S3, DynamoDB].

To access S3 from Direct Connect, use a Public VIF, as recommended here:



AWS GuardDuty is part of Defense in Depth, not a ‘Silver Bullet’.

As a huge advocate of the AWS GuardDuty intrusion detection system, I cannot recommend it enough to people. But as with all security products, no one solution or one vendor should ever be considered a 'silver bullet'.

With GuardDuty specifically, an area of growth for the product in its current state is that it can take up to five minutes for GuardDuty to report a finding to CloudWatch Events. That's five minutes before YOU KNOW anything happened, five minutes before your automation can kick in, and five minutes the hacker has to do what he needs to do. It's not REAL TIME.

To close that gap, I recommend using one or more endpoint security solutions as part of your cloud defense-in-depth strategy. There are various players in this space, but the key is to pick your endpoint solutions based on your architecture and your environment.

For example, if you run a lot of Linux workloads, I recommend deploying the OSSEC client for real-time intrusion detection at the host level (OSSEC has a Windows client as well). For Linux, OSSEC can also be configured for 'active response', running custom commands in response to events or automatically adding a rule to the local iptables to block attacks. Again, this is all real-time event processing. OSSEC also provides File Integrity Monitoring (FIM) that can be customized.

For Windows, I have been successful using Carbon Black Defense (use whatever solution works best for your environment; I am not selling Carbon Black here). With Carbon Black Defense, you can do some very fine-grained tuning of which types of applications run on your Windows systems: allowing only signed code to run, custom application paths, checking trusted apps and shutting down any other process, plus many more buttons and triggers. Carbon Black is also an 'active defense' product in that it stops malicious code in REAL TIME. I have also used the MalwareBytes enterprise desktop endpoint client for Windows, and although it is not as cloud-friendly as Carbon Black from an API perspective, the MalwareBytes detection and response engine works well in REAL TIME.

Last, for a more advanced type of endpoint security solution, check out Guardicore. In addition to standard threat detection and FIM, their Centra product provides application flow visibility and container security (in the build process, too). Its real-time threat response re-routes attacks to a 'deception engine' (honeypot) and attempts to gather intel. Guardicore also does some reputation analysis of IPs and domains.

The list goes on... The key premise here is defense in depth. You can use all of those endpoint solutions together. Even as continuous development improves the GuardDuty product (if AWS makes it REAL TIME), I would still recommend multiple IDS/IPS solutions whose core functions overlap if drawn out in a nice Venn diagram. That's what you want: the failure of one system should never negatively affect your security posture.



Parsing GuardDuty Alerts by Finding Type with CloudWatch Events

Hello, fellow GuardDuty enthusiasts. As promised in my earlier post, I wanted to share CloudWatch Events triggers that parse GuardDuty alerts by finding type. I have been using GuardDuty since it was announced in late 2017, and for the bulk of that time I was parsing GuardDuty alerts in CloudWatch Events using severity to invoke my security automation Lambda functions.

I now recommend NOT using GuardDuty severity in the CloudWatch Events trigger for any automation you want to invoke. Why? Simply put, you don't control the finding-type-to-severity mapping, and thus you could invoke automation when you really don't want to.

I found that what I believe to be 'low-level/low-threat' GuardDuty alerts were coming in with higher severities. An example of this: as of 02/20/2019, the GuardDuty finding UnauthorizedAccess:EC2/TorIPCaller is still mapped to severity 5. If you have automation that stops an EC2 instance based on severity 5, then every Tor visitor would invoke it.

What else comes in as severity 5? Backdoor and Trojan findings, among others: stuff you actually may want to act on. If someone hits my website from a Tor IP, I am going to treat that differently than if my web server resolves the IP of a command-and-control server.

Amazon does not publish a definitive mapping of finding type to severity that I have been able to find. The only way I have found to get the mapping for each alert is to generate each finding type with the API:

aws guardduty create-sample-findings --detector-id <redacted> --finding-types UnauthorizedAccess:EC2/TorIPCaller

And then examine the alert JSON for the severity… ugh.
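To avoid eyeballing each JSON blob by hand, a few lines of Python can pull the type-to-severity mapping out of the `aws guardduty get-findings` output. This is just a sketch; the sample response below is trimmed to the relevant keys and only contains the TorIPCaller example from above, not a full mapping.

```python
import json

def severity_map(get_findings_output):
    """Given parsed JSON from `aws guardduty get-findings`,
    return a {finding type: severity} dict."""
    return {f["Type"]: f["Severity"] for f in get_findings_output["Findings"]}

# Sample shape of a get-findings response, trimmed to the keys we care about.
sample = {"Findings": [
    {"Type": "UnauthorizedAccess:EC2/TorIPCaller", "Severity": 5.0},
]}

print(json.dumps(severity_map(sample), indent=2))
```

Pipe the real get-findings output into this after generating each sample finding and you have your mapping without squinting at raw JSON.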

The better way: build your CloudWatch Events rule on the specific finding type:

{
  "source": [
    "aws.guardduty"
  ],
  "detail-type": [
    "GuardDuty Finding"
  ],
  "detail": {
    "type": [
      "Backdoor:EC2/C&CActivity.B!DNS"
    ]
  }
}

For the example above, simply replace "Backdoor:EC2/C&CActivity.B!DNS" with the finding type of your choice.

For each unique CloudWatch event you create based on individual GuardDuty finding type, select a different target (Lambda Security automation) to invoke the specific remediation action that is appropriate!
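If you script your rules instead of clicking through the console, one pattern per finding type is easy to template. Here is a minimal sketch (the helper name is mine, not an AWS API); the resulting JSON is what you would pass to `aws events put-rule --event-pattern`, one rule and one Lambda target per finding type:

```python
import json

def guardduty_event_pattern(finding_type):
    """Build a CloudWatch Events pattern that matches exactly one
    GuardDuty finding type."""
    return {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"type": [finding_type]},
    }

# One pattern (and therefore one rule + one remediation Lambda) per type.
pattern = guardduty_event_pattern("UnauthorizedAccess:EC2/TorIPCaller")
print(json.dumps(pattern))
```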

Last bit of advice: ALWAYS test your CloudWatch triggers when introducing a new alert or new syntax into an existing alert. AWS has a GitHub repo for testing GuardDuty alerts based on real events, but when I spun it up, my GuardDuty did not actually alert on any of the DNS queries included in their script; if no alert is generated, no CloudWatch rule can be triggered. For 100% certainty when testing CloudWatch rules you write, just use the GuardDuty API:

aws guardduty create-sample-findings --detector-id <redacted> --finding-types <finding-type>



Security Automation Scripts from AWSLabs

Hi Friends, it's been a little while since I've written. It's been a busy 2019! I started my new gig as an AWS TAM, and I've STILL been studying diligently for the AWS Networking Specialty Exam.

Security automation can do a lot of good for your AWS account. The bad guys are automating their stuff, so you need to as well. It's nice to know you have automation working for you during the day and at night while you sleep. One of my chief aims is to continue my own professional development in cloud security automation, so stay tuned for more posts like this!

For now, here are some links to great AWSLabs scripts on GIT for automating certain aspects of your AWS security infrastructure; below that, I've added links to my own repo where I have done work in this area.


Remediation for the disabling of CloudTrail

EC2 Auto Clean-Room Forensics

IAM Access Denied Responder 

GIT Secrets: checks your repos to ensure you are not posting API keys to GIT!

Amazon GuardDuty Tester

AWS CIS Security Benchmark: who says you need expensive vendors to do this for you?

AWS Inspector Finding Forwarder

AWS Inspector Auto Remediate

AWS Inspector Agent Auto Deploy

HensonEngineer GIT:

OSSEC Automation, for installing OSSEC agents and the OSSEC server automatically, for IaC purposes

Access Key Age Checker (and Disabler)

Auto Remediate 3389 and 22 Security Groups. This one is meant to run in response to an event, so YOU need to configure the event that will trigger it, such as CloudTrail security group changes.
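To give a flavor of what that kind of remediation looks like, here is a sketch of just the selection logic (the helper name and sample rules are mine for illustration, not code from the repo): given the IpPermissions list from an EC2 DescribeSecurityGroups response, pick out the ingress rules that expose 22 or 3389 to 0.0.0.0/0.

```python
RISKY_PORTS = {22, 3389}  # SSH and RDP

def offending_permissions(ip_permissions):
    """Return the ingress rules that open SSH or RDP to the whole internet.

    `ip_permissions` is the IpPermissions list from an EC2
    DescribeSecurityGroups response."""
    bad = []
    for perm in ip_permissions:
        from_port, to_port = perm.get("FromPort"), perm.get("ToPort")
        if from_port is None:
            # "All traffic" rules (IpProtocol "-1") carry no port range.
            exposes_risky_port = True
        else:
            exposes_risky_port = any(from_port <= p <= to_port for p in RISKY_PORTS)
        world_open = any(r.get("CidrIp") == "0.0.0.0/0"
                         for r in perm.get("IpRanges", []))
        if exposes_risky_port and world_open:
            bad.append(perm)
    return bad

rules = [
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
print(offending_permissions(rules))  # only the port-22 rule is flagged
```

In the event-driven version, your Lambda would pull the group ID out of the CloudTrail event, call boto3's `describe_security_groups`, and then pass each flagged rule to `revoke_security_group_ingress(GroupId=..., IpPermissions=[perm])`.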

And as a BONUS, here is an amazing post from AWS:

How to Use AWS Config to Monitor for and Respond to Amazon S3 Buckets Allowing Public Access


