Don’t let Technology distract us from Higher Achievement and our True Purpose

Photo Copyright of the Napoleon Hill Foundation

Hi Friends. If you're good with it, we're taking a slight detour from Security and AWS for this post. I'd like to talk about one of my mentors, Napoleon Hill, and what he might think about achieving high success in today's high-tech, social-media-saturated, information-abundant, connected world.

First, a quick note: I just missed meeting Napoleon Hill in my lifetime; he died in 1970 and I was born in 1972. That has not stopped him from being my mentor. In 1952, a five-evening lecture series was recorded at the University of Chicago entitled 'The Science of Success' – 11 hours of lecture in total – the sum of Napoleon's life experience and insight into his own principles in classroom-lecture form. These recordings were later released as 'Your Right to Be Rich' on compact disc in the early 2000s; that is when I found them and became deeply immersed in Hill's philosophy.

One of the core tenets of success in Napoleon Hill's philosophy is 'Controlled Attention'. Simply put, this is the practice of focusing your energy and attention on your definite major purpose. Controlled Attention, combined with imagination, allows us to visualize our desires and goals and bring them to life through creating plans, drawing on bar napkins, talking to others about our desire, and so on. Controlled Attention channels our energies into the one thing we want above any other, and moves us in that direction.

Switching gears, and tying the story together: some pals and I were eating lunch one day and having a conversation about TV, and they were astonished I had not seen all the seasons of Game of Thrones, along with a whole list of other popular nerdy TV series now streaming on Netflix and Amazon that they asked me about. While this was going on, I took a casual glance around the cafe and noticed just about everyone there had their eyes and thumbs glued to their phone. In that moment, my thoughts took a journey down the rabbit hole... What would my mentor think about our modern world?

He would see that as a 'smartphone', ever-connected society, we are continuously distracted. Every last minute of our lives is filled with checking our phones, status updates, Twitter, news, who is doing what on Facebook, iFunny memes, apps; the list goes on and on. He would see that Hollywood has now produced more content than we could ever watch in a lifetime and made all of it available at once, so we can binge watch. He would see that most of us bring our phones to bed at night. He would see our faces pointed toward all kinds of screens all the time.

He would believe that automatic, habitual use of all our electronic devices and constant consumption of media erodes our creativity and self-driven Controlled Attention; the very Controlled Attention and creativity that are desperately needed for us to grow, learn and succeed.

Yes, if Napoleon Hill were here today, he would advocate for long periods of disconnection from our electronic devices. He would advocate disconnection from social media and tell us to turn away from the limitless content on our TVs. He would tell us to spend time writing in our creative journals, getting together with our Mastermind Alliances and using that time to engage in passionate self-suggestion, to build our faith in ourselves and our own capabilities. When we do use the internet, Dr. Hill would advocate using it as a tool for bringing ourselves closer to our goals. Ultimately, he would tell us to focus on our Definite Major Purpose!

This message is more for me than it is for you. A message to be mindful of all that time spent swiping your phone, mindlessly surfing – and get back on track. Engage your imagination. Draw a picture. Write in your journal and connect with yourself in a deep meaningful way and ensure you are on track with the great destiny you have imagined for yourself!

 

 

SOPHOS – Security SOS Botnet Webinar Write-up

“SOPHOS – Security SOS Botnet Webinar” Write-up by Chris Henson

VERY early last Thursday, I attended the Sophos Security SOS 'Botnets – Malware that makes you part of the problem' webinar. The webinar was early for me because it was hosted late in the day in the UK. The main speaker was Paul Ducklin. Paul knows his stuff when it comes to malware, as do many engineers at Sophos; that team has some of the most extensive technical write-ups on malware behavior out there.

As usual, I took notes, so I wanted to share them here:

– BEGIN WEBINAR NOTES –

Info about Botnets:

There is a rise in bot builder tools: semi-custom software packs where the operator can customize phishing [ dropper ] campaigns and can utilize the bots in a variety of ways. Bots can be customized to report back / call home with specific attributes of the computer they take over: current patch level, disk space, GPU, memory, enumerated running processes, enumerated security products installed, etc.

Web-based botnet consoles have knobs / dials / tools and present various types of information about the botnet they control in a dashboard layout: geo-location, OS type, target, who reported in, how long ago, etc.

This data can then be used to conscript the bot into a specific type of botnet:

  • If you have infected many machines with high GPU capabilities, those machines could go to a bitcoin-mining botnet.
  • If the initial infection is a corporate machine, the data about the security tool sets installed may be valuable to other bad guys.
  • If machines are found that have HUGE disk space, those machines become part of a storage botnet.
  • If you are an average machine, or an IoT device, you get conscripted into a DDoS botnet that can be rented out.

Bots – smaller, more basic kits – simply act as downloaders:

  • for other kinds of software, sometimes even "legitimate" adware, where companies are paid each time their adware is installed
  • for more specific botnets, determined later by the attacker: SPAM, keylogging
  • for whatever the next bad guy decides to download, when the machine is sold on
  • for multiple bots [ a machine can be a part of more than one botnet ]

Bots and Ransomware:

After a bot has exceeded its useful life, the attacker may try to get another $200 – $600 by having the bot's last job be to install ransomware. The reverse is also true: ransomware can have extra code that installs bots, so even after you pay, the machine is still infected.

Keeping bots off your Computer:

  • Patch, patch and patch – reduce the risk surface.
  • Remove Flash from your machine [ Adobe Flash has been the #1 target of infections ].
  • Do not run Java in your browser.
    • Oracle recently modified the base Java install to run as an app only on the machine, and NOT as an applet in the browser.
  • For things like home routers, cameras and IoT devices, always get the latest vendor firmware.
    • If a device is old and vulnerable, it's time to scrap it and get a new one.

Detecting bots:

  • Microsoft Sysinternals tool set to see processes.
  • Wireshark
  • [ my own note ] Security Onion with Bro and ELSA installed, getting a tapped or spanned feed from the suspected machine (see the capture sketch below)
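As a quick illustration of that last note, here is a minimal sketch of grabbing traffic from a suspected machine on a tap or span interface so it can be reviewed in Wireshark or fed into Security Onion. The interface name and host IP are made up for the example:

    # capture everything to/from the suspect host on the monitoring interface
    sudo tcpdump -i eth1 -w suspect.pcap host 192.168.1.50

    # quick look for "call home" traffic that isn't ordinary web browsing
    tcpdump -nn -r suspect.pcap 'tcp and not port 80 and not port 443' | head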

 

One click could have protected the data of 198 Million People: Amazon AWS

 

A major security event involving a breach caused by user error occurred recently in AWS. Website thehill.com reports "25 terabytes of files contained in an Amazon cloud account that could be browsed without logging in." These files were RNC-owned and held voter data.

Given that the article read "25 TB of files", it's not too far of a stretch to say these were files [ objects ] stored in an S3 bucket (or buckets). Here is the crazy thing: from an AWS security perspective, literally one click could have protected all the files [ simply un-checking "Read" for the Everyone group ]. Take a look at the screen shot down the page a bit to see exactly what I mean.

In this instance, the contractor managing the RNC Voter Data Files strayed away from the default S3 bucket configuration, which is:

” By default, all Amazon S3 resources—buckets, objects, and related subresources (for example, lifecycle configuration and website configuration)—are private: only the resource owner, an AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy.”

One can only guess that the contractor in charge of these files was trying to give access to a small group of people who did not have AWS accounts, and simply checked Read for Everyone on the entire bucket.

Even if that was the practice, the fact that Read for Everyone was left checked over time is simply mind-boggling.
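For what it's worth, the same check and fix can be done from the AWS CLI. Here is a minimal sketch with a made-up bucket name: list the bucket's ACL grants (look for the AllUsers group) and then reset the ACL to private, owner-only access:

    # show who has been granted access to the bucket
    aws s3api get-bucket-acl --bucket example-voter-data

    # remove all public grants by resetting the ACL to private
    aws s3api put-bucket-acl --bucket example-voter-data --acl private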

There are so many ways this could have been prevented. Bucket policies come to mind as well: among the many custom security access policies you can create, access controls can be applied to lock down a bucket for anonymous users (users without AWS accounts) by specifying a referring website or their source IP if extended read access is needed.
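If anonymous read access really was required, a bucket policy scoped to a source IP range would have been a far safer option. Here is a rough sketch of that idea; the bucket name and CIDR block are placeholders, not anything from the actual incident:

    # hypothetical policy: anonymous GetObject allowed only from one office IP range
    cat > policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "ReadOnlyFromOfficeIP",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-voter-data/*",
        "Condition": { "IpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
      }]
    }
    EOF
    aws s3api put-bucket-policy --bucket example-voter-data --policy file://policy.json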

Amazon makes it easy to secure S3 buckets: first, by the default policy; second, by literally having a place where you can un-check one box. There is simply no excuse for this breach!

[ update: The original find is here on upguard.com, and it confirmed my suspicion above that the files were indeed stored in an S3 bucket! ]

Sony PlayStation 2017 E3 Ticket Site [ www.gofobo.com ] Down ALL DAY

 

Like many other enthusiasts, I was excited to get the opportunity to purchase tickets to my Local Theatre to experience Sony PlayStation’s E3 LIVE simulcast on the big screen!

The link to get tickets is at this site:

https://www.playstation.com/en-us/campaigns/2017/e3experience/

which points to a 3rd party ticket provider, gofobo.com  with this URL:

http://www.gofobo.com/PlaystationE32017

At approximately 10 AM PT http://www.gofobo.com/  CRASHED HARD and has been down ever since. 

The main response code all day has been HTTP 503 – Service Unavailable. Now it is showing a 404 Not Found (screen shot above). One attempt earlier in the afternoon brought up the main gofobo.com page, but it then said that the "PlayStationE32017" code was invalid.

Earlier today, gofobo.com had two public IPs registered; I tried them both. No go.

All other requests have hung, or been met with a 503 (which has now turned into a 404). I think this is really gofobo.com simply being overwhelmed by Sony PlayStation fans – FAILURE TO SCALE. It could have been another intentional, malicious DDoS against Sony, or perhaps human error killed it. I was able to get tickets within 5 minutes last year, and I don't remember gofobo.com being part of that. That 404 on their main site is, I believe, because they moved their site to new digs:

At present, 4:11 PT, it appears they are shifting their DNS records around (there were only two IP entries, and different IPs, in a previous dig at 1 PM).
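For anyone following along at home, the answer section I'm pasting comes from an ordinary dig query against their hostname, something like this (the exact flags are just my habit):

    # show only the answer section for the ticket site's hostname
    dig +noall +answer www.gofobo.com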

Here is a DIG now:

    ;; ANSWER SECTION:
    www.gofobo.com.         148  IN  CNAME  screenings-346088557.us-west-2.elb.amazonaws.com.
    screenings-346088557.us-west-2.elb.amazonaws.com.  59  IN  A  54.191.95.244
    screenings-346088557.us-west-2.elb.amazonaws.com.  59  IN  A  52.35.41.68
    screenings-346088557.us-west-2.elb.amazonaws.com.  59  IN  A  52.32.184.40
    screenings-346088557.us-west-2.elb.amazonaws.com.  59  IN  A  52.25.144.120

So… it looks like they are moving to AWS! I'm thinking this move happened when the 503 error code became a 404.

AWS Certified Solutions Architect Associate ELB & AutoScaling Study Sheet

AWS Elastic Load Balancer is the "card dealer" that evenly distributes "cards" [ traffic ] across "card players" [ EC2 instances ].

Works across EC2 instances in multiple Availability Zones (see the CLI sketch after the list below):

  • supports HTTP, HTTPS, TCP and SSL traffic / listeners
  • uses a Route 53 DNS CNAME only
  • supports internet-facing and internal load balancers
  • supports SSL offload / SSL termination at the ELB, relieving load from EC2 instances
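A minimal sketch of standing up a classic ELB from the CLI, tying the listener and Availability Zone points above together (the name, AZs and instance ID are placeholders):

    # classic ELB with an HTTP listener, spanning two Availability Zones
    aws elb create-load-balancer \
        --load-balancer-name my-web-elb \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --availability-zones us-west-2a us-west-2b

    # register the "card players" the traffic gets dealt to
    aws elb register-instances-with-load-balancer \
        --load-balancer-name my-web-elb \
        --instances i-0123456789abcdef0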

Idle Connection Timeout and Keep Alive Options

 

ELB sets the idle timeout at 60 seconds for both the front-end and back-end connections; if a connection stays idle longer than that (no data transferred), ELB closes it. Increase this setting for longer operations ( large file uploads ), etc.

For HTTP and HTTPS listeners, enable keep-alive on your back-end instances so the load balancer can re-use back-end connections, reducing CPU load.
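Raising the idle timeout is a one-liner; a sketch with a placeholder load balancer name:

    # raise the idle timeout from the 60-second default to 5 minutes
    aws elb modify-load-balancer-attributes \
        --load-balancer-name my-web-elb \
        --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"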

AWS Cloud Watch for ELB and EC2

CloudWatch is a service for monitoring AWS resources and applications in near real time. Collect and track metrics, collect and monitor log files, set alarms, and react to changes in your AWS environment [ SNS notifications, kicking off an Auto Scaling group ].

Basic Monitoring / Every 5 minutes  [ DEFAULT ]

Detailed Monitoring / every 1 minute ( more expensive ) 

Each account limited to 5000 alarms.

Metrics data retained two weeks by default.

The CloudWatch Logs agent is available as an automated way to send log data to CloudWatch Logs from EC2 instances running Amazon Linux or Ubuntu.

The AWS/EC2 namespace includes the following default instance metrics:

CPU metrics, disk metrics and network metrics.
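A sketch of wiring one of those default metrics to an alarm; the instance ID and SNS topic ARN are placeholders:

    # alarm when average CPU stays above 80% for two consecutive 5-minute periods
    aws cloudwatch put-metric-alarm \
        --alarm-name cpu-high \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
        --statistic Average \
        --period 300 \
        --evaluation-periods 2 \
        --threshold 80 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:us-west-2:111122223333:ops-alerts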

Auto Scaling and Launch Configuration

A Launch Configuration is basically a template that AWS Auto Scaling will use to spin up new instances (see the CLI sketch after this list). Launch Configurations are composed of:

  • AMI
  • EC2 instance type
  • Security Group
  • Instance Key Pair

Auto Scaling is basically provisioning servers on demand and releasing them when they are no longer needed: you spin up more servers when there is peak demand, e.g., Black Friday or World Series ticket sales.

Auto-Scaling Plans:

Maintain Current Instance Levels – health checks on the current instances; if one dies, another replaces it.

Manual Scaling – A bit of a misnomer, because the scaling itself is still automatic; only the input is manual. You specify a change in the minimum, maximum or desired capacity of the group, and Auto Scaling launches or terminates instances to meet the new capacity.

Scheduled Scaling – For predictable load [ Black Friday through Christmas ]; all actions are performed automatically as a function of date and time.

Dynamic Scaling – You define scaling parameters in response to changing conditions, using CloudWatch metrics such as network bandwidth, CPU, etc.

Scaling Policy

A scaling policy is used by Auto Scaling together with CloudWatch alarms to determine when your Auto Scaling group should scale in or scale out. Each CloudWatch alarm watches a single metric and sends a message when the metric breaches a threshold.
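A sketch of a simple scaling policy; the Auto Scaling group name is a placeholder:

    # simple scaling policy: add two instances whenever the policy is triggered
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name web-asg \
        --policy-name scale-out-on-cpu \
        --adjustment-type ChangeInCapacity \
        --scaling-adjustment 2
    # the command prints a PolicyARN; hand that ARN to a CloudWatch alarm's
    # --alarm-actions (as in the put-metric-alarm sketch earlier) so a CPU
    # breach scales the group out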

AWS Certified Architect Associate Database Study Sheet

Amazon Databases

RDS

Amazon RDS ( Relational Database Service ) has operational benefits; it simplifies setup, scaling and operation of a relational DB in AWS. Ideal for users who want to spend more time focusing on the application itself, while RDS offloads admin tasks like backups, patching, scaling and replication.

Currently supported engines: MySQL, PostgreSQL, MariaDB, Oracle, SQL Server and Amazon Aurora. Built on Amazon Elastic Block Store and can scale up to 4 to 6 TB of provisioned storage and up to 30,000 IOPS.

Amazon RDS supports three Storage types:

– Magnetic: cost-effective storage that is ideal for apps with light I/O requirements

– General Purpose ( SSD ): faster than magnetic, can burst to meet spikes; good for small to medium DBs

– Provisioned IOPS ( SSD ): designed for I/O-intensive workloads needing consistent, random I/O throughput

Min Size for SSD EBS: 1 GiB

Max Size for SSD EBS: 16 TiB
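Pulling the RDS basics above together, a sketch of launching a small MySQL instance on General Purpose (SSD) storage; the identifier and credentials are placeholders:

    # small MySQL instance on 20 GB of gp2 storage
    aws rds create-db-instance \
        --db-instance-identifier example-mysql \
        --engine mysql \
        --db-instance-class db.t2.micro \
        --allocated-storage 20 \
        --storage-type gp2 \
        --master-username dbadmin \
        --master-user-password 'ChangeMe-123'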

Amazon Aurora DB:

Commercial-grade database performance at open-source cost; roughly 5x the performance of MySQL (it is MySQL-compatible). Aurora consists of a primary instance for READ/WRITE and Amazon Aurora Replicas, which are read-only. Aurora scaling: 2 copies of your data in each AZ, with a minimum of three Availability Zones, for 6 copies of your data.

Backups and Restore

RPO – Recovery Point Objective is defined as the maximum amount of data loss (measured in time) that is acceptable in the event of a failure or outage.

RTO – Recovery Time Objective is defined as the maximum amount of downtime that is permitted to recover from backup and get back to normal operations.

Automated backups feature for RDS: enables point-in-time recovery of the DB instance. RDS does a full daily backup ( during your preferred backup window ) and also captures transaction logs. Daily backups are retained for 7 days by default; the maximum retention period is 35 days. Backups occur during a pre-defined 30-minute window.

** When you delete an RDS instance, all automated backups are deleted by default **

You are given the chance to create a final snapshot when you delete an RDS instance.

Manual snapshots, however, are not deleted.

Manual Snapshots: Can be performed at any time. Can only be restored to the point in time at which they were created. Kept until you explicitly delete them.
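A sketch of taking a manual snapshot and restoring it to a brand new instance; identifiers are placeholders:

    # manual snapshot of the instance
    aws rds create-db-snapshot \
        --db-instance-identifier example-mysql \
        --db-snapshot-identifier example-mysql-pre-upgrade

    # restore that snapshot into a new instance
    aws rds restore-db-instance-from-db-snapshot \
        --db-instance-identifier example-mysql-restored \
        --db-snapshot-identifier example-mysql-pre-upgrade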

High Availability and Multi-AZ

Multi-AZ deployments allow you to create a DB cluster across, well, you guessed it – multiple Availability Zones. This is to increase availability, not performance. DB failover in the event of an outage is fully automatic and requires no administrative intervention. Data is replicated from the master DB instance to the standby instance using synchronous replication. The DB endpoint's DNS record (Route 53) resolves to the new primary in the event of a failover.

Amazon RDS will initiate a failover in the event of:

Loss of availability in the primary AZ

Loss of network connectivity to the primary

Compute unit failure on the primary

Storage failure on the primary

Read Replicas for Increased Performance / Horizontal Scaling

  • Read replicas are not for availability; they are for increased READ performance (see the CLI sketch after this list)
  • Scale beyond the capacity of a single DB instance for read-heavy workloads
  • Handle read traffic while the source DB instance is unavailable
  • Offload reporting against a replica instead of the primary
  • Uses asynchronous replication when there is a change to the primary
  • Read Replicas are available for these RDS engines: MySQL, MariaDB and PostgreSQL
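The sketch mentioned in the list above; source and replica identifiers are placeholders:

    # create an asynchronous read replica of the source instance
    aws rds create-db-instance-read-replica \
        --db-instance-identifier example-mysql-replica \
        --source-db-instance-identifier example-mysql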

 Multi-AZ RDS instances + Backups:

When Multi AZ is used on an RDS instance, I/O is not suspended on primary during a backup, since the backups are taken from standby.

AWS DB Security

 

Use IAM policies with fine-grained access that limit what DB administrators can do

Deploy RDS instances into a VPC private subnet

Restrict access to DB using ACL

Restrict access with Security Groups

Rotate Keys and Passwords

AWS RedShift Datawarehouse

OLTP – Online transaction Processing – operations that are frequently writing and changing data. Actions performed on standard DBs.

OLAP – Online Analytical Processing – For datawarehouse. Complex query against large datasets. ” For example, where online transaction processing (OLTP) applications typically store data in rows, Amazon Redshift stores data in columns, using specialized data compression encodings for optimum memory usage and disk I/O”

AWS Redshift is a fast, powerful, fully managed, petabyte-scale data warehouse service in the cloud. It gives fast querying capability over structured data using standard SQL commands, supporting interactive querying over large datasets.

 

NoSQL Database and Amazon Dynamo DB

 

In a traditional DB, tables have a pre-defined schema: table name, primary key, column names and data types.

NoSQL DBs are non-relational DBs; there is no traditional pre-defined table schema for the data stores. Example formats:

– Document DBs

– Graph Stores

– Key/Value Stores

– Wide Column Stores

DynamoDB is an AWS NoSQL service: fully managed, extremely fast, with predictable performance achieved by automatically distributing data and traffic for a table over multiple partitions. All data is stored on high-performance SSD drives. It protects data by replicating it across multiple AZs within an AWS Region.

DynamoDB only requires that you define a primary key attribute; you don't need to define attribute names and data types in advance. Each attribute in an item is a key-value pair and can be single-valued or multi-valued, e.g.:

    {
        CarName   = "Red5"
        CarVendor = "Suzuki"
        CarVIN    = "12345678890abcdefg"
    }

Eventually consistent reads: when data is read, the response may not reflect the results of a recently completed WRITE.

Strongly consistent READS: when this type of request is made, DynamoDB returns a response with the most up-to-date writes.
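A sketch of both read modes from the CLI, reusing the car item above and assuming a hypothetical "Cars" table keyed on CarName:

    # write the example item
    aws dynamodb put-item --table-name Cars \
        --item '{"CarName":{"S":"Red5"},"CarVendor":{"S":"Suzuki"},"CarVIN":{"S":"12345678890abcdefg"}}'

    # default read: eventually consistent
    aws dynamodb get-item --table-name Cars --key '{"CarName":{"S":"Red5"}}'

    # strongly consistent read: returns the most up-to-date write
    aws dynamodb get-item --table-name Cars --key '{"CarName":{"S":"Red5"}}' --consistent-read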

 

AWS Certified Architect Associate S3 Study Sheet

AWS S3

Amazon S3 is durable, scalable cloud object storage based on key-value pairs. The number of objects you can store is unlimited; the largest object size is 5 TB; the largest single PUT is 5 GB.

A bucket is a logical container for objects stored in S3: a simple flat folder with no file system hierarchy (however, there is a logical hierarchy using key name prefixes, like Folder/File). Objects in S3 buckets are automatically replicated on multiple devices in multiple facilities within a region. 100 buckets per account by default. Bucket names are globally unique and can be up to 63 characters. Prefixes and delimiters may be used in key names. Data is managed as objects using an API; buckets can host STATIC web content only. S3 supports SSL encryption of data in transit and encryption of data at rest.

AWS Objects are private by Default and only accessible to the owner

AWS S3 Storage Classes

Standard S3 storage [ the default storage class ] provides 99.999999999% durability and 99.99% availability [ don't confuse the two ], with low latency and high throughput. Supports SSL for data in transit and encryption at rest. Supports lifecycle management for migration of objects.

Standard IA storage ( Infrequent Access ) is optimized for long-lived, infrequently accessed data: 99.999999999% durability and 99.9% availability. Minimum billable object size of 128KB and minimum storage duration of 30 days. Ideal for long-term storage, backups, and as a data store for DR. Supports SSL for data in transit and encryption at rest. Supports lifecycle management for migration of objects.

Reduced Redundancy Storage ( RRS ) is optimized for non-critical, reproducible data that is stored at lower levels of redundancy, for a reduced storage cost: 99.99% durability and 99.99% availability. Designed to sustain the loss of data in a single facility.

Use Cases: ( thumbnails, transcoded media or other processed data that can be reproduced easily )

Amazon Glacier is optimized for data archiving at extremely low cost. Retrieval can take several hours [ 3 – 5 hour retrieval ]. The Vault Lock feature enforces compliance via a lockable policy.

Glacier use cases include: [ media asset archiving, healthcare information archiving, scientific data storage, digital preservation, magnetic tape replacement ]. You can restore up to 5% of your data for free each month [ you can set up a data retrieval policy to avoid going over the free tier ].

Glacier as a standalone service: data is stored in encrypted archives that can be as large as 40 TB. Vaults are containers for archives, and vaults can be locked for compliance.

All classes support AWS lifecycle management policies: transition objects to a different class and expire objects (see the sketch below).
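The sketch mentioned above: a lifecycle policy that transitions objects to Standard-IA after 30 days, to Glacier after 90, and expires them after a year (the bucket name is a placeholder):

    cat > lifecycle.json <<'EOF'
    {
      "Rules": [{
        "ID": "archive-then-expire",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Transitions": [
          { "Days": 30, "StorageClass": "STANDARD_IA" },
          { "Days": 90, "StorageClass": "GLACIER" }
        ],
        "Expiration": { "Days": 365 }
      }]
    }
    EOF
    aws s3api put-bucket-lifecycle-configuration \
        --bucket example-archive-bucket \
        --lifecycle-configuration file://lifecycle.json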

S3 supports Multi-Factor Authentication ( MFA ) Delete to protect from accidental deletes. MFA Delete can only be enabled by the root account.

S3 DATA CONSISTENCY MODELS

  • S3 provides for read after write consistency for PUTS of new objects
  • S3 provides for eventual consistency for overwrite PUTS and DELETES

AWS S3 Security

Bucket Policies – written in JSON; you can grant permissions to users [ allow / deny ] to perform specific actions on all or part of the objects in a bucket. Broad rules across all requests. Can restrict by HTTP referrer or source IP.

Bucket ACLs – grant specific permissions ( READ, WRITE and FULL_CONTROL ) to specific users

IAM Policies – grant IAM users fine-grained control over their S3 buckets while you maintain full control of everything else

Encryption

SSE-S3 keys – check-box style encryption where AWS is responsible for key management and key protection. Every object is encrypted with a unique key, and that key is itself encrypted by a separate master key; a new master key is issued monthly, with AWS rotating the keys.

SSE-KMS – AWS handles key management and protection for S3, but you manage the keys in KMS. Provides separate permissions for the master key, as well as auditing, so you can see who used the key to access which object and view any failed attempts.

SSE-C – used when the customer wants to maintain their own keys but does not want to maintain an encryption library. AWS encrypts and decrypts objects, while the customer maintains full control of the keys.

Client Side Encryption

Client-side encryption is used when you want to encrypt data BEFORE sending it to Amazon S3. The client has the most control and maintains end-to-end control of the encryption process. You have two options:

Use an AWS KMS managed customer master key

Use a client side master key
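A sketch of the two server-side options at upload time; the bucket name and KMS key ID are placeholders:

    # SSE-S3: let S3 manage the keys (AES-256)
    aws s3 cp report.csv s3://example-secure-bucket/report.csv --sse AES256

    # SSE-KMS: encrypt under a customer master key you manage in KMS
    aws s3 cp report.csv s3://example-secure-bucket/report.csv \
        --sse aws:kms --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab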

Pre-Signed URLs: use pre-signed URLs for time-limited download access (a sketch follows).
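Generating one from the CLI is a one-liner; bucket and key are placeholders:

    # the URL printed below is valid for one hour (3600 seconds)
    aws s3 presign s3://example-secure-bucket/report.csv --expires-in 3600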

Bucket Versioning

Versioning allows you to preserve, retrieve and restore every version of every object stored in the bucket. Once versioning is turned on, it cannot be turned off, only suspended.
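Turning it on (and later suspending it) from the CLI; the bucket name is a placeholder:

    # turn versioning on
    aws s3api put-bucket-versioning --bucket example-secure-bucket \
        --versioning-configuration Status=Enabled

    # versioning can never be removed once enabled, only suspended
    aws s3api put-bucket-versioning --bucket example-secure-bucket \
        --versioning-configuration Status=Suspended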

Cross Region Replication

Cross-Region Replication allows you to asynchronously replicate all new objects in a source bucket in one AWS Region to a target bucket in another Region. The metadata and ACLs associated with the objects are replicated as well. If CRR is enabled while objects already exist in the bucket, those objects are not affected; only NEW objects are replicated. Versioning must be enabled on both the source and destination buckets for CRR to work, and you must have an IAM policy/role that gives S3 permission to replicate objects on your behalf.

Event Notifications and Logging

Event notifications are set at the bucket level and can publish a message to Amazon SNS or SQS, or trigger an AWS Lambda function, in response to an upload or delete of an object (by PUT, POST, COPY, DELETE or multipart upload completion). You can configure event notifications through the S3 console, through the REST API, or by using an AWS SDK.

Logging is off by default. When you enable logging for a source bucket, you must choose a target bucket.

Encrypt your AWS API Key with GPG

In AWS, when you create a user in IAM and give that user 'programmatic' access, AWS will give you that user's API key. There are two major rules one must follow with the API key.

  1. NEVER hard code your API key into your code.
  2. NEVER store your API key unencrypted.

To help with #2, in Linux you can just use GPG

First, install it. For Ubuntu:

    sudo apt-get install gnupg2 -y

or for RHEL/CentOS:

    yum install gnupg2

and then just run it against the text file where your API keys are:

  1. Encrypt the file with the command:

    gpg -c API.txt

  2. Enter a unique password for the file and hit Enter.
  3. Verify the newly typed password by typing it again and hitting Enter.
  4. Looking at the output of an ls -hal, the original plaintext file is still there, so remove it:

    rm API.txt

  5. When you are ready, decrypt the file (gpg -c wrote the encrypted copy to API.txt.gpg) with the command:

    gpg API.txt.gpg

Crazy Ransomware based on NSA tool spreads across the Globe!

The security defenders are working overtime today to stop WCry [ WannaCry ]. Right now, 45,000 attacks of the WannaCry ransomware have been reported in 74 countries around the world. This is pretty bad: there are reports of hospitals being shut down because of it, service providers shutting down their computers, and many other reports of companies affected.

According to Kaspersky, WCry is leveraging an SMBv2 remote code execution exploit derived from the leaked NSA tool kit. Here is the US-CERT confirmation of WCry.

MS17-010 is the patch Microsoft released in March to address the vulnerability this piece of ransomware is using; if it is not on your system, you are vulnerable.

Close Firewall ports 445/139 & 3389.
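For a Linux box acting as a gateway or perimeter filter, a rough iptables sketch of that advice (rule placement and interfaces will vary by environment, and Windows hosts need the equivalent in Windows Firewall):

    # drop SMB (139/445) and RDP (3389) traffic passing through this gateway
    iptables -A FORWARD -p tcp -m multiport --dports 139,445,3389 -j DROP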

Here is a Live MAP at Malware Tech, monitoring the Spread

Kaspersky's Global Lab has a solid forensic write-up

Fig 1: Machine infected with WCry. Image via Twitter, Malware Hunter Team.

 

AWS Workshop Automating Security in the Cloud in Denver: Day 2

Day 2 – Another gorgeous day in Denver, looking west at the Rockies from the main event room on the 11th floor of the Marriott. A great day to learn about cloud security in AWS.

The day kicked off with a presentation from cloud compliance vendor Telos on their solution, Xacta.

Again, forgive the brevity and partial sentences; these are literally my notes, transcribed as fast as I could write in my notebook.

The good stuff started when Nathan McGuirt, Public Sector Manager, AWS, took the stage for AWS Network Security; this is where my notes begin:

Nathan opens: almost everything is an API and can be automated. APIs automate network discovery ( things network tools used to do ).

AWS network configs are centralized. A VPC is NOT a VLAN or a VRF! The pressures to migrate to large, shared networks are all but gone.

Micro-segmentation allows for smaller networks that are more granular in function, reduces application risk via isolation, and reduces the (effects of) deployment risks of new systems.

Security Groups (SGs) have become the most important piece of security in AWS; they are essentially stateful firewalls that can be deployed per instance / per VPC.

Two EC2 instances in the same Security Group don’t communicate by default. Security Groups are flexible and rules can be set to allow another Security Group as the source of the traffic.

The idea is role-based Security Groups: permission to communicate is granted based on role. Security Group traffic shows up in VPC Flow Logs in a netflow-like format.
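A sketch of that role-based pattern from the CLI: the web-tier group is allowed to reach the app-tier group on one port, referencing the group itself rather than an IP range (the group IDs are placeholders):

    # allow instances in the web-tier SG to reach the app-tier SG on TCP 8080
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0aaa111122223333a \
        --protocol tcp --port 8080 \
        --source-group sg-0bbb444455556666b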

VPC subnet ACLs ( access control lists ) are a secondary layer of protection. ACLs are NOT stateful, so ephemeral ports have to be allowed for return traffic. ACLs are better suited for solving broad problems, such as communication between VPCs.

Route tables are now thought of as another Security mechanism for isolation and to control permissions; route tables create an extra set of protections.

Stop thinking about filtering at layers 3 and 4 and focus on filtering the application:

  Ingress filtering tech: reverse proxy (Elastic Load Balancer), Application Load Balancer with WAF, CloudFront CDN with WAF

  Egress filtering: proxies / NAT

Moving outside the VPC: VPC peering for inside-to-inside routes, a Virtual Private Gateway to connect to on-prem legacy VPN hardware, or AWS Direct Connect ( a dedicated fiber link between AWS and on-prem ).

No transitive VPC peering; no A -> B -> C.

 – Some work-arounds exist, using proxies / virtual router AMIs and routing to them instead of VPC peering ( does not scale large ).

 – Nathan talked about a Management Services VPC that has all needed services in it ( auth, monitoring, etc. ) and connecting all VPCs to that.

VPC design: plan early, plan carefully. Think about subnets and how you want to scale the VPC before you implement it, since subnets cannot be re-sized once built.

CloudFormation and change control: a new AWS feature called 'Change Sets' allows you to stage your CloudFormation builds and assign different permissions to developers, approvers and implementers.

Talked about how powerful AWS Config ( https://aws.amazon.com/config/ ) is: it scans environments, keeps object history and change logs, and gives snapshots of the whole AWS environment.

Logging: VPC Flow Logs / netflow-style records for all things; they go to CloudWatch Logs for every instance in the VPC. Rather than watching the noisy edge, combine tight Security Groups with VPC Flow Logs within / between VPCs and look at what is getting denied – people trying to move sideways. Watch for config changes that impair your ability to see config changes ( e.g., disabling of CloudTrail ); automated remediation is possible through APIs. Watch for dis-allowed configurations.

Next up was Jim Jennis, AWS Solutions Architect.

Jim has a solid background; he worked for the US Coast Guard.

Jim: Automate, deploy and monitor in the security SDLC; build security in and verify before deployment.

Design>Package>Constrain>Deploy

Use CloudFormation to describe AWS resources: JSON templates, key-value pairs.

How Security in the Cloud is different:

– Everything is an API Call

– Security Infrastructure should be Cloud Aware

– Security Features as APIs

– Automate Everything

Use federation to leverage the AD accounts of existing staff to get access to AWS, and lock each person down to their specific tasks.

[ a trust relationship with AD is required ]

Preso: back to Tim and Nathan McGuirt together

The AWS CLI matches the API one-for-one – live demo of the AWS command line ( learning the AWS CLI would be a good way into learning the API ).

Nathan used the AWS CLI to (a rough sketch of equivalent commands follows this list):

  find EBS volumes that are unencrypted

  find Security Groups / with ssh allowed

  give a list of instances attached to groups found from last command

  ( Tim said this could be used to find IAM groups with resource of * )
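These are not Nathan's exact commands, just my own rough reconstruction of the kind of queries he demoed:

    # EBS volumes that are not encrypted
    aws ec2 describe-volumes \
        --filters Name=encrypted,Values=false \
        --query 'Volumes[].VolumeId'

    # security groups with a rule that starts at port 22 (SSH)
    aws ec2 describe-security-groups \
        --filters Name=ip-permission.from-port,Values=22 \
        --query 'SecurityGroups[].GroupId'

    # instances attached to one of the groups found above (placeholder group ID)
    aws ec2 describe-instances \
        --filters Name=instance.group-id,Values=sg-0123456789abcdef0 \
        --query 'Reservations[].Instances[].InstanceId'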

Tim: this demo is the basis for automation to find things that are not supposed to be there; run these queries multiple times a day to locate things happening out of scope in your environment.

USE accounts and VPCs to limit the blast radius of a compromise.

 – Accounts are inherently isolated from one another and provide isolation of both access and visibility for deployed resources.

    – Compromise of one account limits the impact.

Place hardened templates in Service Catalog so employees can only select your hardened templates / AMIs when spinning up new instances.

My thoughts: Security people who want to protect AWS need to learn to understand APIs and code to fully leverage automation, and to use the AWS CLI and benefit from it. Queries are powerful and can be leveraged to tell you very specific things about an environment; I've seen things today that make the old network tools I used to use look like a Ford Model T. Security and network professionals need to adapt and become application-centric thinkers. AWS has incredible built-in tools to ensure proper, secure design, from the creation of hardened AMIs / CloudFormation templates, to being able to lock deployments down to pre-approved instances. Security Groups and IAM let you get very granular at the permission level, as even applications can be granted permissions through access roles.

Thinking about everything I have done in the last 20 years as a network engineer / sysadmin / security engineer, what I experienced over the last two days really reminded me of the scene in the movie 'Doctor Strange' where Strange shows up at the Kamar-Taj temple and begins his true teaching. The Ancient One tells him that what he had learned before got him far in life, but he needed to embrace a whole new and completely different way of thinking.

For me, I am not as reluctant as Strange was, but embracing the concept that an entire infrastructure is just CODE, that servers, routers and firewalls are now just coded objects, and that APIs are the "neurons" that make it all work, is just as impactful.

I am grateful to AWS Engineers and the Vendors for coming to Denver!

THANK YOU!!!