One click could have protected the data of 198 Million People: Amazon AWS

 

A major security event involving a breach caused by user error occurred recently in AWS. The website thehill.com reports that "25 terabytes of files [were] contained in an Amazon cloud account that could be browsed without logging in." These files were owned by the RNC and held voter data.

Since the article read "25 TB of files," it's not much of a stretch to say these were files [objects] stored in an S3 bucket (or buckets). Here is the crazy thing from an AWS security perspective: literally one click could have protected all the files [simply un-checking "Read" for the Everyone group]. Take a look at the screenshot down the page a bit to see exactly what I mean.

In this instance, the contractor managing the RNC Voter Data Files strayed away from the default S3 bucket configuration, which is:

"By default, all Amazon S3 resources—buckets, objects, and related subresources (for example, lifecycle configuration and website configuration)—are private: only the resource owner, an AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy."

One can only guess that the contractor in charge of these files was trying to give access to a small group of people who did not have AWS accounts, and simply checked Read for Everyone on the entire bucket.

Even if that was the practice, the fact that Read for Everyone was left checked over time is simply... mind-boggling.
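For reference, here's roughly what that one-click fix looks like from the API side. This is a minimal boto3 sketch (the bucket name is made up); it resets the bucket ACL to private, dropping any grant to the Everyone/AllUsers group:

import boto3

s3 = boto3.client('s3')

# Reset the bucket ACL to private, removing any grants to the
# AllUsers ("Everyone") group. The bucket name is hypothetical.
s3.put_bucket_acl(Bucket='my-sensitive-bucket', ACL='private')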

There are so many ways this could have been prevented. Bucket policies come to mind as well: among the many custom security access policies you can create, access controls can be applied to lock down a bucket for anonymous users (users without AWS accounts) by specifying a referring website or their source IP if extended read access is needed.
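As a sketch of that idea (assuming boto3; the bucket name and IP range below are made up), a bucket policy that grants anonymous read only from one source IP range might look like this:

import json
import boto3

s3 = boto3.client('s3')

# Hypothetical policy: allow anonymous GetObject only from one office IP range.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadFromOfficeRangeOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-sensitive-bucket/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }]
}

s3.put_bucket_policy(Bucket='my-sensitive-bucket', Policy=json.dumps(policy))

Anyone outside that range gets an Access Denied, AWS account or not.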

Amazon makes it easy to secure S3 buckets: first, by the default policy; second, by literally having a place where you can click one box. There is simply no excuse for this breach!

[Update: The original find is here on upguard.com and confirms my suspicion above that the files were indeed stored in an S3 bucket!]


Sony PlayStation 2017 E3 Ticket Site [ www.gofobo.com ] Down ALL DAY

 

Like many other enthusiasts, I was excited to get the opportunity to purchase tickets at my local theater to experience Sony PlayStation's E3 LIVE simulcast on the big screen!

The link to get tickets is at this site:

https://www.playstation.com/en-us/campaigns/2017/e3experience/

which points to a third-party ticket provider, gofobo.com, with this URL:

http://www.gofobo.com/PlaystationE32017

At approximately 10 AM PT, http://www.gofobo.com/ CRASHED HARD and has been down ever since.

The main response code all day has been HTTP 503 – Service Unavailable. Now it is showing a 404 Not Found (screenshot above). One attempt earlier in the afternoon brought up the main gofobo.com page, but it then said that the "PlayStationE32017" code was invalid.

Earlier today, GoFobo had two public IPs registered; I tried them both. No go.

All other requests have hung or been met with a 503 (which has now turned into a 404). I think this is really gofobo.com simply being overwhelmed by Sony PlayStation fans – FAILURE TO SCALE. It could also have been another intentional, malicious DDoS against Sony, or perhaps human error killed it. I was able to get tickets within 5 minutes last year, and I don't remember gofobo.com being part of that. I believe the 404 on their main site is because they moved their site to new digs:

At present, 4:11 PT, it appears they are shifting their DNS records around (there were only two entries in a previous dig at 1 PM, and they were different IPs).

Here is a DIG now:

;; ANSWER SECTION:

www.gofobo.com. 148 IN CNAME screenings-346088557.us-west-2.elb.amazonaws.com.

screenings-346088557.us-west-2.elb.amazonaws.com. 59 IN A 54.191.95.244

screenings-346088557.us-west-2.elb.amazonaws.com. 59 IN A 52.35.41.68

screenings-346088557.us-west-2.elb.amazonaws.com. 59 IN A 52.32.184.40

screenings-346088557.us-west-2.elb.amazonaws.com. 59 IN A 52.25.144.120

So… it looks like they are moving this to AWS! I'm thinking this move happened when the 503 error code became a 404.


Amazon AWS Certified Solutions Architect SWF / SQS Study Sheet

Simple WorkFlow Service – SWF

A web service to coordinate work across distributed application components [human tasks outside of the process can be included as well]. Tasks represent invocations of logical steps in applications.

SWF Task is assigned once, never duplicated.

SWF Tasks can be stored for up to one year

SWF keeps track of all tasks in an application

SWF ACTORS

  • Workflow Starters – [an application or event] that kicks off the workflow
  • Workflow Deciders – control the flow of activity based on the outcomes of task state
  • Activity Workers – programs that interact with SWF to get tasks, process them, and return results

Simple Queue Service – SQS

SQS is a web service that gives access to message queues that can be used to store messages while they wait to be processed.

SQS is a distributed Queue System that enables applications to queue messages that one part of an app generates to be consumed by another [ de-coupled ] part of that application.

De-Couple Application components so they can run independently; SQS acts as a buffer between components.

SQS is "pull based," meaning instances poll it and ask for work.

Messages are 256 KB [ and can be in 64 KB chunks ]

Messages can be stored in SQS for:

  • as little as 1 min
  • DEFAULT of 4 days
  • up to 14 days

For an SQS STANDARD QUEUE: VisibilityTimeOut is the amount of time that the message is "invisible" in the SQS queue after an EC2 instance (or other reading software) retrieves that message.

  • If the job is processed BEFORE the VisibilityTimeOut expires, the message is deleted from the queue
  • If the job is not processed within the VisibilityTimeOut, the message will become "visible" again and another EC2 instance will pull it, possibly resulting in the same message being delivered twice

VisibilityTimeOut MAX is 12 hours 

SQS [ Standard Queue ] will guarantee a message is delivered at least once.

  • but will NOT guarantee message order
  • but will NOT guarantee a message is delivered ONLY once (e.g., it could be delivered twice; see the sketch below)
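Here's a minimal boto3 sketch of that receive/process/delete cycle (the queue URL is hypothetical). Delete the message before the VisibilityTimeOut expires and nobody else sees it again; crash first, and it reappears for another worker:

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/work-queue'  # hypothetical

def process(body):
    print('working on:', body)  # stand-in for the real work

# Retrieve one message; it becomes invisible to other consumers
# for VisibilityTimeout seconds.
resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=1,
                           VisibilityTimeout=60)

for msg in resp.get('Messages', []):
    process(msg['Body'])
    # Delete BEFORE the visibility timeout expires, or the message
    # becomes visible again and may be handed to another worker.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])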

Long Polling vs. Short Polling: In almost all cases, Amazon SQS long polling is preferable to short polling. Long-polling requests let your queue consumers receive messages as soon as they arrive in your queue while reducing the number of empty ReceiveMessageResponse instances returned.

Long polling does not return a response until a message is in the queue [this will save money, because you are not constantly polling an empty queue].

Short polling returns immediately, even if the queue is empty.
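In boto3 terms, the difference is just the WaitTimeSeconds parameter (queue URL is hypothetical):

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/work-queue'  # hypothetical

# Short poll: returns immediately, even when the queue is empty.
short_resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=0)

# Long poll: blocks for up to 20 seconds waiting for a message,
# cutting down on empty responses (and wasted requests).
long_resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)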


AWS Certified Solutions Architect Associate ELB & AutoScaling Study Sheet

AWS Elastic Load Balancer is the “card dealer” that evenly distributes “cards” [traffic ] across “card players” [ EC2 instances ] .

Works across EC2 instances in multiple Availability Zones

  • supports http, https, TCP and SSL traffic / listeners
  • uses Route 53 DNS CNAME only
  • supports internet facing and internal
  • supports SSL offload / SSL termination at ELB, relieving load from EC2 instances

Idle Connection Timeout and Keep Alive Options

 

ELB sets the idle timeout at 60 seconds for both connections (client-to-ELB and ELB-to-back-end); if no data has flowed by the time the idle timeout elapses, the connection is closed. Increase this setting for longer operations (file uploads), etc.

For HTTPS and HTTP listeners, enable Keep-Alive on your back-end instances so the load balancer can re-use connections to them, reducing CPU utilization.
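The idle timeout is adjustable per load balancer. A hedged boto3 sketch for a Classic ELB (the load balancer name is made up):

import boto3

elb = boto3.client('elb')  # Classic Load Balancer API

# Raise the idle timeout from the 60-second default to 5 minutes
# to accommodate long-running transfers such as file uploads.
elb.modify_load_balancer_attributes(
    LoadBalancerName='my-elb',  # hypothetical name
    LoadBalancerAttributes={'ConnectionSettings': {'IdleTimeout': 300}}
)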

AWS Cloud Watch for ELB and EC2

A service for monitoring all AWS resources and applications in near real time. Collect and track metrics, collect and monitor log files, set alarms, and react to changes in your AWS environment [SNS notifications, kicking off an Auto Scaling group].

Basic Monitoring / Every 5 minutes  [ DEFAULT ]

Detailed Monitoring / every 1 minute ( more expensive ) 

Each account is limited to 5,000 alarms.

Metrics data is retained for two weeks by default.

The CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from EC2 instances running Amazon Linux or Ubuntu.

The AWS/EC2 namespace includes the following default instance metrics:

CPU metrics, disk metrics, and network metrics.
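As a sketch (the instance ID and SNS topic ARN are made up), creating an alarm on one of those default metrics with boto3 looks something like this:

import boto3

cw = boto3.client('cloudwatch')

# Alarm when average CPU on one instance stays above 80% for two
# consecutive 5-minute periods (basic monitoring granularity).
cw.put_metric_alarm(
    AlarmName='high-cpu-example',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # hypothetical
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts']  # hypothetical topic
)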

Auto Scaling and Launch Configuration

A Launch Configuration is basically a template that AWS Auto Scaling will use to spin up new instances. Launch Configurations are composed of the following (a quick sketch follows the list):

  • AMI
  • EC2 instance type
  • Security Group
  • Instance Key Pair
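Here's that sketch: a minimal boto3 call with all four ingredients (every identifier below is hypothetical):

import boto3

autoscaling = boto3.client('autoscaling')

# AMI, instance type, security group, and key pair: the four pieces
# listed above. All identifiers are made up.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='web-lc-v1',
    ImageId='ami-0123456789abcdef0',
    InstanceType='t2.micro',
    SecurityGroups=['sg-0123456789abcdef0'],
    KeyName='my-key-pair'
)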

Auto Scaling is basically provisioning servers on demand and releasing them when no longer needed: you spin up more servers when there is peak demand, e.g., Black Friday or World Series ticket sales.

Auto-Scaling Plans:

Maintain Current Instance Levels – health checks on current instances; if one dies, another will replace it.

Manual Scaling – this is a bad name for this group, because the scaling itself is still automatic; the input is manual. E.g., you change the min/max capacity of the group, and Auto Scaling spins instances up or down to match.

Scheduled Scaling – for predictable behavior [Black Friday through Christmas]; all actions are performed automatically as a function of date and time.

Dynamic Scaling – you define different parameters using CloudWatch metrics: CPU, network bandwidth, etc.

Scaling Policy

A scaling policy is used by Auto Scaling together with CloudWatch alarms to determine when your Auto Scaling group should scale in or scale out. Each CloudWatch alarm watches a single metric and sends a message when the metric breaches a threshold.
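A hedged boto3 sketch of that wiring (the group and names are hypothetical). The policy says what to do; the CloudWatch alarm says when to do it:

import boto3

autoscaling = boto3.client('autoscaling')
cw = boto3.client('cloudwatch')

# "What to do": add one instance when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',  # hypothetical group
    PolicyName='scale-out-by-one',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=1,
    Cooldown=300
)

# "When to do it": group-wide average CPU above 70% for 10 minutes.
cw.put_metric_alarm(
    AlarmName='web-asg-high-cpu',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'web-asg'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[policy['PolicyARN']]
)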


WannaCry Decrypt Tools are Now Available!

You’ve gotta love the good guys!

 

Benjamin Delpy coded a tool, "WanaKiwi," which makes it easier for everyone to decrypt files on a WannaCry-infected machine. Download it here:

https://github.com/gentilkiwi/wanakiwi/releases

and run it on Windows using the command prompt.

A full write-up on the decryptor is here on The Hacker News.


AWS Certified Architect Associate VPC Study Sheet

AWS VPC

As a network engineer, I'm fascinated by what Amazon has done to virtualize the network in its Virtual Private Cloud. Here go the notes!

AWS VPC is a logically isolated section of the AWS Cloud: a virtual network in which you can launch your EC2 instances, and which can be private or public.

All AWS VPCs contain: subnets, route tables, DHCP option sets, Security Groups and ACLs.

Optional VPC elements are: Internet Gateways, Elastic IPs, Elastic Network Interfaces, Endpoints, Peering, NAT instances or Gateways, Virtual Private Gateways, Customer Gateways, and VPNs.

The largest subnet in a VPC is a /16; the smallest subnet is a /28.

One subnet lives in one Availability Zone; subnets do not span Availability Zones.

VPC Route Tables:

The VPC has an implicit router.

A VPC comes with a "Main" route table, which you can change; you can also create separate route tables within your VPC that are not associated with "Main." Each subnet you create has to be associated with one of the route tables.

VPC Internet Gateways [ IGW]:

An AWS VPC Internet Gateway is a horizontally scaled, redundant, highly available VPC component that allows communication between your EC2 instances and the internet. [Basically, the default route's Target points out to the IGW.]

IGWs must be attached to VPCs.

The route table must have a 0.0.0.0/0 route to send all non-VPC traffic out.

ACLs and Security Groups MUST be configured so the bad guys don’t get in .
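A boto3 sketch of that setup (the VPC and route table IDs are made up): create the IGW, attach it, and point the default route at it:

import boto3

ec2 = boto3.client('ec2')

# All IDs below are hypothetical.
igw = ec2.create_internet_gateway()
igw_id = igw['InternetGateway']['InternetGatewayId']

ec2.attach_internet_gateway(InternetGatewayId=igw_id,
                            VpcId='vpc-0123456789abcdef0')

# Default route: anything not destined for the VPC goes out the IGW.
ec2.create_route(RouteTableId='rtb-0123456789abcdef0',
                 DestinationCidrBlock='0.0.0.0/0',
                 GatewayId=igw_id)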

There is a good diagram of this in the AWS docs: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html

VPC DHCP Option SETS

AWS automatically creates and associates a DHCP option set for your VPC and sets two options: 1. DNS servers and 2. domain name. These are set to the Amazon default DNS and domain name for your region. To assign your own domain name, you can create a custom DHCP option set (a boto3 sketch follows the list) and configure the following:

  1. DNS servers
  2. Domain name
  3. NTP servers
  4. NetBIOS name servers
  5. NetBIOS node type
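Here's that sketch in boto3 (the domain name, server IPs, and VPC ID are all made up):

import boto3

ec2 = boto3.client('ec2')

# Custom option set with our own domain name and DNS servers.
opts = ec2.create_dhcp_options(DhcpConfigurations=[
    {'Key': 'domain-name', 'Values': ['corp.example.com']},
    {'Key': 'domain-name-servers', 'Values': ['10.0.0.2', '10.0.0.3']},
])

ec2.associate_dhcp_options(
    DhcpOptionsId=opts['DhcpOptions']['DhcpOptionsId'],
    VpcId='vpc-0123456789abcdef0'
)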

Elastic IP addresses [ EIP ]: 

These are AWS public IP addresses that you can allocate to your account from Amazon's larger pool; they are reachable from anywhere on the internet. (A short boto3 sketch follows the list below.)

  • Create the EIP for your VPC first, then assign it to an EC2 instance
  • EIPs are specific to a region
  • One-to-one relationship between network interfaces and EIPs
  • EIPs can be moved from one EC2 instance to a different EC2 instance
  • EIPs remain with your account until you release them
  • Charges are incurred for EIPs allocated to your AWS account when they are not in use
  • Charged when the instance is stopped; charged when un-attached
  • Free only for one EIP per instance, while the instance is running
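The allocate-then-assign flow in boto3 (the instance ID is hypothetical):

import boto3

ec2 = boto3.client('ec2')

# Step 1: allocate an EIP to the account (scoped to a VPC).
addr = ec2.allocate_address(Domain='vpc')

# Step 2: associate it with an instance. Remember: it bills while
# un-attached, and stays with the account until released.
ec2.associate_address(InstanceId='i-0123456789abcdef0',  # hypothetical
                      AllocationId=addr['AllocationId'])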

Elastic Network Interfaces [ ENI ]:

This is a network interface available within a single VPC that can be attached to an instance; ENIs are associated with a subnet when they are created.

  • Can have one public and multiple private IPs
  • Can exist independently of an instance
  • Allow you to create a management network, use network and security AMIs/appliances, and create dual-homed solutions

VPC EndPoints

VPC Endpoints allow you to create a private connection between your VPC and other AWS services without going over the internet. They work with route tables, where the endpoint for a particular service can be a target.
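For example, a gateway endpoint to S3 becomes a route table target; a boto3 sketch with made-up IDs:

import boto3

ec2 = boto3.client('ec2')

# Gateway endpoint for S3: traffic to S3 goes through this endpoint
# (a route table target) instead of the internet. IDs are hypothetical.
ec2.create_vpc_endpoint(
    VpcId='vpc-0123456789abcdef0',
    ServiceName='com.amazonaws.us-east-1.s3',
    RouteTableIds=['rtb-0123456789abcdef0']
)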

VPC Peering

Allows for communication between two VPCs; e.g., communication from instances in one VPC to instances in another VPC.

  • you can create peering with your VPC and:
    • another VPC in your account
    • VPC in another AWS account
  • within a single region

Peering Rules:

  • no peering between VPCs that have matching or overlapping CIDR blocks
  • cannot peer with a VPC in a different region
  • no transitive peering
  • no more than one peering connection between any two VPCs
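A minimal request/accept sketch in boto3 (both VPC IDs and the route table ID are hypothetical). After accepting, each side still needs a route to the other's CIDR:

import boto3

ec2 = boto3.client('ec2')

# Request peering between two VPCs (IDs are made up).
pcx = ec2.create_vpc_peering_connection(
    VpcId='vpc-0aaaaaaaaaaaaaaaa',
    PeerVpcId='vpc-0bbbbbbbbbbbbbbbb'
)
pcx_id = pcx['VpcPeeringConnection']['VpcPeeringConnectionId']

# The owner of the peer VPC accepts the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side then routes the other's CIDR through the peering connection.
ec2.create_route(RouteTableId='rtb-0123456789abcdef0',
                 DestinationCidrBlock='10.1.0.0/16',
                 VpcPeeringConnectionId=pcx_id)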

Security Groups (SG) in a VPC

A Security Group is a stateful firewall that controls inbound and outbound network traffic to individual EC2 instances and other AWS resources. All EC2 instances must be launched into a Security Group. Only the default Security Group allows communication between all resources in that same Security Group; instances in Security Groups you create cannot talk to each other by default. (A boto3 sketch follows the list below.)

  • 500 SGs per AWS VPC
  • 50 inbound and 50 outbound rules per SG
  • 5 SGs per network interface
  • Applied selectively to individual instances
  • Can specify ALLOW rules but no DENY rules [whitelist]
  • By default, no inbound traffic is allowed from anything not in the SG
  • New SGs by default permit all outbound traffic
  • Stateful
  • Evaluates ALL rules before deciding whether to permit traffic
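Here's that sketch of the allow-only model (the VPC ID is made up). Note there is no way to write a DENY here, only ALLOWs:

import boto3

ec2 = boto3.client('ec2')

# Create a group in the VPC (ID is hypothetical)...
sg = ec2.create_security_group(GroupName='web-sg',
                               Description='allow https in',
                               VpcId='vpc-0123456789abcdef0')

# ...and add an ALLOW rule. Anything not explicitly allowed inbound is
# dropped, and return traffic for this connection is permitted
# automatically (stateful).
ec2.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpPermissions=[{
        'IpProtocol': 'tcp', 'FromPort': 443, 'ToPort': 443,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}]
    }]
)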

Network Access Lists (ACLs)

So, you Cisco guys: this is pretty much the same...

Subnet level, stateless, a numbered set of rules processed top-down. VPCs have a default ACL associated with every subnet that allows all traffic in and out. When you create an ACL, it denies everything until you create rules. (A boto3 sketch follows the list.)

  • Supports both allow rules and deny rules
  • Stateless: return traffic MUST be explicitly called out
  • Processed in order
  • Because they sit at the subnet level, they apply to all instances in that subnet
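Here's that sketch of the stateless part (the ACL ID is made up): inbound 443 AND the outbound ephemeral-port return traffic each need their own rule:

import boto3

ec2 = boto3.client('ec2')
acl_id = 'acl-0123456789abcdef0'  # hypothetical

# Inbound: allow HTTPS. Rules are evaluated in rule-number order.
ec2.create_network_acl_entry(NetworkAclId=acl_id, RuleNumber=100,
                             Protocol='6', RuleAction='allow', Egress=False,
                             CidrBlock='0.0.0.0/0',
                             PortRange={'From': 443, 'To': 443})

# Outbound: stateless, so the RETURN traffic must be explicitly
# allowed too (ephemeral ports for the clients' side of the connection).
ec2.create_network_acl_entry(NetworkAclId=acl_id, RuleNumber=100,
                             Protocol='6', RuleAction='allow', Egress=True,
                             CidrBlock='0.0.0.0/0',
                             PortRange={'From': 1024, 'To': 65535})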

NAT Instances ( AMI ) on VPC

These are AMIs on AWS named amzn-ami-vpc-nat; the use case is taking traffic from a private subnet in a VPC and forwarding it to the IGW (Internet Gateway).

You need an SG with appropriate rules, in and out.

Launched in a PUBLIC subnet in the VPC.

Disable the Source/Destination Check on the NAT instance or it won't work.

The subnet with the private host will have the NAT host as the destination in its route table:

0.0.0.0/0 goes to > [NAT instance name]
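In boto3 terms (all IDs hypothetical), the two critical steps are disabling the source/destination check and pointing the private subnet's default route at the NAT instance:

import boto3

ec2 = boto3.client('ec2')
nat_id = 'i-0123456789abcdef0'  # hypothetical NAT instance

# Without this, the NAT instance drops traffic that is not addressed
# to itself, and NAT will not work.
ec2.modify_instance_attribute(InstanceId=nat_id,
                              SourceDestCheck={'Value': False})

# Private subnet's route table: default route points at the instance.
ec2.create_route(RouteTableId='rtb-0123456789abcdef0',
                 DestinationCidrBlock='0.0.0.0/0',
                 InstanceId=nat_id)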

NAT Gateway on VPC

Designed to operate like the NAT AMI, but easier to manage (no EC2 instance to patch) and highly available within an AZ.

VPN, VPG and CGW in a VPC

A VPG (Virtual Private Gateway) is the VPN concentrator on the AWS side of a VPN connection.

A CGW (Customer Gateway) represents a physical device on the customer's network; their end of the tunnel.

The VPN handshake must be initiated on the customer’s side.

EC2 Virtualized Types

Hardware Virtual Machines  (HVM) vs. ParaVirtual Machines  (PV)

HVM AMIs – a fully virtualized set of hardware; boot executes the master boot record of the image's block device; support for special VM extensions (e.g., GPU acceleration).

PV AMIs – use the PV-GRUB boot loader; run on hardware without explicit support for VMs, but with no special extensions; currently only C3 and M3 instance types can be used.

T2 instances must be launched into a VPC ( not supported in classic )

T2 must be on HVM AMI

It is recommended to use current-generation instance types with HVM AMIs.


AWS Certified Architect Associate Database Study Sheet

Amazon Databases

RDS

Amazon RDS (Relational Database Service) has operational benefits; it simplifies setup, scaling, and operation of a relational DB in AWS. It's ideal for users who want to spend more time focusing on the application itself while RDS offloads admin tasks like backups, patching, scaling, and replication.

Currently supported: MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, and Amazon Aurora. Built on Amazon Elastic Block Store, and can scale up to 4 to 6 TB of provisioned storage and up to 30,000 IOPS.

Amazon RDS supports three Storage types:

-Magnetic: cost-effective storage that is ideal for apps with light I/O requirements

-General Purpose (SSD): faster than magnetic, can burst to meet spikes; good for small to medium DBs

-Provisioned IOPS (SSD): designed for I/O-intensive workloads and random I/O throughput

Min Size for SSD EBS: 1 GiB

Max Size for SSD EBS: 16 TiB

Amazon Aurora DB:

A commercial-grade database at open-source cost-effectiveness, with 5x the performance of MySQL. Aurora consists of a Primary Instance for READ/WRITE and Amazon Aurora Replicas, which are read-only. Aurora scaling: 2 copies of your data in each AZ, with a minimum of three Availability Zones, for 6 copies of your data.

Backups and Restore

RPO – Recovery Point Objective is defined as the maximum period of data loss that is acceptable in the event of a failure or outage.

RTO – Recovery Time Objective is defined as the maximum amount of downtime that is permitted to recover from backup and get back to normal operations.

The automated backups feature for RDS enables point-in-time recovery of a DB instance. RDS performs a full daily backup (during your preferred backup window) and captures transaction logs. Once-a-day backups are retained by default; the default retention is 7 days, and the maximum retention period is 35 days. Backups occur during a pre-defined 30-minute window.

**When you delete an RDS instance, all backups are deleted by default**

You are given the chance to create a snapshot when you delete an RDS instance.

Manual snapshots, however, are not deleted.

Manual snapshots: can be performed at any time; can only be restored to the point in time at which they were created; kept until you explicitly delete them.
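A boto3 sketch of both flavors (instance and snapshot identifiers are made up):

import boto3

rds = boto3.client('rds')

# Manual snapshot: taken on demand, kept until you delete it.
rds.create_db_snapshot(DBInstanceIdentifier='prod-db',
                       DBSnapshotIdentifier='prod-db-pre-upgrade')

# Point-in-time restore from the automated backups + transaction logs.
# Restores to a NEW instance, up to the latest restorable time.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier='prod-db',
    TargetDBInstanceIdentifier='prod-db-restored',
    UseLatestRestorableTime=True
)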

High Availability and Multi-AZ

Multi-AZ deployments allow you to create a DB cluster across, well, you guessed it: multiple Availability Zones. This is to increase availability, not performance. DB failover in the event of an outage is fully automatic and requires no administrative intervention. RDS replicates from the master DB to a slave instance using synchronous replication. Route 53 will resolve to the new address in the event of a failover.

Amazon RDS will initiate a failover in the event of:

  • Loss of availability in the primary AZ
  • Loss of network connectivity to the primary DB
  • Compute unit failure on the primary DB
  • Storage failure on the primary DB

Read Replicas for Increased Performance and Horizontal Scaling

  • Read replicas are not for availability – they are for increased READ performance
  • Scale beyond the capacity of a single DB instance for read-heavy workloads
  • Handle read traffic while the DB instance is unavailable
  • Offload reporting against a replica instead of the primary
  • Uses asynchronous replication when there is a change to the primary
  • Read replicas are available for these three RDS engines: MySQL, MariaDB, and PostgreSQL
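Creating one is a single call in boto3 (identifiers are hypothetical); replication from the source is asynchronous:

import boto3

rds = boto3.client('rds')

# New read-only replica of an existing MySQL/MariaDB/PostgreSQL
# instance; it trails the source via asynchronous replication.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='prod-db-replica-1',
    SourceDBInstanceIdentifier='prod-db'
)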

 Multi-AZ RDS instances + Backups:

When Multi-AZ is used on an RDS instance, I/O is not suspended on the primary during a backup, since backups are taken from the standby.

AWS DB Security

 

Use IAM policies with fine-grained access that limit what DB administrators can do.

Deploy RDS instances into a VPC private subnet

Restrict access to DB using ACL

Restrict access with Security Groups

Rotate Keys and Passwords

AWS Redshift Data Warehouse

OLTP – Online Transaction Processing – operations that frequently write and change data. Actions performed on standard DBs.

OLAP – Online Analytical Processing – for data warehousing; complex queries against large datasets. "For example, where online transaction processing (OLTP) applications typically store data in rows, Amazon Redshift stores data in columns, using specialized data compression encodings for optimum memory usage and disk I/O."

AWS Redshift is a fast, powerful, fully managed, petabyte-scale DWH service in the cloud. It gives fast querying abilities over structured data, using standard SQL commands to support interactive querying over large datasets.

 

NoSQL Database and Amazon Dynamo DB

 

In a traditional DB, tables have a pre-defined schema: table name, primary key, column names, and data types.

NoSQL DBs are non-relational DBs; there is no traditional pre-defined table schema for data stores. Example formats:

  • Document DBs
  • Graph stores
  • Key/value stores
  • Wide column stores

DynamoDB is an AWS NoSQL service: fully managed and extremely fast, with predictable performance achieved by automatically distributing data and traffic for a table over multiple partitions. All data lives on high-performance SSD drives. It protects data by replicating it across multiple AZs within an AWS Region.

DynamoDB only requires that you have a primary key attribute; you don't need to define attribute names and data types in advance. Each attribute in an item is a key-value pair and can be single-valued or multi-valued:

{

CarName = “Red5”

CarVendor = “Suzuki”

CarVIN = “12345678890abcdefg”

}

Eventually consistent reads: when data is read, the response may not reflect the results of a recently completed WRITE.

Strongly consistent reads: when this type of request is made, DynamoDB returns a response with the most up-to-date writes.
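Using the car item above, here's a boto3 sketch of the difference (the table name is hypothetical); ConsistentRead is just a flag on the read:

import boto3

dynamodb = boto3.client('dynamodb')

key = {'CarVIN': {'S': '12345678890abcdefg'}}  # primary key from the item above

# Default: eventually consistent; may miss a very recent write.
item = dynamodb.get_item(TableName='Cars', Key=key)  # 'Cars' is hypothetical

# Strongly consistent: returns the most up-to-date data, at roughly
# double the read-capacity cost.
item = dynamodb.get_item(TableName='Cars', Key=key, ConsistentRead=True)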

 
